
1 Introduction

The general setting of this article is the class of log-concave probability measures on \({\mathbb{R}}^{n}\); these are the Borel probability measures μ on \({\mathbb{R}}^{n}\) with the property that

$$\mu ((1 - \lambda )A + \lambda B) \geq {(\mu (A))}^{1-\lambda }{(\mu (B))}^{\lambda }.$$
(1)

for any pair of Borel subsets A, B of \({\mathbb{R}}^{n}\) and any λ ∈ (0, 1). The study of geometric properties of log-concave probability measures is a central topic in asymptotic geometric analysis and several questions asking for universal bounds for important geometric parameters of these measures remain open. Let us briefly introduce two of them, the hyperplane conjecture and the Kannan-Lovász-Simonovits conjecture.

A log-concave probability measure μ on \({\mathbb{R}}^{n}\) is called isotropic if the barycentre of μ is at the origin and its covariance matrix Cov(μ) with entries

$$\mathrm{Cov}(\mu )_{ij} :=\displaystyle\int _{{\mathbb{R}}^{n}}x_{i}x_{j}f_{\mu }(x)\,dx$$
(2)

is the identity matrix. Then the isotropic constant of μ is defined by \(L_{\mu } := f_{\mu }{(0)}^{1/n}\), where f μ is the density of μ with respect to Lebesgue measure. The hyperplane conjecture asks if there exists an absolute constant C > 0 such that \(L_{\mu } \leq C\) for all n ≥ 1 and all isotropic log-concave probability measures μ. Bourgain in [9] proved that one always has \(L_{\mu } \leq C\root{4}\of{n}\log n\), and Klartag [14] improved this bound to \(L_{\mu } \leq C\root{4}\of{n}\); a second proof of this estimate appears in [15]. On the other hand, one of the equivalent versions of the Kannan–Lovász–Simonovits conjecture asks if there exists an absolute constant C > 0 such that the Poincaré inequality

$$\displaystyle\int {\varphi }^{2}d\mu \leq C\displaystyle\int \|\nabla \varphi \|_{ 2}^{2}d\mu $$
(3)

holds true (with constant C) for all isotropic log-concave probability measures and all smooth enough functions φ satisfying \(\int \varphi \,d\mu = 0\).

Both questions are known to have an affirmative answer if we restrict our attention to special classes of log-concave probability measures. One way to introduce such a class is to impose some assumption of uniform boundedness on one geometric parameter for this subclass and to study other main geometric parameters of the measures in this subclass, trying to obtain uniform estimates for them which should depend on the bound for the chosen parameter only.

The purpose of this article is to provide a survey on the basic geometric properties of the class \(\mathcal{L}S(\kappa )\) of probability measures μ on \({\mathbb{R}}^{n}\) which satisfy the logarithmic Sobolev inequality with a given constant κ > 0. We obtain bounds in terms of κ, but independent of the dimension, for several of these parameters and we emphasize some questions which remain open even if we impose this additional assumption.

A Borel probability measure μ on \({\mathbb{R}}^{n}\) is said to satisfy the logarithmic Sobolev inequality with constant κ > 0 if for any (locally) Lipschitz function \(f : {\mathbb{R}}^{n} \rightarrow \mathbb{R}\) one has,

$$\mathrm{Ent}_{\mu }({f}^{2}) \leq 2\kappa \displaystyle\int \|\nabla f\|_{ 2}^{2}\,d\mu ,$$
(4)

where \(\mathrm{Ent}_{\mu }(g) = \mathbb{E}_{\mu }(g\log g) - \mathbb{E}_{\mu }g\log (\mathbb{E}_{\mu }g)\) is the entropy of g with respect to μ. It is well-known (see e.g. [18, Chap. 5]) that the log-Sobolev inequality implies normal concentration. For every measurable function f on \({\mathbb{R}}^{n}\) consider the logarithmic Laplace transform

$$L_{f}(u) =\log \left (\displaystyle\int {e}^{uf}d\mu \right ),\quad u \in \mathbb{R}.$$
(5)

Then, the Herbst argument shows that if f is 1-Lipschitz and \(\mathbb{E}_{\mu }(f) = 0\), one has \(L_{f}(u) \leq \kappa {u}^{2}/2\) for all \(u \in \mathbb{R}\), and hence, from Markov’s inequality,

$$\mu (x : \vert f(x)\vert \geq t) \leq 2{e}^{-{t}^{2}/2\kappa },\qquad t > 0.$$
(6)
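For completeness, here is the standard way (6) follows from the Herbst bound, via exponential Chebyshev: for t > 0 and with the choice u = t∕κ,

$$\mu (x : f(x) \geq t) \leq {e}^{-ut}\displaystyle\int {e}^{uf}\,d\mu = {e}^{L_{f}(u)-ut} \leq {e}^{\kappa {u}^{2}/2-ut} = {e}^{-{t}^{2}/2\kappa },$$

and applying the same bound to − f (which is also 1-Lipschitz with \(\mathbb{E}_{\mu }(-f) = 0\)) gives the two-sided estimate (6).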

It is also known that the log-Sobolev inequality implies Poincaré inequality, namely: if μ belongs to the class \(\mathcal{L}S(\kappa )\), then for any (locally) Lipschitz function \(f : {\mathbb{R}}^{n} \rightarrow \mathbb{R}\) we have

$$\mathrm{Var}_{\mu }(f) \leq \kappa \displaystyle\int \|\nabla f\|_{2}^{2}\,d\mu ,$$
(7)

where \(\mathrm{Var}_{\mu }(g) = \mathbb{E}_{\mu }({g}^{2}) - {(\mathbb{E}_{\mu }(g))}^{2}\) denotes the variance of g with respect to μ. We denote the class of probability measures satisfying Poincaré inequality with a given constant κ > 0 by \(\mathcal{P}(\kappa )\).
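A quick way to see this implication is the standard linearization: for a bounded Lipschitz function f and small ε > 0, apply (4) to the function 1 + εf. A direct expansion (we only sketch the computation; the general case follows by approximation) gives

$$\mathrm{Ent}_{\mu }\left ({(1 + \epsilon f)}^{2}\right ) = 2{\epsilon }^{2}\mathrm{Var}_{\mu }(f) + o({\epsilon }^{2}),\qquad 2\kappa \displaystyle\int \|\nabla (1 + \epsilon f)\|_{2}^{2}\,d\mu = 2\kappa {\epsilon }^{2}\displaystyle\int \|\nabla f\|_{2}^{2}\,d\mu ,$$

and dividing by 2ε 2 and letting ε → 0 yields (7).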

We are mainly interested in the subclasses \(\mathcal{L}S_{lc}(\kappa )\) and \(\mathcal{P}_{lc}(\kappa )\) of isotropic log-concave probability measures that belong to \(\mathcal{L}S(\kappa )\) and \(\mathcal{P}(\kappa )\) respectively. In particular, we study the dependence on κ of various parameters that play a crucial role in recent works about isotropic log-concave measures—see the next section for definitions and background information. It turns out that, from this point of view, \(\mathcal{L}S_{lc}(\kappa )\) is a rather restricted class with very nice properties:

Theorem 1.

Let μ be an isotropic log-concave probability measure on \({\mathbb{R}}^{n}\) which satisfies the logarithmic Sobolev inequality with constant κ > 0. Then,

(i):

All directions are sub-Gaussian: μ is a ψ 2 -measure with constant \(c_{1}\sqrt{\kappa }.\)

(ii):

The isotropic constant of μ is bounded: \(L_{\mu } \leq c_{2}\sqrt{\kappa }.\)

(iii):

Let \(I_{q}(\mu ) ={ \left (\int \|x\|_{2}^{q}d\mu \right )}^{1/q}\) , − n < q < ∞, q≠0. Then, \(I_{q}(\mu ) \leq I_{2}(\mu ) + \sqrt{\kappa }\sqrt{q}\) for all 2 ≤ q < ∞. In particular, \(I_{q}(\mu ) \leq c_{3}\sqrt{n}\) for all q ≤ c 4 n∕κ. Also, \(I_{-q}(\mu ) \geq c_{5}\sqrt{n}\) for all q ≤ c 6 n∕κ.

(iv):

Most directions are “regular” and super-Gaussian: there exists a subset A of S n−1 with measure \(\sigma (A) > 1 - {e}^{-c_{7}n/\kappa }\) such that for any θ ∈ A we have

$${ \left (\displaystyle\int \vert \langle x,\theta \rangle {\vert }^{q}\,d\mu (x)\right )}^{1/q} \leq c_{ 8}\sqrt{\kappa }\sqrt{q/p}{\left (\displaystyle\int \vert \langle x,\theta \rangle {\vert }^{p}\,d\mu (x)\right )}^{1/p}$$
(8)

for any 1 ≤ p ≤ c 9 n∕κ and any q ≥ p, and also,

$$\mu (x : \vert \langle x,\theta \rangle \vert \geq t) \geq {e}^{-c_{10}{t}^{2}/\kappa },$$
(9)

for all \(1 \leq t \leq c_{11}\sqrt{n}/\kappa .\)

The proofs of the previous statements are given in Sect. 3. Our basic tools are the classical Herbst argument and the theory of L q -centroid bodies as it is developed in [10, 12, 27–29]. All these assertions show that measures belonging to \(\mathcal{L}S_{lc}(\kappa )\) (with κ ≃ 1) share many of the properties of the standard n-dimensional Gaussian measure γ n (recall that γ n satisfies the log-Sobolev inequality with κ = 1). We close Sect. 3 with a strengthened version of a recent result of Latała (see [16]) about the tails of order statistics of log-concave isotropic probability measures μ in \({\mathbb{R}}^{n}\): Latała showed that

$$\mu (x : x_{m}^{{\ast}}\geq t) \leq \exp (-\sqrt{m}t/c)$$
(10)

for all 1 ≤ m ≤ n and t ≥ log(en ∕ m), where \((x_{1}^{{\ast}},\ldots ,x_{n}^{{\ast}})\) is the decreasing rearrangement of \((\vert x_{1}\vert ,\ldots ,\vert x_{n}\vert )\). We show that if \(\mu \in \mathcal{L}S(\kappa )\) is centered then, for every 1 ≤ m ≤ n and for any \(t \geq C\sqrt{\kappa \log (en/m)}\), we have

$$\mu (x : x_{m}^{{\ast}}\geq t) \leq {e}^{-cm{t}^{2}/\kappa }.$$
(11)

In fact, using a recent result from [1], one can obtain a similar estimate in the setting of log-concave isotropic probability measures with bounded ψ 2-constant, but for a slightly different range of t’s.

According to Theorem 1 (i), if \(\mu \in \mathcal{L}S_{lc}(\kappa )\) then μ is a ψ 2-measure. It is natural to ask what is the exact relation of ψ 2-measures to this class: more precisely, what is the best upper bound m(b, n)—with respect to b and the dimension n—that one can have for the log-Sobolev constant of an isotropic measure on \({\mathbb{R}}^{n}\) with ψ 2 constant less than or equal to b. In Sect. 4 we show that a transportation of measure argument from [17] yields uniform bounds for the log-Sobolev constants of the \(\ell _{q}^{n}\) balls, 2 ≤ q ≤ ∞. It is well known that these bodies are ψ 2 (actually, the list of known ψ 2 measures is rather short).

Theorem 2.

There exists an absolute constant C > 0 such that for every n and every 2 ≤ q ≤∞ one has \(\mu _{q,n} \in \mathcal{L}S_{lc}(C)\) , where μ q,n is the Lebesgue measure on the normalized ℓ q n ball \(\overline{B}_{q}^{n}.\)

In Sect. 5 we discuss the infimum convolution conjecture of Latała and Wojtaszczyk for the class \(\mathcal{L}S_{lc}(\kappa )\). We first recall property (τ) which was introduced by Maurey in [21]. If μ is a probability measure on \({\mathbb{R}}^{n}\) and \(\varphi : {\mathbb{R}}^{n} \rightarrow [0,\infty ]\) is a measurable function, then the pair (μ, φ) is said to have property (τ) if

$$\displaystyle\int {e}^{f\square \varphi }\,d\mu \displaystyle\int {e}^{-f}\,d\mu \leq 1$$
(12)

for any bounded measurable function \(f : {\mathbb{R}}^{n} \rightarrow \mathbb{R}\), where

$$(f\square g)(x) :=\inf \{ f(x - y) + g(y) : y \in {\mathbb{R}}^{n}\}$$
(13)

is the infimum convolution of two functions \(f,g : {\mathbb{R}}^{n} \rightarrow \mathbb{R}\). Since (12) is clearly satisfied with φ ≡ 0, the question is to find the largest cost function φ for which it is still true. In [17] it is proved that if μ is symmetric and (μ, φ) has property (τ) for some convex cost function φ, then

$$\varphi (y) \leq 2\Lambda _{\mu }^{{\ast}}(y/2) \leq \Lambda _{ \mu }^{{\ast}}(y),$$
(14)

where

$$\Lambda _{\mu }^{{\ast}}(y) = \mathcal{L}\Lambda _{ \mu }(y) =\sup\limits_{x\in {\mathbb{R}}^{n}}\left \{\langle x,y\rangle -\log \displaystyle\int {e}^{\langle x,z\rangle }d\mu (z)\right\}$$
(15)

is the Legendre transform of the logarithmic Laplace transform Λ μ of μ. Thus, Λ μ  ∗  is the best cost function that might satisfy property (τ) with a given measure μ. Latała and Wojtaszczyk conjecture that there exists an absolute constant b > 0 such that \((\mu ,\Lambda _{\mu }^{{\ast}}( \frac{\cdot } {b}))\) has property (τ) for every symmetric log-concave probability measure μ on \({\mathbb{R}}^{n}\). This is a very strong conjecture. If true in full generality, this optimal infimum convolution inequality would imply a positive answer to the Kannan–Lovasz–Simonovits conjecture and the hyperplane conjecture.

We study the conjecture of Latała and Wojtaszczyk for the class of log-concave probability measures with log-Sobolev constant κ. It is not hard to check that \(\Lambda _{\mu }^{{\ast}}(y) \geq \frac{\|y\|_{2}^{2}} {2\kappa }\). Therefore, a weaker answer would be to show that, for any bounded measurable function f we have (12) for a function φ which is proportional to \(\|y\|_{2}^{2}\). At this point we are able to give a proof of this fact using the equivalence of the logarithmic Sobolev inequality and the Gaussian isoperimetric inequality in the context of log-concave measures, first established by Bakry and Ledoux (see [3]).

Theorem 3.

Let μ be a log-concave probability measure which satisfies the log-Sobolev inequality with constant κ > 0. Then, (μ,φ) has property (τ), where \(\varphi (y) = \frac{c} {\kappa }\|y\|_{2}^{2}\) and c > 0 is an absolute constant.

Theorem 3 is close in spirit to a result due to Maurey proved in [21] stating that if (μ, φ) has property (τ) with \(\varphi (y) = \frac{\|y\|_{2}^{2}} {2\kappa }\), then μ satisfies Poincaré inequality with constant κ (see Sect. 5 for the exact statement).

2 Notation and Background Material

We work in \({\mathbb{R}}^{n}\), which is equipped with a Euclidean structure ⟨ ⋅,  ⋅⟩. We denote by \(\|\cdot \|_{2}\) the corresponding Euclidean norm, and write B 2 n for the Euclidean unit ball, and S n − 1 for the unit sphere. Volume is denoted by |  ⋅ | . We write ω n for the volume of B 2 n and σ for the rotationally invariant probability measure on S n − 1. The Grassmann manifold G n, k of k-dimensional subspaces of \({\mathbb{R}}^{n}\) is equipped with the Haar probability measure ν n, k . Let k ≤ n and F ∈ G n, k . We will denote by P F the orthogonal projection from \({\mathbb{R}}^{n}\) onto F. We also write \(\overline{A}\) for the homothetic image of volume 1 of a compact set \(A \subseteq {\mathbb{R}}^{n}\) of positive volume, i.e. \(\overline{A} := \frac{A} {\vert A{\vert }^{1/n}}\). The letters \(c,c^{\prime},c_{1},c_{2}\) etc. denote absolute positive constants which may change from line to line. Whenever we write a ≃ b, we mean that there exist absolute constants c 1, c 2 > 0 such that c 1 a ≤ b ≤ c 2 a.

A convex body in \({\mathbb{R}}^{n}\) is a compact convex subset C of \({\mathbb{R}}^{n}\) with non-empty interior. We say that C is symmetric if x ∈ C implies that − x ∈ C. We say that C is centered if it has barycentre at the origin, i.e. \(\int _{C}\langle x,\theta \rangle \,dx = 0\) for every θ ∈ S n − 1. The support function \(h_{C} : {\mathbb{R}}^{n} \rightarrow \mathbb{R}\) of C is defined by h C (x) = max{⟨x, y⟩ : y ∈ C}. We define the mean width of C by \(w(C) =\int _{{S}^{n-1}}h_{C}(\theta )\sigma (d\theta )\), and more generally, for each \(-\infty < q < \infty \), q≠0, we define the q-mean width of C by

$$w_{q}(C) ={ \left (\displaystyle\int _{{S}^{n-1}}h_{C}^{q}(\theta )\sigma (d\theta )\right )}^{1/q}.$$
(16)

The radius of C is the quantity \(R(C) =\max \{\| x\|_{2} : x \in C\}\) and, if the origin is an interior point of C, the polar body C  ∘  of C is

$${C}^{\circ } :=\{ y \in {\mathbb{R}}^{n} :\langle x,y\rangle \leq 1\;\mbox{ for all}\;x \in C\}.$$
(17)

Let C be a symmetric convex body in \({\mathbb{R}}^{n}\). Define k  ∗ (C) as the largest positive integer k ≤ n for which

$$\frac{1} {2}w(C)(B_{2}^{n} \cap F) \subseteq P_{ F}(C) \subseteq 2w(C)(B_{2}^{n} \cap F)$$
(18)

with probability greater than \(\frac{n} {n+k}\) with respect to the Haar measure ν n, k on G n, k . It is known (see [23, 26]) that the parameter k  ∗ (C) is completely determined by w(C) and R(C): There exist c 1, c 2 > 0 such that

$$c_{1}n\frac{w{(C)}^{2}} {R{(C)}^{2}} \leq k_{{\ast}}(C) \leq c_{2}n\frac{w{(C)}^{2}} {R{(C)}^{2}}$$
(19)

for every symmetric convex body C in \({\mathbb{R}}^{n}\). The same parameter is crucial for the behavior of the q-mean width of C: it is proved in [19] that for any symmetric convex body C in \({\mathbb{R}}^{n}\) one has (1) w q (C) ≃ w(C) if 1 ≤ q ≤ k  ∗ (C), (2) \(w_{q}(C) \simeq \sqrt{q/n}R(C)\) if k  ∗ (C) ≤ q ≤ n and (3) w q (C) ≃ R(C) if q ≥ n.

Recall that a Borel probability measure μ on \({\mathbb{R}}^{n}\) is called log-concave if \(\mu ((1 - \lambda )A + \lambda B) \geq {(\mu (A))}^{1-\lambda }{(\mu (B))}^{\lambda }\) for any pair of Borel subsets A, B of \({\mathbb{R}}^{n}\) and any λ ∈ (0, 1). It is known that if μ is log-concave and if μ(H) < 1 for every hyperplane H, then μ is absolutely continuous with respect to Lebesgue measure and its density f μ is a log-concave function, i.e. log f is concave (see [8]).

A well-known consequence of Borell’s lemma (see [25, Appendix III]) states that if \(f : {\mathbb{R}}^{n} \rightarrow \mathbb{R}\) is a seminorm and μ is a log-concave probability measure on \({\mathbb{R}}^{n}\), then, for any 1 ≤ p < q,

$$\|f\|_{L_{q}(\mu )} \leq \frac{cq} {p} \|f\|_{L_{p}(\mu )},$$
(20)

where c > 0 is an absolute constant. In particular, (20) holds true for any linear functional f(x) = ⟨x, θ⟩ and any norm \(f(x) =\| x\|\).

Let α ∈ [1, 2]. The ψ α-norm of f is defined by

$$\|f\|_{\psi _{\alpha }} =\inf \left \{t > 0 :\displaystyle\int \exp {(\vert f\vert /t)}^{\alpha }\,d\mu \leq 2\right \}.$$
(21)

One can check that \(\|f\|_{\psi _{\alpha }} \simeq \sup _{q\geq \alpha }\frac{\|f\|_{L_{q}(\mu )}} {{q}^{1/\alpha }}\). We say that a log-concave probability measure μ on \({\mathbb{R}}^{n}\) satisfies a ψ α-estimate with constant b α in the direction of θ if

$$\|\langle \cdot ,\theta \rangle \|_{\psi _{\alpha }} \leq b_{\alpha }\|\langle \cdot ,\theta \rangle \|_{2}.$$
(22)

The measure μ is called ψ α with constant b = b α if it satisfies a ψ α estimate with constant b in every direction θ ∈ S n − 1. The following are equivalent:

1. μ satisfies a ψ α-estimate with constant b in the direction of θ.

2. For all t > 0 we have \(\mu (x : \vert \langle x,\theta \rangle \vert \geq t\|\langle \cdot ,\theta \rangle \|_{2}) \leq 2{e}^{-{t}^{\alpha }/{b}^{\alpha } }\).

From (20) we see that every log-concave probability measure has ψ 1 constant b ≤ C, where C > 0 is an absolute constant.
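As an illustration, the implication 1 ⇒ 2 above is just Markov’s inequality applied to the function \(\exp {(\vert \langle \cdot ,\theta \rangle \vert /\|\langle \cdot ,\theta \rangle \|_{\psi _{\alpha }})}^{\alpha }\): for every t > 0,

$$\mu (x : \vert \langle x,\theta \rangle \vert \geq t\|\langle \cdot ,\theta \rangle \|_{2}) \leq {e}^{-{\left (t\|\langle \cdot ,\theta \rangle \|_{2}/\|\langle \cdot ,\theta \rangle \|_{\psi _{\alpha }}\right )}^{\alpha } }\displaystyle\int \exp {(\vert \langle x,\theta \rangle \vert /\|\langle \cdot ,\theta \rangle \|_{\psi _{\alpha }})}^{\alpha }\,d\mu (x) \leq 2{e}^{-{t}^{\alpha }/{b}^{\alpha } },$$

where the last step uses (21) to bound the integral by 2 and (22) to bound the exponent.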

Let μ be a probability measure on \({\mathbb{R}}^{n}\). For every q ≥ 1 and θ ∈ S n − 1 we define

$$h_{Z_{q}(\mu )}(\theta ) :={ \left (\displaystyle\int _{{\mathbb{R}}^{n}}\vert \langle x,\theta \rangle {\vert }^{q}d\mu (x)\right )}^{1/q}.$$
(23)

Note that if μ is log-concave then \(h_{Z_{q}(\mu )}(\theta ) < \infty \). We define the L q -centroid body Z q (μ) of μ to be the centrally symmetric convex set with support function \(h_{Z_{q}(\mu )}\). L q -centroid bodies were introduced in [20]. Here we follow the normalization (and notation) that appeared in [27]. The original definition concerned the class of measures \(\mathbf{1}_{K}\), where K is a convex body of volume 1. In this case, we also write Z q (K) instead of \(Z_{q}(\mathbf{1}_{K})\). Additional information on L q -centroid bodies can be found in [28, 29].

An absolutely continuous (with respect to Lebesgue measure) probability measure μ on \({\mathbb{R}}^{n}\) with density f μ is called isotropic if it is centered and Z 2(μ) = B 2 n. Equivalently, if \(\int \langle x,\theta {\rangle }^{2}\,d\mu (x) = 1\) for all θ ∈ S n − 1. In the log-concave case we define the isotropic constant of μ by \(L_{\mu } := f_{\mu }{(0)}^{ \frac{1} {n} }\). We refer to [11, 24, 29] for additional information on isotropic convex bodies and measures.

For every \(-n < q \leq \infty \), q≠0, we define

$$I_{q}(\mu ) :={ \left (\displaystyle\int _{{\mathbb{R}}^{n}}\|x\|_{2}^{q}d\mu (x)\right )}^{1/q}.$$
(24)

Observe that if μ is isotropic then \(I_{2}(\mu ) = \sqrt{n}\). Next, we consider the parameter

$$q_{{\ast}}(\mu ) =\max \{ k \leq n : k_{{\ast}}(Z_{k}(\mu )) \geq k\}.$$
(25)

The main result of [28] asserts that the moments of the Euclidean norm on log-concave isotropic measures satisfy a strong reverse Hölder inequality up to the value q  ∗ : for every q ≤ q  ∗ (μ),

$$I_{q}(\mu ) \leq CI_{-q}(\mu ),$$
(26)

where C > 0 is an absolute constant. In other words, \(I_{q}(\mu ) \simeq \sqrt{n}\) if 1 ≤ | q | ≤ q  ∗ (μ). Moreover, one has a non-trivial estimate for the parameter q  ∗ : if μ is a ψ α-measure with constant b α, then

$$q_{{\ast}}(\mu ) \geq c{n}^{\alpha /2}/b_{ \alpha }^{\alpha },$$
(27)

where c > 0 is an absolute constant. In particular, \(q_{{\ast}}(\mu ) \geq c\sqrt{n}\) for every isotropic log-concave probability measure μ in \({\mathbb{R}}^{n}\).

3 Isotropic Log-Concave Measures with Bounded Logarithmic Sobolev Constant

3.1 Geometric Properties

In this section we assume that μ is an isotropic log-concave measure on \({\mathbb{R}}^{n}\) with logarithmic Sobolev constant κ and provide short proofs of the statements in Theorem 1. In some cases the results hold true under weaker assumptions on μ.

Let us first recall the classical Herbst argument (for a proof see [18]):

Lemma 1 (Herbst). 

Let μ be a Borel probability measure on \({\mathbb{R}}^{n}\) such that \(\mu \in \mathcal{L}S(\kappa )\) . Then, for any 1-Lipschitz function \(f : {\mathbb{R}}^{n} \rightarrow \mathbb{R}\) with \(\mathbb{E}_{\mu }(f) = 0\) , we have

$$L_{f}(t) \leq \frac{\kappa } {2}{t}^{2},$$
(28)

for any \(t \in \mathbb{R}.\)

Proposition 1.

Let μ be an isotropic measure in \(\mathcal{L}S(\kappa )\) . Then, for any θ ∈ S n−1 we have:

$$\|\langle \cdot ,\theta \rangle \|_{\psi _{2}} \leq c\sqrt{\kappa },$$
(29)

where c > 0 is an absolute constant.

Proof.

From Herbst’s Lemma and Markov’s inequality we conclude that \(\mu (x : \vert f(x)\vert \geq t) \leq 2{e}^{-{t}^{2}/2\kappa }\) for every 1-Lipschitz function f with \(\mathbb{E}_{\mu }(f) = 0\). Since μ is assumed isotropic, μ is centered and this result applies to the function x ↦ ⟨x, θ⟩, where θ ∈ S n − 1. Thus, we get

$$\mu (x : \vert \langle x,\theta \rangle \vert \geq t) \leq 2{e}^{-{t}^{2}/2\kappa }$$
(30)

for every t > 0, and this implies that

$$\|\langle \cdot ,\theta \rangle \|_{\psi _{2}} \leq c\sqrt{\kappa }.$$
(31)

In other words, μ is a ψ 2-measure with constant \(O(\sqrt{\kappa })\). Note that the log-concavity of μ is not necessary for this claim.
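For instance, one way to quantify the constant in (31) is to integrate the tail bound (30) directly:

$$\displaystyle\int \exp \left (\frac{\langle x,\theta {\rangle }^{2}} {8\kappa } \right )\,d\mu (x) = 1 +\displaystyle\int _{0}^{\infty } \frac{s} {4\kappa }\,{e}^{{s}^{2}/8\kappa }\,\mu (x : \vert \langle x,\theta \rangle \vert \geq s)\,ds \leq 1 +\displaystyle\int _{0}^{\infty } \frac{s} {2\kappa }\,{e}^{-3{s}^{2}/8\kappa }\,ds = \frac{5} {3} < 2,$$

so that, by (21), \(\|\langle \cdot ,\theta \rangle \|_{\psi _{2}} \leq \sqrt{8\kappa }\).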

Next we prove that the isotropic constant of μ is bounded in terms of κ.

Proposition 2.

Let \(\mu \in \mathcal{L}S_{lc}(\kappa )\) . Then, one has

$$L_{\mu } \leq c\sqrt{\kappa },$$
(32)

where c > 0 is an absolute constant.

Proof.

It is known that ψ 2-isotropic log-concave measures have bounded isotropic constant. Actually, it was recently proved in [15] that the dependence on the ψ 2-constant is linear. This follows from the following main result of [15]: if q ≤ q  ∗ (μ) then

$$\vert Z_{q}(\mu ){\vert }^{1/n} \geq c\sqrt{ \frac{q} {n}}.$$
(33)

Since μ is a ψ 2-measure with constant \(\sqrt{\kappa }\) we have q  ∗ (μ) ≥ cn ∕ κ. Thus, using also the fact that \(\vert Z_{n}(\mu ){\vert }^{1/n}{[f_{\mu }(0)]}^{1/n} \simeq 1\) (see [28]) we get

$$L_{\mu } = {[f_{\mu }(0)]}^{1/n} \simeq \frac{1} {\vert Z_{n}(\mu ){\vert }^{1/n}} \leq \frac{1} {\vert Z_{q}(\mu ){\vert }^{1/n}} \leq C\sqrt{\frac{n} {q}},$$
(34)

for all q ≤ cn ∕ κ and the result follows.

Remark.

Let μ be a measure which satisfies Poincaré inequality with constant κ. Note that

$$\langle \mathrm{Cov}(\mu )(u),u\rangle = \mathrm{Var}_{\mu }(f) \leq \kappa \displaystyle\int \|\nabla f\|_{2}^{2}\,d\mu = \kappa \|u\|_{ 2}^{2},$$
(35)

where f(x) = ⟨x, u⟩. Thus, for any probability measure \(\mu \in \mathcal{P}(\kappa )\) we have

$$L_{\mu } :=\| \mu \|_{\infty }^{\frac{1} {n} }{[\det \mathrm{Cov}(\mu )]}^{ \frac{1} {2n} } \leq \| \mu \|_{\infty }^{1/n}\sqrt{\kappa },$$
(36)

where \(\|\mu \|_{\infty } =\sup _{x}f_{\mu }(x)\) and f μ is the density of μ.
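Indeed, (35) says that every eigenvalue of Cov(μ) is at most κ, so

$$\det \mathrm{Cov}(\mu ) \leq {\kappa }^{n}\quad \mbox{ and hence}\quad {[\det \mathrm{Cov}(\mu )]}^{ \frac{1} {2n} } \leq \sqrt{\kappa },$$

which is the step behind (36).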

Proposition 3.

Let \(\mu \in \mathcal{L}S(\kappa )\) . Then, μ satisfies the following moment estimate: For any q ≥ 2 one has

$$I_{q}(\mu ) \leq I_{2}(\mu ) + \sqrt{\kappa }\sqrt{q}.$$
(37)

In particular, if μ is isotropic then we have:

$$I_{q}(\mu ) \leq (1 + \delta )I_{2}(\mu ),$$
(38)

for 2 ≤ q ≤ δ 2 n∕κ and \(\delta > {(2\kappa /n)}^{1/2}.\)

Proof.

We prove a more general result following [2]: if μ satisfies the logarithmic Sobolev inequality with constant κ > 0 then, for any Lipschitz function f on \({\mathbb{R}}^{n}\) and for any 2 ≤ p ≤ q, we have

$$\|f\|_{q}^{2} -\| f\|_{ p}^{2} \leq \kappa \|f\|_{\mathrm{ Lip}}^{2}(q - p).$$
(39)

For the proof we may assume that \(\|f\|_{\mathrm{Lip}} = 1\). Let \(g(p) =\| f\|_{p}\). Differentiating g we see

$$g^\prime(p) =\| f\|_{p}\left [\frac{1} {p} \frac{\displaystyle\int \vert f{\vert }^{p}\log \vert f\vert \,d\mu } {\displaystyle\int \vert f{\vert }^{p}\,d\mu } - \frac{1} {{p}^{2}}\log \displaystyle\int \vert f{\vert }^{p}\,d\mu \right ].$$
(40)

On the other hand using the logarithmic Sobolev inequality for | f | p ∕ 2, p > 2, after some calculations we arrive at:

$$\frac{1} {p} \frac{\displaystyle\int \vert f{\vert }^{p}\log \vert f\vert \,d\mu } {\displaystyle\int \vert f{\vert }^{p}\,d\mu } - \frac{1} {{p}^{2}}\log \displaystyle\int \vert f{\vert }^{p}\,d\mu \leq \frac{\kappa } {2} \frac{\displaystyle\int \vert f{\vert }^{p-2}\,d\mu } {\displaystyle\int \vert f{\vert }^{p}\,d\mu } .$$
(41)

That is

$$\frac{{g}^{{\prime}}(p)} {g(p)} \leq \frac{\kappa } {2} \frac{g{(p - 2)}^{p-2}} {{g}^{p}(p)} .$$
(42)

Then, using Hölder’s inequality (which gives \(g(p - 2) \leq g(p)\), and hence \(g{(p - 2)}^{p-2} \leq g{(p)}^{p-2}\)), we get

$$2g^\prime(p)g(p) \leq \kappa $$
(43)

for all p > 2. Thus, for any 2 ≤ p ≤ q we get \(g{(q)}^{2} - g{(p)}^{2} \leq \kappa (q - p)\). Choosing \(f(x) =\| x\|_{2}\) and using the elementary inequality \(\sqrt{a + b} \leq \sqrt{a} + \sqrt{b}\), we see that

$$I_{q}(\mu ) \leq I_{2}(\mu ) + \sqrt{\kappa }\sqrt{q}$$
(44)

for all 2 ≤ q < ∞. Note that the log-concavity assumption is not needed for the proof of this claim.
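The “in particular” statement (38) follows at once: for isotropic μ one has \(I_{2}(\mu ) = \sqrt{n}\), so if 2 ≤ q ≤ δ 2 n∕κ then

$$\sqrt{\kappa }\sqrt{q} \leq \delta \sqrt{n} = \delta I_{2}(\mu )\quad \mbox{ and hence}\quad I_{q}(\mu ) \leq (1 + \delta )I_{2}(\mu )$$

by (37); the restriction \(\delta > {(2\kappa /n)}^{1/2}\) simply ensures that this range of q is non-empty.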

Proposition 4.

Let μ be an isotropic measure in \(\mathcal{L}S_{lc}(\kappa )\) . Then,

$$I_{-q}(\mu ) \geq c_{1}I_{2}(\mu )$$
(45)

for all q ≤ c 2 n∕κ, where c 1 ,c 2 > 0 are absolute constants.

Proof.

For the negative values of q we use the fact that q  ∗ (μ) ≥ c 6 n ∕ κ. This is a consequence of (27) because μ is a ψ 2-measure with constant \(O(\sqrt{\kappa })\) from Proposition 1. Then, from (26) we conclude that \(I_{-q}(\mu ) \geq {C}^{-1}I_{q}(\mu ) \geq c_{5}\sqrt{n}\) for all 2 ≤ q ≤ c 6 n ∕ κ.

Proposition 5.

Let μ be an isotropic measure in \(\mathcal{L}S_{lc}(\kappa )\) . Then, most directions are “regular” and super-Gaussian: There exists a subset A of S n−1 with measure \(\sigma (A) > 1 - {e}^{-c_{7}n/\kappa }\) such that for any θ ∈ A we have

$${ \left (\displaystyle\int \vert \langle x,\theta \rangle {\vert }^{q}\,d\mu (x)\right )}^{1/q} \leq c_{ 8}\sqrt{\kappa }\sqrt{q/p}{\left (\displaystyle\int \vert \langle x,\theta \rangle {\vert }^{p}\,d\mu (x)\right )}^{1/p}$$
(46)

for any 1 ≤ p ≤ c 9 n∕κ and any q ≥ p, and also,

$$\mu (x : \vert \langle x,\theta \rangle \vert \geq t) \geq {e}^{-c_{10}{t}^{2}/\kappa },$$
(47)

for all \(1 \leq t \leq c_{11}\sqrt{n}/\kappa .\)

Proof.

Under the weaker assumption that μ is an isotropic log-concave ψ 2-measure with constant b in \({\mathbb{R}}^{n}\), we show that there exists a subset A of S n − 1 of measure \(\sigma (A) > 1 - {e}^{-c_{1}n/{b}^{2} }\) such that for any θ ∈ A and for any 1 ≤ p ≤ c 2 n ∕ b 2 we have:

$${ \left (\displaystyle\int \vert \langle x,\theta \rangle {\vert }^{p}\,d\mu (x)\right )}^{1/p} \simeq \sqrt{p}.$$
(48)

The argument has more or less appeared in [12] (see also [29]). Since μ has ψ 2 constant b, from (27) we have q  ∗ (μ) ≥ cn ∕ b 2. Let k ≤ cn ∕ b 2. Then, if we fix p ≤ k, applying Dvoretzky’s theorem for Z p (μ) we have

$$\frac{1} {2}w(Z_{p}(\mu ))(B_{2}^{n} \cap F) \subseteq P_{ F}(Z_{p}(\mu )) \subseteq 2w(Z_{p}(\mu ))(B_{2}^{n} \cap F)$$
(49)

for all F in a subset B k, p of G n, k of measure

$$\nu _{n,k}(B_{k,p}) \geq 1 - {e}^{-c_{3}k_{{\ast}}(Z_{p}(\mu ))} \geq 1 - {e}^{-c_{4}n/{b}^{2} }.$$
(50)

Applying this argument for \(p = {2}^{i}\), \(i = 1,\ldots ,\lfloor \log _{2}k\rfloor \), and taking into account the fact that, by (20), Z q (μ) ⊆ cZ p (μ) if p < q ≤ 2p, we conclude that there exists B k  ⊂ G n, k with \(\nu _{n,k}(B_{k}) \geq 1 - {e}^{-c_{5}n/{b}^{2} }\) such that (48) holds true for every F ∈ B k and every 1 ≤ p ≤ k. On the other hand, since \(I_{p}(\mu ) \simeq I_{2}(\mu ) = \sqrt{n}\) for all 2 ≤ p ≤ q  ∗ (μ), we see that

$$w(Z_{p}(\mu )) \simeq \sqrt{p}$$
(51)

for all p ≤ cn ∕ b 2. Therefore, (48) can be written in the form

$$h_{Z_{p}(\mu )}(\theta ) \simeq \sqrt{p}$$
(52)

for all F ∈ B k , θ ∈ S F and 1 ≤ p ≤ k. To conclude the proof, let \(k = \lfloor cn/{b}^{2}\rfloor \). Then, if we set \(A =\{ \theta \in {S}^{n-1} : h_{Z_{p}}(\theta ) \simeq \sqrt{p},\mbox{ for all }1 \leq p \leq k\}\), Fubini’s theorem gives:

$$\sigma (A) =\displaystyle\int _{G_{n,k}}\sigma _{F}(A \cap F)\,d\nu _{n,k}(F) \geq \displaystyle\int _{B_{k}}\sigma _{F}(A \cap F)\,d\nu _{n,k}(F) \geq 1 - {e}^{-cn/{b}^{2} }.$$
(53)

Now, let θ ∈ A and let p ≤ cn ∕ b 2 and q ≥ p. From (48) we have \(\|\langle \cdot ,\theta \rangle \|_{p} \simeq \sqrt{p}\). Since μ is a ψ 2 measure we have \(\|\langle \cdot ,\theta \rangle \|_{q} \leq cb\sqrt{q}\) for all θ ∈ S n − 1 and all q ≥ 1. This shows that

$$\|\langle \cdot ,\theta \rangle \|_{q} \leq cb\sqrt{q/p}\,\|\langle \cdot ,\theta \rangle \|_{p}.$$
(54)

In the case \(\mu \in \mathcal{L}S_{lc}(\kappa )\) we know that \(b = O(\sqrt{\kappa })\), and this proves Proposition 5.

For the second part we use an argument which has essentially appeared in [12]. Using the fact that for all θ ∈ A and for all 1 ≤ q ≤ cn ∕ b 2 we have \(h_{Z_{q}(\mu )}(\theta ) \simeq \sqrt{q}\), we write

$$\mu \left (x : \vert \langle x,\theta \rangle \vert \geq \frac{1} {2}\|\langle \cdot ,\theta \rangle \|_{q}\right ) \geq {(1 - {2}^{-q})}^{2} \frac{\|\langle \cdot ,\theta \rangle \|_{q}^{2q}} {\|\langle \cdot ,\theta \rangle \|_{2q}^{2q}} \geq {e}^{-cq},$$
(55)

where we have used Paley–Zygmund inequality and (20). Therefore, for all θ ∈ A and all q ≤ cn ∕ b 2, we get

$$\mu (x : \vert \langle x,\theta \rangle \vert \geq c_{1}\sqrt{q}) \geq {e}^{-c_{2}q}.$$
(56)

Writing \(c_{1}\sqrt{q} = t\) we have that for all \(1 \leq t \leq c_{3}\sqrt{n}/b\) one has μ(x :  | ⟨x, θ⟩ | ≥ t) \(\geq {e}^{-c{t}^{2} }\) for all θ ∈ A, and \(\sigma (A) \geq 1 - {e}^{-cn/{b}^{2} }\).

Remark.

For a general measure \(\mu \in \mathcal{L}S_{lc}(\kappa )\) one cannot expect that every direction θ will be super-Gaussian (with a constant depending on κ). To see this, consider the uniform measure \(\mu _{\infty ,n}\) on the unit cube \(C_{n} ={ \left [-\frac{1} {2}, \frac{1} {2}\right ]}^{n}\). This is a product log-concave probability measure, and hence, it satisfies the logarithmic Sobolev inequality with an absolute constant κ (see [18, Corollary 5.7]). On the other hand, it is clearly not super-Gaussian in the directions of e i , because \(h_{C_{n}}(e_{i}) \simeq 1\). The same is true for all θ ∈ S n − 1 for which \(h_{C_{n}}(\theta )/\sqrt{n} = o_{n}(1)\).

3.2 Tail Estimates for Order Statistics

The starting point for the next property is a result of Latała from [16]: if μ is a log-concave isotropic probability measure on \({\mathbb{R}}^{n}\) then

$$\mu (x : x_{m}^{{\ast}}\geq t) \leq \exp (-\sqrt{m}t/c)$$
(57)

for all 1 ≤ m ≤ n and t ≥ log(en ∕ m), where \((x_{1}^{{\ast}},\ldots ,x_{n}^{{\ast}})\) is the decreasing rearrangement of \((\vert x_{1}\vert ,\ldots ,\vert x_{n}\vert )\). We will show that if \(\mu \in \mathcal{L}S(\kappa )\) and is centered, then a much better estimate holds true. The idea of the proof comes from [16, Proposition 2].

Proposition 6.

Let μ be a centered probability measure on \({\mathbb{R}}^{n}\) which belongs to the class \(\mathcal{L}S(\kappa )\) . For every 1 ≤ m ≤ n and for any \(t \geq C\sqrt{\kappa \log (en/m)}\) , we have

$$\mu (x : x_{m}^{{\ast}}\geq t) \leq {e}^{-cm{t}^{2}/\kappa }.$$
(58)

Proof.

Since μ satisfies the log-Sobolev inequality with constant κ, the following isoperimetric inequality holds (for a proof see [18]): if μ(A) ≥ 1 ∕ 2, then for any t > 0 one has

$$1 - \mu (A + tB_{2}^{n}) \leq {e}^{-{t}^{2}/8\kappa }.$$
(59)

Applying Herbst’s argument to the function xx i (that is (30) for θ = e i ) we have

$$\mu (x : \vert x_{i}\vert \geq t) \leq 2{e}^{-{t}^{2}/2\kappa }$$
(60)

for t > 0. Given 1 ≤ m ≤ n, for any t > 0 we define the set

$$A(t) :=\{ x : \mathrm{card}(i : \vert x_{i}\vert \geq t) < m/2\}.$$
(61)

Claim.

For every \(t \geq \sqrt{6\kappa \log (en/m)}\) we have μ(A(t)) ≥ 1 ∕ 2.

Indeed, using Markov’s inequality and (60) we obtain:

$$\begin{array}{rl} 1 - \mu (A(t)) & = \mu (x : \mathrm{card}(i : \vert x_{i}\vert \geq t) \geq m/2) \\ & = \mu \left (x :\displaystyle\sum _{ i=1}^{n}\mathbf{1}_{\{ \vert x_{i}\vert \geq t\}}(x) \geq \frac{m} {2} \right ) \\ & \leq \frac{2} {m}\displaystyle\sum _{i=1}^{n}\mu (x : \vert x_{ i}\vert \geq t) \\ & \leq \frac{4n} {m} {e}^{-\frac{{t}^{2}} {2\kappa } } \leq \frac{4n} {m}{ \left (\frac{en} {m} \right )}^{-3} < \frac{1} {2}.\end{array}$$
(62)

Now, let \(t_{0} := \sqrt{6\kappa \log (en/m)}\). For any s > 0, if we write \(z = x + y \in A(t_{0}) + s\sqrt{m}B_{2}^{n}\) then less than m ∕ 2 of the | x i  | ’s are greater than t 0 and less than m ∕ 2 of the | y i  | ’s are greater than \(s\sqrt{2}\). Using the isoperimetric inequality once again, we get:

$$\mu (x : x_{m}^{{\ast}}\geq t_{ 0} + \sqrt{2}s) \leq 1 - \mu (A(t_{0}) + s\sqrt{m}B_{2}^{n}) \leq {e}^{-m{s}^{2}/8\kappa }.$$
(63)

For \(t \geq 2t_{0} = 2\sqrt{6}\sqrt{\kappa \log (en/m)}\), choosing \(s = (t - t_{0})/\sqrt{2} \geq t/(2\sqrt{2})\) we get the result with \(C = 2\sqrt{6}\) and \(c = 1/64\).

Note that for the previous argument neither isotropy nor log-concavity is needed. Nevertheless, one can actually obtain the strong estimate of Proposition 6 in the setting of log-concave isotropic probability measures with bounded ψ 2-constant, using a more general result from [1, Theorem 3.3]: For every log-concave isotropic probability measure μ in \({\mathbb{R}}^{n}\), for every 1 ≤ m ≤ n and every t ≥ c log(en ∕ m), one has

$$\mu (x : x_{m}^{{\ast}}\geq t) \leq \exp \left (-\sigma _{ \mu }^{-1}(\sqrt{m}t/c)\right ),$$
(64)

where

$$\sigma _{\mu }(p) =\max\limits_{\theta \in {S}^{n-1}}\|\langle \cdot ,\theta \rangle \|_{p} = R(Z_{p}(\mu )),\quad p \geq 1.$$
(65)

Assuming that μ is a ψ 2-measure with constant b > 0, we have \(\sigma _{\mu }(p) \leq c_{1}b\sqrt{p}\), and hence \(\sigma _{\mu }^{-1}(\sqrt{m}t/c) \geq c_{2}m{t}^{2}/{b}^{2}\). Then, we get the following.

Proposition 7.

Let μ be a log-concave isotropic probability measure on \({\mathbb{R}}^{n}\) with ψ 2 -constant b > 0. For every 1 ≤ m ≤ n and for any \(t \geq C\log (en/m)\), we have

$$\mu (x : x_{m}^{{\ast}}\geq t) \leq {e}^{-cm{t}^{2}/{b}^{2} }.$$
(66)
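The inversion of σ μ used here is straightforward (a sketch, with \(\sigma _{\mu }^{-1}\) understood as the generalized inverse): since σ μ is non-decreasing and \(\sigma _{\mu }(p) \leq c_{1}b\sqrt{p}\), for every s ≥ c 1 b we have

$$\sigma _{\mu }^{-1}(s) \geq \frac{{s}^{2}} {c_{1}^{2}{b}^{2}},\quad \mbox{ so that }\quad \sigma _{\mu }^{-1}(\sqrt{m}t/c) \geq \frac{m{t}^{2}} {{c}^{2}c_{1}^{2}{b}^{2}} = c_{2}\frac{m{t}^{2}} {{b}^{2}} ,$$

which, combined with (64), gives Proposition 7.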

Similarly, we can state [1, Theorem 3.4] in the setting of ψ 2-measures with constant b (in particular, for all \(\mu \in \mathcal{L}S_{lc}(\kappa )\)):

Proposition 8.

Let μ be a log-concave isotropic probability measure on \({\mathbb{R}}^{n}\) with ψ 2 -constant b > 0. For every 1 ≤ m ≤ n and for any t ≥ 1, we have

$$\displaystyle\begin{array}{rcl} & & \mu \left (x :\max _{\vert \sigma \vert =m}\|P_{\sigma }(x)\|_{2} \geq ct\sqrt{m}\log (en/m)\right ) \\ & & \qquad \leq \exp \left (-cm{t}^{2}{\log }^{2}(en/m)/{b}^{2}\log b\right ), \end{array}$$
(67)

where P σ denotes the orthogonal projection onto \({\mathbb{R}}^{\sigma }\) and the maximum is over all σ ⊆{ 1,…,n} with |σ| = m.

4 Log-Sobolev Constant of ψ 2-Measures

The question whether log-concave probability measures with bounded ψ 2-constant exhibit a good behavior with respect to the Poincaré or log-Sobolev constant seems to be open. In fact, the Kannan–Lovász–Simonovits conjecture, which asks if the Poincaré constants of all isotropic log-concave probability measures are uniformly bounded, has been verified only in some special cases: these include the Euclidean ball, the unit cube and log-concave product measures (see [22] for a complete picture of what is known). The “KLS-conjecture” was also confirmed for the normalized \(\ell _{p}^{n}\) balls by S. Sodin [30] in the case 1 ≤ p ≤ 2 and by Latała and Wojtaszczyk [17] in the case p ≥ 2.

It is well-known (see [4]) that the ψ 2-constants of the \(\ell _{q}^{n}\)-balls, 2 ≤ q ≤ ∞, are uniformly bounded. Below we show that the argument of [17] also yields uniform bounds for their log-Sobolev constants.

Proof of Theorem 2.

Recall that if μ, ν are Borel probability measures on \({\mathbb{R}}^{n}\) and \(T : {\mathbb{R}}^{n} \rightarrow {\mathbb{R}}^{n}\) is a Borel measurable function, we say that T transports μ to ν if, for every Borel subset A of \({\mathbb{R}}^{n}\),

$$\mu ({T}^{-1}(A)) = \nu (A).$$
(68)

Equivalently, if for every Borel measurable function \(f : {\mathbb{R}}^{n} \rightarrow \mathbb{R}\),

$$\displaystyle\int f(Tx)\,d\mu (x) =\displaystyle\int f(x)\,d\nu (x).$$
(69)

Let \(1 \leq q < \infty \). We consider the probability distribution ν q on \(\mathbb{R}\) with density \({(2\delta _{q})}^{-1}\exp (-\vert x{\vert }^{q})\), where \(\delta _{q} = \Gamma (1 + 1/q)\), and write ν q n for the product measure ν q  ⊗ n on \({\mathbb{R}}^{n}\), with density \({(2\delta _{q})}^{-n}\exp (-\|x\|_{q}^{q})\). We define a function \(w_{q} : \mathbb{R} \rightarrow \mathbb{R}\) by the equation

$$\frac{1} {\sqrt{2\pi }}\displaystyle\int _{x}^{\infty }{e}^{-{t}^{2}/2 }\,dt = \frac{1} {2\delta _{q}}\displaystyle\int _{w_{q}(x)}^{\infty }{e}^{-\vert t{\vert }^{q} }\,dt.$$
(70)

We also define \(W_{q,n} : {\mathbb{R}}^{n} \rightarrow {\mathbb{R}}^{n}\) by \(W_{q,n}(x_{1},\ldots ,x_{n}) = (w_{q}(x_{1}),\ldots ,w_{q}(x_{n}))\). It is proved in [17] that W q, n transports γ n to ν q n: for every Borel subset A of \({\mathbb{R}}^{n}\) we have \(\gamma _{n}(W_{q,n}^{-1}(A)) = \nu _{q}^{n}(A)\). Moreover, W q, n is Lipschitz: for any r ≥ 1 and for all \(x,y \in {\mathbb{R}}^{n}\) we have

$$\|W_{q,n}(x) - W_{q,n}(y)\|_{r} \leq \frac{2\delta _{q}} {\sqrt{2\pi }}\|x - y\|_{r}.$$
(71)

Next, we consider the radial transformation T q, n , which transports ν q n to μ q, n —the uniform probability measure on \(\overline{B}_{q}^{n}\), the normalized ball of \(\ell _{q}^{n}\). For every \(1 \leq q < \infty \) and \(n \in \mathbb{N}\) we define \(f_{q,n} : [0,\infty ) \rightarrow [0,\infty )\) by the equation

$$\frac{1} {{(2\delta _{q})}^{n}}\displaystyle\int _{0}^{s}{r}^{n-1}{e}^{-{r}^{q} }\,dr =\displaystyle\int _{ 0}^{f_{q,n}(s)}{r}^{n-1}\,dr$$
(72)

and \(T_{q,n} : {\mathbb{R}}^{n} \rightarrow {\mathbb{R}}^{n}\) by \(T_{q,n}(x) = f_{q,n}(\|x\|_{q}) \frac{x} {\|x\|_{q}}\). One can check that T q, n transports the probability measure ν q n to the measure μ q, n .

In the case \(2 \leq q < \infty \), the composition S q, n  = T q, n  ∘ W q, n transports the Gaussian measure γ n to μ q, n and is a Lipschitz map with respect to the standard Euclidean norm, with a Lipschitz norm which is bounded by an absolute constant: for every Borel subset A of \({\mathbb{R}}^{n}\) we have \(\gamma _{n}(S_{q,n}^{-1}(A)) = \mu _{q,n}(A)\), and for all \(x,y \in {\mathbb{R}}^{n}\) we have:

$$\|S_{q,n}(x) - S_{q,n}(y)\|_{2} \leq C\|x - y\|_{2},$$
(73)

where C > 0 is an absolute constant.

Now, we use the following simple Lemma.

Lemma 2.

Let μ,ν be two Borel probability measures on \({\mathbb{R}}^{n}\) . Assume that μ satisfies the log-Sobolev inequality with constant κ and that there exists a Lipschitz map \(T : ({\mathbb{R}}^{n},\mu ) \rightarrow ({\mathbb{R}}^{n},\nu )\) , with respect to the Euclidean metric, that transports μ to ν. Then, ν satisfies the log-Sobolev inequality with constant \(\kappa \|T\|_{\mathrm{Lip}}^{2}.\)

Proof.

Let \(f : {\mathbb{R}}^{n} \rightarrow \mathbb{R}\) be a Lipschitz map. Then, f ∘ T is Lipschitz. Since μ satisfies the log-Sobolev inequality with constant κ, we get:

$$\mathrm{Ent}_{\mu }({(f \circ T)}^{2}) \leq 2\kappa \displaystyle\int \|\nabla (f \circ T)\|_{ 2}^{2}\,d\mu .$$
(74)

From (69) we obtain:

$$\mathrm{Ent}_{\mu }({(f \circ T)}^{2}) = \mathrm{Ent}_{ \nu }({f}^{2}),$$
(75)

while for the right-hand side we have:

$$\displaystyle\int \|\nabla (f \circ T)\|_{2}^{2}\,d\mu \leq \| T\|_{\mathrm{ Lip}}^{2}\displaystyle\int \|(\nabla f) \circ T\|_{ 2}^{2}\,d\mu =\| T\|_{\mathrm{ Lip}}^{2}\displaystyle\int \|\nabla f\|_{ 2}^{2}\,d\nu .$$
(76)

Combining the above, we conclude the proof.

We can now complete the proof of Theorem 2. The Gaussian measure γ n satisfies the log-Sobolev inequality with constant 1: for any Lipschitz function f in \({\mathbb{R}}^{n}\) we have

$$\mathrm{Ent}_{\gamma _{n}}({f}^{2}) \leq 2\displaystyle\int \|\nabla f\|_{ 2}^{2}\,d\gamma _{ n}.$$
(77)

Then, the result follows from (73) and Lemma 2.

Problem 1.

Determine the smallest constant m(b, n) such that every isotropic log-concave probability measure μ on \({\mathbb{R}}^{n}\), which is ψ 2 with constant less than or equal to b, satisfies the log-Sobolev (resp. Poincaré) inequality with constant m(b, n).

Remark 1.

At this point we should mention that Bobkov [5] has proved that if μ is a log-concave, centered probability measure on \({\mathbb{R}}^{n}\), then μ satisfies the log-Sobolev inequality with constant O(d 2), where

$$d =\inf \left \{t > 0 :\displaystyle\int \exp {(\|x\|_{2}/t)}^{2}\,d\mu (x) \leq 2\right \}\!,$$
(78)

that is, the ψ 2 norm of the Euclidean norm \(x\mapsto \|x\|_{2}\) with respect to the measure μ. In [5] he also proves that any log-concave, centered probability measure satisfies Poincaré inequality with constant I 2 2(μ), where

$$I_{2}(\mu ) ={ \left (\displaystyle\int \|x\|_{2}^{2}\,d\mu (x)\right )}^{1/2}.$$
(79)

Thus, if K is an isotropic convex body in \({\mathbb{R}}^{n}\) and μ = μ K is the uniform measure on K, then by Alesker’s theorem (see for example [11, Theorem 2.2.4]) we have that \(d \simeq \sqrt{n}L_{K} = I_{2}(K)\), thus we obtain that μ satisfies the log-Sobolev inequality and Poincaré inequality with constant O(nL K 2). Actually a better dependence for the Poincaré constant is known, due to recent developments on the central limit theorem for convex bodies (see [6]).

5 Infimum Convolution

In this paragraph we discuss the relation between the logarithmic Sobolev inequality and the infimum convolution conjecture, as formulated by Latała and Wojtaszczyk in [17]. By the classical Herbst’s argument we can easily verify that if μ is centered and satisfies the log-Sobolev inequality with constant κ, then

$${e}^{\Lambda _{\mu }(\xi )} =\displaystyle\int {e}^{\langle x,\xi \rangle }\,d\mu (x) \leq {e}^{\kappa \|\xi \|_{2}^{2}/2 },$$
(80)

for all \(\xi \in {\mathbb{R}}^{n}\). (Actually, this can be easily verified for all log-concave, isotropic, ψ 2 probability measures with ψ 2 constant \(\sqrt{\kappa }\) without the assumption on the log-Sobolev constant). This in turn gives that

$$\Lambda _{\mu }^{{\ast}}(\xi ) \geq \frac{\|\xi \|_{2}^{2}} {4\kappa } ,$$
(81)

for all \(\xi \in {\mathbb{R}}^{n}\). The main question is the following:

Problem 2.

We say that μ has the infimum convolution property with constant α (which we denote by IC(α)) if the pair \((\mu ,\Lambda _{\mu }^{{\ast}}( \frac{\cdot } {\alpha }))\) has property (τ). Given κ > 0, determine if there is a positive constant c(κ) such that every isotropic, log-concave probability measure μ on \({\mathbb{R}}^{n}\) which belongs to \(\mathcal{L}S_{lc}(\kappa )\) satisfies IC(c(κ)).
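For completeness, both (80) and (81) are quick consequences of Lemma 1: writing \(\xi = \|\xi \|_{2}\,\theta \) with θ ∈ S n−1 , the function x↦⟨x,θ⟩ is 1-Lipschitz and centered, so \(\Lambda _{\mu }(\xi ) = L_{\langle \cdot ,\theta \rangle }(\|\xi \|_{2}) \leq \kappa \|\xi \|_{2}^{2}/2\), which is (80); taking \(x = \xi /\kappa \) in the definition (15) of the Legendre transform then gives

$$\Lambda _{\mu }^{{\ast}}(\xi ) \geq \langle \xi /\kappa ,\xi \rangle - \Lambda _{\mu }(\xi /\kappa ) \geq \frac{\|\xi \|_{2}^{2}} {\kappa } -\frac{\kappa } {2}\left \|\frac{\xi } {\kappa }\right \|_{2}^{2} = \frac{\|\xi \|_{2}^{2}} {2\kappa } \geq \frac{\|\xi \|_{2}^{2}} {4\kappa } ,$$

which is (81).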

Since the infimum convolution property is of “maximal” nature, one could ask, in view of (81), if a probability measure μ which satisfies the log-Sobolev inequality with constant κ also has property (τ) with a cost function w of the form \(w(y) = \frac{c} {\kappa }\|y\|_{2}^{2}\) for some absolute constant c > 0. Below we give a proof of this fact under the assumption that μ is a log-concave probability measure. We first recall some well known facts.

Let μ be a Borel probability measure on \({\mathbb{R}}^{n}\). For every Borel subset A of \({\mathbb{R}}^{n}\) we define its surface area as follows:

$${\mu }^{+}(A) = \liminf _{ t\rightarrow {0}^{+}} \frac{\mu (A_{t}) - \mu (A)} {t} ,$$
(82)

where \(A_{t} = A + tB_{2}^{n}\) is the t-extension of A with respect to \(\|\cdot \|_{2}\). In other words,

$$A_{t} \equiv A + tB_{2}^{n} = \left \{x \in {\mathbb{R}}^{n} :\inf\limits_{ a\in A}\|x - a\|_{2} < t\right \}.$$
(83)

We say that μ satisfies a Gaussian isoperimetric inequality with constant c > 0 if

$${\mu }^{+}(A) \geq cI(\mu (A))$$
(84)

for every Borel subset A of \({\mathbb{R}}^{n}\), where I is the Gaussian isoperimetric function

$$I(x) = \phi \circ {\varPhi }^{-1}(x).$$
(85)

Here, Φ is the standard normal distribution function

$$\varPhi (x) = \frac{1} {\sqrt{2\pi }}\displaystyle\int_{-\infty }^{x}{e}^{-{t}^{2}/2 }\,dt$$
(86)

and ϕ = Φ is its density. Assuming that μ satisfies (84) with constant c = c(κ) we will show that (μ, φ) has property (τ), where \(\varphi (x) = \frac{{c}^{2}} {4} \|x\|_{2}^{2}\). Note that this condition is in general more restrictive than the condition \(\mu \in \mathcal{L}S_{lc}(\kappa )\): It is known that if μ satisfies (84) with constant c > 0 then \(\mu \in \mathcal{L}S(1/{c}^{2})\) (see [5]). Nevertheless, in the context of log-concave probability measures on \({\mathbb{R}}^{n}\), (84) and the log-Sobolev inequality are equivalent. This was first established by Bakry and Ledoux in [3]. Below, we first sketch an argument for the sake of completeness.

Assume that μ is a log-concave probability measure on \({\mathbb{R}}^{n}\). Then, the density of μ with respect to the Lebesgue measure is of the form e  − U, where \(U : {\mathbb{R}}^{n} \rightarrow [-\infty ,\infty )\) is a convex function. If we consider the differential operator

$$Lu = \Delta u -\langle \nabla U,\nabla u\rangle$$

for u ∈ C 2 and u ∈ L 2(μ), then using integration by parts we easily check that the log-Sobolev inequality can be written in the form

$$\mathrm{Ent}_{\mu }({f}^{2}) \leq 2\kappa \displaystyle\int f(-Lf)\,d\mu .$$
(87)
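The integration by parts behind this reformulation is the following identity (a sketch, for smooth f with sufficient decay):

$$\displaystyle\int f(-Lf)\,d\mu =\displaystyle\int \left (-f\Delta f + f\langle \nabla U,\nabla f\rangle \right ){e}^{-U}\,dx =\displaystyle\int \langle \nabla f,\nabla (f{e}^{-U})\rangle \,dx +\displaystyle\int f\langle \nabla U,\nabla f\rangle {e}^{-U}\,dx =\displaystyle\int \|\nabla f\|_{2}^{2}\,d\mu ,$$

so that (87) is simply a rewriting of (4).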

Using a hypercontractivity result of Gross [13] and semigroup arguments we can arrive at the following parametrized variant of the log-Sobolev inequality:

Theorem 4 (Bakry-Ledoux, 1996[3]). 

Let μ be a probability measure with density e −U with respect to Lebesgue measure, where \(U : {\mathbb{R}}^{n} \rightarrow [-\infty ,\infty )\) is a convex function. If μ satisfies the log-Sobolev inequality with constant κ > 0 then, for any t ≥ 0 and any smooth function f, we have

$$\|f\|_{2}^{2} -\| f\|_{ p(t)}^{2} \leq \sqrt{2t}\|f\|_{ \infty }\displaystyle\int \|\nabla f\|_{2}\,d\mu ,$$
(88)

where \(p(t) = 1 + {e}^{-t/\kappa }.\)

Using this Theorem we can derive the Gaussian isoperimetric inequality with constant \(c = O({\kappa }^{-1/2})\).

Proposition 9.

Let μ be a probability measure with density e −U with respect to the Lebesgue measure, where \(U : {\mathbb{R}}^{n} \rightarrow [-\infty ,\infty )\) is a convex function. If μ satisfies the log-Sobolev inequality with constant κ > 0 then, for any Borel set A in \({\mathbb{R}}^{n}\) we have

$${\mu }^{+}(A) \geq c(\kappa )I(\mu (A)).$$
(89)

Furthermore, we can have \(c(\kappa ) = c/\sqrt{\kappa }\) , where c > 0 is an absolute constant.

Proof.

Let A be a Borel set in \({\mathbb{R}}^{n}\). It is enough to consider the case 0 < μ(A) ≤ 1 ∕ 2. For any t ≥ 0, approximating χ A with smooth functions \(f_{\epsilon } : {\mathbb{R}}^{n} \rightarrow [0,1]\) and passing to the limit, from Theorem 4 we get

$$\mu (A)\left (1 - \mu {(A)}^{ \frac{2} {p(t)} -1}\right ) \leq \sqrt{2t}{\mu }^{+}(A).$$
(90)

Note that

$$\frac{2} {p(t)} - 1 =\tanh \left ( \frac{t} {2\kappa }\right ) \geq \frac{t} {2\kappa }\tanh (1),$$
(91)

for all 0 ≤ t ≤ 2κ. Therefore, we have

$$\mu (A)\left (1 - {e}^{-\frac{c_{1}t} {2\kappa } \log (1/\mu (A))}\right ) \leq \sqrt{2t}{\mu }^{+}(A).$$
(92)

Computing at time \(t_{0} = \frac{\kappa } {\log (1/\mu (A))} \in (0,2\kappa )\) we see that

$${\mu }^{+}(A) \geq \frac{1 - {e}^{-c_{1}/2}} {\sqrt{2}} \frac{1} {\sqrt{\kappa }}\mu (A)\sqrt{\log \frac{1} {\mu (A)}}.$$
(93)

Using the fact that \(I(x) \leq c_{2}x\sqrt{\log (1/x)}\) for all x ∈ (0, 1 ∕ 2) and some absolute constant c 2 > 0, we get the result with constant \(c = \frac{1-{e}^{-c_{1}/2}} {2c_{2}} {\kappa }^{-1/2}\).

Proof of Theorem 3.

Let γ denote the standard 1-dimensional Gaussian measure. It is known that (γ, w) has property (τ), where \(w(x) = {x}^{2}/4\)—see [21]. Let f be a bounded measurable function on \({\mathbb{R}}^{n}\). We consider a function \(g : \mathbb{R} \rightarrow \mathbb{R}\) which is increasing, continuous from the right, and such that, for any \(t \in \mathbb{R}\),

$$\mu (f < t) = \gamma (g < t).$$
(94)

Then, for the proof of (12) we just need to verify that

$$\displaystyle\int {e}^{f\square \varphi }\,d\mu \leq \displaystyle\int {e}^{g\square w}\,d\gamma .$$
(95)

To this end, it suffices to prove that for any u > 0,

$$\mu (f\square \varphi < u) \geq \gamma (g\square w < u).$$
(96)

Since g is increasing we get that g □ w is also increasing, thus the set D u  = { x : (g □ w)(x) < u} is a half-line. For every x ∈ D u there exist \(x_{1},x_{2} \in \mathbb{R}\) such that \(x_{1} + x_{2} = x\) and g(x 1) + w(x 2) < u. By a limiting argument, for the proof of (96) it is enough to prove that for any x ∈ D u and for any \(x_{1},x_{2} \in \mathbb{R}\) with g(x 1) + w(x 2) < u we have

$$\mu (f\square \varphi < u) \geq \gamma (-\infty ,x_{1} + x_{2}] = \varPhi (x_{1} + x_{2}).$$
(97)

For any g(x 1) < s 1 < u − w(x 2) the definition of g implies that: \(\mu (f < s_{1}) = \gamma (g < s_{1}) \geq \gamma (-\infty ,x_{1}] = \varPhi (x_{1})\). Moreover, the inclusion

$$\{f < s_{1}\} + \beta \frac{\vert x_{2}\vert } {2} B_{2}^{n} \subseteq \{ f\square \varphi < u\}$$
(98)

is valid with \(\varphi (x) =\| x\|_{2}^{2}/{\beta }^{2}\) for any β > 0.
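To verify (98) with this choice of φ, note that if \(z \in \{ f < s_{1}\}\) and \(\|v\|_{2} \leq \frac{\beta } {2} \vert x_{2}\vert \), then

$$(f\square \varphi )(z + v) \leq f(z) + \varphi (v) < s_{1} + \frac{1} {{\beta }^{2}}{\left (\frac{\beta \vert x_{2}\vert } {2} \right )}^{2} = s_{1} + \frac{x_{2}^{2}} {4} = s_{1} + w(x_{2}) < u,$$

by the choice of s 1 .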

In order to get the result, we ought to verify an inequality of the following form: if μ(A) ≥ Φ(x 1) one has \(\mu (A + \frac{\beta } {2} \vert x_{2}\vert B_{2}^{n}) \geq \varPhi (x_{ 1} + x_{2}).\) Equivalently, for any t > 0 and any Borel subset A in \({\mathbb{R}}^{n}\) we would like to have \(\mu (A + tB_{2}^{n}) \geq \varPhi ({\varPhi }^{-1}(\mu (A)) + \frac{2} {\beta }t)\).

To finish the proof we just observe that our assumption is equivalent to μ  + (A) ≥ c(κ)I(μ(A)) for any Borel subset A of \({\mathbb{R}}^{n}\) and this in turn to the fact that for any Borel subset A of \({\mathbb{R}}^{n}\) and any t > 0 we have

$$\mu (A + tB_{2}^{n}) \geq \varPhi ({\varPhi }^{-1}(\mu (A)) + tc(\kappa )).$$
(99)

A proof of this last assertion can be found in [18]. Thus, we have proved Theorem 3 with \(\varphi (y) = \frac{c{(\kappa )}^{2}} {4} \|y\|_{2}^{2} = \frac{c} {\kappa }\|y\|_{2}^{2}\), where c > 0 is an absolute constant.
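For the record, the choice of β here is \(\beta = 2/c(\kappa )\): if \(\mu (A) \geq \varPhi (x_{1})\), then (99) applied with \(t = \frac{\beta } {2} \vert x_{2}\vert \) gives

$$\mu \left (A + \frac{\beta } {2} \vert x_{2}\vert B_{2}^{n}\right ) \geq \varPhi \left ({\varPhi }^{-1}(\mu (A)) + \frac{\beta } {2} \vert x_{2}\vert c(\kappa )\right ) \geq \varPhi (x_{1} + x_{2}),$$

and, by Proposition 9, one may take \(c(\kappa ) = c/\sqrt{\kappa }\), so that \(\varphi (y) =\| y\|_{2}^{2}/{\beta }^{2} = \frac{c{(\kappa )}^{2}} {4} \|y\|_{2}^{2}\) is indeed of order \(\|y\|_{2}^{2}/\kappa \).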

Remark.

We should mention here that Maurey has proved in [21] that if (μ, w) has property (τ) and \(w : {\mathbb{R}}^{n} \rightarrow {\mathbb{R}}^{+}\) is a convex function such that \(w(x) \geq \frac{1} {2\kappa }\|x\|_{2}^{2}\) in some neighborhood of 0, then μ satisfies Poincaré inequality with constant κ. Thus, the previous Theorem shows that in the context of log-concave measures, the class \(\mathcal{L}S_{lc}(\kappa )\) is contained in the class of measures μ satisfying (τ) with a convex cost function which satisfies this hyperquadratic condition near zero, and in turn, this class is contained in \(\mathcal{P}_{lc}(c\kappa )\), where c > 0 is an absolute constant.

A weaker version of Problem 2 is the following:

Problem 3.

We say that μ has comparable weak and strong moments with constant α (which we denote by CWSM(α)) if for any norm \(\|\cdot \|\) in \({\mathbb{R}}^{n}\) and any q ≥ p ≥ 2 one has:

$${ \left [\displaystyle\int {\Big\vert }\|x\| -{\left (\displaystyle\int \|x\|^{p}\,d\mu (x)\right )}^{1/p}{\Big\vert }^{q}\,d\mu (x)\right ]}^{1/q} \leq \alpha \sup\limits_{\| z\|_{{\ast}}\leq 1}{\left (\displaystyle\int \vert \langle z,x\rangle {\vert }^{q}\,d\mu (x)\right )}^{1/q}.$$
(100)

Determine if every measure \(\mu \in \mathcal{L}S_{lc}(\kappa )\) satisfies CWSM(c) for some constant c = c(κ).

It was communicated to us by R. Latała that a positive answer to Problems 2 and 3 is not known even if we restrict our attention to the following case:

Problem 4.

Given a probability measure μ of the form \(d\mu (x) = {e}^{-W(x)}\,dx\) with \(W : {\mathbb{R}}^{n} \rightarrow \mathbb{R}\) a convex function such that \(\mathrm{Hess}\,W \geq {\alpha }^{-1}\mathrm{I}\) for some given constant α > 0, determine if Problems 2 and 3 have a positive answer up to constants c(α).

This class of measures has been studied systematically, and it is well known that it is a subclass of \(\mathcal{L}S_{lc}(\alpha )\) (see for example [7]).