3.1 Introduction and Main Results

For a vector \(x\in {\mathbb R}^n\) let \(k{\text -}\max x_i\) (or \(k{\text -}\min x_i\)) denote its k-th maximum (respectively its k-th minimum), i.e. its k-th maximal (respectively k-th minimal) coordinate. For a random vector X = (X 1, …, X n), \(k{\text -}\min X_i\) is also called the k-th order statistic of X.

Let X = (X 1, …, X n) be a random vector with finite first moment. In this note we try to estimate \({\mathbb E} k{\text -}\max _i|X_i|\) and

$$\displaystyle \begin{aligned} {\mathbb E} \max_{|I|=k}\sum_{i\in I}|X_i|={\mathbb E} \sum_{l=1}^k l\text{-}\max_i|X_i|. \end{aligned}$$
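For instance, for the deterministic vector x = (5, 1, 3, 2) in \({\mathbb R}^4\) and k = 2 one has

$$\displaystyle \begin{aligned} 1\text{-}\max_i|x_i|=5,\qquad 2\text{-}\max_i|x_i|=3,\qquad \max_{|I|=2}\sum_{i\in I}|x_i|=5+3=\sum_{l=1}^2 l\text{-}\max_i|x_i|. \end{aligned}$$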

Order statistics play an important role in various statistical applications and there is an extensive literature on this subject (cf. [2, 5] and references therein).

We put special emphasis on the case of log-concave vectors, i.e. random vectors X satisfying the property \({\mathbb P}(X\in \lambda K+(1-\lambda )L)\geq {\mathbb P}(X\in K)^{\lambda }{\mathbb P}(X\in L)^{1-\lambda }\) for any λ ∈ [0, 1] and any nonempty compact sets K and L. By the result of Borell [3] a vector X with full dimensional support is log-concave if and only if it has a log-concave density, i.e. a density of the form \(e^{-h(x)}\), where h is convex with values in (−∞, ∞]. A typical example of a log-concave vector is a vector uniformly distributed over a convex body. In recent years the study of log-concave vectors has attracted the attention of many researchers, cf. the monographs [1, 4].

To bound the sum of k largest coordinates of X we define

$$\displaystyle \begin{aligned} t(k,X):=\inf\left\{t>0\colon \frac{1}{t}\sum_{i=1}^n{\mathbb E} |X_i|{\mathbf 1}_{\{|X_i|\geq t\}}\leq k\right\}. \end{aligned} $$
(3.1)

and start with an easy upper bound.
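Before doing so, let us illustrate the quantity t(k, X) with a purely instructive computation: suppose that X has independent coordinates such that each |X_i| has the standard exponential distribution. Then

$$\displaystyle \begin{aligned} \sum_{i=1}^n{\mathbb E} |X_i|{\mathbf 1}_{\{|X_i|\geq t\}}=n(t+1)e^{-t}, \end{aligned}$$

so t(k, X) is the smallest t > 0 with \(n(t+1)e^{-t}\leq kt\), which for 1 ≤ k ≤ n∕2 is of order \(\ln (en/k)\) up to universal constants. Proposition 3.1 and Theorem 3.2 below then show that \({\mathbb E}\max _{|I|=k}\sum _{i\in I}|X_i|\) is of order \(k\ln (en/k)\), in accordance with the classical behaviour of sums of the largest exponential order statistics.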

Proposition 3.1

For any random vector X with finite first moment we have

$$\displaystyle \begin{aligned} {\mathbb E} \max_{|I|=k}\sum_{i\in I}|X_i|\leq 2kt(k,X). \end{aligned} $$
(3.2)

Proof

For any t > 0 we have

$$\displaystyle \begin{aligned} \max_{|I|=k}\sum_{i\in I}|X_i|\leq tk+\sum_{i=1}^n|X_i|{\mathbf 1}_{\{|X_i|\geq t\}}.\end{aligned} $$

Taking expectations and using the definition (3.1) of t(k, X) (for any t > t(k, X) the expectation of the second summand is at most kt), we get \({\mathbb E} \max_{|I|=k}\sum_{i\in I}|X_i|\leq 2kt\) for every t > t(k, X); letting t decrease to t(k, X) yields (3.2). □

It turns out that this bound may be reversed for vectors with independent coordinates or, more generally, vectors satisfying the following condition

$$\displaystyle \begin{aligned} {\mathbb P} (|X_i|\ge s, |X_j| \ge t) \le \alpha {\mathbb P}(|X_i|\ge s){\mathbb P}(|X_j|\ge t) \quad \mbox{for all } i\neq j \mbox{ and all } s,t>0.\end{aligned} $$
(3.3)

If α = 1 this means that moduli of coordinates of X are negatively correlated.

Theorem 3.2

Suppose that a random vector X satisfies condition (3.3) with some α ≥ 1. Then there exists a constant c(α) > 0 which depends only on α such that for any 1 ≤ k ≤ n,

$$\displaystyle \begin{aligned} c(\alpha)kt(k,X)\leq {\mathbb E}\max_{|I|=k} \sum_{i\in I} |X_i|\leq 2kt(k,X). \end{aligned}$$

We may take c(α) = (288(5 + 4α)(1 + 2α))−1.

In the case of i.i.d. coordinates two-sided bounds for \({\mathbb E}\max _{|I|=k} \sum _{i\in I} | a_iX_i|\) in terms of an Orlicz norm (related to the distribution of X_i) of the vector \((a_i)_{i\leq n}\) were known before, see [7].

Log-concave vectors with diagonal covariance matrices behave in many aspects like vectors with independent coordinates. This is true also in our case.

Theorem 3.3

Let X be a log-concave random vector with uncorrelated coordinates (i.e. \({\operatorname {Cov}}(X_{i},X_{j})=0\) for i ≠ j). Then for any 1 ≤ k ≤ n,

$$\displaystyle \begin{aligned} ckt(k,X)\leq {\mathbb E}\max_{|I|=k}\sum_{i\in I}|X_i|\leq 2kt(k,X). \end{aligned}$$

In the above statement and in the sequel c and C denote positive universal constants.

The next two examples show that the lower bound cannot hold if n ≫ k and only marginal distributions of X i are log-concave or the coordinates of X are highly correlated.

Example 3.1

Let \(X = (\varepsilon_1 g, \varepsilon_2 g, \ldots, \varepsilon_n g)\), where \(\varepsilon_1, \ldots, \varepsilon_n, g\) are independent, \({\mathbb P}(\varepsilon _i=\pm 1)=1/2\) and g has the normal \({\mathcal N}(0,1)\) distribution. Then \({\operatorname {Cov}} X = {\operatorname {Id}}\) and it is not hard to check that \({\mathbb E}\max _{|I|=k}\sum _{i\in I}|X_i|=k \sqrt {2/\pi }\) and \(t(k, X)\sim \ln ^{1/2} (n/k)\) if k ≤ n∕2.

Example 3.2

Let X = (g, …, g), where \(g\sim {\mathcal N}(0,1)\). Then, as in the previous example, \({\mathbb E}\max _{|I|=k}\sum _{i\in I}|X_i|=k\sqrt {2/\pi }\) and \(t(k, X)\sim \ln ^{1/2} (n/k)\).
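The computations behind both examples are essentially the same and may be sketched as follows. In either case |X_i| = |g| for every i, so

$$\displaystyle \begin{aligned} {\mathbb E}\max_{|I|=k}\sum_{i\in I}|X_i|=k{\mathbb E} |g|=k\sqrt{2/\pi} \quad \mbox{ and }\quad \sum_{i=1}^n{\mathbb E} |X_i|{\mathbf 1}_{\{|X_i|\geq t\}}=n{\mathbb E} |g|{\mathbf 1}_{\{|g|\geq t\}}=n\sqrt{2/\pi}\,e^{-t^2/2}. \end{aligned}$$

Hence t(k, X) is the smallest t with \(n\sqrt {2/\pi }\,e^{-t^2/2}\leq kt\), which is of order \(\ln ^{1/2}(n/k)\) for k ≤ n∕2, so kt(k, X) is indeed much larger than \({\mathbb E}\max _{|I|=k}\sum _{i\in I}|X_i|\) when n ≫ k.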

Question 3.1

Let \(X^{\prime }=(X^{\prime }_1,X^{\prime }_2,\ldots ,X^{\prime }_n)\) be a decoupled version of X, i.e. \(X^{\prime }_i\) are independent and \(X^{\prime }_i\) has the same distribution as X_i. Due to Theorem 3.2 (applied to X′), the assertion of Theorem 3.3 may be stated equivalently as

$$\displaystyle \begin{aligned} {\mathbb E} \max_{|I|=k}\sum_{i\in I}|X_i|\sim {\mathbb E} \max_{|I|=k}\sum_{i\in I}|X^{\prime}_i|. \end{aligned}$$

Is the more general fact true that for any symmetric norm and any log-concave vector X with uncorrelated coordinates

$$\displaystyle \begin{aligned} {\mathbb E}\|X\|\sim {\mathbb E}\|X^{\prime}\|? \end{aligned}$$

Maybe such an estimate holds at least in the case of unconditional log-concave vectors?

We turn our attention to bounding k-maxima of |X i|. This was investigated in [8] (under some strong assumptions on the function \(t\mapsto {\mathbb P}(|X_i|\ge t)\)) and in the weighted i.i.d. setting in [7, 9, 15]. We will give different bounds valid for log-concave vectors, in which we do not have to assume independence, nor any special conditions on the growth of the distribution function of the coordinates of X. To this end we need to define another quantity:

$$\displaystyle \begin{aligned} t^*(p,X):=\inf\bigg\{t>0\colon\ \sum_{i=1}^n{\mathbb P}(|X_i|\geq t)\leq p\bigg\}\quad \mbox{ for }0<p<n. \end{aligned}$$

Theorem 3.4

Let X be a mean zero log-concave n-dimensional random vector with uncorrelated coordinates and 1 ≤ k ≤ n. Then

$$\displaystyle \begin{aligned} {\mathbb E} k\mathit{\text{-}}\max_{i\leq n}|X_i|\geq \frac{1}{2}\mathrm{Med}\Big( k\mathit{\text{-}}\max_{i\leq n}|X_i|\Big) \geq ct^*\bigg(k-\frac 12,X\bigg). \end{aligned}$$

Moreover, if X is additionally unconditional then

$$\displaystyle \begin{aligned} {\mathbb E} k\mathit{\text{-}}\max_{i\leq n}|X_i|\leq Ct^*\bigg(k-\frac 12,X\bigg). \end{aligned}$$
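As an illustration, if X has i.i.d. \({\mathcal N}(0,1)\) coordinates (so X is isotropic, log-concave and unconditional), then

$$\displaystyle \begin{aligned} \sum_{i=1}^n{\mathbb P}(|X_i|\geq t)=n{\mathbb P}(|g|\geq t),\qquad g\sim{\mathcal N}(0,1), \end{aligned}$$

and the standard Gaussian tail estimates show that \(t^*(k-\frac 12,X)\) is of order \(\ln ^{1/2}(en/k)\) for k ≤ n∕2; Theorem 3.4 then recovers the classical fact that the k-th maximum of \(|X_1|,\ldots ,|X_n|\) has mean of order \(\ln ^{1/2}(en/k)\) in this range of k.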

The next theorem provides an upper bound in the general log-concave case.

Theorem 3.5

Let X be a mean zero log-concave n-dimensional random vector with uncorrelated coordinates and 1 ≤ k ≤ n. Then

$$\displaystyle \begin{aligned} {\mathbb P}\bigg(k\mathit{\text{-}}\max_{i\leq n}|X_i|\geq Ct^*\bigg(k-\frac 12, X\bigg)\bigg)\leq 1-c \end{aligned} $$
(3.4)

and

$$\displaystyle \begin{aligned} {\mathbb E} k\mathit{\text{-}}\max_{i\leq n}|X_i|\leq Ct^*\bigg(k-\frac{1}{2}k^{5/6}, X\bigg). \end{aligned} $$
(3.5)

In the isotropic case (i.e. \({\mathbb E} X_i=0\), \({\operatorname {Cov}} X = {\operatorname {Id}}\)) one may show that \(t^*(k/2, X)\sim t^*(k, X)\sim t(k, X)\) for k ≤ n∕2 and \(t^*( p,X)\sim \frac {n- p}{n}\) for p ≥ n∕4 (see Lemma 3.24 below). In particular \(t^*(n-k+1-\frac 12(n-k+1)^{5/6}, X)\sim \frac {k}{n}+n^{-1/6}\) for k ≤ n∕2. This together with the two previous theorems implies the following corollary.
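Before stating it, let us unpack the last equivalence for the reader’s convenience (say for n larger than an absolute constant; small n may be absorbed into the constants). With \(p=n-k+1-\frac 12(n-k+1)^{5/6}\) and 1 ≤ k ≤ n∕2,

$$\displaystyle \begin{aligned} \frac{n-p}{n}=\frac{k-1+\frac{1}{2}(n-k+1)^{5/6}}{n}\sim \frac{k}{n}+n^{-1/6}, \end{aligned}$$

since \(n/2\leq n-k+1\leq n\) gives \(\frac 12(n-k+1)^{5/6}/n\sim n^{-1/6}\), while (k − 1)∕n differs from k∕n by at most \(1/n\leq n^{-1/6}\).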

Corollary 3.6

Let X be an isotropic log-concave n-dimensional random vector and 1 ≤ k ≤ n∕2. Then

$$\displaystyle \begin{aligned} {\mathbb E} k\mathit{\text{-}}{\max}_{i\leq n}|X_i|\sim t^{*}(k,X)\sim t(k,X) \end{aligned}$$

and

$$\displaystyle \begin{aligned} c\frac{k}{n}\leq {\mathbb E} k\mathit{\text{-}}{\min}_{i\leq n}|X_i|={\mathbb E} (n-k+1)\mathit{\text{-}}{\max}_{i\leq n}|X_i| \leq C\left(\frac{k}{n}+n^{-1/6}\right). \end{aligned}$$

If X is additionally unconditional then

$$\displaystyle \begin{aligned} {\mathbb E} k\mathit{\text{-}}{\min}_{i\leq n}|X_i|={\mathbb E} (n-k+1)\mathit{\text{-}}{\max}_{i\leq n}|X_i|\sim \frac{k}{n}. \end{aligned}$$

Question 3.2

Does the second part of Theorem 3.4 hold without the unconditionality assumption? In particular, is it true that in the isotropic log-concave case \({\mathbb E} k\text{-}\min _{i\leq n}|X_i|\sim k/n\) for 1 ≤ k ≤ n∕2?

Notation

Throughout this paper by letters C, c we denote universal positive constants and by C(α), c(α) constants depending only on the parameter α. The values of constants C, c, C(α), c(α) may differ at each occurrence. If we need to fix a value of a constant, we use letters C 0, C 1, … or c 0, c 1, …. We write f ∼ g if cf ≤ g ≤ Cf. For a random variable Z we denote \(\|Z\|{ }_p=({\mathbb E}|Z|{ }^p)^{1/p}\). Recall that a random vector X is called isotropic, if \({\mathbb E} X=0\) and \({\operatorname {Cov}} X={\operatorname {Id}}\).

This note is organised as follows. In Sect. 3.2 we provide a lower bound for the sum of k largest coordinates, which involves the Poincaré constant of a vector. In Sect. 3.3 we use this result to obtain Theorem 3.3. In Sect. 3.4 we prove Theorem 3.2 and provide its application to comparison of weak and strong moments. In Sect. 3.5 we prove the first part of Theorem 3.4 and in Sect. 3.6 we prove the second part of Theorem 3.4, Theorem 3.5, and Lemma 3.24.

3.2 Exponential Concentration

A probability measure μ on \({\mathbb R}^n\) satisfies exponential concentration with constant α > 0 if for any Borel set A with μ(A) ≥ 1∕2,

$$\displaystyle \begin{aligned} 1-\mu(A+uB_2^n)\leq e^{-u/\alpha}\quad \mbox{ for all }u>0. \end{aligned}$$

We say that a random n-dimensional vector satisfies exponential concentration if its distribution has such a property.

It is well known that exponential concentration is implied by the Poincaré inequality

$$\displaystyle \begin{aligned} \mathrm{Var}_\mu f \leq \beta \int |\nabla f|{}^2d\mu \quad \mbox{ for all bounded smooth functions } f\colon{\mathbb R}^n\mapsto {\mathbb R} \end{aligned}$$

and \(\alpha \leq 3\sqrt {\beta }\) (cf. [12, Corollary 3.2]).
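For instance, the standard Gaussian measure \(\gamma _n\) on \({\mathbb R}^n\) satisfies the Poincaré inequality with β = 1 (a classical fact),

$$\displaystyle \begin{aligned} \mathrm{Var}_{\gamma_n} f \leq \int |\nabla f|{}^2d\gamma_n, \end{aligned}$$

so by the quoted bound it satisfies exponential concentration with α ≤ 3, independently of the dimension.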

Obviously, the constant in the exponential concentration is not linearly invariant. Typically one assumes that the vector is isotropic. For our purposes a more natural normalization will be that all coordinates have L 1-norm equal to 1.

The next proposition states that bound (3.2) may be reversed under the assumption that X satisfies the exponential concentration.

Proposition 3.7

Assume that Y = (Y 1, …, Y n) satisfies the exponential concentration with constant α > 0 and \({\mathbb E} |Y_i|\geq 1\) for all i. Then for any sequence \(a=(a_i)_{i=1}^n\) of real numbers and X i := a iY i we have

$$\displaystyle \begin{aligned} {\mathbb E} \max_{|I|=k}\sum_{i\in I}|X_i|\geq \Big(8+64\frac{\alpha}{\sqrt{k}}\Big)^{-1}kt(k,X), \end{aligned}$$

where t(k, X) is given by (3.1).

We begin the proof with a few simple observations.

Lemma 3.8

For any real numbers \(z_1, \ldots, z_n\) and 1 ≤ k ≤ n we have

$$\displaystyle \begin{aligned} \max_{|I|=k}\sum_{i\in I}|z_i|=\int_0^{\infty} \min\bigg\{k,\sum_{i=1}^n{\mathbf 1}_{\{|z_i|\geq s\}}\bigg\}ds. \end{aligned}$$

Proof

Without loss of generality we may assume that z 1 ≥ z 2 ≥… ≥ z n ≥ 0. Then

$$\displaystyle \begin{aligned} \int_0^{\infty} \min\bigg\{k,\sum_{i=1}^n{\mathbf 1}_{\{|z_i|\geq s\}}\bigg\}ds &=\sum_{l=1}^{k-1}\int _{z_{l+1}}^{z_l}lds+\int_0^{z_k}kds =\sum_{l=1}^{k-1}l(z_{l}-z_{l+1})+kz_k \\ &=z_1+\ldots+z_k=\max_{|I|=k}\sum_{i\in I}|z_i|.\end{aligned} $$

Fix a sequence \((X_i)_{i\leq n}\) and define for s ≥ 0,

$$\displaystyle \begin{aligned} N(s):=\sum_{i=1}^n {\mathbf 1}_{\{|X_i|\geq s\}}. \end{aligned} $$
(3.6)

Corollary 3.9

For any k = 1, …, n,

$$\displaystyle \begin{aligned} {\mathbb E}\max_{|I|=k}\sum_{i\in I}|X_i|=\int_0^{\infty}\sum_{l=1}^k{\mathbb P}(N(s)\geq l)ds, \end{aligned}$$

and for any t > 0,

$$\displaystyle \begin{aligned} {\mathbb E}\sum_{i=1}^n|X_i|{\mathbf 1}_{\{|X_i|\geq t\}}=t{\mathbb E} N(t)+\int_{t}^{\infty} \sum_{l=1}^{\infty}{\mathbb P}(N(s)\geq l)ds. \end{aligned}$$

In particular

$$\displaystyle \begin{aligned} {\mathbb E}\sum_{i=1}^n|X_i|{\mathbf 1}_{\{|X_i|\geq t\}} \kern-1pt\leq\kern-1pt {\mathbb E}\max_{|I|=k}\sum_{i\in I}|X_i|\kern-1pt+\kern-1pt \sum_{l=k+1}^{\infty} \left(t{\mathbb P}(N(t)\geq l)\kern-1pt+\kern-1pt\int_t^{\infty}{\mathbb P}(N(s)\kern-1pt\geq\kern-1pt l)ds\right). \end{aligned}$$

Proof

We have

$$\displaystyle \begin{aligned} \int_0^{\infty}\sum_{l=1}^k{\mathbb P}(N(s)\geq l)ds&=\int_0^{\infty}{\mathbb E}\min\{k,N(s)\}ds ={\mathbb E}\int_0^{\infty}\min\{k,N(s)\}ds \\ &={\mathbb E}\max_{|I|=k}\sum_{i\in I}|X_i|, \end{aligned} $$

where the last equality follows by Lemma 3.8.

Moreover,

$$\displaystyle \begin{aligned} t{\mathbb E} N(t)+\int_{t}^{\infty} \sum_{l=1}^{\infty}{\mathbb P}(N(s)\geq l)ds &=t{\mathbb E} N(t)+\int_{t}^{\infty} {\mathbb E} N(s)ds \\ &={\mathbb E}\sum_{i=1}^n \left(t{\mathbf 1}_{\{|X_i|\geq t\}}+\int_{t}^{\infty}{\mathbf 1}_{\{|X_i|\geq s\}}ds\right) \\ &={\mathbb E}\sum_{i=1}^n|X_i|{\mathbf 1}_{\{|X_i|\geq t\}}. \end{aligned} $$

The last part of the assertion easily follows, since

$$\displaystyle \begin{aligned} t{\mathbb E} N(t)=t\sum_{l=1}^n{\mathbb P}(N(t)\geq l) \leq \int_0^t \sum_{l=1}^k{\mathbb P}(N(s)\geq l)ds+\sum_{l=k+1}^{\infty} t{\mathbb P}(N(t)\geq l).\end{aligned} $$

Proof of Proposition 3.7

To shorten the notation put t_k := t(k, X). Without loss of generality we may assume that \(a_1\geq a_2\geq \ldots \geq a_n\geq 0\) and \(a_{\lceil k/4\rceil }=1\). Observe first that

$$\displaystyle \begin{aligned} {\mathbb E} \max_{|I|=k}\sum_{i\in I}|X_i|\geq \sum_{i=1}^{\lceil k/4\rceil}a_i{\mathbb E} |Y_i|\geq k/4, \end{aligned}$$

so we may assume that \(t_k\geq 16\alpha /\sqrt {k}\).

Let μ be the law of Y  and

$$\displaystyle \begin{aligned} A:=\Bigg\{y\in {\mathbb R}^n\colon\ \sum_{i=1}^n{\mathbf 1}_{\{|a_iy_i|\geq \frac{1}{2}t_k\}}< \frac{k}{2}\Bigg\}. \end{aligned}$$

We have

$$\displaystyle \begin{aligned} {\mathbb E}\max_{|I|=k}\sum_{i\in I}|X_i| \geq \frac{k}{4}t_k{\mathbb P}\Bigg(\sum_{i=1}^n{\mathbf 1}_{\{|a_iY_i|\geq \frac{1}{2}t_k\}}\geq \frac{k}{2}\Bigg) = \frac{k}{4}t_k(1-\mu(A)), \end{aligned}$$

so we may assume that μ(A) ≥ 1∕2.

Observe that if y ∈ A and \(\sum _{i=1}^n {\mathbf 1}_{\{|a_iz_i|\geq s\}}\geq l >k\) for some s ≥ t k then

$$\displaystyle \begin{aligned} \sum_{i=1}^n(z_i-y_i)^2\geq \sum_{i=\lceil k/4\rceil}^n(a_iz_i-a_iy_i)^2\geq (l-3k/4)(s-t_k/2)^2>\frac{ls^2}{16}. \end{aligned}$$

Thus we have

$$\displaystyle \begin{aligned} {\mathbb P}(N(s)\geq l)\leq 1-\mu\bigg(A+\frac{s\sqrt{l}}{4}B_2^n\bigg)\leq e^{-\frac{s\sqrt{l}}{4\alpha}} \quad \mbox{ for }l > k,\ s\geq t_k.\end{aligned} $$

Therefore

$$\displaystyle \begin{aligned} \int_{t_k}^{\infty}{\mathbb P}(N(s)\geq l)ds \leq \int_{t_k}^{\infty} e^{-\frac{s\sqrt{l}}{4\alpha}}ds =\frac{4\alpha}{\sqrt{l}}e^{-\frac{t_k\sqrt{l}}{4\alpha}}\quad \mbox{for }l > k, \end{aligned}$$

and

$$\displaystyle \begin{aligned} &\sum_{l=k+1}^{\infty} \bigg(t_k{\mathbb P}(N(t_k)\geq l)+\int_{t_k}^{\infty}{\mathbb P}(N(s)\geq l)ds\bigg) \leq \sum_{l=k+1}^{\infty}\bigg(t_k+\frac{4\alpha}{\sqrt{l}}\bigg) e^{-\frac{t_k\sqrt{l}}{4\alpha}}\\ & \quad \leq \bigg(t_k+\frac{4\alpha}{\sqrt{k+1}}\bigg)\int_k^{\infty} e^{-\frac{t_k\sqrt{u}}{4\alpha}}du \leq \bigg(t_k+\frac{4\alpha}{\sqrt{k+1}}\bigg)e^{-\frac{t_k\sqrt{k}}{4\sqrt2 \alpha}} \int_k^{\infty} e^{-\frac{t_k\sqrt{u-k}}{4\sqrt2 \alpha}}du\\ & \quad =\bigg(t_k+\frac{4\alpha}{\sqrt{k+1}}\bigg)\frac{64\alpha^2}{t_k^2}e^{-\frac{t_k\sqrt{k}}{4\sqrt2 \alpha}} \leq \Big(t_k+\frac{1}{4}t_k\Big)\frac{k}{4}\leq \frac{1}{2}kt_k, \end{aligned} $$

where to get the next-to-last inequality we used the fact that \(t_k\geq 16\alpha /\sqrt {k}\).

Hence Corollary 3.9 and the definition of t_k yield

$$\displaystyle \begin{aligned} kt_k&\leq {\mathbb E}\sum_{i=1}^n|X_i|{\mathbf 1}_{\{|X_i|\geq t_k\}} \\ &\leq {\mathbb E}\max_{|I|=k}\sum_{i\in I}|X_i|+ \sum_{l=k+1}^{\infty} \bigg(t_k{\mathbb P}(N(t_k)\geq l)+\int_{t_k}^{\infty}{\mathbb P}(N(s)\geq l)ds\bigg) \\ &\leq {\mathbb E}\max_{|I|=k}\sum_{i\in I}|X_i|+\frac{1}{2}kt_k, \end{aligned} $$

so \({\mathbb E}\max _{|I|=k}\sum _{i\in I}|X_i|\geq \frac {1}{2}kt_k\). □

We finish this section with a simple fact that will be used in the sequel.

Lemma 3.10

Suppose that a measure μ satisfies exponential concentration with constant α. Then for any c ∈ (0, 1) and any Borel set A with μ(A) > c we have

$$\displaystyle \begin{aligned} 1-\mu(A+uB_2^n)\leq \exp\bigg(-\Big(\frac{u}{\alpha}+\ln c\Big)_+\bigg)\quad \mathit{\mbox{ for }}u\geq 0.\end{aligned} $$

Proof

Fix r > 0 and let \(D:={\mathbb R}^n\setminus (A+rB_2^n)\). Observe that \(D+rB_2^n\) has an empty intersection with A, so if μ(D) ≥ 1∕2 then

$$\displaystyle \begin{aligned} c<\mu(A)\leq 1-\mu(D+rB_2^n)\leq e^{-r/\alpha}, \end{aligned}$$

and \(r<\alpha \ln (1/c)\). Hence \(\mu (A+\alpha \ln (1/c)B_2^n)\geq 1/2\), therefore for s ≥ 0,

$$\displaystyle \begin{aligned} 1-\mu(A+(s+\alpha\ln(1/c))B_2^n)= 1-\mu((A+\alpha\ln(1/c)B_2^n)+sB_2^n)\leq e^{-s/\alpha}, \end{aligned}$$

and the assertion easily follows. □

3.3 Sums of Largest Coordinates of Log-Concave Vectors

We will use the regular growth of moments of norms of log-concave vectors multiple times. By [4, Theorem 2.4.6], if \(f:{\mathbb R}^n \to {\mathbb R}\) is a seminorm, and X is log-concave, then

$$\displaystyle \begin{aligned} ({\mathbb E} f( X)^p)^{1/p}\leq C_1\frac{p}{q}({\mathbb E} f( X)^q)^{1/q} \quad \text{for } p\geq q\geq 1, \end{aligned} $$
(3.7)

where C 1 is a universal constant.
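To see that the linear dependence on p in (3.7) cannot be improved, consider for instance a standard exponential random variable \({\mathcal E}\) (which is log-concave) and the norm f(x) = |x| on \({\mathbb R}\): then \({\mathbb E} f({\mathcal E})=1\), while

$$\displaystyle \begin{aligned} ({\mathbb E} f({\mathcal E})^p)^{1/p}=\Gamma(p+1)^{1/p}\geq \frac{p}{e}\quad \mbox{ for }p\geq 1, \end{aligned}$$

so the left-hand side of (3.7) indeed grows linearly in p.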

We will also apply a few times the functional version of the Grünbaum inequality (see [14, Lemma 5.4]) which states that

$$\displaystyle \begin{aligned} {\mathbb P}(Z\geq 0)\geq \frac{1}{e} \quad \mbox{ for any mean-zero log-concave random variable Z.} \end{aligned} $$
(3.8)
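The constant 1∕e in (3.8) is optimal: for instance, if \(Z={\mathcal E}-1\), where \({\mathcal E}\) is a standard exponential variable, then Z is a mean-zero log-concave random variable and

$$\displaystyle \begin{aligned} {\mathbb P}(Z\geq 0)={\mathbb P}({\mathcal E}\geq 1)=e^{-1}. \end{aligned}$$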

Let us start with a few technical lemmas. The first one will be used to reduce the proofs of Theorem 3.3 and of the lower bound in Theorem 3.4 to the symmetric case.

Lemma 3.11

Let X be a log-concave n-dimensional vector and let X′ be an independent copy of X. Then for any 1 ≤ k ≤ n,

$$\displaystyle \begin{aligned} {\mathbb E}\max_{|I|=k}\sum_{i\in I}|X_i-X_i^{\prime}|\leq 2{\mathbb E}\max_{|I|=k}\sum_{i\in I}|X_i|, \end{aligned}$$
$$\displaystyle \begin{aligned} t(k,X)\leq et(k,X-X^{\prime})+\frac{2}{k}\max_{|I|=k}\sum_{i\in I}{\mathbb E} |X_i|, \end{aligned} $$
(3.9)

and

$$\displaystyle \begin{aligned} t^*(2k,X-X^{\prime})\leq 2t^*(k,X). \end{aligned} $$
(3.10)

Proof

The first estimate follows by the easy bound

$$\displaystyle \begin{aligned} {\mathbb E}\max_{|I|=k}\sum_{i\in I}|X_i-X_i^{\prime}|\leq {\mathbb E}\max_{|I|=k}\sum_{i\in I}|X_i|+ {\mathbb E}\max_{|I|=k}\sum_{i\in I}|X_i^{\prime}|=2{\mathbb E}\max_{|I|=k}\sum_{i\in I}|X_i|. \end{aligned}$$

To get the second bound we may and will assume that \({\mathbb E} |X_1|\geq {\mathbb E} |X_2|\geq \ldots \geq {\mathbb E} |X_n|\). Let us define \(Y:=X-{\mathbb E} X\), \(Y^{\prime }:=X^{\prime }-{\mathbb E} X\) and \(M:=\frac {1}{k}\sum _{i=1}^k{\mathbb E}|X_i|\geq \max _{i\geq k}{\mathbb E}|X_i|\). Obviously

$$\displaystyle \begin{aligned} \sum_{i=1}^k{\mathbb E} |X_i|{\mathbf 1}_{\{|X_i|\geq t\}}\leq kM \quad \mbox{for }t\geq 0. \end{aligned} $$
(3.11)

We have \({\mathbb E} Y_i=0\), thus \({\mathbb P}(Y_i\leq 0)\geq 1/e\) by (3.8). Hence

$$\displaystyle \begin{aligned} {\mathbb E} Y_i{\mathbf 1}_{\{Y_i > t\}}\leq e{\mathbb E} Y_i{\mathbf 1}_{\{Y_i > t,Y_i^{\prime}\leq 0\}} \leq e{\mathbb E} |Y_i-Y_i^{\prime}|{\mathbf 1}_{\{Y_i-Y_i^{\prime} > t\}}=e{\mathbb E} |X_i-X_i^{\prime}|{\mathbf 1}_{\{X_i-X_i^{\prime} > t\}} \end{aligned}$$

for t ≥ 0. In the same way we show that

$$\displaystyle \begin{aligned} {\mathbb E} |Y_i|{\mathbf 1}_{\{Y_i < -t\}}\leq e{\mathbb E} |Y_i|{\mathbf 1}_{\{Y_i <-t,Y_i^{\prime}\geq 0\}} \leq e{\mathbb E} |X_i-X_i^{\prime}|{\mathbf 1}_{\{X_i^{\prime}-X_i > t\}} \end{aligned}$$

Therefore

$$\displaystyle \begin{aligned} {\mathbb E} |Y_i|{\mathbf 1}_{\{|Y_i| > t\}}\leq e{\mathbb E} |X_i-X_i^{\prime}|{\mathbf 1}_{\{|X_i-X_i^{\prime}| > t\}}. \end{aligned}$$

We have

$$\displaystyle \begin{aligned} & \sum_{i=k+1}^n{\mathbb E} |X_i|{\mathbf 1}_{\{|X_i| > e t(k, X-X^{\prime})+M\}} \leq \sum_{i=k+1}^n{\mathbb E} |X_i|{\mathbf 1}_{\{|Y_i| > et(k,X-X^{\prime})\}} \\ & \quad \leq \sum_{i=k+1}^n{\mathbb E} |Y_i|{\mathbf 1}_{\{|Y_i| > t(k,X-X^{\prime})\}} +\sum_{i=k+1}^n|{\mathbb E} X_i|{\mathbb P}(|Y_i| > et(k,X-X^{\prime})) \\ &\quad \leq e\sum_{i=1}^n{\mathbb E} |X_i-X_i^{\prime}|{\mathbf 1}_{\{|X_i-X_i^{\prime}| > t(k,X-X^{\prime})\}} +M\sum_{i=1}^n{\mathbb P}(|Y_i| > et(k,X-X^{\prime})) \\ &\quad \leq ekt(k,X-X^{\prime}) +M\sum_{i=1}^n \big(e t(k,X-X^{\prime}) \big)^{-1} {\mathbb E} |Y_i| {\mathbf 1}_{\{|Y_i| > e t(k,X-X^{\prime})\}} \\ &\quad \leq ekt(k,X-X^{\prime})+Mt(k,X-X^{\prime})^{-1}\sum_{i=1}^n {\mathbb E} |X_i-X_i^{\prime}| {\mathbf 1}_{\{|X_i-X_i^{\prime}| > t(k,X-X^{\prime})\}} \\ &\quad \leq ekt(k,X-X^{\prime})+kM. \end{aligned} $$

Together with (3.11) we get

$$\displaystyle \begin{aligned} \sum_{i=1}^n{\mathbb E} |X_i|{\mathbf 1}_{\{|X_i| > et(k,X-X^{\prime})+M\}}\leq k(et(k,X-X^{\prime})+2M) \end{aligned}$$

and (3.9) easily follows.

In order to prove (3.10), note that for u > 0,

$$\displaystyle \begin{aligned} {\mathbb P}(|X_i-X_i^{\prime}|\ge 2u) \le {\mathbb P}\big(\max\{|X_i|, |X_i^{\prime}|\} \ge u\big) \le 2{\mathbb P}\big(|X_i| \ge u\big), \end{aligned} $$

thus the last part of the assertion follows by the definition of the parameters t^*. □

Lemma 3.12

Suppose that V  is a real symmetric log-concave random variable. Then for any t > 0 and λ ∈ (0, 1],

$$\displaystyle \begin{aligned} {\mathbb E} |V|{\mathbf 1}_{\{|V|\geq t\}} \leq \frac{4}{\lambda}{\mathbb P}(|V|\geq t)^{1-\lambda} {\mathbb E} |V|{\mathbf 1}_{\{|V|\geq \lambda t\}}. \end{aligned}$$

Moreover, if \({\mathbb P}(|V|\ge t)\le 1/4\) , then \({\mathbb E} |V|{\mathbf 1}_{\{|V|\geq t\}} \leq 4t {\mathbb P}(|V|\geq t).\)

Proof

Without loss of generality we may assume that \({\mathbb P}(|V|\geq t)\leq 1/4\) (otherwise the first estimate is trivial).

Observe that \({\mathbb P}(|V|\geq s)=\exp (-N(s))\) where \(N\colon [0,\infty ) \to [0,\infty ]\) is convex and N(0) = 0. In particular

$$\displaystyle \begin{aligned} {\mathbb P}(|V|\geq \gamma t)\leq {\mathbb P}(|V|\geq t)^{ \gamma}\quad \mbox{for } \gamma>1 \end{aligned}$$

and

$$\displaystyle \begin{aligned} {\mathbb P}(|V|\geq \gamma t)\geq {\mathbb P}(|V|\geq t)^{ \gamma}\quad \mbox{for } \gamma\in [0,1]. \end{aligned}$$

We have

$$\displaystyle \begin{aligned} {\mathbb E} |V|{\mathbf 1}_{\{|V|\geq t\}} &\leq \sum_{k=0}^{\infty} 2^{k+1}t{\mathbb P}(|V|\geq 2^k t) \leq 2t \sum_{k=0}^{\infty} 2^{k}{\mathbb P}(|V|\geq t)^{2^k} \\ &\leq 2t {\mathbb P}(|V|\geq t)\sum_{k=0}^{\infty} 2^{k}4^{1-2^k}\leq 4t {\mathbb P}(|V|\geq t). \end{aligned} $$

This implies the second part of the lemma.

To conclude the proof of the first bound it is enough to observe that

$$\displaystyle \begin{aligned} {\mathbb E} |V|{\mathbf 1}_{\{|V|\geq \lambda t\}}\geq \lambda t{\mathbb P}(|V|\geq \lambda t)\geq \lambda t{\mathbb P}(|V|\geq t)^{\lambda}.\end{aligned} $$

Proof of Theorem 3.3

By Proposition 3.1 it is enough to show the lower bound. By Lemma 3.11 we may assume that X is symmetric. We may also obviously assume that \(\|X_i\|{ }_2^2={\mathbb E} X_i^2>0\) for all i.

Let \(Z = (Z_1, \ldots, Z_n)\), where \(Z_i = X_i/\|X_i\|{}_2\). Then Z is log-concave, isotropic and, by (3.7), \({\mathbb E} |Z_i|\geq 1/ (2C_1)\) for all i. Set \(Y := 2C_1Z\). Then \(X_i = a_iY_i\) and \({\mathbb E} |Y_i|\geq 1\). Moreover, since any m-dimensional projection of Z is a log-concave, isotropic m-dimensional vector, we know by the result of Lee and Vempala [13] that it satisfies the exponential concentration with a constant \(Cm^{1/4}\). (In fact an easy modification of the proof below shows that for our purposes it would be enough to have exponential concentration with a constant \(Cm^{\gamma }\) for some γ < 1∕2, so one may also use Eldan’s result [6], which gives such estimates for any γ > 1∕3.) So any m-dimensional projection of Y satisfies exponential concentration with constant \(C_2m^{1/4}\).

Let us fix k and set t := t(k, X), then (since X i has no atoms)

$$\displaystyle \begin{aligned} \sum_{i=1}^n{\mathbb E} |X_i|{\mathbf 1}_{\{|X_i|\geq t\}}=kt. \end{aligned} $$
(3.12)

For l = 1, 2, … define

$$\displaystyle \begin{aligned} I_l:=\{i\in [n]\colon\ \beta^{l-1}\geq {\mathbb P}(|X_i|\geq t)\geq \beta^l\}, \end{aligned}$$

where β = 2−8. By (3.12) there exists l such that

$$\displaystyle \begin{aligned} \sum_{i\in I_l}{\mathbb E} |X_i|{\mathbf 1}_{\{|X_i|\geq t\}}\geq kt2^{-l}. \end{aligned}$$

Let us consider three cases.

  1. (1)

    l = 1 and |I 1|≤ k. Then

    $$\displaystyle \begin{aligned} {\mathbb E}\max_{|I|=k}\sum_{i\in I}|X_i|\geq \sum_{i\in I_1}{\mathbb E} |X_i|{\mathbf 1}_{\{|X_i|\geq t\}}\geq \frac{1}{2}kt. \end{aligned}$$
  2. (2)

    l = 1 and |I 1| > k. Choose J ⊂ I 1 of cardinality k. Then

    $$\displaystyle \begin{aligned} {\mathbb E}\max_{|I|=k}\sum_{i\in I}|X_i|\geq \sum_{i\in J}{\mathbb E}|X_i|\geq \sum_{i\in J}t{\mathbb P}(|X_i|\geq t)\geq \beta kt. \end{aligned}$$
  3. (3)

    l > 1. By Lemma 3.12 (applied with λ = 1∕8) we have

    $$\displaystyle \begin{aligned} \sum_{i\in I_l}{\mathbb E} |X_i|{\mathbf 1}_{\{|X_i|\geq t/8\}}\geq \frac{1}{32}\beta^{-7(l-1)/8}\sum_{i\in I_l}{\mathbb E} |X_i|{\mathbf 1}_{\{|X_i|\geq t\}} \geq \frac{1}{32}\beta^{-7(l-1)/8}2^{-l}kt.\end{aligned} $$
    (3.13)

    Moreover for i ∈ I l, \({\mathbb P}(|X_i|\geq t)\leq \beta ^{l-1}\leq 1/4\), so the second part of Lemma 3.12 yields

    $$\displaystyle \begin{aligned} 4t|I_l|\beta^{l-1}\geq \sum_{i\in I_l}{\mathbb E} |X_i|{\mathbf 1}_{\{|X_i|\geq t\}}\geq kt2^{-l} \end{aligned}$$

    and \(|I_l|\geq \beta^{1-l}2^{-l-2}k = 2^{7l-10}k \geq k\).

Set \(k^{\prime } := \beta^{-7l/8}2^{-l}k = 2^{6l}k\). If \(k^{\prime }\geq |I_l|\) then, using (3.13), we estimate

$$\displaystyle \begin{aligned} {\mathbb E}\max_{|I|=k}\sum_{i\in I}|X_i|&\geq \frac{k}{|I_l|}\sum_{ i\in I_l}{\mathbb E} |X_i| \geq \beta^{7l/8}2^l\sum_{i\in I_l}{\mathbb E} |X_i|{\mathbf 1}_{\{|X_i|\geq t/8\}}\geq \frac{1}{32}\beta^{7/8}kt \\ &=2^{-12}kt. \end{aligned} $$

Otherwise set \(X^{\prime }=(X_i)_{i\in I_l}\) and \(Y^{\prime }=(Y_i)_{i\in I_l}\). By (3.12) we have

$$\displaystyle \begin{aligned} kt\geq \sum_{i\in I_l}{\mathbb E} |X_i|{\mathbf 1}_{\{|X_i|\geq t\}}\geq |I_l|t\beta^{l}, \end{aligned}$$

so \(|I_l|\leq k\beta^{-l}\) and \(Y^{\prime }\) satisfies exponential concentration with constant \(\alpha^{\prime } = C_2k^{1/4}\beta^{-l/4}\). Estimate (3.13) yields

$$\displaystyle \begin{aligned} \sum_{i\in I_l}{\mathbb E} |X_i|{\mathbf 1}_{\{|X_i|\geq 2^{-12}t\}}\geq \sum_{i\in I_l}{\mathbb E} |X_i|{\mathbf 1}_{\{|X_i|\geq t/8\}}\geq 2^{-12}k^{\prime}t, \end{aligned}$$

so \(t(k^{\prime }, X^{\prime })\geq 2^{-12}t\). Moreover, by Proposition 3.7 we have (since \(k^{\prime }\leq |I_l|\))

$$\displaystyle \begin{aligned} {\mathbb E}\max_{I\subset I_l,|I|=k^{\prime}}\sum_{i\in I}|X_i|\geq \frac{1}{8+64\alpha^{\prime}/\sqrt{k^{\prime}}}k^{\prime}t(k^{\prime},X^{\prime}). \end{aligned}$$

To conclude observe that

$$\displaystyle \begin{aligned} \frac{\alpha^{\prime}}{\sqrt{k^{\prime}}}= C_2 2^{- l}k^{- 1/4}\leq \frac{C_2}4 \end{aligned}$$

and since \(k^{\prime }\geq k\),

$$\displaystyle \begin{aligned} {\mathbb E}\max_{|I|=k}\sum_{i\in I}|X_i|\geq \frac{k}{k^{\prime}}{\mathbb E}\max_{I\subset I_l,|I|=k^{\prime}}\sum_{i\in I}|X_i| \geq \frac{1}{ 8+16C_2}2^{-12}tk.\end{aligned} $$

3.4 Vectors Satisfying Condition (3.3)

Proof of Theorem 3.2

By Proposition 3.1 we need to show only the lower bound. Assume first that variables X i have no atoms and k ≥ 4(1 + α).

Let t_k = t(k, X). Then \({\mathbb E} \sum _{i=1}^n |X_i| {\mathbf 1}_{\{|X_i| \ge t_k\}} = kt_k \). Note that (3.3) implies that for all i ≠ j we have

$$\displaystyle \begin{aligned} {\mathbb E} |X_iX_j| {\mathbf 1}_{\{|X_i|\ge t_k, |X_j|\ge t_k\}} \le \alpha {\mathbb E} |X_i| {\mathbf 1}_{\{|X_i|\ge t_k\} }{\mathbb E} |X_j|{\mathbf 1}_{\{|X_j|\ge t_k\} }. \end{aligned} $$
(3.14)

We may assume that \({\mathbb E} \max _{|I|=k} \sum _{i\in I} |X_i| \le \frac 16 kt_k\), because otherwise the lower bound holds trivially.

Let us define

$$\displaystyle \begin{aligned} Y:= \sum_{i=1}^n |X_i| {\mathbf 1}_{\{kt_k \ge |X_i| \ge t_k\}} \quad \mbox{ and }\quad A:=({\mathbb E} Y^2)^{1/2}. \end{aligned} $$

Since

$$\displaystyle \begin{aligned} {\mathbb E} \max_{|I|=k} \sum_{i\in I} |X_i| \ge {\mathbb E} \bigg[\frac{1}{2}kt_k {\mathbf 1}_{\{Y\ge kt_k/2\}}\bigg] = \frac{1}{2}kt_k {\mathbb P}\bigg(Y\ge \frac{kt_k}{2}\bigg), \end{aligned} $$

it suffices to bound below the probability that Y ≥ kt k∕2 by a constant depending only on α.

We have

$$\displaystyle \begin{aligned} A^2 & = {\mathbb E} Y^2 \le \sum_{i=1}^n {\mathbb E} X_i^2{\mathbf 1}_{\{kt_k\ge |X_i|\ge t_k\}} + \sum_{i\neq j }{\mathbb E} |X_iX_j| {\mathbf 1}_{\{|X_i|\ge t_k, |X_j|\ge t_k\}} \\ &\operatorname*{\le}^{\mbox{{(3.14)}}} kt_k{\mathbb E} Y + \alpha \sum_{i\neq j } {\mathbb E} |X_i| {\mathbf 1}_{\{|X_i|\ge t_k \}}{\mathbb E} |X_j|{\mathbf 1}_{\{|X_j|\ge t_k\}} \\ & \le kt_kA + \alpha \bigg( \sum_{i=1}^n {\mathbb E}|X_i|{\mathbf 1}_{\{|X_i|\ge t_k\}} \bigg)^2 \le \frac 12 (k^2t_k^2+A^2) + \alpha k^2t_k^2. \end{aligned} $$

Therefore \(A^2 \le (1+2\alpha )k^2t_k^2\) and for any l ≥ k∕2 we have

$$\displaystyle \begin{aligned} \notag {\mathbb E} Y{\mathbf 1}_{\{Y\ge kt_k/2\}} & \le lt_k{\mathbb P}(Y\ge kt_k/2) + \frac{1}{lt_k}{\mathbb E} Y^2 \\ {} & \le lt_k{\mathbb P}(Y\ge kt_k/2) + (1+2\alpha)k^2l^{-1}t_k. \end{aligned} $$
(3.15)

By Corollary 3.9 we have (recall definition (3.6))

$$\displaystyle \begin{aligned} \notag \sum_{i=1}^n {\mathbb E}|X_i|{\mathbf 1}_{\{|X_i|\ge kt_k\}} & \le {\mathbb E} \max_{|I|=k}\sum_{i\in I}|X_i|\\ & \quad + \sum_{l=k+1}^{\infty}\left(kt_k{\mathbb P}(N(kt_k)\geq l)+\int_{kt_k}^{\infty}{\mathbb P}(N(s)\geq l)ds\right) \\ \notag & \le \frac 16 kt_k +\sum_{l=k+1}^{\infty} \left(kt_k {\mathbb E} N(kt_k)^2 l^{-2}+\int_{kt_k}^{\infty} {\mathbb E} N(s)^2 l^{-2}ds\right) \\ {} & \le \frac 16 kt_k +\frac 1k \left(kt_k {\mathbb E} N(kt_k)^2+\int_{kt_k}^{\infty} {\mathbb E} N(s)^2 ds\right). \end{aligned} $$
(3.16)

Assumption (3.3) implies that

$$\displaystyle \begin{aligned} {\mathbb E} N(s)^2 &= \sum_{i=1}^n {\mathbb P}(|X_i|\ge s) + \sum_{i\neq j} {\mathbb P}(|X_i|\ge s, |X_j|\ge s) \\ &\le \sum_{i=1}^n {\mathbb P}(|X_i|\ge s) + \alpha\left(\sum_{i=1}^n {\mathbb P}(|X_i|\ge s)\right)^2. \end{aligned} $$

Moreover for s ≥ kt k we have

$$\displaystyle \begin{aligned} \sum_{i=1}^n {\mathbb P}(|X_i|\ge s) \le \frac{1}{s}\sum_{i=1}^n {\mathbb E}|X_i| {\mathbf 1}_{\{|X_i|\ge s\}} \le \frac{kt_k}{s}\le 1, \end{aligned}$$

so

$$\displaystyle \begin{aligned} {\mathbb E} N(s)^2 \le (1+\alpha) \sum_{i=1}^n {\mathbb P}(|X_i|\ge s) \quad \mbox{ for }s\geq kt_k. \end{aligned}$$

Thus

$$\displaystyle \begin{aligned} kt_k{\mathbb E} N(kt_k)^2 \le kt_k (1+\alpha) \sum_{i=1}^n{\mathbb P} (|X_i| \ge kt_k) \le (1+\alpha)\sum_{i=1}^n {\mathbb E}|X_i|{\mathbf 1}_{\{|X_i|\ge kt_k\}}, \end{aligned}$$

and

$$\displaystyle \begin{aligned} \int_{kt_k}^{\infty} {\mathbb E} N(s)^2 ds \le (1+\alpha )\sum_{i=1}^n\int_{kt_k}^{\infty} {\mathbb P} (|X_i|\ge s)ds \le (1+\alpha)\sum_{i=1}^n {\mathbb E}|X_i|{\mathbf 1}_{\{|X_i|\ge kt_k\}}. \end{aligned}$$

This together with (3.16) and the assumption that k ≥ 4(1 + α) implies

$$\displaystyle \begin{aligned} \sum_{i=1}^n {\mathbb E}|X_i|{\mathbf 1}_{\{|X_i|\ge kt_k\}} \leq \frac{1}{3}kt_k \end{aligned}$$

and

$$\displaystyle \begin{aligned} {\mathbb E} Y=\sum_{i=1}^n {\mathbb E}|X_i|{\mathbf 1}_{\{|X_i|\ge t_k\}}-\sum_{i=1}^n {\mathbb E}|X_i|{\mathbf 1}_{\{|X_i|\ge kt_k\}}\geq \frac{2}{3}kt_k. \end{aligned}$$

Therefore

$$\displaystyle \begin{aligned} {\mathbb E} Y{\mathbf 1}_{\{Y\ge kt_k/2\}}\geq {\mathbb E} Y-\frac{1}{2}kt_k\geq \frac{1}{6}kt_k. \end{aligned}$$

This applied to (3.15) with l = (12 + 24α)k gives us \({\mathbb P}(Y\ge kt_k/2)\ge ( 144+288\alpha )^{-1}\) and in consequence

$$\displaystyle \begin{aligned} {\mathbb E} \max_{|I|=k} \sum_{i\in I} |X_i| \ge \frac{1}{ 288(1+2\alpha)}kt(k,X). \end{aligned}$$

Since \(k\mapsto kt(k, X)\) is non-decreasing, in the case \(k \leq \lceil 4(1 + \alpha)\rceil =: k_0\) we have

$$\displaystyle \begin{aligned} {\mathbb E} \max_{|I|=k} \sum_{i\in I}|X_i| &\ge \frac{k}{k_0} {\mathbb E} \max_{|I|=k_0} \sum_{i\in I}|X_i| \ge \frac{k}{5+4\alpha} \cdot \frac{1}{ 288(1+2\alpha)}k_0t(k_0,X) \\ & \ge \frac{1}{ 288(5+4\alpha)(1+2\alpha)}kt(k,X). \end{aligned} $$

The last step is to drop the assumption that the variables X_i have no atoms. Note that both assumption (3.3) and the lower bound depend only on \((|X_i|)_{i=1}^n\), so we may assume that the X_i are nonnegative almost surely. Consider \(X^{\varepsilon }:=(X_i +\varepsilon Y_i)_{i=1}^n\), where Y_1, …, Y_n are i.i.d. nonnegative r.v.’s with \({\mathbb E} Y_i <\infty \) and a density g, independent of X. Then for every s, t > 0 we have (observe that (3.3) holds also for s < 0 or t < 0):

$$\displaystyle \begin{aligned} &{\mathbb P}(X_i^{\varepsilon} \ge s, X_j^{\varepsilon} \ge t)\\ & \quad = \int_{0}^{\infty} \int_{0}^{\infty} {\mathbb P}(X_i +\varepsilon y_i \ge s, \ X_j +\varepsilon y_j \ge t) g(y_i)g(y_j)dy_idy_j\\ & \quad \operatorname*{\le}^{\mbox{{(3.3)}}} \alpha \int_{0}^{\infty} \int_{0}^{ \infty} {\mathbb P}(X_i \ge s - \varepsilon y_i) {\mathbb P}(X_j \ge t- \varepsilon y_j) g(y_i)g(y_j)dy_idy_j\\ & \quad =\alpha{\mathbb P}(X_i^{\varepsilon} \ge s){\mathbb P}( X_j^{\varepsilon} \ge t). \end{aligned} $$

Thus X ε satisfies assumption (3.3) and has the density function for every ε > 0. Therefore for all natural k we have

$$\displaystyle \begin{aligned} {\mathbb E} \max_{|I|=k} \sum_{i\in I} X_i^{\varepsilon} \ge c(\alpha)kt(k, X^{\varepsilon}) \ge c(\alpha)kt(k, X). \end{aligned} $$

Clearly, \({\mathbb E} \max _{|I|=k} \sum _{i\in I} X_i^{\varepsilon } \to {\mathbb E} \max _{|I|=k} \sum _{i\in I} X_i\) as ε → 0, so the lower bound holds in the case of arbitrary X satisfying (3.3). □

We may use Theorem 3.2 to obtain a comparison of weak and strong moments for the supremum norm:

Corollary 3.13

Let X be an n-dimensional centered random vector satisfying condition (3.3). Assume that

$$\displaystyle \begin{aligned} \|X_i\|{}_{2p} \le \beta \|X_i\|{}_p \qquad \mathit{\mbox{for every }}p\ge 2\mathit{\mbox{ and }}i=1,\ldots,n. \end{aligned} $$
(3.17)

Then the following comparison of weak and strong moments for the supremum norm holds: for all \(a\in {\mathbb R}^n\) and all p ≥ 1,

$$\displaystyle \begin{aligned} \big({\mathbb E} \max_{i\le n}|a_iX_i|{}^p\big)^{1/p} \le C(\alpha, \beta)\Big[ {\mathbb E} \max_{i\le n}|a_iX_i| + \max_{i\le n}\big( {\mathbb E} |a_iX_i|{}^p \big)^{1/p}\Big], \end{aligned}$$

where C(α, β) is a constant depending only on α and β.

Proof

Let \(X^{\prime }=(X_i^{\prime })_{i\leq n}\) be a decoupled version of X. For any p > 0 a random vector (|a iX i|p)in satisfies condition (3.3), so by Theorem 3.2

$$\displaystyle \begin{aligned} \big({\mathbb E} \max_{i\le n}|a_iX_i|{}^p\big)^{1/p} \sim \big({\mathbb E} \max_{i\le n}|a_iX_i^{\prime}|{}^p\big)^{1/p} \end{aligned}$$

for all p > 0, up to a constant depending only on α. The coordinates of X′ are independent and satisfy condition (3.17), so due to [11, Theorem 1.1] the comparison of weak and strong moments of X′ holds, i.e. for p ≥ 1,

$$\displaystyle \begin{aligned} \big({\mathbb E} \max_{i\le n}|a_iX_i^{\prime}|{}^p\big)^{1/p} \le C(\beta)\Big[ {\mathbb E} \max_{i\le n}|a_iX_i^{\prime}| + \max_{i\le n}\big( {\mathbb E} |a_iX_i^{\prime}|{}^p \big)^{1/p}\Big], \end{aligned}$$

where C(β) depends only on β. These two observations yield the assertion. □
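Let us also note that condition (3.17) holds in particular when the coordinates X_i are log-concave random variables: applying (3.7) to the one-dimensional vector X_i and the norm f(x) = |x| gives

$$\displaystyle \begin{aligned} \|X_i\|{}_{2p}\leq 2C_1\|X_i\|{}_{p}\quad \mbox{ for }p\geq 1, \end{aligned}$$

so one may take β = 2C_1 in (3.17).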

3.5 Lower Estimates for Order Statistics

The next lemma shows the relation between t(k, X) and t (k, X) for log-concave vectors X.

Lemma 3.14

Let X be a symmetric log-concave random vector in \({\mathbb R}^n\). For any 1 ≤ k ≤ n we have

$$\displaystyle \begin{aligned} \frac{1}{3}\left(t^*(k,X)+\frac{1}{k}\max_{|I|=k}\sum_{i\in I}{\mathbb E} |X_i|\right) \leq t(k,X) \leq 4\left(t^*(k,X)+\frac{1}{k}\max_{|I|=k}\sum_{i\in I}{\mathbb E} |X_i|\right). \end{aligned}$$

Proof

Let t_k := t(k, X) and \(t_k^*:=t^*(k,X)\). We may assume that no X_i is identically equal to 0. Then \(\sum _{i=1}^n {\mathbb P}(|X_i|\ge t_k^{*})= k\) and \(\sum _{i=1}^n {\mathbb E}|X_i|{\mathbf 1}_{\{|X_i|\ge t_k\}}=kt_k\).

Obviously \(t_k^*\leq t_k\). Also for any |I| = k we have

$$\displaystyle \begin{aligned} \sum_{i\in I}{\mathbb E} |X_i|\leq \sum_{i\in I}\left(t_k+{\mathbb E} |X_i|{\mathbf 1}_{\{|X_i|\geq t_k\}}\right) { \leq} |I|t_k+kt_k=2kt_k. \end{aligned}$$

To prove the upper bound set

$$\displaystyle \begin{aligned} I_1:=\{i\in [n]\colon\ {\mathbb P}(|X_i|\geq t_k^*)\geq 1/4\}. \end{aligned}$$

We have

$$\displaystyle \begin{aligned} k \ge\sum_{i\in I_1}{\mathbb P}(|X_i|\geq t_k^*)\geq \frac{1}{4}|I_1|, \end{aligned}$$

so |I 1|≤ 4k. Hence

$$\displaystyle \begin{aligned} \sum_{i\in I_1}{\mathbb E} |X_i|{\mathbf 1}_{\{|X_i|\geq t_k^*\}}\leq \sum_{i\in I_1}{\mathbb E} |X_i|\leq 4 \max_{|I|=k}\sum_{i\in I}{\mathbb E} |X_i|. \end{aligned}$$

Moreover by the second part of Lemma 3.12 we get

$$\displaystyle \begin{aligned} {\mathbb E}|X_i|{\mathbf 1}_{\{|X_i|\geq t_k^*\}}\leq 4t_k^*{\mathbb P}(|X_i|\geq t_k^*) \quad \mbox{for }i\notin I_1, \end{aligned}$$

so

$$\displaystyle \begin{aligned} \sum_{i\notin I_1}{\mathbb E}|X_i|{\mathbf 1}_{\{|X_i|\geq t_k^*\}}\leq 4t_k^*\sum_{i=1}^n{\mathbb P}(|X_i|\geq t_k^*)\leq 4kt_k^*. \end{aligned}$$

Hence if \(s=4t_k^*+\frac {4}{k}\max _{|I|=k}\sum _{i\in I}{\mathbb E}|X_i|\) then

$$\displaystyle \begin{aligned} \sum_{i=1}^n{\mathbb E}|X_i|{\mathbf 1}_{\{|X_i|\geq s\}} \leq \sum_{i=1}^n{\mathbb E}|X_i|{\mathbf 1}_{\{|X_i|\geq t_k^*\}}\leq 4 \max_{|I|=k}\sum_{i\in I}{\mathbb E} |X_i|+4kt_k^*=ks, \end{aligned}$$

that is t k ≤ s. □

To derive bounds for order statistics we will also need a few facts about log-concave vectors.

Lemma 3.15

Assume that Z is an isotropic one- or two-dimensional log-concave random vector with a density g. Then g(t) ≤ C for all t. If Z is one-dimensional, then also g(t) ≥ c for all |t|≤ t 0, where t 0 > 0 is an absolute constant.

Proof

We will use a classical result (see [4, Theorem 2.2.2, Proposition 3.3.1, Proposition 3.3.2, and Proposition 2.5.9]): \(\|g\|{ }_{\sup }\sim g(0) \sim 1\) (note that here we use the assumption that Z is isotropic, in particular that \({\mathbb E} Z=0\), and that the dimension of Z is 1 or 2). This implies the upper bound on g.

In order to get the lower bound in the one-dimensional case, it suffices to prove that g(u) ≥ c for \(|u|=\varepsilon {\mathbb E} |Z|\ge (2C_1)^{-1} \varepsilon \), where 1∕4 > ε > 0 is fixed and its value will be chosen later (then by the log-concavity we get \(g(u)^sg(0)^{1-s}\leq g(su)\) for all s ∈ (0, 1)). Since − Z is again isotropic we may assume that u ≥ 0.

If g(u) ≥ g(0)∕e, then we are done. Otherwise by log-concavity of g we get

$$\displaystyle \begin{aligned} {\mathbb P}(Z\,{\ge}\, u)\, {=} \!\int_u^{\infty}\!\!\! g(s) ds \,{\le}\! \int_u^{\infty}\!\!\! g(u)^{s/u}g(0)^{-s/u +1} ds \,{\le}\, g(0) \!\int_u^{\infty} \!\!\!e^{-s/u}ds \,{\le}\, C_0u \,{\leq}\, C_0\varepsilon. \end{aligned}$$

On the other hand, Z has mean zero, so \({\mathbb E} |Z|=2{\mathbb E} Z_{+}\) and by the Paley–Zygmund inequality and (3.7) we have

$$\displaystyle \begin{aligned} {\mathbb P}(Z\ge u) = {\mathbb P}(Z_+\geq 2\varepsilon {\mathbb E} Z_{+}) \ge (1-2\varepsilon)^2 \frac{({\mathbb E} Z_+)^2}{{\mathbb E} Z_+^2} \ge \frac{1}{16}\frac{({\mathbb E} |Z|)^2}{{\mathbb E} Z^2}\geq c_0. \end{aligned}$$

For \(\varepsilon < c_0/C_0\) we get a contradiction. □

Lemma 3.16

Let Y  be a mean zero log-concave random variable and let \({\mathbb P}(|Y|\geq t)\leq p\) for some p > 0. Then

$$\displaystyle \begin{aligned} {\mathbb P}\left(|Y|\geq \frac{t}{2}\right)\geq \frac{1}{\sqrt{ep}}{\mathbb P}(|Y|\geq t). \end{aligned}$$

Proof

By the Grünbaum inequality (3.8) we have \({\mathbb P}(Y\geq 0)\geq 1/e\), hence

$$\displaystyle \begin{aligned} {\mathbb P}\left(Y\geq \frac{t}{2}\right)\geq \sqrt{{\mathbb P}(Y\geq t){\mathbb P}(Y\geq 0)} \geq \frac{1}{\sqrt{e}}\sqrt{{\mathbb P}(Y\geq t)} \geq \frac{1}{\sqrt{ep}}{\mathbb P}(Y\geq t). \end{aligned}$$

Since − Y  satisfies the same assumptions as Y  we also have

$$\displaystyle \begin{aligned} {\mathbb P}\left(-Y\geq \frac{t}{2}\right) \geq \frac{1}{\sqrt{ep}}{\mathbb P}(-Y\geq t).\end{aligned} $$

Lemma 3.17

Let Y  be a mean zero log-concave random variable and let \({\mathbb P}(|Y|\geq t)\geq p\) for some p > 0. Then there exists a universal constant C such that

$$\displaystyle \begin{aligned} {\mathbb P}(|Y|\leq \lambda t)\leq \frac{C\lambda}{\sqrt p} {\mathbb P}(|Y|\leq t)\quad \mathit{\mbox{for }}\lambda\in [0,1]. \end{aligned}$$

Proof

Without loss of generality we may assume that \({\mathbb E} Y^2=1\). Then by Chebyshev’s inequality \(t\leq p^{-1/2}\). Let g be the density of Y. By Lemma 3.15 we know that \(\|g\|{}_{\infty}\leq C\) and g ≥ c on [−t_0, t_0], where c, C and \(t_0\in (0,1)\) are universal constants. Thus

$$\displaystyle \begin{aligned} {\mathbb P}(|Y|\leq t)\geq {\mathbb P}(|Y|\leq t_0\sqrt{p}t)\geq 2ct_0\sqrt{p}t, \end{aligned}$$

and

$$\displaystyle \begin{aligned} {\mathbb P}(|Y|\leq \lambda t)\leq 2\|g\|{}_\infty\lambda t\leq 2C\lambda t\leq \frac{C\lambda}{ct_0\sqrt{p}}{\mathbb P}(|Y|\leq t).\end{aligned} $$

Now we are ready to give a proof of the lower bound in Theorem 3.4. The next proposition is a key part of it.

Proposition 3.18

Let X be a mean zero log-concave n-dimensional random vector with uncorrelated coordinates and let α > 1∕4. Suppose that

$$\displaystyle \begin{aligned} {\mathbb P}\big(|X_i|\geq t^*(\alpha,X)\big)\leq \frac{1}{C_3}\quad \mathit{\mbox{for all }}i. \end{aligned}$$

Then

$$\displaystyle \begin{aligned} {\mathbb P}\Big(\lfloor 4\alpha\rfloor \mathit{\text{-}}\max_{i}|X_i|\geq \frac{1}{C_4} t^*(\alpha,X)\Big)\geq \frac{3}{4}. \end{aligned}$$

Proof

Let \(t^* = t^*(\alpha,X)\), k := ⌊4α⌋ and \(L=\lfloor \frac {\sqrt {C_3}}{4 \sqrt {e}}\rfloor \). We will choose C_3 in such a way that L is large, in particular we may assume that L ≥ 2. Observe also that \(\alpha = \sum _{i=1}^n {\mathbb P}(|X_i|\geq t^*(\alpha ,X))\leq nC_3^{-1}\), thus \(Lk\leq C_3^{1/2}e^{-1/2}\alpha \leq e^{-1/2}C_3^{-1/2}n\leq n\) if \(C_3\geq 1>\frac 1e\). Hence

$$\displaystyle \begin{aligned} k\text{-}\max_{i}|X_i| & \geq \frac{1}{k(L-1)}\sum_{l=k+1}^{Lk}l\text{-}\max_{i}|X_i| \\ & =\frac{1}{k(L-1)} \bigg(\max_{|I|=Lk}\sum_{i\in I}|X_i|-\max_{|I|=k}\sum_{i\in I}|X_i|\bigg). \end{aligned} $$
(3.18)

Lemma 3.16 and the definition of t (α, X) yield

$$\displaystyle \begin{aligned} \sum_{i=1}^n{\mathbb P}\left(|X_i|\geq \frac{1}{2}t^*\right)\geq \frac{\sqrt{C_3}}{ \sqrt{e}}\alpha \geq Lk. \end{aligned}$$

This yields \(t(Lk,X)\geq t^*(Lk,X) \geq \frac {t^*}{2}\) and by Theorem 3.3 we have

$$\displaystyle \begin{aligned} {\mathbb E}\max_{|I|=Lk}\sum_{i\in I}|X_i|\geq c_1Lk\frac{t^*}{2}. \end{aligned}$$

Since for any norm \({\mathbb P}(\|X\|\leq t{\mathbb E} \|X\|)\leq Ct\) for t > 0 (see [10, Corollary 1]) we have

$$\displaystyle \begin{aligned} {\mathbb P}\left(\max_{|I|=Lk}\sum_{i\in I}|X_i|\geq c_2Lkt^*\right)\geq \frac{7}{8}. \end{aligned} $$
(3.19)

Let X′ be an independent copy of X. By the Paley-Zygmund inequality and (3.7), \({\mathbb P}(|X_i|\geq \frac {1}{2}{\mathbb E} |X_i|)\geq \frac {({\mathbb E}|X_i|)^2}{4{\mathbb E} |X_i|{ }^2}> \frac {1}{C_3}\) if \(C_3 >16C_1^2\), so \(\frac {1}{2}{\mathbb E}|X_i|\leq t^*\). Moreover it is easy to verify that k = ⌊4α⌋ > α for α > 1∕4, thus \(t^*(k, X) \leq t^*(\alpha, X) = t^*\). Hence Proposition 3.1, Lemma 3.14, and inequality (3.10) yield

$$\displaystyle \begin{aligned} {\mathbb E} \max_{|I|=k}\sum_{i\in I}|X_i| &= {\mathbb E} \max_{|I|=k}\sum_{i\in I}|X_i-{\mathbb E} X_i^{\prime}| \leq {\mathbb E} \max_{|I|=k}\sum_{i\in I}|X_i-X_i^{\prime}| \\ & \leq {\mathbb E} \max_{|I|=2k}\sum_{i\in I}|X_i-X_i^{\prime}| \\ & \leq 4kt(2k,X-X^{\prime})\leq 16k\big(t^*(2k,X-X^{\prime})+\max_{i}{\mathbb E} |X_i-X_i^{\prime}|\big) \\ &\leq 16k\big(2t^*(k,X)+2\max_{i}{\mathbb E} |X_i|\big)\leq 96kt^*. \end{aligned} $$

Therefore

$$\displaystyle \begin{aligned} {\mathbb P}\left(\max_{|I|=k}\sum_{i\in I}|X_i|\geq 800kt^*\right)\leq \frac{1}{8}. \end{aligned} $$
(3.20)

Estimates (3.18)–(3.20) yield

$$\displaystyle \begin{aligned} {\mathbb P}\left(k\text{-}\max_{i}|X_i|\geq \frac{1}{L-1}(c_2L- 800)t^*\right)\geq \frac{3}{4}, \end{aligned}$$

so it is enough to choose C 3 in such a way that L ≥ 1600∕c 2. □

Proof of the First Part of Theorem 3.4

Let \(t^* = t^*(k-1/2, X)\) and C_3 be as in Proposition 3.18. It is enough to consider the case when \(t^* > 0\); then \({\mathbb P}(|X_i|=t^*)=0\) for all i and \(\sum _{i=1}^n {\mathbb P}(|X_i|\geq t^*) = k-1/2\). Define

$$\displaystyle \begin{aligned} I_1:=\left\{i\leq n\colon\ {\mathbb P}(|X_i|\geq t^*)\leq \frac{1}{C_3}\right\},\quad \alpha:=\sum_{i\in I_1}{\mathbb P}(|X_i|\geq t^*),\end{aligned} $$
$$\displaystyle \begin{aligned} I_2:=\left\{i\leq n\colon\ {\mathbb P}(|X_i|\geq t^*)> \frac{1}{C_3}\right\},\quad \beta:=\sum_{i\in I_2}{\mathbb P}(|X_i|\geq t^*). \end{aligned}$$

If β = 0 then α = k − 1∕2, I_1 = {1, …, n}, and the assertion immediately follows by Proposition 3.18 since 4α ≥ k.

Otherwise define

$$\displaystyle \begin{aligned} \tilde{N}(t):=\sum_{i\in I_2}{\mathbf 1}_{\{|X_i|\leq t\}}. \end{aligned}$$

We have by Lemma 3.17 applied with p = 1∕C 3

$$\displaystyle \begin{aligned} {\mathbb E} \tilde{N}(\lambda t^*)=\sum_{i\in I_2}{\mathbb P}(|X_i|\leq \lambda t^*) \leq C_5\lambda\sum_{i\in I_2}{\mathbb P}(|X_i|\leq t^*)=C_5\lambda(|I_2|-\beta). \end{aligned}$$

Thus

$$\displaystyle \begin{aligned} {\mathbb P}\left(\lceil \beta \rceil \text{-}\max_{i\in I_2}|X_i|\leq \lambda t^*\right) & = {\mathbb P}(\tilde{N}(\lambda t^*)\geq |I_2|+1-\lceil \beta \rceil) \\ & \leq \frac{1}{|I_2|+1-\lceil \beta \rceil}{\mathbb E} \tilde{N}(\lambda t^*)\leq C_5\lambda. \end{aligned} $$

Therefore

$$\displaystyle \begin{aligned} {\mathbb P}\Big(\lceil \beta \rceil \text{-}\max_{i\in I_2}|X_i|\geq \frac{1}{4C_5} t^*\Big)\geq \frac{3}{4}. \end{aligned}$$

If α < 1∕2 then ⌈β⌉ = k and the assertion easily follows. Otherwise Proposition 3.18 yields

$$\displaystyle \begin{aligned} {\mathbb P}\Big(\lfloor 4\alpha\rfloor \text{-}\max_{i\in I_1}|X_i|\geq \frac{1}{C_4}t^*\Big)\geq \frac{3}{4}. \end{aligned}$$

Observe that for α ≥ 1∕2 we have \(\lfloor 4\alpha\rfloor + \lceil \beta \rceil \geq 4\alpha - 1 + \beta \geq \alpha + 1/2 + \beta = k\), so

$$\displaystyle \begin{aligned} &{\mathbb P}\left(k\text{-}\max_{i}|X_i|\geq \min\left\{\frac{t^*}{C_4},\frac{t^*}{4C_5}\right\}\right)\\ &\qquad \qquad \qquad \geq{\mathbb P}\left(\lfloor 4\alpha\rfloor\text{-}\max_{i\in I_1}|X_i|\geq \frac{1}{C_4}t^*, \lceil \beta \rceil \text{-}\max_{i\in I_2}|X_i| \geq \frac{1}{4C_5} t^*\right) \geq \frac{1}{2}.\end{aligned} $$

Remark 3.19

A modification of the proof above shows that under the assumptions of Theorem 3.4 for any p < 1 there exists c(p) > 0 such that

$$\displaystyle \begin{aligned} {\mathbb P}\left( k\text{-}\max_{i\leq n}|X_i|\geq c(p)t^*(k-1/2,X)\right)\geq p. \end{aligned}$$

3.6 Upper Estimates for Order Statistics

We will need a few more facts concerning log-concave vectors.

Lemma 3.20

Suppose that X is a mean zero log-concave random vector with uncorrelated coordinates. Then for any i ≠ j and s > 0,

$$\displaystyle \begin{aligned} {\mathbb P}(|X_i|\leq s,|X_j|\leq s)\leq C_6{\mathbb P}(|X_i|\leq s){\mathbb P}(|X_j|\leq s). \end{aligned}$$

Proof

Let C_7, c_3 and t_0 be the constants from Lemma 3.15. If \(s > t_0\|X_i\|{}_2\) then, by Lemma 3.15, \({\mathbb P}(|X_i|\leq s)\geq 2c_3t_0\) and the assertion is obvious (with any \(C_6 \geq (2c_3t_0)^{-1}\)). Thus we will assume that \(s\leq t_0\min \{\|X_i\|{ }_2,\|X_j\|{ }_2\}\).

Let \(\widetilde {X}_i=X_i/\|X_i\|{ }_2\) and let \(g_{ij}\) be the density of \((\widetilde {X}_i,\widetilde {X}_j)\). By Lemma 3.15 we know that \(\|g_{ij}\|{}_{\infty}\leq C_7\), so

$$\displaystyle \begin{aligned} {\mathbb P}(|X_i|\leq s,|X_j|\leq s) ={\mathbb P}(|\widetilde{X}_i|\leq s/\|X_i\|{}_2,|\tilde{X}_j|\leq s/\|X_j\|{}_2)\leq C_7\frac{s^2}{\|X_i\|{}_2\|X_j\|{}_2}. \end{aligned}$$

On the other hand the second part of Lemma 3.15 yields

$$\displaystyle \begin{aligned} {\mathbb P}(|X_i|\leq s){\mathbb P}(|X_j|\leq s) \geq \frac{4c_3^2s^2}{\|X_i\|{}_2\|X_j\|{}_2}.\end{aligned} $$

Lemma 3.21

Let Y  be a log-concave random variable. Then

$$\displaystyle \begin{aligned} {\mathbb P}(|Y|\geq ut)\leq {\mathbb P}(|Y|\geq t)^{(u-1)/2}\quad \mathit{\mbox{for }} u\geq 1,t\geq 0. \end{aligned}$$

Proof

We may assume that Y  is non-degenerate (otherwise the statement is obvious), in particular Y  has no atoms. Log-concavity of Y  yields

$$\displaystyle \begin{aligned} {\mathbb P}(Y\geq t)\geq {\mathbb P}(Y\geq -t)^{\frac{u-1}{u+1}}{\mathbb P}(Y\geq ut)^{\frac{2}{u+1}}. \end{aligned}$$

Hence

$$\displaystyle \begin{aligned} {\mathbb P}(Y\geq ut) &\leq \left(\frac{{\mathbb P}(Y\geq t)}{{\mathbb P}(Y\geq -t)}\right)^{\frac{u+1}{2}}{\mathbb P}(Y\geq -t) =\left(1-\frac{{\mathbb P}(|Y|\leq t)}{{\mathbb P}(Y\geq -t)}\right)^{\frac{u+1}{2}}{\mathbb P}(Y\geq -t) \\ &\leq (1-{\mathbb P}(|Y|\leq t))^{\frac{u+1}{2}}{\mathbb P}(Y\geq -t)={\mathbb P}(|Y|\geq t)^{\frac{u+1}{2}}{\mathbb P}(Y\geq -t). \end{aligned} $$

Since − Y  satisfies the same assumptions as Y , we also have

$$\displaystyle \begin{aligned} {\mathbb P}(Y\leq -ut)\leq {\mathbb P}(|Y|\geq t)^{\frac{u+1}{2}}{\mathbb P}(Y\leq t). \end{aligned}$$

Adding both estimates we get

$$\displaystyle \begin{aligned} {\mathbb P}(|Y|\geq ut)\leq {\mathbb P}(|Y|\geq t)^{\frac{u+1}{2}}(1+{\mathbb P}(|Y|\leq t)) ={\mathbb P}(|Y|\geq t)^{\frac{u-1}{2}}(1-{\mathbb P}(|Y|\leq t)^2).\end{aligned} $$

Lemma 3.22

Suppose that Y  is a log-concave random variable and \({\mathbb P}(|Y|\leq t)\leq \frac {1}{10}\). Then \({\mathbb P}(|Y|\leq 21t)\geq 5{\mathbb P}(|Y|\leq t)\).

Proof

Let \({\mathbb P}(|Y|\leq t)=p\) then by Lemma 3.21

$$\displaystyle \begin{aligned} {\mathbb P}(|Y|\leq 21t)&=1-{\mathbb P}(|Y|>21t)\geq 1-{\mathbb P}(|Y|>t)^{10}=1-(1-p)^{10} \\ &\geq 10p-45p^2\geq 5p.\end{aligned} $$

Let us now prove (3.4) and see how it implies the second part of Theorem 3.4. Then we give a proof of (3.5).

Proof of ( 3.4 )

Fix k and set \(t^* := t^*(k-1/2, X)\). Then \(\sum _{i=1}^n {\mathbb P}(|X_i|\geq t^*)=k-1/2\). Define

$$\displaystyle \begin{aligned} I_1:=\left\{i\leq n\colon\ {\mathbb P}(|X_i|\geq t^*)\leq \frac{9}{10}\right\},\quad \alpha:=\sum_{i\in I_1}{\mathbb P}(|X_i|\geq t^*), \end{aligned} $$
(3.21)
$$\displaystyle \begin{aligned} I_2:=\left\{i\leq n\colon\ {\mathbb P}(|X_i|\geq t^*)> \frac{9}{10}\right\},\quad \beta:=\sum_{i\in I_2}{\mathbb P}(|X_i|\geq t^*). \end{aligned} $$
(3.22)

Observe that for u > 3 and 1 ≤ l ≤|I 1| we have by Lemma 3.21

$$\displaystyle \begin{aligned} {\mathbb P}(l\text{-}\max_{i\in I_1}|X_i|\geq ut^*) &\leq {\mathbb E}\frac{1}{l}\sum_{i\in I_1}{\mathbf 1}_{\{|X_i|\geq ut^*\}} =\frac{1}{l}\sum_{i\in I_1}{\mathbb P}(|X_i|\geq ut^*) \\ \notag &\leq \frac{1}{l}\sum_{i\in I_1}{\mathbb P}(|X_i|\geq t^*)^{(u-1)/2} \leq \frac{\alpha}{l}\left(\frac{9}{10}\right)^{(u-3)/2}. \end{aligned} $$
(3.23)

Consider two cases.

Case 1. β > |I_2| − 1∕2. Then |I_2| < β + 1∕2 ≤ k, so k − |I_2| ≥ 1 and

$$\displaystyle \begin{aligned} \alpha=k-\frac{1}{2}-\beta \le k - |I_2|. \end{aligned}$$

Therefore by (3.23)

$$\displaystyle \begin{aligned} {\mathbb P}\left(k\text{-}\max |X_i|\geq 5t^*\right) \leq {\mathbb P}\left((k-|I_2|)\text{-}\max_{i\in I_1} |X_i|\geq 5t^*\right) \leq \frac{9}{10}. \end{aligned}$$

Case 2. β ≤ |I_2| − 1∕2. Observe that for any disjoint sets J_1, J_2 and integers l, m such that l ≤ |J_1|, m ≤ |J_2| we have

$$\displaystyle \begin{aligned} (l+m-1)\text{-}\max_{i\in J_1\cup J_2}|x_i| & \leq \max\left\{l\text{-}\max_{i\in J_1}|x_i|,m\text{-}\max_{i\in J_2}|x_i|\right\} \\ & \leq l\text{-}\max_{i\in J_1}|x_i|+m\text{-}\max_{i\in J_2}|x_i|. \end{aligned} $$
(3.24)

Since

$$\displaystyle \begin{aligned} \lceil \alpha\rceil +\lceil\beta\rceil \leq \alpha+\beta+2<k+2\end{aligned} $$

we have ⌈α⌉ + ⌈β⌉≤ k + 1 and, by (3.24),

$$\displaystyle \begin{aligned} k\text{-}\max_{i}|X_i| \leq \lceil \alpha\rceil\text{-}\max_{i\in I_1}|X_i|+ \lceil \beta\rceil\text{-}\max_{i\in I_2}|X_i|. \end{aligned}$$

Estimate (3.23) yields

$$\displaystyle \begin{aligned} {\mathbb P}\left( \lceil \alpha\rceil\text{-}\max_{i\in I_1}|X_i|\geq ut^*\right)\leq \left(\frac{9}{10}\right)^{(u-3)/2} \quad \mbox{for }u\geq 3. \end{aligned}$$

To estimate \(\lceil \beta \rceil \text{-}\max _{i\in I_2}|X_i| =(|I_2|+1-\lceil \beta \rceil )\text{-}\min _{i\in I_2}|X_i|\) observe that by Lemma 3.22, the definition of I 2 and assumptions on β,

$$\displaystyle \begin{aligned} \sum_{i\in I_2}{\mathbb P}(|X_i|\leq 21t^*)\geq 5\sum_{i\in I_2}{\mathbb P}(|X_i|\leq t^*)= 5(|I_2|-\beta)\geq 2(|I_2|+1-\lceil \beta\rceil). \end{aligned}$$

Set l := (|I 2| + 1 −⌈β⌉) and

$$\displaystyle \begin{aligned} \tilde{N}(t):=\sum_{i\in I_2}{\mathbf 1}_{{\{}|X_i|\leq t{\}}}. \end{aligned}$$

Note that we know already that \({\mathbb E}\tilde {N}(21 t^*) \ge 2l\). Thus the Paley-Zygmund inequality implies

$$\displaystyle \begin{aligned} {\mathbb P}\left(\lceil \beta\rceil\text{-}\max_{i\in I_2}|X_i|\leq 21t^*\right) &={\mathbb P}\left(l\text{-}\min_{i\in I_2}|X_i|\leq 21t^*\right) \geq {\mathbb P}( \tilde{N}(21t^*)\geq l) \\ &\geq {\mathbb P}\left( \tilde{N}(21t^*)\geq \frac{1}{2}{\mathbb E}\tilde{N}(21t^*)\right) \geq \frac{1}{4}\frac{({\mathbb E}\tilde{N}(21t^*))^2}{{\mathbb E}\tilde{N}(21t^*)^2}. \end{aligned} $$

However Lemma 3.20 yields

$$\displaystyle \begin{aligned} {\mathbb E}\tilde{N}(21t^*)^2\leq {\mathbb E}\tilde{N}(21t^*)+C_6({\mathbb E}\tilde{N}(21t^*))^2\leq (C_6+1)({\mathbb E}\tilde{N}(21t^*))^2. \end{aligned}$$

Therefore

$$\displaystyle \begin{aligned} {\mathbb P}\left(k\text{-}\max_{i}|X_i|> (21+u)t^*\right) &\leq {\mathbb P}\left( \lceil \alpha\rceil\text{-}\max_{i\in I_1}|X_i|\geq ut^*\right) \\ & \quad +{\mathbb P}\left(\lceil \beta\rceil\text{-}\max_{i\in I_2}|X_i|> 21t^*\right) \\ &\leq \left(\frac{9}{10}\right)^{(u-3)/2}\kern-1pt+\kern-1pt1-\frac{1}{4(C_6+1)}\kern-1pt\leq\kern-1pt 1\kern-1pt-\kern-1pt\frac{1}{5(C_6+1)} \end{aligned} $$

for sufficiently large u. □

The unconditionality assumption plays a crucial role in the proof of the next lemma, which allows us to derive the second part of Theorem 3.4 from estimate (3.4).

Lemma 3.23

Let X be an unconditional log-concave n-dimensional random vector. Then for any 1 ≤ k ≤ n,

$$\displaystyle \begin{aligned} {\mathbb P}\left(k\mathit{\text{-}}\max_{i\leq n}|X_i|\geq ut\right)\leq {\mathbb P}\left(k\mathit{\text{-}}\max_{i\leq n}|X_i|\geq t\right)^u \quad \mathit{\mbox{ for }}u>1,t>0. \end{aligned}$$

Proof

Let ν be the law of (|X 1|, …, |X n|). Then ν is log-concave on \({\mathbb R}_n^+\). Define for t > 0,

$$\displaystyle \begin{aligned} A_t:=\left\{x\in {\mathbb R}_n^+\colon\ k\text{-}\max_{i\leq n}|x_i|\geq t\right\}. \end{aligned}$$

It is easy to check that \(\frac {1}{u}A_{ut}+(1-\frac {1}{u}){\mathbb R}_+^n\subset A_t\), hence

$$\displaystyle \begin{aligned} {\mathbb P}\left(k\text{-}\max_{i\leq n}|X_i|\kern-1pt\geq\kern-1pt t\right)=\nu(A_t)\kern-1pt\geq\kern-1pt \nu(A_{ut})^{1/u}\nu({\mathbb R}_+^n)^{1-1/u} \kern-1pt=\kern-1pt{\mathbb P}\left(k\text{-}\max_{i\leq n}|X_i|\kern-1pt\geq\kern-1pt ut\right)^{1/u}.\end{aligned} $$

Proof of the Second Part of Theorem 3.4

Estimate (3.4) together with Lemma 3.23 yields

$$\displaystyle \begin{aligned} {\mathbb P}\left(k\text{-}\max_{i\leq n}|X_i|\geq Cut^*(k-1/2, X)\right) \leq (1-c)^u \quad \text{for }u\ge 1, \end{aligned}$$

and the assertion follows by integration by parts. □

Proof of ( 3.5 )

Define I_1, I_2, α and β by (3.21) and (3.22), where this time \(t^* = t^*(k-\frac {1}{2}k^{5/6}, X)\). Estimate (3.23) is still valid so integration by parts yields

$$\displaystyle \begin{aligned} {\mathbb E} l\text{-}\max_{i\in I_1}|X_i|\leq \left(3+20\frac{\alpha}{l}\right)t^*. \end{aligned}$$

Set

$$\displaystyle \begin{aligned} k_{\beta}:=\left\lceil\beta+\frac{1}{2}k^{5/6}\right\rceil.\end{aligned} $$

Observe that

$$\displaystyle \begin{aligned} \lceil\alpha\rceil+k_\beta< \alpha+\beta+\frac{1}{2}k^{5/6}+2=k+2.\end{aligned} $$

Hence ⌈α⌉ + k β ≤ k + 1.

If k β > |I 2|, then k −|I 2|≥⌈α⌉ + k β − 1 −|I 2|≥⌈α⌉, so

$$\displaystyle \begin{aligned} {\mathbb E} k\text{-}\max_{i}|X_i|\leq {\mathbb E} (k-|I_2 |)\text{-}\max_{i\in I_1}|X_i| \leq {\mathbb E} \lceil\alpha\rceil\text{-}\max_{i\in I_1}|X_i| \leq 23t^*.\end{aligned} $$

Therefore it suffices to consider case k β ≤|I 2| only.

Since ⌈α⌉ + k β − 1 ≤ k and k β ≤|I 2|, we have by (3.24),

$$\displaystyle \begin{aligned} {\mathbb E} k\text{-}\max_{i}|X_i|\leq {\mathbb E} \lceil\alpha\rceil\text{-}\max_{i\in I_1}|X_i| +{\mathbb E} k_\beta\text{-}\max_{i\in I_2}|X_i| \leq 23t^*+{\mathbb E} k_\beta\text{-}\max_{i\in I_2}|X_i|. \end{aligned}$$

Since \(\beta \leq k-\frac {1}{2}k^{5/6}\) and \(x\rightarrow x-\frac {1}{2}x^{5/6}\) is increasing for x ≥ 1∕2 we have

$$\displaystyle \begin{aligned} \beta\leq \beta+\frac{1}{2}k^{5/6}-\frac{1}{2}\left(\beta+\frac{1}{2}k^{5/6}\right)^{5/6} \leq k_\beta-\frac{1}{2}k_\beta^{5/6}. \end{aligned}$$

Therefore, considering \((X_{i})_{i\in I_2}\) instead of X and k β instead of k it is enough to show the following claim:

Let s > 0, n ≥ k and let X be a mean zero n-dimensional log-concave vector with uncorrelated coordinates. Suppose that

$$\displaystyle \begin{aligned} \sum_{i\leq n}{\mathbb P}(|X_i|\geq s)\leq k-\frac{1}{2}k^{5/6}\quad \mbox{ and } \quad \min_{i\leq n}{\mathbb P}(|X_i|\geq s)\geq 9/10 \end{aligned}$$

then

$$\displaystyle \begin{aligned} {\mathbb E} k\text{-}\max_{i\leq n}|X_i|\leq C_8s. \end{aligned}$$

We will show the claim by induction on k. For k = 1 the statement is obvious (since the assumptions are contradictory). Suppose now that k ≥ 2 and the assertion holds for k − 1.

Case 1. \({\mathbb P}(|X_{i_0}|\geq s)\geq 1-\frac {5}{12}k^{-1/6}\) for some 1 ≤ i_0 ≤ n. Then

$$\displaystyle \begin{aligned} \sum_{i\neq i_0}{\mathbb P}(|X_i|\geq s)\leq k-\frac{1}{2}k^{5/6}-\left(1-\frac{5}{12}k^{-1/6}\right) \leq k-1-\frac{1}{2}(k-1)^{5/6}, \end{aligned}$$

where to get the last inequality we used that x 5∕6 is concave on \({\mathbb R}_+\), so \((1-t)^{5/6}\leq 1-\frac {5}{6}t\) for t = 1∕k. Therefore by the induction assumption applied to \((X_i)_{i\neq i_0}\),

$$\displaystyle \begin{aligned} {\mathbb E} k\text{-}\max_{i}|X_i|\leq {\mathbb E} (k-1)\text{-}\max_{i\neq i_0}|X_i|\leq C_8s. \end{aligned}$$

Case 2. \({\mathbb P}(|X_{i}|\leq s)\geq \frac {5}{12}k^{-1/6}\) for all i. Applying Lemma 3.15 we get

$$\displaystyle \begin{aligned} \frac{5}{12}k^{-1/6} \leq {\mathbb P}\left(\frac{|X_i|}{\|X_i\|{}_2}\leq \frac{s}{\|X_i\|{}_2}\right) \le C\frac{s}{\|X_i\|{}_2}, \end{aligned}$$

so maxiX i2 ≤ Ck 1∕6s. Moreover \(n\leq \frac {10}{9}k\). Therefore by the result of Lee and Vempala [13] X satisfies the exponential concentration with α ≤ C 9k 5∕12s.

Let \(l=\lceil k-\frac {1}{2}(k^{5/6}-1)\rceil \); then \(s\geq t^*(l-1/2, X)\) and \(k-l+1\geq \frac {1}{2}(k^{5/6}-1) \geq \frac {1}{9}k^{5/6}\). Let

$$\displaystyle \begin{aligned} A:=\left\{x\in {\mathbb R}^n\colon l\text{-}\max_{i}|{ x}_i|\leq C_{10}s \right\}. \end{aligned}$$

By (3.4) (applied with l instead of k) we have \({\mathbb P}(X\in A)\geq c_{4}\). Observe that

$$\displaystyle \begin{aligned} k\text{-}\max_{i}|x_i|\geq C_{10}s+u\Rightarrow \mathrm{dist}(x,A)\geq \sqrt{k-l+1}u\geq \frac{1}{3}k^{5/12}u. \end{aligned}$$

Therefore by Lemma 3.10 we get

$$\displaystyle \begin{aligned} {\mathbb P}\left(k\text{-}\max_{i}|X_i|\geq C_{10}s+3C_9us\right)\leq \exp\left(-(u+\ln c_{4})_{+}\right). \end{aligned}$$

Integration by parts yields

$$\displaystyle \begin{aligned} {\mathbb E} k\text{-}\max_{i}|X_i|\leq \left(C_{10}+3C_9(1-\ln c_{4})\right)s \end{aligned}$$

and the induction step is shown in this case provided that \(C_8\geq C_{10}+3C_9(1-\ln c_{4})\). □

To obtain Corollary 3.6 we used the following lemma.

Lemma 3.24

Assume that X is a symmetric isotropic log-concave vector in \({\mathbb R}^n\) . Then

$$\displaystyle \begin{aligned} t^*( p,X)\sim \frac{ n- p}{n} \quad \mathit{\text{for }} n > p\geq n/4. \end{aligned} $$
(3.25)

and

$$\displaystyle \begin{aligned} t^*( k/2,X)\sim t^*(k,X)\sim t(k,X) \quad \mathit{\text{for }} k\leq n/2. \end{aligned} $$
(3.26)

Proof

Observe that

$$\displaystyle \begin{aligned} \sum_{i=1}^n{\mathbb P}(|X_i|\leq t^*( p,X))=n- p. \end{aligned}$$

Thus Lemma 3.15 implies that for p ≥ c 5n (with \( c_5\in (\frac 12,1)\)) we have \(t^*( p,X)\sim \frac {n- p}{n}\). Moreover, by the Markov inequality

$$\displaystyle \begin{aligned} \sum_{i=1}^n {\mathbb P}(|X_i|\ge 4) \le \frac n {16}, \end{aligned}$$

so \(t^*(n/4, X)\leq 4\). Since \(p\mapsto t^*( p,X)\) is non-increasing, we know that \(t^*( p,X)\sim 1\) for n∕4 ≤ p ≤ c_5n.

Now we will prove (3.26). We have

$$\displaystyle \begin{aligned} t^*(k,X)\leq t^*( k/2,X)\leq t( k/2,X)\leq 2t(k,X), \end{aligned}$$

so it suffices to show that \(t^*(k,X)\geq ct(k,X)\). To this end we fix k ≤ n∕2. By (3.25) we know that \(t := C_{11}t^*(k,X)\geq C_{11}t^*(n/2,X)\geq e\), so the isotropicity of X and Markov’s inequality yield \({\mathbb P}(|X_i|\geq t)\leq e^{-2}\) for all i. We may also assume that \(t\geq t^*(k,X)\). Integration by parts and Lemma 3.21 yield

$$\displaystyle \begin{aligned} {\mathbb E}|X_i|{\mathbf 1}_{\{|X_i| \ge t\}}&\leq 3t{\mathbb P}(|X_i|\geq t)+t\int_0^{\infty}{\mathbb P}(|X_i| \ge (s+3)t) ds \\ &\leq 3t{\mathbb P}(|X_i|\geq t)+t\int_0^{\infty}{\mathbb P}(|X_i|\ge t) e^{-s}ds\leq 4t {\mathbb P}(|X_i|\geq t). \end{aligned} $$

Therefore

$$\displaystyle \begin{aligned} \sum_{i=1}^{n} {\mathbb E}|X_i|{\mathbf 1}_{\{|X_i| \ge t\}} \leq 4t\sum_{i=1}^{n} {\mathbb P}(|X_i| \ge t) \leq 4t\sum_{i=1}^{n} {\mathbb P}(|X_i| \ge t^*(k,X))\leq 4kt, \end{aligned}$$

so \(t(k, X)\leq 4C_{11}t^*(k, X)\). □