1 Introduction

The concentration function for a random variable, say \(X\), is defined by

$$\begin{aligned} Q(X;\lambda ) = \sup _x \, \mathbf{P}\{x \le X \le x+\lambda \}, \quad \lambda \ge 0. \end{aligned}$$
(1.1)

For a long time, probabilists have investigated the asymptotic behavior of this function for sums

$$\begin{aligned} S_n = X_1 + \cdots + X_n \end{aligned}$$

of independent random variables \(X_k\), either in terms of the concentration functions of the summands or with respect to the growing parameter \(n\) in the i.i.d. case. Let us start with an estimate that involves the normalized truncated second moment at level \(\lambda \)

$$\begin{aligned} D(X;\lambda ) = \frac{1}{\lambda ^2}\, \mathbf{E}\, \big (\min \{|X|,\lambda \}\big )^2, \end{aligned}$$

a quantity playing a crucial role in the study of distributions of sums, cf. [14, 21, 22]. It was shown by Miroshnikov and Rogozin [19] that given \(\lambda _k > 0,\, k = 1,\ldots ,n\), if

$$\begin{aligned} \lambda \ge \max _{1 \le k \le n} \lambda _k, \end{aligned}$$
(1.2)

one then has

$$\begin{aligned} Q(S_n;\lambda ) \le C\lambda \bigg (\sum _{k=1}^n \lambda _k^2 \, D(\tilde{X}_k;\lambda _k)\, Q^{-2}(X_k;\lambda _k)\bigg )^{-1/2} \end{aligned}$$
(1.3)

up to some absolute constant \(C\), where \(\tilde{X}_k = X_k - X_k^{\prime }\) with \(X_k^{\prime }\) being an independent copy of \(X_k\) (cf. also [20] and [21], Theorem 2.16). This inequality generalizes and sharpens some previous results going back to Kolmogorov [18], see [9, 10, 16, 17, 23, 24]. Many papers are devoted to the i.i.d. case and to the rate of decay of \(Q(S_n;\lambda )\), cf. [8, 12, 13, 26] and references therein.

One natural question in connection with the inequality (1.3) is to determine conditions under which its right-hand side may be simplified and sharpened by removing the factors \(D(\tilde{X}_k;\lambda _k)\). In general, they may not be removed, as can already be seen in the example where all \(X_k\) are normal. To clarify the picture, we will examine the behavior of the concentration function for the broader class of log-concave distributions.

A random variable \(X\) is said to have a log-concave distribution if it has a density of the form \(p(x) = e^{-V(x)}\), where \(V:\mathbf{R}\rightarrow (-\infty ,\infty ]\) is a convex function (in the generalized sense). Many standard probability distributions on the real line belong to this class, which is of considerable interest, especially in convex geometry. This class includes, for example, all marginals of random vectors uniformly distributed over arbitrary convex bodies in \(\mathbf{R}^n\). Specializing to log-concave distributions, it turns out to be possible to give a simple two-sided bound for the concentration function.

Theorem 1.1

If \((X_k)_{1 \le k \le n}\) are independent and have log-concave distributions, then for all \(\lambda \ge 0\),

$$\begin{aligned} \frac{1}{\sqrt{12}}\, \frac{\lambda }{\sqrt{\mathrm{Var}(S_n) + \frac{\lambda ^2}{12}}} \le Q(S_n;\lambda ) \le \frac{\lambda }{\sqrt{\mathrm{Var}(S_n) + \frac{\lambda ^2}{12}}}. \end{aligned}$$
(1.4)

Moreover, the left inequality continues to hold without the log-concavity assumption.

Here, the left-hand side vanishes when \(\mathrm{Var}(S_n) = \infty \). Note, however, that random variables with log-concave distributions have finite moments of any order.
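
For illustration, here is a small numerical check of (1.4), assuming SciPy is available; the helper function concentration and the choice of exponential summands below are ours and serve only as an example.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import gamma

def concentration(dist, lam):
    """Q(X; lam) = sup_x P{x <= X <= x + lam}, computed by one-dimensional maximization."""
    objective = lambda x: -(dist.cdf(x + lam) - dist.cdf(x))
    res = minimize_scalar(objective, method="bounded",
                          bounds=(dist.ppf(1e-8) - lam, dist.ppf(1 - 1e-8)))
    return -res.fun

# S_n = X_1 + ... + X_n with X_k ~ Exp(1) (log-concave), so S_n ~ Gamma(n) and Var(S_n) = n.
n, lam = 5, 1.0
S_n = gamma(n)
Q = concentration(S_n, lam)
upper = lam / np.sqrt(S_n.var() + lam ** 2 / 12)
lower = upper / np.sqrt(12)
print(f"{lower:.3f} <= Q(S_n; 1) = {Q:.3f} <= {upper:.3f}")
# prints (approximately): 0.128 <= Q(S_n; 1) = 0.193 <= 0.444
```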

Now, let us look at the desired sharpening of (1.3), namely, for positive \(\lambda _1,\ldots ,\lambda _n\) and suitable \(\lambda >0\),

$$\begin{aligned} Q(S_n;\lambda ) \le C\lambda \bigg (\sum _{k=1}^n \lambda _k^2\, Q^{-2}(X_k;\lambda _k)\bigg )^{-1/2}. \end{aligned}$$
(1.5)

Suppose that the \(X_k\) are independent and each has a log-concave distribution, in which case the distribution of the sum \(S_n\) is log-concave as well. For \(\varepsilon > 0\), we may apply the lower bound in (1.4) to \(\varepsilon S_n\) and the upper bound to each \(\varepsilon X_k\) (in place of \(S_n\)). Then, we would get from (1.5)

$$\begin{aligned} \varepsilon ^2\, \mathrm{Var}(S_n) + \frac{1}{12}\, \lambda ^2 \ge \frac{1}{12\, C^2}\, \Big (\varepsilon ^2\, \mathrm{Var}(S_n) + \frac{1}{12}\, \sum _{k=1}^n \lambda _k^2\Big ). \end{aligned}$$

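In more detail, the upper bound in (1.4), applied to each \(\varepsilon X_k\) with parameter \(\lambda _k\), gives

$$\begin{aligned} \sum _{k=1}^n \lambda _k^2\, Q^{-2}(\varepsilon X_k;\lambda _k) \ \ge \ \sum _{k=1}^n \Big (\varepsilon ^2\, \mathrm{Var}(X_k) + \frac{\lambda _k^2}{12}\Big ) \ = \ \varepsilon ^2\, \mathrm{Var}(S_n) + \frac{1}{12}\, \sum _{k=1}^n \lambda _k^2, \end{aligned}$$

while the lower bound in (1.4), applied to \(\varepsilon S_n\) with parameter \(\lambda \), gives \(Q^2(\varepsilon S_n;\lambda ) \ge \frac{1}{12}\,\lambda ^2\big /\big (\varepsilon ^2\,\mathrm{Var}(S_n) + \frac{\lambda ^2}{12}\big )\). Combining these two estimates with (1.5), applied to the sum \(\varepsilon S_n = \varepsilon X_1 + \cdots + \varepsilon X_n\), and squaring leads to the inequality displayed above.
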
Letting \(\varepsilon \rightarrow 0\), the above inequality yields

$$\begin{aligned} \lambda ^2 \ge \frac{1}{12\,C^2}\,\sum _{k=1}^n \lambda _k^2. \end{aligned}$$
(1.6)

Hence, this restriction is necessary for (1.5) to hold in the class of all log-concave distributions. The appearance of the \(D\)-functional in (1.3) is therefore quite natural and may partly be explained by the broader range (1.2) of admissible values of \(\lambda _k\) and \(\lambda \). Nevertheless, for smaller regions such as (1.6), the refined inequality (1.5) does hold in general. We prove:

Theorem 1.2

Let \((X_k)_{1 \le k \le n}\) be independent random variables. For all \(\lambda _k > 0,\, k=1,\ldots ,n\), and all

$$\begin{aligned} \lambda \ge \Big (\sum _{k=1}^n \lambda _k^2\Big )^{1/2}, \end{aligned}$$
(1.7)

we have, with some universal constant \(C\),

$$\begin{aligned} Q(S_n;\lambda ) \le C\lambda \bigg (\sum _{k=1}^n \lambda _k^2\, Q^{-2}(X_k;\lambda _k)\bigg )^{-1/2}. \end{aligned}$$
(1.8)

This bound relies upon certain properties of another important functional, the maximum of the density, which we consider in the next three sections.

2 Maximum of density

Given a random variable \(X\), put

$$\begin{aligned} M(X) = \mathrm{ess\,sup}_x \, p(x), \end{aligned}$$

provided that \(X\) has an absolutely continuous distribution on the real line with density \(p(x)\), and put \(M(X) = \infty \), otherwise.

We write \(U \sim \mathfrak{U }(a,b)\), when a random variable \(U\) is uniformly distributed in the finite interval \((a,b)\). A number of interesting relations for the concentration function can be obtained using the obvious general identity

$$\begin{aligned} Q(X;\lambda ) = \lambda M(X + \lambda U), \quad \lambda >0, \end{aligned}$$
(2.1)

where \(U \sim \mathfrak{U }(0,1)\) is independent of \(X\). For example, Theorem 1.1 follows from:

Proposition 2.1

If a random variable \(X\) has a log-concave distribution, then

$$\begin{aligned} \frac{1}{12} \le \mathrm{Var}(X) M^2(X) \le 1. \end{aligned}$$
(2.2)

Moreover, the left inequality holds without the log-concavity assumption.

One should note that the quantity \(\mathrm{Var}(X) M^2(X)\) is affine invariant (so it depends neither on the mean nor the variance of \(X\)). The equality on the left-hand side of (2.2) is attained for the uniform distribution in any finite interval, while on the right-hand side, there is equality when \(X\) has a one-sided exponential distribution with density \(p(x) = e^{-x}\, (x>0)\).
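
Both extremal cases are easy to verify directly: for \(U \sim \mathfrak{U }(a,b)\) and for \(X\) with density \(p(x) = e^{-x}\, (x>0)\), we have, respectively,

$$\begin{aligned} \mathrm{Var}(U)\, M^2(U) = \frac{(b-a)^2}{12} \cdot \frac{1}{(b-a)^2} = \frac{1}{12}, \qquad \mathrm{Var}(X)\, M^2(X) = 1 \cdot 1 = 1. \end{aligned}$$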

The lower bound \(\mathrm{Var}(X) M^2(X) \ge \frac{1}{12}\) is elementary; multidimensional extensions of this inequality were considered in [15], cf. also [3].

If \(X\) has a symmetric log-concave distribution, its density, say \(p\), attains its maximum at the origin, so \(M(X) = p(0)\). In this case, the upper bound in (2.2) can be sharpened to

$$\begin{aligned} \mathrm{Var}(X)\, p^2(0) \le \frac{1}{2}, \end{aligned}$$
(2.3)

where the two-sided exponential distribution with density \(p(x) = \frac{1}{2}\,e^{-|x|}\) plays an extremal role. This result was obtained by Hensley [15]; in [5], the symmetry assumption was relaxed to the property that \(X\) has median at zero.
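
Indeed, for the two-sided exponential distribution, \(p(0) = \frac{1}{2}\) and \(\mathrm{Var}(X) = 2\), so that

$$\begin{aligned} \mathrm{Var}(X)\, p^2(0) = 2 \cdot \frac{1}{4} = \frac{1}{2}. \end{aligned}$$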

However, if we want to replace \(p(0)\) with \(M(X)\) in the general log-concave case, the constant \(\frac{1}{2}\) in (2.3) should be increased. The sharp bound, as in (2.2), is due to Fradelizi [11], who stated it for marginals of convex bodies in isotropic position (i.e., for a special family of log-concave distributions, which suffices to approximate general log-concave distributions on the line). For the reader’s convenience, we include below an alternative short proof.

Proof of Proposition 2.1

We may assume that \(M(X)=1\).

For the lower bound, put \(H(x) = \mathbf{P}\{|X - \mathbf{E}X| \ge x\},\, x \ge 0\). Then \(H(0)=1\) and \(H^{\prime }(x) \ge -2\) a.e. (since the density of \(X\) is bounded by \(1\), the density of \(|X - \mathbf{E}X|\) is bounded by \(2\)), which gives \(H(x) \ge 1 - 2x\), so

$$\begin{aligned} \mathrm{Var}(X) \, = \, 2\int \limits _0^\infty xH(x)\,dx&\ge 2\int \limits _0^{1/2} xH(x)\,dx \\&\ge 2\int \limits _0^{1/2} x(1 - 2x)\,dx \, = \, \frac{1}{12}. \end{aligned}$$

For the upper bound, suppose that the distribution of \(X\) is supported on the interval \((a,b)\), where it has a positive log-concave density \(p\). The distribution function \(F(x) = \mathbf{P}\{X \le x\}\) is continuous and strictly increasing on \((a,b)\), so it has an inverse function \(F^{-1}:(0,1)\rightarrow (a,b)\). Moreover, by the log-concavity of \(p\), the function \(I(t) = p(F^{-1}(t))\) is positive and concave on \((0,1)\) (cf. [4], Proposition A.1). We extend it to \([0,1]\) by continuity; then, \(I\) attains its maximum at some point \(\alpha \in [0,1]\), so that \(I(\alpha ) = M(X) = 1\).

By the concavity of \(I\), it admits a lower pointwise bound

$$\begin{aligned} I(t) \ge I_\alpha (t) \equiv \min \Big \{\frac{t}{\alpha },\frac{1-t}{1-\alpha }\Big \}, \quad 0<t<1. \end{aligned}$$
(2.4)

Since \(F^{-1}\) has the distribution function \(F\) under the Lebesgue measure on \((0,1)\), we get from (2.4) that

$$\begin{aligned} \mathrm{Var}(X)&= \frac{1}{2}\,\int \limits _0^1\!\!\int \limits _0^1 (F^{-1}(t) - F^{-1}(s))^2\,dt\,ds \\&= \frac{1}{2}\,\int \limits _0^1\!\!\int \limits _0^1 \left( \int \limits _t^s \frac{du}{I(u)}\right) ^2\,dt\,ds \ \le \ \frac{1}{2}\,\int \limits _0^1\!\!\int \limits _0^1 \left( \int \limits _t^s \frac{du}{I_\alpha (u)}\right) ^2\,dt\,ds. \\ \end{aligned}$$

It is now a simple exercise to check that the latter expression is maximized for \(\alpha = 0\) and \(\alpha = 1\), in which case it is equal to \(1\).\(\square \)
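
One way to carry out the exercise mentioned at the end of the proof (a sketch; other routes are possible): for \(0 < \alpha < 1\), the function \(I_\alpha \) corresponds, via \(I_\alpha (t) = p_\alpha (F_\alpha ^{-1}(t))\), to the two-piece exponential density

$$\begin{aligned} p_\alpha (x) = e^{x/\alpha } \ \ (x \le 0), \qquad p_\alpha (x) = e^{-x/(1-\alpha )} \ \ (x \ge 0), \end{aligned}$$

with maximum \(1\) at the origin, so that the latter double integral is just the variance of this distribution. A direct computation gives \(\mathbf{E}X = (1-\alpha )^2 - \alpha ^2\) and \(\mathbf{E}X^2 = 2\alpha ^3 + 2(1-\alpha )^3\), whence the variance equals \(1 - 2\alpha (1-\alpha ) \le 1\), with the value \(1\) attained only in the limit cases \(\alpha = 0\) and \(\alpha = 1\) (the one-sided exponential distribution).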

Let us now apply Proposition 2.1 to random variables of the form \(X + \lambda U\) with \(U \sim \mathfrak{U }(0,1)\) being independent of \(X\). Then (2.2) becomes

$$\begin{aligned} \frac{1}{12} \le \Big (\mathrm{Var}(X) + \frac{\lambda ^2}{12}\Big ) M^2(X + \lambda U) \le 1, \end{aligned}$$

where for the right inequality, we should assume that \(X\) has a log-concave distribution. In view of the identity (2.1), we arrive at:

Corollary 2.2

If a random variable \(X\) has a log-concave distribution, then for all \(\lambda \ge 0\),

$$\begin{aligned} \frac{\lambda }{\sqrt{12\,\big (\mathrm{Var}(X) + \frac{\lambda ^2}{12}\big )}} \le Q(X;\lambda ) \le \frac{\lambda }{\sqrt{\mathrm{Var}(X) + \frac{\lambda ^2}{12}}}. \end{aligned}$$
(2.5)

Moreover, the left inequality holds without the log-concavity assumption.

Finally, as we have already mentioned, the convolution of log-concave distributions is always log-concave (cf. [7]). This important result allows us to get Theorem 1.1 as a direct consequence of Corollary 2.2.

3 Slices of the cube

For i.i.d. random variables \((U_k)_{1 \le k \le n}\), consider the weighted sums

$$\begin{aligned} X = \lambda _1 U_1 + \cdots + \lambda _n U_n \quad (\lambda _1^2 + \cdots + \lambda _n^2 = 1). \end{aligned}$$
(3.1)

Proposition 2.1 allows one to bound the maximum of the density of \(X\) in terms of the variance of \(U_1\) (regardless of the coefficients \(\lambda _k\)). Applying (2.2), we obtain

$$\begin{aligned} \frac{1}{\sqrt{12\,\mathrm{Var}(U_1)}} \le M(X) \le \frac{1}{\sqrt{\mathrm{Var}(U_1)}}, \end{aligned}$$

where in the right inequality we assume that \(U_1\) has a log-concave distribution. Moreover, if in addition \(U_1\) has a symmetric distribution (about its median), then by (2.3) one may sharpen the above to

$$\begin{aligned} \frac{1}{\sqrt{12\,\mathrm{Var}(U_1)}} \le M(X) \le \frac{1}{\sqrt{2\,\mathrm{Var}(U_1)}}. \end{aligned}$$

For instance, when \(U_1 \sim \mathfrak{U }\big (-\frac{1}{2},\frac{1}{2}\big )\), we have \(1 \le M(X) \le \sqrt{6}\). In this important particular example, \(M(X)\) represents the \((n-1)\)-dimensional volume of the corresponding slice of the unit cube \(\big [-\frac{1}{2},\frac{1}{2}\big ]^n \subset \mathbf{R}^n\) obtained by intersecting it with the hyperplane \(\lambda _1 x_1 + \cdots + \lambda _n x_n = 0\). In fact, in this case, the general upper bound \(\sqrt{6}\) for \(M(X)\) is not optimal. A remarkable observation in this respect was made by Ball, who proved the following:

Proposition 3.1

[1] If \(U_1 \sim \mathfrak{U }\big (-\frac{1}{2},\frac{1}{2}\big )\), then for any weighted sum \(X\) in (3.1),

$$\begin{aligned} 1 \le M(X) \le \sqrt{2}. \end{aligned}$$
(3.2)

The equality on the right-hand side is attained for \(X = \frac{1}{\sqrt{2}}\,U_1 + \frac{1}{\sqrt{2}}\,U_2\). This result was used in [2] to construct a simple counterexample in the famous Busemann–Petty problem.
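
The extremal value is easy to verify: \(U_1 + U_2\) has the triangular density \(1 - |x|\) on \((-1,1)\), so that

$$\begin{aligned} M\Big (\frac{1}{\sqrt{2}}\,U_1 + \frac{1}{\sqrt{2}}\,U_2\Big ) = \sqrt{2}\, \max _{|x| < 1}\, (1 - |x|) = \sqrt{2}. \end{aligned}$$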

For our purposes, we will need to control non-central sections of the cube, as well. More precisely, we need to bound from below the density of \(X\) in a neighborhood of the origin.

Proposition 3.2

Any weighted sum \(X\) in (3.1), with \(U_1 \sim \mathfrak{U }\big (-\frac{1}{2},\frac{1}{2}\big )\), has a density \(p\) such that with some universal constant \(c>0\)

$$\begin{aligned} \inf _{|x| < \frac{1}{2}} p(x) \ge c. \end{aligned}$$
(3.3)

The example \(X = U_1\) shows that the interval \(|x|<\frac{1}{2}\) cannot be enlarged in (3.3).

To get an idea of the best possible universal constant \(c\), let us look at the limit case \(\lambda _k = \frac{1}{\sqrt{n}}\) with \(n\) large, when the density \(p\) of \(X\) approximates the normal density

$$\begin{aligned} \varphi _\sigma (x) = \frac{1}{\sigma }\, \varphi (x/\sigma ) = \frac{1}{\sqrt{2\pi \sigma ^2}}\,e^{-x^2/2\sigma ^2} \end{aligned}$$

with mean zero and variance \(\sigma ^2 = \frac{1}{12}\). We then have \(c \le \varphi _\sigma (1/2) = \sqrt{\frac{6}{\pi }}\ e^{-3/2} = 0.308...\)

Proof of Proposition 3.2

The random variable \(X\) has a symmetric log-concave density \(p\) on the interval \((-b,b)\), where \(b = \frac{1}{2}\,(\lambda _1 + \cdots + \lambda _n) \ge \frac{1}{2}\). In particular, \(p\) is non-increasing in \(0 \le x < b\). We use the same notations as in the proof of Proposition 2.1. In particular, \(I(t) = p(F^{-1}(t))\) for \(0 < t < 1\), where \(F^{-1}:(0,1)\rightarrow (-b,b)\) is the inverse of the distribution function \(F(x) = \mathbf{P}\{X \le x\}\).

Since the function \(I\) is concave and symmetric about \(\frac{1}{2}\), the inequality (2.4) reads as

$$\begin{aligned} I(t) \ge 2I(1/2)\, \min \{t,1-t\}, \quad 0 < t < 1. \end{aligned}$$
(3.4)

Here, \(I(1/2) = p(0) = M(X) \ge 1\), where we applied (3.2). Hence, after the substitution \(t = F(x)\), we get from (3.4)

$$\begin{aligned} p(x) \ge 2\, \min \{F(x),1-F(x)\}, \quad |x| < b. \end{aligned}$$
(3.5)

The right-hand side of (3.5) may further be bounded from below by virtue of the upper bound in (3.2). Consider a random variable

$$\begin{aligned} Y = tU_{n+1} + sX, \quad t,s > 0, \ t^2 + s^2 = 1, \end{aligned}$$
(3.6)

where \(U_{n+1} \sim \mathfrak{U }\big (-\frac{1}{2},\frac{1}{2}\big )\) is independent of \(X\). It has density

$$\begin{aligned} q(x) = \frac{1}{t}\, \mathbf{P}\Bigg \{\frac{x-\frac{t}{2}}{s} \le X \le \frac{x+\frac{t}{2}}{s}\Bigg \}, \end{aligned}$$

which is also symmetric and log-concave. In particular,

$$\begin{aligned} M(Y) = q(0) = \frac{1}{t}\,\mathbf{P}\Big \{|X| \le \frac{t}{2s}\Big \} = \frac{1}{t}\, \Bigg (2F\Big (\frac{t}{2s}\Big ) - 1\Bigg ). \end{aligned}$$

Since Proposition 3.1 is applicable to \(Y\), we have \(M(Y) \le \sqrt{2}\), that is,

$$\begin{aligned} 2F\Big (\frac{t}{2s}\Big ) - 1 \le t\sqrt{2}. \end{aligned}$$

Equivalently, after the substitution \(x = \frac{t}{2s}\), we get \(2F(x) - 1 \le \frac{2x}{\sqrt{1 + 4x^2}}\,\sqrt{2}\), so

$$\begin{aligned} 1 - F(x) \ge \frac{1}{2} - \frac{x\sqrt{2}}{\sqrt{1 + 4x^2}} \ge \frac{1}{2}\,\Big (\frac{1}{2} - x\Big ), \quad x \ge 0. \end{aligned}$$
(3.7)

To prove the last inequality, rewrite it as

$$\begin{aligned} \psi (x) \equiv \frac{x\sqrt{2}}{\sqrt{1 + 4x^2}} \le \frac{1}{4} + \frac{1}{2}\, x \equiv \xi (x). \end{aligned}$$
(3.8)

The function \(\psi \) has a decreasing derivative \(\psi ^{\prime }(x) = \frac{\sqrt{2}}{(1+4x^2)^{3/2}}\), so it is concave, while the function \(\xi \) is linear. Since also \(\psi (\frac{1}{2}) = \xi (\frac{1}{2}) = \frac{1}{2}\) and \(\psi ^{\prime }(\frac{1}{2}) = \xi ^{\prime }(\frac{1}{2}) = \frac{1}{2}\), (3.7)–(3.8) immediately follow.

By the symmetry of the distribution of \(X\) about the origin, together with (3.7), we also have a similar bound on the negative half-axis, and the two bounds may be combined as

$$\begin{aligned} \min \{F(x),1-F(x)\} \ge \frac{1}{2}\,\Big (\frac{1}{2} - |x|\Big ). \end{aligned}$$

Note that this inequality is of interest only for \(|x| \le \frac{1}{2}\). Combining it with (3.5), we thus obtain a lower bound on the density \(p(x)\), namely

$$\begin{aligned} p(x)\ge \frac{1}{2} - |x|, \quad |x| \le \frac{1}{2}. \end{aligned}$$
(3.9)

This inequality is effective when \(x\) stays away from the end points \(\pm \frac{1}{2}\), so an additional argument will be needed to get a uniform lower bound. Nevertheless, let us proceed and integrate (3.9) over the interval \((x,\frac{1}{2})\) to get

$$\begin{aligned} \mathbf{P}\Big \{x \le X \le \frac{1}{2}\Big \} \ge \frac{1}{2}\,\Big (\frac{1}{2} - x\Big )^2, \quad 0 \le x \le \frac{1}{2}. \end{aligned}$$
(3.10)

Next, write \(X = \lambda _1 U_1 + \cdots + \lambda _n U_n\), where we may assume that \(1 > \lambda _1 \ge \cdots \ge \lambda _n \ge 0\), and as before, \(\lambda _1^2 + \cdots + \lambda _n^2 = 1\). In particular, \(b > \frac{1}{2}\), so that the density \(p\) is continuous in \(|x| \le \frac{1}{2}\). Recall that the distribution of \(X\) is symmetric and unimodal (as any other log-concave distribution), so we only need to estimate \(p\) at the point \(\frac{1}{2}\).

If \(\lambda _1\) is small enough, the distribution function \(F\) of \(X\) is close to the normal distribution function \(\Phi _\sigma \) with mean zero and variance \(\sigma ^2 = \frac{1}{12}\). In particular, we have a Berry–Esseen bound

$$\begin{aligned} \sup _x |F(x) - \Phi _\sigma (x)| \le CL_3, \end{aligned}$$
(3.11)

where \(C\) is a universal constant and \(L_3\) is the Lyapunov ratio, which in our case is given by

$$\begin{aligned} L_3 = \frac{\sum _{k=1}^n \mathbf{E}\, |\lambda _k U_k|^3}{\big (\sum _{k=1}^n \mathbf{E}\, |\lambda _k U_k|^2\big )^{3/2}} = \frac{\mathbf{E}\, |U_1|^3}{\big (\mathbf{E}\, U_1^2\big )^{3/2}}\ \sum _{k=1}^n \lambda _k^3 = \frac{3\sqrt{3}}{4}\, \sum _{k=1}^n \lambda _k^3. \end{aligned}$$

Since \(\lambda _k \le \lambda _1\), we have an estimate

$$\begin{aligned} L_3 \le \frac{3\sqrt{3}}{4}\,\lambda _1 < 1.3\,\lambda _1. \end{aligned}$$

The inequality (3.11) holds, for example, with \(C = 0.91\) (cf. [26], although better constants are known), which thus gives

$$\begin{aligned} \sup _x |F(x) - \Phi _\sigma (x)| < 1.3\,\lambda _1. \end{aligned}$$

Applying (3.5), we therefore get, for \(x > 0\),

$$\begin{aligned} p(x) \ge 2 (1-F(x)) \ge 2\,\big (1 - \Phi _\sigma (x) - 1.3\,\lambda _1\big ). \end{aligned}$$

For \(x = \frac{1}{2}\), we have \(\Phi _\sigma (x) = \Phi (x/\sigma ) = \Phi (\sqrt{3}) < 0.96\), so,

$$\begin{aligned} p(1/2) \ge 0.08 - 2.6\,\lambda _1. \end{aligned}$$
(3.12)

This bound is sufficient for the assertion of Proposition 3.2, if \(\lambda _1\) is small enough. If not, put \(t = \lambda _1\), \(s = \sqrt{1 - \lambda _1^2}\), and write \(X = tU_1 + sY\). From this representation,

$$\begin{aligned} p(x) = \frac{1}{t}\, \mathbf{P}\Big \{\frac{x-\frac{t}{2}}{s} \le Y \le \frac{x+\frac{t}{2}}{s}\Big \}, \end{aligned}$$

implying

$$\begin{aligned} p(1/2) \ge \frac{1}{t}\, \mathbf{P}\Big \{\frac{1-t}{2s} \le Y \le \frac{1}{2}\Big \}. \end{aligned}$$

But since \(Y\) is of the same type as \(X\), the right probability may be estimated from below by virtue of the general inequality (3.10) with \(x = \frac{1-t}{2s}\). It gives

$$\begin{aligned} p(1/2) \ge \frac{(s-(1-t))^2}{8ts^2}. \end{aligned}$$

To simplify, first write

$$\begin{aligned} s-(1-t) = \sqrt{1 - t^2} - (1-t) = \frac{1 - t^2 - (1-t)^2}{\sqrt{1 - t^2} + (1-t)} = \frac{2t(1-t)}{s+(1-t)}. \end{aligned}$$

Since

$$\begin{aligned} 1 - t&= 1 - \sqrt{1 - s^2} \ = \ \frac{s^2}{1 + \sqrt{1 - s^2}} \ \ge \ \frac{s^2}{2}, \\ s + (1-t)&= s + \frac{s^2}{1 + \sqrt{1 - s^2}} \ \le \ s + s^2 \ \le \ 2s, \end{aligned}$$

we get

$$\begin{aligned} s-(1-t) = \frac{2t(1-t)}{s+(1-t)} \ge \frac{ts}{2}, \end{aligned}$$

and we see that

$$\begin{aligned} p(1/2) \ge \frac{(s-(1-t))^2}{8ts^2} \ge \frac{t}{32} = \frac{\lambda _1}{32}. \end{aligned}$$

Together with (3.12), this gives

$$\begin{aligned} p(1/2) \ge \max \Big \{0.08 - 2.6\,\lambda _1, \frac{\lambda _1}{32}\Big \} \ge 0.00095\ldots \end{aligned}$$

\(\square \)
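
As a numerical illustration of Proposition 3.2 (a sketch assuming NumPy; the Fourier-inversion helper below, the grid parameters, and the choice of equal coefficients are ours), one may recover the density \(p\) of \(X\) from the characteristic function \(\mathbf{E}\, e^{i\xi X} = \prod _{k=1}^n \frac{\sin (\lambda _k \xi /2)}{\lambda _k \xi /2}\) and inspect its values on \(|x| \le \frac{1}{2}\):

```python
import numpy as np

def density(x, lambdas, xi_max=2000.0, m=400001):
    """Density at x of X = sum_k lambdas[k] * U_k, U_k ~ Uniform(-1/2, 1/2),
    computed as p(x) = (1/pi) * integral_0^infinity phi(xi) * cos(xi * x) d(xi),
    where phi is the (real, even) characteristic function of X."""
    xi = np.linspace(0.0, xi_max, m)
    phi = np.ones_like(xi)
    for lam in lambdas:
        phi *= np.sinc(lam * xi / (2 * np.pi))   # np.sinc(z) = sin(pi z) / (pi z)
    integrand = phi * np.cos(xi * x)
    h = xi[1] - xi[0]                            # trapezoidal rule on [0, xi_max]
    return (integrand.sum() - 0.5 * (integrand[0] + integrand[-1])) * h / np.pi

for n in (2, 3, 10, 50):
    lambdas = np.full(n, 1 / np.sqrt(n))         # equal coefficients, sum of squares = 1
    print(f"n = {n:2d}:  p(0) = {density(0.0, lambdas):.2f},  p(1/2) = {density(0.5, lambdas):.2f}")
# p(0) stays within [1, sqrt(2)] in accordance with Proposition 3.1, while p(1/2)
# remains bounded away from zero, approaching 0.308... for large n, as in the
# remark preceding the proof.
```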

4 Proof of Theorem 1.2

The proof of Theorem 1.2 is based upon the following general property of the \(M\)-functional.

Proposition 4.1

If random variables \((X_k)_{1 \le k \le n}\) are independent, then for the sum \(S_n = X_1 + \cdots + X_n\), we have

$$\begin{aligned} M^{-2}(S_n) \ge \frac{1}{2} \sum _{k=1}^n M^{-2}(X_k). \end{aligned}$$
(4.1)

With the constant \(\frac{1}{e}\) (in place of \(\frac{1}{2}\)), such an estimate, including its multidimensional extension, can be obtained as a consequence of the sharp Young inequality for the \(L^p\)-norms of convolutions, cf. [6]. With some absolute constant, (4.1) also follows from the Miroshnikov–Rogozin inequality (1.3) for concentration functions, by applying it with \(\lambda _k = \lambda \) and letting \(\lambda \rightarrow 0\).

As for the current formulation, let us first note that in the particular case where \(X_k = \lambda _k U_k,\, U_k \sim \mathfrak{U }\big (-\frac{1}{2},\frac{1}{2}\big )\), the inequality (4.1) is equivalent to the upper bound of Proposition 3.1 (Ball’s result). This also shows that the constant \(\frac{1}{2}\) cannot be improved.
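
Indeed, in this case \(M(\lambda _k U_k) = \frac{1}{\lambda _k}\), so, assuming as in (3.1) that \(\lambda _1^2 + \cdots + \lambda _n^2 = 1\), the inequality (4.1) turns into

$$\begin{aligned} M^{-2}(\lambda _1 U_1 + \cdots + \lambda _n U_n) \ \ge \ \frac{1}{2}\, \sum _{k=1}^n \lambda _k^2 \ = \ \frac{1}{2}, \end{aligned}$$

that is, \(M(X) \le \sqrt{2}\) for any weighted sum \(X\) in (3.1).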

To derive the general case, we apply the following delicate comparison result due to Rogozin:

Proposition 4.2

[25] Given independent random variables \((X_k)_{1 \le k \le n}\) with bounded densities, let \((U_k)_{1 \le k \le n}\) be independent random variables uniformly distributed in the intervals \((-\frac{1}{2 M(X_k)},\frac{1}{2 M(X_k)})\), so that

$$\begin{aligned} M(X_k) = M(U_k), \quad k = 1,\ldots , n. \end{aligned}$$
(4.2)

Then,

$$\begin{aligned} M(X_1 + \cdots + X_n) \le M(U_1 + \cdots + U_n). \end{aligned}$$
(4.3)

Hence, starting with (4.2)–(4.3) and applying the right inequality in (3.2) to the normalized sum

$$\begin{aligned} X = \frac{1}{\sqrt{12\sigma ^2}}\, (U_1 + \cdots + U_n) \quad \mathrm{with} \quad \sigma ^2 = \mathrm{Var}(U_1 + \cdots + U_n), \end{aligned}$$

we arrive at the bound (4.1). Indeed, in this case, by (4.2),

$$\begin{aligned} \sigma ^2 = \sum _{k=1}^n \mathrm{Var}(U_k) = \frac{1}{12}\, \sum _{k=1}^n M^{-2}(U_k) = \frac{1}{12}\, \sum _{k=1}^n M^{-2}(X_k), \end{aligned}$$

and

$$\begin{aligned} M^{-2}(U_1 + \cdots + U_n)&= 12\sigma ^2 M^{-2}(X) \\&= M^{-2}(X) \sum _{k=1}^n M^{-2}(X_k) \ \ge \ \frac{1}{2}\,\sum _{k=1}^n M^{-2}(X_k), \end{aligned}$$

where (3.2) was used on the last step. It remains to apply (4.3), thus leading to (4.1).

Note also that, without loss of generality, one may always restrict (4.1) to summands having bounded densities.

Proof of Theorem 1.2

The assertion of Theorem 1.2 is homogeneous with respect to the sequence \((X_k,\lambda _k)\) and the parameter \(\lambda \): given \(c>0\), if we replace in (1.8) each \(X_k\) with \(cX_k\), \(\lambda _k\) with \(c\lambda _k\), and \(\lambda \) with \(c\lambda \), then this inequality, together with the hypothesis (1.7), remains unchanged. Therefore, we may assume without loss of generality that \( \sum _{k=1}^n \lambda _k^2 = 1. \) In this case, the hypothesis (1.7) becomes \(\lambda \ge 1\).

Let independent random variables \((U_k)_{1 \le k \le n}\) be uniformly distributed in \(\big (-\frac{1}{2},\frac{1}{2}\big )\) and independent of all \(X_j\). First, we apply Proposition 4.1 to the random variables \(X_k + \lambda _k U_k\), so as to get

$$\begin{aligned} M^{-2}(S_n + X) \ge \frac{1}{2}\, \sum _{k=1}^n M^{-2}(X_k + \lambda _k U_k), \end{aligned}$$

where \(X = \lambda _1 U_1 + \cdots + \lambda _n U_n\). One may rewrite this inequality by virtue of the identity (2.1) as

$$\begin{aligned} M^{-2}(S_n + X) \ge \frac{1}{2}\, \sum _{k=1}^n \lambda _k^2\, Q^{-2}(X_k; \lambda _k). \end{aligned}$$
(4.4)

Denote by \(F_n\) the distribution function of \(S_n\) and by \(p\) the density of \(X\). Then, \(S_n + X\) has density

$$\begin{aligned} q(x) = \int \limits _{-\infty }^\infty p(x-y)\,dF_n(y). \end{aligned}$$

By Proposition 3.2, on the interval \(\big (-\frac{1}{2},\frac{1}{2}\big )\), \(p\) is uniformly bounded from below by a universal constant \(c>0\). Hence, for all \(x\),

$$\begin{aligned} q(x) \ge c \int \limits _{|y-x| < \frac{1}{2}} dF_n(y) = c\, \mathbf{P}\Big \{|S_n - x| < \frac{1}{2}\Big \}. \end{aligned}$$

Taking here the supremum over all \(x\) leads to \(M(S_n + X) \ge c\,Q(S_n;1)\), so, by (4.4),

$$\begin{aligned} c^{-2}\,Q^{-2}(S_n;1) \ge \frac{1}{2}\, \sum _{k=1}^n \lambda _k^2\, Q^{-2}(X_k; \lambda _k). \end{aligned}$$

This is exactly the required inequality (1.8) with \(\lambda = 1\). To cover the values \(\lambda \ge 1\), it remains to apply an elementary general bound \(Q(Y;\lambda ) \le 2\lambda \,Q(Y;1)\), cf., e.g., [14], p.20.\(\square \)
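
To illustrate Theorem 1.2 numerically (a sketch assuming SciPy; the helper below and the exponential example are ours, and the theorem does not specify the constant \(C\)), one may compute the ratio of \(Q(S_n;\lambda )\) to the right-hand side of (1.8) taken with \(C = 1\):

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import expon, gamma

def concentration(dist, lam):
    """Q(X; lam) = sup_x P{x <= X <= x + lam}, computed by one-dimensional maximization."""
    objective = lambda x: -(dist.cdf(x + lam) - dist.cdf(x))
    res = minimize_scalar(objective, method="bounded",
                          bounds=(dist.ppf(1e-8) - lam, dist.ppf(1 - 1e-8)))
    return -res.fun

# X_k ~ Exp(1) i.i.d., lambda_k = 1, and lambda = sqrt(n), which satisfies (1.7).
for n in (4, 16, 64):
    lam = np.sqrt(n)
    lhs = concentration(gamma(n), lam)            # Q(S_n; lambda), since S_n ~ Gamma(n)
    rhs_with_C_1 = lam * (n * concentration(expon(), 1.0) ** -2) ** -0.5
    print(f"n = {n:2d}:  Q(S_n; sqrt(n)) = {lhs:.3f},  ratio = {lhs / rhs_with_C_1:.3f}")
# The ratio, i.e., the constant C needed in (1.8) for this example, stays bounded
# (here below 1).
```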