Abstract
In this paper we consider a probability measure on a high-dimensional Euclidean space satisfying the Bobkov-Ledoux inequality. Bobkov and Ledoux showed in (Probab Theory Related Fields 107(3):383–400, 1997) that this entropy inequality captures the concentration phenomenon of the product exponential measure and implies the Poincaré inequality. For this reason any measure satisfying one of these inequalities shares the same concentration behavior as the exponential measure. In this paper, using the B-L inequality, we derive bounds on exponential Orlicz norms of locally Lipschitz functions. The result is close to the question posed by Adamczak and Wolff in (Probab Theory Related Fields 162:531–586, 2015) regarding moment estimates for locally Lipschitz functions, which are expected to follow from the B-L inequality.
2.1 The Bobkov-Ledoux Inequality
Let μ be a probability measure on \({\mathbb R}^d\). We assume that μ satisfies the Bobkov-Ledoux inequality, i.e. with a fixed D > 0, for any positive, locally Lipschitz function f such that \(\vert \nabla f\vert _{\infty }\leqslant f/2\) we have
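The display (2.1) itself is not reproduced in this version; in the form used by Bobkov and Ledoux [3] and by Adamczak and Wolff [1], and up to the normalization of D, we take (2.1) to read:

```latex
% Bobkov-Ledoux (modified log-Sobolev) inequality (2.1):
% for positive locally Lipschitz f with |\nabla f|_\infty \le f/2,
\operatorname{Ent}_{\mu}\bigl(f^{2}\bigr)
  \;\leqslant\; D\,{\mathbf E}_{\mu}\,\vert\nabla f\vert_{2}^{2},
\qquad\text{where}\quad
\operatorname{Ent}_{\mu}(g)
  = {\mathbf E}_{\mu}\,g\ln g-{\mathbf E}_{\mu}g\,\ln{\mathbf E}_{\mu}g .
```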
As noticed by Bobkov and Ledoux in [3], this modification of the log-Sobolev inequality is satisfied by the product exponential measure but, more importantly, it implies subexponential concentration. It is also quite easy to show that it implies the Poincaré inequality. For any smooth function g we may take f = 1 + 𝜖g with 𝜖 > 0 small enough that \(\vert \nabla f\vert _{\infty }\leqslant f/2\), which allows us to apply (2.1). In the next step divide both sides of the inequality by 𝜖², use the standard Taylor expansion and take the limit as 𝜖 tends to 0. As a result
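The limit computation can be written out explicitly. Assuming (2.1) is the modified log-Sobolev inequality \(\operatorname {Ent}_{\mu }(f^2)\leqslant D\,{\mathbf E}_{\mu }\vert \nabla f\vert _2^2\) (the form from [3]), a second-order Taylor expansion gives:

```latex
% For f = 1 + \epsilon g with \epsilon \to 0:
\operatorname{Ent}_{\mu}\bigl((1+\epsilon g)^{2}\bigr)
  = 2\epsilon^{2}\operatorname{Var}_{\mu}(g)+o(\epsilon^{2}),
\qquad
{\mathbf E}_{\mu}\vert\nabla(1+\epsilon g)\vert_{2}^{2}
  = \epsilon^{2}\,{\mathbf E}_{\mu}\vert\nabla g\vert_{2}^{2},
% dividing by \epsilon^{2} and letting \epsilon \to 0:
\operatorname{Var}_{\mu}(g)
  \;\leqslant\;\frac{D}{2}\,{\mathbf E}_{\mu}\vert\nabla g\vert_{2}^{2}.
```

The factor 1∕2 may of course be absorbed into a constant comparable to D.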
which is exactly the Poincaré inequality. Finally, notice that any locally Lipschitz function f such that both f and |∇f|2 are square integrable w.r.t. μ may be approximated in (2.2) by smooth functions. This means that the B-L inequality (2.1) is stronger than the Poincaré inequality (2.2); nevertheless, both inequalities imply the concentration phenomenon of the product exponential measure, so any measure satisfying one of them shares the same concentration result. See [3] for more details regarding this subtle connection.
As we deal with a large number of constants in what follows, it is convenient to adopt a convention. Let D′ denote a numeric constant which may vary from line to line but remains comparable to the constant D from the log-Sobolev inequality (2.1). Similarly, let C be a constant comparable to 1, and let C(α) denote a constant that depends on α only.
In [4] E. Milman noticed that the Poincaré inequality (2.2) implies the following estimate for \(p\geqslant 1\)
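The display (2.3) is missing from this version; a standard formulation of the linear moment growth implied by the Poincaré inequality, which, up to absolute constants, we take it to be, is:

```latex
% Linear moment growth under Poincaré, for locally Lipschitz f:
\Vert f-{\mathbf E}_{\mu}f\Vert_{p}
  \;\leqslant\; C\,\sqrt{D'}\;p\;
  \bigl\Vert\,\vert\nabla f\vert_{2}\,\bigr\Vert_{p},
\qquad p\geqslant 1,
```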
with f locally Lipschitz. It is easy to see that the above results in the following bound
Adamczak and Wolff conjectured in [1] that the Bobkov-Ledoux inequality (2.1) implies
They also proved the following weaker form of the conjecture
Their result is based on a tricky modification of the given function so that (2.1) can be used. In this paper we try to understand this phenomenon and to apply a more advanced form of it.
2.2 Bounds for Moments
In this section we investigate possible estimates for ∥g∥pα, for a given α > 0, when we know that g α is globally Lipschitz. These bounds will be useful when we turn to exponential Orlicz norms.
Theorem 2.1
If the measure μ satisfies (2.1), the function g is non-negative and locally Lipschitz, and \(p\geqslant 1\), then for \(\ 0<\alpha \leqslant 1\)
and in case of α > 1
Proof
We may assume that g α is a non-negative Lipschitz function, since otherwise the estimate is trivial. Note that in the case of \(p\leqslant 2\) there is also nothing to prove, so we may take p > 2. For simplicity let us assume that ∥|∇g α|∞∥∞ = 1. If it happens that
then the proof is once again trivial; therefore assume that
Then, following the idea of the proof of (2.4) from [1], we define the function \(h=\max \{g,c\}\), where \(c=\Vert g\Vert _{p\alpha }/2^{\frac {1}{\alpha }}\). Obviously, for \(2\leqslant t\leqslant p\)
Due to our definition \(h\geqslant c\) and \(\vert \nabla h^{\alpha }\vert _{\infty }\leqslant \vert \nabla g^{\alpha }\vert _{\infty }\), which gives us
Combining above with (2.6) we get
Therefore, we may apply (2.1) to the function h αt∕2, and thus the Aida-Stroock [2] argument, i.e.
combined with the Hölder inequality with exponents t∕(t − 2) and t∕2 applied to the last term, gives us
The moment function (as a function of t) is non-decreasing, so for \(2\leqslant t \leqslant p\) we get
Now we have to consider two cases. First suppose that \(\alpha \leqslant 1\); then
and combining this with (2.7), we infer
Now observe that \(\Vert h^{\alpha }\Vert ^2_{p}\geqslant \Vert g^{\alpha }\Vert ^2_p\) and furthermore
which combined together gives us
Noting that the case of
is again trivial, we assume the converse, obtaining
which together with (2.8) implies that
Recalling that \(c^{\alpha }=2^{-1} \Vert g\Vert ^{\alpha }_{p\alpha }\), we infer
and rewriting it in a simplified form
Combining together (2.5), (2.9), and (2.11) implies the result in the case of \(0<\alpha \leqslant 1\).
Consider now the case of α > 1. Following the same reasoning as in the previous case up to (2.7), after which the Hölder inequality is used, we get
Therefore, by (2.7)
Again, either (2.9) holds or we have
Since obviously \(\Vert h\Vert ^{2\alpha }_{p\alpha }\geqslant \Vert g\Vert ^{2\alpha }_{p\alpha }\), we get
and combining above with (2.12) gives us
Clearly (2.5), (2.9), and (2.13) cover the case of α > 1, which completes the proof. ■
The next step of the reasoning is to apply the previous result to g = |f −E μf| and combine it with the Poincaré inequality. Let us gather everything together in the form of
Corollary 2.1
If the measure μ satisfies (2.1), the function f is locally Lipschitz, and \(p\geqslant 1\), then for \(\ 0<\alpha \leqslant 1\)
and in case of α > 1
Proof
If we fix g = |f −E μf| then by the Poincaré inequality
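The missing display here is presumably the Poincaré step; with g = |f −E μf| we have |∇g|2 = |∇f|2 almost everywhere, so (2.2) with constant D′ gives, schematically:

```latex
\Vert g\Vert_{2}^{2}
  = \operatorname{Var}_{\mu}(f)
  \;\leqslant\; D'\,{\mathbf E}_{\mu}\vert\nabla f\vert_{2}^{2}
  \;\leqslant\; D'\,\bigl\Vert\,\vert\nabla f\vert_{2}\,\bigr\Vert_{\infty}^{2}.
```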
Note also that
so, applying Theorem 2.1, the statement easily follows. ■
2.3 Bounds for Exponential Orlicz Norms
First, let us recall the notion of exponential Orlicz norms. For any α > 0
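The defining display for these norms is missing here; the exponential Orlicz norm meant is presumably the standard one, namely

```latex
\Vert f\Vert_{\varphi(\alpha)}
  = \inf\Bigl\{ t>0 :\;
      {\mathbf E}_{\mu}\exp\bigl(\vert f\vert^{\alpha}/t^{\alpha}\bigr)
      \leqslant 2 \Bigr\},
\qquad
\varphi(\alpha)(x)=\exp(x^{\alpha})-1 .
```

This normalization is consistent with the identity \(\mathbf {E}\exp (g^{\alpha }t_{\ast })=2\), \(1/t^{\frac {1}{\alpha }}_{\ast }=\Vert g\Vert _{\varphi (\alpha )}\) used later in the proof of Theorem 2.2.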
Obviously, ∥f∥φ(α) is a norm only in the case of \(\alpha \geqslant 1\); otherwise there is a problem with the triangle inequality. Moreover, we have \(\Vert f\Vert _{\varphi (\alpha )}=\Vert \vert f\vert ^{\alpha }\Vert ^{\frac {1}{\alpha }}_{\varphi (1)}\). Nevertheless, in the case of 0 < α < 1 one can use
It is worth knowing that ∥f∥φ(α) is always comparable with \(\sup _{k\geqslant 1} \frac {\Vert f\Vert _{k\alpha }}{k^{1/\alpha }}\). More precisely, observe that for all \(k\geqslant 1\) and a positive g
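One direction of this comparison can be sketched as follows: for any t > ∥g∥φ(α) we have E μ exp(g α∕t α) ≤ 2, and since x k∕k! ≤ e x,

```latex
\frac{{\mathbf E}_{\mu}\,g^{k\alpha}}{k!\,t^{k\alpha}}
  \;\leqslant\; {\mathbf E}_{\mu}\exp\bigl(g^{\alpha}/t^{\alpha}\bigr)
  \;\leqslant\; 2,
\qquad\text{hence}\qquad
\Vert g\Vert_{k\alpha}
  \;\leqslant\; t\,(2\,k!)^{\frac{1}{k\alpha}}
  \;\leqslant\; 2^{\frac{1}{\alpha}}\,t\,k^{\frac{1}{\alpha}} ,
```

using \((k!)^{\frac {1}{k\alpha }}\leqslant k^{\frac {1}{\alpha }}\). The converse comparison follows from the Taylor expansion of the exponential together with bounds of this type.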
Note that, just by the definition of ∥g∥φ(α), there exists \(k\geqslant 1\) for which
Let us denote the set of such \(k\geqslant 1\) by J(g, α) and note that for any k ∈ J(g, α)
Next, let \(M\geqslant e\) be a constant such that \((k!)^{\frac {1}{k}}\geqslant k/M\) for all \(k\geqslant 1\). We have the following crucial observation: for all k ∈ J(g, α)
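Such a constant exists with M = e already, since \(e^{k}=\sum _{n\geqslant 0}k^{n}/n!\geqslant k^{k}/k!\), i.e.

```latex
k! \;\geqslant\; \Bigl(\frac{k}{e}\Bigr)^{k}
\quad\Longrightarrow\quad
(k!)^{\frac{1}{k}} \;\geqslant\; \frac{k}{e}
  \;\geqslant\; \frac{k}{M},
\qquad k\geqslant 1,\ M\geqslant e .
```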
Therefore, we may use Theorem 2.1 in order to obtain
Corollary 2.2
If μ satisfies (2.1) and g is a non-negative locally Lipschitz function, then for any k ∈ J(g, α), in the case of \(\ 0<\alpha \leqslant 1\)
and for \(1<\alpha \leqslant 2\)
Note that the set J(g, α) is stable with respect to g↦h, where \(h=\max \{g,c\}\), i.e. if c is comparable to ∥g∥φ(α), then there exists \(C\geqslant 1\) such that for k ∈ J(g, α)
which means that we cannot easily improve the result using this trick.
In the same way as we have established Corollary 2.1 we can deduce the following result.
Corollary 2.3
If μ satisfies (2.1) and g is a locally Lipschitz function, then for any k ∈ J(g, α), in the case of \(\ 0<\alpha \leqslant 1\)
and for α > 1
A simple consequence of the above is
Corollary 2.4
If μ satisfies (2.1) and \(0<\alpha \leqslant 2\), then for any locally Lipschitz function f
The result shows that, at least for a globally Lipschitz function |f|α with \(\alpha \leqslant 1\), the exponential moment ∥f −E μf∥φ(α) has to be bounded, though it is still far from replacing \(\Vert \vert \nabla \vert f-{\mathbf {E}}_{\mu } f\vert ^{\alpha }\vert _{\infty }\Vert ^{\frac {1}{\alpha }}_{\infty }\) with the expected \(\Vert \vert \nabla f\vert _{\infty }\Vert _{\varphi (\frac {\alpha }{1-\alpha })}\).
Note that it is not possible simply to replace the constant \(C(\alpha )\sim (4M)^{\frac {1}{\alpha }}\) in Corollary 2.4 by 1, which would be a natural choice for the question. In the next section we show another approach which allows us to obtain such a result.
2.4 Another Approach
Theorem 2.2
If μ satisfies (2.1) and \(0<\alpha \leqslant 2\), then for any locally Lipschitz function f
where \(C(\alpha )= \alpha \left (\frac {2}{\ln 2}\right )^{\frac {1}{\alpha }}\).
Proof
Let g α be a non-negative Lipschitz function; we may assume that
Then for any \(t\leqslant 1\) and the function \(h=\exp (g^{\alpha }t/2)\) we can apply (2.1); indeed
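Assuming, as in the proof of Theorem 2.1, that the normalization in the preceding display is ∥|∇g α|∞∥∞ ≤ 1, the condition of (2.1) is verified by the chain rule:

```latex
% h = \exp(g^{\alpha} t/2), hence
\vert\nabla h\vert_{\infty}
  = \frac{t}{2}\,\vert\nabla g^{\alpha}\vert_{\infty}\,h
  \;\leqslant\; \frac{t}{2}\,h
  \;\leqslant\; \frac{h}{2},
\qquad 0<t\leqslant 1 .
```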
In fact there are three cases to consider.
The first case is \({\mathbf {E}}_{\mu } \exp (g^{\alpha })\leqslant 2\), but then
Otherwise there must exist \(t_{\ast }\leqslant 1\) such that \(\mathbf {E}\exp (g^{\alpha }t_{\ast })=2\). Clearly \(1/t^{\frac {1}{\alpha }}_{\ast }=\Vert g\Vert _{\varphi (\alpha )}\). For simplicity let us denote \(V(t)=\ln \mathbf {E} \exp (g^{\alpha }t)\), \(t\geqslant 0\). It is well known that V is convex, increasing, and V (0) = 0. Now we use (2.1) in order to get, for all t ∈ [0, 1],
Note that V ′(0) = E μg α. Moreover, for \(0\leqslant t\leqslant t_{\ast }\) we have \(\frac {1}{2}\leqslant \exp (-V(t))\leqslant 1\), so we can rewrite (2.17) in the following form
Since V is convex and V (0) = 0, we know that V (t)∕t is increasing and also V ′(0) = E μg α. Consequently, integrating (2.18) on [0, t ∗]
Note that \(V(t_{\ast })=\ln 2\), so
The second case to consider is when t ∗E μg α is not too small. If \(t_{\ast }{\mathbf {E}}_{\mu }g^{\alpha }> \frac {1}{2}\ln 2\), then
For the last part of the proof we assume that
Obviously, we have then
Using the Hölder inequality, we get
Therefore,
Now we split all the indices k into two classes.
where the constant \(M\geqslant 1\) will be chosen later. First, we bound the summands over the set I, i.e.
It is easy to choose M close to 2e so that \(\sum _{k\in I}\frac {(k+2)^{k+2}}{M^{k+2}(k+2)!}\leqslant 1\). Thus, we may state our bound over I in the following form
where \(K=\max _{k\geqslant 1}\frac {\Vert \vert \nabla g\vert _2\Vert _{k\alpha }}{k^{\frac {1}{\alpha }-\frac {1}{2}}}\). On the set J we proceed as follows
But now
Thus, our bound on J is
Combining bounds (2.21), (2.22), and (2.20) we get
but this implies
Note that K is comparable with \(\Vert \vert \nabla g\vert _2\Vert _{\varphi (\frac {2\alpha }{2-\alpha })}\). This leads to the formula
Bounds (2.16), (2.19), and (2.23) imply that for any positive g
If we now fix g = |f −E μf| then by the Poincaré inequality
Note also that
Thus, by (2.24) we obtain
This ends the proof. ■
References
R. Adamczak, P. Wolff, Concentration inequalities for non-Lipschitz functions with bounded derivatives of higher order. Probab. Theory Related Fields 162, 531–586 (2015)
S. Aida, D. Stroock, Moment estimates derived from Poincaré and logarithmic Sobolev inequalities. Math. Res. Lett. 1, 75–86 (1994)
S. Bobkov, M. Ledoux, Poincaré’s inequalities and Talagrand’s concentration phenomenon for the exponential distribution. Probab. Theory Related Fields 107(3), 383–400 (1997)
E. Milman, On the role of convexity in isoperimetry, spectral gap and concentration. Invent. Math. 177, 1–43 (2009)
Acknowledgement
This research was partially supported by NCN Grant UMO-2016/21/B/ ST1/01489.
Copyright information
© 2019 Springer Nature Switzerland AG
Cite this paper
Bednorz, W., Głowienko, G. (2019). Moment Estimation Implied by the Bobkov-Ledoux Inequality. In: Gozlan, N., Latała, R., Lounici, K., Madiman, M. (eds) High Dimensional Probability VIII. Progress in Probability, vol 74. Birkhäuser, Cham. https://doi.org/10.1007/978-3-030-26391-1_2
Print ISBN: 978-3-030-26390-4
Online ISBN: 978-3-030-26391-1