Abstract
This is the first part of the notes with preliminary remarks on the plane isoperimetric inequality and its applications to the Poincaré and Sobolev-type inequalities in dimension one. Links with informational quantities of Rényi and Fisher are briefly discussed.
3.1 Isoperimetry on the Plane and the Upper Half-Plane
The paper by Diaz et al. [4] contains the following interesting Sobolev-type inequality in dimension one.
Proposition 3.1.1
For any smooth real-valued function f on [0, 1],
\[
\int_0^1 \sqrt{f(x)^2 + \frac{1}{\pi^2}\,f'(x)^2}\;dx \,\geq\, \Big(\int_0^1 f(x)^2\,dx\Big)^{1/2}. \qquad (3.1)
\]
More precisely, this paper mentions without proof that (3.1) is a consequence of the isoperimetric inequality on the plane \({\mathbb R}^2\). Let us give an argument, which is actually based on the isoperimetric inequality
\[
\mu^+(A) \,\geq\, \sqrt{2\pi\,\mu(A)}, \qquad A \subset {\mathbb R}^2_+ \ \mbox{Borel}, \qquad (3.2)
\]
in the upper half-plane \({\mathbb R}^2_+ = \{(x_1,x_2) \in {\mathbb R}^2: x_2 \geq 0\}\). Here, μ denotes the Lebesgue measure restricted to this half-plane, which generates the corresponding notion of the perimeter
\[
\mu^+(A) \,=\, \liminf_{\varepsilon \to 0^+} \frac{\mu(A + \varepsilon B_2) - \mu(A)}{\varepsilon},
\]
where \(B_2\) is the closed unit disc (cf. e.g. [2]).
Inequality (3.2) follows from the Brunn-Minkowski inequality in \({\mathbb R}^2\),
\[
\lambda(A + B)^{1/2} \,\geq\, \lambda(A)^{1/2} + \lambda(B)^{1/2},
\]
where λ is the Lebesgue measure and \(A + B = \{a + b : a \in A,\ b \in B\}\) denotes the Minkowski sum, along the same arguments as in the case of its application to the usual isoperimetric inequality. Indeed, applying it with a Borel set \(A \subset {\mathbb R}^2_+\) and \(B = \varepsilon B_2^+\) (ε > 0), where \(B_2^+ = B_2 \cap {\mathbb R}^2_+\) is the upper half of the unit disc, we get
\[
\mu(A + \varepsilon B_2) \,\geq\, \lambda(A + \varepsilon B_2^+) \,\geq\, \big(\mu(A)^{1/2} + \varepsilon\,(\pi/2)^{1/2}\big)^2 \,=\, \mu(A) + \varepsilon\,\sqrt{2\pi\,\mu(A)} + \frac{\pi}{2}\,\varepsilon^2,
\]
and therefore (3.2) from the definition of the perimeter.
The relation (3.2) is sharp and is attained for the upper semi-discs
\[
A_\rho \,=\, \big\{(x_1,x_2) \in {\mathbb R}^2_+ : x_1^2 + x_2^2 \leq \rho^2\big\}, \qquad \rho > 0.
\]
In this case, \(\mu(A_\rho) = \frac{1}{2}\,\pi\rho^2\) is the area of the region between the upper part of the circle \(x_1^2 + x_2^2 = \rho^2\) and the \(x_1\)-axis \(x_2 = 0\), while the μ-perimeter is just the length of the half-circle, \(\mu^+(A_\rho) = \pi\rho\).
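As a quick numerical sanity check (a sketch in Python; the explicit form \(\mu^+(A) \geq \sqrt{2\pi\,\mu(A)}\) of (3.2) is taken from the computation above), one may verify that the upper semi-discs indeed give equality:

```python
import numpy as np

# Upper half-disc of radius rho: area and boundary length as computed in the text
rho = 2.0
area = 0.5 * np.pi * rho**2      # mu(A_rho) = pi * rho^2 / 2
perimeter = np.pi * rho          # length of the half-circle, mu^+(A_rho)

# Equality case of the half-plane isoperimetric inequality mu^+(A) >= sqrt(2*pi*mu(A))
assert np.isclose(perimeter, np.sqrt(2 * np.pi * area))
```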
To derive (3.1), one may assume that the function f is non-negative and is not identically zero on [0, 1]. Then we associate with it the set in \({\mathbb R}^2_+\) described in polar coordinates as
\[
A \,=\, \big\{(r\cos{}(\pi t),\, r\sin{}(\pi t)) : 0 < r < f(t),\ 0 < t < 1\big\},
\]
with \(x_1 = r\cos{}(\pi t)\), \(x_2 = r\sin{}(\pi t)\). Integration in polar coordinates indicates that, for any non-negative Borel function u on \({\mathbb R}^2_+\),
\[
\int_{{\mathbb R}^2_+} u\,d\mu \,=\, \pi \int_0^1\!\! \int_0^\infty u(r\cos{}(\pi t),\, r\sin{}(\pi t))\, r\,dr\,dt. \qquad (3.3)
\]
Applying it to the indicator function \(u = 1_A\), we get
\[
\mu(A) \,=\, \pi \int_0^1\!\! \int_0^{f(t)} r\,dr\,dt \,=\, \frac{\pi}{2} \int_0^1 f(t)^2\,dt.
\]
On the other hand, \(\mu^+(A)\) represents the length of the curve \(C = \{(x_1(t), x_2(t)) : 0 \leq t \leq 1\}\) parameterized by
\[
x_1(t) \,=\, f(t)\cos{}(\pi t), \qquad x_2(t) \,=\, f(t)\sin{}(\pi t).
\]
Since
\[
x_1'(t)^2 + x_2'(t)^2 \,=\, f'(t)^2 + \pi^2 f(t)^2,
\]
we find that
\[
\mu^+(A) \,=\, \int_0^1 \sqrt{f'(t)^2 + \pi^2 f(t)^2}\;dt.
\]
As a result, the isoperimetric inequality (3.2) takes the form
\[
\int_0^1 \sqrt{f'(t)^2 + \pi^2 f(t)^2}\;dt \,\geq\, \sqrt{2\pi \cdot \frac{\pi}{2} \int_0^1 f(t)^2\,dt} \,=\, \pi\,\Big(\int_0^1 f(t)^2\,dt\Big)^{1/2},
\]
which is the same as (3.1). Note that the condition f ≥ 0 may easily be removed in the resulting inequality. □
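The resulting inequality (3.1) is easy to probe numerically. Here is a minimal midpoint-rule check in Python for one arbitrarily chosen test function, f(x) = 1 + x (an illustrative choice, not from the text):

```python
import numpy as np

# Midpoint rule on [0, 1]
n = 100_000
x = (np.arange(n) + 0.5) / n

# Test function and its derivative (an arbitrary choice for illustration)
f = 1.0 + x
fp = np.ones_like(x)

lhs = np.mean(np.sqrt(f**2 + fp**2 / np.pi**2))   # integral of sqrt(f^2 + f'^2/pi^2)
rhs = np.sqrt(np.mean(f**2))                      # (integral of f^2)^{1/2}
assert lhs >= rhs
```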
One can reverse the argument and obtain the isoperimetric inequality (3.2) on the basis of (3.1) for the class of star-shaped sets in the upper half-plane.
The same argument may be used on the basis of the classical isoperimetric inequality
\[
\mu^+(A) \,\geq\, \sqrt{4\pi\,\mu(A)} \qquad (3.4)
\]
in the whole plane \({\mathbb R}^2\) with respect to the Lebesgue measure μ. It is attained for the discs
\[
A_\rho \,=\, \big\{(x_1,x_2) \in {\mathbb R}^2 : x_1^2 + x_2^2 \leq \rho^2\big\}, \qquad \rho > 0,
\]
in which case \(\mu(A_\rho) = \pi\rho^2\) and \(\mu^+(A_\rho) = 2\pi\rho\).
Starting from a smooth non-negative function f on [−1, 1] such that f(−1) = f(1), one may consider the star-shaped region
\[
A \,=\, \big\{(r\cos{}(\pi t),\, r\sin{}(\pi t)) : 0 \leq r < f(t),\ -1 \leq t \leq 1\big\}
\]
enclosed by the curve \(C = \{(x_1(t), x_2(t)) : -1 \leq t \leq 1\}\) with the same functions \(x_1(t) = f(t)\cos{}(\pi t)\), \(x_2(t) = f(t)\sin{}(\pi t)\). Integration in polar coordinates (3.3) then yields a similar formula as before,
\[
\mu(A) \,=\, \frac{\pi}{2} \int_{-1}^1 f(t)^2\,dt,
\]
and also the perimeter \(\mu^+(A)\) represents the length of C, i.e.,
\[
\mu^+(A) \,=\, \int_{-1}^1 \sqrt{f'(t)^2 + \pi^2 f(t)^2}\;dt.
\]
As a result, the isoperimetric inequality (3.4) takes the form
\[
\int_{-1}^1 \sqrt{f'(t)^2 + \pi^2 f(t)^2}\;dt \,\geq\, \sqrt{4\pi \cdot \frac{\pi}{2} \int_{-1}^1 f(t)^2\,dt},
\]
or equivalently,
\[
\int_{-1}^1 \sqrt{f(t)^2 + \frac{1}{\pi^2}\,f'(t)^2}\;dt \,\geq\, \Big(2 \int_{-1}^1 f(t)^2\,dt\Big)^{1/2}. \qquad (3.5)
\]
To compare with (3.1), let us restate (3.5) on the unit interval [0, 1] by making the substitution \(f(t) = u(\frac{1+t}{2})\), so that \(f'(t) = \frac{1}{2}\,u'(\frac{1+t}{2})\). Then it becomes
\[
\int_{-1}^1 \sqrt{u\Big(\frac{1+t}{2}\Big)^2 + \frac{1}{4\pi^2}\,u'\Big(\frac{1+t}{2}\Big)^2}\;dt \,\geq\, \Big(2 \int_{-1}^1 u\Big(\frac{1+t}{2}\Big)^2\,dt\Big)^{1/2}.
\]
After the change of variable \(x = \frac{1+t}{2}\), replacing u again with f, and removing the unnecessary condition f ≥ 0, we arrive at:
Proposition 3.1.2
For any smooth real-valued function f on [0, 1] such that f(0) = f(1),
\[
\int_0^1 \sqrt{f(x)^2 + \frac{1}{4\pi^2}\,f'(x)^2}\;dx \,\geq\, \Big(\int_0^1 f(x)^2\,dx\Big)^{1/2}. \qquad (3.6)
\]
As we can see, an additional condition f(0) = f(1) allows one to improve the coefficient in front of the derivative, in comparison with (3.1). It should also be clear that (3.6) represents an equivalent form of the isoperimetric inequality (3.4) for the class of star-shaped regions.
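Proposition 3.1.2 can be probed numerically in the same way. A Python sketch for the periodic test function f(x) = 1 + sin(2πx) (an arbitrary choice satisfying f(0) = f(1)), for which the left-hand side of (3.6) equals 4∕π ≈ 1.273 while the right-hand side is √(3∕2) ≈ 1.225:

```python
import numpy as np

n = 100_000
x = (np.arange(n) + 0.5) / n      # midpoints of [0, 1]

# Periodic test function: f(0) = f(1) = 1 and f >= 0
f = 1.0 + np.sin(2 * np.pi * x)
fp = 2 * np.pi * np.cos(2 * np.pi * x)

lhs = np.mean(np.sqrt(f**2 + fp**2 / (4 * np.pi**2)))
rhs = np.sqrt(np.mean(f**2))
assert lhs >= rhs                            # 4/pi ~ 1.273 vs sqrt(3/2) ~ 1.225
assert np.isclose(lhs, 4 / np.pi, atol=1e-4)
```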
3.2 Relationship with Poincaré-type Inequalities
It would be interesting to compare Propositions 3.1.1–3.1.2 with other popular Sobolev-type inequalities such as the Poincaré-type and logarithmic Sobolev inequalities. Starting from (3.1) and (3.6), a simple variational argument yields:
Corollary 3.2.1
For any smooth real-valued function f on [0, 1],
\[
{\mathrm{Var}}_\mu(f) \,\leq\, \frac{1}{\pi^2} \int_0^1 f'(x)^2\,dx, \qquad (3.7)
\]
where the variance is understood with respect to the uniform probability measure dμ(x) = dx on the unit segment. Moreover, if f(0) = f(1), then
\[
{\mathrm{Var}}_\mu(f) \,\leq\, \frac{1}{4\pi^2} \int_0^1 f'(x)^2\,dx. \qquad (3.8)
\]
The constants \(\frac {1}{\pi ^2}\) and \(\frac {1}{4\pi ^2}\) in (3.7)–(3.8) are optimal and are respectively attained for the functions \(f(x) = \cos {}(\pi x)\) and \(f(x) = \sin {}(2\pi x)\) (cf. also [1]).
For the proof, let us note that an analytic inequality of the form
\[
\int_0^1 \sqrt{f(x)^2 + c\,f'(x)^2}\;dx \,\geq\, \Big(\int_0^1 f(x)^2\,dx\Big)^{1/2} \qquad (3.9)
\]
with a constant c > 0 becomes an equality for f = 1. So, one may apply it to \(f_\varepsilon = 1 + \varepsilon f\) and, letting ε → 0, compare the coefficients in front of the powers of ε on both sides. First,
\[
(1 + \varepsilon f)^2 + c\,\varepsilon^2 f'^2 \,=\, 1 + 2\varepsilon f + \varepsilon^2 (f^2 + c f'^2),
\]
so, by Taylor's expansion, as ε → 0,
\[
\int_0^1 \sqrt{(1 + \varepsilon f)^2 + c\,\varepsilon^2 f'^2}\;dx \,=\, 1 + \varepsilon \int_0^1 f\,dx + \frac{c\,\varepsilon^2}{2} \int_0^1 f'^2\,dx + O(\varepsilon^3).
\]
On the other hand, since
\[
\int_0^1 (1 + \varepsilon f)^2\,dx \,=\, 1 + 2\varepsilon \int_0^1 f\,dx + \varepsilon^2 \int_0^1 f^2\,dx,
\]
we have
\[
\Big(\int_0^1 (1 + \varepsilon f)^2\,dx\Big)^{1/2} \,=\, 1 + \varepsilon \int_0^1 f\,dx + \frac{\varepsilon^2}{2}\,\Big(\int_0^1 f^2\,dx - \Big(\int_0^1 f\,dx\Big)^2\Big) + O(\varepsilon^3).
\]
Hence
\[
\Big(\int_0^1 (1 + \varepsilon f)^2\,dx\Big)^{1/2} \,=\, 1 + \varepsilon \int_0^1 f\,dx + \frac{\varepsilon^2}{2}\,{\mathrm{Var}}_\mu(f) + O(\varepsilon^3).
\]
Inserting both expansions in (3.9), we see that the linear coefficients coincide, while comparing the quadratic terms leads to the Poincaré-type inequality
\[
{\mathrm{Var}}_\mu(f) \,\leq\, c \int_0^1 f'(x)^2\,dx,
\]
which yields (3.7) for \(c = \frac{1}{\pi^2}\) and (3.8) for \(c = \frac{1}{4\pi^2}\).
□
Thus, the isoperimetric inequality on the upper half-plane implies the Poincaré-type inequality (3.7) on [0, 1], while the isoperimetric inequality on the whole plane implies the restricted Poincaré-type inequality (3.8), with optimal constants in both cases.
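The optimality claims are easy to verify numerically: for f(x) = cos(πx) both sides of (3.7) equal 1∕2, and for f(x) = sin(2πx) both sides of (3.8) equal 1∕2. A Python sketch:

```python
import numpy as np

n = 200_000
x = (np.arange(n) + 0.5) / n      # midpoints of [0, 1]

# Equality in (3.7): f(x) = cos(pi x)
f = np.cos(np.pi * x)
fp = -np.pi * np.sin(np.pi * x)
var_f = np.mean(f**2) - np.mean(f)**2
assert np.isclose(var_f, np.mean(fp**2) / np.pi**2, rtol=1e-6)

# Equality in (3.8): g(x) = sin(2 pi x), which satisfies g(0) = g(1)
g = np.sin(2 * np.pi * x)
gp = 2 * np.pi * np.cos(2 * np.pi * x)
var_g = np.mean(g**2) - np.mean(g)**2
assert np.isclose(var_g, np.mean(gp**2) / (4 * np.pi**2), rtol=1e-6)
```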
3.3 Sobolev Inequalities
If f is non-negative, then f(x) = 0 implies f′(x) = 0, and thus f(x)² + cf′(x)² = 0 at every such point; in particular, the ratio f′²∕f may be defined to be zero wherever f vanishes. Hence, applying Cauchy's inequality, from (3.9) we get
\[
\Big(\int_0^1 f^2\,dx\Big)^{1/2} \,\leq\, \int_0^1 \sqrt{f}\,\sqrt{f + c\,\frac{f'^2}{f}}\;dx \,\leq\, \Big(\int_0^1 f\,dx\Big)^{1/2} \Big(\int_0^1 \Big(f + c\,\frac{f'^2}{f}\Big)\,dx\Big)^{1/2}.
\]
Therefore, Propositions 3.1.1–3.1.2 also yield:
Proposition 3.3.1
For any non-negative smooth function f on [0, 1] with \(\int_0^1 f(x)\,dx = 1\),
\[
{\mathrm{Var}}_\mu(f) \,\leq\, \frac{1}{\pi^2} \int_0^1 \frac{f'(x)^2}{f(x)}\,dx, \qquad (3.10)
\]
where the variance is with respect to the uniform probability measure μ on the unit segment. Moreover, if f(0) = f(1), then
\[
{\mathrm{Var}}_\mu(f) \,\leq\, \frac{1}{4\pi^2} \int_0^1 \frac{f'(x)^2}{f(x)}\,dx. \qquad (3.11)
\]
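A quick numerical illustration of (3.10), for the probability density f(x) = 1 + ½ cos(2πx) on [0, 1] (an arbitrary choice of test function):

```python
import numpy as np

n = 100_000
x = (np.arange(n) + 0.5) / n      # midpoints of [0, 1]

# Non-negative test function with integral 1 over [0, 1]
f = 1.0 + 0.5 * np.cos(2 * np.pi * x)
fp = -np.pi * np.sin(2 * np.pi * x)

var = np.mean(f**2) - np.mean(f)**2          # Var_mu(f) = 1/8
weighted_fisher = np.mean(fp**2 / f)         # integral of f'^2 / f
assert var <= weighted_fisher / np.pi**2     # inequality (3.10)
```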
Recall that there is a general relation between the entropy functional
\[
{\mathrm{Ent}}_\mu(f) \,=\, \int f \log f\,d\mu - \int f\,d\mu\, \log \int f\,d\mu, \qquad f \geq 0,
\]
and the variance, namely
\[
{\mathrm{Ent}}_\mu(f) \,\leq\, \frac{{\mathrm{Var}}_\mu(f)}{\int f\,d\mu}. \qquad (3.12)
\]
It is rather elementary; assume by homogeneity that \(\int f\,d\mu = 1\). Since \(\log t \leq t-1\) and therefore \(t\log t \leq t(t-1)\) for all t ≥ 0, we have
\[
f \log f \,\leq\, f^2 - f.
\]
After integration it yields \({\mathrm{Ent}}_\mu(f) \leq \int f^2\,d\mu - 1 = {\mathrm{Var}}_\mu(f)\), which is (3.12).
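The bound Ent_μ(f) ≤ Var_μ(f) (for ∫ f dμ = 1) can be confirmed numerically, e.g. for the density f(x) = 1 + ½ cos(2πx) on [0, 1] (an arbitrary test choice):

```python
import numpy as np

n = 100_000
x = (np.arange(n) + 0.5) / n      # midpoints of [0, 1]

# f >= 0 with integral 1 against the uniform measure mu on [0, 1]
f = 1.0 + 0.5 * np.cos(2 * np.pi * x)

ent = np.mean(f * np.log(f))               # Ent_mu(f), since the mean of f is 1
var = np.mean(f**2) - np.mean(f)**2        # Var_mu(f)
assert ent <= var
```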
Using the latter in (3.10)–(3.11), we arrive at the logarithmic Sobolev inequalities.
Corollary 3.3.2
For any non-negative smooth function f on [0, 1], with respect to the uniform probability measure μ on the unit segment we have
\[
{\mathrm{Ent}}_\mu(f) \,\leq\, \frac{1}{\pi^2} \int_0^1 \frac{f'(x)^2}{f(x)}\,dx. \qquad (3.13)
\]
Moreover, if f(0) = f(1), then
\[
{\mathrm{Ent}}_\mu(f) \,\leq\, \frac{1}{4\pi^2} \int_0^1 \frac{f'(x)^2}{f(x)}\,dx. \qquad (3.14)
\]
Replacing here f by (1 + εf)2 and letting ε → 0, we return to the Poincaré-type inequalities (3.7) and (3.8) with an extra factor of 2. The best constant in (3.13) is however \(\frac {1}{2\pi ^2}\) and in (3.14) is \(\frac {1}{8\pi ^2}\) [1, Proposition 5.7.5]. On the other hand, the inequalities (3.10)–(3.11) are much stronger than (3.13)–(3.14).
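A numerical check of the log-Sobolev inequality (3.14) for one periodic probability density, f(x) = 1 + ½ cos(2πx) (an arbitrary choice with f(0) = f(1)):

```python
import numpy as np

n = 100_000
x = (np.arange(n) + 0.5) / n      # midpoints of [0, 1]

# f(0) = f(1), f >= 0, integral of f over [0, 1] equals 1
f = 1.0 + 0.5 * np.cos(2 * np.pi * x)
fp = -np.pi * np.sin(2 * np.pi * x)

ent = np.mean(f * np.log(f))                    # Ent_mu(f)
rhs = np.mean(fp**2 / f) / (4 * np.pi**2)       # right-hand side of (3.14)
assert ent <= rhs
```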
3.4 Informational Quantities and Distances
The inequalities (3.13)–(3.14) may be stated equivalently in terms of informational distances to the uniform measure μ on the unit segment. Let us recall that, for random elements X and Z in an abstract measurable space Ω with distributions ν and μ respectively, the Rényi divergence power or the Tsallis distance from ν to μ of order α > 0 is defined by
\[
T_\alpha(X||Z) \,=\, \frac{1}{\alpha - 1}\,\Big(\int f^\alpha\,d\mu - 1\Big), \qquad \alpha \neq 1,
\]
where p and q are densities of ν and μ with respect to some (any) σ-finite dominating measure λ on Ω, with f = p∕q being the density of ν with respect to μ (the definition does not depend on the choice of λ). If α = 1, we arrive at the Kullback–Leibler distance or an informational divergence
\[
D(X||Z) \,=\, \int f \log f\,d\mu,
\]
which is the same as \({\mathrm{Ent}}_\mu(f)\). For α = 2 the Tsallis \(T_2\)-distance is the same as the \(\chi^2\)-distance. If α ≥ 1, necessarily \(T_\alpha(X||Z) = \infty\) as long as ν is not absolutely continuous with respect to μ. In any case, the function \(\alpha \rightarrow T_\alpha\) is non-decreasing; we refer the interested reader to the survey [6] (cf. also [3]).
In the case of the real line \(\varOmega = {\mathbb R}\), and when the densities p and q are absolutely continuous, the relative Fisher information or the Fisher information distance from ν to μ is defined by
\[
I(X||Z) \,=\, \int \Big(\frac{f'}{f}\Big)^2 f\,d\mu \,=\, \int \frac{f'^2}{f}\,d\mu,
\]
still assuming that the probability measure ν is absolutely continuous with respect to μ and has density f = p∕q. This definition is commonly used when q is supported and is positive on an interval \(\varDelta \subset {\mathbb R}\), finite or not, with the above integration restricted to Δ. With these notations, Proposition 3.3.1 corresponds to the order α = 2 and therefore takes the form
\[
T_2(X||Z) \,\leq\, \frac{1}{\pi^2}\,I(X||Z), \qquad T_2(X||Z) \,\leq\, \frac{1}{4\pi^2}\,I(X||Z), \qquad (3.15)
\]
holding true for an arbitrary random variable X with values in [0, 1]. Here the random variable Z has a uniform distribution μ on [0, 1], and we use an additional constraint f(0) = f(1) in the second relation.
There is also another, non-distance formulation of (3.15) in terms of classical informational quantities such as the Rényi entropy power and the Fisher information
\[
N_\alpha(X) \,=\, \Big(\int p(x)^\alpha\,dx\Big)^{-\frac{2}{\alpha - 1}}, \qquad I(X) \,=\, \int \frac{p'(x)^2}{p(x)}\,dx.
\]
Here the case α = 2 defines the quadratic Rényi entropy power \(N_2(X) = (\int p^2\,dx)^{-2}\). If μ is supported and has an absolutely continuous positive density q on the interval \(\varDelta \subset {\mathbb R}\), one may also define the restricted Fisher information
\[
I_0(X) \,=\, \int_{\varDelta} \frac{p'(x)^2}{p(x)}\,dx,
\]
with integration over the interior of Δ.
For example, if Z is uniformly distributed in the unit interval, so that q(x) = 1 for 0 < x < 1, we have I(Z) = ∞, while \(I_0(Z) = 0\). In this case, if X has values in [0, 1], we have
\[
I(X||Z) \,=\, I_0(X) \,=\, \int_0^1 \frac{p'(x)^2}{p(x)}\,dx.
\]
Hence, the first inequality in (3.15) may be written as the following.
Corollary 3.4.1
For any random variable X with values in [0, 1], having there an absolutely continuous density, we have
\[
N_2(X)\,\Big(1 + \frac{1}{\pi^2}\,I_0(X)\Big)^2 \,\geq\, 1. \qquad (3.16)
\]
This relation is analogous to the well-known isoperimetric inequality for entropies,
\[
N(X)\,I(X) \,\geq\, 2\pi e,
\]
where \(N(X) = N_1(X) = e^{2h(X)}\) is the entropy power, corresponding to the Shannon differential entropy
\[
h(X) \,=\, -\int p(x) \log p(x)\,dx.
\]
The functional \(I_0(X)\) may be replaced with I(X) in (3.16) (since \(I_0 \leq I\)), and then one may remove the assumption on the values of X. Moreover, with the functional I(X), this inequality may be considerably strengthened. Indeed, the relation \( N_2(X) (1 + \frac {1}{\pi ^2}\,I(X))^2 \geq 1 \) is not 0-homogeneous with respect to X, and therefore it admits a self-refinement when applying it to the random variables λX, λ > 0. Optimizing over this parameter, we will obtain an equivalent 0-homogeneous relation
\[
N_2(X)\,I(X) \,\geq\, c \qquad (3.17)
\]
with \(c = \pi^2/4\) (since \(N_2(\lambda X) = \lambda^2 N_2(X)\) and \(I(\lambda X) = \lambda^{-2} I(X)\), while \(\min_{s>0}\, s\,(1 + \frac{a}{s})^2 = 4a\)). But it is obviously true with c = 1. To see this, first note that, by the Cauchy inequality, for all \(x \in {\mathbb R}\),
\[
p(x) \,=\, \int_{-\infty}^x p'(y)\,dy \,\leq\, \int_{-\infty}^{\infty} |p'(y)|\,dy \,\leq\, \Big(\int \frac{p'^2}{p}\,dy\Big)^{1/2} \Big(\int p\,dy\Big)^{1/2} \,=\, I(X)^{1/2}.
\]
Therefore,
\[
\int p(x)^2\,dx \,\leq\, \sup_x p(x) \,\leq\, I(X)^{1/2},
\]
that is, \(N_2(X)\,I(X) \geq 1\).
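Both the simple bound N₂(X)I(X) ≥ 1 and the stronger constant produced by the optimization step hold with a large margin for, e.g., a standard normal X, where N₂ = 4π and I = 1. A numerical sketch (the grid and the truncation at |x| = 10 are illustrative choices):

```python
import numpy as np

# Standard normal density on a wide grid (tail truncation is negligible)
x = np.linspace(-10.0, 10.0, 200_001)
dx = x[1] - x[0]
p = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

int_p2 = np.sum(p**2) * dx                     # integral of p^2 = 1/(2 sqrt(pi))
N2 = int_p2**(-2)                              # quadratic Renyi entropy power, 4*pi
I = np.sum(np.gradient(p, dx)**2 / p) * dx     # Fisher information, = 1 for N(0, 1)

assert N2 * I >= 1.0            # the 'obvious' bound
assert N2 * I >= np.pi**2 / 4   # also above the optimized constant ~ 2.47
```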
Observe that another inequality involving the quadratic Rényi entropy power \(N_2(X)\) and some generalisation of the Fisher information can be extracted from [5], namely, for all 1 ≤ q < ∞, \(N_2(X)^q \int |p'|^q\,p\;dx \geq C_q\) for an optimal constant \(C_q\). However, it is unclear how to relate this inequality to (3.17).
References
D. Bakry, I. Gentil, M. Ledoux, Analysis and Geometry of Markov Diffusion Operators. Grundlehren der Mathematischen Wissenschaften (Fundamental Principles of Mathematical Sciences), vol. 348 (Springer, Cham, 2014), xx+552 pp.
Yu.D. Burago, V.A. Zalgaller, Geometric Inequalities. Grundlehren der Mathematischen Wissenschaften (Fundamental Principles of Mathematical Sciences), vol. 285. Springer Series in Soviet Mathematics (Springer, Berlin, 1988). Translated from the Russian by A.B. Sosinskii, xiv+331 pp.
A. Dembo, T.M. Cover, J.A. Thomas, Information-theoretic inequalities. IEEE Trans. Inform. Theory 37(6), 1501–1518 (1991)
A. Diaz, N. Harman, S. Howe, D. Thompson, Isoperimetric problems in sectors with density. Adv. Geom. 12(4), 589–619 (2012)
E. Lutwak, D. Yang, G. Zhang, Cramér-Rao and moment-entropy inequalities for Rényi entropy and generalized Fisher information. IEEE Trans. Inform. Theory 51(2), 473–478 (2005)
T. van Erven, P. Harremoës, Rényi divergence and Kullback-Leibler divergence. IEEE Trans. Inform. Theory 60(7), 3797–3820 (2014)
Acknowledgements
Research was partially supported by the NSF grant DMS-1855575, by the Bézout Labex, funded by ANR, reference ANR-10-LABX-58, by the Labex MME-DII, funded by ANR, reference ANR-11-LBX-0023-01, and by the ANR grant Large Stochastic Dynamic, reference ANR-15-CE40-0020-03-LSD.
© 2019 Springer Nature Switzerland AG
Bobkov, S.G., Gozlan, N., Roberto, C., Samson, PM. (2019). Polar Isoperimetry. I: The Case of the Plane. In: Gozlan, N., Latała, R., Lounici, K., Madiman, M. (eds) High Dimensional Probability VIII. Progress in Probability, vol 74. Birkhäuser, Cham. https://doi.org/10.1007/978-3-030-26391-1_3