
3.1 Isoperimetry on the Plane and the Upper Half-Plane

The paper by Diaz et al. [4] contains the following interesting Sobolev-type inequality in dimension one.

Proposition 3.1.1

For any smooth real-valued function f on [0, 1],

$$\displaystyle \begin{aligned} \int_0^1 \sqrt{f(x)^2 + \frac{1}{\pi^2}\, f'(x)^2}\,dx \, \geq \, \Big(\int_0^1 f(x)^2\,dx\Big)^{1/2}. \end{aligned} $$
(3.1)

More precisely, that paper mentions, without proof, that (3.1) is a consequence of the isoperimetric inequality in the plane \({\mathbb R}^2\). Let us give such an argument, based on the isoperimetric inequality

$$\displaystyle \begin{aligned} \mu^+(A) \, \geq \, \sqrt{2\pi}\,(\mu(A))^{1/2}, \qquad A \subset {\mathbb R}^2_+ \ \ (A \ \text{Borel}), \end{aligned} $$
(3.2)

in the upper half-plane \({\mathbb R}^2_+ = \{(x_1,x_2) \in {\mathbb R}^2: x_2 \geq 0\}\). Here, μ denotes the Lebesgue measure restricted to this half-plane, which generates the corresponding notion of the perimeter

$$\displaystyle \begin{aligned} \mu^+(A) = \liminf_{\varepsilon \rightarrow 0}\, \frac{\mu(A + \varepsilon B_2) - \mu(A)}{\varepsilon} \end{aligned}$$

where \(B_2\) denotes the closed unit disc in \({\mathbb R}^2\) (cf. e.g. [2]).

Inequality (3.2) follows from the Brunn–Minkowski inequality in \({\mathbb R}^2\),

$$\displaystyle \begin{aligned}\mu(A + B)^{1/2} \geq \mu(A)^{1/2} + \mu(B)^{1/2} \end{aligned}$$

by the same argument as in its application to the usual isoperimetric inequality. Indeed, applying it to a Borel set \(A \subset {\mathbb R}^2_+\) and \(B = \varepsilon B_2\) (\(\varepsilon > 0\)), we get

$$\displaystyle \begin{aligned} \begin{array}{rcl} \mu(A + \varepsilon B_2) &\displaystyle \geq &\displaystyle \big[\,\mu(A)^{1/2} + \mu(\varepsilon B_2)^{1/2}\big]^2 \\ &\displaystyle = &\displaystyle \Big[\mu(A)^{1/2} + \Big(\frac{\pi}{2}\Big)^{1/2} \varepsilon\, \Big]^2\\ &\displaystyle = &\displaystyle \mu(A) + \,\sqrt{2\pi}\,(\mu(A))^{1/2} \varepsilon + O(\varepsilon^2), \end{array} \end{aligned} $$

and therefore (3.2) follows from the definition of the perimeter.
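Here we used that μ is the Lebesgue measure restricted to the half-plane; strictly speaking, the Brunn–Minkowski inequality is applied to A and the half-disc \(\varepsilon B_2 \cap {\mathbb R}^2_+\), whose Minkowski sum with A stays in \({\mathbb R}^2_+\), so that

$$\displaystyle \begin{aligned} \mu(\varepsilon B_2) = \mathrm{area}\big(\varepsilon B_2 \cap {\mathbb R}^2_+\big) = \frac{\pi \varepsilon^2}{2}, \qquad \mu(\varepsilon B_2)^{1/2} = \Big(\frac{\pi}{2}\Big)^{1/2}\,\varepsilon. \end{aligned}$$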

The relation (3.2) is sharp and is attained for the upper semi-discs

$$\displaystyle \begin{aligned}A_\rho = \{(x_1,x_2) \in {\mathbb R}^2: x_1^2 + x_2^2 \leq \rho^2, \ x_2 \geq 0\}, \qquad \rho > 0. \end{aligned}$$

In this case, \(\mu(A_\rho) = \frac{1}{2}\,\pi\rho^2\) is the area enclosed between the upper half of the circle \(x_1^2 + x_2^2 = \rho^2\) and the \(x_1\)-axis \(x_2 = 0\), while the μ-perimeter is just the length of the half-circle, \(\mu^+(A_\rho) = \pi\rho\).

To derive (3.1), one may assume that the function f is non-negative and is not identically zero on [0, 1]. Then we associate with it the set in \({\mathbb R}^2_+\) described in polar coordinates as

$$\displaystyle \begin{aligned}A = \{(x_1,x_2): 0 \leq r \leq f(t), \ 0 \leq t \leq 1\} \end{aligned}$$

with \(x_1 = r\cos(\pi t)\), \(x_2 = r\sin(\pi t)\). Integration in polar coordinates shows that, for any non-negative Borel function u on \({\mathbb R}^2\),

$$\displaystyle \begin{aligned} \int\int_{{\mathbb R}^2} u(x_1,x_2)\,dx_1\, dx_2 \, = \, \pi \int_{-1}^1 \left[\int_0^\infty u\big(r\cos{}(\pi t),r\sin{}(\pi t)\big)\,rdr\right] dt. \end{aligned} $$
(3.3)

Applying it to the indicator function \(u = 1_A\), we get

$$\displaystyle \begin{aligned} \mu(A) = \frac{\pi}{2} \int_0^1 f(t)^2\,dt. \end{aligned}$$
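Indeed, for \(u = 1_A\), the inner integral in (3.3) equals \(\int_0^{f(t)} r\,dr = \frac{1}{2}\,f(t)^2\) when 0 ≤ t ≤ 1 and vanishes for −1 ≤ t < 0, where A has no points, so that

$$\displaystyle \begin{aligned} \mu(A) = \pi \int_0^1 \Big(\int_0^{f(t)} r\,dr\Big)\,dt = \frac{\pi}{2} \int_0^1 f(t)^2\,dt. \end{aligned}$$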

On the other hand, \(\mu^+(A)\) represents the length of the curve \(C = \{(x_1(t), x_2(t)) : 0 \leq t \leq 1\}\) parameterized by

$$\displaystyle \begin{aligned}x_1(t) = f(t)\cos{}(\pi t), \qquad x_2(t) = f(t)\sin{}(\pi t). \end{aligned}$$

Since

$$\displaystyle \begin{aligned} x_1^{\prime}(t)^2 + x_2^{\prime}(t)^2 \, = \, f'(t)^2 + \pi^2 f(t)^2, \end{aligned}$$

we find that

$$\displaystyle \begin{aligned}\mu^+(A) = \int_0^1 \sqrt{x_1^{\prime}(t)^2 + x_2^{\prime}(t)^2}\,dt = \int_0^1 \sqrt{f'(t)^2 + \pi^2 f(t)^2}\,dt. \end{aligned}$$
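For completeness, the identity used here follows by differentiating the parameterization,

$$\displaystyle \begin{aligned} x_1'(t) = f'(t)\cos(\pi t) - \pi f(t)\sin(\pi t), \qquad x_2'(t) = f'(t)\sin(\pi t) + \pi f(t)\cos(\pi t), \end{aligned}$$

where the cross terms cancel upon squaring and adding.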

As a result, the isoperimetric inequality (3.2) takes the form

$$\displaystyle \begin{aligned} \int_0^1 \sqrt{f'(t)^2 + \pi^2 f(t)^2}\,dt \, \geq \, \sqrt{2\pi}\,\Big(\frac{\pi}{2} \int_0^1 f(t)^2\,dt\Big)^{1/2}, \end{aligned}$$

which is the same as (3.1): since \(\sqrt{2\pi}\,\big(\frac{\pi}{2}\big)^{1/2} = \pi\), it suffices to divide both sides by π. Note that the condition f ≥ 0 may easily be removed in the resulting inequality. □

One can reverse the argument and obtain the isoperimetric inequality (3.2) on the basis of (3.1) for the class of star-shaped sets in the upper half-plane.

The same argument may be used on the basis of the classical isoperimetric inequality

$$\displaystyle \begin{aligned} \mu^+(A) \geq \sqrt{4\pi}\,(\mu(A))^{1/2} \qquad (A \ \text{Borel}) \end{aligned} $$
(3.4)

in the whole plane \({\mathbb R}^2\) with respect to the Lebesgue measure μ. It is attained for the discs

$$\displaystyle \begin{aligned}A_\rho = \{(x_1,x_2) \in {\mathbb R}^2: x_1^2 + x_2^2 \leq \rho^2\}, \qquad \rho > 0, \end{aligned}$$

in which case \(\mu(A_\rho) = \pi\rho^2\) and \(\mu^+(A_\rho) = 2\pi\rho\).

Starting from a smooth non-negative function f on [−1, 1] such that f(−1) = f(1), one may consider the star-shaped region

$$\displaystyle \begin{aligned}A = \{(x_1,x_2): 0 \leq r \leq f(t), \ -1 \leq t \leq 1\}, \qquad x_1 = r\cos{}(\pi t), \ x_2 = r\sin{}(\pi t), \end{aligned}$$

enclosed by the curve \(C = \{(x_1(t), x_2(t)) : -1 \leq t \leq 1\}\) with the same functions \(x_1(t) = f(t)\cos(\pi t)\), \(x_2(t) = f(t)\sin(\pi t)\). Integration in polar coordinates (3.3) then yields a formula similar to the one before,

$$\displaystyle \begin{aligned}\mu(A) = \frac{\pi}{2} \int_{-1}^1 f(t)^2\,dt, \end{aligned}$$

and also the perimeter \(\mu^+(A)\) represents the length of C, i.e.,

$$\displaystyle \begin{aligned}\mu^+(A) = \int_{-1}^1 \sqrt{x_1^{\prime}(t)^2 + x_2^{\prime}(t)^2}\,dt = \int_{-1}^1 \sqrt{f'(t)^2 + \pi^2 f(t)^2}\,dt. \end{aligned}$$

As a result, the isoperimetric inequality (3.4) takes the form

$$\displaystyle \begin{aligned}\int_{-1}^1 \sqrt{f'(t)^2 + \pi^2 f(t)^2}\,dt \, \geq \, \sqrt{4\pi}\ \Big(\frac{\pi}{2} \int_{-1}^1 f(t)^2\,dt\Big)^{1/2}, \end{aligned}$$

or equivalently,

$$\displaystyle \begin{aligned} \frac{1}{2} \int_{-1}^1 \sqrt{\frac{1}{\pi^2}\,f'(t)^2 + f(t)^2}\,dt \, \geq \, \Big(\frac{1}{2}\int_{-1}^1 f(t)^2\,dt\Big)^{1/2}. \end{aligned} $$
(3.5)
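Here we used \(\sqrt{f'^2 + \pi^2 f^2} = \pi \sqrt{\frac{1}{\pi^2}\,f'^2 + f^2}\) together with the arithmetic

$$\displaystyle \begin{aligned} \sqrt{4\pi}\ \Big(\frac{\pi}{2} \int_{-1}^1 f(t)^2\,dt\Big)^{1/2} = \pi\sqrt{2}\,\Big(\int_{-1}^1 f(t)^2\,dt\Big)^{1/2} = 2\pi\,\Big(\frac{1}{2}\int_{-1}^1 f(t)^2\,dt\Big)^{1/2}, \end{aligned}$$

so that (3.5) follows after dividing both sides by 2π.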

To compare with (3.1), let us restate (3.5) on the unit interval [0, 1] by making the substitution \(f(t) = u(\frac {1+t}{2})\). Then it becomes

$$\displaystyle \begin{aligned}\frac{1}{2} \int_{-1}^1 \sqrt{\frac{1}{4\pi^2}\,u'\Big(\frac{1+t}{2}\Big)^2 + u\Big(\frac{1+t}{2}\Big)^2}\,dt \, \geq \, \left(\frac{1}{2}\int_{-1}^1 u\Big(\frac{1+t}{2}\Big)^2\,dt\right)^{1/2}. \end{aligned}$$
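Here, by the chain rule, \(f'(t) = \frac{1}{2}\,u'\big(\frac{1+t}{2}\big)\), so that

$$\displaystyle \begin{aligned} \frac{1}{\pi^2}\,f'(t)^2 = \frac{1}{4\pi^2}\,u'\Big(\frac{1+t}{2}\Big)^2, \end{aligned}$$

which explains the improved coefficient in front of the derivative.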

Changing the variable to \(x = \frac{1+t}{2}\), replacing u again by f, and removing the unnecessary condition f ≥ 0, we arrive at:

Proposition 3.1.2

For any smooth real-valued function f on [0, 1] such that f(0) = f(1),

$$\displaystyle \begin{aligned} \int_0^1 \sqrt{f(x)^2 + \frac{1}{4\pi^2}\, f'(x)^2}\,dx \, \geq \, \Big(\int_0^1 f(x)^2\,dx\Big)^{1/2}. \end{aligned} $$
(3.6)

As we can see, the additional condition f(0) = f(1) allows one to improve the coefficient in front of the derivative, in comparison with (3.1). It should also be clear that (3.6) represents an equivalent form of the isoperimetric inequality (3.4) for the class of star-shaped regions.

3.2 Relationship with Poincaré-type Inequalities

It would be interesting to compare Propositions 3.1.1–3.1.2 with other popular Sobolev-type inequalities such as the Poincaré-type and logarithmic Sobolev inequalities. Starting from (3.1) and (3.6), a simple variational argument yields:

Corollary 3.2.1

For any smooth real-valued function f on [0, 1],

$$\displaystyle \begin{aligned} \mathrm{Var}_\mu(f) \leq \frac{1}{\pi^2}\,\int_0^1 f'(x)^2\,dx, \end{aligned} $$
(3.7)

where the variance is understood with respect to the uniform probability measure dμ(x) = dx on the unit segment. Moreover, if f(0) = f(1), then

$$\displaystyle \begin{aligned} \mathrm{Var}_\mu(f) \leq \frac{1}{4\pi^2}\,\int_0^1 f'(x)^2\,dx. \end{aligned} $$
(3.8)

The constants \(\frac {1}{\pi ^2}\) and \(\frac {1}{4\pi ^2}\) in (3.7)–(3.8) are optimal and are respectively attained for the functions \(f(x) = \cos {}(\pi x)\) and \(f(x) = \sin {}(2\pi x)\) (cf. also [1]).
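As a quick check of optimality in (3.7), for \(f(x) = \cos(\pi x)\) one has

$$\displaystyle \begin{aligned} \mathrm{Var}_\mu(f) = \int_0^1 \cos^2(\pi x)\,dx - \Big(\int_0^1 \cos(\pi x)\,dx\Big)^2 = \frac{1}{2}, \qquad \frac{1}{\pi^2} \int_0^1 f'(x)^2\,dx = \frac{1}{\pi^2}\cdot\frac{\pi^2}{2} = \frac{1}{2}, \end{aligned}$$

so (3.7) holds with equality; the computation for \(f(x) = \sin(2\pi x)\) in (3.8) is analogous.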

For the proof, let us note that an analytic inequality of the form

$$\displaystyle \begin{aligned} \int_0^1 \sqrt{f(x)^2 + c f'(x)^2}\,dx \, \geq \, \Big(\int_0^1 f(x)^2\,dx\Big)^{1/2} \end{aligned} $$
(3.9)

with a constant c > 0 becomes an equality for f = 1. So, one may apply it to \(f_\varepsilon = 1 + \varepsilon f\) and, letting ε → 0, compare the coefficients in front of the powers of ε on both sides. First,

$$\displaystyle \begin{aligned}\int_0^1 f_\varepsilon(x)^2\,dx = 1 + 2\varepsilon \int_0^1 f(x)\,dx + \varepsilon^2\int_0^1 f(x)^2\,dx, \end{aligned}$$

so, by Taylor’s expansion, as ε → 0,

$$\displaystyle \begin{aligned} \begin{array}{rcl} \Big(\int_0^1 f_\varepsilon(x)^2\,dx\Big)^{1/2} &\displaystyle = &\displaystyle 1 + \varepsilon \int_0^1 f(x)\,dx + \frac{\varepsilon^2}{2}\int_0^1 f(x)^2\,dx \\ &\displaystyle &\displaystyle - \frac{1}{8} \ \Big(2\varepsilon \int_0^1 f(x)\,dx + \varepsilon^2 \int_0^1 f(x)^2\,dx\Big)^2 + O(\varepsilon^3) \\ &\displaystyle = &\displaystyle 1 + \varepsilon \int_0^1 f(x)\,dx + \frac{\varepsilon^2}{2}\int_0^1 f(x)^2\,dx - \frac{\varepsilon^2}{2}\, \Big(\int_0^1 f(x)\,dx\Big)^2 + O(\varepsilon^3). \end{array} \end{aligned} $$

On the other hand, since

$$\displaystyle \begin{aligned}f_\varepsilon(x)^2 + c f_\varepsilon'(x)^2 = 1 + 2\varepsilon f(x) + \varepsilon^2\,\big(f(x)^2 + cf'(x)^2\big), \end{aligned}$$

we have

$$\displaystyle \begin{aligned} \begin{array}{rcl} \big(f_\varepsilon(x)^2 + c f_\varepsilon'(x)^2\big)^{1/2} &\displaystyle = &\displaystyle 1 + \varepsilon f(x) + \frac{\varepsilon^2}{2}\,\big(f(x)^2 + cf'(x)^2\big) \\ &\displaystyle &\displaystyle - \frac{1}{8} \, \Big(2\varepsilon f(x) + \varepsilon^2\,\big(f(x)^2 + cf'(x)^2\big)\Big)^2 + O(\varepsilon^3) \\ &\displaystyle = &\displaystyle 1 + \varepsilon f(x) + \frac{c\varepsilon^2}{2}\,f'(x)^2 + O(\varepsilon^3). \end{array} \end{aligned} $$

Hence

$$\displaystyle \begin{aligned}\int_0^1 \big(f_\varepsilon(x)^2 + c f_\varepsilon'(x)^2\big)^{1/2} dx \, = \, 1 + \varepsilon \int_0^1 f(x)\,dx + \frac{c\varepsilon^2}{2} \int_0^1 f'(x)^2\,dx + O(\varepsilon^3). \end{aligned}$$

Inserting both expansions in (3.9), we see that the linear coefficients coincide, while comparing the quadratic terms leads to the Poincaré-type inequality

$$\displaystyle \begin{aligned}c \int_0^1 f'(x)^2\,dx \, \geq \, \int_0^1 f(x)^2\,dx - \Big(\int_0^1 f(x)\,dx\Big)^2. \end{aligned}$$

Thus, the isoperimetric inequality on the upper half-plane implies the Poincaré-type inequality (3.7) on [0, 1], while the isoperimetric inequality on the whole plane implies the restricted Poincaré-type inequality (3.8), with optimal constants in both cases.

3.3 Sobolev Inequalities

If f is non-negative, then f(x) = 0 ⇒ f′(x) = 0 at every interior point x (any zero of f is a minimum point), and thus \(f(x)^2 + c f'(x)^2 = 0\) almost everywhere on the set {f = 0}. Hence, applying Cauchy’s inequality, from (3.9) we get

$$\displaystyle \begin{aligned} \begin{array}{rcl} \int_0^1 f(x)^2\, dx &\displaystyle \leq &\displaystyle \bigg(\int_0^1 \sqrt{f(x)}\,\sqrt{f(x) + c\frac{f'(x)^2}{f(x)}}\,1_{\{f(x)>0\}}\, dx\bigg)^2 \\ &\displaystyle \leq &\displaystyle \int_0^1 f(x)\,dx\ \Big(\int_0^1 f(x)\,dx + c\int_0^1 \frac{f'(x)^2}{f(x)}\,1_{\{f(x)>0\}}\, dx \Big). \end{array} \end{aligned} $$
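The first inequality here is (3.9) combined with the pointwise factorization on the set {f > 0},

$$\displaystyle \begin{aligned} \sqrt{f(x)^2 + c f'(x)^2} \, = \, \sqrt{f(x)}\,\sqrt{f(x) + c\,\frac{f'(x)^2}{f(x)}}, \end{aligned}$$

while the second one is Cauchy’s inequality.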

Therefore, Propositions 3.1.1–3.1.2 also yield:

Proposition 3.3.1

For any non-negative smooth function f on [0, 1] with \(\int_0^1 f(x)\,dx = 1\),

$$\displaystyle \begin{aligned} \mathrm{Var}_\mu(f) \, \leq \, \frac{1}{\pi^2} \int_0^1 \frac{f'(x)^2}{f(x)}\,1_{\{f(x)>0\}}\,dx, \end{aligned} $$
(3.10)

where the variance is with respect to the uniform probability measure μ on the unit segment. Moreover, if f(0) = f(1), then

$$\displaystyle \begin{aligned} \mathrm{Var}_\mu(f) \, \leq \, \frac{1}{4\pi^2} \int_0^1 \frac{f'(x)^2}{f(x)}\,1_{\{f(x)>0\}}\,dx. \end{aligned} $$
(3.11)

Recall that there is a general relation between the entropy functional

$$\displaystyle \begin{aligned}\mathrm{Ent}_\mu(f) = \int f \log f\,d\mu - \int f\,d\mu \ \log \int f\,d\mu \qquad (f \geq 0) \end{aligned}$$

and the variance, namely

$$\displaystyle \begin{aligned} \mathrm{Ent}_\mu(f) \int f\,d\mu \, \leq \, \mathrm{Var}_\mu(f). \end{aligned} $$
(3.12)

It is rather elementary; assume by homogeneity that \(\int f\,d\mu = 1\). Since \(\log t \leq t-1\) and therefore \(t\log t \leq t(t-1)\) for all t ≥ 0, we have

$$\displaystyle \begin{aligned}f(x) \log f(x) \leq f(x)^2 - f(x). \end{aligned}$$

After integration, this yields (3.12).

Using the latter in (3.10)–(3.11), we arrive at the logarithmic Sobolev inequalities.

Corollary 3.3.2

For any non-negative smooth function f on [0, 1], with respect to the uniform probability measure μ on the unit segment we have

$$\displaystyle \begin{aligned} \mathrm{Ent}_\mu(f) \, \leq \, \frac{1}{\pi^2} \int_0^1 \frac{f'(x)^2}{f(x)}\,1_{\{f(x)>0\}}\,dx. \end{aligned} $$
(3.13)

Moreover, if f(0) = f(1), then

$$\displaystyle \begin{aligned} \mathrm{Ent}_\mu(f) \, \leq \, \frac{1}{4\pi^2} \int_0^1 \frac{f'(x)^2}{f(x)}\,1_{\{f(x)>0\}}\,dx. \end{aligned} $$
(3.14)

Replacing here f by \((1 + \varepsilon f)^2\) and letting ε → 0, we return to the Poincaré-type inequalities (3.7) and (3.8), though with an extra factor of 2. The best constant in (3.13) is however \(\frac{1}{2\pi^2}\), and in (3.14) it is \(\frac{1}{8\pi^2}\) [1, Proposition 5.7.5]. On the other hand, the inequalities (3.10)–(3.11) are much stronger than (3.13)–(3.14).
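Regarding the first claim: a routine Taylor computation gives, as ε → 0,

$$\displaystyle \begin{aligned} \mathrm{Ent}_\mu\big((1+\varepsilon f)^2\big) = 2\varepsilon^2\,\mathrm{Var}_\mu(f) + O(\varepsilon^3), \qquad \int_0^1 \frac{\big[\big((1+\varepsilon f)^2\big)'\big]^2}{(1+\varepsilon f)^2}\,dx = 4\varepsilon^2 \int_0^1 f'(x)^2\,dx, \end{aligned}$$

so that (3.13) yields \(\mathrm{Var}_\mu(f) \leq \frac{2}{\pi^2}\int_0^1 f'(x)^2\,dx\), that is, (3.7) with an extra factor of 2.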

3.4 Informational Quantities and Distances

The inequalities (3.13)–(3.14) may be stated equivalently in terms of informational distances to the uniform measure μ on the unit segment. Let us recall that, for random elements X and Z in an abstract measurable space Ω with distributions ν and μ respectively, the Rényi divergence power or the Tsallis distance from ν to μ of order α > 0 is defined by

$$\displaystyle \begin{aligned}T_\alpha(X||Z) = T_\alpha(\nu||\mu) = \frac{1}{\alpha - 1}\, \bigg[\int \Big(\frac{p}{q}\Big)^{\alpha - 1}\,p\,d\lambda - 1\bigg] = \frac{1}{\alpha - 1}\, \bigg[\int f^\alpha\,d\mu - 1\bigg], \end{aligned}$$

where p and q are densities of ν and μ with respect to some (any) σ-finite dominating measure λ on Ω, and \(f = p/q\) is the density of ν with respect to μ (the definition does not depend on the choice of λ). For α = 1, understood in the limit sense as α → 1, we arrive at the Kullback–Leibler distance, or informational divergence,

$$\displaystyle \begin{aligned}T_1(X||Z) = D(X||Z) = \int p \log\frac{p}{q}\,d\lambda = \int f\log f\,d\mu, \end{aligned}$$

which is the same as \(\mathrm{Ent}_\mu(f)\). For α = 2, the Tsallis \(T_2\)-distance is the same as the χ²-distance. If α ≥ 1, necessarily \(T_\alpha(X||Z) = \infty\) as long as ν is not absolutely continuous with respect to μ. In any case, the function \(\alpha \mapsto T_\alpha\) is non-decreasing; we refer the interested reader to the survey [6] (cf. also [3]).
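Indeed, regarding the case α = 2: for a density f with \(\int f\,d\mu = 1\),

$$\displaystyle \begin{aligned} T_2(\nu||\mu) = \int f^2\,d\mu - 1 = \int (f - 1)^2\,d\mu = \chi^2(\nu,\mu). \end{aligned}$$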

In the case of the real line \(\varOmega = {\mathbb R}\), and when the densities p and q are absolutely continuous, the relative Fisher information or the Fisher information distance from ν to μ is defined by

$$\displaystyle \begin{aligned}I(X||Z) = I(\nu||\mu) = \int_{-\infty}^\infty \Big(\frac{p'}{p} - \frac{q'}{q}\Big)^2\,p\,d\lambda = \int_{-\infty}^\infty \frac{f^{\prime 2}}{f}\,d\mu, \end{aligned}$$

still assuming that the probability measure ν is absolutely continuous with respect to μ and has density \(f = p/q\). This definition is commonly used when μ is supported on an interval \(\varDelta \subset {\mathbb R}\), finite or not, on which q is positive, with the above integration restricted to Δ. With these notations, Proposition 3.3.1 corresponds to the order α = 2 and therefore takes the form

$$\displaystyle \begin{aligned} T_2(X||Z) \leq \frac{1}{\pi^2}\,I(X||Z), \qquad T_2(X||Z) \leq \frac{1}{4\pi^2}\,I(X||Z), \end{aligned} $$
(3.15)

holding true for an arbitrary random variable X with values in [0, 1]. Here the random variable Z has the uniform distribution μ on [0, 1], and the additional constraint f(0) = f(1) is imposed in the second relation.

There is also another non-distance formulation of (3.15) in terms of classical informational quantities such as the Rényi entropy power and the Fisher information

$$\displaystyle \begin{aligned}N_\alpha(X) = \Big(\int_{-\infty}^\infty p(x)^\alpha\,dx\Big)^{-\frac{2}{\alpha - 1}}, \qquad I(X) = \int_{-\infty}^\infty \frac{p'(x)^2}{p(x)}\,dx. \end{aligned}$$

Here the case α = 2 defines the quadratic Rényi entropy power \(N_2(X)\). If μ is supported and has an absolutely continuous positive density q on the interval \(\varDelta \subset {\mathbb R}\), one may also define the restricted Fisher information

$$\displaystyle \begin{aligned}I_0(X) = \int_\varDelta \frac{p'(x)^2}{p(x)}\,dx. \end{aligned}$$

For example, if Z is uniformly distributed in the unit interval, so that q(x) = 1 for 0 < x < 1, we have \(I(Z) = \infty\) (q is discontinuous at the endpoints), while \(I_0(Z) = 0\). In this case, if X has values in [0, 1], we have

$$\displaystyle \begin{aligned}T_2(X||Z) = \int_0^1 p(x)^2\,dx - 1 = N_2(X)^{-1/2} - 1, \qquad I(X||Z) = I_0(X). \end{aligned}$$

Hence, the first inequality in (3.15) may be written as follows.

Corollary 3.4.1

For any random variable X with values in [0, 1], having there an absolutely continuous density, we have

$$\displaystyle \begin{aligned} N_2(X) \Big(1 + \frac{1}{\pi^2}\,I_0(X)\Big)^2 \geq 1. \end{aligned} $$
(3.16)
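Indeed, by (3.15) and the identities above,

$$\displaystyle \begin{aligned} N_2(X)^{-1/2} = \int_0^1 p(x)^2\,dx = 1 + T_2(X||Z) \, \leq \, 1 + \frac{1}{\pi^2}\,I_0(X), \end{aligned}$$

and raising this bound to the power −2 gives (3.16).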

This relation is analogous to the well-known isoperimetric inequality for entropies,

$$\displaystyle \begin{aligned}N(X)\, I(X) \geq 2\pi e, \end{aligned}$$

where \(N(X) = N_1(X) = e^{2h(X)}\) is the entropy power, corresponding to the Shannon differential entropy

$$\displaystyle \begin{aligned}h(X) = - \int_{-\infty}^\infty p(x) \log p(x)\,dx. \end{aligned}$$

The functional \(I_0(X)\) may be replaced with I(X) in (3.16) (since \(I_0 \leq I\)), and then one may remove the assumption on the values of X. Moreover, with the functional I(X), this inequality may be considerably strengthened. Indeed, the relation \( N_2(X)\, (1 + \frac{1}{\pi^2}\,I(X))^2 \geq 1 \) is not 0-homogeneous with respect to X, and therefore it admits a self-refinement when applied to the random variables λX, λ > 0. Optimizing over this parameter, we obtain an equivalent 0-homogeneous relation

$$\displaystyle \begin{aligned} N_2(X) I(X) \geq c, \end{aligned} $$
(3.17)

with \(c = \pi^2/4\) (the minimization over λ is carried out below). Note that (3.17) with c = 1 is also elementary. To see this, first note that, by the Cauchy inequality, for all \(x \in {\mathbb R}\),

$$\displaystyle \begin{aligned} \begin{array}{rcl} p(x) &\displaystyle = &\displaystyle \int_{-\infty}^x p'(y)\,dy \, \leq \, \int_{p(y)>0} |p'(y)|\,dy \, = \, \int_{p(y)>0} \frac{|p'(y)|}{\sqrt{p(y)}}\, \sqrt{p(y)}\,dy \\ &\displaystyle \leq &\displaystyle \bigg(\int_{p(y)>0} \frac{p'(y)^2}{p(y)}\, dy\bigg)^{1/2} \ \bigg(\int_{p(y)>0} p(y)\,dy\bigg)^{1/2} \, = \, \sqrt{I(X)}. \end{array} \end{aligned} $$

Therefore,

$$\displaystyle \begin{aligned}\int_{-\infty}^\infty p(x)^2\,dx \leq \sqrt{I(X)}, \end{aligned}$$

that is, \(N_2(X)\, I(X) \geq 1\).
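As for the self-refinement mentioned above: since \(N_2(\lambda X) = \lambda^2 N_2(X)\) and \(I(\lambda X) = \lambda^{-2} I(X)\), applying \(N_2(X)\,(1 + \frac{1}{\pi^2}\,I(X))^2 \geq 1\) to λX and minimizing over λ > 0 gives

$$\displaystyle \begin{aligned} \min_{s > 0}\ s\,N_2(X) \Big(1 + \frac{I(X)}{\pi^2 s}\Big)^2 \, = \, N_2(X)\, \min_{s > 0} \Big(\sqrt{s} + \frac{I(X)}{\pi^2 \sqrt{s}}\Big)^2 \, = \, \frac{4}{\pi^2}\,N_2(X)\,I(X) \end{aligned}$$

(with \(s = \lambda^2\), minimized at \(s = I(X)/\pi^2\)), which is how the constant \(c = \pi^2/4\) in (3.17) arises.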

Observe that another inequality involving the quadratic Rényi entropy power \(N_2(X)\) and a generalisation of the Fisher information can be extracted from [5], namely, for all 1 ≤ q < ∞, \(N_2(X)^q \int |p'|^q\, p \geq C_q\) with an optimal constant \(C_q\). However, it is unclear how to relate this inequality to (3.17).