
1 Introduction

More than 40 years ago W. Schmidt [51, 52] proved the following theorem on irregularities of point distribution related to discs.

Theorem 1 (Schmidt).

For every distribution \(\mathcal{P}\) of N points in the torus \(\mathbb{T}^{2}\) there exists a disc \(D \subset \mathbb{T}^{2}\) of diameter less than 1 such that, for every \(\varepsilon > 0\),

$$\displaystyle{\left \vert \mathrm{card}\left (\mathcal{P}\cap D\right ) - N\left \vert D\right \vert \right \vert \geq c_{\varepsilon }\ N^{\left (1/4\right )-\varepsilon }\;,}$$

where \(\left \vert A\right \vert\) denotes the volume.

This result has to be compared with the following earlier results of K. Roth [50] and H. Davenport [26].

Theorem 2 (Roth).

For every distribution \(\mathcal{P}\) of N points in \(\left [0,1\right ]^{2}\) we have the following lower bound

$$\displaystyle{\int _{\mathbb{T}^{2}}\left \vert \mathrm{card}\left (\mathcal{P}\cap I_{x}\right ) - Nx_{1}x_{2}\right \vert ^{2}\mathit{dx}_{ 1}\,\mathit{dx}_{2} \geq c\ \log N\;,}$$

where \(I_{x} = \left [0,x_{1}\right ] \times \left [0,x_{2}\right ]\) for every \(x = \left (x_{1},x_{2}\right ) \in \left [0,1\right ]^{2}\) . Hence for every distribution \(\mathcal{P}\) of N points in the torus \(\mathbb{T}^{2}\) there exists a rectangle \(R \subset \mathbb{T}^{2}\) , having sides parallel to the axes and such that

$$\displaystyle{\left \vert \mathrm{card}\left (\mathcal{P}\cap R\right ) - N\left \vert R\right \vert \right \vert \geq c\ \log ^{1/2}N\;.}$$

Theorem 3 (Davenport).

For every integer N ≥ 2 there exists a distribution \(\mathcal{P}\) of N points in the torus \(\mathbb{T}^{2}\) such that

$$\displaystyle{\int _{\mathbb{T}^{2}}\left \vert \mathrm{card}\left (\mathcal{P}\cap I_{x}\right ) - Nx_{1}x_{2}\right \vert ^{2}\mathit{dx}_{ 1}\,\mathit{dx}_{2} \leq c\ \log N\;.}$$

Schmidt’s theorem has been improved and extended by J. Beck [3] and H. Montgomery [42], who have independently obtained the following \(L^{2}\) result (see also [2, 4, 11]).

Theorem 4 (Beck, Montgomery).

Let \(B \subset \mathbb{T}^{d}\) be a convex body. Then for every distribution \(\mathcal{P}\) of N points in \(\mathbb{T}^{d}\) we have

$$\displaystyle{\int _{0}^{1}\int _{ \mathit{SO}\left (d\right )}\int _{\mathbb{T}^{d}}\left \vert \mathrm{card}\left (\mathcal{P}\cap \left (\lambda \sigma \left (B + t\right )\right )\right ) -\lambda ^{d}N\left \vert B\right \vert \right \vert ^{2}\ \mathit{dt}\,d\sigma \,d\lambda \geq c_{ d}N^{\left (d-1\right )/d}\;.}$$

Hence for every distribution \(\mathcal{P}\) of N points in \(\mathbb{T}^{d}\) there exists a translated, rotated and dilated copy \(B^{{\prime}}\) of B such that

$$\displaystyle{\left \vert \mathrm{card}\left (\mathcal{P}\cap B^{{\prime}}\right ) - N\left \vert B^{{\prime}}\right \vert \right \vert \geq cN^{\left (d-1\right )/2d}\;.}$$

J. Beck and W. Chen have proved that the above \(L^{2}\) estimate is sharp (see [2], see also [13, 22, 35]).

Theorem 5 (Beck and Chen).

Let \(B \subset \mathbb{R}^{d}\) be a convex body having diameter less than 1. Then for every positive integer N there exists a distribution \(\mathcal{P}\) of N points in \(\mathbb{T}^{d}\) such that

$$\displaystyle{\int _{0}^{1}\int _{ \mathit{SO}\left (d\right )}\int _{\mathbb{T}^{d}}\left \vert \mathrm{card}\left (\mathcal{P}\cap \left (\lambda \sigma \left (B + t\right )\right )\right ) -\lambda ^{d}N\left \vert B\right \vert \right \vert ^{2}\ \mathit{dt}\,d\sigma \,d\lambda \leq c_{ d}\ N^{\left (d-1\right )/d}\;.}$$

The large gap between the sharp \(L^{2}\) estimates which appear in Theorem 2 and in Theorem 4 seems to be related to the different behaviors of the Fourier transforms of the characteristic functions of balls and polyhedra. The case of the ball is enlightening: the main ingredients in the proofs of the results in Theorems 4 and 5 are provided by the sharp estimates of the \(L^{2}\) average decay of the Fourier transform \(\hat{\chi }_{B}\) of the characteristic function of a convex body B:

$$\displaystyle{ \int _{\varSigma _{d-1}}\left \vert \hat{\chi }_{B}\left (\rho \sigma \right )\right \vert ^{2}\ d\sigma }$$
(3.1)

(here \(\varSigma _{d-1} = \left \{x \in \mathbb{R}^{d}: \left \vert x\right \vert = 1\right \}\) is the unit sphere in \(\mathbb{R}^{d}\)) so that the study of the above problem on irregularities of distribution turns out to be closely related to the study of (3.1) (see e.g. [6, 11, 13, 61, 62]).

The purpose of this chapter is to exploit the above relation in a detailed and self-contained way. In the second section we prove the \(L^{2}\) results for the average decay of Fourier transforms of characteristic functions of convex bodies. The third section contains \(L^{p}\) results for polyhedra. In the fourth section we deduce lattice point results. The fifth and the sixth sections are the main part of this chapter and show how to obtain different proofs of Theorems 4 and 5, depending on the estimates proved before.

Throughout this chapter positive constants are denoted by \(c,c^{{\prime}},c_{1},\ldots\) (they may vary at every occurrence). By \(c_{d}\), \(c_{\varepsilon }\), \(c_{B}\), …we denote constants which depend on d, \(\varepsilon\), B, …For positive A and B, we write A ≈ B when there exist positive constants \(c_{1}\) and \(c_{2}\) such that \(c_{1}A \leq B \leq c_{2}A\).

2 Decay of the Fourier Transform: \(L^{2}\) Estimates for Characteristic Functions of Convex and More General Bodies

2.1 Introduction

Let \(B \subset \mathbb{R}^{d}\) be a convex body, i.e. a convex bounded set with non empty interior, and let d μ be the surface measure on ∂ B. The study of the decay of the Fourier transforms \(\hat{\chi }_{B}\left (\xi \right )\) and \(\hat{\mu }\left (\xi \right )\) has a long history and provides several applications to different fields in mathematics (see [56, Ch. VIII, 5, B]). Of course we have \(\hat{\chi }_{B}\left (\xi \right ) \rightarrow 0\) as \(\left \vert \xi \right \vert \rightarrow +\infty \), by the Riemann-Lebesgue lemma. However more is true, since

$$\displaystyle{ \left \vert \hat{\chi }_{B}\left (\xi \right )\right \vert \leq c_{B}\ \vert \xi \vert ^{-1}\;, }$$
(3.2)

for every \(\xi \in \mathbb{R}^{d}\). Indeed, write ξ = ρ σ in polar coordinates (ρ ≥ 0, σ ∈ Σ d−1) and for every σ ∈ Σ d−1 define, for \(s \in \mathbb{R}\), the parallel section function \(\varLambda \left (s\right ) =\varLambda _{\sigma }\left (s\right )\) equal to the \(\left (d - 1\right )\)-volume of the set \(B \cap \left \{\sigma ^{\perp } + s\sigma \right \}\). In order to prove (3.2) it is enough to assume \(\sigma = \left (1,0,\ldots,0\right )\), so that \(\xi = \left (\rho,0,\ldots,0\right )\). Then, if \(x = \left (x_{1},x_{2},\ldots,x_{d}\right )\), we have

$$\displaystyle{ \hat{\chi }_{B}\left (\xi \right ) =\int _{B}e^{-2\pi i\xi \cdot x}\ \mathit{dx} =\int _{ \mathbb{R}}e^{-2\pi i\rho x_{1} }\varLambda \left (x_{1}\right )\ \mathit{dx}_{1} =\hat{\varLambda } \left (\rho \right )\;. }$$
(3.3)

Observe that the total variation of the function \(\varLambda _{\sigma }\) is bounded uniformly in σ, so that (see e.g. [64, p. 221]) we get (3.2). The case of the cube \(Q = \left [-1/2,1/2\right ]^{d}\) shows that (3.2) cannot be improved. Indeed

$$\displaystyle{\hat{\chi }_{Q}\left (\xi \right ) =\prod _{ j=1}^{d}\frac{\sin \left (\pi \xi _{j}\right )} {\pi \xi _{j}} \;,}$$

so that, for the directions orthogonal to the facets (i.e. the \(\left (d - 1\right )\)-faces) of this cube, e.g. for \(\xi = \left (\rho,0,\ldots,0\right )\), we have \(\hat{\chi }_{Q}\left (\rho,0,\ldots,0\right ) =\sin \left (\pi \rho \right )/\pi \rho\) and therefore

$$\displaystyle{\mathop{\lim \sup }\limits_{\left \vert \xi \right \vert \rightarrow +\infty }\left \vert \xi \right \vert \left \vert \hat{\chi }_{Q}\left (\xi \right )\right \vert > 0\;.}$$

In the same way it is easy to see that if ξ = ρ σ and σ is not orthogonal to any facet of the cube, then \(\left \vert \hat{\chi }_{Q}\left (\xi \right )\right \vert \leq c_{\sigma }\ \rho ^{-2}\). More generally, if σ is not orthogonal to any face (of any dimension), then \(\left \vert \hat{\chi }_{Q}\left (\xi \right )\right \vert \leq c_{\sigma }\ \rho ^{-d}\), hence this last inequality holds for almost all directions.
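This directional behaviour is easy to observe numerically. The following sketch evaluates the closed formula for \(\hat{\chi }_{Q}\) in dimension d = 3 along a facet normal and along a direction which is not orthogonal to any face; the dimension, the two directions and the sampling grid are arbitrary choices made only for illustration.

```python
import numpy as np

def ft_cube(xi):
    """Fourier transform of chi_Q for Q = [-1/2, 1/2]^d: the product of
    sin(pi xi_j)/(pi xi_j) over the coordinates (np.sinc is the normalized sinc)."""
    return np.prod(np.sinc(np.asarray(xi, dtype=float)))

rhos = np.linspace(10.0, 1000.0, 5000)
e1 = np.array([1.0, 0.0, 0.0])                       # orthogonal to a facet
generic = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)   # orthogonal to no face

# along e1 the decay is only rho^{-1}: rho * |ft| stays of size 1/pi
print(max(r * abs(ft_cube(r * e1)) for r in rhos))
# along a generic direction the decay is rho^{-d} = rho^{-3}
print(max(r ** 3 * abs(ft_cube(r * generic)) for r in rhos))
```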

The case of the (unit) ball \(D = \left \{x \in \mathbb{R}^{d}: \left \vert x\right \vert \leq 1\right \}\) is of course peculiar: \(\hat{\chi }_{D}\) is a radial function and we have

$$\displaystyle{ \hat{\chi }_{D}(\xi ) = \left \vert \xi \right \vert ^{-d/2}J_{ d/2}\left (2\pi \left \vert \xi \right \vert \right )\;, }$$
(3.4)

for every \(\xi \in \mathbb{R}^{d}\). Here \(J_{d/2}\) is the Bessel function of order d∕2. By the asymptotics of Bessel functions (see [57, Ch. IV, Lemma 3.11], see also [63] for the basic reference on Bessel functions) we know that

$$\displaystyle{ \hat{\chi }_{D}(\xi ) =\pi ^{-1}\left \vert \xi \right \vert ^{-\left (d+1\right )/2}\cos \left (2\pi \left \vert \xi \right \vert -\pi \left (d + 1\right )/4\right ) + \mathcal{O}_{ d}\left (\left \vert \xi \right \vert ^{-\left (d+3\right )/2}\right ) }$$
(3.5)

as \(\left \vert \xi \right \vert \rightarrow +\infty \).
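The agreement between (3.4) and the leading term in (3.5) can be checked numerically; the short sketch below does this for d = 3, with arbitrarily chosen sample radii and helper names introduced only for this illustration.

```python
import numpy as np
from scipy.special import jv            # Bessel function J_nu

d = 3

def ft_ball(r):
    # exact value (3.4) of the Fourier transform of the unit ball at |xi| = r
    return r ** (-d / 2) * jv(d / 2, 2 * np.pi * r)

def ft_ball_asymptotic(r):
    # leading term of the expansion (3.5)
    return r ** (-(d + 1) / 2) * np.cos(2 * np.pi * r - np.pi * (d + 1) / 4) / np.pi

for r in [5.0, 20.0, 80.0]:
    exact, main = ft_ball(r), ft_ball_asymptotic(r)
    # the remainder is O(r^{-(d+3)/2}), so the rescaled error stays bounded
    print(r, exact, main, abs(exact - main) * r ** ((d + 3) / 2))
```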

In certain cases \(\hat{\chi }_{B}\left (\xi \right )\) admits interesting upper bounds of geometric nature. When d = 2 we shall see in Lemma 14 that for every convex body \(B \subset \mathbb{R}^{2}\) we have, for large ξ = ρ σ,

$$\displaystyle{ \left \vert \hat{\chi }_{B}(\xi )\right \vert \leq c_{B}\ \rho ^{-1}\left \{\varLambda \left (-\rho ^{-1} +\sup _{ y\in B}y\cdot \sigma \right ) +\varLambda \left (\rho ^{-1} +\inf _{ y\in B}y\cdot \sigma \right )\right \}\;, }$$
(3.6)

where Λ is the parallel section function. It is easy to show that (3.6) is false when d ≥ 3. Indeed let P be the octahedron in \(\mathbb{R}^{3}\) given by the convex hull of the six points \(\left (\pm 1,\pm 1,0\right )\), \(\left (0,0,\pm 1\right ),\) and let \(\sigma = \left (0,0,1\right )\). Then

$$\displaystyle{\hat{\chi }_{P}(\rho \sigma ) =\int _{P}e^{-2\pi i\rho x_{3} }\ \mathit{dx}_{1}\,\mathit{dx}_{2}\,\mathit{dx}_{3} =\hat{\varLambda } \left (\rho \right )\;.}$$

Since

$$\displaystyle{\varLambda \left (\rho \right ) = \left (1 -\left \vert \rho \right \vert \right )_{+}^{2}\;,}$$

then the RHS of (3.6) is \(\approx \rho ^{-3}\). Now observe that the piecewise smooth function \(\varLambda \left (\rho \right )\) has a continuous derivative at ± 1, while at 0 it is continuous but not differentiable. Then an integration by parts shows that

$$\displaystyle{\limsup _{\rho \rightarrow +\infty }\rho ^{2}\left \vert \hat{\varLambda }\left (\rho \right )\right \vert > 0\;.}$$
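In fact, since \(\varLambda \left (s\right ) = \left (1 -\left \vert s\right \vert \right )_{+}^{2}\) is even, an explicit computation gives

$$\displaystyle{\hat{\varLambda }\left (\rho \right ) = 2\int _{0}^{1}\left (1 - s\right )^{2}\cos \left (2\pi \rho s\right )\ \mathit{ds} = \frac{1} {\pi ^{2}\rho ^{2}} - \frac{\sin \left (2\pi \rho \right )} {2\pi ^{3}\rho ^{3}}\;,}$$

so that \(\rho ^{2}\hat{\varLambda }\left (\rho \right ) \rightarrow \pi ^{-2} > 0\) as ρ → +∞.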

Hence, for d ≥ 3, neither (3.6) nor an averaged version of it can hold. On the other hand a deeper analysis shows that (3.6) holds true for every d as long as ∂ B is smooth and has everywhere finite order of contact (see [1] and [16]).

In general (3.6) cannot be reversed. Indeed, let B be a ball and recall (3.4): the zeros of the Bessel function (see [63]) show that the inequality (3.6) cannot be reversed for any d. A. Podkorytov has shown that (3.6) can be reversed “in mean” (Podkorytov, 2001, personal communication).

A very important case is given by the class of convex bodies B such that ∂ B is smooth with everywhere positive Gaussian curvature . In this case the decay of \(\hat{\chi }_{B}(\xi )\) resembles the decay for the ball. Indeed we have (see [16, 30, 32, 33] or [56, Ch. VIII, 5, B])

$$\displaystyle{ \left \vert \hat{\chi }_{B}(\xi )\right \vert \leq c_{B}\left \vert \xi \right \vert ^{-\left (d+1\right )/2}\;, }$$
(3.7)

for every \(\xi \in \mathbb{R}^{d}\).

When ∂ B is flat at some points or irregular, the bound in (3.7) may fail and a pointwise estimate for \(\hat{\chi }_{B}(\xi )\) may lead to poor results in the applications. As a way to overcome this difficulty, we observe that in several problems (see e.g. [10, 11, 15, 46, 48, 49, 62]) the Fourier transform has to be integrated over the rotations, so that it may be enough to study suitable spherical averages of \(\hat{\chi }_{B}(\xi )\). In the next subsection we will study the \(L^{2}\) spherical means

$$\displaystyle{\left \{\int _{\varSigma _{d-1}}\left \vert \hat{\chi }_{B}\left (\rho \sigma \right )\right \vert ^{2}\ d\sigma \right \}^{1/2}}$$

for the case of arbitrary convex bodies, while in the following section we will consider \(L^{p}\) spherical means for polyhedra.

2.2 \(L^{2}\) Spherical Estimates for Convex Bodies

The main result in this field shows that if \(B \subset \mathbb{R}^{d}\) is a convex body, then the \(L^{2}\) spherical average of \(\hat{\chi }_{B}\) decays with order \(\left (d + 1\right )/2\). Of course this agrees with the case of the ball, where no spherical average is necessary. The following theorem has been proved by A. Podkorytov in the case d = 2 [45] and by L. Brandolini, S. Hofmann and A. Iosevich in any dimension d [6].

Theorem 6.

Let \(B \subset \mathbb{R}^{d}\) be a convex body. Then there exists a positive constant c = c d such that

$$\displaystyle{ \left \Vert \hat{\chi }_{B}\left (\rho \cdot \right )\right \Vert _{L^{2}\left (\varSigma _{d-1}\right )} \leq c\left (\mathrm{diam}\left (B\right )\right )^{\left (d-1\right )/2}\rho ^{-\left (d+1\right )/2}\;. }$$
(3.8)

Proof.

For every \(\varepsilon > 0\) consider a convex body \(B^{{\prime}}\subset B\) such that \(\partial B^{{\prime}}\) is smooth with positive Gaussian curvature and \(\left \vert B\setminus B^{{\prime}}\right \vert <\varepsilon\) (here \(\left \vert A\right \vert \) denotes the Lebesgue measure of the set A). Assume

$$\displaystyle{\left \Vert \hat{\chi }_{B^{{\prime}}}\left (\rho \cdot \right )\right \Vert _{L^{2}\left (\varSigma _{d-1}\right )} \leq c\rho ^{-\left (d+1\right )/2}}$$

with c depending on B, but not on \(B^{{\prime}}\). Then

$$\displaystyle{\left \Vert \hat{\chi }_{B}\left (\rho \cdot \right )\right \Vert _{L^{2}\left (\varSigma _{d-1}\right )} \leq \left \Vert \hat{\chi }_{B^{{\prime}}}\left (\rho \cdot \right )\right \Vert _{L^{2}\left (\varSigma _{d-1}\right )} + \left \Vert \hat{\chi }_{B\setminus B^{{\prime}}}\left (\rho \cdot \right )\right \Vert _{L^{2}\left (\varSigma _{d-1}\right )} \leq c\rho ^{-\left (d+1\right )/2}+\varepsilon }$$

and (3.8) follows by choosing suitable \(B^{{\prime}}\) (and \(\varepsilon\)) as ρ diverges. Then it is enough to prove (3.8) assuming ∂ B smooth, since the constant \(c\left (\mathrm{diam}\left (B\right )\right )^{\left (d-1\right )/2}\) must be independent of the smoothness of ∂ B. For ξ ≠ 0 let

$$\displaystyle{\omega \left (t\right ) = \frac{e^{-2\pi \mathit{it}\cdot \xi }} {-2\pi i\left \vert \xi \right \vert ^{2}}\ \xi \;.}$$

Then \(\mathrm{div}\omega \left (t\right ) = e^{-2\pi \mathit{it}\cdot \xi }\) and the divergence theorem yields

$$\displaystyle{ \hat{\chi }_{B}\left (\rho \sigma \right ) =\int _{B}e^{-2\pi i\rho \sigma \cdot t}\,\mathit{dt} = -\frac{1} {2\pi i\rho }\int _{\partial B}e^{-2\pi i\rho \sigma \cdot t}\left (\nu (t)\cdot \sigma \right )\ d\mu \left (t\right )\;, }$$
(3.9)

where ν(t) is the outward unit normal to ∂ B at t and d μ denotes the surface measure on ∂ B. Now write the unit sphere Σ d−1 as a finite union of spherical caps U j having small radius and centers at points γ j  ∈ Σ d−1, in such a way that every spherical cap U j supports a cutoff function η j , so that the η j ’s provide a smooth partition of unity of Σ d−1. Although this partition of unity is independent of B, the family \(\left \{\eta _{j}\left (\nu (t)\right )\right \}\) is a partition of unity on ∂ B. We then write

$$\displaystyle{\hat{\chi }_{B}\left (\rho \sigma \right ) = -\frac{1} {2\pi i\rho }\sum _{j}\int _{\partial B}e^{-2\pi i\rho \sigma \cdot t}\left (\nu (t)\cdot \sigma \right )\eta _{ j}\left (\nu (t)\right )\ d\mu \left (t\right )}$$

and it is enough to prove that for every j we have

$$\displaystyle{ \int _{\varSigma _{d-1}}\left \vert \int _{\partial B}e^{-2\pi i\rho \sigma \cdot t}\left (\nu (t)\cdot \sigma \right )\eta _{ j}\left (\nu (t)\right )d\mu \left (t\right )\right \vert ^{2}\,d\sigma \,\leq \,c\left (\mathrm{diam}\left (B\right )\right )^{d-1}\rho ^{-\left (d-1\right )}\;. }$$
(3.10)

Now suppose j is given, write η for η j , γ for γ j , and let Ω ⊂ ∂ B be the support of \(\eta \left (\nu (t)\right )\), so that from now on the inner integral in (3.10) will be on Ω. We may assume η supported in a small spherical cap having center at \(\gamma = \left (0,\ldots,0,-1\right )\). We need to consider directions which are essentially orthogonal and directions which are essentially non orthogonal to Ω, and tell them apart. In order to do this, let \(\psi: \mathbb{R} \rightarrow \left [0,1\right ]\) be a \(\mathcal{C}^{\infty }\) cutoff function such that \(\psi \left (t\right ) = 1\) for \(\left \vert t\right \vert \leq c_{1}\) and \(\psi \left (t\right ) = 0\) for \(\left \vert t\right \vert \geq c_{2}\), for 0 < c 1 < c 2 < 1. We write

$$\displaystyle\begin{array}{rcl} & & \int _{\varSigma _{d-1}}\left \vert \int _{\varOmega }e^{-2\pi i\rho \sigma \cdot t}\left (\nu (t)\cdot \sigma \right )\eta \left (\nu (t)\right )d\mu \left (t\right )\right \vert ^{2}\ d\sigma {}\\ & & =\int _{\varSigma _{d-1}}\left \vert \int _{\varOmega }e^{-2\pi i\rho \sigma \cdot t}\left (\nu (t)\cdot \sigma \right )\eta \left (\nu (t)\right )d\mu \left (t\right )\right \vert ^{2}\left (1 -\psi \left (-\sigma _{ d}\right )\right )\ d\sigma {}\\ & & +\int _{\varSigma _{d-1}}\left \vert \int _{\varOmega }e^{-2\pi i\rho \sigma \cdot t}\left (\nu (t)\cdot \sigma \right )\eta \left (\nu (t)\right )d\mu \left (t\right )\right \vert ^{2}\psi \left (-\sigma _{ d}\right )\ d\sigma {}\\ & & = S + \mathit{NS}\;. {}\\ \end{array}$$

We term “singular” the directions essentially orthogonal to the hyperplanes tangent to Ω and “non singular” the remaining ones (Fig. 3.1).

Fig. 3.1 The set Ω

Note that the phase − 2π i ρ σ ⋅ t has a stationary point in the singular directions. However this is not an obstacle, and the proof in [6] starts with the easy but somewhat unexpected remark that the \(L^{2}\) spherical mean “makes this stationary point disappear”, as we shall see in a moment. In order to estimate S we write

$$\displaystyle{S =\int _{\varOmega }\int _{\varOmega }\int _{\varSigma _{d-1}}e^{-2\pi i\rho \sigma \cdot \left (t-u\right )}f\left (t,u,\sigma \right )\ d\sigma d\mu \left (u\right )d\mu \left (t\right )\;,}$$

where

$$\displaystyle{f\left (t,u,\sigma \right ) = \left (\nu (t)\cdot \sigma \right )\eta \left (\nu (t)\right )\left (\nu (u)\cdot \sigma \right )\eta \left (\nu (u)\right )\left (1 -\psi \left (\sigma \cdot \gamma \right )\right )}$$

is smooth in σ. Note that tu in the above integral is essentially parallel to Ω. Then, writing the integral in d σ in local coordinates, we can apply [56, Ch. 8, Prop. 4] and obtain

$$\displaystyle{\left \vert \int _{\varSigma _{d-1}}e^{-2\pi i\rho \sigma \cdot \left (t-u\right )}f\left (t,u,\sigma \right )\ d\sigma \right \vert \leq c\left (1 +\rho \left \vert t - u\right \vert \right )^{-N}}$$

for a large positive integer N. Then

$$\displaystyle\begin{array}{rcl} S& \leq & c\int _{\varOmega }\int _{\varOmega } \frac{1} {\left (1 +\rho \left \vert t - u\right \vert \right )^{N}}\ d\mu \left (u\right )d\mu \left (t\right ) {}\\ & \leq & c\int \int _{\left \{\left \vert t-u\right \vert \leq \rho ^{-1}\right \}} \frac{1} {\left (1 +\rho \left \vert t - u\right \vert \right )^{N}}\ d\mu \left (u\right )d\mu \left (t\right ) {}\\ & & +c\int \int _{\left \{\left \vert t-u\right \vert \geq \rho ^{-1}\right \}} \frac{1} {\left (1 +\rho \left \vert t - u\right \vert \right )^{N}}\ d\mu \left (u\right )d\mu \left (t\right ) {}\\ & \leq & c\mu \left (\varOmega \right )\left (\int _{\left \{x\in \mathbb{R}^{d-1}:\left \vert x\right \vert \leq \rho ^{-1}\right \}}\ \mathit{dx} +\rho ^{-N}\int _{ \left \{x\in \mathbb{R}^{d-1}:\left \vert x\right \vert >\rho ^{-1}\right \}}\left \vert x\right \vert ^{-N}\ \mathit{dx}\right ) {}\\ & \leq & c\mu \left (\varOmega \right )\rho ^{-\left (d-1\right )}\;. {}\\ \end{array}$$

Now we need to prove the same estimate for NS. If we were free to integrate by parts several times, it would then be easy to handle NS and to end the proof. Since the constants in our estimates need to be independent of the smoothness of ∂ B, we need a more refined argument. As a first step, let us view Ω as the graph of a convex smooth function \(x\mapsto \varPhi \left (x\right )\). Then, writing \(\varSigma _{d-1} \ni \sigma = \left (\sigma _{1},\ldots,\sigma _{d}\right ) = \left (\sigma ^{{\prime}},\sigma _{d}\right )\) we have

$$\displaystyle\begin{array}{rcl} \mathit{NS}& =& \int _{\varSigma _{d-1}}\left \vert \int _{\mathbb{R}^{d-1}}e^{-2\pi i\rho \left (\sigma ^{{\prime}}\cdot x+\sigma _{ d}\varPhi \left (x\right )\right )}\left ( \frac{\left (\nabla \varPhi \left (x\right ),-1\right )} {\sqrt{\left \vert \nabla \varPhi \left (x\right ) \right \vert ^{2 } + 1}}\cdot \sigma \right )\right. {}\\ & \times & \left.\eta \left ( \frac{\left (\nabla \varPhi \left (x\right ),-1\right )} {\sqrt{\left \vert \nabla \varPhi \left (x\right ) \right \vert ^{2 } + 1}}\right )\mathit{dx}\right \vert ^{2}\psi \left (\sigma \cdot \gamma \right )\ d\sigma {}\\ & =& \int _{\varSigma _{d-1}}\left \vert \int _{\mathcal{A}}e^{-2\pi i\rho \left (\sigma ^{{\prime}}\cdot x+\sigma _{ d}\varPhi \left (x\right )\right )}h\left (\sigma,\nabla \varPhi \left (x\right )\right )\mathit{dx}\right \vert ^{2}\psi \left (\sigma \cdot \gamma \right )\ d\sigma \;, {}\\ \end{array}$$

where \(\mathcal{A}\) is the support of

$$\displaystyle{\eta \left ( \frac{\left (\nabla \varPhi \left (x\right ),-1\right )} {\sqrt{\left \vert \nabla \varPhi \left (x\right ) \right \vert ^{2 } + 1}}\right )}$$

and h is a smooth function in the variables σ and \(\nabla \varPhi \left (x\right )\). Note that our choice of Ω implies that ∇Φ is uniformly bounded on \(\mathcal{A}\) and that \(\left \vert \sigma ^{{\prime}}\right \vert \geq c > 0\) for a suitable choice of c.

We will work uniformly in σ ⋅ γ = −σ d , so that σ d will not play a role. We will then concentrate on σ or, better, on \(\sigma ^{{\prime}}/\left \vert \sigma ^{{\prime}}\right \vert \in \varSigma _{d-2}\). As we did for Σ d−1 we now write Σ d−2 as a finite union of spherical caps having small radius and supporting cutoff functions ζ which give a smooth partition of unity on Σ d−2. It is enough to consider the cutoff function ζ supported on a small spherical cap centered at \(\left (1,0,\ldots,0\right ) \in \varSigma _{d-2}\). We then have to bound

$$\displaystyle{ \int _{\varSigma _{d-1}}\left \vert \int _{\mathcal{A}}e^{-2\pi i\rho \left (\sigma ^{{\prime}}\cdot x+\sigma _{ d}\varPhi \left (x\right )\right )}h\left (\sigma,\nabla \varPhi \left (x\right )\right )\ \mathit{dx}\right \vert ^{2}\psi \left (\sigma _{d}\right )\zeta \left ( \frac{\sigma ^{{\prime}}} {\left \vert \sigma ^{{\prime}}\right \vert }\right )\ d\sigma \;. }$$
(3.11)

None of the previous steps has said anything about the coordinates \(\sigma _{2},\ldots,\sigma _{d-1}\) inside σ. We then introduce the change of variables \(\sigma =\varXi \left (\tau,\theta \right )\), where θ is a real variable, \(\tau = \left (\tau _{1},\tau _{2},\ldots,\tau _{d-3},\tau _{d-2}\right )\), with \(\left (\tau,\theta \right )\) defined in a neighborhood V of the origin in \(\mathbb{R}^{d-1}\) and

$$\displaystyle\begin{array}{rcl} \sigma & =& \left (\sigma ^{{\prime}},\sigma _{ d}\right ) = \left (\sigma _{1},\sigma _{2},\sigma _{3},\ldots,\sigma _{d-2},\sigma _{d-1},\sigma _{d}\right ) {}\\ & =& \left (\sqrt{\frac{1 - \left \vert \tau \right \vert ^{2 } } {1 +\theta ^{2}}},\tau _{1},\tau _{2},\ldots,\tau _{d-3},\tau _{d-2},\theta \sqrt{\frac{1 - \left \vert \tau \right \vert ^{2 } } {1 +\theta ^{2}}} \right ) {}\\ & =& \left (\sqrt{\frac{1 - \left \vert \tau \right \vert ^{2 } } {1 +\theta ^{2}}},\tau,\theta \sqrt{\frac{1 - \left \vert \tau \right \vert ^{2 } } {1 +\theta ^{2}}} \right )\;. {}\\ \end{array}$$

Then (3.11) takes the form

$$\displaystyle{ \int _{V }\left \vert \int _{\mathbb{R}^{d-1}}e^{-2\pi i\rho \left (\sigma ^{{\prime}}\cdot x+\sigma _{ d}\varPhi \left (x\right )\right )}h\left (\sigma,\nabla \varPhi \left (x\right )\right )\mathit{dx}\right \vert ^{2}J\left (\tau,\theta \right )\ d\tau \,d\theta }$$
(3.12)

where \(J\left (\tau,\theta \right )\) is the Jacobian of the change of variables, times a smooth function. Let \(x^{{\prime}} = \left (x_{2},\ldots,x_{d-1}\right )\). Since σ d  = θ σ 1, the inner integral in (3.12) equals

$$\displaystyle{ \int _{\mathbb{R}^{d-2}}e^{-2\pi i\rho \tau \cdot x^{{\prime}} }\int _{\mathbb{R}}e^{-2\pi i\rho \sigma _{1}\left (x_{1}+\theta \varPhi \left (x_{1},x^{{\prime}}\right )\right ) }h\left (\sigma,\nabla \varPhi \left (x_{1},x^{{\prime}}\right )\right )\ \mathit{dx}_{ 1}\mathit{dx}^{{\prime}}\;. }$$
(3.13)

Now let

$$\displaystyle{s = g_{\theta,x^{{\prime}}}\left (x_{1}\right ) = x_{1} +\theta \varPhi \left (x_{1},x^{{\prime}}\right )\;.}$$

Since ∇Φ is small we have \(g_{\theta,x^{{\prime}}}^{{\prime}} > c > 0\), so that we may write (3.13) as

$$\displaystyle{ \int _{\mathbb{R}^{d-2}}e^{-2\pi i\rho \tau \cdot x^{{\prime}} }\int _{\mathbb{R}}e^{-2\pi i\rho \sigma _{1}s}H\left (\tau,\theta,s,x^{{\prime}}\right )\ \mathit{ds}\,\mathit{dx}^{{\prime}}\;, }$$
(3.14)

where

$$\displaystyle{H\left (\tau,\theta,s,x^{{\prime}}\right ) = \frac{h\left (\varXi \left (\tau,\theta \right ),\nabla \varPhi \left (g_{\theta,x^{{\prime}}}^{-1}\left (s\right ),x^{{\prime}}\right )\right )} {g_{\theta,x^{{\prime}}}^{{\prime}}\left (g_{\theta,x^{{\prime}}}^{-1}\left (s\right )\right )} }$$

is smooth in τ and bounded.

Let us introduce the difference operator Δ ρ :

$$\displaystyle{\varDelta _{\rho }\left [f\left (s\right )\right ] = f\left (s + \left (2\rho \right )^{-1}\right ) - f(s)\;.}$$

Since \(\varDelta _{-\rho }\left [e^{-2\pi i\rho \sigma _{1}s}\right ] = \left (e^{\pi i\sigma _{1}} - 1\right )e^{-2\pi i\rho \sigma _{1}s}\) and since

$$\displaystyle{\int _{\mathbb{R}}\varDelta _{-\rho }\left (f\right )g =\int _{\mathbb{R}}f\varDelta _{\rho }\left (g\right )\;,}$$

then (3.14) equals

$$\displaystyle\begin{array}{rcl} & & \frac{1} {e^{\pi i\sigma _{1}} - 1}\int _{\mathbb{R}^{d-2}}e^{-2\pi i\rho \tau \cdot x^{{\prime}} }\int _{\mathbb{R}}\varDelta _{-\rho }\left [e^{-2\pi i\rho \sigma _{1}s}\right ]H\left (\tau,\theta,s,x^{{\prime}}\right )\ \mathit{ds}\,\mathit{dx}^{{\prime}} {}\\ & & = \frac{1} {e^{\pi i\sigma _{1}} - 1}\int _{\mathbb{R}^{d-2}}e^{-2\pi i\rho \tau \cdot x^{{\prime}} }\int _{\mathbb{R}}e^{-2\pi i\rho \sigma _{1}s}\varDelta _{ \rho }\left [H\left (\tau,\theta,s,x^{{\prime}}\right )\right ]\ \mathit{ds}\,\mathit{dx}^{{\prime}}\;. {}\\ \end{array}$$

Then, by Minkowski's integral inequality and by the boundedness of \(\left (e^{\pi i\sigma _{1}} - 1\right )^{-1}\) on V, we have

$$\displaystyle{\sqrt{\mathit{NS}}\,\leq \,c\int _{\mathbb{R}}\left \{\int _{V }\left \vert \int _{\mathbb{R}^{d-2}}e^{-2\pi i\rho \tau \cdot x^{{\prime}} }\varDelta _{\rho }\left [H\left (\tau,\theta,s,x^{{\prime}}\right )\right ]\,\mathit{dx}^{{\prime}}\right \vert ^{2}J\left (\tau,\theta \right )\ d\tau \,d\theta \right \}^{1/2}\!ds\;.}$$

Let us rewrite the inner integrals as

$$\displaystyle\begin{array}{rcl} & & \int _{V }\left \vert \int _{\mathbb{R}^{d-2}}e^{-2\pi i\rho \tau \cdot x^{{\prime}} }\varDelta _{\rho }\left [H\left (\tau,\theta,s,x^{{\prime}}\right )\right ]\ \mathit{dx}^{{\prime}}\right \vert ^{2}J\left (\tau,\theta \right )\ d\tau \,d\theta {}\\ & & =\int _{\mathbb{R}^{d-2}}\int _{\mathbb{R}^{d-2}}\int _{V }e^{-2\pi i\rho \tau \cdot \left (x^{{\prime}}-y^{{\prime}}\right ) }\varDelta _{\rho }\left [H\left (\tau,\theta,s,x^{{\prime}}\right )\right ]\varDelta _{\rho }\left [H\left (\tau,\theta,s,y^{{\prime}}\right )\right ] {}\\ & & \times J\left (\tau,\theta \right )\ d\tau \,d\theta \,\mathit{dx}^{{\prime}}\,\mathit{dy}^{{\prime}}\;. {}\\ \end{array}$$

We define

$$\displaystyle{D^{N}f =\sum _{ k=0}^{N}\sum _{ \left \vert \alpha \right \vert =k}\sup _{\tau }\left \vert \frac{\partial ^{\alpha }} {\partial \tau ^{\alpha }}f(\tau )\right \vert }$$

so that we can integrate by parts several times in τ and obtain, for every positive integer N,

$$\displaystyle\begin{array}{rcl} & & \left \vert \int _{V }e^{-2\pi i\rho \tau \cdot \left (x^{{\prime}}-y^{{\prime}}\right ) }\varDelta _{\rho }\left [H\left (\tau,\theta,s,x^{{\prime}}\right )\right ]\varDelta _{\rho }\left [H\left (\tau,\theta,s,y^{{\prime}}\right )\right ]J\left (\tau,\theta \right )\ d\tau \,d\theta \right \vert {}\\ & &\leq c\int _{V } \frac{1} {\left (1 +\rho \left \vert x^{{\prime}}- y^{{\prime}}\right \vert \right )^{N}} {}\\ & & \times \ D^{N}\left (\varDelta _{\rho }\left [H\left (\tau,\theta,s,x^{{\prime}}\right )\right ]\varDelta _{\rho }\left [H\left (\tau,\theta,s,y^{{\prime}}\right )\right ]J\left (\tau,\theta \right )\right )\ d\tau \,d\theta {}\\ & & \leq c\int _{V } \frac{1} {\left (1 +\rho \left \vert x^{{\prime}}- y^{{\prime}}\right \vert \right )^{N}} {}\\ & & \times \ D^{N}\left (\varDelta _{\rho }\left [H\left (\tau,\theta,s,x^{{\prime}}\right )\right ]\right )D^{N}\left (\varDelta _{\rho }\left [H\left (\tau,\theta,s,y^{{\prime}}\right )\right ]J\left (\tau,\theta \right )\right )\ d\tau \,d\theta \;. {}\\ \end{array}$$

Since H and J are smooth in τ, the term

$$\displaystyle{D^{N}\left (\varDelta _{\rho }\left [H\left (\tau,\theta,s,y^{{\prime}}\right )\right ]J\left (\tau,\theta \right )\right )}$$

is bounded. For the remaining term

$$\displaystyle{D^{N}\left (\varDelta _{\rho }\left [H\left (\tau,\theta,s,x^{{\prime}}\right )\right ]\right )}$$

we seek a better estimate. Observe that for every α we have

$$\displaystyle\begin{array}{rcl} & & \frac{\partial ^{\alpha }} {\partial \tau ^{\alpha }}\varDelta _{\rho }\left [H\left (\tau,\theta,s,x^{{\prime}}\right )\right ] =\varDelta _{\rho }\frac{\partial ^{\alpha }H} {\partial \tau ^{\alpha }} \left (\tau,\theta,s,x^{{\prime}}\right ) {}\\ & & = \frac{\partial ^{\alpha }H} {\partial \tau ^{\alpha }} \left (\tau,\theta,s + \frac{1} {2\rho },x^{{\prime}}\right ) -\frac{\partial ^{\alpha }H} {\partial \tau ^{\alpha }} \left (\tau,\theta,s,x^{{\prime}}\right ) {}\\ & & = \frac{1} {2\rho }\int _{0}^{1}\left ( \frac{d} {\mathit{dr}} \frac{\partial ^{\alpha }H} {\partial \tau ^{\alpha }} \right )\left (\tau,\theta,s + \frac{r} {2\rho },x^{{\prime}}\right )\ \mathit{dr}\;. {}\\ \end{array}$$

Since \(\frac{\partial ^{\alpha }H} {\partial \tau ^{\alpha }}\) is smooth in ∇Φ, we can bound \(\frac{d} {\mathit{dr}} \frac{\partial ^{\alpha }H} {\partial \tau ^{\alpha }}\) (uniformly in τ and θ) by a linear combination of the entries \(\frac{\partial ^{2}\varPhi } {\partial x_{i}\partial x_{j}}\). Since Φ is convex, its Hessian matrix is positive semi-definite and we can bound every matrix entry by the trace Δ Φ, so that we have

$$\displaystyle{D^{N}\varDelta _{ \rho }\left [H\left (\tau,\theta,s,x^{{\prime}}\right )\right ] \leq c\ \frac{1} {\rho } \int _{0}^{1}K\left (g_{\theta,x^{{\prime}}}^{-1}\left (s + \frac{r} {2\rho }\right ),x^{{\prime}}\right )\ \mathit{dr}\;,}$$

where

$$\displaystyle{K =\chi _{\mathcal{A}}\varDelta \varPhi \;.}$$

Summarizing,

$$\displaystyle\begin{array}{rcl} & & \sqrt{\mathit{NS}} {}\\ & & \leq c\int _{\mathbb{R}}\left \{\int _{V }\left \vert \int _{\mathbb{R}^{d-2}}e^{-2\pi i\rho \tau \cdot x^{{\prime}} }\varDelta _{\rho }\left [H\left (\tau,\theta,s,x^{{\prime}}\right )\right ]\ \mathit{dx}^{{\prime}}\right \vert ^{2}J\left (\tau,\theta \right )\ d\tau \,d\theta \right \}^{1/2}\ \mathit{ds} {}\\ & & \leq c\int _{\mathbb{R}}\left \{\int _{\mathbb{R}^{d-2}}\int _{\mathbb{R}^{d-2}}\int _{V }e^{-2\pi i\rho \tau \cdot \left (x^{{\prime}}-y^{{\prime}}\right ) }\varDelta _{\rho }\left [H\left (\tau,\theta,s,x^{{\prime}}\right )\right ]\varDelta _{\rho }\left [H\left (\tau,\theta,s,y^{{\prime}}\right )\right ]\right. {}\\ & & \left.\begin{array}{l} \end{array} \times J\left (\tau,\theta \right )d\tau \,d\theta \,\mathit{dx}^{{\prime}}\,\mathit{dy}^{{\prime}}\right \}^{1/2}\ \mathit{ds} {}\\ & & \leq c\int _{\mathbb{R}}\left \{\int _{\mathbb{R}^{d-2}}\int _{\mathbb{R}^{d-2}} \frac{1} {\left (1 +\rho \left \vert x^{{\prime}}- y^{{\prime}}\right \vert \right )^{N}}\right. {}\\ & & \left.\times \int _{V }D^{N}\left (\varDelta _{\rho }\left [H\left (\tau,\theta,s,x^{{\prime}}\right )\right ]\right )\ d\tau \,d\theta \,\mathit{dx}^{{\prime}}\mathit{dy}^{{\prime}}\right \}^{1/2}\ \mathit{ds} {}\\ & & \leq c\rho ^{-1/2}\sup _{ \theta }\int _{\mathbb{R}}\left \{\int _{\mathbb{R}^{d-2}}\int _{0}^{1}K\left (g_{\theta,x^{{\prime}}}^{-1}\left (s + \frac{r} {2\rho }\right ),x^{{\prime}}\right )\ \mathit{dr}\ \right. {}\\ & & \times \left.\int _{\mathbb{R}^{d-2}} \frac{1} {\left (1 +\rho \left \vert x^{{\prime}}- y^{{\prime}}\right \vert \right )^{N}}\ \mathit{dy}^{{\prime}}\mathit{dx}^{{\prime}}\right \}^{1/2}\ \mathit{ds}\;. {}\\ \end{array}$$

By the Cauchy-Schwarz inequality the last term is smaller than

$$\displaystyle\begin{array}{rcl} & & c\rho ^{-1/2}\sup _{ \theta }\left \{\int _{\mathbb{R}^{d-2}} \frac{1} {\left (1 +\rho \left \vert z^{{\prime}}\right \vert \right )^{N}}\ dz^{{\prime}}\right \}^{1/2} {}\\ & & \times \int _{\mathbb{R}}\left \{\int _{\mathbb{R}^{d-2}}\int _{0}^{1}K\left (g_{\theta,x^{{\prime}}}^{-1}\left (s + \frac{r} {2\rho }\right ),x^{{\prime}}\right )\ \mathit{dr}\ \mathit{dx}^{{\prime}}\right \}^{1/2}\ \mathit{ds} {}\\ & & \leq c\rho ^{-\left (d-1\right )/2}\sup _{ \theta }\sqrt{\mathrm{diam }\left (B\right )} {}\\ & & \times \left \{\int _{\mathbb{R}}\int _{\mathbb{R}^{d-2}}\int _{0}^{1}K\left (g_{\theta,x^{{\prime}}}^{-1}\left (s + \frac{r} {2\rho }\right ),x^{{\prime}}\right )\ \mathit{dr}\ \ \mathit{dx}^{{\prime}}\,\mathit{ds}\right \}^{1/2} {}\\ & & \leq c\rho ^{-\left (d-1\right )/2}\sup _{ \theta }\sqrt{\mathrm{diam }\left (B\right )} {}\\ & & \times \left \{\int _{0}^{1}\int _{ \mathbb{R}}\int _{\mathbb{R}^{d-2}}K\left (g_{\theta,x^{{\prime}}}^{-1}\left (s\right ),x^{{\prime}}\right )\ \mathit{dx}^{{\prime}}\,\mathit{ds}\,\mathit{dr}\right \}^{1/2} {}\\ & & \leq c\rho ^{-\left (d-1\right )/2}\sqrt{\mathrm{diam }\left (B\right )}\left \{\int _{ \mathbb{R}^{d-1}}K\left (y\right )\ \mathit{dy}\right \}^{1/2}\;. {}\\ \end{array}$$

Finally,

$$\displaystyle\begin{array}{rcl} \int _{\mathbb{R}^{d-1}}K\left (y\right )\ \mathit{dy}& =& \int _{\mathcal{A}}\varDelta \varPhi \left (y\right )\mathit{dy} =\sum _{ j=1}^{d-1}\int _{ \mathcal{A}} \frac{\partial ^{2}\varPhi } {\partial y_{j}^{2}}\left (y\right )dy {}\\ & =& \int _{\mathcal{A}_{1}^{{\prime}}}\int _{\mathcal{A}_{1}\left (y^{{\prime}}\right )} \frac{\partial ^{2}\varPhi } {\partial y_{1}^{2}}\left (y_{1},y^{{\prime}}\right )\mathit{dy}_{ 1}\,\mathit{dy}^{{\prime}}+\ldots {}\\ \end{array}$$

where \(\mathcal{A}_{1}^{{\prime}}\) is the projection of \(\mathcal{A}\) on the hyperplane y 1 = 0, and

$$\displaystyle{\mathcal{A}_{1}\left (y^{{\prime}}\right ) = \left \{y_{ 1}: \left (y_{1},y^{{\prime}}\right ) \in \mathcal{A}\right \}.}$$

Since \(\frac{\partial ^{2}\varPhi } {\partial y_{1}^{2}} \geq 0\) then

$$\displaystyle\begin{array}{rcl} \int _{\mathcal{A}_{1}\left (y^{{\prime}}\right )} \frac{\partial ^{2}\varPhi } {\partial y_{1}^{2}}\left (y_{1},y^{{\prime}}\right )dy_{ 1}& \leq & \frac{\partial \varPhi } {\partial y_{1}}\left (\sup \mathcal{A}_{1}\left (y^{{\prime}}\right ),y^{{\prime}}\right ) - \frac{\partial \varPhi } {\partial y_{1}}\left (\inf \mathcal{A}_{1}\left (y^{{\prime}}\right ),y^{{\prime}}\right ) {}\\ & \leq & 2\sup _{\mathcal{A}}\left \vert \nabla \varPhi \right \vert {}\\ \end{array}$$

and therefore

$$\displaystyle{\int _{\mathbb{R}^{d-1}}K\left (y\right )\ \mathit{dy} \leq c\left \vert \mathcal{A}^{{\prime}}\right \vert.}$$

Thus

$$\displaystyle{\sqrt{\mathit{NS}} \leq c\rho ^{-\left (d-1\right )/2}\sqrt{\mathrm{diam }\left (B\right )}\sqrt{\left \vert \mathcal{A}^{{\prime} } \right \vert }\leq c\rho ^{-\left (d-1\right )/2}\left (\mathrm{diam}\left (B\right )\right )^{\left (d-1\right )/2}\;.}$$

 □ 

Remark 7.

The above proof shows that the term \(\left (\mathrm{diam}\left (B\right )\right )^{\left (d-1\right )/2}\) in the statement of Theorem 6 can be replaced by the term

$$\displaystyle{\left (\mu \left (\partial B\right ) +\mathrm{ diam}\left (B\right )p\right )^{1/2}\;,}$$

where p is the maximum of the \(\left (d - 2\right )\)-dimensional surface areas of the projections of B onto \(\left (d - 1\right )\)-dimensional hyperplanes. When B has large eccentricity, this provides a better estimate.

2.3 Estimates for Bounded Sets

In certain problems the spherical mean \(\left \Vert \hat{\chi }_{B}\left (\rho \cdot \right )\right \Vert _{L^{2}\left (\varSigma _{d-1}\right )}\) can be replaced by “easier” averages such as

$$\displaystyle{\int _{A\rho \leq \left \vert \xi \right \vert \leq B\rho }\left \vert \hat{\chi }_{B}\left (\xi \right )\right \vert ^{2}\ d\xi.}$$

In this way we can get nontrivial lower bounds (which are impossible for spherical averages, e.g. because of the zeros of the Bessel functions) and we can also deal with sets more general than convex bodies. The following result is taken from [11], see also [27].

Theorem 8.

Let \(B \subset \mathbb{R}^{d}\) and assume the existence of positive constants c 1 and c 2 such that

$$\displaystyle{ c_{1}\,\left \vert h\right \vert \leq \left \vert \left (B\setminus \left (B + h\right )\right ) \cup \left (\left (B + h\right )\setminus B\right )\right \vert \leq c_{2}\ \left \vert h\right \vert }$$
(3.15)

for every \(h \in \mathbb{R}^{d}\) with \(\left \vert h\right \vert \leq 1\). Then there exist four positive constants α,β,γ,δ such that

$$\displaystyle{ \alpha \ \rho ^{-1} \leq \int _{\left \{\gamma \rho \leq \left \vert \xi \right \vert \leq \delta \rho \right \}}\left \vert \hat{\chi }_{B}(\xi )\right \vert ^{2}\ d\xi \leq \beta \ \rho ^{-1} }$$
(3.16)

for every ρ ≥ 1.

Proof.

We first show that

$$\displaystyle{ \int _{\left \{\left \vert \xi \right \vert \geq \rho \right \}}\left \vert \hat{\chi }_{B}(\xi )\right \vert ^{2}\ d\xi \leq c\ \rho ^{-1}\;. }$$
(3.17)

In order to prove (3.17) it is enough to show that for every integer k ≥ 0 we have

$$\displaystyle{ \int _{\left \{2^{k}\leq \left \vert \xi \right \vert \leq 2^{k+1}\right \}}\left \vert \hat{\chi }_{B}(\xi )\right \vert ^{2}\ d\xi \leq c\ 2^{-k}\;. }$$
(3.18)

By (3.15) and the Parseval identity we have

$$\displaystyle{c_{2}\ \left \vert h\right \vert \geq \int _{\mathbb{R}^{d}}\left \vert \chi _{B}(x + h) -\chi _{B}(x)\right \vert ^{2}\ \mathit{dx} =\int _{ \mathbb{R}^{d}}\left \vert e^{2\pi i\xi \cdot h} - 1\right \vert ^{2}\left \vert \hat{\chi }_{ B}(\xi )\right \vert ^{2}\ d\xi \;.}$$

We split the set \(\left \{2^{k} \leq \left \vert \xi \right \vert \leq 2^{k+1}\right \}\) into a bounded number of subsets such that in each one of them we have (for a suitably chosen h with \(\left \vert h\right \vert \approx 2^{-k}\)) the inequality \(\left \vert e^{2\pi i\xi \cdot h} - 1\right \vert \geq c\). This proves (3.18), so that the estimate from above in (3.16) follows from (3.17). Again (3.18 ) implies

$$\displaystyle\begin{array}{rcl} & & \int _{\left \{\left \vert \xi \right \vert \leq \gamma \rho \right \}}\left \vert \xi \right \vert ^{2}\left \vert \hat{\chi }_{ B}(\xi )\right \vert ^{2}\ d\xi \leq c_{ 3} + c_{4}\ \sum _{k=1}^{\log _{2}\left (\gamma \rho \right )}\int _{ 2^{k}\leq \left \vert \xi \right \vert \leq 2^{k+1}}\left \vert \xi \right \vert ^{2}\left \vert \hat{\chi }_{ B}\left (\xi \right )\right \vert ^{2}\ d\xi \\ & & \leq c_{3} + c_{4}\ \sum _{k=1}^{\log _{2}\left (\gamma \rho \right )}2^{2k}\int _{ 2^{k}\leq \left \vert \xi \right \vert \leq 2^{k+1}}\left \vert \hat{\chi }_{B}\left (\xi \right )\right \vert ^{2}\ d\xi \leq c_{ 3} + c_{5}\ \sum _{k=1}^{\log _{2}\left (\gamma \rho \right )}2^{2k}2^{-k} \leq c\gamma \rho \;.{}\end{array}$$
(3.19)

Then, by (3.17) and (3.19),

$$\displaystyle\begin{array}{rcl} c_{1}\ \left \vert h\right \vert & \leq & \int _{\mathbb{R}^{d}}\left \vert \chi _{B}(x + h) -\chi _{B}(x)\right \vert ^{2}\ \mathit{dx} {}\\ & =& \int _{\mathbb{R}^{d}}\left \vert e^{2\pi i\xi \cdot h} - 1\right \vert ^{2}\left \vert \hat{\chi }_{ B}(\xi )\right \vert ^{2}\ d\xi {}\\ & \leq & 4\pi ^{2}\left \vert h\right \vert ^{2}\int _{ \left \{\left \vert \xi \right \vert \leq \gamma \rho \right \}}\left \vert \xi \right \vert ^{2}\left \vert \hat{\chi }_{ B}(\xi )\right \vert ^{2}\ d\xi + 4\int _{\left \{\gamma \rho \leq \left \vert \xi \right \vert \leq \delta \rho \right \}}\left \vert \hat{\chi }_{B}(\xi )\right \vert ^{2}\ d\xi {}\\ & & +4\int _{\left \{\left \vert \xi \right \vert \geq \delta \rho \right \}}\left \vert \hat{\chi }_{B}(\xi )\right \vert ^{2}\ d\xi {}\\ & \leq & c\ \left [\gamma \rho \left \vert h\right \vert ^{2} +\delta ^{-1}\rho ^{-1}\right ] + 4\int _{\left \{\gamma \rho \leq \left \vert \xi \right \vert \leq \delta \rho \right \}}\left \vert \hat{\chi }_{B}(\xi )\right \vert ^{2}\ d\xi \;, {}\\ \end{array}$$

so that, if \(\left \vert h\right \vert =\rho ^{-1}\), γ is suitably small and δ suitably large, we have

$$\displaystyle{\int _{\left \{\gamma \rho \leq \left \vert \xi \right \vert \leq \delta \rho \right \}}\left \vert \hat{\chi }_{B}(\xi )\right \vert ^{2}\ d\xi \geq \frac{c_{1}} {4} \left \vert h\right \vert -\frac{c} {4}\left [\gamma \rho \left \vert h\right \vert ^{2} +\delta ^{-1}\rho ^{-1}\right ] \geq \frac{c_{1}} {8} \ \rho ^{-1}\;,}$$

and this ends the proof. □ 

It is easy to see that a convex body satisfies (3.15): intersecting B with the lines parallel to h, one sees that the measure of the symmetric difference is comparable to \(\left \vert h\right \vert \) when \(\left \vert h\right \vert \leq 1\), with constants depending on B.

Remark 9.

M. Kolountzakis and T. Wolff [37] have proved that for every set \(B \subset \mathbb{R}^{d}\) having positive finite measure we have

$$\displaystyle{\int _{\left \{\left \vert \xi \right \vert \geq \rho \right \}}\left \vert \hat{\chi }_{B}(\xi )\right \vert ^{2}\ d\xi \geq c\ \rho ^{-1}\;.}$$

We can use Theorem 8 to prove that (3.8) is best possible up to the constant involved.

Theorem 10.

Let \(B \subset \mathbb{R}^{d}\) be a convex body. Then

$$\displaystyle{ \mathop{\lim \sup }\limits_{\rho \rightarrow +\infty }\ \rho ^{\left (d+1\right )/2}\left \Vert \hat{\chi }_{ B}\left (\rho \cdot \right )\right \Vert _{L^{2}\left (\varSigma _{d-1}\right )} > 0\;. }$$
(3.20)

Proof.

If (3.20) fails, then there exists a function \(\varepsilon \left (\rho \right )\) such that \(\varepsilon \left (\rho \right ) \rightarrow 0\) as \(\rho \rightarrow +\infty \) and

$$\displaystyle{\left \Vert \hat{\chi }_{B}\left (\rho \cdot \right )\right \Vert _{L^{2}\left (\varSigma _{d-1}\right )} \leq \varepsilon \left (\rho \right )\rho ^{-\left (d+1\right )/2}}$$

for ρ > 1. This contradicts the lower bound in (3.16). □ 

We have pointed out that when B is a ball we cannot bound the spherical mean \(\left \Vert \hat{\chi }_{B}\left (\rho \cdot \right )\right \Vert _{L^{2}\left (\varSigma _{d-1}\right )}\) from below by \(\rho ^{-\left (d+1\right )/2}\) because of the zeroes of the Bessel function. The next result shows that this lower estimate fails also for a cube, so that it fails for the two most popular convex bodies.

Lemma 11.

Let d ≥ 2 and \(Q = Q_{d} = [-1/2,1/2]^{d}\) . Then for every positive integer k we have

$$\displaystyle{\left \Vert \hat{\chi }_{Q}(k\cdot )\right \Vert _{L^{2}(\varSigma _{d-1})} \leq c\,k^{-\left (d+3/2\right )/2}\;.}$$

Proof.

Let

$$\displaystyle{\varSigma ^{{\prime}} =\varSigma _{ d-1} \cap \left \{x \in \mathbb{R}^{d}: x_{ 1} \geq \left \vert x_{k}\right \vert,\;k = 2,\ldots,d\right \}.}$$

By the symmetries of Q and by Theorem 6 applied to the \(\left (d - 1\right )\)-dimensional cube Q d−1 we have

$$\displaystyle\begin{array}{rcl} & & \left \Vert \hat{\chi }_{Q}(k\cdot )\right \Vert _{L^{2}(\varSigma _{d-1})}^{2} \leq c\left \Vert \hat{\chi }_{ Q}(k\cdot )\right \Vert _{L^{2}(\varSigma ^{{\prime}})}^{2} {}\\ & & \leq c\int _{0}^{\pi /4}\int _{ \varSigma _{d-2}}\left \vert \dfrac{\sin (\pi k\cos (\phi ))} {\pi k\cos (\phi )} \hat{\chi }_{Q_{d-1}}(k\sin (\phi )\eta )\right \vert ^{2}\sin ^{d-2}(\phi )\ d\eta d\phi {}\\ & & = c\ k^{-2}\int _{ 0}^{\pi /4}\left \vert \sin (\pi k\cos (\phi ))\right \vert ^{2}\phi ^{d-2}\int _{ \varSigma _{d-2}}\left \vert \hat{\chi }_{Q_{d-1}}(k\sin (\phi )\eta )\right \vert ^{2}\ d\eta d\phi {}\\ & & \leq c\ k^{-d-2}\int _{ 0}^{\pi /4}\left \vert \sin (2\pi k\sin ^{2}\left (\phi /2\right ))\right \vert ^{2}\phi ^{-2}\ d\phi {}\\ & & \leq c\ k^{-d-2}\left (\int _{ 0}^{k^{-1/2} }k^{2}\phi ^{2}\ d\phi + \int _{ k^{-1/2}}^{\pi /4}\phi ^{-2}\ d\phi \right ) \leq c\ k^{-d-3/2}\;. {}\\ \end{array}$$

 □ 
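A quick numerical illustration of Lemma 11 in the planar case (where the bound reads \(c\,k^{-7/4}\)); the radii and the size of the quadrature grid below are arbitrary choices.

```python
import numpy as np

def sphere_l2_average(k, n=200_000):
    """(int_0^{2 pi} |hat{chi}_Q(k Theta)|^2 d theta)^{1/2} for Q = [-1/2, 1/2]^2."""
    theta = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    ft = np.sinc(k * np.cos(theta)) * np.sinc(k * np.sin(theta))
    return np.sqrt(np.mean(ft ** 2) * 2 * np.pi)

for k in [4, 8, 16, 32, 64, 128]:
    # at integer radii the rescaled averages should stay bounded (Lemma 11, d = 2)
    print(k, sphere_l2_average(k) * k ** (7 / 4))
```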

2.4 A Maximal Estimate for the Planar Case

In Theorem 6 we have seen that for every convex body B we have the upper bound \(\left \Vert \hat{\chi }_{B}\left (\rho \cdot \right )\right \Vert _{L^{2}\left (\varSigma _{d-1}\right )} \leq c\rho ^{-\left (d+1\right )/2}\). On the other hand we shall see that in certain lattice point problems it is important to have a bound in the angular variable σ which is uniform with respect to ρ. This leads us to study the maximal function

$$\displaystyle{\mathcal{M}_{B}\left (\sigma \right ) =\sup _{\rho >0}\rho ^{(d+1)/2}\left \vert \hat{\chi }_{ B}\left (\rho \sigma \right )\right \vert \;.}$$

We need the following definition (see [57]).

Definition 12.

Let X be a measure space and let 0 < p < ∞. We define the space \(L^{p,\infty }\left (X\right )\) (also called weak \(L^{p}\)) by the quasi-norm

$$\displaystyle{ \left \Vert f\right \Vert _{L^{p,\infty }\left (X\right )} =\sup _{\lambda >0}\lambda \left \vert \left \{x \in X: \left \vert f(x)\right \vert >\lambda \right \}\right \vert ^{1/p}\;. }$$
(3.21)

We shall prove that \(\mathcal{M}_{B} \in L^{2,\infty }\left (\varSigma _{1}\right )\), i.e. that

$$\displaystyle{\sup _{\lambda >0}\lambda ^{2}\left \vert \left \{\theta \in \left [0,2\pi \right ]: \mathcal{M}_{ B}\left (\varTheta \right ) >\lambda \right \}\right \vert < \infty \;,}$$

where \(\varTheta = \left (\cos \theta,\sin \theta \right )\). Observe that \(L^{2}\left (\varSigma _{1}\right ) \subset L^{2,\infty }\left (\varSigma _{1}\right )\), but \(\mathcal{M}_{B}\) does not necessarily belong to \(L^{2}\left (\varSigma _{1}\right )\). Indeed, consider the unit square \(Q = [-1/2,1/2]^{2}\); then

$$\displaystyle{ \hat{\chi }_{Q}\left (\rho \varTheta \right ) = \frac{\sin \left (\pi \rho \cos \theta \right )} {\pi \rho \cos \theta } \frac{\sin \left (\pi \rho \sin \theta \right )} {\pi \rho \sin \theta } \;. }$$
(3.22)

By symmetry it is enough to consider \(\theta \in \left (0, \frac{\pi } {4}\right )\), and observe that, for any such θ, there exists ρ θ satisfying the following conditions (for a suitable integer k ≥ 0):

$$\displaystyle\begin{array}{rcl} \frac{1} {4}& \leq & \rho _{\theta }\sin \left (\theta \right ) \leq \frac{3} {4}\;, {}\\ \frac{1} {4} + k& \leq & \rho _{\theta }\cos \left (\theta \right ) \leq \frac{3} {4} + k {}\\ \end{array}$$

(because the line ρ Θ intersects at least one of the squares \(\left [\frac{1} {4} + k,\, \frac{3} {4} + k\right ] \times \left [\frac{1} {4},\, \frac{3} {4}\right ]\)). Hence, by (3.22) we have

$$\displaystyle{\mathcal{M}_{B}\left (\varTheta \right ) \geq c\ \rho _{\theta }^{-1/2}\frac{1} {\sin \theta } \geq c\ \theta ^{-1/2}\;,}$$

so that \(\mathcal{M}_{B}\notin L^{2}\).
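The growth \(\mathcal{M}_{B}\left (\varTheta \right ) \geq c\,\theta ^{-1/2}\) for the square can also be observed numerically; the sketch below replaces the supremum in ρ by a finite grid, and both the grid and the sampled angles are arbitrary choices.

```python
import numpy as np

def maximal_square(theta, rho_max=2000.0, n=200_000):
    """Approximate sup_rho rho^{3/2} |hat{chi}_Q(rho Theta)| using (3.22)."""
    rho = np.linspace(1.0, rho_max, n)
    ft = np.sinc(rho * np.cos(theta)) * np.sinc(rho * np.sin(theta))
    return np.max(rho ** 1.5 * np.abs(ft))

for theta in [0.1, 0.05, 0.02, 0.01, 0.005]:
    # the product should stay bounded away from zero as theta -> 0
    print(theta, maximal_square(theta) * np.sqrt(theta))
```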

Before proving the weak type estimate we need the following two results, due to A. Podkorytov (see [45], see also [13]).

Lemma 13.

Let \(f: \mathbb{R}\mathbf{\rightarrow }[0,+\infty )\) be supported and concave in [−1,1]. Then, for every \(\left \vert \xi \right \vert \geq 1\),

$$\displaystyle{ \left \vert \hat{f}(\xi )\right \vert \leq \frac{1} {\left \vert \xi \right \vert }\left [f\left (1 - \frac{1} {2\left \vert \xi \right \vert }\right ) + f\left (-1 + \frac{1} {2\left \vert \xi \right \vert }\right )\right ]\;. }$$
(3.23)

Proof.

It is enough to prove (3.23) when ξ > 1. The assumption on the concavity of f allows us to integrate by parts obtaining

$$\displaystyle{\left \vert \hat{f}(\xi )\right \vert \leq \frac{1} {2\pi \xi }f(1^{-}) + \frac{1} {2\pi \xi }f(-1^{+}) + \frac{1} {2\pi \xi }\left \vert \int _{-1}^{1}f^{{\prime}}(t)e^{-2\pi i\xi t}\ \mathit{dt}\right \vert \;.}$$

Let α be a point where f attains its maximum. Then f will be non-decreasing in [−1, α] and non-increasing in [α, 1]. We can assume 0 ≤ α ≤ 1, so that \(f(-1^{+}) \leq f(-1 + 1/\left (2\xi \right ))\). To estimate f(1) we observe that when \(\alpha \leq 1 - 1/\left (2\xi \right )\,\) one has \(f(1^{-}) \leq f(1 - 1/\left (2\xi \right ))\). On the other hand, since f is concave, in case \(\alpha > 1 - 1/\left (2\xi \right )\) we have \(f(1^{-}) \leq f(\alpha ) \leq 2f(0) \leq 2f(1 - 1/\left (2\xi \right ))\).

To estimate the integral we observe that, by a change of variable,

$$\displaystyle{I =\int _{ -1}^{1}f^{{\prime}}(t)e^{-2\pi i\xi t}\ \mathit{dt} = -\int _{ -1+\frac{1} {2\xi } }^{1+\frac{1} {2\xi } }f^{{\prime}}\left (t -\frac{1} {2\xi }\right )\,e^{-2\pi i\xi t}\ \mathit{dt}\;.}$$

Hence

$$\displaystyle\begin{array}{rcl} 2I& =& \int _{-1}^{1}f^{{\prime}}(t)e^{-2\pi i\xi t}\ \mathit{dt} -\int _{ -1+\frac{1} {2\xi } }^{1+\frac{1} {2\xi } }f^{{\prime}}\left (t -\frac{1} {2\xi }\right )\,e^{-2\pi i\xi t}\ \mathit{dt} {}\\ & =& \int _{-1}^{-1+\frac{1} {2\xi } }f^{{\prime}}(t)e^{-2\pi i\xi t}\ \mathit{dt} +\int _{ -1+\frac{1} {2\xi } }^{1}\left [f^{{\prime}}(t) - f^{{\prime}}\left (t -\frac{1} {2\xi }\right )\right ]\,e^{-2\pi i\xi t}\ \mathit{dt} {}\\ & & -\int _{1}^{1+\frac{1} {2\xi } }f^{{\prime}}\left (t -\frac{1} {2\xi }\right )\,e^{-2\pi i\xi t}\ \mathit{dt} {}\\ & =& I_{1} + I_{2} + I_{3}\;. {}\\ \end{array}$$

To estimate I 1 from above we note that

$$\displaystyle{\left \vert I_{1}\right \vert \leq \int _{-1}^{-1+\frac{1} {2\xi } }f^{{\prime}}(t)\mathit{dt} = f\left (-1 + \frac{1} {2\xi }\right ) - f(-1^{+}) \leq f\left (-1 + \frac{1} {2\xi }\right )\;,}$$

since 0 ≤ α ≤ 1.

The estimate for I 3 is similar in case α ≤ 1 − 1∕(2ξ). If α > 1 − 1∕(2ξ), then

$$\displaystyle\begin{array}{rcl} \left \vert I_{3}\right \vert & \leq & \int _{1}^{\alpha +\frac{1} {2\xi } }f^{{\prime}}\left (t -\frac{1} {2\xi }\right )\ \mathit{dt} -\int _{\alpha +\frac{1} {2\xi } }^{1+\frac{1} {2\xi } }f^{{\prime}}\left (t -\frac{1} {2\xi }\right )\ \mathit{dt} {}\\ & =& 2f(\alpha ) - f\left (1 -\frac{1} {2\xi }\right ) - f(1^{-}) \leq 2f(\alpha ) \leq 4f(0) \leq 4f\left (1 -\frac{1} {2\xi }\right )\;. {}\\ \end{array}$$

As for \(I_{2}\), since \(f^{{\prime}}\) is non-increasing, we have

$$\displaystyle\begin{array}{rcl} \left \vert I_{2}\right \vert & \leq & \int _{-1+\frac{1} {2\xi } }^{1}\left [f^{{\prime}}\left (t -\frac{1} {2\xi }\right ) - f^{{\prime}}(t)\right ]\ \mathit{dt} {}\\ & =& f\left (1 -\frac{1} {2\xi }\right ) - f(-1^{+}) - f(1^{-}) + f\left (-1 + \frac{1} {2\xi }\right ) {}\\ & \leq & f\left (1 -\frac{1} {2\xi }\right ) + f\left (-1 + \frac{1} {2\xi }\right )\;, {}\\ \end{array}$$

ending the proof. Note that no constant c is missing in (3.23). □ 

Lemma 14.

Let B be a convex body in \(\mathbb{R}^{2}\) and Θ = (cos θ,sin θ). For a small δ > 0 we consider the chord

$$\displaystyle{ \lambda _{B}(\delta,\theta ) =\lambda (\delta,\theta ) = \left \{x \in B: x\cdot \varTheta = -\delta +\sup \limits _{y\in B}y\cdot \varTheta \right \}. }$$
(3.24)

Then

$$\displaystyle{\left \vert \hat{\chi }_{B}(\rho \varTheta )\right \vert \leq \frac{1} {\rho } \left (\left \vert \lambda \left (\frac{1} {2\rho },\theta \right )\right \vert + \left \vert \lambda \left (\frac{1} {2\rho },\theta +\pi \right )\right \vert \right )\;,}$$

where \(\left \vert \lambda \right \vert \) denotes the length of the chord (Fig.  3.2).

Fig. 3.2 Geometric estimate of \(\hat{\chi }_{B}\)

Proof.

Without loss of generality we choose Θ = (1, 0). Then, as in (3.3),

$$\displaystyle{ \hat{\chi }_{B}(\xi _{1},0) =\int _{ -\infty }^{+\infty }\left (\int _{ -\infty }^{+\infty }\chi _{ B}(x_{1},x_{2})\ dx_{2}\right )\ e^{-2\pi ix_{1}\xi _{1} }\ dx_{1} =\hat{ h}(\xi _{1})\;, }$$
(3.25)

where h(s) is the length of the segment obtained intersecting B with the line x 1 = s. Observe that h is concave on its support, say \(\left [a,b\right ]\). We can therefore apply Lemma 13 to obtain, after a change of variable,

$$\displaystyle\begin{array}{rcl} \left \vert \hat{h}(\xi _{1})\right \vert & \leq & \frac{1} {\left \vert \xi _{1}\right \vert }\left [h\left (b - \frac{1} {2\left \vert \xi _{1}\right \vert }\right ) + h\left (a + \frac{1} {2\left \vert \xi _{1}\right \vert }\right )\right ] {}\\ & \leq & \frac{1} {\left \vert \xi _{1}\right \vert }\left (\left \vert \lambda _{B}\left ( \frac{1} {2\left \vert \xi _{1}\right \vert },0\right )\right \vert + \left \vert \lambda _{B}\left ( \frac{1} {2\left \vert \xi _{1}\right \vert },\pi \right )\right \vert \right )\;. {}\\ \end{array}$$

 □ 
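For the disc both sides of the inequality in Lemma 14 are explicit: \(\left \vert \hat{\chi }_{D}(\rho \varTheta )\right \vert = \left \vert J_{1}\left (2\pi \rho \right )\right \vert /\rho\) by (3.4), while the chord at distance δ from a supporting line has length \(2\sqrt{2\delta -\delta ^{2}}\). This gives a simple sanity check; the sampled radii below are arbitrary.

```python
import numpy as np
from scipy.special import j1            # Bessel function J_1

def chord(delta):
    # length of the chord of the unit disc at distance delta from a supporting line
    return 2.0 * np.sqrt(2.0 * delta - delta ** 2)

for rho in [2.0, 10.0, 50.0, 250.0]:
    lhs = abs(j1(2 * np.pi * rho)) / rho                  # |hat{chi}_D(rho Theta)|
    rhs = (1.0 / rho) * 2.0 * chord(1.0 / (2.0 * rho))    # bound of Lemma 14
    print(rho, lhs, rhs, lhs <= rhs)
```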

We can now prove the following maximal estimate (see [8]).

Theorem 15.

Let \(B \subset \mathbb{R}^{2}\) be a convex body. Then the maximal function

$$\displaystyle{\mathcal{M}_{B}\left (\varTheta \right ) =\sup _{\rho >0}\rho ^{3/2}\left \vert \hat{\chi }_{ B}\left (\rho \varTheta \right )\right \vert }$$

belongs to \(L^{2,\infty }\left (\varSigma _{1}\right )\) , see (3.21) .

Proof.

As in the proof of Theorem 6 we assume ∂ B smooth with everywhere non-vanishing curvature (and the constants in our inequalities will not depend on the smoothness of ∂ B). By Lemma 14 we have, for \(\varTheta = \left (\cos \theta,\sin \theta \right )\),

$$\displaystyle{\sup _{\rho >0}\rho ^{3/2}\left \vert \hat{\chi }_{ B}\left (\rho \varTheta \right )\right \vert \leq \sup _{\rho >0}\rho ^{1/2}\left \vert \lambda _{ B}(\rho ^{-1},\theta )\right \vert +\sup _{\rho >0}\rho ^{1/2}\left \vert \lambda _{ B}(\rho ^{-1},\theta +\pi )\right \vert \;,}$$

so that we study the maximal function

$$\displaystyle{\varOmega _{B}\left (\theta \right ) =\sup _{\delta >0}\delta ^{-1/2}\left \vert \lambda _{ B}(\delta,\theta )\right \vert \;.}$$

By the above non-vanishing assumption, the chord λ B (δ, θ) reduces to a single point as δ → 0. Let z(θ) be this point. Let us choose a direction θ 0 and for every θ close to θ 0 let γ(θ) denote the arc-length on ∂ B between z(θ 0) and z(θ). Assume that we have proved the inequality

$$\displaystyle{ \varOmega _{B}^{2}\left (\theta \right ) \leq 2\sup _{\alpha \neq 0}\frac{\left \vert \gamma \left (\theta +\alpha \right ) -\gamma \left (\theta \right )\right \vert } {\alpha } \;. }$$
(3.26)

Then we have

$$\displaystyle{\varOmega _{B}^{2}\left (\theta \right ) \leq 2\sup _{\alpha >0}\frac{1} {\alpha } \int _{\theta }^{\theta +\alpha }\left \vert \gamma ^{{\prime}}\left (\varphi \right )\right \vert \ d\varphi \;,}$$

so that, by the Hardy-Littlewood maximal function theorem (see e.g. [64, 7.9]), we have

$$\displaystyle\begin{array}{rcl} & & \sup _{\beta >0}\beta ^{2}\left \vert \left \{\theta \in \left [0,2\pi \right ): \mathcal{M}_{ B}\left (\cos \theta,\sin \theta \right ) >\beta \right \}\right \vert {}\\ & &\leq c\sup _{\beta >0}\beta ^{2}\left \vert \left \{\theta \in \left [0,2\pi \right ):\sup _{\alpha >0}\frac{1} {\alpha } \int _{\theta }^{\theta +\alpha }\left \vert \gamma ^{{\prime}}\left (\varphi \right )\right \vert \ d\varphi >\beta ^{2}\right \}\right \vert {}\\ & &\leq c\int _{0}^{2\pi }\left \vert \gamma ^{{\prime}}\left (\varphi \right )\right \vert \ d\varphi \leq c\;. {}\\ \end{array}$$

In order to prove (3.26) we observe that if δ is small, then the normal to ∂ B at the point \(z\left (\theta \right )\) cuts the chord \(\lambda _{B}(\delta,\theta )\) into two parts \(\lambda _{-}(\delta,\theta )\) and \(\lambda _{+}(\delta,\theta )\). Let us consider only the segment \(\lambda _{+}(\delta,\theta )\) and let

$$\displaystyle{\varOmega _{+}\left (\theta \right ) =\sup _{\delta >0}\delta ^{-1/2}\left \vert \lambda _{ +}(\delta,\theta )\right \vert \;.}$$

We may assume that ∂ B is locally the graph of a smooth function f defined on an interval \(\left [0,a\right ]\) with \(f\left (0\right ) = f^{{\prime}}\left (0^{+}\right ) = 0.\) Then, by the mean value theorem ,

$$\displaystyle\begin{array}{rcl} \varOmega _{+}^{2}\left (\theta \right )& \leq & \sup _{ 0<x<a} \frac{x^{2}} {f\left (x\right )} \leq \sup _{0<z<a} \frac{2z} {f^{{\prime}}\left (z\right )} \leq \sup _{0<z<a} \frac{2} {f^{{\prime}}\left (z\right )}\int _{0}^{z}\left (1 + \left [f^{{\prime}}\left (t\right )\right ]^{2}\right )^{1/2}\ \mathit{dt} {}\\ & \leq & 2\sup _{0<\alpha < \frac{\pi }{ 2} } \frac{\left \vert \gamma \left (\theta +\alpha \right ) -\gamma \left (\theta \right )\right \vert } {\alpha } \;. {}\\ \end{array}$$

 □ 

3 Decay of the Fourier Transform: \(L^{p}\) Estimates for Characteristic Functions of Polyhedra

In Theorem 6 we have seen that \(\left \Vert \hat{\chi }_{B}\left (\rho \cdot \right )\right \Vert _{L^{2}\left (\varSigma _{d-1}\right )} \leq c\rho ^{-\left (d+1\right )/2}\) independently of the shape of the convex body B. If we replace B by a ball or, more generally, by a convex body with smooth boundary ∂ B having everywhere positive Gaussian curvature, then, by (3.7), the same estimate holds true for every 1 ≤ p ≤ +∞. However, if we replace B by a polyhedron P, then the situation is different (see Sect. 3.2.1, where we have observed that \(\hat{\chi }_{P}(\xi )\) decays as fast as \(\left \vert \xi \right \vert ^{-d}\) along almost all directions, but only as \(\left \vert \xi \right \vert ^{-1}\) along the directions perpendicular to the facets). In this section we will prove sharp estimates for the decay of \(\left \Vert \hat{\chi }_{P}\left (\rho \cdot \right )\right \Vert _{L^{p}\left (\varSigma _{d-1}\right )}\); in particular we shall see that this decay is faster than \(\rho ^{-\left (d+1\right )/2}\) when 1 ≤ p < 2 and slower than \(\rho ^{-\left (d+1\right )/2}\) when 2 < p ≤ +∞.
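Before stating the general result, a small numerical sketch for the unit square (d = 2) illustrates the first regime: the \(L^{1}\) spherical average behaves like \(\rho ^{-2}\log \rho\), which is faster than \(\rho ^{-3/2}\). The non-integer radii and the grid size are arbitrary choices.

```python
import numpy as np

def sphere_l1_average(rho, n=400_000):
    """int_0^{2 pi} |hat{chi}_Q(rho Theta)| d theta for Q = [-1/2, 1/2]^2."""
    theta = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    ft = np.abs(np.sinc(rho * np.cos(theta)) * np.sinc(rho * np.sin(theta)))
    return np.mean(ft) * 2 * np.pi

for rho in [16.5, 66.5, 266.5, 1066.5]:
    # normalization suggested by Theorem 16 below: the ratio should stay bounded,
    # while rho^{3/2} times the average would tend to zero
    print(rho, sphere_l1_average(rho) * rho ** 2 / np.log(rho))
```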

Theorem 16.

Let P be a convex polyhedron in \(\mathbb{R}^{d}\) , d ≥ 1. Write \(\xi \in \mathbb{R}^{d}\) in polar coordinates, ξ = ρσ (ρ ≥ 0, σ ∈Σ d−1 ). Then, for ρ ≥ 2, we have

$$\displaystyle{ \left \Vert \hat{\chi }_{P}\left (\rho \cdot \right )\right \Vert _{L^{1}\left (\varSigma _{d-1}\right )} \leq c\ \frac{\log ^{d-1}\left (\rho \right )} {\rho ^{d}} \;, }$$
(3.27)
$$\displaystyle\begin{array}{rcl} \left \Vert \hat{\chi }_{P}\left (\rho \cdot \right )\right \Vert _{L^{p}\left (\varSigma _{d-1}\right )} \leq c_{p}\ \rho ^{-1-\left (d-1\right )/p}\;,\;\;\;\;\;\text{for }1 < p \leq \infty \;.& &{}\end{array}$$
(3.28)

Proof.

The proof is by induction on the dimension d. For d = 1 the bound is true since in this case the average is trivial and we have

$$\displaystyle{ \hat{\chi }_{\left [-1/2,1/2\right ]}\left (\xi \right ) = \frac{\sin \left (\pi \xi \right )} {\pi \xi }. }$$

We then assume the result true for d − 1. Let P have m facets F 1 ,…,F m with outward unit normal vectors ν 1 ,…,ν m . As in (3.9) the divergence theorem yields

$$\displaystyle{ \hat{\chi }_{P}\left (\xi \right ) =\int _{P}e^{-2\pi i\xi \cdot x}\ \mathit{dx} =\sum _{ j=1}^{m}\frac{i\xi \cdot \nu _{j}} {2\pi \left \vert \xi \right \vert ^{2}}\int _{F_{j}}e^{-2\pi i\xi \cdot x}\ \mathit{dx}\;. }$$
(3.29)

Let \(x = \left (x_{1},x_{2},\ldots,x_{d}\right ) = \left (x_{1},x^{{\prime}}\right )\) and write \(\sigma = \left (\cos \left (\varphi \right ),\sin \left (\varphi \right )\eta \right )\), with \(0 \leq \varphi \leq \pi\) and η ∈ Σ d−2. We single out one facet F, which we may assume to stay in the hyperplane x 1 = 0, with outward normal \(\nu = \left (1,0,\ldots,0\right )\). Then

$$\displaystyle\begin{array}{rcl} & & \frac{i\xi \cdot \nu } {2\pi \left \vert \xi \right \vert ^{2}}\int _{F}e^{-2\pi i\xi \cdot x}\ \mathit{dx} = \frac{i\cos \left (\varphi \right )} {2\pi \rho } \int _{F}e^{-2\pi i\rho \sin \left (\varphi \right )\eta \cdot x^{{\prime}} }\ \mathit{dx}^{{\prime}} \\ & & = \frac{i\cos \left (\varphi \right )} {2\pi \rho } \ \hat{\chi }_{F}\left (\rho \sin \left (\varphi \right )\eta \right )\;, {}\end{array}$$
(3.30)

where we see \(\hat{\chi }_{F}\) as a \(\left (d - 1\right )\)-dimensional Fourier transform. Then, by the induction hypothesis,

$$\displaystyle\begin{array}{rcl} & & \frac{1} {\rho } \int _{0}^{\pi }\int _{ \varSigma _{d-2}}\left \vert \hat{\chi }_{F}\left (\rho \sin \left (\varphi \right )\eta \right )\right \vert \sin ^{d-2}\left (\varphi \right )\ d\eta d\varphi {}\\ & & \leq c\ \frac{1} {\rho } \int _{0}^{2/\rho }\varphi ^{d-2}\ d\varphi + c\ \frac{1} {\rho } \int _{2/\rho }^{\pi /2}\frac{\log ^{d-2}\left (\rho \sin \left (\varphi \right )\right )} {\left (\rho \sin \left (\varphi \right )\right )^{d-1}}\sin ^{d-2}\left (\varphi \right )\ d\varphi {}\\ & & \leq c\ \rho ^{-d} + c\ \frac{\log ^{d-2}\left (\rho \right )} {\rho ^{d}} \int _{2/\rho }^{\pi /2}\frac{1} {\varphi } \ d\varphi \leq c\ \frac{\log ^{d-1}\left (\rho \right )} {\rho ^{d}} \;, {}\\ \end{array}$$

while for 1 < p < +∞ we have

$$\displaystyle\begin{array}{rcl} & & \frac{1} {\rho ^{p}}\int _{0}^{\pi }\int _{ \varSigma _{d-2}}\left \vert \hat{\chi }_{F}\left (\rho \sin \left (\varphi \right )\eta \right )\right \vert ^{p}\sin ^{d-2}\left (\varphi \right )\ d\eta d\varphi {}\\ & & \leq c_{p}\ \frac{1} {\rho ^{p}}\int _{0}^{1/\rho }\varphi ^{d-2}\ d\varphi + c\ \frac{1} {\rho ^{p}}\int _{1/\rho }^{\pi /2}\left (\rho \sin \left (\varphi \right )\right )^{-p-\left (d-2\right )}\sin ^{d-2}\left (\varphi \right )\ d\varphi {}\\ & & \leq c_{p}\ \rho ^{-p-\left (d-1\right )} + c_{ p}\ \frac{1} {\rho ^{2p+d-2}}\int _{1/\rho }^{\pi /2}\varphi ^{-p}\ d\varphi \leq c_{ p}\ \rho ^{-p-\left (d-1\right )}\;, {}\\ \end{array}$$

so that (3.29) and (3.30) give (3.27) and (3.28) for every finite p; the case p = +∞ in (3.28) follows at once from (3.29), since every term there is bounded by \(c\left \vert \xi \right \vert ^{-1}\). □ 
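As an illustration of Theorem 16 in the planar case (this numerical check is not part of the argument), one can use the explicit formula \(\hat{\chi }_{Q}\left (\xi _{1},\xi _{2}\right ) = \frac{\sin \left (\pi \xi _{1}\right )} {\pi \xi _{1}} \frac{\sin \left (\pi \xi _{2}\right )} {\pi \xi _{2}}\) for the unit square \(Q = \left [-1/2,1/2\right ]^{2}\) and approximate the spherical averages by Riemann sums. In the following Python sketch the dilations ρ and the number of sample points are arbitrary choices; the printed ratios should remain bounded, consistently with (3.27) and (3.28).

```python
import numpy as np

def chi_hat_square(xi1, xi2):
    # Fourier transform of the indicator of Q = [-1/2,1/2]^2 with the
    # convention f_hat(xi) = int f(x) exp(-2 pi i xi.x) dx;
    # np.sinc(t) = sin(pi t)/(pi t).
    return np.sinc(xi1) * np.sinc(xi2)

def spherical_Lp_average(rho, p, n_theta=100000):
    # Riemann sum for ( int_0^{2 pi} |chi_hat_Q(rho cos t, rho sin t)|^p dt / (2 pi) )^{1/p}
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    vals = np.abs(chi_hat_square(rho * np.cos(theta), rho * np.sin(theta))) ** p
    return vals.mean() ** (1.0 / p)

for rho in [15.5, 63.5, 255.5, 1023.5]:   # non-integer dilations (arbitrary)
    r1 = spherical_Lp_average(rho, p=1) * rho**2 / np.log(rho)   # compare with (3.27), d = 2
    r2 = spherical_Lp_average(rho, p=2) * rho**1.5               # compare with (3.28), p = 2
    print(rho, round(r1, 3), round(r2, 3))
```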

The following weak type estimate (see (3.21)) will also be useful.

Theorem 17.

Let P be a polyhedron in \(\mathbb{R}^{d}\) , d ≥ 2. Write \(\xi \in \mathbb{R}^{d}\) in polar coordinates, ξ = ρσ (ρ ≥ 0, σ ∈Σ d−1 ). Then, for ρ ≥ 2, we have

$$\displaystyle{ \left \Vert \hat{\chi }_{P}\left (\rho \cdot \right )\right \Vert _{L^{1,\infty }\left (\varSigma _{d-1}\right )} \leq c\ \frac{\log ^{d-2}\left (\rho \right )} {\rho ^{d}} \;. }$$

Proof.

Since here d ≥ 2, the first step of the induction needs some work. Assume d = 2, and let P be a polygon in \(\mathbb{R}^{2}\) with counterclockwise oriented vertices \(\left \{a_{j}\right \}_{j=1}^{m}\). For each side \(\left [a_{j},a_{j+1}\right ]\) (assume a m+1 = a 1) let u j be a unit vector parallel to this side and with the same orientation, and let ν j be the outside unit normal to this side. Then the divergence theorem gives

$$\displaystyle\begin{array}{rcl} \hat{\chi }_{P}\left (\rho \sigma \right )& =& \int _{P}e^{-2\pi i\rho \sigma \cdot x}\ \mathit{dx} = -\frac{1} {2\pi i\rho }\sum _{j=1}^{m}\sigma \cdot \nu _{ j}\int _{\left [a_{j},a_{j+1}\right ]}e^{-2\pi i\rho \sigma \cdot x}\ \mathit{dx} {}\\ & =& \frac{1} {4\pi ^{2}\rho ^{2}}\sum _{j=1}^{m}\rho \sigma \cdot \nu _{ j}\ \frac{e^{-2\pi i\rho \sigma \cdot a_{j}} - e^{-2\pi i\rho \sigma \cdot a_{j+1}}} {\rho \sigma \cdot u_{j}} \;. {}\\ \end{array}$$

Hence \(\hat{\chi }_{P}\left (\rho \cos \left (\varphi \right ),\rho \sin \left (\varphi \right )\right )\) is dominated by a finite sum of terms of the form \(\rho ^{-2}\left \vert \cos \left (\varphi -\varphi _{j}\right )\right \vert ^{-1}\) and since the functions \(\cos ^{-1}\left (\varphi -\varphi _{j}\right )\) are in \(L^{1,\infty }\left (\mathbb{T}\right )\), the result for d = 2 follows. For d > 2 we argue as in Theorem 16 and we reduce to a finite sum of terms of the form

$$\displaystyle{\frac{i\cos \left (\varphi \right )} {2\pi \rho } \ \hat{\chi }_{F}\left (\rho \sin \left (\varphi \right )\eta \right )\,}$$

where F is a facet of P. Then, by induction, we have

$$\displaystyle\begin{array}{rcl} & & \lambda \left \vert \left \{\left (\cos \left (\varphi \right ),\sin \left (\varphi \right )\eta \right ) \in \varSigma _{d-1}: \left \vert \frac{i\cos \left (\varphi \right )} {2\pi \rho } \hat{\chi }_{F}\left (\rho \sin \left (\varphi \right )\eta \right )\right \vert >\lambda \right \}\right \vert {}\\ & & =\lambda \int _{ 0}^{\pi }\left \vert \left \{\eta \in \varSigma _{ d-2}: \left \vert \hat{\chi }_{F}\left (\rho \sin \left (\varphi \right )\eta \right )\right \vert > \frac{2\pi \rho \lambda } {\left \vert \cos \left (\varphi \right )\right \vert }\right \}\right \vert \sin ^{d-2}\left (\varphi \right )\ d\varphi {}\\ & & \leq c\ \rho ^{-d} + c\ \int _{ 2/\rho }^{\pi /2}\frac{\cos \left (\varphi \right )} {\rho } \ \frac{\log ^{d-3}\left (\rho \sin \left (\varphi \right )\right )} {\left (\rho \sin \left (\varphi \right )\right )^{d-1}}\sin ^{d-2}\left (\varphi \right )\ d\varphi {}\\ & & \leq c\ \rho ^{-d}\int _{ 2/\rho }^{\pi /2}\cos \left (\varphi \right )\ \frac{\log ^{d-3}\left (\rho \varphi \right )} {\varphi } \ d\varphi \leq c\rho ^{-d}\int _{ 2}^{\rho \pi /2}\frac{\log ^{d-3}\left (t\right )} {t} \ \mathit{dt} {}\\ & & \leq c\ \frac{\log ^{d-2}\left (\rho \right )} {\rho ^{d}} \;. {}\\ \end{array}$$

 □ 

The estimates from above in Theorems 16 and 17 are sharp in many, but not all, cases. We first consider simplices, but the proof of the following theorem works without modification for any polyhedron having a facet which is not parallel to any other facet. We first need a technical lemma, which may well be known.

Lemma 18.

Let Σ be a finite measure space and let \(f \in L^{\infty }\left (\varSigma \right )\) . Then for any \(0 <\alpha < \left \vert \varSigma \right \vert \) we have

$$\displaystyle{\left \Vert f\right \Vert _{1} \leq \alpha \left \Vert f\right \Vert _{\infty } +\log \left (\frac{\vert \varSigma \vert } {\alpha } \right )\left \Vert f\right \Vert _{1,\infty }.}$$

Proof.

Let g be the non-increasing rearrangement of f (see [57]). Then

$$\displaystyle\begin{array}{rcl} \left \Vert g\right \Vert _{\infty }& =& \left \Vert f\right \Vert _{\infty }, {}\\ \mathit{ug}\left (u\right )& \leq & \left \Vert g\right \Vert _{1,\infty } = \left \Vert f\right \Vert _{1,\infty }, {}\\ \end{array}$$

so that

$$\displaystyle\begin{array}{rcl} \left \Vert f\right \Vert _{1}& =& \int _{0}^{\left \vert \varSigma \right \vert }g\left (u\right )\mathit{du} =\int _{ 0}^{\alpha }g\left (u\right )\mathit{du} +\int _{ \alpha }^{\left \vert \varSigma \right \vert }g\left (u\right )\mathit{du} {}\\ & \leq & \alpha \left \Vert f\right \Vert _{\infty } + \left \Vert f\right \Vert _{1,\infty }\int _{\alpha }^{\left \vert \varSigma \right \vert }\frac{1} {u}\ \mathit{du} =\alpha \left \Vert f\right \Vert _{\infty } +\log \frac{\left \vert \varSigma \right \vert } {\alpha } \left \Vert f\right \Vert _{1,\infty }. {}\\ \end{array}$$

 □ 
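For instance, taking \(\varSigma = \left [0,1\right ]\), \(f\left (u\right ) =\min \left (M,1/u\right )\) with M > 1 and \(\alpha = 1/M\), one has \(\left \Vert f\right \Vert _{\infty } = M\), \(\left \Vert f\right \Vert _{1,\infty } = 1\) and

$$\displaystyle{\left \Vert f\right \Vert _{1} =\int _{ 0}^{1/M}M\ \mathit{du} +\int _{ 1/M}^{1}\frac{\mathit{du}} {u} = 1 +\log M =\alpha \left \Vert f\right \Vert _{\infty } +\log \left (\frac{\left \vert \varSigma \right \vert } {\alpha } \right )\left \Vert f\right \Vert _{1,\infty }\;,}$$

so that the splitting in Lemma 18 is sharp in this example.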

Theorem 19.

Let P be a simplex in \(\mathbb{R}^{d}\) , d ≥ 2. Again we write \(\xi \in \mathbb{R}^{d}\) in polar coordinates, ξ = ρσ (ρ ≥ 0, σ ∈Σ d−1 ). Then, for every ρ ≥ 1,

  i) \(\left \Vert \hat{\chi }_{P}\left (\rho \cdot \right )\right \Vert _{L^{1,\infty }\left (\varSigma _{d-1}\right )} \geq c\ \dfrac{\log ^{d-2}\left (\rho \right )} {\rho ^{d}}\)

  ii) \(\left \Vert \hat{\chi }_{P}\left (\rho \cdot \right )\right \Vert _{L^{1}\left (\varSigma _{d-1}\right )} \geq c\ \dfrac{\log ^{d-1}\left (\rho \right )} {\rho ^{d}}\)

  iii) \(\left \Vert \hat{\chi }_{P}\left (\rho \cdot \right )\right \Vert _{L^{p}\left (\varSigma _{d-1}\right )} \geq c_{p}\ \rho ^{-1-\left (d-1\right )/p}\;,\qquad \qquad\) if 1 < p ≤ ∞.

Proof.

We prove ii) and iii) first. The proof is by induction on the dimension d and we first consider the planar case, showing that a triangle \(T \subset \mathbb{R}^{2}\) satisfies

$$\displaystyle{ \int _{0}^{2\pi }\left \vert \hat{\chi }_{ T}\left (\rho \varTheta \right )\right \vert ^{p}d\theta \geq c\rho ^{-p-1}\;, }$$
(3.31)

for p > 1, where \(\varTheta = \left (\cos \theta,\sin \theta \right )\) and ρ ≥ 1. As in the proofs of the previous theorems we use the divergence theorem. Let

$$\displaystyle{\omega \left (t\right ) = \frac{i} {2\pi \rho }e^{-2\pi i\rho \varTheta \cdot t}\varTheta \;,}$$

with \(t = \left (t_{1},t_{2}\right )\). Then

$$\displaystyle\begin{array}{rcl} \mathrm{div}\left (\omega \left (t\right )\right )& =& \frac{\partial } {\partial t_{1}}\left ( \frac{i} {2\pi \rho }e^{-2\pi i\rho \varTheta \cdot t}\cos \theta \right ) + \frac{\partial } {\partial t_{2}}\left ( \frac{i} {2\pi \rho }e^{-2\pi i\rho \varTheta \cdot t}\sin \theta \right ) {}\\ & =& e^{-2\pi i\rho \varTheta \cdot t}\cos ^{2}\theta + e^{-2\pi i\rho \varTheta \cdot t}\sin ^{2}\theta = e^{-2\pi i\rho \varTheta \cdot t}\;, {}\\ \end{array}$$

and by the divergence theorem we obtain

$$\displaystyle{\hat{\chi }_{T}\left (\rho \varTheta \right ) =\int _{T}e^{-2\pi i\rho \varTheta \cdot t}\mathit{dt} =\int _{ \partial T}\omega \left (t\right ) \cdot \nu (t)\,\mathit{dt}\;,}$$

where ν is the outward unit normal, which takes only the three values ν 1, ν 2, ν 3 on the three sides λ 1, λ 2, λ 3 respectively. Then, if ds is the arc-length measure on ∂ T, 

$$\displaystyle\begin{array}{rcl} \hat{\chi }_{T}\left (\rho \varTheta \right )& =& \frac{\varTheta \cdot \nu _{1}} {2\pi \rho } i\int _{\lambda _{1}}e^{-2\pi i\rho \varTheta \cdot s}\mathit{ds} + \frac{\varTheta \cdot \nu _{2}} {2\pi \rho } i\int _{\lambda _{2}}e^{-2\pi i\rho \varTheta \cdot s}\mathit{ds} {}\\ & & +\frac{\varTheta \cdot \nu _{3}} {2\pi \rho } i\int _{\lambda _{3}}e^{-2\pi i\rho \varTheta \cdot s}\mathit{ds} {}\\ & =& A(\rho,\varTheta ) + B(\rho,\varTheta ) + C(\rho,\varTheta )\;. {}\\ \end{array}$$

We may assume that λ 1 has endpoints \(\left (\pm \frac{1} {2},0\right )\). Of course it suffices to show that for a given small δ > 0 we have

$$\displaystyle{\int _{-\frac{\pi }{ 2} -\delta }^{-\frac{\pi }{2} +\delta }\left \vert \hat{\chi }_{ T}\left (\rho \varTheta \right )\right \vert ^{p}d\theta \geq c_{ p}\ \rho ^{-p-1}\;.}$$

Indeed \(\left \vert \varTheta \cdot \nu _{1}\right \vert = \left \vert \sin \theta \right \vert \) and changing variables we obtain

$$\displaystyle\begin{array}{rcl} & & \int _{-\frac{\pi }{ 2} -\delta }^{-\frac{\pi }{2} +\delta }\left \vert A(\rho,\varTheta )\right \vert ^{p}d\theta = \frac{1} {\left (2\pi \rho \right )^{p}}\int _{-\frac{\pi }{ 2} -\delta }^{-\frac{\pi }{2} +\delta }\left \vert \sin \theta \int _{ -1/2}^{1/2}e^{-2\pi i\rho s\cos \theta }\mathit{ds}\right \vert ^{p}d\theta {}\\ & & = \frac{1} {\left (2\pi \rho \right )^{p}}\int _{-\frac{\pi }{ 2} -\delta }^{-\frac{\pi }{2} +\delta }\left \vert \frac{\sin \left (\pi \rho \cos \theta \right )} {\pi \rho \cos \theta } \sin \theta \right \vert ^{p}d\theta {}\\ & & \geq c_{p}\ \frac{1} {\rho ^{p+1}}\int _{0}^{c_{1}\rho }\left \vert \frac{\sin \left (u\right )} {u} \right \vert ^{p}\mathit{du} \geq c_{ p}\ \rho ^{-p-1}\;. {}\\ \end{array}$$

As for \(B(\rho,\varTheta )\) and C(ρ, Θ), if \(\left \vert \theta +\frac{\pi }{2}\right \vert \leq \delta\) we reduce to terms of the form

$$\displaystyle{\frac{1} {\rho ^{p}}\int _{c}^{c^{{\prime}} }\left \vert \frac{\sin \left (2\pi \rho x\right )} {\rho x} \right \vert ^{p}\mathit{dx}}$$

with \(0 < c < c^{{\prime}} <\pi /4\), so that

$$\displaystyle{\int _{-\frac{\pi }{ 2} -\delta }^{-\frac{\pi }{2} +\delta }\left \vert B(\rho,\varTheta )\right \vert ^{p}d\theta +\int _{ -\frac{\pi }{2} -\delta }^{-\frac{\pi }{2} +\delta }\left \vert C(\rho,\varTheta )\right \vert ^{p}d\theta \leq c\rho ^{-2p}}$$

and (3.31) follows. The proof of the planar case when p = 1 is similar.

Now let P be a simplex in \(\mathbb{R}^{d}\) with m = d + 1 facets F 1 ,…,F m . We may assume that F 1 is contained in the hyperplane x 1 = 0, with outward normal \(\nu _{1} = \left (1,0,\ldots,0\right )\). Let U be a small neighborhood of ν 1 in Σ d−1 ; then by (3.29) we have

$$\displaystyle\begin{array}{rcl} & & \left \Vert \hat{\chi }_{P}\left (\rho \cdot \right )\right \Vert _{L^{p}\left (\varSigma _{d-1}\right )} {}\\ & & \geq c\ \frac{1} {\rho } \left \vert \left \{\int _{U}\left \vert \int _{F_{1}}e^{-2\pi i\rho \sigma \cdot x}\mathit{dx}\right \vert ^{p}\ d\sigma \right \}^{1/p} -\sum _{ j=2}^{m}\left \{\int _{ U}\left \vert \int _{F_{j}}e^{-2\pi i\rho \sigma \cdot x}\mathit{dx}\right \vert ^{p}\ d\sigma \right \}^{1/p}\right \vert \;. {}\\ \end{array}$$

As in the proof of Theorem 16, the induction assumption implies

$$\displaystyle{\frac{1} {\rho } \int _{U}\left \vert \int _{F_{1}}e^{-2\pi i\rho \sigma \cdot x}\mathit{dx}\right \vert d\sigma \geq c\ \rho ^{-d}\log ^{d-1}\left (\rho \right )}$$

and

$$\displaystyle{\frac{1} {\rho } \left \{\int _{U}\left \vert \int _{F_{1}}e^{-2\pi i\rho \sigma \cdot x}\mathit{dx}\right \vert ^{p}d\sigma \right \}^{1/p} \geq c\rho ^{-1-\left (d-1\right )/p}\;,\;\;\;\;\;\text{for }1 < p \leq \infty \;.}$$

We now have to estimate each term

$$\displaystyle{ \int _{U}\left \vert \int _{F}e^{-2\pi i\rho \sigma \cdot x}\mathit{dx}\right \vert ^{p}d\sigma }$$

from above, when F = F 2 ,…,F m . Since we are integrating over each facet separately, we may rotate and translate F until it lies in the hyperplane x 1 = 0. After this transformation the normal ν 1 to the facet F 1 is no longer parallel to \(\left (1,0,\ldots,0\right )\). Since U is a neighborhood of ν 1 in Σ d−1 , we can choose a small δ > 0 such that

$$\displaystyle{U \subset \left \{\left (\cos \left (\varphi \right ),\sin \left (\varphi \right )\eta \right ):\delta \leq \varphi \leq \pi -\delta,\ \eta \in \varSigma _{d-2}\right \}\;.}$$

Applying Theorem 16 to the \(\left (d - 1\right )\)-dimensional Fourier transform of the characteristic function of F we get

$$\displaystyle\begin{array}{rcl} & & \frac{1} {\rho } \int _{U}\left \vert \int _{F}e^{-2\pi i\rho \sigma \cdot x}\ \mathit{dx}\right \vert \ d\sigma \leq c\ \frac{1} {\rho } \int _{\delta }^{\pi -\delta }\int _{ \varSigma _{d-2}}\left \vert \hat{\chi }_{F}\left (\rho \sin \left (\varphi \right )\eta \right )\right \vert \sin ^{d-2}\left (\varphi \right )\ d\varphi d\eta {}\\ & & \leq c\ \frac{1} {\rho } \int _{\delta }^{\pi -\delta }\frac{\log ^{d-2}\left (\rho \sin \left (\varphi \right )\right )} {\left (\rho \sin \left (\varphi \right )\right )^{d-1}}\sin ^{d-2}\left (\varphi \right )\ d\varphi \leq c\ \frac{\log ^{d-2}\left (\rho \right )} {\rho ^{d}} \;, {}\\ \end{array}$$

while, for 1 < p ≤ +∞,

$$\displaystyle\begin{array}{rcl} & & \frac{1} {\rho ^{p}}\int _{U}\left \vert \int _{F}e^{-2\pi i\rho \sigma \cdot x}\ \mathit{dx}\right \vert ^{p}\ d\sigma {}\\ & & \leq c\ \frac{1} {\rho ^{p}}\int _{\delta }^{\pi -\delta }\int _{ \varSigma _{d-2}}\left \vert \hat{\chi }_{F}\left (\rho \sin \left (\varphi \right )\eta \right )\right \vert ^{p}\sin ^{d-2}\left (\varphi \right )\ d\varphi d\eta {}\\ & & \leq c\ \frac{1} {\rho ^{p}}\int _{\delta }^{\pi -\delta }\left (\rho \sin \left (\varphi \right )\right )^{-p-\left (d-2\right )}\sin ^{d-2}\left (\varphi \right )\ d\varphi \leq c\ \rho ^{-2p-\left (d-2\right )}\;. {}\\ \end{array}$$

Hence ii) and iii) are proved.

To prove i) assume, by way of contradiction, that for every arbitrarily small \(\varepsilon > 0\) there exists an arbitrarily large ρ such that

$$\displaystyle{\left \Vert \hat{\chi }_{P}\left (\rho \cdot \right )\right \Vert _{L^{1,\infty }\left (\varSigma _{d-1}\right )} \leq \varepsilon \rho ^{-d}\log ^{d-2}\left (\rho \right ).}$$

By Lemma 18 we have

$$\displaystyle\begin{array}{rcl} & & \left \Vert \hat{\chi }_{P}\left (\rho \cdot \right )\right \Vert _{L^{1}\left (\varSigma _{d-1}\right )} {}\\ & & \leq \rho ^{-d}\left \Vert \hat{\chi }_{ P}\left (\rho \cdot \right )\right \Vert _{L^{\infty }\left (\varSigma _{d-1}\right )} +\varepsilon \rho ^{-d}\log ^{d-2}\left (\rho \right )\int _{\rho ^{ -d}}^{\left \vert \varSigma _{d-1}\right \vert }u^{-1}\ \mathit{du} {}\\ & & \leq \left \vert P\right \vert \rho ^{-d} +\varepsilon \log \left (\left \vert \varSigma _{ d-1}\right \vert \right )\rho ^{-d}\log ^{d-2}\left (\rho \right ) +\varepsilon d\rho ^{-d}\log ^{d-1}\left (\rho \right ), {}\\ \end{array}$$

which, for \(\varepsilon\) small and ρ large, contradicts ii). □ 
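The planar lower bounds can also be checked numerically through the boundary formula (3.29): for a triangle, the integral over each side equals \(L\,e^{-2\pi i\xi \cdot c}\,\frac{\sin \left (\pi L\,\xi \cdot u\right )} {\pi L\,\xi \cdot u}\), where c is the midpoint, u the unit tangent and L the length of the side. The following Python sketch (an illustration only; the triangle and the dilations are arbitrary choices) evaluates the spherical L 1 averages, which decay like \(\log \left (\rho \right )/\rho ^{2}\), in agreement with (3.27) and with ii) of Theorem 19.

```python
import numpy as np

# vertices of a triangle, listed counterclockwise (arbitrary choice)
V = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])

def chi_hat_triangle(xi):
    # xi: array of shape (n, 2); evaluates chi_hat_T(xi) via the boundary formula (3.29):
    # sum over the sides of  i (xi.nu) / (2 pi |xi|^2) * (integral of exp(-2 pi i xi.x) over the side),
    # side integral = L * exp(-2 pi i xi.c) * sinc(L * xi.u),  sinc(t) = sin(pi t)/(pi t).
    out = np.zeros(xi.shape[0], dtype=complex)
    rho2 = np.sum(xi**2, axis=1)
    for j in range(3):
        a, b = V[j], V[(j + 1) % 3]
        L = np.linalg.norm(b - a)
        u = (b - a) / L                      # unit tangent (counterclockwise orientation)
        nu = np.array([u[1], -u[0]])         # outward unit normal
        c = (a + b) / 2.0                    # midpoint of the side
        edge = L * np.exp(-2j * np.pi * xi @ c) * np.sinc(L * (xi @ u))
        out += 1j * (xi @ nu) / (2.0 * np.pi * rho2) * edge
    return out

def l1_spherical_average(rho, n_theta=100000):
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    xi = rho * np.column_stack([np.cos(theta), np.sin(theta)])
    return np.abs(chi_hat_triangle(xi)).mean()

for rho in [16.0, 64.0, 256.0, 1024.0]:
    # the ratio below should stay bounded away from 0 and from infinity
    print(rho, l1_spherical_average(rho) * rho**2 / np.log(rho))
```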

The above theorem is false in the case d = 1, and this is simply due to the zeros of \(\hat{\chi }_{P}\) when P is a segment. When d ≥ 2 the lower bound \(\left (\mathit{iii}\right )\) in Theorem 19 is false for a cube. The following analog of Lemma 11 can be easily proved.

Theorem 20.

Let Q = Q d = [−1∕2,1∕2] d be the unit cube in \(\mathbb{R}^{d}\) , d ≥ 2. Then for 1 < p ≤ +∞ and for every positive integer k we have

$$\displaystyle{\left \Vert \hat{\chi }_{Q}(k\cdot )\right \Vert _{L^{p}(\varSigma _{d-1})} \leq c\,k^{-\left (3p+2d-3\right )/2p}\;.}$$

So far we have seen that balls and polyhedra share the same spherical L p order of decay if and only if p = 2. It is natural to look for convex bodies with “intermediate” order of decay. On this problem we have significant results only for d = 2 (see [7, 12, 13, 62]). It can be shown that for every 2 < p ≤ +∞ and every order of decay \(a \in \left (1 + 1/p,3/2\right )\) there exists a convex planar body B having piecewise smooth boundary and satisfying

$$\displaystyle{ \left \Vert \hat{\chi }_{B}\left (\rho \cdot \right )\right \Vert _{L^{p}\left (\varSigma _{1}\right )} \leq c\ \rho ^{-a}\;,\;\;\;\;\;\mathop{\lim \sup }\limits_{\rho \rightarrow +\infty }\ \rho ^{a}\left \Vert \hat{\chi }_{B}\left (\rho \cdot \right )\right \Vert _{L^{p}\left (\varSigma _{1}\right )} > 0\;. }$$
(3.32)

For p < 2 the situation is different: if we keep the piecewise smooth boundary assumption, then there is no intermediate decay between that of the disc and that of the polygons (observe that in (3.32) we have a limsup). The reason is that if ∂ B is piecewise smooth, but B is not a polygon, then ∂ B contains an arc with positive curvature, and by the argument in [9], this is enough to get the lower estimate ρ −3∕2 for the limsup. If we pass to arbitrary convex bodies, then it is possible to construct convex bodies with intermediate L p order of decay. Moreover, there is no convex planar body B for which \(\left \Vert \hat{\chi }_{B}\left (\rho \cdot \right )\right \Vert _{L^{1}\left (\varSigma _{1}\right )} = \mathit{o}\left (\rho ^{-2}\log \rho \right )\) (see [7, 62]).

4 Lattice Points: Estimates from Above

The literature on lattice points in multi-dimensional domains is very impressive and deep, see e.g. [28, 34, 39]. Here we focus on the topics which are necessary for (or close to) the goal of this chapter, i.e. the relation between discrepancy problems and the average decay of Fourier transforms. First we shall see that the upper bounds in Theorems 6 and 16 readily provide estimates from above for L 2 or L p discrepancy problems related to rotations and translations of convex bodies. The lower bounds for the discrepancy related to the lattice \(\mathbb{Z}^{d}\) are not strictly necessary for our purpose, since the typical results on irregularities of point distribution involve arbitrary choices of points. However we will present some results of this kind, on the one hand because choices of points related to a lattice are very important, on the other hand because in some cases they compensate for the lack of lower bounds for arbitrary choices of points.

For a convex body \(B \subset \mathbb{R}^{d}\) and for a large dilation R, let \(\tau \left (\mathit{RB}\right ) + t\) be the rotated and translated copy of RB. Here \(\tau \in SO\left (d\right ),\) and since we are interested in the cardinality of the set \(\mathbb{Z}^{d} \cap \left (\tau \left (\mathit{RB}\right ) + t\right )\), which is \(\mathbb{Z}^{d}\)-periodic in the variable t, we take \(t \in \mathbb{T}^{d}\). Define the discrepancy function D R on \(\mathit{SO}\left (d\right ) \times \mathbb{T}^{d}\) by

$$\displaystyle\begin{array}{rcl} D_{R}\left (\tau,t\right )& =& \mathrm{card}\left (\mathbb{Z}^{d} \cap \left (\tau \left (\mathit{RB}\right ) + t\right )\right ) - R^{d}\left \vert B\right \vert \\ & =& \sum _{k\in \mathbb{Z}^{d}}\chi _{\tau \left (\mathit{RB}\right )}\left (k - t\right ) - R^{d}\left \vert B\right \vert \;. {}\end{array}$$
(3.33)

The Fourier coefficients of the periodic function \(D_{\tau,R}\left (t\right ) = D_{R}\left (\tau,t\right )\) take values

$$\displaystyle{ \hat{D}_{\tau,R}\left (m\right ) = \left \{\begin{array}{lll} 0 &&\text{if }m = 0\\ R^{d } \ \hat{\chi }_{ \tau \left (B\right )}\left (\mathit{Rm}\right )&&\text{if }m\neq 0 \end{array} \right.\;. }$$
(3.34)

Indeed

$$\displaystyle\begin{array}{rcl} & & \hat{D}_{\tau,R}\left (0\right ) =\int _{\left [-\frac{1} {2},\frac{1} {2} \right )^{d}}\left (\mathrm{card}\left (\mathbb{Z}^{d} \cap \left (\tau \left (\mathit{RB}\right ) + t\right )\right ) - R^{d}\left \vert B\right \vert \right )\ \mathit{dt} {}\\ & & = -R^{d}\left \vert B\right \vert +\sum _{ k\in \mathbb{Z}^{d}}\int _{\left [-\frac{1} {2},\frac{1} {2} \right )^{d}}\chi _{\tau \left (\mathit{RB}\right )}\left (k - t\right )\ \mathit{dt} {}\\ & & = -R^{d}\left \vert B\right \vert +\int _{ \mathbb{R}^{d}}\chi _{\tau \left (\mathit{RB}\right )}\left (t\right )\ \mathit{dt} = 0\;, {}\\ \end{array}$$

while for m ≠ 0

$$\displaystyle\begin{array}{rcl} & & \hat{D}_{\tau,R}\left (m\right ) =\int _{\left [-\frac{1} {2},\frac{1} {2} \right )^{d}}\left (\mathrm{card}\left (\mathbb{Z}^{d} \cap \left (\tau \left (\mathit{RB}\right ) + t\right )\right ) - R^{d}\left \vert B\right \vert \right )\ e^{-2\pi \mathit{im}\cdot t}\ \mathit{dt} {}\\ & & =\sum _{k\in \mathbb{Z}^{d}}\int _{\left [-\frac{1} {2},\frac{1} {2} \right )^{d}}\chi _{\tau \left (\mathit{RB}\right )}\left (k - t\right )\ e^{-2\pi \mathit{im}\cdot t}\ \mathit{dt} =\int _{ \mathbb{R}^{d}}\chi _{\tau \left (\mathit{RB}\right )}\left (t\right )\ e^{-2\pi \mathit{im}\cdot t}\ \mathit{dt} {}\\ & & =\hat{\chi } _{\tau \left (\mathit{RB}\right )}\left (m\right ) = R^{d}\ \hat{\chi }_{ \tau \left (B\right )}\left (\mathit{Rm}\right )\;. {}\\ \end{array}$$

Then \(D_{\tau,R}\left (t\right )\) has Fourier series

$$\displaystyle{R^{d}\sum _{ 0\neq m\in \mathbb{Z}^{d}}\hat{\chi }_{\tau \left (B\right )}\left (\mathit{Rm}\right )\ e^{2\pi \mathit{im}\cdot t}\;.}$$
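Formula (3.34) can be tested directly in a small example. In the following Python sketch (the parameters are arbitrary and the agreement is only up to the discretization of the average in t) we take B equal to the unit disc in \(\mathbb{R}^{2}\), for which \(\hat{\chi }_{B}\left (\xi \right ) = J_{1}\left (2\pi \left \vert \xi \right \vert \right )/\left \vert \xi \right \vert \) and the rotation τ is irrelevant; the discrepancy is computed by direct lattice point counting on a grid of translations and its discrete Fourier coefficients are compared with \(R^{2}\hat{\chi }_{B}\left (\mathit{Rm}\right )\).

```python
import numpy as np
from scipy.special import j1

R, N = 7.5, 128                       # dilation and size of the grid in t (arbitrary)
disc_area = np.pi * R**2

# lattice points that can fall in R*B + t for some t in [0,1)^2
M = int(np.ceil(R)) + 2
ks = np.arange(-M, M + 1)
K1, K2 = np.meshgrid(ks, ks, indexing="ij")
K = np.column_stack([K1.ravel(), K2.ravel()]).astype(float)

# D_R(t) = card(Z^2 cap (R*B + t)) - R^2 |B| sampled on an N x N grid of t
t = np.arange(N) / N
D = np.empty((N, N))
for i, t1 in enumerate(t):
    a = (K[:, 0] - t1) ** 2                      # (n_lattice,)
    b = (K[:, 1][:, None] - t[None, :]) ** 2     # (n_lattice, N)
    D[i, :] = np.sum(a[:, None] + b <= R**2, axis=0) - disc_area

Dhat = np.fft.fft2(D) / N**2                     # approximate Fourier coefficients of D

# compare with (3.34): D_hat(m) = R^2 chi_hat_B(R m) = R J_1(2 pi R |m|) / |m|
for m1, m2 in [(1, 0), (1, 1), (2, 1), (3, 0)]:
    norm_m = np.hypot(m1, m2)
    predicted = R * j1(2 * np.pi * R * norm_m) / norm_m
    print((m1, m2), round(Dhat[m1, m2].real, 3), round(predicted, 3))
```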

The following result is due to D. Kendall (see [35], see also [13]).

Theorem 21.

Let B be a convex body in \(\mathbb{R}^{d}\) , d ≥ 1, and let D R be as in (3.33) . Then there exists a positive constant c, depending on d but not on B, such that for every R ≥ 1 we have

$$\displaystyle{\left \Vert D_{R}\right \Vert _{L^{2}\left (\mathit{SO}\left (d\right )\times \mathbb{T}^{d}\right )} \leq c\ \left (\mathrm{diam}\left (B\right )\right )^{\left (d-1\right )/2}\ R^{\left (d-1\right )/2}\;.}$$

Proof.

By Parseval identity we obtain

$$\displaystyle\begin{array}{rcl} & & \left \Vert D_{R}\right \Vert _{L^{2}\left (\mathit{SO}\left (d\right )\times \mathbb{T}^{d}\right )}^{2} =\int _{\mathit{ SO}\left (d\right )}\int _{\mathbb{T}^{d}}D_{R}^{2}\left (\tau,t\right )\ \mathit{dt}\,d\tau \\ & & =\int _{\mathit{SO}\left (d\right )}\sum _{0\neq m\in \mathbb{Z}^{d}}\left \vert \hat{D}_{\tau,R}\left (m\right )\right \vert ^{2}\ d\tau = R^{2d}\sum _{ 0\neq m\in \mathbb{Z}^{d}}\int _{\mathit{SO}\left (d\right )}\left \vert \hat{\chi }_{\tau \left (B\right )}\left (\mathit{Rm}\right )\right \vert ^{2}\ d\tau \\ & & = R^{2d}\sum _{ 0\neq m\in \mathbb{Z}^{d}}\int _{SO\left (d\right )}\left \vert \hat{\chi }_{B}\left (\tau ^{-1}\left (\mathit{Rm}\right )\right )\right \vert ^{2}\ d\tau {}\end{array}$$
(3.35)

because the Fourier transform commutes with rotations. Then Theorem 6 gives

$$\displaystyle\begin{array}{rcl} & & \left \Vert D_{R}\right \Vert _{L^{2}\left (\mathit{SO}\left (d\right )\times \mathbb{T}^{d}\right )}^{2} \leq c\ \left (\mathrm{diam}\left (B\right )\right )^{d-1}R^{2d}\sum _{ 0\neq m\in \mathbb{Z}^{d}}\left (R\left \vert m\right \vert \right )^{-\left (d+1\right )} \\ & & \leq c\ \left (\mathrm{diam}\left (B\right )\right )^{d-1}R^{d-1}\int _{ x\in \mathbb{R}^{d},\;\left \vert x\right \vert \geq 1}\left \vert x\right \vert ^{-\left (d+1\right )}\ \mathit{dx} \leq c^{{\prime}}\ \left (\mathrm{diam}\left (B\right )\right )^{d-1}R^{d-1}\;.{}\end{array}$$
(3.36)

 □ 
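For the unit disc in the plane the average over \(\mathit{SO}\left (2\right )\) is trivial and, by the classical formula \(\hat{\chi }_{B}\left (\xi \right ) = J_{1}\left (2\pi \left \vert \xi \right \vert \right )/\left \vert \xi \right \vert \), the Parseval identity in (3.35) can be evaluated numerically. The following Python sketch (the dilations and the truncation are arbitrary choices) shows that \(\left \Vert D_{R}\right \Vert _{L^{2}\left (\mathbb{T}^{2}\right )}^{2}/R\) stays bounded, in agreement with Theorem 21 for d = 2.

```python
import numpy as np
from scipy.special import j1

def disc_L2_discrepancy_squared(R, cutoff=200):
    # Parseval as in (3.35) for the unit disc B (d = 2, rotations irrelevant):
    # ||D_R||_{L^2(T^2)}^2 = sum_{m != 0} |R^2 chi_hat_B(R m)|^2
    #                      = R^2 sum_{m != 0} J_1(2 pi R |m|)^2 / |m|^2   (sum truncated below)
    ks = np.arange(-cutoff, cutoff + 1)
    M1, M2 = np.meshgrid(ks, ks, indexing="ij")
    norms = np.hypot(M1, M2).ravel()
    norms = norms[norms > 0]                     # drop m = 0
    return R**2 * np.sum(j1(2 * np.pi * R * norms) ** 2 / norms**2)

for R in [10.5, 21.3, 44.7, 90.1]:                # arbitrary dilations
    print(R, disc_L2_discrepancy_squared(R) / R)  # bounded, consistently with Theorem 21
```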

Remark 22.

The above argument can be applied to a more general setting (see [25]). First consider a body \(B \subset \mathbb{R}^{d}\) and let 0 ≤ α ≤ 1 satisfy

$$\displaystyle{\left \vert \left \{t \in \mathbb{R}^{d}:\mathrm{ dist}\left (t,\partial B\right ) \leq \delta \right \}\right \vert \leq c_{ d}\ \delta ^{\alpha }}$$

for every small δ > 0. Let \(D_{R}\left (\tau,t\right )\) be as in (3.33). Then

$$\displaystyle{\left \{ \frac{1} {R}\int _{0}^{R}\int _{ \mathit{SO}\left (d\right )}\int _{\mathbb{T}^{d}}\left \vert D_{\rho }\left (\tau,t\right )\right \vert ^{2}\ \mathit{dt}\,d\sigma \,d\rho \right \}^{1/2} \leq c\ R^{\left (d-\alpha \right )/2}\;.}$$

Moreover, the characteristic function χ B can be replaced by an arbitrary integrable function. In this case a modulus of continuity appears in the upper bound.

4.1 The Curious Case of the Ball When \(d \equiv 1\ \left (\mathrm{mod}4\right )\)

The above upper estimate is best possible in general, but it is not always sharp. Indeed, let B be a ball in \(\mathbb{R}^{d}\) with \(1 < d \equiv 1\ \left (\mathrm{mod}4\right )\); then there exists a diverging sequence R j such that the upper bound \(R_{j}^{d-1}\) in (3.36) can be replaced by \(c_{\varepsilon }\ R_{j}^{d-1}\log ^{-1/\left (d+\varepsilon \right )}\left (R_{j}\right )\), where \(\varepsilon > 0\) is arbitrarily small. This interesting fact has been proved by L. Parnovski and A. Sobolev in [44] (see also [38] and [43]).

We first need the following approximation result (see [44]).

Lemma 23.

Let \(\alpha _{1},\alpha _{2},\ldots,\alpha _{n}\) be real numbers. Then for every positive integer  j there exist integers p 1 ,p 2 ,…,p n ,q such that

$$\displaystyle{\;j \leq q \leq \; j^{n+1}\;,\;\;\;\;\;\left \vert \alpha _{ k}q - p_{k}\right \vert <\; j^{-1}\;\;\text{for every }\;k = 1,\ldots,n.}$$

Proof.

As usual we write \(\left \{x\right \} = x -\left [x\right ]\) for the fractional part of a real number x. Split

$$\displaystyle{ \left [0,1\right )^{n} =\bigcup _{ k=1}^{\;j^{n} }Q_{k}\;, }$$

where the Q k ’s are cubes with sides parallel to the axes and of length  j −1. For every integer \(0 \leq \ell \leq \;j^{n+1}\) consider

$$\displaystyle{ \left (\left \{\ell\alpha _{1}\right \},\left \{\ell\alpha _{2}\right \},\ldots,\left \{\ell\alpha _{n}\right \}\right ) = a_{\ell} \in \left [0,1\right )^{n}\;. }$$

Since the number of the \(a_{\ell}\) ’s is \(\;j^{n+1} + 1\), there exists k 0 such that the cube \(Q_{k_{0}}\) contains at least  j + 1 points \(a_{\ell_{1}},a_{\ell_{2}},\ldots,a_{\ell_{j+1}}\), say with \(\ell_{1} <\ell _{2} <\ldots <\ell _{\;j+1}\). Then \(\ell_{\;j+1} -\ell _{1} \geq \;j\) and, since the above points lie in \(Q_{k_{0}}\), we have

$$\displaystyle{\;j^{-1} \geq \left \vert \left \{\ell_{\; j+1}\alpha _{k}\right \} -\left \{\ell_{1}\alpha _{k}\right \}\right \vert = \left \vert \left (\ell_{\;j+1} -\ell_{1}\right )\alpha _{k} -\left (\left [\ell_{\;j+1}\alpha _{k}\right ] -\left [\ell_{1}\alpha _{k}\right ]\right )\right \vert }$$

for every k = 1,…,n. To end the proof we choose \(q =\ell _{\;j+1} -\ell _{1}\) and \(p_{k} = \left [\ell_{\;j+1}\alpha _{k}\right ] -\left [\ell_{1}\alpha _{k}\right ]\). □ 
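The proof of Lemma 23 is constructive, and the pigeonhole argument can be implemented directly. The following Python sketch (for illustration only; it is practical only for small n and j) buckets the points \(a_{\ell}\) into the cubes \(Q_{k}\) and returns a pair \(\left (q,\left (p_{k}\right )\right )\) as in the lemma; in the proof of Theorem 24 below it would be applied with the numbers α k equal to the lengths \(\left \vert m\right \vert \), m ∈ H j .

```python
import numpy as np
from collections import defaultdict

def dirichlet_simultaneous(alphas, j):
    """Pigeonhole argument of Lemma 23: returns (q, p) with
    j <= q <= j**(n+1) and |alphas[k]*q - p[k]| < 1/j for every k."""
    alphas = np.asarray(alphas, dtype=float)
    n = len(alphas)
    buckets = defaultdict(list)
    for ell in range(j**(n + 1) + 1):
        frac = (ell * alphas) % 1.0
        key = tuple(np.minimum((frac * j).astype(int), j - 1))   # index of the cube Q_k
        buckets[key].append(ell)
    # some cube contains at least j+1 of the j**(n+1)+1 points a_ell
    ells = max(buckets.values(), key=len)
    q = max(ells) - min(ells)                    # >= j, <= j**(n+1)
    p = np.round(q * alphas).astype(int)         # nearest integers, since |q*alpha_k - p_k| < 1/j
    return q, p

# toy usage (arbitrary data): simultaneous approximation of two irrationals
q, p = dirichlet_simultaneous([np.sqrt(2), np.sqrt(3)], j=8)
print(q, p, np.abs(q * np.array([np.sqrt(2), np.sqrt(3)]) - p))
```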

Theorem 24.

Let \(1 < d \equiv 1\ \left (\mathrm{mod}4\right )\) , let \(B = \left \{u \in \mathbb{R}^{d}: \left \vert u\right \vert \leq 1\right \}\) be the unit ball and for every \(t \in \mathbb{T}^{d}\) consider the discrepancy

$$\displaystyle{D_{R}\left (t\right ) =\mathrm{ card}\left (\mathbb{Z}^{d} \cap \left (\mathit{RB} + t\right )\right ) - R^{d}\left \vert B\right \vert \;.}$$

Then for every \(\varepsilon > 0\) there exists a sequence of integers R j → +∞ such that

$$\displaystyle{\left \Vert D_{R_{j}}\right \Vert _{L^{2}\left (\mathbb{T}^{d}\right )} \leq c_{\varepsilon }\ R_{j}^{\left (d-1\right )/2}\log ^{\frac{-1} {d+\varepsilon } }\left (R_{j}\right )\;.}$$

Proof.

For every positive integer j let

$$\displaystyle{H_{j} = \left \{m \in \mathbb{Z}^{d}: 0 < \left \vert m\right \vert \leq j^{2}\right \}\;.}$$

Then \(\mathrm{card}H_{j} \leq 2^{d}j^{2d}\). Lemma 23 implies the existence of a positive integer R j such that

$$\displaystyle{ j \leq R_{j} \leq j^{2^{d}j^{2d}+1 }\;,\;\;\;\;\;\left \vert \sin \left (2\pi R_{j}\left \vert m\right \vert \right )\right \vert \leq j^{-1} }$$
(3.37)

for every \(\left \vert m\right \vert \leq j^{2}\). Then by (3.35), (3.4), (3.5), (3.34), the assumption \(d \equiv 1\ \left (\mathrm{mod}4\right )\) and (3.37) we obtain

$$\displaystyle\begin{array}{rcl} & & \left \Vert D_{R_{j}}\right \Vert _{L^{2}\left (\mathbb{T}^{d}\right )}^{2} =\sum _{ m\in \mathbb{Z}^{d}}\left \vert \hat{D}_{R_{j}}\left (m\right )\right \vert ^{2} = R_{ j}^{d}\sum _{ 0\neq m\in \mathbb{Z}^{d}}\left \vert m\right \vert ^{-d}J_{ d/2}^{2}\left (2\pi R_{ j}\left \vert m\right \vert \right ) \\ & & = R_{j}^{d}\sum _{ 0<\left \vert m\right \vert \leq j^{2}}\left \vert m\right \vert ^{-d}J_{ d/2}^{2}\left (2\pi R_{ j}\left \vert m\right \vert \right ) + R_{j}^{d}\sum _{ \left \vert m\right \vert >j^{2}}\left \vert m\right \vert ^{-d}J_{ d/2}^{2}\left (2\pi R_{ j}\left \vert m\right \vert \right ) \\ & & \leq R_{j}^{d-1}\sum _{ 0<\left \vert m\right \vert \leq j^{2}}\pi ^{-2}\left \vert m\right \vert ^{-\left (d+1\right )}\sin ^{2}\left (2\pi R_{ j}\left \vert m\right \vert \right ) \\ & & +R_{j}^{d-1}\sum _{ \left \vert m\right \vert >j^{2}}\pi ^{-2}\left \vert m\right \vert ^{-\left (d+1\right )} + \mathcal{O}\left (R_{ j}^{d-2}\right ) \\ & & \leq c\ R_{j}^{d-1}j^{-2}\int _{ 1}^{j^{2} }r^{-2}\ \mathit{dr} + c\ R_{ j}^{d-1}\int _{ j^{2}}^{+\infty }r^{-2}\ \mathit{dr} + \mathcal{O}\left (R_{ j}^{d-2}\right ) \\ & & \leq c\ j^{-2}R_{ j}^{d-1} + \mathcal{O}\left (R_{ j}^{d-2}\right )\;. {}\end{array}$$
(3.38)

Since (3.37) implies

$$\displaystyle\begin{array}{rcl} \log \left (R_{j}\right )& <& \left (\left (2j^{2}\right )^{d} + 1\right )\log j < c_{\varepsilon }\left (2j^{2}\right )^{d+\varepsilon }\;, {}\\ j^{2}& >& c_{\varepsilon }^{{\prime}}\ \log ^{ \frac{1} {d+\varepsilon } }\left (R_{j}\right ) {}\\ \end{array}$$

for every \(\varepsilon > 0\), we end the proof. □ 

4.2 Lattice Points in Polyhedra

For general p we have some rather sharp results in the case of polyhedra.

Theorem 25.

Let P be a convex polyhedron in \(\mathbb{R}^{d}\) , d ≥ 1, and let D R = D P,R be as in (3.33) . Then there exist positive constants c and c p such that for every R ≥ 2 we have

$$\displaystyle{ \left \Vert D_{R}\right \Vert _{L^{1}\left (SO\left (d\right )\times \mathbb{T}^{d}\right )} \leq c\ \log ^{d}\left (R\right ) }$$
(3.39)

and for 1 < p ≤ +∞

$$\displaystyle{ \left \Vert D_{R}\right \Vert _{L^{p}\left (SO\left (d\right )\times \mathbb{T}^{d}\right )} \leq c_{p}\ R^{\left (d-1\right )\left (1-1/p\right )}\;. }$$
(3.40)

Proof.

The bound in (3.39) is a particular case of Theorem 30 below. We prove (3.40) first in the case 1 < p ≤ 2. Then by (3.34) (applied with B = P), the Parseval identity, Hölder's inequality and the inequality \(\left \Vert \cdot \right \Vert _{\ell^{2}} \leq \left \Vert \cdot \right \Vert _{\ell^{p}}\) we obtain

$$\displaystyle\begin{array}{rcl} & & \left \{\int _{\mathit{SO}\left (d\right )}\int _{\mathbb{T}^{d}}\left \vert D_{R}\left (\tau,t\right )\right \vert ^{p}\ \mathit{dt}\,d\tau \right \}^{1/p} {}\\ & & \leq \left \{\int _{\mathit{SO}\left (d\right )}\left \{\int _{\mathbb{T}^{d}}\left \vert D_{R}\left (\tau,t\right )\right \vert ^{2}\ \mathit{dt}\right \}^{p/2}\ d\tau \right \}^{1/p} {}\\ & & = \left \{\int _{\mathit{SO}\left (d\right )}\left \{\sum _{0\neq m\in \mathbb{Z}^{d}}\left \vert R^{d}\ \hat{\chi }_{ \tau \left (B\right )}\left (\mathit{Rm}\right )\right \vert ^{2}\right \}^{p/2}\ d\tau \right \}^{1/p} {}\\ & & \leq \left \{\int _{\mathit{SO}\left (d\right )}\sum _{0\neq m\in \mathbb{Z}^{d}}\left \vert R^{d}\ \hat{\chi }_{ \tau \left (B\right )}\left (\mathit{Rm}\right )\right \vert ^{p}\ d\tau \right \}^{1/p} {}\\ & & = R^{d}\left \{\sum _{ 0\neq m\in \mathbb{Z}^{d}}\int _{\mathit{SO}\left (d\right )}\left \vert \ \hat{\chi }_{\tau \left (B\right )}\left (\mathit{Rm}\right )\right \vert ^{p}\ d\tau \right \}^{1/p}\;. {}\\ \end{array}$$

By Theorem 16 the last term is bounded by

$$\displaystyle\begin{array}{rcl} & & c\ R^{d}\left \{\sum _{ 0\neq m\in \mathbb{Z}^{d}}\left \vert \mathit{Rm}\right \vert ^{-p-d+1}\right \}^{1/p} \leq c\ R^{\left (d-1\right )\left (1-1/p\right )}\int _{ 1}^{+\infty }r^{-p}\ \mathit{dr} {}\\ & & = c_{p}\ R^{\left (d-1\right )\left (1-1/p\right )}\;. {}\\ \end{array}$$

For the case p = +∞ a geometric consideration (the boundary of \(\tau \left (\mathit{RP}\right ) + t\) meets at most \(c\,R^{d-1}\) unit lattice cubes) shows the existence of a positive constant c such that for every \(\tau \in \mathit{SO}\left (d\right )\) and every \(t \in \mathbb{T}^{d}\) we have

$$\displaystyle{\left \vert D_{R}\left (\tau,t\right )\right \vert \leq c\ R^{d-1}\;.}$$

We end the proof by obtaining the case 2 < p < +∞ through interpolation:

$$\displaystyle\begin{array}{rcl} & & \left \{\int _{\mathit{SO}\left (d\right )\times \mathbb{T}^{d}}\left \vert D_{R}\right \vert ^{p}\right \}^{1/p} = \left \{\int _{\mathit{ SO}\left (d\right )\times \mathbb{T}^{d}}\left \vert D_{R}\right \vert ^{2}\left \vert D_{ R}\right \vert ^{p-2}\right \}^{1/p} {}\\ & & \leq \left \Vert D_{R}\right \Vert _{L^{\infty }\left (SO\left (d\right )\times \mathbb{T}^{d}\right )}^{\left (p-2\right )/p}\left \Vert D_{ R}\right \Vert _{L^{2}\left (SO\left (d\right )\times \mathbb{T}^{d}\right )}^{2/p} \leq c\ R^{\left (d-1\right )\left (p-2\right )/p}\ R^{\left (d-1\right )/p} {}\\ & & = c\ R^{\left (d-1\right )\left (1-1/p\right )}\;. {}\\ \end{array}$$

 □ 

A modification of the above argument can be used to study the so-called half-space discrepancy (see [23, 40]).

The proof of the weak-L 1 estimate requires a more delicate argument.

Theorem 26.

Let P be a convex polyhedron in \(\mathbb{R}^{d}\) and let D R = D P,R as in (3.33) . Then there exists a positive constant c such that for every R ≥ 2 we have

$$\displaystyle{\left \Vert D_{R}\right \Vert _{L^{1,\infty }\left (SO\left (d\right )\times \mathbb{T}^{d}\right )} \leq c\ \log ^{d-1}\left (R\right )\;.}$$

The proof of this theorem requires two preliminary results.

Lemma 27.

Let X,Y be finite measure spaces, and let

$$\displaystyle{\left \Vert F\right \Vert _{L^{1,\infty }\left (X,L^{2}\left (Y \right )\right )} =\sup _{\lambda >0}\lambda \left \vert \left \{x \in X: \left \Vert F\left (x,\cdot \right )\right \Vert _{L^{2}\left (Y \right )} >\lambda \right \}\right \vert < +\infty \;.}$$

Then

$$\displaystyle{\left \Vert F\right \Vert _{L^{1,\infty }\left (X\times Y \right )} \leq c\ \left \Vert F\right \Vert _{L^{1,\infty }\left (X,L^{2}\left (Y \right )\right )}\;.}$$

Proof.

Without loss of generality we may assume \(\left \Vert F\right \Vert _{L^{1,\infty }\left (X,L^{2}\left (Y \right )\right )} = 1\). Since the statement is rearrangement invariant, we may assume that \(X = \left [0,1\right ]\) and \(Y = \left [0,1\right ]\), endowed with Lebesgue measure, and that \(\left \Vert F\left (x,\cdot \right )\right \Vert _{L^{2}\left (Y \right )} \leq 1/x\). Then, by the Chebyshev inequality we obtain

$$\displaystyle\begin{array}{rcl} & & \left \vert \left \{\left (x,y\right ): 0 \leq x \leq 1,\;0 \leq y \leq 1,\;\left \vert F\left (x,y\right )\right \vert >\lambda \right \}\right \vert {}\\ & &\leq \lambda ^{-1} + \left \vert \left \{\left (x,y\right ):\lambda ^{-1} \leq x \leq 1,\;0 \leq y \leq 1,\;\left \vert F\left (x,y\right )\right \vert >\lambda \right \}\right \vert {}\\ & & =\lambda ^{-1} +\int _{ \lambda ^{ -1}}^{1}\left \vert \left \{y: 0 \leq y \leq 1,\;\left \vert F\left (x,y\right )\right \vert >\lambda \right \}\right \vert \ \mathit{dx} {}\\ & & \leq \lambda ^{-1} +\int _{ \lambda ^{ -1}}^{1}\left (\lambda ^{-2}\int _{ 0}^{1}\left \vert F\left (x,y\right )\right \vert ^{2}\ \mathit{dy}\right )\mathit{dx} \leq \lambda ^{-1} +\lambda ^{-2}\int _{ \lambda ^{-1}}^{1} \frac{1} {x^{2}}\ \mathit{dx} \leq 2\lambda ^{-1}\;. {}\\ \end{array}$$

 □ 

The triangle inequality for \(\left \Vert \cdot \right \Vert _{L^{1,\infty }}\) fails when we add infinitely many terms (see [57, p. 215]). The following lemma is a kind of substitute.

Lemma 28.

Let f m be a sequence of functions in \(L^{1,\infty }\left (X\right ).\) Then

$$\displaystyle{\left \Vert \left \{\sum _{m}\left \vert f_{m}\right \vert ^{2}\right \}^{1/2}\right \Vert _{ L^{1,\infty }\left (X\right )} \leq c\ \sum _{m}\left \Vert f_{m}\right \Vert _{L^{1,\infty }\left (X\right )}\;.}$$

Proof.

We have

$$\displaystyle\begin{array}{rcl} & & \left \Vert \left \{\sum _{m}\left \vert f_{m}\right \vert ^{2}\right \}^{1/2}\right \Vert _{ L^{1,\infty }\left (X\right )} =\sup _{\lambda >0}\lambda \left \vert \left \{x \in X: \left \{\sum _{m}\left \vert f_{m}\left (x\right )\right \vert ^{2}\right \}^{1/2} >\lambda \right \}\right \vert \\ & & =\sup _{\lambda >0}\lambda \left \vert \left \{x \in X:\sum _{m}\left \vert f_{m}\left (x\right )\right \vert ^{2} >\lambda ^{2}\right \}\right \vert \\ & & =\sup _{\lambda >0}\lambda ^{1/2}\left \vert \left \{x \in X:\sum _{ m}\left \vert f_{m}\left (x\right )\right \vert ^{2} >\lambda \right \}\right \vert = \left \Vert \sum _{ m}\left \vert f_{m}\right \vert ^{2}\right \Vert _{ L^{1/2,\infty }\left (X\right )}^{1/2}\;.{}\end{array}$$
(3.41)

Now we recall that the following q-triangular inequality holds true when 0 < q < 1 (see e.g. [59, Lemma 1.8]):

$$\displaystyle{\left \Vert \sum _{m}g_{m}\right \Vert _{L^{q,\infty }\left (X\right )} \leq c\ \sum _{m}\left \Vert g_{m}\right \Vert _{L^{q,\infty }\left (X\right )}\;.}$$

Then, as in (3.41),

$$\displaystyle{\left \Vert \sum _{m}\left \vert f_{m}\right \vert ^{2}\right \Vert _{ L^{1/2,\infty }\left (X\right )}^{1/2} \leq c\ \sum _{ m}\left \Vert f_{m}^{2}\right \Vert _{ L^{1/2,\infty }\left (X\right )}^{1/2} = c\ \sum _{ m}\left \Vert f_{m}\right \Vert _{L^{1,\infty }\left (X\right )}\;.}$$

 □ 

Proof (of Theorem 26).

By Lemma 27 we have

$$\displaystyle\begin{array}{rcl} & & \left \Vert D_{R}\right \Vert _{L^{1,\infty }\left (\mathit{SO}\left (d\right )\times \mathbb{T}^{d}\right )} \leq c\ \left \Vert D_{R}\right \Vert _{L^{1,\infty }\left (\mathit{SO}\left (d\right ),L^{2}\left (\mathbb{T}^{d}\right )\right )} {}\\ & & = c\ R^{d}\left \Vert \left \{\sum _{0\neq m\in \mathbb{Z}^{d}}\left \vert \ \hat{\chi }_{\tau \left (B\right )}\left (\mathit{Rm}\right )\right \vert ^{2}\right \}^{1/2}\right \Vert _{ L^{1,\infty }\left (\mathit{SO}\left (d\right )\right )} {}\\ & & \leq c\ R^{d}\left \Vert \left \{\sum _{ 0<\left \vert m\right \vert \leq R^{d-1}}\left \vert \ \hat{\chi }_{\tau \left (B\right )}\left (\mathit{Rm}\right )\right \vert ^{2}\right \}^{1/2}\right \Vert _{ L^{1,\infty }\left (\mathit{SO}\left (d\right )\right )} {}\\ & & +c\ R^{d}\left \Vert \left \{\sum _{ \left \vert m\right \vert >R^{d-1}}\left \vert \ \hat{\chi }_{\tau \left (B\right )}\left (\mathit{Rm}\right )\right \vert ^{2}\right \}^{1/2}\right \Vert _{ L^{1,\infty }\left (\mathit{SO}\left (d\right )\right )}\;. {}\\ \end{array}$$

By Lemma 28 we have

$$\displaystyle\begin{array}{rcl} & & R^{d}\left \Vert \left \{\sum _{ 0<\left \vert m\right \vert \leq R^{d-1}}\left \vert \ \hat{\chi }_{\tau \left (B\right )}\left (\mathit{Rm}\right )\right \vert ^{2}\right \}^{1/2}\right \Vert _{ L^{1,\infty }\left (\mathit{SO}\left (d\right )\right )} {}\\ & & \leq c\ R^{d}\sum _{ 0<\left \vert m\right \vert \leq R^{d-1}}\left \Vert \hat{\chi }_{\tau \left (B\right )}\left (\mathit{Rm}\right )\right \Vert _{L^{1,\infty }\left (\mathit{SO}\left (d\right )\right )} \leq c\ R^{d}\sum _{ 0<\left \vert m\right \vert \leq R^{d-1}} \frac{\log ^{d-2}\left (R\left \vert m\right \vert \right )} {R^{d}\left \vert m\right \vert ^{d}} {}\\ & & \leq c\ \log ^{d-2}\left (R\right )\int _{ 1}^{R^{d-1} }\frac{1} {r}\ \mathit{dr} \leq c\ \log ^{d-1}\left (R\right )\;. {}\\ \end{array}$$

On the other hand, by the Chebyshev and Cauchy-Schwarz inequalities and by Theorem 6,

$$\displaystyle\begin{array}{rcl} & & R^{d}\left \Vert \left \{\sum _{ \left \vert m\right \vert >R^{d-1}}\left \vert \ \hat{\chi }_{\tau \left (B\right )}\left (\mathit{Rm}\right )\right \vert ^{2}\right \}^{1/2}\right \Vert _{ L^{1,\infty }\left (\mathit{SO}\left (d\right )\right )} {}\\ & & \leq R^{d}\left \Vert \left \{\sum _{ \left \vert m\right \vert >R^{d-1}}\left \vert \ \hat{\chi }_{\tau \left (B\right )}\left (\mathit{Rm}\right )\right \vert ^{2}\right \}^{1/2}\right \Vert _{ L^{1}\left (\mathit{SO}\left (d\right )\right )} {}\\ & & \leq R^{d}\left \Vert \left \{\sum _{ \left \vert m\right \vert >R^{d-1}}\left \vert \ \hat{\chi }_{\tau \left (B\right )}\left (\mathit{Rm}\right )\right \vert ^{2}\right \}^{1/2}\right \Vert _{ L^{2}\left (\mathit{SO}\left (d\right )\right )} {}\\ & & = R^{d}\left \{\int _{ \mathit{SO}\left (d\right )}\sum _{\left \vert m\right \vert >R^{d-1}}\left \vert \ \hat{\chi }_{\tau \left (B\right )}\left (\mathit{Rm}\right )\right \vert ^{2}\ d\tau \right \}^{1/2} {}\\ & & \leq c\ R^{d}\left \{\sum _{ \left \vert m\right \vert >R^{d-1}}\left \vert Rm\right \vert ^{-\left (d+1\right )}\right \}^{1/2} \leq c\ R^{\left (d-1\right )/2}\left \{\int _{ R^{d-1}}^{+\infty }r^{-2}\ \mathit{dr}\right \}^{1/2} \leq c\;. {}\\ \end{array}$$

 □ 

We now prove an upper bound where the discrepancy is averaged only over rotations. The proof follows a known argument which is usually applied to get a short proof of Sierpinski’s 1903 estimate for the circle problem (see e.g. [48, 55, 60, 61]). For a convex polyhedron \(P \subset \mathbb{R}^{d}\), for \(\tau \in \mathit{SO}\left (d\right )\), and for a large dilation R, let \(\tau \left (\mathit{RP}\right )\) be the rotated copy of RP. Define the discrepancy function D R  = D P,R on \(\mathit{SO}\left (d\right )\) by

$$\displaystyle{D_{R}\left (\tau \right ) =\mathrm{ card}\left (\mathbb{Z}^{d} \cap \tau \left (R\ P\right )\right ) - R^{d}\left \vert P\right \vert \;.}$$

The following result has been pointed out to us by Leonardo Colzani.

Lemma 29.

Let C be a convex body in \(\mathbb{R}^{d}\) such that \(\mathrm{Interior}\left (C\right ) \supseteq B\left (0,1\right )\) , the unit ball centered at the origin. Then for large R and small \(\varepsilon\) we have

$$\displaystyle{B\left (q,\varepsilon \right ) \subseteq \left (R+\varepsilon \right )C\setminus \mathrm{Interior}\left (R-\varepsilon \right )C}$$

for every \(q \in \partial \left (RC\right )\) .

Proof.

Since C is convex we have

$$\displaystyle{ \frac{R} {R+\varepsilon }\;C + \frac{\varepsilon } {R+\varepsilon }\;C \subseteq C}$$

so that

$$\displaystyle{ \left (R+\varepsilon \right )C \supseteq \mathit{RC} +\varepsilon C \supseteq \mathit{RC} + B\left (0,\varepsilon \right ) }$$
(3.42)

and therefore \(B\left (q,\varepsilon \right ) \subseteq \left (R+\varepsilon \right )C\) for every \(q \in \partial \left (\mathit{RC}\right )\). Applying (3.42) to \(\mathrm{Interior}\left (C\right )\) with R in place of \(R+\varepsilon\) we obtain

$$\displaystyle{\mathrm{Interior}\left (\mathit{RC}\right ) \supseteq \mathrm{ Interior}\left (R-\varepsilon \right )C + B\left (0,\varepsilon \right ).}$$

Assume there exists \(y \in B\left (q,\varepsilon \right ) \cap \mathrm{ Interior}\left (R-\varepsilon \right )C\). It follows that

$$\displaystyle{q \in \mathrm{ Interior}\left (R-\varepsilon \right )C + B\left (0,\varepsilon \right ) \subseteq \mathrm{ Interior}\left (\mathit{RC}\right )}$$

so that \(q\notin \partial \left (\mathit{RC}\right )\). □ 

Theorem 30.

Let d ≥ 2 and let P be a convex polyhedron in \(\mathbb{R}^{d}\) . Then there exists a positive constant c such that, for large R,

$$\displaystyle{\left \Vert D_{R}\right \Vert _{L^{1}\left (SO\left (d\right )\right )} \leq c\ \log ^{d}\left (R\right )\;.}$$

Proof.

Let \(B = \left \{t \in \mathbb{R}^{d}: \left \vert t\right \vert \leq 1\right \}\) and let \(\varphi = c\chi _{\frac{1} {2} B} {\ast}\chi _{\frac{1} {2} B}\) where we choose c so that \(\int \varphi \left (x\right )dx = 1\). For every small \(\varepsilon > 0\) let \(\varphi _{\varepsilon }\left (t\right ) =\varepsilon ^{-d}\varphi \left (t/\varepsilon \right )\), so that for every \(\varepsilon > 0\) we have \(\int _{\mathbb{R}^{d}}\varphi _{\varepsilon } = 1\) and \(\hat{\varphi }_{\varepsilon }\left (\xi \right ) =\hat{\varphi } \left (\varepsilon \xi \right )\). Let R ≥ 2 and let χ RP be the characteristic function of the dilated polyhedron RP. We start the proof by introducing the regularized functions

$$\displaystyle{\chi _{R,\varepsilon,\tau }^{\pm } =\chi _{\left (R\pm \varepsilon \right )\tau ^{-1}P} {\ast}\varphi _{\varepsilon }\;.}$$

By (3.4) and (3.5) we know that

$$\displaystyle{\left \vert \hat{\varphi }\left (\xi \right )\right \vert = c\left \vert \hat{\chi }_{\frac{1} {2} B}\left (\xi \right )\right \vert ^{2} \leq \frac{c^{{\prime}}} {1 + \left \vert \xi \right \vert ^{d+1}}\;.}$$

Then, writing in polar coordinates ξ = ρ σ,

$$\displaystyle\begin{array}{rcl} \left \vert \widehat{\chi _{R,\varepsilon,\tau }^{\pm }}\left (\xi \right )\right \vert & =& \left \vert \hat{\chi }_{\left (R\pm \varepsilon \right )\tau ^{-1}P}\left (\xi \right )\hat{\varphi }_{\varepsilon }\left (\xi \right )\right \vert \\ &\leq & c\ R^{d}\left \vert \hat{\chi }_{\tau ^{ -1}P}\left (\left (R\pm \varepsilon \right )\rho \sigma \right )\right \vert \frac{1} {1 + \left \vert \varepsilon \rho \right \vert ^{d+1}}\;.{}\end{array}$$
(3.43)

By Lemma 29, the support of \(\chi _{R,\varepsilon,\tau }^{-}\) is contained in R τ −1 P, while R τ −1 P is contained in the set where \(\chi _{R,\varepsilon,\tau }^{+}\) takes the value 1. Therefore, for all \(t \in \mathbb{R}^{d}\) we have

$$\displaystyle{ \chi _{R,\varepsilon,\tau }^{-}\left (t\right ) \leq \chi _{ R\tau ^{-1}P}\left (t\right ) \leq \chi _{R,\varepsilon,\tau }^{+}\left (t\right ). }$$

By the Poisson summation formula we have

$$\displaystyle\begin{array}{rcl} & & D_{R}\left (\tau \right ) = -R^{d}\left \vert P\right \vert +\sum _{ m\in \mathbb{Z}^{d}}\chi _{R\tau ^{-1}P}\left (m\right ) \leq -R^{d}\left \vert P\right \vert +\sum _{ m\in \mathbb{Z}^{d}}\chi _{R,\varepsilon,\tau }^{+}\left (m\right ) {}\\ & & = -R^{d}\left \vert P\right \vert +\sum _{ m\in \mathbb{Z}^{d}}\widehat{\chi _{R,\varepsilon,\tau }^{+}}\left (m\right ) = \left (\left (R+\varepsilon \right )^{d} - R^{d}\right )\left \vert P\right \vert +\sum _{ m\neq 0}\widehat{\chi _{R,\varepsilon,\tau }^{+}}\left (m\right ), {}\\ \end{array}$$

and similarly,

$$\displaystyle{D_{R}\left (\tau \right ) \geq \left (\left (R-\varepsilon \right )^{d} - R^{d}\right )\left \vert P\right \vert +\sum _{ m\neq 0}\widehat{\chi _{R,\varepsilon,\tau }^{-}}\left (m\right )\;.}$$

Thus,

$$\displaystyle{\left \vert D_{R}\left (\tau \right )\right \vert \leq c\ R^{d-1}\varepsilon + c\ \sum _{ m\neq 0}\left \vert \widehat{\chi _{R,\varepsilon,\tau }^{\pm }}\left (m\right )\right \vert \;.}$$

Hence, by Theorem 16 and (3.43),

$$\displaystyle\begin{array}{rcl} & & \int _{\mathit{SO}\left (d\right )}\left \vert \mathrm{card}\left (\mathbb{Z}^{d} \cap \tau \left (\mathit{RP}\right )\right ) - R^{d}\left \vert P\right \vert \right \vert \ d\tau {}\\ & & \leq c\ R^{d-1}\varepsilon + c\ R^{d}\sum _{ 0\neq m\in \mathbb{Z}^{d}}\left \vert \hat{\varphi }\left (\varepsilon m\right )\right \vert \int _{SO\left (d\right )}\left \vert \hat{\chi }_{P}\left (\left (R\pm \varepsilon \right )\tau \left (m\right )\right )\right \vert \ d\tau {}\\ & & \leq c\ R^{d-1}\varepsilon + c\ \sum _{ 0\neq m\in \mathbb{Z}^{d}} \frac{1} {1 + \left \vert \varepsilon m\right \vert ^{d+1}}\left \vert m\right \vert ^{-d}\log ^{d-1}\left (R\left \vert m\right \vert \right ). {}\\ \end{array}$$

Now choose \(\varepsilon = R^{1-d}\). Then a repeated integration by parts yields

$$\displaystyle\begin{array}{rcl} & & \int _{\mathit{SO}\left (d\right )}\left \vert \mathrm{card}\left (\mathbb{Z}^{d} \cap \tau \left (\mathit{RP}\right )\right ) - R^{d}\left \vert P\right \vert \right \vert \ d\tau {}\\ & & \leq c + c\ \log ^{d-1}\left (R\right )\int _{ 1}^{R^{d-1} } \frac{1} {r}\ \mathit{dr} + c\ R^{d^{2}-1 }\int _{R^{d-1}}^{+\infty }\frac{\log ^{d-1}\left (r\right )} {r^{d+2}} \ \mathit{dr} {}\\ & & \leq c + c\ \log ^{d}\left (R\right ) {}\\ & & +c\ R^{d^{2}-1 }\left (R^{1-d^{2} }\log ^{d-1}\left (R\right ) +\int _{ R^{d-1}}^{+\infty }\frac{\log ^{d-2}\left (r\right )} {r^{d+2}} \ \mathit{dr}\right ) {}\\ & & \leq \ldots \leq c_{d}\ \log ^{d}\left (R\right )\;. {}\\ \end{array}$$

 □ 
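Theorem 30 can be illustrated numerically: for a square in an axis-parallel position the discrepancy can be of size ≈ R, while its average over rotations is much smaller. The following Python sketch (illustrative only: the rotation average is replaced by a finite random sample and all parameters are arbitrary) counts the points of \(\mathbb{Z}^{2}\) in \(\tau \left (\mathit{RP}\right )\) for P the unit square.

```python
import numpy as np

rng = np.random.default_rng(0)

def count_in_rotated_square(R, theta):
    # number of lattice points in tau(R*P), P = [-1/2,1/2]^2, tau = rotation by theta:
    # k lies in tau(R*P) iff both coordinates of tau^{-1}(k) are in [-R/2, R/2]
    M = int(np.ceil(R / np.sqrt(2))) + 1
    ks = np.arange(-M, M + 1)
    K1, K2 = np.meshgrid(ks, ks, indexing="ij")
    c, s = np.cos(theta), np.sin(theta)
    x = c * K1 + s * K2            # first coordinate of tau^{-1}(k)
    y = -s * K1 + c * K2           # second coordinate of tau^{-1}(k)
    return np.sum((np.abs(x) <= R / 2) & (np.abs(y) <= R / 2))

for R in [50.3, 100.3, 200.3]:
    thetas = rng.uniform(0.0, 2.0 * np.pi, size=200)
    disc = np.array([count_in_rotated_square(R, th) - R**2 for th in thetas])
    mean_over_rotations = np.mean(np.abs(disc))                   # compare with log(R)**2
    axis_parallel = abs(count_in_rotated_square(R, 0.0) - R**2)   # of size about R
    print(R, round(mean_over_rotations, 1), round(axis_parallel, 1))
```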

Remark 31.

Note that the estimate in the above theorem coincides with the upper L 1 estimate in Theorem 25, where the discrepancy has been averaged also over translations. The case 1 < p < ∞ seems to be different, since either repeating the steps of the above proof for L p norms or interpolating between L 1 and L ∞ we get estimates larger than the one in (3.40).

The previous theorem shows that the discrepancy of a convex polyhedron with respect to \(\mathbb{Z}^{d}\) can be quite small after we have averaged over the rotations. Let us make some remarks on this point. Consider for simplicity a square in \(\mathbb{R}^{2}\) with sides parallel to the axes: the two close dilations (say by R and \(R+\varepsilon\)) of the square in the picture have almost the same area, but the numbers of integer points inside differ by ≈ R (Fig. 3.3).

Fig. 3.3 Integer points and squares with sides parallel to the axes

The same happens for every rational rotation of the square. On the other hand we know (see Theorem 3) that in certain directions the discrepancy can be as small as \(\sqrt{\log R}\). Then we may expect that the discrepancy of a convex body C with respect to \(\mathbb{Z}^{d}\) is reasonably small for almost every rotation of C. This is a very deep problem, since when C is the unit disc centered at the origin, then the rotation θ disappears and we have the classical Gauss’ circle problem (so far the best bound for this problem is due to M. Huxley and it is close to \(R^{0.629\cdots }\)). We are now ready to state the following result (see [8], see also [24, 28, 47]). Let

$$\displaystyle{ D_{R}\left (\theta \right ) =\mathrm{ card}\left (\mathbb{Z}^{2} \cap R\theta \left (C\right )\right ) - R^{2}\left \vert C\right \vert = -R^{2}\left \vert C\right \vert +\sum _{ m\in \mathbb{Z}^{2}}\chi _{R\theta \left (C\right )}\left (m\right )\;, }$$

where C is a convex planar body and \(\theta \in \mathit{SO}\left (2\right )\).

Theorem 32.

Let \(C \subset \mathbb{R}^{2}\) be a convex body, let δ > 1∕2 and R ≥ 2. Then for almost every \(\theta \in \mathit{SO}\left (2\right )\) there exists a constant c = c θ,δ such that

$$\displaystyle{ \left \vert D_{R}\left (\theta \right )\right \vert \leq c\ R^{2/3}\log ^{\delta }R\;. }$$
(3.44)

Proof.

We use Theorem 15 and a smoothing argument similar to the one we have used in Theorem 30. Let \(\psi =\pi ^{-1}\chi _{\left \{t\in \mathbb{R}^{2}:\left \vert t\right \vert \leq 1\right \}}\) be the normalized characteristic function of the unit disc. For every small \(\varepsilon > 0\) let \(\psi _{\varepsilon }\left (t\right ) =\varepsilon ^{-2}\psi \left (t/\varepsilon \right )\), so that for every \(\varepsilon > 0\) we have \(\int _{\mathbb{R}^{2}}\psi _{\varepsilon } = 1\) and \(\hat{\psi }_{\varepsilon }\left (\xi \right ) =\hat{\psi } \left (\varepsilon \xi \right )\). Let

$$\displaystyle{ D_{R}\left (\theta,\varepsilon \right ) = -R^{2}\left \vert C\right \vert +\sum _{ m\in \mathbb{Z}^{2}}\left (\chi _{R\theta \left (C\right )} {\ast}\psi _{\varepsilon }\right )\left (m\right )\;, }$$

and observe that, as in the proof of Theorem 30,

$$\displaystyle{ D_{R-\varepsilon }\left (\theta,\varepsilon \right ) + \left (-2R\varepsilon +\varepsilon ^{2}\right )\left \vert C\right \vert \leq D_{ R}\left (\theta \right ) \leq D_{R+\varepsilon }\left (\theta,\varepsilon \right ) + \left (2R\varepsilon +\varepsilon ^{2}\right )\left \vert C\right \vert \;. }$$
(3.45)

By the Poisson summation formula we obtain

$$\displaystyle{D_{R}\left (\theta,\varepsilon \right ) = R^{2}\sum _{ 0\neq m\in \mathbb{Z}^{2}}\hat{\chi }_{C}\left (R\theta ^{-1}\left (m\right )\right )\hat{\psi }\left (\varepsilon m\right )\;.}$$

Then, for every positive integer j, (3.5) gives

$$\displaystyle\begin{array}{rcl} & & \sup _{2^{j}\leq R\leq 2^{j+1}}R^{-2/3}\left \vert D_{ R}\left (\theta,\varepsilon \right )\right \vert \\ & &\leq c\ 2^{-j/6}\sum _{ 0\neq m\in \mathbb{Z}^{2}}\left \vert m\right \vert ^{-3/2} \frac{1} {1 + \left \vert \varepsilon m\right \vert ^{3/2}}\sup _{2^{j}\leq R\leq 2^{j+1}}\left (\left \vert \hat{\chi }_{C}\left (R\theta ^{-1}\left (m\right )\right )\right \vert \left \vert \mathit{Rm}\right \vert ^{3/2}\right )\;.{}\end{array}$$
(3.46)

By Theorem 15 the function

$$\displaystyle{\theta \longmapsto \sup _{2^{j}\leq R\leq 2^{j+1}}\left (\left \vert \hat{\chi }_{C}\left (R\theta ^{-1}\left (m\right )\right )\right \vert \left \vert \mathit{Rm}\right \vert ^{3/2}\right )}$$

belongs to \(L^{2,\infty }\left (\mathit{SO}\left (2\right )\right )\), uniformly with respect to j and m. Since L 2,∞ is a Banach space, the sum in (3.46) also belongs to \(L^{2,\infty }\left (\mathit{SO}\left (2\right )\right )\), with norm bounded, up to a constant, by

$$\displaystyle\begin{array}{rcl} & & 2^{-j/6}\sum _{ 0\neq m\in \mathbb{Z}^{2}}\left \vert m\right \vert ^{-3/2} \frac{1} {1 + \left \vert \varepsilon m\right \vert ^{3/2}} {}\\ & & = 2^{-j/6}\sum _{ 0<\left \vert m\right \vert \leq \varepsilon ^{-1}}\left \vert m\right \vert ^{-3/2} + 2^{-j/6}\varepsilon ^{-3/2}\sum _{ \left \vert m\right \vert >\varepsilon ^{-1}}\left \vert m\right \vert ^{-3} \leq c\ 2^{-j/6}\ \varepsilon ^{-1/2}\;. {}\\ \end{array}$$

Choosing \(\varepsilon = 2^{-j/3}\) and using (3.45) we obtain

$$\displaystyle{ \left \Vert \sup _{2^{j}\leq R\leq 2^{j+1}}R^{-2/3}\left \vert D_{ R}\left (\theta \right )\right \vert \right \Vert _{L^{2,\infty }\left (\mathit{SO}\left (2\right )\right )} \leq c\;. }$$
(3.47)

Then

$$\displaystyle\begin{array}{rcl} & & \sup _{R\geq 2}\left (\log ^{-\delta }\left (R\right )R^{-2/3}\left \vert D_{ R}\left (\theta \right )\right \vert \right )^{2} =\sup _{ R\geq 2}\left (\log ^{-2\delta }\left (R\right )R^{-4/3}\left \vert D_{ R}\left (\theta \right )\right \vert ^{2}\right ) {}\\ & & \leq \sum _{j=1}^{+\infty }j^{-2\delta }\sup _{ 2^{j}\leq R\leq 2^{j+1}}\left (R^{-4/3}\left \vert D_{ R}\left (\theta \right )\right \vert ^{2}\right ) {}\\ \end{array}$$

belongs to \(L^{1,\infty }\left (\mathit{SO}\left (2\right )\right )\) since by (3.47) the function

$$\displaystyle{\sup _{2^{j}\leq R\leq 2^{j+1}}\left (R^{-4/3}\left \vert D_{ R}\left (\theta \right )\right \vert ^{2}\right )}$$

is uniformly in L 1,∞ and can therefore be summed against the sequence j −2δ when δ > 1∕2 (see [58, Lemma 2.3]). Then the function

$$\displaystyle{\sup _{R\geq 2}\left (\log ^{-\delta }\left (R\right )R^{-2/3}\left \vert D_{ R}\left (\theta \right )\right \vert \right )}$$

belongs to \(L^{2,\infty }\left (\mathit{SO}\left (2\right )\right )\) and is therefore finite almost everywhere. This proves (3.44). □ 

Remark 33.

M. Skriganov [54] has shown that when C is a polygon we have \(\left \vert D_{R}\left (\theta \right )\right \vert \leq c_{\varepsilon }\log ^{1+\varepsilon }R\) for any \(\varepsilon > 0\) and almost every θ. Our technique can also be applied in the case of a polygon, but it only yields a power of the logarithm larger than the one in [54].

5 Lattice Points: Estimates from Below

In this section we prove that the previous upper bounds are essentially best possible. We will consider the balls (with the intriguing case \(d \equiv 1\ \left (\mathrm{mod}4\right )\) introduced in Theorem 24) and the simplices.

We need a technical result (see [44]) where, as usual, \(\left \Vert \beta \right \Vert\) denotes the minimal distance of a real number β from the integers.

Lemma 34.

For every \(\varepsilon > 0\) there exist R 0 ≥ 1 and 0 < α < 1∕2 such that for every R ≥ R 0 there exists \(m \in \mathbb{Z}^{d}\) such that

$$\displaystyle{ \left \vert m\right \vert \leq R^{\varepsilon }\;,\;\;\;\;\;\left \Vert R\left \vert m\right \vert \right \Vert \geq \alpha \;. }$$
(3.48)

Proof.

We introduce positive integers \(n = n\left (R,\varepsilon \right )\) and \(k_{0} = k_{0}\left (\varepsilon \right )\) which will be chosen later. For every integer \(k \in \left [0,k_{0}\right ]\) we consider the point

$$\displaystyle{ m_{k} = \left (n,k,0,\ldots,0\right ) \in \mathbb{Z}^{d} }$$

and write \(B\left (k\right ) = \sqrt{n^{2 } + k^{2}} = \left \vert m_{k}\right \vert \). We are going to show that for all \(\varepsilon > 0\) there exist R 0 ≥ 1, \(\alpha \in \left (0,1/2\right )\) and \(k_{0} \in \mathbb{N}\) such that for all R ≥ R 0 there exist \(n \leq R^{\varepsilon }/2\) and \(k \in \left [0,k_{0}\right ]\) such that \(\left \Vert R\left \vert m_{k}\right \vert \right \Vert \geq \alpha\). Assume the contrary, so that there exists \(\varepsilon > 0\) such that for every R 0 ≥ 1, \(\alpha \in \left (0,1/2\right )\) and \(k_{0} \in \mathbb{N}\) there exists R ≥ R 0 such that for all \(n \leq R^{\varepsilon }/2\) and \(k \in \left [0,k_{0}\right ]\) we have \(\left \Vert R\left \vert m_{k}\right \vert \right \Vert <\alpha\). Let

$$\displaystyle\begin{array}{rcl} & & B^{\left (1\right )}\left (k\right ) = B\left (k + 1\right ) - B\left (k\right )\;,\;\;\;\;\;k = 0,1,\ldots,k_{ 0} - 1 {}\\ & & B^{\left (2\right )}\left (k\right ) = B^{\left (1\right )}\left (k + 1\right ) - B^{\left (1\right )}\left (k\right )\;,\;\;\;\;\;k = 0,1,\ldots,k_{ 0} - 2 {}\\ & & \vdots {}\\ & & B^{\left (\ell\right )}\left (k\right ) = B^{\left (\ell-1\right )}\left (k + 1\right ) - B^{\left (\ell-1\right )}\left (k\right )\;,\;\;\;\;\;k = 0,1,\ldots,k_{ 0} -\ell {}\\ & &\vdots {}\\ & & B^{\left (k_{0}\right )}\left (0\right ) = B^{\left (k_{0}-1\right )}\left (1\right ) - B^{\left (k_{0}-1\right )}\left (0\right ) {}\\ \end{array}$$

Since \(\left \Vert R\ B\left (k\right )\right \Vert <\alpha\) we have \(\left \Vert R\ B^{\left (\ell\right )}\left (k\right )\right \Vert < 2^{\ell}\alpha\). Now replace the integer k with a real variable x = ny and let, for \(\left \vert y\right \vert < 1\),

$$\displaystyle{\tilde{B}\left (y\right ) = \sqrt{1 + y^{2}} =\sum _{ j=0}^{+\infty }\binom{1/2}{j}y^{2j}\;.}$$

Differentiating, we obtain

$$\displaystyle{\frac{d^{2j}\tilde{B}} {\mathit{dx}^{2j}} \left (\frac{x} {n}\right ) = \left (2j\right )!\binom{1/2}{j} + \mathcal{O}\left ( \frac{1} {n^{2}}\right )}$$

uniformly in \(x \in \left [0,k_{0}\right ]\). Since \(B\left (x\right ) = n\tilde{B}\left (x/n\right )\), we have

$$\displaystyle{\frac{d^{2j}B} {\mathit{dx}^{2j}} \left (x\right ) = n^{1-2j}\left (\left (2j\right )!\binom{1/2}{j} + \mathcal{O}\left ( \frac{1} {n^{2}}\right )\right )\;.}$$

Now observe that

$$\displaystyle{B^{\left (\ell\right )}\left (x\right ) =\int _{ x}^{x+1}\int _{ x_{1}}^{x_{1}+1}\cdots \int _{ x_{\ell-1}}^{x_{\ell-1}+1}\frac{d^{\ell}B} {\mathit{dx}_{\ell}^{\ell}} \ \mathit{dx}_{\ell}\ldots \mathit{dx}_{2}\mathit{dx}_{1}\;,}$$

so that

$$\displaystyle{B^{\left (2j\right )}\left (x\right ) = n^{1-2j}\left (\left (2j\right )!\binom{1/2}{j} + \mathcal{O}\left ( \frac{1} {n^{2}}\right )\right )}$$

uniformly in \(x \in \left [0,1/2\right ]\). Now let \(j^{{\ast}}\) be the smallest integer such that \(j^{{\ast}} \geq 1 +\varepsilon ^{-1}\) and choose

$$\displaystyle\begin{array}{rcl} k_{0}& =& 2j^{{\ast}} {}\\ n& =& \left [\left (2\left (2j^{{\ast}}\right )!R\left \vert \binom{1/2}{j^{{\ast}}}\right \vert \right )^{ \frac{1} {2j^{{\ast}}-1} }\right ]\;. {}\\ \end{array}$$

Then

$$\displaystyle\begin{array}{rcl} & & R\ B^{\left (2j^{{\ast}}\right ) }\left (x\right ) {}\\ & & = \left (2\left (2j^{{\ast}}\right )!\left \vert \binom{1/2}{j^{{\ast}}}\right \vert \right )^{-1}\left (2j^{{\ast}}\right )!\binom{1/2}{j^{{\ast}}} + \mathit{o}\left (1\right ) = \frac{1} {2}\mathrm{sign}\binom{1/2}{j^{{\ast}}} + \mathit{o}\left (1\right ) {}\\ \end{array}$$

as \(R \rightarrow +\infty \), so that

$$\displaystyle{\left \Vert R\ B^{\left (2j^{{\ast}}\right ) }\left (x\right )\right \Vert = \frac{1} {2} + \mathit{o}\left (1\right )\;.}$$

Observe that

$$\displaystyle{n = \left [\left (2\left (2j^{{\ast}}\right )!\left \vert \binom{1/2}{j^{{\ast}}}\right \vert R\right )^{ \frac{1} {2j^{{\ast}}-1} }\right ] \leq \left (2\left (2j^{{\ast}}\right )!\left \vert \binom{1/2}{j^{{\ast}}}\right \vert \right )^{ \frac{\varepsilon }{\varepsilon +2} }R_{0}^{-\frac{\varepsilon ^{2}+\varepsilon } {\varepsilon +2} }R^{\varepsilon }\;.}$$

Choosing α such that \(2^{2j^{{\ast}} }\alpha < 1/2\) and R 0 such that

$$\displaystyle{\left (2\left (2j^{{\ast}}\right )!\left \vert \binom{1/2}{j^{{\ast}}}\right \vert \right )^{ \frac{\varepsilon }{\varepsilon +2} }R_{0}^{-\frac{\varepsilon ^{2}+\varepsilon } {\varepsilon +2} } < \frac{1} {2}}$$

we obtain a contradiction. □ 
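
Although the proof proceeds by contradiction, the conclusion (3.48) is easy to probe numerically. The following sketch is an illustration only, not the argument used above: the exponent ε = 1/4 and the threshold α = 1/10 are arbitrary choices, the search is brute force, and only the first two coordinates of m = (n,k,0,…,0) matter since they determine \(\left\vert m\right\vert\).

```python
import math

def dist_to_int(x: float) -> float:
    """||x||: distance from the real number x to the nearest integer."""
    return abs(x - round(x))

def find_good_frequency(R: float, eps: float = 0.25, alpha: float = 0.1):
    """Search for m = (n, k, 0, ..., 0) with |m| <= R^eps and ||R |m||| >= alpha.
    Returns (n, k, ||R |m|||), or None if the brute-force search fails."""
    bound = R ** eps
    top = int(bound)
    for n in range(1, top + 1):
        for k in range(0, top + 1):
            norm = math.hypot(n, k)
            if norm <= bound and dist_to_int(R * norm) >= alpha:
                return n, k, dist_to_int(R * norm)
    return None

if __name__ == "__main__":
    for R in (50.0, 123.456, 1000.0, 98765.4321):
        print(R, find_good_frequency(R))
```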

Again, let \(B = \left \{t \in \mathbb{R}^{d}: \left \vert t\right \vert \leq 1\right \}\) be the unit ball and for every \(t \in \mathbb{T}^{d}\) consider the discrepancy

$$\displaystyle{D_{R}\left (t\right ) =\mathrm{ card}\left (\mathbb{Z}^{d} \cap \left (RB + t\right )\right ) - R^{d}\left \vert B\right \vert \;.}$$

We have the following result.

Theorem 35.

Let d > 1.

  1. (i)

    If \(d\not\equiv 1\ \left (\mathrm{mod}4\right )\) , then there exists a positive constant c such that for every R ≥ 1 we have

    $$\displaystyle{\left \Vert D_{R}\right \Vert _{L^{2}\left (\mathbb{T}^{d}\right )} \geq c\ R^{\left (d-1\right )/2}\;.}$$
  2. (ii)

    If \(d \equiv 1\ \left (\mathrm{mod}4\right )\) , then for every small \(\varepsilon > 0\) there exists a positive constant \(c_{\varepsilon }\) such that for every R ≥ 1 we have

    $$\displaystyle{\left \Vert D_{R}\right \Vert _{L^{2}\left (\mathbb{T}^{d}\right )} \geq c_{\varepsilon }\ R^{\left (d-1\right )/2-\varepsilon }\;.}$$

Proof.

We prove \(\left (i\right )\).

Arguing as in (3.38) we obtain

$$\displaystyle\begin{array}{rcl} & & \left \Vert D_{R}\right \Vert _{L^{2}\left (\mathbb{T}^{d}\right )}^{2} =\sum _{ 0\neq m\in \mathbb{Z}^{d}}\left \vert \hat{D}_{R}\left (m\right )\right \vert ^{2} = R^{d}\sum _{ 0\neq m\in \mathbb{Z}^{d}}\left \vert m\right \vert ^{-d}J_{ d/2}^{2}\left (2\pi R\left \vert m\right \vert \right ) {}\\ & & =\pi ^{-2}\ R^{d-1}\sum _{ 0\neq m\in \mathbb{Z}^{d}}\left \vert m\right \vert ^{-d-1}\cos ^{2}\left (2\pi R\left \vert m\right \vert -\pi \left (d + 1\right )/4\right ) + \mathcal{O}\left (R^{d-2}\right )\;. {}\\ \end{array}$$

Now let \(m^{{\prime}} = \left (1,0,\ldots,0\right )\) and assume

$$\displaystyle{\left \vert \cos \left (2\pi R\left \vert m^{{\prime}}\right \vert -\pi \left (d + 1\right )/4\right )\right \vert = \left \vert \cos \left (2\pi R -\pi \left (d + 1\right )/4\right )\right \vert > \frac{1} {100}\;.}$$

Then

$$\displaystyle\begin{array}{rcl} \left \Vert D_{R}\right \Vert _{L^{2}\left (\mathbb{T}^{d}\right )}^{2}& \geq & \pi ^{-2}\ R^{d-1}\left \vert m^{{\prime}}\right \vert ^{-d-1}\cos ^{2}\left (2\pi R\left \vert m^{{\prime}}\right \vert -\pi \left (d + 1\right )/4\right ) + \mathcal{O}\left (R^{d-2}\right ) {}\\ & \geq & \pi ^{-2}10^{-4}\ R^{d-1} + \mathcal{O}\left (R^{d-2}\right ) \geq cR^{d-1}\;. {}\\ \end{array}$$

Now assume

$$\displaystyle{\left \vert \cos \left (2\pi R\left \vert m^{{\prime}}\right \vert -\pi \left (d + 1\right )/4\right )\right \vert = \left \vert \sin \left (2\pi R -\pi \left (d - 1\right )/4\right )\right \vert \leq \frac{1} {100}\;.}$$

Then there exists an integer \(\ell \) such that \(2R =\ell +\left (d - 1\right )/4\pm \delta\), for a suitable \(\left \vert \delta \right \vert \leq 1/50\). Then \(4R = 2\ell + \left (d - 1\right )/2 \pm 2\delta\), and since \(\left (d - 1\right )/4\) is not an integer we have

$$\displaystyle\begin{array}{rcl} & & \left \vert \cos \left (2\pi R\left \vert 2m^{{\prime}}\right \vert -\pi \left (d + 1\right )/4\right )\right \vert {}\\ & & = \left \vert \sin \left (\pi \left \{2\ell + \left (d - 1\right )/2 \pm 2\delta -\left (d - 1\right )/4\right \}\right )\right \vert {}\\ & &\geq \frac{1} {2}\left \Vert \pm 2\delta + \left (d - 1\right )/4\right \Vert \geq \frac{1} {10}\;. {}\\ \end{array}$$

Then choosing \(\overline{m}\) equal to \(m^{{\prime}}\) or to \(2m^{{\prime}}\), we have

$$\displaystyle{\left \Vert D_{R}\right \Vert _{L^{2}\left (\mathbb{T}^{d}\right )}^{2} \geq \mathit{cR}\,^{d-1}\;.}$$

We now prove \(\left (\mathit{ii}\right )\). Let m be as in Lemma 34. Since \(d \equiv 1\ \left (\mathrm{mod}4\right )\) we have

$$\displaystyle\begin{array}{rcl} \left \Vert D_{R}\right \Vert _{L^{2}\left (\mathbb{T}^{d}\right )}^{2}& \geq & \pi ^{-2}\ R^{d-1}\left \vert m\right \vert ^{-d-1}\cos ^{2}\left (2\pi R\left \vert m\right \vert -\pi \left (d + 1\right )/4\right ) + \mathcal{O}\left (R^{d-2}\right ) {}\\ & =& \pi ^{-2}\ R^{d-1}\left \vert m\right \vert ^{-d-1}\sin ^{2}\left (2\pi R\left \vert m\right \vert \right ) + \mathcal{O}\left (R^{d-2}\right ) {}\\ & \geq & c_{\varepsilon }\ R^{d-1}R^{-\left (d+1\right )\varepsilon }\;. {}\\ \end{array}$$

 □ 
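
In the simplest case d = 2 (so that d ≢ 1 (mod 4)) the growth predicted by (i) can be observed numerically. The sketch below is only an illustration, not part of the proof: it estimates \(\left\Vert D_{R}\right\Vert_{L^{2}(\mathbb{T}^{2})}\) for the disc by averaging the squared discrepancy over random translations; the radii, the sample size and the seed are arbitrary, and the printed values should grow roughly like \(R^{(d-1)/2} = R^{1/2}\).

```python
import math
import random

def disc_lattice_count(R: float, cx: float, cy: float) -> int:
    """Number of points of Z^2 in the disc of radius R centred at (cx, cy)."""
    total = 0
    for x in range(math.ceil(cx - R), math.floor(cx + R) + 1):
        dy = math.sqrt(R * R - (x - cx) ** 2)
        total += math.floor(cy + dy) - math.ceil(cy - dy) + 1
    return total

def l2_discrepancy_estimate(R: float, samples: int = 200, seed: int = 0) -> float:
    """Monte Carlo estimate of ||D_R||_{L^2(T^2)} for the disc of radius R:
    average (card(Z^2 in RB + t) - pi R^2)^2 over random translations t."""
    rng = random.Random(seed)
    area = math.pi * R * R
    mean_sq = sum(
        (disc_lattice_count(R, rng.random(), rng.random()) - area) ** 2
        for _ in range(samples)
    ) / samples
    return math.sqrt(mean_sq)

if __name__ == "__main__":
    for R in (10, 20, 40, 80):
        print(f"R = {R:3d}   ||D_R||_2 ~ {l2_discrepancy_estimate(R):6.2f}"
              f"   R^(1/2) = {math.sqrt(R):5.2f}")
```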

For the simplices we have results complementary to the ones in Theorems 25 and 26. Let S be a simplex in \(\mathbb{R}^{d}\) and for every \(\tau \in \mathit{SO}\left (d\right )\) and \(t \in \mathbb{T}^{d}\) let

$$\displaystyle{D_{R}\left (t\right ) =\mathrm{ card}\left (\mathbb{Z}^{d} \cap \left (\tau \left (R\ S\right ) + t\right )\right ) - R^{d}\left \vert S\right \vert \;.}$$

Theorem 36.

For every simplex S in \(\mathbb{R}^{d}\) (d ≥ 2) and R ≥ 2 we have

  1. i)

    \(\left \Vert D_{R}\right \Vert _{L^{1,\infty }\left (\mathit{SO}\left (d\right )\times \mathbb{T}^{d}\right )} \geq c\ \log ^{d-2}\left (R\right )\)

  2. ii)

    \(\left \Vert D_{R}\right \Vert _{L^{1}\left (\mathit{SO}\left (d\right )\times \mathbb{T}^{d}\right )} \geq c\ \log ^{d-1}\left (R\right )\)

  3. iii)

    \(\left \Vert D_{R}\right \Vert _{L^{p}\left (\mathit{SO}\left (d\right )\times \mathbb{T}^{d}\right )} \geq c\ R^{\left (d-1\right )\left (1-1/p\right )}\;,\;\;\;\;\;\) if 1 < p ≤ +∞ .

Proof.

For every \(m^{{\prime}}\neq 0\) we have

$$\displaystyle\begin{array}{rcl} & & \left \Vert D_{R}\right \Vert _{L^{p}\left (\mathit{SO}\left (d\right )\times \mathbb{T}^{d}\right )} \geq R^{d}\left \{\int _{ \mathit{SO}\left (d\right )}\int _{\mathbb{T}^{d}}\left \vert \sum _{m\neq 0}\hat{\chi }_{S}\left (R\tau \left (m\right )\right )\ e^{2\pi \mathit{im}\cdot t}\right \vert ^{p}\ \mathit{dt}\,d\tau \right \}^{1/p} {}\\ & & \geq R^{d}\left \{\int _{ \mathbb{T}^{d}}\int _{\mathit{SO}\left (d\right )}\left \vert \hat{\chi }_{S}\left (R\tau \left (m^{{\prime}}\right )\right )\right \vert ^{p}\ d\tau \,\mathit{dt}\right \}^{1/p}\;. {}\\ \end{array}$$

Then \(\left (\mathit{ii}\right )\) and \(\left (\mathit{iii}\right )\) follow from Theorem 19. The proof of \(\left (i\right )\) is a consequence of Lemma 18 as in the proof of the corresponding part of Theorem 19. □ 

6 Irregularities of Distribution: Estimates from Below

It is time to go back to the Introduction, where we have referred to the fundamental results of K. Roth (Theorem 2) and W. Schmidt (Theorem 1). In this section we present two different approaches to Theorem 4, due to J. Beck and H. Montgomery respectively. For convenience we shall apply Beck’s argument to prove Theorem 4 and Montgomery’s argument to prove a stronger version of the theorem which holds true in the particular case when B is a simplex.

We now repeat the main part of the statement of Theorem 4.

Theorem 37.

Let d ≥ 2 and let \(B \subset \mathbb{R}^{d}\) be a body of diameter smaller than 1 which satisfies (3.15) . Then for every distribution \(\mathcal{P}\) of N points in \(\mathbb{T}^{d}\) we have

$$\displaystyle{ \int _{0}^{1}\int _{ \mathit{SO}\left (d\right )}\int _{\mathbb{T}^{d}}\left \vert \mathrm{card}\left (\mathcal{P}\cap \left (\lambda \tau \left (B + t\right )\right )\right ) -\lambda ^{d}N\left \vert B\right \vert \right \vert ^{2}\ \mathit{dt}\,d\tau \,d\lambda \geq c_{ d}N^{\left (d-1\right )/d}\;. }$$
(3.49)

Proof.

For every 0 < λ ≤ 1, \(\tau \in \mathit{SO}\left (d\right )\), \(t \in \mathbb{T}^{d}\) the natural projection of \(\mathbb{R}^{d}\) onto \(\mathbb{T}^{d}\) is injective on \(\lambda \tau \left (B\right ) - t\), since this set has diameter smaller than 1. Given a finite distribution \(\mathcal{P} = \left \{t\left (j\right )\right \}_{j=1}^{N}\) of N points in \(\mathbb{T}^{d}\) we consider the discrepancies

$$\displaystyle\begin{array}{rcl} D_{N}^{B,\mathcal{P}}& =& D_{ N} = -N\left \vert B\right \vert +\sum _{ j=1}^{N}\chi _{ B}\left (t\left (j\right )\right ) {}\\ D_{N}^{B,\mathcal{P}}\left (\lambda,\tau,t\right )& =& D_{ N}\left (\lambda,\tau,t\right ) = -N\lambda ^{d}\left \vert B\right \vert +\sum _{ j=1}^{N}\chi _{ \lambda \tau ^{-1}\left (B\right )-t}\left (t\left (j\right )\right )\;. {}\\ \end{array}$$

First we show that the function \(t\mapsto D_{N}\left (\lambda,\tau,t\right )\) has Fourier series

$$\displaystyle{ \sum _{m\neq 0}\left (\sum _{j=1}^{N}e^{2\pi \mathit{im}\cdot t\left (j\right )}\right )\lambda ^{d}\hat{\chi }_{ B}\left (\lambda \tau \left (m\right )\right )\ e^{2\pi \mathit{im}\cdot t}\;. }$$
(3.50)

Indeed

$$\displaystyle\begin{array}{rcl} & & \int _{\mathbb{T}^{d}}\left (-N\lambda ^{d}\left \vert B\right \vert +\sum _{ j=1}^{N}\chi _{ \lambda \tau ^{-1}\left (B\right )-t}\left (t\left (j\right )\right )\right )\ \mathit{dt} {}\\ & & = -N\lambda ^{d}\left \vert B\right \vert +\sum _{ j=1}^{N}\int _{ \mathbb{T}^{d}}\chi _{\lambda \tau ^{-1}\left (B\right )}\left (t\left (j\right ) + t\right )\ \mathit{dt} {}\\ & & = -N\lambda ^{d}\left \vert B\right \vert + N\int _{ \mathbb{R}^{d}}\chi _{\lambda \tau ^{-1}\left (B\right )}\left (u\right )\ \mathit{du} = 0\;, {}\\ \end{array}$$

while for m ≠ 0

$$\displaystyle\begin{array}{rcl} & & \int _{\mathbb{T}^{d}}\left (-N\lambda ^{d}\left \vert B\right \vert +\sum _{ j=1}^{N}\chi _{ \lambda \tau ^{-1}\left (B\right )-t}\left (t\left (j\right )\right )\right )e^{-2\pi \mathit{im}\cdot t}\ \mathit{dt} {}\\ & & =\sum _{ j=1}^{N}\int _{ \mathbb{T}^{d}}\chi _{\lambda \tau ^{-1}\left (B\right )}\left (t\left (j\right ) + t\right )e^{-2\pi \mathit{im}\cdot t}\ \mathit{dt} {}\\ & & =\sum _{ j=1}^{N}\int _{ \mathbb{T}^{d}}\chi _{\lambda \tau ^{-1}\left (B\right )}\left (u\right )e^{-2\pi \mathit{im}\cdot \left (u-t\left (j\right )\right )}\ \mathit{du} =\sum _{ j=1}^{N}e^{2\pi \mathit{im}\cdot t\left (j\right )}\lambda ^{d}\ \hat{\chi }_{ B}\left (\lambda \tau \left (m\right )\right )\;. {}\\ \end{array}$$

Let 0 < q < 1 and 0 < r < 1. By (3.50) and Theorem 8 we have

$$\displaystyle\begin{array}{rcl} & & \dfrac{1} {r}\int _{\mathit{qr}}^{r}\int _{ \mathit{SO}(d)}\int _{\mathbb{T}^{d}}\left \vert D_{N}(\lambda,\tau,t)\right \vert ^{2}\ \mathit{dt}\,d\tau \,d\lambda \\ & & = \sum \limits _{m\neq 0}\left \vert \sum _{j=1}^{N}e^{2\pi \mathit{im}\cdot t\left (j\right )}\right \vert ^{2}\dfrac{1} {r}\int _{\mathit{qr}}^{r}\int _{ \mathit{SO}(d)}\left \vert \lambda ^{d}\hat{\chi }_{ B}(\lambda \tau (m))\right \vert ^{2}\ d\tau \,d\lambda \\ & & = c\sum \limits _{m\neq 0}\left \vert \sum _{j=1}^{N}e^{2\pi \mathit{im}\cdot t\left (j\right )}\right \vert ^{2}\dfrac{1} {r}\int _{\{\mathit{qr}\leq \left \vert \xi \right \vert \leq r\}}\left \vert \hat{\chi }_{B}\left (\left \vert m\right \vert \xi \right )\right \vert ^{2}\,\left \vert \xi \right \vert ^{d+1}\ d\xi \\ & & \approx \sum \limits _{m\neq 0}\left \vert \sum _{j=1}^{N}e^{2\pi \mathit{im}\cdot t\left (j\right )}\right \vert ^{2}r^{d}\int _{ \{\mathit{qr}\leq \left \vert \xi \right \vert \leq r\}}\left \vert \hat{\chi }_{B}\left (\left \vert m\right \vert \xi \right )\right \vert ^{2}\ d\xi \\ & & \approx \sum \limits _{m\neq 0}\left \vert \sum _{j=1}^{N}e^{2\pi \mathit{im}\cdot t\left (j\right )}\right \vert ^{2}r^{d}\left \vert m\right \vert ^{-d}\int _{ \{\mathit{qr}\left \vert m\right \vert \leq \left \vert \xi \right \vert \leq r\left \vert m\right \vert \}}\left \vert \hat{\chi }_{B}(\eta )\right \vert ^{2}\ d\eta \\ & & \approx \sum \limits _{m\neq 0}\left \vert \sum _{j=1}^{N}e^{2\pi \mathit{im}\cdot t\left (j\right )}\right \vert ^{2}r^{d}\left \vert m\right \vert ^{-d}\left (1 + r\left \vert m\right \vert \right )^{-1}\;. {}\end{array}$$
(3.51)

(again, A ≈ B means that there exist two positive constants c 1 and c 2 which do not depend on N and r and satisfy c 1 A ≤ B ≤ c 2 A). Now we apply (3.51) first with r = 1 and then with r = kN −1∕d (we shall choose the constant k later on). We obtain

$$\displaystyle\begin{array}{rcl} & & \int _{q}^{1}\int _{ \mathit{SO}(d)}\int _{\mathbb{T}^{d}}\left \vert D_{N}(\lambda,\tau,t)\right \vert ^{2}\ \mathit{dt}\,d\tau \,d\lambda \\ & & \approx \sum \limits _{m\neq 0}\left \vert \sum _{j=1}^{N}e^{2\pi \mathit{im}\cdot t\left (j\right )}\right \vert ^{2}\left \vert m\right \vert ^{-d-1} \\ & & \geq c\ \left \{\inf \limits _{m\neq 0}\frac{1 + \mathit{kN}^{-1/d}\left \vert m\right \vert } {k^{d}N^{-1}\left \vert m\right \vert } \right \}\left \{\sum \limits _{m\neq 0}\left \vert \sum _{j=1}^{N}e^{2\pi \mathit{im}\cdot t\left (j\right )}\,\right \vert ^{2}\left ( \frac{k^{d}N^{-1}\left \vert m\right \vert ^{-d}} {1 + \mathit{kN}^{-1/d}\left \vert m\right \vert }\right )\right \} \\ & & \approx \left \{N^{1-1/d}k^{1-d}\right \}\left \{k^{-1}N^{1/d}\int _{ \mathit{qk}N^{-1/d}}^{\mathit{kN}^{-1/d} }\int _{\mathit{SO}(d)}\int _{\mathbb{T}^{d}}\left \vert D_{N}(\lambda,\tau,t)\right \vert ^{2}\ \mathit{dt}\,d\tau \,d\lambda \right \}\;.{}\end{array}$$
(3.52)

Since

$$\displaystyle{\mathit{qk}N^{-1/d} \leq \lambda \leq \mathit{kN}^{-1/d}}$$

there exists a small constant δ > 0 such that, for suitable choices of the constants q and k we have

$$\displaystyle{\delta \leq q^{d}k^{d}\left \vert B\right \vert \leq N\lambda ^{d}\left \vert B\right \vert \leq k^{d}\left \vert B\right \vert \leq 1 -\delta \;.}$$

Since

$$\displaystyle{\sum _{j=1}^{N}\chi _{ \lambda \tau ^{-1}\left (B\right )-t}\left (t\left (j\right )\right )}$$

is an integer, we deduce that

$$\displaystyle{\left \vert D_{N}\left (\lambda,\tau,t\right )\right \vert = \left \vert -N\lambda ^{d}\left \vert B\right \vert +\sum _{ j=1}^{N}\chi _{ \lambda \tau ^{-1}\left (B\right )-t}\left (t\left (j\right )\right )\right \vert \geq \delta }$$

for every τ, t and \(\lambda \in \left [\mathit{qkN}^{-1/d},\mathit{kN}^{-1/d}\right ]\). Then (3.52) gives

$$\displaystyle{\int _{q}^{1}\int _{ \mathit{SO}(d)}\int _{\mathbb{T}^{d}}\left \vert D_{N}(\lambda,\tau,t)\right \vert ^{2}\ \mathit{dt}\,d\tau \,d\lambda \geq \mathit{cN}^{1-1/d}\;.}$$

 □ 
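
For a disc in \(\mathbb{T}^{2}\) rotations act trivially, so a quantity of the same kind as the left-hand side of (3.49) can be estimated by averaging over translations and dilations only. The sketch below is an illustration under arbitrary choices (base radius, number of random samples, i.i.d. uniform points); for such random points the averaged discrepancy is expected to be of order \(\sqrt{N}\), comfortably above the lower bound \(c\,N^{(d-1)/2d} = c\,N^{1/4}\) proved above.

```python
import math
import random

def torus_dist2(p, q):
    """Squared distance on T^2, taking the nearest image in each coordinate."""
    return sum(min(abs(a - b), 1.0 - abs(a - b)) ** 2 for a, b in zip(p, q))

def averaged_l2_discrepancy(points, base_radius=0.2, samples=400, seed=1):
    """Monte Carlo estimate of the L^2 average, over dilations and translations,
    of card(P in (lambda*B + t)) - N|lambda*B| for a disc B of radius base_radius."""
    rng = random.Random(seed)
    N = len(points)
    acc = 0.0
    for _ in range(samples):
        lam = rng.random()                      # dilation parameter in [0, 1)
        r = lam * base_radius
        centre = (rng.random(), rng.random())
        inside = sum(1 for p in points if torus_dist2(p, centre) <= r * r)
        acc += (inside - N * math.pi * r * r) ** 2
    return math.sqrt(acc / samples)

if __name__ == "__main__":
    rng = random.Random(0)
    for N in (100, 400, 1600):
        pts = [(rng.random(), rng.random()) for _ in range(N)]
        print(f"N = {N:4d}   averaged L2 discrepancy ~ "
              f"{averaged_l2_discrepancy(pts):6.2f}   sqrt(N) = {math.sqrt(N):5.1f}")
```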

Because of Theorem 24, the dilation in λ in (3.49) cannot be dropped. In the remainder of this section we shall see how to avoid the dilation in particular cases. The starting point is a lemma due to J. Cassels (see [17, 42]).

Lemma 38.

For every positive integer N let

$$\displaystyle{Q_{N} = \left \{x = \left (x_{1},x_{2},\ldots,x_{d}\right ) \in \mathbb{R}^{d}: \left \vert x_{ j}\right \vert \leq \root{d}\of{2N}\quad \text{for every }j = 1,2,\ldots,d\right \}\;.}$$

Then for every finite set \(\left \{t\left (j\right )\right \}_{j=1}^{N} \subset \mathbb{T}^{d}\)

$$\displaystyle{ \sum _{0\neq m\in Q_{N}\cap \mathbb{Z}^{d}}\left \vert \sum _{j=1}^{N}e^{2\pi \mathit{im}\cdot t\left (j\right )}\right \vert ^{2} \geq N^{2}\;. }$$
(3.53)

Proof.

Let \(m = \left (m_{1},m_{2},\ldots,m_{d}\right )\) be an element of \(\mathbb{Z}^{d}\). Adding \(N^{2}\) (the term corresponding to m = 0) to both sides of (3.53), we have to prove that

$$\displaystyle{\sum _{\left \vert m_{1}\right \vert \leq \root{d}\of{2N}}\ldots \sum _{\left \vert m_{d}\right \vert \leq \root{d}\of{2N}}\left \vert \sum _{j=1}^{N}e^{2\pi \mathit{im}\cdot t\left (j\right )}\right \vert ^{2} \geq 2N^{2}\;,}$$

and this is a consequence of the following inequality:

$$\displaystyle{ \sum _{\left \vert m_{1}\right \vert \leq \left [\root{d}\of{2N}\right ]}\ldots \sum _{\left \vert m_{d}\right \vert \leq \left [\root{d}\of{2N}\right ]}\left \vert \sum _{j=1}^{N}e^{2\pi \mathit{im}\cdot t\left (j\right )}\right \vert ^{2} \geq N\left (\left [\root{d}\of{2N}\right ] + 1\right )^{d}\;, }$$
(3.54)

where \(\left [\gamma \right ]\) denotes the integer part of the real number γ. The LHS of (3.54) is bounded from below by

$$\displaystyle\begin{array}{rcl} & & \sum _{\left \vert m_{1}\right \vert \leq \left [\root{d}\of{2N}\right ]}\left (1 - \frac{\left \vert m_{1}\right \vert } {\left [\root{d}\of{2N}\right ] + 1}\right )\cdots \\ & & \times \sum _{\left \vert m_{d}\right \vert \leq \left [\root{d}\of{2N}\right ]}\left (1 - \frac{\left \vert m_{d}\right \vert } {\left [\root{d}\of{2N}\right ] + 1}\right )\left \vert \sum _{j=1}^{N}e^{2\pi \mathit{im}\cdot t\left (j\right )}\right \vert ^{2} \\ & & =\sum _{\left \vert m_{1}\right \vert \leq \left [\root{d}\of{2N}\right ]}\left (1 - \frac{\left \vert m_{1}\right \vert } {\left [\root{d}\of{2N}\right ] + 1}\right )\cdots \sum _{\left \vert m_{d}\right \vert \leq \left [\root{d}\of{2N}\right ]}\left (1 - \frac{\left \vert m_{d}\right \vert } {\left [\root{d}\of{2N}\right ] + 1}\right ) \\ & & \times \sum _{j=1}^{N}\sum _{ \ell=1}^{N}e^{2\pi \mathit{im}\cdot \left (t\left (j\right )-t\left (\ell\right )\right )} \\ & & =\sum _{ j=1}^{N}\sum _{ \ell=1}^{N}\sum _{ \left \vert m_{1}\right \vert \leq \left [\root{d}\of{2N}\right ]}\left (1 - \frac{\left \vert m_{1}\right \vert } {\left [\root{d}\of{2N}\right ] + 1}\right )e^{2\pi \mathit{im}_{1}\left (t_{1}\left (j\right )-t_{1}\left (\ell\right )\right )}\ldots \\ & & \times \sum _{\left \vert m_{d}\right \vert \leq \left [\root{d}\of{2N}\right ]}\left (1 - \frac{\left \vert m_{d}\right \vert } {\left [\root{d}\of{2N}\right ] + 1}\right )e^{2\pi \mathit{im}_{d}\left (t_{d}\left (j\right )-t_{d}\left (\ell\right )\right )} \\ & & =\sum _{ j=1}^{N}\sum _{ \ell=1}^{N}K_{\left [\root{d}\of{2N}\right ]}\left (t_{1}\left (j\right ) - t_{1}\left (\ell\right )\right )\cdots K_{\left [\root{d}\of{2N}\right ]}\left (t_{d}\left (j\right ) - t_{d}\left (\ell\right )\right )\;,{}\end{array}$$
(3.55)

where

$$\displaystyle{K_{M}(t) =\sum _{ j=-M}^{M}\left (1 - \frac{\left \vert j\right \vert } {M + 1}\right )e^{2\pi \mathit{ijt}} = \frac{1} {M + 1}\left (\frac{\sin \left (\pi \left (M + 1\right )t\right )} {\sin \left (\pi t\right )} \right )^{2}}$$

is the Fejér kernel (\(M \in \mathbb{N}\), \(t \in \mathbb{T}\)). Since K M (t) ≥ 0 for every t, we may bound the sum in (3.55) from below by the “diagonal” terms

$$\displaystyle\begin{array}{rcl} & & \sum _{j=1}^{N}K_{\left [\root{d}\of{2N}\right ]}\left (t_{1}\left (j\right ) - t_{1}\left (j\right )\right )\cdots K_{\left [\root{d}\of{2N}\right ]}\left (t_{d}\left (j\right ) - t_{d}\left (j\right )\right ) = N\left (K_{\left [\root{d}\of{2N}\right ]}\left (0\right )\right )^{d} {}\\ & & = N\left (\left [\root{d}\of{2N}\right ] + 1\right )^{d}\;. {}\\ \end{array}$$

 □ 
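
Inequality (3.53) can also be checked directly on examples. The sketch below is an illustration for d = 2 with i.i.d. uniform points and arbitrary sample sizes: it evaluates the left-hand side by brute force over the nonzero frequencies m with \(\left\vert m_{i}\right\vert \leq \left[\root{d}\of{2N}\right]\) and compares it with N 2.

```python
import cmath
import itertools
import math
import random

def cassels_lhs(points):
    """LHS of (3.53): sum over nonzero m in Z^d with |m_i| <= (2N)^(1/d)
    of |sum_j exp(2 pi i m.t(j))|^2."""
    N, d = len(points), len(points[0])
    K = int((2 * N) ** (1.0 / d))           # integer part of (2N)^{1/d}
    total = 0.0
    for m in itertools.product(range(-K, K + 1), repeat=d):
        if any(m):
            s = sum(cmath.exp(2j * math.pi * sum(mi * ti for mi, ti in zip(m, p)))
                    for p in points)
            total += abs(s) ** 2
    return total

if __name__ == "__main__":
    random.seed(1)
    for N in (20, 50, 100):
        pts = [(random.random(), random.random()) for _ in range(N)]
        print(f"N = {N:3d}   LHS of (3.53) ~ {cassels_lhs(pts):10.1f}   N^2 = {N * N}")
```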

Theorem 39.

Let S be a simplex in \(\mathbb{R}^{d}\) , d ≥ 2, the sides of which have length smaller than 1. Then there exists a constant c d > 0 such that for every finite set \(\left \{t\left (j\right )\right \}_{j=1}^{N} \subset \mathbb{T}^{d}\) we have

$$\displaystyle{\int _{\mathit{SO}\left (d\right )}\int _{\mathbb{T}^{d}}\left \vert -N\left \vert S\right \vert +\sum _{ j=1}^{N}\chi _{ \tau ^{-1}\left (S\right )-t}\left (t\left (j\right )\right )\right \vert ^{2}\ \mathit{dt}\,d\tau \geq c_{ d}\ N^{\left (d-1\right )/d}\;.}$$

Proof.

As a consequence of the Parseval identity, Theorem 19 and Lemma 38 we obtain

$$\displaystyle\begin{array}{rcl} & & \int _{\mathit{SO}\left (d\right )}\int _{\mathbb{T}^{d}}\left \vert -N\left \vert S\right \vert +\sum _{ j=1}^{N}\chi _{ \tau ^{-1}\left (S\right )-t}\left (t\left (j\right )\right )\right \vert ^{2}\ \mathit{dt}\,d\tau {}\\ & & =\sum _{m\neq 0}\left \vert \sum _{j=1}^{N}e^{2\pi \mathit{im}\cdot t\left (j\right )}\right \vert ^{2}\int _{ \mathit{SO}\left (d\right )}\left \vert \hat{\chi }_{S}\left (\tau \left (m\right )\right )\right \vert ^{2}\ d\tau {}\\ & & \geq \sum _{0\neq m\in Q_{N}}\left \vert \sum _{j=1}^{N}e^{2\pi \mathit{im}\cdot t\left (j\right )}\right \vert ^{2}\int _{ \mathit{SO}\left (d\right )}\left \vert \hat{\chi }_{S}\left (\tau \left (m\right )\right )\right \vert ^{2}\ d\tau {}\\ & & \geq c\sum _{0\neq m\in Q_{N}}\left \vert \sum _{j=1}^{N}e^{2\pi \mathit{im}\cdot t\left (j\right )}\right \vert ^{2}\left \vert m\right \vert ^{-d-1} {}\\ & & \geq c\inf _{m\in Q_{N}}\left (\left \vert m\right \vert ^{-d-1}\right )\sum _{ 0\neq m\in Q_{N}}\left \vert \sum _{j=1}^{N}e^{2\pi \mathit{im}\cdot t\left (j\right )}\right \vert ^{2} {}\\ & & \geq c\ N^{-1-1/d}N^{2} = c\ N^{1-1/d}\;. {}\\ \end{array}$$

 □ 

Remark 40.

Using Theorem 8 the above argument gives a new proof of Theorem 37 (see [42]).

Corollary 41.

Let S be a simplex in \(\mathbb{R}^{d}\) the sides of which have length smaller than 1. Then, for every finite set \(\left \{t\left (j\right )\right \}_{j=1}^{N} \subset \mathbb{T}^{d}\) there exists a (translated and rotated) copy \(S^{{\prime}}\) of S such that

$$\displaystyle{\left \vert -N\left \vert S\right \vert +\mathrm{ card}\left (S^{{\prime}}\cap \left \{t\left (j\right )\right \}_{ j=1}^{N}\right )\right \vert \geq c_{ d}\ N^{\left (d-1\right )/2d}\;.}$$

7 Irregularities of Distribution: Estimates from Above

We are going to check the quality of the estimates from below obtained in the previous section. We will see that for any given body B and for every positive integer N one can find a finite set \(\left \{t\left (j\right )\right \}_{j=1}^{N} \subset \mathbb{T}^{d}\) such that a suitable L 2 mean of the discrepancy is smaller than \(cN^{\left (d-1\right )/2d}\). Observe that we cannot choose the points at random. Indeed, such a Monte Carlo choice of the N points gives a \(\sqrt{N}\) discrepancy (see e.g. (3.61) below), and this is not enough to match the \(N^{\left (d-1\right )/2d}\) lower estimates in Theorems 37 and 39 (although for large dimension d the exponent \(\left (d - 1\right )/2d\) approaches 1∕2). We shall obtain the \(N^{\left (d-1\right )/2d}\) estimate first by an argument related to lattice point problems, and then by a probabilistic argument.

For an overview of upper estimates related to irregularities of distribution, see [20].

7.1 Applying Lattice Points Results

A very natural way to choose N points in a cube is to put them on a grid. Suppose for the time being that we have \(N = M^{d}\) points (Fig. 3.4). Then let

$$\displaystyle{ \mathcal{P} = \mathcal{P}_{M} = \frac{1} {M}\mathbb{Z}^{d} \cap \left [-\frac{1} {2}, \frac{1} {2}\right )^{d} = \left \{t\left (j\right )\right \}_{ j=1}^{N} }$$
(3.56)

(the ordering of the \(t\left (j\right )\)’s is irrelevant).

Fig. 3.4 Points on a grid

This choice of \(\mathcal{P}\) immediately relates our point distribution problem to certain lattice point problems similar to the ones that we have considered in the previous sections. Indeed, if B is a body in \(\left [-\frac{1} {2}, \frac{1} {2}\right )^{d}\) and \(\mathcal{P}\) is as in (3.56) we have

$$\displaystyle{\mathrm{card}\left (B \cap \mathcal{P}_{M}\right ) =\mathrm{ card}\left (\mathit{MB} \cap \mathbb{Z}^{d}\right )\;.}$$

Before going on, observe that here M is an integer, while in the lattice point problems considered so far the dilation parameter R is real. In other words, the choice of the piece of lattice in (3.56) implicitly contains some (but not all) of the dilations.
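
The identity \(\mathrm{card}\left (B \cap \mathcal{P}_{M}\right ) =\mathrm{ card}\left (\mathit{MB} \cap \mathbb{Z}^{d}\right )\) is just a change of scale, and it can be checked numerically. The sketch below does so for a disc B contained in \(\left [-\frac{1}{2}, \frac{1}{2}\right )^{2}\); the values of M, the centre and the radius are arbitrary illustrative choices.

```python
import math

def grid_points_in_disc(M: int, centre, radius) -> int:
    """card(B ∩ P_M): points of (1/M)Z^2 ∩ [-1/2, 1/2)^2 lying in the disc B."""
    cx, cy = centre
    count = 0
    for i in range(-(M // 2), (M + 1) // 2):          # i/M runs over [-1/2, 1/2)
        for j in range(-(M // 2), (M + 1) // 2):
            if (i / M - cx) ** 2 + (j / M - cy) ** 2 <= radius ** 2:
                count += 1
    return count

def lattice_points_in_dilated_disc(M: int, centre, radius) -> int:
    """card(MB ∩ Z^2): integer points in the disc of radius M*radius centred at M*centre."""
    cx, cy, R = M * centre[0], M * centre[1], M * radius
    count = 0
    for x in range(math.ceil(cx - R), math.floor(cx + R) + 1):
        dy = math.sqrt(R * R - (x - cx) ** 2)
        count += math.floor(cy + dy) - math.ceil(cy - dy) + 1
    return count

if __name__ == "__main__":
    M, centre, radius = 37, (0.13, -0.08), 0.31       # disc well inside [-1/2, 1/2)^2
    print(grid_points_in_disc(M, centre, radius),
          lattice_points_in_dilated_disc(M, centre, radius))
```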

Now we show that the lower estimate in Theorem 37 cannot be improved. This result was originally proved by J. Beck and W. Chen [2]. We give two proofs: the first one is based on Theorem 21, while the second one is probabilistic in nature (see [2, 14, 22], see also [36, 41]). Since the second proof works under assumptions more general than convexity, we state two different theorems (the first one is contained in the second one).

Theorem 42.

Let \(B \subset \mathbb{R}^{d}\) be a convex body of diameter smaller than 1. Then for every positive integer N there exists a finite set \(\left \{t\left (j\right )\right \}_{j=1}^{N} \subset \mathbb{T}^{d}\) such that

$$\displaystyle{\int _{\mathit{SO}\left (d\right )}\int _{\mathbb{T}^{d}}\left \vert -N\left \vert B\right \vert +\sum _{ j=1}^{N}\chi _{ \tau ^{-1}\left (B\right )-t}\left (t\left (j\right )\right )\right \vert ^{2}\ \mathit{dt}\,d\tau \leq c_{ d}\ N^{\left (d-1\right )/d}\;.}$$

Here c d depends only on the dimension d.

Proof.

We apply Theorem 21. Assume first that \(N = M^{d}\) for a positive integer M. For any \(a \in \left [-\frac{1} {2}, \frac{1} {2}\right )^{d}\) let

$$\displaystyle{A_{N} = \left \{t\left (j\right )\right \}_{j=1}^{N} = \left (a + M^{-1}\mathbb{Z}^{d}\right ) \cap \left [-\frac{1} {2}, \frac{1} {2}\right )^{d}}$$

(the role of a will be clear later on). Then

$$\displaystyle\begin{array}{rcl} & & \int _{\mathit{SO}(d)}\int _{\mathbb{T}^{d}}\left \vert \mathrm{card}\left (A_{M^{d}} \cap \left (\tau (B) + t\right )\right ) - M^{d}\left \vert B\right \vert \right \vert ^{2}\ \mathit{dt}\,d\tau {}\\ & & =\int _{\mathit{SO}(d)}\int _{\mathbb{T}^{d}}\left \vert \mathrm{card}\left (A_{M^{d}} \cap \left (\tau (B) + t + a\right )\right ) - M^{d}\left \vert B\right \vert \right \vert ^{2}\ \mathit{dt}\,d\tau {}\\ & & = M^{d}\int _{ \mathit{SO}(d)}\int _{\left [- \frac{1} {2M}, \frac{1} {2M}\right )^{d}}\left \vert \mathrm{card}\left (M^{-1}\mathbb{Z}^{d} \cap \left (\tau (B) + t\right )\right ) - M^{d}\left \vert B\right \vert \right \vert ^{2}\ \mathit{dt}\,d\tau {}\\ & & = M^{d}\int _{ \mathit{SO}(d)}\int _{\left [- \frac{1} {2M}, \frac{1} {2M}\right )^{d}}\left \vert \mathrm{card}\left (\mathbb{Z}^{d} \cap \left (\tau (\mathit{MB}) + \mathit{Mt}\right )\right ) - M^{d}\left \vert B\right \vert \right \vert ^{2}\ \mathit{dt}\,d\tau {}\\ & & =\int _{\mathit{SO}(d)}\int _{\mathbb{T}^{d}}\left \vert \mathrm{card}\left (\mathbb{Z}^{d} \cap \left (\tau (MB) + v\right )\right ) - M^{d}\left \vert B\right \vert \right \vert ^{2}\mathit{dv}\,d\tau \;, {}\\ \end{array}$$

since the function

$$\displaystyle{t\mapsto \mathrm{card}\left (M^{-1}\mathbb{Z}^{d} \cap \left (\tau (B) + t\right )\right ) - M^{d}\left \vert B\right \vert }$$

is \(M^{-1}\mathbb{Z}^{d}\)-periodic and the cube \(\left [-\frac{1} {2}, \frac{1} {2}\right )^{d}\) contains \(M^{d}\) disjoint copies of \(\left [- \frac{1} {2M}, \frac{1} {2M}\right )^{d}\). By Theorem 21 we have

$$\displaystyle{ \int _{\mathit{SO}(d)}\int _{\mathbb{T}^{d}}\left \vert \mathrm{card}\left (A_{M^{d}} \cap \left (\tau (B) + t\right )\right ) - M^{d}\left \vert B\right \vert \right \vert ^{2}\ \mathit{dt}\,d\tau \leq c_{ d}M^{d-1} \leq c_{ d}N^{1-1/d}. }$$
(3.57)

To end the proof we need to pass from \(N = M^{d}\) to an arbitrary positive integer N. By a theorem of Hilbert (the Waring problem, see [29]) there exists a constant \(H = H_{d}\) such that every positive integer N can be written as a sum of at most H d-th powers:

$$\displaystyle{N =\sum _{ j=1}^{H}M_{ j}^{d}}$$

with \(M_{1},M_{2},\ldots,M_{H}\) positive integers. Now choose \(a_{1},a_{2},\ldots,a_{H} \in \left [-\frac{1} {2}, \frac{1} {2}\right )^{d}\) such that

$$\displaystyle{ \left (a_{j} + M_{j}^{-1}\mathbb{Z}^{d}\right ) \cap \left (a_{ k} + M_{k}^{-1}\mathbb{Z}^{d}\right ) = \varnothing }$$
(3.58)

whenever \(j\neq k\). For \(j = 1,2,\ldots,H\) let

$$\displaystyle{A_{M_{j}^{d}} = \left (a_{j} + M_{j}^{-1}\mathbb{Z}^{d}\right ) \cap \left [-\frac{1} {2}, \frac{1} {2}\right )^{d}\;.}$$

By (3.58) the union

$$\displaystyle{A_{N} =\bigcup _{ j=1}^{H}A_{ M_{j}^{d}}}$$

is disjoint, so that \(A_{N}\) contains exactly N points. Since

$$\displaystyle{\mathrm{card}\left (A_{N} \cap B\right ) - N\left \vert B\right \vert =\sum _{ j=1}^{H}\left (\mathrm{card}\left (A_{ M_{j}^{d}} \cap B\right ) - M_{j}^{d}\left \vert B\right \vert \right )\;,}$$

the theorem follows from (3.57). □ 
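
To make the last step of the proof concrete, here is a sketch for d = 2 which assembles an arbitrary N as a disjoint union of shifted grids. It uses Lagrange's four-square theorem (so at most four grids suffice here) in place of the Hilbert–Waring result quoted above, and random generic shifts, which play the role of the \(a_{j}\) in (3.58) and make the grids disjoint with probability one; it is only an illustration of the counting, not a verbatim transcription of the argument.

```python
import itertools
import math
import random

def four_squares(N: int):
    """Some representation N = M_1^2 + ... + M_r^2 with r <= 4 (zeros dropped).
    Brute force, adequate for modest N."""
    for a in range(math.isqrt(N), -1, -1):
        for b in range(math.isqrt(N - a * a), -1, -1):
            for c in range(math.isqrt(N - a * a - b * b), -1, -1):
                rest = N - a * a - b * b - c * c
                s = math.isqrt(rest)
                if s * s == rest:
                    return [m for m in (a, b, c, s) if m > 0]
    return []

def union_of_shifted_grids(N: int, seed: int = 0):
    """N points of [-1/2, 1/2)^2 obtained as a disjoint union of shifted square
    grids, one grid with M_j^2 points for each term of the decomposition."""
    rng = random.Random(seed)
    points = []
    for M in four_squares(N):
        ax, ay = rng.random() / M, rng.random() / M    # generic shift: grids disjoint a.s.
        for i, j in itertools.product(range(M), repeat=2):
            points.append(((i / M + ax) % 1.0 - 0.5,
                           (j / M + ay) % 1.0 - 0.5))
    return points

if __name__ == "__main__":
    for N in (10, 57, 1000):
        pts = union_of_shifted_grids(N)
        print(N, "=", " + ".join(f"{M}^2" for M in four_squares(N)),
              "   points generated:", len(pts))
```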

7.2 Deterministic and Probabilistic Discrepancies

In the proof of the next theorem the points will be chosen in a probabilistic way. Since we will start from the piece of lattice \(\left \{t\left (j\right )\right \}_{j=1}^{N}\) introduced in (3.56), it will be possible, as in the first proof, to assume \(N = M^{d}\). We will choose one point at random inside each one of the N small cubes having sides parallel to the axes and side length 1∕M (Fig. 3.5).

Fig. 3.5 Jittered sampling

The above choice is sometimes called jittered sampling.
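
Here is a minimal sketch of the jittered sampling construction, with the torus identified with \(\left [-\frac{1}{2}, \frac{1}{2}\right )^{d}\) as in (3.56); the dimension d = 2, the value of M and the seed are arbitrary illustrative choices.

```python
import itertools
import random

def jittered_sample(M: int, d: int = 2, seed: int = 0):
    """N = M^d points: one uniformly distributed point inside each of the M^d
    cubes of side 1/M tiling [-1/2, 1/2)^d."""
    rng = random.Random(seed)
    return [tuple((i + rng.random()) / M - 0.5 for i in idx)
            for idx in itertools.product(range(M), repeat=d)]

if __name__ == "__main__":
    pts = jittered_sample(4)            # 16 points in [-1/2, 1/2)^2
    print(len(pts), pts[:2])
```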

We have the following generalization of Theorem 42.

Theorem 43.

Let \(B \subset \mathbb{T}^{d}\) be a body of diameter smaller than 1 satisfying (3.15) . Then for every positive integer N there exists a finite set \(\left \{t\left (j\right )\right \}_{j=1}^{N} \subset \mathbb{T}^{d}\) such that

$$\displaystyle{\int _{\mathit{SO}\left (d\right )}\int _{\mathbb{T}^{d}}\left \vert -N\left \vert B\right \vert +\sum _{ j=1}^{N}\chi _{ \tau ^{-1}\left (B\right )-t}\left (t\left (j\right )\right )\right \vert ^{2}\ \mathit{dt}\,d\tau \leq c_{ d}\ N^{\left (d-1\right )/d}\;.}$$

Before starting the proof we introduce the following slightly more general point of view.

Given the finite point distribution \(\mathcal{P} = \left \{t\left (j\right )\right \}_{j=1}^{N}\) in (3.56), we introduce the following randomization of \(\mathcal{P}\) (see [2, 14, 22] and also [5, 36, 41]). Let \(d\mu \) denote a probability measure on \(\mathbb{T}^{d}\) and, for every \(j = 1,\ldots,N\), let \(d\mu _{j}\) denote the measure obtained after translating \(d\mu \) by \(t\left (j\right )\). More precisely, for any integrable function g on \(\mathbb{T}^{d}\), let

$$\displaystyle{ \int _{\mathbb{T}^{d}}g(t)\ d\mu _{j} =\int _{\mathbb{T}^{d}}g(t - t\left (j\right ))\ d\mu \;. }$$
(3.59)

As before, let dt denote the Lebesgue measure on \(\mathbb{T}^{d}\). For every sequence \(V _{N} = \left \{v_{1},\ldots,v_{N}\right \}\) in \(\mathbb{T}^{d}\) and every \(t \in \mathbb{T}^{d}\), \(\tau \in \mathit{SO}\left (d\right )\) let

$$\displaystyle{D(t,\tau,V _{N}) = D(V _{N}) = -N\left \vert B\right \vert +\sum _{ j=1}^{N}\chi _{ \tau ^{-1}\left (B\right )-t}\left (v_{j}\right )\;.}$$

As in (3.50), the function \(t\mapsto D(t,\tau,V _{N})\) has Fourier series

$$\displaystyle{\sum _{0\neq m\in \mathbb{Z}^{d}}\left (\sum _{j=1}^{N}e^{2\pi \mathit{im}\cdot v_{j} }\right )\hat{\chi }_{B}(\tau \left (m\right ))\ e^{2\pi \mathit{im}\cdot t}\;.}$$

We now average

$$\displaystyle{\left \{\int _{\mathit{SO}\left (d\right )}\int _{\mathbb{T}^{d}}D^{2}(t,\tau,V _{ N})\ \mathit{dt}\,d\tau \right \}^{1/2}}$$

in \(L^{2}(\mathbb{T}^{d},d\mu _{j})\) for every \(j = 1,\ldots,N\), and consider

$$\displaystyle{D_{d\mu }(N) = \left \{\int _{\mathbb{T}^{d}}\ldots \int _{\mathbb{T}^{d}}\int _{\mathit{SO}\left (d\right )}\int _{\mathbb{T}^{d}}D^{2}(t,\tau,V _{ N})\ \mathit{dt}\,d\tau \,d\mu _{1}(v_{1})\ldots d\mu _{N}(v_{N})\right \}^{1/2}\;.}$$

We now use an orthogonality argument to obtain an explicit identity for \(D_{d\mu }(N)\). Indeed, by the Parseval identity, (3.59) and (3.56) we have

$$\displaystyle\begin{array}{rcl} & & D_{d\mu }^{2}(N) {}\\ & & =\int _{\mathit{SO}\left (d\right )}\int _{\mathbb{T}^{d}}\ldots \int _{\mathbb{T}^{d}}\sum _{0\neq m\in \mathbb{Z}^{d}}\left \vert \sum _{j=1}^{N}e^{2\pi \mathit{im}\cdot v_{j} }\right \vert ^{2}\vert \hat{\chi }_{ B}(\tau \left (m\right ))\vert ^{2}\ d\mu _{ 1}(v_{1})\ldots d\mu _{N}(v_{N})d\tau {}\\ & & =\int _{\mathit{SO}\left (d\right )}\sum _{0\neq m\in \mathbb{Z}^{d}}\vert \hat{\chi }_{B}(\tau \left (m\right ))\vert ^{2}\sum _{ j,\ell=1}^{N}\int _{ \mathbb{T}^{d}}\int _{\mathbb{T}^{d}}e^{2\pi \mathit{im}\cdot v_{j} }e^{-2\pi \mathit{im}\cdot v_{\ell}}\ d\mu _{ j}(v_{j})d\mu _{\ell}(v_{\ell})d\tau {}\\ & & =\int _{\mathit{SO}\left (d\right )}\sum _{0\neq m\in \mathbb{Z}^{d}}\vert \hat{\chi }_{B}(\tau \left (m\right ))\vert ^{2} {}\\ & & \times \left (N +\sum _{ \begin{array}{c}j,\ell=1 \\ j\neq \ell \end{array}}^{N}\int _{ \mathbb{T}^{d}}\int _{\mathbb{T}^{d}}e^{2\pi \mathit{im}\cdot \left (v_{j}-t\left (j\right )\right )}e^{-2\pi \mathit{im}\cdot \left (v_{\ell}-t\left (\ell\right )\right )}\ d\mu (v_{ j})d\mu (v_{\ell})\right )d\tau \;. {}\\ \end{array}$$

Then

$$\displaystyle\begin{array}{rcl} & & D_{d\mu }^{2}(N) {}\\ & & =\int _{\mathit{SO}\left (d\right )}\sum _{0\neq m\in \mathbb{Z}^{d}}\vert \hat{\chi }_{B}(\tau \left (m\right ))\vert ^{2}\left (N + \vert \hat{\mu }(m)\vert ^{2}\sum _{ \begin{array}{c}j,\ell=1\\ j\neq \ell\end{array}}^{N}e^{2\pi \mathit{im}\cdot \left (t\left (\ell\right )-t\left (j\right )\right )}\right )d\tau {}\\ & & =\int _{\mathit{SO}\left (d\right )}\sum _{0\neq m\in \mathbb{Z}^{d}}\vert \hat{\chi }_{B}(\tau \left (m\right ))\vert ^{2}\left (N + \vert \hat{\mu }(m)\vert ^{2}\left (\sum _{ j,\ell=1}^{N}e^{2\pi \mathit{im}\cdot \left (t\left (j\right )-t\left (\ell\right )\right )} - N\right )\right )d\tau {}\\ & & = N\sum _{0\neq m\in \mathbb{Z}^{d}}\left (1 -\vert \hat{\mu }(m)\vert ^{2}\right )\int _{\mathit{ SO}\left (d\right )}\vert \hat{\chi }_{B}(\tau \left (m\right ))\vert ^{2}\ d\tau {}\\ & & +\sum _{0\neq m\in \mathbb{Z}^{d}}\vert \hat{\mu }(m)\vert ^{2}\left \vert \sum _{ j=1}^{N}e^{2\pi \mathit{im}\cdot t\left (j\right )}\right \vert ^{2}\int _{ \mathit{SO}\left (d\right )}\vert \hat{\chi }_{B}(\tau \left (m\right ))\vert ^{2}\ d\tau \;. {}\\ \end{array}$$

It follows that

$$\displaystyle{ D_{d\mu }^{2}(N) = N\left (\left \vert B\right \vert -\left \Vert \chi _{\tau ^{ -1}B}{\ast}\mu \right \Vert _{L^{2}\left (\mathit{SO}\left (d\right )\times \mathbb{T}^{d}\right )}^{2}\right ) + \left \Vert D(\cdot,\tau,\mathcal{P}){\ast}\mu \right \Vert _{ L^{2}\left (\mathit{SO}\left (d\right )\times \mathbb{T}^{d}\right )}^{2}\,. }$$
(3.60)

The following are particular cases.

  1. (a)

    Let \(d\mu = \mathit{dt}\) (the Lebesgue measure on \(\mathbb{T}^{d}\)). Then the second term in the RHS of (3.60) vanishes, since \(\hat{\mu }(m) = 0\) for m ≠ 0, and we find the Monte Carlo discrepancy:

    $$\displaystyle{ D_{\mathit{dt}}^{2}(N) = N\int _{\mathit{ SO}\left (d\right )}\sum _{0\neq m\in \mathbb{Z}^{d}}\left \vert \hat{\chi }_{B}(\tau \left (m\right ))\right \vert ^{2}\ d\tau = N\left (\left \vert B\right \vert -\left \vert B\right \vert ^{2}\right )\;. }$$
    (3.61)
  2. (b)

    Let \(d\mu =\delta _{0}\) (the Dirac δ at 0). Then we have the piece of grid

    $$\displaystyle{M^{-1}\mathbb{Z}^{d} \cap \left [-\frac{1} {2}, \frac{1} {2}\right )^{d}\;,}$$

    and the first term in the RHS of (3.60) vanishes because \(\hat{\mu }(m) = 1\) for every m. As for the second term, note that for \(t\left (j\right )\) in (3.56) we have

    $$\displaystyle{ \sum _{j=1}^{N}e^{2\pi \mathit{im}\cdot t\left (j\right )} = \left \{\begin{array}{lll} N &&\text{if }m \in M\mathbb{Z}^{d} \\ 0 &&\text{otherwise,}\end{array} \right.\;. }$$
    (3.62)

    We then obtain the grid discrepancy:

    $$\displaystyle{D_{\delta _{0}}^{2}(N) = N^{2}\int _{ \mathit{SO}\left (d\right )}\sum _{0\neq m\in \mathbb{Z}^{d}}\left \vert \hat{\chi }_{B}(M\tau \left (m\right ))\right \vert ^{2}\ d\tau \;.}$$

    In the next case we shall choose one point at random inside each one of the \(M^{d}\) small cubes having sides parallel to the axes and side length 1∕M.

  3. (c)

    Let \(d\mu = d\lambda =\lambda (t)\,\mathit{dt}\), with

    $$\displaystyle{\lambda (t) = N\chi _{\left [-1/\left (2M\right ),1/\left (2M\right )\right ]^{d}}(t)\;.}$$

    Then, for \(m = \left (m_{1},m_{2},\ldots,m_{d}\right )\) we have

    $$\displaystyle{\hat{\lambda }(m) = N\ \frac{\sin (\pi m_{1}/M)} {\pi m_{1}} \frac{\sin (\pi m_{2}/M)} {\pi m_{2}} \cdots \frac{\sin (\pi m_{d}/M)} {\pi m_{d}} }$$

    and, for every m ≠ 0, (3.62) gives

    $$\displaystyle\begin{array}{rcl} & & \hat{\mu }(m)\widehat{D\left (\cdot,\tau,\mathcal{P}\right )}\left (m\right ) {}\\ & & =\hat{\lambda } (m)\widehat{D\left (\cdot,\tau,\mathcal{P}\right )}\left (m\right ) {}\\ & & = \left (N\ \frac{\sin (\pi m_{1}/M)} {\pi m_{1}} \frac{\sin (\pi m_{2}/M)} {\pi m_{2}} \cdots \frac{\sin (\pi m_{d}/M)} {\pi m_{d}} \right )\left (\hat{\chi }_{B}\left (\tau \left (m\right )\right )\sum _{j=1}^{N}e^{2\pi \mathit{im}\cdot t\left (j\right )}\right ) {}\\ & & = 0\;, {}\\ \end{array}$$

    so that the second term in the RHS of (3.60) vanishes. In this way we have the jittered sampling discrepancy:

    $$\displaystyle\begin{array}{rcl} D_{d\lambda }^{2}(N)& =& N\left (\left \vert B\right \vert -\int _{\mathit{ SO}\left (d\right )}\int _{\mathbb{T}^{d}}\left \vert \chi _{\tau ^{-1}\left (B\right )}{\ast}\lambda \right \vert ^{2}\ \mathit{dt}\,d\tau \right ) \\ & =& N\sum _{0\neq m\in \mathbb{Z}^{d}}\left (1 -\left \vert \hat{\lambda }\left (m\right )\right \vert ^{2}\right )\int _{\mathit{ SO}\left (d\right )}\left \vert \hat{\chi }_{B}(\tau \left (m\right ))\right \vert ^{2}\ d\tau \;. {}\end{array}$$
    (3.63)

Proof (of Theorem 43).

By (3.63) we can select a point \(u_{j}\) from each one of the cubes

$$\displaystyle{\left \{t\left (j\right ) + \left [- \frac{1} {2M}, \frac{1} {2M}\right )^{d}\right \}_{ j=1}^{N}}$$

in such a way that

$$\displaystyle\begin{array}{rcl} & & \int _{\mathit{SO}\left (d\right )}\int _{\mathbb{T}^{d}}\left \vert -N\left \vert B\right \vert +\sum _{ j=1}^{N}\chi _{ \tau ^{-1}\left (B\right )-t}\left (u_{j}\right )\right \vert ^{2}\ \mathit{dt}\,d\tau {}\\ & & \leq N\left (\left \vert B\right \vert -\int _{\mathit{SO}\left (d\right )}\int _{\mathbb{T}^{d}}\left (\chi _{\tau ^{-1}\left (B\right )}{\ast}\lambda \right )^{2}\left (t\right )\ \mathit{dt}\,d\tau \right )\;. {}\\ \end{array}$$

Since the support of the function \(\lambda \left (t\right )\) has diameter \(\sqrt{d}/M\) we have

$$\displaystyle{\left (\chi _{\tau ^{-1}\left (B\right )}{\ast}\lambda \right )^{2}\left (t\right ) =\chi _{\tau ^{ -1}\left (B\right )}\left (t\right )}$$

for every t not belonging to the set

$$\displaystyle{\left \{x \in \mathbb{R}^{d}:\min _{ y\in \partial \left (\tau ^{-1}\left (B\right )\right )}\left \vert x - y\right \vert \leq \sqrt{d}/M\right \}\;.}$$

By our assumptions this set has measure at most \(c_{d}\,M^{-1}\), uniformly in τ, so that

$$\displaystyle{ N\left (\left \vert B\right \vert -\int _{\mathit{SO}\left (d\right )}\int _{\mathbb{T}^{d}}\left (\chi _{\tau ^{-1}\left (B\right )}{\ast}\lambda \right )^{2}\left (t\right )\ \mathit{dt}\,d\tau \right ) \leq c_{ d}\ N\ M^{-1} = c_{ d}\ N^{1-1/d}\;. }$$
(3.64)

 □ 
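
The three special cases (3.61)–(3.63) can be compared numerically. The sketch below is only an illustration: it estimates, for a fixed disc (for which rotations are irrelevant), the translation-averaged \(L^{2}\) discrepancy of a Monte Carlo set, of the grid (3.56) and of a jittered sample with the same number of points; the radius, \(N = M^{2}\) and the number of random translations are arbitrary. One expects the first value to be of order \(\sqrt{N}\) and the other two to be of order \(N^{(d-1)/2d} = N^{1/4}\), up to constants.

```python
import itertools
import math
import random

def torus_dist2(p, q):
    """Squared distance on T^2, using the nearest image in each coordinate."""
    return sum(min(abs(a - b), 1.0 - abs(a - b)) ** 2 for a, b in zip(p, q))

def l2_disc_discrepancy(points, radius=0.3, samples=400, seed=1):
    """Root mean square of card(P in (B + t)) - N|B| over random translations t,
    for a disc B of the given radius on the torus."""
    rng = random.Random(seed)
    N, area = len(points), math.pi * radius ** 2
    acc = 0.0
    for _ in range(samples):
        c = (rng.random(), rng.random())
        inside = sum(1 for p in points if torus_dist2(p, c) <= radius ** 2)
        acc += (inside - N * area) ** 2
    return math.sqrt(acc / samples)

def grid(M):
    return [(i / M, j / M) for i, j in itertools.product(range(M), repeat=2)]

def jittered(M, seed=0):
    rng = random.Random(seed)
    return [((i + rng.random()) / M, (j + rng.random()) / M)
            for i, j in itertools.product(range(M), repeat=2)]

def monte_carlo(N, seed=0):
    rng = random.Random(seed)
    return [(rng.random(), rng.random()) for _ in range(N)]

if __name__ == "__main__":
    M = 20                                           # N = 400 points in each case
    for name, pts in (("Monte Carlo", monte_carlo(M * M)),
                      ("grid", grid(M)),
                      ("jittered", jittered(M))):
        print(f"{name:12s} L2 discrepancy ~ {l2_disc_discrepancy(pts):6.2f}")
```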

Remark 44.

When B is a ball of radius r and \(N = M^{d}\), the inequality (3.64) can be reversed (see [22]), and there exist positive constants \(c_{1}\) and \(c_{2}\), depending at most on d and on r, such that

$$\displaystyle{c_{1}\ N^{1-1/d} \leq N\left (\left \vert B\right \vert -\int _{ \mathbb{T}^{d}}\left (\chi _{B}{\ast}\lambda \right )^{2}\left (t\right )\ \mathit{dt}\right ) \leq c_{ 2}\ N^{1-1/d}\;.}$$

Remark 45.

In the case of the ball it is possible to show that the discrepancy described in Theorem 42 is larger than the one described in Theorem 43 for small d, and smaller for large d (see [22]). For \(d \equiv 1\ \left (\mathrm{mod}4\right )\) the situation is more intricate because of the results in Theorem 24 (see [22] or [21]).