1 Introduction

Properties of the Estermann zeta function have been used by Balasubramanian et al. [2] to prove asymptotic formulas for mean values of the product of the Riemann zeta function and a Dirichlet polynomial. These asymptotic formulas contain cotangent sums, which also appear in recent work of Bettin and Conrey [4] on period functions. Very recently, Maier and Rassias [14] proved asymptotic results and upper bounds for the moments of the cotangent sums under consideration. Their main result is the existence of a unique positive measure μ on \(\mathbb{R}\) with respect to which these cotangent sums are equidistributed.

2 The Cotangent Sum and Its Applications

In the present paper, we consider the following cotangent sum:

Definition 2.1

If r, \(b \in \mathbb{N}\) , b ≥ 2, 1 ≤ r ≤ b and (r,b) = 1, we define

$$\displaystyle{c_{0}\left (\frac{r} {b}\right ):= -\sum _{m=1}^{b-1}\frac{m} {b} \cot \left (\frac{\pi mr} {b} \right )\:.}$$

The function \(c_{0}(r/b)\) is odd and has period 1. Its values are algebraic numbers. We first exhibit some relations of the cotangent sum to the Estermann and Riemann zeta functions, as well as connections to the Riemann Hypothesis.
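Since \(c_{0}(r/b)\) is a finite sum, these properties are easy to test numerically. The sketch below (our own illustration, not part of the original argument) evaluates the sum directly and checks that oddness and period 1 combine to give \(c_{0}((b-r)/b) = -c_{0}(r/b)\):

```python
import math

def c0(r, b):
    """The cotangent sum c0(r/b) of Definition 2.1, for b >= 2 and (r, b) = 1."""
    return -sum((m / b) / math.tan(math.pi * m * r / b) for m in range(1, b))

# Oddness plus period 1 give c0((b - r)/b) = c0(-r/b) = -c0(r/b),
# so, e.g., c0(3/7) and c0(4/7) agree up to sign at machine precision.
```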

Definition 2.2

The Estermann zeta function \(E\left (s, \frac{r} {b},\alpha \right )\) is defined by the Dirichlet series

$$\displaystyle{E\left (s, \frac{r} {b},\alpha \right ) =\sum _{n\geq 1}\frac{\sigma _{\alpha }(n)\exp \left (2\pi inr/b\right )} {n^{s}} \:,}$$

where Re s > Re α + 1, b ≥ 1, (r,b) = 1 and

$$\displaystyle{\sigma _{\alpha }(n) =\sum _{d\vert n}d^{\alpha }\:.}$$

It can be proved that the Estermann zeta function can be continued analytically to a meromorphic function on the whole complex plane, with two simple poles at s = 1 and s = 1 +α if α ≠ 0, or a double pole at s = 1 if α = 0 (see [9, 11, 17]).
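In the region of absolute convergence Re s > Re α + 1, partial sums of the defining series can be computed directly. A small numerical sketch for α = 0 (the truncation lengths and sample point are our own choices):

```python
import cmath

def estermann_partial(s, r, b, N):
    """Partial sum over n <= N of E(s, r/b, 0); sigma_0(n) is the divisor count."""
    tau = [0] * (N + 1)
    for d in range(1, N + 1):          # sieve the divisor function tau = sigma_0
        for n in range(d, N + 1, d):
            tau[n] += 1
    return sum(tau[n] * cmath.exp(2j * cmath.pi * ((n * r) % b) / b) / n ** s
               for n in range(1, N + 1))
```

For s = 2 the tail beyond N is of size at most \(\sum_{n>N}\tau(n)/n^{2} \ll N^{-1}\log N\), so successive truncations agree closely.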

The Estermann zeta function satisfies the functional equation:

$$\displaystyle\begin{array}{rcl} E\left (s, \frac{r} {b},\alpha \right )& =& \frac{1} {\pi } \left (\frac{b} {2\pi }\right )^{1+\alpha -2s}\varGamma (1 - s)\varGamma (1 +\alpha -s) {}\\ & \times & \left (\cos \left ( \frac{\pi \alpha } {2}\right )E\left (1 +\alpha -s, \frac{\overline{r}} {b},\alpha \right )-\cos \left (\pi s - \frac{\pi \alpha } {2}\right )E\left (1 +\alpha -s,-\frac{\overline{r}} {b},\alpha \right )\right ), {}\\ \end{array}$$

where \(\overline{r}\) is such that \(\overline{r}r \equiv 1\:(\bmod \;b)\) and Γ(s) stands for the Gamma function.

Balasubramanian et al. [2] used properties of \(E\left (0, \frac{r} {b},0\right )\) to prove an asymptotic formula for

$$\displaystyle{I =\int _{ 0}^{T}\left \vert \zeta \left (\frac{1} {2} + it\right )\right \vert ^{2}\left \vert A\left (\frac{1} {2} + it\right )\right \vert ^{2}dt\:,}$$

where A(s) is a Dirichlet polynomial.

Asymptotics for expressions of the form of I have been used in theorems which give a lower bound for the proportion of zeros of the Riemann zeta function ζ(s) on the critical line (see [12, 13]).

A nice result concerning the value of \(E\left (s, \frac{r} {b},\alpha \right )\) at s = 0 was presented by Ishibashi in [10].

Theorem 2.3 (Ishibashi)

Let b ≥ 2, 1 ≤ r ≤ b, (r,b) = 1, \(\alpha \in \mathbb{N} \cup \{ 0\}\) . Then

  1. (1)

    If α is even, it holds

    $$\displaystyle{E\left (0, \frac{r} {b},\alpha \right ) = \left (-\frac{i} {2}\right )^{\alpha +1}\sum _{ m=1}^{b-1}\frac{m} {b} \cot ^{(\alpha )}\left (\frac{\pi mr} {b} \right ) + \frac{1} {4}\delta _{\alpha,0}\:,}$$

    where \(\delta _{\alpha,0}\) is the Kronecker delta.

  2. (2)

    If α is odd, it holds

    $$\displaystyle{E\left (0, \frac{r} {b},\alpha \right ) = \frac{B_{\alpha +1}} {2(\alpha +1)}\:.}$$

If r = b = 1, one has

$$\displaystyle{E\left (0,1,\alpha \right ) = \frac{(-1)^{\alpha +1}B_{\alpha +1}} {2(\alpha +1)} \:,}$$

where \(B_{\alpha +1}\) denotes the (α + 1)-st Bernoulli number (see Definition  2.4).

Hence for b ≥ 2, 1 ≤ r ≤ b, (r, b) = 1, we have

$$\displaystyle{E\left (0, \frac{r} {b},0\right ) = \frac{1} {4} + \frac{i} {2}c_{0}\left (\frac{r} {b}\right )\:,}$$

where \(c_{0}(r/b)\) is the cotangent sum of Definition 2.1.

The above result gives a relation between the cotangent sum \(c_{0}(r/b)\) and the Estermann zeta function.

The cotangent sum \(c_{0}(r/b)\) can be associated with the so-called Vasyunin sum, which is defined as follows:

$$\displaystyle{V \left (\frac{r} {b}\right ):=\sum _{ m=1}^{b-1}\left \{\frac{mr} {b} \right \}\cot \left (\frac{\pi m} {b} \right )\:,}$$

where {u} = u −⌊u⌋, \(u \in \mathbb{R}.\)

One can prove that (see [3, 4])

$$\displaystyle{V \left (\frac{r} {b}\right ) = -c_{0}\left (\frac{\overline{r}} {b}\right ),}$$

where, as mentioned previously, \(\overline{r}\) is such that \(\overline{r}r \equiv 1\:(\bmod \;b)\).
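Both V and \(c_{0}\) are finite sums, so the relation \(V(r/b) = -c_{0}(\overline{r}/b)\) can be confirmed numerically. A sketch (the example values and the use of Python 3.8+ modular inverses are our own choices):

```python
import math

def c0(r, b):
    """Cotangent sum c0(r/b) of Definition 2.1."""
    return -sum((m / b) / math.tan(math.pi * m * r / b) for m in range(1, b))

def vasyunin(r, b):
    """Vasyunin sum V(r/b) = sum_{m=1}^{b-1} {m*r/b} * cot(pi*m/b)."""
    return sum(((m * r) % b) / b / math.tan(math.pi * m / b) for m in range(1, b))

r, b = 3, 7
r_bar = pow(r, -1, b)   # inverse of r modulo b (Python 3.8+)
# The identity V(r/b) = -c0(r_bar/b) should hold up to rounding error.
```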

The Vasyunin sum [19] is itself associated with the study of the Riemann Hypothesis through the following identity (see [3, 4]):

$$\displaystyle\begin{array}{rcl} \frac{1} {2\pi (rb)^{1/2}}\int _{-\infty }^{+\infty }\left \vert \zeta \left (\frac{1} {2} + it\right )\right \vert ^{2}\left (\frac{r} {b}\right )^{it} \frac{dt} {\frac{1} {4} + t^{2}}& =& \frac{\log 2\pi -\gamma } {2} \left (\frac{1} {r} + \frac{1} {b}\right ) \\ & +& \frac{b - r} {2rb} \log \frac{r} {b} - \frac{\pi } {2rb}\left (V \left (\frac{r} {b}\right ) + V \left (\frac{b} {r}\right )\right ).{}\end{array}$$
(2.1)

The only non-explicit function on the right-hand side of (2.1) is the Vasyunin sum.

Formula (2.1) is related to the Nyman–Beurling–Báez-Duarte–Vasyunin approach to the Riemann Hypothesis (see [1, 3]). The Riemann Hypothesis is true if and only if

$$\displaystyle{\lim _{N\rightarrow +\infty }d_{N} = 0,}$$

where

$$\displaystyle{d_{N}^{2} =\inf _{ D_{N}}\frac{1} {2\pi }\int _{-\infty }^{+\infty }\left \vert 1 -\zeta \left (\frac{1} {2} + it\right )D_{N}\left (\frac{1} {2} + it\right )\right \vert ^{2} \frac{dt} {\frac{1} {4} + t^{2}}}$$

and the infimum is taken over all Dirichlet polynomials

$$\displaystyle{D_{N}(s) =\sum _{ n=1}^{N}\frac{a_{n}} {n^{s}}.}$$

From the above considerations we see that the behavior of \(c_{0}(r/b)\) helps us to understand the behavior of \(V(r/b)\). By (2.1), we may then hope to obtain some information related to the Nyman–Beurling–Báez-Duarte–Vasyunin approach to the Riemann Hypothesis.

Definition 2.4

The m-th Bernoulli number B m is defined by

$$\displaystyle{B_{2m} = 2\frac{(2m)!} {(2\pi )^{2m}}\sum _{\nu \geq 1}\nu ^{-2m},\ \ B_{ 2m+1} = 0\:,}$$

where \(m \in \mathbb{N}\) . Furthermore, we have \(B_{1} = -1/2\).
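With this normalization \(B_{2m}\) is positive (it equals the absolute value of the Bernoulli number in the usual signed convention). A direct numerical check against the familiar values 1∕6 and 1∕30 (the truncation length is our own choice):

```python
import math

def bernoulli_even(m, terms=100000):
    """B_{2m} of Definition 2.4: 2 * (2m)! / (2 pi)^{2m} * sum_{v >= 1} v^{-2m}."""
    zeta_2m = sum(v ** (-2 * m) for v in range(1, terms + 1))
    return 2 * math.factorial(2 * m) / (2 * math.pi) ** (2 * m) * zeta_2m
```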

3 Main Result

We now discuss the equidistribution of certain normalized cotangent sums with respect to a positive measure, which is also constructed in the following theorem.

Definition 3.1

For \(z \in \mathbb{R}\) , let

$$\displaystyle{F(z) = \text{meas}\{\alpha \in [0,1]\::\: g(\alpha ) \leq z\},}$$

where “meas” denotes the Lebesgue measure,

$$\displaystyle{g(\alpha ) =\sum _{ l=1}^{+\infty }\frac{1 - 2\{l\alpha \}} {l} }$$

and

$$\displaystyle{C_{0}(\mathbb{R})=\{\,f \in C(\mathbb{R})\::\:\forall \:\epsilon > 0,\:\exists \:\ \text{a compact set}\ \mathcal{K}\subset \mathbb{R},\:\text{such that}\ \vert \,f(x)\vert <\epsilon,\forall \:x\not\in \mathcal{K}\}.}$$

Remark

The convergence of this series has been investigated by de la Bretèche and Tenenbaum (see [7]). It depends on the continued fraction expansion of the number α.
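This dependence is visible already in partial sums (all parameter choices below are our own). At the rational point α = 1∕2 the odd-indexed terms vanish and the even-indexed terms form half a harmonic series, so the partial sums diverge; for a badly approximable irrational such as the golden ratio they stay bounded:

```python
def g_partial(alpha, L):
    """Partial sum sum_{l=1}^{L} (1 - 2*{l*alpha}) / l of the series defining g."""
    return sum((1 - 2 * ((l * alpha) % 1.0)) / l for l in range(1, L + 1))

# alpha = 1/2: odd l give 1 - 2*{l/2} = 0, even l give 1/l, so
# g_partial(0.5, 2N) is half the N-th harmonic number and grows like (log N)/2.
golden = (1 + 5 ** 0.5) / 2   # bounded partial quotients: partial sums stay bounded
```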

We now state Theorem 3.2, the main result of our paper. An overview of the basic steps of its proof is provided in Sect. 4. The only fact whose proof we present in greater detail is Lemma 4.8, which implies the continuity of the distribution function of the cotangent sum \(c_{0}\).

Theorem 3.2

  1. (i)

    F is a continuous function of z.

  2. (ii)

    Let A 0 , A 1 be fixed constants, such that 1∕2 < A 0 < A 1 < 1. Let also

    $$\displaystyle{H_{k} =\int _{ 0}^{1}\left (\frac{g(x)} {\pi } \right )^{2k}dx,}$$

    where H k is a positive constant depending only on k, \(k \in \mathbb{N}\) .

There is a unique positive measure μ on \(\mathbb{R}\) with the following properties:

  1. (a)

    For \(\alpha <\beta \in \mathbb{R}\) we have

    $$\displaystyle{\mu ([\alpha,\beta ]) = (A_{1} - A_{0})(F(\beta ) - F(\alpha )).}$$
  2. (b)
    $$\displaystyle{ \int x^{k}d\mu = \left \{\begin{array}{l l} (A_{1} - A_{0})H_{k/2}\:,&\quad \text{for even}\:k \\ 0\:, &\quad \text{otherwise}\:.\\ \end{array} \right.}$$
  3. (c)

    For all \(f \in C_{0}(\mathbb{R})\) , we have

    $$\displaystyle{\lim _{b\rightarrow +\infty } \frac{1} {\phi (b)}\sum _{\begin{array}{c}r\::\:(r,b)=1 \\ A_{0}b\leq r\leq A_{1}b\end{array}}f\left (\frac{1} {b}c_{0}\left (\frac{r} {b}\right )\right ) =\int f\:d\mu,}$$

    where ϕ(⋅) denotes the Euler phi-function.

Remark

Bruggeman (see [5, 6]) and Vardi (see [18]) have studied the equidistribution of Dedekind sums. In contrast with the work in this paper, they consider an additional averaging over the denominator.

4 Outline of the Proof and Further Results

Rassias [15, 16] proved the following asymptotic formula:

Theorem 4.1

For b ≥ 2, \(b \in \mathbb{N}\) , we have

$$\displaystyle{c_{0}\left (\frac{1} {b}\right ) = \frac{1} {\pi } \:b\log b -\frac{b} {\pi } (\log 2\pi -\gamma ) + O(1)\:.}$$

In that paper, fractional parts are expressed in terms of cotangent sums. This method is generalized in the present paper, where stronger results are proved.
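Since \(c_{0}(1/b)\) is a finite sum, Theorem 4.1 is easy to check numerically. In the sketch below (the test value b = 1000 and the tolerance are our own choices) the discrepancy is O(1); by Theorem 4.2 below, it in fact approaches −1∕π:

```python
import math

EULER_GAMMA = 0.5772156649015329   # Euler-Mascheroni constant

def c0_one_over(b):
    """c0(1/b) = -sum_{m=1}^{b-1} (m/b) * cot(pi*m/b)."""
    return -sum((m / b) / math.tan(math.pi * m / b) for m in range(1, b))

def main_term(b):
    """The two leading terms of Theorem 4.1."""
    return (b * math.log(b) - b * (math.log(2 * math.pi) - EULER_GAMMA)) / math.pi

err = c0_one_over(1000) - main_term(1000)   # stays bounded as b grows
```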

The following improvement of Theorem 4.1 is also proved in [16]; it is not needed in the proof of Theorem 3.2.

Theorem 4.2

Let \(b,n \in \mathbb{N}\) , b ≥ 6N, with \(N = \left \lfloor n/2\right \rfloor + 1\) . There exist absolute real constants A 1 , A 2 ≥ 1 and absolute real constants E l , \(l \in \mathbb{N}\) , with \(\vert E_{l}\vert \leq (A_{1}l)^{2l}\) , such that for each \(n \in \mathbb{N}\) we have

$$\displaystyle{c_{0}\left (\frac{1} {b}\right ) = \frac{1} {\pi } b\log b -\frac{b} {\pi } (\log 2\pi -\gamma ) -\frac{1} {\pi } +\sum _{ l=1}^{n}E_{ l}b^{-l} + R_{ n}^{{\ast}}(b)}$$

where

$$\displaystyle{\vert R_{n}^{{\ast}}(b)\vert \leq (A_{ 2}n)^{4n}\:b^{-(n+1)}.}$$

In the following Proposition 4.3, due to the second author (see [16]), a relation is obtained between the cotangent sum \(c_{0}\) and the modified sum Q. The study of \(c_{0}\) is thus reduced to the study of Q, which is a crucial step. Theorem 4.4 is also proved in [16].

Proposition 4.3

For r, \(b \in \mathbb{N}\) with (r,b) = 1, it holds

$$\displaystyle{c_{0}\left (\frac{r} {b}\right ) = \frac{1} {r}\:c_{0}\left (\frac{1} {b}\right ) -\frac{1} {r}Q\left (\frac{r} {b}\right )\:,}$$

where

$$\displaystyle{Q\left (\frac{r} {b}\right ) =\sum _{ m=1}^{b-1}\cot \left (\frac{\pi mr} {b} \right )\left \lfloor \frac{rm} {b} \right \rfloor \:.}$$
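Both sides of Proposition 4.3 are finite sums, so the identity can be verified directly; a numerical sketch with a few coprime pairs (our own choices):

```python
import math

def c0(r, b):
    """Cotangent sum c0(r/b) of Definition 2.1."""
    return -sum((m / b) / math.tan(math.pi * m * r / b) for m in range(1, b))

def Q(r, b):
    """Q(r/b) = sum_{m=1}^{b-1} cot(pi*m*r/b) * floor(r*m/b)."""
    return sum((r * m // b) / math.tan(math.pi * m * r / b) for m in range(1, b))

# Proposition 4.3: c0(r/b) = (1/r) * c0(1/b) - (1/r) * Q(r/b).
```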

Theorem 4.4

Let \(r,b_{0} \in \mathbb{N}\) be fixed, with (b 0 ,r) = 1. Let b be a positive integer such that \(b \equiv b_{0}\:(\bmod \:r)\) . Then, there exists a constant C 1 = C 1 (r,b 0 ), with C 1 (1,b 0 ) = 0, satisfying

$$\displaystyle{c_{0}\left (\frac{r} {b}\right ) = \frac{1} {\pi r}b\log b -\frac{b} {\pi r}(\log 2\pi -\gamma ) + C_{1}\:b + O(1),}$$

for large integer values of b.

We now list other results proven in [16]. In the sequel we shall give a few hints concerning their proofs.

Theorem 4.5

Let \(k \in \mathbb{N}\) be fixed. Let also A 0 , A 1 be fixed constants such that 1∕2 < A 0 < A 1 < 1. Then there exist explicit constants E k > 0 and H k > 0, depending only on k, such that

  1. (a)
    $$\displaystyle{\sum _{\begin{array}{c}r:(r,b)=1 \\ A_{0}b\leq r\leq A_{1}b\end{array}}Q\left (\frac{r} {b}\right )^{2k} = E_{ k} \cdot (A_{1}^{2k+1} - A_{ 0}^{2k+1})b^{4k}\phi (b)(1 + o(1)),\ \ (b \rightarrow +\infty ).}$$
  2. (b)
    $$\displaystyle{\sum _{\begin{array}{c}r:(r,b)=1 \\ A_{0}b\leq r\leq A_{1}b\end{array}}Q\left (\frac{r} {b}\right )^{2k-1} = o\left (b^{4k-2}\phi (b)\right ),\ \ (b \rightarrow +\infty ).}$$
  3. (c)
    $$\displaystyle{\sum _{\begin{array}{c}r:(r,b)=1 \\ A_{0}b\leq r\leq A_{1}b\end{array}}c_{0}\left (\frac{r} {b}\right )^{2k} = H_{ k} \cdot (A_{1} - A_{0})b^{2k}\phi (b)(1 + o(1)),\ \ (b \rightarrow +\infty ).}$$
  4. (d)
    $$\displaystyle{\sum _{\begin{array}{c}r:(r,b)=1 \\ A_{0}b\leq r\leq A_{1}b\end{array}}c_{0}\left (\frac{r} {b}\right )^{2k-1} = o\left (b^{2k-1}\phi (b)\right ),\ \ (b \rightarrow +\infty ).}$$

Applying the method of moments, we deduce detailed information about the distribution of the values of \(c_{0}(r/b)\), where A 0 b ≤ r ≤ A 1 b and b → +∞. In fact, we prove Theorem 3.2.

Finally, we study the convergence of the series

$$\displaystyle{\sum _{k\geq 0}H_{k}x^{2k}}$$

and prove the following theorem:

Theorem 4.6

The series

$$\displaystyle{\sum _{k\geq 0}H_{k}x^{2k},}$$

converges only for x = 0.

Another interesting question is whether the series

$$\displaystyle{\sum _{k\geq 0} \frac{H_{k}} {(2k)!}x^{2k},}$$

has a positive radius of convergence. This would lead to a simplification in the proof of our equidistribution result, since in this case we could apply the theory of distributions which are determined by their moments. We now give a few hints concerning the proofs of Theorems 4.4 and 4.5.

By Proposition 4.3 we know that

$$\displaystyle{c_{0}\left (\frac{r} {b}\right ) = \frac{1} {r}c_{0}\left (\frac{1} {b}\right ) -\frac{1} {r}Q\left (\frac{r} {b}\right ),}$$

where

$$\displaystyle{Q\left (\frac{r} {b}\right ) =\sum _{ m=1}^{b-1}\cot \left (\frac{\pi mr} {b} \right )\left \lfloor \frac{rm} {b} \right \rfloor \:.}$$

We partition the range of the above summation into intervals on which the term \(\left \lfloor rm/b\right \rfloor \) assumes a constant value j:

$$\displaystyle{Q\left (\frac{r} {b}\right ) =\sum _{ j=0}^{r-1}j\sum _{ j\leq \left \lfloor \frac{rm} {b} \right \rfloor <j+1}\cot \left (\frac{\pi mr} {b} \right )}$$

and define

$$\displaystyle{S_{j} =\{ rm\::\: bj \leq rm < b(j + 1),m \in \mathbb{Z}\}.}$$

We write

$$\displaystyle{S_{j} =\{ bj + s_{j},bj + s_{j} + r,\ldots,bj + s_{j} + d_{j}r\},}$$

where \(d_{j} \in \{ 0,1\}\) (in the range A 0 b ≤ r ≤ A 1 b under consideration we have r > b∕2, so each interval [bj,b(j + 1)) contains at most two multiples of r), and introduce s = s j as a new variable of summation.
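The regrouping of Q(r∕b) by the value j = ⌊rm∕b⌋ is an exact rearrangement of a finite sum and can be confirmed directly (the moduli below, with r > b∕2, are our own choices):

```python
import math

def Q_direct(r, b):
    """Q(r/b) = sum_{m=1}^{b-1} cot(pi*m*r/b) * floor(r*m/b), in the original order."""
    return sum((r * m // b) / math.tan(math.pi * m * r / b) for m in range(1, b))

def Q_grouped(r, b):
    """The same sum, regrouped by the constant value j = floor(r*m/b)."""
    total = 0.0
    for j in range(r):
        total += j * sum(1 / math.tan(math.pi * m * r / b)
                         for m in range(1, b) if r * m // b == j)
    return total
```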

The relation between j and s j is given by

$$\displaystyle{s_{j} \equiv -bj(\bmod r)}$$

and thus for a given value s j  = s we can find j by

$$\displaystyle{\frac{j} {r} = \left \{-\frac{sb^{{\ast}}} {r} \right \},}$$

where \(b^{{\ast}}\) is defined by

$$\displaystyle{bb^{{\ast}}\equiv 1(\bmod r),\ 1 \leq b^{{\ast}}\leq r - 1.}$$

Since cot(π x) has a pole of first order at x = 0, the sum \(Q(r/b)\) is dominated by the contribution of small values of s. The substitution

$$\displaystyle{\alpha =\alpha (r,b) = \frac{b^{{\ast}}} {r} }$$

and the asymptotics

$$\displaystyle{\cot (\pi x) \sim \frac{1} {\pi x}}$$

lead to the approximation of \(Q(r/b)\) by

$$\displaystyle{\frac{br} {\pi } g(\alpha ).}$$

For the proof of Theorem 1.5 of [14], the crucial property is the continuity of the function F(z), where

$$\displaystyle{F(z) = \text{meas}\{\alpha \in [0,1]\::\: g(\alpha ) \leq z\}.}$$

We shall give a proof of this fact shortly. The proof of Theorem 4.6 is obtained by studying the contribution of the interval

$$\displaystyle{I(k) = [e^{-2k-1},e^{-2k}]\:}$$

to the moment

$$\displaystyle{H_{k} =\int _{ 0}^{1}\left (\frac{g(x)} {\pi } \right )^{2k}\:dx\:.}$$

Definition 4.7

A distribution function G is a monotonically increasing function

$$\displaystyle{G\!:\! \mathbb{R} \rightarrow [0,1].}$$

 The characteristic function ψ of G is defined by the following Stieltjes integral:

$$\displaystyle{\psi (t) =\int _{ -\infty }^{+\infty }e^{itu}dG(u).}$$

(cf. [8, p.27])

Lemma 4.8

The distribution function G is continuous if and only if the characteristic function ψ satisfies

$$\displaystyle{\liminf _{T\rightarrow +\infty } \frac{1} {2T}\int _{-T}^{T}\vert \psi (t)\vert ^{2}dt = 0.}$$

Proof

See [8, p. 48, Lemma 1.23].

Definition 4.9

Let t ≥ 1. We set

$$\displaystyle{K = K(t) = \lfloor t^{9/10}\rfloor,\ L = L(t) = \lfloor t^{11/10}\rfloor,\ R = R(t) = \lfloor t^{9/5}\rfloor }$$

and

$$\displaystyle{g(\alpha,K) = -2\sum _{l\leq K}\frac{B^{{\ast}}(l\alpha )} {l},\ \ h(\alpha ) = -2\sum _{l>K}\frac{B^{{\ast}}(l\alpha )} {l},}$$

where \(B^{{\ast}}(u) = u -\lfloor u\rfloor - 1/2\) , \(u \in \mathbb{R}\) .
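Since 1 − 2{lα} = −2B^∗(lα), the decomposition g(α) = g(α,K) + h(α) is simply a partition of the series defining g into the ranges l ≤ K and l > K. A quick consistency check on truncated sums (the parameters are our own choices):

```python
def B_star(u):
    """B*(u) = u - floor(u) - 1/2."""
    return (u % 1.0) - 0.5

def g_partial(alpha, L):
    """Partial sum sum_{l=1}^{L} (1 - 2*{l*alpha}) / l of the series defining g."""
    return sum((1 - 2 * ((l * alpha) % 1.0)) / l for l in range(1, L + 1))

def g_K(alpha, K):
    """g(alpha, K) = -2 * sum_{l <= K} B*(l*alpha) / l."""
    return -2 * sum(B_star(l * alpha) / l for l in range(1, K + 1))

def h_partial(alpha, K, L):
    """Truncation at L of h(alpha) = -2 * sum_{l > K} B*(l*alpha) / l."""
    return -2 * sum(B_star(l * alpha) / l for l in range(K + 1, L + 1))
```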

Assume that (α i ) with 0 = α 0 < α 1 < ⋯ < α R = 1 is a partition of [0,1] with the following properties:

$$\displaystyle{\frac{1} {2}R^{-1} \leq \alpha _{ i+1} -\alpha _{i} \leq 2R^{-1}}$$

and g(α,K) is continuous at α = α i for 0 < i < R.

We now make preparations for an application of Lemma 4.8 with G = F, and

$$\displaystyle{\psi (t) =\varPhi (t):=\int _{ 0}^{1}e\left (\frac{tg(\alpha )} {2\pi } \right )d\alpha.}$$

Lemma 4.10

The function h(α) has a Fourier expansion

$$\displaystyle{h(\alpha ) =\sum _{n>K}c(n)\sin (2\pi n\alpha ),}$$

with

$$\displaystyle{\vert c(n)\vert \leq \frac{2\tau (n)} {\pi n},}$$

where τ stands for the divisor function.

Proof

From the Fourier expansion

$$\displaystyle{B^{{\ast}}(u) = \frac{i} {2\pi }\sum _{\begin{array}{c}n=-\infty \\ n\neq 0 \end{array}}^{+\infty }\frac{e(nu)} {n},}$$

we obtain

$$\displaystyle{h(\alpha ) = -\frac{i} {\pi } \sum _{l>K}\frac{1} {l} \sum _{\begin{array}{c}m=-\infty \\ m\neq 0 \end{array}}^{+\infty }\frac{e(lm\alpha )} {m} =\sum _{\vert n\vert >K}d(n)e(n\alpha )}$$

with

$$\displaystyle{d(n) = -\frac{i} {\pi n}\left \vert \{(l,m)\::\: lm = n,\ l > K\}\right \vert.}$$

We have

$$\displaystyle{h(\alpha ) =\sum _{n>K}d(n)\left (e(n\alpha ) - e(-n\alpha )\right ) = 2i\sum _{n>K}d(n)\sin (2\pi n\alpha ),}$$

which completes the proof of the lemma.

Definition 4.11

We set

$$\displaystyle{h_{1}(\alpha ):=\sum _{K<n\leq L}c(n)\sin (2\pi n\alpha )}$$

and

$$\displaystyle{h_{2}(\alpha ):=\sum _{n>L}c(n)\sin (2\pi n\alpha ).}$$

Lemma 4.12

We have

$$\displaystyle{\int _{0}^{1}\left (e\left ( \frac{t} {2\pi }\left (g(\alpha,K) + h_{1}(\alpha )\right )\right ) - e\left (\frac{tg(\alpha )} {2\pi } \right )\right )d\alpha = O\left (t^{-1/100}\right ).}$$

Proof

By Parseval’s identity, it follows that for every ε > 0 we have

$$\displaystyle{\int _{0}^{1}h_{ 2}(\alpha )^{2}d\alpha =\sum _{ n>L}c(n)^{2} \ll L^{-(1-2\epsilon )},}$$

because of the estimate

$$\displaystyle{c(n) \ll n^{-1+\epsilon }.}$$

Thus, for all α ∈ [0, 1] not belonging to an exceptional set \(\mathcal{E}\) with

$$\displaystyle{\text{meas}(\mathcal{E}) = O\left (t^{-1/100}\right ),}$$

we have

$$\displaystyle{h_{2}(\alpha ) = O\left (t^{-1-1/100}\right )}$$

and therefore

$$\displaystyle{\left \vert e\left (\frac{th_{2}(\alpha )} {2\pi } \right ) - 1\right \vert = O\left (t^{-1/100}\right )}$$

by the Taylor expansion of the exponential function.

Hence,

$$\displaystyle\begin{array}{rcl} \left \vert \int _{0}^{1}e\left (\frac{tg(\alpha )} {2\pi } \right )d\alpha -\int _{0}^{1}e\left ( \frac{t} {2\pi }(g(\alpha,K) + h_{1}(\alpha ))\right )d\alpha \right \vert && {}\\ & \leq & \int _{0}^{1}\left \vert e\left (\frac{t(g(\alpha,K) + h_{1}(\alpha ))} {2\pi } \right )\right \vert \left \vert e\left (\frac{th_{2}(\alpha )} {2\pi } \right ) - 1\right \vert d\alpha {}\\ & \leq & \int _{\mathcal{E}}2\:d\alpha +\int _{[0,1]\setminus \mathcal{E}}\left \vert e\left (\frac{th_{2}(\alpha )} {2\pi } \right ) - 1\right \vert \:d\alpha {}\\ & =& O\left (t^{-1/100}\right ). {}\\ \end{array}$$

Lemma 4.13

There exists a set \(I \subseteq \{ 0,1,\ldots,R - 1\}\) of subscripts, such that

$$\displaystyle{\sum _{i\in I}(\alpha _{i+1} -\alpha _{i}) = O\left (t^{-1/100}\right )}$$

and for i∉I, α ∈ [α i i+1 ] we have

$$\displaystyle{\vert h_{1}(\alpha ) - h_{1}(\alpha _{i})\vert \leq t^{-(1+1/100)}.}$$

Proof

We have

$$\displaystyle{\frac{d} {d\alpha }h_{1}(\alpha ) =\sum _{K<n\leq L}2\pi nc(n)\cos (2\pi n\alpha )}$$

and

$$\displaystyle{\frac{d^{2}} {d\alpha ^{2}}h_{1}(\alpha ) = -\sum _{K<n\leq L}4\pi ^{2}n^{2}c(n)\sin (2\pi n\alpha ).}$$

By Parseval’s identity, for every ε > 0 we get

$$\displaystyle{\int _{0}^{1}\left \vert \frac{d} {d\alpha }h_{1}(\alpha )\right \vert ^{2}d\alpha = O\left (L^{1+2\epsilon }\right )}$$

and by the Cauchy–Schwarz inequality, it follows that

$$\displaystyle{ \int _{0}^{1}\left \vert \frac{d} {d\alpha }h_{1}(\alpha )\right \vert d\alpha = O\left (L^{1/2+\epsilon }\right ). }$$
(4.1)

We now define the set I as the set of all subscripts i for which the closed interval [α i , α i+1] contains an α with

$$\displaystyle{\vert h_{1}(\alpha ) - h_{1}(\alpha _{i})\vert > t^{-(1+1/100)}.}$$

Since

$$\displaystyle{ h_{1}(\alpha ) = h_{1}(\alpha _{i}) +\int _{ \alpha _{i}}^{\alpha }\frac{d} {d\beta }h_{1}(\beta )d\beta }$$
(4.2)

and

$$\displaystyle{\vert \alpha -\alpha _{i}\vert = O\left (t^{-9/5}\right ),}$$

it follows that for i ∈ I there must exist β ∈ (α i , α i+1) with

$$\displaystyle{\left \vert \frac{d} {d\beta }h_{1}(\beta )\right \vert \geq t^{3/5}.}$$

Because of the estimate for the Fourier coefficients of \(\frac{d^{2}} {d\alpha ^{2}} h_{1}(\alpha )\), we obtain

$$\displaystyle{\left \vert \frac{d^{2}} {d\alpha ^{2}}h_{1}(\alpha )\right \vert = O\left (L^{2+\epsilon }\right ).}$$

Analogously to (4.2) we obtain that

$$\displaystyle{\left \vert \frac{d} {d\alpha }h_{1}(\alpha )\right \vert \geq \frac{1} {2}t^{3/5},}$$

for every α ∈ [α i , α i+1] and therefore

$$\displaystyle{\int _{\alpha _{i}}^{\alpha _{i+1} }\left \vert \frac{d} {d\alpha }h_{1}(\alpha )\right \vert d\alpha \geq \frac{1} {2}t^{3/5}(\alpha _{ i+1} -\alpha _{i}).}$$

From (4.1) we obtain that the measure of the union of the closed intervals [α i , α i+1] with i ∈ I is O(t −1∕100), which concludes the proof of the lemma.

Lemma 4.14

We have

$$\displaystyle{\lim _{t\rightarrow +\infty }\varPhi (t) =\lim _{t\rightarrow -\infty }\varPhi (t) = 0.}$$

Proof

We shall prove the result only for t → +∞, since the proof for t → −∞ is analogous.

By Lemma 4.12, we have

$$\displaystyle{\varPhi (t) =\int _{ 0}^{1}e\left (\frac{tg(\alpha )} {2\pi } \right )d\alpha =\int _{ 0}^{1}e\left ( \frac{t} {2\pi }(g(\alpha,K) + h_{1}(\alpha ))\right )d\alpha + O\left (t^{-1/100}\right )}$$

and thus

$$\displaystyle\begin{array}{rcl} \varPhi (t) =\int _{ 0}^{1}e\left (\frac{tg(\alpha )} {2\pi } \right )d\alpha & =& \sum _{\begin{array}{c}i=0 \\ i\not\in I\end{array}}^{R}e\left (\frac{th_{1}(\alpha _{i})} {2\pi } \right )\int _{\alpha _{i}}^{\alpha _{i+1} }e\left (\frac{tg(\alpha,K)} {2\pi } \right )d\alpha {}\\ & +& \sum _{\begin{array}{c}i=0 \\ i\not\in I\end{array}}^{R}\int _{ \alpha _{i}}^{\alpha _{i+1} }e\left (\frac{tg(\alpha,K)} {2\pi } \right )\left (e\left (\frac{th_{1}(\alpha )} {2\pi } \right ) - e\left (\frac{th_{1}(\alpha _{i})} {2\pi } \right )\right )d\alpha {}\\ & +& O\left (\sum _{i\in I}(\alpha _{i+1} -\alpha _{i})\right ) + O\left (t^{-1/100}\right ). {}\\ \end{array}$$

From Lemma 4.13 we get

$$\displaystyle{ \varPhi (t)=\int _{0}^{1}e\left (\frac{tg(\alpha )} {2\pi } \right )d\alpha =\sum _{\begin{array}{c}i=0 \\ i\not\in I\end{array}}^{R}e\left (\frac{th_{1}(\alpha _{i})} {2\pi } \right )\int _{\alpha _{i}}^{\alpha _{i+1} }e\left (\frac{tg(\alpha,K)} {2\pi } \right )\:d\alpha +O\left (t^{-1/100}\right ). }$$
(4.3)

We now estimate

$$\displaystyle{\int _{\alpha _{i}}^{\alpha _{i+1} }e\left (\frac{tg(\alpha,K)} {2\pi } \right )d\alpha,}$$

for i ∉ I. Let \(J_{i} - 1\) be the number of discontinuities of the function g(α, K) in the interval [α i , α i+1]. Let \(\beta _{i,0} =\alpha _{i}\) , \(\beta _{i,J_{i}} =\alpha _{i+1}\) and let the discontinuities of g(α, K) in [α i , α i+1] occur at the points \(\beta _{i,1} <\beta _{i,2} < \cdots <\beta _{i,J_{i}-1}\).

On each interval \([\beta _{i,r},\:\beta _{i,r+1}]\) the function g(α, K) is linear, that is

$$\displaystyle{g(\alpha,K) = d_{r} - 2K\alpha,}$$

where \(d_{r} \in \mathbb{R}\). Therefore,

$$\displaystyle\begin{array}{rcl} \left \vert \int _{\alpha _{i}}^{\alpha _{i+1} }e\left (\frac{tg(\alpha,K)} {2\pi } \right )d\alpha \right \vert & \leq & \sum _{r=0}^{J_{i} }\left \vert \int _{\beta _{i,r}}^{\beta _{i,r+1} }e\left (\frac{tg(\alpha,K)} {2\pi } \right )d\alpha \right \vert \\ &\leq & \sum _{r=0}^{J_{i} }\left \vert \int _{\beta _{i,r}}^{\beta _{i,r+1} }e\left (-\frac{tK\alpha } {\pi } \right )d\alpha \right \vert \\ & =& O\left (J_{i}(tK)^{-1}\right ). {}\end{array}$$
(4.4)

From (4.3) and (4.4), we get

$$\displaystyle\begin{array}{rcl} \left \vert \int _{0}^{1}e\left (\frac{tg(\alpha )} {2\pi } \right )d\alpha \right \vert & \leq & \sum _{\begin{array}{c}i=0 \\ i\not\in I\end{array}}^{R}\left \vert \int _{\alpha _{ i}}^{\alpha _{i+1} }e\left (\frac{tg(\alpha,K)} {2\pi } \right )d\alpha \right \vert + O\left (t^{-1/100}\right ) {}\\ & =& O\left ((tK)^{-1}\sum _{ \begin{array}{c}i=0\\ i\not\in I\end{array}}^{R}J_{ i}\right ) + O\left (t^{-1/100}\right ). {}\\ \end{array}$$

The number of discontinuities of g(α, K) is O(K 2), since each of the K terms

$$\displaystyle{\frac{B^{{\ast}}(l\alpha )} {l} }$$

has O(K) discontinuities in the interval [0, 1]. We thus have

$$\displaystyle{\sum _{i=0}^{R}J_{ i} = O(K^{2}).}$$

Hence, since \((tK)^{-1}K^{2} = K/t = O\left (t^{-1/10}\right )\), we get

$$\displaystyle{\varPhi (t) = O\left (t^{-1/100}\right ).}$$

Therefore

$$\displaystyle{\lim _{t\rightarrow +\infty }\varPhi (t) = 0.}$$

Similarly, we obtain

$$\displaystyle{\lim _{t\rightarrow -\infty }\varPhi (t) = 0,}$$

which completes the proof of the lemma.

Lemma 4.15

F is a continuous function of z.

Proof

This follows from Lemmas 4.8 and 4.14.

Thus, part (i) of Theorem 3.2 is now proved.