1 Introduction

1.1 Lattice Points

Let \( \Gamma \subset {\mathbb {R}}^s \) be a lattice, i.e., a discrete subgroup of \({\mathbb {R}}^s \) with compact quotient \( {\mathbb {R}}^s/\Gamma \), and let \(\det \Gamma = \mathrm{vol}({\mathbb {R}}^s/\Gamma )\) denote its covolume. Let \(N_1,\ldots ,N_s>0\) be reals, \(\mathbf{N}= (N_1,\ldots ,N_s)\), \(B_{\mathbf{N}} =[0,N_1)\times \cdots \times [0,N_s) \), \(\mathrm{vol}( B_{\mathbf{N}} )\) the volume of \(B_{\mathbf{N}}\), \(tB_{\mathbf{N}} \) the dilation of \( B_{\mathbf{N}} \) by a factor \( t>0\), \(tB_{\mathbf{N}}+\mathbf{x}\) the translation of \(tB_{\mathbf{N}} \) by a vector \( \mathbf{x}\in {\mathbb {R}}^s\), \((x_1,\ldots ,x_s) \cdot (y_1,\ldots ,y_s) =(x_1y_1,\ldots ,x_s y_s)\), and let \((x_1,\ldots ,x_s) \cdot B_{\mathbf{N}} =\{ (x_1,\ldots ,x_s) \cdot (y_1,\ldots ,y_s) \; | \; (y_1,\ldots ,y_s) \in B_{\mathbf{N}} \} \). Let

$$\begin{aligned} {\mathcal {N}}(B_{\mathbf{N}} + \mathbf{x},\Gamma )=\#\big ((B_{\mathbf{N}} + \mathbf{x})\cap \Gamma \big )=\sum _{\varvec{\gamma }\in \Gamma } \mathbbm {1}_{B_{\mathbf{N}} +\mathbf{x}} (\varvec{\gamma }) \end{aligned}$$
(1.1)

be the number of points of the lattice \( \Gamma \) lying inside the translated parallelepiped \(B_{\mathbf{N}} +\mathbf{x}\), where we denote by \( \mathbbm {1}_{B_{\mathbf{N}} +\mathbf{x}} (\varvec{\gamma })\) the indicator function of \( B_{\mathbf{N}} +\mathbf{x}\). We define the error \({\mathcal { R}}(B_{\mathbf{N}}+\mathbf{x},\Gamma )\) by setting

$$\begin{aligned} {\mathcal {N}}(B_{\mathbf{N}}+\mathbf{x},\Gamma )= (\det \Gamma )^{-1} \mathrm{vol}( B_{\mathbf{N}}) \;+\;{\mathcal { R}}(B_{\mathbf{N}}+\mathbf{x},\Gamma ). \end{aligned}$$
(1.2)

Let \(\mathrm{Nm} ( \mathbf{x})= x_1 x_2 \ldots x_s\) for \(\mathbf{x}=(x_1, \dots , x_s) \). The lattice \( \Gamma \subset {\mathbb {R}}^s \) is admissible if

$$\begin{aligned} \mathrm{Nm} \;\Gamma =\inf _{\varvec{\gamma }\in \Gamma \setminus \{0\}} | \mathrm{Nm} (\varvec{\gamma })| >0. \end{aligned}$$
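
For instance, for \(s=2\) the lattice \(\sigma ({\mathbb {Z}}[\sqrt{2}])=\{(a+b\sqrt{2},\,a-b\sqrt{2}) \; | \; a,b \in {\mathbb {Z}}\}\) is admissible, since \(|\mathrm{Nm} (\sigma (a+b\sqrt{2}))|=|a^2-2b^2| \ge 1\) for \((a,b) \ne (0,0)\), whereas \({\mathbb {Z}}^2\) is not admissible (its nonzero points on the coordinate axes have \(\mathrm{Nm} =0\)). The following small Python sketch (an illustration only, with our own function names and ad hoc window sizes; it is not used anywhere in the proofs) checks this numerically and evaluates the counting function (1.1) and the error (1.2) for a sample box.

```python
import math

SQRT2 = math.sqrt(2.0)
det_gamma = 2.0 * SQRT2  # covolume of sigma(Z[sqrt(2)]): |det [[1, 1], [sqrt2, -sqrt2]]|

def lattice_points(a_range, b_range):
    """Embed xi = a + b*sqrt(2) via sigma(xi) = (a + b*sqrt2, a - b*sqrt2)."""
    for a in a_range:
        for b in b_range:
            yield (a + b * SQRT2, a - b * SQRT2)

def count_in_box(N1, N2, x=(0.0, 0.0), window=200):
    """N(B_N + x, Gamma) of (1.1), by brute force over a finite window of (a, b)."""
    cnt = 0
    rng = range(-window, window + 1)
    for (g1, g2) in lattice_points(rng, rng):
        if x[0] <= g1 < x[0] + N1 and x[1] <= g2 < x[1] + N2:
            cnt += 1
    return cnt

# Admissibility check on a finite window: inf |Nm| over nonzero points is 1.
norms = [abs(a * a - 2 * b * b) for a in range(-50, 51) for b in range(-50, 51) if (a, b) != (0, 0)]
print("min |Nm| on window:", min(norms))          # 1  (admissible)

# Error term R of (1.2) for a sample box.
N1, N2 = 30.0, 25.0
cnt = count_in_box(N1, N2)
print("N =", cnt, " vol/det =", N1 * N2 / det_gamma, " R =", cnt - N1 * N2 / det_gamma)
```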

Let \( \Gamma \) be an admissible lattice. In 1994, Skriganov [27] proved the following theorem:

Theorem A

Let \(\mathbf{t}=(t_1,\ldots ,t_s)\). Then

$$\begin{aligned} | {\mathcal { R}}(\mathbf{t}\cdot [-1/2,1/2)^s +\mathbf{x}, \Gamma ) | < c_0 (\Gamma ) \log _2^{s-1} (2+ |\mathrm{Nm} (\mathbf{t})|), \end{aligned}$$
(1.3)

where the constant \( c_0(\Gamma )\) depends on the lattice \( \Gamma \) only through the invariants \( \det \Gamma \) and \(\mathrm{Nm} \; \Gamma \).

In [27, p. 205], Skriganov conjectured that the bound (1.3) is the best possible. In this paper we prove this conjecture.

Let \({\mathcal {K}}\) be a totally real algebraic number field of degree \(s \ge 2\), and let \(\sigma \) be the canonical embedding of \({\mathcal {K}}\) in the Euclidean space \({\mathbb {R}}^s\), \( \sigma : {\mathcal {K}}\ni \xi \rightarrow \sigma (\xi ) = (\sigma _1(\xi ), \ldots , \sigma _s (\xi )) \in {\mathbb {R}}^s \), where \( \{\sigma _j \}_{j=1}^s \) are the \(s\) distinct embeddings of \({\mathcal {K}}\) in the field \( {\mathbb {R}}\) of real numbers. Let \(N_{{\mathcal {K}}/{\mathbb {Q}}}(\xi )\) be the norm of \(\xi \in {\mathcal {K}}\). By [6, p. 404],

$$\begin{aligned} N_{{\mathcal {K}}/{\mathbb {Q}}}(\xi ) = \sigma _1 (\xi ) \cdots \sigma _s (\xi ) \quad \mathrm{and} \quad |N_{{\mathcal {K}}/{\mathbb {Q}}}(\alpha )| \ge 1 \end{aligned}$$

for all algebraic integers \(\alpha \in {\mathcal {K}}\setminus \{ 0 \}\). We see that \( |\mathrm{Nm} (\sigma (\xi ))|= | N_{{\mathcal {K}}/{\mathbb {Q}}}(\xi )|\). Let \({\mathcal {M}}\) be a full \({\mathbb {Z}}\)-module in \( {\mathcal {K}}\) and let \(\Gamma _{{\mathcal {M}}}\) be the lattice corresponding to \({\mathcal {M}}\) under the embedding \(\sigma \). Let \(c_{{\mathcal {M}}}>0\) be such that \((c_{{\mathcal {M}}})^{-1}\) is an integer and \((c_{{\mathcal {M}}})^{-1} \gamma \) is an algebraic integer for all \( \gamma \in {\mathcal {M}}\). Since \((c_{{\mathcal {M}}})^{-1} \gamma \) is a nonzero algebraic integer for \(\gamma \in {\mathcal {M}}\setminus \{0\}\), we get \(|\mathrm{Nm} (\sigma (\gamma ))| = c_{{\mathcal {M}}}^s |N_{{\mathcal {K}}/{\mathbb {Q}}}((c_{{\mathcal {M}}})^{-1}\gamma )| \ge c_{{\mathcal {M}}}^s\). Hence

$$\begin{aligned} \mathrm{Nm} \;\Gamma _{{\mathcal {M}}} \ge c_{{\mathcal {M}}}^s. \end{aligned}$$

Therefore, \(\Gamma _{{\mathcal {M}}} \) is an admissible lattice. In the following, we will use the notation \(\Gamma =\Gamma _{{\mathcal {M}}}\) and \(N =N_1N_2 \cdots N_s \ge 2\). In Sect. 2 we will prove the following theorem:

Theorem 1

With the above notations, there exists \(c_1({\mathcal {M}})>0\) such that

$$\begin{aligned} \sup _{\varvec{\theta }\in [0,1]^s} |{\mathcal { R}}( B_{\varvec{\theta }\cdot \mathbf{N}} +\mathbf{x}, \Gamma _{{\mathcal {M}}}) |\ge c_1({{\mathcal {M}}})\log _2^{s-1} N \end{aligned}$$
(1.4)

for all \(\mathbf{x}\in {\mathbb {R}}^{s}\).

In [15, Chap. 5], Lang considered the lattice point problem in the adelic setting. In [15, 25], the upper bound for the lattice point remainder problem in parallelotopes was found. In a forthcoming paper, we will prove that the lower bound (1.4) can be extended to the adelic case (see [18]). Namely, we will prove that the upper bound in [25] is exact for the case of totally real algebraic number fields.

1.2 Low Discrepancy Sequences

Let \((\beta _{k,N})_{k=0}^{N-1}\) be an N-point set in the s-dimensional unit cube \([0,1)^s\), \(B_{\mathbf{y}}=[0,y_1) \times \cdots \times [0,y_s) \),

$$\begin{aligned} \Delta (B_{\mathbf{y}}, (\beta _{k,N})_{k=0}^{N-1} )= \#\{0 \le k <N \;|\; \beta _{k,N}\in B_{\mathbf{y}}\}-Ny_1 \ldots y_s. \end{aligned}$$
(1.5)

We define the star discrepancy of an N-point set \((\beta _{k,N})_{k=0}^{N-1}\) as

$$\begin{aligned} {D}^{*}(N)={D}^{*}((\beta _{k,N})_{k=0}^{N-1}) = \sup _{ 0<y_1, \ldots , y_s \le 1} \; \big | \frac{1}{ N} \Delta (B_{\mathbf{y}},(\beta _{k,N})_{k=0}^{N-1}) \big |. \end{aligned}$$
(1.6)
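
As a rough numerical illustration of (1.5) and (1.6) (a sketch of ours, with the supremum restricted to a finite grid of corners \(\mathbf{y}\), so it only gives a lower approximation of \({D}^{*}\)):

```python
import itertools

def local_discrepancy(points, y):
    """Delta(B_y, (beta_k)) of (1.5): counting measure minus N * vol(B_y)."""
    s = len(y)
    inside = sum(1 for p in points if all(p[i] < y[i] for i in range(s)))
    vol = 1.0
    for yi in y:
        vol *= yi
    return inside - len(points) * vol

def star_discrepancy_grid(points, grid=20):
    """Coarse lower approximation of D*: restrict the sup in (1.6) to a grid of corners."""
    N, s = len(points), len(points[0])
    corners = [(j + 1) / grid for j in range(grid)]
    best = 0.0
    for y in itertools.product(corners, repeat=s):
        best = max(best, abs(local_discrepancy(points, y)) / N)
    return best

# Example: the 2-D point set (k/N, {k*phi}) related to a Kronecker sequence.
phi = (5 ** 0.5 - 1) / 2
N = 144
pts = [(k / N, (k * phi) % 1.0) for k in range(N)]
print("grid approximation of D*:", star_discrepancy_grid(pts))
```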

In 1954, Roth proved that there exists a constant \( {\dot{c}}_1>0 \) such that

$$\begin{aligned} N{D}^{*}\big ((\beta _{k,N})_{k=0}^{N-1}\big )>{\dot{c}}_1(\ln N)^{\frac{s-1}{ 2}}, \end{aligned}$$

for all N-point sets \((\beta _{k,N})_{k=0}^{N-1}\).

Definition 1

A sequence of point sets \(((\beta _{k,N})_{k=0}^{N-1})_{N=1}^{\infty }\) is of low discrepancy (abbreviated l.d.p.s.) if \( {D}^{*}((\beta _{k,N})_{k=0}^{N-1})=O(N^{-1}(\ln N)^{s-1}) \) for \( N \rightarrow \infty \).

For examples of l.d.p.s., see, e.g., [3, 10, 27]. We now consider lower bounds for l.d.p.s. According to a well-known conjecture (see, e.g., [3, p. 283]), there exists a constant \( {\dot{c}}_2>0 \) such that

$$\begin{aligned} {N{D}^{*}\big ((\beta _{k,N})_{k=0}^{N-1}\big ) >{\dot{c}}_2 (\ln N)^{s-1}} \end{aligned}$$
(1.7)

for all N-point sets \((\beta _{k,N})_{k=0}^{N-1}\). In 1972, W. Schmidt proved this conjecture for \( s=2 \). In 1989, Beck [1] proved that \(N{D}^{*}(N) \ge {\dot{c}} \ln N (\ln \ln N)^{1/8-\epsilon }\) for \(s=3\) and some \({\dot{c}}>0\). In 2008, Bilyk et al. (see [4, p. 147], [5, p. 2]) proved that in all dimensions \(s \ge 3\) there exist \({\dot{c}}(s), \eta >0\) for which the following estimate holds for all N-point sets: \(N{D}^{*} (N)>{\dot{c}}(s)(\ln N)^{\frac{s-1}{2} +\eta }\).

There exists another conjecture on the lower bound for the discrepancy function: there exists a constant \({\dot{c}}_3>0 \) such that

$$\begin{aligned} {N{D}^{*}\big ((\beta _{k,N})_{k=0}^{N-1}\big ) >{\dot{c}}_3 (\ln N)^{s/2}} \end{aligned}$$
(1.8)

for all N-point sets \((\beta _{k,N})_{k=0}^{N-1}\) (see [4, p. 147], [5, p. 3] and [8, p. 153]).

Let \({\mathcal {W}}= (\Gamma _{{\mathcal {M}}} +\mathbf{x}) \cap \big ([0,1)^{s-1} \times [0,\infty )\big )\). We enumerate \({\mathcal {W}}\) by the sequence \((z_{1,k}(\mathbf{x}), z_{2,k}(\mathbf{x}))\) with \(z_{1,k}(\mathbf{x}) \in [0,1)^{s-1} \), \(z_{2,k}(\mathbf{x}) \in [0,\infty ) \), and \(z_{2,i}(\mathbf{x}) <z_{2,j}(\mathbf{x})\) for \(i <j\). In [27], Skriganov proved that the point set \((\beta _{k,N}(\mathbf{x}))_{k=0}^{N-1}\) with \(\beta _{k,N}(\mathbf{x}) =(z_{1,k}(\mathbf{x}), z_{2,k}(\mathbf{x})/z_{2,N}(\mathbf{x})) \) is of low discrepancy (see also [17]). In Sect. 2.10 we will prove

Theorem 2

With the notations as above, there exists \(c_2({\mathcal {M}})>0\) such that

$$\begin{aligned} N {D}^{*}\big ((\beta _{k,N}(\mathbf{x}))_{k=0}^{N-1}\big ) \ge c_2({\mathcal {M}})\log _2^{s-1} N \end{aligned}$$
(1.9)

for all \(\mathbf{x}\in {\mathbb {R}}^{s}\).

This result supports conjecture (1.7). In [19, 20], we proved that (1.9) is also true for the Halton sequence and for \((t,s)\)-sequences.
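
To make Skriganov’s construction described before Theorem 2 concrete, here is a small Python sketch for \({\mathcal {K}}={\mathbb {Q}}(\sqrt{2})\), \({\mathcal {M}}={\mathcal {O}}\), and \(\mathbf{x}=0\) (an illustration only, with our own variable names and ad hoc window sizes):

```python
import math

SQRT2 = math.sqrt(2.0)

def strip_points(window=400):
    """Points of sigma(Z[sqrt(2)]) lying in [0,1) x [0, infinity), x = 0."""
    pts = []
    for a in range(-window, window + 1):
        for b in range(-window, window + 1):
            z1, z2 = a + b * SQRT2, a - b * SQRT2
            if 0.0 <= z1 < 1.0 and z2 >= 0.0:
                pts.append((z1, z2))
    # enumerate W by increasing second coordinate, as in the text
    pts.sort(key=lambda p: p[1])
    return pts

def skriganov_point_set(N, strip):
    """beta_{k,N} = (z_{1,k}, z_{2,k} / z_{2,N}),  k = 0, ..., N-1."""
    z2N = strip[N][1]
    return [(strip[k][0], strip[k][1] / z2N) for k in range(N)]

strip = strip_points()
pts = skriganov_point_set(128, strip)
print(pts[:5])   # first few points of the low discrepancy set
```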

We note that the constant \(c_2\) depends on the chosen module \({\mathcal {M}}\). Hence we get a lower bound for translations of one concrete lattice. We do not know whether \(c_2({\mathcal {M}})\) is uniformly bounded from below over all modules \({\mathcal {M}}\). However, conjecture (1.7) seems more likely than conjecture (1.8), because of the following result of Beck [2]:

Consider the Kronecker lattice \( \{ (n,n\alpha _1 +m_1,\ldots ,n\alpha _{s-1} + m_{s-1}) \; | \; (n, m_1,\ldots ,m_{s-1})\in {\mathbb {Z}}^{s}\}\) and the corresponding Kronecker sequence \({\mathcal {P}}_N =\{(\{n \alpha _1\},\ldots ,\{n\alpha _{s-1}\}, n/N)\}_{n=0}^{N-1}\), where \( \alpha =(\alpha _1, \ldots ,\alpha _{s-1})\in {\mathbb {R}}^{s-1}\). Then for almost all \(\alpha \in {\mathbb {R}}^{s-1}\) we have \({D}({\mathcal {P}}_N) >c(s) (\log N)^{s-1} \log \log N\), with a uniform constant c(s) depending only on the dimension s.

2 Proof of Theorems

In this paper we consider the fundamental units of the field \({\mathcal {K}}\) and the associated toral automorphisms \(A_1,\ldots ,A_{s-1}\). Applying a profound result of Chevalley [9], we construct a Hecke character corresponding to \(A_1,\ldots ,A_{s-1}\).

The main idea of this paper is to express the essential part of the normalized discrepancy function as a truncated L-function with the above Hecke character. Using the non-vanishing property of the L-function, we obtain the assertion of Theorem 1.

Let us describe the main steps of the proof of Theorem 1:

In Sect. 2.1, we use the Poisson summation formula and the standard trick of ‘smoothing’. This allows us to express the discrepancy function \({\mathcal { R}}_{\theta }\) as an absolutely convergent Fourier series. Next we decompose the domain of summation into three parts, and we obtain that \({\mathcal { R}}_{\theta } ={\mathcal {A}}_{\theta }+{\mathcal {B}}_{\theta }+{\mathcal {C}}_{\theta }\). Using the expectation function E, we get \(\sup _{\theta } |{\mathcal { R}}_{\theta }| \ge |E({\mathcal {A}}_{\theta })| -|E({\mathcal {B}}_{\theta })| -|E({\mathcal {C}}_{\theta })|\). Hence, to obtain the assertion of Theorem 1, it is sufficient to find the lower bound of \(|E({\mathcal {A}}_{\theta })|\) and the upper bounds of \(|E({\mathcal {B}}_{\theta })|\) and \(|E({\mathcal {C}}_{\theta })|\).

In Sect. 2.2, we consider the fundamental domain of the field \({\mathcal {K}}\). We apply [30] to estimate the error term in the lattice point problem for a compact convex body. We use these results to compute the difference between an L-function and the corresponding truncated L-function, and also to estimate the size of the domain of summation in the Fourier series of \({\mathcal {A}}_{\theta }\).

In Sect. 2.3, we use the Chevalley theorem [9] to construct a special Hecke character.

In Sect. 2.4, we consider the truncated L-function \(\vartheta \) with the above Hecke character. Using the estimates of Sect. 2.2 and the non-vanishing property of the L-function, we obtain the lower bound of \(\vartheta \).

In Sect. 2.5, we find the lower bound of \(|E({\mathcal {A}}_{\theta })|\). First, we decompose the domain of summation into seven parts, and we get that \({\mathcal {A}}_{\theta }={\mathcal {A}}_0+{\mathcal {A}}_1+\cdots +{\mathcal {A}}_6\). Using results of Sect. 2.2, we estimate \(|E({\mathcal {A}}_1)|+\cdots +|E({\mathcal {A}}_6)|\). In addition, we decompose \({\mathcal {A}}_0\) into several parts and we select the main part \({\mathcal {A}}_7(\Gamma ^{\bot } + \mathbf{x})\). Lemma 12 is the main result of this subsection. Let \(\Gamma ^{\bot } = A{\mathbb {Z}}^s\), \({\dot{Z}}_p=\{(a_1,\ldots ,a_s)^{\top } \; | \; a_i \in \{0,1,\ldots ,p-1\},\; i=1,\ldots ,s \}\), and \(\varLambda _p =A {\dot{Z}}_p\), where p is obtained from the Chevalley theorem (see Theorem C). In Lemma 12, we prove that \(p^{-s}\sum _{\mathbf{b}\in \varLambda _p} |{\mathcal {A}}_7(\Gamma ^{\bot } + \mathbf{b}/p)|^2\) may be estimated from below by a part of the corresponding L-function. Next, using results of Sect. 2.4, we get the lower bound of \(|E({\mathcal {A}}_{\theta })|\).

In Sect. 2.6, we cite some inequalities from [27].

In Sect. 2.7, we use the dyadic decomposition method (see, e.g., [27]) to obtain convenient expressions for \(E({\mathcal {B}}_{\theta })\) and \(E({\mathcal {C}}_{\theta })\).

In Sect. 2.8, we apply inequalities from Sect. 2.6 to obtain the upper bound estimate for \(|E({\mathcal {B}}_{\theta })|\).

In Sect. 2.9, we apply the Koksma–Hlawka inequality and Theorem A to obtain the upper bound estimate for \(|E({\mathcal {C}}_{\theta })|\).

2.1 Poisson Summation Formula

It is known that the set \({\mathcal {M}}^{\bot }\) of all \(\beta \in {\mathcal {K}}\), for which \( \mathrm{Tr}_{{\mathcal {K}}/ {\mathbb {Q}}}(\alpha \beta ) \in {\mathbb {Z}}\) for all \(\alpha \in {\mathcal {M}}\), is also a full \({\mathbb {Z}}\)-module (the dual of the module \({\mathcal {M}}\)) of the field \({\mathcal {K}}\) (see [6, p. 94]). Recall that the dual lattice \( \Gamma _{\mathcal {M}}^\bot \) consists of all vectors \( \varvec{\gamma }^\bot \in {\mathbb {R}}^{s}\) such that the inner product \(\langle \varvec{\gamma }^\bot ,\varvec{\gamma }\rangle \) belongs to \({\mathbb {Z}}\) for each \(\varvec{\gamma }\in \Gamma \). Hence \(\Gamma _{{\mathcal {M}}^{\bot }} =\Gamma _{\mathcal {M}}^{\bot }\). Let \({\mathcal {O}}\) be the ring of integers of the field \({\mathcal {K}}\), and let \(a {\mathcal {M}}^{\bot } \subseteq {\mathcal {O}}\) for some \(a \in {\mathbb {Z}}\setminus \{0\}\). By (1.1), we have \( {\mathcal {N}}(B_{\mathbf{N}} +\mathbf{x},\Gamma _{{\mathcal {M}}} ) = {\mathcal {N}}( a^{-1}B_{\mathbf{N}} +a^{-1}\mathbf{x},\Gamma _{a^{-1}{\mathcal {M}}} )\). Therefore, to prove Theorem 1 it suffices to consider only the case \({\mathcal {M}}^{\bot } \subseteq {\mathcal {O}}\). We set

$$\begin{aligned} p_1 = \min \{ b \in {\mathbb {Z}}\; | \; b {\mathcal {O}}\subseteq {\mathcal {M}}^{\bot } \subseteq {\mathcal {O}}, \;b>0 \}. \end{aligned}$$
(2.1)

We will use the same notations for elements of \({\mathcal {O}}\) and \(\Gamma _{{\mathcal {O}}}\). Let \({\mathcal {D}}_{{\mathcal {M}}}\) be the ring of coefficients of the full module \({\mathcal {M}}\), let \({\mathcal {U}}_{{\mathcal {M}}}\) be the group of units of \({\mathcal {D}}_{{\mathcal {M}}}\), and let \(\eta _{1},\ldots , \eta _{s-1}\) be a system of fundamental units of \({\mathcal {U}}_{{\mathcal {M}}}\). According to the Dirichlet theorem (see e.g., [6, p. 112]), every unit \(\varepsilon \in {\mathcal {U}}_{{\mathcal {M}}}\) has a unique representation in the form

$$\begin{aligned} \varepsilon = (-1)^a\eta _{1}^{a_1} \cdots \eta _{s-1}^{a_{s-1}}, \end{aligned}$$
(2.2)

where \(a_1,\ldots ,a_{s-1}\) are rational integers and \(a \in \{0,1\}\). It is easy to prove (see e.g. [19, Lemma 1]) that there exists a constant \(c_3>1\) such that for all \(\mathbf{N}\) there exists \(\eta (\mathbf{N}) \in {\mathcal {U}}_{{\mathcal {M}}}\) with \( N_i^{'} N^{-1/s} \in [1/c_3, c_3]\), where \( N_i^{'} = N_i |\sigma _i(\eta (\mathbf{N}))|\), \( i=1,\ldots ,s \), and \(N=N_1 \cdots N_s\). Let \(\sigma (\eta (\mathbf{N})) = ( \sigma _1(\eta (\mathbf{N})),\ldots ,\sigma _s(\eta (\mathbf{N})))\). We see that \(\sigma (\eta (\mathbf{N})) \cdot (\varvec{\theta }\cdot B_{\mathbf{N}} +\mathbf{x}) = \varvec{\theta }\cdot B_{\mathbf{N}^{'}} +\mathbf{x}_1\) and

$$\begin{aligned} \varvec{\gamma }\in \Gamma _{\mathcal {M}}\cap (\varvec{\theta }\cdot B_{\mathbf{N}} +\mathbf{x}) \Leftrightarrow \varvec{\gamma }\cdot \sigma (\eta (\mathbf{N})) \in \Gamma _{\mathcal {M}}\cap (\varvec{\theta }\cdot B_{\mathbf{N}^{'}} +\mathbf{x}_1 ), \end{aligned}$$

with \(\mathbf{x}_1= \sigma (\eta (\mathbf{N})) \cdot \mathbf{x}+ \sigma (\eta (\mathbf{N})) \cdot \mathbf{N}/2 -\mathbf{N}^{'}/2\). Hence

$$\begin{aligned} {\mathcal {N}}(\varvec{\theta }\cdot B_{\mathbf{N}} +\mathbf{x},\Gamma _{\mathcal {M}}) = {\mathcal {N}}(\varvec{\theta }\cdot B_{\mathbf{N}^{'}} +\mathbf{x}_1,\Gamma _{\mathcal {M}}). \end{aligned}$$

By (1.2), we have

$$\begin{aligned} {\mathcal { R}}(\varvec{\theta }\cdot B_{\mathbf{N}} +\mathbf{x},\Gamma _{\mathcal {M}}) = {\mathcal { R}}(\varvec{\theta }\cdot B_{\mathbf{N}^{'}} +\mathbf{x}_1 ,\Gamma _{\mathcal {M}}). \end{aligned}$$

Therefore, without loss of generality, we can assume that

$$\begin{aligned} N_i N^{-1/s} \in [1/c_3, c_3], \quad i=1,\ldots ,s. \end{aligned}$$
(2.3)
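
The normalization (2.3) can be illustrated in the simplest case (a sketch under our own naming, for \({\mathcal {M}}={\mathcal {O}}={\mathbb {Z}}[\sqrt{2}]\) and the norm-one unit \(\eta =3+2\sqrt{2}\)): multiplying the side lengths by \(|\sigma _i(\eta ^k)|\) for a suitable \(k \in {\mathbb {Z}}\) makes both sides comparable to \(N^{1/2}\) while keeping the product \(N\) fixed.

```python
import math

SQRT2 = math.sqrt(2.0)
ETA = (3.0 + 2.0 * SQRT2, 3.0 - 2.0 * SQRT2)   # sigma(eta), Nm(eta) = +1

def balance(N1, N2):
    """Choose k so that N_i' = N_i * |sigma_i(eta^k)| are both close to sqrt(N1*N2)."""
    target = math.sqrt(N1 * N2)
    # k is roughly log(target/N1) / log(sigma_1(eta)); try a few neighbouring integers
    k0 = round(math.log(target / N1) / math.log(ETA[0]))
    best = None
    for k in (k0 - 1, k0, k0 + 1):
        n1, n2 = N1 * ETA[0] ** k, N2 * ETA[1] ** k
        # n1 * n2 = N1 * N2, so controlling max(n1, n2)/target also controls the minimum
        ratio = max(n1, n2) / target
        if best is None or ratio < best[0]:
            best = (ratio, k, n1, n2)
    return best

print(balance(1000.0, 4.0))   # N = 4000; both sides end up within a bounded factor of sqrt(4000)
```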

Note that in this paper O-constants and constants \(c_1,c_2,\ldots \) depend only on \({\mathcal {M}}\).

We shall need the Poisson summation formula:

$$\begin{aligned} \det \Gamma \sum _{\varvec{\gamma }\in \Gamma } f(\varvec{\gamma }-\mathbf{x}) = \sum _{\varvec{\gamma }\in \Gamma ^\bot } \widehat{f}(\varvec{\gamma }) e(\langle \varvec{\gamma },\mathbf{x}\rangle ), \end{aligned}$$
(2.4)

where

$$\begin{aligned} \widehat{f}(\mathbf{y}) = \int _{{\mathbb {R}}^s}{ f(\mathbf{x}) e(\langle \mathbf{y},\mathbf{x}\rangle ) \mathrm{d}\mathbf{x}} \end{aligned}$$

is the Fourier transform of \(f(\mathbf{x})\), and \(e(x) =\mathrm{exp}(2\pi \sqrt{-1}x), \langle \mathbf{y},\mathbf{x}\rangle = y_1x_1+ \cdots +y_s x_s\). Formula (2.4) holds if one of the functions f or \(\widehat{f}\) is integrable and belongs to the class \(C^{\infty }\) (see e.g. [28, p. 251]); both sides are then periodic functions of \(\mathbf{x}\) with period lattice \(\Gamma \).

Let \( {\widehat{\mathbbm {1}}}_{B_{ \mathbf{N}}}(\varvec{\gamma })\) be the Fourier transform of the indicator function \(\mathbbm {1}_{B_{ \mathbf{N}}}(\varvec{\gamma })\). It is easy to prove that \({\widehat{\mathbbm {1}}}_{B_{ \mathbf{N}}}(\mathbf{0}) = N_1\cdots N_s\) and

$$\begin{aligned} {\widehat{\mathbbm {1}}}_{B_{ \mathbf{N}}}(\varvec{\gamma }) = \prod _{i=1}^s \frac{e( N_i \gamma _i) -1}{2\pi \sqrt{-1}\gamma _i} = \prod _{i=1}^s \frac{\sin (\pi N_i \gamma _i)}{ \pi \gamma _i} e\big (\sum _{i=1}^s N_i \gamma _i/2 \big ) \quad \mathrm{for}\quad \mathrm{Nm}(\varvec{\gamma }) \ne 0. \end{aligned}$$
(2.5)
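
Indeed, for each \(i\) with \(\gamma _i \ne 0\), a direct integration over \([0,N_i)\) gives

$$\begin{aligned} \int _0^{N_i} e(\gamma _i x_i)\, \mathrm{d}x_i = \frac{e( N_i \gamma _i) -1}{2\pi \sqrt{-1}\,\gamma _i} = \frac{\sin (\pi N_i \gamma _i)}{ \pi \gamma _i}\, e( N_i \gamma _i/2 ), \end{aligned}$$

and taking the product over \(i=1,\ldots ,s\) yields (2.5).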

We fix a nonnegative even function \( \omega (x),\; x \in {\mathbb {R}},\) of the class \( C^\infty \), with support inside the segment \( [-1/2, 1/2] \), and satisfying the condition \( \int _{{\mathbb {R}}}\omega (x)\mathrm{d}x=1\). We set \(\varOmega (\mathbf{x}) = \omega ( x_1) \cdots \omega ( x_s) \), \(\varOmega _{\tau }(\mathbf{x}) = \tau ^{-s} \varOmega (\tau ^{-1} x_1,\ldots ,\tau ^{-1} x_s) \), \(\tau >0\), and

$$\begin{aligned} \widehat{\varOmega } (\mathbf{y}) = \int _{{\mathbb {R}}^s }e(\langle \mathbf{y},\mathbf{x}\rangle ) \varOmega (\mathbf{x})\mathrm{d}\mathbf{x}. \end{aligned}$$
(2.6)

Notice that the Fourier transform \(\widehat{\varOmega }_{\tau }( \mathbf{y}) =\widehat{\varOmega } (\tau \mathbf{y})\) of the function \( \varOmega _{\tau }(\mathbf{y})\) satisfies the bound

$$\begin{aligned} |\widehat{\varOmega } (\tau \mathbf{y})|<{\dot{c}}(s, \omega )(1+ \tau |\mathbf{y}|)^{-2s} . \end{aligned}$$
(2.7)

It is easy to see that

$$\begin{aligned} \widehat{\varOmega }(\mathbf{y}) =\widehat{\varOmega }(\mathbf{0}) +O( |\mathbf{y}|) = 1+O( |\mathbf{y}|) \quad \mathrm{for} \quad |\mathbf{y}| \rightarrow 0. \end{aligned}$$
(2.8)
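
One standard choice for \(\omega \) (a sketch only; the argument does not depend on the particular \(\omega \)) is the normalized bump \(\omega (x)=C\exp (-1/(1-4x^2))\) on \((-1/2,1/2)\). The Python fragment below (our own naming) checks the normalization and the rapid decay (2.7) of \(\widehat{\varOmega }\) numerically for \(s=1\).

```python
import math

def omega_raw(x):
    """Unnormalized C-infinity bump supported in [-1/2, 1/2]."""
    return math.exp(-1.0 / (1.0 - 4.0 * x * x)) if abs(x) < 0.5 else 0.0

NPTS = 10001
H = 1.0 / (NPTS - 1)
GRID = [-0.5 + H * j for j in range(NPTS)]

# normalize so that the integral over R equals 1 (simple Riemann sum)
C = 1.0 / (H * sum(omega_raw(x) for x in GRID))

def omega(x):
    return C * omega_raw(x)

def omega_hat(y):
    """Fourier transform  int omega(x) e(xy) dx  (real, since omega is even)."""
    return H * sum(omega(x) * math.cos(2.0 * math.pi * x * y) for x in GRID)

print("integral of omega:", H * sum(omega(x) for x in GRID))    # ~1
for y in (0.0, 1.0, 5.0, 20.0):
    print("omega_hat(%5.1f) = %.3e" % (y, omega_hat(y)))         # rapid decay in y
```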

Lemma 1

There exists a constant \(c>0\) such that for \( N> c\) we have

$$\begin{aligned} |{\mathcal { R}}(B_{\varvec{\theta }\cdot \mathbf{N}}+\mathbf{x},\Gamma ) - \ddot{{\mathcal { R}}} (B_{\varvec{\theta }\cdot \mathbf{N}}+\mathbf{x},\Gamma )| \le 2^s, \end{aligned}$$

where

$$\begin{aligned} \ddot{{\mathcal { R}}} (B_{\varvec{\theta }\cdot \mathbf{N}}+\mathbf{x},\Gamma ) = (\det \Gamma )^{-1} \sum _{\varvec{\gamma }\in \Gamma ^\bot \setminus \{0\}} {\widehat{\mathbbm {1}}}_{B_{\varvec{\theta }\cdot \mathbf{N}}}(\varvec{\gamma })\widehat{\varOmega } (\tau \varvec{\gamma })e(\langle \varvec{\gamma },\mathbf{x}\rangle ), \quad \tau =N^{-2}. \end{aligned}$$
(2.9)

Proof

Let \(B^{\pm \tau }_{\varvec{\theta }\cdot \mathbf{N}} = [0, \max (0,\theta _1N_1\pm \tau )) \times \cdots \times [0, \max (0,\theta _sN_s\pm \tau ))\), and let \(\mathbbm {1}_B(x)\) be the indicator function of B. We consider the convolutions of the functions \(\mathbbm {1}_{B^{\pm \tau }_{\varvec{\theta }\cdot \mathbf{N}}}( \varvec{\gamma })\) and \(\varOmega _{\tau }(\mathbf{y})\):

$$\begin{aligned} \varOmega _{\tau } *\mathbbm {1}_{B^{\pm \tau }_{\varvec{\theta }\cdot \mathbf{N}}} (\mathbf{x}) = \int _{{\mathbb {R}}^s} \varOmega _{\tau } (\mathbf{x}-\mathbf{y}) \mathbbm {1}_{B^{\pm \tau }_{\varvec{\theta }\cdot \mathbf{N}}} (\mathbf{y}) \mathrm{d}\mathbf{y}. \end{aligned}$$
(2.10)

It is obvious that the nonnegative functions (2.10) are of class \( C^\infty \) and are compactly supported in \(\tau \)-neighborhoods of the bodies \(B^{\pm \tau }_{\varvec{\theta }\cdot \mathbf{N}} \), respectively. We obtain

$$\begin{aligned} \mathbbm {1}_{B_{\varvec{\theta }\cdot \mathbf{N}}^{-\tau }} (\mathbf{x}) \le \mathbbm {1}_{B_{\varvec{\theta }\cdot \mathbf{N}}} (\mathbf{x}) \le \mathbbm {1}_{B_{\varvec{\theta }\cdot \mathbf{N}}^{+\tau }} (\mathbf{x}) , \;\; \mathbbm {1}_{B_{\varvec{\theta }\cdot \mathbf{N}}^{-\tau }} (\mathbf{x}) \le \varOmega _{\tau } *\mathbbm {1}_{B_{\varvec{\theta }\cdot \mathbf{N}} }(\mathbf{x}) \le \mathbbm {1}_{B_{\varvec{\theta }\cdot \mathbf{N}}^{+\tau }} (\mathbf{x}) . \end{aligned}$$
(2.11)

Replacing \(\mathbf{x}\) by \(\varvec{\gamma }-\mathbf{x}\) in (2.11) and summing these inequalities over \(\varvec{\gamma }\in \Gamma = \Gamma _{\mathcal {M}}\), we find from (1.1) that

$$\begin{aligned} {\mathcal {N}}(B^{-\tau }_{\varvec{\theta }\cdot \mathbf{N}}+\mathbf{x},\Gamma ) \le {\mathcal {N}}(B_{\varvec{\theta }\cdot \mathbf{N}}+\mathbf{x},\Gamma ) \le {\mathcal {N}}(B^{+\tau }_{ \varvec{\theta }\cdot \mathbf{N}}+\mathbf{x},\Gamma ), \end{aligned}$$

and

$$\begin{aligned} {\mathcal {N}}(B^{-\tau }_{\varvec{\theta }\cdot \mathbf{N}}+\mathbf{x},\Gamma ) \le \dot{{\mathcal {N}}} (B_{\varvec{\theta }\cdot \mathbf{N}}+\mathbf{x},\Gamma ) \le {\mathcal {N}}(B^{+\tau }_{\varvec{\theta }\cdot \mathbf{N}}+\mathbf{x},\Gamma ), \end{aligned}$$

where

$$\begin{aligned} \dot{{\mathcal {N}}} (B_{\varvec{\theta }\cdot \mathbf{N}}+\mathbf{x},\Gamma ) = \sum _{\varvec{\gamma }\in \Gamma } \varOmega _{\tau } *\mathbbm {1}_{B_{\varvec{\theta }\cdot \mathbf{N}}} (\varvec{\gamma }- \mathbf{x}). \end{aligned}$$
(2.12)

Hence

$$\begin{aligned}&-\, {\mathcal {N}}(B^{+ \tau }_{\varvec{\theta }\cdot \mathbf{N}}+\mathbf{x},\Gamma ) + {\mathcal {N}}(B^{- \tau }_{\varvec{\theta }\cdot \mathbf{N}}+\mathbf{x},\Gamma )\nonumber \\&\quad \le \dot{{\mathcal {N}}} (B_{\varvec{\theta }\cdot \mathbf{N}}+\mathbf{x},\Gamma ) - {\mathcal {N}}(B_{\varvec{\theta }\cdot \mathbf{N}}+\mathbf{x},\Gamma ) \le {\mathcal {N}}(B^{+ \tau }_{\varvec{\theta }\cdot \mathbf{N}}+\mathbf{x},\Gamma ) - {\mathcal {N}}(B^{- \tau }_{\varvec{\theta }\cdot \mathbf{N}}+\mathbf{x},\Gamma ) . \end{aligned}$$

Thus

$$\begin{aligned} |{\mathcal {N}}(B_{\varvec{\theta }\cdot \mathbf{N}}+\mathbf{x},\Gamma ) - \dot{{\mathcal {N}}} (B_{\varvec{\theta }\cdot \mathbf{N}}+\mathbf{x},\Gamma )| \le {\mathcal {N}}(B^{+ \tau }_{\varvec{\theta }\cdot \mathbf{N}}+\mathbf{x},\Gamma ) - {\mathcal {N}}(B^{- \tau }_{\varvec{\theta }\cdot \mathbf{N}}+\mathbf{x},\Gamma ) . \end{aligned}$$
(2.13)

Consider the right side of this inequality. We have that \(B^{+ \tau }_{\varvec{\theta }\cdot \mathbf{N}} \setminus B^{- \tau }_{\varvec{\theta }\cdot \mathbf{N}}\) is the union of boxes \(B^{(i)}, \; i=1,\ldots ,2^s-1\), where

$$\begin{aligned} \mathrm{vol}(B^{(i)})\le & {} \mathrm{vol}( B_{\mathbf{N}}^{+\tau }) - \mathrm{vol}(B_{\mathbf{N}}^{-\tau }) \le \prod _{i=1}^s (N_i + \tau ) - \prod _{i=1}^s (N_i - \tau )\nonumber \\\le & {} N \big ( \prod _{i=1}^s (1 + \tau ) - \prod _{i=1}^s (1 - \tau ) \big ) <\ddot{c}_s N \tau =\ddot{c}_s / N, \quad \tau =N^{-2}, \end{aligned}$$

with some \(\ddot{c}_s>0\). From (2.1), we get \({\mathcal {M}}\supseteq p_1^{-1}{\mathcal {O}}\). Hence \(|\mathrm{Nm}( \varvec{\gamma })| \ge p_1^{-s}\) for \(\varvec{\gamma }\in \Gamma _{{\mathcal {M}}} \setminus \mathbf{0}\). We see that \( |\mathrm{Nm}( \varvec{\gamma }_1 - \varvec{\gamma }_2 )| \le \mathrm{vol}(B^{(i)} +\mathbf{x}) < p_1^{-s} \) for \( \varvec{\gamma }_1, \varvec{\gamma }_2 \in B^{(i)} +\mathbf{x}\) and \( N>\ddot{c}_s p_1^{s} \). Therefore, the box \(B^{(i)}+\mathbf{x}\) contains at most one point of \(\Gamma _{\mathcal {M}}\) for \( N>\ddot{c}_s p_1^{s} \). By (2.13), we have

$$\begin{aligned} |\dot{{\mathcal {N}}} (B_{\varvec{\theta }\cdot \mathbf{N}}+\mathbf{x},\Gamma ) - {\mathcal {N}}(B_{\varvec{\theta }\cdot \mathbf{N}}+\mathbf{x},\Gamma ) | \le 2^s -1 \quad \mathrm{for} \quad N>\ddot{c}_s p_1^{s} . \end{aligned}$$
(2.14)

Let

$$\begin{aligned} \dot{{\mathcal { R}}} (B_{\varvec{\theta }\cdot \mathbf{N}}+\mathbf{x},\Gamma ) = \dot{{\mathcal {N}}} (B_{\varvec{\theta }\cdot \mathbf{N}}+\mathbf{x},\Gamma ) - \frac{ \mathrm{vol}(B_{\varvec{\theta }\cdot \mathbf{N}})}{\det \Gamma } . \end{aligned}$$
(2.15)

By (2.12), we obtain that \( \dot{{\mathcal {N}}} (B_{\varvec{\theta }\cdot \mathbf{N}}+\mathbf{x},\Gamma )\) is a periodic function of \( \mathbf{x}\in {\mathbb {R}}^s \) with the period lattice \( \Gamma \). Applying the Poisson summation formula to the series (2.12), and bearing in mind that \( \widehat{\varOmega }_{\tau } (\mathbf{y}) = \widehat{\varOmega } (\tau \mathbf{y})\), we get from (2.9)

$$\begin{aligned} \dot{{\mathcal { R}}} (B_{\varvec{\theta }\cdot \mathbf{N}}+\mathbf{x},\Gamma ) = \ddot{{\mathcal { R}}} (B_{\varvec{\theta }\cdot \mathbf{N}}+\mathbf{x},\Gamma ) . \end{aligned}$$

Note that (2.7) ensures the absolute convergence of the series (2.9) over \(\varvec{\gamma }\in \Gamma ^\bot \setminus \{0\}\). Using (1.2), (2.14) and (2.15), we obtain the assertion of Lemma 1. \(\square \)

Let \(\eta (t)=\eta (|t|)\), \(t \in {\mathbb {R}}\), be an even function of the class \(C^{\infty }\) such that \(\eta (t)=0\) for \(|t| \le 1\), \(0 \le \eta (t) \le 1\) for \(1 \le |t| \le 2\), and \(\eta (t)=1\) for \(|t| \ge 2\). Let \(n=s^{-1}\log _2 N \), \(M=[\sqrt{n}]\), and

$$\begin{aligned} \eta _M(\varvec{\gamma }) = 1-\eta (2|\mathrm{Nm}(\varvec{\gamma })|/M). \end{aligned}$$
(2.16)

By (2.5) and (2.9), we have

$$\begin{aligned} \dot{{\mathcal { R}}} (B_{\varvec{\theta }\cdot \mathbf{N}}+\mathbf{x},\Gamma )= (\pi ^s \det \Gamma )^{-1} ({\mathcal {A}}(\mathbf{x},M) +{\mathcal {B}}(\mathbf{x},M)), \end{aligned}$$
(2.17)

where

$$\begin{aligned} {\mathcal {A}}(\mathbf{x},M)= & {} \sum _{\varvec{\gamma }\in \Gamma ^\bot \setminus {\mathbf{0}} } \prod _{i=1}^s \sin (\pi \theta _i N_i \gamma _i) \frac{ \eta _M(\varvec{\gamma }) \widehat{\varOmega } (\tau \varvec{\gamma })e(\langle \varvec{\gamma },\mathbf{x}\rangle +\dot{x})}{\mathrm{Nm}(\varvec{\gamma })},\\ {\mathcal {B}}(\mathbf{x},M)= & {} \sum _{\varvec{\gamma }\in \Gamma ^\bot \setminus {\mathbf{0}} } \prod _{i=1}^s \sin (\pi \theta _i N_i \gamma _i) \frac{ (1- \eta _M(\varvec{\gamma })) \widehat{\varOmega } (\tau \varvec{\gamma })e(\langle \varvec{\gamma },\mathbf{x}\rangle +\dot{x})}{\mathrm{Nm}(\varvec{\gamma })}, \end{aligned}$$

with \(\dot{x} = \sum _{1 \le i \le s} \theta _i N_i \gamma _i/2 \). Let

$$\begin{aligned} \mathbf{E}(f) =\int _{[0,1]^s} f(\varvec{\theta })\mathrm{d}\varvec{\theta }. \end{aligned}$$

By the triangle inequality, we get

$$\begin{aligned} \pi ^s \det \Gamma \sup _{\varvec{\theta }\in [0,1]^s} | \dot{{\mathcal { R}}} (B_{\varvec{\theta }\cdot \mathbf{N}}+\mathbf{x},\Gamma ) | \ge |\mathbf{E}({\mathcal {A}}(\mathbf{x},M))| - |\mathbf{E}({\mathcal {B}}(\mathbf{x},M))|. \end{aligned}$$
(2.18)

In Sect. 2.5 we will find the lower bound of \(|\mathbf{E}({\mathcal {A}}(\mathbf{x},M))|\) and in Sect. 2.9 we will find the upper bound of \(|\mathbf{E}({\mathcal {B}}(\mathbf{x},M))|\).

2.2 The Logarithmic Space and the Fundamental Domain

We consider Dirichlet’s Unit Theorem (2.2) applied to the ring of integers \({\mathcal {O}}\). Let \(\varvec{\varepsilon }_1,\ldots , \varvec{\varepsilon }_{s-1}\) be a system of fundamental units of \({\mathcal {U}}_{{\mathcal {O}}}\). We set \(l_i(\mathbf{x}) = \ln |x_i|\), \(i=1,\ldots ,s\), \(\mathbf{l}(\mathbf{x}) =(l_1(\mathbf{x}),\ldots ,l_s(\mathbf{x}))\), \(\mathbf{1}=(1,\ldots ,1)\), where \(\mathbf{x}\in {\mathbb {R}}^s\) and \(\mathrm{Nm}(\mathbf{x}) \ne 0\). By [6, p. 311], the set of vectors \(\mathbf{1}, \mathbf{l}(\varvec{\varepsilon }_1),\ldots , \mathbf{l}(\varvec{\varepsilon }_{s-1})\) is a basis for \({\mathbb {R}}^s \). Any vector \(\mathbf{l}(\mathbf{x}) \in {\mathbb {R}}^s \) (\(\mathbf{x}\in {\mathbb {R}}^s, \; \mathrm{Nm}(\mathbf{x}) \ne 0\)) can be represented in the form

$$\begin{aligned} \mathbf{l}(\mathbf{x}) = \xi \mathbf{1} + \xi _1 \mathbf{l}(\varvec{\varepsilon }_1) + \cdots + \xi _{s-1} \mathbf{l}(\varvec{\varepsilon }_{s-1}), \end{aligned}$$
(2.19)

where \(\xi ,\xi _1,\ldots ,\xi _{s-1}\) are real numbers. We will need the following definition.

Definition 2

[6, p. 312] A subset \({\mathcal {F}}\) of the space \({\mathbb {R}}^s\) is called a fundamental domain for the field \({\mathcal {K}}\) if it consists of all points \(\mathbf{x}\) which satisfy the following conditions: \(\mathrm{Nm}(\mathbf{x}) \ne 0\); in the representation (2.19) the coefficients \(\xi _i\) \((i = 1, \ldots , s-1)\) satisfy the inequality \(0 \le \xi _i < 1\); and \(x_1 >0\).

Theorem B

[6, p. 312] In every class of associate numbers \(( \ne 0)\) of the field \({\mathcal {K}}\), there is one and only one number whose geometric representation in the space \({\mathbb {R}}^s\) lies in the fundamental domain \({\mathcal {F}}\) .

Lemma A

[30, p. 59, Thm. 2, Ref. 3] Let \(\dot{\Gamma } \subset {\mathbb {R}}^k\) be a lattice with \(\det \dot{\Gamma } =1\), let \({\mathcal {Q}}\subset {\mathbb {R}}^k\) be a compact convex body, and let r be the radius of the largest ball contained in the interior of \({\mathcal {Q}}\). Then

$$\begin{aligned} \mathrm{vol}( {\mathcal {Q}}) \big (1- \frac{\sqrt{k}}{2r}\big ) \le \# \dot{\Gamma } \cap {\mathcal {Q}}\le \mathrm{vol}( {\mathcal {Q}}) \big (1+ \frac{\sqrt{k}}{2r}\big ) , \end{aligned}$$

provided \(r> \sqrt{k}/2\).

Let \(\dot{\Gamma } \subset {\mathbb {R}}^k\) be an arbitrary lattice. We derive from Lemma A

$$\begin{aligned} \sup _{\mathbf{x}\in {\mathbb {R}}^k} | \#\big (\dot{\Gamma } \cap (t{\mathcal {Q}}+\mathbf{x})\big ) - t^k \mathrm{vol}( {\mathcal {Q}}) /\det \dot{\Gamma } | = O(t^{k-1}) \quad \mathrm{for} \quad t \rightarrow \infty . \end{aligned}$$
(2.20)

See also [11, pp. 141, 142].
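
A quick numerical illustration of (2.20) (a toy example of ours, with \(k=2\), \(\dot{\Gamma }={\mathbb {Z}}^2\) and \({\mathcal {Q}}\) the unit disc): the error between the lattice point count in \(t{\mathcal {Q}}+\mathbf{x}\) and \(t^2\mathrm{vol}({\mathcal {Q}})\) stays of order \(t\).

```python
import math

def count_disc(t, x=(0.3, 0.7)):
    """Number of Z^2 points in the dilated disc t*Q + x, Q = closed unit disc."""
    cnt = 0
    R = int(t) + 2
    for m in range(-R, R + 1):
        for n in range(-R, R + 1):
            if (m - x[0]) ** 2 + (n - x[1]) ** 2 <= t * t:
                cnt += 1
    return cnt

for t in (10, 20, 40, 80):
    err = count_disc(t) - math.pi * t * t
    print("t = %3d   error = %8.2f   error / t = %6.3f" % (t, err, err / t))
```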

Lemma 2

Let \(\varvec{\varepsilon }^{\mathbf{k}} = \varvec{\varepsilon }_1^{k_1} \cdots \varvec{\varepsilon }_{s-1}^{k_{s-1}}\) for \(\mathbf{k}=(k_1,\ldots ,k_{s-1}) \in {\mathbb {Z}}^{s-1}\), and let \( \varvec{\varepsilon }^{\mathbf{k}}_\mathrm{max} =\max _{1 \le i \le s}|(\varvec{\varepsilon }^{\mathbf{k}})_i|\) and \(\varvec{\varepsilon }^{\mathbf{k}}_\mathrm{min} = \min _{1 \le i \le s}|(\varvec{\varepsilon }^{\mathbf{k}})_i|\). There exist constants \(c_4,c_5>0\) such that

$$\begin{aligned} \# \{ \mathbf{k}\in {\mathbb {Z}}^{s-1} \; | \; \varvec{\varepsilon }^{\mathbf{k}}_\mathrm{max} \le e^{t} \} = c_4t^{s-1} + O(t ^{s-2} ) \end{aligned}$$
(2.21)

and

$$\begin{aligned} \# \{ \mathbf{k}\in {\mathbb {Z}}^{s-1} \; | \; \varvec{\varepsilon }^{\mathbf{k}}_\mathrm{min} \ge e^{-t} \} = c_5t^{s-1} + O(t ^{s-2} ) . \end{aligned}$$
(2.22)

Proof

By (2.19), we have that the left hand sides of (2.21) and (2.22) are equal to

$$\begin{aligned} \# \big \{ \mathbf{k}\in {\mathbb {Z}}^{s-1} \big | \; \sum _{i=1}^{s-1} k_i l_j (\varvec{\varepsilon }_i) \le t, \; j=1,\ldots ,s \big \}, \end{aligned}$$

and

$$\begin{aligned} \# \big \{ \mathbf{k}\in {\mathbb {Z}}^{s-1} \big | \; \sum _{i=1}^{s-1} k_i l_ j(\varvec{\varepsilon }_i) \ge -t, \; j=1,\ldots ,s \big \}, \end{aligned}$$

respectively. Let

$$\begin{aligned} {\mathcal {Q}}_1 = \{ \mathbf{x}\in {\mathbb {R}}^{s-1} | \dot{x_j} \le 1, \; j \in [1,s] \} \; \; \mathrm{and} \;\; {\mathcal {Q}}_2 = \{ \mathbf{x}\in {\mathbb {R}}^{s-1} | \dot{x_j} \ge -1,\; j \in [1,s]\}, \end{aligned}$$

with \(\dot{x_j} = x_1 l_j(\varvec{\varepsilon }_1) + \cdots + x_{s-1} l_j(\varvec{\varepsilon }_{s-1}) \). We see \(\dot{x_1}+\cdots +\dot{x_s} =0\). Hence \( \dot{x_j} \ge -s+1\) for \(\mathbf{x}\in {\mathcal {Q}}_1 \) and \( \dot{x_j} \le s-1\) for \(\mathbf{x}\in {\mathcal {Q}}_2 \) \((j=1,\ldots ,s )\). By [6, p. 115], we get \( \det \big (l_i(\varvec{\varepsilon }_j)\big )_{i,j=1,\ldots ,s-1} \ne 0\). Hence, \( {\mathcal {Q}}_i \) is a compact convex set in \({\mathbb {R}}^{s-1}\), \(i=1,2\). Applying (2.20) with \(k=s-1\), and \(\dot{\Gamma } ={\mathbb {Z}}^{s-1}\), we obtain the assertion of Lemma 2. \(\square \)

Let \( cl({\mathcal {K}})\) be the ideal class group of \({\mathcal {K}}\), \(h = \# cl({\mathcal {K}})\), and \( cl({\mathcal {K}}) = \{C_1,\ldots , C_h \}\). In the ideal class \(C_i\), we choose an integral ideal \({\mathfrak {a}}_i\), \(i=1,\ldots ,h\). Let \({\mathfrak {N}}({\mathfrak {a}})\) be the absolute norm of ideal \({\mathfrak {a}}\). If \(h=1\), then we set \(p_2=1\) and \(\Gamma _1= \Gamma _{{\mathcal {O}}}\). Let \(h>1\), \(i \in [1,h]\),

$$\begin{aligned} {\mathcal {M}}_i =\{ u \in {\mathcal {O}}\; | \; u \equiv 0\ \text {mod }{\mathfrak {a}}_i \}, \quad \Gamma _i = \sigma ({\mathcal {M}}_i), \;\; \mathrm{and}\;\; p_2 = \prod _{i=1}^h {\mathfrak {N}}({\mathfrak {a}}_i) . \end{aligned}$$
(2.23)

Lemma 3

Let \(w \ge 1\), \(i \in [1,h]\), \({\mathbb {F}}_{M_1}(\varvec{\varsigma }) =\{\mathbf{y}\in {\mathcal {F}}\;|\; |\mathrm{Nm}(\mathbf{y})| <M_1, \mathrm{sgn}(y_i) =\varsigma _i, i=1,\ldots ,s \}\), where \(\mathrm{sgn}(y) = y/|y|\) for \(y \ne 0\) and \(\varvec{\varsigma }=(\varsigma _1,\ldots ,\varsigma _s) \in \{ -1, 1\}^s\). Then there exists \(c_{6,i}>0\) such that

$$\begin{aligned} \sup _{\mathbf{x}\in {\mathbb {R}}^s} \big | \sum _{ \gamma \in (w \Gamma _{i} + \mathbf{x})\cap {\mathbb {F}}_{M_1}(\varvec{\varsigma })} 1 - c_{6,i} M_1/w^s \big | = O(M_1^{1-1/s}) \quad \mathrm{for} \quad M_1 \rightarrow \infty . \end{aligned}$$

Proof

It is easy to see that \({\mathbb {F}}_{M_1}(\varvec{\varsigma }) ={M_1}^{1/s}{\mathbb {F}}_1(\varvec{\varsigma })\). By [6, p. 312], the fundamental domain \({\mathcal {F}}\) is a cone in \({\mathbb {R}}^s\). Let \(\dot{{\mathbb {F}}} =\{ \mathbf{y}\in {\mathcal {F}}|\;|y_i| \le y_0, \mathrm{sgn}(y_i) =\varsigma _i, i=1,\ldots ,s \}\) and let \(\ddot{{\mathbb {F}}} =\{ \mathbf{y}\in \dot{{\mathbb {F}}}\;|\; |\mathrm{Nm}(\mathbf{y})| \ge 1\}\), where \(y_0 =\sup _{\mathbf{y}\in {\mathbb {F}}_1(\varvec{\varsigma }), i=1,\ldots ,s} |y_i|\). We see that \({\mathbb {F}}_1(\varvec{\varsigma }) = \dot{{\mathbb {F}}} \setminus \ddot{{\mathbb {F}}} \) and \(\dot{{\mathbb {F}}}, \ddot{{\mathbb {F}}} \) are compact convex sets. Using (2.20) with \(k=s\), \(\dot{\Gamma } =w \Gamma _{i} \), and \(t={M_1}^{1/s}\), we obtain the assertion of Lemma 3. \(\square \)

2.3 Construction of a Hecke Character by Using Chevalley’s Theorem

Let \({\mathfrak {m}}\) be an integral ideal of the number field \({\mathcal {K}}\), and let \({\mathcal {J}}^{{\mathfrak {m}}}\) be the group of all ideals of \({\mathcal {K}}\) which are relatively prime to \({\mathfrak {m}}\). Let \(S^1=\{z \in {\mathbb {C}}\; | \; |z| = 1\} \).

Definition 3

[24, p. 470] A Hecke character \(\text {mod }{\mathfrak {m}}\) is a character \(\chi :\; {\mathcal {J}}^{{\mathfrak {m}}} \rightarrow S^1\) for which there exists a pair of characters

$$\begin{aligned} \chi _f :\; ({\mathcal {O}}/{\mathfrak {m}})^{*} \rightarrow S^1, \qquad \chi _{\infty } :\; ({\mathbb {R}}^{*})^s \rightarrow S^1, \end{aligned}$$

such that

$$\begin{aligned} \chi ((a)) = \chi _f(a) \chi _{\infty } (a) \end{aligned}$$

for every algebraic integer \(a \in {\mathcal {O}}\) relatively prime to \({\mathfrak {m}}\).

The character taking the value one for all group elements will be called the trivial character.

Definition 4

Let \(A_1,\ldots ,A_d\) be invertible \(s\times s\) commuting matrices with integer entries. The matrices \(A_1,\ldots ,A_d\) are said to be partially hyperbolic if for all \((n_1,\ldots ,n_d) \in {\mathbb {Z}}^d \setminus \{0 \}\) none of the eigenvalues of \(A_1^{n_1}\cdots A_d^{n_d}\) are roots of unity.

We need the following variant of Chevalley’s theorem ([9], see also [29]):

Theorem C [13, p. 282, Th. 6.2.6] Let \(A_1,\ldots ,A_d \in GL(s,{\mathbb {Z}})\) be commuting partially hyperbolic matrices with determinants \( w_1,\ldots , w_d\), and let \(p^{(k)}\) be the product of the first k prime numbers relatively prime to \( w_1,\ldots , w_d\). If \( \mathbf{z},{\tilde{\mathbf{z}}} \in {\mathbb {Z}}^s\) and there are d sequences \( \{j_i^{(k)}\}_{k \ge 1}, \; 1 \le i \le d, \) of integers such that

$$\begin{aligned} A_1^{j_1^{(k)}} \cdots A_d^{j_d^{(k)}} {\tilde{\mathbf{z}}} \equiv \mathbf{z}\ (\text {mod }p^{(k)}), \qquad k=1,2,\ldots , \end{aligned}$$

then there exists a vector \((j_1^{(0)},\ldots ,j_d^{(0)}) \in {\mathbb {Z}}^d\) such that

$$\begin{aligned} A_1^{j_1^{(0)}} \cdots A_d^{j_d^{(0)}} {\tilde{\mathbf{z}}} = \mathbf{z}. \end{aligned}$$
(2.24)

Let

$$\begin{aligned} \mu = {\left\{ \begin{array}{ll} 1 &{} \mathrm{if} \; s \;\mathrm{is \;odd,}\\ 2 &{}\mathrm{if} \; s \;\mathrm{is \;even \;and\;} \not \exists \varvec{\varepsilon }\; \mathrm{with} \; N_{{\mathcal {K}}/Q} (\varvec{\varepsilon }) =-1, \\ 3 &{} \mathrm{if} \; s \;\mathrm{is \;even \;and\;} \exists \varvec{\varepsilon }\; \mathrm{with} \; N_{{\mathcal {K}}/Q} (\varvec{\varepsilon }) =-1. \end{array}\right. } \end{aligned}$$
(2.25)

Let \(\mu \in \{ 1,2 \}\). By [6, p. 117], we see that there exist units \(\varvec{\varepsilon }_{i} \in {\mathcal {U}}_{{\mathcal {O}}} \) with \(N_{{\mathcal {K}}/Q} (\varvec{\varepsilon }_i) =1\), \(i=1,\ldots ,s-1\), such that every \(\varvec{\varepsilon }\in {\mathcal {U}}_{{\mathcal {O}}} \) can be uniquely represented as follows:

$$\begin{aligned} \varvec{\varepsilon }= (-1)^a\varvec{\varepsilon }_{1}^{k_1} \cdots \varvec{\varepsilon }_{s-1}^{k_{s-1}} \quad \mathrm{with} \quad (k_1,\ldots ,k_{s-1}) \in {\mathbb {Z}}^{s-1}, \; a \in \{0,1\}. \end{aligned}$$
(2.26)

Let \(\mu =3\). By [6, p. 117], there exist units \(\varvec{\varepsilon }_{i} \in {\mathcal {U}}_{{\mathcal {O}}} \) with \(N_{{\mathcal {K}}/Q} (\varvec{\varepsilon }_i) =1\), \(i=1,\ldots ,s-1\) and \(N_{{\mathcal {K}}/Q} (\varvec{\varepsilon }_0) =-1\), such that every \(\varvec{\varepsilon }\in {\mathcal {U}}_{{\mathcal {O}}} \) can be uniquely represented as follows:

$$\begin{aligned} \varvec{\varepsilon }= (-1)^{a_1} \varvec{\varepsilon }_{0}^{a_2}\varvec{\varepsilon }_{1}^{k_1} \cdots \varvec{\varepsilon }_{s-1}^{k_{s-1}} \quad \mathrm{with} \quad (k_1,\ldots ,k_{s-1}) \in {\mathbb {Z}}^{s-1}, \; a_1,a_2 \in \{0,1\}. \end{aligned}$$
(2.27)

Consider the case \(\mu =1\). Let \( I_i =\mathrm{diag}((\sigma _j(\varvec{\varepsilon }_i))_{1 \le j \le s}), \; i=1,\ldots ,s-1 \), \(\Gamma _{{\mathcal {O}}}=\sigma ({\mathcal {O}})\), let \(\mathbf{f}_1,\ldots ,\mathbf{f}_s\) be a basis of \(\Gamma _{{\mathcal {O}}}\), and let \(\mathbf{e}_i=(0,\ldots ,1,\ldots ,0) \in {\mathbb {Z}}^s\), \(i=1,\ldots ,s\), be the standard basis of \({\mathbb {Z}}^s\). Let Y be the \(s \times s\) matrix with \(\mathbf{e}_i Y = \mathbf{f}_i\), \(i=1,\ldots ,s\). We have \({\mathbb {Z}}^s Y = \Gamma _{{\mathcal {O}}}\). Let \(A_i = Y I_i Y^{-1}\), \(i=1,\ldots ,s-1 \). Since multiplication by the unit \(\varvec{\varepsilon }_i\) maps \({\mathcal {O}}\) onto \({\mathcal {O}}\), i.e. \(\Gamma _{{\mathcal {O}}} I_i = \Gamma _{{\mathcal {O}}}\), we see \({\mathbb {Z}}^s A_i ={\mathbb {Z}}^s \) \((i=1,\ldots ,s-1) \). Hence, \(A_i\) is an integer matrix with \(\det A_i = \det I_i =1 \) \((i=1,\ldots ,s-1 ) \).
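
For \({\mathcal {K}}={\mathbb {Q}}(\sqrt{2})\) this construction can be checked directly (a numerical sketch of ours, not part of the proof): with \(\mathbf{f}_1=\sigma (1)\), \(\mathbf{f}_2=\sigma (\sqrt{2})\) and the norm-one unit \(\varvec{\varepsilon }=3+2\sqrt{2}\), the matrix \(A=YIY^{-1}\) is exactly the integer matrix of multiplication by \(\varvec{\varepsilon }\) in the basis \(\{1,\sqrt{2}\}\).

```python
import numpy as np

SQRT2 = np.sqrt(2.0)

# rows of Y are the basis vectors f_i of Gamma_O, so that e_i Y = f_i
Y = np.array([[1.0,   1.0],
              [SQRT2, -SQRT2]])

# I = diag(sigma_1(eps), sigma_2(eps)) for the unit eps = 3 + 2*sqrt(2), Nm(eps) = 1
I = np.diag([3.0 + 2.0 * SQRT2, 3.0 - 2.0 * SQRT2])

A = Y @ I @ np.linalg.inv(Y)
print(np.round(A, 10))          # [[3. 2.] [4. 3.]] -- an integer matrix
print(np.linalg.det(A))         # ~1.0 = Nm(eps)
```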

Let \({\tilde{\mathbf{z}}} =(1,\ldots ,1)\) and \(\mathbf{z}=-{\tilde{\mathbf{z}}} \). Let \(h>1\), and let \(A_s=p_2I\), where I is the identity matrix. Taking into account that \((\varvec{\varepsilon }_{1}^{k_1} \ldots \varvec{\varepsilon }_{s-1}^{k_{s-1}}p_2^{k_{s}})_j =1\) for some \(j \in [1,s]\) if and only if \(k_1= \cdots =k_{s} =0\), we get that \( A_{1},\ldots ,A_{s}\) are commuting partially hyperbolic matrices. By Definition 4, \(-1\) is not an eigenvalue of \( A_{1}^{k_1} \ldots A_{s}^{k_{s}}\), and \( {\tilde{\mathbf{z}}} A_{1}^{k_1} \ldots A_{s}^{k_{s}} \ne \mathbf{z}\) for all \((k_1,\ldots ,k_{s}) \in {\mathbb {Z}}^{s}\). Applying Theorem C with \(d=s\), we have that there exists an integer \(i \ge 1\) such that \((p_2,p^{(i)})=1\),

$$\begin{aligned} {\tilde{\mathbf{z}}} A_{1}^{k_1} \ldots A_{s-1}^{k_{s-1}} \not \equiv \mathbf{z}\;(\text {mod }p^{(i)}) \quad \mathrm{for \; all} \quad (k_1,\ldots ,k_{s-1}) \in {\mathbb {Z}}^{s-1}, \end{aligned}$$

and

$$\begin{aligned} (\varvec{\varepsilon }_{1}^{k_1} ...\varvec{\varepsilon }_{s-1}^{k_{s-1}})_j \not \equiv -1\; (\text {mod }p^{(i)}) \quad \mathrm{for \; all} \quad (k_1,\ldots ,k_{s-1}) \in {\mathbb {Z}}^{s-1}, \quad j\in [1,s]. \end{aligned}$$
(2.28)

We denote this \(p^{(i)}\) by \(p_3\). We have \((p_2,p_3)=1\). If \(h=1\), then we apply Theorem C with \(d=s-1\).

Let \({\mathfrak {p}}_3 =p_3{\mathcal {O}}\) and \({\mathbb {P}}= {\mathcal {O}}/{\mathfrak {p}}_3\). Denote the projection map \({\mathcal {O}}\rightarrow {\mathbb {P}}\) by \(\pi _1\). Let \({\mathcal {O}}^*\) be the set of all integers of \({\mathcal {O}}\) which are relatively prime to \({\mathfrak {p}}_3\), \({\mathbb {P}}^*=\pi _1({\mathcal {O}}^*)\),

$$\begin{aligned} {\mathcal {E}}_j = \{ v \in {\mathbb {P}}^* \; | \; \exists \; (k_1,\ldots ,k_{s-1}) \in {\mathbb {Z}}^{s-1} \; \mathrm{with} \; v \equiv (-1)^j \varvec{\varepsilon }_{1}^{k_1} \ldots \varvec{\varepsilon }_{s-1}^{k_{s-1}} (\text {mod }{\mathfrak {p}}_3) \}, \end{aligned}$$

where \(j =0,1\), and \({\mathcal {E}}={\mathcal {E}}_0 \cup {\mathcal {E}}_1\). By (2.28), \({\mathcal {E}}_0 \cap {\mathcal {E}}_1 =\emptyset \). Let

$$\begin{aligned} \chi _{1,p_3}(v) = (-1)^j \quad \mathrm{for} \quad v \in {\mathcal {E}}_j, \; j=0,1. \end{aligned}$$
(2.29)

We see that \(\chi _{1,p_3}\) is a character on the group \({\mathcal {E}}\). We need the following known assertion (see e.g. [12, p. 63], [14, p. 446, Chap. 8, Sect. 2, Ex. 4]):

Lemma B

Let \(\dot{G}\) be a finite abelian group, let \(\dot{H}\) be a subgroup of \(\dot{G}\), and let \(\chi _{\dot{H}}\) be a character of \(\dot{H}\). Then there exists a character \(\chi _{\dot{G}}\) of \(\dot{G}\) such that \(\chi _{\dot{H}}(h) = \chi _{\dot{G}}(h)\) for all \(h \in \dot{H}\).

Applying Lemma B, we can extend the character \(\chi _{1,p_3}\) to a character \(\chi _{2,p_3}\) of the group \({\mathbb {P}}^*\). Now we extend \(\chi _{2,p_3}\) to a character \(\chi _{3,p_3}\) of the group \({\mathcal {O}}^*\) by setting

$$\begin{aligned} \chi _{3,p_3}(v) = \chi _{2,p_3}(\pi _1(v)) \quad \mathrm{for} \quad v \in {\mathcal {O}}^*. \end{aligned}$$
(2.30)

Let

$$\begin{aligned} \chi _{4,p_3}(v) = \chi _{3,p_3}(v) \chi _{\infty }(v) \quad \mathrm{with} \quad \chi _{\infty }(v) =\mathrm{Nm}(v)/|\mathrm{Nm}(v)| , \end{aligned}$$

for \( v \in {\mathcal {O}}^*\), and let

$$\begin{aligned} \chi _{5,p_3}((v)) = \chi _{4,p_3}(v). \end{aligned}$$
(2.31)

We need to prove that the right-hand side of (2.31) does not depend on the choice of the generator of the principal ideal \((v)\), i.e., that it is unchanged when v is multiplied by a unit \(\varvec{\varepsilon }\in {\mathcal {U}}_{{\mathcal {O}}}\). Let \(\varvec{\varepsilon }=\varvec{\varepsilon }_{1}^{k_1} \ldots \varvec{\varepsilon }_{s-1}^{k_{s-1}}\). By (2.26), (2.29), and (2.30), we have \(\chi _{3,p_3}(\varvec{\varepsilon }) =1\), \(\mathrm{Nm}(\varvec{\varepsilon })=1\), and \(\chi _{\infty }(\varvec{\varepsilon })=1\). Therefore

$$\begin{aligned} \chi _{4,p_3}(v\varvec{\varepsilon }) = \chi _{3,p_3}(v\varvec{\varepsilon }) \chi _{\infty }(v\varvec{\varepsilon }) = \chi _{3,p_3}(v) \chi _{3,p_3}(\varvec{\varepsilon }) \chi _{\infty }(v)\chi _{\infty }(\varvec{\varepsilon }) = \chi _{3,p_3}(v) \chi _{\infty }(v). \end{aligned}$$

Now let \(\varvec{\varepsilon }=-1\). Bearing in mind that \(\chi _{3,p_3}(-1) =-1\), \(\mathrm{Nm}(-1)=-1\), and \(\chi _{\infty }(-1)=-1\), we obtain \( \chi _{4,p_3}(-1)=1\). Hence, (2.31) is well defined. Let \({\mathcal {I}}^{{\mathfrak {p}}_3}\) be the group of all principal ideals of \({\mathcal {K}}\) which are relatively prime to \({\mathfrak {p}}_3\). Let

$$\begin{aligned} \chi _{6,p_3}((v_1/v_2)) = \chi _{5,p_3}((v_1))/\chi _{5,p_3}((v_2)) \quad \mathrm{for} \quad v_1,v_2 \in {\mathcal {O}}^*. \end{aligned}$$

Let \({\mathcal {P}}^{{\mathfrak {p}}_3}\) be the group of fractional principal ideals (a) such that \(a \equiv 1\ \text {mod }{\mathfrak {p}}_3\) and \(\sigma _i(a) >0, \; i=1,\ldots ,s\). Let \(\pi _2: \; {\mathcal {I}}^{{\mathfrak {p}}_3} \rightarrow {\mathcal {I}}^{{\mathfrak {p}}_3}/ {\mathcal {P}}^{{\mathfrak {p}}_3}\) be the projection map. Bearing in mind that \(\chi _{6,p_3}({\mathfrak {a}}) = 1\) for \({\mathfrak {a}}\in {\mathcal {P}}^{{\mathfrak {p}}_3} \), we define

$$\begin{aligned} \chi _{7,p_3}(\pi _2({\mathfrak {a}})) = \chi _{6,p_3}({\mathfrak {a}}) \quad \mathrm{for} \quad {\mathfrak {a}}\in {\mathcal {I}}^{{\mathfrak {p}}_3} . \end{aligned}$$

By [23, p. 94, Lemma 3.3], \({\mathcal {J}}^{{\mathfrak {p}}_3}/ {\mathcal {P}}^{{\mathfrak {p}}_3}\) is a finite abelian group. Applying Lemma B, we extend the character \(\chi _{7,p_3}\) to a character \(\chi _{8,p_3}\) of the group \({\mathcal {J}}^{{\mathfrak {p}}_3}/ {\mathcal {P}}^{{\mathfrak {p}}_3}\). We have \(\chi _{8,p_3}({\mathfrak {a}}) = 1\) for \({\mathfrak {a}}\in {\mathcal {P}}^{{\mathfrak {p}}_3} \), and we set \( \chi _{9,p_3}({\mathfrak {a}}) = \chi _{8,p_3}(\pi _3({\mathfrak {a}})) \), where \(\pi _3\) is the projection map \({\mathcal {J}}^{{\mathfrak {p}}_3} \rightarrow {\mathcal {J}}^{{\mathfrak {p}}_3}/ {\mathcal {P}}^{{\mathfrak {p}}_3}\). It is easy to verify

$$\begin{aligned} \chi _{9,p_3}((v))= & {} \chi _{8,p_3}(\pi _3((v))) = \chi _{7,p_3}(\pi _3((v)))=\chi _{7,p_3}(\pi _2((v)))\nonumber \\= & {} \chi _{6,p_3}((v))=\chi _{4,p_3}(v) =\chi _{3,p_3}(v) \chi _{\infty }(v) \end{aligned}$$

for \(v \in {\mathcal {O}}^*\). Thus we have constructed a nontrivial Hecke character.

Case \(\mu =2\). We repeat the construction of the case \(\mu =1\), taking \(p_3=1\) and \( \chi _{4,p_3}((v)) = \mathrm{Nm}(v)/|\mathrm{Nm}(v)|\).

Case \(\mu =3\). Similarly to the case \(\mu =1\), we have that there exists \(i>0\) with

$$\begin{aligned} \varvec{\varepsilon }_{1}^{k_1} ...\varvec{\varepsilon }_{s-1}^{k_{s-1}} \not \equiv \varvec{\varepsilon }_0 (\text {mod }p^{(i)}) \quad \mathrm{for \; all} \quad (k_1,\ldots ,k_{s-1}) \in {\mathbb {Z}}^{s-1}. \end{aligned}$$
(2.32)

We denote this \(p^{(i)}\) by \(p_3\). Let

$$\begin{aligned} {\mathcal {E}}_j = \{ v \in {\mathbb {P}}^* \; | \; \exists \; (k_1,\ldots ,k_{s-1}) \in {\mathbb {Z}}^{s-1} \; \mathrm{with} \; v \equiv \varvec{\varepsilon }_0^j \varvec{\varepsilon }_{1}^{k_1} \cdots \varvec{\varepsilon }_{s-1}^{k_{s-1}} (\text {mod }p_3 {\mathcal {O}}) \}, \end{aligned}$$

where \(j =0,1\), and \({\mathcal {E}}={\mathcal {E}}_0 \cup {\mathcal {E}}_1\). By (2.32), \({\mathcal {E}}_0 \cap {\mathcal {E}}_1 =\emptyset \). Let

$$\begin{aligned} \chi _{2,p_3}(v) = (-1)^j \quad \mathrm{for} \quad v \in {\mathcal {E}}_j, \; j=0,1. \end{aligned}$$

Next, we repeat the construction of the case \(\mu =1\), and we verify that definition (2.31) remains well defined. Thus, we have proved the following lemma:

Lemma 4

Let \(\mu \in \{1,2,3\}\). There exist \(p_3 =p_3(\mu ) \ge 1\) with \((p_2,p_3)=1,\) a nontrivial Hecke character \(\dot{\chi }_{p_3}\), and a character \(\ddot{\chi }_{p_3}\) on the group \(({\mathcal {O}}/p_3{\mathcal {O}})^*\) such that

$$\begin{aligned} \dot{\chi }_{p_3} ((v)) =\tilde{\chi }_{p_3}(v) , \quad \mathrm{with } \quad \tilde{\chi }_{p_3}( v) = \ddot{\chi }_{p_3}(v) \mathrm{Nm}(v)/|\mathrm{Nm}(v)| , \end{aligned}$$

for \(v \in {\mathcal {O}}^*\), and \( \ddot{\chi }_{p_3}(v) =0 \) for \(( v, p_3{\mathcal {O}}) \ne 1\).

2.4 Non-vanishing of L-functions

With every Hecke character \(\chi \; \text {mod }{\mathfrak {m}}\), we associate its L-function

$$\begin{aligned} L(s, \chi )= \sum _{{\mathfrak {a}}} \frac{\chi ({\mathfrak {a}})}{{\mathfrak {N}}({\mathfrak {a}})^s} , \end{aligned}$$

where \({\mathfrak {a}}\) varies over the integral ideals of \({\mathcal {K}}\), and we put \(\chi ({\mathfrak {a}}) = 0\) whenever \(({\mathfrak {a}}, {\mathfrak {m}}) \ne 1\).
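
For \({\mathcal {K}}={\mathbb {Q}}\) this reduces to a classical Dirichlet L-function. As a toy illustration of the non-vanishing at the point 1 that is used below (our own example, unrelated to the character of Lemma 4), the partial sums of \(L(1,\chi )\) for the nontrivial character \(\chi \) mod 4 converge to \(\pi /4 \ne 0\):

```python
import math

def chi_mod4(n):
    """Nontrivial Dirichlet character mod 4."""
    return 0 if n % 2 == 0 else (1 if n % 4 == 1 else -1)

def partial_L1(x):
    """Partial sum of L(1, chi) over n < x."""
    return sum(chi_mod4(n) / n for n in range(1, x))

for x in (10 ** 2, 10 ** 4, 10 ** 6):
    print(x, partial_L1(x))
print("pi/4 =", math.pi / 4)
```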

Theorem D

[15, p. 313, Thm. 2] Let \(\chi \) be a nontrivial Hecke character. Then

$$\begin{aligned} L(1, \chi ) \ne 0. \end{aligned}$$

Theorem E

[21, p. 128, Thm. 10.1.4] Let \((a_k)_{k \ge 1}\) be a sequence of complex numbers, and let \( \sum _{k<x} a_k = O(x^{\delta })\) for some \(\delta > 0\). Then

$$\begin{aligned} \sum _{n \ge 1} a_n/n^s \end{aligned}$$
(2.33)

converges for \( \mathfrak {R}(s) > \delta \).

Theorem F

[23, p. 464, Prop. I] If the series (2.33) converges at a point \(s_0\), then it converges also in the open half-plane \( \mathfrak {R}s > \mathfrak {R}s_0 \), the convergence being uniform in every angle \( |\arg (s-s_0)| < c < \pi /2\). Thus (2.33) defines a function regular in \( \mathfrak {R}s > \mathfrak {R}s_0 \).

Let \(\mathbf{f}_1,\ldots ,\mathbf{f}_s\) be a basis of \(\Gamma _{{\mathcal {O}}}\), and let \(\mathbf{f}_1^{\bot },\ldots ,\mathbf{f}_s^{\bot }\) be a dual basis (i.e. \( \langle \mathbf{f}_i,\mathbf{f}_i^{\bot }\rangle =1\), \( \langle \mathbf{f}_i,\mathbf{f}_j^{\bot } \rangle =0,\) \(1 \le i,j \le s, \; i \ne j\)). Let

$$\begin{aligned} \varLambda _w = \{ a_1 \mathbf{f}_1^{\bot }+ \cdots + a_s \mathbf{f}_s^{\bot } \; | \; 0 \le a_i \le w-1, \; i=1,\ldots ,s \}, \end{aligned}$$
(2.34)

and \( \varLambda _w^* = \{ \mathbf{b}\in \varLambda _w\; | \; (w,\mathbf{b}) =1 \} \).

Lemma 5

With notations as above,

$$\begin{aligned} \rho (M,j):= \sum _{ \varvec{\gamma }\in \Gamma _{j} \cap {\mathcal {F}}, \; |\mathrm{Nm}(\varvec{\gamma })| < M/2} \tilde{\chi }_{ p_3}(\varvec{\gamma }) = O(M^{1-1/s}), \quad \quad j \in [1,h], \end{aligned}$$
(2.35)

and

$$\begin{aligned} \sum _{ {\mathfrak {N}}({\mathfrak {a}}) < M/2} \dot{\chi }_{ p_3} ({\mathfrak {a}}) = O(M^{1-1/s}) \end{aligned}$$
(2.36)

for \(M \rightarrow \infty \), where \({\mathfrak {a}}\) varies over the integral ideals of \({\mathcal {K}}\).

Proof

By Lemma 4, we have

$$\begin{aligned} \rho (M,j) = \sum _{\mathbf{a}\in \varLambda ^{*}_{p_3} } \ddot{\chi }_{ p_3}(\mathbf{a}) \sum _{\varsigma _i \in \{-1, +1\},\; i=1,\ldots ,s} \varsigma _1 \cdots \varsigma _s \dot{\rho }(\mathbf{a}, \varvec{\varsigma },j), \end{aligned}$$

where

$$\begin{aligned} \dot{\rho }(\mathbf{a}, \varvec{\varsigma },j) = \sum _{ \begin{array}{c} \varvec{\gamma }\in \Gamma _{j} \cap {\mathcal {F}}, \; \varvec{\gamma }\equiv \mathbf{a}\ \text {mod }p_3, \\ |\mathrm{Nm}(\varvec{\gamma })| < M/2, \;\mathrm{sgn}(\gamma _i) =\varsigma _i, \;i=1,\ldots ,s \end{array}} 1 . \end{aligned}$$

Using Lemma 3 with \(M_1=M/2\) and \(w=p_3\) , we get

$$\begin{aligned} \dot{\rho }(\mathbf{a}, \varvec{\varsigma },j) = \sum _{ \begin{array}{c} \varvec{\gamma }\in (p_3\Gamma _{j} +\mathbf{a}) \cap {\mathcal {F}}, \;|\mathrm{Nm}(\varvec{\gamma })| < M/2 \\ \mathrm{sgn}(\gamma _i) =\varsigma _i, \;i=1,\ldots ,s \end{array}} 1 =c_{6,j} M/(2p_3^s) + O(M^{1-1/s}). \end{aligned}$$

Therefore

$$\begin{aligned} \rho (M,j)= & {} \sum _{\mathbf{a}\in \varLambda ^{*}_{p_3} } \ddot{\chi }_{ p_3}(\mathbf{a}) \sum _{\varsigma _i \in \{-1, +1\},\; i=1,\ldots ,s} \varsigma _1 \cdots \varsigma _s (c_{6,j} M /(2p_3^s) + O(M^{1-1/s}))\nonumber \\= & {} O(M^{1-1/s}). \end{aligned}$$

The main terms cancel because \(c_{6,j}\) does not depend on \(\mathbf{a}\) or \(\varvec{\varsigma }\) and \(\sum _{\varvec{\varsigma }\in \{-1,1\}^s} \varsigma _1 \cdots \varsigma _s =0\). Hence, the assertion (2.35) is proved. The assertion (2.36) can be proved similarly (see also [7, p. 210, Thm. 1], [22, p. 142 and p. 144, Thm. 11.1.5]). \(\square \)

Lemma 6

There exists \(M_0>0\), \(i_0 \in [1,h]\), and \(c_{7}>0\), such that

$$\begin{aligned} |\rho _0(M,i_0)| \ge c_{7} \quad \mathrm{for} \quad M>M_0 \quad \mathrm{with} \quad \rho _0(M,i)=\sum _{ \varvec{\gamma }\in \Gamma _{i} \cap {\mathcal {F}}, \; |\mathrm{Nm}(\varvec{\gamma })| < M/2} \frac{\tilde{\chi }_{p_3}(\varvec{\gamma })}{|\mathrm{Nm}(\varvec{\gamma })|}. \end{aligned}$$

Proof

Let \( cl({\mathcal {K}}) = \{C_1,\ldots , C_h \}\), let \({\mathfrak {a}}_i \in C_i\) be an integral ideal, \(i=1,\ldots ,h\), and let \(C_1\) be the class of principal ideals. Consider the inverse ideal class \(C_i^{-1}\). We set \(\dot{{\mathfrak {a}}}_i =\{{\mathfrak {a}}_1,\ldots ,{\mathfrak {a}}_h \} \cap C_i^{-1}\). Then for any \({\mathfrak {a}}\in C_i\) the product \({\mathfrak {a}}\dot{{\mathfrak {a}}}_i\) will be a principal ideal: \({\mathfrak {a}}\dot{{\mathfrak {a}}}_i = (\alpha )\), \((\alpha \in {\mathcal {K}})\). By [6, p. 310], we have that the mapping \({\mathfrak {a}}\rightarrow (\alpha )\) establishes a one-to-one correspondence between the integral ideals \({\mathfrak {a}}\) of the class \(C_i\) and the principal ideals divisible by \(\dot{{\mathfrak {a}}}_i\). Let

$$\begin{aligned} \rho _1(M) = \sum _{{\mathfrak {N}}({\mathfrak {a}}) < M/2 } \dot{\chi }_{p_3}({\mathfrak {a}})/{\mathfrak {N}}({\mathfrak {a}}) . \end{aligned}$$

Similarly to [6, p. 311], we get

$$\begin{aligned} \rho _1(M) = \sum _{1 \le i \le h} \;\; \sum _{{\mathfrak {a}}\in C_i, {\mathfrak {N}}({\mathfrak {a}}) < M/2 } \frac{\dot{\chi }_{p_3}({\mathfrak {a}})}{{\mathfrak {N}}({\mathfrak {a}})} = \sum _{1 \le i \le h} \;\; \sum _{\begin{array}{c} {\mathfrak {a}}\in C_1, {\mathfrak {N}}({\mathfrak {a}}/ \dot{{\mathfrak {a}}}_i) < M/2 \\ {\mathfrak {a}}\equiv 0\ \text {mod }\dot{{\mathfrak {a}}}_i \end{array}} \frac{\dot{\chi }_{p_3}({\mathfrak {a}}/ \dot{{\mathfrak {a}}}_i)}{{\mathfrak {N}}({\mathfrak {a}}/ \dot{{\mathfrak {a}}}_i)} . \end{aligned}$$

Let

$$\begin{aligned} \rho _2(M,i) = \sum _{\begin{array}{c} {\mathfrak {a}}\in C_1, \; {\mathfrak {N}}({\mathfrak {a}}) < M/2 \\ {\mathfrak {a}}\equiv 0\ \text {mod }\dot{{\mathfrak {a}}}_i \end{array}} \dot{\chi }_{p_3}({\mathfrak {a}})/{\mathfrak {N}}({\mathfrak {a}}) . \end{aligned}$$

We see

$$\begin{aligned} \rho _1(M) = \sum _{1 \le i \le h} \frac{ \dot{\chi }_{p_3}(1/ \dot{{\mathfrak {a}}}_i)}{{\mathfrak {N}}(1/ \dot{{\mathfrak {a}}}_i)} \rho _2(M{\mathfrak {N}}( \dot{{\mathfrak {a}}}_i) ,i). \end{aligned}$$
(2.37)

By Lemma 4, we obtain \(\tilde{\chi }_{p_3}(\varvec{\gamma }) /|\mathrm{Nm}(\varvec{\gamma })| = \dot{\chi }_{p_3}((\varvec{\gamma })) /{\mathfrak {N}}((\varvec{\gamma }))\). Using Theorem B, we get \(\rho _0(M,i) = \rho _2(M,i)\). From (2.36), Theorem D, Theorem E, and Theorem F, we derive \( \rho _1(M) \mathop {\longrightarrow }\limits ^{M \rightarrow \infty } L(1, \dot{\chi }_{p_3}) \ne 0 \). By (2.35) and Theorem E, we obtain that there exists a complex number \(\rho _i\) such that \( \rho _0(M,i) \mathop {\longrightarrow }\limits ^{M \rightarrow \infty } \rho _i \), \(i=1,\ldots ,h\). Hence, there exists \(M_0>0\) such that

$$\begin{aligned} |L(1, \dot{\chi }_{p_3})|/2 \le |\rho _1(M)| \quad \mathrm{and} \quad |\rho _i -\rho _2(M,i)| \le |L(1, \dot{\chi }_{p_3})| (8 \beta )^{-1}, \end{aligned}$$
(2.38)

with \(\beta = \sum _{1 \le i \le h} {\mathfrak {N}}(\dot{{\mathfrak {a}}}_i)\) for \(M \ge M_0\). Let \(\rho = \max _{1 \le i \le h} |\rho _i| = |\rho _{i_0}| \).

Using (2.37), we have

$$\begin{aligned} |L(1, \dot{\chi }_{p_3})|/2 \le |\rho _1(M)|\le & {} \rho \beta + \big | \sum _{1 \le i \le h} \frac{ \dot{\chi }_{p_3}(1/ \dot{{\mathfrak {a}}}_i)}{{\mathfrak {N}}(1/ \dot{{\mathfrak {a}}}_i)} (\rho _i - \rho _2(M {\mathfrak {N}}( \dot{{\mathfrak {a}}}_i) ,i) ) \big |\nonumber \\\le & {} \rho \beta + |L(1, \dot{\chi }_{p_3})|/8 \quad \mathrm{for}\quad M>M_0. \end{aligned}$$

By (2.38), we get for \(M>M_0\)

$$\begin{aligned} \rho \ge |L(1, \dot{\chi }_{p_3})|(4 \beta )^{-1} \quad \mathrm{and} \quad |\rho _0(M,i_0)| = |\rho _2(M,i_0)| \ge |L(1, \dot{\chi }_{p_3})|(8 \beta )^{-1}. \end{aligned}$$

Therefore, Lemma 6 is proved. \(\square \)

Lemma 7

There exists \(M_2>0\) such that

$$\begin{aligned} |\vartheta | \ge c_{7}/2 \quad \mathrm{for} \quad M>M_2, \quad \mathrm{where} \quad \vartheta = \sum _{ \varvec{\gamma }\in \Gamma _{i_0} \cap {\mathcal {F}}} \frac{\ddot{\chi }_{p_3}(\varvec{\gamma }) \eta _M(\varvec{\gamma })}{\mathrm{Nm}(\varvec{\gamma })} . \end{aligned}$$

Proof

Let \(\dot{\eta }_M(k)=1-\eta (2|k|/M)\),

$$\begin{aligned} \vartheta _1= \sum _{\begin{array}{c} \varvec{\gamma }\in \Gamma _{i_0} \cap {\mathcal {F}}\\ |\mathrm{Nm}(\varvec{\gamma })| < M/2 \end{array}} \frac{\tilde{\chi }_{p_3}(\varvec{\gamma }) }{|\mathrm{Nm}(\varvec{\gamma })|} \quad \mathrm{and} \quad \vartheta _2 = \sum _{ \begin{array}{c} \varvec{\gamma }\in \Gamma _{i_0} \cap {\mathcal {F}}\\ M/2 \le |\mathrm{Nm}(\varvec{\gamma })| \le M \end{array}} \frac{\tilde{\chi }_{p_3}(\varvec{\gamma }) \dot{\eta }_M(\mathrm{Nm}(\varvec{\gamma }))}{|\mathrm{Nm}(\varvec{\gamma })|}. \end{aligned}$$

From (2.16), we get \(\eta _M(\varvec{\gamma }) =\dot{\eta }_M(\mathrm{Nm}(\varvec{\gamma }))\), \(\eta _M(\varvec{\gamma }) =1\) for \(|\mathrm{Nm}(\varvec{\gamma })| \le M/2\), and \(\eta _M(\varvec{\gamma }) =0\) for \(|\mathrm{Nm}(\varvec{\gamma })| \ge M\). Using Lemma 4, we derive

$$\begin{aligned} \vartheta = \sum _{ \varvec{\gamma }\in \Gamma _{i_0} \cap {\mathcal {F}}, \; |\mathrm{Nm}(\varvec{\gamma })| \le M} \frac{\tilde{\chi }_{p_3}(\varvec{\gamma }) \dot{\eta }_M(\mathrm{Nm}(\varvec{\gamma }))}{|\mathrm{Nm}(\varvec{\gamma })|} \quad \mathrm{and} \quad \vartheta =\vartheta _1 + \vartheta _2. \end{aligned}$$
(2.39)

Bearing in mind that \(\mathrm{Nm}(\varvec{\gamma }) \in {\mathbb {Z}}\) and \(\mathrm{Nm}(\varvec{\gamma }) \ne 0\), we have

$$\begin{aligned} \vartheta _2 = \sum _{ M/2 \le \dot{n} \le M} \frac{a_{\dot{n}} \dot{\eta }_M(\dot{n})}{\dot{n}} \quad \mathrm{with} \quad a_{\dot{n}} = \sum _{ \varvec{\gamma }\in \Gamma _{i_0} \cap {\mathcal {F}}, \; |\mathrm{Nm}(\varvec{\gamma })| =\dot{n}} \tilde{\chi }_{p_3}(\varvec{\gamma }) . \end{aligned}$$

Applying Abel’s transformation

$$\begin{aligned} \sum _{m < k \le \dot{n}} g_k f_k =g_{\dot{n}} F_{\dot{n}}- \sum _{m < k \le \dot{n}-1} (g_{k+1} -g_k)F_k , \quad \mathrm{where} \quad F_k= \sum _{m < i \le k} f_i, \end{aligned}$$

with \(f_k = a_k, \; g_k = \dot{\eta }_M(k)/k\) and \(F_k= \sum _{ \varvec{\gamma }\in \Gamma _{i_0} \cap {\mathcal {F}}, \; M/2 -0.1 < |\mathrm{Nm}(\varvec{\gamma })| \le k } \tilde{\chi }_{p_3}(\varvec{\gamma })\), we obtain

$$\begin{aligned} \vartheta _2 = \dot{\eta }_M(M) F_{M}/M - \sum _{M/2 -0.1< k \le M-1} (\dot{\eta }_M(k+1)/(k+1) - \dot{\eta }_M(k)/k) F_k . \end{aligned}$$
(2.40)

Bearing in mind that \( 0 \le \dot{\eta }_M(x) \le 1 \) and that \(\eta '(x) =O(1)\) for \(|x| \le 2\), we get

$$\begin{aligned} |\dot{\eta }_M(k+1)/(k+1) - \dot{\eta }_M(k)/k|\le & {} |\dot{\eta }_M(k+1)/(k+1) - \dot{\eta }_M(k+1)/k|\nonumber \\&+\,|(\dot{\eta }_M(k+1)-\dot{\eta }_M(k))/k|\nonumber \\\le & {} 1/k^2 + 2(kM)^{-1} \sup _{x \in [0,2]} |\eta '(x)| = O(k^{-2}). \end{aligned}$$
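In more detail, taking into account that \(F_k = O(M^{1-1/s}) \) (see (2.35)) and that the range of summation in (2.40) contains only \(O(M)\) values of \(k\), we get

$$\begin{aligned} |\vartheta _2| \le \frac{|F_{M}|}{M} + \sum _{M/2 -0.1< k \le M-1} O(k^{-2}) |F_k| = O\big ( M^{-1} M^{1-1/s}\big ) + O\big ( M \cdot M^{-2} M^{1-1/s}\big ) = O(M^{-1/s}). \end{aligned}$$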

Hence \(\vartheta _2 =O(M^{-1/s})\). Using Lemma 6 and (2.39), we obtain the assertion of Lemma 7. \(\square \)

2.5 The Lower Bound Estimate for \(\mathbf{E}({\mathcal {A}}(\mathbf{x},M))\)

Let \(n=s^{-1} \log _2 N\) with \(N=N_1 \cdots N_s\), \(\tau =N^{-2}\), \(M=[\sqrt{n}]\), and

$$\begin{aligned} G_0= & {} \{ \varvec{\gamma }\in \Gamma ^\bot \; | \; |\mathrm{Nm}(\varvec{\gamma })| > M \},\nonumber \\ G_1= & {} \{ \varvec{\gamma }\in \Gamma ^\bot \; | \; |\mathrm{Nm}(\varvec{\gamma })| \le M , \; \max _i |\gamma _i| \ge 1/\tau ^2 \},\nonumber \\ G_2= & {} \{ \varvec{\gamma }\in \Gamma ^\bot \; | \; |\mathrm{Nm}(\varvec{\gamma })| \le M , \; 1/\tau ^2 > \max _i |\gamma _i| \ge n/\tau \},\nonumber \\ G_3= & {} \{ \varvec{\gamma }\in \Gamma ^\bot \; | \; |\mathrm{Nm}(\varvec{\gamma })| \le M , \; n/\tau > \max _i |\gamma _i| \ge n^{-s}/ \tau \},\nonumber \\ G_4= & {} \{ \varvec{\gamma }\in \Gamma ^\bot \setminus {\mathbf{0}}\; | \; |\mathrm{Nm}(\varvec{\gamma })| \le M , \; \max _i|\gamma _i| <n^{-s}\tau ^{-1},\;\; n^{-s} > N^{1/s}\min _i|\gamma _i| \},\nonumber \\ G_5= & {} \{ \varvec{\gamma }\in \Gamma ^\bot | \; |\mathrm{Nm}(\varvec{\gamma })| \le M , \; \max _i|\gamma _i| <n^{-s}\tau ^{-1} , \; N^{1/s} \min _i|\gamma _i| \in [ n^{-s},n^{s}] \},\nonumber \\ G_6= & {} \{ \varvec{\gamma }\in \Gamma ^\bot \; | \; |\mathrm{Nm}(\varvec{\gamma })| \le M , \;\; \max _i|\gamma _i| <n^{-s}\tau ^{-1}, \;\quad \; N^{1/s}\min _i|\gamma _i| > n^{s} \}.\nonumber \\ \end{aligned}$$
(2.41)

We see that

$$\begin{aligned} \Gamma ^\bot \setminus {\mathbf{0}} = G_0 \cup \cdots \cup G_6 \quad \mathrm{and} \quad G_i \cap G_j =\emptyset , \; \mathrm{for}\; i\ne j. \end{aligned}$$
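Explicitly, \(G_0\) collects the \(\varvec{\gamma }\) with \(|\mathrm{Nm}(\varvec{\gamma })| > M\), and for \(\varvec{\gamma }\in \Gamma ^\bot \setminus {\mathbf{0}}\) with \(|\mathrm{Nm}(\varvec{\gamma })| \le M\) the class is determined first by \(\max _i|\gamma _i|\) and then, in the remaining range, by \(N^{1/s}\min _i|\gamma _i|\):

$$\begin{aligned} \max _i|\gamma _i| \ge \tau ^{-2} \; \Rightarrow \; \varvec{\gamma }\in G_1, \quad n \tau ^{-1} \le \max _i|\gamma _i|< \tau ^{-2} \; \Rightarrow \; \varvec{\gamma }\in G_2, \quad n^{-s} \tau ^{-1} \le \max _i|\gamma _i| < n \tau ^{-1} \; \Rightarrow \; \varvec{\gamma }\in G_3, \end{aligned}$$

while for \(\max _i|\gamma _i| <n^{-s}\tau ^{-1}\) we have \(\varvec{\gamma }\in G_4\), \(G_5\) or \(G_6\) according as \(N^{1/s}\min _i|\gamma _i| < n^{-s}\), \(N^{1/s}\min _i|\gamma _i| \in [ n^{-s},n^{s}]\) or \(N^{1/s}\min _i|\gamma _i| > n^{s}\).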

Let \(p=p_1 p_2 p_3\), \(\mathbf{b}\in \Delta _p\). By (2.16) and (2.17), we have

$$\begin{aligned} {\mathcal {A}}(\mathbf{b}/p,M) = \sum _{0 \le i \le 6} {\mathcal {A}}_i(\mathbf{b}/p,M) \quad \mathrm{and} \quad {\mathcal {A}}_0(\mathbf{b}/p,M)=0, \end{aligned}$$
(2.42)

where

$$\begin{aligned} {\mathcal {A}}_i(\mathbf{b}/p,M) = \sum _{\varvec{\gamma }\in G_i } \prod _{j=1}^s \sin (\pi \theta _j N_j \gamma _j) \frac{ \eta _M(\varvec{\gamma }) \widehat{\varOmega } (\tau \varvec{\gamma })e(\langle \varvec{\gamma },\mathbf{b}/p\rangle + \dot{x})}{\mathrm{Nm}(\varvec{\gamma })}, \end{aligned}$$
(2.43)

with \(\dot{x} = \sum _{1 \le i \le s} \theta _i N_i \gamma _i/2 \).

We will use the following simple decomposition (see notations from Sect. 2.2 and (2.25)–(2.27)):

$$\begin{aligned} G_i= & {} \bigcup _{1 \le j \le M} \;\;\bigcup _{ \varvec{\gamma }_0 \in \Gamma ^\bot \cap {\mathcal {F}}, |\mathrm{Nm}(\varvec{\gamma }_0)| \in (j-1,j ] } \;\; \nonumber \\& \bigcup _{a_1,a_2 =0,1}\big \{ \varvec{\gamma }\in G_i \;\;| \; \varvec{\gamma }= \varvec{\gamma }_0 (-1)^{a_1}\varvec{\varepsilon }_{0}^{a_2}\varvec{\varepsilon }^{\mathbf{k}} , \; \mathbf{k}\in {\mathbb {Z}}^{s-1} \big \}, \quad i \in [1,6] , \end{aligned}$$
(2.44)

where \(\mathbf{k}=(k_1,\ldots ,k_{s-1})\), \(\varvec{\varepsilon }^{\mathbf{k}} = \varvec{\varepsilon }_1^{k_1} \cdots \varvec{\varepsilon }_{s-1}^{k_{s-1}} \), and \(\varvec{\varepsilon }_0=1\) for \(\mu =1,2\).

Lemma 8

With notations as above

$$\begin{aligned} {\mathcal {A}}_i(\mathbf{b}/p, M) =O( n^{s-3/2}\ln n), \quad \mathrm{where } \quad M= [\sqrt{n}] \quad \mathrm{and } \quad i \in [1,5]. \end{aligned}$$

Proof

By (2.43), we have

$$\begin{aligned} |{\mathcal {A}}_i(\mathbf{b}/p,M)| \le \sum _{\varvec{\gamma }\in G_i} \prod _{1 \le j \le s} \big |\sin (\pi \theta _j N_j\gamma _j) \widehat{\varOmega } (\tau \varvec{\gamma })/ \mathrm{Nm}(\varvec{\gamma }) \big |. \end{aligned}$$
(2.45)

Case \(i=1.\) Applying (2.20), we obtain \( \#\{\varvec{\gamma }\in \Gamma ^{\bot } \; : \; j \le | \varvec{\gamma }| \le j+1\} =O(j^{s-1}) \). By (2.7) we get \( \widehat{\varOmega } (\tau \varvec{\gamma }) =O( (\tau |\varvec{\gamma }|)^{-2s}) \) for \(\varvec{\gamma }\in G_1\). From (2.45) and (2.41), we have

$$\begin{aligned} {\mathcal {A}}_1(\mathbf{b}/p,M)= & {} O\big ( \sum _{\varvec{\gamma }\in \Gamma ^\bot , \max _{i \in [1,s]}|\gamma _i| \ge 1/\tau ^2 } \tau ^{-2s}(\max _{i \in [1,s]}|\gamma _i|)^{-2s} \big )\nonumber \\= & {} O\big ( \sum _{j \ge \tau ^{-2}} \sum _{ \begin{array}{c} \varvec{\gamma }\in \Gamma ^\bot \\ \max _{i}|\gamma _i| \in [j,j+1) \end{array}} \tau ^{-2s}(\max _{i \in [1,s]}|\gamma _i|)^{-2s}\big )\nonumber \\= & {} O\big ( \sum _{j \ge \tau ^{-2}} \frac{\tau ^{-2s}}{j^{s+1}} \big ) =O(1). \end{aligned}$$
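The last equality holds since \(\sum _{j \ge J} j^{-s-1} =O(J^{-s})\), so that

$$\begin{aligned} \sum _{j \ge \tau ^{-2}} \frac{\tau ^{-2s}}{j^{s+1}} = O\big ( \tau ^{-2s} (\tau ^{-2})^{-s} \big ) = O(1). \end{aligned}$$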

Case \(i=2.\) By (2.7) we obtain \( \widehat{\varOmega } (\tau \varvec{\gamma }) =O( n^{-2s}) \) for \(\varvec{\gamma }\in G_2\). By [6, pp. 312, 322], the points of \(\Gamma _{\mathcal {O}}\cap {\mathcal {F}}\) can be arranged in a sequence \(\dot{\varvec{\gamma }}^{(k)}\) so that

$$\begin{aligned} |\mathrm{Nm}(\dot{\varvec{\gamma }}^{(1)}) | \le |\mathrm{Nm}(\dot{\varvec{\gamma }}^{(2)})| \le \cdots \quad \mathrm{and} \; c^{(1)} k \le |\mathrm{Nm}(\dot{\varvec{\gamma }}^{(k)})| \le c^{(2)} k, \end{aligned}$$
(2.46)

\(k=1,2,\ldots \) for some \(c^{(2)} > c^{(1)}>0\). Let \( \varvec{\varepsilon }^{\mathbf{k}}_\mathrm{max} =\max _{1 \le i \le s}|(\varvec{\varepsilon }^{\mathbf{k}})_i|\) and \(\varvec{\varepsilon }^{\mathbf{k}}_\mathrm{min} = \min _{1 \le i \le s}|(\varvec{\varepsilon }^{\mathbf{k}})_i|\). Using Lemma 2, we get

$$\begin{aligned} \# \{ \mathbf{k}\in {\mathbb {Z}}^{s-1} \; | \; \varvec{\varepsilon }^{\mathbf{k}}_\mathrm{max} \le \tau ^{-4} \} = O(n^{s-1}), \quad \mathrm{where} \quad \tau =N^{-2}=2^{-2sn}. \end{aligned}$$
(2.47)

Applying (2.44)–(2.47), we have

$$\begin{aligned} {\mathcal {A}}_2(\mathbf{b}/p,M) = O\big (\sum _{1 \le j \le M} \sum _{ \mathbf{k}\in {\mathbb {Z}}^{s-1}, \;\varvec{\varepsilon }^{\mathbf{k}}_\mathrm{max} \le \tau ^{-2} } n^{-2s} \big ) = O( M n^{-2s+s-1}) = O( 1) . \end{aligned}$$

Case \(i=3\). Using Lemma 2, we obtain

$$\begin{aligned}&\# \{ \mathbf{k}\in {\mathbb {Z}}^{s-1} \; | \; \varvec{\varepsilon }^{\mathbf{k}}_\mathrm{max} \in [n^{-s-1}/\tau , n^{s+1}/\tau ] \}\nonumber \\&\quad = c_4(\ln ^{s-1} (n^{s+1}/\tau ) - \ln ^{s-1} (n^{-s-1}/\tau ) ) + O(n^{s-2})\nonumber \\&\quad = O\big ( |\ln ^{s-1} \tau | \big (\big ( 1 +\frac{(s+1)\ln n}{ |\ln \tau | }\big )^{s-1} - \big ( 1 -\frac{(s+1)\ln n}{ | \ln \tau | } \big )^{s-1}\big )\big ) \nonumber \\&\quad = O(n^{s-2}\ln n). \end{aligned}$$
(2.48)

Applying (2.44)–(2.47), we get

$$\begin{aligned} {\mathcal {A}}_3(\mathbf{b}/p,M) = O\big (\sum _{1 \le j \le M} \;\; \sum _{ \mathbf{k}\in {\mathbb {Z}}^{s-1}, \varvec{\varepsilon }^{\mathbf{k}}_\mathrm{max} \in [n^{-s-1}/\tau , n^{s+1}/\tau ]} \; 1 \big ) = O( M n^{s-2} \ln n) . \end{aligned}$$

Case \(i=4.\) We see \(\min _{1 \le i \le s} |\sin (\pi N_i \gamma _i)| =O(n^{-s})\) for \(\varvec{\gamma }\in G_4\). Applying (2.44)–(2.47), we have

$$\begin{aligned} |{\mathcal {A}}_4(\mathbf{b}/p,M)| = O\big ( \sum _{1 \le j \le M} \sum _{ \mathbf{k}\in {\mathbb {Z}}^{s-1}, \;\varvec{\varepsilon }^{\mathbf{k}}_\mathrm{max} \le \tau ^{-4}} n^{-s} \big ) = O( M n^{-1}). \end{aligned}$$

Case \(i=5.\) Similarly to (2.48), we obtain from Lemma 2 that

$$\begin{aligned} \# \{ \mathbf{k}\in {\mathbb {Z}}^{s-1} \; | \; \varvec{\varepsilon }^{\mathbf{k}}_\mathrm{min} \in [n^{-s-1}N^{-1/s} , n^{s+1}N^{-1/s} ] \} = O(n^{s-2}\ln n). \end{aligned}$$

Therefore

$$\begin{aligned} {\mathcal {A}}_5(\mathbf{b}/p,M) = O\big (\sum _{1 \le j \le M} \;\; \sum _{ \mathbf{k}\in {\mathbb {Z}}^{s-1}, \varvec{\varepsilon }^{\mathbf{k}}_\mathrm{min} \in [n^{-s-1}N^{-1/s} , n^{s+1}N^{-1/s} ] } \; 1 \big ) = O( M n^{s-2} \ln n) . \end{aligned}$$

Hence, Lemma 8 is proved. \(\square \)

Let \( \varvec{\varsigma }=(\varsigma _1,\ldots ,\varsigma _s) \), \(\mathbf{1}=(1,1,\ldots ,1)\), and

$$\begin{aligned} {\breve{{\mathcal {A}}}}_6(\mathbf{b}/p,M,\varvec{\varsigma }) = \varsigma _1 \cdots \varsigma _s (2\sqrt{-1})^{-s} \sum _{\varvec{\gamma }\in G_6 } \frac{\widehat{\varOmega } (\tau \varvec{\gamma }) \eta _M(\varvec{\gamma })e(\langle \varvec{\gamma }, \mathbf{b}/p+{\dot{\varvec{\theta }}}(\varvec{\varsigma }) \rangle )}{\mathrm{Nm}(\varvec{\gamma })} \end{aligned}$$
(2.49)

with \({\dot{\varvec{\theta }}}(\varvec{\varsigma }) = (\dot{\theta }_1(\varvec{\varsigma }),\ldots ,\dot{\theta }_s(\varvec{\varsigma }))\) and \(\dot{\theta }_i(\varvec{\varsigma }) = (1+\varsigma _i) \theta _iN_i/4 , \; i=1,\ldots ,s\).

By (2.43), we see

$$\begin{aligned} {\mathcal {A}}_6(\mathbf{b}/p,M) = \sum _{ \varvec{\varsigma }\in \{1,-1\}^s} {\breve{{\mathcal {A}}}}_6(\mathbf{b}/p,M,\varvec{\varsigma }) . \end{aligned}$$
(2.50)

Lemma 9

With notations as above

$$\begin{aligned} \mathbf{E}({\mathcal {A}}_6(\mathbf{b}/p,M)) = {\dot{{\mathcal {A}}_6}}(\mathbf{b}/p,M,-\mathbf{1}) + O(1), \end{aligned}$$

where

$$\begin{aligned} \dot{{\mathcal {A}}_i}(\mathbf{b}/p,M,-\mathbf{1}) = (-2\sqrt{-1})^{-s} \sum _{\varvec{\gamma }\in G_i } \frac{ \eta _M(\varvec{\gamma })e(\langle \varvec{\gamma },\mathbf{b}/p\rangle )}{\mathrm{Nm}(\varvec{\gamma })},\quad i=1,2,\ldots \end{aligned}$$
(2.51)

Proof

By (2.49) and (2.50), we have

$$\begin{aligned}&|\mathbf{E} ({\mathcal {A}}_6(\mathbf{b}/p,M)) - {\breve{{\mathcal {A}}}}_6(\mathbf{b}/p,M,-\mathbf{1}) |\nonumber \\&\quad =O\big ( \sum _{ \begin{array}{c} \varvec{\varsigma }\in \{1,-1\}^s \\ \varvec{\varsigma }\ne -\mathbf{1} \end{array}} \sum _{\varvec{\gamma }\in G_6 } \sum _{1 \le i \le s} \frac{ |\mathbf{E} (e(\varsigma _i \theta _iN_i \gamma _i/4))|}{|\mathrm{Nm}(\varvec{\gamma })|} \big ). \end{aligned}$$

Bearing in mind that

$$\begin{aligned} \mathbf{E}(e(\theta _i z )) =\frac{e(z)-1}{2\pi \sqrt{-1} z} \end{aligned}$$
(2.52)

and that \(|N_i \gamma _i| \ge n^s/c_3\) for \(\varvec{\gamma }\in G_6\) (see (2.3) and (2.41)), we get

$$\begin{aligned} |\mathbf{E} ({\mathcal {A}}_6(\mathbf{b}/p,M)) - {\breve{{\mathcal {A}}}}_6(\mathbf{b}/p,M,-\mathbf{1}) | =O\big ( \sum _{\varvec{\gamma }\in G_6 } n^{-s}|\mathrm{Nm}(\varvec{\gamma })|^{-1}\big ). \end{aligned}$$

By (2.49) and (2.51), we obtain

$$\begin{aligned} |{\breve{{\mathcal {A}}}}_6(\mathbf{b}/p,M,-\mathbf{1}) - {\dot{{\mathcal {A}}}}_6(\mathbf{b}/p,M,-\mathbf{1})| =O\big ( \sum _{\varvec{\gamma }\in G_6 } \frac{ |\widehat{\varOmega } (\tau \varvec{\gamma }) -1|}{|\mathrm{Nm}(\varvec{\gamma })|} \big ) . \end{aligned}$$

By (2.8) and (2.41), we see \(\widehat{\varOmega } (\tau \varvec{\gamma }) =1 +O(n^{-s})\) for \(\varvec{\gamma }\in G_6\). From (2.41), (2.44) and (2.47), we have \(\#G_6 =O(M n^{s-1})\). Hence

$$\begin{aligned} \mathbf{E}({\mathcal {A}}_6(\mathbf{b}/p,M)) - {\dot{{\mathcal {A}}}}_6(\mathbf{b}/p,M,-\mathbf{1}) = O\big (\sum _{\varvec{\gamma }\in G_6 } n^{-s}|\mathrm{Nm}(\varvec{\gamma })|^{-1} \big )= O(1). \end{aligned}$$
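Here the last equality follows from \(|\mathrm{Nm}(\varvec{\gamma })| \ge 1\) for \(\varvec{\gamma }\in \Gamma ^\bot \setminus {\mathbf{0}}\) and \(\#G_6 =O(M n^{s-1})\):

$$\begin{aligned} \sum _{\varvec{\gamma }\in G_6 } n^{-s}|\mathrm{Nm}(\varvec{\gamma })|^{-1} \le n^{-s} \, \#G_6 = O( M n^{-1}) = O(1), \quad \mathrm{where} \quad M=[\sqrt{n}]. \end{aligned}$$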

Therefore, Lemma 9 is proved. \(\square \)

Let

$$\begin{aligned} G_7 = \bigcup _{ \varvec{\gamma }_0 \in \Gamma ^{\bot } \cap {\mathcal {F}}, |\mathrm{Nm}(\varvec{\gamma }_0)| \le M } \;\; \bigcup _{a_1,a_2 =0,1} \bigcup _{ \mathbf{k}\in {\mathcal {Y}}_N} T_{ \varvec{\gamma }_0,a_1,a_2 ,\mathbf{k}} , \end{aligned}$$
(2.53)

with

$$\begin{aligned} {\mathcal {Y}}_N = \{ \mathbf{k}\in {\mathbb {Z}}^{s-1} \; | \; \varvec{\varepsilon }^{\mathbf{k}}_\mathrm{min} \ge N^{-1/s} \}, \end{aligned}$$
(2.54)

and

$$\begin{aligned} T_{ \varvec{\gamma }_0,a_1,a_2 ,\mathbf{k}} = \{ \varvec{\gamma }\in \Gamma ^\bot \; | \; \varvec{\gamma }= \varvec{\gamma }_0 (-1)^{a_1}\varvec{\varepsilon }_{0}^{a_2}\varvec{\varepsilon }^{\mathbf{k}} \}. \end{aligned}$$

We note that \( \# T_{ \varvec{\gamma }_0,a_1,a_2 ,\mathbf{k}} \le 1 \) (it may happen that \(\varvec{\gamma }_0 (-1)^{a_1}\varvec{\varepsilon }_{0}^{a_2}\varvec{\varepsilon }^{\mathbf{k}} \notin \Gamma ^{\bot }\)).

Lemma 10

With notations as above

$$\begin{aligned} \mathbf{E}({\mathcal {A}}(\mathbf{b}/p,M)) = {\dot{{\mathcal {A}}_7}}(\mathbf{b}/p,M,-\mathbf{1}) + O(n^{s-3/2} \ln n), \quad \mathrm{where} \quad M=[\sqrt{n}] . \end{aligned}$$
(2.55)

Proof

By (2.51), we have

$$\begin{aligned} |{\dot{{\mathcal {A}}_6}}(\mathbf{b}/p,M,-\mathbf{1}) - {\dot{{\mathcal {A}}_7}}(\mathbf{b}/p,M,-\mathbf{1}) | = O(\#(G_7 \setminus G_6) + \#(G_6 \setminus G_7)) . \end{aligned}$$

Consider \(\varvec{\gamma }\in G_6\) (see (2.41)). Bearing in mind that \( \min _{1 \le i \le s} |\gamma _i| \ge n^{s} N^{-1/s}\), we get

$$\begin{aligned} |\gamma _i| = |\mathrm{Nm}(\varvec{\gamma })| \prod _{[1,s] \ni j \ne i} |\gamma _j|^{-1} \le n^{-s(s-1)} N^{1+(s-1)/s} < n^{-s}/\tau , \quad \mathrm{with} \quad \tau =N^{-2}. \end{aligned}$$

Thus

$$\begin{aligned} G_6=\{ \varvec{\gamma }\in \Gamma ^\bot \; | \; |\mathrm{Nm}(\varvec{\gamma })| \le M , \; N^{1/s}\min _i|\gamma _i| > n^{s} \}. \end{aligned}$$

From (2.53), we obtain \(G_7 \supseteq G_6\). Bearing in mind that \( |\mathrm{Nm}(\varvec{\gamma })| \ge 1\) for \(\varvec{\gamma }\in \Gamma ^\bot \setminus {\mathbf{0}} \), we have that \(G_6 \supseteq \dot{G}_7\), where

$$\begin{aligned} \dot{G}_7 = \bigcup _{ \varvec{\gamma }_0 \in \Gamma ^{\bot } \cap {\mathcal {F}}, \;|\mathrm{Nm}(\varvec{\gamma }_0)| \le M } \; \bigcup _{a_1,a_2 =0,1} \; \bigcup _{ \mathbf{k}\in \dot{{\mathcal {Y}}}_N} T_{ \varvec{\gamma }_0,a_1,a_2 ,\mathbf{k}} , \end{aligned}$$

with

$$\begin{aligned} \dot{{\mathcal {Y}}}_N = \{ \mathbf{k}\in {\mathbb {Z}}^{s-1} \; | \; N^{1/s} \varvec{\varepsilon }^{\mathbf{k}}_\mathrm{min} \ge n^{2s}\}. \end{aligned}$$
(2.56)

By Lemma 3, we get \( \# \{ \varvec{\gamma }_0 \in \Gamma ^{\bot } \cap {\mathcal {F}}, |\mathrm{Nm}(\varvec{\gamma }_0)| \le M \} =O(M).\) Therefore

$$\begin{aligned} |{\dot{{\mathcal {A}}}}_6(\mathbf{b}/p,M,-\mathbf{1}) - {\dot{{\mathcal {A}}}}_7(\mathbf{b}/p,M,-\mathbf{1}) | =O(M \#( {\mathcal {Y}}_N \setminus \dot{{\mathcal {Y}}}_N) ) . \end{aligned}$$

Using Lemma 2, we obtain

$$\begin{aligned} \#( {\mathcal {Y}}_N \setminus \dot{{\mathcal {Y}}}_N)= & {} \# \{ \mathbf{k}\in {\mathbb {Z}}^{s-1} \; | \; \varvec{\varepsilon }^{\mathbf{k}}_\mathrm{min} \in [N^{-1/s}, n^{2s} N^{-1/s}] \}\nonumber \\= & {} c_{5}\big (\ln ^{s-1} ( N^{1/s}) - \ln ^{s-1} (n^{-2s} N^{1/s}) \big ) + O(n^{s-2})\nonumber \\= & {} O\big ( \ln ^{s-1} N \big (1 - \big ( 1 -\frac{2s^2\ln n}{ \ln N } \big )^{s-1}\big )\big ) \nonumber \\= & {} O(n^{s-2}\ln n), \quad n=s^{-1}\log _2 N. \end{aligned}$$

Hence

$$\begin{aligned} |{\dot{{\mathcal {A}}_6}}(\mathbf{b}/p,M,-\mathbf{1}) - {\dot{{\mathcal {A}}_7}}(\mathbf{b}/p,M,-\mathbf{1}) | =O(M n^{s-2} \ln n). \end{aligned}$$

Applying Lemmas 8 and 9, we get the assertion of Lemma 10. \(\square \)

Let

$$\begin{aligned} \delta _w(\varvec{\gamma }) = \bigg \{\begin{array}{l@{\quad }l} 1 &{} \mathrm{if} \; \varvec{\gamma }\in w{\mathcal {O}},\\ 0 &{}\mathrm{otherwise}. \end{array} \end{aligned}$$

Lemma 11

Let \(\varvec{\gamma }\in {\mathcal {O}}\). Then

$$\begin{aligned} \frac{1}{w^s} \sum _{\mathbf{y}\in \varLambda _w} e(\langle \varvec{\gamma }, \mathbf{y}\rangle /w) = \delta _w(\varvec{\gamma }). \end{aligned}$$

Proof

It is easy to verify that

$$\begin{aligned} \frac{1}{w} \sum _{0 \le k <w} e(kb/w) = \dot{\delta }_w(b), \quad \hbox {where} \quad \dot{\delta }_w(b) = \bigg \{\begin{array}{l@{\quad }l} 1 &{} \mathrm{if} \quad b \equiv 0\ \text {mod }w,\\ 0 &{}\mathrm{otherwise} . \end{array} \end{aligned}$$
(2.57)
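Indeed, if \(b \equiv 0\ \text {mod }w\), then every summand in (2.57) equals 1, while for an integer \(b \not \equiv 0\ \text {mod }w\) the geometric sum vanishes:

$$\begin{aligned} \sum _{0 \le k <w} e(kb/w) = \frac{e(b)-1}{e(b/w)-1} = 0, \end{aligned}$$

since \(e(b)=1\) and \(e(b/w) \ne 1\).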

Let \(\varvec{\gamma }=d_1 \mathbf{f}_1+ \cdots + d_s \mathbf{f}_s\), and \(\mathbf{y}=a_1 \mathbf{f}_1^{\bot }+ \cdots + a_s \mathbf{f}_s^{\bot }\) (see (2.34)). We have \(\langle \varvec{\gamma }, \mathbf{y}\rangle = a_1d_1 + \cdots + a_s d_s\). Bearing in mind that \( \varvec{\gamma }\in w{\mathcal {O}}\) if and only if \(d_i \equiv 0\ \text {mod }w\) \((i=1,\ldots ,s)\), we obtain from (2.57) the assertion of Lemma 11. \(\square \)

Lemma 12

There exist \(\mathbf{b}\in \varLambda _{p}\), \(c_8>0\) and \(N_0>0\) such that

$$\begin{aligned} | \mathbf{E}({\mathcal {A}}(\mathbf{b}/p,M))| > c_{8} n^{s-1} \quad \mathrm{for } \quad N > N_0. \end{aligned}$$

Proof

We consider the case \(\mu =1\). The proof for the cases \(\mu =2,3\) is similar.

By (2.51) and Lemma 11, we have

$$\begin{aligned} \varrho:= & {} \frac{2^{2s}}{p^s} \sum _{\mathbf{b}\in \varLambda _{p}} |{\dot{{\mathcal {A}}_7}}(\mathbf{b}/p,M,-\mathbf{1})|^2 = \sum _{ \varvec{\gamma }_1, \varvec{\gamma }_2 \in G_7 } \frac{ \eta _M(\varvec{\gamma }_1)\eta _M(\varvec{\gamma }_2) \delta _{p}( \varvec{\gamma }_1 - \varvec{\gamma }_2 )}{\mathrm{Nm}(\varvec{\gamma }_1)\mathrm{Nm}(\varvec{\gamma }_2) }\nonumber \\= & {} \sum _{\mathbf{b}\in \varLambda _{p}} \big | \sum _{ \varvec{\gamma }\in G_7, \; \varvec{\gamma }\equiv \mathbf{b}\ \text {mod }p} \;\; \frac{ \eta _M(\varvec{\gamma })}{\mathrm{Nm}(\varvec{\gamma }) } \big |^2 . \end{aligned}$$
(2.58)
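Here the first equality uses \(|(-2\sqrt{-1})^{-s}|^2 =2^{-2s}\) and Lemma 11, applied with \(w=p\) (which is possible since \(\varvec{\gamma }_1 - \varvec{\gamma }_2 \in \Gamma ^\bot \subseteq \Gamma _{{\mathcal {O}}}\)), in the form

$$\begin{aligned} \frac{1}{p^s} \sum _{\mathbf{b}\in \varLambda _{p}} e(\langle \varvec{\gamma }_1 - \varvec{\gamma }_2, \mathbf{b}\rangle /p) = \delta _{p}( \varvec{\gamma }_1 - \varvec{\gamma }_2 ), \quad \varvec{\gamma }_1, \varvec{\gamma }_2 \in G_7 . \end{aligned}$$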

Bearing in mind that \(\eta _M(\varvec{\gamma })=0\) for \(|\mathrm{Nm}(\varvec{\gamma })| \ge M \) (see (2.16)), we get from (2.53) that

$$\begin{aligned} \varrho = \sum _{\mathbf{b}\in \varLambda _{p}} \big | \sum _{\varsigma =-1,1} \sum _{\mathbf{k}\in {\mathcal {Y}}_N} \;\; \sum _{ \begin{array}{c} \varvec{\gamma }\in \Gamma ^\bot \cap {\mathcal {F}}, \; \varsigma \varvec{\varepsilon }^{\mathbf{k}}\varvec{\gamma }\in \Gamma ^\bot \\ \varsigma \varvec{\varepsilon }^{\mathbf{k}}\varvec{\gamma }\equiv \mathbf{b}\ \text {mod }p \end{array}} \;\; \frac{ \eta _M( \varsigma \varvec{\varepsilon }^{\mathbf{k}}\varvec{\gamma })}{\mathrm{Nm}(\varsigma \varvec{\varepsilon }^{\mathbf{k}}\varvec{\gamma }) } \big |^2 . \end{aligned}$$

We consider only \(\mathbf{b}= p_1\mathbf{b}_0 \in \varLambda _p\), where \(\mathbf{b}_0 \in \varLambda _{p_2p_3}\) and \(p=p_1p_2p_3\). By (2.1), we obtain \(\Gamma _{p_1 {\mathcal {O}}} \subseteq \Gamma ^\bot \subseteq \Gamma _{ {\mathcal {O}}}\) and \(\Gamma _{p_1{\mathcal {O}}} = \{ \varvec{\gamma }\in \Gamma ^{\bot } | \gamma \equiv \mathbf{0}\ \text {mod }p_1\}\). Hence, we can take \(\Gamma _{ p_1{\mathcal {O}}}\) instead of \(\Gamma ^\bot \). We see \(\varsigma \varvec{\varepsilon }^{\mathbf{k}}\varvec{\gamma }\in \Gamma _{\mathcal {O}}\) for all \(\varvec{\gamma }\in \Gamma _{\mathcal {O}}\), \(\mathbf{k}\in {\mathbb {Z}}^{s-1}\) and \(\varsigma \in \{-1,1\}\). Thus

$$\begin{aligned} \varrho \ge \sum _{\mathbf{b}\in \varLambda _{p_2p_3}} \big |\sum _{ \begin{array}{c} \varsigma =-1,1 \\ \mathbf{k}\in {\mathcal {Y}}_N \end{array}} \;\; \sum _{ \begin{array}{c} \varvec{\gamma }\in \Gamma _{\mathcal {O}}\cap {\mathcal {F}}\\ \varsigma \varvec{\varepsilon }^{\mathbf{k}}\varvec{\gamma }\equiv \mathbf{b}\ \text {mod }p_2p_3 \end{array}} \;\; \frac{ \eta _M(p_1\varsigma \varvec{\varepsilon }^{\mathbf{k}} \varvec{\gamma })}{\mathrm{Nm}(p_1\varsigma \varvec{\varepsilon }^{\mathbf{k}} \varvec{\gamma }) } \big |^2 . \end{aligned}$$

By Lemma 4, \((p_2,p_3) =1\). Hence, there exist \(w_2, w_3 \in {\mathbb {Z}}\) such that \(p_2w_2 \equiv 1\ \text {mod }p_3\) and \(p_3w_3 \equiv 1\ \text {mod }p_2\). It is easy to verify that if \(\dot{\mathbf{b}}_2, \ddot{\mathbf{b}}_2 \in \varLambda _{p_2}\) (see (2.34)), \(\dot{\mathbf{b}}_3, \ddot{\mathbf{b}}_3 \in \varLambda _{p_3}\), and \( (\dot{\mathbf{b}}_2, \dot{\mathbf{b}}_3) \ne (\ddot{\mathbf{b}}_2, \ddot{\mathbf{b}}_3) \), then

$$\begin{aligned} \dot{\mathbf{b}}_2 p_3w_3 + \dot{\mathbf{b}}_3 p_2w_2 \not \equiv \ddot{\mathbf{b}}_2 p_3w_3 + \ddot{\mathbf{b}}_3 p_2w_2\ \text {mod }p_2p_3. \end{aligned}$$

Therefore, as \(\mathbf{b}_2\) runs over \(\varLambda _{p_2}\) and \(\mathbf{b}_3\) runs over \(\varLambda _{p_3}\), the residues \(\mathbf{b}_2 p_3w_3 + \mathbf{b}_3 p_2w_2\ \text {mod }p_2p_3\) are pairwise distinct. Thus, restricting the sum over \(\mathbf{b}\in \varLambda _{p_2p_3}\) to these residues, we obtain

$$\begin{aligned} \varrho\ge & {} \sum _{\mathbf{b}_2 \in \varLambda _{p_2}} \sum _{\mathbf{b}_3 \in \varLambda _{p_3}} \big |\sum _{ \begin{array}{c} \varsigma =-1,1 \\ \mathbf{k}\in {\mathcal {Y}}_N \end{array}} \; \sum _{ \begin{array}{c} \varvec{\gamma }\in \Gamma _{\mathcal {O}}\cap {\mathcal {F}}\\ \varsigma \varvec{\varepsilon }^{\mathbf{k}}\varvec{\gamma }\equiv \mathbf{b}_2 p_3w_3 + \mathbf{b}_3 p_2w_2\ \text {mod }p_2p_3 \end{array}} \;\; \frac{ \eta _M(p_1 \varvec{\gamma })}{\mathrm{Nm}(p_1\varsigma \varvec{\gamma }) } \big |^2\nonumber \\\ge & {} \sum _{\mathbf{b}_2 \in \varLambda _{p_2}} \sum _{\mathbf{b}_3 \in \varLambda _{p_3}} \big | \ddot{\chi }_{p_3}( \mathbf{b}_3) \sum _{ \begin{array}{c} \varsigma =-1,1 \\ \mathbf{k}\in {\mathcal {Y}}_N \end{array}} \; \sum _{ \begin{array}{c} \varvec{\gamma }\in \Gamma _{\mathcal {O}}\cap {\mathcal {F}}\\ \varsigma \varvec{\varepsilon }^{\mathbf{k}}\varvec{\gamma }\equiv \mathbf{b}_2 p_3w_3 + \mathbf{b}_3 p_2w_2\ \text {mod }p_2p_3 \end{array}} \;\; \frac{ \eta _M(p_1 \varvec{\gamma })}{\mathrm{Nm}(p_1\varsigma \varvec{\gamma }) } \big |^2\nonumber \\= & {} \sum _{\mathbf{b}_2 \in \varLambda _{p_2}} \sum _{\mathbf{b}_3 \in \varLambda _{p_3}} \big | \sum _{ \begin{array}{c} \varsigma =-1,1 \\ \mathbf{k}\in {\mathcal {Y}}_N \end{array}} \; \sum _{ \begin{array}{c} \varvec{\gamma }\in \Gamma _{\mathcal {O}}\cap {\mathcal {F}}\\ \varsigma \varvec{\varepsilon }^{\mathbf{k}}\varvec{\gamma }\equiv \mathbf{b}_2 p_3w_3 + \mathbf{b}_3 p_2w_2\ \text {mod }p_2p_3 \end{array}} \;\; \frac{ \ddot{\chi }_{p_3}( \varsigma \varvec{\varepsilon }^{\mathbf{k}}\varvec{\gamma }) \eta _M(p_1 \varvec{\gamma })}{\mathrm{Nm}(p_1\varsigma \varvec{\gamma }) } \big |^2 . \end{aligned}$$

Using the Cauchy–Schwarz inequality, we have

$$\begin{aligned} p_3^s \varrho {\ge } \sum _{\mathbf{b}_2 \in \varLambda _{p_2}} \big | \sum _{\mathbf{b}_3 \in \varLambda _{p_3}} \sum _{ \begin{array}{c} \varsigma =-1,1 \\ \mathbf{k}\in {\mathcal {Y}}_N \end{array}} \; \sum _{ \begin{array}{c} \varvec{\gamma }\in \Gamma _{\mathcal {O}}\cap {\mathcal {F}}\\ \varsigma \varvec{\varepsilon }^{\mathbf{k}}\varvec{\gamma }\equiv \mathbf{b}_2 p_3w_3 + \mathbf{b}_3 p_2w_2\ \text {mod }p_2p_3 \end{array}} \;\; \frac{ \ddot{\chi }_{p_3}( \varsigma \varvec{\varepsilon }^{\mathbf{k}}\varvec{\gamma }) \eta _M(p_1 \varvec{\gamma })}{ p_1^{s} \mathrm{Nm}( \varsigma \varvec{\gamma }) } \big |^2 . \end{aligned}$$

We see that \( \varsigma \varvec{\varepsilon }^{\mathbf{k}}\varvec{\gamma }\equiv \mathbf{b}_2 p_3w_3 \equiv \mathbf{b}_2\ \text {mod }p_2\) if and only if there exists \(\mathbf{b}_3 \in \varLambda _{p_3}\) such that \( \varsigma \varvec{\varepsilon }^{\mathbf{k}}\varvec{\gamma }\equiv \mathbf{b}_2 p_3w_3 + \mathbf{b}_3 p_2w_2\ \text {mod }p_2p_3\). Hence

$$\begin{aligned} p_1^{2s}p_3^s \varrho \ge \sum _{\mathbf{b}_2 \in \varLambda _{p_2}} \big | \sum _{ \begin{array}{c} \varsigma =-1,1 \\ \mathbf{k}\in {\mathcal {Y}}_N \end{array}} \; \sum _{ \begin{array}{c} \varvec{\gamma }\in \Gamma _{\mathcal {O}}\cap {\mathcal {F}}\\ \varsigma \varvec{\varepsilon }^{\mathbf{k}}\varvec{\gamma }\equiv \mathbf{b}_2\ \text {mod }p_2 \end{array}} \;\; \frac{ \ddot{\chi }_{p_3}( \varsigma \varvec{\varepsilon }^{\mathbf{k}}\varvec{\gamma }) \eta _M(p_1 \varvec{\gamma })}{\mathrm{Nm}(\varsigma \varvec{\gamma }) } \big |^2 . \end{aligned}$$
(2.59)

By (2.23), we get \(\Gamma _{i_0} = \varsigma \varvec{\varepsilon }^{\mathbf{k}} \Gamma _{i_0} \) for all \(\mathbf{k}\in {\mathbb {Z}}^{s-1}\), \(\varsigma \in \{-1,1\}\), and there exists \(\varPhi _{i_0} \subseteq \varLambda _{p_2}\) with

$$\begin{aligned} \Gamma _{i_0} = \bigcup _{\mathbf{b}\in \varPhi _{i_0}} (p_2 \Gamma _{{\mathcal {O}}} + \mathbf{b}), \quad \mathrm{where} \quad (p_2 \Gamma _{{\mathcal {O}}} + \mathbf{b}_1) \cap (p_2 \Gamma _{{\mathcal {O}}} + \mathbf{b}_2) = \emptyset , \; \mathrm{for} \; \; \mathbf{b}_1 \ne \mathbf{b}_2. \end{aligned}$$

We consider in (2.59) only \(\mathbf{b}_2 \in \varPhi _{i_0}\). Applying the Cauchy–Schwarz inequality, we obtain

$$\begin{aligned} p_1^{2s} p_2^s p_3^s \varrho\ge & {} \big | \sum _{\mathbf{b}_2 \in \varPhi _{i_0}} \sum _{ \begin{array}{c} \varsigma =-1,1 \\ \mathbf{k}\in {\mathcal {Y}}_N \end{array}} \; \sum _{ \begin{array}{c} \varvec{\gamma }\in \Gamma _{\mathcal {O}}\cap {\mathcal {F}}\\ \varsigma \varvec{\varepsilon }^{\mathbf{k}}\varvec{\gamma }\equiv \mathbf{b}_2\ \text {mod }p_2 \end{array}} \;\; \frac{ \ddot{\chi }_{p_3}( \varsigma \varvec{\varepsilon }^{\mathbf{k}}\varvec{\gamma }) \eta _M(p_1 \varvec{\gamma })}{\mathrm{Nm}(\varsigma \varvec{\gamma }) } \big |^2\nonumber \\= & {} \big | \sum _{ \begin{array}{c} \varsigma =-1,1 \\ \mathbf{k}\in {\mathcal {Y}}_N \end{array}} \; \sum _{ \varvec{\gamma }\in \Gamma _{i_0} \cap {\mathcal {F}}} \;\; \frac{ \ddot{\chi }_{p_3}( \varsigma \varvec{\varepsilon }^{\mathbf{k}}\varvec{\gamma }) \eta _M(p_1 \varvec{\gamma })}{\mathrm{Nm}(\varsigma \varvec{\gamma }) } \big |^2 . \end{aligned}$$

Using Lemma 4, we get

$$\begin{aligned} \ddot{\chi }_{p_3}( \varsigma \varvec{\varepsilon }^{\mathbf{k}} \varvec{\gamma }) \frac{ |\mathrm{Nm}( \varvec{\gamma })| }{\mathrm{Nm}(\varsigma \varvec{\gamma }) }= & {} \ddot{\chi }_{p_3}( \varsigma \varvec{\varepsilon }^{\mathbf{k}} \varvec{\gamma }) \frac{ \mathrm{Nm}(\varsigma \varvec{\varepsilon }^{\mathbf{k}} \varvec{\gamma }) }{|\mathrm{Nm}(\varsigma \varvec{\varepsilon }^{\mathbf{k}} \varvec{\gamma })| }\nonumber \\= & {} \dot{\chi }_{p_3}( (\varsigma \varvec{\varepsilon }^{\mathbf{k}} \varvec{\gamma })) = \dot{\chi }_{p_3}( ( \varvec{\gamma })) = \ddot{\chi }_{p_3}( \varvec{\gamma }) \frac{ |\mathrm{Nm}( \varvec{\gamma })| }{ \mathrm{Nm}(\varvec{\gamma }) }. \end{aligned}$$

Hence

$$\begin{aligned} p_1^{2s} p_2^s p_3^s \varrho \ge \big | \sum _{ \begin{array}{c} \varsigma =-1,1 \\ \mathbf{k}\in {\mathcal {Y}}_N \end{array}} \; \sum _{ \varvec{\gamma }\in \Gamma _{i_0} \cap {\mathcal {F}}} \;\; \frac{ \ddot{\chi }_{p_3}( \varvec{\gamma }) \eta _M(p_1\varvec{\gamma })}{\mathrm{Nm}(\varvec{\gamma }) } \big |^2 . \end{aligned}$$

Bearing in mind that \(\eta _M(p_1\varvec{\gamma }) = \eta _{M/p_1^s}(\varvec{\gamma })\) (see (2.16)), we obtain

$$\begin{aligned} p_1^{2s} p_2^s p_3^s \varrho \ge 4 \#{\mathcal {Y}}_N^2 \big | \sum _{ \varvec{\gamma }\in \Gamma _{i_0} \cap {\mathcal {F}}} \;\; \frac{ \ddot{\chi }_{p_3}( \varvec{\gamma }) \eta _{M/p_1^s}(\varvec{\gamma })}{|\mathrm{Nm}(\varvec{\gamma })| } \big |^2 . \end{aligned}$$

Applying Lemma 2, we have from (2.54) that \(\#{\mathcal {Y}}_N \ge 0.5 c_5 (n/s)^{s-1}\) for \(N \ge \dot{N}_0\) with some \(\dot{N}_0>1\), and \(n=s^{-1} \log _2 N\). By Lemma 7 and (2.58), we obtain

$$\begin{aligned} \sup _{\mathbf{b}\in \varLambda _{p}} | {\dot{{\mathcal {A}}_7}}(\mathbf{b}/p,M,-\mathbf{1}) |\ge & {} 2^{-s} \varrho ^{1/2} \nonumber \\\ge & {} c_{7} (2p_1^2 p_2p_3 )^{-s} \#{\mathcal {Y}}_N \ge 0.5c_{5} c_{7} (2p_1^2 p_2p_3 s)^{-s} n^{s-1} , \end{aligned}$$

with \(M=[\sqrt{n}]=[\sqrt{s^{-1}\log _2 N}] \ge M_2 + \log _2 \dot{N}_0 \). Using Lemma 10, we get the assertion of Lemma 12. \(\square \)

2.6 Auxiliary Lemmas

We need the following notations and results from [27]:

Lemma C

[27, Lemma 3.2] Let \(\dot{\Gamma } \subset {\mathbb {R}}^s\) be an admissible lattice. Then

$$\begin{aligned} \sup _{\mathbf{x}\in {\mathbb {R}}^s} \sum _{\varvec{\gamma }\in \dot{\Gamma }} \prod _{1 \le i \le s} (1+|\gamma _i -x_i|)^{-2s}\le H_{\dot{\Gamma }} \end{aligned}$$

where the constant \(H_{\dot{\Gamma }}\) depends upon the lattice \(\dot{\Gamma }\) only by means of the invariants \(\det \dot{\Gamma }\) and \(\mathrm{Nm}\; \dot{\Gamma }\).

Let \(f(t), \; t\in {\mathbb {R}}\), be a function of the class \(C^{\infty }\); moreover let f(t) and all derivatives \(f^{(k)}\) belong to \(L^1({\mathbb {R}})\). We consider the following integrals for \(\dot{\tau } >0\):

$$\begin{aligned} I(\dot{\tau }, \xi ) = \int _{-\infty }^{\infty }{\frac{\eta (t)\widehat{\omega }(\dot{\tau } t) e(-\xi t)}{t} \mathrm{d}t}, \; J_{f}(\dot{\tau }, \xi ) = \int _{-\infty }^{\infty } f(t)\widehat{\omega }(\dot{\tau } t) e(-\xi t) \mathrm{d}t. \end{aligned}$$
(2.60)

Lemma D

[27, Lemma 4.2] For all \(\alpha >0\) and \(\beta >0\), there exists a constant \( \breve{c}_{(\alpha ,\beta )} >0\) such that

$$\begin{aligned} \max (|I(\dot{\tau }, \xi ) |, |J_{f}(\dot{\tau }, \xi )|) < \breve{c}_{(\alpha ,\beta )} (1+\dot{\tau })^{-\alpha }(1+|\xi |)^{-\beta }. \end{aligned}$$

Let \(m(t), \; t\in {\mathbb {R}}\), be an even nonnegative function of the class \(C^{\infty }\); moreover, \(m(t)=0\) for \(|t| \le 1\), \(m(t)=0\) for \(|t| \ge 4\), and

$$\begin{aligned} \sum _{q= -\infty }^{+\infty } m(2^{-q}t) =1. \end{aligned}$$
(2.61)

For examples of such functions see e.g. [27, Ref. 5.16]. Let \(\dot{\mathbf{p}}=(\dot{p}_1,\ldots ,\dot{p}_s)\), \(\dot{p}_i>0, \; i=1,\ldots ,s\), \(a>0\), \(x_0=\gamma _0=1\),

$$\begin{aligned} \widehat{W}_{a,i}(\dot{\mathbf{p}},\mathbf{x}) = \frac{ \widehat{\omega } (\dot{p}_1 x_1) \eta (ax_1 )}{x_1} \prod _{j=2}^s \frac{ \widehat{\omega } (\dot{p}_j x_j) m ( x_j) }{x_j} \frac{1}{ x_i} \quad \mathrm{for} \quad \mathrm{Nm}\; \mathbf{x}\ne 0, \end{aligned}$$
(2.62)

and \( \widehat{W}_{a,i}(\dot{\mathbf{p}},\mathbf{x})=0\) for \( \mathrm{Nm}( \mathbf{x}) =0 ,\;\;i=0,1,\ldots ,s\). Let

$$\begin{aligned} \breve{W}_{a,i}(\dot{\Gamma },\dot{\mathbf{p}},\mathbf{x}) = \sum _{\varvec{\gamma }\in \dot{\Gamma }^\bot \setminus {\mathbf{0}} } \widehat{W}_{a,i}(\dot{\mathbf{p}},\varvec{\gamma }) e(\langle \varvec{\gamma },\mathbf{x}\rangle ). \end{aligned}$$
(2.63)

By (2.6) and (2.7), we see that the series (2.63) converges absolutely, and \( \widehat{W}_{a,i}(\dot{\mathbf{p}},\mathbf{x})\) belongs to the class \(C^{\infty }\). Therefore, we can use Poisson’s summation formula (2.4):

$$\begin{aligned} \breve{W}_{a,i}(\dot{\Gamma },\dot{\mathbf{p}},\mathbf{x}) = \det \dot{\Gamma } \sum _{\varvec{\gamma }\in \dot{\Gamma } } W_{a,i}(\dot{\mathbf{p}},\gamma - \mathbf{x}) , \end{aligned}$$
(2.64)

where \(\widehat{W}_{a,i}(\dot{\mathbf{p}},\mathbf{x})\) and \( W_{a,i}(\dot{\mathbf{p}},\mathbf{x}) \) are related by the Fourier transform. Using (2.62), we derive

$$\begin{aligned} W_{a,i}(\dot{\mathbf{p}},\mathbf{x}) = \prod _{j \in \{1,\ldots ,s\}\setminus \{i\}} w^{(1)}_j (\dot{p}_j,x_j) \prod _{j \in \{1,\ldots ,s\} \cap \{i\}} w^{(2)}_j (\dot{p}_j,x_j), \end{aligned}$$

where the factors can be described as follows (see also [27, Ref. 6.14–6.17]):

If \(j=1\) and \(i \ne 1\), then

$$\begin{aligned} w^{(1)}_1(\tau , \xi ) = \int _{-\infty }^{\infty }{\frac{1}{t} \eta (at)\widehat{\omega }(\tau t) e(-\xi t) \mathrm{d}t} =I(a^{-1}\tau , a^{-1}\xi ). \end{aligned}$$
(2.65)
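Here the second equality follows from the substitution \(u=at\):

$$\begin{aligned} \int _{-\infty }^{\infty }{\frac{\eta (at)\widehat{\omega }(\tau t) e(-\xi t)}{t} \mathrm{d}t} = \int _{-\infty }^{\infty }{\frac{\eta (u)\widehat{\omega }(a^{-1}\tau u) e(-a^{-1}\xi u)}{u} \mathrm{d}u} =I(a^{-1}\tau , a^{-1}\xi ). \end{aligned}$$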

Note that here we used formula (2.60). If \(j=1\) and \(i = 1\), then

$$\begin{aligned} w^{(2)}_1(\tau , \xi ) = \int _{-\infty }^{\infty }{\frac{1}{t^2} \eta (at)\widehat{\omega }(\tau t) e(-\xi t) \mathrm{d}t} =aJ_{f_1}(a^{-1}\tau , a^{-1}\xi ). \end{aligned}$$

Note that here we used formula (2.60) with \(f_1(t) =\eta (t)/t^2. \) If \(j \ge 2\), then

$$\begin{aligned} w^{(l)}_j(\tau , \xi ) = \int _{-\infty }^{\infty }{\frac{1}{t^l} m(t)\widehat{\omega }(\tau t) e(-\xi t) \mathrm{d}t} =J_{f_2}(\tau , \xi ). \end{aligned}$$
(2.66)

Here we used formula (2.60) with \(f_2(t) =m(t)/t^l\), \(j=2,\ldots ,s\), \(l=1,2\).

Applying Lemma D, we obtain for \(0<a \le 1\) that

$$\begin{aligned} |w_1^{(l)}(\tau , \xi ) | < \breve{c}_{(2s,2s)} (1+ a^{-1}|\xi |)^{-2s} \; \mathrm{and } \; |w_j^{(l)}(\tau , \xi )| < \breve{c}_{(2s,2s)} (1+ |\xi |)^{-2s} , \end{aligned}$$
(2.67)

with \(j=2,\ldots ,s \) and \(l=1,2\). Now, using (2.64) and Lemma C, we get (see also [27, Ref. 6.18, 6.19, 3.7, 3.10, 3.13]):

Lemma E

Let \(\dot{\Gamma } \subset {\mathbb {R}}^s\) be an admissible lattice, and \(0<a \le 1\). Then

$$\begin{aligned} \sup _{\mathbf{x}\in {\mathbb {R}}^s} | \breve{W}_{a,i}(\dot{\Gamma },\dot{\mathbf{p}},\mathbf{x})| \le \breve{c}_{(2s,2s)} \det \dot{\Gamma } H_{\dot{\Gamma }}. \end{aligned}$$

2.7 Dyadic Decomposition of \({\mathcal {B}}(\mathbf{b}/p,M)\)

Using the definition of the function m(x) (see (2.61)), we set

$$\begin{aligned} {\mathbb {M}}(\mathbf{x}) = \prod _{j=2}^s m(x_j). \end{aligned}$$
(2.68)

Let \(2^{\mathbf{q}}=(2^{q_1},\ldots ,2^{q_s})\), and

$$\begin{aligned} \psi _{\mathbf{q}}(\varvec{\gamma })= & {} {\mathbb {M}}(2^{-\mathbf{q}} \cdot \varvec{\gamma })\widehat{\varOmega }(\tau \varvec{\gamma })/\mathrm{Nm}(\varvec{\gamma }),\nonumber \\ {\mathcal {B}}_{\mathbf{q}}(M)= & {} {\mathcal {B}}_{\mathbf{q}}(\mathbf{b}/p,M)\nonumber \\= & {} \sum _{\varvec{\gamma }\in \Gamma ^\bot \setminus {\mathbf{0}} } \prod _{i=1}^s \sin (\pi \theta _i N_i \gamma _i) ( 1-\eta _M(\varvec{\gamma })) \psi _{\mathbf{q}}(\varvec{\gamma }) e(\langle \varvec{\gamma },\mathbf{b}/p \rangle +\dot{x}),\nonumber \\ \end{aligned}$$
(2.69)

with \( \dot{x} = \sum _{1 \le i \le s} \theta _i N_i \gamma _i/2 \).

By (2.17) and (2.61), we have

$$\begin{aligned} {\mathcal {B}}(\mathbf{b}/p,M) = \sum _{\mathbf{q} \in {\mathcal {L}}} {\mathcal {B}}_{\mathbf{q}}(M), \end{aligned}$$
(2.70)

with \( {\mathcal {L}}=\{\mathbf{q}=(q_1,\ldots ,q_s) \in {\mathbb {Z}}^s \; | \; q_1+ \cdots +q_s =0\}\).

Let

$$\begin{aligned} {\widetilde{{\mathcal {B}}}}_{\mathbf{q}}(M) = \sum _{\varvec{\gamma }\in \Gamma ^\bot \setminus {\mathbf{0}} } \prod _{i=1}^s \sin (\pi \theta _i N_i \gamma _i) \eta (\gamma _1 2^{-q_1 }/M) \psi _{\mathbf{q}}( \varvec{\gamma }) e(\langle \varvec{\gamma },\mathbf{b}/p \rangle +\dot{x}), \end{aligned}$$
(2.71)

and

$$\begin{aligned} {\mathcal {C}}_{\mathbf{q}}(M)= & {} \sum _{\varvec{\gamma }\in \Gamma ^\bot \setminus {\mathbf{0}} } \prod _{i=1}^s \sin (\pi \theta _i N_i \gamma _i) ( 1-\eta _M(\varvec{\gamma }))\nonumber \\&\quad \times (1- \eta (\gamma _1 2^{-q_1 }/M)) \psi _{\mathbf{q}}( \varvec{\gamma })e(\langle \varvec{\gamma },\mathbf{b}/p \rangle +\dot{x}). \end{aligned}$$

According to (2.16), we get \(\eta _M(\varvec{\gamma }) =1- \eta (2|\mathrm{Nm}(\varvec{\gamma })|/M)\), \(\eta (x)=0\) for \(|x| \le 1\), \(\eta (x) = \eta (-x)\) and \(\eta (x)=1\) for \(|x| \ge 2\). If \(\eta (\gamma _1 2^{-q_1 }/M) m(\gamma _2 2^{-q_2 }) \cdots m(\gamma _s 2^{-q_s }) \ne 0\), then \(|\mathrm{Nm}(\varvec{\gamma })| \ge M \) (see (2.61)), and

$$\begin{aligned} ( 1-\eta _M(\varvec{\gamma })) \eta (\gamma _1 2^{-q_1 }/M) = \eta (2|\mathrm{Nm}(\varvec{\gamma })|/M) \eta (\gamma _1 2^{-q_1 }/M)=\eta (\gamma _1 2^{-q_1 }/M). \end{aligned}$$

Hence

$$\begin{aligned} {\mathcal {B}}_{\mathbf{q}}(M) = {\widetilde{{\mathcal {B}}}}_{\mathbf{q}}(M) + {\mathcal {C}}_{\mathbf{q}}(M). \end{aligned}$$
(2.72)

Let \(n=s^{-1} \log _2 N, \; \tau =N^{-2}\) and

$$\begin{aligned} {\mathcal {G}}_1= & {} \{ \mathbf{q}\in {\mathcal {L}}\; | \; \max _{i=1,\ldots ,s} q_i \ge -\log _2 \tau + \log _2 n\},\nonumber \\ {\mathcal {G}}_2= & {} \{ \mathbf{q}\in {\mathcal {L}}\setminus {\mathcal {G}}_1 \; | \; \min _{i=2,\ldots ,s} q_i \le - n-1/2\log _2 n \},\nonumber \\ {\mathcal {G}}_3= & {} \{ \mathbf{q}\in {\mathcal {L}}\; | \; - n-1/2\log _2 n < \min _{i=2,\ldots ,s} q_i, \;\; \max _{i=1,\ldots ,s} q_i < -\log _2 \tau +\log _2 n \},\nonumber \\ {\mathcal {G}}_4= & {} \{ \mathbf{q}\in {\mathcal {G}}_3 \; | \; q_1 \ge -n +s \log _2 n \},\nonumber \\ {\mathcal {G}}_5= & {} \{ \mathbf{q}\in {\mathcal {G}}_3 \; | \; -n -s \log _2 n \le q_1 < -n +s \log _2 n \},\nonumber \\ {\mathcal {G}}_6= & {} \{ \mathbf{q}\in {\mathcal {G}}_3 \; | \; q_1 < -n -s \log _2 n \}. \end{aligned}$$
(2.73)

We see

$$\begin{aligned} {\mathcal {L}}= {\mathcal {G}}_1 \cup {\mathcal {G}}_2 \cup {\mathcal {G}}_3, \quad {\mathcal {G}}_3 = {\mathcal {G}}_4 \cup {\mathcal {G}}_5 \cup {\mathcal {G}}_6 \quad \mathrm{and} \quad {\mathcal {G}}_i \cap {\mathcal {G}}_j =\emptyset , \; \mathrm{for}\; i\ne j \end{aligned}$$
(2.74)

and \(i,j \in [1,3]\) or \(i,j \in [4,6]\). Let

$$\begin{aligned} {\mathcal {B}}_i(M) = \sum _{{\mathbf{q}} \in {\mathcal {G}}_i} {\mathcal {B}}_{\mathbf{q}}(M) . \end{aligned}$$
(2.75)

By (2.70), we obtain

$$\begin{aligned} {\mathcal {B}}(\mathbf{b}/p,M) = {\mathcal {B}}_1(M) + {\mathcal {B}}_2(M) +{\mathcal {B}}_3(M) . \end{aligned}$$
(2.76)

Let

$$\begin{aligned} {\widetilde{{\mathcal {B}}}}_3(M) = \sum _{{\mathbf{q}} \in {\mathcal {G}}_3} {\widetilde{{\mathcal {B}}}}_{\mathbf{q}}(M), \quad {\widetilde{{\mathcal {C}}}}_3(M) = \sum _{{\mathbf{q}} \in {\mathcal {G}}_3} {\mathcal {C}}_{\mathbf{q}}(M). \end{aligned}$$
(2.77)

Applying (2.72) and (2.75), we get

$$\begin{aligned} {\mathcal {B}}_3(M) = {\widetilde{{\mathcal {B}}}}_3(M) + {\widetilde{{\mathcal {C}}}}_3(M). \end{aligned}$$
(2.78)

By (2.7), we obtain the absolute convergence of the following series

$$\begin{aligned} \sum _{\varvec{\gamma }\in \Gamma ^\bot \setminus {\mathbf{0}} } | \widehat{\varOmega } (\tau \varvec{\gamma })/\mathrm{Nm}(\varvec{\gamma })|. \end{aligned}$$

Hence, the series (2.71), (2.75) and (2.77) converge absolutely.

Let

$$\begin{aligned} \breve{{\mathcal {B}}}_{\mathbf{q}}(M, \varvec{\varsigma }) = \sum _{\varvec{\gamma }\in \Gamma ^\bot \setminus {\mathbf{0}}} \eta (\gamma _1 2^{-q_1 }/M)\psi _{\mathbf{q}}(\varvec{\gamma })e(\langle \varvec{\gamma }, \mathbf{b}/p+{\dot{\varvec{\theta }}}(\varvec{\varsigma }) \rangle ) \end{aligned}$$
(2.79)

with \({\dot{\varvec{\theta }}}(\varvec{\varsigma }) = (\dot{\theta }_1(\varvec{\varsigma }),\ldots ,\dot{\theta }_s(\varvec{\varsigma }))\) and \(\dot{\theta }_i(\varvec{\varsigma }) = (1+\varsigma _i) \theta _iN_i/4 , \; i=1,\ldots ,s\). By (2.71), we have

$$\begin{aligned} {\widetilde{{\mathcal {B}}}}_{\mathbf{q}}(M) = \sum _{\varvec{\varsigma }\in \{1,-1\}^s } \varsigma _1 \cdots \varsigma _s (2\sqrt{-1})^{-s} \breve{{\mathcal {B}}}_{\mathbf{q}}(M, \varvec{\varsigma }) . \end{aligned}$$
(2.80)

Let \(\varvec{\varsigma }_2 = -\mathbf{1} =-(1,1,\ldots ,1)\), \(\varvec{\varsigma }_3 = \dot{\mathbf{1}}=(1,-1,\ldots ,-1)\), and let

$$\begin{aligned} {\widetilde{{\mathcal {B}}}}_{3,1}(M)= & {} \sum _{{\mathbf{q}} \in {\mathcal {G}}_3} \sum _{ \begin{array}{c} \varvec{\varsigma }\in \{1,-1\}^s \\ \varvec{\varsigma }\ne \varvec{\varsigma }_2, \varvec{\varsigma }_3 \end{array}} \varsigma _1 \cdots \varsigma _s (2\sqrt{-1})^{-s} \breve{{\mathcal {B}}}_{\mathbf{q}}(M, \varvec{\varsigma }) , \end{aligned}$$
(2.81)
$$\begin{aligned} {\widetilde{{\mathcal {B}}}}_{i,j}(M)= & {} (-1)^{s+j} (2\sqrt{-1})^{-s} \sum _{{\mathbf{q}} \in {\mathcal {G}}_i} \breve{{\mathcal {B}}}_{\mathbf{q}}(M, \varvec{\varsigma }_j), \quad i=3,4,5,6, \;j=2,3.\nonumber \\ \end{aligned}$$
(2.82)

Using (2.77) and (2.80), we derive

$$\begin{aligned} {\widetilde{{\mathcal {B}}}}_{3}(M) = {\widetilde{{\mathcal {B}}}}_{3,1}(M) + {\widetilde{{\mathcal {B}}}}_{3,2}(M) + {\widetilde{{\mathcal {B}}}}_{3,3}(M) . \end{aligned}$$

Bearing in mind (2.74), we obtain

$$\begin{aligned} {\widetilde{{\mathcal {B}}}}_{3}(M) = {\widetilde{{\mathcal {B}}}}_{3,1}(M) + \sum _{i=4,5,6} \sum _{j=2,3} {\widetilde{{\mathcal {B}}}}_{i,j}(M) . \end{aligned}$$
(2.83)

Let

$$\begin{aligned} {\widetilde{{\mathcal {B}}}}_{6,j,k}(M) = (-1)^{s+j} (2\sqrt{-1})^{-s} \sum _{{\mathbf{q}} \in {\mathcal {G}}_6} \breve{{\mathcal {B}}}_{\mathbf{q}}^{(k)}(M,\varvec{\varsigma }_j ), \quad j=2,3, \;\; k=1,2, \end{aligned}$$
(2.84)

where

$$\begin{aligned} \breve{{\mathcal {B}}}_{\mathbf{q}}^{(1)}(M,\varvec{\varsigma }) = \sum _{\varvec{\gamma }\in \Gamma ^\bot \setminus {\mathbf{0}}} \eta (\gamma _1 2^{-q_1 }/M)\psi _{\mathbf{q}}(\varvec{\gamma })\eta (2^{n +\log _2 n }\gamma _1)e(\langle \varvec{\gamma }, \mathbf{b}/p+{\dot{\varvec{\theta }}}(\varvec{\varsigma }) \rangle ) \end{aligned}$$

and

$$\begin{aligned} \breve{{\mathcal {B}}}_{\mathbf{q}}^{(2)}(M, \varvec{\varsigma }) = \sum _{\varvec{\gamma }\in \Gamma ^\bot \setminus {\mathbf{0}}} \eta (\gamma _1 2^{-q_1 }/M)\psi _{\mathbf{q}}(\varvec{\gamma })(1-\eta (2^{n +\log _2 n }\gamma _1))e(\langle \varvec{\gamma }, \mathbf{b}/p+{\dot{\varvec{\theta }}}(\varvec{\varsigma }) \rangle ). \end{aligned}$$

From (2.79), (2.82) and (2.84) , we get

$$\begin{aligned} \breve{{\mathcal {B}}}_{\mathbf{q}}(M, \varvec{\varsigma }) = \breve{{\mathcal {B}}}_{\mathbf{q}}^{(1)}(M, \varvec{\varsigma }) + \breve{{\mathcal {B}}}_{\mathbf{q}}^{(2)}(M, \varvec{\varsigma }) \quad \mathrm{and} \quad {\widetilde{{\mathcal {B}}}}_{6,j}(M) ={\widetilde{{\mathcal {B}}}}_{6,j,1}(M) + {\widetilde{{\mathcal {B}}}}_{6,j,2}(M). \end{aligned}$$

Thus, we have proved the following lemma:

Lemma 13

With notations as above, we get from (2.76), (2.78) and (2.83)

$$\begin{aligned} {\mathcal {B}}(\mathbf{b}/p,M) = \bar{{\mathcal {B}}}(M) + {\widetilde{{\mathcal {C}}}}_3(M) , \end{aligned}$$
(2.85)

where

$$\begin{aligned} \bar{{\mathcal {B}}}(M) = {\mathcal {B}}_{1}(M) + {\mathcal {B}}_{2}(M) + {\widetilde{{\mathcal {B}}}}_{3}(M) \end{aligned}$$
(2.86)

and

$$\begin{aligned} {\widetilde{{\mathcal {B}}}}_{3}(M) = {\widetilde{{\mathcal {B}}}}_{3,1}(M) + \sum _{j=2,3} ( {\widetilde{{\mathcal {B}}}}_{4,j}(M)+{\widetilde{{\mathcal {B}}}}_{5,j}(M)+{\widetilde{{\mathcal {B}}}}_{6,j,1}(M)+{\widetilde{{\mathcal {B}}}}_{6,j,2}(M) ) . \end{aligned}$$
(2.87)

2.8 The Upper Bound Estimate for \(\mathbf{E}(\bar{{\mathcal {B}}}(M))\)

Lemma 14

With notations as above

$$\begin{aligned} {\mathcal {B}}_1(M) =O(1). \end{aligned}$$

Proof

Let \(\mathbf{q}\in {\mathcal {G}}_1\), and let \(j=q_{i_0} = \max _{1 \le i \le s} q_i\), \(i_0 \in [1,\ldots ,s]\). By (2.73), we have \(j \ge -\log _2 \tau + \log _2 n\). Using (2.69), we obtain

$$\begin{aligned} | {\mathcal {B}}_{\mathbf{q}}(M)| \le \sum _{\varvec{\gamma }\in \Gamma ^\bot \setminus {\mathbf{0}}} \big | \prod _{i=1}^s \sin (\pi \theta _i N_i \gamma _i) \frac{ {\mathbb {M}}(2^{-\mathbf{q}} \cdot \varvec{\gamma })\widehat{\varOmega }( \tau \varvec{\gamma }) }{ \mathrm{Nm}(\varvec{\gamma }) } \big |. \end{aligned}$$
(2.88)

From (2.68) and (2.61), we get

$$\begin{aligned} | {\mathcal {B}}_{\mathbf{q}}(M)| \le \rho _1 + \rho _2 \quad \mathrm{with} \quad \rho _i = \sum _{\varvec{\gamma }\in {\mathcal {X}}_i} \frac{ |{\mathbb {M}}(\varvec{\gamma })\widehat{\varOmega }(\tau 2^{\mathbf{q}} \cdot \varvec{\gamma })| }{ |\mathrm{Nm}(\varvec{\gamma })| }, \end{aligned}$$
(2.89)

where

$$\begin{aligned} {\mathcal {X}}_1 = \{ \varvec{\gamma }\in 2^{-\mathbf{q}} \cdot \Gamma ^\bot \setminus {\mathbf{0}} \; | \; |\gamma _1| \le 2^{4sj}, \; |\gamma _i| \in [1,4], \; i =2,\ldots ,s \} , \end{aligned}$$

and

$$\begin{aligned} {\mathcal {X}}_2 = \{ \varvec{\gamma }\in 2^{-\mathbf{q}} \cdot \Gamma ^\bot \setminus {\mathbf{0}} \; | \; |\gamma _1| > 2^{4sj}, \; |\gamma _i| \in [1,4], \; i =2,\ldots ,s \} . \end{aligned}$$

We consider the admissible lattice \(2^{-\mathbf{q}} \cdot \Gamma ^\bot \), where \(\mathrm{Nm}(\Gamma ^\bot ) \ge 1\). Using Theorem A, we obtain that there exists a constant \(c_{9}= c_{9}(\Gamma ^\bot )\) such that

$$\begin{aligned} \# \{ \varvec{\gamma }\in 2^{-\mathbf{q}} \cdot \Gamma ^\bot \; | \; |\gamma _i| \le 4, i=2,\ldots ,s, \;2^{4(s-1)} |\gamma _1| \in [k,2k] \} \le c_{9} k, \end{aligned}$$
(2.90)

where \(k=1,2,\ldots \).
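Indeed, the points in question lie in the box \(|\gamma _1| \le 2^{-4(s-1)+1}k\), \(|\gamma _i| \le 4\) \((i =2,\ldots ,s)\) of volume \(O(k)\), and \(\det (2^{-\mathbf{q}} \cdot \Gamma ^\bot ) = \det \Gamma ^\bot \), \(\mathrm{Nm}(2^{-\mathbf{q}} \cdot \Gamma ^\bot ) = \mathrm{Nm}(\Gamma ^\bot ) \ge 1\), because \(q_1+\cdots +q_s=0\); hence Theorem A gives

$$\begin{aligned} \# \{ \varvec{\gamma }\in 2^{-\mathbf{q}} \cdot \Gamma ^\bot \; | \; |\gamma _i| \le 4, \; i=2,\ldots ,s, \;2^{4(s-1)} |\gamma _1| \in [k,2k] \} = O(k) + O\big ( \log _2^{s-1} (2+ k)\big ) =O(k). \end{aligned}$$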

Let \(i_0=1\). We see that \(\tau 2^{q_1} =\tau 2^{j} \ge 2^{\log _2 n} =n\). By (2.7), (2.88) and (2.90), we get

$$\begin{aligned} {\mathcal {B}}_{\mathbf{q}}(M) = O\big ( \sum _{k \ge 0} \sum _{\begin{array}{c} \varvec{\gamma }\in 2^{-\mathbf{q}} \cdot \Gamma ^\bot \setminus {\mathbf{0}}, \; 1 \le |\gamma _i| \le 4,\; i \ge 2\\ 2^{4(s-1)} |\gamma _1| \in [2^k,2^{k+1}] \end{array}} \frac{ |\widehat{\omega }(\tau 2^{q_1} \gamma _1)| }{ |\mathrm{Nm}(\varvec{\gamma })| }\big )= O \big ( \sum _{k \ge 0} (1+ \tau 2^{q_1+k})^{-2s} \big ). \end{aligned}$$

Hence

$$\begin{aligned} {\mathcal {B}}_{\mathbf{q}}(M) = O((\tau 2^{j})^{-2s}). \end{aligned}$$
(2.91)

Let \(i_0 \ge 2\). Bearing in mind (2.7) and (2.90), we have

$$\begin{aligned} \rho _1= & {} O\big ( \sum _{0 \le k \le 4s(j+1)} \sum _{\begin{array}{c} \varvec{\gamma }\in 2^{-\mathbf{q}} \cdot \Gamma ^\bot \setminus {\mathbf{0}}, \; 1 \le |\gamma _i| \le 4,\; i \ge 2\\ 2^{4(s-1)} |\gamma _1| \in [2^k,2^{k+1}] \end{array}} \frac{ |\widehat{\omega }(\tau 2^{q_{i_0}} \gamma _{i_0})| }{ |\mathrm{Nm}(\varvec{\gamma })| }\big )\nonumber \\= & {} O\big ( \sum _{0 \le k \le 4s(j+1)} (1+ \tau 2^{q_{i_0}})^{-2s} \big ). \end{aligned}$$

Hence

$$\begin{aligned} \rho _1 = O(j(1+ \tau 2^j)^{-2s}) . \end{aligned}$$
(2.92)

Taking into account that \(q_1 =-(q_2+ \cdots + q_s) \ge -(s-1)j\) and \(\tau 2^{j} \ge n\), we obtain

$$\begin{aligned} \rho _2= & {} O\big ( \sum _{ k \ge 4sj} \sum _{\begin{array}{c} \varvec{\gamma }\in 2^{-\mathbf{q}} \cdot \Gamma ^\bot \setminus {\mathbf{0}}, \; 1 \le |\gamma _i| \le 4,\; i \ge 2\\ 2^{4(s-1)} |\gamma _1| \in [2^k,2^{k+1}] \end{array}} \frac{ |\widehat{\omega }(\tau 2^{q_{1}} \gamma _{1}) \widehat{\omega }(\tau 2^{q_{i_0}} \gamma _{i_0})| }{ |\mathrm{Nm}(\varvec{\gamma })| }\big )\nonumber \\= & {} O\big ( \sum _{k \ge 4sj} (1+ \tau 2^{q_1+k})^{-2s} (1+ \tau 2^{q_{i_0}})^{-2s} \big ) =O\big ( (1+ \tau 2^{q_{i_0}})^{-2s} \big ). \end{aligned}$$

Therefore

$$\begin{aligned} \rho _2 = O((1+ \tau 2^j)^{-2s}). \end{aligned}$$
(2.93)

Thus

$$\begin{aligned} {\mathcal {B}}_{\mathbf{q}}(M) =O(j( \tau 2^j)^{-2s}). \end{aligned}$$
(2.94)

From (2.20), we have

$$\begin{aligned} \sum _{\mathbf{q}\in {\mathbb {Z}}^{s},\; q_1+\cdots +q_s=0,\; \max _{i} q_i =j} 1 =O(j^{s-2}). \end{aligned}$$
(2.95)
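This can also be seen directly: if \(\max _{i} q_i =j\) is attained at a fixed index, the remaining \(s-1\) coordinates lie in \([-(s-1)j, j]\) and sum to \(-j\), so \(s-2\) of them may be chosen freely in an interval of length \(O(j)\) and the last one is then determined; hence

$$\begin{aligned} \sum _{\mathbf{q}\in {\mathbb {Z}}^{s},\; q_1+\cdots +q_s=0,\; \max _{i} q_i =j} 1 \le s (sj+1)^{s-2} =O(j^{s-2}). \end{aligned}$$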

By (2.73), (2.75), (2.94) and (2.91), we get

$$\begin{aligned} {\mathcal {B}}_1(M)= & {} \sum _{{\mathbf{q}} \in {\mathcal {G}}_1} {\mathcal {B}}_{\mathbf{q}}(M) = O\big ( \sum _{j \ge -\log _2 \tau + \log _2 n} \;\; \sum _{\mathbf{q}\in {\mathcal {L}}, \max _{i} q_i =j} j (\tau 2^j)^{-2s} \big )\nonumber \\= & {} O\big ( \sum _{j \ge -\log _2 \tau + \log _2 n} j^{s}(\tau 2^j)^{-2s} \big ) =O(n^s n^{-2s}) =O(1). \end{aligned}$$

Hence, Lemma 14 is proved. \(\square \)

Lemma 15

With notations as above

$$\begin{aligned} | {\mathcal {B}}_2(M)| + |{\widetilde{{\mathcal {B}}}}_{6,2,2}(M) +{\widetilde{{\mathcal {B}}}}_{6,3,2}(M)|=O(n^{s-3/2}). \end{aligned}$$

Proof

We consider \({\mathcal {B}}_2(M)\) (see (2.69), (2.73) and (2.75)). Let \(\mathbf{q}\in {\mathcal {G}}_2\), and let \(j=-q_{i_0} = \min _{2 \le i \le s} q_i\), \(i_0 \in [2,\ldots ,s]\). We see \(j \ge n+1/2\log _2 n\) and \(|\sin (\pi N_{i_0} \gamma _{i_0})| \le \pi N_{i_0} 2^{-j+2}\) for \(m(2^{-q_{i_0}} \gamma _{i_0} ) \ne 0\). By (2.88) and (2.89), we obtain

$$\begin{aligned} {\mathcal {B}}_{\mathbf{q}}(M) =O( \rho _1 + \rho _2) \quad \mathrm{with} \quad \rho _i = \sum _{\varvec{\gamma }\in {\mathcal {X}}_i} \frac{ |N^{1/s} 2^{-j}{\mathbb {M}}(\varvec{\gamma })\widehat{\varOmega }(\tau 2^{\mathbf{q}} \cdot \varvec{\gamma })| }{ |\mathrm{Nm}(\varvec{\gamma })| }. \end{aligned}$$

Similarly to (2.92), (2.93), we get

$$\begin{aligned} \rho _1= & {} O\big ( \sum _{0 \le k \le 4s(j+1)} \sum _{\begin{array}{c} \varvec{\gamma }\in 2^{-\mathbf{q}} \cdot \Gamma ^\bot \setminus {\mathbf{0}}, \; 1 \le |\gamma _i| \le 4,\; i \ge 2\\ 2^{4(s-1)} |\gamma _1| \in [2^k,2^{k+1}] \end{array}} \frac{ N^{1/s} 2^{-j}}{ |\mathrm{Nm}(\varvec{\gamma })| }\big )\nonumber \\= & {} O\big ( \sum _{0 \le k \le 4s(j+1)} N^{1/s} 2^{-j} \big )= O(jN^{1/s} 2^{-j}) . \end{aligned}$$

We see

$$\begin{aligned} \rho _2 = O\big ( \sum _{ k \ge 4sj} \sum _{\begin{array}{c} \varvec{\gamma }\in 2^{-\mathbf{q}} \cdot \Gamma ^\bot \setminus {\mathbf{0}}, \; 1 \le |\gamma _i| \le 4,\; i \ge 2\\ 2^{4(s-1)} |\gamma _1| \in [2^k,2^{k+1}] \end{array}} \frac{ N^{1/s} 2^{-j}|\widehat{\omega }(\tau 2^{q_{1}} \gamma _{1})| }{ |\mathrm{Nm}(\varvec{\gamma })| }\big ). \end{aligned}$$

We have \(\max _{1 \le i \le s} q_i \le - \log _2 \tau +\log _2 n\) for \(\mathbf{q}\in {\mathcal {G}}_2\). Hence \(q_1 = -(q_2+...+q_s) \ge (s-1)(\log _2 \tau -\log _2 n)\) and \(\tau 2^{q_1} \ge \tau ^{s} n^{-s+1} = 2^{-2ns} n^{-s+1} > 2^{-2sj}\). Thus

$$\begin{aligned} \rho _2= & {} O\big ( N^{1/s} 2^{-j} \sum _{k \ge 4sj} (1+ \tau 2^{q_1+k})^{-2s} \big )\nonumber \\= & {} O\big ( N^{1/s} 2^{-j} \sum _{k \ge 4sj} 2^{-2s(k-2sj)} \big ) =O(N^{1/s} 2^{-j}). \end{aligned}$$

Bearing in mind (2.95), we derive

$$\begin{aligned} {\mathcal {B}}_2(M)= & {} \sum _{{\mathbf{q}} \in {\mathcal {G}}_2} {\mathcal {B}}_{\mathbf{q}}(M) = O\big ( \sum _{j \ge n+1/2 \log _2 n} \sum _{\mathbf{q}\in {\mathcal {L}}, \min _{2 \le i \le s} q_i =-j} j N^{1/s} 2^{-j} \big )\nonumber \\= & {} O\big ( \sum _{j \ge n+1/2\log _2 n} j^{s-1}N^{1/s} 2^{-j} \big ) =O(n^{s-3/2}). \end{aligned}$$
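In the last step we used that \(N^{1/s} = 2^{n}\), so that

$$\begin{aligned} N^{1/s} 2^{-j} = 2^{n-j} \le 2^{-\frac{1}{2} \log _2 n} = n^{-1/2} \quad \mathrm{for} \quad j \ge n+ 1/2\log _2 n , \end{aligned}$$

and the terms \(j^{s-1} N^{1/s}2^{-j}\) decay geometrically in \(j\).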

Consider \(\rho := \breve{{\mathcal {B}}}_{\mathbf{q}}^{(2)}(M,\dot{\mathbf{1}} )+ \breve{{\mathcal {B}}}_{\mathbf{q}}^{(2)}(M,-\mathbf{1} )\). By (2.69) and (2.84), we have

$$\begin{aligned} \rho= & {} O\big ( \sum _{\varvec{\gamma }\in \Gamma ^\bot \setminus {\mathbf{0}}} | \sin ( \pi \theta _1 N_1 \gamma _1) \eta (\gamma _1 2^{-q_1 }/M){\mathbb {M}} (2^{-\mathbf{q}} \cdot \varvec{\gamma }) \widehat{\varOmega }(\tau \varvec{\gamma })/\mathrm{Nm}(\varvec{\gamma })\nonumber \\&\times (1 - \eta (2^{n +\log _2 n }\gamma _1)) e(\langle \varvec{\gamma }, \mathbf{b}/p \rangle ) |\big )\nonumber \\= & {} O\big ( \sum _{\varvec{\gamma }\in 2^{-\mathbf{q}} \cdot \Gamma ^\bot \setminus {\mathbf{0}}} |\sin (\pi \theta _1 N_1 2^{q_1}\gamma _1) (1 - \eta (2^{q_1+n +\log _2 n }\gamma _1)) {\mathbb {M}}(\varvec{\gamma })/\mathrm{Nm}(\varvec{\gamma }) | \big ). \end{aligned}$$

Applying (2.16), (2.68) and (2.90), we obtain

$$\begin{aligned} \rho= & {} O\big ( \sum _{\varvec{\gamma }\in 2^{-\mathbf{q}}\cdot \Gamma ^\bot \setminus {\mathbf{0}},\; |\gamma _1| \le 2^{-q_1- n -\log _2 n +4 }} | N_1 2^{q_1}\gamma _1 {\mathbb {M}}( \varvec{\gamma })/\mathrm{Nm}(\varvec{\gamma }) | \big )\nonumber \\= & {} O\big ( \sum _{ \begin{array}{c} \varvec{\gamma }\in 2^{-\mathbf{q}} \cdot \Gamma ^\bot \setminus {\mathbf{0}}, \; 1 \le |\gamma _i| \le 4,\; i \ge 2\\ |\gamma _1| \le 2^{-q_1- n -\log _2 n +4 } \end{array}} N_1 2^{q_1} \big ) =O(N_1 2^{q_1}2^{-q_1- n -\log _2 n +4 } ) =O(1/n). \end{aligned}$$

We get from (2.73) that

$$\begin{aligned} \# {\mathcal {G}}_3 =O(n^{s-1}). \end{aligned}$$
(2.96)

By (2.73) and (2.84), we get \( {\widetilde{{\mathcal {B}}}}_{6,2,2}(M) +{\widetilde{{\mathcal {B}}}}_{6,3,2}(M) =O(n^{s-2})\).

Hence, Lemma 15 is proved. \(\square \)

Lemma 16

With notations as above

$$\begin{aligned} | \mathbf{E}({\widetilde{{\mathcal {B}}}}_{3,1}(M))| + | \mathbf{E}({\widetilde{{\mathcal {B}}}}_{4,3}(M))|+ | {\widetilde{{\mathcal {B}}}}_{5,2}(M)| +| {\widetilde{{\mathcal {B}}}}_{5,3}(M)|= O(n^{s-3/2}). \end{aligned}$$

Proof

By (2.69) and (2.79), we have

$$\begin{aligned} \breve{{\mathcal {B}}}_{\mathbf{q}}(M, \varvec{\varsigma })= & {} \sum _{\varvec{\gamma }\in 2^{-\mathbf{q}} \cdot \Gamma ^\bot \setminus {\mathbf{0}}} \eta (\gamma _1 /M)\psi _{\mathbf{q}}(2^{\mathbf{q}} \cdot \varvec{\gamma })e(\langle \varvec{\gamma },\mathbf{x}\rangle )\nonumber \\= & {} \sum _{\varvec{\gamma }\in 2^{-\mathbf{q}} \cdot \Gamma ^\bot \setminus {\mathbf{0}}} \frac{ \widehat{\omega } (2^{q_1}\tau \gamma _1) \eta (\gamma _1 /M)}{\gamma _1} \prod _{j=2}^s \frac{ \widehat{\omega } (2^{q_j}\tau \gamma _j) m ( \gamma _j) }{\gamma _j} e(\langle \varvec{\gamma },\mathbf{x}\rangle ),\nonumber \\ \end{aligned}$$
(2.97)

with \(\mathbf{x}= 2^{\mathbf{q}} \cdot ( \mathbf{b}/p+{\dot{\varvec{\theta }}}(\varvec{\varsigma })) \) and \(\dot{\theta }_i(\varvec{\varsigma }) = (1+\varsigma _i) \theta _iN_i/4 , \; i=1,\ldots ,s\).

Applying (2.64) and Lemma E with \(\dot{\Gamma }^\bot = 2^{-\mathbf{q}} \cdot \Gamma ^\bot \), \(i =0\), and \(\dot{\mathbf{p}} = \tau 2^{\mathbf{q}}\), we get

$$\begin{aligned} \breve{{\mathcal {B}}}_{\mathbf{q}}(M,\varvec{\varsigma }) = O( 1 ). \end{aligned}$$

Using (2.73), we obtain \(\# {\mathcal {G}}_5 =O(n^{s-2} \log _2 n) \).

By (2.82) , we get

$$\begin{aligned} {\widetilde{{\mathcal {B}}}}_{5,i}(M) = O\big ( \sum _{\mathbf{q}\in {\mathcal {G}}_5} |\breve{{\mathcal {B}}}_{\mathbf{q}}(M,\varvec{\varsigma }_i)| \big ) = O(n^{s-2} \log _2 n),\quad i=2,3 . \end{aligned}$$
(2.98)

Consider \(\mathbf{E}({\widetilde{{\mathcal {B}}}}_{3,1}(M))\) and \(\mathbf{E}({\widetilde{{\mathcal {B}}}}_{4,3}(M))\). Let

$$\begin{aligned} \mathbf{E}_i(f) =\int _{0}^1 f(\varvec{\theta })\mathrm{d}\theta _i. \end{aligned}$$

Let \(\varvec{\varsigma }\ne -\mathbf{1} \). Then there exists \(i_0 =i_0(\varvec{\varsigma }) \in [1,s]\) with \(\varsigma _{i_0} =1\).

By (2.52) and (2.97), we have

$$\begin{aligned} \mathbf{E}_{i_0}(\breve{{\mathcal {B}}}_{\mathbf{q}}(M,\varvec{\varsigma }))= & {} \sum _{\varvec{\gamma }\in 2^{-\mathbf{q}} \cdot \Gamma ^\bot \setminus {\mathbf{0}}} \frac{e(N_{i_0} 2^{q_{i_0}} \gamma _{i_0}/2)-1}{\pi \sqrt{-1} N_{i_0} 2^{q_{i_0}} \gamma _{i_0}} \; \frac{ \widehat{\omega } (2^{q_1} \tau \gamma _1) \eta (\gamma _1 /M)}{\gamma _1}\nonumber \\&\times \prod _{j=2}^s \frac{ \widehat{\omega } ( 2^{q_j} \tau \gamma _j) m ( \gamma _j) }{\gamma _j} e(\langle \varvec{\gamma },\mathbf{x}\rangle ), \end{aligned}$$

with some \(\mathbf{x}\in {\mathbb {R}}^s\). Hence

$$\begin{aligned} \mathbf{E}_{i_0}(\breve{{\mathcal {B}}}_{\mathbf{q}}(M,\varvec{\varsigma })) = O\big ( N_{i_0}^{-1} 2^{-q_{i_0}} \sup _{\mathbf{x}\in {\mathbb {R}}^s} \big | \sum _{\varvec{\gamma }\in 2^{-\mathbf{q}} \cdot \Gamma ^\bot \setminus {\mathbf{0}}} \widehat{{\mathcal {B}}}_{\mathbf{q}}(M, \varvec{\gamma }, i_0) e(\langle \varvec{\gamma },\mathbf{x}\rangle ) \big | \big ), \end{aligned}$$

where

$$\begin{aligned} \widehat{{\mathcal {B}}}_{\mathbf{q}}(M, \varvec{\gamma },i_0) = \frac{ \widehat{\omega } (2^{q_1}\tau \gamma _1) \eta (\gamma _1 /M)}{\gamma _1} \prod _{j=2}^s \frac{ \widehat{\omega } (2^{q_j}\tau \gamma _j) m ( \gamma _j) }{\gamma _j} \frac{1}{\gamma _{i_0}} . \end{aligned}$$

Applying (2.64) and Lemma E with \(\dot{\Gamma }^\bot = 2^{-\mathbf{q}} \cdot \Gamma ^\bot \), and \(\dot{\mathbf{p}} = \tau 2^{\mathbf{q}}\), we obtain

$$\begin{aligned} \mathbf{E}(\breve{{\mathcal {B}}}_{\mathbf{q}}(M,\varvec{\varsigma })) = \mathbf{E}(\mathbf{E}_{i_0}(\breve{{\mathcal {B}}}_{\mathbf{q}}(M,\varvec{\varsigma }))) = O( N_{i_0}^{-1} 2^{-q_{i_0}} ). \end{aligned}$$
(2.99)

By (2.81), we have \(i_0(\varvec{\varsigma }) \ge 2\) and

$$\begin{aligned} \mathbf{E}({\widetilde{{\mathcal {B}}}}_{3,1}(M)) = O\big ( \sum _{ \begin{array}{c} \varvec{\varsigma }\in \{1,-1\}^s \\ \varvec{\varsigma }\ne -\mathbf{1}, \dot{\mathbf{1}} \end{array}} \sum _{\mathbf{q}\in {\mathcal {G}}_3} N_{i_0(\varvec{\varsigma })}^{-1} 2^{-q_{i_0(\varvec{\varsigma })}} \big ). \end{aligned}$$

Using (2.73), we get \(\# \{\mathbf{q}\in {\mathcal {G}}_3\; |\; q_{i_0} =j \}=O(n^{s-2} ) \) and \( j \ge - n - 1/2\log _2 n\). Hence

$$\begin{aligned} \mathbf{E}({\widetilde{{\mathcal {B}}}}_{3,1}(M)) = O\big ( n^{s-2} \sum _{ j \ge - n - 1/2\log _2 n} N^{-1/s} 2^{-j} \big ) =O(n^{s-3/2} ). \end{aligned}$$
(2.100)

From (2.73), we get \(q_1 \ge -n +s\log _2 n\) for \(\mathbf{q}\in {\mathcal {G}}_4\). Applying (2.82), (2.96) and (2.99) with \(i_0(\varvec{\varsigma }) =1\), we obtain

$$\begin{aligned} \mathbf{E}({\widetilde{{\mathcal {B}}}}_{4,3}(M)) = O\big ( \sum _{\mathbf{q}\in {\mathcal {G}}_4} N_{1}^{-1} 2^{-q_{1}}\Big ) = O\big ( n^{s-1} \sum _{ q_1 \ge -n +s\log _2 n} N^{-1/s} 2^{-q_1} \big ) =O(1). \end{aligned}$$

By (2.98) and (2.100), Lemma 16 is proved. \(\square \)

Lemma 17

With notations as above

$$\begin{aligned} {\widetilde{{\mathcal {B}}}}_{4,2}(M) = O( n^{s-3/2} ). \end{aligned}$$

Proof

By (2.97), we have

$$\begin{aligned} \breve{{\mathcal {B}}}_{\mathbf{q}}(M,-\mathbf{1}) = \sum _{\varvec{\gamma }\in 2^{-\mathbf{q}} \cdot \Gamma ^\bot \setminus {\mathbf{0}}} \frac{ \widehat{\omega } (2^{q_1}\tau \gamma _1) \eta (\gamma _1 /M)}{\gamma _1} \prod _{j=2}^s \frac{ \widehat{\omega } (2^{q_j}\tau \gamma _j) m ( \gamma _j) }{\gamma _j} e(\langle \varvec{\gamma },2^{\mathbf{q}} \cdot \mathbf{b}/p \rangle ) . \end{aligned}$$

From (2.65), we derive that \(I(d,v)=0\) for \(v=0\). Hence \( w^{(1)}_1(\tau , 0)=0\). Now applying (2.64)–(2.67) with \(\dot{\Gamma }^\bot = 2^{-\mathbf{q}} \cdot \Gamma ^\bot , i=0\) and \(a=M^{-1}\), we get

$$\begin{aligned} |\breve{{\mathcal {B}}}_{\mathbf{q}}(M,-\mathbf{1}) |\le & {} \breve{c}_{(2s,2s)} \det \Gamma \sum _{\varvec{\gamma }\in 2^{\mathbf{q}} \cdot \Gamma , \; \gamma _1 \ne (\mathbf{b}/p)_1 } (1+M|\gamma _1 -2^{q_1}(\mathbf{b}/p)_1 |)^{-2s}\nonumber \\&\times \prod _{i=2}^s (1+|\gamma _i -2^{q_i}(\mathbf{b}/p)_i|)^{-2s}. \end{aligned}$$

Bearing in mind (2.1), we get \(p_1 \Gamma _{{\mathcal {O}}} \subseteq \Gamma ^\bot \subseteq \Gamma _{{\mathcal {O}}}\). Taking into account that \(p=p_1p_2p_3\) and \(\mathbf{b}\in \Gamma _{{\mathcal {O}}}\), we obtain

$$\begin{aligned} |\breve{{\mathcal {B}}}_{\mathbf{q}}(M,-\mathbf{1}) | \le \breve{c}_{(2s,2s)} \det \Gamma p^{2s^2}\sum _{\varvec{\gamma }\in p 2^{\mathbf{q}} \cdot \Gamma \setminus \mathbf{0}} (1+M|\gamma _1 |)^{-2s} \prod _{i=2}^s (1+|\gamma _i|)^{-2s}. \end{aligned}$$
(2.101)

We have

$$\begin{aligned} |\breve{{\mathcal {B}}}_{\mathbf{q}}(M,-\mathbf{1}) | \le \breve{c}_{(2s,2s)} \det \Gamma p^{2s^2}(a_1+a_2), \end{aligned}$$
(2.102)

where

$$\begin{aligned} a_1 = \sum _{\varvec{\gamma }\in p 2^{\mathbf{q}} \cdot \Gamma \setminus \mathbf{0}, \max |\gamma _i| \le M^{1/s }} (1+M|\gamma _1 |)^{-2s} \prod _{i=2}^s (1+|\gamma _i|)^{-2s}, \end{aligned}$$

and

$$\begin{aligned} a_2 = \sum _{\varvec{\gamma }\in p 2^{\mathbf{q}} \cdot \Gamma \setminus \mathbf{0}, \max |\gamma _i| >M^{1/s}} (1+M|\gamma _1 |)^{-2s} \prod _{i=2}^s (1+|\gamma _i|)^{-2s}. \end{aligned}$$

We see that \(|\gamma _1| \ge M^{-(s-1)/s}\) for \(\max _{1 \le i \le s} |\gamma _i| \le M^{1/s}\). Applying Theorem A, we have

$$\begin{aligned} a_1 \le M^{-2} \sum _{\varvec{\gamma }\in p 2^{\mathbf{q}} \cdot \Gamma \setminus \mathbf{0}, \max |\gamma _i| \le M^{1/s }} 1 = O(M^{-1}), \end{aligned}$$

and

$$\begin{aligned} a_2 \le \sum _{j \ge M^{1/s }} \sum _{ \begin{array}{c} \varvec{\gamma }\in p 2^{\mathbf{q}} \cdot \Gamma \setminus \mathbf{0}\\ \max |\gamma _i| \in [j,j+1) \end{array}} j^{-2s} = O\big (\sum _{j \ge M^{1/s } } j^{-s}\big ) =O(M^{-(s-1)/s}). \end{aligned}$$
(2.103)
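
We indicate the counting behind these two estimates (a sketch, applying Theorem A to the lattice \(p 2^{\mathbf{q}} \cdot \Gamma \) exactly as in the two displays above). If \(\max _{1 \le i \le s} |\gamma _i| \le M^{1/s}\) and \(\varvec{\gamma }\ne \mathbf{0}\), then \(M|\gamma _1| \ge M \cdot M^{-(s-1)/s} = M^{1/s}\), so each term of \(a_1\) is at most \((M|\gamma _1|)^{-2s} \le M^{-2}\), while the cube \(\max _i |\gamma _i| \le M^{1/s}\) contains \(O(M)\) points of \(p 2^{\mathbf{q}} \cdot \Gamma \); similarly, the number of its points with \(\max _i |\gamma _i| \in [j,j+1)\) is \(O(j^{s})\). Hence

$$\begin{aligned} a_1 = O( M^{-2} \cdot M) = O(M^{-1}), \qquad a_2 = O\big ( \sum _{j \ge M^{1/s }} j^{s}\, j^{-2s} \big ) = O\big ( \int _{M^{1/s}}^{\infty } t^{-s} \mathrm{d}t \big ) = O(M^{-(s-1)/s}). \end{aligned}$$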

Taking into account that \(\# {\mathcal {G}}_3 =O(n^{s-1})\) (see (2.96)), we get from (2.102) and (2.82) that

$$\begin{aligned} {\widetilde{{\mathcal {B}}}}_{4,2}(M) = O\big ( \sum _{\mathbf{q}\in {\mathcal {G}}_4} \breve{{\mathcal {B}}}_{\mathbf{q}}(M,-\mathbf{1}) \big )= O\big ( \sum _{\mathbf{q}\in {\mathcal {G}}_3} M^{-1/2} \big ) = O(M^{-1/2} n^{s-1} ). \end{aligned}$$

Hence, Lemma 17 is proved. \(\square \)

Lemma 18

With notations as above

$$\begin{aligned} {\widetilde{{\mathcal {B}}}}_{6,2,1}(M) + {\widetilde{{\mathcal {B}}}}_{6,3,1}(M) = O( n^{s-3/2}), \quad M=[\sqrt{n}]. \end{aligned}$$

Proof

Let \(M_1=2^{-q_1-n -\log _2 n }\). By (2.73), we get \(M_1 \ge n \ge 2M\) for \(\mathbf{q}\in {\mathcal {G}}_6\) and \(n \ge 4\). From (2.16), we have \( \eta (\gamma _1/M) \eta (\gamma _1/M_1)= \eta (\gamma _1/M_1)\). Using (2.69), (2.79) and (2.84), we derive similarly to (2.97) that

$$\begin{aligned} \breve{{\mathcal {B}}}_{\mathbf{q}}^{(1)}(M,\varvec{\varsigma }_j )= & {} \sum _{\varvec{\gamma }\in 2^{-\mathbf{q}}\cdot \Gamma ^\bot \setminus {\mathbf{0}}} \frac{ \widehat{\omega } (2^{q_1}\tau \gamma _1) \eta (\gamma _1 /M_1)}{\gamma _1}\nonumber \\&\times \prod _{i=2}^s \frac{ \widehat{\omega } (2^{q_i}\tau \gamma _i) m ( \gamma _i) }{\gamma _i} e(\langle \varvec{\gamma }, 2^{\mathbf{q}}\cdot ( \mathbf{b}/p {+}(j-2)\theta _1N_1(1,0,\ldots ,0))\rangle ) \end{aligned}$$

with \(j=2,3\), \(\varvec{\varsigma }_2=-\mathbf{1}\) and \(\varvec{\varsigma }_3=\dot{\mathbf{1}}\).

By (2.66), we obtain that \(J_{f_2}(\tau , v)=0\) for \(v=0\), where \(f_2(t) =m(t)/t\). Hence \( w^{(1)}_2(\tau , 0)=0\). Now applying (2.64)–(2.67) with \(\dot{\Gamma }^\bot = 2^{-\mathbf{q}} \cdot \Gamma ^\bot \), \(i=0\) and \(a=M_1^{-1}=2^{q_1+n +\log _2 n }\), we get analogously to (2.101)

$$\begin{aligned} |\breve{{\mathcal {B}}}_{\mathbf{q}}^{(1)}(M,\varvec{\varsigma }_j )| \le \breve{c}_{(2s,2s)} \det \Gamma p^{2s^2}\sum _{\varvec{\gamma }\in p 2^{\mathbf{q}} \cdot \Gamma \setminus \mathbf{0}} (1+M_1|\gamma _1 -x(j) |)^{-2s} \prod _{i=2}^s (1+|\gamma _i|)^{-2s}, \end{aligned}$$

with \(x(j)= (j-2)p\theta _12^{q_1}N_1\). We have

$$\begin{aligned} | \breve{{\mathcal {B}}}_{\mathbf{q}}^{(1)}(M,\varvec{\varsigma }_j) | \le \breve{c}_{(2s,2s)} \det \Gamma p^{2s^2}(a_3+a_4), \end{aligned}$$
(2.104)

where

$$\begin{aligned} a_3 = \sum _{\varvec{\gamma }\in p 2^{\mathbf{q}} \cdot \Gamma \setminus \mathbf{0}, \;\max |\gamma _i| \le M^{1/s }} (1+M_1|\gamma _1 -x(j)|)^{-2s} \prod _{i=2}^s (1+|\gamma _i|)^{-2s}, \end{aligned}$$

and

$$\begin{aligned} a_4 = \sum _{\varvec{\gamma }\in p 2^{\mathbf{q}} \cdot \Gamma , \max |\gamma _i| >M^{1/s}} (1+M_1|\gamma _1 -x(j) |)^{-2s} \prod _{i=2}^s (1+|\gamma _i|)^{-2s}. \end{aligned}$$

We see that \(|\gamma _1| \ge M^{-(s-1)/s}\) for \(\max _{1 \le i \le s} |\gamma _i| \le M^{1/s}\). Bearing in mind that \(|x(j)| \le c_3p n^{-s}\) for \(\mathbf{q}\in {\mathcal {G}}_6\), we obtain \(|\gamma _1| \ge 2|x(j)|\) for \(M=[\sqrt{n}]\) and \(N> 8psc_3\). Applying Theorem A, we get

$$\begin{aligned} a_3 \le 2^{2s} M_1^{-2s} M^{2(s-1)}\sum _{\varvec{\gamma }\in p 2^{\mathbf{q}} \cdot \Gamma , \; \max |\gamma _i| \le M^{1/s }} 1 = O(M^{-1}). \end{aligned}$$

Similarly to (2.103), we have

$$\begin{aligned} a_4 \le \sum _{j \ge M^{1/s }} \sum _{ \begin{array}{c} \varvec{\gamma }\in p 2^{\mathbf{q}} \cdot \Gamma \setminus \mathbf{0}\\ \max |\gamma _i| \in [j,j+1) \end{array}} j^{-2s} = O\big (\sum _{j \ge M^{1/s } } j^{-s}\big ) =O(M^{-(s-1)/s}). \end{aligned}$$

By (2.73) and (2.96), we obtain \(\# {\mathcal {G}}_6 \le \# {\mathcal {G}}_3 =O(n^{s-1})\). We get from (2.84) and (2.104) that

$$\begin{aligned} {\widetilde{{\mathcal {B}}}}_{6,2,1}(M) + {\widetilde{{\mathcal {B}}}}_{6,3,1}(M) = O\big ( \sum _{\mathbf{q}\in {\mathcal {G}}_6, \; j=2,3} \breve{{\mathcal {B}}}_{\mathbf{q}}^{(1)}(M,\varvec{\varsigma }_j ) \big ) = O(M^{-1/2} n^{s-1}). \end{aligned}$$

Hence, Lemma 18 is proved. \(\square \)

Using (2.87), (2.86) and Lemmas 14–18, we obtain

Corollary 1

With notations as above

$$\begin{aligned} \mathbf{E}(\bar{{\mathcal {B}}}(M)) = O( n^{s-5/4}), \quad M=[\sqrt{n}]. \end{aligned}$$

2.9 The Upper Bound Estimate for \(\mathbf{E}({ {\widetilde{{\mathcal {C}}}}_3}(M))\) and Koksma–Hlawka Inequality

Let

$$\begin{aligned} {\mathcal {G}}_7= & {} \{ \mathbf{q}\in {\mathcal {G}}_3 \; | \; -\log _2 \tau -s\log _2 n \le \max _{i=1,\ldots ,s} q_i < -\log _2 \tau +\log _2 n \}.\nonumber \\ {\mathcal {G}}_8= & {} \{ \mathbf{q}\in {\mathcal {G}}_3 \setminus {\mathcal {G}}_7 \; | \; q_1 < -n -1/2 \log _2 n \},\nonumber \\ {\mathcal {G}}_9= & {} \{ \mathbf{q}\in {\mathcal {G}}_3 \setminus {\mathcal {G}}_7 \; | \; q_1 \ge -n -1/2 \log _2 n \}, \end{aligned}$$
(2.105)

and let

$$\begin{aligned} {\widetilde{{\mathcal {C}}}}_i(M) = \sum _{{\mathbf{q}} \in {\mathcal {G}}_i} {\mathcal {C}}_{\mathbf{q}}(M) , \quad i=7,8,9. \end{aligned}$$

It is easy to see that

$$\begin{aligned} {\mathcal {G}}_3 = {\mathcal {G}}_7 \cup {\mathcal {G}}_8 \cup {\mathcal {G}}_9, \quad \mathrm{and}\; \quad {\mathcal {G}}_i \cap {\mathcal {G}}_j =\emptyset , \quad \mathrm{for}\; i \ne j. \end{aligned}$$

Hence

$$\begin{aligned} {\widetilde{{\mathcal {C}}}}_3(M) = {\widetilde{{\mathcal {C}}}}_7(M) + {\widetilde{{\mathcal {C}}}}_8(M) + {\widetilde{{\mathcal {C}}}}_9(M) . \end{aligned}$$
(2.106)

From (2.71), we have similarly to (2.79) that

$$\begin{aligned} {\mathcal {C}}_{\mathbf{q}}(M) = \sum _{\varvec{\varsigma }\in \{1,-1\}^s} \varsigma _1 \cdots \varsigma _s (2\sqrt{-1})^{-s} \breve{{\mathcal {C}}}_{\mathbf{q}}(M, \varvec{\varsigma }) , \end{aligned}$$
(2.107)

where

$$\begin{aligned} \breve{{\mathcal {C}}}_{\mathbf{q}}(M, \varvec{\varsigma }) = \sum _{\varvec{\gamma }\in \Gamma ^\bot \setminus {\mathbf{0}}} \psi _{\mathbf{q}}(\varvec{\gamma }) ( 1-\eta _M(\varvec{\gamma }))(1- \eta (\gamma _1 2^{-q_1 }/M)) e(\langle \varvec{\gamma }, \mathbf{b}/p+{\dot{\varvec{\theta }}}(\varvec{\varsigma })\rangle ), \end{aligned}$$

with \(\dot{\theta }_i(\varvec{\varsigma }) = (1+\varsigma _i) \theta _iN_i/4 , \; i=1,\ldots ,s\).

By (2.107) and (2.105), we get

$$\begin{aligned} {\widetilde{{\mathcal {C}}}}_{9}(M) = {\widetilde{{\mathcal {C}}}}_{10}(M) + {\widetilde{{\mathcal {C}}}}_{11}(M) , \end{aligned}$$
(2.108)

where

$$\begin{aligned} {\widetilde{{\mathcal {C}}}}_{10}(M) = \sum _{{\mathbf{q}} \in {\mathcal {G}}_9} \sum _{ \begin{array}{c} \varvec{\varsigma }\in \{1,-1\}^s \\ \varvec{\varsigma }\ne -\mathbf{1} \end{array}} \varsigma _1 \cdots \varsigma _s (2\sqrt{-1})^{-s} \breve{{\mathcal {C}}}_{\mathbf{q}}(M, \varvec{\varsigma }) , \end{aligned}$$
(2.109)

and

$$\begin{aligned} {\widetilde{{\mathcal {C}}}}_{11}(M) = (-1)^s (2\sqrt{-1})^{-s} \sum _{{\mathbf{q}} \in {\mathcal {G}}_9} \breve{{\mathcal {C}}}_{\mathbf{q}}(M, -\mathbf{1}). \end{aligned}$$
(2.110)

Lemma 19

With notations as above

$$\begin{aligned} \mathbf{E}( {\widetilde{{\mathcal {C}}}}_{i}(M)) = O(n^{s-3/2}), \quad i=7,8,10, \quad M =[\sqrt{n}]. \end{aligned}$$

Proof

Let \(\varvec{\gamma }\in 2^{-\mathbf{q}} \cdot \Gamma ^\bot \setminus {\mathbf{0}} \). By (2.16), (2.61) and (2.68), we have \( ( 1-\eta _M(\varvec{\gamma }))(1- \eta (\gamma _1 /M)) {\mathbb {M}}(\varvec{\gamma }) \ne 0\) only if \( 2^{-2s+3} M \le |\gamma _1| \le 2M, \; |\gamma _i| \in [1,4]\), \(i =2,\ldots ,s \). From (2.71), we derive

$$\begin{aligned} {\mathcal {C}}_{\mathbf{q}}(M) =O\big ( \sum _{\varvec{\gamma }\in {\mathcal {X}}} \big |\prod _{i=1}^s \sin (\pi \theta _i N_i 2^{q_i} \gamma _i) \frac{ {\mathbb {M}}(\varvec{\gamma })\widehat{\varOmega }( \tau 2^{\mathbf{q}} \cdot \varvec{\gamma }) }{ \mathrm{Nm}(\varvec{\gamma }) } \big | \big ) \end{aligned}$$
(2.111)

where

$$\begin{aligned} {\mathcal {X}}= \{ \varvec{\gamma }\in 2^{-\mathbf{q}} \cdot \Gamma ^\bot \setminus {\mathbf{0}} \; | \; 2^{-2s+3} M \le |\gamma _1| \le 2M, \; |\gamma _i| \in [1,4], \; i =2,\ldots ,s \} . \end{aligned}$$

Bearing in mind (2.90), we get \( {\mathcal {C}}_{\mathbf{q}}(M) =O( 1 )\).

Using (2.20), (2.73) and (2.105), we obtain \(\# {\mathcal {G}}_7 =O(n^{s-2} \log _2 n) \). Applying (2.105), we get

$$\begin{aligned} {\widetilde{{\mathcal {C}}}}_{7}(M) = \sum _{{\mathbf{q}} \in {\mathcal {G}}_7} {\mathcal {C}}_{\mathbf{q}}(M) = O( n^{s-2} \log _2 n). \end{aligned}$$
(2.112)

Consider \( {\widetilde{{\mathcal {C}}}}_{8}(M)\). Let \(\varvec{\gamma }\in {\mathcal {X}}\). Then \(|\sin (\pi \theta _1 N_{1} 2^{q_1}\gamma _{1})| \le \pi M N_{1} 2^{1+q_1}\).

By (2.111), we have

$$\begin{aligned} {\mathcal {C}}_{\mathbf{q}}(M) =O\big ( \sum _{\varvec{\gamma }\in {\mathcal {X}}} \frac{ |MN^{1/s} 2^{q_1}\widehat{\varOmega }(\tau 2^{\mathbf{q}} \cdot \varvec{\gamma })| }{ |\mathrm{Nm}(\varvec{\gamma })| }\big ) = O( M N^{1/s} 2^{q_1}). \end{aligned}$$

Using (2.20) and (2.105), we derive \(\# \{ \mathbf{q}\in {\mathcal {G}}_8 | q_1=d\} =O(n^{s-2} ) \). Hence

$$\begin{aligned} {\widetilde{{\mathcal {C}}}}_8(M)= & {} \sum _{{\mathbf{q}} \in {\mathcal {G}}_8} {\mathcal {C}}_{\mathbf{q}}(M) =O\big ( \sum _{j \ge n+ 0.5 \log _2 n} \; \sum _{{\mathbf{q}} \in {\mathcal {G}}_8, \; q_1 =-j} M N^{1/s} 2^{-j} \big )\nonumber \\= & {} O\big (n^{s-2} M \sum _{j \ge n + 0.5\log _2 n} 2^{n-j} \big ) =O(n^{s-2}). \end{aligned}$$
(2.113)
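
Here we used \(\# \{ \mathbf{q}\in {\mathcal {G}}_8 \; | \; q_1=-j\} =O(n^{s-2} ) \), \(N^{1/s} = O(2^{n})\), \(M \le \sqrt{n}\) and one more geometric tail:

$$\begin{aligned} \sum _{j \ge n + 0.5\log _2 n} 2^{n-j} \le 2^{1 - 0.5\log _2 n} = 2 n^{-1/2}, \quad \mathrm{so} \quad n^{s-2} M \sum _{j \ge n + 0.5\log _2 n} 2^{n-j} = O( n^{s-2} \sqrt{n}\, n^{-1/2}) = O(n^{s-2}). \end{aligned}$$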

Consider \( {\widetilde{{\mathcal {C}}}}_{10}(M)\). From (2.109), we get that there exists \(i_0 =i_0(\varvec{\varsigma }) \in [1,s]\) with \(\varsigma _{i_0} =1\). By (2.52), (2.69) and (2.107), we have

$$\begin{aligned} \mathbf{E}_{i_0}(\breve{{\mathcal {C}}}_{\mathbf{q}}(M,\varvec{\varsigma })) = \sum _{\varvec{\gamma }\in \Gamma ^\bot \setminus {\mathbf{0}}} \dot{{\mathcal {C}}}_{\mathbf{q}}(M, \varvec{\gamma }) \frac{e(N_{i_0} \gamma _{i_0}/2)-1}{\pi \sqrt{-1} N_{i_0} \gamma _{i_0}} e(\langle \varvec{\gamma },\mathbf{x}\rangle ) \end{aligned}$$

with some \(\mathbf{x}\in {\mathbb {R}}^s\), where

$$\begin{aligned} \dot{{\mathcal {C}}}_{\mathbf{q}}(M, \varvec{\gamma }) = ( 1-\eta _M(\varvec{\gamma }))(1- \eta (\gamma _1 2^{-q_1 }/M)) \widehat{\varOmega } ( \tau \cdot \varvec{\gamma }) {\mathbb {M}}(2^{-\mathbf{q}} \varvec{\gamma }) / \mathrm{Nm}(\varvec{\gamma }) . \end{aligned}$$

Hence

$$\begin{aligned} \mathbf{E}_{i_0}(\breve{{\mathcal {C}}}_{\mathbf{q}}(M,\varvec{\varsigma })) = O\big ( N_{i_0}^{-1} 2^{-q_{i_0}} \sum _{\varvec{\gamma }\in 2^{-\mathbf{q}} \cdot \Gamma ^\bot \setminus {\mathbf{0}}} |\ddot{{\mathcal {C}}}_{\mathbf{q}}(M, \varvec{\gamma }, i_0)| \big ), \end{aligned}$$

with

$$\begin{aligned} \ddot{{\mathcal {C}}}_{\mathbf{q}}(M, \varvec{\gamma },i_0) = \frac{ ( 1-\eta _M(\varvec{\gamma }))(1-\eta (\gamma _1 /M)) }{\gamma _1} \prod _{j=2}^s \frac{ m ( \gamma _j) }{\gamma _j} \frac{1}{\gamma _{i_0}} . \end{aligned}$$

Applying (2.111), we obtain \(\max _{\gamma \in {\mathcal {X}}, i\in [1,s]} |1/\gamma _i| =O(1)\).

By (2.16) and (2.90), we have

$$\begin{aligned} \mathbf{E}(\breve{{\mathcal {C}}}_{\mathbf{q}}(M,\varvec{\varsigma }))= & {} \mathbf{E}(\mathbf{E}_{i_0}(\breve{{\mathcal {C}}}_{\mathbf{q}}(M,\varvec{\varsigma })))\nonumber \\= & {} O\big ( N_{i_0}^{-1} 2^{-q_{i_0}} \sum _{\varvec{\gamma }\in {\mathcal {X}}} 1/|\mathrm{Nm}(\varvec{\gamma })|\big )\nonumber \\= & {} O( N_{i_0}^{-1} 2^{-q_{i_0}} ). \end{aligned}$$

Similarly to (2.99)–(2.100), we get from (2.105) and (2.73) that

$$\begin{aligned} \mathbf{E}( {\widetilde{{\mathcal {C}}}}_{10}(M))= & {} O\big ( \sum _{ \begin{array}{c} \varvec{\varsigma }\in \{1,-1\}^s\\ \varvec{\varsigma }\ne -\mathbf{1} \end{array}} \sum _{\mathbf{q}\in {\mathcal {G}}_{9}} N_{i_0(\varvec{\varsigma })}^{-1} 2^{-q_{i_0(\varvec{\varsigma })}} \big )\nonumber \\= & {} O\big ( \sum _{ 1 \le i \le s} \sum _{ j \le n+ 0.5\log _2 n} \sum _{\mathbf{q}\in {\mathcal {G}}_{9}, q_{i} =-j } 2^{-n+j} \big )\nonumber \\= & {} O\big ( n^{s-2} \sum _{ j \le 1/2\log _2 n} 2^{j} \big ) =O(n^{s-3/2}). \end{aligned}$$

Using (2.112) and (2.113), we obtain the assertion of Lemma 19. \(\square \)

Lemma 20

With notations as above

$$\begin{aligned} \mathbf{E}( {\widetilde{{\mathcal {C}}}}_{3}(M)) = {\widetilde{{\mathcal {C}}}}_{12}(M) +O( n^{s-3/2}), \quad M= [\sqrt{n}], \end{aligned}$$

where

$$\begin{aligned} {\widetilde{{\mathcal {C}}}}_{12}(M) = (-1)^s(2\sqrt{-1})^{-s} \sum _{{\mathbf{q}} \in {\mathcal {G}}_9} \sum _{\varvec{\gamma }_0 \in \Delta _p} e(\langle \varvec{\gamma }_0, \mathbf{b}/p \rangle )\check{{\mathcal {C}}}_{\mathbf{q}}(\varvec{\gamma }_0 ), \end{aligned}$$
(2.114)

with

$$\begin{aligned} \check{{\mathcal {C}}}_{\mathbf{q}}(\varvec{\gamma }_0 ) = M^{-1} \sum _{\varvec{\gamma }\in \Gamma _{M,\mathbf{q}} (\varvec{\gamma }_0)} g(\varvec{\gamma }), \quad g(\mathbf{x}) = \eta (2\mathrm{Nm}(\mathbf{x}))(1- \eta (x_1) ) {\mathbb {M}}(\mathbf{x})/\mathrm{Nm}(\mathbf{x}), \end{aligned}$$

and

$$\begin{aligned} \Gamma _{M,\mathbf{q}}(\varvec{\gamma }_0)= ( p 2^{-\mathbf{q}} \cdot \Gamma ^{\bot } +\varvec{\gamma }_0 )\cdot (1/M,1,1,\ldots ,1). \end{aligned}$$

Proof

By (2.106), (2.108) and Lemma 19, it is enough to prove that

$$\begin{aligned} {\widetilde{{\mathcal {C}}}}_{11}(M) = {\widetilde{{\mathcal {C}}}}_{12}(M) +O( n^{s-3/2}). \end{aligned}$$

Consider \( \breve{{\mathcal {C}}}_{\mathbf{q}}(M, -\mathbf{1})\). Let

$$\begin{aligned} \bar{{\mathcal {C}}}_{\mathbf{q}}(M, -\mathbf{1})= & {} \sum _{\varvec{\gamma }\in \Gamma ^\bot \setminus {\mathbf{0}}} ( 1-\eta _M(\varvec{\gamma })) e(\langle \varvec{\gamma }, \mathbf{b}/p \rangle )\nonumber \\&\times (1-\eta (2^{-q_1}\gamma _1/M)) {\mathbb {M}}(2^{-\mathbf{q}} \cdot \varvec{\gamma }) /\mathrm{Nm}(\varvec{\gamma }). \end{aligned}$$

By (2.107), we have

$$\begin{aligned} |\breve{{\mathcal {C}}}_{\mathbf{q}}(M, -\mathbf{1}) - \bar{{\mathcal {C}}}_{\mathbf{q}}(M, -\mathbf{1})|\le & {} \sum _{\varvec{\gamma }\in \Gamma ^\bot \setminus {\mathbf{0}}} |( 1-\eta _M(\varvec{\gamma })) (1-\eta (2^{-q_1}\gamma _1/M)) {\mathbb {M}}(2^{-\mathbf{q}} \cdot \varvec{\gamma }) |\nonumber \\&\times |(\widehat{\varOmega }(\tau \varvec{\gamma }) -1) /\mathrm{Nm}(\varvec{\gamma })|. \end{aligned}$$

We examine the case \((1- \eta (\gamma _1 2^{-q_1 }/M)) {\mathbb {M}}(2^{-\mathbf{q}} \varvec{\gamma }) \ne 0\). By (2.16) and (2.61), we get \(|\gamma _1| \le M2^{q_1+1}\) and \(|\gamma _i| \le 2^{q_i+2}, \; i\ge 2\). Hence, we obtain from (2.73) and (2.105) that \(|\tau \gamma _i| \le 4n^{-s+1/2}, \; i \ge 1\) for \(\mathbf{q}\in {\mathcal {G}}_9\). Applying (2.8), we get \(\widehat{\varOmega }(\tau \varvec{\gamma }) =1+O(n^{-s+1/2})\) for \(\mathbf{q}\in {\mathcal {G}}_9\). Bearing in mind (2.90), we have

$$\begin{aligned} \breve{{\mathcal {C}}}_{\mathbf{q}}(M, -\mathbf{1}) = \bar{{\mathcal {C}}}_{\mathbf{q}}(M, -\mathbf{1}) +O(n^{-1}). \end{aligned}$$
(2.115)

Taking into account that \(\eta (0) =0\) (see (2.16)), we get

$$\begin{aligned} \bar{{\mathcal {C}}}_{\mathbf{q}}(M, -\mathbf{1}) = \sum _{\varvec{\gamma }_0 \in \Delta _p} e(\langle \varvec{\gamma }_0, \mathbf{b}/p \rangle ) \acute{{\mathcal {C}}}_{\mathbf{q}}( \varvec{\gamma }_0), \end{aligned}$$

with

$$\begin{aligned} \acute{{\mathcal {C}}}_{\mathbf{q}}( \varvec{\gamma }_0) = \sum _{\varvec{\gamma }\in 2^{-\mathbf{q}}(p\Gamma ^\bot +\varvec{\gamma }_0)} \eta (2|\mathrm{Nm}(\varvec{\gamma })|/M) (1-\eta (\gamma _1/M)) {\mathbb {M}}( \varvec{\gamma }) /\mathrm{Nm}(\varvec{\gamma }). \end{aligned}$$

It is easy to verify that \( \acute{{\mathcal {C}}}_{\mathbf{q}}( \varvec{\gamma }_0) = \check{{\mathcal {C}}}_{\mathbf{q}}(\varvec{\gamma }_0)\). By (2.110) and (2.114), we obtain

$$\begin{aligned} {\widetilde{{\mathcal {C}}}}_{11}(M)= & {} (-1)^s (2\sqrt{-1})^{-s} \sum _{{\mathbf{q}} \in {\mathcal {G}}_9} \big ( \sum _{\varvec{\gamma }_0 \in \Delta _p} e(\langle \varvec{\gamma }_0, \mathbf{b}/p \rangle ) \check{{\mathcal {C}}}_{\mathbf{q}}( \varvec{\gamma }_0) +O(n^{-1}) \big ) \nonumber \\= & {} {\widetilde{{\mathcal {C}}}}_{12}(M) + O(n^{s-2}). \end{aligned}$$

Hence, Lemma 20 is proved. \(\square \)

We consider the Koksma–Hlawka inequality (see e.g. [10, pp. 10, 11]):

Definition 5

Let a function \(f\; : \; [0,1]^s \rightarrow {\mathbb {R}}\) have continuous partial derivatives \(\partial ^{s-l} f^{(F_l)} /\partial x_{j_1} \cdots \partial x_{j_{s-l}} \) on the \((s-l)\)-dimensional face \(F_l\) defined by \( x_{i_1} = \cdots = x_{i_l} =1\), where \(\{j_1,\ldots ,j_{s-l}\} = \{1,\ldots ,s\} \setminus \{i_1,\ldots ,i_l\}\), and let

$$\begin{aligned} V^{(s-l)}(f^{F_l}) = \int _{F_l} \big | \frac{\partial ^{s-l} f^{(F_l)}}{\partial x_{j_1} \cdots \partial x_{j_{s-l}}} \big | d x_{j_1} \cdots d x_{j_{s-l}}. \end{aligned}$$

Then the number

$$\begin{aligned} V(f) = \sum _{0 \le l <s} \sum _{F_l} V^{(s-l)}(f^{F_l}) \end{aligned}$$

is called the variation of \(f\) on \([0,1]^s\) in the sense of Hardy and Krause.
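
As a simple illustration of Definition 5 (not needed in what follows): for \(f(\mathbf{x}) =x_1 \cdots x_s\), every partial derivative occurring in Definition 5 equals \(1\), so each of the \(\binom{s}{l}\) faces of dimension \(s-l\) contributes \(1\), and

$$\begin{aligned} V(f) = \sum _{0 \le l <s} \binom{s}{l} = 2^s -1 . \end{aligned}$$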

Theorem F

(Koksma–Hlawka) Let f be of bounded variation on \([0, 1]^s\) in the sense of Hardy and Krause. Let \((\beta _{k,K})_{k=0}^{K-1}\) be a K-point set in the s-dimensional unit cube \([0,1)^s\). Then we have

$$\begin{aligned} \big | \frac{1}{K} \sum _{0 \le k \le K-1}f(\beta _{k,K}) - \int _{[0,1]^s} f(\mathbf{x}) \mathrm{d}\mathbf{x}\big | \le V(f) D((\beta _{k,K})_{k=0}^{K-1}) . \end{aligned}$$

Lemma 21

With notations as above

$$\begin{aligned} \mathbf{E}( {\widetilde{{\mathcal {C}}}}_{3}(M)) =O( n^{s-5/4}), \quad M= [\sqrt{n}]. \end{aligned}$$

Proof

By (2.114), \(g(\mathbf{x}) =\eta (2\mathrm{Nm}(\mathbf{x}))(1- \eta (x_1) ) {\mathbb {M}}(\mathbf{x})/\mathrm{Nm}(\mathbf{x})\). We have that \(g\) is odd with respect to each coordinate, and \(g(\mathbf{x}) =0\) for \(\mathbf{x}\notin [-2,2]\times [-4,4]^{s-1}\). Hence

$$\begin{aligned} \int _{[-2,2]\times [-4,4]^{s-1} } g(\mathbf{x}) \mathrm{d}\mathbf{x}=0. \end{aligned}$$

Let \(f(\mathbf{x}) =g((4x_1 -2,8x_2 -4,\ldots ,8x_s-4))\). It is easy to verify that \(f(\mathbf{x}) =0\) for \(\mathbf{x}\notin [0, 1]^s\), and

$$\begin{aligned} \int _{[0, 1]^s} f(\mathbf{x}) \mathrm{d} \mathbf{x}= (4 \cdot 8^{s-1})^{-1} \int _{[-2,2]\times [-4,4]^{s-1} } g(\mathbf{x}) \mathrm{d} \mathbf{x}=0. \end{aligned}$$

We see that f is of bounded variation on \([0, 1]^s\) in the sense of Hardy and Krause. Let \( \ddot{\Gamma }(\varvec{\gamma }_0) = \{ ((\gamma _1 +2)/4,(\gamma _2+4)/8,\ldots ,(\gamma _s +4)/8) \; | \; \varvec{\gamma }\in \Gamma _{M,\mathbf{q}} (\varvec{\gamma }_0) \}\).

Using (2.114), we obtain

$$\begin{aligned} \check{{\mathcal {C}}}_{\mathbf{q}}(\varvec{\gamma }_0 ) = M^{-1} \sum _{\varvec{\gamma }\in \ddot{\Gamma }(\varvec{\gamma }_0)} f(\varvec{\gamma }). \end{aligned}$$

Let \(H=\ddot{\Gamma }(\varvec{\gamma }_0)\cap [0,1)^{s}\), and \(K =\#H\). Applying Theorem A, we get \(K \in [c_1M,c_2M]\) for some \(c_1,c_2 >0\). We enumerate the set \(H\) as a sequence \((\beta _{k,K})_{k=0}^{K-1}\).

By Theorem A, we have \(D((\beta _{k,K})_{k=0}^{K-1}) =O(M^{-1} \ln ^{s-1} M)\).

Using Theorem F, we obtain \( \check{{\mathcal {C}}}_{\mathbf{q}}(\varvec{\gamma }_0 ) = O(M^{-1} \ln ^{s-1} M) \).
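
In more detail (a sketch, in which we ignore possible points of \(\ddot{\Gamma }(\varvec{\gamma }_0)\) lying on the boundary of \([0,1]^s\)): since \(f\) vanishes outside \([0,1]^{s}\), the sum defining \(\check{{\mathcal {C}}}_{\mathbf{q}}(\varvec{\gamma }_0 )\) runs over \(H\), and since \(\int _{[0,1]^s} f(\mathbf{x}) \mathrm{d}\mathbf{x}=0\), Theorem F gives

$$\begin{aligned} |\check{{\mathcal {C}}}_{\mathbf{q}}(\varvec{\gamma }_0 )| = \frac{K}{M} \Big | \frac{1}{K}\sum _{0 \le k \le K-1}f(\beta _{k,K}) - \int _{[0,1]^s} f(\mathbf{x}) \mathrm{d}\mathbf{x}\Big | \le \frac{K}{M}\, V(f)\, D((\beta _{k,K})_{k=0}^{K-1}) = O(M^{-1} \ln ^{s-1} M), \end{aligned}$$

because \(K \le c_2 M\) and \(V(f)\) is a constant depending only on the fixed function \(g\).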

Bearing in mind that \( \# {\mathcal {G}}_3 =O(n^{s-1})\) (see (2.96)), we derive from (2.114) that \( {\widetilde{{\mathcal {C}}}}_{12}(M) =O(n^{s-1} M^{-1} \ln ^{s-1} M )\).

Applying Lemma 20, we obtain the assertion of Lemma 21. \(\square \)

Now using (2.85), Corollary 1 and Lemma 21, we get

Corollary 2

With notations as above

$$\begin{aligned} \mathbf{E}({\mathcal {B}}(\mathbf{b}/p,M)) = O( n^{s-5/4}), \quad M= [\sqrt{n}]. \end{aligned}$$

Let \(\mathbf{N}= (N_1,\ldots ,N_s)\), \(N = N_1 \cdots N_s\), \(n=s^{-1}\log _2 N\), \(c_9= 0.25 ( \pi ^s \det \Gamma )^{-1} c_{8}\) and \(M = [\sqrt{n}]\). From Lemma 12, Corollary 2 and (2.18), we obtain that there exist \(N_0 >0\) and \(\mathbf{b}\in \Delta _p\) such that

$$\begin{aligned} \sup _{\varvec{\theta }\in [0,1]^s} | \mathbf{E}( {\mathcal { R}}(B_{\varvec{\theta }\cdot \mathbf{N}}+\mathbf{b}/p,\Gamma )) | \ge c_{9}n^{s-1} \quad \mathrm{for} \quad N >N_0. \end{aligned}$$
(2.116)

2.10 End of Proof

End of the proof of Theorem 1.

We set \( \widetilde{{\mathcal { R}}} (\mathbf{z},\mathbf{y}) ={\mathcal { R}}(B_{\mathbf{y}-\mathbf{z}}+\mathbf{z},\Gamma )\), where \(y_i \ge z_i\) \((i=1,\ldots ,s)\) (see (1.2)). Let us introduce the difference operator \( \dot{\Delta }_{a_i,h_i} \), acting in the variable \(y_i\) and defined by the formula

$$\begin{aligned} \dot{\Delta }_{a_i,h_i} \widetilde{{\mathcal { R}}}(\mathbf{z},\mathbf{y})= & {} \widetilde{{\mathcal { R}}} (\mathbf{z},(y_1,\ldots ,y_{i-1},h_i,y_{i+1},\ldots ,y_s))\nonumber \\&-\widetilde{{\mathcal { R}}}(\mathbf{z},(y_1,\ldots ,y_{i-1},a_i,y_{i+1},\ldots ,y_s)). \end{aligned}$$
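
For instance, for \(s=2\) the iterated difference is

$$\begin{aligned} \dot{\Delta }_{a_1,h_1} \dot{\Delta }_{a_2,h_2} \widetilde{{\mathcal { R}}}(\mathbf{z},\mathbf{y}) = \widetilde{{\mathcal { R}}}(\mathbf{z},(h_1,h_2)) - \widetilde{{\mathcal { R}}}(\mathbf{z},(a_1,h_2)) - \widetilde{{\mathcal { R}}}(\mathbf{z},(h_1,a_2)) + \widetilde{{\mathcal { R}}}(\mathbf{z},(a_1,a_2)), \end{aligned}$$

and since \({\mathcal {N}}\) and the volume term in (1.2) are additive in the boxes \([z_1,y_1)\times [z_2,y_2)\), this inclusion–exclusion leaves exactly the contribution of the box \([a_1,h_1)\times [a_2,h_2)\), that is, \(\widetilde{{\mathcal { R}}}((a_1,a_2),(h_1,h_2))\). The identity below is the general form of this observation.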

Similarly to [26, p. 160, Ref. 7], we derive

$$\begin{aligned} \dot{\Delta }_{a_1,h_1} \cdots \dot{\Delta }_{a_s,h_s} \widetilde{{\mathcal { R}}}(\mathbf{z},\mathbf{y}) = \widetilde{{\mathcal { R}}}(\mathbf{a},\mathbf{h}), \end{aligned}$$
(2.117)

where \(h_i \ge a_i \ge z_i \) \((i=1,\ldots ,s)\). Let \(\mathbf{f}_1,\ldots ,\mathbf{f}_s\) be a basis of \(\Gamma \). We have that \(F= \{ \rho _1 \mathbf{f}_1+ \cdots + \rho _s\mathbf{f}_s \; | \; (\rho _1,\ldots ,\rho _s) \in [0,1)^s \}\) is a fundamental set of \(\Gamma \). It is easy to see that \( {\mathcal { R}}(B_{\mathbf{N}}+\mathbf{x},\Gamma ) = {\mathcal { R}}(B_{\mathbf{N}}+\mathbf{x}+\varvec{\gamma },\Gamma )\) for all \(\varvec{\gamma }\in \Gamma \). Hence, we can assume in Theorem 1 that \(\mathbf{x}\in F\). Similarly, we can assume in Corollary 2 that \(\mathbf{b}/p \in F\). We get that there exists \(\varvec{\gamma }_0 \in \Gamma \) with \(|\varvec{\gamma }_0| \le 4 \max _i |\mathbf{f}_i|\) and \(x_i < (\mathbf{b}/p)_i + \gamma _{0,i}\), \(i=1,\ldots ,s\). Let \(\mathbf{b}_1= \mathbf{b}+p\varvec{\gamma }_0\). By (2.116), we have that there exist \(\varvec{\theta }\in [0,1]^s\) and \(\mathbf{b}\in \Delta _p\) such that

$$\begin{aligned} |\widetilde{{\mathcal { R}}}(\mathbf{b}_1/p, \mathbf{b}_1/p + \varvec{\theta }\cdot \mathbf{N})| \ge c_{9} n^{s-1}. \end{aligned}$$
(2.118)

Let \({\mathcal {S}}=\{ \mathbf{y}\; | \; y_i \in \{(\mathbf{b}_1/p)_i,\, (\mathbf{b}_1/p)_i +\theta _i N_i\}, \; i=1,\ldots ,s \}\). We see \(\# {\mathcal {S}}=2^s\). From (2.117), we obtain that \(\widetilde{{\mathcal { R}}}(\mathbf{b}_1/p, \mathbf{b}_1/p + \varvec{\theta }\cdot \mathbf{N})\) is the sum of \(2^s\) numbers \(\pm \widetilde{{\mathcal { R}}}(\mathbf{x}, \mathbf{y}^{j})\), where \(\mathbf{y}^{j} \in {\mathcal {S}}\). By (2.118), we get

$$\begin{aligned} |{\mathcal { R}}(B_{\mathbf{y}-\mathbf{x}}+\mathbf{x},\Gamma )| = |\widetilde{{\mathcal { R}}}(\mathbf{x}, \mathbf{y})| \ge 2^{-s}c_{9} n^{s-1} \quad \mathrm{for \; some} \quad \mathbf{y}\in {\mathcal {S}}. \end{aligned}$$

Therefore, Theorem 1 is proved. \(\square \)

Proof of Theorem 2

We follow [17, p. 86] and [19, p. 1]. Let \(n \ge 1\), \( N \in [2^n, 2^{n+1})\), \(\mathbf{y}=(y_1,\ldots ,y_{s})\) and \(\Gamma =\Gamma _{{\mathcal {M}}}\). By (1.2) and (1.5), we have

$$\begin{aligned} N \Delta (B_{\mathbf{y}}, (\beta _{k,N}(\mathbf{x}))_{k=0}^{N-1} ) = \varphi _1 - y_1 \cdots y_{s} \varphi _2, \end{aligned}$$
(2.119)

where

$$\begin{aligned} \varphi _1 = {\mathcal {N}}(B_{(y_1,\ldots ,y_{s-1},y_s z_{2,N}(\mathbf{x}))} +\mathbf{x},\Gamma ) \;\; \mathrm{and} \;\; \varphi _2 = N= {\mathcal {N}}(B_{(1,\ldots ,1, z_{2,N}(\mathbf{x}))}+\mathbf{x},\Gamma ). \end{aligned}$$

Let

$$\begin{aligned} \alpha _1 = {\mathcal {N}}(B_{(y_1,\ldots ,y_{s-1},y_sN \det \Gamma )}+\mathbf{x},\Gamma ) \;\; \mathrm{and} \;\; \alpha _2 = {\mathcal {N}}(B_{(1,\ldots ,1,N \det \Gamma )}+\mathbf{x},\Gamma ). \end{aligned}$$

Applying Theorem A, we get

$$\begin{aligned} z_{2,N}(\mathbf{x}) (\det \Gamma )^{-1} -N =O(n^{s-1}),\quad \varphi _2 -\alpha _2 = z_{2,N}(\mathbf{x}) (\det \Gamma )^{-1} -N +O(\log _2^{s-1}n), \end{aligned}$$

and

$$\begin{aligned} \varphi _1 -\alpha _1 = y_1 \cdots y_{s}( z_{2,N}(\mathbf{x}) (\det \Gamma )^{-1} -N ) +O(\log _2^{s-1}n). \end{aligned}$$
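
The error terms \(O(\log _2^{s-1}n)\) in the last two estimates can be obtained as follows (a sketch; we assume \(z_{2,N}(\mathbf{x}) \ge N \det \Gamma \), the opposite case being symmetric). The boxes in the definitions of \(\varphi _2\) and \(\alpha _2\) differ only in the last side, so

$$\begin{aligned} \varphi _2 -\alpha _2 = {\mathcal {N}}\big (B_{(1,\ldots ,1,\, z_{2,N}(\mathbf{x}) - N \det \Gamma )} +\mathbf{x}+ (0,\ldots ,0, N \det \Gamma ),\Gamma \big ) = z_{2,N}(\mathbf{x}) (\det \Gamma )^{-1} -N + O(\log _2^{s-1} n), \end{aligned}$$

by (1.2) and Theorem A, since the volume of this thin box is \(( z_{2,N}(\mathbf{x}) (\det \Gamma )^{-1} -N )\det \Gamma = O(n^{s-1})\) by the first estimate, so that Theorem A contributes an error \(O(\log _2^{s-1}(2+ Cn^{s-1})) = O(\log _2^{s-1} n)\). The estimate for \(\varphi _1 -\alpha _1\) is obtained in the same way, with the sides \(1,\ldots ,1\) replaced by \(y_1,\ldots ,y_{s-1}\) and the last side multiplied by \(y_s\).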

From (2.119), we derive

$$\begin{aligned} N \Delta (B_{\mathbf{y}}, (\beta _{k,N}(\mathbf{x}))_{k=0}^{N-1} )= \alpha _1 - y_1 \cdots y_{s} \alpha _2 +O(\log _2^{s-1}n). \end{aligned}$$
(2.120)
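
Indeed, writing \(\delta = z_{2,N}(\mathbf{x}) (\det \Gamma )^{-1} -N\) (a shorthand used only in this step), the two previous estimates and (2.119) give

$$\begin{aligned} N \Delta (B_{\mathbf{y}}, (\beta _{k,N}(\mathbf{x}))_{k=0}^{N-1} )= & {} \varphi _1 - y_1 \cdots y_{s} \varphi _2 = \big (\alpha _1 + y_1 \cdots y_{s}\, \delta \big ) - y_1 \cdots y_{s} \big (\alpha _2 + \delta \big ) +O(\log _2^{s-1}n)\nonumber \\= & {} \alpha _1 - y_1 \cdots y_{s} \alpha _2 +O(\log _2^{s-1}n). \end{aligned}$$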

By (1.2), we obtain

$$\begin{aligned} \alpha _1 - y_1 \cdots y_{s}\alpha _2 = \beta _1 - y_1 \cdots y_{s}\beta _2 \end{aligned}$$
(2.121)

with

$$\begin{aligned} \beta _1 = {\mathcal { R}}(B_{(y_1,\ldots ,y_{s-1},y_sN \det \Gamma )}+\mathbf{x},\Gamma ) \;\; \mathrm{and} \;\; \beta _2 = {\mathcal { R}}(B_{(1,\ldots ,1,N \det \Gamma )}+\mathbf{x},\Gamma ). \end{aligned}$$
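
Indeed, by (1.2) the main terms of \(\alpha _1\) and \(y_1 \cdots y_{s} \alpha _2\) coincide:

$$\begin{aligned} \alpha _1 = (\det \Gamma )^{-1} \, y_1 \cdots y_{s-1}\, y_s N \det \Gamma + \beta _1 = y_1 \cdots y_{s} N + \beta _1 , \qquad \alpha _2 = N + \beta _2 , \end{aligned}$$

and (2.121) follows.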

Let \(y_0 =0.125\min (1, 1/\det \Gamma , (c_1({\mathcal {M}})/c_0(\Gamma ))^{1/(s-1)})\), \(\varvec{\theta }=(\theta _1,\ldots ,\theta _{s})\), \(y_i = y_0 \theta _i\), \(i=1,\ldots ,s-1\), and \(y_s=\theta _s\). Using Theorem A, we get

$$\begin{aligned} |y_1\cdots y_{s} {\mathcal { R}}(B_{(1,\ldots ,1,N \det \Gamma )} +\mathbf{x},\Gamma )|\le & {} y_0^{s-1} c_0(\Gamma ) \log _2^{s-1}(2+N \det \Gamma )\nonumber \\\le & {} (2y_0)^{s-1} c_0(\Gamma ) \log _2^{s-1}N \nonumber \\\le & {} 0.25c_1({\mathcal {M}}) n^{s-1} \quad \mathrm{for} \quad N > \det \Gamma +2.\nonumber \\ \end{aligned}$$
(2.122)

Applying Theorem 1, we have

$$\begin{aligned}&\sup _{\varvec{\theta }\in [0,1)^{s}} |{\mathcal { R}}(B_{(\theta _1 y_0 ,\ldots ,\theta _{s-1}y_0, \theta _s N \det \Gamma )}+\mathbf{x},\Gamma )|\nonumber \\&\quad \ge c_1 ({\mathcal {M}})\log _2^{s-1}(y_0^{s-1} \det \Gamma N)\nonumber \\&\quad \ge c_1 ({\mathcal {M}}) n^{s-1}(1 +n^{-1}(s-1)\log _2 (y_0^{s-1} \det \Gamma )) \ge 0.5c_1({\mathcal {M}}) n^{s-1} \end{aligned}$$

for \(n> 10(s-1)|\log _2 (y_0^{s-1} \det \Gamma )|\). Using (1.6), (2.120), (2.121) and (2.122), we get the assertion of Theorem 2. \(\square \)