1 Introduction

1.1 Ramanujan sums over the rationals

For positive integers m and n, the Ramanujan sum \(c_{m}(n)\) is defined as

$$\begin{aligned} c_{m}(n):=\sum _{\begin{array}{c} 1\le j\le m\\ \gcd ( j,m)=1 \end{array}}e\Big (\frac{jn}{m}\Big )=\sum _{ d \mid \gcd ( m,n)}d\mu \Big (\frac{m}{d}\Big ), \end{aligned}$$
(1.1)

where \(e(z)=e^{2\pi i z}\) and \(\mu (\cdot )\) is the Möbius function. In 2012, Chan and Kumchev [1] studied the average order of \(c_{m}(n)\) with respect to both m and n. They proved that

$$\begin{aligned} \begin{aligned} S_{1}(X,Y)&=\sum _{1\le m\le X}\sum _{1\le n\le Y}c_{m}(n)\\&=Y-\frac{3}{2\pi ^{2}}X^{2}+O(XY^{1/3}\log X)+O(X^{3}Y^{-1}), \end{aligned} \end{aligned}$$
(1.2)

for large real numbers \(Y\ge X \ge 3\), and

$$\begin{aligned} \begin{aligned} S_{1}(X,Y)&\sim {\left\{ \begin{array}{ll} Y, &{} \text {if }\delta >2, \\ {} -\frac{3}{2\pi ^{2}}X^{2}, &{} \text {if }1< \delta <2, \end{array}\right. } \end{aligned} \end{aligned}$$
(1.3)

if \(Y\asymp X^{\delta }\).
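As a quick numerical sanity check of the two expressions in (1.1) (our own illustration, not part of any argument below; the naive \(O(m)\) loops are not meant to be efficient), one may compare the exponential sum with the divisor-sum form for small m and n.

# Numerical check of (1.1): the exponential sum over reduced residues
# agrees with the Moebius/divisor-sum expression of c_m(n).
from cmath import exp, pi
from math import gcd

def mobius(k):
    # naive Moebius function via trial division
    result, p = 1, 2
    while p * p <= k:
        if k % p == 0:
            k //= p
            if k % p == 0:
                return 0
            result = -result
        p += 1
    return result if k == 1 else -result

def c_exponential(m, n):
    return sum(exp(2j * pi * a * n / m) for a in range(1, m + 1) if gcd(a, m) == 1)

def c_divisor(m, n):
    return sum(d * mobius(m // d) for d in range(1, m + 1) if m % d == 0 and n % d == 0)

for m in range(1, 13):
    for n in range(1, 13):
        assert abs(c_exponential(m, n) - c_divisor(m, n)) < 1e-9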

Let s be a fixed positive integer. For positive integers m and n, the sum \(c_{m}^{(s)}(n)\) denotes a generalization of the Ramanujan sum, defined by

$$\begin{aligned} c_{m}^{(s)}(n):=\sum _{\begin{array}{c} d \mid m\\ d^{s}|n \end{array}}d^{s}\mu \left( \frac{m}{d}\right) . \end{aligned}$$
(1.4)

This sum is known as the Cohen sum or the Cohen-Ramanujan sum. In the case \(s = 1\), the function \(c_{m}^{(s)}(n)\) reduces to the Ramanujan sum \(c_{m}(n)\). Some interesting properties of (1.4) were given in detail by Kühn and Robles [11], Robles and Roy [16] and others.

More generally, for any positive integers m, n, s and any arithmetic functions f and g, define

$$\begin{aligned} s_{m}^{(s)}(n):=\sum _{\begin{array}{c} d \mid m\\ d^{s}|n \end{array}}f(d)g\left( \frac{m}{d}\right) . \end{aligned}$$

Kiuchi [9] considered some asymptotic formulas for weighted averages of \(s_{m}^{(s)}(n)\).

In 2021, Kiuchi, Pillichshammer and Eddin [10] proposed a further generalization of \(s_{m}^{(s)}(n)\) which is defined by

$$\begin{aligned} s^{(s)}_{f,g,h}(m,n):=\sum _{\begin{array}{c} d \mid m\\ d^{s}|n \end{array}}f(d)g\left( \frac{m}{d}\right) h\left( \frac{n}{d^{s}}\right) , \end{aligned}$$

where \(s, m, n \in {\mathbb {N}}\) and f, g, h are arithmetic functions. They derived various identities for weighted averages of products of the generalized sums \(s^{(s)}_{f,g,h}(m,n)\) with various weight functions.
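For concreteness, the following sketch (ours, not taken from [9] or [10]; the helper names are ad hoc) evaluates \(s^{(s)}_{f,g,h}(m,n)\) for arbitrary Python callables f, g, h and recovers the Cohen-Ramanujan sum (1.4) by taking \(f(d)=d^{s}\), \(g=\mu \) and \(h\equiv 1\); with \(s=1\) this is again the classical Ramanujan sum \(c_{m}(n)\).

# Evaluate s^{(s)}_{f,g,h}(m, n) = sum_{d | m, d^s | n} f(d) g(m/d) h(n/d^s)
# for arithmetic functions f, g, h supplied as Python callables.
def mobius(k):
    result, p = 1, 2
    while p * p <= k:
        if k % p == 0:
            k //= p
            if k % p == 0:
                return 0
            result = -result
        p += 1
    return result if k == 1 else -result

def s_fgh(m, n, s, f, g, h):
    return sum(f(d) * g(m // d) * h(n // d ** s)
               for d in range(1, m + 1) if m % d == 0 and n % d ** s == 0)

# f(d) = d^s, g = mu, h = 1 gives the Cohen-Ramanujan sum (1.4).
def cohen(m, n, s):
    return s_fgh(m, n, s, lambda d: d ** s, mobius, lambda _: 1)

print(cohen(12, 8, 2), cohen(6, 4, 1))  # c_12^{(2)}(8) = 4 and c_6(4) = -1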

1.2 Ramanujan sums in fields

Let \(\textit{K}\) be a number field and \(\mathcal {O}_{\textit{K}}\) denote its ring of algebraic integers. For any nonzero integral ideal \(\mathcal {I}\) in \(\mathcal {O}_{\textit{K}}\), the Möbius function is defined as follows: \(\mu (\mathcal {I})=0\) if there exists a prime ideal \(\mathcal {P}\) such that \(\mathcal {P}^{2}\) divides \(\mathcal {I}\), and \(\mu (\mathcal {I})=(-1)^{r}\) if \(\mathcal {I}\) is a product of r distinct prime ideals. For any ideal \(\mathcal {I}\), the norm of \(\mathcal {I}\) is denoted by \(\textit{N}(\mathcal {I})\). For nonzero integral ideals \(\mathcal {I}\) and \(\mathcal {J}\), the Ramanujan sum in fields is defined by

$$\begin{aligned} c_{\mathcal {J}}(\mathcal {I}):=\sum _{\begin{array}{c} \mathcal {M}\in \mathcal {O}_{\textit{K}}\\ \mathcal {M}|\mathcal {I},\mathcal {M}|\mathcal {J} \end{array}}{} \textit{N}(\mathcal {M})\mu \Big (\frac{\mathcal {J}}{\mathcal {M}}\Big ), \end{aligned}$$
(1.5)

which is an analogue of (1.1).

For each \(n\ge 1\), let \(a_{\textit{K}}(n)\) denote the number of integral ideals in \(\mathcal {O}_{\textit{K}}\) of norm n. Then

$$\begin{aligned} \sum _{n\le x}a_{\textit{K}}(n)=\rho _{\textit{K}} x +P_{\textit{K}}(x),\qquad P_{\textit{K}}(x)=O(x^{\frac{\textbf{d}-1}{\textbf{d}+1}}), \end{aligned}$$
(1.6)

where \(\rho _{\textit{K}}\) is a constant depending only on the field \(\textit{K}\) and \(\textbf{d}\) is the degree of the field extension \(\textit{K}/{\mathbb {Q}}\). This is a classical result of Landau (see [12]).

Let \(X \ge 3\) and \(Y \ge 3\) be two large real numbers. Define

$$\begin{aligned} S_{\textit{K}}(X,Y):=\sum _{1\le \textit{N}(\mathcal {J})\le X}\sum _{1\le \textit{N}(\mathcal {I})\le Y}c_{\mathcal {J}}(\mathcal {I}), \end{aligned}$$
(1.7)

which is an analogue of (1.2).

When \(\textit{K}\) is a quadratic number field, some authors studied the asymptotic behaviour of \(S_{\textit{K}}(X,Y)\) (see [14, 18, 19]). In [14], Nowak proved

$$\begin{aligned} S_{\textit{K}}(X,Y)\sim \rho _{\textit{K}} Y \end{aligned}$$
(1.8)

provided that \(Y>X^{\delta }\) for some \(\delta >\frac{1973}{820}\). In [18], Zhai improved Nowak's result and proved that (1.8) holds provided that \(Y>X^{\delta }\) for some \(\delta >\frac{79}{34}\). Recently, Zhai [19] proved that (1.8) holds for \(Y>X^{2+\varepsilon }\).

In this paper, we consider the asymptotic behaviour of \(S_{\textit{K}}(X,Y)\) for a cubic field \(\textit{K}\). We shall prove the following results.

Theorem 1.1

Let \(\textit{K}\) be a cubic number field. Suppose that \(Y\ge X\ge 3\) are large real numbers. Then

$$\begin{aligned} S_{\textit{K}}(X,Y)=\rho _{\textit{K}} Y+O(X^{\frac{8}{5}}Y^{\frac{2}{5}+\varepsilon }+X^{\frac{11}{8}}Y^{\frac{1}{2}+\varepsilon }), \end{aligned}$$
(1.9)

provided that \(Y>X^{11/4}\).
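Note that the range is dictated by the error term in (1.9): \(X^{\frac{8}{5}}Y^{\frac{2}{5}+\varepsilon }=o(Y)\) as soon as \(Y\ge X^{\frac{8}{3}+\varepsilon '}\) for some \(\varepsilon '>0\), while \(X^{\frac{11}{8}}Y^{\frac{1}{2}+\varepsilon }=o(Y)\) requires \(Y\ge X^{\frac{11}{4}+\varepsilon '}\), and \(\frac{11}{4}>\frac{8}{3}\).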

Theorem 1.2

Let \(\textit{K}\) be a cubic number field. Suppose that \(X\ge 3\) and \(T\ge 10X\) are large real numbers. Then

$$\begin{aligned} \int _{T}^{2T}|{\mathfrak {R}}_{\textit{K}}(X,Y)|^{2} \,dY = c(X)\int _{T}^{2T}Y^{\frac{2}{3}} \,dY + O(X^{\frac{31}{9}}T^{\frac{14}{9}+\varepsilon }+X^{\frac{26}{9}}T^{\frac{29}{18}+\varepsilon }), \end{aligned}$$

where

$$\begin{aligned} {\mathfrak {R}}_{\textit{K}}(X,Y):=S_{\textit{K}}(X,Y)-\rho _{\textit{K}} Y \end{aligned}$$

and c(X) is defined by (4.7).

Remark

From (4.10) we see that \(c(X)\ll X^{\frac{7}{3}+\varepsilon } \). Combining this estimate with Theorem 1.2, we deduce that the asymptotic formula (1.8) holds on average provided that \(Y>X^{\frac{7}{3}+\varepsilon }\).

Notation

Let [x] denote the greatest integer less than or equal to x. The notation \(U\ll V\) means that there exists a constant \(C>0\) such that \(|U|\leqslant CV\), which is equivalent to \(U=O(V)\). The notations \(U\gg V\) (which implies \(U\geqslant 0\) and \(V\geqslant 0\)) and \(U\asymp V\) (which means that both \(U\ll V\) and \(U\gg V\) hold) are defined similarly. Let \(\zeta (s)\) denote the Riemann zeta-function and \(\tau _{r}(n)\) the number of ways of writing n as a product of r factors. In particular, \(\tau _{2}(n)=\tau (n)\) is the Dirichlet divisor function. Finally, let \(z_{n}~(n\ge 1)\) denote a sequence of complex numbers. We set

$$\begin{aligned} \Big |\sum _{N<n\le 2N}z_{n}\Big |^{*}:=\max _{N\le N_{1}<N_{2}\le 2N}\Big |\sum _{N_{1}<n\le N_{2}}z_{n}\Big |. \end{aligned}$$
(1.10)
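The starred sum in (1.10) is simply the largest modulus attained by a sum over a block of consecutive terms; a brute-force evaluation (our own illustration, only for small N) reads as follows.

# The quantity |sum_{N<n<=2N} z_n|^* of (1.10): the maximal modulus of a
# consecutive block sum over (N_1, N_2] with N <= N_1 < N_2 <= 2N.
import cmath

def max_partial_sum(z, N):
    # z maps each integer n in (N, 2N] to a complex number; O(N^2) brute force
    best = 0.0
    for N1 in range(N, 2 * N):
        s = 0
        for n in range(N1 + 1, 2 * N + 1):
            s += z[n]
            best = max(best, abs(s))
    return best

N = 8
z = {n: cmath.exp(2j * cmath.pi * 0.137 * n) for n in range(N + 1, 2 * N + 1)}
print(max_partial_sum(z, N))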

While revising this manuscript, we noticed that Sneha and Shivani [17] established asymptotic formulas for the second moment of averages of Ramanujan sums over quadratic and cubic number fields, and obtained second moment results for Ramanujan sums over some other number fields.

2 Some lemmas

In this section, we make preparations for the proofs of our theorems. From now on, we always suppose that \(\textit{K}\) is a cubic number field. The Dedekind zeta-function of \(\textit{K}\) is defined by

$$\begin{aligned} \zeta _{\textit{K}}(s):=\sum _{\begin{array}{c} \mathcal {I}\in \mathcal {O}_{\textit{K}}\\ \mathcal {I}\ne 0 \end{array}}\frac{1}{\textit{N}^{s}(\mathcal {I})}\quad (\Re s >1). \end{aligned}$$
(2.1)

Then

$$\begin{aligned} \zeta _{\textit{K}}(s)=\sum _{n=1}^{\infty }\frac{a_{\textit{K}}(n)}{n^{s}} \quad (\Re s >1), \end{aligned}$$
(2.2)

where \(a_{\textit{K}}(n)\) is the number of integral ideals in \(\mathcal {O}_{\textit{K}}\) of norm n.

The function \(\mu _{\textit{K}}(n)\) is defined by

$$\begin{aligned} \frac{1}{\zeta _{\textit{K}}(s)}:=\sum _{n=1}^{\infty }\frac{\mu _{\textit{K}}(n)}{n^{s}} \quad (\Re s>1). \end{aligned}$$

Define

$$\begin{aligned} \textit{M}_{\textit{K}}(x):=\sum _{n\le x}\mu _{\textit{K}}(n). \end{aligned}$$

Then there is a trivial bound

$$\begin{aligned} \textit{M}_{\textit{K}}(x)\ll x. \end{aligned}$$
(2.3)

We collect the algebraic properties of cubic number fields in the following lemma.

Lemma 2.1

(Lemma 1 in [13]) Let \(\textit{K}\) be a cubic number field over \({\mathbb {Q}}\) and \(D=df^{2}\) (d squarefree) its discriminant; then

  1. (a)

    \(\textit{K}/{\mathbb {Q}}\) is a normal extension if and only if \(D=f^{2}\). In this case

    $$\begin{aligned} \zeta _{\textit{K}}(s)=\zeta (s)L(s,\chi _{1})\overline{L(s,\chi _{1})}, \end{aligned}$$

    where \(\zeta (s)\) is the Riemann zeta-function and \(L(s,\chi _{1})\) is an ordinary Dirichlet series (over \({\mathbb {Q}}\)) corresponding to a primitive character \(\chi _{1}\) modulo f.

  2. (b)

    If \(\textit{K}/{\mathbb {Q}}\) is not a normal extension, then \(d\ne 1\) and

    $$\begin{aligned} \zeta _{\textit{K}}(s)=\zeta (s)L(s,\chi _{2}), \end{aligned}$$

    where \(L(s,\chi _{2})\) is a Dirichlet L-function over the quadratic field \(F ={\mathbb {Q}}(\sqrt{d})\):

    $$\begin{aligned} L(s,\chi _{2})=\sum _{\varrho }\chi _{2}(\varrho )N_{F}(\varrho )^{-s}, \quad (\Re s >1). \end{aligned}$$

    Here the summation is taken over all ideals \(\varrho \ne 0\) in F and \(N_{F}\) denotes the (absolute) ideal norm in F.

Remark 2.2

To describe the character \(\chi _{2}\), let H be the ideal group in F according to which the normal extension \(\textit{K}(\sqrt{d})\) is the class field. Then H partitions the set \(A^{f}\) of all ideals \(\varrho \subseteq F\) with \((\varrho ,f)=1\) into three classes \(A^{f}=H\cup C\cup C^{'}\), and (\(\omega =e^{2\pi i/3}\))

$$\begin{aligned} \chi _{2}(\varrho )={\left\{ \begin{array}{ll} 1, &{} \varrho \in H, \\ \omega , &{} \varrho \in C, \\ \overline{\omega }, &{} \varrho \in C^{'}, \\ 0, &{} (\varrho ,f) \ne 1. \end{array}\right. } \end{aligned}$$

The substitution \(\gamma =(\sqrt{d} \mapsto -\sqrt{d})\) in F maps C onto \(C^{'}\).

Remark 2.3

The factorization of \(\zeta _{\textit{K}}(s)\) in Lemma 2.1 gives

$$\begin{aligned} a_{\textit{K}}(n)=\sum _{m|n}b(m), \end{aligned}$$
(2.4)

where in the case of a normal extension \(b(m)=\sum _{xy=m}\chi _{1}(x)\overline{\chi _{1}(y)}\) (\(\chi _{1}\) is the primitive character modulo f). Otherwise b(m) is equal to the number of ideals \(\varrho \in H\) with \(N_{F}(\varrho )=m\) minus the number of ideals \(\varrho \in C\) with \(N_{F}(\varrho )=m\) (the latter equals the number with \(\varrho \in C^{'}\), since \(\gamma \) maps C onto \(C^{'}\)). In both cases, \(|b(m)|\ll m^{\varepsilon }\).
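To illustrate (2.4) in the normal case, the following sketch takes as a concrete example the cyclic cubic field of conductor 7 (so that \(D=f^{2}\) with \(f=7\)), realizing \(\chi _{1}\) as a cubic character modulo 7 through the primitive root 3; the choice of field and the naive divisor loops are ours and serve only as an illustration. The script computes b(m), then \(a_{\textit{K}}(n)\) by (2.4), checks that the values are nonnegative integers with \(a_{\textit{K}}(p)\in \{0,1,3\}\) for primes p, and finally recovers \(\mu _{\textit{K}}(n)\) by Dirichlet inversion, so that the trivial bound (2.3) can also be observed numerically.

# Remark 2.3 (normal case) for the cyclic cubic field of conductor 7:
# chi_1 is a cubic character mod 7 (3 is a primitive root mod 7),
# b(m) = sum_{xy=m} chi_1(x)*conj(chi_1(y)), and a_K(n) = sum_{m|n} b(m).
import cmath

omega = cmath.exp(2j * cmath.pi / 3)
dlog = {pow(3, k, 7): k for k in range(6)}          # discrete log base 3 mod 7

def chi1(x):
    return 0 if x % 7 == 0 else omega ** (dlog[x % 7] % 3)

def b(m):
    return sum(chi1(x) * chi1(m // x).conjugate()
               for x in range(1, m + 1) if m % x == 0)

def a_K(n):
    val = sum(b(m) for m in range(1, n + 1) if n % m == 0)
    assert abs(val.imag) < 1e-9 and val.real > -1e-9   # a nonnegative integer
    return round(val.real)

# a_K(p) is 3 when p is a cube mod 7 (p = 1, 6 mod 7), 1 when p = 7, else 0.
print([a_K(p) for p in (2, 3, 5, 7, 11, 13, 29, 41)])

# mu_K(n) is the Dirichlet inverse of a_K(n): sum_{d|n} mu_K(d) a_K(n/d) = [n = 1].
def mu_K_list(x):
    mu = [0, 1] + [0] * (x - 1)
    for n in range(2, x + 1):
        mu[n] = -sum(mu[d] * a_K(n // d) for d in range(1, n) if n % d == 0)
    return mu

mu = mu_K_list(200)
print(max(abs(sum(mu[1:x + 1])) / x for x in range(1, 201)))  # M_K(x)/x stays bounded, cf. (2.3)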

Lemma 2.4

((68) in [2]) Let \(\textit{K}\) be an algebraic number field of degree \(\textbf{d}\). Then

$$\begin{aligned} a_{\textit{K}}(n)\ll (\tau (n))^{\textbf{d}-1}, \end{aligned}$$
(2.5)

where \(\tau (n)\) is the Dirichlet divisor function and \(\textbf{d}=[K:{\mathbb {Q}}]\).

Corollary 2.5

Let \(\textit{K}\) be a cubic field. Then

$$\begin{aligned} a_{\textit{K}}(n)\ll \tau ^{2}(n). \end{aligned}$$
(2.6)

Lemma 2.6

Suppose \(1\ll N\ll Y\). Then

$$\begin{aligned} P_{\textit{K}}(Y)=\frac{Y^{1/3}}{\sqrt{3}\pi }\sum _{n\le N}\frac{a_{\textit{K}}(n)}{n^{{2}/{3}}}\cos (6\pi (nY)^{{1}/{3}})+O(Y^{{2}/{3}+\varepsilon }N^{-{1}/{3}}), \end{aligned}$$
(2.7)

where the O-constant depends on \(\varepsilon \).

Proof

This is a special case of Proposition 3.2 of Friedlander and Iwaniec [4]. \(\square \)

Lemma 2.7

Let \(T\ge 10\) be a large parameter and y a real number such that \(T^{\varepsilon }\ll y\ll T\). For any \(T\le Y \le 2T\) define

$$\begin{aligned} \begin{aligned}&P_{1}(Y)=P_{1}(Y;y):=\frac{Y^{1/3}}{\sqrt{3}\pi }\sum _{n\le y}\frac{a_{\textit{K}}(n)}{n^{{2}/{3}}}\cos (6\pi (nY)^{{1}/{3}}),\\&P_{2}(Y)=P_{2}(Y;y):=P_{\textit{K}}(Y)-P_{1}(Y). \end{aligned} \end{aligned}$$

Then we have

$$\begin{aligned} \int _{T}^{2T}|P_{2}(Y)|^{2} \,dY \ll {T^{5/3+\varepsilon }}{y^{-1/3}}\quad (y\ll T^{1/3}). \end{aligned}$$
(2.8)

Proof

We prove that the estimate

$$\begin{aligned} \int _{1}^{T}|\zeta _{\textit{K}}(7/12+it)|^{2} \,dt \ll T^{1+\varepsilon } \end{aligned}$$
(2.9)

holds.

If \(\textit{K}/{\mathbb {Q}}\) is a normal extension, then by Lemma 2.1 we have \(\zeta _{\textit{K}}(s)=\zeta (s)L(s,\chi _{1})\overline{L(s,\chi _{1})}\). From Theorem 8.4 in [7] we get that

$$\begin{aligned} \int _{1}^{T}|\zeta (7/12+it)|^{6} \,dt \ll T^{1+\varepsilon }. \end{aligned}$$
(2.10)

The proof of Theorem 8.4 in [7] can be applied directly to \(L(s,\chi _{1})\) to derive

$$\begin{aligned} \int _{1}^{T}|L(7/12+it,\chi _{1})|^{6} \,dt \ll T^{1+\varepsilon }. \end{aligned}$$
(2.11)

From (2.10), (2.11) and Hölder’s inequality we get

$$\begin{aligned} \begin{aligned}&\int _{1}^{T}|\zeta _{\textit{K}}(7/12+it)|^{2} \,dt \\&=\int _{1}^{T}|\zeta (7/12+it)|^{2}|L(7/12+it,\chi _{1})|^{4} \,dt \\&\ll \Big (\int _{1}^{T}|\zeta (7/12+it)|^{6} \,dt \Big )^{1/3}\Big (\int _{1}^{T}|L(7/12+it,\chi _{1})|^{6} \,dt \Big )^{2/3}\\&\ll T^{1+\varepsilon }. \end{aligned} \end{aligned}$$

Now suppose that \(\textit{K}/{\mathbb {Q}}\) is not a normal extension; then \(\zeta _{\textit{K}}(s)=\zeta (s)L(s,\chi _{2})\) by Lemma 2.1. We know that \(L(s,\chi _{2})\) is an automorphic L-function of degree 2 corresponding to a cusp form over \(SL_{2}({\mathbb {Z}})\) (see, for example, Fomenko [5]). So from [3, Lemma 12], which is originally proved in [8], we have

$$\begin{aligned} \int _{1}^{T}|L(7/12+it,\chi _{2})|^{3} \,dt \ll T^{1+\varepsilon }. \end{aligned}$$
(2.12)

By (2.10), (2.12) and Hölder’s inequality we get

$$\begin{aligned} \begin{aligned}&\int _{1}^{T}|\zeta _{\textit{K}}(7/12+it)|^{2} \,dt \\&=\int _{1}^{T}|\zeta (7/12+it)|^{2}|L(7/12+it,\chi _{2})|^{2} \,dt \\&\ll \Big (\int _{1}^{T}|\zeta (7/12+it)|^{6} \,dt \Big )^{1/3}\Big (\int _{1}^{T}|L(7/12+it,\chi _{2})|^{3} \,dt \Big )^{2/3}\\&\ll T^{1+\varepsilon }. \end{aligned} \end{aligned}$$

Now we give a short proof of (2.8). For simplicity, we follow the proof of Theorem 1 in [3]. Take \(d=3\), \(a(n)=a_{\textit{K}}(n)\), \(N=[T^{5-\varepsilon }]\) and \(M=[T^{2/3}]\). From (2.9) we can take \(\sigma ^{*}=7/12\). As in the proof of Theorem 1 in [3], we can write

$$\begin{aligned} P_{2}(Y)=R_{1}^{*}(Y;y)+\sum _{j=2}^{7}R_{j}(Y), \end{aligned}$$

where

$$\begin{aligned} R_{1}^{*}(Y;y):=\frac{Y^{1/3}}{\sqrt{3}\pi }\sum _{y<n\le M}\frac{d_{3}(n)}{n^{2/3}}\cos (6\pi (nY)^{1/3}) \end{aligned}$$

and \(R_{j}(Y)\) (j = 2, 3, 4, 5, 6, 7) are defined on p. 2129 of [3]. Similarly to (8.11) of [3], we have the estimate (noting that \(y\ll T^{1/3}\))

$$\begin{aligned} \begin{aligned}&\int _{T}^{2T}(R_{1}^{*}(x;y)+R_{2}(x))^{2}dx\\&\ll \sum _{y<n\le M}\frac{d_{3}^{2}(n)}{n^{4/3}}\int _{T}^{2T}x^{2/3}dx+T^{5/3+\varepsilon }M^{-1/6}+T^{4/3+\varepsilon }M^{1/3}\\&\ll T^{5/3+\varepsilon }y^{-1/3}+T^{14/9+\varepsilon }\ll T^{5/3+\varepsilon }y^{-1/3}, \end{aligned} \end{aligned}$$

which, combined with (8.17) of [3], gives (2.8). \(\square \)

Next, we consider the following exponential sums:

$$\begin{aligned} S_{0}=\sum _{H<h\le 2H}\sum _{N<n\le 2N}a(h,n)\sum _{{M<m\le 2M}}b(m)e\Big (U\frac{h^{\beta }n^{\gamma }m^{\alpha }}{H^{\beta }N^{\gamma }M^{\alpha }}\Big ) \end{aligned}$$
(2.13)

and

$$\begin{aligned} S_{1}=\sum _{H<h\le 2H}\sum _{N<n\le 2N}a(h,n)\Big |\sum _{{M<m\le 2M}}e\Big (U\frac{h^{\beta }n^{\gamma }m^{\alpha }}{H^{\beta }N^{\gamma }M^{\alpha }}\Big )\Big |^{*}, \end{aligned}$$
(2.14)

where H, N, M are positive integers, U is a real number greater than one, and a(h, n) and b(m) are complex numbers of modulus at most one; moreover, \(\alpha , \beta ,\gamma \) are fixed real numbers such that \(\alpha (\alpha -1)\beta \gamma \ne 0\).

Lemma 2.8

([15]) We have

$$\begin{aligned} S_{0}\ll (HNM)^{1+\varepsilon }\Big (\Big (\frac{U}{HNM^{2}}\Big )^{1/4}+\frac{1}{(HN)^{1/4}}+\frac{1}{M^{1/2}}+\frac{1}{U^{1/2}}\Big ), \end{aligned}$$
(2.15)

and

$$\begin{aligned} S_{1}\ll (HNM)^{1+\varepsilon }\Big (\Big (\frac{U}{HNM^{2}}\Big )^{1/4}+\frac{1}{M^{1/2}}+\frac{1}{U}\Big ). \end{aligned}$$
(2.16)

Lemma 2.9

(see Lemma 2.4 in [6]) Suppose that

$$\begin{aligned} L(H)=\sum _{i=1}^{m}A_{i}H^{a_{i}}+\sum _{j=1}^{n}B_{j}H^{-b_{j}}, \end{aligned}$$

where \(A_{i},B_{j},a_{i},\) and \(b_{j}\) are positive. Assume that \(H_{1}\le H_{2}\). Then there is some H with \(H_{1}\le H \le H_{2}\) and

$$\begin{aligned} L(H)\ll \sum _{i=1}^{m}\sum _{j=1}^{n}(A_{i}^{b_{j}}B_{j}^{a_{i}})^{{1}/{(a_{i}+b_{j})}}+\sum _{i=1}^{m}A_{i}H_{1}^{a_{i}}+\sum _{j=1}^{n}B_{j}H_{2}^{-b_{j}}. \end{aligned}$$

The implied constants depend only on m and n.
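Lemma 2.9 is the standard device for choosing an optimal splitting parameter, and it may be worth seeing it in action numerically. In the toy example below (coefficients and exponents chosen arbitrarily by us, with \(m=n=1\)), the minimum of L(H) over \([H_{1},H_{2}]\) is indeed of the order of the bound produced by the lemma.

# Lemma 2.9 with m = n = 1: L(H) = A*H^a + B*H^(-b); the lemma produces some H
# in [H1, H2] with L(H) << (A^b * B^a)^(1/(a+b)) + A*H1^a + B*H2^(-b).
A, a, B, b = 3.0, 1.0 / 3.0, 50.0, 1.0 / 3.0    # arbitrary sample data
H1, H2 = 1.0, 1.0e6

grid = [H1 * (H2 / H1) ** (k / 9999.0) for k in range(10000)]
L_min = min(A * H ** a + B * H ** (-b) for H in grid)

bound = (A ** b * B ** a) ** (1.0 / (a + b)) + A * H1 ** a + B * H2 ** (-b)
print(L_min, bound)
assert L_min <= 2.0 * bound     # here the absolute constant 2 is enough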

Lemma 2.10

(see Lemma 2.4 in [18]) Let \(l\ge 2\) and \(q\ge 1\) be two fixed integers. Then we have

$$\begin{aligned} \sum _{n\le x}\tau _{l}^{q}(n)\ll x(\log x)^{l^{q}-1}. \end{aligned}$$
(2.17)
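As a quick numerical illustration of Lemma 2.10 in the smallest admissible case \(l=q=2\) (so that the bound reads \(\sum _{n\le x}\tau ^{2}(n)\ll x\log ^{3}x\)), the ratios computed by the following sketch (ours) remain bounded as x grows.

# Lemma 2.10 with l = q = 2: sum_{n <= x} tau(n)^2 << x (log x)^3.
from math import log

def divisor_counts(x):
    # tau(n) for 1 <= n <= x via a sieve
    tau = [0] * (x + 1)
    for d in range(1, x + 1):
        for n in range(d, x + 1, d):
            tau[n] += 1
    return tau

tau = divisor_counts(200_000)
for X in (10_000, 50_000, 100_000, 200_000):
    s = sum(t * t for t in tau[1:X + 1])
    print(X, s / (X * log(X) ** 3))     # the ratio remains bounded, consistent with (2.17)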

Lemma 2.11

Let \(T \ge 2\) be a real number. Then we have

$$\begin{aligned} \sum _{\begin{array}{c} m,n\le T\\ m\ne n \end{array}}\frac{\tau _{4}^{2}(m)\tau _{4}^{2}(n)}{(mn)^{\frac{2}{3}}|\root 3 \of {m}-\root 3 \of {n}|}\ll T^{\frac{1}{3}+\varepsilon }. \end{aligned}$$
(2.18)

Proof

First, we write

$$\begin{aligned} \sum _{\begin{array}{c} m,n\le T\\ m\ne n \end{array}}\frac{\tau _{4}^{2}(m)\tau _{4}^{2}(n)}{(mn)^{\frac{2}{3}}|\root 3 \of {m}-\root 3 \of {n}|}=S_{1}+S_{2}, \end{aligned}$$

where

$$\begin{aligned} S_{1}= & {} \sum _{\begin{array}{c} m,n\le T\\ |\root 3 \of {m}-\root 3 \of {n}|\ge (mn)^{1/6}/10 \end{array}}\frac{\tau _{4}^{2}(m)\tau _{4}^{2}(n)}{(mn)^{\frac{2}{3}}|\root 3 \of {m}-\root 3 \of {n}|}, \\ S_{2}= & {} \sum _{\begin{array}{c} m,n\le T\\ 0<|\root 3 \of {m}-\root 3 \of {n}|< (mn)^{1/6}/10 \end{array}}\frac{\tau _{4}^{2}(m)\tau _{4}^{2}(n)}{(mn)^{\frac{2}{3}}|\root 3 \of {m}-\root 3 \of {n}|}. \end{aligned}$$

Applying Lemma 2.10 with \(l=4\) and \(q=2\), we have

$$\begin{aligned} S_{1}\ll \sum _{m,n\le T}\frac{\tau _{4}^{2}(m)\tau _{4}^{2}(n)}{(mn)^{\frac{5}{6}}}\ll T^{\frac{1}{3}+\varepsilon }, \end{aligned}$$

where we used a summation by parts.

Second, \(0<|\root 3 \of {m}-\root 3 \of {n}|< {(mn)^{1/6}}/{10}\) implies that \(m\asymp n\). By Lagrange's mean value theorem we have \(|\root 3 \of {m}-\root 3 \of {n}|\asymp (mn)^{-1/3}|m-n|\). By the inequality \(ab\le (a^{2}+b^{2})/2\) and Lemma 2.10 with \(l=4\) and \(q=4\), we get that

$$\begin{aligned} \begin{aligned} S_{2}&\ll \sum _{m\asymp n\le T}\frac{\tau _{4}^{2}(m)\tau _{4}^{2}(n)}{(mn)^{{1}/{3}}|m-n|}\\&\ll \sum _{m\asymp n\le T}\Big (\frac{\tau _{4}^{4}(m)}{m^{{2}/{3}}}+\frac{\tau _{4}^{4}(n)}{n^{{2}/{3}}}\Big )\frac{1}{|m-n|}\\&\ll \sum _{m\le T}\frac{\tau _{4}^{4}(m)}{m^{{2}/{3}}}\sum _{m\asymp n}\frac{1}{|m-n|}\ll T^{\frac{1}{3}+\varepsilon }. \end{aligned} \end{aligned}$$

\(\square \)

3 Proof of Theorem 1.1

We begin the proof with formula (2.3) in [14], which reads

$$\begin{aligned} \begin{aligned} S_{\textit{K}}(X,Y)&=\rho _{\textit{K}} Y+ \sum _{\begin{array}{c} \mathcal {M},\mathcal {L}\in \mathcal {O}_{\textit{K}}\\ 1\le N(\mathcal {M}\mathcal {L})\le X \end{array}}N(\mathcal {M})\mu (\mathcal {L})P_{\textit{K}}\Big (\frac{Y}{N(\mathcal {M})}\Big )\\&=\rho _{\textit{K}} Y+\sum _{\begin{array}{c} \mathcal {M},\mathcal {L}\in \mathcal {O}_{\textit{K}}\\ 1\le N(\mathcal {M})N(\mathcal {L})\le X \end{array}}N(\mathcal {M})\mu (\mathcal {L})P_{\textit{K}}\Big (\frac{Y}{N(\mathcal {M})}\Big ). \end{aligned} \end{aligned}$$
(3.1)

Let \({\mathfrak {R}}={\mathfrak {R}}_{\textit{K}}(X,Y)\) denote the last sum in (3.1). We have

$$\begin{aligned} \begin{aligned} {\mathfrak {R}}&=\sum _{1\le ml \le X}ma_{\textit{K}}(m)\mu _{K}(l)P_{\textit{K}}\Big (\frac{Y}{m}\Big )\\&=\sum _{1\le l \le X}\mu _{K}(l)\sum _{1\le m \le X/l}ma_{\textit{K}}(m)P_{\textit{K}}\Big (\frac{Y}{m}\Big )\\&=\mathfrak {R_{1}^{\dag }}+\mathfrak {R_{2}^{\dag }}, \end{aligned} \end{aligned}$$
(3.2)

where

$$\begin{aligned} \begin{aligned}&\mathfrak {R_{1}^{\dag }}:=\sum _{1\le l \le X^{1-\varepsilon }}\mu _{K}(l)\sum _{1\le m \le X/l}ma_{\textit{K}}(m)P_{\textit{K}}\Big (\frac{Y}{m}\Big ),\\&\mathfrak {R_{2}^{\dag }}:=\sum _{X^{1-\varepsilon }< l \le X}\mu _{K}(l)\sum _{1\le m \le X/l}ma_{\textit{K}}(m)P_{\textit{K}}\Big (\frac{Y}{m}\Big ). \end{aligned} \end{aligned}$$

First, we bound \(\mathfrak {R_{2}^{\dag }}\). Note that \(l> X^{1-\varepsilon }\) forces \(m\le X/l\le X^{\varepsilon }\). Müller [13] proved that \(P_{\textit{K}}(x)=O(x^{\frac{43}{96}+\varepsilon })\), so we easily derive that

$$\begin{aligned} \mathfrak {R_{2}^{\dag }}\ll XY^{43/96+\varepsilon }. \end{aligned}$$
(3.3)

Second, we consider \(\mathfrak {R_{1}^{\dag }}\). We can write

$$\begin{aligned} \mathfrak {R_{1}^{\dag }}:=\sum _{1\le l \le X^{1-\varepsilon }}\mu _{K}(l)\mathfrak {R_{1}}(X_{l},Y), \end{aligned}$$
(3.4)

where

$$\begin{aligned} \mathfrak {R_{1}}(X_{l},Y)=\sum _{1\le m \le X_{l}}ma_{\textit{K}}(m)P_{\textit{K}}\Big (\frac{Y}{m}\Big ),\qquad X_{l}=X/l. \end{aligned}$$
(3.5)

Using (2.4), we can write

$$\begin{aligned} \mathfrak {R_{1}}(X_{l},Y)=\sum _{1\le m_{1}m_{2}\le X_{l}}m_{1}m_{2}b(m_{2})P_{\textit{K}}\Big (\frac{Y}{m_{1}m_{2}}\Big ). \end{aligned}$$
(3.6)

By a splitting argument, \(\mathfrak {R_{1}}(X_{l},Y)\) can be written as a sum of \(O(\log ^{2}X)\) terms of the form

$$\begin{aligned} R(M_{1},M_{2}):=\sum _{\begin{array}{c} 1\le m_{1}m_{2}\le X_{l}\\ M_{j}<m_{j}\le 2M_{j}(j=1,2) \end{array}}m_{1}m_{2}b(m_{2})P_{\textit{K}}\Big (\frac{Y}{m_{1}m_{2}}\Big ). \end{aligned}$$
(3.7)

Suppose that \( y\ll Y/(M_{1}M_{2})\) is a parameter to be determined later. By Lemma 2.6, we have

$$\begin{aligned} \begin{aligned} R(M_{1},M_{2})&=\frac{Y^{\frac{1}{3}}}{\sqrt{3}\pi }\sum _{\begin{array}{c} 1\le m_{1}m_{2}\le X_{l}\\ M_{j}<m_{j}\le 2M_{j}(j=1,2) \end{array}}(m_{1}m_{2})^{\frac{2}{3}}b(m_{2})\sum _{n\le y}\frac{a_{\textit{K}}(n)}{n^{2/3}}\cos \Big (6\pi \root 3 \of {\frac{nY}{m_{1}m_{2}}}\Big )\\&\quad +O((M_{1}M_{2})^{4/3}Y^{2/3+\varepsilon }y^{-1/3}). \end{aligned} \end{aligned}$$

Applying a splitting argument to the sum over n, we get

$$\begin{aligned} \begin{aligned} R(M_{1},M_{2})&\ll Y^{\frac{1}{3}}(M_{1}M_{2})^{\frac{2}{3}+\varepsilon }N^{-\frac{2}{3}+\varepsilon }|R^{*}(M_{1},M_{2},N)|\\&\quad +O((M_{1}M_{2})^{4/3}Y^{2/3+\varepsilon }y^{-1/3}) \end{aligned} \end{aligned}$$
(3.8)

for some \(1\ll N\ll y\), where

$$\begin{aligned} \begin{aligned} R^{*}(M_{1},M_{2},N)=\sum _{\begin{array}{c} 1\le m_{1}m_{2}\le X_{l}\\ M_{j}<m_{j}\le 2M_{j}(j=1,2) \end{array}}\Big (\frac{m_{1}}{M_{1}}\Big )^{\frac{2}{3}}\Big (\frac{m_{2}}{M_{2}}\Big )^{\frac{2}{3}}\frac{b(m_{2})}{M_{2}^{\varepsilon }}\sum _{N<n \le 2N}c(n)e\Big (6\pi \root 3 \of {\frac{nY}{m_{1}m_{2}}}\Big ) \end{aligned} \end{aligned}$$

with

$$\begin{aligned} c(n)=\frac{a_{\textit{K}}(n)}{N^{\varepsilon }}\Big (\frac{N}{n}\Big )^{\frac{2}{3}}. \end{aligned}$$

Now, we give our first estimate for the sum \(R^{*}(M_{1},M_{2},N)\). Obviously, we have

$$\begin{aligned} R^{*}(M_{1},M_{2},N)\ll R^{\dag }(M_{1},M_{2},N), \end{aligned}$$
(3.9)

where

$$\begin{aligned} R^{\dag }(M_{1},M_{2},N)=\sum _{M_{2}<m_{2}\le 2M_{2}}\sum _{N<n\le 2N}\Big |\sum _{M_{1}<m_{1}\le 2M_{1}}e\Big (6\pi \root 3 \of {\frac{nY}{m_{1}m_{2}}}\Big )\Big |^{*}. \end{aligned}$$

By taking \((H,N,M)=(M_{2},N,M_{1})\) and \(U=\root 3 \of {NY}/\root 3 \of {M_{1}M_{2}}\) in Lemma 2.8, we get that

$$\begin{aligned} R^{\dag }(M_{1},M_{2},N)Y^{-\varepsilon }\ll N^{\frac{5}{6}}Y^{\frac{1}{12}}{M_{1}}^{\frac{5}{12}}M_{2}^{\frac{2}{3}}+N{M_{1}}^{\frac{1}{2}}M_{2}+N^{\frac{2}{3}}Y^{-\frac{1}{3}}(M_{1}M_{2})^{\frac{4}{3}}, \end{aligned}$$

which, combined with (3.9), gives

$$\begin{aligned} \begin{aligned}&R^{*}(M_{1},M_{2},N)Y^{-\varepsilon }\\&\ll N^{\frac{5}{6}}Y^{\frac{1}{12}}{M_{1}}^{\frac{5}{12}}M_{2}^{\frac{2}{3}}+N{M_{1}}^{\frac{1}{2}}M_{2}+N^{\frac{2}{3}}Y^{-\frac{1}{3}}(M_{1}M_{2})^{\frac{4}{3}},\\&=N^{\frac{5}{6}}Y^{\frac{1}{12}}(M_{1}M_{2})^{\frac{5}{12}}M_{2}^{\frac{1}{4}}+N(M_{1}M_{2})^{\frac{1}{2}}M_{2}^{\frac{1}{2}}+N^{\frac{2}{3}}Y^{-\frac{1}{3}}(M_{1}M_{2})^{\frac{4}{3}}. \end{aligned} \end{aligned}$$
(3.10)

Next, we give another estimate for \(R^{*}(M_{1}, M_{2}, N)\). Clearly we have

$$\begin{aligned} R^{*}(M_{1}, M_{2}, N)\ll R^{\ddag }(M_{1}, M_{2}, N), \end{aligned}$$
(3.11)

where

$$\begin{aligned} R^{\ddag }(M_{1}, M_{2}, N)=\sum _{M_{1}<m_{1}\le 2M_{1}}\sum _{N<n\le 2N}\sum _{M_{2}<m_{2}\le 2M_{2}}e\Big (6\pi \root 3 \of {\frac{nY}{m_{1}m_{2}}}\Big ). \end{aligned}$$

By taking \((H, N, M)=(M_{1}, N, M_{2})\) and \(U=\root 3 \of {NY}/\root 3 \of {M_{1}M_{2}}\) in Lemma 2.8, we get that

$$\begin{aligned} \begin{aligned} R^{\ddag }(M_{1},M_{2},N)Y^{-\varepsilon }&\ll N^{\frac{5}{6}}Y^{\frac{1}{12}}(M_{1}M_{2})^{\frac{5}{12}}M_{1}^{\frac{1}{4}}+N^{\frac{3}{4}}(M_{1}M_{2})^{\frac{3}{4}}M_{2}^{\frac{1}{4}}\\&\qquad +N(M_{1}M_{2})^{\frac{1}{2}}M_{1}^{\frac{1}{2}}+N^{\frac{5}{6}}Y^{-\frac{1}{6}}(M_{1}M_{2})^{\frac{7}{6}}\\&\ll N^{\frac{5}{6}}Y^{\frac{1}{12}}(M_{1}M_{2})^{\frac{5}{12}}M_{1}^{\frac{1}{4}}+N^{\frac{3}{4}}(M_{1}M_{2})^{\frac{3}{4}}(M_{1}M_{2})^{\frac{1}{4}} \\&\qquad +N(M_{1}M_{2})^{\frac{1}{2}}M_{1}^{\frac{1}{2}}+N^{\frac{5}{6}}Y^{-\frac{1}{6}}(M_{1}M_{2})^{\frac{7}{6}}\\&\ll N^{\frac{5}{6}}Y^{\frac{1}{12}}(M_{1}M_{2})^{\frac{5}{12}}M_{1}^{\frac{1}{4}}+N^{\frac{3}{4}}(M_{1}M_{2}) \\&\qquad +N(M_{1}M_{2})^{\frac{1}{2}}M_{1}^{\frac{1}{2}}+N^{\frac{5}{6}}Y^{-\frac{1}{6}}(M_{1}M_{2})^{\frac{7}{6}}. \end{aligned} \end{aligned}$$

So

$$\begin{aligned} \begin{aligned} R^{*}(M_{1},M_{2},N)Y^{-\varepsilon }&\ll N^{\frac{5}{6}}Y^{\frac{1}{12}}(M_{1}M_{2})^{\frac{5}{12}}M_{1}^{\frac{1}{4}}+N(M_{1}M_{2})^{\frac{1}{2}}M_{1}^{\frac{1}{2}} \\&\qquad +N^{\frac{3}{4}}(M_{1}M_{2})+N^{\frac{5}{6}}Y^{-\frac{1}{6}}(M_{1}M_{2})^{\frac{7}{6}}. \end{aligned} \end{aligned}$$
(3.12)

From (3.10) and (3.12), we get

$$\begin{aligned} \begin{aligned} R^{*}(M_{1},M_{2},N)Y^{-\varepsilon }&\ll J_{1}+J_{2}+J_{3}+J_{4}+ N^{\frac{5}{6}}Y^{-\frac{1}{6}}(M_{1}M_{2})^{\frac{7}{6}}\\&\qquad +N^{\frac{3}{4}}(M_{1}M_{2}) +N^{\frac{2}{3}}Y^{-\frac{1}{3}}(M_{1}M_{2})^{\frac{4}{3}}, \end{aligned} \end{aligned}$$

where

$$\begin{aligned} \begin{aligned}&J_{1}=\min \Big (N(M_{1}M_{2})^{\frac{1}{2}}M_{2}^{\frac{1}{2}},N^{\frac{5}{6}}Y^{\frac{1}{12}}(M_{1}M_{2})^{\frac{5}{12}}M_{1}^{\frac{1}{4}}\Big ),\\&J_{2}=\min \Big (N^{\frac{5}{6}}Y^{\frac{1}{12}}(M_{1}M_{2})^{\frac{5}{12}}M_{2}^{\frac{1}{4}},N^{\frac{5}{6}}Y^{\frac{1}{12}}(M_{1}M_{2})^{\frac{5}{12}}M_{1}^{\frac{1}{4}}\Big ),\\&J_{3}=\min \Big (N(M_{1}M_{2})^{\frac{1}{2}}M_{2}^{\frac{1}{2}},N(M_{1}M_{2})^{\frac{1}{2}}M_{1}^{\frac{1}{2}}\Big ),\\&J_{4}=\min \Big (N^{\frac{5}{6}}Y^{\frac{1}{12}}(M_{1}M_{2})^{\frac{5}{12}}M_{2}^{\frac{1}{4}},N(M_{1}M_{2})^{\frac{1}{2}}M_{1}^{\frac{1}{2}}\Big ). \end{aligned} \end{aligned}$$

Noticing the fact that \( \min (X_{1},\ldots ,X_{k}) \le X_{1}^{a_{1}} \cdots X_{k}^{a_{k}}\) whenever \(X_{1}, \ldots ,X_{k}>0\) and \(a_{1},\ldots ,a_{k}\ge 0\) satisfy \(a_{1}+\cdots +a_{k}=1\), we have

$$\begin{aligned} \begin{aligned} J_{1}&\le \Big (N(M_{1}M_{2})^{\frac{1}{2}}M_{2}^{\frac{1}{2}}\Big )^{\frac{1}{3}}\Big (N^{\frac{5}{6}}Y^{\frac{1}{12}}(M_{1}M_{2})^{\frac{5}{12}}M_{1}^{\frac{1}{4}}\Big )^{\frac{2}{3}} \le N^{\frac{8}{9}}Y^{\frac{1}{18}}(M_{1}M_{2})^{\frac{11}{18}},\\ J_{2}&\le \Big (N^{\frac{5}{6}}Y^{\frac{1}{12}}(M_{1}M_{2})^{\frac{5}{12}}M_{2}^{\frac{1}{4}}\Big )^{\frac{1}{2}} \Big (N^{\frac{5}{6}}Y^{\frac{1}{12}}(M_{1}M_{2})^{\frac{5}{12}}M_{1}^{\frac{1}{4}}\Big )^{\frac{1}{2}} \le N^{\frac{5}{6}}Y^{\frac{1}{12}}(M_{1}M_{2})^{\frac{13}{24}},\\ J_{3}&\le \Big (N(M_{1}M_{2})^{\frac{1}{2}}M_{2}^{\frac{1}{2}}\Big )^{\frac{1}{2}}\Big (N(M_{1}M_{2})^{\frac{1}{2}}M_{1}^{\frac{1}{2}}\Big )^{\frac{1}{2}} \le N(M_{1}M_{2})^{\frac{3}{4}},\\ J_{4}&\le \Big (N^{\frac{5}{6}}Y^{\frac{1}{12}}(M_{1}M_{2})^{\frac{5}{12}}M_{2}^{\frac{1}{4}}\Big )^{\frac{2}{3}} \Big (N(M_{1}M_{2})^{\frac{1}{2}}M_{1}^{\frac{1}{2}}\Big )^{\frac{1}{3}} \le N^{\frac{8}{9}}Y^{\frac{1}{18}}(M_{1}M_{2})^{\frac{11}{18}}. \end{aligned} \end{aligned}$$

It now follows that

$$\begin{aligned} \begin{aligned}&R^{*}(M_{1},M_{2},N)Y^{-\varepsilon }\\&\ll N^{\frac{8}{9}}Y^{\frac{1}{18}}(M_{1}M_{2})^{\frac{11}{18}}+N(M_{1}M_{2})^{\frac{3}{4}}+N^{\frac{5}{6}}Y^{\frac{1}{12}}(M_{1}M_{2})^{\frac{13}{24}} \\ {}&\quad +N^{\frac{5}{6}}Y^{-\frac{1}{6}}(M_{1}M_{2})^{\frac{7}{6}} +N^{\frac{3}{4}}(M_{1}M_{2}) +N^{\frac{2}{3}}Y^{-\frac{1}{3}}(M_{1}M_{2})^{\frac{4}{3}}. \end{aligned} \end{aligned}$$
(3.13)

Combining (3.8) with (3.13), we get (recalling \(N\ll y\))

$$\begin{aligned} \begin{aligned}&R(M_{1},M_{2})Y^{-\varepsilon }\\&\ll N^{\frac{2}{9}}Y^{\frac{7}{18}}(M_{1}M_{2})^{\frac{23}{18}} + N^{\frac{1}{3}}Y^{\frac{1}{3}}(M_{1}M_{2})^{\frac{17}{12}} + N^{\frac{1}{6}}Y^{\frac{5}{12}}(M_{1}M_{2})^{\frac{29}{24}}\\&\qquad + N^{\frac{1}{6}}Y^{\frac{1}{6}}(M_{1}M_{2})^{\frac{11}{6}} + N^{\frac{1}{12}}Y^{\frac{1}{3}}(M_{1}M_{2})^{\frac{5}{3}} + Y^{\frac{2}{3}}(M_{1}M_{2})^{\frac{4}{3}}y^{-\frac{1}{3}} +(M_{1}M_{2})^{2}\\&\ll Y^{\frac{7}{18}}y^{\frac{2}{9}}(M_{1}M_{2})^{\frac{23}{18}} + Y^{\frac{1}{3}}y^{\frac{1}{3}}(M_{1}M_{2})^{\frac{17}{12}} + Y^{\frac{5}{12}}y^{\frac{1}{6}}(M_{1}M_{2})^{\frac{29}{24}}\\&\qquad + Y^{\frac{1}{6}}y^{\frac{1}{6}}(M_{1}M_{2})^{\frac{11}{6}} + Y^{\frac{1}{3}}y^{\frac{1}{12}}(M_{1}M_{2})^{\frac{5}{3}} + Y^{\frac{2}{3}}(M_{1}M_{2})^{\frac{4}{3}}y^{-\frac{1}{3}} +(M_{1}M_{2})^{2}. \end{aligned} \end{aligned}$$
(3.14)

By choosing a best \(y\ll Y/(M_{1}M_{2})\) via Lemma 2.9 (recalling that \(X_{l}=X/l\) and \(M_{1}M_{2}\ll X_{l}\le Y\)), we get that

$$\begin{aligned} \begin{aligned} R(M_{1},M_{2})Y^{-\varepsilon }&\ll Y^{\frac{1}{2}}(M_{1}M_{2})^{\frac{13}{10}}+ Y^{\frac{1}{2}}(M_{1}M_{2})^{\frac{11}{8}}+ Y^{\frac{1}{2}}(M_{1}M_{2})^{\frac{5}{4}}\\&\qquad \qquad + Y^{\frac{1}{3}}(M_{1}M_{2})^{\frac{5}{3}}+ Y^{\frac{2}{5}}(M_{1}M_{2})^{\frac{8}{5}}+ (M_{1}M_{2})^{2}\\&\ll Y^{\frac{1}{2}}{X_{l}}^{\frac{13}{10}}+ Y^{\frac{1}{2}}{X_{l}}^{\frac{11}{8}}+ Y^{\frac{1}{2}}{X_{l}}^{\frac{5}{4}}+ Y^{\frac{1}{3}}{X_{l}}^{\frac{5}{3}}+ Y^{\frac{2}{5}}{X_{l}}^{\frac{8}{5}}+ {X_{l}}^{2}. \end{aligned} \end{aligned}$$
(3.15)
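To indicate where the exponents in (3.15) come from: for instance, balancing the second and the sixth terms on the right-hand side of (3.14) by Lemma 2.9 (writing \(M:=M_{1}M_{2}\ll X_{l}\)) produces the second term of (3.15), since

$$\begin{aligned} \Big (\big (Y^{\frac{1}{3}}M^{\frac{17}{12}}\big )^{\frac{1}{3}}\big (Y^{\frac{2}{3}}M^{\frac{4}{3}}\big )^{\frac{1}{3}}\Big )^{\frac{1}{1/3+1/3}}=\big (YM^{\frac{33}{12}}\big )^{\frac{1}{2}}=Y^{\frac{1}{2}}M^{\frac{11}{8}}; \end{aligned}$$

the other terms of (3.15) arise in the same way from the remaining pairs, from the endpoint values at \(y\asymp 1\) and \(y\asymp Y/M\), and from the y-independent term \((M_{1}M_{2})^{2}\).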

From (3.4)-(3.7) and (3.15), we get

$$\begin{aligned} \begin{aligned} {\mathfrak {R}}_{1}^{\dag }Y^{-\varepsilon }&\ll {X}^{\frac{13}{10}}Y^{\frac{1}{2}}+ {X}^{\frac{11}{8}}Y^{\frac{1}{2}}+ {X}^{\frac{5}{4}}Y^{\frac{1}{2}}+ {X}^{\frac{5}{3}}Y^{\frac{1}{3}}+ {X}^{\frac{8}{5}}Y^{\frac{2}{5}}+ {X}^{2}\\&\ll {X}^{\frac{11}{8}}Y^{\frac{1}{2}}+ {X}^{\frac{8}{5}}Y^{\frac{2}{5}} \end{aligned} \end{aligned}$$

by noting that \(Y\ge X\). This together with (3.2) and (3.3) yields

$$\begin{aligned} {\mathfrak {R}}Y^{-\varepsilon }\ll {X}^{\frac{11}{8}}Y^{\frac{1}{2}}+ {X}^{\frac{8}{5}}Y^{\frac{2}{5}}+XY^{\frac{43}{96}} \ll {X}^{\frac{11}{8}}Y^{\frac{1}{2}}+ {X}^{\frac{8}{5}}Y^{\frac{2}{5}}. \end{aligned}$$

This completes the proof of Theorem 1.1.

4 Proof of Theorem 1.2

We begin with the first expression of \({\mathfrak {R}}\) in (3.2)

$$\begin{aligned} \begin{aligned} {\mathfrak {R}}&=\sum _{1\le ml \le X}ma_{\textit{K}}(m)\mu _{K}(l)P_{\textit{K}}\Big (\frac{Y}{m}\Big )\\&=\sum _{1\le m \le X}ma_{\textit{K}}(m)\textit{M}_{\textit{K}}\Big (\frac{X}{m}\Big )P_{\textit{K}}\Big (\frac{Y}{m}\Big )\\&={\mathfrak {R}}_{1}+{\mathfrak {R}}_{2}, \end{aligned} \end{aligned}$$
(4.1)

where

$$\begin{aligned} \begin{aligned}&{\mathfrak {R}}_{1}= \frac{Y^{1/3}}{\sqrt{3}\pi }\sum _{m\le X}m^{2/3}a_{\textit{K}}(m)\textit{M}_{\textit{K}}\Big (\frac{X}{m}\Big )\sum _{n\le y} \frac{a_{\textit{K}}(n)}{n^{{2}/{3}}}\cos \Big (6\pi \root 3 \of {\frac{nY}{m}}\Big ),\\&{\mathfrak {R}}_{2}= \sum _{1\le m \le X}ma_{\textit{K}}(m)\textit{M}_{\textit{K}}\Big (\frac{X}{m}\Big )P_{2}\Big (\frac{Y}{m}\Big ). \end{aligned} \end{aligned}$$

4.1 A. Evaluation of \(\int _{T}^{2T}{\mathfrak {R}}_{2}^{2} \,dY \)

Suppose that \(0<y<\big (\frac{T}{X}\big )^{1/3}\). Then it is not hard to see that

$$\begin{aligned} \begin{aligned} {\mathfrak {R}}_{2}&\ll \sum _{ m\sim M}ma_{\textit{K}}(m)\textit{M}_{\textit{K}}\Big (\frac{X}{m}\Big )P_{2}\Big (\frac{Y}{m}\Big )\log X\\&\ll X\sum _{ m\sim M}a_{\textit{K}}(m)P_{2}\Big (\frac{Y}{m}\Big )\log X \end{aligned} \end{aligned}$$

for some \(1\ll M\ll X\), where we used the trivial bound \(\textit{M}_{\textit{K}}(t)\ll t\) from (2.3). By Cauchy’s inequality we get

$$\begin{aligned} \begin{aligned} {\mathfrak {R}}_{2}^{2}&\ll X^{2}\sum _{ m\sim M}a_{\textit{K}}(m)\sum _{ m\sim M}a_{\textit{K}}(m)P_{2}^{2}\Big (\frac{Y}{m}\Big )\log ^{2}X\\&\ll X^{2}M\sum _{ m\sim M}a_{\textit{K}}(m)P_{2}^{2}\Big (\frac{Y}{m}\Big )\log ^{2}X, \end{aligned} \end{aligned}$$

which together with \(Xy^{3}\ll T\) implies that

$$\begin{aligned} \begin{aligned} \int _{T}^{2T}{\mathfrak {R}}_{2}^{2} \,dY&\ll X^{2}M\sum _{ m\sim M}a_{\textit{K}}(m)\log ^{2}X\int _{T}^{2T}P_{2}^{2}\Big (\frac{Y}{m}\Big ) \,dY \\&\ll X^{2}M\sum _{ m\sim M}a_{\textit{K}}(m)m\log ^{2}X\int _{T}^{2T}P_{2}^{2}\Big (\frac{Y}{m}\Big )d\Big (\frac{Y}{m}\Big )\\&\ll X^{2}M\sum _{ m\sim M}a_{\textit{K}}(m)m\Big (\frac{T}{m}\Big )^{\frac{5}{3}+\varepsilon }y^{-\frac{1}{3}}\log ^{2}X\\&\ll X^{2}M^{\frac{4}{3}}T^{\frac{5}{3}+\varepsilon }y^{-\frac{1}{3}}\\&\ll X^{\frac{10}{3}}T^{\frac{5}{3}+\varepsilon }y^{-\frac{1}{3}}. \end{aligned} \end{aligned}$$
(4.2)

4.2 B. Evaluation of \(\int _{T}^{2T}{\mathfrak {R}}_{1}^{2} \,dY \)

Noting that

$$\begin{aligned} \begin{aligned} {\mathfrak {R}}_{1}^{2}&=\frac{Y^{\frac{2}{3}}}{3\pi ^{2}} \sum _{1\le m_{1},m_{2}\le X}(m_{1}m_{2})^{\frac{2}{3}}a_{\textit{K}}(m_{1})a_{\textit{K}}(m_{2})\textit{M}_{\textit{K}}\Big (\frac{X}{m_{1}}\Big )\textit{M}_{\textit{K}}\Big (\frac{X}{m_{2}}\Big )\\&\times \sum _{n_{1},n_{2}\le y}\frac{a_{\textit{K}}(n_{1})}{n_{1}^{{2}/{3}}}\frac{a_{\textit{K}}(n_{2})}{n_{2}^{{2}/{3}}}\cos \Big (6\pi \root 3 \of {\frac{n_{1}Y}{m_{1}}}\Big )\cos \Big (6\pi \root 3 \of {\frac{n_{2}Y}{m_{2}}}\Big ) \end{aligned} \end{aligned}$$

and using the elementary formula \( \cos \alpha \cos \beta =\frac{1}{2}\big (\cos (\alpha -\beta )+\cos (\alpha +\beta )\big ), \) we get

$$\begin{aligned} {\mathfrak {R}}_{1}^{2}=Q_{1}(Y)+Q_{2}(Y)+Q_{3}(Y), \end{aligned}$$
(4.3)

where

$$\begin{aligned} \begin{aligned} Q_{1}(Y)&:=\frac{Y^{\frac{2}{3}}}{6\pi ^{2}}\sum _{\begin{array}{c} m_{1},m_{2}\le X;n_{1},n_{2}\le y\\ n_{1}m_{2}=n_{2}m_{1} \end{array}}(m_{1}m_{2})^{\frac{2}{3}}a_{\textit{K}}(m_{1})a_{\textit{K}}(m_{2})\\&\qquad \times \textit{M}_{\textit{K}}\Big (\frac{X}{m_{1}}\Big )\textit{M}_{\textit{K}}\Big (\frac{X}{m_{2}}\Big ) \frac{a_{\textit{K}}(n_{1})}{n_{1}^{{2}/{3}}}\frac{a_{\textit{K}}(n_{2})}{n_{2}^{{2}/{3}}}, \\ Q_{2}(Y)&:=\frac{Y^{\frac{2}{3}}}{6\pi ^{2}}\sum _{\begin{array}{c} m_{1},m_{2}\le X;n_{1},n_{2}\le y\\ n_{1}m_{2}\ne n_{2}m_{1} \end{array}}(m_{1}m_{2})^{\frac{2}{3}}a_{\textit{K}}(m_{1})a_{\textit{K}}(m_{2})\textit{M}_{\textit{K}}\Big (\frac{X}{m_{1}}\Big )\textit{M}_{\textit{K}}\Big (\frac{X}{m_{2}}\Big ) \\&\qquad \times \frac{a_{\textit{K}}(n_{1})}{n_{1}^{{2}/{3}}}\frac{a_{\textit{K}}(n_{2})}{n_{2}^{{2}/{3}}}\cos \Big (6\pi \root 3 \of {Y}\Big (\root 3 \of {\frac{n_{1}}{m_{1}}}-\root 3 \of {\frac{n_{2}}{m_{2}}}\Big )\Big ), \\ Q_{3}(Y)&:=\frac{Y^{\frac{2}{3}}}{6\pi ^{2}}\sum _{\begin{array}{c} m_{1},m_{2}\le X\\ n_{1},n_{2}\le y \end{array}}(m_{1}m_{2})^{\frac{2}{3}}a_{\textit{K}}(m_{1})a_{\textit{K}}(m_{2})\textit{M}_{\textit{K}}\Big (\frac{X}{m_{1}}\Big )\textit{M}_{\textit{K}}\Big (\frac{X}{m_{2}}\Big ) \\&\qquad \times \frac{a_{\textit{K}}(n_{1})}{n_{1}^{{2}/{3}}}\frac{a_{\textit{K}}(n_{2})}{n_{2}^{{2}/{3}}}\cos \Big (6\pi \root 3 \of {Y}\Big (\root 3 \of {\frac{n_{1}}{m_{1}}}+\root 3 \of {\frac{n_{2}}{m_{2}}}\Big )\Big ). \end{aligned} \end{aligned}$$

Firstly, we consider \(Q_{3}(Y)\). By using the first derivative test, (2.3) and the elementary formula \(a+b\ge 2\sqrt{ab}\) (\(a>0, b>0\)), we get

$$\begin{aligned} \begin{aligned} \int _{T}^{2T}Q_{3}(Y) \,dY&\ll T^{\frac{4}{3}}\sum _{\begin{array}{c} m_{1},m_{2}\le X\\ n_{1},n_{2}\le y \end{array}}(m_{1}m_{2})^{\frac{2}{3}}a_{\textit{K}}(m_{1})a_{\textit{K}}(m_{2})\Big |\textit{M}_{\textit{K}}\Big (\frac{X}{m_{1}}\Big )\textit{M}_{\textit{K}}\Big (\frac{X}{m_{2}}\Big )\Big |\\&\quad \times \frac{a_{\textit{K}}(n_{1})}{n_{1}^{{2}/{3}}}\frac{a_{\textit{K}}(n_{2})}{n_{2}^{{2}/{3}}}\times \frac{1}{\root 3 \of {\frac{n_{1}}{m_{1}}}+\root 3 \of {\frac{n_{2}}{m_{2}}}} \\&\ll X^{2}T^{\frac{4}{3}}\sum _{m_{1},m_{2}\le X}\frac{a_{\textit{K}}(m_{1})a_{\textit{K}}(m_{2})}{(m_{1}m_{2})^{{1}/{6}}}\sum _{n_{1},n_{2}\le y}\frac{a_{\textit{K}}(n_{1})}{n_{1}^{{5}/{6}}}\frac{a_{\textit{K}}(n_{2})}{n_{2}^{{5}/{6}}} \\ {}&\ll X^{\frac{11}{3}}T^{\frac{4}{3}}y^{\frac{1}{3}}, \end{aligned} \end{aligned}$$
(4.4)

where in the last step we used (1.6) and a summation by parts.

Secondly, we consider \(Q_{2}(Y)\). By the first derivative test and (2.3) again, together with Lemma 2.11, we get that

$$\begin{aligned}&\int _{T}^{2T}Q_{2}(Y) \,dY \nonumber \\&\quad \ll T^{\frac{4}{3}}\sum _{\begin{array}{c} m_{1},m_{2}\le X;n_{1},n_{2}\le y\\ n_{1}m_{2}\ne n_{2}m_{1} \end{array}}(m_{1}m_{2})^{\frac{2}{3}}a_{\textit{K}}(m_{1})a_{\textit{K}}(m_{2})\Big |\textit{M}_{\textit{K}}\Big (\frac{X}{m_{1}}\Big )\textit{M}_{\textit{K}}\Big (\frac{X}{m_{2}}\Big )\Big |\nonumber \\&\quad \quad \times \frac{a_{\textit{K}}(n_{1})}{n_{1}^{{2}/{3}}}\frac{a_{\textit{K}}(n_{2})}{n_{2}^{{2}/{3}}}\times \frac{1}{\Big |\root 3 \of {\frac{n_{1}}{m_{1}}}-\root 3 \of {\frac{n_{2}}{m_{2}}}\Big |} \nonumber \\&\quad \ll X^{2}T^{\frac{4}{3}} \sum _{\begin{array}{c} m_{1},m_{2}\le X;n_{1},n_{2}\le y\\ n_{1}m_{2}\ne n_{2}m_{1} \end{array}} \frac{a_{\textit{K}}(m_{1})a_{\textit{K}}(m_{2})a_{\textit{K}}(n_{1})a_{\textit{K}}(n_{2})}{(n_{1}n_{2})^{{2}/{3}}|\root 3 \of {n_{1}m_{2}}-\root 3 \of {n_{2}m_{1}}|} \nonumber \\ {}&\quad \ll X^{\frac{10}{3}}T^{\frac{4}{3}}\sum _{\begin{array}{c} m_{1},m_{2}\le X;n_{1},n_{2}\le y\\ n_{1}m_{2}\ne n_{2}m_{1} \end{array}} \frac{a_{\textit{K}}(m_{1})a_{\textit{K}}(m_{2})a_{\textit{K}}(n_{1})a_{\textit{K}}(n_{2})}{(m_{1}m_{2})^{{2}/{3}}(n_{1}n_{2})^{{2}/{3}}|\root 3 \of {n_{1}m_{2}}-\root 3 \of {n_{2}m_{1}}|} \nonumber \\ {}&\quad \ll X^{\frac{10}{3}}T^{\frac{4}{3}}\sum _{\begin{array}{c} l_{1},l_{2}\le Xy\\ l_{1}\ne l_{2} \end{array}}\frac{\tau _{4}^{2}(l_{1})\tau _{4}^{2}(l_{2})}{l_{1}^{2/3}l_{2}^{2/3}|\root 3 \of {l_{1}}-\root 3 \of {l_{2}}|}\nonumber \\&\quad \ll T^{\frac{4}{3}}X^{\frac{10}{3}}(Xy)^{\frac{1}{3}+\varepsilon }\nonumber \\&\quad \ll X^{\frac{11}{3}}T^{\frac{4}{3}+\varepsilon }y^{\frac{1}{3}}, \end{aligned}$$
(4.5)

where we used the estimate \(a_{\textit{K}}(m)a_{\textit{K}}(n)\le \tau ^{2}(m)\tau ^{2}(n)\le \tau _{4}^{2}(mn)\).

Finally, we consider \(Q_{1}(Y)\). Let \(m=\gcd ( m_{1},m_{2})\) and write \(m_{1}=mm_{1}^{*},~ m_{2}=mm_{2}^{*}\), so that \(\gcd ( m_{1}^{*}, m_{2}^{*})=1\). If \(n_{1}m_{2}=n_{2}m_{1}\), we immediately get that \(n_{1}=nm_{1}^{*}, ~n_{2}=nm_{2}^{*}\) for some positive integer n. It follows that

$$\begin{aligned} \begin{aligned} Q_{1}(Y)&=\frac{Y^{\frac{2}{3}}}{6\pi ^{2}}\sum _{\begin{array}{c} mm_{1},mm_{2}\le X\\ \gcd ( m_{1},m_{2})=1 \end{array}}m^{4/3}a_{\textit{K}}(mm_{1})a_{\textit{K}}(mm_{2})\textit{M}_{\textit{K}}\Big ( \frac{X}{mm_{1}}\Big )\textit{M}_{\textit{K}}\Big ({\frac{X}{mm_{2}}}\Big ) \\ {}&\quad \times \sum _{n\le \min (\frac{y}{m_{1}},\frac{y}{m_{2}})}\frac{a_{\textit{K}}(nm_{1})a_{\textit{K}}(nm_{2})}{n^{4/3}}\\&=c(X)Y^{\frac{2}{3}}+E(Y), \end{aligned} \end{aligned}$$
(4.6)

where

$$\begin{aligned} \begin{aligned} c(X)=&\,\frac{1}{6\pi ^{2}}\sum _{\begin{array}{c} mm_{1},mm_{2}\le X\\ \gcd ( m_{1},m_{2})=1 \end{array}}m^{4/3}a_{\textit{K}}(mm_{1})a_{\textit{K}}(mm_{2})\textit{M}_{\textit{K}}\Big (\frac{X}{mm_{1}}\Big )\textit{M}_{\textit{K}}\Big (\frac{X}{mm_{2}}\Big ) \\ {}&\times \sum _{n=1}^{\infty }\frac{a_{\textit{K}}(nm_{1})a_{\textit{K}}(nm_{2})}{n^{4/3}},\\ E(Y)=&\,\frac{Y^{\frac{2}{3}}}{6\pi ^{2}}\sum _{\begin{array}{c} mm_{1},mm_{2}\le X\\ \gcd ( m_{1},m_{2})=1 \end{array}}m^{4/3}a_{\textit{K}}(mm_{1})a_{\textit{K}}(mm_{2})\textit{M}_{\textit{K}}\Big (\frac{X}{mm_{1}}\Big )\textit{M}_{\textit{K}}\Big (\frac{X}{mm_{2}}\Big ) \\ {}&\times \sum _{n> \min (\frac{y}{m_{1}},\frac{y}{m_{2}})}\frac{a_{\textit{K}}(nm_{1})a_{\textit{K}}(nm_{2})}{n^{4/3}}. \end{aligned} \end{aligned}$$
(4.7)

Noting that \(a_{\textit{K}}(mn)\le \tau ^{2}(mn)\le \tau ^{2}(m)\tau ^{2}(n)\), we get that

$$\begin{aligned} E(Y)&\ll Y^{\frac{2}{3}}\sum _{\begin{array}{c} mm_{1},mm_{2}\le X\\ \gcd ( m_{1},m_{2})=1 \end{array}}m^{4/3}a_{\textit{K}}(mm_{1})a_{\textit{K}}(mm_{2})\textit{M}_{\textit{K}}\Big (\frac{X}{mm_{1}}\Big )\textit{M}_{\textit{K}}\Big (\frac{X}{mm_{2}}\Big ) \nonumber \\ {}&\quad \times \sum _{n> \min (\frac{y}{m_{1}},\frac{y}{m_{2}})}\frac{a_{\textit{K}}(nm_{1})a_{\textit{K}}(nm_{2})}{n^{4/3}}\nonumber \\&\quad \ll X^{2}Y^{\frac{2}{3}}\sum _{m\le X}\frac{\tau ^{4}(m)}{m^{2/3}}\sum _{\begin{array}{c} m_{1}\le \frac{X}{m},m_{2}\le \frac{X}{m}\\ \gcd ( m_{1},m_{2})=1 \end{array}}\frac{\tau ^{4}(m_{1})\tau ^{4}(m_{2})}{m_{1}m_{2}}\sum _{n>\min (\frac{y}{m_{1}},\frac{y}{m_{2}})}\frac{\tau ^{4}(n)}{n^{4/3}} \nonumber \\ {}&\quad \ll X^{2}Y^{\frac{2}{3}}\sum _{m\le X}\frac{\tau ^{4}(m)}{m^{2/3}}\sum _{m_{1}\le m_{2}\le \frac{X}{m}}\frac{\tau ^{4}(m_{1})\tau ^{4}(m_{2})}{m_{1}m_{2}} \times \Big (\frac{m_{2}}{y}\Big )^{1/3-\varepsilon }\nonumber \\&\quad \ll X^{2}Y^{\frac{2}{3}}{y}^{\varepsilon -\frac{1}{3}}\sum _{m\le X}\frac{\tau ^{4}(m)}{m^{2/3}}\sum _{m_{2}\le \frac{X}{m}}\frac{\tau ^{4}(m_{2})}{m_{2}^{2/3+\varepsilon }}\sum _{m_{1}\le m_{2}}\frac{\tau ^{4}(m_{1})}{m_{1}}\nonumber \\&\quad \ll X^{\frac{7}{3}}T^{\frac{2}{3}+\varepsilon }y^{-\frac{1}{3}}. \end{aligned}$$
(4.8)

This together with (4.6) yields

$$\begin{aligned} \int _{T}^{2T}Q_{1}(Y) \,dY =c(X)\int _{T}^{2T}Y^{\frac{2}{3}} \,dY +O(X^{\frac{7}{3}}T^{\frac{5}{3}+\varepsilon }y^{-\frac{1}{3}}). \end{aligned}$$
(4.9)

Similarly to (4.8), we obtain the estimate

$$\begin{aligned} c(X)\ll X^{\frac{7}{3}+\varepsilon }. \end{aligned}$$
(4.10)

From (4.3)-(4.5) and (4.9), we get

$$\begin{aligned} \int _{T}^{2T}{\mathfrak {R}}_{1}^{2} \,dY =c(X)\int _{T}^{2T}Y^{\frac{2}{3}} \,dY +O(X^{\frac{7}{3}}T^{\frac{5}{3}+\varepsilon }y^{-\frac{1}{3}}+ X^{\frac{11}{3}}T^{\frac{4}{3}+\varepsilon }y^{\frac{1}{3}}). \end{aligned}$$
(4.11)

4.3 C. Evaluation of \(\int _{T}^{2T}{\mathfrak {R}}^{2} \,dY \)

From (4.2), (4.10), (4.11) and Cauchy’s inequality, we get

$$\begin{aligned} \begin{aligned} \int _{T}^{2T}{\mathfrak {R}}_{1}{\mathfrak {R}}_{2} \,dY \ll X^{\frac{17}{6}}T^{\frac{5}{3}+\varepsilon }y^{-\frac{1}{6}} +X^{\frac{7}{2}}T^{\frac{3}{2}+\varepsilon }. \end{aligned} \end{aligned}$$
(4.12)

Combining (4.1), (4.2), (4.11) and (4.12), we finally get

$$\begin{aligned} \begin{aligned} \int _{T}^{2T}{\mathfrak {R}}^{2} \,dY&= c(X)\int _{T}^{2T}Y^{\frac{2}{3}} \,dY + O(X^{\frac{11}{3}}T^{\frac{4}{3}+\varepsilon }y^{\frac{1}{3}}+X^{\frac{10}{3}}T^{\frac{5}{3}+\varepsilon }y^{-\frac{1}{3}}\\&\quad +X^{\frac{17}{6}}T^{\frac{5}{3}+\varepsilon }y^{-\frac{1}{6}}+X^{\frac{7}{2}}T^{\frac{3}{2}+\varepsilon }). \end{aligned} \end{aligned}$$
(4.13)
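The two error terms in Theorem 1.2 come from the terms \(X^{\frac{10}{3}}T^{\frac{5}{3}+\varepsilon }y^{-\frac{1}{3}}\) and \(X^{\frac{17}{6}}T^{\frac{5}{3}+\varepsilon }y^{-\frac{1}{6}}\) in (4.13), evaluated at the endpoint \(y=(T/X)^{1/3}\), since

$$\begin{aligned} X^{\frac{10}{3}}T^{\frac{5}{3}}\Big (\frac{T}{X}\Big )^{-\frac{1}{9}}=X^{\frac{31}{9}}T^{\frac{14}{9}},\qquad X^{\frac{17}{6}}T^{\frac{5}{3}}\Big (\frac{T}{X}\Big )^{-\frac{1}{18}}=X^{\frac{26}{9}}T^{\frac{29}{18}}, \end{aligned}$$

while all other terms produced by Lemma 2.9 are dominated by these two because \(X\le T\).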

By choosing a best \(y\in (1,(T/X)^{1/3})\) via Lemma 2.9, we get

$$\begin{aligned} \int _{T}^{2T}|{\mathfrak {R}}_{\textit{K}}(X,Y)|^{2} \,dY = c(X)\int _{T}^{2T}Y^{\frac{2}{3}} \,dY + O(X^{\frac{31}{9}}T^{\frac{14}{9}+\varepsilon }+X^{\frac{26}{9}}T^{\frac{29}{18}+\varepsilon }) , \end{aligned}$$

where c(X) is defined by (4.7). This completes the proof of Theorem 1.2.