Abstract
We study the fourth moment of quadratic Dirichlet L-functions at \(s= \frac{1}{2}\). We show an asymptotic formula under the generalized Riemann hypothesis, and obtain a precise lower bound unconditionally. The proofs of these results follow closely arguments of Soundararajan and Young (J Eur Math Soc 12(5):1097–1116, 2010) and Soundararajan (Ann Math (2) 152(2):447–488, 2000).
1 Introduction
Let \(\chi _d = \left( \frac{d}{\cdot } \right) \) be a real primitive Dirichlet character modulo d given by the Kronecker symbol, where d is a fundamental discriminant. The k-th moment of quadratic Dirichlet L-functions is
where the sum is restricted to fundamental discriminants, and k is a positive real number. A major motivation for studying (1.1) comes from Chowla’s conjecture, which states that \(L(\tfrac{1}{2},\chi _d) \ne 0\) for all fundamental discriminants d. The current best result toward this conjecture is Soundararajan’s celebrated work [17] in 2000, where it was proven that \(L(\frac{1}{2},\chi _{8d}) \ne 0\) for at least \(87.5 \%\) of the odd square-free integers \(d\ge 0\). The key to the proof is the evaluation of mollified first and second moments of quadratic Dirichlet L-functions.
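The real character \(\chi _d = \left( \frac{d}{\cdot } \right) \) above can be evaluated with the classical Kronecker-symbol algorithm (factor out 2, then apply quadratic reciprocity). A minimal sketch in Python; the function name `kronecker` is ours and is only meant as an illustration:

```python
def kronecker(a, n):
    """Kronecker symbol (a/n) for n >= 1."""
    t = 1
    # factor the even part out of n, using (a/2) = 0 for even a,
    # +1 for a = +-1 (mod 8), and -1 for a = +-3 (mod 8)
    while n % 2 == 0:
        n //= 2
        if a % 2 == 0:
            return 0
        if a % 8 in (3, 5):
            t = -t
    # Jacobi symbol for odd n, via quadratic reciprocity
    a %= n
    while a != 0:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                t = -t
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            t = -t
        a %= n
    return t if n == 1 else 0
```

For the fundamental discriminant \(d = 5\), this produces a real primitive character of period 5, as the definition of \(\chi _d\) requires.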
In 2000, using a random matrix model, Keating and Snaith [13] conjectured that for any positive real number k,
where \(C_k\) are explicit constants. Various researchers have studied versions of these moments summed over certain subsets of the fundamental discriminants. For instance, in (1.1) we consider positive fundamental discriminants, though there is no difficulty in studying negative fundamental discriminants as well. Some articles consider characters of the form \(\chi _{8d}\), where d are odd positive square-free integers. The main reason researchers study these special cases, rather than all fundamental discriminants, is to focus on the methods and techniques. It is possible to establish results for all fundamental discriminants, but this would involve more cases to analyze. The conjecture analogous to (1.2) for characters of the form \(\chi _{8d}\), which can be established by Keating and Snaith’s method [13], was obtained in Andrade and Keating’s paper [2, Conjecture 2]. For any positive real number k, it was conjectured that
where the sum is restricted to square-free integers, G(z) is the Barnes G-function, and
In this paper, we prove the conjecture in (1.3) for \(k=4\) assuming the generalized Riemann hypothesis (GRH).
Theorem 1.1
Assume GRH for \(L(s,\chi _d)\) for all fundamental discriminants d. For any \(\varepsilon >0,\) we have
The proof of Theorem 1.1 largely follows Soundararajan and Young’s paper [19] in 2010 and Soundararajan’s paper [17] in 2000. In [19], Soundararajan and Young proved an asymptotic formula for the second moment of quadratic twists of a modular L-function, obtaining the leading main term. Experts believed that the methods and techniques in [19] could be used to evaluate the fourth moment of quadratic Dirichlet L-functions. Motivated by this expectation, we established Theorem 1.1. In fact, Theorem 1.1 may be viewed as a version of [19, Theorem 1.2] where f is an Eisenstein series. The main difference between this article and [19] is that the off-diagonal terms (see just after (3.11) for a precise definition) contribute to the main term, whereas in [19] they are part of the error term. We use techniques from [17, Sections 5.2, 5.3] to evaluate the off-diagonal terms, and this is the main new input. These terms may be written as a certain multiple complex integral. One of the difficulties in evaluating this integral is that the integrand has poles of high order, which makes the calculation more intricate. It should be noted that in 2017 Florea [7] proved an asymptotic formula for the fourth moment of quadratic Dirichlet L-functions in the function field setting, with extra lower main terms.
Similar to [19, Theorem 1.1], we obtain an unconditional lower bound that matches the conjectured asymptotic formula (1.3). This result was stated without proof by Rudnick and Soundararajan [15] in 2006.
Theorem 1.2
Unconditionally, we have
We now introduce more refined conjectures for the moments of quadratic Dirichlet L-functions and provide a brief history of related results. In 2005, Conrey et al. [4] gave a more precise conjecture, including all other principal lower order terms,
where k is a positive integer, \(P_n(x)\) is an explicit polynomial of degree n, and \(E_k(X)= o_k (X)\). For characters of the form \(\chi _{8d}\), their conjecture may be written as
where \( Q_n(x)\) is another explicit polynomial of degree n, and \(\hat{E}_k(X) = o_k (X)\).
In 1981, Jutila [11] established (1.5) for \(k=1\) with \(E_1(X)= O(X^{\frac{3}{4}+\varepsilon })\). In 1985, Goldfeld and Hoffstein [8] improved this to \(E_1(X)= O(X^{\frac{19}{32}+\varepsilon })\) by using multiple Dirichlet series. Their work implies the error \(O(X^{\frac{1}{2}+\varepsilon })\) for a smoothed version of the sum in (1.5) when \(k=1\). This error term was also obtained by Young [20] in 2009, using a different technique based on a recursive method and a study of shifted moments. We remark that Alderson and Rubinstein [1] conjectured that \(E_1(X) = O(X^{\frac{1}{4}+\varepsilon })\). In 1981, the second moment was also established by Jutila [11],
In 2000, Soundararajan [17] improved this by obtaining the full main term in (1.6), in the case \(k=2\), with the power savings \(\hat{E}_2(X) = O (X^{\frac{5}{6}+\varepsilon })\). In 2020, Sono [16] improved this to \( O (X^{\frac{1}{2}+\varepsilon })\) for a smoothed variant of \(\hat{E}_2(X)\). In [17] Soundararajan was the first to prove an asymptotic for the third moment, obtaining \(\hat{E}_3(X) =O(X^{\frac{11}{12}+\varepsilon })\). In 2003, Diaconu et al. [5] improved this to \(E_3(X) = O(X^{0.85 \dots +\varepsilon })\) by using multiple Dirichlet series techniques. In 2013, Young [21] further improved this to \( O(X^{\frac{3}{4}+\varepsilon })\) for a smoothed version of \(\hat{E}_3(X)\) by using similar techniques to [20]. Recently, in 2018, Diaconu and Whitehead [6] improved Young’s result by showing that a smoothed version of \(\hat{E}_3(X)\) is of size \(cX^{\frac{3}{4}}+O(X^{\frac{2}{3} +\varepsilon })\), for some \(c \in {\mathbb {R}}\). This verified a conjecture of Diaconu et al. [5] of the presence of a secondary lower order term. Zhang [22] had previously conditionally established a secondary term of size \(X^{\frac{3}{4}}\) in 2005.
For the family of quadratic Dirichlet L-functions, moments higher than four have not been asymptotically evaluated. This seems beyond current techniques. However, there are celebrated results on upper and lower bounds of the moments. In 2006, Rudnick and Soundararajan [15] proved the lower bound
for all even natural numbers \(k \ge 1\). In 2009, Soundararajan [18] proved under GRH that for all positive real k,
In 2013, Harper [9], assuming GRH, improved this to
The method of this paper is largely based on the arguments and techniques in [17, 19]. We use the approximate functional equation for Dirichlet L-functions, and then employ the Poisson summation formula to separate the summation into diagonal terms, off-diagonal terms, and error terms. Both diagonal and off-diagonal terms contribute to the main term. To bound the error terms, following the arguments in [18, 19], we establish under GRH an upper bound for the shifted moments of quadratic Dirichlet L-functions (see Theorem 2.4).
With further effort, one might be able to heuristically obtain all the main terms that are expected from the conjecture of Conrey et al. in (1.6). However, the computation will be complicated. It might be simplified by considering a shifted version of the fourth moment, analogous to the calculation in [20]. Florea considered the function field version of the fourth moment in [7]. In her work she was able to identify all the main terms as given by a conjecture of Andrade–Keating [2, Conjecture 5] (the function field analogue of (1.6)). By using a recursive method, Florea obtained extra lower main terms in this case. It is possible that her techniques may be employed to obtain additional lower main terms in Theorem 1.1 and we hope to revisit this in future work. However, one would need to apply the approximate functional equation for the fourth power of the L-function rather than the second power (2.3). In addition, one would have to eliminate the use of the parameters \(U_1,U_2\) in (3.3). In our article, we use the approximate functional equation for the second power of the L-function as it is necessary to obtain the unconditional lower bound in Theorem 1.2.
The outline of this paper is as follows. The proofs of Theorems 1.1 and 1.2 proceed simultaneously. In Sect. 2, we introduce some tools. In Sect. 3, we set up the evaluation of the fourth moment. We apply the Poisson summation formula to split the fourth moment into diagonal, off-diagonal, and error terms. We evaluate the diagonal terms and off-diagonal terms in Sects. 4 and 5, respectively. The error terms are bounded in Sect. 6. The proofs of Theorems 1.1 and 1.2 are completed in Sect. 7. Finally, we give the proof of Theorem 2.4 in Sect. 8.
Notation In this paper, we shall use the convention that \(\varepsilon >0\) denotes an arbitrarily small constant which may vary from one occurrence to the next. For two functions f(x) and g(x), we shall use the notation \(f(x) = O(g(x))\), \(f(x) \ll g(x)\) to mean that there exists a constant C such that \(|f(x)| \le C|g(x)|\) for all sufficiently large x. If we write \(f(x) = O_a(g(x))\) or \(f(x) \ll _a g(x)\), then we mean that the corresponding constant depends on a. Throughout the paper, the big O may depend on \(\varepsilon \).
2 Basic tools
In this section, we introduce several tools that shall be used in this article.
2.1 Approximate functional equation
For \(\xi > 0\), define
where
Here, and henceforth, \(\int _{(c)}\) stands for \(\int _{c-i \infty }^{c+ i \infty }\). It can be shown (see [17, Lemma 2.1]) that \(w(\xi )\) is real-valued and smooth on \((0,+\infty )\), bounded as \(\xi \rightarrow 0^+\), and decays exponentially as \(\xi \rightarrow + \infty \). Define
where \(\tau (n)\) is the number of divisors of n. It was proved in [17, Lemma 2.2] that for odd, positive, square-free integers d,
2.2 Poisson summation formula
The following lemma is [19, Lemma 2.2].
Lemma 2.1
Let \(\Phi \) be a smooth function with compact support on the positive real numbers, and suppose that n is an odd integer. Then
where
and
is a Fourier-type transform of \(\Phi \).
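Lemma 2.1 is a quadratic-character-twisted variant of the classical Poisson summation formula \(\sum _n f(n) = \sum _k \hat{f}(k)\). As a quick numerical sanity check of the classical identity (not of the lemma itself), one can use a dilated Gaussian, whose Fourier transform is explicit; the helper names below are ours:

```python
import math

# f(x) = exp(-pi (x/a)^2) has Fourier transform  hat f(k) = a exp(-pi (a k)^2),
# so both sides of Poisson summation can be computed directly
def theta_direct(a, terms=60):
    return sum(math.exp(-math.pi * (n / a) ** 2) for n in range(-terms, terms + 1))

def theta_dual(a, terms=60):
    return a * sum(math.exp(-math.pi * (a * k) ** 2) for k in range(-terms, terms + 1))
```

For any \(a > 0\) the two truncated sums agree to machine precision; Lemma 2.1 is the analogous identity for sums twisted by the quadratic symbol and restricted to a progression.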
The precise values of the Gauss-type sum \(G_k(n)\) have been calculated in [17, Lemma 2.3] as follows.
Lemma 2.2
If m and n are relatively prime odd integers, then \(G_k(mn)=G_k(m)G_k(n)\). Moreover, if \(p^{\alpha }\) is the largest power of p dividing k (setting \(\alpha =\infty \) if \(k=0),\) then
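Lemma 2.2 lends itself to a brute-force numerical check. Following [17], for odd n the sum is \(G_k(n) = \left(\frac{1-i}{2} + \left(\frac{-1}{n}\right)\frac{1+i}{2}\right) \sum _{a \,(\mathrm{mod}\, n)} \left(\frac{a}{n}\right) e\left(\frac{ak}{n}\right)\); we restate this only for the sketch below, since the display (2.4) is not reproduced here. The code verifies the multiplicativity \(G_k(mn)=G_k(m)G_k(n)\), together with the fact used in Sect. 4 that \(G_0(n)=\phi (n)\) for square n:

```python
import cmath
import math

def jacobi(a, n):
    # Jacobi symbol (a/n) for odd n >= 1
    a %= n
    t = 1
    while a != 0:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                t = -t
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            t = -t
        a %= n
    return t if n == 1 else 0

def G(k, n):
    # Gauss-type sum for odd n, following the definition in [17]
    eps = 1 if n % 4 == 1 else -1              # the value of (-1/n)
    pre = (1 - 1j) / 2 + eps * (1 + 1j) / 2
    return pre * sum(jacobi(a, n) * cmath.exp(2j * math.pi * a * k / n)
                     for a in range(n))

def phi(n):
    # Euler's totient, by brute force
    return sum(1 for a in range(1, n + 1) if math.gcd(a, n) == 1)
```

For instance, \(G_k(15) = G_k(3)G_k(5)\) up to floating-point error, for any k.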
2.3 Smooth function
Let \(\Phi \) be a smooth Schwarz class function that is compactly supported on \([\frac{1}{2}, \frac{5}{2}]\), and \(0 \le \Phi (t) \le 1\) for all t. For any integer \(\nu \ge 0\), define
For any \(s \in {\mathbb {C}}\), define
Note that \(\check{\Phi }(s)\) is a holomorphic function of s. Integrating by parts \(\nu \) times gives us
Hence, for \({{\text {Re}}}(s)<1\), we see that
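One concrete choice of such a \(\Phi \) (ours, purely for illustration; any function with the stated properties works) is the standard \(C^\infty \) bump built from \(e^{-1/x}\):

```python
import math

def bump(t):
    """A C-infinity bump supported on [1/2, 5/2] with values in [0, 1]."""
    if t <= 0.5 or t >= 2.5:
        return 0.0
    # each factor lies in (0, 1) and vanishes to infinite order
    # at the corresponding endpoint
    return math.exp(-1.0 / (t - 0.5)) * math.exp(-1.0 / (2.5 - t))
```

Smooth cutoffs that are identically 1 on a subinterval, as required later in Sect. 7, can be obtained by normalizing and convolving such bumps.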
2.4 Some lemmas
The following lemma is the sharpest upper bound to date for the fourth moment of quadratic Dirichlet L-functions, due to Heath-Brown [10, Theorem 2].
Lemma 2.3
Suppose \(\sigma +it\) is a complex number with \(\sigma \ge \frac{1}{2}\). Then
Assuming GRH, the bound in Lemma 2.3 can be improved by the following theorem.
Theorem 2.4
Assume GRH for \(L(s,\chi _d)\) for all fundamental discriminants d. Let \(z_1,z_2 \in {\mathbb {C}}\) with \(0 \le {{\text {Re}}}(z_1), {{\text {Re}}}(z_2) \le \frac{1}{\log X},\) and \(|{{\text {Im}}}(z_1)|, |{{\text {Im}}}(z_2)| \le X\). Then
Theorem 2.4 is similar to [19, Corollary 5.1]. Indeed, its proof closely follows the proof of [19, Corollary 5.1] and the argument in [18, Section 4]. Analogous results to Theorem 2.4 were obtained by Chandee [3, Theorem 1.1] for the moments of the Riemann zeta function, and by Munsch [14, Theorem 1.1] for the moments of Dirichlet L-functions modulo q. The proof of Theorem 2.4 is postponed to Sect. 8.
We remark that Lemma 2.3 is used to bound the error terms in the proof of Theorem 1.2, while both Lemma 2.3 and Theorem 2.4 are needed to bound the error terms in the proof of Theorem 1.1.
3 Setup of the problem
Let \(\Phi \) be a smooth function as described in Sect. 2.3. We consider the following smoothed version of the fourth moment
Using the approximate functional equation (2.3), we have
where
Let \(X^{\frac{9}{10}} \le U_1 \le U_2 \le X \) be two parameters that will be chosen later. Define
We remark that (3.1) is approximately equal to (3.3) by choosing appropriate values for \(U_1\) and \(U_2\). This will be explained in Sect. 7.
Combining (3.2) and (3.3), we obtain that
where
Using the Möbius inversion to remove the square-free condition in (3.4) gives
In the above, we let \(S_1\) denote the terms with \(a \le Y\), where Y is a parameter that satisfies \(Y \le X\). The value of Y will be chosen later. Also, we let \(S_2\) denote the terms with \(a > Y \). The terms \(S_1\) contribute to the main term. We will discuss \(S_1\) in Sects. 4–6. The terms \(S_2\) contribute to the error term by the following lemma.
Lemma 3.1
Unconditionally, we have \(S_2 \ll X^{1+\varepsilon }Y^{-1}.\) Under GRH, we have \(S_2 \ll XY^{-1} \log ^{44}X.\)
Proof
Write \(d=lb^2\), where l is square-free and b is positive. Grouping terms in \(S_2\) according to \(c=ab\), we deduce that
where for \( {{\text {Re}}}(s)>1\), \(L_c(s,\chi )\) is given by the Euler product of \(L(s,\chi )\) with the factors at primes dividing c omitted. The last equation follows from the definition of h(x, y, z) in (3.5). Moving the lines of the integral to \({{\text {Re}}}(u) = {{\text {Re}}}(v) = \frac{1}{\log X}\), the double integral above is bounded by
Here we use the inequalities \(2ab \le a^2 + b^2\) and \( |L_c(\tfrac{1}{2}+u,\chi _{8l})| \le \tau (c) | L(\tfrac{1}{2}+u,\chi _{8l})|. \)
By Theorem 2.4, we see that for \(|{{\text {Im}}}(u)| \le \frac{X}{c^2}\),
Also, by Lemma 2.3, we get that
Substituting both (3.9) and (3.10) in (3.8), we can bound (3.8) by
Together with (3.7), this yields
This completes the proof of the conditional part of the lemma. The unconditional part follows similarly by substituting (3.10) in (3.8). \(\square \)
Now we consider \(S_1\). Using the Poisson summation formula (see Lemma 2.1) for the sum over d in \(S_1\), we obtain that
Let \(S_1(k=0)\) denote the sum above over \(k=0\), which are called diagonal terms. Let \(S_1(k\ne 0)\) denote the sum over \(k\ne 0\). Write \(S_1(k\ne 0) = S_1(k= \Box ) + S_1(k\ne \Box )\), where \(S_1(k= \Box )\) denotes the terms with square k, and \(S_1(k \ne \Box )\) denotes the remaining terms. We call \(S_1(k= \Box ) \) off-diagonal terms. We will discuss \(S_1(k=0)\), \(S_1(k= \Box ) \), and \(S_1(k\ne \Box )\) in Sects. 4–6, respectively.
4 Evaluation of \(S_1(k=0)\)
In this section, we shall extract one main term of \(S_1\) from \(S_1(k=0)\). The argument here is similar to [19, Section 3.2].
It follows from the definition of \(G_k(n)\) in (2.4) that \(G_0(n) = \phi (n)\) if \(n=\Box \), and \(G_0 (n) = 0\) otherwise. By this fact and (3.11), we see that
Observe that
Inserting this into (4.1), combined with
we obtain that
Now we simplify the error term above. Recall that \(w(\xi )\) is bounded as \(\xi \rightarrow 0^+\) and decays exponentially as \(\xi \rightarrow + \infty \). It follows that
The last inequality follows by separating the sum into two parts corresponding to whether \(n_1, n_2 \le U_1U_2\). Combining (4.2) and (4.3), we have
Recall h(x, y, z) from (3.5) and \(\omega (\xi )\) from (2.1). We have
Lemma 4.1
For \({{\text {Re}}}(\alpha ),{{\text {Re}}}(\beta ) > \frac{1}{2},\) we have
where \(Z_1(\alpha ,\beta )\) is defined by
Here
and for \(p \not \mid 2,\)
Furthermore, \(Z_1(\alpha ,\beta )\) is analytic and uniformly bounded in the region \({{\text {Re}}}(\alpha ),{{\text {Re}}}(\beta ) \ge \frac{1}{4}+\varepsilon \).
Proof
We have
Note that
Thus,
Then (4.5) follows by comparing Euler factors on both sides. The remaining part of the lemma follows directly from the definition of \(Z_1(\alpha ,\beta )\). \(\square \)
It follows from (4.4) and Lemma 4.1 that
The double integral in (4.6) can be written as
where
Clearly, \({\mathcal {E}}\) is analytic for \({{\text {Re}}}(u),{{\text {Re}}}(v)\ge -\frac{1}{4} + \varepsilon \).
Now move the lines of the integral above to \({{\text {Re}}}(u) = {{\text {Re}}}(v) = \frac{1}{10}\) without encountering any poles. Next move the line of the integral over v to \({{\text {Re}}}(v) = -\frac{1}{5}\). We may encounter two poles, each of order at most 4, at \(v=0\) and \(v=-u\). Thus,
The integral of the residue at \(v=-u\) in (4.7) will contribute to an error term. In fact, we have
where \( {\mathcal {E}}^{(i,j)}(u, v) := \frac{\partial ^{i+j}{\mathcal {E}}}{\partial u^i \partial v^j} (u,v)\). It follows that
It remains to consider the integral of the residue at \(v=0\) in (4.7). Note that
Moving the line of the integral below from \({{\text {Re}}}(u)= \frac{1}{10}\) to \({{\text {Re}}}(u)= -\frac{1}{10}\), encountering a pole at \(u=0\), we see that
Combining (4.6)–(4.9), we obtain that
where \(\tilde{\Phi }(s)\) is defined in (5.3).
Now we compute \({\mathcal {E}}(0,0) \) above. Clearly, \({\mathcal {E}}(0,0) = Z_1(\frac{1}{2},\frac{1}{2}) \). By the definition of \(Z_1(u,v)\) in Lemma 4.1, it follows that
On the other hand, recalling the definition of \(a_4\) from (1.4), we have
Comparing (4.11) with (4.12), we conclude \( Z_1(\frac{1}{2},\frac{1}{2}) = 4 a_4\), which implies \({\mathcal {E}}(0,0) = 4 a_4\). Together with (4.10), it follows that
Lemma 4.2
We have
5 Evaluation of \(S_1(k=\Box )\)
In this section, we compute another part of the main term of \(S_1\) which arises from \(S_1(k=\square )\). Many of the techniques used here are from Sections 5.2, 5.3 of [17].
Recall from (3.11) that
To proceed, we need the following lemma.
Lemma 5.1
Let f(x) be a smooth function on \({\mathbb {R}}_{> 0}.\) Suppose f decays rapidly as \(x \rightarrow \infty ,\) and \(f^{(n)}(x)\) converges as \(x \rightarrow 0^+\) for every \(n\in \mathbb {Z}_{\ge 0}.\) Then we have
where \(\tilde{f}\) is the Mellin transform of f defined by
In addition, Eq. (5.2) remains valid when \(\cos \) is replaced by \(\sin .\)
Proof
See [19, Section 3.3]. \(\square \)
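The Mellin transform \(\tilde{f}\) in Lemma 5.1 is easy to sanity-check numerically: for \(f(x) = e^{-x}\) one has \(\tilde{f}(s) = \Gamma (s)\). A crude quadrature sketch (the helper `mellin` and its parameters are our own):

```python
import math

def mellin(f, s, lo=1e-9, hi=60.0, steps=200_000):
    """Trapezoidal approximation of  integral_0^infty f(x) x^{s-1} dx
    for real s; crude, but enough for a sanity check."""
    h = (hi - lo) / steps
    total = 0.5 * (f(lo) * lo ** (s - 1) + f(hi) * hi ** (s - 1))
    for i in range(1, steps):
        x = lo + i * h
        total += f(x) * x ** (s - 1)
    return h * total

# for f(x) = e^{-x}, the Mellin transform is Gamma(s)
approx = mellin(lambda x: math.exp(-x), 2.5)
```

Comparing `approx` with `math.gamma(2.5)` shows agreement to several decimal places, illustrating the transform appearing in Lemma 5.1.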
Taking \(f(x) = h(xX, n_1, n_2)\) in Lemma 5.1, we have
where
Recall from (3.5) the definition of h(x, y, z). The above contour integral is
where
Move the lines of the triple integral to \({{\text {Re}}}(s) = \frac{1}{2}+\varepsilon \), \({{\text {Re}}}(u)={{\text {Re}}}(v)=\frac{1}{2}+2\varepsilon \), and change the variables \(u'=u-s\), \(v'=v-s\). We obtain that
Substituting this in (5.1), we get that
Lemma 5.2
Write \(4k = k_1k_2^2,\) where \(k_1\) is a fundamental discriminant (possibly \(k_1 = 1),\) and \(k_2\) is a positive integer. In the region \({{\text {Re}}}(\alpha ),{{\text {Re}}}(\beta )>\frac{1}{2},\) we have
Here \(Z_2(\alpha ,\beta ,a,k)\) is defined as follows:
where
and
In addition, \(Z_2(\alpha ,\beta ,a,k)\) is analytic in the region \({{\text {Re}}}(\alpha ),{{\text {Re}}}(\beta )>0,\) and we have
in the region \({{\text {Re}}}(\alpha ),{{\text {Re}}}(\beta ) \ge \frac{1}{\log X},\) where the implied constant is absolute.
Proof
The formula (5.5) follows from the joint multiplicativity of \(G_k(n_1n_2)\) in the variables \(n_1\) and \(n_2\). In fact,
Then we obtain (5.5) by comparing Euler factors on both sides.
For \(p \not \mid 2a k\), by Lemma 2.2, we know that
This shows that \(Z_2(\alpha ,\beta ,a,k)\) is analytic in the region \({{\text {Re}}}(\alpha ),{{\text {Re}}}(\beta )>0\).
It remains to prove the upper bound of \(Z_2(\alpha ,\beta ,a,k)\). It follows from (5.7) that for \(p \not \mid 2a k\),
For p|2a, we get that
For \(p \not \mid 2a,p|k\), using the trivial bound \(G_k(p^n) \le p^n\), we obtain that
By the above three bounds, we have obtained (5.6). \(\square \)
By (5.4) and Lemma 5.2, it follows that
Note that when moving the lines of integration in the variables u, v to the left, we may encounter poles only when \(k=\Box \) (in which case \(k_1=1\)). Thus, we break the sum in (5.8) into two parts according to whether \(k=\Box \).
Write
and
We will give an upper bound for \(S_1(k \ne \Box )\) in the next section. In the rest of this section, we focus on \(S_1(k=\Box )\) and obtain a main term. By a change of variables (replacing k by \(k^2\)), we get that
Lemma 5.3
In the region \({{\text {Re}}}(\alpha ),{{\text {Re}}}(\beta )>0,\) \({{\text {Re}}}(\gamma )>\tfrac{1}{2},\)
Here \(Z_3(\alpha ,\beta ,\gamma ,a)\) is defined by
where for p|2a,
and for \(p \not \mid 2a,\)
Moreover,
-
(1)
\(Z_3(\alpha ,\beta ,\gamma ,a)\) is analytic and uniformly bounded in the region \({{\text {Re}}}(\alpha ),{{\text {Re}}}(\beta )\ge \tfrac{1}{2}+\varepsilon ,\) \({{\text {Re}}}(\gamma )\ge 2\varepsilon .\)
-
(2)
\(Z_3(\alpha ,\beta ,\gamma ,a)\) is analytic and \(Z_3(\alpha ,\beta ,\gamma ,a) \ll \log ^{14} X \) in the region \({{\text {Re}}}(\alpha ),{{\text {Re}}}(\beta )\ge \tfrac{1}{2}+\frac{1}{\log X},\) \({{\text {Re}}}(\gamma )\ge \frac{2}{\log X}.\) The implied constant is absolute.
Proof
We first compute the left-hand side of (5.11) without \((-1)^k\). Note that
We remark here that \(Z_{2,p} (\alpha ,\beta ,a,1)\) may not be 1. If p|2a, we have
If \(p \not \mid 2a\), we have
Note that
and
Inserting them into (5.16), combined with (5.14), (5.15), we obtain that
Now we prove (5.11). It is clear that \(G_{4k}(n) = G_k(n)\) for any odd n, so \(Z_2(\alpha ,\beta ,a,4k^2)=Z_2(\alpha ,\beta ,a,k^2)\). Thus,
Together with (5.17), this yields (5.11).
The first property of \(Z_3(\alpha ,\beta ,\gamma ,a)\) comes directly from its definition. Now we prove the second property. We know that for \({{\text {Re}}}(\alpha ),{{\text {Re}}}(\beta )\ge \tfrac{1}{2}+\frac{1}{\log X}\), \({{\text {Re}}}(\gamma )\ge \frac{2}{\log X}\),
as desired. \(\square \)
It follows from (5.10) and Lemma 5.3 that
Note that \(Z_3(\tfrac{1}{2}+u,\tfrac{1}{2}+v, s)\) is analytic in the region \({{\text {Re}}}(u),{{\text {Re}}}(v) \ge \varepsilon \), \({{\text {Re}}}(s) \ge 2\varepsilon \) by (1) of Lemma 5.3, so we move the lines of the integral above to \({{\text {Re}}}(u) = {{\text {Re}}}(v) = 1\), \({{\text {Re}}}(s) =\frac{1}{10}\) without encountering any poles. (The only possible pole lies in \(\zeta (2s)\) at \(s=\frac{1}{2}\), but is cancelled by the simple zero arising from \(2^{1-2s} -1 \).) Hence,
Note that we may extend the sum over a to infinity with an error term
Move the lines of the integral above to \({{\text {Re}}}(u)={{\text {Re}}}(v)=\frac{1}{\log X}\), \({{\text {Re}}}(s) = \frac{2}{\log X}\) without encountering any poles. Then by (2) of Lemma 5.3, this is bounded by
The last inequality is due to (2.5) and the fact \(| \Gamma (s) (\cos +\sin )(\frac{\pi s }{2}) |\ll |s|^{{{\text {Re}}}(s)-\frac{1}{2}}\). Together with (5.18), it implies that
Let \(K_1(\alpha ,\beta ,\gamma ;p), K_2(\alpha ,\beta ,\gamma ;p)\) denote the expressions of (5.12) and (5.13), respectively. We have the following lemma.
Lemma 5.4
In the region \({{\text {Re}}}(\alpha ),{{\text {Re}}}(\beta )>\tfrac{1}{2},\) \(0< {{\text {Re}}}(\gamma ) < \tfrac{1}{2},\)
where
Moreover, \(Z_4(\alpha ,\beta ,\gamma )\) is analytic and uniformly bounded in the region \({{\text {Re}}}(\alpha ),{{\text {Re}}}(\beta )\ge \frac{3}{8},\) \(-\frac{1}{16} \le {{\text {Re}}}(\gamma ) \le \frac{1}{8}\).
Proof
We have
This implies Eq. (5.20). The latter part of the lemma follows directly from the definition of \(Z_4(\alpha ,\beta ,\gamma )\). \(\square \)
It follows from (5.19) and Lemma 5.4 that
where \(Z_4(\tfrac{1}{2}+u,\tfrac{1}{2}+v,s)\) is analytic and uniformly bounded in the region \({{\text {Re}}}(u),{{\text {Re}}}(v)\ge -\frac{1}{8}\), \(-\frac{1}{16} \le {{\text {Re}}}(s) \le \frac{1}{8} \).
Move the lines of the triple integral above to \({{\text {Re}}}(u) = {{\text {Re}}}(v) = {{\text {Re}}}(s) = \frac{1}{100}\) without encountering any poles. Then move the line of the integral over v to \({{\text {Re}}}(v)= -\frac{1}{50} + \frac{1}{\log X}\). There is a pole of order at most 2 at \(v=0\), and a pole of order at most 4 at \(v=-s\), so the triple integral in (5.21) is
where \(I_2(u,s),I_3(u,s)\) are the residues of the integrand in (5.21) at \(v=0\) and \(v=-s\), respectively.
The double integral of \(I_3 (u,s)\) in (5.22) is bounded. To see this, note that
Moving the line of the following integral in terms of u from \({{\text {Re}}}(u)=\frac{1}{100}\) to \({{\text {Re}}}(u)=\frac{1}{\log X}\) gives
Now we handle the double integral of \(I_2 (u,s)\) in (5.22). Write the integrand in (5.21) in the form
Clearly, \({\mathcal {F}}(u,v,s)\) is analytic in the region \({{\text {Re}}}(u+2s),{{\text {Re}}}(v+2s)>0\), \({{\text {Re}}}(u),{{\text {Re}}}(v)\ge -\frac{1}{8}\) and \(-\frac{1}{16} \le {{\text {Re}}}(s) \le \frac{1}{8} \). We have
Move the line of the double integral below from \({{\text {Re}}}(u)= \frac{1}{100} \) to \({{\text {Re}}}(u)= -\frac{1}{100} + \frac{1}{\log X}\). There is one possible pole at \(u=0\). Hence,
Note that
We see that the expression in the brackets above is analytic for \(-\frac{1}{16} \le {{\text {Re}}}(s) \le \frac{1}{8} \). Then we move the line of the integral below to \({{\text {Re}}}(s)=-\frac{1}{100}\) with only a possible pole at \(s=0\), and get that
where
Next we compute \({\mathcal {F}}(0,0,0)\) above. Note that \({\mathcal {F}}(0,0,0) = {\mathcal {J}}(0,1) g(0)^2 Z_4 (\frac{1}{2}, \frac{1}{2}, 0) = -\frac{1}{2}\tilde{\Phi }(1) Z_4 (\frac{1}{2}, \frac{1}{2}, 0)\). Recalling the definition of \(Z_4 (\alpha , \beta , \gamma )\) from Lemma 5.4, we have
The last equality is due to (4.12). Thus,
Combining (5.21) with (5.22)–(5.25), and the identity above, it follows that
Lemma 5.5
We have
6 Upper bounds for \(S_1(k \ne \Box )\)
In this section, we shall prove the following upper bounds for \(S_1(k \ne \square )\). The techniques applied here are from [17, Section 5.4] and the last part of [19, Section 3].
Lemma 6.1
Unconditionally, we have
Under GRH, we have
Proof
It follows from (5.9) that
Separate the sum over \(k_1\) into the part with \(|k_1| \le T:= U_1 U_2 Y^2 X^{-1}\) and the part with \(|k_1| > T\). Clearly, \(X^{\frac{4}{5}} \le T \le X^3\) since \(X^{\frac{9}{10}} \le U_1 \le U_2 \le X\) and \( 1\le Y \le X\). For the first category, we move the lines of the integral to \({{\text {Re}}}(u)={{\text {Re}}}(v) = -\frac{1}{2}+ \frac{1}{4\log X}\), \({{\text {Re}}}(s)= \frac{3}{4}\), while for the second category, we move the lines to \({{\text {Re}}}(u)={{\text {Re}}}(v) = -\frac{1}{2}+ \frac{1}{4\log X}\), \({{\text {Re}}}(s)= \frac{5}{4}\).
By (5.6), the terms in the first category are bounded by
Note that
By (6.3) and Lemma 2.3, it follows that
This bound can be improved under GRH. In fact, we split the left-hand side of (6.4) into
By Theorem 2.4, we have for \(|{{\text {Im}}}(u)| \le X^{\frac{1}{5}}\),
Later in (8.10) of Sect. 8, under GRH, it will be proved that for \(-\frac{1}{2}\le {{\text {Re}}}(u) \le -\frac{1}{2} + \frac{1}{\log X}\) and \(|{{\text {Im}}}(u)|\le X\),
Using dyadic blocks and the Cauchy–Schwarz inequality, combined with the above bound, we can deduce that for \(|{{\text {Im}}}(u)| \le X^{\frac{1}{5}}\),
Thus for \(|{{\text {Im}}}(u)| \le X^{\frac{1}{5}}\),
Recall the definition of T. Substituting both (6.4) and (6.5) in (6.2), we see that the contribution of the terms in the first category is \( \ll U_1^{\frac{1}{2}} U_2^{\frac{1}{2}} Y (\log X)^{2^{17}} \Phi _{(5)}\). Similarly, the contribution of the terms in the second category is also \( \ll U_1^{\frac{1}{2}} U_2^{\frac{1}{2}} Y (\log X)^{2^{17}} \Phi _{(5)}\).
This proves the conditional part of the lemma. The unconditional part can be proved similarly by substituting (6.4) in (6.2). \(\square \)
7 Proof of main theorems
In this section, we complete the proof of Theorems 1.1 and 1.2. The argument is similar to [19, Section 5].
7.1 Proof of Theorem 1.1
Recall the definition of \(S(U_1,U_2)\) from (3.3). Write \(U=\frac{X}{(\log X)^{2^{50}}}\). Take \(U_1=U_2= U\) and \(Y=X^{\frac{1}{2}}U_1^{-\frac{1}{4}}U_2^{-\frac{1}{4}}\).
Using these values, we can simplify Lemmas 3.1, 4.2, 5.5 and 6.1. In the following, we give the details of the simplification for Lemma 5.5. The summation in Lemma 5.5 is
We consider the case \(j_4 = 0\), and other cases can be done similarly. Assume \(j_4 =0\) in (7.1). Then by (5.26), we have
The second-to-last equality follows from \(\log ^j U = \log ^{j} X + O(\log ^{j-1+\varepsilon } X)\) for \(j\ge 0\). The last equality is obtained by
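The expansion \(\log ^j U = \log ^{j} X + O(\log ^{j-1+\varepsilon } X)\) invoked here is immediate from the binomial theorem, since \(U=\frac{X}{(\log X)^{2^{50}}}\):

```latex
\log U = \log X - 2^{50}\log\log X
\quad\Longrightarrow\quad
\log^{j} U
= \bigl(\log X - 2^{50}\log\log X\bigr)^{j}
= \log^{j} X + O_j\!\bigl(\log^{j-1} X \cdot \log\log X\bigr)
= \log^{j} X + O\!\bigl(\log^{j-1+\varepsilon} X\bigr).
```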
Similarly, we can compute other cases in (7.1). Combining all cases we can show (7.1) is
Using this fact, Lemma 5.5 can be simplified to
Now by (3.3), (3.6), combined with Lemmas 3.1, 4.2, 5.5 and 6.1, we can obtain that
Define \(B_U(\frac{1}{2};8d) = L(\tfrac{1}{2},\chi _{8d})^2 - A_U(\frac{1}{2};8d)\). We claim that
In fact, we have
Since \(\frac{(8d)^s-U^s}{s}\) is entire, we move the line of the integral to \({{\text {Re}}}(s)=0\). By the bound \(| \frac{(8d)^{it}-U^{it}}{it}| \ll \log ( \frac{8d}{U} )\), \(t\in {\mathbb {R}}\), we get that
This implies that the left-hand side of (7.3) is
Split the integral according to whether \(|t_1|,|t_2| \le X\). If \(|t_1|,|t_2| \le X\), then use Theorem 2.4. Otherwise, use Lemma 2.3. This will establish (7.3).
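The bound \(| \frac{(8d)^{it}-U^{it}}{it}| \ll \log ( \frac{8d}{U} )\) used above (note that \(8d > U\) here) comes from expressing the quotient as an integral:

```latex
\left|\frac{(8d)^{it}-U^{it}}{it}\right|
= \left|\int_{U}^{8d} x^{\,it-1}\,dx\right|
\le \int_{U}^{8d} \frac{dx}{x}
= \log\frac{8d}{U}.
```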
Note that
Using the Cauchy–Schwarz inequality on the third term, combined with (7.2) and (7.3), we obtain that
In the following we remove the function \(\Phi (\frac{d}{X})\) from the above summation. Choose \(\Phi \) such that \(\Phi (t) =1\) for all \(t \in (1+Z^{-1} , 2-Z^{-1})\), \(\Phi (t) =0\) for all \(t \notin (1,2) \), and \(\Phi ^{(\nu )} (t)\ll _{\nu } Z^{\nu }\) for all \(\nu \ge 0\). This implies that \(\Phi _{(\nu )} \ll _{\nu } Z^{\nu }\), and that \(\tilde{\Phi }(1) = \check{\Phi } (0) = 1 + O(Z^{-1})\). Then by (7.5), we get that
Take \(Z=\log X\). We have
Similarly, we can choose \(\Phi (t)\) in (7.5) such that \(\Phi (t) = 1 \) for all \(t \in [1 , 2]\), \(\Phi (t) =0\) for all \(t \notin (1-Z^{-1},2+Z^{-1}) \), and \(\Phi ^{(\nu )}(t) \ll _{\nu } Z^{\nu }\) for all \(\nu \ge 0\). Taking \(Z=\log X\), we can deduce that
Combining (7.6) and (7.7), we obtain that
Applying the above with \(X= \frac{x}{2}\), \(X= \frac{x}{4}\), \(\dots \) and summing, we complete the proof of Theorem 1.1.
7.2 Proof of Theorem 1.2
Write \(U = X^{1-4\varepsilon }\). By the Cauchy–Schwarz inequality, we obtain that
Let \(A^2\) and B denote the numerator and denominator of the right-hand side in (7.8), respectively.
We first handle B. By (3.3) and (3.6), combined with Lemmas 3.1, 4.2, 5.5 and 6.1, taking \(Y=X^{\frac{1}{2}}U_1^{-\frac{1}{4}}U_2^{-\frac{1}{4}}\) and \( U_1=U_2= U\), we get that
where the implied constant in \(O(\varepsilon ^2)\) is absolute.
For A, we have
where
Note that the difference between A and B lies in the difference between h(x, y, z) and \(h_1(x,y,z)\). By slightly modifying the argument for computing B, taking \(Y=X^{\frac{1}{2}}U^{-\frac{1}{4}} X^{-\frac{1}{4}}\), we can deduce that
where the implied constant in \(O(\varepsilon ^2)\) is absolute.
Choose \(\Phi \) such that \(\Phi (t) =1\) for all \(t \in (1+Z^{-1} , 2-Z^{-1})\), \(\Phi (t) =0\) for all \(t \notin (1,2) \), and \(\Phi ^{(\nu )} (t)\ll _{\nu } Z^{\nu }\) for all \(\nu \ge 0\). Take \(Z=\log X\). Combining (7.8) with the estimates for A and B, we have
Summing this over \(X= \frac{x}{2}\), \(X= \frac{x}{4}\), \(\dots \), we obtain Theorem 1.2.
8 Proof of Theorem 2.4
In this section, we shall prove Theorem 2.4. The proof here closely follows [19, Section 6].
Let \(x\in {\mathbb {R}}\) with \(x \ge 10\), and \(z \in {\mathbb {C}}\). Define
Let \(z_1, z_2 \in {\mathbb {C}}\). We define
and
Remark 8.1
We see that the definition of \({\mathcal {M}}(z_1,z_2,x)\) is different from that in [19, Section 6] by a factor \(-1\), while \({\mathcal {V}} (z_1,z_2,x)\) is the same. The difference is due to the different symmetry types of families of L-functions (see Katz–Sarnak [12]). The family of quadratic Dirichlet L-functions is symplectic, whereas the family of quadratic twists of a modular L-function in [19] is orthogonal. For further explanation, we refer readers to [19, p. 1111] and [18, p. 991].
Proposition 8.2
Assume GRH for \(L(s,\chi _d)\) for all fundamental discriminants d. Let X be large. Let \(z_1,z_2 \in {\mathbb {C}}\) with \(0 \le {{\text {Re}}}(z_1), {{\text {Re}}}(z_2) \le \frac{1}{\log X}\), and \(|{{\text {Im}}}(z_1)|, |{{\text {Im}}}(z_2)| \le X\). Let \({\mathcal {N}}(V;z_1,z_2,X)\) denote the number of fundamental discriminants \(|d|\le X\) such that
Then for \(10\sqrt{\log \log X}\le V \le {\mathcal {V}}(z_1,z_2,X)\), we have
for \({\mathcal {V}}(z_1,z_2,X)< V \le \frac{1}{16}{\mathcal {V}}(z_1,z_2,X)\log \log \log X\), we have
finally, for \(\frac{1}{16}{\mathcal {V}}(z_1,z_2,X)\log \log \log X<V\), we have
Proof
It is helpful to keep in mind that \(\log \log X + O(1) \le {\mathcal {V}}(z_1,z_2,X) \le 4\log \log X \). By slightly modifying the proof of the main proposition in [18], we obtain that for any \(2 \le x \le X\),
where \(\lambda _0 = 0.56 \dots \) is the unique real number satisfying \(e^{-\lambda _0} = \lambda _0\). It follows that
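As a purely illustrative numerical aside (not part of the proof), \(\lambda _0\) is the omega constant \(W(1)\), the value of the Lambert \(W\) function at \(1\); since \(t \mapsto e^{-t}\) is a contraction near its fixed point, \(\lambda _0\) can be computed by fixed-point iteration:

```python
import math

def omega_constant(iterations: int = 200) -> float:
    """Fixed-point iteration for the unique real solution of exp(-t) = t.

    The map t -> exp(-t) has derivative of absolute value ~0.567 < 1 at the
    fixed point, so the iteration converges linearly from any positive start.
    """
    t = 0.5
    for _ in range(iterations):
        t = math.exp(-t)
    return t

lam0 = omega_constant()
print(f"{lam0:.10f}")  # ~0.5671432904, matching lambda_0 = 0.56...
```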
The terms with \(l \ge 3\) in the above sum contribute O(1). Using the fact that \(\sum _{p|d} \frac{1}{p} \ll \log \log \log d\), we get that
By RH, we can deduce that
The above sum is also trivially \(\ll y\). Combining (8.2) with these two bounds and applying partial summation, we have
Inserting the above estimates into (8.1) and using \({\mathcal {M}}(z_1,z_2,x) \le {\mathcal {M}}(z_1,z_2,X)\), we obtain that
For brevity, put \({\mathcal {V}} := {\mathcal {V}} (z_1,z_2,X)\). Set
By taking \(x = \log X\) in (8.4) and bounding the sum over p in (8.4) trivially, we know that \({\mathcal {N}} (V; z_1,z_2,X) =0\) for \(V > \frac{5\log X}{\log \log X} \). Thus, we can assume \(V \le \frac{5\log X}{\log \log X} \).
From now on, we set \(x=X^{A/V}\) and \(z=x^{1/\log \log X}\). Let \(S_1\) be the sum in (8.4) truncated to \(p \le z\), and \(S_2\) be the sum over \(z < p \le x\). It follows from (8.4) that
Note that if d satisfies \(\log |L (\frac{1}{2}+z_1 , \chi _d)||L(\tfrac{1}{2}+z_2,\chi _d)| \ge V + {\mathcal {M}} (z_1,z_2,X)\), then either
Write
For any \(m \le \frac{V}{2A} -1\), by [19, Lemma 6.3], we have
By choosing \(m = \lfloor \frac{V}{2A}\rfloor -1\), we get that
We next estimate \({\text {meas}}(X;S_1)\). For any \(m \le \frac{\frac{1}{2}\log X - \log \log X}{ \log z}\), by [19, Lemma 6.3], we obtain that
where
By using (8.3) and partial summation, we can show that
Together with (8.6), this yields
Taking \(m = \lfloor \frac{V_1^2}{ 2{\mathcal {V}}} \rfloor \) when \(V \le \frac{(\log \log X)^2}{\log \log \log X}\), and taking \( m = \lfloor 10V \rfloor \) otherwise, we obtain that
Combining the estimates (8.5) and (8.7) establishes Proposition 8.2, and the proof is complete. \(\square \)
For convenience, in the following we record a rough form of Proposition 8.2. Let \(k \in {\mathbb {R}}_{>0}\) be fixed. For \(10\sqrt{\log \log X} \le V \le 4k {\mathcal {V}}(z_1,z_2,X)\), we have
and for \(V > 4k {\mathcal {V}}(z_1,z_2,X)\), we have
Observe that
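The observation here is the standard device of writing the moment as an integral of \(e^{kV}\) against the counting function \({\mathcal {N}}\); schematically (up to the precise normalization used in the text), integration by parts gives

```latex
% Schematic: the shifted moment as an integral of exp(kV) against the
% distribution function N(V;z_1,z_2,X); normalization is illustrative.
\[
  \sideset{}{^*}\sum_{|d| \le X}
    \bigl| L\bigl(\tfrac{1}{2}+z_1,\chi_d\bigr)
           L\bigl(\tfrac{1}{2}+z_2,\chi_d\bigr) \bigr|^{k}
  = -\int_{-\infty}^{\infty}
      e^{k(V + {\mathcal M}(z_1,z_2,X))} \, d{\mathcal N}(V;z_1,z_2,X)
  = k \, e^{k {\mathcal M}(z_1,z_2,X)}
      \int_{-\infty}^{\infty} e^{kV} \, {\mathcal N}(V;z_1,z_2,X) \, dV .
\]
```

The boundary terms vanish since \({\mathcal N}\) is bounded by the number of fundamental discriminants \(|d|\le X\) and vanishes for \(V\) large.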
Inserting the rough bounds (8.8) and (8.9) into the integral above, we can deduce that
Theorem 8.3
Assume GRH for \(L(s,\chi _d)\) for all fundamental discriminants d. Let X be large. Let \(z_1,z_2 \in {\mathbb {C}}\) with \(0 \le {{\text {Re}}}(z_1), {{\text {Re}}}(z_2) \le \frac{1}{\log X},\) and \(|{{\text {Im}}}(z_1)|, |{{\text {Im}}}(z_2)| \le X.\) Then for any positive real number k and any \(\varepsilon >0,\) we have
In the rest of this section, we complete the proof of Theorem 2.4.
Proof of Theorem 2.4
By Theorem 8.3 and the fact that \({\mathcal {L}}(z,x) \le \log \log x\) for \(z \in {\mathbb {C}}, x \ge 10\), we can trivially get that
Now we assume \(|{{\text {Im}}}(z_1) - {{\text {Im}}}(z_2)| \ge \frac{1}{\log X}\). Write \(t_1 = {{\text {Im}}}(z_1)\) and \(t_2 = {{\text {Im}}}(z_2)\).
If \(t_1t_2 \ge 0\), then \(|t_1-t_2| \le |t_1+t_2| \le \max (2|t_1|, 2|t_2|)\), say \(|t_1+t_2| \le 2|t_1|\). Note that \({\mathcal {L}}(y,X)\) is a decreasing function for \(y \ge 0 \). Thus, we have
This together with
implies
On the other hand, if \(t_1t_2 < 0\), then \(|t_1-t_2| = |t_1|+|t_2|\le \max \{2|t_1|,2|t_2|\}\), say \(|t_1-t_2| \le 2|t_2|\). This implies that \(|t_1| \le |t_2|\) and hence that \({\mathcal {L}}(2t_2,X)\le {\mathcal {L}}(|t_1-t_2|,X)\). Note that \(|t_1-t_2|=2|t_1|+|t_1+t_2|\), so \(|t_1-t_2| \le \max \{4|t_1|, 2|t_1+t_2|\}\): indeed, if \(|t_1-t_2| > 4 |t_1|\), then \(2|t_1|+|t_1+t_2| > 4|t_1|\), so \(|t_1| \le \frac{1}{2} |t_1+t_2| \) and hence \(|t_1-t_2|=2|t_1|+|t_1+t_2| \le 2 |t_1 +t_2|\). Without loss of generality, we may assume \(|t_1-t_2| \le 4|t_1|\). It follows that \({\mathcal {L}}(z_1,X), {\mathcal {L}}(2z_1,X)\le {\mathcal {L}}(|t_1-t_2|,X)+O(1)\). Now we have
This combined with
also implies (8.11).
By inserting (8.11) into Theorem 8.3, we can show that, for \(|{{\text {Im}}}(z_1) - {{\text {Im}}}(z_2)| \ge \frac{1}{\log X}\),
By combining (8.12) and (8.10) with \(k=2\), we have proved Theorem 2.4. \(\square \)
References
Alderson, M.W., Rubinstein, M.O.: Conjectures and experiments concerning the moments of \(L(1/2,\chi _d)\). Exp. Math. 21(3), 307–328 (2012)
Andrade, J.C., Keating, J.P.: Conjectures for the integral moments and ratios of \(L\)-functions over function fields. J. Number Theory 142, 102–148 (2014)
Chandee, V.: On the correlation of shifted values of the Riemann zeta function. Q. J. Math. 62(3), 545–572 (2011)
Conrey, J.B., Farmer, D.W., Keating, J.P., Rubinstein, M.O., Snaith, N.C.: Integral moments of \(L\)-functions. Proc. Lond. Math. Soc. (3) 91(1), 33–104 (2005)
Diaconu, A., Goldfeld, D., Hoffstein, J.: Multiple Dirichlet series and moments of zeta and \(L\)-functions. Compos. Math. 139(3), 297–360 (2003)
Diaconu, A., Whitehead, I.: On the third moment of \(L(1/2,\chi _d)\) II: the number field case. J. Eur. Math. Soc. (to appear). Preprint. arXiv:1804.00690
Florea, A.: The fourth moment of quadratic Dirichlet \(L\)-functions over function fields. Geom. Funct. Anal. 27(3), 541–595 (2017)
Goldfeld, D., Hoffstein, J.: Eisenstein series of \({1\over 2}\)-integral weight and the mean value of real Dirichlet \(L\)-series. Invent. Math. 80(2), 185–208 (1985)
Harper, A.J.: Sharp conditional bounds for moments of the Riemann zeta function. arXiv:1305.4618
Heath-Brown, D.R.: A mean value estimate for real character sums. Acta Arith. 72(3), 235–275 (1995)
Jutila, M.: On the mean value of \(L({1\over 2},\,\chi )\) for real characters. Analysis 1(2), 149–161 (1981)
Katz, N., Sarnak, P.: Zeroes of zeta functions and symmetry. Bull. Am. Math. Soc. (N.S.) 36(1), 1–26 (1999)
Keating, J.P., Snaith, N.C.: Random matrix theory and \(L\)-functions at \(s=1/2\). Commun. Math. Phys. 214(1), 91–110 (2000)
Munsch, M.: Shifted moments of \(L\)-functions and moments of theta functions. Mathematika 63(1), 196–212 (2017)
Rudnick, Z., Soundararajan, K.: Lower bounds for moments of \(L\)-functions: symplectic and orthogonal examples. In: Multiple Dirichlet Series, Automorphic Forms, and Analytic Number Theory. Proceedings of Symposia in Pure Mathematics, vol. 75, pp. 293–303. American Mathematical Society, Providence (2006)
Sono, K.: The second moment of quadratic Dirichlet \(L\)-functions. J. Number Theory 206, 194–230 (2020)
Soundararajan, K.: Nonvanishing of quadratic Dirichlet \(L\)-functions at \(s=\frac{1}{2}\). Ann. Math. (2) 152(2), 447–488 (2000)
Soundararajan, K.: Moments of the Riemann zeta function. Ann. Math. (2) 170(2), 981–993 (2009)
Soundararajan, K., Young, M.P.: The second moment of quadratic twists of modular \(L\)-functions. J. Eur. Math. Soc. 12(5), 1097–1116 (2010)
Young, M.P.: The first moment of quadratic Dirichlet \(L\)-functions. Acta Arith. 138(1), 73–99 (2009)
Young, M.P.: The third moment of quadratic Dirichlet \(L\)-functions. Sel. Math. (N.S.) 19(2), 509–543 (2013)
Zhang, Q.: On the cubic moment of quadratic Dirichlet \(L\)-functions. Math. Res. Lett. 12(2–3), 413–424 (2005)
Acknowledgements
I would like to thank my supervisors Habiba Kadiri and Nathan Ng, for suggesting this problem to me, and for having numerous helpful discussions. I would also like to thank Matilde Lalín, Keiju Sono and Peng-jie Wong for their valuable comments. Lastly, I would like to thank the referee for their extensive feedback and constructive comments.
Research for this article is partially supported by the University of Lethbridge.
Shen, Q. The fourth moment of quadratic Dirichlet L-functions. Math. Z. 298, 713–745 (2021). https://doi.org/10.1007/s00209-020-02609-2