1 Introduction

In this article, we derive an asymptotic expression for the Arakelov self-intersection number of the relative dualizing sheaf of the minimal regular model over \({\mathbb {Z}}\) of the modular curve \(X_0(p^2)\) in terms of its genus \(g_{p^2}\), for a prime p. For odd, square-free \(N \in {\mathbb {N}}\), this quantity was computed for the congruence subgroups \(\varGamma _0(N)\) [1], \(\varGamma _1(N)\) [28] and recently for the principal congruence subgroups \(\varGamma (N)\) [13]. We generalize our work to semi-stable models of these modular curves in a companion article [6]. The computation of the infinite part of the Arakelov self-intersection number for the modular curve \(X_0(p^2)\) carried out in the present paper is used in [6] to compute the Arakelov self-intersection numbers for semi-stable models. From the viewpoint of Arakelov theory, the main motivation for studying the Arakelov self-intersection numbers is to prove an effective Bogomolov conjecture for the particular modular curve \(X_0(p^2)\). The Bogomolov conjecture was proved by Ullmo [32] using ergodic theory, though his proof is not effective. In [6], we prove an effective Bogomolov conjecture and find an asymptotic expression for the stable Faltings heights of the modular curves of the form \(X_0(p^2)\). In another, more ambitious direction, we hope that our results will find applications in computing Fourier coefficients of modular forms and residual Galois representations associated to modular forms, following the strategy outlined in [11].

The main technical difficulty of this paper lies in the fact that for square-free N the special fibers of the modular curves are reduced and even semi-stable over \({\mathbb {Q}}\), while without this hypothesis the special fiber is non-reduced and not semi-stable. We manage to remove the square-free assumption thanks to a careful analysis of the regular but non-minimal models of the corresponding modular curves, following Edixhoven [10]. The bound on the self-intersection number at the infinite place in terms of Green’s functions is obtained by the method outlined by Zagier [33] using the Selberg trace formula.

Our first result concerns an asymptotic expression of the constant term of the Rankin–Selberg transform at the cusp \(\infty \) of the Arakelov metric. To state it we need some notation. Let \({\mathbb {H}}\) be the complex upper half plane and denote the non-compact modular curve corresponding to the subgroup \(\varGamma _0(p^2)\) by \(Y_0(p^2):=\varGamma _0(p^2) \backslash {\mathbb {H}}\). Let \(\mu _{\mathrm {hyp}}\) be the hyperbolic measure on the Riemann surface \(X_0(p^2)\) and \(v_{\varGamma _0(p^2)}\) be the volume of the compactified modular curve \(X_0(p^2)\) [8, p. 182]. We denote the weight zero Eisenstein series at the cusp \(\infty \) by \(E_{\infty ,0}(z,s)\) and by F the Arakelov metric on \(X_0(p^2)\) (see Sect. 3.3). The Rankin–Selberg transform at the cusp \(\infty \) of the Arakelov metric on \(X_0(p^2)\) is defined to be

$$\begin{aligned} R_F(s):=\int _{Y_0(p^2)} E_{\infty ,0}(z,s) F(z) \mu _{\mathrm {hyp}}. \end{aligned}$$

The above function admits a meromorphic continuation to the whole complex plane, with a simple pole at \(s=1\) of residue \(v_{\varGamma _0(p^2)}^{-1}\). Let the Laurent series expansion of the Rankin–Selberg transform at \(s=1\) be given by

$$\begin{aligned} R_F(s)=\frac{1}{v_{\varGamma _0(p^2)}(s-1)}+{\mathcal {R}}_{\infty }^{\varGamma _0(p^2)}+O(s-1). \end{aligned}$$

Theorem 1.1

The constant term \( {\mathcal {R}}_{\infty }^{\varGamma _0(p^2)}\) in the Laurent series expansion of the Rankin–Selberg transform of the Arakelov metric on the modular curve \(X_0(p^2)\) is asymptotically given by

$$\begin{aligned} {\mathcal {R}}_{\infty }^{\varGamma _0(p^2)}=o\left( \frac{\log (p^2)}{g_{p^2}}\right) . \end{aligned}$$

The underlying philosophy of the proof of Theorem 1.1 is the same as that of Abbes–Ullmo [1, Theorem F, p. 6], Mayer [28] and Grados Fukuda [13]. By invoking the Selberg trace formula, the computation of the Rankin–Selberg transform of the Arakelov metric reduces to finding the contributions of the various motions of \(\varGamma _0(p^2)\) in the trace formula. The proof hinges on a simplification of suitable Eisenstein series (cf. Proposition 3.2) that in turn generalizes [1, Proposition 3.2.2]. The actual computation of the hyperbolic (Sect. 4.2.1) and parabolic (Sect. 4.2.3) contributions differs slightly from the above mentioned papers because of the condition we imposed on N.

Note that the elliptic contribution in the Selberg trace formula does not follow directly in the same way as in Abbes–Ullmo [1]. In loc. cit., the authors used the square-free assumption in a crucial way to factorize the Epstein zeta functions suitably (cf. [1, Lemma 3.2.4]). As Mayer [28] and M. Grados Fukuda [13] worked with modular curves of the form \(X_1(N)\) and X(N) respectively, there is no elliptic contribution in those cases. We use an observation of M. Grados Fukuda in conjunction with the book of Zagier [34] to find a suitable bound on the elliptic contribution in Proposition 4.4. In the present paper, we give bounds on the terms in the Laurent series expansion of a certain zeta function that appears in the elliptic contribution (Sect. 4.2.2), rather than finding the exact expression as accomplished by Abbes–Ullmo [1]. The fact that our N has only one prime factor is an advantage for us.

The computation of these three types of contributions may be more complicated for general modular curves, depending on the number of prime factors of N. We strongly believe that it is possible to prove an analogue of Theorem 1.1 for modular curves of the form \(X_0(N)\) for general N by the same strategy, modifying Proposition 3.2.2 of Abbes–Ullmo [1] suitably. However, the validity of our Theorem 1.2 depends on information about the special fibers of the arithmetic surface associated to a modular curve, and this cannot be extended to arbitrary N without non-trivial algebro-geometric considerations. Hence, we choose to give a complete proof of Theorem 1.1 only for \(N=p^2\) in this paper. Although we write the computations in Sects. 3 and 4 to suit the specific modular curves \(X_0(p^2)\) considered in this paper, most of the results generalize with minor changes to any modular curve of the form \(X_0(N)\) with arbitrary \(N \in {\mathbb {N}}\). For the general strategy to prove the theorem for general modular curves, we refer to Abbes–Ullmo [1], Mayer [28] and M. Grados Fukuda [13].

Being an algebraic curve over \({\mathbb {Q}}\), \(X_0(p^2)\) has a minimal regular model over \({\mathbb {Z}}\) which we denote by \({\mathcal {X}}_0(p^2)\). Let \(\overline{\omega }_{p^2}\) be the relative dualizing sheaf of \({\mathcal {X}}_0(p^2)\) equipped with the Arakelov metric and \(\overline{\omega }_{p^2}^2 = \langle \overline{\omega }_{p^2}, \overline{\omega }_{p^2}\rangle \) be the Arakelov self-intersection number as defined in Sect. 2.

The following theorem is analogous to Proposition D of [1], Theorem 1 of [28] and Theorem 5.2.3 of [13] for the modular curve \(X_0(p^2)\):

Theorem 1.2

The Arakelov self-intersection numbers for the modular curve \(X_0(p^2)\) satisfy the following asymptotic formula

$$\begin{aligned} \overline{\omega }_{p^2}^2= 3g_{p^2} \log (p^2)+o(g_{p^2} \log (p)). \end{aligned}$$

Similar results on the Arakelov self-intersection numbers for general arithmetic surfaces are obtained in [23, 24]. For general arithmetic surfaces it is only possible to give upper and lower bounds, but in the case of modular curves of the form \(X_0(p^2)\) we obtain an asymptotic expression, since the required algebro-geometric information is available thanks to the work of Bas Edixhoven [10].

2 Arakelov intersection pairing

Let K be a number field and R be its ring of integers. Let \({\mathcal {X}}\) be an arithmetic surface over \({{\,\mathrm{Spec}\,}}R\) (in the sense of Liu [26, Chapter 8, Definition 3.14]) with the map \(f: {\mathcal {X}}\rightarrow {{\,\mathrm{Spec}\,}}R\). Let \(X = {\mathcal {X}}_{(0)}\) be the generic fiber, which is a smooth irreducible projective curve over K. For each embedding \(\sigma : K \rightarrow {\mathbb {C}}\) we get a connected Riemann surface \({\mathcal {X}}_{\sigma }\) by taking the \({\mathbb {C}}\)-points of the scheme

$$\begin{aligned} X \times _{{{\,\mathrm{Spec}\,}}K, \sigma } {{\,\mathrm{Spec}\,}}{\mathbb {C}}. \end{aligned}$$

Collectively we denote

$$\begin{aligned} {\mathcal {X}}_{\infty } = X({\mathbb {C}}) = \bigsqcup _{\sigma : K \rightarrow {\mathbb {C}}} {\mathcal {X}}_{\sigma }. \end{aligned}$$

Any line bundle L on \({\mathcal {X}}\) induces a line bundle on \({\mathcal {X}}_{\sigma }\) which we denote by \(L_{\sigma }\). A metrized line bundle \(\bar{L} = (L, h)\) is a line bundle L on \({\mathcal {X}}\) along with a hermitian metric \(h_{\sigma }\) on each \(L_{\sigma }\). Arakelov invented an intersection pairing for metrized line bundles, which we now describe. Let \(\bar{L}\) and \(\bar{M}\) be two metrized line bundles with non-trivial global sections l and m respectively, such that the associated divisors have no common components. Then

$$\begin{aligned} \langle \bar{L}, \bar{M} \rangle = \langle L, M \rangle _{\text {fin}} + \sum _{\sigma : K \rightarrow {\mathbb {C}}} \langle L_{\sigma }, M_{\sigma } \rangle . \end{aligned}$$

The first summand is the algebraic part whereas the second summand is the analytic part of the intersection. For each closed point \(x \in {\mathcal {X}}\), \(l_x\) and \(m_x\) can be thought of as elements of \({\mathcal {O}}_{{\mathcal {X}}, x}\) via a suitable trivialization. If \({\mathcal {X}}^{(2)}\) is the set of closed points of \({\mathcal {X}}\) (the number 2 here signifies the fact that a closed point is an algebraic cycle of codimension 2 on \({\mathcal {X}}\)), then

$$\begin{aligned} \langle L, M \rangle _{\text {fin}} = \sum _{x \in {\mathcal {X}}^{(2)}} \log \# ({\mathcal {O}}_{{\mathcal {X}},x}/ (l_x,m_x)). \end{aligned}$$
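As a toy illustration of the finite part (our own example, not from the paper), consider two sections on the affine line over \({\mathbb {Z}}\): for coprime \(f, g \in {\mathbb {Z}}[x]\) with f monic, it is a standard fact that the order of \({\mathbb {Z}}[x]/(f,g)\) equals \(|{\text {Res}}(f,g)|\), so the total finite contribution is \(\log |{\text {Res}}(f,g)|\). A minimal sketch computing the resultant via the Sylvester matrix:

```python
from fractions import Fraction

def resultant(f, g):
    """Resultant of two integer polynomials via the Sylvester matrix.

    f, g are coefficient lists, highest degree first, e.g. x^2 + 1 -> [1, 0, 1].
    """
    m, n = len(f) - 1, len(g) - 1
    size = m + n
    rows = []
    for i in range(n):  # n shifted copies of f
        rows.append([0] * i + f + [0] * (n - 1 - i))
    for i in range(m):  # m shifted copies of g
        rows.append([0] * i + g + [0] * (m - 1 - i))
    # Determinant by exact Gaussian elimination over the rationals.
    a = [[Fraction(x) for x in row] for row in rows]
    det = Fraction(1)
    for col in range(size):
        pivot = next((r for r in range(col, size) if a[r][col] != 0), None)
        if pivot is None:
            return 0
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            det = -det
        det *= a[col][col]
        for r in range(col + 1, size):
            factor = a[r][col] / a[col][col]
            for c in range(col, size):
                a[r][c] -= factor * a[col][c]
    return det

# div(x^2 + 1) and div(x - 2) meet only above p = 5 (since 2^2 + 1 = 5):
# #(Z[x]/(x^2 + 1, x - 2)) = 5, so this point contributes log 5.
print(resultant([1, 0, 1], [1, -2]))  # 5
```

For \(f = x^2+1\) and \(g = x-2\) the two divisors meet only above the prime 5, and the finite intersection number is \(\log 5\).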

Now for the analytic part, we assume that the associated divisors of l and m, which we denote by \({{\,\mathrm{div}\,}}(l)_{\sigma }\) and \({{\,\mathrm{div}\,}}(m)_{\sigma }\), do not have any common points on \({\mathcal {X}}_{\sigma }\), and that \({{\,\mathrm{div}\,}}(l)_{\sigma } = \sum _{\alpha } n_{\alpha } P_{\alpha }\) with \(n_{\alpha } \in {\mathbb {Z}}\); then

$$\begin{aligned} \langle L_{\sigma }, M_{\sigma } \rangle = -\sum _{\alpha } n_{\alpha } \log || m(P_{\alpha }) || - \int _{{\mathcal {X}}_{\sigma }} \log ||l|| c_1(M_{\sigma }). \end{aligned}$$

Here \(||\cdot ||\) denotes the norm given by the hermitian metric on L or M respectively, which is clear from the context. The first Chern class of \(M_{\sigma }\) is denoted by \(c_1(M_{\sigma })\); it is a closed (1, 1) form on \({\mathcal {X}}_{\sigma }\) (see for instance Griffiths–Harris [14]).

This intersection product is symmetric in \(\bar{L}\) and \(\bar{M}\). Moreover, if we consider the group of metrized line bundles up to isomorphism, called the arithmetic Picard group and denoted by \(\widehat{{{\,\mathrm{Pic}\,}}} {\mathcal {X}}\), then the arithmetic intersection product extends to a symmetric bilinear form on all of \(\widehat{{{\,\mathrm{Pic}\,}}} {\mathcal {X}}\). It extends further by linearity to the rational arithmetic Picard group

$$\begin{aligned} \widehat{{{\,\mathrm{Pic}\,}}}_{{\mathbb {Q}}} {\mathcal {X}}= \widehat{{{\,\mathrm{Pic}\,}}} {\mathcal {X}}\otimes {\mathbb {Q}}. \end{aligned}$$

For more details see Arakelov [2, 3] and Curilla [7].

Arakelov gave a canonical way of attaching a hermitian metric to a line bundle on \({\mathcal {X}}\), see for instance Faltings [12, Section 3]. We summarise the construction here. Note that the space \(H^0({\mathcal {X}}_{\sigma }, \Omega ^1)\) of holomorphic differentials on \({\mathcal {X}}_{\sigma }\) carries a natural inner product

$$\begin{aligned} \langle \phi , \psi \rangle = \frac{i}{2} \int _{{\mathcal {X}}_{\sigma }} \phi \wedge \overline{\psi }. \end{aligned}$$

Let us assume that the genus of \({\mathcal {X}}_{\sigma }\) is greater than or equal to 1. Choose an orthonormal basis \(f_1^{\sigma }, \ldots , f_g^{\sigma }\) of \(H^0({\mathcal {X}}_{\sigma }, \Omega ^1)\). The canonical volume form on \({\mathcal {X}}_{\sigma }\) is

$$\begin{aligned} \mu _{\mathrm {can}}^{\sigma } = \frac{i}{2g} \sum _{j=1}^g f_j^{\sigma } \wedge \overline{f_j^{\sigma }}. \end{aligned}$$
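To illustrate the construction in the simplest case (an example of ours, not spelled out in the text), take genus \(g=1\) and \({\mathcal {X}}_{\sigma } = {\mathbb {C}}/({\mathbb {Z}}+\tau {\mathbb {Z}})\) with \({\text { Im }}(\tau )>0\). The differential \(f_1^{\sigma } = dz/\sqrt{{\text { Im }}(\tau )}\) is orthonormal, since \(\frac{i}{2}\, dz \wedge d\overline{z} = dx \wedge dy\) and the fundamental parallelogram has Euclidean area \({\text { Im }}(\tau )\). Hence

$$\begin{aligned} \mu _{\mathrm {can}}^{\sigma } = \frac{i}{2}\, f_1^{\sigma } \wedge \overline{f_1^{\sigma }} = \frac{dx \wedge dy}{{\text { Im }}(\tau )}, \end{aligned}$$

the flat volume form of total mass 1.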

There is a hermitian metric on a line bundle L on \({\mathcal {X}}\) such that \(c_1(L_{\sigma }) = \deg (L_{\sigma }) \mu _{\mathrm {can}}^{\sigma }\) for each embedding \(\sigma : K \rightarrow {\mathbb {C}}\). This metric is unique up to scalar multiplication. Such a metric is called admissible. An admissible metric may also be obtained using the canonical Green’s function; we describe that procedure presently.

Now let X be a Riemann surface of genus greater than 1 and \(\mu _{\mathrm {can}}\) the canonical volume form. The canonical Green’s function for X is the unique solution of the differential equation

$$\begin{aligned} \partial _{z} \partial _{\overline{z}}\ {\mathfrak {g}}_{\mathrm {can}}(z,w)=i\pi (\mu _{\mathrm {can}}(z)-\delta _w(z)) \end{aligned}$$

where \(\delta _w(z)\) is the Dirac delta distribution, with the normalization condition

$$\begin{aligned} \int _X {\mathfrak {g}}_{\mathrm {can}}(z,w) \mu _{\mathrm {can}}(z) = 0. \end{aligned}$$

For \(Q \in X\) there is a unique admissible metric on \(L = {\mathcal {O}}_X(Q)\) such that the norm of the constant function 1, which is a section of \({\mathcal {O}}_X(Q)\), at the point P is given by

$$\begin{aligned} |1|(P) = \exp ({\mathfrak {g}}_{\mathrm {can}}(P,Q)). \end{aligned}$$

By tensoring we can get an admissible metric on any line bundle on X.

Let again \({\mathcal {X}}\) be an arithmetic surface over R as above. Now we assume that the genus of the generic fiber of \({\mathcal {X}}\) is greater than 1. To any line bundle L on \({\mathcal {X}}\) we can associate in this way a hermitian metric on \(L_{\sigma }\) for each \(\sigma \). This metric is called the Arakelov metric.

Let L and M be two line bundles on \({\mathcal {X}}\); we equip them with the Arakelov metrics to get metrized line bundles \(\bar{L}\) and \(\bar{M}\). The Arakelov intersection pairing of L and M is defined as the arithmetic intersection pairing of \(\bar{L}\) and \(\bar{M}\)

$$\begin{aligned} \langle L, M \rangle _{Ar} = \langle \bar{L}, \bar{M} \rangle . \end{aligned}$$

It relates to the canonical Green’s function as follows. Let l and m be meromorphic sections of L and M as above. Assume that the corresponding divisors do not have any common components. Furthermore let

$$\begin{aligned} {{\,\mathrm{div}\,}}(l)_{\sigma } = \sum _{\alpha } n_{\alpha , \sigma } P_{\alpha , \sigma }, \quad \text {and} \quad {{\,\mathrm{div}\,}}(m)_{\sigma } = \sum _{\beta } r_{\beta , \sigma } Q_{\beta , \sigma } \end{aligned}$$

then

$$\begin{aligned} \langle L, M \rangle _{Ar} = \langle L, M \rangle _{\text {fin}} - \sum _{\sigma : K \rightarrow {\mathbb {C}}} \sum _{\alpha , \beta } n_{\alpha ,\sigma }r_{\beta ,\sigma }\ {\mathfrak {g}}_{\mathrm {can}}^{\sigma } (P_{\alpha ,\sigma }, Q_{\beta ,\sigma }). \end{aligned}$$

By \(\overline{\omega }_{{\mathcal {X}}, Ar}\) we denote the relative dualizing sheaf on \({\mathcal {X}}\) (see Qing Liu [26, chapter 6, section 6.4.2]) equipped with the Arakelov metric. We shall usually denote this simply by \(\overline{\omega }\) if the arithmetic surface \({\mathcal {X}}\) is clear from the context.

We are interested in a particular invariant of the modular curve \(X_0(p^2)\) which arises from Arakelov geometry and has applications in number theory. The modular curve \(X_0(p^2)\) is defined over \({\mathbb {Q}}\) and has a minimal regular model \({\mathcal {X}}_0(p^2)\) over \({\mathbb {Z}}\) for primes \(p>5\). In this paper we shall calculate the Arakelov self-intersection \(\overline{\omega }^2 = \langle \overline{\omega }, \overline{\omega } \rangle \) of the relative dualizing sheaf on \({\mathcal {X}}_{0}(p^2)\).

We retain the notation K for a number field and R for its ring of integers. If X is a smooth curve over K, then a regular model for X is an arithmetic surface \(p: {\mathcal {X}}\rightarrow {{\,\mathrm{Spec}\,}}R\) together with an isomorphism of the generic fiber \({\mathcal {X}}_{(0)}\) with X. If the genus of X is greater than 1, then there is a minimal regular model \({\mathcal {X}}_{min}\), which is unique. \({\mathcal {X}}_{min}\) is minimal among the regular models for X in the sense that any proper birational morphism to another regular model is an isomorphism. An equivalent criterion for minimality is that \({\mathcal {X}}_{min}\) does not have any prime vertical divisor that can be blown down without introducing a singularity.

A regular model for \(X_0(p^2)/{\mathbb {Q}}\) was constructed in Edixhoven [10]. We denote this model by \(\widetilde{{\mathcal {X}}}_0(p^2) / {\mathbb {Z}}\). This model is not minimal but a minimal model \({\mathcal {X}}_0(p^2)\) is easily obtained by blowing down certain prime vertical divisors. We describe these constructions in Sect. 5.

Remark 2.1

By [8, Theorem 3.1.1], the genus \(g_{p^2}\) of \(X_0(p^2)\) is given by

$$\begin{aligned} g_{p^2}=1+\frac{(p+1)(p-6)-12c}{12} \end{aligned}$$

where \(c \in \left\{ 0,\frac{1}{2},\frac{2}{3}, \frac{7}{6}\right\} \).
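This expression can be cross-checked (our verification, not from [8]) against the standard genus formula \(g = 1 + \frac{\mu }{12} - \frac{\nu _2}{4} - \frac{\nu _3}{3} - \frac{\nu _{\infty }}{2}\) for \(X_0(N)\): for \(N=p^2\) with \(p>3\), the index is \(\mu = p(p+1)\), there are \(\nu _{\infty } = p+1\) cusps, and the numbers \(\nu _2, \nu _3 \in \{0,2\}\) of elliptic points are determined by Legendre symbols; the constant c is exactly \(\frac{\nu _2}{4}+\frac{\nu _3}{3}\). A sketch:

```python
from fractions import Fraction

def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p."""
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def genus_X0_p2(p):
    """Genus of X_0(p^2) for a prime p > 3, via
    g = 1 + mu/12 - nu2/4 - nu3/3 - nu_inf/2."""
    mu = p * (p + 1)           # index of Gamma_0(p^2) in SL_2(Z)
    nu_inf = p + 1             # number of cusps
    nu2 = 1 + legendre(-1, p)  # elliptic points of order 2
    nu3 = 1 + legendre(-3, p)  # elliptic points of order 3
    g = (1 + Fraction(mu, 12) - Fraction(nu2, 4)
         - Fraction(nu3, 3) - Fraction(nu_inf, 2))
    assert g.denominator == 1
    return int(g)

for p in [5, 7, 11, 13, 101]:
    g = genus_X0_p2(p)
    # Solve g = 1 + ((p+1)(p-6) - 12c)/12 for c and check the claimed values.
    c = Fraction((p + 1) * (p - 6) - 12 * (g - 1), 12)
    assert c in {Fraction(0), Fraction(1, 2), Fraction(2, 3), Fraction(7, 6)}
```

The four values of c in the remark correspond exactly to the four possibilities for \((\nu _2, \nu _3)\).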

3 Canonical Green’s functions and Eisenstein series

In this section, we evaluate the canonical Green’s function \({\mathfrak {g}}_{\mathrm {can}}\) for \(X_0(p^2)\) at the cusps in terms of the Eisenstein series.

3.1 Eisenstein series

We recall the definition and some properties of the Eisenstein series that we need for our purpose. For a more elaborate discussion of Eisenstein series for general congruence subgroups, we refer to Kühn [22] or Grados [13, p. 10]. Let \(\partial (X_0(p^2))\) be the set of all cusps of \(X_0(p^2)\), for which we have the following complete description by [5]

$$\begin{aligned} \partial \big (X_0(p^2)\big )=\{0,\infty \} \cup \left\{ \frac{1}{lp}: l=1, \ldots , (p-1) \right\} . \end{aligned}$$
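This list agrees with the general cusp count \(\#\partial (X_0(N)) = \sum _{d \mid N} \phi (\gcd (d, N/d))\): for \(N = p^2\) the divisors \(d = 1, p, p^2\) contribute \(1 + (p-1) + 1 = p+1\) cusps. A quick sketch (our own check):

```python
from math import gcd

def euler_phi(n):
    """Euler's totient, by trial-division factorization (fine for small n)."""
    result, m, q = n, n, 2
    while q * q <= m:
        if m % q == 0:
            result -= result // q
            while m % q == 0:
                m //= q
        q += 1
    if m > 1:
        result -= result // m
    return result

def num_cusps_X0(N):
    # Standard cusp count for X_0(N).
    return sum(euler_phi(gcd(d, N // d)) for d in range(1, N + 1) if N % d == 0)

for p in [5, 7, 11, 13]:
    assert num_cusps_X0(p * p) == p + 1
```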

For \(P \in \partial (X_0(p^2))\), let \(\varGamma _0(p^2)_P\) be the stabilizer of P in \(\varGamma _0(p^2)\). Denote by \(\sigma _P\) any scaling matrix of the cusp P, i.e., \(\sigma _P\) is an element of \({\mathrm {SL}}_2({\mathbb {R}})\) with the properties \(\sigma _P(\infty )=P\) and

$$\begin{aligned} \sigma _P^{-1}\varGamma _0(p^2)_P \sigma _P=\varGamma _0(p^2)_{\infty }=\left\{ \pm \left( \begin{array}{cc} 1 &{}\quad m\\ 0 &{}\quad 1\\ \end{array}\right) \mid m \in {\mathbb {Z}}\right\} . \end{aligned}$$

We fix such a matrix. For \(\gamma =\left( {\begin{matrix} a &{}\quad b\\ c &{}\quad d \end{matrix}}\right) \in {\mathrm {SL}}_2({\mathbb {R}})\) and \(k \in \{0,2\}\), define the automorphic factor of weight k to be

$$\begin{aligned} j_{\gamma }(z;k)=\frac{(cz+d)^k}{|cz+d|^k}. \end{aligned}$$
(3.1)

Let \({\mathbb {H}}=\{z \in {\mathbb {C}} : {\text { Im }}(z)>0\}\) be the complex upper half plane.

Definition 3.1

For \(z \in {\mathbb {H}}\) and \(s \in {\mathbb {C}}\) with \({\text { Re }}(s)>1\), the non-holomorphic Eisenstein series \(E_{P,k}(z,s)\) at a cusp \(P \in \partial (X_0(p^2))\) of weight k is defined to be

$$\begin{aligned} E_{P,k}(z,s)=\sum _{\gamma \in \varGamma _0(p^2)_P\backslash \varGamma _0(p^2)} \big ({\text { Im }}(\sigma _P^{-1} \gamma z) \big )^s j_{\sigma _P^{-1} \gamma }(z;k)^{-1}. \end{aligned}$$

The series \(E_{P,k}(z,s)\) is a holomorphic function of s in the region \({\text { Re }}(s)>1\) and for each such s, it is an automorphic form (function if \(k=0\)) of z with respect to \(\varGamma _0(p^2)\). Moreover, it has a meromorphic continuation to the whole complex plane. Also, \(E_{P,k}\) is an eigenfunction of the hyperbolic Laplacian \(\varDelta _k\) of weight k. Recall that

$$\begin{aligned} \varDelta _k=y^2\left( \frac{\partial ^2}{\partial x^2} + \frac{\partial ^2}{\partial y^2}\right) -ik(k-1) y \frac{\partial }{\partial x}. \end{aligned}$$
(3.2)

For any \(N \in {\mathbb {N}}\), let \(v_{\varGamma _0(N)}\) be the hyperbolic volume associated to the Fuchsian group of the first kind \(\varGamma _0(N)\) [8, Equation 5.15, p. 183]. We note that \(E_{P,0}\) has a simple pole at \(s=1\) with residue \(1/v_{\varGamma _0(p^2)}\) [16, Proposition 6.13], independent of the z variable. Being an automorphic function, \(E_{P,0}(z,s)\) has a Fourier series expansion at any cusp Q, given by

$$\begin{aligned} E_{P,0}(\sigma _Q(z),s) =\delta _{P,Q}y^s+\phi _{P,Q}^{\varGamma _0(p^2)}(s)y^{1-s} +\sum _{n \ne 0} \phi _{P,Q} ^{\varGamma _0(p^2)}(n,s) W_s (nz); \end{aligned}$$
(3.3)

where

$$\begin{aligned} \phi _{P, Q}^{\varGamma _0(p^2)}(s)&= \sqrt{\pi } \frac{\varGamma (s-\frac{1}{2})}{\varGamma (s)}\sum _{c=1}^{\infty } c^{-2s}S_{P, Q}(0,0; c),\\ \phi _{P,Q}^{\varGamma _0(p^2)}(n,s)&=\pi ^s \varGamma (s)^{-1} \vert n \vert ^{s-1} \sum _{c=1}^{\infty }c^{-2s} S_{P,Q}(0,n;c). \end{aligned}$$

Here, \(S_{P, Q}(a,b; c)\) is the Kloosterman sum [16, p. 48, equation (2.23)] and \(W_s(z)\) is the Whittaker function [16, p. 20, equation (1.26)]. Let

$$\begin{aligned} {\mathcal {C}}_{P, Q}^{\varGamma _0(p^2)} = \lim _{s \rightarrow 1} \left( \phi _{P, Q}^{\varGamma _0(p^2)}(s)-\frac{1}{v_{\varGamma _0(p^2)}} \frac{1}{s-1}\right) \end{aligned}$$
(3.4)

be the constant term in the Laurent series expansion of \(\phi _{P, Q}^{\varGamma _0(p^2)}(s)\).

The following proposition regarding the Eisenstein series of weight zero at the cusp \(\infty \) will be crucial in the subsequent sections.

Proposition 3.2

The Eisenstein series of weight zero at the cusp \(\infty \) can be expressed as

$$\begin{aligned} E_{\infty ,0}(z,s)=\frac{1}{2}\frac{1}{\zeta (2s)}\frac{1}{1-p^{-2s}}\left[ \sum _{(m,n)}^{\prime }\frac{y^s}{\vert p^2mz+n\vert ^{2s}}-\sum _{(m,n)}^{\prime }\frac{y^s}{\vert p^2mz+pn\vert ^{2s}}\right] . \end{aligned}$$
(3.5)

Here, \(\sum _{(m,n)}^{\prime }\) denotes summation over \((m,n) \in {\mathbb {Z}}^2-\{(0,0)\}\).

Proof

For \(t \in \{1,p\}\), notice that

$$\begin{aligned}&\sum _{(m,n)}^{\prime }\frac{y^s}{\vert p^2mz+tn\vert ^{2s}}=\sum _{d=1}^{\infty } \sum _{(m,n)\in {\mathbb {Z}}^2; (m,n)=d}^{\prime }\frac{y^s}{\vert p^2mz+tn\vert ^{2s}}\\&\quad =\sum _{d=1}^{\infty } \sum _{(m',n')\in {\mathbb {Z}}^2; (m',n')=1}^{\prime }\frac{y^s}{\vert p^2dm'z+td n'\vert ^{2s}}\\&\quad =\zeta (2s)\left( \sum _{(m,n)=1} \frac{y^s}{\vert p^2 mz +tn \vert ^{2s}}\right) . \end{aligned}$$

In other words, Eq. (3.5) is equivalent to

$$\begin{aligned} 2(1-p^{-2s})E_{\infty ,0}(z,s)=\sum _{(m,n)=1} \frac{y^s}{\vert p^2 mz +n \vert ^{2s}} - \sum _{(m,n)=1} \frac{y^s}{\vert p^2 mz + pn \vert ^{2s}}. \end{aligned}$$
(3.6)

The left hand side of (3.6) is equal to

$$\begin{aligned} \begin{aligned}&2(1-p^{-2s}) \sum _{\gamma \in \varGamma _0(p^2)_{\infty }\backslash \varGamma _0(p^2)} {\text { Im }}(\gamma z)^s \\&\quad = 2(1-p^{-2s}) \frac{1}{2}\sum _{\begin{array}{c} (m,n)=1,\\ m\equiv 0(p^2) \end{array}} \frac{y^s}{\vert mz +n \vert ^{2s}}\\&\quad = \sum _{\begin{array}{c} (m,n)=1,\\ m\equiv 0(p^2) \end{array}} \frac{y^s}{\vert mz +n \vert ^{2s}}-p^{-2s}\sum _{\begin{array}{c} (m,n)=1,\\ m\equiv 0(p^2) \end{array}} \frac{y^s}{\vert mz +n \vert ^{2s}}\\&\quad = \sum _{\begin{array}{c} (m,n)=1,\\ m\equiv 0(p^2) \end{array}} \frac{y^s}{\vert mz +n \vert ^{2s}} -\sum _{\begin{array}{c} (m,n)=1,\\ m\equiv 0(p^2) \end{array}} \frac{y^s}{\vert pmz + pn \vert ^{2s}}\\&\quad = \sum _{(p^2m,n)=1}\frac{y^s}{\vert p^2mz +n \vert ^{2s}}-\sum _{(p^2m,n)=1}\frac{y^s}{\vert p^3mz + pn \vert ^{2s}}. \end{aligned} \end{aligned}$$
(3.7)

The first term in the right hand side of (3.6) is equal to

$$\begin{aligned}&\sum _{(m,n)=1, p\not |n}\frac{y^s}{\vert p^2 mz +n \vert ^{2s}} + \sum _{(m,n)=1, p |n}\frac{y^s}{\vert p^2 mz +n \vert ^{2s}}\\&\quad = \sum _{(m,n)=1, p\not |n}\frac{y^s}{\vert p^2 mz +n \vert ^{2s}}+\sum _{(m,pn)=1}\frac{y^s}{\vert p^2 mz + pn \vert ^{2s}}, \end{aligned}$$

and the second term in the right hand side of (3.6) is equal to

$$\begin{aligned}&\sum _{\begin{array}{c} (m,n)=1,\\ (m,pn)=1 \end{array}} \frac{y^s}{\vert p^2 mz + pn \vert ^{2s}}+\sum _{\begin{array}{c} (m,n)=1,\\ (m,pn)=p \end{array}} \frac{y^s}{\vert p^2 mz + pn \vert ^{2s}}\\&\quad = \sum _{\begin{array}{c} (m,n)=1,\\ (m,pn)=1 \end{array}} \frac{y^s}{\vert p^2 mz + pn \vert ^{2s}} + \sum _{\begin{array}{c} (pm,n)=1, \\ (pm,pn)=p \end{array}} \frac{y^s}{\vert p^3 mz + pn \vert ^{2s}}\\&\quad = \sum _{\begin{array}{c} (m,n)=1,\\ (m,pn)=1 \end{array}} \frac{y^s}{\vert p^2 mz + pn \vert ^{2s}} + \sum _{\begin{array}{c} (pm,n)=1, \\ (m,n)=1 \end{array}} \frac{y^s}{\vert p^3 mz + pn \vert ^{2s}}. \end{aligned}$$

Therefore the right hand side of (3.6) is

$$\begin{aligned} \sum _{(m,n)=1, p\not |n}\frac{y^s}{\vert p^2 mz+n\vert ^{2s}}-\sum _{(pm,n)=1,(m,n)=1}\frac{y^s}{\vert p^3 mz + pn \vert ^{2s}}. \end{aligned}$$
(3.8)

To complete the proof using (3.7) and (3.8), we only need to observe that \((p^2m,n)=1\) if and only if \((m,n)=1\) and \(p\not | n\), so that

$$\begin{aligned} \sum _{(p^2m,n)=1}\frac{y^s}{\vert p^2mz +n \vert ^{2s}}=\sum _{(m,n)=1, p\not |n}\frac{y^s}{\vert p^2 mz+n\vert ^{2s}}; \end{aligned}$$

and similarly \((p^2m,n)=1\) if and only if \((pm,n)=1\) and \((m,n)=1\), so that

$$\begin{aligned} \sum _{(p^2m,n)=1}\frac{y^s}{\vert p^3mz + pn \vert ^{2s}}=\sum _{(pm,n)=1,(m,n)=1}\frac{y^s}{\vert p^3 mz + pn \vert ^{2s}}. \end{aligned}$$

\(\square \)
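Identity (3.5) can also be verified numerically. The sketch below (our own sanity check, not part of the proof) truncates every lattice sum to the box \(|m|, |n| \le M\) and compares the defining coset sum for \(E_{\infty ,0}(z,s)\) with the right hand side of (3.5) at the sample point \(z=i\), \(s=2\), \(p=3\), where \(\zeta (4) = \pi ^4/90\):

```python
import math

p, s, M = 3, 2, 300
y = 1.0  # z = i, so Im(z) = 1

def norm2(a, b):
    # |a*z + b|^2 at z = i
    return float(a * a + b * b)

# Left side: E_{infty,0}(i, 2) as (1/2) * sum over coprime pairs (c, n), p^2 | c.
lhs = 0.0
for m in range(-M, M + 1):
    c = p * p * m
    for n in range(-M, M + 1):
        if (m, n) != (0, 0) and math.gcd(c, n) == 1:
            lhs += y ** s / norm2(c, n) ** s
lhs *= 0.5

# Right side of (3.5): full lattice sums over (m, n) != (0, 0).
S1 = S2 = 0.0
for m in range(-M, M + 1):
    for n in range(-M, M + 1):
        if (m, n) != (0, 0):
            S1 += y ** s / norm2(p * p * m, n) ** s
            S2 += y ** s / norm2(p * p * m, p * n) ** s
zeta_2s = math.pi ** 4 / 90.0  # zeta(4)
rhs = (S1 - S2) / (2 * zeta_2s * (1 - p ** (-2 * s)))

print(abs(lhs - rhs))  # agreement up to truncation error
```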

3.1.1 Computation of \({\mathcal {C}}_{\infty , \infty }^{\varGamma _0(p^2)}\) and \({\mathcal {C}}_{\infty ,0}^{\varGamma _0(p^2)}\)

In this section, we compute the terms \({\mathcal {C}}_{\infty , \infty }^{\varGamma _0(p^2)}\) and \({\mathcal {C}}_{\infty ,0}^{\varGamma _0(p^2)}\) that appear in Eq. 3.17. To do so, we expand the constant terms of the Eisenstein series, \(\phi _{\infty ,\infty }^{\varGamma _0(p^2)}(s)\) and \(\phi _{\infty ,0}^{\varGamma _0(p^2)}(s)\), as defined above. The computations below are inspired by [21].

Lemma 3.3

The Laurent series expansion of \(\phi _{\infty ,\infty }^{\varGamma _0(p^2)}(s)\) at \(s=1\) is given by

$$\begin{aligned} \phi _{\infty ,\infty }^{\varGamma _0(p^2)}(s)= & {} \frac{1}{v_{\varGamma _0(p^2)}} \frac{1}{s-1}+\frac{1}{v_{\varGamma _0(p^2)}}\left( 2\gamma _{EM} +\frac{a\pi }{6} -\frac{(2p^2-1)\log (p^2)}{p^2-1}\right) +O(s-1), \end{aligned}$$

where \(\gamma _{EM}\) is the Euler–Mascheroni constant and a is the derivative of \(\sqrt{\pi }\frac{\varGamma (s-\frac{1}{2})}{\varGamma (s) \zeta (2s)}\) at \(s=1\).

Proof

From [16, page 48], we compute

$$\begin{aligned} S_{\infty , \infty }(0,0;c)=\left| \{ d\pmod c | \left( {\begin{matrix} a &{}\quad b\\ c &{}\quad d \end{matrix}}\right) \in \varGamma _0(p^2)\}\right| = {\left\{ \begin{array}{ll} 0 &{}\quad \hbox { if}\ p^2 \not \mid c ,\\ \phi (c) &{}\quad \hbox { if}\ p^2 \mid c. \\ \end{array}\right. } \end{aligned}$$

Writing c in the form \(c=p^{k+2}n\) where \(k\ge 0\) and \(p \not \mid n\),

$$\begin{aligned} \phi (c)=\phi (p^{k+2}n)=\phi (p^{k+2})\phi (n)=(p-1)p^{k+1}\phi (n). \end{aligned}$$

Therefore,

$$\begin{aligned} \sum _{c=1}^{\infty } c^{-2s}S_{\infty ,\infty }(0,0;c)&=\sum _{n=1, p\not \mid n}^{\infty }\sum _{k=0}^{\infty }(p^{k+2}n)^{-2s}(p-1)p^{k+1}\phi (n)\\&=p^{-4s+1}(p-1)\sum _{n=1, p\not \mid n}^{\infty } n^{-2s}\phi (n)\sum _{k=0}^{\infty } p^{(-2s+1)k}\\&= p^{-4s+1}(p-1) \left( \frac{\zeta (2s-1)}{\zeta (2s)}\frac{p^{2s}-p}{p^{2s}-1}\right) \left( \frac{1}{1-p^{-2s+1}}\right) \\&=\frac{p(p-1)}{p^{2s}(p^{2s}-1)}\frac{\zeta (2s-1)}{\zeta (2s)}. \end{aligned}$$

Hence, we deduce that

$$\begin{aligned} \phi _{\infty ,\infty }^{\varGamma _0(p^2)}(s)=\big (p(p-1)\big )\left( \frac{1}{p^{2s}(p^{2s}-1)} \right) \left( \sqrt{\pi }\frac{\varGamma (s-\frac{1}{2})}{\varGamma (s) \zeta (2s)} \right) \zeta (2s-1). \end{aligned}$$
(3.9)

The second factor is holomorphic at \(s=1\) and has the Taylor series expansion

$$\begin{aligned} \frac{1}{p^{2s}(p^{2s}-1)}= \frac{1}{p^2(p^2-1)} - \frac{ (2p^2-1) \log (p^2) }{p^2(p^2-1)^2}(s-1) + O\big ((s-1)^2\big ). \end{aligned}$$
(3.10)
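The linear coefficient in (3.10) can be confirmed by a quick numerical differentiation (a sanity check of ours, with \(p=5\) as a sample value):

```python
import math

p, h = 5, 1e-5

def f(s):
    # The second factor 1/(p^{2s}(p^{2s}-1)).
    return 1.0 / (p ** (2 * s) * (p ** (2 * s) - 1))

claimed = -(2 * p * p - 1) * math.log(p * p) / (p * p * (p * p - 1) ** 2)
numeric = (f(1 + h) - f(1 - h)) / (2 * h)  # central difference at s = 1
print(abs(numeric - claimed))
```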

The third factor is also holomorphic at \(s=1\) and has the Taylor series expansion

$$\begin{aligned} \sqrt{\pi }\frac{\varGamma (s-\frac{1}{2})}{\varGamma (s) \zeta (2s)} = \frac{6}{\pi }+a(s-1)+O\big ((s-1)^2\big ). \end{aligned}$$
(3.11)

Finally, the Riemann zeta function \(\zeta (2s-1)\) is meromorphic at \(s=1\) with the Laurent series expansion

$$\begin{aligned} \zeta (2s-1)=\frac{1}{2(s-1)}+\gamma _{EM}+O(s-1). \end{aligned}$$
(3.12)

Multiplying these expansions, we see that

$$\begin{aligned} \phi _{\infty ,\infty }^{\varGamma _0(p^2)}(s)= & {} \left( \frac{1}{p(p+1)} \frac{6}{\pi } \frac{1}{2}\right) \frac{1}{s-1} \\&+ \left( \frac{1}{p(p+1)} \frac{6}{\pi } \gamma _{EM}+ \frac{1}{p(p+1)} a \frac{1}{2} -\frac{(2p^2-1)\log (p^2)}{p(p+1)(p^2-1)} \frac{6}{\pi }\frac{1}{2} \right) \\&+ O(s-1). \end{aligned}$$

This equation gives the result observing that \(v_{\varGamma _0(p^2)}=\frac{\pi }{3}p(p+1)\). \(\square \)
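As an independent numerical check of (3.9) (ours, not part of the paper), one can compare partial sums of \(\sum _{c} c^{-2s}S_{\infty ,\infty }(0,0;c) = \sum _{p^2 \mid c} \phi (c) c^{-2s}\) with the closed form at a sample point, say \(p=3\) and \(s=2\), using the exact value \(\zeta (4)=\pi ^4/90\) and a tail-corrected partial sum for \(\zeta (3)\):

```python
import math

N = 200_000

def totients(n):
    """phi[0..n] by a sieve over primes."""
    phi = list(range(n + 1))
    for i in range(2, n + 1):
        if phi[i] == i:  # i is prime
            for j in range(i, n + 1, i):
                phi[j] -= phi[j] // i
    return phi

phi = totients(N)
p, s = 3, 2

# Partial sum of phi(c) c^{-2s} over c divisible by p^2.
lhs = sum(phi[c] / c ** 4 for c in range(p * p, N + 1, p * p))

zeta3 = sum(1.0 / n ** 3 for n in range(1, N + 1)) + 1.0 / (2 * N ** 2)
zeta4 = math.pi ** 4 / 90.0
rhs = p * (p - 1) / (p ** (2 * s) * (p ** (2 * s) - 1)) * zeta3 / zeta4

print(abs(lhs - rhs))  # small truncation error
```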

Corollary 3.4

The constant term \( {\mathcal {C}}_{\infty , \infty }^{\varGamma _0(p^2)}\) (see (3.4)) in the Laurent series expansion at \(s=1\) of the Eisenstein series \(E_{\infty ,0}\) at the cusp \(\infty \) is given by

$$\begin{aligned} {\mathcal {C}}_{\infty , \infty }^{\varGamma _0(p^2)}=\frac{1}{v_{\varGamma _0(p^2)}}\left( 2\gamma _{EM} +\frac{a\pi }{6} -\frac{(2p^2-1)\log (p^2)}{p^2-1}\right) . \end{aligned}$$

Lemma 3.5

The Laurent series expansion of \(\phi _{\infty ,0}^{\varGamma _0(p^2)}(s)\) at \(s=1\) is

$$\begin{aligned} \phi _{\infty ,0}^{\varGamma _0(p^2)}(s)= & {} \frac{1}{v_{\varGamma _0(p^2)}}\frac{1}{s-1} \nonumber \\&+ \frac{1}{v_{\varGamma _0(p^2)}} \left( 2\gamma _{EM} +\frac{a\pi }{6} -\frac{(p^2-p-1)}{p^2-1} \log (p^2)\right) +O(s-1);\nonumber \\ \end{aligned}$$
(3.13)

where \(\gamma _{EM}\) and a are as in Lemma 3.3.

Proof

Let \(\sigma _0\) be the scaling matrix of the cusp 0 defined by

$$\begin{aligned} \sigma _0^{-1}=\frac{1}{\sqrt{p^2}}W_{p^2} \in {\mathrm {SL}}_2({\mathbb {R}}), \end{aligned}$$
(3.14)

where \(W_{p^2} = \left( {\begin{matrix}0 &{} 1\\ -p^2 &{} 0\end{matrix}}\right) \) is the Atkin–Lehner involution. For the cusp \(\infty \), we take \(\sigma _{\infty }=I\) as a scaling matrix. From [16, page 48], we then have

$$\begin{aligned} S_{\infty ,0}(0,0;c)&=\left| \left\{ d\pmod c | \left( {\begin{matrix} a &{}\quad b\\ c &{}\quad d \end{matrix}}\right) = \left( {\begin{matrix}pb^{\prime } &{} -a^{\prime }/p \\ pd^{\prime } &{} -pc^{\prime }\end{matrix}}\right) , a^{\prime },b^{\prime },c^{\prime },d^{\prime } \in {\mathbb {Z}}, a^{\prime }d^{\prime }-b^{\prime }c^{\prime }p^2=1\right\} \right| \\&= {\left\{ \begin{array}{ll} 0 &{}\quad \hbox {if } p \not \mid c \hbox { or } p^2 \mid c,\\ \phi (n) &{}\quad \hbox {if } c=pn \hbox { with } p\not \mid n. \end{array}\right. } \end{aligned}$$

Therefore, we obtain

$$\begin{aligned} \sum _{c} c^{-2s} S_{\infty , 0}(0,0;c) = \sum _{n=1, p \not \mid n}^{\infty } p^{-2s}n^{-2s} \phi (n) =\frac{1}{p^{2s}}\frac{\zeta (2s-1)}{\zeta (2s)}\frac{p^{2s}-p}{p^{2s}-1}. \end{aligned}$$

Thus, we deduce that

$$\begin{aligned} \phi _{\infty , 0}^{\varGamma _0(p^2)}(s)= & {} (p^{2s}-p) \left( \frac{1}{p^{2s}(p^{2s}-1)}\right) \left( \sqrt{\pi } \frac{\varGamma (s-\frac{1}{2})}{\varGamma (s)\zeta (2s)}\right) \zeta (2s-1) \\= & {} \frac{1}{p(p-1)} (p^{2s}-p) \phi _{\infty , \infty }^{\varGamma _0(p^2)}(s), \end{aligned}$$

using (3.9). The function \(p^{2s}-p\) is holomorphic near \(s=1\) and has the Taylor series expansion

$$\begin{aligned} p^{2s}-p=(p^2-p)+(p^2\log (p^2) )(s-1) +O\big ((s-1)^2\big ). \end{aligned}$$
(3.15)

Combining this with Lemma 3.3 yields (3.13). \(\square \)
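The Dirichlet series identity used above, \(\sum _{n \ge 1,\, p \nmid n} \phi (n) (pn)^{-2s} = \frac{1}{p^{2s}}\frac{\zeta (2s-1)}{\zeta (2s)}\frac{p^{2s}-p}{p^{2s}-1}\), can likewise be checked numerically; a sketch of ours, again at the sample point \(p=3\), \(s=2\):

```python
import math

N = 200_000
p, s = 3, 2

# Sieve Euler's totient up to N.
phi = list(range(N + 1))
for i in range(2, N + 1):
    if phi[i] == i:  # i is prime
        for j in range(i, N + 1, i):
            phi[j] -= phi[j] // i

# Partial sum over c = p*n with p not dividing n of phi(n) c^{-2s}.
lhs = sum(phi[n] / (p * n) ** 4 for n in range(1, N + 1) if n % p != 0)

zeta3 = sum(1.0 / n ** 3 for n in range(1, N + 1)) + 1.0 / (2 * N ** 2)
zeta4 = math.pi ** 4 / 90.0
rhs = (zeta3 / zeta4) * (p ** (2 * s) - p) / (p ** (2 * s) * (p ** (2 * s) - 1))

print(abs(lhs - rhs))  # small truncation error
```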

By Lemma 3.5, we compute the constant term of the Eisenstein series:

Corollary 3.6

The constant term \({\mathcal {C}}_{\infty ,0}^{\varGamma _0(p^2)}\) (see (3.4)) of the Eisenstein series \(E_{\infty ,0}\) at the cusp 0 is given by

$$\begin{aligned} {\mathcal {C}}_{\infty ,0}^{\varGamma _0(p^2)}=\frac{1 }{v_{\varGamma _0(p^2)}} \left( 2\gamma _{EM} +\frac{a\pi }{6} -\frac{(p^2-p-1)}{p^2-1} \log (p^2)\right) . \end{aligned}$$

3.2 Relation between Green’s function and Eisenstein series

Let \(G_s(z,w)\) be the automorphic Green’s function [1, p. 4]. The automorphic Green’s function and the Eisenstein series are related by the following equation [1, Proposition E, p. 5]

$$\begin{aligned} {\mathfrak {g}}_{\mathrm {can}}(\infty ,0) =&-2\pi \lim _{s \rightarrow 1} \left( \phi _{\infty , 0}^{\varGamma _0(p^2)}(s)-\frac{1}{v_{\varGamma _0(p^2)}}\frac{1}{s-1}\right) -\frac{2\pi }{v_{\varGamma _0(p^2)}}\\&+ 2\pi \lim _{s \rightarrow 1} \left( \frac{1}{v_{\varGamma _0(p^2)}}\frac{1}{s(s-1)}+\int _{X_0(p^2) \times X_0(p^2)} G_s (z,w)\mu _{\text {can}}(z) \mu _{\text {can}}(w) \right) \\&+2\pi \lim _{s \rightarrow 1} \left( \int _{X_0(p^2)} E_{\infty ,0}(z,s) \mu _{\text {can}}(z) \right. \\&\left. + \int _{X_0(p^2)} E_{0,0}(z,s) \mu _{\text {can}}(z) -\frac{2}{v_{\varGamma _0(p^2)}} \frac{1}{s-1} \right) . \end{aligned}$$

For brevity, we write

$$\begin{aligned} R_{\infty }^{\varGamma _0(p^2)}= & {} \frac{1}{2} \lim _{s \rightarrow 1} \left( \int _{X_0(p^2)} E_{\infty ,0}(z,s) \mu _{\text {can}}(z) \right. \nonumber \\&\left. + \int _{X_0(p^2)} E_{0,0}(z,s) \mu _{\text {can}}(z) -\frac{2}{v_{\varGamma _0(p^2)}} \frac{1}{s-1} \right) . \end{aligned}$$
(3.16)

We will show in Sect. 3.3 that \(R_{\infty }^{\varGamma _0(p^2)}\) as defined above coincides with \({\mathcal {R}}_{\infty }^{\varGamma _0(p^2)}\) that appears in Theorem 1.1. From [13, Proposition 4.1.2, p. 65], we know that

$$\begin{aligned} 2\pi \lim _{s \rightarrow 1} \left( \frac{1}{v_{\varGamma _0(p^2)}}\frac{1}{s(s-1)}+\int _{X_0(p^2) \times X_0(p^2)} G_s(z,w) \mu _{\text {can}}(z) \mu _{\text {can}}(w) \right) =O\left( \frac{1}{g_{p^2}}\right) . \end{aligned}$$

Note that \(G_s(z,w)=-G_s^{\varGamma _0(p^2)}(z,w)\) in loc. cit. The key inputs in the above important estimate are results of Jorgenson–Kramer [18, Lemma 3.7, p. 690] and [19] on the constant term of the logarithmic derivative of the Selberg zeta function for varying congruence subgroups.

Since \(\frac{g_{p^2}}{v_{\varGamma _0(p^2)}}=O(1)\), we have

$$\begin{aligned} {\mathfrak {g}}_{\mathrm {can}}(\infty ,0)= -2\pi {\mathcal {C}}_{\infty ,0}^{\varGamma _0(p^2)} +4\pi R_{\infty }^{\varGamma _0(p^2)} + O\left( \frac{1}{g_{p^2}}\right) . \end{aligned}$$
(3.17)

3.3 Computation of \({\mathcal {R}}_{\infty }^{\varGamma _0(p^2)}\)

We first show that \(R_{\infty }^{\varGamma _0(p^2)}={\mathcal {R}}_{\infty }^{\varGamma _0(p^2)}\) is the constant term in the Laurent series expansion of the Rankin–Selberg transform at the cusp \(\infty \) of the Arakelov metric. We start by writing down the canonical volume form \(\mu _{\mathrm {can}}\) in coordinates. Let \(S_2\big (\varGamma _0(p^2)\big )\) be the space of holomorphic cusp forms of weight 2 with respect to \(\varGamma _0(p^2)\). We then have an isomorphism \(S_2\big (\varGamma _0(p^2)\big ) \cong H^0(X_0(p^2), \Omega ^1)\) given by \(f(z) \mapsto f(z) dz\). The space of cusp forms \(S_2\big (\varGamma _0(p^2)\big )\) is equipped with the Petersson inner product. Let \(\{f_1, \ldots , f_{g_{p^2}}\}\) be an orthonormal basis of \(S_2\big (\varGamma _0(p^2)\big )\). The Arakelov metric on \(X_0(p^2)\) is the function given by

$$\begin{aligned} F(z):=\frac{{\text { Im }}(z)^2}{g_{p^2}} \sum _{j=1}^{g_{p^2}} \vert f_j(z)\vert ^2 . \end{aligned}$$
(3.18)

It is easy to see that the Arakelov metric as defined above is independent of the choice of orthonormal basis. In the local coordinate z, the canonical volume form \(\mu _{\mathrm {can}}(z)\) is given by [13, p. 18]

$$\begin{aligned} \mu _{\mathrm {can}}(z)=\frac{i}{2g_{p^2}} \sum _{j=1}^{g_{p^2}} \vert f_j(z)\vert ^2 dz \wedge d\overline{z} = F(z) \mu _{\mathrm {hyp}}. \end{aligned}$$
(3.19)

Next, we recall the definition of the Rankin–Selberg transform of a function at the cusp \(\infty \) [13, p. 19]. Let f be a \(\varGamma _0(p^2)\)-invariant smooth function on \({\mathbb {H}}\) of rapid decay at the cusp \(\infty \), i.e., the constant term in the Fourier series expansion of f at the cusp \(\infty \)

$$\begin{aligned} f(x+iy)= \sum _{n} a_n(y) e^{2\pi i nx}, \end{aligned}$$

satisfies the asymptotic \(a_0(y)=O(y^{-M})\) for some \(M>0\) as \(y \rightarrow \infty \). The Rankin–Selberg transform of f at the cusp \(\infty \), denoted by \(R_f(s)\), is then defined as

$$\begin{aligned} R_f(s):=\int _{Y_0(p^2)} f(z) E_{\infty ,0}(z,s) \mu _{\mathrm {hyp}}(z) = \int _0^{\infty } a_0(y) y^{s-2} \, dy. \end{aligned}$$

The function \(R_f(s)\) is holomorphic for \({\text { Re }}(s) >1\), admits a meromorphic continuation to the whole complex plane, and has a simple pole at \(s=1\) with residue

$$\begin{aligned} \frac{1}{v_{\varGamma _0(p^2)}}\int _{Y_0(p^2)} f(z) \mu _{\mathrm {hyp}}(z). \end{aligned}$$

The following lemma, which is similar to [28, Lemma 3.5], relates the Rankin–Selberg transform at the cusp 0 (defined analogously, with \(\infty \) replaced by 0) to the one at the cusp \(\infty \).

Lemma 3.7

The Rankin–Selberg transforms of the Arakelov metric at the cusps 0 and \(\infty \) are related by

$$\begin{aligned} \int _{Y_0(p^2)} E_{0,0}(z,s) \mu _{\mathrm {can}}(z)=\int _{Y_0(p^2)} E_{\infty ,0}(z,s) \mu _{\mathrm {can}}(z). \end{aligned}$$

Proof

Let \(\sigma _0\) be the scaling matrix of the cusp 0 defined in (3.14). Notice that

$$\begin{aligned} E_{0,0}(z,s)= & {} \sum _{\gamma \in \varGamma _0(p^2)_0 \backslash \varGamma _0(p^2)} \big ({\text { Im }}(\sigma _0^{-1}\gamma z)\big )^s = \sum _{\beta \in \varGamma _0(p^2)_{\infty } \backslash \varGamma _0(p^2)} \big ({\text { Im }}(\beta \sigma _0^{-1} z)\big )^s \\= & {} \sum _{\beta \in \varGamma _0(p^2)_{\infty } \backslash \varGamma _0(p^2)} \big ({\text { Im }}(\sigma _{\infty }^{-1}\beta (\sigma _0^{-1} z))\big )^s = E_{\infty ,0}(\sigma _0^{-1}z,s), \end{aligned}$$

by taking \(\sigma _{\infty }=I\). Now substituting \(z=\sigma _0 w\), and using the fact that \(\mu _{\mathrm {can}}\) is invariant under \(\sigma _0\), the result follows. \(\square \)

As a result, we have the following simpler formula for \(R_{\infty }^{\varGamma _0(p^2)}\) defined in (3.16):

$$\begin{aligned} R_{\infty }^{\varGamma _0(p^2)}=\lim _{s \rightarrow 1} \left( \int _{Y_0(p^2)} E_{\infty ,0}(z,s) \mu _{\text {can}}(z) -\frac{1}{v_{\varGamma _0(p^2)}} \frac{1}{s-1}\right) . \end{aligned}$$
(3.20)

Let F be the Arakelov metric on \(X_0(p^2)\) and write \(\mu _{\text {can}}(z)=F(z) \mu _{\mathrm {hyp}}\) in Eq. (3.20). Then we see that the integral is the Rankin–Selberg transform \(R_F(s)\) of F at the cusp \(\infty \) and hence \(R_{\infty }^{\varGamma _0(p^2)}={\mathcal {R}}_{\infty }^{\varGamma _0(p^2)}\). The next section of the paper is devoted to finding an estimate of \({\mathcal {R}}_{\infty }^{\varGamma _0(p^2)}\) using this formula. In the rest of the paper, we denote by \(R_G\) the Rankin–Selberg transform of a function G at the cusp \(\infty \).

3.4 Epstein zeta functions

In this subsection, we define the Epstein zeta functions and recall some of their basic properties. We first recall a connection between quadratic forms and matrices in the group \(\varGamma _0(N)\). For \(a,b,c \in {\mathbb {Z}}\), let \([a,b,c]\) be the quadratic form

$$\begin{aligned} \varPhi (X, Y)=aX^2+b XY+cY^2=(X,Y) \left( {\begin{matrix} a &{} b/2\\ b/2 &{} c\\ \end{matrix}}\right) (X,Y)^t. \end{aligned}$$

The discriminant of \(\varPhi \) is by definition \({\text { dis }}(\varPhi ) :=b^2-4ac\). For an integer \(l \in {\mathbb {Z}}\) with \(\vert l \vert \ne 2\), define

$$\begin{aligned} Q_l&=\{\varPhi | {\text { dis }}(\varPhi )=l^2-4\},\\ Q_l(N)&=\big \{\varPhi |\varPhi \in Q_l ; \varPhi =[Na, b, c] : a,b, c \in {\mathbb {Z}}\big \}. \end{aligned}$$

The full modular group \({\mathrm {SL}}_2({\mathbb {Z}})\) acts on \(Q_l\) by

$$\begin{aligned} \begin{aligned} Q_l \times {\mathrm {SL}}_2({\mathbb {Z}})&\rightarrow Q_l\\ (\varPhi , \delta )&\mapsto \varPhi \circ \delta \end{aligned} \end{aligned}$$
(3.21)

where \(\varPhi \circ \delta (X,Y)=\varPhi \big ((X, Y)\delta ^t\big )\). If \(\delta =\left( {\begin{matrix} x &{} y\\ z &{} t\\ \end{matrix}}\right) \), then it can be shown that

$$\begin{aligned} \varPhi \circ \delta =\big [\varPhi (x,z), b(x t+y z)+2(axy+czt), \varPhi (y,t)\big ]. \end{aligned}$$

Note that the above defines an action of \(\varGamma _0(N)\) on \(Q_l(N)\).
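For illustration, the explicit formula for \(\varPhi \circ \delta \) can be verified against the defining relation \(\varPhi \circ \delta (X,Y)=\varPhi \big ((X,Y)\delta ^t\big )\) by direct evaluation. The following Python sketch does this for a few sample forms and matrices (the numerical values are hypothetical, chosen only for the check).

```python
# Check the explicit formula for the action Phi∘δ of SL_2(Z) on quadratic forms:
# Phi∘δ(X, Y) = Phi((X, Y) δ^t) should equal
# [Phi(x, z), b(xt + yz) + 2(axy + czt), Phi(y, t)] for δ = [[x, y], [z, t]].
def apply_form(a, b, c, X, Y):
    return a*X*X + b*X*Y + c*Y*Y

def compose(a, b, c, delta):
    x, y, z, t = delta
    return (apply_form(a, b, c, x, z),
            b*(x*t + y*z) + 2*(a*x*y + c*z*t),
            apply_form(a, b, c, y, t))

forms = [(1, 1, -1), (4, 4, -1), (3, 2, 5)]          # sample forms [a, b, c]
deltas = [(1, 1, 0, 1), (2, 1, 1, 1), (5, 2, 2, 1)]  # determinant-1 matrices
for (a, b, c) in forms:
    for delta in deltas:
        x, y, z, t = delta
        assert x*t - y*z == 1
        A, B, C = compose(a, b, c, delta)
        for X in range(-3, 4):
            for Y in range(-3, 4):
                # (X, Y) δ^t = (xX + yY, zX + tY)
                assert apply_form(A, B, C, X, Y) == apply_form(a, b, c, x*X + y*Y, z*X + t*Y)
        # the discriminant is preserved by the action
        assert B*B - 4*A*C == b*b - 4*a*c
```

The last assertion confirms that the action preserves the discriminant, so it restricts to \(Q_l\) as claimed.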

Proposition 3.8

Let \(\varPhi \in Q_l\) be a quadratic form with \({\text { dis }}(\varPhi )=l^2-4\), and let \({\mathrm {SL}}_2(\mathbb {Z})_{\varPhi }\) denote the stabilizer of \(\varPhi \) under the action (3.21). If \(\vert l \vert <2\), then \({\mathrm {SL}}_2(\mathbb {Z})_{\varPhi }\) is finite, and if \(\vert l \vert >2\), then

$$\begin{aligned} {\mathrm {SL}}_2({\mathbb {Z}})_{\varPhi }=\left\{ \pm M^n : n \in \mathbb {Z}\right\} \end{aligned}$$

for some \(M \in {\mathrm {SL}}_2(\mathbb {Z})\) with positive trace and which is unique up to replacing M by \(M^{-1}\). Moreover, if \(\varPhi \in Q_l(N)\), then

$$\begin{aligned} \varGamma _0(N)_{\varPhi }={\mathrm {SL}}_2({\mathbb {Z}})_{\varPhi }. \end{aligned}$$

Proof

Let \(\varPhi =[a,b,c]\) be a quadratic form with discriminant \({\text { dis }}(\varPhi )=\varDelta \). By [34, p. 63, Satz 2], note that \({\mathrm {SL}}_2({\mathbb {Z}})_{\varPhi }\) consists precisely of the following matrices

$$\begin{aligned} U_{\varPhi }(x,y)=\left( \begin{array}{cc} \frac{x-yb}{2} &{}\quad -cy\\ ay &{}\quad \frac{x+yb}{2}\\ \end{array}\right) , \end{aligned}$$
(3.22)

where \((x,y) \in {\mathbb {Z}}^2\) is a solution of the Pell equation \(P_{\varDelta } : x^2-\varDelta y^2=4\). If \(\vert l \vert <2\), then \(\varDelta <0\), in which case \(P_{\varDelta }\) has only finitely many integer solutions and hence \({\mathrm {SL}}_2({\mathbb {Z}})_{\varPhi }\) is finite. On the other hand, if \(\vert l \vert >2\), then \(\varDelta >0\), and again by [34, p. 63, Satz 2], \({\mathrm {SL}}_2({\mathbb {Z}})_{\varPhi } \cong \mathbb {Z} \times \mathbb {Z}/2\mathbb {Z}\). This guarantees the existence of an M as required [34, p. 65]. Finally, since \(\varPhi \in Q_l(N)\), we have \(U_{\varPhi } \in \varGamma _0(N)\) (cf. (3.22)) and hence \({\mathrm {SL}}_2({\mathbb {Z}})_{\varPhi }=\varGamma _0(N)_{\varPhi }\). \(\square \)
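As a concrete illustration of the proposition and of (3.22), one can enumerate small solutions of the Pell equation and check that each resulting matrix \(U_{\varPhi }(x,y)\) lies in \({\mathrm {SL}}_2({\mathbb {Z}})\) and stabilizes \(\varPhi \). The Python sketch below does this for the sample form \(\varPhi =[1,1,-1]\) of discriminant \(5=3^2-4\) (a hypothetical example chosen for the check).

```python
# Illustration of Proposition 3.8 / (3.22): small solutions of x^2 - 5 y^2 = 4
# give matrices U_Phi(x, y) in SL_2(Z) stabilizing Phi = [1, 1, -1]
# (a sample form with disc(Phi) = 5 = 3^2 - 4, i.e. l = 3).
a, b, c = 1, 1, -1
D = b*b - 4*a*c  # = 5

def compose(a, b, c, x, y, z, t):
    # the action (3.21): Phi∘δ for δ = [[x, y], [z, t]]
    return (a*x*x + b*x*z + c*z*z,
            b*(x*t + y*z) + 2*(a*x*y + c*z*t),
            a*y*y + b*y*t + c*t*t)

stab = []
for x in range(-30, 31):
    for y in range(-30, 31):
        if x*x - D*y*y == 4:
            # x and y automatically have the same parity here,
            # so the entries of U_Phi(x, y) are integers
            stab.append(((x - y*b)//2, -c*y, a*y, (x + y*b)//2))

assert len(stab) > 2  # more than just ±I: the stabilizer is infinite
for (u11, u12, u21, u22) in stab:
    assert u11*u22 - u12*u21 == 1                              # U in SL_2(Z)
    assert compose(a, b, c, u11, u12, u21, u22) == (a, b, c)   # U fixes Phi
```

For instance, the solution \((x,y)=(3,1)\) yields \(U_{\varPhi }(3,1)=\left( {\begin{matrix}1 &{} 1\\ 1 &{} 2\end{matrix}}\right) \), a trace-3 matrix generating the infinite part of the stabilizer.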

Remark 3.9

For a quadratic form \(\varPhi \in Q_l\) with \(\vert l \vert >2\), let M be as in the proposition. The largest eigenvalue of M is called the fundamental unit of \(\varPhi \) and is denoted by \(\epsilon _{\varPhi }\). It is well-defined since the eigenvalues of \(M^{-1}\) are the reciprocals of those of M.

Next, consider the set of matrices of trace l in the congruence subgroup \(\varGamma _0(N)\):

$$\begin{aligned} \varGamma _0(N)_l :=\big \{\gamma \in \varGamma _0(N) : {\text {tr}}(\gamma ) =l\big \}. \end{aligned}$$

Note that \(\varGamma _0(N)\) acts on this set by conjugation

$$\begin{aligned} \varGamma _0(N)_l \times \varGamma _0(N)&\rightarrow \varGamma _0(N)_l\\ (\gamma , \delta )&\mapsto \delta ^{-1}\gamma \delta . \end{aligned}$$

There is a \(\varGamma _0(N)\)-equivariant one-to-one correspondence between \(\varGamma _0(N)_l\) and \(Q_l(N)\) given by

$$\begin{aligned} \psi : \gamma =\left( \begin{array}{cc} a &{}\quad b\\ N c &{}\quad d\\ \end{array}\right) \mapsto \varPhi _{\gamma }=[Nc,d-a,-b], \end{aligned}$$

with inverse

$$\begin{aligned} \psi ^{\prime }: \varPhi =[aN,b,c] \mapsto \gamma _\varPhi =\left( \begin{array}{cc} \frac{l-b}{2}&{}\quad -c\\ Na &{}\quad \frac{l+b}{2}\\ \end{array}\right) . \end{aligned}$$

We observe that this correspondence induces a bijection

$$\begin{aligned} Q_l(N)/\varGamma _0(N) \cong \varGamma _0(N)_l/\varGamma _0(N). \end{aligned}$$
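The maps \(\psi \) and \(\psi ^{\prime }\) can be round-tripped on a sample matrix; the following Python sketch checks this for a hypothetical element of \(\varGamma _0(4)\) of trace 6, and also verifies that the associated form has discriminant \(l^2-4\).

```python
# Round-trip the correspondence between Gamma_0(N)_l and Q_l(N):
# psi: [[a, b], [Nc, d]] -> [Nc, d - a, -b], with inverse psi'.
N = 4

def psi(a, b, Nc, d):
    return (Nc, d - a, -b)

def psi_inv(A, B, C, l):
    return ((l - B)//2, -C, A, (l + B)//2)

gamma = (1, 1, 4, 5)            # in Gamma_0(4), determinant 1, trace l = 6
l = gamma[0] + gamma[3]
Phi = psi(*gamma)
assert Phi == (4, 4, -1)        # Phi_gamma = [4, 4, -1]
A, B, C = Phi
assert B*B - 4*A*C == l*l - 4   # disc(Phi_gamma) = l^2 - 4
assert psi_inv(A, B, C, l) == gamma
```

This is merely an illustration of the correspondence on one sample element, not a proof of the bijection.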

Now fix a quadratic form \(\varPhi \in Q_l\) and introduce the notation

$$\begin{aligned} \varPhi \cdot (m,n)&=\varPhi (n,-m), \quad (m,n) \in \mathbb {Z}^2. \end{aligned}$$

Also consider the following subset of \({\mathbb {Z}}^2\):

$$\begin{aligned} M^\varPhi&=\big \{(m,n) \in {\mathbb {Z}}^2 : \varPhi \cdot (m,n)>0\big \} \subset {\mathbb {Z}}^2. \end{aligned}$$

Observe that \({\mathrm {SL}}_2(\mathbb {Z})_{\varPhi }\) acts on \(M^{\varPhi }\) by matrix multiplication from the right, as follows from the identity

$$\begin{aligned} (\varPhi \circ \delta ) \cdot (m,n)\delta = \varPhi \cdot (m,n),\quad \delta \in {\mathrm {SL}}_2(\mathbb {Z}). \end{aligned}$$
(3.23)

Definition 3.10

The Epstein zeta function associated to the quadratic form \(\varPhi \) is defined to be

$$\begin{aligned} \zeta _\varPhi (s)=\sum _{(m,n) \in M^\varPhi /{\mathrm {SL}}_2({\mathbb {Z}})_\varPhi } \frac{1}{(\varPhi \cdot (m,n))^s} \end{aligned}$$

which is well-defined by (3.23).

The series converges absolutely for \({\text { Re }}(s)>1\) and defines a holomorphic function there. It admits a meromorphic continuation to the entire complex plane with a simple pole at \(s=1\) whose residue is (see [1, §3.2.2]):

$$\begin{aligned} {\text { Res }}_{s=1}\zeta _{\varPhi }(s)= {\left\{ \begin{array}{ll} \frac{2\pi }{\sqrt{\vert {\text { dis }}(\varPhi )\vert }} \frac{1}{\vert {\mathrm {SL}}_2({\mathbb {Z}})_\varPhi \vert } &{}\quad {\text { dis }}(\varPhi )<0,\\ \frac{1}{\sqrt{\vert {\text { dis }}(\varPhi )\vert }} \log (\epsilon _{\varPhi }) &{}\quad {\text { dis }}(\varPhi )>0, \end{array}\right. } \end{aligned}$$
(3.24)

where \(\epsilon _{\varPhi }\) is the fundamental unit as in Remark 3.9. Now let \(\varPhi \in Q_l(N)\) be a quadratic form and \(d\mid N\). Consider the following sets

$$\begin{aligned} M_d&=\big \{(Nm, dn) \in \mathbb {Z}^2 \setminus \{(0,0)\}\big \},\\ M^{\varPhi }_d&=\{(Nm, dn) | \varPhi \cdot (Nm, dn) >0\}. \end{aligned}$$

Note that \({\mathrm {SL}}_2(\mathbb {Z})_{\varPhi }=\varGamma _0(N)_{\varPhi }\) acts on \(M_d^{\varPhi }\) in view of (3.23).

Definition 3.11

For a quadratic form \(\varPhi \in Q_l(N)\) and \(d \mid N\), define the zeta function

$$\begin{aligned} \zeta _{\varPhi ,d}(s)=\sum _{(m,n) \in M^{\varPhi }_d/{\mathrm {SL}}_2(\mathbb {Z})_{\varPhi }} \frac{1}{(\varPhi \cdot (m,n))^s} \end{aligned}$$

which is well-defined by (3.23).

Consider the group homomorphism \(*d: \varGamma _0(N) \rightarrow \varGamma _0(d) \) defined by

$$\begin{aligned} \gamma =\begin{pmatrix} x &{}\quad y\\ Nz &{}\quad t \end{pmatrix} \mapsto \gamma ^{*d}= \begin{pmatrix} x &{}\quad \frac{N}{d}y\\ dz &{}\quad t \end{pmatrix} \end{aligned}$$

and note that this map induces the injection \(*d : Q_l(N) \rightarrow Q_l(d)\)

$$\begin{aligned} \varPhi =[Na,b,c] \mapsto \varPhi ^{*d}=\left[ da, b, \frac{N}{d}c\right] . \end{aligned}$$

The map \(*d\) respects the action (see (3.21)) of \(\varGamma _0(N)\) on \(Q_l(N)\) in the sense that \((\varPhi \circ \gamma )^{*d}=\varPhi ^{*d} \circ \gamma ^{*d}\). This immediately implies that \(*d\) maps \( \varGamma _0(N)_{\varPhi }\) into \(\varGamma _0(d)_{\varPhi ^{*d}}\). The induced map on stabilizers is also surjective. Indeed, suppose \(\varPhi =[Na,b,c]\) and \(\varDelta ={\text { dis }}(\varPhi )={\text { dis }}(\varPhi ^{*d})\). If \(B\in \varGamma _0(d)_{\varPhi ^{*d}}\), then by (3.22),

$$\begin{aligned} B=U_{\varPhi ^{*d}}(x,y)=\begin{pmatrix} \frac{x-yb}{2} &{}\quad -\frac{N}{d}cy\\ day &{}\quad \frac{x+yb}{2} \end{pmatrix} \end{aligned}$$

where \((x,y)\) is a solution of the Pell equation \(x^2-\varDelta y^2=4\). Set

$$\begin{aligned} A=\begin{pmatrix} \frac{x-yb}{2} &{}\quad -cy\\ Nay &{}\quad \frac{x+yb}{2} \end{pmatrix}. \end{aligned}$$

It is evident that \(A \in \varGamma _0(N)_{\varPhi }\) and \(A^{*d}=B\).

To find a relation between \(\zeta _{\varPhi ,d}\) and the Epstein zeta function, note that \(\varPhi \cdot (Nm, dn)=(Nd) \varPhi ^{*d}\cdot (m,n)\), and thus

$$\begin{aligned} \begin{aligned} \zeta _{\varPhi ,d}(s)&= \sum _{(m,n) \in M^{\varPhi }_d/\varGamma _0(N)_{\varPhi }} \frac{1}{(\varPhi \cdot (m,n))^s}\\&= \sum _{(m,n) \in M^{\varPhi ^{*d}}/\varGamma _0(d)_{\varPhi ^{*d}}}\left( \frac{1}{(N d) \varPhi ^{*d}\cdot (m,n)}\right) ^s\\&=\frac{1}{(Nd)^s}[{\mathrm {SL}}_2({\mathbb {Z}})_{\varPhi ^{*d}}:\varGamma _0(d)_{\varPhi ^{*d}}]\zeta _{\varPhi ^{*d}}(s). \end{aligned} \end{aligned}$$
(3.25)
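The first step of (3.25) rests on the pointwise identity \(\varPhi \cdot (Nm, dn)=(Nd)\, \varPhi ^{*d}\cdot (m,n)\), which can be checked directly. The Python sketch below does so for the illustrative values \(N=9\), \(d=3\) and hypothetical coefficients.

```python
# Check Phi.(Nm, dn) = (N d) * Phi^{*d}.(m, n) for Phi = [N a, b, c], d | N,
# where Phi.(m, n) := Phi(n, -m) and Phi^{*d} = [d a, b, (N/d) c].
N, d = 9, 3
a, b, c = 2, 1, 5                 # illustrative coefficients only

def dot(A, B, C, m, n):           # [A, B, C].(m, n) = A n^2 - B n m + C m^2
    return A*n*n - B*n*m + C*m*m

for m in range(-6, 7):
    for n in range(-6, 7):
        assert dot(N*a, b, c, N*m, d*n) == N*d*dot(d*a, b, (N//d)*c, m, n)
```

Summing this identity over orbit representatives gives exactly the factor \((Nd)^{-s}\) appearing in (3.25).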

By Proposition 3.8, we have \([{\mathrm {SL}}_2({\mathbb {Z}})_{\varPhi ^{*d}}:\varGamma _0(d)_{\varPhi ^{*d}}]=1\). We now calculate the residue of \(\zeta _{\varPhi ,d}(s)\) by expressing it in terms of Epstein zeta functions. Our computation is similar to that of [1, Proposition 3.2.3]. From (3.24), we get for \({\text { dis }}(\varPhi )>0\)

$$\begin{aligned} {\text { Res }}_{s=1}\zeta _{\varPhi ,d}(s)=\frac{ \log \epsilon _{\varPhi ^{*d}}}{Nd\sqrt{{\text { dis }}(\varPhi ^{*d})}} =\frac{\log \epsilon _{\varPhi }}{Nd\sqrt{{\text { dis }}(\varPhi )}}. \end{aligned}$$
(3.26)

For \({\text { dis }}(\varPhi )<0\), we also obtain

$$\begin{aligned} {\text { Res }}_{s=1}\zeta _{\varPhi ,d}(s)=\frac{2\pi }{Nd\sqrt{\vert {\text { dis }}(\varPhi ^{*d})\vert }\vert {\mathrm {SL}}_2({\mathbb {Z}})_{\varPhi ^{*d}}\vert }=\frac{2\pi }{Nd\sqrt{\vert {\text { dis }}(\varPhi )\vert }\vert {\mathrm {SL}}_2({\mathbb {Z}})_\varPhi \vert }. \end{aligned}$$
(3.27)

Definition 3.12

Let \(\mu \) be the Möbius function. Define the zeta function

$$\begin{aligned} \zeta _{\varGamma _0(p^2)}(s,l)= \frac{1}{2 \zeta (2s) (1-p^{-2s})}\sum _{d\in \{1,p\}} \mu (d) \sum _{\varPhi \in Q_l(p^2)/\varGamma _0(p^2)} \zeta _{\varPhi ,d}(s). \end{aligned}$$

Again, these zeta functions are suitable linear combinations of Epstein zeta functions. Hence they admit meromorphic continuations to the entire complex s-plane with a simple pole at \(s=1\), and their residues can be computed using the residue formulae for Epstein zeta functions. In other words, we have the following Laurent series expansion at \(s=1\):

$$\begin{aligned} \zeta _{\varGamma _0(p^2)}(s,l)&=\frac{a_{-1}(l)}{s-1}+a_0(l)+O(s-1). \end{aligned}$$

4 Spectral expansions of automorphic kernels

4.1 Selberg trace formula

To compute \(R_F(s)\), we follow the strategy carried out in [1] and [28]. On the one hand, we consider the spectral expansions of certain automorphic kernels, which contain the term F together with some well-behaved terms; on the other hand, we consider the contributions of the various motions of \(\varGamma _0(p^2)\) to these kernels. The computation of \(R_F(s)\) then reduces to understanding the remaining terms in the resulting identity, which are easier to handle. To carry out this process, let us recall the definitions of these kernels. For \(t>0\), let \(h_t: {\mathbb {R}} \rightarrow {\mathbb {R}}\) be the test function

$$\begin{aligned} h_t(r)=e^{-t(\frac{1}{4}+r^2)}, \end{aligned}$$

which is a function of rapid decay.

4.1.1 Automorphic kernels of weights \(k=0,2\)

The automorphic kernels involve the inverse Selberg/Harish-Chandra transform \(\phi _k(t, \cdot )\) of \(h_t\) of weights \(k=0,2\). These are given by

$$\begin{aligned} \begin{aligned}&g_t(v) =\frac{1}{2\pi } \int _{-\infty }^{\infty } h_t(r) e^{-ivr} \, dr, \quad v \in {\mathbb {R}},\\&q_t(e^v+e^{-v}-2) = g_t(v), \quad v\in {\mathbb {R}},\\&\phi _{0}(t,u) =-\frac{1}{\pi }\int _{-\infty }^{\infty } q_t^{\prime }(u+v^2) \, dv, \quad u \ge 0,\\&\phi _2(t,u)=-\frac{1}{\pi } \int _{-\infty }^{\infty } q^{\prime }_t(u+v^2) \frac{\sqrt{u+4+v^2} - v}{\sqrt{u+4+v^2} + v} \, dv, \quad u \ge 0. \end{aligned} \end{aligned}$$
(4.1)
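The first transform in (4.1) is a Gaussian integral and evaluates in closed form as \(g_t(v)=e^{-t/4-v^2/(4t)}/(2\sqrt{\pi t})\), by completing the square in the exponent. The following Python sketch verifies this numerically via trapezoidal quadrature; it is a sanity check of the closed form, not part of the derivation.

```python
import math

# Numerical check: for h_t(r) = exp(-t(1/4 + r^2)), the Fourier integral
#   g_t(v) = (1/2 pi) ∫ h_t(r) e^{-i v r} dr
# evaluates in closed form to exp(-t/4 - v^2/(4t)) / (2 sqrt(pi t)).
def g_numeric(t, v, R=12.0, n=24000):
    # trapezoidal quadrature over [-R, R]; the imaginary part vanishes by
    # symmetry, so only the cosine part is integrated
    h = 2 * R / n
    total = 0.0
    for k in range(n + 1):
        r = -R + k * h
        w = 0.5 if k in (0, n) else 1.0
        total += w * math.exp(-t * (0.25 + r * r)) * math.cos(v * r)
    return total * h / (2 * math.pi)

def g_closed(t, v):
    return math.exp(-t / 4 - v * v / (4 * t)) / (2 * math.sqrt(math.pi * t))

for t, v in [(1.0, 0.0), (1.0, 2.0), (0.5, 1.0), (2.0, 3.0)]:
    assert abs(g_numeric(t, v) - g_closed(t, v)) < 1e-8
```

In particular, \(g_t\) is again of rapid decay in v, which is what makes the subsequent transforms \(q_t\) and \(\phi _k\) well-defined.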

Consider the functions

$$\begin{aligned} u(z,w)=\frac{\vert z- w \vert ^2}{4 {\text { Im }}z {\text { Im }}w} \quad \text {and} \quad H(z,w)=\frac{w-\overline{z}}{z-\overline{w}}, \end{aligned}$$

and for \(\gamma \in {\mathrm {SL}}_2({\mathbb {Z}})\) set

$$\begin{aligned} \nu _k(t,\gamma ;z)=j_{\gamma }(z,k) H^{k/2}(z,\gamma z)\phi _k(t,u(z,\gamma z)). \end{aligned}$$

Here, \(j_\gamma \) is the automorphic factor defined in (3.1).

The automorphic kernel \(K_k(t,z)\) of weight k with respect to \(\varGamma _0(p^2)\) is defined as

$$\begin{aligned} \begin{aligned} K_0(t,z)&:= \frac{1}{2} \sum _{\gamma \in \varGamma _0(p^2)} \nu _0(t,\gamma ;z)=\frac{1}{2} \sum _{\gamma \in \varGamma _0(p^2)} \phi _0\big (t, u(z, \gamma z)\big ) , \\ K_2(t,z)&:=\frac{1}{2} \sum _{\gamma \in \varGamma _0(p^2)} \nu _2(t,\gamma ;z) = \frac{1}{2} \sum _{\gamma \in \varGamma _0(p^2)} j_{\gamma }(z,2)H(z, \gamma z) \phi _2\big (t, u(z, \gamma z)\big ). \end{aligned} \end{aligned}$$
(4.2)

We also denote the corresponding summations over the elliptic, hyperbolic, and parabolic elements of \(\varGamma _0(p^2)\) by \({{\mathcal {E}}}_k\), \(H_k\), and \(P_k\) respectively, i.e.,

$$\begin{aligned} \begin{aligned} {{\mathcal {E}}}_k(t, z)&:=\frac{1}{2} \sum _{\begin{array}{c} \gamma \in \varGamma _0(p^2)\\ \vert {\text {tr}}(\gamma )\vert <2 \end{array}} \nu _k(t,\gamma ;z),\\ H_k(t,z)&:= \frac{1}{2} \sum _{\begin{array}{c} \gamma \in \varGamma _0(p^2)\\ \vert {\text {tr}}(\gamma )\vert >2 \end{array}} \nu _k(t,\gamma ;z),\\ P_k(t,z)&:= \frac{1}{2} \sum _{\begin{array}{c} \gamma \in \varGamma _0(p^2)\\ \vert {\text {tr}}(\gamma )\vert =2 \end{array}} \nu _k(t,\gamma ;z), \end{aligned} \end{aligned}$$
(4.3)

and write \({{\mathcal {E}}}={{\mathcal {E}}}_2-{{\mathcal {E}}}_0\), \(H=H_2-H_0\) and \(P=P_2-P_0\). Let \(R_H\), \(R_{{{\mathcal {E}}}}\) and \(R_P\) be the Rankin–Selberg transforms of these functions at the cusp \(\infty \).

We now simplify the calculation of the Rankin–Selberg transforms of the various terms above. To this end, we introduce

$$\begin{aligned} F_k^l(t, z)&:= \sum _{\begin{array}{c} \gamma \in \varGamma _0(p^2)\\ {\text {tr}}(\gamma ) =l \end{array}} \nu _k(t, \gamma ; z), \end{aligned}$$
(4.4)
$$\begin{aligned} R_k^l(t,s)&:= \int _{Y_0(p^2)} E_{\infty ,0}(z,s) F_k^l(t, z) \mu _{\mathrm {hyp}}(z). \end{aligned}$$
(4.5)

We now compute \(R_k^l\) by exploiting the connection between Epstein zeta functions and Eisenstein series.

Lemma 4.1

Let \(\gamma \in \varGamma _0(p^2)_l\) and suppose \(\varPhi _\gamma \cdot (m,n) >0\). We have the following equality of integrals

$$\begin{aligned} \int _{\mathbb {H}} \nu _k(t, \gamma ; z) \frac{y^s}{\vert mz +n \vert ^{2s}} \mu _{\mathrm {hyp}}(z) = \frac{1}{(\varPhi _\gamma \cdot (m,n))^s} \int _{\mathbb {H}} \nu _k(t, \gamma _{l}; z) y^s \mu _{\mathrm {hyp}}(z), \end{aligned}$$

where \(\gamma _{l} =\left( \begin{array}{cc} \frac{l}{2} &{}\quad \frac{l^2}{4}-1\\ 1 &{} \quad \frac{l}{2} \\ \end{array}\right) \).

Proof

For a matrix \(\gamma =\left( {\begin{matrix} a &{}\quad b\\ c &{}\quad d\\ \end{matrix}}\right) \), we have the quadratic form \(\varPhi _{\gamma }=[c, d-a, -b]\) associated to \(\gamma \). Consider the matrix

$$\begin{aligned} T= \frac{1}{\sqrt{\varPhi _{\gamma } \cdot (m,n)}}\begin{pmatrix} n &{}\quad -(d-a)\frac{n}{2}-bm\\ -m &{}\quad cn - (d-a)\frac{m}{2} \end{pmatrix}. \end{aligned}$$

We can easily verify that \(T \in {\mathrm {SL}}_2(\mathbb {R})\), \(T^{-1}\gamma T= \gamma _l\), and \(\dfrac{{\text { Im }}Tw}{\vert mTw+n\vert ^2}=\dfrac{{\text { Im }}w}{\varPhi _\gamma \cdot (m,n)}\).

A small check using matrix multiplication shows that \(\nu _k(t, \gamma ; Tw)=\nu _k(t, T^{-1}\gamma T; w)\). Thus the equality of the integrals follows immediately once we substitute \(z=Tw\). \(\square \)
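The conjugation claimed in the proof can be verified numerically. The Python sketch below checks, for a sample \(\gamma \in \varGamma _0(4)\) of trace 6 (a purely illustrative choice) and several pairs \((m,n)\) with \(\varPhi _\gamma \cdot (m,n)>0\), that T has determinant 1 and that \(T^{-1}\gamma T=\gamma _l\).

```python
import math

# Check the conjugation in the proof of Lemma 4.1: T lies in SL_2(R) and
# conjugates gamma to gamma_l (sample values chosen for illustration only).
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

a, b, c, d = 1, 1, 4, 5            # gamma in Gamma_0(4), det 1, trace l = 6
l = a + d
gamma = [[a, b], [c, d]]
gamma_l = [[l / 2, l * l / 4 - 1], [1, l / 2]]

for m, n in [(0, 1), (1, 3), (-1, 1)]:
    q = c*n*n - (d - a)*n*m + (-b)*m*m      # Phi_gamma . (m, n)
    if q <= 0:
        continue
    r = math.sqrt(q)
    T = [[n / r, (-(d - a)*n/2 - b*m) / r],
         [-m / r, (c*n - (d - a)*m/2) / r]]
    detT = T[0][0]*T[1][1] - T[0][1]*T[1][0]
    assert abs(detT - 1) < 1e-9             # T in SL_2(R)
    Tinv = [[T[1][1], -T[0][1]], [-T[1][0], T[0][0]]]
    conj = mat_mul(Tinv, mat_mul(gamma, T))
    for i in range(2):
        for j in range(2):
            assert abs(conj[i][j] - gamma_l[i][j]) < 1e-9
```

For \((m,n)=(0,1)\), for example, one finds \(T^{-1}\gamma T=\left( {\begin{matrix}3 &{} 8\\ 1 &{} 3\end{matrix}}\right) =\gamma _6\), as predicted.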

Proposition 4.2

For all \(l \in {\mathbb {Z}}\) with \(|l| \ne 2\), we have the following equality

$$\begin{aligned} R_k^l(t,s)=\zeta _{\varGamma _0(p^2)}(s,l)I_k(t,s,l). \end{aligned}$$

Here, \(I_k^{\pm }(t,s,l) := \displaystyle \int _{\mathbb {H}}\nu _k(t, \gamma _{\pm l}; z) y^s \mu _{\mathrm {hyp}}(z)\) and \(I_k(t,s,l)=I_k^{+}(t,s,l)-I_k^{-}(t,s,l)\).

Proof

From Proposition 3.2, we obtain

$$\begin{aligned} E_{\infty ,0}(z,s)= \frac{1}{2 \zeta (2s) (1-p^{-2s})}\sum _{d\in \{1,p\}} \mu (d)\sum _{(m,n) \in M_d} \frac{y^s}{\vert m z +n\vert ^{2s}}. \end{aligned}$$

From Eqs. (4.4) and (4.5), we now have

$$\begin{aligned}&R^l_k(t,s) \\&\quad =\frac{1}{2 \zeta (2s) (1-p^{-2s})}\sum _{d\in \{1,p\}} \mu (d)\int _{Y_0(p^2)} \sum _{{\text {tr}}(\gamma ) =l} \sum _{(m,n) \in M_d} \nu _k(t, \gamma ; z) \frac{y^s}{\vert m z+n \vert ^{2s}} \mu _{\mathrm {hyp}}(z). \end{aligned}$$

For a fixed \(l \in {\mathbb {Z}}\) with \(|l| \ne 2\), denote

$$\begin{aligned} S_d^{\pm }(l)&=\left\{ \big (\varPhi , (m,n) \big ) | \varPhi \in Q_l(p^2), (m,n) \in M^{\pm \varPhi }_d \right\} , \end{aligned}$$

and

$$\begin{aligned} \sigma ^{\pm }_d(l, t,z) =\sum _{S_d^{\pm }(l)} \nu _k(t, \gamma ; z) \frac{y^s}{\vert m z+n \vert ^{2s}}. \end{aligned}$$

Recall the identification of the quadratic form \(\varPhi \) with the matrix \(\gamma _{\varPhi }\) (see Sect. 3.4). For simplicity of notation, we write \(\gamma _{\varPhi }=\gamma \). For a fixed \(\gamma \), we now break the summations inside the integral into two parts to get a simplification

$$\begin{aligned} \sum _{{\text {tr}}(\gamma ) =l} \sum _{(m,n) \in M_d} \nu _k(t, \gamma ; z) \frac{y^s}{\vert m z+n \vert ^{2s}} = \sum _{{\text {tr}}(\gamma ) =l} ( \sigma _d^{+}(l, t, z)+\sigma _d^{-}(l, t, z)). \end{aligned}$$

The congruence subgroup \(\varGamma _0(p^2)\) acts freely on \(S_d^{\pm }(l)\) componentwise:

$$\begin{aligned} \alpha \cdot \big (\varPhi , (m,n)\big ) = \big (\varPhi \circ \alpha , (m,n)\alpha \big ). \end{aligned}$$

Hence, we have

$$\begin{aligned} R^l_k(t,s)&=\frac{1}{2 \zeta (2s) (1-p^{-2s})}\sum _{d\in \{1,p\}} \mu (d)\int _{Y_0(p^2)} \sum _{{\text {tr}}(\gamma ) =l} (\sigma _d^{+}(l, t, z)+\sigma _d^{-}(l, t, z)) \mu _{\mathrm {hyp}}(z). \end{aligned}$$

Recall that \(\varPhi \circ \alpha \) corresponds to \(\alpha ^{-1} \gamma \alpha \). Following [13, p. 86], write \((m_{\alpha },n_{\alpha })=(m,n)\alpha \). As in loc. cit., a small check shows that \(\nu _k(t, \alpha ^{-1}\gamma \alpha ; z)= \nu _k(t, \gamma ; \alpha z)\) and \( \frac{y^s}{\vert m_{\alpha }z+n_{\alpha }\vert ^{2s}}=\frac{({\text { Im }}\alpha z)^s}{\vert m \alpha z+n\vert ^{2s}}\). We deduce that

$$\begin{aligned} \sigma _d^{\pm }(l,t,z)&= \sum _{(\varPhi , (m,n)) \in S^{\pm }_d(l)/\varGamma _0(p^2)} \sum _{\alpha \in \varGamma _0(p^2)} \nu _k(t, \alpha ^{-1}\gamma \alpha ; z) \frac{y^s}{\vert m_{\alpha }z+n_{\alpha }\vert ^{2s}}\\&= \sum _{(\varPhi , (m,n)) \in S^{\pm }_d(l)/\varGamma _0(p^2)} \sum _{\alpha \in \varGamma _0(p^2)} \nu _k(t, \gamma ; \alpha z) \frac{({\text { Im }}\alpha z)^s}{\vert m \alpha z+n\vert ^{2s}}. \end{aligned}$$

Since the group \(\varGamma _0(p^2)\) acts on the set \(S^{\pm }_d(l)\) componentwise, we have

$$\begin{aligned} S^{\pm }_d(l)/\varGamma _0(p^2)= \left\{ (\varPhi , (m,n)) | \varPhi \in Q_l(p^2)/\varGamma _0(p^2), (m,n) \in M^{\pm \varPhi }_d/\varGamma _0(p^2)_{\varPhi }\right\} . \end{aligned}$$

Using Lemma 4.1, we deduce that

$$\begin{aligned}&\int _{Y_0(p^2)} \sigma _d^{+}(l,t,z) \mu _{\mathrm {hyp}}(z) \\&\quad = \sum _{(\varPhi , (m,n)) \in S^{+}_d(l)/\varGamma _0(p^2)} \int _{\mathbb {H}} \nu _k(t, \gamma ; z) \frac{y^s}{\vert m z+n\vert ^{2s}} \mu _{\mathrm {hyp}}(z)\\&\quad = \sum _{\varPhi \in Q_l(p^2)/\varGamma _0(p^2)} \sum _{(m,n) \in M^{ \varPhi }_d/\varGamma _0(p^2)_{\varPhi }} \frac{1}{(\varPhi _\gamma \cdot (m,n))^s} \int _{\mathbb {H}} \nu _k(t, \gamma _{ l}; z) y^s \mu _{\mathrm {hyp}}(z)\\&\quad = I_k^{+}(t,s,l) \zeta _{\varGamma _0(p^2)}(s,l). \end{aligned}$$

Observe that \(-\varPhi _\gamma = \varPhi _{-\gamma }\) and \(S_d^{-}(l)=S_d^{+}(-l)\). Hence, we obtain

$$\begin{aligned} \int _{Y_0(p^2)} \sigma _d^{-}(l,t,z) \mu _{\mathrm {hyp}}(z)= I_k^{-}(t,s,l) \zeta _{\varGamma _0(p^2)}(s,l) \end{aligned}$$

and the proposition now follows.\(\square \)

4.1.2 Spectral expansions

Recall that for \(k \in \{0,2\}\) the hyperbolic Laplacian \(\varDelta _k\) defined in (3.2) acts as a positive self-adjoint operator on \(L^2(\varGamma _0(p^2)\backslash \mathbb {H},k)\), the space of square-integrable automorphic forms of weight k [1, Definition 3.1.1, p. 22]. The operators \(\varDelta _0\) and \(\varDelta _2\) have the same discrete spectrum [31], say

$$\begin{aligned} 0=\lambda _0 <\lambda _1 \le \lambda _2 \cdots . \end{aligned}$$

For weights \(k\in \{0,2\}\), the eigenspaces \(L^2_{\lambda _j}(\varGamma _0(p^2)\backslash \mathbb {H},k)\) corresponding to an eigenvalue \(\lambda _j\ne 0\) are isomorphic via the Maass operator of weight 0:

$$\begin{aligned} \varLambda _0=iy\frac{\partial }{\partial x}+y\frac{\partial }{\partial y}: L^2_{\lambda _j}(\varGamma _0(p^2)\backslash \mathbb {H},0) \rightarrow L^2_{\lambda _j}(\varGamma _0(p^2)\backslash \mathbb {H},2). \end{aligned}$$

We write \(\lambda _j=1/4+r_j^2\) where \(r_j\) is real or purely imaginary and let \(\{u_j\}\) be an orthonormal basis of eigenfunctions of \(\varDelta _0\) corresponding to \(\lambda _j\).

Theorem 4.3

[13, Theorem 1.5.7, p. 16] The spectral expansions are given by

$$\begin{aligned} K_0(t,z)&= \frac{1}{v_{\varGamma _0(p^2)}}+\sum _{j=1}^\infty h_t(r_j)|u_j(z)|^2 \nonumber \\&\quad +\frac{1}{4\pi }\sum _{P \in \partial (X_0(p^2))}\int _{-\infty }^{\infty }h_t(r) \left| E_{P,0}\left( z,\frac{1}{2}+ir\right) \right| ^2dr, \nonumber \\ K_2(t,z)&= g_{p^2} F(z) +\sum _{j=1}^{\infty } \frac{h_t(r_j)}{\lambda _j} \vert \varLambda _0 u_j (z) \vert ^2 \nonumber \\&\quad +\frac{1}{4\pi } \sum _{P \in \partial (X_0(p^2))} \int _{-\infty }^{\infty } h_t(r) \left| E_{P,2}\left( z,\frac{1}{2}+ir\right) \right| ^2 dr. \end{aligned}$$

4.2 Different contributions in the Rankin–Selberg transform of the Arakelov metric

In this subsection, we prove Proposition 4.4, which determines the term \({\mathcal {R}}_{\infty }^{\varGamma _0(p^2)}\) of our Theorem 1.1. To this end, let us first define the following quantities for weights \(k \in \{0,2\}\):

$$\begin{aligned} \begin{aligned} D_0(t,z)&=\sum _{j=1}^\infty h_t(r_j)|u_j(z)|^2, \quad D_2(t,z)=\sum _{j=1}^{\infty } \frac{h_t(r_j)}{\lambda _j} \vert \varLambda _0 u_j (z) \vert ^2, \\ C_k(t,z)&= \frac{1}{4\pi } \sum _{P \in \partial (X_0(p^2))} \int _{-\infty }^{\infty } h_t(r) \left| E_{P,k}\left( z,\frac{1}{2}+ir\right) \right| ^2 \, dr+\frac{2-k}{2}\frac{1}{v_{\varGamma _0(p^2)}}; \end{aligned} \end{aligned}$$
(4.6)

and write \(D=D_2-D_0\) and \(C=C_2-C_0\). From Theorem 4.3, we then have

$$\begin{aligned} \begin{aligned} K_2(t,z)-K_0(t,z)&=g_{p^2} F(z) +D(t,z) + C(t,z). \end{aligned} \end{aligned}$$
(4.7)

On the other hand from (4.3)

$$\begin{aligned} \begin{aligned} K_2(t,z)-K_0(t,z)&= H(t,z)+{{\mathcal {E}}}(t,z)+P(t,z). \end{aligned} \end{aligned}$$
(4.8)

Combining these two equations, we obtain the identity

$$\begin{aligned} g_{p^2} F(z)+D(t,z)+C(t,z)=H(t,z)+{{\mathcal {E}}}(t,z)+P(t,z). \end{aligned}$$
(4.9)

Note that our Eq. (4.9) is exactly the same as the identity of [13, p. 72], since \({\mathcal {P}}=P-C\) in loc. cit. Integrating against \(E_{\infty ,0}(z,s)\mu _{\mathrm {hyp}}\), we obtain the key identity

$$\begin{aligned} g_{p^2}R_F(s)=-R_D(t,s) +R_{H}(t,s) + R_{{{\mathcal {E}}}}(t,s) + R_{P-C}(t,s) \end{aligned}$$
(4.10)

as announced at the beginning of this section.

Proposition 4.4

The contributions in the Rankin–Selberg transform arising from different motions are given as follows:

  1. (i)

    The discrete contribution \(R_D(t,s)\) is holomorphic at \(s=1\) for all t and \(R^{\mathrm {dis}}_0(t):=R_D(t,1)\) satisfies

    $$\begin{aligned} R^{\mathrm {dis}}_0(t)\rightarrow 0 \end{aligned}$$

    as \(t \rightarrow \infty \).

  2. (ii)

    The hyperbolic contribution \(R_H(t,s)\) is holomorphic at \(s=1\) for all t and \(R^{\mathrm {hyp}}_0(t):=R_H(t,1)\) is of the form

    $$\begin{aligned} R^{\mathrm {hyp}}_0(t) = {\mathcal {E}}^{\mathrm {hyp}}(t) - \frac{t-1}{2 v_{\varGamma _0(p^2)}} \end{aligned}$$

    where \(\lim _{t \rightarrow \infty } {\mathcal {E}}^{\mathrm {hyp}}(t) = \frac{1}{2 v_{\varGamma _0(p^2)}} O_{\epsilon }(p^{2 \epsilon })\).

  3. (iii)

    The elliptic contribution \(R_{{{\mathcal {E}}}}(t,s)\) at \(s=1\) is of the form

    $$\begin{aligned} R_{{{\mathcal {E}}}}(t,s)=\frac{R^{\mathrm {ell}}_{-1}(t)}{s-1} + R^{\mathrm {ell}}_{0}(t) + O(s-1), \end{aligned}$$

    where \(R^{\mathrm {ell}}_{-1}(t)\) and \(R^{\mathrm {ell}}_{0}(t)\) have finite limits as \(t \rightarrow \infty \), and \(R^{\mathrm {ell}}_0= \displaystyle \lim _{t \rightarrow \infty } R^{\mathrm {ell}}_0(t)=o(\log (p))\).

(iv)

    The parabolic and spectral contribution \(R_{P-C}(t,s)\) is given by:

    $$\begin{aligned} R_{P-C}(t,s)= \frac{R^{\mathrm {par}}_{-1}(t)}{s-1} + R^{\mathrm {par}}_{0} (t) + O(s-1). \end{aligned}$$

    Here, \(R^{\mathrm {par}}_0(t) = \frac{t+1}{2 v_{\varGamma _0(p^2)}} + {\mathcal {E}}^{\mathrm {par}}(t)\) with \(\lim _{t \rightarrow \infty } {\mathcal {E}}^{\mathrm {par}}(t) = \frac{1-\log (4\pi )}{4\pi } + O\left( \frac{\log p}{p^2}\right) \).

Since \(h_t(r)=e^{-t(\frac{1}{4}+r^2)}\), the proof of part (i) of Proposition 4.4 follows from [28, Proposition 5.2.4] (see also [28, p. 30]). The explicit description of the modular curves is not used in that portion of [28], so the same proof works verbatim in our case. The proofs of the remaining three parts are given in the next three subsections.

As mentioned in the introduction, the underlying strategies for proving the other parts of Proposition 4.4 are the same as those of Abbes–Ullmo [1], Mayer [28] and M. Grados Fukuda [13]. However, those papers used the square-free assumption on the levels of the modular curves crucially to write down the Eisenstein series and hence to implement the strategy of Zagier [33] involving Selberg's trace formula. To follow the same strategy, we first computed the Eisenstein series of weight 0 at the cusp \(\infty \) for our modular curves \(X_0(p^2)\) in Proposition 3.2. This is the key step that allows us to carry out Zagier's program for the modular curves \(X_0(p^2)\) along the lines of the papers mentioned above.

In the rest of this section, we simplify the calculation of the Rankin–Selberg transforms of the various terms above. For all \(l \in {\mathbb {Z}}\), recall the definitions of \(F_k^l(t, z)\) and \(R_k^l(t,s)\) from the previous section (see (4.4) and (4.5)). Observe that \(R_H\), \(R_{{{\mathcal {E}}}}\) and \(R_P\) can be obtained by summing \(R_2^l-R_0^l\) over \(\vert l \vert >2\), \(\vert l \vert <2\) and \(\vert l \vert =2\), respectively. We now prove Proposition 4.4, which is then used to obtain an expression for \({\mathcal {R}}_{\infty }^{\varGamma _0(p^2)}\). Consequently, we derive an asymptotic expression of \({\mathfrak {g}}_{\mathrm {can}}(\infty ,0)\) in terms of p.

4.2.1 The hyperbolic contribution

Recall that the hyperbolic contribution in the Rankin–Selberg transform is determined by a suitable theta function. We proceed to define these theta functions in our context.

Let \(\gamma \in \varGamma _0(p^2)\) be a hyperbolic element, i.e., \({\text {tr}}(\gamma )=l\) with \(|l|>2\). Let v be the eigenvalue of \(\gamma \) with \(v^2>1\). Recall that the norm of the matrix \(\gamma \) is defined to be \(N(\gamma ):=v^2\). Since \({\text {tr}}(\gamma )=l\), it is easy to see that \(n_l=(l+\sqrt{l^2-4})/2\) is the larger eigenvalue of \(\gamma \). The theta function for \(X_0(p^2)\) is defined by [29]

$$\begin{aligned} \varTheta _{\varGamma _0(p^2)}(\xi )=\sum _{|l|>2} \sum _{\varPhi \in Q_l(p^2) /\varGamma _0(p^2)} \frac{\log \epsilon _{\varPhi }}{\sqrt{l^2-4}} \frac{1}{\sqrt{4\pi \xi }}e^{-\frac{\xi ^2+(\log n_l^2)^2}{4\xi }}. \end{aligned}$$

Note that this function is exactly equal to the one defined in [1, p. 54] under the correspondence \(\varPhi \mapsto \gamma _{\varPhi }\), since \(N(\gamma _0)=\epsilon _{\varPhi }^2\) and \(N(\gamma )=n_l^2\).
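These definitions can be sanity-checked numerically (a sketch, not taken from the paper; for simplicity it works with a positive trace \(l>2\)): a determinant-one matrix of trace l has eigenvalues \((l \pm \sqrt{l^2-4})/2\), so \(n_l\) is the larger one and \(N(\gamma )=n_l^2>1\).

```python
import math

def eigenvalues_of_trace(l):
    """Eigenvalues of a determinant-1 matrix with trace l, |l| > 2."""
    disc = math.sqrt(l * l - 4)
    return (l - disc) / 2, (l + disc) / 2

# For l = 3 the eigenvalues are the roots of x^2 - 3x + 1 = 0.
small, large = eigenvalues_of_trace(3)
assert abs(large - (3 + math.sqrt(5)) / 2) < 1e-12  # the larger root is n_l
assert abs(small * large - 1) < 1e-12               # product of eigenvalues = det = 1
assert large ** 2 > 1                               # N(gamma) = n_l^2 = v^2 > 1
```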

Proposition 4.5

The hyperbolic contribution \(R_H(t,s)\) in the trace formula is holomorphic at \(s=1\), and its value there has the following integral representation:

$$\begin{aligned} R_{H}(t,1)=-\frac{1}{2v_{\varGamma _0(p^2)}}\int _0^{t} \varTheta _{\varGamma _0(p^2)}(\xi ) \, d\xi . \end{aligned}$$

Proof

From Proposition 4.2, the hyperbolic contribution is given by

$$\begin{aligned} R_H(t,s)=\sum _{\vert l \vert >2} \zeta _{\varGamma _0(p^2)}(s,l)\big (I_2(t,s,l)-I_0(t,s,l)\big ). \end{aligned}$$

Observe that from (3.26)

$$\begin{aligned} {\text { Res }}_{s=1}\zeta _{\varGamma _0(p^2)}(s,l) = \frac{1}{\pi v_{\varGamma _0(p^2)}} \sum _{\varPhi \in Q_l(p^2) /\varGamma _0(p^2)} \frac{\log \epsilon _{\varPhi }}{\sqrt{l^2-4}}. \end{aligned}$$
(4.11)

Let us consider the integral

$$\begin{aligned} A_l(t)=-\frac{\pi }{2}\int _0^t \frac{1}{\sqrt{4\pi \xi }}e^{-\frac{\xi ^2+(\log n_l^2)^2}{4\xi }} \, d \xi . \end{aligned}$$

From [1, Propositions 3.3.2, 3.3.3], we obtain

$$\begin{aligned} I_2(t,s,l)-I_0(t,s,l)=A_l(t)(s-1)+O\big ((s-1)^2\big ); \end{aligned}$$

for \({\text { Re }}(s)<1+\delta \) for some \(\delta >0\).

It follows that \(R_H(t,s)\) is holomorphic near \(s=1\) and has the required form. \(\square \)

Proof of Proposition 4.4 (ii)

By the above proposition, we write

$$\begin{aligned} R^{\mathrm {hyp}}_0 (t) = R_{H}(t,1) = {\mathcal {E}}^{\mathrm {hyp}}(t) - \frac{t-1}{2v_{\varGamma _0(p^2)}} \end{aligned}$$

with

$$\begin{aligned} {\mathcal {E}}^{\mathrm {hyp}}(t) = -\frac{1}{2v_{\varGamma _0(p^2)}}\left( \int _0^t (\varTheta _{\varGamma _0(p^2)}(\xi )-1)\,d\xi +1\right) . \end{aligned}$$

Let \(Z_{\varGamma _0(p^2)}\) be the Selberg zeta function of \(X_0(p^2)\). Note that by [1, Lemma 3.3.6]:

$$\begin{aligned} \int _0^{\infty } (\varTheta _{\varGamma _0(p^2)}(\xi )-1)\,d\xi = \lim _{s \rightarrow 1}\left( \frac{Z_{\varGamma _0(p^2)}^{\prime }(s)}{Z_{\varGamma _0(p^2)}(s)} -\frac{1}{s-1}\right) -1. \end{aligned}$$
(4.12)

Using [17, p. 27], we obtain

$$\begin{aligned} \lim _{s \rightarrow 1} \left( \frac{Z'_{\varGamma _0(p^2)}}{Z_{\varGamma _0(p^2)}}-\frac{1}{s-1}\right) =O_{\epsilon }(p^{2\epsilon }). \end{aligned}$$
(4.13)

It follows that \(\lim _{t \rightarrow \infty } {\mathcal {E}}^{\mathrm {hyp}}(t) =\frac{1}{2 v_{\varGamma _0(p^2)}} O_{\epsilon }(p^{2\epsilon })\) as required. \(\square \)

4.2.2 The elliptic contribution

Recall that (see Sect. 3.4) there is a bijective correspondence between matrices of trace l and quadratic forms of discriminant \(l^2-4\). The explicit map is given by:

$$\begin{aligned} \varPhi =[a,b,c] \mapsto \gamma _\varPhi =\left( \begin{array}{cc} \frac{l-b}{2}&{} -c\\ a &{} \frac{l+b}{2}\\ \end{array}\right) . \end{aligned}$$

To obtain the elliptic contribution in the trace formula, we consider matrices of trace \(l \in \{0, \pm 1\}\).

For the discriminant \(b^2-4ac=l^2-4\), consider the imaginary quadratic field \(K={\mathbb {Q}}(\sqrt{l^2-4})\). Writing \(l^2-4=-Df^2\) with D square-free, we only consider \(D \in \{1,3\}\) in this section, so that \(K={\mathbb {Q}}(\sqrt{-1})\) (for \(l=0\)) or \(K={\mathbb {Q}}(\sqrt{-3})\) (for \(l=\pm 1\)). For the complex number \(\theta =\frac{b+\sqrt{l^2-4}}{2a}\), set \(a_{\theta }={\mathbb {Z}}+{\mathbb {Z}}\theta \) and let \({\mathcal {A}}\) be the ideal class (the same as the narrow ideal class for imaginary quadratic fields) corresponding to \(a_{\theta }\). For any number field K, let \(\zeta _K(s)\) be the Dedekind zeta function of K.

Let \(\zeta (s, {\mathcal {A}})\) be the partial zeta function associated to a (narrow) ideal class \({\mathcal {A}}\) [13, p. 131, Appendix C]. By [13, p. 78], observe that \(\zeta _{\varPhi }(s)=\zeta (s, {\mathcal {A}})\). In loc. cit. the author assumed that \(|l|>2\); however, all the ingredients needed to write down this equation are in [34] and are valid for \(|l| \ne 2\), so the same proof works for all \(|l| \ne 2\).

In our case, the class numbers of the quadratic fields \(K={\mathbb {Q}}(\sqrt{-D})\) are 1, and hence we have an equality of the two zeta functions

$$\begin{aligned} \zeta (s,{\mathcal {A}})=\zeta _K(s). \end{aligned}$$
(4.14)

For any \(N \in {\mathbb {N}}\), let \(h_l(N)\) be the cardinality of the set \(Q_l(N) /\varGamma _0(N)\). In particular, for \(l \in \{0, \pm 1\}\) we have \(h_l(1)=1\) (since the class numbers of the corresponding imaginary quadratic fields are one). We have the following estimate for \(h_l(p^2)\):

$$\begin{aligned} \big | Q_l(p^2) /\varGamma _0(p^2) \big |&\le \big | Q_l /{\mathrm {SL}}_2({\mathbb {Z}}) \big | \big | {\mathrm {SL}}_2({\mathbb {Z}}):\varGamma _0(p^2)\big | \nonumber \\&\le h_l(1) \big | {\mathrm {SL}}_2({\mathbb {Z}}):\varGamma _0(p^2) \big | \le \big | {\mathrm {SL}}_2({\mathbb {Z}}):\varGamma _0(p^2) \big |. \end{aligned}$$
(4.15)
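The index appearing in (4.15) can be made explicit: the cosets \(\varGamma _0(N)\backslash {\mathrm {SL}}_2({\mathbb {Z}})\) are in bijection with \({\mathbb {P}}^1({\mathbb {Z}}/N{\mathbb {Z}})\), so \(\big |{\mathrm {SL}}_2({\mathbb {Z}}):\varGamma _0(p^2)\big | = p^2(1+1/p) = p(p+1)\). A small brute-force sketch (not from the paper) confirming this count:

```python
from math import gcd

def proj_line_size(N):
    """|P^1(Z/N)|: pairs (c, d) with gcd(c, d, N) = 1, counted up to
    multiplication by units of Z/N.  The unit action on such pairs is
    free, so the integer division below is exact."""
    units = sum(1 for u in range(N) if gcd(u, N) == 1)
    pairs = sum(1 for c in range(N) for d in range(N)
                if gcd(gcd(c, d), N) == 1)
    return pairs // units

# The index of Gamma_0(p^2) in SL_2(Z) equals |P^1(Z/p^2)| = p(p+1).
for p in (3, 5, 7):
    assert proj_line_size(p * p) == p * (p + 1)
```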

Proof of Proposition 4.4 (iii)

The elliptic contribution is obtained by putting \(l=0, 1, -1\) in Proposition 4.2. We have

$$\begin{aligned} R_{{{\mathcal {E}}}}(t,s) =&\sum _{l \in \{0,1,-1\}}\big (I_2(t,s,l)-I_0(t,s,l)\big ) \zeta _{\varGamma _0(p^2)}(s,l). \end{aligned}$$

Recall the Laurent series expansions of the following functions (see Definition 3.12 and [1, §3.3.3, p. 57]):

$$\begin{aligned} \zeta _{\varGamma _0(p^2)}(s,l)&=\frac{a_{-1}(l)}{s-1}+a_0(l)+O(s-1),\\ I_2(t,s,l)-I_0(t,s,l)&= b_0(t,l)+b_1(t,l)(s-1)+O((s-1)^2). \end{aligned}$$

Observe that

$$\begin{aligned} R^{\mathrm {ell}}_{-1}(t)&=\sum _{l=-1}^{1}a_{-1}(l)b_0(t,l),\\ R^{\mathrm {ell}}_{0}(t)&= \sum _{l=-1}^{1} a_0(l)b_0(t,l)+a_{-1}(l)b_1(t,l). \end{aligned}$$

Note that \(b_i(t,l)\) differs from \(C_{i,l}(t)\) of [1, §3.3.3, p. 57] by a multiplicative factor independent of t, and therefore \(\lim _{t \rightarrow \infty } b_i(t,l)\) exists; we denote it by \(b_i(\infty ,l)\).

We now proceed to find an estimate on \(R^{\mathrm {ell}}_{0}(t) \). From Definition 3.12 and (3.25), we obtain

$$\begin{aligned}&\zeta _{\varGamma _0(p^2)}(s,l) \\&\quad = \frac{1}{2 \zeta (2s) (1-p^{-2s})}\sum _{d\in \{1,p\}} \mu (d) \frac{1}{(p^2d)^s}\sum _{\varPhi \in Q_l(p^2)/\varGamma _0(p^2)} \zeta _{\varPhi ^{*d}}(s)\\&\quad = \frac{1}{2 \zeta (2s) (1-p^{-2s})} \frac{1}{p^{2s}} \left( \sum _{\varPhi \in Q_l(p^2)/\varGamma _0(p^2)} \zeta _{\varPhi ^{*1}}(s)-\frac{1}{p^s}\sum _{\varPhi \in Q_l(p^2)/\varGamma _0(p^2)} \zeta _{\varPhi ^{*p}}(s)\right) \\&\quad =I_1(s) I_2(s) \end{aligned}$$

with \(I_1(s)= \frac{1}{2 \zeta (2s) (1-p^{-2s})} \frac{1}{p^{2s}}\) and

$$\begin{aligned} I_2(s)=\sum _{\varPhi \in Q_l(p^2)/\varGamma _0(p^2)} \zeta _{\varPhi ^{*1}}(s)-\frac{1}{p^s}\sum _{\varPhi \in Q_l(p^2)/\varGamma _0(p^2)} \zeta _{\varPhi ^{*p}}(s). \end{aligned}$$

For \(d \in \{1,p\}\), consider the following series (depending on l):

$$\begin{aligned} J_d(s)=\sum \limits _{\varPhi \in Q_l(p^2)/\varGamma _0(p^2)} \zeta _{\varPhi ^{*d}}(s). \end{aligned}$$

To study the Laurent series expansion of the above series, write

$$\begin{aligned} J_d(s)= \frac{c_{-1,d}}{s-1}+c_{0,d} +O(s-1). \end{aligned}$$

For the Dedekind zeta functions \(\zeta _K(s)\) with \(K={\mathbb {Q}}(\sqrt{l^2-4})\) and \(l \in \{0,\pm 1\}\), the residues and the constant terms of the Laurent series expansions depend only on the field K. For \(i \in \{0,-1\}\) and \(d \in \{1,p\}\), we then get the following bounds on the coefficients of the above expansion using the estimate (4.15):

$$\begin{aligned} \vert c_{i,d}\vert \le C_i\cdot \, \big |{\mathrm {SL}}_2({\mathbb {Z}}):\varGamma _0(p^2)\big | \end{aligned}$$

for some constants \(C_i\)’s. These constants are determined by the Dedekind zeta functions and are independent of the prime p. The corresponding Laurent series expansion is given by

$$\begin{aligned} I_2(s)&=J_1(s)-\frac{1}{p^s} J_p(s) \\&= \frac{c_{-1,1}}{s-1}+c_{0,1}+O(s-1)-\frac{1}{p^s}\left( \frac{c_{-1,p}}{s-1}+c_{0,p}+O(s-1)\right) \\&= \frac{c_{-1,1}}{s-1}+c_{0,1}+O(s-1) \\&\quad -\left( \frac{1}{p}-\frac{\log p}{p}(s-1)+O((s-1)^2)\right) \left( \frac{c_{-1,p}}{s-1}+c_{0,p}+O(s-1)\right) \\&= \frac{1}{s-1}\left( c_{-1,1}-\frac{c_{-1,p}}{p}\right) +\left( c_{0,1}-\frac{c_{0,p}}{p}+\frac{c_{-1,p}\log p}{p}\right) +O(s-1) \\&= \frac{A_{-1}(p)}{s-1}+A_0(p)+O(s-1); \end{aligned}$$

with \(A_{-1}(p)=c_{-1,1}-\frac{c_{-1,p}}{p}\) and \(A_0(p)=c_{0,1}-\frac{c_{0,p}}{p}+\log p\frac{c_{-1,p}}{p}\). A small check shows that:

$$\begin{aligned} I_1(s) =\frac{3}{\pi ^2(p^2-1)} + \left( \frac{D_1\log p +D_2}{p^2-1}-\frac{6\log p}{\pi ^2(p^2-1)}\right) (s-1)+O((s-1)^2); \end{aligned}$$

where the constants \(D_1\) and \(D_2\) are independent of p. We conclude that

$$\begin{aligned} \zeta _{\varGamma _0(p^2)}(s,l)&= \frac{1}{2 \zeta (2s) (1-p^{-2s})}\sum _{d\in \{1,p\}} \mu (d) \frac{1}{(p^2d)^s}\sum _{\varPhi \in Q_l(p^2)/\varGamma _0(p^2)} \zeta _{\varPhi ^{*d}}(s)\\&= I_1(s) I_2(s) \\&=\frac{3 A_{-1}(p)}{\pi ^2(p^2-1)} \frac{1}{s-1}+\frac{3 A_0(p)}{\pi ^2(p^2-1)}\\&\quad +\left( \frac{D_1\log (p) +D_2}{p^2-1}-\frac{6\log p}{\pi ^2(p^2-1)}\right) A_{-1}(p)+O((s-1)). \end{aligned}$$

Hence, we obtain \(a_{-1}(l)=\frac{3 A_{-1}(p)}{\pi ^2(p^2-1)}\) and \(a_0(l)=\frac{3 A_0(p)}{\pi ^2(p^2-1)}+\left( \frac{D_1\log p +D_2}{p^2-1}-\frac{6\log p }{\pi ^2(p^2-1)}\right) A_{-1}(p)\). Observe that all the terms in the above expressions have \(p^2-1\) in the denominator, which is of the same order in p as \( |{\mathrm {SL}}_2({\mathbb {Z}}):\varGamma _0(p^2)|\). Since the constants \(D_1\) and \(D_2\) are independent of p, we get \(R^{\mathrm {ell}}_0 =o( \log p)\). \(\square \)
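The value of the prefactor at \(s=1\) can be spot-checked numerically (a sketch, not from the paper, using only \(\zeta (2)=\pi ^2/6\)): \(I_1(1)=1/(2\zeta (2)(1-p^{-2})p^{2})=3/(\pi ^2(p^2-1))\), the coefficient appearing in \(a_{-1}(l)\) and \(a_0(l)\) above.

```python
import math

def I1(s, p, terms=100000):
    """I_1(s) = 1 / (2 * zeta(2s) * (1 - p^(-2s)) * p^(2s)),
    with zeta(2s) approximated by a partial sum."""
    zeta_2s = sum(n ** (-2.0 * s) for n in range(1, terms))
    return 1.0 / (2 * zeta_2s * (1 - p ** (-2.0 * s)) * p ** (2.0 * s))

# At s = 1 the factor collapses to 3 / (pi^2 (p^2 - 1)).
for p in (5, 11, 101):
    expected = 3 / (math.pi ** 2 * (p ** 2 - 1))
    assert abs(I1(1.0, p) - expected) / expected < 1e-4
```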

4.2.3 The parabolic and spectral contribution

Recall that we are interested in the term \(R_{P-C}\) of (4.10). Let \(a_0(y,s;q,\infty ,k)\) be the zero-th Fourier coefficient of \(E_{q,k}(z,s)\). It is given by

$$\begin{aligned} a_0\left( y,s;q,\infty ,0\right)&=\delta _{q,\infty } y^s +\phi _{q,\infty }(s)y^{1-s},\\ a_0\left( y,s;q,\infty ,2\right)&= \delta _{q,\infty }y^s+\phi _{q,\infty }(s)\frac{1-s}{s} y^{1-s}. \end{aligned}$$

Consider the series

$$\begin{aligned} \tilde{E}_{q,k}(z,s)=E_{q,k}(z,s)-a_0\left( y,s;q,\infty ,k\right) . \end{aligned}$$

For any \(z \in {\mathbb {H}}\) with \(z=x+iy\), define the functions

$$\begin{aligned} p_1(t,y,k)&= \frac{1}{2}\int _{-1/2}^{1/2} \sum _{\begin{array}{c} \gamma \in \varGamma _0(p^2),\\ \vert {\text {tr}}(\gamma )\vert =2, \gamma \not \in \varGamma _0(p^2)_{\infty } \end{array}} \nu _k(t, \gamma ; z) dx, \\ p_2(t,y,k)&=\frac{1}{2} \int _{-1/2}^{1/2} \sum _{\gamma \in \varGamma _0(p^2)_{\infty }} \nu _k(t,\gamma ; z) \, dx \ - \ \frac{y}{2\pi }\int _{-\infty }^{\infty } h_t(r) \, dr, \\ p_3(t,y,k)&=-\frac{y}{2\pi } \int _{-\infty }^{\infty } h_t(r) \phi _{\infty ,\infty }\left( \frac{1}{2}-ir\right) \left( \frac{\frac{1}{2}+ir}{\frac{1}{2}-ir}\right) ^{k/2}y^{2ir} \,dr - \frac{2-k}{2} \frac{1}{v_{\varGamma _0(p^2)}},\\ p_4(t,y,k)&= -\frac{1}{4\pi } \sum _{q \in \partial X_0(p^2)} \int _{-1/2}^{1/2} \int _{-\infty }^{\infty } h_t(r) \left| \tilde{E}_{q,k}\left( x+iy, \frac{1}{2}+ir\right) \right| ^2\, dr dx. \end{aligned}$$

For \(j \in \{1,2,3,4\}\), the corresponding Mellin transforms are defined as

$$\begin{aligned} {\mathcal {M}}_j(t,s)=\int _0^{\infty } \big (p_j(t,y,2)-p_j(t,y,0)\big ) y^{s-2}dy. \end{aligned}$$

With our assumption on the prime p, observe that \(g_{p^2} > 1\). By [13, Lemma 4.4.1], we have for \({\text { Re }}(s) >1\)

$$\begin{aligned} R_{P-C}(t,s) = {\mathcal {M}}_1(t,s)+{\mathcal {M}}_2(t,s)+{\mathcal {M}}_3(t,s)+{\mathcal {M}}_4(t,s). \end{aligned}$$
(4.16)

To study the function \(p_1(t,y,k)\), we examine the matrices that appear in the sum. Let \({\mathcal {B}}=\varGamma _0(N)_{\infty }= \left\{ \pm \left( {\begin{matrix} 1 &{} m\\ 0 &{} 1\\ \end{matrix}}\right) \mid m \in {\mathbb {Z}}\right\} \) be the parabolic subgroup of \(\varGamma _0(N)\). A simple computation [1, p. 37] involving matrices shows that any matrix in \({\mathrm {SL}}_2({\mathbb {Z}})\) of trace 2 is of the form \( \left( {\begin{matrix} 1-a &{} b\\ -c &{} 1+a\\ \end{matrix}}\right) \) with \(a^2=bc\). Matrices of trace \(-2\) can be treated in a similar manner by multiplying by \(-I\). Write the matrices as

$$\begin{aligned} \left( {\begin{matrix} 1-a &{}\quad b\\ -c &{}\quad 1+a\\ \end{matrix}}\right) =\left( {\begin{matrix} \delta &{} -\beta \\ -\gamma &{} \alpha \\ \end{matrix}}\right) \left( {\begin{matrix} 1 &{}\quad m\\ 0 &{}\quad 1\\ \end{matrix}}\right) \left( {\begin{matrix} \alpha &{}\quad \beta \\ \gamma &{}\quad \delta \\ \end{matrix}}\right) =\left( {\begin{matrix} 1-m\gamma \delta &{}\quad m \delta ^2\\ -m \gamma ^2 &{}\quad 1+m\gamma \delta \\ \end{matrix}}\right) \in \varGamma _0(p^2). \end{aligned}$$

For this matrix to lie in \(\varGamma _0(p^2)\), we need \(p^2 \mid m \gamma ^2\). Hence, we have three possibilities: (i) \(p \mid \gamma \) but \(p \not \mid m\); (ii) \(p \mid m\) but \(p^2 \not \mid m\), and \(p \mid \gamma \); (iii) \(p^2 \mid m\). In other words, any matrix \(\gamma \in \varGamma _0(p^2)\) with trace 2 is of the form \(\sigma ^{-1}\left( {\begin{matrix} 1 &{} m\\ 0 &{} 1\\ \end{matrix}}\right) \sigma \) with \(m\in \mathbb {Z}\) for some unique matrix \(\sigma \). In the first two cases \(\sigma \in {\mathcal {B}}\backslash \varGamma _0(p)\), whereas in the third case \(\sigma \in {\mathcal {B}}\backslash {\mathrm {SL}}_2(\mathbb {Z})\).
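The case analysis can be confirmed by a brute-force sweep (an illustrative sketch, not part of the argument):

```python
# p^2 | m * gamma^2 holds exactly when one of the three cases does:
# (i)   p | gamma and p does not divide m;
# (ii)  p | m, p^2 does not divide m, and p | gamma;
# (iii) p^2 | m.
p = 5
for m in range(-3 * p * p, 3 * p * p + 1):
    if m == 0:
        continue
    for gamma in range(1, 4 * p):
        divisible = (m * gamma * gamma) % (p * p) == 0
        case_i = gamma % p == 0 and m % p != 0
        case_ii = m % p == 0 and m % (p * p) != 0 and gamma % p == 0
        case_iii = m % (p * p) == 0
        assert divisible == (case_i or case_ii or case_iii)
```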

For weights \(k \in \{0,2\}\) and \(d \in \{1,p\}\), define the following functions that appear as sub-sums in \(p_1(t,y,k)\) corresponding to the cases (i) and (ii):

$$\begin{aligned} q_d(t,y,k)&= \frac{1}{2}\int _{-1/2}^{1/2} \ \sum _{m \ne 0, (m,p)=1} \ \sum _{\begin{array}{c} \sigma \in {\mathcal {B}}\backslash \varGamma _0(p), \\ \sigma \ne {\mathcal {B}} \end{array}} \nu _k\left( t, \sigma ^{-1} \left( {\begin{matrix} \pm 1 &{}\quad md\\ 0 &{}\quad \pm 1\\ \end{matrix}} \right) \sigma ; z\right) dx \\&=\int _{-1/2}^{1/2} \ \sum _{m \ne 0, (m,p)=1} \ \sum _{\begin{array}{c} \sigma \in {\mathcal {B}}\backslash \varGamma _0(p) /{\mathcal {B}},\\ \sigma \ne {\mathcal {B}} \end{array}} \, \sum _{n=-\infty }^{\infty }\nu _k\left( t, \sigma ^{-1} \left( {\begin{matrix} 1 &{}\quad md\\ 0 &{}\quad 1\\ \end{matrix}} \right) \sigma ; z+n \right) dx. \end{aligned}$$

Next, we consider the other sub-sum that appears in \(p_1(t,y,k)\) corresponding to case (iii):

$$\begin{aligned} q_{p^2}(t,y,k)&= \frac{1}{2}\int _{-1/2}^{1/2} \sum _{m \ne 0} \sum _{\begin{array}{c} \sigma \in {\mathcal {B}}\backslash {\mathrm {SL}}_2({\mathbb {Z}}), \\ \sigma \ne {\mathcal {B}} \end{array}} \nu _k \left( t, \sigma ^{-1} \left( {\begin{matrix} \pm 1 &{}\quad mp^2\\ 0 &{}\quad \pm 1\\ \end{matrix}} \right) \sigma ; z \right) dx \\&=\int _{-1/2}^{1/2} \sum _{m \ne 0} \, \sum _{\begin{array}{c} \sigma \in {\mathcal {B}}\backslash {\mathrm {SL}}_2({\mathbb {Z}}) /{\mathcal {B}}, \\ \sigma \ne {\mathcal {B}} \end{array}}\, \sum _{n=-\infty }^{\infty }\nu _k \left( t, \sigma ^{-1} \left( {\begin{matrix} 1 &{}\quad mp^2\\ 0 &{}\quad 1\\ \end{matrix}} \right) \sigma ; z+n \right) dx. \end{aligned}$$

From the above observation, we have the following decomposition of the function \(p_1\):

$$\begin{aligned} p_1(t,y,k) = q_1(t,y,k)+ q_p(t,y,k)+q_{p^2}(t,y,k). \end{aligned}$$

Recall that \({\mathcal {B}}=\varGamma _0(N)_{\infty }\) denotes the parabolic subgroup of \(\varGamma _0(N)\) defined above.

Lemma 4.6

For any matrix \(\tau = \left( {\begin{matrix} a &{} b\\ c &{} d\\ \end{matrix}}\right) \in {\mathrm {SL}}_2({\mathbb {R}}) - {\mathcal {B}}\) with \({\text {tr}}(\tau )=\pm 2\), we have

$$\begin{aligned} \int _{{\mathbb {H}}} \nu _k(t,\tau ;z) {\text { Im }}(z)^s \mu _{\mathrm {hyp}}(z) = \frac{1}{|c|^s}\int _{{\mathbb {H}}} \nu _k(t, L_{\pm };z) {\text { Im }}(z)^s \mu _{\mathrm {hyp}}(z); \end{aligned}$$

where \(L_{\pm } = \left( {\begin{matrix} \pm 1 &{} 0\\ 1 &{} \pm 1\\ \end{matrix}}\right) \).

Proof

Substitute \(z=Tw\) where \(T=\dfrac{1}{\sqrt{c}}\left( {\begin{matrix} 1 &{}\quad \frac{a-d}{2}\\ 0&{}\quad c\\ \end{matrix}}\right) \). \(\square \)

For any positive integer M, define

$$\begin{aligned} {\mathcal {L}}_M(s)=\sum _{\begin{array}{c} \sigma \in {\mathcal {B}}\backslash \varGamma _0(M) /{\mathcal {B}}, \\ \sigma =\left( {\begin{matrix} * &{} *\\ c &{} *\\ \end{matrix}} \right) ,\ c \ne 0 \end{array}} \frac{1}{|c|^{2s}} \end{aligned}$$

and

$$\begin{aligned} \zeta _M(s)=\sum _{m \ge 1, (m,M)=1} \frac{1}{m^s}. \end{aligned}$$
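Here \(\zeta _M(s)\) is the Riemann zeta function with the terms divisible by the prime factors of M removed; for M = p prime this is exactly the Euler-factor identity \(\zeta _p(s)=\zeta (s)(1-p^{-s})\) used below. A quick numerical sketch (not from the paper):

```python
def zeta_partial(s, terms):
    return sum(n ** (-s) for n in range(1, terms))

def zeta_M_partial(s, M, terms):
    # sum over n coprime to M only (here M is prime, so n % M != 0)
    return sum(n ** (-s) for n in range(1, terms) if n % M != 0)

# Removing the multiples of p from zeta(s) multiplies it by (1 - p^{-s}).
s, p, N = 3.0, 7, 100000
lhs = zeta_M_partial(s, p, N)
rhs = zeta_partial(s, N) * (1 - p ** (-s))
assert abs(lhs - rhs) < 1e-6
```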

For \(k \in \{0,2\}\), consider the following integrals:

$$\begin{aligned} I_k(t,s,2)=\int _{{\mathbb {H}}} \nu _k(t,L_+;z) ({\text { Im }}z)^s \mu _{\mathrm {hyp}}(z)+\int _{{\mathbb {H}}} \nu _k(t,L_-;z)({\text { Im }}z)^s \mu _{\mathrm {hyp}}(z). \end{aligned}$$

Proposition 4.7

For the modular curves \(X_0(p^2)\), the Mellin transform \({\mathcal {M}}_1(t,s)\) can be written as a product of the following simpler functions:

$$\begin{aligned} {\mathcal {M}}_1(t,s)=\frac{p}{p-1} \left( 1-\frac{1}{p^{2s}}\right) \zeta (s){\mathcal {L}}_p(s)\big (I_2(t,s,2)-I_0(t,s,2)\big ). \end{aligned}$$

Proof

For any matrix \(\sigma \in {\mathcal {B}}\backslash \varGamma _0(p^2)\) with \(\sigma = \left( {\begin{matrix} a &{} b\\ c &{} d\\ \end{matrix}}\right) \), denote \(c(\sigma )=c\). Note that \(\vert c(\sigma )\vert \) does not depend on the choice of the coset representative.

By Lemma 4.6, we obtain

$$\begin{aligned} \int _0^{\infty } q_1(t,y,k) y^{s-2} dy&= \sum _{\begin{array}{c} m \ne 0,\\ (m,p)=1 \end{array}} \ \sum _{\begin{array}{c} \sigma \in {\mathcal {B}}\backslash \varGamma _0(p) / {\mathcal {B}}, \\ \sigma \ne {\mathcal {B}} \end{array}} \int _{{\mathbb {H}}} \nu _k(t,\sigma ^{-1} \left( {\begin{matrix} 1 &{} m\\ 0 &{} 1\\ \end{matrix}}\right) \sigma ; z) ({\text { Im }}z)^{s} \mu _{\mathrm {hyp}}\\&= \sum _{\begin{array}{c} m \ne 0,\\ (m,p)=1 \end{array}} \ \sum _{\begin{array}{c} \sigma \in {\mathcal {B}}\backslash \varGamma _0(p) / {\mathcal {B}}, \\ \sigma \ne {\mathcal {B}} \end{array}} \frac{1}{|c(\sigma ^{-1} \left( {\begin{matrix} 1 &{} m\\ 0 &{} 1\\ \end{matrix}}\right) \sigma )|^s}\ I_k(t,s,2)\\&= \zeta _p(s) I_k(t,s,2) {\mathcal {L}}_p(s),\\ \int _0^{\infty } q_p(t,y,k) y^{s-2} dy&= \sum _{\begin{array}{c} m \ne 0,\\ (m,p)=1 \end{array}}\ \sum _{\begin{array}{c} \sigma \in {\mathcal {B}}\backslash \varGamma _0(p) / {\mathcal {B}}, \\ \sigma \ne {\mathcal {B}} \end{array}} \ \int _{{\mathbb {H}}} \nu _k(t,\sigma ^{-1}\left( {\begin{matrix} 1 &{} mp\\ 0 &{} 1\\ \end{matrix}}\right) \sigma ; z) ({\text { Im }}z)^{s} \mu _{\mathrm {hyp}}\\&= \frac{\zeta _p(s)}{p^s} I_k(t,s,2) {\mathcal {L}}_p(s),\\ \int _0^{\infty } q_{p^2}(t,y,k) y^{s-2} dy&= \sum _{m \ne 0} \ \sum _{\begin{array}{c} \sigma \in {\mathcal {B}}\backslash {\mathrm {SL}}_2({\mathbb {Z}}) / {\mathcal {B}},\\ \sigma \ne {\mathcal {B}} \end{array}}\ \int _{{\mathbb {H}}} \nu _k(t,\sigma ^{-1} \left( {\begin{matrix} 1 &{} mp^2\\ 0 &{} 1\\ \end{matrix}}\right) \sigma ; z) ({\text { Im }}z)^{s} \mu _{\mathrm {hyp}}\\&=\frac{\zeta (s)}{p^{2s}} I_k(t,s,2) {\mathcal {L}}_1(s). \end{aligned}$$

Recall the identity \(\zeta _p(s)=\zeta (s)(1-p^{-s})\). By summing up, we deduce that

$$\begin{aligned} {\mathcal {M}}_1(t,s)=\left( \left( 1-\frac{1}{p^{2s}}\right) {\mathcal {L}}_p(s)+\frac{1}{p^{2s}} {\mathcal {L}}_1(s)\right) \zeta (s)\big (I_2(t,s,2)-I_0(t,s,2)\big ). \end{aligned}$$
(4.17)

By [16, p. 49 and Theorem 2.7, p. 46], we now get

$$\begin{aligned} {\mathcal {L}}_1(s)=\frac{\zeta (2s-1)}{\zeta (2s)}. \end{aligned}$$
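This identity reflects the classical parametrization of the double cosets \({\mathcal {B}}\backslash {\mathrm {SL}}_2({\mathbb {Z}}) /{\mathcal {B}}\) with \(c \ne 0\) by pairs \((c,d)\) with \(c \ge 1\), \(0 \le d < c\) and \(\gcd (c,d)=1\), which gives \({\mathcal {L}}_1(s)=\sum _{c\ge 1}\varphi (c)c^{-2s}\). A numerical sketch (not from the paper) of the resulting Dirichlet series identity \(\sum _{c\ge 1}\varphi (c)c^{-2s}=\zeta (2s-1)/\zeta (2s)\):

```python
from math import gcd

def phi(c):
    """Euler totient, by naive counting."""
    return sum(1 for d in range(1, c + 1) if gcd(d, c) == 1)

def zeta_partial(s, terms):
    return sum(n ** (-s) for n in range(1, terms))

s = 1.5  # a sample point with 2s - 1 > 1, so both sides converge
lhs = sum(phi(c) / c ** (2 * s) for c in range(1, 2000))
rhs = zeta_partial(2 * s - 1, 100000) / zeta_partial(2 * s, 100000)
assert abs(lhs - rhs) < 1e-2
```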

From [1, Lemma 3.2.19], we deduce that

$$\begin{aligned} \phi _{\infty , \infty }^{\varGamma _0(p)}(s)=\sqrt{\pi } \frac{\varGamma (s-\frac{1}{2})}{\varGamma (s)} \frac{p-1}{p^{2s}-1}\frac{\zeta (2s-1)}{\zeta (2s)}. \end{aligned}$$
(4.18)

Hence, we obtain [1, p.56]

$$\begin{aligned} \phi _{\infty , \infty }^{\varGamma _0(p)}(s)=\sqrt{\pi } \frac{\varGamma (s-\frac{1}{2})}{\varGamma (s)}{\mathcal {L}}_p(s). \end{aligned}$$
(4.19)

Comparing (4.18) and (4.19), we have \({\mathcal {L}}_1(s)=\dfrac{p^{2s}-1}{p-1}{\mathcal {L}}_p(s)\). The result follows by substituting this in (4.17). \(\square \)

Proposition 4.8

The Laurent series expansion of \({\mathcal {M}}_1(t,s)\) at \(s=1\) is given by

$$\begin{aligned} {\mathcal {M}}_1(t,s) =&\left( \frac{1}{v_{\varGamma _0(p)}}\frac{p+1}{p}A_1(t) \right) \frac{1}{s-1} \\&+ \frac{1}{v_{\varGamma _0(p)}} \frac{p+1}{p} \bigg (\left( 3\gamma _{\mathrm {EM}}+ \frac{a\pi }{6} - \log (p^2)\right) A_1(t) + B_1(t) \bigg ) \\&+ O(s-1); \end{aligned}$$

where the functions \(A_1(t)\) and \(B_1(t)\) are independent of p. Furthermore, we have \(\lim _{t \rightarrow \infty } A_1(t) = 1/2\) and \(B_1(t)\) has a finite limit as \(t \rightarrow \infty \), which we call \(B_1(\infty )\).

Proof

From Proposition 4.7 and (4.19), we obtain

$$\begin{aligned} {\mathcal {M}}_1(t,s)&= \frac{p}{p-1} \left( 1-\frac{1}{p^{2s}}\right) \zeta (s)\phi _{\infty , \infty }^{\varGamma _0(p)}(s) \left( \frac{\varGamma (s)}{\sqrt{\pi }\varGamma (s-\frac{1}{2})} \big (I_2(t,s,2)-I_0(t,s,2)\big )\right) . \end{aligned}$$
(4.20)

By [1, p.59] we have

$$\begin{aligned} \phi _{\infty , \infty }^{\varGamma _0(p)}(s)= \frac{1}{v_{\varGamma _0(p)}}\frac{1}{s-1} + \frac{1}{v_{\varGamma _0(p)}}\left( 2 \gamma _{\mathrm {EM}}+ a \frac{\pi }{6}-\frac{p^2}{p^2-1} \log (p^2)\right) + O(s-1), \end{aligned}$$

and

$$\begin{aligned} \zeta (s)=\frac{1}{s-1} + \gamma _{\mathrm {EM}}+ O(s-1). \end{aligned}$$

Recall that we have a well-known Laurent series expansion

$$\begin{aligned} \left( 1-\frac{1}{p^{2s}}\right) = \left( 1-\frac{1}{p^2}\right) + \frac{\log (p^2)}{p^2}(s-1)+ O((s-1)^2). \end{aligned}$$
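The linear coefficient here is just the derivative \(\frac{d}{ds}\big (1-p^{-2s}\big )\big |_{s=1} = 2\log (p)\,p^{-2} = \log (p^2)/p^2\); a finite-difference sketch (not from the paper):

```python
import math

def f(s, p):
    return 1 - p ** (-2.0 * s)

# A central finite difference at s = 1 recovers the coefficient log(p^2)/p^2.
p, h = 7, 1e-6
deriv = (f(1 + h, p) - f(1 - h, p)) / (2 * h)
assert abs(deriv - math.log(p ** 2) / p ** 2) < 1e-6
```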

By a verbatim generalization of [13, Lemma B.2.1] and [1, Proposition 3.3.4] we have:

$$\begin{aligned} \frac{\varGamma (s) }{\sqrt{\pi }\varGamma (s-\frac{1}{2})} [I_2(t,s,2)-I_0(t,s,2)] = A_1(t) (s-1)+B_1(t) (s-1)^2+O((s-1)^3); \end{aligned}$$

where \(\lim _{t \rightarrow \infty }A_1(t)=1/2\). It also follows from [1, Lemma 3.3.10] that \(B_1(t)\) has a finite limit as \(t \rightarrow \infty \). Putting these in Eq. (4.20), we get the required result. \(\square \)

From [13, Lemma 4.4.7-9], we obtain

$$\begin{aligned} \begin{aligned} {\mathcal {M}}_2(t,s)&= \frac{1}{s-1}\left( A_2(t)+ \frac{1}{4\pi } \right) + \left( \dfrac{1-\log (4\pi )}{4\pi }+\gamma _{\mathrm {EM}}A_2(t)+B_2(t) \right) + O(s-1),\\ {\mathcal {M}}_3(t,s)&= \frac{1}{v_{\varGamma _0(p^2)}}\frac{1}{s-1} + \left( \frac{{\mathcal {C}}_{\infty , \infty }^{\varGamma _0(p^2)}}{2}+\frac{t+1}{2 v_{\varGamma _0(p^2)}} \right) + O(s-1),\\ {\mathcal {M}}_4(t,s)&= A_4(t)+O(s-1), \end{aligned} \end{aligned}$$
(4.21)

where \(A_2\), \(B_2\) and \(A_4\) depend only on t and tend to zero as \(t \rightarrow \infty \), and \({\mathcal {C}}_{\infty , \infty }^{\varGamma _0(p^2)}\) is the constant term of \(\phi _{\infty , \infty }^{\varGamma _0(p^2)}\) (see Lemma 3.3).

Proof of Proposition 4.4 (iv)

Combining (4.16), Proposition 4.8 and (4.21), we get

$$\begin{aligned} R_{P-C}(t,s) = \frac{R^{\mathrm {par}}_{-1}(t)}{s-1} + R^{\mathrm {par}}_0(t) + O(s-1); \end{aligned}$$

where

$$\begin{aligned} R^{\mathrm {par}}_{-1}(t) = \frac{1}{4\pi } + \frac{1}{v_{\varGamma _0(p^2)}} + \frac{1}{v_{\varGamma _0(p)}}\frac{p+1}{p}A_1(t) + A_2(t) \end{aligned}$$

and

$$\begin{aligned} R^{\mathrm {par}}_0(t)&= \frac{1}{v_{\varGamma _0(p)}} \frac{p+1}{p} \bigg (\left( 3\gamma _{\mathrm {EM}}+ \frac{a\pi }{6} - \log (p^2)\right) A_1(t) + B_1(t) \bigg ) \\&\quad + \dfrac{1-\log (4\pi )}{4\pi }+\gamma _{\mathrm {EM}}A_2(t)+B_2(t) + \frac{{\mathcal {C}}_{\infty ,\infty }^{\varGamma _0(p^2)}}{2}+\frac{t+1}{2 v_{\varGamma _0(p^2)}} + A_4(t). \end{aligned}$$

Moreover writing \(R^{\mathrm {par}}_0(t) = \dfrac{t+1}{2 v_{\varGamma _0(p^2)}} + {\mathcal {E}}^{\mathrm {par}}(t)\) we have

$$\begin{aligned} \lim _{t \rightarrow \infty } {\mathcal {E}}^{\mathrm {par}}(t)&= \frac{1}{2 v_{\varGamma _0(p)}} \frac{p+1}{p} \left( 3\gamma _{\mathrm {EM}}+ \frac{a\pi }{6} - \log (p^2) + 2 B_1(\infty ) \right) \\&\qquad + \frac{1-\log (4\pi )}{4\pi } + \frac{{\mathcal {C}}_{\infty ,\infty }^{\varGamma _0(p^2)}}{2}\\&\quad = \frac{1-\log (4\pi )}{4\pi }+ \frac{{\mathcal {C}}_{\infty ,\infty }^{\varGamma _0(p^2)}}{2} + O\left( \frac{\log (p^2)}{p^2} \right) . \end{aligned}$$

We only need to show that \({\mathcal {C}}_{\infty , \infty }^{\varGamma _0(p^2)} = O\left( \frac{\log (p^2)}{p^2}\right) \) which follows from our computation of \(\phi _{\infty , \infty }^{\varGamma _0(p^2)}\) in Lemma 3.3. This concludes the proof of Proposition 4.4. \(\square \)

4.3 Asymptotics of the canonical Green’s function

Proof of Theorem  1.1

From Eq. (4.10), we have

$$\begin{aligned} g_{p^2}R_F(s)=-R_D(t,s) +R_{H}(t,s) + R_{{{\mathcal {E}}}}(t,s) + R_{P-C}(t,s). \end{aligned}$$

Hence, we get the following equality:

$$\begin{aligned} g_{p^2}{\mathcal {R}}_{\infty }^{\varGamma _0(p^2)}= -R^{\mathrm {dis}}_0(t) + R^{\mathrm {hyp}}_0(t) + R^{\mathrm {ell}}_0(t) + R^{\mathrm {par}}_0(t). \end{aligned}$$

According to Proposition 4.4, as \(t \rightarrow \infty \) we have \(R^{\mathrm {dis}}_0(t)\rightarrow 0\) and \(R^{\mathrm {ell}}_0=o(\log (p))\). We also have \( R^{\mathrm {hyp}}_0(t) = {\mathcal {E}}^{\mathrm {hyp}}(t) - \frac{t-1}{2 v_{\varGamma _0(p^2)}}\) and \(R^{\mathrm {par}}_0(t) = \frac{t+1}{2 v_{\varGamma _0(p^2)}} + {\mathcal {E}}^{\mathrm {par}}(t)\), and hence

$$\begin{aligned} R^{\mathrm {hyp}}_0(t) +R^{\mathrm {par}}_0(t)={\mathcal {E}}^{\mathrm {hyp}}(t)+ {\mathcal {E}}^{\mathrm {par}}(t)+\frac{1}{ v_{\varGamma _0(p^2)}}. \end{aligned}$$

From Proposition 4.4, recall that \(\lim _{t \rightarrow \infty } {\mathcal {E}}^{\mathrm {hyp}}(t) = \frac{1}{2 v_{\varGamma _0(p^2)}} O_{\epsilon }(p^{2 \epsilon })\) and \(\lim _{t \rightarrow \infty } {\mathcal {E}}^{\mathrm {par}}(t) = \frac{1-\log (4\pi )}{4\pi } + O\left( \frac{\log (p^2)}{p^2}\right) \). As \(t \rightarrow \infty \), we have

$$\begin{aligned} {\mathcal {R}}_{\infty }^{\varGamma _0(p^2)}= & {} \frac{1}{g_{p^2}} \lim _{t \rightarrow \infty } \left( -R^{\mathrm {dis}}_0(t) + R^{\mathrm {hyp}}_0(t) + R^{\mathrm {ell}}_0(t) + R^{\mathrm {par}}_0(t)\right) \\= & {} \frac{1}{g_{p^2}} \lim _{t \rightarrow \infty } \left( R^{\mathrm {hyp}}_0(t) +R^{\mathrm {par}}_0(t)+R^{\mathrm {ell}}_0\right) \\= & {} \frac{1}{g_{p^2}}\lim _{t \rightarrow \infty } \left( {\mathcal {E}}^{\mathrm {hyp}}(t)+ {\mathcal {E}}^{\mathrm {par}}(t)+\frac{1}{ v_{\varGamma _0(p^2)}} \right) +o\left( \frac{\log (p^2)}{g_{p^2}} \right) \\= & {} \frac{1}{g_{p^2}} \left( \frac{1-\log (4\pi )}{4\pi } + O\left( \frac{\log (p^2)}{p^2}\right) \right. \\&\left. + \frac{1}{2 v_{\varGamma _0(p^2)}} O_{\epsilon }(p^{2 \epsilon })+\frac{1}{ v_{\varGamma _0(p^2)}} \right) +o\left( \frac{\log (p^2)}{g_{p^2}} \right) \\= & {} o\left( \frac{\log (p^2)}{g_{p^2}} \right) .\\ \end{aligned}$$

The last equality follows from \(v_{\varGamma _0(p^2)} = \frac{\pi }{3}p(p+1)\). \(\square \)

Proposition 4.9

For \(p > 7\), we have the following asymptotic expression

$$\begin{aligned} {\mathfrak {g}}_{\mathrm {can}}(\infty ,0) = \frac{6\log (p^2)}{p(p+1)} + o\left( \frac{\log (p^2)}{g_{p^2}}\right) . \end{aligned}$$

Proof

Using Theorem 1.1, Remark 2.1 and (3.17), we obtain

$$\begin{aligned} {\mathfrak {g}}_{\mathrm {can}}(\infty ,0) = -2\pi {\mathcal {C}}_{\infty ,0}^{\varGamma _0(p^2)}+ o\left( \frac{\log (p^2)}{g_{p^2}}\right) . \end{aligned}$$

By Corollary 3.6, we also have

$$\begin{aligned} -2\pi {\mathcal {C}}_{\infty ,0}^{\varGamma _0(p^2)}= \frac{6 \log (p^2)}{p(p+1)} + o\left( \frac{\log (p^2)}{g_{p^2}}\right) ; \end{aligned}$$

noting that \(v_{\varGamma _0(p^2)} = \frac{\pi }{3}p(p+1)\). Hence, asymptotically the main contribution for \({\mathfrak {g}}_{\mathrm {can}}(\infty ,0)\) comes from \({\mathcal {C}}_{\infty ,0}^{\varGamma _0(p^2)}\) and the proposition follows. \(\square \)

Remark 4.10

We need the assumption \(p > 7\) in the parabolic part as the computations are carried out under the assumption that \(g_{p^2} > 1\).

In [4], the estimates on Arakelov-Green’s functions are provided for general non-compact orbisurfaces.

5 Minimal regular models of Edixhoven

For primes \(p \ge 7\), the modular curve \(X_0(p^2)\) is an algebraic curve defined over \({\mathbb {Q}}\).

In [10], Bas Edixhoven constructed regular models \(\widetilde{{\mathcal {X}}}_0(p^2)\) for all such primes. Note that these models are arithmetic surfaces over \({{\,\mathrm{Spec}\,}}{\mathbb {Z}}\). They are, however, not minimal. In this section, we recall the regular models of Edixhoven and describe the minimal regular models obtained from them.

For any prime q of \({\mathbb {Z}}\) with \(q \ne p\), the fiber \(\widetilde{{\mathcal {X}}}_0(p^2)_{{\mathbb {F}}_q}\) is a smooth curve of genus \(g_{p^2}\), the genus of \(X_0(p^2)\). For the prime p, the fiber \(\widetilde{{\mathcal {X}}}_0(p^2)_{{\mathbb {F}}_p}\) is reducible and non-reduced, of arithmetic genus \(g_{p^2}\), and its geometry depends on the class of p in \({\mathbb {Z}}/ 12{\mathbb {Z}}\). To describe \(\widetilde{{\mathcal {X}}}_0(p^2)\), it is thus enough to describe the special fiber.

The minimal regular model \({\mathcal {X}}_0(p^2)\) is obtained from \(\widetilde{{\mathcal {X}}}_0(p^2)\) by three successive blow-downs of curves in the special fiber \(\widetilde{{\mathcal {X}}}_0(p^2)_{{\mathbb {F}}_p}\), and we shall denote by \(\pi : \widetilde{{\mathcal {X}}}_0(p^2) \rightarrow {\mathcal {X}}_0(p^2)\) the morphism from Edixhoven’s model. In the computations, we shall use [26, Chapter 9, Theorem 2.12] repeatedly. Let \(\cdot \) denote the local intersection pairing as in [26].

In the following subsections, we shall explicitly describe the special fiber of the minimal regular model \({\mathcal {X}}_0(p^2)\). We shall also compute the local intersection numbers among the various components in the fiber. The Arakelov intersections in this case are obtained by simply multiplying the local intersection numbers by \(\log (p)\). The following proposition is the key finding of this section:

Proposition 5.1

The special fiber of \( {\mathcal {X}}_0(p^2)\) consists of two curves \(C_{2,0}'\) and \(C_{0,2}'\), and the local intersection numbers are given by:

$$\begin{aligned} C_{2,0}'\cdot C_{0,2}'= - (C_{2,0}')^2 = - (C_{0,2}')^2 = \frac{p^2-1}{24}. \end{aligned}$$

The proof of the previous proposition shall take up the rest of this section. It is divided into four cases depending on the residue of p modulo 12.

5.1 Case \(p \equiv 1 \pmod {12}\)

Following Edixhoven [10], we draw the special fiber \(V_p = \widetilde{{\mathcal {X}}}_0(p^2)_{{\mathbb {F}}_p}\) for this case in Fig. 1. Each component is a \({\mathbb {P}}^1\), and the pair \((n,m)\) adjacent to each component records its multiplicity n and its local self-intersection number m. The arithmetic genus is given by \(g_{p^2} = 12k^2 -3k -1\), where \(p = 12k+1\).

Fig. 1 The special fiber \(\widetilde{{\mathcal {X}}}_0(p^2)_{{\mathbb {F}}_p}\) when \(p \equiv 1 \pmod {12}\)

Proposition 5.2

The local intersection numbers of the vertical components supported on the special fiber of \(\widetilde{{\mathcal {X}}}_0(p^2)\) are given in the following table.

$$\begin{aligned} \begin{array}{l|c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} &{} C_{2,0} &{} C_{0,2} &{} C_{1,1} &{} \ E &{} \ F \\ \hline C_{2,0} &{} -\frac{p(p-1)}{12} &{} \frac{p-1}{12} &{} \frac{p-1}{12} &{} 0 &{} 0 \\ C_{0,2} &{} \frac{p-1}{12} &{} -\frac{p(p-1)}{12} &{} \frac{p-1}{12} &{} 0 &{} 0 \\ C_{1,1} &{} \frac{p-1}{12} &{} \frac{p-1}{12} &{} -1 &{} 1 &{} 1 \\ E &{} 0 &{} 0 &{} 1 &{} -2 &{} 0 \\ F &{} 0 &{} 0 &{} 1 &{} 0 &{} -3 \end{array} \end{aligned}$$

Proof

The self-intersections \(C_{1,1}^2, E^2\) and \(F^2\) were calculated by Edixhoven (see [10, Fig. 1.5.2.1]). Since \(V_p\) is the principal divisor (p), we must have \(V_p \cdot D = 0\) for any vertical divisor D. Moreover, \(V_p = C_{2,0} + C_{0,2} + (p-1) C_{1,1} + \frac{p-1}{2} E + \frac{p-1}{3} F\) is the linear combination of the prime divisors of the special fiber counted with multiplicities. All the other intersection numbers can easily be calculated from this information. For example, \(V_p\cdot E = 0\) gives \((p-1) C_{1,1}\cdot E + \frac{(p-1)}{2} E^2 = 0\); since \(E^2 = -2\), this yields \(C_{1,1}\cdot E = 1\). The remaining calculations are analogous and we omit them. \(\square \)
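Since the table is used repeatedly below, it is worth checking that it is consistent with the relation \(V_p \cdot D = 0\). The following sketch (our own verification, not part of the paper’s argument) does this with exact rational arithmetic for a few primes \(p \equiv 1 \pmod {12}\):

```python
from fractions import Fraction as F

def table_p1(p):
    """Intersection matrix of Proposition 5.2 in the basis
    (C_{2,0}, C_{0,2}, C_{1,1}, E, F), for p = 1 mod 12."""
    a, d = F(p - 1, 12), -F(p * (p - 1), 12)
    return [[d, a, a, 0, 0],
            [a, d, a, 0, 0],
            [a, a, -1, 1, 1],
            [0, 0, 1, -2, 0],
            [0, 0, 1, 0, -3]]

def fiber_mults_p1(p):
    """Multiplicities in V_p = C_{2,0} + C_{0,2} + (p-1) C_{1,1}
    + (p-1)/2 E + (p-1)/3 F."""
    return [1, 1, p - 1, F(p - 1, 2), F(p - 1, 3)]

for p in (13, 37, 61):  # primes congruent to 1 mod 12
    T, v = table_p1(p), fiber_mults_p1(p)
    # V_p is the principal divisor (p), so V_p . D = 0 for every component D
    assert all(sum(row[j] * v[j] for j in range(5)) == 0 for row in T)
```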

Proof of Proposition 5.1

When \(p \equiv 1 \pmod {12}\): By Proposition 5.2, the component \(C_{1,1}\) is rational and has self-intersection \(-1\). By Castelnuovo’s criterion [26, Chapter 9, Theorem 3.8] we can thus blow down \(C_{1,1}\) without introducing a singularity. Let \({\mathcal {X}}_0(p^2)'\) be the corresponding arithmetic surface and \(\pi _1: \widetilde{{\mathcal {X}}}_0(p^2) \rightarrow {\mathcal {X}}_0(p^2)'\) the blow-down morphism.

For \(E' = \pi _1(E)\), we have \(\pi _1^* E' = E + \mu C_{1,1}\) for some \(\mu \in {\mathbb {Q}}\) (see Liu [26, Chapter 9, Proposition 2.18]). Using [26, Chapter 9, Theorem 2.12], we obtain \(0 = C_{1,1}\cdot \pi _1^* E' = 1 - \mu \), so \(\mu = 1\) and \(\pi _1^* E' = E + C_{1,1}\). Hence, we deduce that \((E')^2 = (\pi _1^* E')^2 = \langle E + C_{1,1}, E +C_{1,1} \rangle = -1\). Thus \(E'\) is a rational curve in the special fiber of \({\mathcal {X}}_0(p^2)'\) with self-intersection \(-1\). It can thus be blown down again and the resulting scheme is again regular. Let \({\mathcal {X}}_0(p^2)''\) be the blow down and \(\pi _2: \widetilde{{\mathcal {X}}}_0(p^2) \rightarrow {\mathcal {X}}_0(p^2)''\) the morphism from \(\widetilde{{\mathcal {X}}}_0(p^2)\).

Let \(F' = \pi _2(F)\), and if \(\pi _2^*F' = F + \mu C_{1,1} + \nu E\) for \(\mu , \nu \in {\mathbb {Q}}\) then using the fact that \(C_{1,1} \cdot \pi _2^*F' = E \cdot \pi _2^*F' = 0\) we find \(\mu = 2\) and \(\nu =1\). This yields \(\pi _2^*F' = F + 2 C_{1,1} + E\) and hence \((F')^2 = -1\). We can thus blow down \(F'\) further to arrive finally at an arithmetic surface \({\mathcal {X}}_0(p^2)\). This is the minimal regular model of \(X_0(p^2)\) since no further blow down is possible. Let \(\pi : \widetilde{{\mathcal {X}}}_0(p^2) \rightarrow {\mathcal {X}}_0(p^2)\) be the morphism obtained by composing the sequence of blow downs.
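The linear system determining \(\mu \) and \(\nu \) is small enough to check mechanically. A short sketch (ours; the variable names are ad hoc) solving it from the intersection numbers of Proposition 5.2:

```python
# Intersection numbers among C_{1,1}, E, F taken from Proposition 5.2
C11C11, EE, FF = -1, -2, -3
C11E, C11F, EF = 1, 1, 0

# pi_2^* F' = F + mu C_{1,1} + nu E must be orthogonal to C_{1,1} and E:
#   C11F + mu * C11C11 + nu * C11E = 0
#   EF   + mu * C11E   + nu * EE   = 0
# Solve the 2x2 linear system by Cramer's rule (the determinant is 1 here):
det = C11C11 * EE - C11E * C11E
mu = ((-C11F) * EE - C11E * (-EF)) // det
nu = (C11C11 * (-EF) - (-C11F) * C11E) // det
assert (mu, nu) == (2, 1)

# (F')^2 = (F + 2 C_{1,1} + E)^2, expanded bilinearly
F_prime_sq = (FF + mu * mu * C11C11 + nu * nu * EE
              + 2 * mu * C11F + 2 * nu * EF + 2 * mu * nu * C11E)
assert F_prime_sq == -1  # so F' can be blown down by Castelnuovo's criterion
```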

The special fiber of \({\mathcal {X}}_0(p^2)\) consists of two curves \(C_{2,0}'\) and \(C_{0,2}'\), that are the images of \(C_{2,0}\) and \(C_{0,2}\) respectively under \(\pi \). They intersect with high multiplicity at a single point. To calculate the intersections we notice that

$$\begin{aligned} \pi ^* C_{2,0}' = C_{2,0} + \frac{p-1}{2} C_{1,1} + \frac{p-1}{4} E + \frac{p-1}{6} F, \\ \pi ^* C_{0,2}' = C_{0,2} + \frac{p-1}{2} C_{1,1} + \frac{p-1}{4} E + \frac{p-1}{6} F, \end{aligned}$$

obtained as before from the fact that the intersections of \(\pi ^* C'_{2,0}\) and \(\pi ^* C'_{0,2}\) with \(C_{1,1}\), E and F are all 0. This yields

$$\begin{aligned} C_{2,0}'\cdot C_{0,2}'= - (C_{2,0}')^2 = - (C_{0,2}')^2 = \frac{p^2-1}{24}. \end{aligned}$$

\(\square \)
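The descent of the intersection numbers along \(\pi \) can also be verified with exact arithmetic. In the following sketch (our own check), the pullbacks are encoded as coefficient vectors against the basis \((C_{2,0}, C_{0,2}, C_{1,1}, E, F)\) and paired via the table of Proposition 5.2:

```python
from fractions import Fraction as F

def table_p1(p):
    # Intersection matrix of Proposition 5.2 (basis C20, C02, C11, E, F)
    a, d = F(p - 1, 12), -F(p * (p - 1), 12)
    return [[d, a, a, 0, 0], [a, d, a, 0, 0],
            [a, a, -1, 1, 1], [0, 0, 1, -2, 0], [0, 0, 1, 0, -3]]

def pair(T, u, w):
    """Bilinear pairing of coefficient vectors u, w via the matrix T."""
    return sum(u[i] * T[i][j] * w[j] for i in range(5) for j in range(5))

for p in (13, 37, 61):
    T = table_p1(p)
    # pi^* C20' = C20 + (p-1)/2 C11 + (p-1)/4 E + (p-1)/6 F, and symmetrically
    u = [1, 0, F(p - 1, 2), F(p - 1, 4), F(p - 1, 6)]
    w = [0, 1, F(p - 1, 2), F(p - 1, 4), F(p - 1, 6)]
    # the pullbacks must be orthogonal to the blown-down components C11, E, F
    for k in (2, 3, 4):
        e = [0] * 5
        e[k] = 1
        assert pair(T, u, e) == 0 and pair(T, w, e) == 0
    # the resulting intersection numbers of Proposition 5.1
    assert pair(T, u, w) == F(p * p - 1, 24)
    assert pair(T, u, u) == pair(T, w, w) == -F(p * p - 1, 24)
```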

5.2 Case \(p \equiv 5 \pmod {12}\)

In this case the special fiber \(V_p = \widetilde{{\mathcal {X}}}_0(p^2)_{{\mathbb {F}}_p}\) is described by Fig. 2. Each component is a \({\mathbb {P}}^1\) and the genus is given by \(g_{p^2} = 12k^2 + 5k\) where \(p = 12k+5\).

Fig. 2 The special fiber \(\widetilde{{\mathcal {X}}}_0(p^2)_{{\mathbb {F}}_p}\) when \(p \equiv 5 \pmod {12}\)

Proposition 5.3

The local intersection numbers of the vertical components supported on the special fiber of \(\widetilde{{\mathcal {X}}}_0(p^2)\) for \(p \equiv 5 \pmod {12}\) are given in the following table.

$$\begin{aligned} \begin{array}{l|c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} &{} C_{2,0} &{} C_{0,2} &{} C_{1,1} &{} \ E &{} \ F \\ \hline C_{2,0} &{} -\frac{p^2-p+4}{12} &{} \frac{p-5}{12} &{} \frac{p-5}{12} &{} 0 &{} 1 \\ C_{0,2} &{} \frac{p-5}{12} &{} -\frac{p^2-p+4}{12} &{} \frac{p-5}{12} &{} 0 &{} 1 \\ C_{1,1} &{} \frac{p-5}{12} &{} \frac{p-5}{12} &{} -1 &{} 1 &{} 1 \\ E &{} 0 &{} 0 &{} 1 &{} -2 &{} 0 \\ F &{} 1 &{} 1 &{} 1 &{} 0 &{} -3 \end{array} \end{aligned}$$

The calculations are very similar to the previous case, so we omit the proof.
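The table of Proposition 5.3 can nonetheless be checked for internal consistency: since \(V_p\) is principal, the multiplicity vector of the special fiber must be annihilated by the table. The vector below is our own reconstruction (normalized so that \(C_{2,0}\) has coefficient 1, it spans the kernel of the table and so must give the multiplicities of \(V_p\)); note that the coefficient of F is \(\frac{p+1}{3}\) here, not \(\frac{p-1}{3}\):

```python
from fractions import Fraction as F

def table_p5(p):
    """Intersection matrix of Proposition 5.3 (basis C20, C02, C11, E, F)."""
    a, d = F(p - 5, 12), -F(p * p - p + 4, 12)
    return [[d, a, a, 0, 1], [a, d, a, 0, 1],
            [a, a, -1, 1, 1], [0, 0, 1, -2, 0], [1, 1, 1, 0, -3]]

for p in (17, 29, 41):  # primes congruent to 5 mod 12
    T = table_p5(p)
    # reconstructed multiplicities of V_p in this case
    v = [1, 1, p - 1, F(p - 1, 2), F(p + 1, 3)]
    # V_p . D = 0 for every component D of the special fiber
    assert all(sum(row[j] * v[j] for j in range(5)) == 0 for row in T)
```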

Proof of Proposition 5.1

When \(p \equiv 5 \pmod {12}\): The minimal regular model is obtained by blowing down \(C_{1,1}\), then the image of E and then the image of F as in the previous section. We again denote the minimal regular model by \({\mathcal {X}}_0(p^2)\), and by \(\pi : \widetilde{{\mathcal {X}}}_0(p^2) \rightarrow {\mathcal {X}}_0(p^2)\) the morphism obtained by the successive blow downs.

The special fiber of \({\mathcal {X}}_0(p^2)\) again consists of two curves \(C_{2,0}'\) and \(C_{0,2}'\) that are the images of \(C_{2,0}\) and \(C_{0,2}\) respectively under \(\pi \). The curves \(C_{2,0}'\) and \(C_{0,2}'\) intersect at a single point. In this case we have

$$\begin{aligned} \pi ^* C_{2,0}' = C_{2,0} + \frac{p-1}{2} C_{1,1} + \frac{p-1}{4} E + \frac{p+1}{6} F, \\ \pi ^* C_{0,2}' = C_{0,2} + \frac{p-1}{2} C_{1,1} + \frac{p-1}{4} E + \frac{p+1}{6} F, \end{aligned}$$

obtained as before from the fact that the intersections of \(\pi ^* C'_{2,0}\) and \(\pi ^* C'_{0,2}\) with \(C_{1,1}\), E and F are all 0. This yields

$$\begin{aligned} C_{2,0}' \cdot C_{0,2}' = - (C_{2,0}')^2 = - (C_{0,2}')^2 = \frac{p^2-1}{24}. \end{aligned}$$

\(\square \)

5.3 Case \(p \equiv 7 \pmod {12}\)

In this case the special fiber \(V_p = \widetilde{{\mathcal {X}}}_0(p^2)_{{\mathbb {F}}_p}\) is described by Fig. 3. Each component is a \({\mathbb {P}}^1\) occurring with the specified multiplicity. The genus is given by \(g_{p^2} = 12k^2 + 9k+1\) where \(p = 12k+7\).

Fig. 3 The special fiber \(\widetilde{{\mathcal {X}}}_0(p^2)_{{\mathbb {F}}_p}\) when \(p \equiv 7 \pmod {12}\)

Proposition 5.4

The local intersection numbers of the prime divisors supported on the special fiber of \(\widetilde{{\mathcal {X}}}_0(p^2)\) for \(p \equiv 7 \pmod {12}\) are given in the following table.

$$\begin{aligned} \begin{array}{l|c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} &{} C_{2,0} &{} C_{0,2} &{} C_{1,1} &{} \ E &{} \ F \\ \hline C_{2,0} &{} -\frac{p^2-p+6}{12} &{} \frac{p-7}{12} &{} \frac{p-7}{12} &{} 1 &{} 0 \\ C_{0,2} &{} \frac{p-7}{12} &{} -\frac{p^2-p+6}{12} &{} \frac{p-7}{12} &{} 1 &{} 0 \\ C_{1,1} &{} \frac{p-7}{12} &{} \frac{p-7}{12} &{} -1 &{} 1 &{} 1 \\ E &{} 1 &{} 1 &{} 1 &{} -2 &{} 0 \\ F &{} 0 &{} 0 &{} 1 &{} 0 &{} -3 \end{array} \end{aligned}$$

Proof of Proposition 5.1

When \(p \equiv 7 \pmod {12}\): The minimal regular model is obtained by blowing down \(C_{1,1}\), then the image of E and then the image of F as in the previous sub-section. Let \({\mathcal {X}}_0(p^2)\) be the minimal regular model and \(\pi : \widetilde{{\mathcal {X}}}_0(p^2) \rightarrow {\mathcal {X}}_0(p^2)\) the morphism obtained by the successive blow downs.

The special fiber of \({\mathcal {X}}_0(p^2)\) consists of two curves \(C_{2,0}'\) and \(C_{0,2}'\) that are the images of \(C_{2,0}\) and \(C_{0,2}\) respectively under \(\pi \). The curves \(C_{2,0}'\) and \(C_{0,2}'\) intersect at a single point. Here

$$\begin{aligned} \pi ^* C_{2,0}' = C_{2,0} + \frac{p-1}{2} C_{1,1} + \frac{p+1}{4} E + \frac{p-1}{6} F, \\ \pi ^* C_{0,2}' = C_{0,2} + \frac{p-1}{2} C_{1,1} + \frac{p+1}{4} E + \frac{p-1}{6} F, \end{aligned}$$

which are easily calculated using the fact that the intersections of \(\pi ^* C'_{2,0}\) and \(\pi ^* C'_{0,2}\) with \(C_{1,1}\), E and F are all 0. This yields

$$\begin{aligned} C_{2,0}' \cdot C_{0,2}' = - (C_{2,0}')^2 = - (C_{0,2}')^2 = \frac{p^2-1}{24}. \end{aligned}$$

\(\square \)

5.4 Case \(p \equiv 11 \pmod {12}\)

In this final case the special fiber \(V_p = \widetilde{{\mathcal {X}}}_0(p^2)_{{\mathbb {F}}_p}\) is described by Fig. 4. Each component is a \({\mathbb {P}}^1\). The genus is given by \(g_{p^2} = 12k^2 + 17k+6\) where \(p = 12k+11\).

Fig. 4 The special fiber \(\widetilde{{\mathcal {X}}}_0(p^2)_{{\mathbb {F}}_p}\) when \(p \equiv 11 \pmod {12}\)

Proposition 5.5

The local intersection numbers of the prime divisors supported on the special fiber of \(\widetilde{{\mathcal {X}}}_0(p^2)\) for \(p \equiv 11 \pmod {12}\) are given in the following table.

$$\begin{aligned} \begin{array}{l|c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} &{} C_{2,0} &{} C_{0,2} &{} C_{1,1} &{} \ E &{} \ F \\ \hline C_{2,0} &{} -\frac{p^2-p+10}{12} &{} \frac{p-11}{12} &{} \frac{p-11}{12} &{} 1 &{} 1 \\ C_{0,2} &{} \frac{p-11}{12} &{} -\frac{p^2-p+10}{12} &{} \frac{p-11}{12} &{} 1 &{} 1 \\ C_{1,1} &{} \frac{p-11}{12} &{} \frac{p-11}{12} &{} -1 &{} 1 &{} 1 \\ E &{} 1 &{} 1 &{} 1 &{} -2 &{} 0 \\ F &{} 1 &{} 1 &{} 1 &{} 0 &{} -3 \end{array} \end{aligned}$$

Let us now complete the proof of Proposition 5.1 by presenting the final case.

Proof of Proposition 5.1

When \(p \equiv 11 \pmod {12}\): The minimal regular model is again obtained by blowing down \(C_{1,1}\), then the image of E and then the image of F as in the previous section. We again denote the minimal regular model by \({\mathcal {X}}_0(p^2)\). Let \(\pi : \widetilde{{\mathcal {X}}}_0(p^2) \rightarrow {\mathcal {X}}_0(p^2)\) be the morphism obtained by the successive blow downs.

The special fiber of \({\mathcal {X}}_0(p^2)\) consists of two curves \(C_{2,0}'\) and \(C_{0,2}'\), the images of \(C_{2,0}\) and \(C_{0,2}\) respectively under \(\pi \), intersecting at a single point. Mimicking the calculations of the previous sub-section, we obtain

$$\begin{aligned} \pi ^* C_{2,0}' = C_{2,0} + \frac{p-1}{2} C_{1,1} + \frac{p+1}{4} E + \frac{p+1}{6} F, \\ \pi ^* C_{0,2}' = C_{0,2} + \frac{p-1}{2} C_{1,1} + \frac{p+1}{4} E + \frac{p+1}{6} F. \end{aligned}$$

This again yields

$$\begin{aligned} C_{2,0}' \cdot C_{0,2}' = - (C_{2,0}')^2 = - (C_{0,2}')^2 = \frac{p^2-1}{24}. \end{aligned}$$

\(\square \)

6 Algebraic part of self-intersection

We continue with the notation of Sect. 5. Let \(H_0\) and \(H_{\infty }\) be the sections of \({\mathcal {X}}_0(p^2)/{\mathbb {Z}}\) corresponding to the cusps \(0, \infty \in X_0(p^2)({\mathbb {Q}})\). The horizontal divisor \(H_0\) intersects exactly one of the curves of the special fiber transversally at an \({\mathbb {F}}_p\)-rational point (cf. Liu [26, Chapter 9, Proposition 1.30 and Corollary 1.32]); we call that component \(C_0'\). It follows from the cusp and component labelling of Katz and Mazur [20, p. 296] that \(H_{\infty }\) meets the other component transversally, and we call it \(C_{\infty }'\). The components \(C_0'\) and \(C_{\infty }'\) intersect in a single point.

Recall that the local intersection numbers are given by [cf. Proposition 5.1]:

$$\begin{aligned} C_0'\cdot C_{\infty }' = - (C_0')^2 = -(C_{\infty }')^2 = \frac{p^2 - 1}{24}. \end{aligned}$$
(6.1)

Let \(K_{{\mathcal {X}}_0(p^2)}\) be a canonical divisor of \({\mathcal {X}}_0(p^2)\), that is, any divisor whose corresponding line bundle is the relative dualizing sheaf. We then have the following result:

Lemma 6.1

For \(s_p = \dfrac{p^2-1}{24}\), consider the vertical divisors \(V_0 = -\dfrac{g_{p^2} - 1}{s_p} C_0'\) and \(V_{\infty } = -\dfrac{g_{p^2} - 1}{s_p} C_{\infty }'\). The divisors

$$\begin{aligned} D_m = K_{{\mathcal {X}}_0(p^2)} - (2g_{p^2} -2)H_m + V_m, \qquad m\in \{0, \infty \} \end{aligned}$$

are orthogonal to all vertical divisors of \({\mathcal {X}}_0(p^2)\) with respect to the Arakelov intersection pairing.

Proof

For any prime \(q \ne p\), if V is the fiber over \((q) \in {{\,\mathrm{Spec}\,}}{\mathbb {Z}}\), then \(\langle V_m, V \rangle = 0\). By the adjunction formula [26, Chapter 9, Proposition 1.35], \(\langle K_{{\mathcal {X}}_0(p^2)}, V \rangle = (2 g_{p^2} - 2)\log q\). Moreover, the horizontal divisor \(H_m\) meets V transversally at a smooth \({\mathbb {F}}_q\)-rational point, so \(\langle H_m, V \rangle = \log q\), which gives \(\langle D_m, V \rangle = 0\).

Again using the adjunction formula

$$\begin{aligned} \langle K_{{\mathcal {X}}_0(p^2)}, C_0' + C_{\infty }' \rangle = (2 g_{p^2} - 2)\log p, \end{aligned}$$

and, on the other hand, from the description of \({\mathcal {X}}_0(p^2)\) it is clear that \(\langle K_{{\mathcal {X}}_0(p^2)}, C_0' \rangle = \langle K_{{\mathcal {X}}_0(p^2)}, C_{\infty }' \rangle \); hence we have

$$\begin{aligned} \langle K_{{\mathcal {X}}_0(p^2)}, C_0' \rangle = \langle K_{{\mathcal {X}}_0(p^2)}, C_{\infty }' \rangle = (g_{p^2} - 1)\log p. \end{aligned}$$

If \(m \in \{0,\infty \}\), then

$$\begin{aligned} \langle D_m, C_m' \rangle = (g_{p^2} - 1)\log p - (2g_{p^2} - 2)\log p + (g_{p^2} - 1)\log p = 0. \end{aligned}$$

Finally if \(n \in \{0, \infty \}\) and \(n \ne m\) then

$$\begin{aligned} \langle D_m, C_n' \rangle = (g_{p^2} - 1)\log p - 0 - (g_{p^2} - 1)\log p = 0. \end{aligned}$$

This completes the proof. \(\square \)
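The orthogonality computations above can be bundled into a single exact-arithmetic check (our sketch, not part of the proof; all intersection numbers are recorded in units of \(\log p\)):

```python
from fractions import Fraction as F

def check(g, p):
    """Verify <D_m, C'_n> = 0 for m, n in {0, inf}.
    All intersection numbers below are in units of log p."""
    s = F(p * p - 1, 24)
    # intersection numbers of the special fiber components, from (6.1)
    inter = {('C0', 'C0'): -s, ('Cinf', 'Cinf'): -s,
             ('C0', 'Cinf'): s, ('Cinf', 'C0'): s}
    K = {'C0': g - 1, 'Cinf': g - 1}    # <K, C'_m> = (g-1) log p (adjunction)
    H = {'0': {'C0': 1, 'Cinf': 0},     # H_m meets only C'_m, transversally
         'inf': {'C0': 0, 'Cinf': 1}}
    own = {'0': 'C0', 'inf': 'Cinf'}
    for m in ('0', 'inf'):
        coeff = -F(g - 1) / s           # V_m = -((g-1)/s) C'_m
        for n in ('C0', 'Cinf'):
            assert K[n] - (2 * g - 2) * H[m][n] + coeff * inter[(own[m], n)] == 0
    return True

# the identity holds for any g > 1; (g, p) = (8, 13) is the actual genus at p = 13
for g, p in ((8, 13), (22, 17), (120, 37)):
    assert check(g, p)
```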

The proof of the following lemma is analogous to Abbes–Ullmo [1, Proposition D].

Lemma 6.2

For \(m \in \{0, \infty \}\), consider the horizontal divisors \(H_m\) as above. We have the following equality of the Arakelov self-intersection number of the relative dualizing sheaf:

$$\begin{aligned} (\overline{\omega }_{p^2})^2 = -4g_{p^2}(g_{p^2}-1) \langle H_0, H_{\infty }\rangle + \dfrac{(g_{p^2}^2 - 1)\log p}{s_p}+e_p \end{aligned}$$

with

$$\begin{aligned} e_p= {\left\{ \begin{array}{ll} 0 &{}\quad \text {if }p \equiv 11 \pmod {12}, \\ O(\log p)&{}\quad \text {if }p \not \equiv 11 \pmod {12}. \\ \end{array}\right. } \end{aligned}$$

Proof

Since \(D_m\) has degree 0 and is perpendicular to vertical divisors, by a theorem of Faltings–Hriljac [12, Theorem 4], we obtain:

$$\begin{aligned} \langle D_m, D_m \rangle = -2 \left( \text {N{\'e}ron--Tate height of } {\mathcal {O}}(D_m) \right) =: h_m. \end{aligned}$$
(6.2)

This yields

$$\begin{aligned} \langle D_m, \ K_{{\mathcal {X}}_0(p^2)} - (2g_{p^2}-2)H_m \rangle = h_m. \end{aligned}$$

The previous expression expands to

$$\begin{aligned} \overline{\omega }_{p^2}^2= & {} -(2g_{p^2}-2)^2 H_m^2 + 2(2g_{p^2}-2) \langle K_{{\mathcal {X}}_0(p^2)}, H_m\rangle - \langle K_{{\mathcal {X}}_0(p^2)}, V_m \rangle \\&+\, (2g_{p^2}-2) \langle H_m, V_m\rangle +h_m. \end{aligned}$$

Now using the equality \(\langle D_m, V_m \rangle = 0\), which yields \(\langle K_{{\mathcal {X}}_0(p^2)}, V_m \rangle - (2g_{p^2}-2) \langle H_m, V_m \rangle + V_m^2 = 0\), together with the adjunction formula \(\langle K_{{\mathcal {X}}_0(p^2)}, H_m \rangle = - H_m^2\) (see Lang [25, Ch. IV, Sec. 5, Corollary 5.6]), we get:

$$\begin{aligned} (\overline{\omega }_{p^2})^2 = -4g_{p^2}(g_{p^2}-1)H_m^2 + V_m^2+h_m. \end{aligned}$$

This yields

$$\begin{aligned} (\overline{\omega }_{p^2})^2 = -2g_{p^2}(g_{p^2}-1) (H_0^2 + H_{\infty }^2) + \frac{1}{2}(V_0^2 + V_{\infty }^2)+ \frac{1}{2}(h_0 +h_{\infty }). \end{aligned}$$
(6.3)

Consider the divisor \(D_{\infty } - D_0 = (2g_{p^2}-2)(H_0 - H_{\infty }) + (V_{\infty } - V_0)\). Its restriction to the generic fiber is supported on the cusps. By the Manin–Drinfeld theorem [9, 27], \(D_{\infty }- D_0\) is a torsion element of the Jacobian \(J_0(p^2)\). Moreover, the divisor \(D_{\infty }- D_0\) satisfies the hypothesis of the Faltings–Hriljac theorem, which, together with the vanishing of the Néron–Tate height at torsion points, implies \( \langle D_0- D_{\infty }, D_0- D_{\infty }\rangle =0\). Hence, we obtain

$$\begin{aligned} H_0^2 + H_{\infty }^2 = 2 \langle H_0, H_{\infty } \rangle + \frac{V_0^2 - 2 \langle V_0, V_{\infty }\rangle + V_{\infty }^2}{(2g_{p^2}-2)^2}. \end{aligned}$$

Substituting this in (6.3) we deduce

$$\begin{aligned} (\overline{\omega }_{p^2})^2= & {} -4g_{p^2}(g_{p^2}-1) \langle H_0, H_{\infty }\rangle - \frac{1}{2g_{p^2}-2} (V_0^2 + V_{\infty }^2) \\&+ \frac{g_{p^2}}{g_{p^2}-1} \langle V_0, V_{\infty }\rangle + \frac{1}{2}(h_0 +h_{\infty }). \end{aligned}$$
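Before estimating \(h_0\) and \(h_{\infty }\), note that the two vertical-divisor terms in the last display account exactly for the \(\frac{(g_{p^2}^2 - 1)\log p}{s_p}\) term of the lemma: with \(V_m = -\frac{g_{p^2}-1}{s_p} C_m'\) and the intersection numbers (6.1), one has \(V_m^2 = -\frac{(g_{p^2}-1)^2}{s_p}\log p\) and \(\langle V_0, V_{\infty }\rangle = \frac{(g_{p^2}-1)^2}{s_p}\log p\). A quick check of this simplification (our sketch, in units of \(\log p\)):

```python
from fractions import Fraction as F

def vertical_terms(g, s):
    """-(V_0^2 + V_inf^2)/(2g - 2) + (g/(g-1)) <V_0, V_inf>, in units of
    log p, where V_m = -((g-1)/s) C'_m, (C'_m)^2 = -s and C'_0 . C'_inf = s."""
    Vm_sq = -F((g - 1) ** 2) / s       # V_0^2 = V_inf^2 = -(g-1)^2 / s
    V0_Vinf = F((g - 1) ** 2) / s      # <V_0, V_inf> = (g-1)^2 / s
    return -F(1, 2 * g - 2) * 2 * Vm_sq + F(g, g - 1) * V0_Vinf

for g, p in ((8, 13), (101, 37), (360, 67)):
    s = F(p * p - 1, 24)
    assert vertical_terms(g, s) == F(g * g - 1) / s   # = (g^2 - 1)/s_p
```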

For \(p \equiv 11 \pmod {12}\), the modular curve \(X_0(p^2)\) has no elliptic points. We deduce that for \(m \in \{0, \infty \}\), the divisors \(D_m\) are supported at cusps and hence \(h_0=h_{\infty }=0\) (see [1, Lemma 4.1.1]).

If \(p \not \equiv 11 \pmod {12}\), the canonical divisor \(K_{{\mathcal {X}}_0(p^2)}\) is supported at the cusps and the elliptic points. By the Manin–Drinfeld theorem, the Néron–Tate height of any degree zero divisor supported at the cusps is zero. We provide a bound on the Néron–Tate heights of elliptic points by a computation similar to [30, Section 6, equation 36]. Let \(f_{p^2}: X_0(p^2) \rightarrow X_0(1)={\mathbb {P}}^1\) be the natural projection. Let i and j be the points of \(X_0(1)\) corresponding to the points i and \(j=e^{\frac{2 \pi \sqrt{-1}}{3}}\) of the complex upper half plane \({\mathbb {H}}\). Let \(H_i\) (respectively \(H_j\)) be the divisor of \(X_0(p^2)\) consisting of the elliptic points lying above i (respectively j).

By an application of Hurwitz formula [30, cf. proof of Lemma 6.1, p. 670], we have:

$$\begin{aligned} K_{{\mathcal {X}}_0(p^2)} \sim C -\frac{1}{2} \sum _{\begin{array}{c} f_{p^2}(Q_i)=i,\\ e_{Q_i} = 1 \end{array}} Q_i -\frac{2}{3} \sum _{\begin{array}{c} f_{p^2}(Q_j)=j,\\ e_{Q_j} = 1 \end{array}} Q_j , \end{aligned}$$

where C is a divisor with rational coefficients supported at the cusps and \(Q_i\) (respectively \(Q_j\)) are points on \(X_0(p^2)\) above i (respectively j) with ramification index \(e_{Q_i}\) (respectively \(e_{Q_j}\)).

Hence by an application of the Manin–Drinfeld theorem, we have an equality [30, Lemma 6.1]:

$$\begin{aligned} h_0 = h_{\infty } = \frac{1}{36} h_{NT}\left( 3(H_i-v_2 \infty )+4(H_j-v_3 \infty ) \right) ; \end{aligned}$$
(6.4)

with \(v_2=(1+(\frac{-1}{p}))\) (the number of elliptic points in \(X_0(p^2)\) lying above i) and \(v_3=(1+(\frac{-3}{p}))\) (the number of elliptic points in \(X_0(p^2)\) lying above j ).

In [30, Lemma 6.2] the authors show that the preimages of i under \(f_{p^2}\) with ramification index 1 are Heegner points of discriminant \(-4\); these are precisely the elliptic points of \(X_0(p^2)\) lying over i. On the other hand, the preimages of j with ramification index 1 are Heegner points of discriminant \(-3\); these are the elliptic points over j.

Let c be an elliptic point of \(X_0(p^2)\) lying above i or j. By [15, p. 307], we have

$$\begin{aligned} h_{NT}((c)-(\infty ))=\langle c,c\rangle _{\infty }+\langle c, c\rangle _{\mathrm {fin}}. \end{aligned}$$

The expression in loc. cit. can be simplified in our situation. To compute \(\langle c, c\rangle _{\mathrm {fin}}\) for c lying above i (respectively above j), we count the number of roots of unity in \({\mathbb {Z}}[i]\) (respectively in \({\mathbb {Z}}[\frac{1+\sqrt{-3}}{2}]\)). We deduce that \(\langle c, c\rangle _{\mathrm {fin}}=2 \log (p^2)\) if \(c \in X_0(p^2)\) is a Heegner point lying above i (respectively, \(\langle c, c\rangle _{\mathrm {fin}} =3 \log (p^2)\) if c lies above j).

The simplification of \( \langle c,c\rangle _{\infty }\) follows from [30, Section 6, p. 671]. Recall that

$$\begin{aligned} \langle c,c\rangle _{\infty }= \lim _{s \rightarrow 1} \left( H(s)+ \frac{4 \pi }{v_{\varGamma _0(p^2)}(s-1)} \right) +O(1); \end{aligned}$$

with

$$\begin{aligned} H(s)=- 8 \sum _{n=1}^{\infty } \sigma (n) r(p^2n+4) Q_{s-1}\left( 1+\frac{np^2}{2}\right) . \end{aligned}$$

In the above expression, \(\sigma (n)\) is the function defined in [15, Prop (3.2), Chap IV] with \(|\sigma (n)| \le \tau (n)\), where \(\tau (n)\) is the number of divisors of n; r(n) is the number of ideals of norm n in \({\mathbb {Z}}[i]\) (respectively in \({\mathbb {Z}}[\frac{1+\sqrt{-3}}{2}]\)); and \(Q_{s-1}(x)\) is the Legendre function of the second kind [15, p. 238]. For any \(\epsilon >0\), we have the estimate \(r(n)=O_{\epsilon }(n^{\epsilon })\).

Using the estimate \(Q_{s-1}(x)=O(x^{-s})\) as \(x \rightarrow +\infty \), we get \(H(s)=O_{\epsilon }\left( p^{2 \epsilon -2}\left( 1+\frac{1}{|s-1|}\right) \right) \) for all s with \(\mathfrak {Re}(s) >1\). For the other part, a suitable bound on H(s) is obtained in the region \(\frac{7}{8} \le \mathfrak {Re}(s) \le 1+\epsilon \) using [16, Proposition 7.2]. By the Phragmén–Lindelöf principle and using the above estimate, we deduce [30, Section 6, p. 673]

$$\begin{aligned} \lim _{s \rightarrow 1} \left( H(s)+\frac{4\pi }{v_{\varGamma _0(p^2)}(s-1)} \right) =O_{\epsilon }(p^{2 \epsilon -2}). \end{aligned}$$

From the above, we get the estimate \(\langle c,c\rangle _{\infty }=O_{\epsilon }(p^{2\epsilon -2})\). By the parallelogram law for Néron–Tate heights (which are non-negative) and using (6.4), we obtain (see [30, Section 6, p. 673])

$$\begin{aligned} e_p = \frac{1}{2}(h_0 +h_{\infty }) = O(\log p). \end{aligned}$$

The lemma now follows from (6.1). \(\square \)

Using the above results, we obtain Theorem 1.2 of the paper.

Proof of Theorem 1.2

By the previous lemma, we have

$$\begin{aligned} \overline{\omega }_{p^2}^2&= -4g_{p^2}(g_{p^2}-1) \langle H_0, H_{\infty }\rangle + \dfrac{(g_{p^2}^2 - 1)\log p}{s_p} + e_p\\&= 4g_{p^2}(g_{p^2}-1){\mathfrak {g}}_{\mathrm {can}}(\infty ,0) + \dfrac{(g_{p^2}^2 - 1)\log p}{s_p} + e_p. \end{aligned}$$

An explicit computation of the genus [cf. Remark 2.1] shows that

$$\begin{aligned} g_{p^2}-1=\frac{(p+1)(p-6)-12c}{12} \end{aligned}$$

with \(c \in \left\{ 0,\frac{1}{2},\frac{2}{3}, \frac{7}{6}\right\} \).
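This expression for \(g_{p^2}-1\) can be checked against the four explicit genus formulas of Sect. 5. The matching of the values of c to the residue classes (\(c=\frac{7}{6}, \frac{1}{2}, \frac{2}{3}, 0\) for \(p \equiv 1, 5, 7, 11 \pmod {12}\) respectively, i.e. \(c = v_2/4 + v_3/3\) for the elliptic-point counts \(v_2, v_3\) of Sect. 6) is our own computation, verified in the following sketch:

```python
from fractions import Fraction as F

# genus of X_0(p^2) per residue class of p mod 12, as stated in Sect. 5
genus = {1: lambda k: 12 * k * k - 3 * k - 1,
         5: lambda k: 12 * k * k + 5 * k,
         7: lambda k: 12 * k * k + 9 * k + 1,
         11: lambda k: 12 * k * k + 17 * k + 6}
# our matching of the constant c to the residue class of p mod 12
c_of = {1: F(7, 6), 5: F(1, 2), 7: F(2, 3), 11: F(0)}

for r, g_of_k in genus.items():
    for k in range(1, 25):
        p = 12 * k + r
        # g - 1 = ((p+1)(p-6) - 12c)/12, exactly
        assert g_of_k(k) - 1 == (F((p + 1) * (p - 6)) - 12 * c_of[r]) / 12
```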

By Proposition 4.9, we obtain

$$\begin{aligned} 4g_{p^2}(g_{p^2}-1){\mathfrak {g}}_{\mathrm {can}}(\infty ,0)=4g_{p^2} \log p+o(g_{p^2} \log p). \end{aligned}$$
(6.5)

Furthermore, we deduce the following asymptotic:

$$\begin{aligned} \dfrac{(g_{p^2}^2 - 1)\log p}{s_p}&= \dfrac{(g_{p^2}+1)(g_{p^2}-1) \log p}{s_p}\\&= (g_{p^2}+1)\log p[2+o(1)]\\&= 2 g_{p^2}\log p+o(g_{p^2}\log p). \end{aligned}$$
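The middle step, \((g_{p^2}+1)\log p / s_p = 2\log p\,[1+o(1)]\), can be sanity-checked numerically (our sketch; for simplicity we take \(p \equiv 11 \pmod {12}\), where \(c=0\) and the genus formula is exact):

```python
from fractions import Fraction as F

def ratio(k):
    """((g^2 - 1)/s_p) / (2 g), which should tend to 1 as p -> infinity.
    Here p = 12k + 11, where g - 1 = (p+1)(p-6)/12 exactly."""
    p = 12 * k + 11
    g = F((p + 1) * (p - 6), 12) + 1
    s = F(p * p - 1, 24)
    return (g * g - 1) / (s * 2 * g)

# the deviation from 1 decays like 5/p, consistent with the o(1) above
assert abs(ratio(10) - 1) < F(1, 10)
assert abs(ratio(1000) - 1) < F(1, 1000)
```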

Since \(e_p=o(g_{p^2}\log p)\), this completes the proof. \(\square \)