1 Introduction

The study of level sets of random functions is a central topic in probability theory, lying at the crossroads of several other domains of mathematics and physics. In this framework, universality results refer to asymptotic properties of these random level sets that hold regardless of the specific nature of the randomness involved. Establishing such universal properties for generic zero sets allows one to handle what would otherwise be intricate objects. As such, the literature on this topic is very extensive and we refer to the introduction of [18] and the references therein for a more exhaustive overview.

Among the great variety of models that have been investigated, the most emblematic one is perhaps the so-called Kac polynomial \(P_n(x)=\sum _{k=1}^n a_k x^k\). Assume first that the coefficients \((a_k)_{1\le k\le n}\) are chosen independently and according to the same centered and standardized distribution (\({\mathbb {E}}(a_1)=0,\,{\mathbb {E}}(a_1^2)=1\)). Then, let \( {\mathcal {N}}_n(\mathbb {R})\) denote its number of real roots:

$$\begin{aligned} {\mathcal {N}}_n(\mathbb {R})=\text {card}\left\{ x\in \mathbb {R} \,\left| \right. \,P_n(x)=0\right\} . \end{aligned}$$

As a synthesis of the following (non-exhaustive) list of landmark articles [8, 12, 14, 15], the following phenomena hold under mild conditions, universally, that is to say regardless of the particular distribution of the coefficients:

  • Universality of the mean: \(\displaystyle \mathbb { E}\left( {\mathcal {N}}_n(\mathbb {R})\right) \sim \frac{2}{\pi }\log (n);\)

  • Universality of the variance: \(\displaystyle \text {Var}\left( {\mathcal {N}}_n(\mathbb {R})\right) \sim \frac{4}{\pi }\left( 1- \frac{2}{\pi }\right) \log (n);\)

  • Universality of the fluctuations around the mean: \(\displaystyle \frac{{\mathcal {N}}_n(\mathbb {R})-{\mathbb {E}}\left( {\mathcal {N}} _n(\mathbb {R})\right) }{\sqrt{\text {Var}\left( {\mathcal {N}}_n(\mathbb {R})\right) }}\xrightarrow [n\rightarrow \infty ]{\text {Law}}{\mathcal {N}}(0,1)\).

Above, the notation \(u_n\sim v_n\) means \(\frac{u_n}{v_n}\rightarrow 1\) as \( n\rightarrow \infty \), and \({\mathcal {N}}(0,1)\) stands for the standard normal law. Many other models of random polynomials exist in the literature for which universal properties have been intensively investigated. For most of them, both local universality (i.e. the joint distribution of roots at microscopic scales) and universality of the expectation at a global scale have been successfully established. Concerning local universality we refer to [7, 13, 18], and for the expectation to [9, 10, 16]. Very often, extending the microscopic distribution of the roots to the global scale is not an easy task, and one first needs to provide suitable estimates for the so-called phenomenon of repulsion of zeros. Let us also mention that multivariate models have been studied recently, for which we refer to [1, 6]. To the best of our knowledge, the universality of the variance has so far only been established for Kac polynomials.
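As a quick numerical illustration of the first of these universality statements, the following sketch (our own, assuming numpy; it is not taken from the cited articles) estimates \({\mathbb {E}}({\mathcal {N}}_n(\mathbb {R}))\) by Monte Carlo for Gaussian and for Rademacher coefficients. The convergence is logarithmic, so a constant-order gap with \(\frac{2}{\pi }\log (n)\) remains visible at moderate n; the point is that the two coefficient laws give essentially the same value.

```python
import numpy as np

def count_real_roots(a, tol=1e-6):
    """Count real roots of P_n(x) = sum_{k=1}^n a_k x^k.

    The constant term is zero, so x = 0 is always one (deterministic) root;
    it is included but does not affect the log(n) asymptotics.  The tolerance
    on the imaginary part is heuristic.
    """
    coeffs = np.concatenate([a[::-1], [0.0]])   # highest degree first, as np.roots expects
    roots = np.roots(coeffs)
    return int(np.sum(np.abs(roots.imag) < tol))

def mean_real_roots(n, reps, sampler, rng):
    return np.mean([count_real_roots(sampler(n, rng)) for _ in range(reps)])

rng = np.random.default_rng(0)
gaussian = lambda n, rng: rng.standard_normal(n)
rademacher = lambda n, rng: rng.choice([-1.0, 1.0], size=n)

n, reps = 200, 200
print("2/pi * log(n)          :", 2 / np.pi * np.log(n))
print("Gaussian coefficients  :", mean_real_roots(n, reps, gaussian, rng))
print("Rademacher coefficients:", mean_real_roots(n, reps, rademacher, rng))
```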

Here, we investigate this problem for trigonometric models and show that the behavior of the variance is actually not universal, by computing exactly the correction with respect to the case of Gaussian coefficients. This result displays a strong contrast with the well-known Kac polynomial models. We stress that our main result only requires the independence of the coefficients. More concretely, for different sequences of independent random vectors \(Y_k=(Y_{k}^1,Y_k^{2})\), \(k\in \mathbb {N}\), we shall consider the number of real roots over the set \([0,\pi ]\) of

$$\begin{aligned} p_n(t,Y)=\sum _{k=1}^n Y_k^{1} \cos ( k t) +Y_k^{2} \sin (k t). \end{aligned}$$

In order to benefit from the Central Limit Theorem (hereafter, CLT), we first make a change of scale and rather consider

$$\begin{aligned} P_n(t,Y)=\frac{1}{\sqrt{n}} \sum _{k=1}^n Y_k^{1} \cos \left( \frac{k t}{n} \right) +Y_k^{2} \sin \left( \frac{k t}{n}\right) , \quad t\in [0,2n\pi ]. \end{aligned}$$

Indeed, it can be established that \(P_n(\cdot ,Y)\) converges in distribution towards a stationary Gaussian process whose correlation function is \(\frac{ \sin (x)}{x}\). Moreover, the number of roots of \(p_n(\cdot ,Y)\) over \([0,\pi ]\) equals the number of roots of \(P_n(\cdot ,Y)\) over \([0,n\pi ]\), so one loses nothing in this procedure. We also highlight that \(P_n(\cdot ,Y)\) is much more manageable thanks to the aforementioned limit theorem.
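The covariance structure behind this convergence can be checked directly: for centered, standardized and uncorrelated coefficients one has \({\mathbb {E}}(P_n(t,Y)P_n(s,Y))=\frac{1}{n}\sum _{k=1}^n\cos \big (\frac{k(t-s)}{n}\big )\), a Riemann sum converging to \(\frac{\sin (t-s)}{t-s}\). A minimal numerical check (our own, assuming numpy):

```python
import numpy as np

def cov_pn(x, n):
    """Exact value of E[P_n(t,Y) P_n(s,Y)] as a function of x = t - s."""
    k = np.arange(1, n + 1)
    return np.mean(np.cos(k * x / n))

xs = np.linspace(0.1, 15.0, 6)
for n in (10, 100, 1000):
    err = max(abs(cov_pn(x, n) - np.sin(x) / x) for x in xs)
    print(f"n = {n:5d}   max deviation from sin(x)/x: {err:.2e}")
```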

In order to state more precisely our main theorem, we need some preliminary notations given in the following subsection.

Main result We consider a sequence of centered, independent random vectors \(\{Y_{k}\}_{k\ge 1}\in \mathbb {R}^{2}\) with the normalization \({\mathbb {E}}(Y_{k}^{i}Y_{k}^{j})=\delta _{i,j}\) and which satisfy Doeblin’s condition (2.1) with the moment conditions (2.2). Next, we consider the following trigonometric polynomials:

$$\begin{aligned} P_{n}(t,Y)=\frac{1}{\sqrt{n}}\sum _{k=1}^{n}\cos \left( \frac{kt}{n}\right) Y_{k}^{1}+\sin \left( \frac{kt}{n}\right) Y_{k}^{2} \end{aligned}$$

and we denote by \(N_{n}(Y)\) the number of roots of \(P_{n}(t,Y)\) in the interval \((0,n\pi )\). We shall focus on the variance of \(N_{n}(Y)\) given by

$$\begin{aligned} \text {Var}\left( N_{n}(Y)\right) ={\mathbb {E}}\left( N_{n}^{2}(Y)\right) -\left( {\mathbb {E}}(N_{n}(Y))\right) ^{2}. \end{aligned}$$

It is known, thanks to [11], that if \(G=(G_{k})_{k\in \mathbb {N}} \) is a sequence of i.i.d. standard Gaussian two-dimensional vectors, then the following limit exists

$$\begin{aligned} \lim _{n}\frac{1}{n}\text {Var}\left( N_{n}(G)\right) =C(G)\approx 0.56 \end{aligned}$$

(for the explicit expression of C(G) see page 298 of [11]; we stress that the previous approximation of C(G) concerns the number of zeros over \([0,2\pi ]\)). Besides, a Central Limit Theorem is also established regarding the fluctuations of the number of roots around the mean. We also refer to [2, 3] for alternative proofs and some refinements obtained by following the so-called Nourdin–Peccati method for establishing central limit theorems for functionals of Gaussian processes. Our aim is to prove a similar result for the variance of \(N_{n}(Y) \) and, more importantly, to compute the constant C(Y) explicitly. At this point, it must be emphasized that, outside the scope of functionals of Gaussian processes, one can no longer deploy the powerful combination of Malliavin calculus and Wiener chaos theory as explained in the book [17]. In order to bypass this restriction, as explained below, our approach relies heavily on a combination of Edgeworth expansions and Kac–Rice formulae. Let us also mention that the universality of the expected number of roots has recently been fully established in [9] under a second moment condition.

An important aspect of our contribution is that we can formulate C(Y) explicitly. Our main result is the following (see Theorem 2.1). Suppose that the sequence \(Y_k,k\in \mathbb {N}\), satisfies Doeblin's condition (the precise definition is given in Sect. 2). Suppose also that for every \(\alpha \in \{1,2\}^m\), with \(m=3,4\), the following limits exist and are finite:

$$\begin{aligned} \lim _{n}{\mathbb {E}}\left( \prod _{i=1}^{3}Y_{n}^{\alpha _{i}}\right) =y_{\infty }(\alpha ) \text{ if } m=3 \text{ and } \lim _{n}{\mathbb {E}}\left( \prod _{i=1}^{4}Y_{n}^{\alpha _{i}}\right) =y_{\infty }(\alpha ) \text{ if } m=4. \end{aligned}$$

Then

$$\begin{aligned} \lim _{n}\frac{1}{n}V_{n}(Y)=C(G)+\frac{1}{60}\times y_{*} \end{aligned}$$

with

$$\begin{aligned} y_{*}= & {} ((y_{\infty }(1,1,2,2)-1)+(y_{\infty }(2,2,1,1)-1)\\&+(y_{\infty }(1,1,1,1)-3)+(y_{\infty }(2,2,2,2)-3)). \end{aligned}$$

Notice that the random vectors \((Y_{k})_{k\ge 1}\) are not supposed here to be identically distributed (however, the hypotheses (2.1) and (2.2) from Doeblin's condition display some uniformity because \(\eta ,r\) and \(M_{p}(Y)\) are uniform parameters). For simplicity, suppose for a moment that they are identically distributed and moreover that the components \(Y_{k}^1\) and \(Y_{k}^2\) of \( Y_{k}=(Y_{k}^{1},Y_{k}^{2})\) are also i.i.d. Then \(y_{\infty }(1,1,2,2)=y_{\infty }(2,2,1,1)=1\) and \(y_{\infty }(1,1,1,1)=y_{\infty }(2,2,2,2)={\mathbb {E}}((Y^1_1)^{4})\). In such a case, the non-universality of the variance becomes more transparent since

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{\text {Var}\left( N_{n}(Y)\right) }{n} =\lim _{n\rightarrow \infty }\frac{\text {Var}\left( N_{n}(G)\right) }{n} +\frac{1}{30}\left( {\mathbb {E}}\left( \left( Y_1^1\right) ^{4}\right) -3\right) . \end{aligned}$$
(1.1)

In particular, the deviation from the Gaussian behavior is exactly proportional to the excess kurtosis of the random variables under consideration.
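The following Monte Carlo sketch (our own illustration, assuming numpy; the parameters are arbitrary and it is not part of the proof) estimates \(\text {Var}(N_n(Y))/n\) by counting sign changes of \(p_n(\cdot ,Y)\) on a fine grid of \([0,\pi ]\), for Gaussian coefficients and for Rademacher coefficients; for the latter \({\mathbb {E}}((Y_1^1)^4)-3=-2\), so (1.1) predicts a downward shift of \(1/15\approx 0.067\). Both the grid resolution and the number of replications must be pushed fairly high before this shift emerges from the Monte Carlo noise.

```python
import numpy as np

def var_per_n(n, reps, sampler, rng, grid_size=4096):
    """Estimate Var(N_n(Y))/n by counting sign changes of p_n on a grid of [0, pi]."""
    t = np.linspace(0.0, np.pi, grid_size)
    k = np.arange(1, n + 1)
    C, S = np.cos(np.outer(t, k)), np.sin(np.outer(t, k))
    counts = []
    for _ in range(reps):
        vals = C @ sampler(n, rng) + S @ sampler(n, rng)   # p_n(t) on the grid
        counts.append(np.sum(vals[:-1] * vals[1:] < 0))
    return np.var(counts) / n

rng = np.random.default_rng(1)
gauss = lambda n, rng: rng.standard_normal(n)
rade = lambda n, rng: rng.choice([-1.0, 1.0], size=n)

n, reps = 50, 2000          # increase both for sharper estimates
print("Gaussian   Var(N_n)/n ~", var_per_n(n, reps, gauss, rng))
print("Rademacher Var(N_n)/n ~", var_per_n(n, reps, rade, rng))
print("shift predicted by (1.1):", (1 - 3) / 30)
```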

Strategy of the proof Let us briefly summarize the main steps of our proof. Up to some technical details, this outline illustrates rather well the main ideas of our approach.

Step 1: An approximated Kac–Rice formula

Let us recall the celebrated and very useful Kac–Rice formula. Consider a smooth deterministic function f defined on \([a,b]\) such that \( |f(t)|+|f^{\prime }(t)|>0\) for all \(t\in [a,b]\). Then, one has

$$\begin{aligned} \text {Card}\left\{ t\in ]a,b[\,\,\left| \right. \,\,f(t)=0\right\} =\lim _{\delta \rightarrow 0} \frac{1}{2\delta } \int _a^b |f^{\prime }(t)|\mathbf {1} _{\left\{ |f(t)|<\delta \right\} }dt. \end{aligned}$$
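Before turning to the random setting, here is a quick numerical sanity check of this limit for a deterministic function (our own illustration, assuming numpy): \(f(t)=\sin (3t)\) on \([0.2,\,3.0]\) satisfies \(|f|+|f'|>0\) and has exactly two zeros there (\(\pi /3\) and \(2\pi /3\)), and the integral on the right-hand side approaches 2 as \(\delta \rightarrow 0\).

```python
import numpy as np

f = lambda t: np.sin(3 * t)
df = lambda t: 3 * np.cos(3 * t)
a, b = 0.2, 3.0             # zeros of sin(3t) in [a,b]: pi/3 and 2*pi/3

for delta in (1e-1, 1e-2, 1e-3, 1e-4):
    # the grid must be fine enough to resolve the window {|f| < delta}
    t = np.linspace(a, b, int(50 * (b - a) / delta) + 1)
    integrand = np.abs(df(t)) * (np.abs(f(t)) < delta) / (2 * delta)
    print(f"delta = {delta:.0e}   integral = {np.sum(integrand) * (t[1] - t[0]):.4f}")
```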

When one applies this formula to the random functions \(P_n(t,Y)\), one needs to handle the level of non-degeneracy, which determines the speed of convergence in the Kac–Rice formula. More concretely, in our proof, we will use that for \(\delta _n=1/{n^5}\):

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{1}{n}\text {Var}\left( N_n(Y)\right) -\lim _{n\rightarrow \infty }\frac{1}{n}\text {Var}\left( \frac{1}{2\delta _{n} }\int _{0}^{n\pi }|P_{n}^{\prime }(t,Y)|\mathbf {1}_{\left\{ |P_{n}(t,Y)|<\delta _{n}\right\} }dt\right) =0. \end{aligned}$$

We refer to Lemma 4.2 for this step.

Step 2: Removing the diagonal

When computing the variance, expressions of the following kind appear:

$$\begin{aligned} \begin{array}{l} \displaystyle \frac{1}{n}\int _{0}^{n\pi }\int _{0}^{n\pi }\Phi _{n}(t,s,Y)dsdt, \text{ where }\\ \displaystyle \Phi _{n}(t,s,Y) =|P_{n}^{\prime }(t,Y)|\frac{1}{2\delta _{n}} \mathbf {1}_{\left\{ |P_{n}(t,Y)|<\delta _{n}\right\} }|P_{n}^{\prime }(s,Y)| \frac{1}{ 2\delta _{n}}\mathbf {1}_{\left\{ |P_{n}(s,Y)|<\delta _{n}\right\} } . \end{array} \end{aligned}$$
(1.2)

Notice that

$$\begin{aligned} P_{n}^{\prime }(t,Y)=\frac{1}{\sqrt{n}}\sum _{k=1}^{n}\frac{k}{n}\cos \left( \frac{ kt}{n}\right) Y_{k}^{2}-\frac{k}{n}\sin \left( \frac{kt}{n}\right) Y_{k}^{1}, \end{aligned}$$

so it becomes clear that in order to study the asymptotic behaviour of \( {\mathbb {E}}(\Phi _{n}(t,s,Y))\) one has to use the central limit theorem (CLT) for the random vector \( S_{n}(t,s,Y)=(P_{n}(t,Y),P_{n}^{\prime }(t,Y),P_{n}(s,Y),P_{n}^{\prime }(s,Y))\). A first difficulty in doing this is that \(\frac{1}{2\delta _{n}} \mathbf {1}_{\left\{ |P_{n}(t,Y)|<\delta _{n}\right\} }\rightarrow \delta _{0}(P_{n}(t,Y))\), so we fall outside the framework of continuous and bounded test functions considered in the classical CLT. We have to use a variant of this theorem concerning convergence in distribution norms; this result is established in [5]. A second difficulty concerns the non-degeneracy of the vector \(S_{n}(t,s,Y)\): when \(|t-s|\approx 0\), the random vector \( S_{n}(t,s,Y)\) becomes degenerate and employing the CLT or its Edgeworth expansions turns out to be hard. In order to avoid this, we fix a parameter \(\epsilon >0\) and we prove that

$$\begin{aligned} \lim _{\epsilon \rightarrow 0} \limsup _{n\rightarrow \infty } \frac{1}{4n\delta _n^2} \int _{[0,n\pi ]^2,\, |t-s|< \epsilon } \text {Cov} \left( |P_n^{\prime }(t,Y)|\mathbf {1}_{\left\{ |P_n(t,Y)|<\delta _n\right\} },\, |P_n^{\prime }(s,Y)|\mathbf {1}_{\left\{ |P_n(s,Y)|<\delta _n\right\} } \right) dt\, ds =0. \end{aligned}$$

The latter enables us to impose the condition \(|t-s|\ge \epsilon \) in all our Kac–Rice estimates. This is particularly convenient since the underlying processes become uniformly non-degenerate.

A third difficulty comes from the fact that, roughly speaking,

$$\begin{aligned} \frac{1}{n}\int _{0}^{n\pi }\int _{0}^{n\pi }&\big | {\mathbb {E}}(\Phi _{n}(t,s,Y))-{\mathbb {E}}(\Phi _{n}(t,s,G))\big | dsdt\sim \frac{1}{n}\times (\pi n)^{2}\\&\times \big | {\mathbb {E}}(\Phi _{n}(\cdot ,\cdot ,Y))-{\mathbb {E}}(\Phi _{n}(\cdot ,\cdot ,G))\big | \end{aligned}$$

so it is sufficient that

$$\begin{aligned} \left| {\mathbb {E}}(\Phi _{n}(t,s,Y))-{\mathbb {E}}(\Phi _{n}(t,s,G))\right| \le \frac{C}{ n^{3/2}} \end{aligned}$$

and in order to achieve this, it is not sufficient to use the CLT, but we have to use an Edgeworth expansion of order three.

Step 3: Performing Edgeworth expansions

In this step, we make use of the Edgeworth expansion in distribution norms developed in [5]. We first set

$$\begin{aligned} F_n(x_1,x_2,x_3,x_4)=\frac{1}{4\delta _n^2}\times |x_2|\mathbf {1} _{\{|x_1|<\delta _n\}}|x_4|\mathbf {1}_{\{|x_3|<\delta _n\}}, \end{aligned}$$

and \(\rho _{n,t,s} \) the density of \((P_n(t,G),P_n^{\prime }(t,G),P_n(s,G),P_n^{\prime }(s,G))\). By using the Edgeworth expansion, we will prove that

$$\begin{aligned}&{\mathbb {E}}\left( F_n\left( P_n(t,Y),P_n^{\prime }(t,Y),P_n(s,Y),P_n^{\prime }(s,Y)\right) \right) \\&\quad = \int _{\mathbb {R}^4} F_n(x) \rho _{n,t,s}(x)\left( 1+\frac{1}{\sqrt{n}} Q_{n,t,s}(x)+\frac{1}{n} R_{n,t,s}(x) \right) dx_1 dx_2 dx_3 dx_4 \\&\qquad + \mathcal {R}_n(t,s). \end{aligned}$$

where \(Q_n\) and \(R_n\) are totally explicit polynomials of degree at most 6 whose coefficients involve the moments of the random variables \(\{Y_k^1,Y_k^2\}_{k\ge 1}\), and where the remainder term satisfies

$$\begin{aligned} \lim _{\epsilon \rightarrow 0}\lim _{n\rightarrow \infty } \frac{1}{n}\int _{[0,n\pi ]^2,|t-s|\ge \epsilon } \mathcal {R}_n(t,s) dt ds =0. \end{aligned}$$

Doing so, some computations are involved, but they are totally transparent in terms of the moments of the coefficients of our polynomial. This step, which is the heart of the proof and is carried out in Sect. 5, allows one to handle explicitly the various cancellations occurring in the variance. We strongly emphasize that getting a polynomial speed of convergence in the Kac–Rice formula is crucial in order to manage the remainder of the Edgeworth expansions.

2 The problem

We consider a sequence of centered, independent random variables \(Y_{k}\in \mathbb {R}^{2},k\in \mathbb {N}\) with \({\mathbb {E}}(Y_{k}^{i}Y_{k}^{j})=\delta _{i,j}\). We assume that they satisfy the following “Doeblin’s condition”: there exist some points \( y_{k}\in \mathbb {R}^{2}\) and \(r,\eta \in (0,1)\) such that for every \(k\in \mathbb {N}\) and every measurable set \(A\subset B_{r}(y_{k})\)

$$\begin{aligned} \mathbb {P}(Y_{k}\in A)\ge \eta {\mathrm {Leb}}_2(A), \end{aligned}$$
(2.1)

\({\mathrm {Leb}}_d\) denoting the Lebesgue measure on \(\mathbb {R}^d\). Moreover we assume that the \(Y_{k},k\in \mathbb {N}\), have finite moments of any order which are uniformly bounded with respect to k:

$$\begin{aligned} \sup _{k}({\mathbb {E}}(\left| Y_{k}\right| ^{p}))^{1/p}=M_{p}(Y)<\infty . \end{aligned}$$
(2.2)

We denote by \(\mathcal {D}(\eta ,r)\) the set of sequences of random variables \( Y=(Y_{k})_{k\in \mathbb {N}}\) which are independent and verify (2.1) and (2.2) for every \(p\ge 1\). Moreover we put

$$\begin{aligned} P_{n}(t,Y)=\frac{1}{\sqrt{n}}\sum _{k=1}^{n}\cos \left( \frac{kt}{n}\right) Y_{k}^{1}+\sin \left( \frac{kt}{n}\right) Y_{k}^{2} \end{aligned}$$
(2.3)

and we denote by \(N_{n}(Y)\) the number of roots of \(P_{n}(t,Y)\) in the interval \((0,n\pi )\) and by \(V_{n}(Y)\) the variance of \(N_{n}(Y):\)

$$\begin{aligned} V_{n}(Y)={\mathbb {E}}\left( N_{n}^{2}(Y)\right) -\left( {\mathbb {E}}(N_{n}(Y))\right) ^{2}. \end{aligned}$$
(2.4)

It is known (see e.g. [11]) that if \(G=(G_{k})_{k\in \mathbb {N}}\) is a sequence of independent two-dimensional standard Gaussian random variables then the following limit exists

$$\begin{aligned} \lim _{n}\frac{1}{n}V_{n}(G)=C(G). \end{aligned}$$

Our main result is the following.

Theorem 2.1

Suppose that \(Y\in \mathcal {D}(\eta ,r)\) and suppose also that for every \(\alpha \in \{1,2\}^m\), with \(m=3,4\), the following limits exist and are finite:

$$\begin{aligned}&\lim _{n}{\mathbb {E}}\left( \prod _{i=1}^{3}Y_{n}^{\alpha _{i}}\right) =y_{\infty }(\alpha ), \quad \text{ for } \;m=3,\\&\lim _{n}{\mathbb {E}}\left( \prod _{i=1}^{4}Y_{n}^{\alpha _{i}}\right) =y_{\infty }(\alpha ),\quad \text{ for } \;m=4. \end{aligned}$$

Then

$$\begin{aligned} \lim _{n}\frac{1}{n}V_{n}(Y)=C(G)+\frac{1}{60}\times y_{*} \end{aligned}$$

with

$$\begin{aligned} y_{*}= & {} ((y_{\infty }(1,1,2,2)-1)+(y_{\infty }(2,2,1,1)-1)\\&+\,(y_{\infty }(1,1,1,1)-3)+(y_{\infty }(2,2,2,2)-3)) . \end{aligned}$$

Proof

The proof is an immediate consequence of Lemma 4.2 point C (see (4.11)) and of Lemma 5.1 (see (5.2)). \(\square \)

Remark 2.2

Notice that the random variables \(Y_{k}\in \mathbb {R}^{2},k\in \mathbb {N}\), are not supposed to be identically distributed. However, the hypotheses (2.1) and (2.2) contain some uniformity assumptions because \(\eta ,r\) and \(M_{p}(Y)\) are common for all of them. Suppose for a moment that they are identically distributed and moreover, that the components \(Y^{1}=Y_{k}^{1}\) and \( Y^{2}=Y_{k}^{2}\) are independent. Then \(y_{\infty }(1,1,2,2)=y_{\infty }(2,2,1,1)=1\) and \(y_{\infty }(1,1,1,1)={\mathbb {E}}(\vert Y^{1}\vert ^{4})\) and \(y_{\infty }(2,2,2,2)={\mathbb {E}}(\vert Y^{2}\vert ^{4}),\) so \(y_{*} \) is the sum of the excess kurtosis of \(Y^{1}\) and of \(Y^{2}\). Put otherwise: take \(\overline{Y}=((Y^{1})^{2},(Y^{2})^{2})\). Then \(y_{*}=0\) iff the covariance matrix of \(\overline{Y}\) coincides with the covariance matrix of the corresponding \(\overline{G}\).
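For instance (a numerical aside of ours, assuming numpy), if the components are i.i.d. Rademacher then \({\mathbb {E}}((Y^{1})^{4})=1\) and \(y_{*}=-4\), while for components uniform on \([-\sqrt{3},\sqrt{3}]\) one gets \({\mathbb {E}}((Y^{1})^{4})=9/5\) and \(y_{*}=-12/5\); in both cases the limiting variance is strictly smaller than in the Gaussian case. A short check from samples:

```python
import numpy as np

def y_star(sample):
    """y_* for independent components: the sum of the excess kurtosis of Y^1 and Y^2."""
    m4 = np.mean(sample ** 4, axis=0)        # sample has shape (N, 2), unit variance
    return (m4[0] - 3.0) + (m4[1] - 3.0)

rng = np.random.default_rng(2)
N = 10**6
rademacher = rng.choice([-1.0, 1.0], size=(N, 2))
uniform = rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), size=(N, 2))

print("Rademacher components: y_* ~", y_star(rademacher), " (exact: -4)")
print("uniform components   : y_* ~", y_star(uniform), " (exact: -12/5)")
```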

3 CLT and Edgeworth expansion

The main tools in this paper are the CLT and the Edgeworth development of order two that we proved in [5], Proposition 2.5. We recall them here, first in the general case and then in our specific framework.

3.1 The general case

For a symmetric matrix \(\Sigma \in \mathcal {M}_{d\times d}\) and \(\varepsilon >0\), we say that \(\Sigma \ge \varepsilon \) if \(\langle \Sigma \xi ,\xi \rangle \ge \varepsilon |\xi |^2\) for every \(\xi \in \mathbb {R}^d\), \(\langle \cdot ,\cdot \rangle \) denoting the standard scalar product. For a matrix \(C=(c_{ij})_{i,j}\in \mathcal {M}_{d\times l}\) we use the norm \(\Vert C\Vert =\max _{i,j}|c_{i,j}|\).

We consider a sequence of matrices \(C_{n}(k)\in \mathcal {M}_{d\times 2},n,k\in \mathbb {N}\) which verify

$$\begin{aligned} \Sigma _{n}:=\frac{1}{n}\sum _{k=1}^{n}C_{n}(k)C_{n}^{*}(k)\ge \varepsilon _{*}\quad \text{ and }\quad \sup _{n,k\in \mathbb {N}}\left\| C_{n}(k)\right\| <\infty . \end{aligned}$$
(3.1)

We denote

$$\begin{aligned} X_{n,k}=C_{n}(k)Y_{k},\quad G_{n,k}=C_{n}(k)G_{k} \end{aligned}$$

where \(Y=(Y_{k})_{k\in \mathbb {N}}\) is the sequence introduced in the previous section and \(G=(G_{k})_{k\in \mathbb {N}}\) is a sequence of independent standard normal random variables in \(\mathbb {R}^{2}\). For a multi-index \(\alpha =(\alpha _{1},\ldots ,\alpha _{m})\in \{1,\ldots ,d\}^{m}\), we denote \(|\alpha |=m\) and

$$\begin{aligned} \Delta _{\alpha }(X_{n,k})= & {} {\mathbb {E}}(X_{n,k}^{\alpha })-{\mathbb {E}}\left( G_{n,k}^{\alpha }\right) ={\mathbb {E}}\left( \prod _{i=1}^{m}X_{n,k}^{\alpha _{i}}\right) -{\mathbb {E}}\left( \prod _{i=1}^{m}G_{n,k}^{\alpha _{i}}\right) , \end{aligned}$$
(3.2)
$$\begin{aligned} c_{n}(\alpha ,X)= & {} \frac{1}{n}\sum _{k=1}^{n}\Delta _{\alpha }(X_{n,k}) \end{aligned}$$
(3.3)

By hypothesis, for \(\left| \alpha \right| =1,2\) we have \(\Delta _{\alpha }(X_{n,k})=0\).

For a function \(f\in C_{pol}^{q }(\mathbb {R}^{d})\) (\(C^{q }\) functions with polynomial growth), we define \(L_{q}(f)\) and \(l_{q}(f)\) to be two numbers such that

$$\begin{aligned} \sum _{\left| \gamma \right| \le q}\left| \partial ^{\gamma }f(x)\right| \le L_{q}(f)(1+\left| x\right| )^{l_{q}(f)}. \end{aligned}$$
(3.4)

Moreover we denote

$$\begin{aligned} \mathcal {S}_{n}(Y)= & {} \frac{1}{\sqrt{n}}\sum _{k=1}^{n}X_{n,k}=\frac{1}{\sqrt{n}} \sum _{k=1}^{n}C_{n}(k)Y_{k}, \nonumber \\ \mathcal {S}_{n}(G)= & {} \frac{1}{\sqrt{n}}\sum _{k=1}^{n}G_{n,k}=\frac{1}{\sqrt{n}} \sum _{k=1}^{n}C_{n}(k)G_{k}. \end{aligned}$$
(3.5)

The CLT in [5] (see Theorem 2.3 with \(N=0\) therein) says that if \(Y\in \mathcal {D}(\eta ,r)\) then, for every multi-index \(\gamma \) with \(\left| \gamma \right| \le q\)

$$\begin{aligned} {\mathbb {E}}(\partial ^{\gamma }f(\mathcal {S}_{n}(Y)))= & {} {\mathbb {E}}(\partial ^{\gamma }f(\mathcal {S}_{n}(G)))+\frac{ 1}{n^{1/2}}R^{(0)}_{n}(f)\quad \text{ with } \end{aligned}$$
(3.6)
$$\begin{aligned} \vert R^{(0)}_{n}(f)\vert\le & {} C(L_{0}(f)+n^{1/2}L_{q}(f)e^{-cn}) \end{aligned}$$
(3.7)

where \(C\ge 1\ge c>0\) are constants which depend on \(\eta ,r\) in (2.1 ), on \(\varepsilon _{*}\) in (3.1) and on \(M_{p}(Y)\) for a sufficiently large p.

We go further and we recall the Edgeworth development. We consider the Hermite polynomials \(H_{\alpha }\) which are characterized by the equality

$$\begin{aligned} {\mathbb {E}}(\partial ^{\alpha }f(W))={\mathbb {E}}(f(W)H_{\alpha }(W))\quad \forall f\in C_{pol}^{\infty }(\mathbb {R}^{d}) \end{aligned}$$
(3.8)

where \(W\in \mathbb {R}^{d}\) is a standard normal random variable. Let us mention that \(H_{\alpha }\) may be represented as follows. Let \(h_{k}\) be the Hermite polynomial of order k on \(\mathbb {R}\), i.e.,

$$\begin{aligned} h_k(x)=(-1)^ke^{\frac{x^2}{2}}\frac{d^k}{dx^k}e^{-\frac{x^2}{2}}. \end{aligned}$$

Now, for the multi-index \(\alpha \) and for \(j\in \{1,\ldots ,d\}\) we denote \(i_{j}(\alpha )={\mathrm {card}}\{i:\alpha _{i}=j\}\). Then \(H_{\alpha }(x_{1},\ldots ,x_{d})=h_{i_{1}(\alpha )}(x_{1})\times \cdots \times h_{i_{d}(\alpha )}(x_{d})\). It is known that \(h_{k}\) is even (respectively odd) if k is even (respectively odd) so \(H_{\alpha }\) itself has the corresponding properties on each variable (we will use this in the sequel).
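As a concrete check of (3.8) (our own illustration, assuming numpy), take \(d=2\), \(\alpha =(1,1,2)\) and \(f(x)=x_1^2x_2\): then \(\partial ^{\alpha }f\equiv 2\), \(H_{\alpha }(x)=h_2(x_1)h_1(x_2)=(x_1^2-1)x_2\), and a Monte Carlo average of \(f(W)H_{\alpha }(W)\) should be close to 2.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval   # probabilists' Hermite polynomials h_k

def H_alpha(x, alpha, d):
    """H_alpha(x) = prod_j h_{i_j(alpha)}(x_j) with i_j(alpha) = card{i : alpha_i = j}."""
    out = np.ones(x.shape[0])
    for j in range(1, d + 1):
        i_j = sum(1 for a in alpha if a == j)
        coeffs = np.zeros(i_j + 1)
        coeffs[-1] = 1.0                            # selects h_{i_j}
        out = out * hermeval(x[:, j - 1], coeffs)
    return out

rng = np.random.default_rng(3)
W = rng.standard_normal((10**6, 2))

alpha = (1, 1, 2)                  # corresponds to d_{x1} d_{x1} d_{x2}
fW = W[:, 0] ** 2 * W[:, 1]        # f(x) = x1^2 x2, so that d^alpha f = 2 identically
print("E[f(W) H_alpha(W)] ~", np.mean(fW * H_alpha(W, alpha, d=2)), " (exact value: 2)")
```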

We introduce now the following functions which represent the correctors of order one and two in the Edgeworth development:

$$\begin{aligned} \Gamma _{n,1}(X,x)= & {} \frac{1}{6}\sum _{\left| \beta \right| =3}c_{n}(\beta ,X)H_{\beta }(x), \end{aligned}$$
(3.9)
$$\begin{aligned} \Gamma _{n,2}(X,x)= & {} \Gamma _{n,2}^{\prime }(X,x)+\Gamma _{n,2}^{\prime \prime }(X,x), \end{aligned}$$
(3.10)

with

$$\begin{aligned} \Gamma _{n,2}^{\prime }(X,x)= & {} \frac{1}{24}\sum _{\left| \beta \right| =4}c_{n}(\beta ,X)H_{\beta }(x),\quad \end{aligned}$$
(3.11)
$$\begin{aligned} \Gamma _{n,2}^{\prime \prime }(X,x)= & {} \frac{1}{72}\sum _{\left| \rho \right| =3}\sum _{\left| \beta \right| =3}c_{n}(\beta ,X)c_{n}(\rho ,X)H_{(\beta ,\rho )}(x) \end{aligned}$$
(3.12)

We set

$$\begin{aligned} Q_{n}(X, x)=1+\frac{1}{\sqrt{n}}\Gamma _{n,1}\left( \Sigma _{n}^{-1/2}X,x\right) +\frac{1}{n} \Gamma _{n,2}\left( \Sigma _{n}^{-1/2}X,x\right) , \end{aligned}$$
(3.13)

where \(\Sigma _{n}^{-1/2}X=(\Sigma _{n}^{-1/2}X_{n,k})_{k\in \mathbb {N}}\) .
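To make the structure of \(Q_{n}\) concrete, here is a one-dimensional sanity check (our own sketch, assuming numpy and i.i.d. standardized coefficients, so that \(\Sigma _{n}=1\) and the multi-index sums collapse): with \(c_3={\mathbb {E}}(Y^3)\) and \(c_4={\mathbb {E}}(Y^4)-3\), (3.13) reads \(Q_n(x)=1+\frac{c_3}{6\sqrt{n}}h_3(x)+\frac{1}{n}\big (\frac{c_4}{24}h_4(x)+\frac{c_3^2}{72}h_6(x)\big )\), and \({\mathbb {E}}(f(\mathcal {S}_n(Y)))\approx {\mathbb {E}}(f(W)Q_n(W))\) should beat the plain CLT approximation. We test this with centered exponential coefficients.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval   # h_k = probabilists' Hermite polynomials

# Coefficients Y = Exp(1) - 1: centered, unit variance, E(Y^3) = 2, E(Y^4) = 9.
c3, c4 = 2.0, 9.0 - 3.0
n = 20
f = np.cos

rng = np.random.default_rng(4)
S = (rng.exponential(size=(5 * 10**5, n)) - 1.0).sum(axis=1) / np.sqrt(n)
mc = np.mean(f(S))                                  # Monte Carlo value of E f(S_n)

# Gaussian and Edgeworth-corrected approximations, by quadrature on a fine grid.
x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]
phi = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
h = lambda k: hermeval(x, np.eye(k + 1)[-1])        # h_k evaluated on the grid
Q = 1 + c3 * h(3) / (6 * np.sqrt(n)) + (c4 * h(4) / 24 + c3**2 * h(6) / 72) / n

print("Monte Carlo        :", mc)
print("plain CLT          :", np.sum(f(x) * phi) * dx)
print("with Q_n as (3.13) :", np.sum(f(x) * Q * phi) * dx)
```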

In Proposition 2.6 from [5] we prove the following. Let \(n\in \mathbb {N}\). For every \(f\in C_{pol}^{q }(\mathbb {R}^{d})\) and for every multi-index \(\gamma \) with \(\left| \gamma \right| \le q\)

$$\begin{aligned} {\mathbb {E}}(\partial ^{\gamma }f(\mathcal {S}_{n}(Y))) ={\mathbb {E}}\left( \partial ^{\gamma }f\left( \Sigma _{n}^{1/2}W\right) Q_n(X,W)\right) +\frac{1}{n^{3/2}}R^{(2)}_{n}(f), \end{aligned}$$
(3.14)

where \(W\in \mathbb {R}^{d}\) is a standard normal random variable and the remainder \(R^{(2)}_{n}(f)\) verifies

$$\begin{aligned} \left| R^{(2)}_{n}(f)\right| \le C(L_{0}(f)+n^{3/2}L_{q}(f)e^{-cn}) \end{aligned}$$
(3.15)

where \(C\ge 1\ge c>0\) are constants which depend on \(\eta ,r\) in (2.1), on \(\varepsilon _{*}\) in (3.1) and on \(M_{p}(Y)\) for a sufficiently large p.

The drawback of the above formulas (3.6) and (3.14) is that they apply to smooth functions. In order to bypass this difficulty and to take into account more general functions (as we need in this paper), we give a new statement in terms of primitives which we prove by using a regularization argument. Given a function \(f:\mathbb {R}^{d}\rightarrow \mathbb {R}\) and \(m\in \mathbb {N}\) we define by recurrence \(f^{(-0)}=f\) and

$$\begin{aligned} f^{(-m)}(x)=\int _{0}^{x_{1}}d\xi _{1}\int _{0}^{x_{2}}d\xi _2\cdots \int _{0}^{x_{d}}f^{(-(m-1))}(\xi _{1},\ldots ,\xi _{d})\,d\xi _{d}. \end{aligned}$$
(3.16)

So \(f^{(-m)}\) represents a primitive of order \(d\times m\) of f. We also consider the multi-index \(\gamma ^{(m)}\) such that \(\partial _{x}^{\gamma ^{(m)}}=\partial _{x_{1}}^{m}\ldots \partial _{x_{d}}^{m}\), so \(|\gamma ^{(m)}|=d\times m\). We notice that for every \(\varphi \in C_{c}^{\infty }(\mathbb {R}^{d})\) we have

$$\begin{aligned} \int f^{(-m)}(x)\partial ^{\gamma ^{(m)}}\varphi (x)dx=(-1)^{d\times m}\int f(x)\varphi (x)dx. \end{aligned}$$
(3.17)

Then we can state the following more general result.

Lemma 3.1

Let \(Y\in \mathcal {D}(\eta ,r)\). Let \(D\subset \mathbb {R}^{d}\) be an open set such that \(D^c\) has zero Lebesgue measure and assume that there exist \(A,a>0\) such that

$$\begin{aligned} \mathbb {P}(\mathcal {S}_n(Y)\notin D)\le Ae^{-an} \end{aligned}$$
(3.18)

for every n. Let \(f:\mathbb {R}^{d}\rightarrow \mathbb {R}\) be a function with polynomial growth which is continuous on D. Let \(m\in \mathbb {N}\) and \(f^{(-m)}\) be as in (3.16). There exist \(C>0\), \(c>0\), depending on \(A,a,m\), on \(\eta ,r\) in (2.1), on \(\varepsilon _{*}\) in (3.1) and on \(M_{p}(Y)\) for p large enough, such that the following properties hold.

  1. (i)

    CLT: for every n,

    $$\begin{aligned}&{\mathbb {E}}(f(\mathcal {S}_{n}(Y))) ={\mathbb {E}}(f(\mathcal {S}_{n}(G)))+\frac{ 1}{n^{1/2}}R^{(0)}_{n,m}(f) \quad \text{ with } \end{aligned}$$
    (3.19)
    $$\begin{aligned}&\vert R^{(0)}_{n,m}(f)\vert \le C(L_{0}(f^{(-m)})+n^{1/2}e^{-cn}L_{0}(f)); \end{aligned}$$
    (3.20)
  2. (ii)

    Edgeworth expansion up to order 2: for every n,

    $$\begin{aligned}&{\mathbb {E}}(f(\mathcal {S}_{n}(Y))) ={\mathbb {E}}\left( f\left( \Sigma _{n}^{1/2}W\right) Q_n(X,W)\right) +\frac{1}{n^{3/2}}R^{(2)}_{n,m}(f) \quad \text{ with } \end{aligned}$$
    (3.21)
    $$\begin{aligned}&\left| R^{(2)}_{n,m}(f)\right| \le C(L_{0}(f^{(-m)})+n^{3/2} e^{-cn}L_{0}(f)), \end{aligned}$$
    (3.22)

\(Q_n(X,x)\) being defined in (3.13) and \(W\in \mathbb {R}^d\) denoting a standard Gaussian random variable.

Proof

We prove (3.21)–(3.22), the proof of (3.19)–(3.20) being similar.

Let \(p_{n}(x)\) denote the density function of \(\Sigma _{n}^{-1/2}W\). Then, with \( \varphi _{\varepsilon }\) some regularization kernel in \(C_{c}^{\infty }(\mathbb {R}^{d})\),

$$\begin{aligned} {\mathbb {E}}\left( f\left( \Sigma _{n}^{-1/2}W\right) Q_{n}(W)\right)= & {} \int f(y)Q_{n}\left( \Sigma _{n}^{1/2}y\right) p_{n}(y)dy \\= & {} \int \lim _{\varepsilon \rightarrow 0}f*\varphi _{\varepsilon }(y)Q_{n}\left( \Sigma _{n}^{1/2}y\right) p_{n}(y)dy \\= & {} \lim _{\varepsilon \rightarrow 0}\int f*\varphi _{\varepsilon }(y)Q_{n}\left( \Sigma _{n}^{1/2}y\right) p_{n}(y)dy. \end{aligned}$$

Here we have used the fact that f is continuous on D and \(D^c\) has zero Lebesgue measure, and also the fact that \(\vert f*\varphi _{\varepsilon }(y)Q_{n}(X,\Sigma _{n}^{1/2}y)\vert \le C L_{0}(f)(1+\left| y\right| )^{l_{0}(f)+r}\) (with r the order of the polynomial \(Q_{n}(X,y)\)), which is integrable with respect to \(p_{n}(y)dy\). Now, using (3.17),

$$\begin{aligned} f*\varphi _{\varepsilon }(y) =(-1)^{\vert \gamma ^{(m)}\vert }\partial ^{\gamma ^{(m)}}(f^{(-m)}*\varphi _{\varepsilon })(y). \end{aligned}$$

It follows that, using (3.14) for smooth functions,

$$\begin{aligned}&\int f*\varphi _{\varepsilon }(y)Q_{n}\left( \Sigma _{n}^{1/2}y\right) p_{n}(y)dy \\&\quad =(-1)^{\vert \gamma ^{(m)}\vert }{\mathbb {E}}\left( \partial ^{\gamma ^{(m)}}(f^{(-m)}*\varphi _{\varepsilon })\left( \Sigma _{n}^{-1/2}W\right) Q_{n}(W)\right) \\&\quad =(-1)^{\vert \gamma ^{(m)}\vert }\left[ {\mathbb {E}}\left( \partial ^{\gamma ^{(m)}}(f^{(-m)}*\varphi _{\varepsilon })(S_{n}(Y))\right) + \frac{1}{n^{3/2}}R^{(2)}_{n}(f^{(-m)}*\varphi _{\varepsilon })\right] \\&\quad ={\mathbb {E}}((f*\varphi _{\varepsilon })(S_{n}(Y)))+\frac{1}{n^{3/2}} (-1)^{\vert \gamma ^{(m)}\vert }R^{(2)}_{n}(f^{(-m)}*\varphi _{\varepsilon }), \end{aligned}$$

with

$$\begin{aligned} \left| R_{n}(f^{(-m)}*\varphi _{\varepsilon })\right| \le L_{0}(f^{(-m)}*\varphi _{\varepsilon })+n^{\frac{3}{2} }e^{-cn}L_{\vert \gamma ^{(m)}\vert }(f^{(-m)}*\varphi _{\varepsilon }). \end{aligned}$$

Since, for every \(\varepsilon >0\),

$$\begin{aligned} L_{0}(f^{(-m)}*\varphi _{\varepsilon })\le CL_{0}(f^{(-m)})\quad \text{ and }\quad L_{d\times m}(f^{(-m)}*\varphi _{\varepsilon })\le L_{0}(f), \end{aligned}$$

we obtain

$$\begin{aligned} \left| R_{n}(f^{(-m)}*\varphi _{\varepsilon })\right| \le CL_{0}(f^{(-m)})+n^{\frac{3}{2} }e^{-cn}L_0(f). \end{aligned}$$

Moreover, \({\mathbb {E}}((f*\varphi _{\varepsilon })(S_{n}(Y)))={\mathbb {E}}((f*\varphi _{\varepsilon })(S_{n}(Y))1_{S_n(Y)\in D})+{\mathbb {E}}((f*\varphi _{\varepsilon })(S_{n}(Y))1_{S_n(Y)\notin D})\). Now, for every \(\varepsilon >0\),

$$\begin{aligned} |{\mathbb {E}}((f*\varphi _{\varepsilon })(S_{n}(Y))1_{S_n(Y)\notin D})| \le CL_0(f)e^{-an}. \end{aligned}$$

And since f is continuous in D and with polynomial growth,

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0}{\mathbb {E}}((f*\varphi _{\varepsilon })(\mathcal {S}_{n}(Y))1_{S_n(Y)\in D})= & {} {\mathbb {E}}(f(\mathcal {S}_{n}(Y))1_{\mathcal {S}_n(Y)\in D})\\= & {} {\mathbb {E}}(f(\mathcal {S}_{n}(Y))) -{\mathbb {E}}(f(\mathcal {S}_{n}(Y))1_{\mathcal {S}_n(Y)\notin D}) \end{aligned}$$

with \(|{\mathbb {E}}(f(\mathcal {S}_{n}(Y))1_{\mathcal {S}_n(Y)\notin D})|\le CL_0(f)e^{-an}\). So, we pass to the limit as \(\varepsilon \rightarrow 0\) and we reach (3.21) with the estimate (3.22) for the remainder. \(\square \)

Let us mention some more facts which will be useful in our framework. We will work with an even function f (so \(f(x)=f(-x)\)) which satisfies the requirements of Lemma 3.1. Since W and \(-W\) have the same law, and the Hermite polynomials of order three are odd, we have

$$\begin{aligned} {\mathbb {E}}\left( f\left( \Sigma _{n}^{1/2}W\right) \Gamma _{n,1}\left( \Sigma _{n}^{-1/2}X,W\right) \right) =0, \end{aligned}$$

so this term no longer appears in our development. Moreover consider a diagonal matrix \(I_{d}(\lambda )\) with diagonal entries \(I_{d}^{ii}(\lambda )=\lambda _{i} \) such that \(\lambda _{i}\ge \varepsilon _{*}\). Then a straightforward computation (using the non-degeneracy of \(\Sigma _{n}\) and of \(I_{d}(\lambda )\) and some standard integration by parts techniques) gives

$$\begin{aligned}&{\mathbb {E}}\left( f\left( \Sigma _{n}^{1/2}W\right) \Gamma _{n,2}\left( \Sigma _{n}^{-1/2}X,W\right) \right) \\&\quad ={\mathbb {E}}\left( f\left( I_{d}^{1/2}(\lambda )W\right) \Gamma _{n,2}\left( I_{d}^{-1/2}(\lambda )X,W\right) \right) +r_{n}(f) \end{aligned}$$

with

$$\begin{aligned} \left| r_{n}(f)\right| \le CL_{0}(f)\times \left\| \Sigma _{n}-I_{d}(\lambda )\right\| \end{aligned}$$
(3.23)

with C depending on \(\varepsilon _{*}\). Recalling that \(S_{n}(G){\mathop {=}\limits ^{\text{ Law }}}\Sigma _{n}^{1/2}W\), we write (3.21) as

$$\begin{aligned} \begin{array}{rl} {\mathbb {E}}(f(\mathcal {S}_{n}(Y))) =&{} \displaystyle {\mathbb {E}}(f(\mathcal {S}_{n}(G)))+\frac{ 1}{n}{\mathbb {E}}\left( f\left( I_{d}^{1/2}(\lambda )W\right) \Gamma _{n,2}\left( I_{d}^{-1/2}(\lambda )X,W\right) \right) \\ &{}\displaystyle +\,\frac{1}{n}r_{n}(f)+\frac{1}{n^{3/2}}R^{(2)}_{n,m}(f), \end{array} \end{aligned}$$
(3.24)

\(R^{(2)}_{n,m}(f)\) being given in (3.22). This is the equality that we will use in the sequel.

3.2 The case of trigonometric polynomials

We fix here the objects which will be taken into account and the results from Sect. 3.1 we are going to use.

Let \(Y=(Y_{k})_{k\in \mathbb {N}}\) denote the sequence introduced in Sect. 2. For each \(t\ge 0\) we consider the matrices \(C_{n}(k,t),\Sigma _n(t)\in \mathcal {M}_{2\times 2},n\in \mathbb {N},1\le k\le n\) defined by

$$\begin{aligned} C_{n}(k,t)=\begin{pmatrix} \cos \left( \frac{kt}{n}\right) & \sin \left( \frac{kt}{n}\right) \\ -\frac{k}{n}\sin \left( \frac{kt}{n}\right) & \frac{k}{n}\cos \left( \frac{kt}{n}\right) \end{pmatrix},\qquad \Sigma _{n}(t)=\frac{1}{n}\sum _{k=1}^{n}C_{n}(k,t)C_{n}^{*}(k,t). \end{aligned}$$
(3.25)

Note that \(\Sigma _n(t)\) is non-degenerate: for \(\xi \in \mathbb {R}^{2}\) one has \(\langle C_{n}(k,t)C_{n}^{*}(k,t)\xi ,\xi \rangle =\vert C_{n}^{*}(k,t)\xi \vert ^{2}=\xi _{1}^{2}+\frac{k^{2}}{n^{2}}\xi _{2}^{2}\ge \frac{k^{2} }{n^{2}}\left| \xi \right| ^{2}\) so that

$$\begin{aligned} \langle \Sigma _n(t)\xi ,\xi \rangle \ge \frac{1}{n}\sum _{k=1}^{n}\frac{k^{2}}{n^{2}}\left| \xi \right| ^{2}\ge \int _{0}^{1}x^{2}dx\times \left| \xi \right| ^{2}=\frac{1}{3} \left| \xi \right| ^{2}. \end{aligned}$$
(3.26)

We denote

$$\begin{aligned} Z_{n,k}(t,Y)=C_{n}(k,t)Y_{k}. \end{aligned}$$
(3.27)

We are concerned with

$$\begin{aligned} S_{n}(t,Y)=\frac{1}{\sqrt{n}}\sum _{k=1}^{n}Z_{n,k}(t,Y)=\frac{1}{\sqrt{n}} \sum _{k=1}^{n}C_{n}(k,t)Y_{k}. \end{aligned}$$
(3.28)

Moreover, with the notation from (2.3), \( S_{n}^{1}(t,Y)=P_{n}(t,Y) \) and \(S_{n}^{2}(t,Y)=P_{n}^{\prime }(t,Y)\).
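A direct computation from (2.3) (our own remark) shows that the covariance matrix \(\Sigma _n(t)\) of \(S_n(t,Y)\) does not depend on t and equals \(\text {diag}\big (1,\frac{1}{n}\sum _{k=1}^n\frac{k^2}{n^2}\big )\), whose second entry converges to \(\frac{1}{3}\); this is also the reason for the choice \(\lambda =(1,\frac{1}{3})\) made in Sect. 5. A quick empirical check (assuming numpy):

```python
import numpy as np

def S_n(t, n, Y1, Y2):
    """S_n(t,Y) = (P_n(t,Y), P_n'(t,Y)) for a batch of coefficient samples (rows)."""
    k = np.arange(1, n + 1)
    c, s = np.cos(k * t / n), np.sin(k * t / n)
    P = (Y1 @ c + Y2 @ s) / np.sqrt(n)
    dP = (Y2 @ (k / n * c) - Y1 @ (k / n * s)) / np.sqrt(n)
    return np.stack([P, dP], axis=1)

rng = np.random.default_rng(5)
n, N, t = 100, 20000, 7.3
Y1 = rng.choice([-1.0, 1.0], size=(N, n))    # any centered, standardized coefficient law
Y2 = rng.choice([-1.0, 1.0], size=(N, n))

k = np.arange(1, n + 1)
print("empirical covariance of S_n(t,Y):\n", np.cov(S_n(t, n, Y1, Y2), rowvar=False))
print("diag(1, (1/n) sum (k/n)^2)      :\n", np.diag([1.0, np.mean((k / n) ** 2)]))
```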

We finally denote

$$\begin{aligned} S_{n}(t,s,Y)=\left( S_{n}(t,Y),S_{n}(s,Y)\right) \quad \text{ and }\quad Z_{n,k}(t,s,Y)=\left( Z_{n,k}(t,Y),Z_{n,k}(s,Y)\right) . \end{aligned}$$
(3.29)

We notice that, setting

$$\begin{aligned} C_{n}(k,t,s)=\begin{pmatrix} C_{n}(k,t) \\ C_{n}(k,s) \end{pmatrix} \quad \text{ and }\quad \Sigma _{n}(t,s)=\frac{1}{n}\sum _{k=1}^{n}C_{n}(k,t,s)C_{n}^{*}(k,t,s), \end{aligned}$$
(3.30)

then \(C_{n}(k,t,s)\in \mathcal {M}_{4\times 2}\), \(\Sigma _n(t,s)\in \mathcal {M}_{4\times 4}\), for \(n\in \mathbb {N},1\le k\le n\), and we have

$$\begin{aligned}&Z_{n,k}(t,s,Y)=C_{n}(k,t,s)Y_{k}, \end{aligned}$$
(3.31)
$$\begin{aligned}&S_{n}(t,s,Y)=\frac{1}{\sqrt{n}}\sum _{k=1}^{n}Z_{n,k}(t,s,Y)=\frac{1}{\sqrt{n}} \sum _{k=1}^{n}C_{n}(k,t,s)Y_{k}. \end{aligned}$$
(3.32)

For \(\delta >0\), we define the following even functions:

$$\begin{aligned} \begin{array}{ll} &{}F_\delta (x)=\frac{1}{2\delta }\,1_{|x|<\delta },\quad x\in \mathbb {R},\\ &{}\Phi _{\delta }(x)=\left| x_{2}\right| F_{\delta }(x_{1}), \quad x=(x_1,x_2)\in \mathbb {R}^2\\ &{}\Psi _{\delta }(x)=\Phi _{\delta }(x_{1},x_{2})\Phi _{\delta }(x_{3},x_{4}), \quad x=(x_{1},x_{2},x_3,x_4)\in \mathbb {R}^4. \end{array} \end{aligned}$$
(3.33)

Notice that, taking \(m=1\) in (3.16), we have

$$\begin{aligned} L_0\left( F_\delta ^{(-1)}\right) =L_0\left( \Phi _\delta ^{(-1)}\right) =L_0 \left( \Psi _\delta ^{(-1)}\right) =1,\quad \text{ for } \text{ every } \delta >0. \end{aligned}$$
(3.34)

We are ready to state the results which will be used later on.

Proposition 3.2

Suppose that \(Y\in \mathcal {D}(\eta ,r)\). Let \(P_n(t,Y)\), \(S_n(t,Y)\), \(S_n(t,s,Y)\) be defined in (2.3), (3.28), (3.29) respectively and let \(F_\delta \), \(\Phi _{\delta }\), \(\Psi _{\delta }\) be defined in (3.33). There exist \(C,c>0\) such that for every \(\delta >0\) and \(n\in \mathbb {N}\) the following statements hold.

  1. (i)

    For the function \(F_\delta \), it holds

    $$\begin{aligned} |{\mathbb {E}}(F_\delta (P_n(t,Y))) -{\mathbb {E}}(F_\delta (W))|\le C\Big (\frac{1}{\sqrt{n}}+\frac{1}{\delta }e^{-cn}\Big ), \end{aligned}$$
    (3.35)

    where W is standard Gaussian in \(\mathbb {R}\).

  2. (ii)

    For the function \(\Phi _{\delta }\), it holds

    $$\begin{aligned} |{\mathbb {E}}(\Phi _\delta (S_n(t,Y))) -{\mathbb {E}}(\Phi _\delta (S_n(t,G)))|\le C\Big (\frac{1}{\sqrt{n}}+\frac{1}{\delta }e^{-cn}\Big ), \end{aligned}$$
    (3.36)

    and for any invertible diagonal matrix \(I_2(\lambda )\in \mathcal {M}_{2\times 2}\),

$$\begin{aligned} {\mathbb {E}}(\Phi _\delta (S_n(t,Y)))= & {} \displaystyle {\mathbb {E}}(\Phi _\delta (S_n(t,G)))+\frac{ 1}{n}{\mathbb {E}}\left( \Phi _{\delta }\left( I_{2}^{1/2}(\lambda )W\right) \Gamma _{n,2}\left( I_{2}^{-1/2}(\lambda )Z(t,Y),W\right) \right) \nonumber \\&\displaystyle +\frac{1}{n}r_{n}(t,\Phi _{\delta })+\frac{1}{n^{3/2}}R_{n}(t,\Phi _{\delta }),\quad \text{ where }\nonumber \\ \displaystyle |r_n(t,\Phi _{\delta })|\le & {} C\Vert \Sigma _n(t)-I_2(\lambda )\Vert \quad \text{ and }\quad |R_{n}(t,\Phi _{\delta })| \le C(1+n^{3/2}\,e^{-cn}), \end{aligned}$$
    (3.37)

    W denoting a standard normal vector in \(\mathbb {R}^2\) and \(\Sigma _n(t)\) being defined in (3.25).

  3. (iii)

    Let t, s be such that \(\mathrm {det}\,\Sigma _n(t,s)\ge \lambda _*>0\) and let \(I_4(\lambda )\in \mathcal {M}_{4\times 4}\) denote any invertible diagonal matrix. For the function \(\Psi _{\delta }\), it holds

$$\begin{aligned} {\mathbb {E}}(\Psi _\delta (S_n(t,s,Y)))= & {} \displaystyle {\mathbb {E}}(\Psi _\delta (S_n(t,s,G)))+\frac{1}{n}{\mathbb {E}}\left( \Psi _{\delta }\left( I_{4}^{1/2}(\lambda )W\right) \Gamma _{n,2}\left( I_{4}^{-1/2}(\lambda )Z(t,s,Y),W\right) \right) \nonumber \\&\displaystyle +\frac{1}{n}r_{n}(t,s,\Psi _{\delta })+\frac{1}{n^{3/2}}R_{n}(t,s,\Psi _{\delta }),\quad \text{ where }\nonumber \\ \displaystyle |r_n(t,s,\Psi _{\delta })|\le & {} C\Vert \Sigma _n(t,s)-I_4(\lambda )\Vert \quad \text{ and }\quad |R_{n}(t,s,\Psi _{\delta })| \le C(1+n^{3/2}\,e^{-cn}), \end{aligned}$$
    (3.38)

    W denoting a standard normal vector in \(\mathbb {R}^4\) and \(\Sigma _n(t,s)\) being defined in (3.30). We stress that here C depends on \(\lambda _*\) as well.

Proof

We first prove that there exists \(a>0\) such that for every \(\delta \) and n,

$$\begin{aligned} \mathbb {P}(P_n(t,Y)=\pm \delta )\le e^{-an}. \end{aligned}$$

In fact, Doeblin's condition \(Y\in \mathcal {D}(\eta ,r)\) implies the following splitting (see Section 2.1 in [5]): \( Y_{k}{\mathop {=}\limits ^{\text{ Law }}}\chi _{k}V_{k}+(1-\chi _{k})U_{k}, \) where \(\chi _k,V_k,U_k\) are three independent random variables, \(\chi _k\) has a Bernoulli law of parameter \(p_\chi \in (0,1)\) (the same for every k), \(V_k, U_k\) take values in \(\mathbb {R}^2\) and the law of \(V_k\) is absolutely continuous. We denote \(A_{n}=\cap _{k=1}^{n}\{\chi _{k}=0\}\) and we notice that, conditionally on \(A_{n}^{c},\) the law of \(P_n(t,Y)\) is absolutely continuous (because there is at least one \(V_{k}\) acting). So \(\mathbb {P}(\{P_n(t,Y)=\pm \delta \}\cap A_{n}^{c})=0\) and then \( \mathbb {P}(P_n(t,Y)=\pm \delta )\le \mathbb {P}(A_n)=(1-p_\chi )^n=e^{-an}. \)

We give the proof of (3.38), the other statements following by using similar arguments.

By (3.32), \(S_n(t,s,Y)\) is of the form \(\mathcal {S}_n(Y)\) in (3.5) (just take \(C_n(k)=C_{n}(k,t,s)\) from (3.30)). For \(f=\Psi _{\delta }\), the set in (3.18) is \(D=\mathbb {R}^4{\setminus }(\{x_1=\pm \delta \}\cup \{x_3=\pm \delta \})\) and \(\mathbb {P}(S_n(t,s,Y)\notin D)\le \mathbb {P}(P_n(t,Y)=\pm \delta )+\mathbb {P}(P_n(s,Y)=\pm \delta )\le 2e^{-an}\). So, taking into account that \(\Sigma _n(t,s)\ge \lambda _*\) and (3.34), the statement follows by applying (3.24) with \(m=1\). We finally notice that the constant C in (3.23) depends on \(\lambda _*\) as well.

Let us finally recall the “small balls property” from Section 3.2 in [5]. First, we consider \(P_n(t,Y)\) and we note that hypotheses (3.8) and (3.9) in [5] hold. So, we can apply A of Theorem 3.2 in [5] (take \(\eta =1/n^\theta \) therein): there exist \(C,c>0\) such that for every \(\theta >0\) and n,

$$\begin{aligned} \sup _{t\ge 0}\mathbb {P}\big (\left| P_{n}(t,Y)\right| \le n^{-\theta }\big )\le C\Big ( \frac{1}{n^{2\theta }}+e^{-cn}\Big ). \end{aligned}$$
(3.39)

Then, we consider \(S_n(t,Y)\). We have already seen in (3.26) that \(\langle \Sigma _n(t)\xi ,\xi \rangle \ge \frac{1}{3} |\xi |^2\), so again (3.8) and (3.9) in [5] hold, and we are able to apply B of Theorem 3.2 in [5] (take \(l=a=1\) and \(d=2\) therein): there exists \(C>0\) such that for every \(\theta >1\), \(\varepsilon >0\) and n,

$$\begin{aligned} \mathbb {P}\left( \inf _{|t|\le n}\left| S_{n}(t,Y)\right| \le n^{-\theta }\right) \le \frac{C}{n^{\theta -1-\varepsilon }}. \end{aligned}$$
(3.40)

4 Estimates based on Kac–Rice formula

In this section we will use the Kac–Rice lemma, which we recall now. Let \( f:[a,b]\rightarrow \mathbb {R}\) be a differentiable function and let

$$\begin{aligned} \omega _{a,b}(f)=\inf _{x\in [a,b]}(\left| f(x)\right| +\left| f^{\prime }(x)\right| )\quad \text{ and }\quad \delta _{a,b}(f)=\min \{\left| f(a)\right| ,\left| f(b)\right| ,\omega _{a,b}(f)\}. \end{aligned}$$
(4.1)

We denote by \(N_{a,b}(f)\) the number of solutions of \(f(t)=0\) for \(t\in [a,b]\). The Kac–Rice lemma says that if \(\delta _{a,b}(f)>0\) then

$$\begin{aligned} N_{a,b}(f)=I_{a,b}(\delta ,f):=\int _{a}^{b}\left| f^{\prime }(t)\right| 1_{\{\left| f(t)\right| \le \delta \}}\frac{dt}{ 2\delta }\quad for\quad 0<\delta \le \delta _{a,b}(f). \end{aligned}$$
(4.2)

Notice that we also have, for every \(\delta >0\)

$$\begin{aligned} I_{a,b}(\delta ,f)\le 1+N_{a,b}(f^{\prime }) \end{aligned}$$
(4.3)

Indeed, we may assume that \(N_{a,b}(f^{\prime })=p<\infty \) and then we take \(a=a_{0}\le a_{1}<\cdots <a_{p}\le a_{p+1}=b\), where \(a_{1},\ldots ,a_{p}\) are the roots of \(f^{\prime }\). Since f is monotonic on each \((a_{i},a_{i+1})\) one has \( I_{a_{i},a_{i+1}}(\delta ,f)\le 1\), so (4.3) holds. In the following we will refer to this result as the Kac–Rice lemma.

We will use this formula for \(f(t)=P_{n}(t,Y)\). We denote

$$\begin{aligned} \phi _{\delta }(t,Y)=\left| P_{n}^{\prime }(t,Y)\right| \times \frac{ 1}{2\delta }1_{\{\left| y\right| \le \delta \}}(P_{n}(t,Y)). \end{aligned}$$
(4.4)

Then, essentially, the Kac–Rice lemma says that for sufficiently small \(\delta _{n}\) we have

$$\begin{aligned} {\mathbb {E}}(N_{n}(Y))\sim & {} {\mathbb {E}}\left( \int _{0}^{n\pi }\phi _{\delta _{n}}(t,Y)dt\right) \quad \text{ and } \\ {\mathbb {E}}(N_{n}^{2}(Y))\sim & {} 2{\mathbb {E}}\left( \int _{0}^{n\pi }dt\int _{0}^{t}\phi _{\delta _{n}}(t,Y)\phi _{\delta _{n}}(s,Y)ds\right) . \end{aligned}$$

We make this precise in Lemma 4.2 below. Note that we will use the above representations in connection with the CLT; in particular we will use the CLT for \((\phi _{\delta _{n}}(t,Y),\phi _{\delta _{n}}(s,Y))\) in order to estimate \({\mathbb {E}}(\phi _{\delta _{n}}(t,Y)\phi _{\delta _{n}}(s,Y))\). But we will have to handle the following difficulty: if \(t=s\) then the random vector \((\phi _{\delta _{n}}(t,Y),\phi _{\delta _{n}}(s,Y))\) is degenerate, so, in order to avoid this difficulty, we have to remove a band around the diagonal. The main ingredient in order to do so is the following lemma:

Lemma 4.1

Let \(I=(a,a+\varepsilon )\) and let \(N_{n}(I,Y)=N_{a,a+ \varepsilon }(P_{n}(.,Y))\) be the number of zeros of \(P_{n}(t,Y)\) in I. There exist universal constants \(C\ge 1\ge c>0\) (independent of \( n,a,\varepsilon )\) such that

$$\begin{aligned} {\mathbb {E}}\left( N_{n}^{2}(I,Y)1_{\{N_{n}(I,Y)\ge 2\}}\right) \le C(\varepsilon ^{4/3}+ne^{-cn}). \end{aligned}$$
(4.5)

Proof

Since the polynomial \(P_{n}(t,Y)\) has at most 2n roots we have

$$\begin{aligned} {\mathbb {E}}\left( N_{n}^{2}(I,Y)1_{\{N_{n}(I,Y)\ge 2\}}\right) =\mathbb {P}(N_{n}(I,Y)\ge 2)+\sum _{p=1}^{2n}(2p+1)\mathbb {P}(N_{n}(I,Y)>p) \end{aligned}$$
(4.6)

so we have to upper bound \(\mathbb {P}(N_{n}(I,Y)>p)\). In order to do it we will use the following fact: if \(f:[a,a+\varepsilon ]\rightarrow \mathbb {R}\) is \(p+1\) times differentiable and has at least \(p+1\) zeros in this interval, then

$$\begin{aligned} \sup _{x\in [a,a+\varepsilon ]}\left| f(x)\right| \le \frac{ \varepsilon ^{p+1}}{(p+1)!}\sup _{x\in [a,a+\varepsilon ]}\vert f^{(p+1)}(x)\vert . \end{aligned}$$

An argument which proves this is the following: Lagrange's interpolation theorem says that given any \(p+1\) points \(x_{i},i=1,\ldots ,p+1\), in \( [a,a+\varepsilon ]\) one may find a polynomial P of degree at most p such that \( P(x_{i})=f(x_{i})\) and \(\sup _{x\in [a,a+\varepsilon ]}\left| f(x)-P(x)\right| \) is upper bounded as in the previous inequality. Then we take \(x_{i},i=1,\ldots ,p+1\), to be zeros of f and, since P has degree at most p and \(p+1\) roots, we have \(P=0\) and we are done.

We denote \(M_{n,p}=\sup _{t\in [a,a+\varepsilon ]}\vert P_{n}^{(p+1)}(t,Y)\vert \) and we use the above inequality for \( f(t)=P_{n}(t,Y)\) in order to obtain

$$\begin{aligned} \mathbb {P}(N_{n}(I,Y)>p)\le \mathbb {P}\left( \left| P_{n}(a,Y)\right| \le M\times \frac{ (2\varepsilon )^{p+1}}{(p+1)!}\right) +\mathbb {P}(M_{n,p}\ge M) \end{aligned}$$

A reasoning based on Sobolev’s inequality and on Burkholder’s inequality (see the proof of Lemma 3.3 in the section “small balls” of [5]) proves that

$$\begin{aligned} \mathbb {P}(M_{n,p}\ge M)\le \frac{1}{M^{2}}{\mathbb {E}}\left( M_{n,p}^{2}\right) \le \frac{C}{M^{2}} \end{aligned}$$

with C a constant which depends on p and on \(M_{3}(Y)\) (defined in (2.2)).

We denote now \(\delta =M\times \frac{(2\varepsilon )^{p+1}}{(p+1)!}\) and we estimate

$$\begin{aligned} \mathbb {P}(\left| P_{n}(a,Y)\right| \le \delta )=\delta {\mathbb {E}}(F_{\delta }(P_{n}(a,Y)))\quad \text{ with }\quad F_{\delta }(x)=1_{\{\left| x\right| \le \delta \}}\frac{1}{2\delta }. \end{aligned}$$

We will use (3.35): there exist \(C,c>0\) such that for every \(\delta >0\) and \(n\in \mathbb {N}\),

$$\begin{aligned} |{\mathbb {E}}(F_\delta (P_n(t,Y))) -{\mathbb {E}}(F_\delta (W))|\le C\Big (\frac{1}{\sqrt{n}}+\frac{1}{\delta }e^{-cn}\Big ), \end{aligned}$$

with W a standard normal random variable. Since \(\left| {\mathbb {E}}(F_{\delta }(W))\right| \le \frac{1}{2\pi }\) we get

$$\begin{aligned} \left| {\mathbb {E}}(F_{\delta }(P_{n}(a,Y))\right| \le C\left( 1+\frac{1}{ \delta }e^{-cn}\right) . \end{aligned}$$

This gives

$$\begin{aligned} \mathbb {P}(\left| P_{n}(a,Y)\right| \le \delta )\le C\delta +Ce^{-cn} \end{aligned}$$

and coming back

$$\begin{aligned} \mathbb {P}(N_{n}(I,Y)>p)\le CM\times \frac{(2\varepsilon )^{p+1}}{(p+1)!}+Ce^{-cn}+ \frac{C}{M^{2}}. \end{aligned}$$

We optimize on M in order to obtain (for \(p\ge 1)\)

$$\begin{aligned} \mathbb {P}(N_{n}(I,Y)>p)\le C\frac{\varepsilon ^{4/3}}{(p+1)!^{2/3}}+Ce^{-cn}. \end{aligned}$$

We insert this in (4.6) and, since \(\sum _{p=1}^{\infty }p/(p+1)!^{2/3}<\infty ,\) we obtain (4.5). \(\square \)

We now fix \(\varepsilon >0\) and we denote

$$\begin{aligned} I_{k}^{\varepsilon }=[k\varepsilon ,(k+1)\varepsilon )\quad \text{ and }\quad D_{n,\varepsilon }=\bigcup _{0\le k\le n\pi /\varepsilon }\ \bigcup _{p=0}^{k-2}I_{k}^{\varepsilon }\times I_{p}^{\varepsilon }. \end{aligned}$$

We also denote

$$\begin{aligned} \begin{array}{rl} V_{n}(Y) = &{}\displaystyle {\mathbb {E}}\left( N_{n}^{2}(Y)\right) -\left( {\mathbb {E}}\left( N_{n}(Y)\right) \right) ^{2}\quad \text{ and }\\ v_{n}(t,s,Y) = &{}\displaystyle {\mathbb {E}}(\phi _{\delta _{n}}(t,Y)\phi _{\delta _{n}}(s,Y))-{\mathbb {E}}(\phi _{\delta _{n}}(t,Y)){\mathbb {E}}(\phi _{\delta _{n}}(s,Y)) \end{array} \end{aligned}$$
(4.7)

with \(N_{n}(Y)\) defined in (2.4) and \(\phi _{\delta _{n}}(t,Y)\) defined in (4.4).

Lemma 4.2

A. Let \(\delta _{n}=n^{-\theta }\) with \(\theta =5\). Then

$$\begin{aligned} {\mathbb {E}}(N_{n}^{2}(Y))={\mathbb {E}}(N_{n}(Y))+2\int _{D_{n,\varepsilon }}{\mathbb {E}}(\phi _{\delta _{n}}(t,Y)\phi _{\delta _{n}}(s,Y))dsdt+R_{n,\varepsilon } \end{aligned}$$
(4.8)

with

$$\begin{aligned} \overline{\lim }_{n}\frac{1}{n}\left| R_{n,\varepsilon }\right| \le C\varepsilon ^{1/3}. \end{aligned}$$
(4.9)

B. Moreover,

$$\begin{aligned} ({\mathbb {E}}(N_{n}(Y)))^{2}=2\int _{D_{n,\varepsilon }}{\mathbb {E}}(\phi _{\delta _{n}}(t,Y)){\mathbb {E}}(\phi _{\delta _{n}}(s,Y))dsdt+R_{n,\varepsilon } \end{aligned}$$
(4.10)

with \(R_{n,\varepsilon }\) which verifies (4.9).

C.

$$\begin{aligned} V_{n}(Y)=V_{n}(G)+2\int _{D_{n,\varepsilon }}(v_{n}(t,s,Y)-v_{n}(t,s,G))dsdt+R_{n,\varepsilon } \end{aligned}$$
(4.11)

with \(R_{n,\varepsilon }\) which verifies (4.9).

Proof of A. Step 1

We write

$$\begin{aligned} {\mathbb {E}}\left( N_{n}^{2}(Y)\right) =J_{1}(n)+2J_{2}(n)+2J_{3}(n) \end{aligned}$$

with

$$\begin{aligned} J_{1}(n)= & {} \sum _{0\le k\le n\pi /\varepsilon }{\mathbb {E}}\left( N_{n}^{2}\left( I_{k}^{\varepsilon },Y\right) \right) ,\quad J_{2}(n)=\sum _{0\le k\le n\pi /\varepsilon }{\mathbb {E}}\left( N_{n}\left( I_{k}^{\varepsilon },Y\right) N_{n}\left( I_{k+1}^{\varepsilon },Y\right) \right) \\ J_{3}(n)= & {} \sum _{0\le k\le n\pi /\varepsilon }\sum _{p=0}^{k-2}{\mathbb {E}}\left( N_{n}\left( I_{k}^{\varepsilon },Y\right) N_{n}\left( I_{p}^{\varepsilon },Y\right) \right) . \end{aligned}$$

Note that

$$\begin{aligned} {\mathbb {E}}\left( N_{n}\left( I_{k}^{\varepsilon },Y\right) N_{n}\left( I_{k+1}^{\varepsilon },Y\right) \right) \le {\mathbb {E}}\left( N_{n}^{2}\left( I_{k}^{\varepsilon }\cup I_{k+1}^{\varepsilon },Y\right) 1_{\{N_{n}\left( I_{k}^{\varepsilon }\cup I_{k+1}^{\varepsilon },Y\right) \ge 2\}}\right) \end{aligned}$$

Using (4.5)

$$\begin{aligned} \left| J_{2}(n)\right| \le C\times \frac{n}{\varepsilon }\times (\varepsilon ^{4/3}+ne^{-n}) \end{aligned}$$

so we get

$$\begin{aligned} \overline{\lim _{n}}\frac{1}{n}\left| J_{2}(n)\right| \le C\varepsilon ^{1/3}. \end{aligned}$$

We also have

$$\begin{aligned} {\mathbb {E}}\left( N_{n}^{2}\left( I_{k}^{\varepsilon },Y\right) \right) ={\mathbb {E}}\left( \left( N_{n}^{2}\left( I_{k}^{\varepsilon },Y\right) -N_{n}\left( I_{k}^{\varepsilon },Y\right) \right) 1_{\{N_{n}\left( I_{k}^{\varepsilon },Y\right) \ge 2\}}\right) +{\mathbb {E}}\left( N_{n}\left( I_{k}^{\varepsilon },Y\right) \right) \end{aligned}$$

so using (4.5) again

$$\begin{aligned} \overline{\lim _{n}}\frac{1}{n}\left| J_{1}(n)-{\mathbb {E}}(N_{n}(Y))\right| \le C\varepsilon ^{1/3}. \end{aligned}$$

Step 2. We want to estimate

$$\begin{aligned} \frac{1}{n}J_{3}(n)=\frac{1}{n}{\mathbb {E}}\left( \sum _{0\le k\le n\pi /\varepsilon }\sum _{p=0}^{k-2}N_{n}\left( I_{k}^{\varepsilon },Y\right) N_{n}\left( I_{p}^{\varepsilon },Y\right) \right) . \end{aligned}$$

We will use the Kac–Rice lemma for \( f(t)=P_{n}(t,Y)\), so we have \(N_{n}(Y)=N_{0,n\pi }(P_{n}(\cdot ,Y))\). We denote \( \delta _{n}(Y)=\delta _{0,n\pi }(P_{n}(\cdot ,Y))\) (see (4.1)), we take \( \delta _{n}=n^{-\theta }=n^{-5}\) and we write

$$\begin{aligned} {\mathbb {E}}\left( N_{n}\left( I_{k}^{\varepsilon },Y\right) N_{n}\left( I_{p}^{\varepsilon },Y\right) \right) =A_{n,k,p,\varepsilon }+B_{n,k,p,\varepsilon } \end{aligned}$$

with

$$\begin{aligned} A_{n,k,p,\varepsilon }= & {} {\mathbb {E}}\left( N_{n}\left( I_{k}^{\varepsilon },Y\right) N_{n}\left( I_{p}^{\varepsilon },Y\right) 1_{\{\delta _{n}\le \delta _{n}(Y)\}}\right) \\ B_{n,k,p,\varepsilon }= & {} {\mathbb {E}}\left( N_{n}\left( I_{k}^{\varepsilon },Y\right) N_{n}\left( I_{p}^{\varepsilon },Y\right) 1_{\{\delta _{n}>\delta _{n}(Y)\}}\right) . \end{aligned}$$

Since \(P_{n}(t,Y)\) has at most 2n roots we get

$$\begin{aligned} B_{n,k,p,\varepsilon }\le 4n^{2}\mathbb {P}(\delta _{n}\ge \delta _{n}(Y)). \end{aligned}$$

We use now the small balls property. Recall that \(\delta _{n}(Y)=\min \{\left| P_{n}(0,Y)\right| ,\left| P_{n}(n\pi ,Y)\right| ,\omega _{0,n\pi }(P_{n})\}\) with \( \omega _{0,n\pi }(P_{n})=\inf _{0\le t\le n\pi }(\left| P_{n}(t,Y)\right| +\left| P_{n}^{\prime }(t,Y)\right| )\). By using (3.39) and (3.40) with \( \theta =5\), we get

$$\begin{aligned} \mathbb {P}(\delta _{n}\ge \delta _{n}(Y))\le \sup _{t\ge 0}\mathbb {P}(|P_n(t,Y)|\le \delta _{n}) +\mathbb {P}\left( \inf _{|t|\le n\pi }|S_n(t,Y)|\le \delta _{n}\right) \le \frac{C}{n^{4-\varepsilon }}. \end{aligned}$$
(4.12)

So we get

$$\begin{aligned} \frac{1}{n}\sum _{0\le k\le n\pi /\varepsilon }\sum _{p=0}^{k-2}B_{n,k,p,\varepsilon }\le \frac{C}{n^{1-\varepsilon }} \rightarrow 0. \end{aligned}$$

Moreover using Kac–Rice formula (4.2) (notice that \(\delta _{n}(Y)\le \delta _{k\varepsilon ,(k+1)\varepsilon }(P_{n}(\cdot ,Y))\) for every k) we have

$$\begin{aligned} A_{n,k,p,\varepsilon }={\mathbb {E}}\Big (1_{\{\delta _{n}\le \delta _{n}(Y)\}}\int _{I_{k}^{\varepsilon }\times I_{p}^{\varepsilon }}\phi _{\delta _{n}}(t,Y)\phi _{\delta _{n}}(s,Y)dtds\Big ) \end{aligned}$$

and consequently

$$\begin{aligned} \sum _{0\le k\le n\pi /\varepsilon }\sum _{p=0}^{k-2}A_{n,k,p,\varepsilon }={\mathbb {E}}\Big (1_{\{\delta _{n}\le \delta _{n}(Y)\}}\int _{D_{n,\varepsilon }}\phi _{\delta _{n}}(t,Y)\phi _{\delta _{n}}(s,Y)dtds\Big )=a_{n,\varepsilon }+b_{n,\varepsilon } \end{aligned}$$

with

$$\begin{aligned} a_{n,\varepsilon }= & {} {\mathbb {E}}\Big (\int _{D_{n,\varepsilon }}\phi _{\delta _{n}}(t,Y)\phi _{\delta _{n}}(s,Y)dtds\Big ) \\ b_{n,\varepsilon }= & {} {\mathbb {E}}\Big (1_{\{\delta _{n}\ge \delta _{n}(Y)\}}\int _{D_{n,\varepsilon }}\phi _{\delta _{n}}(t,Y)\phi _{\delta _{n}}(s,Y)dtds\Big ). \end{aligned}$$

Since \(D_{n,\varepsilon }\subset [0,n\pi ]^2\),

$$\begin{aligned} \int _{D_{n,\varepsilon }}\phi _{\delta _{n}}(t,Y)\phi _{\delta _{n}}(s,Y)dtds\le & {} \Big (\int _{0}^{n\pi }\phi _{\delta _{n}}(t,Y)dt\Big )^{2} \\\le & {} (1+N_{n}([0,n\pi ],P_{n}^{\prime }(\cdot ,Y)))^{2}, \end{aligned}$$

\(N_{n}([0,n\pi ],P_{n}^{\prime }(\cdot ,Y))\) denoting the number of roots of \(P_{n}^{\prime }(\cdot ,Y)\) in \([0,n\pi ]\). Since \(P_{n}^{\prime }\) is still a trigonometric polynomial of order n,  it has at most 2n roots. Then the above quantity is upper bounded by \( (1+2n)^{2}\) and finally, using the small balls result (4.12)

$$\begin{aligned} \frac{1}{n}b_{n,\varepsilon }\le Cn^{3}\mathbb {P}(\delta _{n}\ge \delta _{n}(Y))\le \frac{C}{n^{1-\varepsilon }}\rightarrow 0 \end{aligned}$$

so (4.8) is proved. \(\square \)

Proof of B

The proof is analogous (but simpler) so we just sketch it. We denote by \(R_{n}\) a quantity such that \(\overline{\lim }_{n}\frac{1}{n }\left| R_{n}\right| =0\). Using again Kac–Rice formula (4.2) and the small balls property

$$\begin{aligned} ({\mathbb {E}}(N_{n}(Y)))^{2}= & {} \Big ({\mathbb {E}}\Big (\int _{0}^{n\pi }1_{\{\delta _{n}\le \delta _{n}(Y)\}}\phi _{\delta _{n}}(t,Y)dt\Big )\Big )^{2}+R_{n} \\= & {} 2\int _{0}^{n\pi }dt\int _{0}^{t}{\mathbb {E}}(1_{\{\delta _{n}\le \delta _{n}(Y)\}}\phi _{\delta _{n}}(t,Y)){\mathbb {E}}(1_{\{\delta _{n}\le \delta _{n}(Y)\}}\phi _{\delta _{n}}(s,Y))ds+R_{n} \\= & {} 2\int _{0}^{n\pi }dt\int _{0}^{t}{\mathbb {E}}(\phi _{\delta _{n}}(t,Y)){\mathbb {E}}(\phi _{\delta _{n}}(s,Y))ds+R_{n}^{\prime } \\= & {} 2\int _{D_{n,\varepsilon }}{\mathbb {E}}(\phi _{\delta _{n}}(t,Y)){\mathbb {E}}(\phi _{\delta _{n}}(s,Y))dsdt+R_{n,\varepsilon }+R_{n}^{\prime } \end{aligned}$$

with

$$\begin{aligned} R_{n,\varepsilon }=\int _{D_{n,\varepsilon }^{c}}{\mathbb {E}}(\phi _{\delta _{n}}(t,Y)){\mathbb {E}}(\phi _{\delta _{n}}(s,Y))dsdt. \end{aligned}$$

In the notation of Sect. 3.2, we have \(\phi _{\delta _n}(t,Y)=\Phi _{\delta _n}(S_n(t,Y))\) and \(\phi _{\delta _n}(t,G)=\Phi _{\delta _n}(S_n(t,G))\). So, by using (3.36), we get

$$\begin{aligned} \left| {\mathbb {E}}(\phi _{\delta _{n}}(t,Y))-{\mathbb {E}}(\phi _{\delta _{n}}(t,G))\right| \le C\Big (\frac{1}{\sqrt{n}}+\delta _{n}^{-1}e^{-cn}\Big ). \end{aligned}$$

Recall that \(S_n(t,G)=(P_{n}(t,G),P_{n}^{\prime }(t,G))\) is a Gaussian random variable with covariance matrix \(\Sigma _{n}(t)\) and, for sufficiently large n, one has \(\left\langle \Sigma _{n}(t)x,x\right\rangle \ge \frac{1}{3}\left| x\right| ^{2}\) (see (3.25) and (3.26)). It follows that

$$\begin{aligned} {\mathbb {E}}(\phi _{\delta _{n}}(t,G))= & {} \int _{\mathbb {R}^{2}}\left| x_{2}\right| \frac{ 1}{2\delta _{n}}1_{\{\left| x_{1}\right| \le \delta _{n}\}}\frac{1}{ 2\pi }e^{-\left\langle \Sigma _{n}(t)x,x\right\rangle }dx \\\le & {} \int _{\mathbb {R}}\left| x_{2}\right| \frac{1}{\sqrt{2\pi }}e^{-\frac{1 }{6}\left| x_{2}\right| ^{2}}dx_{2}\times \int _{\mathbb {R}}\frac{1}{2\delta _{n}}1_{\{\left| x_{1}\right| \le \delta _{n}\}}\frac{1}{\sqrt{2\pi }}e^{-\frac{1}{6}\left| x_{1}\right| ^{2}}dx_{1} \\\le & {} C. \end{aligned}$$

So \({\mathbb {E}}(\phi _{\delta _{n}}(t,Y))\le C\) and consequently, for sufficiently large n

$$\begin{aligned} \frac{1}{n}\left| R_{n,\varepsilon }\right| \le \frac{C}{n} \left| D_{n,\varepsilon }^{c}\right| \le C\varepsilon . \end{aligned}$$

\(\square \)

Proof of C

We have proved in [5] that

$$\begin{aligned} \lim _{n}\frac{1}{n}({\mathbb {E}}(N_{n}(Y))-{\mathbb {E}}(N_{n}(G)))=0 \end{aligned}$$

so (4.11) is an immediate consequence of (4.8) and (4.10). \(\square \)

5 Cancellations

Having in mind (4.11), we will now estimate \(v_{n}(t,s,Y)-v_{n}(t,s,G)\). A careful analysis of this term involves a certain number of cancellations.

Here we strongly use the objects and formulas in Sect. 3.2. We recall the functions \(\Phi _{\delta }\) and \(\Psi _{\delta }\) in (3.33), so that \(\Phi _{\delta }(S_{n}(t,Y))=\phi _{\delta }(t,Y)\) (see (4.4)) and \(\Psi _{\delta }(S_{n}(t,s,Y))=\phi _{\delta }(t,Y)\phi _{\delta }(s,Y)\). Then (see (4.7) and recall that \(\delta _{n}=1/n^{5}\))

$$\begin{aligned} v_{n}(t,s,Y)={\mathbb {E}}(\Psi _{\delta _{n}}(S_{n}(t,s,Y)))-{\mathbb {E}}(\Phi _{\delta _{n}}(S_{n}(t,Y))){\mathbb {E}}(\Phi _{\delta _{n}}(S_{n}(s,Y))). \end{aligned}$$
(5.1)

Lemma 5.1

Suppose that for every multi-index \(\alpha \) with \(\left| \alpha \right| =3,4\) the following limits exist and are finite:

$$\begin{aligned} \lim _{n}{\mathbb {E}}\left( \prod _{i=1}^{\left| \alpha \right| }Y_{n}^{\alpha _{i}}\right) =y_{\infty }(\alpha ). \end{aligned}$$

Then, for every \(\varepsilon >0,\)

$$\begin{aligned} \lim _{n}\frac{1}{n}\int _{D_{n,\varepsilon }}(v_{n}(t,s,Y)-v_{n}(t,s,G))dsdt= \frac{1}{120}\times y_{*}+r_{\varepsilon } \end{aligned}$$
(5.2)

with \(\left| r_{\varepsilon }\right| \le C\varepsilon \) and

$$\begin{aligned} y_{*}&=(y_{\infty }(1,1,2,2)-1)+(y_{\infty }(2,2,1,1)-1)+(y_{\infty }(1,1,1,1)-3)\\&\quad +(y_{\infty }(2,2,2,2)-3). \end{aligned}$$
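
Note that if the coefficients are themselves standard Gaussian, then \(y_{\infty }(1,1,2,2)=y_{\infty }(2,2,1,1)=1\) and \(y_{\infty }(1,1,1,1)=y_{\infty }(2,2,2,2)=3\), so that \(y_{*}=0\) and the main term in (5.2) vanishes, as it must since in that case \(v_{n}(t,s,Y)=v_{n}(t,s,G)\).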

Proof. Step 1

We use here Proposition 3.2. Recall that \(\Sigma _{n}(t)\) denotes the covariance matrix of \(S_{n}(t,Y)\) and \(\Sigma _{n}(t,s)\) the covariance matrix of \(S_{n}(t,s,Y)\) (see (3.25) and (3.30)). We apply (3.37). We stress that the constants will depend on \(\det \Sigma _{n}(t,s)\), which is larger than \(\frac{ 1}{2}\lambda ^{2}(\varepsilon )>0\) for \((t,s)\in D_{n,\varepsilon }\) (see (C.3)). We will also use the diagonal matrices \(I_{2}=I_{2}(\lambda ) \) with \(\lambda _{1}=1,\lambda _{2}=\frac{1}{3}\) and \(I_{4}=I_{4}(\lambda ) \) with \(\lambda _{1}=\lambda _{3}=1,\lambda _{2}=\lambda _{4}=\frac{1}{3}\). By (3.37),

$$\begin{aligned} \begin{array}{rl} {\mathbb {E}}(\Phi _{\delta _{n}}(S_{n}(t,Y))) = &{}\displaystyle {\mathbb {E}}(\Phi _{\delta _{n}}(S_{n}(t,G)))\\ &{}\displaystyle +\frac{1}{n}{\mathbb {E}}\left( \Phi _{\delta _{n}}\left( I_{2}^{1/2}W\right) \Gamma _{n,2}\left( I_{2}^{-1/2}Z_{n}(t,Y),W\right) \right) \\ &{}\displaystyle +\frac{1}{n}r_{n}(t,\Phi _{\delta _{n}})+\frac{1}{n^{3/2}}R_{n}(t,\Phi _{\delta _{n}}) \end{array} \end{aligned}$$
(5.3)

and a similar expression holds for \({\mathbb {E}}(\Phi _{\delta _{n}}(S_{n}(s,Y)))\). The remainder \(r_{n}(t,\Phi _{\delta _n })\) verifies (3.23) with \(\Sigma _{n}(t)-I_{2}\). We also recall that \(S_{n}(t,G)\) has the same law as \(\Sigma _{n}^{1/2}(t)W\), so (with \(r_{n}(t,\Phi _{\delta _n })\) again verifying (3.23)),

$$\begin{aligned} {\mathbb {E}}(\Phi _{\delta _{n}}(S_{n}(t,G)))={\mathbb {E}}\left( \Phi _{\delta _{n}}\left( \Sigma _{n}^{1/2}(t)W\right) \right) ={\mathbb {E}}\left( \Phi _{\delta _{n}}\left( I_{2}^{1/2}W\right) \right) +\frac{1}{n}r_{n}(t,\Phi _{\delta _n }). \end{aligned}$$

Moreover, by (3.38),

$$\begin{aligned} {\mathbb {E}}(\Psi _{\delta _{n}}(S_{n}(t,s,Y)))&={\mathbb {E}}(\Psi _{\delta _{n}}(S_{n}(t,s,G)))+\frac{1}{n}{\mathbb {E}}\left( \Psi _{\delta _{n}}\left( I_{4}^{1/2}W\right) \Gamma _{n,2}\left( I_{4}^{-1/2}Z_{n}(t,s,Y),W\right) \right) \nonumber \\&\quad +\frac{1}{n}r_{n}(t,s,\Psi _{\delta _{n}})+\frac{1}{n^{3/2}}R_{n}(t,s,\Psi _{\delta _{n}}). \end{aligned}$$
(5.4)

Here \(r_{n}(t,s,\Psi _{\delta _{n}})\) verifies (3.23) with \(\Sigma _{n}(t,s)-I_{4}\). And, as above,

$$\begin{aligned}&{\mathbb {E}}(\Psi _{\delta _{n}}(S_{n}(t,s,G)))={\mathbb {E}}\left( \Psi _{\delta _{n}}\left( \Sigma _{n}^{1/2}(t,s)W\right) \right) \\&\quad = {\mathbb {E}}\left( \Psi _{\delta _{n}}\left( I_{4}^{1/2}W\right) \right) +\frac{1}{n} r_{n}(t,s,\Psi _{\delta _n }). \end{aligned}$$

Our aim now is to estimate \(v_{n}(t,s,Y)-v_{n}(t,s,G)\) (recall that \( v_{n}(t,s,Y)\) is defined in (5.1)). In order to simplify notation we put

$$\begin{aligned} A_{n}(t,Y)= & {} {\mathbb {E}}(\Phi _{\delta _{n}}(S_{n}(t,Y))),\quad A_{n}(t,s,Y)={\mathbb {E}}(\Psi _{\delta _{n}}(S_{n}(t,s,Y))), \\ C_{n}(t)= & {} {\mathbb {E}}(\Phi _{\delta _{n}}(I_{2}^{1/2}W)\Gamma _{n,2}(I_{2}^{-1/2}Z_{n}(t,Y),W)), \\ C_{n}(t,s)= & {} {\mathbb {E}}(\Psi _{\delta _{n}}(I_{4}^{1/2}W)\Gamma _{n,2}(I_{4}^{-1/2}Z_{n}(t,s,Y),W)), \\ \widehat{R}_{n}(t)= & {} \frac{1}{n}r_{n}(t,\Phi _{\delta _{n}})+\frac{1}{ n^{3/2}}R_{n}(t,\Phi _{\delta _{n}}),\\ \widehat{R}_{n}(t,s)= & {} \frac{1}{n} r_{n}(t,s,\Psi _{\delta _{n}})+\frac{1}{n^{3/2}}R_{n}(t,s,\Psi _{\delta _{n}}). \end{aligned}$$

With this notation (5.3) and (5.4) read

$$\begin{aligned} A_{n}(t,Y)= & {} A_{n}(t,G)+\frac{1}{n}C_{n}(t)+\widehat{R}_{n}(t), \\ A_{n}(t,s,Y)= & {} A_{n}(t,s,G)+\frac{1}{n}C_{n}(t,s)+\widehat{R}_{n}(t,s) \end{aligned}$$

and consequently

$$\begin{aligned} v_{n}(t,s,Y)-v_{n}(t,s,G)=\frac{1}{n}\gamma _{n}(t,s)+\overline{R}_{n}(t,s) \end{aligned}$$

with

$$\begin{aligned} \overline{R}_{n}(t,s)=\widehat{R}_{n}(t,s)+\Big (\frac{1}{n}C_{n}(t)+\widehat{R}_{n}(t)\Big )\Big (\frac{1}{n} C_{n}(s)+\widehat{R}_{n}(s)\Big )-\widehat{R}_{n}(t)A_{n}(s,Y)-\widehat{R} _{n}(s)A_{n}(t,Y) \end{aligned}$$

and

$$\begin{aligned} \gamma _{n}(t,s)= & {} C_{n}(t,s)-C_{n}(t)A_{n}(s,Y)-C_{n}(s)A_{n}(t,Y) \\= & {} {\mathbb {E}}(\Psi _{\delta _{n}}(I_{4}^{1/2}W)\Gamma _{n,2}(I_{4}^{-1/2}Z_{n}(t,s,Y),W)) \\&-\,{\mathbb {E}}(\Phi _{\delta _{n}}(I_{2}^{1/2}W))\times {\mathbb {E}}(\Phi _{\delta _{n}}(I_{2}^{1/2}W)\Gamma _{n,2}(I_{2}^{-1/2}Z_{n}(s,Y),W)) \\&-\,{\mathbb {E}}(\Phi _{\delta _{n}}(I_{2}^{1/2}W)\Gamma _{n,2}(I_{2}^{-1/2}Z_{n}(t,Y),W))\times {\mathbb {E}}(\Phi _{\delta _{n}}(I_{2}^{1/2}W)). \end{aligned}$$

Notice that in the above expression of \(\gamma _{n}(t,s)\), W stands for a standard normal random variable which is four-dimensional in the first expectation and two-dimensional in the following two. In order to put everything together we take two independent two-dimensional standard normal random variables \(W^{\prime }\) and \(W^{\prime \prime }\) and we put \( W=(W^{\prime },W^{\prime \prime })\in \mathbb {R}^{4}\), which is itself a standard normal random variable. Then

$$\begin{aligned} \Phi _{\delta _{n}}\left( I_{2}^{1/2}W^{\prime }\right) \Phi _{\delta _{n}}\left( I_{2}^{1/2}W^{\prime \prime }\right) =\Psi _{\delta _{n}}\left( I_{4}^{1/2}W\right) \end{aligned}$$

so we obtain

$$\begin{aligned} \gamma _{n}(t,s)&={\mathbb {E}}\left( \Psi _{\delta _{n}}\left( I_{4}^{1/2}W\right) \left[ \Gamma _{n,2}\left( I_{4}^{-1/2}Z_{n}(t,s,Y),W\right) -\Gamma _{n,2}\left( I_{2}^{-1/2}Z_{n}(t,Y),W^{\prime }\right) \right. \right. \\&\quad \left. \left. -\Gamma _{n,2}\left( I_{2}^{-1/2}Z_{n}(s,Y),W^{\prime \prime }\right) \right] \right) . \end{aligned}$$

We recall the definitions of \(\Gamma _{n,2}^{\prime },\Gamma _{n,2}^{\prime \prime }\) given in (3.11) and we write \(\gamma _{n}(t,s)=\gamma _{n}^{\prime }(t,s)+\gamma _{n}^{\prime \prime }(t,s)\), where \(\gamma ^{\prime }\) involves \(\Gamma ^{\prime }\) and \(\gamma ^{\prime \prime }\) involves \(\Gamma ^{\prime \prime }\) instead of \(\Gamma \). We will analyze them separately.

Step 2 Estimate of \(\gamma ^{\prime \prime }\). Our aim is to prove that

$$\begin{aligned} \frac{1}{n^{2}}\int _{D_{n,\varepsilon }}\gamma _{n}^{\prime \prime }(t,s)dsdt=\int _{0}^{\pi }\int _{0}^{\pi }1_{D_{n,\varepsilon }}(nt,ns)\gamma _{n}^{\prime \prime }(nt,ns)dsdt\rightarrow 0. \end{aligned}$$
(5.5)

The analysis is based on (3.12). There are two kinds of cancellations at work.

First cancellation (mixed multi-indexes) Denote by \(m_{k}(I)\) the set of multi-indexes \(\alpha =(\alpha _{1},\ldots ,\alpha _{k})\) with \( \alpha _{i}\in I\). Recall that \(W=(W^{\prime },W^{\prime \prime })\) and notice that if \(\alpha \in m_{3}(1,2)\) then \(H_{\alpha }(W)=H_{\alpha }(W^{\prime })\). But, if \(\alpha \in m_{3}(3,4),\) then one has \(H_{\alpha }(W)=H_{\alpha }(W^{3},W^{4})=H_{\alpha }((W^{\prime \prime })^{1},(W^{\prime \prime })^{2})\). This means that, in the second case, a “change of variable” \(\alpha =(\alpha _{1},\alpha _{2},\alpha _{3})\mapsto \widehat{\alpha }=(\alpha _{1}-2,\alpha _{2}-2,\alpha _{3}-2)\) is needed: for example \((3,3,4)\mapsto (1,1,2)\) or \((4,4,3)\mapsto (2,2,1)\). Having this in mind we go on and analyze \(\Gamma _{n,2}^{\prime \prime }\) defined in (3.12):

$$\begin{aligned}&\Gamma _{n,2}^{\prime \prime }(I_{4}^{-1/2}Z_{n}(t,s,Y),W)\\&\quad =\frac{1}{72} \sum _{\left| \rho \right| =3}\sum _{\left| \beta \right| =3}c_{n}(\beta ,I_{4}^{-1/2}Z_{n}(t,s,Y))c_{n}(\rho ,I_{4}^{-1/2}Z_{n}(t,s,Y))H_{(\beta ,\rho )}(W), \\&\Gamma _{n,2}^{\prime \prime }(I_{2}^{-1/2}Z_{n}(t,Y),W^{\prime })\\&\quad =\frac{1}{72} \sum _{\left| \rho \right| =3}\sum _{\left| \beta \right| =3}c_{n}(\beta ,I_{2}^{-1/2}Z_{n}(t,Y))c_{n}(\rho ,I_{2}^{-1/2}Z_{n}(t,Y))H_{(\beta ,\rho )}(W^{\prime }), \\&\Gamma _{n,2}^{\prime \prime }(I_{2}^{-1/2}Z_{n}(s,Y),W^{\prime \prime })\\&\quad =\frac{1}{72} \sum _{\left| \rho \right| =3}\sum _{\left| \beta \right| =3}c_{n}(\beta ,I_{2}^{-1/2}Z_{n}(s,Y))c_{n}(\rho ,I_{2}^{-1/2}Z_{n}(s,Y))H_{(\beta ,\rho )}(W^{\prime \prime }), \end{aligned}$$

where \((\beta ,\rho )\) denotes the concatenation. Notice that the multi-indexes in the first line belong to \(m_{3}(1,2,3,4)\) while the multi-indexes in the second and in the third line belong to \( m_{3}(1,2)\). We now look at the sums in the first line. If all the elements of \((\beta ,\rho )\) belong to \(\{1,2\}\) then \(H_{(\beta ,\rho )}(W)=H_{(\beta ,\rho )}(W^{\prime })\) and \(c_{n}(\beta ,I_{4}^{-1/2}Z_{n}(nt,ns,Y))=c_{n}(\beta ,I_{2}^{-1/2}Z_{n}(nt,Y))\), so the corresponding term cancels. In the same way, if all the elements of \((\beta ,\rho )\) belong to \(\{3,4\}\) then \(H_{(\beta ,\rho )}(W)=H_{(\widehat{\beta } ,\widehat{\rho })}(W^{\prime \prime })\) and \(c_{n}(\beta ,I_{4}^{-1/2}Z_{n}(nt,ns,Y))=c_{n}(\widehat{\beta },I_{2}^{-1/2}Z_{n}(ns,Y))\), and the corresponding term cancels as well. We are left with the “mixed multi-indexes”, those for which \((\beta ,\rho )\) contains at least one element from \(\{1,2\}\) and at least one from \(\{3,4\}\).

Second cancellation (even multi-indexes) For each \(i=1,\ldots ,4\) the function \(W_{i}\mapsto \Psi _{\delta _{n}}(I_{4}^{1/2}W)\) is even, so, by a symmetry argument,

$$\begin{aligned} {\mathbb {E}}(\Psi _{\delta _{n}}(I_{4}^{1/2}W)H_{(\rho ,\beta )}(W))=0 \end{aligned}$$

except when all the elements in \((\rho ,\beta )\) appear an even number of times (this means that \(i_{j}((\rho ,\beta ))\) is even for every \(j=1,\ldots ,4\)).

There are three types of multi-indexes which verify both conditions: take \( i\in \{1,2\}\) and \(j,p\in \{3,4\}\) (or the converse).

$$\begin{aligned} \text{ Case } \text{1: }\quad \rho= & {} (i,j,j),\quad \beta =(i,p,p) \end{aligned}$$
(5.6)
$$\begin{aligned} \text{ Case } \text{2: }\quad \rho= & {} (i,i,j),\quad \beta =(j,p,p) \end{aligned}$$
(5.7)
$$\begin{aligned} \text{ Case } \text{3: }\quad \rho= & {} (i,j,p),\quad \beta =(i,j,p) \end{aligned}$$
(5.8)
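
For instance, with \(i=1\), \(j=3\), \(p=4\), Case 1 gives \(\rho =(1,3,3)\), \(\beta =(1,4,4)\), Case 2 gives \(\rho =(1,1,3)\), \(\beta =(3,4,4)\) and Case 3 gives \(\rho =\beta =(1,3,4)\); in each case \((\rho ,\beta )\) is mixed and every index appears an even number of times.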

We treat Case 1 (the other cases are similar). In order to fix ideas we take \(i=1\) and \(j=4\), so that \(\rho =(1,4,4)\) (all the other choices of i, j, p are handled in the same way). We compute

$$\begin{aligned}&{\mathbb {E}}\left( (C_{n}(k,nt)Y_{k})^{1}((C_{n}(k,ns)Y_{k})^{2})^{2}\right) \\&\quad =\sum _{l_{1},l_{2},l_{3}=1}^{2}C_{n}^{1,l_{1}} (k,nt)C_{n}^{2,l_{2}}(k,ns) C_{n}^{2,l_{3}}(k,ns){\mathbb {E}}\left( \prod _{i=1}^{3}Y_{k}^{l_{i}}\right) \end{aligned}$$

Since in the Gaussian case we have \({\mathbb {E}}(\prod _{i=1}^{3}G_{k}^{l_{i}})=0,\) we conclude that

$$\begin{aligned} \Delta _{\rho }(Z_{n,k}(nt,ns,Y))= \sum _{l_{1},l_{2},l_{3}=1}^{2}C_{n}^{1,l_{1}} (k,nt)C_{n}^{2,l_{2}}(k,ns)C_{n}^{2,l_{3}}(k,ns) {\mathbb {E}}\left( \prod _{i=1}^{3}Y_{k}^{l_{i}}\right) \end{aligned}$$

and then

$$\begin{aligned}&c_{n}\left( \rho ,I_{4}^{-1/2}Z_{n}(nt,ns,Y)\right) =c_{n}^{\prime }\left( \rho ,I_{4}^{-1/2}Z_{n}(nt,ns,Y)\right) \\&\quad +c_{n}^{\prime \prime }\left( \rho ,I_{4}^{-1/2}Z_{n}(nt,ns,Y)\right) \end{aligned}$$

with

$$\begin{aligned} c_{n}^{\prime }\left( \rho ,I_{4}^{-1/2}Z_{n}(nt,ns,Y)\right)&= \sum _{l_{1},l_{2},l_{3}=1}^{2}y_{\infty }(l_{1},l_{2},l_{3})\\&\quad \times \frac{1 }{n} \sum _{k=1}^{n}C_{n}^{1,l_{1}}(k,nt)C_{n}^{2,l_{2}}(k,ns)C_{n}^{2,l_{3}}(k,ns)\\ c_{n}^{\prime \prime }(\rho ,I_{4}^{-1/2}Z_{n}(nt,ns,Y))&= \sum _{l_{1},l_{2},l_{3}=1}^{2}\frac{1}{n} \sum _{k=1}^{n}C_{n}^{1,l_{1}}(k,nt)C_{n}^{2,l_{2}}(k,ns) C_{n}^{2,l_{3}}(k,ns)\\&\quad \times \left( {\mathbb {E}}\left( \prod _{i=1}^{3}Y_{k}^{l_{i}}\right) -y_{\infty }\left( l_{1},l_{2},l_{3}\right) \right) . \end{aligned}$$

Since \(\left| C_{n}^{i,j}(k,u)\right| \le 1\) for every \(i,j\in \{1,2\}\) and \(u>0,\) we have

$$\begin{aligned} \left| c_{n}^{\prime \prime }(\rho ,I_{4}^{-1/2}Z_{n}(nt,ns,Y))\right| \le \sum _{l_{1},l_{2},l_{3}=1}^{2} \frac{1}{n}\sum _{k=1}^{n}\left| {\mathbb {E}}\left( \prod _{i=1}^{3}Y_{k}^{l_{i}}\right) -y_{\infty }(l_{1},l_{2},l_{3})\right| \rightarrow 0. \end{aligned}$$

And using (A.5) we get \(c_{n}^{\prime }(\rho ,I_{4}^{-1/2}Z_{n}(nt,ns,Y))\rightarrow 0\). This is true for t and s such that \(\frac{t}{\pi },\frac{s}{\pi },\frac{t+s}{\pi }\) and \(\frac{t-s}{ \pi }\) are irrational, hence for almost every (t, s) with respect to dtds. Then, using Lebesgue’s dominated convergence theorem (notice that the coefficients \(c_{n},n\in \mathbb {N},\) are uniformly bounded) we get

$$\begin{aligned}&\int _{0}^{\pi }\int _{0}^{\pi }1_{D_{n,\varepsilon }}(nt,ns)c_{n}\left( \rho ,I_{4}^{-1/2}Z_{n}(nt,ns,Y)\right) c_{n}\left( \beta ,I_{4}^{-1/2}Z_{n}(nt,ns,Y)\right) dtds\rightarrow 0. \end{aligned}$$

This completes the proof of (5.5).

Step 3 We compute now

$$\begin{aligned} \lim _{n}\frac{1}{n^{2}}\int _{D_{n,\varepsilon }}\gamma _{n}^{\prime }(t,s)dsdt= & {} \frac{1}{2}\lim _{n}\frac{1}{n^{2}}\int _{0}^{n\pi }\int _{0}^{n\pi }\gamma _{n}^{\prime }(t,s)dsdt+O(\varepsilon )\\= & {} \frac{1}{2}\lim _{n}\int _{0}^{\pi }\int _{0}^{\pi }\gamma _{n}^{\prime }(nt,ns)dsdt+O(\varepsilon ), \end{aligned}$$

where \(O(\varepsilon )\) is uniform in n. We recall (3.9). As in the previous discussion we notice that two kinds of cancellations occur: if all the components of \(\alpha \) belong to \( \{1,2\}\) or to \(\{3,4\}\) then the corresponding term cancels, and for symmetry reasons each component of \(\alpha \) must appear an even number of times. So the only multi-indexes which give a nonzero contribution are (up to permutations) \(\alpha =(i,i,j,j)\) with \(i\in \{1,2\}\) and \(j\in \{3,4\}\). More precisely, for every fixed \((i,j)\in \{1,2\}\times \{3,4\}\) the following multi-indexes bring a nonzero contribution: (iijj), (ijij), (ijji), (jjii), (jiji), (jiij). Moreover, all the forthcoming computations are independent of the chosen permutation, so we will simply assume that the multi-index is (iijj) and multiply the final result by a factor 6. Indeed, we observe that

$$\begin{aligned} \gamma _{n}^{\prime }(nt,ns)=\frac{1}{24}\sum _{\alpha }{\mathbb {E}}\left( \Psi _{\delta _{n}}\left( I_{4}^{1/2}W\right) H_{\alpha }(W)\right) c_{n}\left( \alpha ,I_{4}^{-1/2}Z_{n}(nt,ns,Y)\right) \end{aligned}$$

with the sum over the multi-indexes of the form (up to permutations) \(\alpha =(i,i,j,j)\) with \( i\in \{1,2\}\) and \(j\in \{3,4\}\). We fix such a multi-index \(\alpha =(i,i,j,j)\) and we denote (with \(j^{\prime }=j-2\))

$$\begin{aligned} p(\alpha )= & {} 3^{i+j-4}=3^{i+j^{\prime }-2},\quad \\ U(\alpha )= & {} \frac{p(\alpha )y_{*}}{4(1+2(i+j-4))}=\frac{p(\alpha )y_{*}}{4(1+2(i+j^{\prime }-2))}. \end{aligned}$$
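
For instance, \(U((1,1,3,3))=\frac{y_{*}}{4}\), \(U((1,1,4,4))=U((2,2,3,3))=\frac{3y_{*}}{12}=\frac{y_{*}}{4}\) and \(U((2,2,4,4))=\frac{9y_{*}}{20}\).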

Our first aim is to prove that, if \(\frac{t}{\pi },\frac{s}{\pi },\frac{t+s}{ \pi },\frac{t-s}{\pi }\) are irrational, then

$$\begin{aligned} \lim _{n}c_{n}\left( \alpha ,I_{4}^{-1/2}Z_{n}(nt,ns,Y)\right) =U(\alpha ). \end{aligned}$$
(5.9)

We compute

$$\begin{aligned}&{\mathbb {E}}\left( ((C_{n}(k,nt)Y_{k})^{i})^{2}((C_{n}(k,ns)Y_{k})^{j-2})^{2}\right) \\&\quad = \sum _{l_{1},l_{2},l_{3},l_{4}=1}^{2}C_{n}^{i,l_{1}} (k,nt)C_{n}^{i,l_{2}}(k,nt)C_{n}^{j-2,l_{3}}(k,ns)C_{n}^{j-2,l_{4}} (k,ns){\mathbb {E}}\left( \prod _{i=1}^{4}Y_{k}^{l_{i}}\right) . \end{aligned}$$

Then

$$\begin{aligned} \Delta _{\alpha }\left( I_{4}^{-1/2}Z_{n,k}(nt,ns,Y)\right)= & {} p(\alpha )\sum _{l_{1},l_{2},l_{3},l_{4}=1}^{2}C_{n}^{i,l_{1}} (k,nt)C_{n}^{i,l_{2}}(k,nt)C_{n}^{j-2,l_{3}}(k,ns)\\&\times C_{n}^{j-2,l_{4}} (k,ns)\left( {\mathbb {E}}\left( \prod _{i=1}^{4}Y_{k}^{l_{i}}\right) -{\mathbb {E}}\left( \prod _{i=1}^{4}G_{k}^{l_{i}}\right) \right) \end{aligned}$$

and finally

$$\begin{aligned} c_{n}(\alpha ,I_{4}^{-1/2}Z_{n}(nt,ns,Y))= & {} \frac{1}{n}\sum _{k=1}^{n}\Delta _{\alpha }(I_{4}^{-1/2}Z_{n,k}(nt,ns,Y)) \\= & {} p(\alpha )\sum _{l_{1},l_{2},l_{3},l_{4}=1}^{2}\left( c_{n}^{\prime }(\alpha ,l_{1},l_{2},l_{3},l_{4})+c_{n}^{\prime \prime }(\alpha ,l_{1},l_{2},l_{3},l_{4})\right) \end{aligned}$$

with

$$\begin{aligned} c_{n}^{\prime }(\alpha ,l_{1},l_{2},l_{3},l_{4}) =&\frac{1}{n} \sum _{k=1}^{n}C_{n}^{i,l_{1}}(k,nt)C_{n}^{i,l_{2}}(k,nt)C_{n}^{j-2,l_{3}}(k,ns)C_{n}^{j-2,l_{4}}(k,ns)\\&\times \left( y_{\infty }(l_{1},l_{2},l_{3},l_{4}) -{\mathbb {E}}\left( \prod _{i=1}^{4}B^{l_{i}}\right) \right) \\ c_{n}^{\prime \prime }(\alpha ,l_{1},l_{2},l_{3},l_{4}) =&\frac{1}{n} \sum _{k=1}^{n}C_{n}^{i,l_{1}}(k,nt)C_{n}^{i,l_{2}}(k,nt)C_{n}^{j-2,l_{3}}(k,ns)C_{n}^{j-2,l_{4}}(k,ns)\\&\times \left( {\mathbb {E}}\left( \prod _{i=1}^{4}Y_{k}^{l_{i}}\right) -y_{\infty }(l_{1},l_{2},l_{3},l_{4})\right) . \end{aligned}$$

Here \(B=(B^{1},B^{2})\) is a standard Gaussian random variable. Since \( {\mathbb {E}}(\prod _{i=1}^{4}Y_{k}^{l_{i}})\rightarrow y_{\infty }(l_{1},l_{2},l_{3},l_{4})\) we get \(c_{n}^{\prime \prime }(\alpha ,l_{1},l_{2},l_{3},l_{4})\rightarrow 0\). We analyze now \(c_{n}^{\prime }(\alpha ,l_{1},l_{2},l_{3},l_{4})\). By (A.4), if \(l_{1}\ne l_{2}\) or if \(l_{3}\ne l_{4}\) this term converges to zero. So we have to consider only

$$\begin{aligned} c_{n}^{\prime }(\alpha ,l,l,l^{\prime },l^{\prime })=\frac{1}{n} \sum _{k=1}^{n}C_{n}^{i,l}(k,nt)^{2}C_{n}^{j-2,l^{\prime }}(k,ns)^{2}\left( y_{\infty }(l,l,l^{\prime },l^{\prime })-{\mathbb {E}}\left( (B^{l})^{2}(B^{l^{\prime }})^{2}\right) \right) . \end{aligned}$$
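
Recall that for a two-dimensional standard Gaussian \(B=(B^{1},B^{2})\) one has \({\mathbb {E}}((B^{l})^{2}(B^{l^{\prime }})^{2})=1\) if \(l\ne l^{\prime }\) (by independence) and \({\mathbb {E}}((B^{l})^{4})=3\); this is the origin of the constants 1 and 3 appearing in the next two limits.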

Take first \(l=1\) and \(l^{\prime }=2\). Then, using (A.3), we have

$$\begin{aligned} c_{n}^{\prime }(\alpha ,1,1,2,2)= & {} \frac{1}{n} \sum _{k=1}^{n}C_{n}^{i,l}(k,nt)^{2}C_{n}^{j-2,l^{\prime }}(k,ns)^{2}(y_{\infty }(1,1,2,2)-1) \\\rightarrow & {} \frac{1}{4(1+2(i+j-4))}(y_{\infty }(1,1,2,2)-1). \end{aligned}$$

And if \(l=l^{\prime }=1\) (or if \(l=l^{\prime }=2)\) we have

$$\begin{aligned} c_{n}^{\prime }(\alpha ,1,1,1,1)= & {} \frac{1}{n} \sum _{k=1}^{n}C_{n}^{i,l}(k,nt)^{2}C_{n}^{j-2,l^{\prime }}(k,ns)^{2}(y_{\infty }(1,1,1,1)-3) \\\rightarrow & {} \frac{1}{4(1+2(i+j-4))}(y_{\infty }(1,1,1,1)-3). \end{aligned}$$

So (5.9) is proved and, as an immediate consequence we obtain

$$\begin{aligned} \lim _{n}\int _{0}^{\pi }\int _{0}^{\pi }c_{n}\left( \alpha ,I_{4}^{-1/2}Z_{n}(nt,ns,Y)\right) dsdt=\pi ^{2}U(\alpha ). \end{aligned}$$
(5.10)

We compute now

$$\begin{aligned} \lim _{n}{\mathbb {E}}\left( \Psi _{\delta _{n}}\left( I_{4}^{1/2}W\right) H_{\alpha }(W)\right) . \end{aligned}$$

Notice that if \(i\in \{1,2\}\) and \(j\in \{3,4\}\) then (recall that \(h_{2}(x)=x^{2}-1\) is the Hermite polynomial of order 2 on \(\mathbb {R}\))

$$\begin{aligned} H_{(i,i,j,j)}(W)=h_{2}(W_{i}^{\prime })h_{2}(W_{j-2}^{\prime \prime }) \end{aligned}$$

so that

$$\begin{aligned} {\mathbb {E}}\left( \Psi _{\delta _{n}}\left( I_{4}^{1/2}W\right) H_{\alpha }(W)\right)= & {} {\mathbb {E}}\left( \Phi _{\delta _{n}}\left( I_{2}^{1/2}W^{\prime }\right) h_{2}\left( W_{i}^{\prime }\right) \right) \times {\mathbb {E}}\left( \Phi _{\delta _{n}}\left( I_{2}^{1/2}W^{\prime \prime }\right) h_{2}\left( W_{j-2}^{\prime \prime }\right) \right) \\\rightarrow & {} \frac{1}{3}{\mathbb {E}}(\left| B_{2}\right| \delta _{0}(B_{1})h_{2}(B_{i})) \\&\quad \times {\mathbb {E}}(\left| B_{2}\right| \delta _{0}(B_{1})h_{2}(B_{j-2})) \end{aligned}$$

where \(B=(B_{1},B_{2})\) is standard normal. If \(i=1\) then

$$\begin{aligned} {\mathbb {E}}(\left| B_{2}\right| \delta _{0}(B_{1})h_{2}(B_{1}))={\mathbb {E}}(\left| B_{2}\right| ){\mathbb {E}}(\delta _{0}(B_{1})(B_{1}^{2}-1))=-\frac{2}{\sqrt{2\pi }} \times \frac{1}{\sqrt{2\pi }}=-\frac{1}{\pi } \end{aligned}$$

and if \(i=2\) then

$$\begin{aligned} {\mathbb {E}}(\left| B_{2}\right| \delta _{0}(B_{1})h_{2}(B_{2}))={\mathbb {E}}(\left| B_{2}\right| (B_{2}^{2}-1)){\mathbb {E}}(\delta _{0}(B_{1}))=\frac{1}{\pi }. \end{aligned}$$
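
Both evaluations rely only on the elementary Gaussian computations \({\mathbb {E}}(\left| B_{2}\right| )=\frac{2}{\sqrt{2\pi }}\) and \({\mathbb {E}}(\left| B_{2}\right| ^{3})=\frac{4}{\sqrt{2\pi }}\), hence \({\mathbb {E}}(\left| B_{2}\right| (B_{2}^{2}-1))=\frac{2}{\sqrt{2\pi }}\), together with \({\mathbb {E}}(\delta _{0}(B_{1}))=\frac{1}{\sqrt{2\pi }}\) and \({\mathbb {E}}(\delta _{0}(B_{1})(B_{1}^{2}-1))=-\frac{1}{\sqrt{2\pi }}\), where \(\frac{1}{\sqrt{2\pi }}\) is the value of the standard Gaussian density at the origin.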

So, discussing according to the possible values of i and j, we may define

$$\begin{aligned} \rho _{i,j}= \frac{1}{\pi ^2} (-1)^{i+j} \end{aligned}$$

and we finally obtain, for \(\alpha =(i,i,j,j)\)

$$\begin{aligned} \lim _{n}{\mathbb {E}}\left( \Psi _{\delta _{n}}\left( I_{4}^{1/2}W\right) H_{\alpha }(W)\right) =\frac{1}{3}\rho _{i,j} \end{aligned}$$

and

$$\begin{aligned} \lim _{n}\frac{1}{n^{2}}\int _{D_{n,\varepsilon }}\gamma _{n}^{\prime }(t,s)dsdt= & {} 6\times \frac{1}{2}\times \frac{1}{24}\sum _{i,j=1}^{2}\frac{1}{3}\rho _{i,j}\times \pi ^{2}U((i,i,j,j)) +O(\varepsilon )\\= & {} \frac{1}{216}\sum _{i,j=1}^{2}\frac{(-3)^{i+j}}{4(1+2(i+j-2))}\times y_*+O(\varepsilon )\\= & {} \frac{1}{120}\times y_*+O(\varepsilon ). \end{aligned}$$
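
For the reader's convenience, the last arithmetic identity can be checked term by term:

$$\begin{aligned} \sum _{i,j=1}^{2}\frac{(-3)^{i+j}}{4(1+2(i+j-2))}=\frac{9}{4}-\frac{27}{12}-\frac{27}{12}+\frac{81}{20}=\frac{9}{5},\qquad \frac{1}{216}\times \frac{9}{5}=\frac{1}{120}. \end{aligned}$$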

Step 4 We estimate \(r_{n}(t,s)\) and \(r_{n}(t)\) (see (3.38) and (3.37) respectively). Since \(L_{0}(\Phi _{\delta _{n}})=1\) we have \(\left| r_{n}(nt,ns)\right| \le \left\| \Sigma _{n}(nt,ns)-I_{4}\right\| \). Let us compute \(\Sigma _{n}^{i,j}(nt,ns)\). By direct computations one has \(\Sigma _{n}^{1,1}(nt,ns)=\Sigma _{n}^{3,3}(nt,ns)=1\) and \(\Sigma _{n}^{1,2}(nt,ns)=\Sigma _{n}^{3,4}(nt,ns)=0\). Moreover

$$\begin{aligned} \Sigma _{n}^{2,2}(nt,ns)=\frac{1}{n}\sum _{k=1}^{n}{\mathbb {E}}\left( (Z_{n,k}^{2}(nt,Y))^{2}\right) =\frac{1}{n}\sum _{k=1}^{n}\frac{k^{2}}{n^{2}}\rightarrow \int _{0}^{1}x^{2}dx=\frac{1}{3}=I_{4}^{2,2}. \end{aligned}$$
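
Indeed, \(\frac{1}{n}\sum _{k=1}^{n}\frac{k^{2}}{n^{2}}=\frac{(n+1)(2n+1)}{6n^{2}}\rightarrow \frac{1}{3}\).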

The same is true for \(\Sigma _{n}^{4,4}(nt,ns)\). We now look at \(\Sigma _{n}^{i,j}(nt,ns)\) with \(i\in \{1,2\}\) and \(j\in \{3,4\}\). Say for example that \(i=1\) and \(j=4\). Then we compute

$$\begin{aligned} {\mathbb {E}}(Z_{n,k}^{1}(nt,Y)Z_{n,k}^{2}(ns,Y))= & {} \frac{k}{n}{\mathbb {E}}\left( \left( \cos (kt)Y_{k}^{1}+\sin (kt)Y_{k}^{2}\right) \left( -\sin (ks)Y_{k}^{1}+\cos (ks)Y_{k}^{2}\right) \right) \\= & {} \frac{k}{n}(\cos (ks)\sin (kt)-\cos (kt)\sin (ks))=\frac{k}{n}\sin (k(t-s)). \end{aligned}$$

Then, by using the ergodic lemma, if \(\frac{t-s}{\pi }\) is irrational we get

$$\begin{aligned} \Sigma _{n}^{1,4}(nt,ns)=\frac{1}{n}\sum _{k=1}^{n}\frac{k}{n}\sin (k(t-s))\rightarrow \frac{1}{4\pi }\int _{0}^{2\pi }\sin (u)du=0. \end{aligned}$$
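
Heuristically, the prefactor \(\frac{1}{4\pi }\) comes from the weight \(\frac{k}{n}\), whose average is \(\int _{0}^{1}x\,dx=\frac{1}{2}\), combined with the equidistribution of \(k(t-s)\) modulo \(2\pi \), which replaces \(\sin (k(t-s))\) by its mean value \(\frac{1}{2\pi }\int _{0}^{2\pi }\sin (u)du=0\).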

The same result is obtained in the other cases. We conclude that \(\lim _{n}r_{n}(nt,ns)=0\) for almost every (t, s) with respect to dtds. Since \(\left| r_{n}(nt,ns)\right| \le 1,\) we may use Lebesgue’s convergence theorem and we obtain

$$\begin{aligned} \frac{1}{n^{2}}\int _{D_{n,\varepsilon }}\left| r_{n}(t,s)\right| dsdt=\int _{[0,\pi ]^{2}}1_{D_{n,\varepsilon }}(nt,ns)\left| r_{n}(nt,ns)\right| dsdt\rightarrow 0. \end{aligned}$$

For \(r_{n}(t)\) the same conclusion is (trivially) true.

Step 5 Estimate of \(R_{n}(t,s,\Psi _{\delta _{n}})\). By (3.38) we have

$$\begin{aligned} \left| R_{n}(t,s,\Psi _{\delta _{n}})\right| \le C(1+n^{3/2}\times e^{-cn}) \end{aligned}$$

with C a constant which depends on \(r,\eta \) from (2.1), on \(M_{p}(Y)\) from (2.2) and on the smallest eigenvalue \(\varepsilon _{*}\) defined in (C.2) for the covariance matrix \(\Sigma _{n}(t,s)\). We have proved in (C.3) that this smallest eigenvalue is bounded from below uniformly with respect to n, so we conclude that the constant C in the above inequality does not depend on n. Consequently

$$\begin{aligned} \sup _{n}\sup _{(t,s)\in D_{n,\varepsilon }}\left| R_{n}(t,s,\Psi _{\delta _{n}})\right| \le C<\infty \end{aligned}$$

and then

$$\begin{aligned} \frac{1}{n^{2}}\int _{D_{n,\varepsilon }}\frac{1}{\sqrt{n}}\left| R_{n}(t,s,\Psi _{\delta _{n}})\right| dsdt\rightarrow 0. \end{aligned}$$

Similar estimates hold for \(R_{n}(t,\Phi _{\delta _{n}})\). Since W is standard normal, direct computations show that

$$\begin{aligned} \left| {\mathbb {E}}(\Phi _{\delta _{n}}(I_{2}^{1/2}W)\Gamma _{n,2}(I_{2}^{-1/2}Z_{n}(t,Y),W))\right| \le C \end{aligned}$$

and so

$$\begin{aligned} \frac{1}{n^{3}}\int _{D_{n,\varepsilon }}&\left| {\mathbb {E}}(\Phi _{\delta _{n}}(I_{2}^{1/2}W)\Gamma _{n,2}(I_{2}^{-1/2}Z_{n}(t,Y),W))\right. \\&\left. \times \,{\mathbb {E}}(\Phi _{\delta _{n}}(I_{2}^{1/2}W)\Gamma _{n,2}(I_{2}^{-1/2}Z_{n}(s,Y),W))\right| dsdt\rightarrow 0 \end{aligned}$$

So we have proved that

$$\begin{aligned} \frac{1}{n^{2}}\int _{D_{n,\varepsilon }}\left| \overline{R} _{n}(t,s)\right| dsdt\rightarrow 0 \end{aligned}$$

and the whole proof is completed. \(\square \)