1 Introduction and Main Results

Consider an ensemble of random symmetric \(n\times n\) matrices with entries of the form

$$\begin{aligned} \mathcal {M}_{ij}=(u_{ij}/b)^{1/2}\tilde{w}_{ij},\quad u_{ij}=u(|i-j|/b) \end{aligned}$$
(1.1)

where \(\{\tilde{w}_{ij}\}_{1\le i<j\le n}\) are i.i.d. (up to the symmetry \(\tilde{w}_{ij}=\tilde{w}_{ji}\)) random variables, satisfying the moment conditions

$$\begin{aligned} E\{\tilde{w}_{ij}\}=0,\quad E\{|\tilde{w}_{ij}|^2\}=1,\quad E\{|\tilde{w}_{ij}|^4\}=3+\kappa _4, \quad E\{|\tilde{w}_{ij}|^{4+\varepsilon }\}\le C<\infty , \end{aligned}$$
(1.2)

diagonal entries \(\{\tilde{w}_{ii}\}_{1\le i\le n}\) are also i.i.d., independent of the off-diagonal entries,

$$\begin{aligned} E\{\tilde{w}_{ii}\}=0,\quad E\{\tilde{w}_{ii}^2\}=w_2, \quad E\{|\tilde{w}_{ii}|^{4}\}\le C<\infty , \end{aligned}$$
(1.3)

and \(u(x)\) is a piecewise continuous function (with a finite number of jumps), continuous at \(x=0\), with compact support, satisfying the conditions

$$\begin{aligned} u(x)=u(-x),\quad 0\le u(x)\le C,\quad \int u(x)dx=1,\quad \mathrm {supp\,}u\subset [-C^*,C^*]. \end{aligned}$$
(1.4)

It is easy to see that the entries of \(\mathcal {M}\) are nonzero only inside the band \(|i-j|\le C^*b\). Hence for fixed b we have a matrix with a finite number of diagonals, while if \(b\sim n\), we obtain a kind of Wigner matrix, with all entries having variances of the same order (see [20]). The model is now widely discussed in the mathematical literature, since according to the non-rigorous conjecture of [6] the local eigenvalue statistics are expected to exhibit a kind of phase transition: for \(b\ll n^{1/2}\) the statistics are of Poisson type, while for \(b\gg n^{1/2}\) they are of the same type as for Wigner matrices. To date this conjecture has not been proven rigorously, and the problem remains one of the most challenging in random matrix theory (see, e.g., [3, 4, 18, 19] and references therein).
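As a purely numerical illustration (not part of the paper), here is a minimal sketch of the ensemble (1.1), with u taken to be the indicator of \([-1/2,1/2]\) (so that (1.4) holds with \(C^*=1/2\)) and Gaussian entries, which satisfy (1.2)–(1.3) with \(\kappa _4=0\), \(w_2=1\):

```python
import numpy as np

def band_matrix(n, b, rng):
    """Sample a symmetric band matrix M_ij = (u_ij/b)^{1/2} w_ij with
    u = indicator of [-1/2, 1/2] and standard Gaussian w_ij."""
    w = rng.standard_normal((n, n))
    w = np.triu(w) + np.triu(w, 1).T          # symmetrize: w_ij = w_ji
    i = np.arange(n)
    u = (np.abs(i[:, None] - i[None, :]) / b <= 0.5).astype(float)
    return np.sqrt(u / b) * w

rng = np.random.default_rng(0)
M = band_matrix(400, 40, rng)
assert np.allclose(M, M.T)
# entries vanish outside the band |i - j| <= b/2
assert M[0, 300] == 0.0
```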

It was proved many years ago (see [10]) that in the limit

$$\begin{aligned} b\rightarrow \infty ,\quad b/n\rightarrow 0,\quad \hbox {as} \quad n\rightarrow \infty , \end{aligned}$$
(1.5)

the normalized eigenvalue counting measure converges weakly to the Wigner semicircle law, which has the density

$$\begin{aligned} \rho _{sc}(\lambda )=\frac{1}{2\pi }\sqrt{4-\lambda ^2}\mathrm {1}_{[-2,2]}. \end{aligned}$$
(1.6)

This means that if we denote by \(\{\lambda _i\}_{i=1}^n\) the eigenvalues of \(\mathcal {M}\), choose any bounded integrable test function \(\varphi \), and consider the linear eigenvalue statistics of the form

$$\begin{aligned} \mathcal {N}_n[\varphi ,\mathcal {M}]=\sum _{j=1}^n\varphi (\lambda _j),\quad \mathcal {N}_n^\circ [\varphi ,\mathcal {M}]= \mathcal {N}_n[\varphi ,\mathcal {M}]-E\{\mathcal {N}_n[\varphi ,\mathcal {M}]\}, \end{aligned}$$
(1.7)

then in the limit (1.5) we have

$$\begin{aligned} E\{n^{-1}\mathcal {N}_n[\varphi ,\mathcal {M}]\}\rightarrow \int \varphi (\lambda )\rho _{sc}(\lambda )d\lambda , \quad \mathrm {Var}\{\mathcal {N}_n[\varphi ,\mathcal {M}]\}\rightarrow 0. \end{aligned}$$

In particular, for \(\varphi (\lambda )=(\lambda -z)^{-1}\)

$$\begin{aligned}&n^{-1}\mathcal {N}_n[\varphi ,\mathcal {M}]=n^{-1}\mathrm {Tr\,}(\mathcal {M}-z)^{-1}\rightarrow g(z),\nonumber \\&g(z)=\frac{1}{2}\big (-z+\sqrt{z^2-4}\big ). \end{aligned}$$
(1.8)
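As a purely numerical illustration of (1.6) and (1.8) (not part of the paper), one can check that for a moderate band width the normalized trace of the resolvent is already close to \(g(z)\); here u is the indicator of \([-1/2,1/2]\) and the entries are Gaussian:

```python
import numpy as np

def g(z):
    # Stieltjes transform of the semicircle law, (1.8); the principal branch
    # of the square root gives Im g(z) > 0 for the z used below
    return 0.5 * (-z + np.sqrt(z * z - 4))

rng = np.random.default_rng(1)
n, b = 1000, 120
w = rng.standard_normal((n, n)); w = np.triu(w) + np.triu(w, 1).T
i = np.arange(n)
u = (np.abs(i[:, None] - i[None, :]) / b <= 0.5).astype(float)
M = np.sqrt(u / b) * w

z = 1.0 + 1.0j
emp = np.trace(np.linalg.inv(M - z * np.eye(n))) / n
print(abs(emp - g(z)))   # small: O(1/b) corrections plus edge effects
```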

Notice that below we will almost always omit the argument \(\mathcal {M}\) of \(\mathcal {N}_n[\varphi ,\mathcal {M}]\) and use it only in the proof of Lemma 1, where we compare the linear eigenvalue statistics of \(\mathcal {M}\) and of the truncated matrix M.

The next natural question concerns the behavior of the fluctuations \(\mathcal {N}_n^\circ [\varphi ]\) in the same limit, in particular, the behavior of their variance. This question was partially solved in the paper [9], where the main term of the covariance of the traces of two resolvents was found in the case of Gaussian \(\tilde{w}_{ij}\) and under the additional restriction \(b=n^\theta \), \(1/3<\theta <1\). The next step was taken in the papers [8, 11], where the Central Limit Theorem (CLT) for the random variable \(\sqrt{b/n}\mathcal {N}_n^\circ [\varphi ]\) was proved for sufficiently smooth test functions, but again under the technical condition \(n\gg b\gg n^{1/2}\).

The main result of the present paper is the proof of CLT for the linear eigenvalue statistics (1.7) of the band matrices under the limiting transition (1.5) without any additional restriction on the growth of b.

We consider the test functions from the space \(\mathcal {H}_s\), possessing the norm

$$\begin{aligned} ||\varphi ||_s^2=\int (1+2|k|)^{2s}|\widehat{\varphi }(k)|^2dk,\quad s>2,\quad \widehat{\varphi }(k)=\int e^{ikx}\varphi (x)dx. \end{aligned}$$
(1.9)

Theorem 1

Consider an ensemble of random real symmetric band matrices (1.1)–(1.4) and any test function with finite norm (1.9) for some \(s>2\). Then the sequence of random variables \(\sqrt{b/n}\mathcal {N}_n^\circ [\varphi ]\), with \(\mathcal {N}_n^\circ [\varphi ]\) of (1.7), converges in distribution in the limit (1.5) to a normal random variable with zero mean and variance

$$\begin{aligned}&V[\varphi ]\nonumber \\&\quad =\frac{1}{\pi ^2}\int _{0}^\pi dxdy\varphi (2\cos x)\varphi (2\cos y) \int \frac{\partial ^2}{\partial x\partial y}\log \bigg |\frac{1-\hat{u}(k)e^{i(x+y)}}{1-\hat{u}(k)e^{i(x-y)}} \bigg |dk\nonumber \\&\qquad +\frac{(u,u)\kappa _{4}}{\pi ^{2}}\left( \int _{0}^\pi \varphi (2\cos x )\cos 2x dx \right) ^{2} +\frac{u(0)(w_2-2)}{2\pi ^{2}}\left( \int _{0}^\pi \varphi (2\cos x )\cos x dx \right) ^{2}, \end{aligned}$$
(1.10)

where \((u,u)=\int u^2(x)dx\) and \(\hat{u}(k)\) is the Fourier transform of the function u defined as in (1.9).

Remark

Inspecting the proof of Theorem 1, it is easy to see that it can be adapted to the hermitian case, i.e., the model (1.1) with complex-valued independent (up to the symmetry conditions) entries \(\tilde{w}_{ij}\) satisfying the first, second, and fourth moment relations of (1.2) and (1.3). If we assume in addition that the real and imaginary parts of the entries are also i.i.d. and denote \(\kappa ^{(1)}_4=E\{(\mathfrak {R}w_{12})^4\} -3E^2\{(\mathfrak {R}w_{12})^2\}\), then the variance will have a form similar to (1.10), with the first summand divided by 2, with \(\kappa _{4}\) in the second summand replaced by \(2\kappa ^{(1)}_4\), and with \(w_2-2\) in the third summand replaced by \(w_2-1\).

To prove the CLT for band matrices, we use the CLT for martingales (see [2, Theorem 35.12]).

Theorem 2

Let \( X_{k}=E_{<k}\{Y-E_{k}Y\}\) be a martingale difference array corresponding to a real valued function \(Y(V_1,\dots ,V_n)\) depending on a family of independent random vectors \(V_1,\dots ,V_n\), and set \(S_n=\sum _{k=1}^nX_k\). Here and below \(E_{k}\) means the averaging with respect to \(V_k\) and \(E_{<k}=E_{1}\dots E_{k-1}\). Set \(\sigma _n=\sum _{k=1}^nE\{X_k^2\}\) and let \(\sigma _n=O(1)\). Assume that

$$\begin{aligned}&(1)\quad \sum E\{X_k^4\}\le \varepsilon _n,\qquad (2)\quad \mathrm {Var}\Big \{\sum _{k=1}^nX_k^2\Big \}\le \tilde{\varepsilon }_n. \end{aligned}$$
(1.11)

Then

$$\begin{aligned} |E\{e^{itS_n}\}-e^{-t^2\sigma _n/2}|\le C'(t)\big (\varepsilon _n^{1/2}+\tilde{\varepsilon }_n^{1/2}\big ). \end{aligned}$$
(1.12)

Remark

Here we have replaced the more general condition \(\sum E\{X_k^21_{|X_k|>\delta }\}\rightarrow 0\) used in [2] by condition (1), which is easier to check for random matrix models.
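As a toy numerical illustration of the martingale-difference structure in Theorem 2 (not from the paper): for a function Y of independent variables \(V_1,\dots ,V_n\), the differences \(X_k=E_{<k}\{Y-E_kY\}\) telescope, so that \(\sum _kX_k=Y-E\{Y\}\). Below the conditional expectations are computed exactly by enumeration for \(V_k\) uniform on \(\{-1,1\}\); the function Y is an arbitrary choice for illustration.

```python
import itertools, math

n = 4
vals = [-1.0, 1.0]                      # each V_k uniform on {-1, 1}
def Y(v):                               # an arbitrary function of (V_1,...,V_n)
    return v[0]*v[1] + v[1]*v[2]*v[3] + v[0] + 0.5*v[3]

def E_subset(f, v, ks):
    """Average f over the variables with indices in ks, others fixed at v."""
    total = 0.0
    for combo in itertools.product(vals, repeat=len(ks)):
        w = list(v)
        for idx, c in zip(ks, combo):
            w[idx] = c
        total += f(w)
    return total / len(vals) ** len(ks)

v = [1.0, -1.0, 1.0, -1.0]              # one realization
EY = E_subset(Y, v, list(range(n)))     # full expectation E{Y}
# X_k = E_{<k}{Y - E_k Y}: average over V_1..V_{k-1}, minus the same with V_k averaged too
S = 0.0
for k in range(n):
    X_k = E_subset(Y, v, list(range(k))) - E_subset(Y, v, list(range(k + 1)))
    S += X_k
assert math.isclose(S, Y(v) - EY)       # telescoping recovers Y - E{Y}
```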

The idea of using Theorem 2 to prove CLTs in random matrix theory is not new. Since the paper [1] it has been used many times (see, e.g., [7, 13, 15]), but the method of proof used in the present paper allows one to prove the CLT in the same way for all classical models of random matrix theory: deformed Wigner and sample covariance matrices, sparse and diluted random matrices, etc. For those models it becomes even simpler than for band matrices, since the proof of condition (2) becomes simpler.

The paper is organized as follows. In Section 2.1 we give a sketch of the proof of the CLT, introduce the truncated band matrix, and explain how one can extend the CLT from a special class of test functions to all functions of \(\mathcal {H}_s\). In Section 2.2 we check conditions (1.11), and in Section 2.3 we prove Lemma 1 (stated in Section 2.1) about the difference of the linear eigenvalue statistics of the initial and truncated matrices. In Section 3 we compute the variance (1.10). Finally, in Section 4 the proofs of some auxiliary results (partially known before) are given in order to make the proof of Theorem 1 more self-contained.

2 Proof of CLT

2.1 Strategy of the Proof

We start with the proof of the CLT for the truncated and “periodically continued” model:

$$\begin{aligned}&{M}_{ij}=(u_{ij}/b)^{1/2}w_{ij}, \qquad u_{ij}=u(|i-j|_n/b)\nonumber \\&w_{ij}=\left\{ \begin{array}{l@{\quad }l}\tilde{w}_{ij}1_{|\tilde{w}_{ij}|\le b^{1/2}}- E\{\tilde{w}_{ij}1_{|\tilde{w}_{ij}|\le b^{1/2}}\},&{}|i-j|\le C^*b\\ \omega _{ij},&{}||i-j|-n|\le C^*b\end{array}\right. \end{aligned}$$
(2.1)

Here and below

$$\begin{aligned} |i-j|_n:=\min \{|i-j|,||i-j|-n|\}, \end{aligned}$$
(2.2)

and \(\{\omega _{ij}\}_{||i-j|-n|\le C^*b}\) are independent (up to the symmetry conditions) copies of \(w_{12}\), independent of \(\mathcal {M}\). Thus we not only truncate the entries of \(\mathcal {M}\), but also add entries in its upper-right and lower-left corners, in order to obtain a periodic distribution, i.e., one invariant with respect to the shift \(i\rightarrow |i+1|_n\).
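A small numerical sketch of the periodic band profile of (2.1)–(2.2) (an illustration, not part of the paper), with the periodic distance taken as \(\min \{|i-j|,\,n-|i-j|\}\), i.e., the wrapped distance that makes the corner entries part of the band, and u the indicator of \([-1/2,1/2]\):

```python
import numpy as np

def periodic_band_profile(n, b, Cstar=0.5):
    """Indicator u(|i-j|_n / b) with |i-j|_n the wrapped (periodic) distance,
    so the band continues into the upper-right and lower-left corners."""
    i = np.arange(n)
    d = np.abs(i[:, None] - i[None, :])
    dn = np.minimum(d, n - d)                 # periodic distance on the cycle
    return (dn / b <= Cstar).astype(float)

U = periodic_band_profile(12, 4)
assert U[0, 11] == 1.0                        # corner entry lies inside the wrapped band
# shift invariance i -> i+1 mod n of the profile
assert np.allclose(U, np.roll(np.roll(U, 1, 0), 1, 1))
```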

Then the standard argument gives us that for \(|i-j|_n\le C^*b\)

$$\begin{aligned}&E\{ w_{ij}\}=0,\quad E\{ |w_{ij}|^2\}=1+ O(b^{-1-\varepsilon /2}),\nonumber \\&E\{ |w_{ij}|^4\}=3+\kappa _4+O(b^{-\varepsilon /2}),\quad E\{ |w_{ij}|^8\}\le Cb^{4-\varepsilon /2}. \end{aligned}$$
(2.3)

Moreover, it is easy to see that

$$\begin{aligned} n^{-1}E\big \{\mathrm {Tr\,}(\mathcal {M}-M)^2\big \}\le Cb^{-\varepsilon /2}. \end{aligned}$$
(2.4)

Then, using Theorem 2, we prove CLT for \(\nu _{1n}:=(b/n)^{1/2}\mathcal {N}_n^\circ [\varphi _\eta ,M]\) with the test functions of the form

$$\begin{aligned} \varphi _\eta =\varphi *\mathcal {P}_\eta , \end{aligned}$$
(2.5)

where \(*\) means a convolution, \(\mathcal {P}_\eta \) is a Poisson kernel

$$\begin{aligned} \mathcal {P}_\eta (\lambda )=\frac{\pi ^{-1}\eta }{\lambda ^2+\eta ^2}, \end{aligned}$$
(2.6)

and \(\varphi \in \mathcal {H}_s\cap L_1(\mathbb {R})\). It is easy to see that then

$$\begin{aligned} \mathcal {N}_n[\varphi _\eta ,M]=\pi ^{-1}\int \varphi (\lambda )\mathfrak {I}\gamma _n(\lambda +i\eta )d\lambda ,\quad \gamma _n(z)=\mathrm {Tr}(M-z)^{-1}. \end{aligned}$$
(2.7)
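Identity (2.7) is simply the statement that convolving \(\varphi \) with the Poisson kernel turns the linear statistic into an integral of \(\varphi \) against \(\mathfrak {I}\gamma _n\). A direct numerical check on a small symmetric matrix (an illustration, not part of the proof; both sides are evaluated on the same quadrature grid, so they agree to floating-point accuracy):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 6)); A = (A + A.T) / np.sqrt(12)
lam = np.linalg.eigvalsh(A)          # eigenvalues of a small symmetric matrix
eta = 0.5
x = np.linspace(-20.0, 20.0, 40001); dx = x[1] - x[0]
phi = np.exp(-x * x)                 # test function phi(lambda) = exp(-lambda^2)

# l.h.s. of (2.7): sum_j (phi * P_eta)(lambda_j)
poisson = lambda mu: (eta / np.pi) / ((mu - x) ** 2 + eta ** 2)
lhs = sum(np.sum(phi * poisson(m)) * dx for m in lam)

# r.h.s. of (2.7): (1/pi) * int phi(lambda) Im Tr (M - lambda - i eta)^{-1} d lambda
im_gamma = np.array([np.sum(eta / ((lam - t) ** 2 + eta ** 2)) for t in x])
rhs = np.sum(phi * im_gamma) * dx / np.pi

assert abs(lhs - rhs) < 1e-8
```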

Then we shall prove the following lemma.

Lemma 1

Set \(\mathcal {G}(z)=(\mathcal {M}-z)^{-1}\), \(\tilde{\gamma }_n(z):=\mathrm {Tr\;}\mathcal {G}(z)\) and compare \(\tilde{\gamma }_n(z)\) with \(\gamma _n(z)\) of (2.7). Then for any fixed \(\eta >0\) uniformly in \(z:\mathfrak {I}z>\eta \)

$$\begin{aligned}&\frac{b}{n}\mathrm {Var}\Big \{\gamma _n(z)-\tilde{\gamma }_n(z)\Big \} \le C(\eta )b^{-\varepsilon /2}. \end{aligned}$$
(2.8)

The lemma implies that for any \(\varphi \in \mathcal {H}_s\cap L_1(\mathbb {R})\) if we set \(\nu _{2n}:=(b/n)^{1/2}\mathcal {N}_n^\circ [\varphi _\eta ,\mathcal {M}]\), then

$$\begin{aligned} \mathrm {Var}\{\nu _{2n}-\nu _{1n}\}&= \frac{b}{n}\mathrm {Var}\Big \{\mathcal {N}_n[\varphi _\eta ,\mathcal {M}]-\mathcal {N}_n[\varphi _\eta ,{M}]\Big \}\\&=\frac{b}{\pi ^{2}n} \int \int d\lambda _1d\lambda _2\varphi (\lambda _1)\varphi (\lambda _2)\\&\quad \times \mathrm {Cov}\{\mathfrak {I}\gamma _n(\lambda _1+i\eta )-\mathfrak {I}\tilde{\gamma }_n(\lambda _1+i\eta ), \mathfrak {I}\gamma _n(\lambda _2+i\eta )-\mathfrak {I}\tilde{\gamma }_n(\lambda _2+i\eta )\}\\&\le Cb^{-\varepsilon }\int \int d\lambda _1d\lambda _2|\varphi (\lambda _1)\varphi (\lambda _2)| \le C'b^{-\varepsilon /2}. \end{aligned}$$

Hence, for any fixed \(x\in \mathbb {R}\)

$$\begin{aligned} |E\{e^{ix\nu _{1n}}\}-E\{e^{ix\nu _{2n}}\}|\le |x|\mathrm {Var}^{1/2}\{\nu _{1n}-\nu _{2n}\}\le |x|Cb^{-\varepsilon /4}. \end{aligned}$$

Thus, the CLT for \(\nu _{1n}\) and Lemma 1 imply the CLT for \(\nu _{2n}\), if the test function has the form (2.5).

To extend the CLT to test functions from \(\mathcal {H}_s\), we use the following proposition (see [14, Proposition 3.2.9]).

Proposition 1

Let \(\{\xi _{l}^{(n)}\}_{l=1}^{n}\) be a triangular array of random variables, \(\mathcal {N}_{n}[\varphi ]=\sum _{l=1}^{n}\varphi (\xi _{l}^{(n)})\) be its linear statistics corresponding to a test function \(\varphi :\mathbb {R}\rightarrow \mathbb {R}\), and \(\{d_n\}_{n\ge 1}\) be some sequence of positive numbers. Assume that

  1. (a)

there exists a vector space \(\mathcal {L}\) endowed with a norm \(\Vert \cdot \Vert \) such that uniformly in \(\varphi \in \mathcal {L}\)

    $$\begin{aligned} d_n\mathrm {Var}\{\mathcal {N}_{n}[\varphi ]\}\le C||\varphi ||^2,\;\forall \varphi \in \mathcal {L}; \end{aligned}$$
    (2.9)
  2. (b)

    there exists a dense linear manifold \(\mathcal {L}_{1}\subset \mathcal {L}\) such that CLT is valid for \(\mathcal {N}_{n}[\varphi ],\;\varphi \in \mathcal {L}_{1}\), i.e., there exists a continuous quadratic functional \(V:\mathcal {L}_{1}\rightarrow \mathbb {R}_{+}\) such that we have uniformly in x, varying on any compact interval

$$\begin{aligned} \lim _{n\rightarrow \infty }Z_{n}[x\varphi ]=e^{-x^{2}V[\varphi ]/2},\;\forall \varphi \in \mathcal {L}_{1},\quad \hbox {where}\quad Z_{n}[x\varphi ]:={E}\big \{ e^{ix d_n^{1/2}\mathcal {N}^\circ _{n}[\varphi ]}\big \}. \end{aligned}$$
    (2.10)

    Then V admits a continuous extension to \(\mathcal {L}\) and CLT is valid for all \(\mathcal {N}_{n}[\varphi ]\), \(\varphi \in \mathcal {L}\).

The proposition allows us to extend the CLT from any dense subset of \(\mathcal {H}_s\) on which we are able to prove it to the whole of \(\mathcal {H}_s\), provided we can check (2.9). This can be done by using another proposition (proved in [16]; see also [17]) and Lemma 2.

Proposition 2

For any \(s>0\) and any \(\mathcal {M}\)

$$\begin{aligned} \mathrm {Var}\{\mathcal {N}_n[\varphi ,\mathcal {M}]\}\le C_s||\varphi ||_s^2\int _0^\infty dy e^{-y}y^{2s-1}\int _{-\infty }^\infty \mathrm {Var}\{\mathrm {Tr\,}\mathcal {G}(x+iy)\}dx. \end{aligned}$$
(2.11)

Lemma 2

If the conditions (1.1) and (1.4) are satisfied, then for any \(0<y<\frac{1}{2}\)

$$\begin{aligned} \frac{b}{n}\int dx\mathrm {Var}\{\mathrm {Tr\,}\mathcal {G}(x+iy)\}\le Cy^{-4}\log y^{-1} \end{aligned}$$
(2.12)

The proof of the lemma is given in Sect. 4.

Combining Proposition 2 with (2.12), we prove (2.9).

2.2 Checking of Conditions (1.11)

According to the previous section, it suffices to prove the CLT for functions of the form (2.5) with a fixed n-independent \(\eta \). Hence, everywhere below we assume that \(|\mathfrak {I}z|\ge \eta \) with some n-independent \(\eta \), so that almost all bounds depend on \(\eta \).

To apply Theorem 2, we denote by \(E_{p}\) the averaging with respect to the variables \(\{w_{p,j}\}_{j\ge p}\), set \(E_{<p}=E_{1}\dots E_{p-1}\), and consider

$$\begin{aligned}&X_p[\varphi _\eta ]=\pi ^{-1}(b/n)^{1/2}\int \varphi (\lambda )\mathfrak {I}\tilde{X}_p[\lambda +i\eta ]d\lambda ,\nonumber \\&\quad \tilde{X}_p[z]=E_{<p}\{\gamma _n(z)-E_{p}\gamma _n(z)\}. \end{aligned}$$
(2.13)

Then, according to Theorem 2, we have to check conditions (1) and (2) of (1.11) for \(\{X_p[\varphi _\eta ]\}\). It is evident that condition (1) follows from the bounds

$$\begin{aligned} E_{p}\{|\tilde{X}_p[z]|^2\}\le Cb^{-1},\quad |\tilde{X}_p[z]|\le C, \end{aligned}$$
(2.14)

valid uniformly in \(|\mathfrak {I}z|\ge \eta \). And since

$$\begin{aligned} \mathrm {Var}\Big \{\sum X_p^2-\sum E_{p}\{X_p^2\}\Big \}=\sum _pE\big \{|X_p^2- E_{p}\{X_p^2\}|^2\big \} \le \sum _pE\{X_p^4\}\le \varepsilon _n, \end{aligned}$$

condition (2) of (1.11) follows from the uniform in \(|\mathfrak {I}z_1|,|\mathfrak {I}z_2|\ge \eta \) bound

$$\begin{aligned}&\mathrm {Var}\{\Sigma (z_1,z_2)\}\le \tilde{\varepsilon }_n,\nonumber \\&\Sigma (z_1,z_2):=\frac{b}{n}\sum _pE_{p}\{\tilde{X}_p[z_1]\tilde{X}_p[z_2]\}. \end{aligned}$$
(2.15)

Let us prove (2.14) and (2.15).

Denote by \(M^{(p)}\) the \((n-1)\times (n-1)\) matrix obtained from M by removing the pth row and column. Set also

$$\begin{aligned}&G^{(p)}=(M^{(p)}-z)^{-1},\quad v^{(p)}:=(v_{p1},\ldots ,v_{p,p-1},v_{p,p+1},\ldots , v_{pn})\in \mathbb {R}^{n-1},\nonumber \\&\quad v_{ij}:=u_{ij}^{1/2}w_{ij}. \end{aligned}$$
(2.16)

Use the identities

$$\begin{aligned} G_{pp}=-A^{-1}_p,\;\quad {G}_{ij}=G^{(p)}_{ij}-Q^{(p)}_{ij},\quad \hbox {Tr }G-\hbox {Tr }G^{(p)} =-\frac{\partial }{\partial z}\log A_p(z), \end{aligned}$$
(2.17)

where

$$\begin{aligned}&A_p:= z+b^{-1/2}v_{pp}+b^{-1}\big (G^{(p)}v^{(p)},v^{(p)}\big ),\nonumber \\&Q^{(p)}_{ij}= b^{-1}A_p^{-1}\big ({G}^{(p)}v^{(p)}\big )_i\big ({G}^{(p)}v^{(p)}\big )_j. \end{aligned}$$
(2.18)
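The identities (2.17)–(2.18) are the standard Schur complement (rank-one perturbation) formulas. A small numerical check on a dense symmetric matrix (an illustration, not part of the proof; we write the diagonal term with the sign of the usual Schur complement, which is immaterial in distribution for the symmetric \(w_{pp}\)):

```python
import numpy as np

rng = np.random.default_rng(3)
n, z = 8, 0.3 + 1.0j
M = rng.standard_normal((n, n)); M = (M + M.T) / 2
p = 0
Mp = np.delete(np.delete(M, p, 0), p, 1)       # M with p-th row and column removed
m = np.delete(M[p], p)                         # removed column (without M_pp)

G  = np.linalg.inv(M - z * np.eye(n))
Gp = np.linalg.inv(Mp - z * np.eye(n - 1))

# A_p in the form z - M_pp + (G^(p) m, m); standard Schur complement sign convention
A = z - M[p, p] + m @ Gp @ m
assert np.isclose(G[p, p], -1 / A)             # first identity of (2.17)
# Tr G - Tr G^(p) = -d/dz log A_p = G_pp * (1 + (m, (G^(p))^2 m))
assert np.isclose(np.trace(G) - np.trace(Gp), G[p, p] * (1 + m @ Gp @ Gp @ m))
```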

Since for the resolvent \(G(z)=(M-z)^{-1}\) of any symmetric or hermitian matrix M and any vector m

$$\begin{aligned} \mathfrak {I}(G(z) m,m)=\mathfrak {I}z(G(z) m,G(z) m), \end{aligned}$$
(2.19)

we have for \(|\mathfrak {I}z|\ge \eta \)

$$\begin{aligned}&|A_p(z)|\ge |\mathfrak {I}A_p(z)|=|\mathfrak {I}z|\big (1+b^{-1}(G^{(p)}v^{(p)},G^{(p)}v^{(p)})\big )\ge \eta ,\end{aligned}$$
(2.20)
$$\begin{aligned}&|\bar{A}_p|\ge |\mathfrak {I}\bar{A}_p|\ge \eta ,\quad \mathrm {where} \quad \bar{A}_p:= E_{p}\{A_p\}, \nonumber \\&\Vert Q^{(p)}\Vert \le |A_p|^{-1}|b^{-1}(G^{(p)}v^{(p)},G^{(p)}v^{(p)})|\le \eta ^{-1}, \end{aligned}$$
(2.21)

and

$$\begin{aligned} \Big |\frac{ A_p'(z)}{A_p}\Big |\le \frac{|1+b^{-1}((G^{(p)})^2v^{(p)},v^{(p)})|}{\mathfrak {I}A_p}\le \eta ^{-1}\quad \Rightarrow \quad |\tilde{X}_p|\le 2\eta ^{-1}, \end{aligned}$$

which implies the second inequality of (2.14).

The last relation of (2.17) yields

$$\begin{aligned}&E_p\{\tilde{X}_p(z_1)\tilde{X}_p(z_2)\}=:\frac{\partial ^2}{\partial z_1\partial z_2}D_p(z_1,z_2)\nonumber \\&D_p(z_1,z_2):=E_p\Big \{ E_{<p}\big \{(\log A_p(z_1))^\circ _p\big \}E_{<p}\big \{(\log A_p(z_2))^\circ _p\big \}\Big \}. \end{aligned}$$

Here and below for any random variable \(\xi \) we denote \(\xi ^\circ _p=\xi -E_p\{\xi \}\).

Since \(D_p(z_1,z_2)\) is an analytic function of \(z_1,z_2\) in the domain \(|\mathfrak {I}z_1|,|\mathfrak {I}z_2|\ge \eta /2\), in order to prove the first bound of (2.14), it suffices to prove that uniformly in \(|\mathfrak {I}z|\ge \eta /2\)

$$\begin{aligned}&E_p\{|E_{<p}\{(\log A_p(z))^\circ _p\}|^2\}\le \eta ^{-2}E_p\{|E_{<p}\{A_p^\circ (z)\}|^2\}\le Cb^{-1}. \end{aligned}$$

Evidently

$$\begin{aligned} E_{<p}\{A_p^\circ (z)\}= b^{-1/2}v_{pp}+b^{-1}\sum _{i,j>p,i\not =j}E_{<p}\{G^{(p)}_{ij}(z)\} v_{pi}v_{pj}+ b^{-1}\sum _{i>p}E_{<p}\{G^{(p)}_{ii}(z)\}(v_{pi}^2-u_{pi}). \end{aligned}$$

Hence, averaging with respect to \(E_p\) and using (2.3), we obtain the first bound of (2.14). Similarly one can obtain the relation which we need below:

$$\begin{aligned} E_p\{|E_{<p}\{A_p^\circ (z')\}|^4\}\le Cb^{-1-\varepsilon /2}. \end{aligned}$$
(2.22)

It remains to check (2.15). Writing \(A_p=\bar{A}_p+A_p^\circ \), expanding \(\log A_p\) around \(\bar{A}_p\), and using (2.22), we obtain

$$\begin{aligned} \Sigma (z_1,z_2)&=\frac{b}{n}\sum _p E_p\{X_p(z_1)X_p(z_2)\}=\frac{\partial ^2}{\partial z_1\partial z_2}\tilde{\Sigma }(z_1,z_2)\nonumber \\ \tilde{\Sigma }(z_1,z_2)&:=\frac{b}{n}\sum D_p(z_1,z_2) =\frac{b}{n}\sum (\bar{A}_p(z_1)\bar{A}_p(z_2))^{-1} T_p(z_1,z_2)\nonumber \\&\quad +\frac{b}{n}\sum \big (O(E_p\{|A_p^\circ (z_1)|^3\})+ O(E_p\{|A_p^\circ (z_2)|^3\})\big )\nonumber \\&=\frac{b}{n}\sum (\bar{A}_p(z_1)\bar{A}_p(z_2))^{-1} T_{p}(z_1,z_2)+O(b^{-\varepsilon /4}), \end{aligned}$$
(2.23)

where

$$\begin{aligned} T_{p}(z_1,z_2)&:= E_p\big \{E_{<p}\{ A_p^\circ (z_1)\}E_{<p}\{A_p^\circ (z_2)\}\big \}\nonumber \\&= 2b^{-2}\sum _{i,j>p} u_{pi}u_{pj}E_p\big \{E_{<p}\{G^{(p)}_{ij}(z_1)\}E_{<p}\{G_{ij}^{(p)}(z_2)\}\big \}\nonumber \\&\quad +\kappa _4b^{-2}\sum u_{pi}^2E_p\big \{E_{<p}\{G^{(p)}_{ii}(z_1)\}E_{<p}\{G^{(p)}_{ii}(z_2)\}\big \}+w_2b^{-1}u_{pp}. \end{aligned}$$
(2.24)

Lemma 3

Given \(\eta >0\) there exists \(\delta (\eta )>0\) such that uniformly in \(z:|\mathfrak {I}z|>\eta \)

$$\begin{aligned}&\mathrm {Var}\{ G_{jj}^{(p)}\}\le b^{-\delta },\quad E\{|G_{jj}^{(p)}-E\{G_{jj}\}|\}\le C_0^2b^{-1},\nonumber \\&E\{ |G_{jj}^{(p)}(z)-g(z)|\}\le C_0b^{-\delta }, \end{aligned}$$
(2.25)

where g(z) is defined by (1.8).

The proof of the lemma is given in Sect. 4.

Remark 1

Below we will often use a simple observation. If random variables \(R_k\) satisfy \(|R_k|\le C_k\) with \(\sum _kC_k\le C\), and random variables \(f_k\) satisfy \(E\{|f_k-f^*_k|\}\le C_1b^{-\delta }\), where \(f^*_k\) are some constants, then with the same C and \(C_0\) of (2.25) we have

$$\begin{aligned} \sum R_kf_k=\sum R_k f_{k}^*+r,\quad E\{|r|\}\le CC_1b^{-\delta }. \end{aligned}$$
(2.26)

In particular, since in view of (2.24) \(| T_{p}(z_1,z_2)|\le Cb^{-1}\), we have

$$\begin{aligned} \tilde{\Sigma }(z_1,z_2)&=\frac{b}{n}\sum (\bar{A}_p(z_1)\bar{A}_p(z_2))^{-1} T_{p}(z_1,z_2)+o(1)\nonumber \\&=\frac{2}{bn}\sum g(z_1)g(z_2) T_{p}'(z_1,z_2)+\kappa _4(u,u)(g(z_1)g(z_2))^{2}+u(0)w_2g(z_1)g(z_2)+o(1), \end{aligned}$$
(2.27)

where \(T_{p}'(z_1,z_2)\) is the first sum in the r.h.s. of (2.24). The constant term here does not contribute to the variance of \(\Sigma (z_1,z_2)\), so it is not important for the proof of (2.15).

Let us denote by \(\tilde{M}^{(<p)}\) the matrix M whose entries \(w_{ij}\) with \(\min \{i,j\}<p\) are replaced by \( w_{ij}'\), which are independent of all \(\{w_{kl}\}_{k,l=1}^n\) and have the same distribution as \(w_{ij}\). Let also \(\tilde{M}^{(<p,q)}\) be the matrix \(\tilde{M}^{(<p)}\) without the qth row and column. We denote also by \(\tilde{E}_{<p}\) the averaging with respect to all \(w_{ij}\) and \(w_{ij}'\) with \(\min \{i,j\}<p\). Set

$$\begin{aligned} \tilde{G}^{(<p,q)}=(\tilde{M}^{(<p,q)}-z)^{-1},\quad \tilde{G}^{(<p)}=(\tilde{M}^{(<p)}-z)^{-1}. \end{aligned}$$
(2.28)

Then evidently

$$\begin{aligned} T_{p}'(z_1,z_2)&=\sum _{jk}E_p\big \{\tilde{E}_{<p}\big \{\tilde{G}^{(<p,p)}_{jk}(z_1)G^{(p)}_{jk}(z_2)\big \}\big \}u_{jp}u_{kp}\\&=E_p\{\tilde{E}_{<p}\{\mathrm {Tr} \tilde{G}^{(<p,p)}(z_1)I^{(p)}G^{(p)}(z_2)I^{(p)}\}\}, \end{aligned}$$

where we denote by \(I^{(p)}\) the diagonal matrix with the entries

$$\begin{aligned} I^{(p)}_{jk}=\delta _{jk}u_{kp}\mathrm {1}_{k>p}. \end{aligned}$$
(2.29)

Moreover, if we replace \(G^{(p)}\) in (2.24) by G and set

$$\begin{aligned} T_{p}''(z_1,z_2)&=\sum _{i,j>p} u_{pi}u_{pj}E_p\{E_{<p}\{G_{ij}(z_1)\}E_{<p}\{G_{ij}(z_2)\}\}\nonumber \\&=E_p\{\tilde{E}_{<p}\{\mathrm {Tr\,} \tilde{G}^{(<p)}(z_1)I^{(p)}G(z_2)I^{(p)}\}\}, \end{aligned}$$
(2.30)

then in view of (2.17) and (2.21)

$$\begin{aligned} |T_{p}''(z_1,z_2)-T_{p}'(z_1,z_2)|&\le |E_p\{\tilde{E}_{<p}\{\mathrm {Tr\,} \tilde{Q}^{(p)}(z_1)I^{(p)}G(z_2)I^{(p)}\}\}|\nonumber \\&\quad +|E_p\{\tilde{E}_{<p}\{\mathrm {Tr\,} \tilde{G}^{(<p,p)}(z_1)I^{(p)}Q^{(p)}I^{(p)}\}\}|\le C, \end{aligned}$$
(2.31)

where we have used that since \(Q^{(p)}\) is a rank one matrix with a bounded norm, we have for any bounded matrix B

$$\begin{aligned} |\mathrm {Tr\,}Q^{(p)}B|\le \Vert B\Vert \Vert Q^{(p)}\Vert . \end{aligned}$$

Thus we need to study the variance of

$$\begin{aligned} \Sigma _1=\frac{1}{bn}\sum _p T_{p}''(z_1,z_2). \end{aligned}$$
(2.32)

To prove (2.15), it suffices to show that

$$\begin{aligned} \mathrm {Var}\{\Sigma _1\}=\sum _rE\{|E_{<r}\{\big (\Sigma _1\big )^\circ _r\}|^2\}\rightarrow 0. \end{aligned}$$

The last relation is a corollary of the bounds which we are going to prove:

$$\begin{aligned} n^2E\{|\big (\Sigma _1\big )^\circ _r|^2\}\le C,\quad r=1,\dots ,n. \end{aligned}$$
(2.33)

By (2.32),

$$\begin{aligned} n\big (\Sigma _1\big )^\circ _r=\frac{1}{b}\sum _{p\le r}\big (T_p''(z_1,z_2)\big )^\circ _r. \end{aligned}$$
(2.34)

Notice also that \(\big (T_p''(z_1,z_2)\big )^\circ _r=0\) for \(p\ge r+1\); hence the sum in (2.34) is over \(p\le r\).

Then (2.17) yields

$$\begin{aligned}&\big (T_p''(z_1,z_2)\big )^\circ _r\\&\quad =\Big (\tilde{E}_{\le p}\{\mathrm {Tr\,} \tilde{G}^{(<p)}(z_1)I^{(p)}G(z_2)I^{(p)}\}- \tilde{E}_{\le p}\{\mathrm {Tr\,}\tilde{G}^{(<p,r)}(z_1)I^{(p)}G^{(r)}(z_2)I^{(p)}\}\Big )^\circ _r\\&\quad =\Big ( \tilde{E}_{\le p}\big \{(A_rb)^{-1}(G^{(r)}(z_2)I^{(p)}\tilde{G}^{(<p,r)}(z_1)I^{(p)}G^{(r)}(z_2)v^{(r)},v^{(r)})\big \}\Big )^\circ _r\\&\qquad +\mathrm {sim}+\Big (\tilde{E}_{\le p}\big \{(A_rb)^{-2}(G^{(r)}(z_2)I^{(p)} \tilde{G}^{(r)}(z_1)\tilde{v}^{(r)},v^{(r)})^2\big \}\Big )^\circ _r\\&\quad =:\big (\tilde{F}^{(r)}_{1p}(z_1,z_2)\big )^\circ _r+\big (\tilde{F}^{(r)}_{1p}(z_2,z_1)\big )^\circ _r+ \big (\tilde{F}^{(r)}_{2p}(z_1,z_2)\big )^\circ _r, \end{aligned}$$

where “\(+\,\mathrm {sim}\)” means adding the term obtained from the previous one by interchanging \(z_1\) and \(z_2\). Since \(E\{|\xi ^\circ _r|^2\}\le E\{|\xi |^2\}\) for any random variable \(\xi \), (2.34) yields

$$\begin{aligned}&n^2 E\{|\big (\Sigma _1\big )^\circ _r|^2\}\nonumber \\&\quad \le CE\Big \{\Big |b^{-1}\sum _{p\le r}\big ( \tilde{F}^{(r)}_{1p}(z_1,z_2) +\tilde{F}^{(r)}_{1p}(z_2,z_1)+\tilde{F}^{(r)}_{2p}(z_1,z_2)\big ) \Big |^2\Big \}\nonumber \\&\quad \le CE\Big \{\Big |b^{-2}\sum _{p\le r}E_{\le p}\big \{\big (I^{(p)}G^{(r)}(z_1)v^{(r)},G^{(r)}(z_1)v^{(r)}\big ) \big (1+b^{-1}( v^{(r)}, v^{(r)})\big )\big \}\Big |^2\Big \}+\mathrm { sim}\nonumber \\&\quad =: CE\Big \{\Big |b^{-2}\sum _{p\le r}\big (F^{(r)}_p(z_1)+F^{(r)}_p(z_2)\big )\Big |^2\Big \}. \end{aligned}$$
(2.35)

To perform the summation over p in the r.h.s. of (2.35) we would like to use the property

$$\begin{aligned} \sum _{p=1}^nI^{(p)}\le CbI, \end{aligned}$$
(2.36)

but since p appears not only in \(I^{(p)}\), we first need to remove p from the other places. Write

$$\begin{aligned}&E\big \{n^2\big |\big (\Sigma _1\big )^\circ _r\big |^2\big \}\\&\quad \le Cb^{-4}\sum _{p\le q\le r}E\{ F^{(r)}_pF^{(r)}_q\}\\&\quad \le Cb^{-4}\sum _{ q=1}^r\sum _{p=1}^qE\Big \{E_{\le q}\big \{(I^{(p)}G^{(r)}v^{(r)},G^{(r)}v^{(r)}) (1+b^{-1}(v^{(r)},v^{(r)}))\big \}F^{(r)}_q\Big \}\\&\quad \le Cb^{-2}\sum _{ q=1}^rE\Big \{E_{\le q}\big \{b^{-1}(v^{(r)},v^{(r)})(1+b^{-1}(v^{(r)},v^{(r)}))\big \}F^{(r)}_q\Big \}\\&\quad \le Cb^{-2}\sum _{ q=1}^{r-C_*b}E\Big \{\big (I^{(q)}G^{(r)}v^{(r)},G^{(r)}v^{(r)}\big )\big (1+b^{-1}(v^{(r)},v^{(r)})\big )^3\Big \}\\&\qquad +CE\{(1+b^{-1}(v^{(r)},v^{(r)}))^4\}\le C'E\{(1+b^{-1}(v^{(r)},v^{(r)}))^4\}. \end{aligned}$$

Here in the first line we use (2.35); in the second line we first use that for \(p\le q\) the averaging \(E_{\le p}\) can be replaced by \(E_{\le q}\), and then use (2.36) for the summation over \(p\le q\). The third line follows from the second one in view of the bound \(\Vert G^{(r)}\Vert \le C\). Next we split the sum over q into two parts, one over \(q<r-C^*b\) and another over \(r-C^*b\le q\le r\), and observe that for q in the first part \((v^{(r)},v^{(r)})\) is constant with respect to the averaging \(E_{<q}\), hence

$$\begin{aligned}&E\big \{E_{<q} \big \{b^{-1}(v^{(r)},v^{(r)})\big (1+b^{-1}(v^{(r)},v^{(r)})\big )\big \}F^{(r)}_q\big \}\\&\quad =E\big \{\big ((G^{(r)})^*I^{(q)}G^{(r)}v^{(r)},v^{(r)}\big )b^{-1}(v^{(r)},v^{(r)}) \big (1+b^{-1}(v^{(r)},v^{(r)})\big )^2\big \}. \end{aligned}$$

Then we can carry out the sum over \(q<r-C^*b\), using again the bound (2.36), and complete the estimate using \(\Vert G^{(r)}\Vert \le C\). As for the terms with \(r-C^*b\le q\le r\), there are at most \(C^*b\) of them, and they are estimated using only the boundedness of \(\Vert G^{(r)}\Vert \) and \(\Vert I^{(p)}\Vert \). Thus we have proved (2.33). \(\square \)

2.3 Proof of Lemma 1

Set

$$\begin{aligned} \mathcal {G}^{(p)}:=(\mathcal {M}^{(p)}-z)^{-1},\quad \mathcal {A}_p:=z+b^{-1/2}\tilde{v}_{pp}+b^{-1}(\mathcal {G}^{(p)}\tilde{v}^{(p)},\tilde{v}^{(p)}),\quad \Delta A_p:=\mathcal {A}_p-A_p. \end{aligned}$$

The same argument as in the previous section implies that it suffices to check that

$$\begin{aligned} \frac{b}{n}\sum _{p}E\{ |\Delta A_p-E_p\{\Delta A_p\}|^2\}\rightarrow 0. \end{aligned}$$
(2.37)

Since we know that [see (2.24)]

$$\begin{aligned} \frac{b}{n}\sum _{|p|_n\le C^*b}E\{ |\Delta A_p-E_p\{\Delta A_p\}|^2\} \le \frac{b}{n}\sum _{|p|_n\le C^*b}2\Big (E\{|A_p^\circ |^2\} +E\{|\mathcal {A}_p^\circ |^2\}\Big )\le \frac{Cb}{n}, \end{aligned}$$

we conclude that it suffices to prove that

$$\begin{aligned} \frac{b}{n}\sum _{|p|_n>C^*b}E\{ |\Delta A_p-E_p\{\Delta A_p\}|^2\}\rightarrow 0. \end{aligned}$$
(2.38)

Let us write

$$\begin{aligned} \Delta A_p&= b^{-1/2}\Delta v_{pp}+b^{-1}(\mathcal {G}^{(p)}\Delta v^{(p)},\Delta v^{(p)}) +2b^{-1}(\mathcal {G}^{(p)}\Delta v^{(p)}, v^{(p)})\nonumber \\&\quad + b^{-1}((\mathcal {G}^{(p)}-{G}^{(p)}) v^{(p)}, v^{(p)})=:J_{0p}+J_{1p}+2J_{2p}+J_{3p}. \end{aligned}$$
(2.39)

Averaging with respect to \( v^{(p)}\) and \(\tilde{v}^{(p)}\) we get similarly to (2.24) for \(|p|_n\ge cb\)

$$\begin{aligned} E\{|J_{1p}-E_p\{J_{1p}\}|^2\}&=b^{-2}\sum _{i\not =j}E\{|\mathcal {G}^{(p)}_{ij}|^2(v_{pi}-\tilde{v}_{pi})^2 (v_{pj}-\tilde{v}_{pj})^2\}\nonumber \\&\quad +b^{-2}\sum _{i}E\{|\mathcal {G}^{(p)}_{ii}|^2(v_{pi}-\tilde{v}_{pi})^4\}\nonumber \\&\le b^{-4-\varepsilon }\sum _{i\not =j}E\{|\mathcal {G}^{(p)}_{ij}|^2\}I^{(p)}_{ii}I^{(p)}_{jj}+ b^{-2-\varepsilon /2}\sum _{i}E\{|\mathcal {G}^{(p)}_{ii}|^2\}I^{(p)}_{ii}\nonumber \\&\le Cb^{-1-\varepsilon /2}. \end{aligned}$$
(2.40)

Similarly

$$\begin{aligned} E\{|J_{2p}-E_p\{J_{2p}\}|^2\}\le Cb^{-2-\varepsilon /2},\quad E\{|J_{0p}-E_p\{J_{0p}\}|^2\}\le Cb^{-2-\varepsilon /2}. \end{aligned}$$
(2.41)

In addition, again similarly to (2.24) we have

$$\begin{aligned} E\{|J_{3p}-E_p\{J_{3p}\}|^2\}\le Cb^{-2}E\{\mathrm {Tr\,}I^{(p)}(\mathcal {G}^{(p)}-{G}^{(p)})I^{(p)} (\mathcal {G}^{(p)*}-{G}^{(p)*})\}. \end{aligned}$$
(2.42)

Now, in the same way as in (2.30)–(2.31), we can replace here \(\mathcal {G}^{(p)}\) by \(\mathcal {G}\) and \({G}^{(p)}\) by G with an error \(O(b^{-2})\):

$$\begin{aligned} E\{|J_{3p}-E_p\{J_{3p}\}|^2\}\le 2b^{-2}E\{\mathrm {Tr\,}I^{(p)}(\mathcal {G}-{G})I^{(p)} (\mathcal {G}^{*}-{G}^{*})\}+O(b^{-2}). \end{aligned}$$
(2.43)

The resolvent identity implies

$$\begin{aligned} \mathcal {G}-{G}={G}(M-\mathcal {M})\mathcal {G}=-{G}\Delta M\mathcal {G}. \end{aligned}$$

Hence, the first term in the r.h.s. of (2.43) can be estimated as

$$\begin{aligned} b^{-2}E\{\mathrm {Tr\,}I^{(p)}(\mathcal {G}-{G})I^{(p)}(\mathcal {G}^{*}-{G}^{*})\}&= b^{-2}E\{\mathrm {Tr\,}I^{(p)}{G}\Delta M\mathcal {G}I^{(p)}\mathcal {G}^{*}\Delta M{G}^{*}\}\\&\le Cb^{-2}E\{\mathrm {Tr\,}I^{(p)}{G}(\Delta M)^2{G}^{*}\}. \end{aligned}$$

Hence, using (2.36) and (2.4), we obtain

$$\begin{aligned} \frac{b}{n}\sum _{C^*b< p<n- C^*b} E\{|J_{3p}-E_p\{J_{3p}\}|^2\}&\le Cn^{-1}b^{-1}E\{\mathrm {Tr\,}{G}(\Delta M)^2{G}^{*}\}\nonumber \\&\le Cn^{-1}b^{-1}E\{\mathrm {Tr\,}(\Delta M)^2\}\le Cb^{-1-\varepsilon /2}. \end{aligned}$$
(2.44)

Combining (2.44) with (2.39)–(2.42), we get (2.38).

3 Variance

In view of (2.32), to find \(\Sigma _1\) it suffices to find the main order of \(b^{-1}E\{T_p''(z_1,z_2)\}\) defined in (2.30). To this end it suffices to compute, for any i, the main order of

$$\begin{aligned} t_i=\sum _{j>p}u_{pj}\tilde{E}_{<p}\{\tilde{G}_{ij}(z_1)G_{ij}(z_2)\}. \end{aligned}$$

Consider

$$\begin{aligned} s_i&:= \sum _{j>p}u_{pj}\tilde{E}_{<p}\Big \{\tilde{G}_{ji}(z_1)\sum _{k}b^{-1/2}v_{ik}G_{kj}(z_2)\Big \}\nonumber \\&= \sum _{j>p}u_{pj}\tilde{E}_{<p}\Big \{\tilde{G}_{ji}(z_1)\sum _{k}\Big (b^{-1/2}v_{ik}-z_2\delta _{ik} +z_2\delta _{ik}\Big )G_{kj}(z_2)\Big \}\nonumber \\&=\sum _{j>p}u_{pj}\delta _{ij}\tilde{E}_{<p}\{\tilde{G}_{ii}(z_1)\}+z_2t_i=u_{pi}g(z_1)+z_2t_i+O(b^{-\delta /2}), \end{aligned}$$
(3.1)

where we used Lemma 3 for the last equality.

The idea is to compute the l.h.s. above in a way which yields an equation for \(\{t_i\}_{i>p}\). This is possible by using the following formula (see, e.g., [14]), valid for any random variable \(\xi \) which has zero mean and possesses \(m+2\) moments, and any function F possessing \(m+1\) bounded derivatives:

$$\begin{aligned} E\{\xi F(\xi )\}=\sum _{s=1}^{m}\frac{\kappa _{s+1}E\{ F^{(s)}(\xi )\}}{s!}+r_m,\quad |r_m|\le CE\{|\xi |^{m+2}\}\max |F^{(m+1)}|, \end{aligned}$$
(3.2)

where \(\kappa _s\) is the sth cumulant of \(\xi \), i.e., the coefficient of \(x^s/s!\) in the formal expansion of \(\log E\{e^{x\xi }\}\) in powers of x. We only need that \(\kappa _1=E\{\xi \}\), \(\kappa _2=E\{|\xi ^\circ |^2\}\), and \(|\kappa _s|\le C_sE\{|\xi |^s\}\) with some absolute constants \(C_s\).
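For a Gaussian \(\xi \) all cumulants with \(s\ge 3\) vanish, so (3.2) truncates after \(s=1\) and reduces to Stein's lemma \(E\{\xi F(\xi )\}=\kappa _2E\{F'(\xi )\}\). A numerical illustration via Gauss–Hermite quadrature (a sanity check only; the test function \(F=\sin \) is an arbitrary choice):

```python
import numpy as np

# Gauss-Hermite quadrature for E{f(xi)} with xi ~ N(0,1)
x, w = np.polynomial.hermite.hermgauss(60)
def gauss_mean(f):
    return np.sum(w * f(np.sqrt(2.0) * x)) / np.sqrt(np.pi)

# cumulant expansion for Gaussian xi truncates after s = 1 (Stein's lemma):
# E{xi F(xi)} = kappa_2 E{F'(xi)} with kappa_2 = 1
lhs = gauss_mean(lambda t: t * np.sin(t))
rhs = gauss_mean(np.cos)
assert abs(lhs - rhs) < 1e-12
assert abs(lhs - np.exp(-0.5)) < 1e-10  # both sides equal e^{-1/2}
```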

Applying this formula with \(\xi =b^{-1/2}v_{ik}\), \(m=4\), and \(F_{ijk}=\tilde{G}_{ji}(z_1)G_{kj}(z_2)\), we get

$$\begin{aligned} s_i&=-\sum _{j>p}u_{pj}\tilde{E}_{<p}\Big \{\tilde{G}_{ji}(z_1)G_{ij}(z_2)\sum _{k}b^{-1}u_{ik}G_{kk}(z_2)\Big \}\nonumber \\&\quad -\mathrm {1}_{i>p}\sum _{k>p}u_{ik}\sum _{j>p}b^{-1}u_{pj}\tilde{E}_{<p}\Big \{\tilde{G}_{ii}(z_1)\tilde{G}_{jk}(z_1)G_{kj}(z_2)\Big \}\nonumber \\&\quad +R_1+R_2+R_3+R_4. \end{aligned}$$
(3.3)

Here we used the differentiation formula for the resolvent of any symmetric matrix M

$$\begin{aligned} \frac{d}{d M_{ik}}G_{sl}(z)=-G_{sk}(z)G_{il}(z)-G_{si}(z)G_{kl}(z) \end{aligned}$$
(3.4)

and split the terms corresponding to \(s=1\) in (3.2) into two parts. The terms corresponding to the first summand in the r.h.s. of (3.4) are written in the r.h.s. of (3.3); the terms corresponding to the second summand in the r.h.s. of (3.4) are collected in the remainder \(R_1\) (see below). The remainders \(R_2\) and \(R_3\) collect the terms corresponding to \(s=2\) and \(s=3\) respectively, and the remainder \(R_4\) arises from the remainder term in (3.2). Let us analyze the order of each of these terms. By (3.4)

$$\begin{aligned} R_1&= -b^{-1}\sum _{j>p,k}u_{jp}u_{ik}\tilde{E}_{<p}\{\tilde{G}_{ij}(z_1)G_{ik}(z_2)G_{kj}(z_2)\}\\&\quad -b^{-1}\sum _{j,k>p}u_{jp}u_{ik}\tilde{E}_{<p}\{\tilde{G}_{ik}(z_1)\tilde{G}_{ij}(z_1)G_{kj}(z_2)\}\\&=-b^{-1}\tilde{E}_{<p}\{(\tilde{G} I^{(p)}G I^{(i,p)}G )_{ii}\} -b^{-1}\tilde{E}_{<p}\{(\tilde{G} I^{(p)}G I^{(i,p)}\tilde{G} )_{ii}\}=O(b^{-1}), \end{aligned}$$

where \(I^{(i,p)}_{lk}=\delta _{lk}u_{il}\mathrm {1}_{l>p}\).
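The differentiation formula (3.4), which drives the whole expansion, is easy to verify numerically by a central finite difference with respect to the symmetric pair \(M_{ik}=M_{ki}\) (a sketch; the matrix size, indices, and z below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n, z = 5, 0.2 + 1.0j
M = rng.standard_normal((n, n)); M = (M + M.T) / 2

def G(A):
    return np.linalg.inv(A - z * np.eye(n))

i, k, s, l = 0, 2, 1, 3
h = 1e-6
E = np.zeros((n, n)); E[i, k] = E[k, i] = 1.0  # symmetric bump of M_ik = M_ki
num = (G(M + h * E)[s, l] - G(M - h * E)[s, l]) / (2 * h)

G0 = G(M)
exact = -G0[s, k] * G0[i, l] - G0[s, i] * G0[k, l]  # formula (3.4)
assert abs(num - exact) < 1e-6
```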

To estimate \(R_2\), observe that by (3.4), after two differentiations we obtain a sum of terms of the type \( \hat{G}_{l_1l_2}\hat{G}_{l_3l_4}\hat{G}_{l_5l_6}\hat{G}_{l_7l_8}\), where \(\hat{G}\) can be G or \(\tilde{G}\) and the set of indices \(l_1,l_2,\dots ,l_7,l_8\) contains i three times, k three times, and j twice, but \(\hat{G}_{jj}\) cannot appear. Thus, each term contains either \(\hat{G}_{jk}\hat{G}_{ji}\), or \(\hat{G}_{jk}\hat{G}_{jk}\), or \(\hat{G}_{ji}\hat{G}_{ji}\). Any of these combinations after summation with respect to j gives O(1). Hence, after summation with respect to k we obtain O(b). But the factor which appears because of the third cumulant is \(b^{-3/2}\); hence \(R_2=O(b^{-1/2})\). By the same argument \(R_3=O(b^{-1})\).

Finally, to estimate \(R_4\), observe that we have two summations, with respect to \(p<j<p+C^*b\) and \(i-C^*b<k<i+C^*b\), and the factor which appears because of \(b^{-3}E\{|v_{ik}|^6\}\) is bounded by \(b^{-2-\varepsilon /2}\). At the last step of the transformation of (3.3) we write

$$\begin{aligned} G_{kk}(z_2)=g(z_2)+(G_{kk}(z_2)-g(z_2)),\quad G_{ii}(z_1)=g(z_1)+(G_{ii}(z_1)-g(z_1)) \end{aligned}$$

and use the bound (2.25). Then we obtain

$$\begin{aligned} s_i=-g(z_2)t_i-g(z_1)\sum _{k} U_{ik}^{(p)}t_k+r_i,\quad |r_i|\le C b^{-\varepsilon /2}, \end{aligned}$$

where

$$\begin{aligned} U_{ik}=b^{-1}u_{ik},\quad U^{(p)}_{ik}=b^{-1}u_{ik}\mathrm {1}_{i>p}\mathrm {1}_{k>p}. \end{aligned}$$
(3.5)

Combining (3.1) and (3.3) with the above estimates for the remainders, and using that by (1.8) we have \((z_2+g(z_2))=-g^{-1}(z_2)\), we obtain the system of equations

$$\begin{aligned}&\big ((\zeta -U^{(p)})t\big )_i=u^{(p)}_i+r'_i,\quad |r_i'|\le C (b^{-\varepsilon /2}+b^{-\delta /2}),\nonumber \\&\mathrm {with}\quad \zeta =(g(z_1)g(z_2))^{-1},\quad u^{(p)}_i=\mathrm {1}_{i>p}u_{pi}. \end{aligned}$$
(3.6)

Since \(|g(z_1)g(z_2)|<1\) and

$$\begin{aligned} \Vert U^{(p)}\Vert \le \max _{k}\sum _i|U_{ki}|\le 1+o(1), \end{aligned}$$

the operator \((\zeta -U^{(p)})^{-1}\) can be defined by the Neumann series

$$\begin{aligned} (\zeta -U^{(p)})^{-1}=\sum _{m=0}^\infty \zeta ^{-m-1}(U^{(p)})^m, \end{aligned}$$

and it possesses the properties

$$\begin{aligned} \sum _{k}|(U^{(p)}-\zeta )^{-1}_{ik}|\le C,\;\; i>p,\quad |(U^m)_{ii}|\le Cb^{-1},\; 1\le i\le n. \end{aligned}$$
(3.7)
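The convergence of the Neumann series under \(\Vert U^{(p)}\Vert <|\zeta |\) can be illustrated on a small random matrix with row sums well below \(|\zeta |\) (a numerical sketch, not tied to the specific kernel u):

```python
import numpy as np

rng = np.random.default_rng(2)
n, zeta = 8, 1.5
U = rng.random((n, n)) * 0.1  # nonnegative entries, row sums well below |zeta|

inv = np.linalg.inv(zeta * np.eye(n) - U)
# Neumann series (zeta - U)^{-1} = sum_m zeta^{-m-1} U^m, truncated
series = sum(zeta ** (-m - 1) * np.linalg.matrix_power(U, m) for m in range(80))
assert np.allclose(inv, series)
```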

Applying \((\zeta -U^{(p)})^{-1}\) to both sides of (3.6) and using (3.7), we obtain

$$\begin{aligned} t_i&=\big ((\zeta -U^{(p)})^{-1}u^{(p)}\big )_i+\tilde{r}_i,\quad |\tilde{r}_i|\le C (b^{-\varepsilon /2}+b^{-\delta /2}),\nonumber \\&\Rightarrow b^{-1}T_p''(z_1,z_2)=b^{-1}\big ((\zeta -U^{(p)})^{-1}u^{(p)},u^{(p)}\big )+o(1),\nonumber \\&\Rightarrow E\{\Sigma _1\}=\frac{2}{n\zeta }\sum _p b^{-1}\big ((\zeta -U^{(p)})^{-1}u^{(p)},u^{(p)}\big ) +o(1), \end{aligned}$$
(3.8)

where \(\Sigma _1\) was defined in (2.32).

Proposition 3

Let the matrices U and \(U^{(p)}\) be defined by (3.5), \(\{u_{ij}\}_{i,j=1}^{n}\) satisfy conditions (1.4), the vectors \(u^{(p)}\) be defined by (3.6), and \(|\zeta |>1\). Then

$$\begin{aligned}&\frac{1}{\zeta n}\sum _{p=1}^{n}b^{-1}((\zeta -U^{(p)})^{-1}u^{(p)},u^{(p)}) =-\frac{b}{n}\Big (\mathrm {Tr}\log (1-\zeta ^{-1}U)+\zeta ^{-1}\mathrm {Tr}U\Big ) +O(b^{-1}). \end{aligned}$$
(3.9)

Proof

Denoting by \(S_1(z)\) the l.h.s. of (3.9) and by \(S_2(z)\) the main term in the r.h.s. of (3.9), we have

$$\begin{aligned} S_2(z)&= \frac{b}{n}\sum _{m=2}^{\infty }m^{-1}\zeta ^{-m}\sum U_{i_1i_2}\dots U_{i_mi_1}\\&= \frac{b}{n}\sum _{p=1}^n\sum _{m=2}^{\infty }m^{-1}\zeta ^{-m}\sum _{\min \{i_1,\dots ,i_m\}=p} U_{i_1i_2}\dots U_{i_mi_1}\\&= \frac{b}{n}\sum _{p=1}^n\sum _{m=2}^{\infty }\zeta ^{-m}\sum _{i_2,\dots ,i_{m}>p} U_{pi_2}\dots U_{i_mp}+O(b^{-1})\\&= \frac{1}{bn}\sum _{p=1}^{n}\sum _{m=2}^{\infty }\zeta ^{-m}((U^{(p)})^{m-2}u^{(p)},u^{(p)})+O(b^{-1})=S_1(z)+O(b^{-1}). \end{aligned}$$

The term \(O(b^{-1})\) appears in the third line above as a sum of the terms in which p occurs at least twice among \(\{i_1,\dots ,i_m\}\). In view of (3.7), the contribution of these terms for fixed m can be estimated as

$$\begin{aligned} m|\zeta |^{-m-1}\sum _{k=1}^{m-1} (U^{k})_{pp}(U^{m-k})_{pp}\le m^2|\zeta |^{-m-1}b^{-2}. \end{aligned}$$

After summation with respect to m and multiplication by b we obtain \(O(b^{-1})\). \(\square \)

Now observe that the r.h.s. of (3.9) has a limit as \(n,b\rightarrow \infty \) as in (1.5):

$$\begin{aligned} S_2(z)&=\sum _{m=2}^{\infty }\frac{1}{m\zeta ^{m}}\int u(x_1)u(x_1-x_2)\dots u(x_{m-2}-x_{m-1})u(x_{m-1})d\bar{x} +r_{n,b}\\&=-\int \log \big (1-\zeta ^{-1}\hat{u}(k)\big )dk-\zeta ^{-1}u(0)+o(1), \end{aligned}$$

where \(\hat{u}\) is the Fourier transform of the function u defined as in (1.9). Hence, the proposition and the last line of (3.8) yield

$$\begin{aligned} E\{\Sigma _1\}=-2\Big (\int \log \big (1-\zeta ^{-1}\hat{u}(k)\big )dk+\zeta ^{-1}u(0)\Big )+o(1). \end{aligned}$$

Thus by (2.23) and (2.27) we obtain

$$\begin{aligned}&\frac{b}{n}\mathrm {Cov}\{\gamma (z_1),\gamma (z_2)\}=E\{\Sigma (z_1,z_2)\}\\&\quad = \frac{\partial ^2}{\partial z_1\partial z_2}\Bigg (-2\int \log \big (1-g(z_1)g(z_2)\hat{u}(k)\big )dk\\&\qquad +(w_2-2)g(z_1)g(z_2)u(0) +\kappa _4g^2(z_1)g^2(z_2)\Bigg )+o(1)\\&=:\mathcal {C}(z_1,z_2)+o(1), \end{aligned}$$

where we used also that by (3.6) \(\zeta ^{-1}=g(z_1)g(z_2)\). According to the definition (2.7) and the above relation

$$\begin{aligned} \frac{b}{n}\mathrm {Var}\{\mathcal {N}_n[\varphi _\eta ]\}\rightarrow \int d\lambda _1d\lambda _2\varphi (\lambda _1)\varphi (\lambda _2) C_\eta (\lambda _1,\lambda _2), \end{aligned}$$

where

$$\begin{aligned} C_\eta (\lambda _1,\lambda _2)&=\frac{1}{4\pi ^2}\big (\mathcal {C}(\lambda _1+i\eta ,\lambda _2-i\eta )+ \mathcal {C}(\lambda _1-i\eta ,\lambda _2+i\eta )\\&\quad -\mathcal {C}(\lambda _1+i\eta ,\lambda _2+i\eta )-\mathcal {C}(\lambda _1-i\eta ,\lambda _2-i\eta )\big ). \end{aligned}$$

Now by Proposition 1, for any \(\varphi \) possessing finite norm (1.9) we have

$$\begin{aligned} \frac{b}{n}\mathrm {Var}\{\mathcal {N}_n[\varphi ]\}\rightarrow \lim _{\eta \rightarrow 0} \int d\lambda _1d\lambda _2\varphi (\lambda _1)\varphi (\lambda _2) C_\eta (\lambda _1,\lambda _2). \end{aligned}$$

Let us make the change of variables \(\lambda _1=2\cos x_1\), \(\lambda _2=2\cos x_2\). Then, using that [see (1.8)]

$$\begin{aligned} \lim _{\eta \rightarrow + 0}g(\lambda _\alpha \pm i\eta )=-e^{\mp ix_\alpha },\quad \alpha =1,2, \end{aligned}$$

we obtain (1.10) by a simple calculation.
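These boundary values correspond to the branch of the semicircle Stieltjes transform \(g(z)=\frac{1}{2}(-z+\sqrt{z^2-4})\) that solves \(z+g=-1/g\) [cf. (1.8)] and vanishes at infinity; the limits \(g(2\cos x\pm i0)=-e^{\mp ix}\) can be confirmed numerically (a sanity check; the value of x is an arbitrary choice):

```python
import numpy as np

# Stieltjes transform of the semicircle law; the product of principal square
# roots picks the branch with g(z) -> 0 at infinity, solving g^2 + z g + 1 = 0
def g(z):
    return (-z + np.sqrt(z - 2) * np.sqrt(z + 2)) / 2

x = 0.7                      # any x in (0, pi)
lam = 2 * np.cos(x)
eta = 1e-9
assert abs(g(lam + 1j * eta) - (-np.exp(-1j * x))) < 1e-4
assert abs(g(lam - 1j * eta) - (-np.exp(+1j * x))) < 1e-4
```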

4 Auxiliary Results

Proof of Lemma 2. The first identity of (2.17) yields that it suffices to estimate \(E\{|A'_pA^{-1}_p-E_{p}\{A'_pA^{-1}_p\}|^2\}\). Note that for any a independent of \(\{w_{pi}\}\) we have

$$\begin{aligned} E_p\{|\xi ^\circ |^2\}\le E_p\{|\xi -a|^2\}. \end{aligned}$$

Hence it suffices to estimate

$$\begin{aligned} \bigg |\frac{A'_p}{A_p}-\frac{E_{p}\{A'_p\}}{E_{p}\{A_p\}}\bigg |= \bigg |\frac{A'^\circ _p}{E_{p}\{A_p\}}- \frac{A^{\circ }_p}{E_{p}\{A_p\}}\,\frac{A'_p}{A_p}\bigg |\le \bigg |\frac{A'^{\circ }_p}{E_{p}\{A_p\}}\bigg |+ \bigg |\frac{A^\circ _p}{yE_{p}\{A_p\}}\bigg |. \end{aligned}$$

Here and below \(z=x+iy\), \(y>0\). We also use relation (2.19), which yields, in particular, that \(|{A'_p}/{A_p}|\le y^{-1}\). Using (2.18), we get

$$\begin{aligned} E_p\Big \{\Big |\frac{A^\circ _p}{E_{p}\{A_p\}}\Big |^2\Big \}\le \frac{Cb^{-2}\hbox {Tr }G^{(p)}I_pG^{(p)*}}{|E_{p}\{A_p\}|^2}. \end{aligned}$$
(4.1)

Similarly

$$\begin{aligned} E_p\bigg \{\bigg |\frac{A'^\circ _p}{E_{p}\{A_p\}}\bigg |^2\bigg \}\le \frac{Cb^{-2}\hbox {Tr }(G^{(p)})^2I_p(G^{(p)*})^2}{|E_{p}\{A_p\}|^2}\le \frac{Cb^{-2}\hbox {Tr }G^{(p)}I_pG^{(p)*}}{y^2|E_{p}\{A_p\}|^2}. \end{aligned}$$

Thus

$$\begin{aligned} \frac{b}{n}E\{|(\gamma _n(z))^\circ |^2\}\le Cn^{-1} \sum _p\frac{\hbox {Tr }G^{(p)}I_pG^{(p)*}}{by^2|E_{p}\{A_p\}|^2}. \end{aligned}$$
(4.2)

Notice that the Hölder inequality implies for any \(\delta >0\)

$$\begin{aligned}&\int \Big |b^{-1}\sum _{|j-p|\le bC^*}u_{pj} G^{(p)}_{jj}(x+iy)\Big |^{1+\delta }dx\\&\quad \le Cb^{-1}\sum _{|j-p|\le bC^*}u_{pj}\int |G^{(p)}_{jj}(x+iy)|^{1+\delta }dx\\&\quad \le b^{-1}\sum _{|j-p|\le bC^*}u_{pj}\int \sum _k\frac{|(\psi _k,e_j)|^2}{|(x-\lambda _k)^2+y^2|^{(1+\delta )/2}}dx\le C\delta ^{-1} y^{-\delta }. \end{aligned}$$

Hence, denoting \(\mathcal {L}_p=\{x:|b^{-1}\sum u_{pj}G^{(p)}_{jj}(x+iy)|>1\}\), we obtain for \(0<y<\frac{1}{2}\)

$$\begin{aligned} \int 1_{\mathcal {L}_p}dx\le C\min _{\delta }\{\delta ^{-1} y^{-\delta }\}\le C\log y^{-1}. \end{aligned}$$
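The logarithmic bound comes from optimizing over \(\delta \): the minimum of \(\delta ^{-1}y^{-\delta }\) over \(\delta >0\) is attained at \(\delta =1/\log y^{-1}\) and equals \(e\log y^{-1}\). A numerical confirmation by grid search (illustrative only; the value of y is an arbitrary choice):

```python
import numpy as np

# minimize f(delta) = delta^{-1} y^{-delta} over delta > 0;
# calculus gives the minimizer delta = 1/log(1/y) and minimum e * log(1/y)
y = 1e-3
L = np.log(1 / y)
deltas = np.linspace(1e-3, 2.0, 200001)
f = y ** (-deltas) / deltas
assert abs(deltas[np.argmin(f)] - 1 / L) < 1e-3
assert abs(f.min() - np.e * L) < 1e-3
```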

Then, using once more that by (2.19) each summand in the r.h.s. of (4.2) is bounded by \(y^{-4}\), we get

$$\begin{aligned}&\int \frac{b}{n}E\{|(\gamma _n(x+iy))^\circ |^2\}dx\\&\quad \le Cn^{-1}\sum _{p}\Bigg (\int _{\mathbb {R}{\setminus }([-1,1]\cup \mathcal {L}_p)}(y^2b)^{-1}\hbox {Tr }G^{(p)}(x+iy)I_pG^{(p)*}(x+iy)dx\\&\qquad +Cy^{-4}\int _{[-1,1]\cup \mathcal {L}_p}dx\Bigg )\\&\quad \le Cy^3+Cy^{-4}\log y^{-1} \le C'y^{-4}\log y^{-1}. \end{aligned}$$

\(\square \)

Proof of Lemma 3. It follows from (2.17) that

$$\begin{aligned} E\{|G_{pp}-E\{G_{pp}\}|^2\}&\le |\mathfrak {I}z|^{-2}E\{|A_p-E\{A_p\}|^2\}\nonumber \\&\le 2|\mathfrak {I}z|^{-2}(E\{|\bar{A}_p-E\{\bar{A}_p\}|^2\} +\mathrm {Var}\{ A_p^\circ \})\nonumber \\&\le 2|\mathfrak {I}z|^{-2}b^{-1}\sum u_{pi}\mathrm {Var}\{G^{(p)}_{ii}\}+Cb^{-1}. \end{aligned}$$
(4.3)

But since

$$\begin{aligned} E\{|G_{ii}-G^{(p)}_{ii}|\}\le |\mathfrak {I}z|^{-2}b^{-1}E\{|(G^{(p)}v^{(p)})_i|^2\}\le C|\mathfrak {I}z|^{-2}b^{-1}, \end{aligned}$$

we have

$$\begin{aligned} \mathrm {Var}\{G^{(p)}_{ii}\}= \mathrm {Var}\{G_{ii}\}+O(b^{-1})=\mathrm {Var}\{G_{pp}\}+O(b^{-1}). \end{aligned}$$
(4.4)

Here the last equality is due to the invariance of the distribution of M with respect to the “shift” \(i\rightarrow (i+1)\mod (n)\). Hence for any \(z:|\mathfrak {I}z|\ge 2\) we obtain from (4.3)

$$\begin{aligned} \mathrm {Var}\{G_{pp}\}\le 2|\mathfrak {I}z|^{-2}\mathrm {Var}\{G_{pp}\}+Cb^{-1}\mathrm {Var}\{G_{pp}\}+ 2Cb^{-1}. \end{aligned}$$
(4.5)

Let us fix any \(z=x+i\eta \) with \(0<\eta <2\) and consider the function

$$\begin{aligned} \phi (\zeta )=\log (c_0b^{1/2}|\mathrm {Cov}\{G_{pp}(\zeta ),G_{pp}(z)\}|) \end{aligned}$$

in the half-disk \(\Omega =\{\mathfrak {I}\zeta <2\}\cap \{|\zeta -x-2i|\le 2-\eta /2\}\). It is a harmonic function, and in view of (4.5), for \(\mathfrak {I}\zeta =2\) we can choose \(c_0\) sufficiently small to have

$$\begin{aligned} c_0b^{1/2}|\mathrm {Cov}\{G_{pp}(\zeta ),G_{pp}(z)\}|&\le c_0 b^{1/2}\mathrm {Var}^{1/2}\{G_{pp}(\zeta )\}\mathrm {Var}^{1/2}\{G_{pp}(z)\}\le 1\\&\Rightarrow \phi (\zeta )\le 0,\quad \zeta \in \ell _1:=\partial \Omega \cap \{\mathfrak {I}\zeta =2\}. \end{aligned}$$

Moreover, in view of the trivial bound \(|G_{pp}(\zeta )|\le |\mathfrak {I}\zeta |^{-1}\), we have

$$\begin{aligned} \phi (\zeta )\le \log b^{1/2}+\log c_0\eta ^{-2},\quad \zeta \in \ell _2:=\partial \Omega \cap \{|\zeta -x-2i|= 2-\eta /2\}. \end{aligned}$$

Hence, by the theorem on two constants (see [5], p. 296), we have

$$\begin{aligned} \phi (\zeta )\le (\log b^{1/2}+\log c_0\eta ^{-2})\omega (\zeta ), \end{aligned}$$
(4.6)

where the harmonic function

$$\begin{aligned} \omega (\zeta ):=\frac{2}{\pi }\mathfrak {I}\log \frac{2-\eta /2-(\zeta -x-2i)}{2-\eta /2+(\zeta -x-2i)}, \end{aligned}$$

satisfies the conditions

$$\begin{aligned} \omega (\zeta )=0,\;\zeta \in \ell _1,\qquad \omega (\zeta )=1,\;\zeta \in \ell _2. \end{aligned}$$
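These boundary values can be confirmed numerically from the explicit formula for \(\omega \) (a sanity check; the particular x, \(\eta \), and boundary points below are arbitrary choices):

```python
import numpy as np

x0, eta = 0.5, 0.4
c = x0 + 2j                 # center of the half-disk Omega
R = 2 - eta / 2

def omega(zeta):
    return (2 / np.pi) * np.log((R - (zeta - c)) / (R + (zeta - c))).imag

# omega = 0 on the diameter ell_1 (Im zeta = 2), omega = 1 on the lower semicircle ell_2
w1 = omega(c + 0.3 * R)                          # a point on ell_1
w2 = omega(c + R * np.exp(-1j * np.pi / 3))      # a point on ell_2
assert abs(w1) < 1e-12
assert abs(w2 - 1) < 1e-12
```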

Since \(\omega (z)=1-2\delta \) with some \(\delta (\eta )>0\), (4.6) implies the first line of (2.25):

$$\begin{aligned} c_0b^{1/2}\mathrm {Var}\{G_{pp}(z)\}\le (c_0b^{1/2})^{1-2\delta }\Rightarrow \mathrm {Var}\{G_{pp}(z)\}\le Cb^{-\delta }. \end{aligned}$$

Using (2.17), (4.3), and (4.6), we get similarly to (4.4),

$$\begin{aligned} E\{G_{pp}(z)\}&=-(z+E\{G_{pp}(z)\})^{-1}+O(b^{-\delta })\\&\Rightarrow E\{G_{pp}(z)\}= g(z)+O(b^{-\delta }). \end{aligned}$$

Thus, we have proved the second line of (2.25). \(\square \)