Abstract
We consider the fluctuations of linear eigenvalue statistics of random band \(n\times n\) matrices whose entries have the form \(\mathcal {M}_{ij}=b^{-1/2}u^{1/2}(|i-j|/b)\tilde{w}_{ij}\) with i.i.d. \(\tilde{w}_{ij}\) possessing a finite \((4+\varepsilon )\)th moment, where the function u has compact support \([-C^*,C^*]\), so that \(\mathcal {M}\) has only \(2C^*b+1\) nonzero diagonals. The parameter b (called the bandwidth) is assumed to grow with n in such a way that \(b/n\rightarrow 0\). Without any additional assumptions on the growth of b we prove the CLT for linear eigenvalue statistics for a rather wide class of test functions. Thus we improve and generalize the results of the previous papers (Jana et al., arXiv:1412.2445; Li et al. Random Matrices 2:04, 2013), where the CLT was proven under the assumption \(n\gg b\gg n^{1/2}\). Moreover, we develop a method which allows one to prove automatically the CLT for linear eigenvalue statistics of smooth test functions for almost all classical models of random matrix theory: deformed Wigner and sample covariance matrices, sparse matrices, diluted random matrices, matrices with heavy tails, etc.
1 Introduction and Main Results
Consider an ensemble of random symmetric \(n\times n\) matrices with entries of the form
where \(\{\tilde{w}_{ij}\}_{1\le i<j\le n}\) are i.i.d. (up to the symmetry \(\tilde{w}_{ij}=\tilde{w}_{ji}\)) random variables, satisfying the moment conditions
diagonal entries \(\{\tilde{w}_{ii}\}_{1\le i\le n}\) are also i.i.d., independent of off diagonal entries,
and u(x) is a piecewise continuous function (with a finite number of jumps), continuous at \(x=0\), with compact support, satisfying the conditions
It is easy to see that the entries of \(\mathcal {M}\) are nonzero only inside the band \(|i-j|\le C^*b\). Hence for fixed b we have a matrix with a finite number of diagonals, while if \(b\sim n\), we obtain a matrix of Wigner type, with all entries having variances of the same order (see [20]). The model is now widely discussed in the mathematical literature, since according to a non-rigorous conjecture of [6] the behavior of local eigenvalue statistics is expected to exhibit a kind of phase transition: for \(b\ll n^{1/2}\) the statistics are of Poisson type, and for \(b\gg n^{1/2}\) they are of the same type as for Wigner matrices. This result has not yet been proven rigorously, but the problem is one of the most challenging in random matrix theory (see, e.g., [3, 4, 18, 19] and references therein).
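Although no numerics enter the proofs below, the model (1.1) is easy to simulate. The following minimal sketch uses assumed choices that the text does not fix: \(u=\tfrac12 1_{[-1,1]}\) (so \(C^*=1\) and \(\int u\,dx=1\)) and Gaussian entries \(\tilde w_{ij}\); it only illustrates the band structure.

```python
# Sketch: sampling one band matrix of type (1.1).
# Assumed (not from the text): u = (1/2) 1_{[-1,1]}, Gaussian entries.
import numpy as np

def band_matrix(n, b, rng):
    dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))  # |i - j|
    u = 0.5 * (dist <= b)                 # u(|i-j|/b) with C* = 1
    w = rng.standard_normal((n, n))
    w = np.triu(w) + np.triu(w, 1).T      # symmetrize: w_ij = w_ji
    return b ** -0.5 * np.sqrt(u) * w     # M_ij = b^{-1/2} u^{1/2}(|i-j|/b) w_ij

rng = np.random.default_rng(0)
n, b = 200, 10
M = band_matrix(n, b, rng)
dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
assert np.allclose(M, M.T)                # real symmetric
assert np.all(M[dist > b] == 0.0)         # nonzero only inside the band |i-j| <= C* b
```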
It was proved many years ago (see [10]) that in the limit
the normalized eigenvalue counting measure converges weakly to the Wigner semicircle law, which has the density
This means that if we denote by \(\{\lambda _i\}_{i=1}^n\) the eigenvalues of \(\mathcal {M}\), choose any bounded integrable test function \(\varphi \), and consider the linear eigenvalue statistics of the form
then in the limit (1.5) we have
In particular, for \(\varphi (\lambda )=(\lambda -z)^{-1}\)
Notice that below we will almost always omit the argument \(\mathcal {M}\) of \(\mathcal {N}_n[\varphi ,\mathcal {M}]\) and use it only in the proof of Lemma 1, where we compare the linear eigenvalue statistics of \(\mathcal {M}\) and of the truncated matrix M.
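The semicircle limit can also be observed numerically. A minimal sketch, with the same assumed \(u=\tfrac12 1_{[-1,1]}\) and Gaussian entries as above, compares the empirical Stieltjes transform \(n^{-1}\mathrm {Tr\;}(\mathcal M-z)^{-1}\) with the Stieltjes transform of the semicircle law, \(g(z)=(-z+\sqrt{z^2-4})/2\) with the branch for which \(g(z)\sim -1/z\) at infinity.

```python
# Numerical sketch of the semicircle limit: not part of the proof,
# and the choices of u and of the entry distribution are assumptions.
import numpy as np

def band_matrix(n, b, rng):
    dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    u = 0.5 * (dist <= b)
    w = rng.standard_normal((n, n))
    w = np.triu(w) + np.triu(w, 1).T
    return b ** -0.5 * np.sqrt(u) * w

rng = np.random.default_rng(1)
lam = np.linalg.eigvalsh(band_matrix(2000, 100, rng))

z = 1.0 + 1.0j
g_emp = np.mean(1.0 / (lam - z))          # n^{-1} Tr (M - z)^{-1}
g_sc = (-z + np.sqrt(z * z - 4)) / 2      # semicircle Stieltjes transform
assert abs(g_emp - g_sc) < 0.1
```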
The next natural question is the behavior of the fluctuations \(\mathcal {N}_n^\circ [\varphi ]\) in the same limit, in particular, the behavior of its variance. This question was partially solved in the paper [9], where the main term of the covariance of the traces of two resolvents was found in the case of Gaussian \(\tilde{w}_{ij}\) and under the additional restriction \(b=n^\theta \), \(1/3<\theta <1\). The next step was made in the papers [8, 11], where the Central Limit Theorem (CLT) for the random variable \(\sqrt{b/n}\mathcal {N}_n^\circ [\varphi ]\) was proved for sufficiently smooth test functions, but again under the technical condition \(n\gg b\gg n^{1/2}\).
The main result of the present paper is the proof of CLT for the linear eigenvalue statistics (1.7) of the band matrices under the limiting transition (1.5) without any additional restriction on the growth of b.
We consider the test functions from the space \(\mathcal {H}_s\), possessing the norm
Theorem 1
Consider an ensemble of random real symmetric band matrices (1.1)–(1.4) and any test function possessing the norm (1.9) with \(s>2\). Then the sequence of random variables \(\sqrt{b/n}\mathcal {N}_n^\circ [\varphi ]\), with \(\mathcal {N}_n^\circ [\varphi ]\) of (1.7), converges in distribution in the limit (1.5) to a normal random variable with zero mean and the variance
where \((u,u)=\int u^2(x)dx\) and \(\hat{u}(k)\) is the Fourier transform of the function u defined as in (1.9)
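The normalization \(\sqrt{b/n}\) in Theorem 1 reflects that \(\mathrm {Var}\{\mathcal {N}_n[\varphi ]\}\) grows like n / b. A Monte Carlo sketch, under the same assumed \(u=\tfrac12 1_{[-1,1]}\) and Gaussian entries: the test function \(\varphi (\lambda )=\lambda ^2\) is not in the class \(\mathcal {H}_s\) of the theorem, but it makes \(\mathcal {N}_n[\varphi ]=\mathrm {Tr\;}M^2\) computable without diagonalization, so only the order of the variance is being illustrated here.

```python
# Monte Carlo sketch of the n/b scaling of Var{N_n[phi]} for phi(x) = x^2.
# Assumed (not from the text): u = (1/2) 1_{[-1,1]}, Gaussian entries.
import numpy as np

def band_matrix(n, b, rng):
    dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    u = 0.5 * (dist <= b)
    w = rng.standard_normal((n, n))
    w = np.triu(w) + np.triu(w, 1).T
    return b ** -0.5 * np.sqrt(u) * w

rng = np.random.default_rng(7)
n, b = 400, 20
stats = np.array([(band_matrix(n, b, rng) ** 2).sum() for _ in range(300)])
var_stat = (b / n) * stats.var()          # sample Var{ sqrt(b/n) N_n^o[phi] }
assert 0.8 < var_stat < 5.0               # O(1), consistent with Theorem 1's scaling
```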
Remark
Inspecting the proof of Theorem 1, it is easy to see that it can be adapted to the hermitian case, i.e., the model (1.1) with complex valued independent (up to the symmetry conditions) entries \(\tilde{w}_{ij}\) satisfying the first, second and fourth moment relations of (1.2) and (1.3). If we assume in addition that the real and imaginary parts of the entries are also i.i.d. and denote \(\kappa ^{(1)}_4=E\{(\mathfrak {R}w_{12})^4\} -3E^2\{(\mathfrak {R}w_{12})^2\}\), then the variance has a form similar to (1.10), with the first summand divided by 2, \(\kappa _{4}\) in the second summand replaced by \(2\kappa ^{(1)}_4\), and \(w_2-2\) in the third summand replaced by \(w_2-1\).
To prove the CLT for band matrices, we use the CLT for martingales (see [2, Theorem 35.12]).
Theorem 2
Let \( X_{k}=E_{<k}\{Y-E_{k}Y\}\), \(k=1,\dots ,n\), be a martingale difference array corresponding to a real valued function \(Y(V_1,\dots ,V_n)\) depending on a family of independent random vectors \(V_1,\dots ,V_n\), and set \(S_n=\sum _{k=1}^nX_k\). Here and below \(E_{k}\) denotes the averaging with respect to \(V_k\) and \(E_{<k}=E_1\dots E_{k-1}\). Set \(\sigma _n=\sum _{k=1}^nE\{X_k^2\}\) and let \(\sigma _n=O(1)\). Assume that
Then
Remark
Here we have replaced the more general condition \(\sum E\{X_k^21_{|X_k|>\delta }\}\rightarrow 0\) used in [2] by condition (1), which is easier to check for random matrix models.
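The martingale difference structure of Theorem 2 can be seen in a toy example away from the matrix setting. For \(Y=(V_1+\dots +V_n)^2\) with i.i.d. standard normal \(V_k\), the conditional averagings are explicit: with \(B_k:=V_k+\dots +V_n\) one has \(E_{<k}Y=(k-1)+B_k^2\), hence \(X_k=V_k^2+2V_kB_{k+1}-1\), \(E_kX_k=0\), and the sum telescopes to \(S_n=Y-E\{Y\}\).

```python
# Toy illustration (assumed example, not the matrix setting) of the
# martingale difference array of Theorem 2 for Y = (V_1 + ... + V_n)^2:
#   X_k = E_{<k}{Y - E_k Y} = V_k^2 + 2 V_k B_{k+1} - 1,  B_k = V_k + ... + V_n,
# and sum_k X_k = Y - E{Y} = Y - n exactly.
import numpy as np

rng = np.random.default_rng(2)
n = 1000
V = rng.standard_normal(n)
B = np.concatenate([np.cumsum(V[::-1])[::-1], [0.0]])  # B[k] = V_k + ... + V_n
X = V ** 2 + 2 * V * B[1:] - 1.0

Y = V.sum() ** 2
assert abs(X.sum() - (Y - n)) < 1e-6      # telescoping: S_n = Y - E{Y}
```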
The idea of using Theorem 2 to prove the CLT in random matrix theory is not new. Since the paper [1] it has been used many times (see, e.g., [7, 13, 15]), but the method of proof used in the present paper allows one to prove the CLT in the same way for all classical models of random matrix theory: deformed Wigner and sample covariance matrices, sparse and diluted random matrices, etc. For these models the proof becomes even simpler than for band matrices, since the verification of condition (2) becomes simpler.
The paper is organized as follows. In Sect. 2.1 we give a sketch of the proof of the CLT, introduce the truncated band matrix, and explain how one can extend the CLT from a special class of test functions to all functions of \(\mathcal {H}_s\). In Sect. 2.2 we check conditions (1.11), and in Sect. 2.3 we prove Lemma 1 (stated in Sect. 2.1) on the difference of linear eigenvalue statistics of the initial and truncated matrices. In Sect. 3 we compute the variance (1.10), and in Sect. 4 we give the proofs of some auxiliary results (partially known before) in order to make the proof of Theorem 1 more self-contained.
2 Proof of CLT
2.1 Strategy of the Proof
We start from the proof of CLT for the truncated and “periodically continued” model:
Here and below
and \(\{\omega _{ij}\}_{||i-j|-n|\le C^*b}\) are independent (up to the symmetry conditions) copies of \(w_{12}\), independent of \(\mathcal {M}\). Thus we not only truncate the entries of \(\mathcal {M}\), but also add entries in its upper right and lower left corners, in order to obtain a periodic distribution, i.e., one invariant with respect to the shift \(i\rightarrow |i+1|_n\).
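The periodization replaces the distance \(|i-j|\) by the periodic distance \(|i-j|_n=\min (|i-j|,n-|i-j|)\), which fills the corners of the matrix. A sketch, with the same assumed \(u=\tfrac12 1_{[-1,1]}\) and Gaussian entries as before:

```python
# Sketch of the truncated, periodically continued matrix M of (2.1).
# Assumed (not from the text): u = (1/2) 1_{[-1,1]}, Gaussian entries.
import numpy as np

def periodic_band_matrix(n, b, rng):
    d = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    d = np.minimum(d, n - d)                  # periodic distance |i - j|_n
    u = 0.5 * (d <= b)
    w = rng.standard_normal((n, n))
    w = np.triu(w) + np.triu(w, 1).T
    return b ** -0.5 * np.sqrt(u) * w

M = periodic_band_matrix(100, 10, np.random.default_rng(5))
assert M[0, 99] != 0.0                        # corner entries are now filled
assert np.allclose(M, M.T)
```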
Then the standard argument gives us that for \(|i-j|_n\le C^*b\)
Moreover, it is easy to see that
Then, using Theorem 2, we prove CLT for \(\nu _{1n}:=(b/n)^{1/2}\mathcal {N}_n^\circ [\varphi _\eta ,M]\) with the test functions of the form
where \(*\) means a convolution, \(\mathcal {P}_\eta \) is a Poisson kernel
and \(\varphi \in \mathcal {H}_s\cap L_1(\mathbb {R})\). It is easy to see that then
We shall then prove the following lemma.
Lemma 1
Set \(\mathcal {G}(z)=(\mathcal {M}-z)^{-1}\), \(\tilde{\gamma }_n(z):=\mathrm {Tr\;}\mathcal {G}(z)\) and compare \(\tilde{\gamma }_n(z)\) with \(\gamma _n(z)\) of (2.7). Then for any fixed \(\eta >0\) uniformly in \(z:\mathfrak {I}z>\eta \)
The lemma implies that for any \(\varphi \in \mathcal {H}_s\cap L_1(\mathbb {R})\) if we set \(\nu _{2n}:=(b/n)^{1/2}\mathcal {N}_n^\circ [\varphi _\eta ,\mathcal {M}]\), then
Hence, for any fixed \(x\in \mathbb {R}\)
Thus, the CLT for \(\nu _{1n}\) and Lemma 1 imply the CLT for \(\nu _{2n}\), if the test function has the form (2.5).
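The smoothing (2.5) by the Poisson kernel \(\mathcal {P}_\eta (x)=\eta /(\pi (x^2+\eta ^2))\) can be sketched numerically; the Gaussian test function below is an assumed example, used only to check that \(\varphi _\eta \rightarrow \varphi \) as \(\eta \rightarrow 0\).

```python
# Sketch of phi_eta = P_eta * phi with the Poisson kernel; phi = exp(-x^2)
# is an assumed test function for the check.
import numpy as np

def poisson_smooth(phi_vals, x, eta):
    dx = x[1] - x[0]
    ker = eta / (np.pi * (x ** 2 + eta ** 2))     # P_eta on the same grid
    return np.convolve(phi_vals, ker, mode="same") * dx

x = np.linspace(-20.0, 20.0, 4001)                # symmetric grid, center at x = 0
phi = np.exp(-x ** 2)
eta = 0.02
phi_eta = poisson_smooth(phi, x, eta)

mid = np.abs(x) < 5                               # avoid truncation effects at the ends
assert np.max(np.abs(phi_eta[mid] - phi[mid])) < 0.1   # phi_eta -> phi as eta -> 0
```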
To extend the CLT to test functions from \(\mathcal {H}_s\), we use the following proposition (see [14, Proposition 3.2.9]).
Proposition 1
Let \(\{\xi _{l}^{(n)}\}_{l=1}^{n}\) be a triangular array of random variables, \(\mathcal {N}_{n}[\varphi ]=\sum _{l=1}^{n}\varphi (\xi _{l}^{(n)})\) be its linear statistics corresponding to a test function \(\varphi :\mathbb {R}\rightarrow \mathbb {R}\), and \(\{d_n\}_{n\ge 1}\) be a sequence of positive numbers. Assume that
(a) there exists a vector space \(\mathcal {L}\) endowed with a norm \(\Vert \cdot \Vert \) such that uniformly in \(\varphi \in \mathcal {L}\)
$$\begin{aligned} d_n\mathrm {Var}\{\mathcal {N}_{n}[\varphi ]\}\le C\Vert \varphi \Vert ^2,\;\forall \varphi \in \mathcal {L}; \end{aligned}$$(2.9)
(b) there exists a dense linear manifold \(\mathcal {L}_{1}\subset \mathcal {L}\) such that the CLT is valid for \(\mathcal {N}_{n}[\varphi ]\), \(\varphi \in \mathcal {L}_{1}\), i.e., there exists a continuous quadratic functional \(V:\mathcal {L}_{1}\rightarrow \mathbb {R}_{+}\) such that uniformly in x, varying on any compact interval,
$$\begin{aligned} \lim _{n\rightarrow \infty }Z_{n}[x\varphi ]=e^{-x^{2}V[\varphi ]/2},\;\forall \varphi \in \mathcal {L}_{1},\quad \text {where}\quad Z_{n}[x\varphi ]:={E}\big \{ e^{ix d_n^{1/2}\mathcal {N}^\circ _{n}[\varphi ]}\big \}. \end{aligned}$$(2.10)
Then V admits a continuous extension to \(\mathcal {L}\) and the CLT is valid for all \(\mathcal {N}_{n}[\varphi ]\), \(\varphi \in \mathcal {L}\).
The proposition allows one to extend the CLT from any dense subset of \(\mathcal {H}_s\) on which we are able to prove the CLT to the whole of \(\mathcal {H}_s\), provided we can check (2.9). This can be done by using another proposition (proven in [16]; see also [17]) and Lemma 2.
Proposition 2
For any \(s>0\) and any \(\mathcal {M}\)
Lemma 2
If the conditions (1.1) and (1.4) are satisfied, then for any \(0<y<\frac{1}{2}\)
The proof of the lemma is given in Sect. 4.
Combining the proposition with (2.12), we prove (2.9).
2.2 Checking of Conditions (1.11)
According to the previous section, it suffices to prove the CLT for functions of the form (2.5) with a fixed n-independent \(\eta \). Hence, everywhere below we assume that \(|\mathfrak {I}z|\ge \eta \) with some n-independent \(\eta \), so that almost all bounds below depend on \(\eta \).
To apply Theorem 2, we denote by \(E_{p}\) the averaging with respect to the variables \(\{w_{p,j}\}_{j\ge p}\), set \(E_{<p}=E_{1}\dots E_{p-1}\), and consider
Then, according to Theorem 2, we have to check conditions (1)–(2) of (1.11) for \(\{X_p[\varphi _\eta ]\}\). It is evident that condition (1) follows from the bounds
valid uniformly in \(|\mathfrak {I}z|\ge \eta \). Since
condition (2) of (1.11) follows from the bound, uniform in \(|\mathfrak {I}z_1|,|\mathfrak {I}z_2|\ge \eta \),
Let us prove (2.14) and (2.15).
Denote by \(M^{(p)}\) the \((n-1)\times (n-1)\) matrix obtained from M by removing the pth row and column. Set also
Use the identities
where
Since for the resolvent \(G(z)=(M-z)^{-1}\) of any symmetric or hermitian matrix M and any vector m
we have for \(|\mathfrak {I}z|\ge \eta \)
and
which implies the second inequality of (2.14).
The last relation of (2.17) yields
Here and below for any random variable \(\xi \) we denote \(\xi ^\circ _p=\xi -E_p\{\xi \}\).
Since \(D_p(z_1,z_2)\) is an analytic function of \(z_1,z_2\) in the domain \(|\mathfrak {I}z_1|,|\mathfrak {I}z_2|\ge \eta /2\), in order to prove the first bound of (2.14), it suffices to prove that uniformly in \(|\mathfrak {I}z|\ge \eta /2\)
Evidently
Hence, applying \(E_p\) and using (2.3), we obtain the first bound of (2.14). Similarly one can obtain a relation which we will need below:
We are left to check (2.15). Writing \(A_p=\bar{A}_p+A_p^\circ \), expanding \(\log A_p\) around \(\bar{A}_p\), and using (2.22), we obtain
where
Lemma 3
Given \(\eta >0\) there exists \(\delta (\eta )>0\) such that uniformly in \(z:|\mathfrak {I}z|>\eta \)
where g(z) is defined by (1.8).
The proof of the lemma is given in Sect. 4.
Remark 1
Below we will often use a simple observation: if random variables \(R_k\) satisfy \(|R_k|\le C_k\) with \(\sum _kC_k\le C\), and random variables \(f_k\) satisfy \(E\{|f_k-f^*_k|\}\le C_1b^{-\delta }\) for some constants \(f^*_k\), then we have, with the same C and \(C_0\) as in (2.25),
In particular, since in view of (2.24) \(| T_{p}(z_1,z_2)|\le Cb^{-1}\), we have
where \(T_{p}'(z_1,z_2)\) is the first sum in the r.h.s. of (2.24). The constant term here does not contribute to the variance of \(\Sigma (z_1,z_2)\), so it is not important in the proof of (2.15).
Let us denote by \(\tilde{M}^{(<p)}\) the matrix M whose entries \(w_{ij}\) with \(\min \{i,j\}<p\) are replaced by \( w_{ij}'\), which are independent of all \(\{w_{kl}\}_{k,l=1}^n\) and have the same distribution as \(w_{ij}\). Let also \(\tilde{M}^{(<p,q)}\) be the matrix \(\tilde{M}^{(<p)}\) without the qth row and column. We also denote by \(\tilde{E}_{<p}\) the averaging with respect to all \(w_{ij}\) and \(w_{ij}'\) with \(\min \{i,j\}<p\). Set
Then evidently
where we denote by \(I^{(p)}\) the diagonal matrix with the entries
Moreover, if we replace \(G^{(p)}\) in (2.24) by G and set
then in view of (2.17) and (2.21)
where we have used that, since \(Q^{(p)}\) is a rank one matrix with bounded norm, for any bounded matrix B
Thus we need to study the variance of
To prove (2.15), it suffices to show that
The last relation is a corollary of the following bounds, which we now prove:
By (2.32),
Notice also that \(\big (T_p''(z_1,z_2)\big )^\circ _r=0\) for \(p\ge r+1\), hence the sum in (2.35) is over \(p\le r\).
Then (2.17) yields
where “+sim” means adding the term obtained from the previous one by interchanging \(z_1\) and \(z_2\). Since \(E\{|\xi ^\circ _r|^2\}\le E\{|\xi |^2\}\) for any random variable \(\xi \), (2.34) yields
To perform the summation over p in the r.h.s. of (2.35), we would like to use the property
but since p appears not only in \(I^{(p)}\), first we need to remove p from other places. Write
Here in the first line we use (2.35); in the second line we first use that for \(p\le q\) the averaging \(E_{\le p}\) can be replaced by \(E_{\le q}\), and then use (2.36) for the summation over \(p\le r\). The third line follows from the second in view of the bound \(\Vert G^{(r)}\Vert \le C\). Next we split the sum over q into two parts: one over \(q<r-C^*b\) and another over \(r-C^*b\le q\le r\), and observe that for q in the first part \((v^{(r)},v^{(r)})\) is constant with respect to the averaging \(E_{<q}\); hence
Then we can take the sum over \(q<r-C^*b\), using again the bound (2.36), and complete the estimate of the sum using the bound \(\Vert G^{(r)}\Vert \le C\). As for the terms with \(r-C^*b\le q\le r\), they are estimated using only the boundedness of \(\Vert G^{(r)}\Vert \) and \(\Vert I^{(p)}\Vert \). Thus we have proved (2.33). \(\square \)
2.3 Proof of Lemma 1
Set
The same argument as in the previous section implies that it suffices to check that
Since we know that [see (2.24)]
we conclude that it suffices to prove that
Let us write
Averaging with respect to \( v^{(p)}\) and \(\tilde{v}^{(p)}\) we get similarly to (2.24) for \(|p|_n\ge cb\)
Similarly
In addition, again similarly to (2.24) we have
Now, in the same way as in (2.30)–(2.31), we can replace here \(\mathcal {G}^{(p)}\) by \(\mathcal {G}\) and \({G}^{(p)}\) by G with an error \(O(b^{-2})\):
The resolvent identity implies
Hence, the last term in the r.h.s. of (2.42) can be estimated as
Hence, using (2.36) and (2.4), we obtain
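The resolvent identity invoked above holds for any two symmetric matrices in the general form \(G_1(z)-G_2(z)=-G_1(z)(M_1-M_2)G_2(z)\), \(G_k=(M_k-z)^{-1}\); a quick numerical sanity check:

```python
# Check of the resolvent identity G1 - G2 = -G1 (M1 - M2) G2.
import numpy as np

rng = np.random.default_rng(6)
n, z = 10, 0.5 + 1.0j
A, B = rng.standard_normal((n, n)), rng.standard_normal((n, n))
M1, M2 = (A + A.T) / 2, (B + B.T) / 2          # two symmetric matrices
G1 = np.linalg.inv(M1 - z * np.eye(n))
G2 = np.linalg.inv(M2 - z * np.eye(n))
assert np.allclose(G1 - G2, -G1 @ (M1 - M2) @ G2)
```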
3 Variance
In view of (2.32), to find \(\Sigma _1\), it suffices to find the main order of \(b^{-1}E\{T_p''(z_1,z_2)\}\) defined in (2.30). To this end it suffices to compute, for any i, the main order of
Consider
where we used Lemma 3 for the last equality.
The idea is to compute the l.h.s. above in a way which gives an equation with respect to \(\{t_i\}_{i>p}\). This is possible by using the following formula (see, e.g., [14]), valid for any random variable \(\xi \) which has zero mean and possesses \(m+2\) moments, and any function F possessing \(m+1\) bounded derivatives:
where \(\kappa _s\) is the sth cumulant of \(\xi \), i.e., the coefficient of \(x^s/s!\) in the formal expansion of \(\log E\{e^{x\xi }\}\) in powers of x. We will use that \(\kappa _1=E\{\xi \}\), \(\kappa _2=E\{|\xi ^\circ |^2\}\), and \(\kappa _s\le C_sE\{|\xi |^s\}\) with some absolute constants \(C_s\).
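In the Gaussian case all cumulants \(\kappa _s\) with \(s\ge 3\) vanish, so the cumulant expansion (3.2) collapses to Stein's identity \(E\{\xi F(\xi )\}=\kappa _2E\{F'(\xi )\}\). A Monte Carlo check with the assumed test function \(F=\sin \) (for \(\xi \sim N(0,1)\) one has \(E\{\cos \xi \}=e^{-1/2}\)):

```python
# Check of the cumulant expansion in the Gaussian case (Stein's identity).
import numpy as np

rng = np.random.default_rng(3)
xi = rng.standard_normal(10 ** 6)

lhs = np.mean(xi * np.sin(xi))     # E{xi F(xi)}
rhs = np.mean(np.cos(xi))          # kappa_2 E{F'(xi)}, kappa_2 = 1
assert abs(lhs - rhs) < 0.01
assert abs(rhs - np.exp(-0.5)) < 0.01
```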
Applying the formula for \(\xi =b^{-1/2}v_{ik}\), \(m=4\), and \(F_{ijk}=\tilde{G}_{ji}(z_1)G_{ik}(z_2)\), we get
Here we used the differentiation formula for the resolvent of any symmetric matrix M
and split the terms corresponding to \(s=1\) in (3.2) into two parts. The terms corresponding to the first summand in the r.h.s. of (3.4) are written in the r.h.s. of (3.3); the terms corresponding to the second summand in the r.h.s. of (3.4) are collected in the remainder \(R_1\) (see below). The remainders \(R_2\) and \(R_3\) collect the terms corresponding to \(s=2\) and \(s=3\) respectively, and the remainder \(R_4\) appears because of the remainder term in (3.2). Let us analyze the order of each of these terms. By (3.4)
where \(I^{(i,p)}_{lk}=\delta _{lk}u_{lk}1_{k>p}\).
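The resolvent differentiation formula for a symmetric matrix reads, for \(i\ne j\) (with \(M_{ij}=M_{ji}\) varied together), \(\partial G_{ab}/\partial M_{ij}=-(G_{ai}G_{jb}+G_{aj}G_{ib})\); a finite-difference check:

```python
# Finite-difference check of the symmetric-matrix resolvent derivative.
import numpy as np

rng = np.random.default_rng(4)
n, z, eps = 8, 1.0 + 1.0j, 1e-6
A = rng.standard_normal((n, n))
M = (A + A.T) / 2

def G(M):
    return np.linalg.inv(M - z * np.eye(n))

i, j, a, b = 1, 4, 0, 5
E = np.zeros((n, n)); E[i, j] = E[j, i] = 1.0           # vary M_ij = M_ji together
num = (G(M + eps * E)[a, b] - G(M - eps * E)[a, b]) / (2 * eps)
G0 = G(M)
exact = -(G0[a, i] * G0[j, b] + G0[a, j] * G0[i, b])
assert abs(num - exact) < 1e-6
```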
To estimate \(R_2\), observe that by (3.4), after two differentiations we obtain a sum of terms of the type \( \hat{G}_{l_1l_2}\hat{G}_{l_3l_4}\hat{G}_{l_5l_6}\hat{G}_{l_7l_8}\), where \(\hat{G}\) can be G or \(\tilde{G}\) and the set of indices \(l_1,l_2,\dots ,l_7,l_8\) contains i three times, k three times, and j twice, but \(\hat{G}_{jj}\) cannot appear. Thus, each term contains either \(\hat{G}_{jk}\hat{G}_{ji}\), or \(\hat{G}_{jk}\hat{G}_{jk}\), or \(\hat{G}_{ji}\hat{G}_{ji}\). Each of these combinations gives O(1) after summation with respect to j. Hence, after summation with respect to k we obtain O(b). But the factor which appears because of the third cumulant is \(b^{-3/2}\), hence \(R_2=O(b^{-1/2})\). By the same argument \(R_3=O(b^{-1})\).
Finally, to estimate \(R_4\), observe that we have two summations, with respect to \(p<j<p+C^*b\) and \(i-C^*b<k<i+C^*b\), and the factor which appears because of \(b^{-3}E\{|v_{ik}|^6\}\) is bounded by \(b^{-2-\varepsilon /2}\). At the last step of the transformations of (3.3) we write
and use the bound (2.25). Then we obtain
where
Combining (3.1) and (3.3) with the above estimates for the remainders, and using that by (1.8) we have \(z_2+g(z_2)=-g^{-1}(z_2)\), we obtain the system of equations
Since \(|g(z_1)g(z_2)|<1\) and
the operator \((\zeta -U^{(p)})^{-1}\) can be defined by the Neumann series
and it possesses the properties
An application of \((\zeta -U^{(p)})^{-1}\) to both parts of (3.6) and (3.7) implies
where \(\Sigma _1\) was defined in (2.32).
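The Neumann series representation above converges geometrically for \(|\zeta |>\Vert U^{(p)}\Vert \); a numerical sketch, where U is an assumed substochastic band kernel mimicking \(U^{(p)}_{ij}\sim b^{-1}u((i-j)/b)1_{i,j>p}\) (the exact form is in (3.5)):

```python
# Sketch of (zeta - U)^{-1} = sum_m zeta^{-m-1} U^m for |zeta| > ||U||.
# U below is an assumed kernel, not the exact U^{(p)} of the text.
import numpy as np

n, b, p = 200, 20, 50
i = np.arange(n)
U = 0.5 / b * (np.abs(i[:, None] - i[None, :]) <= b)   # band kernel, row sums ~ 1
U[:p, :] = 0.0; U[:, :p] = 0.0                          # the (p)-truncation

zeta = 1.5
direct = np.linalg.inv(zeta * np.eye(n) - U)
series = np.zeros_like(U)
term = np.eye(n) / zeta                                 # zeta^{-1} U^0
for _ in range(60):
    series += term
    term = term @ U / zeta                              # next term zeta^{-m-1} U^m
assert np.max(np.abs(series - direct)) < 1e-8
```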
Proposition 3
Let the matrices U and \(U^{(p)}\) be defined by (3.5), \(\{u_{ij}\}_{i,j=1}^{n}\) satisfy conditions (1.4), the vectors \(u^{(p)}\) be defined by (3.6), and \(|\zeta |>1\). Then
Proof
Denoting by \(S_1(z)\) the l.h.s. of (3.9) and by \(S_2(z)\) the main term in the r.h.s. of (3.9), we have
The term \(O(b^{-1})\) appears in the third line above as a sum of the terms which contain the index p at least twice among \(\{i_1,\dots ,i_m\}\). The contribution of these terms for fixed m can, in view of (3.7), be estimated as
After summation with respect to m and multiplication by b we obtain \(O(b^{-1})\). \(\square \)
Now observe that the r.h.s. of (3.9) has a limit as \(n,b\rightarrow \infty \) as in (1.5):
where \(\hat{u}\) is the Fourier transform of the function u defined as in (1.9). Hence, the proposition and the last line of (3.8) yield
Thus by (2.23) and (2.27) we obtain
where we also used that by (3.6) \(\zeta ^{-1}=g(z_1)g(z_2)\). According to the definition (2.7) and the above relation,
where
Now by Proposition 1 for any \(\varphi \) possessing the norm (1.9) we have
Let us make the change of variables \(\lambda _1=2\cos x_1\), \(\lambda _2=2\cos x_2\). Then, using that [see (1.8)]
we obtain (1.10) by a simple calculus.
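For the reader's convenience, the substitution acts as follows (a routine computation; the boundary values of g are taken with the branch fixed in (1.8), so this is only a sketch of the step):
$$\begin{aligned} \lambda =2\cos x,\quad x\in [0,\pi ]:\qquad \frac{\sqrt{4-\lambda ^2}}{2\pi }\,d\lambda =\frac{2}{\pi }\sin ^2x\,dx,\qquad g(2\cos x\pm i0)=-e^{\mp ix}. \end{aligned}$$
In particular, \(\mathfrak {I}g(\lambda +i0)=\sin x\) equals \(\pi \) times the semicircle density, as it should.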
4 Auxiliary Results
Proof of Lemma 2. The first identity of (2.17) yields that it suffices to estimate \(E\{|A'_pA^{-1}_p-E_{1}\{A'_pA^{-1}_p\}|^2\}\). Note that for any a independent of \(\{w_{1i}\}\) we have
Hence it suffices to estimate
Here and below \(z=x+iy\), \(y>0\). We shall also use relation (2.19), which yields, in particular, that \(|{A'_p}/{A_p}|\le y^{-1}\). Using (2.18), we get
Similarly
Thus
Notice that the Hölder inequality implies for any \(\delta >0\)
Hence, denoting \(\mathcal {L}_p=\{x:|\sum u_{pj}G^{(p)}_{jj}(x+iy)|>1\}\), we obtain for \(0\le y<\frac{1}{2}\)
Then, using once more that by (2.19) each summand in the r.h.s. of (4.2) is bounded by \(y^{-4}\), we get
\(\square \)
Proof of Lemma 3. It follows from (2.17) that
But since
we have
Here the last equality is due to the invariance of the distribution of M with respect to the “shift” \(i\rightarrow (i+1)\mod (n)\). Hence for any \(z:|\mathfrak {I}z|\ge 2\) we obtain from (4.3)
Let us fix any \(z=x+i\eta \) with \(0<\eta <2\) and consider the function
in the half-circle \(\Omega =\{\mathfrak {I}\zeta <2\}\cap \{|\zeta -x-2i|\le |2-\eta /2|\}\). It is a harmonic function, and in view of (4.5) for \(\mathfrak {I}\zeta =2\) we can choose \(c_0\) sufficiently small to have
Moreover, in view of the trivial bound \(|G_{pp}(\zeta )|\le |\mathfrak {I}\zeta |^{-1}\), we have
Hence, by the theorem on two constants (see [5], p. 296), we have
where the harmonic function
satisfies the conditions
Since \(\omega (z)=1-2\delta \) with some \(\delta (\eta )>0\), (4.6) implies the first line of (2.25):
Using (2.17), (4.3), and (4.6), we get similarly to (4.4),
Thus, we have proved the second line of (2.25). \(\square \)
References
Bai, Z., Silverstein, J.W.: CLT for linear spectral statistics of large-dimensional sample covariance matrices. Ann. Probab. 32, 553–605 (2004)
Billingsley, P.: Probability and Measure. Wiley Series in Probability and Mathematical Statistics, 3rd edn. Wiley, New York (1995)
Erdös, L., Knowles, A.: Quantum diffusion and eigenfunction delocalization in a random band matrix model. Commun. Math. Phys. 303, 509–554 (2011)
Erdös, L., Knowles, A., Yau, H.-T., Yin, J.: Delocalization and diffusion profile for random band matrices. Electron. J. Probab. 18(59), 1–58 (2013)
Evgrafov, M.A.: Analytic Functions. Dover Publications, Mineola (1978)
Fyodorov, Y.V., Mirlin, A.D.: Scaling properties of localization in random band matrices: a \(\sigma \)-model approach. Phys. Rev. Lett. 67, 2405–2409 (1991)
Benaych-Georges, F., Guionnet, A., Male, C.: Central limit theorems for linear statistics of heavy tailed random matrices. Commun. Math. Phys. 329(2), 641–686 (2014)
Jana, I., Saha, K., Soshnikov, A.: Fluctuations of linear eigenvalue statistics of random band matrices. arXiv:1412.2445
Khorunzhy, A., Kirsch, W.: On asymptotic expansions and scales of spectral universality in band random matrix ensembles. Commun. Math. Phys. 231, 223–255 (2002)
Molchanov, S.A., Pastur, L.A., Khorunzhy, A.M.: Eigenvalue distribution for band random matrices in the limit of their infinite rank. Teor. Matem. Fizika 90, 108–118 (1992)
Li, L., Soshnikov, A.: Central limit theorem for linear statistics of eigenvalues of band random matrices. Random Matrices 2, 04 (2013)
Lytova, A., Pastur, L.: Central limit theorem for linear eigenvalue statistics of random matrices with independent entries. Ann. Probab. 37(5), 1778–1840 (2009)
Najim, J., Yao, J.: Gaussian fluctuations for linear spectral statistics of large random covariance matrices. arXiv:1309.3728
Pastur, L., Shcherbina, M.: Eigenvalue Distribution of Large Random Matrices. Mathematical Surveys and Monographs, vol. 171. American Mathematical Society, Providence (2011)
O’Rourke, S., Renfrew, D., Soshnikov, A., Vu, V.: Products of independent elliptic random matrices. arXiv:1403.6080
Shcherbina, M.: Central Limit Theorem for linear eigenvalue statistics of the Wigner and sample covariance random matrices. J. Math. Phys. Anal. Geom. V7(2), 176–192 (2011)
Shcherbina, M., Tirozzi, B.: Central limit theorem for fluctuations of linear eigenvalue statistics of large random graphs. Diluted regime. J. Math. Phys. 53, 1–18 (2012)
Shcherbina, T.: On the second mixed moment of the characteristic polynomials of the 1D band matrices. Commun. Math. Phys. 328, 45–82 (2012)
Spencer, T.: SUSY statistical mechanics and random band matrices. In: Quantum Many Body Systems. Lecture Notes in Mathematics, vol. 2051 (CIME Foundation Subseries)
Wigner, E.P.: On the distribution of the roots of certain symmetric matrices. Ann. Math. 67, 325–327 (1958)
Shcherbina, M. On Fluctuations of Eigenvalues of Random Band Matrices. J Stat Phys 161, 73–90 (2015). https://doi.org/10.1007/s10955-015-1324-8