
1 Introduction

This Note results from a few discussions with A. Klein (UCI, summer 2011) on Minami’s inequality and the results from [7] on Poisson local spacing behavior for the eigenvalues of certain Anderson type models. Recall that the Hamiltonian H on the lattice \(\mathbb{Z}^{d}\) has the form

$$\displaystyle{ H =\lambda V + \Delta }$$
(1)

with \(\Delta \) the nearest neighbor Laplacian on \(\mathbb{Z}^{d}\) and \(V = (v_{n})_{n\in \mathbb{Z}^{d}}\) IID variables with a certain distribution. Given a box \(\Omega \subset \mathbb{Z}^{d}\), \(H_{\Omega }\) denotes the restriction of H to \(\Omega \) with Dirichlet boundary conditions. Minami’s inequality, which is a refinement of Wegner’s estimate, is a bound on the expectation that \(H_{\Omega }\) has two eigenvalues in a given interval \(I \subset \mathbb{R}\). This quantity can be expressed as

$$\displaystyle{ \mathbb{E}\big[\mathit{Tr}\mathcal{X}_{I}(H_{\Omega })\big(\mathit{Tr}\mathcal{X}_{I}(H_{\Omega }) - 1\big)\big] }$$
(2)

where the expectation is taken over the randomness V. An elegant treatment may be found in [6].
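By way of illustration only (this is not part of the argument), the quantity (2) can be estimated numerically. The sketch below assumes d = 1, a uniform site distribution as a stand-in for a general bounded density, and arbitrary box/sample sizes; the function names `anderson_box` and `minami_quantity` are ours.

```python
import numpy as np

def anderson_box(N, lam=1.0, rng=None):
    """Dirichlet restriction H_Omega of H = lam*V + Delta to a box of size N.

    Delta is taken as the off-diagonal nearest-neighbour part (dropping its
    constant diagonal only shifts the spectrum); V is uniform on [0, 1] here,
    a stand-in for a general site distribution with bounded density.
    """
    rng = np.random.default_rng() if rng is None else rng
    v = rng.uniform(0.0, 1.0, size=N)
    off = np.ones(N - 1)
    return np.diag(lam * v) + np.diag(off, 1) + np.diag(off, -1)

def minami_quantity(N, I, samples=200, lam=1.0, seed=0):
    """Monte-Carlo estimate of E[Tr X_I(H)(Tr X_I(H) - 1)], i.e. quantity (2)."""
    rng = np.random.default_rng(seed)
    acc = 0.0
    for _ in range(samples):
        ev = np.linalg.eigvalsh(anderson_box(N, lam, rng))
        k = int(np.count_nonzero((ev >= I[0]) & (ev <= I[1])))
        acc += k * (k - 1)
    return acc / samples
```

For a site distribution with bounded density, the estimate should stay below a bound of the Minami form C|Ω|²|I|², as in (3).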

Assuming the site distribution has a bounded density, (2) satisfies the expected bound

$$\displaystyle{ C\vert \Omega \vert ^{2}\vert I\vert ^{2}. }$$
(3)

More generally, considering a site distribution probability measure μ which is Hölder with exponent 0 < β ≤ 1, i.e.

$$\displaystyle{ \mu (I) \leq C\vert I\vert ^{\beta }\mbox{ for all intervals $I \subset \mathbb{R}$} }$$
(4)

it is shown in [6] that

$$\displaystyle{ (2) \leq C\vert \Omega \vert ^{2}\vert I\vert ^{2\beta }. }$$
(5)

For the sake of the exposition, we briefly recall the argument. Rewrite (2) as

$$\displaystyle{ \mathbb{E}_{V }\Big[\sum _{j\in \Omega }\langle \delta _{j},\mathcal{X}_{I}(H_{\Omega }^{(V )})\delta _{ j}\rangle \big(\mathit{Tr}\mathcal{X}_{I}(H_{\Omega }^{(V )}) - 1\big)\Big] }$$
(6)

where \((\delta _{j})\) denote the unit vectors of \(\mathbb{Z}^{d}\). Introduce a second independent copy \(W = (w_{n})\) of the potential V. Fixing \(j \in \Omega \), denote by \((V _{j}^{\perp },\tau _{j})\) the potential with assignments \(v_{n}\) for \(n\not =j\) and \(\tau _{j}\) for n = j. Assuming \(\tau _{j} \geq v_{j}\), it follows from the interlacing property for rank-one perturbations that

$$\displaystyle{ \mathit{Tr}\mathcal{X}_{I}(H_{\Omega }^{(V )}) \leq \mathit{Tr}\mathcal{X}_{ I}\big(H_{\Omega }^{(V _{j}^{\perp },\tau _{ j})}\big) + 1 }$$
(7)

and hence

$$\displaystyle{ (6) \leq \mathbb{E}_{V }\mathbb{E}_{W}\Big[\sum _{j\in \Omega }\langle \delta _{j},\mathcal{X}_{I}(H_{\Omega }^{(V )})\delta _{ j}\rangle \ \mathit{Tr}\mathcal{X}_{I}(H_{\Omega }^{(V _{j}^{\perp },\Vert v_{ j}\Vert _{\infty }+w_{j})})\Big]. }$$
(8)

Next, invoking the fundamental spectral averaging estimate (see [6, Appendix A]), we have

$$\displaystyle{ \mathbb{E}_{v_{j}}[\langle \delta _{j},\mathcal{X}_{I}(H_{\Omega }^{(V _{j}^{\perp },v_{ j})})\delta _{j}\rangle ] \leq C\vert I\vert ^{\beta } }$$
(9)

so that

$$\displaystyle{ (8) \leq C\vert I\vert ^{\beta }\sum _{j\in \Omega }\mathbb{E}_{V _{j}^{\perp }}\mathbb{E}_{w_{j}}\big[\mathit{Tr}\mathcal{X}_{I}\big(H_{\Omega }^{(V _{j}^{\perp },\Vert v_{ j}\Vert _{\infty }+w_{j})}\big)\big]. }$$
(10)

The terms in (10) may be bounded using a Wegner estimate. Applying (9) again, the j-th term in (10) is majorized by \(C\vert \Omega \vert \,\vert I\vert ^{\beta }\), leading to the estimate \(C\vert I\vert ^{2\beta }\vert \Omega \vert ^{2}\) for (2). It turns out that, at least in 1D, one can do better than reapplying the spectral averaging estimate. Indeed, it was shown in [2] that in 1D, SO’s with Hölder regular site distribution have a smooth density of states. This suggests a better \(\vert I\vert \)-dependence in (5), of the form \(\vert I\vert ^{1+\beta }\). Some additional work will be needed in order to turn the result from [2] into the required finite scale estimate. We prove the following (set λ = 1 in (1)).

Proposition 1.

Let H be a 1D lattice random SO with Hölder site distribution satisfying  (4) for some β > 0. Denote \(H_{N} = H_{[1,N]}\) . Then

$$\displaystyle{ \mathbb{P}[I \cap \,\text{ Spec }\,H_{N}\not =\phi ] \leq \mathit{Ce}^{-\mathit{cN}} + \mathit{CN}\vert I\vert. }$$
(11)

It follows that \(\mathbb{E}[\mathit{Tr}\mathcal{X}_{I}(H_{N})] \leq \mathit{Ce}^{-\mathit{cN}} + \mathit{CN}^{2}\vert I\vert \).

The above discussion then implies the following Minami-type estimate.

Corollary 2.

Under the assumption from Proposition  1, we have

$$\displaystyle{ \mathbb{E}[\mathit{Tr}\mathcal{X}_{I}(H_{\Omega })(\mathit{Tr}\mathcal{X}_{I}(H_{\Omega }) - 1)] \leq C\vert \Omega \vert ^{3}\vert I\vert ^{1+\beta } }$$
(12)

provided \(\Omega \subset \mathbb{Z}\) is an interval of size \(\vert \Omega \vert > C_{1}\log (2 + \frac{1} {\vert I\vert })\) , where \(C,C_{1}\) depend on V.

Denote by \(\mathcal{N}\) the integrated density of states (IDS) of H and set \(k(E) = \frac{d\mathcal{N}} {dE}\). Recall that k is smooth for Hölder regular site distributions (cf. [2]).

Combined with Anderson localization, Proposition 1 and Corollary 2 permit us to derive the following for H as above.

Proposition 3.

Assuming \(\log \frac{1} {\delta } < \mathit{cN}\) , we have for \(I = [E_{0}-\delta,E_{0}+\delta ]\) that

$$\displaystyle{ \mathbb{E}[\mathit{Tr}\mathcal{X}_{I}(H_{N})] = \mathit{Nk}(E_{0})\vert I\vert + O\Big(N\delta ^{2} +\delta \log \Big (N + \frac{1} {\delta } \Big)\Big) }$$
(13)

and

Proposition 4.

$$\displaystyle{ \mathbb{P}[H_{\Omega }\text{ has at least two eigenvalues in }I] \leq C\vert \Omega \vert ^{2}\vert I\vert ^{2}+C\vert \Omega \vert \log ^{2}\Big(\vert \Omega \vert + \frac{1} {\vert I\vert }\Big)\cdot \vert I\vert ^{1+\beta }. }$$
(14)

Following a well-known strategy, Anderson localization permits a decoupling of the contribution of pairs of eigenvectors whose centers of localization are at least \(C\log \frac{1} {\vert I\vert }\) apart. Invoking (11), this yields the first term in the r.h.s. of (14). For the remaining contribution, use Corollary 2.

With Propositions 3 and 4 at hand and again exploiting Anderson localization, the analysis from [7] becomes available and we obtain the following universality statement for 1D random SO’s with Hölder regular site distribution.

Proposition 5.

Let \(E_{0} \in \mathbb{R}\) and \(I = [E_{0},E_{0} + \frac{L} {N}]\) where we let first N →∞ and then L →∞. The rescaled eigenvalues

$$\displaystyle{\{N(E - E_{0})\mathcal{X}_{I}(E)\}_{E\in \,\text{ Spec }\,H_{ N}}}$$

converge to a Poisson point process in the limit N →∞.
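This statement can be probed numerically. The sketch below is an illustration only (not part of the proof); the disorder, window location \(E_{0}\), parameters N, L, and the uniform site distribution are our arbitrary choices. For a Poisson limit the pooled nearest-neighbour spacings of the rescaled points should be approximately exponential, so in particular their sample mean and standard deviation should be comparable.

```python
import numpy as np

def rescaled_spectrum(N, E0, L, lam=1.0, rng=None):
    """Rescaled eigenvalues N*(E - E0) for E in Spec H_N intersected
    with the window [E0, E0 + L/N]."""
    rng = np.random.default_rng() if rng is None else rng
    v = rng.uniform(0.0, 1.0, size=N)
    off = np.ones(N - 1)
    H = np.diag(lam * v) + np.diag(off, 1) + np.diag(off, -1)
    ev = np.linalg.eigvalsh(H)
    ev = ev[(ev >= E0) & (ev <= E0 + L / N)]
    return np.sort(N * (ev - E0))

def pooled_spacings(N=300, E0=1.0, L=20.0, trials=100, seed=1):
    """Nearest-neighbour spacings of the rescaled points, pooled over
    independent potential samples."""
    rng = np.random.default_rng(seed)
    gaps = []
    for _ in range(trials):
        pts = rescaled_spectrum(N, E0, L, rng=rng)
        if pts.size > 1:
            gaps.extend(np.diff(pts))
    return np.asarray(gaps)
```

A histogram of `pooled_spacings()` against the density \(e^{-k(E_{0})s}\) gives a quick visual check of the Poisson behavior.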

At the end of the paper, we will make some comments on eigenvalue spacings for the Anderson-Bernoulli (A-B) model, where in (1) the \(v_{n}\) are {0, 1}-valued. Further results in line with the above for A-B models with certain special couplings λ will appear in [4].

2 Proof of Proposition 1

Set λ = 1 in (1). We denote

$$\displaystyle{ M_{n} = M_{n}(E) =\prod _{ j=n}^{1}\left (\begin{array}{*{10}c} E - v_{j}&-1 \\ 1 & 0 \end{array} \right ) }$$
(15)

the usual transfer operators. Thus the equation \(H\xi = E\xi \) is equivalent to

$$\displaystyle{ M_{n}\left (\begin{array}{*{10}c} \xi _{1}\\ \xi _{ 0} \end{array} \right ) = \left (\begin{array}{*{10}c} \xi _{n+1}\\ \xi _{ n} \end{array} \right ). }$$
(16)
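In code, the transfer recursion reads as follows (a small sketch with λ = 1 and an arbitrary sample potential; the function names are ours): `transfer` builds the product (15) and `check_recursion` verifies the equivalence (16) against the second-order difference equation \(\xi _{n+1} = (E - v_{n})\xi _{n} -\xi _{n-1}\).

```python
import numpy as np

def transfer(E, v):
    """M_n(E) = prod_{j=n}^{1} [[E - v_j, -1], [1, 0]], as in (15)."""
    M = np.eye(2)
    for vj in v:  # j = 1, ..., n; left-multiplying accumulates the ordered product
        M = np.array([[E - vj, -1.0], [1.0, 0.0]]) @ M
    return M

def check_recursion(E=0.7, n=12, seed=0):
    """Verify (16): M_n maps (xi_1, xi_0) to (xi_{n+1}, xi_n) for a solution
    of xi_{n+1} = (E - v_n) xi_n - xi_{n-1}."""
    rng = np.random.default_rng(seed)
    v = rng.uniform(size=n)
    xi = [0.0, 1.0]  # (xi_0, xi_1): Dirichlet-type initial data
    for vj in v:
        xi.append((E - vj) * xi[-1] - xi[-2])
    lhs = transfer(E, v) @ np.array([xi[1], xi[0]])
    return bool(np.allclose(lhs, [xi[n + 1], xi[n]]))
```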

Considering a finite scale [1, N], let \(H_{[1,N]}\) be the restriction of H with Dirichlet boundary conditions. Fix \(I = [E_{0}-\delta,E_{0}+\delta ]\) and assume \(H_{[1,N]}\) has an eigenvalue \(E \in I\) with eigenvector \(\xi = (\xi _{j})_{1\leq j\leq N}\). Then

$$\displaystyle{ M_{N}(E)\left (\begin{array}{*{10}c} \xi _{1} \\ 0 \end{array} \right ) = \left (\begin{array}{*{10}c} 0\\ \xi _{N} \end{array} \right ). }$$
(17)

Assume \(\vert \xi _{1}\vert \geq \vert \xi _{N}\vert \) (otherwise replace \(M_{N}\) by \(M_{N}^{-1}\), which can be treated similarly). It follows from (17) that

$$\displaystyle{ \Vert M_{N}(E)e_{1}\Vert \leq 1 }$$
(18)

with \((e_{1},e_{2})\) the \(\mathbb{R}^{2}\)-unit vectors. On the other hand, from the large deviation estimates for random matrix products (cf. [1]), we have that

$$\displaystyle{ \log \Vert M_{N}(E_{0})e_{1}\Vert > \mathit{cN} }$$
(19)

with probability at least \(1 - e^{-\mathit{cN}}\) (in the sequel, c, C will denote various constants that may depend on the potential).

Write

$$\displaystyle{ \big\vert \log \Vert M_{N}(E)e_{1}\Vert -\log \Vert M_{N}(E_{0})e_{1}\Vert \big\vert \leq \int _{-\delta }^{\delta }\Big\vert \frac{d} {\mathit{dt}}[\log \Vert M_{N}(E_{0} + t)e_{1}\Vert ]\Big\vert \mathit{dt}. }$$
(20)

The integrand in (20) is clearly bounded by

$$\displaystyle{ \sum _{j=1,2}\,\sum _{n=1}^{N}\frac{\vert \langle M_{N-n}^{(v_{N},\ldots,v_{n+1})}(E_{ 0} + t)e_{1},e_{j}\rangle \vert \cdot \vert \langle M_{n-1}^{(v_{n-1},\ldots,v_{1})}(E_{ 0} + t)e_{1},e_{1}\rangle \vert } {\Vert M_{N}^{(v_{N},\ldots,v_{1})}(E_{0} + t)e_{1}\Vert } }$$
(21)
$$\displaystyle{ \leq 2\vert E - E_{0}\vert \sum _{n=1}^{N} \frac{\Vert M_{N-n}^{(v_{N},\ldots,v_{n+1})}(E_{ 0} + t)\Vert } {\Vert M_{N-n}^{(v_{N},\ldots,v_{n+1})}(E_{0} + t)\zeta _{n}\Vert } }$$
(22)

where

$$\displaystyle{ \zeta _{n} = \frac{M_{n-1}^{(v_{n-1},\ldots,v_{1})}(E_{0} + t)e_{1}} {\Vert M_{n-1}^{(v_{n-1},\ldots,v_{1})}(E_{0} + t)e_{1}\Vert } }$$
(23)

depends only on the variables \(v_{1},\ldots,v_{n-1}\).

At this point, we invoke some results from [2]. It follows from the discussion in [2, Sect. 5] on SO’s with Hölder potential that for \(\ell > C = C(V )\), the inequality

$$\displaystyle{ \mathbb{P}_{v_{1},\ldots,v_{\ell}}[\Vert M_{\ell}(\zeta )\Vert <\varepsilon \Vert M_{\ell}\Vert ] \lesssim \varepsilon }$$
(24)

holds for any \(\varepsilon > 0\) and unit vector \(\zeta \in \mathbb{R}^{2}\), \(M_{\ell} = M_{\ell}^{(v_{1},\ldots,v_{\ell})}\).

A word of explanation. It is proved in [2] that if we take n large enough, the map \((v_{1},\ldots,v_{n})\mapsto M_{n}^{(v_{n},\ldots,v_{1})}\) defines a bounded density on \(SL_{2}(\mathbb{R})\). Fix then some n = O(1) with the above property and write for \(\ell > n\),

$$\displaystyle{\Vert M_{\ell}(\zeta )\Vert \geq \vert \langle M_{n}(\zeta ),M_{\ell-n}^{{\ast}}e_{ j}\rangle \vert \qquad (j = 1,2)}$$

noting that here \(M_{n}\) and \(M_{\ell-n}\) are independent as functions of the potential. Choose j such that \(\Vert M_{\ell-n}^{{\ast}}e_{j}\Vert \sim \Vert M_{\ell-n}^{{\ast}}\Vert =\Vert M_{\ell-n}\Vert \sim \Vert M_{\ell}\Vert\) and fix the vector \(M_{\ell-n}^{{\ast}}e_{j}\). Since then \((v_{1},\ldots,v_{n})\mapsto M_{n}(\zeta )\) defines a bounded density, inequality (24) holds.

Since always \(\Vert M_{\ell}\Vert < C^{\ell}\) and \(\Vert M_{\ell}(\zeta )\Vert > C^{-\ell}\), it clearly follows from (24) that

$$\displaystyle{ \mathbb{E}_{V }\Big[ \frac{\Vert M_{\ell}^{(V )}\Vert } {\Vert M_{\ell}^{(V )}(\zeta )\Vert }\Big] \leq C\ell. }$$
(25)

Therefore

$$\displaystyle{ \mathbb{E}_{V }[(22)] < \mathit{CN}^{2}\delta. }$$
(26)

Hence we showed that, assuming (19), \(\text{Spec}\,H_{N}^{(V )} \cap I\not =\phi \) holds with probability at most \(\mathit{CN}\delta \). Therefore \(\text{Spec}\,H_{N}^{(V )} \cap I\not =\phi \) with probability at most \(\mathit{CN}\delta + \mathit{Ce}^{-\mathit{cN}}\), proving (11).

3 Proof of Propositions 3 and 4

Assume \(\log \frac{1} {\vert I\vert } < \mathit{cN}\) and set \(M = C\log \big(N + \frac{1} {\vert I\vert }\big)\) for appropriate constants c, C. From the theory of Anderson localization in 1D, the eigenvectors \(\xi _{\alpha }\) of \(H_{N}\), \(\Vert \xi _{\alpha }\Vert = 1\), satisfy

$$\displaystyle{ \vert \xi _{\alpha }(j)\vert < e^{-c\vert j-j_{\alpha }\vert }\text{ for }\vert j - j_{\alpha }\vert > \frac{M} {10} }$$
(27)

with probability at least \(1 - e^{-\mathit{cM}}\), with \(j_{\alpha }\) the center of localization of \(\xi _{\alpha }\).

The above statement is well-known and relies on the large deviation estimates for the transfer matrix. Let us also point out, however, that the above (optimal) choice of M is not really important in what follows; taking for M some power of \(\log N\) would do as well.
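As an illustration of (27) (a numerical sketch only: taking the center of localization as the argmax of \(\vert \xi _{\alpha }\vert \) and the window radius M below are our choices), one can measure how much mass each eigenvector carries away from its center:

```python
import numpy as np

def centers_and_tails(N=200, lam=2.0, M=40, seed=0):
    """Diagonalize a sample H_N, take j_alpha = argmax |xi_alpha| as center of
    localization, and return the l2 mass of each eigenvector outside a window
    of radius M about its center (small mass = well localized)."""
    rng = np.random.default_rng(seed)
    v = rng.uniform(0.0, 1.0, size=N)
    off = np.ones(N - 1)
    H = np.diag(lam * v) + np.diag(off, 1) + np.diag(off, -1)
    _, X = np.linalg.eigh(H)          # columns are the normalized eigenvectors
    centers = np.argmax(np.abs(X), axis=0)
    tails = np.empty(N)
    for a in range(N):
        lo, hi = max(0, centers[a] - M), min(N, centers[a] + M + 1)
        tails[a] = 1.0 - np.sum(X[lo:hi, a] ** 2)
    return centers, tails
```

With M of the order of \(\log N\) the tails are typically exponentially small, in line with (27).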

We may therefore introduce a collection of intervals \((\Lambda _{s})_{1\leq s\lesssim \frac{N} {M} }\) of size M covering [1, N], such that for each α, there is some \(1 \leq s \lesssim \frac{N} {M}\) satisfying

$$\displaystyle{ j_{\alpha } \in \Lambda _{s}\text{ and }\Vert \xi _{\alpha }\vert _{[1,N]\setminus \Lambda _{s}}\Vert < e^{-\mathit{cM}} }$$
(28)
$$\displaystyle{ \Vert (H_{\Lambda _{s}} - E_{\alpha })\xi _{\alpha,s}\Vert < e^{-\mathit{cM}} }$$
(29)

with \(\xi _{\alpha,s} =\xi _{\alpha }\vert _{\Lambda _{s}}\). Therefore \(\text{dist}\,(E_{\alpha },\text{Spec}\,H_{\Lambda _{s}}) < e^{-\mathit{cM}} <\delta \).

Let us establish Proposition 3. Denoting by \(\Lambda _{1}\) and \(\Lambda _{s_{{\ast}}}\) the intervals appearing at the boundary of [1, N], one obtains, by a well-known argument based on exponential localization,

$$\displaystyle{ \mathbb{E}[\mathit{Tr}\mathcal{X}_{I}(H_{N})] = N\,\mathcal{N}(I) + O\big(e^{-\mathit{cM}} + \mathbb{E}[\mathit{Tr}\mathcal{X}_{\tilde{ I}}(H_{\Lambda _{1}})] + \mathbb{E}[\mathit{Tr}\mathcal{X}_{\tilde{I}}(H_{\Lambda _{s_{{\ast}}}})]\big) }$$
(30)

with \(\tilde{I} = [E_{0} - 2\delta,E_{0} + 2\delta ]\). Invoking then Proposition 1 and Corollary 2, we obtain

$$\displaystyle{ \mathbb{E}[\mathit{Tr}\mathcal{X}_{I}(H_{\Lambda _{s}})] < \mathit{Ce}^{-\mathit{cM}} + \mathit{CM}\delta + \mathit{CM}^{3}\delta ^{1+\beta } < \mathit{CM}\delta }$$
(31)

by the choice of M and assuming \((\log N)^{2}\delta ^{\beta } < 1\), as we may.

Substituting (31) in (30) gives then

$$\displaystyle{\begin{array}{ll} &N\int _{I}k(E)dE + O(M\delta ) \\ & = Nk(E_{0})\vert I\vert + O\Big(N\delta ^{2} +\delta \log \Big (N + \frac{1} {\delta } \Big)\Big)\end{array} }$$

since k is Lipschitz. This proves (13).

Next, we prove Proposition 4.

Assume \(E_{\alpha },E_{\alpha ^{{\prime}}} \in I,\alpha \not =\alpha ^{{\prime}}\). We distinguish two cases.

Case 1.

\(\vert j_{\alpha } - j_{\alpha ^{{\prime}}}\vert > \mathit{CM}\).

Here C is taken large enough as to ensure that the corresponding boxes \(\Lambda _{s},\Lambda _{s^{{\prime}}}\) introduced above are disjoint. Thus

$$\displaystyle{ \text{Spec}\,H_{\Lambda _{s}} \cap I\not =\phi }$$
(32)
$$\displaystyle{ \text{Spec}\,H_{\Lambda _{s^{{\prime}}}}\cap I\not =\phi. }$$
(33)

Since the events (32), (33) are independent, it follows from Proposition 1 that the probability for the joint event is at most

$$\displaystyle{ \mathit{Ce}^{-\mathit{cM}} + \mathit{CM}^{2}\delta ^{2} < \mathit{CM}^{2}\delta ^{2} }$$
(34)

by our choice of M. Summing over the pairs \(s,s^{{\prime}}\lesssim \frac{N} {M}\) gives therefore the bound CN 2 δ 2 for the probability of a Case 1 event.

Case 2.

\(\vert j_{\alpha } - j_{\alpha ^{{\prime}}}\vert \leq \mathit{CM}\).

We obtain an interval \(\Lambda \) as union of at most C consecutive \(\Lambda _{s}\)-intervals such that (28), (29) hold with \(\Lambda _{s}\) replaced by \(\Lambda \) for both (ξ α , E α ), \((\xi _{\alpha ^{{\prime}}},E_{\alpha ^{{\prime}}})\). This implies that Spec\(\,H_{\Lambda } \cap \tilde{ I}\) contains at least two elements. By Corollary 2, the probability for this is at most CM 3 δ 1+β. Hence, we obtain the bound CM 2 N δ 1+β for the Case 2 event.

The final estimate is therefore

$$\displaystyle{e^{-\mathit{cM}} + \mathit{CN}^{2}\delta ^{2} + \mathit{CM}^{2}N\delta ^{1+\beta }}$$

and (14) follows from our choice of M.

4 Sketch of the Proof of Proposition 5

Next we briefly discuss local eigenvalue statistics, following [7].

The Wegner and Minami type estimates obtained in Propositions 3 and 4 above permit us to reproduce essentially the analysis from [7] proving local Poisson statistics for the eigenvalues of \(H_{N}^{\omega }\). We sketch the details (recall that we consider a 1D model with Hölder site distribution).

Let \(M = K\log N,M_{1} = K_{1}\log N\) with \(K \gg K_{1} \gg 1\) (both \(\to \infty \) with N) and partition

$$\displaystyle{\Lambda = [1,N] = \Lambda _{1} \cup \Lambda _{1,1} \cup \Lambda _{2} \cup \Lambda _{2,1}\cup \ldots =\bigcup _{\alpha \lesssim \frac{N} {M+M_{1}} }(\Lambda _{\alpha } \cup \Lambda _{\alpha,1})}$$

where the \(\Lambda _{\alpha }\) (resp. \(\Lambda _{\alpha,1}\)) are intervals of size M (resp. \(M_{1}\)).

Denote

$$\displaystyle{\begin{array}{*{10}c} &\mathcal{E}_{\alpha }\ \ = \mbox{ set of eigenvalues of $H_{\Lambda }$ with center of localization in }&\Lambda _{\alpha } \\ &\mathcal{E}_{\alpha,1} = \mbox{ set of eigenvalues of $H_{\Lambda }$ with center of localization in }&\ \Lambda _{\alpha,1} \end{array} }$$

Let \(\Lambda _{\alpha }^{{\prime}}\) (resp. \(\Lambda _{\alpha,1}^{{\prime}}\)) be a neighborhood of \(\Lambda _{\alpha }\) (resp. \(\Lambda _{\alpha,1}\)) of size \(\sim \log N\), taken so as to ensure that

$$\displaystyle{\text{dist}\,(E,\text{ Spec}\,H_{\Lambda _{\alpha }^{{\prime}}}) < \frac{1} {N^{A}}\mbox{ for $E \in \mathcal{E}_{\alpha }$}}$$

(A a sufficiently large constant), and

$$\displaystyle{ \text{dist}\,(E,\text{ Spec}\,H_{\Lambda _{\alpha,1}^{{\prime}}}) < \frac{1} {N^{A}}\mbox{ for $E \in \mathcal{E}_{\alpha,1}$}. }$$
(35)

Choosing K 1 large enough, we ensure that the \(\Lambda _{\alpha }^{{\prime}}\) are disjoint and hence \(\{\text{Spec}\,H_{\Lambda _{\alpha }^{{\prime}}}^{\omega }\}\) are independent.

Consider an energy interval

$$\displaystyle{I =\Big [E_{0},E_{0} + \frac{L} {N}\Big]}$$

with L a large parameter, eventually \(\to \infty \), and denote

$$\displaystyle{P_{\Omega }(I) = \mathcal{X}_{I}(H_{\Omega }).}$$

We obtain from (11) and our choice of M 1 that

$$\displaystyle{\mathbb{P}[\mathcal{E}_{\alpha,1} \cap I\not =\phi ] \lesssim M_{1}\vert I\vert }$$

and hence

$$\displaystyle{ \mathbb{P}[\bigcup _{\alpha }\mathcal{E}_{\alpha,1} \cap I\not =\phi ] \lesssim \frac{N} {M}M_{1}\vert I\vert \lesssim \frac{\mathit{LK}_{1}} {K} = o(1) }$$
(36)

provided

$$\displaystyle{ K_{1}L = o(K). }$$
(37)

Also, by (12)

$$\displaystyle\begin{array}{rcl} & & \mathbb{P}[\vert \mathcal{E}_{\alpha }\cap I\vert \geq 2] \leq \\ & & \mathbb{P}[H_{\Lambda _{\alpha }^{{\prime}}}\text{ has at least two eigenvalues in }\tilde{I}] \lesssim M^{3}\vert I\vert ^{1+\beta } < M^{3} \frac{L^{1+\beta }} {N^{1+\beta }}{}\end{array}$$
(38)

so that

$$\displaystyle{ \mathbb{P}[\max _{\alpha }\vert \mathcal{E}_{\alpha }\cap I\vert \geq 2] \lesssim \frac{N} {M}\mbox{ (38)} \lesssim \frac{M^{2}L^{1+\beta }} {N^{\beta }} < N^{-\beta /2}. }$$
(39)

Next, we introduce the (partially defined) random variables

$$\displaystyle{ E_{\alpha }(V ) =\sum _{E\in \text{ Spec}\,H_{ \Lambda _{\alpha }^{{\prime}}}\cap I}E\,\mbox{ provided $\vert $Spec $H_{\Lambda _{\alpha }^{{\prime}}}\cap I\vert \leq 1$}. }$$
(40)

Thus the \(E_{\alpha },\alpha = 1,\ldots, \frac{N} {M+M_{1}} \), take values in I, are independent (since the \(\{\text{Spec}\,H_{\Lambda _{\alpha }^{{\prime}}}\}\) are independent), and have the same distribution.

Let \(J \subset I\) be an interval, with \(\vert J\vert \) of the order of \(\frac{1} {N}\). Then, by (38) and Proposition 3,

$$\displaystyle{ \mathbb{E}[1_{J}(E_{\alpha })] = \mathbb{E}[\mathit{Tr}\,P_{\Lambda _{\alpha }^{{\prime}}}(J)] + O\Big( \frac{1} {N^{1+\beta /2}}\Big) = k(E_{0})\Big(1 + O\Big( \frac{1} {K}\Big)\Big)\vert J\vert M^{{\prime}} }$$
(41)

where \(M^{{\prime}} = \vert \Lambda _{\alpha }^{{\prime}}\vert \).

Therefore \(\{N(E_{\alpha } - E_{0})\}_{\alpha \leq \frac{N} {M+M_{1}} }\) converge in distribution to a Poisson process (in a weak sense), proving Proposition 5.

5 Comments on the Bernoulli Case

Consider the model (1) with \(V = (v_{n})_{n\in \mathbb{Z}}\) independent {0, 1}-valued variables. For large | λ | , H does not have a bounded density of states. It was shown in [3] that for certain small algebraic values of the coupling constant λ, \(k(E) = \frac{d\mathcal{N}} {dE}\) can be made arbitrarily smooth (see [3] for the precise statement). In particular \(k \in L^{\infty }\) and one could ask if Proposition 4 remains valid in this situation. One could actually conjecture that the analogue of Proposition 4 holds for the A-B model in 1D, at small disorder. This problem will be pursued further in [4]. What we prove here is an eigenvalue separation property at finite scale for the A-B model at arbitrary disorder λ ≠ 0. Denote again \(H_{N}\) the restriction of H to [1, N] with Dirichlet boundary conditions. We have

Proposition 6.

With large probability, the eigenvalues of \(H_{N}\) are at least \(N^{-C}\) separated, C = C(λ).

A statement of this kind is known for random SO’s with Hölder site distribution of regularity \(\beta > \frac{1} {2}\), in arbitrary dimension [6]. But note that our proof of Proposition 6 is specifically 1D, as will be clear below. There are three ingredients, each well-known.
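Proposition 6 can be probed empirically. The sketch below is illustrative only; the disorder λ = 2, sizes and sample counts are our arbitrary choices. Since \(H_{N}\) is a Jacobi matrix with nonvanishing off-diagonal entries its eigenvalues are always simple, and Proposition 6 predicts that the minimal gap degrades at most polynomially in N.

```python
import numpy as np

def min_gap_bernoulli(N, lam=2.0, samples=100, seed=0):
    """Smallest spacing of Spec H_N for the Anderson-Bernoulli model,
    minimized over random {0,1}-valued potentials."""
    rng = np.random.default_rng(seed)
    worst = np.inf
    off = np.ones(N - 1)
    for _ in range(samples):
        v = rng.integers(0, 2, size=N).astype(float)
        H = np.diag(lam * v) + np.diag(off, 1) + np.diag(off, -1)
        ev = np.linalg.eigvalsh(H)   # sorted eigenvalues
        worst = min(worst, float(np.min(np.diff(ev))))
    return worst
```

Plotting \(\log \) of the minimal gap against \(\log N\) over a range of sizes suggests the polynomial separation \(N^{-C(\lambda )}\).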

  1.

    Anderson localization

Anderson localization holds also for the 1D A-B model at any disorder. In fact, there is the following quantitative form. Denote \(\xi ^{(1)},\ldots,\xi ^{(N)}\) the normalized eigenvectors of \(H_{N}\). Then, with large probability (\(> 1 - N^{-A}\)), each \(\xi ^{(j)}\) is essentially localized on some interval of size \(C(\lambda )\log N\), in the sense that there is a center of localization \(\nu _{j} \in [1,N]\) such that

$$\displaystyle{ \vert \xi _{n}^{(j)}\vert < e^{-c(\lambda )\vert n-\nu _{j}\vert }\text{ for }\vert n -\nu _{ j}\vert > C(\lambda )\log N. }$$
(42)
  2.

    Hölder regularity of the IDS

The IDS \(\mathcal{N}(E)\) of H is Hölder of exponent γ = γ(λ) > 0. There are various proofs of this fact (see in particular [5] and [8]). In fact, it was shown in [2] that γ(λ) → 1 for λ → 0 but we will not need this here. What we use is the following finite scale consequence.

Lemma 7.

Let \(M \in \mathbb{Z}_{+}\) , \(E \in \mathbb{R}\) , δ > 0. Then

$$\displaystyle\begin{array}{rcl} & & \mathbb{P}[\mbox{ there is a vector $\xi = (\xi _{j})_{1\leq j\leq M},\Vert \xi \Vert = 1$, such that} \\ & & \Vert (H_{M} - E)\xi \Vert <\delta,\vert \xi _{1}\vert <\delta,\vert \xi _{M}\vert <\delta ] \leq \mathit{CM}\delta ^{\gamma }. {}\end{array}$$
(43)

The derivation is standard and we briefly recall the argument.

Take \(N \to \infty \) and split [1, N] into intervals of size M. Denoting by τ the l.h.s. of (43), we see that

$$\displaystyle{\mathbb{E}\big[\#(\text{Spec}\,H_{N} \cap [E - 5\delta,E + 5\delta ])\big] \geq \frac{N} {M}\tau.}$$

Dividing both sides by N and letting \(N \to \infty \), one obtains that

$$\displaystyle{ \frac{\tau } {M} \leq \mathcal{N}([E - 5\delta,E + 5\delta ])}$$

where \(\mathcal{N}\) is the IDS of H.

  3.

    A repulsion phenomenon

The next statement shows that eigenvectors with eigenvalues that are close together have their centers far away. The argument is based on the transfer matrix and hence strictly 1D.

Lemma 8.

Let \(\xi,\xi ^{{\prime}}\) be distinct normalized eigenvectors of \(H_{N}\) with centers \(\nu,\nu ^{{\prime}}\),

$$\displaystyle\begin{array}{rcl} H_{N}\xi & =& E\xi \\ H_{N}\xi ^{{\prime}}& =& E^{{\prime}}\xi ^{{\prime}}.{}\end{array}$$
(44)

Assuming \(\vert E - E^{{\prime}}\vert < N^{-C(\lambda )}\), it follows that

$$\displaystyle{ \vert \nu -\nu ^{{\prime}}\vert \gtrsim \log \frac{1} {\vert E - E^{{\prime}}\vert }. }$$
(45)

Proof.

Let \(\delta = \vert E - E^{{\prime}}\vert \) and assume \(1 \leq \nu \leq \nu ^{{\prime}}\leq N\). Take \(M = C(\lambda )\log N\) satisfying (42) and let \(\Lambda \) be an M-neighborhood of \([\nu,\nu ^{{\prime}}]\) in [1, N].

In particular, we ensure that

$$\displaystyle{ \vert \xi _{n}\vert,\vert \xi _{n}^{{\prime}}\vert < N^{-10}\text{ for }n\not\in \Lambda. }$$
(46)

We can assume that \(\vert \xi _{\nu }\vert > \frac{1} {2\sqrt{M}}\). Since \(\Vert \xi _{\nu }^{{\prime}}\xi -\xi _{\nu }\xi ^{{\prime}}\Vert \geq \vert \xi _{\nu }\vert > \frac{1} {2\sqrt{M}}\), it follows from (46) that for some \(n_{0} \in \Lambda \)

$$\displaystyle{ \vert \xi _{\nu }^{{\prime}}\xi _{ n_{0}} -\xi _{\nu }\xi _{n_{0}}^{{\prime}}\vert \gtrsim \frac{1} {\sqrt{M}\sqrt{\vert \Lambda \vert }}. }$$
(47)

Next, denote for n ∈ [1, N]

$$\displaystyle{D_{n} =\xi _{ \nu }^{{\prime}}\xi _{ n} -\xi _{\nu }\xi _{n}^{{\prime}}}$$

and

$$\displaystyle{W_{n} =\xi _{ n}^{{\prime}}\xi _{ n+1} -\xi _{n}\xi _{n+1}^{{\prime}}.}$$

Clearly, using Eq. (44)

$$\displaystyle{ \Vert (H_{N} - E)D\Vert \leq \delta }$$
(48)

and

$$\displaystyle{ \sum _{1\leq n<N}\vert W_{n} - W_{n+1}\vert <\delta. }$$
(49)

Let ν < N. Since \(D_{\nu } = 0\), it follows from (48) that

$$\displaystyle{ \vert D_{n}\vert \leq (2 + \vert \lambda \vert + \vert E\vert )^{\vert n-\nu \vert }(\vert D_{\nu +1}\vert + 2\delta ). }$$
(50)

(If ν = N, replace ν + 1 by ν − 1). From (47), (50)

$$\displaystyle{ \frac{1} {\sqrt{M}\sqrt{\vert \Lambda \vert }} \lesssim (2 + \vert \lambda \vert + \vert E\vert )^{\vert \Lambda \vert }(\vert D_{\nu +1}\vert + 2\delta )}$$

and since \(D_{\nu +1} = W_{\nu }\), it follows that

$$\displaystyle{ \vert W_{\nu }\vert + 2\delta > 10^{-\vert \Lambda \vert }. }$$
(51)

Invoking (49), we obtain for n ∈ [1, N]

$$\displaystyle{ \vert W_{n}\vert > 10^{-\vert \Lambda \vert }- (\vert n -\nu \vert + 1)\delta. }$$
(52)

On the other hand, by (42)

$$\displaystyle{\vert W_{n}\vert \leq \vert \xi _{n}\vert + \vert \xi _{n+1}\vert < e^{-c\lambda ^{2}\vert n-\nu \vert }\text{ for }\vert n -\nu \vert > C(\lambda )\log N.}$$

Taking \(\vert n -\nu \vert \sim \vert \Lambda \vert \) appropriately, it follows that

$$\displaystyle{\delta \gtrsim \frac{1} {\vert \Lambda \vert }10^{-\vert \Lambda \vert }}$$

and hence

$$\displaystyle{\vert \nu -\nu ^{{\prime}}\vert + M \gtrsim \log \frac{1} {\delta }.}$$

Lemma 8 follows. □ 

Proof of Proposition 6.

Assume \(H_{N}\) has two eigenvalues \(E,E^{{\prime}}\) such that

$$\displaystyle{\vert E - E^{{\prime}}\vert <\delta < N^{-C_{1} }}$$

where \(C_{1}\) is the constant from Lemma 8. It follows that the corresponding eigenvectors \(\xi,\xi ^{{\prime}}\) have resp. centers \(\nu,\nu ^{{\prime}} \in [1,N]\) satisfying

$$\displaystyle{ \vert \nu -\nu ^{{\prime}}\vert \gtrsim \log \frac{1} {\delta }. }$$
(53)

Introduce \(\delta _{0} >\delta \) (to be specified), \(M = C_{2}(\lambda )\log \frac{1} {\delta _{0}}\) and \(\Lambda = [\nu -M,\nu +M] \cap [1,N]\), \(\Lambda ^{{\prime}} = [\nu ^{{\prime}}- M,\nu ^{{\prime}} + M] \cap [1,N]\). Let \(\tilde{\xi }= \frac{\xi \vert _{\Lambda }} {\Vert \xi \vert _{\Lambda }\Vert },\tilde{\xi }^{{\prime}} = \frac{\xi ^{{\prime}}\vert _{ \Lambda ^{{\prime}}}} {\Vert \xi ^{{\prime}}\vert _{\Lambda ^{{\prime}}}\Vert }\). According to (42), choose M such that

$$\displaystyle{ \Vert (H_{\Lambda } - E)\tilde{\xi }\Vert < e^{-c\lambda ^{2}M } <\delta _{0}\text{ and }\vert \xi \vert _{\partial \Lambda }\vert <\delta _{0} }$$
(54)

and

$$\displaystyle{ \Vert (H_{\Lambda ^{{\prime}}}- E^{{\prime}})\tilde{\xi }^{{\prime}}\Vert <\delta _{ 0}\text{ and }\vert \xi ^{{\prime}}\vert _{ \partial \Lambda ^{{\prime}}}\vert <\delta _{0}. }$$
(55)

Requiring

$$\displaystyle{\log \frac{1} {\delta } > C_{3}M}$$

then (53) ensures disjointness of \(\Lambda,\Lambda ^{{\prime}}\). Hence \(H_{\Lambda },H_{\Lambda ^{{\prime}}}\) are independent as functions of V. It follows in particular from (54) that \(\text{dist}\,(E,\text{Spec}\,H_{\Lambda }) <\delta _{0}\), hence \(\vert E - E_{0}\vert <\delta _{0}\) for some \(E_{0} \in \text{Spec}\,H_{\Lambda }\). Having fixed \(E_{0}\), (55) implies that

$$\displaystyle{ \Vert (H_{\Lambda ^{{\prime}}}- E_{0})\xi ^{{\prime}}\Vert < \vert E - E^{{\prime}}\vert + 2\delta _{ 0} < 3\delta _{0}. }$$
(56)

Apply Lemma 7 to \(H_{\Lambda ^{{\prime}}}\) in order to deduce that the probability for (56) to hold with \(E_{0} \in \text{Spec}\,H_{\Lambda }\) fixed is at most \(\mathit{CM}\delta _{0}^{\gamma }\). Summing over all \(E_{0} \in \,\text{Spec}\,H_{\Lambda }\) and then over all pairs of boxes \(\Lambda,\Lambda ^{{\prime}}\) gives the bound

$$\displaystyle{ O(N^{2}M^{2}\delta _{ 0}^{\gamma }) = O\Big(N^{2}\Big(\log \frac{1} {\delta _{0}} \Big)^{2}\delta _{ 0}^{\gamma }\Big) < N^{2}\delta _{ 0}^{\gamma /2}. }$$
(57)

It remains to take \(\delta _{0} = N^{-\frac{5} {\gamma } }\), \(\log \frac{1} {\delta } > C\log \frac{1} {\delta _{0}}\). □