Abstract
We study eigenvalue spacings and local eigenvalue statistics for 1D lattice Schrödinger operators with Hölder regular potential, obtaining a version of Minami’s inequality and Poisson statistics for the local eigenvalue spacings. The main new inputs are regularity properties of the Furstenberg measures and of the density of states obtained in some of the author’s earlier work.
1 Introduction
This Note results from a few discussions with A. Klein (UCI, summer 2011) on Minami’s inequality and the results from [7] on Poisson local spacing behavior for the eigenvalues of certain Anderson type models. Recall that the Hamiltonian H on the lattice \(\mathbb{Z}^{d}\) has the form
\(H =\lambda V + \Delta \qquad (1)\)
with \(\Delta \) the nearest neighbor Laplacian on \(\mathbb{Z}^{d}\) and \(V = (v_{n})_{n\in \mathbb{Z}^{d}}\) IID variables with a certain distribution. Given a box \(\Omega \subset \mathbb{Z}^{d}\), \(H_{\Omega }\) denotes the restriction of H to \(\Omega \) with Dirichlet boundary conditions. Minami’s inequality, which is a refinement of Wegner’s estimate, is a bound on the expected number of pairs of eigenvalues of \(H_{\Omega }\) in a given interval \(I \subset \mathbb{R}\). This quantity can be expressed as
\(\mathbb{E}\big[\operatorname{Tr}\chi _{I}(H_{\Omega })\,\big(\operatorname{Tr}\chi _{I}(H_{\Omega }) - 1\big)\big]\qquad (2)\)
where the expectation is taken over the randomness V. An elegant treatment may be found in [6].
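As a concrete illustration of the pair-counting quantity (2), the following minimal Python sketch estimates it by Monte Carlo for a small 1D box. The uniform [0, 1] site distribution, box size, and interval are illustrative assumptions, and the function names (`anderson_box`, `minami_quantity`) are ours, not from the text.

```python
import numpy as np

def anderson_box(n, rng, lam=1.0):
    """Dirichlet restriction of H = lam*V + Delta to a box of n sites,
    with IID uniform[0,1] potential (an illustrative site distribution)."""
    v = rng.random(n)
    off = np.ones(n - 1)
    return np.diag(lam * v) + np.diag(off, 1) + np.diag(off, -1)

def minami_quantity(n, interval, trials, seed=0):
    """Monte Carlo estimate of E[ Tr chi_I(H) (Tr chi_I(H) - 1) ],
    the pair-counting expectation appearing in Minami's inequality."""
    rng = np.random.default_rng(seed)
    a, b = interval
    total = 0
    for _ in range(trials):
        ev = np.linalg.eigvalsh(anderson_box(n, rng))
        k = int(np.count_nonzero((ev >= a) & (ev <= b)))
        total += k * (k - 1)
    return total / trials

# Same seed => same potential samples in both calls, so shrinking the
# interval can only decrease the pair count, pathwise.
wide = minami_quantity(40, (0.2, 0.8), trials=200)
narrow = minami_quantity(40, (0.45, 0.55), trials=200)
```

Pathwise monotonicity of the pair count in the interval is what makes the two calls above directly comparable.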
Assuming the site distribution has a bounded density, (2) satisfies the expected bound
More generally, considering a site distribution probability measure μ which is Hölder with exponent 0 < β ≤ 1, i.e.
\(\mu \big([E,E+\delta ]\big) \leq C\delta ^{\beta }\quad \text{for all}\ E \in \mathbb{R},\ 0 <\delta \leq 1,\qquad (4)\)
it is shown in [6] that
For the sake of the exposition, we briefly recall the argument. Rewrite (2) as
where \((\delta _{j})\) denote the unit vectors of \(\mathbb{Z}^{d}\). Introduce a second independent copy \(W = (w_{n})\) of the potential V. Fixing \(j \in \Omega \), denote by \((V _{j}^{\perp },\tau _{j})\) the potential with assignments \(v_{n}\) for n ≠ j and \(\tau _{j}\) for n = j. Assuming \(\tau _{j} \geq v_{j}\), it follows from the interlacing property for rank-one perturbations that
and hence
Next, invoking the fundamental spectral averaging estimate (see [6, Appendix A]), we have
so that
The terms in (10) may be bounded using a Wegner estimate. Applying (9) again, the j-term in (10) is majorized by \(C\vert \Omega \vert \,\vert I\vert ^{\beta }\), leading to the estimate \(C\vert I\vert ^{2\beta }\vert \Omega \vert ^{2}\) for (2). It turns out that, at least in 1D, one can do better than reapplying the spectral averaging estimate. Indeed, it was shown in [2] that in 1D, Schrödinger operators (SO’s) with Hölder regular site distribution have a smooth density of states. This suggests that the \(\vert I\vert \)-dependence in (5) can be improved to \(\vert I\vert ^{1+\beta }\). Some additional work is needed to turn the result from [2] into the required finite scale estimate. We prove the following (set λ = 1 in (1)).
Proposition 1.
Let H be a 1D lattice random SO with Hölder site distribution satisfying (4) for some β > 0. Denote \(H_{N} = H_{[1,N]}\). Then, for \(I = [E-\delta,E+\delta ]\),
\(\mathbb{P}\big[\operatorname{Spec}H_{N} \cap I\neq \emptyset \big] \leq CN\delta + Ce^{-cN}.\qquad (11)\)
It follows that \(\mathbb{E}[\operatorname{Tr}\chi _{I}(H_{N})] \leq Ce^{-cN} + CN^{2}\vert I\vert \).
The above discussion then implies the following Minami-type estimate.
Corollary 2.
Under the assumption from Proposition 1, we have
provided \(\Omega \subset \mathbb{Z}\) is an interval of size \(\vert \Omega \vert > C_{1}\log (2 + \frac{1} {\vert I\vert })\) , where C,C 1 depend on V.
Denote by \(\mathcal{N}\) the integrated density of states (IDS) of H and set \(k(E) = \frac{d\mathcal{N}} {dE}\). Recall that k is smooth for Hölder regular site distribution (cf. [2]).
Combined with Anderson localization, Proposition 1 and Corollary 2 permit us to derive the following statements for H as above.
Proposition 3.
Assuming \(\log \frac{1} {\delta } < \mathit{cN}\) , we have for \(I = [E_{0}-\delta,E_{0}+\delta ]\) that
and
Proposition 4.
Following a well-known strategy, Anderson localization permits a decoupling of the contributions of pairs of eigenvectors whose centers of localization are at least \(C\log \frac{1} {\vert I\vert }\) apart. Invoking (11), this yields the first term on the r.h.s. of (14). For the remaining contribution, use Corollary 2.
With Propositions 3, 4 at hand and again exploiting Anderson localization, the analysis from [7] becomes available and we obtain the following universality statement for 1D random SO’s with Hölder regular site distribution.
Proposition 5.
Let \(E_{0} \in \mathbb{R}\) and \(I = [E_{0},E_{0} + \frac{L} {N}]\), where we let first N →∞ and then L →∞. The rescaled eigenvalues \(\{N(E_{\alpha } - E_{0}): E_{\alpha } \in \operatorname{Spec}H_{N} \cap I\}\)
converge to a Poisson point process in the limit N →∞.
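Proposition 5 can be probed numerically. The sketch below collects the rescaled eigenvalues in \(I = [E_{0},E_{0} + \frac{L}{N}]\) over independent samples; the uniform [0, 1] site distribution and the choices of \(E_{0}\), L, N are illustrative assumptions, and for a Poisson limit the counts should fluctuate around \(k(E_{0})\,L\). This is an illustration, not a proof.

```python
import numpy as np

def rescaled_eigenvalues(n, e0, l, rng):
    """Eigenvalues of H_N in I = [e0, e0 + l/n], rescaled by n about e0,
    for one sample of a uniform[0,1] potential (illustrative choice)."""
    v = rng.random(n)
    h = np.diag(v) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    ev = np.linalg.eigvalsh(h)
    inside = ev[(ev >= e0) & (ev <= e0 + l / n)]
    return n * (inside - e0)

rng = np.random.default_rng(2)
counts = [rescaled_eigenvalues(300, 0.5, 20.0, rng).size for _ in range(50)]
mean_count = float(np.mean(counts))  # for a Poisson limit, ~ k(e0) * L

pts = rescaled_eigenvalues(300, 0.5, 20.0, np.random.default_rng(7))
```

For a genuine test of Poisson behavior one would compare the empirical count distribution (and gap distribution) against the exponential law as N, L grow; the snippet only sets up the rescaling.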
At the end of the paper, we will make some comments on eigenvalue spacings for the Anderson-Bernoulli (A-B) model, where in (1) the \(v_{n}\) are {0, 1}-valued. Further results along the above lines for A-B models with certain special couplings λ will appear in [4].
2 Proof of Proposition 1
Set λ = 1 in (1). We denote
\(M_{n} = M_{n}^{(v_{1},\ldots,v_{n})}(E) =\prod _{j=n}^{1}\begin{pmatrix} E - v_{j} & -1\\ 1 & 0 \end{pmatrix}\)
the usual transfer operators. Thus the equation H ξ = E ξ is equivalent to
\(\begin{pmatrix} \xi _{n+1} \\ \xi _{n} \end{pmatrix} = M_{n}\begin{pmatrix} \xi _{1} \\ \xi _{0} \end{pmatrix}.\)
Considering a finite scale [1, N], let H [1, N] be the restriction of H with Dirichlet boundary conditions. Fix \(I = [E_{0}-\delta,E_{0}+\delta ]\) and assume H [1, N] has an eigenvalue E ∈ I with eigenvector ξ = (ξ j )1 ≤ j ≤ N . Then
Assume \(\vert \xi _{1}\vert \geq \vert \xi _{N}\vert \) (otherwise replace \(M_{N}\) by \(M_{N}^{-1}\), which can be treated similarly). It follows from (17) that
with (e 1, e 2) the \(\mathbb{R}^{2}\)-unit vectors. On the other hand, from the large deviation estimates for random matrix products (cf. [1]), we have that
with probability at least \(1 - e^{-\mathit{cN}}\) (in the sequel, c, C will denote various constants that may depend on the potential).
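The large deviation estimate (19) reflects the exponential growth of \(\Vert M_{N}\Vert \) at the Lyapunov rate. A minimal numerical sketch, assuming a uniform [0, 1] potential and the illustrative energy E = 0.5 (the function name `transfer_product` is ours):

```python
import numpy as np

def transfer_product(energy, v):
    """Ordered product M_N(E) of the one-step transfer matrices
    [[E - v_j, -1], [1, 0]], j = 1,...,N (later steps applied on the left)."""
    m = np.eye(2)
    for vj in v:
        m = np.array([[energy - vj, -1.0], [1.0, 0.0]]) @ m
    return m

rng = np.random.default_rng(1)
n = 400
v = rng.random(n)                       # illustrative uniform[0,1] potential
m_full = transfer_product(0.5, v)
lyap = float(np.log(np.linalg.norm(m_full, 2)) / n)  # finite-scale Lyapunov exponent

# Consistency check: for an eigenpair (E, xi) of H_N (Dirichlet, xi_0 = 0),
# the transfer recursion reproduces the eigenvector entries.
h = np.diag(v) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
ev, vecs = np.linalg.eigh(h)
j = int(np.argmax(np.abs(vecs[0])))     # eigenvector with largest first entry
e, xi = float(ev[j]), vecs[:, j]
pair = transfer_product(e, v[:10]) @ np.array([xi[0], 0.0])  # = (xi_11, xi_10)
```

Since each one-step matrix has determinant 1, \(\Vert M_{N}\Vert \geq 1\) always; the positive finite-scale exponent is the quantitative content of (19).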
Write
The integrand in (20) is clearly bounded by
where
depends only on the variables \(v_{1},\ldots,v_{n-1}\).
At this point, we invoke some results from [2]. It follows from the discussion in [2, Sect. 5] on SO’s with Hölder potential that for ℓ > C = C(V ), the inequality
holds for any \(\varepsilon > 0\) and any unit vector \(\zeta \in \mathbb{R}^{2}\), where \(M_{\ell} = M_{\ell}^{(v_{1},\ldots,v_{\ell})}\).
A word of explanation. It is proved in [2] that if we take n large enough, the map \((v_{1},\ldots,v_{n})\mapsto M_{n}^{(v_{1},\ldots,v_{n})}\) defines a bounded density on \(SL_{2}(\mathbb{R})\). Fix then some n = O(1) with the above property and write for ℓ > n,
noting that here M n and M ℓ−n are independent as functions of the potential. Choose j such that \(\Vert M_{\ell-n}^{{\ast}}e_{j}\Vert \sim \Vert M_{\ell-n}^{{\ast}}\Vert =\Vert M_{\ell-n}\Vert \sim \Vert M_{\ell}\Vert\) and fix the vector M ℓ−n ∗ e j . Since then \((v_{1},\ldots,v_{n})\mapsto M_{n}(\zeta )\) defines a bounded density, inequality (24) holds.
Since always \(\Vert M_{\ell}\Vert < C^{\ell}\) and \(\Vert M_{\ell}(\zeta )\Vert > C^{-\ell}\), it clearly follows from (24) that
Therefore
Hence we showed that, assuming (19), \(\operatorname{Spec}H_{N}^{(V )} \cap I\neq \emptyset \) holds with probability at most \(CN\delta \). Therefore, unconditionally, \(\operatorname{Spec}H_{N}^{(V )} \cap I\neq \emptyset \) with probability at most \(CN\delta + Ce^{-cN}\), proving (11).
3 Proof of Propositions 3 and 4
Assume \(\log \frac{1} {\vert I\vert } < \mathit{cN}\) and set \(M = C\log \big(N + \frac{1} {\vert I\vert }\big)\) for appropriate constants c, C. From the theory of Anderson localization in 1D, the eigenvectors ξ α of H N , | ξ α | = 1 satisfy
with probability at least 1 − e −cM, with j α the center of localization of ξ α .
The above statement is well-known and relies on the large deviation estimates for the transfer matrix. Let us also point out, however, that the above (optimal) choice of M is not really important in what follows; taking for M some power of \(\log N\) would do as well.
We may therefore introduce a collection of intervals \((\Lambda _{s})_{1\leq s\lesssim \frac{N} {M} }\) of size M covering [1, N], such that for each α, there is some \(1 \leq s \lesssim \frac{N} {M}\) satisfying
with \(\xi _{\alpha,s} =\xi _{\alpha }\vert \Lambda _{s}\). Therefore dist (E α , Spec\(\,H_{\Lambda _{s}}) < e^{-\mathit{cM}} <\delta\).
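The centers of localization used above are easy to exhibit numerically. A hedged sketch, assuming a uniform [0, 1] site distribution and taking \(j_{\alpha } = \operatorname{argmax}_{j}\vert \xi _{\alpha }(j)\vert \) as center (the function name `localization_centers` and the window constant are ours):

```python
import numpy as np

def localization_centers(n, seed):
    """Eigenpairs of H_N for one sample of a uniform[0,1] potential,
    together with the center j_alpha = argmax_j |xi_alpha(j)| of each
    normalized eigenvector."""
    rng = np.random.default_rng(seed)
    v = rng.random(n)
    h = np.diag(v) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    ev, vecs = np.linalg.eigh(h)          # columns of vecs are unit eigenvectors
    centers = np.argmax(np.abs(vecs), axis=0)
    return ev, vecs, centers

ev, vecs, centers = localization_centers(200, seed=5)

# Fraction of l^2 mass captured by a window of size ~ C log N about each center;
# exponential localization predicts this is close to 1 for most eigenvectors.
m = int(4 * np.log(200))
mass = [float(np.sum(vecs[max(0, c - m):c + m + 1, a] ** 2))
        for a, c in enumerate(centers)]
```

This only illustrates the covering-by-\(\Lambda _{s}\) picture; the rigorous input is the large deviation estimate cited above.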
Let us establish Proposition 3. Denoting \(\Lambda _{1}\) and \(\Lambda _{s_{{\ast}}}\) the intervals appearing at the boundary of [1, N], one obtains by a well-known argument based on exponential localization
with \(\tilde{I} = [E_{0} - 2\delta,E_{0} + 2\delta ]\). Invoking then Proposition 1 and Corollary 2, we obtain
by the choice of M and assuming (logN)2 δ β < 1, as we may.
Substituting (31) in (30) gives then
since k is Lipschitz. This proves (13).
Next, we prove Proposition 4.
Assume \(E_{\alpha },E_{\alpha ^{{\prime}}} \in I,\alpha \not =\alpha ^{{\prime}}\). We distinguish two cases.
Case 1.
\(\vert j_{\alpha } - j_{\alpha ^{{\prime}}}\vert > \mathit{CM}\).
Here C is taken large enough as to ensure that the corresponding boxes \(\Lambda _{s},\Lambda _{s^{{\prime}}}\) introduced above are disjoint. Thus
Since the events (32), (33) are independent, it follows from Proposition 1 that the probability for the joint event is at most
by our choice of M. Summing over the pairs \(s,s^{{\prime}}\lesssim \frac{N} {M}\) gives therefore the bound CN 2 δ 2 for the probability of a Case 1 event.
Case 2.
\(\vert j_{\alpha } - j_{\alpha ^{{\prime}}}\vert \leq \mathit{CM}\).
We obtain an interval \(\Lambda \) as union of at most C consecutive \(\Lambda _{s}\)-intervals such that (28), (29) hold with \(\Lambda _{s}\) replaced by \(\Lambda \) for both (ξ α , E α ), \((\xi _{\alpha ^{{\prime}}},E_{\alpha ^{{\prime}}})\). This implies that Spec\(\,H_{\Lambda } \cap \tilde{ I}\) contains at least two elements. By Corollary 2, the probability for this is at most CM 3 δ 1+β. Hence, we obtain the bound CM 2 N δ 1+β for the Case 2 event.
The final estimate is therefore
and (14) follows from our choice of M.
4 Sketch of the Proof of Proposition 5
Next we briefly discuss local eigenvalue statistics, following [7].
The Wegner and Minami type estimates obtained in Propositions 3 and 4 above permit us to reproduce essentially the analysis from [7] proving local Poisson statistics for the eigenvalues of \(H_{N}^{\omega }\). We sketch the details (recall that we consider a 1D model with Hölder site distribution).
Let \(M = K\log N,M_{1} = K_{1}\log N\) with K ≫ K 1 ≫ 1 ( → ∞ with N) and partition
where \(\Lambda _{\alpha }\) (resp. \(\Lambda _{\alpha,1}\)) are intervals of size M (resp. \(M_{1}\))
Denote
Let \(\Lambda _{\alpha }^{{\prime}}\) (resp. \(\Lambda _{\alpha,1}^{{\prime}}\)) be a neighborhood of \(\Lambda _{\alpha }\) (resp. \(\Lambda _{\alpha,1}\)) of size ∼ logN taken such as to ensure that
(A a sufficiently large constant), and
Choosing K 1 large enough, we ensure that the \(\Lambda _{\alpha }^{{\prime}}\) are disjoint and hence \(\{\text{Spec}\,H_{\Lambda _{\alpha }^{{\prime}}}^{\omega }\}\) are independent.
Consider an energy interval
Denote
with L a large parameter, eventually → ∞.
We obtain from (11) and our choice of M 1 that
and hence
provided
Also, by (12)
so that
Next, we introduce the (partially defined) random variables
Thus the \(E_{\alpha }\), \(\alpha = 1,\ldots, \frac{N} {M+M_{1}}\), take values in I, are independent (since \(\{\text{Spec}\,H_{\Lambda _{\alpha }^{{\prime}}}\}\) are independent), and have the same distribution.
Let J ⊂ I be an interval with \(\vert J\vert \) of the order of \(\frac{1} {N}\). Then, by (38) and Proposition 3,
where \(M^{{\prime}} = \vert \Lambda _{\alpha }^{{\prime}}\vert \).
Therefore \(\{N(E_{\alpha } - E_{0})\}_{\alpha \leq \frac{N} {M+M_{1}} }\) converge in distribution to a Poisson process (in a weak sense), proving Proposition 5.
5 Comments on the Bernoulli Case
Consider the model (1) with \(V = (v_{n})_{n\in \mathbb{Z}}\) independent {0, 1}-valued. For large | λ | , H does not have a bounded density of states. It was shown in [3] that for certain small algebraic values of the coupling constant λ, \(k(E) = \frac{d\mathcal{N}} {dE}\) can be made arbitrarily smooth (see [3] for the precise statement). In particular k ∈ L ∞ and one could ask if Proposition 4 remains valid in this situation. One could actually conjecture that the analogue of Proposition 4 holds for the A-B model in 1D, at small disorder. This problem will be pursued further in [4]. What we prove here is an eigenvalue separation property at finite scale for the A-B model at arbitrary disorder λ ≠ 0. Denote again H N the restriction of H to [1, N] with Dirichlet boundary conditions. We have
Proposition 6.
With large probability, the eigenvalues of H N are at least N −C separated, C = C(λ).
A statement of this kind is known for random SO’s with Hölder site distribution of regularity \(\beta > \frac{1} {2}\), in arbitrary dimension [6]. But note that our proof of Proposition 6 is specifically 1D, as will be clear below. There are three ingredients, each well-known.
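The separation claimed in Proposition 6 is easy to observe at finite scale. A minimal sketch for the A-B model (the sample sizes and the function name `min_spacing_bernoulli` are illustrative assumptions, not from the paper):

```python
import numpy as np

def min_spacing_bernoulli(n, lam, seed):
    """Minimal eigenvalue gap of the A-B Hamiltonian restricted to [1, n].
    A tridiagonal matrix with nonzero off-diagonal entries always has
    simple spectrum, so the gap is strictly positive for every sample."""
    rng = np.random.default_rng(seed)
    v = rng.integers(0, 2, size=n).astype(float)  # Bernoulli {0,1} potential
    h = np.diag(lam * v) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    ev = np.linalg.eigvalsh(h)                    # sorted eigenvalues
    return float(np.min(np.diff(ev)))

gap = min_spacing_bernoulli(200, 1.0, seed=4)  # to be compared with N^{-C}
```

For n = 2 the gap is \(\sqrt{(v_{1} - v_{2})^{2} + 4} \geq 2\) exactly, which gives a deterministic sanity check on the routine.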
1. Anderson localization
Anderson localization also holds for the 1D A-B model at any disorder. In fact, there is the following quantitative form. Denote by \(\xi ^{(1)},\ldots,\xi ^{(N)}\) the normalized eigenvectors of \(H_{N}\). Then, with large probability \((> 1 - N^{-A})\), each \(\xi ^{(j)}\) is essentially localized on some interval of size \(C(\lambda )\log N\), in the sense that there is a center of localization \(\nu _{j} \in [1,N]\) such that
2. Hölder regularity of the IDS
The IDS \(\mathcal{N}(E)\) of H is Hölder of exponent γ = γ(λ) > 0. There are various proofs of this fact (see in particular [5] and [8]). In fact, it was shown in [2] that γ(λ) → 1 for λ → 0 but we will not need this here. What we use is the following finite scale consequence.
Lemma 7.
Let \(M \in \mathbb{Z}_{+}\) , \(E \in \mathbb{R}\) , δ > 0. Then
The derivation is standard and we briefly recall the argument.
Take N →∞ and split [1, N] into intervals of size M. Denoting by τ the l.h.s. of (43), we see that
Dividing both sides by N and letting N → ∞, one obtains that
where \(\mathcal{N}\) is the IDS of H.
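The l.h.s. of Lemma 7 can be estimated by simulation. A hedged Monte Carlo sketch (block size, energy, and the function name `hit_probability` are illustrative assumptions):

```python
import numpy as np

def hit_probability(m, energy, delta, trials, lam=1.0, seed=3):
    """Monte Carlo estimate of P[dist(E, Spec H_M) < delta] for the
    Bernoulli model, i.e. the quantity bounded in Lemma 7."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        v = rng.integers(0, 2, size=m).astype(float)
        h = (np.diag(lam * v) + np.diag(np.ones(m - 1), 1)
             + np.diag(np.ones(m - 1), -1))
        ev = np.linalg.eigvalsh(h)
        if float(np.min(np.abs(ev - energy))) < delta:
            hits += 1
    return hits / trials

# Same seed => same potential samples, so shrinking delta can only lose hits,
# consistent with the C M delta^gamma upper bound of the lemma.
p_wide = hit_probability(30, 0.5, 0.05, trials=200)
p_narrow = hit_probability(30, 0.5, 0.005, trials=200)
```

Tracking the ratio of such probabilities as δ shrinks gives a crude empirical read on the Hölder exponent γ, though nothing here substitutes for the proof.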
3. A repulsion phenomenon
The next statement shows that eigenvectors with eigenvalues that are close together have their centers far away. The argument is based on the transfer matrix and hence strictly 1D.
Lemma 8.
Let ξ,ξ ′ be distinct normalized eigenvectors of H N with centers ν,ν ′ ,
Assuming |E − E ′ | < N −C(λ) , it follows that
Proof.
Let δ = | E − E ′ | and assume \(1 \leq \nu \leq \nu ^{{\prime}}\leq N\). Take M = C(λ)logN satisfying (42) and \(\Lambda \) an M-neighborhood of [ν, ν ′] in [1, N].
In particular, we ensure that
We can assume that \(\vert \xi _{\nu }\vert > \frac{1} {2\sqrt{M}}\). Since \(\Vert \xi _{\nu }^{{\prime}}\xi -\xi _{\nu }\xi ^{{\prime}}\Vert \geq \vert \xi _{\nu }\vert > \frac{1} {2\sqrt{M}}\), it follows from (46) that for some \(n_{0} \in \Lambda \)
Next, denote for n ∈ [1, N]
and
Clearly, using Eq. (44)
and
Let ν < N. Since D ν = 0, it follows from (48) that
(If ν = N, replace ν + 1 by ν − 1). From (47), (50)
and since D ν+1 = W ν , it follows that
Invoking (49), we obtain for n ∈ [1, N]
On the other hand, by (42)
Taking \(\vert n -\nu \vert \sim \vert \Lambda \vert \) appropriately, it follows that
and hence
Lemma 8 follows. □
Proof of Proposition 6.
Assume H N has two eigenvalues E, E ′ such that
where C 1 is the constant from Lemma 8. It follows that the corresponding eigenvectors ξ, ξ ′ have resp. centers ν, ν ′ ∈ [1, N] satisfying
Introduce δ 0 > δ (to specify), \(M = C_{2}(\lambda )\log \frac{1} {\delta _{0}}\) and \(\Lambda = [\nu -M,\nu +M] \cap [1,N]\), \(\Lambda ^{{\prime}} = [\nu ^{{\prime}}- M,\nu ^{{\prime}} + M] \cap [1,N]\). Let \(\tilde{\xi }= \frac{\xi \vert _{\Lambda }} {\Vert \xi \vert _{\Lambda }\Vert },\tilde{\xi }^{{\prime}} = \frac{\xi ^{{\prime}}\vert _{ \Lambda ^{{\prime}}}} {\Vert \xi ^{{\prime}}\vert _{\Lambda ^{{\prime}}}\Vert }\). According to (42), choose M such that
and
Requiring
(53) will ensure disjointness of \(\Lambda,\Lambda ^{{\prime}}\). Hence \(H_{\Lambda },H_{\Lambda ^{{\prime}}}\) are independent as functions of V. It follows in particular from (54) that dist\(\,(E,\text{Spec}\,H_{\Lambda }) <\delta _{0}\), hence | E − E 0 | < δ 0 for some \(E_{0} \in \text{Spec}\,H_{\Lambda }\). Having fixed E 0, (55) implies that
Apply Lemma 7 to \(H_{\Lambda ^{{\prime}}}\) in order to deduce that the probability for (56) to hold with \(E_{0} \in \text{Spec}\,H_{\Lambda }\) fixed is at most \(CM\delta _{0}^{\gamma }\). Summing over all \(E_{0} \in \text{Spec}\,H_{\Lambda }\) and then over all pairs of boxes \(\Lambda,\Lambda ^{{\prime}}\) gives the bound
It remains to take \(\delta _{0} = N^{-\frac{5} {\gamma } }\), \(\log \frac{1} {\delta } > C\log \frac{1} {\delta _{0}}\). □
References
Ph. Bougerol, J. Lacroix, Products of Random Matrices with Applications to Schrödinger Operators (Birkhäuser, Boston, 1985)
J. Bourgain, On the Furstenberg measure and density of states for the Anderson-Bernoulli model at small disorder. J. Anal. Math. 117, 273–295 (2012)
J. Bourgain, An application of group expansion to the Anderson-Bernoulli model. Preprint 07/13
J. Bourgain, On the local eigenvalue spacings for certain Anderson-Bernoulli Hamiltonians. Preprint 08/13
R. Carmona, A. Klein, F. Martinelli, Anderson localization for Bernoulli and other singular potentials. Commun. Math. Phys. 108, 41–66 (1987)
J.-M. Combes, F. Germinet, A. Klein, Generalized eigenvalue - counting estimates for the Anderson model. J. Stat. Phys. 135, 201–216 (2009)
F. Germinet, F. Klopp, Spectral statistics for random Schrödinger operators in the localized regime. J. Eur. Math. Soc. (to appear)
C. Shubin, R. Vakilian, T. Wolff, Some harmonic analysis questions suggested by Anderson-Bernoulli models. Geom. Funct. Anal. 8, 932–964 (1998)
Acknowledgements
The author is grateful to an anonymous referee and A. Klein for comments and to the UC Berkeley mathematics department for their hospitality. This work was partially supported by NSF grant DMS-1301619.
© 2014 Springer International Publishing Switzerland
Bourgain, J. (2014). On Eigenvalue Spacings for the 1-D Anderson Model with Singular Site Distribution. In: Klartag, B., Milman, E. (eds) Geometric Aspects of Functional Analysis. Lecture Notes in Mathematics, vol 2116. Springer, Cham. https://doi.org/10.1007/978-3-319-09477-9_6