Abstract
This chapter first introduces the classification of the classical GOE, GUE and GSE ensembles. To establish their basic properties, nearest neighbor spacing distributions (NNSD) are derived for the simple 2×2 matrix versions of these ensembles. Going further, one- and two-point functions (in the eigenvalues) for general N×N GOE and GUE are derived using the so-called binary correlation approximation. Measures of level fluctuations, as given by the number variance and the Dyson-Mehta Δ3 statistic, are briefly discussed, together with strength functions, information entropy and the Porter-Thomas distribution for the wavefunction structure generated by these ensembles. In addition, some aspects of data analysis are presented.
2.1 Hamiltonian Structure and Dyson’s Classification of GOE, GUE and GSE Random Matrix Ensembles
The discussion in this section is largely from Porter’s book [1]. Original contributions here are due to Wigner and Dyson and all their papers were reprinted in [1]. A more recent discussion of Dyson’s classification of the classical random matrix ensembles is given by Haake [2]. The classification is based on the properties of the time-reversal operator in quantum mechanics [3]. Appendix A gives a brief discussion of time reversal and the results given there are used in this section. In finite Hilbert spaces, the Hamiltonian of a quantum system can be represented by an N×N matrix. Now we will consider the properties of this matrix with regard to time-reversal (T) and angular momentum (J) symmetries.
Firstly, imagine there is no time-reversal invariance. Then we know nothing about the H matrix except that it must be complex Hermitian, and it remains complex Hermitian in any representation obtained from another by a unitary transformation.
Now, consider H to be T invariant. Then we have, from the results in Appendix A, T 2=±1. Let us say T is good and T 2=1. Then it is possible to construct an orthogonal basis ψ k such that Tψ k =ψ k is satisfied. Let us start with a normalized state Φ 1 and construct,
where a is an arbitrary complex number. This gives trivially
Now consider Φ 2 such that 〈Ψ 1∣Φ 2〉=0 and construct
Then,
Here we used Eq. (A.12) and T 2=1. Continuing this way an entire set of orthogonal Ψ i can be produced satisfying
and they can be normalized. In this basis all H’s that are T invariant, i.e. THT −1=H will be real symmetric,
Therefore for systems with T 2=1, all H’s that are T invariant can be made, independent of J, real symmetric in a single basis.
Let us consider the situation where T is good, J is good and T 2=−1. Now, say \(\overline{T}=\exp({-i\pi J_{y}})T\) and then, using T=exp(iπS y )K as given by Eq. (A.23), we have
as L y is an integer in the (L 2, L y ) diagonal basis. Now we have \((\overline{T})^{2}=1\) and therefore we can proceed to construct a Γ k basis with \(\overline{T}\varGamma_{i}=\varGamma_{i}\) just as in the situation with T 2=1. In the Γ i basis, a T invariant H will be real symmetric,
Therefore if H is invariant under both J and T, the H matrix can be made symmetric. In fact all such H’s will be simultaneously real symmetric in the Γ k basis and they remain so by an orthogonal transformation.
The final situation is where T is good, J is not good but T 2=−1. In this situation we still have Kramers degeneracy: given a |ψ〉, the states |ψ〉 and |Tψ〉 are orthogonal. With a basis of 2N states, (|i〉, T|i〉) with i=1,2,…,N, consider
Then,
and here we have used T 2=−1 with \(T C_{0}=C_{0}^{*}\) for any number C 0. As T=UK, Eq. (2.10) then gives for each (|m〉, |Tm〉) pair. Further, T 2=−1⇒UKUK=−1 and hence UU ∗=−1; combined with UU †=1 this gives \(U=-\widetilde{U}\). Any U such that \(U=-\widetilde{U}\) can be brought to the form
by a similarity transformation. We can also choose
Now we consider a unitary matrix S that commutes with T=UK. Then
Therefore with
S must be
The S that are unitary and satisfy Eq. (2.15) are called symplectic matrices. We will now construct H matrices that are invariant under S, i.e. under symplectic transformations. Towards this end consider
and H in the form, with ,
Then
Now THT −1=H will give,
Comparing Eqs. (2.18) and (2.19) we have,
Now we will prove that, if H is T invariant, then SHS −1 is also T invariant,
Therefore the quaternion structure of H given by Eq. (2.17), valid for T invariant H with T 2=−1 and J may not be good, will be preserved by symplectic transformations, that is by S that are unitary and satisfying the condition \(SZ\widetilde{S}=Z\). Together with Eq. (2.20), the H’s are quaternion real (QR) matrices.
The results proved above will give the Hamiltonian form and the corresponding group structure under (J,T) invariance as follows:
1. For T not good (J good or not good), the Hamiltonian is complex Hermitian and the canonical group of transformations is U(N), the unitary group in N dimensions (N is the dimension of the H matrix).
2. For T good and J good, the Hamiltonian is real symmetric and the canonical group of transformations is O(N), the orthogonal group in N dimensions.
3. For T good and J not good but an integer, the Hamiltonian is real symmetric and the canonical group of transformations is again O(N).
4. For T good and J not good but a half-odd integer, the Hamiltonian is quaternion real and the canonical group of transformations is Sp(2N), the symplectic group in 2N dimensions (note that here the H matrix dimension is 2N as it must be even).
Note that in (1)–(4) above, in a single basis all H’s can be simultaneously made complex Hermitian, real symmetric or QR as appropriate. In the absence of any information other than the invariance with respect to J and T, one can represent the H matrix of a given quantum system by an ensemble of N×N matrices with structure as given by (1)–(4) above. The matrix elements are then chosen to be independent random variables. Note that for the U(N), O(N) and Sp(2N) systems mentioned above, the number of independent variables (for a complex number there are two variables, one for the real part and the other for the imaginary part) will be N 2, N(N+1)/2 and N(2N−1) respectively. In the classical random matrix ensembles, called GUE, GOE and GSE respectively, the matrix elements are chosen to be independent zero centered Gaussian variables with variance unity (except that the diagonal matrix elements, which are real, have variance 2). Then these ensembles will be invariant under U(N), O(N) and Sp(2N) transformations respectively and accordingly they are called the Gaussian unitary (GUE), orthogonal (GOE) and symplectic (GSE) ensembles. Table 2.1 gives, for these three ensembles, the corresponding transformation matrices and the mathematical structure of the Hamiltonians. In order to make a beginning in deriving the properties of GOE, GUE and GSE, we will start with the simplest 2×2 matrix version of these ensembles. Hereafter, zero centered Gaussian variables with variance v 2 will be denoted by G(0,v 2).
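As an aside, the GOE and GUE matrix structures above are easy to realize numerically. The following sketch (our own illustration, not from the text; Python with NumPy, with helper names, seed and the variance convention v 2 off-diagonal and 2v 2 diagonal for GOE chosen by us) builds one member of each ensemble and checks the defining symmetry and the independent-parameter counts:

```python
import numpy as np

rng = np.random.default_rng(0)

def goe_member(n, v=1.0):
    """Real symmetric member: off-diagonal G(0, v^2), diagonal G(0, 2 v^2)."""
    a = rng.normal(0.0, v, (n, n))
    return (a + a.T) / np.sqrt(2.0)

def gue_member(n, v=1.0):
    """Complex Hermitian member: ensemble-averaged |H_ij|^2 = v^2 off-diagonal."""
    a = rng.normal(0.0, v, (n, n)) + 1j * rng.normal(0.0, v, (n, n))
    return (a + a.conj().T) / 2.0

n = 6
hg, hu = goe_member(n), gue_member(n)
sym_ok = np.allclose(hg, hg.T)             # GOE: real symmetric
herm_ok = np.allclose(hu, hu.conj().T)     # GUE: Hermitian, so eigenvalues real

# independent real parameters: n(n+1)/2 for O(n) systems, n^2 for U(n) systems
params_goe, params_gue = n * (n + 1) // 2, n * n
```

Diagonalizing `hg` and `hu` with `np.linalg.eigvalsh` then gives the real spectra whose fluctuation properties are studied below.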
2.1.1 2×2 GOE
For a 2×2 GOE, the Hamiltonian matrix is
and the joint distribution for the independent variables X 1, X 2 and X 3 is
The X i in Eq. (2.22) are G(0,v 2). Then, (X 1+X 2) is G(0,2v 2) and (X 1−X 2) is G(0,2v 2). Let the eigenvalues of the H matrix be λ 1 and λ 2. Using the properties of the sum and product of eigenvalues, we have λ 1+λ 2=2X 1 and \(\lambda_{1}\lambda _{2}=X_{1}^{2}-X_{2}^{2}-X_{3}^{2}\). This gives
Now x 2=2X 2 is G(0,4v 2), x 3=2X 3 is G(0,4v 2) and they are independent. Therefore
Transforming the variables x 2,x 3 to S,ϕ where x 2=Scosϕ, x 3=Ssinϕ we have
Then the NNSD is,
Note that, with \(\overline{D}\) denoting mean (or average) spacing,
In terms of the normalized spacing \(\hat{S}=S/\overline{D}\),
Thus, GOE displays linear level repulsion with P(S)∼S as S→0. This is indeed the von Neumann-Wigner level repulsion discussed in 1929 [4], signifying that quantum levels with the same quantum numbers do not come close. The variance of P(S) is
Here, Eqs. (2.24) and (2.28) are used.
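The derivation above can be checked by direct sampling. In the following sketch (our own illustration; the seed and sample size are arbitrary) S=2(X 2 2+X 3 2)1/2 is sampled with X 2, X 3 being G(0,v 2); the mean spacing then comes out as \(\overline{D}=v\sqrt{2\pi}\), and the normalized spacings reproduce the NNSD variance 4/π−1≈0.273:

```python
import numpy as np

rng = np.random.default_rng(1)
v, m = 1.0, 200_000

# S = |lambda_1 - lambda_2| = 2 sqrt(X2^2 + X3^2) for the 2x2 GOE
x2 = rng.normal(0.0, v, m)
x3 = rng.normal(0.0, v, m)
s = 2.0 * np.hypot(x2, x3)

d_bar = s.mean()            # mean spacing; analytically v * sqrt(2 pi)
s_hat = s / d_bar           # normalized spacing, unit mean by construction
var_nnsd = s_hat.var()      # analytically 4/pi - 1 ~ 0.273
```

The small-S behavior P(S)∼S (linear repulsion) can be seen from the same sample by histogramming `s_hat` near zero.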
2.1.2 2×2 GUE
Here the Hamiltonian matrix is,
with X i being G(0,v 2) and independent. Solving for the eigenvalues of H, we get λ 1+λ 2=2X 1 and \(\lambda_{1}\lambda_{2} = X_{1}^{2} - X_{2}^{2} - X_{3}^{2} - X_{4}^{2}\). Therefore \(S^{2} = (\lambda_{1} - \lambda_{2})^{2} = 4(X_{2}^{2} + X_{3}^{2} + X_{4}^{2})\). With x 2=2X 2, x 3=2X 3 and x 4=2X 4, we have x 2, x 3 and x 4 to be independent G(0,4v 2) variables. The joint probability distribution function for these is
Transforming to spherical co-ordinates i.e. x 2=Ssinθsinϕ, x 3=Ssinθcosϕ and x 4=Scosθ with dx 2 dx 3 dx 4=S 2 dSsinθdθdϕ,
Note that \(\int_{0}^{\infty}P(S)dS=1\) and \(\overline{D}=\int_{0}^{\infty}SP(S)dS=8v/\sqrt{2\pi}\). With \(\hat{S}=S/\overline{D}\),
Thus, GUE gives quadratic level repulsion with P(S)∼S 2 for S small. The variance of NNSD for GUE is,
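As for GOE, the 2×2 GUE result can be verified by sampling (our own illustration; seed and sample size arbitrary). Here S=2(X 2 2+X 3 2+X 4 2)1/2 with each X i being G(0,v 2), giving \(\overline{D}=8v/\sqrt{2\pi}\) and NNSD variance 3π/8−1≈0.178:

```python
import numpy as np

rng = np.random.default_rng(2)
v, m = 1.0, 200_000

# S = 2 sqrt(X2^2 + X3^2 + X4^2) for the 2x2 GUE
x = rng.normal(0.0, v, (3, m))
s = 2.0 * np.sqrt((x ** 2).sum(axis=0))

d_bar = s.mean()                 # analytically 8 v / sqrt(2 pi)
var_nnsd = (s / d_bar).var()     # analytically 3 pi / 8 - 1 ~ 0.178
```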
2.1.3 2×2 GSE
Here H is quaternion real defined by Eqs. (2.17) and (2.20). These equations and the choice
will give
The eigenvalue equation for this matrix is,
Simplifying, we obtain {(a−λ)(c−λ)−(b 2+z 2+y 2+x 2)}2=0 which implies that λ’s are doubly degenerate and they are given by,
This gives S=|λ 1−λ 2|=[(a−c)2+4(b 2+z 2+y 2+x 2)]1/2. Let us define X 1=a+c, X 2=a−c, X 3=2b, X 4=2x, X 5=2y and X 6=2z. The X i ’s are independent Gaussian variables G(0,4v 2). Note that a and c are G(0,2v 2) and b,x,y,z are G(0,v 2). Thus \(S^{2}=X_{2}^{2}+X_{3}^{2}+X_{4}^{2}+X_{5}^{2}+X_{6}^{2}\). Transforming to spherical polar co-ordinates in 5 dimensions with hyper-radius S gives X 2=Scosθ 1cosθ 2cosθ 3cosθ 4, X 3=Scosθ 1cosθ 2cosθ 3sinθ 4, X 4=Scosθ 1cosθ 2sinθ 3, X 5=Scosθ 1sinθ 2 and X 6=Ssinθ 1 (volume element being dv=S 4 dScos3 θ 1cos2 θ 2cosθ 3 dθ 1 dθ 2 dθ 3 dθ 4). Then,
Simplifying, we have
Note that \(\int_{0}^{\infty}P(S)dS=1\) and \(\overline{D}=\int _{0}^{\infty}S P(S)dS=32v/3\sqrt{2\pi}\). With \(\hat{S}=S/\overline{D}\), the NNSD is
Thus, GSE generates quartic level repulsion with P(S)∼S 4 for S small. The variance of the NNSD is
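The GSE surmise can also be checked by sampling the scalar form of S derived above, without building the 4×4 matrix explicitly (our own illustration; seed and sample size arbitrary). With a, c being G(0,2v 2) and b, x, y, z being G(0,v 2), the mean spacing is \(\overline{D}=32v/3\sqrt{2\pi}\) and the NNSD variance is 45π/128−1≈0.104:

```python
import numpy as np

rng = np.random.default_rng(3)
v, m = 1.0, 200_000

# a, c are G(0, 2 v^2); b, x, y, z are G(0, v^2)
a, c = rng.normal(0.0, np.sqrt(2.0) * v, (2, m))
b, x, y, z = rng.normal(0.0, v, (4, m))
s = np.sqrt((a - c) ** 2 + 4.0 * (b ** 2 + x ** 2 + y ** 2 + z ** 2))

d_bar = s.mean()                 # analytically 32 v / (3 sqrt(2 pi))
var_nnsd = (s / d_bar).var()     # analytically 45 pi / 128 - 1 ~ 0.104
```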
Figure 2.1 shows NNSD for GOE, GUE and GSE as given by Eqs. (2.29), (2.34) and (2.42) respectively. More importantly, these random matrix forms for NNSD are seen in many different quantum systems such as nuclei, atoms, molecules etc. In addition, the RMT results for NNSD are also seen in microwave cavities, aluminum blocks, vibrating plates, atmospheric data, stock market data and so on. Figure 2.2 shows some examples. It is important to stress that the simple 2×2 matrix results are indeed very close, as proved by Mehta (see [5]), to the NNSD for any N×N matrix (N→∞). Thus, level repulsion given by random matrices is well established in real systems.
2.2 One and Two Point Functions: N×N Matrices
For more insight into the Gaussian ensembles and for the analysis of data, we will consider one and two point functions in the eigenvalues. Although we consider only GOE in this section, many of the results extend to GUE and GSE [13]. Also Appendix B gives some properties of univariate and bivariate distributions in terms of their moments and cumulants and these are used throughout this book. A general reference here is Kendall’s book [14].
2.2.1 One Point Function: Semi-circle Density
Say the energies in a spectrum are denoted by x (equivalently, the eigenvalues of the corresponding H matrix). Then the number of states in a given interval Δx around the energy x defines ρ(x)Δx, where ρ(x) is normalized to unity. In fact the number of states in the interval Δx around x is I(x)Δx=dρ(x)Δx where d is the total number of states. Carrying out ensemble averaging (for GOE, GUE and GSE), we have \(\overline{\rho(E)}\) defined accordingly. Given \(\overline{\rho (E)}\), the average spacing \(\overline{D(E)}=[d\overline{\rho(E)}]^{-1}\). If we start with the joint probability distribution for the matrix elements of the Gaussian ensembles and convert this into the joint distribution for the eigenvalues P N (E 1,E 2,…,E N ), one sees [1]
Here β=1,2 and 4 for GOE, GUE and GSE respectively. Then \(\overline{\rho(E)}\) is the integral of ρ(E 1,E 2,…,E N ) over all E’s except one E. For completeness, let us point out that \(\rho(x)=\langle \delta(H-x)\rangle =d^{-1}\sum _{i=1}^{d}\delta(E_{i}-x)\). One can construct \(\overline{\rho(E)}\) via its moments \(\overline {M_{p}}=\overline{\langle H^{p}\rangle }=d^{-1}\overline{\mbox{Trace}(H^{p})}\).
Using the binary correlation approximation (BCA), used first by Wigner [15], it is possible to derive for \(\overline{\langle H^{p}\rangle }\) a recursion relation. In the present book most results, both for classical and embedded ensembles, are derived using BCA. In fact, for the embedded ensembles BCA is the only tractable method available at present for deriving formulas for higher moments (even though this also has limitations as discussed in later chapters).
In BCA, only terms that contain squares (but not any other power) of a given matrix element are considered and in the N→∞ limit, only these terms will survive. As the matrix elements are zero centered, all \(\overline{M_{p}}\) for p odd will be zero. Firstly, for p=2 we have
Let us say that the variance of the matrix elements for the GOE matrices is v 2. Then the ensemble averaged second moment (note that all the moments are central moments as the centroid is zero) is,
In Eq. (2.46) we have introduced a notation for correlated matrix elements. Also note that we have ignored the fact that the diagonal elements have variance 2v 2 as this will give a correction of 1/d order and this will vanish as d→∞. The first complicated moment is M 4 or 〈H 4〉. Explicitly,
Then, the ensemble average gives three terms in the above sum (it contains a product of four matrix elements): (i) the first two H matrix elements are correlated and similarly the last two; (ii) the first and third and similarly the second and fourth matrix elements are correlated; (iii) the first and fourth and similarly the second and third matrix elements are correlated. Symbolically they can be written as , and . Their values are
Thus the second term, which involves cross correlations with an odd number of H’s inside [in the second term in Eq. (2.48), there is one H in between], will vanish as d→∞. Then, we have
Note that in Eqs. (2.45)–(2.49), the correlated H’s are joined by the symbol ‘⊔’. Continuing the above procedure we have [16, 17], valid for all three ensembles with the normalization \(\overline{\langle H^{2}\rangle }=1\),
The solution is M 2ν+1=0 and \(M_{2\nu}=(\nu+1)^{-1}{2\nu \choose\nu}\). They are the Catalan numbers and it can be verified easily that they are the moments of a semi-circle. Thus \(\overline{\rho(E)}\) is a semi-circle,
Here ψ(x) is the angle between the x axis and the radius vector; x=−2cosψ(x), 0≤ψ≤π. Note that \(\overline{\rho(x)}\) vanishes for |x|>2 and M 2=1. Figure 2.3 shows an example for the semi-circle.
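The semicircle and its Catalan-number moments are easy to see by diagonalizing a few large GOE members (our own illustration; with v 2=1/d so that \(\overline{\langle H^{2}\rangle}=1\), the density fills [−2,2], M 2≈1 and M 4≈2, the first two even-moment Catalan numbers):

```python
import numpy as np

rng = np.random.default_rng(4)
d, members = 1000, 10

eigs = []
for _ in range(members):
    a = rng.normal(size=(d, d))
    h = (a + a.T) / np.sqrt(2.0 * d)     # off-diagonal variance v^2 = 1/d
    eigs.append(np.linalg.eigvalsh(h))
x = np.concatenate(eigs)

m2, m4 = (x ** 2).mean(), (x ** 4).mean()  # Catalan numbers 1 and 2
edge = np.abs(x).max()                     # support ends near |x| = 2
# semicircle weight of [-1, 1]: (sqrt(3) + 2 pi / 3) / (2 pi) ~ 0.609
frac_mid = np.mean(np.abs(x) <= 1.0)
```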
Given a ρ(x), one can define the distribution function F(x),
With ρ(x) being discrete, as the spectrum is discrete, F(x) is a staircase. Note that F(x) counts the number of levels up to the energy x and therefore increases by one unit as we cross each energy level (if there are no degeneracies).
Another important property is that the exact density ρ(x) can be expanded in terms of \(\overline{\rho(x)}\) by using the polynomials defined with respect to \(\overline{\rho(x)}\),
If we know the moments M r of \(\overline{\rho(x)}\), the polynomials P ζ (x) defined by \(\overline{\rho(x)}\) can be constructed [18] such that \(\int_{-\infty}^{+\infty}\overline{\rho(x)}\;P_{\zeta}(x)\; P_{\zeta^{\prime }}(x)dx =\delta_{\zeta\zeta^{\prime }}\). Using this one can study level motion in Gaussian ensembles as described ahead.
2.2.2 Two Point Function S ρ(x,y)
The two point function is defined by
If there are no fluctuations S ρ(x,y)=0. Thus S ρ measures fluctuations. The moments of S ρ are
To derive S ρ(x,y) (also M pq ) we consider, as in [13, 17] the polynomials defined by the semi-circle, i.e. Chebyshev polynomials of second kind U n (x). They are defined in terms of the sum,
They satisfy the orthonormality condition \(\int_{-1}^{+1}\omega(x)U_{n}(x)U_{m}(x)dx=\pi/2\;\delta_{mn}\) with \(\omega(x)=\sqrt{1-x^{2}}\). Substituting y=2x in the orthonormality condition and using V n (y)=U n (y/2), we obtain,
Note also that \(U_{n}(x)=\frac{1}{a_{n}\omega (x)}\frac{\partial^{n}}{\partial x^{n}}[ \omega(x)[g(x)]^{n}]\); \(a_{n}=(-1)^{n}2^{n+1}\frac{\varGamma(n+3/2)}{(n+1)\sqrt{\pi }}\), g(x)=1−x 2. Similarly, V ζ (x)=(−1)ζ[sinψ(x)]−1sin(ζ+1)ψ(x).
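The orthonormality relation used above can be verified numerically (our own illustration). The sketch builds U n (x) from the standard three-term recurrence and integrates against the weight \(\omega(x)=\sqrt{1-x^{2}}\) on a fine grid:

```python
import numpy as np

def cheb_u(n, x):
    """Chebyshev polynomial of the second kind via U_{n+1} = 2x U_n - U_{n-1}."""
    u_prev, u = np.ones_like(x), 2.0 * x
    if n == 0:
        return u_prev
    for _ in range(n - 1):
        u_prev, u = u, 2.0 * x * u - u_prev
    return u

x = np.linspace(-1.0, 1.0, 200_001)
w = np.sqrt(1.0 - x * x)

def inner(m, n):
    """Trapezoid-rule approximation of the weighted inner product."""
    f = w * cheb_u(m, x) * cheb_u(n, x)
    return ((f[:-1] + f[1:]) * 0.5 * np.diff(x)).sum()
```

The diagonal inner products come out as π/2 and the off-diagonal ones vanish, as stated above.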
Returning to M pq , it is seen in the evaluation of \(\overline{\langle H^{p} \rangle \langle H^{q} \rangle }\) (again we use BCA) that the correlations arise when, say, ζ of the H’s in H p correlate with ζ of the H’s in H q. When ζ=0 we get \(\{\overline {\langle H^{p}\rangle}\}\{\overline{\langle H^{q}\rangle }\}\). Therefore
and \(\mu_{\zeta}^{p}\) are obtained by a counting argument. French, Mello and Pandey [17] showed that (see also [19]),
Now let us evaluate . Firstly \(\langle H^{\zeta}\rangle =d^{-1}\sum_{i,j,k,\ldots=1}^{d} H_{ij}H_{jk}\ldots H_{ni}\). Then the number of indices is ζ and the number of terms in the sum is d ζ. In there will be d ζ terms of the type 〈H ij H ji 〉〈H jk H kj 〉…. Choosing v 2 d=1, we have \(\langle H_{ij}^{2}\rangle = v_{ij}^{2} = v^{2}= 1/d\). Therefore . However for every H ij there are 〈H ij H ij 〉 and 〈H ij H ji 〉 for GOE. Both give v 2 and therefore for GOE. In the case of GUE \(\overline{H_{ij}H_{ij}}= \overline{(a+ib)(a+ib) }=\overline{a^{2}}-\overline{b^{2}}+2i\overline{a}\overline{b}=0\) and \(\overline{H_{ij}H_{ji}}= \overline{(a+ib)(a-ib)}=\overline{a^{2}}+\overline{b^{2}}=1\) (\(|H_{ij}|^{2}=v_{ij}^{2}=v^{2}\) and v 2 d=1). Thus . In addition there is cyclic invariance and therefore there is an additional ζ factor [for example, H 12 H 21 and H 21 H 12; H 12 H 23 H 31, H 23 H 31 H 12 and H 31 H 12 H 23]. Then the final result is,
with β=1 for GOE and β=2 for GUE. Putting this result in the expression for M pq we see that
Then by inversion we get,
Note that \(S^{F}(x,y)=\int_{-\infty}^{x}\int_{-\infty}^{y}S^{\rho }(x^{\prime }, y^{\prime })dy^{\prime }dx^{\prime }=\overline {F(x)F(y)}-\overline{F(x)} \,\overline{F(y)}\). The sum can be simplified by introducing a cut-off e −αζ and extending the sum from ζ=1 to ∞,
where we have used the property that \(\mathrm{Re}[\ln(1+z)]=\frac{1}{2} \ln[(1+z)(1+z^{\ast})]\). Finally we have,
Let us consider the structure of S F(x,y) when x∼y. Firstly the number of levels r in the energy interval Δx=dx=x−y is d ρ(x)dx. But 1−x 2/4=sin2 ψ gives x=2cosψ. Therefore, dx=2sinψdψ and
Then, S F(x,y) is, with x∼y⇒sinψ, ψ 1+ψ 2∼2ψ and ψ 1−ψ 2∼δψ,
In the last step, Eq. (2.65) is used. Therefore S F(x,y) behaves as lnr. Now let us consider the self-correlation term S F(x,x) which determines the level motion \(\delta x^{2}/\overline{D}^{2}=d^{2}S^{F}(x,x)\) in GOE,
Before going further, it is useful to point out that the moment method with BCA used for deriving the asymptotic form of one and two point functions extends to many other random matrix ensembles. A recent discussion on the power of the moment method in random matrix theory is given in [20].
2.2.3 Fluctuation Measures: Number Variance Σ 2(r) and Dyson-Mehta Δ 3 Statistic
P(S), the nearest neighbor spacing distribution, and its variance σ 2(0) are measures of fluctuations. Using S F(x,y) we can define a new measure called the ‘number variance’ Σ 2(r). Say in an energy interval x to y there are r levels; then r=d[F(y)−F(x)]. The statistic Σ 2(r) is the ensemble-averaged variance of r and
Thus Σ 2(r) is an exact “two-point measure”. Using the asymptotic expressions for S F(x,x) and S F(x,y), we obtain
Thus Σ 2(r) behaves as lnr and hence the GOE spectrum is rigid. The exact results valid at the center of the semi-circle are:
where γ is Euler’s constant. Another important measure is the Dyson-Mehta Δ 3 statistic; importantly, it is related to Σ 2(r) and hence is also a two-point measure.
The Dyson-Mehta Δ 3 statistic [21] is defined as the mean square deviation of F(E), of the unfolded spectrum, from the best fit straight line, and \(\varDelta _{3}(\overline{n})\) corresponds to the same over a spectrum of length \(\overline{n}\overline{D}\). The ensemble averaged \(\overline{\varDelta _{3}}(\overline{n})\) is then defined similarly,
Here \(L=\frac{\overline{n}}{2}\). The \(\overline{\varDelta _{3}} (\overline{n})\) statistic is an exact two-point measure. In fact it can be written as an integral involving Σ 2(r). This is proved in Appendix C. Using the GOE (similarly GUE) expression for Σ 2(r) and applying Eq. (C.8), we obtain the following expression for \(\overline{ \varDelta _{3}}(\overline{n})\) for GOE and GUE,
For a novel application of \(\overline{ \varDelta _{3}}(\overline{n})\) statistic, see [22].
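Both two-point measures can be evaluated directly from sampled spectra (our own illustration; the analytic semicircle distribution function is used for unfolding, and Δ 3 is computed by a brute-force least-squares fit of the staircase on a grid rather than via the closed-form formulas of [34]). The GOE values Σ 2(10)≈0.9 and Δ 3(20)≈0.3 then stand out against the Poisson values r=10 and L/15≈1.33:

```python
import numpy as np

rng = np.random.default_rng(5)

def unfold(e, d):
    """Analytic unfolding with the semicircle distribution function F_bar(x)."""
    e = np.clip(e, -2.0, 2.0)
    return d * (0.5 + (0.5 * e * np.sqrt(4.0 - e * e)
                       + 2.0 * np.arcsin(0.5 * e)) / (2.0 * np.pi))

def window_counts(eps, d, r):
    """Level counts in non-overlapping windows of length r in the central half."""
    starts = np.arange(0.25 * d, 0.75 * d - r, r)
    return [int(np.searchsorted(eps, a0 + r) - np.searchsorted(eps, a0))
            for a0 in starts]

def delta3(eps, L, n_grid=500):
    """Mean-square deviation of the staircase from its best-fit straight line
    over an interval of length L starting in the central region."""
    xg = np.linspace(eps[len(eps) // 4], eps[len(eps) // 4] + L, n_grid)
    stair = np.searchsorted(eps, xg).astype(float)
    resid = stair - np.polyval(np.polyfit(xg, stair, 1), xg)
    return (resid ** 2).mean()

d, members, r, L = 400, 40, 10.0, 20.0
counts_goe, counts_poi, d3_goe, d3_poi = [], [], [], []
for _ in range(members):
    a = rng.normal(size=(d, d))
    eps_goe = unfold(np.linalg.eigvalsh((a + a.T) / np.sqrt(2.0 * d)), d)
    eps_poi = np.cumsum(rng.exponential(1.0, d))   # uncorrelated comparison
    counts_goe += window_counts(eps_goe, d, r)
    counts_poi += window_counts(eps_poi, d, r)
    d3_goe.append(delta3(eps_goe, L))
    d3_poi.append(delta3(eps_poi, L))

sigma2_goe, sigma2_poi = np.var(counts_goe), np.var(counts_poi)  # ~0.9 vs ~r
d3_goe_avg, d3_poi_avg = np.mean(d3_goe), np.mean(d3_poi)        # ~0.3 vs ~L/15
```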
2.3 Structure of Wavefunctions and Transition Strengths
2.3.1 Porter-Thomas Distribution
Given a transition operator T (this should not be confused with the time reversal operator ‘T’ used before or the isospin label ‘T’), the transition strength connecting two eigenstates is defined by |〈E f ∣T∣E i 〉|2. In nuclei the T’s of interest are electromagnetic operators (magnetic dipole and electric quadrupole for example), one particle addition or removal, the Gamow-Teller operator and so on. Similarly, in atoms and molecules the dipole operator is very important. It is also important to recognize that the widths of resonances also measure transition strengths. Leaving a detailed discussion of transition strength distributions to Ref. [13], here we will give only some basic results. One can think of T|E〉 as a compound state and represent it by a basis state |i〉. Therefore, the statistical properties of transition strengths will be the same as those of the expansion coefficients \(C_{i}^{E}\) of the eigenstates in terms of the basis states |i〉,
Now an important question is: what is the distribution of \(|x_{i}|^{2}=|C^{E}_{i}|^{2}\)? First, let us consider the joint distribution P(x 1,x 2,x 3,…,x d ) of x 1, x 2, x 3,…,x d for GOE. Because the GOE is an orthogonally invariant ensemble, the eigenvectors uniformly cover the d-dimensional unit sphere. Then the normalization condition ∑ i |x i |2=1 gives,
Now, integrating over all but one variable (say x 1 and denote it by x) will give
Then, in the d→∞ limit we obtain,
Thus, asymptotically x is a zero centered Gaussian variable with variance 1/d. As \(\overline{|C^{E}_{i}|^{2}}\) should not depend on the index i, the normalization \(\sum_{i} |C^{E}_{i}|^{2}=1\) gives the significant result that for GOE \(\overline{|C^{E}_{i}|^{2}}=1/d\). Therefore, the distribution of the renormalized strengths \(z=|C^{E}_{i}|^{2}/\overline{|C^{E}_{i}|^{2}}\) is, putting d x 2=z in Eq. (2.76),
and this is nothing but the \(\chi^{2}_{1}\) distribution. Equation (2.77), for GOE, is called the Porter-Thomas (P-T) law for strengths [23]. Thus locally renormalized strengths, \(z=|C^{i}_{j}|^{2}/\overline{|C^{i}_{j}|^{2}}\), follow the P-T law; note that \(\int_{0}^{\infty}z \rho(z) dz = 1\). The GOE P-T law was well tested in many examples as shown in Fig. 2.4. Similar to GOE, for GUE the P-T law is \(\chi_{2}^{2}\) as \(C^{i}_{j}\) will have real (say A) and imaginary (say B) parts with each of them being G(0,v 2=1/d) and independent; then \(|C^{i}_{j}|^{2}=A^{2}+B^{2}\).
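The P-T law is easy to verify from the eigenvectors of a single large GOE member (our own illustration; seed and dimension arbitrary). The renormalized strengths \(z=d|C^{E}_{k}|^{2}\) over all d 2 amplitudes then show the \(\chi^{2}_{1}\) moments (mean 1, variance 2 as d→∞) and the \(\chi^{2}_{1}\) tail weight beyond z=4:

```python
import numpy as np

rng = np.random.default_rng(7)
d = 500
a = rng.normal(size=(d, d))
_, vecs = np.linalg.eigh((a + a.T) / np.sqrt(2.0))  # columns = eigenvectors
z = d * vecs ** 2            # renormalized strengths for all d eigenstates

mean_z, var_z = z.mean(), z.var()  # chi^2_1: mean 1, variance 2
tail = np.mean(z > 4.0)            # chi^2_1 tail beyond 4: about 0.0455
```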
2.3.2 NPC, Sinfo and Strength Functions
With eigenfunctions expanded in terms of some basis states (they form a complete set) |k〉, let us define the following: (i) NPC (denoted as ξ 2)—number of principal components; (ii) S info—information entropy or ℓ H —localization length. They are, with \(|E\rangle = \sum_{k=1}^{d} C^{E}_{k} |k\rangle \),
In Eq. (2.78), degeneracies of the eigenvalues E are taken into account. It is important to stress that S info is the first and NPC [or the inverse participation ratio (IPR)] the second Rényi entropy introduced in the chaos literature [25, 26]. For this reason NPC is denoted by ξ 2(E). Similarly, information entropy is also called Shannon entropy [27]. As we shall see ahead, S info and lnξ 2(E) carry the same information and their difference, called structural entropy, is also sometimes used (with S info or ξ 2) as a chaos measure; see [26] and references therein. For GOE, \(\overline {|C^{E}_{k}|^{2}}=\frac{1}{d}\) and \(C^{E}_{k}\) are Gaussian variables \(G(0,\frac{1}{d})\). Therefore \(\overline{|C^{E}_{k}|^{4}}=3(\overline{ |C^{E}_{k}|^{2}})^{2}=\frac{3}{d^{2}}\) and then
Thus \(\overline{\xi_{2}(E)}\) is independent of E for GOE. Similarly \(\overline{S^{info}(E)} = -d\overline{|C^{E}_{k}|^{2} \ln|C^{E}_{k}|^{2}}\) where \(C^{E}_{k}\) are \(G(0,\frac{1}{d})\). Then,
Thus for GOE, S info=ln(0.48d) independent of E and the localization length is unity by definition. The NPC and S info formulas are well verified in numerical examples.
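Both GOE formulas can be checked numerically (our own illustration; seed and dimension arbitrary). Over the eigenstates of one GOE member, the NPC averages to ≈d/3 and S info to ≈ln(0.48d):

```python
import numpy as np

rng = np.random.default_rng(8)
d = 500
a = rng.normal(size=(d, d))
_, vecs = np.linalg.eigh((a + a.T) / np.sqrt(2.0))
c2 = vecs ** 2                             # |C_k^E|^2, one column per eigenstate

xi2 = 1.0 / (c2 ** 2).sum(axis=0)          # NPC (inverse participation ratio)
s_info = -(c2 * np.log(c2)).sum(axis=0)    # information (Shannon) entropy

npc_avg, s_avg = xi2.mean(), s_info.mean() # ~ d/3 and ~ ln(0.48 d)
```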
Besides NPC and S info, localization properties of wavefunctions can be inferred from strength functions. Given \(C_{k}^{E}\), strength functions are defined by,
where denotes the average of \(|C_{k}^{E}|^{2}\) over the eigenstates with the same energy E. A commonly used form for strength functions is the Breit-Wigner (BW) form and its derivation is given in Appendix D. Many other aspects of transition strengths and strength fluctuations are discussed in [13, 28–30]; see also Appendix E.
2.4 Data Analysis
2.4.1 Unfolding and Sample Size Errors
In the analysis of data one has to pay attention to the following facts: (1) data is available for a given system (say a nucleus); (2) the sample size (number of levels) is usually small (∼100); (3) over the sample, the density may vary. Point (3) is true for shell model data or any other model data. To take care of (1) and (2) it is possible to invoke ergodicity and stationarity properties of the Gaussian ensembles (GE-GOE, GUE or GSE) [13, 31]. Due to ergodicity, we have a permit to compare ensemble averaged results from GE to spectral averaged results from a given spectrum. Similarly due to stationarity, the measures (statistics) of fluctuations will be independent of which part of the spectrum one is looking at. To the extent that the spectra are “complex”, we can use “stationarity” to combine the values of the measures (with appropriate weights) to increase the sample size. Let us first consider point #(3). When ρ(x) is varying, it is necessary to remove the “secular variation” before the data is analyzed. This is called “unfolding”. For this we have to map the given energies E i to new energies ε i such that the ε i spectrum has constant density. Say ε i =g(E i ) such that ε i has unit mean spacing on the average in the interval \(E_{i}\pm\frac{\varDelta E}{2}\). Then, with ΔN levels in the interval Δε,
Now integrating Eq. (2.82) on both sides will give the map,
The ε i ’s given by Eq. (2.83) will have \(\overline{D}=1\). The significance of the physically (theoretically) defined \(\overline{\rho(E)}\) in unfolding has been discussed by Brody et al. [13] and more recently by Jackson et al. [32]. Normally one tries to fit F(E) to a smooth curve, by a least squares procedure, to obtain \(\overline{F(E)}\), but often this is not proper as the fluctuations depend on the \(\overline{F(E)}\) used in the unfolding procedure.
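In practice the unfolding map of Eq. (2.83) is often implemented by fitting the staircase F(E) with a smooth low-order polynomial. The sketch below (our own illustration; the polynomial degree 7 is an arbitrary choice, and as cautioned above the resulting fluctuations can depend on it) unfolds a GOE spectrum so that the central region has unit mean spacing:

```python
import numpy as np

rng = np.random.default_rng(9)
d = 500
a = rng.normal(size=(d, d))
e = np.linalg.eigvalsh((a + a.T) / np.sqrt(2.0 * d))  # semicircle on [-2, 2]

# smooth approximation F_bar(E) to the staircase F(E_i) = i
i = np.arange(1, d + 1)
coef = np.polyfit(e, i, 7)
eps = np.polyval(coef, e)                 # unfolded energies

core = eps[d // 10 : -(d // 10)]          # drop the edges, where the fit is poorest
spacings = np.diff(core)
mean_spacing = spacings.mean()            # ~ 1 after unfolding
```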
Coming to the sample size errors, let us consider a measure w calculated over say p levels. Say the theoretical value of w (for an infinite sample size) is \(\overline{w}\) and its variance \(\overline{\mbox{var}(w)}\). Then the figure of merit \(f=\frac{\sqrt{\mathrm{var}(w)_{p}}}{(w)_{p}} \times 100\). Now \(\overline{w}\rightarrow\overline{w}\{1\pm\frac{f}{100}\}\). For \(\varSigma^{2}(\overline{n})\), the Poisson estimate gives \(f\sim\sqrt{\frac{2\overline{n}}{p}} \times100\); thus f→0 for p→∞. In practice overlapping intervals are also used, and it is seen via Monte-Carlo calculations that f is then reduced by a factor 0.65. Applying the sample size error to the GOE analytical results, it is seen that theory and experiment agree almost exactly in the case of the Nuclear Data Ensemble (NDE) constructed by combining neutron resonance data from many nuclei [6, 12, 33]. Finally, in practice, for \(\overline{\varDelta _{3}}(\overline{n})\) and \(\overline{D}\) calculations, it is more useful to use the simple formulas given by French, Pandey, Bohigas and Giannoni [13, 34]. Given a sequence of (ordered) energies (E 1,E 2,…,E n ), the mean spacing \(\overline{D}\) is [13],
Similarly consider \(\overline{\varDelta _{3}}(L)\) over an energy interval α to α+L. Defining \(\widetilde{E}_{i}\) to be \(\widetilde{E}_{i}=E_{i}-(\alpha+\frac{L}{2})\), we have [34],
2.4.2 Poisson Spectra
When comparing GOE (or GUE, GSE) results with data, it is important to consider the Poisson case, i.e. uncorrelated spectra, so that the effects due to GOE correlations will be clear. We can generate a Poisson spectrum as follows. First generate a set {s} such that s is a random variable following the exponential distribution e −s . Then choose x 1=0 and x n+1=x n +s n ; n=1,2,3,…, drawing the s n from {s}. Now the nearest neighbor spacing is S=s n . Therefore P(S)dS for the sequence (x 1,x 2,x 3,…) is
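The construction just described is a one-liner with a random number generator (our own illustration; seed and sample size arbitrary). The spacings of the generated sequence have mean 1 and variance 1, as expected for P(S)=e −S , and there is no level repulsion: small spacings carry the full exponential weight 1−e −0.1 ≈0.095 below S=0.1:

```python
import numpy as np

rng = np.random.default_rng(10)
s_draws = rng.exponential(1.0, 50_000)
x = np.concatenate(([0.0], np.cumsum(s_draws)))  # x_1 = 0, x_{n+1} = x_n + s_n

spacings = np.diff(x)                  # nearest neighbor spacings S = s_n
mean_s, var_s = spacings.mean(), spacings.var()  # both 1 for P(S) = exp(-S)
frac_small = np.mean(spacings < 0.1)   # ~ 1 - exp(-0.1): no repulsion
```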
An important question is what is \(\varSigma^{2}(\overline{n})\) and \(\overline{\varDelta _{3}}(\overline{n})\) for a Poisson spectrum. Before proceeding further let us mention that random superposition of several independent spectra leads to Poisson as proved by Porter and Rosenzweig [35]. In order to derive the results for \(\varSigma^{2}(\overline {n})\) and \(\varDelta _{3}( \overline{n})\) for a Poisson spectrum, it is useful to consider the R 2(E 1,E 2) and Y 2(r) correlation functions for an unfolded spectrum. R 2(E 1,E 2) is the integral of P(E 1,E 2,…,E n ) over E 3,E 4,…,E n and Y 2 is simply related to R 2,
Note that R 1(x 1)=1=Y 1(x 1) and the x’s are unfolded energies, i.e. mean spacing \(\overline{D}=1\). The important point is that for a Poisson Y 2(r)=0 and therefore, as \(\varSigma^{2}(L)=L-\int_{0}^{L}\,(L-r)Y_{2}(r)dr\),
for a Poisson spectrum. It is also useful to mention that the spectra of pseudo-integrable systems (which possess singularities and are integrable in the absence of these singularities) [36–39] follow semi-Poisson statistics [40, 41],
This form is also seen recently in the low-energy part of the spectra generated by two-body interactions [42].
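A simple way to generate a semi-Poisson sequence (our own illustration, using the known construction of retaining every second level of a Poisson spectrum) is sketched below; the resulting spacings are sums of two exponentials, i.e. Gamma distributed, and after rescaling to unit mean they follow the semi-Poisson form P(S)=4S e −2S with mean 1 and variance 1/2:

```python
import numpy as np

rng = np.random.default_rng(11)
x = np.cumsum(rng.exponential(1.0, 100_000))
kept = x[::2]                       # retain every second level of a Poisson spectrum

s = np.diff(kept) / 2.0             # Gamma(2) spacings, rescaled to unit mean
mean_s, var_s = s.mean(), s.var()   # semi-Poisson: mean 1, variance 1/2
```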
2.4.3 Analysis of Nuclear Data for GOE and Poisson
Bohigas, Haq and Pandey [6, 12] analyzed slow neutron resonance, proton resonance and (n,γ) reaction data (for level and width fluctuations) and established that GOE describes, within sample size errors, almost exactly the experimental data. They constructed the nuclear data ensemble (NDE) with 1762 resonance energies corresponding to 36 sequences of 32 different nuclei and they contain: (i) slow-neutron resonance 1/2+ levels from 64,66,68Zn, 114Cd, 152,154Sm, 154,156,158,160Gd, 160,162,164Dy, 166,168,170Er, 172,174,176Yb, 182,184,186W, 186,190Os, 232Th and 238U targets; (ii) proton resonance 1/2+ levels (1/2− also for Ca) from 44Ca, 48Ti and 56Fe targets; (iii) (n,γ) reaction data on 177,179Hf and 235U giving two sequences of J π levels. Similarly considered also are 1182 widths corresponding to 21 sequences of the above neutron resonance data. Comparisons are made with the GOE Wigner law P(S)dS=(πS/2)exp(−πS 2/4)dS for the nearest neighbor spacing distribution (NNSD) and the Porter-Thomas (P-T) \(\chi^{2}_{1}\) law \(P(x)dx = (1/\sqrt{2\pi x}) \exp(-x/2) dx\) for widths; S is in units of the average spacing (\(\overline{D}\)) and x is the width (rate of transition from an initial state to a channel) in units of the average width. Similarly, the GOE number variance Σ 2(L) and Δ 3(L) are also seen to agree (for L≤20) extremely well with the NDE. In the analysis sample size corrections are made as discussed earlier. Unlike the resonance energies, the resonance widths analysis appears to have some uncertainties (see [6] and Appendix E).
Garrett et al. [43] analyzed the NNSD for high-spin levels near the yrast line in rare-earth nuclei, considering 3130 experimental level spacings from deformed even-even and odd-A nuclei with Z=62–75 and A=155–185. As expected for these regular near-yrast levels, P(S) follows the Poisson form. Following this study, Enders et al. [44] analyzed the NNSD for scissors mode levels in deformed nuclei and found Poisson behavior, as the scissors mode is a well defined collective mode. Used here are 152 levels from 13 heavy deformed nuclei [146,148,150Nd, 152,154Sm, 156,158Gd, 164Dy, 166,168Er, 174Yb, 178,180Hf] in the energy range 2.5<E x <4 MeV, with the constraint that there must be at least 8 levels in the given energy interval in a given nucleus. Thus the low-lying levels of well deformed nuclei and the scissors states, being regular, follow Poisson as expected. Finally, Enders et al. [45] also analyzed the electric pygmy dipole resonances located around 5–7 MeV in four N=82 isotones. They constructed an ensemble of 184 1− states and carried out an analysis, though it is difficult due to many missing levels. The authors conclude that there is GOE behavior. Thus, level fluctuations bring out the expected difference between the scissors mode and the pygmy dipole resonance.
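Extracting an NNSD from experimental levels requires unfolding the raw sequence first. The following minimal sketch (our own illustration, assuming NumPy; real analyses use more careful unfolding and sample-size corrections) fits a smooth polynomial to the staircase function N(E) and returns unit-mean spacings; for uncorrelated levels the spacing fluctuation comes out near 1 (Poisson), whereas a GOE sequence would give roughly 0.52 (Wigner's law):

```python
import numpy as np

rng = np.random.default_rng(2)

def nearest_neighbor_spacings(levels, poly_order=5):
    """Unfold raw levels with a smooth polynomial fit to the staircase
    function N(E), then return unit-mean nearest-neighbor spacings."""
    e = np.sort(np.asarray(levels, dtype=float))
    staircase = np.arange(1, e.size + 1)
    # Rescale energies before fitting to keep the fit well conditioned.
    e_scaled = (e - e.mean()) / e.std()
    coeff = np.polyfit(e_scaled, staircase, poly_order)
    unfolded = np.polyval(coeff, e_scaled)  # smooth part of N(E) at each level
    s = np.diff(unfolded)
    return s / s.mean()

# Uncorrelated (Poisson) test sequence: exponential spacings.
levels = np.cumsum(rng.exponential(1.0, 3000))
s = nearest_neighbor_spacings(levels)
print(round(s.std(), 2))  # near 1 for Poisson; ~0.52 for GOE
```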
References
C.E. Porter, Statistical Theories of Spectra: Fluctuations (Academic Press, New York, 1965)
F. Haake, Quantum Signatures of Chaos, 3rd edn. (Springer, Heidelberg, 2010)
R.G. Sachs, The Physics of Time Reversal (University of Chicago Press, Chicago, 1987)
J. von Neumann, E.P. Wigner, On the behavior of eigenvalues in adiabatic processes. Phys. Z. 30, 467–470 (1929)
M.L. Mehta, Random Matrices, 3rd edn. (Elsevier, Amsterdam, 2004)
O. Bohigas, R.U. Haq, A. Pandey, Fluctuation properties of nuclear energy levels and widths: comparison of theory and experiment, in Nuclear Data for Science and Technology, ed. by K.H. Böckhoff (Reidel, Dordrecht, 1983), pp. 809–813
O. Bohigas, M.J. Giannoni, C. Schmit, Characterization of chaotic quantum spectra and universality of level fluctuation laws. Phys. Rev. Lett. 52, 1–4 (1984)
V. Plerou, P. Gopikrishnan, B. Rosenow, L.A.N. Amaral, T. Guhr, H.E. Stanley, Random matrix approach to cross correlations in financial data. Phys. Rev. E 65, 066126 (2002)
M.S. Santhanam, P.K. Patra, Statistics of atmospheric correlations. Phys. Rev. E 64, 016102 (2001)
V.K.B. Kota, Embedded random matrix ensembles for complexity and chaos in finite interacting particle systems. Phys. Rep. 347, 223–288 (2001)
K. Patel, M.S. Desai, V. Potbhare, V.K.B. Kota, Average-fluctuations separation in energy levels in dense interacting boson systems. Phys. Lett. A 275, 329–337 (2000)
R.U. Haq, A. Pandey, O. Bohigas, Fluctuation properties of nuclear energy levels: do theory and experiment agree? Phys. Rev. Lett. 48, 1086–1089 (1982)
T.A. Brody, J. Flores, J.B. French, P.A. Mello, A. Pandey, S.S.M. Wong, Random matrix physics: spectrum and strength fluctuations. Rev. Mod. Phys. 53, 385–479 (1981)
A. Stuart, J.K. Ord, Kendall’s Advanced Theory of Statistics: Distribution Theory (Oxford University Press, New York, 1987)
E.P. Wigner, Characteristic vectors of bordered matrices with infinite dimensions. Ann. Math. 62, 548–564 (1955)
E.P. Wigner, Statistical properties of real symmetric matrices with many dimensions, in Can. Math. Congr. Proc. (University of Toronto Press, Toronto, 1957), pp. 174–184
J.B. French, P.A. Mello, A. Pandey, Statistical properties of many-particle spectra II. Two-point correlations and fluctuations. Ann. Phys. (N.Y.) 113, 277–293 (1978)
G. Szegő, Orthogonal Polynomials, Colloquium Publications, vol. 23 (Am. Math. Soc., Providence, 2003)
K.K. Mon, J.B. French, Statistical properties of many-particle spectra. Ann. Phys. (N.Y.) 95, 90–111 (1975)
A. Bose, R.S. Hazra, K. Saha, Patterned random matrices and method of moments, in Proceedings of the International Congress of Mathematics 2010, vol. IV, ed. by R. Bhatia (World Scientific, Singapore, 2010), pp. 2203–2231
F.J. Dyson, M.L. Mehta, Statistical theory of energy levels of complex systems IV. J. Math. Phys. 4, 701–712 (1963)
R.G. Nazmitdinov, E.I. Shahaliev, M.K. Suleymanov, S. Tomsovic, Analysis of nucleus-nucleus collisions at high energies and random matrix theory. Phys. Rev. C 79, 054905 (2009)
C.E. Porter, R.G. Thomas, Fluctuations of nuclear reaction widths. Phys. Rev. 104, 483–491 (1956)
H. Alt, H.-D. Gräf, H.L. Harney, R. Hofferbert, H. Lengeler, A. Richter, R. Schardt, H.A. Weidenmüller, Gaussian orthogonal ensemble statistics in a microwave stadium billiard with chaotic dynamics: Porter-Thomas distribution and algebraic decay of time correlations. Phys. Rev. Lett. 74, 62–65 (1995)
A. Rényi, On measures of information and entropy, in Proceedings of the 4th Berkeley Symposium on Mathematics, Statistics and Probabilities, vol. 1 (1961), pp. 547–561
I. Varga, J. Pipek, Rényi entropies characterizing the shape and the extension of the phase space representation of quantum wave functions in disordered systems. Phys. Rev. E 68, 026202 (2003)
C.E. Shannon, W. Weaver, The Mathematical Theory of Communication (University of Illinois Press, Champaign, 1949)
N. Ullah, On a generalized distribution of the Poles of the unitary collision matrix. J. Math. Phys. 10, 2099–2104 (1969)
P. Shukla, Eigenfunction statistics of complex systems: a common mathematical formulation. Phys. Rev. E 75, 051113 (2007)
B. Dietz, T. Guhr, H.L. Harney, A. Richter, Strength distributions and symmetry breaking in coupled microwave billiards. Phys. Rev. Lett. 96, 254101 (2006)
A. Pandey, Statistical properties of many-particle spectra III. Ergodic behavior in random matrix ensembles. Ann. Phys. (N.Y.) 119, 170–191 (1979)
A.D. Jackson, C. Mejia-Monasterio, T. Rupp, M. Saltzer, T. Wilke, Spectral ergodicity and normal modes in ensembles of sparse matrices. Nucl. Phys. A 687, 405–434 (2001)
O. Bohigas, R.U. Haq, A. Pandey, Higher-order correlations in spectra of complex systems. Phys. Rev. Lett. 54, 1645–1648 (1985)
O. Bohigas, M.J. Giannoni, Level density fluctuations and random matrix theory. Ann. Phys. (N.Y.) 89, 393–422 (1975)
N. Rosenzweig, C.E. Porter, Repulsion of energy levels in complex atomic spectra. Phys. Rev. 120, 1698–1714 (1960)
D. Biswas, S.R. Jain, Quantum description of a pseudointegrable system: the π/3-rhombus billiard. Phys. Rev. A 42, 3170–3185 (1990)
G. Date, S.R. Jain, M.V.N. Murthy, Rectangular billiard in the presence of a flux line. Phys. Rev. E 51, 198–203 (1995)
P.R. Richens, M.V. Berry, Pseudointegrable systems in classical and quantum mechanics. Physica D 2, 495–512 (1981)
A.M. García-García, J. Wang, Semi-Poisson statistics in quantum chaos. Phys. Rev. E 73, 036210 (2006)
E.B. Bogomolny, U. Gerland, C. Schmit, Models of intermediate spectral statistics. Phys. Rev. E 59, R1315–1318 (1999)
E.B. Bogomolny, C. Schmit, Spectral statistics of a quantum interval-exchange map. Phys. Rev. Lett. 93, 254102 (2004)
J. Flores, M. Horoi, M. Mueller, T.H. Seligman, Spectral statistics of the two-body random ensemble revisited. Phys. Rev. E 63, 026204 (2001)
J.D. Garrett, J.Q. Robinson, A.J. Foglia, H.-Q. Jin, Nuclear level repulsion and order vs. chaos. Phys. Lett. B 392, 24–29 (1997)
J. Enders, T. Guhr, N. Huxel, P. von Neumann-Cosel, C. Rangacharyulu, A. Richter, Level spacing distribution of scissors mode states in heavy deformed nuclei. Phys. Lett. B 486, 273–278 (2000)
J. Enders, T. Guhr, A. Heine, P. von Neumann-Cosel, V.Yu. Ponomarev, A. Richter, J. Wambach, Spectral statistics and the fine structure of the electric pygmy dipole resonance in N=82 nuclei. Nucl. Phys. A 741, 3–28 (2004)
© 2014 Springer International Publishing Switzerland

Kota, V.K.B. (2014). Classical Random Matrix Ensembles. In: Embedded Random Matrix Ensembles in Quantum Physics. Lecture Notes in Physics, vol 884. Springer, Cham. https://doi.org/10.1007/978-3-319-04567-2_2