2.1 Hamiltonian Structure and Dyson’s Classification of GOE, GUE and GSE Random Matrix Ensembles

The discussion in this section is largely from Porter's book [1]. The original contributions here are due to Wigner and Dyson and all their papers are reprinted in [1]. A more recent discussion of Dyson's classification of the classical random matrix ensembles is given by Haake [2]. The classification is based on the properties of the time reversal operator in quantum mechanics [3]. Appendix A gives a brief discussion of time reversal and the results given there are used in this section. In finite Hilbert spaces, the Hamiltonian of a quantum system can be represented by an N×N matrix. Now we will consider the properties of this matrix with regard to time-reversal (T) and angular momentum (J) symmetries.

Firstly, imagine there is no time reversal invariance. Then we know nothing about the H matrix except that it must be complex Hermitian, and it remains complex Hermitian in any representation obtained from another by a unitary transformation.

Now, consider H to be T invariant. Then we have, from the results in Appendix A, \(T^2=\pm1\). Let us say T is good and \(T^2=1\). Then, it is possible to construct an orthogonal basis \(\psi_K\) such that \(T\psi_K=\psi_K\) is satisfied. Let us start with a normalized state \(\varPhi_1\) and construct,

$$ \varPsi_1 = a \varPhi_1 + T a \varPhi_1 $$
(2.1)

where a is an arbitrary complex number. This gives trivially

$$ T\varPsi_1 = Ta\varPhi_1 + T^2a\varPhi_1 \stackrel{T^2=1}{\rightarrow} \varPsi _1. $$
(2.2)

Now consider \(\varPhi_2\) such that \(\langle \varPsi_1 \mid \varPhi_2 \rangle = 0\) and construct

$$ \varPsi_2 = a^\prime \varPhi_2 + T a^\prime \varPhi_2. $$
(2.3)

Then,

$$\begin{aligned} \langle \varPsi_1|\varPsi_2 \rangle = & a^\prime \langle \varPsi_1 | \varPhi_2 \rangle + \bigl\langle \varPsi_1 \big| Ta^\prime \varPhi_2 \bigr\rangle = \bigl(a^\prime \bigr)^* \langle \varPsi _1 | T \varPhi_2 \rangle \\ = & \bigl(a^\prime \bigr)^* \bigl\langle T \varPsi_1 \big| T^2 \varPhi_2 \bigr\rangle ^* = \bigl(a^\prime \bigr)^* \langle \varPsi_1 | \varPhi_2 \rangle ^* = 0. \end{aligned}$$
(2.4)

Here we used Eq. (A.12) and T 2=1. Continuing this way an entire set of orthogonal Ψ i can be produced satisfying

$$ T\varPsi_i = \varPsi_i $$
(2.5)

and they can be normalized. In this basis all H's that are T invariant, i.e. \(THT^{-1}=H\), will be real symmetric,

$$ H_{kl}= \langle \varPsi_k \mid H \mid\varPsi_l \rangle = \langle T\varPsi_k \mid TH \varPsi_l \rangle ^* = \bigl\langle T\varPsi_k \,\big|\, THT^{-1}T \varPsi _l \bigr\rangle ^* = H_{kl}^* . $$
(2.6)

Therefore for systems with \(T^2=1\), all H's that are T invariant can be made, independent of J, real symmetric in a single basis.

Let us consider the situation where T is good, J is good and \(T^2=-1\). Now, define \(\overline{T}=\exp({-i\pi J_{y}})T\); then using \(T=\exp({i\pi S_{y}})K\) as given by Eq. (A.23), we have

$$\begin{aligned} (\overline{T})^2 = & \exp ({-i\pi J_y} )\exp ({i \pi S_y} )K \exp ({-i\pi J_y} )\exp ({i \pi S_y} )K \\ = & \exp ({-i\pi L_y} ) \exp ({-i\pi L_y} ) K^2 \\ = & \exp ({-2i\pi L_y} ) = 1, \end{aligned}$$
(2.7)

as \(L_y\) has integer eigenvalues in the \((L^2, L_y)\) diagonal basis. Now we have \((\overline{T})^{2}=1\) and therefore we can proceed to construct a \(\varGamma_k\) basis with \(\overline{T}\varGamma_{k}=\varGamma_{k}\) just as in the situation with \(T^2=1\). In the \(\varGamma_k\) basis, a T invariant H will be real symmetric,

$$\begin{aligned} &H_{k\ell} = \langle \varGamma_k |H| \varGamma_\ell \rangle = \langle T \varGamma_k \mid T H \varGamma_\ell \rangle ^* \\ &\phantom{H_{k\ell}} {} = \bigl\langle e^{-i\pi J_y} T \varGamma_k \,\big|\, e^{-i\pi J_y} T H \varGamma_\ell \bigr\rangle ^* = \bigl\langle \overline{T} \varGamma_k \,\big|\, e^{-i\pi J_y} T H T^{-1} T \varGamma_\ell \bigr\rangle ^* \\ &\phantom{H_{k\ell}} {}\stackrel{THT^{-1}=H}{\longrightarrow} \bigl\langle \varGamma_k \,\big|\, e^{-i\pi J_y} H e^{i\pi J_y} e^{-i\pi J_y} T \varGamma_\ell \bigr\rangle ^* \\ &\phantom{H_{k\ell}} {}\stackrel{J_i H J_i^{-1}=H}{\longrightarrow} \langle \varGamma_k \mid H \mid \overline{T} \varGamma_\ell \rangle ^* = \langle \varGamma _k \mid H \mid \varGamma_\ell \rangle ^* \\ &\phantom{H_{k\ell}} {}=H_{k \ell}^*. \end{aligned}$$
(2.8)

Therefore if H is invariant under both J and T, the H matrix can be made real symmetric. In fact all such H's will be simultaneously real symmetric in the \(\varGamma_k\) basis and they remain so under an orthogonal transformation.

The final situation is where T is good, J is not good but \(T^2=-1\). In this situation we still have Kramers degeneracy: given a \(|\psi\rangle\), the states \(|\psi\rangle\) and \(T|\psi\rangle\) are orthogonal. With a basis of 2N states, \((|i\rangle, T|i\rangle)\) with i=1,2,…,N, consider

$$ |\psi\rangle =\sum_m \bigl[C_{m+}|m\rangle + C_{m-} |Tm\rangle \bigr]. $$
(2.9)

Then,

$$ T|\psi\rangle= \sum_m \bigl[-C^*_{m-}|m\rangle + C^*_{m+} |Tm\rangle \bigr] $$
(2.10)

and here we have used \(T^2=-1\) together with \(TC_{0}=C_{0}^{*}T\) for any number \(C_0\). As \(T=UK\), Eq. (2.10) then gives \(U=\bigl[\begin{smallmatrix}0 & -1\\ 1 & 0\end{smallmatrix}\bigr]\) for each \((|m\rangle, |Tm\rangle)\) pair. Also \(T^2=-1 \Rightarrow UKUK=-1\) and then \(UU^{*}=-1\). Also \(UU^{\dagger}=1\) and therefore \(U=-\widetilde{U}\). Any U such that \(U=-\widetilde{U}\) can be brought to the form

$$ U = \left [ \begin{array}{@{}c@{\quad}c@{}} 0 & -I \\ I & 0 \end{array} \right ] $$
(2.11)

by a similarity transformation. We can also choose

$$ U= \left [ \begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} 0 & -1 & & & & \\ 1& 0 & & & &\\ & & 0 & -1 & & \\ & & 1&0 & & \\ & & & &\ddots& \end{array} \right ]. $$
(2.12)

Now we consider a unitary matrix S that commutes with T=UK. Then

$$ \begin{aligned}[c] \mathit{SU}K & = UKS \\ \mathit{SU} & = UKSK^{-1} \\ & = US^* \\ \Rightarrow\quad U & = \mathit{SU}\bigl(S^*\bigr)^{-1} = \mathit{SU}\bigl(S^{-1}\bigr)^* = \mathit{SU}{S^\dagger}^* \\ & = \mathit{SU} \widetilde{S}. \end{aligned} $$
(2.13)

Therefore with

$$ Z=\left [ \begin{array}{@{}c@{\quad}c@{}} 0 & -I \\ I & 0 \end{array} \right ], $$
(2.14)

S must satisfy

$$ SZ\widetilde{S}=Z. $$
(2.15)

The matrices S that are unitary and satisfy Eq. (2.15) are called symplectic matrices. We will now construct H matrices that are invariant under S, i.e. under symplectic transformations. Towards this end consider the quaternion units

$$ \tau_0 = \left [ \begin{array}{@{}c@{\quad}c@{}} 1 & 0 \\ 0 & 1 \end{array} \right ],\qquad \tau_1 = \left [ \begin{array}{@{}c@{\quad}c@{}} 0 & -i \\ -i & 0 \end{array} \right ],\qquad \tau_2 = \left [ \begin{array}{@{}c@{\quad}c@{}} 0 & -1 \\ 1 & 0 \end{array} \right ],\qquad \tau_3 = \left [ \begin{array}{@{}c@{\quad}c@{}} -i & 0 \\ 0 & i \end{array} \right ] $$
(2.16)

and H in the form, with \(H_0, H_1, H_2, H_3\) being N×N matrices,

$$ H = \sum_{k=0}^{3} H_k \otimes\tau_k = \left [ \begin{array}{@{}c@{\quad}c@{}} H_0-iH_3 & -iH_1-H_2 \\ -iH_1+H_2 & H_0+iH_3 \end{array} \right ]. $$
(2.17)

Then

$$ H=H^{\dagger}\quad\Rightarrow\quad H_0^{\dagger}=H_0, \qquad H_k^{\dagger} = -H_k. $$
(2.18)

Now \(THT^{-1}=H\) gives,

$$ \begin{aligned}[c] &H=\left [ \begin{array}{@{}c@{\quad}c@{}} H_0^*-iH_3^* & -iH_1^*-H_2^* \\ -iH_1^*+H_2^* & H_0^*+iH_3^* \end{array} \right ] \\ &\quad\Rightarrow\quad H_i = H_i^*,\quad i=0,1,2,3. \end{aligned} $$
(2.19)

Comparing Eqs. (2.18) and (2.19) we have,

$$ H_0 = H_0^* =\widetilde{H}_0,\qquad H_k = H_k^* = -\widetilde{H}_k. $$
(2.20)

Now we will prove that, if H is T invariant, then SHS −1 is also T invariant,

$$\begin{aligned} T\bigl[SHS^{-1}\bigr]T^{-1} =& TSHS^{-1}T^{-1} \\ =& STHT^{-1}S^{-1} \\ =& SHS^{-1}. \end{aligned}$$
(2.21)

Therefore the quaternion structure of H given by Eq. (2.17), valid for T invariant H with T 2=−1 and J may not be good, will be preserved by symplectic transformations, that is by S that are unitary and satisfying the condition \(SZ\widetilde{S}=Z\). Together with Eq. (2.20), the H’s are quaternion real (QR) matrices.

The results proved above will give the Hamiltonian form and the corresponding group structure under (J,T) invariance as follows:

  1. For T not good and J good or not good, the Hamiltonian is complex Hermitian and the canonical group of transformations is U(N), the unitary group in N dimensions (N is the dimension of the H matrix).

  2. For T good and J good, the Hamiltonian is real symmetric and the canonical group of transformations is O(N), the orthogonal group in N dimensions.

  3. For T good and J not good but J an integer, the Hamiltonian is real symmetric and the canonical group of transformations is again O(N).

  4. For T good and J not good but J a half-odd integer, the Hamiltonian is quaternion real and the canonical group of transformations is Sp(2N), the symplectic group in 2N dimensions (note that here the H matrix dimension is 2N as it must be even).

Note that in (1)–(4) above, in a single basis all H's can be simultaneously made real symmetric, QR or complex Hermitian as appropriate. In the absence of any information other than invariance with respect to J and T, one can represent the H matrix of a given quantum system by an ensemble of N×N matrices with structure as given by (1)–(4) above. The matrix elements are then chosen to be independent random variables. Note that for the U(N), O(N) and Sp(2N) systems mentioned above, the number of independent variables (for a complex number there are two variables, one for the real part and one for the imaginary part) will be N², N(N+1)/2 and N(2N−1) respectively. In the classical random matrix ensembles, called GUE, GOE and GSE respectively, the matrix elements are chosen to be independent Gaussian variables with zero center and unit variance (except that the diagonal matrix elements, which are real, have variance 2). Then these ensembles will be invariant under U(N), O(N) and Sp(2N) transformations respectively and accordingly they are called Gaussian unitary (GUE), orthogonal (GOE) and symplectic (GSE) ensembles. Table 2.1 gives for these three ensembles the corresponding transformation matrices and the mathematical structure of the Hamiltonians. In order to make a beginning in deriving the properties of GOE, GUE and GSE, we will start with the simplest 2×2 matrix version of these ensembles. Hereafter, zero centered Gaussian variables with variance v² will be denoted by G(0,v²).

Table 2.1 Classification of classical random matrix ensembles
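The three ensembles are straightforward to generate numerically. The following is a minimal numpy sketch (illustrative, not from the text; the function names and seed are arbitrary) that samples matrices with the conventions just described: independent zero centered Gaussian matrix elements, off-diagonal variance v² and real diagonal variance 2v², with the GSE member assembled from the quaternion block form of Eqs. (2.17) and (2.20).

```python
# Illustrative sketch: sampling GOE, GUE and GSE members (assumed conventions:
# off-diagonal variance v^2, real diagonal variance 2*v^2, as in the text).
import numpy as np

rng = np.random.default_rng(1)

def goe(n, v=1.0):
    """Real symmetric GOE member; N(N+1)/2 independent variables."""
    a = rng.normal(0.0, v, (n, n))
    return (a + a.T) / np.sqrt(2.0)   # off-diagonal var v^2, diagonal var 2v^2

def gue(n, v=1.0):
    """Complex Hermitian GUE member; N^2 independent variables."""
    re = rng.normal(0.0, v / np.sqrt(2.0), (n, n))
    im = rng.normal(0.0, v / np.sqrt(2.0), (n, n))
    h = np.triu(re + 1j * im, 1)      # independent complex upper triangle
    return h + h.conj().T + np.diag(rng.normal(0.0, np.sqrt(2.0) * v, n))

def gse(n, v=1.0):
    """Quaternion real GSE member of dimension 2n; N(2N-1) variables."""
    def antisym():
        a = rng.normal(0.0, v / np.sqrt(2.0), (n, n))
        return a - a.T                # real antisymmetric H_k, Eq. (2.20)
    s = rng.normal(0.0, v, (n, n))
    h0 = (s + s.T) / np.sqrt(2.0)     # real symmetric H_0
    h1, h2, h3 = antisym(), antisym(), antisym()
    return np.block([[h0 - 1j * h3, -1j * h1 - h2],
                     [-1j * h1 + h2, h0 + 1j * h3]])   # Eq. (2.17)

print(np.round(np.linalg.eigvalsh(gse(3)), 4))  # doubly degenerate eigenvalues
```

The GSE eigenvalues come out in Kramers degenerate pairs, as derived explicitly for the smallest case in Sect. 2.1.3 below.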

2.1.1 2×2 GOE

For a 2×2 GOE, the Hamiltonian matrix is

$$ H = \left [ \begin{array}{@{}c@{\quad}c@{}} X_1 +X_2 & X_3 \\ X_3 & X_1-X_2 \end{array} \right ] $$
(2.22)

and the joint distribution for the independent variables X 1, X 2 and X 3 is

$$ p(X_1,X_2,X_3)\,dX_1 \,dX_2\,dX_3 = P_1(X_1) P_2(X_2) P_3(X_3) \,dX_1\,dX_2\,dX_3. $$
(2.23)

The X i in Eq. (2.22) are G(0,v 2). Then, (X 1+X 2) is G(0,2v 2) and (X 1X 2) is G(0,2v 2). Let the eigenvalues of the H matrix be λ 1 and λ 2. Using the properties of the sum and product of eigenvalues, we have λ 1+λ 2=2X 1 and \(\lambda_{1}\lambda _{2}=X_{1}^{2}-X_{2}^{2}-X_{3}^{2}\). This gives

$$ S^2=(\lambda_1-\lambda_2)^2 = 4X_2^2+4X_3^2. $$
(2.24)

Now x 2=2X 2 is G(0,4v 2), x 3=2X 3 is G(0,4v 2) and they are independent. Therefore

$$ P(x_2,x_3)\;dx_2\;dx_3 = \frac{1}{2\pi(4v^2)}\exp -\frac{(x_2^2+x_3^2)}{8v^2}dx_2dx_3. $$
(2.25)

Transforming the variables x 2,x 3 to S,ϕ where x 2=Scosϕ, x 3=Ssinϕ we have

$$ P(S)dS = \frac{e^{-S^2/8v^2} SdS}{8\pi v^2}\int_{0}^{2\pi}d\phi. $$
(2.26)

Then the nearest neighbor spacing distribution (NNSD) is,

$$ P(S)dS=\frac{S}{4v^2}\exp\,-\frac {S^2}{8v^2} dS;\quad 0\le S\le\infty. $$
(2.27)

Note that, with \(\overline{D}\) denoting mean (or average) spacing,

$$ \int_0^{\infty}P(S)dS=1;\qquad\overline{D}=\int_0^{\infty}S P(S)dS= \sqrt{2\pi}v. $$
(2.28)

In terms of the normalized spacing \(\hat{S}=S/\overline{D}\),

$$ P(\hat{S})d\hat{S}=\frac{\pi\hat{S}}{2}\exp \biggl( -\frac{\pi\hat{S}^2}{4} \biggr) d\hat{S};\quad\overline{\hat{S}}=1. $$
(2.29)

Thus, GOE displays linear level repulsion with P(S)∼S as S→0. This is indeed the von Neumann-Wigner level repulsion discussed in 1929 [4], signifying that quantum levels with the same quantum numbers will not come close. The variance of P(S) is

$$\begin{aligned} \sigma^2(0) = & \overline{\hat{S}^2}-1= \frac{\overline{S^2}}{\overline{D}^2} -1 \\ = & \frac{8v^2}{2\pi v^2}-1=\frac{4}{\pi}-1\simeq0.272. \end{aligned}$$
(2.30)

Here, Eqs. (2.24) and (2.28) are used.
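The 2×2 results above are easily checked by direct sampling; a minimal sketch (illustrative seed and sample size) using Eqs. (2.22) and (2.24):

```python
# Monte Carlo check of Eqs. (2.28) and (2.30) for the 2x2 GOE.
import numpy as np

rng = np.random.default_rng(2)
v = 1.0
x2, x3 = rng.normal(0.0, v, (2, 200000))   # X_2, X_3 of Eq. (2.22), G(0, v^2)
s = np.sqrt(4 * x2**2 + 4 * x3**2)         # Eq. (2.24): S = |lambda_1 - lambda_2|
s_hat = s / s.mean()                       # normalized spacing, unit mean
print(s.mean(), np.sqrt(2 * np.pi) * v)    # mean spacing ~ sqrt(2 pi) v, Eq. (2.28)
print(s_hat.var(), 4 / np.pi - 1)          # sigma^2(0) ~ 0.272, Eq. (2.30)
```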

2.1.2 2×2 GUE

Here the Hamiltonian matrix is,

$$ H = \left [ \begin{array}{@{}c@{\quad}c@{}} X_1+X_2 & X_3+iX_4 \\ X_3-iX_4 & X_1-X_2 \end{array} \right ] $$
(2.31)

with X i being G(0,v 2) and independent. Solving for the eigenvalues of H, we get λ 1+λ 2=2X 1 and \(\lambda_{1}\lambda_{2} = X_{1}^{2} - X_{2}^{2} - X_{3}^{2} - X_{4}^{2}\). Therefore \(S^{2} = (\lambda_{1} - \lambda_{2})^{2} = 4(X_{2}^{2} + X_{3}^{2} + X_{4}^{2})\). With x 2=2X 2, x 3=2X 3 and x 4=2X 4, we have x 2, x 3 and x 4 to be independent G(0,4v 2) variables. The joint probability distribution function for these is

$$ P(x_2,x_3,x_4)dx_2dx_3dx_4 = \frac{1}{(2\pi )^{3/2}(2v)^3}\exp \biggl( -\frac{x_2^2+x_3^2+x_4^2}{8v^2} \biggr)dx_2dx_3dx_4. $$
(2.32)

Transforming to spherical co-ordinates i.e. x 2=Ssinθsinϕ, x 3=Ssinθcosϕ and x 4=Scosθ with dx 2 dx 3 dx 4=S 2 dSsinθdθdϕ,

$$\begin{aligned} P(S)dS = & \frac{S^2}{(2\pi )^{3/2}(2v)^3}\exp \biggl( -\frac{S^2}{8v^2} \biggr) dS\int_{0}^{\pi} \sin \theta d\theta \int_{0}^{2\pi}d\phi \\ = & \frac{S^2}{4\sqrt{2\pi}v^3}\exp \biggl( - \frac{S^2}{8v^2} \biggr) dS. \end{aligned}$$
(2.33)

Note that \(\int_{0}^{\infty}P(S)dS=1\) and \(\overline{D}=\int_{0}^{\infty}SP(S)dS=8v/\sqrt{2\pi}\). With \(\hat{S}=S/\overline{D}\),

$$ P(\hat{S})d\hat{S}= \frac{32\hat{S}^2}{\pi^2}\exp \biggl(-\frac{4\hat{S}^2}{\pi} \biggr)d\hat{S};\quad \overline{\hat{S}}=1. $$
(2.34)

Thus, GUE gives quadratic level repulsion with P(S)∼S 2 for S small. The variance of NNSD for GUE is,

$$ \sigma^2(0)=\overline{\hat{S}^2}-1= \frac{\overline{S^2}}{\overline{D}^2} -1 = \frac{3\pi}{8}-1\simeq0.178. $$
(2.35)

2.1.3 2×2 GSE

Here H is quaternion real defined by Eqs. (2.17) and (2.20). These equations and the choice

$$ \begin{aligned}[c] &H_0 = \left ( \begin{array}{@{}c@{\quad}c@{}} a & b \\ b & c \end{array} \right ),\qquad H_1 = \left (\begin{array}{@{}c@{\quad}c@{}} 0 & -x \\ x & 0 \end{array} \right ),\qquad H_2 = \left ( \begin{array}{@{}c@{\quad}c@{}} 0 & y\\ -y & 0 \end{array} \right ),\\ & H_3 = \left ( \begin{array}{@{}c@{\quad}c@{}} 0 & -z \\ z & 0 \end{array} \right ) \end{aligned} $$
(2.36)

will give, in the basis ordering \((|1\rangle, T|1\rangle, |2\rangle, T|2\rangle)\) corresponding to Eq. (2.12),

$$ H = \left [ \begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{}} a & 0 & b+iz & ix-y \\ 0 & a & ix+y & b-iz \\ b-iz & y-ix & c & 0 \\ -y-ix & b+iz & 0 & c \end{array} \right ]. $$
(2.37)

The eigenvalue equation for this matrix is,

$$\begin{aligned} &(a-\lambda) \left | \begin{array}{@{}c@{\quad}c@{\quad}c@{}} a-\lambda& ix+y & b-iz \\ y-ix & c-\lambda& 0 \\ b+iz & 0 & c-\lambda \end{array} \right | + (b+iz) \left | \begin{array}{@{}c@{\quad}c@{\quad}c@{}} 0 & a-\lambda& b-iz \\ b-iz & y-ix & 0 \\ -y-ix & b+iz & c-\lambda \end{array} \right | \\ &\quad{}+ (y-ix) \left | \begin{array}{@{}c@{\quad}c@{\quad}c@{}} 0 & a-\lambda& ix+y \\ b-iz & y-ix & c-\lambda\\ -y-ix & b+iz & 0 \end{array} \right | = 0. \end{aligned}$$
(2.38)

Simplifying, we obtain {(aλ)(cλ)−(b 2+z 2+y 2+x 2)}2=0 which implies that λ’s are doubly degenerate and they are given by,

$$ \lambda= \frac{(a+c)\pm [(a-c)^2+4(b^2+z^2+y^2+x^2)]^{1/2}}{2}. $$
(2.39)

This gives \(S=|\lambda_1-\lambda_2|=[(a-c)^2+4(b^2+z^2+y^2+x^2)]^{1/2}\). Let us define X_1=a+c, X_2=a−c, X_3=2b, X_4=2x, X_5=2y and X_6=2z. The X_i's are independent Gaussian variables G(0,4v²). Note that a and c are G(0,2v²) and b,x,y,z are G(0,v²). Thus \(S^{2}=X_{2}^{2}+X_{3}^{2}+X_{4}^{2}+X_{5}^{2}+X_{6}^{2}\). Transforming to spherical polar co-ordinates in 5 dimensions with hyper-radius S gives X_2=S cosθ₁cosθ₂cosθ₃cosθ₄, X_3=S cosθ₁cosθ₂cosθ₃sinθ₄, X_4=S cosθ₁cosθ₂sinθ₃, X_5=S cosθ₁sinθ₂ and X_6=S sinθ₁ (the volume element being \(dv=S^4\,dS\cos^3\theta_1\cos^2\theta_2\cos\theta_3\, d\theta_1 d\theta_2 d\theta_3 d\theta_4\)). Then,

$$\begin{aligned} P(S)dS = & \frac{S^4dS}{(2\pi )^{5/2}(4v^2)^{5/2}}\exp \biggl( -\frac{S^2}{8v^2} \biggr)\int _{-\pi/2}^{+\pi/2}\cos^3\theta_1 d \theta_1 \int_{-\pi/2}^{+\pi/2} \cos^2 \theta_2d\theta_2 \\ &{}\times\int_{-\pi/2}^{+\pi/2}\cos\theta_3d \theta_3 \displaystyle \int_{0}^{2\pi}d \theta_4. \end{aligned}$$
(2.40)

Simplifying, we have

$$ P(S)dS = \frac{S^4}{48\sqrt{2\pi}v^5}\exp \biggl(-\frac{S^2}{8v^2} \biggr)dS. $$
(2.41)

Note that \(\int_{0}^{\infty}P(S)dS=1\) and \(\overline{D}=\int _{0}^{\infty}S P(S)dS=32v/3\sqrt{2\pi}\). With \(\hat{S}=S/\overline{D}\), the NNSD is

$$ P(\hat{S})d\hat{S}=\frac{2^{18}}{3^6\pi^3}\hat{S}^4\exp \biggl( - \frac{64\hat{S}^2}{9\pi} \biggr)d\hat{S};\quad \overline{\hat{S}}=1. $$
(2.42)

Thus, GSE generates quartic level repulsion with P(S)∼S 4 for S small. The variance of the NNSD is

$$ \sigma^2(0)=\overline{\hat{S}^2}-1=\frac{\overline {S^2}}{\overline{D}^2} -1 = \frac{45\pi}{128}-1\simeq0.105. $$
(2.43)

Figure 2.1 shows NNSD for GOE, GUE and GSE as given by Eqs. (2.29), (2.34) and (2.42) respectively. More importantly, these random matrix forms for NNSD are seen in many different quantum systems such as nuclei, atoms, molecules etc. In addition, the RMT results for NNSD are also seen in microwave cavities, aluminum blocks, vibrating plates, atmospheric data, stock market data and so on. Figure 2.2 shows some examples. It is important to stress that the simple 2×2 matrix results are indeed very close, as proved by Mehta (see [5]), to the NNSD for any N×N matrix (N→∞). Thus, level repulsion given by random matrices is well established in real systems.

Fig. 2.1
figure 1

Nearest-neighbor spacing distributions for GOE, GUE and GSE ensembles compared with Poisson

Fig. 2.2
figure 2

Figure showing NNSD for different systems and their comparison with RMT. (i) Nuclear data ensemble [6]; (ii) chaotic Sinai billiard [7]; (iii) example from econophysics [8]; (iv) example from atmospheric science [9]; (v) EGOE(1+2) ensemble for fermion systems [10]; (vi) BEGOE(2) ensemble for boson systems [11]. In (i) and (ii) results for the Δ₃ statistic are also shown and these are from [12] and [7] respectively. In (iii), the NNSD is for the eigenvalues of the cross correlation matrix for 30-min returns of 1000 US stocks for the 2-yr period 1994–1995. Here a fit to the Brody distribution [Eq. (3.47)] is also shown. Similarly in (iv) the NNSD is for the eigenvalues of the correlation matrix for monthly mean sea-level pressure for the Atlantic domain from 1948 to 1999. Shown in the inset is the cumulative distribution for the monthly and daily averaged correlation matrices. Finally, the embedded ensembles EGOE(1+2) in (v) and BEGOE(2) in (vi) are discussed in detail in Chaps. 5 and 9 respectively. Figures (i)–(iv) (except the NNSD figure for the nuclear data ensemble, which is taken from [7] with permission from Springer) are taken from the above references with permission from the American Physical Society and figures (v) and (vi) with permission from Elsevier

2.2 One and Two Point Functions: N×N Matrices

For more insight into the Gaussian ensembles and for the analysis of data, we will consider one and two point functions in the eigenvalues. Although we consider only GOE in this section, many of the results extend to GUE and GSE [13]. Also Appendix B gives some properties of univariate and bivariate distributions in terms of their moments and cumulants and these are used throughout this book. A general reference here is Kendall’s book [14].

2.2.1 One Point Function: Semi-circle Density

Say the energies in a spectrum are denoted by x (equivalently, the eigenvalues of the corresponding H matrix). Then the number of states in a given interval Δx around the energy x defines ρ(x)dx, where ρ(x) is normalized to unity. In fact the number of states in the Δx interval around x is \(I(x)\varDelta x = d\,\rho(x)\,dx\) where d is the total number of states. Carrying out ensemble averaging (for GOE, GUE and GSE), we have \(\overline{\rho(E)}\) defined accordingly. Given \(\overline{\rho (E)}\), the average spacing \(\overline{D(E)}=[d\overline{\rho(E)}]^{-1}\). If we start with the joint probability distribution for the matrix elements of the Gaussian ensembles and convert this into the joint distribution for the eigenvalues P_N(E_1,E_2,…,E_N), one sees [1]

$$ P_N(E_1,E_2,\ldots,E_N)\propto \prod_{i<j}|E_i-E_j|^{\beta} \exp \biggl\{ -\alpha \sum_i E_i^2 \biggr\} . $$
(2.44)

Here β=1,2 and 4 for GOE, GUE and GSE respectively. Then \(\overline{\rho(E)}\) is the integral of P_N(E_1,E_2,…,E_N) over all E's except one. For completeness, let us point out that \(\rho(x)=\langle \delta(H-x)\rangle =d^{-1}\sum _{i=1}^{d}\delta(E_{i}-x)\). One can construct \(\overline{\rho(E)}\) via its moments \(\overline {M_{p}}=\overline{\langle H^{p}\rangle }=d^{-1}\overline{\mbox{Trace}(H^{p})}\).

Using the binary correlation approximation (BCA), used first by Wigner [15], it is possible to derive for \(\overline{\langle H^{p}\rangle }\) a recursion relation. In the present book most results, both for classical and embedded ensembles, are derived using BCA. In fact, for the embedded ensembles BCA is the only tractable method available at present for deriving formulas for higher moments (even though this also has limitations as discussed in later chapters).

In BCA, only terms that contain squares (but not any other power) of a given matrix element are considered and in the N→∞ limit, only these terms will survive. As the matrix elements are zero centered, all \(\overline{M_{p}}\) for p odd will be zero. Firstly, for p=2 we have

$$ \bigl\langle H^2\bigr\rangle = d^{-1}\sum _{i,j} H_{ij}H_{ji} = d^{-1} \sum _{i,j} (H_{ij} )^2. $$
(2.45)

Let us say that the variance of the matrix elements for the GOE matrices is v². Then the ensemble averaged second moment (note that all the moments are central moments as the centroid is zero) is,

$$ \overline{\bigl\langle H^2 \bigr\rangle } = \overline{H \sqcup H} = d^{-1}\sum_{i,j} \overline{ (H_{ij} )^2} = d^{-1}\sum_{i,j} v^2 = d\,v^2. $$
(2.46)

In Eq. (2.46) we have introduced a notation for correlated matrix elements. Also note that we have ignored the fact that the diagonal elements have variance 2v², as this gives a correction of order 1/d which vanishes as d→∞. The first complicated moment is M₄ or ⟨H⁴⟩. Explicitly,

$$ \bigl\langle H^4 \bigr\rangle = d^{-1}\sum _{i,j,k,l} H_{ij}H_{jk}H_{kl}H_{li}. $$
(2.47)

Then, the ensemble average gives three terms in the above sum (it contains a product of four matrix elements): (i) the first two H's correlated and similarly the last two; (ii) the first and the third and similarly the second and the fourth correlated; (iii) the first and the fourth and similarly the second and the third correlated. Their values are, in the same order,

$$ \begin{aligned} d^{-1}\sum_{i,j,k,l} \overline{H_{ij}H_{jk}}\; \overline{H_{kl}H_{li}} & = d^{-1}\sum_{i,j,l} v^4 = d^2v^4, \\ d^{-1}\sum_{i,j,k,l} \overline{H_{ij}H_{kl}}\; \overline{H_{jk}H_{li}} & = d^{-1}\sum_{i,j} v^4 = d\,v^4, \\ d^{-1}\sum_{i,j,k,l} \overline{H_{ij}H_{li}}\; \overline{H_{jk}H_{kl}} & = d^{-1}\sum_{i,j,k} v^4 = d^2v^4. \end{aligned} $$
(2.48)

Thus the second term, which involves cross correlation with an odd number of H's in between [in the second term in Eq. (2.48), there is one H in between], vanishes as d→∞. Then, we have

$$ \overline{\bigl\langle H^4 \bigr\rangle } = 2d^2v^4 = 2 \bigl( \overline{\bigl\langle H^2 \bigr\rangle } \bigr)^2. $$
(2.49)

Note that in Eqs. (2.45)–(2.49), the correlated H's are joined by the symbol '⊔'. Continuing the above procedure we have [16, 17], valid for all three ensembles with the normalization \(\overline{\langle H^{2}\rangle }=1\),

$$ M_{2\nu} = \overline{\bigl\langle H^{2\nu} \bigr\rangle } = \sum_{\mu=0}^{\nu-1} M_{2\mu}\, M_{2\nu-2\mu-2},\qquad M_0=1. $$
(2.50)

The solution is M 2ν+1=0 and \(M_{2\nu}=(\nu+1)^{-1}{2\nu \choose\nu}\). They are the Catalan numbers and it can be verified easily that they are the moments of a semi-circle. Thus \(\overline{\rho(E)}\) is a semi-circle,

$$ \overline{\rho(x)} = \frac{1}{2\pi} \bigl(4-x^2\bigr)^{1/2} = \frac{1}{\pi}\sin\psi(x). $$
(2.51)

Here ψ(x) is the angle between the x axis and the radius vector; x=−2cosψ(x), 0≤ψ≤π. Note that \(\overline{\rho(x)}\) vanishes for |x|>2 and M₂=1. Figure 2.3 shows an example for the semi-circle.

Fig. 2.3
figure 3

Semi-circle density of eigenvalues for a 500 dimensional GOE ensemble. The x-axis corresponds to standardized (zero center and unit variance) eigenvalues
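The semi-circle and its Catalan moments are also easy to see numerically; the sketch below (illustrative) diagonalizes one large GOE member normalized so that \(\overline{\langle H^2\rangle}=1\) (i.e. v²d=1) and prints the low even moments.

```python
# Even spectral moments of a GOE member with v^2 * d = 1: as d grows these
# approach the Catalan numbers 1, 2, 5, 14 (moments of the semi-circle).
import numpy as np

rng = np.random.default_rng(3)
d = 1000
a = rng.normal(0.0, 1.0, (d, d))
h = (a + a.T) / np.sqrt(2.0 * d)           # off-diagonal variance 1/d
e = np.linalg.eigvalsh(h)
for p in (2, 4, 6, 8):
    print(p, round(float(np.mean(e**p)), 3))
```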

Given a ρ(x), one can define the distribution function F(x),

$$ F(x) = \int_{-\infty}^x \rho(y)dy. $$
(2.52)

With ρ(x) being discrete, as the spectrum is discrete, F(x) is a staircase. Note that dF(x) counts the number of levels up to the energy x and therefore increases by one unit as we cross each energy level (if there are no degeneracies).

Another important property is that the exact density ρ(x) can be expanded in terms of \(\overline{\rho(x)}\) by using the polynomials defined with respect to \(\overline{\rho(x)}\),

$$ \rho(x) = \overline{\rho(x)} \biggl\{ 1 + \sum_{\zeta\ge 1}S_{\zeta}P_{\zeta}(x) \biggr\} . $$
(2.53)

If we know the moments M r of \(\overline{\rho(x)}\), the polynomials P ζ (x) defined by \(\overline{\rho(x)}\) can be constructed [18] such that \(\int_{-\infty}^{+\infty}\overline{\rho(x)}\;P_{\zeta}(x)\; P_{\zeta^{\prime }}(x)dx =\delta_{\zeta\zeta^{\prime }}\). Using this one can study level motion in Gaussian ensembles as described ahead.

2.2.2 Two Point Function S^ρ(x,y)

The two point function is defined by

$$ S^{\rho}(x,y) = \overline{\rho(x)\rho(y)}-\overline{\rho(x)}\, \overline{\rho(y)}. $$
(2.54)

If there are no fluctuations S ρ(x,y)=0. Thus S ρ measures fluctuations. The moments of S ρ are

$$\begin{aligned} M_{pq} = & \int\int x^p y^q S^{\rho}(x,y)dxdy \\ = & \overline{ \bigl\langle H^p \bigr\rangle \bigl\langle H^q \bigr\rangle }-\overline{ \bigl\langle H^p \bigr\rangle } \,\overline{ \bigl\langle H^q \bigr\rangle }. \end{aligned}$$
(2.55)

To derive S^ρ(x,y) (also M_pq) we consider, as in [13, 17], the polynomials defined by the semi-circle, i.e. Chebyshev polynomials of the second kind U_n(x). They are defined in terms of the sum,

$$ U_n(x)=\sum_{m=0}^{[n/2]} (-1)^m{n-m \choose m}(2x)^{n-2m}. $$
(2.56)

They satisfy the orthonormality condition \(\int_{-1}^{+1}\omega(x)U_{n}(x)U_{m}(x)dx=\pi/2\;\delta_{mn}\) with \(\omega(x)=\sqrt{1-x^{2}}\). Substituting y=2x in the orthonormality condition and using V n (y)=U n (y/2), we obtain,

$$ \begin{aligned}[c] &\int_{-2}^{+2}dy\overline{\rho(y)}V_n(y)V_m(y) = \delta_{nm}; \\ &\overline{\rho(y)}=\frac{1}{\pi}\sqrt{1-\frac{y^2}{4}} = \frac{\sin\psi(y)}{\pi}. \end{aligned} $$
(2.57)

Note also that \(U_{n}(x)=\frac{1}{a_{n}\omega (x)}\frac{\partial^{n}}{\partial x^{n}}[ \omega(x)[g(x)]^{n}]\); \(a_{n}=(-1)^{n}2^{n+1}\frac{\varGamma(n+3/2)}{(n+1)\sqrt{\pi }}\), g(x)=1−x 2. Similarly, V ζ (x)=(−1)ζ[sinψ(x)]−1sin(ζ+1)ψ(x).
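The orthonormality in Eq. (2.57) can be verified by direct quadrature; the sketch below (illustrative) uses the standard representation U_n(cosθ)=sin((n+1)θ)/sinθ with cosθ=y/2 (the text's ψ convention differs by ψ→π−θ, which only produces the (−1)^ζ factor noted above).

```python
# Quadrature check of Eq. (2.57): V_n(y) = U_n(y/2) are orthonormal with
# respect to the semi-circle density on (-2, 2).
import numpy as np

def V(n, y):
    theta = np.arccos(np.clip(y / 2.0, -1.0, 1.0))
    s = np.sin(theta)
    # sin((n+1)theta)/sin(theta); the endpoint limit +-(n+1) has zero weight
    return np.divide(np.sin((n + 1) * theta), s,
                     out=np.full_like(s, n + 1.0), where=s > 1e-12)

y = np.linspace(-2.0, 2.0, 200001)
dy = y[1] - y[0]
rho = np.sqrt(np.clip(1.0 - y**2 / 4.0, 0.0, None)) / np.pi
for n, m in [(0, 0), (1, 1), (2, 2), (1, 2), (0, 3)]:
    print(n, m, round(float(np.sum(rho * V(n, y) * V(m, y)) * dy), 4))  # ~ delta_nm
```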

Returning to M_pq, it is seen in the evaluation of \(\overline{\langle H^{p} \rangle \langle H^{q} \rangle }\) (again we use BCA) that the correlations come when, say, ζ of the H's in H^p correlate with ζ of the H's in H^q. When ζ=0 we get \(\{\overline {\langle H^{p}\rangle}\}\{\overline{\langle H^{q}\rangle }\}\). Therefore

$$ \overline{ \bigl\langle H^p \bigr\rangle \bigl\langle H^q \bigr\rangle } - \overline{ \bigl\langle H^p \bigr\rangle }\; \overline{ \bigl\langle H^q \bigr\rangle } = \sum_{\zeta\ge1} \mu^{p}_{\zeta}\,\mu^{q}_{\zeta}\; \overline{ \bigl\langle H^{\zeta} \sqcup H^{\zeta} \bigr\rangle } $$
(2.58)

and \(\mu_{\zeta}^{p}\) are obtained by a counting argument. French, Mello and Pandey [17] showed that (see also [19]),

$$ \mu_{\zeta}^{p} = {p \choose(p-\zeta)/2} = -\zeta^{-1} \int x^p \frac{d}{ dx} \bigl\{ \overline{\rho(x)}V_{\zeta-1}(x) \bigr\} dx. $$
(2.59)

Now let us evaluate \(\overline{\langle H^{\zeta} \sqcup H^{\zeta}\rangle }\). Firstly \(\langle H^{\zeta}\rangle =d^{-1}\sum_{i,j,k,\ldots=1}^{d} H_{ij}H_{jk}\ldots H_{ni}\). Then the number of indices is ζ and the number of terms in the sum is d^ζ. In \(\overline{\langle H^{\zeta} \sqcup H^{\zeta}\rangle }\) there will be d^ζ terms of the type ⟨H_ij H_ji⟩⟨H_jk H_kj⟩⋯. Choosing v²d=1, we have \(\langle H_{ij}^{2}\rangle = v_{ij}^{2} = v^{2}= 1/d\). Therefore these terms add up to \(d^{-2}\, d^{\zeta}(v^{2})^{\zeta} = d^{-2}\) (the factor d^{−2} comes from the two trace prefactors). However for every H_ij there are ⟨H_ij H_ij⟩ and ⟨H_ij H_ji⟩ for GOE. Both give v² and therefore there is an extra factor 2 for GOE. In case of GUE \(\overline{H_{ij}H_{ij}}= \overline{(a+ib)(a+ib) }=\overline{a^{2}}-\overline{b^{2}}+2i\overline{a}\overline{b}=0\) and \(\overline{H_{ij}H_{ji}}= \overline{(a+ib)(a-ib)}=\overline{a^{2}}+\overline{b^{2}}=v^2\) (\(|H_{ij}|^{2}=v_{ij}^{2}=v^{2}\) and v²d=1). Thus the factor is 1 for GUE. In addition there is cyclic invariance and therefore there is an additional ζ factor [for example, H₁₂H₂₁↔H₂₁H₁₂, H₁₂H₂₃H₃₁↔H₂₃H₃₁H₁₂↔H₃₁H₁₂H₂₃]. Then the final result is,

$$ \overline{ \bigl\langle H^{\zeta} \sqcup H^{\zeta} \bigr\rangle } = \frac{2\zeta}{\beta\, d^{2}} $$
(2.60)

with β=1 for GOE and β=2 for GUE. Putting this result in the expression for M pq we see that

$$\begin{aligned} \overline{M_{pq}} = &\sum_{\zeta\ge1} \frac{2\zeta}{\beta d^2\zeta^2}\int\!\!\!\int x^py^q\frac{\partial}{\partial x}\frac{\partial}{\partial y}\overline{\rho(x)}\,\overline{\rho(y)}V_{\zeta-1}(x)V_{\zeta-1}(y) dxdy \\ = & \frac{2}{\beta d^2} \sum_{\zeta \ge1}\zeta^{-1}\int\!\!\!\int x^p y^q\frac{\partial}{\partial x}\frac{\partial}{\partial y}\overline{\rho(x)}\,\overline{\rho(y)}V_{\zeta-1}(x)V_{\zeta-1}(y) dxdy. \end{aligned}$$
(2.61)

Then by inversion we get,

$$ \begin{aligned}[c] S^{\rho}(x,y) & \stackrel{\beta=1;\mathrm{GOE}}{=} \frac{2}{d^2} \frac{\partial}{\partial x}\frac{\partial}{\partial y} \biggl\{ \overline{ \rho(x)}\,\overline{\rho(y)}\displaystyle \sum_{\zeta \ge 1} \zeta^{-1}V_{\zeta-1}(x)V_{\zeta-1}(y) \biggr\} \\ \Rightarrow\quad S^F(x,y) & = \frac{2}{d^2} \overline{\rho(x)} \,\overline{\rho(y)} \biggl\{ \sum_{\zeta\ge1} \zeta^{-1} V_{\zeta-1}(x)V_{\zeta-1}(y) \biggr\} \\ & = \frac{2}{\pi^2d^2}\sum_{\zeta\ge 1}\zeta^{-1} \sin\zeta\psi_1(x)\sin\zeta\psi_2(y). \end{aligned} $$
(2.62)

Note that \(S^{F}(x,y)=\int_{-\infty}^{x}\int_{-\infty}^{y}S^{\rho }(x^{\prime }, y^{\prime })dy^{\prime }dx^{\prime }=\overline {F(x)F(y)}-\overline{F(x)} \,\overline{F(y)}\). The sum can be simplified by introducing a cut-off e^{−αζ} and extending the sum over ζ to ∞,

$$\begin{aligned} &\sum_{\zeta=1}^d\zeta^{-1}\sin\zeta \psi_1\sin \zeta\psi_2 \\ &\quad{}=\sum _{\zeta=1}^{\infty}\zeta^{-1}\exp(-\alpha\zeta )\sin \zeta\psi_1 \sin\zeta\psi_2 \\ &\quad{} = \sum_{\zeta=1}^{\infty} \biggl[\int _{\alpha}^{\infty}e^{-z\zeta}dz \biggr] \sin\zeta \psi_1\sin\zeta\psi_2 \\ &\quad{} = \mbox{Re} \Biggl\{ \frac{1}{2} \sum_{\zeta=1}^{\infty} \biggl[\int_{\alpha}^{\infty}e^{-z\zeta}dz \biggr] \bigl[e^{-i\zeta(\psi_1-\psi_2)}-e^ {-i\zeta(\psi_1+\psi_2)}\bigr] \Biggr\} \\ &\quad{} = \mbox{Re} \Biggl\{ \frac{1}{2}\sum_{\zeta=1}^{\infty} \int_{\alpha}^{\infty}\bigl\{ e^{-\zeta(z+i(\psi_1-\psi_2))}- e^{-\zeta(z+i(\psi_1+\psi_2))} \bigr\} dz \Biggr\} \\ &\quad{} = \mbox{Re}\frac{1}{2} \bigl[ \ln\bigl[ 1-e^{-(z+i(\psi_1-\psi_2))} \bigr]_{\alpha}^{\infty} - \ln\bigl[ 1-e^{-(z+i(\psi_1+\psi_2))} \bigr]_{\alpha}^{\infty} \bigr] \\ &\quad{} = \frac{1}{4}\ln \frac{1+e^{-2\alpha }-2e^{-\alpha}\cos(\psi_1+\psi_2)}{ 1+e^{-2\alpha}-2e^{-\alpha}\cos(\psi_1-\psi_2)}, \end{aligned}$$
(2.63)

where we have used the property that \(\mathrm{Re}[\ln(1+z)]=\frac{1}{2} \ln[(1+z)(1+z^{\ast})]\). Finally we have,

$$ S^F(x,y) = \frac{1}{2\pi^2d^2}\ln\frac {1+e^{-2\alpha}-2e^{-\alpha} \cos(\psi_1+\psi_2)}{1+e^{-2\alpha}-2e^{-\alpha}\cos(\psi_1-\psi _2)}. $$
(2.64)

Let us consider the structure of S^F(x,y) when x∼y. Firstly the number of levels r in the energy interval Δx=dx=x−y is dρ(x)dx. But 1−x²/4=sin²ψ gives x=−2cosψ. Therefore, dx=2sinψdψ and

$$ r = d\rho(x)dx= \biggl[\frac{d}{\pi} \sin\psi \biggr]2\sin\psi d\psi. $$
(2.65)

Then, S^F(x,y) is, with x∼y so that sinψ₁∼sinψ₂∼sinψ, ψ₁+ψ₂∼2ψ and ψ₁−ψ₂∼δψ,

$$\begin{aligned} S^F(x,y) \stackrel{\alpha\sim1/d}{\to}& \frac{1}{2\pi^2d^2}\ln \frac{2-2\cos 2\psi}{2-2\cos\delta\psi} \\ =& \frac{1}{\pi^2d^2}\ln\frac{\sin\psi }{\sin\delta\psi/2} \simeq \frac{1}{\pi^2d^2}\ln\frac{2\sin\psi }{\delta\psi} \\ =& \frac{1}{\pi^2d^2}\ln\frac{4d\sin ^3\psi}{\pi r}. \end{aligned}$$
(2.66)

In the last step, Eq. (2.65) is used. Therefore S^F(x,y) behaves as ln r. Now let us consider the self-correlation term S^F(x,x) which determines the level motion \(\delta x^{2}/\overline{D}^{2}=d^{2}S^{F}(x,x)\) in GOE,

$$ \begin{aligned}[c] S^F(x,x) & = \frac{1}{2\pi^2d^2}\ln\frac {1+e^{-2\alpha}-2e^{-\alpha}\cos 2\psi}{1+e^{-2\alpha}-2e^{-\alpha}} \\ & \stackrel{\alpha\sim 1/d}{\Rightarrow} \frac{1}{2\pi^2d^2}\ln\frac{2(1-\cos 2\psi)}{(1-e^{-\alpha})^2} \quad \bigl(e^{-\alpha}\simeq 1-\alpha\bigr) \\ & = \frac{1}{2\pi^2d^2}\ln \frac{4\sin^2\psi}{\alpha^2} \\ & = \frac{1}{2\pi^2d^2}\ln(2d\sin\psi)^2 \\ \Rightarrow\quad S^F(x,x) & = \frac{1}{\pi^2d^2}\ln2d\sin\psi= \frac{1}{\pi^2d^2}\ln2\pi d \overline{\rho(x)}. \end{aligned} $$
(2.67)

Before going further, it is useful to point out that the moment method with BCA used for deriving the asymptotic form of one and two point functions extends to many other random matrix ensembles. A recent discussion on the power of the moment method in random matrix theory is given in [20].

2.2.3 Fluctuation Measures: Number Variance Σ^2(r) and Dyson-Mehta Δ_3 Statistic

P(S), the nearest neighbor spacing distribution, and its variance σ²(0) are measures of fluctuations. Using S^F(x,y) we can define a new measure called the 'number variance' Σ²(r). Say in an energy interval x to y there are r levels; then r=d[F(y)−F(x)]. The statistic Σ²(r) is the ensemble averaged variance of r and

$$\begin{aligned} \varSigma^2(r) = & \overline{r^2} - ( \overline{r})^2 \\ = & d^2 \bigl[ \overline{\bigl(F(y)-F(x)\bigr)^2} - \bigl(\overline{F(y)-F(x)}\bigr)^2 \bigr] \\ = & d^2 \bigl[ \overline{F^2(y)}-\overline{F(y)}^2+ \overline {F^2(x)}-\overline{F(x)}^2 -2\bigl( \overline{F(x)F(y)}-\overline{F(x)}\,\overline{F(y)}\bigr) \bigr] \\ = & d^2 \bigl[ S^F(x,x) + S^F(y,y) - 2S^F(x,y) \bigr]. \end{aligned}$$
(2.68)

Thus Σ 2(r) is an exact “two-point measure”. Using the asymptotic expressions for S F(x,x) and S F(x,y), we obtain

$$\begin{aligned} \varSigma^2(r) = & d^2 \biggl[ \frac{2}{\pi^2d^2}\ln 2d \sin\psi- \frac{2}{\pi^2d^2}\ln \frac{4d\sin ^3\psi}{\pi r} \biggr] \\ = & \frac{2}{\pi^2}\ln \frac{\pi r}{2\sin ^2\psi}. \end{aligned}$$
(2.69)

Thus Σ 2(r) behaves as lnr and hence the GOE spectrum is rigid. The exact results valid at the center of the semi-circle are:

$$\begin{aligned} \begin{array}{rcl} \varSigma^2_{\mathrm{GOE}}(r) & = & \displaystyle\frac{2}{\pi^2} \biggl[ \ln 2 \pi r+1+\gamma- \frac {\pi^2}{ 8} \biggr] + O\biggl(\frac{1}{r}\biggr) \\ \varSigma^2_{\mathrm{GUE}}(r) & = & \displaystyle\frac{1}{\pi^2} [ \ln 2\pi r+1+ \gamma ] \end{array} \end{aligned}$$
(2.70)

where γ is Euler’s constant. Other important measure is Dyson-Mehta Δ 3 statistic and importantly, it is related to Σ 2(r) and hence it is also a two-point measure.

The Dyson-Mehta Δ₃ statistic [21] is defined as the mean square deviation of F(E), of the unfolded spectrum, from the best fit straight line, and \(\varDelta _{3}(\overline{n})\) corresponds to the same but over a spectrum of length \(\overline{n}\,\overline{D}\). The ensemble averaged \(\overline{\varDelta _{3}}(\overline{n})\) is then defined similarly,

$$ \overline{\varDelta _3}(\overline{n})=\overline{\varDelta _3}(2L)= \min \biggl[ \frac{1}{2L} \int_{x-L}^{x+L} \bigl[dF(y)-Ay-B\bigr]^2dy \biggr]_{(A,B)}. $$
(2.71)

Here \(L=\frac{\overline{n}}{2}\). The \(\overline{\varDelta _{3}} (\overline{n})\) statistic is an exact two-point measure. In fact it can be written as an integral involving Σ 2(r). This is proved in Appendix C. Using the GOE (similarly GUE) expression for Σ 2(r) and applying Eq. (C.8), we obtain the following expression for \(\overline{ \varDelta _{3}}(\overline{n})\) for GOE and GUE,

$$ \begin{aligned}[c] \bigl[\overline{\varDelta _3}(\overline{n}) \bigr]_{\mathrm{GOE}} & = \frac{1}{\pi^2} \biggl[\ln(2 \pi\overline{n}) + \gamma- \frac{5}{4} -\frac{\pi^2}{8} \biggr] + O \biggl(\frac{1}{\pi^2 \overline {n}} \biggr), \\ \bigl[\overline{\varDelta _3}(\overline{n}) \bigr]_{\mathrm{GUE}} & = \frac{1}{2\pi^2} \biggl[\ln(2 \pi\overline{n}) + \gamma-\frac{5}{4} \biggr] + O \biggl(\frac{1}{\pi^2 \overline{n}} \biggr). \end{aligned} $$
(2.72)

For a novel application of \(\overline{ \varDelta _{3}}(\overline{n})\) statistic, see [22].

2.3 Structure of Wavefunctions and Transition Strengths

2.3.1 Porter-Thomas Distribution

Given a transition operator T (this should not be confused with the time reversal operator 'T' used before or the isospin label 'T'), the transition strength connecting two eigenstates is defined by \(|\langle E_f \mid T \mid E_i \rangle |^2\). In nuclei the T's of interest are electromagnetic operators (magnetic dipole and electric quadrupole for example), one particle addition or removal, the Gamow-Teller operator and so on. Similarly, in atoms and molecules the dipole operator is very important. It is also important to recognize that the widths of resonances also measure transition strengths. Leaving detailed discussion of transition strength distributions to Ref. [13], here we will give only some basic results. One can think of T|E⟩ as a compound state and represent it by a basis state |i⟩. Therefore, the statistical properties of transition strengths will be the same as those of the expansion coefficients \(C_{i}^{E}\) of the eigenstates in terms of the basis states |i⟩,

$$ |E\rangle = \sum _{i=1}^d C_i^E |i\rangle. $$
(2.73)

Now an important question is: what is the distribution of \(|x_{i}|^{2}=|C^{E}_{i}|^{2}\)? First, let us consider the joint distribution P(x₁,x₂,x₃,…,x_d) of x₁, x₂, x₃,…,x_d for GOE. Because the GOE is an orthogonally invariant ensemble, the eigenvectors uniformly cover the d-dimensional unit sphere. Then the normalization condition \(\sum_{i} |x_i|^2=1\) gives,

$$ P(x_1,x_2,x_3,\ldots,x_d) = \frac{\varGamma(d/2)}{(\pi)^{d/2}}\delta \Biggl(\sum_{i=1}^{d} x_i^2 -1 \Biggr). $$
(2.74)

Now, integrating over all but one variable (say x 1 and denote it by x) will give

$$ \rho(x)dx= \frac{1}{\sqrt{\pi}} \frac {\varGamma(\frac{d}{2})}{ \varGamma(\frac{d-1}{2})} \bigl(1-x^2 \bigr)^{\frac{d-3}{2}}dx. $$
(2.75)

Then, in the d→∞ limit we obtain,

$$ \rho(x)dx = \sqrt{ \frac{d}{2\pi}} \exp-\frac{dx^2}{2}dx,\quad -\infty\leq x\leq\infty. $$
(2.76)

Thus, asymptotically x will be a zero centered Gaussian variable with variance 1/d. As \(\overline{|C^{E}_{i}|^{2}}\) should not depend on the index i and \(\sum_{i} |C^{E}_{i}|^{2}=1\), we have the significant result that for GOE \(\overline{|C^{E}_{i}|^{2}}=1/d\). Therefore, the distribution of the renormalized strengths \(z=|C^{E}_{i}|^{2}/\overline{|C^{E}_{i}|^{2}}\) is, putting dx²=z in Eq. (2.76),

$$ \rho(z)dz =\frac{1}{\sqrt{2\pi}} z^{-\frac{1}{2}} \exp \biggl(-\frac{z}{2} \biggr)dz,\quad 0 \leq z \leq\infty $$
(2.77)

and this is nothing but the \(\chi^{2}_{1}\) distribution. Equation (2.77), for GOE, is called the Porter-Thomas (P-T) law for strengths [23]. Thus locally renormalized strengths, \(z=|C^{i}_{j}|^{2}/\overline{|C^{i}_{j}|^{2}}\), follow the P-T law; note that \(\int_{0}^{\infty}z \rho(z) dz = 1\). The GOE P-T law was well tested in many examples as shown in Fig. 2.4. Similar to GOE, for GUE the P-T law is \(\chi_{2}^{2}\) as \(C^{i}_{j}\) will have real (say A) and imaginary (say B) parts with each of them being G(0,v²=1/d) and independent; then \(|C^{i}_{j}|^{2}=A^{2}+B^{2}\).

Fig. 2.4
figure 4

Porter-Thomas distribution for strengths compared with: (a) neutron resonance widths in 167Er [13]; (b) nuclear shell model transition strengths generated by a special two-body transition operator [10]; (c) widths from a microwave resonator [24]. Figures (a) and (c) are reproduced with permission from American Physical Society and (b) with permission from Elsevier
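The P-T law is equally simple to test numerically; a sketch (illustrative) that diagonalizes one GOE member and checks the χ²₁ moments of z:

```python
# GOE eigenvector components: z = d * |C|^2 should follow chi^2_1,
# i.e. mean 1 and variance 2, Eq. (2.77).
import numpy as np

rng = np.random.default_rng(5)
d = 500
a = rng.normal(0.0, 1.0, (d, d))
_, vecs = np.linalg.eigh((a + a.T) / np.sqrt(2.0))
z = d * vecs**2                    # locally renormalized strengths
print(z.mean(), z.var())           # ~ 1 and ~ 2
```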

2.3.2 NPC, S^info and Strength Functions

With eigenfunctions expanded in terms of some basis states (they form a complete set) |k⟩, let us define the following: (i) NPC (denoted as ξ₂), the number of principal components; (ii) S^{info}, the information entropy, and ℓ_H, the localization length. They are, with \(|E\rangle = \sum_{k=1}^{d} C^{E}_{k} |k\rangle \),

$$ \begin{aligned}[c] \xi_2(E) & = \Biggl[ \frac{1}{d\;\rho(E)} \sum _{E^\prime } \sum_{k=1}^d \bigl\vert C^{E^\prime }_k\bigr\vert ^4 \delta \bigl(E-E^\prime \bigr) \Biggr]^{-1} , \\ \ell_H(E) & = \exp \bigl[\bigl(S^{info}\bigr)_E \bigr]\big/(0.48d), \\ S^{info}(E) & = - \frac{1}{d\rho(E)} \sum_{E^\prime } \sum_{k=1}^d\bigl\vert C^{E^\prime }_k\bigr\vert ^2 \ln \bigl\vert C^{E^\prime }_k \bigr\vert ^2 \delta \bigl(E-E^\prime \bigr). \end{aligned} $$
(2.78)

In Eq. (2.78), degeneracies of the eigenvalues E are taken into account. It is important to stress that S info is the first and NPC [or the inverse participation ratio (IPR)] the second Rényi entropy introduced in chaos literature [25, 26]. For this reason NPC is denoted by ξ 2(E). Similarly, information entropy is also called Shannon entropy [27]. As we shall see ahead, S info and lnξ 2(E) carry the same information and their difference, called structural entropy, is also some times used (with S info or ξ 2) as a chaos measure; see [26] and references therein. For GOE, \(\overline {|C^{E}_{k}|^{2}}=\frac{1}{d}\) and \(C^{E}_{k}\) are Gaussian variables \(G(0,\frac{1}{d})\). Therefore \(\overline{|C^{E}_{k}|^{4}}=3(\overline{ |C^{E}_{k}|^{2}})^{2}=\frac{3}{d^{2}}\) and then

$$ \overline{\xi_2(E)} = \left ( \sum_{k=1}^d\; \frac{3}{d^2}\right )^{-1} = \frac{d}{3}\quad\mbox{for GOE}. $$
(2.79)

Thus \(\overline{\xi_{2}(E)}\) is independent of E for GOE. Similarly \(\overline{S^{info}(E)} = -d\overline{|C^{E}_{k}|^{2} \ln|C^{E}_{k}|^{2}}\) where \(C^{E}_{k}\) are \(G(0,\frac{1}{d})\). Then,

$$ \begin{aligned}[c] \overline{S^{info}(E)} & = - \frac{d}{\sqrt{2\pi }\sigma} \int_{-\infty}^{\infty}x^2 \ln x^2 e^{-\frac {x^2}{2\sigma^2}}dx;\quad \sigma^2= \frac{1}{d} \\ & = -\frac{8d\sigma^2}{\sqrt{\pi}}\int_0^{\infty } y^2 [\ln y + \ln\sqrt{2}\sigma ]e^{-y^2}dy;\quad x = \sqrt{2}\sigma y \\ & = -\frac{8}{\sqrt{\pi}} \biggl[\int_0^{\infty} y^2 \ln y e^{-y^2}dy +\ln\sqrt{\frac{2}{d}}\int _{0}^{\infty}y^2e^{-y^2}dy \biggr] \\ & = -\frac{8}{\sqrt{\pi}}\int_{0}^{\infty}y^2 \ln y e^{-y^2}dy - \ln\frac{2}{d} \\ \Rightarrow\quad\exp\bigl(S^{info}\bigr) & = \frac{d}{2} \exp {- \zeta},\qquad \exp{-\zeta} = \exp\biggl(-\frac{8}{\sqrt{\pi}}\int_0^{\infty}x^2 \ln x e^{-x^2}dx\biggr) \\ &\phantom{= \frac{d}{2} \exp {-\zeta},\qquad \exp{-\zeta}\ } {}= 4 \exp{ \gamma-2} \simeq 0.964 \\ & \Rightarrow\quad \exp S^{info}(E) = 0.48d,\qquad \ell_H(E)=1. \end{aligned} $$
(2.80)

Thus for GOE, S^{info}=ln(0.48d), independent of E, and the localization length is unity by definition. The NPC and S^{info} formulas are well verified in numerical examples.
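A quick numerical verification (illustrative sketch; matrix size arbitrary) of Eqs. (2.79) and (2.80):

```python
# GOE: NPC ~ d/3 and exp(S_info) ~ 0.48 d, Eqs. (2.79) and (2.80).
import numpy as np

rng = np.random.default_rng(6)
d = 500
a = rng.normal(0.0, 1.0, (d, d))
_, vecs = np.linalg.eigh((a + a.T) / np.sqrt(2.0))
c2 = vecs**2                               # |C_k^E|^2, columns are eigenstates
xi2 = 1.0 / np.sum(c2**2, axis=0)          # NPC for each eigenstate
s_info = -np.sum(c2 * np.log(c2), axis=0)  # information (Shannon) entropy
print(xi2.mean(), d / 3.0)
print(np.exp(s_info).mean(), 0.48 * d)
```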

Besides NPC and S^{info}, localization properties of wavefunctions can be inferred from strength functions. Given \(C_{k}^{E}\), strength functions are defined by,

$$ F_k(E) = \sum_{E^{\prime}} \overline{\bigl\vert C_k^{E^{\prime}}\bigr\vert ^2}\, \delta\bigl(E-E^{\prime}\bigr) = \overline{\bigl\vert C_k^{E}\bigr\vert ^2}\; d\,\rho(E) $$
(2.81)

where \(\overline{|C_{k}^{E}|^{2}}\) denotes the average of \(|C_{k}^{E}|^{2}\) over the eigenstates with the same energy E. A commonly used form for strength functions is the Breit-Wigner (BW) form and its derivation is given in Appendix D. Many other aspects of transition strengths and strength fluctuations are discussed in [13, 28–30]; see also Appendix E.

2.4 Data Analysis

2.4.1 Unfolding and Sample Size Errors

In the analysis of data one has to pay attention to the following facts: (1) data is available for a given system (say a nucleus); (2) the sample size (number of levels) is usually small (∼100); (3) over the sample, the density may vary. Point (3) is true for shell model data or any other model data. To take care of (1) and (2) it is possible to invoke the ergodicity and stationarity properties of the Gaussian ensembles (GE: GOE, GUE or GSE) [13, 31]. Ergodicity permits us to compare ensemble averaged results from a GE with spectral averaged results from a given spectrum. Similarly, due to stationarity, the measures (statistics) of fluctuations will be independent of which part of the spectrum one is looking at. To the extent that the spectra are "complex", we can use stationarity to combine the values of the measures (with appropriate weights) to increase the sample size. Let us first consider point (3). When ρ(x) is varying, it is necessary to remove the "secular variation" before the data is analyzed. This is called "unfolding". For this we have to map the given energies E_i to new energies ε_i such that the ε_i spectrum has constant density. Say ε_i=g(E_i) such that ε_i has unit mean spacing on the average in the interval \(E_{i}\pm\frac{\varDelta E}{2}\). Then, with ΔN levels in the interval Δε,

$$ \begin{aligned}[b] &\frac{\varDelta \varepsilon }{\varDelta N}= 1 = \frac{1}{\varDelta N} \biggl[g\biggl(E+ \frac{\varDelta E}{2}\biggr)- g\biggl(E-\frac{\varDelta E}{2}\biggr) \biggr] \\ &\phantom{\frac{\varDelta \varepsilon }{\varDelta N}} {}=\frac{\varDelta E}{\varDelta N}g^\prime (E) = \frac{1}{N\overline{\rho(E)}}g^\prime (E), \\ &\quad\Rightarrow\quad g^\prime (E) = N\overline{\rho(E)}. \end{aligned} $$
(2.82)

Now integrating Eq. (2.82) on both sides will give the map,

$$ g(E) = N\overline{F(E)}\quad \Rightarrow\quad \varepsilon _i = N \overline{F(E_i)}. $$
(2.83)

The ε_i's given by Eq. (2.83) will have \(\overline{D}=1\). The significance of the physically (theoretically) defined \(\overline{\rho(E)}\) in unfolding has been discussed by Brody et al. [13] and more recently by Jackson et al. [32]. Normally one tries to fit F(E) to a smooth curve, by a least squares procedure, to obtain \(\overline{F(E)}\), but often this is not proper as the fluctuations depend on the \(\overline{F(E)}\) used in the unfolding procedure.
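In code, the usual recipe is a least squares fit of a smooth curve to the staircase; a minimal polynomial-fit sketch (illustrative; the polynomial order is an arbitrary choice and, as just noted, the extracted fluctuations can depend on it):

```python
# Unfolding via Eq. (2.83): eps_i = N * F_bar(E_i), with F_bar obtained here
# from a least squares polynomial fit to the staircase function.
import numpy as np

def unfold_polyfit(energies, order=7):
    e = np.sort(np.asarray(energies, dtype=float))
    stair = np.arange(1, len(e) + 1) - 0.5   # staircase evaluated at the levels
    coef = np.polyfit(e, stair, order)       # smooth fit to N * F_bar(E)
    return np.polyval(coef, e)               # unfolded energies, mean spacing ~ 1
```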

Coming to the sample size errors, let us consider a measure w calculated over say p levels. Say the theoretical value of w (for infinite sample size) is \(\overline{w}\) and its variance \(\overline{\mbox{var}(w)}\). Then the figure of merit is \(f=\frac{\sqrt{\mathrm{var}(w)_{p}}}{(w)_{p}} \times 100\) and \(\overline{w}\rightarrow\overline{w}\{1\pm\frac{f}{100}\}\). For \(\varSigma^{2}(\overline{n})\), the Poisson estimate gives \(f\sim\sqrt{\frac{2\overline{n}}{p}} \times100\); thus f→0 for p→∞. In practice overlapping intervals are also used; then it is seen via Monte-Carlo calculations that f is reduced by a factor 0.65. Applying the sample size error to the GOE analytical results, it is seen that theory and experiment agree almost exactly in the case of the Nuclear Data Ensemble (NDE) constructed by combining neutron resonance data from many nuclei [6, 12, 33]. Finally, in practice, for \(\overline{\varDelta _{3}}(\overline{n})\) and \(\overline{D}\) calculations, it is more useful to use the simple formulas given by French, Pandey, Bohigas and Giannoni [13, 34]. Given a sequence of (ordered) energies (E₁,E₂,…,E_n), the mean spacing \(\overline{D}\) is [13],

$$ \overline{D}=\frac{12}{n(n^2-1)} \sum_{i=1}^n \biggl(i-\frac{n+1}{2} \biggr) E_i. $$
(2.84)

Similarly consider \(\overline{\varDelta _{3}}(L)\) over an energy interval α to α+L. Defining \(\widetilde{E}_{i}\) to be \(\widetilde{E}_{i}=E_{i}-(\alpha+\frac{L}{2})\), we have [34],

$$\begin{aligned} \overline{\varDelta _3}(\alpha;L) = & \frac{n^2}{16} - \frac{1}{L^2} \Biggl[\sum_{i=1}^n \widetilde{E}_i \Biggr]^2 + \frac{3n}{2L^2} \Biggl[\sum _{i=1}^{n}\widetilde{ E}_i^2 \Biggr] \\ &{} - \frac{3}{L^4} \Biggl[ \sum_{i=1}^n \widetilde{E}_i^2 \Biggr]^2 + \frac{1}{L} \Biggl[ \sum_{i=1}^n (n-2i+1)\widetilde{E}_i \Biggr]. \end{aligned}$$
(2.85)
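Direct transcriptions of Eqs. (2.84) and (2.85) (illustrative function names; the input is an ordered sequence of unfolded energies):

```python
# Mean spacing, Eq. (2.84), and Delta_3 over (alpha, alpha + L), Eq. (2.85).
import numpy as np

def mean_spacing(e):
    e = np.sort(np.asarray(e, dtype=float))
    n = len(e)
    i = np.arange(1, n + 1)
    return 12.0 * np.sum((i - (n + 1) / 2.0) * e) / (n * (n**2 - 1))

def delta3(e, alpha, L):
    e = np.sort(np.asarray(e, dtype=float))
    et = e[(e >= alpha) & (e <= alpha + L)] - (alpha + L / 2.0)  # E_tilde_i
    n = len(et)
    i = np.arange(1, n + 1)
    s1, s2 = et.sum(), (et**2).sum()
    return (n**2 / 16.0 - s1**2 / L**2 + 1.5 * n * s2 / L**2
            - 3.0 * s2**2 / L**4 + np.sum((n - 2 * i + 1) * et) / L)
```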

2.4.2 Poisson Spectra

When comparing GOE (or GUE, GSE) results with data, it is important to consider the Poisson case, i.e. uncorrelated spectra, so that the effects due to GOE correlations will be clear. We can generate a Poisson spectrum as follows. First generate a set {s} such that s is a random variable following the exp(−x) probability distribution. Then choose x₁=0 and x_{n+1}=x_n+s_n; n=1,2,3,… with s_n drawn from {s}. Now the nearest neighbor spacing is S=s_n. Therefore P(S)dS for the sequence (x₁,x₂,x₃,…) is

$$ P(S)dS=\exp(-S)\,dS. $$
(2.86)
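The construction just described takes only a few lines; a sketch (illustrative seed and length):

```python
# Poisson spectrum from cumulated exp(-s) spacings, so that P(S) = exp(-S),
# Eq. (2.86).
import numpy as np

rng = np.random.default_rng(7)
s = rng.exponential(1.0, 5000)              # spacings drawn from exp(-x)
x = np.concatenate(([0.0], np.cumsum(s)))   # x_1 = 0, x_{n+1} = x_n + s_n
print(np.diff(x).mean(), np.diff(x).var())  # both ~ 1 for the exp(-S) law
```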

An important question is: what are \(\varSigma^{2}(\overline{n})\) and \(\overline{\varDelta _{3}}(\overline{n})\) for a Poisson spectrum? Before proceeding further let us mention that a random superposition of several independent spectra leads to Poisson, as proved by Porter and Rosenzweig [35]. In order to derive the results for \(\varSigma^{2}(\overline {n})\) and \(\varDelta _{3}( \overline{n})\) for a Poisson spectrum, it is useful to consider the R₂(E₁,E₂) and Y₂(r) correlation functions for an unfolded spectrum. R₂(E₁,E₂) is the integral of P(E₁,E₂,…,E_n) over E₃,E₄,…,E_n and Y₂ is simply related to R₂,

$$ \begin{aligned}[c] R_2(x_1,x_2) & = N(N-1)\int dx_3dx_4\ldots dx_N P_N(x_1,x_2,\ldots ,x_N), \\ Y_2(x_1,x_2) & = -R_2(x_1,x_2)+R_1(x_1)R_1(x_2). \end{aligned} $$
(2.87)

Note that R 1(x 1)=1=Y 1(x 1) and the x’s are unfolded energies, i.e. mean spacing \(\overline{D}=1\). The important point is that for a Poisson Y 2(r)=0 and therefore, as \(\varSigma^{2}(L)=L-\int_{0}^{L}\,(L-r)Y_{2}(r)dr\),

$$ \varSigma^2(L) = L,\qquad\overline{\varDelta _3}(L) = \frac{L}{15} $$
(2.88)

for a Poisson. It is also useful to mention that pseudo-integrable systems (which possess singularities and are integrable in the absence of these singularities) [36–39] follow semi-Poisson statistics [40, 41],

$$ P(S)dS = 4S \exp(-2S)\, dS. $$
(2.89)

This form has also been seen recently in the low-energy part of spectra generated by two-body interactions [42].

2.4.3 Analysis of Nuclear Data for GOE and Poisson

Bohigas, Haq and Pandey [6, 12] analyzed slow neutron resonance, proton resonance and (n,γ) reaction data (for level and width fluctuations) and established that GOE describes, within sample size errors, the experimental data almost exactly. They constructed the nuclear data ensemble (NDE) with 1762 resonance energies corresponding to 36 sequences of 32 different nuclei and these contain: (i) slow-neutron resonance 1/2+ levels from 64,66,68Zn, 114Cd, 152,154Sm, 154,156,158,160Gd, 160,162,164Dy, 166,168,170Er, 172,174,176Yb, 182,184,186W, 186,190Os, 232Th and 238U targets; (ii) proton resonance 1/2+ levels (1/2 also for Ca) from 44Ca, 48Ti and 56Fe targets; (iii) (n,γ) reaction data on 177,179Hf and 235U giving two sequences of J π levels. Similarly considered are 1182 widths corresponding to 21 sequences of the above neutron resonance data. Comparisons are made with the GOE Wigner law P(S)dS=(πS/2)exp(−πS²/4)dS for the nearest neighbor spacing distribution (NNSD) and the Porter-Thomas (P-T) \(\chi^{2}_{1}\) law \(P(x)dx = (1/\sqrt{2\pi x}) \exp(-x/2) dx\) for the widths; S is in units of the average mean spacing (\(\overline{D}\)) and x is the width (rate of transition from an initial state to a channel) in units of the average width. Similarly, the GOE number variance Σ²(L) and Δ₃(L) are also seen to agree (for L≤20) extremely well with the NDE. In the analysis, sample size corrections are made as discussed earlier. Unlike the resonance energies, the resonance widths analysis appears to have some uncertainties (see [6] and Appendix E).

Garrett et al. [43] analyzed the NNSD for high-spin levels near the yrast line in rare-earth nuclei. Considered are 3130 experimental level spacings from deformed even-even and odd-A nuclei with Z=62–75 and A=155–185. As expected, P(S) is seen to follow the regular Poisson form. Following this study, Enders et al. [44] analyzed the NNSD for scissors mode levels in deformed nuclei and found Poisson behavior, as the scissors mode is a well defined collective mode. Used here are 152 levels from 13 heavy deformed nuclei [146,148,150Nd, 152,154Sm, 156,158Gd, 164Dy, 166,168Er, 174Yb, 178,180Hf] in the energy range 2.5<E_x<4 MeV with the constraint that there must be at least 8 levels in the given energy interval in a given nucleus. Thus low-lying levels of well deformed nuclei and scissors states, being regular, follow Poisson as expected. Finally, Enders et al. [45] also analyzed the electric pigmy dipole resonances located around 5–7 MeV in four N=82 isotones. They made an ensemble of 184 1^− states and an analysis, though difficult due to many missing levels, has been carried out. The authors conclude that there is GOE behavior. Thus, level fluctuations bring out the expected difference between the scissors mode and the pigmy dipole resonance.