
1 Introduction

In random matrix theory, the determinant is naturally an important functional. The study of determinants of random matrices has a long history. The earlier papers focused on the determinant \(\det A_{n}\) of a non-Hermitian iid matrix \(A_{n}\), where the entries of the matrix were independent random variables with mean 0 and variance 1. Szekeres and Turán [23] studied an extremal problem. Later, in a series of papers, moments of the determinants were computed, see [20] and [4] and references therein. In [24], Tao and Vu proved for Bernoulli random matrices that, with probability tending to one as n tends to infinity,

$$\sqrt{n!}\exp (-c\sqrt{n\log n}) \leq \vert \det A_{n}\vert \leq \sqrt{n!}\,\,\omega (n)$$
(1)

for any function ω(n) tending to infinity with n. This shows that almost surely, \(\log \vert \det A_{n}\vert\) is \((\frac{1} {2} + o(1))n\,\log n\). In [11], Goodman considered the random Gaussian case, where the entries of \(A_{n}\) are iid standard real Gaussian variables. Here the square of the determinant can be expressed as a product of independent chi-square variables, and it was proved that

$$\frac{\log (\vert \det A_{n}\vert ) -\frac{1} {2}\log n! + \frac{1} {2}\log n} {\sqrt{\frac{1} {2}\log n}} \rightarrow N(0,1)_{\mathbb{R}},$$
(2)

where \(N(0,1)_{\mathbb{R}}\) denotes the real standard Gaussian (convergence in distribution). A similar analysis also works for complex Gaussian matrices, in which the entries remain jointly independent but now have the distribution of the complex Gaussian \(N(0,1)_{\mathbb{C}}\). In this case a slightly different law holds true:

$$\frac{\log (\vert \det A_{n}\vert ) -\frac{1} {2}\log n! + \frac{1} {4}\log n} {\sqrt{\frac{1} {4}\log n}} \rightarrow N(0,1)_{\mathbb{R}}.$$
(3)

Girko [9] stated that (2) holds for real iid matrices under the assumption that the fourth moment of the atom variables is 3. In [10] he claimed the same result under the assumption that the atom variables have bounded (4 + δ)-th moment. Recently, Nguyen and Vu [19] gave a proof of (2) under an exponential decay hypothesis on the entries. They also presented an estimate for the rate of convergence, namely that the Kolmogorov distance between the distribution of the left-hand side of (2) and the standard real Gaussian can be bounded by \({\log }^{-\frac{1} {3} +o(1)}n\). In our paper we will be able to improve the bound to \({\log }^{-\frac{1} {2} }n\) in the Gaussian case.
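The Gaussian statement (2) is easy to probe by simulation. The following sketch (assuming numpy is available; the matrix size and the number of trials are arbitrary choices) standardizes \(\log \vert \det A_{n}\vert\) exactly as in (2); since the normalizing variance is only of order log n, the empirical agreement with the standard normal is necessarily rough.

```python
# Monte Carlo sketch of the CLT (2) for real iid Gaussian matrices.
# numpy is assumed; n and the number of trials are arbitrary choices.
import numpy as np
from math import lgamma, log, sqrt

rng = np.random.default_rng(0)
n, trials = 100, 2000
center = 0.5 * lgamma(n + 1) - 0.5 * log(n)   # (1/2)log n! - (1/2)log n
scale = sqrt(0.5 * log(n))
stats = []
for _ in range(trials):
    A = rng.standard_normal((n, n))
    _, logabsdet = np.linalg.slogdet(A)        # log|det A_n|, numerically stable
    stats.append((logabsdet - center) / scale)
stats = np.array(stats)
print(stats.mean(), stats.var())   # mean near 0; variance tends to 1 at log-rate only
```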

In the non-Hermitian iid model \(A_{n}\) it is a crucial fact that the rows of the matrix are jointly independent. This independence no longer holds for Hermitian random matrices, which makes the analysis of determinants of Hermitian random matrices more challenging. The analogue of (1) for Hermitian random matrices was first proved in [25, Theorem 31] as a consequence of the famous Four Moment Theorem. Even in the Gaussian case, it is not simple to prove an analogue of the Central Limit Theorem (CLT) (3). The observations in [11] do not apply due to the dependence between the rows. In [18] and in [15], the authors computed the moment generating function of the log-determinant for the Gaussian unitary and Gaussian orthogonal ensembles, respectively, and discussed the central limit theorem via the method of cumulants (see [15, (40) and Appendix D]): consider a Hermitian \(n \times n\) matrix \(X_{n}\) in which the atom distributions \(\zeta _{ij}\) are given by the complex Gaussian \(N(0,1)_{\mathbb{C}}\) for i < j and by the real Gaussian \(N(0,1)_{\mathbb{R}}\) for i = j (this is called the Gaussian Unitary Ensemble (GUE)). The calculations in [15] should imply a Central Limit Theorem (see Remark 2.4 in our paper):

$$\frac{\log (\vert \det X_{n}\vert ) -\frac{1} {2}\log n! + \frac{1} {4}\log n} {\sqrt{\frac{1} {2}\log n}} \rightarrow N(0,1)_{\mathbb{R}}.$$
(4)

Recently, Tao and Vu [26] presented a different approach to proving this result, approximating the log-determinant by a sum of weakly dependent terms, based on analyzing a tridiagonal form of the GUE due to Trotter [27]. They apply stochastic calculus and a martingale central limit theorem to obtain their result. This method is quite different and also quite involved. More important for us, the techniques of Tao and Vu do not seem applicable for obtaining finer asymptotics such as Cramér-type moderate deviations, Berry-Esseen bounds and moderate deviations principles. The reason is that the approximation by a sum of weakly dependent terms they have chosen is not sharp enough. Let us emphasize that Tao and Vu proved the CLT (4) for certain Wigner matrices, establishing a Four Moment Theorem for determinants.

The aim of our paper is to use a closed formula for the moments of the determinant of a GUE matrix, giving at the same time a closed formula for the cumulant generating function of the log-determinant. We will be able to present good bounds for all cumulants. As a consequence we will obtain Cramér-type moderate deviations, Berry-Esseen bounds and a moderate deviation principle (for definitions see Sect. 2) for the log-determinant of the GUE, improving results in [15] and [26]. Moreover we will obtain similar results for the GOE ensemble. Such results follow from good estimates on the cumulants. To obtain them we apply a celebrated lemma from the theory of large deviations probabilities due to Rudzkis et al. [21, 22], as well as results on moderate deviation principles via cumulants due to the authors [6]. Applying the recent Four Moment Theorem for determinants due to Tao and Vu [26], we are able to prove the moderate deviation principle and Berry-Esseen bounds for the log-determinant of Wigner matrices matching four moments with either the GUE or the GOE ensemble. Moreover we will be able to prove moderate deviations results and will improve the Berry-Esseen type bounds in [19] in the cases of non-symmetric and non-Hermitian Gaussian random matrices, called Ginibre ensembles.

Let us remark that the first universal moderate deviations principle was proved in [7] and [8] for the number of eigenvalues of a Wigner matrix, based on fine asymptotics of the variance of the eigenvalue counting function of GUE matrices, on the Four Moment Theorem and on localization results.

2 Gaussian Ensembles and Wigner Matrices

Among the ensembles of \(n \times n\) random matrices \(X_{n}\), the Gaussian orthogonal and unitary ensembles have been studied extensively and are still being investigated. Their probability densities are proportional to \(\exp (-\operatorname{tr}(X_{n}^{2}))\), where tr denotes the trace. The matrices are real symmetric for the Gaussian orthogonal ensemble (GOE) and Hermitian for the Gaussian unitary ensemble (GUE). The joint distributions of the eigenvalues for the Gaussian ensembles are ([1, Theorem 2.5.2], [17, Chap. 3])

$$P_{n,\beta }(\lambda _{1},\ldots ,\lambda _{n}) := \frac{1} {Z_{n,\beta }}\exp {\biggl ( - \frac{\beta } {4}\displaystyle\sum _{i=1}^{n}\lambda _{ i}^{2}\biggr )}\displaystyle\prod _{ 1\leq j<k\leq n}\vert \lambda _{j} -\lambda _{k}{\vert }^{\beta },$$
(5)

where β = 1, 2 for the orthogonal and unitary ensembles, respectively, and \(Z_{n,\beta }\) is the normalizing constant, sometimes called the Mehta integral (see [1, Theorem 2.5.2, formula (2.5.4), and Corollary 2.5.9, Selberg’s integral formula]).

Let us denote by \(X_{n}^{\beta }\) the random matrices of the two Gaussian ensembles. We are interested in the moments of \(\vert \det X_{n}^{\beta }\vert\) for these ensembles, that is

$$M_{n,\beta }(s) :=\langle \vert \det X_{n}^{\beta }{\vert }^{s}\rangle _{ \beta } :=\displaystyle\int _{{\mathbb{R}}^{n}}P_{n,\beta }(\lambda _{1},\ldots ,\lambda _{n})\displaystyle\prod _{i=1}^{n}\vert \lambda _{ i}{\vert }^{s}\,d\lambda _{ i}.$$

All information about the distribution of \(\log \vert \det X_{n}^{\beta }\vert\) can be obtained from the generating function \(M_{n,\beta }(s)\). The moments of \(\log \vert \det X_{n}^{\beta }\vert\) may be obtained from the coefficients in the Taylor expansion of \(M_{n,\beta }\) evaluated at s = 0,

$$M_{n,\beta }(s) =\displaystyle\sum _{j\geq 0}\frac{\langle {(\log \vert \det X_{n}^{\beta }\vert )}^{j}\rangle _{\beta }} {j!} \,{s}^{j},$$

the corresponding cumulants \(\Gamma _{j}(n,\beta ) := {(-i)}^{j} \frac{{d}^{j}} {d{t}^{j}}\log \mathbb{E}{\bigl[{e}^{it\log \vert \det X_{n}^{\beta }\vert }\bigr ]}\bigr |_{t=0}\) are related to the Taylor coefficients of logM n, β via

$$\log M_{n,\beta }(s) =\displaystyle\sum _{j\geq 0}\frac{\Gamma _{j}(n,\beta )} {j!} \,{s}^{j}.$$
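In the scalar case n = 1 this relation is completely explicit: for a single real standard Gaussian Z one has \(M(s) = \mathbb{E}\vert Z{\vert }^{s} = {2}^{s/2}\Gamma ((s + 1)/2)/\Gamma (1/2)\), and differentiating \(\log M\) at s = 0 recovers the classical values \(\mathbb{E}\log \vert Z\vert = -(\gamma +\log 2)/2\) and \(\mathbb{V}(\log \vert Z\vert ) {=\pi }^{2}/8\). A minimal numerical sketch of this cumulant extraction, assuming mpmath is available:

```python
# Cumulants of log|Z| as derivatives of log M(s) at s = 0, for Z ~ N(0,1).
import mpmath as mp

def log_M(s):
    # log E|Z|^s = (s/2) log 2 + log Gamma((s+1)/2) - log Gamma(1/2)
    return s / 2 * mp.log(2) + mp.loggamma((s + 1) / 2) - mp.loggamma(mp.mpf(1) / 2)

print(mp.diff(log_M, 0, 1), -(mp.euler + mp.log(2)) / 2)  # first cumulant (mean)
print(mp.diff(log_M, 0, 2), mp.pi**2 / 8)                 # second cumulant (variance)
```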

In the literature the Mellin transform of the probability density of \(\vert \det X_{n}^{\beta }\vert\) was calculated for the Gaussian ensembles, giving an explicit formula for \(M_{n,\beta }(s)\). To be more precise, if \(g_{n,\beta }(\cdot )\) denotes the probability density of the determinant of a GOE or a GUE matrix and \(g_{n,\beta }^{+}(y) := \frac{1} {2}(g_{n,\beta }(y) + g_{n,\beta }(-y))\) denotes its even part, the Mellin transform of \(g_{n,\beta }^{+}\) is defined by

$$\mathcal{M}_{n,\beta }(s) :=\displaystyle\int _{ 0}^{\infty }{y}^{s-1}g_{ n,\beta }^{+}(y)dy.$$

For the GOE and GUE ensembles we obtain

$$\mathcal{M}_{n,\beta }(s) = \frac{1} {2}\displaystyle\int _{-\infty }^{\infty }\cdots \displaystyle\int _{ -\infty }^{\infty }P_{ n,\beta }(\lambda _{1},\ldots ,\lambda _{n})\vert \lambda _{1}\cdots \lambda _{n}{\vert }^{s-1}d\lambda _{ 1}\cdots d\lambda _{n}$$

and an obvious consequence is the relation

$$M_{n,\beta }(s) = 2\mathcal{M}_{n,\beta }(s + 1).$$
(6)

It is quite involved to calculate the Mellin transform even for the Gaussian ensembles. The case β = 1 was calculated in [15, formulas (31), (19) and (26)] (see also [17, Chap. 26.5]). Here the Mellin transform is expressed as a Pfaffian of an anti-symmetric matrix, applying the method of (skew-)orthogonal polynomials. With (6), for \(n = 2p + 1\) one obtains

$$M_{2p+1,1}(s) = {4}^{ns/2}\displaystyle\prod _{ m=1}^{n}\frac{\Gamma (\frac{s} {2} + \frac{1} {2} + b_{m}^{1})} {\Gamma (\frac{1} {2} + b_{m}^{1})}$$
(7)

with \(b_{m}^{1} := \frac{1} {2}\lfloor \frac{m-1} {2} \rfloor + \frac{1} {4}\). If n = 2p one obtains

$$M_{2p,1}(s) = {2}^{\frac{(n+1)s} {2} }F{\biggl (\frac{s + 1} {2} ,-\frac{s} {2}; \frac{n + 1 + s} {2} ; \frac{1} {2}\biggr )}\frac{\Gamma ((s + 1)/2)\Gamma ((n + 1)/2)} {\Gamma (\frac{1} {2})\Gamma ((n + 1 + s)/2)} \displaystyle\prod _{m=1}^{p}\frac{\Gamma (s + m + \frac{1} {2})} {\Gamma (m + \frac{1} {2})} ,$$
(8)

where F is the (Gauß) hypergeometric function

$$F(a,b;c;z) :=\displaystyle\sum _{ m=0}^{\infty }\frac{{(a)}^{(m)}{(b)}^{(m)}} {{(c)}^{(m)}} \frac{{z}^{m}} {m!}$$
(9)

with \({(x)}^{(m)} := x(x + 1)(x + 2)\cdots (x + m - 1)\) denoting the Pochhammer symbol. The series converges for arbitrary a, b, c and for real − 1 < z < 1. In [3], an alternative derivation of (7) and (8) is presented using terminating hypergeometric series. The case β = 2 was calculated in [18, Sect. 2]. Here knowledge of determinants and orthogonal polynomials is needed. One obtains

$$M_{n,2}(s) = {2}^{ns/2}\displaystyle\prod _{ m=1}^{n}\frac{\Gamma (\frac{s} {2} + \frac{1} {2} + b_{m}^{2})} {\Gamma (\frac{1} {2} + b_{m}^{2})}$$
(10)

with \(b_{m}^{2} = \lfloor \frac{m} {2} \rfloor\). As a consequence of (10) we obtain the following results for the cumulants \(\Gamma _{j}(n,2)\) of \(\log \vert \det X_{n}^{2}\vert\):

Lemma 2.1 (Bounds for the cumulants of \(\log \vert \det X_{n}^{2}\vert\), GUE).

For the Gaussian unitary ensemble β = 2 we obtain

$$\Gamma _{1}(n,2) = -\frac{n} {2} + \frac{n} {2} \log {\bigl (2\lfloor n/2\rfloor \bigr )} + \text{const} + O(1/n)$$

and

$$\sigma _{2}^{2} := \Gamma _{ 2}(n,2) = \frac{1} {2}\log {\bigl (2\lfloor n/2\rfloor \bigr )} + \frac{1} {2}(\gamma +\log 2 + 1) + O(1/n),$$

where γ denotes the Euler-Mascheroni constant. Moreover for any j ≥ 3 we have

$$\big\vert \Gamma _{j}(n,2)\big\vert \leq \text{const}\,j!.$$
(11)

Proof.

Let us remark that some of our calculations can be found in [15]. We work out all the details to obtain good bounds on the cumulants, which was not the aim of [15]. With \(\psi (x) := \frac{d} {dx}\log \Gamma (x)\) we denote the digamma function. From (10) we obtain

$$\Gamma _{1}(n,2) = \frac{d} {ds}\log M_{n,2}(s)\bigg\vert _{s=0} = \frac{n} {2} \log 2 + \frac{1} {2}\displaystyle\sum _{i=1}^{n}\psi (1/2 + b_{ i}^{2}).$$
(12)

For any \(n = 2k + 1\) we obtain \(\frac{1} {2}\sum _{i=1}^{n}\psi (1/2 + b_{ i}^{2}) =\sum _{ j=1}^{k}\psi (1/2 + j) + \frac{1} {2}\psi (\frac{1} {2})\) and for n = 2k we have \(\frac{1} {2}\sum _{i=1}^{n}\psi (1/2 + b_{ i}^{2}) =\sum _{ j=1}^{k}\psi (1/2 + j) + \frac{1} {2}\psi (1/2) -\frac{1} {2}\psi (\frac{n+1} {2} )\). With \(\Gamma (1 + x) = x\Gamma (x)\) it follows that \(\psi (1 + x) =\psi (x) + \frac{1} {x}\) and therefore recursively \(\psi (1/2 + j) =\psi (1/2) + 2{\biggl (\sum _{l=1}^{j} \frac{1} {2l-1}\biggr )}\), see [14, Sect. 1.3, (1.3.9)]. Using

$$2\displaystyle\sum _{j=1}^{k}\displaystyle\sum _{ l=1}^{j} \frac{1} {2l - 1}\,=\,2(k+1)\displaystyle\sum _{l=1}^{k} \frac{1} {2l - 1} -\displaystyle\sum _{l=1}^{k} \frac{2l} {2l - 1}\,=\,(2k+1){\biggl (\displaystyle\sum _{l=1}^{2k}\frac{1} {l} -\displaystyle\sum _{l=1}^{k} \frac{1} {2l}\biggr )}-k$$

we obtain \(\sum _{j=1}^{k}\psi (1/2 + j) = k\psi (1/2) - k + (2k + 1){\biggl (\sum _{l=1}^{2k}\frac{1} {l} -\sum _{l=1}^{k} \frac{1} {2l}\biggr )}\). Applying

$$\displaystyle\sum _{l=1}^{n}\frac{1} {l} =\gamma +\log n + \frac{1} {2n} + O( \frac{1} {{n}^{2}}),$$
(13)

it follows that \((2k + 1){\biggl (\sum _{l=1}^{2k}\frac{1} {l} -\sum _{l=1}^{k} \frac{1} {2l}\biggr )}=\,(2k + 1)\frac{1} {2}(\gamma +2\log 2) + (2k + 1)\frac{1} {2}\log k + O(\frac{1} {k})\). With \(\psi (1/2) = -2\log 2-\gamma\) we have

$$\displaystyle\sum _{j=1}^{k}\psi (1/2 + j) + \frac{1} {2}\psi (1/2) = -k + (k + \frac{1} {2})\log k + O(\frac{1} {k}).$$
(14)

In the case n = 2k we have to consider in addition the term \(\frac{1} {2}\psi (1/2 + k) = \frac{1} {2}\log k + O(\frac{1} {k})\). Summarizing we obtain for every n:

$$\Gamma _{1}(n,2) = -\frac{n} {2} + \frac{n} {2} \log (2k) + \text{const} + O(1/n).$$

From (10) and (12) we obtain for \(n = 2k + 1\)

$$\Gamma _{j}(n,2) = \frac{{d}^{j}} {d{s}^{j}}\log M_{n,2}(s)\bigg\vert _{s=0} = \frac{1} {{2}^{j}}{\psi }^{(j-1)}(1/2) + \frac{1} {{2}^{j-1}}\displaystyle\sum _{i=1}^{k}{\psi }^{(j-1)}(1/2 + i)$$
(15)

with the polygamma function \({\psi }^{(k)}(x) := \frac{{d}^{k}} {d{x}^{k}}\psi (x)\). For n = 2k one has to subtract from the right hand side the term \({ \frac{1} {{2}^{j}}\psi }^{(j-1)}(\frac{n+1} {2} )\). We recall the representation of \(\Gamma {(x)}^{-1}\) due to Weierstrass (see for example [14, Sect. 1.3, (1.3.17)]): \(\frac{1} {\Gamma (x)} = x{e}^{\gamma x}\prod _{ k=1}^{\infty }(1 + \frac{x} {k}){e}^{-\frac{x} {k} }\). Taking logarithms and differentiating leads to

$$\psi (x) = -\gamma -\frac{1} {x} +\displaystyle\sum _{ k=1}^{\infty }{\biggl (\frac{1} {k} - \frac{1} {x + k}\biggr )} = -\gamma +\displaystyle\sum _{ n=0}^{\infty }{\biggl ( \frac{1} {n + 1} - \frac{1} {x + n}\biggr )}.$$

Therefore one obtains

$${ \psi }^{(k)}(x) = {(-1)}^{k+1}\,k!\,\displaystyle\sum _{ n=0}^{\infty } \frac{1} {{(x + n)}^{k+1}}.$$
(16)

It follows that

$$\displaystyle\begin{array}{rcl} \displaystyle\sum _{i=1}^{k}{\psi }^{(j-1)}(1/2 + i)& =& {(-1)}^{j}\,(j - 1)!\,{2}^{j}\displaystyle\sum _{ i=1}^{k}\displaystyle\sum _{ m=i}^{\infty } \frac{1} {{(2m + 1)}^{j}} \\ & & = {(-1)}^{j}\,(j - 1)!\,{2}^{j-1}{\biggl (2\displaystyle\sum _{ i=1}^{k}\displaystyle\sum _{ m=i}^{k} \frac{1} {{(2m + 1)}^{j}} + 2\displaystyle\sum _{i=1}^{k}\displaystyle\sum _{ m=k+1}^{\infty } \frac{1} {{(2m + 1)}^{j}}\biggr )} \\ & & =: T_{1} + T_{2}.\end{array}$$

With \(2\sum _{i=1}^{k}\sum _{m=i}^{k} \frac{1} {{(2m+1)}^{j}} =\sum _{ m=1}^{k} \frac{1} {{(2m+1)}^{j-1}} -\sum _{m=1}^{k} \frac{1} {{(2m+1)}^{j}}\) we obtain

$$\displaystyle\begin{array}{rcl} T_{1}& =& {(-1)}^{j}\,(j - 1)!\,{2}^{j-1}\displaystyle\sum _{ m=0}^{k} \frac{1} {{(2m + 1)}^{j-1}} - {(-1)}^{j}\,(j - 1)!\,{2}^{j-1} \\ & & -{(-1)}^{j}\,(j - 1)!\,{2}^{j-1}\displaystyle\sum _{ m=1}^{k} \frac{1} {{(2m + 1)}^{j}}.\end{array}$$

Further we get

$$T_{2} = {(-1)}^{j}\,(j - 1)!\,{2}^{j-1}\,2k\,\displaystyle\sum _{ m=k+1}^{\infty } \frac{1} {{(2m + 1)}^{j}}.$$

Hence using (16) for ψ (j − 1) we obtain

$$\displaystyle\begin{array}{rcl} \displaystyle\sum _{i=1}^{k}{\psi }^{(j-1)}(1/2 + i)& =& {(-1)}^{j}\,(j - 1)!\,{2}^{j-1}\displaystyle\sum _{ m=0}^{k} \frac{1} {{(2m + 1)}^{j-1}} -\frac{1} {2}{\psi }^{(j-1)}{\bigl (\frac{1} {2}\bigr )} \\ & & + {(-1)}^{j}\,(j - 1)!\,{2}^{j-1}(2k + 1)\displaystyle\sum _{ m=k+1}^{\infty } \frac{1} {{(2m + 1)}^{j}}.\end{array}$$
(17)

In particular for j = 2, we have

$$\displaystyle\begin{array}{rcl} \displaystyle\sum _{i=1}^{k}{\psi }^{(1)}(1/2 + i)& =& 2{\bigl (\frac{1} {2}\log (k) + \frac{1} {2}(\gamma +2\log (2))\bigr )} -\frac{1} {2}{\psi }^{(1)}(1/2) \\ & & +\frac{1} {2}(2k + 1){\psi }^{(1)}{\bigl (k + \frac{3} {2}\bigr )} \\ & =& \log (k) +\gamma +2\log (2) -\frac{1} {2}{\psi }^{(1)}(1/2) + 1 + O{\bigl (\frac{1} {n}\bigr )}.\end{array}$$
(18)

With (15) we obtain for \(n = 2k + 1\) that

$$\Gamma _{j}(n,2)\,=\,{(-1)}^{j}(j-1)!\displaystyle\sum _{ m=0}^{k} \frac{1} {{(2m + 1)}^{j-1}}+{(-1)}^{j}(j-1)!(2k+1)\displaystyle\sum _{ m=k+1}^{\infty } \frac{1} {{(2m+1)}^{j}}.$$

The first term is \(-{2}^{1-j}{(j - 1)\psi }^{(j-2)}(\frac{1} {2}) + O(1/k)\). The second term is \({2}^{-j}{(2k + 1)\psi }^{(j-1)}(\frac{1} {2} + k + 1)\). For n = 2k we have to subtract \({2{}^{-j}\psi }^{(j-1)}(\frac{1} {2} + k)\). Finally we will apply some bounds for the polygamma functions \({\psi }^{(j)}\), based on the following integral representation (see for example [14, Sect. 1.4, (1.4.12)]):

$$\displaystyle\begin{array}{rcl} \psi (x)& =& \log (x) -\displaystyle\int _{0}^{\infty }{e}^{-tx}{\biggl (tf(t) + \frac{1} {2}\biggr )}\,dt\quad \text{with} \\ f(t)& :=& {\biggl (\frac{1} {2} -\frac{1} {t} + \frac{1} {{e}^{t} - 1}\biggr )}\,\frac{1} {t},\,\,t \geq 0. \end{array}$$
(19)

Differentiating we see that for j ≥ 1:

$${ \psi }^{(j)}(x) = {(-1)}^{j-1}j!{x}^{-j} + {(-1)}^{j-1}\displaystyle\int _{ 0}^{\infty }{e}^{-tx}{t}^{j}{\biggl (tf(t) + \frac{1} {2}\biggr )}\,dt.$$
(20)

Notice that \(0 <{\bigl ( tf(t) + \frac{1} {2}\bigr )} < 1\) for every t ≥ 0; hence we obtain for every x > 0 and every j ≥ 1:

$${\vert \psi }^{(j)}(x)\vert \leq j!{x}^{-j} + j!{x}^{-j-1}.$$
(21)
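The bound (21) is the workhorse of this and the following proofs; a quick numerical sanity check over an arbitrary grid of test points, assuming mpmath:

```python
# Numerical sanity check of the bound (21): |psi^(j)(x)| <= j! x^-j + j! x^-(j+1).
import mpmath as mp
from math import factorial

for j in (1, 2, 5):
    for x in (0.5, 0.75, 3.0, 50.0):
        lhs = abs(mp.polygamma(j, x))
        rhs = factorial(j) * (x**(-j) + x**(-j - 1))
        assert lhs <= rhs, (j, x)
print("bound (21) holds on the tested grid")
```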

Let us consider the variance \(\sigma _{2}^{2} = \Gamma _{2}(n,2)\). With (21) we have \({\vert \psi }^{(1)}(1/2 + k)\vert \leq {(\frac{1} {2} + k)}^{-1} + {(\frac{1} {2} + k)}^{-2}\). Hence we have \(\sigma _{2}^{2} = \frac{1} {2}\sum _{i=1}^{k}{\psi }^{(1)}(1/2 + i) + \frac{1} {4}{\psi }^{(1)}(1/2) + O(1/k)\) and with (18) we obtain

$$\sigma _{2}^{2} = \frac{1} {2}\log k + \frac{1} {2}(\gamma +2\log 2 + 1) + O(1/k).$$

For j ≥ 3 the cumulants can be bounded as follows: with (21) we obtain

$$\displaystyle\begin{array}{rcl} \vert \Gamma _{j}(n,2)\vert & \leq & \bigg\vert {2}^{1-j}{(j - 1)\psi }^{(j-2)}(1/2)\bigg\vert +\bigg \vert {2}^{-j}{(2k + 1)\psi }^{(j-1)}(1/2 + k + 1)\bigg\vert \\ & +& \bigg\vert {2}^{-j}{\psi }^{(j-1)}(1/2 + k)\bigg\vert + O(1/k) \\ & \leq & 6(j\,-\,1)!+\,\text{const}{\biggl (\frac{(j\,-\,1)!} {{2}^{j-1}} \, \frac{1} {{k}^{j-2}}\,+\,\frac{(j\,-\,1)!} {{2}^{j-1}} \, \frac{1} {{k}^{j-1}}\biggr )}\leq \,\text{const}(j\,-\,1)!.\end{array}$$

Therefore the cumulants satisfy the stated bounds. □ 
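The asymptotics of Lemma 2.1 can be checked directly against the exact formula (10). The following sketch (mpmath assumed; the matrix size is an arbitrary choice) obtains \(\Gamma _{1}\) and \(\Gamma _{2}\) by numerical differentiation of \(\log M_{n,2}\) at s = 0:

```python
# Numerical check of Lemma 2.1: cumulants from the exact formula (10).
import mpmath as mp

def log_M(s, n):
    out = n * s / 2 * mp.log(2)
    for m in range(1, n + 1):
        b = m // 2                      # b_m^2 = floor(m/2)
        out += mp.loggamma(s / 2 + mp.mpf(1) / 2 + b) - mp.loggamma(mp.mpf(1) / 2 + b)
    return out

n = 2000
g1 = mp.diff(lambda s: log_M(s, n), 0, 1)
g2 = mp.diff(lambda s: log_M(s, n), 0, 2)
# Gamma_1 = (n/2)log(2*floor(n/2)) - n/2 + const + O(1/n): difference stays bounded
print(g1 - (n / 2 * mp.log(2 * (n // 2)) - n / 2))
# Gamma_2 = (1/2)log(2*floor(n/2)) + (gamma + log 2 + 1)/2 + O(1/n): difference -> 0
print(g2 - (mp.log(2 * (n // 2)) / 2 + (mp.euler + mp.log(2) + 1) / 2))
```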

With some more technical effort we obtain similar results for the Gaussian orthogonal ensemble:

Lemma 2.2 (Bounds for the cumulants of \(\log \vert \det X_{n}^{1}\vert\), GOE).

For the Gaussian orthogonal ensemble (β = 1) we obtain

$$\Gamma _{1}(n,1) = \frac{n} {2} \log {\bigl (2\lfloor n/2\rfloor \bigr )}-\frac{n} {2} + \text{const} + O(1/n)$$

and

$$\sigma _{1}^{2} := \Gamma _{ 2}(n,1) =\log {\bigl ( 2\lfloor n/2\rfloor \bigr )} +\gamma +\log 2 + 1 - 2K + O(1/n),$$

where K denotes Catalan’s constant \(K =\sum _{ m=0}^{\infty } \frac{{(-1)}^{m}} {{(2m+1)}^{2}}\), and for any j ≥ 3

$$\big\vert \Gamma _{j}(n,1)\big\vert \leq \text{const}\,j!.$$

Proof.

For β = 1 and n = 2k + 1, formula (7) for the Mellin transform implies

$$\displaystyle\begin{array}{rcl} \Gamma _{1}(n,1)& =& \frac{d} {ds}\log M_{n,1}(s)\big\vert _{s=0} = \frac{n} {2} \log (4) + \frac{1} {2}\displaystyle\sum _{i=1}^{n}\psi {\bigl (\frac{1} {2} + \frac{1} {2}{\bigl \lfloor\frac{i - 1} {2} \bigr \rfloor} + \frac{1} {4}\bigr )} \\ & =& n\log (2) +\displaystyle\sum _{ i=0}^{k-1}\psi {\bigl (\frac{3} {4} + \frac{i} {2}\bigr )} + \frac{1} {2}\psi {\bigl (\frac{3} {4} + \frac{k} {2}\bigr )} \\ & =& n\log (2) + \frac{1} {2}\psi {\bigl (\frac{3} {4}\bigr )} +\displaystyle\sum _{ i=1}^{k}{\biggl (\frac{1} {2}\psi {\bigl (\frac{3} {4} + \frac{i - 1} {2} \bigr )} + \frac{1} {2}\psi {\bigl (\frac{3} {4} + \frac{i} {2}\bigr )}\biggr )}.\end{array}$$

The last transformation is useful since we are now able to apply Legendre’s duplication formula \(\Gamma (z)\Gamma (z + 1/2) = {2}^{1-2z}\sqrt{\pi }\Gamma (2z)\) (see for example [14, Sect. 1.2]). This implies

$$\frac{1} {2}\psi (z) + \frac{1} {2}\psi {\bigl (z + \frac{1} {2}\bigr )} =\psi (2z) -\log (2).$$
(22)

With z = 3 ∕ 4 + i ∕ 2 − 1 ∕ 2 we obtain

$$\Gamma _{1}(n,1) = n\log (2) + \frac{1} {2}\psi {\bigl (\frac{3} {4}\bigr )} +\displaystyle\sum _{ i=1}^{k}\psi {\bigl (1/2 + i\bigr )} - k\log (2).$$
(23)

The summand \(\frac{1} {2}\psi {\bigl (\frac{3} {4}\bigr )}\) equals via the same identity \(\psi {\bigl (\frac{1} {2}\bigr )} -\log (2) -\frac{1} {2}\psi {\bigl (\frac{1} {4}\bigr )} =\psi {\bigl ( \frac{1} {2}\bigr )} -\log (2) + \frac{\pi } {4} + \frac{3} {2}\log (2) + \frac{1} {2}\gamma = \frac{\pi } {4} -\frac{3} {2}\log (2) -\frac{1} {2}\gamma\). As in the GUE case, we have \(\sum _{i=1}^{k}\psi {\bigl (1/2 + i\bigr )} = -\frac{1} {2}\psi {\bigl (\frac{1} {2}\bigr )} - k +{\bigl ( k + \frac{1} {2}\bigr )}\log (k) + O{\bigl (\frac{1} {k}\bigr )}\), see (14). Now (23) implies that

$$\Gamma _{1}(n,1) = \frac{n} {2} \log (n - 1) -\frac{n} {2} + \frac{\pi +2} {4} + O{\bigl (\frac{1} {n}\bigr )}.$$

The jth cumulant, j ≥ 2, is given by

$$\displaystyle\begin{array}{rcl} \Gamma _{j}(n,1)& =& \frac{{d}^{j}} {d{s}^{j}}\log M_{n,1}(s)\big\vert _{s=0} = \frac{1} {{2}^{j}}\displaystyle\sum _{i=1}^{n}{\psi }^{(j-1)}{\bigl (\frac{1} {2} + \frac{1} {2}{\bigl \lfloor\frac{i - 1} {2} \bigr \rfloor} + \frac{1} {4}\bigr )} \\ & =& \frac{1} {{2}^{j-1}}\displaystyle\sum _{i=0}^{k-1}{\psi }^{(j-1)}{\bigl (\frac{3} {4} + \frac{i} {2}\bigr )} + \frac{1} {{2}^{j}}{\psi }^{(j-1)}{\bigl (\frac{3} {4} + \frac{k} {2}\bigr )}.\end{array}$$

Differentiating (22) implies \({\psi }^{(j-1)}(2z) ={ \frac{1} {{2}^{j}}\psi }^{(j-1)}(z) +{ \frac{1} {{2}^{j}}\psi }^{(j-1)}{\bigl (z + \frac{1} {2}\bigr )}\) and therefore

$$\Gamma _{j}(n,1) = \frac{1} {{2}^{j}}{\psi }^{(j-1)}{\bigl (\frac{3} {4}\bigr )} +\displaystyle\sum _{ i=1}^{k}{\psi }^{(j-1)}{\bigl (1/2 + i\bigr )}$$
(24)

holds. The duplication formula for \(z = \frac{1} {4}\) implies \({\frac{1} {4}\psi }^{(1)}{\bigl (\frac{3} {4}\bigr )} {=\psi }^{(1)}{\bigl (\frac{1} {2}\bigr )} -{\frac{1} {4}\psi }^{(1)}{\bigl (\frac{1} {4}\bigr )},\) where \({\psi }^{(1)}{\bigl (\frac{1} {4}\bigr )} = 16\sum _{m=0}^{\infty } \frac{1} {{(4m+1)}^{2}} = 8\sum _{m=0}^{\infty }{\bigl ( \frac{1} {{(2m+1)}^{2}} + \frac{{(-1)}^{m}} {{(2m+1)}^{2}} \bigr )} = 2\sum _{m=0}^{\infty } \frac{1} {{(m+\frac{1} {2} )}^{2}} + 8\sum _{m=0}^{\infty } \frac{{(-1)}^{m}} {{(2m+1)}^{2}} = {2\psi }^{(1)}{\bigl (\frac{1} {2}\bigr )} + 8K\) with Catalan’s constant K, resulting in \(\frac{1} {4}{\psi }^{(1)}{\bigl (\frac{3} {4}\bigr )} = \frac{{\pi }^{2}} {4} - 2K\). With (24) and (18) we can conclude

$$\displaystyle\begin{array}{rcl} \Gamma _{2}(n,1)& =& \frac{1} {4}{\psi }^{(1)}{\bigl (\frac{3} {4}\bigr )} +\displaystyle\sum _{ i=1}^{k}{\psi }^{(1)}{\bigl (1/2 + i\bigr )} \\ & =& \frac{{\pi }^{2}} {4} - 2K +\log (k) +\gamma +2\log (2) -\frac{{\pi }^{2}} {4} + 1 + O{\bigl (\frac{1} {n}\bigr )} \\ & =& \log {\bigl (2\lfloor n/2\rfloor \bigr )} +\gamma +\log 2 + 1 - 2K + O{\bigl (\frac{1} {n}\bigr )}.\end{array}$$
(25)

For every j ≥ 3, the first summand can be bounded using (21)

$${\bigl |{ \frac{1} {{2}^{j}}\psi }^{(j-1)}{\bigl (\frac{3} {4}\bigr )}\bigr |} \leq (j - 1)!{\bigl ({\frac{2} {3}\bigr )}}^{j-1} + (j - 1)!2{\bigl ({\frac{2} {3}\bigr )}}^{j} = (j - 1)!\frac{7} {3}{\bigl ({\frac{2} {3}\bigr )}}^{j-1},$$

and the remaining sum in (24) is the same as in the GUE case: With (17) we have

$$\displaystyle\begin{array}{rcl} & & \displaystyle\sum _{i=1}^{k}{\psi }^{(j-1)}{\bigl (1/2 + i\bigr )} + \frac{1} {2}{\psi }^{(j-1)}{\bigl (\frac{1} {2}\bigr )} \\ & & \quad = -2{(j - 1)\psi }^{(j-2)}{\bigl (\frac{1} {2}\bigr )} + {(2k + 1)\psi }^{(j-1)}{\bigl (1/2 + k + 1\bigr )} + O{\bigl (\frac{1} {k}\bigr )}.\end{array}$$

Applying (21) we obtain \({\biggl |\sum _{i=1}^{k}{\psi }^{(j-1)}(1/2 + i) + \frac{1} {2}{\psi }^{(j-1)}{\bigl (\frac{1} {2}\bigr )}\biggr |} \leq \text{const}(j - 1)!,\) which implies the bound for the jth cumulant, j ≥ 3.

In the case of n = 2k even, we have to study the asymptotic behaviour of the hypergeometric function (see (9)): \(F{\biggl (\frac{s+1} {2} ,-\frac{s} {2}; \frac{n+1+s} {2} ; \frac{1} {2}\biggr )} := 1 +\sum _{ m=1}^{\infty }x_{ m}\), where \(x_{m} := \frac{{\bigl ({\frac{1+s} {2} \bigr )}}^{(m)}{\bigl ( -{\frac{s} {2}\bigr )}}^{(m)}} {{\bigl ({\frac{n+1+s} {2} \bigr )}}^{(m)}} \frac{1} {{2}^{m}m!}\). Each \(x_{m}\) is of order \(O({n}^{-m})\) and, for s ∈ [0, 2) and n large enough, the sum \(\sum _{m=1}^{\infty }x_{m}\) takes values in the interval ( − 1, 1). Therefore we can study the power series of the logarithm and get

$$\displaystyle\begin{array}{rcl} \log F{\biggl (\frac{s + 1} {2} ,-\frac{s} {2}; \frac{n + 1 + s} {2} ; \frac{1} {2}\biggr )}& =& \log {\biggl (1 +\displaystyle\sum _{ m=1}^{\infty }x_{ m}\biggr )} \\ & =& \displaystyle\sum _{m=1}^{\infty }x_{ m} +\displaystyle\sum _{ l=2}^{\infty }{(-1)}^{l}\frac{1} {l} {\bigl (\displaystyle\sum _{m=1}^{\infty }x{_{ m}\bigr )}}^{l}.\end{array}$$

We differentiate each \(x_{m}\) via the quotient rule, applying the product rule in the numerator. Setting s = 0, the only surviving term in the numerator is the one in which the factor \(-\frac{s} {2}\) itself is differentiated, since every other term retains a factor that vanishes at s = 0; moreover the square of the denominator cancels out. The derivative of \(x_{m}\) therefore equals a constant times \(\frac{1} {{2}^{m}m!} \frac{1} {{\bigl ({\frac{n+1} {2} \bigr )}}^{(m)}}\). It follows that the sum over l is of order \(O({n}^{-1})\), too. Similarly we obtain that for every j ≥ 1

$$\frac{{d}^{j}} {d{s}^{j}}\log F{\biggl (\frac{s + 1} {2} ,-\frac{s} {2}; \frac{n + 1 + s} {2} ; \frac{1} {2}\biggr )}\bigg\vert _{s=0} = O{\bigl (1/n\bigr )}.$$

Thus with (8) and (14) it follows that

$$\displaystyle\begin{array}{rcl} \Gamma _{1}(n,1)& =& \frac{n + 1} {2} \log (2) + \frac{d} {ds}\log F{\biggl (\frac{s + 1} {2} ,-\frac{s} {2}; \frac{n + 1 + s} {2} ; \frac{1} {2}\biggr )}\bigg\vert _{s=0} \\ & & +\frac{1} {2}\psi {\bigl (\frac{1} {2}\bigr )} -\frac{1} {2}\psi {\bigl (\frac{n + 1} {2} \bigr )} +\displaystyle\sum _{ m=1}^{k}\psi {\bigl (1/2 + m\bigr )} \\ & =& \frac{n + 1} {2} \log (2) + O{\bigl (\frac{1} {n}\bigr )} + \frac{1} {2}\psi {\bigl (\frac{1} {2}\bigr )} -\frac{1} {2}\psi {\bigl (\frac{n + 1} {2} \bigr )} \\ & & -\frac{1} {2}\psi {\bigl (\frac{1} {2}\bigr )} -\frac{n} {2} + \frac{n + 1} {2} \log {\bigl (\frac{n} {2} \bigr )} \\ & =& \frac{n} {2} \log (n) -\frac{n} {2} + \frac{1} {2}\log (2) + O{\bigl (1/n\bigr )} \\ \end{array}$$

and by (17)

$$\displaystyle\begin{array}{rcl} \Gamma _{j}(n,1)& =& \frac{{d}^{j}} {d{s}^{j}}\log F{\biggl (\frac{s + 1} {2} ,-\frac{s} {2}; \frac{n + 1 + s} {2} ; \frac{1} {2}\biggr )}\bigg\vert _{s=0} + \frac{1} {{2}^{j}}{\psi }^{(j-1)}{\bigl (\frac{1} {2}\bigr )} \\ & & -\frac{1} {{2}^{j}}{\psi }^{(j-1)}{\bigl (\frac{n + 1} {2} \bigr )} +\displaystyle\sum _{ m=1}^{k}{\psi }^{(j-1)}{\bigl (1/2 + m\bigr )} \\ & =& \frac{1} {{2}^{j}}{\psi }^{(j-1)}{\bigl (\frac{1} {2}\bigr )} - \frac{1} {{2}^{j}}{\psi }^{(j-1)}{\bigl (\frac{n + 1} {2} \bigr )} +\displaystyle\sum _{ m=1}^{k}{\psi }^{(j-1)}{\bigl (1/2 + m\bigr )} + O{\bigl (1/n\bigr )}.\end{array}$$

Note that the only new summand compared to the case \(n = 2k + 1\), see (24), is \({ \frac{1} {{2}^{j}}\psi }^{(j-1)}{\bigl (\frac{n+1} {2} \bigr )}\), which is of order O(1∕n). Therefore the second and higher cumulants satisfy the stated bounds for all n. □
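As in the GUE case, the asymptotics can be checked against the closed form, here formula (7) for odd n. A sketch assuming mpmath (the odd matrix size is an arbitrary choice); note how Catalan's constant enters the variance:

```python
# Numerical check of Lemma 2.2 for odd n: cumulants from formula (7).
import mpmath as mp

def log_M(s, n):                                      # n odd
    out = n * s / 2 * mp.log(4)
    for m in range(1, n + 1):
        b = mp.mpf((m - 1) // 2) / 2 + mp.mpf(1) / 4  # b_m^1
        out += mp.loggamma(s / 2 + mp.mpf(1) / 2 + b) - mp.loggamma(mp.mpf(1) / 2 + b)
    return out

n = 2001
g2 = mp.diff(lambda s: log_M(s, n), 0, 2)
pred = mp.log(2 * (n // 2)) + mp.euler + mp.log(2) + 1 - 2 * mp.catalan
print(g2, pred)                                       # agree up to O(1/n)
```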

Good estimates on cumulants imply asymptotic results for the log-determinants of the GUE and GOE ensembles, respectively. Before we state our results, we remind the reader of Cramér-type moderate deviations and of the moderate deviation principle. The classical result due to Cramér is the following. For independent and identically distributed random variables \(X_{1},\ldots ,X_{n}\) with \(\mathbb{E}(X_{1}) = 0\) and \(\mathbb{E}(X_{1}^{2}) = 1\) such that \(\mathbb{E}{e}^{t_{0}\vert X_{1}\vert }\leq c < \infty \) for some \(t_{0} > 0\), the following expansion for tail probabilities can be proved:

$$\frac{P(W_{n} > x)} {1 - \Phi (x)} = 1 + O(1)(1 + {x}^{3})/\sqrt{n}$$

for \(0 \leq x \leq {n}^{1/6}\), with \(W_{n} := (X_{1} + \cdots + X_{n})/\sqrt{n}\), \(\Phi \) the standard normal distribution function, and O(1) depending on c and \(t_{0}\). This result is sometimes called a large deviations relation. Let us recall the definition of a large deviation principle (LDP) due to Varadhan, see for example [5]. A sequence of probability measures \(\{(\mu _{n}),n \in \mathbb{N}\}\) on a topological space \(\mathcal{X}\) equipped with a \(\sigma\)-field \(\mathcal{B}\) is said to satisfy the LDP with speed \(s_{n} \nearrow \infty \) and good rate function I( ⋅) if the level sets \(\{x : I(x) \leq \alpha \}\) are compact for all \(\alpha \in [0,\infty )\) and for all \(\Gamma \in \mathcal{B}\) the lower bound

$$\liminf _{n\rightarrow \infty } \frac{1} {s_{n}}\log \mu _{n}(\Gamma ) \geq -\inf _{x\in \text{int}(\Gamma )}I(x)$$

and the upper bound

$$\limsup _{n\rightarrow \infty } \frac{1} {s_{n}}\log \mu _{n}(\Gamma ) \leq -\inf _{x\in \text{cl}(\Gamma )}I(x)$$

hold. Here \(\text{int}(\Gamma )\) and \(\text{cl}(\Gamma )\) denote the interior and closure of \(\Gamma \), respectively. We say a sequence of random variables satisfies the LDP when the sequence of measures induced by these variables satisfies the LDP. Formally, a moderate deviation principle is nothing but an LDP. However, we speak of a moderate deviation principle (MDP) for a sequence of random variables whenever the scaling of the corresponding random variables is between that of an ordinary Law of Large Numbers and that of a Central Limit Theorem.

We consider

$$W_{n,\beta } := \frac{\log \vert \det X_{n}^{\beta }\vert - \Gamma _{1}(n,\beta )} {\sigma _{\beta }} \quad \text{for}\quad \beta = 1,2$$
(26)

as well as

$$\widetilde{W}_{n,\beta } := \frac{\log \vert \det X_{n}^{\beta }\vert -\frac{n} {2} \log n + \frac{n} {2} } {\sqrt{\frac{1} {\beta } \log n}} \quad \text{for}\quad \beta = 1,2.$$
(27)

Theorem 2.3.

For β = 1,2 we can prove:

  (1)

    Cramér-type moderate deviations: There exist two constants \(C_{1}\) and \(C_{2}\) depending on β such that the following inequalities hold true:

    $$\bigg\vert \log \frac{P(W_{n,\beta } \geq x)} {1 - \Phi (x)} \bigg\vert \leq C_{2}\frac{1 + {x}^{3}} {\sigma _{\beta }}$$

    and

    $$\bigg\vert \log \frac{P(W_{n,\beta } \leq -x)} {\Phi (-x)} \bigg\vert \leq C_{2}\frac{1 + {x}^{3}} {\sigma _{\beta }}$$

    for all \(0 \leq x \leq C_{1}\sigma _{\beta }\). In all cases \(\sigma _{\beta }\) is of order \(\sqrt{\log n}\).

  (2)

    Berry-Esseen bounds: We obtain the following bounds:

    $$\displaystyle\begin{array}{rcl} \sup _{x\in \mathbb{R}}\big\vert P(W_{n,\beta } \leq x) - \Phi (x)\big\vert &\leq & C(\beta )({\log n)}^{-1/2}, \\ \sup _{x\in \mathbb{R}}\big\vert P(\widetilde{W}_{n,\beta } \leq x) - \Phi (x)\big\vert &\leq & C(\beta )({\log n)}^{-1/2}.\end{array}$$
  (3)

    Moderate deviations principle: For any sequence \((a_{n})_{n}\) of real numbers such that \(1 \ll a_{n} \ll \sigma _{\beta }\) the sequences \({\bigl ( \frac{1} {a_{n}}W_{n,\beta }\bigr )}_{n}\) and \({\bigl ( \frac{1} {a_{n}}\widetilde{W}_{n,\beta }\bigr )}_{n}\) satisfy an MDP with speed \(a_{n}^{2}\) and rate function \(I(x) = \frac{{x}^{2}} {2}\), respectively.

Remark 2.4.

The Berry-Esseen bound implies the Central Limit Theorem stated in (4). The statement of the central limit theorem in [15] was given differently. In Section 3 of [15], a variance of order \({2\sigma }^{2} = \frac{1} {\beta n}\) is considered, meaning that the spectrum of the GUE model is concentrated on a finite interval (the support of the semicircular law). Then D is the determinant of the rescaled (!) GUE model, giving a \(\frac{n} {2} \log n + n\log 2\) summand in addition to the expectation \(-n(\frac{1} {2} +\log 2) + O( \frac{1} {n})\) stated in [15, (43)]. This is actually the expectation in (4). Choosing the variance \({\sigma }^{2} = \frac{1} {4n}\) in the case β = 2 means that each matrix entry \(\zeta _{ij}\) is rescaled to \(\zeta _{ij}/(2\sqrt{n})\), and hence the determinant of the matrix \(X_{n}^{2}\) is \({2}^{n}{n}^{n/2}\) times the determinant of the rescaled matrix.

Proof.

With the bound on the cumulants (11) we obtain that \(\big\vert \Gamma _{j}(W_{n,2})\big\vert \leq 7 \frac{j!} {\sigma _{2}^{j}}\). With \(\sigma _{2}^{2} \geq \frac{1} {2}(\gamma +2\log 2 + 1)\) we get

$$\big\vert \Gamma _{j}(W_{n,2})\big\vert \leq j! \frac{1} {\sigma _{2}^{j-2}} \frac{7 \cdot 2} {(\gamma +2\log 2 + 1)} \leq j! \frac{1} {\sigma _{2}^{j-2}}5 \leq j!{\Bigl ({\frac{5} {\sigma _{2}} \Bigr )}}^{j-2} \leq \frac{j!} {{\Delta }^{j-2}}$$

with \(\Delta =\sigma _{2}/5\) for all n ≥ 2. With Lemma 2.3 in [22] one obtains

$$\frac{P{\bigl (W_{n,2} \geq x\bigr )}} {1 - \Phi (x)} =\exp (L(x)){\biggl (1 + q_{1}\phi (x)\frac{x + 1} {\Delta _{1}} \biggr )}$$

and

$$\frac{P{\bigl (W_{n,2} \leq -x\bigr )}} {\Phi (-x)} =\exp (L(-x)){\biggl (1 + q_{2}\phi (x) \frac{x + 1} {\sqrt{2}\Delta _{1}}\biggr )}$$

for \(0 \leq x \leq \Delta _{1}\), where \(\Delta _{1} = \sqrt{2}\Delta /36\),

$$\phi (x) = \frac{60{\bigl (1 + 10\Delta _{1}^{2}\exp {\bigl ( - (1 - x/\Delta _{1})\sqrt{\Delta _{1}}\bigr )}\bigr )}} {1 - x/\Delta _{1}} ,$$

where \(q_{1},q_{2}\) are constants in the interval [ − 1, 1] and L is a function defined in [22, Lemma 2.3, (2.8)] satisfying \(\big\vert L(x)\big\vert \leq \frac{\vert x{\vert }^{3}} {3\Delta _{1}} \quad \text{for all}\,\,x\,\,\text{with}\,\,\vert x\vert \leq \Delta _{1}\). The Cramér-type moderate deviations follow by applying [7, Lemma 6.2]. The Berry-Esseen bound follows from [22, Lemma 2.1], which gives

$$\sup _{x\in \mathbb{R}}\big\vert P{\bigl (W_{n,2} \leq x\bigr )} - \Phi (x)\big\vert \leq \frac{18} {\Delta _{1}} = \text{const} \frac{1} {{(\log n)}^{1/2}}.$$

The same Berry-Esseen bound follows using the asymptotic behaviour of the first two moments. Finally the MDP follows from [6, Theorem 1.1], which is an MDP for \({\bigl ( \frac{1} {a_{n}}W_{n,2}\bigr )}_{n}\) for any sequence \((a_{n})_{n}\) of real numbers growing to infinity slowly enough that \(a_{n}/\Delta \rightarrow 0\) as \(n \rightarrow \infty\). Moreover \({\bigl ( \frac{1} {a_{n}}W_{n,2}\bigr )}_{n}\) and \({\bigl ( \frac{1} {a_{n}}\widetilde{W}_{n,2}\bigr )}_{n}\) are exponentially equivalent in the sense of [5, Definition 4.2.10]: with \(\hat{W}_{n,2} := \frac{\log \vert \det X_{n}^{2}\vert -\frac{n} {2} \log n+\frac{n} {2} } {\sigma _{2}}\) we have that \(\vert W_{n,2} -\hat{ W}_{n,2}\vert \rightarrow 0\) as \(n \rightarrow \infty\), and it follows that \({\bigl ( \frac{1} {a_{n}}\hat{W}_{n,2}\bigr )}_{n}\) and \({\bigl ( \frac{1} {a_{n}}W_{n,2}\bigr )}_{n}\) are exponentially equivalent. By Taylor expansion we have \(\big\vert \frac{1} {a_{n}}(\hat{W}_{n,2} -\widetilde{ W}_{n,2})\big\vert = o(1)\,\hat{W}_{n,2}\) and hence the result follows with [5, Theorem 4.2.13]. □
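A simulation illustrating the statement for \(\widetilde{W}_{n,2}\): the sketch below assumes numpy and uses the GUE convention of this section (standard complex Gaussians above the diagonal, standard real Gaussians on it). Since the variance grows only like log n, in line with the Berry-Esseen rate \({(\log n)}^{-1/2}\) the agreement with the standard normal is rough:

```python
# Monte Carlo sketch for tilde W_{n,2} of (27); numpy assumed.
import numpy as np
from math import log, sqrt

rng = np.random.default_rng(0)
n, trials = 200, 1000
center = 0.5 * n * log(n) - 0.5 * n
scale = sqrt(0.5 * log(n))
samples = []
for _ in range(trials):
    Z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / sqrt(2)
    H = np.triu(Z, 1)
    H = H + H.conj().T + np.diag(rng.standard_normal(n))  # GUE matrix X_n^2
    _, logabsdet = np.linalg.slogdet(H)
    samples.append((logabsdet - center) / scale)
samples = np.array(samples)
print(samples.mean(), samples.var())  # mean near 0; variance tends to 1 at log-rate only
```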

Next we will consider the following class of random matrices. Consider two independent families of i.i.d. random variables \((Z_{i,j})_{1\leq i<j}\) (complex-valued) and \((Y _{i})_{1\leq i}\) (real-valued) with zero mean, such that \(\mathbb{E}Z_{1,2}^{2} = 0\), \(\mathbb{E}\vert Z_{1,2}{\vert }^{2} = 1\) and \(\mathbb{E}Y _{1}^{2} = 1\). Consider the (Hermitian) \(n \times n\) matrix \(M_{n}\) with entries \(M_{n}^{{\ast}}(j,i) = M_{n}(i,j) = Z_{i,j}\) for i < j and \(M_{n}^{{\ast}}(i,i) = M_{n}(i,i) = Y _{i}\). Such a matrix is called a Hermitian Wigner matrix. The GUE matrices are the special case with complex Gaussian random variables \(N(0,1)_{\mathbb{C}}\) in the upper triangle and real Gaussian random variables \(N(0,1)_{\mathbb{R}}\) on the diagonal.

We say that a Wigner Hermitian matrix obeys Condition (C 1) for some constant C if one has

$$\mathbb{E}\vert Z_{i,j}{\vert }^{C} \leq C_{ 1}\quad \text{and}\quad \mathbb{E}\vert Y _{i}{\vert }^{C} \leq C_{ 1}$$
(28)

for some constant \(C_{1}\) independent of n. Two Wigner Hermitian matrices \(M_{n} = (\zeta _{i,j})_{1\leq i,j\leq n}\) and \(M_{n}^\prime = (\zeta _{i,j}^\prime )_{1\leq i,j\leq n}\) match to order m off the diagonal and to order k on the diagonal if one has

$$\mathbb{E}({(\text{Re}(\zeta _{i,j}))}^{a}{(\text{Im}(\zeta _{ i,j}))}^{b}) = \mathbb{E}({(\text{Re}(\zeta _{ i,j}^\prime ))}^{a}{(\text{Im}(\zeta _{ i,j}^\prime ))}^{b})$$

for all 1 ≤ i ≤ j ≤ n and natural numbers a, b ≥ 0 with a + b ≤ m for i < j and a + b ≤ k for i = j.

Applying [26, Theorem 5], the Four Moment Theorem for the determinant, we are able to prove an MDP for the log-determinant even for a class of Wigner Hermitian matrices. For any Wigner Hermitian matrix \(M_{n}\) consider

$$W_{n} := \frac{\log \vert \det M_{n}\vert -\frac{1} {2}\log n! + \frac{1} {4}\log n} {\sqrt{\frac{1} {2}\log n}}.$$

Theorem 2.5 (Universal moderate deviations principle). 

Let M n be a Wigner Hermitian matrix whose atom distributions are independent of n, have real and imaginary parts that are independent and match GUE to fourth order and obey Condition (C 1), (28) , for some sufficiently large C, then for any sequence (a n)n of real numbers such that \(1 \ll a_{n} \ll \sqrt{\log n}\) the sequence \({\bigl ( \frac{1} {a_{n}}W_{n}\bigr )}_{n}\) satisfies a MDP with speed a n 2 and rate function \(I(x) = \frac{{x}^{2}} {2}\) . If M n matches GOE instead of GUE, then one instead has that \({\bigl ( \frac{\sqrt{\frac{1} {2} \log n}} {a_{n}\sqrt{\log n}}W_{n}\bigr )}_{n}\) satisfies the MDP with same speed and rate function.

Proof.

Let \(M_{n}\) be the Wigner Hermitian matrix whose entries satisfy the conditions of the theorem and let \(M_{n}^\prime\) denote the GUE matrix. Then [26, Theorem 5] says that there exists a small \(c_{0} > 0\) such that for all \(G : \mathbb{R} \rightarrow \mathbb{R}_{+}\) with \(\big\vert \frac{{d}^{j}} {d{x}^{j}}G(x)\big\vert = O({n}^{c_{0}})\) for j = 0,…,5, we have

$$\big\vert \mathbb{E}{\bigl (G(\log \vert \det (M_{n})\vert )\bigr )} - \mathbb{E}{\bigl (G(\log \vert \det (M_{n}^\prime )\vert )\bigr )}\big\vert \leq {n}^{-c_{0} }.$$

We consider for any \(b,c \in \mathbb{R}\) the interval \(I_{n} := [b_{n},c_{n}]\) with

$$b_{n}\,:=\,b\,a_{n}\sqrt{\frac{1} {2}\log n} + \frac{1} {2}\log n! -\frac{1} {4}\log n\,\,\text{and}\,\,c_{n}\,:=\,c\,a_{n}\sqrt{\frac{1} {2}\log n} + \frac{1} {2}\log n! -\frac{1} {4}\log n.$$

With \(I_{n}^{+} := [b_{n} - {n}^{-c_{0}/10},c_{n} + {n}^{-c_{0}/10}]\) and \(I_{n}^{-} := [b_{n} + {n}^{-c_{0}/10},c_{n} - {n}^{-c_{0}/10}]\) we construct a bump function \(G_{n} : \mathbb{R} \rightarrow \mathbb{R}_{+}\) which is equal to one on the interval \(I_{n}\) and vanishes outside the larger interval \(I_{n}^{+}\). It follows that \(P(\log \vert \det (M_{n})\vert \in I_{n}) \leq \mathbb{E}G_{n}(\log \vert \det (M_{n})\vert )\) and \(\mathbb{E}G_{n}(\log \vert \det (M_{n}^\prime )\vert ) \leq P(\log \vert \det (M_{n}^\prime )\vert \in I_{n}^{+})\). One can choose \(G_{n}\) to satisfy the condition \(\big\vert \frac{{d}^{j}} {d{x}^{j}}G_{n}(x)\big\vert = O({n}^{c_{0}})\) for j = 0,…,5 and hence

$$P(\log \vert \det (M_{n})\vert \in I_{n}) \leq P(\log \vert \det (M_{n}^\prime )\vert \in I_{n}^{+}) + {n}^{-c_{0} }.$$
(29)

By the same argument, now with a bump function which is equal to one on the smaller interval \(I_{n}^{-}\) and vanishes outside \(I_{n}\), we get

$$P(\log \vert \det (M_{n}^\prime )\vert \in I_{n}^{-}) - {n}^{-c_{0} } \leq P(\log \vert \det (M_{n})\vert \in I_{n}).$$
(30)

Note that \(P{\bigl ( \frac{1} {a_{n}}W_{n} \in [b,c]\bigr )} = P{\bigl (\log \vert \det (M_{n})\vert \in I_{n}\bigr )}\). With (29) and [5, Lemma 1.2.15] we see that

$$\displaystyle\begin{array}{rcl} & & \limsup _{n\rightarrow \infty } \frac{1} {a_{n}^{2}}\log P{\bigl (W_{n}/a_{n} \in [b,c]\bigr )} \\ & & \quad \leq \max {\biggl (\limsup _{n\rightarrow \infty } \frac{1} {a_{n}^{2}}\log P(\log \vert \det (M_{n}^\prime )\vert \in I_{n}^{+});\limsup _{ n\rightarrow \infty } \frac{1} {a_{n}^{2}}\log {n}^{-c_{0} }\biggr )}.\end{array}$$

For the first object we have

$$\displaystyle\begin{array}{rcl} & & \limsup _{n\rightarrow \infty } \frac{1} {a_{n}^{2}}\log P(\log \vert \det (M_{n}^\prime )\vert \in I_{n}^{+}) \\ & & \quad =\limsup _{n\rightarrow \infty } \frac{1} {a_{n}^{2}}\log P{\biggl ( \frac{1} {a_{n}}\widetilde{W}_{n,2} \in [b -\eta (n),c +\eta (n)]\biggr )} \\ \end{array}$$

with \(\eta (n) := {n}^{-c_{0}/10}{\bigl (a_{n}{\sqrt{\frac{1} {2}\log n}\bigr )}}^{-1} \rightarrow 0\) as \(n \rightarrow \infty\). Since \(c_{0} > 0\) and \(\log n/a_{n}^{2} \rightarrow \infty\) as \(n \rightarrow \infty\) by assumption, applying Theorem 2.3 we have

$$\limsup _{n\rightarrow \infty } \frac{1} {a_{n}^{2}}\log P{\bigl (W_{n}/a_{n} \in [b,c]\bigr )} \leq -\inf _{x\in [b,c]}\frac{{x}^{2}} {2}.$$

Applying (30) we obtain in the same manner that

$$\liminf _{n\rightarrow \infty } \frac{1} {a_{n}^{2}}\log P{\bigl (W_{n}/a_{n} \in [b,c]\bigr )} \geq -\inf _{x\in [b,c]}\frac{{x}^{2}} {2}.$$

The conclusion follows applying [5, Theorem 4.1.11 and Lemma 1.2.18]. □ 

Remark 2.6.

The bump function \(G_{n}\) in the proof of Theorem 2.5 can be chosen to fulfill \(\big\vert \frac{{d}^{j}} {d{x}^{j}}G_{n}(x)\big\vert = O({n}^{c_{0}})\) for j = 0,…,5 uniformly in the endpoints of the interval [b, c]. Hence the Berry-Esseen bound in Theorem 2.3 can be obtained for Wigner matrices considered in Theorem 2.5:

$$\sup _{x\in \mathbb{R}}\big\vert P(W_{n} \leq x) - \Phi (x)\big\vert \leq \text{const}\,{\bigl ({(\log n)}^{-1/2} + {n}^{-c_{0} }\bigr )}.$$

We omit the details.

3 Non-symmetric and Non-Hermitian Gaussian Random Matrices

As already mentioned, Nguyen and Vu recently proved in [19] that for \(A_{n}\) an \(n \times n\) matrix whose entries are independent real random variables with mean zero and variance one, the Berry-Esseen bound

$$\sup _{x\in \mathbb{R}}\big\vert P(W_{n} \leq x) - \Phi (x)\big\vert {\leq \log }^{-1/3+o(1)}n$$

with

$$W_{n} := \frac{\log (\vert \det A_{n}\vert ) -\frac{1} {2}\log (n - 1)!} {\sqrt{\frac{1} {2}\log n}}$$
(31)

holds true. We will prove good bounds for the cumulants of \(W_{n}\) in the case where the entries are Gaussian random variables. Hence we will be able to prove Cramér-type moderate deviations and an MDP as well as a Berry-Esseen bound of order \({(\log n)}^{-1/2}\) (and it seems that one cannot have a better rate of convergence than this). In the Gaussian case, again the calculation of the Mellin transform is the main tool. Fortunately, the transform can be calculated much more easily here.

Let \(A_{n}\) be an \(n \times n\) matrix whose entries are independent real or complex Gaussian random variables with mean zero and variance one. Denote by \(A_{n}^{\dag }\) the transpose or Hermitian conjugate of \(A_{n}\), according to whether \(A_{n}\) is real or complex. Then \(A_{n}A_{n}^{\dag }\) is positive semi-definite and its eigenvalues are real and non-negative. The positive square roots of the eigenvalues of \(A_{n}A_{n}^{\dag }\) are known as the singular values of \(A_{n}\). One has that

$$\displaystyle\prod _{i=1}^{n}\lambda _{ i}^{2} =\det (A_{ n}A_{n}^{\dag }) = \vert \det A_{ n}{\vert }^{2} =\displaystyle\prod _{ i=1}^{n}\vert x_{ i}{\vert }^{2},$$

where \(\lambda _{i}\) are the singular values and \(x_{i}\) are the eigenvalues of \(A_{n}\). The matrix \(A_{n}A_{n}^{\dag }\) is called a Wishart matrix. For the real case we consider independent \(N(0,1)_{\mathbb{R}}\) distributed entries; for the complex case we assume that the real and imaginary parts of each entry are independent and \(N(0,1)_{\mathbb{R}}\) distributed. These ensembles are called Ginibre ensembles. One obtains for the joint probability distribution of the eigenvalues of \(A_{n}A_{n}^{\dag }\) on \(\mathbb{R}_{+}^{n}\) the density

$$\frac{1} {\tilde{Z}_{n,\beta }}\exp {\bigl ( - \frac{\beta } {2}\displaystyle\sum _{i=1}^{n}y_{ i}\bigr )}\displaystyle\prod _{i=1}^{n}y_{ i}^{\beta /2-1}\displaystyle\prod _{ i<j}\vert y_{i} - y_{j}{\vert }^{\beta }$$

with β = 1 for the real and β = 2 for the complex case and \(\tilde{Z}_{n,\beta }\) being the normalizing constant (see for example [2, Chap. 7]). As a result the Gaussian joint probability density of the singular values \(\lambda _{i}\) gets transformed to

$$Q_{n,\beta }(\lambda _{1},\ldots ,\lambda _{n}) := \frac{1} {Z_{n,\beta }(n)}\exp {\bigl ( - \frac{\beta } {2}\displaystyle\sum _{i=1}^{n}\lambda _{ i}^{2}\bigr )}\displaystyle\prod _{ i=1}^{n}\lambda _{ i}^{\beta -1}\displaystyle\prod _{ i<j}\vert \lambda _{i}^{2} -\lambda _{ j}^{2}{\vert }^{\beta }$$

with

$$Z_{n,\beta }(p) :=\displaystyle\int \cdots \displaystyle\int \exp {\bigl ( - \frac{\beta } {2}\displaystyle\sum _{i=1}^{n}\lambda _{ i}^{2}\bigr )}\displaystyle\prod _{ i=1}^{n}\lambda _{ i}^{(p-n)+\beta -1}\displaystyle\prod _{ i<j}\vert \lambda _{i}^{2} -\lambda _{ j}^{2}{\vert }^{\beta }\displaystyle\prod _{ i=1}^{n}d\lambda _{ i}$$
(32)

Now the Mellin transform of the probability density of the determinant of \(A_{n}\) is given by

$$\mathcal{M}_{n,\beta }(s)\,=\,\displaystyle\int _{0}^{\infty }\cdots \displaystyle\int _{ 0}^{\infty }\vert \lambda _{ 1}\cdots \lambda _{n}{\vert }^{s-1}Q_{ n,\beta }(\lambda _{1},\ldots ,\lambda _{n})\displaystyle\prod _{i=1}^{n}d\lambda _{ i}\,=\,\frac{Z_{n,\beta }(n + s - 1)} {Z_{n,\beta }(n)}.$$

Using the Selberg integral identity of Laguerre type, [17, formula 17.6.5], we obtain for the moment generating function \(M_{n,\beta }(s) = \mathcal{M}_{n,\beta }(s + 1)\):

$$M_{n,\beta }(s) ={\bigl ({ \frac{2} {\beta } \bigr )}}^{ns/2}\displaystyle\prod _{ i=1}^{n}\frac{\Gamma {\bigl ((s + i\,\beta )/2\bigr )}} {\Gamma {\bigl ((i\,\beta )/2\bigr )}}.$$
(33)

This formula even makes sense for β = 4, where \(A_{n}\) is a quaternion matrix and \(A_{n}^{\dag }\) denotes the dual of \(A_{n}\) (see [17, Sect. 15.4] for a discussion of the definition of a determinant in this case). We will concentrate on the real case β = 1. The results of the following theorem can be stated and proved similarly in the two other cases β = 2, 4. We omit the details. We consider \(W_{n}\) as in (31) and

$$\widetilde{W}_{n} := \frac{\log \vert \det A_{n}\vert - \mathbb{E}(\log \vert \det A_{n}\vert )} {\mathbb{V}{(\log \vert \det A_{n}\vert )}^{1/2}}.$$
(34)
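Before stating the theorem, here is a small Monte Carlo sanity check of (33) in the real case β = 1 (numpy and scipy assumed; n, s and the number of trials are arbitrary choices). Note that at s = 2, formula (33) collapses to \(M_{n,1}(2) = n!\), the classical formula for \(\mathbb{E}{(\det A_{n})}^{2}\):

```python
# Monte Carlo check of the moment formula (33) for beta = 1 and small n.
import numpy as np
from scipy.special import gammaln

def M(s, n, beta=1.0):
    i = np.arange(1, n + 1)
    logM = n * s / 2 * np.log(2 / beta) + np.sum(
        gammaln((s + i * beta) / 2) - gammaln(i * beta / 2))
    return np.exp(logM)

rng = np.random.default_rng(0)
n, trials = 3, 200_000
dets = np.abs([np.linalg.det(rng.standard_normal((n, n))) for _ in range(trials)])
for s in (1.0, 2.0):
    print(s, np.mean(dets ** s), M(s, n))   # at s = 2 both are n! = 6
```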

Theorem 3.1.

Let \(A_{n}\) be an \(n \times n\) matrix whose entries are independent real \(N(0,1)_{\mathbb{R}}\) random variables. Then we have:

  (1)

    Cramér-type moderate deviations: There exist two constants \(C_{1}\) and \(C_{2}\) depending on β such that the following inequalities hold true:

    $$\bigg\vert \log \frac{P(\widetilde{W}_{n} \geq x)} {1 - \Phi (x)} \bigg\vert \leq C_{2}\frac{1 + {x}^{3}} {\sigma _{\beta }}$$

    and

    $$\bigg\vert \log \frac{P(\widetilde{W}_{n} \leq -x)} {\Phi (-x)} \bigg\vert \leq C_{2}\frac{1 + {x}^{3}} {\sigma _{\beta }}$$

    for all \(0 \leq x \leq C_{1}\mathbb{V}{(\log \vert \det A_{n}\vert )}^{1/2}\).

  (2)

    Berry-Esseen bounds: We obtain the following bounds:

    $$\displaystyle\begin{array}{rcl} \sup _{x\in \mathbb{R}}\big\vert P(W_{n} \leq x) - \Phi (x)\big\vert &\leq & C(\beta ){(\log n)}^{-1/2}, \\ \sup _{x\in \mathbb{R}}\big\vert P(\widetilde{W}_{n} \leq x) - \Phi (x)\big\vert &\leq & C(\beta ){(\log n)}^{-1/2}.\end{array}$$
  (3)

    Moderate deviations principle: For any sequence \((a_{n})_{n}\) of real numbers such that \(1 \ll a_{n} \ll \sigma _{\beta }\) the sequences \({\bigl ( \frac{1} {a_{n}}W_{n}\bigr )}_{n}\) and \({\bigl ( \frac{1} {a_{n}}\widetilde{W}_{n}\bigr )}_{n}\) satisfy an MDP with speed \(a_{n}^{2}\) and rate function \(I(x) = \frac{{x}^{2}} {2}\), respectively.

Proof.

With (33) we are able to estimate the cumulants \(\Gamma _{j}(n)\) of \(\log \vert \det A_{n}\vert\). The calculations will benefit from a few results presented in the proofs of Lemmas 2.1 and 2.2. Therefore we restrict ourselves to the major steps of the proof. We denote by ψ the digamma function and by \({\psi }^{(k)}\), \(k \in \mathbb{N}\), the polygamma function (see Lemma 2.1). With (33) we have

$$\Gamma _{1}(n) = \frac{n} {2} \log 2 + \frac{1} {2}\displaystyle\sum _{i=1}^{n}\psi (i/2)\quad \text{and}\quad \Gamma _{ j}(n) = \frac{1} {{2}^{j}}\displaystyle\sum _{i=1}^{n}{\psi }^{(j-1)}(i/2)\,\,\text{for}\,\,j \geq 2.$$

For \(n = 2k + 1\) we have \(\frac{1} {2}\sum _{i=1}^{n}\psi (i/2) = \frac{1} {2}{\bigl (\sum _{i=0}^{k}\psi (1/2 + i) +\sum _{ i=1}^{k}\psi (i)\bigr )}\). Using (14) the first summand is equal to \(-\frac{k} {2} + \frac{k} {2} \log k + \frac{1} {4}\log k + \frac{1} {4}\psi (1/2) + O(1/k)\). With \(\psi (1 + x) =\psi (x) + \frac{1} {x}\) (see Lemma 2.1) one obtains that \(\psi (i) =\psi (1) +\sum _{ j=1}^{i-1}\frac{1} {j}\). Thus applying (13) we have \(\frac{1} {2}\sum _{i=1}^{k}\psi (i) = \frac{k} {2} \log (k - 1) -\frac{k} {2} + \text{const} + O(1/k)\). Summarizing we get

$$\displaystyle\begin{array}{rcl} \Gamma _{1}(2k + 1)& =& \frac{n} {2} \log 2 - k + k\log k + \frac{1} {4}\log k + \text{const} + O(1/k) \\ & =& -\frac{n} {2} + \frac{n} {2} \log (n - 1) -\frac{1} {4}\log (n - 1) + \text{const} + O(1/n).\end{array}$$

Therefore the leading term of the expectation of \(\log \vert \det A_{n}\vert\) is \(\frac{1} {2}\log ((n - 1)!)\). In the case n = 2k one obtains the same order. For \(\Gamma _{j}(2k + 1)\) with j ≥ 2 we proceed as follows:

$$\displaystyle\begin{array}{rcl} \Gamma _{j}(2k + 1)& =& \frac{1} {{2}^{j}}\displaystyle\sum _{i=1}^{2k+1}{\psi }^{(j-1)}(i/2) \\ & =& \frac{1} {{2}^{j}}{\biggl ({\psi }^{(j-1)}(1/2) +\displaystyle\sum _{ i=1}^{k}{\psi }^{(j-1)}(1/2 + i) +\displaystyle\sum _{ i=1}^{k}{\psi }^{(j-1)}(i)\biggr )}.\end{array}$$

Take the representation (16) to see that \({\psi }^{(j-1)}(i) = {(-1)}^{j}(j - 1)!\sum _{m=i}^{\infty } \frac{1} {{m}^{j}}\), such that

$$\displaystyle\begin{array}{rcl} \displaystyle\sum _{i=1}^{k}{\psi }^{(j-1)}(i)& =& {(-1)}^{j}(j - 1)!{\biggl (\displaystyle\sum _{ m=1}^{k} \frac{1} {{m}^{j-1}} + k\displaystyle\sum _{m=k+1}^{\infty } \frac{1} {{m}^{j}}\biggr )} \\ & =& -{(j - 1)\psi }^{(j-2)}(1) + O(1/k) + {k\psi }^{(j-1)}(k + 1).\end{array}$$

With the help of (17) we obtain for j ≥ 3 that

$$\displaystyle\begin{array}{rcl} \Gamma _{j}(n)& =& \frac{1} {{2}^{j+1}}{\psi }^{(j-1)}(1/2) - \frac{1} {{2}^{j}}(j - 1){\bigl({\psi }^{(j-2)}(1/2) {+\psi }^{(j-2)}(1)\bigr )} \\ & & + \frac{1} {{2}^{j+1}}(2k + 1){\psi }^{(j-1)}(1/2 + k + 1) + \frac{1} {{2}^{j}}\,{k\,\psi }^{(j-1)}(k + 1) + O(1/k).\end{array}$$

With (21) we are able to bound the cumulants in a similar way as in the proof of Lemma 2.1 and obtain \(\vert \Gamma _{j}(n)\vert \leq \text{const}j!\). Moreover with (18) we obtain for the variance

$$\Gamma _{2}(n) = \frac{1} {2}\log n + \frac{1} {2}{\bigl (\gamma + 1 + \frac{{\pi }^{2}} {8}\bigr )} + O(1/n).$$

Therefore the leading term of the variance of \(\log \vert \det A_{n}\vert\) is \(\frac{1} {2}\log n\). Now the theorem follows exactly as in the proof of Theorem 2.3. □
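These asymptotics can again be checked numerically against (33); a sketch assuming mpmath (the matrix size is an arbitrary choice):

```python
# Numerical check of the Ginibre (beta = 1) cumulants derived from (33).
import mpmath as mp

def log_M(s, n):
    return n * s / 2 * mp.log(2) + mp.fsum(
        mp.loggamma((s + i) / 2) - mp.loggamma(mp.mpf(i) / 2) for i in range(1, n + 1))

n = 2000
g1 = mp.diff(lambda s: log_M(s, n), 0, 1)
g2 = mp.diff(lambda s: log_M(s, n), 0, 2)
# mean: (1/2)log((n-1)!) + const, so the difference below stays bounded
print(g1 - 0.5 * mp.loggamma(n))            # log Gamma(n) = log (n-1)!
# variance: (1/2)log n + (gamma + 1 + pi^2/8)/2 + O(1/n), difference -> 0
print(g2 - (mp.log(n) / 2 + (mp.euler + 1 + mp.pi**2 / 8) / 2))
```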

Remark 3.2.

Let \(A_{n}\) be an \(n \times n\) matrix whose entries are independent complex or quaternion Gaussian random variables, respectively. Then \(W_{n}\) and \(\widetilde{W}_{n}\) as defined before satisfy Cramér-type moderate deviations, Berry-Esseen bounds and a moderate deviations principle. This can easily be checked noting that, for β = 1, 2, 4,

$$\Gamma _{j}^{(\beta )}(n) = \frac{n} {2} \log {\bigl (\frac{2} {\beta } \bigr )}\delta _{\{j=1\}} + \frac{1} {{2}^{j}}\displaystyle\sum _{i=1}^{n}{\psi }^{(j-1)}{\biggl ( \frac{i\beta } {2}\biggr )}$$

is of order \(\frac{1} {2\beta }\log (n)\): for β = 2 we have already bounded these summands in the proof above. In the case β = 4 use (22) and its derivatives to see that the cumulant can be represented via sums of \({\psi }^{(j-1)}(i)\) and \({\psi }^{(j-1)}(i + 1/2)\).

Remark 3.3 (Trace-fixed ensembles). 

In [16], the authors considered fixed-trace Gaussian random matrix ensembles (real symmetric and Hermitian ones). Here the trace of the matrix is kept constant with no other restriction on the matrix elements. These ensembles are shown to be equivalent as far as finite moments of the matrix elements are concerned. In particular, the Mellin transform of the fixed-trace Gaussian matrices can be deduced from the Mellin transform of the Gaussian orthogonal and unitary ensembles, respectively, see [16, formulas (17), (20) and (22)]. Hence it is expected that the distribution of the log-determinant of these ensembles is asymptotically Gaussian with a variance of order log n. We would be able to deduce the results of Theorem 3.1 for the fixed-trace Gaussian ensembles by the same technique. We omit the details. Remark that universal limits for the eigenvalue correlation functions in the bulk of the spectrum for fixed-trace matrix ensembles are considered in [12, 13]. In this case, the class of matrices is of nondeterminantal structure.