Introduction

In the last few years, there has been growing interest in the theory of random matrices. Many authors have investigated the global and local asymptotic behavior of the spectrum of random matrices. It is well known that the Wigner semicircle law describes the global density of the eigenvalues of an \(n\times n\) random Gaussian Hermitian matrix as the dimension \(n\) goes to infinity. This result was proven by Wigner (1955). The same result was later extended to other ensembles of random matrices; see [1], [2] and the references given there. For background on random matrices, see also [3]–[7].

In some cases of ensembles of random matrices, the asymptotic behavior of the density of eigenvalues coincides with the limit density of the zeros of some orthogonal polynomials. For instance, in the case of the Gaussian ensemble of Hermitian matrices, the Wigner semicircle law describes the scaled limit of the statistical density of the zeros of Hermite polynomials.

Other contributions to the asymptotic behavior of the eigenvalues have been made. For example, in [8], the author studies the eigenvalues of the ensemble of fixed-trace Gaussian Hermitian matrices and gives the limit of the level density (the density of the eigenvalues, or statistical density), which coincides with the Wigner semicircle law. In [9], the author investigates the asymptotic behavior of the level density of a generalized Gaussian random matrix without constraints.

In this paper, we investigate the statistical density of a generalized Gaussian random Hermitian matrix with fixed trace. That is to say, the elements \(x_{ij}\) of the random Hermitian matrix \(X\) are distributed with respect to the probability density \(\psi_\lambda(x)=c_\lambda|x|^{2\lambda}e^{-x^2}\) under the extra condition \(\sum_{i,j=1}^n|x_{ij}|^2=a_n\), where \(\lambda>-1/2\) and \(a_n\) is a real sequence. We use a technique different from the one used in [10], [8] and [11]. Indeed, the orthogonal polynomial method cannot be applied because of the constraint on the entries of the matrix, which makes the level density non-determinantal and requires the use of the potential theory method. To our knowledge, the first papers on this subject which use this technique are [12] and [13].

Our main result reads as follows: let \(\lambda_n\geq 0\) be a nonnegative sequence, let \(X\) be a Hermitian matrix of order \(n\) with coefficients in the field \(\mathbb F=\mathbb R,\mathbb C\) or \(\mathbb H\) (the quaternion field), and assume that the coefficients are distributed according to the density \(\psi_{\lambda_n}\). Let \(x^{(n,\lambda_n)}_1,\dots,x^{(n,\lambda_n)}_n\) be the \(n\) eigenvalues of \(X\), and define on \(\mathbb R\) the probability measure

$$\nu_n^{\lambda_n}=\frac1n\sum_{k=1}^n\delta_{x_k^{(n,\lambda_n)}}$$

with the extra condition on the eigenvalues

$$\sum_{k=1}^n\big(x_k^{(n,\lambda_n)}\big)^2=a_n,$$

where \(a_n\) is a sequence of order \(n^2\), \(a_n\sim n^2\). Then, after scaling the probability measure \(\nu_n^{\lambda_n}\) by the factor \(1/\sqrt n\), and provided \(\lambda_n/n\) converges to \(c\geq 0\), the scaled measure converges weakly to a probability measure \(\nu_{\beta, c}\), in the sense that, for every bounded continuous function \(f\),

$$\lim_{n\to\infty}\int_\mathbb R f(t)\nu_n^{\lambda_n}(dt)=\int_\mathbb R f(t)\nu_{\beta,c}(dt).$$

The measure \(\nu_{\beta, c}\) has support \(S=[-b,-a]\cup[a,b]\), and its density is given by

$$\frac2{\pi\beta |t|}\sqrt{(b^2-t^2)(t^2-a^2)},$$

where \(a=a(\beta,c)\) and \(b=b(\beta,c)>0\) are constants depending only on the parameters \(\beta\) and \(c\); they will be specified later.

The remainder of the paper is divided into three sections and is organized as follows. Section 1 contains the relevant background on generalized Hermite polynomials and some preliminary results needed for the proof of our main results. In Section 2, we investigate the asymptotic behavior of the zeros of the generalized Hermite polynomials and its relationship with the level density of the eigenvalues of a generalized Gaussian matrix. In the last section, Section 3, we prove our main result.

1. Preliminaries

Let \(\mu\) be a positive measure on \(\mathbb R\). Assume that the support of \(\mu\) is infinite, and that, for any \(k\geq 0\),

$$\int_{\mathbb R}|t|^k\mu(dt)<\infty.$$

On the space \(\mathscr P\) of polynomials in one variable with real coefficients, we consider the inner product

$$(p\mid q)=\int_{\mathbb R}p(x)q(x)\mu(dx),$$

which makes \(\mathscr P\) into a pre-Hilbert space. From the system \(\big\{t^m, m \in\mathbb N\big\}\), the Gram–Schmidt orthogonalization process produces a sequence \((p_m)_m\) of orthogonal polynomials: \(p_m\) is a polynomial of degree \(m\) and

$$\int_{\mathbb R}p_m(x)p_n(x)\mu(dx)=0 \qquad\text{if}\quad m\neq n.$$

Generalized Hermite polynomials. All the notation is from Rosenblum [13]; see also [15]. In this example, \(\mu\) is a generalized Gaussian measure: let \(\lambda>-1/2\), and

$$\mu(dx)=|x|^{2\lambda}e^{-x^2}dx.$$

The generalized Hermite polynomial \(H^{\lambda}_n\) is defined for \(n=2m\) even by

$$\begin{aligned}H^{\lambda}_{2m}(x)&:=(-1)^m\frac{(2m)!}{m!}\,{}_1F_1\biggl(-m,\lambda+\frac12;x^2\biggr)\\ &=(-1)^m\frac{(2m)!}{m!}\sum_{k=0}^{m}(-1)^k\binom{m}{k} \frac{\Gamma(\lambda+1/2)}{\Gamma(k+\lambda+1/2)}x^{2k}.\end{aligned}$$

They are defined for \(n=2m+1\) odd by

$$\begin{aligned}H^{\lambda}_{2m+1}(x)&:=(-1)^m\frac{(2m+1)!}{m!}\frac{x}{\lambda+\frac12} \,{}_1F_1\biggl(-m,\lambda+\frac32;x^2\biggr)\\ &=(-1)^m\frac{(2m+1)!}{m!}\sum_{k=0}^{m}(-1)^k\binom{m}{k} \frac{\Gamma(\lambda+1/2)}{\Gamma(k+\lambda+3/2)}x^{2k+1},\end{aligned}$$

where \({}_1F_1\) is the confluent hypergeometric function.

They are orthogonal in the Hilbert space \(L^2(\mathbb R, \mu)\) with square norm

$$||H^{\lambda}_n||^2=2^n(n!)^2\frac{\Gamma(\lambda+1/2)}{\gamma_{\lambda}(n)}, \qquad \gamma_\lambda(n)=\left\{\begin{aligned} \, &2^{2m}m!\frac{\Gamma(m+\lambda+1/2)}{\Gamma (\lambda+1/2)}\quad\mbox {if}\; n=2m,\\ &2^{2m+1}m!\frac{\Gamma(m+\lambda+3/2)}{\Gamma(\lambda+1/2)}\qquad\mbox {if}\quad n=2m+1. \end{aligned}\right.$$
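As a numerical sanity check (not part of the exposition), the explicit sums above, the orthogonality relation, and the norm formula can be verified by quadrature; a short Python sketch, with the illustrative choice \(\lambda=1\):

```python
import math

LAM = 1.0  # illustrative value of the parameter lambda > -1/2

def gh_coeffs(n, lam):
    """Coefficients (degree -> coefficient) of H_n^lambda from the explicit sums."""
    c = {}
    m = n // 2
    if n % 2 == 0:
        pref = (-1) ** m * math.factorial(2 * m) / math.factorial(m)
        for k in range(m + 1):
            c[2 * k] = pref * (-1) ** k * math.comb(m, k) * \
                math.gamma(lam + 0.5) / math.gamma(k + lam + 0.5)
    else:
        pref = (-1) ** m * math.factorial(2 * m + 1) / math.factorial(m)
        for k in range(m + 1):
            c[2 * k + 1] = pref * (-1) ** k * math.comb(m, k) * \
                math.gamma(lam + 0.5) / math.gamma(k + lam + 1.5)
    return c

def gh_eval(coeffs, x):
    return sum(a * x ** d for d, a in coeffs.items())

def gamma_lam(n, lam):
    """gamma_lambda(n) from the display above."""
    m = n // 2
    if n % 2 == 0:
        return 2 ** (2 * m) * math.factorial(m) * math.gamma(m + lam + 0.5) / math.gamma(lam + 0.5)
    return 2 ** (2 * m + 1) * math.factorial(m) * math.gamma(m + lam + 1.5) / math.gamma(lam + 0.5)

def inner(m, n, lam, lo=-12.0, hi=12.0, steps=24000):
    """Trapezoid rule for the inner product of H_m, H_n w.r.t. |x|^{2 lam} e^{-x^2} dx."""
    cm, cn = gh_coeffs(m, lam), gh_coeffs(n, lam)
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps + 1):
        x = lo + i * h
        w = abs(x) ** (2 * lam) * math.exp(-x * x)
        v = gh_eval(cm, x) * gh_eval(cn, x) * w
        total += v * (0.5 if i in (0, steps) else 1.0)
    return total * h

# squared norm from the closed formula above
norm2 = lambda n: 2 ** n * math.factorial(n) ** 2 * math.gamma(LAM + 0.5) / gamma_lam(n, LAM)
```

Distinct degrees give inner products that vanish up to quadrature error, and the diagonal inner products match the closed form for \(\|H_n^\lambda\|^2\).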

The generalized Hermite polynomial \(H^{\lambda}_n\) is a solution of the differential equation

$$x^2y''+2x(\lambda -x^2)y'+(2nx^2-\lambda(1-(-1)^n))y=0,$$

and satisfies the three-term recurrence relation

$$xH_{n-1}^{\lambda}=\frac{\gamma_\lambda(n)}{2n\gamma_\lambda(n-1)}H_n^{\lambda}+ (n-1)H^{\lambda}_{n-2}.$$

Let \(D_\lambda\) be the one-dimensional Dunkl operator acting on entire functions by

$$D_{\lambda}(\varphi)(x)=\varphi'(x)+\frac{\lambda}{x}(\varphi(x)-\varphi(-x)).$$

This is the particular case of the Dunkl operator associated with the reflection group \(G=\mathbb Z_2\). In this case the polynomials \(H_n^\lambda\) satisfy the lowering relation

$$D_\lambda(H_n^{\lambda})=2nH_{n-1}^\lambda.$$

See for instance [15] and [16] for background on the Dunkl transform in the multidimensional case. Connections between Dyson's Brownian motion model and the Dunkl heat equation have also been established; see for instance [17], [18] and [15].
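The lowering relation \(D_\lambda(H_n^{\lambda})=2nH_{n-1}^\lambda\) can be checked at the level of coefficients, since on monomials \(D_\lambda x^d=(d+2\lambda\theta_d)x^{d-1}\), with \(\theta_d=0\) for \(d\) even and \(\theta_d=1\) for \(d\) odd; a short Python sketch (the value \(\lambda=0.8\) is an arbitrary test choice):

```python
import math

def gh_coeffs(n, lam):
    """Coefficients (degree -> coefficient) of H_n^lambda from the explicit sums."""
    c = {}
    m = n // 2
    if n % 2 == 0:
        pref = (-1) ** m * math.factorial(2 * m) / math.factorial(m)
        for k in range(m + 1):
            c[2 * k] = pref * (-1) ** k * math.comb(m, k) * \
                math.gamma(lam + 0.5) / math.gamma(k + lam + 0.5)
    else:
        pref = (-1) ** m * math.factorial(2 * m + 1) / math.factorial(m)
        for k in range(m + 1):
            c[2 * k + 1] = pref * (-1) ** k * math.comb(m, k) * \
                math.gamma(lam + 0.5) / math.gamma(k + lam + 1.5)
    return c

def dunkl(coeffs, lam):
    """Apply D_lambda: on monomials, D_lambda x^d = (d + 2*lam*(d % 2)) x^(d-1)."""
    out = {}
    for d, a in coeffs.items():
        if d >= 1:
            out[d - 1] = out.get(d - 1, 0.0) + a * (d + 2 * lam * (d % 2))
    return out

def close(c1, c2, tol=1e-8):
    degs = set(c1) | set(c2)
    return all(abs(c1.get(d, 0.0) - c2.get(d, 0.0)) <= tol * (1 + abs(c2.get(d, 0.0)))
               for d in degs)

LAM = 0.8
# D_lam H_n = 2 n H_{n-1} for the first few degrees
ok = all(
    close(dunkl(gh_coeffs(n, LAM), LAM),
          {d: 2 * n * a for d, a in gh_coeffs(n - 1, LAM).items()})
    for n in range(1, 7)
)
```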

2. Asymptotics of the Zeros of Generalized Hermite Polynomials

2.1. Maximum Value and Asymptotics of the Energy

Let \(x_1^{(n,\lambda)},\dots,x_n^{(n,\lambda)}\) be the zeros of \(H_n^\lambda\). In order to study the global behavior of the zeros, we consider on \(\mathbb R\) the probability measure \(\nu_{n,\lambda}\) defined by

$$\nu_{n,\lambda}=\frac1n\sum_{i=1}^n\delta_{x^{(n,\lambda)}_i}.$$

In other words, for a continuous function \(f\) defined on \(\mathbb R\)

$$\int_{\mathbb R}f(t)\nu_{n,\lambda}(dt)=\frac1n\sum_{i=1}^nf(x^{(n,\lambda)}_i).$$

We will see that under some conditions on the parameter \(\lambda\), the measure \(\nu_{n,\lambda}\) converges weakly.

Given \(n\) distinct points \(x_1,\dots,x_n\) in \(\mathbb R\), we define the energy of the configuration with respect to a potential \(Q\) in the following way:

$$ E_{ Q,n}(x_1,\cdots,x_n)=2\sum_{i< j}\log\frac{1}{|x_i-x_j|}+\sum_{i=1}^nQ(x_i).$$
(1)

For a lower semicontinuous and convex potential \(Q\) with \(\lim_{|x|\to\infty}(Q(x)-\log(1+x^2))=+\infty\), the energy \(E_{Q,n}(x_1,\dots,x_n)\) is defined for \(x_i\in\mathbb R^*\), takes values in \(]-\infty,\infty]\), is bounded from below, and is lower semicontinuous. Further, the energy tends to infinity at the boundary of its domain. Hence there is at least one point \(x^{(n)}=(x^{(n)}_1,\dots, x_n^{(n)})\in\mathbb R^n\) which minimizes the energy:

$$E^*_{Q,n}=E_{Q,n}\big(x^{(n)}\big)=\inf_{x\in\mathbb R^n}E_{Q, n}(x).$$

Proposition 1.

For \(Q(x)=x^2+2\lambda\log(1/|x|)\) , and \(\lambda\geq 0\) , the minimum of the energy \(E^*_{\lambda, n}\) is reached at the zeros of the generalized Hermite polynomials \(H_n^\lambda\) .

Proof.

First, remark that the minimizing point \(x^{(n,\lambda)}\) satisfies \(x^{(n,\lambda)}_i\neq x^{(n,\lambda)}_j\) for \(i\neq j\) and \(x^{(n,\lambda)}_i\neq 0\). On the other hand, \(Q_{\lambda}\) is \({\cal C}^1\) on \(\mathbb R\setminus\{0\}\), and the energy is of class \({\cal C}^1\) on the open set \(\{x\in\mathbb R^n\mid x_i\neq x_j\ \text{for}\ i\neq j,\ x_i\neq 0\}\). Since \(x^{(n,\lambda)}\) is a critical point of the function \(E_{Q, n}\), it follows that, for all \(j=1,\dots,n\),

$$\frac{\partial E_{\lambda, n}}{\partial x_j}\big(x^{(n,\lambda)}\big) =0,$$

which means that

$$\frac{\partial E_{\lambda, n}}{\partial x_j}\big(x^{(n,\lambda)}\big)=-2\sum_{i=1, i\neq j}^n\frac{1}{x^{(n,\lambda)}_j-x^{(n,\lambda)}_i}+2x^{(n,\lambda)}_j-\frac{2\lambda} {x^{(n,\lambda)}_j}=0.$$

Thus \(x^{(n,\lambda)}\) is a critical point of \(E_{\lambda, n}\) if and only if

$$\sum_{i=1, i\neq j}^n\frac{1}{x^{(n,\lambda)}_j-x^{(n,\lambda)}_i}=x^{(n,\lambda)}_j-\frac{\lambda} {x^{(n,\lambda)}_j} \qquad(j=1,\dots,n).$$

With a point \(x \in\mathbb R^n\) we associate the polynomial

$$p(t)=\prod_{i=1}^n(t-x_i).$$

Let us compute the logarithmic derivative of \(p\):

$$\frac{p'(t)}{p(t)}=\sum_{i=1}^n\frac{1}{t-x_i},$$

and we have

$$\lim_{t\to x_j}\biggl(\frac{p'(t)}{p(t)}-\frac{1}{t-x_j}\biggr)=\sum_{i=1, i\neq j}^n\frac{1}{x_j-x_i},$$

which gives

$$\frac{p''(x_j)}{2p'(x_j)}=\sum_{i=1, i\neq j}^n\frac{1}{x_j-x_i}.$$

Therefore, the critical point equations become

$$\frac{p''\big(x^{(n,\lambda)}_j\big)}{2p'\big(x^{(n,\lambda)}_j\big)}=x^{(n,\lambda)}_j-\frac{\lambda}{x^{(n,\lambda)}_j},$$

and

$$\big(x^{(n,\lambda)}_j\big)^2 p''(x^{(n,\lambda)}_j)-2x^{(n,\lambda)}_j\big(\big(x^{(n,\lambda)}_j\big)^2-\lambda\big) p'\big(x^{(n,\lambda)}_j\big)=0.$$
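The logarithmic-derivative identity used above holds for any polynomial with simple zeros; a quick numerical illustration in Python, with the arbitrary sample polynomial \(p(t)=(t-1)(t-2)(t-4)\):

```python
# p(t) = (t-1)(t-2)(t-4) = t^3 - 7 t^2 + 14 t - 8
roots = [1.0, 2.0, 4.0]

def dp(t):   # p'(t)
    return 3 * t ** 2 - 14 * t + 14

def ddp(t):  # p''(t)
    return 6 * t - 14

# p''(x_j)/(2 p'(x_j)) = sum_{i != j} 1/(x_j - x_i) at each root
for j, xj in enumerate(roots):
    lhs = ddp(xj) / (2 * dp(xj))
    rhs = sum(1.0 / (xj - xi) for i, xi in enumerate(roots) if i != j)
    assert abs(lhs - rhs) < 1e-12
```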

Consider the polynomial

$$x^2p''(x)-2x(x^2-\lambda)p'(x).$$

It is a polynomial of degree \(n+2\) which vanishes at the \(n\) distinct points \(x^{(n,\lambda)}_j\), \(j=1,\dots,n\), whence

$$ x^2p''(x)-2x(x^2-\lambda)p'(x)=(ax^2+bx+c)p(x).$$
(2)

Since the potential is even, \(Q_\lambda(x)=Q_\lambda(-x)\), we look for the polynomials of definite parity which satisfy (2). Using this fact, we conclude that the solution is an incomplete polynomial

$$p(x)=x^{(1-(-1)^n)/2}\sum_{k=0}^mu_kx^{2(m-k)},$$

where \(m=[n/2].\)

By the parity of \(p\) we obtain \(b=0\). Further, by comparing the coefficients of \(x^{n+2}\), we obtain \(a=-2n\), and

$$x^2p''(x)-2x(x^2-\lambda)p'(x)=(-2nx^2+c)p(x).$$

For \(n\) even, \(p(0)\neq 0\), and evaluating at \(x=0\) gives \(c=0\). If \(n\) is odd, letting \(x\) tend to \(0\) in the equation

$$xp''(x)-2(x^2-\lambda)p'(x)=(-2nx^2+c)\frac{p(x)}{x},$$

we obtain

$$2\lambda p'(0)=cp'(0).$$

Since the zeros of \(p\) are simple, \(p'(0)\neq0\); hence \(c=2\lambda\). One deduces that the polynomial \(p\) satisfies the differential equation

$$x^2y''+2x(\lambda-x^2)y'+2(nx^2-\lambda(1-(-1)^n)/2)y=0.$$

The only polynomial solutions (up to a constant factor) of the equation above are the generalized Hermite polynomials \(H_n^{\lambda}\). Hence the energy is minimal at the zeros of \(H_n^{\lambda}\).
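For a concrete check of Proposition 1, the zeros of \(H_4^\lambda\) are available in closed form: with \(u=x^2\), the equation \(H_4^\lambda(x)=0\) reduces to \(u^2-2(\lambda+3/2)u+(\lambda+1/2)(\lambda+3/2)=0\), so \(u_{\pm}=(\lambda+3/2)\pm\sqrt{\lambda+3/2}\). A Python sketch verifying that these zeros satisfy the critical-point equations (the value \(\lambda=0.8\) is an arbitrary test choice):

```python
import math

lam = 0.8
A, B = lam + 0.5, lam + 1.5
# zeros of H_4^lam: u^2 - 2*B*u + A*B = 0 with u = x^2
disc = math.sqrt(B * B - A * B)   # = sqrt(B), since B - A = 1
u1, u2 = B + disc, B - disc
zeros = [math.sqrt(u1), math.sqrt(u2), -math.sqrt(u2), -math.sqrt(u1)]

def residual(j):
    """Defect in the electrostatic equation sum_{i!=j} 1/(x_j-x_i) = x_j - lam/x_j."""
    xj = zeros[j]
    lhs = sum(1.0 / (xj - xi) for i, xi in enumerate(zeros) if i != j)
    return lhs - (xj - lam / xj)

max_res = max(abs(residual(j)) for j in range(4))
```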

Recall that the discriminant of a polynomial \(p\) of degree \(n\) with leading coefficient equal to one,

$$p(t)=t^n+a_{n-1}t^{n-1}+...+a_0,$$

is the scalar \(\Delta_n(p)\) defined by

$$\Delta_n(p)=\prod_{1\leq i<j\leq n}(x_i^{(n)}-x_j^{(n)})^2,$$

where \(x_1^{(n)}, . . . , x_n^{(n)}\) are the zeros of \(p\). Using equation (1), one deduces that

$$ \exp(-E^*_{n})=\exp\biggl(-\sum_{i=1}^nQ(x^{(n)}_i)\biggr) \Delta_n(p).$$
(3)

In the case of \( p=(1/\alpha_n)H^{\lambda}_n\), the generalized Hermite polynomial, where \(\alpha_n\) is the leading coefficient of \(H_n^\lambda\), we obtain the following relation for the discriminant of \(H_n^\lambda\).

Proposition 2.

For \(n\in\mathbb N\) ,

$$\Delta_n(H_n^\lambda)=2^{-n(n-1)/2}\prod_{k=1}^n \biggl(\frac{\gamma_\lambda(k)}{\gamma_\lambda(k-1)}\biggr)^k.$$

The outline of the computation can be found in [19, Proposition I.3.7], with slight modifications.
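Proposition 2 can be tested numerically for small \(n\), using \(\gamma_\lambda(k)/\gamma_\lambda(k-1)=k+2\lambda\theta_k\) (\(\theta_k=1\) for \(k\) odd, \(0\) for \(k\) even) and the closed-form zeros of \(H_4^\lambda\); a Python sketch with \(n=4\) and the arbitrary test value \(\lambda=0.7\):

```python
import math

lam = 0.7
n = 4
# zeros of H_4^lam: x = ±sqrt(u±), u± = (lam+3/2) ± sqrt(lam+3/2)
B = lam + 1.5
u1, u2 = B + math.sqrt(B), B - math.sqrt(B)
zeros = [math.sqrt(u1), math.sqrt(u2), -math.sqrt(u2), -math.sqrt(u1)]

# discriminant computed directly from the zeros
disc = 1.0
for i in range(n):
    for j in range(i + 1, n):
        disc *= (zeros[i] - zeros[j]) ** 2

# Proposition 2: 2^{-n(n-1)/2} * prod_{k=1}^n (k + 2*lam*theta_k)^k
pred = 2.0 ** (-n * (n - 1) / 2)
for k in range(1, n + 1):
    pred *= (k + 2 * lam * (k % 2)) ** k
```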

Proposition 3.

The value of the energy \(E^*_{\lambda, n}\) stated in Proposition 1 is given by the relation

$$\exp(-E^*_{\lambda, n})=2^{-n(n+4\lambda-1)/2} \biggl(\frac{\gamma_\lambda(n)}{[n/2]!}\biggr)^{2\lambda}\exp\biggl(-\frac{\gamma_\lambda(n)}{2\gamma_\lambda(n-2)}\biggr) \prod_{k=1}^{n} \biggl(\frac{\gamma_\lambda(k)}{\gamma_\lambda(k-1)}\biggr)^k,$$

where \([x]\) is the integer part of the real \(x\) .

Proof.

From Proposition 1, Proposition 2, and equation (3), we have

$$\begin{aligned} \, \exp(-E^*_{\lambda,n})&=\exp\biggl(-\sum_{i=1}^nQ_\lambda(x^{(n,\lambda)}_i)\biggr)\Delta_n(H_n^\lambda)\\ &=2^{-n(n-1)/2}\prod_{i=1}^n\bigl|x^{(n,\lambda)}_i\bigr|^{2\lambda}\exp\biggl(-\sum_{i=1}^n\bigl(x^{(n,\lambda)}_i\bigr)^2\biggr) \prod_{k=1}^{n}\biggl(\frac{\gamma_\lambda(k)}{\gamma_\lambda(k-1)}\biggr)^k. \end{aligned}$$

By using the relations between the zeros and the coefficients of \(H_n^\lambda\), we obtain

$$\sum_{i=1}^n\bigl(x^{(n,\lambda)}_i\bigr)^2=\frac{\alpha^2_{n-1}}{\alpha^2_n}-2\frac{\alpha_{n-2}} {\alpha_n},\qquad \prod_{i=1}^nx^{(n,\lambda)}_i=\frac{\alpha_0}{\alpha_n},$$

where \(\alpha_k\) denotes the coefficient of \(x^k\) in the generalized Hermite polynomial \(H_n^\lambda\).

Since

$$H_n^{\lambda}(x)=\frac{2^nn!}{\gamma_\lambda(n)}x^n-\frac{2^{n-2}n!}{\gamma_\lambda(n-2) }x^{n-2}+\cdots +(-1)^{[n/2]}\frac{n!}{[n/2]!},$$

we obtain

$$\sum_{i=1}^n\bigl(x^{(n,\lambda)}_i\bigr)^2=\frac{\gamma_\lambda(n)}{2\gamma_\lambda(n-2)}\qquad \text{and}\qquad\prod_{i=1}^nx^{(n,\lambda)}_i=(-1)^{[n/2]}\frac{\gamma_\lambda(n)}{2^n[n/2]!}.$$

By a straightforward computation, we have the desired result.

From the propositions above, we deduce the following result.

Proposition 4.

For \(\beta>0\) , the function \(\varphi(x)=\prod_{i=1}^n|x_i|^{2\lambda}\prod_{1\leq i<j\leq n}|x_i-x_j|^{\beta}\) , restricted to the closed ball

$$B=\biggl\{x\in\mathbb R^n\mid \sum_{i=1}^nx_i^2\leq\frac{\gamma_{2\lambda/\beta}(n)}{2\gamma_{2\lambda/\beta}(n-2)}\biggr\},$$

attains its maximum at the zeros of the generalized Hermite polynomial \(H_n^{2\lambda/\beta}\) . Moreover, the value of the maximum is given by

$$\max_{x\in B}\varphi(x)=2^{-n((n-1)\beta+8\lambda)/4} \biggl(\frac{\gamma_{2\lambda/\beta}(n)}{[n/2]!}\biggr)^{2\lambda} \prod_{k=1}^{n}\biggl(\frac{\gamma_{2\lambda/\beta}(k)}{\gamma_{2\lambda/\beta}(k-1)}\biggr)^{{k\beta}/{2}}.$$

Proof.

First, we write \(\varphi\) as

$$\varphi(x)=\exp\biggl(-\frac{\beta}2\biggl(\frac{4\lambda}\beta\sum_{i=1}^n\log\frac{1}{|x_i|} +2\sum_{i<j}\log\frac{1}{|x_i-x_j|}\biggr)\biggr).$$

Therefore, the maximum points of \(\varphi\) are the minimum points of

$$E_n(x_1,...,x_n)=\frac{4\lambda}\beta\sum_{i=1}^n\log\frac{1}{|x_i|}+2\sum_{i<j}\log\frac{1} {|x_i-x_j|}.$$

By Proposition 1, the function \(U(x_1,\dots,x_n)= \sum_{i=1}^nx^2_i+E_n(x_1,\dots,x_n)\) attains its minimum at \(x^{(n,\lambda)}_1,\dots,x^{(n,\lambda)}_n\), the zeros of the generalized Hermite polynomial \(H^{2\lambda/\beta}_n\); moreover,

$$\sum_{i=1}^n\bigl(x^{(n,\lambda)}_i\bigr)^2 =\frac{\gamma_{2\lambda/\beta}(n)}{2\gamma_{2\lambda/\beta}(n-2)}.$$

If we set \(\delta=\gamma_{2\lambda/\beta}(n)/(2\gamma_{2\lambda/\beta}(n-2))\), then the function \(U\) attains its minimum on the sphere of radius \(\sqrt\delta\). Using Propositions 1 and 3, we obtain

$$\begin{aligned} \, \max_B\varphi(x)=\max_{S(0,\sqrt\delta)}\varphi(x)&=e^{\beta\delta/2} \max_B\exp\biggl(-\frac\beta2\biggl(\sum_{i=1}^nx^2_i+E_n(x_1,\dots,x_n)\biggr)\biggr)\\ &=e^{\beta\delta/2}\max_{\mathbb R^n}\exp\biggl(-\frac\beta2\biggl(\sum_{i=1}^nx^2_i+E_n(x_1,\dots,x_n)\biggr)\biggr)\\ &=e^{\beta\delta/2}\exp\biggl(-\frac\beta2E^*_{{2\lambda}/{\beta}, n}\biggr) \\ &=\biggl(2^{-n(n+(8\lambda/\beta)-1)/2} \biggl(\frac{\gamma_{2\lambda/\beta}(n)}{[n/2]!}\biggr)^{4\lambda/\beta} \prod_{k=1}^{n}\biggl(\frac{\gamma_{2\lambda/\beta}(k)} {\gamma_{2\lambda/\beta}(k-1)}\biggr)^k\biggr)^{\beta/2} \\ &=2^{-n((n-1)\beta+8\lambda)/4}\biggl(\frac{\gamma_{2\lambda/\beta}(n)}{[ n/2]!}\biggr)^{2\lambda}\prod_{k=1}^{n} \biggl(\frac{\gamma_{2\lambda/\beta}(k)}{\gamma_{2\lambda/\beta}(k-1)}\biggr)^{k\beta/2}. \end{aligned}$$

Proposition 5.

Let \(\lambda_n\) be a positive real sequence and assume that \( \lim_{n\to\infty}\lambda_n/n=c\). Then

$$\begin{aligned} \, &\lim_{n\to\infty}\biggl(\frac1{n^2}E^*_{\lambda_n, n}+\biggl(\frac{\lambda_n}n+\frac{1}2\biggr) \log n\biggr) \\&\qquad=\frac34+\frac12\log2+\biggl(\frac{3}{2}+\log2\biggr)c+c^2\log(2c)-\biggl(c^2+c+\frac14\biggr)\log(1+2c). \end{aligned}$$

Proof.

By Proposition 3, we have

$$\begin{aligned} \, \frac1{n^2}E^*_{\lambda_n, n}&=\frac{n(n+4\lambda_n-1)}{2n^2}\log2+\frac{\gamma_{\lambda_n}(n)}{2n^2\gamma_{\lambda_n }(n-2)}-\frac{2\lambda_n}{n^2} \log\biggl(\frac{\gamma_{\lambda_n}(n)}{[n/2]!}\biggr)\\ &\qquad{}-\frac1{n^2}\sum_{k=1}^nk\log\biggl(\frac{\gamma_{\lambda_n}(k)}{\gamma_{\lambda_n}(k-1)}\biggr).\end{aligned}$$

Moreover, for all \(k\in\mathbb N^*\),

$$\gamma_{\lambda_n}(k)=(k+2\lambda_n\theta_k)\gamma_{\lambda_n}(k-1),$$

where \(\theta_k=0\) if \(k\) is even and \(\theta_k=1\) if \(k\) is odd.

Since the sequence \(\lambda_n/n\) converges to \(c\), we have

$$\lim_{n\to\infty}\frac{n(n+4\lambda_n-1)}{2n^2}\log2 =\biggl(\frac{1}2+2c\biggr)\log2,$$
(4)
$$\lim_{n\to\infty}\frac{\gamma_{\lambda_n}(n)} {2n^2\gamma_{\lambda_n}(n-2)} =\frac{1}2+c.$$
(5)

Let us define the two sequences

$$I_{1,n}=-\frac{2\lambda_n}{n^2} \log\bigg(\frac{\gamma_{\lambda_n}(n)}{[n/2]!}\bigg) \qquad\text{and}\qquad I_{2,n}=-\frac1{n^2}\sum_{k=1}^nk\log\bigg(\frac{\gamma_{\lambda_n}(k)}{\gamma_{\lambda_n}(k-1)}\bigg).$$

Using the analytic expression of \(\gamma_{\lambda_n}(n)\) and the formula for the gamma function \(\Gamma(x+1)=x\Gamma(x),\) we obtain

$$I_{1,n}=-\frac{2\lambda_n}n\log2-\frac{2\lambda_n}{n^2}\sum_{k=0}^{[ n/2]-1}\log(k+\lambda_n)-\frac{2\lambda_n}{n^2}\sum_{k=0}^{[n/2]-1}\log\bigg(1+\frac{1}{2(k+\lambda_n)}\bigg).$$

By concavity of the logarithmic function, we have

$$0\leq \frac{2\lambda_n}{n^2}\sum_{k=0}^{[n/2]-1}\log(1+\frac{1}{2(k+\lambda_n)})\leq \frac{2\lambda_n}{n^2}\sum_{k=0}^{[ n/2]-1}\frac{1}{2(k+\lambda_n)} \leq \frac{[ n/2]}{n^2}.$$

Hence the last sum in \(I_{1,n}\) tends to zero. This means

$$I_{1,n}=-\frac{2\lambda_n}n\log2 -2\biggl[\frac{n}2\biggr]\frac{\lambda_n}{n^2}\log\lambda_n -\frac{2\lambda_n}{n^2}\sum_{k=0}^{[n/2]-1}\log\bigg(1+\frac{k}{\lambda_n}\bigg)+o(1).$$

Here \(o(1)\) denotes a term that tends to zero as \(n\to\infty\).

Applying Riemann sums with the partition \(a_k=k/\lambda_n\), \(k=0,\dots, [n/2]-1\), \(a_{[ n/2]}=1/(2c)\), of the interval \([0,1/(2c)]\), we get

$$\frac{2\lambda_n}{n^2}\sum_{k=0}^{[n/2]-1} \log\bigg(1+\frac{k}{\lambda_n}\bigg)=\frac{2\lambda^2_n}{n^2}\sum_{k=0}^{[n/2]}(a_k-a_{k-1}) \log(1+a_k) -\frac{2\lambda^2_n}{n^2}(a_{[n/2]}-a_{[n/2]-1})\log\bigg(1+\frac{1}{2c}\bigg),$$

and

$$\lim_{n\to\infty}\frac{2\lambda_n}{n^2}\sum_{k=0}^{[ n/2]-1}\log\bigg(1+\frac{k}{\lambda_n}\bigg)=2c^2\int_0^{1/(2c)}\log(1+x)dx.$$

Note that

$$\lim_{n\to\infty}\biggl(\frac{2[n/2]\lambda_n}{n^2}\log\lambda_n-\frac{\lambda_n}n\log n\biggr)=c\log c.$$

It follows that

$$\lim_{n\to\infty}\biggl(I_{1,n}+\frac{\lambda_n}n\log n\biggr)=-2c\log2-c\log(c)-2c^2\int_0^{1/(2c)}\log(1+x)dx.$$

Hence, evaluating the integral,

$$ \lim_{n\to\infty}\biggl(I_{1,n}+\frac{\lambda_n}n\log n\biggr)=-2c\log2-c\log(c)-(2c^2+c)\log\biggl(1+\frac1{2c}\biggr)+c.$$
(6)

Now we compute the limit of the sequence \(I_{2, n}\). First, for \(n=2m\) even,

$$\begin{aligned}I_{2,2m}&=-\frac1{(2m)^2}\sum_{k=1}^{2m}k\log(k+2\lambda_{2m}\theta_k)\\ &=-\frac1{(2m)^2}\sum_{k=1}^m2k\log(2k)-\frac1{(2m)^2}\sum_{k=0}^{m-1}(2k+1) \log(2k+1+2\lambda_{2m}).\end{aligned}$$

Applying Riemann sums again, we obtain

$$-\frac1{(2m)^2}\sum_{k=1}^m2k\log(2k)=-\frac14\log(2m)-\frac12\int_0^1x\log x\,dx+o(1)=-\frac14\log (2m)+\frac18+o(1).$$

For the second sum, let us write it as

$$\begin{aligned} \, &-\frac1{(2m)^2}\sum_{k=0}^{m-1}(2k+1)\log(2k+1+2\lambda_{2m})\\ &=-\frac1{(2m)^2}\log(2\lambda_{2m})\sum_{k=0}^{m-1}(2k+1)-\frac1{(2m)^2} \sum_{k=0}^{m-1}(2k+1)\log\bigg(1+\frac{2k+1}{2\lambda_{2m}}\bigg)\\ &=-\frac14\log(2\lambda_{2m})-\frac1{(2m)^2} \sum_{k=0}^{m-1}(2k+1)\log\bigg(1+\frac{2k+1}{2\lambda_{2m}}\bigg). \end{aligned}$$

By using Riemann sums, this can be written as

$$-\frac1{(2m)^2}\sum_{k=0}^{m-1}(2k+1) \log(2k+1+2\lambda_{2m})=-\frac14\log(2\lambda_{2m})-2c^2\int_0^{1/(2c)}x\log(1+x)\,dx+o(1).$$

Hence

$$\begin{aligned} \, &-\frac1{(2m)^2}\sum_{k=0}^{m-1}(2k+1)\log(2k+1+2\lambda_{2m}) \\&\qquad=-\frac14\log(2m)-\frac14\log(2c)-2c^2\int_0^{1/(2c)}x\log(1+x)\,dx+o(1). \end{aligned}$$

A straightforward computation gives

$$I_{2, 2m}=-\frac14\log(2m)+\frac18-\frac14\log(2c)-2c^2\int_0^{1/(2c)}x\log(1+x)\,dx+o(1).$$

Therefore,

$$\lim_{m\to\infty}\biggl(I_{2, 2m}+\frac{1}2\log(2m)\biggr) =\frac{1}4-\frac{c}2-\frac{1}4\log(2c)+\biggl(c^2-\frac{1}4\biggr)\log\bigg(1+\frac{1}{2c}\bigg).$$

Similarly, we prove an analogous result for \(I_{2, 2m+1}\) and \(\lim_{m\to\infty}I_{2, 2m+1}=\lim_{m\to\infty}I_{2, 2m}\). Hence

$$ \lim_{n\to\infty}\biggl(I_{2, n}+\frac{1}2\log n\biggr) =\frac{1}4-\frac{c}2-\frac{1}4\log(2c)+\biggl(c^2-\frac{1}4\biggr)\log\biggl(1+\frac1{2c}\biggr).$$
(7)

From equations (4), (5), (6) and (7), we see that

$$\begin{aligned} \, &\lim_{n\to\infty}\biggl(\frac{1}{n^2}E^*_{\lambda_n, n}+\biggl(\frac{\lambda_n}{n}+\frac12\biggr) \log n\biggr) \\*&\qquad=\frac34+\biggl(c^2+c+\frac12\biggr)\log2+\frac{3c}{2}+c^2\log c-\biggl(c^2+c+\frac14\biggr)\log(1+2c). \end{aligned}$$

This completes the proof.
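The convergence in Proposition 5 can be illustrated numerically: the exact value \(E^*_{\lambda,n}\) of Proposition 3 is computable in log form via the log-gamma function, and the scaled quantity approaches the stated limit; a Python sketch with \(c=1/2\) and \(\lambda_n=cn\) (the choice of \(c\) and of \(n=200000\) is illustrative):

```python
import math

def log_gamma_lam(n, lam):
    """log of gamma_lambda(n), from its closed form."""
    m = n // 2
    if n % 2 == 0:
        return (2 * m * math.log(2) + math.lgamma(m + 1)
                + math.lgamma(m + lam + 0.5) - math.lgamma(lam + 0.5))
    return ((2 * m + 1) * math.log(2) + math.lgamma(m + 1)
            + math.lgamma(m + lam + 1.5) - math.lgamma(lam + 0.5))

def scaled_energy(n, lam):
    """(1/n^2) E*_{lam,n} + (lam/n + 1/2) log n, with E* from Proposition 3."""
    e = 0.5 * n * (n + 4 * lam - 1) * math.log(2)
    # gamma(n)/(2 gamma(n-2)) = (n + 2 lam theta_n)(n - 1 + 2 lam theta_{n-1})/2
    e += (n + 2 * lam * (n % 2)) * (n - 1 + 2 * lam * ((n - 1) % 2)) / 2.0
    e -= 2 * lam * (log_gamma_lam(n, lam) - math.lgamma(n // 2 + 1))
    e -= math.fsum(k * math.log(k + 2 * lam * (k % 2)) for k in range(1, n + 1))
    return e / n ** 2 + (lam / n + 0.5) * math.log(n)

c = 0.5
limit = (0.75 + 0.5 * math.log(2) + (1.5 + math.log(2)) * c
         + c * c * math.log(2 * c) - (c * c + c + 0.25) * math.log(1 + 2 * c))
approx = scaled_energy(200000, c * 200000)
```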

Corollary 1.

Let \(\beta>0\) . If we consider the energy

$$E_{\beta, \lambda, n}(x_1,\cdots,x_n)=\frac{\beta}2\sum_{i\neq j}\log\frac{1}{|x_i-x_j|} +\sum_{i=1}^nQ_\lambda(x_i),$$

one first proves that the minimum of the energy is attained at the zeros of the polynomial \(H_n^{2\lambda/\beta}(\sqrt{2/\beta}\,x)\). Further, under the condition \(\lim_{n\to\infty}\lambda_n/n=c\), and with \(a=\beta/4\), one shows by the same method that

$$\begin{aligned} \, &E^*_{\beta, c}:=\lim_{n\to\infty}\biggl(\frac{1}{n^2}E^*_{\beta, \lambda_n,n} +\biggl(\frac{\lambda_n}{n}+a\biggr)\log n\biggr) \\&\qquad =\frac{3}2a-a\log a+\biggl(\frac{3}{2} -\log a\biggr)c+\frac{c^2}{2a}\log\frac{c}{a}-\biggl(\frac{c^2}{2a}+c+\frac{a}2\biggr)\log\biggl(1+\frac{c}a\biggr). \end{aligned}$$

2.2. Density of the Zeros of Generalized Hermite Polynomials

Denote by \(x^{(n)}_1,\dots,x^{(n)}_n\) the zeros of the generalized Hermite polynomial \(H^{\lambda_n}_n\), where \(\lambda_n\) is some positive real sequence, and define on \(\mathbb R\) the probability measure

$$\nu_n=\frac1n\sum_{i=1}^n\delta_{x^{(n)}_i}.$$

The first two moments of the measure \(\nu_n\) are

$$m_1(\nu_n)=\int_{\mathbb R}t\nu_n(dt)=\frac1n\sum_{i=1}^nx^{(n)}_i=0,$$
$$\begin{aligned}m_2(\nu_n)&=\int_{\mathbb R}t^2\nu_n(dt)=\frac1n\sum_{i=1}^n(x^{(n)}_i)^2\\&=\frac{\gamma_{\lambda_n}(n)}{2n\gamma_{ \lambda_n}(n-2)} =\frac{(n+2\lambda_n\theta_n)(n-1+2\lambda_n\theta_{n-1})}{2n},\end{aligned}$$

where \(\theta_k\) is \(0\) if \(k\) is even and \(1\) if \(k\) is odd. One observes that the second moment \(m_2(\nu_n)\) is of order \(n\), so a scaling of order \(\sqrt n\) of the measure \(\nu_n\) is necessary. Let us denote

$$\widetilde{\nu}_n=\frac1n\sum_{i=1}^n\delta_{\sigma^{(n)}_i},\qquad \sigma^{(n)}_i=\frac{x^{(n)}_i}{\sqrt n} .$$

Similarly, we scale the energy \(E_{\lambda_n, n}\) of equation (1) as

$$\widetilde{E}_{\lambda_n, n}(x_1,\cdots,x_n)=2\sum_{i< j}\log\frac{1}{|x_i-x_j|}+n\sum_{i=1}^nQ_{\alpha_n}(x_i),$$

where \( \alpha_n=\lambda_n/n\).

The minimum \(\widetilde{E}^*_{\alpha_n, n}\) of the energy \(\widetilde{E}_{\alpha_n, n}\) is attained at the \(n!\) points obtained by permuting the coordinates of \(\big(\sigma^{(n)}_1,\dots,\sigma^{(n)}_n\big)\), the scaled zeros of the generalized Hermite polynomial. Moreover,

$$\widetilde{E}^*_{\alpha_n, n}=\min_{\mathbb R^n}\widetilde{E}_{\alpha_n, n}(x_1,...,x_n)=\widetilde{E}_{\alpha_n, n}(\sigma^{(n)}_1,...,\sigma^{(n)}_n).$$

For a probability measure \(\mu\) on \(\mathbb R\), consider the energy

$$E_c(\mu)=\int_{\mathbb R}\log\frac{1}{|s-t|}\mu(ds)\mu(dt)+\int_{\mathbb R}Q_c(t)\mu(dt),$$

where \( Q_c(t)=t^2+2c\log(1/|t|).\) The measure which realizes the minimum of the energy \(E_c\) is \(\nu_c\), with density \(f_c\) given by

$$f_c(t)=\left\{\begin{aligned} \, &\frac{1}{\pi }\frac{\sqrt{(t^2-a^2)(b^2-t^2)}}{| t|}\quad \text{if}\;t\in S\\&0\qquad\qquad\qquad\qquad\quad\;\;\; \text{if}\;t\notin S,\end{aligned}\right.$$

where \(S=[-b,-a]\cup[a,b]\), \(a=\sqrt{1+c-\sqrt {1+2c}}\), and \(b=\sqrt{1+c+\sqrt {1+2c}}\).

Therefore,

$$E^*_c=E_c(\nu_c)=\min_{\mu\in\mathfrak{M}^1(\mathbb R)}E_c(\mu),$$

and the value of the energy is given by

$$E^*_c=\frac{3}4+\frac{1}2\log2+\biggl(\frac{3}2+\log2\biggr)c+c^2\log(2c)-\biggl(c^2+c+\frac{1}4\biggr)\log(1+2c);$$

see Theorem 4.1 in [9].
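As a sanity check, \(f_c\) is indeed a probability density, and its second moment equals \(1/2+c\), in agreement with the limit (5); a Python sketch with the illustrative value \(c=1/2\) (midpoint rule on \([a,b]\), doubled by symmetry):

```python
import math

c = 0.5
a = math.sqrt(1 + c - math.sqrt(1 + 2 * c))
b = math.sqrt(1 + c + math.sqrt(1 + 2 * c))

def f_c(t):
    """Density of nu_c on its support."""
    return math.sqrt((t * t - a * a) * (b * b - t * t)) / (math.pi * abs(t))

steps = 200000
h = (b - a) / steps
# total mass and second moment (factor 2 from the symmetric branch [-b,-a])
mass = 2 * h * sum(f_c(a + (i + 0.5) * h) for i in range(steps))
second_moment = 2 * h * sum(t * t * f_c(t)
                            for t in (a + (i + 0.5) * h for i in range(steps)))
```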

On the other hand, remark that \(\widetilde{E}^*_{\alpha_n, n}=E^*_{\lambda_n, n}+\frac{n(n-1)}{2}\log n+n\lambda_n\log n.\) Therefore, by Proposition 5, one deduces that \(\lim_{n\to\infty}\widetilde{E}^*_{\alpha_n, n}/n^2=E^*_c\).

Theorem 1.

Let \(\lambda_n\) be a positive real sequence and assume that \( \lim_{n\to\infty}\lambda_n/n=c\). Then the measure \(\widetilde\nu_n\) converges weakly to the probability measure \(\nu_c\) defined above. This means that, for every bounded continuous function \(f\) on \(\mathbb R\),

$$\lim_{n\to\infty}\int_{\mathbb R}f(x)\widetilde\nu_n(dx)=\int_{\mathbb R}f(x)\nu_c(dx).$$

The proof of the theorem can be found in [10], [20], [21] (or [19], Theorem 3.1 and Theorem 2.1), and references therein. We give it here for completeness.

Proof.

For a probability measure \(\mu\), consider its energy

$$E_{\alpha_n,n}(\mu)=\int_{\mathbb R^2}\log\frac1{|x-y|}\mu(dx)\mu(dy)+\int_\mathbb R Q_{\alpha_n}(x)\mu(dx).$$

As stated above, the measure which realizes the minimum of \(E_{\alpha_n,n}(\mu)\) is \(\nu_{\alpha_n}\), with density \(f_{\alpha_n}\) and support \(S_{\alpha_n}=[-b_n,-a_n]\cup[a_n,b_n]\). Moreover, the value of the energy is given by

$$\begin{aligned} \, &E^*_{\alpha_n}:=E_{\alpha_n,n}(\nu_{\alpha_n}) \\*&\qquad=\frac{3}4+\frac{1}2\log2+\biggl(\frac{3}2+\log2\biggr)\alpha_n+\alpha_n^2\log(2\alpha_n)-\biggl(\alpha_n^2 +\alpha_n+\frac{1}4\biggr)\log(1+2\alpha_n). \end{aligned}$$

Set

$$\tau_n=\frac1{n(n-1)}\inf_{\mathbb R^n}\widetilde{E}_{\alpha_n, n}(x).$$

Recall that \(Q_{\alpha_n}(\sqrt n x)=nx^2+2\alpha_n\log(1/|x|)-\alpha_n\log n\). Then

$$\int_{\mathbb R^n}\widetilde{E}_{\alpha_n, n}(x_1,...,x_n)\mu(dx_1)...\mu(dx_n)=n(n-1)\int_{\mathbb R^2}\log\frac1{|x-y|}\mu(dy)\mu(dx)+n^2\int_\mathbb R Q_{\alpha_n}(x)\mu(dx),$$

that is,

$$\int_{\mathbb R^n}\widetilde{E}_{\alpha_n, n}(x_1,...,x_n)\mu(dx_1)...\mu(dx_n)=n(n-1)E_{\alpha_n,n}(\mu)+n\int_\mathbb R Q_{\alpha_n}(x)\mu(dx).$$

For \(\mu=\nu_{\alpha_n}\), we have

$$\tau_n\leq E^*_{\alpha_n}+\frac1{n-1}\int_\mathbb R Q_{\alpha_n}(x)\nu_{\alpha_n}(dx).$$

For \(x\in S_{\alpha_n}\), we have \(|Q_{\alpha_n}(x)|\leq\max(Q_{-\alpha_n}(a_n),Q_{-\alpha_n}(b_n))\). The right-hand side of the previous inequality is bounded, since \(\alpha_n\), \(a_n\), and \(b_n\) are convergent sequences. Moreover, \(\nu_{\alpha_n}\) is a probability measure, so by applying the dominated convergence theorem, we obtain

$$\limsup_n\tau_n\leq \limsup_n E^*_{\alpha_n}=E_c^*.$$

Now set

$$k_n(x,y)=\log\frac1{|x-y|}+\frac12Q_{\alpha_n}(x)+\frac12Q_{\alpha_n}(y).$$

Then

$$k_n(x,y)\geq\frac12 h_n(x)+\frac12 h_n(y),$$

where \(h_n(x)=Q_{\alpha_n}(x)-\log(1+x^2)\). Further,

$$\sum_{i\neq j}k_n(x_i,x_j)=\sum_{i\neq j}\log\frac1{|x_i-x_j|}+(n-1)\sum_{i=1}^nQ_{\alpha_n}(x_i),$$

and

$$\begin{aligned} \, \widetilde E_{\alpha_n,n}(x)&=\sum_{i\neq j}k_n(x_i,x_j)+\sum_{i=1}^nQ_{\alpha_n}(x_i)\\ &\geq \frac12\sum_{i\neq j}\bigl(h_n(x_i)+h_n(x_j)\bigr)+\sum_{i=1}^nQ_{\alpha_n}(x_i).\end{aligned}$$

Remark that \(Q_{\alpha_n}'(x)=2x-{2\alpha_n}/x\) for \(x>0\); hence

$$\min_{\mathbb R\setminus\{0\}} Q_{\alpha_n}(x)=\alpha_n(1-\log\alpha_n).$$

It follows that

$$n(n-1)\int_\mathbb R h_n(s)\widetilde\nu_n(ds)\leq n(n-1)\tau_n-n\int_\mathbb R Q_{\alpha_n}(s)\widetilde\nu_n(ds)\leq n(n-1)\tau_n-n\alpha_n(1-\log\alpha_n).$$

Thus,

$$\int_\mathbb R h_n(s)\widetilde\nu_n(ds)\leq \tau_n -\frac{\alpha_n}{n-1}(1-\log\alpha_n),$$

hence, using the bound on \(\tau_n\),

$$ \int_\mathbb R h_n(s)\widetilde\nu_n(ds)\leq E^*_{\alpha_n}+\frac1{n-1}\int_\mathbb R Q_{\alpha_n}(x)\nu_{\alpha_n}(dx) -\frac{\alpha_n}{n-1}(1-\log\alpha_n).$$
(8)
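The elementary minimization used in the last step can be checked numerically: since \(Q_\alpha(x)=x^2+2\alpha\log(1/|x|)\) and \(Q_\alpha'(x)=2x-2\alpha/x\) for \(x>0\), the minimum over \(\mathbb R\setminus\{0\}\) is \(\alpha(1-\log\alpha)\), attained at \(|x|=\sqrt\alpha\); a quick grid-search sketch in Python (the value \(\alpha=0.5\) is an arbitrary test choice):

```python
import math

alpha = 0.5

def q(x):
    """Q_alpha(x) = x^2 + 2*alpha*log(1/|x|)."""
    return x * x + 2 * alpha * math.log(1.0 / abs(x))

# grid search for the minimum over (0, 3]; by symmetry this covers R \ {0}
grid_min = min(q(0.001 * i) for i in range(1, 3001))
closed_form = alpha * (1 - math.log(alpha))
```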

On the other hand, \(\alpha_n\) converges to \(c\geq 0\) and \(0\leq a_1\leq\alpha_n\leq a_2\); the measure \(\nu_{\alpha_n}\) converges weakly to the measure \(\nu_c\), and \(Q_{\alpha_n}\) converges to \(Q_c\) uniformly on compact subsets of \(\mathbb R\setminus\{0\}\). Moreover, \(h_n(x)\geq h(x)\), where \(h(x)=\min(h_1(x),h_2(x))\) and \(h_i(x)=x^2+2a_i\log(1/|x|)-\log(1+x^2)\). Putting all these relations together, we deduce that the right-hand side in equation (8) remains bounded by a constant \(C\), and

$$\int_\mathbb R h(s)\widetilde\nu_n(ds)\leq C.$$

By Prokhorov's criterion, the sequence \((\widetilde\nu_n)\) is relatively compact for the tight topology. Assume that, along some subsequence, \(\lim_{k\to\infty}\widetilde\nu_{n_k}=\nu\), and, for \(\ell>0\), define the truncated kernel \(k^\ell_n(x,y)=\min(k_n(x,y),\ell)\) and its energy

$$E^\ell(\mu)=\int_{\mathbb R^2}k_n^\ell(x,y)\mu(dx)\mu(dy).$$

Then

$$E^\ell(\widetilde\nu_{n_k})\leq\frac{n_k-1}{n_k}\tau_{n_k}+\frac\ell {n_k}-\frac{\alpha_{n_k}}{n_k-1}(1-\log\alpha_{n_k}).$$

Letting \(k\to \infty\), we obtain

$$E^\ell(\nu)\leq E_c^*.$$

Applying the monotone convergence theorem, we obtain

$$E_c(\nu)=\lim_{\ell\to\infty}E^\ell(\nu)\leq E_c^*.$$

Since \(E_c^*\) is the minimum of the energy, \(E_c(\nu)=E_c^*\). By uniqueness of the minimizer, \(\nu=\nu_c\), and hence the whole sequence \(\widetilde\nu_n\) converges weakly to \(\nu_c\). This completes the proof.

Proposition 6.

Consider the zeros of the polynomials \(H_n^{2\lambda_n/\beta}(\sqrt{2/\beta}\,x)\). Under the same condition on \(\lambda_n\) \((\lim_{n\to\infty}\lambda_n/n=c)\), a similar result holds, with limit measure \(\nu_{\beta, c}\): its support is \(S_{\beta}=[-b,-a]\cup[a,b]\) and its density is \(f_{\beta,c}(t)=\sqrt{2/\beta}\,f_{2c/\beta}(\sqrt{2/\beta}\,t)\), where \(a=\sqrt{\beta/2}\sqrt{1+2c/\beta-\sqrt {1+4c/\beta}}\) and \(b=\sqrt{\beta/2}\sqrt{1+2c/\beta+\sqrt {1+4c/\beta}}\).

3. Probability Density Function with Second Moment Constraints

Consider the eigenvalue probability density function (p.d.f.), defined on \(\mathbb R^n\) for \(\beta>0\) and \(\lambda\geq 0\) by

$$ \mathbb{P}_{\lambda, n}({x})=\frac1{\widetilde Z_n}\delta\biggl(\sum_{i=1}^nx^2_i-\frac{a_n}n\biggr)\prod_{i=1}^n|x_i|^{\lambda}\prod_{1\leq i<j\leq n}|x_i-x_j|^{\beta}dx,$$
(9)

where \( a_n=\gamma_{{2\lambda/\beta}}(n)/(2\gamma_{{2\lambda/\beta}}(n-2))\), \(\delta\) is the Dirac delta function, and \(\widetilde{Z}_n\) is a normalizing constant.

Let us define the statistical density of eigenvalues as:

$$h^\lambda_{F, n}(x_1)=\int_{\mathbb R^{n-1}}\mathbb{P}_{\lambda, n}({ x})dx,$$

where the integral is taken over the \(n-1\) variables \(x_2,\dots,x_n\).

It is straightforward to see that \(h^\lambda_{F, n}\) is a continuous probability density with respect to the Lebesgue measure. We define on \(\mathbb R\) the probability measure \(\nu_n^{\lambda}\) by

$$\nu_n^{\lambda}(dx)=h^\lambda_{F, n}(x)(dx).$$

Direct computations give, for a continuous function \(f\) on \(\mathbb R\),

$$ \int_{\mathbb R}f(t)\nu_n^{\lambda}(dt)=\mathbb{E}_n\biggl(\frac1n\sum_{i=1}^nf(x_i)\biggr),$$
(10)

where \( \mathbb{E}_n\) is the expectation with respect to the probability \(\mathbb{P}_{\lambda, n}\).
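As a concrete illustration of identity (10) (a numerical sketch, not part of the argument), take \(n=2\), \(\beta=2\), \(\lambda=0\): the p.d.f. (9) then lives on a circle, is proportional to \(|x_1-x_2|^2\) there, and can be sampled by rejection; both sides of (10) with \(f(x)=x^2\) agree. The radius \(r=1\) is an arbitrary illustrative choice, not the normalization of (9).

```python
import numpy as np

rng = np.random.default_rng(1)
# n = 2, beta = 2, lambda = 0: the p.d.f. on the circle x1^2 + x2^2 = r^2
# is proportional to |x1 - x2|^2 = r^2 (1 - sin 2t) in the angle t
r = 1.0
t = rng.uniform(0, 2*np.pi, size=400000)
keep = rng.uniform(size=t.size) < (1 - np.sin(2*t))/2   # rejection sampling, envelope 2
t = t[keep]
x1, x2 = r*np.cos(t), r*np.sin(t)

# both sides of equation (10) with f(x) = x^2
lhs = np.mean(x1**2)                 # estimate of the integral of f against nu_2
rhs = np.mean((x1**2 + x2**2)/2)     # E[(1/2)(f(x1)+f(x2))] = r^2/2 exactly on the circle
print(lhs, rhs)
```

Both estimates are close to \(r^2/2=1/2\), the right-hand side exactly so, the left-hand side up to Monte Carlo error, as equation (10) predicts by exchangeability of \((x_1,x_2)\).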

Interpretation of the density of eigenvalues. Let \(Herm(n,\mathbb F)\) be the space of Hermitian matrices with coefficients in \(\mathbb F=\mathbb R\), \(\mathbb C\) or \(\mathbb H\) (the quaternion field), and let \(\beta=1,2\) or \(4\) be the dimension of \(\mathbb F\) as a real vector space. We denote by \({\cal P}_n\) the generalized Gaussian probability density on \(Herm(n,\mathbb F)\) defined by \({\cal P}_n(dX)=(1/Z_n)|\det(X)|^{\lambda}e^{-{\rm tr}(X^2)}dX,\) where \(dX\) is the Lebesgue measure on \(Herm(n,\mathbb F)\) and \(Z_n\) is a normalizing constant. Moreover, we denote by

$$\Omega_n=\{X\in Herm(n,\mathbb F)\mid {\rm tr}(X^2)=a_n/n\},$$

the subset of fixed-trace Hermitian matrices.

For a continuous function \(f\) on \(\mathbb R\), using the functional calculus, we obtain

$$\int_{\Omega_n}{\rm tr}f(X){\cal P}_n(dX)=n\int_\mathbb R f(t)\nu_n^\lambda(dt).$$

Observe that, for \(\beta=1,2\) or \(4\), the p.d.f. (9) is closely related to the ensemble of fixed-trace generalized Gaussian Hermitian matrices, and the probability measure \(\nu_n^\lambda\) is the eigenvalue counting measure; see also [11] for the case \(\lambda=0\), where the author computed the limit of the sequence of measures \(\nu_n^0\).
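For \(\beta=2\) and \(\lambda=0\) this interpretation can be tested numerically: since the GUE density \(e^{-{\rm tr}(X^2)}\) is radial in the entries, the direction \(X/\|X\|\) is independent of the radius, so rescaling a GUE sample onto a sphere \({\rm tr}(X^2)=\mathrm{const}\) produces the fixed-trace ensemble. In the sketch below the trace is fixed to \(n^2/2\), a normalization chosen for the illustration (not taken from the text) so that the eigenvalues scaled by \(1/\sqrt n\) have mean square exactly \(1/2\), matching the semicircle law on \([-\sqrt2,\sqrt2]\).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400

# sample a GUE-type matrix; the overall Gaussian scale is irrelevant because
# the law of the direction X/||X|| is the same for every radial density
A = rng.normal(size=(n, n)) + 1j*rng.normal(size=(n, n))
X = (A + A.conj().T)/2

# rescale onto the fixed-trace sphere tr(X^2) = n^2/2
X *= np.sqrt(n**2/2/np.trace(X @ X).real)

# eigenvalues scaled by 1/sqrt(n), as in Theorem 2 below
eig = np.linalg.eigvalsh(X)/np.sqrt(n)
print(eig.min(), eig.max(), np.mean(eig**2))  # edges near ±sqrt(2), mean square 1/2
```

The spectral edges land near \(\pm\sqrt2\) up to the usual \(O(n^{-2/3})\) edge fluctuations, and the mean square is \(1/2\) exactly, by construction.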

Our goal in this section is to study the asymptotics of the measure \(\nu_n^\lambda\) when \(n\) goes to infinity.

Theorem 2.

Let \(\lambda_n\) be a positive real sequence such that

$$\lim_{n\to\infty}\frac{\lambda_n}n=c.$$

Then, after scaling by the factor \(1/\sqrt n\), the probability measure \(\nu^{\lambda_n}_n\) converges to the probability measure \(\nu_{\beta, c}\), whose density is \(f_{\beta,c}\) and whose support is \(S=[-b,-a]\cup[a,b]\), where

$$f_{\beta,c}(x)=\frac2{\beta\pi |x|}\sqrt{(x^2-a^2)(b^2-x^2)},$$

\( a^2=\beta/2\,(1+2c/\beta-\sqrt {1+4c/\beta})\) and \(b^2=\beta/2\,(1+2c/\beta+\sqrt {1+4c/\beta})\). The convergence holds in the weak sense: for every bounded continuous function \(\varphi\),

$$\lim_{n\to\infty}\int_{\mathbb R}\varphi(x/\sqrt n)\nu^{\lambda_n}_n(dx)=\int_{\mathbb R}\varphi(x)\nu_{\beta, c}(dx).$$

For \(\beta=2\) and \(c=0\), one recovers the limit density of the fixed-trace Gaussian ensemble; see [8] and [11]. Moreover, for \(\beta=1\), we retrieve the limit density of the zeros of the generalized Hermite polynomials of Theorem 1.
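Two quick numerical checks on the limit density, independent of the proof: \(f_{\beta,c}\) integrates to \(1\) over \(S\), and its second moment equals \(\beta/4+c\) (for \(\beta=2\) this is the value \(1/2+c\) computed in the proof of Proposition 7 below; the general-\(\beta\) value stated here is our own evaluation of the same beta-function integral). A minimal sketch:

```python
import numpy as np
from scipy.integrate import quad

def density(x, beta, c):
    """Density f_{beta,c} of nu_{beta,c} evaluated at x > 0 (it is even in x)."""
    a2 = beta/2*(1 + 2*c/beta - np.sqrt(1 + 4*c/beta))
    b2 = beta/2*(1 + 2*c/beta + np.sqrt(1 + 4*c/beta))
    x2 = x*x
    if not (a2 < x2 < b2):
        return 0.0
    return 2/(beta*np.pi*abs(x))*np.sqrt((x2 - a2)*(b2 - x2))

def moment(k, beta, c):
    """k-th moment of nu_{beta,c}; the factor 2 accounts for the symmetric support."""
    a2 = beta/2*(1 + 2*c/beta - np.sqrt(1 + 4*c/beta))
    b2 = beta/2*(1 + 2*c/beta + np.sqrt(1 + 4*c/beta))
    val, _ = quad(lambda x: x**k*density(x, beta, c), np.sqrt(a2), np.sqrt(b2))
    return 2*val

# total mass 1 and second moment beta/4 + c, for several (beta, c)
for beta, c in [(2.0, 0.0), (2.0, 1.0), (1.0, 0.5), (4.0, 2.0)]:
    print(beta, c, moment(0, beta, c), moment(2, beta, c), beta/4 + c)
```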

To prove the theorem, one needs some preparation. Let \(Q\) be a lower semicontinuous and convex function on \(\mathbb R\). For a probability measure \(\mu\) with compact support in \(\mathbb R\), we define its potential by

$$U^{\mu}(x)=\int_{\mathbb R}\log\frac1{|x-y|}\mu(dy),$$

and its energy by

$$E_{\beta, Q}(\mu)=\frac\beta2\int_{\mathbb R}U^{\mu}(x)\mu(dx)+\int_{\mathbb R}Q(x)\mu(dx).$$

For \(c\geq 0\), we use the notation

$$Q_c(x)=x^2+2c\log(1/|x|),\quad \widetilde{Q_c}(x)= 2c\log(1/|x|).$$

If \(\mu\in{\cal M}_c(\mathbb R)\), the space of probability measures on \(\mathbb R\) with compact support, then \(E_{\beta, \widetilde {Q_c}}(\mu)\) is bounded below. This allows us to define

$$E^*_{\beta, \widetilde{Q_c}}=\inf_{\mu\in{\cal M}_c(\mathbb R)}E_{\beta, \widetilde{Q_c}}(\mu).$$

By Theorem II.2.3 in [19], there is a unique probability measure with compact support \(\mu^*\) such that

$$E^*_{\beta, \widetilde{Q_c}}=E_{\beta, \widetilde{Q_c}}(\mu^*).$$

\(\mu^*\) is called the equilibrium measure. In the sequel, we will prove that \(\mu^*=\nu_{\beta, c}\): we saw that \(E^*_{\beta, c}=E_{\beta, Q_c}(\nu_{\beta, c})=\inf_{\mu\in{\cal M}_c(\mathbb R)}E_{\beta, Q_c}(\mu)\), and the result follows by uniqueness.
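The equilibrium measure can also be approximated numerically by minimizing the discretized energy over \(n\) points (weighted Fekete points). The sketch below uses the illustrative choices \(\beta=2\), \(c=1/2\), \(n=60\) and an off-the-shelf optimizer, none of which come from the text; the minimizing configuration recovers the two-interval support \([-b,-a]\cup[a,b]\) of Theorem 2.

```python
import numpy as np
from scipy.optimize import minimize

beta_, c, n = 2.0, 0.5, 60   # illustrative values

def energy_grad(x):
    # discretized energy for Q_c(x) = x^2 + 2c log(1/|x|):
    # -(beta/2)/n^2 * sum_{i != j} log|x_i - x_j| + (1/n) * sum_i Q_c(x_i)
    d = x[:, None] - x[None, :]
    np.fill_diagonal(d, 1.0)                       # mask the diagonal (log 1 = 0)
    e = -(beta_/2)/n**2*np.sum(np.log(np.abs(d))) \
        + np.sum(x**2 - 2*c*np.log(np.abs(x)))/n
    inv = 1.0/d
    np.fill_diagonal(inv, 0.0)
    g = -beta_/n**2*np.sum(inv, axis=1) + (2*x - 2*c/x)/n
    return e, g

# start near the predicted support [-b,-a] U [a,b]
x0 = np.concatenate([np.linspace(-1.7, -0.3, n//2), np.linspace(0.3, 1.7, n//2)])
res = minimize(energy_grad, x0, jac=True, method="L-BFGS-B")
x = np.sort(res.x)
# Theorem 2 predicts a = sqrt(1.5 - sqrt(2)) ~ 0.29 and b = sqrt(1.5 + sqrt(2)) ~ 1.71
print(np.abs(x).min(), np.abs(x).max())
```

The logarithmic repulsion keeps the points separated, and the term \(2c\log(1/|x|)\) pushes them away from the origin, producing the gap \((-a,a)\) in the support.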

Proposition 7.

Let \(\lambda_n\) be a positive real sequence such that \(\lim_{n\to\infty}\lambda_n/n=c\) . Then

  1. (i)

    \(\lim_{n\to\infty}-(1/n^2)\log\widetilde Z_n=E_{\beta, \widetilde{Q_c}}(\mu^*).\)

  2. (ii)

    \(\lim_{n\to\infty}-(1/n^2)\log\widetilde Z_n=E_{\beta, \widetilde{Q_c}}(\nu_{\beta, c})=E^*_{\beta, c}-(c+\beta/4)-(c+\beta/4)\log(2/\beta),\)

    where \(\widetilde Z_n\) is the normalizing constant given in equation (9).

  3. (iii)

    The equilibrium measure \(\mu^*=\nu_{\beta, c}\) .

Proof.

(i). For the proof of this step, see, for instance, [19], Theorem III.2.1, Theorem III.4.1, Proposition IV.4.2, and the references therein.

(ii). Let \(\alpha>0\), and consider the integral

$$ Z_n(\alpha)=\int_{\mathbb R^n}e^{-\alpha\sum_{i=1}^nx_i^2}\prod_{i=1}^n|x_i|^{2\lambda_n}\prod_{1\leq i<j\leq n}|x_i-x_j|^{\beta}dx.$$
(11)

By performing a change of variables to polar coordinates, we obtain

$$Z_n(\alpha)=\int_0^{+\infty}e^{-\alpha r^2}r^{2n\lambda_n+(\beta/2)n(n-1)+n-1}dr\int_{S_{n-1}}\prod_{i=1}^n|x_i|^{2\lambda_n} \prod_{1\leq i<j\leq n}|x_i-x_j|^{\beta}d\sigma_{n-1}(x),$$

where \(S_{n-1}\) is the unit sphere of \(\mathbb R^{n}\) and \(\sigma_{n-1}\) is the uniform measure on \(S_{n-1}\). The integral in the variable \(r\) is easily computed:

$$ Z_n(\alpha) =\frac{\Gamma(n\lambda_n+(\beta/4)n(n-1)+n/2)}{\alpha^{n\lambda_n+(\beta/4)n(n-1)+n/2}} \int_{S_{n-1}}\prod_{i=1}^n|x_i|^{2\lambda_n} \prod_{1\leq i<j\leq n}|x_i-x_j|^{\beta}d\sigma_{n-1}(x).$$
(12)

Moreover, if we put \(u_i=\sqrt{a_n/n}\,x_i\), we see that

$$Z_n(\alpha)=\frac{\Gamma(n\lambda_n+(\beta/4)n(n-1)+n/2)} {\alpha^{n/2}(\sqrt{\frac{a_n\alpha}{n}})^{2n\lambda_n+(\beta/2)n(n-1)} }\widetilde Z_n.$$

Let us choose \(\alpha=n\); then

$$\log(\widetilde Z_n)=\log(Z_n)-\log\Gamma\bigg(n\lambda_n+\frac{\beta}{4}n(n-1)+\frac n2\bigg)+\bigg({n\lambda_n+\frac{\beta}{4}n(n-1)}\bigg)\log a_n+\frac n2\log n.$$

By Proposition 4.6 in [9],

$$\frac1{n^2}\log(Z_n)=-E^*_{\beta, c}+o(1).$$

On the other hand, using Stirling's asymptotic formula for the gamma function, we obtain

$$\log\Gamma\bigg(n\lambda_n+\frac{\beta}{4}n(n-1)+\frac n2\bigg)=n^2\bigg(c+\frac{\beta}{4}\bigg) \log\bigg(n^2\bigg(c+\frac{\beta}{4}\bigg)\bigg)-n^2\bigg(c+\frac{\beta}{4}\bigg)+o(n^2).$$
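This leading-order Stirling approximation is easy to check numerically. In the sketch below \(\lambda_n=cn\) exactly, with the illustrative values \(\beta=2\), \(c=1/2\) (any admissible pair would do):

```python
import numpy as np
from scipy.special import gammaln

beta_, c = 2.0, 0.5   # illustrative values; lambda_n = c*n exactly

def stirling_gap(n):
    """(1/n^2) log Gamma(n*lambda_n + (beta/4)n(n-1) + n/2) minus its leading term."""
    N = n*(c*n) + beta_/4*n*(n - 1) + n/2
    leading = (c + beta_/4)*(np.log(n**2*(c + beta_/4)) - 1)
    return gammaln(N)/n**2 - leading

for n in [50, 200, 800]:
    print(n, stirling_gap(n))   # gap shrinks toward 0 as n grows
```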

Further,

$$ a_n=n^2\bigg(\frac12+\frac{2c}{\beta}\bigg)+o(n^2),$$
(13)

and

$$n\lambda_n+\frac{\beta}{4}n(n-1)=n^2\bigg(c+\frac{\beta}{4}\bigg)+o(n^2).$$

Putting all this together, we have

$$\frac1{n^2}\log(\widetilde Z_n)=-E^*_{\beta, c}+\bigg(c+\frac{\beta}4\bigg)+\bigg(c+\frac{\beta}{4}\bigg)\log\bigg(\frac 2\beta\bigg)+o(1).$$

To complete the proof, we split the discussion into two cases.

First case. Assume that \(\beta=2\). By Proposition 4.7 in [9],

$$E^*_{2,c}=\frac34+\frac12\log2+\bigg(\frac{3}{2}+\log2\bigg)c+c^2\log(2c) -\bigg(c^2+c+\frac14\bigg)\log(1+2c).$$

Moreover,

$$E^*_{2,c}=E_{Q_c}(\nu_c)=E_{\widetilde{Q_c}}(\nu_c)+\int_Sx^2\nu_c(dx)$$

and

$$\int_Sx^2\nu_c(dx)=\frac2\pi\int_a^bx\sqrt{(x^2-a^2)(b^2-x^2)}dx.$$

By the change of variable \(u=x^2\), we see that

$$\int_Sx^2\nu_c(dx)=\frac1\pi\int_{a^2}^{b^2}\sqrt{(x-a^2)(b^2-x)}dx=\frac1\pi(b^2-a^2)^2 B\bigg(\frac32,\frac32\bigg),$$

where \(B\) stands for the beta function. This gives

$$\int_Sx^2\nu_c(dx)=\frac{1}2+c,\qquad E_{\widetilde{Q_c}}(\nu_c)=E^*_{2,c}-\bigg(\frac{1}2+c\bigg).$$
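Before turning to the general case, note that this beta-function evaluation is easy to confirm numerically; in the sketch below \(c=0.7\) is an arbitrary test value:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import beta as B

def check(c):
    a2 = 1 + c - np.sqrt(1 + 2*c)   # a^2 for beta = 2
    b2 = 1 + c + np.sqrt(1 + 2*c)   # b^2 for beta = 2
    # direct quadrature of (1/pi) * int sqrt((u - a^2)(b^2 - u)) du over [a^2, b^2]
    direct, _ = quad(lambda u: np.sqrt((u - a2)*(b2 - u)), a2, b2)
    # closed form via the beta function, and the claimed value 1/2 + c
    closed = (b2 - a2)**2*B(1.5, 1.5)
    return direct/np.pi, closed/np.pi, 0.5 + c

print(check(0.7))  # all three coincide: 1.2
```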

Second case. Let \(\beta>0\). We saw that

$$E_{\beta, \widetilde{Q_c}}(\nu_{\beta, c})=\frac{\beta}2\int_SU^{\nu_c}(x)\nu_{\beta, c}(dx)+\int_S\widetilde{Q_c}(x)\nu_{\beta, c}(dx).$$

It follows that

$$E_{\beta, \widetilde{Q_c}}(\nu_{\beta, c})=\frac{\beta}{2}E_{ \widetilde{Q_{2c/\beta}}}(\nu_{\beta, c})$$

and

$$ E_{\beta, \widetilde{Q_c}}(\nu_{\beta, c})=\frac{\beta}{2}E^*_{2,2c/\beta}-\bigg(\frac\beta4+c\bigg).$$
(14)

Moreover,

$$\frac\beta2E^*_{2,2c/\beta}=\frac{3\beta}8+\frac\beta4\log2+\bigg(\frac{3}{2}+\log2\bigg)c+ \frac{2c^2}{\beta}\log\bigg(\frac{4c}{\beta}\bigg)-\bigg(\frac{2c^2}{\beta}+c+\frac\beta8\bigg) \log\bigg(1+\frac{4c}{\beta}\bigg).$$

Adding and subtracting \((\beta/4)\log(2/\beta)+c\log(2/\beta)\) on the right-hand side, we obtain

$$\frac\beta2E^*_{2,2c/\beta}=E_{\beta, c}^*-\bigg(c+\frac\beta4\bigg)\log\frac2\beta,$$

where \(E^*_{\beta, c}\) is the energy given in Corollary 1. Substituting the previous equation in equation (14), we obtain

$$E_{\beta, \widetilde{Q_c}}(\nu_{\beta, c})=E_{\beta,c}^* -\bigg(\frac\beta4+c\bigg)-\bigg(c+\frac\beta4\bigg)\log\frac2\beta,$$

which gives the desired result.

(iii). The equality of energies \(E_{\beta, \widetilde{Q_c}}(\mu^*)= E_{\beta, \widetilde{Q_c}}(\nu_{\beta, c})\) implies, by uniqueness, that \(\mu^*=\nu_{\beta, c}\), and hence \(\widetilde\nu_n\) converges weakly to \(\nu_{\beta,c}\).

Lemma 1.

For a positive sequence \(\lambda_n\) such that \(\lim_{n\to\infty}\lambda_n/n=c\) , the probability \(\mathbb {P}_{\lambda_n, n}\) concentrates in a neighborhood of the points where the function

$$K_n(x)=\frac{\beta}{2}\sum_{i\neq j}\log\frac1{|x_i-x_j|}+(n-1)\sum_{i=1}^n\widetilde{Q_{\lambda_n}}(x_i),$$

attains its minimum, in the following sense: for \(\eta>0\), define the set

$$A_{\eta, n}=\{x\in S_{n-1}\mid K_n(x)\leq (E^*_{\beta, \widetilde{Q_c}}+\eta)n^2\};$$

then \(A_{\eta, n}\) is compact and, moreover,

$$\lim_{n\to\infty}\mathbb{P}_{\lambda_n,n}( A_{\eta, n})=1,$$

where \(S_{n-1}\) denotes the sphere of \(\mathbb R^{n}\) with radius \(\sqrt{a_n/n}\).

Proof.

First, one sees that \(K_n\) is a lower semicontinuous function. Thus, \(A_{\eta,n}\) is closed in the sphere \(S_{n-1}\) and hence compact. Let \(\varepsilon>0\). By the definition of \(A_{\eta,n}\), on \(S_{n-1}\backslash A_{\eta,n}\) we have

$$K_n(x)>(E^*_{\beta, \widetilde{Q_c}}+\eta)n^2.$$

Hence

$$ \mathbb{P}_{\lambda_n, n}(\mathbb R^n\backslash A_{\eta,n})\leq\frac{2\pi^{ n/2}}{\Gamma(n/2)}\bigg(\frac{a_n}n\bigg)^{(n-1)/2}\frac1{\widetilde Z_n}e^{-(E^*_{\beta, \widetilde{Q_c}}+\eta)n^2}.$$
(15)

By Proposition 7, we have

$$\lim_{n\to\infty}-\frac1{n^2}\log\widetilde{Z_n}=E^*_{\beta, \widetilde{Q_c}}.$$

Moreover, applying Stirling’s formula, we obtain

$$\begin{aligned} \, &\log \bigg(\frac{2\pi^{n/2}}{\Gamma(n/2)}\bigg(\frac{a_n}n\bigg)^{(n-1)/2}\bigg) \\&\qquad=\frac{n-1}2\log\frac{a_n}n+\log2+\frac n2\log\pi-\bigg(\frac n2-\frac12\bigg)\log\frac n2+\frac n2-\frac12\log2\pi+o\bigg(\frac1n\bigg). \end{aligned}$$

Since \(a_n\sim n^2(1/2+2c/\beta)\), it follows that

$$\lim_{n\to\infty}\frac1{n^2}\log\bigg(\frac{2\pi^{n/2}}{\Gamma(n/2)}\bigg(\frac{a_n}n\bigg)^{(n-1)/2}\bigg)=0.$$

This implies that there is \(n_0\in\mathbb N\) such that, for \(n\geq n_0\),

$$\frac{2\pi^{n/2}}{\Gamma(n/2)}\bigg(\frac{a_n}n\bigg)^{(n-1)/2}\frac1{\widetilde Z_n}\leq n^2e^{(E^*_{\beta, \widetilde{Q_c}}+\varepsilon/2)n^2}.$$

Putting all this together with equation (15), for \(\eta=\varepsilon\) and every \(n\geq n_0\), we obtain

$$\mathbb{P}_{\lambda_n, n}(\mathbb R^n\backslash A_{\eta,n})\leq n^2e^{-(\varepsilon/2)n^2}.$$

This completes the proof.
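The prefactor appearing in equation (15) is the surface area of the sphere \(S_{n-1}\) of radius \(\sqrt{a_n/n}\) in \(\mathbb R^n\), namely \((2\pi^{n/2}/\Gamma(n/2))\,r^{n-1}\). A two-line check of this classical formula in low dimensions:

```python
import numpy as np
from scipy.special import gamma

def sphere_area(n, r):
    """Surface area of the sphere of radius r in R^n: 2 pi^(n/2) r^(n-1) / Gamma(n/2)."""
    return 2*np.pi**(n/2)*r**(n - 1)/gamma(n/2)

print(sphere_area(2, 1.0))  # circumference of the unit circle: 2*pi
print(sphere_area(3, 2.0))  # area of the radius-2 sphere in R^3: 16*pi
```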

The proof of Theorem 2 is a consequence of Lemma 1 and Proposition 7 and follows the same lines as the proof of Theorem IV.5.1 in [19]; one can also see Theorem 3 in [9]. We give it here for completeness.

4. Proof of Theorem 2

For \(\delta\geq 0\), we use the following notation in the proof:

$$k_\delta(x,y)=\log\frac1{|x-y|}+\frac12\widetilde{Q_\delta}(x) +\frac12\widetilde{Q_\delta}(y),$$

and for \(\ell>0\), \(k_\delta^\ell(x,y)=\inf(k_\delta(x,y),\ell)\), where \(\widetilde{Q_\delta}(x)= 2\delta\log(1/|x|).\)

Let \(\psi\) be a bounded continuous function on \(\mathbb R\). On \(\mathbb R^n\), we define the continuous function

$$\Psi_n(x)=\frac1n\sum_{i=1}^n\psi(x_i).$$

Let \(\epsilon>0\). The set \(A_{\epsilon,n}\) is compact; hence \(\Psi_n\) attains its maximum on \(A_{\epsilon,n}\) at some point

$$x^{(n)}_\epsilon=(x^{(n)}_{1,\epsilon},\dots,x^{(n)}_{n,\epsilon}).$$

One then deduces from equation (10) that

$$ \int_{\mathbb R}\psi(x)\nu^{\lambda_n}_n(dx)\leq \Psi_n(x^{(n)}_\epsilon)+||\psi||_\infty(1-\mathbb{P}_{\lambda_n, n}(A_{\epsilon,n})).$$
(16)

To the point \(x^{(n)}_\epsilon\) we associate the probability measure

$$\sigma_{n,\epsilon}=\frac1n\sum_{k=1}^n\delta_{x^{(n)}_{k,\epsilon}};$$

then equation (16) reads as

$$ \int_{\mathbb R}\psi\bigg(\frac{x}{\sqrt n}\bigg)\nu^{\lambda_n}_n(dx) \leq \int_\mathbb R\psi\bigg(\frac{x}{\sqrt n}\bigg)\sigma_{n,\epsilon}(dx)+||\psi||_\infty(1-\mathbb{P}_{\lambda_n, n}(A_{\epsilon,n})).$$
(17)

By simple computations and by using Lemma 1, we obtain

$$ E^\ell_{\beta,Q_c}(\sigma_{n,\epsilon})\leq\frac\ell n+E^*_{\beta,\widetilde {Q_c}}+\epsilon.$$
(18)

Moreover, since the point \(x^{(n)}_\epsilon/\sqrt n\) lies on the sphere of radius \(\sqrt{a_n/n^2}\), we see by equation (13) that \(\lim_{n\to\infty}\sqrt{a_n/n^2}=\sqrt{1/2+2c/\beta}\). Thus, the measures \(\widetilde\sigma_{n,\epsilon}\), the images of \(\sigma_{n,\epsilon}\) under the scaling \(x\mapsto x/\sqrt n\), are supported in a fixed compact interval. One can assume, without loss of generality, that the sequence \(\widetilde\sigma_{n,\epsilon}\) converges weakly to some probability measure \(\sigma_\epsilon\). Letting \(n\) tend to infinity in equation (18), we obtain

$$E^\ell_{\beta,Q_c}(\sigma_\epsilon)\leq E^*_{\beta,\widetilde {Q_c}}+\epsilon.$$

Now, as \(\ell\to\infty\), it follows that

$$E_{\beta,Q_c}(\sigma_\epsilon)\leq E^*_{\beta,\widetilde {Q_c}}+\epsilon.$$

Letting \(\epsilon\to0\), we see that \(\lim_{\epsilon\to 0}E_{\beta,Q_c}(\sigma_\epsilon)=E^*_{\beta,\widetilde {Q_c}}\); therefore, \(\sigma_\epsilon\) converges weakly to the equilibrium measure \(\nu_{\beta,c}\). On the other hand, letting \(n\) tend to infinity in equation (17) and using Lemma 1, we obtain

$$\limsup_n\int_{\mathbb R}\psi\bigg(\frac{x}{\sqrt n}\bigg)\nu^{\lambda_n}_n(dx)\leq \int_\mathbb R\psi(x)\sigma_{\epsilon}(dx)$$

and, as \(\epsilon\to 0\),

$$\limsup_n\int_{\mathbb R}\psi\bigg(\frac{x}{\sqrt n}\bigg)\nu^{\lambda_n}_n(dx)\leq \int_\mathbb R\psi(x)\nu_{\beta,c}(dx).$$

Replacing \(\psi\) by \(-\psi\), it follows that

$$\liminf_n\int_{\mathbb R}\psi\bigg(\frac{x}{\sqrt n}\bigg)\nu^{\lambda_n}_n(dx)\geq \int_\mathbb R\psi(x)\nu_{\beta,c}(dx),$$

which gives the desired result; namely, for every bounded continuous function \(\psi\),

$$\lim_n\int_{\mathbb R}\psi\bigg(\frac{x}{\sqrt n}\bigg)\nu^{\lambda_n}_n(dx)= \int_\mathbb R\psi(x)\nu_{\beta,c}(dx).$$