1 Introduction and Main Results

In this paper we consider a class of distributions in \(\mathbb{R}^{n}\) of the form

(1.1)

where the function H, which we call the Hamiltonian to stress the analogy with statistical mechanics, and the normalizing constant \(Z_{n,\beta}[V]\) (called the partition function) have the form

(1.2)
(1.3)

We denote also

(1.4)
(1.5)

the corresponding expectation and the \(l\)-th marginal densities (correlation functions) of (1.1). The function V in (1.2), called the potential, is a real-valued Hölder function satisfying the condition

$$ V(\lambda )\ge 2(1+\epsilon )\log\bigl(1+ |\lambda |\bigr). $$
(1.6)

Such distributions can be considered for any β>0, but the cases β=1,2,4 are especially important, since they correspond to the eigenvalue distributions of real symmetric, Hermitian, and symplectic matrix models, respectively.
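For readers who want to experiment numerically, the following minimal sketch (not part of the paper) samples such a distribution by a Metropolis chain, assuming the standard β log-gas normalization \(p_{n,\beta}(\lambda)\propto\prod_{i<j}|\lambda_i-\lambda_j|^{\beta}e^{-n\beta\sum_i V(\lambda_i)/2}\); since the exact constants of (1.1)–(1.3) are not reproduced above, this normalization is an assumption.

```python
# Sketch: Metropolis sampling of a beta log-gas with the assumed normalization
# p(l) ~ prod_{i<j}|l_i-l_j|^beta * exp(-n*beta/2 * sum_i V(l_i)).
import numpy as np

rng = np.random.default_rng(0)

def log_density(lam, beta, V):
    """Unnormalized log-density of the log-gas (assumed normalization)."""
    n = lam.size
    diffs = np.abs(lam[:, None] - lam[None, :])[np.triu_indices(n, 1)]
    return beta * np.log(diffs).sum() - 0.5 * n * beta * V(lam).sum()

def metropolis(n=30, beta=2.0, V=lambda x: x**2, steps=20000, step=0.05):
    lam = rng.normal(scale=0.5, size=n)
    logp = log_density(lam, beta, V)
    for _ in range(steps):
        prop = lam + step * rng.normal(size=n)
        logp_prop = log_density(prop, beta, V)
        if np.log(rng.random()) < logp_prop - logp:   # Metropolis accept/reject
            lam, logp = prop, logp_prop
    return lam

lam = metropolis()
print("sampled eigenvalues (sorted, first five):", np.sort(lam)[:5])
```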

It has been known since the papers [3, 11] that if V is a Hölder function, then

$$n^{-2}\log Z_{n,\beta}[V]=\frac{\beta}{2}\mathcal{E}[V]+O(\log n/n), $$

where

$$ \mathcal{E}[V]=\max_{m\in\mathcal{M}_1} \biggl\{ L[dm,dm]-\int V(\lambda)m(d\lambda) \biggr\}=\mathcal{E}_V\bigl(m^* \bigr), $$
(1.7)

and the maximizing measure \(m^{*}\) (called the equilibrium measure) has a compact support \(\sigma:=\operatorname{supp}m^{*}\). Here and below we denote

$$ \everymath{\displaystyle} \begin{array}{l} L[dm,dm]=\int\log|\lambda-\mu|dm(\lambda) dm(\mu), \\[10pt] L[f](\lambda)=\int\log|\lambda-\mu|f(\mu)d\mu, \quad L[f,g]=\bigl(L[f],g\bigr), \end{array} $$
(1.8)

where \((\cdot,\cdot)\) is the standard inner product in \(L_{2}(\mathbb{R})\).

If V′ is a Hölder function, then the equilibrium measure \(m^{*}\) has a density ρ (the equilibrium density). The support σ and the density ρ are uniquely defined by the conditions:

$$ \everymath{\displaystyle} \begin{array}{l} v(\lambda ):=2\int \log |\mu -\lambda |\rho (\mu )d\mu -V(\lambda )=\sup v(\lambda):=v^*,\quad\lambda\in\sigma,\\[10pt] v(\lambda )\le \sup v(\lambda),\quad \lambda\notin\sigma, \ \sigma=\operatorname{supp}\{\rho\}. \end{array} $$
(1.9)

Without loss of generality we will assume below that σ⊂(−1,1) and \(v^{*}=0\).
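As a concrete illustration of (1.7)–(1.9) (a sketch, not part of the paper): for the Gaussian-type potential V(λ)=2λ² the equilibrium density is the semicircle ρ(λ)=(2/π)√(1−λ²) supported on [−1,1], and the variational conditions (1.9) can be checked numerically.

```python
# Numerical check of (1.9) for V(l) = 2*l**2 with equilibrium density
# rho(l) = (2/pi)*sqrt(1-l**2) on [-1,1]: v is (nearly) constant on the support
# and strictly smaller off it.
import numpy as np

def v(lam, m=20000):
    """v(lam) = 2*int log|lam-mu| rho(mu) dmu - V(lam), midpoint quadrature."""
    mu = np.linspace(-1.0, 1.0, m + 1)
    mu = 0.5 * (mu[1:] + mu[:-1])          # midpoints avoid mu == lam exactly
    rho = (2.0 / np.pi) * np.sqrt(1.0 - mu**2)
    dmu = 2.0 / m
    return 2.0 * np.sum(np.log(np.abs(lam - mu)) * rho) * dmu - 2.0 * lam**2

inside = [v(x) for x in (-0.7, -0.2, 0.3, 0.8)]
outside = [v(x) for x in (1.3, 1.8, -1.5)]
print("v on the support (approximately constant):", np.round(inside, 3))
print("v off the support (strictly smaller):     ", np.round(outside, 3))
```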

In this paper we discuss the asymptotic expansion in powers of \(n^{-1}\) of the partition function \(Z_{n,\beta}[V]\) and of the Stieltjes transforms of the marginal densities. Problems of this kind appear in many fields of mathematics, e.g., the statistical mechanics of log-gases, combinatorics (graphical enumeration), the theory of orthogonal polynomials, etc. (see [7] for a detailed and interesting discussion of the motivation of the problem). Here we discuss in more detail the applications of the problem to the analysis of the eigenvalue distribution of random matrices.

One of the most important problems concerning the eigenvalue distribution is the behavior of the random variables, called linear eigenvalue statistics, corresponding to a smooth test function h:

$$ \mathcal{N}_n[h] =\sum_{i=1}^n h(\lambda_i). $$
(1.10)

The result of [3] gives us the main term of the expectation \(\mathbf{E}_{n,\beta}\{\mathcal{N}_{n}[h]\}\), which is n(h,ρ). It was also proven in [3] that the variance of \(n^{-1}\mathcal{N}_{n}[h]\) tends to zero as n→∞. But the behavior of the fluctuations of \(\mathcal{N}_{n}[h]\) was studied only in the case of one-cut potentials (see [11]). Even the bound for \(\mathbf{Var}_{n,\beta}\{\mathcal{N}_{n}[h]\}\) in the multi-cut regime was until recently known only for β=2. Thus the behavior of the characteristic functional corresponding to the linear eigenvalue statistic (1.10) of the test function h,

$$ \varPhi_{n,\beta}[h]=\mathbf{E}_{n,\beta} \bigl \{e^{\beta(\mathcal{N}_n[h]- n(\rho,h))/2} \bigr\}= \frac{Z_{n,\beta} [V-\frac{1}{n}(h-(\rho,h)) ]}{Z_{n,\beta} [V ]}, $$
(1.11)

is one of the questions of primary interest in random matrix theory. Since \(\varPhi_{n,\beta}[h]\) is a ratio of two partition functions, to study the behavior of \(\varPhi_{n,\beta}[h]\) it suffices to find the coefficients of the expansion of \(\log Z_{n,\beta}[V]\) up to the order \(O(n^{-1})\).

Let us mention the most important results on the expansion of \(\log Z_{n,\beta}[V]\) and the correlation functions. The CLT for linear eigenvalue statistics in the one-cut regime for any β and polynomial V was proven in [11]. The expansion of the first and second correlators for β=2 and one-cut real analytic V was constructed in [1]. The expansion of \(\log Z_{n,\beta}[V]\) for a one-cut polynomial V and β=2 was obtained in [7]. Formal expansions for any β and polynomial V were obtained in the physical papers [4] and [9]. The CLT for β=2, real analytic multi-cut V, and the special choice h=V was obtained in [13]. The expansion of \(\log Z_{n,\beta}[V]\) up to O(1) for one-cut real analytic V and for multi-cut real analytic V was performed in [12] and [15], respectively. The complete asymptotic expansion of the partition function and all the correlators for one-cut real analytic V and any β was constructed in [2]. It is worth mentioning that the papers [2, 11, 12] are based on the same method, the first version of which was proposed in [11]. The method rests on the analysis of the first loop equation by perturbation theory, where the results of [3] give the zero-order approximation. The subsequent papers [2, 12] simplified and developed the method of [11]. This allowed the authors to extend the method to non-polynomial V (see [12]) and to apply it to the loop equations of higher orders (see [2]). As a result, in [2] the complete asymptotic expansion of the partition function and all the correlators was constructed. The essential disadvantage of this method is that it is not applicable to the multi-cut case. A method which allows one to factorize \(Z_{n,\beta}[V]\) in the multi-cut case into a product of partition functions of one-cut “effective” potentials was proposed in [15]. In the present paper the same idea is used to study the limit of the characteristic functional \(\varPhi_{n,\beta}[h]\) and to construct the expansion of \(Z_{n,\beta}[V]\) up to o(1) terms (see Theorem 2). We assume the following conditions:

C1. :

V is a real analytic potential satisfying (1.6). The support of the equilibrium measure is

$$ \sigma=\bigcup_{\alpha=1}^q \sigma_\alpha,\quad \sigma_\alpha=[a_{\alpha},b_{\alpha}]; $$
(1.12)
C2. :

The equilibrium density ρ can be represented in the form

$$ \rho(\lambda)=\frac{1}{2\pi}P(\lambda)\Im X_\sigma^{1/2}( \lambda+i0),\qquad \inf_{\lambda\in\sigma}\big|P(\lambda)\big|>0, $$
(1.13)

where

$$ X_\sigma(z)= \prod_{\alpha=1}^{q}(z-a_\alpha) (z-b_\alpha), $$
(1.14)

and we choose a branch of \(X_{\sigma}^{1/2}(z)\) such that \(X_{\sigma}^{1/2}(z)\sim z^{q}\), as z→+∞. Moreover, the function v defined by (1.9) attains its maximum only if λ belongs to σ.

Remark 1

It is known (see, e.g., [1]) that for analytic V the equilibrium density ρ always has the form (1.13)–(1.14). The function P in (1.13) is analytic in the domain D, where V(z) is analytic, and P can be represented in the form

$$ P(z)=\frac{1}{2\pi i}\oint_\mathcal{L}\frac{V'(z)-V'(\zeta)}{(z-\zeta) X_\sigma^{1/2}(\zeta)}d\zeta, $$
(1.15)

where the contour \(\mathcal{L}\subset\mathbf{D}\) encircles σ. Hence condition C2 means that ρ has no zeros at the interior points of σ and behaves like a square root near the edge points. This behavior of V is usually called generic.
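As a quick numerical illustration of (1.15) (a sketch under simplifying assumptions, not part of the paper): for the one-cut potential V(λ)=2λ² with σ=[−1,1] one has P≡4, which a direct contour quadrature reproduces.

```python
# Sketch: evaluate (1.15) numerically for V(l)=2*l**2, sigma=[-1,1],
# X_sigma(z)=z**2-1; the exact answer is P(z)=4, i.e. rho=(2/pi)*sqrt(1-l**2).
import numpy as np

def X_half(zeta):
    # branch of X_sigma^{1/2} with X^{1/2}(z) ~ z at infinity
    return zeta * np.sqrt(1.0 - 1.0 / zeta**2)

def P(z, Vprime=lambda x: 4.0 * x, R=2.0, m=2000):
    # contour: circle of radius R encircling sigma; the apparent singularity at
    # zeta=z is removable, so its position relative to the contour is harmless
    t = np.linspace(0.0, 2.0 * np.pi, m, endpoint=False)
    zeta = R * np.exp(1j * t)
    dzeta = 1j * zeta * (2.0 * np.pi / m)          # dzeta = i*zeta*dt
    integrand = (Vprime(z) - Vprime(zeta)) / ((z - zeta) * X_half(zeta))
    return np.sum(integrand * dzeta) / (2.0j * np.pi)

print(P(0.3))          # approx 4
print(P(0.5 + 0.2j))   # approx 4
```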

The first result of the paper is a theorem which allows us to control \(\varPhi_{n,\beta}[h]\) and \(\log Z_{n,\beta}[V]\) in the one-cut case up to \(O(n^{-1})\) terms. The essential difference from the similar results of [11, 12] and [2] is that Theorem 1 is applicable to non-real h. This fact is very important because the proof of Theorem 2 is based on the application of Theorem 1 to non-real h. Besides, since the results of [2] were obtained for real analytic h, the remainder bounds found there cannot be used in the proof of Theorem 2.

Theorem 1

Let V satisfy (1.6), let the equilibrium density ρ (see (1.9)) have the form (1.13) with q=1, and let \(\sigma=\operatorname{supp}\rho=[a,b]\). Assume also that V is analytic in a domain \(\mathbf{D}\supset\sigma_{\varepsilon}\), where \(\sigma_{\varepsilon}\) is the ε-neighborhood of σ. Consider any test function h whose support belongs to \(\sigma_{\varepsilon}\) and such that \(\|h^{(6)}\|_{\infty},\|h'\|_{\infty}\le n^{1/2}\log n\) (here and below \(\|h\|_{\infty}=\sup_{\lambda\in\sigma_{\varepsilon}}|h(\lambda)|\)). Then:

  1. (i)

    for real-valued h the characteristic functional \(\varPhi_{n,\beta}[h]\) of (1.11) has the form

    (1.16)

    where the operator \(\overline{D}_{\sigma}\) is defined as

    $$\overline{D}_\sigma=\frac{1}{2}\bigl(D_\sigma+D^*_\sigma \bigr), $$

    where for smooth h we define \(D_{\sigma}h\) by the principal value integral

    (1.17)

    and \(D^{*}_{\sigma}\) is the adjoint operator to \(D_{\sigma}\) in \(L_{2}(\sigma)\). The non-positive measure \(\nu_{\beta}\) in (1.16) has the form

    (1.18)

    with P defined by (1.15) and \(X_{\sigma}^{1/2}(\lambda):=\Im X_{\sigma}^{1/2}(\lambda+i0)\) with X σ of (1.14);

  2. (ii)

    for non-real h such that \(|\beta(D_{\sigma}h,\Im h)|\le k\log n\) with some absolute constant k and \(\|h^{(6)}\|_{\infty}\le n^{1/6}\),

    (1.19)
  3. (iii)

    moreover,

    (1.20)

    where \(r_{\beta}[\rho]\) is given by the integral representation (2.25), and \(F_{\beta}(n)\) corresponds to the linear, logarithmic and zero-order terms of the expansion in n of \(\log Z_{n,\beta}[V]\) with \(V(\lambda)=\lambda^{2}/2\). According to [10]

    (1.21)

    where \(c_{\beta}=\frac{\beta}{24}-\frac{1}{4}+\frac{1}{6\beta}\) and \(c^{(1)}_{\beta}\) is some constant, depending only on β (for β=2, \(c^{(1)}_{\beta}=\zeta'(1)\)).

Remark 2

Let us note that the operator \(D_{\sigma}\) is “almost” \((-\mathcal{L}_{\sigma})^{-1}\), where \(\mathcal{L}_{\sigma}\) is the integral operator defined by (1.8) for the interval σ. More precisely, if we denote \(X_{\sigma}^{-1/2}=\mathbf{1}_{\sigma}|X_{\sigma}^{-1/2}|\) with \(X_{\sigma}\) of (1.14), then

$$ \everymath{\displaystyle} \begin{array}{l} D_{\sigma}\mathcal{L}_{\sigma}v= -v+ \pi^{-1}(v,\mathbf{1}_{\sigma})X^{-1/2}_{\sigma}, \qquad \mathcal{L}_{\sigma}D_{\sigma} v= -v+\pi^{-1} \bigl(v,X^{-1/2}_{\sigma}\bigr)\mathbf{1}_{\sigma}, \\[10pt] \mathcal{L}_{\sigma}D^*_{\sigma} v=-v+\pi^{-1} \bigl(v,X^{-1/2}_{\sigma}\bigr) \mathbf{1}_{\sigma} \quad \Rightarrow \quad \mathcal{L}_{\sigma}\bar{D}_{\sigma} v=-v+ \pi^{-1}\bigl(v,X^{-1/2}_{\sigma}\bigr) \mathbf{1}_{\sigma}. \end{array} $$
(1.22)
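The following small numerical check (a sketch, not part of the paper) illustrates the classical identity behind Remark 2: on σ=[−1,1] the log kernel acts diagonally on the Chebyshev polynomials, which is consistent with \(D_{\sigma}\) nearly inverting \(-\mathcal{L}_{\sigma}\).

```python
# Classical identity: int_{-1}^{1} log|x-y| T_k(y)/sqrt(1-y^2) dy = -(pi/k)*T_k(x)
# for k >= 1, and -pi*log 2 for k = 0.  Checked by quadrature after y = cos(theta).
import numpy as np

def L_cheb(k, x, m=200000):
    theta = (np.arange(m) + 0.5) * np.pi / m
    y = np.cos(theta)
    return np.sum(np.log(np.abs(x - y)) * np.cos(k * theta)) * (np.pi / m)

x = 0.37
for k in (1, 2, 5):
    exact = -(np.pi / k) * np.cos(k * np.arccos(x))
    print(k, L_cheb(k, x), exact)
print(0, L_cheb(0, x), -np.pi * np.log(2.0))
```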

Remark 3

For β=2 in the one-cut case we have

(1.23)

where \(P_{0}=16/(b-a)^{2}\) corresponds to the Gaussian potential \(V_{0}(\lambda)=2(2\lambda-a-b)^{2}/(b-a)^{2}\), whose equilibrium measure has support [a,b].

Consider the space

$$ \mathcal{H}=\bigoplus_{\alpha=1}^q L_1[ \sigma_\alpha]. $$
(1.24)

Note that we need \(\mathcal{H}\) mainly as a set of functions; its topology is not important below. Define the operator \(\mathcal{L}\) as

$$ \mathcal{L}f=\mathbf{1}_\sigma L[f\mathbf{1}_\sigma], \qquad\widehat{\mathcal{L}}_\alpha f:= \mathbf{1}_{\sigma_\alpha} L[f \mathbf{1}_{\sigma_\alpha}]. $$
(1.25)

Moreover, we will consider the block diagonal operators

$$ \overline{D}:=\bigoplus_{\alpha=1}^q \overline{D}_\alpha,\qquad \widehat{\mathcal{L}}:= \bigoplus_{\alpha=1}^q \widehat{\mathcal{L}}_\alpha, $$
(1.26)

where \(\overline{D}_{\alpha}\) is defined by (1.17) for σ α . Introduce also

$$ \widetilde{\mathcal{L}}:=\mathcal{L}-\widehat{\mathcal{L}},\qquad \mathcal{G}:=(1-\overline{D}\widetilde{\mathcal{L}})^{-1}. $$
(1.27)

An important role below is played by the positive definite matrix

$$ \mathcal{Q}=\{\mathcal{Q}_{\alpha\alpha'}\}_{\alpha,\alpha'=1}^{q}, \qquad \mathcal{Q}_{\alpha\alpha'}=-\bigl(\mathcal{L}\psi^{(\alpha)}, \psi^{(\alpha')}\bigr), $$
(1.28)

where \(\psi^{(\alpha)}(\lambda)= p_{\alpha}(\lambda) X^{-1/2}_{\sigma}(\lambda)\mathbf{1}_{\sigma}\) (\(p_{\alpha}\) is a polynomial of degree q−1) is the unique solution of the system of equations

$$ -\bigl(\mathcal{L}\psi^{(\alpha)} \bigr)_{\alpha'}=\delta_{\alpha\alpha'},\quad \alpha'=1, \dots,q. $$
(1.29)

Denote also

$$ I[h]=\bigl(I_1[h],\dots,I_q[h]\bigr),\qquad I_\alpha[h]:= \sum_{\alpha'} \mathcal{Q}^{-1}_{\alpha\alpha'}\bigl(h,\psi^{(\alpha')}\bigr), $$
(1.30)
$$ \mu_\alpha=\int_{\sigma_\alpha} \rho_\alpha(\lambda)d\lambda,\quad \rho_\alpha:= \mathbf{1}_{\sigma_\alpha}\rho. $$
(1.31)
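The next sketch (illustration only, not part of the paper) checks numerically the structural fact behind (1.29): for a two-cut σ, functions of the form \(p(\lambda)X_{\sigma}^{-1/2}(\lambda)\) with deg p≤q−1 have log-potentials that are constant on each band, so the q constants can be matched to \(-\delta_{\alpha\alpha'}\). Here \(X_{\sigma}^{-1/2}\) on σ is interpreted as \(1/\Im X_{\sigma}^{1/2}(\lambda+i0)\), i.e. with sign alternating between the bands; this reading of the branch convention stated after (1.14) is an assumption of the sketch.

```python
# Two model bands sigma = [-0.8,-0.2] u [0.2,0.8], X(l) = (l^2-0.04)(l^2-0.64).
# For p(l) in {1, l}, the log-potential of p(l)*sgn(l)/sqrt(|X(l)|) is
# (approximately) constant on each band.
import numpy as np

bands = [(-0.8, -0.2), (0.2, 0.8)]
absX = lambda l: np.abs((l**2 - 0.04) * (l**2 - 0.64))
sgn = lambda l: np.sign(l)        # +1 on the right band, -1 on the left band

def log_potential(p, lam0, m=200000):
    """int_sigma log|lam0-l| p(l)*sgn(l)/sqrt(|X(l)|) dl via l = c + r*cos(t)."""
    total = 0.0
    for a, b in bands:
        c, r = 0.5 * (a + b), 0.5 * (b - a)
        t = (np.arange(m) + 0.5) * np.pi / m
        l = c + r * np.cos(t)
        g = absX(l) / ((l - a) * (b - l))     # non-singular part of |X|
        total += np.sum(np.log(np.abs(lam0 - l)) * p(l) * sgn(l)
                        / np.sqrt(g)) * (np.pi / m)
    return total

for p in (lambda l: 1.0 + 0.0 * l, lambda l: l):
    left = [log_potential(p, x) for x in (-0.7, -0.5, -0.3)]
    right = [log_potential(p, x) for x in (0.3, 0.5, 0.7)]
    print("left band :", np.round(left, 4))
    print("right band:", np.round(right, 4))
```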

The main result of the paper is the following theorem:

Theorem 2

Let the potential V satisfy conditions C1–C2, and let h be any test function whose support belongs to \(\sigma_{\varepsilon}\) and such that \(\sup_{n}\|h^{(l)}\|_{\infty}<\infty\), l=1,…,6. Then there exists κ>0 such that

(1.32)

where

(1.33)

with a positive definite matrix \(\mathcal{Q}\) of (1.28), I[h] defined by (1.30), and \(\log\overline{\rho}=(\log\rho_{1},\allowbreak\dots,\allowbreak\log\rho_{q})\).

Moreover,

(1.34)

where the multiplier \(\mathcal{Z}_{n,\beta}^{(0)}[V]\) (introduced in order to simplify formulas here and in the proof of Theorem 2) collects terms analogous to (1.20):

(1.35)

with \(\mu_{\alpha}\), \(\rho_{\alpha}\) defined in (1.31), \(r_{\beta}[\rho]\) defined in (2.25), \(F_{\beta}(n)\) and \(c_{\beta},c_{\beta}^{(1)}\) defined in (1.21); det here denotes the Fredholm determinant of \(\overline{D}\widetilde{\mathcal{L}}\) on σ.

Note that by the definitions of \(\overline{D}\) and \(\widetilde{\mathcal{L}}\) the kernel of \((\overline{D}\widetilde{\mathcal{L}})_{\alpha,\alpha'}\) has the form \(X_{\sigma_{\alpha}}^{-1/2}(\lambda)M_{\alpha,\alpha'}(\lambda,\mu)\), where \(M_{\alpha,\alpha'}(\lambda,\mu)\) is a bounded smooth function. Hence, using the Hadamard inequality (see [5], Section I.5.2), it is easy to check that the Fredholm determinant of \(\overline{D}\widetilde{\mathcal{L}}\) on σ indeed exists. Moreover, it follows from the proof of Theorem 2 that this determinant is non-zero.
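For illustration (a sketch, not the operator \(\overline{D}\widetilde{\mathcal{L}}\) of the paper), a Fredholm determinant can be approximated by a Nyström discretization; the toy kernel below is rank one, so the exact value is known.

```python
# Nystrom approximation det(I - W^{1/2} K W^{1/2}) with Gauss-Legendre weights.
# Toy kernel K(x,y) = x*y on [0,1] is rank one: det(1-K) = 1 - int x^2 dx = 2/3.
import numpy as np

def fredholm_det(kernel, a, b, n=60):
    x, w = np.polynomial.legendre.leggauss(n)      # nodes/weights on [-1,1]
    x = 0.5 * (b - a) * x + 0.5 * (a + b)
    w = 0.5 * (b - a) * w
    K = kernel(x[:, None], x[None, :])
    sw = np.sqrt(w)
    return np.linalg.det(np.eye(n) - sw[:, None] * K * sw[None, :])

print(fredholm_det(lambda x, y: x * y, 0.0, 1.0))  # approx 0.6667
```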

Remark 4

The function (1.33) looks similar to the θ-function associated with the Riemann surface corresponding to the polynomial (1.14), which, due to [6], appears in the asymptotics of orthogonal polynomials with the varying weight \(e^{-nV}\). But it is not so simple to check whether these two functions really coincide, hence we formulate the result without reference to the standard θ-function and leave the corresponding computations for future work.
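For orientation only (a hedged sketch): since (1.33) is not reproduced here, the code below evaluates a generic q-dimensional theta-type lattice sum by brute-force truncation; the precise arguments appearing in (1.33) are an assumption of the sketch.

```python
# Generic theta-type sum Theta(x) = sum_{m in Z^q} exp(-(Q m, m)/2 + (x, m)),
# truncated to a box; Q is a positive definite matrix, cf. (1.28).
import itertools
import numpy as np

def theta(x, Q, radius=12):
    x = np.asarray(x, dtype=float)
    total = 0.0
    for m in itertools.product(range(-radius, radius + 1), repeat=len(x)):
        m = np.array(m, dtype=float)
        total += np.exp(-0.5 * m @ Q @ m + x @ m)
    return total

Q = np.array([[2.0, 0.3], [0.3, 1.5]])
print(theta([0.1, -0.2], Q))
```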

An important corollary of Theorem 2 is that the fluctuations of \(\mathcal{N}_{n}[h]\) for generic h are non-Gaussian. They are Gaussian if there exists some constant c such that

$$ I_\alpha[h]=c,\quad\alpha=1,\dots,q;\quad \Leftrightarrow\quad\bigl(h-c,\psi^{(\alpha)}\bigr)=0,\quad \alpha=1,\dots,q. $$
(1.36)

In addition, inspecting the proof of Theorem 2, one can see that it in fact proves that \(\log\varPhi_{n,\beta}[th]\) is an analytic function of t for sufficiently small t. Since

$$\everymath{\displaystyle} \begin{array}{l} n\bigl(p^{(1)}_{n,\beta}-\rho,h\bigr)=\frac{2}{\beta} \partial_t\log \varPhi_{n,\beta}[th] |_{t=0},\\[10pt] \mathbf{Var}_{n,\beta}\bigl\{\mathcal{N}_n[h]\bigr\}= \biggl( \frac{2}{\beta} \biggr)^2\partial_t^2\log \varPhi_{n,\beta}[th] |_{t=0}, \end{array} $$

one can find \(n(p^{(1)}_{n,\beta}-\rho,h)\) and \(\mathbf{Var}_{n,\beta}\{\mathcal{N}_{n}[h]\}\) by differentiating the r.h.s. of (1.32). It is easy to see that if conditions (1.36) are not fulfilled, then both expressions contain derivatives of \(\log\varTheta\) from (1.33), hence they are quasi-periodic functions.

Let us note that relations (1.22) imply

$$-\mathcal{L}\mathcal{G}\overline{D}=\bigl(1+P^{(1)}\widetilde{ \mathcal{L}}\mathcal{L}^{-1}\bigr)^{-1}\bigl(1-P^{(1)} \bigr)=1+P^{(1)} \widehat{F}, $$

where P (1) is a block-diagonal operator \(P_{\alpha}^{(1)}v=(v,X_{\alpha}^{-1/2})\mathbf{1}_{\sigma_{\alpha}}\) and \(\widehat{F}\) is some operator. Hence

$$(\mathcal{G}\overline{D}h) (\lambda)=-\bigl(\mathcal{L}^{-1}h\bigr) ( \lambda)+\sum c_\alpha(h)\psi^{(\alpha)}(\lambda), $$

where \(c_{\alpha}(h)\) are some constants and \(\psi^{(\alpha)}\) are defined by (1.29). Besides, evidently \(\mathcal{G}\overline{D}\mathbf{1}_{\sigma_{\alpha}}=0\), and therefore

$$0=(\mathcal{G}\overline{D}h,\mathbf{1}_{\sigma_\alpha})=-\bigl( \mathcal{L}^{-1}h,\mathbf{1}_{\sigma_\alpha}\bigr) +\sum _{\alpha'}Z_{\alpha\alpha'}c_{\alpha'}(h),\quad\alpha=1, \dots,q. $$

These conditions determine \(c_{\alpha}(h)\) uniquely. On the other hand, if we define the operator \(\mathcal{D}_{\sigma}\) by the formula (1.17) with \(X_{\sigma}\) from (1.14) for the multi-cut case, then it admits the same representation with some \(\widetilde{c}_{\alpha}(h)\). Hence

$$\mathcal{G}\overline{D}=\mathcal{D}_\sigma+\sum _{\alpha} \psi^{(\alpha)}\otimes f^{(\alpha)}, $$

where f (α) are some functions of the form \(X^{-1/2}_{\sigma}p_{\alpha}\) with some polynomials p α .

The paper is organized as follows. The proofs of Theorems 1 and 2 are given in Sections 2 and 3, respectively. Proofs of some auxiliary results used in the proof of Theorem 2 are given in Section 4.

2 Proof of Theorem 1

To prove Theorem 1 we study the Stieltjes transform

$$ g_{n,\beta,h}(z)=\int\frac{p_{n,\beta,h}^{(1)}(\lambda)d\lambda}{z-\lambda} $$
(2.1)

of the first marginal density \(p_{n,\beta,h}^{(1)}\) defined by (1.5) for V replaced by \(V-\frac{1}{n}h\). Let us represent

$$g_{n,\beta,h}=g+n^{-1}u_{n,\beta,h}, $$

where g is the Stieltjes transform of the equilibrium density ρ. According to [15],

$$ u_{n,\beta,h}(z)= (\mathcal{K}F) (z), $$
(2.2)

where the operator \(\mathcal{K}:\mathrm{Hol}[\mathbf{D}\setminus\sigma_{\varepsilon}]\to \mathrm{Hol}[\mathbf{D}\setminus\sigma_{\varepsilon}]\) is defined by the formula

$$ (\mathcal{K}f) (z):=\frac{1}{2\pi iX^{1/2}(z)} \oint_{\mathcal{L} }\frac{f(\zeta)d\zeta}{P(\zeta)(z-\zeta)}, $$
(2.3)

with a contour \(\mathcal{L}\) which encircles \(\sigma_{\varepsilon}\) and contains neither z nor the zeros of P. Note that in what follows all integration contours are assumed to encircle \(\sigma_{\varepsilon}\) and to lie in D. The function F in (2.2) has the form

(2.4)

with

(2.5)

Moreover, according to [15], u n,β,h and δ n,β,h satisfy the bounds:

$$ \everymath{\displaystyle} \begin{array}{l} \big|u_{n,\beta,h}(z)\big|\le C_0 \bigl(1+\big\|h'\big\|_{\infty}\bigr)\frac{\log n}{d^{5/2}(z)}, \\[10pt] \big|\delta_{n,\beta,h}(z)\big|\le C\bigl(1+\big\|h'\big\|_{\infty} \bigr)^2\frac{\log^2 n}{d^{5}(z)}, \end{array} $$
(2.6)

if \(d(z):=\operatorname{dist}\{z,\sigma_{\varepsilon}\}\ge n^{-1/3}\log n\). In addition, if φ has support in \(\sigma_{\varepsilon}\) and possesses 3 bounded derivatives, then

$$ \big|\bigl(p^{(1)}_{n,\beta,h}-\rho,\varphi\bigr)\big|\le Cn^{-1}\bigl(\big\|\varphi'''\big\|_\infty+\big\| \varphi'\big\|_\infty\bigr). $$
(2.7)

Using (2.6) in (2.2), we get for \(d(z)>n^{-1/3}\log n\)

(2.8)

where

$$\widehat{h}(z):=\int_\sigma\frac{h'(\lambda)\rho(\lambda)}{z-\lambda}d\lambda. $$

We note here that although (2.8) was obtained for z inside the domain \(\mathbf{D}_{2}\), where V is analytic and which does not contain zeros of P, we can extend (2.8) to \(z\not\in\mathbf{D}_{2}\), using that \(u_{n,\beta,h}(z)\) is analytic everywhere in \(\mathbb{C}\setminus\sigma_{\varepsilon}\) and behaves like \(|u_{n,\beta,h}(z)|\sim nz^{-2}\), as z→∞. Applying the Cauchy theorem, we have for any \(z\not\in\mathbf{D}_{2}\)

$$ u_{n,\beta,h}(z)=\frac{1}{2\pi i}\oint_{L}\frac{u_{n,\beta,h}(\zeta)d\zeta}{z-\zeta} $$

with a contour \(L\subset\mathbf{D}_{2}\).

Let us transform

(2.9)

Similarly, we have

(2.10)

Hence, we obtain that for \(\varphi_{z}(\lambda)=(z-\lambda)^{-1}\) with \(d(z)\ge n^{-1/3}\log n\)

(2.11)

where ν β is defined by (1.18).

To extend (2.11) to differentiable φ, consider the Poisson kernel

$$ \mathcal{P}_y(\lambda)=\frac{y}{\pi(y^2+\lambda^2)}. $$

It is easy to see that for any integrable φ

$$(\mathcal{P}_y*\varphi) (\lambda)=\frac{1}{\pi}\Im\int \frac{\varphi(\mu)d\mu}{\mu-(\lambda+iy)}. $$
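A quick numerical sanity check of this identity (a sketch, not part of the proof):

```python
# Check (P_y * phi)(lam) = (1/pi) * Im int phi(mu) dmu / (mu - (lam + i*y))
# for an arbitrary integrable test function phi.
import numpy as np

phi = lambda mu: np.exp(-mu**2) * (1.0 + 0.3 * mu)
mu = np.linspace(-30.0, 30.0, 200001)
dmu = mu[1] - mu[0]
lam, y = 0.4, 0.25

lhs = np.sum(y / (np.pi * (y**2 + (lam - mu)**2)) * phi(mu)) * dmu
rhs = np.imag(np.sum(phi(mu) / (mu - (lam + 1j * y)))) * dmu / np.pi
print(lhs, rhs)   # the two numbers agree
```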

Hence (2.11) implies

$$ \everymath{\displaystyle} \begin{array}{l} \|\mathcal{P}_y*\nu_{n,\beta,h}\|_2^2 \le Cn^{-1} \biggl(\frac{1+\|h'\|^4_\infty}{y^{11}}+ \frac{\|h^{(4)}\|^2_\infty}{ y^{3}} \biggr),\quad |y| \ge \frac{\log n}{n^{1/3}}, \\[10pt] \nu_{n,\beta,h}(\lambda):=n \bigl(p^{(n)}_{\beta,h}( \lambda)-\rho(\lambda) \bigr)-\nu_\beta(\lambda) -\frac{1}{2}D_\sigma h(\lambda), \end{array} $$
(2.12)

where \(\|\cdot\|_{2}\) is the standard norm in \(L_{2}(\mathbb{R})\) and the signed measure \(\nu_{\beta}\) is defined in (1.18).

Then we use the following formula (see [11]), valid for any signed measure ν:

$$ \int_0^\infty e^{-y}y^{2s-1}\| \mathcal{P}_y*\nu_{n,\beta,h}\|_2^2dy= \varGamma(2s) \int_{\mathbb{R}}\bigl(1+2|\xi|\bigr)^{-2s}\big|\widehat{ \nu}_{n,\beta,h}(\xi)\big|^2d\xi. $$
(2.13)

This formula for s=6, the Parseval equation for the Fourier integral, and the Schwarz inequality yield

To estimate the last integral here, we split it into two parts, \(|y|\ge n^{-1/3}\log n\) and \(|y|<n^{-1/3}\log n\). For the first integral we use (2.12), and for the second the bound (see [14])

where C is an n,η-independent constant which depends on \(\|V'+\frac{1}{n}h'\|_{\infty}\), ε, and |ba|. Thus we get that for any function φ with bounded sixth derivative

(2.14)

Since

$$\frac{d}{dt}\log \varPhi_{n,\beta}[th]=\int_{\sigma_\varepsilon}h( \lambda)p^{(1)}_{n,\beta,th}(\lambda)\,d\lambda, $$

integrating (2.14) with φ=th with respect to t, we get (1.16) for real h. To extend this relation to complex-valued h we use the following lemma.

Lemma 1

Let {X n } n≥1 be a sequence of random variables such that

$$ \mathbf{E}\bigl\{e^{tX_n}\bigr\}= e^{t^2/2}\bigl(1+O \bigl(n^{-1}\log^{3/2}n\bigr)\bigr),\quad -\log^{1/2} n\le t\le\log^{1/2} n. $$
(2.15)

Then the relation

$$ \mathbf{E}\bigl\{e^{tX_n}\bigr\}= e^{t^2/2}\bigl(1+O \bigl(n^{-1/2}\log^{3/2} n\bigr)\bigr), $$
(2.16)

holds in the circle \(\frac{1}{7}D\), where D={t:|t|≤log1/2 n}.

Proof

Consider a strip S={t:|ℜt|≤log1/2 n}. It is evident that \(\mathbf{E}\{e^{tX_{n}}\}\) is analytic in S and bounded by \(2\sqrt{n}\) for sufficiently big n. Introduce the analytic function

$$f_n(t) := c\bigl(e^{-t^2/2}\mathbf{E}\bigl\{e^{tX_n} \bigr\}-1\bigr)n/ \log^{3/2} n,\quad t \in D, $$

where we choose the constant c>0 such that

$$\big|f_n(t)\big|\le 1,\quad t\in \gamma=\bigl[-\log^{1/2} n, \log^{1/2} n\bigr]. $$

This is possible by (2.15). Moreover, \(|f_n(t)|\le n^{2}\), t∈D. Then, by the theorem on two constants (see [8]), we conclude that

$$\log \big|f_n(t)\big|\le 2\bigl(1-\omega\bigl(t;\gamma,D'\bigr) \bigr)\log n, $$

where ω(t;γ,D′) is the harmonic measure of the set γ with respect to the domain D′ at the point tD′, where \(D' := D \cap \mathbb{C}_{+}\). It is well-known (see again [8]) that

$$\omega\bigl(t;\gamma,D'\bigr)=1-\frac{2}{\pi}\Im\log \frac{1+t\log^{-1/2} n}{1-t\log^{-1/2} n}. $$

Hence 1−ω(t;γ,D′)≤14ℑt/(3πlog1/2 n) for \(t\in\frac{1}{ 7} D'\), and we obtain from the above inequalities that

$$\log \bigl|f_n(t)\bigr| \le \frac{28 \log^{1/2} n}{3\pi}\Im t,\quad t\in \frac{1}{7}D'. $$

We finally deduce from the last bound that

$$\log \bigl|f_n(t)\bigr|\le \frac{1}{2}\log n\quad\Rightarrow\quad \big|f_n(t)\big|\le n^{1/2},\quad t\in\frac{1}{7}D', $$

and from the definition of f n we obtain (2.16). □

(iii) To prove (1.20), we need to control \(u_{n,\beta,h}\) up to the order \(n^{-1}\). It follows from (2.2) and (2.4) that for this aim we need to control the zero-order term of \(u_{n,\beta,h}\) (which is already known) and the zero-order term of \(\delta_{n,\beta,h}(z)\). It is easy to see that if we replace h(λ) by \(h_{t}=h(\lambda)+th_{z_{0}}(\lambda)\) with \(h_{z_{0}}(\lambda)=(\lambda-z_{0})^{-1}\), then

$$\delta_{n,\beta,h}(z_0)=\partial_t u_{n,\beta,h_t}(z_0) |_{t=0}. $$

It was proven in [11] that \(u_{n,\beta,h_{t}}(z)\) is an analytic function of t for small enough t. Hence, integrating with respect to t over the circle \(|t|=C_{0}d^{2}(z_{0})/2\), we get that for any \(\|h'\|_{\infty}\le C_{0}/2\)

$$\partial_tu_{n,\beta,h_t}(z) \bigg|_{t=0}= \frac{1}{\pi X_{\eta}^{1/2}(z)}\oint_{\mathcal{L}_d} \frac{h_{z_0}'(\lambda)|X^{1/2}(\lambda)| d\lambda}{(z-\lambda)} +n^{-1}O \bigl(d^{-11/2}(z)d^{-2}(z_0)\bigr). $$

Thus we obtain for h=0 and any real analytic V, satisfying conditions C1-C2:

(2.17)

Set

and consider the functions V t of the form

$$ V_t(\lambda)=V^{(0)}(\lambda)+t \Delta V(\lambda),\qquad \Delta V(\lambda)= V(\lambda)-V^{(0)}(\lambda). $$
(2.18)

Let \(Z_{n,\beta}(t):=Z_{n,\beta}[V_{t}]\) be defined by (1.3) with V replaced by \(V_{t}\). Then, evidently, \(Z_{n,\beta}(1)=Z_{n,\beta}[V]\), and \(Z_{n,\beta}(0)\) corresponds to \(V^{(0)}\). Hence

(2.19)

where \(p^{(1)}_{n,\beta}(\lambda;t)\) is the first marginal density corresponding to V t . Using (1.9), one can check that for the distribution (1.1) with V replaced by V t the equilibrium density ρ t has the form

(2.20)

with X,d of (1.20). Using (2.2), (2.4), (2.8), and (2.17), one can write:

$$ g_{n}(z,t)=g(z,t)+ n^{-1}u_{\beta}^{(0)}(z,t)+n^{-2}u_{\beta}^{(1)}(z,t)+O \bigl(n^{-3}\bigr), $$
(2.21)

where

$$ \everymath{\displaystyle} \begin{array}{l} u^{(0)}_\beta(z,t)=- \biggl(\frac{2}{\beta}-1\biggr) \bigl(\mathcal{K}_tg'_t\bigr) (z), \\[10pt] u^{(1)}_\beta(z,t)=\mathcal{K}_t \biggl( \bigl(u_\beta^{(0)}\bigr)^2-(2/\beta-1) \partial_z u_\beta^{(0)}+ \frac{1}{X^{2}} \biggr) (z,t), \end{array} $$
(2.22)

and the operator \(\mathcal{K}_{t}\) is defined by (2.3) with P replaced by \(P_{t}=P_{0}+t(P-P_{0})\).

Substituting (2.21) in the last integral in (2.19), we get

(2.23)

Write \(\Delta V=2L[\Delta\rho]+v^{(0)}\), where \(v^{(0)}\) is the constant from (1.9) corresponding to \(V^{(0)}\) (recall that we assumed that the corresponding constant for V is zero). Then, taking into account (1.18), we get \(\nu=(\frac{2}{\beta}-1)^{-1}\nu_{\beta}\)

Then (1.22) yields

Now we can integrate with respect to t and obtain

(2.24)

since

In addition, changing the variables in the corresponding integrals, we have

These relations combined with (2.23), (2.24) and (2.22) imply (1.20) with

(2.25)

where the contour L contains L′, which contains \(\sigma_{\varepsilon}\), all zeros of \(P_{t}\) are outside of L, and \(u^{(0)}_{\beta}\) is defined in (2.22). For β=2, \(u^{(0)}_{\beta}=0\), hence we can keep only \(X^{-2}(\zeta)\) in the last numerator and take the integral with respect to ζ. Taking into account that

$$\Delta V'(z)=2\Delta g(z)+\Delta P(z)X^{1/2}(z), $$

and Δg(z)∼Cz −2, as z→∞, we have

and similar relation for integrals with (za) replaced by (zb). Thus we obtain (1.23).  □

3 Proof of Theorem 2

Denote

$$ \everymath{\displaystyle} \begin{array}{l} \sigma_\varepsilon=\bigcup _{\alpha=1}^q\sigma_{\alpha,\varepsilon},\qquad \sigma_{\alpha,\varepsilon}=[a_{\alpha}-\varepsilon,b_{\alpha}+ \varepsilon], \\[10pt] \operatorname{dist}\{\sigma_{\alpha,\varepsilon},\sigma_{\alpha',\varepsilon}\} >\delta>0, \quad \alpha\neq \alpha'. \end{array} $$
(3.1)

It is known (see [3, Lemmas 1,3] and [14, Theorems 11.1.4, 11.1.6]) that if we replace in (1.1) and (1.5) the integration over \(\mathbb{R}\) by the integration over σ ε , then the new partition function \(Z_{n,\beta}^{(\varepsilon)}[ V]\) and the old one Z n,β [V] satisfy the inequality

Thus, it suffices to study \(Z_{n,\beta}^{(\varepsilon)}[V]\) instead of \(Z_{n,\beta}[V]\). Starting from this moment, we assume that this replacement of the integration domain is made, but we will omit the superscript ε.

Consider the “approximating” Hamiltonian \(H_{a}\)

(3.2)
$$ \everymath{\displaystyle} \begin{array}{rcl} V^{(a)}(\lambda)&=&\sum_{\alpha=1}^qV^{(a)}_\alpha( \lambda), \\[10pt] V^{(a)}_\alpha(\lambda)&=&\mathbf{1}_{\sigma_{\alpha,\varepsilon}}(\lambda) \biggl(V(\lambda)-2\int_{\sigma\setminus\sigma_\alpha}\log|\lambda-\mu|\rho(\mu)d\mu \biggr), \end{array} $$
(3.3)

where \(V^{(a)}_{\alpha}(\lambda)\) is the “effective potential”. It is easy to check that (1.9) implies

$$ V^{(a)}_\alpha=2L[\rho_\alpha]. $$
(3.4)

The “cross energy” Σ in (3.2) has the form

(3.5)

Then

(3.6)

Set

$$ \everymath{\displaystyle} \begin{array}{l} \overline{n}:=(n_1,\dots,n_q), \qquad |\bar{n}|:=\sum_{\alpha=1}^q n_\alpha,\\[10pt] \mathbf{1}_{\overline{n}}(\overline{\lambda}):=\prod _{j=1}^{n_1}\mathbf{1}_{\sigma_{1,\varepsilon}}( \lambda_j)\dots \prod_{j=|\bar{n}|-n_q+1}^{n} \mathbf{1}_{\sigma_{q,\varepsilon}}(\lambda_j). \end{array} $$
(3.7)

The key observation which explains our motivation to introduce H a and ΔH is that

(3.8)

It was proven in [15] that \(\mathbf{E}_{n,\beta}\{\Delta H\}=O(1)\), hence this term is “smaller” than \(H_{a}\). On the other hand, by construction \(H_{a}\) does not contain an “interaction” between different intervals \(\sigma_{\alpha}\), so it is possible to apply to it the result of Theorem 1. This idea was used in [15] to prove that \(Z_{\bar{n},\beta}[V]\) can be factorized into a product of one-cut partition functions corresponding to \(V^{(a)}_{\alpha}\) with error O(1). Here we make the next step.

It is easy to see that if we denote

(3.9)

then

(3.10)

Here and below in the proof of Theorem 2 we assume without loss of generality that (h,ρ)=0.

Lemma 2

There exist n-independent C,c>0 such that

(3.11)

where \(\Delta n=(\Delta n_{1},\dots,\Delta n_{q})\), \(\Delta n_{\alpha}=n_{\alpha}-\mu_{\alpha}n\), and \(\mu_{\alpha}\) were defined in (1.31).

Since the proof of the lemma repeats the computations given below for the terms satisfying (3.12), and uses Proposition 1 and Lemmas 3 and 4, it is given at the end of Section 4.

Lemma 2 yields that to prove Theorem 2, it is enough to consider in (3.10) only those terms for which

(3.12)

with any n-independent c (a change of c will change only κ in (1.32) and (1.34)), so we can assume that c is as small as we need in the proof below. To manage the terms satisfying (3.12), we are going to “linearize” the quadratic form (3.8) by using the Gaussian integral representation (see (3.17) below). Then we apply Theorem 1 inside the integrals and integrate the result. As a first step in this direction one should find a good approximation of the integral quadratic form (3.8) by a quadratic form of finite rank. To this aim consider the space of functions

$$\mathcal{H}_{\varepsilon}=\bigoplus_{\alpha=1}^qL_1[ \sigma_{\alpha,2\varepsilon}], $$

and the operator \(\mathcal{L}\) with the kernel log|λ−μ|. It has a block structure \(\{\mathcal{L}_{\alpha,\alpha'}\}_{\alpha,\alpha'=1}^{q}\). Denote by \(\widehat{\mathcal{L}}\) its block-diagonal part and by \(\widetilde{\mathcal{L}}\) its off-diagonal part.

Consider the Chebyshev polynomials \(\{p^{(\alpha)}_{k}\}_{k=0}^{\infty}\) on \(\sigma_{\alpha,2\varepsilon}\) and the corresponding orthonormal system of functions

$$p^{(\alpha)}_{k}(\lambda)=\cos k \biggl(\arccos \biggl( \frac{2\lambda-(a_\alpha+b_\alpha)}{ b_\alpha-a_\alpha+4\varepsilon} \biggr) \biggr),\qquad \varphi^{(\alpha)}_{k}( \lambda)=p^{(\alpha)}_{k}(\lambda)\big|X^{-1/4}_{\sigma_{\alpha,2\varepsilon}}( \lambda)\big|. $$

It is well known that \(\{\varphi^{(\alpha)}_{k}\}_{k=0}^{\infty}\) form an orthonormal basis in \(L_{2}[\sigma_{\alpha,2\varepsilon}]\), hence we can write

Proposition 1

There exist C,d>0 such that for all \(\alpha\not=\alpha'\)

$$ |\mathcal{L}_{k,\alpha;k',\alpha'}|\le Ce^{-d(k+k')}. $$
(3.13)

The proof of the proposition is given in Sect. 4.
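A numerical illustration of the decay (3.13) (a sketch, not part of the proof): for two well-separated model bands, the double Chebyshev coefficients of the off-diagonal log kernel decay geometrically in k+k′, which is the mechanism behind (3.13); the bands and normalization below are chosen for illustration only.

```python
# Double Chebyshev coefficients of log|lam-mu| between two separated bands.
import numpy as np

def cheb_coeff(k, kp, band1=(-0.8, -0.2), band2=(0.2, 0.8), m=400):
    (a1, b1), (a2, b2) = band1, band2
    t = (np.arange(m) + 0.5) * np.pi / m
    mu = 0.5 * (a1 + b1) + 0.5 * (b1 - a1) * np.cos(t)    # mu in band1
    lam = 0.5 * (a2 + b2) + 0.5 * (b2 - a2) * np.cos(t)   # lam in band2
    ker = np.log(np.abs(lam[:, None] - mu[None, :]))
    return (np.cos(kp * t) @ ker @ np.cos(k * t)) * (np.pi / m) ** 2

for k in range(1, 7):
    print(k, abs(cheb_coeff(k, 1)))   # decays geometrically in k
```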

Proposition 1 implies, in particular, that if we choose \(M=[\log^{2}n]\), then uniformly in λ,μ

(3.14)

Consider the matrix \(\widetilde{\mathcal{L}}^{(M)}\). It is a symmetric block matrix in which the block \(\{\mathcal{L}_{k,\alpha;k',\alpha'}\}_{{k,k'=1,\dots,M}}\) corresponds to the kernel \(\widetilde{ \mathcal{L}}^{(M)}_{\alpha\alpha'}\), which is the sum on the r.h.s. of (3.14).

Now we would like to represent the matrix \(\widetilde{ \mathcal{L}}^{(M)}\) as a difference of two positive matrices. To this aim consider the integral operator \(\mathcal{A}\) in \(\mathcal{H}_{\varepsilon}\) with a kernel a(|λμ|) of the form

(3.15)

where the function

$$a_0(\lambda)=- \biggl(\frac{3}{4}\lambda^4- \frac{8}{3}\lambda^3+3\lambda^2 \biggr) $$

is chosen in such a way that a(λ) and its first 3 derivatives are continuous at λ=d, and the third derivative of a(|λ|) has a jump at λ=0.

Lemma 3

The integral operator A with the kernel a(|λ−μ|) is positive in \(L_{2}(\Delta)\), where Δ⊂[−1,1] is any finite system of intervals in \(\mathbb{R}\). Moreover, the integral operator with the kernel \(\log|\lambda-\mu|^{-1}-a(|\lambda-\mu|)\) is positive in \(L_{2}(\Delta)\).

Remark 5

One can easily see that if we choose \(a_{0}(\lambda)=\lambda-1\), then the operator A will also be positive, but in this case the Fourier transform \(\widehat{a}(k)\sim k^{-2}\), as k→∞, while below we need \(\widehat{a}(k)\sim k^{-4}\).

Let \(\widehat{\mathcal{A}}\) be a block-diagonal part of \(\mathcal{A}\). By the construction and the lemma we have

$$ \widetilde{\mathcal{L}}=\widehat{\mathcal{A}}- \mathcal{A},\quad \mathcal{A}\ge 0,\quad \widehat{\mathcal{A}}\ge 0, \quad \widehat{\mathcal{A}}\le -\widehat{\mathcal{L}}. $$
(3.16)

By (3.16), if we consider the matrices of \(\mathcal{A}^{(M)}\) and \(\widehat{ \mathcal{A}}^{(M)}\) in the same basis, we obtain

$$\mathcal{L}_{k,\alpha;k',\alpha'}= \widehat{\mathcal{A}}^{(M)}_{k,\alpha;k',\alpha'}- \mathcal{A}^{(M)}_{k,\alpha;k',\alpha'}. $$

Since \(\mathcal{A}^{(M)}\) and \(\widehat{ \mathcal{A}}^{(M)}\) are positive matrices, they can be written in the form \(\mathcal{A}^{(M)}=S^{2}\), \(\widehat{ \mathcal{A}}^{(M)}=\widehat{S}^{2}\). Thus

where

$$c^{(\alpha)}_{k}=\frac{n}{n_{\alpha}}\bigl(p^{(\alpha)}_{k}, \rho^{(\alpha)}\bigr). $$

Using the representations

$$ e^{\beta x^2/2}=\sqrt{\frac{\beta}{2\pi}}\int du e^{\beta xu/2-\beta u^2/8},\quad e^{-\beta x^2/2}=\sqrt{\frac{\beta}{2\pi}}\int du e^{i\beta xu/2-\beta u^2/8}, $$
(3.17)

we obtain

$$ Z_{\bar{n},\beta}\biggl[V-\frac{h}{n}\biggr]= \biggl( \frac{\beta}{2\pi} \biggr)^{Mq}e^{-n^2\beta\varSigma^*/2} \int e^{-\frac{\beta}{8}( u,u)} \prod_{\alpha=1}^q \frac{Z_{ n_\alpha}[\mu_\alpha^{-1}V^{(a)}_\alpha-n_\alpha^{-1}\widetilde{h}_\alpha]}{n_\alpha!}du, $$
(3.18)

where \(u:=(u^{(1)},u^{(2)})\),

(3.19)
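The linearization used in (3.17)–(3.18) is the elementary Gaussian identity \(e^{\beta x^{2}/2}=C\int e^{\beta xu/2-\beta u^{2}/8}\,du\), with the normalization constant C fixed by the Gaussian integral; the following sketch (not part of the proof) checks it numerically.

```python
# Numerical check of the Gaussian linearization; C is computed directly from the
# Gaussian integral rather than asserted.
import numpy as np

beta, x = 2.0, 0.7
u = np.linspace(-40.0, 40.0, 800001)
du = u[1] - u[0]
C = 1.0 / (np.sum(np.exp(-beta * u**2 / 8.0)) * du)
lhs = np.exp(beta * x**2 / 2.0)
rhs = C * np.sum(np.exp(beta * x * u / 2.0 - beta * u**2 / 8.0)) * du
print(lhs, rhs)   # the two values agree
```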

We are going to apply (1.16) to \(Z_{ n_{\alpha}}[\mu_{\alpha}^{-1}V^{(a)}_{\alpha}-n_{\alpha}^{-1}\widetilde{h}_{\alpha}]\). According to Theorem 1(ii), if u:=(u (1),u (2))∈U 1, with

$$ U_1=\biggl\{u:=\bigl(u^{(1)},u^{(2)} \bigr):\sum_\alpha\big|(D_\alpha\Im \dot{s}_\alpha ,\Im \dot{s}_\alpha)\big|\le k_*\log n \wedge \bigl(u^{(1)},u^{(1)}\bigr)\le\log^4n\biggr\}, $$
(3.20)

and \(\bar{n}\) satisfies (3.12), then we can apply (1.19). Remark that evidently \(\|\widetilde{h}^{(6)}_{\alpha}\|_{\infty}\le CM^{7}=C\log^{14}n\). It will be proven below (see Lemma 4) that the integral over the complement of \(U_{1}\) gives us \(O(n^{-\kappa})\), so we should study mainly \(u\in U_{1}\).

For \(u\in U_{1}\), (1.16) implies

where \(r_{\beta}[\rho]\) is defined in (1.20), \(F_{\beta}(n)\) is defined in (1.21), \(\mathcal{E}_{\alpha}\) is the energy corresponding to the potential \(V^{(a)}_{\alpha}\) on \(\sigma_{\alpha}\), and the remainder bound is uniform in \(u\in U_{1}\) and in h satisfying (3.12). In view of (3.4) we have

$$\mathcal{E}_\alpha=L[\rho_\alpha,\rho_\alpha]- \bigl(V^{(a)}_\alpha,\rho_\alpha\bigr)=-L[ \rho_\alpha,\rho_\alpha]. $$

Moreover, note that (1.22) implies

(3.21)

In addition, the definition (1.18) combined with (3.3) and the fact that \(v^{*}\) of (1.9) is 0 yields

$$\bigl(\nu_{\beta,\alpha},\mu_\alpha^{-1}V^{(a)}_\alpha \bigr)= \biggl(\frac{2}{\beta}-1 \biggr) \biggl(\log\frac{\rho_\alpha}{\mu_\alpha}, \frac{\rho_\alpha}{\mu_\alpha} \biggr) - \biggl(\frac{2}{\beta}-1 \biggr) \biggl(\log \frac{\rho_\alpha}{\mu_\alpha},X^{-1/2}_\alpha\biggr). $$

The definition (3.19) of \(\widetilde{h}_{\alpha}\), (3.4), and the above relations yield

Then, using (3.4), we obtain

Hence, if we introduce the notations

(3.22)

and use that \(\sum L[\rho_{\alpha},\rho_{\alpha}]+\varSigma^{*}=-\mathcal{E}[V]\), then for uU 1 we obtain finally

(3.23)

Note that for any function ϕ which is constant on each \(\sigma_{\alpha}\) we have \(\bar{D}\phi=0\) and \((\nu_{\beta},\phi)=0\). Moreover, the definition (3.19) of \(\dot{s}(u)\) implies

$$\bigl(\dot{s}(u),n\rho_\alpha\bigr)-\frac{n}{n_\alpha}\bigl(s(u),n \rho_\alpha\bigr)\Delta n_\alpha=0. $$

Hence,

(3.24)

Denote

(3.25)

where \(\mathcal{Z}_{n,\beta}^{(0)}[V]\) is defined by (1.35). Then (3.23) and (3.24) yield that for uU 1

Taking into account that

we obtain

(3.26)

To integrate with respect to u, we introduce the block matrices

$$E=\left (\begin{array}{l@{\quad }l} I& I \\ I& I \end{array} \right ),\qquad\mathcal{D}= \bar{D}^{(M)}E=\left (\begin{array}{l@{\quad }l} \bar{D}^{(M)}& \bar{D}^{(M)} \\ \bar{D}^{(M)} & \bar{D}^{(M)} \end{array} \right ), \qquad\mathcal{S}= \left (\begin{array}{l@{\quad }l}\widehat{S}& 0\\ 0&iS\end{array} \right ), $$
$$\bar{D}^{(M)}_{\alpha,k;\alpha',k'}:=\delta_{\alpha,\alpha'} \bigl(\bar{D}_\alpha p^{(\alpha)}_{k}, p^{(\alpha)}_{k'} \bigr),\qquad \mathcal{F}=I-\mathcal{S}\mathcal{D}\mathcal{S}. $$

Thus,

(3.27)

where

$$ \everymath{\displaystyle} \begin{array}{l} R^{(M)}:=\bigl( r^{(M)}, r^{(M)}\bigr),\qquad r^{(M)}=\bigl\{r^{(M)}_{\alpha,k}\bigr\}, \\[10pt] r^{(M)}_{\alpha,k} :=\bigl(2 X^{-1/2}_{\bar{n}}+2 \nu_\beta+\bar{D}h,p^{(\alpha)}_{k} \bigr). \end{array} $$
(3.28)

Lemma 4

There exist δ 1,κ 1>0, such that

$$ \Re(\mathcal{F}u,u)\ge\delta_1(u,u), $$
(3.29)

and if c in the condition (3.12) is sufficiently small, then \(I_{\bar{n}}\) defined by (3.25) satisfies the bounds

$$ \everymath{\displaystyle} \begin{array}{l} \biggl(\frac{\beta}{2\pi} \biggr)^{Mq}\int _{U_1^c} e^{-\beta(u,u)/8}\big|I_{\bar{n}}(u)\big|du\le k_{\bar{n}} n^{-\kappa_1}, \\[10pt] \biggl(\frac{\beta}{2\pi} \biggr)^{Mq}\int e^{-\beta(u,u)/8}\big|I_{\bar{n}}^{*}(u)\big|du \le k_{\bar{n}} n^{1/6}, \end{array} $$
(3.30)

where \(U_{1}^{c}\) is the complement of \(U_{1}\) from (3.20) and \(k_{\bar{n}}\) is defined in (3.26).

The lemma and (3.27) imply that the integral over u of \(I_{\bar{n}}(u)\) coincides with the integral over u of \(k_{\bar{n}}I_{\bar{n}}^{*}(u)\) up to a multiplicative error \((1+O(n^{-\kappa_{1}}))\). By virtue of the standard Gaussian integration formulas we obtain

(3.31)

where \(\bar{r}^{(M)}\) is defined in (3.28) and

But since \(\det(1+AB)=\det(1+BA)\) for any A,B, we obtain

Similarly, since \(\mathcal{S}\mathcal{F}^{-1}\mathcal{S}=S^{2}\mathcal{F}^{-1}_{1}\) and

$$\left ( \begin{array}{l@{\quad }l}I- \bar{D}\widehat{\mathcal{A}}^{(M)} &\bar{D}^{(M)}\mathcal{A}^{(M)}\\ -\bar{D}^{(M)}\widehat{\mathcal{A}}^{(M)} &I+\bar{D}^{(M)}\mathcal{A}^{(M)} \end{array} \right )^{-1}=\mathcal{G}^{(M)} \left ( \begin{array}{l@{\quad }l}I+\bar{D}^{(M)}\mathcal{A}^{(M)}&-\bar{D}^{(M)}\mathcal{A}^{(M)}\\ \bar{D}^{(M)}\widehat{\mathcal{A}}^{(M)} &I-\bar{D}^{(M)}\widehat{\mathcal{A}}^{(M)} \end{array} \right ), $$

where \(\mathcal{G}^{(M)}:= (1-\bar{D}\widetilde{\mathcal{L}}^{(M)} )^{-1}\), we have

Hence we obtain for \(\mathcal{I}_{\bar{n}}^{*}\) of (3.31)

Using Proposition 1 and Lemma 4, we can now replace \(\widetilde{ \mathcal{L}}^{(M)}\) by the “block” integral operator \(\widetilde{L}\) with zero diagonal blocks and off-diagonal blocks \(\mathcal{L}_{\alpha,\alpha'}: L_{2}[\sigma_{\alpha',{2\varepsilon}}]\to L_{2}[\sigma_{\alpha,{2\varepsilon}}]\)

$$\widetilde{\mathcal{L}}_{\alpha,\alpha'}[f]= (\mathbf{1}_{\sigma_{\alpha,{2\varepsilon}}} \widetilde{\mathcal{L}} \mathbf{1}_{\sigma_{\alpha',{2\varepsilon}}} )[f]. $$

The error of this replacement is \(O(e^{-c\log^{2}n})\). Hence,

(3.32)

Moreover, since the operator \(\bar{D}\) is defined on σ (see (1.26) and (1.17)) and \(X^{-1/2}_{\bar{n}}\) are also defined on σ, one can see that the operator \(\widetilde{\mathcal{ L}}\) appears in (3.32) in the combination \(\mathbf{1}_{\sigma}\widetilde{\mathcal{ L}}\mathbf{1}_{\sigma}\), so starting from this moment we assume that \(\widetilde {\mathcal{L}}:\mathcal{H}\to\mathcal{H}\). Let us study

$$\psi_{\bar{n}}:=\mathcal{G} X^{-1/2}_{\bar{n}}\quad \Rightarrow\quad X^{-1/2}_{\bar{n}}=(1-\bar{D}\widetilde{\mathcal{L}}) \psi_{\bar{n}}. $$

In view of (1.22) we get

Thus we conclude that

$$(\mathcal{L}\psi_{\bar{n}})_\alpha(\lambda)=c_\alpha(\bar{n})=\mathrm{const}\quad \Rightarrow \quad \psi_{\bar{n}}=\sum c_\alpha(\bar{n})\psi^{(\alpha)}, $$

where ψ (α) are defined in (1.29). Moreover, by (1.28)–(1.29) we have

Now let us transform the last two terms S 3 and S 4 in the r.h.s. of (3.32).

(3.33)

since \((\widehat{\mathcal{L}}\psi_{\bar{n}},\nu_{\beta})= -(\widetilde{\mathcal{L}}\psi_{\bar{n}},\nu_{\beta})\) in view of \(({\mathcal{L}}\psi_{\bar{n}})_{\alpha}=\mbox{const}\) and \((\nu_{\beta,\alpha},\mathbf{1}_{\sigma_{\alpha}})=0\). Since by (1.22) \(\widehat{\mathcal{L}}D=\widehat{\mathcal{L}}\bar{D}\), we obtain

Thus,

(3.34)

In addition, using that \(\bar{D}\mathcal{L}\mathcal{G}X^{-1/2}_{\bar{n}}=0\), \(\bar{D}\widehat{\mathcal{L}}X^{-1/2}_{\bar{n}}=0\), we have

This relation, Lemma 4, (3.26), (3.32), (3.33), and (3.34) yield that under the condition (3.12) \(\mathcal{T}_{\bar{n}}\) of (3.11) satisfies the bound

Then, taking into account (3.10) and Lemma 2, we get (1.34) and (1.32).

4 Auxiliary Results

Proof of Proposition 1

Assume that \(k\ge k'\), and prove that

$$ \big|I_{k}(\lambda)\big|:= \bigg| \mathbf{1}_{\sigma_{\alpha',2\varepsilon}}(\lambda) \int_{\sigma_{\alpha,2\varepsilon}} \log|\lambda-\mu|\frac{p^{(\alpha)}_{k}(\mu)}{|X^{1/2}_{\sigma_{\alpha,2\varepsilon}}(\mu)|}d\mu \bigg|\le Ce^{-2dk}. $$
(4.1)

Then, using that

$$\int_{\sigma_{\alpha',2\varepsilon}}\frac{|p^{(\alpha)}_{k}(\lambda)|}{ |X^{1/2}_{\sigma_{\alpha,2\varepsilon}}(\lambda)|}d\lambda\le 1, $$

we obtain (3.13), since k+k′≤2max{k,k′}. Changing the variables in the integral in (4.1), \(\mu=c_{\alpha}+d_{\alpha}\cos x\) with \(c_{\alpha}=\frac{1}{2} (a_{\alpha}+b_{\alpha})\), \(d_{\alpha}=\frac{1}{2} (b_{\alpha}-a_{\alpha}+4\varepsilon )\), and integrating by parts, we obtain

where \(z=(\lambda-c_{\alpha})d_{\alpha}^{-1}\), \(|z|>1+\delta_{1}\), \(\zeta(z)=z-\sqrt{z^{2}-1}\), \(|\zeta(z)|\le e^{-2d}\). This proves (4.1). □

Proof of Lemma 3

Consider the Fourier transform \(\widehat{a}(k)\) of a(|λ|). Integrating by parts, it is easy to get that

(4.2)

Here the last equality can be obtained by differentiating both parts with respect to kd. Let us check that the numerator in the last integral is positive. Indeed, it is 0 at t=0, its derivative is positive on (0,π), and it is evidently positive for t≥π. Hence we get the first assertion of the lemma. To prove the second assertion, note that if we consider the function \(a_{1}(\lambda):=\lambda^{-1}+a'(\lambda)\), then since \(a_{1}'''(\lambda)\le 0\) and \(a_{1}''(d)=0\), we get that \(a_{1}''(\lambda)\ge 0\) for λ∈(0,d], and then, since \(a_{1}'(d)=0\), we obtain that \(a_{1}'(\lambda)<0\) for λ∈(0,d]. Hence, if we denote \(l(\lambda)=\log\lambda^{-1}\), then the Fourier transform of l(|λ|)−a(|λ|) is

 □

Proof of Lemma 4

It is easy to see that, to prove (3.29), it suffices to show that

$$ \widehat{\mathcal{S}}_{\alpha\alpha}\overline{D}^{(M)}_{\alpha} \widehat{\mathcal{S}}_{\alpha\alpha}\le (1-\delta_1) \quad \Leftrightarrow\quad \widehat{\mathcal{A}}^{(M)}_{\alpha\alpha}\overline{D}^{(M)}_{\alpha} \widehat{\mathcal{A}}^{(M)}_{\alpha\alpha}\le (1-\delta_1) \widehat{\mathcal{A}}^{(M)}_{\alpha\alpha}. $$
(4.3)

Fix some α and denote by \(A:=\widehat{\mathcal{A}}_{\alpha\alpha}\), \(D:=\overline{D}_{\alpha}\), and \(L:=\widehat{\mathcal{L}}_{\alpha}\) the complete matrices corresponding to the above operators. Write them as block matrices

$$A=\left (\begin{array}{c@{\quad }c}A^{(11)}&A^{(12)}\\ A^{(21)}&A^{(22)}\end{array} \right ),\qquad D=\left (\begin{array}{c@{\quad }c}D^{(11)}&D^{(12)}\\ D^{(21)}&D^{(22)}\end{array} \right ), \qquad L= \left (\begin{array}{c@{\quad }c}L^{(11)}&L^{(12)}\\ L^{(21)}&L^{(22)}\end{array} \right ), $$

such that \(A^{(11)}=:\widehat{\mathcal{A}}^{(M)}_{\alpha\alpha}\), \(D^{(11)}=\overline{D}^{(M)}_{\alpha}\), and \(L^{(11)}=\widehat{\mathcal{L}}^{(M)}_{\alpha}\). Below we will use the inequality valid for any block matrix B≥0

$$ B=\left (\begin{array}{c@{\quad }c}B^{(11)}&B^{(12)}\\ B^{(21)}&B^{(22)}\end{array} \right ),\qquad B^{(21)} \bigl(B^{(11)}\bigr)^{-1}B^{(12)}\le B^{(22)}. $$
(4.4)
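A quick numerical sanity check of (4.4) (a sketch, not part of the proof): for a positive semi-definite block matrix, the Schur complement \(B^{(22)}-B^{(21)}(B^{(11)})^{-1}B^{(12)}\) is again positive semi-definite.

```python
# Random positive semi-definite B: check its Schur complement is PSD.
import numpy as np

rng = np.random.default_rng(1)
G = rng.normal(size=(8, 8))
B = G @ G.T                                   # positive semi-definite 8x8
B11, B12 = B[:4, :4], B[:4, 4:]
B21, B22 = B[4:, :4], B[4:, 4:]
S = B22 - B21 @ np.linalg.solve(B11, B12)     # Schur complement of B11 in B
print(np.linalg.eigvalsh(S))                  # all eigenvalues >= 0 (up to rounding)
```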

Assume that we have proved the inequality

$$ D\le (1-\delta_1)A^{-1}\quad \Leftrightarrow\quad ADA \le (1-\delta_1)A\quad \Rightarrow \quad (ADA)^{(11)}\le (1- \delta_1)A^{(11)}. $$
(4.5)

Then we get

(4.6)

But

$$\big|\bigl(A^{(12)}D^{(21)}A^{(11)}f,f\bigr)\big|\le \big\| \bigl(A^{(11)}\bigr)^{-1/2}A^{(12)}D^{(21)} \bigl(A^{(11)}\bigr)^{1/2}\big\|\bigl(A^{(11)}f,f\bigr). $$

In addition, using (4.4) for the matrix A, we get

Then, taking into account that for any small enough ε>0 (4.10) implies that (−L (11))≤(D (11)+ε)−1, we can use (4.4) for D+ε in order to get

$$D^{(12)}A^{(11)}D^{(12)}\le D^{(21)} \bigl(-L^{(11)}\bigr)D^{(12)}\le D^{(21)} \bigl(D^{(11)}+\varepsilon\bigr)^{-1}D^{(12)}\le D^{(22)}+\varepsilon. $$

Hence we obtain

(4.7)

Integrating by parts it is easy to check that

(4.8)

where \(d_{q}=\frac{1}{2}(b_{q}-a_{q}+4\varepsilon)\) and \(\tilde{a}(x,y)\) is some bounded piecewise continuous function. Hence we conclude that there exists a constant \(C_{0}\) such that if we introduce the diagonal matrix \(A_{d}\) with the entries \((A_{d})_{jk}=\delta_{jk}k^{-4}\), then

$$A_d^{-1/2}AA_d^{-1/2}\le C_0\quad\Rightarrow \quad A\le C_0A_d. $$

Moreover, it is easy to check that there exists C 1>0 such that

(4.9)

Thus, from (4.7) and above bounds we obtain that

Finally we have from (4.6) and (4.5)

Hence we only need to prove (4.5). Since the last relation of (1.22) yields

$$ (\overline{D}_\alpha v,v)=\bigl((-\widehat{ \mathcal{L}}_{\alpha})^{-1}v,v\bigr)+\pi^{-2} \bigl(v,X_{\alpha}^{-1/2}\bigr)^2 \bigl(\widehat{ \mathcal{L}}_{\alpha}^{-1}\mathbf{1}_{\sigma_\alpha}, \mathbf{1}_{\sigma_\alpha}\bigr) \le\bigl((-\widehat{\mathcal{L}}_{\alpha})^{-1}v,v \bigr), $$
(4.10)

it suffices to prove that

$$ (-\widehat{\mathcal{L}})^{-1}\le(1-\delta_1) \widehat{\mathcal{A}}^{-1}\quad \Leftrightarrow \quad \widehat{\mathcal{A}}\le(1- \delta_1) (-\widehat{\mathcal{L}}). $$
(4.11)

But the last bound is a corollary of the following inequality for the Fourier transforms of a(|λ|) and log|λ|−1

$$\widehat{a}(k)<(1-\delta_1)\widehat{l}(k)=(1-\delta_1) \pi/k. $$

Since we have already proved this inequality for \(\delta_{1}=0\) in Lemma 3, we have \(\widehat{a}(k)/\widehat{l}(k)<1\). Besides, it follows from (4.2) that \(\widehat{a}(k)\sim k^{-4}\), hence \(\widehat{a}(k)/\widehat{l}(k)\to 0\), as k→∞, and moreover, \(\widehat{a}(k)/\widehat{l}(k)\to 0\), as k→0. Thus there exists \(\delta_{1}>0\) such that

$$\sup_{k>0}\widehat{a}(k)/\widehat{l}(k)=1-\delta_1. $$

To prove (3.30), we take sufficiently large n-independent C and note that

(4.12)

It is evident that

Moreover, the definition of \(\dot{s}_{\alpha}(\lambda,u)\) (see (3.19)) and the Schwarz inequality yield

(4.13)

where the last inequality is based on the fact that \(\widehat{\mathcal{A}}_{jj}\le C j^{-4}\) in view of (4.8). Hence, choosing C sufficiently large, we obtain

$$\biggl(\frac{\beta}{2\pi} \biggr)^{Mq}\int_{U_5}e^{-\beta(u,u)/8}I(u)du \le e^{-n^2c}. $$

Similarly to (4.13), we have

(4.14)

Thus \(n^{-1}_{\alpha}\widetilde{h}_{\alpha}(\lambda)\) is a Hölder function for \(u\in U_{4}\), and we can use the result of [3], according to which

where \(\mathcal{M}_{1}^{+}[\sigma_{\alpha,\varepsilon}]\) is a set of positive unit measures with supports belonging to σ α,ε . Since

$$-\mu_\alpha^{-1}V^{(a)}_\alpha(\lambda)\le -2\mu_\alpha^{-1}L[\rho_\alpha](\lambda),\quad \lambda \in\sigma_{\alpha,\varepsilon}, $$

we have

(4.15)

Here \(\mathcal{M}_{1}[\sigma_{\alpha,\varepsilon}]\) is the set of all signed unit measures with supports belonging to \(\sigma_{\alpha,\varepsilon}\). It is easy to see that if we remove the positivity condition on the measures, then the maximum point \(\rho_{1,\alpha}\) is uniquely defined by the conditions:

$$2L[\rho_{1,\alpha}](\lambda)-2\mu_\alpha^{-1}L[ \rho_\alpha](\lambda) -n_\alpha^{-1}\Re\widetilde{h}_\alpha(\lambda)=\mbox{const},\quad\lambda\in \sigma_{\alpha,\varepsilon}, \quad\int_{\sigma_{\alpha,\varepsilon}}\rho_{1,\alpha}=1. $$

Hence \(\rho_{1,\alpha}=\mu_{\alpha}^{-1}\rho_{\alpha}+ \frac{1}{2}D_{\sigma_{\alpha,\varepsilon}}\widetilde{h}_{\alpha}\) and the r.h.s. of (4.15) takes the form

But by the definition of \(\widetilde{h}_{\alpha}\) (see (3.19))

$$n_\alpha^{-1}\bigl(\widetilde{h}_\alpha, \mu_\alpha^{-1}\rho_\alpha\bigr)=O \bigl(n^{-1}_{\alpha}\bigr) +O\bigl(n/n_\alpha- \mu_\alpha^{-1}\bigr)=O\bigl(n^{-1}\log n\bigr). $$

Hence

$$E_\alpha(u)=-\mu_\alpha^{-2}L[\rho_\alpha, \rho_\alpha]+ \frac{n_\alpha^{-2}}{4}\bigl(\widehat{\mathcal{S}}_\alpha D_{\sigma_{\alpha,\varepsilon}}\widehat{\mathcal{S}}_\alpha u^{(1)}, u^{(1)}\bigr) +O\bigl(n^{-1}\log n\bigr). $$

These relations, the definition (3.25) and (4.3) yield

where \(\bar{D}_{\sigma_{\varepsilon}}\) is a block-diagonal matrix with blocks \(D_{\sigma_{\alpha,\varepsilon}}\), α=1,…,q. Then

(4.16)

For uU 3 (1.16) and (3.23)–(3.26) imply

(4.17)

where we used that \(n^{-1}|u^{(1)}|\le n^{-1/2}\log n\le n^{-1/3}\) in \(U_{3}\). Then, using (4.17) combined with the Chebyshev inequality for \(\tau=\beta\frac{\delta_{1}}{16}\) with \(\delta_{1}\) of (4.3), we get

where C 1,C 2,C 3,C 4 depend only on σ. Hence, (3.12) implies the bound (3.30) for U 3.

Similarly, using the Chebyshev inequality for \(\tau=\beta\|\mathcal{S}\mathcal{D}\mathcal{S}\|^{-1}/16\), we obtain for \(U_{2}\)

where \(C_{3}',C_{4}'\) depend only on σ. Hence, if c in (3.12) is chosen sufficiently small, the last inequality implies the bound (3.30) for U 2. The second inequality in (3.30) can be obtained by using the standard Gaussian integration formulas like in (3.31), if we choose c from (3.12) sufficiently small. Lemma 4 is proved. □

Proof of Lemma 2

Consider the variational problem of maximizing the functional (1.7) on the system of intervals (3.1) under the additional restrictions

(4.18)

By the method of [3] one can prove that the maximum \(\mathcal{E}^{({\bar{n}}/{n})}[V]\) exists for any partition \((n_{1},\dots,n_{q})\) of n and corresponds to the unique measure \(\rho^{({\bar{n}}/{n})}=(\rho^{({\bar{n}}/{n})}_{1},\dots,\rho^{({\bar{n}}/{n})}_{q})\), and that

(4.19)

where \(C^{*}\) is some absolute constant. Moreover, evidently, there exists \(\delta_{*}>0\) such that for \(|\Delta n_{\alpha}|/n\le\delta_{*}\) we can change each \(\rho^{({\bar{n}}/{n})}_{\alpha}\) so that its mass changes by \(\Delta n_{\alpha}/n\), while the total energy changes by less than \(l_{*}'n^{-2}(\Delta n,\Delta n)\), where \(l_{*}'>0\) is an absolute constant. Hence, for all Δn

(4.20)

with some absolute \(l_{*}\). This inequality combined with (4.19) yields (3.11) for \((\Delta n,\Delta n)>2C^{*}l^{-1}_{*}n\log n\).

For \((\Delta n,\Delta n)<2C^{*}l^{-1}_{*}n\log n\), consider the approximate Hamiltonian \(H_{a}^{({\bar{n}}/{n})}\) defined by (3.2) with \(V^{(a)}_{\alpha,{\bar{n}}/{n}}\) defined by (3.3) with \(\rho_{\alpha}\) replaced by \(\rho_{\alpha}^{({\bar{n}}/{n})}\), and with Σ replaced by \(\varSigma^{*}_{{\bar{n}}/{n}}\), which is obtained as in (3.5) with \(\rho_{\alpha}\) replaced by \(\rho_{\alpha}^{({\bar{n}}/{n})}\). Then

$$\mathbf{1}_{\bar{n}} H=\mathbf{1}_{\bar{n}}H_a^{({\bar{n}}/{n})}+ \Delta H_{\bar{n}} $$

with \(\Delta H_{\bar{n}}\) of (3.8) with the same replacement. Then, continuing the computation up to (3.26), we get

(4.21)

where

where \(\dot{s}(u,\lambda)\) is defined by (3.19) with \(\rho_{\alpha}\) replaced by \(\rho_{\alpha}^{({\bar{n}}/{n})}\) and \(u^{(2)}=0\). According to (3.18)–(3.26), for \(\tilde{U}_{2}=\{u^{(1)}: |(u^{(1)},u^{(1)})|\le n\log^{2}n\}\) we have

with \(r^{(M)}\) defined by (3.28), but without \(2X_{\bar{n}}^{-1/2}\). Integrating this inequality against \(e^{-\beta(u^{(1)},u^{(1)})/8}\) over \(\tilde{U}_{2}\) and taking into account (4.3), we get the bound (3.11) for this part of the integral in (4.21). To integrate over \(U_{4}\) and \(U_{5}\) of (4.12), we repeat the estimates (4.13)–(4.16) of Lemma 4. □