1 Introduction

In 1962, Dyson interpreted the \(N\times N\) Gaussian ensemble (real, complex or quaternion) as the dynamical limit of a matrix-valued Brownian motion H(t), and observed that the eigenvalues of H(t) form an interacting N-particle system with a logarithmic Coulomb interaction and quadratic potential. That is, the eigenvalue process \(\{ \lambda _i (t)\}_{1\le i\le N} \) satisfies the following system of stochastic differential equations with quadratic \(V = x^2/2\) and classical \(\beta =1, 2\) or 4 (depending on the symmetry class of the Gaussian ensemble):

$$\begin{aligned} \mathrm{d}\lambda _i(t) = \sqrt{\frac{2}{\beta N}} \mathrm{d}B_i(t) +\frac{1}{N}\sum _{j:j\ne i}\frac{\mathrm{d}t}{\lambda _i(t)-\lambda _j(t)}-\frac{1}{2}V'(\lambda _i(t))\mathrm{d}t,\quad i=1,2,\ldots , N,\qquad \end{aligned}$$
(1.1)

where \((B_1, \ldots , B_N)\) is an N-dimensional Brownian motion defined on a probability space with a filtration \({\mathscr {F}}=\{{\mathscr {F}}_t, t\ge 0\}\). The initial data \({\varvec{\lambda }}(0)=(\lambda _1(0),\lambda _2(0),\ldots , \lambda _N(0))\in \overline{\Delta _N}\) is given by the eigenvalues of H(0). Here, \(\Delta _N\) denotes the Weyl chamber

$$\begin{aligned} \Delta _N=\{\{x_i\}_{1\le i\le N}\in {\mathbb R}^N: x_1<x_2<\cdots <x_{N}\}. \end{aligned}$$
(1.2)

The process \({\varvec{\lambda }}(t)=(\lambda _1(t),\lambda _2(t),\ldots , \lambda _N(t))\) defined by the system of stochastic differential equations (1.1) is called the \(\beta \)-Dyson Brownian motion (\(\beta \)-DBM) with potential V, which is an interacting particle system with Hamiltonian of the form

$$\begin{aligned} E(x_1,\ldots , x_N)\,{:}{=}\,-\frac{1}{2N}\sum _{1\le i\ne j\le N}\log |x_i-x_j|+\frac{1}{2}\sum _{i=1}^{N}V(x_i). \end{aligned}$$
(1.3)
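
For intuition, the system (1.1) can be integrated numerically. The sketch below is our own illustration (Euler–Maruyama with an ad hoc step size, \(\beta =4\) and quadratic \(V=x^2/2\)), not part of the analysis in this paper; near-collisions make a naive Euler scheme delicate, so we start from well-separated initial data and keep the step size small.

```python
import numpy as np

def dbm_step(lam, dt, beta, rng, Vprime=lambda x: x):
    """One Euler-Maruyama step of the beta-DBM (1.1) with V'(x) = x."""
    N = lam.size
    diff = lam[:, None] - lam[None, :]      # lambda_i - lambda_j
    np.fill_diagonal(diff, np.inf)          # exclude the j = i term
    drift = (1.0 / N) * np.sum(1.0 / diff, axis=1) - 0.5 * Vprime(lam)
    noise = np.sqrt(2.0 * dt / (beta * N)) * rng.standard_normal(N)
    return lam + drift * dt + noise

rng = np.random.default_rng(0)
N, beta, dt, T = 40, 4.0, 2e-4, 3.0
# well-separated initial data inside (-2, 2)
lam = np.sort(2.0 * np.cos(np.pi * np.arange(1, N + 1) / (N + 1)))
for _ in range(int(T / dt)):
    lam = dbm_step(lam, dt, beta, rng)
```

After a time of order one the empirical second moment should be close to 1, the second moment of the semicircle law, which is the equilibrium density for this potential.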

For the special case \(\beta =2\) and \(V=x^2/2\), at each fixed time t, the particles \({\varvec{\lambda }}(t)\) have the same distribution as the eigenvalues of

$$\begin{aligned} H(t){\mathop {=}\limits ^{d}}e^{-t/2}H(0)+\sqrt{1-e^{-t}}G, \end{aligned}$$
(1.4)

where G is a matrix drawn from the Gaussian Unitary Ensemble (GUE). The global eigenvalue density of the GUE follows Wigner’s semi-circle distribution [54], and the local eigenvalue statistics are given by the sine kernel [18,19,20]. Clearly, \(H(t)\rightarrow G\) as \(t\rightarrow \infty \) for any choice of the initial data H(0), and so the system reaches a global equilibrium for \(t \gg 1\). One can also investigate the time to local equilibrium, that is, how long it takes for the local statistics to coincide with those of the GUE. Dyson conjectured [17] that the time to local equilibrium should be much shorter than the order 1 global time scale. It is expected that in the bulk, an eigenvalue statistic on the scale \(\eta \) should coincide with that of the GUE as long as \( t \gg \eta \). To be more precise, one expects the convergence of the following three types of statistics on three types of scales.

  1.

    On the macroscopic scale, the global eigenvalue density should converge to Wigner’s semi-circle distribution provided \(t\gg 1\).

  2.

    The linear eigenvalue statistics of test functions on the mesoscopic scale \(N^{-1}\ll \eta \ll 1\) should coincide with the GUE as long as \(t \gg \eta \).

  3.

    On the microscopic scale \({{\mathrm{O}}}(N^{-1})\), the local eigenvalue statistics should be given by the sine kernel as long as \(t\gg N^{-1}\).
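
The interpolation (1.4) is straightforward to realize numerically. The sketch below (our own illustration; the normalization is chosen so that the spectrum lies near \([-2,2]\)) builds H(t) for a large t and checks that its spectrum is consistent with the semicircle distribution, whose second moment is 1.

```python
import numpy as np

def gue(n, rng):
    """Sample a GUE matrix normalized so the spectrum concentrates on [-2, 2]."""
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (A + A.conj().T) / (2.0 * np.sqrt(n))

rng = np.random.default_rng(1)
N, t = 200, 10.0
H0 = gue(N, rng)                       # initial data H(0); here itself Gaussian
G = gue(N, rng)
# the interpolation (1.4); for this Gaussian H(0) it is again GUE-distributed
Ht = np.exp(-t / 2) * H0 + np.sqrt(1 - np.exp(-t)) * G
eigs = np.linalg.eigvalsh(Ht)
```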

For the macroscopic scale, with quadratic potential V, it was proven by Rogers and Shi [48] that the global density converges to the semicircle distribution. For general potential V, it was proven by Li, Li and Xie [43, 44] that the global eigenvalue density converges to a V-dependent equilibrium measure (which may not be the semicircle distribution for non-quadratic V). We refer to [4] for a nice presentation of the dynamical approach to Wigner’s semi-circle law.

The time to equilibrium at the microscopic scale was studied in a series of works [21,22,23,24,25,26,27, 29, 31, 32] by Erdős, Yau and their collaborators. For classical \(\beta =1, 2, 4\), quadratic V and initial data given by a Wigner matrix, it was proven that after a short time \(t \gg N^{-1}\) the local statistics coincide with those of the G\(\beta \)E. Later, the works [28, 39] established single gap universality for classical DBM for a broad class of initial data, relying on the discrete De Giorgi–Nash–Moser theorem developed in [29]. Fixed energy universality was established in [12, 38] by developing a sophisticated homogenization theory for discrete parabolic systems. These results are a crucial component in proving bulk universality for various classes of random matrix ensembles. Another approach to universality, applicable in special cases, was developed independently and in parallel by Tao and Vu [50].

A central and basic tool in the study of the local statistics of random matrices is the local law and the associated rigidity estimates. The local law is usually formulated in terms of concentration of the Stieltjes transform of the empirical eigenvalue density at short scales \(\eta \gtrsim N^{-1}\). Rigidity estimates give high probability concentration estimates for the eigenvalue locations. These results were first established for Wigner matrices in a series of papers [25, 26, 31, 32, 51], then extended to other matrix models, e.g., general Wigner-type matrices [1, 35], sparse random matrices [21], deformed Wigner ensembles [39, 42] and correlated random matrices [2, 3]. Beyond matrix models, rigidity estimates have been established for one-cut and multi-cut \(\beta \)-ensembles [9,10,11, 45], and for the two-dimensional Coulomb gas [5, 40].

For the special case of classical \(\beta =1, 2, 4\) and quadratic potential V, the solution of (1.1) is given by a matrix model, and so the methods developed for deformed Wigner matrices [39, 41, 42] yield a local law for the Stieltjes transform of the empirical eigenvalue density,

$$\begin{aligned} \tilde{m}_t(z)\,{:}{=}\,\frac{1}{N} \sum _{i=1}^{N} \frac{1}{\lambda _i(t)-z}=\int \frac{\mathrm{d}\tilde{\mu }_t(x)}{x-z}, \quad \tilde{\mu }_t=\frac{1}{N}\sum _{i=1}^N \delta _{\lambda _i(t)}. \end{aligned}$$
(1.5)

However for nonclassical \(\beta \) or non-quadratic V, the process (1.1) is not given by a matrix model and so a corresponding local law is not known.
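
As a point of orientation, for \(\beta =2\) and large t the transform (1.5) can be computed from a sampled GUE matrix and compared with the semicircle Stieltjes transform, the root of \(m^2+zm+1=0\) with positive imaginary part in the upper half plane. The snippet below is our own numerical illustration; at scale \(\eta =\mathop {\mathrm {Im}}[z]\) the discrepancy is expected to be of order \(1/(N\eta )\).

```python
import numpy as np

rng = np.random.default_rng(2)
N = 1000
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
H = (A + A.conj().T) / (2.0 * np.sqrt(N))   # GUE, semicircle on [-2, 2]
eigs = np.linalg.eigvalsh(H)

def m_emp(z):
    """Empirical Stieltjes transform (1.5) at a point z in the upper half plane."""
    return np.mean(1.0 / (eigs - z))

def m_sc(z):
    """Semicircle Stieltjes transform: root of m^2 + z m + 1 = 0 with Im m > 0."""
    s = np.sqrt(z * z - 4.0 + 0j)
    m = (-z + s) / 2.0
    return m if m.imag > 0 else (-z - s) / 2.0

z = 0.5 + 0.1j   # scale eta = 0.1, much larger than 1/N
```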

Our first main result establishes a local law for the Stieltjes transform \(\tilde{m}_t (z)\) on short scales and for all short times \(t \ll 1\). This result is stated as Theorem 3.1 below. It implies a rigidity estimate for the particle locations \(\lambda _i(t)\), i.e., that they are close to deterministic classical locations with high probability. Our methods are purely dynamical and do not rely on any matrix representation. Instead, our method is based on analyzing the stochastic differential equation of the Stieltjes transform \(\tilde{m}_t\) along the characteristics of the limiting continuum equation. We remark that since the \(\beta \)-ensemble is the equilibrium measure of \(\beta \)-DBM, our results may be used to provide another proof of the rigidity of \(\beta \)-ensembles in the case \(\beta \ge 1\), provided that one takes some large deviation estimates (such as [47]) as input. The method of characteristics was used in [36] to prove that on the global scale the empirical measure process of \(\beta \)-DBM with quadratic potential V converges to a Gaussian process. Explicit formulas for the means and covariances of the limiting Gaussian process were derived in [7]. Very recently, Unterberger proved that the global fluctuations of \(\beta \)-DBM with general potential V are asymptotically Gaussian [52, 53]. We also comment that the method of characteristics has recently been used, independently and in parallel, for the analysis of a different equation in [8].

Relying on our local law, we then prove a mesoscopic central limit theorem for linear statistics of the particle process on scales \(\eta \ll t\). This is stated as Theorem 4.2 below. In particular, we see that equilibrium holds for the process (1.1) on mesoscopic scales \(\eta \ll t\). Central limit theorems for mesoscopic linear statistics of Wigner matrices at all scales were established in a series of papers [13, 14, 34, 46]. Analogous results for invariant ensembles were proved in [6, 33, 37]. Mesoscopic statistics for DBM with \(\beta =2\) and quadratic potential were established in [16]. It was proven that at a mesoscopic scale \(\eta \), the mesoscopic central limit theorem holds if and only if \(t\gg \eta \). Recently, related results were proven for classical \(\beta \) and the quadratic potential in [38]. The analysis in [16] relied on the Brézin–Hikami formula special to the \(\beta =2\) case, and the analysis in [38] relied on the matrix model which exists only for classical \(\beta \), i.e. \(\beta =1,2,4\); neither is applicable here. Our approach is based on a direct analysis of the stochastic differential equation of \(\tilde{m}_t\), in which the leading fluctuation term is an integral with respect to Brownian motions. The central limit theorem follows naturally for all \(\beta \ge 1\) and general potential V.

We now outline the organization of the rest of the paper. In Sect. 2, we collect some properties of the \(\beta \)-DBM (1.1), namely, the existence and uniqueness of strong solutions and the existence and uniqueness of the hydrodynamic limit of the empirical density \(\tilde{\mu }_t\), which is a measure-valued process \(\mu _t\). For quadratic V, these statements were proved by Chan [15] and Rogers and Shi [48]. For general potentials (under Assumption 2.1 below), the \(\beta \)-DBM was studied by Li, Li and Xie [43, 44]. In the second part of Sect. 2, we study the Stieltjes transform of the limiting measure-valued process \(\mu _t\) by the method of characteristics, which is used throughout the rest of the paper.

Section 3 contains the main novelty of this paper, in which we prove the local law and rigidity estimate for the particles, Theorem 3.1. We directly analyze the stochastic differential equation satisfied by \(\tilde{m}_t\) using the method of characteristics. In Sect. 4, we prove that the linear statistics satisfy a central limit theorem at mesoscopic scales.

Finally we remark that by combining the rigidity results proven here and the methodology of [38] one can prove gap universality for the process (1.1), thus yielding equilibrium on the local scale \(\eta =1/ N\). We state the gap universality theorem in Appendix A, and sketch the proof.

In the rest of this paper, we use C to denote a large universal constant and c a small universal constant, which may depend on other fixed parameters, i.e., the constants \({\mathfrak a}, {\mathfrak b}, {\mathfrak {K}}\) in Assumptions 2.1 and 4.1, and which may differ from line to line. We write \(X=O(Y)\) if there exists some universal constant C such that \(|X|\le CY\). We write \(X=o(Y)\), or \(X\ll Y\), if the ratio \(|X|/Y\rightarrow 0\) as N goes to infinity. We write \(X\asymp Y\) if there exist universal constants such that \(cY\le |X|\le CY\). We denote the set \(\{1, 2,\ldots , N\}\) by \([\![{1,N}]\!]\). We say an event \(\Omega \) holds with overwhelming probability if for any \(D>0\) and \(N\ge N_0(D)\) large enough, \(\mathbb {P}(\Omega )\ge 1-N^{-D}\).

2 Background on \(\beta \)-Dyson Brownian motion

In this section we collect several properties of \(\beta \)-DBM, required in the remainder of the paper. More precisely, we state the existence and uniqueness of the strong solution to (1.1) and a weak convergence result for the empirical particle density.

In the rest of the paper, we make the following assumption on the potential V.

Assumption 2.1

We assume that the potential V is a \(C^4\) function, and that there exists a constant \({\mathfrak {K}}\ge 0\) such that \(\inf _{x\in {\mathbb R}} V''(x)\ge -2{\mathfrak {K}}\).

We denote by \(M_1({\mathbb R})\) the set of probability measures on \({\mathbb R}\) and equip this set with the weak topology. For \(T>0\) we denote by \(C([0,T], M_1({\mathbb R}))\) the set of continuous processes on [0, T] taking values in \(M_1({\mathbb R})\). We have the following existence result from [43].

Theorem 2.2

Suppose that V satisfies Assumption 2.1. For all \(\beta \ge 1\) and initial data \({\varvec{\lambda }}(0)\in \overline{\Delta _N}\), there exists a strong solution \(({\varvec{\lambda }}(t))_{t\ge 0}\in C({\mathbb R}_+,\overline{\Delta _N})\) to the stochastic differential equation (1.1). For any \(t>0\), \({\varvec{\lambda }}(t)\in \Delta _N\) and \({\varvec{\lambda }}(t)\) is a continuous function of \({\varvec{\lambda }}(0)\).

Proof

The existence of a strong solution with initial data \({\varvec{\lambda }}(0)\in \Delta _N\) follows from [43, Theorem 1.2]. Following the same argument as in [4, Proposition 4.3.5], we can extend the statement to \({\varvec{\lambda }}(0)\in \overline{\Delta _N}\) by the following comparison lemma between strong solutions of (1.1) with initial data in \(\Delta _N\) (the special case with potential \(V\equiv 0\) is proved in [4, Lemma 4.3.6], and the proof below is based on the one given there). \(\square \)

Lemma 2.3

Suppose that V satisfies Assumption 2.1. Let \(({\varvec{\lambda }}(t))_{t\ge 0}\) and \(({\varvec{\eta }}(t))_{t\ge 0}\) be two strong solutions of (1.1) with initial data \({\varvec{\lambda }}(0)\in \Delta _N\) and \({\varvec{\eta }}(0)\in \Delta _N\). Assume that \(\lambda _i(0)>\eta _i(0)\) for all \(i\in [\![{1,N}]\!]\). Then, almost surely, for all \(t\ge 0\) and \(i\in [\![{1,N}]\!]\),

$$\begin{aligned} 0\le \lambda _i(t)-\eta _i(t)\le e^{{{\mathfrak {K}}}t}\max _{j\in [\![{1,N}]\!]}\{\lambda _j(0)-\eta _j(0)\}. \end{aligned}$$
(2.1)

Proof

By taking the difference of the stochastic differential equations satisfied by \(({\varvec{\lambda }}(t))_{t\ge 0}\) and \(({\varvec{\eta }}(t))_{t\ge 0}\), we have

$$\begin{aligned} \partial _t(\lambda _i(t)-\eta _i(t))= & {} \frac{1}{N}\sum _{j:j\ne i}\frac{(\lambda _j(t)-\eta _j(t))-(\lambda _i(t)-\eta _i(t))}{(\lambda _i(t)-\lambda _j(t))(\eta _i(t)-\eta _j(t))}\nonumber \\&-\frac{1}{2}\left( V'(\lambda _i(t))-V'(\eta _i(t))\right) . \end{aligned}$$
(2.2)

Let \(i_0={{\mathrm{argmax}}}_{i\in [\![{N}]\!]}\{\lambda _i(t)-\eta _i(t)\}\). For \(i=i_0\), the first term of (2.2) is non-positive, and

$$\begin{aligned} \partial _t(\lambda _{i_0}(t)-\eta _{i_0}(t))\le -\frac{1}{2}\left( V'(\lambda _{i_0}(t))-V'(\eta _{i_0}(t))\right) . \end{aligned}$$
(2.3)

Either \(\lambda _{i_0}(t)-\eta _{i_0}(t)< 0\), or using Assumption 2.1 the above equation implies \(\partial _t(\lambda _{i_0}(t)-\eta _{i_0}(t))\le {{\mathfrak {K}}}(\lambda _{i_0}(t)-\eta _{i_0}(t))\). Hence,

$$\begin{aligned} \partial _t ( \lambda _{i_0 } (t) - \eta _{i_0} (t) )_+ \le {\mathfrak {K}}( \lambda _{i_0} (t) - \eta _{i_0} (t) )_+. \end{aligned}$$
(2.4)

Therefore, it follows from Gronwall’s inequality that

$$\begin{aligned} \max _{i\in [\![{N}]\!]}\{\lambda _i(t)-\eta _i(t)\}\le e^{{{\mathfrak {K}}}t}\max _{i\in [\![{N}]\!]}\{\lambda _i(0)-\eta _i(0)\}. \end{aligned}$$
(2.5)

Similarly, let \(i_0={{\mathrm{argmin}}}_{i\in [\![{N}]\!]}\{\lambda _i(t)-\eta _i(t)\}\). Either \(\lambda _{i_0}(t)-\eta _{i_0}(t)> 0\), or \(\partial _t(\lambda _{i_0}(t)-\eta _{i_0}(t))\ge {{\mathfrak {K}}}(\lambda _{i_0}(t)-\eta _{i_0}(t))\). Again by Gronwall’s inequality we obtain that \(\min _{i\in [\![{N}]\!]}\{\lambda _i(t)-\eta _i(t)\}\ge 0\). \(\square \)
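
The Gronwall step used twice above, which converts the differential inequality \(\partial _t u\le {\mathfrak {K}}u\) into \(u(t)\le e^{{\mathfrak {K}}t}u(0)\), can be illustrated on a scalar toy model (our own sketch; the subtracted non-negative term plays the role of the discarded non-positive interaction term):

```python
import numpy as np

K, dt, T = 1.5, 1e-4, 2.0
u = 1.0                      # u(0) > 0, playing the role of max_i (lambda_i - eta_i)
for t in np.arange(0.0, T, dt):
    # an arbitrary non-negative subtracted term only improves the bound
    u += dt * (K * u - abs(np.sin(t)))
```

In the discrete scheme \(u_{n+1}\le u_n(1+{\mathfrak K}\,\mathrm{d}t)\), so the Gronwall bound \(u(T)\le e^{{\mathfrak K}T}u(0)\) holds exactly.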

The following theorem is a consequence of [43, Theorems 1.1 and 1.3]. It establishes the existence of a solution to the limiting hydrodynamic equation of the empirical particle process. In its statement we distinguish the parameter L from N. This is due to the fact that we will compare the empirical measure \(\tilde{\mu }_t\) to a solution of the equation (2.8) with initial data coming from the initial value of \({\varvec{\lambda }}(0)\), which is a finite-N object. The existence of this solution is easily established using the theorem below by introducing an auxiliary process \({\varvec{\lambda }}^{(L)}\) which converges to \(\tilde{\mu }_0\) (a fixed finite-N object) as \(L \rightarrow \infty \).

Theorem 2.4

Suppose V satisfies Assumption 2.1, and let \(\beta \ge 1\). Let \({\varvec{\lambda }}^{(L)}(0)=(\lambda _1^{(L)}(0),\lambda _2^{(L)}(0),\ldots , \lambda _L^{(L)}(0))\in \overline{\Delta _L}\) be a sequence of initial data satisfying

$$\begin{aligned} \sup _{L>0}\frac{1}{L}\sum _{i=1}^L\log (\lambda _i^{(L)}(0)^2+1)<\infty . \end{aligned}$$
(2.6)

Assume that the empirical measure \(\tilde{\mu }^{(L)}_0=\frac{1}{L}\sum _{i=1}^L\delta _{\lambda _i^{(L)}(0)}\) converges weakly as L goes to infinity to \(\mu _0\in M_1({\mathbb R})\).

Let \({\varvec{\lambda }}^{(L)}(t)=(\lambda ^{(L)}_1(t),\ldots , \lambda _L^{(L)}(t))_{t\ge 0}\) be the solution of (1.1) with initial data \({\varvec{\lambda }}^{(L)}(0)\), and set

$$\begin{aligned} \tilde{\mu }_t^{(L)}=\frac{1}{L}\sum _{i=1}^L\delta _{\lambda _i^{(L)}(t)}. \end{aligned}$$
(2.7)

Then for any fixed time T, \((\tilde{\mu }_t^{(L)})_{t\in [0,T]}\) converges almost surely in \(C([0,T], M_1({\mathbb R}))\). Its limit is the unique measure-valued process \((\mu _t)_{t\in [0,T]}\) characterized by the McKean–Vlasov equation, i.e., for all \(f\in C_b^2({\mathbb R})\), \(t\in [0,T]\),

$$\begin{aligned} \partial _t \int _{\mathbb R}f(x)\mathrm{d}\mu _t(x)= & {} \frac{1}{2}\int \int _{{\mathbb R}^2}\frac{\partial _x f(x)-\partial _yf(y)}{x-y}\mathrm{d}\mu _t(x)\mathrm{d}\mu _t(y)\nonumber \\&-\frac{1}{2}\int _{\mathbb R}V'(x)f'(x)\mathrm{d}\mu _t(x). \end{aligned}$$
(2.8)
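
As a worked example, taking \(f(x)=x^2\) in (2.8) with the quadratic potential \(V=x^2/2\) gives a closed equation for the second moment \(m_2(t)=\int x^2\mathrm{d}\mu _t(x)\): since \((\partial _xf(x)-\partial _yf(y))/(x-y)=2\) and \(V'(x)f'(x)=2x^2\),

$$\begin{aligned} \partial _t m_2(t)=1-m_2(t),\quad \text{so}\quad m_2(t)=1+(m_2(0)-1)e^{-t}, \end{aligned}$$

and the second moment relaxes exponentially to the semicircle value 1, independently of \(\beta \).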

Taking \(f(x)=(x-z)^{-1}\) for \(z\in {\mathbb C}\setminus {\mathbb R}\) in (2.8), we see that the Stieltjes transform of the limiting measure-valued process, which is defined by

$$\begin{aligned} m_t(z)=\int _{{\mathbb R}} (x-z)^{-1}\mathrm{d}\mu _t(x) , \end{aligned}$$
(2.9)

satisfies the equation

$$\begin{aligned} \partial _t m_t(z)=m_t(z)\partial _z m_t(z)+\frac{1}{2}\int _{{\mathbb R}}\frac{V'(x)}{(x-z)^2}\mathrm{d}\mu _t(x). \end{aligned}$$
(2.10)
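
For the quadratic potential \(V=x^2/2\), equation (2.10) can be checked directly against the semicircle law: writing \(x=(x-z)+z\) gives \(\int V'(x)(x-z)^{-2}\mathrm{d}\mu (x)=m+z\partial _z m\), and the semicircle Stieltjes transform, the root of \(m^2+zm+1=0\) with positive imaginary part, makes the right-hand side vanish. A quick numerical confirmation (our own sketch, with a finite-difference derivative):

```python
import numpy as np

def m_sc(z):
    """Semicircle Stieltjes transform: root of m^2 + z m + 1 = 0 with Im m > 0."""
    s = np.sqrt(z * z - 4.0 + 0j)
    m = (-z + s) / 2.0
    return m if m.imag > 0 else (-z - s) / 2.0

def stationary_rhs(z, h=1e-6):
    """Right-hand side of (2.10) at m = m_sc for V'(x) = x, using
    int V'(x)/(x-z)^2 dmu = m + z * m'  (write x = (x - z) + z)."""
    m = m_sc(z)
    dm = (m_sc(z + h) - m_sc(z - h)) / (2.0 * h)   # finite-difference d/dz
    return m * dm + 0.5 * (m + z * dm)

z = 0.7 + 0.5j
```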

In a moment we will introduce a spatial cut-off of V. In order to do this we require the following exponential bound for \(\Vert {\varvec{\lambda }}(t)\Vert _\infty \).

Proposition 2.5

Suppose V satisfies Assumption 2.1. Let \(\beta \ge 1\), and \({\varvec{\lambda }}(0)\in \overline{\Delta _N}\). Let \({\mathfrak a}\) be a constant such that the initial data satisfies \(\Vert {\varvec{\lambda }}(0)\Vert _\infty \le {\mathfrak a}\). Then for any fixed time T, there exists a finite constant \({\mathfrak b}={\mathfrak b}({\mathfrak a}, T )\), such that for any \(0\le t\le T\), the unique strong solution of (1.1) satisfies:

$$\begin{aligned} \mathbb {P}(\max \{|\lambda _1(t)|,|\lambda _N(t)|\}\ge {\mathfrak b})\le e^{-N}. \end{aligned}$$
(2.11)

Proof

Let \(({\varvec{\eta }}(t))_{t\ge 0}\) be the strong solution of \(\beta \)-DBM with potential \(V=0\),

$$\begin{aligned} \mathrm{d}\eta _i(t) = \sqrt{\frac{2}{\beta N}} \mathrm{d}B_i(t) +\frac{1}{N}\sum _{j:j\ne i}\frac{\mathrm{d}t}{\eta _i(t)-\eta _j(t)},\quad i=1,2,\ldots , N. \end{aligned}$$
(2.12)

We take the initial data as \({\varvec{\eta }}(0)={\varvec{\lambda }}(0)\in \overline{\Delta _N}\). Thanks to [4, Lemma 4.3.17], there exists a finite constant \({\mathfrak b}_1={\mathfrak b}_1({\mathfrak a}, T)\), such that

$$\begin{aligned} \Omega \,{:}{=}\,\left\{ \max _{0\le t\le T}\max \{|\eta _1(t)|,|\eta _N(t)|\}\le {\mathfrak b}_1\right\} ,\quad \mathbb {P}(\Omega )\ge 1-e^{-N}. \end{aligned}$$
(2.13)

By taking the difference of the stochastic differential equations satisfied by \(({\varvec{\lambda }}(t))_{t\ge 0}\) and \(({\varvec{\eta }}(t))_{t\ge 0}\), we get

$$\begin{aligned} \partial _t(\lambda _i(t)-\eta _i(t))=\frac{1}{N}\sum _{j:j\ne i}\frac{(\lambda _j(t)-\eta _j(t))-(\lambda _i(t)-\eta _i(t))}{(\lambda _i(t)-\lambda _j(t))(\eta _i(t)-\eta _j(t))} -\frac{1}{2} V'(\lambda _i(t)).\nonumber \\ \end{aligned}$$
(2.14)

Let \(i_0={{\mathrm{argmax}}}_{i\in [\![{N}]\!]}\{\lambda _i(t)-\eta _i(t)\}\). For \(i=i_0\), the first term of (2.14) is non-positive, and thus on the event \(\Omega \),

$$\begin{aligned} \begin{aligned} \partial _t(\lambda _{i_0}(t)-\eta _{i_0}(t)) \le&-\frac{1}{2}\left( V'(\lambda _{i_0}(t))-V'(\eta _{i_0}(t))\right) -\frac{1}{2}V'(\eta _{i_0}(t))\\ \le&-\frac{1}{2}\left( V'(\lambda _{i_0}(t))-V'(\eta _{i_0}(t))\right) +C, \end{aligned} \end{aligned}$$
(2.15)

where \(C=\max _{x\in [-{\mathfrak b}_1,{\mathfrak b}_1]}|V'(x)|/2\). Then, thanks to Assumption 2.1, either \(\lambda _{i_0}(t)-\eta _{i_0}(t)< 0\), or \(\partial _t(\lambda _{i_0}(t)-\eta _{i_0}(t))\le {{\mathfrak {K}}}(\lambda _{i_0}(t)-\eta _{i_0}(t))+C\). Therefore, it follows from Gronwall’s inequality that

$$\begin{aligned} \max _{i\in [\![{N}]\!]}\{\lambda _i(t)-\eta _i(t)\}\le \frac{C(e^{{{\mathfrak {K}}}t}-1)}{{{\mathfrak {K}}}}. \end{aligned}$$
(2.16)

Thus

$$\begin{aligned} \max _{i\in [\![{N}]\!]}\{\lambda _i(t)\}\le {\mathfrak b}_1+\frac{C(e^{{{\mathfrak {K}}}t}-1)}{{{\mathfrak {K}}}}. \end{aligned}$$
(2.17)

Similarly, let \(i_0={{\mathrm{argmin}}}_{i\in [\![{N}]\!]}\{\lambda _i(t)-\eta _i(t)\}\); then either \(\lambda _{i_0}(t)-\eta _{i_0}(t)> 0\), or \(\partial _t(\lambda _{i_0}(t)-\eta _{i_0}(t))\ge {{\mathfrak {K}}}(\lambda _{i_0}(t)-\eta _{i_0}(t))-C\). It follows from Gronwall’s inequality that \( \min _{i\in [\![{N}]\!]}\{\lambda _i(t)\}\ge -{\mathfrak b}_1-C(e^{{{\mathfrak {K}}}t}-1)/{{\mathfrak {K}}}. \) Proposition 2.5 follows by taking \({\mathfrak b}={\mathfrak b}_1+C(e^{{{\mathfrak {K}}}T}-1)/{{\mathfrak {K}}}\). \(\square \)

Note that the constant \({\mathfrak b}\) in the previous proposition depends only on V through its \(C^1\) norm on the interval \([- {\mathfrak b}_1, {\mathfrak b}_1]\) and \({\mathfrak {K}}\). Hence, if we replace \(V'(x)\) by \(V'(x) \chi (x)\) where \(\chi \) is a smooth cut-off function on \([-2 {\mathfrak b}, 2{\mathfrak b}]\) (we assume \({\mathfrak b}> 1\)), then by Proposition 2.5 the solutions of (1.1) with the original potential \(V'(x)\) and the cut-off potential \(V'(x)\chi (x)\) agree with exponentially high probability. Hence for the remainder of the paper it will suffice for our purposes to work with the cut-off potential \(V'(x) \chi (x)\).

We introduce the following quasi-analytic extension of \(V'\) of order three,

$$\begin{aligned} V'(x+\mathrm {i}y)\,{:}{=}\,\left( V'(x)\chi (x)+\mathrm {i}y \partial _x(V'(x)\chi (x)) -\frac{y^2}{2}\partial _x^2(V'(x)\chi (x))\right) \chi (y).\nonumber \\ \end{aligned}$$
(2.18)

We denote,

$$\begin{aligned} \partial _z=\frac{1}{2}(\partial _x-\mathrm {i}\partial _y),\quad \partial _{\bar{z}}=\frac{1}{2}(\partial _x+\mathrm {i}\partial _y). \end{aligned}$$
(2.19)

We rewrite (2.10) in the following form:

$$\begin{aligned} \begin{aligned} \partial _t m_t(z)=\partial _z m_t(z)\left( m_t(z)+\frac{V'(z)}{2}\right) +\frac{m_t(z)\partial _z V'(z)}{2}+\int _{{\mathbb R}}g(z,x) \mathrm{d}\mu _t(x), \end{aligned} \end{aligned}$$
(2.20)

where

$$\begin{aligned} g(z,x)\,{:}{=}\,\frac{V'(x)-V'(z)-(x-z)\partial _zV'(z)}{2(x-z)^2},\quad g(x,x)\,{:}{=}\,\frac{V'''(x)}{4}. \end{aligned}$$
(2.21)
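
The boundary value \(g(x,x)=V'''(x)/4\) is consistent with the first branch of (2.21): on the real axis (where \(\chi \equiv 1\)), the numerator is a second-order Taylor remainder of \(V'\). This can be checked for a concrete (hypothetical) test potential \(V(x)=x^4\), for which \(V'''(z)/4=6z\):

```python
def Vp(x):   return 4.0 * x ** 3     # V'(x) for the test potential V(x) = x^4
def Vpp(x):  return 12.0 * x ** 2    # V''(x)
def Vppp(x): return 24.0 * x         # V'''(x)

def g(z, x):
    """g(z, x) from (2.21) for real z, where chi = 1 and V'(z) is the usual V'."""
    return (Vp(x) - Vp(z) - (x - z) * Vpp(z)) / (2.0 * (x - z) ** 2)

z = 1.0
vals = [g(z, z + h) for h in (1e-2, 1e-3, 1e-4)]   # approaches V'''(z)/4 = 6
```

(For this polynomial, \(g(z,z+h)=6z+2h\) exactly, so the limit is immediate.)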

By our definition (2.18), \(V'\) is quasi-analytic along the real axis. One can directly check the following properties of \(g(z,x)\) and \(V'(z)\).

Proposition 2.6

Suppose V satisfies Assumption 2.1. Let \(V'(z)\) and \(g(z,x)\) be as defined in (2.18) and (2.21). There exists a constant C depending only on V, such that

  1.

    \(\Vert V'(z)\Vert _{C^1} \le C\), \( | \mathop {\mathrm {Im}}[ V' (z) ] | \le C | \mathop {\mathrm {Im}}[z]| \) and \(| \mathop {\mathrm {Im}}[\partial _z V'(z)] | \le C | \mathop {\mathrm {Im}}[ z ]|\).

  2.

    The following bounds hold uniformly over \(z\in {\mathbb C}\) and \(x \in {\mathbb R}\). We have \( |g (z, x) | + | \partial _x g(z, x) | \le C\). Furthermore, \( | \partial ^2_x g(z, x) | \le C |z-x|^{-1}\) and \(| \mathop {\mathrm {Im}}[ g (z, x) ] | \le C |\mathop {\mathrm {Im}}[z]|\).

  3.

    If we further assume V is \(C^5\), then \(\Vert V'(z)\Vert _{C^2} \le C\), and uniformly over \(z\in {\mathbb C}\) and \(x \in {\mathbb R}\), \(|\partial _z g(z,x)|+|\partial _{{\bar{z}}}g(z,x)|\le C\).

We define the following quasi-analytic extension of \(g(z,\cdot )\) of order two,

$$\begin{aligned} \tilde{g}(z,x+\mathrm {i}y)\,{:}{=}\,(g(z,x)+\mathrm {i}y \partial _x g(z,x))\chi (y). \end{aligned}$$
(2.22)

By the Helffer–Sjöstrand formula (see [30, Chapter 11.2]),

$$\begin{aligned} \int _{{\mathbb R}}g(z,x) \mathrm{d}\mu _t(x)=\frac{1}{\pi }\int _{{\mathbb C}} \partial _{{\bar{w}}} \tilde{g}(z,w) m_t(w)\mathrm{d}^2 w, \end{aligned}$$
(2.23)

and so we can rewrite (2.20) as an autonomous differential equation of \(m_t(z)\):

$$\begin{aligned} \partial _t m_t(z)= & {} \partial _zm_t(z)\left( m_t(z)+\frac{V'(z)}{2}\right) +\frac{m_t(z)\partial _z V'(z)}{2}\nonumber \\&+\frac{1}{\pi }\int _{{\mathbb C}} \partial _{{\bar{w}}} \tilde{g}(z,w) m_t(w)\mathrm{d}^2 w. \end{aligned}$$
(2.24)
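
In scalar form, the Helffer–Sjöstrand formula behind (2.23) reads \(f(\lambda )=\frac{1}{\pi }\int _{{\mathbb C}}\partial _{{\bar{w}}}\tilde{f}(w)(\lambda -w)^{-1}\mathrm{d}^2w\) for the order-two extension \(\tilde{f}(x+\mathrm {i}y)=(f(x)+\mathrm {i}yf'(x))\chi (y)\). The sketch below is our own numerical illustration (a Gaussian f, a \(\cos ^2\) bump as a stand-in for the cutoff \(\chi \), and midpoint quadrature); it recovers \(f(1/2)\) to a few percent:

```python
import numpy as np

f   = lambda x: np.exp(-x ** 2)
fp  = lambda x: -2.0 * x * np.exp(-x ** 2)
fpp = lambda x: (4.0 * x ** 2 - 2.0) * np.exp(-x ** 2)

def chi(y):    # smooth even cutoff supported on [-1, 1] (a hypothetical choice)
    return np.where(np.abs(y) < 1, np.cos(np.pi * y / 2) ** 2, 0.0)

def chip(y):   # its derivative
    return np.where(np.abs(y) < 1, -(np.pi / 2) * np.sin(np.pi * y), 0.0)

lam = 0.5
x = np.linspace(-6.0, 6.0, 1201)
y = (np.arange(-400, 400) + 0.5) / 400.0     # midpoints in (-1, 1), avoiding y = 0
X, Y = np.meshgrid(x, y)
# 2 d/dwbar of (f + iy f') chi(y)  =  iy f'' chi + i (f + iy f') chi'
dbar = 0.5 * (1j * Y * fpp(X) * chi(Y) + 1j * (f(X) + 1j * Y * fp(X)) * chip(Y))
dx, dy = x[1] - x[0], y[1] - y[0]
val = (dbar / (lam - (X + 1j * Y))).sum() * dx * dy / np.pi
```

Note that \(\partial _{{\bar{w}}}\tilde{f}\) vanishes linearly at the real axis, which is what tames the singularity of \((\lambda -w)^{-1}\) in the quadrature.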

2.1 Stieltjes transform of the limit measure-valued process

In this subsection we analyze the differential equation (2.24) for the Stieltjes transform of the limiting measure-valued process, with initial data \(\mu _0\) which we assume to satisfy \({{\mathrm{supp}}}\,\mu _0\subset [-{\mathfrak a},{\mathfrak a}]\). We fix a time T. By Theorem 2.4 and Proposition 2.5, there exists a finite constant \({\mathfrak b}={\mathfrak b}({{\mathfrak a}}, T)\) such that \({{\mathrm{supp}}}\,\mu _t\subset [-{\mathfrak b},{\mathfrak b}]\) for all \(0\le t\le T\).

We analyze (2.24) by the method of characteristics. Let

$$\begin{aligned} \partial _t z_t(u)=-m_t(z_t(u))-\frac{V'(z_t(u))}{2},\qquad z_0=u\in {\mathbb C}_+. \end{aligned}$$
(2.25)

When the context is clear, we omit the parameter u, i.e., we simply write \(z_t\) instead of \(z_t(u)\).

For any \(\varepsilon >0\), let \({\mathbb C}_+^\varepsilon =\{z:\mathop {\mathrm {Im}}[z]>\varepsilon \}\). Since \(m_t\) is analytic, bounded and Lipschitz on the closed domain \(\overline{{\mathbb C}_+^\varepsilon }\), we have that for any u with \(u\in {\mathbb C}_+^\varepsilon \), the solution \(z_t(u)\) exists, is unique, and is well defined before exiting the domain. Thanks to the local uniqueness of the solution curve, it follows, by taking \(\varepsilon \rightarrow 0\), that for any u with \(\mathop {\mathrm {Im}}[u]>0\), the solution curve \(z_t(u)\) is well defined before it exits the upper half plane. For any \(u\in {\mathbb C}_+\), either the flow \(z_t(u)\) stays in the upper half plane forever, or there exists some finite time t such that \(\lim _{s\rightarrow t}\mathop {\mathrm {Im}}[z_s(u)]=0\).

Plugging (2.25) into (2.24), and applying the chain rule we obtain

$$\begin{aligned} \partial _t m_t(z_t)=\frac{m_t(z_t)\partial _z V'(z_t)}{2}+\frac{1}{\pi }\int _{{\mathbb C}} \partial _{{\bar{w}}} \tilde{g}(z_t,w) m_t(w)\mathrm{d}^2 w. \end{aligned}$$
(2.26)
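
For the quadratic potential (taking \(\chi \equiv 1\)) these equations close nicely: \(\partial _zV'(z)=1\) and \(g\equiv 0\) by (2.21), so (2.26) reduces to \(\partial _t m_t(z_t)=m_t(z_t)/2\), i.e., the Stieltjes transform is simply multiplied by \(e^{t/2}\) along characteristics. Since the semicircle law is stationary in this case, this gives a concrete numerical test (our own sketch, classical RK4):

```python
import numpy as np

def m_sc(z):
    """Stieltjes transform of the stationary (semicircle) measure for V = x^2/2:
    the root of m^2 + z m + 1 = 0 with positive imaginary part."""
    s = np.sqrt(z * z - 4.0 + 0j)
    m = (-z + s) / 2.0
    return m if m.imag > 0 else (-z - s) / 2.0

def rhs(z):
    # characteristic flow (2.25) with V'(z) = z
    return -m_sc(z) - z / 2.0

z0, dt, T = 2.0j, 1e-3, 0.5
z = z0
for _ in range(int(T / dt)):          # classical RK4 for dz/dt = rhs(z)
    k1 = rhs(z); k2 = rhs(z + dt * k1 / 2)
    k3 = rhs(z + dt * k2 / 2); k4 = rhs(z + dt * k3)
    z += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
```

Along the flow, \(\mathop {\mathrm {Im}}[z_t]\) decreases toward the spectrum while \(\mathop {\mathrm {Im}}[m(z_t)]\) grows by the factor \(e^{t/2}\).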

The behavior of \(z_s\) and \(m_s(z_s)\) is governed by the system of equations (2.25) and (2.26). The following properties of the flows \(\mathop {\mathrm {Im}}[z_s]\) and \(\mathop {\mathrm {Im}}[m_s(z_s)]\) will be used throughout the paper.

Proposition 2.7

Suppose V satisfies the Assumption 2.1. Fix a time \(T>0\). There exists a constant \(C=C(V,{\mathfrak b})\) such that the following holds. For any \(0\le s\le t\le T\) with \(\mathop {\mathrm {Im}}[ z_t ] >0\), the following estimates hold uniformly for initial \(u = z_0\) in compact subsets of \( \mathbb {C}_+\).

$$\begin{aligned}&\mathop {\mathrm {Im}}[m_t(z)]\mathop {\mathrm {Im}}[z]\le 1,\quad |\partial _z m_t(z)|\le \frac{\mathop {\mathrm {Im}}[m_t(z)]}{\mathop {\mathrm {Im}}[z]}, \end{aligned}$$
(2.27)
$$\begin{aligned}&e^{-C(t-s)}\mathop {\mathrm {Im}}[z_t]\le \mathop {\mathrm {Im}}[z_s], \end{aligned}$$
(2.28)
$$\begin{aligned}&e^{-tC}\mathop {\mathrm {Im}}[m_0(z_0)]\le \mathop {\mathrm {Im}}[m_t(z_t)]\le e^{tC}\mathop {\mathrm {Im}}[m_0(z_0)], \end{aligned}$$
(2.29)
$$\begin{aligned}&e^{-Ct}\left( \mathop {\mathrm {Im}}[z_0]-\frac{e^{Ct}-1}{C}\mathop {\mathrm {Im}}[m_0(z_0)]\right) \nonumber \\&\quad \le \mathop {\mathrm {Im}}[z_t]\le e^{Ct}\left( \mathop {\mathrm {Im}}[z_0]-\frac{1-e^{-Ct}}{C}\mathop {\mathrm {Im}}[m_0(z_0)]\right) , \end{aligned}$$
(2.30)

and

$$\begin{aligned}&\int _{s}^t\frac{\mathop {\mathrm {Im}}[m_\tau (z_\tau )] \mathrm{d}\tau }{\mathop {\mathrm {Im}}[z_\tau ]}\le C(t-s)+\log \frac{\mathop {\mathrm {Im}}[z_s]}{\mathop {\mathrm {Im}}[z_t]},\nonumber \\&\quad \int _{s}^t\frac{\mathop {\mathrm {Im}}[m_\tau (z_\tau )]\mathrm{d}\tau }{\mathop {\mathrm {Im}}[z_\tau ]^p} \le \frac{C}{\mathop {\mathrm {Im}}[z_t]^{p-1}},\quad p>1. \end{aligned}$$
(2.31)

Proof

The estimates (2.27) are general and hold for any Stieltjes transform. First, we have

$$\begin{aligned} \mathop {\mathrm {Im}}[ m_t(z)]\mathop {\mathrm {Im}}[z]=\int _{{\mathbb R}}\frac{\mathop {\mathrm {Im}}[z]^2\mathrm{d}\mu _t(x)}{|x-z|^2}\le \int _{\mathbb R}\mathrm{d}\mu _t (x)= 1, \end{aligned}$$

and secondly we have,

$$\begin{aligned} |\partial _z m_t(z)|=\left| \int _{{\mathbb R}}\frac{\mathrm{d}\mu _t(x)}{(x-z)^2}\right| \le \frac{1}{\mathop {\mathrm {Im}}[z]}\int _{{\mathbb R}}\frac{\mathop {\mathrm {Im}}[z]\mathrm{d}\mu _t(x)}{|x-z|^2}=\frac{\mathop {\mathrm {Im}}[m_t(z)]}{\mathop {\mathrm {Im}}[z]}. \end{aligned}$$

Since \(\mathop {\mathrm {Im}}[m_s(z_s)]\ge 0\), it follows from (2.25) and the estimate \(|\mathop {\mathrm {Im}}[V'(z_s)]|={{\mathrm{O}}}(\mathop {\mathrm {Im}}[z_s])\) of Proposition 2.6 that there exists a constant C such that

$$\begin{aligned} \partial _s\mathop {\mathrm {Im}}[z_s]\le C\mathop {\mathrm {Im}}[z_s]. \end{aligned}$$
(2.32)

The estimate (2.28) follows.

By Proposition 2.6, \(\mathop {\mathrm {Im}}[V'(z)]={{\mathrm{O}}}( \mathop {\mathrm {Im}}[z])\). It follows from taking imaginary part of (2.25) that there exists some constant C depending on V, such that

$$\begin{aligned} \left| \partial _s\mathop {\mathrm {Im}}[z_s]+\mathop {\mathrm {Im}}[m_s(z_s)]\right| \le C\mathop {\mathrm {Im}}[z_s]. \end{aligned}$$
(2.33)

By rearranging, (2.33) leads to the inequalities

$$\begin{aligned} -e^{Cs}\mathop {\mathrm {Im}}[m_s(z_s)]\le \partial _s \left( e^{Cs}\mathop {\mathrm {Im}}[z_s]\right) , \quad \partial _s \left( e^{-Cs}\mathop {\mathrm {Im}}[z_s]\right) \le -e^{-Cs}\mathop {\mathrm {Im}}[m_s(z_s)].\nonumber \\ \end{aligned}$$
(2.34)

Similarly, by taking the imaginary part of (2.26), i.e.

$$\begin{aligned} \partial _s m_s(z_s)=\frac{m_s(z_s)\partial _z V'(z_s)}{2}+ \int _{{\mathbb R}} g (z_s, x ) \mathrm {d}\mu _s (x), \end{aligned}$$
(2.35)

and using the estimates \( | \mathop {\mathrm {Im}}[\partial _z V'(z)] | + |\mathop {\mathrm {Im}}[g(z,x)]|={{\mathrm{O}}}( \mathop {\mathrm {Im}}[z])\) and \(|\partial _z V' (z) | \le C\) in Proposition 2.6 we obtain,

$$\begin{aligned} \left| \partial _s\mathop {\mathrm {Im}}[m_s(z_s)]\right| \le \left| \mathop {\mathrm {Im}}\left[ \frac{\partial _zV'(z_s)m_s(z_s)}{2}\right] \right| +C'\mathop {\mathrm {Im}}[z_s]\le C \mathop {\mathrm {Im}}[m_s(z_s)].\qquad \end{aligned}$$
(2.36)

In the last inequality we used \({{\mathrm{supp}}}\,\mu _s\subset [-{\mathfrak b}, {\mathfrak b}]\), and so

$$\begin{aligned}&|\mathop {\mathrm {Im}}[\partial _z V'(z_s)]\mathop {\mathrm {Re}}[m_s(z_s)]|=1_{\{ |\mathop {\mathrm {Re}}[z_s]|\le 2{\mathfrak b}\}}{{\mathrm{O}}}(\mathop {\mathrm {Im}}[z_s]|\mathop {\mathrm {Re}}[m_s(z_s)]|)\\&\quad =1_{ \{ |\mathop {\mathrm {Re}}[z_s]|\le 2{\mathfrak b}\}}{{\mathrm{O}}}\left( \int _{{\mathbb R}}\frac{\mathop {\mathrm {Im}}[z_s]\mathop {\mathrm {Re}}[x-z_s]\mathrm{d}\mu _s(x)}{|x-z_s|^2}\right) \\&\quad ={{\mathrm{O}}}\left( \int _{{\mathbb R}}\frac{3{\mathfrak b}\mathop {\mathrm {Im}}[z_s]\mathrm{d}\mu _s(x)}{|x-z_s|^2}\right) ={{\mathrm{O}}}(\mathop {\mathrm {Im}}[m_s(z_s)]). \end{aligned}$$

We also used \({{\mathrm{supp}}}\,\mu _s\subset [-{\mathfrak b}, {\mathfrak b}]\): for \(x\in {{\mathrm{supp}}}\,\mu _s\) and \(|\mathop {\mathrm {Re}}[z_s]|\le 2{\mathfrak b}\), it holds that \(|\mathop {\mathrm {Re}}[x-z_s]|\le 3{\mathfrak b}\).

The estimate (2.29) then follows from (2.36) and Gronwall’s inequality, and the estimate (2.30) follows from combining (2.34) and (2.29). For (2.31) we have by (2.34),

$$\begin{aligned} \int _{s}^t\frac{\mathop {\mathrm {Im}}[m_\tau (z_\tau )] \mathrm{d}\tau }{\mathop {\mathrm {Im}}[z_\tau ]^p}\le \int _{s}^t\frac{-\partial _\tau \left( e^{-C\tau }\mathop {\mathrm {Im}}[z_\tau ]\right) \mathrm{d}\tau }{\left( e^{-C\tau }\mathop {\mathrm {Im}}[z_\tau ]\right) ^p}. \end{aligned}$$
(2.37)

The case \(p=1\) follows by integrating the right-hand side directly. For \(p > 1\), we have

$$\begin{aligned}&\int _{s}^t\frac{\mathop {\mathrm {Im}}[m_\tau (z_\tau )] \mathrm{d}\tau }{\mathop {\mathrm {Im}}[z_\tau ]^p}\le \frac{1}{p-1}\left( \frac{1}{\left( e^{-Ct}\mathop {\mathrm {Im}}[z_t]\right) ^{p-1}}-\frac{1}{\left( e^{-Cs}\mathop {\mathrm {Im}}[z_s]\right) ^{p-1}}\right) \nonumber \\&\quad \le \frac{C'}{\mathop {\mathrm {Im}}[z_t]^{p-1}}. \end{aligned}$$
(2.38)

\(\square \)
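The integration carried out in (2.37)–(2.38) is the elementary identity \(\int _s^t -f'(\tau )f(\tau )^{-p}\,\mathrm{d}\tau =\left( f(t)^{1-p}-f(s)^{1-p}\right) /(p-1)\) for \(p>1\) and a positive decreasing f, applied with \(f(\tau )=e^{-C\tau }\mathop {\mathrm {Im}}[z_\tau ]\). A quick numerical sanity check of this identity (the test function f below is arbitrary and purely illustrative):

```python
import math

def lhs_integral(f, fprime, s, t, p, n=20000):
    # midpoint Riemann sum for  int_s^t  -f'(tau) / f(tau)^p  dtau
    h = (t - s) / n
    return sum(-fprime(s + (k + 0.5) * h) / f(s + (k + 0.5) * h) ** p
               for k in range(n)) * h

def rhs_closed_form(f, s, t, p):
    # the closed form used in (2.38):  (f(t)^{1-p} - f(s)^{1-p}) / (p - 1)
    return (f(t) ** (1 - p) - f(s) ** (1 - p)) / (p - 1)

# decreasing positive test function f(tau) = exp(-tau), so -f' = f
f = lambda x: math.exp(-x)
fp = lambda x: -math.exp(-x)
num = lhs_integral(f, fp, 0.1, 1.0, 2.0)
exact = rhs_closed_form(f, 0.1, 1.0, 2.0)
```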

We have the following result for the flow map \(u \mapsto z_t (u)\).

Proposition 2.8

Suppose that V satisfies Assumption 2.1. Fix a time T. For any \(0\le t\le T\), there exists an open domain \(\Omega _t\subset {\mathbb C}_+\), such that the vector flow map \(u\mapsto z_t(u)\) is a \(C^1\) homeomorphism from \(\Omega _t\) to \({\mathbb C}_+\).

Proof

We define

$$\begin{aligned} \Omega _t\,{:}{=}\,\{u\in {\mathbb C}_+: z_s(u)\in {\mathbb C}_+, 0\le s\le t\}. \end{aligned}$$
(2.39)

By Assumption 2.1, V is a \(C^4\) function. From our construction of \(V'(z)\) as in (2.18), \(V'(z)\) is a \(C^1\) function. Thus the vector flow map \(u\mapsto z_t(u)\) is a \(C^1\) map from \(\Omega _t\) to \({\mathbb C}_+\). We need to show that it is invertible. Define the following flow map by

$$\begin{aligned} \partial _sy_s(v)=m_{t-s}(y_{s}(v))+\frac{V'(y_{s}(v))}{2}, \quad y_0=v\in {\mathbb C}_+, \end{aligned}$$
(2.40)

for \(0 \le s \le t\). Since \(\mathop {\mathrm {Im}}[m_{t-s}(y_s(v))]\ge 0\), there exists some constant C depending on V such that

$$\begin{aligned} \partial _s \left( e^{Cs}\mathop {\mathrm {Im}}[y_s]\right) \ge 0. \end{aligned}$$
(2.41)

Therefore \(y_s(v)\) is well defined for \(0\le s\le t\), and it stays in \({\mathbb C}_+\). Furthermore, \(v\mapsto y_t(v)\) is a \(C^1\) map, and it is the inverse of \(u\mapsto z_t(u)\). \(\square \)
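As an illustration of the characteristic flow and its inverse (2.40) (this example is not part of the proof), take the potential-free case \(V=0\) with initial data \(\mu _0=\delta _0\): then \(m_0(u)=-1/u\), \(\mu _s\) is the semicircle law of variance s, and along the explicit characteristic \(z_s(u)=u-s\, m_0(u)\) the Stieltjes transform is constant, \(m_s(z_s(u))=m_0(u)\). The sketch below checks this numerically and integrates the reverse flow back to u (the Runge–Kutta integrator and all names are our own, for illustration only):

```python
import cmath

def m_sc(z, s):
    """Stieltjes transform m(z) = int dmu(x)/(x - z) of the semicircle
    law of variance s; the branch is fixed by Im[m] > 0 on the upper
    half plane. For s = 0 the measure degenerates to a point mass at 0."""
    if s < 1e-13:
        return -1 / z
    r = cmath.sqrt(z * z - 4 * s)
    for m in ((-z + r) / (2 * s), (-z - r) / (2 * s)):
        if m.imag > 0:
            return m
    raise ValueError("no branch with positive imaginary part")

u, t = 0.4 + 0.9j, 0.3

# forward characteristic for V = 0: z_t(u) = u - t*m_0(u), along which
# the complex Burgers equation keeps m_t(z_t(u)) = m_0(u) = -1/u
z_t = u - t * m_sc(u, 0.0)
transported = m_sc(z_t, t)

def rk4(f, y, t_final, n=2000):
    # classical Runge-Kutta integration of dy/ds = f(s, y), complex-valued
    h = t_final / n
    for i in range(n):
        s = i * h
        k1 = f(s, y)
        k2 = f(s + h / 2, y + h / 2 * k1)
        k3 = f(s + h / 2, y + h / 2 * k2)
        k4 = f(s + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

# reverse flow (2.40) with V = 0: it should carry z_t(u) back to u
recovered = rk4(lambda s, y: m_sc(y, t - s), z_t, t)
```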

3 Rigidity of \(\beta \)-DBM

In this section we prove the local law and optimal rigidity for \(\beta \)-DBM with general initial data. Let \(\mu _t\) be the unique solution of (2.8) with initial data

$$\begin{aligned} \mu _0 (x) := \frac{1}{N} \sum _{i=1}^N \delta _{\lambda _i (0) } (x). \end{aligned}$$
(3.1)

Denote by \(m_t\) its Stieltjes transform. We introduce some notation used in the statement and proof of the local law. We fix a small parameter \(\delta \), a large constant \({\mathfrak c}\ge 1\), a large constant K, and a control parameter \(M=(\log N)^{2+2\delta }\). For any time \(s\ll 1 \), we define the spectral domain,

$$\begin{aligned} {{\mathrm{\mathcal {D}}}}_s= & {} \left\{ w\in {\mathbb C}_+: \mathop {\mathrm {Im}}[w]\ge \frac{e^{Ks}M\log N}{N\mathop {\mathrm {Im}}[m_s(w)]}\vee \frac{e^{Ks}}{N^{{\mathfrak c}}},\quad \mathop {\mathrm {Im}}[w]\le 3{{\mathfrak b}}-s, \right. \nonumber \\&\left. \quad |\mathop {\mathrm {Re}}[w]|\le 3{\mathfrak b}-s \right\} . \end{aligned}$$
(3.2)

We take the spectral domain \({{\mathrm{\mathcal {D}}}}_s\) to depend on the time s in such a way that any characteristic flow \(z_s(u)\) with initial value \(u\in {{\mathrm{\mathcal {D}}}}_0\) remains in the spectral domain: \(z_s(u)\in {{\mathrm{\mathcal {D}}}}_s\).

The following is the local law for \(\beta \)-DBM.

Theorem 3.1

Suppose V satisfies Assumption 2.1. Fix \(T=(\log N)^{-2}\). Let \(\beta \ge 1\) and assume that the initial data satisfies \(-{\mathfrak a}\le \lambda _1(0)\le \lambda _2(0)\le \cdots \le \lambda _N(0)\le {\mathfrak a}\) for a fixed \({\mathfrak a}>0\). Uniformly for any \(0\le t\ll T\) and \(w\in {{\mathrm{\mathcal {D}}}}_t\), the following estimate holds with overwhelming probability,

$$\begin{aligned} |\tilde{m}_t(w)- m_t(w)|\le \frac{M}{N\mathop {\mathrm {Im}}[w]}. \end{aligned}$$
(3.3)

The following rigidity estimates are a consequence of the local law.

Corollary 3.2

Assume the conditions of Theorem 3.1 and fix \(T=(\log N)^{-2}\). With overwhelming probability, uniformly for any \(0\le t\le T\) and \(i\in [\![{1, N}]\!]\), we have

$$\begin{aligned} \gamma _{i-CM\log N}(t)-N^{-{\mathfrak c}+1}\le \lambda _i(t) \le \gamma _{i+CM\log N}(t)+N^{-{\mathfrak c}+1}, \end{aligned}$$
(3.4)

where \({\mathfrak c}\) is any large constant, \(\gamma _i(t)\) is the classical particle location at time t,

$$\begin{aligned} \gamma _i(t)=\inf _{x}\left\{ \int _{-\infty }^x \mathrm{d}\mu _t(y)\ge \frac{i}{N}\right\} ,\quad i\in [\![{1,N}]\!]. \end{aligned}$$
(3.5)

We make the convention that \(\gamma _i(t)=-\infty \) if \(i<0\), and \(\gamma _i(t)=+\infty \) if \(i>N\).

We will prove Theorem 3.1 at the end of Sect. 3.2. The proof of Corollary 3.2 is standard and is given in Sect. 3.3.

Remark 3.3

Notice that

$$\begin{aligned} \eta \mapsto \eta \mathop {\mathrm {Im}}[m_t(E+\mathrm {i}\eta )]=\int _{{\mathbb R}}\frac{\eta ^2\mathrm{d}\mu _t(x)}{(E-x)^2+\eta ^2}, \end{aligned}$$

is a monotonically increasing function. Similarly, \(\eta \mapsto \eta \mathop {\mathrm {Im}}[{\tilde{m}}_t(E+\mathrm {i}\eta )]\) is monotonically increasing. We now prove the following deterministic fact. Suppose that the estimate (3.3) holds on \({{\mathrm{\mathcal {D}}}}_t\). We claim that under this assumption the estimate

$$\begin{aligned} |\mathop {\mathrm {Im}}[\tilde{m}_t(w)]- \mathop {\mathrm {Im}}[m_t(w)]|\le \frac{3e^{Kt}M\log N}{N\mathop {\mathrm {Im}}[w]}, \end{aligned}$$
(3.6)

holds on the larger domain \( w = E + \mathrm {i}\eta \) with \(|E| \le 3 {\mathfrak b}- t \) and \(e^{ K t } N^{- {\mathfrak c}} \le \eta \le 3 {\mathfrak b}- t\).

Let

$$\begin{aligned} \eta (E)=\inf _{\eta \ge 0}\{\eta \mathop {\mathrm {Im}}[m_t(E+\mathrm {i}\eta )]\ge e^{Kt}M\log N/N\}. \end{aligned}$$
(3.7)

By the assumption (3.3) and the definition of \({{\mathrm{\mathcal {D}}}}_t\), we only need to check the case that \( \eta (E) > e^{ Kt } N^{ - {\mathfrak c}}\) and \(\eta < \eta (E)\). In this case we have \(\eta (E)\mathop {\mathrm {Im}}[m_t(E+\mathrm {i}\eta (E))]=e^{Kt}M\log N/N\) and \(\mathop {\mathrm {Im}}[m_t(w)]\le e^{Kt}M\log N/(N\eta )\), and so

$$\begin{aligned}&|\mathop {\mathrm {Im}}[\tilde{m}_t(w)]- \mathop {\mathrm {Im}}[m_t(w)]| \\&\quad \le \mathop {\mathrm {Im}}[\tilde{m}_t(w)]+\frac{e^{Kt}M\log N}{N\eta }\\&\quad \le \frac{\eta (E)}{\eta }\mathop {\mathrm {Im}}[\tilde{m}_t(E+\mathrm {i}\eta (E))]+\frac{e^{Kt}M\log N}{N\eta }\\&\quad \le \frac{\eta (E)}{\eta }\left| \mathop {\mathrm {Im}}[\tilde{m}_t(E+\mathrm {i}\eta (E))]-\mathop {\mathrm {Im}}[m_t(E+\mathrm {i}\eta (E))]\right| +\frac{2e^{Kt}M\log N}{N\eta }\\&\quad \le \frac{3e^{Kt}M\log N}{N\eta }. \end{aligned}$$

In the second inequality we used monotonicity of \(\eta \mathop {\mathrm {Im}}[\tilde{m}_t(E+\mathrm {i}\eta )]\).
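The monotonicity is elementary: each summand \(\eta ^2/((E-x)^2+\eta ^2)\) is increasing in \(\eta \). A quick numerical check for an empirical measure (the sample points below are arbitrary and purely illustrative):

```python
# eta * Im[m(E + i*eta)] for an empirical measure: each summand
# eta^2 / ((E - x)^2 + eta^2) increases in eta, hence so does the average
xs = [-1.2, -0.3, 0.0, 0.7, 1.5]   # arbitrary sample support points
E = 0.4

def eta_im_m(eta):
    return sum(eta ** 2 / ((E - x) ** 2 + eta ** 2) for x in xs) / len(xs)

etas = [10.0 ** k for k in range(-4, 3)]
vals = [eta_im_m(e) for e in etas]
```

As \(\eta \rightarrow \infty \), the quantity increases to the total mass of the measure.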

3.1 Properties of the spectral domain

In this section, we prove some properties of the spectral domain \(\mathcal D_s\) as defined in (3.2), which will be used throughout the proof of Theorem 3.1.

Lemma 3.4

Suppose V satisfies Assumption 2.1. Fix a time T. For any \(0\le t\le T\) such that \(z_t\in {{\mathrm{\mathcal {D}}}}_t\), we have

$$\begin{aligned} \mathop {\mathrm {Im}}[u] \le C N \mathop {\mathrm {Im}}[z_t(u)]. \end{aligned}$$
(3.8)

Proof

Notice that from (2.27), \(\mathop {\mathrm {Im}}[u]\le \mathop {\mathrm {Im}}[m_0(u)]^{-1}\), and by our assumption \(z_t\in {{\mathrm{\mathcal {D}}}}_t\) and (2.29),

$$\begin{aligned} \mathop {\mathrm {Im}}[z_t(u)]\ge (N\mathop {\mathrm {Im}}[m_t(z_t)])^{-1}\ge (CN\mathop {\mathrm {Im}}[m_0(z_0)])^{-1}\ge (CN)^{-1}\mathop {\mathrm {Im}}[u], \end{aligned}$$
(3.9)

which yields (3.8). \(\square \)
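The first step used here, \(\mathop {\mathrm {Im}}[u]\le \mathop {\mathrm {Im}}[m_0(u)]^{-1}\), is an instance of the general bound \(\mathop {\mathrm {Im}}[u]\mathop {\mathrm {Im}}[m(u)]\le 1\) for the Stieltjes transform of any probability measure, since \(|x-u|\ge \mathop {\mathrm {Im}}[u]\). A numerical check for an empirical measure (sample points and test points arbitrary, for illustration only):

```python
# Im[m(u)] = (1/N) sum_x Im[u]/|x - u|^2  <=  1/Im[u],
# because |x - u| >= Im[u] for every real x
xs = [-0.8, 0.1, 0.4, 2.0]                     # arbitrary point masses
us = [0.3 + 0.01j, -1.0 + 2.5j, 0.1 + 0.001j]  # arbitrary test points

def im_m0(u):
    return sum(u.imag / abs(x - u) ** 2 for x in xs) / len(xs)

products = [u.imag * im_m0(u) for u in us]
```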

Proposition 3.5

Suppose V satisfies Assumption 2.1. Fix a time T. If for some \(t\in [0,T]\), \(z_t\in {{\mathrm{\mathcal {D}}}}_t\), then for any \(s\in [0, t]\), \(z_s\in {{\mathrm{\mathcal {D}}}}_s\).

Proof

By (2.28), we have \(\mathop {\mathrm {Im}}[z_s]\ge e^{-C(t-s)}\mathop {\mathrm {Im}}[z_t]\). Therefore if we have \(\mathop {\mathrm {Im}}[z_t]\ge e^{Kt}N^{-{\mathfrak c}}\) then \(\mathop {\mathrm {Im}}[z_s]\ge e^{Ks}N^{-{\mathfrak c}}\) as long as we take \(K\ge C\).

Combining \(\partial _s\mathop {\mathrm {Im}}[z_s]\le C\mathop {\mathrm {Im}}[z_s]\) from (2.32), with (2.36), yields that there is a constant \(C'\) so that

$$\begin{aligned} \partial _s\left( \mathop {\mathrm {Im}}[z_s]\mathop {\mathrm {Im}}[m_s(z_s)]\right) \le C'\mathop {\mathrm {Im}}[z_s]\mathop {\mathrm {Im}}[m_s(z_s)]. \end{aligned}$$
(3.10)

Therefore, \(\mathop {\mathrm {Im}}[z_s]\mathop {\mathrm {Im}}[m_s(z_s)]\ge e^{-C'(t-s)}\mathop {\mathrm {Im}}[z_t]\mathop {\mathrm {Im}}[m_t(z_t)]\) and so if \(\mathop {\mathrm {Im}}[z_t]\mathop {\mathrm {Im}}[m_t(z_t)]\ge e^{Kt}M\log N/N\), then \(\mathop {\mathrm {Im}}[z_s]\mathop {\mathrm {Im}}[m_s(z_s)]\ge e^{Ks}M\log N/N\), provided \(K\ge C'\).

Finally, we must prove that if \(\mathop {\mathrm {Im}}[z_t]\le 3{{\mathfrak b}}-t\) and \(|\mathop {\mathrm {Re}}[z_t]|\le 3{\mathfrak b}-t\), then for any \(s\in [0,t]\), we have \(\mathop {\mathrm {Im}}[z_s]\le 3{{\mathfrak b}}-s\) and \(|\mathop {\mathrm {Re}}[z_s]|\le 3{\mathfrak b}-s\). Suppose for a contradiction that there exists s such that \(|\mathop {\mathrm {Re}}[z_s]|> 3{{\mathfrak b}}-s\); by symmetry we may assume \(\mathop {\mathrm {Re}}[z_s]> 3{{\mathfrak b}}-s\). Let \(\tau =\inf _{\sigma \ge s}\{\mathop {\mathrm {Re}}[z_\sigma ]\le 3{{\mathfrak b}}-\sigma \}\); then \(\tau \le t\). For any \(\sigma \in [s,\tau ]\) we have \(\mathop {\mathrm {Re}}[z_\sigma ]\ge 3{\mathfrak b}-T\ge 2{\mathfrak b}\), so \(V'(z_\sigma )=0\), and therefore \(|\partial _\sigma z_{\sigma }|\le |m_\sigma (z_{\sigma })|\le {{\mathrm{dist}}}(z_\sigma , {{\mathrm{supp}}}\mu _\sigma )^{-1}\). Recall that we have chosen \({\mathfrak b}\) large enough that \({{\mathrm{supp}}}\mu _t\subset [-{\mathfrak b}, {\mathfrak b}]\). Therefore \(|\partial _\sigma z_{\sigma }|\le {\mathfrak b}^{-1}\) and \(\mathop {\mathrm {Re}}[z_\tau ]\ge \mathop {\mathrm {Re}}[z_s]-(\tau -s)/{\mathfrak b}> 3{\mathfrak b}-\tau \), as long as we take \({\mathfrak b}> 1\), which is a contradiction. A similar argument applies to the case \(\mathop {\mathrm {Im}}[z_s]> 3{{\mathfrak b}}-s\). This finishes the proof of Proposition 3.5. \(\square \)

We have the following weak control on the \(C^1\) norm of the flow map \(u\mapsto z_t(u)\). A much stronger version will be proved in Proposition 4.9.

Proposition 3.6

Suppose V satisfies Assumption 2.1. Fix a time T. For any \(0\le t\le T\) with \(z_t\in {{\mathrm{\mathcal {D}}}}_t\), we have, with \(u = x + \mathrm {i}y\),

$$\begin{aligned} |\partial _x z_t(u)| + |\partial _y z_t(u)|={{\mathrm{O}}}(N), \end{aligned}$$
(3.11)

where the implicit constant depends on T and V.

Proof

From Proposition 2.8, we know that \(u\mapsto z_t(u)\) is a \(C^1\) map. By differentiating both sides of (2.25), we get

$$\begin{aligned} \partial _s\partial _xz_s(u)=-\partial _z m_s(z_s(u))\partial _xz_s(u) -\frac{\partial _z V'(z_s(u))\partial _x z_s(u)+\partial _{\bar{z}}V'(z_s(u))\partial _x{\bar{z}}_s(u)}{2}.\nonumber \\ \end{aligned}$$
(3.12)

It follows that

$$\begin{aligned}&\partial _s|\partial _xz_s(u)|^2=2\mathop {\mathrm {Re}}[\partial _s \partial _x z_s(u) \partial _x{\bar{z}}_s(u)] \nonumber \\&\quad =-2\mathop {\mathrm {Re}}[\partial _z m_s(z_s(u))]|\partial _xz_s(u)|^2 \nonumber \\&\qquad -\mathop {\mathrm {Re}}[\partial _z V'(z_s(u))|\partial _x z_s(u)|^2+\partial _{{\bar{z}}}V'(z_s(u))(\partial _x{\bar{z}}_s(u))^2]. \end{aligned}$$
(3.13)

By Proposition 2.7 we have \(|\partial _z m_s(z_s(u))|\le \mathop {\mathrm {Im}}[m_{s}(z_s(u))]/\mathop {\mathrm {Im}}[z_s(u)]\), and by Proposition 2.6 we have \(|\partial _{z}V'(z_s(u))|, |\partial _{{\bar{z}}}V'(z_s(u))|={{\mathrm{O}}}(1)\). Therefore,

$$\begin{aligned} \partial _s|\partial _xz_s(u)|^2\le 2\left( \frac{\mathop {\mathrm {Im}}[m_s(z_s(u))]}{\mathop {\mathrm {Im}}[z_s(u)]}+C\right) |\partial _xz_s(u)|^2. \end{aligned}$$
(3.14)

Since \(z_0(u)=u\), by Gronwall’s inequality and (2.31) of Proposition 2.7, we have

$$\begin{aligned} |\partial _xz_t(u)|^2\le \exp \left( 2 \int _0^t \left( \frac{\mathop {\mathrm {Im}}[m_s(z_s(u))]}{\mathop {\mathrm {Im}}[z_s(u)]}+C\right) \mathrm{d}s\right) \le \left( \frac{e^{Ct}\mathop {\mathrm {Im}}[u]}{\mathop {\mathrm {Im}}[z_t(u)]} \right) ^2 \le CN^2,\nonumber \\ \end{aligned}$$
(3.15)

where we used (3.8). It follows that \(|\partial _x z_t(u)|={{\mathrm{O}}}(N)\). The estimate for \(|\partial _y z_t(u)|\) follows from the same argument. \(\square \)

We define the following lattice on the upper half plane \({\mathbb C}_+\),

$$\begin{aligned} \mathcal L=\left\{ E+\mathrm {i}\eta \in {{\mathrm{\mathcal {D}}}}_0: E\in \mathbb {Z}/ N^{(3{\mathfrak c}+1)}, \eta \in \mathbb {Z}/N^{(3{\mathfrak c}+1)}\right\} . \end{aligned}$$
(3.16)

Thanks to Propositions 2.8 and 3.6, we have the following.

Proposition 3.7

Suppose V satisfies Assumption 2.1. Fix a time T. For any \(0\le t\le T\) and \(w\in {{\mathrm{\mathcal {D}}}}_t\), there exists some lattice point \(u\in \mathcal L\cap z_t^{-1}({{\mathrm{\mathcal {D}}}}_t)\), such that

$$\begin{aligned} |z_t(u)-w|={{\mathrm{O}}}(N^{-3{\mathfrak c}}), \end{aligned}$$
(3.17)

where the implicit constant depends on T and V.

3.2 Proof of Theorem 3.1

In this section we prove (3.3). By Proposition 2.8, the flow map \(u\mapsto z_t(u)\) is a surjection from \(\Omega _t\) (as defined in Proposition 2.8) to the upper half plane \({\mathbb C}_+\). We want to prove the estimate

$$\begin{aligned} \left| \tilde{m}_t(z_t)-m_t(z_t)\right| \le \frac{M}{N\mathop {\mathrm {Im}}[z_t]}, \end{aligned}$$
(3.18)

for \(z_t\in {{\mathrm{\mathcal {D}}}}_t\).

By Itô’s formula, \(\tilde{m}_s(z)\) satisfies the stochastic differential equation

$$\begin{aligned} \begin{aligned} \mathrm{d}\tilde{m}_s(z)= -&\sqrt{\frac{2}{\beta N^3}}\sum _{i=1}^N \frac{\mathrm{d} B_i(s)}{(\lambda _i(s)-z)^2}+\tilde{m}_s(z)\partial _z \tilde{m}_s(z)\mathrm{d}s \\ +&\frac{1}{2N}\sum _{i=1}^{N}\frac{V'(\lambda _i(s))}{(\lambda _i(s)-z)^2}\mathrm{d}s+\frac{2-\beta }{\beta N^2}\sum _{i=1}^{N}\frac{\mathrm{d}s}{(\lambda _i(s)-z)^3}. \end{aligned} \end{aligned}$$
(3.19)

We can rewrite (3.19) as

$$\begin{aligned} \begin{aligned} \mathrm{d}\tilde{m}_s(z)=-&\sqrt{\frac{2}{\beta N^3}}\sum _{i=1}^N \frac{\mathrm{d} B_i(s)}{(\lambda _i(s)-z)^2}+\partial _z \tilde{m}_s(z)\left( \tilde{m}_s(z)+\frac{V'(z)}{2}\right) \mathrm{d}s\\&\quad +\frac{\tilde{m}_s(z)\partial _z V'(z)}{2}\mathrm{d}s\\&\quad + \frac{1}{\pi }\int _{{\mathbb C}} \partial _{{\bar{w}}} \tilde{g}(z,w) \tilde{m}_s(w)\mathrm{d}^2 w\mathrm{d}s +\frac{2-\beta }{\beta N^2}\sum _{i=1}^{N}\frac{\mathrm{d}s}{(\lambda _i(s)-z)^3}, \end{aligned} \end{aligned}$$
(3.20)

where \(V'(z)\) and \(\tilde{g}(z,w)\) are defined in (2.18) and (2.22) respectively. Plugging (2.25) into (3.20), and by the chain rule, we have

$$\begin{aligned} \begin{aligned} \mathrm{d}\tilde{m}_s(z_s)&=-\sqrt{\frac{2}{\beta N^3}}\sum _{i=1}^N \frac{\mathrm{d} B_i(s)}{(\lambda _i(s)-z_s)^2}+\partial _z \tilde{m}_s(z_s)\left( \tilde{m}_s(z_s)\right. \\&\left. \quad -m_s(z_s)\right) \mathrm{d}s+\frac{\tilde{m}_s(z_s)\partial _zV'(z_s)}{2}\mathrm{d}s\\&\quad + \frac{1}{\pi }\int _{{\mathbb C}} \partial _{{\bar{w}}} \tilde{g}(z_s,w) \tilde{m}_s(w)\mathrm{d}^2 w \mathrm{d}s +\frac{2-\beta }{\beta N^2}\sum _{i=1}^{N}\frac{\mathrm{d}s}{(\lambda _i(s)-z_s)^3}. \end{aligned} \end{aligned}$$
(3.21)

It follows by taking the difference of (2.26) and (3.21) that,

$$\begin{aligned} \begin{aligned}&\mathrm{d}(\tilde{m}_s(z_s)-m_s(z_s))=-\sqrt{\frac{2}{\beta N^3}}\sum _{i=1}^N \frac{\mathrm{d} B_i(s)}{(\lambda _i(s)-z_s)^2}\\&\quad +\left( \tilde{m}_s(z_s)-m_s(z_s)\right) \partial _z \left( \tilde{m}_s(z_s)+\frac{V'(z_s)}{2}\right) \mathrm{d}s\\&\quad + \frac{1}{\pi }\int _{{\mathbb C}} \partial _{{\bar{w}}} \tilde{g}(z_s,w) (\tilde{m}_s(w)-m_s(w))\mathrm{d}^2 w\mathrm{d}s +\frac{2-\beta }{\beta N^2}\sum _{i=1}^{N}\frac{\mathrm{d}s}{(\lambda _i(s)-z_s)^3}. \end{aligned} \end{aligned}$$
(3.22)

Using the fact that \(m_0 (z) = \tilde{m}_0 (z)\), we can integrate both sides of (3.22) from 0 to t and obtain

$$\begin{aligned} \tilde{m}_t(z_t)-m_t(z_t)=\int _0^t \left( {\mathcal E}_1(s)\mathrm{d}s+\mathrm{d}{\mathcal E}_2(s) \right) , \end{aligned}$$
(3.23)

where the error terms are

$$\begin{aligned} {\mathcal E}_1(s)&=\left( \tilde{m}_s(z_s)-m_s(z_s)\right) \partial _z \left( \tilde{m}_s(z_s)+\frac{V'(z_s)}{2}\right) \nonumber \\&\quad + \frac{1}{\pi }\int _{{\mathbb C}} \partial _{{\bar{w}}} \tilde{g}(z_s,w) (\tilde{m}_s(w)-m_s(w))\mathrm{d}^2 w , \end{aligned}$$
(3.24)
$$\begin{aligned} \mathrm{d}{\mathcal E}_2(s)&=\frac{2-\beta }{\beta N^2}\sum _{i=1}^{N}\frac{\mathrm{d}s}{(\lambda _i(s)-z_s)^3}-\sqrt{\frac{2}{\beta N^3}}\sum _{i=1}^N \frac{{\mathrm{d}} B_i(s)}{(\lambda _i(s)-z_s)^2}. \end{aligned}$$
(3.25)

We remark that \(\mathcal E_1\) and \(\mathcal E_2\) implicitly depend on u, the initial value of the flow \(z_s(u)\). The local law will eventually follow from an application of Gronwall’s inequality to (3.23).

We define the stopping time

$$\begin{aligned} \sigma \,{:}{=}\,\inf _{s\ge 0}\left\{ \exists w\in {{\mathrm{\mathcal {D}}}}_s: \left| \tilde{m}_s(w)-m_s(w)\right| \ge \frac{M}{N\mathop {\mathrm {Im}}[w]}\right\} \wedge t. \end{aligned}$$
(3.26)

In the rest of this section we prove that with overwhelming probability we have \(\sigma =t\). Theorem 3.1 follows.

For any lattice point \(u\in \mathcal L\) as in (3.16), we denote

$$\begin{aligned} t(u)= \sup _{s\ge 0}\{ z_s(u)\in {{\mathrm{\mathcal {D}}}}_s\}\wedge t. \end{aligned}$$
(3.27)

By Proposition 3.5 we have that \(z_s(u)\in {{\mathrm{\mathcal {D}}}}_s\) for any \(0\le s\le t(u)\). We decompose the time interval [0, t(u)] in the following way. First set \(t_0(u)=0\), and define

$$\begin{aligned} t_{i+1}(u):=\sup _{s\ge t_i(u)}\left\{ \mathop {\mathrm {Im}}[z_s(u)] \ge \frac{\mathop {\mathrm {Im}}[z_{t_i}(u)]}{2}\right\} \wedge t(u),\quad i=0,1,2,\ldots . \end{aligned}$$
(3.28)

By (3.8), there exists some constant C depending on V, such that \(\mathop {\mathrm {Im}}[z_0(u)]\le CN\mathop {\mathrm {Im}}[z_{t(u)}(u)]\), and thus the above sequence will terminate at some \(t_k(u)=t(u)\) for \(k={{\mathrm{O}}}(\log N)\) depending on u, the initial value of \(z_s\). Moreover, by (2.28), for any \(t_i(u)\le s_1\le s_2\le t_{i+1}(u)\),

$$\begin{aligned} e^{-CT}\le e^{-C(s_2-s_1)}\le \frac{\mathop {\mathrm {Im}}[z_{s_1}(u)]}{\mathop {\mathrm {Im}}[z_{s_2}(u)]}\le \frac{e^{C(s_1-t_i)}\mathop {\mathrm {Im}}[z_{t_{i}}(u)]}{e^{C(s_2-t_{i+1})}\mathop {\mathrm {Im}}[z_{t_{i+1}}(u)]}\le 2e^{CT}. \end{aligned}$$
(3.29)
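The count \(k={{\mathrm{O}}}(\log N)\) only uses that \(\mathop {\mathrm {Im}}[z_s(u)]\) decreases by at most a factor \(CN\) in total, so it can halve at most \(\log _2(CN)+1\) times. A toy numerical sketch of the decomposition (3.28) for a generic decreasing profile (the profile below is illustrative, not an actual characteristic):

```python
import math

def halving_times(im_z, t_final, n_grid=20000):
    """Greedy dyadic decomposition mimicking (3.28): record the times at
    which the decreasing profile im_z drops below half of its value at
    the previously recorded time."""
    h = t_final / n_grid
    times = [0.0]
    level = im_z(0.0)
    for k in range(1, n_grid + 1):
        s = k * h
        if im_z(s) < level / 2:
            times.append(s)
            level = im_z(s)
    times.append(t_final)
    return times

# toy profile decreasing from 1 to 1/N over [0, 1], so the total decrease
# is a factor N, as guaranteed for the characteristics by (3.8)
N = 10 ** 4
im_z = lambda s: math.exp(-math.log(N) * s)
ts = halving_times(im_z, 1.0)
```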

We first derive an estimate of \(\int \mathrm{d}{\mathcal E}_2(s)\) in terms of \(\{m_s(z_s(u)), 0\le s\le t(u)\}\).

Proposition 3.8

Under the assumptions of Theorem 3.1, there exists an event \(\Omega \) that holds with overwhelming probability, on which we have, for every \(0 \le \tau \le t (u)\) and \(u \in \mathcal L\),

$$\begin{aligned} \left| \int _0^{\tau \wedge \sigma } \mathrm{d}{\mathcal E}_2(s)\right| \le \frac{C(\log N)^{1+\delta }}{N\mathop {\mathrm {Im}}[z_{\tau \wedge \sigma }(u)]}. \end{aligned}$$
(3.30)

Proof

For simplicity of notation, we write \(t_i = t_i (u)\) and \(z_s = z_s (u)\). For any \(s\le t_i\), by our choice of the stopping time \(\sigma \) (as in (3.26)), and the definition of domain \({{\mathrm{\mathcal {D}}}}_s\) (as in (3.2)), we have

$$\begin{aligned} \mathop {\mathrm {Im}}[\tilde{m}_{s\wedge \sigma }(z_{s\wedge \sigma })]\le 2\mathop {\mathrm {Im}}[m_{s\wedge \sigma }(z_{s\wedge \sigma })]. \end{aligned}$$
(3.31)

For the first term in (3.25) we have

$$\begin{aligned}&\sup _{0\le \tau \le t_i}\left| \frac{2-\beta }{\beta N^2}\int _0^{\tau \wedge \sigma }\sum _{i=1}^{N}\frac{\mathrm{d}s}{(\lambda _i(s)-z_s)^3}\right| \nonumber \\&\quad \le \frac{|2-\beta |}{\beta N^2}\int _0^{t_i\wedge \sigma }\sum _{i=1}^{N}\frac{\mathrm{d}s}{|\lambda _i(s)-z_s|^3}\nonumber \\&\quad \le \frac{C}{ N^2}\int _0^{t_i\wedge \sigma }\sum _{i=1}^{N}\frac{\mathrm{d}s}{\mathop {\mathrm {Im}}[z_s]|\lambda _i(s)-z_s|^2} = C \int _0^{t_i\wedge \sigma }\frac{\mathop {\mathrm {Im}}[\tilde{m}_s(z_s)]\mathrm{d}s}{N\mathop {\mathrm {Im}}[z_s]^2}\nonumber \\&\quad \le C \int _0^{t_i\wedge \sigma }\frac{2\mathop {\mathrm {Im}}[m_s(z_s)]\mathrm{d}s}{N\mathop {\mathrm {Im}}[z_s]^2} \le \frac{C'}{N\mathop {\mathrm {Im}}[z_{t_i\wedge \sigma }]}, \end{aligned}$$
(3.32)

where we used (3.31) and (2.31).
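The equality in (3.32) replacing \(\sum _i |\lambda _i(s)-z_s|^{-2}\) by \(N\mathop {\mathrm {Im}}[\tilde{m}_s(z_s)]/\mathop {\mathrm {Im}}[z_s]\) rests on the identity \(\mathop {\mathrm {Im}}[(\lambda -z)^{-1}]=\mathop {\mathrm {Im}}[z]/|\lambda -z|^2\). A quick numerical check for an empirical Stieltjes transform (the sample eigenvalues below are arbitrary, for illustration only):

```python
# identity behind (3.32)-(3.33):
#   sum_i |lambda_i - z|^{-2} = N * Im[m~(z)] / Im[z],
# since Im[1/(lam - z)] = Im[z] / |lam - z|^2
lambdas = [-0.9, -0.2, 0.1, 0.5, 1.3]   # arbitrary sample eigenvalues
z = 0.3 + 0.05j
N = len(lambdas)

m_emp = sum(1 / (lam - z) for lam in lambdas) / N  # empirical Stieltjes transform
lhs = sum(1 / abs(lam - z) ** 2 for lam in lambdas)
rhs = N * m_emp.imag / z.imag
```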

For the second term in (3.25) we have

$$\begin{aligned} \begin{aligned}&\left\langle \sqrt{\frac{2}{\beta N^3}}\int _0^{\cdot \wedge \sigma }\sum _{i=1}^N \frac{\mathrm{d} B_i(s)}{(\lambda _i(s)-z_s)^2} \right\rangle _{t_i} = \frac{2}{\beta N^{3}}\int _0^{t_i\wedge \sigma }\sum _{i=1}^N \frac{ \mathrm{d}s}{|\lambda _i(s)-z_s|^4}\\&\quad \le \frac{2}{\beta N^{3}}\int _0^{t_i\wedge \sigma }\sum _{i=1}^N \frac{ \mathrm{d}s}{\mathop {\mathrm {Im}}[z_s]^2|\lambda _i(s)-z_s|^2} =\frac{2}{\beta }\int _0^{t_i\wedge \sigma }\frac{\mathop {\mathrm {Im}}[\tilde{m}_s(z_s)]\mathrm{d}s}{N^2\mathop {\mathrm {Im}}[z_s]^3}\\&\quad \le \frac{2}{\beta }\int _0^{t_i\wedge \sigma }\frac{2\mathop {\mathrm {Im}}[m_s(z_s)]\mathrm{d}s}{N^2\mathop {\mathrm {Im}}[z_s]^3} \le \frac{C}{N^2\mathop {\mathrm {Im}}[z_{t_i\wedge \sigma }]^2}, \end{aligned} \end{aligned}$$
(3.33)

where we again used (3.31) and (2.31). Therefore, by the Burkholder–Davis–Gundy inequality, for any \(u\in \mathcal L\) and \(t_i\), the following holds with overwhelming probability, i.e., with probability at least \(1-C\exp \{ -c(\log N)^{1+\delta }\}\),

$$\begin{aligned} \begin{aligned}&\sup _{0\le \tau \le t_i}\left| \sqrt{\frac{2}{\beta N^3}}\int _0^{\tau \wedge \sigma }\sum _{i=1}^N \frac{\mathrm{d} B_i(s)}{(\lambda _i(s)-z_s)^2} \right| \le \frac{C(\log N)^{1+\delta }}{N\mathop {\mathrm {Im}}[z_{t_i\wedge \sigma }]}. \end{aligned} \end{aligned}$$
(3.34)

We define \(\Omega \) to be the set of Brownian paths \(\{B_1(s), \ldots , B_N(s)\}_{0\le s\le t}\) on which the following two estimates hold.

  1. First, we have \(-{\mathfrak b}\le \lambda _1(s)\le \lambda _2(s)\le \cdots \le \lambda _N(s)\le {\mathfrak b}\) uniformly for all \(s\in [0,T]\).

  2. Second, for any \(u\in \mathcal L\) and \(i=0,1,2,\ldots , k\), (3.34) holds. We recall that \(t_0, t_1, \ldots , t_k\) are recursively defined in (3.28), and \(k={{\mathrm{O}}}(\log N)\).

It follows from Proposition 2.5 and the discussion above that \(\Omega \) holds with overwhelming probability, i.e., \(\mathbb {P}(\Omega )\ge 1-C|\mathcal L|(\log N)\exp \{ -c(\log N)^{1+\delta }\}\).

Therefore, for any \(\tau \in [t_{i-1},t_i]\), the bounds (3.32) and (3.34) yield

$$\begin{aligned} \begin{aligned} \left| \int _0^{\tau \wedge \sigma } \mathrm{d}{\mathcal E}_2(s)\right| \le&\frac{C'(\log N)^{1+\delta }}{N\mathop {\mathrm {Im}}[z_{t_i\wedge \sigma }]}\le \frac{C(\log N)^{1+\delta }}{N\mathop {\mathrm {Im}}[z_{\tau \wedge \sigma }]}, \end{aligned} \end{aligned}$$
(3.35)

where we used our choice of \(t_i\)’s, i.e. (3.29). \(\square \)

We now bound the second term of (3.24).

Proposition 3.9

Under the assumptions of Theorem 3.1, for any \(u\in \mathcal L\) and \(s\in [0,t(u)]\) (as in (3.27)), with probability at least \(1-e^{-N}\), we have

$$\begin{aligned} \left| \frac{1}{\pi }\int _{{\mathbb C}} \partial _{{\bar{w}}} \tilde{g}(z_{s\wedge \sigma }(u),w) (\tilde{m}_{s\wedge \sigma }(w)-m_{s\wedge \sigma }(w))\mathrm{d}^2 w\right| \le \frac{CM(\log N)^{2}}{N},\quad \end{aligned}$$
(3.36)

where the constant C depends on V.

Proof

First, we note that by Proposition 2.5, we can restrict to the case such that \({\tilde{\mu }}_t, \mu _t\) are supported on \([-{\mathfrak b}, {\mathfrak b}]\), and replace g by \(g_1 (z, x) := g (z, x) \chi (x)\) and the quasi-analytic extension by \(\tilde{g}_1 (z, x + \mathrm {i}y ) := ( g_1 (z, x) + \mathrm {i}y \partial _x g_1 (z, x) ) \chi (y)\).

The proof follows the same argument as [24, Lemma B.1]. Let \(S(x+\mathrm {i}y)=\tilde{m}_{s\wedge \sigma }(x+\mathrm {i}y)-m_{s\wedge \sigma }(x+\mathrm {i}y)\). We have

$$\begin{aligned} \begin{aligned}&\left| \frac{1}{\pi }\int _{{\mathbb C}} \partial _{{\bar{w}}} \tilde{g}_1(z_{s\wedge \sigma }(u),w) (\tilde{m}_{s\wedge \sigma }(w)-m_{s\wedge \sigma }(w))\mathrm{d}^2 w\right| \\&\quad =\left| \int _{{\mathbb R}} g_1(z_{s\wedge \sigma }(u),x) (\mathrm{d}{\tilde{\mu }}_{s\wedge \sigma }(x)-\mathrm{d}\mu _{s\wedge \sigma }(x))\right| \\&\quad \le C\int _{{\mathbb C}_+} (|g_1(z_{s\wedge \sigma },x)|+y|\partial _xg_1(z_{s\wedge \sigma }(u),x)|)|\chi '(y)||S(x+\mathrm {i}y)|\mathrm{d}x\mathrm{d}y\\&\qquad + C\int _{{\mathbb C}_+} y\chi (y)|\partial _{x}^2g_1(z_{s\wedge \sigma }(u),x)||\mathop {\mathrm {Im}}[S(x+\mathrm {i}y)]|\mathrm{d}x\mathrm{d}y. \end{aligned} \end{aligned}$$
(3.37)

We start by handling the first term on the RHS of (3.37). The integrand is supported in \(\{x + \mathrm {i}y : |x| \le 2 {\mathfrak b}, {\mathfrak b}\le |y| \le 2 {\mathfrak b}\} \subseteq {{\mathrm{\mathcal {D}}}}_t\) for every t. In this region we have from Proposition 2.6 that \(g_1\) and \(\partial _x g_1\) are bounded, and by the definition of \(\sigma \) we have that \(|S(x+\mathrm {i}y)|\le M/N\) in this region and so

$$\begin{aligned} \int _{{\mathbb C}_+} (|g_1(z_{s\wedge \sigma },x)|+y|\partial _xg_1(z_{s\wedge \sigma }(u),x)|)|\chi '(y)||S(x+\mathrm {i}y)|\mathrm{d}x\mathrm{d}y \le \frac{CM}{N}.\qquad \end{aligned}$$
(3.38)

We now handle the second term on the RHS of (3.37). By the definition of t(u), \(z_{s\wedge \sigma }(u)\in {{\mathrm{\mathcal {D}}}}_{s\wedge \sigma }\), and so \(\mathop {\mathrm {Im}}[z_{s\wedge \sigma }(u)]\ge N^{-{\mathfrak c}}\). From Proposition 2.6 we have \( | \partial ^2_x g(z,x) | \le C |z-x|^{-1}\), and \(|\partial _x^2 g_1(z,x)|\le C\chi (x)|z-x|^{-1}\).

We split the second integral on the right-hand side of (3.37) into the two regions \(\Lambda \,{:}{=}\,\{E+\mathrm {i}\eta : 0\le \eta \le e^{K(s\wedge \sigma )}N^{-{\mathfrak c}}\}\) and \({\mathbb C}_+\setminus \Lambda \). On \(\Lambda \), we use the trivial bound \(\mathop {\mathrm {Im}}[S(x+\mathrm {i}y)]\le |\tilde{m}_{s\wedge \sigma }(x+\mathrm {i}y)|+|m_{s\wedge \sigma }(x+\mathrm {i}y)|\le 2/y\), and obtain

$$\begin{aligned} \begin{aligned}&\int _{\Lambda } y\chi (y)|\partial _{x}^2g_1(z_{s\wedge \sigma }(u),x)||\mathop {\mathrm {Im}}[S(x+\mathrm {i}y)]|\mathrm{d}x\mathrm{d}y\\&\quad \le C\int _{0\le y\le e^{K(s\wedge \sigma )}N^{-{\mathfrak c}}} \frac{\chi (x)}{|z_{s\wedge \sigma }(u)-x|}\mathrm{d}x\mathrm{d}y \le \frac{C\log N}{N^{{\mathfrak c}}}. \end{aligned}\end{aligned}$$
(3.39)

On \({\mathbb C}_+\setminus \Lambda \), by Remark 3.3 and the definition of \(\sigma \), the estimate (3.6) holds for times \(0\le t\le s\wedge \sigma \); hence \(|\mathop {\mathrm {Im}}[S(x+\mathrm {i}y)]|\le 3e^{K(s\wedge \sigma )}M\log N/(Ny)\), and therefore,

$$\begin{aligned} \begin{aligned}&\int _{{\mathbb C}_+\setminus \Lambda } y\chi (y)|\partial _{x}^2g_1(z_{s\wedge \sigma }(u),x)||\mathop {\mathrm {Im}}[S(x+\mathrm {i}y)]|\mathrm{d}x\mathrm{d}y\\&\quad \le \frac{CM\log N}{N}\int _{{\mathbb C}_+} \frac{\chi (x)\chi (y)}{|z_{s\wedge \sigma }(u)-x|}\mathrm{d}x\mathrm{d}y \le \frac{CM(\log N)^{2}}{N}. \end{aligned} \end{aligned}$$
(3.40)

This completes the proof of (3.36). \(\square \)

Proof of Theorem 3.1

We can now start analyzing (3.23). For any lattice point \(u\in \mathcal L\) and \(\tau \in [0,t(u)]\) (as in (3.27)), by Propositions 3.8 and 3.9, we have

$$\begin{aligned}&\left| \tilde{m}_{\tau \wedge \sigma }(z_{\tau \wedge \sigma }(u)) -m_{\tau \wedge \sigma }(z_{\tau \wedge \sigma }(u))\right| \le \int _0^{\tau \wedge \sigma }\\&\quad \left| \tilde{m}_s(z_s(u))-m_s(z_s(u))\right| \left| \partial _z \left( \tilde{m}_s(z_s(u))+\frac{V'(z_s(u))}{2}\right) \right| \mathrm{d}s\\&\qquad + \frac{C({\tau \wedge \sigma })M(\log N)^{2}}{N} +\frac{C(\log N)^{1+\delta }}{N\mathop {\mathrm {Im}}[z_{\tau \wedge \sigma }(u)]}. \end{aligned}$$

Notice that for \(s\le \tau \wedge \sigma \),

$$\begin{aligned} \left| \partial _z \left( \tilde{m}_s(z_s(u))+\frac{V'(z_s(u))}{2}\right) \right| \le \frac{\mathop {\mathrm {Im}}[\tilde{m}_s(z_s(u))]}{\mathop {\mathrm {Im}}[z_s(u)]}+C, \end{aligned}$$
(3.41)

where we used (2.27) for the Stieltjes transform \({\tilde{m}}_s\), and \(\partial _z V'(z_s(u))={{\mathrm{O}}}(1)\) from Proposition 2.6. Since \(z_s(u)\in \mathcal D_s\), by the definition of \(\mathcal D_s\), we have \(\mathop {\mathrm {Im}}[m_s(z_s(u))]\ge M\log N/ (N\mathop {\mathrm {Im}}[z_s(u)])\). Moreover, since \(s\le \sigma \), we have \(|\tilde{m}_s(z_s(u))-m_s(z_s(u))|\le M/(N\mathop {\mathrm {Im}}[z_s(u)])\le \mathop {\mathrm {Im}}[m_s(z_s(u))]/\log N\). Therefore,

$$\begin{aligned} \left| \partial _z \left( \tilde{m}_s(z_s(u))+\frac{V'(z_s(u))}{2}\right) \right|&\le \frac{\mathop {\mathrm {Im}}[\tilde{m}_s(z_s(u))]}{\mathop {\mathrm {Im}}[z_s(u)]}+C\nonumber \\&\le \left( 1+\frac{1}{\log N}\right) \frac{\mathop {\mathrm {Im}}[ m_s(z_s(u))]}{\mathop {\mathrm {Im}}[z_s(u)]}+C. \end{aligned}$$
(3.42)

We denote

$$\begin{aligned} \beta _s(u)\,{:}{=}\,\left( 1+\frac{1}{\log N}\right) \frac{\mathop {\mathrm {Im}}[ m_s(z_s(u))]}{\mathop {\mathrm {Im}}[z_s(u)]}+C= {{\mathrm{O}}}\left( \frac{\mathop {\mathrm {Im}}[ m_s(z_s(u))]}{\mathop {\mathrm {Im}}[z_s(u)]}\right) . \end{aligned}$$
(3.43)

We have derived the inequality,

$$\begin{aligned} \begin{aligned} \left| \tilde{m}_{\tau \wedge \sigma }(z_{\tau \wedge \sigma }(u))-m_{\tau \wedge \sigma }(z_{\tau \wedge \sigma }(u))\right|&\le \int _0^{\tau \wedge \sigma }\beta _s(u)\left| \tilde{m}_s(z_s(u))-m_s(z_s(u))\right| \mathrm{d}s\\&\quad + \frac{C({\tau \wedge \sigma })M(\log N)^{2}}{N} +\frac{C(\log N)^{1+\delta }}{N\mathop {\mathrm {Im}}[z_{\tau \wedge \sigma }(u)]}. \end{aligned} \end{aligned}$$
(3.44)

By Gronwall’s inequality, this implies the estimate

$$\begin{aligned} \begin{aligned}&\left| \tilde{m}_{t\wedge \sigma }(z_{t\wedge \sigma }(u))-m_{t\wedge \sigma }(z_{t\wedge \sigma }(u))\right| \le \frac{C({t\wedge \sigma })M(\log N)^{2}}{N} +\frac{C(\log N)^{1+\delta }}{N\mathop {\mathrm {Im}}[z_{t\wedge \sigma }(u)]}\\&\quad +\int _0^{t\wedge \sigma }\beta _s(u)\left( \frac{sM(\log N)^{2}}{N} +\frac{C(\log N)^{1+\delta }}{N\mathop {\mathrm {Im}}[z_s(u)]}\right) e^{\int _s^{t\wedge \sigma } \beta _\tau (u)\mathrm{d}\tau }\mathrm{d}s. \end{aligned} \end{aligned}$$
(3.45)
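The passage from (3.44) to (3.45) is the integral form of Gronwall's inequality: if \(f(\tau )\le A(\tau )+\int _0^\tau \beta _s f(s)\mathrm{d}s\), then \(f(t)\le A(t)+\int _0^t \beta _s A(s)e^{\int _s^t\beta _\tau \mathrm{d}\tau }\mathrm{d}s\). A discrete sanity check of this implication (the rate beta and forcing A below are arbitrary toy choices, not the quantities in the proof):

```python
import math

n, t_final = 20000, 1.0
h = t_final / n
beta = lambda s: 2.0 + math.sin(5 * s)   # arbitrary nonnegative rate
A = lambda s: 0.1 + 0.05 * s             # arbitrary nondecreasing forcing

# build f solving the *equality* version of the integral inequality:
# f(tau) = A(tau) + int_0^tau beta(s) f(s) ds  (forward Euler accumulation)
f = [A(0.0)]
acc = 0.0
for k in range(1, n + 1):
    acc += beta((k - 1) * h) * f[-1] * h
    f.append(A(k * h) + acc)

# Gronwall bound:  f(t) <= A(t) + int_0^t beta(s) A(s) exp(int_s^t beta) ds,
# accumulating exp(int_s^t beta) from s = t downward
exp_int = 0.0
bound = A(t_final)
for k in range(n, 0, -1):
    s = (k - 1) * h
    exp_int += beta(s) * h
    bound += beta(s) * A(s) * math.exp(exp_int) * h
```

Since f saturates the hypothesis, its terminal value should agree with the bound up to discretization error.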

By (2.31) of Proposition 2.7, and (3.43), we have

$$\begin{aligned} e^{\int _s^{t\wedge \sigma } \beta _\tau (u)\mathrm{d}\tau }&\le e^{C(t-s)} e^{\left( 1+\frac{1}{\log N}\right) \log \left( \frac{\mathop {\mathrm {Im}}[z_{s}(u)]}{\mathop {\mathrm {Im}}[z_{t\wedge \sigma }(u)]}\right) }\\&=e^{C(t-s)} \left( \frac{\mathop {\mathrm {Im}}[z_{s}(u)]}{\mathop {\mathrm {Im}}[z_{t\wedge \sigma }(u)]}\right) ^{1+\frac{1}{\log N}} \le C \frac{\mathop {\mathrm {Im}}[z_s(u)]}{\mathop {\mathrm {Im}}[z_{t\wedge \sigma }(u)]}. \end{aligned}$$

In the last inequality, we used the estimate (3.8), which shows that \(\mathop {\mathrm {Im}}[z_{s}(u)]/\mathop {\mathrm {Im}}[z_{t\wedge \sigma }(u)] \le C N\). Combining the above inequality with (3.43), we can bound the last term in (3.45) by

$$\begin{aligned} \begin{aligned}&C\int _0^{t\wedge \sigma }\frac{\mathop {\mathrm {Im}}[ m_{s}(z_s(u))]}{\mathop {\mathrm {Im}}[z_{t\wedge \sigma }(u)]}\left( \frac{sM(\log N)^{2}}{N} +\frac{C(\log N)^{1+\delta }}{N\mathop {\mathrm {Im}}[z_s(u)]}\right) \mathrm{d}s\\&\quad \le \frac{CM(\log N)^{2}}{N\mathop {\mathrm {Im}}[z_{t\wedge \sigma }(u)]}\int _0^{t\wedge \sigma }s\mathop {\mathrm {Im}}[m_s(z_s(u))]\mathrm{d}s+\frac{C(\log N)^{2+\delta }}{N\mathop {\mathrm {Im}}[z_{t\wedge \sigma }(u)]} , \end{aligned} \end{aligned}$$
(3.46)

where we used (2.34) and that \(\log (\mathop {\mathrm {Im}}[z_0(u)/z_{t\wedge \sigma }(u)])={{\mathrm{O}}}(\log N)\) from (3.8). Since \(|V' (z) | \le C\), it follows from (2.25) that \(\mathop {\mathrm {Im}}[m_s(z_s(u))]=-\partial _s \mathop {\mathrm {Im}}[z_s(u)] +{{\mathrm{O}}}(1)\). Therefore we can bound the integral term in (3.46) by,

$$\begin{aligned} \begin{aligned} \int _0^{t\wedge \sigma }s\mathop {\mathrm {Im}}[m_s(z_s(u))]\mathrm{d}s =&\int _0^{t\wedge \sigma }(-\partial _s \mathop {\mathrm {Im}}[z_s(u)] )s \mathrm{d}s + {{\mathrm{O}}}( (t \wedge \sigma )^2 ) = {{\mathrm{O}}}( t \wedge \sigma ) . \end{aligned}\end{aligned}$$
(3.47)
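In more detail, since \(0\le \mathop {\mathrm {Im}}[z_s(u)]\le C\) along the characteristics, the integration by parts behind (3.47) reads:

```latex
\int_0^{t\wedge\sigma} (-\partial_s \operatorname{Im}[z_s(u)])\, s \,\mathrm{d}s
 = -\,(t\wedge\sigma)\operatorname{Im}[z_{t\wedge\sigma}(u)]
   + \int_0^{t\wedge\sigma} \operatorname{Im}[z_s(u)]\,\mathrm{d}s
 = \mathrm{O}(t\wedge\sigma),
```

where both the boundary term and the remaining integral are \(\mathrm{O}(t\wedge \sigma )\) because \(\mathop {\mathrm {Im}}[z_s(u)]\) is bounded.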

It follows by combining (3.45), (3.46) and (3.47) that

$$\begin{aligned} \left| \tilde{m}_{t\wedge \sigma }(z_{t\wedge \sigma }(u))-m_{t\wedge \sigma }(z_{t\wedge \sigma }(u))\right| \le C\left( \frac{({t\wedge \sigma })M (\log N)^{2} +(\log N)^{2+\delta }}{N\mathop {\mathrm {Im}}[z_{t\wedge \sigma }(u)]}\right) . \end{aligned}$$
(3.48)

Therefore on the event \(\Omega \) as defined in Proposition 3.8,

$$\begin{aligned} \left| \tilde{m}_{t\wedge \sigma }(z_{t\wedge \sigma }(u))-m_{t\wedge \sigma }(z_{t\wedge \sigma }(u))\right| =o\left( \frac{M}{N\mathop {\mathrm {Im}}[z_{t\wedge \sigma }(u)]}\right) , \end{aligned}$$
(3.49)

provided \(t\ll T= (\log N)^{-2}\), and \(M=(\log N)^{2+2\delta }\). By Proposition 3.7, for any \(w\in {{\mathrm{\mathcal {D}}}}_{t\wedge \sigma }\), there exists some \(u\in \mathcal L\) such that \(z_{t\wedge \sigma }(u)\in {{\mathrm{\mathcal {D}}}}_{t\wedge \sigma }\), and

$$\begin{aligned} |z_{t\wedge \sigma }(u)-w|={{\mathrm{O}}}(N^{-3{\mathfrak c}}). \end{aligned}$$
(3.50)

Moreover, on the domain \(\mathcal D_{t\wedge \sigma }\), both \(\tilde{m}_{t\wedge \sigma }\) and \(m_{t\wedge \sigma }\) are Lipschitz with constant \(N^{2{\mathfrak c}}\). Therefore

$$\begin{aligned} \begin{aligned}&\left| \tilde{m}_{t\wedge \sigma }(w)-m_{t\wedge \sigma }(w)\right| \le \left| \tilde{m}_{t\wedge \sigma }(z_{t\wedge \sigma }(u))-m_{t\wedge \sigma }(z_{t\wedge \sigma }(u))\right| \\&\qquad + \left| \tilde{m}_{t\wedge \sigma }(w)-\tilde{m}_{t\wedge \sigma }(z_{t\wedge \sigma }(u))\right| + \left| m_{t\wedge \sigma }(w)-m_{t\wedge \sigma }(z_{t\wedge \sigma }(u))\right| \\&\quad = o\left( \frac{M}{N\mathop {\mathrm {Im}}[z_{t\wedge \sigma }(u)]}\right) +{{\mathrm{O}}}\left( \frac{|z_{t\wedge \sigma }(u)-w|}{N^{-2{\mathfrak c}}}\right) =o\left( \frac{M}{N\mathop {\mathrm {Im}}[z_{t\wedge \sigma }(u)]}\right) . \end{aligned} \end{aligned}$$
(3.51)

If \( \sigma < t\) somewhere on the event \(\Omega \), then by continuity there must be a point \(z \in {{\mathrm{\mathcal {D}}}}_\sigma \) such that

$$\begin{aligned} | \tilde{m}_{ \sigma } (z) - m_{\sigma } (z) | = \frac{ M}{ N \mathop {\mathrm {Im}}[z] } . \end{aligned}$$
(3.52)

This contradicts (3.51), and so we see that on \(\Omega \), \(\sigma = t\). This completes the proof of (3.3). \(\square \)

3.3 Proof of Corollary 3.2

Proof of Corollary 3.2

The proof follows a similar argument to [24, Lemma B.1]. Recall the function \(\eta (x)\) from Remark 3.3. Let \(S(x+\mathrm {i}y)=\tilde{m}_{t}(x+\mathrm {i}y)-m_{t}(x+\mathrm {i}y)\). Fix some \(E_0\in [-{\mathfrak b}, {\mathfrak b}]\). Define

$$\begin{aligned} \tilde{\eta }:= \inf _{ \eta \ge e^{ K t} N^{-{\mathfrak c}} } \left\{ \eta : \max _{ E_0 \le x \le E_0 + \eta } \eta (x) \le \eta \right\} . \end{aligned}$$
(3.53)

For later use we define

$$\begin{aligned} \tilde{E} := {{\mathrm{argmax}}}_{E_0 \le x \le E_0 + \tilde{\eta }} \eta (x), \end{aligned}$$
(3.54)

so that

$$\begin{aligned} \eta ( \tilde{E} ) = \tilde{\eta }. \end{aligned}$$
(3.55)

We define a test function \(f:{\mathbb R}\rightarrow {\mathbb R}\) such that \(f(x)=1\) for \(x\in [-2{\mathfrak b}, E_0]\), and such that f(x) vanishes outside \([-2{\mathfrak b}-1, E_0+\tilde{\eta }]\). We take f so that \(f'(x)={{\mathrm{O}}}(1)\) and \(f''(x)={{\mathrm{O}}}(1)\) on \([-2{\mathfrak b}-1,-2{\mathfrak b}]\), and \(f'(x)={{\mathrm{O}}}(1/\tilde{\eta })\) and \(f''(x)={{\mathrm{O}}}(1/\tilde{\eta }^2)\) on \([E_0,E_0+\tilde{\eta }]\). By the Helffer–Sjöstrand formula, see [30, Chapter 11.2], we have,

$$\begin{aligned} \begin{aligned} \left| \int _{-\infty }^{\infty } f(x)( \mathrm{d}\tilde{\mu }_t(x)-\mathrm{d}\mu _t(x))\right|&\le C\int _{{\mathbb C}_+} ( |f(x)|+|y||f'(x)|)|\chi '(y)||S(x+\mathrm {i}y)|\mathrm{d}x\mathrm{d}y\\&\quad +C\left| \int _{{\mathbb C}_+} y\chi (y)f''(x)\mathop {\mathrm {Im}}[S(x+\mathrm {i}y)]\mathrm{d}x\mathrm{d}y\right| . \end{aligned} \end{aligned}$$
(3.56)
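For reference, a standard form of the Helffer–Sjöstrand formula is the following: for \(f\in C^2_c({\mathbb R})\) and a smooth even cutoff \(\chi \) with \(\chi (y)=1\) in a neighborhood of \(y=0\),

```latex
f(\lambda) = \frac{1}{2\pi}\int_{\mathbb{R}^2}
  \frac{\mathrm{i}\, y f''(x)\chi(y)
        + \mathrm{i}\,\bigl(f(x)+\mathrm{i}\, y f'(x)\bigr)\chi'(y)}
       {\lambda - x - \mathrm{i}\, y}\,\mathrm{d}x\,\mathrm{d}y .
```

Integrating this identity against \(\mathrm{d}\tilde{\mu }_t-\mathrm{d}\mu _t\) and taking absolute values gives (3.56).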

On the event such that (3.3) holds, the first term is easily bounded by

$$\begin{aligned} \int _{{\mathbb C}_+} ( |f(x)|+|y||f'(x)|)|\chi '(y)||S(x+\mathrm {i}y)|\mathrm{d}x\mathrm{d}y \le \frac{CM}{N}. \end{aligned}$$
(3.57)

For the second term, recall that \(f''(x)=0\) unless \(x\in [E_0,E_0+\tilde{\eta }]\cup [-2{\mathfrak b}-1,-2{\mathfrak b}]\). By Remark 3.3 we have the estimate

$$\begin{aligned} | \mathop {\mathrm {Im}}[ S ( x + \mathrm {i}y ) ] | \le \frac{ C M \log (N) }{ N y }, \qquad y \ge \frac{ e^{ K t }}{ N^{{\mathfrak c}}} =: \eta _{\mathfrak c}. \end{aligned}$$
(3.58)

Hence,

$$\begin{aligned}&\left| \int _{-2{\mathfrak b}-1\le x\le -2{\mathfrak b}} y\chi (y)f''(x)\mathop {\mathrm {Im}}[S(x+\mathrm {i}y)]\mathrm{d}x\mathrm{d}y\right| \nonumber \\&\quad \le \left| \int _{-2{\mathfrak b}-1\le x\le -2{\mathfrak b}, |y| \ge \eta _{\mathfrak c}} y\chi (y)f''(x)\mathop {\mathrm {Im}}[S(x+\mathrm {i}y)]\mathrm{d}x\mathrm{d}y\right| \nonumber \\&\qquad + \left| \int _{-2{\mathfrak b}-1\le x\le -2{\mathfrak b}, |y| \le \eta _{\mathfrak c}} | f'' (x) | \mathrm{d}x\mathrm{d}y\right| \nonumber \\&\quad \le \frac{CM\log N}{N} + \frac{C}{N^{ {\mathfrak c}} } \le \frac{CM\log N}{N} . \end{aligned}$$
(3.59)

In the first integral we used the estimate (3.58) and in the second we used \(|y \mathop {\mathrm {Im}}[ S (x + \mathrm {i}y ) ] | \le 2\). For the region \(x \in [E_0, E_0 + \tilde{\eta }]\) we do a similar decomposition. First we bound the region \(|y| \le \tilde{\eta }\). We have,

$$\begin{aligned}&\left| \int _{E_0\le x\le E_0+\tilde{\eta }}\int _{y\le \tilde{\eta }} y\chi (y)f''(x)\mathop {\mathrm {Im}}[S(x+\mathrm {i}y)]\mathrm{d}x\mathrm{d}y\right| \nonumber \\&\quad \le \left| \int _{E_0\le x\le E_0+\tilde{\eta }}\int _{y\le \eta _{\mathfrak c}} y\chi (y)f''(x)\mathop {\mathrm {Im}}[S(x+\mathrm {i}y)]\mathrm{d}x\mathrm{d}y\right| \nonumber \\&\qquad + \left| \int _{E_0\le x\le E_0+\tilde{\eta }}\int _{ \eta _{\mathfrak c}\le y\le \tilde{\eta }} y\chi (y)f''(x)\mathop {\mathrm {Im}}[S(x+\mathrm {i}y)]\mathrm{d}x\mathrm{d}y\right| \nonumber \\&\quad \le \frac{ C \eta _{\mathfrak c}}{ \tilde{\eta }} + \frac{ C \log (N) M}{N} \le \frac{ C \log (N) M}{N}. \end{aligned}$$
(3.60)

For the first integral we used \(y| \mathop {\mathrm {Im}}[S (x + \mathrm {i}y ) ] |\le 2\) and in the second region we used (3.58). For the other region we integrate by parts,

$$\begin{aligned} \begin{aligned}&\int _{E_0\le x\le E_0+\tilde{\eta }}\int _{y\ge \tilde{\eta }} y\chi (y)f''(x)\mathop {\mathrm {Im}}[S(x+\mathrm {i}y)]\mathrm{d}x\mathrm{d}y \\&\quad = -\int _{E_0\le x\le E_0+\tilde{\eta }} f'(x)\tilde{\eta }\mathop {\mathrm {Re}}[S(x+\mathrm {i}\tilde{\eta })]\mathrm{d}x\\&\qquad -\int _{E_0\le x\le E_0+\tilde{\eta }}\int _{y\ge \tilde{\eta }} f'(x)\partial _y(y\chi (y))\mathop {\mathrm {Re}}[S(x+\mathrm {i}y)]\mathrm{d}x\mathrm{d}y. \end{aligned} \end{aligned}$$
(3.61)

By the definition of \(\tilde{\eta }\), the estimate (3.3) holds at the points \(x+ \mathrm {i}y\) with \(x \in [E_0 , E_0 + \tilde{\eta }]\) and \(y \ge \tilde{\eta }\). Hence, both terms are easily estimated by \(C \log (N) M / N\).

The above estimates imply

$$\begin{aligned} \left| \int _{-\infty }^{\infty } f(x)( \mathrm{d}\tilde{\mu }_t(x)-\mathrm{d}\mu _t(x))\right| \le \frac{CM\log N}{N}. \end{aligned}$$
(3.62)

We can now prove the lower bound of (3.4). We have,

$$\begin{aligned} \begin{aligned}&|\{i:\lambda _i(t)\le E_0\}| \le N\int _{-\infty }^{\infty }f(x)\mathrm{d}\tilde{\mu }_t(x) \le N\int _{-\infty }^{E_0+\tilde{\eta }}\mathrm{d}\mu _t(x)+CM\log N. \end{aligned} \end{aligned}$$
(3.63)

If \(\tilde{\eta }= e^{ K t } / N^{{\mathfrak c}}\) then the lower bound of (3.4) follows by taking \(E_0=\gamma _{i-CM\log N}(t)- N^{-{\mathfrak c}+1}\). If \(\tilde{\eta }=\eta (\tilde{E})> e^{ K t } / N^{{\mathfrak c}}\), then by the defining relation of the function \(\eta (E)\) as in (3.7), we have \({\tilde{\eta }}\mathop {\mathrm {Im}}[m_t({\tilde{E}}+\mathrm {i}\tilde{\eta })]=e^{Kt}M\log N/N\). We calculate

$$\begin{aligned} \int _{E_0}^{E_0+\tilde{\eta }}\mathrm{d}\mu _t(x)\le & {} \int _{E_0}^{E_0+\tilde{\eta }}\frac{2\tilde{\eta }^2}{(x-\tilde{E})^2+\tilde{\eta }^2}\mathrm{d}\mu _t(x)\nonumber \\\le & {} 2\tilde{\eta }\mathop {\mathrm {Im}}[m_t(\tilde{E}+\mathrm {i}\tilde{\eta })]=\frac{2e^{Kt}M\log N}{N}. \end{aligned}$$
(3.64)

Hence,

$$\begin{aligned} |\{i:\lambda _i(t)\le E_0\}| \le N \int _{- \infty }^{ E_0} \mathrm {d}\mu _t (x) + C M \log (N). \end{aligned}$$
(3.65)

The lower bound then follows by taking \(E_0=\gamma _{i-CM\log N}(t)\). The upper bound of (3.4) is proven similarly. \(\square \)
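The Poisson-kernel comparison used in (3.64) holds for any probability measure and any point of the window. As an illustrative numerical sanity check (not part of the proof; the sampled points and the window are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
lam = rng.uniform(-1.0, 1.0, 10_000)   # stand-in for the particles
E0, eta = 0.3, 0.05                    # window [E0, E0 + eta]
E_tilde = E0 + eta / 2                 # any point of the window works

# Im of the empirical Stieltjes transform at E_tilde + i*eta
m_im = np.mean(eta / ((lam - E_tilde) ** 2 + eta ** 2))
# mass the empirical measure puts on the window
mass = np.mean((lam >= E0) & (lam <= E0 + eta))

# pointwise: 1_{[E0, E0+eta]}(x) <= 2*eta^2 / ((x - E_tilde)^2 + eta^2)
assert mass <= 2 * eta * m_im
```

The assertion is deterministic: the indicator of the window is dominated pointwise by twice the Poisson kernel at scale \(\eta \), exactly as in (3.64).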

4 Mesoscopic central limit theorem

In this section we prove a mesoscopic central limit theorem for the \(\beta \)-DBM (1.1). We recall the parameters \(\delta \) and M defined at the beginning of Sect. 3. In this section, we fix scale parameters \(\eta _*\) and r such that \(N^{-1}\le \eta _*\ll r\le 1\). If the initial data \({\varvec{}}\lambda (0)\) is regular down to the scale \(\eta _*\) on the interval \([E_0-r, E_0+r]\), then after time \(t\gg \eta _*\) the linear statistics satisfy a central limit theorem on scales \(\eta \ll t\). The precise definition of regularity is the following assumption.

Assumption 4.1

We assume that the initial data satisfies the following two conditions.

  1.

    There exists some finite constant \({\mathfrak a}\), such that \(-{\mathfrak a}\le \lambda _1(0)\le \lambda _2(0)\le \cdots \le \lambda _N(0)\le {\mathfrak a}\);

  2.

    There exists some finite constant \({\mathfrak d}\), such that

    $$\begin{aligned} {\mathfrak d}^{-1}\le \mathop {\mathrm {Im}}[m_0(z)]\le {\mathfrak d}, \end{aligned}$$
    (4.1)

    uniformly for any \(z\in \{E+\mathrm {i}\eta : E\in [E_0-r, E_0+r], \eta _*\le \eta \le 1\}\).

Under the above assumption we can prove the following mesoscopic central limit theorem for the Stieltjes transform.

Theorem 4.2

Suppose V satisfies Assumption 2.1, and moreover that V is \(C^5\). Fix a small constant \(\delta >0\), \(M=(\log N)^{2+2\delta }\), and \(N^{-1}\le \eta _*\ll r\le 1\), and assume that the initial data \({\varvec{}}\lambda (0)\) satisfies Assumption 4.1. For any time t with \(\eta _*\ll t\ll (\log N)^{-1}r\wedge (\log N)^{-2}\), the normalized Stieltjes transform \(\Gamma _t(z)\,{:}{=}\,N\mathop {\mathrm {Im}}[z]\left( \tilde{m}_t(z)-m_t(z)\right) \) is asymptotically a Gaussian field on \(\{E+\mathrm {i}\eta : E\in [E_0-r/2, E_0+r/2], M^2/N\ll \eta \ll t/(M\log N)\}\). For any \(z_1,z_2,\ldots , z_k\) in this domain, the joint characteristic function of \(\Gamma _t(z_1), \Gamma _t(z_2),\ldots , \Gamma _t(z_k)\) is given by

$$\begin{aligned} \begin{aligned}&\mathbb {E}\left[ \exp \left\{ \mathrm {i}\sum _{j=1}^k a_j \mathop {\mathrm {Re}}[\Gamma _t(z_j)]+b_j \mathop {\mathrm {Im}}[\Gamma _t(z_j)]\right\} \right] \\&\quad =\exp \left\{ \sum _{1\le j,\ell \le k}\mathop {\mathrm {Re}}\left[ \frac{(a_j-\mathrm {i}b_j)(a_\ell +\mathrm {i}b_\ell )\mathop {\mathrm {Im}}[z_j]\mathop {\mathrm {Im}}[z_\ell ]}{2\beta (z_j-\bar{z}_\ell )^2}\right] \right\} \\&\qquad +{{\mathrm{O}}}\left( \frac{M^2}{N\min _j\{\mathop {\mathrm {Im}}[z_j]\}}+\frac{M\log N\max _j\{\mathop {\mathrm {Im}}[z_j]\}}{t}\right) . \end{aligned} \end{aligned}$$
(4.2)

Remark 4.3

The above covariance structure is universal, independent of the potential V. By a slightly more elaborate analysis, we can compute the joint characteristic function of \(\Gamma _{t_1}(z_1), \Gamma _{t_2}(z_2),\ldots , \Gamma _{t_k}(z_k)\), where the times \(\eta _*\ll t_1, t_2,\ldots , t_k\ll (\log N)^{-1}r\wedge (\log N)^{-2}\) and \(z_j\in \{E+\mathrm {i}\eta : E\in [E_0-r/2, E_0+r/2], M^2/N\ll \eta \ll t_j/(M\log N)\}\) for \(j=1,2,\ldots ,k\),

$$\begin{aligned} \begin{aligned}&\mathbb {E}\left[ \exp \left\{ \mathrm {i}\sum _{j=1}^k a_j \mathop {\mathrm {Re}}[\Gamma _{t_j}(z_j)]+b_j \mathop {\mathrm {Im}}[\Gamma _{t_j}(z_j)]\right\} \right] \\&\quad =\exp \left\{ \sum _{1\le j,\ell \le k}\mathop {\mathrm {Re}}\left[ \frac{(a_j-\mathrm {i}b_j)(a_\ell +\mathrm {i}b_\ell )\mathop {\mathrm {Im}}[z_j]\mathop {\mathrm {Im}}[z_\ell ]}{2\beta (z_{t_j\wedge t_\ell }\circ z_{t_j}^{-1}(z_j)-\overline{z_{t_j\wedge t_\ell }\circ z_{t_\ell }^{-1}(z_\ell )})^2}\right] \right\} \\&\qquad +{{\mathrm{O}}}\left( \frac{M^2}{N\min _j\{\mathop {\mathrm {Im}}[z_j]\}}+M\log N \max _j\left\{ \frac{\mathop {\mathrm {Im}}[z_j]}{t_j}\right\} \right) . \end{aligned} \end{aligned}$$
(4.3)

By the Littlewood–Paley type decomposition argument developed in [49], the above theorem implies the following central limit theorem for mesoscopic linear statistics.

Corollary 4.4

Under the assumptions of Theorem 4.2, the following holds for any compactly supported test function \(\psi \) in the Sobolev space \(H^{s}\) with \(s>1\). Let \(M^2/N\ll \eta \ll t\), \(E\in [E_0-r, E_0+r]\), and define

$$\begin{aligned} \psi _{\eta ,E}(x)=\psi \left( \frac{x-E}{\eta }\right) . \end{aligned}$$
(4.4)

The normalized linear statistics converges to a Gaussian

$$\begin{aligned} \mathcal L(\psi _{\eta ,E})\,{:}{=}\,\sum _{i=1}^N \psi _{\eta ,E}(\lambda _i(t))-N\int _{{\mathbb R}} \psi _{\eta ,E}(x) \mathrm{d}\mu _t(x)\rightarrow N(0, \sigma _\psi ^2), \end{aligned}$$
(4.5)

in distribution as \(N\rightarrow \infty \), where

$$\begin{aligned} \sigma _\psi ^2\,{:}{=}\,\frac{1}{2\beta \pi ^2}\int _{{\mathbb R}^2} \left( \frac{\psi (x)-\psi (y)}{x-y}\right) ^2\mathrm{d}x\mathrm{d}y. \end{aligned}$$
(4.6)
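As an illustration of (4.6) (not from the paper): for the Gaussian test function \(\psi (x)=e^{-x^2}\), a Plancherel computation gives \(\int _{{\mathbb R}^2}((\psi (x)-\psi (y))/(x-y))^2\mathrm{d}x\mathrm{d}y=2\pi \), hence \(\sigma _\psi ^2=1/(\beta \pi )\). The following sketch checks this by quadrature, rewriting the double integral through the autocorrelation \(c(h)=\int \psi (x)\psi (x+h)\mathrm{d}x\):

```python
import numpy as np

psi = lambda x: np.exp(-x ** 2)        # Gaussian test function
dx, dh, H = 0.02, 0.02, 200.0
x = np.arange(-8 + dx / 2, 8, dx)      # psi is negligible outside [-8, 8]
h = np.arange(dh / 2, H, dh)           # h = x - y; use the symmetry h -> -h

A = np.sum(psi(x) ** 2) * dx           # \int psi^2 dx
c = (psi(x)[None, :] * psi(x[None, :] + h[:, None])).sum(axis=1) * dx
# \int (psi(x+h) - psi(x))^2 dx = 2A - 2c(h), so the double integral equals
I = 2 * np.sum((2 * A - 2 * c) / h ** 2) * dh

beta = 2.0
sigma2 = I / (2 * beta * np.pi ** 2)   # the variance in (4.6)
```

With the truncation at \(H=200\) the computed double integral agrees with \(2\pi \) to well under one percent.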

4.1 Regularity of the Stieltjes transform of the limit measure-valued process

In this subsection we analyze the differential equation of the Stieltjes transform of the limit measure-valued process (2.26) under the assumptions of Theorem 4.2. We will need some regularity results for \(m_t\). First we prove some preliminary estimates. The following two estimates are standard.

Lemma 4.5

Under the assumptions of Theorem 4.2, we have, for any interval \(I=[E-\eta , E+\eta ]\) with \(E\in [E_0-r, E_0+r]\) and \(\eta \in [4{\mathfrak d}^2\eta _*, 1]\), the estimate

$$\begin{aligned} \frac{|I|N}{16{\mathfrak d}^3}\le |\{i:\lambda _i(0)\in I\}|\le {\mathfrak d}|I|N . \end{aligned}$$
(4.7)

Proof

For the upper bound, by taking \(z=E+\mathrm {i}\eta \), we have

$$\begin{aligned} {\mathfrak d}\ge \mathop {\mathrm {Im}}[m_0(E+\mathrm {i}\eta )]\ge \frac{1}{N}\sum _{i:\lambda _i\in I}\frac{\eta }{(\lambda _i-E)^2+\eta ^2}\ge \frac{|\{i:\lambda _i\in I\}|}{2N\eta }. \end{aligned}$$
(4.8)

For the lower bound, let \(\eta _1=\eta /(4{\mathfrak d}^2)\ge \eta _*\), we have

$$\begin{aligned} {\mathfrak d}^{-1}\le&\mathop {\mathrm {Im}}[m_0(E+\mathrm {i}\eta _1)]=\frac{1}{N}\sum _{i:\lambda _i\in I}\frac{\eta _1}{(\lambda _i-E)^2+\eta _1^2}+\frac{1}{N}\sum _{i:\lambda _i\not \in I}\frac{\eta _1}{(\lambda _i-E)^2+\eta _1^2}\nonumber \\ \le&\frac{|\{i:\lambda _i\in I\}|}{N\eta _1}+\frac{\eta _1}{\eta }\frac{1}{N}\sum _{i=1}^N\frac{2\eta }{(\lambda _i-E)^2+\eta ^2}\nonumber \\ \le&\frac{|\{i:\lambda _i\in I\}|}{N\eta _1}+\frac{2\eta _1}{\eta }\mathop {\mathrm {Im}}[m_0(E+\mathrm {i}\eta )]\nonumber \\ \le&\frac{4{\mathfrak d}^2|\{i:\lambda _i\in I\}|}{N\eta }+\frac{1}{2{\mathfrak d}}, \end{aligned}$$
(4.9)

and the lower bound follows by rearranging: since \({\mathfrak d}^{-1}-(2{\mathfrak d})^{-1}=(2{\mathfrak d})^{-1}\), we obtain \(|\{i:\lambda _i\in I\}|\ge N\eta /(8{\mathfrak d}^3)=|I|N/(16{\mathfrak d}^3)\). \(\square \)

Corollary 4.6

Assume the conditions of Theorem 4.2. Let \(u=E+\mathrm {i}\eta \) with \(E\in [E_0-r, E_0+r]\) and \(\eta \in [\eta _*, 1]\). There exists a constant \(C>0\) so that if \(\mathop {\mathrm {Im}}[z_t(u)]> 0\), then

$$\begin{aligned} |m_t(z_t(u))|\le C\log N, \end{aligned}$$
(4.10)

and

$$\begin{aligned} |\partial _t m_t(z_t(u))|\le C\log N. \end{aligned}$$
(4.11)

Proof

For \(t=0\), let \(\eta _1=4{\mathfrak d}^2\eta \). By a dyadic decomposition we have

$$\begin{aligned} \begin{aligned} |m_0(u)| \le&\frac{1}{N}\left( \sum _{|\lambda _{i}-E|\le \eta _1}\frac{1}{\eta }+\sum _{k=1}^{\lfloor -\log _2(\eta _1)\rfloor }\sum _{2^{k}\eta _1 \ge |\lambda _{i}-E|\ge 2^{k-1}\eta _1}\frac{1}{|\lambda _i-E|}+\sum _{|\lambda _{i}-E|\ge 1/2}\frac{1}{|\lambda _i-E|}\right) \\ \le&2{\mathfrak d}\eta _1/\eta -4{\mathfrak d}\log _2 \eta _1+2\le C\log N. \end{aligned} \end{aligned}$$
(4.12)

By Proposition 2.6 we have that \(|\partial _z V' (z) |\le C\) and \(| g(z, x) | \le C\) and so

$$\begin{aligned} \left| \partial _s m_s(z_s(u))\right| \le C( |m_s(z_s(u))|+1), \end{aligned}$$
(4.13)

and therefore,

$$\begin{aligned} |m_t(z_t(u))|\le e^{Ct}(|m_0(z_0(u))|+1)={{\mathrm{O}}}(\log N). \end{aligned}$$
(4.14)

The claim follows. \(\square \)

We now derive estimates on quantities appearing in our analysis of \(m_t\).

Lemma 4.7

Assume that Assumption 4.1 holds. Let \(t\ll (\log N)^{-1}r\wedge (\log N)^{-2}\), and \(u\in \{E+\mathrm {i}\eta : E\in [E_0-r, E_0+r],\eta \in [\eta _*, 1]\}\). If \(z_t(u)\in \{E+\mathrm {i}\eta : E\in [E_0-r/2, E_0+r/2], M^2/N\ll \eta \ll t\}\), then for \(0\le s\le t\), we have that \(z_s\in \mathcal D_s\) as defined in (3.2), and moreover,

$$\begin{aligned} \int _0^t\frac{\mathrm{d}s}{\mathop {\mathrm {Im}}[z_s]^p}\le 2{\mathfrak d}\int _0^t\frac{\mathop {\mathrm {Im}}[m_s(z_s)]}{\mathop {\mathrm {Im}}[z_s]^p}\mathrm{d}s \le {\left\{ \begin{array}{ll} C\log N, &{} p=1, \\ \frac{C}{\mathop {\mathrm {Im}}[z_t]^{p-1}},&{} p>1. \end{array}\right. } \end{aligned}$$
(4.15)

Proof

Let u be as in the statement of the lemma and denote \(z_s = z_s (u)\). By (4.10), we have \(|\mathop {\mathrm {Re}}[z_s]|\le r+Ct\log N\le 3{\mathfrak b}-s\) and \(\mathop {\mathrm {Im}}[z_s]\le 1+Ct\log N\le 3{\mathfrak b}-s\), since \(t\le (\log N)^{-2}\). By the assumption \({\mathfrak d}^{-1}\le \mathop {\mathrm {Im}}[m_0(u)]\le {\mathfrak d}\) and the estimate (2.29), we have uniformly for any \(0\le s\le t\),

$$\begin{aligned} (2{\mathfrak d})^{-1}\le \mathop {\mathrm {Im}}[m_s(z_s(u))]\le 2{\mathfrak d}, \end{aligned}$$
(4.16)

since \(t\ll 1\). Moreover, by (2.28), we have \(\mathop {\mathrm {Im}}[z_s]\ge c\mathop {\mathrm {Im}}[z_t]\gg M^2/N\). Therefore,

$$\begin{aligned} \frac{e^{Ks}M\log N}{N\mathop {\mathrm {Im}}[m_s(z_s)]}\vee \frac{e^{Ks}}{N^{{\mathfrak c}}}\ll \frac{M^2}{N}\ll \mathop {\mathrm {Im}}[z_s]. \end{aligned}$$

It follows that \(z_s(u)\in {{\mathrm{\mathcal {D}}}}_s\). Since \(\mathop {\mathrm {Im}}[m_s(z_s)]\ge (2{\mathfrak d})^{-1}\), we have

$$\begin{aligned} \int _0^t\frac{\mathrm{d}s}{\mathop {\mathrm {Im}}[z_s]^p}\le 2{\mathfrak d}\int _0^t\frac{\mathop {\mathrm {Im}}[m_s(z_s)]}{\mathop {\mathrm {Im}}[z_s]^p}\mathrm{d}s. \end{aligned}$$
(4.17)

The \(p=1\) case of (4.15) follows from (2.31), using the estimate \(\mathop {\mathrm {Im}}[u]/\mathop {\mathrm {Im}}[z_t(u)]\le CN\) of (3.8). The case \(p>1\) follows directly from (2.31). \(\square \)

Lemma 4.8

The following holds under the assumptions of Theorem 4.2. Let \(u=E+\mathrm {i}\eta \) with \(E\in [E_0-r, E_0+r]\) and \(\eta \in [\eta _*, 1]\). There exists a uniform constant \(c>0\) so that if \(\mathop {\mathrm {Im}}[z_t(u)]>0\), then

$$\begin{aligned} 1-t\mathop {\mathrm {Re}}[\partial _z m_0(u)]\ge c. \end{aligned}$$
(4.18)

Proof

By the upper bound in (2.30), since \(\mathop {\mathrm {Im}}[z_t(u)]\ge 0\), we have

$$\begin{aligned} \eta =\mathop {\mathrm {Im}}[u]\ge \frac{1-e^{-Ct}}{C}\mathop {\mathrm {Im}}[m_0(u)]\ge \left( t-\frac{Ct^2}{2}\right) \mathop {\mathrm {Im}}[m_0(u)]. \end{aligned}$$
(4.19)

We write the LHS of (4.18) as

$$\begin{aligned} 1-t\mathop {\mathrm {Re}}[\partial _z m_0(u)]=1-\frac{t}{\eta }\mathop {\mathrm {Im}}[m_0(u)]+\frac{t}{N}\sum _{i=1}^{N}\frac{2\eta ^2}{|\lambda _i(0)-u|^4}. \end{aligned}$$
(4.20)

We consider the following two cases:

  1.

    If \(\eta \ge 2{\mathfrak d}t\), then by (4.20) and assumption (4.1), \(1-t\mathop {\mathrm {Re}}[\partial _z m_0(u)]\ge 1/2\).

  2.

    If \(\eta <2{\mathfrak d}t\), let \(\eta _1=\eta \vee 4{\mathfrak d}^2 \eta _*\le 4{\mathfrak d}^2\eta \). By combining (4.19), (4.20) and (4.7), we have

    $$\begin{aligned} \begin{aligned} 1-t\mathop {\mathrm {Re}}[\partial _z m_0(u)] \ge&-\frac{Ct^2}{2\eta }\mathop {\mathrm {Im}}[m_0(u)]+\frac{t}{N}\sum _{i:|\lambda _i(0)-E|\le \eta _1}\frac{2\eta ^2}{(2\eta _1^2)^2}\\ \ge&-\frac{C{\mathfrak d}t^2}{2\eta }+\frac{t}{2^{10}{\mathfrak d}^9\eta } = \frac{t}{2^{10}{\mathfrak d}^9\eta }\left( 1-2^9C{\mathfrak d}^{10} t\right) \ge \frac{1}{2^{12}{\mathfrak d}^{10}}, \end{aligned}\end{aligned}$$
    (4.21)

    where we used \(t\ll 1\).

\(\square \)
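The identity (4.20) used above is elementary: take real parts in \(\partial _z m_0(u)=N^{-1}\sum _i(\lambda _i(0)-u)^{-2}\). As an illustrative numerical check (not part of the proof; the sample points and parameters are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
lam = rng.uniform(-1.0, 1.0, 50)       # stand-in for the initial particles
E, eta, t = 0.1, 0.3, 0.05
u = E + 1j * eta

m0 = np.mean(1.0 / (lam - u))          # m_0(u)
dm0 = np.mean(1.0 / (lam - u) ** 2)    # \partial_z m_0(u)

lhs = 1 - t * dm0.real
rhs = 1 - (t / eta) * m0.imag + t * np.mean(2 * eta ** 2 / np.abs(lam - u) ** 4)
assert abs(lhs - rhs) < 1e-12          # (4.20) holds exactly
```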

In the following we derive the regularity of the Stieltjes transform of the limiting measure-valued process (2.26). As a preliminary we study the flow map \(u\rightarrow z_s(u)\), and prove that it is Lipschitz.

Proposition 4.9

Under the assumptions of Theorem 4.2 we have the following. Let \(u=E+\mathrm {i}\eta \), such that \(E\in [E_0-r, E_0+r]\) and \(\eta \in [\eta _*, 1]\). If \(\mathop {\mathrm {Im}}[z_t(u)]> 0\), then for \(0\le s\le t\),

$$\begin{aligned}&c \le |\partial _x z_s(u)|, |\partial _y z_s(u)| \le C, \end{aligned}$$
(4.22)
$$\begin{aligned}&|\partial _z m_s(z_s(u))|={{\mathrm{O}}}\left( t^{-1}\right) . \end{aligned}$$
(4.23)

where the constants depend on \(V'\) and \({\mathfrak d}\).

Proof

For \(s=0\), by (4.19) we have

$$\begin{aligned} \begin{aligned} |\partial _{z}m_0(u)|&\le \frac{1}{N}\sum _{i=1}^N\frac{1}{|\lambda _i(0)-u|^2}=\frac{\mathop {\mathrm {Im}}[m_0(u)]}{\mathop {\mathrm {Im}}[u]}={{\mathrm{O}}}\left( t^{-1}\right) . \end{aligned} \end{aligned}$$
(4.24)

By taking derivative with respect to x on both sides of (2.25), we get

$$\begin{aligned} \partial _s\partial _x z_s(u)=-\partial _z m_s(z_s(u))\partial _x z_s(u)+\frac{\partial _x V'(z_s(u))}{2},\quad \partial _x z_0(u)=1, \end{aligned}$$
(4.25)

where \(\partial _x V'(z_s(u))=\partial _z V'(z_s(u))\partial _x z_s(u)+\partial _{{\bar{z}}}V'(z_s(u))\partial _x \bar{z}_s(u)\). By taking derivative with respect to x on both sides of (2.26), we have

$$\begin{aligned} \begin{aligned}&\partial _s \left( \partial _z m_s(z_s(u)) \right) \partial _x z_s(u) +\partial _z m_s(z_s(u))\partial _s\partial _x z_s(u)\\&\quad =\frac{\partial _zm_s(z_s(u))\partial _z V'(z_s(u))\partial _x z_s(u) +m_s(z_s(u))\partial _x \partial _z V'(z_s(u))}{2}\\&\qquad +\int _{{\mathbb R}} \partial _x g(z_s(u), w)\mathrm{d}\mu _s(w), \end{aligned} \end{aligned}$$
(4.26)

where \(\partial _x g(z_s(u), w)=\partial _z g(z_s(u),w)\partial _xz_s(u)+\partial _{\bar{z}} g(z_s(u),w)\partial _x\bar{z}_s(u)\). Note that \(\partial _x z_0(u)=\partial _x(x+\mathrm {i}y)=1\). We define

$$\begin{aligned} \sigma =t\wedge \inf \{s\ge 0: \partial _x z_s(u)=0\}. \end{aligned}$$
(4.27)

Then \(0 < \sigma \le t \ll (\log N)^{-2}\), and for any \(0\le s<\sigma \) we have \(|\partial _x {\bar{z}}_s(u)| = |\partial _x z_s(u)|\).

By combining (4.25) and (4.26), and rearranging we have

$$\begin{aligned} \partial _s \left[ \partial _z m_s(z_s(u)) \right] =(\partial _z m_s(z_s(u)))^2+2\partial _z m_s(z_s(u)) b_s+c_s, \end{aligned}$$
(4.28)

where

$$\begin{aligned} \begin{aligned}&b_s = \frac{\partial _x V'(z_s(u))}{4\partial _x z_s(u)}+\frac{\partial _z V'(z_s(u))}{4},\quad c_s=\frac{m_s(z_s(u))\partial _x \partial _z V'(z_s(u))}{2\partial _x z_s(u)}\\&\quad +\frac{\int _{{\mathbb R}} \partial _x g(z_s(u), w)\mathrm{d}\mu _s(w)}{\partial _x z_s(u)}. \end{aligned} \end{aligned}$$
(4.29)

Under the assumptions of Theorem 4.2 we have \(\Vert V' (z) \Vert _{C^2} \le C\) and \(| \partial _z g (z, w) |+|\partial _{{\bar{z}}}g(z,w)| \le C\) by Proposition 2.6. Combining this with Corollary 4.6 we have \(|b_s| + |c_s| \le C\) for \(0 \le s \le \sigma \).

First we derive an upper bound for the real part of \(\partial _z m_s(z_s(u))\). It follows from taking real part on both sides of (4.28) that

$$\begin{aligned}&\partial _s \mathop {\mathrm {Re}}[\partial _z m_s(z_s(u))] \\&\quad = ( \mathop {\mathrm {Re}}[ \partial _z m_s (z_s (u) ])^2 - ( \mathop {\mathrm {Im}}[ \partial _z m_s (z_s (u) ) ] )^2 + 2 \mathop {\mathrm {Re}}[ \partial _z m_s (z_s (u) ) ] \mathop {\mathrm {Re}}[ b_s ]\\&\qquad - 2 \mathop {\mathrm {Im}}[ \partial _z m_s (z_s (u) ) ] \mathop {\mathrm {Im}}[ b_s ] + \mathop {\mathrm {Re}}[ c_s ] \\&\quad \le ( \mathop {\mathrm {Re}}[ \partial _z m_s (z_s (u) ) ] )^2 + 2 \mathop {\mathrm {Re}}[ \partial _z m_s (z_s (u) ) ] \mathop {\mathrm {Re}}[ b_s] + \mathop {\mathrm {Im}}[ b_s ] ^2 + \mathop {\mathrm {Re}}[ c_s ]\\&\quad = ( \mathop {\mathrm {Re}}[ \partial _z m_s (z_s (u) ) + b_s ] )^2 + \mathop {\mathrm {Re}}[ c_s - b_s^2 ]. \end{aligned}$$

Therefore, we derive

$$\begin{aligned} \begin{aligned} \partial _s ( \mathop {\mathrm {Re}}[\partial _z m_s(z_s(u))] )_+ \le&\left( ( \mathop {\mathrm {Re}}[\partial _z m_s(z_s(u))] )_+ +C\right) ^2 +C\log N, \end{aligned}\end{aligned}$$
(4.30)

with initial data \(( \mathop {\mathrm {Re}}[\partial _z m_0(z_0(u))] )_+\le (1-c)/{t}\) from (4.18). The above ODE is separable and by solving it explicitly and using the fact that \(\sqrt{\log N}t\ll 1\), we get

$$\begin{aligned} \begin{aligned} \mathop {\mathrm {Re}}[\partial _z m_s(z_s(u))] \le&\sqrt{C\log N}\tan \left( \arctan \left( \frac{(1-c)/t+C}{\sqrt{C\log N}}\right) +\sqrt{C\log N}s\right) \\ \asymp&\sqrt{C\log N}\tan \left( \frac{\pi }{2}-\frac{(c-Ct)\sqrt{C\log N }t}{1-c+Ct}\right) \asymp \frac{1-c}{c t}, \end{aligned} \end{aligned}$$
(4.31)

uniformly for \(0\le s\le \sigma \). Therefore, there exists some constant C, so that \(\mathop {\mathrm {Re}}[ \partial _z m_s (z_s (u) ) ]\le C/t\), uniformly for any \(0\le s\le \sigma \).
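The explicit solution step behind (4.31) is the standard comparison for a Riccati-type differential inequality: writing \(f(s)=( \mathop {\mathrm {Re}}[\partial _z m_s(z_s(u))] )_+\) and \(D=C\log N\),

```latex
f'(s) \le (f(s)+C)^2 + D
\;\Longrightarrow\;
\frac{\mathrm{d}}{\mathrm{d}s}\arctan\!\Bigl(\frac{f(s)+C}{\sqrt{D}}\Bigr)
 = \frac{\sqrt{D}\, f'(s)}{D + (f(s)+C)^2} \le \sqrt{D},
```

so that \(f(s)+C\le \sqrt{D}\tan \bigl (\arctan ((f(0)+C)/\sqrt{D})+\sqrt{D}s\bigr )\) as long as the argument of the tangent stays below \(\pi /2\), which is guaranteed here by \(\sqrt{\log N}\,t\ll 1\).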

Using this we derive from (4.28) that,

$$\begin{aligned} \begin{aligned} \partial _s|\partial _z m_s(z_s(u))|^2&=2\mathop {\mathrm {Re}}[\partial _s \partial _z m_s(z_s(u)) \partial _z {\bar{m}}_s(z_s(u))]\\&= 2\mathop {\mathrm {Re}}[\partial _z m_s(z_s(u))]| \partial _z m_s(z_s(u))|^2 +4\mathop {\mathrm {Re}}[b_s] |\partial _z m_s(z_s(u))|^2\\&\quad +2\mathop {\mathrm {Re}}[c_s \partial _z {\bar{m}}_s(z_s(u))]\\&\le \frac{C}{t}| \partial _z m_s(z_s(u))|^2+C(t\log N)^2. \end{aligned} \end{aligned}$$
(4.32)

It follows by Gronwall’s inequality that \(|\partial _z m_s(z_s(u))|={{\mathrm{O}}}(1/t)\) uniformly for \(0\le s\le \sigma \). Notice that \(\partial _x z_0(u)=1\), and that (4.25) implies

$$\begin{aligned} \partial _xz_s(u)=\exp \left\{ \int _0^s -\partial _z m_\tau (z_\tau (u)) + \frac{\partial _z V'(z_\tau (u))}{2}+\frac{\partial _{{\bar{z}}}V'(z_\tau (u))\partial _x\bar{z}_\tau (u)}{2\partial _x z_\tau (u)}\mathrm{d}\tau \right\} \asymp 1, \end{aligned}$$
(4.33)

uniformly for \(0\le s\le \sigma \). Therefore, \(\sigma =t\) and the estimates (4.23) and \(|\partial _xz_t(u)| \asymp 1\) are immediate consequences. The estimate \(|\partial _yz_t(u)| \asymp 1\) follows from the same argument. \(\square \)

Finally, we have the following results for the regularity of \(m_t (w)\).

Corollary 4.10

Suppose that the assumptions of Theorem 4.2 hold, and let \(\eta _*\ll t\ll (\log N)^{-1}r\wedge (\log N)^{-2}\). We have,

  1. i)

    For any \(w\in \{E+\mathrm {i}\eta : E\in [E_0-3r/4, E_0+3r/4], 0< \eta \le 3/4\}\), we have that \(z_t^{-1}(w)\subset \{E+\mathrm {i}\eta : E\in [E_0-r, E_0+r], \eta _*\le \eta \le 1\}\), and \(\partial _zm_t(w)={{\mathrm{O}}}(1/t)\).

  2. ii)

    Fix \(u\in \{E+\mathrm {i}\eta : E\in [E_0-r, E_0+r],\eta \in [\eta _*, 1]\}\). If \(z_t(u)\in \{E+\mathrm {i}\eta : E\in [E_0-r/2, E_0+r/2], 0< \eta \ll t\}\), then for \(0\le s\le t\), and any \(w\in {\mathbb C}_+\) such that \(|w-z_s(u)|\le \mathop {\mathrm {Im}}[z_s(u)]/2\), we have \(|\partial _z m_s(w)|={{\mathrm{O}}}(1/t)\).

In both statements, the implicit constants depend on V and \({\mathfrak d}\).

Proof

We first consider the first statement in i). Uniformly for any \(u\in \{E+\mathrm {i}\eta : E\in [E_0-r, E_0+r], \eta _*< \eta \le 1\}\cap \Omega _t\) (with \(\Omega _t\) as in Proposition 2.8), we have by (2.30), (4.10) and (2.25), that there exists a constant C depending on V and \({\mathfrak d}\), such that

$$\begin{aligned} \begin{aligned} \max \left\{ 0, \mathop {\mathrm {Im}}[u]-2tC\mathop {\mathrm {Im}}[m_0(u)]\right\}&\le \mathop {\mathrm {Im}}[z_t(u)]\le e^{Ct}\left( \mathop {\mathrm {Im}}[u]-\frac{1-e^{-Ct}}{C}\mathop {\mathrm {Im}}[m_0(u)]\right) ,\\ \mathop {\mathrm {Re}}[u]-Ct\log N&\le \mathop {\mathrm {Re}}[z_t(u)]\le \mathop {\mathrm {Re}}[u]+Ct\log N. \end{aligned} \end{aligned}$$
(4.34)

By Proposition 2.8, \(z_t\) is surjective from \(\Omega _t\) onto \({\mathbb C}_+\). The first statement in i) follows from the assumptions \(t\gg \eta _*\) and \(r\gg t\log N\). The second statement in i) is then a consequence of (4.23) and the equality \(\partial _zm_t(w)=\partial _zm_t(z_t(z_t^{-1}(w)))\).

For ii), since \(\mathop {\mathrm {Im}}[m_0(u)]={{\mathrm{O}}}(1)\), it follows from (2.30) that \( \mathop {\mathrm {Im}}[u]=t\mathop {\mathrm {Im}}[m_0(u)]+o(t)\). If \(s\le t/2\), then we see that by (2.30) that \(t / C \le \mathop {\mathrm {Im}}[ z_s (u ) ] \le Ct \) for some \(C>0\). Furthermore, by (4.10) and (2.25) we see that \(\mathop {\mathrm {Re}}[u] - C t \log (N) \le \mathop {\mathrm {Re}}[ z_s (u) ] \le \mathop {\mathrm {Re}}[u] + C t \log (N)\). We also observe that \(\mathop {\mathrm {Im}}[w]\ge t/2C\). It follows from the same argument as in i) that \(\{w\in {\mathbb C}_+: |w-z_s(u)|\le \mathop {\mathrm {Im}}[z_s(u)]/2\} \subseteq z_s(\{E+\mathrm {i}\eta : E\in [E_0-r, E_0+r],\eta \in [\eta _*, 1]\}\cap \Omega _s)\). Therefore, by (2.29), uniformly for \(\{w\in {\mathbb C}_+: |w-z_s(u)|\le \mathop {\mathrm {Im}}[z_s(u)]/2\}\), \(\mathop {\mathrm {Im}}[m_s(w)]={{\mathrm{O}}}(\mathop {\mathrm {Im}}[m_0(z_s^{-1}(w))])={{\mathrm{O}}}(1)\), and therefore

$$\begin{aligned} |\partial _z m_s(w)|\le \frac{\mathop {\mathrm {Im}}[m_s(w)]}{\mathop {\mathrm {Im}}[w]}={{\mathrm{O}}}\left( \frac{1}{t}\right) . \end{aligned}$$
(4.35)

If \(s\ge t/2\), from i), uniformly for any \(w\in \{E+\mathrm {i}\eta : E\in [E_0-3r/4, E_0+3r/4], 0< \eta \le 3/4\}\), \(\partial _zm_s(w)={{\mathrm{O}}}(1/s)={{\mathrm{O}}}(1/t)\). Moreover, we have \(\{w\in {\mathbb C}_+: |w-z_s(u)|\le \mathop {\mathrm {Im}}[z_s(u)]/2\} \subseteq \{E+\mathrm {i}\eta : E\in [E_0-3r/4, E_0+3r/4], 0< \eta \le 3/4\}\). The statement follows. \(\square \)

4.2 Proof of Theorem 4.2

Using regularity of \(m_t\) and the local law we infer the following regularity for the empirical Stieltjes transform \(\tilde{m}_t\).

Lemma 4.11

Suppose that the assumptions of Theorem 4.2 hold. Let \(\eta _*\ll t\ll (\log N)^{-1}r\wedge (\log N)^{-2}\). Fix \(u\in \{E+\mathrm {i}\eta : E\in [E_0-r, E_0+r],\eta \in [\eta _*, 1]\}\). If \(z_t(u)\in \{E+\mathrm {i}\eta : E\in [E_0-r/2, E_0+r/2], 0< \eta \ll t\}\), then on the event \(\Omega \) (as defined in the proof of Proposition 3.8), we have the following estimate uniformly for \(0\le s\le t\),

$$\begin{aligned} \partial _z^p \tilde{m}_s(z_s(u))={{\mathrm{O}}}\left( \frac{M}{N\mathop {\mathrm {Im}}[z_s(u)]^{p+1}}+\frac{1}{t\mathop {\mathrm {Im}}[z_s(u)]^{p-1}}\right) . \end{aligned}$$
(4.36)

Proof

The estimate (4.36) is a consequence of the following two statements.

$$\begin{aligned}&\partial _z^p \left( \tilde{m}_s(z_s(u))-m_s(z_s(u))\right) ={{\mathrm{O}}}\left( \frac{M}{N\mathop {\mathrm {Im}}[z_s(u)]^{p+1}}\right) , \end{aligned}$$
(4.37)
$$\begin{aligned}&\partial _z^p m_s(z_s(u))={{\mathrm{O}}}\left( \frac{1}{t\mathop {\mathrm {Im}}[z_s(u)]^{p-1}}\right) . \end{aligned}$$
(4.38)

For (4.37), since both \(\tilde{m}_s\) and \(m_s\) are analytic on the upper half plane, by Cauchy’s integral formula

$$\begin{aligned} \partial _z^p \left( \tilde{m}_s(z_s(u))-m_s(z_s(u))\right) =\frac{p!}{2\pi \mathrm {i}}\oint _{{\mathcal C}} \frac{\tilde{m}_s(w)-m_s(w)}{(w-z_s(u))^{p+1}}\mathrm{d}w , \end{aligned}$$
(4.39)

where \({\mathcal C}\) is a small contour in the upper half plane centered at \(z_s(u)\) with radius \(\mathop {\mathrm {Im}}[z_s(u)]/2\). On the event \(\Omega \), we use (3.3) in Theorem 3.1 to bound the integral by

$$\begin{aligned} \left| \frac{p!}{2\pi \mathrm {i}}\oint _{{\mathcal C}} \frac{\tilde{m}_s(w)-m_s(w)}{(w-z_s(u))^{p+1}}\mathrm{d}w\right|\le & {} \frac{p!}{2\pi }\oint _{{\mathcal C}} \frac{|\tilde{m}_s(w)-m_s(w)|}{|w-z_s(u)|^{p+1}}\mathrm{d}w \nonumber \\= & {} {{\mathrm{O}}}\left( \frac{M}{N\mathop {\mathrm {Im}}[z_s(u)]^{p+1}}\right) . \end{aligned}$$
(4.40)

For (4.38), Cauchy’s integral formula leads to

$$\begin{aligned} \begin{aligned} \left| \partial _z^p m_s(z_s(u))\right| \le \frac{(p-1)!}{2\pi }\oint _{{\mathcal C}} \frac{\left| \partial _z m_s(w)\right| }{|w-z_s(u)|^{p}}\mathrm{d}w={{\mathrm{O}}}\left( \frac{1}{t\mathop {\mathrm {Im}}[z_s(u)]^{p-1}}\right) , \end{aligned} \end{aligned}$$
(4.41)

where we used ii) in Corollary 4.10 which states that \(\left| \partial _z m_s(w)\right| ={{\mathrm{O}}}(1/t)\). \(\square \)

By i) in Corollary 4.10, \(\{E+\mathrm {i}\eta : E\in [E_0-r/2, E_0+r/2], M^2/N\ll \eta \ll t\}\subseteq z_t(\{E+\mathrm {i}\eta : E\in [E_0-r, E_0+r], \eta \in [\eta _*, 1]\}\cap \Omega _t)\). In the following, we fix some \(u\in \{E+\mathrm {i}\eta : E\in [E_0-r, E_0+r], \eta \in [\eta _*, 1]\}\), such that \(z_t(u)\in \{E+\mathrm {i}\eta : E\in [E_0-r/2, E_0+r/2], M^2/N\ll \eta \ll t\}\). By Lemma 4.7, \(z_t\in {{\mathrm{\mathcal {D}}}}_t\), and the local law of Theorem 3.1 holds.

We integrate both sides of (3.22) and obtain the following integral expression for \(\tilde{m}_t(z_t)\):

$$\begin{aligned}&\tilde{m}_t(z_t)-m_t(z_t)=\int _0^t\left( \tilde{m}_s(z_s)-m_s(z_s)\right) \partial _z \left( \tilde{m}_s(z_s)+\frac{V'(z_s)}{2}\right) \mathrm{d}s\nonumber \\&\quad + \frac{1}{\pi }\int _0^t\int _{{\mathbb C}} \partial _{{\bar{w}}} \tilde{g}(z_s,w) (\tilde{m}_s(w)-m_s(w))\mathrm{d}^2 w\mathrm{d}s +\frac{2-\beta }{\beta N^2}\int _0^t\sum _{i=1}^{N}\frac{\mathrm{d}s}{(\lambda _i(s)-z_s)^3}\nonumber \\&\quad -\sqrt{\frac{2}{\beta N^3}}\int _0^t\sum _{i=1}^N \frac{\mathrm{d} B_i(s)}{(\lambda _i(s)-z_s)^2}. \end{aligned}$$
(4.42)

For the proof of the mesoscopic central limit theorem, we will show that the first three terms on the right-hand side of (4.42) are negligible, and that the Gaussian fluctuations come from the last term, i.e. the stochastic integral with respect to Brownian motion. In the following proposition, we compute the quadratic variation of the Brownian integrals.

Proposition 4.12

Suppose that the assumptions of Theorem 4.1 hold. Fix \(u,u'\in \{E+\mathrm {i}\eta : E_0-r\le E\le E_0+r,\eta _*\le \eta \le 1\}\). Let \(z_t\,{:}{=}\,z_t(u)\) and \(z_t'\,{:}{=}\,z_t(u')\). If

$$\begin{aligned} z_t,z'_t\in \{E+\mathrm {i}\eta : E\in [E_0-r/2, E_0+r/2], M^2/N\ll \eta \ll t\}, \end{aligned}$$
(4.43)

and \(\mathop {\mathrm {Im}}[z_t]\ge \mathop {\mathrm {Im}}[z_t']\), then

$$\begin{aligned}&\frac{1}{N^3}\int _0^t\sum _{i=1}^N \frac{\mathrm{d}s}{(\lambda _i(s)-z_s)^4}={{\mathrm{O}}}\left( \frac{M}{N^3\mathop {\mathrm {Im}}[z_t]^3}+\frac{1}{N^2t\mathop {\mathrm {Im}}[z_t]}\right) , \end{aligned}$$
(4.44)
$$\begin{aligned}&\frac{1}{N^3}\int _0^t\sum _{i=1}^N \frac{\mathrm{d}s}{(\lambda _i(s)-z_s)^2(\lambda _i(s)-z_s')^2}={{\mathrm{O}}}\left( \frac{M}{N^3\mathop {\mathrm {Im}}[z_t]^2\mathop {\mathrm {Im}}[z_t']}+\frac{1}{N^2t\mathop {\mathrm {Im}}[z_t]}\right) , \end{aligned}$$
(4.45)
$$\begin{aligned}&\frac{1}{ N^3}\int _0^t\sum _{i=1}^N \frac{\mathrm{d} s}{(\lambda _i(s)-\bar{z}_s)^2(\lambda _i(s)-z_s')^2}\nonumber \\&\quad =- \frac{1}{N^2(\bar{z}_t-z_t')^2} +{{\mathrm{O}}}\left( \frac{M}{N^3\mathop {\mathrm {Im}}[z_t]^2\mathop {\mathrm {Im}}[z_t']}+\frac{1}{N^2t\mathop {\mathrm {Im}}[z_t]}\right) . \end{aligned}$$
(4.46)

Proof

Since \(\mathop {\mathrm {Im}}[m_0(z_0)]=\mathop {\mathrm {Im}}[m_0(u)] \asymp 1\), by (2.30) and (2.29), we have \(\mathop {\mathrm {Im}}[z_s] \asymp \mathop {\mathrm {Im}}[z_t]+(t-s)\) and \(\mathop {\mathrm {Im}}[z_s'] \asymp \mathop {\mathrm {Im}}[z_t']+(t-s)\). Since \(\mathop {\mathrm {Im}}[z_t]\ge \mathop {\mathrm {Im}}[z_t']\), there exists a constant c depending on V and \({\mathfrak d}\), such that uniformly for \(0\le s\le t\), \(\mathop {\mathrm {Im}}[z_s]\ge c\mathop {\mathrm {Im}}[z_s']\).

For (4.44), the left-hand side can be written in terms of the third derivative of the empirical Stieltjes transform \(\tilde{m}_s\) at \(z_s\), and so

$$\begin{aligned} \begin{aligned} \left| \frac{1}{6N^2}\int _0^t\partial ^3_z\tilde{m}_s(z_s)\mathrm{d}s\right| \le&\frac{C}{6N^2}\int _0^t\left( \frac{M}{N\mathop {\mathrm {Im}}[z_s]^4}+\frac{1}{t\mathop {\mathrm {Im}}[z_s]^2}\right) \mathrm{d}s\\ =&{{\mathrm{O}}}\left( \frac{M}{N^3\mathop {\mathrm {Im}}[z_t]^3}+\frac{1}{N^2t\mathop {\mathrm {Im}}[z_t]}\right) , \end{aligned} \end{aligned}$$
(4.47)

where we used Lemma 4.11 and (4.15).
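For completeness, the rewriting used here is the elementary identity obtained by differentiating the empirical Stieltjes transform three times, with the convention \(\tilde{m}_s(z)=\frac{1}{N}\sum _{i=1}^N(\lambda _i(s)-z)^{-1}\) used throughout:

$$\begin{aligned} \partial _z^3 \tilde{m}_s(z)=\frac{6}{N}\sum _{i=1}^{N}\frac{1}{(\lambda _i(s)-z)^4},\quad \text {so that}\quad \frac{1}{N^3}\sum _{i=1}^{N}\frac{1}{(\lambda _i(s)-z_s)^4}=\frac{1}{6N^2}\partial _z^3\tilde{m}_s(z_s). \end{aligned}$$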

We write the left-hand side of (4.45) as a contour integral of \(\tilde{m}_s\):

$$\begin{aligned} \begin{aligned} \frac{1}{N^3}\sum _{i=1}^N \frac{1}{(\lambda _i(s)-z_s)^2(\lambda _i(s)-z_s')^2} =&\frac{1}{2\pi \mathrm {i}N^2}\oint _{\mathcal C}\frac{\tilde{m}_s(w)}{(w-z_s)^2(w-z_s')^2}\mathrm{d}w, \end{aligned} \end{aligned}$$
(4.48)

where if \(\mathop {\mathrm {Im}}[z_s]/3\ge |z_s-z'_s|\), then \(\mathcal C\) is a contour centered at \(z_s\) with radius \(\mathop {\mathrm {Im}}[z_s]/2\). In this case we have \({{\mathrm{dist}}}(\mathcal C, \{z_s, z_s'\})\ge \mathop {\mathrm {Im}}[z_s]/6\). In the case that \(|z_s-z'_s|\ge \mathop {\mathrm {Im}}[z_s]/3\), we let \(\mathcal C=\mathcal C_1\cup \mathcal C_2\) consist of two contours, where \(\mathcal C_1\) is centered at \(z_s\) with radius \(\min \{\mathop {\mathrm {Im}}[z_s'],\mathop {\mathrm {Im}}[z_s]\}/6\), and \(\mathcal C_2\) is centered at \(z_s'\) with radius \(\min \{\mathop {\mathrm {Im}}[z_s'],\mathop {\mathrm {Im}}[z_s]\}/6\). Then in this case we have \({{\mathrm{dist}}}(\mathcal C_1, z_s')\ge \mathop {\mathrm {Im}}[z_s]/6\) and \({{\mathrm{dist}}}(\mathcal C_2, z_s)\ge \mathop {\mathrm {Im}}[z_s]/6\). In the first case, thanks to Lemma 4.11 and ii) in Corollary 4.10, for \(w\in \mathcal C\) we have

$$\begin{aligned} \tilde{m}_s(w)=\tilde{m}_s(z_s)+(w-z_s)\partial _z \tilde{m}_s(z_s)+(w-z_s)^2{{\mathrm{O}}}\left( \frac{M}{N\mathop {\mathrm {Im}}[z_s]^{3}}+\frac{1}{t\mathop {\mathrm {Im}}[z_s]}\right) .\nonumber \\ \end{aligned}$$
(4.49)

Plugging (4.49) into (4.48), we see that the first two terms vanish and

$$\begin{aligned} |(4.48)|&\le \frac{C}{N^2}\int _{\mathcal C}\left( \frac{M}{N\mathop {\mathrm {Im}}[z_s]^{5}}+\frac{1}{t\mathop {\mathrm {Im}}[z_s]^3}\right) \mathrm{d}w \nonumber \\&\quad ={{\mathrm{O}}}\left( \frac{M}{N^3\mathop {\mathrm {Im}}[z_s]^{4}}+\frac{1}{N^2t\mathop {\mathrm {Im}}[z_s]^2}\right) , \end{aligned}$$
(4.50)

where we used that \(|\mathcal C|\asymp \mathop {\mathrm {Im}}[z_s]\). In the second case, (4.49) holds on \(\mathcal C_1\). Similarly, for \(w\in \mathcal C_2\) we have

$$\begin{aligned} \tilde{m}_s(w)=\tilde{m}_s(z_s')+(w-z_s')\partial _z \tilde{m}_s(z_s')+(w-z_s')^2{{\mathrm{O}}}\left( \frac{M}{N\mathop {\mathrm {Im}}[z_s']^{3}}+\frac{1}{t\mathop {\mathrm {Im}}[z_s']}\right) .\nonumber \\ \end{aligned}$$
(4.51)

By plugging (4.49) and (4.51) into (4.48), we can bound (4.48) by

$$\begin{aligned} \begin{aligned}&\frac{C}{N^2}\left( \int _{\mathcal C_1}\left( \frac{M}{N\mathop {\mathrm {Im}}[z_s]^{5}}+\frac{1}{t\mathop {\mathrm {Im}}[z_s]^3}\right) \mathrm{d}w\right. \\&\left. \qquad +\int _{\mathcal C_2}\left( \frac{M}{N\mathop {\mathrm {Im}}[z_s]^{2}\mathop {\mathrm {Im}}[z_s']^{3}}+\frac{1}{t\mathop {\mathrm {Im}}[z_s]^2\mathop {\mathrm {Im}}[z_s']}\right) \mathrm{d}w\right) \\&\quad ={{\mathrm{O}}}\left( \frac{M}{N^3\mathop {\mathrm {Im}}[z_s]^{2}\mathop {\mathrm {Im}}[z_s']^2}+\frac{1}{N^2t\mathop {\mathrm {Im}}[z_s]^2}\right) , \end{aligned} \end{aligned}$$
(4.52)

where we used \(\mathop {\mathrm {Im}}[z_s]\ge c\mathop {\mathrm {Im}}[z_s']\) and \(|\mathcal C_1|, |\mathcal C_2|={{\mathrm{O}}}(\mathop {\mathrm {Im}}[z_s'])\). The estimate of the left-hand side of (4.45) follows by combining (4.50) and (4.52),

$$\begin{aligned} \begin{aligned} |(4.45)|\le&C\int _0^t \left( \frac{M}{N^3\mathop {\mathrm {Im}}[z_s]^{2}\mathop {\mathrm {Im}}[z_s']^2}+\frac{1}{N^2t\mathop {\mathrm {Im}}[z_s]^2}\right) \mathrm{d}s\\ =&{{\mathrm{O}}}\left( \frac{M}{N^3\mathop {\mathrm {Im}}[z_t]^2}\int _0^t \frac{\mathrm{d}s}{\mathop {\mathrm {Im}}[z_s']^2}+\frac{1}{N^2t\mathop {\mathrm {Im}}[z_t]}\right) \\ =&{{\mathrm{O}}}\left( \frac{M}{N^3\mathop {\mathrm {Im}}[z_t]^2\mathop {\mathrm {Im}}[z_t']}+\frac{1}{N^2t\mathop {\mathrm {Im}}[z_t]}\right) , \end{aligned} \end{aligned}$$
(4.53)

where we used that \(\mathop {\mathrm {Im}}[z_s]\ge c\mathop {\mathrm {Im}}[z_t]\) in the second line, and (4.15) for the last line.

Finally, for (4.46),

$$\begin{aligned} \frac{1}{N}\sum _{i=1}^N \frac{1}{(\lambda _i(s)-\bar{z}_s)^2(\lambda _i(s)-z_s')^2}&=\frac{2(\overline{-\tilde{m}_s(z_s)}+\tilde{m}_s(z_s'))}{({\bar{z}}_s-z_s')^3}\nonumber \\&\quad +\frac{\overline{\partial _{z} \tilde{m}_s( z_s)} +\partial _z \tilde{m}_s(z_s')}{({\bar{z}}_s-z_s')^2}. \end{aligned}$$
(4.54)
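The identity (4.54) is a consequence of the partial-fraction decomposition, with \(a={\bar{z}}_s\) and \(b=z_s'\),

$$\begin{aligned} \frac{1}{(x-a)^2(x-b)^2}=\frac{2}{(a-b)^3}\left( \frac{1}{x-b}-\frac{1}{x-a}\right) +\frac{1}{(a-b)^2}\left( \frac{1}{(x-a)^2}+\frac{1}{(x-b)^2}\right) , \end{aligned}$$

averaged over the eigenvalues, together with \(\frac{1}{N}\sum _i(\lambda _i(s)-{\bar{z}}_s)^{-1}=\overline{\tilde{m}_s(z_s)}\) and \(\frac{1}{N}\sum _i(\lambda _i(s)-{\bar{z}}_s)^{-2}=\overline{\partial _z\tilde{m}_s(z_s)}\), which hold since the \(\lambda _i(s)\) are real.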

Note that \(|{\bar{z}}_s-z_s'|\ge \mathop {\mathrm {Im}}[z_s]+\mathop {\mathrm {Im}}[z_s']\asymp \mathop {\mathrm {Im}}[z_s]\). For the second term in (4.54), we have by (4.36),

$$\begin{aligned} \begin{aligned} \left| \frac{1}{N^2}\int _0^t\frac{\overline{\partial _{z} \tilde{m}_s( z_s)}+\partial _z \tilde{m}_s(z_s')}{({\bar{z}}_s-z_s')^2}\mathrm{d}s\right| \le&\frac{C}{N^2}\int _0^t\frac{1}{\mathop {\mathrm {Im}}[z_s]^2} \left( \frac{M}{N\mathop {\mathrm {Im}}[z_s']^2}+\frac{1}{t}\right) \mathrm{d}s\\ =&{{\mathrm{O}}}\left( \frac{M}{N^3\mathop {\mathrm {Im}}[z_t]^2\mathop {\mathrm {Im}}[z_t']}+\frac{1}{N^2t\mathop {\mathrm {Im}}[z_t]}\right) . \end{aligned} \end{aligned}$$
(4.55)

For the first term in (4.54), we recall the definition of the vector flow \(z_s(u)\) as in (2.25). Since \(\Vert V'(z)\Vert _{C^1}={{\mathrm{O}}}(1)\), we have

$$\begin{aligned} \overline{-\tilde{m}_s(z_s)}+\tilde{m}_s(z_s')=\partial _s ({\bar{z}}_s-z_s') +{{\mathrm{O}}}(|{\bar{z}}_s-z_s'|). \end{aligned}$$
(4.56)

Therefore,

$$\begin{aligned} \begin{aligned}&\frac{2}{N^2}\int _{0}^{t}\frac{(\overline{-\tilde{m}_s(z_s)}+\tilde{m}_s(z_s'))}{({\bar{z}}_s-z_s')^3}\mathrm{d}s\\&\quad = \frac{2}{N^2}\int _{0}^{t}\frac{\partial _s ({\bar{z}}_s-z_s') }{(\bar{z}_s-z_s')^3}\mathrm{d}s +{{\mathrm{O}}}\left( \frac{1}{N^2}\int _0^t\frac{\mathrm{d}s}{\mathop {\mathrm {Im}}[z_s]^2}\right) \\&\quad =-\frac{1}{N^2({\bar{z}}_t-z_t')^2}+\frac{1}{N^2({\bar{u}}-u')^2}+{{\mathrm{O}}}\left( \frac{1}{N^2\mathop {\mathrm {Im}}[z_t]}\right) \\&\quad =-\frac{1}{N^2(\bar{z}_t-z_t')^2}+{{\mathrm{O}}}\left( \frac{1}{N^2t^2}+\frac{1}{N^2\mathop {\mathrm {Im}}[z_t]}\right) , \end{aligned} \end{aligned}$$
(4.57)

where we used \(|{\bar{u}}-u'|\ge \mathop {\mathrm {Im}}[u]+\mathop {\mathrm {Im}}[u']\ge ct\). This finishes the proof of Proposition 4.12. \(\square \)

Proof of Theorem 4.2

Let the event \(\Omega \) be as above. Thanks to the estimates in Theorem 3.1 and Lemma 4.11, which hold on \(\Omega \), we can bound the first term on the right-hand side of (4.42) by

$$\begin{aligned} \begin{aligned}&\left| \int _0^t\left( \tilde{m}_s(z_s)-m_s(z_s)\right) \partial _z \left( \tilde{m}_s(z_s)+\frac{V'(z_s)}{2}\right) \mathrm{d}s\right| \\&\quad \le C\int _0^t\frac{M}{N\mathop {\mathrm {Im}}[z_s]}\left( \frac{M}{N\mathop {\mathrm {Im}}[z_s]^2}+\frac{1}{t}\right) \mathrm{d}s\\&\quad = {{\mathrm{O}}}\left( \frac{M^2}{(N\mathop {\mathrm {Im}}[z_t])^2}+\frac{M\log N}{Nt}\right) , \end{aligned} \end{aligned}$$
(4.58)

where we used (4.15).
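The time integrals here, and in the rest of the proof, are estimated using the comparison \(\mathop {\mathrm {Im}}[z_s]\asymp \mathop {\mathrm {Im}}[z_t]+(t-s)\), as in the proof of Proposition 4.12; schematically,

$$\begin{aligned} \int _0^t\frac{\mathrm{d}s}{\mathop {\mathrm {Im}}[z_s]^{k}}\asymp \int _0^t\frac{\mathrm{d}s}{(\mathop {\mathrm {Im}}[z_t]+(t-s))^{k}}={{\mathrm{O}}}\left( \frac{1}{\mathop {\mathrm {Im}}[z_t]^{k-1}}\right) ,\quad k\ge 2, \end{aligned}$$

while for \(k=1\) the integral is \({{\mathrm{O}}}(\log (t/\mathop {\mathrm {Im}}[z_t]))={{\mathrm{O}}}(\log N)\), which is the source of the \(\log N\) factors.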

For the second term on the right-hand side of (4.42), by Proposition 3.9 we have on the event \(\Omega \)

$$\begin{aligned} \left| \frac{1}{\pi }\int _0^t\int _{{\mathbb C}} \partial _{{\bar{w}}} \tilde{g}(z_s,w) (\tilde{m}_s(w)-m_s(w))\mathrm{d}^2 w\mathrm{d}s\right| \le \frac{CtM(\log N)^{2}}{N}. \end{aligned}$$
(4.59)

We can rewrite the third term on the right-hand side of (4.42) as

$$\begin{aligned} \frac{2-\beta }{\beta N^2}\int _0^t\sum _{i=1}^{N}\frac{\mathrm{d}s}{(\lambda _i(s)-z_s)^3} =\frac{2-\beta }{2\beta N}\int _0^t\partial _z^2 \tilde{m}_s(z_s)\mathrm{d}s. \end{aligned}$$
(4.60)

Thanks to Lemma 4.11 and (4.15), we have

$$\begin{aligned} \begin{aligned} \left| \int _0^t\partial _z^2 \tilde{m}_s(z_s)\mathrm{d}s\right|&\le C\int _0^t\left( \frac{M}{N\mathop {\mathrm {Im}}[z_s]^3} + \frac{1}{t\mathop {\mathrm {Im}}[z_s]}\right) \mathrm{d}s={{\mathrm{O}}}\left( \frac{M}{N (\mathop {\mathrm {Im}}[z_t])^2}+\frac{\log N}{t}\right) . \end{aligned} \end{aligned}$$
(4.61)

It follows that

$$\begin{aligned} \left| \frac{2-\beta }{\beta N^2}\int _0^t\sum _{i=1}^{N}\frac{\mathrm{d}s}{(\lambda _i(s)-z_s)^3}\right| ={{\mathrm{O}}}\left( \frac{M}{(N\mathop {\mathrm {Im}}[z_t])^2}+\frac{\log N}{Nt}\right) . \end{aligned}$$
(4.62)

By combining the above estimates we see that on the event \(\Omega \), we have

$$\begin{aligned} \tilde{m}_t(z_t)-m_t(z_t)={{\mathrm{O}}}\left( \frac{M^2}{(N\mathop {\mathrm {Im}}[z_t])^2}+\frac{M\log N}{Nt}\right) +\sqrt{\frac{2}{\beta N^3}}\int _0^t\sum _{i=1}^N \frac{\mathrm{d} B_i(s)}{(\lambda _i(s)-z_s)^2}.\nonumber \\ \end{aligned}$$
(4.63)

In the following we show that the Brownian integrals are asymptotically jointly Gaussian. We fix some \(u_j\in \{E+\mathrm {i}\eta : E_0-r\le E\le E_0+r,\eta _*\le \eta \le 1\}\), \(j=1,2,\ldots , k\) such that

$$\begin{aligned} z_t(u_j)\in \{E+\mathrm {i}\eta : E\in [E_0-r/2, E_0+r/2], M^2/N\ll \eta \ll t\},\quad j=1,2,\ldots , k.\nonumber \\ \end{aligned}$$
(4.64)

For \(1\le j\le k\), let

$$\begin{aligned} X_j(t)= \mathop {\mathrm {Im}}[z_t(u_j)]\sqrt{\frac{2}{\beta N}}\int _0^t\sum _{i=1}^N \frac{\mathrm{d} B_i(s)}{(\lambda _i(s)-z_s(u_j))^2},\quad j=1,2,\ldots , k.\quad \end{aligned}$$
(4.65)

We compute their joint characteristic function,

$$\begin{aligned} \mathbb {E}\left[ \exp \left\{ \mathrm {i}\sum _{j=1}^k a_j\mathop {\mathrm {Re}}[X_j(t)]+b_j\mathop {\mathrm {Im}}[X_j(t)]\right\} \right] . \end{aligned}$$
(4.66)

Since \(\sum _{j=1}^k \left( a_j\mathop {\mathrm {Re}}[X_j(t)]+b_j\mathop {\mathrm {Im}}[X_j(t)]\right) \) is a martingale, the following is also a martingale

$$\begin{aligned} \exp \left\{ \mathrm {i}\sum _{j=1}^k \left( a_j\mathop {\mathrm {Re}}[X_j(t)]+b_j\mathop {\mathrm {Im}}[X_j(t)]\right) +\frac{1}{2}\left\langle \sum _{j=1}^k a_j\mathop {\mathrm {Re}}[X_j(t)]+b_j\mathop {\mathrm {Im}}[X_j(t)]\right\rangle \right\} .\nonumber \\ \end{aligned}$$
(4.67)

In particular, its expectation is one. By Proposition 4.12, on the event \(\Omega \) (as defined in the proof of Proposition 3.8), the quadratic variation is given by

$$\begin{aligned} \begin{aligned}&\frac{1}{2}\left\langle \sum _{j=1}^k a_j\mathop {\mathrm {Re}}[X_j(t)]+b_j\mathop {\mathrm {Im}}[X_j(t)]\right\rangle \\&\quad =-\sum _{1\le j,\ell \le k}\mathop {\mathrm {Re}}\left[ \frac{(a_j-\mathrm {i}b_j)(a_\ell +\mathrm {i}b_\ell )\mathop {\mathrm {Im}}[z_t(u_j)]\mathop {\mathrm {Im}}[z_t(u_\ell )]}{2\beta (z_t(u_j)-\overline{z_t(u_\ell )})^2}\right] \\&\qquad +{{\mathrm{O}}}\left( \frac{M}{N\min _j\{\mathop {\mathrm {Im}}[z_t(u_j)]\}}+\frac{\max _j\{\mathop {\mathrm {Im}}[z_t(u_j)]\}}{t}\right) . \end{aligned}\end{aligned}$$
(4.68)

Therefore,

$$\begin{aligned} \begin{aligned} (4.66)&=\exp \left\{ \sum _{1\le j,\ell \le k}\mathop {\mathrm {Re}}\left[ \frac{(a_j-\mathrm {i}b_j)(a_\ell +\mathrm {i}b_\ell )\mathop {\mathrm {Im}}[z_t(u_j)]\mathop {\mathrm {Im}}[z_t(u_\ell )]}{2\beta (z_t(u_j)-\overline{z_t(u_\ell )})^2}\right] \right\} \\&\quad +{{\mathrm{O}}}\left( \frac{M}{N\min _j\{\mathop {\mathrm {Im}}[z_t(u_j)]\}}+\frac{\max _j\{\mathop {\mathrm {Im}}[z_t(u_j)]\}}{t}\right) . \end{aligned}\end{aligned}$$
(4.69)

By (4.63),

$$\begin{aligned} \Gamma _t(z_t(u_j))=X_j(t)+{{\mathrm{O}}}\left( \frac{M^2}{N\mathop {\mathrm {Im}}[z_t(u_j)]}+\frac{M\log N\mathop {\mathrm {Im}}[z_t(u_j)]}{t}\right) , \end{aligned}$$
(4.70)

and so (4.2) follows. This finishes the proof of Theorem 4.2. \(\square \)

Proof of Corollary 4.4

The corollary follows from Theorem 4.2 and the rigidity estimate in Theorem 3.1 by the same argument as in [49].

We can approximate the test function \(\psi _{\eta ,E}(x)\) by its convolution with a Cauchy distribution on the scale \(\varepsilon \eta \): as \(\varepsilon \) goes to 0,

$$\begin{aligned} \psi _{\eta ,E}^{(\varepsilon )}\,{:}{=}\,\psi _{\eta ,E}* \frac{1}{\pi }\frac{(\varepsilon \eta )}{x^2+(\varepsilon \eta )^2}\rightarrow \psi _{\eta ,E},\quad \text {in } H^s. \end{aligned}$$
(4.71)
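The convolution in (4.71) is smoothing by the Poisson kernel at height \(\varepsilon \eta \); in particular, linear statistics of \(\psi _{\eta ,E}^{(\varepsilon )}\) can be expressed through the Stieltjes transform, with the convention \(\tilde{m}_t(z)=\frac{1}{N}\sum _i(\lambda _i(t)-z)^{-1}\),

$$\begin{aligned} \sum _{i=1}^N\psi _{\eta ,E}^{(\varepsilon )}(\lambda _i(t))=\frac{N}{\pi }\int _{{\mathbb R}}\psi _{\eta ,E}(y)\mathop {\mathrm {Im}}[\tilde{m}_t(y+\mathrm {i}\varepsilon \eta )]\mathrm{d}y, \end{aligned}$$

which is why the central limit theorem for \(\tilde{m}_t\), Theorem 4.2, yields the asymptotic Gaussianity (4.73).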

We let

$$\begin{aligned} \mathcal L(\psi _{\eta ,E}^{(\varepsilon )})\,{:}{=}\,\sum _{i=1}^N \psi _{\eta ,E}^{(\varepsilon )}(\lambda _i(t))-N\int _{{\mathbb R}} \psi ^{\varepsilon }_{\eta ,E}(x) \mathrm{d}\mu _t(x). \end{aligned}$$
(4.72)

Then thanks to Theorem 4.2, \(\mathcal L(\psi _{\eta ,E}^{(\varepsilon )})\) is asymptotically Gaussian,

$$\begin{aligned} \mathbb {E}\left[ e^{\mathrm {i}\xi \mathcal L(\psi _{\eta ,E}^{(\varepsilon )})}\right] =\exp \left\{ -\frac{\xi ^2}{4\beta \pi ^2}\mathop {\mathrm {Re}}\int _{\mathbb R}\int _{\mathbb R}\left( \frac{\psi (x)-\psi (y)}{x-y+2\mathrm {i}\varepsilon }\right) ^2\mathrm{d}x\mathrm{d}y\right\} +{{\mathrm{o}}}(1). \end{aligned}$$
(4.73)

We can approximate the characteristic function of \(\mathcal L(\psi _{\eta ,E})\) by that of \(\mathcal L(\psi _{\eta ,E}^{(\varepsilon )})\),

$$\begin{aligned} \left| \mathbb {E}\left[ e^{\mathrm {i}\xi \mathcal L(\psi _{\eta ,E})}\right] -\mathbb {E}\left[ e^{\mathrm {i}\xi \mathcal L(\psi _{\eta ,E}^{(\varepsilon )})}\right] \right| \le |\xi |\mathbb {E}\left[ \left( \mathcal L(\psi _{\eta ,E})-\mathcal L(\psi _{\eta ,E}^{(\varepsilon )})\right) ^2\right] ^{1/2}. \end{aligned}$$
(4.74)

By the Littlewood–Paley type decomposition argument developed in [49, Section 3], the right-hand side of (4.74) can be bounded by the variance of the Stieltjes transform. By the same argument as in [49, Proposition 4.1], the rigidity estimate in Theorem 3.1 yields the following bound on the variance of the Stieltjes transform:

$$\begin{aligned} \mathbb {E}\left[ \left| {\tilde{m}}_t(z)-m_t(z)\right| ^2\right] \le \frac{C}{N^2\eta ^{2+\delta }}, \end{aligned}$$
(4.75)

uniformly for any \(z\in \{E+\mathrm {i}\eta : E\in [E_0-r/2, E_0+r/2], 0< \eta \le 1\}\), where \(\delta >0\) can be arbitrarily small. Therefore, [49, Theorem 5] implies that

$$\begin{aligned} \mathbb {E}\left[ \left( \mathcal L(\psi _{\eta ,E})-\mathcal L(\psi _{\eta ,E}^{(\varepsilon )})\right) ^2\right] ^{1/2}\le C(\eta )\Vert \psi _{\eta ,E}-\psi ^{(\varepsilon )}_{\eta ,E}\Vert _{H^s}, \end{aligned}$$
(4.76)

provided that \(2+\delta \le 2s\).

It follows from combining (4.73), (4.74) and (4.76) that

$$\begin{aligned} \mathbb {E}\left[ e^{\mathrm {i}\xi \mathcal L(\psi _{\eta ,E})}\right] =\exp \left\{ -\frac{\xi ^2}{4\beta \pi ^2}\int _{\mathbb R}\int _{\mathbb R}\left( \frac{\psi (x)-\psi (y)}{x-y}\right) ^2\mathrm{d}x\mathrm{d}y\right\} +{{\mathrm{o}}}(1), \end{aligned}$$
(4.77)

and the claim (4.5) follows. \(\square \)