1 Introduction and main results

Let us consider a Markov process \(X={(X_t)}_{t\ge 0}\) with state space S and invariant law \(\mu \) for which

$$\begin{aligned} \lim _{t\rightarrow \infty }\mathrm {dist}(\mathrm {Law}(X_t)\mid \mu )=0 \end{aligned}$$

where \(\mathrm {dist}(\cdot \mid \cdot )\) is a distance or divergence on the probability measures on S. Suppose now that \(X=X^n\) depends on a dimension, size, or complexity parameter n, and let us set \(S=S^n\), \(\mu =\mu ^n\), and \(X_0=x^n_0\in S^n\). For example, \(X^n\) can be a random walk on the symmetric group of permutations of \(\{1,\ldots ,n\}\), Brownian motion on the group of \(n\times n\) unitary matrices, Brownian motion on the n-dimensional sphere, etc. In many such examples, it has been proved that, when n is large enough, the supremum over some set of initial conditions \(x^n_0\) of the quantity \(\mathrm {dist}(\mathrm {Law}(X_t^n)\mid \mu ^n)\) collapses abruptly to 0 when t passes a critical value \(c=c_n\) which may depend on n. This is often referred to as a cutoff phenomenon. More precisely, if \(\mathrm {dist}\) ranges from 0 to \(\max \), then, for some subset \(S^n_0 \subset S^n\) of initial conditions, some critical value \(c=c_n\), and for all \(\varepsilon \in (0,1)\),

$$\begin{aligned} \lim _{n\rightarrow \infty } \sup _{x^n_0 \in S^n_0} \mathrm {dist}(\mathrm {Law}(X^n_{t_n})\mid \mu ^n) ={\left\{ \begin{array}{ll} \max &{} \text {if }t_n=(1-\varepsilon )c_n\\ 0 &{} \text {if }t_n=(1+\varepsilon )c_n \end{array}\right. }. \end{aligned}$$

It is standard to introduce, for an arbitrarily small threshold \(\eta >0\), the quantity \(\inf \{t\ge 0:\sup _{x_0\in S_0^n}\mathrm {dist}(\mathrm {Law}(X^n_t)\mid \mu ^n)\le \eta \}\), known in the literature as the mixing time. Of course, such a definition makes full sense as soon as \(t\mapsto {\sup _{x_0\in S_0^n}}\mathrm {dist}(\mathrm {Law}(X^n_t)\mid \mu ^n)\) is non-increasing.
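In computational terms, the mixing time is simply the first crossing of the threshold \(\eta \) by the distance-to-equilibrium curve. Here is a minimal Python sketch; the grid and the horizon are ad hoc illustration choices, not part of the definition:

```python
import numpy as np

def mixing_time(dist_curve, eta, t_max=50.0, num=5001):
    """Approximate inf{t >= 0 : dist_curve(t) <= eta} for a
    non-increasing distance-to-equilibrium curve, on a uniform grid."""
    for t in np.linspace(0.0, t_max, num):
        if dist_curve(t) <= eta:
            return t
    return np.inf

# Example: exponential decay with rate 1 gives a mixing time ~ log(1/eta).
print(mixing_time(lambda t: np.exp(-t), eta=0.01))  # ~ 4.6
```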

When \(S^n\) is finite, it is customary to take \(S^n_0 = S^n\). When \(S^n\) is infinite, it may happen that the supremum over the whole set \(S^n\) of the distance to equilibrium remains equal to \(\max \) at all times, in which case one has to consider proper subsets of initial conditions. For some processes, it is possible to restrict \(S^n_0\) to a single state, in which case one obtains a very precise description of the convergence to equilibrium starting from this initial condition. Note that the constraint on the initial condition can be made compatible with a limiting dynamics, for instance a mean-field limit when the process describes an exchangeable interacting particle system.

The cutoff phenomenon was originally put forward by Aldous and Diaconis for random walks on finite sets, see for instance [1, 26, 28, 52] and references therein. The analysis of the cutoff phenomenon is the subject of intense activity, which is still seeking a complete theory: let us mention that, for the total variation distance, Peres proposed the so-called product condition (the mixing time must be much larger than the inverse of the spectral gap) as a necessary and sufficient condition for a cutoff phenomenon to hold, but counter-examples were exhibited [52, Sec. 18.3] and the product condition is only necessary.

The study of the cutoff phenomenon for Markov diffusion processes goes back at least to the works of Saloff-Coste [62, 64], notably in relation with Nash–Sobolev type functional inequalities, heat kernel analysis, and Diaconis–Wilson probabilistic techniques. We also refer to the more recent work [55] for the case of diffusion processes on compact groups and symmetric spaces, in relation with group invariance and representation theory, a point of view inspired by the early works of Diaconis on Markov chains and of Saloff-Coste on diffusion processes. Even though most of the available results in the literature on the cutoff phenomenon concern compact state spaces, there are some notable works devoted to non-compact spaces, such as [8,9,10,11, 19, 47].

Our contribution is an exploration of the cutoff phenomenon for the Dyson–Ornstein–Uhlenbeck diffusion process, for which the state space is \({\mathbb {R}}^n\). This process is an interacting particle system. When the interaction is turned off, we recover the Ornstein–Uhlenbeck process, a special case that has been considered previously in the literature but for which we also provide new results.

1.1 Distances

For \(\mathrm {dist}\), we use several standard distances or divergences between probability measures: total variation (denoted TV), Hellinger, relative entropy (denoted Kullback), relative variance (denoted \(\chi ^2\)), Wasserstein of order 2, and Fisher information, surveyed in Appendix A. We take the following convention for probability measures \(\mu \) and \(\nu \) on the same space:

$$\begin{aligned} \mathrm {dist}(\mu \mid \nu ) ={\left\{ \begin{array}{ll} \left\| \mu -\nu \right\| _{\mathrm {TV}} &{}\text {when }\mathrm {dist}=\mathrm {TV}\\ \mathrm {Hellinger}(\mu ,\nu ) &{}\text {when }\mathrm {dist}=\mathrm {Hellinger}\\ \mathrm {Kullback}(\mu \mid \nu ) &{}\text {when }\mathrm {dist}=\mathrm {Kullback}\\ \chi ^2(\mu \mid \nu ) &{}\text {when }\mathrm {dist}=\chi ^2\\ \mathrm {Wasserstein}(\mu ,\nu ) &{}\text {when }\mathrm {dist}=\mathrm {Wasserstein}\\ \mathrm {Fisher}(\mu \mid \nu ) &{}\text {when }\mathrm {dist}=\mathrm {Fisher} \end{array}\right. }, \end{aligned}$$
(1.1)

see Appendix A for precise definitions. The maximal value \(\max \) taken by \(\mathrm {dist}\) is given by

$$\begin{aligned} \max = {\left\{ \begin{array}{ll} 1 &{}\text { if } \mathrm {dist} \in \{ \mathrm {TV}, \mathrm {Hellinger}\},\\ +\infty &{}\text { if } \mathrm {dist} \in \{\mathrm {Kullback}, \chi ^2, \mathrm {Wasserstein}, \mathrm {Fisher}\}. \end{array}\right. } \end{aligned}$$
(1.2)

1.2 The Dyson–Ornstein–Uhlenbeck (DOU) process and preview of main results

The DOU process is the solution \(X^n={(X^n_t)}_{t\ge 0}\) on \({\mathbb {R}}^n\) of the stochastic differential equation

$$\begin{aligned} X^n_0= & {} x^n_0\in {\mathbb {R}}^n,\nonumber \\ \mathrm {d}X_t^{n,i}= & {} \sqrt{\frac{2}{n}}\mathrm {d}B^i_t -V'(X_t^{n,i})\mathrm {d}t+\frac{\beta }{n}\sum _{j\ne i}\frac{\mathrm {d}t}{X_t^{n,i}-X_t^{n,j}},\quad 1\le i\le n, \end{aligned}$$
(1.3)

where \({(B_t)}_{t\ge 0}\) is a standard n-dimensional Brownian motion (BM), and where

  • \(V(x)=\frac{x^2}{2}\) is a “confinement potential” acting through the drift \(-V'(x)=-x\);

  • \(\beta \ge 0\) is a parameter tuning the interaction strength.

The notation \(X^{n,i}_t\) stands for the i-th coordinate of the vector \(X^n_t\). The process \(X^n\) can be thought of as an interacting particle system of n one-dimensional Brownian particles \(X^{n,1},\ldots ,X^{n,n}\), subject to confinement and to singular pairwise repulsion when \(\beta >0\) (respectively the first and second terms in the drift). We take an inverse temperature of order n in (1.3) in order to obtain a mean-field limit without time-changing the process, see Sect. 2.5. The spectral gap is 1 for all \(n\ge 1\), see Sect. 2.6. We refer to Sect. 2.9 for other parametrizations or choices of inverse temperature.
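For concreteness, here is a minimal Euler–Maruyama sketch of (1.3) in Python. It is an illustration only: the explicit scheme, the step size, and the absence of collision handling are ad hoc choices, reasonable in practice for \(\beta \ge 1\), small time steps, and ordered initial conditions; it is not the construction of the solution used below.

```python
import numpy as np

def simulate_dou(x0, beta, T, dt=1e-4, seed=0):
    """Euler-Maruyama sketch for the DOU SDE (1.3) with V(x) = x^2/2."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    n = x.size
    for _ in range(int(T / dt)):
        diff = x[:, None] - x[None, :]          # diff[i, j] = x_i - x_j
        np.fill_diagonal(diff, np.inf)          # drop the i = j term
        repulsion = (beta / n) * np.sum(1.0 / diff, axis=1)
        drift = -x + repulsion                  # -V'(x_i) plus pairwise repulsion
        x = x + drift * dt + np.sqrt(2.0 * dt / n) * rng.standard_normal(n)
    return x

# Example: n = 3 particles started as in Fig. 1 below.
print(simulate_dou([-10.0, 0.0, 10.0], beta=2.0, T=5.0))
```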

In the special cases \(\beta \in \{0,1,2\}\), the cutoff phenomenon for the DOU process can be established by using Gaussian analysis and stochastic calculus, see Sects. 1.4 and 1.5. For \(\beta = 0\), the process reduces to the Ornstein–Uhlenbeck process (OU) and its behavior serves as a benchmark for the interaction case \(\beta \ne 0\), while when \(\beta \in \{1,2\}\), the approach involves a lift to unitary invariant ensembles of random matrix theory. For a general \(\beta \ge 1\), our main results regarding the cutoff phenomenon for the DOU process are given in Sects. 1.6 and 1.7. We are able, in particular, to prove the following: for all \(\mathrm {dist}\in \{\mathrm {TV}, \mathrm {Hellinger}, \mathrm {Wasserstein}\}\), \(a>0\), \(\varepsilon \in (0,1)\), we have

$$\begin{aligned} \lim _{n\rightarrow \infty } \sup _{x^n_0 \in [-a,a]^n} \mathrm {dist}(\mathrm {Law}(X^n_{t_n})\mid P_n^\beta ) ={\left\{ \begin{array}{ll} \max &{} \text {if }t_n=(1-\varepsilon )c_n\\ 0 &{} \text {if }t_n=(1+\varepsilon )c_n \end{array}\right. }, \end{aligned}$$

where \(P_n^\beta \) is the invariant law of the process, and where

$$\begin{aligned} c_n := {\left\{ \begin{array}{ll} \log (\sqrt{n}a) &{}\text { if }\mathrm {dist} = \mathrm {Wasserstein}\\ \log (na) &{}\text { if }\mathrm {dist} \in \{\mathrm {TV}, \mathrm {Hellinger}\} \end{array}\right. }. \end{aligned}$$

This result is stated in a slightly more general form in Corollary 1.8. Our proof relies crucially on the exceptional exact solvability of the dynamics, notably the fact that we know explicitly the optimal long-time behavior in entropy and in coupling distance, as well as the eigenfunction associated to the spectral gap, which turns out to be linear and optimal. This comes from the special choice of V as well as from the special properties of the Coulomb interaction. We stress that such an exact solvability is no longer available for a general strongly convex V, even for instance in the simple example \(V(x)=\frac{x^2}{2}+x^4\), or for general linear forces. Nevertheless, and as usual, two other special classical choices of V could be explored, related to Laguerre and Jacobi weights, see Sect. 2.8.

1.3 Analysis of the Dyson–Ornstein–Uhlenbeck process

The process \(X^n\) was essentially discovered by Dyson in [33], in the case \(\beta \in \{1,2,4\}\), because it describes the dynamics of the eigenvalues of \(n\times n\) symmetric/Hermitian/symplectic random matrices with independent Ornstein–Uhlenbeck entries, see Lemma 5.1 and Lemma 5.2 below for the cases \(\beta =1\) and \(\beta =2\) respectively.

  • Case \(\beta =0\) (interaction turned off). The particles become n independent one-dimensional Ornstein–Uhlenbeck processes, and the DOU process \(X^n\) becomes exactly the n-dimensional Ornstein–Uhlenbeck process \(Z^n\) solving (1.8). The process lives in \({\mathbb {R}}^n\). The particles collide but since they do not interact, this does not raise any issue.

  • Case \(0<\beta <1\). Then, with positive probability, the particles collide, producing a blow-up of the drift, see for instance [22, 25] for a discussion. Nevertheless, it is possible to define the process for all times, for instance by adding a local time term to the stochastic differential equation, see [25] and references therein. It is natural to expect that the cutoff phenomenon holds as in the case \(\beta \not \in (0,1)\), but for simplicity we do not consider this case here.

  • Case \(\beta \ge 1\). If we order the coordinates by defining the convex domain

    $$\begin{aligned} D_n=\{x\in {\mathbb {R}}^n:x_1<\ldots <x_n\}, \end{aligned}$$

    and if \(x^n_0\in D_n\), then Eq. (1.3) admits a unique strong solution that never exits \(D_n\); in other words, the particles never collide and the order of the initial particles is preserved at all times, see [60]. Moreover if

    $$\begin{aligned} {\overline{D}}_n=\{x\in {\mathbb {R}}^n:x_1\le \ldots \le x_n\} \end{aligned}$$

    then it is possible to start the process from the boundary \({\overline{D}}_n\setminus D_n\), in particular from \(x^n_0\) such that \(x^{n,1}_0=\ldots =x^{n,n}_0\), and despite the singularity of the drift, it can be shown that, with probability one, \(X^n_t\in D_n\) for all \(t>0\). We refer to [2, Th. 4.3.2] for a proof in the Dyson Brownian motion case that can be adapted mutatis mutandis. In the sequel, we only consider the cases \(\beta =0\) with \(x_0^n\in {\mathbb {R}}^n\) and \(\beta \ge 1\) with \(x^n_0\in {\overline{D}}_n\).

The drift in (1.3) is a gradient, so that (1.3) can be rewritten as

$$\begin{aligned} X_0^n=x_0^n\in D_n,\quad \mathrm {d}X_t^n=\sqrt{\frac{2}{n}}\mathrm {d}B_t-\frac{1}{n}\nabla E(X_t^n)\mathrm {d}t, \end{aligned}$$
(1.4)

where

$$\begin{aligned} E(x_1,\ldots ,x_n) =n\sum _{i=1}^nV(x_i) +{\beta }\sum _{i>j}\log \frac{1}{|x_i-x_j|} \end{aligned}$$
(1.5)

can be interpreted as the energy of the configuration of particles \(x_1,\ldots ,x_n\).

  • If \(\beta =0\), then the Markov process \(X^n\) is an Ornstein–Uhlenbeck process, irreducible with unique invariant law \(P_n^0={\mathcal {N}}(0,\frac{1}{n}I_n)\) which is reversible.

  • If \(\beta \ge 1\), then the Markov process \(X^n\) is not irreducible, but \(D_n\) is a recurrent class carrying a unique invariant law \(P_n^\beta \), which is reversible and given by

    $$\begin{aligned} P_n^\beta =\frac{\mathrm {e}^{-E(x_1,\ldots ,x_n)}}{C_n^\beta }{\mathbf {1}}_{(x_1,\ldots ,x_n)\in {\overline{D}}_n} \mathrm {d}x_1\ldots \mathrm {d}x_n, \end{aligned}$$
    (1.6)

    where \(C_n^\beta \) is the normalizing factor given by

    $$\begin{aligned} C_n^\beta =\int _{{\overline{D}}_n} \mathrm {e}^{-E(x_1,\ldots ,x_n)}\mathrm {d}x_1\ldots \mathrm {d}x_n. \end{aligned}$$
    (1.7)

In terms of geometry, it is crucial to observe that since \(-\log \) is convex on \((0,+\infty )\), the map

$$\begin{aligned} (x_1,\ldots ,x_n)\in D_n\mapsto \mathrm {Interaction}(x_1,\ldots ,x_n) ={\beta }\sum _{i>j}\log \frac{1}{x_i-x_j}, \end{aligned}$$

is convex. Thus, since V is convex on \({\mathbb {R}}\), it follows that E is convex on \(D_n\). For all \(\beta \ge 0\), the law \(P_n^\beta \) is log-concave with respect to the Lebesgue measure as well as with respect to \({\mathcal {N}}(0,\frac{1}{n}I_n)\).

1.4 Non-interacting case and Ornstein–Uhlenbeck benchmark

When we turn off the interaction by taking \(\beta =0\) in (1.3), the DOU process becomes an Ornstein–Uhlenbeck process (OU) \(Z^n={(Z^n_t)}_{t\ge 0}\) on \({\mathbb {R}}^n\) solving the stochastic differential equation

$$\begin{aligned} Z^n_0=z^n_0\in {\mathbb {R}}^n,\quad \mathrm {d}Z^n_t=\sqrt{\frac{2}{n}}\mathrm {d}B^n_t-Z^n_t\mathrm {d}t, \end{aligned}$$
(1.8)

where \(B^n\) is a standard n-dimensional BM. The invariant law of \(Z^n\) is the product Gaussian law \(P_n^0={\mathcal {N}}(0,\frac{1}{n}I_n)={\mathcal {N}}(0,\frac{1}{n})^{\otimes n}\). The explicit Gaussian nature of \(Z_t^n\sim {\mathcal {N}}(z_0^n\mathrm {e}^{-t},\frac{1-\mathrm {e}^{-2t}}{n}I_n)\), valid for all \(t\ge 0\), allows for a fine analysis of convergence to equilibrium, as in the following theorem.

Theorem 1.1

(Cutoff for OU: mean-field regime) Let \(Z^n={(Z^n_t)}_{t\ge 0}\) be the OU process (1.8) and let \(P_n^0\) be its invariant law. Suppose that

$$\begin{aligned} \varliminf _{n\rightarrow \infty }\frac{|z^n_0|^2}{n}>0 \quad \text {and}\quad \varlimsup _{n\rightarrow \infty }\frac{|z^n_0|^2}{n}<\infty \end{aligned}$$

where \(|z|=\sqrt{z_1^2+\ldots +z_n^2}\) is the Euclidean norm. Then for all \(\varepsilon \in (0,1)\),

$$\begin{aligned} \lim _{n\rightarrow \infty }\mathrm {dist}(\mathrm {Law}(Z^n_{t_n})\mid P_n^0) ={\left\{ \begin{array}{ll} \max &{} \text {if }t_n=(1-\varepsilon )c_n\\ 0 &{} \text {if }t_n=(1+\varepsilon )c_n \end{array}\right. } \end{aligned}$$

where

$$\begin{aligned} c_n ={\left\{ \begin{array}{ll} \frac{1}{2}\log (n) &{} \text {if }\mathrm {dist}=\mathrm {Wasserstein},\\ \log (n) &{} \text {if }\mathrm {dist}\in \{\mathrm {TV}, \mathrm {Hellinger}, \mathrm {Kullback}, \chi ^2\},\\ \frac{3}{2}\log (n) &{} \text {if }\mathrm {dist}=\mathrm {Fisher}. \end{array}\right. } \end{aligned}$$

Theorem 1.1 is proved in Sect. 3. See Figs. 1 and 2 for a numerical experiment.

Fig. 1 A single trajectory of the DOU process with \(n=3\) and \(x^n_0=(-10,0,10)\), for \(\beta =0\) (top) and \(\beta =2\) (bottom). The driving Brownian motions are the same

Fig. 2 Plot of the function \(t\mapsto \mathrm {Hellinger}(\mathrm {Law}(X^n_t)\mid P^\beta _n)\) (see (3.2) for the explicit formula) with \(n=50\), \(\beta =0\), and \(\tfrac{|x^n_0|^2}{n}=1\). Note that \(\log (50)\approx 3.9\)
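A curve like the one in Fig. 2 can be recomputed directly from the Gaussian law of the process. The following Python sketch does not use formula (3.2) itself (given only in Sect. 3) but the standard closed form for the Hellinger affinity between Gaussian laws, which yields the same curve in this case \(\beta =0\):

```python
import numpy as np
import matplotlib.pyplot as plt

def hellinger_ou(t, n, r2):
    """Hellinger(Law(Z_t^n) | P_n^0) with Law(Z_t^n) = N(z0 e^{-t}, s_t^2 I_n),
    s_t^2 = (1 - e^{-2t})/n, P_n^0 = N(0, I_n/n), and r2 = |z0|^2/n,
    via the Bhattacharyya affinity of isotropic Gaussians."""
    s2 = (1.0 - np.exp(-2.0 * t)) / n
    sig2 = 1.0 / n
    log_affinity = (n / 2.0) * np.log(2.0 * np.sqrt(s2 * sig2) / (s2 + sig2)) \
        - n * r2 * np.exp(-2.0 * t) / (4.0 * (s2 + sig2))
    return np.sqrt(1.0 - np.exp(log_affinity))

t = np.linspace(0.01, 8.0, 400)
plt.plot(t, hellinger_ou(t, n=50, r2=1.0))
plt.axvline(np.log(50), linestyle="--")  # cutoff predicted at log(n) ~ 3.9
plt.xlabel("t"); plt.ylabel("Hellinger"); plt.show()
```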

Theorem 1.1 constitutes a very natural benchmark for the cutoff phenomenon for the DOU process. Theorem 1.1 is not a surprise: the TV and Hellinger cases are already considered in [47], see also [7]. Let us mention that in [9], a cutoff phenomenon for TV, entropy, and Wasserstein is proven for the OU process of fixed dimension d and vanishing noise. This is to be compared with our setting, where the dimension is sent to infinity: the results (and their proofs) are essentially the same in these two situations; however, we will see below that if one considers more general initial conditions, there are substantial differences according to whether the dimension is fixed or sent to infinity.

The restriction on the initial condition in Theorem 1.1 is spelled out in terms of the second moment of the empirical distribution, a natural choice suggested by the mean-field limit discussed in Sect. 2.5. It yields a mixing time of order \(\log (n)\), just like for Brownian motion on compact Lie groups, see [55, 64]. For the OU process, and more generally for overdamped Langevin processes, the non-compactness of the space is compensated by the confinement or tightness induced by the drift.

Actually, Theorem 1.1 is a particular instance of the following, much more general result that reveals that, except for the Wasserstein distance, a cutoff phenomenon always occurs.

Theorem 1.2

(General cutoff for OU). Let \(Z^n={(Z^n_t)}_{t\ge 0}\) be the OU process (1.8) and let \(P_n^0\) be its invariant law. Let \(\mathrm {dist}\in \{\mathrm {TV}, \mathrm {Hellinger}, \mathrm {Kullback}, \chi ^2, \mathrm {Fisher}\}\). Then, for all \(\varepsilon \in (0,1)\),

$$\begin{aligned} \lim _{n\rightarrow \infty } \mathrm {dist}(\mathrm {Law}(Z^n_{t_n})\mid P_n^0) ={\left\{ \begin{array}{ll} \max &{} \text {if }t_n=(1-\varepsilon )c_n,\\ 0 &{} \text {if }t_n=(1+\varepsilon )c_n \end{array}\right. } \end{aligned}$$

where

$$\begin{aligned} c_n = {\left\{ \begin{array}{ll} \log (\sqrt{n}|z_0^n|) \vee \frac{1}{4}\log (n) &{}\text {if }\mathrm {dist}\in \{\mathrm {TV}, \mathrm {Hellinger}, \mathrm {Kullback}, \chi ^2\},\\ \log (n|z_0^n|) \vee \frac{1}{2}\log (n) &{}\text {if }\mathrm {dist} = \mathrm {Fisher}. \end{array}\right. } \end{aligned}$$

Regarding the Wasserstein distance, the following dichotomy occurs:

  • if \(\lim _{n\rightarrow \infty }|z_0^n|=+\infty \), then for all \(\varepsilon \in (0,1)\), with \(c_n=\log |z_0^n|\),

    $$\begin{aligned} \lim _{n\rightarrow \infty } \mathrm {Wasserstein}(\mathrm {Law}(Z_{t_n}),P_n^0) = {\left\{ \begin{array}{ll} +\infty &{} \text {if }t_n=(1-\varepsilon )c_n,\\ 0 &{} \text {if }t_n=(1+\varepsilon )c_n, \end{array}\right. } \end{aligned}$$
  • if \(\lim _{n\rightarrow \infty }|z_0^n|=\alpha \in [0,\infty )\), then there is no cutoff phenomenon: namely, for all \(t>0\),

    $$\begin{aligned} \lim _{n\rightarrow \infty } \mathrm {Wasserstein}^2(\mathrm {Law}(Z_{t}),P_n^0) = \alpha ^2\mathrm {e}^{-2t} +2\Bigl (1-\sqrt{1-\mathrm {e}^{-2t}} - \tfrac{1}{2}\mathrm {e}^{-2t}\Bigr ). \end{aligned}$$

Theorem 1.2 is proved in Sect. 3.
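The second bullet can be checked against the closed form for the quadratic Wasserstein distance between isotropic Gaussians, \(\mathrm {Wasserstein}^2({\mathcal {N}}(m,s^2 I_n),{\mathcal {N}}(0,\sigma ^2 I_n))=|m|^2+n(s-\sigma )^2\), a standard formula assumed in the following sketch:

```python
import numpy as np

# With s^2 = (1 - e^{-2t})/n, sigma^2 = 1/n, and |m|^2 = |z0|^2 e^{-2t}
# -> alpha^2 e^{-2t}, the Gaussian formula |m|^2 + n (s - sigma)^2
# reproduces the stated limit.
alpha, t, n = 1.7, 0.9, 10**7                   # illustrative values
s = np.sqrt((1.0 - np.exp(-2.0 * t)) / n)
sigma = np.sqrt(1.0 / n)
gaussian = alpha**2 * np.exp(-2.0 * t) + n * (s - sigma)**2
stated = alpha**2 * np.exp(-2.0 * t) \
    + 2.0 * (1.0 - np.sqrt(1.0 - np.exp(-2.0 * t)) - 0.5 * np.exp(-2.0 * t))
print(gaussian, stated)                         # equal up to rounding
```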

The observation that, for every distance or divergence except the Wasserstein distance, a cutoff phenomenon generically occurs seems to be new.

Let us make a few comments. First, in terms of convergence to equilibrium, the relevant observable in Theorem 1.2 appears to be the Euclidean norm \(|z_0^n|\) of the initial condition. This quantity differs from the eigenfunction associated to the spectral gap of the generator, which is given by \(z_1+\ldots +z_n\) as we will recall later on. This is also related to the equality of (2.3) and (3.4). Second, cutoff occurs at a time that is independent of the initial condition provided its Euclidean norm is small enough: this cutoff time appears as the time required to regularize the initial condition (a Dirac mass) into a sufficiently spread out absolutely continuous probability measure; in particular, this cutoff phenomenon would not hold generically if we allowed for spread out (non-Dirac) initial conditions. Note that, for the OU process of fixed dimension and vanishing noise, we would not observe a cutoff phenomenon when starting from initial conditions with small enough Euclidean norm: this is a high-dimensional phenomenon. In this respect, the Wasserstein distance is peculiar since it is much less stringent on the local behavior of the measures at stake: for instance \(\lim _{n\rightarrow \infty }\mathrm {Wasserstein}(\delta _0,\delta _{1/n})=0\), while for all the other distances or divergences considered here, the corresponding quantity would remain equal to \(\max \). This explains the absence of a generic cutoff phenomenon for the Wasserstein distance. Third, the explicit expressions provided in our proof allow one to extract the cutoff profile in each case, but we prefer not to include them in our statement and refer the interested reader to the end of Sect. 3.

1.5 Exactly solvable intermezzo

When \(\beta \ne 0\), the law of the DOU process is no longer Gaussian nor explicit. However, several exactly solvable aspects remain available. Let us recall that a Cox–Ingersoll–Ross process (CIR) of parameters \(a,b,\sigma \) is the solution \(R = (R_t)_{t\ge 0}\) on \({\mathbb {R}}_+\) of

$$\begin{aligned} R_0 = r_0 \in {\mathbb {R}}_+,\quad \mathrm {d}R_t = \sigma \sqrt{R_t} \mathrm {d}W_t + (a-bR_t) \mathrm {d}t, \end{aligned}$$
(1.9)

where W is a standard BM. Its invariant law is \(\mathrm {Gamma}(2a/\sigma ^2,2b/\sigma ^2)\), with density proportional to \(r\ge 0 \mapsto r^{2a/\sigma ^2-1}\mathrm {e}^{-2br/\sigma ^2}\), mean a/b, and variance \(a\sigma ^2/(2b^2)\). It was proved by William Feller in [38] that the density of \(R_t\) at any time t can be expressed in terms of special functions.

If \({(Z_t)}_{t\ge 0}\) is a d-dimensional OU process of parameters \(\theta \ge 0\) and \(\rho \in {\mathbb {R}}\), weak solution of

$$\begin{aligned} \mathrm {d}Z_t=\theta \mathrm {d}W_t-\rho Z_t\mathrm {d}t \end{aligned}$$
(1.10)

where W is a d-dimensional BM, then \(R={(R_t)}_{t\ge 0}\) with \(R_t:=|Z_t|^2\) is a CIR process with parameters \(a=\theta ^2d\), \(b=2\rho \), \(\sigma = 2\theta \). When \(\rho =0\), Z is a BM while \(R=|Z|^2\) is a squared Bessel process.
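As a quick consistency check of this parameter mapping, the exact mean of \(|Z_t|^2\) computed from the OU side must coincide with the CIR mean; a sketch with illustrative values:

```python
import numpy as np

theta, rho, d, t, r0 = np.sqrt(2.0), 1.0, 7, 1.3, 4.2    # r0 = |Z_0|^2
# OU side: each coordinate has mean z_i e^{-rho t} and variance
# theta^2 (1 - e^{-2 rho t}) / (2 rho), hence
ou_mean = r0 * np.exp(-2.0 * rho * t) \
    + d * theta**2 / (2.0 * rho) * (1.0 - np.exp(-2.0 * rho * t))
# CIR side: E[R_t] = r0 e^{-b t} + (a/b)(1 - e^{-b t}), a = theta^2 d, b = 2 rho.
a, b = theta**2 * d, 2.0 * rho
cir_mean = r0 * np.exp(-b * t) + (a / b) * (1.0 - np.exp(-b * t))
print(ou_mean, cir_mean)                                  # identical
```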

The following theorem gathers some exactly solvable aspects of the DOU process for general \(\beta \ge 1\), which are largely already part of the statistical physics folklore, see [58]. The proof is based on the knowledge of the eigenfunctions associated to the first spectral values of the dynamics, see (2.6), and on their remarkable properties. As in (2.6), we set \(\pi (x):=x_1+\ldots +x_n\) for \(x\in {\mathbb {R}}^n\).

Theorem 1.3

(From DOU to OU and CIR). Let \({(X^n_t)}_{t\ge 0}\) be the DOU process (1.3), with \(\beta =0\) or \(\beta \ge 1\), and let \(P_n^\beta \) be its invariant law. Then:

  • \({(\pi (X^n_t))}_{t\ge 0}\) is a one-dimensional OU process, weak solution of (1.10) with \(d=1\), \(\theta =\sqrt{2}\), \(\rho =1\). Its invariant law is \({\mathcal {N}}(0,1)\). It does not depend on \(\beta \), and \(\pi (X^n_t)\sim {\mathcal {N}}(\pi (x^n_0)\mathrm {e}^{-t},1-\mathrm {e}^{-2t})\), \(t\ge 0\). Furthermore \(\pi (X^n_t)^2\) is a CIR process of parameters \(a=2\), \(b=2\), \(\sigma = 2\sqrt{2}\).

  • \({(|X^n_t|^2)}_{t\ge 0}\) is a CIR process, weak solution of (1.9) with \(a=2+\beta (n-1)\), \(b=2\), \(\sigma = \sqrt{8/n}\). Its invariant law is \(\mathrm {Gamma}(\frac{1}{2}(n+\beta \frac{n(n-1)}{2}),\frac{n}{2})\), of mean \(1+\frac{\beta }{2}(n-1)\) and variance \(\beta +\frac{2-\beta }{n}\). Furthermore, if \(d=n+\beta \frac{n(n-1)}{2}\) is a positive integer, then \({(|X^n_t|^2)}_{t\ge 0}\) has the law of \({(|Z_t|^2)}_{t\ge 0}\) where \({(Z_t)}_{t\ge 0}\) is a d-dimensional OU process, weak solution of (1.10) with \(\theta =\sqrt{2/n}\), \(\rho =1\), and \(Z_0=z^n_0\) for an arbitrary \(z^n_0\in {\mathbb {R}}^d\) such that \(|z^n_0|=|x^n_0|\).

At this stage, it is worth noting that Theorem 1.3 gives in particular, denoting \(\beta _n:=1+\frac{\beta }{2}(n-1)\),

$$\begin{aligned} {\mathbb {E}}[\pi (X^n_t)]=\pi (x^n_0)\mathrm {e}^{-t} \underset{t\rightarrow \infty }{\longrightarrow }0 \quad \text {and}\quad {\mathbb {E}}[|X^n_t|^2] =\beta _n+(|x^n_0|^2-\beta _n)\mathrm {e}^{-2t} \underset{t\rightarrow \infty }{\longrightarrow } \beta _n. \end{aligned}$$
(1.11)

Following [25, Sec. 2.2], these limits can also be deduced from the Dumitriu–Edelman tridiagonal random matrix model [32], which is isospectral to the \(\beta \)-Hermite ensemble. These formulas for the “transient” first two moments \({\mathbb {E}}[\pi (X_t^n)]\) and \({\mathbb {E}}[|X_t^n|^2]\) reveal an abrupt convergence to their equilibrium values:

  • If \(\lim _{n\rightarrow \infty }\frac{\pi (x^n_0)}{n}=\alpha \ne 0\) then for all \(\varepsilon \in (0,1)\),

    $$\begin{aligned} \lim _{n\rightarrow \infty } \vert {\mathbb {E}}[\pi (X^n_{t_n})]\vert = {\left\{ \begin{array}{ll} +\infty &{}\text {if }t_n=(1-\varepsilon )\log (n)\\ 0&{}\text {if }t_n=(1+\varepsilon )\log (n) \end{array}\right. }. \end{aligned}$$
    (1.12)
  • If \(\lim _{n\rightarrow \infty }\frac{|x^n_0|^2}{n} = \alpha \ne \frac{\beta }{2}\) then for all \(\varepsilon \in (0,1)\), denoting \(\beta _n:=1+\frac{\beta }{2}(n-1)\),

    $$\begin{aligned} \lim _{n\rightarrow \infty } \left| {\mathbb {E}}[|X^n_{t_n}|^2]-\beta _n \right| = {\left\{ \begin{array}{ll} +\infty &{}\text {if }t_n=(1-\varepsilon )\frac{1}{2}\log (n)\\ 0&{}\text {if }t_n=(1+\varepsilon )\frac{1}{2}\log (n) \end{array}\right. }. \end{aligned}$$
    (1.13)

These critical times are universal with respect to \(\beta \). The first two transient moments are related to the eigenfunctions (2.6) associated to the first two non-zero eigenvalues of the dynamics. Higher order transient moments are related to eigenfunctions associated to higher order eigenvalues. Note that \({\mathbb {E}}[\pi (X^n_t)]\) and \({\mathbb {E}}[|X^n_t|^2]\) are the first two moments of the non-normalized mean empirical measure \({\mathbb {E}}[\sum _{i=1}^n\delta _{X^{n,i}_t}]\), and this lack of normalization is responsible for the critical times of order \(\log (n)\). In contrast, the first two moments of the normalized mean empirical measure \({\mathbb {E}}[\frac{1}{n}\sum _{i=1}^n\delta _{X^{n,i}_t}]\), given by \(\frac{1}{n}{\mathbb {E}}[\pi (X^n_t)]\) and \(\frac{1}{n}{\mathbb {E}}[|X^n_t|^2]\) respectively, do not exhibit a critical phenomenon. This is related to the exponential decay of the first two moments in the mean-field limit (2.12), as well as to the lack of cutoff for Wasserstein already revealed for OU by Theorem 1.2. This is also reminiscent of the high-dimensional behavior of norms in the asymptotic geometric analysis of convex bodies. In another direction, this elementary observation on the moments illustrates that the cutoff phenomenon for a given quantity is not stable under rather simple transformations of this quantity.
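The abruptness in (1.13) is elementary to see numerically: with \(|x^n_0|^2=\alpha n\), the gap \(|{\mathbb {E}}[|X^n_{t_n}|^2]-\beta _n| = |\alpha n-\beta _n|\mathrm {e}^{-2t_n}\) at \(t_n=c\log (n)\) scales like \(n^{1-2c}\). A quick check, with illustrative values of \(\alpha \) and \(\beta \):

```python
alpha, beta = 2.0, 1.0                        # alpha != beta/2
for n in [10**2, 10**4, 10**6]:
    beta_n = 1.0 + beta * (n - 1) / 2.0
    gap0 = abs(alpha * n - beta_n)            # |E[|X_0^n|^2] - beta_n|
    for c in [0.4, 0.6]:                      # below / above the critical 1/2
        print(n, c, gap0 * n ** (-2.0 * c))   # grows for c < 1/2, vanishes for c > 1/2
```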

From the first part of Theorem 1.3 and contraction properties available for some distances or divergences, see Lemma A.2, we obtain the following lower bound on the mixing time for the DOU, which is independent of \(\beta \):

Corollary 1.4

(Lower bound on the mixing time). Let \({(X^n_t)}_{t\ge 0}\) be the DOU process (1.3) with \(\beta =0\) or \(\beta \ge 1\), and invariant law \(P_n^\beta \). Let \(\mathrm {dist}\in \{\mathrm {TV}, \mathrm {Hellinger}, \mathrm {Kullback}, \chi ^2, \mathrm {Wasserstein}\}\). Set

$$\begin{aligned} c_n := {\left\{ \begin{array}{ll} \log (|\pi (x_0^n)|) &{}\text {if }\mathrm {dist}\in \{ \mathrm {TV}, \mathrm {Hellinger}, \mathrm {Kullback}, \chi ^2\}\\ \log \Big (\frac{\vert \pi (x_0^n) \vert }{\sqrt{n}}\Big ) &{}\text {if }\mathrm {dist} = \mathrm {Wasserstein} \end{array}\right. }, \end{aligned}$$

and assume that \(\lim _{n\rightarrow \infty }c_n=\infty \). Then, for all \(\varepsilon \in (0,1)\), we have

$$\begin{aligned} \lim _{n\rightarrow \infty } \mathrm {dist}(\mathrm {Law}(X^n_{(1-\varepsilon )c_n})\mid P_n^\beta ) =\max . \end{aligned}$$

Theorem 1.3 and Corollary 1.4 are proved in Sect. 4.

The derivation of an upper bound on the mixing time is much more delicate: once again recall that the case \(\beta = 0\) covered by Theorem 1.2 is specific as it relies on exact Gaussian computations which are no longer available for \(\beta \ge 1\). In the next subsection, we will obtain results for general values of \(\beta \ge 1\) via more elaborate arguments.

In the specific cases \(\beta \in \{1,2\}\), there are some exactly solvable aspects that one can exploit to derive, in particular, precise upper bounds on the mixing times. Indeed, for these values of \(\beta \), the DOU process is the process of eigenvalues of the matrix-valued OU process:

$$\begin{aligned} M_0 = m_0,\quad \mathrm {d}M_t = \sqrt{\frac{2}{n}} \mathrm {d}B_t - M_t \mathrm {d}t, \end{aligned}$$

where B is a BM on the symmetric \(n\times n\) matrices if \(\beta = 1\) and on the Hermitian \(n\times n\) matrices if \(\beta =2\), see (5.4) and (5.16) for more details. Based on this observation, we can deduce an upper bound on the mixing times by contraction (for most distances or divergences).
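A hedged simulation sketch of this lift for \(\beta =1\): the normalization of the Brownian motion on symmetric matrices below is chosen so that the stationary matrix has the \(\mathrm {GOE}_n\) entry variances described after Corollary 1.6, and is an assumption standing in for the precise convention (5.4), which is not reproduced here:

```python
import numpy as np

def dyson_via_matrix_ou(n, T, dt=1e-3, seed=0):
    """Eigenvalues at time T of the symmetric matrix OU process
    dM = sqrt(2/n) dB - M dt, a sketch for beta = 1."""
    rng = np.random.default_rng(seed)
    M = np.zeros((n, n))
    for _ in range(int(T / dt)):
        G = rng.standard_normal((n, n)) * np.sqrt(dt)
        dB = (G + G.T) / 2.0           # diagonal variance dt, off-diagonal dt/2
        M += np.sqrt(2.0 / n) * dB - M * dt
    return np.linalg.eigvalsh(M)       # ordered eigenvalues: the DOU state

print(dyson_via_matrix_ou(n=5, T=3.0))
```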

Theorem 1.5

(Upper bound on the mixing time in the matrix case). Let \({(X^n_t)}_{t\ge 0}\) be the DOU process (1.3) with \(\beta \in \{0,1,2\}\), and invariant law \(P_n^\beta \), and \(\mathrm {dist}\in \{\mathrm {TV}, \mathrm {Hellinger}, \mathrm {Kullback}, \chi ^2, \mathrm {Wasserstein}\}\). Set

$$\begin{aligned} c_n := {\left\{ \begin{array}{ll} \log (\sqrt{n} |x_0^n|) \vee \log (\sqrt{n}) &{}\text {if }\mathrm {dist}\in \{ \mathrm {TV}, \mathrm {Hellinger}, \mathrm {Kullback}, \chi ^2\}\\ \log (|x_0^n|) &{}\text {if }\mathrm {dist} = \mathrm {Wasserstein} \end{array}\right. }, \end{aligned}$$

and assume that \(\lim _{n\rightarrow \infty }c_n=\infty \) if \(\mathrm {dist} = \mathrm {Wasserstein}\). Then, for all \(\varepsilon \in (0,1)\), we have

$$\begin{aligned} \lim _{n\rightarrow \infty }\mathrm {dist}(\mathrm {Law}(X^n_{(1+\varepsilon )c_n})\mid P_n^\beta ) =0. \end{aligned}$$

Combining this upper bound with the lower bound already obtained above, we derive a cutoff phenomenon in this particular matrix case.

Corollary 1.6

(Cutoff for DOU in the matrix case). Let \({(X^n_t)}_{t\ge 0}\) be the DOU process (1.3), with \(\beta \in \{0,1,2\}\), and invariant law \(P_n^\beta \). Let \(\mathrm {dist}\in \{\mathrm {TV}, \mathrm {Hellinger}, \mathrm {Kullback}, \chi ^2, \mathrm {Wasserstein}\}\). Let \({(a_n)}_n\) be a real sequence satisfying \(\inf _n \sqrt{n} a_n > 0\), and assume further that \(\lim _{n\rightarrow \infty }\sqrt{n} a_n=\infty \) if \(\mathrm {dist}=\mathrm {Wasserstein}\). Then, for all \(\varepsilon \in (0,1)\), we have

$$\begin{aligned} \lim _{n\rightarrow \infty } \sup _{x^n_0 \in [-a_n,a_n]^n} \mathrm {dist}(\mathrm {Law}(X^n_{t_n})\mid P_n^\beta ) ={\left\{ \begin{array}{ll} \max &{} \text {if }t_n=(1-\varepsilon )c_n\\ 0 &{} \text {if }t_n=(1+\varepsilon )c_n \end{array}\right. } \end{aligned}$$

where

$$\begin{aligned} c_n :={\left\{ \begin{array}{ll} \log (na_n) &{}\text { if }\mathrm {dist} \in \{\mathrm {TV}, \mathrm {Hellinger}, \mathrm {Kullback}, \chi ^2\}\\ \log (\sqrt{n} a_n) &{}\text { if }\mathrm {dist} = \mathrm {Wasserstein} \end{array}\right. }. \end{aligned}$$

Theorem 1.5 and Corollary 1.6 are proved in Sect. 5.

It is worth noting that \(d=n+\beta \frac{n(n-1)}{2}\) in Theorem 1.3 is indeed an integer in the “random matrix” cases \(\beta \in \{1,2\}\), and corresponds then exactly to the number of degrees of freedom of the Gaussian random matrix models GOE and GUE respectively. More precisely, if we let \(X^n_\infty \sim P_n^\beta \) then:

  • If \(\beta =1\) then \(P_n^\beta \) is the law of the eigenvalues of \(S\sim \mathrm {GOE}_n\), and \(|X^n_\infty |^2=\sum _{j,k=1}^nS_{jk}^2\), which is the sum of n squared Gaussians of variance \(v=1/n\) (diagonal) plus twice the sum of \(\frac{n^2-n}{2}\) squared Gaussians of variance \(\frac{v}{2}\) (off-diagonal), all independent. The duplication has the effect of renormalizing the variance from \(\frac{v}{2}\) to v. All in all, we get the sum of \(d=\frac{n^2+n}{2}\) independent squared Gaussians of the same variance v. See Sect. 5.

  • If \(\beta =2\) then \(P_n^\beta \) is the law of the eigenvalues of \(H\sim \mathrm {GUE}_n\), and \(|X^n_\infty |^2=\sum _{j,k=1}^n|H_{jk}|^2\) is the sum of n squared Gaussians of variance \(v=1/n\) (diagonal) plus twice the sum of \(n^2-n\) squared Gaussians of variance \(\frac{v}{2}\) (off-diagonal), all independent. All in all, we get the sum of \(d=n^2\) independent squared Gaussians of the same variance v. See Sect. 5.

Another manifestation of exact solvability lies at the level of functional inequalities. Indeed, following [25], the optimal Poincaré constant of \(P_n^\beta \) is 1/n and does not depend on \(\beta \), and the extremal functions are translations/dilations of \(x\mapsto \pi (x)=x_1+\ldots +x_n\). This corresponds to a spectral gap of the dynamics equal to 1, together with its associated eigenfunction. Moreover, the optimal logarithmic Sobolev constant of \(P_n^\beta \) (Lemma B.1) is 2/n and does not depend on \(\beta \), and the extremal functions are of the form \(x\mapsto \mathrm {e}^{c(x_1+\ldots +x_n)}\), \(c\in {\mathbb {R}}\). This knowledge of the optimal constants and extremal functions, and their independence with respect to \(\beta \), is truly remarkable. It plays a crucial role in the results presented in this article. More precisely, the optimal Poincaré inequality is used for the lower bound via the first eigenfunctions, while the optimal logarithmic Sobolev inequality is used for the upper bound via exponential decay of the entropy.
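As a quick illustration of the optimality in the Poincaré case: by Theorem 1.3, \(\pi (X^n_\infty )\sim {\mathcal {N}}(0,1)\) when \(X^n_\infty \sim P_n^\beta \), while \(|\nabla \pi |^2=n\) pointwise, so that

$$\begin{aligned} \mathrm {Var}_{P_n^\beta }(\pi ) =1 =\frac{1}{n}\int |\nabla \pi |^2\mathrm {d}P_n^\beta , \end{aligned}$$

in other words equality holds in the Poincaré inequality \(\mathrm {Var}_{P_n^\beta }(f)\le \frac{1}{n}\int |\nabla f|^2\mathrm {d}P_n^\beta \) for \(f=\pi \), for every \(\beta \).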

1.6 Cutoff in the general interacting case

Our main contribution consists in deriving an upper bound on the mixing times in the general case \(\beta \ge 1\): the proof relies on the logarithmic Sobolev inequality, some coupling arguments and a regularization procedure.

Theorem 1.7

(Upper bound on the mixing time: the general case). Let \({(X^n_t)}_{t\ge 0}\) be the DOU process (1.3), with \(\beta =0\) or \(\beta \ge 1\) and invariant law \(P_n^\beta \). Take \(\mathrm {dist}\in \{\mathrm {TV}, \mathrm {Hellinger}, \mathrm {Wasserstein}\}\). Set

$$\begin{aligned} c_n := {\left\{ \begin{array}{ll} \log (\sqrt{n} |x_0^n|) \vee \log ({n}) &{}\text {if }\mathrm {dist}\in \{\mathrm {TV}, \mathrm {Hellinger}\}\\ \log (|x_0^n|) \vee \log (\sqrt{n}) &{}\text {if }\mathrm {dist} = \mathrm {Wasserstein} \end{array}\right. }. \end{aligned}$$

Then, for all \(\varepsilon \in (0,1)\), we have

$$\begin{aligned} \lim _{n\rightarrow \infty }\mathrm {dist}(\mathrm {Law}(X^n_{(1+\varepsilon )c_n})\mid P_n^\beta ) =0. \end{aligned}$$

Combining this upper bound with the general lower bound that we obtained in Corollary 1.4, we deduce the following cutoff phenomenon. Observe that it holds both for \(\beta =0\) and \(\beta \ge 1\), and that the expression of the mixing time does not depend on \(\beta \).

Corollary 1.8

(Cutoff for DOU in the general case). Let \({(X^n_t)}_{t\ge 0}\) be the DOU process (1.3) with \(\beta =0\) or \(\beta \ge 1\) and invariant law \(P_n^\beta \). Take \(\mathrm {dist}\in \{\mathrm {TV}, \mathrm {Hellinger}, \mathrm {Wasserstein}\}\). Let \({(a_n)}_n\) be a real sequence satisfying \(\inf _na_n > 0\). Then, for all \(\varepsilon \in (0,1)\), we have

$$\begin{aligned} \lim _{n\rightarrow \infty } \sup _{x^n_0 \in [-a_n,a_n]^n} \mathrm {dist}(\mathrm {Law}(X^n_{t_n})\mid P_n^\beta ) ={\left\{ \begin{array}{ll} \max &{} \text {if }t_n=(1-\varepsilon )c_n\\ 0 &{} \text {if }t_n=(1+\varepsilon )c_n \end{array}\right. } \end{aligned}$$

where

$$\begin{aligned} c_n := {\left\{ \begin{array}{ll} \log (na_n) &{}\text { if }\mathrm {dist} \in \{\mathrm {TV}, \mathrm {Hellinger}\}\\ \log (\sqrt{n} a_n) &{}\text { if }\mathrm {dist} = \mathrm {Wasserstein} \end{array}\right. }. \end{aligned}$$

The proofs of Theorem 1.7 and Corollary 1.8 for the TV and Hellinger distances are presented in Sect. 6. The Wasserstein distance is treated in Sect. 7. Let us comment on the assumptions made on \(a_n\) in Corollaries 1.6 and 1.8. They are dictated by the upper bounds established in Theorems 1.5 and 1.7, which take the form of maxima of two terms: one that depends on the initial condition, and another which is a power of a logarithm of n. The logarithmic term is an upper bound on the time required to regularize a pointwise initial condition, and its precise expression varies according to the method of proof we rely on: in the matrix case, it is the time required to regularize a larger object, the matrix-valued OU process; in the general case, it is related to the time it takes to make the entropy of a pointwise initial condition small. These bounds are not optimal for \(\beta =0\) (compare with Theorem 1.2), and probably not for \(\beta \ge 1\) either.

A natural, but probably quite difficult, goal would be to establish a cutoff phenomenon in the situation where the set of initial conditions is reduced to any given singleton, as in Theorem 1.2 for the case \(\beta = 0\). Recall that in that case, the asymptotics of the mixing time is dictated by the Euclidean norm of the initial condition. In the case \(\beta \ge 1\), this cannot be the right observable since the Euclidean norm does not measure the distance to equilibrium. Instead, one should probably consider the Euclidean norm \(|x_0^n-\rho _n|\), where \(\rho _n\) is the vector of the quantiles of orders i/n, \(1\le i\le n\), of the semi-circle law that arises in the mean-field limit at equilibrium (see Sect. 2.5). More precisely

$$\begin{aligned} \rho _{n,i} =\inf \left\{ t\in {\mathbb {R}}:\int _{-\infty }^t \frac{\sqrt{2\beta -x^2}}{\beta \pi }{\mathbf {1}}_{x\in [-\sqrt{2\beta },\sqrt{2\beta }]}\mathrm {d}x\ge \frac{i}{n} \right\} , \quad i\in \{1,\ldots ,n\}. \end{aligned}$$
(1.14)

Note that \(\rho _n = 0\) when \(\beta = 0\).
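Numerically, the quantiles (1.14) can be obtained by inverting the cumulative distribution function of the semi-circle law, as in the following Python sketch for \(\beta >0\) (the quad/brentq tolerances are ad hoc illustration choices):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def semicircle_quantiles(n, beta):
    """rho_{n,i} of (1.14): quantiles of orders i/n of the semi-circle law
    on [-sqrt(2 beta), sqrt(2 beta)]."""
    c = np.sqrt(2.0 * beta)
    density = lambda x: np.sqrt(max(2.0 * beta - x**2, 0.0)) / (beta * np.pi)
    cdf = lambda t: quad(density, -c, t)[0]
    rho = np.empty(n)
    for i in range(1, n):
        rho[i - 1] = brentq(lambda t: cdf(t) - i / n, -c, c)
    rho[n - 1] = c                     # the quantile of order 1 is the right edge
    return rho

print(semicircle_quantiles(n=5, beta=2.0))
```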

A first step in this direction is given by the following result:

Theorem 1.9

(DOU in the general case and pointwise initial condition). Let \({(X^n_t)}_{t\ge 0}\) be the DOU process (1.3) with \(\beta =0\) or \(\beta \ge 1\), and invariant law \(P_n^\beta \). The following hold:

  • If \(\lim _{n\rightarrow \infty }|x^n_0-\rho _n|=+\infty \), then, denoting \(t_n = \log (|x_0^n-\rho _n|)\), for all \(\varepsilon \in (0,1)\),

    $$\begin{aligned} \lim _{n\rightarrow \infty } \mathrm {Wasserstein}(\mathrm {Law}(X_{(1+\varepsilon )t_n}),P_n^\beta ) = 0. \end{aligned}$$
  • If \(\lim _{n\rightarrow \infty }|x^n_0-\rho _n|=\alpha \in [0,\infty )\), then, for all \(t>0\),

    $$\begin{aligned} \varlimsup _{n\rightarrow \infty } \mathrm {Wasserstein}(\mathrm {Law}(X_{t}),P_n^\beta )^2 \le \alpha ^2\mathrm {e}^{-2t}. \end{aligned}$$

Theorem 1.9 is proved in Sect. 7.

1.7 Non-pointwise initial conditions

It is natural to ask about the cutoff phenomenon when the initial condition \(X^n_0\) is not pointwise. Even if we turn off the interaction by taking \(\beta =0\), the law of the process at time t is then no longer Gaussian in general, which breaks the method of proof used for Theorem 1.1 and Theorem 1.2. Nevertheless, Theorem 1.10 below provides a universal answer, valid both for \(\beta =0\) and \(\beta \ge 1\), at the price, however, of introducing several objects and notations. More precisely, for any probability measure \(\mu \) on \({\mathbb {R}}^n\), we introduce

$$\begin{aligned} S(\mu ) ={\left\{ \begin{array}{ll} \displaystyle \int \frac{\mathrm {d}\mu }{\mathrm {d}x}\log \frac{\mathrm {d}\mu }{\mathrm {d}x}\mathrm {d}x =\text {``}\mathrm {Kullback}(\mu \mid \mathrm {d}x)\text {''} &{}\text {if }\displaystyle \frac{\mathrm {d}\mu }{\mathrm {d}x}\log \frac{\mathrm {d}\mu }{\mathrm {d}x}\in L^1(\mathrm {d}x)\\ +\infty &{}\text {otherwise} \end{array}\right. }. \end{aligned}$$
(1.15)

Note that S takes its values in \((-\infty ,+\infty ]\), and when \(S(\mu )<+\infty \), \(-S(\mu )\) is the Boltzmann–Shannon entropy of the law \(\mu \). For all \(x\in {\mathbb {R}}^n\) with \(x_i \ne x_j\) for all \(i\ne j\), we have

$$\begin{aligned} E(x_1,\ldots ,x_n) =n^2\iint \Phi (x,y) {\mathbf {1}}_{\{x\ne y\}} L_n(\mathrm {d}x)L_n(\mathrm {d}y) \end{aligned}$$
(1.16)

where \(\displaystyle L_n:=\frac{1}{n}\sum _{i=1}^n\delta _{x_i}\) and where \(\displaystyle \Phi (x,y):=\frac{n}{n-1}\frac{V(x)+V(y)}{2}+\frac{\beta }{2}\log \frac{1}{|x-y|}\).

Let us define the map \(\Psi :{\mathbb {R}}^n\rightarrow {\overline{D}}_n\) by

$$\begin{aligned} \Psi (x_1,\ldots ,x_n):=(x_{\sigma (1)},\ldots ,x_{\sigma (n)}), \end{aligned}$$
(1.17)

where \(\sigma \) is any permutation of \(\{1,\ldots ,n\}\) that reorders the particles non-decreasingly.
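In words, \(\Psi \) sorts the coordinates in non-decreasing order, so that sampling from the initial laws of Theorem 1.10 below is immediate, as in the following sketch (the particular choice of the \(\mu _i\) is an arbitrary illustration):

```python
import numpy as np

# Draw independent coordinates from mu_1, ..., mu_n and apply the sorting
# map Psi of (1.17); here mu_i = Uniform[i/n - 1, i/n + 1] (illustrative).
n, rng = 100, np.random.default_rng(0)
x = np.array([rng.uniform(i / n - 1.0, i / n + 1.0) for i in range(1, n + 1)])
x0 = np.sort(x)      # Psi reorders the particles non-decreasingly
```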

Theorem 1.10

(Cutoff for DOU with product smooth initial conditions). Let \({(X^n_t)}_{t\ge 0}\) be the DOU process (1.3) with \(\beta =0\) or \(\beta \ge 1\), and invariant law \(P_n^\beta \). Let S, \(\Phi \), and \(\Psi \) be as in (1.15), (1.16), and (1.17). Let us assume that \(\mathrm {Law}(X_0^n)\) is the image law or push forward of a product law \(\mu _1\otimes \ldots \otimes \mu _n\) by \(\Psi \) where \(\mu _1,\ldots ,\mu _n\) are laws on \({\mathbb {R}}\). Then:

  (1) If \(\displaystyle \varliminf _{n\rightarrow \infty } \Bigl |\frac{1}{n}\sum _{i=1}^n \int x \mu _i(\mathrm {d}x)\Bigr | \ne 0\), then, for all \(\varepsilon \in (0,1)\),

    $$\begin{aligned} \lim _{n\rightarrow \infty }\mathrm {Kullback}(\mathrm {Law}(X_{(1-\varepsilon )\log (n)})\mid P_n^\beta )=+\infty . \end{aligned}$$
  (2) If \(\displaystyle \varlimsup _{n\rightarrow \infty }\frac{1}{n^2}\sum _{i=1}^n S(\mu _i)<\infty \) and \(\displaystyle \varlimsup _{n\rightarrow \infty } \frac{1}{n^2}\sum _{i\ne j}\iint \Phi \,\mathrm {d}\mu _i \otimes \mathrm {d}\mu _j <\infty \), then, for all \(\varepsilon \in (0,1)\),

    $$\begin{aligned} \lim _{n\rightarrow \infty }\mathrm {Kullback}(\mathrm {Law}(X_{(1+\varepsilon )\log (n)})\mid P_n^\beta )=0. \end{aligned}$$

Theorem 1.10 is proved in Sect. 6.3.

It is likely that Theorem 1.10 can be extended to the case \(\mathrm {dist}\in \{\mathrm {Wasserstein}, \mathrm {Hellinger}, \mathrm {Fisher}\}\).

1.8 Structure of the paper

  • Section 2 provides additional comments and open problems.

  • Section 3 focuses on the OU process (\(\beta = 0\)) and gives the proofs of Theorems 1.1 and 1.2.

  • Section 4 concerns the exact solvability of the DOU process for all \(\beta \), and provides the proofs of Theorem 1.3 and Corollary 1.4.

  • Section 5 is about random matrices and gives the proofs of Theorem 1.5 and Corollary 1.6.

  • Section 6 deals with the DOU process for all \(\beta \) with the TV and Hellinger distances, and provides the proofs of Theorem 1.7 and Corollary 1.8.

  • Section 7 gives the Wasserstein counterpart of Sect. 6 and the proof of Theorem 1.9.

  • Appendix A provides a survey on distances and divergences, with new results.

  • Appendix B gathers useful dynamical consequences of convexity.

2 Additional comments and open problems

2.1 About the results and proofs

The proofs of our results rely, among other ingredients, on convexity and optimal functional inequalities, exact solvability, exact Gaussian formulas, coupling arguments, stochastic calculus, variational formulas, contraction properties, and regularization.

The proofs of Theorems 1.1 and 1.2 are based on the explicit Gaussian nature of the OU process, which allows us to use Gaussian formulas for all the distances and divergences that we consider (the Gaussian formula for \(\mathrm {Fisher}\) seems to be new). Our analysis of the convergence to equilibrium of the OU process seems to go beyond what is already known, see for instance [47] and [8,9,10,11].

Theorem 1.3 is a one-dimensional analogue of [15, Th. 1.2]. The proof exploits the explicit knowledge of the eigenfunctions of the dynamics (2.6), associated to the first two non-zero spectral values, and their remarkable properties. The first one is associated to the spectral gap and the optimal Poincaré inequality. It implies Corollary 1.4, which provides all our lower bounds on the mixing time for the cutoff.

The proof of Theorem 1.5 is based on a contraction property and the upper bound for matrix OU processes. It is not available beyond the matrix cases. All the other upper bounds that we establish are related to an optimal exponential decay which comes from convexity and sometimes involves coupling, the simplest instance being Theorem 1.7 for the Wasserstein distance. The use of Wasserstein metrics for Dyson dynamics is quite natural, see for instance [13].

The proof of Theorem 1.7 for the \(\mathrm {TV}\) and \(\mathrm {Hellinger}\) distances relies on the knowledge of the optimal exponential decay of the entropy (with respect to equilibrium) related to the optimal logarithmic Sobolev inequality. Since pointwise initial conditions have infinite entropy, the proof proceeds in three steps: first, we regularize the initial condition to make its entropy finite; second, we use the optimal exponential decay of the entropy of the process starting from this regularized initial condition; third, we control the distance between the processes starting from the initial condition and from its regularized version. This last part is inspired by a work of Lacoin [48] on the simple exclusion process on the segment, subsequently adapted to continuous state spaces [18, 19], where one controls an area between two versions of the process.

The (optimal) exponential decay of the entropy (Lemma B.2) is equivalent to the (optimal) logarithmic Sobolev inequality (Lemma B.1). For the DOU process, the optimal logarithmic Sobolev inequality provided by Lemma B.1 also achieves the universal bound with respect to the spectral gap, just like for Gaussians. This sharpness between the best logarithmic Sobolev constant and the spectral gap also holds, for instance, for the random walk on the hypercube, a discrete process for which a cutoff phenomenon can be established with the optimal logarithmic Sobolev inequality, and which can be related to the OU process, see for instance [29, 30] and references therein. If we generalize the DOU process by adding an arbitrary convex function to V, then we still have a logarithmic Sobolev inequality, see [25] for several proofs including the one via the Bakry–Émery criterion; however, the optimal logarithmic Sobolev constant is no longer explicit nor sharp with respect to the spectral gap, and the spectral gap is no longer explicit.

The proof of Theorem 1.10 relies crucially on the tensorization property of \(\mathrm {Kullback}\) and on the asymptotics of the normalizing constant \(C_n^\beta \) at equilibrium.

2.2 Analysis and geometry of the equilibrium

The full space \({\mathbb {R}}^n\) is, up to a finite union of hyperplanes, covered by n! disjoint isometric copies of the convex domain \(D_n\), obtained by permuting the coordinates (simplices or Weyl chambers). Following [25], for all \(\beta \ge 0\), let us define the law \(P_{*n}^\beta \) on \({\mathbb {R}}^n\) with density proportional to \(\mathrm {e}^{-E}\), just like \(P_n^\beta \) in (1.6) but without the indicator \({\mathbf {1}}_{(x_1,\ldots ,x_n)\in {\overline{D}}_n}\).

If \(\beta =0\) then \(P_{*n}^0=P_n^0={\mathcal {N}}(0,\frac{1}{n}I_n)\) according to our definition of \(P_n^0\).

If \(\beta > 0\) then \(P_{*n}^\beta \) has density \((C_{*n}^\beta )^{-1}\mathrm {e}^{-E}\) with \(C_{*n}^\beta =n!C_n^\beta \), where \(C_n^\beta \) is the normalization of \(P_n^\beta \). Moreover \(P_{*n}^\beta \) is a mixture of n! isometric copies of \(P_n^\beta \), while \(P_n^\beta \) is the image law or push forward of \(P_{*n}^\beta \) by the map \(\Psi :{\mathbb {R}}^n\rightarrow {\overline{D}}_n\) defined in (1.17). Furthermore, for all bounded measurable \(f:{\mathbb {R}}^n\rightarrow {\mathbb {R}}\), denoting \(\Sigma _n\) the symmetric group of permutations of \(\{1,\ldots ,n\}\),

$$\begin{aligned} \int f\mathrm {d}P_{*n}^\beta =\int f_{\mathrm {sym}}\mathrm {d} P_n^\beta \quad \text {with}\quad f_{\mathrm {sym}}(x_1,\ldots ,x_n) :=\frac{1}{n!}\sum _{\sigma \in \Sigma _n}f(x_{\sigma (1)},\ldots ,x_{\sigma (n)}). \end{aligned}$$

Regarding log-concavity, it is important to realize that if \(\beta =0\) then E is convex on \({\mathbb {R}}^n\), while if \(\beta >0\) then E is convex on \(D_n\) but is not convex on \({\mathbb {R}}^n\) and has n! isometric local minima.

  • The law \(P_{*n}^\beta \) is centered but is not log-concave when \(\beta >0\) since E is not convex on \({\mathbb {R}}^n\).

    As \(\beta \rightarrow 0^+\) the law \(P_{*n}^\beta \) tends to \(P_{*n}^0=P_n^0={\mathcal {N}}(0,\frac{1}{n}I_n)\) which is log-concave.

  • The law \(P_n^\beta \) is not centered but is log-concave for all \(\beta \ge 0\).

    Its density vanishes at the boundary of \(D_n\) if \(\beta >0\).

    As \(\beta \rightarrow 0^+\) the law \(P_n^\beta \) tends to the law of the order statistics of n i.i.d. \({\mathcal {N}}(0,\frac{1}{n})\).

2.3 Spectral analysis of the generator: the non-interacting case

This subsection and the next deal with analytical aspects of our dynamics. We start with the OU process (\(\beta =0\)) for which everything is explicit; the next subsection deals with the DOU process (\(\beta \ge 1\)).

The infinitesimal generator of the OU process is given by

$$\begin{aligned} {{\,\mathrm{G}\,}}f =\frac{1}{n}\Bigr (\Delta -\nabla E\cdot \nabla \Bigr ) =\frac{1}{n}\sum _{i=1}^n\partial _i^2 - \sum _{i=1}^nV'(x_i)\partial _i. \end{aligned}$$
(2.1)

It is a self-adjoint operator on \(L^2({\mathbb {R}}^n, P_n^0)\) that leaves globally invariant the set of polynomials. Its spectrum is the set of all non-positive integers, that is, \(\lambda _0 = 0> \lambda _1 = - 1> \lambda _2 = -2 > \ldots \). The corresponding eigenspaces \(F_0,F_1,F_2,\ldots \) are finite dimensional: \(F_m\) is spanned by the multivariate Hermite polynomials of degree m, in other words tensor products of univariate Hermite polynomials. In particular, \(F_0\) is the vector space of constant functions while \(F_1\) is the n-dimensional vector space of all linear functions.

Let us point out that \({{\,\mathrm{G}\,}}\) can be restricted to the set of \(P_n^0\) square integrable symmetric functions: it leaves globally invariant the set of symmetric polynomials, its spectrum is unchanged but the associated eigenspaces \(E_m\) are the restrictions of the vector spaces \(F_m\) to the set of symmetric functions, in other words, \(E_m\) is spanned by the multivariate symmetrized Hermite polynomials of degree m. Note that \(E_1\) is the one-dimensional space generated by \(\pi (x) =x_1+\ldots +x_n\).

The Markov semigroup \({(\mathrm {e}^{t{{\,\mathrm{G}\,}}})}_{t\ge 0}\) generated by \({{\,\mathrm{G}\,}}\) admits \(P_n^0\) as a reversible invariant law since \({{\,\mathrm{G}\,}}\) is self-adjoint in \(L^2(P_n^0)\). Following [62], let us introduce the heat kernel \(p_t(x,y)\) which is the density of \(\mathrm {Law}(X^n_t\mid X^n_0=x)\) with respect to the invariant law \(P_n^0\). The long-time behavior reads \(\lim _{t\rightarrow \infty }p_t(x,\cdot )=1\) for all \(x\in {\mathbb {R}}^n\). Let \(\left\| \cdot \right\| _p\) be the norm of \(L^p=L^p(P_n^0)\). For all \(1\le p\le q\), \(t\ge 0\), \(x\in {\mathbb {R}}^n\), we have

$$\begin{aligned} 2\Vert \mathrm {Law}(X^n_t\mid X^n_0=x)-P_n^0\Vert _{\mathrm {TV}}= & {} \Vert p_t(x,\cdot )-1\Vert _1 \nonumber \\\le & {} \Vert p_t(x,\cdot )-1\Vert _p \le \Vert p_t(x,\cdot )-1\Vert _q. \end{aligned}$$
(2.2)

In the particular case \(p=2\) we can write

$$\begin{aligned} \Vert p_t(x,\cdot )-1\Vert _2^2 =\sum _{m=1}^\infty \mathrm {e}^{-2mt}\sum _{\psi \in B_m}|\psi (x)|^2, \end{aligned}$$
(2.3)

where \(B_m\) is an orthonormal basis of \(F_m\subset L^2(P_n^0)\), hence

$$\begin{aligned} \Vert p_t(x,\cdot )-1\Vert _2^2 \ge \mathrm {e}^{-2t}\sum _{\psi \in B_1}|\psi (x)|^2, \end{aligned}$$
(2.4)

which leads to a lower bound for the \(\chi ^2\) (in other words \(L^2\)) cutoff, provided one can estimate \(\sum _{\psi \in B_1}|\psi (x)|^2\), which is the square of the norm of the projection of \(\delta _x\) onto the span of \(B_1\).
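In the OU case this quantity is explicit: under \(P_n^0={\mathcal {N}}(0,\frac{1}{n}I_n)\), the functions \(\sqrt{n}x_1,\ldots ,\sqrt{n}x_n\) form an orthonormal basis of \(F_1\), so that \(\sum _{\psi \in B_1}|\psi (x)|^2=n|x|^2\) and

$$\begin{aligned} \Vert p_t(x,\cdot )-1\Vert _2^2 \ge n|x|^2\mathrm {e}^{-2t}, \end{aligned}$$

which remains bounded away from 0 as long as \(t\le \log (\sqrt{n}|x|)\), in agreement with the \(\chi ^2\) cutoff time of Theorem 1.2.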

Following [62, Th. 6.2], an upper bound would follow from a Bakry–Émery curvature–dimension criterion \(\mathrm {CD}(\rho ,d)\) with a finite dimension d, in relation with Nash–Sobolev inequalities and dimensional pointwise estimates on the heat kernel \(p_t(x,\cdot )\) or ultracontractivity of the Markov semigroup, see for instance [63, Sec. 4.1]. The OU process satisfies \(\mathrm {CD}(\rho ,\infty )\) but never \(\mathrm {CD}(\rho ,d)\) with d finite, and is not ultracontractive. Actually, the OU process is a critical case, see [3, Ex. 2.7.3].

2.4 Spectral analysis of the generator: the interacting case

We now assume that \(\beta \ge 1\). The infinitesimal generator of the DOU process is the operator

$$\begin{aligned} {{\,\mathrm{G}\,}}f =\frac{1}{n}\Bigr (\Delta -\nabla E\cdot \nabla \Bigr ) =\frac{1}{n}\sum _{i=1}^n\partial _i^2 - \sum _{i=1}^nV'(x_i)\partial _i +\frac{\beta }{2n}\sum _{j\ne i}\frac{\partial _i-\partial _j}{x_i-x_j}. \end{aligned}$$
(2.5)

Despite the interaction term, the operator leaves globally invariant the set of symmetric polynomials. Following Lassalle in [4, 49], see also [25], the operator \({{\,\mathrm{G}\,}}\) is a self-adjoint operator on the space of \(P_{*n}^\beta \) square integrable symmetric functions of n variables, its spectrum does not depend on \(\beta \) and matches the spectrum of the OU process case \(\beta =0\). In particular the spectral gap is 1. The eigenspaces \(E_m\) are spanned by the generalized symmetrized Hermite polynomials of degree m. For instance, \(E_1\) is the one-dimensional space generated by \( \pi (x)=x_1+\ldots +x_n\) while \(E_2\) is the two-dimensional space spanned by

$$\begin{aligned} (x_1+\ldots +x_n)^2-1 \quad \text {and}\quad x_1^2+\ldots +x_n^2-1-\frac{\beta }{2}(n-1). \end{aligned}$$
(2.6)

From the isometry between \(L^2({\overline{D}}_n,P_n^\beta )\) and \(L^2_{\mathrm {sym}}({\mathbb {R}}^n,P_{*n}^\beta )\), the above explicit spectral decomposition applies to the semigroup of the DOU on \({\overline{D}}_n\). Formally, the discussion presented at the end of the previous subsection still applies. However, in the present interacting case the integrability properties of the heat kernel are not known: in particular, we do not know whether \(p_t(x,\cdot )\) lies in \(L^p(P_n^\beta )\) for \(t>0\), \(x\in {\overline{D}}_n\) and \(p>1\). This leads to the question, of independent interest, of pointwise upper and lower Gaussian bounds for heat kernels similar to those for the OU process, with explicit dependence of the constants on the dimension. We refer for example to [36, 41, 65] for some results in this direction.

2.5 Mean-field limit

The measure \(P_n^\beta \) is log-concave since E is convex, and its density reads

$$\begin{aligned} x\in {\mathbb {R}}^n\mapsto \frac{\mathrm {e}^{-\frac{n}{2}|x|^2}}{C_n^\beta }\prod _{i>j}(x_i-x_j)^\beta {\mathbf {1}}_{x_1\le \ldots \le x_n}. \end{aligned}$$
(2.7)

See [25, Sec. 2.2] for a high-dimensional analysis. The Boltzmann–Gibbs measure \(P_n^\beta \) is known as the \(\beta \)-Hermite ensemble or H\(\beta \)E. When \(\beta =2\), it is better known as the Gaussian Unitary Ensemble (GUE). If \(X^n\sim P_n^\beta \), the Wigner theorem states that the empirical measure of the coordinates of \(X^n\) converges in distribution to a semi-circle law, namely

$$\begin{aligned} \frac{1}{n}\sum _{i=1}^n\delta _{X^{n,i}} \underset{n\rightarrow \infty }{\overset{\text {weak}}{\longrightarrow }} \frac{\sqrt{2\beta -x^2}}{\beta \pi }{\mathbf {1}}_{x\in [-\sqrt{2\beta },\sqrt{2\beta }]}\mathrm {d}x, \end{aligned}$$
(2.8)

and this can be deduced in this Coulomb gas context from a large deviation principle as in [12].
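For \(\beta =2\), the convergence (2.8) is easy to visualize by sampling the GUE directly; in the Monte Carlo sketch below, the entry variances (1/n on the diagonal, 1/n split between real and imaginary parts off the diagonal) follow the description given after Corollary 1.6:

```python
import numpy as np

n, rng = 1000, np.random.default_rng(0)
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = (A + A.conj().T) / (2.0 * np.sqrt(n))   # GUE_n normalization
eigs = np.linalg.eigvalsh(H)
# The support should approach [-2, 2] = [-sqrt(2 beta), sqrt(2 beta)] for
# beta = 2, and the second moment should approach beta/2 = 1.
print(eigs.min(), eigs.max(), np.mean(eigs**2))
```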

Let \({(X^n_t)}_{t\ge 0}\) be the process solving (1.3) with \(\beta =0\) or \(\beta \ge 1\), and let

$$\begin{aligned} \mu ^n_t=\frac{1}{n}\sum _{i=1}^n\delta _{X^{n,i}_t} \end{aligned}$$
(2.9)

be the empirical measure of the particles at time t. Following notably [14, 20, 21, 31, 53, 60], if the sequence of initial conditions \({(\mu _0^n)}_{n\ge 1}\) converges weakly as \(n\rightarrow \infty \) to a probability measure \(\mu _0\), then the sequence of measure valued processes \({({(\mu _t^n)}_{t\ge 0})}_{n\ge 1}\) converges weakly to the unique probability measure valued deterministic process \({(\mu _t)}_{t\ge 0}\) satisfying the evolution equation

$$\begin{aligned} \langle \mu _t,f\rangle= & {} \langle \mu _0,f\rangle -\int _0^t \int V'(x)f'(x)\mu _s(\mathrm {d}x) \mathrm {d}s \nonumber \\&+\frac{\beta }{2} \int _0^t\int _{{\mathbb {R}}^2}\frac{f'(x)-f'(y)}{x-y}\mu _s(\mathrm {d}x)\mu _s(\mathrm {d}y)\mathrm {d}s \quad \end{aligned}$$
(2.10)

for all \(t\ge 0\) and \(f\in {\mathcal {C}}^3_b({\mathbb {R}},{\mathbb {R}})\). Equation (2.10) is a weak formulation of a McKean–Vlasov equation, or free Fokker–Planck equation, associated to a free OU process. Moreover, if \(\mu _0\) has all its moments finite, then for all \(t\ge 0\), we have the free Mehler formula

$$\begin{aligned} \mu _t = \mathrm {dil}_{\mathrm {e}^{-2t}}\mu _0\boxplus \mathrm {dil}_{\sqrt{1-\mathrm {e}^{-2t}}}\mu _\infty , \end{aligned}$$
(2.11)

where \(\mathrm {dil}_\sigma \mu \) is the law of \(\sigma X\) when \(X\sim \mu \), where “\(\boxplus \)” stands for the free convolution of probability measures from Voiculescu's free probability theory, and where \(\mu _\infty \) is the semi-circle law of variance \(\frac{\beta }{2}\). In particular, if \(\mu _0\) is a semi-circle law then \(\mu _t\) is a semi-circle law for all \(t\ge 0\).

Let us introduce the k-th moment \(m_k(t):=\displaystyle \int x^k\mu _t(\mathrm {d}x)\) of \(\mu _t\). The first and second moments satisfy the differential equations \(m_1'=-m_1\) and \(m_2'=-2m_2+\beta \) respectively, which give

$$\begin{aligned} m_1(t)=\mathrm {e}^{-t}m_1(0) \underset{t\rightarrow \infty }{\longrightarrow }0 \quad \text {and}\quad m_2(t)=m_2(0)\mathrm {e}^{-2t}+\frac{\beta }{2}(1-\mathrm {e}^{-2t}) \underset{t\rightarrow \infty }{\longrightarrow }\frac{\beta }{2}.\nonumber \\ \end{aligned}$$
(2.12)
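Since for \(\beta =2\) the DOU spectrum can be sampled exactly at any time via the matrix Mehler formula of Sect. 5.1 below, the moment flow (2.12) admits a quick numerical sanity check. The following sketch is illustrative only; it assumes numpy, the GUE normalization (5.3), and an arbitrary initial spectrum.

```python
import numpy as np

rng = np.random.default_rng(0)
n, t = 300, 0.7                       # illustrative size and time

def gue(n):
    # GUE_n normalized as in (5.3): pi_{i,j}(H) ~ N(0, 1/n)
    g = rng.normal(size=(n, n)) + 1j*rng.normal(size=(n, n))
    return (g + g.conj().T)/(2*np.sqrt(n))

x0 = np.linspace(-2.0, 2.0, n)        # initial spectrum, m_1(0) = 0
h0 = np.diag(x0).astype(complex)
ht = np.exp(-t)*h0 + np.sqrt(1 - np.exp(-2*t))*gue(n)  # matrix Mehler formula
lam = np.linalg.eigvalsh(ht)          # DOU spectrum at time t (beta = 2)
m2_pred = (x0**2).mean()*np.exp(-2*t) + 1 - np.exp(-2*t)  # (2.12) with beta/2 = 1
print(lam.mean(), (lam**2).mean(), m2_pred)  # m_1 ~ 0, m_2 ~ m2_pred
```

Up to fluctuations of order 1/n, the empirical moments match (2.12).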

More generally, beyond the first two moments, the Cauchy–Stieltjes transform

$$\begin{aligned} z\in {\mathbb {C}}_+ =\{z\in {\mathbb {C}}:\Im z>0\}\mapsto s_t(z) =\int _{{\mathbb {R}}}\frac{\mu _t(\mathrm {d}x)}{x-z} \end{aligned}$$
(2.13)

of \(\mu _t\) is the solution of the following complex Burgers equation

$$\begin{aligned} \partial _ts_t(z)=s_t(z)+z\partial _zs_t(z)+ \beta s_t(z)\partial _zs_t(z),\quad t\ge 0, z\in {\mathbb {C}}_+. \end{aligned}$$
(2.14)

The semi-circle law on \([-c,c]\) has density \(\frac{2\sqrt{c^2-x^2}}{\pi c^2}{\mathbf {1}}_{x\in [-c,c]}\), mean 0, variance (second moment) \(\frac{c^2}{4}\), and Cauchy–Stieltjes transform \(s(z)=\frac{\sqrt{4z^2-4c^2}-2z}{c^2}\), \(z\in {\mathbb {C}}_+\).
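As a quick check of this closed form, one can compare it with a direct quadrature of the defining integral (2.13); a minimal sketch, assuming numpy and scipy, and a test point z with \(\Re z>0\) so that the principal branch of the complex square root is the correct one:

```python
import numpy as np
from scipy.integrate import quad

c, z = 2.0, 0.8 + 1.3j                          # arbitrary radius and point in C_+
rho = lambda x: 2*np.sqrt(c**2 - x**2)/(np.pi*c**2)   # semi-circle density
# 1/(x - z) = ((x - Re z) + i Im z)/|x - z|^2, integrated part by part
s_quad = (quad(lambda x: rho(x)*(x - z.real)/abs(x - z)**2, -c, c)[0]
          + 1j*quad(lambda x: rho(x)*z.imag/abs(x - z)**2, -c, c)[0])
s_formula = (np.sqrt(4*z**2 - 4*c**2) - 2*z)/c**2
print(s_quad, s_formula)                        # the two values agree
```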

The cutoff phenomenon is in a sense a diagonal \((t,n)\) estimate, blending long-time behavior and high dimension. When \(|z_0^n|\) is of order n, cutoff occurs at a time of order \(\log (n)\): this informally corresponds to taking \(t\rightarrow \infty \) in \((\mu _t)_{t\ge 0}\).

When \(\mu _0\) is centered with the same second moment \(\frac{\beta }{2}\) as \(\mu _\infty \), there is a Boltzmann H-theorem interpretation of the limiting dynamics as \(n\rightarrow \infty \): the steady-state is the Wigner semi-circle law \(\mu _\infty \), the second moment is conserved by the dynamics, and the Voiculescu entropy is monotone along the dynamics, converges exponentially fast, and is maximized by the steady-state.

2.6 \(L^p\) cutoff

Following [26], we can deduce an \(L^p\) cutoff starting from x from an \(L^1\) cutoff by showing that the heat kernel \(p_t(x,\cdot )\) is in \(L^p(P_n^\beta )\) for some \(t>0\). Thanks to the Mehler formula, it can be checked that this holds in the OU case, despite the lack of ultracontractivity. The heat kernel of the DOU process is less accessible.

In another exactly solvable direction, the \(L^p\) cutoff phenomenon has been studied for instance in [62, 64] for Brownian motion on compact simple Lie groups, and in [55, 64] for Brownian motion on symmetric spaces, in relation with representation theory, an idea which goes back to the early works of Diaconis on random walks on groups.

2.7 Cutoff window and profile

Once a cutoff phenomenon is established, one can ask for a finer description of the pattern of convergence to equilibrium. The cutoff window is the order of magnitude of the transition time from the value \(\max \) to the value 0: more precisely, if cutoff occurs at time \(c_n\) then we say that the cutoff window is \(w_n\) if

$$\begin{aligned} \varlimsup _{b\rightarrow +\infty } \varlimsup _{n\rightarrow \infty } \mathrm {dist}(\mathrm {Law}(X_{c_n + bw_n})\mid P_n^\beta )&=0,\\ \varliminf _{b\rightarrow -\infty } \varliminf _{n\rightarrow \infty } \mathrm {dist}(\mathrm {Law}(X_{c_n + bw_n})\mid P_n^\beta )&=\max , \end{aligned}$$

and for any \(b\in {\mathbb {R}}\)

$$\begin{aligned} 0< \varliminf _{n\rightarrow \infty } \mathrm {dist}(\mathrm {Law}(X_{c_n + bw_n})\mid P_n^\beta ) \le \varlimsup _{n\rightarrow \infty } \mathrm {dist}(\mathrm {Law}(X_{c_n + bw_n})\mid P_n^\beta ) < \max . \end{aligned}$$

Note that necessarily \(w_n = o(c_n)\) by definition of the cutoff phenomenon. Note also that \(w_n\) is unique in the following sense: \(w'_n\) is a cutoff window if and only if \(w_n/w'_n\) remains bounded away from 0 and \(\infty \) as \(n\rightarrow \infty \).

We say that the cutoff profile is given by \(\varphi :{\mathbb {R}}\rightarrow [0,1]\) if, for all \(b\in {\mathbb {R}}\),

$$\begin{aligned} \lim _{n\rightarrow \infty } \mathrm {dist}(\mathrm {Law}(X_{c_n + bw_n})\mid P_n^\beta ) = \varphi (b). \end{aligned}$$

The analysis of the OU process carried out in Theorems 1.1 and 1.2 can be pushed further to establish the so-called cutoff profiles; we refer to the end of Sect. 3 for details.

Regarding the DOU process, such a detailed description of the convergence to equilibrium does not seem easily accessible. However it is straightforward to deduce from our proofs that the cutoff window is of order 1, in other words the inverse of the spectral gap, in the setting of Corollary 1.6. This is also the case in the setting of Corollary 1.8 for the Wasserstein distance.

We believe that this remains true in the setting of Corollary 1.8 for the TV and Hellinger distances: indeed, a lower bound of the required order can be derived from the calculations in the proof of Corollary 1.4; on the other hand, our proof of the upper bound on the mixing time does not yield a precise enough upper bound on the window.

2.8 Other potentials

It is natural to ask about the cutoff phenomenon for the process solving (1.3) when V is a more general \({\mathcal {C}}^2\) function. The invariant law \(P_n^\beta \) of this Markov diffusion reads

$$\begin{aligned} \frac{\mathrm {e}^{-n\sum _{i=1}^nV(x_i)}}{C_n^\beta }\prod _{i>j}(x_i-x_j)^\beta {\mathbf {1}}_{(x_1,\ldots ,x_n)\in {\overline{D}}_n}\mathrm {d}x_1\ldots \mathrm {d}x_n. \end{aligned}$$
(2.15)

The case where \(V-\frac{\rho }{2}\left| \cdot \right| ^2\) is convex for some constant \(\rho \ge 0\) generalizes the DOU case and has exponential convergence to equilibrium, see [25]. Three exactly solvable cases are known:

  • \(\mathrm {e}^{-V(x)}=\mathrm {e}^{-\frac{x^2}{2}}\): the DOU process associated to the Gaussian law weight and the \(\beta \)-Hermite ensemble including HOE/HUE/HSE when \(\beta \in \{1,2,4\}\),

  • \(\mathrm {e}^{-V(x)}=x^{a-1}\mathrm {e}^{-x}{\mathbf {1}}_{x\in [0,\infty )}\): the Dyson–Laguerre process associated to the Gamma law weight and the \(\beta \)-Laguerre ensembles including LOE/LUE/LSE when \(\beta \in \{1,2,4\}\),

  • \(\mathrm {e}^{-V(x)}=x^{a-1}(1-x)^{b-1}{\mathbf {1}}_{x\in [0,1]}\): the Dyson–Jacobi process associated to the Beta law weight and the \(\beta \)-Jacobi ensembles including JOE/JUE/JSE when \(\beta \in \{1,2,4\}\),

up to a scaling. Following Lassalle [4, 49,50,51] and Bakry [5], in these three cases, the multivariate orthogonal polynomials of the invariant law \(P_n^\beta \) are the eigenfunctions of the dynamics of the process. We refer to [32, 35, 54] for more information on (H/L/J)\(\beta \)E random matrix models.

The contraction property or spectral projection used to pass from a matrix process to the Dyson process can be used to pass from BM on the unitary group to the Dyson circular process, for which the invariant law is the Circular Unitary Ensemble (CUE). This provides an upper bound for the cutoff phenomenon. The cutoff for BM on the unitary group is known and holds at a critical time of order \(\log (n)\), see for instance [55, 62, 64].

More generally, we could ask about the cutoff phenomenon for a McKean–Vlasov type interacting particle system \({(X^n_t)}_{t\ge 0}\) in \(({\mathbb {R}}^d)^n\), solution of a stochastic differential equation of the form

$$\begin{aligned} \mathrm {d}X^{n,i}_t= & {} \sigma _{n,t}(X^n)\mathrm {d}B^{n,i}_t -\nabla V_{n,t}(X^{n,i}_t)\mathrm {d}t \nonumber \\&-\sum _{j\ne i}\nabla W_{n,t}(X^{n,i}_t-X^{n,j}_t)\mathrm {d}t,\quad 1\le i\le n, \end{aligned}$$
(2.16)

for various types of confinement V and interaction W (convex, repulsive, attractive, repulsive-attractive, etc.), and discuss the relation with the propagation of chaos. The case where V and W are both convex and constant in time is already very well studied from the point of view of long-time behavior and mean-field limit, in relation with convexity, see for instance [20, 21, 53].

Regarding universality, it is worth noting that if \(V=\left| \cdot \right| ^2\) and if W is convex then the proof by factorization of the optimal Poincaré and logarithmic Sobolev inequalities and their extremal functions given in [25] remains valid, paving the way to the generalization of many of our results in this spirit. On the other hand, the convexity of the limiting energy functional in the mean-field limit is of Bochner type and suggests taking for W a power function, in other words a Riesz type interaction.

2.9 Alternative parametrization

If \({(X^n_t)}_{t\ge 0}\) is the process solution of the stochastic differential equation (1.3), then for all real parameters \(\alpha >0\) and \(\sigma >0\), the space scaled and time changed stochastic process \({(Y^n_t)}_{t\ge 0}={(\sigma X^n_{\alpha t})}_{t\ge 0}\) solves the stochastic differential equation

$$\begin{aligned} Y^n_0=\sigma x^n_0,\quad \mathrm {d}Y_t^{n,i} =\sqrt{\frac{2\alpha \sigma ^2}{n}}\mathrm {d}B^i_t -\alpha Y_t^{n,i}\mathrm {d}t +\frac{\alpha \beta \sigma ^2}{n}\sum _{j\ne i}\frac{\mathrm {d}t}{Y_t^{n,i}-Y_t^{n,j}},\quad 1\le i\le n, \end{aligned}$$
(2.17)

where \({(B_t)}_{t\ge 0}\) is a standard n-dimensional BM. The invariant law of \({(Y^n_t)}_{t\ge 0}\) is

$$\begin{aligned} \frac{\mathrm {e}^{-\frac{n}{2\sigma ^2}|y|^2}}{C_n^\beta } \prod _{i>j}(y_i-y_j)^\beta {\mathbf {1}}_{(y_1,\ldots ,y_n)\in {\overline{D}}_n} \mathrm {d}y_1\ldots \mathrm {d}y_n \end{aligned}$$
(2.18)

where \(C_n^\beta \) is the normalizing constant. This law and its normalization \(C_n^\beta \) depend on the “shape parameter” \(\beta \) and the “scale parameter” \(\sigma \), and do not depend on the “speed parameter” \(\alpha \). When \(\beta >0\), taking \(\sigma ^2=\beta ^{-1}\), the stochastic differential equation (2.17) boils down to

$$\begin{aligned} Y^n_0=\frac{x^n_0}{\sqrt{\beta }},\quad \mathrm {d}Y_t^{n,i} =\sqrt{\frac{2\alpha }{n\beta }}\mathrm {d}B^i_t -\alpha Y_t^{n,i}\mathrm {d}t +\frac{\alpha }{n}\sum _{j\ne i}\frac{\mathrm {d}t}{Y_t^{n,i}-Y_t^{n,j}},\quad 1\le i\le n\nonumber \\ \end{aligned}$$
(2.19)

while the invariant law becomes

$$\begin{aligned} \frac{\mathrm {e}^{-\frac{n\beta }{2}|y|^2}}{C_n^\beta } \prod _{i>j}(y_i-y_j)^\beta {\mathbf {1}}_{(y_1,\ldots ,y_n)\in {\overline{D}}_n} \mathrm {d}y_1\ldots \mathrm {d}y_n. \end{aligned}$$
(2.20)

Equation (2.19) is the one considered in [37, Eq. (12.4)] and in [46, Eq. (1.1)]. The advantage of (2.19) is that \(\beta \) can now be truly interpreted as an inverse temperature and the right-hand side in the analogue of (2.8) does not depend on \(\beta \), while the drawback is that we cannot turn off the interaction by setting \(\beta =0\) and recover the OU process as in (1.3). It is worth mentioning that, for instance, Theorem 1.7 remains the same for the process solving (2.19); in particular the cutoff occurs at the critical time \(\frac{c_n}{\alpha }\) and does not depend on \(\beta \).

2.10 Discrete models

There are several discrete space Markov processes admitting the OU process as a scaling limit, such as the random walk on the discrete hypercube, related to the Ehrenfest model, for which the cutoff has been studied in [29, 30], and the M/M/\(\infty \) queuing process, for which a discrete Mehler formula is available [24]. Certain discrete space Markov processes incorporate a singular repulsion mechanism, such as the exclusion process on the segment, for which the study of the cutoff in [48] shares similarities with our proof of Theorem 1.7. It is worth noting that there are discrete Coulomb gases, related to orthogonal polynomials for discrete measures, suggesting the study of discrete Dyson processes. More generally, it could be natural to study the cutoff phenomenon for Markov processes on infinite discrete state spaces, under a curvature condition, even if the subject is notoriously disappointing in terms of high-dimensional analysis. We refer to the recent work [61] for the finite state space case.

3 Cutoff phenomenon for the OU

In this section, we prove Theorems 1.1 and 1.2: actually we only prove the latter since it implies the former. We start by recalling a well-known fact.

Lemma 3.1

(Mehler formula). If \({(Y_t)}_{t\ge 0}\) is an OU process in \({\mathbb {R}}^d\), solution of the stochastic differential equation \(Y_0=y_0\in {\mathbb {R}}^d\) and \(\mathrm {d}Y_t=\sigma \mathrm {d}B_t-\mu Y_t\mathrm {d}t\) for parameters \(\sigma >0\) and \(\mu >0\), where B is a standard d-dimensional Brownian motion, then

$$\begin{aligned} {(Y_t)}_{t\ge 0}= & {} {\Bigr (y_0 \mathrm {e}^{-\mu t}+\sigma \int _0^t\mathrm {e}^{\mu (s-t)}\mathrm {d}B_s\Bigr )}_{t\ge 0} \text { hence } Y_t \\&\sim {\mathcal {N}}\Bigr ( y_0\mathrm {e}^{-\mu t}, \frac{\sigma ^2}{2}\frac{1-\mathrm {e}^{-2\mu t}}{\mu }\mathrm {I}_d \Bigr ) \text { for all }t\ge 0. \end{aligned}$$

Moreover its coordinates are independent one-dimensional OU processes with initial condition \(y_0^i\) and invariant law \({\mathcal {N}}(0,\frac{\sigma ^2}{2\mu })\), \(1\le i\le d\).
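As an illustration, the Gaussian law predicted by the Mehler formula can be checked against a crude Euler–Maruyama simulation; a minimal sketch, assuming numpy, with arbitrary parameter values:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, y0, t = 1.0, 2.0, 3.0, 0.8
paths, dt = 100_000, 1e-3
y = np.full(paths, y0)
for _ in range(int(t/dt)):        # Euler-Maruyama for dY = sigma dB - mu Y dt
    y += -mu*y*dt + sigma*np.sqrt(dt)*rng.normal(size=paths)
print(y.mean(), y0*np.exp(-mu*t))                        # Mehler mean
print(y.var(), sigma**2*(1 - np.exp(-2*mu*t))/(2*mu))    # Mehler variance
```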

Proof of Theorem 1.1 and Theorem 1.2

By using Lemma 3.1, for all \(n\ge 1\) and \(t\ge 0\),

$$\begin{aligned} Z^n_t \sim {\mathcal {N}}\Bigr (z^n_0\mathrm {e}^{-t},\frac{1-\mathrm {e}^{-2t}}{n}I_n\Bigr )= & {} \otimes _{i=1}^n{\mathcal {N}}\Bigr (z^{n,i}_0\mathrm {e}^{-t},\frac{1-\mathrm {e}^{-2t}}{n}\Bigr ),\nonumber \\ \ P_n^0= & {} {\mathcal {N}}\Bigr (0,\frac{I_n}{n}\Bigr ) ={\mathcal {N}}\Bigr (0,\frac{1}{n}\Bigr )^{\otimes n}. \end{aligned}$$
(3.1)

Hellinger, Kullback, \(\chi ^2\), Fisher, and Wasserstein cutoffs. A direct computation from (3.1), using Lemma A.5 either via the multivariate Gaussian formulas or via the univariate ones together with tensorization, gives

$$\begin{aligned} \mathrm {Hellinger}^2(\mathrm {Law}(Z^n_t),P_n^0)&=1-\exp \Bigr (-\frac{n}{4}\frac{|z^n_0|^2\mathrm {e}^{-2t}}{2-\mathrm {e}^{-2t}}+\frac{n}{4}\log \Bigr (4\frac{1-\mathrm {e}^{-2t}}{(2-\mathrm {e}^{-2t})^2}\Bigr )\Bigr ), \end{aligned}$$
(3.2)
$$\begin{aligned} 2\mathrm {Kullback}(\mathrm {Law}(Z^n_t)\mid P_n^0)&=n|z^n_0|^2\mathrm {e}^{-2t}-n\mathrm {e}^{-2t}-n\log (1-\mathrm {e}^{-2t}),\end{aligned}$$
(3.3)
$$\begin{aligned} \chi ^2(\mathrm {Law}(Z^n_t)\mid P_n^0)&=-1+\frac{1}{(1-\mathrm {e}^{-4t})^{n/2}} \exp \Bigr ( n|z_0^n|^2 \frac{\mathrm {e}^{-2t}}{1+\mathrm {e}^{-2t}} \Bigr ),\end{aligned}$$
(3.4)
$$\begin{aligned} \mathrm {Fisher}(\mathrm {Law}(Z^n_t)\mid P_n^0)&=n^2|z^n_0|^2\mathrm {e}^{-2t}+n^2\frac{\mathrm {e}^{-4t}}{1-\mathrm {e}^{-2t}},\end{aligned}$$
(3.5)
$$\begin{aligned} \mathrm {Wasserstein}^2(\mathrm {Law}(Z^n_t),P_n^0)&=|z^n_0|^2\mathrm {e}^{-2t}+2(1-\sqrt{1-\mathrm {e}^{-2t}}-\frac{1}{2} \mathrm {e}^{-2t}), \end{aligned}$$
(3.6)

which gives the desired lower and upper bounds as before by using the hypothesis on \(z^n_0\).

Total variation cutoff. By using the comparison between total variation and Hellinger distances (Lemma A.1) we deduce from (3.2) the cutoff in total variation distance at the same critical time. The upper bound for the total variation distance can alternatively be obtained by using the \(\mathrm {Kullback}\) estimate (3.3) and the Pinsker–Csiszár–Kullback inequality (Lemma A.1). Since both distributions are tensor products, we could alternatively use the tensorization property of the total variation distance (Lemma A.4) together with the one-dimensional version of the Gaussian formula for \(\mathrm {Kullback}\) (Lemma A.1) to obtain the result for the total variation. \(\square \)
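The closed form (3.3) can be cross-checked against the generic Gaussian relative entropy formula (as in Lemma A.5); a minimal sketch, assuming numpy, with \(r_2=|z_0^n|^2\):

```python
import numpy as np

def kullback_33(n, r2, t):
    # Right-hand side of (3.3), divided by 2
    e = np.exp(-2*t)
    return 0.5*(n*r2*e - n*e - n*np.log1p(-e))

def kullback_gaussian(n, r2, t):
    # Generic KL between N(m, s1^2 I_n) and N(0, s2^2 I_n), |m|^2 = r2 e^{-2t}
    s1sq, s2sq = (1 - np.exp(-2*t))/n, 1.0/n
    return 0.5*(n*s1sq/s2sq + r2*np.exp(-2*t)/s2sq - n + n*np.log(s2sq/s1sq))

print(kullback_33(50, 2.0, 1.3), kullback_gaussian(50, 2.0, 1.3))  # equal values
```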

Remark 3.2

(Competition between bias and variance mixing). From the computations of the proof of Theorem 1.2, we can show that for \(\mathrm {dist} \in \{\mathrm {TV}, \mathrm {Hellinger}, \chi ^2\}\)

$$\begin{aligned} A_t := \mathrm {dist}(\mathrm {Law}(Z^n_t)\mid \mathrm {Law}(Z^n_t-z^n_0\mathrm {e}^{-t})) \end{aligned}$$

has a cutoff at time \(c_n^{A} = \log (\sqrt{n} |z^n_0|)\), while

$$\begin{aligned} B_t := \mathrm {dist}(\mathrm {Law}(Z^n_t-z^n_0\mathrm {e}^{-t})\mid P_n^0) \end{aligned}$$

admits a cutoff at time \(c_n^{B} = \frac{1}{4} \log (n)\). The triangle inequality for \(\mathrm {dist}\) yields

$$\begin{aligned} | A_t - B_t | \le \mathrm {dist}(\mathrm {Law}(Z^n_t)\mid P_n^0)\le A_t + B_t. \end{aligned}$$

Therefore the critical time of Theorem 1.2 is dictated by either \(A_t\) or \(B_t\), according to whether \(c_n^A \gg c_n^B\) or \(c_n^A \ll c_n^B\). This can be seen as a competition between bias and variance mixing.

Remark 3.3

(Total variation discriminating event for small initial conditions). Let us introduce the random variable \(Z^n_\infty \sim P_n^0={\mathcal {N}}(0,\frac{1}{n}I_n)={\mathcal {N}}(0,\frac{1}{n})^{\otimes n}\), in accordance with (3.1). There holds

$$\begin{aligned} S_t^n:= & {} \sum _{i=1}^n (Z_t^{n,i} -z_0^{n,i}\mathrm {e}^{-t})^2 \sim \mathrm {Gamma}\Bigr (\frac{n}{2},\frac{n}{2(1-\mathrm {e}^{-2t})}\Bigr )\\&\quad \text {and}\quad |Z^n_\infty |^2\sim \mathrm {Gamma}\Bigr (\frac{n}{2},\frac{n}{2}\Bigr ). \end{aligned}$$

We can check, using an explicit computation of Hellinger and Kullback between Gamma distributions and the comparison between total variation and Hellinger distances (Lemma A.1), that

$$\begin{aligned} C_t := \mathrm {dist}(\mathrm {Law}(S^n_t)\mid \mathrm {Law}(|Z^n_\infty |^2)) \end{aligned}$$

admits a cutoff at time \(c_n^C = c_n^B = \frac{1}{4} \log (n)\). Moreover, one can exhibit a discriminating event for the TV distance. Namely, we can observe that

$$\begin{aligned} \Bigr \Vert \mathrm {Gamma} \Bigr (\frac{n}{2},\frac{n}{2(1-\mathrm {e}^{-2t})}\Bigr ) -\mathrm {Gamma}\Bigr (\frac{n}{2},\frac{n}{2}\Bigr ) \Bigr \Vert _{\mathrm {TV}} ={\mathbb {P}}( |Z^n_\infty |^2 \ge \alpha _{t})-{\mathbb {P}}( S_t^n \ge \alpha _{t}) \end{aligned}$$

with \(\alpha _t\) the unique point where the two densities meet, which happens to be

$$\begin{aligned} \alpha _{t}=-\mathrm {e}^{2t}\log (1-\mathrm {e}^{-2t})(1-\mathrm {e}^{-2t}). \end{aligned}$$
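This discriminating event is easy to check numerically: the total variation distance between the two Gamma laws, computed by direct integration of the density difference, coincides with the difference of the two survival functions at \(\alpha _t\). A minimal sketch, assuming scipy (whose Gamma parametrization uses a scale, the inverse of our rate):

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

n, t = 60, 1.0
v = 1 - np.exp(-2*t)
gS = stats.gamma(n/2, scale=2*v/n)      # Law(S_t^n): rate n/(2(1 - e^{-2t}))
gI = stats.gamma(n/2, scale=2.0/n)      # Law(|Z_inf^n|^2): rate n/2
alpha = -np.exp(2*t)*np.log(v)*v        # crossing point alpha_t of the densities
tv_quad = 0.5*quad(lambda x: abs(gS.pdf(x) - gI.pdf(x)), 0, 10)[0]
tv_event = gI.sf(alpha) - gS.sf(alpha)  # P(|Z_inf|^2 >= alpha) - P(S_t >= alpha)
print(tv_quad, tv_event)                # the two values agree
```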

From the explicit expressions (3.2), (3.3), (3.4), (3.5), (3.6), one immediately extracts the cutoff profile associated to the convergence of \(\mathrm {Law}(Z_t^n)\) to \(P_n^0\) in Hellinger, Kullback, \(\chi ^2\), Fisher and Wasserstein. For Wasserstein we already know by Theorem 1.2 that a cutoff occurs if and only if \(|z_0^n|\underset{n\rightarrow \infty }{\rightarrow } \infty \). In this case, regarding the profile, we have

$$\begin{aligned} \lim _{n\rightarrow \infty }\mathrm {Wasserstein}(\mathrm {Law}(Z_{t_{n,b}}^n),P_n^0)=\phi (b), \end{aligned}$$
(3.7)

where for all \(b\in {\mathbb {R}}\),

$$\begin{aligned} t_{n,b}=\log |z_0^n| + b\quad \text {and}\quad \phi (b)=\mathrm {e}^{-b}. \end{aligned}$$
(3.8)

For the other distances and divergences, let us assume that the following limit exists

$$\begin{aligned} a:= \lim _{n\rightarrow \infty }\sqrt{n}|z_0^n|^2\in [0,+\infty ]. \end{aligned}$$
(3.9)

This quantity can be related to

$$\begin{aligned} c_n^A:=\log (|z_0^n|\sqrt{n}) \quad \text {and}\quad c_n^B:=\frac{\log n}{4} \end{aligned}$$
(3.10)

which were already introduced in Remark 3.2. Indeed

$$\begin{aligned} a= 0 \Longleftrightarrow c_n^A \ll c_n^B,\quad a= +\infty \Longleftrightarrow c_n^A \gg c_n^B, \end{aligned}$$

while \(a\in (0,\infty )\) is equivalent to \(c_n^A \asymp c_n^B\).

Then, for \(\mathrm {dist}\in \{\mathrm {Hellinger}, \mathrm {Kullback}, \chi ^2, \mathrm {Fisher}\}\), we have, for all \(b\in {\mathbb {R}}\),

$$\begin{aligned} \lim _{n\rightarrow \infty }\mathrm {dist}(\mathrm {Law}(Z_{t_{n,b}})\mid P_n^0)=\phi (b), \end{aligned}$$
(3.11)

where \(t_{n,b}\) and \(\phi (b)\) are as in Table 1. The cutoff window is always of size 1.

Table 1 Values of \(t_{n,b}\) and \(\phi (b)\) for the cutoff profile of the OU process in (3.11)

Since the total variation distance is not expressed in a simple explicit manner, further computations are needed to extract the precise cutoff profile, which is given in the following lemma:

Lemma 3.4

(Cutoff profile in \(\mathrm {TV}\) for OU). Let \(Z^n=(Z_t^n)_{t\ge 0}\) be the OU process (1.8), started from \(z_0^n\in {\mathbb {R}}^n\), and let \(P_n^0\) be its invariant law. Assume as in (3.9) that \(a:=\lim _{n\rightarrow \infty }|z_0^n|^2\sqrt{n}\in [0,+\infty ]\), and let \(t_{n,b}\) be as in Table 1 for Hellinger. Then, for all \(b\in {\mathbb {R}}\), we have

$$\begin{aligned} \lim _{n\rightarrow \infty }\Vert \mathrm {Law}({Z}_{t_{n,b}}^n)-P_n^0\Vert _{\mathrm {TV}} =\phi (b), \end{aligned}$$

where

$$\begin{aligned} \phi (b):= {\left\{ \begin{array}{ll} \displaystyle \mathrm {erf}\Bigr (\frac{\mathrm {e}^{-b}}{2\sqrt{2}}\Bigr ) &{} \text {if }a=+\infty \\ \displaystyle \mathrm {erf}\Bigr (\frac{\mathrm {e}^{-2b}}{4}\Bigr ) &{} \text {if }a=0 \\ \displaystyle \mathrm {erf}\Bigr (\frac{\sqrt{2a\mathrm {e}^{-2b}+\mathrm {e}^{-4b}}}{4}\Bigr ) &{} \text {if }a\in (0,+\infty ) \end{array}\right. }, \end{aligned}$$

where \(\displaystyle \mathrm {erf}(u):=\frac{1}{\sqrt{\pi }}\int _{|t|\le u}\mathrm {e}^{-t^2}\mathrm {d}t={\mathbb {P}}(|X|\le \sqrt{2}u)\) with \(X\sim {\mathcal {N}}(0,1)\) is the error function.

Proof of Lemma 3.4

The idea is to exploit the fact that we consider Gaussian product measures (the covariance matrices are multiples of the identity), which allows a finer analysis than for instance in [27, Le. 3.1]. We begin with a rather general step. Let \(\mu \) and \(\nu \) be two probability measures on \({\mathbb {R}}^n\) with densities f and g with respect to the Lebesgue measure \(\mathrm {d}x\). We then have

$$\begin{aligned} \Vert \mu -\nu \Vert _{\mathrm {TV} } =\frac{1}{2}\int |f-g|\mathrm {d}x =\frac{1}{2}\Bigr (\int (f-g){\mathbf {1}}_{g\le f}\mathrm {d}x -\int (f-g){\mathbf {1}}_{f\le g}\mathrm {d}x\Bigr ),\nonumber \\ \end{aligned}$$
(3.12)

and since

$$\begin{aligned} -\int (f-g){\mathbf {1}}_{f\le g}\mathrm {d}x= & {} -\int (f-g)(1-{\mathbf {1}}_{g<f})\mathrm {d}x =\int (f-g){\mathbf {1}}_{g<f}\mathrm {d}x\\= & {} \int (f-g){\mathbf {1}}_{g\le f}\mathrm {d}x, \end{aligned}$$

we obtain

$$\begin{aligned} \Vert \mu -\nu \Vert _{\mathrm {TV} } =\int (f-g){\mathbf {1}}_{g\le f}\mathrm {d}x =\mu (g\le f)-\nu (g\le f). \end{aligned}$$
(3.13)

In particular, when \(\mu ={\mathcal {N}}(m_1,\sigma _1^2I_n)\) and \(\nu ={\mathcal {N}}(m_2,\sigma _2^2I_n)\), the inequality \(g(x)\le f(x)\) is equivalent to

$$\begin{aligned} \psi (x):=\frac{|x-m_1|^2}{\sigma _1^2}-\frac{|x-m_2|^2}{\sigma _2^2} \le n\log \frac{\sigma _2^2}{\sigma _1^2}, \end{aligned}$$
(3.14)

for all \(x\in {\mathbb {R}}^n\), and therefore, with \(Z_1\sim \mu \) and \(Z_2\sim \nu \), we get

$$\begin{aligned} \Vert \mu -\nu \Vert _{\mathrm {TV} } ={\mathbb {P}}\Bigr (\psi (Z_1)\le n\log \frac{\sigma _2^2}{\sigma _1^2}\Bigr ) -{\mathbb {P}}\Bigr (\psi (Z_2)\le n\log \frac{\sigma _2^2}{\sigma _1^2}\Bigr ). \end{aligned}$$
(3.15)

Let us assume from now on that \(m_2=0\) and \(\sigma _1\ne \sigma _2\). We can then gather the quadratic terms as

$$\begin{aligned} \psi (x)=\Bigr (1-\frac{\sigma _1^2}{\sigma _2^2}\Bigr ) \frac{|x-{\tilde{m}}_1|^2}{\sigma _1^2} +\Bigr (\frac{1}{\sigma _1^2}-\frac{1}{(1-\frac{\sigma _1^2}{\sigma _2^2})\sigma _1^2}\Bigr )|m_1|^2 \quad \text {where}\quad {\tilde{m}}_1:=\frac{1}{1-\frac{\sigma _1^2}{\sigma _2^2}}m_1. \end{aligned}$$
(3.16)

We observe at this step that the random variable \(\frac{|Z_1-{\tilde{m}}_1|^2}{\sigma _1^2}\) follows a noncentral chi-squared distribution, which depends only on n and on the noncentrality parameter

$$\begin{aligned} \lambda _1:=\frac{|m_1-{\tilde{m}}_1|^2}{\sigma _1^2} =\frac{\sigma _1^2}{(\sigma _2^2-\sigma _1^2)^2}|m_1|^2. \end{aligned}$$
(3.17)

Similarly, the random variable \(\frac{|Z_2-{\tilde{m}}_1|^2}{\sigma _2^2}\) follows a noncentral chi-squared distribution, which depends only on n and on the noncentrality parameter

$$\begin{aligned} \lambda _2:=\frac{|{\tilde{m}}_1|^2}{\sigma _2^2} =\frac{\sigma _2^2}{(\sigma _2^2-\sigma _1^2)^2}|m_1|^2. \end{aligned}$$
(3.18)

It follows that the law of \(\psi (Z_1)\) and the law of \(\psi (Z_2)\) depend on \(m_1\) only via \(|m_1|\). Hence

$$\begin{aligned} \psi (Z_1)\overset{\mathrm {d}}{=}X_1+\ldots +X_n \quad \text {and}\quad \psi (Z_2)\overset{\mathrm {d}}{=}Y_1+\ldots +Y_n \end{aligned}$$
(3.19)

where \(X_1,\ldots ,X_n\) and \(Y_1,\ldots ,Y_n\) are two sequences of i.i.d. random variables whose means and variances depend only (and explicitly) on \(|m_1|\), \(\sigma _1\), \(\sigma _2\). Note in particular that these means and variances equal \(\frac{1}{n}\) times those of \(\psi (Z_1)\) and \(\psi (Z_2)\). Now we specialize to the case where \(\mu =\mathrm {Law}(Z^n_t)={\mathcal {N}}(z_0^n\mathrm {e}^{-t},\frac{1-\mathrm {e}^{-2t}}{n}I_n)\) and \(\nu =\mathrm {Law}(Z^n_\infty )={\mathcal {N}}(0,\frac{1}{n}I_n)=P_n^0\), and we find

$$\begin{aligned} {\mathbb {E}}[\psi (Z_1)] =n\Bigr (1-\frac{\sigma _t^2}{\sigma _{\infty }^2}\Bigr )-\frac{|z_0^n|^2\mathrm {e}^{-2t} }{\sigma _{\infty }^2},\quad {\mathbb {E}}[\psi (Z_2)] =n\Bigr (\frac{\sigma _{\infty }^2}{\sigma _t^2}-1\Bigr )+\frac{|z_0^n|^2\mathrm {e}^{-2t}}{\sigma _t^2} \end{aligned}$$

while

$$\begin{aligned} \mathrm {Var}[\psi (Z_1)]= & {} 2n\Bigr (\frac{1}{\sigma _t^2}-\frac{1}{\sigma _{\infty }^2}\Bigr )^2\sigma _t^4+4\frac{\sigma _{t}^2}{\sigma _{\infty }^4}|z_0^n|^2\mathrm {e}^{-2t}, \quad \\\mathrm {Var}[\psi (Z_2)]= & {} 2n\Bigr (\frac{1}{\sigma _t^2}-\frac{1}{\sigma _{\infty }^2}\Bigr )^2\sigma _{\infty }^4+4\frac{\sigma _\infty ^2}{\sigma _{t}^4}|z_0^n|^2\mathrm {e}^{-2t}. \end{aligned}$$

Let \(t=t_{n,b}\) be as in Table 1 for Hellinger. Using (3.15) and the central limit theorem for the i.i.d. random variables \(X_1,\ldots ,X_n\) and \(Y_1,\ldots ,Y_n\), we get, with \(Z\sim {\mathcal {N}}(0,1)\),

$$\begin{aligned} \bigr \Vert \mathrm {Law}({Z}_t^n)-P_n^0\bigr \Vert _{\mathrm {TV}} = {\mathbb {P}}(Z\le \gamma _{n,t})-{\mathbb {P}}(Z\le {\tilde{\gamma }}_{n,t})+o_n(1), \end{aligned}$$

where

$$\begin{aligned} \gamma _{n,t}:=\frac{-n\log (1-\mathrm {e}^{-2t})-{\mathbb {E}}[\psi (Z^n_t)] }{\sqrt{\mathrm {Var}[\psi (Z^n_t)]} }, \quad {\tilde{\gamma }}_{n,t}:=\frac{-n\log (1-\mathrm {e}^{-2t})-{\mathbb {E}}[\psi (Z^n_\infty )]}{\sqrt{\mathrm {Var}[\psi (Z^n_\infty )]} }. \end{aligned}$$

Expanding \(\gamma _{n,t_{n,b} }\) gives the cutoff profile. Let us detail the computations in the most involved case \(\lim _{n\rightarrow \infty }|z_0^n|^2\sqrt{n}=a\in (0,+\infty )\). For all \(b\in {\mathbb {R}}\), recall \(t_{n,b}=\frac{\log n}{4}+b\). One may check that

$$\begin{aligned} -n\log (1-\mathrm {e}^{-2t_{n,b} })-{\mathbb {E}}[\psi (Z^n_{t_{n,b }}) ]= & {} \frac{1}{2}\mathrm {e}^{-4b}+a \mathrm {e}^{-2b}+o_n(1), \\ -n\log (1-\mathrm {e}^{-2t_{n,b} })-{\mathbb {E}}[\psi (Z^n_\infty )]= & {} -\frac{1}{2}\mathrm {e}^{-4b}-a \mathrm {e}^{-2b}+o_n(1), \\ \mathrm {Var}[\psi (Z^n_{t_{n,b}})]= & {} 2\mathrm {e}^{-4b}+4a \mathrm {e}^{-2b}+o_n(1), \quad \\ \mathrm {Var}[\psi (Z^n_\infty )]= & {} 2\mathrm {e}^{-4b}+4a \mathrm {e}^{-2b}+o_n(1). \end{aligned}$$

It follows that

$$\begin{aligned} \lim _{n\rightarrow \infty }\bigr \Vert \mathrm {Law}({Z}_{t_{n,b}}^n)-P_n^0 \bigr \Vert _{\mathrm {TV}}= & {} {\mathbb {P}}\Bigr (|Z|\le \frac{1}{2\sqrt{2}}\sqrt{\mathrm {e}^{-4b }+2a \mathrm {e}^{-2b} }\Bigr )\\= & {} \mathrm {erf}\Bigr (\frac{1}{4}\sqrt{\mathrm {e}^{-4b}+2a\mathrm {e}^{-2b} }\Bigr ). \end{aligned}$$

The other cases are similar. \(\square \)
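The proof reduces the total variation distance to exact noncentral chi-squared probabilities via (3.15)–(3.18), and this reduction can be evaluated numerically and compared with the limiting profile; a minimal sketch, assuming scipy, for the case \(a\in (0,\infty )\):

```python
import numpy as np
from math import erf, exp, log, sqrt
from scipy.stats import ncx2

def tv_exact(n, r2, t):
    # TV between N(z0 e^{-t}, (1-e^{-2t})/n I_n) and N(0, I_n/n), r2 = |z0|^2,
    # via (3.15) and the noncentral chi-squared laws of psi(Z_1), psi(Z_2)
    s1sq, s2sq = (1 - exp(-2*t))/n, 1.0/n
    m1sq = r2*exp(-2*t)
    u = 1 - s1sq/s2sq                   # coefficient in (3.16), here e^{-2t}
    k = (1/s1sq - 1/(u*s1sq))*m1sq      # additive constant in (3.16)
    theta = n*log(s2sq/s1sq)
    lam1 = s1sq*m1sq/(s2sq - s1sq)**2   # noncentrality (3.17)
    lam2 = s2sq*m1sq/(s2sq - s1sq)**2   # noncentrality (3.18)
    p1 = ncx2(n, lam1).cdf((theta - k)/u)
    p2 = ncx2(n, lam2).cdf((theta - k)/(u*s2sq/s1sq))
    return p1 - p2

a, b, n = 1.5, 0.3, 20_000
t = 0.25*log(n) + b                     # t_{n,b} for the case a in (0, infty)
print(tv_exact(n, a/sqrt(n), t))                 # finite-n TV distance
print(erf(sqrt(exp(-4*b) + 2*a*exp(-2*b))/4))    # limiting profile phi(b)
```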

4 General exactly solvable aspects

In this section, we prove Theorem 1.3 and Corollary 1.4.

The proof of Theorem 1.3 is based on the fact that the polynomial functions \(\pi (x)=x_1+\ldots +x_n\) and \(|x|^2=x_1^2+\ldots +x_n^2\) are, up to an additive constant for the second, eigenfunctions of the dynamics associated to the spectral values \(-1\) and \(-2\) respectively, and that their “carré du champ” is affine. In the matrix cases \(\beta \in \{1,2\}\), these functions correspond to the dynamics of the trace, the dynamics of the squared Hilbert–Schmidt trace norm, and the dynamics of the squared trace. It is remarkable that this phenomenon survives beyond these matrix cases, yet another manifestation of the Gaussian “ghosts” concept due to Edelman, see for instance [34].

Proof of Theorem 1.3

The process \(Y_t := \pi (X_t^n)\) solves

$$\begin{aligned} \mathrm {d}Y_t = \sum _{i=1}^n \mathrm {d} X_t^{n,i} = \sqrt{\frac{2}{n}} \sum _{i=1}^n\mathrm {d}B^i_t - \sum _{i=1}^n X_t^{n,i} \mathrm {d}t +\frac{\beta }{n}\sum _{i=1}^n\sum _{j\ne i}\frac{\mathrm {d}t}{X_t^{n,i}-X_t^{n,j}}. \end{aligned}$$

By symmetry, the double sum vanishes. Note that the process \(W_t := \frac{1}{\sqrt{n}} \sum _{i=1}^n B^i_t\) is a standard one-dimensional BM, so that \( \mathrm {d}Y_t = \sqrt{2} \mathrm {d}W_t - Y_t \mathrm {d}t. \) This proves the first part of the statement.

We turn to the second part. Recall that \(X_t \in D_n\) for all \(t>0\). By Itô’s formula

$$\begin{aligned} \mathrm {d} (X_t^{n,i})^2 = \sqrt{\frac{8}{n}} X_t^{n,i} \mathrm {d}B^i_t - 2 (X_t^{n,i})^2 \mathrm {d}t + 2\frac{\beta }{n} X_t^{n,i} \sum _{j:j\ne i}\frac{\mathrm {d}t}{X_t^{n,i}-X_t^{n,j}} + \frac{2}{n} \mathrm {d}t. \end{aligned}$$

Set \(W_t := \sum _{i=1}^n \int _0^t \frac{X_s^{n,i}}{|X^n_s|} \mathrm {d}B^i_s\). The process \({(W_t)}_{t\ge 0}\) is a BM by the Lévy characterization since

$$\begin{aligned} \langle W\rangle _t =\int _0^t\frac{\sum _{i=1}^n(X^{n,i}_s)^2}{|X^n_s|^2}\mathrm {d}s =t. \end{aligned}$$

Furthermore, a simple computation shows that

$$\begin{aligned} \sum _{i=1}^n X_t^{n,i} \sum _{j:j\ne i}\frac{1}{X_t^{n,i}-X_t^{n,j}} = \frac{n(n-1)}{2}. \end{aligned}$$

Consequently the process \(R_t := |X_t^n|^2\) solves

$$\begin{aligned} \mathrm {d} R_t = \sqrt{\frac{8}{n} R_t} \mathrm {d}W_t + \Big (2 + \beta (n-1) - 2 R_t\Big ) \mathrm {d}t, \end{aligned}$$

and is therefore a CIR process of parameters \(a=2+\beta (n-1)\), \(b=2\), and \(\sigma = \sqrt{8/n}\).

When \(d=\frac{\beta }{2}n^2 + (1-\frac{\beta }{2})n\) is a positive integer, the last property of the statement follows from the connection between OU and CIR recalled right before the statement of the theorem. \(\square \)
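The only non-trivial algebraic ingredient in the computation of the drift of \(R_t\) is the identity displayed above; a one-line numerical check, assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
x = np.sort(rng.normal(size=n))          # any point with distinct coordinates
total = sum(x[i]/(x[i] - x[j]) for i in range(n) for j in range(n) if j != i)
print(total, n*(n - 1)/2)                # both equal 28 here
```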

The last proof actually relies on the following general observation. Let X be an n-dimensional continuous semi-martingale solution of

$$\begin{aligned} \mathrm {d}X_t=\sigma (X_t)\mathrm {d}B_t+b(X_t)\mathrm {d}t \end{aligned}$$

where B is an n-dimensional standard BM, and where

$$\begin{aligned} x\in {\mathbb {R}}^n\mapsto \sigma (x)\in {\mathcal {M}}_{n,n}({\mathbb {R}}) \quad \text {and}\quad x\in {\mathbb {R}}^n\mapsto b(x)\in {\mathbb {R}}^n \end{aligned}$$

are Lipschitz. The infinitesimal generator of the Markov semigroup is given by

$$\begin{aligned}&{{\,\mathrm{G}\,}}(f)(x) =\frac{1}{2}\sum _{i,j=1}^na_{i,j}(x)\partial _{i,j}f(x) +\sum _{i=1}^nb_i(x)\partial _if(x), \quad \text {where}\quad a(x)=\sigma (x)(\sigma (x))^\top , \end{aligned}$$

for all \(f\in {\mathcal {C}}^2({\mathbb {R}}^n,{\mathbb {R}})\) and \(x\in {\mathbb {R}}^n\). Then, by Itô’s formula, the process \(M^f={(M^f_t)}_{t\ge 0}\) given by

$$\begin{aligned} M^f_t=f(X_t)-f(X_0)-\int _0^t({{\,\mathrm{G}\,}}f)(X_s)\mathrm {d}s =\sum _{i,k=1}^n\int _0^t\partial _if(X_s)\sigma _{i,k}(X_s)\mathrm {d}B_s^k \end{aligned}$$

is a local martingale, and moreover, for all \(t\ge 0\),

$$\begin{aligned} \langle M^f\rangle _t =\int _0^t\Gamma (f)(X_s)\mathrm {d}s \quad \text {where}\quad \Gamma (f)(x)=|\sigma (x)^\top \nabla f(x)|^2=a(x)\nabla f\cdot \nabla f. \end{aligned}$$

The functional quadratic form \(\Gamma \) is known as the “carré du champ” operator.

If f is an eigenfunction of \({{\,\mathrm{G}\,}}\) associated to the spectral value \(\lambda \) in the sense that \({{\,\mathrm{G}\,}}f=\lambda f\) (note by the way that \(\lambda \le 0\) since \({{\,\mathrm{G}\,}}\) generates a Markov process), then we get

$$\begin{aligned}&f(X_t)=f(X_0)+\lambda \int _0^tf(X_s)\mathrm {d}s +M_t^f, \quad \text {in other words}\\&\quad \mathrm {d}f(X_t)=\mathrm {d}M^f_t+\lambda f(X_t)\mathrm {d}t. \end{aligned}$$

Now if \(\Gamma (f) = c\) (as in the first part of the theorem), then by the Lévy characterization of Brownian motion, the continuous local martingale \(W:=\frac{1}{\sqrt{c}}M^f\) starting from the origin is a standard BM and we recover the result of the first part of the theorem. On the other hand, if \(\Gamma (f) = cf\) (as in the second part of the theorem), then by the Lévy characterization of BM the local martingale

$$\begin{aligned} W_t := \int _0^t \frac{1}{\sqrt{cf(X_s)}}\mathrm {d}M_s^f \end{aligned}$$

is a standard BM and we recover the result of the second part.

At this point, we observe that the infinitesimal generator of the CIR process R is the Laguerre differential operator

$$\begin{aligned} L(f)(x)=\frac{4}{n}xf''(x)+(2+\beta (n-1)-2x)f'(x). \end{aligned}$$
(4.1)

This operator leaves invariant the set of polynomials of degree at most k, for every integer \(k\ge 0\), a property inherited from (2.5). We will use this property in the following proof.
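This invariance of polynomials of bounded degree can be verified symbolically; a minimal sketch, assuming sympy, applied to the monomial \(x^3\):

```python
import sympy as sp

x, n, beta = sp.symbols('x n beta', positive=True)
f = x**3
# Apply the Laguerre operator (4.1) to f
Lf = sp.expand(4*x/n*sp.diff(f, x, 2) + (2 + beta*(n - 1) - 2*x)*sp.diff(f, x))
print(Lf)                           # polynomial in x with coefficients in n, beta
print(sp.degree(sp.Poly(Lf, x)))    # degree 3 is preserved
```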

4.1 Proof of Corollary 1.4

By Theorem 1.3, \(Z = \pi (X^n)\) is an OU process in \({\mathbb {R}}\) solution of the stochastic differential equation

$$\begin{aligned} Z_0 = \pi (X_0^n), \quad \mathrm {d}Z_t = \sqrt{2} \mathrm {d}B_t - Z_t \mathrm {d}t, \end{aligned}$$

where B is a standard one-dimensional BM. By Lemma 3.1, \(Z_t \sim {\mathcal {N}}(Z_0 \mathrm {e}^{-t}, 1-\mathrm {e}^{-2t})\) for all \(t\ge 0\) and the equilibrium distribution is \(P_n^\beta \circ \pi ^{-1} = {\mathcal {N}}(0, 1)\). Using the contraction property stated in Lemma A.2, the comparison between Hellinger and TV of Lemma A.1 and the explicit expressions for Gaussian distributions of Lemma A.5, we find

$$\begin{aligned} \Vert \mathrm {Law}(X^n_t)-P_n^\beta \Vert _{\mathrm {TV}}&\ge \Vert \mathrm {Law}(Z_t)-P_n^\beta \circ \pi ^{-1}\Vert _{\mathrm {TV}}\\&\ge \mathrm {Hellinger}^2(\mathrm {Law}(Z_t), P_n^\beta \circ \pi ^{-1})\\&= 1 - \frac{(1-\mathrm {e}^{-2t})^{1/4}}{(1-\frac{1}{2}\mathrm {e}^{-2t})^{1/2}} \exp \Bigr (-\frac{\pi (X^n_0)^2\mathrm {e}^{-2t}}{4(2-\mathrm {e}^{-2t})}\Bigr ). \end{aligned}$$

Setting \(c_n := \log (\vert \pi (X^n_0)\vert )\) and assuming that \(\lim _{n\rightarrow \infty }c_n=\infty \), we deduce that for all \(\varepsilon \in (0,1)\)

$$\begin{aligned} \lim _{n\rightarrow \infty } \Vert \mathrm {Law}(X^n_{c_n(1-\varepsilon )})-P_n^\beta \Vert _{\mathrm {TV}} =1.\end{aligned}$$

The comparison between \(\mathrm {Hellinger}\) and \(\mathrm {TV}\) of Lemma A.1 allows us to deduce that this remains true for the Hellinger distance.

We turn to Kullback. The contraction property stated in Lemma A.2 and the explicit expressions for Gaussian distributions of Lemma A.5 yield

$$\begin{aligned} 2\mathrm {Kullback}(\mathrm {Law}(X^n_t) \mid P_n^\beta )&\ge 2\mathrm {Kullback}(\mathrm {Law}(Z_t) \mid P_n^\beta \circ \pi ^{-1})\\&= \pi (X^n_0)^2\mathrm {e}^{-2t} -\mathrm {e}^{-2t} - \log (1-\mathrm {e}^{-2t}). \end{aligned}$$

This is enough to deduce that

$$\begin{aligned} \lim _{n\rightarrow \infty } \mathrm {Kullback}(\mathrm {Law}(X^n_{(1-\varepsilon )c_n}) \mid P_n^\beta ) =+\infty . \end{aligned}$$

The situation is similar for \(\chi ^2\): the contraction property stated in Lemma A.2 and the explicit expressions for Gaussian distributions of Lemma A.5 yield

$$\begin{aligned} \chi ^2(\mathrm {Law}(X^n_t) \mid P_n^\beta )&\ge \chi ^2(\mathrm {Law}(Z_t) \mid P_n^\beta \circ \pi ^{-1})\\&= -1+\frac{1}{\sqrt{1-\mathrm {e}^{-4t}}} \exp \left( \pi (X^n_0)^2 \frac{\mathrm {e}^{-2t}}{1+\mathrm {e}^{-2t}}\right) , \end{aligned}$$

so that

$$\begin{aligned} \lim _{n\rightarrow \infty } \chi ^2(\mathrm {Law}(X^n_{(1-\varepsilon )c_n}) \mid P_n^\beta ) =+\infty .\end{aligned}$$

Regarding the Wasserstein distance, we have \(\left\| \pi \right\| _{\mathrm {Lip}}:=\sup _{x\ne y}\frac{|\pi (x)-\pi (y)|}{|x-y|}\le \sqrt{n}\) from the Cauchy–Schwarz inequality, and by Lemma A.2, for all probability measures \(\mu \) and \(\nu \) on \({\mathbb {R}}^n\),

$$\begin{aligned} \mathrm {Wasserstein}(\mu \circ \pi ^{-1},\nu \circ \pi ^{-1}) \le \sqrt{n}\mathrm {Wasserstein}(\mu ,\nu ). \end{aligned}$$
(4.2)

Using the explicit expressions for Gaussian distributions of Lemma A.5, we thus find

$$\begin{aligned} \mathrm {Wasserstein}^2(\mathrm {Law}(X_t^n), P_n^\beta )&\ge \frac{1}{n} \mathrm {Wasserstein}^2(\mathrm {Law}(Z_t), P_n^\beta \circ \pi ^{-1})\\&= \frac{1}{n}\Bigr (\pi (X^n_0)^2 \mathrm {e}^{-2t} + 2 - \mathrm {e}^{-2t} - 2\sqrt{1-\mathrm {e}^{-2t}}\Bigr ). \end{aligned}$$

Setting \(c_n:= \log \Big (\frac{\vert \pi (x_0^n)\vert }{\sqrt{n}}\Big )\) and assuming \(c_n \rightarrow \infty \) as \(n\rightarrow \infty \), we thus deduce that for all \(\varepsilon \in (0,1)\)

$$\begin{aligned} \lim _{n\rightarrow \infty } \mathrm {Wasserstein}(\mathrm {Law}(X^n_{(1-\varepsilon )c_n}),P_n^\beta ) =+\infty . \end{aligned}$$

5 The random matrix cases

In this section, we prove Theorem 1.5 and Corollary 1.6, which cover the matrix cases \(\beta \in \{1,2\}\). For these values of \(\beta \), the DOU process is the image by the spectral map of a matrix OU process, connected to the random matrix models \(\mathrm {GOE}\) and \(\mathrm {GUE}\). We could also consider the case \(\beta =4\), related to \(\mathrm {GSE}\). Beyond these three algebraic cases, it could be possible, for an arbitrary \(\beta \ge 1\), to use the random tridiagonal matrix dynamics associated with \(\beta \) Dyson processes, see for instance [44].

The next two subsections are devoted to the proof of Theorem 1.5 in the \(\beta =2\) and \(\beta = 1\) cases respectively. The third subsection provides the proof of Corollary 1.6.

5.1 Hermitian case (\(\beta =2\))

Let \(\mathrm {Herm}_n\) be the set of \(n\times n\) complex Hermitian matrices, namely the set of \(h\in {\mathcal {M}}_{n,n}({\mathbb {C}})\) with \(h_{i,j}=\overline{h_{j,i}}\) for all \(1\le i,j\le n\). An element \(h\in \mathrm {Herm}_n\) is parametrized by the \(n^2\) real variables \((h_{i,i})_{1\le i\le n}\), \((\Re h_{i,j})_{1\le i<j\le n}\), \((\Im h_{i,j})_{1\le i<j\le n}\). We define, for \(h\in \mathrm {Herm}_n\) and \(1\le i,j\le n\),

$$\begin{aligned} \pi _{i,j}(h) ={\left\{ \begin{array}{ll} h_{i,i} &{} \text {if }i=j\\ \sqrt{2}\, \Re h_{i,j} &{} \text {if }i<j\\ \sqrt{2}\, \Im h_{j,i} &{} \text {if }i>j \end{array}\right. }. \end{aligned}$$
(5.1)

Note that

$$\begin{aligned} \mathrm {Tr}(h^2)=\sum _{i,j=1}^n|h_{i,j}|^2=\sum _{i=1}^nh_{i,i}^2+2\sum _{i<j}(\Re h_{i,j})^2+2\sum _{i<j}(\Im h_{i,j})^2 = \sum _{i,j} \pi _{i,j}(h)^2. \end{aligned}$$

We thus identify \(\mathrm {Herm}_n\) with \({\mathbb {R}}^n\times {\mathbb {R}}^{2\frac{n^2-n}{2}}={\mathbb {R}}^{n^2}\); this identification is isometric provided \(\mathrm {Herm}_n\) is endowed with the norm \(\sqrt{\mathrm {Tr}(h^2)}\) and \({\mathbb {R}}^{n^2}\) with the Euclidean norm.

The Gaussian Unitary Ensemble \(\mathrm {GUE}_n\) is the Gaussian law on \(\mathrm {Herm}_n\) with density

$$\begin{aligned} h\in \mathrm {Herm}_n \mapsto \frac{\mathrm {e}^{-\frac{n}{2}\mathrm {Tr}(h^2)}}{C_n} \quad \text {where}\quad C_n:=\int _{{\mathbb {R}}^{n^2}}\mathrm {e}^{-\frac{n}{2}\mathrm {Tr}(h^2)} \prod _{i=1}^n\mathrm {d}h_{i,i}\prod _{i<j}\mathrm {d}\Re h_{i,j}\prod _{i<j}\mathrm {d}\Im h_{i,j}.\nonumber \\ \end{aligned}$$
(5.2)

If H is a random \(n\times n\) Hermitian matrix then \(H\sim \mathrm {GUE}_n\) if and only if the \(n^2\) real random variables \(\pi _{i,j}(H)\), \(1\le i,j\le n\), are independent Gaussian random variables with

$$\begin{aligned} \pi _{i,j}(H)\sim {\mathcal {N}}\Bigr (0,\frac{1}{n}\Bigr ), \quad 1\le i,j\le n. \end{aligned}$$
(5.3)

The law \(\mathrm {GUE}_n\) is the unique invariant law of the Hermitian matrix OU process \({(H_t)}_{t\ge 0}\) on \(\mathrm {Herm}_n\) solution of the stochastic differential equation

$$\begin{aligned} H_0=h_0\in \mathrm {Herm}_n,\quad \mathrm {d}H_t=\sqrt{\frac{2}{n}}\mathrm {d}B_t-H_t\mathrm {d}t, \end{aligned}$$
(5.4)

where \(B={(B_t)}_{t\ge 0}\) is a Brownian motion on \(\mathrm {Herm}_n\), in the sense that the stochastic processes \((\pi _{i,j}(B_t))_{t\ge 0}\), \(1\le i, j \le n\), are independent standard one-dimensional BM. The coordinate processes \({(\pi _{i,j}(H_t))}_{t\ge 0}\), \(1\le i,j\le n\), are independent real OU processes.

For any h in \(\mathrm {Herm}_n\), we denote by \(\Lambda (h)\) the vector of the eigenvalues of h ordered in non-decreasing order. Lemma 5.1 below is an observation which dates back to the seminal work of Dyson [33], hence the name DOU for \(X^n\). We refer to [37, Ch. 12] and [2, Sec. 4.3] for a mathematical approach using modern stochastic calculus.

Lemma 5.1

(From matrix OU to DOU) The image of \(\mathrm {GUE}_n\) by the map \(\Lambda \) is the Coulomb gas \(P_n^\beta \) given by (1.6) with \(\beta =2\). Moreover the stochastic process \(X^n={(X^n_t)}_{t\ge 0}={(\Lambda (H_t))}_{t\ge 0}\) is well-defined and solves the stochastic differential equation (1.3) with \(\beta =2\) and \(x_0^n=\Lambda (h_0)\).

Let \(\beta =2\). Let us assume from now on that the initial value \(h_0\in \mathrm {Herm}_n\) of \({(H_t)}_{t\ge 0}\) has eigenvalues \(x_0^n\) where \(x^n_0\) is as in Theorem 1.5. We start by proving the upper bound on the \(\chi ^2\) distance stated in Theorem 1.5: it will be an adaptation of the proof of the upper bound of Theorem 1.1 applied to the Hermitian matrix OU process \({(H_t)}_{t\ge 0}\) combined with the contraction property of the \(\chi ^2\) distance. Indeed, by Lemma 5.1 and the contraction property of Lemma A.2

$$\begin{aligned} \chi ^2(\mathrm {Law}(X_t^n) \mid P_n^\beta ) \le \chi ^2(\mathrm {Law}(H_t) \mid \mathrm {GUE}_n). \end{aligned}$$
(5.5)

We claim now that the right-hand side tends to 0 as \(n\rightarrow \infty \) when \(t=t_n\) is well chosen. Indeed, using the identification between \(\mathrm {Herm}_n\) and \({\mathbb {R}}^{n^2}\) mentioned earlier, we have \(\mathrm {GUE}_n={\mathcal {N}}(m_2,\Sigma _2)\) where \(m_2=0\) and where \(\Sigma _2\) is an \(n^2\times n^2\) diagonal matrix with

$$\begin{aligned} (\Sigma _2)_{(i,j),(i,j)}=\frac{1}{n}. \end{aligned}$$
(5.6)

On the other hand, the Mehler formula (Lemma 3.1) gives \(\mathrm {Law}(H_t)={\mathcal {N}}(m_1,\Sigma _1)\) where \(m_1=\mathrm {e}^{-t}h_0\) and where \(\Sigma _1\) is an \(n^2\times n^2\) diagonal matrix with

$$\begin{aligned} (\Sigma _1)_{(i,j),(i,j)} =\frac{1-\mathrm {e}^{-2t}}{n}. \end{aligned}$$
(5.7)

Therefore, using Lemma A.5, the analogue of (3.4) reads

$$\begin{aligned} \chi ^2(\mathrm {Law}(H_t)\mid \mathrm {GUE}_n) =-1 + \frac{1}{(1-\mathrm {e}^{-4t})^{n^2/2}} \exp \left( n|h_0|^2\frac{\mathrm {e}^{-2t}}{1+\mathrm {e}^{-2t}}\right) . \end{aligned}$$
(5.8)

where

$$\begin{aligned} |h_0|^2 =\sum _{1\le i,j\le n}\pi _{i,j}(h_0)^2 =\sum _{1\le i,j\le n}|(h_0)_{i,j}|^2=\mathrm {Tr}(h_0^2)=|x_0^n|^2. \end{aligned}$$
(5.9)

Taking now \(c_n := \log (\sqrt{n} |x_0^n|) \vee \log (\sqrt{n})\), for any \(\varepsilon \in (0,1)\), we get

$$\begin{aligned} \chi ^2(\mathrm {Law}(X_{(1+\varepsilon )c_n}^n) \mid P_n^\beta ) \le \chi ^2(\mathrm {Law}(H_{(1+\varepsilon )c_n}) \mid \mathrm {GUE}_n) \underset{n\rightarrow \infty }{\longrightarrow }0. \end{aligned}$$
(5.10)

In the right-hand side of (5.8), the factor \(n^2\) is the dimension of the space \({\mathbb {R}}^{n^2}\) with which \(\mathrm {Herm}_n\) is identified, while the factor n in front of \(|h_0|^2\) is due to the 1/n scaling in the stochastic differential equation of the process. This explains the difference with its analogue (3.4), stated in dimension n.
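Plugging \(t=(1+\varepsilon )c_n\) into (5.8) makes the decay explicit; a minimal numerical sketch, assuming numpy and the hypothetical initial condition \(x_0^{n,i}=2\) for all i (log1p/expm1 are used for numerical stability):

```python
import numpy as np

eps = 0.5
for n in [10**2, 10**3, 10**4]:
    r2 = 4.0*n                                 # |x_0^n|^2 for x_0^{n,i} = 2
    cn = max(0.5*np.log(n*r2), 0.5*np.log(n))  # log(sqrt(n)|x_0^n|) v log(sqrt(n))
    t = (1 + eps)*cn
    e2, e4 = np.exp(-2*t), np.exp(-4*t)
    log_1p_chi2 = n*r2*e2/(1 + e2) - (n**2/2)*np.log1p(-e4)
    print(n, np.expm1(log_1p_chi2))            # right-hand side of (5.8) -> 0
```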

From the comparison between TV, Hellinger, Kullback and \(\chi ^2\) stated in Lemma A.1, we easily deduce that the previous convergence remains true upon replacing \(\chi ^2\) by \(\mathrm {TV}\), \(\mathrm {Hellinger}\) or \(\mathrm {Kullback}\).

It remains to cover the upper bound for the Wasserstein distance. This distance is more sensitive to contraction arguments: according to Lemma A.2, one needs to control the Lipschitz norm of the “contraction map” at stake. It turns out that the spectral map, restricted to the set \(\mathrm {Herm}_n\) of \(n\times n\) Hermitian matrices, is 1-Lipschitz: more precisely, the Hoffman–Wielandt inequality, see [43] and [45, Th. 6.3.5], asserts that for any two such matrices A and B, denoting \(\Lambda (A)=(\lambda _i(A))_{1\le i \le n}\) and \(\Lambda (B)=(\lambda _i(B))_{1\le i \le n}\) the ordered sequences of their eigenvalues, we have

$$\begin{aligned} \sum _{i=1}^{n} |\lambda _i(A) - \lambda _i(B)|^2 \le \sum _{i,j} |A_{i,j} - B_{i,j}|^2. \end{aligned}$$

Applying Lemma A.2, we thus deduce that

$$\begin{aligned} \mathrm {Wasserstein}(\mathrm {Law}(X_t^n) , P_n^\beta ) \le \mathrm {Wasserstein}(\mathrm {Law}(H_t) , \mathrm {GUE}_n). \end{aligned}$$
(5.11)

Following the Gaussian computations in the proof of Theorem 1.2, we obtain

$$\begin{aligned} \mathrm {Wasserstein}^2(\mathrm {Law}(H_t) , \mathrm {GUE}_n) = |x_0^n|^2 \mathrm {e}^{-2t} + n\Bigl (2 - \mathrm {e}^{-2t} - 2\sqrt{1-\mathrm {e}^{-2t}}\Bigr ). \end{aligned}$$
(5.12)

Set \(c_n := \log (|x_0^n| \vee n^{1/4})\). Then for all \(\varepsilon \in (0,1)\) we find

$$\begin{aligned} \mathrm {Wasserstein}(\mathrm {Law}(X_{(1+\varepsilon )c_n}^n) , P_n^\beta ) \underset{n\rightarrow \infty }{\longrightarrow }0. \end{aligned}$$

This completes the proof of Theorem 1.5.

5.2 Symmetric case (\(\beta =1\))

The method is similar to the case \(\beta =2\). Let us focus only on the differences. Let \(\mathrm {Sym}_n\) be the set of \(n\times n\) real symmetric matrices, namely the set of \(s\in {\mathcal {M}}_{n,n}({\mathbb {R}})\) with \(s_{i,j}=s_{j,i}\) for all \(1\le i,j\le n\). An element \(s\in \mathrm {Sym}_n\) is parametrized by the \(n+\frac{n^2-n}{2}=\frac{n(n+1)}{2}\) real variables \((s_{i,j})_{1\le i\le j\le n}\). We define, for \(s\in \mathrm {Sym}_n\) and \(1\le i\le j\le n\),

$$\begin{aligned} \pi _{i,j}(s)={\left\{ \begin{array}{ll} s_{i,i} &{} \text {if }i=j\\ \sqrt{2}\, s_{i,j} &{} \text {if }i < j \end{array}\right. }. \end{aligned}$$
(5.13)

Note that

$$\begin{aligned} \mathrm {Tr}(s^2)=\sum _{i,j=1}^ns_{i,j}^2=\sum _{i=1}^ns_{i,i}^2+2\sum _{i<j}s_{i,j}^2 = \sum _{1\le i\le j \le n} \pi _{i,j}(s)^2. \end{aligned}$$

We thus identify isometrically \(\mathrm {Sym}_n\), endowed with the norm \(\sqrt{\mathrm {Tr}(s^2)}\), with \({\mathbb {R}}^n\times {\mathbb {R}}^{\frac{n^2-n}{2}}={\mathbb {R}}^{\frac{n(n+1)}{2}}\) endowed with the Euclidean norm.

The Gaussian Orthogonal Ensemble \(\mathrm {GOE}_n\) is the Gaussian law on \(\mathrm {Sym}_n\) with density

$$\begin{aligned} s\in \mathrm {Sym}_n \mapsto \frac{\mathrm {e}^{-\frac{n}{2}\mathrm {Tr}(s^2)}}{C_n} \quad \text {where}\quad C_n:=\int _{{\mathbb {R}}^{\frac{n(n+1)}{2}}}\mathrm {e}^{-\frac{n}{2}\mathrm {Tr}(s^2)} \prod _{1\le i\le j\le n}\mathrm {d}s_{i,j}.\qquad \end{aligned}$$
(5.14)

If S is a random \(n\times n\) real symmetric matrix then \(S\sim \mathrm {GOE}_n\) if and only if the \(\frac{n(n+1)}{2}\) real random variables \(\pi _{i,j}(S)\), \(1\le i\le j\le n\), are independent Gaussian random variables with

$$\begin{aligned} \pi _{i,j}(S)\sim {\mathcal {N}}\Bigr (0,\frac{1}{n}\Bigr ), \quad 1\le i\le j\le n. \end{aligned}$$
(5.15)

The law \(\mathrm {GOE}_n\) is the unique invariant law of the real symmetric matrix OU process \({(S_t)}_{t\ge 0}\) on \(\mathrm {Sym}_n\) solution of the stochastic differential equation

$$\begin{aligned} S_0=s_0\in \mathrm {Sym}_n,\quad \mathrm {d}S_t=\sqrt{\frac{2}{n}}\mathrm {d}B_t-S_t\mathrm {d}t \end{aligned}$$
(5.16)

where \(B={(B_t)}_{t\ge 0}\) is a Brownian motion on \(\mathrm {Sym}_n\), in the sense that the stochastic processes \((\pi _{i,j}(B_t))_{t\ge 0}\), \(1\le i\le j \le n\), are independent standard one-dimensional BM. The coordinate processes \({(\pi _{i,j}(S_t))}_{t\ge 0}\), \(1\le i\le j\le n\), are independent real OU processes.

For any s in \(\mathrm {Sym}_n\), we denote by \(\Lambda (s)\) the vector of the eigenvalues of s ordered in non-decreasing order. Lemma 5.2 below is the real symmetric analogue of Lemma 5.1.

Lemma 5.2

(From matrix OU to DOU) The image of \(\mathrm {GOE}_n\) by the map \(\Lambda \) is the Coulomb gas \(P_n^\beta \) given by (1.6) with \(\beta =1\). Moreover the stochastic process \(X^n={(X^n_t)}_{t\ge 0}={(\Lambda (S_t))}_{t\ge 0}\) is well-defined and solves the stochastic differential equation (1.3) with \(\beta =1\) and \(x_0^n=\Lambda (s_0)\).

As for the case \(\beta =2\), the idea now is that the DOU process is sandwiched between a real OU process and a matrix OU process.

By computations similar to the case \(\beta =2\), with \(|s_0|^2=\mathrm {Tr}(s_0^2)=|x_0^n|^2\), the analogue of (5.8) becomes

$$\begin{aligned} \chi ^2(\mathrm {Law}(S_t)\mid \mathrm {GOE}_n) =-1 + \frac{1}{(1-\mathrm {e}^{-4t})^{\frac{n(n+1)}{4}}} \exp \left( n|s_0|^2\frac{\mathrm {e}^{-2t}}{1+\mathrm {e}^{-2t}}\right) .\nonumber \\ \end{aligned}$$
(5.17)

This allows us to deduce the upper bound for TV, Hellinger, Kullback and \(\chi ^2\). Regarding the Wasserstein distance, the analogue of (5.12) reads

$$\begin{aligned} \mathrm {Wasserstein}^2(\mathrm {Law}(S_t) , \mathrm {GOE}_n) = |x_0^n|^2 \mathrm {e}^{-2t} + \frac{n+1}{2}\Bigl (2 - \mathrm {e}^{-2t} - 2\sqrt{1-\mathrm {e}^{-2t}}\Bigr ). \end{aligned}$$
(5.18)

Taking \(c_n:=\log (|x_0^n|\vee n^{1/4})\) as in the Hermitian case, we deduce the asserted result, concluding the proof of Theorem 1.5.

5.3 Proof of Corollary 1.6

Let \(\beta \in \{1,2\}\). Recall the definitions of \(a_n\) and \(c_n\) from the statement. Take \(x_0^{n,i} = a_n\) for all i, and note that \(\pi (x_0^n) = n a_n\). Given our assumptions on \(a_n\), Corollary 1.4 yields for this particular choice of initial condition and for any \(\varepsilon \in (0,1)\)

$$\begin{aligned} \lim _{n\rightarrow \infty }\mathrm {dist}(\mathrm {Law}(X^n_{(1-\varepsilon )c_n})\mid P_n^\beta ) =\max . \end{aligned}$$

On the other hand, in the proof of Theorem 1.5 we saw that

$$\begin{aligned} \chi ^2(\mathrm {Law}(X_t^n) \mid P_n^\beta ) \le -1 + \frac{1}{(1-\mathrm {e}^{-4t})^{b_n/2}} \exp \left( n|x_0^n|^2\frac{\mathrm {e}^{-2t}}{1+\mathrm {e}^{-2t}}\right) , \end{aligned}$$

where \(b_n = n^2\) for \(\beta =2\) and \(b_n = \frac{n(n+1)}{2}\) for \(\beta =1\). Since \(|x_0^n| \le \sqrt{n} a_n\) for all \(x_0^n \in [-a_n,a_n]^n\), and given the comparison between TV, Hellinger, Kullback and \(\chi ^2\) stated in Lemma A.1, we obtain for \(\mathrm {dist} \in \{\mathrm {TV}, \mathrm {Hellinger}, \mathrm {Kullback}, \chi ^2\}\) and for all \(\varepsilon \in (0,1)\)

$$\begin{aligned} \lim _{n\rightarrow \infty }\sup _{x_0^n \in [-a_n,a_n]^n}\mathrm {dist}(\mathrm {Law}(X^n_{(1+\varepsilon )c_n})\mid P_n^\beta ) =0, \end{aligned}$$

thus concluding the proof of Corollary 1.6 regarding these distances.

Concerning Wasserstein, the proof of Theorem 1.5 shows that for any \(x_0^n \in [-a_n,a_n]^n\) we have

$$\begin{aligned} \mathrm {Wasserstein}^2(\mathrm {Law}(X_t^n) , P_n^\beta )&\le |x_0^n|^2 \mathrm {e}^{-2t} + n\Bigl (2 - \mathrm {e}^{-2t} - 2\sqrt{1-\mathrm {e}^{-2t}}\Bigr )\\&\le n a_n^2 \mathrm {e}^{-2t} + n\Bigl (2 - \mathrm {e}^{-2t} - 2\sqrt{1-\mathrm {e}^{-2t}}\Bigr ). \end{aligned}$$

If \(\sqrt{n} a_n \rightarrow \infty \), then for \(c_n = \log (\sqrt{n}a_n \vee n^{1/4})\) we deduce that for all \(\varepsilon \in (0,1)\)

$$\begin{aligned} \lim _{n\rightarrow \infty }\sup _{x_0^n \in [-a_n,a_n]^n}\mathrm {dist}(\mathrm {Law}(X^n_{(1+\varepsilon )c_n})\mid P_n^\beta ) =0. \end{aligned}$$

6 Cutoff phenomenon for the DOU in TV and Hellinger

In this section, we prove Theorem 1.7 and Corollary 1.8 for the TV and Hellinger distances. We only consider the case \(\beta \ge 1\), although the arguments could be adapted mutatis mutandis to cover the case \(\beta = 0\): note that the result of Theorem 1.7 and Corollary 1.8 for \(\beta = 0\) can be deduced from Theorem 1.2. At the end of this section, we also provide the proof of Theorem 1.10.

6.1 Proof of Theorem 1.7 in TV and Hellinger

By the comparison between TV and Hellinger stated in Lemma A.1, it suffices to prove the result for the TV distance, so we concentrate on this distance until the end of this section. Our proof is based on the exponential decay of the relative entropy at an explicit rate given by the optimal logarithmic Sobolev constant. However, this requires the relative entropy of the initial condition to be finite. Consequently, we proceed in three steps. First, given an arbitrary initial condition \(x_0^n \in {\overline{D}}_n\), we build an absolutely continuous probability measure \(\mu _{x_0^n}\) on \(D_n\) that approximates \(\delta _{x_0^n}\) and whose relative entropy is not too large. Second, we derive a decay estimate starting from this regularized initial condition. Third, we control the total variation distance between the two processes starting respectively from \(\delta _{x_0^n}\) and \(\mu _{x_0^n}\).

6.1.1 Regularization

In order to have a finite relative entropy at time 0, we first regularize the initial condition by smearing out each particle over an interval of length \(n^{-(\kappa +1)}\), for some \(\kappa >0\). Let us first introduce the regularization at scale \(\eta \) of a Dirac distribution \(\delta _{z}\), \(z\in {\mathbb {R}}\), by

$$\begin{aligned} \delta _z^{(\eta )}(\mathrm {d}u)=\mathrm {Uniform}([z,z+\eta ])(\mathrm {d}u)=\eta ^{-1} {\mathbf {1}}_{[z,z+\eta ]} \mathrm {d}u. \end{aligned}$$

Given \(x \in {\overline{D}}_n\) and \(\kappa > 0\), we define a regularized version of \(\delta _{x}\) at scale \(n^{-\kappa }\), which we denote \(\mu _x\), by setting

$$\begin{aligned} \mu _x=\otimes _{i=1}^n \delta _{x_i+3i\eta }^{(\eta )}, \end{aligned}$$
(6.1)

where \(\eta := n^{-(\kappa +1 )}\). The parameters have been tuned in such a way that, independently of the choice of \(x\in {\overline{D}}_n\), the following properties hold. The supports of the measures \(\delta _{x_i+3i\eta }^{(\eta )}\), \(i\in \{1,\ldots ,n\}\), lie at distance at least \(\eta \) from each other. The volume of the support of \(\mu _x\) equals \(\eta ^n\), and therefore the relative entropy of \(\mu _x\) with respect to the Lebesgue measure is not too large. Finally, if \(X_0^n = x\) and \(Y_0^n\) is distributed according to \(\mu _x\), then almost surely \(\vert X_0^n - Y_0^n\vert _\infty \le (3n+1) \eta \).
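The construction of \(\mu _x\) and the three properties just listed are straightforward to check on samples; a minimal sketch, assuming numpy, with arbitrary small n and \(\kappa \):

```python
import numpy as np

rng = np.random.default_rng(3)
n, kappa = 5, 1.0
eta = n**(-(kappa + 1))
x = np.sort(rng.normal(size=n))                      # a point of the closed chamber
y = x + (3*np.arange(1, n + 1) + rng.random(n))*eta  # one sample from mu_x
assert np.all(np.diff(y) > 0)                        # ordering is preserved
assert np.max(np.abs(y - x)) <= (3*n + 1)*eta        # |X_0^n - Y_0^n|_inf bound
print(y - x)
```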

6.1.2 Convergence of the regularized process to equilibrium

Lemma 6.1

(Convergence of regularized process) Let \((Y_t^n)_{t\ge 0}\) be a DOU process solution of (1.3), \(\beta \ge 1\), and let \(P_n^\beta \) be its invariant law. Assume that \(\mathrm {Law}(Y^n_0)\) is the regularized measure \(\mu _{x_0^n}\) in (6.1) associated to some initial condition \(x_0^n \in {\overline{D}}_n\). Then there exists a constant \(C > 0\), only depending on \(\kappa \), such that for all \(t\ge 0\), all \(n\ge 2\) and all \(x_0^n \in {\overline{D}}_n\)

$$\begin{aligned} \mathrm {Kullback}(\mathrm {Law}(Y_t^n)\mid P_n^\beta ) \le C (n|x_0^n|^2 + n^2\log (n))\mathrm {e}^{-2t}. \end{aligned}$$

Proof of Lemma 6.1

By Lemma B.2 and since \(\mathrm {Law}(Y^n_0) = \mu _{x_0^n}\), for all \(t\ge 0\), there holds

$$\begin{aligned} \mathrm {Kullback}(\mathrm {Law}(Y_t^n)\mid P_n^\beta ) \le \mathrm {Kullback}(\mu _{x_0^n}\mid P_n^\beta )\mathrm {e}^{-2t}. \end{aligned}$$
(6.2)

Now we have

$$\begin{aligned} \mathrm {Kullback}(\mu _{x_0^n}\mid P_n^\beta ) = {\mathbb {E}}_{\mu _{x_0^n} }\left[ \log \frac{\mathrm {d}\mu _{x_0^n} }{\mathrm {d} P_n^\beta } \right] . \end{aligned}$$

Recall the definition of S in (1.15). As \(P_n^\beta \) has density \(\frac{\mathrm {e}^{- E}}{C_n^\beta }\), we may rewrite this as

$$\begin{aligned} \mathrm {Kullback}(\mu _{x_0^n}\mid P_n^\beta )= S(\mu _{x_0^n}) +{\mathbb {E}}_{\mu _{x_0^n}}[E] +\log C_n^\beta . \end{aligned}$$
(6.3)

Recall the partition function \(C_{*n}^\beta = n! C_n^\beta \) from Sect. 2.2. It is proved in [12], using explicit expressions involving Gamma functions via a Selberg integral, that for some constant \(C>0\)

$$\begin{aligned} \log C_n^\beta \le \log C_{*n}^\beta \le Cn^2. \end{aligned}$$
(6.4)

Next, we claim that \(S(\mu _{x_0^n})\le n \log (n^{1+\kappa })\). Indeed since \(\mu _{x_0^n}\) is a product measure, the tensorization property of entropy recalled in Lemma A.4 gives

$$\begin{aligned} \mathrm {Kullback}(\mu _{x_0^n} \mid \mathrm {d}x ) = \sum _{i=1}^n \mathrm {Kullback}( \delta _{0}^{(\eta )} \mid \mathrm {d}x). \end{aligned}$$

Moreover an immediate computation yields \(\mathrm {Kullback}( \delta _{0}^{(\eta )} \mid \mathrm {d}x)=\log (\eta ^{-1})\) so that, given the definition of \(\eta \), we get

$$\begin{aligned} \mathrm {Kullback}(\mu _{x_0^n} \mid \mathrm {d}x) = n \log (n^{\kappa +1}). \end{aligned}$$
(6.5)
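
As a sanity check of (6.5), one can verify numerically that the relative entropy of a uniform law on an interval of length \(\eta \) with respect to Lebesgue is \(\log (\eta ^{-1})\), and that tensorization contributes the factor n. A sketch under the conventions above:

```python
import numpy as np

rng = np.random.default_rng(0)
n, kappa = 50, 2.0
eta = n ** (-(kappa + 1.0))
# Kullback(delta_0^(eta) | dx) = E_mu[log dmu/dx]; the density is the
# constant 1/eta on [0, eta], so the Monte Carlo average is exact.
u = eta * rng.random(100_000)                           # samples from delta_0^(eta)
kl_one = np.mean(np.log(np.full_like(u, 1.0 / eta)))
assert np.isclose(kl_one, np.log(1.0 / eta))
# tensorization over the n independent coordinates recovers (6.5):
assert np.isclose(n * kl_one, n * (kappa + 1.0) * np.log(n))
```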

We turn to the estimation of the term \({\mathbb {E}}_{\mu _{x_0^n} }[E].\) The confinement term can be easily bounded:

$$\begin{aligned} {\mathbb {E}}_{\mu _{x_0^n} }\left[ \frac{n}{2} \sum _{i=1}^n {x_i^2} \right] \le (n|x_0^n|^2 + n^2 \eta ^2). \end{aligned}$$

Let us now estimate the logarithmic energy of \(\mu _{x_0^n}\). Using the fact that the logarithm is increasing, together with the fact that the supports of the \(\delta _{x_i+3i\eta }^{(\eta )}\) lie at distance at least \(\eta \) from each other, we notice that for any \(i > j\) there holds

$$\begin{aligned} {\mathbb {E}}_{\mu _{x_0^n} }\left[ \log |x_i-x_j| \right]&= \iint \log |x-y|\delta _{x_i+3i\eta }^{(\eta )}(\mathrm {d}x)\delta _{x_j+3j\eta }^{(\eta )}(\mathrm {d}y)\\&\ge \iint \log |x-y|\delta _{3\eta }^{(\eta )}(\mathrm {d}x)\delta _{0}^{(\eta )}(\mathrm {d}y)\\&\ge \log \eta . \end{aligned}$$

It follows that the initial logarithmic energy cannot be much larger than \(n^2\log n\):

$$\begin{aligned} {\mathbb {E}}_{\mu _{x_0^n} }\left[ \sum _{i > j}\log \frac{1}{|x_i-x_j|} \right] \le \frac{n(n-1)}{2} \log n^{\kappa +1}. \end{aligned}$$

This implies that there exists a constant \(C>0\), only depending on \(\kappa \), such that for all \(n\ge 2\)

$$\begin{aligned} {\mathbb {E}}_{\mu _{x_0^n} }[E] = {\mathbb {E}}_{\mu _{x_0^n} }\left[ \frac{n}{2} \sum _{i=1}^n |x_i|^2+{\beta }\sum _{i > j}\log \frac{1}{|x_i-x_j|}\right] \le C \big (n|x_0^n|^2 + n^2\log n\big ). \end{aligned}$$
(6.6)

Inserting (6.4), (6.5) and (6.6) into (6.3) we obtain (for a different constant \(C>0\))

$$\begin{aligned} \mathrm {Kullback}(\mu _{x_0^n} \mid P_n^\beta )\le C \big (n|x_0^n|^2 + n^2\log n\big ). \end{aligned}$$

This bound, combined with (6.2), concludes the proof of Lemma 6.1. \(\square \)

6.1.3 Convergence to the regularized process in total variation distance

Let \((X_t^n)_{t\ge 0}\) and \((Y_t^n)_{t\ge 0}\) be two DOU processes with \(X_0^n=x_0^n\) and \(\mathrm {Law}(Y_0^n)=\mu _{x_0^n}\), where the measure \(\mu _{x_0^n}\) is defined in (6.1). Below we prove that, as soon as the parameter \(\kappa \) is large enough, the total variation distance between \(\mathrm {Law}(X_t^n)\) and \(\mathrm {Law}(Y_t^n)\) tends to 0, for any fixed \(t> 0\).

Note that at time 0, almost surely, there holds \(X_0^{n,i} \le Y_0^{n,i}\) for every \(i\in \{1,\ldots ,n\}\). We now introduce a coupling of the processes \((X_t^n)_{t\ge 0}\) and \((Y_t^n)_{t\ge 0}\) that preserves this ordering at all times. Consider two independent standard Brownian motions \(B^n\) and \(W^n\) in \({\mathbb {R}}^n\). Let \(X^n\) be the solution of (1.3) driven by \(B^n\), and let \(Y^n\) be the solution of

$$\begin{aligned} \mathrm {d}Y_t^{n,i}= & {} \sqrt{\frac{2}{n}} \Big ( {\mathbf {1}}_{\{Y_t^{n,i} \ne X_t^{n,i} \}} \mathrm {d}W^i_t + {\mathbf {1}}_{\{Y_t^{n,i} = X_t^{n,i} \}} \mathrm {d}B^i_t \Big )\\&-Y_t^{n,i}\mathrm {d}t +\frac{\beta }{n}\sum _{j\ne i}\frac{\mathrm {d}t}{Y_t^{n,i}-Y_t^{n,j}},\quad 1\le i\le n. \end{aligned}$$

We denote by \({\mathbb {P}}\) the probability measure under which these two processes are coupled. Let us comment on the driving noise in the equation satisfied by \(Y^n\). When the i-th coordinates of \(X^n\) and \(Y^n\) are equal, we take the same driving Brownian motion, and the difference \(Y^{n,i}-X^{n,i}\) remains non-negative due to the convexity of \(-\log \) defined in (1.5); see the monotonicity result stated in Lemma B.4. On the other hand, when these two coordinates differ, we take independent driving Brownian motions in order for their difference to have non-zero quadratic variation (this increases their probability of merging). Under this coupling, the ordering of \(X^n\) and \(Y^n\) is thus preserved at all times, and if \(X_s^n = Y_s^n\) for some \(s\ge 0\), then \(X_t^n = Y_t^n\) for all \(t\ge s\). Note however that an equality \(X_s^{n,i} = Y_s^{n,i}\) of a single coordinate need not persist at later times, except if all the coordinates match.
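
To make the coupling concrete, here is a schematic Euler discretization in Python. All names and the merging tolerance `tol` are illustrative; the singular interaction makes a naive Euler scheme unreliable near collisions, so this is only a sketch, not the construction used in the proof.

```python
import numpy as np

def dou_drift(x, beta):
    """Drift of (1.3): -x_i + (beta/n) * sum_{j != i} 1/(x_i - x_j)."""
    n = len(x)
    diff = x[:, None] - x[None, :]
    np.fill_diagonal(diff, np.inf)      # discard the diagonal term i = j
    return -x + (beta / n) * (1.0 / diff).sum(axis=1)

def coupled_step(x, y, dt, beta, rng, tol=1e-9):
    """One Euler step of the coupling: coordinates with y_i = x_i (up to tol)
    share the increment of B, the others use an independent increment of W."""
    n = len(x)
    dB = rng.normal(scale=np.sqrt(dt), size=n)
    dW = rng.normal(scale=np.sqrt(dt), size=n)
    same = np.abs(y - x) < tol
    x_new = x + dou_drift(x, beta) * dt + np.sqrt(2.0 / n) * dB
    y_new = y + dou_drift(y, beta) * dt + np.sqrt(2.0 / n) * np.where(same, dB, dW)
    y_new = np.maximum(y_new, x_new)    # restore the ordering lost to discretization
    return x_new, y_new, np.sum(y_new - x_new)   # last output: the area A^n
```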

As in (A.7), the total variation distance between the laws of \(X_t^n\) and \(Y_t^n\) may be bounded by

$$\begin{aligned} \Vert \mathrm {Law}(Y_t^n) -\mathrm {Law}(X_t^n) \Vert _{\mathrm {TV}}\le {\mathbb {P}}( X_t^n \ne Y_t^n), \end{aligned}$$

for all \(t\ge 0\). We wish to establish that for any given \(t>0\),

$$\begin{aligned} \lim _{n\rightarrow \infty }{\mathbb {P}}(X_t^n\ne Y_t^n)=0. \end{aligned}$$

To do so, we work with the area between the two processes \(X^n\) and \(Y^n\), defined by

$$\begin{aligned} A^n_t := \sum _{i=1}^n \big ( Y^{n,i}_t - X^{n,i}_t \big ) = \pi (Y^n_t) - \pi (X^n_t),\quad t\ge 0. \end{aligned}$$

As the two processes are ordered at any time, this is nothing but the geometric area between the two discrete interfaces \(i\mapsto X_t^{n,i}\) and \(i\mapsto Y_t^{n,i}\) associated with the configurations \(X_t^n\) and \(Y_t^n\). We deduce that the merging time of the two processes coincides with the hitting time of 0 by this area, which we denote by \(\tau =\inf \{t\ge 0:A_t^n=0\}\).

The process \(A^n\) has a very simple structure: it is a semimartingale that behaves like an OU process with a randomly varying quadratic variation. Let \(N_t\) be the number of coordinates that do not coincide at time t, that is

$$\begin{aligned} N_t := \#\big \{i\in \{1,\ldots ,n\}: X^{n,i}_t \ne Y^{n,i}_t\big \}. \end{aligned}$$

Then \(A^n\) satisfies

$$\begin{aligned} \mathrm {d}A^n_t = -A^n_t \mathrm {d}t + \mathrm {d}M_t, \end{aligned}$$

where M is a centered martingale with quadratic variation

$$\begin{aligned} \mathrm {d}\langle M\rangle _t = \frac{2}{n} N_t \mathrm {d}t. \end{aligned}$$
(6.7)
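
The absence of interaction terms in the drift of \(A^n\) comes from summing the coordinate SDEs: by antisymmetry,

$$\begin{aligned} \sum _{i=1}^n\sum _{j\ne i}\frac{1}{Y_t^{n,i}-Y_t^{n,j}}=0, \end{aligned}$$

since the (i, j) and (j, i) terms cancel pairwise (and similarly for \(X^n\)), so that only the linear drift \(-A^n_t\,\mathrm {d}t\) and a martingale part remain.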

Note that whenever \(t< \tau \) we have

$$\begin{aligned} \mathrm {d}\langle M\rangle _t \ge \frac{2}{n}. \end{aligned}$$

This a priori lower bound on the quadratic variation of M, combined with the Dubins–Schwarz theorem, allows one to check that \(\tau < \infty \) almost surely. Note that, in view of the coupling between \(X_t^n\) and \(Y_t^n\), we have \(X_t^n=Y_t^n\) for all \(t\ge \tau \).

Recall the following informal fact: with large probability, a Brownian motion starting from a hits b within a time of order \((a-b)^2\). For a continuous martingale, this becomes: with large probability, a continuous martingale starting from a accumulates a quadratic variation of order \((a-b)^2\) by its first hitting time of b. Our next lemma states such a bound for the supermartingale \(A^n\).

Lemma 6.2

Let \(a>b\ge 0\) and set \(\tau _b=\inf \{t>0:A_t=b\}\), which is finite almost surely. Then, for all \(u\ge 1\),

$$\begin{aligned} {\mathbb {P}}(\langle A\rangle _{\tau _b} \ge (a-b)^2 u \mid A_0 = a) \le 4 u^{-1/2}. \end{aligned}$$

Proof

Without loss of generality one can assume that \(A_0 = a\) almost surely.

By Itô’s formula, for all \(\lambda \ge 0\), the process

$$\begin{aligned} S_t = \exp \Bigl (-\lambda A_t - \frac{\lambda ^2}{2} \langle A\rangle _t\Bigr ), \end{aligned}$$

defines a submartingale (taking its values in [0, 1]). Doob’s stopping theorem yields

$$\begin{aligned} {\mathbb {E}}[\mathrm {e}^{-\frac{\lambda ^2}{2} \langle A\rangle _{\tau _b}}] =\mathrm {e}^{\lambda b} {\mathbb {E}}[S_{\tau _b}] \ge \mathrm {e}^{\lambda b} {\mathbb {E}}[S_0] = \mathrm {e}^{-\lambda (a-b)}. \end{aligned}$$

On the other hand, for \(\lambda = 2 (a-b)^{-1} u^{-1/2}\), there holds

$$\begin{aligned} {\mathbb {E}}[\mathrm {e}^{-\frac{\lambda ^2}{2} \langle A\rangle _{\tau _b}}]&\le {\mathbb {P}}\big (\langle A\rangle _{\tau _b} < (a-b)^2 u\big ) + \mathrm {e}^{-\frac{\lambda ^2}{2}(a-b)^2u}\, {\mathbb {P}}\big (\langle A\rangle _{\tau _b} \ge (a-b)^2 u\big )\\&\le 1 - (1-\mathrm {e}^{-\frac{\lambda ^2}{2}(a-b)^2u}){\mathbb {P}}\big (\langle A\rangle _{\tau _b} \ge (a-b)^2 u\big )\\&\le 1 - \frac{1}{2} {\mathbb {P}}\big (\langle A\rangle _{\tau _b} \ge (a-b)^2 u\big ). \end{aligned}$$

Consequently one deduces that

$$\begin{aligned} {\mathbb {P}}(\langle A\rangle _{\tau _b} \ge (a-b)^2 u) \le 2(1-\mathrm {e}^{-\lambda (a-b)}) \le 4u^{-1/2}. \end{aligned}$$

\(\square \)
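
As a sanity check, Lemma 6.2 can be specialized to a standard Brownian motion started at a: for a true martingale the same submartingale argument applies (with equality in Doob's step) and \(\langle A\rangle _{\tau _b}=\tau _b\). A minimal Monte Carlo sketch in Python, with illustrative names:

```python
import numpy as np

def bm_hitting_tail(a=1.0, u_values=(4.0, 16.0, 64.0), dt=1e-3, t_max=100.0,
                    n_paths=5000, seed=1):
    """Empirical P(tau_0 >= a^2 * u) for a Brownian motion started at a, where
    tau_0 is its first hitting time of 0; paths still alive at t_max are
    counted in the tail, which only makes the check more conservative."""
    rng = np.random.default_rng(seed)
    w = np.full(n_paths, a)
    hit = np.full(n_paths, t_max)
    alive = np.ones(n_paths, dtype=bool)
    for k in range(1, int(t_max / dt) + 1):
        w[alive] += rng.normal(scale=np.sqrt(dt), size=alive.sum())
        just_hit = alive & (w <= 0.0)
        hit[just_hit] = k * dt
        alive &= ~just_hit
        if not alive.any():
            break
    for u in u_values:
        print(f"u = {u:5.1f}: empirical tail {np.mean(hit >= a * a * u):.3f}"
              f" <= bound {4.0 / np.sqrt(u):.3f}")

bm_hitting_tail()
```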

We are now ready to prove the following lemma:

Lemma 6.3

If \(\kappa >\frac{3}{2}\), then for every sequence of times \({(t_n)}_n\) with \(\varliminf _{n\rightarrow \infty }t_n>0\), we have

$$\begin{aligned} \lim _{n\rightarrow \infty } \sup _{x_0^n \in {\overline{D}}_n} \Vert \mathrm {Law}(Y_{t_n}^n) -\mathrm {Law}(X_{t_n}^n) \Vert _{\mathrm {TV}}=0. \end{aligned}$$

Proof of Lemma 6.3

Let \({(t_n)}_n\) be a sequence of times such that \(\varliminf _{n\rightarrow \infty }t_n>0\). In view of the definition of \(\mu _{{x_0^n}}\) and \(\eta \), the initial area satisfies almost surely

$$\begin{aligned} A_0^n\le 4n^{1-\kappa }. \end{aligned}$$

According to Lemma 6.2 applied with \(a=4n^{1-\kappa }\), \(b=0\) and \(u=\log n\), with a probability that goes to 1 as \(n\rightarrow \infty \), one has

$$\begin{aligned} \langle A^n\rangle _{\tau }-\langle A^n\rangle _{0} < 16n^{2-2\kappa } \log n. \end{aligned}$$

On the other hand, by (6.7), we have the following control on the quadratic variation:

$$\begin{aligned} \langle A^n\rangle _{\tau }-\langle A^n\rangle _{0} \ge \frac{2}{n}\tau . \end{aligned}$$

One deduces that, with a probability that goes to 1,

$$\begin{aligned} \tau \le 8 n^{3-2\kappa } \log n, \end{aligned}$$

and this quantity goes to 0 as \(n\rightarrow \infty \), whenever \(\kappa >\frac{3}{2}\). Therefore for \(\kappa >\frac{3}{2}\), there holds

$$\begin{aligned} \lim _{n\rightarrow \infty } \sup _{x_0^n \in {\overline{D}}_n} {\mathbb {P}}(X_{t_n}^n\ne Y_{t_n}^n)=0, \end{aligned}$$

thus concluding the proof of Lemma 6.3. \(\square \)

Proof of Theorem 1.7 in TV and Hellinger

Let \(\kappa >\frac{3}{2}\) and fix some initial condition \(x_0^n \in {\overline{D}}_n\). By the triangle inequality for \(\mathrm {TV}\), there holds

$$\begin{aligned} \Vert \mathrm {Law}(X_{t}^n)-P_n^\beta \Vert _{\mathrm {TV}}\le \Vert \mathrm {Law}(Y_{t}^n)-P_n^\beta \Vert _{\mathrm {TV}}+ \Vert \mathrm {Law}(X_{t}^n)-\mathrm {Law}(Y_{t}^n)\Vert _{\mathrm {TV}}. \end{aligned}$$
(6.8)

Taking \(t=(1+\varepsilon )t_n\) with \(t_n=\log (\sqrt{n}\, |x_0^n|) \vee \log (n)\), one deduces from Lemma 6.1 and the Pinsker inequality stated in Lemma A.1 that the first term on the right-hand side of (6.8) vanishes as n tends to infinity. Meanwhile, Lemma 6.3 guarantees that the second term tends to 0 as n tends to infinity. Using the comparison between TV and Hellinger (see Lemma A.1), we also conclude that

$$\begin{aligned} \lim _{n\rightarrow \infty }\mathrm {Hellinger}(\mathrm {Law}(X_{(1+\varepsilon )t_n}^n),P_n^\beta )=0. \end{aligned}$$

\(\square \)

6.2 Proof of Corollary 1.8 in TV and Hellinger

Proof of Corollary 1.8 in TV and Hellinger

By Lemma A.1 and the triangle inequality for \(\mathrm {TV}\), we have

$$\begin{aligned} \sup _{x_0^n \in [-a_n,a_n]^n}\Vert \mathrm {Law}(X_{t}^n)-P_n^\beta \Vert _{\mathrm {TV}}&\le \sup _{x_0^n \in [-a_n,a_n]^n}\Vert \mathrm {Law}(Y_{t}^n)-\mathrm {Law}(X_{t}^n)\Vert _{\mathrm {TV}}\\&\quad + \sup _{x_0^n \in [-a_n,a_n]^n} \sqrt{2\, \mathrm {Kullback}(\mathrm {Law}(Y_t^n)\mid P_n^\beta )}. \end{aligned}$$

Take \(t=(1+\varepsilon ) c_n\) with \(c_n = \log (na_n)\). Lemmas 6.1 and 6.3, combined with the assumption made on \((a_n)\), show that the two terms on the right-hand side vanish as \(n\rightarrow \infty \). Using Lemma A.1, the same result holds for \(\mathrm {Hellinger}\).

On the other hand, take \(x_0^{n,i} = a_n\) for all i and note that \(\pi (x_0^n) = na_n\) goes to \(+\infty \) as \(n\rightarrow \infty \). By Corollary 1.4 we find

$$\begin{aligned} \lim _{n\rightarrow \infty } \sup _{x^n_0 \in [-a_n,a_n]^n} \mathrm {dist}(\mathrm {Law}(X^n_{(1-\varepsilon )c_n})\mid P_n^\beta ) = 1\; \end{aligned}$$

whenever \(\mathrm {dist} \in \{\mathrm {TV}, \mathrm {Hellinger}\}\). \(\square \)

6.3 Proof of Theorem 1.10

Proof of Theorem 1.10

Lower bound. The contraction property provided by Lemma A.2 gives

$$\begin{aligned} \mathrm {Kullback}(\mathrm {Law}(X_t^n)\mid P_n^\beta ) \ge \mathrm {Kullback}( \mathrm {Law}(\pi (X_t^n))\mid P_n^\beta \circ \pi ^{-1}). \end{aligned}$$

By Theorem 1.3, \(P_n^\beta \circ \pi ^{-1}={\mathcal {N}}(0,1)\), and \(Y=\pi (X^n)\) is an OU process, namely a weak solution of \(\mathrm {d}Y_t=\sqrt{2}\,\mathrm {d}B_t-Y_t\,\mathrm {d}t\) with \(Y_0=\pi (X^n_0)\). In particular, for all \(t\ge 0\), \(\mathrm {Law}(Y_t)\) is a mixture of Gaussian laws, in the sense that for any measurable test function g with polynomial growth,

$$\begin{aligned} {\mathbb {E}}_{\mathrm {Law}(Y_t)}[g] ={\mathbb {E}}[g(Y_t)]={\mathbb {E}}[G_t(Y_0)] \quad \text {where}\quad G_t(y)={\mathbb {E}}_{{\mathcal {N}}(y\mathrm {e}^{-t},1-\mathrm {e}^{-2t})}[g]. \end{aligned}$$
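
This Gaussian-mixture identity is easy to test numerically: using the exact OU transition \(Y_t=Y_0\mathrm {e}^{-t}+\sqrt{1-\mathrm {e}^{-2t}}\,Z\) with \(Z\sim {\mathcal {N}}(0,1)\) independent of \(Y_0\), one can compare both sides for, say, \(g(y)=y^2\). A sketch, with an arbitrary choice of initial law:

```python
import numpy as np

rng = np.random.default_rng(0)
t, n_mc = 0.7, 200_000
y0 = rng.exponential(size=n_mc)                 # any initial law for Y_0
z = rng.normal(size=n_mc)
yt = y0 * np.exp(-t) + np.sqrt(1.0 - np.exp(-2.0 * t)) * z
lhs = np.mean(yt ** 2)                          # E[g(Y_t)] with g(y) = y^2
# G_t(y) = E_{N(y e^{-t}, 1 - e^{-2t})}[g] = (y e^{-t})^2 + 1 - e^{-2t}
rhs = np.mean((y0 * np.exp(-t)) ** 2) + 1.0 - np.exp(-2.0 * t)
assert abs(lhs - rhs) < 5e-2                    # E[g(Y_t)] = E[G_t(Y_0)]
```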

Now we use (again) the variational formula used in the proof of Lemma A.2 to get

$$\begin{aligned} \mathrm {Kullback}( \mathrm {Law}(\pi (X_t^n))\mid P_n^\beta \circ \pi ^{-1}) =\sup _{g}\{{\mathbb {E}}_{\mathrm {Law}(\pi (X_t^n))}[g]-\log {\mathbb {E}}_{{\mathcal {N}}(0,1)}[\mathrm {e}^g] \}, \end{aligned}$$

and taking for g the linear function defined by \(g(x)=\lambda x\) for all \(x\in {\mathbb {R}}\) and for some \(\lambda \ne 0\) yields

$$\begin{aligned} \mathrm {Kullback}( \mathrm {Law}(\pi (X_t^n))\mid P_n^\beta \circ \pi ^{-1}) \ge \lambda \mathrm {e}^{-t}\sum _{i=1}^n \int x\mu _i(\mathrm {d}x)-\frac{\lambda ^2}{2}. \end{aligned}$$

Finally, using the assumption on the first moments and taking \(\lambda \) small enough (of the appropriate sign), we get, for all \(\varepsilon \in (0,1)\),

$$\begin{aligned} \lim _{n\rightarrow \infty }\mathrm {Kullback}( \mathrm {Law}(\pi (X_{(1-\varepsilon )\log (n)}^n))\mid P_n^\beta \circ \pi ^{-1}) =+\infty . \end{aligned}$$
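
For the record, the right-hand side of the previous bound is maximized at \(\lambda =\mathrm {e}^{-t}\sum _{i=1}^n\int x\,\mu _i(\mathrm {d}x)\), which gives the slightly more explicit lower bound

$$\begin{aligned} \mathrm {Kullback}( \mathrm {Law}(\pi (X_t^n))\mid P_n^\beta \circ \pi ^{-1}) \ge \frac{\mathrm {e}^{-2t}}{2}\Bigl (\sum _{i=1}^n\int x\,\mu _i(\mathrm {d}x)\Bigr )^2, \end{aligned}$$

so that the divergence at \(t=(1-\varepsilon )\log (n)\) amounts to the condition \(n^{-(1-\varepsilon )}\sum _{i=1}^n\int x\,\mu _i(\mathrm {d}x)\rightarrow \infty \), which the first-moment assumption provides.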

Upper bound. From Lemma B.2 we have, for all \(t\ge 0\),

$$\begin{aligned} \mathrm {Kullback}(\mathrm {Law}(X_t^n)\mid P_n^\beta ) \le \mathrm {Kullback}(\mathrm {Law}(X_0^n)\mid P_n^\beta )\mathrm {e}^{-2t}. \end{aligned}$$

Arguing as in the proof of Lemma 6.1 and using the contraction property of \(\mathrm {Kullback}\) provided by Lemma A.2 for the map \(\Psi \) defined in (1.17), we can write the following decomposition:

$$\begin{aligned} \mathrm {Kullback}(\mathrm {Law}(X_0^n)\mid P_n^\beta )&\le \mathrm {Kullback}(\otimes _{i=1}^n\mu _i\mid P_{*n}^\beta )\\&=S(\otimes _{i=1}^n\mu _i)+{\mathbb {E}}_{\otimes _{i=1}^n\mu _i}[E]+\log C_{*n}^\beta \\&\le \sum _{i=1}^nS(\mu _i)+\sum _{i\ne j}\iint \Phi \mathrm {d}\mu _i \otimes \mathrm {d}\mu _j +Cn^2. \end{aligned}$$

Combining (6.4) with the assumptions on the \(\mu _i\)’s yields for some constant \(C>0\)

$$\begin{aligned} \mathrm {Kullback}(\mathrm {Law}(X_0^n)\mid P_n^\beta )\le Cn^2 \end{aligned}$$

and it finally follows that, for all \(\varepsilon \in (0,1)\),

$$\begin{aligned} \lim _{n\rightarrow \infty }\mathrm {Kullback}(\mathrm {Law}(X^n_{(1+\varepsilon )\log (n)})\mid P_n^\beta )=0. \end{aligned}$$

\(\square \)

7 Cutoff phenomenon for the DOU in Wasserstein

7.1 Proofs of Theorem 1.7 and Corollary 1.8 in Wasserstein

Let \({(X_t)}_{t\ge 0}\) be the DOU process. By Lemma B.2, for all \(t\ge 0\) and all initial conditions \(X_0 \in {\overline{D}}_n\),

$$\begin{aligned} \mathrm {Wasserstein}^2( \mathrm {Law}(X_t),P_n^\beta ) \le \mathrm {e}^{-2t} \mathrm {Wasserstein}^2(\mathrm {Law}(X_0),P_n^\beta ). \end{aligned}$$

Suppose now that \(\mathrm {Law}(X^n_0)=\delta _{x^n_0}\). Then, since the only coupling of \(\delta _{x^n_0}\) and \(P_n^\beta \) is the product one, the elementary bound \(|u-v|^2\le 2|u|^2+2|v|^2\) gives

$$\begin{aligned} \mathrm {Wasserstein}^2(\delta _{x^n_0},P_n^\beta ) = \int \left| x^n_0 - x \right| ^2 P_n^\beta (\mathrm {d}x) \le 2 |x^n_0|^2 +2 \int \left| x\right| ^2P_n^\beta (\mathrm {d}x). \end{aligned}$$

By Theorem 1.3, the mean at equilibrium of \(|X_t^n|^2\) equals \(1 + \frac{\beta }{2}(n-1)\) and therefore

$$\begin{aligned} \int \left| x\right| ^2P_n^\beta (\mathrm {d}x) =1+\frac{\beta }{2}(n-1). \end{aligned}$$

We thus get

$$\begin{aligned} \mathrm {Wasserstein}^2(\mathrm {Law}(X^n_t),P_n^\beta ) \le 2(|x^n_0|^2+1+\frac{\beta }{2}(n-1))\mathrm {e}^{-2t}. \end{aligned}$$

Set \(c_n := \log (|x_0^n|) \vee \log (\sqrt{n})\). For any \(\varepsilon \in (0,1)\), we have

$$\begin{aligned} \lim _{n\rightarrow \infty } \mathrm {Wasserstein}(\mathrm {Law}(X^n_{(1+\varepsilon )c_n}),P_n^\beta ) =0 \end{aligned}$$

and this concludes the proof of Theorem 1.7 in the Wasserstein distance.
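
The rate is explicit enough to be tabulated. A short numerical illustration in Python (`w2_bound` is a name introduced here, and \(|x_0^n|^2=n\) is an arbitrary choice of initial condition):

```python
import numpy as np

def w2_bound(n, x0_norm2, beta, t):
    """The bound 2(|x_0|^2 + 1 + beta(n-1)/2) e^{-2t} established above."""
    return 2.0 * (x0_norm2 + 1.0 + 0.5 * beta * (n - 1)) * np.exp(-2.0 * t)

beta, eps = 2.0, 0.1
for n in (10, 100, 1000, 10_000):
    x0_norm2 = float(n)                # e.g. all coordinates of size 1
    c_n = max(0.5 * np.log(x0_norm2), 0.5 * np.log(n))   # log|x_0| v log(sqrt n)
    print(n, w2_bound(n, x0_norm2, beta, (1.0 + eps) * c_n))  # decreases to 0
```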

Regarding the proof of Corollary 1.8, if \(x_0^n \in [-a_n,a_n]^n\) then \(|x_0^n| \le \sqrt{n} a_n\). Therefore if \(\inf _n a_n > 0\), setting \(c_n = \log (\sqrt{n} a_n)\) we find, as required,

$$\begin{aligned} \lim _{n\rightarrow \infty } \sup _{x_0^n \in [-a_n,a_n]^n} \mathrm {Wasserstein}(\mathrm {Law}(X^n_{(1+\varepsilon )c_n}),P_n^\beta ) =0. \end{aligned}$$

7.2 Proof of Theorem 1.9

This is an adaptation of the previous proof. We compute

$$\begin{aligned} \mathrm {Wasserstein}^2(\delta _{x_0^n},P_n^\beta )&= \int \left| x^n_0 - x \right| ^2P_n^\beta (\mathrm {d}x)\\&\le 2\left| x^n_0 - \rho _n\right| ^2 + 2\int \left| \rho _n -x \right| ^2P_n^\beta (\mathrm {d}x), \end{aligned}$$

where \(\rho _n \in D_n\) is the vector of the quantiles of order 1/n of the semi-circle law, as in (1.14). The rigidity estimates established in [17, Th. 2.4] ensure that

$$\begin{aligned} \lim _{n\rightarrow \infty }\int \left| \rho _n -x \right| ^2P_n^\beta (\mathrm {d}x)=0. \end{aligned}$$

If \(|x^n_0-\rho _n|\) diverges with n, we deduce that for all \(\varepsilon \in (0,1)\), with \(t_n = \log (|x^n_0-\rho _n|)\),

$$\begin{aligned} \lim _{n\rightarrow \infty } \mathrm {Wasserstein}(\mathrm {Law}(X^n_{(1+\varepsilon )t_n}),P_n^\beta ) =0. \end{aligned}$$

On the other hand, if \(|x^n_0-\rho _n|\) converges to some limit \(\alpha \) then we easily get, for any \(t\ge 0\),

$$\begin{aligned} \varlimsup _{n\rightarrow \infty } \mathrm {Wasserstein}^2(\mathrm {Law}(X^n_{t}),P_n^\beta ) \le \alpha ^2\mathrm {e}^{-2t}. \end{aligned}$$

Remark 7.1

(High-dimensional phenomena) With \(X_n\sim P_n^\beta \), in the bias-variance decomposition

$$\begin{aligned} \int \left| \rho _n -x \right| ^2P_n^\beta (\mathrm {d}x) =|{\mathbb {E}}X_n-\rho _n|^2+{\mathbb {E}}(|X_n-{\mathbb {E}}X_n|^2), \end{aligned}$$

the second term on the right-hand side is a variance term that measures the concentration of the log-concave random vector \(X_n\) around its mean \({\mathbb {E}}X_n\), while the first term is a bias term that measures the distance of the mean \({\mathbb {E}}X_n\) to the mean-field limit \(\rho _n\). Note also that \({\mathbb {E}}(|X_n-{\mathbb {E}}X_n|^2)={\mathbb {E}}(|X_n|^2)-|{\mathbb {E}}X_n|^2=1+\frac{\beta }{2}(n-1)-|{\mathbb {E}}X_n|^2\), which reduces the problem to the study of the mean. We refer to [42] for a fine asymptotic analysis in the determinantal case \(\beta =2\).