1 Introduction

The non-equilibrium stationary states (NESS) of diffusive systems in contact with reservoirs have been extensively studied, one of the main targets being to understand how the presence of a current affects what is seen in thermal equilibrium. In particular it has been shown that fluctuations in a NESS have a non-local structure, as opposed to what happens in thermal equilibrium. The theory of such phenomena is well developed [1, 5], but mathematical proofs are restricted to very special systems (SEP [6], KMP [8], chains of oscillators [2], \(\ldots \)).

The general structure of the NESS in the presence of phase transitions is a very difficult and open problem: not only are mathematical results missing, a theoretical understanding is also lacking. However a breakthrough came recently in a paper by De Masi et al. [4], where it is proved that the NESS can be computed explicitly for a quite general class of Ginzburg–Landau stochastic models which include phase transitions.

The main point in [4] is that the NESS is still a Gibbs state, but with the original Hamiltonian modified by the addition of a slowly varying chemical potential. Thus for boundary-driven Ginzburg–Landau stochastic models the analysis of the NESS is reduced to an equilibrium Gibbsian problem and, at least in principle, very fine properties of its structure can be investigated, something unthinkable for general models.

In particular we can study cases where there are phase transitions, and the purpose of this paper is to give an indication that the 2d NESS interface is much more rigid than in thermal equilibrium.

The analysis in [4] includes a system where the Ising model is coupled to a Ginzburg–Landau process. In the corresponding NESS the distribution of the Ising spins is a Gibbs measure with the usual nearest-neighbour ferromagnetic interaction plus a slowly varying external magnetic field.

In particular in the 2d square \(\Lambda _N:=[0,N]\times [-N,N]\cap {\mathbb {Z}}^2\) the NESS \(\mu _N(\sigma )\) is

$$\begin{aligned} \mu _N(\sigma )=\frac{1}{Z_N} e^{-\beta H_N(\sigma )},\quad \sigma =(\sigma (x)\in \{-1,1\}, x\in \Lambda _N) \end{aligned}$$
$$\begin{aligned} H_N(\sigma ) = H^{\mathrm{ising}}(\sigma )+ \sum _{x\in \Lambda _N} \frac{b\,x\cdot e_2 }{N}\,\sigma (x),\quad H^{\mathrm{ising}}(\sigma )= \sum _{\begin{array}{c} x,y\in \Lambda _N \\ |x-y|=1 \end{array}} {\mathbf {1}}_{\sigma (x)\ne \sigma (y)},\qquad e_2=(0,1) \end{aligned}$$

where \(b>0\) is fixed by the chemical potentials at the boundaries.

We assume \(\beta >\beta _c\); thus, since the slowly varying external magnetic field \(\displaystyle {\frac{bx\cdot e_2}{N}}\) is positive in the upper half plane and negative in the lower half plane, we expect the existence of an interface, namely a connected “open line” \(\lambda \) in the dual lattice which goes from left to right and which separates the region where the majority of the spins are equal to 1 from the one where the majority are equal to \(-1\).

The problem of the microscopic location of the interface has been much studied in equilibrium without external magnetic field and when the interface is determined by the boundary conditions: \(+\) boundary conditions on \(\Lambda _N^c \cap \{x\cdot e_2 \geqslant 0\}\) and − boundary conditions on \(\Lambda _N^c \cap \{x\cdot e_2 < 0\}\).

It has been known since the work initiated by Gallavotti [7] that in the 2d Ising model at thermal equilibrium the interface fluctuates by the order of \(\sqrt{N}\), N being the size of the system.

In this paper we argue that at low temperature (much below the critical value) and in the presence of a stationary current produced by reservoirs at the boundaries, the interface is much more rigid, as it fluctuates only by the order of \(N^{1/4}\).

We study the problem with a drastic simplification by considering the SOS approximation of the interface. Namely we consider the simplest case where the interface \(\lambda \) is a graph, described by a function \(s_x\), \(x\in \{0,\ldots ,N\}\), with integer values in \({\mathbb {Z}}\). The corresponding Ising configurations have spins equal to \(-1\) below \(s_x\) and \(+1\) above \(s_x\), namely \(\sigma (x,i)=1\) if \(i\geqslant s_x\) and \(\sigma (x,i)=-1\) if \(i< s_x\).

The interface is then made up of a sequence of horizontal and vertical segments and the Ising energy of such a configuration is \(|\lambda |\). We normalise the energy by subtracting the energy of the flat interface, so that the normalised energy is

$$\begin{aligned} \sum _{x=1}^N |s_x-s_{x-1}|=|\lambda |-N \end{aligned}$$

i.e. the sum of the lengths of the vertical segments.

The energy due to the external magnetic field is normalised by subtracting the energy of the configuration in which all the \(s_x\) are equal to 0. Since \(2\sum _{i=1}^{m} i = m(m+1)\approx m^2\) for large m, this is (below we set \(b=1\))

$$\begin{aligned} 2\sum _{x=0}^N \sum _{i=1}^{|s_x|}\frac{i}{N} \approx \sum _{x=0}^N\frac{s_x^2}{N} \end{aligned}$$

Thus we get the SOS Hamiltonian

$$\begin{aligned} H_N(s)=\frac{1}{N}\sum _{x=0}^{N}s_x^2+ \sum _{x=1}^{N}|s_x-s_{x-1}| \end{aligned}$$
(1.1)
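As an illustration of (1.1) (not used in the proofs), the following minimal Python sketch runs a single-site Metropolis chain targeting the Gibbs weight \(e^{-H_N({\mathbf {s}})}\); the dynamics, the parameters and the free boundary values are our own choices, made only for this illustration.

```python
import numpy as np

def metropolis_sample(N, sweeps, seed=0):
    """Single-site Metropolis chain targeting exp(-H_N(s)) of (1.1)."""
    rng = np.random.default_rng(seed)
    s = np.zeros(N + 1, dtype=np.int64)          # start from the flat interface
    for _ in range(sweeps):
        x = int(rng.integers(0, N + 1))          # site to update
        ds = int(rng.choice([-1, 1]))            # proposed height change
        # energy change: confinement term plus the affected gradient terms
        dE = ((s[x] + ds) ** 2 - s[x] ** 2) / N
        if x > 0:
            dE += abs(s[x] + ds - s[x - 1]) - abs(s[x] - s[x - 1])
        if x < N:
            dE += abs(s[x + 1] - s[x] - ds) - abs(s[x + 1] - s[x])
        if rng.random() < np.exp(-dE):           # Metropolis acceptance rule
            s[x] += ds
    return s

if __name__ == "__main__":
    N = 400
    s = metropolis_sample(N, sweeps=500 * N)
    print("midpoint height:", s[N // 2], " N^(1/4) =", round(N ** 0.25, 2))
```

For large N the sampled heights are expected to be of order \(N^{1/4}\), in line with Theorem 2.1 below.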

We prove that the stationary fluctuations of the interface in this SOS approximation, scaled by \(N^{1/4}\), converge to a stationary Ornstein–Uhlenbeck process.

The problem addressed in this article is the behavior of the interface in the NESS, and the aim is to argue that its fluctuations are more rigid than in thermal equilibrium, as indicated by the SOS approximation. Thus in the SOS approximation we prove the \(N^{1/4}\) behavior in the simplest setting of Sect. 2.

More general results, similar to those in [9], presumably apply. We cannot use directly the results in [9] because their SOS models have an additional constraint (the interface lies in the upper half plane). Our proofs have several points in common with [9], but since we work in a more specific setup with fewer constraints, they are considerably simpler and somewhat more intuitive.

2 Model and Results

We consider \(\Lambda _N= \{0,\ldots ,N\}\times {\mathbb {Z}}\) and denote the configuration of the interface by \({\mathbf {s}}= \{s_x\in {\mathbb {Z}}, x= 0,\ldots , N\}\). The interface increments are denoted by \(\eta _x=s_x-s_{x-1} \in {\mathbb {Z}}\), \(x= 1, \ldots , N\).

Let \(\pi \) be a symmetric, aperiodic probability distribution on \({\mathbb {Z}}\) such that

$$\begin{aligned} \sum _{\eta \in {\mathbb {Z}}} e^{a\eta } \pi (\eta )< +\infty \quad \forall |a|\leqslant a_0, \text { for some }a_0>0 \end{aligned}$$
(2.1)

We denote by \(\sigma ^2\) the variance of \(\pi \); as we shall see, the result does not depend on the particular choice of \(\pi \) but only on the variance \(\sigma ^2\).

For \(s, {\overline{s}} \in {\mathbb {Z}}\) define the positive kernel

$$\begin{aligned} T_N(s, {\overline{s}})=e^{-\frac{ s^2+{\overline{s}}^2}{2N}} \pi (s-{\overline{s}}). \end{aligned}$$
(2.2)

We denote by \(T_N\) also the operator \((T_Nf)(s)=\sum _{{\overline{s}}} T_N(s,{\overline{s}})f({\overline{s}})\). \(T_N\) is a symmetric positive operator in \(\ell _2({\mathbb {Z}})\), and it can be checked immediately that it is Hilbert–Schmidt, hence compact. Then the Krein–Rutman theorem [11] applies, thus there are a strictly positive eigenfunction \(h_N\in \ell _2({\mathbb {Z}})\) and a strictly positive maximal eigenvalue \(\lambda _N>0\):

$$\begin{aligned} \sum _{ s'} T_N(s, s')h_N( s')=\lambda _Nh_N(s), \quad \sum _{s} h_N^2(s) = 1, \end{aligned}$$
(2.3)

The eigenvalue satisfies \(\lambda _N<1\), and \(\lambda _N \rightarrow 1\) as \(N\rightarrow \infty \); see Theorem 3.1.
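Since the kernel is strictly positive and compact, the pair \((\lambda _N, h_N)\) is also easy to approximate numerically. The following sketch (a numerical check, not part of the arguments) truncates \(T_N\) to a finite window and runs power iteration; the truncation size and the choice of the double-geometric \(\pi (\eta )\propto e^{-|\eta |}\) used below are our own.

```python
import numpy as np

def transfer_matrix(N, M):
    """Truncation of the kernel T_N of (2.2) to s in [-M, M],
    with the double-geometric pi(eta) = exp(-|eta|)/V."""
    s = np.arange(-M, M + 1)
    V = 1.0 + 2.0 / (np.e - 1.0)                 # V = sum_eta exp(-|eta|)
    pi = np.exp(-np.abs(s[:, None] - s[None, :])) / V
    conf = np.exp(-(s[:, None] ** 2 + s[None, :] ** 2) / (2.0 * N))
    return conf * pi

def leading_pair(T, iters=5000):
    """Power iteration: maximal eigenvalue with its positive eigenvector."""
    h = np.ones(T.shape[0]) / np.sqrt(T.shape[0])
    lam = 0.0
    for _ in range(iters):
        v = T @ h
        lam = np.linalg.norm(v)
        h = v / lam
    return lam, h

if __name__ == "__main__":
    for N in (100, 400, 1600, 6400):
        M = int(6 * N ** 0.25)                   # h_N decays on the scale N^{1/4}, cf. (3.10)
        lam, _ = leading_pair(transfer_matrix(N, M))
        print(N, round(lam, 6), round(lam ** np.sqrt(N), 4))
```

In such experiments \(\lambda _N < 1\) and \(\lambda _N \rightarrow 1\) while \(\lambda _N^{\sqrt{N}}\) stabilizes, in agreement with Theorem 3.1.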

We then observe that the Gibbs distribution \(\nu _N\) with the Hamiltonian given in (1.1), and with the values at the boundaries weighted by the measure \(h_N(s)e^{\frac{s^2}{2N}}\), can be expressed in terms of the kernel \(T_N\) when \(\pi \) is the double-geometric distribution

$$\begin{aligned} \displaystyle {\pi (\eta ) = \frac{1}{V} e^{-|\eta |}} \quad V=\sum _\eta e^{-|\eta |} \end{aligned}$$

In fact

$$\begin{aligned} \nu _N({\mathbf {s}})= & {} \frac{1}{Z_N} h_N(s_0)e^{\frac{s_0^2}{2N}} e^{- \frac{1}{N}\sum _{x=0}^N s_x^2}\prod _{x=1}^N \frac{e^{-|s_x-s_{x-1}|}}{V}\, h_N(s_N)e^{\frac{s_N^2}{2N}}\nonumber \\= & {} \frac{1}{Z_N} h_N(s_0)\,e^{-\frac{1}{2N}\sum _{x=1}^N (s_x^2+s_{x-1}^2)} \prod _{x=1}^N \pi (\eta _x)\,h_N(s_N)\nonumber \\= & {} \frac{1}{Z_N} h_N(s_0)\prod _{x=1}^N T_N(s_{x-1}, s_x)\,h_N(s_N) \end{aligned}$$
(2.4)

with \(Z_N\) the partition function.

Call

$$\begin{aligned} p_N(s,s'):=\frac{h_N(s')}{\lambda _N h_N(s)}T_N(s,s') \end{aligned}$$
(2.5)

\(p_N\) defines an irreducible, positive-recurrent Markov chain on \({\mathbb {Z}}\) which is reversible with respect to the measure \(h_N^2(s)\). We call \( P_N\) the law of the Markov chain starting from the invariant measure \(h_N^2(s)\).
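Indeed, stochasticity follows from (2.3) and reversibility from the symmetry of the kernel:

$$\begin{aligned} \sum _{s'}p_N(s,s')=\frac{1}{\lambda _N h_N(s)}\sum _{s'}T_N(s,s')h_N(s')=1,\qquad h_N^2(s)\,p_N(s,s')=\frac{h_N(s)\,T_N(s,s')\,h_N(s')}{\lambda _N}=h_N^2(s')\,p_N(s',s) \end{aligned}$$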

Observe that \(\nu _N({\mathbf {s}})\) in (2.4) is the \( P_N\)-probability of the trajectory \({\mathbf {s}}\); indeed from (2.5) we get

$$\begin{aligned} \nu _N({\mathbf {s}}) =\frac{1}{Z_N} h_N(s_0)\prod _{x=1}^N T_N(s_{x-1}, s_x)h_N(s_N) =\frac{\lambda _N^N}{Z_N} h^2_N(s_0)\prod _{x=1}^N p_N(s_{x-1},s_x) \end{aligned}$$
(2.6)

which, summing over \({\mathbf {s}}\) and using that \(h_N^2\) is the invariant measure of \(p_N\), proves that \(Z_N=\lambda ^N_N\) and that \( \nu _N({\mathbf {s}}) = P_N({\mathbf {s}})\).

We define the rescaled variables

$$\begin{aligned} {\widetilde{S}}^N(t) = \frac{s_{[tN^{1/2}]}}{N^{1/4}},\quad t=0,\,N^{-1/2},\,2N^{-1/2},\ldots ,1,\quad [] = \text { integer part} \end{aligned}$$

then \({\widetilde{S}}^N(t) \) is extended to all \(t\in [0,1]\) by linear interpolation; in this way we can consider the induced distribution \({\mathcal {P}}_N\) on the space of continuous functions C([0, 1]).
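To visualise the process one can also sample the chain directly. Here is a minimal numerical sketch (again not used in the proofs) which builds the truncated kernel, extracts the Krein–Rutman pair by a dense eigendecomposition, and samples one trajectory of the chain (2.5) started from \(h_N^2\); the truncation window and all parameters are our own choices.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 400
M = int(6 * N ** 0.25)                        # h_N lives on the scale N^{1/4}
s = np.arange(-M, M + 1)
V = 1.0 + 2.0 / (np.e - 1.0)
T = (np.exp(-(s[:, None] ** 2 + s[None, :] ** 2) / (2.0 * N))
     * np.exp(-np.abs(s[:, None] - s[None, :])) / V)

w, U = np.linalg.eigh(T)                      # T is symmetric: full spectrum
lam, h = w[-1], np.abs(U[:, -1])              # maximal eigenvalue, positive eigenvector
P = T * h[None, :] / (lam * h[:, None])       # transition probabilities (2.5)
P /= P.sum(axis=1, keepdims=True)             # remove the small truncation error

k = rng.choice(len(s), p=h ** 2 / np.sum(h ** 2))   # stationary start from h_N^2
path = [s[k]]
for _ in range(int(np.sqrt(N))):              # sqrt(N) steps <-> t in [0, 1]
    k = rng.choice(len(s), p=P[k])
    path.append(s[k])
print(np.array(path) / N ** 0.25)             # one sample of the rescaled trajectory
```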

We denote by \({\mathcal {E}}_N\) the expectation with respect to \({\mathcal {P}}_N\).

Our main result is the following Theorem.

Theorem 2.1

The process \(\{{\widetilde{S}}^N(t), t\in [0,1]\}\) converges in law to the stationary Ornstein–Uhlenbeck process with variance \(\sigma /2\). Moreover \(\displaystyle {\lim _{N\rightarrow \infty } \lambda _N^{\sqrt{N}}= e^{-\sigma /2}}\).

The paper is organized as follows: in Sect. 3 we give a priori estimates on the eigenfunctions \(h_N\) and on the eigenvalues \(\lambda _N\); in Sect. 4 we prove convergence of the eigenfunctions \(h_N\) and identify the limit; in Sect. 5 we prove Theorem 2.1.

3 Estimates on the Eigenfunctions and the Eigenvalues

Theorem 3.1

The operator \(T_N\) defined in (2.2) has a maximal positive eigenvalue \(\lambda _N\) and a positive normalized eigenvector \(h_N(s)\in \ell ^{2}({\mathbb {Z}})\) as in (2.3) with the following properties:

  1. (i)

    \(h_N\) is a symmetric function.

  2. (ii)

    \(\Vert h_N\Vert _\infty \leqslant 1\) for all N.

  3. (iii)

    There exists c so that \(\displaystyle {1- \frac{c}{\sqrt{N}} \leqslant \lambda _N < 1}\).

Proof

That \(h_N(s)\) is positive follows from the Krein–Rutman theorem [11]; moreover \(\lambda _N\) is non-degenerate, i.e. its eigenspace is one-dimensional. The symmetry follows from the symmetry of \(T_N\), since \(h_N(-s)\) is also an eigenfunction for \(\lambda _N\).

The \(\ell _\infty \) bound follows from

$$\begin{aligned} \Vert h_N\Vert ^2_\infty = \sup _{s}\; h_N(s)^2 \leqslant \sum _{s}\; h_N(s)^2 = 1. \end{aligned}$$
(3.1)

The upper bound in (iii) follows easily, using that \(T_N(s,{\overline{s}})\leqslant \pi (s-{\overline{s}})\), from

$$\begin{aligned}&\lambda _N \leqslant \sum _{s,{\overline{s}}} \pi (s- {\overline{s}}) h_N(s) h_N({\overline{s}}) \leqslant \frac{1}{2} \sum _{s,{\overline{s}}} \pi (s- {\overline{s}}) \left( h_N(s)^2 + h_N({\overline{s}})^2\right) \leqslant 1 \end{aligned}$$

having used that \(\sum _{s} h_N(s)^2 = 1\).

To prove the lower bound in (iii) we use the variational formula

$$\begin{aligned} {\begin{matrix} \lambda _N = \sup _{h} \frac{\sum _{s,s'} T_N(s, s') h(s) h(s')}{ \sum _{s} h(s)^2} \end{matrix}} \end{aligned}$$
(3.2)

By choosing h with \(\sum _{s} h(s)^2 = 1\), and using the inequality \(e^{-x} \geqslant 1 - x\), we have a lower bound

$$\begin{aligned} {\begin{matrix} \lambda _N \geqslant \sum _{s,{\overline{s}}} \pi (s- {\overline{s}}) h(s) h({\overline{s}}) - \frac{1}{N} \sum _{s,{\overline{s}}} s^2\pi (s- {\overline{s}}) h(s) h({\overline{s}}) \end{matrix}} \end{aligned}$$
(3.3)

Observe that, since \(\sum _{s} h(s)^2 = 1\),

$$\begin{aligned} \frac{1}{N} \sum _{s,{\overline{s}}} s^2 \pi (s- {\overline{s}}) h(s) h({\overline{s}})\leqslant & {} \frac{1}{2N} \sum _{s,{\overline{s}}} s^2 \pi (s- {\overline{s}}) \left( h(s)^2 + h({\overline{s}})^2\right) \\\leqslant & {} \frac{1}{2N} \sum _{s} s^2 h(s)^2 + \frac{1}{2N} \sum _{\eta ,{\overline{s}}} ({\overline{s}}+\eta )^2 \pi (\eta ) h({\overline{s}})^2\\= & {} \frac{1}{N}\sum _{s} s^2 h(s)^2 +\frac{\sigma ^2}{2N} \end{aligned}$$

Thus

$$\begin{aligned} \lambda _N \geqslant \sum _{s,{\overline{s}}} \pi (s-{\overline{s}}) h(s) h({\overline{s}}) - \frac{1}{N} \sum _{s} s^2 h(s)^2 - \frac{\sigma ^2}{2N} \end{aligned}$$
(3.4)

For \(\alpha >0\), we choose \(h(s) = h_\alpha (s) := C_{\alpha } e^{- \alpha s^2/4}\), with \(C_\alpha = \left( \sum _s e^{-\alpha s^2/2}\right) ^{-1/2} \). Observe that for \(\alpha \rightarrow 0\)

$$\begin{aligned} \Big |\sqrt{\alpha }\sum _s e^{-\alpha s^2/2}-\int e^{-r^2/2}\,dr\Big |\leqslant C\alpha ,\qquad \Big |\sqrt{\alpha }\sum _s \alpha s^2\, e^{-\alpha s^2/2}-\int r^2e^{-r^2/2}\,dr\Big |\leqslant C\alpha \end{aligned}$$

Thus

$$\begin{aligned} \sum _{s} s^2 h_\alpha (s)^2 = \alpha ^{-1} + O(\alpha )\quad \text {as}\quad \alpha \rightarrow 0. \end{aligned}$$
(3.5)

We next prove that

$$\begin{aligned} \sum _{s,s'} \pi (s- s') h_\alpha (s) h_\alpha (s')\ \geqslant 1-\frac{\alpha \sigma ^2}{4} \end{aligned}$$
(3.6)

To prove (3.6) observe that \(h_\alpha (s) h_\alpha (s+\tau ) = h_\alpha (s)^2 e^{-\alpha \tau ^2/4 - \alpha s \tau /2}\), then

$$\begin{aligned} \sum _{s,s'} \pi (s- s') h_\alpha (s) h_\alpha (s')&= \sum _s h_\alpha (s) \sum _\tau \pi (\tau ) h_\alpha (s+ \tau ) \\&= \sum _s h_\alpha (s)^2 \sum _\tau \pi (\tau ) e^{-\alpha \tau ^2/4} e^{-\alpha s\tau /2} \end{aligned}$$

Using again that \(e^{-z} \geqslant 1 - z\) and the parity of \(h_\alpha \) and of \(\pi \) we get

$$\begin{aligned}&\sum _s h_\alpha (s)^2 \sum _\tau \pi (\tau ) e^{-\alpha \tau ^2/4} e^{-\alpha s\tau /2}\\&\quad \geqslant \sum _s h_\alpha (s)^2 \sum _\tau \pi (\tau ) \left( 1 - \frac{\alpha }{4} \tau ^2\right) \left( 1 -\frac{\alpha s\tau }{2}\right) = 1-\frac{\alpha \sigma ^2}{4} \end{aligned}$$

which proves (3.6).

We choose \(\alpha = N^{-1/2}\), which balances the two error terms \(\alpha \sigma ^2/4\) and \((\alpha N)^{-1}\); from (3.4), (3.5) and (3.6) we then get

$$\begin{aligned} \lambda _N \geqslant 1-\frac{1}{\sqrt{N}} \left( \frac{\sigma ^2}{4} + 1\right) - \frac{\sigma ^2}{2N} + O(N^{-3/2}), \end{aligned}$$
(3.7)

which gives the lower bound. \(\square \)

Given \(s\), let \(s_x\) be the position at time \(x\) of the random walk starting at \(s\), namely \(\displaystyle {s_x=s+\sum _{k=1}^x \eta _k}\), where \(\{\eta _k\}_k\) are i.i.d. random variables with distribution \(\pi \). By an abuse of notation we denote by \(\pi \) also the probability distribution of the trajectories of the corresponding random walk, and by \({\mathbb {E}}_{s}\) the expectation with respect to the law of the random walk which starts from \(s\).

We will use the local central limit theorem as stated in Theorem 2.1.1 of [12] (see in particular formula (2.5) there): there exists a constant \(c\) not depending on \(n\) such that for any \(s\)

$$\begin{aligned} |\pi \left( \sum _{k=1}^n \eta _k=s\right) - \overline{p}\left( \sum _{k=1}^n \eta _k=s\right) |\leqslant \frac{c}{n^{3/2}} \end{aligned}$$
(3.8)

where

$$\begin{aligned} {\overline{p}} \left( \sum _{k=1}^n \eta _k=s\right) = \frac{1}{\sqrt{2\pi \sigma ^2n}}\,e^{-\frac{s^2}{2\sigma ^2n}} \end{aligned}$$

By iterating (2.3) n times we get

$$\begin{aligned} h_N(s)=\frac{1}{\lambda _N^n} {\mathbb {E}}_{s}\left( e^{-\frac{1}{2N}\sum _{x=0}^n s_x^2}\,\,h_N(s_n)\right) \end{aligned}$$
(3.9)
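Formula (3.9) is a discrete Feynman–Kac representation and can be explored by direct Monte Carlo. The sketch below (an illustration under our own choices: the double-geometric \(\pi \) truncated at \(|\eta |\leqslant 30\), \(n=\sqrt{N}\)) estimates the random-walk expectation in (3.9) without the \(h_N\) factor and exhibits its decay in \(s\) on the scale \(N^{1/4}\).

```python
import numpy as np

rng = np.random.default_rng(2)
N = 10_000
n = int(np.sqrt(N))                           # n = sqrt(N) steps, as in (3.13)
eta = np.arange(-30, 31)                      # truncated double-geometric increments
p = np.exp(-np.abs(eta)); p /= p.sum()

def fk_weight(s0, samples=20_000):
    """Monte Carlo estimate of E_s[ exp(-(1/2N) sum_{x=0}^n s_x^2) ]."""
    steps = rng.choice(eta, size=(samples, n), p=p)
    paths = s0 + np.cumsum(steps, axis=1)     # s_1, ..., s_n
    S = s0 ** 2 + np.sum(paths ** 2, axis=1)  # sum_{x=0}^n s_x^2
    return float(np.mean(np.exp(-S / (2.0 * N))))

for s0 in (0, 5, 10, 20, 40):                 # N^{1/4} = 10: decay on that scale
    print(s0, round(fk_weight(s0), 4))
```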

Theorem 3.2

There exist positive constants cC (independent of N) such that

$$\begin{aligned} h_N(s) \leqslant { \frac{C}{N^{1/8}}}\,\, \exp \Big \{-\displaystyle { \frac{c s^2}{N^{1/2}}}\Big \} \end{aligned}$$
(3.10)

Proof

Below we will write h(s) for the eigenfunction \(h_N(s)\), and \(\lambda \) for \(\lambda _N\).

Because of the symmetry of h, it is enough to consider \(s>0\). From (3.9) and the Cauchy–Schwarz inequality we get

$$\begin{aligned} h( s)\leqslant \frac{1}{\lambda ^n} \Big [ {\mathbb {E}}_{s}\left( e^{-\frac{2}{2N}\sum _{x=0}^n s_x^2}\right) \Big ]^{1/2} \Big [{\mathbb {E}}_{s}( h^2(s_n))\Big ]^{1/2} \end{aligned}$$
(3.11)

To estimate \({\mathbb {E}}_{s}( h^2(s_n))\) we use (3.8),

$$\begin{aligned} {\mathbb {E}}_{s}( h^2(s_n))= & {} \sum _{s_n} \pi \left( \sum _{k=1}^{n}\eta _k=s_n-s\right) h^2(s_n) \nonumber \\\leqslant & {} \sum _{s_n} {\overline{p}}\left( \sum _{k=1}^{n}\eta _k=s_n-s\right) h^2(s_n) + \frac{c}{n^{3/2}} \sum _{s_n} h^2(s_n) \nonumber \\\leqslant & {} \left[ \frac{1}{\sqrt{2\pi n \sigma ^2}}+ \frac{c}{n^{3/2}}\right] \sum _{s_n} h^2(s_n) \nonumber \\\leqslant & {} \frac{K}{\sqrt{n}}\sum _{s'} h^2(s')= \frac{K}{\sqrt{n}} \end{aligned}$$
(3.12)

where K is a constant independent of N.

Thus for \(n=\sqrt{N}\) we get

$$\begin{aligned} h( s)\leqslant \frac{1}{\lambda ^{\sqrt{N}}} \frac{\sqrt{K}}{N^{1/8}} \Big [ {\mathbb {E}}_{s}\Big ( e^{-\frac{1}{N}\sum _{x=0}^n s_x^2}\Big )\Big ]^{1/2} \end{aligned}$$
(3.13)

For \(\alpha \in (0,1)\) we consider

$$\begin{aligned} z=\inf \{x: s_x \leqslant s(1-\alpha ) \} \end{aligned}$$
(3.14)

and we split the expectation on the right hand side of (3.13)

$$\begin{aligned} {\begin{matrix} {\mathbb {E}}_{s}\left( e^{-\frac{1}{N}\sum _{x=0}^n s_x^2}\right) &{}\leqslant {\mathbb {E}}_{s}\left( e^{-\frac{1}{N}\sum _{x=0}^{z-1} s_x^2} 1_{[z\leqslant n]}\right) + {\mathbb {E}}_{s}\left( e^{-\frac{1}{N}\sum _{x=0}^{n} s_x^2} 1_{[z > n]}\right) \\ &{}\leqslant {\mathbb {E}}_{s}\left( e^{-\frac{s^2(1-\alpha )^2}{N}z} 1_{[z\leqslant n]}\right) + e^{-\frac{s^2(1-\alpha )^2(n+1)}{N}} \end{matrix}} \end{aligned}$$
(3.15)

Calling \(M_x:= s_x-s\), and \(\Lambda (a)=\log {\mathbb {E}}(e^{a\eta })\) for \(|a|\leqslant a_0\), see (2.1), we get that \( e^{aM_x-x\Lambda (a)}\) is a martingale, so that

$$\begin{aligned} 1 = {\mathbb {E}}_{s}\Big ( e^{aM_{z\wedge n}- z\wedge n \Lambda (a)}\Big ) \geqslant {\mathbb {E}}_{s}\Big ( e^{aM_{z}- z \Lambda (a)} 1_{[z\leqslant n]}\Big ) \end{aligned}$$
(3.16)
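The martingale property is immediate from the independence of the increments: for every integer \(x\),

$$\begin{aligned} {\mathbb {E}}\big ( e^{aM_{x+1}-(x+1)\Lambda (a)}\,\big |\,\eta _1,\ldots ,\eta _x\big )= e^{aM_x-x\Lambda (a)}\,e^{-\Lambda (a)}\,{\mathbb {E}}\big (e^{a\eta _{x+1}}\big )=e^{aM_x-x\Lambda (a)}, \end{aligned}$$

and (3.16) follows by optional stopping at the bounded stopping time \(z\wedge n\).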

Also \(M_{z} \leqslant -\alpha s\) and thus, choosing \(a<0\), we have \(aM_{z} \geqslant -a\alpha s\), so that:

$$\begin{aligned} {\mathbb {E}}\Big (e^{-z\Lambda (a)}1_{[z\leqslant n]}\Big ) \leqslant e^{a\alpha s}. \end{aligned}$$

Since \(\Lambda (a)= \frac{1}{2} \sigma ^2 a^2+O(a^4)\) choosing \(a=-\frac{\sqrt{2} (1-\alpha )s}{\sigma N^{1/2}}\) we get

$$\begin{aligned} {\mathbb {E}}_s\Big (e^{- \frac{(1-\alpha )^2s^2 }{N} z} 1_{[z\leqslant n]}\Big ) \leqslant e^{-\frac{\sqrt{2}\alpha (1-\alpha ) s^2}{2\sigma N^{1/2}}} \end{aligned}$$

Recalling (3.15), we have

$$\begin{aligned} {\mathbb {E}}_{s}\left( e^{-\frac{1}{N}\sum _{x=0}^n s_x^2}\right) \leqslant e^{-\frac{\sqrt{2}\alpha (1-\alpha ) s^2}{2\sigma N^{1/2}}} + e^{-\frac{s^2(1-\alpha )^2(n+1)}{N}} \end{aligned}$$

For \(n = \sqrt{N}\) we thus get that there is a constant b so that

$$\begin{aligned} \Big [ {\mathbb {E}}_{s}\Big ( e^{-\frac{1}{N}\sum _{x=0}^n s_x^2}\Big )\Big ]^{1/2}\leqslant e^{-\frac{b s^2}{N^{1/2}}} \end{aligned}$$
(3.17)

From (iii) of Theorem 3.1 there is \(B>0\) so that \(\lambda ^{\sqrt{N}} \geqslant B\), thus from (3.13) and (3.17) we get (3.10). \(\square \)

4 Convergence and Identification of the Limit

We start the section with a preliminary lemma.

Lemma 4.1

There is \(b>0\) so that

$$\begin{aligned} \sum _{s, {\overline{s}}} \pi (s- {\overline{s}}) \left( h_N(s) - h_N({\overline{s}})\right) ^2 \leqslant \frac{b}{N^{1/2}}. \end{aligned}$$
(4.1)

Proof

Using that \(\sum _sh_N(s)^2=1\) we have

$$\begin{aligned} {\begin{matrix} \sum _{s, {\overline{s}}} \pi (s- {\overline{s}}) \left( h_N(s) - h_N({\overline{s}})\right) ^2 &{}= 2 \sum _{s, {\overline{s}}} \pi (s- {\overline{s}})h^2_N(s) -2 \sum _{s, {\overline{s}}} \pi (s- {\overline{s}})h_N(s)h_N({\overline{s}})\\ &{}=2\!-\!2\lambda _N \!-\!2 \sum _{s, {\overline{s}}} \big (1\!-\!e^{-(s^2 \!+\! {\overline{s}}^2)/2N} \big )\pi (s- {\overline{s}})h_N(s)h_N({\overline{s}}) \end{matrix}} \end{aligned}$$
(4.2)

By (iii) of Theorem 3.1, \(2(1-\lambda _N)\leqslant \frac{2c}{\sqrt{N}} \). Using that \(1-e^{-x}\leqslant x\) for \(x\geqslant 0\) and that, by Theorem 3.2, \(\sum _s s^2 h_N^2(s)\leqslant c'N^{1/2}\), we have

$$\begin{aligned} 2 \sum _{s, {\overline{s}}} (1-e^{-(s^2 + {\overline{s}}^2)/2N} )\pi (s- {\overline{s}})h_N(s)h_N({\overline{s}})\leqslant & {} \frac{1}{2N} \sum _{s, {\overline{s}}} (s^2 + {\overline{s}}^2) \pi (s- {\overline{s}})\big [h^2_N(s)+h^2_N({\overline{s}})\big ]\\\leqslant & {} \frac{2}{N}\sum _{s} s^2 h_N^2(s) + \frac{\sigma ^2}{N}\leqslant \frac{2c'}{N^{1/2}} + \frac{\sigma ^2}{N} \end{aligned}$$

From this (4.1) follows. \(\square \)

Define for \(r\in {\mathbb {R}}\)

$$\begin{aligned} {\widetilde{h}}_N^2(r)=N^{1/4} h_N^2\big (\big [r N^{1/4}\big ]\big ),\quad [] = \text { integer part} \end{aligned}$$
(4.3)
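Note that with this scaling \({\widetilde{h}}_N^2\) is automatically a probability density on \({\mathbb {R}}\): since \([rN^{1/4}]\) is constant on intervals of length \(N^{-1/4}\),

$$\begin{aligned} \int _{{\mathbb {R}}} {\widetilde{h}}_N^2(r)\,dr=\sum _{s\in {\mathbb {Z}}} N^{1/4}\,h_N^2(s)\,N^{-1/4}=\sum _{s\in {\mathbb {Z}}} h_N^2(s)=1 \end{aligned}$$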

Proposition 4.2

The following holds.

  1. (1)

    The sequence of measures \( {\widetilde{h}}_N^2(r)dr\) in \({\mathbb {R}}\) is tight and any limit measure is absolutely continuous with respect to the Lebesgue measure.

  2. (2)

    The sequence of functions \({\widetilde{h}}_N(r):=N^{1/8} h_N([r N^{1/4}])\) is sequentially compact in \(L^2({\mathbb {R}})\).

Proof

As a straightforward consequence of Theorem 3.2, we have that

$$\begin{aligned} {\widetilde{h}}_N^2(r) \leqslant C e^{- c\,\, r^2} \end{aligned}$$
(4.4)

It follows that for any \(\epsilon \) there is k so that \(\displaystyle {\int _{|r|\leqslant k} {\widetilde{h}}_N^2(r)dr\geqslant 1-\epsilon }\), which proves tightness of the sequence of probability measures \( {\widetilde{h}}_N^2(r) dr\) on \({\mathbb {R}}\). From (4.4) we also get that any limit measure must be absolutely continuous.

To prove that the sequence \(( {\widetilde{h}}_N(r))_{N\geqslant 1}\) is sequentially compact in \(L^2({\mathbb {R}})\) we prove below that there exists a constant C such that for any N and any \(\delta >0\):

$$\begin{aligned} \int \left( {\widetilde{h}}_N(r + \delta ) - {\widetilde{h}}_N(r) \right) ^2 dr \leqslant C\delta ^2 \end{aligned}$$
(4.5)

Assume that \(\pi (1) >0\), then

$$\begin{aligned} {\begin{matrix} \int \left( {\widetilde{h}}_N(r + \delta ) - {\widetilde{h}}_N(r) \right) ^2 dr &{}= \sum _s \left( h_N(s + [\delta N^{1/4}]) - h_N(s) \right) ^2\\ &{}= \sum _s \left( \sum _{i=1}^{[\delta N^{1/4}] } \left( h_N(s + i) - h_N(s + i -1)\right) \right) ^2\\ &{}\leqslant \frac{[\delta N^{1/4}]}{\pi (1)} \sum _s \sum _{i=1}^{[\delta N^{1/4}] } \pi (1) \left( h_N(s + i) - h_N(s + i -1) \right) ^2\\ &{}\leqslant \frac{[\delta N^{1/4}]^2}{\pi (1)} \sum _{s,{\overline{s}}} \pi (s-{\overline{s}}) \left( h_N(s) - h_N({\overline{s}}) \right) ^2 \leqslant \frac{c\delta ^2}{\pi (1)} \end{matrix}} \end{aligned}$$

The condition \(\pi (1) >0\) can be relaxed easily by a slight modification of the above argument.

From (4.4) and (4.5), applying the Kolmogorov–Riesz compactness theorem (see e.g. [10]), we get that \({\widetilde{h}}_N\) is sequentially compact in \(L^2({\mathbb {R}})\). \(\square \)

We next identify the limit.

Proposition 4.3

Any limit point u(r) of \({\widetilde{h}}_N(r)\) in \(L^2\) satisfies in weak form

$$\begin{aligned} u(r)= \frac{1}{\lambda }{\mathbb {E}}_r\Big (e^{-\frac{1}{2}\int _0^1 B^2_sds}u(B_1)\Big ) \end{aligned}$$
(4.6)

where \(B_s\) is a Brownian motion with variance \(\sigma ^2\) and with \(B_0=r\); furthermore \(\displaystyle {\lambda =\lim _{N\rightarrow \infty } \lambda _N^{\sqrt{N}}}\), and this limit exists.

The unique solution of (4.6) (up to a multiplicative constant) is \(u(r)=\exp \{- r^2/2\sigma \}\) and \(\lambda = e^{-\sigma /2}\).

Proof

Given \(r\), call \(r_N =[r N^{1/4}]\). Iterating (2.3) \(\sqrt{N}\) times (assuming that \(\sqrt{N}\) is an integer) we get

$$\begin{aligned} {\widetilde{h}}_N(r) = \frac{1}{\lambda _N^{\sqrt{N}}}\,\, {\mathbb {E}}^N_{r_N}\Bigg (\exp \Bigg \{-\frac{1}{2\sqrt{N}} \sum _{x=0}^{\sqrt{N}} \frac{s_x^2}{N^{1/2}}\Bigg \}\,\, {\widetilde{h}}_N\big (N^{-1/4} s_{\sqrt{N}}\big )\Bigg ) \end{aligned}$$
(4.7)

where \({\mathbb {E}}^N_{r_N}\) is the expectation w.r.t. the random walk which starts from \(r_N\):

$$\begin{aligned} s_x=r_N+\sum _{k=1}^x \eta _k,\quad x=1,\ldots ,\sqrt{N} \end{aligned}$$
(4.8)

By the invariance principle,

$$\begin{aligned} \frac{ s_{t\sqrt{N}} - r_N}{N^{1/4}} \ \longrightarrow \ \sigma B_t \quad t\in [0,1] \end{aligned}$$
(4.9)

in law, where \(B_t\) is a standard Brownian motion which starts from 0.

Take a subsequence along which \({\widetilde{h}}_N\) converges strongly in \(L^2({\mathbb {R}})\) and call u(r) the limit point. Choosing a test function \(\varphi \in L^2({\mathbb {R}})\), and denoting \(\pi _n(s) = \pi \left( \sum _{k=1}^{n} \eta _k = s\right) \), we get along that sequence

$$\begin{aligned} \begin{aligned}&N^{-1/4} \sum _{s'}\big |\varphi (N^{-1/4} s')\big |\, {\mathbb {E}}^N_{s'}\Big (\exp \Big \{-\frac{1}{2\sqrt{N}} \sum _{x=0}^{\sqrt{N}} \Big (\frac{s_x}{N^{1/4}}\Big )^2\Big \}\, \Big |{\widetilde{h}}_N\big (N^{-1/4} s_{\sqrt{N}}\big ) -u\big (N^{-1/4} s_{\sqrt{N}}\big )\Big |\Big ) \\&\quad \leqslant N^{-1/4} \sum _{s'}\big |\varphi (N^{-1/4} s')\big |\, {\mathbb {E}}^N_{s'} \Big ( \Big |{\widetilde{h}}_N\big (N^{-1/4} s_{\sqrt{N}}\big ) -u\big (N^{-1/4} s_{\sqrt{N}}\big )\Big |\Big )\\&\quad = N^{-1/4} \sum _{s,s'}\big |\varphi (N^{-1/4} s')\big |\, \pi _{[\sqrt{N}]}\left( s-s'\right) \Big |{\widetilde{h}}_N(N^{-1/4} s) -u(N^{-1/4} s)\Big |\\&\quad \leqslant N^{-1/4} \sum _{s'} \big |\varphi (N^{-1/4} s')\big | \Big ( \sum _s \pi _{[\sqrt{N}]}\left( s-s'\right) \big |{\widetilde{h}}_N(N^{-1/4} s) -u(N^{-1/4} s)\big |^2\Big )^{1/2}\\&\quad \leqslant \Big ( N^{-1/4} \sum _{s'} \big |\varphi (N^{-1/4} s')\big |^2\Big )^{1/2} \Big ( N^{-1/4} \sum _{s'} \sum _s \pi _{[\sqrt{N}]}\left( s-s'\right) \big |{\widetilde{h}}_N(N^{-1/4} s) -u(N^{-1/4} s)\big |^2\Big )^{1/2}\\&\quad = \Big ( N^{-1/4} \sum _{s'}\big |\varphi (N^{-1/4} s')\big |^2\Big )^{1/2} \Big ( N^{-1/4} \sum _s \big |{\widetilde{h}}_N(N^{-1/4} s) -u(N^{-1/4} s)\big |^2\Big )^{1/2}\\&\quad \leqslant C\Vert \varphi \Vert _{L^2} \Vert {\widetilde{h}}_N - u\Vert _{L^2} \ \mathop {\longrightarrow }_{N\rightarrow \infty } \ 0. \end{aligned} \end{aligned}$$
(4.10)

Since the exponential on the right hand side of (4.7) is a bounded functional of the random walk, from (4.9) we get (along the chosen sequence),

$$\begin{aligned}&\lim _{N\rightarrow \infty }{\mathbb {E}}^N_{r_N}\Big (\exp \Big \{-\frac{1}{2\sqrt{N}} \sum _{x=0}^{\sqrt{N}} \Big (\frac{s_x}{N^{1/4}}\Big )^2\Big \}\, u\big (N^{-1/4} s_{\sqrt{N}}\big ) \Big ) \nonumber \\&\quad =\lim _{N\rightarrow \infty }{\mathbb {E}}^N_{0}\Big (\exp \Big \{-\frac{1}{2\sqrt{N}} \sum _{x=0}^{\sqrt{N}} \Big (\frac{s_x+r_N}{N^{1/4}}\Big )^2\Big \}\, u\big (N^{-1/4} s_{\sqrt{N}}+r\big ) \Big ) \nonumber \\&\quad ={\mathbb {E}}_0 \Big (e^{-\frac{1}{2} \int _0^1 (\sigma B_s + r)^2 ds}\,u(\sigma B_1+ r)\Big ) \end{aligned}$$
(4.11)

where \({\mathbb {E}}_0 \) is the expectation w.r.t. the law of a standard Brownian motion starting at 0 and the limits are intended in the weak \(L^2\) sense.

Since \({\widetilde{h}}_N\) is converging strongly in \(L^2\) (along the subsequence we have chosen) and the expectation on the right hand side of (4.7) has a finite limit, we get that the limit of \(\lambda _N^{\sqrt{N}}\) must exist.

Observe that for a standard Brownian motion \(\{B_s\}_{s\in [0,1]}\) we have that

$$\begin{aligned} \exp \Big \{ -\frac{1}{2} \int _0^t (\sigma B_s + r)^2 ds - \int _0^t (\sigma B_s+ r) dB_s\Big \},\quad \text { is a martingale.} \end{aligned}$$

Furthermore, by Itô's formula,

$$\begin{aligned} - \sigma \int _0^1 (\sigma B_s + r) dB_s= -\frac{1}{2} (\sigma B_1+r)^2 + \frac{r^2}{2} + \frac{\sigma ^2}{2} \end{aligned}$$
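This follows by integrating the identity

$$\begin{aligned} d(\sigma B_s+r)^2= 2\sigma (\sigma B_s+r)\,dB_s+\sigma ^2\,ds \end{aligned}$$

over \(s\in [0,1]\) and rearranging.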

Thus

$$\begin{aligned} {\begin{matrix} 1 &{}= {\mathbb {E}}\left( \exp \Big \{ -\frac{1}{2} \int _0^1 (\sigma B_s + r)^2 ds - \int _0^1 (\sigma B_s+ r) dB_s\Big \}\right) \\ &{}= {\mathbb {E}}\left( \exp \Big \{ -\frac{1}{2} \int _0^1 (\sigma B_s + r)^2 ds -\frac{1}{2\sigma } (\sigma B_1+r)^2 + \frac{r^2}{2\sigma } + \frac{\sigma }{2}\Big \}\right) \end{matrix}} \end{aligned}$$

that implies

$$\begin{aligned} e^{-\frac{r^2}{2\sigma }} =e^{\sigma /2}\, {\mathbb {E}}\Big (e^{-\frac{1}{2}\int _0^1 (\sigma B_s + r)^2 ds}\,e^{-\frac{1}{2\sigma } (\sigma B_1 +r)^2}\Big ) \end{aligned}$$

Comparing with (4.6) we identify u(r) and \(\lambda \). \(\square \)

We thus have the following corollary of Proposition 4.2 and Proposition 4.3.

Corollary 4.4

The sequence of measures \( {\widetilde{h}}_N^2(r)dr\) in \({\mathbb {R}}\) converges weakly to the Gaussian measure \(g^2(r)dr\) where \(g(r)= (\pi \sigma )^{-1/4} e^{-r^2/2\sigma }\).

Moreover for any \(\psi , \varphi \in C_b({\mathbb {R}})\) and any \(t\in [0,1]\)

$$\begin{aligned}&\lim _{N\rightarrow \infty } \frac{1}{\lambda _N^{\sqrt{N}}}\,\,\frac{1}{N^{1/4}}\sum _{s} {\widetilde{h}}_N(N^{-1/4}s)\psi (N^{-1/4}s) \nonumber \\&\quad {\mathbb {E}}^N_{s}\Big (\exp \Big \{-\frac{1}{2\sqrt{N}} \sum _{x=0}^{[t\sqrt{N}]} \frac{s_x^2}{N^{1/2}}\Big \}\,\, {\widetilde{h}}_N(N^{-1/4} s_{[t\sqrt{N}]})\varphi (N^{-1/4} s_{[t\sqrt{N}]})\Big ) \nonumber \\&\quad = e^{\sigma /2}\int \psi (r)g(r) {\mathbb {E}}_r \Big (e^{-\frac{1}{2} \int _0^t (\sigma B_s)^2 ds}\varphi (\sigma B_t)g(\sigma B_t)\Big )dr \end{aligned}$$
(4.12)

where \({\mathbb {E}}_r \) is the expectation w.r.t. the law of the Brownian motion starting at r.

Proof

From Proposition 4.3 we have that any convergent subsequence of \( {\widetilde{h}}_N(r)\) converges in \(L^2({\mathbb {R}})\) to \(c\, e^{-r^2/2\sigma }\); since \(\int {\widetilde{h}}^2_N(r)\,dr=1\), the constant c must be equal to \( (\pi \sigma )^{-1/4} \). This together with (1) of Proposition 4.2 concludes the proof.

The proof of (4.12) is an adaptation of (4.10) and (4.11). \(\square \)

5 Proof of Theorem 2.1

Recall that \({\mathcal {P}}_N\) and \({\mathcal {E}}_N\) denote respectively the law and the expectation in C([0, 1]) of the process \( {\widetilde{S}}_N(t) =N^{-1/4} s_{[tN^{1/2}]} \) induced by the law of the Markov chain with transition probabilities given in (2.5) and initial distribution the invariant measure \(h_N^2\).

Proposition 5.1

The finite dimensional distributions of \({\widetilde{S}}_N(t)\), \(t\in [0,1]\), converge to those of the stationary Ornstein–Uhlenbeck process.

Proof

For any \(k\), any \(0\leqslant \tau _1<\cdots <\tau _k\leqslant 1\) and any collection of bounded continuous functions with compact support \(\varphi _0,\varphi _1, \ldots ,\varphi _k\), setting \(t_i=\tau _i-\tau _{i-1}\), \(i=1,\ldots ,k\), \(\tau _0=0\), we have

$$\begin{aligned}&{\mathcal {E}}_N\Big (\varphi _0({\widetilde{S}}_N(\tau _0))\varphi _1({\widetilde{S}}_N(\tau _1))\cdots \varphi _k({\widetilde{S}}_N(\tau _k))\Big )\\&\quad =N^{-1/4}\sum _{r_0\in N^{-{1/4}}{\mathbb {Z}}} {\widetilde{h}}_N(r_0)\varphi _0(r_0)\lambda _N^{-k\sqrt{N}}\\&\qquad {\mathbb {E}}_{r_0}^N\Bigg (e^{-\frac{1}{2\sqrt{N}} \sum _{x=0}^{[t_1\sqrt{N}]} \frac{s_x^2}{N^{1/2}}}\,\,\varphi _1(r_1){\mathbb {E}}_{r_1}^N\Big (e^{-\frac{1}{2\sqrt{N}} \sum _{x=0}^{[t_2\sqrt{N}]} \frac{s_x^2}{N^{1/2}}}\,\,\varphi _2(r_2) \\&\qquad \quad \ldots {\mathbb {E}}_{r_{k-1}}^N\Big (e^{-\frac{1}{2\sqrt{N}} \sum _{x=0}^{[t_k\sqrt{N}]} \frac{s_x^2}{N^{1/2}}}\,\,{\widetilde{h}}_N(r_k)\varphi _k(r_{k})\Big )\ldots \Big )\Bigg ) \end{aligned}$$

where \(r_i=N^{-1/4}\Big [N^{1/4}r_{i-1}+\sum _{x=1}^{[t_i\sqrt{N}]} \eta _x\Big ]\). Then from repeated use of (4.12) we get

$$\begin{aligned}&\lim _{N\rightarrow \infty } {\mathcal {E}}_N\Big (\varphi _0({\widetilde{S}}_N(\tau _0))\varphi _1({\widetilde{S}}_N(\tau _1))\cdots \varphi _k({\widetilde{S}}_N(\tau _k))\Big ) \\&\quad =e^{k\sigma /2}\int g(r_0)\varphi _0(r_0){\mathbb {E}}_{r_0}\Big (e^{-\frac{1}{2}\int _0^{t_1} (\sigma B_s)^2 ds}\varphi _1(\sigma B_{t_1}) \cdots e^{-\frac{1}{2}\int _0^{t_k} (\sigma B_s)^2 ds}\varphi _k(\sigma B_{t_k})g(\sigma B_{t_k})\Big )dr_0 \end{aligned}$$

and the right hand side coincides with the corresponding expectation for the stationary Ornstein–Uhlenbeck process, by the ground state representation of the Feynman–Kac semigroup of the harmonic oscillator.

\(\square \)

To conclude the proof of Theorem 2.1 we need to show tightness of \({\mathcal {P}}_N\) in C([0, 1]); this is a consequence of Proposition 5.2 below, see Theorem 12.3, Eq. (12.51) of [3].

Proposition 5.2

There is C so that for all N,

$$\begin{aligned} {\mathcal {E}}_N\left( \left( {\widetilde{S}}_N(t) - {\widetilde{S}}_N(0)\right) ^4\right) \leqslant Ct^{3/2}. \end{aligned}$$
(5.1)

Proof

Since \(\lambda _N<1\) and \([N^{1/2}t]\leqslant \sqrt{N}\),

$$\begin{aligned}&{\mathcal {E}}_N\left( \left( {\widetilde{S}}_N(t) - {\widetilde{S}}_N(0)\right) ^4\right) \nonumber \\&\quad \leqslant \lambda _N^{-\sqrt{N}} \sum _{s} h_N(s) {\mathbb {E}}^N_{s}\left( e^{-\frac{1}{2\sqrt{N}} \sum _{x=0}^{[N^{1/2}t]} \frac{s_x^2}{N^{1/2}}} \left( {\widetilde{S}}_N(t) - s N^{-1/4}\right) ^4 h_N(s_{[N^{1/2}t]})\right) \nonumber \\&\quad \leqslant \lambda _N^{-\sqrt{N}} \sum _{s} h_N(s) {\mathbb {E}}^N_{s}\left( \left( {\widetilde{S}}_N(t) - s N^{-1/4}\right) ^4 h_N(s_{[N^{1/2}t]})\right) \nonumber \\&\quad \leqslant C \lambda _N^{-\sqrt{N}} \sum _{s,s'} h_N(s) h_N(s') \pi _{[N^{1/2}t]} (s-s') \left| \frac{s-s'}{t^{1/2}N^{1/4}}\right| ^4 t^2 \end{aligned}$$
(5.2)

where \(\pi _n(s) = \pi \left( \sum _{k=1}^{n} \eta _k = s\right) \). By Proposition 2.4.6 in [12], if \(\pi \) is aperiodic with finite 4th moments, as in our case, we have the bound

$$\begin{aligned} \pi _n(s) \leqslant \frac{C}{n^{1/2}} \left( \frac{\sqrt{n}}{|s|}\right) ^4, \quad \forall s\in {\mathbb {Z}}. \end{aligned}$$
(5.3)
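As a sanity check of (5.3) (purely numerical, with the double-geometric \(\pi \) truncated at \(|\eta |\leqslant 200\); not a proof), one can iterate discrete convolutions and verify that \(\sup _{s\ne 0} \pi _n(s)\,s^4/n^{3/2}\) stays bounded in \(n\):

```python
import numpy as np

K = 200
eta = np.arange(-K, K + 1)
p = np.exp(-np.abs(eta)); p /= p.sum()        # truncated double-geometric pi

pn = p.copy()                                 # law of eta_1 (n = 1)
for n in range(2, 31):
    pn = np.convolve(pn, p)                   # law of eta_1 + ... + eta_n
    m = len(pn) // 2
    s = np.arange(-m, m + 1)
    mask = np.abs(s) >= 1
    ratio = pn[mask] * np.abs(s[mask]).astype(float) ** 4 / n ** 1.5
    if n % 10 == 0:
        print(n, float(ratio.max()))          # should stay bounded in n, cf. (5.3)
```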

From this estimate it follows that the right hand side of (5.2) is bounded by

$$\begin{aligned} \leqslant t^2 C' \lambda _N^{-\sqrt{N}} \sum _{s,s'} h_N(s) h_N(s') \frac{1}{\sqrt{ t N^{1/2}}} = C' t^{3/2} \lambda _N^{-\sqrt{N}} N^{-1/4} \left( \sum _{s} h_N(s)\right) ^2, \end{aligned}$$

By (3.10) we have that \(\sum _{s} h_N(s) \leqslant C\, N^{1/8}\); since \(\lambda _N^{-\sqrt{N}}\) is bounded by (iii) of Theorem 3.1, the bound follows. \(\square \)