1 Introduction

Describing the emergence of local equilibrium in systems forced out of equilibrium is one of the main challenges of nonequilibrium statistical mechanics. Most mathematical works concern stochastic interacting particle systems, as they are generally more tractable than deterministic ones. Despite the large amount of recent work in this field, little is known about systems with spatial inhomogeneity.

The present paper extends the classical duality result of Kipnis et al. [15] to inhomogeneous systems, where the inhomogeneity means that either (A) the components of the system have different numbers of degrees of freedom or (B) the interaction rate between the components depends on their location. The dual process is, roughly speaking, a collection of biased random walkers that interact with one another whenever their distance is at most 1 and are completely independent otherwise. We leverage the dual process to prove the existence of local equilibrium in systems forced out of equilibrium under various assumptions. Some of these results were announced in [17].

1.1 Informal Description

To fix notation, we denote by \(\Gamma (\alpha ,\, c)\) the probability distribution with density \(\frac{x^{\alpha -1} \exp (-x/c)}{c^{\alpha } \Gamma (\alpha )}\) for \(x >0,\) where \(\alpha >0\) is the shape parameter and \(c>0\) is the scale parameter (\(\Gamma \) is the usual Gamma function). The kth moment of the Gamma distribution is

$$\begin{aligned} \int _0^{\infty } \frac{x^{k+\alpha -1} \exp (-x/c)}{c^{\alpha } \Gamma (\alpha )} dx = \frac{c^k \Gamma (\alpha + k)}{\Gamma (\alpha )}. \end{aligned}$$
(1)
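As a quick sanity check (not used in the proofs), the identity (1) can be verified numerically; the following sketch assumes NumPy/SciPy and all names in it are ours.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def gamma_moment(k, alpha, c):
    """k-th moment of the Gamma(alpha, c) distribution by quadrature."""
    integrand = lambda x: x**(k + alpha - 1) * np.exp(-x / c) / (c**alpha * gamma(alpha))
    value, _ = quad(integrand, 0, np.inf)
    return value

alpha, c, k = 1.5, 2.0, 3
print(gamma_moment(k, alpha, c))               # numerical integral
print(c**k * gamma(alpha + k) / gamma(alpha))  # closed form (1)
```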

The \(Beta(\alpha ,\,\beta )\) distribution with parameters \(\alpha ,\,\beta >0\) has density \(\frac{\Gamma (\alpha + \beta )}{\Gamma (\alpha ) \Gamma (\beta )} x^{\alpha -1} (1-x)^{\beta -1}\) for \(0< x <1.\) For brevity, we will write \(B(\alpha ,\,\beta ) = \frac{\Gamma (\alpha ) \Gamma ( \beta )}{\Gamma (\alpha + \beta )}.\)

Our model can be informally described as follows. First consider two systems with \(\omega _i\) degrees of freedom and energy \(\xi _i\) for \(i=1,\,2.\) Now let the two systems exchange energies by a microcanonical procedure: redistribute the total energy according to the law of equipartition. Representing the energy of the jth degree of freedom by \(X_j^2\) for \(j=1,\ldots ,\omega _1 + \omega _2,\) the vector \(\vec {X}\) is thus uniformly distributed on the sphere \(\sqrt{\xi _1 + \xi _2}\, \mathcal S^{\omega _1 + \omega _2 -1}.\) Writing \(\vec {X} = \sqrt{\xi _1 + \xi _2}\, \vec {Y} / \Vert \vec {Y}\Vert ,\) where \(\vec {Y}\) has an \((\omega _1 + \omega _2)\)-dimensional standard normal distribution, the first system's energy is updated to

$$\begin{aligned} \xi _1^{\prime } = X_1^2 + \cdots +X_{\omega _1}^2 = \frac{(\xi _1 + \xi _2) (Y_1^2 + \cdots +Y_{\omega _1}^2)}{Y_1^2 +\cdots +Y_{\omega _1 + \omega _2}^2}, \end{aligned}$$

which is well known to have \((\xi _1 + \xi _2) Beta(\omega _1/2,\,\omega _2/2)\) distribution.
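This fact is easy to test by direct simulation; a minimal sketch (ours, assuming NumPy/SciPy) compares the empirical law of the normalized partial sum of squares with the corresponding Beta distribution.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
w1, w2, n = 3, 5, 100_000                # degrees of freedom and sample size

Y = rng.standard_normal((n, w1 + w2))
ratio = (Y[:, :w1]**2).sum(axis=1) / (Y**2).sum(axis=1)

# Kolmogorov-Smirnov comparison with Beta(w1/2, w2/2)
print(stats.kstest(ratio, stats.beta(w1 / 2, w2 / 2).cdf))
```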

In the simplest case, our model consists of a one dimensional chain of systems located at sites \(1,\,2,\ldots , L-1\) and possibly having different numbers of degrees of freedom. Then we choose pairs of nearest neighbors randomly and update their energies by the above rule. Furthermore, the systems at sites 1 and \(L-1\) are coupled to heat baths of different temperatures. Then we study the macroscopic energy propagation and the emergence of local equilibrium for large L. We also consider two further generalizations: (i) to higher dimensions and (ii) to inhomogeneity in the rate of interaction.

1.2 Related Works

Here, we briefly mention some previous related works, although we do not intend to provide a complete review. For a general introduction to interacting particle systems and their macroscopic scaling limits, see [14, 18, 24]. Two important classical references for dual Markov processes are [18, 23].

The present paper is an extension of [15]: our one dimensional case with constant 2 degrees of freedom and homogeneous rate of interaction is the KMP model. Some earlier extensions of the KMP model relevant to the present work are the following. The paper [21] provides a one dimensional model where a tracer is responsible for the transport of energy. The authors apply certain martingale techniques (suggested by S.R.S. Varadhan) to derive local equilibrium. We will use similar techniques in Sect. 7; note however that the adaptation to our inhomogeneous setup is far from obvious. The thermalized Brownian energy process, introduced in [6], corresponds to our model with constant (not necessarily 2) degrees of freedom, dimension 1, and homogeneous rate of interaction. Next, [7] defines an asymmetric version of the thermalized Brownian energy process (roughly speaking, the system on the left is preferred in the redistribution of energies). Finally, [17] considers a homogeneous model with tracers, but in arbitrary dimension. All the papers mentioned in this paragraph feature some dual process similar to ours.

The core of our proof is to show that in the dual process, the particles asymptotically behave as if they were independent. If this can be verified, then local equilibrium follows as an elementary consequence. An early application of this idea is in [12]; see also [10, 11, 19].

A chain of alternating billiard particles (2 degrees of freedom) and pistons (1 degree of freedom) has been proposed in [2]. It is expected that the mathematically rigorous study of the rare interaction limit (at least on a mesoscopic level) is easier here than that of the more realistic chain of interacting hard spheres (all having 2 degrees of freedom, [4, 13]). Note however that in deterministic systems the energies are updated by a much more complicated rule.

1.3 Random Walks and Electrical Networks

Here, we review the connection between one dimensional random walks and electrical networks as we will need it in the study of the dual process. The connection extends to much more general graphs than \(\mathbb Z^1,\) see e.g., ([16], Sect. 19). Assume that the weights (conductances) \(w_{i+1/2},\,i=A,\,A+1,\ldots ,B-1,\) are given and a random walker \(\mathcal S_k\) is defined on the set \(\{ A,\,A+1,\ldots ,B\}\) by \(\mathcal S_0 = I\) and

$$\begin{aligned} \mathbb P\left( \mathcal S_{k+1} = i+1 \,|\, \mathcal S_k = i\right) = \frac{w_{i+1/2}}{w_{i-1/2}+ w_{i+1/2}}, \quad \mathbb P\left( \mathcal S_{k+1} = i-1 \,|\, \mathcal S_k = i\right) = \frac{w_{i-1/2}}{w_{i-1/2}+ w_{i+1/2}}, \end{aligned}$$

for all \(i=A+1,\ldots , B-1\) (the states A and B are absorbing). Then the resistances \(R_{i+1/2} = 1/w_{i+1/2}\) and the hitting probability

$$\begin{aligned} p = \mathbb P \left( \min \left\{ k{\text {:}}\, \mathcal S_k = A\right\} < \min \left\{ k{\text {:}}\, \mathcal S_k = B\right\} \right) , \end{aligned}$$

are connected by the well known formula

$$\begin{aligned} p = \frac{\sum _{i=I}^{B-1} R_{i+1/2}}{\sum _{i=A}^{B-1} R_{i+1/2}}. \end{aligned}$$
(2)
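Formula (2) can also be checked by a small Monte Carlo experiment; the sketch below (ours, assuming NumPy) simulates the weighted walk and compares the empirical hitting probability with the resistance formula.

```python
import numpy as np

rng = np.random.default_rng(1)

def hit_A_before_B(w, A, B, I, n_runs=20_000):
    """Empirical probability that the walk started at I hits A before B;
    w[i] is the conductance of the edge (A + i, A + i + 1)."""
    hits = 0
    for _ in range(n_runs):
        s = I
        while A < s < B:
            p_right = w[s - A] / (w[s - A - 1] + w[s - A])
            s += 1 if rng.random() < p_right else -1
        hits += (s == A)
    return hits / n_runs

A, B, I = 0, 6, 2
w = rng.uniform(0.5, 2.0, size=B - A)   # conductances w_{i+1/2}, i = A..B-1
R = 1.0 / w                             # resistances R_{i+1/2}
print(hit_A_before_B(w, A, B, I))
print(R[I - A:].sum() / R.sum())        # formula (2)
```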

Finally, to fix terminology, we denote by SSRW the simple symmetric random walk, i.e., a random walk with independent steps, uniformly distributed on the neighbors of the origin in \(\mathbb Z^d.\)

1.4 Organization

The rest of the paper is organized as follows. We define our model precisely in Sect. 2. In Sect. 3, the dual process is defined and the duality relation is proved. In Sect. 4, the main results, namely local equilibrium in the hydrodynamic limit in dimension \({\ge } 2\) and in the nonequilibrium steady state in dimension 1, are formulated. Section 5 contains the proofs of the theorems concerning the hydrodynamic limit, namely Theorems 1 and 2, except for the proof of Proposition 6. Then Proposition 6 is proved in Sect. 6 (except for the proof of Lemma 6). Section 7 contains the proof of Theorem 3, which concerns the nonequilibrium steady state. Finally, the Appendix contains the proof of Lemma 6.

2 The Model

Let us fix a dimension \(d \ge 1,\) and \(\mathcal D \subset \mathbb R^d,\) a bounded, connected open set for which \(\partial \mathcal D\) is a piecewise \(\mathcal C^2\) submanifold with no cusps. We prescribe a continuous function \(T{\text {:}}\,\mathbb R^d \setminus \mathcal D \rightarrow \mathbb R_{+}\) to be thought of as temperature. For \(L \gg 1,\) the physical domain of our system is

$$\begin{aligned} \mathcal D_{L} = L \mathcal D \cap \mathbb Z^d. \end{aligned}$$

At each lattice point \(z \in \mathcal D_L\) (which will be called a site) there is a physical system of \(\omega _z\) degrees of freedom. The rate of interaction along the edge \(e = (u,\,v) \in \mathcal E (\mathcal D_L)\) is denoted by \(r_e = r_{(u,v)}\) (here, \(\mathcal E (\mathcal D_L)\) is the set of edges of the lattice \(\mathbb Z^d\) restricted to \(\mathcal D_L\)). Finally, we also fix rates of interaction between boundary sites and the heat bath: \(r_{(u,v)}\) for \(u \in \mathcal D_L,\,v \in \mathbb Z^d \setminus \mathcal D_L\) with \(\Vert u-v\Vert =1.\)

Throughout this paper, \(|\cdot |\) denotes the cardinality of a finite set, and for \(v \in \mathbb R^d,\,\langle v \rangle \) is its closest point in \(\mathbb Z^d.\)

The time evolution of the energies \(\varvec{X}(t) = \varvec{X}^{(L)}(t) = (\xi _{v}^{(L)}(t))_{v \in \mathcal D_{L}}\) is a Markov process with generator

$$\begin{aligned} (Gf)(\underline{\xi }) = \left( G_1f \right) (\underline{\xi }) + \left( G_2f\right) (\underline{\xi }), \end{aligned}$$

where \(G_1\) describes interactions within \(\mathcal D_L\) and \(G_2\) stands for the interaction with the bath. We define \(G_1\) as follows. There is an exponential clock at each edge \(e =(u,\,v) \in \mathcal E (\mathcal D_L)\) of rate \(r_{(u,v)}.\) When it rings, the energies of the two corresponding systems (\(\xi _u\) and \(\xi _v\)) are pooled together and redistributed according to a \(Beta(\omega _u/2,\, \omega _v/2)\) distribution. Thus

$$\begin{aligned} \left( G_1f\right) (\underline{\xi }) = \sum _{(u,\,v) \in \mathcal E (\mathcal D_L)} r_{(u,v)} \int _0^1 \frac{1}{B \left( \frac{\omega _u}{2},\,\frac{\omega _v}{2} \right) } p^{\frac{\omega _u}{2} -1} (1-p)^{\frac{\omega _v}{2} -1} [f(\underline{\xi }^{\prime }) -f( \underline{\xi })] dp, \end{aligned}$$

where

$$\begin{aligned} \xi ^{\prime }_w = \left\{ \begin{array}{ll} \xi _w &{}\mathrm{if}\,w \notin \{ u,\,v\}, \\ p(\xi _{u} + \xi _v) &{}\mathrm{if}\,w = u,\\ (1-p)(\xi _{u} + \xi _v) &{}\mathrm{if}\,w = v. \end{array} \right. \end{aligned}$$

Every edge \(e = (u,\,v)\) with \(u \in \mathcal D_L\) and \(v \in \mathbb Z^d \setminus \mathcal D_L\) provides a connection to the heat bath of temperature \(T(v /L){\text {:}}\) with rate \(r_{(u,v)},\) the energy \(\xi _u\) is replaced by a \(\Gamma \left( \frac{\omega _u}{2},\,T \left( \frac{v}{L} \right) \right) \)-distributed random variable. That is,

$$\begin{aligned} \left( G_2f\right) (\underline{\xi }) = \sum _{u \in \mathcal D_L,\,v \in \mathbb Z^d \setminus \mathcal D_L,\,\Vert u-v\Vert =1} r_{(u,v)} \int _0^{\infty } \frac{\eta ^{\frac{\omega _u}{2} -1} \exp \left[ -\frac{\eta }{T\left( \frac{v}{L} \right) } \right] }{ \left[ T\left( \frac{v}{L} \right) \right] ^{\frac{\omega _u}{2}} \Gamma \left( \frac{\omega _u}{2} \right) } [f(\underline{\xi }^{\prime \prime }) -f( \underline{\xi })] d \eta , \end{aligned}$$

where

$$\begin{aligned} \xi ^{\prime \prime }_w = \left\{ \begin{array}{ll} \xi _w &{}\mathrm{if}\,w \ne u, \\ \eta &{}\mathrm{if}\,w = u. \end{array} \right. \end{aligned}$$

This completes the definition of \(\varvec{X}^{(L)}(t).\)
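For concreteness, the following is a minimal Gillespie-type simulation sketch of the dynamics in the simplest one dimensional geometry (sites \(1,\ldots ,L-1,\) baths at 0 and L); the code and all names in it are ours, and for simplicity all rates are set to 1.

```python
import numpy as np

rng = np.random.default_rng(2)

L = 20
omega = rng.integers(1, 5, size=L + 1)   # omega[v] = degrees of freedom at site v (0 and L unused)
r = np.ones(L)                           # r[m] = rate of edge (m, m+1), m = 0..L-1
T0, T1 = 1.0, 2.0                        # temperatures of the left and right baths
xi = np.ones(L + 1)                      # xi[v] = energy at site v (entries 0 and L unused)

def step(xi, t):
    """Perform one clock ring of the chain; returns the updated time."""
    t += rng.exponential(1.0 / r.sum())
    m = rng.choice(L, p=r / r.sum())     # the clock of edge (m, m+1) rings
    if m == 0:                           # left bath refreshes site 1
        xi[1] = rng.gamma(omega[1] / 2, T0)
    elif m == L - 1:                     # right bath refreshes site L-1
        xi[L - 1] = rng.gamma(omega[L - 1] / 2, T1)
    else:                                # bulk edge: Beta redistribution
        p = rng.beta(omega[m] / 2, omega[m + 1] / 2)
        total = xi[m] + xi[m + 1]
        xi[m], xi[m + 1] = p * total, (1 - p) * total
    return t

t = 0.0
for _ in range(100_000):
    t = step(xi, t)
```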

3 The Dual Process

As mentioned in Sect. 1, we want to understand the asymptotic behavior of \(\varvec{X}(t)\) by switching to a dual process. This section is devoted to the discussion of duality.

For \(\mathcal D \subset \mathbb R^d\) and L as in Sect. 2, we now introduce a Markov process \({\varvec{Y}^{(L)}(t)} = ((n_v^{(L)} (t))_{v \in \mathcal D_L},\,(\hat{n}_v^{(L)} (t))_{v \in \mathcal B_L})\) designed to carry certain dual objects, which we call particles, from sites in \(\mathcal D_L\) to

$$\begin{aligned} \mathcal B_L = \left\{ v \in \mathbb Z^d \setminus \mathcal D_L{\text {:}}\,\exists u \in \mathcal D_L{\text {:}}\,\Vert u-v\Vert =1\right\} . \end{aligned}$$

Here, \(n_{v}\) is the number of particles at site \(v \in \mathcal D_L\) and \(\hat{n}_{w}\) is the number of particles permanently dropped off to the storage at \(w \in \mathcal B_L.\) The generator of the process \({\varvec{Y}(t)}\) is given by

$$\begin{aligned} (Af)(\underline{n}) = \left( A_1f\right) (\underline{n}) +\left( A_2f\right) (\underline{n}), \end{aligned}$$

where \(A_1\) corresponds to movements inside \(\mathcal D_L\) and \(A_2\) corresponds to the process of dropping off the particles to the storage. That is,

$$\begin{aligned}&\left( A_1f\right) (\underline{n}) = \sum _{(u,\,v) \in \mathcal E ( \mathcal D_L)} r_{(u,v)} \\&\sum _{k=0}^{n_u+n_v} {{n_u+n_v} \atopwithdelims (){k}} \frac{B\left( k+ \frac{\omega _u}{2},\,n_u+n_v-k+\frac{\omega _v}{2}\right) }{ B\left( \frac{\omega _u}{2},\,\frac{\omega _v}{2}\right) } [f(\underline{n}^{\prime }) - f(\underline{n}) ], \end{aligned}$$

where

$$\begin{aligned} n^{\prime }_w = \left\{ \begin{array}{ll} n_w &{}\mathrm{if}\,w \notin \{ u,\,v\}, \\ k &{}\mathrm{if}\,w = u,\\ n_u+n_v - k &{}\mathrm{if}\,w = v, \end{array} \right. \quad \text {and}\quad \hat{n}^{\prime }_w = \hat{n}_w \quad \forall w \in \mathcal B_L. \end{aligned}$$
(3)

Recall that in case of the process \(\varvec{X},\) the energies are redistributed according to a Beta distribution with parameters \(\omega _u/2,\,\omega _v/2.\) In case of the dual process \(\varvec{Y},\) we redistribute the particles according to the so-called beta-binomial distribution: first we choose p according to \(Beta(\omega _u/2,\,\omega _v/2),\) then we choose \(n^{\prime }_u\) according to a binomial distribution with parameters \(n_u + n_v\) and \(p,\) and set \(n^{\prime }_v= n_u+n_v-n^{\prime }_u.\) The second part of the generator is given by

$$\begin{aligned} \left( A_2f\right) (\underline{n}) = \sum _{u \in \mathcal D_L,\,v \in \mathcal B_L,\, \Vert u-v\Vert =1} r_{(u,v)} [f(\underline{n}^{\prime }) - f(\underline{n}) ], \end{aligned}$$

where

$$\begin{aligned} n^{\prime }_w = \left\{ \begin{array}{ll} n_w &{}\mathrm{if}\,w \ne u, \\ 0 &{}\mathrm{if}\,w = u, \end{array} \right. \quad \text {and}\quad \hat{n}^{\prime }_w = \left\{ \begin{array}{ll} \hat{n}_w &{}\mathrm{if}\,w \ne v, \\ \hat{n}_v+ n_u &{}\mathrm{if}\,w = v. \end{array} \right. \end{aligned}$$

This completes the definition of \(\varvec{Y}^{(L)} (t).\)
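A single bulk event of \(\varvec{Y}\) is easy to implement from the beta-binomial description above; the following sketch (ours, assuming NumPy) is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def dual_edge_update(n_u, n_v, omega_u, omega_v):
    """Redistribute the n_u + n_v dual particles on an edge (u, v):
    draw p ~ Beta(omega_u/2, omega_v/2), then n_u' ~ Binomial(n_u + n_v, p)."""
    p = rng.beta(omega_u / 2, omega_v / 2)
    n_u_new = rng.binomial(n_u + n_v, p)
    return n_u_new, n_u + n_v - n_u_new

print(dual_edge_update(3, 2, omega_u=2, omega_v=4))
```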

Now we turn to the duality. Let us define the function with respect to which the duality holds:

$$\begin{aligned} F(\underline{n},\,\underline{\xi }) = \prod _{u \in \mathcal D_L} \frac{\xi _u^{n_u}\Gamma (\omega _u/2)}{\Gamma (n_u+\omega _u/2)} \prod _{v \in \mathcal B_L} \left[ T\left( \frac{v}{L} \right) \right] ^{\hat{n}_v}. \end{aligned}$$
(4)

The duality with respect to F means that

Proposition 1

For any \(\underline{\xi },\) any \(\underline{n}\) and any \(t>0,\)

$$\begin{aligned} \mathbb E \left( F(\underline{n},\,\varvec{X}(t)) | \varvec{X}_0 = \underline{\xi }\right) = \mathbb E \left( F( \varvec{Y}(t),\,\underline{\xi }) | \varvec{Y}_0 = \underline{n}\right) . \end{aligned}$$

Proof

Clearly it is enough to prove that for any \(\underline{\xi }\) and \(\underline{n},\)

$$\begin{aligned} G F (\underline{n},\,\underline{\xi }) = A F (\underline{n},\,\underline{\xi }), \end{aligned}$$

where \(G\) acts on the variable \(\underline{\xi }\) and \(A\) acts on the variable \(\underline{n}.\)

To prove this, we consider the following two cases, where Case i corresponds to \(G_i\) and \(A_i\) for \(i=1,\,2.\)

  • Case 1: the clock on the edge \((u,\,v)\) rings. The term corresponding to u and v in \(G_1F\) can be written as \(r_{(u,v)} \cdot I \cdot II,\) where

    $$\begin{aligned} I = \Gamma \left( \omega _u/2\right) \Gamma \left( \omega _v/2\right) \prod _{w \in \mathcal D_L \setminus \{ u,\,v\} } \frac{\xi _w^{n_w} \Gamma (\omega _w/2)}{\Gamma (n_w+\omega _w/2)} \prod _{w \in \mathcal B_L} \left[ T\left( \frac{w}{L} \right) \right] ^{\hat{n}_w}, \end{aligned}$$

    and

    $$\begin{aligned} II = \frac{ \int _0^1 p^{n_u + \frac{\omega _u}{2} -1 } (1-p)^{n_v + \frac{\omega _v}{2} -1 } \frac{1}{B\left( \frac{\omega _u}{2},\, \frac{\omega _v}{2}\right) } \left( \xi _u + \xi _v\right) ^{n_u+n_v} dp - \xi _{u}^{n_{u}} \xi _{v}^{n_{v}} }{{\Gamma \left( n_u + \frac{\omega _u}{2}\right) } \Gamma \left( n_v + \frac{\omega _v}{2}\right) }. \end{aligned}$$

    Then we compute

    $$\begin{aligned}&II + \frac{\xi _{u}^{n_{u}} \xi _{v}^{n_{v}} }{{\Gamma \left( n_u + \frac{\omega _u}{2}\right) } \Gamma \left( n_v + \frac{\omega _v}{2}\right) }\\= & {} \frac{1}{B\left( \frac{\omega _u}{2},\, \frac{\omega _v}{2}\right) } \frac{1}{\Gamma \left( n_u + n_v + \frac{\omega _u + \omega _v}{2}\right) } \left( \xi _u + \xi _v\right) ^{n_u+n_v}\\= & {} \frac{1}{B\left( \frac{\omega _u}{2},\, \frac{\omega _v}{2}\right) } \frac{1}{\Gamma \left( n_u + n_v + \frac{\omega _u + \omega _v}{2}\right) } \sum _{k=0}^{n_u+n_v} {{n_u+n_v} \atopwithdelims ()k}\xi _u^{k} \xi _v^{n_u+n_v-k}\\= & {} \frac{1}{B\left( \frac{\omega _u}{2},\, \frac{\omega _v}{2}\right) } \sum _{k=0}^{n_u+n_v} {{n_u+n_v} \atopwithdelims ()k} B \left( k+ \frac{\omega _u}{2},\, n_u+n_v-k+\frac{\omega _v}{2} \right) \frac{\xi _u^{k}}{\Gamma \left( k+ \frac{\omega _u}{2}\right) } \frac{\xi _v^{n_u+n_v-k}}{\Gamma \left( n_u+n_v-k+ \frac{\omega _v}{2}\right) }. \end{aligned}$$

    Thus \(r_{(u,v)} \cdot I \cdot II\) is the term corresponding to u and v in \(A_1F.\)

  • Case 2: the energy at site \(u \in \mathcal D_L\) is updated by the heat bath at \(v \in \mathcal B_L\) (where \(\Vert u-v\Vert =1\)). As before, we write the term corresponding to \((u,\,v)\) in \(G_2F\) as \(r_{(u,v)} \cdot I \cdot II,\) where

    $$\begin{aligned} I =\prod _{u^{\prime } \in \mathcal D_L \setminus \{ u\} } \frac{\xi _{u^{\prime }}^{n_{u^{\prime }}} \Gamma (\omega _{u^{\prime }}/2)}{\Gamma (n_{u^{\prime }}+\omega _{u^{\prime }}/2)} \prod _{v^{\prime } \in \mathcal B_L \setminus \{ v\}} \left[ T\left( \frac{v^{\prime }}{L} \right) \right] ^{\hat{n}_{v^{\prime }}}, \end{aligned}$$

    and

    $$\begin{aligned} II = \frac{\Gamma \left( \frac{\omega _u}{2}\right) }{\Gamma \left( n_u + \frac{\omega _u}{2}\right) } \left[ \int _0^{\infty } \frac{\eta ^{n_u+\frac{\omega _u}{2} -1} \exp \left[ -\frac{\eta }{T\left( \frac{v}{L} \right) } \right] }{ \left[ T\left( \frac{v}{L} \right) \right] ^{\frac{\omega _u}{2}} \Gamma \left( \frac{\omega _u}{2} \right) } d \eta - \xi _{u}^{n_{u}} \right] \left[ T\left( \frac{v}{L} \right) \right] ^{\hat{n}_v}. \end{aligned}$$

    By (1), we obtain that

    $$\begin{aligned}&II + \frac{\xi _u^{n_u} \Gamma \left( \frac{\omega _u}{2}\right) }{\Gamma \left( n_u + \frac{\omega _u}{2}\right) }\left[ T\left( \frac{v}{L} \right) \right] ^{\hat{n}_v} = \left[ T\left( \frac{v}{L} \right) \right] ^{\hat{n}_v+n_u}. \\ \end{aligned}$$

    Thus \(r_{(u,v)} \cdot I \cdot II\) is the term corresponding to \((u,\,v)\) in \(A_2F.\) \(\square \)

Note that the process \(\varvec{Y}\) preserves the total number of particles, which will be denoted by N. We conclude this section with the following simple lemma.

Lemma 1

The restriction of the process \(\varvec{Y}\) to an arbitrary subset of K particles (with \(K<N\)) is also a Markov process and satisfies the definition of \(\varvec{Y}\) with N replaced by K.

Proof

Without loss of generality, we can consider a system with \(K=N-1\) particles. Assume that the clock attached to the edge \((u,\,v)\) rings and that the union of the particles at sites u and v prior to the mixing is labeled \(\{1, \ldots , n\}.\)

Then the probability that after the mixing, the set of particles at site u is exactly \(\{j_1,\ldots ,j_l\} \subset \{1,\,2,\ldots ,n\}\) is given by

$$\begin{aligned} p\left( j_1,\ldots , j_l\right) = {n \atopwithdelims ()l} \frac{\Gamma (l+\omega _u/2)\Gamma (n-l+\omega _v/2)}{\Gamma ( n + \omega _u/2 + \omega _v/2)} \frac{\Gamma (\omega _u/2 +\omega _v/2)}{\Gamma ( \omega _u/2)\Gamma ( \omega _v/2)} \frac{1}{{n \atopwithdelims ()l}}. \end{aligned}$$
(5)

Now assume we add a new particle (of index N). If this new particle is at neither site u nor site v,  then clearly the situation is not disturbed. If it is there, we compute

$$\begin{aligned}&p\left( j_1,\ldots , j_l\right) + p\left( j_1,\ldots , j_l,\,N\right) \\&=\frac{\Gamma (l+\omega _u/2)\Gamma (n+1-l+\omega _v/2)}{\Gamma ( n + 1+ \omega _u/2 + \omega _v/2)} \frac{\Gamma (\omega _u/2 +\omega _v/2)}{\Gamma ( \omega _u/2)\Gamma ( \omega _v/2)} \nonumber \\&\quad + \frac{\Gamma (l+1+\omega _u/2)\Gamma (n-l+\omega _v/2)}{\Gamma ( n+1+ \omega _u/2 + \omega _v/2)} \frac{\Gamma (\omega _u/2 +\omega _v/2)}{\Gamma ( \omega _u/2)\Gamma ( \omega _v/2)}. \nonumber \end{aligned}$$
(6)

An elementary computation shows that (5) is equal to (6). The lemma follows. \(\square \)
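The identity between (5) and (6) can also be confirmed numerically; a small check (ours, assuming SciPy):

```python
from scipy.special import gamma

def p_set(n, l, a, b):
    """Probability (5) of a fixed l-element set (the binomials cancel)."""
    return gamma(l + a) * gamma(n - l + b) / gamma(n + a + b) \
        * gamma(a + b) / (gamma(a) * gamma(b))

n, l, a, b = 7, 3, 1.0, 2.5   # a = omega_u/2, b = omega_v/2
lhs = p_set(n, l, a, b)
rhs = p_set(n + 1, l, a, b) + p_set(n + 1, l + 1, a, b)
print(lhs, rhs)               # the two values agree up to rounding
```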

4 Local Equilibrium

Let \(d,\,\mathcal D\) and L be as before such that \(\mathcal D_L\) is connected.

First we state the existence and uniqueness of invariant measure in the equilibrium case.

Proposition 2

Fix arbitrary functions \(\omega {\text {:}}\, \mathcal D_L \rightarrow \mathbb Z_+\) and \(r{\text {:}}\,\mathcal E (\mathcal D_L) \rightarrow \mathbb R_+.\) If T is constant, then

$$\begin{aligned} \mu ^{(L)}_e = \prod _{v \in \mathcal D_L} \Gamma \left( \frac{\omega _v}{2},\,T \right) , \end{aligned}$$

is the unique invariant probability measure of the process \(\varvec{X}^{(L)}(t).\)

Proof

The discussion in Sect. 1.1 implies the following statement (which is actually well known, see, e.g., Lemma 3 in [5]). Let \(\xi _1\) and \(\xi _2\) be independent Gamma distributed random variables with shape parameters k and l,  respectively, and with the same scale parameter. Let Z be independent of \(\xi _1\) and \(\xi _2\) and have Beta distribution with parameters k and l. Then the pair \((Z(\xi _1+\xi _2),\,(1-Z)(\xi _1+\xi _2))\) has the same distribution as \((\xi _1,\,\xi _2).\) Proposition 2 follows. \(\square \)
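This splitting property can also be tested empirically; a minimal sketch (ours, assuming NumPy/SciPy) verifies that the redistributed energy is again Gamma distributed with the same scale.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
k, l, T, n = 1.5, 2.0, 3.0, 100_000   # shapes, common scale (temperature), sample size

xi1 = rng.gamma(k, T, size=n)
xi2 = rng.gamma(l, T, size=n)
Z = rng.beta(k, l, size=n)
xi1_new = Z * (xi1 + xi2)

# the redistributed energy should again be Gamma(k, T)
print(stats.kstest(xi1_new, stats.gamma(k, scale=T).cdf))
```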

Our primary interest is in the out-of-equilibrium settings where the bath temperature is non constant:

Proposition 3

Let \(\omega {\text {:}}\,\mathcal D_L \rightarrow \mathbb Z_+\) and \(r{\text {:}}\,\mathcal E (\mathcal D_L) \rightarrow \mathbb R_+\) be arbitrary. The process \({\varvec{X}^{(L)}(t)}\) has a unique invariant probability measure \(\mu ^{(L)}.\) Furthermore, the distribution of \({\varvec{X}^{(L)}(t)}\) converges to \(\mu ^{(L)}\) as \(t \rightarrow \infty \) for any initial distribution of \({\varvec{X}^{(L)}(0)}.\)

We skip the proof of Proposition 3 since it is very similar to that of analogous propositions in earlier models, see Proposition 1.2 in [21] and Proposition 2 in [17].

4.1 Hydrodynamic Limit

In order to discuss the local equilibrium in the hydrodynamic limit, we need some definitions. First we introduce some properties of the initial measures.

Definition 1

We say that \(\varvec{X} (0)\) is associated with f if for any fixed \(\delta >0\) and any k,

$$\begin{aligned} \mathbb E \left( \prod _{i=1}^k \xi _{v_i}^{(L)} (0) \right) \sim \prod _{i=1}^k \frac{\omega _{v_i}}{2} f\left( v_i/L\right) , \end{aligned}$$
(7)

as \(L \rightarrow \infty \) uniformly for every \(v_1,\ldots , v_k \in \mathcal D_L\) satisfying \(\Vert v_i - v_j\Vert \ge \delta L \) for \(i \ne j\) and with some fixed continuous function \(f{\text {:}}\,\mathbb R^d \rightarrow \mathbb R_+\) such that \(f|_{\mathbb R^d \setminus D} = T.\)

Definition 2

We say that \(\varvec{X} (0)\) satisfies the uniform moment condition if there are constants \(C_k\) such that \(\mathbb E ( \xi _{v}^k (0)) < C_k\) for every L and for every \(v \in \mathcal D_L.\)

Recall that the Lévy–Prokhorov distance metrizes the weak convergence of measures.

Definition 3

We say that \(\varvec{X}^{(L)}(t)\) approaches local equilibrium in the hydrodynamic limit at \(x \in \mathcal D\) and \(t>0\) if for any finite set \(S \subset \mathbb Z^d\) the Lévy–Prokhorov distance of the distribution of \(\varvec{X}^{(L)}(tL^2)\) restricted to the components \((\xi _{\langle xL \rangle +s})_{s \in S}\) and

$$\begin{aligned} \prod _{s \in S} \Gamma \left( \omega _{{\langle xL \rangle +s} }/2,\, u(t,\,x) \right) \end{aligned}$$

converges to zero as \(L \rightarrow \infty .\)

We will choose initial distributions, i.e., distributions of \(\varvec{X}^{(L)}(0),\) which are associated with a continuous function f. The interesting question is what kind of equation defines u for different choices of \(\omega \) and r. In any case, we expect the initial condition to be given by f and the boundary condition by T. We will consider the two simplest cases here.

Theorem 1

Assume \(d \ge 2\) and \(\omega _v=\omega _0 \in \mathbb Z_+\) for every site v. Assume furthermore that \(r_{(u,v)}=R\left( \frac{u+v}{2L}\right) \) for every \(u \in \mathcal D_L\) and \(v \in \mathbb Z^d\) with \(\Vert u-v\Vert =1,\) where \(R \in \mathcal C^2 (\mathbb R^d,\, \mathbb R_+).\) Also assume that \(\varvec{X}^{(L)}(0)\) is associated with a continuous function \(f{\text {:}}\,\mathcal {\bar{D}} \rightarrow \mathbb R_+\) and satisfies the uniform moment condition. Then \(\varvec{X}^{(L)}(t)\) approaches local equilibrium in the hydrodynamic limit for all \(x \in \mathcal D\) and \(t>0,\) where u is the unique solution of the equation

$$\begin{aligned} {\left\{ \begin{array}{ll} u_t = \nabla ( R\nabla u), \\ u(0,\,x) = f(x), \\ u(t,\,x)|_{\partial \mathcal D} = T(x). \end{array}\right. } \end{aligned}$$

Remark about the initial conditions: note that f represents the energy per degree of freedom at time zero; that is why we need the multiplier \(\omega _v/2\) on the right hand side of (7). We do not have to assume local equilibrium at time zero, which would correspond to the special choice of \(\varvec{X}(0){\text {:}}\) the product of Gamma distributions [with shape parameter \(\omega _v/2\) and scale parameter f(v / L)]. One interesting consequence of Theorem 1 (and similarly of Theorem 2) is that the system satisfies local equilibrium for arbitrary positive macroscopic times even if it only satisfies the given weaker condition at time zero. Since we want to leverage the duality via moments, we also need to assume some condition on the higher moments. The simplest one is the uniform moment condition. Most probably, neither condition (7) nor the uniform moment condition is optimal, but we do not pursue the most general case here.
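To make the limiting object u concrete, here is a minimal explicit finite-difference sketch (ours) for \(u_t = \nabla (R \nabla u)\) with Dirichlet data; we display the one dimensional stencil only for brevity (the theorem itself assumes \(d \ge 2\)), and the rate R and initial profile f below are hypothetical.

```python
import numpy as np

# Explicit conservative scheme for u_t = (R u_x)_x on (0, 1) with
# Dirichlet data u(t, 0) = T0, u(t, 1) = T1 and initial profile f.
M = 200
x = np.linspace(0.0, 1.0, M + 1)
dx = x[1] - x[0]
R = 1.0 + 0.5 * np.sin(2 * np.pi * x)   # hypothetical smooth rate R
f = lambda y: 1.0 + y * (1.0 - y)       # hypothetical initial profile f
T0, T1 = 1.0, 2.0

u = f(x)
u[0], u[-1] = T0, T1
dt = 0.4 * dx**2 / R.max()              # explicit-scheme stability bound
R_half = 0.5 * (R[:-1] + R[1:])         # R at the midpoints i + 1/2

for _ in range(int(0.1 / dt)):          # evolve to macroscopic time t = 0.1
    flux = R_half * np.diff(u) / dx     # R u_x on the dual grid
    u[1:-1] += dt * np.diff(flux) / dx  # conservative update of the interior
```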

Our next choice is the simplest non-continuous environment: we consider \(\mathcal D = [-1,\,1]^d\) with one of the functions \(\omega \) and r constant on \(\mathcal D\) and the other one constant on each of \([-1,\,0] \times [-1,\,1]^{d-1}\) and \([0,\,1] \times [-1,\,1]^{d-1}\) separately. Thus we have the following.

Theorem 2

Let \(d \ge 2,\,\mathcal D = [-1,\,1]^d.\) Assume that \(\varvec{X}^{(L)}(0)\) is associated with f,  a continuous extension of T to \(\mathcal {\bar{D}},\) and satisfies the uniform moment condition.

  1. (a)

    Let \(\omega _v = \omega _{-1}\) if \(v_1 <0\) and \(\omega _v = \omega _{1}\) if \(v_1 \ge 0\) with some positive integers \(\omega _{-1},\,\omega _1\) and let r be constant. Then \(\varvec{X}^{(L)}(t)\) approaches local equilibrium in the hydrodynamic limit for all \(x \in \mathcal D\) with \(x_1 \ne 0\) and all \(t>0\) with u the unique solution of the equation

    $$\begin{aligned} {\left\{ \begin{array}{ll} u_t = r \Delta u \quad \text {for}\,x \in \mathcal D \, \text {with}\, x_1\ne 0,\\ \omega _1 \frac{\partial }{\partial x_1+} u(t,\,(0,\,x_2,\ldots , x_d)) = \omega _{-1} \frac{\partial }{\partial x_1 -} u(t,\,(0,\, x_2,\ldots , x_d)),\\ u(0,\,x) = f(x), \\ u(t,\,x)|_{\partial \mathcal D} = T(x). \end{array}\right. } \end{aligned}$$
  2. (b)

    Let \(r_{(u,v)} = r_{-1}\) if \(u_1+v_1 <0\) and \(r_{(u,v)} = r_{1}\) otherwise, where \(r_{-1},\,r_1\) are fixed positive numbers. Similarly, \(r_w = r_{-1}\) if \(w_1 <0\) and \(r_w = r_1\) otherwise. Let \(\omega \) be constant. Then \(\varvec{X}^{(L)}(t)\) approaches local equilibrium in the hydrodynamic limit for all \(x \in \mathcal D\) with \(x_1 \ne 0\) and all \(t>0\) with u the unique solution of the equation

    $$\begin{aligned} {\left\{ \begin{array}{ll} u_t = r_{sign(x_1)} \Delta u \quad \text {for}\, x \in \mathcal D \, \text {with}\,x_1\ne 0,\\ r_1 \frac{\partial }{\partial x_1+} u(t,\,(0,\,x_2,\ldots , x_d)) = r_{-1} \frac{\partial }{\partial x_1 -} u(t,\,(0,\, x_2,\ldots , x_d)),\\ u(0,\,x) = f(x), \\ u(t,\,x)|_{\partial \mathcal D} = T(x). \end{array}\right. } \end{aligned}$$

4.2 Nonequilibrium Steady State

Now we are interested in the invariant measure of the Markov chains for finite (but large) L. Specifically, we are looking for a function \(u(x),\,x \in \mathcal D,\) such that in the limit \(\lim _{L \rightarrow \infty } \lim _{t \rightarrow \infty }\) the local temperature exists and is given by u(x). In case the hydrodynamic limit is known, u(x) is expected to be equal to \(\lim _{t \rightarrow \infty } u(t,\,x).\) Since we allow an arbitrary environment (r and \(\omega \)), we define local equilibrium in a slightly more general form, namely with an L dependent u. Of course, in all natural examples one expects \(u^{(L)}\) to converge.

Definition 4

We say that \(\varvec{X}^{(L)}(t)\) approaches local equilibrium in the nonequilibrium steady state if for any \(x \in \mathcal D\) and any finite set \(S \subset \mathbb Z^d\) the Lévy–Prokhorov distance of the invariant measure of \(\varvec{X}^{(L)}(t)\) restricted to the components \((\xi _{\langle xL \rangle +s})_{s \in S}\) and

$$\begin{aligned} \prod _{s \in S} \Gamma \left( \omega _{{\langle xL \rangle +s} }/2,\,u^{(L)}(x) \right) , \end{aligned}$$

converges to zero as \(L \rightarrow \infty .\)

Let us fix \(d=1\) and \(\mathcal D = (0,\,1).\) To simplify notation, we will write \(r_{m+1/2} := r_{(m,m+1)},\, \omega _0 := \omega _1\) and \(\omega _L := \omega _{L-1}.\) Furthermore, we will need the following two definitions:

$$\begin{aligned} \psi (m) = \frac{\omega _{m-1} + \omega _m}{r_{m-1/2} \omega _{m-1} \omega _m} \quad \text {for}\quad 1 \le m \le L, \end{aligned}$$

and

$$\begin{aligned} \mathcal A^{(L)}(x) = \frac{\sum _{m=1}^{\lfloor xL \rfloor } \psi (m)}{\sum _{m=1}^{L} \psi (m)}. \end{aligned}$$

Theorem 3

Let \(d=1\) and \(\mathcal D = (0,\,1).\) Assume that the functions r and \(\omega \) are bounded away from zero and infinity uniformly in L and the temperature on the boundary is given by \(T(0),\,T(1) \in \mathbb R_+ \cup \{ 0\}.\) Then \(\varvec{X}^{(L)}(t)\) approaches local equilibrium in the nonequilibrium steady state with

$$\begin{aligned} u^{(L)}(x) = \left( 1 - \mathcal A^{(L)} (x)\right) T(0) + \mathcal A^{(L)}(x) T(1). \end{aligned}$$

Clearly, \(u^{(L)}(x)\) need not converge in this generality. That is why we consider two special cases. In case (a), r is constant and \(\omega \) is random. In this case, we prove quenched local equilibrium in the nonequilibrium steady state, i.e., the almost sure convergence of \( u^{(L)}(x)\) to a deterministic limit. In case (b), \(\omega \) is constant but r is prescribed by a non-constant macroscopic function. We prove that \( u^{(L)}(x)\) converges in this case as well.

Proposition 4

  1. (a)

Let r be constant, let K be the maximal number of degrees of freedom, and let us fix some continuous function

    $$\begin{aligned} \varkappa {\text {:}}\, [0,\,1] \rightarrow \left\{ p \in \mathbb R^K{\text {:}}\, p \ge 0,\, \sum _{i=1}^K p_i = 1\right\} .\end{aligned}$$

    For each L,  let us choose \(\omega _{v}^{(L)}\) randomly and independently from one another with

    $$\begin{aligned} \mathbb P\left( \omega _{v}^{(L)} = i\right) = \varkappa _i(v/L). \end{aligned}$$

    Then for almost every realization of the random functions \(\omega ^{(L)},\)

    $$\begin{aligned} \lim _{L \rightarrow \infty } \mathcal A^{(L)}(x) = \frac{\sum _{i=1}^K \frac{1}{i} \int _0^x \varkappa _i(y)dy}{\sum _{i=1}^K \frac{1}{i} \int _0^1 \varkappa _i(y)dy}. \end{aligned}$$
  1. (b)

Let \(\omega \) be constant and let \(\varrho {\text {:}}\, [0,\,1] \rightarrow \mathbb R_+\) be a continuous function. For each L and \(v = 0,\,1,\ldots ,L-1,\) define \(r^{(L)}_{v+1/2} = \varrho (v/L).\) Then

    $$\begin{aligned} \lim _{L \rightarrow \infty } \mathcal A^{(L)}(x) = \frac{\int _0^x 1/ \varrho (y)dy}{ \int _0^1 1/ \varrho (y)dy}. \end{aligned}$$
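As an illustration, the limit in case (b) can be compared with the finite-L profile directly; in the sketch below (ours, assuming NumPy/SciPy) the rate function \(\varrho \) is hypothetical.

```python
import numpy as np
from scipy.integrate import quad

rho = lambda y: 1.0 + y                  # hypothetical continuous rate function
L, x = 2000, 0.3
r_half = rho(np.arange(L) / L)           # r_{v+1/2} = rho(v/L), v = 0..L-1
psi = 2.0 / r_half                       # constant omega: psi is proportional to 1/r_{m-1/2}
A_L = psi[: int(x * L)].sum() / psi.sum()

num, _ = quad(lambda y: 1.0 / rho(y), 0, x)
den, _ = quad(lambda y: 1.0 / rho(y), 0, 1)
print(A_L, num / den)                    # the two values should be close
```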

Before turning to the proofs of the above results, we briefly comment on some possibilities of extension.

4.3 Possible Extensions

As we will see, the proof of Theorems 1 and 2 also provides the local equilibrium in the nonequilibrium steady state.

Corollary 4

Consider the setup of either Theorem 1 or 2. Then \(\varvec{X}^{(L)} (t)\) approaches local equilibrium (assuming \(x_1 \ne 0\) in case of Theorem 2) in the nonequilibrium steady state with \(U(x) = \lim _{t \rightarrow \infty } u(t,\,x).\)

Indeed, for \(\delta >0\) fixed one can find some t large such that with probability at least \(1-\delta ,\) all processes in Proposition 6 have arrived at the boundary before \(tL^2.\) In such cases, \(\tilde{\varvec{Y}}_i^{(L)}({tL^2}) = \tilde{\varvec{Y}}_i^{(L)}({\infty }),\) and the latter can be used to prove local equilibrium in the nonequilibrium steady state (cf. the proof of Theorem 3).

Furthermore, it seems likely that our proof could be adapted to a version of Theorem 2 with more general domains and with piecewise constant r and \(\omega \) (see [20] for the extension of skew Brownian motion to such scenarios).

However, the case of more general inhomogeneity in either higher dimensions or in the hydrodynamic limit can be difficult. For example, if \(\varkappa \) is constant in Proposition 4(a), then in order to verify the hydrodynamic limit, one would have to compute the scaling limit of interacting random walkers among iid conductances, whereas even the case of a single random walker is nontrivial (see [22]). Clearly, the cases of higher dimensions or non-iid environments are even harder.

5 Proof of Theorems 1 and 2

Since the proofs of Theorems 1 and 2 are very similar, we provide one proof and distinguish between the cases of Theorems 1, 2(a) and 2(b) when necessary. Let us fix some \(t>0,\) a point \(x \in \mathcal D\) and a finite set \(S \subset \mathbb Z^d.\) We need to show that the (joint) distribution of \((\xi _{\langle xL \rangle +s} (tL^2))_{s \in S}\) converges to the product of Gamma distributions with scale parameter \(u(t,\,x).\) As is well known, any product of Gamma distributions is characterized by its moments (see [26]). Thus it is enough to prove that the moments of \((\xi _{\langle xL \rangle +s})_{s \in S}\) converge to the product of the moments of the Gamma distributions (convergence of second moments implies tightness). When computing the moments of order \(n^*_s \in \mathbb N,\,s\in S,\) we can use Proposition 1 to switch to the dual process.

We need to introduce some auxiliary processes. Let us denote by \(\tilde{\varvec{Y}}\) the slight variant of \(\varvec{Y}\) in which the positions of the distinguishable particles are recorded. More precisely, the phase space of \(\tilde{\varvec{Y}} = (\tilde{\varvec{Y}}^{(L)}_{i}(t))_{0 \le t,\, 1 \le i \le N}\) is \((\mathcal D_L \cup \mathcal B_L)^N,\) where \(N=\sum _{s\in S} n^*_s\) and the initial condition is given such that for all \(s \in S,\)

$$\begin{aligned} \# \left\{ i{\text {:}}\,\tilde{\varvec{Y}}_{i}(0) = \langle xL \rangle + s \right\} = n^*_s. \end{aligned}$$

For any \(t\ge 0\) we define \(\tilde{\varvec{Y}} (t)\) so that

$$\begin{aligned} \# \left\{ i{\text {:}}\,\tilde{\varvec{Y}}_{i}(t) = v \in \mathcal D_L\right\} = n_v\left( t\right) , \quad \# \left\{ i{\text {:}}\,\tilde{\varvec{Y}}_{i}(t) = v \in \mathcal B_L\right\} = \hat{n}_v\left( t\right) , \end{aligned}$$
(8)

and for all t

$$\begin{aligned} \# \left\{ i{\text {:}}\,\tilde{\varvec{Y}}_{i}(t-) \ne \tilde{\varvec{Y}}_{i}(t+)\right\} \le 1. \end{aligned}$$

We want to show that the diffusively rescaled version of \(\tilde{\varvec{Y}}\) converges weakly to N independent copies of some (generalized) diffusion processes \(\mathcal Y.\) Then the Kolmogorov backward equation associated with these processes will provide the function u. Clearly, the process \(\mathcal Y\) will have to be stopped on \(\partial \mathcal D.\) So as to shed light on the main component of the proof, namely the convergence to the diffusion process, we introduce a further simplification by not stopping the particles.

We can assume that R is bounded in case of Theorem 1 (possibly by multiplying with a smooth function which is constant on \(\mathcal D\) and decays quickly) and extend the definition of \(\omega _v,\,r_{(u,v)}\) in case of Theorem 2 for any \(u,\,v \in \mathbb Z^d.\) Then we consider the process \(\varvec{Z} (t) = (n_v(t))_{v \in \mathbb Z^d}\) on the space

$$\begin{aligned} \left\{ (z_v)_{v \in \mathbb Z^d}{\text {:}}\, z_v \in \mathbb Z_+, \quad \sum _{v \in \mathbb Z^d} z_v = N \right\} , \end{aligned}$$

with the generator \(A^{\prime }_1,\) obtained from \(A_1\) by replacing \(\sum _{(u,v) \in \mathcal E(\mathcal D_L)}\) with \(\sum _{(u,v) \in \mathcal E(\mathbb Z^d)}.\) Then we define \(\tilde{\varvec{Z}}\) from \(\varvec{Z}\) the same way as we defined \(\tilde{\varvec{Y}}\) from \(\varvec{Y}.\) Observe that by construction (and with the natural coupling), for \(N=1\) we have

$$\begin{aligned} \tilde{\varvec{Y}}^{(L)} (s) = \tilde{\varvec{Z}} \left( s \wedge \tau _{\mathcal D_L}\right) \quad \text {where} \quad \tau _{\mathcal D_L} = \min \left\{ s{\text {:}}\,\tilde{\varvec{Z}} (s) \notin \mathcal D_L\right\} . \end{aligned}$$
(9)

Now we define the limiting process \((\mathcal Z (s))_{0 \le s \le t}\) with \(\mathcal Z (0) = x\) and with the generator

$$\begin{aligned} \mathcal L = {\left\{ \begin{array}{ll} \sum _{i=1}^d R(x) \frac{\partial ^2 }{\partial x_i^2} + \sum _{i=1}^d \frac{\partial R}{\partial x_i } \frac{\partial }{\partial x_i } &{}\quad \text {in Theorem 1},\\ \sum _{i=1}^d r \frac{\partial ^2 }{\partial x_i^2} &{}\quad \text {in Theorem 2(a)},\\ \sum _{i=1}^d r_{sign(x_1)} \frac{\partial ^2 }{\partial x_i^2} &{}\quad \text {in Theorem 2(b)},\end{array}\right. } \end{aligned}$$

acting on

  1. (1)

    functions \(\phi \in \mathcal C^2_0\) in case of Theorem 1,

  2. (2)

    compactly supported continuous functions \(\phi \) which admit \(\mathcal C^2\) extensions on \( (\mathbb R_- \cup 0) \times \mathbb R^{d-1}\) and \( (\mathbb R_+ \cup 0) \times \mathbb R^{d-1}\) and

    $$\begin{aligned} {\left\{ \begin{array}{ll} \omega _{-1} \frac{\partial }{\partial x_1 -} \phi (0,\,x_2,\ldots , x_d) = \omega _1 \frac{\partial }{\partial x_1 +} \phi (0,\,x_2,\ldots , x_d) &{} \text { in case of Theorem 2(a)},\\ r_{-1} \frac{\partial }{\partial x_1 -} \phi (0,\,x_2, \ldots , x_d) = r_1 \frac{\partial }{\partial x_1 +} \phi (0,\,x_2, \ldots , x_d) &{} \text {in case of Theorem 2(b)}. \end{array}\right. } \end{aligned}$$

Note that \(\mathcal Z \) is a diffusion process in case of Theorem 1 and a generalized diffusion process in case of Theorem 2 (see [20] for a survey on generalized diffusion processes and [1] for an early proof of the existence of \(\mathcal Z\) in case of Theorem 2(b)). Now let us stop \(\mathcal Z\) on \(\partial \mathcal D\) and define

$$\begin{aligned} \mathcal Y (s) = \mathcal Z \left( s \wedge \tau _{\mathcal D}\right) \quad \text {where} \quad \tau _{\mathcal D} = \min \{ s{\text {:}}\, \mathcal Z(s) \in \partial \mathcal D\}. \end{aligned}$$
(10)

The connection between the processes \(\tilde{\varvec{Y}}\) and the PDE’s defining u is most easily seen in the simplest case of one particle.

Let \(\Rightarrow \) denote weak convergence in the Skorokhod space \(\mathcal D[0,\,T]\) with respect to the supremum metric and with some T to be specified. (We need the Skorokhod space as the trajectories of \({\tilde{\varvec{Z}} }\) are not continuous, but the limiting measures will always be supported on \(\mathcal C[0,\,T],\) so we can use the supremum metric. Alternatively, one could smooth the trajectories of \({\tilde{\varvec{Z}}} \) and work only in the space \(\mathcal C[0,\,T].\))

Lemma 2

If \(N=1,\) then

$$\begin{aligned} \left( \frac{\tilde{\varvec{Z}} ({sL^2})}{L} \right) _{0 \le s \le t} \Rightarrow (\mathcal Z (s))_{0 \le s \le t}. \end{aligned}$$

Proof

In the setup of Theorem 1, this follows from Theorem 11.2.3 in [25]. More precisely, as we can neglect events of small probability, we can assume that (A) the particle jumps fewer than \(L^3\) times before \(tL^2\) and consequently (B) the smallest time between two consecutive jumps before \(tL^2\) is bigger than \(L^{-4}.\) Now choosing \(h=L^{-4},\) the process \(\tilde{\varvec{Z}}\) can jump at most once on the interval \([kh,\,(k+1)h]\) for all \(k <t L^6.\) Now we choose

$$\begin{aligned} \Pi _h\left( \frac{z}{L},\,\frac{z+e_i}{L}\right) = L^{-2} R \left( \frac{z+e_i/2}{L} \right) , \quad \Pi _h\left( \frac{z}{L},\,\frac{z}{L}\right) = 1-L^{-2} \sum _{e_i} R \left( \frac{z+e_i/2}{L} \right) , \end{aligned}$$

for any \(z \in \mathbb Z^d\) and any unit vector \(e_i.\) With this choice, we easily see that \(a^{ii}(x) = 2R(x)\) and \(b^i(x) = \frac{\partial }{\partial x_i} R(x)\) at the end of p. 267 in [25]. Applying Theorem 11.2.3, the lemma follows.

In case of Theorem 2(a), we consider the first coordinate of \(\tilde{\varvec{Z}}\) and the other \(d-1\) coordinates separately. Under diffusive scaling, the former one converges to a skew Brownian motion by e.g., [8], while the latter one converges to a \(d-1\) dimensional Brownian motion by Donsker’s theorem. A slight technical detail is that we need to switch to discrete time so as to apply the result of [8]. Let us thus define \({ \varvec{Z}}^{\prime }_1 (k) = \tilde{ \varvec{Z}}_1 (\tau _{1,k})\) for non-negative integers k,  where \(\tau _{1,0} = 0\) and \(\tau _{1,k} = \min \{ t> \tau _{1, k-1}{\text {:}}\,\tilde{ \varvec{Z}}_1 (t-) \ne \tilde{ \varvec{Z}}_1 (t+)\}.\) Then by [8] we have that

$$\begin{aligned} \left( \frac{{ \varvec{Z}}^{\prime }_1 (sL^2)}{L} \right) _{0 \le s \le 4rt} \Rightarrow \left( SBM_{\omega _1/(\omega _{-1}+ \omega _1)} (s) \right) _{0 \le s \le 4rt}, \end{aligned}$$
(11)

where \(SBM_{\omega _1/(\omega _{-1}+ \omega _1)}\) is the skew Brownian motion with parameter \(\omega _1/(\omega _{-1}+ \omega _1),\) which is by definition equal to \(\mathcal Z_1(s)/\sqrt{2r}.\) Now observe that by the law of large numbers, the functions \(s \mapsto 2r\tau _{1,sL^2}/L^2\) converge in probability to the identity function on \(\mathcal D[0,\,t].\) Hence the statement of the lemma, restricted to the first component, follows. Finally, the other components are independent of the first one and, under diffusive scaling, they converge to Brownian motion by Donsker's theorem.
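The appearance of the skew Brownian motion can be illustrated by the classical discrete construction (symmetric steps away from the origin, a step to the right with probability \(\alpha \) at the origin); the following toy sketch (ours, assuming NumPy) checks the characteristic property \(\mathbb P(SBM_{\alpha }(t) > 0) = \alpha .\)

```python
import numpy as np

rng = np.random.default_rng(6)
alpha = 0.7                      # skewness parameter, e.g. omega_1 / (omega_{-1} + omega_1)
n_steps, n_walks = 4000, 20_000

s = np.zeros(n_walks, dtype=int)
for _ in range(n_steps):
    u = rng.random(n_walks)
    at_origin = (s == 0)
    # step right with probability alpha at the origin, 1/2 elsewhere
    s += np.where(u < np.where(at_origin, alpha, 0.5), 1, -1)

# for skew Brownian motion, P(SBM_alpha(t) > 0) = alpha for every t > 0
print((s > 0).mean(), alpha)     # approximately equal for n_steps large
```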

In case of Theorem 2(b) we use a similar argument to the one in the previous paragraph. Namely, we still have the analogue of (11) and by independence

$$\begin{aligned} \left( \frac{{ \varvec{Z}}^{\prime } (sL^2)}{L} \right) _{0 \le s \le T} \Rightarrow \left( SBM_{r_1/(r_{-1}+ r_1)} (s/d),\, \mathcal W_{d-1} (s/d) \right) _{0 \le s \le T}, \end{aligned}$$
(12)

where

$$\begin{aligned} T=4 d \max \left\{ r_{-1},\, r_{1} \right\} t, \end{aligned}$$

\(\mathcal W_{d-1}\) is a \(d-1\) dimensional standard Brownian motion and \({ \varvec{Z}}^{\prime } (k) = \tilde{ \varvec{Z}} (\tau _{k})\) with \(\tau _{0} = 0\) and \(\tau _{k} = \min \{ t> \tau _{ k-1}{\text {:}}\, \tilde{ \varvec{Z}} (t-) \ne \tilde{ \varvec{Z}} (t+)\}\) for positive integers k. The difference from the case of Theorem 2(a) is that now in order to recover the convergence of \(\tilde{\varvec{Z}},\) we need a nonlinear time change (and consequently have to work with all coordinates at the same time). To do so, let us introduce the occupation times the first coordinate spends on the negative and positive half-lines:

$$\begin{aligned} \tau _{-}(s) = \int _0^s 1_{f_1(y) < 0} dy, \quad \tau _{+}(s) = \int _0^s 1_{f_1(y) > 0} dy, \end{aligned}$$

for \(f = (f_1,\ldots , f_d) \in \mathcal D([0,\,T],\, \mathbb R^d).\) Now we can define

$$\begin{aligned} \rho (s) = \min \left\{ y{\text {:}}\,\frac{\tau _-(y)}{2 d r_{-1}} + \frac{\tau _+ (y)}{2 d r_{1}} \ge s\right\} , \end{aligned}$$

and

$$\begin{aligned} \Theta {\text {:}}\,\mathcal D([0,\,T]) \rightarrow \mathcal D([0,\,3t/2]), \end{aligned}$$

by

$$\begin{aligned} (\Theta (f)) (s) = {\left\{ \begin{array}{ll} f(\rho (s)) &{} \text {if}\,\tau _-(T) + \tau _+(T) \ge 3T/4, \\ 0 &{} \text {otherwise}. \end{array}\right. } \end{aligned}$$

The function \(\Theta \) is well defined because of the choice of T. Let us denote by \(Q_L\) and Q the measures on \(\mathcal D[0,\,T]\) given by the left and right hand sides of (12). Next, we observe that

$$\begin{aligned} Q\left( \tau _-(T) + \tau _+(T) = T\right) = 1, \end{aligned}$$

and \(\Theta \) is continuous on the set \(\{ \tau _-(T) + \tau _+(T) = T \}.\) Now we can apply the continuous mapping theorem (see, e.g., Theorem 5.1 in [3]) to conclude \(\Theta _*Q_L \Rightarrow \Theta _* Q.\) Here, \(\Theta _*Q_L \) is the distribution of a process \((\hat{\varvec{Z}}(s))_{0 \le s \le 3t/2}\) obtained from \(({\varvec{Z}}^{\prime }(s))_{0 \le s \le T}\) by rescaling time with \(\Theta .\) Since \(\Theta _* Q\) is the distribution of \(\mathcal Z(s),\) it suffices to show that

$$\begin{aligned} \left( \frac{\tilde{\varvec{Z}} ({sL^2}) - \hat{\varvec{Z}} ({sL^2})}{L} \right) _{0 \le s \le t} \Rightarrow 0. \end{aligned}$$
(13)

Let us introduce the notations \(z_k = {\varvec{Z}}^{\prime } ({k}) - {\varvec{Z}}^{\prime } ({k-1}),\,\mathcal T_i = \tau _{i} - \tau _{i-1}.\) By an elementary estimate on the local time of random walks, we have

$$\begin{aligned} \mathbb P \left( \# \left\{ k< tL^2{\text {:}}\, \varvec{Z}^{\prime } (k)_1 =0\right\} > L^{3/2}\right) = o(1). \end{aligned}$$

Since we prove weak convergence, we can neglect the above event and subsequently assume that \(z_1,\ldots , z_{tL^2}\) is such that the first coordinate's local time at zero is smaller than \(L^{3/2}.\) In particular, we use the first line of the definition of \(\Theta \) when we construct \(\hat{\varvec{Z}}\) for L large. Thus with the notation

$$\begin{aligned} \tilde{i}_s = \max \left\{ i{\text {:}}\, \sum _{j=1}^i \mathcal T_j< s \right\} \quad \hat{i}_s = \max \left\{ i{\text {:}}\, \sum _{j=1}^i \mathbb E \left( \mathcal T_j 1_{\{ {\varvec{Z}}^{\prime } (j)_1 \ne 0 \}} | z_1,\ldots , {z_{tL^2}}\right) < s \right\} , \end{aligned}$$

we have \(\tilde{\varvec{Z}} (s) = {\varvec{Z}}^{\prime } (\tilde{i}_s)\) and \(\hat{\varvec{Z}} (s) = {\varvec{Z}}^{\prime } (\hat{i}_s).\) Now with some fixed small \(\delta \) let us write

$$\begin{aligned} \sum ^k = \sum _{i\in [k \delta L^2,\, (k+1)\delta L^2]} \quad \text {and} \quad \sum ^{k^{*}} = \sum _{i\in [k \delta L^2,\, (k+1)\delta L^2],\,{\varvec{Z}}^{\prime } (i)_1 \ne 0}, \end{aligned}$$

for \(k=1,\,2,\ldots , T/\delta .\) Then by Chebyshev’s inequality,

$$\begin{aligned}&\mathbb P \left( \left| \sum ^{k^{*}} \mathcal T_i - \sum ^{k^{*}} \mathbb E \left( \mathcal T_i | z_1,\ldots , {z_{tL^2}}\right) \right| > \delta ^3 L^2 \,\Big |\, z_1,\ldots , z_{tL^2} \right) \nonumber \\&\quad< \frac{\delta L^2}{4d^2 \min \{r_1^2,\,r_{-1}^2 \}} \frac{1}{\delta ^6 L^4} < \delta ^{10}, \nonumber \end{aligned}$$

for L large enough. Furthermore, by our assumption on \(z_1,\ldots , {z_{tL^2}},\)

$$\begin{aligned} \mathbb P \left( \left| \sum ^{k^{*}} \mathcal T_i - \sum ^{k} \mathcal T_i \right| > \delta ^3 L^2 \,\Big |\, z_1,\ldots , {z_{tL^2}} \right) < \delta ^{10}, \end{aligned}$$

also holds for L large. Consequently, the random times \(\tilde{s}_l = L^{-2}\sum _{k=1}^l \sum ^{k^{*}} \mathbb E (\mathcal T_i | z_1, \ldots , {z_{tL^2}})\) and \(\hat{s}_l = L^{-2}\sum _{i=1}^{l\delta L^2} \mathcal T_i\) satisfy

  • \(0=\tilde{s}_0<\tilde{s}_1<\tilde{s}_2< \cdots < \tilde{s}_{T/\delta },\,\tilde{s}_{T/\delta } \ge 3t/2\) and \(\tilde{s}_l - \tilde{s}_{l-1} < C\delta ,\)

  • for all l with \(\tilde{s}_l< t,\, \mathbb P (|\hat{s}_l - \tilde{s}_l| > 2 \delta ^2 ) < \delta ^{8}\) and \(\hat{\varvec{Z}} (\hat{s}_l L^2) = \tilde{\varvec{Z}} (\tilde{s}_l L^2).\)

This together with tightness of \(\varvec{Z}^{\prime }\) (proved in the usual way) gives (13). We have finished the proof of Lemma 2. \(\square \)

Lemma 3

If \(N=1,\) then

$$\begin{aligned} \left( \frac{\tilde{\varvec{Y}}^{(L)}({sL^2})}{ L} \right) _{0 \le s \le t} \Rightarrow (\mathcal Y (s))_{0 \le s \le t}. \end{aligned}$$

Proof

This is a consequence of the continuous mapping theorem, Lemma 2, (9) and (10). \(\square \)

Next, we prove a simpler version of Theorems 1 and 2, namely the convergence of the expectations:

Proposition 5

Consider the setup of either Theorem 1 or 2. Then

$$\begin{aligned} \lim _{L \rightarrow \infty } \mathbb E \left( \xi _{\langle xL \rangle }^{(L)} \left( t L^2\right) \right) = u(t,\,x) \frac{\omega _{x }}{2}, \end{aligned}$$
(14)

where \(\omega _x\) is the constant value of \(\omega \) in case of Theorems 1 and 2(b), and \(\omega _x = \omega _{sign(x_1)}\) in case of Theorem 2(a).

Proof

By Proposition 1, the left hand side of (14) is equal to

$$\begin{aligned} \frac{\omega _{\langle xL \rangle }}{2} \mathbb E \left( \xi ^*_{\tilde{\varvec{Y}} (tL^2)} \frac{2}{\omega _{\tilde{\varvec{Y}} (tL^2)}} 1_{ \{ \tilde{\varvec{Y}} (tL^2) \in \mathcal D_L \}} + T \left( \frac{{\tilde{\varvec{Y}}({tL^2})}}{L} \right) 1_{ \{\tilde{\varvec{Y}}({tL^2}) \in \mathcal B_L \}} \right) =:I + II, \end{aligned}$$

where \(\xi ^*_v = \xi _v(0).\) In case of Theorems 1 and 2(b), \(\omega \) is constant, thus we obtain

$$\begin{aligned} I = \sum _{v \in \mathcal D_L} \mathbb P\left( \tilde{\varvec{Y}} \left( tL^2\right) =v \right) \mathbb E\left( \xi _v^*\right) . \end{aligned}$$

Recall that (7) gives \(\mathbb E (\xi _v^*) = \frac{\omega }{2} f(v/L) + o(1).\) Applying Lemma 3 and the definition of the weak convergence, we obtain

$$\begin{aligned} \lim _L I = \frac{\omega }{2} \mathbb E\left( f(\mathcal Y (t)) 1_{\{\mathcal Y(t) \in \mathcal D\}}\right) . \end{aligned}$$

Similarly,

$$\begin{aligned} \lim _L II= \frac{\omega }{2} \mathbb E\left( T(\mathcal Y (t)) 1_{\{\mathcal Y(t) \in \mathcal \partial D\}}\right) . \end{aligned}$$

Now the proposition follows from the fact that \(u(t,\,x)\) solves the Kolmogorov equation associated with the process \(\mathcal Y.\) Finally, in case of Theorem 2(a), we consider the function h on \( \mathcal D \) with \(h(y) = 0\) if \(y_1 <0,\,h(y) = y_1/\delta \) if \(0 \le y_1 < \delta \) and \(h(y) = 1\) if \(y_1 > \delta \) and approximate I by \(Ia + Ib,\) where Ia is obtained from I by multiplying the integrand with \(h(\tilde{\varvec{Y}} (tL^2)/L)\) and Ib is obtained from I by multiplying the integrand with \(h( - \tilde{\varvec{Y}} (tL^{2})/L).\) Clearly, \(I - \varepsilon< Ia + Ib < I\) for \(\delta \) small enough. Then we can repeat the above argument since \(\omega \) is constant on the integration domain in both Ia and Ib. \(\square \)

In order to complete the proof of Theorems 1 and 2, we need an extension of Lemma 3 from \(N=1\) to arbitrary \(N{\text {:}}\)

Proposition 6

The processes

$$\begin{aligned} \left( \frac{\tilde{\varvec{Y}}_i^{(L)}({sL^2})}{ L} \right) _{0 \le s \le t}, \end{aligned}$$

for \(i=1,\ldots ,N\) converge weakly to N independent copies of \((\mathcal Y(s))_{0 \le s \le t}.\)

Now, we prove Theorems 1 and 2 assuming Proposition 6. By the discussion at the beginning of this section and by (1), Theorems  1 and 2 will be proved once we establish

$$\begin{aligned} \mathbb E \left( \prod _{s \in S} \xi _{\langle xL \rangle +s}^{n^*_s}\left( tL^2\right) \right) \sim u(t,\,x)^{N} \prod _{s \in S} \frac{ \Gamma (n^*_s + \omega _{\langle xL \rangle +s}/2)}{\Gamma (\omega _{\langle xL \rangle +s}/2)}. \end{aligned}$$
(15)

By Proposition 6, \(\lim _{L} \mathbb P (\mathcal A^{(L)}_{\delta }) = o_{\delta } (1),\) where

$$\begin{aligned} \mathcal A_{\delta } = \mathcal A^{(L)}_{\delta } = \left\{ \left\| {\tilde{\varvec{Y}}_i^{(L)}\left( {tL^2}\right) } - {\tilde{\varvec{Y}}_j^{(L)}\left( {tL^2}\right) } \right\| \le \delta L\, \text {for some}\,i \ne j\right\} . \end{aligned}$$

This, combined with the uniform moment condition and the Cauchy–Schwarz inequality gives

$$\begin{aligned} \lim _{L} \mathbb E \left( 1_{\mathcal A_{\delta }} F\left( \varvec{Y}\left( t L^2\right) ,\,\underline{\xi }^*\right) \right) = o_{\delta } (1), \end{aligned}$$

where \(\xi ^*_v = \xi _v(0)\) and F is the duality function defined in (4). Now Proposition 1 and the fact that \(\underline{\xi }^*\) is associated with f imply that the left hand side of (15) is \(o_{\delta } (1)\) close to

$$\begin{aligned} \prod _{s \in S} \frac{ \Gamma (n^*_s + \omega _{\langle xL \rangle +s}/2)}{\Gamma (\omega _{\langle xL \rangle +s}/2)} \, \mathbb E \left( \prod _{i=1}^N \left[ \xi ^*_{\tilde{\varvec{Y}}_i (tL^2)} \frac{2}{\omega _{\tilde{\varvec{Y}}_i (tL^2)}} 1_{ \{ \tilde{\varvec{Y}}_i (tL^2) \in \mathcal D_L \}} + T \left( \frac{\tilde{\varvec{Y}}_i({tL^2})}{L} \right) 1_{ \{\tilde{\varvec{Y}}_i({tL^2}) \in \mathcal B_L \}} \right] \right) . \end{aligned}$$

Now we can cut this integral into \(2^N\) pieces and apply a version of the proof of Proposition 5 to conclude (15).

In order to complete the proof of Theorems 1 and 2 it only remains to prove Proposition 6, which is the subject of the next section.

6 Proof of Proposition 6

We are going to prove a variant of Proposition 6 obtained by replacing \(\tilde{ \varvec{Y}}_i\) and \(\mathcal Y\) with \(\tilde{\varvec{Z}}\) and \(\mathcal Z.\) Proposition 6 follows from this variant the same way as Lemma 3 follows from Lemma 2.

The idea of the proof is borrowed from ([17], Sect. 5): we show that with probability close to 1, \(\tilde{\varvec{Z}}_i(s)\) and \(\tilde{\varvec{Z}}_j(s)\) will not meet after getting separated by a distance \(L^{\gamma }\) with some \(\gamma \) close to 1. Since \(\tilde{\varvec{Z}}_i(s)\) and \(\tilde{\varvec{Z}}_j(s)\) move independently if their distance is bigger than 1, we can replace \(\tilde{\varvec{Z}}(s)\) for \(s > \tau = \max \{ \tau _{i,j}\}\) by N independent copies of \(\tilde{\varvec{Z}}_1(s),\) where \(\tau _{i,j}\) is the first time s when \(\Vert \tilde{\varvec{Z}}_i(s) - \tilde{\varvec{Z}}_j(s) \Vert > L^{\gamma }.\) It only remains to show that \(\tilde{\varvec{Z}}_i(\tau )\) is close to xL and \(\tau /L^2\) is negligible. We complete this strategy in the case of Theorem 1, \(N=2\) and \(d=2\) in Sect. 6.1 and for all other cases in Sect. 6.2.

6.1 Case of Theorem 1, N = 2, d = 2

Recalling the notation of Sect. 5, let us write \(Z(k) = {\varvec{Z}}_2^{\prime }(k) - {\varvec{Z}}_1^{\prime }(k).\) Note that Z is not a particularly nice process: it is neither Markov nor translation invariant. As long as both \({\varvec{Z}}_1^{\prime }(k)\) and \({\varvec{Z}}_2^{\prime }(k)\) are \(\varepsilon L\)-close to \(xL,\) the process Z is well approximated by a Brownian motion. In particular, if \(\Vert Z\Vert = M,\) then the probability that \( \Vert Z\Vert \) reaches M / 2 before reaching 2M is close to 1/2 and thus \(\Vert Z\Vert \) performs an approximate SSRW on the circles

$$\begin{aligned} \mathcal C_m = \left\{ z \in \mathbb Z^2{\text {:}}\,| \Vert z \Vert - 2^m | <2\right\} . \end{aligned}$$

We will estimate the accuracy of this SSRW approximation.

Assuming that \( Z(s_0) \in \mathcal C_m,\) denote by \(s_1\) the smallest \(s > s_0\) such that \(Z(s) \in \mathcal C_{m-1}\) or \(Z(s) \in \mathcal C_{m+1}.\) We also write

$$\begin{aligned}\log = \log _2, \quad M=2^m, \quad \mathcal E_{L,M} = \left| \mathbb P \left( Z\left( s_1\right) \in \mathcal C_{m-1}\right) - \frac{1}{2} \right| . \end{aligned}$$

Note that \(s_1\) is a stopping time with respect to the filtration generated by \( {\varvec{Z}}_2^{\prime }(s),\, {\varvec{Z}}_1^{\prime }(s).\)

We will need a series of lemmas.

Lemma 4

If \(\Vert Z(k)\Vert \ge 2, \) then

$$\begin{aligned} \mathbb P \left( Z(k+1) - Z(k) = e \right) = \frac{1}{4} + O\left( \frac{ \Vert Z(k) \Vert }{L^2} \right) , \end{aligned}$$

for \(e=(1,\,0),\, (-1,\,0),\,(0,\,1),\,(0,\,-1).\) Here, the constants involved in O only depend on the \(\mathcal C^2\) norm of R.

Proof

Using that R is a \(\mathcal C^2\) function on a compact domain containing \(\mathcal D,\) the lemma follows from the definition. \(\square \)

Lemma 4 enables us to couple Z(k) with a planar SSRW W(k) such that

$$\begin{aligned} \mathbb P (Z(k) - Z(k-1) = W(k) - W(k-1) | Z(k) ) \ge 1-\frac{C \Vert Z(k)\Vert }{L^2}. \end{aligned}$$

We are going to apply such a coupling several times in the forthcoming lemmas.

Lemma 5

There are constants \(\varepsilon _0 >0,\, C_0< \infty \) and \(\theta <1\) such that

$$\begin{aligned} \mathbb P \left( s_1 > n\right) < C_0 \theta ^{n/ M^2}, \end{aligned}$$

assuming \(\Vert Z(s_0)\Vert = M < \varepsilon _0 L.\)

Proof

To verify Lemma 5, we couple \(Z(k),\,k \in [s_0,\,s_0 + M^2],\) to a SSRW \(W(k),\, k \in [s_0,\, s_0 + M^2],\) with \(W(s_0) = Z(s_0).\) By Lemma 4, we can guarantee that \(\Vert Z(k) - W(k) \Vert < C M^3 / L^2\) for all \(k < M^2\) with some probability bounded away from zero. Here, C only depends on the \(\mathcal C^2\) norm of R. Thus by choosing \(\varepsilon _0\) small, we can guarantee \(C M^3 / L^2 < M/10.\) Since W is close to a Brownian motion, it reaches \(\mathcal C_{m-2}\) or \(\mathcal C_{m+2}\) before time \(M^2\) with some positive probability. Consequently, there is some p independent of L such that \(\mathbb P (s_1 < M^2) > p.\) Applying this argument in an inductive fashion gives Lemma 5. \(\square \)

Let us introduce the notation

$$\begin{aligned} \underline{\tau }_{\gamma } = \underline{\tau }_{\gamma } (L)= \min \left\{ k{\text {:}}\,\Vert Z(k) \Vert > L^{\gamma }\right\} . \end{aligned}$$
(16)

Now we claim the following.

Lemma 6

For every \(\gamma <1\) with \(1-\gamma \) small, there exists some \(\xi >0\) such that for L large enough, the following estimates hold.

  1. (a)

    Tiny gap: if \(m < \frac{3}{5} \log L,\) then

    $$\begin{aligned} \mathbb P \left( \underline{\tau }_{3/5} > L^{13/10}\right) = O\left( L^{-1/10}\right) . \end{aligned}$$
  2. (b)

    Small gap:

    $$\begin{aligned} \text {if}\,\frac{3}{10} \log L \le m < \gamma \log L,\quad \text {then}\quad \mathcal E_{L,M} = O\left( L^{-\xi }\right) . \end{aligned}$$
  3. (c)

    Moderate gap:

    $$\begin{aligned} \text {if}\,\gamma \log L< m < \log L - \log \log L,\quad \text {then}\quad \mathcal E_{L,M} = O\left( \log ^{-3/2} L\right) . \end{aligned}$$
  4. (d)

    Large gap: there is some \(\varepsilon >0\) such that

    $$\begin{aligned} \text {if}\,\log L - \log \log L< m< \log L + \log \varepsilon ,\quad \text {then}\quad \mathcal E_{L,M} < 1/100. \end{aligned}$$

As the proof of Lemma 6 is somewhat longer than those of the other lemmas, we postpone it to the Appendix. Next, we formulate our key lemma:

Lemma 7

For any small \(\delta >0,\) there exists \(\gamma <1\) such that for L large enough,

$$\begin{aligned} \mathbb P \left( \not \exists k \in \left( \underline{\tau }_{\gamma },\, tL^2\right) {\text {:}}\, \Vert Z(k) \Vert \le 2\right) > 1-\delta . \end{aligned}$$

Proof

In order to derive Lemma 7 from Lemma 6, recall the connection of random walks and electrical networks from Sect. 1.1. Using the notation from there, we choose \(A=\frac{3}{10} \log L,\,B = \log L + \log \varepsilon \) and \(I = \gamma \log L,\, R_{I+1/2} =1.\) If \(R_{i+1/2}\) is defined for some \(i \in [I, \,\log L - \log \log L],\) then let us define \(R_{i+3/2} = R_{i+1/2}(1+ K \log ^{-3/2} L ).\) If \(R_{i+1/2}\) is defined for some \(i \in [\log L - \log \log L,\, B-1],\) then let us define \(R_{i+3/2} = \frac{11}{10} R_{i+1/2}.\) Similarly, if \(R_{i+1/2}\) is defined for some \(i \in [A,\,I],\) then we define \(R_{i-1/2} = R_{i+1/2}(1- K L^{-\xi } ).\) Now by Lemma 6(b–d)

$$\begin{aligned} \mathbb P \left( \min \left\{ k{\text {:}}\, \left\| Z \left( k+s_0\right) \right\|< L^{3/10}\right\} < \min \left\{ k{\text {:}}\, \left\| Z\left( k+s_0\right) \right\| > \varepsilon L\right\} | \left\| Z\left( s_0\right) \right\| = L^{\gamma }\right) , \end{aligned}$$

is bounded from above by (2). An elementary computation shows that (2) can be made arbitrarily small by choosing \(\gamma \) close to 1. Thus after \(\underline{\tau }_{\gamma },\, \Vert Z(k)\Vert \) reaches \(\varepsilon L\) before reaching 2 with probability close to 1. Finally, Lemmas 1 and 2 yield that for fixed \(\varepsilon \) the two particles do not meet after separating by a distance \(\varepsilon L\) and before \(tL^2\) with probability close to 1. This proves Lemma 7. \(\square \)
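
Though no part of the argument, the resistance bookkeeping above is easy to reproduce numerically. The following Python sketch (with placeholder values for the unspecified constants \(K,\,\varepsilon ,\,\xi \)) assembles the resistances \(R_{i+1/2}\) on the dyadic scale of Sect. 6.1 exactly as defined, and evaluates the network formula for the probability of returning to level A before escaping to level B; the value visibly decreases as \(\gamma \) approaches 1.

```python
import math

def return_probability(L, gamma, xi, K=1.0, eps=0.5):
    """Evaluates the electrical-network bound from the proof of Lemma 7.
    Integer key i stands for the edge between dyadic levels i and i+1
    (log = log_2), carrying resistance R_{i+1/2}; started from
    I = gamma*log L, the walk log||Z|| reaches level A before level B
    with probability sum(R over [I,B)) / sum(R over [A,B)).
    K, eps, xi are placeholders for the constants of the proof."""
    log2L = math.log2(L)
    A = int(0.3 * log2L)
    I = int(gamma * log2L)
    B = int(log2L + math.log2(eps))
    R = {I: 1.0}
    for i in range(I + 1, B):            # moderate and large gaps: R grows
        if i < log2L - math.log2(log2L):
            R[i] = R[i - 1] * (1 + K * math.log(L) ** -1.5)
        else:
            R[i] = R[i - 1] * 1.1
    for i in range(I - 1, A - 1, -1):    # small gaps: R shrinks
        R[i] = R[i + 1] * (1 - K * L ** -xi)
    return sum(R[i] for i in range(I, B)) / sum(R.values())

for g in (0.8, 0.85, 0.9):               # the bound decreases in gamma
    print(g, return_probability(L=10 ** 12, gamma=g, xi=0.1))
```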

As a consequence of Lemma 7, we will be able to replace \(\tilde{\varvec{Z}}_1(s)\) and \( \tilde{\varvec{Z}}_2(s)\) with two independent copies after time

$$\begin{aligned} \underline{ \tilde{\tau }}_{\gamma } = \min \left\{ s{\text {:}}\,\left\| \tilde{\varvec{Z}}_1(s) - \tilde{\varvec{Z}}_2(s)\right\| > L^{\gamma }\right\} . \end{aligned}$$

Then Proposition 6 will easily follow once we establish that (A) \(\underline{\tilde{\tau }}_{\gamma } / L^2\) is negligible and (B) \((\tilde{\varvec{Z}}_i( \underline{\tilde{\tau }}_{\gamma }) - xL)/L\) is negligible. This is what we do in the next two lemmas.

Lemma 8

For any fixed \(\gamma <1\) and \(\delta >0,\) we have

$$\begin{aligned} \mathbb P\left( \underline{\tau }_{\gamma } < L^{1+\gamma }\right) > 1-\delta , \end{aligned}$$

for L large enough.

Proof

By Lemma 6(a), it is enough to prove \(\mathbb P(\underline{\tau }_{\gamma } - \underline{\tau }_{3/5} > L^{1+\gamma }) < \delta .\) We write \(s_0 = \underline{\tau }_{3/5} \) and if \(Z(s_{i}) \in \mathcal C_{m}\) with \(m > \frac{3}{10} \log L,\) then \(s_{i+1}\) is the first time s when either \(Z(s) \in \mathcal C_{m-1}\) or \(Z(s) \in \mathcal C_{m+1}.\) If \(Z(s_{i}) \in \mathcal C_{\lfloor \frac{3}{10} \log L \rfloor },\) then \(s_{i+1}\) is the first time s when \(Z(s) \in \mathcal C_{\lfloor \frac{3}{10} \log L \rfloor +1}.\) By Lemma 6(b), \(b_i := \log \Vert Z(s_i)\Vert \) can be approximated by a one dimensional SSRW (reflected at \(\frac{3}{10} \log L,\) absorbed at \(\gamma \log L\)) with an error of \(O(L^{-\xi })\) at each step. In particular, if \(\varvec{t}\) is the smallest i when \(b_i \ge \gamma \log L,\) then \(\mathbb P (\varvec{t} > \log ^3 L) < \delta /10\) and thus \(b_i,\,i \le \varvec{t},\) can be coupled to a SSRW with an error \({<} \delta /5.\) Now if \(\zeta = (1-\gamma )/2\) and \(\ell _m = \# \{ i < \varvec{t} {\text {:}}\, b_i = m\},\) then

$$\begin{aligned} \mathbb P \left( \exists m{\text {:}}\, \ell _m> L^{\zeta }\right) \le \log L \max _m \mathbb P \left( \ell _m > L^{\zeta }\right) < \delta /10, \end{aligned}$$

by the gambler’s ruin estimate \(\mathbb P(\ell _m> n+1 | \ell _m >n) < 1-\log ^{-1} L,\) which yields \(\mathbb P(\ell _m > L^{\zeta }) \le (1-\log ^{-1} L)^{L^{\zeta }} \le \exp (-L^{\zeta }/\log L).\) If \(\ell _m < L^{\zeta }\) for all m, then

$$\begin{aligned} \underline{\tau }_{\gamma } - \underline{\tau }_{3/5} < \sum _{m = \frac{3}{10} \log L}^{\gamma \log L} \sum _{i=1}^{L^{\zeta }} \mathcal T_{m,i}, \end{aligned}$$

where \(\mathcal T_{m,1}\) is the random time \(s_1 - s_0\) if \(Z(s_0) \in \mathcal C_m\) and, for fixed \(m,\) the \(\mathcal T_{m,i}\)’s are iid. Consequently, we have

$$\begin{aligned} \mathbb P\left( \underline{\tau }_{\gamma }> L^{1+\gamma }\right)< \frac{3\delta }{10} + (\log L ){L^{\zeta }} \max _m \mathbb P \left( \mathcal T_{m,1}> L^{1+\gamma - \zeta } \log ^{-1} L \right) < \delta , \end{aligned}$$

by Lemmas 5 and 6(a). \(\square \)

Now, with the notation

$$\begin{aligned} \tilde{\overline{\tau }}_{\gamma } = \tilde{\overline{\tau }}_{\gamma }( L)= \min \left\{ k{\text {:}}\, \left\| \tilde{\varvec{Z}}_1(k) - xL\right\|> L^{\gamma }\, \text {or}\, \left\| \tilde{\varvec{Z}}_2(k) - xL\right\| > L^{\gamma }\right\} , \end{aligned}$$

we have

Lemma 9

For any fixed \(\gamma <1\) and \(\delta >0,\) we have

$$\begin{aligned} \mathbb P\left( L^{1+ \gamma } < \tilde{ \overline{\tau }}_{\frac{\gamma + 3}{4}}\right) > 1- \delta , \end{aligned}$$

for L large enough.

Proof

By Lemma 1, it is enough to consider the case of one particle. A simplified version of Lemma 2 yields that

$$\begin{aligned} \max _{s< L^{1+\gamma } } \left\| \tilde{\varvec{Z}}_1 (s) -xL \right\| < K L^{\frac{1+\gamma }{2}}, \end{aligned}$$

with probability \(1 - \delta \) for some K and L large. \(\square \)
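
The diffusive bound invoked here is easily illustrated: the Python sketch below (parameters arbitrary) records a high quantile of the maximal \(\ell ^1\) deviation of an n-step planar SSRW, which grows like \(\sqrt{n},\) matching the exponent \(\frac{1+\gamma }{2}\) above for \(n = L^{1+\gamma }.\)

```python
import random

STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def max_deviation_quantile(n, trials=200, q=0.95):
    """95th percentile of the maximal l^1 deviation of an n-step planar
    SSRW; by the invariance principle it grows like const * sqrt(n)."""
    records = []
    for _ in range(trials):
        x = y = m = 0
        for _ in range(n):
            dx, dy = random.choice(STEPS)
            x, y = x + dx, y + dy
            m = max(m, abs(x) + abs(y))
        records.append(m)
    records.sort()
    return records[int(q * trials)]

print(max_deviation_quantile(10 ** 4))  # a small multiple of sqrt(10^4) = 100
```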

The variant of Proposition 6 (explained in the beginning of Sect. 6) easily follows from Lemmas 7–9. Since we can neglect an event of small probability, we can assume that the events of Lemmas 7–9 all hold. Note that by definition, \(\tilde{\varvec{Z}}_1(s)\) and \( \tilde{\varvec{Z}}_2(s)\) move independently if their distance is bigger than 2. Thus for \(s > \underline{ \tilde{\tau }}_{\gamma },\) we can replace \((\tilde{\varvec{Z}}_1( \underline{ \tilde{\tau }}_{\gamma } +k))_{k=1,\ldots ,TL^2}\) and \(( \tilde{\varvec{Z}}_2(\underline{\tilde{\tau }}_{\gamma } + k))_{k=1,\ldots ,TL^2}\) with independent random walks, both of them converging to \(\mathcal Z (s)\) under the proper scaling. Finally, by Lemmas 8 and 9, \(\underline{\tilde{\tau }}_{\gamma }/L^2 < \delta \) and \(\Vert \tilde{\varvec{Z}}_i( \underline{\tilde{\tau }}_{\gamma }) - xL\Vert /L < \delta .\) Proposition 6 follows in the case of Theorem 1, \(N=2,\,d=2.\)

6.2 Completing the Proof of Proposition 6

The case of general N follows from Lemma 1 and from the case \(N=2.\) Indeed, Lemmas 1, 7 and 8 imply that for some \(\gamma < 1,\)

$$\begin{aligned} \mathbb P \left( \exists s \in \left[ L^{1+\gamma },\, tL^2\right] ,\, i,\,j \in \{ 1,\,2,\ldots ,N\}{\text {:}}\, \left\| {\tilde{\varvec{Z}}}_i(s) - {\tilde{\varvec{Z}}}_j(s)\right\| \le 2\right) < \delta . \end{aligned}$$

Furthermore, we have the analogue of Lemma 9 with \(\tilde{\overline{\tau }}_{\gamma } \) replaced by

$$\begin{aligned} \min \left\{ k{\text {:}}\, \exists i \le N{\text {:}}\, \left\| \tilde{\varvec{Z}}_i(k) - xL\right\| > L^{\gamma } \right\} . \end{aligned}$$

Hence the case of general N follows the same way as before.

The case of dimension \(d > 2\) is simpler than \(d=2\) as the particles only meet finitely many times.

Lemma 10

In case of Theorem 1, \(d>2,\, N=2,\) there is some positive p such that \(\Vert {\tilde{\varvec{Z}}}_1(s) - {\tilde{\varvec{Z}}}_2(s)\Vert \ge 2\) for all \(s \in [2,\,tL^2]\) with probability at least p.

Proof

Consider the process \(Z(k) = {\varvec{Z}}^{\prime }_2(k) - {\varvec{Z}}^{\prime }_1(k)\) as in Sect. 6.1. Let us fix some small \(\varepsilon >0.\) We will show that the probability that \(\Vert Z \Vert \) reaches \(\varepsilon L\) before reaching 1 is bounded away from zero. From this the lemma will follow by the same argument as in \(d=2\) (cf. the end of the proof of Lemma 7).

Similarly to Lemma 4, we have

$$\begin{aligned} \frac{1}{2d} - \frac{c_0 \varepsilon }{ L}< \mathbb P ( Z(k+1) - Z(k) = e ) < \frac{1}{2d} + \frac{c_0 \varepsilon }{ L}, \end{aligned}$$
(17)

assuming \(\Vert Z(k)\Vert \le \varepsilon L\) for all unit vectors \(e \in \mathbb Z^d\) and L large enough. Now we define a random walk B(k) on \(\mathbb Z^d\) with weights. Specifically \(B(3) = Z(3)\) and we choose the weights

$$\begin{aligned} w_{(u,v)} = \left( 1-\frac{ c_1 \varepsilon }{L} \right) ^l,\quad \text {where}\quad |u|_1 = l-1,\quad |v|_1 = l\quad \text {and}\quad c_1 \gg c_0\,\text {is a fixed constant}. \end{aligned}$$

Clearly \(\Vert Z(3) \Vert \ge 2\) holds with some positive probability. Let us write \(\varvec{t}_{B,l} = \min \{ k > 3{\text {:}}\,| B(k)|_1 = l \},\) where \(\min \emptyset = \infty ,\) and similarly \(\varvec{t}_{ Z,l} = \min \{ k > 3{\text {:}}\, \Vert Z(k)\Vert = l \}.\) Now we claim the following.

Lemma 11

Assuming \(c_1 = c_1(c_0)\) is large enough, there exists a coupling between the processes B and Z such that \(|B_i(k)| \le |Z_i(k)|\) holds for all \(k \in [3,\, \varvec{t}_{B,1} \wedge \varvec{t}_{Z,\varepsilon L})\) and \(i \le d\) almost surely.

By Lemma 11, it suffices to prove that

$$\begin{aligned} \mathbb P\left( \varvec{t}_{B, \varepsilon L} < \varvec{t}_{B,1}\right) \,\text { is bounded away from zero}. \end{aligned}$$
(18)

This follows from a simple application of the connection between random walks and electrical networks. Note that the weights \(w_{(u,v)}\) are bounded away from zero in the \(\varepsilon L\) neighborhood of the origin. In such cases, (18) follows from a standard argument, see, e.g., the proof of Theorem 19.30 in [16]. In order to complete the proof of Lemma 10, it only remains to prove Lemma 11. \(\square \)
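
As an aside, the weighted walk B is straightforward to simulate. The Python sketch below (with illustrative values of \(d,\,c_1,\,\varepsilon ,\) not those of the proof) draws one step of B from the weights \(w_{(u,v)}\) defined above.

```python
import random

def b_step(u, L, d=3, c1=50.0, eps=0.05):
    """One step of the weighted walk B from the proof of Lemma 10.  The
    edge between u and v carries weight (1 - c1*eps/L)**l with
    l = max(|u|_1, |v|_1), so B is biased slightly toward the origin.
    The values of d, c1, eps are illustrative, not those of the proof."""
    decay = 1 - c1 * eps / L
    u1 = sum(abs(c) for c in u)
    nbrs, wts = [], []
    for i in range(d):
        for s in (1, -1):
            v = list(u)
            v[i] += s
            v1 = u1 + (1 if abs(v[i]) > abs(u[i]) else -1)
            nbrs.append(tuple(v))
            wts.append(decay ** max(u1, v1))
    return random.choices(nbrs, weights=wts)[0]

print(b_step((1, 0, 0), L=1000))  # one of the six lattice neighbors
```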

Proof of Lemma 11

We proceed by induction on k. Assume \(|B_i(k)| \le |Z_i(k)|\) for all \(i \le d.\)

  • Case 1 none of the coordinates of B(k) and Z(k) are zero. If \(|B_i(k+1)| = |B_i(k)|+1,\) then we define \(Z(k+1)\) by \(|Z_i(k+1)| = |Z_i(k)|+1.\) We can do so by (17) and by the definition of \(B{\text {:}}\) if \(c_1\) is large enough, then

    $$\begin{aligned} \mathbb P\left( \left| B_i(k+1)\right| = \left| B_i(k)\right| +1\right) \le \mathbb P\left( \left| Z_i(k+1)\right| = \left| Z_i(k)\right| +1\right) . \end{aligned}$$
    (19)

    Similarly, if \(|Z_i(k+1)| = |Z_i(k)|-1,\) then we define \(B(k+1)\) by \(|B_i(k+1)| = |B_i(k)|-1.\) We can do so since, similarly to (19), we have

    $$\begin{aligned} \mathbb P\left( \left| Z_i(k+1)\right| = \left| Z_i(k)\right| -1\right) \le \mathbb P\left( \left| B_i(k+1)\right| = \left| B_i(k)\right| -1\right) . \end{aligned}$$

    The coupling is arbitrary on the remaining set (sometimes we may have to move B and Z in different directions to match the probabilities, i.e., to define a proper coupling). This proves the inductive step for the case when none of the coordinates of B(k) and Z(k) are zero.

  • Case 2 either none of the coordinates of B(k) are zero, or \(Z_i(k) = 0\) holds for every i with \(B_i(k) = 0.\) The same argument works as in Case 1. Note that the coupling of Case 1 will not work if \(B_i(k) =0\) and \(Z_i(k) \ne 0\) as \(\mathbb P( |B_i(k+1)| = |B_i(k)|+1 ) \approx d^{-1}\) while \(\mathbb P( |Z_i(k+1)| = |Z_i(k)|+1 ) \approx (2d)^{-1}.\) Let \(\mathcal I\) denote the set of indices i with \(B_i(k) = 0,\, Z_i(k) \ne 0\) and let \(\mathcal I^{\prime } \subset \mathcal I\) be the set of indices i with \(B_i(k) = 0\) and \(|Z_i(k)|=1.\)

  • Case 3 \(|Z_i(k)| \ge 2\) for all \(i \in \mathcal I.\) If \(|B_i(k+1)| = |B_i(k)|+1\) with some \(i \in \mathcal I,\) then we can define \(Z(k+1)\) by changing the ith coordinate (either increasing or decreasing), since we have

    $$\begin{aligned} \mathbb P\left( B(k+1) = B(k)+e_i\,\text {or}\, B(k)-e_i\right) \le \mathbb P\left( Z(k+1) = Z(k)+e_i\, \text {or}\, Z(k)-e_i\right) . \end{aligned}$$
    (20)

    The proof of (20) is similar to that of (19): the first line of (20) is

    $$\begin{aligned} \frac{w_{(B(k),B(k)+e_i)} + w_{(B(k),B(k)-e_i)}}{\sum _{v{\text {:}}\, \Vert v-B(k)\Vert =1} w_{(B(k),v)}}. \end{aligned}$$

    Here the denominator can be bounded from below by

    $$\begin{aligned} {\left( 1- \frac{c_1 \varepsilon }{L} \right) ^{|B(k)|_1} \left[ 1+(2d-1)\left( 1- \frac{c_1 \varepsilon }{L} \right) \right] }, \end{aligned}$$

    which corresponds to the case when only one coordinate is nonzero (note that we have excluded \(B(k)=0\) since \(k < \varvec{t}_{B,1}\)). Combining this estimate with (17) gives (20) assuming that \(c_1=c_1(c_0)\) is large enough. The other coordinates are treated the same way as in Case 1.

  • Case 4 \(|\mathcal I^{\prime }| \ne 0\) is an even integer. Consider a perfect matching of \(\mathcal I^{\prime }\) and let \((i,\,j)\) be an arbitrary pair of this matching. If \(Z_i(k+1) = 0,\) then we define \(B(k+1)\) such that \(|B_j(k+1)| = 1.\) We can do so, since \(\mathbb P (Z_i(k+1) = 0) \approx (2d)^{-1}\) and \(\mathbb P (|B_j(k+1) |= 1) \approx d^{-1}.\) Analogously, if \(Z_j(k+1) = 0,\) then we define \(B(k+1)\) such that \(|B_i(k+1)| = 1.\) Then, we do the coupling of movements in other directions as discussed in Cases 1–3. Finally, we couple the remaining set (including \(|Z_i(k+1)| = 2,\,|Z_j(k+1)| = 2\)) arbitrarily.

  • Case 5 \(|\mathcal I^{\prime }| =n\) is an odd integer. First we consider a matching of \(n-1\) elements of \(\mathcal I^{\prime },\) and do the coupling described in Case 4. Let us denote the remaining index by i. Now we claim that there is some \(j \notin \mathcal I^{\prime }\) such that \(|Z_j(k)| -| B_j(k)| \ge 1.\) Indeed, if there was no such j, then \(\sum _{m=1}^d |Z_m(k)| -| B_m(k)| = n\) would be an odd number, contradicting the fact that \(Z(3) = B(3)\) and both processes move to nearest neighbors at each step (cf. the sketch after this list). Now if \(|Z_j(k+1)| = |Z_j(k)|-1,\) then we define \(B(k+1)\) such that \(|B_i(k+1)| = 1.\) If \(Z_i(k+1) = 0,\) we define \(B(k+1)\) by changing the jth coordinate (either decreasing or increasing). These can be done as before. Then, we consider the cases corresponding to other coordinates as discussed in Cases 1–3. Finally, we couple the remaining set arbitrarily. \(\square \)
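
A minimal Python sketch of the parity fact used in Case 5; only the nearest-neighbor property matters, so uniform steps suffice for the check.

```python
import random

def parity_invariant(d=3, steps=1000):
    """Checks the parity fact of Case 5: if B and Z start at the same
    point and each moves to a nearest neighbor at every step (their
    precise laws are irrelevant), then sum_m |Z_m| - |B_m| stays even."""
    B = [0] * d
    Z = [0] * d
    for _ in range(steps):
        for W in (B, Z):
            i = random.randrange(d)
            W[i] += random.choice((1, -1))
        assert sum(abs(z) - abs(b) for z, b in zip(Z, B)) % 2 == 0
    return True

print(parity_invariant())  # prints True; the assertion never fails
```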

Using Lemma 10, one can easily prove a much simplified version of Lemmas 7–9 with \(L^{\gamma },\,L^{\frac{1+\gamma }{2}},\,L^{\frac{3+\gamma }{4}}\) replaced by constants \(K_1(\delta ),\,K_2(\delta ),\,K_3(\delta ).\) This implies Proposition 6 for the case \(d>2,\,N=2.\) Then the case of general N follows the same way as in \(d=2.\)

Finally, in case of Theorem 2 we have assumed that \(x_1 \ne 0.\) Then choosing \(\varepsilon < |x_1|,\) the proof of the case of Theorem 1 (with R chosen to be the constant 1 function) applies.

7 Proof of Theorem 3 and Proposition 4

As in the proof of Theorems 1 and 2, we prove the weak convergence by showing that the moments converge. The latter is shown by switching to the dual process. Recalling some notation from Sect. 5, we define \(\varvec{ Y}^{\prime }\) from \(\tilde{\varvec{Y}} (t)\) the same way as we defined \(\varvec{ Z}^{\prime }\) from \(\tilde{\varvec{Z}} (t).\) The proof consists of two parts. First we consider the case when the number of particles is \(N=2\) and prove

Proposition 7

$$\begin{aligned} \lim _{L \rightarrow \infty }\mathbb E \left[ T \left( \frac{{\varvec{Y}}_1^{\prime (L)}(\infty )}{L} \right) T \left( \frac{{\varvec{Y}}_2^{\prime (L)}(\infty )}{L} \right) \right] / \left[ u^{(L)}(x) \right] ^2 =1. \end{aligned}$$

Then we can derive an extension of Proposition 7 to arbitrary N. The idea of the proof of Proposition 7 is borrowed from [21] (the main difference is in Lemma 14). The extension to arbitrary N is very similar to the argument in [15, 17].

Let us write \(P^{(L)}(A_1) = \mathbb P ({\varvec{Y}}_1^{\prime (L)} (\infty )= A_1)\) and

$$\begin{aligned} P^{(L)}\left( A_1,\, A_2\right) = \mathbb P \left( {\varvec{Y}}_1^{\prime (L)} (\infty ) = A_1, \quad {\varvec{Y}}_2^{\prime (L)} (\infty )= A_2\right) , \end{aligned}$$

for \(A_1,\,A_2 \in \{ 0,\,L\}.\) The asymptotic hitting probabilities of one particle are given by

Lemma 12

\(\lim _{L \rightarrow \infty } \frac{P^{(L)}(L) }{\mathcal A^{(L)} (x)} = 1.\)

Proof

Although this lemma follows from the connection between random walks and electrical networks, we give a direct proof as its extensions will be needed later. Note that by the definition of \(\omega _0,\, \omega _L\) and \(\psi (m),\) we have for any \(1 \le m \le L-1\)

$$\begin{aligned} \frac{\omega _{m-1}}{ \omega _{m-1}+ \omega _m} r_{m-1/2} \psi (m) = \frac{\omega _{m+1}}{ \omega _{m}+ \omega _{m+1}} r_{m+1/2} \psi (m+1). \end{aligned}$$
(21)

Let us write \(\underline{\Phi }(m) = \sum _{i=1}^m \psi (i)\) for \(0 \le m \le L.\) Then (21) means that \(\underline{\Phi }({\varvec{Y}}^{\prime }_1(k))\) is a bounded martingale. After the first hitting of either 0 or L,  this martingale clearly stays constant. Then by the martingale convergence theorem,

$$\begin{aligned} P^{(L)}(L) \underline{\Phi }(L) = \underline{\Phi }\left( {\varvec{Y}}^{\prime }_1(0)\right) . \end{aligned}$$

Since \(\mathcal A^{(L)}(x) \sim \underline{\Phi }({\varvec{Y}}^{\prime }_1(0)) / \underline{\Phi }(L),\) Lemma 12 follows. \(\square \)
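
The mechanism of Lemma 12 can be illustrated on a toy chain. In the Python sketch below, psi is an arbitrary positive vector standing in for \(\psi (i),\) and the up-probability \(q_m = \psi (m)/(\psi (m)+\psi (m+1))\) makes \(\underline{\Phi }\) a martingale, mimicking (21); the simulated probability of hitting L before 0 then matches \(\underline{\Phi }(y_0)/\underline{\Phi }(L).\)

```python
import random

def hit_top_prob(psi, y0, trials=20000):
    """Monte Carlo illustration of Lemma 12 on a toy birth-death chain
    on {0,...,Lm}: the up-probability q_m = psi[m]/(psi[m]+psi[m+1])
    makes Phi(m) = psi[1]+...+psi[m] a martingale (the analogue of (21)),
    so the chain hits Lm before 0 with probability Phi(y0)/Phi(Lm)."""
    Lm = len(psi) - 1                      # psi[0] is unused padding
    hits = 0
    for _ in range(trials):
        y = y0
        while 0 < y < Lm:
            q = psi[y] / (psi[y] + psi[y + 1])
            y += 1 if random.random() < q else -1
        hits += (y == Lm)
    return hits / trials

psi = [0.0] + [1.0 + 0.5 * (i % 3) for i in range(1, 21)]  # toy psi(i) > 0
Phi = [sum(psi[1:m + 1]) for m in range(len(psi))]
print(hit_top_prob(psi, 10), Phi[10] / Phi[-1])  # the two nearly agree
```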

Now with the notation \(\overline{\Phi }(m) = \sum _{i=m+1}^{L} \psi (i)\) for \(0 \le m \le L,\) our aim is to construct submartingales using

$$\begin{aligned} S_k := \underline{\Phi }\left( {\varvec{Y}}^{\prime }_1(k)\right) \underline{\Phi }\left( {\varvec{Y}}^{\prime }_2(k)\right) + \overline{\Phi }\left( {\varvec{Y}}^{\prime }_1(k)\right) \overline{\Phi }\left( {\varvec{Y}}^{\prime }_2(k)\right) , \end{aligned}$$

and

$$\begin{aligned} T_k = \sum _{i=\min \{ {\varvec{Y}}^{\prime }_1(k),\, {\varvec{Y}}^{\prime }_2(k)\} +1}^{\max \{ {\varvec{Y}}^{\prime }_1(k),\, {\varvec{Y}}^{\prime }_2(k)\} } \psi (i). \end{aligned}$$

Lemma 13

There exists some constant C only depending on the upper and lower bound of r and \(\omega \) such that \(S_k + C T_k\) is a submartingale and \(S_k - C T_k\) is a supermartingale.

Proof

Let us compute the conditional expectations with respect to \((\mathcal F_k)_k,\) the filtration generated by the process \({\varvec{Y}}^{\prime }.\) First, observe that \(\mathbb E (S_{k+1} | \mathcal F_k) = S_k\) if \(|{\varvec{Y}}^{\prime }_1(k)- {\varvec{Y}}^{\prime }_2(k)| \ge 2.\) Next, by definition \(\mathbb E (S_{k+1} | {\varvec{Y}}^{\prime }_1(k)={\varvec{Y}}^{\prime }_2(k)=i)\) is equal to

where we used the shorthand \(d Beta(\alpha ,\, \beta ,\, p) = \frac{1}{B(\alpha ,\, \beta )}p^{\alpha -1} (1-p)^{\beta -1} dp.\) Computing the integrals and using (21) gives

Similarly, \(\mathbb E (S_{k+1} - S_k| {\varvec{Y}}^{\prime }_1(k)=i,\, {\varvec{Y}}^{\prime }_2(k)=i+1) \) is by definition equal to

A similar computation to the previous one gives

$$\begin{aligned}&\mathbb E \left( S_{k+1} - S_k | {\varvec{Y}}^{\prime }_1(k)=i,\,{\varvec{Y}}^{\prime }_2(k)=i+1\right) \\= & {} {-} [\psi (i+1)]^2 \frac{2 r_{i+1/2}}{r_{i-1/2}+r_{i+1/2}+r_{i+3/2}} \frac{\omega _{i}\omega _{i+1}}{(\omega _{i}+\omega _{i+1})(\omega _{i}+\omega _{i+1}+2)}. \end{aligned}$$

Just like in the case of \(S_k,\) we have \(\mathbb E (T_{k+1} | \mathcal F_k) = T_k\) if \(|{\varvec{Y}}^{\prime }_1(k)- {\varvec{Y}}^{\prime }_2(k)| \ge 2.\) Furthermore,

$$\begin{aligned} \mathbb E \left( T_{k+1} - T_k| {\varvec{Y}}^{\prime }_1(k)={\varvec{Y}}^{\prime }_2(k)=i\right) = [\psi (i+1)] \frac{ r_{i+1/2}}{r_{i-1/2}+r_{i+1/2}} \frac{\omega _{i}\omega _{i+1}}{(\omega _{i}+\omega _{i+1})}, \end{aligned}$$

and

$$\begin{aligned} \mathbb E \left( T_{k+1} - T_k | {\varvec{Y}}^{\prime }_1(k)=i,\, {\varvec{Y}}^{\prime }_2(k)=i+1\right) = [\psi (i+1)] \frac{1}{3} \frac{\omega _{i}\omega _{i+1}}{(\omega _{i}+\omega _{i+1} +2)}. \end{aligned}$$

We conclude that there is some positive constant c such that

  • \(0<\mathbb E (S_{k+1} - S_k | {\varvec{Y}}^{\prime }_1(k) ={\varvec{Y}}^{\prime }_2(k)) < \frac{1}{c},\)

  • \(-1/c< \mathbb E (S_{k+1} - S_k | |{\varvec{Y}}^{\prime }_1(k) - {\varvec{Y}}^{\prime }_2(k)| = 1) <0,\)

  • \( c< \mathbb E (T_{k+1} - T_k | {\varvec{Y}}^{\prime }_1(k) ={\varvec{Y}}^{\prime }_2(k)),\)

  • \(c< \mathbb E (T_{k+1} - T_k | |{\varvec{Y}}^{\prime }_1(k) - {\varvec{Y}}^{\prime }_2(k)| = 1).\)

Indeed, taking \(C = c^{-2},\) the above estimates give \(\mathbb E (S_{k+1} + CT_{k+1} - S_k - CT_k | \mathcal F_k) \ge -\frac{1}{c} + Cc = 0\) and \(\mathbb E (S_{k+1} - CT_{k+1} - S_k + CT_k | \mathcal F_k) \le \frac{1}{c} - Cc = 0\) whenever \(|{\varvec{Y}}^{\prime }_1(k) - {\varvec{Y}}^{\prime }_2(k)| \le 1,\) while both increments have zero conditional mean otherwise. The lemma follows. \(\square \)

Lemma 14

\(\lim _{L \rightarrow \infty } \frac{P^{(L)}(L,L) + P^{(L)}(0,0)}{[\mathcal A^{(L)} (x)]^2 + [1-\mathcal A^{(L)} (x)]^2} = 1.\)

Proof

Since \(M_k:=S_k+CT_k\) is a bounded submartingale, we can apply the martingale convergence theorem to deduce

$$\begin{aligned} M_0 \le \mathbb E\left( M_{\infty }\right) = \left( P^{(L)}(L,\,L) + P^{(L)}(0,\,0)\right) \underline{\Phi }^2 (L) + \left( P^{(L)}(0,\,L) + P^{(L)}(L,\,0)\right) C\underline{\Phi }(L).\end{aligned}$$

Since \(\underline{\Phi }(L)/L\) is bounded away from zero and infinity, and \(M_0 = S_0 + CT_0 = \left( [\mathcal A^{(L)} (x)]^2 + [1-\mathcal A^{(L)} (x)]^2\right) \underline{\Phi }^2(L)(1+o(1))\) as both dual particles start within o(L) of xL, the lower bound follows. The upper bound is derived similarly from the fact that \(S_k - CT_k\) is a supermartingale. \(\square \)

Now we are ready to prove Proposition 7.

Proof of Proposition 7

By Lemma 1,

$$\begin{aligned} P^{(L)}(L,\,L) = \frac{1}{2} \left[ P^{(L)}(L) + P^{(L)}(L,\,L) + P^{(L)}(0,\,0) - P^{(L)}(0)\right] . \end{aligned}$$

Then by Lemmas 12 and 14,

$$\begin{aligned} P^{(L)}(L,\,L) \sim \left[ \mathcal A^{(L)}(x) \right] ^2. \end{aligned}$$
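
(Indeed, by Lemmas 12 and 14, the bracket in the first display of this proof is asymptotically \(\mathcal A + \mathcal A^2 + (1-\mathcal A)^2 - (1-\mathcal A) = 2\mathcal A^2\) with \(\mathcal A = \mathcal A^{(L)}(x).\))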

Similarly, \(P^{(L)}(0,\,0) \sim [ 1- \mathcal A^{(L)}(x) ]^2,\,P^{(L)}(L,\,0) \sim P^{(L)}(0,\,L) \sim [ \mathcal A^{(L)}(x)][ 1- \mathcal A^{(L)}(x)].\) Proposition 7 follows. \(\square \)

Since the extension of Proposition 7 to arbitrary N can be proved the same way as its analogues in ([15], Sect. 3) and ([17], Sect. 6.2), we omit the proof here. This completes the proof of Theorem 3.

Finally, Proposition 4(b) is elementary and Proposition 4(a) follows from the connection between random walks and electrical networks [namely, from (2) or Lemma 12] and the law of large numbers.