1 Introduction

What does the Brownian motion in \({\mathbb {R}}^d\) look like? This question has fascinated probabilists and mathematical physicists for a long time, and it continues to be an unending source of challenging problems. Not long after the existence of the Brownian motion was rigorously established by Wiener in 1923, Lévy [23] proved that a two-dimensional Brownian motion intersects itself almost surely, Kakutani [8] showed that a d-dimensional Brownian motion is almost surely a simple path when \(d \ge 5\), and Dvoretzky et al. [5] verified that a Brownian motion intersects itself in three but not in four dimensions almost surely. Much later, Taylor and Fristedt [6, 32] found that the Hausdorff dimension of the set of double points of the Brownian motion is two in two dimensions and one in three dimensions.

In this paper, we are interested in the nature of these self-intersections, more specifically, in how the loops formed by the Brownian motion are distributed in space. Consequently, from our point of view, we may focus on the case of two and three dimensions. We give an explicit representation of such loops by establishing a decomposition of the Brownian path into a simple path and an independent set of loops. In order to explain it, let us begin with a similar problem for the simple random walk.

Consider a simple random walk (SRW) on the rescaled lattice \(\frac{1}{n}{\mathbb {Z}}^d\) started at the origin and stopped upon exiting from the unit ball. Its loop erasure, the loop erased random walk (LERW), is a simple path connecting the origin with the complement of the ball, obtained from the random walk path by chronologically erasing all its loops. Remarkably, the law of these loops is very explicit: they come from a Poisson point process of discrete loops (a random walk loop soup) on \(\frac{1}{n}{\mathbb {Z}}^d\) independent of the loop erasure [17, Sect. 9]. More precisely, if we denote the loop erased random walk by \({\mathrm {LEW}}_n\) and an independent random walk loop soup in the unit ball by \({\mathrm {LS}}_n\) (the exact definitions will come later, see Sects. 2.1 and 2.2), then

$$\begin{aligned} \begin{array}{l} \hbox {the union of} \,{\mathrm {LEW}}_n \,\hbox {and the loops from}\, {\mathrm {LS}}_n \hbox {intersecting}\, {\mathrm {LEW}}_n \,\hbox {has the same law as}\\ \hbox {the trace of SRW on}\, \frac{1}{n}{\mathbb {Z}}^d \hbox {started at 0 and stopped upon exiting from the unit ball}. \end{array} \end{aligned}$$
(1.1)

As the Brownian motion is the scaling limit of a simple random walk, it is natural to start by looking for an analogue of the random walk path decomposition in the continuous setting. However, unlike the random walk, the Brownian motion has a dense set of loops, and it is not clear how to remove them in chronological order. Zhan proved in [35] the existence of a loop erasure of planar Brownian motion, but uniqueness is missing, and the three-dimensional case is for the time being out of reach. Nevertheless, we are able to obtain an analogue of (1.1) for the Brownian motion by passing suitably to the large-n limit on both sides of the decomposition (1.1). First of all, after linear interpolation, we may view all the lattice paths and loops as continuous curves and loops in \({\mathbb {R}}^d\) and, more usefully, as elements of the metric space of all compact subsets of the closed unit ball equipped with the Hausdorff metric, which we denote by \(({\mathcal {K}}_D, d_H)\). Let \({\mathrm {K}}\) be any weak subsequential limit of \({\mathrm {LEW}}_n\) in \(({\mathcal {K}}_D, d_H)\), and \({\mathrm {BS}}\) the limit of \({\mathrm {LS}}_n\), which turns out to be the Brownian loop soup of Lawler and Werner [21].

Theorem 1.1

In 2 and 3 dimensions, the union of \({\mathrm {K}}\) and all the loops from an independent \({\mathrm {BS}}\) that intersect \({\mathrm {K}}\) has the same law in \(({\mathcal {K}}_D, d_H)\) as the trace of the Brownian motion stopped on exiting from the unit ball.

A related result has been proved in [19] for the “filling” of the planar Brownian path, where the filling of a closed set A in \({\mathbb {R}}^2\) is the union of A with all the bounded connected components of \({\mathbb {R}}^2 {\setminus } A\). It is shown there that the filling of the union of \({\mathrm {K}}\) and the loops from \({\mathrm {BS}}\) intersecting \({\mathrm {K}}\) has the same law as the filling of the Brownian path. However, the filling of a random set does not characterize its law. For instance, the filling of the \({\mathrm {SLE}}_{6}\) started at 0 up to the first exit time from the unit disc has the same law as the filling of the Brownian path [19, Theorem 9.4], while the law of \({\mathrm {SLE}}_{6}\) as a random compact subset of the disc is different from that of the Brownian trace.

In two dimensions, the sequence \({\mathrm {LEW}}_n\) converges to the Schramm–Loewner evolution with parameter 2 \(({\mathrm {SLE}}_2)\) [18], a simple path [26] with Hausdorff dimension \(\frac{5}{4}\) [2]. In particular, Theorem 1.1 immediately gives a decomposition of the planar Brownian path into a simple path and loops. Unfortunately, no explicit expression for the scaling limit of the LERW is known or conjectured in three dimensions. Kozma [11] proved that the sequence \({\mathrm {LEW}}_{2^n}\) is Cauchy in \(({\mathcal {K}}_D, d_H)\), which gives the existence of the scaling limit as a random compact subset of the ball, and, topologically, this is all that has been known up to now. Our next main result shows that in three dimensions \({\mathrm {K}}\) is a simple path.

Theorem 1.2

Let \(\gamma ^s\) and \(\gamma ^e\) be the starting and ending points of a simple path \(\gamma \), and define

$$\begin{aligned} \Gamma = \left\{ \gamma ~:~\gamma \, {\textit{is a simple path with}}\, \gamma ^s = 0\, {\textit{and}} \,\gamma \cap \partial D = \{\gamma ^e\}\right\} . \end{aligned}$$

Then, almost surely, \({\mathrm {K}}\in \Gamma \).

Theorems 1.1 and 1.2 give a decomposition in \(({\mathcal {K}}_D, d_H)\) of the Brownian path into a simple path and loops also in three dimensions. For completeness, let us comment briefly on the higher dimensions. Our two main results also hold there, but the conclusions are rather trivial. In dimensions higher than 3, the scaling limit of the LERW is a d-dimensional Brownian motion [12, Theorem 7.7.6], which is itself a simple path, and the Brownian loop soup does not intersect it.

We believe that the decomposition of the Brownian path into a simple path and loops as in Theorem 1.1 is important not only because it sheds light on the nature of self-intersections of the Brownian path, but, more substantially, because uniqueness of the decomposition is expected, in which case the law of \({\mathrm {K}}\) would be uniquely characterized by the decomposition.

Conjecture 1.3

Let \({\mathrm {K}}_1\) and \({\mathrm {K}}_2\) be random elements in \(({\mathcal {K}}_D, d_H)\) such that \({\mathrm {K}}_1,{\mathrm {K}}_2\in \Gamma \) almost surely, and \({\mathrm {BS}}\) a Brownian loop soup in the unit ball independent from \({\mathrm {K}}_1\) and \({\mathrm {K}}_2\). If for each \(i\in \{1,2\}\), the union of \({\mathrm {K}}_i\) and all the loops from \({\mathrm {BS}}\) that intersect \({\mathrm {K}}_i\) has the same law as the trace of the Brownian motion stopped on exiting from the unit ball, then \({\mathrm {K}}_1\) and \({\mathrm {K}}_2\) have the same law in \(({\mathcal {K}}_D, d_H)\).

As immediate consequences of the uniqueness and Theorem 1.1, one would get a new strategy for proving the existence of the scaling limit of the loop erased random walk, as well as its rotational invariance. Needless to say, it would provide a description of the LERW scaling limit in three dimensions, which is still missing. As far as we know, the conjecture has not been proved or disproved even in two dimensions.

Any subsequential limit \({\mathrm {K}}\) of \({\mathrm {LEW}}_n\) is a simple path, and it is immediate that in three dimensions, \({\mathrm {K}}\) has a different law from that of the Brownian path. In two dimensions, the law of \({\mathrm {K}}\) is explicit, namely that of the trace of \({\mathrm {SLE}}_2\). Our final main result provides rigorous bounds on the Hausdorff dimension of \({\mathrm {K}}\) in three dimensions. Let \(\xi \) be the non-intersection exponent for three-dimensional Brownian motion [14], and \(\beta \) the growth exponent for the three-dimensional loop erased random walk [15]. Both exponents exist by [14, 27] and satisfy the bounds \(\xi \in (\frac{1}{2},1)\) and \(\beta \in (1,\frac{5}{3}]\), see [14, 15].

Theorem 1.4

In 3 dimensions, \(2-\xi \le {\mathrm {dim}_{\mathrm {H}}}({\mathrm {K}})\le \beta \) almost surely. In particular,

$$\begin{aligned} 1<{\mathrm {dim}_{\mathrm {H}}}({\mathrm {K}})\le \frac{5}{3}. \end{aligned}$$

The lower bound on \({\mathrm {dim}_{\mathrm {H}}}({\mathrm {K}})\) is an immediate application of Theorem 1.1 and a result on the Hausdorff dimension of the set of cut-points of the Brownian path. Here, a cut-point of a connected set F that contains 0 and intersects the boundary of the unit ball is any point \(x\in F\) such that 0 and the boundary of the ball are disconnected in \(F {\setminus } \{x \}\). The Hausdorff dimension of the set of cut-points of the three-dimensional Brownian path from 0 until exiting from the unit ball is precisely \(2- \xi \), cf. [14]. To see that it is a lower bound on \({\mathrm {dim}_{\mathrm {H}}}({\mathrm {K}})\), it remains to notice that in every decomposition of a Brownian path into a simple path and loops, all its cut-points are on the simple path.

We expect that \({\mathrm {dim}_{\mathrm {H}}}({\mathrm {K}}) = \beta \) almost surely. Some steps towards this equality will be made in [28]. The same identity holds in two dimensions, where both the growth exponent and the Hausdorff dimension of \({\mathrm {SLE}}_2\) are known to be \(\frac{5}{4}\), see [2, 9] and also [16, 24]. In three dimensions, the value of \(\beta \) is not known or conjectured. Numerical experiments suggest that \(\beta = 1.62 \pm 0.01\) [7, 34], but the best rigorous bounds are \(1<\beta \le \frac{5}{3}\) [15].

1.1 Some words about proofs

Theorem 1.1 is an analogue of (1.1) in the continuum. To prove it, we start with the decomposition of the random walk path (1.1) and suitably take scaling limits on both sides of the decomposition. By (1.1), the union of the loop erased random walk \({\mathrm {LEW}}_n\) and the loops from an independent random walk loop soup \({\mathrm {LS}}_n\) that intersect \({\mathrm {LEW}}_n\) is the trace of a simple random walk on the lattice \(\frac{1}{n}{\mathbb {Z}}^d\) killed on exiting from the unit ball. In particular, it converges to the trace of the Brownian motion. On the other hand, \({\mathrm {LEW}}_n\) converges weakly (along subsequences) to \({\mathrm {K}}\), and, as was shown in two dimensions by Lawler and Trujillo [20], the loop soups \({\mathrm {LS}}_n\) and \({\mathrm {BS}}\) can be coupled so that with high probability there is a one-to-one correspondence between all large loops from \({\mathrm {LS}}_n\) and those from \({\mathrm {BS}}\), and each large loop from \({\mathrm {LS}}_n\) is very close, in the Hausdorff distance, to the corresponding loop in \({\mathrm {BS}}\). Such a strong coupling of loop soups can be extended to all dimensions with little effort, see Theorem 2.2. So where is the challenge?

First, we may assume that \({\mathrm {LEW}}_n\) and \({\mathrm {K}}\) are defined on the same probability space and \(d_H({\mathrm {LEW}}_n,{\mathrm {K}})\rightarrow 0\) almost surely. Let \(\varepsilon <\delta \), and consider the event that \(d_H({\mathrm {LEW}}_n,{\mathrm {K}})<\varepsilon \) and that to each loop \(\ell _n\) from \({\mathrm {LS}}_n\) of diameter at least \(\delta \) there corresponds a unique loop \(\ell \) from the Brownian soup such that \(d_H(\ell _n,\ell )<\varepsilon \). By the strong coupling of loop soups (see Theorem 2.2), this event has high probability for all large n. The challenge is to show that the correspondence of loops in the strong coupling makes the right selection of Brownian loops. What may go wrong? If a loop \(\ell _n\in {\mathrm {LS}}_n\) intersects \({\mathrm {LEW}}_n\), then the corresponding Brownian loop \(\ell \) does not have to intersect \({\mathrm {K}}\), and vice versa. The meat of the proof is then to show that this does not happen.

To illustrate the difficulty, notice that very little is known about \({\mathrm {K}}\) in three dimensions. In particular, it is not a priori clear whether the Brownian soup really intersects \({\mathrm {K}}\) almost surely (or even with positive probability). As we know, this is not the case in dimensions 4 and higher, and the three dimensional Brownian soup does not intersect a line. As a result, not all paths in \({\mathbb {R}}^3\) are hittable by Brownian loops, so we have to show that \({\mathrm {K}}\) is hittable. Moreover, we want every Brownian loop of large diameter (bigger than \(\delta \)) that gets close enough (within distance \(\varepsilon \)) to \({\mathrm {K}}\) to intersect it locally, and we want the same to be true for large random walk loops and \({\mathrm {LEW}}_n\).

In two dimensions, analogous questions are classically resolved with the help of the Beurling projection principle, see [10], which states that a random walk starting near any simple path will intersect it with high probability. As we have just seen, such a principle cannot work in three dimensions for all paths. The main novelty of our proof is a Beurling-type estimate for the loop erased random walk, stating that most samples of the LERW are hittable with probability close to one by an independent simple random walk started anywhere near the LERW, see Theorem 3.1. This result is then easily converted into an analogous statement for random walk loops, namely, with high probability, the only large loops that are close to \({\mathrm {LEW}}_n\) are those that intersect it, see Proposition 5.2.

Similar complications arise when we try to show that \({\mathrm {K}}\) is a simple path, although now without loop soups. We need to rule out the possibility that \({\mathrm {LEW}}_n\) backtracks from far away. In his proof that \({\mathrm {K}}\) is a simple path in two dimensions [26], Schramm introduced \((\varepsilon ,\delta )\)-quasi-loops as subpaths of \({\mathrm {LEW}}_n\) ending within distance \(\varepsilon \) from their starting point but stretching to distance \(\delta \). Of course, if a quasi-loop exists for all large n, it collapses, in the large-n limit, into a proper loop in \({\mathrm {K}}\). Thus, to show that \({\mathrm {K}}\) is a simple path, we need to rule out the existence of quasi-loops uniformly in n, namely, to show that for all \(\delta >0\),

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0}{\mathbb {P}}\left[ {\mathrm {LEW}}_n \,\hbox {contains an}\, (\varepsilon ,\delta )\hbox {-quasi-loop}\right] = 0,\quad \text {uniformly in}\, n. \end{aligned}$$

Schramm proved this in two dimensions using the Beurling projection principle [26, Lemma 3.4]. As remarked before, the principle no longer works in three dimensions, but our Beurling-type estimate is strong enough to yield the desired conclusion, see Theorem 6.1.

We should mention that Kozma [11] proved that with high probability (as \(n\rightarrow \infty \)), \({\mathrm {LEW}}_n\) does not contain \((n^{-\gamma },\delta )\)-quasi-loops, see [11, Theorem 4]. This was enough to establish the convergence of the \({\mathrm {LEW}}_n\)'s, but more is needed to show that \({\mathrm {K}}\) is a simple path. Unfortunately, Kozma's proof relies strongly on the fact that the choice of \(\varepsilon \) is n-dependent, and we need to develop a new method to get the uniform estimate.

1.2 Structure of the paper

The main definitions are given in Sect. 2: the loop erased random walk and its scaling limit in Sect. 2.1, the random walk loop soup in Sect. 2.2, and the Brownian loop soup in Sect. 2.3. In each subsection, we also discuss some properties and a few historical facts about the models. Section 2.3 also contains the statement about the coupling of the random walk and the Brownian loop soups that we use in the proof of Theorem 1.1 (see Theorem 2.2). Some notation that we only use in the proofs is summarized in Sect. 2.4.

The Beurling-type estimate for the loop erased random walk is given in Sect. 3 (see Theorem 3.1). Some related lemmas about hittability of the LERW are also stated there and may be of independent interest (see Lemmas 3.2 and 3.3). The proof of the Beurling-type estimate is given in Sect. 3.1. The rest of the section is devoted to the proof of an auxiliary lemma, and may be omitted on a first reading.

In Sect. 4 we construct the coupling of the loop soups satisfying the conditions of Theorem 2.2. This section may be skipped on a first reading.

The proof of our first main result, Theorem 1.1, is contained in Sect. 5. It is based on Theorems 2.2 and 3.1.

In Sect. 6, we prove that the scaling limit of the LERW is a simple path. In Sect. 6.1, we define quasi-loops and prove that the LERW is unlikely to contain them. The proof of our second main result, Theorem 1.2, is given in Sect. 6.2. It is based on the quasi-loop estimates from Sect. 6.1, as well as on Propositions 5.1 and 5.2.

Finally, in Sect. 7 we prove bounds on the Hausdorff dimension of the scaling limit of the LERW stated in Theorem 1.4. This proof is largely based on some earlier results on non-intersection probabilities for independent LERW and SRW obtained in [27]. We recall these results in Sect. 7.1 [see (7.1)] and also prove there some of their consequences. The upper and lower bounds on the Hausdorff dimension are proved in the remaining subsections.

2 Definitions, notation, and some history

2.1 Loop erased random walk and its scaling limit

We consider the graph \({\mathbb {Z}}^d\) with edges between nearest neighbors. If x and y are nearest neighbors in \({\mathbb {Z}}^d\), we write \(x\sim y\). A path is a function \(\gamma \) from \(\{1,\ldots ,n\}\) to \({\mathbb {Z}}^d\) for some \(n\ge 1\) such that \(\gamma (i)\sim \gamma (i+1)\) for all \(1\le i\le n-1\). The integer n is the length of \(\gamma \), which we denote by \(\mathrm {len}\, \gamma \). The loop erasure of a path \(\gamma \), \({\mathrm {LE}}(\gamma )\), is the (simple) path obtained by removing loops from \(\gamma \) in order of their appearance, namely,

$$\begin{aligned} {\mathrm {LE}}(\gamma )(1) &= \gamma (1),\\ {\mathrm {LE}}(\gamma )(i+1) &= \gamma (j_i+1)\quad \text {if}\quad j_i = \max \{j:\gamma (j) = {\mathrm {LE}}(\gamma )(i)\}<\mathrm {len}\,\gamma . \end{aligned}$$
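
The chronological erasure is straightforward to implement: one erases each loop at the moment it closes, which produces the same simple path as the recursion above via the last-visit times \(j_i\). The following is a minimal sketch in Python (an illustration of the definition; the function name loop_erase is ours, not from the paper).

```python
def loop_erase(path):
    """Chronological loop erasure of a path given as a list of lattice points.

    Whenever the walk revisits a site already on the erased path, the loop
    created since the first visit to that site is removed. This matches the
    recursion LE(gamma)(i+1) = gamma(j_i + 1) with j_i the last visit time.
    """
    erased = []
    index_of = {}  # site -> its position on the current erased path
    for site in path:
        if site in index_of:
            # a loop closed at `site`: erase everything after its first visit
            del erased[index_of[site] + 1:]
            for s in list(index_of):
                if index_of[s] > index_of[site]:
                    del index_of[s]
        else:
            index_of[site] = len(erased)
            erased.append(site)
    return erased
```

For example, loop_erase([(0,0), (1,0), (1,1), (0,1), (0,0), (0,-1)]) returns [(0,0), (0,-1)]: the square loop is erased when the walk returns to the origin.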

We are interested in the loop erasure of a simple random walk path started at 0 and stopped at the first time when it exits from a large Euclidean ball, the loop erased random walk (LERW). The simple random walk started at \(x\in {\mathbb {Z}}^d\) is a Markov chain \(\{R(t)\}_{t\in {\mathbb {Z}}_+}\) with \(R(0) = x\) and transition probabilities

$$\begin{aligned} {\mathbb {P}}\left[ R(t+1) = z~|~R(t) = y\right] = \begin{cases} \frac{1}{2d} &\quad \text {if } z\sim y,\\ 0 &\quad \text {otherwise.}\end{cases} \end{aligned}$$

We denote its law and the expectation by \({\mathbb {P}}^x\) and \({\mathbb {E}}^x\), respectively.
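
For concreteness, here is a minimal sketch (hypothetical names, not part of the formal development) of sampling the trace of the SRW started at 0 and stopped upon exiting a ball; feeding its output to the loop_erase sketch above produces a sample of the LERW.

```python
import random

def srw_until_exit(d, radius, rng=random):
    """Trace of the SRW on Z^d started at 0, run until it first leaves B(0, radius).

    Returns the full path, including the exit point.
    """
    x = tuple(0 for _ in range(d))
    path = [x]
    while sum(c * c for c in x) <= radius * radius:  # while still in the ball
        i = rng.randrange(d)                          # coordinate to move in
        s = rng.choice((-1, 1))                       # direction of the step
        x = tuple(c + (s if j == i else 0) for j, c in enumerate(x))
        path.append(x)
    return path
```

A sample of \({\mathrm {LEW}}_n\) (before rescaling by \(\frac{1}{n}\)) is then loop_erase(srw_until_exit(3, n)).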

The LERW was originally introduced [13] and studied extensively by Lawler (see [15] and the references therein), who considered it as a substitute for the self-avoiding walk (see [29]), which is harder to analyze. Since its appearance, the LERW has played an important role both in statistical physics and in mathematics through its relation to the uniform spanning tree (UST). Pemantle [25] proved that paths in the UST are distributed as LERWs; furthermore, the UST can be generated from LERWs by Wilson's algorithm [33].

We are interested in the scaling limit of the LERW and its connections to the Brownian motion. Let \(|\cdot |\) be the Euclidean norm in \({\mathbb {R}}^d\). The open ball of radius r is defined as \(D_r = \{x\in {\mathbb {R}}^d~:~|x| < r\}\), and we denote its closure by \(\overline{D}_r\). When \(r=1\), we just write D and \(\overline{D}\). We consider the loop erasure of the simple random walk path on \({\mathbb {Z}}^d\) from 0 until the first exit time from \(\overline{D}_n\), rescale it by \(\frac{1}{n}\), and denote the corresponding simple path on the lattice \(\frac{1}{n}{\mathbb {Z}}^d\) and its linear interpolation by \({\mathrm {LEW}}_n\). Consider the metric space \(({\mathcal {K}}_D, d_H)\) of all compact subsets of \(\overline{D}\) with the Hausdorff metric. We can think of \({\mathrm {LEW}}_n\) as a random element of \({\mathcal {K}}_D\). Let \({\mathrm {P}}_n\) be the probability measure on \(({\mathcal {K}}_D, d_H)\) induced by \({\mathrm {LEW}}_n\). Since \(({\mathcal {K}}_D, d_H)\) is compact and the space of Borel probability measures on a compact space is compact in the weak topology, for any subsequence \(n_k\), we can find a further subsequence \(n_{k_i}\) such that \({\mathrm {P}}_{n_{k_i}}\) converges weakly to a probability measure supported on compact subsets of \(\overline{D}\). In fact, more is known. In two dimensions, \({\mathrm {LEW}}_n\) converges weakly to \({\mathrm {SLE}}_2\) [18] (actually, even in a stronger sense). In 3 dimensions, \({\mathrm {LEW}}_{2^n}\) converges weakly as \(n\rightarrow \infty \) to a random compact subset of \(\overline{D}\), invariant under rotations and dilations [11].
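
For finite point sets, such as the vertex set of a lattice path, the Hausdorff metric is elementary to compute: \(d_H(A,B) = \max \left( \sup _{a\in A}{\mathrm {dist}}(a,B),\ \sup _{b\in B}{\mathrm {dist}}(b,A)\right) \). A brute-force sketch, included only to make the metric concrete:

```python
import math

def hausdorff(A, B):
    """Hausdorff distance between two finite point sets in R^d.

    d_H(A, B) = max( max_a dist(a, B), max_b dist(b, A) ),
    computed by brute force over all pairs of points.
    """
    def directed(X, Y):
        return max(min(math.dist(x, y) for y in Y) for x in X)
    return max(directed(A, B), directed(B, A))
```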

The existence of the LERW scaling limit will not be used in this paper. In fact, as discussed in the introduction, we are hoping that our approach can give an alternative proof of the existence. All our results are valid for any subsequential limit of \({\mathrm {LEW}}_n\), which we denote by \({\mathrm {K}}\) throughout the paper, and we will write for simplicity of notation that \({\mathrm {LEW}}_n\) converges to \({\mathrm {K}}\) without specifying a subsequence.

2.2 Random walk loop soup

To obtain a useful description of the loops erased from a random walk path, we define a Poisson point process of discrete loops.

A rooted loop of length 2n in \({\mathbb {Z}}^d\) is a \((2n+1)\)-tuple \(\gamma = (\gamma _0,\ldots ,\gamma _{2n})\) with \(|\gamma _i - \gamma _{i-1}| = 1\) and \(\gamma _0 = \gamma _{2n}\). Let \({\mathcal L}\) be the space of all rooted loops. We are interested in a Poisson point process of rooted loops in which each individual loop “looks like” a random walk bridge. We define the random walk loop measure \({\mu ^{\mathrm {rwl}}}\) as the sigma-finite measure on \({\mathcal L}\) giving the value \(\frac{1}{2n}\cdot \left( \frac{1}{2d}\right) ^{2n}\) to each loop of length 2n. The factor \(\frac{1}{2n}\) should be understood as choosing the root of the loop of length 2n uniformly. The random walk loop soup \({\mathcal R}\) is the Poisson point process on the space \({\mathcal L}\times (0,\infty )\) with the intensity measure \({\mu ^{\mathrm {rwl}}}\otimes \mathrm {Leb}_1\). For each \(\lambda >0\), the random walk loop soup induces the Poisson point process on the space \({\mathcal L}\) with the intensity measure \(\lambda {\mu ^{\mathrm {rwl}}}\), as a pushforward by the function \(\sum _i \mathbbm {1}_{(\gamma _i,\lambda _i)} \mapsto \sum _{i:\lambda _i\le \lambda }\mathbbm {1}_{\gamma _i}\). We call the resulting process the random walk loop soup of intensity \(\lambda \) and denote it by \({\mathcal R}^\lambda \).

Poisson ensembles of Markovian loops (loop soups) were introduced informally by Symanzik [30] as a representation of the \(\phi ^4\) Euclidean field, and were subsequently studied extensively in the physics community. The first rigorous definition of a loop soup was given by Lawler and Werner [21] in the context of planar Brownian motion. Our definition of the random walk loop soup is taken from [17, Chapter 9]. Random walk and Brownian loop soups have lately received considerable attention from probabilists and mathematical physicists due to their intimate relations to the Gaussian free field, see, e.g., [22, 31]. Of particular importance for us is the following decomposition of a random walk path into its loop erasure and a collection of loops coming from an independent random walk loop soup of intensity 1.

Proposition 2.1

[17, Propositions 9.4.1 and 9.5.1] Let \(L_n\) be the loop erasure of a simple random walk on \({\mathbb {Z}}^d\) started at 0 and stopped upon exiting from \(D_n\). Let \({\mathcal R}^1\) be an independent random walk loop soup, and denote by \(R_n\) the set of all loops (with multiplicities) from \({\mathcal R}^1\) that are contained in \(D_n\) and intersect \(L_n\). Then the union of \(L_n\) and \(R_n\) has the same law as the trace of a simple random walk on \({\mathbb {Z}}^d\) started at 0 and stopped upon exiting from \(D_n\).

Our goal is to pass to the scaling limit in the above decomposition to get a similar representation for the Brownian path. The scaling limit of \(L_n\) is a random compact subset of the unit ball, as discussed in the previous section. We will soon see that the scaling limit of a random walk loop soup is the Brownian loop soup of Lawler and Werner, which we introduce in the next section.

We finish this section with a hands-on definition of the random walk loop soup. Let \({\mu ^{\mathrm {rwl}}}(z,n)\) be the restriction of \({\mu ^{\mathrm {rwl}}}\) to the loops of length 2n rooted at z. It is a finite measure with the total mass \({\mu ^{\mathrm {rwl}}}(z,n)[{\mathcal L}] = \frac{1}{2n}\, p_{2n}(z,z)\), where \(p_{2n}(x,y)\) is the probability that the simple random walk started at x will be at y at step 2n, and \(\frac{{\mu ^{\mathrm {rwl}}}(z,n)}{{\mu ^{\mathrm {rwl}}}(z,n)[{\mathcal L}]}\) is the probability distribution of the random walk bridge of length 2n starting and ending at z. The measure \({\mu ^{\mathrm {rwl}}}\) can be expressed as a linear combination of probability measures on \({\mathcal L}\),

$$\begin{aligned} {\mu ^{\mathrm {rwl}}}= \sum _{z\in {\mathbb {Z}}^d}\sum _{n\ge 1}{\mu ^{\mathrm {rwl}}}(z,n) = \sum _{z\in {\mathbb {Z}}^d}\sum _{n\ge 1}\frac{p_{2n}(0,0)}{2n}\cdot \frac{{\mu ^{\mathrm {rwl}}}(z,n)}{{\mu ^{\mathrm {rwl}}}(z,n)[{\mathcal L}]}\, , \end{aligned}$$
(2.1)

which leads to the following simple recipe for sampling the random walk loop soups. Let

$$\begin{aligned} \widetilde{N}(z,n;\cdot ),\quad n\in \{1,2,\ldots \}, z\in {\mathbb {Z}}^d, \end{aligned}$$

be independent Poisson point processes on \((0,\infty )\) with parameter \(\frac{p_{2n}(0,0)}{2n}\). Let

$$\begin{aligned} \widetilde{L}(z,n;m),\quad n\in \{1,2,\ldots \}, z\in {\mathbb {Z}}^d, m\in \{1,2,\ldots \}, \end{aligned}$$

be independent random walk bridges of length 2n starting and ending at 0, independent of all the \(\widetilde{N}(z,n;\cdot )\). Then the multiset

$$\begin{aligned} \left\{ z+\widetilde{L}(z,n;m)~:~z\in {\mathbb {Z}}^d,~n\ge 1,~1\le m\le \widetilde{N}(z,n;\lambda )\right\} \end{aligned}$$
(2.2)

is the random walk loop soup of intensity \(\lambda \). In other words, we first generate the number \(\widetilde{N}(z,n;\lambda )\) of (labeled) random walk bridges of length 2n rooted at z with label at most \(\lambda \), and then sample their shapes according to the random walk bridge measure \(\frac{{\mu ^{\mathrm {rwl}}}(z,n)}{{\mu ^{\mathrm {rwl}}}(z,n)[{\mathcal L}]}\).
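
This recipe can be implemented without computing \(p_{2n}(0,0)\) explicitly. Since \({\mu ^{\mathrm {rwl}}}\) assigns mass \(\frac{1}{2n}\cdot \left( \frac{1}{2d}\right) ^{2n}\) to every loop of length 2n, one may draw a Poisson number, with mean \(\frac{\lambda }{2n}\), of free 2n-step walks rooted at z and keep exactly those that return to z; the retained walks form a Poisson process with intensity \(\lambda \,{\mu ^{\mathrm {rwl}}}\) restricted to loops of length 2n rooted at z. A minimal sketch of this thinning trick (our code and names; restricted to a finite set of roots and a cutoff on the loop length):

```python
import math
import random

def poisson_sample(mean, rng=random):
    """Poisson random variable via Knuth's inversion (fine for small means)."""
    threshold, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def sample_rw_loop_soup(lam, d, roots, max_half_len, rng=random):
    """Loops of the random walk loop soup of intensity `lam` rooted in
    `roots` (an iterable of d-tuples) and of length at most 2*max_half_len.

    For each root z and half-length n, draw Poisson(lam / (2n)) free
    2n-step walks from z; those that return to z form exactly a Poisson
    process with intensity lam * mu^rwl restricted to such loops.
    """
    steps = [tuple(sgn * (1 if j == i else 0) for j in range(d))
             for i in range(d) for sgn in (1, -1)]
    soup = []
    for z in roots:
        for n in range(1, max_half_len + 1):
            for _ in range(poisson_sample(lam / (2 * n), rng)):
                walk = [z]
                for _ in range(2 * n):
                    step = rng.choice(steps)
                    walk.append(tuple(a + b for a, b in zip(walk[-1], step)))
                if walk[-1] == z:  # keep only walks that close into loops
                    soup.append(walk)
    return soup
```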

2.3 Brownian loop soup and a strong coupling of loop soups

Recall our strategy: we want to obtain a decomposition of a Brownian path by taking a scaling limit of both sides in the corresponding random walk path decomposition. For this, we still need to discuss the existence of a scaling limit of the random walk loop soup. In fact, the scaling limit is explicit: it is the Brownian loop soup of Lawler and Werner [21], and we now give its description.

A rooted loop in \({\mathbb {R}}^d\) is a continuous function \(\gamma :[0,t_\gamma ]\rightarrow {\mathbb {R}}^d\) with \(\gamma (0) = \gamma (t_\gamma )\), where \(t_\gamma \in (0,\infty )\) is the time duration of \(\gamma \). We denote by \({\mathcal {C}}\) the set of all rooted loops. For \(z\in {\mathbb {R}}^d\) and \(t>0\), let \({\mu ^{\mathrm {bb}}}(z,t)\) be the measure on \({\mathcal {C}}\) induced by the Brownian bridge from z to z of time duration t. The Brownian loop measure \({\mu ^{\mathrm {bl}}}\) is the measure on \({\mathcal {C}}\) given by

$$\begin{aligned} {\mu ^{\mathrm {bl}}}= \int _{{\mathbb {R}}^d}\int _0^\infty \frac{1}{t\cdot (2\pi t)^{\frac{d}{2}}}\, {\mu ^{\mathrm {bb}}}(z,t)\,dt\,dz. \end{aligned}$$

Notice the analogy with a similar representation (2.1) of the random walk loop measure as a linear combination of random walk bridge measures. The measure \({\mu ^{\mathrm {bl}}}\) of course inherits the invariance under the Brownian scaling, \((r\cdot \mathrm {space}, r^2\cdot \mathrm {time~duration})\), from the bridge measures.

The Brownian loop soup \({\mathcal B}\) in \({\mathbb {R}}^d\) is the Poisson point process on the space \({\mathcal {C}}\times (0,\infty )\) with the intensity measure \({\mu ^{\mathrm {bl}}}\otimes \mathrm {Leb}_1\). For each \(\lambda >0\), the Brownian loop soup induces the Poisson point process on the space \({\mathcal {C}}\) with the intensity measure \(\lambda {\mu ^{\mathrm {bl}}}\), as a pushforward by the function \(\sum _i \mathbbm {1}_{(\gamma _i,\lambda _i)} \mapsto \sum _{i:\lambda _i\le \lambda }\mathbbm {1}_{\gamma _i}\). We call the resulting process the Brownian loop soup of intensity \(\lambda \) and denote it by \({\mathcal B}^\lambda \).
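
For illustration, a single rooted loop of prescribed root z and time duration t can be sampled on a time grid from a standard Brownian path W via the bridge transform \(B(s) = z + W(s) - \frac{s}{t}W(t)\), which starts and ends at z. A hedged sketch (the discretization and all names are ours):

```python
import math
import random

def sample_brownian_loop(z, t, n_steps=1000, rng=random):
    """Discretized Brownian loop rooted at z (a d-tuple) of time duration t.

    Sample a d-dimensional Brownian path W on a grid of n_steps increments,
    then turn it into a bridge via B(s) = z + W(s) - (s/t) * W(t).
    """
    d = len(z)
    dt = t / n_steps
    W = [tuple(0.0 for _ in range(d))]
    for _ in range(n_steps):
        W.append(tuple(w + rng.gauss(0.0, math.sqrt(dt)) for w in W[-1]))
    Wt = W[-1]
    return [tuple(zi + wi - (k / n_steps) * wti
                  for zi, wi, wti in zip(z, w, Wt))
            for k, w in enumerate(W)]
```

Sampling the full soup in a window is analogous to the random walk case, except that the intensity \(\frac{1}{t\cdot (2\pi t)^{d/2}}\,dt\,dz\) must be truncated at small time durations, where it diverges.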

The Brownian loop soups exhibit strong connections with the Schramm–Loewner evolution and the Gaussian free field, see, e.g., [4] for an overview, and they have been quite extensively studied. The connection between the random walk loop soups and the Brownian ones was established by Lawler and Trujillo [20] in two dimensions, who constructed a strong coupling between the two loop soups, much more than is needed to see that the scaling limit of a random walk loop soup is a Brownian one. For our purposes, we need to extend the result of [20] to higher dimensions. Actually, only to dimension 3, but we give an extension to arbitrary dimensions, as, on the one hand, the proof does not get more complicated, and, on the other, it may be instructive to see the dependence of various parameters on the dimension. Let

$$\begin{aligned} {\alpha }= \frac{3d+4}{2d(d+2)}. \end{aligned}$$
(2.3)

Theorem 2.2

There exist \(C<\infty \) and a coupling of the Brownian loop soup \({\mathcal B}= \{{\mathcal B}^\lambda \}_{\lambda >0}\) and the random walk loop soup \({\mathcal R}= \{{\mathcal R}^\lambda \}_{\lambda >0}\) such that for any \(\lambda >0, r\ge 1, N\ge 1\), and \(\theta \in \left( \frac{2d}{d+4}, 2\right) \), on the event of probability

$$\begin{aligned} \ge 1 - C\, (\lambda +1)\, r^d\, N^{-\min \left( \frac{d}{2},\,\theta \left( \frac{d}{2}+2\right) -d\right) } \end{aligned}$$

there is a one-to-one correspondence of random walk loops from \({\mathcal R}^\lambda \) of length \(\ge N^\theta \) rooted in \([-rN,rN]^d\) and Brownian loops from \({\mathcal B}^\lambda \) of time duration \(\ge \frac{N^\theta -2}{d}+{\alpha }\) rooted in \([-rN-\frac{1}{2},rN+\frac{1}{2}]^d\), such that the length of the random walk loop divided by d differs from the time duration of the corresponding Brownian loop by at most \({\alpha }\), and the supremum distance between the corresponding loops is \(\le C\, N^{\frac{3}{4}}\log N\). Here, each discrete loop is viewed as a rooted loop in \({\mathbb {R}}^d\) after linear interpolation.

2.4 Further notation

In this section, we summarize all the remaining notation that will be used in at least two different proofs. Notation that is used only once is deferred until a more appropriate spot.

For \(v\in {\mathbb {R}}^d\) and \(r>0\), the (discrete) ball of radius r centered at v is the set

$$\begin{aligned} B(v,r) = \left\{ x\in {\mathbb {Z}}^d:|x-v|\le r\right\} . \end{aligned}$$

For \(A\subset {\mathbb {Z}}^d\), we denote by \(\partial A\) the exterior vertex boundary of A, namely,

$$\begin{aligned} \partial A = \left\{ x\notin A:x\sim y\, \hbox {for some}\, y\in A\right\} . \end{aligned}$$

We also define \(\overline{A} = A\cup \partial A\). The boundary of a subset V of \({\mathbb {R}}^d\) is denoted by \(\partial _{{\mathbb {R}}^d}V\).

For a random walk R and a set \(A\subset {\mathbb {Z}}^d\), we denote the hitting time of A by

$$\begin{aligned} T(A) = \inf \{t\ge 1:R(t)\in A\}. \end{aligned}$$

For \(v\in {\mathbb {R}}^d\) and \(r>0\), we write

$$\begin{aligned} T_{v,r} = T(\partial B(v,r)). \end{aligned}$$

Quite often, we will consider two independent random walks on the same space. If so, we will denote these random walks by \(R^1\) and \(R^2\), their laws by \({\mathbb {P}}^{1,x_1}\) and \({\mathbb {P}}^{2,x_2}\) (where \(x_i=R^i(0)\)), and the corresponding hitting times by \(T^i\) and \(T^i_{v,r}\).

If \(\gamma \) is a path, we denote by \(\gamma [a,b]\) the path (or the set, depending on the situation) in \({\mathbb {Z}}^d\) consisting of the vertices \(\gamma (a),\gamma (a+1),\ldots ,\gamma (b)\). If \(\gamma _1\) and \(\gamma _2\) are two paths in \({\mathbb {Z}}^d\) and \(\gamma _1(\mathrm {len}\,\gamma _1)\sim \gamma _2(1)\), then we denote by \(\gamma _1\cup \gamma _2\) the path of length \(\mathrm {len}\,\gamma _1 + \mathrm {len}\,\gamma _2\) obtained by concatenating \(\gamma _1\) and \(\gamma _2\).

For a set \(S\subset {\mathbb {R}}^d\) and \(\epsilon >0\), we denote by \(S^{+\epsilon }\) the \(\epsilon \)-neighborhood of S and by \(S^{-\epsilon }\) the subset of points of S at distance \(>\epsilon \) from the complement of S.

Finally, let us make a convention about constants. Large constants whose values are not important are denoted by C and \(C'\) and small ones by c and \(c'\). Their dependence on parameters varies from proof to proof. Constants marked with a subindex, e.g., \(C_1\), \(C_H\), \(c_2\), \(c_*\), keep their values within the proof where they appear, but will change from proof to proof.

3 Beurling-type estimate

Throughout this section we assume that the dimension of the lattice is 3. We prove that the loop erasure of a simple random walk is hittable with high probability by an independent random walk started anywhere near the loop erasure.

Theorem 3.1

There exist \(\eta >0\) and \(C<\infty \) such that for any \(\varepsilon >0\) and \(n\ge 1\),

$$\begin{aligned} {\mathbb {P}}^{1,0} \left[ \begin{array}{c} {\textit{For any}} \, x \in B(0,n)\, {\textit{with}}\, {\mathrm {dist}}\left( x, {\mathrm {LE}}\left( R^1\left[ 0, T^1_{0,n} \right] \right) \right) \le \varepsilon ^2 n,\\ {\mathbb {P}}^{2,x} \left[ R^2\left[ 0, T^2_{x,\sqrt{\varepsilon } n}\right] \cap {\mathrm {LE}}\left( R^1\left[ 0, T^1_{0,n} \right] \right) = \emptyset \right] \le \varepsilon ^{\eta } \end{array} \right] \ge 1- C\varepsilon , \end{aligned}$$
(3.1)

see Sect. 2.1 for the definition of \({\mathrm {LE}}\) and Sect. 2.4 for the other notation.

A result analogous to Theorem 3.1 in 2 dimensions is known as the Beurling projection principle, see [10]. It states that for any \(\eta <\frac{3}{4}\), the probability on the left-hand side of (3.1) equals 1. In dimensions \(d\ge 4\), the result of Theorem 3.1 is not true.

Before moving on to the proof of Theorem 3.1, we discuss its main ingredients. They are of independent interest and will also be used in other proofs in this paper. First of all, from the point of view of this work, it would suffice to prove the estimate (3.1) only for those \(x\in B(0,n)\) that are at least \(\varepsilon n\) away from 0 and from the complement of B(0, n). However, Theorem 3.1 is a valuable tool in the study of loop erased random walks in three dimensions, and its applications will surely spread beyond the topics covered in this paper.

The proof of Theorem 3.1 proceeds by considering separately the cases \(x\in B(0,\varepsilon n)\) and \(x\notin B(0,\varepsilon n)\). In the first case we use [11, Lemma 4.6], which states that the LERW is hittable by an independent random walk in any sufficiently wide annulus centered at the origin.

Lemma 3.2

[11, Lemma 4.6] For any \(K\ge 1\), there exist \(\eta >0\) and \(C<\infty \) such that for all \(r>s>1\),

$$\begin{aligned} {\mathbb {P}}^{1,0} \left[ \begin{array}{c} {\textit{There exists}}\,\, T\ge 0 \,\,{\textit{such that}}\,\, {\mathrm {LE}}\left( R^1[0,T]\right) \nsubseteq B(0,r), {\textit{and}}\\ {\mathbb {P}}^{2,0} \left[ R^2\left[ 0, T^2_{0,r}\right] \cap {\mathrm {LE}}\left( R^1[0, T] \right) \cap \left( B(0,r){\setminus } \overline{B(0,s)}\right) = \emptyset \right] > \left( \frac{s}{r}\right) ^{\eta } \end{array} \right] \le C\, \left( \frac{s}{r}\right) ^K. \end{aligned}$$

In the second case, we use an analogue of [11, Lemma 4.6] about hittability of the LERW in annuli that do not surround the origin. We give its proof in Sect. 3.2. We will also use this lemma in Sect. 6 to show that the LERW scaling limit is a simple path. This is why we state here a slightly stronger result than we need for the proof of Theorem 3.1. We will comment more on this after stating the lemma.

Lemma 3.3

For any \(K\ge 1\), there exist \(\eta >0\) and \(C<\infty \) such that for all \(r>s>1\) and \(v\notin B(0,r)\),

$$\begin{aligned} {\mathbb {P}}^{1,0} \left[ \begin{array}{c} {\textit{There exists}}\, T\ge 0 \,{\textit{such that}}\, {\mathrm {LE}}\left( R^1[0,T]\right) \nsubseteq B(v,s)^c, {\textit{and}}\\ {\mathbb {P}}^{2,v} \left[ R^2\left[ 0, T^2_{v,r}\right] \cap {\mathrm {LE}}\left( R^1[0, T] \right) [0,\sigma ]\cap \left( B(v,r){\setminus } \overline{B(v,s)}\right) = \emptyset \right] > \left( \frac{s}{r}\right) ^{\eta } \end{array} \right] \le C\, \left( \frac{s}{r}\right) ^K, \end{aligned}$$
(3.2)

where \(\sigma = \inf \{t\ge 0:{\mathrm {LE}}\left( R^1[0, T] \right) [0,t]\cap B(v,s)\ne \emptyset \}\).

As remarked above, the full strength of Lemma 3.3 will not be needed until Sect. 6, where we reuse the lemma to prove that the LERW scaling limit is a simple path, see the proof of Claim 6.5. In the proof of Theorem 3.1 we will only apply a weaker version of (3.2), where \({\mathrm {LE}}\left( R^1[0, T] \right) [0,\sigma ]\) is replaced by \({\mathrm {LE}}\left( R^1[0, T] \right) \).

3.1 Proof of Theorem 3.1

Without loss of generality, we may assume that \(\varepsilon \) is small. The proof of Theorem 3.1 is a simple consequence of Lemmas 3.2 and 3.3. We estimate the probability in (3.1) separately for x’s in and outside \(B(0,\varepsilon n)\). In the first case, we apply Lemma 3.2 to \(T = T^1_{0,n}\), \(s=2\varepsilon n\), \(r = \frac{1}{2}\sqrt{\varepsilon }n\), and \(K = 2\), so that for some \(\eta >0\) and \(C<\infty \),

$$\begin{aligned} {\mathbb {P}}^{1,0} \left[ {\mathbb {P}}^{2,0} \left[ R^2\left[ 0, T^2_{0,\frac{1}{2}\sqrt{\varepsilon } n}\right] \cap {\mathrm {LE}}\left( R^1\left[ 0, T^1_{0,n} \right] \right) \cap \left( B\left( 0,\frac{1}{2}\sqrt{\varepsilon }n\right) {\setminus }\overline{B(0,2\varepsilon n)}\right) = \emptyset \right] > \varepsilon ^{\eta } \right] \le C\varepsilon . \end{aligned}$$

By varying the starting point of \(R^2\), we obtain a function that is harmonic in \(B\left( 0,2\varepsilon n\right) \),

$$\begin{aligned} h(x)~:=~ {\mathbb {P}}^{2,x} \left[ R^2\left[ 0, T^2_{0,\frac{1}{2}\sqrt{\varepsilon } n}\right] \cap {\mathrm {LE}}\left( R^1\left[ 0, T^1_{0,n} \right] \right) \cap \left( B\left( 0,\frac{1}{2}\sqrt{\varepsilon }n\right) {\setminus }\overline{B(0,2\varepsilon n)}\right) = \emptyset \right] . \end{aligned}$$

By the Harnack inequality [12, Theorem 1.7.2], there exists a constant \(C_H<\infty \) such that \(h(x) \le C_H\,h(0)\), for all \(x\in B(0,\varepsilon n)\). In particular,

$$\begin{aligned} {\mathbb {P}}^{2,x} \left[ R^2\left[ 0, T^2_{0,\frac{1}{2}\sqrt{\varepsilon } n}\right] \cap {\mathrm {LE}}\left( R^1\left[ 0, T^1_{0,n} \right] \right) = \emptyset \right] \le C_H\, h(0). \end{aligned}$$

Since \(B(x,\sqrt{\varepsilon }n) \supseteq B(0,\frac{1}{2}\sqrt{\varepsilon }n)\) for all \(x\in B(0,\varepsilon n)\), we also have

$$\begin{aligned} {\mathbb {P}}^{2,x} \left[ R^2\left[ 0, T^2_{x,\sqrt{\varepsilon } n}\right] \cap {\mathrm {LE}}\left( R^1\left[ 0, T^1_{0,n} \right] \right) = \emptyset \right] \le C_H\, h(0). \end{aligned}$$

Plugging this into the very first inequality gives

$$\begin{aligned} {\mathbb {P}}^{1,0} \left[ \begin{array}{c} \hbox {For some} \,x\in B(0,\varepsilon n),\\ {\mathbb {P}}^{2,x} \left[ R^2\left[ 0, T^2_{x,\sqrt{\varepsilon } n}\right] \cap {\mathrm {LE}}\left( R^1\left[ 0, T^1_{0,n} \right] \right) = \emptyset \right] > C_H\,\varepsilon ^{\eta } \end{array} \right] \le C\varepsilon . \end{aligned}$$

This gives (3.1) after slightly decreasing \(\eta \).

It remains to consider the case \(x\notin B(0,\varepsilon n)\). We prove that for some \(\eta >0\) and \(C<\infty \),

$$\begin{aligned} {\mathbb {P}}^{1,0} \left[ \begin{array}{c} \hbox {For some} \, x \in B(0,n){\setminus } \overline{B(0,\varepsilon n)}\,\,\hbox {with}\,\, {\mathrm {dist}}\left( x, {\mathrm {LE}}\left( R^1\left[ 0, T^1_{0,n} \right] \right) \right) \le \varepsilon ^2 n,\\ {\mathbb {P}}^{2,x} \left[ R^2\left[ 0, T^2_{x,\varepsilon n}\right] \cap {\mathrm {LE}}\left( R^1\left[ 0, T^1_{0,n} \right] \right) = \emptyset \right] > \varepsilon ^{\eta } \end{array} \right] \le C\varepsilon , \end{aligned}$$
(3.3)

which is slightly stronger than (3.1), since \(T^2_{x,\sqrt{\varepsilon } n}\) of (3.1) is replaced here by the smaller \(T^2_{x,\varepsilon n}\). We start by covering \(B(0,n){\setminus }\overline{B(0,\varepsilon n)}\) by \(s= 10\,\lfloor \varepsilon ^{-6}\rfloor \) balls of radius \(\varepsilon ^2 n\) with centers at \(v_1,\ldots , v_s\in B(0,n){\setminus }\overline{B(0,\varepsilon n)}\). By the union bound, the probability on the left-hand side of (3.3) is bounded from above by

$$\begin{aligned} \sum _{i=1}^s {\mathbb {P}}^{1,0} \left[ \begin{array}{c} \hbox {There exists} \, x \in B\left( v_i,\varepsilon ^2 n\right) \hbox {with}\, {\mathrm {dist}}\left( x, {\mathrm {LE}}\left( R^1\left[ 0, T^1_{0,n} \right] \right) \right) \le \varepsilon ^2 n,\\ {\mathbb {P}}^{2,x} \left[ R^2\left[ 0, T^2_{x,\varepsilon n}\right] \cap {\mathrm {LE}}\left( R^1\left[ 0, T^1_{0,n} \right] \right) = \emptyset \right] > \varepsilon ^{\eta } \end{array} \right] \end{aligned}$$

For each \(x\in B\left( v_i,\varepsilon ^2 n\right) \), \(B(x,\varepsilon ^2n)\subset B(v_i,2\varepsilon ^2n)\) and \(B(x,\varepsilon n)\supset B(v_i,\frac{1}{2}\varepsilon n)\). Thus, the ith probability in the sum is at most

$$\begin{aligned} {\mathbb {P}}^{1,0} \left[ \begin{array}{c} {\mathrm {LE}}\left( R^1\left[ 0, T^1_{0,n}\right] \right) \nsubseteq B\left( v_i,2\varepsilon ^2 n\right) ^c, \hbox {and for some} \,x \in B\left( v_i,\varepsilon ^2 n\right) \\ {\mathbb {P}}^{2,x} \left[ R^2\left[ 0, T^2_{v_i,\frac{1}{2}\varepsilon n}\right] \cap {\mathrm {LE}}\left( R^1\left[ 0, T^1_{0,n} \right] \right) \cap \left( B\left( v_i,\frac{1}{2}\varepsilon n\right) {\setminus } \overline{B\left( v_i,2\varepsilon ^2 n\right) }\right) = \emptyset \right] > \varepsilon ^{\eta } \end{array} \right] \end{aligned}$$

By the Harnack inequality applied to the harmonic function

$$\begin{aligned}&{\mathbb {P}}^{2,x} \left[ R^2\left[ 0, T^2_{v_i,\frac{1}{2}\varepsilon n}\right] \cap {\mathrm {LE}}\left( R^1\left[ 0, T^1_{0,n} \right] \right) \cap \left( B\left( v_i,\frac{1}{2}\varepsilon n\right) {\setminus } \overline{B\left( v_i,2\varepsilon ^2 n\right) }\right) = \emptyset \right] , \\&\quad x \in B\left( v_i,2\varepsilon ^2 n\right) , \end{aligned}$$

there exists a universal constant \(c_H>0\) such that the ith probability is bounded from above by

$$\begin{aligned} {\mathbb {P}}^{1,0} \left[ \begin{array}{c} {\mathrm {LE}}\left( R^1\left[ 0, T^1_{0,n} \right] \right) \nsubseteq B\left( v_i,2\varepsilon ^2 n\right) ^c,\, \hbox {and}\\ {\mathbb {P}}^{2,v_i} \left[ R^2\left[ 0, T^2_{v_i,\frac{1}{2}\varepsilon n}\right] \cap {\mathrm {LE}}\left( R^1\left[ 0, T^1_{0,n} \right] \right) \cap \left( B\left( v_i,\frac{1}{2}\varepsilon n\right) {\setminus } \overline{B\left( v_i,2\varepsilon ^2 n\right) }\right) = \emptyset \right] > c_H\,\varepsilon ^{\eta } \end{array} \right] \end{aligned}$$

Now, we apply Lemma 3.3 with \(v=v_i\), \(r = \frac{1}{2}\varepsilon n\), \(s=2\varepsilon ^2n\), and \(K=7\) to find \(\eta >0\) and \(C<\infty \) for which the above probability is \(\le C\varepsilon ^7\). Thus, the probability from (3.3) is bounded from above by \((C\varepsilon ^7)\cdot s \le 10 C\,\varepsilon \). This proves (3.3) and completes the proof of Theorem 3.1. \(\square \)

3.2 Proof of Lemma 3.3

The scheme of the proof is conceptually the same as that of [11, Lemma 4.6], except for the main improvement stated in Claim 3.4 below. For the reader's convenience and because of the importance of the result, we give a complete proof, which we organize in a sequence of claims. The first claim is a stronger version of [11, Lemma 4.3], which is the first step in the proof of [11, Lemma 4.6]. This improvement is essentially the main reason why the remaining steps in the proof of [11, Lemma 4.6] can be adapted to our situation.

Claim 3.4

There exists \(c_1>0\) such that for all \(n>0, v\in \partial B(0,n)\), and \(\Gamma \subset B(v,n)^c\),

$$\begin{aligned} {\mathbb {P}}^0\left[ {\mathrm {dist}}\left( R(T_{0,n}), B(v,n)^c\right) \ge \frac{n}{2} ~\Big |~ R[1,T_{0,n}]\cap \Gamma = \emptyset \right] \ge c_1. \end{aligned}$$

Proof of Claim 3.4

An analogous claim for the random walk on \({\mathbb {Z}}^2\) is proved in [24, Proposition 3.5]. The same scheme works for \({\mathbb {Z}}^3\) with a slightly more involved analysis of the corresponding harmonic function.

We begin with some auxiliary observations in \({\mathbb {R}}^3\). For \(z\in {\mathbb {R}}^3\), let \({\mathbb D}(z)\) be the unit ball in \({\mathbb {R}}^3\) centered at z, and write \({\mathbb D}\) for \({\mathbb D}(0)\). Let \(u\in \partial _{{\mathbb {R}}^3}{\mathbb D}\), \(\delta >0\), and \(M = \{z\in \partial _{{\mathbb {R}}^3}{\mathbb D}:|z-u|\le \delta \}\). For \(z\in {\mathbb D}\), let \(h(z) = {\mathbf {P}}^z[W(\tau _{\mathbb D})\in M]\), where W is the standard Brownian motion in \({\mathbb {R}}^3\) and \(\tau _{\mathbb D}\) is the first hitting time of \(\partial _{{\mathbb {R}}^3}{\mathbb D}\) by W. Then h is a harmonic function in \({\mathbb D}\) with the boundary condition \(\mathbbm {1}_M\). In particular, it can be written as

$$\begin{aligned} h(z) = \frac{1}{4\pi }\int _M\frac{1 - |z|^2}{|z-\sigma |^3}d\sigma . \end{aligned}$$

We will need the following properties of h.

  • If \(\delta \) is small enough, then for all \(z\in {\mathbb D}{\setminus }{\mathbb D}(u)\), \(h(z)\le h(0)\).

Proof

By the maximum principle, it suffices to consider \(z\in \partial {\mathbb D}(u)\cap {\mathbb D}\). By symmetry, it suffices to prove the claim for \(u = (1,0,0)\) and \(z = (z_1,z_2,0)\). Using geometric constraints, one can express h(z) as a function of \(z_2\) only,

$$\begin{aligned} h(z) = f(z_2) = \frac{1}{4\pi }\int _M\frac{2\sqrt{1-z_2^2} - 1}{\left( 3-2\sigma _1 + 2(\sigma _1 - 1)\sqrt{1-z_2^2} - 2\sigma _2z_2\right) ^{\frac{3}{2}}}\,d\sigma ,\qquad z\in \partial {\mathbb D}(u)\cap {\mathbb D}. \end{aligned}$$

One can show by a direct computation that \(z_2f'(z_2)\le 0\) if \(|\sigma _2|\) is sufficiently small, which is guaranteed by taking \(\delta \) small; this proves the claim. \(\square \)

  • Another direct computation gives \(\frac{\partial h}{\partial u}(0) = \nu >0\) (the derivative in the direction u) and \(\frac{\partial h}{\partial u'}(0) = 0\) for any \(u'\) orthogonal to u.

  • There exists \(r\in (0,1)\) such that for all \(\delta \in (0,\frac{1}{4})\) and \(r\le |z|<1\) with \(|z-u|\ge \frac{1}{2}\), \(h(z)\le \frac{1}{4} h(0)\). This follows from the bound \(\frac{1 - |z|^2}{|z-\sigma |^3}\le 4^3(1-r^2)\).

Assume that n is large enough so that \(\overline{B(0,rn)}\subset n{\mathbb D}\). The function \(h_n(z) = h(\frac{z}{n})\) is harmonic in \(n{\mathbb D}\). For \(z\in \overline{B(0,rn)}\), let \(\widetilde{h}_n(z) = {\mathbb {E}}^z[h_n(R(T_{0,rn}))]\) be the discrete harmonic function in B(0, rn) which agrees with \(h_n(z)\) on \(\partial B(0,rn)\). By [12, (1.23) and (1.34)], there exists \(C<\infty \) such that for all \(z\in B(0,rn), |h_n(z) - \widetilde{h}_n(z)|\le \frac{C}{n}\).

We proceed with the proof of the claim. Let \(n\ge 1\) and \(v\in \partial B(0,n)\). We choose \(u = \frac{v}{|v|}\in \partial _{{\mathbb {R}}^3}{\mathbb D}\). Let \(A=\frac{4C}{\nu }\) (with \(\nu \) and C as above) and \(x\in B(0,rn)\) be such that \(x_i = \lfloor Au_i\rfloor \). By Taylor’s theorem, \(h_n(x) - h_n(0)\ge \frac{A\nu }{2n}\) for large n. Thus, for any \(z\in B(0,rn){\setminus } B(v,n)\),

$$\begin{aligned} \widetilde{h}_n(x) - \widetilde{h}_n(z) &= \left[ \widetilde{h}_n(x) - h_n(x)\right] + \left[ h_n(x) - h_n(0)\right] + \left[ h_n(0) - h_n(z)\right] + \left[ h_n(z) - \widetilde{h}_n(z)\right] \\ &\ge -\frac{C}{n} + \frac{A\nu }{2n} + 0 - \frac{C}{n} \ge 0. \end{aligned}$$

Since \(\Gamma \subset B(v,n)^c\), the same calculation as on [24, page 1032] gives

$$\begin{aligned} {\mathbb {E}}^x\left[ h_n(R(T_{0,rn}))~|~R\left[ 0,T_{0,rn}\right] \cap \Gamma =\emptyset \right] &\ge {\mathbb {E}}^x\left[ h_n(R(T_{0,rn}))\right] = \widetilde{h}_n(x)\\ &\ge \widetilde{h}_n(0) \ge h(0) - \frac{C}{n} \ge \frac{1}{2} h(0). \end{aligned}$$

By splitting the above expectation into the terms where \(|R(T_{0,rn}) - v|\) is \(\ge \frac{n}{2}\) and \(<\frac{n}{2}\), and estimating \(h_n(R(T_{0,rn}))\) from above by \(\frac{1}{4} h(0)\) in the first case and by 1 in the second, one gets exactly as on [24, page 1033] that

$$\begin{aligned} {\mathbb {P}}^x\left[ \left| R(T_{0,rn}) - v\right| \le \frac{n}{2}~\big |~R\left[ 1,T_{0,rn}\right] \cap \Gamma =\emptyset \right] \ge c>0, \end{aligned}$$

which implies that \({\mathbb {P}}^0\left[ |R(T_{0,n}) - v|\le \frac{n}{2}~\big |~R[1,T_{0,n}]\cap \Gamma =\emptyset \right] \ge c'>0\). The proof of the claim is complete. \(\square \)

Before we state the next claim, we introduce some notation. For a path \(\gamma \) and \(t\ge 1\), we define the set of cut points of \(\gamma \) up to time t,

$$\begin{aligned} {\mathrm {cut}}(\gamma ;t) = \left\{ \gamma (i)~:~i<t,~\gamma [1,i]\cap \gamma [i+1,\mathrm {len}\,\gamma ] = \emptyset \right\} . \end{aligned}$$

Note that \({\mathrm {cut}}(\gamma ;t)\) is non-decreasing in t, and non-increasing as \(\gamma \) is extended. Also note that \({\mathrm {LE}}(\gamma )[1,\mathrm {len}\,{\mathrm {LE}}(\gamma )]\supset {\mathrm {cut}}(\gamma ;\mathrm {len}\,\gamma )\).
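
The set \({\mathrm {cut}}(\gamma ;t)\) is straightforward to compute from the definition; a brute-force sketch (quadratic time, using the paper's 1-based indexing; illustration only):

```python
def cut_points(gamma, t):
    """cut(gamma; t) for a path given as a list of lattice points.

    With 1-based indices as in the text: gamma(i), for i < t, is a cut
    point iff {gamma(1),...,gamma(i)} and {gamma(i+1),...,gamma(len)}
    are disjoint.
    """
    cuts = []
    for i in range(1, min(t, len(gamma))):     # 1-based i = 1, ..., t-1
        head = set(gamma[:i])                  # gamma(1), ..., gamma(i)
        tail = set(gamma[i:])                  # gamma(i+1), ..., gamma(len)
        if head.isdisjoint(tail):
            cuts.append(gamma[i - 1])          # the point gamma(i)
    return cuts
```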

The following claim is an analogue of [11, Lemma 4.4].

Claim 3.5

There exists \(q>0\) such that the following holds. For any \(\varepsilon >0\) there exist \(\delta = \delta (\varepsilon )>0\) and \(C = C(\varepsilon )<\infty \) such that for all \(r>C\), \(s\in [r(1+\varepsilon ),2r]\), \(\Gamma \subset B(0,s)^c\) with

$$\begin{aligned} {\mathbb {P}}^0\left[ R[0,T_{0,4r}]\cap \Gamma \ne \emptyset \right] <\delta , \end{aligned}$$

and \(v\in \partial B(0,s)\),

$$\begin{aligned} {\mathbb {P}}^{1,v}\left[ \begin{array}{c} \hbox {For all}\, y\in B(v,\varepsilon r),\\ {\mathbb {P}}^{2,y}\left[ {\mathrm {cut}}\left( R^1[0,+\infty );T^1_{v,\varepsilon r}\right) \cap R^2\left[ 0,T^2_{0,4r}\right] \ne \emptyset \right] \ge q \end{array} ~\Big |~R^1\left[ 1,T^1_{0,4r}\right] \cap \Gamma =\emptyset \right] \ge q. \end{aligned}$$

Proof of Claim 3.5

Let \(v\in \partial B(0,s)\), and define \({\mathcal {C}} = {\mathrm {cut}}\left( R^1[0,+\infty );T^1_{v,\varepsilon r}\right) \) and \(A = B(v,\frac{1}{2}\varepsilon r){\setminus }\overline{B(v,\frac{1}{4}\varepsilon r)}\). Let \(\lambda >2\) be a constant to be fixed later, and take \(\mu = \frac{\varepsilon }{4\lambda }\) and \(\rho = \mu r\). By [11, Lemma 4.2], there exists \(c>0\) such that for all \(x\in \partial B(v,\rho )\),

$$\begin{aligned} {\mathbb {P}}^{1,x}\left[ \hbox {For all}\, y\in B\left( v,\frac{1}{8}\varepsilon r\right) ,\,\, {\mathbb {P}}^{2,y}\left[ {\mathcal {C}}\cap R^2\left[ 0,T^2_{0,4r}\right] \cap A\ne \emptyset \right] \ge c \right] \ge c. \end{aligned}$$

Since the random walk started from any \(y\in B(v,\varepsilon r)\) will hit \(B(v,\frac{1}{8}\varepsilon r)\) before exiting from B(0, 4r) with probability \(>c'\), the previous inequality also holds for all \(y\in B(v,\varepsilon r)\). Namely, there exists \(c_2>0\) such that for all \(x\in \partial B(v,\rho )\),

$$\begin{aligned} {\mathbb {P}}^{1,x}\left[ \hbox {For all}\, y\in B\left( v,\varepsilon r\right) ,\,\, {\mathbb {P}}^{2,y}\left[ {\mathcal {C}}\cap R^2\left[ 0,T^2_{0,4r}\right] \cap A\ne \emptyset \right] \ge c_2 \right] \ge c_2. \end{aligned}$$
(3.4)

By [12, Proposition 1.5.10], for any \(z\in \partial B(v,\frac{1}{4}\varepsilon r)\),

$$\begin{aligned} {\mathbb {P}}^{1,z}\left[ T^1_{v,\rho }<\infty \right] \le C'\,\frac{\rho }{\varepsilon r} = \frac{C_3}{\lambda }. \end{aligned}$$

Thus,

$$\begin{aligned} {\mathbb {P}}^{1,v}\left[ R^1\left[ T^1_{v,\frac{1}{4}\varepsilon r},+\infty \right) \cap B(v,\rho ) \ne \emptyset \right] \le \frac{C_3}{\lambda }. \end{aligned}$$
(3.5)

Note that if the random walk \(R^1\) started from v does not return to \(B(v,\rho )\) after \(T^1_{v,\frac{1}{4}\varepsilon r}\), then

$$\begin{aligned} {\mathrm {cut}}\left( R^1[0,+\infty );T^1_{v,\varepsilon r}\right) \cap A = {\mathrm {cut}}\left( R^1\left[ T^1_{v,\rho },+\infty \right) ;T^1_{v,\varepsilon r}\right) \cap A. \end{aligned}$$
(3.6)

Denote by M the set of points on \(\partial B(v,\rho )\) which are at distance \(\ge \frac{\rho }{2}\) from \(B(0,s)^c\). By Claim 3.4,

$$\begin{aligned} {\mathbb {P}}^{1,v}\left[ R^1\left( T^1_{v,\rho }\right) \in M ~\Big |~ R^1\left[ 1,T^1_{v,\rho }\right] \cap \Gamma = \emptyset \right] \ge c_1. \end{aligned}$$
(3.7)

By the Harnack inequality applied to the harmonic function

$$\begin{aligned} {\mathbb {P}}^{1,x}\left[ R^1\left[ 0,T^1_{0,4r}\right] \cap \Gamma \ne \emptyset \right] ,\qquad x\in B\left( 0,s-\frac{\rho }{2}\right) , \end{aligned}$$

and the assumption on \(\Gamma \), there exists \(C_4=C_4(\varepsilon ,\lambda )<\infty \) such that for any \(x\in M\),

$$\begin{aligned} {\mathbb {P}}^{1,x}\left[ R^1\left[ 0,T^1_{0,4r}\right] \cap \Gamma \ne \emptyset \right] \le C_4\delta . \end{aligned}$$
(3.8)

All the ingredients are ready to conclude. We have

$$\begin{aligned}&{\mathbb {P}}^{1,v}\left[ \hbox {For all} \,y\in B(v,\varepsilon r),\,\, {\mathbb {P}}^{2,y}\left[ {\mathcal {C}}\cap R^2\left[ 0,T^2_{0,4r}\right] \ne \emptyset \right] \ge c_2~\Big |~ R^1\left[ 1,T^1_{0,4r}\right] \cap \Gamma =\emptyset \right] \\&\quad \ge \frac{{\mathbb {P}}^{1,v}\left[ \begin{array}{c} \hbox {For all} \,y\in B(v,\varepsilon r),\,\,{\mathbb {P}}^{2,y}\left[ {\mathcal {C}}\cap R^2\left[ 0,T^2_{0,4r}\right] \cap A\ne \emptyset \right] \ge c_2,\\ R^1\left[ 1,T^1_{0,4r}\right] \cap \Gamma =\emptyset ,~R^1\left( T^1_{v,\rho }\right) \in M,~R^1\left[ T^1_{v,\frac{1}{4}\varepsilon r},+\infty \right) \cap B(v,\rho ) = \emptyset \end{array}\right] }{{\mathbb {P}}^{1,v}\left[ R^1\left[ 1,T^1_{v,\rho }\right] \cap \Gamma = \emptyset \right] }. \end{aligned}$$

By the strong Markov property for \(R^1\) at time \(T^1_{v,\rho }\) and the identity (3.6), the numerator of the above expression is bounded from below by

$$\begin{aligned}&{\mathbb {P}}^{1,v}\left[ R^1\left[ 1,T^1_{v,\rho }\right] \cap \Gamma =\emptyset ,~R^1\left( T^1_{v,\rho }\right) \in M\right] \\&\quad \cdot \min _{x\in M}\, {\mathbb {P}}^{1,x}\left[ \begin{array}{c} \hbox {For all}\, y\in B(v,\varepsilon r),\,\,{\mathbb {P}}^{2,y}\left[ {\mathcal {C}}\cap R^2\left[ 0,T^2_{0,4r}\right] \ne \emptyset \right] \ge c_2,\\ R^1\left[ 1,T^1_{0,4r}\right] \cap \Gamma =\emptyset ,~R^1\left[ T^1_{v,\frac{1}{4}\varepsilon r},+\infty \right) \cap B(v,\rho ) = \emptyset \end{array}\right] \end{aligned}$$

By (3.4), (3.5), and (3.8), the above display is

$$\begin{aligned} \ge \left( c_2 - \frac{C_3}{\lambda } - C_4\delta \right) \cdot {\mathbb {P}}^{1,v}\left[ R^1\left[ 1,T^1_{v,\rho }\right] \cap \Gamma =\emptyset ,~R^1\left( T^1_{v,\rho }\right) \in M\right] , \end{aligned}$$

which implies that

$$\begin{aligned}&{\mathbb {P}}^{1,v}\left[ \hbox {For all}\, y\in B(v,\varepsilon r),\,\, {\mathbb {P}}^{2,y}\left[ {\mathcal {C}}\cap R^2\left[ 0,T^2_{0,4r}\right] \ne \emptyset \right] \right. \\&\quad \left. \ge c_2~\Big |~ R^1\left[ 1,T^1_{0,4r}\right] \cap \Gamma =\emptyset \right] \\&\quad \ge \left( c_2 - \frac{C_3}{\lambda } - C_4\delta \right) \cdot {\mathbb {P}}^{1,v}\left[ R^1\left( T^1_{v,\rho }\right) \in M~\Big |~R^1\left[ 1,T^1_{v,\rho }\right] \cap \Gamma =\emptyset \right] \\&\quad \ge c_1\cdot \left( c_2 - \frac{C_3}{\lambda } - C_4\delta \right) , \end{aligned}$$

where the last inequality follows from (3.7). Finally, we choose the parameters: take \(\lambda \) so that \(\frac{C_3}{\lambda }<\frac{c_2}{4}\), then take \(\delta \) so that \(C_4\delta < \frac{c_2}{4}\), and set \(q = \frac{c_1c_2}{2}\), so that \(c_1\left( c_2 - \frac{C_3}{\lambda } - C_4\delta \right) \ge q\). The proof of Claim 3.5 is complete. \(\square \)

To state the next claim, we need more notation. A function \(\gamma :\{1,\ldots ,n\}\rightarrow {\mathbb {Z}}^3\) is called a discontinuous path of length n. All the definitions that we introduced for nearest neighbor paths extend without any changes to discontinuous paths. Given two discontinuous paths \(\gamma _1\) and \(\gamma _2\), we define the discontinuous paths \({\mathrm {LE}}_1(\gamma _1\cup \gamma _2)\) and \({\mathrm {LE}}_2(\gamma _1\cup \gamma _2)\) as follows. Let \(t_* = \max \{t:{\mathrm {LE}}(\gamma _1)[1,t-1]\cap \gamma _2=\emptyset \}\). Then

$$\begin{aligned} \begin{array}{rll} {\mathrm {LE}}_1(\gamma _1\cup \gamma _2) &{}= &{}{\mathrm {LE}}(\gamma _1\cup \gamma _2)[1,t_*],\\ {\mathrm {LE}}_2(\gamma _1\cup \gamma _2) &{}=&{}{\mathrm {LE}}(\gamma _1\cup \gamma _2)\left[ t_*+1,\mathrm {len}\,{\mathrm {LE}}(\gamma _1\cup \gamma _2)\right] . \end{array} \end{aligned}$$

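A concrete algorithmic rendering may help in parsing this definition. The following sketch (Python, purely illustrative; the helper names are ours, lattice points are encoded as tuples, and indices are 0-based, so they are shifted by one with respect to the paper) implements chronological loop erasure and the split at \(t_*\); informally, \({\mathrm {LE}}_1\) is the initial piece of the loop erasure surviving the concatenation with \(\gamma _2\), and \({\mathrm {LE}}_2\) is the remaining, newly created piece.

```python
def loop_erase(path):
    # Chronological loop erasure: scan the path once, keeping a growing
    # simple path; whenever the walk revisits a point of the current
    # simple path, erase the loop that has just been closed.
    le, index = [], {}
    for x in path:                      # x is a lattice point (a tuple)
        if x in index:                  # a loop is closed at x: erase it
            cut = index[x] + 1
            for y in le[cut:]:
                del index[y]
            del le[cut:]
        else:
            index[x] = len(le)
            le.append(x)
    return le

def split_loop_erasure(gamma1, gamma2):
    # Split LE(gamma1 + gamma2) into LE_1 and LE_2 at t*, the largest t
    # such that the first t - 1 points of LE(gamma1) avoid gamma2
    # (a 0-based sketch of the definition in the text).
    le = loop_erase(gamma1 + gamma2)
    le1, g2 = loop_erase(gamma1), set(gamma2)
    k = next((i for i, x in enumerate(le1) if x in g2), len(le1))
    t_star = min(k + 1, len(le))
    return le[:t_star], le[t_star:]
```
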
The next claim is an analogue of [11, Lemma 4.5].

Claim 3.6

For any \(\varepsilon >0\) and \(\eta >0\), there exist \(\delta >0\) and \(C<\infty \) such that the following holds. For \(r>0\), let \(A_1 = B(0,2r){\setminus }\overline{B(0,r)}\) and \(A_2 = B(0,4r){\setminus }\overline{B(0,\frac{1}{2}r)}\). Then for any \(r>C\), \(s\in [r(1+\eta ),2r]\), \(v\in \partial A_1\), and any discontinuous path \(\gamma \subset A_2\) with \(\gamma (1)\in \partial B(0,4r)\) and \(\gamma (\mathrm {len}\,\gamma )\sim v\),

$$\begin{aligned} {\mathbb {P}}^{1,v}\left[ \begin{array}{c} {\mathrm {LE}}_1\left( \gamma \cup R^1\left[ 0,T^1(\partial A_2)\right] \right) \subset B(0,s)^c,\\ {\mathrm {LE}}_2\left( \gamma \cup R^1\left[ 0,T^1(\partial A_2)\right] \right) \cap B(0,s-\eta r)\ne \emptyset ,\\ {\mathbb {P}}^{2,0}\left[ R^2\left[ 0,T^2_{0,4r}\right] \cap L'\ne \emptyset \right]<\delta \end{array} \right] <\varepsilon , \end{aligned}$$
(3.9)

where \(L' = {\mathrm {LE}}(\gamma \cup R^1[0,T^1(\partial A_2)])[1,t]\) and \(t = \min \{i:{\mathrm {LE}}(\gamma \cup R^1[0,T^1(\partial A_2)])(i)\in B(0,s-\eta r)\}\).

Proof of Claim 3.6

Fix \(\varepsilon >0\) and \(\eta >0\). Take \(q>0\) from Claim 3.5. Let \(K=K(\varepsilon )>2\) be an integer such that \((1-q)^K<\varepsilon \). Let \(\varepsilon ' = \frac{\eta }{2K}\) and \(\delta ' = \delta _{{\mathrm {Claim}}~3.5}(\varepsilon ')\) be the \(\delta \) from Claim 3.5 corresponding to \(\varepsilon _{{\mathrm {Claim}}~3.5} = \varepsilon '\). Define \(s_i = s - \frac{\eta i}{K+2}r\) for \(i\in \{1,\ldots ,K+1\}\). Let \(j_k\) be as in the definition of the loop erasure, so that \(R^1[j_k+1,T^1(\partial A_2)]\) is a random walk conditioned not to hit \({\mathrm {LE}}(\gamma \cup R^1[0,j_k])\). Let

$$\begin{aligned} \tau _i = \max \left\{ j_k\le T^1(\partial A_2)~:~{\mathrm {LE}}\left( \gamma \cup R^1[0,j_k]\right) \subset B(0,s_i)^c\right\} . \end{aligned}$$

If \(\tau _i<T^1(\partial A_2)\), then define \(\Gamma _i = {\mathrm {LE}}(\gamma \cup R^1[0,\tau _i])\) and \(v_i = R^1(\tau _i)\). By Claim 3.5,

$$\begin{aligned} {\mathbb {P}}\left[ \mathcal B_i~|~\tau _i,~R^1[0,\tau _i]\right] <1-q, \end{aligned}$$

where

$$\begin{aligned} {\mathcal {B}}_i = \left\{ \begin{array}{c} \tau _i<T^1(\partial A_2), {\mathbb {P}}^{2,0}\left[ R^2\left[ 0,T^2_{0,4r}\right] \cap \Gamma _i \ne \emptyset \right] < \delta ', \hbox {and}\\ \hbox {for some} \,y\in B\left( v_i,\varepsilon 'r\right) , {\mathbb {P}}^{2,y}\left[ {\mathrm {cut}}\left( R^1\left[ \tau _i,+\infty \right) ;T^*_i\right) \cap R^2\left[ 0,T^2_{0,4r}\right] \ne \emptyset \right] \le q \end{array} \right\} , \end{aligned}$$

and \(T^*_i = \min \{t>\tau _i:R^1(t)\in \partial B(v_i,\varepsilon 'r)\}\). Note that the event \({\mathcal {B}}_i\) contains the following event

$$\begin{aligned} {\mathcal {B}}_i' = \left\{ \begin{array}{c} \tau _{i+1}<T^1(\partial A_2), {\mathbb {P}}^{2,0}\left[ R^2\left[ 0,T^2_{0,4r}\right] \cap \Gamma _i \ne \emptyset \right] < \delta ', \hbox {and}\\ \hbox {for some}\,\, y\in B(v_i,\varepsilon 'r), {\mathbb {P}}^{2,y}\left[ {\mathrm {cut}}\left( R^1\left[ \tau _i,\tau _{i+1}\right] ;T^*_i\right) \cap R^2\left[ 0,T^2_{0,4r}\right] \ne \emptyset \right] \le q \end{array} \right\} , \end{aligned}$$

which depends only on \(\tau _{i+1}\) and \(R^1[0,\tau _{i+1}]\). Thus,

$$\begin{aligned} {\mathbb {P}}\left[ {\mathcal {B}}_i'~|~{\mathcal {B}}_1',\ldots , {\mathcal {B}}_{i-1}'\right] = {\mathbb {E}}\left[ {\mathbb {P}}\left[ {\mathcal {B}}_i'~|~\tau _i,~R^1[0,\tau _i]\right] ~\big |~{\mathcal {B}}_1',\ldots , {\mathcal {B}}_{i-1}'\right] < 1 - q \end{aligned}$$

and \({\mathbb {P}}\left[ \bigcap _{i=1}^{K}{\mathcal {B}}_i'\right]<(1-q)^K<\varepsilon \). It remains to show that the event in (3.9) implies \(\bigcap _{i=1}^{K}{\mathcal {B}}_i'\) for some choice of \(\delta \). It is well known (see, e.g., [11, Lemma 2.5]) that there exists \(c_* = c_*(\eta )>0\) such that

$$\begin{aligned} {\mathbb {P}}^0\left[ T_{w_i,\varepsilon 'r}<T_{0,4r}\right] \ge c_* \qquad \hbox {for all}\, i \,\hbox {and}\, w_i\in \partial B(0,s_i). \end{aligned}$$

Take \(\delta <\min (\delta ',c_*q)\). Then the event in (3.9) implies \(\bigcap _{i=1}^{K}{\mathcal {B}}_i'\). Indeed, if the event in (3.9) occurs,

  • since \({\mathrm {LE}}_1(\gamma \cup R^1[0,T^1(\partial A_2)])\subset B(0,s)^c\) and \({\mathrm {LE}}_2(\gamma \cup R^1[0,T^1(\partial A_2)])\) contains a path from \(\partial B(0,s)\) to \(\partial B(0,s-\eta r)\), all \(\tau _i\)’s are strictly smaller than \(T^1(\partial A_2)\),

  • since \(\Gamma _i\subset L'\), \({\mathbb {P}}^{2,0}\left[ R^2[0,T^2_{0,4r}]\cap \Gamma _i \ne \emptyset \right]< \delta < \delta '\), and

  • since \({\mathrm {cut}}(R^1[\tau _i,\tau _{i+1}];T^*_i)\subset L'\),

    $$\begin{aligned}&\min _{y\in B(v_i,\varepsilon ' r)}\,{\mathbb {P}}^{2,y}\left[ {\mathrm {cut}}\left( R^1[\tau _i,\tau _{i+1}];T^*_i\right) \cap R^2\left[ 0,T^2_{0,4r}\right] \ne \emptyset \right] \\&\quad \le \frac{{\mathbb {P}}^{2,0}\left[ {\mathrm {cut}}\left( R^1[\tau _i,\tau _{i+1}];T^*_i\right) \cap R^2\left[ 0,T^2_{0,4r}\right] \ne \emptyset \right] }{{\mathbb {P}}^{2,0}\left[ T^2_{v_i,\varepsilon 'r}<T^2_{0,4r}\right] }<\frac{\delta }{c_*} < q. \end{aligned}$$

The proof of Claim 3.6 is complete. \(\square \)

Proof of Lemma 3.3

Without loss of generality we may assume that s and \(r/s\) are large (possibly depending on K). Let \(\rho _i = \frac{r}{2^i}\), for \(i\in \{0,\ldots , I = \lfloor \log _2\frac{r}{s}\rfloor \}\), and consider the stopping times at which the random walk \(R^1\) jumps between different \(\partial B(v,\rho _i)\)'s,

$$\begin{aligned} \tau _0'= & {} T^1_{v,r}, ~~i'(0) = 0,\\ \tau _j'= & {} \min \left\{ t>\tau _{j-1}'~:~ R^1(t)\in \partial B(v,\rho _{i'(j)})\,\hbox {for some}\, i'(j)\ne i'(j-1)\right\} . \end{aligned}$$

The process \(i'(j)\) dominates a random walk on \(\{0,\ldots ,I\}\) with a drift towards 0 and an absorption at 0. In particular, if \(n_i' = \sharp \{j:i'(j)=i\}\), then there exists \(\lambda = \lambda (K)\) such that \({\mathbb {P}}^{1,0}[\sum _{i=1}^{I} n_i' > \lambda I] \le C(\frac{s}{r})^K\). Consider the annuli

$$\begin{aligned} A_{1,i}= & {} B(v,2\rho _{2i}){\setminus }\overline{B(v,\rho _{2i})}\quad \text {and}\quad A_{2,i} = B(v,4\rho _{2i}){\setminus }\overline{B\left( v,\frac{1}{2}\rho _{2i}\right) },\\&\qquad \text {for }1\le i\le \frac{I-1}{2} \end{aligned}$$

and the stopping times

$$\begin{aligned} \tau _1= & {} T^1(\partial A_{1,1}), ~~i(0) = 1,\\ \tau _j= & {} \min \left\{ t>\tau _{j-1}~:~ R^1(t)\in \partial A_{1,i(j)}\,\hbox {for some}\, i(j)\ne i(j-1) \right\} . \end{aligned}$$

Note that \((\tau _j)\) is a subsequence of \((\tau _j')\). Therefore, if \(n_i = \sharp \{j:i(j)=i\}\), then \({\mathbb {P}}^{1,0}[\sum _{i=1}^{\lfloor \frac{I-1}{2}\rfloor } n_i > \lambda I] \le C(\frac{s}{r})^K\). Let \(M = \lceil 6\lambda \rceil \); then

$$\begin{aligned} {\mathbb {P}}^{1,0}\left[ \sharp \left\{ 1\le i\le \frac{1}{2} I~:~ n_i>M\right\} >\frac{1}{6}I\right] \le C\left( \frac{s}{r}\right) ^K. \end{aligned}$$
(3.10)

For each j and \(m\in \{0,\ldots ,M-1\}\), define \(r_j = \rho _{2i(j)}\) and \(s_{j,m} = 2r_j - \frac{m}{M}r_j\), discontinuous paths \(\gamma _{i,j} = {\mathrm {LE}}(R^1[0,\tau _j-1]\cap A_{2,i})\) and \(\gamma _j = \gamma _{i(j),j}\), the loop erasures \(L_{k,j} = {\mathrm {LE}}_k(\gamma _j\cup R^1[\tau _j,\tau _{j+1}])\) for \(k=1,2\), and the event \({\mathcal {X}}_{j,m}\) that

  (a) \(L_{1,j}\subset B(v, s_{j,m})^c\),

  (b) \(L_{2,j}\cap B(v, s_{j,m+1})\ne \emptyset \),

  (c) \({\mathbb {P}}^{2,v}\left[ R^2[0,T^2_{v,4r}]\cap L_{j,m}'\ne \emptyset \right] <\delta _{{\mathrm {Claim}}~3.6}\), where \(L_{j,m}' = L_{1,j}\cup L_{2,j}[1,t]\) and \(t = \min \{i:L_{2,j}(i)\in \partial B(v,s_{j,m+1})\}\).

By Claim 3.6, \({\mathbb {P}}^{1,0}[{\mathcal {X}}_{j,m}| \tau _j<\infty , R^1[0,\tau _j]]<\varepsilon _{{\mathrm {Claim}}~3.6}\) and

$$\begin{aligned} {\mathbb {P}}^{1,0}\left[ {\mathop {\cup }\limits _{m=0}^{M-1}{\mathcal {X}}_{j,m}}~\big |~ \tau _j<\infty , R^1\left[ 0,\tau _j\right] \right] <M\,\varepsilon _{{\mathrm {Claim}}~3.6}. \end{aligned}$$

Let \({\mathcal {X}}_j = \cup _{m=0}^{M-1}{\mathcal {X}}_{j,m}\). Then the sequence of their indicators is dominated by a sequence of independent Bernoulli random variables with parameter \(M\,\varepsilon _{{\mathrm {Claim}}~3.6}\). In particular, by choosing \(\varepsilon _{{\mathrm {Claim}}~3.6}\) small enough,

$$\begin{aligned} {\mathbb {P}}^{1,0}\left[ \sharp \left\{ j\le \lambda I~:~{\mathcal {X}}_j\right\} >\frac{\lambda I}{M}\right] \le C\left( \frac{s}{r}\right) ^K. \end{aligned}$$
(3.11)

To finish the proof of Lemma 3.3, it suffices to show that the event in (3.2) implies one of the events in (3.10) or (3.11). We call an index i good if \(n_i\le M\) and none of the \({\mathcal {X}}_j\)'s occur for j's with \(i(j) = i\). The following claim is essentially [11, Sublemma 4.6.1].

Claim 3.7

Let i be a good index. For \(j>0\), let \(n_{i,j} = \sharp \{k<j:R^1(\tau _k)\in \partial A_{1,i}\}\), \(u_{i,j} = 2\rho _{2i} - \frac{n_{i,j}}{M}\rho _{2i}\), and \(U_{i,j} = B(v,2\rho _{2i}){\setminus }\overline{B(v,u_{i,j})}\). Let \(t_{i,j} = \inf \{t:\gamma _{i,j}(t)\in \partial B(v,u_{i,j})\}\), and define \(\gamma _{i,j}^* = \gamma _{i,j}[1,t_{i,j}]\) if \(t_{i,j}<\infty \) and \(\gamma _{i,j}^*=\emptyset \) otherwise. Then for all j, if \(\gamma _{i,j}^*\ne \emptyset \), then

$$\begin{aligned} {\mathbb {P}}^{2,v}\left[ \gamma _{i,j}^*\cap U_{i,j}\cap R^2\left[ 0,T^2_{v,4\rho _{2i}}\right] \ne \emptyset \right] \ge \delta _{{\mathrm {Claim}}~3.6}. \end{aligned}$$
(3.12)

Proof of Claim 3.7

The proof is the same as that of [11, Lemma 4.6.1]. We use induction on j. If \(R^1(\tau _j)\notin \partial A_{1,i}\), then \(R^1[\tau _j,\tau _{j+1}]\) does not enter \(A_{1,i}\), and thus it can only change \(\gamma _{i,j}^*\) by erasing a piece from its end, possibly leading to \(\gamma _{i,j+1}\cap \partial B(v,u_{i,j})=\emptyset \). (Note that in this case \(u_{i,j} =u_{i,j+1}\).) On the other hand, if \(R^1(\tau _j)\in \partial A_{1,i}\), then \({\mathcal {X}}_j\) does not occur, and, in particular, \({\mathcal {X}}_{j,n_{i,j}}\) does not occur, meaning that one of (a), (b), or (c) fails. If (a) fails, then \(\gamma _{i,j}^*\ne \emptyset \) and hence satisfies (3.12). Also, if \(\gamma _{i,j+1}^*\ne \emptyset \), then it contains \(\gamma _{i,j}^*\) and hence also satisfies (3.12). If (a) holds but (b) fails, then \(\gamma _{i,j+1}^*=\emptyset \). Finally, if (a) and (b) hold but not (c), then \(\gamma _{i,j+1}^* = L'_{j,n_{i,j}}\), and (3.12) holds. \(\square \)

Assume that the event in (3.2) occurs for some T. Let J be such that \(\tau _J\le T < \tau _{J+1}\). Since \(R^1[\tau _J,T]\subset A_{2,i(J)}\), Claim 3.7 shows that for each good index \(i\ne i(J)\), either \({\mathrm {LE}}(R^1[0,T])\) does not cross \(A_{1,i}\) from outside to inside or the first such crossing is hittable by \(R^2\). Since we assume that the event in (3.2) occurs, \({\mathrm {LE}}(R^1[0,T])\) crosses \(B(v,r){\setminus }\overline{B(v,s)}\) and, in particular, all \(A_{1,i}\)’s. Thus, for each good \(i\ne i(J)\),

$$\begin{aligned} {\mathbb {P}}^{2,v}\left[ {\mathrm {LE}}\left( R^1[0,T]\right) [0,\sigma ]\cap A_{1,i}\cap R^2\left[ 0,T^2_{v,4\rho _{2i}}\right] \ne \emptyset \right] \ge \delta _{{\mathrm {Claim}}~3.6}. \end{aligned}$$

By the Harnack inequality applied to the harmonic function

$$\begin{aligned} {\mathbb {P}}^{2,x}\left[ {\mathrm {LE}}\left( R^1[0,T]\right) [0,\sigma ]\cap A_{1,i}\cap R^2\left[ 0,T^2_{v,4\rho _{2i}}\right] \ne \emptyset \right] ,\qquad x\in B\left( v,\frac{1}{2}\rho _{2i}\right) , \end{aligned}$$

there exists a constant \(c_*>0\) (independent of i) such that

$$\begin{aligned}&{\mathbb {P}}^{2,x}\left[ {\mathrm {LE}}\left( R^1[0,T]\right) [0,\sigma ]\cap A_{1,i}\cap R^2\left[ 0,T^2_{v,4\rho _{2i}}\right] \ne \emptyset \right] \ge c_*\,\delta _{{\mathrm {Claim}}~3.6},\\&\quad \qquad \text {for all }x\in B\left( v,\frac{1}{2}\rho _{2i}\right) . \end{aligned}$$

Recall that we aim to show that the event in (3.2) implies one of the events in (3.10) or (3.11). Thus, assume that neither of these events occurs. This implies that at least \(\frac{1}{6} I\) indices between 1 and \(\frac{1}{2} I\) are good. Let \(n=\lfloor \frac{I}{12}\rfloor - 2\) and \(i_1>i_2>\cdots>i_n>1\) be good indices with \(i_k - i_{k+1}\ge 2\) and \(i_k\ne i(J)\). Then

$$\begin{aligned}&{\mathbb {P}}^{2,v}\left[ R^2\left[ 0,T^2_{v,r}\right] \cap {\mathrm {LE}}\left( R^1[0,T]\right) [0,\sigma ]\cap \left( B(v,r){\setminus } \overline{B(v,s)}\right) =\emptyset \right] \\&\quad \le \prod _{k=1}^{n-1}{\mathbb {P}}^{2,v}\left[ R^2\left[ T^2_{v,\frac{1}{2}\rho _{2i_k}},T^2_{v,4\rho _{2i_k}}\right] \cap {\mathrm {LE}}\left( R^1[0,T]\right) [0,\sigma ] \right. \\&\quad \left. =\emptyset ~\Big |~ R^2\left[ 0,T^2_{v,4\rho _{2i_{k+1}}}\right] \cap {\mathrm {LE}}\left( R^1[0,T]\right) [0,\sigma ]=\emptyset \right] \\&\quad \le \prod _{k=1}^{n-1}\max _{w\in B\left( v,\frac{1}{2}\rho _{2i_k}\right) }{\mathbb {P}}^{2,w}\left[ R^2\left[ 0,T^2_{v,4\rho _{2i_k}}\right] \cap {\mathrm {LE}}\left( R^1[0,T]\right) [0,\sigma ]=\emptyset \right] \\&\quad \le \left( 1 - c_*\, \delta _{{\mathrm {Claim}}~3.6}\right) ^{n-1}\le \left( 1 - c_*\, \delta _{{\mathrm {Claim}}~3.6}\right) ^{\frac{I}{12} - 4} < \left( \frac{s}{r}\right) ^\eta , \end{aligned}$$

for \(\eta \) small enough. This contradicts the assumption that the event in (3.2) occurs. Thus, we have shown that if neither of the events in (3.10) and (3.11) occurs, then the event in (3.2) does not occur either. The proof of Lemma 3.3 is complete. \(\square \)

4 Strong coupling of loop soups

In this section we prove Theorem 2.2 by giving an explicit coupling of the random walk and the Brownian loop soups satisfying the conditions of the theorem. In two dimensions, such a coupling was obtained in [20]. Our construction is an adaptation of the one from [20] to higher dimensions.

4.1 Preliminaries

In this subsection we prove two lemmas needed for the proof of Theorem 2.2.

Lemma 4.1

For any \(d\ge 2\),

$$\begin{aligned} p_{2n}(0,0) = 2\, \left( \frac{d}{4\pi n}\right) ^{\frac{d}{2}}\cdot \left( 1 - \frac{d}{8n} + O\left( \frac{1}{n^2}\right) \right) , \end{aligned}$$

as \(n\rightarrow \infty \).

Proof

Using the inverse Fourier transform, one can write

$$\begin{aligned} p_{2n}(0,0) = \frac{1}{(2\pi )^d}\,\int _{[-\pi ,\pi ]^d}\left( \frac{1}{d}\,\sum _{k=1}^d\cos (\theta _k)\right) ^{2n}\,d\theta . \end{aligned}$$

The result follows by analysing this integral, see, for instance, [1]. \(\square \)

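As a quick numerical sanity check of Lemma 4.1 (not used anywhere in the proofs), one can evaluate the Fourier integral from the proof on a uniform grid, which integrates trigonometric polynomials of degree smaller than the grid size exactly, and compare it with the stated expansion. The sketch below (Python with numpy assumed; the function names are ours, purely for illustration) does this for \(d=1,2\); the case \(d=1\) lies outside the statement of the lemma but obeys the same expansion and admits the exact cross-check \(p_{2n}(0,0)=\binom{2n}{n}4^{-n}\).

```python
import numpy as np
from math import comb, pi

def p2n_fourier(n, d, grid=2048):
    # p_{2n}(0,0) = (2 pi)^{-d} * integral over [-pi,pi]^d of
    # ((1/d) * sum_k cos(theta_k))^{2n}; the uniform-grid average below
    # is exact as long as 2n < grid (the integrand is a trig polynomial).
    theta = np.linspace(-pi, pi, grid, endpoint=False)
    if d == 1:
        return np.mean(np.cos(theta) ** (2 * n))
    if d == 2:
        t1, t2 = np.meshgrid(theta, theta, sparse=True)
        return np.mean((0.5 * (np.cos(t1) + np.cos(t2))) ** (2 * n))
    raise NotImplementedError("this sketch covers d = 1, 2 only")

def p2n_asymptotic(n, d):
    return 2 * (d / (4 * pi * n)) ** (d / 2) * (1 - d / (8 * n))

for n in (10, 100, 1000):   # d = 1: exact binomial vs. Fourier vs. expansion
    print(n, comb(2 * n, n) / 4 ** n, p2n_fourier(n, 1), p2n_asymptotic(n, 1))
for n in (10, 100, 500):    # d = 2: Fourier vs. expansion
    print(n, p2n_fourier(n, 2), p2n_asymptotic(n, 2))
```
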
Lemma 4.2

Let \(q = \frac{1}{d}\). For any \(D>0\) there exists \(C_{4.2}<\infty \) such that for every \(n\ge 1\), there exists a coupling \({\mathbb {Q}}_n\) of the d-dimensional random walk bridge \((X_t)_{t\in [0,2n]}\) of length 2n from 0 to 0 and the standard d-dimensional Brownian bridge \((B_t)_{t\in [0,1]}\) such that

$$\begin{aligned} {\mathbb {Q}}_n\left[ \sup _{0\le t\le 1}\left\| \frac{X_{2nt}}{\sqrt{2nq}} - B_t\right\| > C_{4.2}\cdot n^{-\frac{1}{4}}\log n\right] \le C_{4.2}\cdot n^{-D}. \end{aligned}$$

Proof

The one-dimensional case of the lemma follows from [20, Lemma 3.1]. (The result proved there is much stronger than the statement of Lemma 4.2: the exponent \(\frac{1}{4}\) in the event of the lemma is replaced by \(\frac{1}{2}\) in [20, Lemma 3.1].)

Fix \(d\ge 1\) and \(n\ge 1\). Let \((X^{i,m}, B^{i,m})_{1\le i\le d,\, m\ge 1}\) be independent pairs of one-dimensional random walk bridges and Brownian bridges, each pair coupled to satisfy the requirements of Lemma 4.2 in one dimension, where \(X^{i,m}\) is distributed as a random walk bridge of length 2m.

We will sample the desired d-dimensional random walk bridge by first specifying the coordinate direction of each of the 2n steps, and then choosing the signs of the steps along the specified coordinates (a schematic sampler following this two-stage recipe is given after the two bullet points below). Let \(L_n = (L^1_n,\ldots , L^d_n)\) be an independent multinomial sequence of length 2n with parameter \((q,\ldots ,q)\) conditioned on all of \(L^1_n(2n),\ldots , L^d_n(2n)\) being even. Namely,

$$\begin{aligned} {\mathbb {P}}[L_n \in \cdot ] = {\mathbb {P}}\left[ \left( \sum _{i=1}^kZ_i;\, 1\le k\le 2n\right) \in \cdot ~\Big |~\sum _{i=1}^{2n}Z_i\in (2{\mathbb {Z}})^d\right] , \end{aligned}$$

where \(Z_i = (Z_{i,1},\ldots ,Z_{i,d})\) are independent and \({\mathbb {P}}[Z_i = e_1] = \cdots = {\mathbb {P}}[Z_i = e_d] = q\), with \(e_1,\ldots , e_d\) being the canonical basis of \({\mathbb {R}}^d\). Let

$$\begin{aligned} M_1 = L^1_n(2n), \, \ldots ,\, M_d = L^d_n(2n) \end{aligned}$$

be the number of jumps of each of the coordinates in \(L_n\). Consider

$$\begin{aligned} X_k&= X^{1,M_1}_{L^1_n(k)} \, e_1 + \cdots + X^{d,M_d}_{L^d_n(k)}\, e_d,\quad 1\le k\le 2n,\\ B_t&= B^{1,M_1}_t \, e_1 + \cdots + B^{d,M_d}_t\, e_d,\quad 0\le t\le 1. \end{aligned}$$

Then

  • \((X_t)_{t\in [0,2n]}\) has the law of the d-dimensional random walk bridge,

  • \((B_t)_{t\in [0,1]}\) has the law of the standard d-dimensional Brownian bridge.

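The two-stage construction can be summarized by the following schematic sampler (Python; purely illustrative, with helper names of our choosing). It deliberately ignores the delicate part of the lemma, namely the KMT-type coupling of each one-dimensional bridge with its Brownian partner, and only shows how the conditioned multinomial allocation and the one-dimensional bridges fit together.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_bridge_1d(m):
    # Uniform SRW bridge of length m from 0 to 0 (m even): a uniformly
    # random arrangement of m/2 up-steps and m/2 down-steps.
    steps = np.array([1] * (m // 2) + [-1] * (m // 2))
    rng.shuffle(steps)
    return np.concatenate(([0], np.cumsum(steps)))

def sample_bridge_dd(n, d):
    # Stage 1: the allocation sequence L_n, a multinomial sequence
    # conditioned on every coordinate count being even (plain rejection).
    while True:
        coords = rng.integers(0, d, size=2 * n)
        M = np.bincount(coords, minlength=d)
        if np.all(M % 2 == 0):
            break
    # Stage 2: plug independent one-dimensional bridges into the slots.
    bridges = [sample_bridge_1d(int(M[i])) for i in range(d)]
    jumps = np.zeros(d, dtype=int)          # L^i_n(k), updated step by step
    X = np.zeros((2 * n + 1, d), dtype=int)
    for k, i in enumerate(coords, start=1):
        jumps[i] += 1
        X[k] = X[k - 1]
        X[k, i] = bridges[i][jumps[i]]
    return X

X = sample_bridge_dd(50, 3)
assert (X[0] == 0).all() and (X[-1] == 0).all()
```
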
It remains to show that this coupling satisfies the bound in the statement of the lemma. This will be a consequence of the following two claims.

Claim 4.3

For any \(D>0\) there exists \(C<\infty \) such that

$$\begin{aligned} {\mathbb {P}}\left[ \left| L^i_n(k) - qk\right| \le C(n\log n)^{\frac{1}{2}},\,\hbox {for all}\, 1\le i\le d\, \hbox {and} \,1\le k\le 2n \right] \ge 1 - Cn^{-D}. \end{aligned}$$

Proof of Claim 4.3

First note that

$$\begin{aligned} {\mathbb {P}}\left[ \sum _{i=1}^{2n}Z_i\in (2{\mathbb {Z}})^d\right]\ge & {} {\mathbb {P}}\left[ S_{2n}:= S^{1}_{L^1_n(2n)} \, e_1 + \cdots + S^{d}_{L^d_n(2n)}\, e_d = 0\right] \\= & {} p_{2n}(0,0) {\mathop {\ge }\limits ^{{\mathrm {L.}}4.1}}cn^{-\frac{d}{2}}, \end{aligned}$$

where \(S^1,\ldots , S^d\) are independent copies of the one-dimensional simple random walk on \({\mathbb {Z}}\) started from the origin (which implies that S is a d-dimensional simple random walk).

On the other hand, by [17, Corollary 12.2.7], for any \(D>0\) there exists \(C<\infty \) such that for each \(1\le i\le d\),

$$\begin{aligned} {\mathbb {P}}\left[ \max _{1\le k\le 2n}\left| \sum _{j=1}^kZ_{j,i} - qk\right| > C(n\log n)^{\frac{1}{2}}\right] \le Cn^{-D-\frac{d}{2}}. \end{aligned}$$

The two estimates above, together with the definition of \(L_n\), give the claim. \(\square \)

Claim 4.4

Let \((\ell (k))_{0\le k\le 2n}\) be an integer sequence with

  • \(\ell (0) = 0\),

  • for all \(1\le k\le 2n\), \(0\le \ell (k) - \ell (k-1)\le 1\),

  • for some \(1\le C_\ell <\infty \), \(\max _{1\le k\le 2n}|\ell (k) - qk| \le C_\ell (n\log n)^{\frac{1}{2}}\),

  • \(m:= \ell (2n)\) is even.

Let \((X^m,B^m)\) be a pair consisting of a one-dimensional random walk bridge of length m and a standard one-dimensional Brownian bridge, coupled to satisfy the requirements of Lemma 4.2 in one dimension. Let

$$\begin{aligned} \widetilde{X}_k = X^m_{\ell (k)},\quad 0\le k\le 2n \end{aligned}$$

be the time reparametrization of the bridge \(X^m\) induced by the sequence \(\ell \), and \((\widetilde{X}_{2nt})_{t\in [0,1]}\) its linear interpolation. Then for any \(D>0\), there exists \(C<\infty \) (depending on \(\ell \) only through \(C_\ell \)) such that

$$\begin{aligned} {\mathbb {P}}\left[ \sup _{0\le t\le 1}\left| \frac{\widetilde{X}_{2nt}}{\sqrt{2nq}} - B^m_t\right| > C \cdot n^{-\frac{1}{4}}\log n\right] \le C\cdot n^{-D}. \end{aligned}$$

Proof of Claim 4.4

Let \(D>0\). By assumption on \((X^m,B^m)\) and the fact that \(|m-2nq| = o(n)\), there exists \(C<\infty \) such that

$$\begin{aligned} {\mathbb {P}}\left[ \sup _{0\le t\le 1}\left| \frac{X^m_{mt}}{\sqrt{m}} - B^m_t\right| > C \cdot n^{-\frac{1}{4}}\log n\right] \le C\cdot n^{-D}. \end{aligned}$$

Thus, it suffices to show that there exists \(C<\infty \) such that

$$\begin{aligned} {\mathbb {P}}\left[ \sup _{0\le t\le 1}\left| \frac{\widetilde{X}_{2nt}}{\sqrt{2nq}} - \frac{X^m_{mt}}{\sqrt{m}}\right| > C \cdot n^{-\frac{1}{4}}\log n\right] \le C\cdot n^{-D}. \end{aligned}$$

Since \(\sqrt{m} = \sqrt{2nq}\left( 1 + O\left( (\frac{\log n}{n})^{\frac{1}{2}}\right) \right) \), there exist \(C,C'<\infty \) such that

$$\begin{aligned} {\mathbb {P}}\left[ \sup _{0\le t\le 1}\left| \frac{X^m_{mt}}{\sqrt{2nq}} - \frac{X^m_{mt}}{\sqrt{m}}\right|> C \cdot n^{-\frac{1}{4}}\log n\right]\le & {} {\mathbb {P}}\left[ \sup _{0\le t\le 1}\left| X^m_{mt}\right|> C' \cdot n^{\frac{3}{4}}\log ^{\frac{1}{2}} n\right] \\= & {} \frac{{\mathbb {P}}\left[ {\mathop {\max }\limits _{0\le k\le m}\left| S_k\right| } > C' \cdot n^{\frac{3}{4}}\log ^{\frac{1}{2}} n\right] }{p_m(0,0)} \\\le & {} C\cdot n^{-D}, \end{aligned}$$

where S is a one-dimensional simple random walk on \({\mathbb {Z}}\). The last inequality follows from Lemma 4.1 and [17, Corollary 12.2.7].

It remains to show that there exists \(C<\infty \) such that

$$\begin{aligned} {\mathbb {P}}\left[ \sup _{0\le t\le 1}\left| \widetilde{X}_{2nt} - X^m_{mt}\right| > C \cdot n^{\frac{1}{4}}\log n\right] \le C\cdot n^{-D}. \end{aligned}$$

By the definition of \(\widetilde{X}\), the probability on the left hand side equals

$$\begin{aligned}&{\mathbb {P}}\left[ \sup _{0\le t\le 1}\left| X^m_{\ell (2nt)} - X^m_{mt}\right|> C \cdot n^{\frac{1}{4}}\log n\right] \\&\quad \le \frac{2n\cdot {\mathbb {P}}\left[ \max _{0\le k\le 4C_\ell (n\log n)^{\frac{1}{2}}}\left| S_k\right| > C \cdot n^{\frac{1}{4}}\log n\right] }{p_m(0,0)}\\&\quad \le C\cdot n^{-D}, \end{aligned}$$

where S is again a one-dimensional simple random walk on \({\mathbb {Z}}\). In the first inequality we used the fact that \(|\ell (\lfloor 2nt\rfloor ) - mt| \le 3C_\ell (n\log n)^{\frac{1}{2}}\), and in the second the exponential Markov inequality and Lemma 4.1. By putting all the estimates together we get the claim. \(\square \)

To complete the proof of Lemma 4.2, we call the sequence \(L_n=(L_n^1,\ldots ,L_n^d)\) C-good if each one-dimensional sequence \(L^i_n\) satisfies the assumptions of Claim 4.4 with \(C_\ell = C\). By Claim 4.3, for each \(D>0\) there exists \(C_1<\infty \) such that \({\mathbb {P}}[L_n\ \text {is not } C_1\text {-good}] \le C_1 \cdot n^{-D}\). By Claim 4.4 applied to each of the d projections of X and B, there exists \(C<\infty \) such that

$$\begin{aligned} {\mathbb {P}}\left[ \sup _{0\le t\le 1}\left\| \frac{X_{2nt}}{\sqrt{2nq}} - B_t\right\| > C\cdot n^{-\frac{1}{4}}\log n~\Big |~L_n\ \text {is } C_1\text {-good}\right] \le C\, n^{-D}. \end{aligned}$$

The two estimates together give the result of the lemma. \(\square \)

4.2 Proof of Theorem 2.2

The main strategy of the proof is similar to that of [20]. We will pair up random walk loops of length \(2n\ge N^{\theta }\) rooted at \(z \in [-rN,rN]^{d}\) with Brownian loops of time duration between \(\frac{2(n-1)}{d} + {\alpha }\) and \(\frac{2n}{d} + {\alpha }\) rooted in \(z + [-\frac{1}{2},\frac{1}{2}]^d\).

Fix \(\lambda >0\), \(r>0\), and \(N\ge 1\). For \(n\ge 1\), let

$$\begin{aligned} \widetilde{q}_n = \frac{1}{2n}\cdot p_{2n}(0,0) {\mathop {=}\limits ^{\mathrm {L.}4.1}} \frac{1}{n}\, \left( \frac{d}{4\pi n}\right) ^{\frac{d}{2}}\cdot \left( 1 - \frac{d}{8n} + O\left( \frac{1}{n^2}\right) \right) , \end{aligned}$$

and

$$\begin{aligned} q_n = \int _{\frac{2(n-1)}{d} + {\alpha }}^{\frac{2n}{d} + {\alpha }}\, \frac{ds}{s\cdot (2\pi s)^{\frac{d}{2}}} = \frac{1}{n}\, \left( \frac{d}{4\pi n}\right) ^{\frac{d}{2}}\cdot \left( 1 - \frac{d}{8n} + O\left( \frac{1}{n^2}\right) \right) . \end{aligned}$$

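The integral defining \(q_n\) is elementary: in closed form,

$$\begin{aligned} q_n = (2\pi )^{-\frac{d}{2}}\cdot \frac{2}{d}\cdot \left[ \left( \frac{2(n-1)}{d} + {\alpha }\right) ^{-\frac{d}{2}} - \left( \frac{2n}{d} + {\alpha }\right) ^{-\frac{d}{2}}\right] , \end{aligned}$$

and expanding both terms in powers of \(\frac{1}{n}\), with \({\alpha }\) as in (2.3), yields the asymptotics displayed above.
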
The main reason for defining \({\alpha }\) as in (2.3) is to make the first two terms in the expansions of \(\widetilde{q}_n\) and \(q_n\) coincide. In particular,

$$\begin{aligned} \left| q_n - \widetilde{q}_n\right| \le C\cdot n^{-\frac{d}{2} - 3}. \end{aligned}$$
(4.1)

Let

$$\begin{aligned} \widetilde{N}(z,n;\cdot ),\quad n\in \{1,2,\ldots \}, z\in {\mathbb {Z}}^d, \end{aligned}$$

be independent Poisson point processes on \((0,\infty )\) with intensity parameter \(\widetilde{q}_n\), and

$$\begin{aligned} N(z,n;\cdot ),\quad n\in \{1,2,\ldots \}, z\in {\mathbb {Z}}^d, \end{aligned}$$

independent Poisson point processes on \((0,\infty )\) with intensity parameter \(q_n\). We couple all these processes so that the pairs of processes

$$\begin{aligned} \left\{ \widetilde{N}(z,n;\cdot ), N(z,n;\cdot )\right\} ,\quad n\in \{1,2,\ldots \}, z\in {\mathbb {Z}}^d, \end{aligned}$$

are independent and

$$\begin{aligned} {\mathbb {P}}\left[ \widetilde{N}(z,n;t)\ne N(z,n;t)\right] \le C\cdot t\cdot n^{-\frac{d}{2} - 3}. \end{aligned}$$

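Such a coupling is standard: realize both counting processes as thinnings of a single Poisson process with the larger rate, so that they disagree with probability at most \(t\,|q_n-\widetilde{q}_n|\), which is \(\le C\, t\, n^{-\frac{d}{2}-3}\) by (4.1). A minimal sketch of the fixed-time version (Python, illustrative only; the function name is ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def coupled_poisson_counts(t, a, b):
    # Couple N_a ~ Poisson(t * a) and N_b ~ Poisson(t * b) by thinning a
    # single count with the larger rate; the counts differ only if some
    # point is thinned, so P[N_a != N_b] <= t * |a - b|.
    lo, hi = min(a, b), max(a, b)
    n_hi = rng.poisson(t * hi)
    n_lo = rng.binomial(n_hi, lo / hi) if hi > 0 else 0
    return (n_lo, n_hi) if a <= b else (n_hi, n_lo)
```
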
Let

$$\begin{aligned} \widetilde{L}(z,n;m),\quad n\in \{1,2,\ldots \}, z\in {\mathbb {Z}}^d, m\in \{1,2,\ldots \}, \end{aligned}$$

be independent random walk bridges of length 2n rooted at 0, and

$$\begin{aligned} L(z,n;m),\quad n\in \{1,2,\ldots \}, z\in {\mathbb {Z}}^d, m\in \{1,2,\ldots \}, \end{aligned}$$

independent standard Brownian bridges coupled with the respective \(\widetilde{L}(z,n;m)\) as in Lemma 4.2. All the \(\widetilde{L}\)'s and L's are assumed to be independent of all the \(\widetilde{N}\)'s and N's.

The random walk loop soup \({\mathcal R}^\lambda \) of intensity \(\lambda \) is defined as in (2.2) as the collection (multiset) of loops

$$\begin{aligned} \left\{ z+\widetilde{L}(z,n;m)~:~n\in \{1,2,\ldots \},\, z\in {\mathbb {Z}}^d,\, m\in \left\{ 1,\ldots , \widetilde{N}(z,n;\lambda )\right\} \right\} . \end{aligned}$$

In order to define the Brownian loop soup, we introduce more random variables. Let

$$\begin{aligned} Y(z,n;m),\quad n\in \{1,2,\ldots \}, z\in {\mathbb {Z}}^d, m\in \{1,2,\ldots \}, \end{aligned}$$

be independent random variables uniformly distributed on \([-\frac{1}{2},\frac{1}{2}]^d\), and

$$\begin{aligned} T(z,n;m),\quad n\in \{1,2,\ldots \}, z\in {\mathbb {Z}}^d, m\in \{1,2,\ldots \}, \end{aligned}$$

independent real-valued random variables with density

$$\begin{aligned} \propto \, \frac{1}{t\cdot (2\pi t)^{\frac{d}{2}}},\qquad \frac{2(n-1)}{d} + {\alpha }\le t\le \frac{2n}{d} + {\alpha }. \end{aligned}$$

We assume that the Y’s and T’s are independent and also independent of all the previously defined variables.
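
The duration variables are easy to sample by inversion: writing \(a = \frac{2(n-1)}{d}+{\alpha }\) and \(b = \frac{2n}{d}+{\alpha }\), the normalized distribution function of \(T(z,n;m)\) is \(F(t) = \frac{a^{-d/2}-t^{-d/2}}{a^{-d/2}-b^{-d/2}}\), so \(T = \left( a^{-d/2} - U\left( a^{-d/2}-b^{-d/2}\right) \right) ^{-2/d}\) for U uniform on [0, 1]. A sketch (Python, illustrative only; the function name is ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_duration(n, d, alpha):
    # Inverse-CDF sampling for the density proportional to t^{-1 - d/2}
    # on the interval [2(n-1)/d + alpha, 2n/d + alpha].
    a = 2 * (n - 1) / d + alpha
    b = 2 * n / d + alpha
    u = rng.uniform()
    return (a ** (-d / 2) - u * (a ** (-d / 2) - b ** (-d / 2))) ** (-2 / d)
```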

Finally, consider an independent Brownian loop soup \({\mathcal B}_{\alpha }\) of loops of time duration \(\le {\alpha }\), and the corresponding restriction of \({\mathcal B}_{\alpha }\) to \({\mathcal {C}}\times (0,\lambda )\), \({\mathcal B}_{\alpha }^\lambda \) (the loops “appearing before time \(\lambda \)”).

To generate the Brownian loop soup, we first rescale each loop \(L(z,n;m)\) so that its time duration is \(T(z,n;m)\) and translate it so that its root is at \(z+ Y(z,n;m)\), obtaining the loop \(L^*(z,n;m)\). The Brownian loop soup \({\mathcal B}^\lambda \) of intensity \(\lambda \) is then the collection of loops

$$\begin{aligned} \left\{ L^*(z,n;m)~:~n\in \{1,2,\ldots \},\, z\in {\mathbb {Z}}^d,\, m\in \left\{ 1,\ldots , N(z,n;\lambda )\right\} \right\} \cup {\mathcal B}_{\alpha }^\lambda . \end{aligned}$$

It remains to show that the constructed coupling satisfies all the requirements of the theorem. We would like to pair up random walk loops from the multiset

$$\begin{aligned} \left\{ z+\widetilde{L}(z,n;m)~:~ m\in \left\{ 1,\ldots , \widetilde{N}(z,n;\lambda )\right\} \right\} \end{aligned}$$

with Brownian loops from the set

$$\begin{aligned} \left\{ L^*(z,n;m)~:~ m\in \left\{ 1,\ldots , N(z,n;\lambda )\right\} \right\} , \end{aligned}$$

for all n and z such that \(2n\ge N^\theta \) and \(|z|\le rN\).

First, we estimate the probability that the two cardinalities are different,

$$\begin{aligned}&{\mathbb {P}}\left[ N(z,n;\lambda )\ne \widetilde{N}(z,n;\lambda )~\text {for some}~n\ge \frac{1}{2}N^\theta , |z|\le rN \right] \\&\quad {\mathop {\le }\limits ^{4.1}}C(rN)^d\,\lambda \sum _{2n\ge N^\theta } n^{-\frac{d}{2} - 3} \\&\quad \le C\,\lambda \,r^d\,N^{d-\theta (\frac{d}{2} + 2)}. \end{aligned}$$

Next, we rule out the existence of big loops,

$$\begin{aligned} {\mathbb {P}}\left[ \widetilde{N}(z,n;\lambda )>0~\text {for some}~n\ge N^3, |z|\le rN \right]\le & {} C(rN)^d\,\lambda \sum _{n\ge N^3} n^{-\frac{d}{2} - 1} \\\le & {} C\,\lambda \,r^d\,N^{-\frac{d}{2}}. \end{aligned}$$

Now, we bound the probability of having too many loops of a given size rooted at the same vertex,

$$\begin{aligned} {\mathbb {P}}\left[ \widetilde{N}(z,n;\lambda )\ge N^5~\text {for some}~\frac{1}{2}N^\theta \le n\le N^3, |z|\le rN \right]\le & {} C(rN)^d\,N^3\,\frac{\lambda N^{-\theta \left( \frac{d}{2} + 1\right) }}{N^5}\\\le & {} C\,\lambda \,r^d\,N^{d-\theta \left( \frac{d}{2} + 2\right) }, \end{aligned}$$

where in the first inequality we used the Markov inequality and the fact that \(\widetilde{N}(z,n;\lambda )\) is a Poisson random variable with parameter \(\lambda \widetilde{q}_n\).

By putting all the bounds together and using Lemma 4.2,

$$\begin{aligned}&{\mathbb {P}}\left[ \begin{array}{c}\hbox {there exist loops}\, L^*(z,n;m)\,\,\mathrm{and}\,\, z+ \widetilde{L}(z,n;m)\\ \hbox {for some}\, n\ge \frac{1}{2} N^\theta , |z|\le rN, m\le \widetilde{N}(z,n;\lambda )\\ \hbox {such that sup-distance between them is} \ge C_{4.2}N^{\frac{3}{4}}\log N \end{array} \right] \\&\quad \le C\, \lambda \, r^d\, N^{-\min \left( \frac{d}{2},\,\theta \left( \frac{d}{2}+2\right) -d\right) } + C (rN)^d\cdot N^3\cdot N^5\cdot N^{-\theta D} \\&\quad \le C\, (\lambda +1)\, r^d\, N^{-\min \left( \frac{d}{2},\,\theta \left( \frac{d}{2}+2\right) -d\right) }, \end{aligned}$$

by choosing D sufficiently large. The proof is complete. \(\square \)

5 LEW scaling limit \(+\) Brownian loop soup \(=\) Brownian motion

In this section we prove Theorem 1.1 using the Beurling-type estimate of Theorem 3.1 and the strong coupling of loop soups from Theorem 2.2. We will focus here, without further mention, on the three-dimensional case. The two-dimensional case can be treated similarly, and often with less effort, so we leave the details to the interested reader.

5.1 Preliminaries

Recall the notation for random walk loop soups from Sect. 2.2. Let U be a connected open subset of \({\mathbb {R}}^3\) with smooth boundary. Let \(0<\varepsilon <\delta \) be sufficiently small so that all the finite connected components of \(U^c\) have diameter \(>\delta \). For \(n\ge 1\), let \(U_n = nU\) and use the same notation for \(nU\cap {\mathbb {Z}}^3\). Let \(\lambda >0\). The first result states that it is unlikely that \({\mathcal R}^\lambda \) contains a loop of large diameter which is contained in \(U_n\) and comes very close to the boundary of \(U_n\).

Proposition 5.1

There exist \(\alpha >0\) and \(C = C(\alpha ,U)<\infty \) such that for all \(\lambda >0\), \(0<\varepsilon <\delta \) as above, and \(n\ge 1\),

$$\begin{aligned} {\mathbb {P}}\left[ \begin{array}{c} {\textit{There exists}}\, \ell \in {\mathcal R}^\lambda \, {\textit{with diameter}}\, >\,\delta n\\ {\textit{such that}} \, \ell \subset U_n\, {\textit{and}}\, \ell \nsubseteq U_n^{-\varepsilon n} \end{array} \right] \le C\,\lambda \,\frac{\varepsilon ^{\alpha }}{\delta ^5}. \end{aligned}$$
(5.1)

Proof

Let \(\varepsilon<\gamma <\delta \). We will prove that for some \(\eta >0\) and \(C=C(U,\eta )<\infty \),

$$\begin{aligned} {\mu ^{\mathrm {rwl}}}\left[ \ell ~:~\hbox {diameter of}\, \ell \, {\hbox {is}} >\delta n, \ell \subset U_n, \hbox {and}\, \ell \nsubseteq U_n^{-\varepsilon n}\right] \le C\cdot \left( \frac{\gamma ^\eta }{\delta ^{\eta +3}} + \frac{\varepsilon ^\eta }{\gamma ^{\eta +3}}\right) . \end{aligned}$$
(5.2)

Optimizing over \(\gamma \) then gives that the above measure is \(\le C\,\varepsilon ^\alpha \,\delta ^{-5}\) for \(\alpha = \frac{\eta ^2}{2\eta + 3}\). Since the probability in (5.1) equals

$$\begin{aligned} 1 - \exp \left( -\lambda \,{\mu ^{\mathrm {rwl}}}\left[ \ell ~:\hbox {diameter of}\, \ell \, {\hbox {is}} >\delta n, \ell \subset U_n, \hbox {and}\, \ell \nsubseteq U_n^{-\varepsilon n}\right] \right) , \end{aligned}$$

the result follows. Thus, it suffices to prove (5.2). Denote by \(L_n\) the set of all loops \(\ell \) with diameter \(>\delta n\), \(\ell \subset U_n\), and \(\ell \nsubseteq U_n^{-\varepsilon n}\). By the definition of \({\mu ^{\mathrm {rwl}}}\),

$$\begin{aligned} {\mu ^{\mathrm {rwl}}}[L_n]= & {} \sum _{z\in {\mathbb {Z}}^3\cap U_n}\,\sum _{k\ge 1} \frac{1}{2k}\,{\mathbb {P}}^{z}\left[ R(2k) = z,~R[0,2k]\in L_n\right] \\\le & {} C(U)n^3\cdot \max _{z\in {\mathbb {Z}}^3\cap U_n}\,\sum _{k\ge 1} \frac{1}{2k}\,{\mathbb {P}}^{z}\left[ R(2k) = z,~R[0,2k]\in L_n\right] . \end{aligned}$$

Let \(z\in {\mathbb {Z}}^3\cap U_n\). We consider separately two cases: \(k\le \gamma ^2 n^2\) and \(k\ge \gamma ^2 n^2\).

If \(k\le \gamma ^2 n^2\), then by the time reversibility of the random walk,

$$\begin{aligned} {\mathbb {P}}^{z}\left[ R(2k) = z,~R[0,2k]\in L_n\right] \le 2\,{\mathbb {P}}^z\left[ T_{z,\frac{1}{2}\delta n} \le k,~R(2k)=z\right] . \end{aligned}$$
(5.3)

By the local central limit theorem, for each \(y\in {\mathbb {Z}}^3\) and m,

$$\begin{aligned} {\mathbb {P}}^y\left[ R(m) = z\right] \le C\,m^{-\frac{3}{2}}\,e^{-\frac{3|y-z|^2}{2m}}. \end{aligned}$$

Using the strong Markov property on exiting from \(B(z,\frac{1}{2}\delta n)\) and the local central limit theorem, one gets from (5.3) that

$$\begin{aligned} {\mathbb {P}}^{z}\left[ R(2k) = z,~R[0,2k]\in L_n\right] \le C\,k^{-\frac{3}{2}} e^{-\frac{3(\delta n)^2}{16k}} \le C\,e^{-\frac{3}{32}\,\left( \frac{\delta }{\gamma }\right) ^2}\,k^{-\frac{3}{2}} e^{-\frac{3(\delta n)^2}{32k}}. \end{aligned}$$

Thus,

$$\begin{aligned}&n^3\,\sum _{k=1}^{\lfloor \gamma ^2 n^2\rfloor } \frac{1}{2k}\,{\mathbb {P}}^{z}\left[ R(2k) = z,~R[0,2k]\in L_n\right] \\&\quad \le C\,e^{-\frac{3}{32}\,\left( \frac{\delta }{\gamma }\right) ^2}\,n^3\,\sum _{k=1}^\infty k^{-\frac{5}{2}} e^{-\frac{3(\delta n)^2}{32k}} \\&\quad \le C'\,e^{-\frac{3}{32}\,\left( \frac{\delta }{\gamma }\right) ^2}\,\frac{1}{\delta ^3}. \end{aligned}$$

Note that for any \(\eta >0\) there exists \(C = C(\eta )<\infty \) such that \(e^{-\frac{3}{32}\,\left( \frac{\delta }{\gamma }\right) ^2}\le C\,\left( \frac{\gamma }{\delta }\right) ^\eta \). Thus, we have the first part of the bound in (5.2).

Next we consider \(k\ge \gamma ^2 n^2\). By the time reversibility of the random walk,

$$\begin{aligned}&{\mathbb {P}}^{z}\left[ R(2k) = z,~R[0,2k]\in L_n\right] \nonumber \\&\quad \le 2\,{\mathbb {P}}^z\left[ T\left( \left( U_n^{-\varepsilon n}\right) ^c\right) \le k,~T(\partial U_n) > \frac{3}{2}k,~R(2k) = z\right] . \end{aligned}$$
(5.4)

It is well known (see, e.g., [11, Lemma 2.5]) that there exist \(\eta >0\) and \(C=C(U)<\infty \) such that for all \(y\in U_n{\setminus } U_n^{-\varepsilon n}\),

$$\begin{aligned} {\mathbb {P}}^y\left[ T(\partial U_n)>\frac{1}{2}\gamma ^2 n^2\right] \le C\left( \frac{\varepsilon }{\gamma }\right) ^\eta . \end{aligned}$$

By the strong Markov property at time \(T\left( \left( U_n^{-\varepsilon n}\right) ^c\right) \) and the above inequality, as well as the Markov property at time \(\frac{3}{2}k\) and the local central limit theorem, one gets from (5.4) that

$$\begin{aligned} {\mathbb {P}}^{z}\left[ R(2k) = z,~R[0,2k]\in L_n\right] \le C\,\left( \frac{\varepsilon }{\gamma }\right) ^\eta \,k^{-\frac{3}{2}} . \end{aligned}$$

Thus,

$$\begin{aligned}&n^3\,\sum _{k\ge \lfloor \gamma ^2 n^2\rfloor } \frac{1}{2k}\,{\mathbb {P}}^{z}\left[ R(2k) = z,~R[0,2k]\in L_n\right] \\&\quad \le C\,\left( \frac{\varepsilon }{\gamma }\right) ^\eta \,n^3\,\sum _{k\ge \lfloor \gamma ^2 n^2\rfloor } k^{-\frac{5}{2}} \le C'\,\left( \frac{\varepsilon }{\gamma }\right) ^\eta \,\frac{1}{\gamma ^3}. \end{aligned}$$

This gives us the second part of the bound in (5.2). The proof of Proposition 5.1 is complete. \(\square \)

Consider a simple random walk \(R^1\) from 0 and an independent random walk loop soup \({\mathcal R}^\lambda \) defined on the same probability space.

Proposition 5.2

There exist \(\alpha >0\) and \(C = C(\alpha )<\infty \) such that for all \(\lambda >0\), \(0<\varepsilon <\delta \), and \(n\ge 1\),

$$\begin{aligned} {\mathbb {P}}\left[ \begin{array}{c} {\textit{There exists}} \,\ell \in {\mathcal R}^\lambda \, {\textit{with diameter}} >\delta n\, {\textit{such that}}\, \ell \,\subset B(0,n),\\ {{\mathrm {dist}}\left( \ell ,\,{\mathrm {LE}}\left( R^1\left[ 0,T^1_{0,n}\right] \right) \right) <\varepsilon n\,\, {\textit{and}}\, \ell \cap {\mathrm {LE}}\left( R^1\left[ 0,T^1_{0,n}\right] \right) =\emptyset } \end{array} \right] \le C\,\left( \sqrt{\varepsilon } + \lambda \,\frac{\varepsilon ^{\alpha }}{\delta ^5}\right) . \end{aligned}$$
(5.5)

Proof

For \(\eta >0\), define the event

$$\begin{aligned} A = \left\{ \begin{array}{c} \hbox {For all}\, x\in B(0,n)\, \hbox {with} \,{\mathrm {dist}}\left( x,{\mathrm {LE}}\left( R^1\left[ 0,T^1_{0,n}\right] \right) \right) \le \varepsilon n,\\ {\mathbb {P}}^{2,x}\left[ R^2\left[ 0,T^2_{x,\varepsilon ^{\frac{1}{4}}n}\right] \cap {\mathrm {LE}}\left( R^1\left[ 0,T^1_{0,n}\right] \right) =\emptyset \right] <\varepsilon ^\eta \end{array} \right\} . \end{aligned}$$

By Theorem 3.1, there exist \(\eta >0\) and \(C=C(\eta )<\infty \) such that \({\mathbb {P}}^{1,0}[A^c]\le C\,\sqrt{\varepsilon }\). This gives the first part of the bound in (5.5), and it remains to estimate the probability of the event in (5.5) intersected with A. By the definition of the Poisson point process and the independence of \(R^1\) and \({\mathcal R}^\lambda \), this probability equals

$$\begin{aligned}&{\mathbb {E}}^{1,0}\left[ \mathbbm {1}_A\cdot \left( 1 - \exp \left( -\lambda {\mu ^{\mathrm {rwl}}}\left[ \ell ~:~ \begin{array}{c} \hbox {diameter of}\, \ell \, \hbox {is}>\delta n, \ell \subset B(0,n),\\ {\mathrm {dist}}\left( \ell ,{\mathrm {LE}}\left( R^1\left[ 0,T^1_{0,n}\right] \right) \right)<\varepsilon n,\\ \hbox {and}\, \ell \cap {\mathrm {LE}}\left( R^1\left[ 0,T^1_{0,n}\right] \right) =\emptyset \end{array} \right] \right) \right) \right] \\&\quad \le {\mathbb {E}}^{1,0}\left[ \mathbbm {1}_A\cdot \lambda {\mu ^{\mathrm {rwl}}}\left[ \ell ~:~ \begin{array}{c} \hbox {diameter of} \,\ell \, \hbox {is} >\delta n, \ell \subset B(0,n),\\ {\mathrm {dist}}\left( \ell ,{\mathrm {LE}}\left( R^1\left[ 0,T^1_{0,n}\right] \right) \right) <\varepsilon n,\\ \hbox {and}\, \ell \cap {\mathrm {LE}}\left( R^1\left[ 0,T^1_{0,n}\right] \right) =\emptyset \end{array} \right] \right] \\&\quad =: {\mathbb {E}}^{1,0}\left[ \mathbbm {1}_A\cdot \lambda {\mu ^{\mathrm {rwl}}}[L_n]\right] , \end{aligned}$$

where we denoted by \(L_n\) the set of loops \(\ell \subset B(0,n)\) with diameter \(>\delta n\) and such that \({\mathrm {dist}}(\ell ,{\mathrm {LE}}(R^1[0,T^1_{0,n}]))<\varepsilon n\) and \(\ell \cap {\mathrm {LE}}(R^1[0,T^1_{0,n}])=\emptyset \). It suffices to prove that for some \(\alpha >0\) and \(C=C(\alpha )<\infty \),

$$\begin{aligned} \mathbbm {1}_A\cdot {\mu ^{\mathrm {rwl}}}[L_n]\le C\,\frac{\varepsilon ^{\alpha }}{\delta ^5}. \end{aligned}$$

By the definition of \({\mu ^{\mathrm {rwl}}}\),

$$\begin{aligned} {\mu ^{\mathrm {rwl}}}[L_n]= & {} \sum _{z\in {\mathbb {Z}}^3\cap B(0,n)}\,\sum _{k\ge 1} \frac{1}{2k}\, {\mathbb {P}}^{2,z}\left[ R^2(2k) = z,~ R^2[0,2k]\in L_n\right] \\\le & {} C\,n^3\,\max _{z\in {\mathbb {Z}}^3\cap B(0,n)}\,\sum _{k\ge 1} \frac{1}{2k}\,{\mathbb {P}}^{2,z}\left[ R^2(2k) = z,~ R^2[0,2k]\in L_n\right] . \end{aligned}$$

Let \(\varepsilon<\gamma <\delta \) be a parameter to be fixed later. We first consider the case \(k\le \gamma ^2n^2\). Exactly as in the proof of Proposition 5.1 [see below (5.3)], there exists \(C<\infty \) such that

$$\begin{aligned} n^3\,\sum _{k=1}^{\lfloor \gamma ^2n^2\rfloor } \frac{1}{2k}\, {\mathbb {P}}^{2,z}\left[ R^2(2k) = z,~ R^2[0,2k]\in L_n\right] \le C\,\frac{\gamma }{\delta }\cdot \frac{1}{\delta ^3}. \end{aligned}$$

We consider next the case \(k\ge \gamma ^2n^2\). Define \(S = {\mathrm {LE}}(R^1[0,T^1_{0,n}])\). By the time reversibility of the random walk,

$$\begin{aligned}&{\mathbb {P}}^{2,z}\left[ R^2(2k) = z,~R^2[0,2k]\in L_n\right] \le 2\,{\mathbb {P}}^{2,z}\left[ R^2[0,2k]\subset B(0,n),~{\mathrm {dist}}\left( R^2[0,k],S\right) \right. \\&\quad \left. \le \varepsilon n,~R^2\left[ 0,\frac{3}{2}k\right] \cap S=\emptyset ,~R^2(2k) = z\right] . \end{aligned}$$

Let \(T^*=\min \{t:{\mathrm {dist}}(R^2[0,t],S)\le \varepsilon n\}\) and \(T^{**} = \inf \{t>T^*:R^2(t)\notin B(R^2(T^*),\varepsilon ^{\frac{1}{4}}n)\}\). By the Markov inequality,

$$\begin{aligned} {\mathbb {P}}^{2,z}\left[ T^*<\infty ,~T^{**}>T^* + \frac{1}{2}\gamma ^2n^2\right] \le C \frac{\sqrt{\varepsilon }}{\gamma ^2}, \end{aligned}$$

and by the definition of the event A,

$$\begin{aligned} \mathbbm {1}_A\cdot {\mathbb {P}}^{2,z}\left[ T^*<\infty ,~R^2\left( T^*\right) \in B(0,n),~R^2\left[ T^*,T^{**}\right] \cap S=\emptyset \right] \le \varepsilon ^{\eta }. \end{aligned}$$

Combining these bounds with the Markov property at time \(\frac{3}{2}k\) and the local central limit theorem, we obtain that for each \(k\ge \gamma ^2 n^2\),

$$\begin{aligned} \mathbbm {1}_A\cdot {\mathbb {P}}^{2,z}\left[ R^2(2k) = z,~ R^2[0,2k]\in L_n\right] \le C\,\left( \frac{\sqrt{\varepsilon }}{\gamma ^2} + \varepsilon ^\eta \right) \, k^{-\frac{3}{2}}. \end{aligned}$$

Thus,

$$\begin{aligned}&\mathbbm {1}_A\cdot n^3\,\sum _{k\ge \lfloor \gamma ^2n^2\rfloor } \frac{1}{2k}\, {\mathbb {P}}^{2,z}\left[ R^2(2k) = z,~ R^2[0,2k]\in L_n\right] \\&\quad \le C\,\left( \frac{\sqrt{\varepsilon }}{\gamma ^2} + \varepsilon ^\eta \right) \,n^3\,\sum _{k\ge \lfloor \gamma ^2n^2\rfloor } k^{-\frac{5}{2}} \le C'\,\left( \frac{\sqrt{\varepsilon }}{\gamma ^2} + \varepsilon ^\eta \right) \,\frac{1}{\gamma ^3}. \end{aligned}$$

Putting the two cases together, we get

$$\begin{aligned} \mathbbm {1}_A\cdot {\mu ^{\mathrm {rwl}}}[L_n]\le C\,\left( \frac{\gamma }{\delta ^4} + \left( \frac{\sqrt{\varepsilon }}{\gamma ^2} + \varepsilon ^\eta \right) \,\frac{1}{\gamma ^3}\right) . \end{aligned}$$

If \(\gamma = \varepsilon ^{\frac{\eta }{4}}\delta ^{1-\frac{\eta }{4}}\), then the last expression is \(\le C'\,\frac{\varepsilon ^\alpha }{\delta ^5}\) for \(\alpha = \frac{\eta }{4}\). This gives the second part of the bound in (5.5). The proof of Proposition 5.2 is complete. \(\square \)

5.2 Proof of Theorem 1.1

Let \({\mathcal B}_D\) be the restriction of an independent Brownian loop soup of intensity 1 to the loops entirely contained in D. Denote by X the random subset of \(\overline{D}\) consisting of \({\mathrm {K}}\) and all the loops from \({\mathcal B}_D\) that intersect \({\mathrm {K}}\). First of all, note that X is closed. Indeed, for any \(\varepsilon >0\), the set of loops in \({\mathcal B}_D\) with diameter bigger than \(\varepsilon \) is almost surely finite. Thus, the complement of any \(\varepsilon \)-neighborhood of \({\mathrm {K}}\) in X is closed. Since \({\mathrm {K}}\) is closed, X is also closed.

Let \({\mathrm {BM}}\) be the trace of the Brownian motion killed on exiting from D viewed as a compact subset of \(\overline{D}\). We need to prove that X has the same law as \({\mathrm {BM}}\) in \(({\mathcal {K}}_D,d_H)\). Let \((\Omega ,\mathcal F,{\mathbb {P}})\) be a probability space large enough to contain all the random variables used in this proof. It suffices to prove that

$$\begin{aligned} {\mathbb {P}}\left[ X\subset U\cap \overline{D}\right] = {\mathbb {P}}\left[ {\mathrm {BM}}\subset U\cap \overline{D}\right] ,\qquad \hbox {for all open} \,U\subset {\mathbb {R}}^3. \end{aligned}$$

Since every bounded open set can be approximated from inside by a finite union of open balls, which in turn can be approximated from inside by an open set with smooth boundary, it suffices to prove the above identity only for sets U with smooth boundary.

For a (measurable) set \(S\subset {\mathbb {R}}^3\) and a (countable) collection of loops L in \({\mathbb {R}}^3\), let \(E(S,L)\) be the union of S and all the loops from L that intersect S, the enlargement of S by L. In particular, \(X = E({\mathrm {K}},{\mathcal B}_D)\). Also, for \(\delta >0\), let \(L^{>\delta }\) be the subcollection of all the loops from L with diameter \(>\delta \).

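As a toy illustration of the enlargement operation (with finite point sets standing in for compact sets and loops; Python, purely schematic, and the function name is ours):

```python
def enlarge(S, loops):
    # E(S, L): the union of S with every loop from L that intersects S.
    S = set(S)
    out = set(S)
    for loop in loops:
        pts = set(loop)
        if pts & S:
            out |= pts
    return out

# A loop touching S is absorbed; a disjoint loop is not.
assert enlarge({(0, 0)}, [[(0, 0), (1, 0)], [(5, 5)]]) == {(0, 0), (1, 0)}
```
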
Let \({\mathrm {LS}}_n\) be the restriction of the random walk loop soup of intensity 1 rescaled by \(\frac{1}{n}\) to D, i.e., \({\mathrm {LS}}_n = \{\frac{1}{n}\ell ~:~\ell \in {\mathcal R}^1,~\frac{1}{n}\ell \subset D\}\). By Proposition 2.1, if \({\mathrm {LEW}}_n\) and \({\mathrm {LS}}_n\) are independent, then \(E({\mathrm {LEW}}_n,{\mathrm {LS}}_n)\) has the same law as the trace of a simple random walk on \(\frac{1}{n}{\mathbb {Z}}^3\) killed on exiting from D. In particular, as \(n\rightarrow \infty \), \(E({\mathrm {LEW}}_n,{\mathrm {LS}}_n)\) converges weakly in the space \(({\mathcal {K}}_D,d_H)\) to the Brownian motion \({\mathrm {BM}}\).

Let U be an open subset of \({\mathbb {R}}^3\) with a smooth boundary and such that \(0\in U\). Fix \(0<\varepsilon <\delta \). We now define the random objects that will be used in the proof.

Since the space \(({\mathcal {K}}_D,d_H)\) is separable and \({\mathrm {LEW}}_n\) converges weakly to \({\mathrm {K}}\), by Skorokhod’s representation theorem we can define \(({\mathrm {LEW}}_n)_{n\ge 1}\) and \({\mathrm {K}}\) on the same probability space so that \(d_H({\mathrm {LEW}}_n,{\mathrm {K}})\rightarrow 0\) almost surely. Consider the event

$$\begin{aligned} A_0=\left\{ {\mathrm {K}}\subseteq U^{-3\delta }\right\} \cup \left\{ {\mathrm {K}}\nsubseteq U\right\} \end{aligned}$$

that if \({\mathrm {K}}\) is in U then the distance from \({\mathrm {K}}\) to the complement of U is \(>3\delta \). By monotonicity, \({\mathbb {P}}[A_0]\rightarrow 1\) as \(\delta \rightarrow 0\). For each \(n\ge 1\), consider also the event

$$\begin{aligned} A_{1,n} = \left\{ d_H({\mathrm {LEW}}_n,{\mathrm {K}})<\varepsilon \right\} . \end{aligned}$$

By construction, \({\mathbb {P}}[A_{1,n}]\rightarrow 1\) as \(n\rightarrow \infty \).

For each \(n\ge 1\), let \(({\mathcal R}^1_n,{\mathcal B}^1_n)\) be independent pairs of the rescaled random walk loop soup of intensity 1 on \(\frac{1}{n}{\mathbb {Z}}^3\) and the Brownian loop soup of intensity 1 on \({\mathbb {R}}^3\) coupled so that on an event \(A_{2,n}\) of probability \(>1-Cn^{-\frac{1}{2}}\) there is a one-to-one correspondence between the loops from \({\mathcal R}^1_n\) of diameter \(>\delta \) rooted in \(D_2\) and those from \({\mathcal B}^1_n\) rooted in \(D_2\) and so that the Hausdorff distance between the paired loops is \(<\varepsilon \). This coupling is possible by Theorem 2.2. (In Theorem 2.2 we paired loops of sufficiently large length, but each loop of length s is of diameter of order \(\sqrt{s}\).) We also assume that all the pairs \(({\mathcal R}^1_n,{\mathcal B}^1_n)\) are independent from \(({\mathrm {LEW}}_n)_{n\ge 1}\) and \({\mathrm {K}}\).

In addition, for each \(n\ge 1\), we consider the event \(A_{3,n}\) that every loop from \({\mathcal R}^1_n\) with diameter \(>\delta \) which is contained in \((U\cap D)^{+4\varepsilon }\) is also contained in \((U\cap D)^{-4\varepsilon }\), and the event \(A_{4,n}\) that every loop from \({\mathcal R}^1_n\) with diameter \(>\delta \) at distance \(<4\varepsilon \) from \({\mathrm {LEW}}_n\) intersects \({\mathrm {LEW}}_n\). In other words, on the event \(A_{3,n}\) there are no big loops that are contained in \((U\cap D)^{+4\varepsilon }\) and come too close to the boundary of \(U\cap D\), and on the event \(A_{4,n}\) there are no big loops that come too close to \({\mathrm {LEW}}_n\) without hitting it. It is proved in Propositions 5.1 and 5.2 that for some \(\alpha >0\), \(\inf _n{\mathbb {P}}[A_{3,n}\cap A_{4,n}]\ge 1 - C\,\frac{\varepsilon ^\alpha }{\delta ^5}\).

Note that in the event \(A_{2,n}\cap A_{3,n}\), for every loop from \({\mathcal R}^1_n\) with diameter \(>\delta \) contained in D, its pair from \({\mathcal B}^1_n\) is contained in D, and vice versa.

We denote by \({\mathrm {LS}}_n\) the restriction of \({\mathcal R}^1_n\) to the loops contained in D, and by \({\mathrm {BS}}_n\) the restriction of \({\mathcal B}^1_n\) to the loops contained in D.

We prove that for any \(n\ge 1\), on the event \(A_n = A_0\cap A_{1,n}\cap A_{2,n}\cap A_{3,n}\cap A_{4,n}\),

$$\begin{aligned} \left\{ E\left( {\mathrm {LEW}}_n,{\mathrm {LS}}_n\right) \subset U^{-3\varepsilon }\cap \overline{D}\right\}\subseteq & {} \left\{ E\left( {\mathrm {K}}^{+2\varepsilon },{\mathrm {BS}}_n\right) \subset U\cap \overline{D}\right\} \nonumber \\\subseteq & {} \left\{ E\left( {\mathrm {LEW}}_n,{\mathrm {LS}}_n\right) \subset U^{+\delta }\cap \overline{D}\right\} . \end{aligned}$$
(5.6)

We begin with a proof of the first inclusion. Assume that \(E({\mathrm {LEW}}_n,{\mathrm {LS}}_n)\subset U^{-3\varepsilon }\). In particular, \({\mathrm {LEW}}_n\subset U^{-3\varepsilon }\). Since \(A_{1,n}\) holds, this implies that \({\mathrm {K}}\subset U^{-2\varepsilon }\), which is equivalent to \({\mathrm {K}}^{+2\varepsilon }\subset U\).

Let \(\ell \in {\mathrm {BS}}_n^{>\delta }\) be such that \(\ell \cap {\mathrm {K}}^{+2\varepsilon }\ne \emptyset \). Then, since \(A_{1,n}\) occurs, \(\ell \cap {\mathrm {LEW}}_n^{+3\varepsilon }\ne \emptyset \). Since \(A_{2,n}\cap A_{3,n}\) occurs, there is \(\widetilde{\ell }\in {\mathrm {LS}}_n^{>\delta }\) such that \(d_H(\widetilde{\ell },\ell )<\varepsilon \). In particular, \(\widetilde{\ell }\cap {\mathrm {LEW}}_n^{+4\varepsilon }\ne \emptyset \). Since \(A_{4,n}\) occurs, this implies that \(\widetilde{\ell }\cap {\mathrm {LEW}}_n\ne \emptyset \). By our assumption, \(\widetilde{\ell }\subset U^{-3\varepsilon }\). Therefore, \(\ell \subset U^{-2\varepsilon }\subset U\). Thus, any \(\ell \in {\mathrm {BS}}_n^{>\delta }\) such that \(\ell \cap {\mathrm {K}}^{+2\varepsilon }\ne \emptyset \) is contained in U. This implies that \(E({\mathrm {K}}^{+2\varepsilon },{\mathrm {BS}}_n^{>\delta })\subset U\). Since \(A_0\) occurs, the above is true if and only if \(E({\mathrm {K}}^{+2\varepsilon },{\mathrm {BS}}_n)\subset U\).

We proceed with the proof of the second inclusion in (5.6). Assume that \(E({\mathrm {K}}^{+2\varepsilon },{\mathrm {BS}}_n)\subset U\). Since \(A_0\) occurs, this is equivalent to \(E({\mathrm {K}}^{+2\varepsilon },{\mathrm {BS}}_n^{>\delta })\subset U\). Since \(A_{1,n}\) occurs and \({\mathrm {K}}^{+2\varepsilon }\subset U\), we also have \({\mathrm {LEW}}_n\subset U\).

Let \(\widetilde{\ell }\in {\mathrm {LS}}_n^{>\delta }\) be such that \(\widetilde{\ell }\cap {\mathrm {LEW}}_n\ne \emptyset \). Since \(A_{1,n}\) occurs, \(\widetilde{\ell }\cap {\mathrm {K}}^{+\varepsilon } \ne \emptyset \). Since \(A_{2,n}\cap A_{3,n}\) occurs, there is \(\ell \in {\mathrm {BS}}_n^{>\delta }\) such that \(d_H(\widetilde{\ell },\ell )<\varepsilon \). In particular, \(\ell \cap {\mathrm {K}}^{+2\varepsilon }\ne \emptyset \). By assumption, \(\ell \subset U\). Therefore, \(\widetilde{\ell }\subset U^{+\varepsilon }\). Since \(A_{3,n}\) occurs, we actually have \(\widetilde{\ell }\subset U\). Thus, any \(\widetilde{\ell }\in {\mathrm {LS}}_n^{>\delta }\) such that \(\widetilde{\ell }\cap {\mathrm {LEW}}_n \ne \emptyset \) is contained in U. This implies that \(E({\mathrm {LEW}}_n,{\mathrm {LS}}_n^{>\delta })\subset U\). Finally, by adding all the loops of diameter \(<\delta \) we get \(E({\mathrm {LEW}}_n,{\mathrm {LS}}_n)\subset U^{+\delta }\). The proof of the inclusion (5.6) is complete.

It follows from (5.6) that for all \(n\ge 1\), \(0<\varepsilon <\delta \),

$$\begin{aligned}&{\mathbb {P}}\left[ E\left( {\mathrm {LEW}}_n,{\mathrm {LS}}_n\right) \subset U^{-3\delta }\cap \overline{D}\right] - {\mathbb {P}}\left[ A_n^c\right] \le {\mathbb {P}}\left[ E\left( {\mathrm {K}}^{+2\varepsilon },{\mathcal B}_D\right) \subset U\cap \overline{D}\right] \\&\quad \le {\mathbb {P}}\left[ E\left( {\mathrm {LEW}}_n,{\mathrm {LS}}_n\right) \subset U^{+\delta }\cap \overline{D}\right] + {\mathbb {P}}\left[ A_n^c\right] . \end{aligned}$$

By monotonicity,

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0}{\mathbb {P}}\left[ E\left( {\mathrm {K}}^{+2\varepsilon },{\mathcal B}_D\right) \subset U\cap \overline{D}\right] = {\mathbb {P}}\left[ E\left( {\mathrm {K}},{\mathcal B}_D\right) \subset U\cap \overline{D}\right] = {\mathbb {P}}\left[ X\subset U\cap \overline{D}\right] . \end{aligned}$$

Since \(\limsup _{\delta \rightarrow 0}\limsup _{\varepsilon \rightarrow 0}\limsup _{n\rightarrow \infty }{\mathbb {P}}[A_n^c] = 0\), we have

$$\begin{aligned} \liminf _{\delta \rightarrow 0}{\mathbb {P}}\left[ {\mathrm {BM}}\subset U^{-3\delta }\cap \overline{D}\right] \le {\mathbb {P}}\left[ X\subset U\cap \overline{D}\right] \le \limsup _{\delta \rightarrow 0}{\mathbb {P}}\left[ {\mathrm {BM}}\subset U^{+\delta }\cap \overline{D}\right] . \end{aligned}$$

Since by monotonicity both the left- and right-hand sides are equal to \({\mathbb {P}}\left[ {\mathrm {BM}}\subset U\cap \overline{D}\right] \), we get the desired result. \(\square \)

6 Scaling limit is a simple path

6.1 Quasi-loops

We say that a nearest neighbor path \(\gamma \) in \({\mathbb {Z}}^3\) has an (s, r)-quasi-loop at \(v\in {\mathbb {Z}}^3\) if there exist \(i\le j\) such that \(\gamma (i),\gamma (j)\in B(v,s)\) and \(\gamma [i,j]\nsubseteq B(v,r)\). The set of all such v's is denoted by \({\mathrm {QL}}(s,r;\gamma )\). Note that the set \({\mathrm {QL}}(s,r; \gamma )\) is non-increasing in r and non-decreasing in s.

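Algorithmically, membership of v in \({\mathrm {QL}}(s,r;\gamma )\) can be checked with a single scan along the path: a quasi-loop at v is witnessed precisely by a visit to B(v, s), followed by an excursion outside B(v, r), followed by a return to B(v, s). A sketch (Python with numpy assumed, Euclidean balls, illustrative only; the function name is ours):

```python
import numpy as np

def has_quasi_loop(path, v, s, r):
    # Is v in QL(s, r; path)? I.e., are there i <= j with path[i] and
    # path[j] in B(v, s) while path[i..j] is not contained in B(v, r)?
    dist = np.linalg.norm(np.asarray(path, float) - np.asarray(v, float), axis=1)
    visited_inner = False   # some earlier path point lies in B(v, s)
    escaped = False         # the path has left B(v, r) since that visit
    for dk in dist:
        if dk > r and visited_inner:
            escaped = True
        if dk <= s:
            if escaped:
                return True
            visited_inner = True
            escaped = False
    return False
```
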
Theorem 6.1

There exist \(M<\infty \), \(\alpha >0\), and \(C<\infty \) such that for any \(\varepsilon >0\) and \(n\ge 1\),

$$\begin{aligned} {\mathbb {P}}^0\left[ {\mathrm {QL}}\left( \varepsilon ^Mn,\sqrt{\varepsilon }n;~{\mathrm {LE}}(R[0,T_{0,n}])\right) \ne \emptyset \right] \le C\varepsilon ^\alpha . \end{aligned}$$
(6.1)

Proof

It suffices to consider sufficiently small \(\varepsilon \). Let \(M\ge 1\). The proof is subdivided into three cases: (1) there is a quasi-loop at a vertex of \(B(0,\varepsilon n)\), (2) there is a quasi-loop at a vertex of \(B(0,n-\varepsilon n)^c\), and (3) there is a quasi-loop at a vertex of \(B(0,n-\varepsilon n){\setminus }\overline{B(0,\varepsilon n)}\). We will verify the first and the second cases for \(M=1\) and the last case for a sufficiently large M.

Consider the first case. If \({\mathrm {QL}}\left( \varepsilon n,\sqrt{\varepsilon }n;~{\mathrm {LE}}(R[0,T_{0,n}])\right) \cap B(0,\varepsilon n)\ne \emptyset \), then the random walk R must return to \(B(0,2\varepsilon n)\) after leaving \(B(0,\sqrt{\varepsilon } n - \varepsilon n)\). Thus,

$$\begin{aligned}&{\mathbb {P}}^0\left[ {\mathrm {QL}}\left( \varepsilon n,\sqrt{\varepsilon }n;~{\mathrm {LE}}(R[0,T_{0,n}])\right) \cap B(0,\varepsilon n)\ne \emptyset \right] \\&\quad \le {\mathbb {P}}^0\left[ R\left[ T_{0,\sqrt{\varepsilon } n - \varepsilon n},~T_{0,n}\right] \cap B(0,2\varepsilon n)\ne \emptyset \right] \le C\sqrt{\varepsilon }. \end{aligned}$$

This implies (6.1) for quasi-loops at a vertex in \(B(0,\varepsilon n)\).

Consider the second case. If \({\mathrm {QL}}\left( \varepsilon n,\sqrt{\varepsilon }n;~{\mathrm {LE}}(R[0,T_{0,n}])\right) {\setminus } B(0,n-\varepsilon n)\ne \emptyset \), then the random walk R must hit \(\partial B(0,n-2\varepsilon n)\) and then move to distance \(\sqrt{\varepsilon } n - 3\varepsilon n\) away from the hitting point before it exits from B(0, n). Thus, there exist \(\alpha >0\) and \(C<\infty \) such that

$$\begin{aligned}&{\mathbb {P}}^0\left[ {\mathrm {QL}}\left( \varepsilon n,\sqrt{\varepsilon }n;~{\mathrm {LE}}\left( R\left[ 0,T_{0,n}\right] \right) \right) {\setminus } B(0,n - \varepsilon n)\ne \emptyset \right] \\&\quad \le {\mathbb {P}}^0\left[ R\left[ T_{0,n-2\varepsilon n},~T_{0,n}\right] \cap \partial B\left( R(T_{0,n-2\varepsilon n}),\sqrt{\varepsilon }n-3\varepsilon n\right) \ne \emptyset \right] \le C\varepsilon ^\alpha , \end{aligned}$$

where the last inequality follows, for instance, from [11, Lemma 2.5]. This implies (6.1) for quasi-loops at a vertex outside of \(B(0,n- \varepsilon n)\).

It remains to consider the third case. Let \(A = B(0,n-\varepsilon n){\setminus } \overline{B(0,\varepsilon n)}\). We will prove that for some \(M\ge 1\),

$$\begin{aligned} {\mathbb {P}}^0\left[ {\mathrm {QL}}\left( \varepsilon ^Mn,\varepsilon n;~{\mathrm {LE}}(R[0,T_{0,n}])\right) \cap A\ne \emptyset \right] \le C\varepsilon , \end{aligned}$$
(6.2)

which implies (6.1) for quasi-loops at a vertex in A. We cover A by \(s= 10\,\lfloor \varepsilon ^{-6}\rfloor \) balls of radius \(\varepsilon ^2 n\) with centers at \(v_1,\ldots , v_s\in A\). Then, the probability in (6.2) is bounded from above by

$$\begin{aligned} 10\,\varepsilon ^{-6}\,\max _i {\mathbb {P}}^0\left[ {\mathrm {QL}}\left( \varepsilon ^Mn,\varepsilon n;~{\mathrm {LE}}(R[0,T_{0,n}])\right) \cap B\left( v_i,\varepsilon ^2 n\right) \ne \emptyset \right] , \end{aligned}$$

and the inequality (6.2) is immediate from the following lemma. \(\square \)

Lemma 6.2

There exist \(M,C<\infty \) such that for all \(v\in A\), \(\varepsilon >0\) and \(n\ge 1\),

$$\begin{aligned} {\mathbb {P}}^0\left[ {\mathrm {QL}}\left( \varepsilon ^Mn,\varepsilon n;~{\mathrm {LE}}(R[0,T_{0,n}])\right) \cap B\left( v,\varepsilon ^2 n\right) \ne \emptyset \right] \le C\varepsilon ^7. \end{aligned}$$

The proof of Theorem 6.1 is complete subject to Lemma 6.2, which we prove below.

Proof of Lemma 6.2

Fix \(v\in A\) and assume that \(M>2\). Consider the stopping times \(T_0 = 0\),

$$\begin{aligned} T_{2k-1}= & {} \inf \left\{ t> T_{2k-2}~:~R(t)\in B\left( v,2\varepsilon ^2 n\right) \right\} ,\\ T_{2k}= & {} \inf \left\{ t> T_{2k-1}~:~ R(t)\in \partial B\left( v,\frac{1}{2}\varepsilon n\right) \right\} , \end{aligned}$$

where \(\inf \emptyset = \infty \).
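These alternating entrance and exit times are elementary to compute for a discrete path. The following Python sketch is our own illustration, not part of the proof; for simplicity it records the first time at distance \(\ge \frac{1}{2}\varepsilon n\) in place of the hitting time of the sphere \(\partial B(v,\frac{1}{2}\varepsilon n)\).

```python
import numpy as np

def crossing_times(path, v, r_in, r_out):
    """Successive stopping times T_1 < T_2 < ... of a discrete path:
    T_{2k-1} is the first time after T_{2k-2} in B(v, r_in), and
    T_{2k} the first time after T_{2k-1} at distance >= r_out from v.
    Times equal to +infinity in the text are simply not returned."""
    dist = np.linalg.norm(np.asarray(path, float) - np.asarray(v, float), axis=1)
    times, entering = [], True
    for t in range(1, len(dist)):            # T_0 = 0, so start from t = 1
        if (dist[t] <= r_in) if entering else (dist[t] >= r_out):
            times.append(t)
            entering = not entering
    return times
```

With `r_in` \(=2\varepsilon ^2n\) and `r_out` \(=\frac{1}{2}\varepsilon n\) this produces the sequence \(T_1, T_2,\ldots \) above.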

Claim 6.3

If \({\mathrm {QL}}\left( \varepsilon ^Mn,\varepsilon n;~{\mathrm {LE}}(R[0,T_{0,n}])\right) \cap B(v,\varepsilon ^2 n)\ne \emptyset \), then there exists i such that

$$\begin{aligned} {\mathrm {QL}}\left( \varepsilon ^Mn,\varepsilon n;~{\mathrm {LE}}(R[0,T_{2i-1}])\right) \cap B\left( v,\varepsilon ^2 n\right)= & {} \emptyset , \nonumber \\ {\mathrm {QL}}\left( \varepsilon ^Mn,\varepsilon n;~{\mathrm {LE}}(R[0,T_{2i}])\right) \cap B\left( v,\varepsilon ^2 n\right)\ne & {} \emptyset . \end{aligned}$$
(6.3)

Proof of Claim 6.3

Assume that \({\mathrm {QL}}\left( \varepsilon ^Mn,\varepsilon n;~{\mathrm {LE}}(R[0,T_{0,n}])\right) \) has a point in \(B(v,\varepsilon ^2 n)\). Let I be such that \(T_{2I}< T_{0,n} < T_{2I+1}\).

Note that \({\mathrm {QL}}\left( \varepsilon ^Mn,\varepsilon n;~{\mathrm {LE}}(R[0,T_{2I}])\right) \) also has a point in \(B(v,\varepsilon ^2 n)\), since \(R[T_{2I},T_{0,n}]\) does not intersect \(B(v,\varepsilon ^2n + \varepsilon ^Mn)\). If \({\mathrm {QL}}\left( \varepsilon ^Mn,\varepsilon n;~{\mathrm {LE}}(R[0,T_{2I-1}])\right) \cap B(v,\varepsilon ^2 n)=\emptyset \), then we are done, thus assume the contrary.

Let \(i\le I\) and assume that \({\mathrm {QL}}\left( \varepsilon ^Mn,\varepsilon n;~{\mathrm {LE}}(R[0,T_{2i-1}])\right) \) has a point in \(B(v,\varepsilon ^2 n)\). Since \(R[T_{2i-2},T_{2i-1}]\) does not intersect \(B(v,\varepsilon ^2 n + \varepsilon ^M n)\), the set \({\mathrm {QL}}\left( \varepsilon ^Mn,\varepsilon n;~{\mathrm {LE}}(R[0,T_{2i-2}])\right) \) must also have a point in \(B(v,\varepsilon ^2 n)\). In this case, if \({\mathrm {QL}}\left( \varepsilon ^Mn,\varepsilon n;~{\mathrm {LE}}(R[0,T_{2i-3}])\right) \) does not have a point in \(B(v,\varepsilon ^2 n)\), then we are done, otherwise, we repeat.

Note that the above procedure eventually succeeds, since \({\mathrm {QL}}\big (\varepsilon ^Mn,\varepsilon n;~{\mathrm {LE}}(R[0,T_1])\big )\) does not have a point in \(B(v,\varepsilon ^2 n)\). (In fact, neither does \({\mathrm {QL}}\left( \varepsilon ^Mn,\varepsilon n;~{\mathrm {LE}}(R[0,T_3])\right) \).) \(\square \)

We proceed to estimate the probability of event (6.3) for any fixed i. Let i be fixed so that \(T_{2i-1}<\infty \) and define \(L' = {\mathrm {LE}}(R[0,T_{2i-1}])\) and \(R' = R[T_{2i-1},T_{2i}]\). Consider the stopping times \(\tau _0 = 0\),

$$\begin{aligned} \tau _{2k-1}= & {} \inf \left\{ t> \tau _{2k-2}~:~L'(t)\in B\left( v,2\varepsilon ^2 n\right) \right\} ,\\ \tau _{2k}= & {} \inf \left\{ t> \tau _{2k-1}~:~ L'(t)\in \partial B\left( v,\frac{1}{2}\varepsilon n\right) \right\} . \end{aligned}$$

Claim 6.4

If the event (6.3) occurs, then there exist k and \(x\in B(v,\varepsilon ^2 n)\) such that

$$\begin{aligned} L'\left[ 0,\tau _{2k}\right] \cap B\left( x,\varepsilon ^M n\right) \ne \emptyset , \qquad R'\cap B\left( x,\varepsilon ^M n\right) \ne \emptyset ,\qquad R'\cap L'\left[ 0,\tau _{2k}\right] =\emptyset . \end{aligned}$$
(6.4)

Proof of Claim 6.4

Assume that the event (6.3) occurs. Since \({\mathrm {LE}}(R[0,T_{2i}]) = {\mathrm {LE}}(L'\cup R')\) and \(R'\subset \overline{B(v,\frac{1}{2}\varepsilon n)}\), there exists \(x\in B(v,\varepsilon ^2 n)\) such that \(L'[0,\mathrm {len}\,L']\cap B(x,\varepsilon ^Mn)\ne \emptyset \) and \(R'\cap B(x,\varepsilon ^Mn)\ne \emptyset \). Otherwise, \(R'\) would not complete any \((\varepsilon ^Mn,\varepsilon n)\)-quasi-loop in \(B(v,\varepsilon ^2 n)\).

Let k be the smallest index such that for some \(x\in B(v,\varepsilon ^2n)\), \(L'[0,\tau _{2k}]\cap B(x,\varepsilon ^M n)\ne \emptyset \) and \(R'\cap B(x,\varepsilon ^M n)\ne \emptyset \). We claim that k satisfies (6.4), i.e., \(R'\cap L'[0,\tau _{2k}]=\emptyset \).

Assume that \(R'\cap L'[0,\tau _{2k}]\ne \emptyset \). Let s be the smallest number such that \(L'[0,s]\cap R'\ne \emptyset \). By the assumption, \(\tau _{2k}\ge s\). Thus, \(L'[\tau _{2k-1},s]\subset \overline{B(v,\frac{1}{2}\varepsilon n)}\). Since also \(R'\subset \overline{B(v,\frac{1}{2}\varepsilon n)}\), there must exist \(x\in B(v,\varepsilon ^2 n)\) such that \(L'[0,\tau _{2k-1}]\cap B(x,\varepsilon ^M n)\ne \emptyset \) and \(R'\cap B(x,\varepsilon ^M n)\ne \emptyset \). Since \(L'[\tau _{2k-2},\tau _{2k-1}]\cap B(v,\varepsilon ^2n+\varepsilon ^Mn)=\emptyset \), the above conclusion actually gives that for some \(x\in B(v,\varepsilon ^2 n)\), \(L'[0,\tau _{2k-2}]\cap B(x,\varepsilon ^M n)\ne \emptyset \) and \(R'\cap B(x,\varepsilon ^M n)\ne \emptyset \). This contradicts the minimality of k. \(\square \)

Next we use Lemma 3.3 to prove that each \({\mathrm {LE}}(R[0,T_{2i-1}])[0,\tau _{2k}]\) is hittable by a random walk starting nearby.

Claim 6.5

There exist M and C such that for each \(v\notin B(0,\varepsilon n)\), integers i and k, and \(\varepsilon >0\), if \(L_{i,k} = {\mathrm {LE}}(R[0,T_{2i-1}])[0,\tau _{2k}]\), then

$$\begin{aligned} {\mathbb {P}}^0 \left[ \begin{array}{c} T_{2i-1}<\infty , \tau _{2k}<\infty ,\, \hbox {and there exists}\, x \in B\left( v,2\varepsilon ^2 n\right) \,\hbox {such that }\\ {{\mathrm {dist}}\left( x,L_{i,k}\right) \le 2\varepsilon ^M n\, \hbox {and} }\, {\mathbb {P}}^{2,x} \left[ R^2\left[ 0, T^2_{x,\frac{1}{4}\varepsilon n}\right] \cap L_{i,k}= \emptyset \right] > \varepsilon ^7 \end{array} \right] \le C\,\varepsilon ^7. \end{aligned}$$
(6.5)

Proof of Claim 6.5

Assume that \(\varepsilon \) is small (possibly depending on M). We cover \(B(v,2\varepsilon ^2 n)\) by \(s = 10\,\lfloor \varepsilon ^{-3M}\rfloor \) balls of radius \(\varepsilon ^Mn\) with centers at \(v_1,\ldots , v_s\in B(v,2\varepsilon ^2 n)\). Then the probability in (6.5) is bounded from above by

$$\begin{aligned} 10\,\varepsilon ^{-3M}\,\max _j {\mathbb {P}}^0 \left[ \begin{array}{c} T_{2i-1}<\infty , \tau _{2k}<\infty , \,\hbox {and there exists}\, x \in B\left( v_j,\varepsilon ^M n\right) \, \hbox {such that} \\ {{\mathrm {dist}}\left( x,L_{i,k}\right) \le 2\varepsilon ^M n\, \hbox {and}\, }{\mathbb {P}}^{2,x} \left[ R^2\left[ 0, T^2_{x,\frac{1}{4}\varepsilon n}\right] \cap L_{i,k}= \emptyset \right] > \varepsilon ^7 \end{array} \right] . \end{aligned}$$

For each \(x\in B\left( v_j,\varepsilon ^M n\right) \), \(B(x,2\varepsilon ^M n)\subset B(v_j,3\varepsilon ^M n)\) and \(B(x,\frac{1}{4}\varepsilon n)\supset B(v_j,\frac{1}{8}\varepsilon n)\). Thus, the jth probability in the above maximum is at most

$$\begin{aligned} {\mathbb {P}}^0 \left[ \begin{array}{c} { T_{2i-1}<\infty , \tau _{2k}<\infty ,}\\ L_{i,k}\cap B\left( v_j,3\varepsilon ^Mn\right) \ne \emptyset ,\,\hbox {there exists} \, x \in B\left( v_j,\varepsilon ^M n\right) \, \hbox {such that}\\ {\mathbb {P}}^{2,x} \left[ R^2\left[ 0, T^2_{v_j,\frac{1}{8}\varepsilon n}\right] \cap L_{i,k}\cap \left( B\left( v_j,\frac{1}{8}\varepsilon n\right) {\setminus }\overline{B\left( v_j,3\varepsilon ^Mn\right) }\right) = \emptyset \right] > \varepsilon ^7 \end{array} \right] . \end{aligned}$$

By the Harnack inequality applied to the harmonic function

$$\begin{aligned}&{\mathbb {P}}^{2,x} \left[ R^2\left[ 0, T^2_{v_j,\frac{1}{8}\varepsilon n}\right] \cap L_{i,k}\cap \left( B\left( v_j,\frac{1}{8}\varepsilon n\right) {\setminus }\overline{B\left( v_j,3\varepsilon ^Mn\right) }\right) = \emptyset \right] ,\\&\qquad x \in B\left( v_j,2\varepsilon ^M n\right) , \end{aligned}$$

there exists a universal constant \(c_*>0\) such that the jth probability is bounded from above by

$$\begin{aligned} {\mathbb {P}}^0 \left[ \begin{array}{c} T_{2i-1}<\infty , \tau _{2k}<\infty , L_{i,k}\cap B\left( v_j,3\varepsilon ^Mn\right) \ne \emptyset , \hbox {and}\\ {\mathbb {P}}^{2,v_j} \left[ R^2\left[ 0, T^2_{v_j,\frac{1}{8}\varepsilon n}\right] \cap L_{i,k}\cap \left( B\left( v_j,\frac{1}{8}\varepsilon n\right) {\setminus }\overline{B\left( v_j,3\varepsilon ^Mn\right) }\right) = \emptyset \right] > c_*\varepsilon ^7 \end{array} \right] . \end{aligned}$$

We apply Lemma 3.3 to \({\mathrm {LE}}(R[0,T_{2i-1}])[0,\tau _{2k}]\) with \(v=v_j\), \(r = \frac{1}{8}\varepsilon n\), \(s=3\varepsilon ^Mn\), choosing M so that \(\left( \frac{s}{r}\right) ^\eta = \left( 24\,\varepsilon ^{M-1}\right) ^\eta <c_*\varepsilon ^7\), and choosing K so that \(\left( \frac{s}{r}\right) ^K = \left( 24\,\varepsilon ^{M-1}\right) ^K < \varepsilon ^{7+3M}\). This gives that the above probability is \(\le C\varepsilon ^{7+3M}\). Thus, the original probability is \(\le (C\varepsilon ^{7+3M})\cdot s \le C'\varepsilon ^7\). \(\square \)
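For concreteness, the exponent arithmetic behind these choices is as follows (a sketch; \(\eta \) and K are the constants from Lemma 3.3). For all sufficiently small \(\varepsilon \),

$$\begin{aligned} \left( 24\,\varepsilon ^{M-1}\right) ^\eta = 24^\eta \,\varepsilon ^{\eta (M-1)}<c_*\,\varepsilon ^7 \quad \hbox {as soon as}\quad \eta (M-1)>7, \end{aligned}$$

so any \(M>1+7/\eta \) works; similarly, any \(K>\frac{7+3M}{M-1}\) gives \(\left( 24\,\varepsilon ^{M-1}\right) ^K<\varepsilon ^{7+3M}\) for small \(\varepsilon \).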

Let \(E_i\) be the event in (6.3) and \(E_{i,k}\) the event in (6.4) with M as in Claim 6.5. By Claim 6.4, \({\mathbb {P}}^0[E_i]\le i\,\max _k{\mathbb {P}}^0[E_{i,k}]\). On the event \(E_{i,k}\), define the stopping time

$$\begin{aligned} \sigma _{i,k} = \inf \left\{ t>T_{2i-1}~:~R(t)\in B\left( v,2\varepsilon ^2n\right) ,~{\mathrm {dist}}\left( R(t),L_{i,k}\right) \le 2\varepsilon ^M n\right\} . \end{aligned}$$

By the definition of \(E_{i,k}\), \(\sigma _{i,k}<T_{2i}\) and \(R[\sigma _{i,k},T_{2i}]\cap L_{i,k} = \emptyset \). By Claim 6.5 and the strong Markov property, if \(F_{i,k}\) is the event in (6.5), then

$$\begin{aligned} {\mathbb {P}}^0[E_{i,k}]\le {\mathbb {P}}^0[F_{i,k}] + \varepsilon ^7 \le (C+1)\varepsilon ^7. \end{aligned}$$

Thus, there exists C such that for all i, \({\mathbb {P}}^0[E_i]\le C\,i\,\varepsilon ^7\). To conclude the proof, we notice that for each \(i\ge 7\),

$$\begin{aligned} {\mathbb {P}}^0\left[ T_{2i-1}<\infty \right] \le C\,\varepsilon ^i\le C\,\varepsilon ^7. \end{aligned}$$
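Indeed, since \(v\in A\) gives \(|v|\ge \varepsilon n\), the standard hitting estimate \({\mathbb {P}}^z\left[ T_{B(v,\rho )}<\infty \right] \le C\rho /|z-v|\) shows that each of the i entrances into \(B(v,2\varepsilon ^2n)\) (the first from the origin, the subsequent ones from \(\partial B(v,\frac{1}{2}\varepsilon n)\)) costs at most \(4C\varepsilon \). Iterating the strong Markov property gives the sketch

$$\begin{aligned} {\mathbb {P}}^0\left[ T_{2i-1}<\infty \right] \le \left( 4C\varepsilon \right) ^i, \end{aligned}$$

which for small \(\varepsilon \) and \(i\ge 7\) is bounded by \(C'\varepsilon ^7\), in agreement with the display above.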

By Claim 6.3,

$$\begin{aligned}&{\mathbb {P}}^0\left[ {\mathrm {QL}}\left( \varepsilon ^Mn,\varepsilon n;~{\mathrm {LE}}(R[0,T_{0,n}])\right) \cap B\left( v,\varepsilon ^2 n\right) \ne \emptyset \right] \\&\quad \le {\mathbb {P}}^0\left[ T_{13}<\infty \right] + \sum _{i=1}^7{\mathbb {P}}^0[E_i] \le C\,\varepsilon ^7. \end{aligned}$$

The proof of Lemma 6.2 is complete. \(\square \)

Exactly the same argument as in the second case in the proof of Theorem 6.1 gives the following result.

Proposition 6.6

There exist \(\alpha >0\) and \(C<\infty \) such that for all \(\varepsilon >0\) and \(n\ge 1\), if \(L = {\mathrm {LE}}(R[0,T_{0,n}])\) then

$$\begin{aligned} {\mathbb {P}}^0\left[ \begin{array}{c} {\textit{There exist}}\, i \,{\textit{and}}\, j \,{\textit{such that}}\, L(i)\notin B(0,n-\varepsilon n), \\ {L(j)\notin B(0,n-\varepsilon n),\, {\textit{and}}\, L[i,j]\nsubseteq B(L(i),\sqrt{\varepsilon }n)} \end{array} \right] \le C\,\varepsilon ^\alpha . \end{aligned}$$

6.2 Proof of Theorem 1.2

The proof of the theorem follows from Theorem 6.1 and Proposition 6.6 similarly to [26, Theorem 1.1]. Recall that a compact subset of \({\mathbb {R}}^d\) is a simple path if it is homeomorphic to [0, 1]. We begin with the following observation, which is analogous to [26, Theorem 3.5].

For a simple path \(\gamma \) and \(x,y\in \gamma \), we denote by \(D(x,y;\gamma )\) the diameter (with respect to the Euclidean distance) of the arc of \(\gamma \) joining x and y. For a function \(f:(0,\infty )\rightarrow (0,\infty )\), let \(\Gamma (f)\) be the set of paths \(\gamma \) from \(\Gamma \) such that for all \(x,y\in \gamma \),

$$\begin{aligned} \begin{array}{rll} |x-y| &{}\ge &{}f(D(x,y;\gamma )),\\ \max \left( {\mathrm {dist}}(x,\partial D),{\mathrm {dist}}(y,\partial D)\right) &{}\ge &{}f(D(x,y;\gamma )), \end{array} \end{aligned}$$

and denote by \(\overline{\Gamma (f)}\) the closure of \(\Gamma (f)\) in \(({\mathcal {K}}_D, d_H)\).
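The defining inequalities of \(\Gamma (f)\) can be verified directly on a polygonal path. The following Python sketch is our own illustration (not part of the proof); x and y range over the vertices only, and \({\mathrm {dist}}(x,\partial D)=1-|x|\) for the unit ball.

```python
import numpy as np

def in_Gamma_f(vertices, f):
    """Test the two defining inequalities of Gamma(f) for a simple
    polygonal path in the closed unit ball, given by its vertex array.
    D(x, y; gamma) is the Euclidean diameter of the sub-path between
    x and y.  Brute force, O(n^4); illustrative only."""
    V = np.asarray(vertices, dtype=float)
    bdist = 1.0 - np.linalg.norm(V, axis=1)       # dist(., boundary of D)
    for i in range(len(V)):
        for j in range(i + 1, len(V)):
            arc = V[i:j + 1]
            gaps = np.linalg.norm(arc[:, None, :] - arc[None, :, :], axis=2)
            diam = gaps.max()                     # D(V[i], V[j]; gamma)
            if (np.linalg.norm(V[i] - V[j]) < f(diam)
                    or max(bdist[i], bdist[j]) < f(diam)):
                return False
    return True
```

In the notation of Lemma 6.8 below, one would take \(f=f_m\).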

Lemma 6.7

For any monotone increasing and continuous function \(f:(0,\infty )\rightarrow (0,\infty )\),

$$\begin{aligned} \overline{\Gamma (f)}\subset \Gamma . \end{aligned}$$

Proof of Lemma 6.7

Let \(\gamma \in \overline{\Gamma (f)}\). Then \(\gamma \) is a compact, connected subset of \(\overline{D}\), such that \(0\in \gamma \) and \(\gamma \cap \partial D\ne \emptyset \).

We first show that \(|\gamma \cap \partial D| = 1\). Assume that there exist two different points \(p,q\in \gamma \cap \partial D\). Let \(\gamma _n\in \Gamma (f)\) be such that \(d_H(\gamma _n,\gamma )<\frac{1}{n}\) and \(p_n,q_n\in \gamma _n\) such that \(|p_n-p|<\frac{1}{n}\) and \(|q_n-q|<\frac{1}{n}\). In particular, \({\mathrm {dist}}(p_n,\partial D)<\frac{1}{n}\) and \({\mathrm {dist}}(q_n,\partial D)<\frac{1}{n}\). Since \(\gamma _n\in \Gamma (f)\),

$$\begin{aligned} \frac{1}{n} > \max \left( {\mathrm {dist}}(p_n,\partial D),{\mathrm {dist}}(q_n,\partial D)\right) \ge f(D(p_n,q_n;\gamma _n))\ge f(|p_n-q_n|). \end{aligned}$$

By passing to the limit, we conclude that \(p=q\). Thus, \(|\gamma \cap \partial D| = 1\). We denote this point by b.

It remains to show that \(\gamma \) is a simple path with \(\gamma ^e = b\). We will use the following topological characterization of arcs due to Janiszewski (see, e.g., [26, Lemma 3.6]):

$$\begin{aligned} \begin{array}{c} \text {Let } \gamma \text { be a compact, connected metric space, } a,b\in \gamma .\\ \text {If } \gamma {\setminus }\{x\} \text { is disconnected for all } x\in \gamma {\setminus }\{a,b\}, \text { then } \gamma \text { is a simple path from } a \text { to } b. \end{array} \end{aligned}$$
(6.6)

Let \(x\in \gamma {\setminus }\{0,b\}\). Let \(\gamma _n\in \Gamma (f)\) be such that \(d_H(\gamma _n,\gamma )<\frac{1}{n}\), and \(x_n\in \gamma _n\) such that \(|x_n-x|<\frac{1}{n}\). For each n, let \(\gamma _n^1\) be the closed arc of \(\gamma _n\) connecting 0 and \(x_n\), and \(\gamma _n^2\) the closed arc of \(\gamma _n\) connecting \(x_n\) and \(\gamma _n^e\). By passing to a subsequence, if necessary, assume with no loss of generality that \(\gamma _n^1\) converges to \(\gamma ^1\) and \(\gamma _n^2\) converges to \(\gamma ^2\) in \(({\mathcal {K}}_D, d_H)\). Note that \(\gamma ^1\) and \(\gamma ^2\) are compact sets with \(\gamma ^1\cup \gamma ^2 = \gamma \). For any \(p\in \gamma ^1\) and \(q\in \gamma ^2\), let \(p_n\in \gamma _n^1\) be a sequence converging to p, and \(q_n\in \gamma _n^2\) a sequence converging to q. Since \(\gamma _n\in \Gamma (f)\),

$$\begin{aligned} |p_n-q_n|\ge f(D(p_n,q_n;\gamma _n))\ge f(|x_n-q_n|). \end{aligned}$$

By passing to the limit, we get that \(|p-q|\ge f(|x-q|)\) for all \(p\in \gamma ^1\) and \(q\in \gamma ^2\), which implies that \(\gamma ^1\cap \gamma ^2\subseteq \{x\}\). Hence \(\gamma ^1{\setminus }\{x\}\) and \(\gamma ^2{\setminus }\{x\}\) are disjoint, non-empty (the first contains 0, the second b), and relatively closed in \(\gamma {\setminus }\{x\}\). Therefore, \(\gamma {\setminus }\{x\}\) is disconnected. By (6.6), \(\gamma \) is a simple path from 0 to b.

We have shown that \(\gamma \in \Gamma \). \(\square \)

The main ingredient for the proof of Theorem 1.2 is the following lemma.

Lemma 6.8

There exist \(C,M<\infty \) and \(\alpha >0\) such that for all \(m,n\ge 1\),

$$\begin{aligned} {\mathbb {P}}\left[ {\mathrm {LEW}}_n\in \Gamma (f_m)\right] \ge 1 - C\,2^{-\alpha m}, \end{aligned}$$

where

$$\begin{aligned} f_m (s) = \min \left( 2^{-Mm},~\left( \frac{s}{4}\right) ^{2M}\right) . \end{aligned}$$

Proof of Lemma 6.8

Let M be as in Theorem 6.1. Let \(n\ge 1\) and set \(L = {\mathrm {LE}}(R[0,T_{0,n}])\). For \(\varepsilon >0\), consider the events

$$\begin{aligned} \begin{array}{rll} E_\varepsilon &{}= &{}\left\{ {\mathrm {QL}}(\varepsilon ^Mn,\sqrt{\varepsilon } n;L)\ne \emptyset \right\} ,\\ F_\varepsilon &{}= &{}\left\{ \text {For some}\, i \,\text {and}\, j, L(i),L(j)\notin B(0,n-\varepsilon n), L[i,j]\nsubseteq B(L(i),\sqrt{\varepsilon }n)\right\} , \end{array} \end{aligned}$$

and set \(E_i = E_{2^{-i}}\) and \(F_i = F_{2^{-i}}\). By Theorem 6.1 and Proposition 6.6, there exist \(C<\infty \) and \(\alpha >0\) such that

$$\begin{aligned} {\mathbb {P}}\left[ \bigcup _{i=m}^\infty E_i\cup F_i\right] \le C\,2^{-\alpha m}. \end{aligned}$$

Therefore, it suffices to show that

$$\begin{aligned} \left\{ {\mathrm {LEW}}_n\notin \Gamma (f_m)\right\} \subseteq \bigcup _{i=m}^\infty E_i\cup F_i. \end{aligned}$$

Assume that \(E_i\) does not occur. Let \(x,y\in {\mathrm {LEW}}_n\) with \(2^{-M(i+1)}\le |x-y|< 2^{-Mi}\). Then,

$$\begin{aligned} D(x,y; {\mathrm {LEW}}_n) \le 2\sqrt{2^{-i}} \le 4\, |x-y|^{\frac{1}{2M}}. \end{aligned}$$

Assume that \(F_i\) does not occur. Let \(x,y\in {\mathrm {LEW}}_n\) with

$$\begin{aligned} 2^{-(i+1)} \le \max \left( {\mathrm {dist}}(x,\partial D),{\mathrm {dist}}(y,\partial D)\right) < 2^{-i}. \end{aligned}$$

Then,

$$\begin{aligned} D(x,y;{\mathrm {LEW}}_n)\le 2\,\sqrt{2^{-i}} \le 4\,\max \left( {\mathrm {dist}}(x,\partial D),{\mathrm {dist}}(y,\partial D)\right) ^{\frac{1}{2}}. \end{aligned}$$

Thus, if \(\bigcup _{i=m}^\infty E_i\cup F_i\) does not occur, then for all \(x,y\in {\mathrm {LEW}}_n\) with \(|x-y|< 2^{-Mm}\) or \(\max \left( {\mathrm {dist}}(x,\partial D),{\mathrm {dist}}(y,\partial D)\right) <2^{-Mm}\),

$$\begin{aligned} \min \left( |x-y|,\max \left( {\mathrm {dist}}(x,\partial D),{\mathrm {dist}}(y,\partial D)\right) \right) \ge \left( \frac{D(x,y;{\mathrm {LEW}}_n)}{4}\right) ^{2M}. \end{aligned}$$

This implies that if \(\bigcup _{i=m}^\infty E_i\cup F_i\) does not occur, then for all \(x,y\in {\mathrm {LEW}}_n\) (when both \(|x-y|\) and the maximum of the boundary distances are at least \(2^{-Mm}\), the bound below holds trivially, since \(f_m\le 2^{-Mm}\)),

$$\begin{aligned} \min \left( |x-y|,\max \left( {\mathrm {dist}}(x,\partial D),{\mathrm {dist}}(y,\partial D)\right) \right) \ge f_m\left( D(x,y;{\mathrm {LEW}}_n)\right) . \end{aligned}$$

Since \({\mathrm {LEW}}_n\in \Gamma \), we have proved that \({\mathrm {LEW}}_n\in \Gamma (f_m)\). The proof of Lemma 6.8 is complete. \(\square \)

Theorem 1.2 easily follows from Lemmas 6.7 and 6.8.

Proof of Theorem 1.2

Let \(f_m\) be as in Lemma 6.8. By Lemma 6.8, there exist \(C<\infty \) and \(\alpha >0\) such that for all \(m,n\ge 1\), \({\mathbb {P}}\left[ {\mathrm {LEW}}_n\in \Gamma (f_m)\right] \ge 1 - C\,2^{-\alpha m}\). By the Portmanteau theorem (see, e.g., [3, Theorem 2.1]), for each \(m\ge 1\),

$$\begin{aligned} {\mathbb {P}}\left[ {\mathrm {K}}\in \overline{\Gamma (f_m)}\right]\ge & {} \limsup _{n\rightarrow \infty }{\mathbb {P}}\left[ {\mathrm {LEW}}_n\in \overline{\Gamma (f_m)}\right] \\\ge & {} \limsup _{n\rightarrow \infty }{\mathbb {P}}\left[ {\mathrm {LEW}}_n\in \Gamma (f_m)\right] \ge 1 - C\,2^{-\alpha m}. \end{aligned}$$

Since \(f_m\) is increasing and continuous, by Lemma 6.7, for all \(m\ge 1\),

$$\begin{aligned} {\mathbb {P}}\left[ {\mathrm {K}}\in \Gamma \right] \ge {\mathbb {P}}\left[ {\mathrm {K}}\in \overline{\Gamma (f_m)}\right] \ge 1 - C\,2^{-\alpha m}. \end{aligned}$$

Thus, \({\mathbb {P}}\left[ {\mathrm {K}}\in \Gamma \right] =1\). \(\square \)

7 Hausdorff dimension

7.1 Preliminaries on loop erased walks

In this section we collect some auxiliary results about loop erased random walk. Let

$$\begin{aligned} {\mathrm {Es}}(m,n) = {\mathbb {P}}^{1,0}\otimes {\mathbb {P}}^{2,0}\left[ {\mathrm {LE}}\left( R^1[0,\infty )\right) [s_m,t_n]\cap R^2\left[ 1,T^2_{0,n}\right] = \emptyset \right] , \end{aligned}$$

where \(t_n\) is the first time that \({\mathrm {LE}}(R^1[0,\infty ))\) exits from B(0, n), and \(s_m\) is the time of its last visit to B(0, m) before time \(t_n\). We define \({\mathrm {Es}}(m,n) = 1\) for \(m\ge n\). Let

$$\begin{aligned} \alpha = 2 - \beta \in \left[ \frac{1}{3},1\right) , \end{aligned}$$

where \(\beta = \lim _{n\rightarrow \infty }\frac{\log {\mathbb {E}}^{1,0}\left[ \mathrm {len}\,{\mathrm {LE}}(R^1[0,T^1_{0,n}])\right] }{\log n}\) is the growth exponent of the loop erased random walk. Its existence is shown in [27]. By [27, Lemma 7.2.2], for all \(\epsilon >0\) there exist \(C_{1,\epsilon },C_{2,\epsilon }\in (0,\infty )\) such that for all \(m\le n\),

$$\begin{aligned} C_{1,\epsilon }\,\left( \frac{m}{n}\right) ^{\alpha + \epsilon } \le {\mathrm {Es}}(m,n) \le C_{2,\epsilon }\,\left( \frac{m}{n}\right) ^{\alpha - \epsilon }. \end{aligned}$$
(7.1)
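The quantity \({\mathrm {Es}}(m,n)\) is also amenable to simulation, which may help the reader's intuition for (7.1). The following Python sketch is purely illustrative and not part of the paper; it stops \(R^1\) upon exiting B(0, n) instead of running it forever, which changes the quantity only up to constants (in the spirit of the Radon–Nikodym comparison used in the proof of Lemma 7.1 below), and all helper names are our own.

```python
import numpy as np

rng = np.random.default_rng(0)
STEPS = np.array([[1,0,0],[-1,0,0],[0,1,0],[0,-1,0],[0,0,1],[0,0,-1]])

def walk_until_exit(n):
    """Simple random walk on Z^3 from 0, stopped on first exit from B(0, n)."""
    pos, path = np.zeros(3, dtype=int), [np.zeros(3, dtype=int)]
    while np.linalg.norm(pos) <= n:
        pos = pos + STEPS[rng.integers(6)]
        path.append(pos)
    return np.array(path)

def loop_erase(path):
    """Chronological loop erasure LE(path) of a lattice path."""
    le, index = [], {}
    for p in map(tuple, path):
        if p in index:                       # a loop has closed: erase it
            del le[index[p] + 1:]
            index = {q: k for k, q in enumerate(le)}
        else:
            index[p] = len(le)
            le.append(p)
    return np.array(le)

def estimate_Es(m, n, trials=200):
    """Crude Monte Carlo proxy for Es(m, n)."""
    hits = 0
    for _ in range(trials):
        le = loop_erase(walk_until_exit(n))
        d = np.linalg.norm(le, axis=1)
        t = np.flatnonzero(d > n)[0]                # first exit of LE from B(0, n)
        s = np.flatnonzero(d[:t + 1] <= m)[-1]      # last visit to B(0, m) before t
        middle = {tuple(p) for p in le[s:t + 1]}
        r2 = walk_until_exit(n)[1:]                 # independent walk, times >= 1
        if middle.isdisjoint(map(tuple, r2)):
            hits += 1
    return hits / trials
```

By (7.1), plotting the logarithm of the estimate against \(\log (m/n)\) over a range of m should produce a slope close to \(\alpha \).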

The following lemma is the main ingredient in the proof of the upper bound on the Hausdorff dimension of \({\mathrm {K}}\). In its proof we will only use the upper bound from (7.1).

Lemma 7.1

For any \(\delta >0\) there exists \(C_\delta <\infty \) such that for all \(\varepsilon >0\), \(n\ge 1\) and \(x\in B(0,n-2\varepsilon n){\setminus } \overline{B(0,2\varepsilon n)}\),

$$\begin{aligned} {\mathbb {P}}^{1,0}\left[ {\mathrm {LE}}\left( R^1\left[ 0,T^1_{0,n}\right] \right) \cap B(x,\varepsilon n)\ne \emptyset \right] \le C_\delta \cdot \left( \frac{\varepsilon n}{|x|}\right) ^{1+\alpha - \delta }. \end{aligned}$$
(7.2)

Proof

We write \(B = B(x,\varepsilon n)\) throughout the proof. Let \(L = \sup \{i\le T^1_{0,n}:R^1(i)\in B\}\) be the time of the last visit of \(R^1\) to B before leaving B(0, n). We will use the following observation,

$$\begin{aligned} \left\{ {\mathrm {LE}}\left( R^1\left[ 0,T^1_{0,n}\right] \right) \cap B\ne \emptyset \right\} = \left\{ L<T^1_{0,n},~R^1\left[ L+1,T^1_{0,n}\right] \cap \lambda \left( {\mathrm {LE}}\left( R^1[0,L]\right) \right) = \emptyset \right\} , \end{aligned}$$

where for a path \(\gamma \), we write \(\lambda (\gamma )\) for the piece of \(\gamma \) from the start until its first entrance to B. The probability of the event on the right hand side equals

$$\begin{aligned}&\sum _{y\in B}\sum _t{\mathbb {P}}^{1,0}\otimes {\mathbb {P}}^{2,y} \left[ T^1_{0,n}>t,~R^1(t) = y,~ \lambda \left( {\mathrm {LE}}\left( R^1[0,t]\right) \right) \cap R^2\left[ 1,T^2_{0,n}\right] = \emptyset ,\right. \\&\quad \left. ~R^2\left[ 1,T^2_{0,n}\right] \cap B = \emptyset \right] . \end{aligned}$$

By the time reversibility of loop erasure, see [12, Lemma 7.2.1], the probability in the sum equals

$$\begin{aligned}&{\mathbb {P}}^{1,y}\otimes {\mathbb {P}}^{2,y} \left[ T^1_{0,n}>t,~R^1(t) = 0,~ \mu \left( {\mathrm {LE}}\left( R^1[0,t]\right) \right) \cap R^2\left[ 1,T^2_{0,n}\right] = \emptyset ,\right. \\&\quad \left. ~R^2\left[ 1,T^2_{0,n}\right] \cap B = \emptyset \right] , \end{aligned}$$

where for a path \(\gamma \), we write \(\mu (\gamma )\) for the piece of \(\gamma \) from the last visit to B until the end. Summation over t gives

$$\begin{aligned}&{\mathbb {P}}^{1,y}\otimes {\mathbb {P}}^{2,y} \left[ T^1_{0,n}>T^1_0,~ \mu \left( {\mathrm {LE}}\left( R^1\left[ 0,T^1_0\right] \right) \right) \cap R^2\left[ 1,T^2_{0,n}\right] = \emptyset ,\right. \\&\quad \left. ~R^2\left[ 1,T^2_{0,n}\right] \cap B = \emptyset \right] , \end{aligned}$$

where \(T^1_0 = T^1(\{0\})\). Let \(d_x = {\mathrm {dist}}(x,\partial B(0,n))\) and define \(R_x = \frac{1}{2}\min (|x|,d_x)\). We first consider the case when \(R^1\) returns to \(B(y,2\varepsilon n)\) after leaving \(B(y,\frac{1}{4} R_x)\):

$$\begin{aligned}&\sum _{y\in B}{\mathbb {P}}^{1,y}\otimes {\mathbb {P}}^{2,y}\left[ R^1\left[ T^1_{y,\frac{1}{4} R_x},T^1_{0,n}\right] \cap B(y,2\varepsilon n)\ne \emptyset ,~T^1_{0,n}>T^1_0,\right. \\&\quad \left. ~R^2\left[ 1,T^2_{0,n}\right] \cap B = \emptyset \right] \\&\quad {\mathop {\le }\limits ^{(*)}}C\cdot (\varepsilon n)^2\cdot \frac{\varepsilon n}{R_x}\cdot \frac{d_x}{|x|\,n}\cdot \frac{1}{R_x} {\mathop {\le }\limits ^{(**)}}C'\cdot \left( \frac{\varepsilon n}{|x|}\right) ^2 \le C'\cdot \left( \frac{\varepsilon n}{|x|}\right) ^{1 + \alpha }, \end{aligned}$$

where \((*)\) follows from [12, Proposition 1.5.10], \((**)\) by considering separately the cases \(R_x = |x|\) and \(R_x = d_x\) and using \(R_x\ge \varepsilon n\), and the last inequality from \(\alpha \le 1\).

Thus, to establish (7.2), it remains to consider the case when \(R^1\) does not return to \(B(y,2\varepsilon n)\) after leaving \(B(y,\frac{1}{4} R_x)\). In this case, we have the inclusion

$$\begin{aligned} \left\{ \mu \left( {\mathrm {LE}}\left( R^1\left[ 0,T^1_0\right] \right) \right) \cap R^2\left[ 1,T^2_{0,n}\right] = \emptyset \right\} \subseteq \left\{ {\mathrm {LE}}\left( R^1\left[ 0,T^1_0\right] \right) [s,t]\cap R^2\left[ 1,T^2_{0,n}\right] = \emptyset \right\} , \end{aligned}$$

where t is the first time that \({\mathrm {LE}}(R^1[0,T^1_0])\) exits from \(B(y,\frac{1}{4} R_x)\), and s is its last visit time to \(B(y,2\varepsilon n)\) before t. Therefore, in this case,

$$\begin{aligned}&{\mathbb {P}}^{2,y}\left[ \mu \left( {\mathrm {LE}}\left( R^1\left[ 0,T^1_0\right] \right) \right) \cap R^2\left[ 1,T^2_{0,n}\right] = \emptyset ,~R^2\left[ 1,T^2_{0,n}\right] \cap B = \emptyset \right] \\&\quad \le {\mathbb {P}}^{2,y}\left[ {\mathrm {LE}}\left( R^1\left[ 0,T^1_0\right] \right) [s,t]\cap R^2\left[ 1,T^2_{0,n}\right] = \emptyset ,~R^2\left[ 1,T^2_{0,n}\right] \cap B = \emptyset \right] \\&\quad {\mathop {\le }\limits ^{(*)}}{\mathbb {P}}^{2,y}\left[ R^2\left[ 1,T^2_{y,\varepsilon n}\right] \cap B = \emptyset \right] \cdot \max _{z\in \partial B\left( y,\varepsilon n\right) }\,\\&\qquad {\mathbb {P}}^{2,z}\left[ {\mathrm {LE}}\left( R^1\left[ 0,T^1_0\right] \right) [s,t]\cap R^2\left[ 0,T^2_{0,n}\right] = \emptyset \right] \\&\quad {\mathop {\le }\limits ^{(**)}}C\cdot \frac{1}{\varepsilon n}\cdot {\mathbb {P}}^{2,y}\left[ {\mathrm {LE}}\left( R^1\left[ 0,T^1_0\right] \right) [s,t]\cap R^2\left[ 0,T^2_{0,n}\right] = \emptyset \right] , \end{aligned}$$

where \((*)\) follows from the strong Markov property and \((**)\) from [12, Proposition 1.5.10] and the Harnack inequality. It remains to bound the probability

$$\begin{aligned} {\mathbb {P}}^{1,y}\otimes {\mathbb {P}}^{2,y} \left[ T^1_{0,n}>T^1_0,~ {\mathrm {LE}}\left( R^1\left[ 0,T^1_0\right] \right) [s,t]\cap R^2\left[ 0,T^2_{0,n}\right] = \emptyset \right] . \end{aligned}$$

By the results of [24, Sect. 4.1], if X is a random walk from y conditioned to hit 0 before \(\partial B(0,n)\) and killed upon hitting 0, Y is an infinite random walk from y, and the laws of their loop erasures until the first exit times from \(B(y,\frac{1}{4} R_x)\) are \({\mathbb {P}}_X\) and \({\mathbb {P}}_Y\), then the Radon–Nikodym derivative \(\frac{d{\mathbb {P}}_X}{d{\mathbb {P}}_Y}\) is bounded from below and above by universal constants \(C_1>0\) and \(C_2<\infty \). Therefore, the probability in the above display is bounded from above by

$$\begin{aligned} C_2\cdot {\mathbb {P}}^{1,y}\otimes {\mathbb {P}}^{2,y} \left[ {\mathrm {LE}}\left( R^1[0,\infty )\right) [s,t]\cap R^2\left[ 0,T^2_{0,n}\right] = \emptyset \right] \cdot {\mathbb {P}}^{1,y}\left[ T^1_{0,n}>T^1_0\right] , \end{aligned}$$

which is at most

$$\begin{aligned} C_2\cdot {\mathrm {Es}}\left( 2\varepsilon n,\frac{1}{4} R_x\right) \cdot {\mathbb {P}}^{1,y}\left[ T^1_{0,n}>T^1_0\right] \le C_\delta \cdot \left( \frac{\varepsilon n}{R_x}\right) ^{\alpha - \delta } \cdot \frac{d_x}{|x|\,n} \le C_\delta \cdot \frac{1}{\varepsilon n}\cdot \left( \frac{\varepsilon n}{|x|}\right) ^{1+\alpha - \delta }, \end{aligned}$$

where the last inequality again follows by considering cases \(R_x=|x|\) and \(R_x=d_x\) and using \(\alpha \le 1\). Finally, by summing over y on the interior boundary of B, we get the bound

$$\begin{aligned} C\cdot (\varepsilon n)^2\cdot \frac{1}{\varepsilon n}\cdot C_\delta \cdot \frac{1}{\varepsilon n}\cdot \left( \frac{\varepsilon n}{|x|}\right) ^{1+\alpha - \delta }, \end{aligned}$$

giving (7.2). The proof of Lemma 7.1 is complete. \(\square \)

Remark 7.2

Essentially the same proof gives the complementary lower bound to (7.2) for x’s away from the boundary of B(0, n). For any \(\delta >0\) there exists \(c_\delta >0\) such that for all \(\varepsilon >0\), \(n\ge 1\) and \(x\in B(0,\frac{1}{2} n){\setminus } \overline{B(0,2\varepsilon n)}\),

$$\begin{aligned} {\mathbb {P}}^{1,0}\left[ {\mathrm {LE}}\left( R^1\left[ 0,T^1_{0,n}\right] \right) \cap B\left( x,\varepsilon n\right) \ne \emptyset \right] \ge c_\delta \cdot \left( \frac{\varepsilon n}{|x|}\right) ^{1+\alpha + \delta }. \end{aligned}$$

7.2 Proof of Theorem 1.4: upper bound

In this section we use Lemma 7.1 to prove that \({\mathrm {dim}_{\mathrm {H}}}({\mathrm {K}})\le 2 - \alpha \) almost surely. Recall that the Hausdorff dimension of a subset S of \({\mathbb {R}}^d\) is defined as

$$\begin{aligned} {\mathrm {dim}_{\mathrm {H}}}(S) = \inf \left\{ \delta ~:~{\mathcal {H}}^\delta (S) = 0\right\} , \end{aligned}$$

where \({\mathcal {H}}^\delta (S)\) is the \(\delta \)-Hausdorff measure of S, \({\mathcal {H}}^\delta (S) = \lim _{\varepsilon \rightarrow 0}\inf \sum _{j=1}^\infty {\mathrm {diam}}(S_j)^\delta \), and the infimum is taken over all countable collections of sets \(\{S_j\}\) covering S with \({\mathrm {diam}}(S_j)\le \varepsilon \).
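The covering counts used below have a simple computational counterpart, the box-counting proxy for the dimension. The following Python sketch is our own heuristic illustration (box counting only bounds \({\mathrm {dim}_{\mathrm {H}}}\) from above in general) and plays no role in the rigorous argument.

```python
import numpy as np

def box_counts(points, eps_list):
    """For each eps, the number of half-open boxes of side eps that meet
    the finite point cloud `points`; the slope of log N against
    log(1/eps) is the usual box-counting estimate of dimension."""
    P = np.asarray(points, dtype=float)
    return [len({tuple(b) for b in np.floor(P / eps).astype(int).tolist()})
            for eps in eps_list]
```

Applied to the rescaled loop erasure \(\frac{1}{n}{\mathrm {LE}}(R[0,T_{0,n}])\) for large n, this gives a crude numerical companion to the covering argument below.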

Consider the coupling of \({\mathrm {K}}\) and \({\mathrm {LEW}}_n\) such that \(d_H({\mathrm {LEW}}_n,{\mathrm {K}})\rightarrow 0\) as \(n\rightarrow \infty \) almost surely. Let \(N_1(\varepsilon )\) be the number of balls of radius \(\frac{1}{2}\varepsilon \) centered in \(\frac{1}{2}\varepsilon {\mathbb {Z}}^3\cap \left( D_{1-2\varepsilon }{\setminus } D_{2\varepsilon }\right) \) that have non-empty intersection with \({\mathrm {K}}\), and \(N_2(\varepsilon )\) the number of balls of radius \(\frac{1}{2}\varepsilon \) centered in \(\frac{1}{2}\varepsilon {\mathbb {Z}}^3\cap \left( D_{2\varepsilon }\cup D_{1-2\varepsilon }^c\right) \) that have non-empty intersection with \({\mathrm {K}}\). Similarly, define \(N_{1,n}(\varepsilon )\) as the number of balls of radius \(\varepsilon n\) centered in \(\frac{1}{2}\varepsilon n{\mathbb {Z}}^3\cap B(0,n-2\varepsilon n){\setminus }\overline{B(0,2\varepsilon n)}\) that have non-empty intersection with \({\mathrm {LE}}(R[0,T_{0,n}])\), and \(N_{2,n}(\varepsilon )\) as the corresponding number of balls centered in \(\frac{1}{2}\varepsilon n{\mathbb {Z}}^3\cap \left( B(0,2\varepsilon n)\cup B(0,n-2\varepsilon n)^c\right) \). Then, for any positive \(\delta \) and \(\xi \),

$$\begin{aligned} {\mathbb {P}}\left[ N_1(\varepsilon )\ge \delta \,\varepsilon ^{\alpha -2-\xi }\right] \le {\mathbb {P}}\left[ d_H({\mathrm {LEW}}_n,{\mathrm {K}})\ge \frac{1}{2}\varepsilon \right] + {\mathbb {P}}\left[ N_{1,n}(\varepsilon ) \ge \delta \,\varepsilon ^{\alpha -2-\xi }\right] . \end{aligned}$$

By Lemma 7.1,

$$\begin{aligned} {\mathbb {P}}\left[ N_{1,n}(\varepsilon ) \ge \delta \,\varepsilon ^{\alpha -2-\xi }\right] \le \frac{1}{\delta }\,\varepsilon ^{2-\alpha +\xi }\,{\mathbb {E}}\left[ N_{1,n}(\varepsilon )\right] \le \frac{1}{\delta }\, C_\xi \,\varepsilon ^{\frac{1}{2}\xi }. \end{aligned}$$
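Here the expectation is bounded by summing Lemma 7.1 (applied with \(\delta = \frac{1}{2}\xi \)) over the centers, grouped into spherical shells: the shell of radii \([\frac{1}{2}k\varepsilon n,\frac{1}{2}(k+1)\varepsilon n)\) contains at most \(Ck^2\) centers, so that, sketching the computation,

$$\begin{aligned} {\mathbb {E}}\left[ N_{1,n}(\varepsilon )\right] \le C_\xi \sum _{k=1}^{\lceil 4/\varepsilon \rceil } k^2\left( \frac{\varepsilon n}{\frac{1}{2}k\varepsilon n}\right) ^{1+\alpha -\frac{1}{2}\xi } \le C'_\xi \sum _{k=1}^{\lceil 4/\varepsilon \rceil } k^{1-\alpha +\frac{1}{2}\xi } \le C''_\xi \,\varepsilon ^{\alpha -2-\frac{1}{2}\xi }. \end{aligned}$$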

By sending n to infinity, we obtain that \({\mathbb {P}}\left[ N_1(\varepsilon )\ge \delta \,\varepsilon ^{\alpha -2-\xi }\right] \le \frac{1}{\delta }\, C_\xi \,\varepsilon ^{\frac{1}{2}\xi }\). To obtain a bound for \(N_2(\varepsilon )\), we proceed as above, but use Proposition 6.6 instead of Lemma 7.1. We get \({\mathbb {P}}\left[ N_2(\varepsilon )\ge \delta \,\varepsilon ^{-\frac{1}{2}}\right] \le C_\delta \,\varepsilon ^{\frac{1}{2}\xi }\), if \(\xi \) is sufficiently small. Since \(\alpha \le 1\), this implies that \({\mathbb {P}}\left[ N_2(\varepsilon )\ge \delta \,\varepsilon ^{\alpha -2-\xi }\right] \le C_\delta \,\varepsilon ^{\frac{1}{2}\xi }\).

For \(\gamma \ge 0\), let \({\mathcal {H}}^\gamma _\varepsilon ({\mathrm {K}}) = \inf \sum _{j=1}^\infty {\mathrm {diam}}(S_j)^\gamma \), where the infimum is taken over all coverings of \({\mathrm {K}}\) by sets \(S_j\) with diameter at most \(\varepsilon \). Then, \({\mathcal {H}}^\gamma _\varepsilon ({\mathrm {K}})\le \varepsilon ^\gamma \,\left( N_1(\varepsilon )+N_2(\varepsilon )\right) \), and we obtain from the above estimates that

$$\begin{aligned} {\mathbb {P}}\left[ {\mathcal {H}}^{2-\alpha +\xi }_\varepsilon ({\mathrm {K}})\ge 2\delta \right] \le C_{\xi ,\delta }\,\varepsilon ^{\frac{1}{2}\xi }. \end{aligned}$$

Note that if \(\varepsilon \searrow 0\) then \({\mathcal {H}}^{2-\alpha +\xi }_\varepsilon ({\mathrm {K}})\nearrow {\mathcal {H}}^{2-\alpha +\xi }({\mathrm {K}})\). Thus, for all \(\delta >0\), \({\mathbb {P}}[{\mathcal {H}}^{2-\alpha +\xi }({\mathrm {K}})\ge 2\delta ]=0\), i.e., \({\mathcal {H}}^{2-\alpha +\xi }({\mathrm {K}}) = 0\) almost surely. Since \(\xi >0\) is arbitrary, we get \({\mathrm {dim}_{\mathrm {H}}}({\mathrm {K}})\le 2-\alpha \). \(\square \)

7.3 Proof of Theorem 1.4: lower bound

Let \({\mathrm {BM}}\) be the Brownian motion in \({\mathbb {R}}^3\) and \(\tau = \inf \{t\ge 0:|{\mathrm {BM}}(t)|=1\}\) the first exit time of \({\mathrm {BM}}\) from D. The set of cut points \({\mathrm {C}}\) of \({\mathrm {BM}}[0,\tau ]\) is defined as

$$\begin{aligned} {\mathrm {C}} = \left\{ {\mathrm {BM}}(t)~:~0\le t\le \tau ,~{\mathrm {BM}}[0,t]\cap {\mathrm {BM}}(t,\tau ]=\emptyset \right\} . \end{aligned}$$

It is proved in [14] that \({\mathrm {dim}_{\mathrm {H}}}({\mathrm {C}}) = 2 - \xi \) almost surely, where \(\xi \) is the non-intersection exponent for 3-dimensional Brownian motion, which satisfies \(\xi \in (\frac{1}{2},1)\). Note that every path in \({\mathrm {BM}}[0,\tau ]\) from \({\mathrm {BM}}(0)=0\) to \({\mathrm {BM}}(\tau )\in \partial D\) goes through all the cut points. We denote by \({\mathcal {S}}(U)\) the set of all points of \(U\subseteq \overline{D}\) which disconnect 0 from \(\partial D\) in U. As noticed above, \({\mathcal {S}}({\mathrm {BM}}[0,\tau ])\supseteq {\mathrm {C}}\), thus \({\mathrm {dim}_{\mathrm {H}}}({\mathcal {S}}({\mathrm {BM}}[0,\tau ])) \ge 2 - \xi \) almost surely.
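As an aside, cut points have a transparent discrete analogue which is easy to compute: a time t is a cut time of a lattice path if no vertex visited by time t is visited again after time t. The following Python sketch is our own illustration, not part of the proof.

```python
def cut_times(path):
    """Times t with path[0..t] disjoint from path[t+1..end], the lattice
    analogue of the cut points C of BM[0, tau].  t is a cut time iff the
    last visit to every vertex seen by time t is <= t, i.e. iff the
    running maximum of last-visit times equals t.  Linear time."""
    pts = [tuple(p) for p in path]
    last = {p: i for i, p in enumerate(pts)}   # last visit time of each vertex
    cuts, running_max = [], -1
    for t, p in enumerate(pts):
        running_max = max(running_max, last[p])
        if running_max == t:
            cuts.append(t)
    return cuts
```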

Now, recall from Theorem 1.1 that \({\mathrm {BM}}[0,\tau ]\) has the same distribution as the union of the independent scaling limit of the loop erased random walk, \({\mathrm {K}}\), and all the loops from the Brownian loop soup of intensity 1 that are contained in D and intersect \({\mathrm {K}}\). Denote this union by X. Then \({\mathcal {S}}(X)\) has the same distribution as \({\mathcal {S}}({\mathrm {BM}}[0,\tau ])\) and, since \({\mathrm {K}}\) connects 0 and \(\partial D\), \({\mathcal {S}}(X)\subseteq {\mathrm {K}}\). Thus, \({\mathrm {dim}_{\mathrm {H}}}({\mathrm {K}}) \ge 2 - \xi \) almost surely. \(\square \)