Abstract
We provide a decomposition of the trace of Brownian motion into a simple path and an independent Brownian soup of loops that intersect the simple path. More precisely, we prove that any subsequential scaling limit of the loop erased random walk is a simple path (a new result in three dimensions), which can be taken as the simple path of the decomposition. In three dimensions, we also prove that the Hausdorff dimension of any such subsequential scaling limit lies in \((1,\frac{5}{3}]\). We conjecture that our decomposition uniquely characterizes the law of the simple path. If so, our results would give a new strategy for proving the existence of the scaling limit of the loop erased random walk and for establishing its rotational invariance.
1 Introduction
What does Brownian motion in \({\mathbb {R}}^d\) look like? This question has fascinated probabilists and mathematical physicists for a long time, and it continues to be an unending source of challenging problems. Not long after the existence of Brownian motion was rigorously established by Wiener in 1923, Lévy [23] proved that a two dimensional Brownian motion intersects itself almost surely, Kakutani [8] showed that a d-dimensional Brownian motion is almost surely a simple path when \(d \ge 5\), and Dvoretzky et al. [5] verified that a Brownian motion almost surely intersects itself in three dimensions but not in four. Much later, Taylor and Fristedt [6, 32] found that the Hausdorff dimension of the set of double points of the Brownian motion is two in two dimensions and one in three dimensions.
In this paper, we are interested in the nature of these self-intersections, more specifically, in how the loops formed by the Brownian motion are distributed in space. From this point of view, we may focus on the case of two and three dimensions. We give an explicit representation of such loops by establishing a decomposition of the Brownian path into a simple path and an independent set of loops. To explain it, let us begin with a similar problem for the simple random walk.
Consider a simple random walk (SRW) on the rescaled lattice \(\frac{1}{n}{\mathbb {Z}}^d\) started at the origin and stopped upon exiting from the unit ball. Its loop erasure, the loop erased random walk (LERW), is a simple path connecting the origin with the complement of the ball, obtained from the random walk path by chronologically erasing all its loops. Remarkably, the law of these loops is very explicit: they come from a Poisson point process of discrete loops (a random walk loop soup) on \(\frac{1}{n}{\mathbb {Z}}^d\), independent of the loop erasure [17, Sect. 9]. More precisely, if we denote the loop erased random walk by \({\mathrm {LEW}}_n\) and an independent random walk loop soup in the unit ball by \({\mathrm {LS}}_n\) (the exact definitions are given in Sects. 2.1 and 2.2), then
As the Brownian motion is the scaling limit of a simple random walk, it is natural to start by looking for an analogue of the random walk path decomposition in the continuous setting. However, unlike the random walk, the Brownian motion has a dense set of loops, and it is not clear how to remove them in chronological order. Zhan proved in [35] the existence of a loop erasure of planar Brownian motion, but uniqueness is missing, and three dimensions is for the time being out of reach. Nevertheless, we are able to get an analogue of (1.1) for the Brownian motion by passing suitably to the large-n limit on both sides of the decomposition (1.1). First of all, after interpolating linearly, we may view all the lattice paths and loops as continuous curves and loops in \({\mathbb {R}}^d\), and, more usefully, as elements of the metric space of all compact subsets of the closed unit ball equipped with the Hausdorff metric, which we denote by \(({\mathcal {K}}_D, d_H)\). Let \({\mathrm {K}}\) be any weak subsequential limit of \({\mathrm {LEW}}_n\) in \(({\mathcal {K}}_D, d_H)\), and \({\mathrm {BS}}\) the limit of \({\mathrm {LS}}_n\), which turns out to be the Brownian loop soup of Lawler and Werner [21].
Theorem 1.1
In 2 and 3 dimensions, the union of \({\mathrm {K}}\) and all the loops from an independent \({\mathrm {BS}}\) that intersect \({\mathrm {K}}\) has the same law in \(({\mathcal {K}}_D, d_H)\) as the trace of the Brownian motion stopped on exiting from the unit ball.
A related result has been proved in [19] for the “filling” of the planar Brownian path, where the filling of a closed set A in \({\mathbb {R}}^2\) is the union of A with all the bounded connected components of \({\mathbb {R}}^2 {\setminus } A\). It is shown there that the filling of the union of \({\mathrm {K}}\) and the loops from \({\mathrm {BS}}\) intersecting \({\mathrm {K}}\) has the same law as the filling of the Brownian path. However, the filling of a random set does not characterize its law. For instance, the filling of the \({\mathrm {SLE}}_{6}\) started at 0 up to the first exit time from the unit disc has the same law as the filling of the Brownian path [19, Theorem 9.4], while the law of \({\mathrm {SLE}}_{6}\) as a random compact subset of the disc is different from that of the Brownian trace.
In two dimensions, the sequence \({\mathrm {LEW}}_n\) converges to the Schramm–Loewner evolution with parameter 2 \(({\mathrm {SLE}}_2)\) [18], a simple path [26] with Hausdorff dimension \(\frac{5}{4}\) [2]. In particular, Theorem 1.1 immediately gives a decomposition of the planar Brownian path into a simple path and loops. Unfortunately, no explicit expression for the scaling limit of the LERW is known or conjectured in three dimensions. Kozma [11] proved that the sequence \({\mathrm {LEW}}_{2^n}\) is Cauchy in \(({\mathcal {K}}_D, d_H)\), which gives the existence of the scaling limit as a random compact subset of the ball, and, topologically, this is all that has been known up to now. Our next main result shows that in three dimensions \({\mathrm {K}}\) is a simple path.
Theorem 1.2
Let \(\gamma ^s\) and \(\gamma ^e\) be the end points of a simple path \(\gamma \), and define
Then, almost surely, \({\mathrm {K}}\in \Gamma \).
Theorems 1.1 and 1.2 give a decomposition in \(({\mathcal {K}}_D, d_H)\) of the Brownian path into a simple path and loops also in three dimensions. For completeness, let us comment briefly on higher dimensions. Our two main results also hold there, but the conclusions are rather trivial: in dimensions higher than 3, the scaling limit of the LERW is a d-dimensional Brownian motion [12, Theorem 7.7.6], which is itself a simple path, and the Brownian loop soup does not intersect it.
We believe that the decomposition of the Brownian path into a simple path and loops as in Theorem 1.1 is important not only because it sheds light on the nature of self-intersections in the Brownian path, but, more substantially, because uniqueness of the decomposition is expected, in which case the law of \({\mathrm {K}}\) would be uniquely characterized by the decomposition.
Conjecture 1.3
Let \({\mathrm {K}}_1\) and \({\mathrm {K}}_2\) be random elements in \(({\mathcal {K}}_D, d_H)\) such that \({\mathrm {K}}_1,{\mathrm {K}}_2\in \Gamma \) almost surely, and \({\mathrm {BS}}\) a Brownian loop soup in the unit ball independent from \({\mathrm {K}}_1\) and \({\mathrm {K}}_2\). If for each \(i\in \{1,2\}\), the union of \({\mathrm {K}}_i\) and all the loops from \({\mathrm {BS}}\) that intersect \({\mathrm {K}}_i\) has the same law as the trace of the Brownian motion stopped on exiting from the unit ball, then \({\mathrm {K}}_1\) and \({\mathrm {K}}_2\) have the same law in \(({\mathcal {K}}_D, d_H)\).
As immediate consequences of the uniqueness and Theorem 1.1, one would get a new strategy for proving the existence of the scaling limit of the loop erased random walk and its rotational invariance. Needless to say, it would provide a description of the LERW scaling limit in three dimensions, which is still missing. As far as we know, the conjecture has not been proved or disproved even in two dimensions.
Any subsequential limit \({\mathrm {K}}\) of \({\mathrm {LEW}}_n\) is a simple path, and it is immediate that in three dimensions, \({\mathrm {K}}\) has a different law than the Brownian path. In two dimensions, the law of \({\mathrm {K}}\) is explicit, namely that of the trace of \({\mathrm {SLE}}_2\). Our final main result provides rigorous bounds on the Hausdorff dimension of \({\mathrm {K}}\) in three dimensions. Let \(\xi \) be the non-intersection exponent for 3 dimensional Brownian motion [14], and \(\beta \) the growth exponent for the 3 dimensional loop erased random walk [15]. Both exponents exist by [14, 27] and satisfy the bounds \(\xi \in (\frac{1}{2},1)\) and \(\beta \in (1,\frac{5}{3}]\), see [14, 15].
Theorem 1.4
In 3 dimensions, \(2-\xi \le {\mathrm {dim}_{\mathrm {H}}}({\mathrm {K}})\le \beta \) almost surely. In particular,
The lower bound on \({\mathrm {dim}_{\mathrm {H}}}({\mathrm {K}})\) is an immediate application of Theorem 1.1 and a result on the Hausdorff dimension of the set of cut-points of the Brownian path. Here, a cut-point of a connected set F that contains 0 and intersects the boundary of the unit ball is any point \(x\in F\) such that 0 and the boundary of the ball are disconnected in \(F {\setminus } \{x \}\). The Hausdorff dimension of the set of cut-points of the three-dimensional Brownian path from 0 until exiting from the unit ball is precisely \(2- \xi \), cf. [14]. To see that it is a lower bound on \({\mathrm {dim}_{\mathrm {H}}}({\mathrm {K}})\), it remains to notice that in every decomposition of a Brownian path into a simple path and loops, all its cut-points are on the simple path.
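On the lattice, the cut-point condition is easy to check directly. The following Python sketch is our own discrete illustration (the helpers `neighbors`, `connected`, and `cut_points` are hypothetical names, not from the paper): a point of a finite set of lattice vertices is a cut-point if removing it disconnects a source vertex from a target set, which we test by breadth-first search.

```python
from collections import deque

def neighbors(x):
    """Nearest neighbors of a lattice point in Z^d."""
    for i in range(len(x)):
        for s in (-1, 1):
            yield x[:i] + (x[i] + s,) + x[i + 1:]

def connected(points, src, targets):
    """BFS inside the finite set `points`: is some target reachable from src?"""
    points = set(points)
    if src not in points:
        return False
    seen, queue = {src}, deque([src])
    while queue:
        x = queue.popleft()
        if x in targets:
            return True
        for nb in neighbors(x):
            if nb in points and nb not in seen:
                seen.add(nb)
                queue.append(nb)
    return False

def cut_points(points, src, targets):
    """Points of `points` whose removal disconnects src from the targets."""
    return [p for p in points
            if p != src and p not in targets
            and not connected(set(points) - {p}, src, targets)]
```

For a straight path from (0,0) to (3,0) with target (3,0), every interior point is a cut-point, whereas a 2x2 square of vertices has none; this mirrors the fact that every cut-point of the Brownian path must lie on the simple path of any decomposition.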
We expect that \({\mathrm {dim}_{\mathrm {H}}}({\mathrm {K}}) = \beta \) almost surely. Some steps towards this equality will be made in [28]. The same identity holds in two dimensions, where both the growth exponent and the Hausdorff dimension of \({\mathrm {SLE}}_2\) are known to be \(\frac{5}{4}\), see [2, 9] and also [16, 24]. In three dimensions, the value of \(\beta \) is not known or conjectured. Numerical experiments suggest that \(\beta = 1.62 \pm 0.01\) [7, 34], but the best rigorous bounds are \(1<\beta \le \frac{5}{3}\) [15].
1.1 Some words about proofs
Theorem 1.1 is an analogue of (1.1) in the continuum. To prove it, we start with the decomposition of the random walk path (1.1) and suitably take scaling limits on both sides of the decomposition. By (1.1), the union of the loop erased random walk \({\mathrm {LEW}}_n\) and the loops from an independent random walk loop soup \({\mathrm {LS}}_n\) that intersect \({\mathrm {LEW}}_n\) is the trace of a simple random walk on the lattice \(\frac{1}{n}{\mathbb {Z}}^d\) killed on exiting from the unit ball. In particular, it converges to the trace of the Brownian motion. On the other hand, \({\mathrm {LEW}}_n\) converges weakly (along subsequences) to \({\mathrm {K}}\), and, as was shown in two dimensions by Lawler and Trujillo [20], the loop soups \({\mathrm {LS}}_n\) and \({\mathrm {BS}}\) can be coupled so that, with high probability, there is a one-to-one correspondence between all large loops from \({\mathrm {LS}}_n\) and those from \({\mathrm {BS}}\), with each large loop from \({\mathrm {LS}}_n\) very close, in the Hausdorff distance, to the corresponding loop in \({\mathrm {BS}}\). Such a strong coupling of loop soups extends to all dimensions with little effort, see Theorem 2.2. So where is the challenge?
First, we may assume that \({\mathrm {LEW}}_n\) and \({\mathrm {K}}\) are defined on the same probability space and \(d_H({\mathrm {LEW}}_n,{\mathrm {K}})\rightarrow 0\) almost surely. Let \(\varepsilon <\delta \), and consider the event that \(d_H({\mathrm {LEW}}_n,{\mathrm {K}})<\varepsilon \) and to each loop \(\ell _n\) from \({\mathrm {LS}}_n\) of diameter at least \(\delta \) corresponds a unique loop \(\ell \) from the Brownian soup such that \(d_H(\ell _n,\ell )<\varepsilon \). By the strong coupling of loop soups (see Theorem 2.2), this event has high probability for all large n. The challenge is to show that the correspondence of loops in the strong coupling selects the right Brownian loops. What may go wrong? If a loop \(\ell _n\in {\mathrm {LS}}_n\) intersects \({\mathrm {LEW}}_n\), then the corresponding Brownian loop \(\ell \) does not have to intersect \({\mathrm {K}}\), and vice versa. The heart of the proof is to show that this does not happen.
To illustrate the difficulty, notice that very little is known about \({\mathrm {K}}\) in three dimensions. In particular, it is not a priori clear whether the Brownian soup intersects \({\mathrm {K}}\) almost surely (or even with positive probability). As we have seen, this fails in dimensions 4 and higher, and the three dimensional Brownian soup does not intersect a line. Thus, not all paths in \({\mathbb {R}}^3\) are hittable by Brownian loops, and we have to show that \({\mathrm {K}}\) is hittable. Moreover, we need every Brownian loop of large diameter (bigger than \(\delta \)) that gets close enough (within \(\varepsilon \) distance) to \({\mathrm {K}}\) to intersect it locally, and we need the same for large random walk loops and \({\mathrm {LEW}}_n\).
In two dimensions, analogous questions are classically resolved with the help of the Beurling projection principle, see [10], which states that a random walk starting near any simple path will intersect it with high probability. As we have just seen, such a principle cannot work in three dimensions for all paths. The main novelty of our proof is a Beurling-type estimate for the loop erased random walk, stating that most samples of the LERW are hittable, with probability close to one, by an independent simple random walk started anywhere near the LERW, see Theorem 3.1. This result is then easily converted into an analogous statement for random walk loops: with high probability, the only large loops that come close to \({\mathrm {LEW}}_n\) are those that intersect it, see Proposition 5.2.
Similar complications arise when we try to show that \({\mathrm {K}}\) is a simple path, although now without loop soups. We need to rule out the possibility that \({\mathrm {LEW}}_n\) backtracks from far away. In his proof that \({\mathrm {K}}\) is a simple path in two dimensions [26], Schramm introduced \((\varepsilon ,\delta )\)-quasi-loops as subpaths of \({\mathrm {LEW}}_n\) ending within distance \(\varepsilon \) of their starting point but stretching to distance \(\delta \). Of course, if a quasi-loop exists for all large n, it collapses, in the large-n limit, into a proper loop in \({\mathrm {K}}\). Thus, to show that \({\mathrm {K}}\) is a simple path, we need to rule out the existence of quasi-loops uniformly in n, namely, to show that for all \(\delta >0\),
Schramm proved this in two dimensions using the Beurling projection principle [26, Lemma 3.4]. As remarked before, the principle no longer works in three dimensions, but our Beurling-type estimate is strong enough to yield the desired conclusion, see Theorem 6.1.
We should mention that Kozma [11] proved that with high probability (as \(n\rightarrow \infty \)), \({\mathrm {LEW}}_n\) does not contain \((n^{-\gamma },\delta )\)-quasi-loops, see [11, Theorem 4]. This was enough to establish the convergence of the \({\mathrm {LEW}}_n\), but more is needed to show that \({\mathrm {K}}\) is a simple path. Unfortunately, Kozma's proof relies strongly on the fact that the choice of \(\varepsilon \) is n-dependent, and we need to develop a new method to get the uniform estimate.
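Schramm's quasi-loop condition is concrete enough to check directly on a lattice path. The following Python sketch is our own illustration (the helper `has_quasi_loop` is a hypothetical name): for each starting index, we scan forward and report an \((\varepsilon,\delta)\)-quasi-loop as soon as the path has reached distance at least \(\delta\) from the start of the subpath and then returned to within \(\varepsilon\) of it.

```python
def has_quasi_loop(path, eps, delta):
    """True if some subpath path[i..j] reaches distance >= delta from path[i]
    and later returns to within eps of path[i] (an (eps, delta)-quasi-loop)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    n = len(path)
    for i in range(n):
        far = False                       # has the subpath stretched to delta yet?
        for j in range(i + 1, n):
            d = dist(path[i], path[j])
            if d >= delta:
                far = True
            elif far and d <= eps:        # stretched out, then came back: quasi-loop
                return True
    return False
```

An out-and-back excursion of depth 5 contains a (0.5, 5)-quasi-loop, while a straight path contains none; the point of the uniform estimate is that such events are unlikely for \({\mathrm {LEW}}_n\) with \(\varepsilon\) fixed, not just \(\varepsilon = n^{-\gamma}\).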
1.2 Structure of the paper
The main definitions are given in Sect. 2: the loop erased random walk and its scaling limit in Sect. 2.1, the random walk loop soup in Sect. 2.2, and the Brownian loop soup in Sect. 2.3. In each subsection, we also discuss some properties and a few historical facts about the models. Section 2.3 also contains the statement about the coupling of the random walk and Brownian loop soups that we use in the proof of Theorem 1.1 (see Theorem 2.2). Some notation that is used only in the proofs is summarized in Sect. 2.4.
The Beurling-type estimate for the loop erased random walk is given in Sect. 3 (see Theorem 3.1). Some related lemmas about hittability of the LERW are also stated there and may be of independent interest (see Lemmas 3.2 and 3.3). The proof of the Beurling-type estimate is given in Sect. 3.1. The rest of the section is devoted to the proof of an auxiliary lemma and may be omitted on a first reading.
In Sect. 4 we construct the coupling of the loop soups satisfying the conditions of Theorem 2.2. This section may be skipped on a first reading.
The proof of our first main result, Theorem 1.1, is contained in Sect. 5. It is based on Theorems 2.2 and 3.1.
In Sect. 6, we prove that the scaling limit of the LERW is a simple path. In Sect. 6.1, we define quasi-loops and prove that the LERW is unlikely to contain them. The proof of our second main result, Theorem 1.2, is given in Sect. 6.2. It is based on the quasi-loop estimates from Sect. 6.1, namely on Propositions 5.2 and 5.1.
Finally, in Sect. 7 we prove bounds on the Hausdorff dimension of the scaling limit of the LERW stated in Theorem 1.4. This proof is largely based on some earlier results on non-intersection probabilities for independent LERW and SRW obtained in [27]. We recall these results in Sect. 7.1 [see (7.1)] and also prove there some of their consequences. The upper and lower bounds on the Hausdorff dimension are proved in the remaining subsections.
2 Definitions, notation, and some history
2.1 Loop erased random walk and its scaling limit
We consider the graph \({\mathbb {Z}}^d\) with edges between nearest neighbors. If x and y are nearest neighbors in \({\mathbb {Z}}^d\), we write \(x\sim y\). A path is a function \(\gamma \) from \(\{1,\ldots ,n\}\) to \({\mathbb {Z}}^d\) for some \(n\ge 1\) such that \(\gamma (i)\sim \gamma (i+1)\) for all \(1\le i\le n-1\). The integer n is the length of \(\gamma \), denoted by \(\mathrm {len}\, \gamma \). The loop erasure \({\mathrm {LE}}(\gamma )\) of a path \(\gamma \) is the (simple) path obtained by removing loops from \(\gamma \) in order of their appearance, namely,
We are interested in the loop erasure of a simple random walk path started at 0 and stopped at the first time when it exits from a large Euclidean ball, the loop erased random walk (LERW). The simple random walk started at \(x\in {\mathbb {Z}}^d\) is a Markov chain \(\{R(t)\}_{t\in {\mathbb {Z}}_+}\) with \(R(0) = x\) and transition probabilities
We denote its law and the expectation by \({\mathbb {P}}^x\) and \({\mathbb {E}}^x\), respectively.
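Both objects just defined, the simple random walk stopped on exiting a ball and the chronological loop erasure, are straightforward to simulate. Here is a minimal Python sketch (our own illustration; the helper names `loop_erase` and `srw_until_exit` are ours): whenever the walk revisits a vertex, the loop it closes is erased in one step.

```python
import random

def loop_erase(path):
    """Chronological loop erasure: scan the path; whenever a vertex repeats,
    erase the loop it closes (everything since its first visit)."""
    out = []
    index = {}                # vertex -> its position in `out`
    for v in path:
        if v in index:        # v closes a loop: truncate back to its first copy
            out = out[: index[v] + 1]
            index = {u: i for i, u in enumerate(out)}
        else:
            index[v] = len(out)
            out.append(v)
    return out

def srw_until_exit(n, d=2, rng=random):
    """Simple random walk on Z^d started at 0, stopped on first exit from
    the closed Euclidean ball of radius n."""
    x = (0,) * d
    path = [x]
    while sum(c * c for c in x) <= n * n:
        i = rng.randrange(d)
        s = rng.choice((-1, 1))
        x = x[:i] + (x[i] + s,) + x[i + 1:]
        path.append(x)
    return path
```

For example, `loop_erase([(0,0), (1,0), (1,1), (0,1), (0,0), (0,-1)])` returns `[(0,0), (0,-1)]`, since the first four steps form a loop back to the origin. By construction, the output visits each vertex at most once and shares its endpoints with the input walk.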
LERW was originally introduced in [13] and studied extensively by Lawler (see [15] and the references therein), who considered it as a substitute for the self-avoiding walk (see [29]), which is harder to analyze. Since its introduction, the LERW has played an important role in both statistical physics and mathematics through its relation to the uniform spanning tree (UST). Pemantle [25] proved that paths in the UST are distributed as LERWs; furthermore, the UST can be generated from LERWs by Wilson's algorithm [33].
We are interested in the scaling limit of the LERW and its connections to the Brownian motion. Let \(|\cdot |\) be the Euclidean norm in \({\mathbb {R}}^d\). The open ball of radius r is defined as \(D_r = \{x\in {\mathbb {R}}^d~:~|x| < r\}\), and we denote its closure by \(\overline{D}_r\). When \(r=1\), we just write D and \(\overline{D}\). We consider the loop erasure of the simple random walk path on \({\mathbb {Z}}^d\) from 0 until the first exit time from \(\overline{D}_n\), rescale it by \(\frac{1}{n}\), and denote the corresponding simple path on the lattice \(\frac{1}{n}{\mathbb {Z}}^d\) and its linear interpolation by \({\mathrm {LEW}}_n\). Consider the metric space \(({\mathcal {K}}_D, d_H)\) of all compact subsets of \(\overline{D}\) with the Hausdorff metric. We can think of \({\mathrm {LEW}}_n\) as random elements of \({\mathcal {K}}_D\). Let \({\mathrm {P}}_n\) be the probability measure on \(({\mathcal {K}}_D, d_H)\) induced by \({\mathrm {LEW}}_n\). Since \(({\mathcal {K}}_D, d_H)\) is compact and the space of Borel probability measures on a compact space is compact in the weak topology, for any subsequence \(n_k\), we can find a further subsequence \(n_{k_i}\) such that \({\mathrm {P}}_{n_{k_i}}\) converges weakly to a probability measure supported on compact subsets of \(\overline{D}\). In fact, more is known. In two dimensions, \({\mathrm {LEW}}_n\) converges weakly to \({\mathrm {SLE}}_2\) [18] (actually, even in a stronger sense). In 3 dimensions, \({\mathrm {LEW}}_{2^n}\) converges weakly as \(n\rightarrow \infty \) to a random compact subset of \(\overline{D}\), invariant under rotations and dilations [11].
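For finite point sets, such as a rescaled lattice path viewed as a subset of \(\overline{D}\), the Hausdorff distance underlying \(({\mathcal {K}}_D, d_H)\) is elementary to compute. A small Python sketch (illustration only, not from the paper):

```python
def hausdorff(A, B):
    """Hausdorff distance between two finite nonempty point sets in R^d:
    the larger of the two one-sided distances max_x min_y |x - y|."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    def one_sided(X, Y):
        return max(min(dist(x, y) for y in Y) for x in X)
    return max(one_sided(A, B), one_sided(B, A))
```

For instance, the distance between the singletons {(0,0)} and {(3,4)} is 5, and adding a point at distance 1 to one set moves the distance to 1; this is the metric in which convergence of \({\mathrm {LEW}}_n\) to \({\mathrm {K}}\) is understood.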
The existence of the LERW scaling limit will not be used in this paper. In fact, as discussed in the introduction, we are hoping that our approach can give an alternative proof of the existence. All our results are valid for any subsequential limit of \({\mathrm {LEW}}_n\), which we denote by \({\mathrm {K}}\) throughout the paper, and we will write for simplicity of notation that \({\mathrm {LEW}}_n\) converges to \({\mathrm {K}}\) without specifying a subsequence.
2.2 Random walk loop soup
To have a useful description of the loops generated by the loop erasure of a random walk path, we define a Poisson point process of discrete loops.
A rooted loop of length 2n in \({\mathbb {Z}}^d\) is a \((2n+1)\)-tuple \(\gamma = (\gamma _0,\ldots ,\gamma _{2n})\) with \(|\gamma _i - \gamma _{i-1}| = 1\) and \(\gamma _0 = \gamma _{2n}\). Let \({\mathcal L}\) be the space of all rooted loops. We are interested in a Poisson point process of rooted loops in which each individual loop “looks like” a random walk bridge. We define the random walk loop measure \({\mu ^{\mathrm {rwl}}}\) as the sigma-finite measure on \({\mathcal L}\) giving the value \(\frac{1}{2n}\cdot \left( \frac{1}{2d}\right) ^{2n}\) to each loop of length 2n. The factor \(\frac{1}{2n}\) should be understood as choosing the root of the loop of length 2n uniformly. The random walk loop soup \({\mathcal R}\) is the Poisson point process on the space \({\mathcal L}\times (0,\infty )\) with intensity measure \({\mu ^{\mathrm {rwl}}}\otimes \mathrm {Leb}_1\). For each \(\lambda >0\), the random walk loop soup induces a Poisson point process on the space \({\mathcal L}\) with intensity measure \(\lambda {\mu ^{\mathrm {rwl}}}\), as the pushforward by the map \(\sum _i \mathbbm {1}_{(\gamma _i,\lambda _i)} \mapsto \sum _{i:\lambda _i\le \lambda }\mathbbm {1}_{\gamma _i}\). We call the resulting process the random walk loop soup of intensity \(\lambda \) and denote it by \({\mathcal R}^\lambda \).
Poisson ensembles of Markovian loops (loop soups) were introduced informally by Symanzik [30] as a representation of the \(\phi ^4\) Euclidean field, and were subsequently researched extensively in the physics community. The first rigorous definition of a loop soup was given by Lawler and Werner [21] in the context of planar Brownian motion. Our definition of the random walk loop soup is taken from [17, Chapter 9]. Random walk and Brownian loop soups have lately attracted considerable attention from probabilists and mathematical physicists due to their intimate relations to the Gaussian free field, see, e.g., [22, 31]. Of particular importance for us is the following decomposition of a random walk path into its loop erasure and a collection of loops coming from an independent random walk loop soup of intensity 1.
Proposition 2.1
[17, Propositions 9.4.1 and 9.5.1] Let \(L_n\) be the loop erasure of a simple random walk on \({\mathbb {Z}}^d\) started at 0 and stopped upon exiting from \(D_n\). Let \({\mathcal R}^1\) be an independent random walk loop soup, and denote by \(R_n\) the set of all loops (with multiplicities) from \({\mathcal R}^1\) that are contained in \(D_n\) and intersect \(L_n\). Then the union of \(L_n\) and \(R_n\) has the same law as the trace of a simple random walk on \({\mathbb {Z}}^d\) started at 0 and stopped upon exiting from \(D_n\).
Our goal is to pass to the scaling limit in the above decomposition to get a similar representation for the Brownian path. The scaling limit of \(L_n\) is a random compact subset of the unit ball, as discussed in the previous section. We will soon see that the scaling limit of a random walk loop soup is the Brownian loop soup of Lawler and Werner, which we introduce in the next section.
We finish this section with a hands-on definition of the random walk loop soup. Let \({\mu ^{\mathrm {rwl}}}(z,n)\) be the restriction of \({\mu ^{\mathrm {rwl}}}\) to the loops of length 2n rooted at z. It is a finite measure with the total mass \({\mu ^{\mathrm {rwl}}}(z,n)[{\mathcal L}] = \frac{1}{2n}\, p_{2n}(z,z)\), where \(p_{2n}(x,y)\) is the probability that the simple random walk started at x will be at y at step 2n, and \(\frac{{\mu ^{\mathrm {rwl}}}(z,n)}{{\mu ^{\mathrm {rwl}}}(z,n)[{\mathcal L}]}\) is the probability distribution of the random walk bridge of length 2n starting and ending at z. The measure \({\mu ^{\mathrm {rwl}}}\) can be expressed as a linear combination of probability measures on \({\mathcal L}\),
which leads to the following simple recipe for sampling the random walk loop soups. Let
be independent Poisson point processes on \((0,\infty )\) with parameter \(\frac{p_{2n}(0,0)}{2n}\). Let
be independent random walk bridges of length 2n starting and ending at 0, independent of all the \(\widetilde{N}(z,n;\cdot )\). Then the multiset
is the random walk loop soup of intensity \(\lambda \). In other words, we first generate the number of (labeled) random walk bridges of length 2n, rooted at z, and with label at most \(\lambda \), \(\widetilde{N}(z,n;\lambda )\), and then sample their shapes according to the random walk bridge measure \(\frac{{\mu ^{\mathrm {rwl}}}(z,n)}{{\mu ^{\mathrm {rwl}}}(z,n)[{\mathcal L}]}\).
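The recipe above lends itself to direct simulation. The following Python sketch is our own illustration (the helper names are hypothetical, and we restrict to a finite set of roots and bounded loop lengths): we compute \(p_{2n}(0,0)\) exactly by convolving the one-step kernel, draw Poisson counts with mean \(\lambda\, p_{2n}(0,0)/(2n)\) per root and half-length, and sample bridge shapes by rejection.

```python
import math, random
from collections import defaultdict

def return_prob(steps, d=2):
    """p_steps(0,0): exact return probability of the SRW on Z^d after
    `steps` steps, computed by convolving the one-step kernel."""
    dist = {(0,) * d: 1.0}
    moves = [tuple((1 if k == i else 0) * s for k in range(d))
             for i in range(d) for s in (-1, 1)]
    for _ in range(steps):
        new = defaultdict(float)
        for x, p in dist.items():
            for m in moves:
                new[tuple(a + b for a, b in zip(x, m))] += p / (2 * d)
        dist = new
    return dist.get((0,) * d, 0.0)

def sample_bridge(steps, d=2, rng=random):
    """Rejection-sample an SRW bridge of the given length from 0 to 0."""
    moves = [tuple((1 if k == i else 0) * s for k in range(d))
             for i in range(d) for s in (-1, 1)]
    while True:
        path = [(0,) * d]
        for _ in range(steps):
            m = rng.choice(moves)
            path.append(tuple(a + b for a, b in zip(path[-1], m)))
        if path[-1] == (0,) * d:
            return path

def poisson(mean, rng=random):
    """Sample a Poisson(mean) variate by inversion (fine for small means)."""
    u, k, p = rng.random(), 0, math.exp(-mean)
    c = p
    while u > c:
        k += 1
        p *= mean / k
        c += p
    return k

def sample_loop_soup(lam, roots, max_half_len, d=2, rng=random):
    """Loops of length 2n (n <= max_half_len) rooted in `roots`: Poisson many
    per (root, n) with mean lam * p_{2n}(0,0) / (2n), shapes i.i.d. bridges."""
    loops = []
    for n in range(1, max_half_len + 1):
        mean = lam * return_prob(2 * n, d) / (2 * n)
        for z in roots:
            for _ in range(poisson(mean, rng)):
                bridge = sample_bridge(2 * n, d, rng)
                loops.append([tuple(a + b for a, b in zip(z, v)) for v in bridge])
    return loops
```

The truncation in `max_half_len` and the finite root set are simulation conveniences; the actual soup has loops of all lengths rooted everywhere, with the total mass per root and length given by \(p_{2n}(0,0)/(2n)\) as in the text.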
2.3 Brownian loop soup and a strong coupling of loop soups
Recall our strategy: we want to obtain a decomposition of a Brownian path by taking the scaling limit of both sides of the corresponding random walk path decomposition. For this, we still need to discuss the existence of a scaling limit of the random walk loop soup. In fact, the scaling limit is explicit: it is the Brownian loop soup of Lawler and Werner [21], which we now describe.
A rooted loop in \({\mathbb {R}}^d\) is a continuous function \(\gamma :[0,t_\gamma ]\rightarrow {\mathbb {R}}^d\) with \(\gamma (0) = \gamma (t_\gamma )\), where \(t_\gamma \in (0,\infty )\) is the time duration of \(\gamma \). We denote by \({\mathcal {C}}\) the set of all rooted loops. For \(z\in {\mathbb {R}}^d\) and \(t>0\), let \({\mu ^{\mathrm {bb}}}(z,t)\) be the measure on \({\mathcal {C}}\) induced by the Brownian bridge from z to z of time duration t. The Brownian loop measure \({\mu ^{\mathrm {bl}}}\) is the measure on \({\mathcal {C}}\) given by
Notice the analogy with a similar representation (2.1) of the random walk loop measure as a linear combination of random walk bridge measures. The measure \({\mu ^{\mathrm {bl}}}\) of course inherits the invariance under the Brownian scaling, \((r\cdot \mathrm {space}, r^2\cdot \mathrm {time~duration})\), from the bridge measures.
The Brownian loop soup \({\mathcal B}\) in \({\mathbb {R}}^d\) is the Poisson point process on the space \({\mathcal {C}}\times (0,\infty )\) with the intensity measure \({\mu ^{\mathrm {bl}}}\otimes \mathrm {Leb}_1\). For each \(\lambda >0\), the Brownian loop soup induces the Poisson point process on the space \({\mathcal {C}}\) with the intensity measure \(\lambda {\mu ^{\mathrm {bl}}}\), as a pushforward by the function \(\sum _i \mathbbm {1}_{(\gamma _i,\lambda _i)} \mapsto \sum _{i:\lambda _i\le \lambda }\mathbbm {1}_{\gamma _i}\). We call the resulting process the Brownian loop soup of intensity \(\lambda \) and denote it by \({\mathcal B}^\lambda \).
The Brownian loop soups exhibit strong connections with the Schramm–Loewner evolution and the Gaussian free field, see, e.g., [4] for an overview, and they have been studied quite extensively. The connection between the random walk loop soups and the Brownian ones was established by Lawler and Trujillo [20] in two dimensions, who constructed a strong coupling between the two loop soups, which is much more than is needed to see that the scaling limit of a random walk loop soup is a Brownian soup. For our purposes, we need to extend the result of [20] to higher dimensions. In fact, we only need dimension 3, but we give an extension to arbitrary dimensions: on the one hand, the proof does not get more complicated, and on the other, it may be instructive to see how the various parameters depend on the dimension. Let
Theorem 2.2
There exist \(C<\infty \) and a coupling of the Brownian loop soup \({\mathcal B}= \{{\mathcal B}^\lambda \}_{\lambda >0}\) and the random walk loop soup \({\mathcal R}= \{{\mathcal R}^\lambda \}_{\lambda >0}\) such that for any \(\lambda >0, r\ge 1, N\ge 1\), and \(\theta \in \left( \frac{2d}{d+4}, 2\right) \), on the event of probability
there is a one-to-one correspondence of random walk loops from \({\mathcal R}^\lambda \) of length \(\ge N^\theta \) rooted in \([-rN,rN]^d\) and Brownian loops from \({\mathcal B}^\lambda \) of time duration \(\ge \frac{N^\theta -2}{d}+{\alpha }\) rooted in \([-rN-\frac{1}{2},rN+\frac{1}{2}]^d\), such that the length of the random walk loop divided by d differs from the time duration of the corresponding Brownian loop by at most \({\alpha }\), and the supremum distance between the corresponding loops is \(\le C\, N^{\frac{3}{4}}\log N\). Here, each discrete loop is viewed as a rooted loop in \({\mathbb {R}}^d\) after linear interpolation.
2.4 Further notation
In this section, we summarize all the remaining notation that will be used in at least two different proofs. Notation used only once is deferred to more appropriate spots.
For \(v\in {\mathbb {R}}^d\) and \(r>0\), the (discrete) ball of radius r centered at v is the set
For \(A\subset {\mathbb {Z}}^d\), we denote by \(\partial A\) the exterior vertex boundary of A, namely,
We also define \(\overline{A} = A\cup \partial A\). The boundary of a subset V of \({\mathbb {R}}^d\) is denoted by \(\partial _{{\mathbb {R}}^d}V\).
For a random walk R, we denote the hitting time of a set \(A\subset {\mathbb {Z}}^d\) by R by
For \(v\in {\mathbb {R}}^d\) and \(r>0\), we write
Quite often, we will consider two independent random walks on the same space. If so, we will denote these random walks by \(R^1\) and \(R^2\), their laws by \({\mathbb {P}}^{1,x_1}\) and \({\mathbb {P}}^{2,x_2}\) (where \(x_i=R^i(0)\)), and the corresponding hitting times by \(T^i\) and \(T^i_{v,r}\).
If \(\gamma \) is a path, we denote by \(\gamma [a,b]\) the path (or the set, depending on the situation) in \({\mathbb {Z}}^d\) consisting of the vertices \(\gamma (a),\gamma (a+1),\ldots ,\gamma (b)\). If \(\gamma _1\) and \(\gamma _2\) are two paths in \({\mathbb {Z}}^d\) and \(\gamma _1(\mathrm {len}\,\gamma _1)\sim \gamma _2(1)\), then we denote by \(\gamma _1\cup \gamma _2\) the path of length \(\mathrm {len}\,\gamma _1 + \mathrm {len}\,\gamma _2\) obtained by concatenating \(\gamma _1\) and \(\gamma _2\).
For a set \(S\subset {\mathbb {R}}^d\) and \(\epsilon >0\), we denote by \(S^{+\epsilon }\) the \(\epsilon \)-neighborhood of S and by \(S^{-\epsilon }\) the subset of points of S at distance \(>\epsilon \) from the complement of S.
Finally, let us make a convention about constants. Large constants whose values are not important are denoted by C and \(C'\) and small ones by c and \(c'\). Their dependence on parameters varies from proof to proof. Constants marked with a subindex, e.g., \(C_1\), \(C_H\), \(c_2\), \(c_*\), keep their values within the proof where they appear, but will change from proof to proof.
3 Beurling-type estimate
Throughout this section we assume that the dimension of the lattice is 3. We prove that the loop erasure of a simple random walk is hittable with high probability by an independent random walk started anywhere near the loop erasure.
Theorem 3.1
There exist \(\eta >0\) and \(C<\infty \) such that for any \(\varepsilon >0\) and \(n\ge 1\),
see Sect. 2.1 for the definition of \({\mathrm {LE}}\) and Sect. 2.4 for the other notation.
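The loop erasure \({\mathrm {LE}}\) is defined in Sect. 2.1. For concreteness, the following is a minimal sketch of the standard chronological loop-erasure procedure; the function name and the list-of-tuples representation of a lattice path are illustrative and not taken from the paper.

```python
def loop_erase(path):
    """Chronological loop erasure: scan the path from its start and,
    each time a vertex is revisited, erase the loop that it closes."""
    erased = []
    positions = {}          # vertex -> index of its occurrence in `erased`
    for v in path:
        if v in positions:  # v closes a loop: truncate back to its first visit
            del erased[positions[v] + 1:]
            positions = {u: i for i, u in enumerate(erased)}
        else:
            positions[v] = len(erased)
            erased.append(v)
    return erased
```

By construction the output is a simple path, which is the object whose hittability Theorem 3.1 quantifies.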
A result analogous to Theorem 3.1 in two dimensions is known as the Beurling projection principle, see [10]. It states that for any \(\eta <\frac{3}{4}\), the probability on the left hand side of (3.1) equals 1. In dimensions \(d\ge 4\), the result of Theorem 3.1 fails.
Before moving on to the proof of Theorem 3.1, we discuss its main ingredients. They are of independent interest and will also be used in other proofs in this paper. From the point of view of this work, it would be enough to prove the estimate (3.1) only for those \(x\in B(0,n)\) that are at distance at least \(\varepsilon n\) from 0 and from the complement of B(0, n). However, Theorem 3.1 is a valuable tool in the study of loop erased random walks in three dimensions, and we expect its applications to extend beyond the topics covered in this paper.
The proof of Theorem 3.1 is done by considering separately the cases when \(x\in B(0,\varepsilon n)\) and \(x\notin B(0,\varepsilon n)\). In the first case we use [11, Lemma 4.6], which states that the LERW is hittable by an independent random walk in any wide enough annulus centered at the origin.
Lemma 3.2
[11, Lemma 4.6] For any \(K\ge 1\), there exist \(\eta >0\) and \(C<\infty \) such that for all \(r>s>1\),
In the second case, we use an analogue of [11, Lemma 4.6] about hittability of the LERW in annuli that do not surround the origin. We give its proof in Sect. 3.2. We will use this lemma also in Sect. 6 to show that the LERW scaling limit is a simple path. This is why we state here a slightly stronger result than we need for the proof of Theorem 3.1. We will comment more on this after stating the lemma.
Lemma 3.3
For any \(K\ge 1\), there exist \(\eta >0\) and \(C<\infty \) such that for all \(r>s>1\) and \(v\notin B(0,r)\),
where \(\sigma = \inf \{t\ge 0:{\mathrm {LE}}\left( R^1[0, T] \right) [0,t]\cap B(v,s)\ne \emptyset \}\).
As remarked above, the full strength of Lemma 3.3 will not be needed until Sect. 6, where we reuse the lemma to prove that the LERW scaling limit is a simple path, see the proof of Claim 6.5. In the proof of Theorem 3.1 we will only apply a weaker version of (3.2), where \({\mathrm {LE}}\left( R^1[0, T] \right) [0,\sigma ]\) is replaced by \({\mathrm {LE}}\left( R^1[0, T] \right) \).
3.1 Proof of Theorem 3.1
Without loss of generality, we may assume that \(\varepsilon \) is small. The proof of Theorem 3.1 is a simple consequence of Lemmas 3.2 and 3.3. We estimate the probability in (3.1) separately for x’s in and outside \(B(0,\varepsilon n)\). In the first case, we apply Lemma 3.2 to \(T = T^1_{0,n}\), \(s=2\varepsilon n\), \(r = \frac{1}{2}\sqrt{\varepsilon }n\), and \(K = 2\), so that for some \(\eta >0\) and \(C<\infty \),
By varying the starting point of \(R^2\), we get the harmonic function in \(B\left( 0,2\varepsilon n\right) \),
By the Harnack inequality [12, Theorem 1.7.2], there exists a constant \(C_H<\infty \) such that \(h(x) \le C_H\,h(0)\), for all \(x\in B(0,\varepsilon n)\). In particular,
Since \(B(x,\sqrt{\varepsilon }n) \supseteq B(0,\frac{1}{2}\sqrt{\varepsilon }n)\) for all \(x\in B(0,\varepsilon n)\), we also have
Plugging this into the very first inequality gives
This gives (3.1) after slightly decreasing \(\eta \).
It remains to consider the case \(x\notin B(0,\varepsilon n)\). We prove that for some \(\eta >0\) and \(C<\infty \),
which is slightly stronger than (3.1), since \(T^2_{x,\sqrt{\varepsilon } n}\) of (3.1) is replaced here by the smaller \(T^2_{x,\varepsilon n}\). We start by covering \(B(0,n){\setminus }\overline{B(0,\varepsilon n)}\) by \(s= 10\,\lfloor \varepsilon ^{-6}\rfloor \) balls of radius \(\varepsilon ^2 n\) with centers at \(v_1,\ldots , v_s\in B(0,n){\setminus }\overline{B(0,\varepsilon n)}\). By the union bound, the probability on the left hand side of (3.3) is bounded from above by
For each \(x\in B\left( v_i,\varepsilon ^2 n\right) \), \(B(x,\varepsilon ^2n)\subset B(v_i,2\varepsilon ^2n)\) and \(B(x,\varepsilon n)\supset B(v_i,\frac{1}{2}\varepsilon n)\). Thus, the ith probability in the sum is at most
By the Harnack inequality applied to the harmonic function
there exists a universal constant \(c_H>0\) such that the ith probability is bounded from above by
Now, we apply Lemma 3.3 with \(v=v_i\), \(r = \frac{1}{2}\varepsilon n\), \(s=2\varepsilon ^2n\), and \(K=7\) to find \(\eta >0\) and \(C<\infty \) for which the above probability is \(\le C\varepsilon ^7\). Thus, the probability from (3.3) is bounded from above by \((C\varepsilon ^7)\cdot s \le 10 C\,\varepsilon \). This proves (3.3) and completes the proof of Theorem 3.1. \(\square \)
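The bookkeeping in the last step is the elementary estimate

$$\begin{aligned} s\cdot C\varepsilon ^7 = 10\,\lfloor \varepsilon ^{-6}\rfloor \cdot C\varepsilon ^7 \le 10\,C\,\varepsilon ^{-6}\cdot \varepsilon ^7 = 10\,C\,\varepsilon . \end{aligned}$$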
3.2 Proof of Lemma 3.3
The scheme of the proof is conceptually the same as that of [11, Lemma 4.6], except for the main improvement stated in Claim 3.4 below. For the reader’s convenience and because of the importance of the result, we give a complete proof, which we organize in a sequence of claims. The first claim is a stronger version of [11, Lemma 4.3], which is the first step in the proof of [11, Lemma 4.6]. This improvement is essentially the main reason why the remaining steps in the proof of [11, Lemma 4.6] can be adapted to our situation.
Claim 3.4
There exists \(c_1>0\) such that for all \(n>0, v\in \partial B(0,n)\), and \(\Gamma \subset B(v,n)^c\),
Proof of Claim 3.4
An analogous claim for the random walk on \({\mathbb {Z}}^2\) is proved in [24, Proposition 3.5]. The same scheme works for \({\mathbb {Z}}^3\) with a slightly more involved analysis of the corresponding harmonic function.
We begin with some auxiliary observations in \({\mathbb {R}}^3\). For \(z\in {\mathbb {R}}^3\), let \({\mathbb D}(z)\) be the unit ball in \({\mathbb {R}}^3\) centered at z, and write \({\mathbb D}\) for \({\mathbb D}(0)\). Let \(u\in \partial _{{\mathbb {R}}^3}{\mathbb D}\), \(\delta >0\), and \(M = \{z\in \partial _{{\mathbb {R}}^3}{\mathbb D}:|z-u|\le \delta \}\). For \(z\in {\mathbb D}\), let \(h(z) = {\mathbf {P}}^z[W(\tau _{\mathbb D})\in M]\), where W is the standard Brownian motion in \({\mathbb {R}}^3\) and \(\tau _{\mathbb D}\) is the first hitting time of \(\partial _{{\mathbb {R}}^3}{\mathbb D}\) by W. Then h is a harmonic function in \({\mathbb D}\) with the boundary condition \(\mathbbm {1}_M\). In particular, it can be written as
We will need the following properties of h.
-
If \(\delta \) is small enough, then for all \(z\in {\mathbb D}{\setminus }{\mathbb D}(u)\), \(h(z)\le h(0)\).
Proof
By the maximum principle, it suffices to consider \(z\in \partial {\mathbb D}(u)\cap {\mathbb D}\). By symmetry, it suffices to prove the claim for \(u = (1,0,0)\) and \(z = (z_1,z_2,0)\). Using geometric constraints, one can express h(z) as a function of \(z_2\) only,
One can show by a direct computation that \(z_2f'(z_2)\le 0\) if \(\delta \) is sufficiently small, which proves the claim. \(\square \)
-
Another direct computation gives \(\frac{\partial h}{\partial u}(0) = \nu >0\) (the derivative in the direction u) and \(\frac{\partial h}{\partial u'}(0) = 0\) for any \(u'\) orthogonal to u.
-
There exists \(r\in (0,1)\) such that for all \(\delta \in (0,\frac{1}{4})\) and \(r\le |z|<1\) with \(|z-u|\ge \frac{1}{2}\), \(h(z)\le \frac{1}{4} h(0)\). This follows from the bound \(\frac{1 - |z|^2}{|z-\sigma |^3}\le 4^3(1-r^2)\).
Assume that n is large enough so that \(\overline{B(0,rn)}\subset n{\mathbb D}\). The function \(h_n(z) = h(\frac{z}{n})\) is harmonic in \(n{\mathbb D}\). For \(z\in \overline{B(0,rn)}\), let \(\widetilde{h}_n(z) = {\mathbb {E}}^z[h_n(R(T_{0,rn}))]\) be the discrete harmonic function in B(0, rn) which agrees with \(h_n(z)\) on \(\partial B(0,rn)\). By [12, (1.23) and (1.34)], there exists \(C<\infty \) such that for all \(z\in B(0,rn), |h_n(z) - \widetilde{h}_n(z)|\le \frac{C}{n}\).
We proceed with the proof of the claim. Let \(n\ge 1\) and \(v\in \partial B(0,n)\). We choose \(u = \frac{v}{|v|}\in \partial _{{\mathbb {R}}^3}{\mathbb D}\). Let \(A=\frac{4C}{\nu }\) (with \(\nu \) and C as above) and \(x\in B(0,rn)\) be such that \(x_i = \lfloor Au_i\rfloor \). By Taylor’s theorem, \(h_n(x) - h_n(0)\ge \frac{A\nu }{2n}\) for large n. Thus, for any \(z\in B(0,rn){\setminus } B(v,n)\),
Since \(\Gamma \subset B(v,n)^c\), the same calculation as on [24, page 1032] gives
By splitting the above probability into the terms where \(|R(T_{0,rn}) - v|\) is \(\ge \frac{n}{2}\) and \(<\frac{n}{2}\), and estimating \(h_n(R(T_{0,rn}))\) from above by \(\frac{1}{4} h(0)\) in the first case and by 1 in the second, one gets exactly as on [24, page 1033] that
which implies that \({\mathbb {P}}^0\left[ |R(T_{0,n}) - v|\le \frac{n}{2}~\big |~R[1,T_{0,n}]\cap \Gamma =\emptyset \right] \ge c'>0\). The proof of the claim is complete. \(\square \)
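As a side remark (not needed for the proof), \(h(0)\) can be computed exactly: by rotational symmetry it equals the normalized surface area of the spherical cap \(M\). If \(\theta \) is the angular radius of the cap, then \(\delta ^2 = 2(1-\cos \theta )\) for the extremal points of \(M\), and

$$\begin{aligned} h(0) = \frac{\sigma (M)}{\sigma (\partial _{{\mathbb {R}}^3}{\mathbb D})} = \frac{2\pi \,(1-\cos \theta )}{4\pi } = \frac{\delta ^2}{4}, \end{aligned}$$

so \(h(0)\) is of order \(\delta ^2\) for small \(\delta \).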
Before we state the next claim, we introduce some notation. For a path \(\gamma \) and \(t\ge 1\), we define the set of cut points of \(\gamma \) up to time t,
Note that \({\mathrm {cut}}(\gamma ;t)\) is non-decreasing in t, and non-increasing as \(\gamma \) is extended. Also note that \({\mathrm {LE}}(\gamma )[1,\mathrm {len}\,{\mathrm {LE}}(\gamma )]\supset {\mathrm {cut}}(\gamma ;\mathrm {len}\,\gamma )\).
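The displayed definition of \({\mathrm {cut}}(\gamma ;t)\) is not reproduced above. A reading consistent with the three properties just noted (non-decreasing in t, non-increasing under extension of \(\gamma \), contained in the loop erasure) is sketched below; the names are illustrative.

```python
def cut_points(path, t):
    """Cut points of `path` up to time t: vertices path[s], s <= t, such
    that the initial segment path[0..s] and the tail path[s+1..] are
    disjoint.  (A plausible reading of cut(gamma; t), as hedged above.)"""
    n = len(path) - 1
    cuts = set()
    for s in range(min(t, n) + 1):
        head = set(path[:s + 1])
        if head.isdisjoint(path[s + 1:]):
            cuts.add(path[s])
    return cuts
```

On the example path below, the cut points up to the final time are exactly the two vertices surviving loop erasure, consistent with the inclusion stated above.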
The following claim is an analogue of [11, Lemma 4.4].
Claim 3.5
There exists \(q>0\) such that the following holds. For any \(\varepsilon >0\) there exist \(\delta = \delta (\varepsilon )>0\) and \(C = C(\varepsilon )<\infty \) such that for all \(r>C\), \(s\in [r(1+\varepsilon ),2r]\), \(\Gamma \subset B(0,s)^c\) with
and \(v\in \partial B(0,s)\),
Proof of Claim 3.5
Let \(v\in \partial B(0,s)\), and define \({\mathcal {C}} = {\mathrm {cut}}\left( R^1[0,+\infty );T^1_{v,\varepsilon r}\right) \) and \(A = B(v,\frac{1}{2}\varepsilon r){\setminus }\overline{B(v,\frac{1}{4}\varepsilon r)}\). Let \(\lambda >2\) be a constant to be fixed later, and take \(\mu = \frac{\varepsilon }{4\lambda }\) and \(\rho = \mu r\). By [11, Lemma 4.2], there exists \(c>0\) such that for all \(x\in \partial B(v,\rho )\),
Since the random walk started from any \(y\in B(v,\varepsilon r)\) will hit \(B(v,\frac{1}{8}\varepsilon r)\) before exiting from B(0, 4r) with probability \(>c'\), the previous inequality also holds for all \(y\in B(v,\varepsilon r)\). Namely, there exists \(c_2>0\) such that for all \(x\in \partial B(v,\rho )\),
By [12, Proposition 1.5.10], for any \(z\in \partial B(v,\frac{1}{4}\varepsilon r)\),
Thus,
Note that if the random walk \(R^1\) started from v does not return to \(B(v,\rho )\) after \(T^1_{v,\frac{1}{4}\varepsilon r}\), then
Denote by M the set of points on \(\partial B(v,\rho )\) which are at distance \(\ge \frac{\rho }{2}\) from \(B(0,s)^c\). By Claim 3.4,
By the Harnack inequality applied to the harmonic function
and the assumption on \(\Gamma \), there exists \(C_4=C_4(\varepsilon ,\lambda )<\infty \) such that for any \(x\in M\),
All the ingredients are ready to conclude. We have
By the strong Markov property for \(R^1\) at time \(T^1_{v,\rho }\) and the identity (3.6), the numerator of the above expression is bounded from below by
By (3.4), (3.5), and (3.8), the above display is
which implies that
where the last inequality follows from (3.7). Finally, we make the choice of parameters. We choose \(\lambda \) so that \(\frac{C_3}{\lambda }<\frac{c_2}{4}\). Then we choose \(\delta \) so that \(C_4\delta < \frac{c_2}{4}\), and set \(q = \frac{c_1c_2}{2}\). The proof of Claim 3.5 is complete. \(\square \)
To state the next claim, we need more notation. A function \(\gamma :\{1,\ldots ,n\}\rightarrow {\mathbb {Z}}^3\) is called a discontinuous path of length n. All the definitions that we introduced for nearest neighbor paths extend without any changes to discontinuous paths. Given two discontinuous paths \(\gamma _1\) and \(\gamma _2\), we define the discontinuous paths \({\mathrm {LE}}_1(\gamma _1\cup \gamma _2)\) and \({\mathrm {LE}}_2(\gamma _1\cup \gamma _2)\) as follows. Let \(t_* = \max \{t:{\mathrm {LE}}(\gamma _1)[1,t-1]\cap \gamma _2=\emptyset \}\). Then
The next claim is an analogue of [11, Lemma 4.5].
Claim 3.6
For any \(\varepsilon >0\) and \(\eta >0\), there exist \(\delta >0\) and \(C<\infty \) such that the following holds. For \(r>0\), let \(A_1 = B(0,2r){\setminus }\overline{B(0,r)}\) and \(A_2 = B(0,4r){\setminus }\overline{B(0,\frac{1}{2}r)}\). Then for any \(r>C\), \(s\in [r(1+\eta ),2r], v\in \partial A_1\), and a discontinuous path \(\gamma \subset A_2\) with \(\gamma (1)\in \partial B(0,4r)\) and \(\gamma (\mathrm {len}\,\gamma )\sim v\),
where \(L' = {\mathrm {LE}}(\gamma \cup R^1[0,T^1(\partial A_2)])[1,t]\) and \(t = \min \{i:{\mathrm {LE}}(\gamma \cup R^1[0,T^1(\partial A_2)])(i)\in B(0,s-\eta r)\}\).
Proof of Claim 3.6
Fix \(\varepsilon >0\) and \(\eta >0\). Take \(q>0\) from Claim 3.5. Let \(K=K(\varepsilon )>2\) be an integer such that \((1-q)^K<\varepsilon \). Let \(\varepsilon ' = \frac{\eta }{2K}\) and \(\delta ' = \delta _{{\mathrm {Claim}}~3.5}(\varepsilon ')\) be the \(\delta \) from Claim 3.5 corresponding to \(\varepsilon _{{\mathrm {Claim}}~3.5} = \varepsilon '\). Define \(s_i = s - \frac{\eta i}{K+2}r\) for \(i\in \{1,\ldots ,K+1\}\). Let \(j_k\) be as in the definition of the loop erasure, so that \(R^1[j_k+1,T^1(\partial A_2)]\) is a random walk conditioned not to hit \({\mathrm {LE}}(\gamma \cup R^1[0,j_k])\). Let
If \(\tau _i<T^1(\partial A_2)\), then define \(\Gamma _i = {\mathrm {LE}}(\gamma \cup R^1[0,\tau _i])\) and \(v_i = R^1(\tau _i)\). By Claim 3.5,
where
and \(T^*_i = \min \{t>\tau _i:R^1(t)\in \partial B(v_i,\varepsilon 'r)\}\). Note that the event \({\mathcal {B}}_i\) contains the following event
which depends only on \(\tau _{i+1}\) and \(R^1[0,\tau _{i+1}]\). Thus,
and \({\mathbb {P}}\left[ \bigcap _{i=1}^{K}{\mathcal {B}}_i'\right]<(1-q)^K<\varepsilon \). It remains to show that the event in (3.9) implies \(\bigcap _{i=1}^{K}{\mathcal {B}}_i'\) for some choice of \(\delta \). It is well known (see, e.g., [11, Lemma 2.5]) that there exists \(c_* = c_*(\eta )>0\) such that
Take \(\delta <\min (\delta ',c_*q)\). Then the event in (3.9) implies \(\bigcap _{i=1}^{K}{\mathcal {B}}_i'\). Indeed, if the event in (3.9) occurs,
-
since \({\mathrm {LE}}_1(\gamma \cup R^1[0,T^1(\partial A_2)])\subset B(0,s)^c\) and \({\mathrm {LE}}_2(\gamma \cup R^1[0,T^1(\partial A_2)])\) contains a path from \(\partial B(0,s)\) to \(\partial B(0,s-\eta r)\), all \(\tau _i\)’s are strictly smaller than \(T^1(\partial A_2)\),
-
since \(\Gamma _i\subset L'\), \({\mathbb {P}}^{2,0}\left[ R^2[0,T^2_{0,4r}]\cap \Gamma _i \ne \emptyset \right]< \delta < \delta '\), and
-
since \({\mathrm {cut}}(R^1[\tau _i,\tau _{i+1}];T^*_i)\subset L'\),
$$\begin{aligned}&\min _{y\in B(v_i,\varepsilon ' r)}\,{\mathbb {P}}^{2,y}\left[ {\mathrm {cut}}\left( R^1[\tau _i,\tau _{i+1}];T^*_i\right) \cap R^2\left[ 0,T^2_{0,4r}\right] \ne \emptyset \right] \\&\quad \le \frac{{\mathbb {P}}^{2,0}\left[ {\mathrm {cut}}\left( R^1[\tau _i,\tau _{i+1}];T^*_i\right) \cap R^2\left[ 0,T^2_{0,4r}\right] \ne \emptyset \right] }{{\mathbb {P}}^{2,0}\left[ T^2_{v_i,\varepsilon 'r}<T^2_{0,4r}\right] }<\frac{\delta }{c_*} < q. \end{aligned}$$
The proof of Claim 3.6 is complete. \(\square \)
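The choice of K at the start of the proof is elementary: K independent attempts, each succeeding with probability at least q, all fail with probability at most \((1-q)^K\), and one takes K just large enough that this is \(<\varepsilon \). A one-line computation (the helper name is illustrative):

```python
import math

def smallest_K(q, eps):
    """Smallest integer K > 2 with (1 - q)**K < eps, as in the proof of
    Claim 3.6 (K independent attempts, each succeeding with prob. >= q)."""
    K = max(3, math.ceil(math.log(eps) / math.log(1.0 - q)))
    while (1.0 - q) ** K >= eps:   # guard against rounding at the boundary
        K += 1
    return K
```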
Proof of Lemma 3.3
Without loss of generality we may assume that s and \(r/s\) are large (possibly depending on K). Let \(\rho _i = \frac{r}{2^i}\), for \(i\in \{0,\ldots , I = \lfloor \log _2\frac{r}{s}\rfloor \}\), and consider the stopping times at which the random walk \(R^1\) jumps between different \(\partial B(v,\rho _i)\)’s,
The process \(i'(j)\) dominates a random walk on \(\{0,\ldots ,I\}\) with a drift towards 0 and an absorption at 0. In particular, if \(n_i' = \sharp \{j:i'(j)=i\}\), then there exists \(\lambda = \lambda (K)\) such that \({\mathbb {P}}^{1,0}[\sum _{i=1}^{I} n_i' > \lambda I] \le C(\frac{s}{r})^K\). Consider the annuli
and the stopping times
Note that the \(\tau _j\)’s form a subsequence of the \(\tau _j'\)’s. Therefore, if \(n_i = \sharp \{j:i(j)=i\}\), then \({\mathbb {P}}^{1,0}[\sum _{i=1}^{\lfloor \frac{I-1}{2}\rfloor } n_i > \lambda I] \le C(\frac{s}{r})^K\). Let \(M = \lceil 6\lambda \rceil \); then
For each j and \(m\in \{0,\ldots ,M-1\}\), define \(r_j = \rho _{2i(j)}\) and \(s_{j,m} = 2r_j - \frac{m}{M}r_j\), discontinuous paths \(\gamma _{i,j} = {\mathrm {LE}}(R^1[0,\tau _j-1]\cap A_{2,i})\) and \(\gamma _j = \gamma _{i(j),j}\), the loop erasures \(L_{k,j} = {\mathrm {LE}}_k(\gamma _j\cup R^1[\tau _j,\tau _{j+1}])\) for \(k=1,2\), and the event \({\mathcal {X}}_{j,m}\) that
-
(a)
\(L_{1,j}\subset B(v, s_{j,m})^c\),
-
(b)
\(L_{2,j}\cap B(v, s_{j,m+1})\ne \emptyset \),
-
(c)
\({\mathbb {P}}^{2,v}\left[ R^2[0,T^2_{v,4r}]\cap L_{j,m}'\ne \emptyset \right] <\delta _{{\mathrm {Claim}}~3.6}\), where \(L_{j,m}' = L_{1,j}\cup L_{2,j}[1,t]\) and \(t = \min \{i:L_{2,j}(i)\in \partial B(v,s_{j,m+1})\}\).
By Claim 3.6, \({\mathbb {P}}^{1,0}[{\mathcal {X}}_{j,m}| \tau _j<\infty , R^1[0,\tau _j]]<\varepsilon _{{\mathrm {Claim}}~3.6}\) and
Let \({\mathcal {X}}_j = \cup _{m=0}^{M-1}{\mathcal {X}}_{j,m}\). Then the sequence of their indicators is dominated by a sequence of independent Bernoulli random variables with parameter \(M\,\varepsilon _{{\mathrm {Claim}}~3.6}\). In particular, by choosing \(\varepsilon _{{\mathrm {Claim}}~3.6}\) small enough,
To finish the proof of Lemma 3.3, it suffices to show that the event in (3.2) implies one of the events in (3.10) or (3.11). We call an index i good if \(n_i\le M\) and none of the \({\mathcal {X}}_j\)’s occur for j’s with \(i(j) = i\). The following claim is essentially [11, Sublemma 4.6.1].
Claim 3.7
Let i be a good index. For \(j>0\), let \(n_{i,j} = \sharp \{k<j:R^1(\tau _k)\in \partial A_{1,i}\}, u_{i,j} = 2\rho _{2i} - \frac{n_{i,j}}{M}\rho _{2i}\), and \(U_{i,j} = B(v,2\rho _{2i}){\setminus }\overline{B(v,u_{i,j})}\). Let \(t_{i,j} = \inf \{t:\gamma _{i,j}(t)\cap \partial B(v,u_{i,j})\ne \emptyset \}\), and define \(\gamma _{i,j}^* = \gamma _{i,j}[1,t_{i,j}]\) if \(t_{i,j}<\infty \) and \(\gamma _{i,j}^*=\emptyset \) otherwise. Then for all j, if \(\gamma _{i,j}^*\ne \emptyset \), then
Proof of Claim 3.7
The same as the proof of [11, Sublemma 4.6.1]. We use induction on j. If \(R^1(\tau _j)\notin \partial A_{1,i}\), then \(R^1[\tau _j,\tau _{j+1}]\) does not enter in \(A_{1,i}\) and thus can only change \(\gamma _{i,j}^*\) by erasing a piece from its end leading to \(\gamma _{i,j+1}\cap \partial B(v,u_{i,j})=\emptyset \). (Note that in this case \(u_{i,j} =u_{i,j+1}\).) On the other hand, if \(R^1(\tau _j)\in \partial A_{1,i}\), then \({\mathcal {X}}_j\) does not occur, and, in particular, \({\mathcal {X}}_{j,n_{i,j}}\) does not occur, meaning that one of (a), (b), or (c) fails. If (a) fails, then \(\gamma _{i,j}^*\ne \emptyset \) and hence satisfies (3.12). Also, if \(\gamma _{i,j+1}^*\ne \emptyset \), then it contains \(\gamma _{i,j}^*\) and hence also satisfies (3.12). If (a) holds but (b) fails, then \(\gamma _{i,j+1}^*=\emptyset \). Finally, if (a) and (b) hold but not (c), then \(\gamma _{i,j+1}^* = L'_{j,n_{i,j}}\), and (3.12) holds. \(\square \)
Assume that the event in (3.2) occurs for some T. Let J be such that \(\tau _J\le T < \tau _{J+1}\). Since \(R^1[\tau _J,T]\subset A_{2,i(J)}\), Claim 3.7 shows that for each good index \(i\ne i(J)\), either \({\mathrm {LE}}(R^1[0,T])\) does not cross \(A_{1,i}\) from outside to inside or the first such crossing is hittable by \(R^2\). Since we assume that the event in (3.2) occurs, \({\mathrm {LE}}(R^1[0,T])\) crosses \(B(v,r){\setminus }\overline{B(v,s)}\) and, in particular, all \(A_{1,i}\)’s. Thus, for each good \(i\ne i(J)\),
By the Harnack inequality applied to the harmonic function
there exists a constant \(c_*>0\) (independent of i) such that
Recall that we aim to show that the event in (3.2) implies one of the events in (3.10) or (3.11). Thus, assume that both of these events fail. This implies that at least \(\frac{1}{6} I\) of the indices between 1 and \(\frac{1}{2} I\) are good. Let \(n=\lfloor \frac{I}{12}\rfloor - 2\) and \(i_1>i_2>\cdots>i_n>1\) be good indices with \(i_k - i_{k+1}\ge 2\) and \(i_k\ne i(J)\); then
for \(\eta \) small enough. This contradicts the assumption that the event (3.2) occurs. Thus, we have shown that if both the events (3.10) and (3.11) do not occur, then the event (3.2) also does not occur. The proof of Lemma 3.3 is complete. \(\square \)
4 Strong coupling of loop soups
In this section we prove Theorem 2.2 by giving an explicit coupling of the random walk and the Brownian loop soups satisfying the conditions of the theorem. In two dimensions, such a coupling was obtained in [20]. Our construction adapts the one from [20] to higher dimensions.
4.1 Preliminaries
In this section we prove two lemmas needed for the proof of Theorem 2.2.
Lemma 4.1
For any \(d\ge 2\),
as \(n\rightarrow \infty \).
Proof
Using the inverse Fourier transform, one can write
The result follows by analysing this integral, see, for instance, [1]. \(\square \)
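The display omitted above is, we presume, the standard local central limit theorem asymptotic \({\mathbb {P}}^0[R(2n)=0]\sim 2\,(\frac{d}{4\pi n})^{d/2}\) for the simple random walk on \({\mathbb {Z}}^d\); this form is our assumption, not read off the paper. For \(d=2\) it can be checked numerically against the exact identity \({\mathbb {P}}[S_{2n}=0]=\bigl (\left( {\begin{array}{c}2n\\ n\end{array}}\right) 2^{-2n}\bigr )^2\), obtained by rotating \({\mathbb {Z}}^2\) by 45 degrees:

```python
import math

def p_return_2d(n):
    """Exact P[S_{2n} = 0] for SRW on Z^2: after a 45-degree rotation the
    two diagonal coordinates are independent 1-d simple walks, each
    returning to 0 at time 2n with probability C(2n, n) / 4^n."""
    p1 = math.comb(2 * n, n) / 4 ** n
    return p1 * p1

n = 500
exact = p_return_2d(n)
lclt = 2 * (2 / (4 * math.pi * n))   # 2 * (d / (4 pi n))^{d/2} with d = 2
```

At n = 500 the two values agree to well within one percent.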
Lemma 4.2
Let \(q = \frac{1}{d}\). For any \(D>0\) there exists \(C_{4.2}<\infty \) such that for every \(n\ge 1\), there exists a coupling \({\mathbb {Q}}_n\) of the d-dimensional random walk bridge \((X_t)_{t\in [0,2n]}\) of length 2n from 0 to 0 and the standard d-dimensional Brownian bridge \((B_t)_{t\in [0,1]}\) such that
Proof
The one dimensional case of the lemma follows from [20, Lemma 3.1]. (The result proved there is much stronger than the statement of Lemma 4.2: the exponent \(\frac{1}{4}\) in the event of the lemma is replaced by \(\frac{1}{2}\) in [20, Lemma 3.1].)
Fix \(d\ge 1\) and \(n\ge 1\). Let \((X^{i,m}, B^{i,m})_{1\le i\le d,\, m\ge 1}\) be independent copies of pairs of one-dimensional random walk and Brownian bridges each coupled to satisfy the requirements of Lemma 4.2 in one dimension, where \(X^{i,m}\) is distributed as a random walk bridge of length 2m.
We will sample the desired d-dimensional random walk bridge by first specifying the choice of coordinate directions for all 2n steps, and then by choosing the directions along specified coordinates. Let \(L_n = (L^1_n,\ldots , L^d_n)\) be an independent multinomial sequence of length 2n with parameter \((q,\ldots ,q)\) conditioned on all of \(L^1_n(2n),\ldots ,L^d_n(2n)\) being even. Namely,
where \(Z_i = (Z_{i,1},\ldots ,Z_{i,d})\) are independent and \({\mathbb {P}}[Z_i = e_1] = \cdots = {\mathbb {P}}[Z_i = e_d] = q\), with \(e_1,\ldots , e_d\) being the canonical basis of \({\mathbb {R}}^d\). Let
be the number of jumps of each of the coordinates in \(L_n\). Consider
Then
-
\((X_t)_{t\in [0,2n]}\) has the law of the d-dimensional random walk bridge,
-
\((B_t)_{t\in [0,1]}\) has the law of the standard d-dimensional Brownian bridge.
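The two bulleted facts rest on the coordinate decomposition: conditionally on \(L_n\), the d coordinates of X are independent one-dimensional bridges run at the times at which their coordinate is chosen. A sketch of the first step, sampling the counting processes \(L^i_n\) (we omit the conditioning on even endpoints for brevity, and the names are illustrative):

```python
import random

def split_by_coordinate(d, steps, rng):
    """Sample coordinate choices Z_1, ..., Z_steps uniformly from
    {e_1, ..., e_d} and record L[i][k] = number of the first k steps
    that chose coordinate i (the counting processes L_n^i)."""
    L = [[0] * (steps + 1) for _ in range(d)]
    for k in range(1, steps + 1):
        i = rng.randrange(d)           # P[Z_k = e_i] = 1/d = q
        for j in range(d):
            L[j][k] = L[j][k - 1] + (1 if j == i else 0)
    return L

L = split_by_coordinate(3, 20, random.Random(0))
```

Exactly one coordinate counter increments at each step, so the d counters always sum to the elapsed time.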
It remains to show that this coupling satisfies the bound in the statement of the lemma. It will be a consequence of the following two claims.
Claim 4.3
For any \(D>0\) there exists \(C<\infty \) such that
Proof of Claim 4.3
First note that
where \(S^1,\ldots , S^d\) are independent copies of one dimensional simple random walk on \({\mathbb {Z}}\) started from the origin (which implies that S is a d-dimensional simple random walk).
On the other hand, by [17, Corollary 12.2.7], for any \(D>0\) there exists \(C<\infty \) such that for each \(1\le i\le d\),
The two estimates above, together with the definition of \(L_n\), give the claim. \(\square \)
Claim 4.4
Let \((\ell (k))_{0\le k\le 2n}\) be an integer sequence with
-
\(\ell (0) = 0\),
-
for all \(1\le k\le 2n\), \(0\le \ell (k) - \ell (k-1)\le 1\),
-
for some \(1\le C_\ell <\infty \), \(\max _{1\le k\le 2n}|\ell (k) - qk| \le C_\ell (n\log n)^{\frac{1}{2}}\),
-
\(m:= \ell (2n)\) is even.
Let \((X^m,B^m)\) be a pair of one-dimensional random walk bridge of length m and a standard one-dimensional Brownian bridge coupled to satisfy the requirements of Lemma 4.2 in one dimension. Let
be the time reparametrization of the bridge \(X^m\) induced by the sequence \(\ell \), and \((\widetilde{X}_{2nt})_{t\in [0,1]}\) its linear interpolation. Then for any \(D>0\), there exists \(C<\infty \) (depending on \(\ell \) only through \(C_\ell \)) such that
Proof of Claim 4.4
Let \(D>0\). By assumption on \((X^m,B^m)\) and the fact that \(|m-2nq| = o(n)\), there exists \(C<\infty \) such that
Thus, it suffices to show that there exists \(C<\infty \) such that
Since \(\sqrt{m} = \sqrt{2nq}\left( 1 + O\left( (\frac{\log n}{n})^{\frac{1}{2}}\right) \right) \), there exist \(C,C'<\infty \) such that
where S is a one-dimensional simple random walk on \({\mathbb {Z}}\). The last inequality follows from Lemma 4.1 and [17, Corollary 12.2.7].
It remains to show that there exists \(C<\infty \) such that
By the definition of \(\widetilde{X}\), the probability on the left hand side equals
where S is again a one-dimensional simple random walk on \({\mathbb {Z}}\). In the first inequality we used the fact that \(|\ell (\lfloor 2nt\rfloor ) - mt| \le 3C_\ell (n\log n)^{\frac{1}{2}}\), and in the second the exponential Markov inequality and Lemma 4.1. By putting all the estimates together we get the claim. \(\square \)
To complete the proof of Lemma 4.2, we call the sequence \(L_n=(L_n^1,\ldots ,L_n^d)\) C-good if each one-dimensional sequence \(L^i_n\) satisfies the assumptions of Claim 4.4 with \(C_\ell = C\). By Claim 4.3, for each \(D>0\) there exists \(C_1<\infty \) such that \({\mathbb {P}}\left[ L_n \text { is not } C_1\text {-good}\right] \le C_1\, n^{-D}\). By Claim 4.4 applied to each of the d projections of X and B, there exists \(C<\infty \) such that
The two estimates together give the result of the lemma. \(\square \)
4.2 Proof of Theorem 2.2
The main strategy of the proof is similar to that of [20]. We will pair up random walk loops of length \(2n\ge N^{\theta }\) rooted at \(z \in [-rN,rN]^{d}\) with Brownian loops of time duration between \(\frac{2(n-1)}{d} + {\alpha }\) and \(\frac{2n}{d} + {\alpha }\) rooted in \(z + [-\frac{1}{2},\frac{1}{2}]^d\).
Fix \(\lambda >0\), \(r>0\), and \(N\ge 1\). For \(n\ge 1\), let
and
The main reason to define \({\alpha }\) as in (2.3) is so that the first two terms in the expansions of \(\widetilde{q}_n\) and \(q_n\) coincide. In particular,
Let
be independent Poisson point processes on \((0,\infty )\) with intensity parameter \(\widetilde{q}_n\), and
independent Poisson point processes on \((0,\infty )\) with intensity parameter \(q_n\). We couple all these processes so that the pairs of processes
are independent and
Let
be independent random walk bridges of length 2n rooted at 0, and
independent standard Brownian bridges coupled with the respective \(\widetilde{L}(z,n;m)\) as in Lemma 4.2. All the \(\widetilde{L}\)’s and L’s are assumed to be independent from all the \(\widetilde{N}\)’s and N’s.
The random walk loop soup \({\mathcal R}^\lambda \) of intensity \(\lambda \) is defined as in (2.2) as the collection (multiset) of loops
In order to define the Brownian loop soup, we introduce more random variables. Let
be independent random variables uniformly distributed on \([-\frac{1}{2},\frac{1}{2}]^d\), and
independent real-valued random variables with density
We assume that the Y’s and T’s are independent and also independent of all the previously defined variables.
Finally, consider an independent Brownian loop soup \({\mathcal B}_{\alpha }\) of loops of time duration \(\le {\alpha }\), and the corresponding restriction of \({\mathcal B}_{\alpha }\) to \({\mathcal {C}}\times (0,\lambda )\), \({\mathcal B}_{\alpha }^\lambda \) (the loops “appearing before time \(\lambda \)”).
To generate the Brownian loop soup, we first rescale each loop L(z, n; m) so that its time duration is T(z, n; m) and translate so that its root is at \(z+ Y(z,n;m)\), obtaining the loop \(L^*(z,n;m)\). The Brownian loop soup \({\mathcal B}^\lambda \) of intensity \(\lambda \) is then the collection of loops
It remains to show that the constructed coupling satisfies all the requirements of the theorem. We would like to pair up random walk loops from the multiset
with Brownian loops from the set
for all n and z such that \(2n\ge N^\theta \) and \(|z|\le rN\).
First, we estimate the probability that cardinalities are different,
Next, we rule out the existence of big loops,
Now, we bound the probability of having too many loops of a given size rooted at the same vertex,
where in the first inequality we used the Markov inequality and the fact that \(\widetilde{N}(z,n;\lambda )\) is a Poisson random variable with parameter \(\lambda \widetilde{q}_n\).
By putting all the bounds together and using Lemma 4.2,
by choosing D sufficiently large. The proof is complete. \(\square \)
5 LEW scaling limit \(+\) Brownian loop soup \(=\) Brownian motion
In this section we prove Theorem 1.1 using the Beurling-type estimate of Theorem 3.1 and the strong coupling of loop soups from Theorem 2.2. We will focus here, without further mentioning, on the three dimensional case. The two dimensional case can be treated similarly, and often with less effort, so we leave the details to the interested reader.
5.1 Preliminaries
Recall the notation for random walk loop soups from Sect. 2.2. Let U be a connected open subset of \({\mathbb {R}}^3\) with smooth boundary. Let \(0<\varepsilon <\delta \) be sufficiently small so that all the finite connected components of \(U^c\) have diameter \(>\delta \). For \(n\ge 1\), let \(U_n = nU\) and use the same notation for \(nU\cap {\mathbb {Z}}^3\). Let \(\lambda >0\). The first result states that it is unlikely that there is a loop in \({\mathcal R}^\lambda \) of big diameter which is contained in \(U_n\) and reaches very close to the boundary of \(U_n\).
Proposition 5.1
There exist \(\alpha >0\) and \(C = C(\alpha ,U)<\infty \) such that for all \(\lambda >0\), \(0<\varepsilon <\delta \) as above, and \(n\ge 1\),
Proof
Let \(\varepsilon<\gamma <\delta \). We will prove that for some \(\eta >0\) and \(C=C(U,\eta )<\infty \),
Optimizing over \(\gamma \) then gives that the above measure is \(\le C\,\varepsilon ^\alpha \,\delta ^{-5}\) for \(\alpha = \frac{\eta ^2}{2\eta + 3}\). Since the probability in (5.1) equals
the result follows. Thus, it suffices to prove (5.2). Denote by \(L_n\) the set of all loops \(\ell \) with diameter \(>\delta n\), \(\ell \subset U_n\), and \(\ell \nsubseteq U_n^{-\varepsilon n}\). By the definition of \({\mu ^{\mathrm {rwl}}}\),
Let \(z\in {\mathbb {Z}}^3\cap U_n\). We consider separately two cases: \(k\le \gamma ^2 n^2\) and \(k\ge \gamma ^2 n^2\).
If \(k\le \gamma ^2 n^2\), then by the time reversibility of the random walk,
By the local central limit theorem, for each \(y\in {\mathbb {Z}}^d\) and m,
Using the strong Markov property on exiting from \(B(z,\frac{1}{2}\delta n)\) and the local central limit theorem, one gets from (5.3) that
Thus,
Note that for any \(\eta >0\) there exists \(C = C(\eta )<\infty \) such that \(e^{-\frac{3}{32}\,\left( \frac{\delta }{\gamma }\right) ^2}\le C\,\left( \frac{\gamma }{\delta }\right) ^\eta \). Thus, we have the first part of the bound in (5.2).
Next we consider \(k\ge \gamma ^2 n^2\). By the time reversibility of the random walk,
It is well known (see, e.g., [11, Lemma 2.5]) that there exist \(\eta >0\) and \(C=C(U)<\infty \) such that for all \(y\in U_n{\setminus } U_n^{-\varepsilon n}\),
By the strong Markov property at time \(T\left( \left( U_n^{-\varepsilon n}\right) ^c\right) \) and the above inequality, as well as the Markov property at time \(\frac{3}{2}k\) and the local central limit theorem, one gets from (5.4) that
Thus,
This gives us the second part of the bound in (5.2). The proof of Proposition 5.1 is complete. \(\square \)
Consider a simple random walk \(R^1\) from 0 and an independent random walk loop soup \({\mathcal R}^\lambda \) defined on the same probability space.
Proposition 5.2
There exists \(\alpha >0\) and \(C = C(\alpha )<\infty \) such that for all \(\lambda >0\), \(0<\varepsilon <\delta \), and \(n\ge 1\),
Proof
Let \(\eta >0\) and define the event
By Theorem 3.1, there exists \(\eta >0\) and \(C=C(\eta )<\infty \) such that \({\mathbb {P}}^{1,0}[A^c]\le C\,\sqrt{\varepsilon }\). This gives the first part of the bound in (5.5), and it remains to estimate the probability of the event in (5.5) intersected with A. By the definition of Poisson point process and independence of \(R^1\) and \({\mathcal R}^\lambda \), this probability equals
where we denoted by \(L_n\) the set of loops \(\ell \subset B(0,n)\) with diameter \(>\delta n\) and such that \({\mathrm {dist}}(\ell ,{\mathrm {LE}}(R^1[0,T^1_{0,n}]))<\varepsilon n\) and \(\ell \cap {\mathrm {LE}}(R^1[0,T^1_{0,n}])=\emptyset \). It suffices to prove that for some \(\alpha >0\) and \(C=C(\alpha )<\infty \),
By the definition of \({\mu ^{\mathrm {rwl}}}\),
Let \(\varepsilon<\gamma <\delta \) be a parameter to be fixed later. We first consider the case \(k\le \gamma ^2n^2\). Exactly as in the proof of Proposition 5.1 [see below (5.3)], there exists \(C<\infty \) such that
We consider next the case \(k\ge \gamma ^2n^2\). Define \(S = {\mathrm {LE}}(R^1[0,T^1_{0,n}])\). By the time reversibility of the random walk,
Let \(T^*=\min \{t:{\mathrm {dist}}(R^2[0,t],S)\le \varepsilon n\}\) and \(T^{**} = \inf \{t>T^*:R^2(t)\notin B(R^2(T^*),\varepsilon ^{\frac{1}{4}}n)\}\). By the Markov inequality,
and by the definition of the event A,
Combining these bounds with the Markov property at time \(\frac{3}{2}k\) and the local central limit theorem, we obtain that for each \(k\ge \gamma ^2 n^2\),
Thus,
Putting the two cases together, we get
If \(\gamma = \varepsilon ^{\frac{\eta }{4}}\delta ^{1-\frac{\eta }{4}}\), then the last expression is \(\le C'\,\frac{\varepsilon ^\alpha }{\delta ^5}\) for \(\alpha = \frac{\eta }{4}\). This gives the second part of the bound in (5.5). The proof of Proposition 5.2 is complete. \(\square \)
5.2 Proof of Theorem 1.1
Let \({\mathcal B}_D\) be the restriction of an independent Brownian loop soup of intensity 1 to the loops entirely contained in D. Denote by X the random subset of \(\overline{D}\) consisting of \({\mathrm {K}}\) and all the loops from \({\mathcal B}_D\) that intersect \({\mathrm {K}}\). First of all, note that X is closed. Indeed, for any \(\varepsilon >0\), only finitely many loops in \({\mathcal B}_D\) have diameter bigger than \(\varepsilon \) almost surely, so the part of X outside the \(\varepsilon \)-neighborhood of \({\mathrm {K}}\) is contained in a finite union of compact loops. Since \({\mathrm {K}}\) is closed and \(\varepsilon >0\) is arbitrary, X is also closed.
Let \({\mathrm {BM}}\) be the trace of the Brownian motion killed on exiting from D viewed as a compact subset of \(\overline{D}\). We need to prove that X has the same law as \({\mathrm {BM}}\) in \(({\mathcal {K}}_D,d_H)\). Let \((\Omega ,\mathcal F,{\mathbb {P}})\) be a probability space large enough to contain all the random variables used in this proof. It suffices to prove that
Since every bounded open set can be approximated from inside by a finite union of open balls, which in turn can be approximated from inside by an open set with smooth boundary, it suffices to prove the above identity only for sets U with smooth boundary.
For a (measurable) set \(S\subset {\mathbb {R}}^3\) and a (countable) collection of loops L in \({\mathbb {R}}^3\), let E(S, L) be the union of S and all the loops from L that intersect S, the enlargement of S by L. In particular, \(X = E({\mathrm {K}},{\mathcal B}_D)\). Also for \(\delta >0\), let \(L^{>\delta }\) be the subcollection of all the loops from L with diameter \(>\delta \).
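The enlargement operation E(S, L) can be made concrete on finite lattice approximations. The sketch below (our own illustration, not part of the proof; the name `enlarge` is ours) represents S and each loop as finite sets of lattice points:

```python
def enlarge(S, loops):
    """E(S, L): the union of S with every loop from L that intersects S.
    Here S is a finite set of lattice points and each loop is given as the
    finite set of points it visits."""
    out = set(S)
    for loop in loops:
        if loop & S:       # the loop intersects S itself (not the enlargement)
            out |= loop    # add the whole loop
    return out
```

Note that, as in the definition, a loop is added only if it meets S itself; loops touching other added loops but not S are excluded.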
Let \({\mathrm {LS}}_n\) be the restriction of the random walk loop soup of intensity 1 rescaled by \(\frac{1}{n}\) to D, i.e., \({\mathrm {LS}}_n = \{\frac{1}{n}\ell ~:~\ell \in {\mathcal R}^1,~\frac{1}{n}\ell \subset D\}\). By Proposition 2.1, if \({\mathrm {LEW}}_n\) and \({\mathrm {LS}}_n\) are independent, then \(E({\mathrm {LEW}}_n,{\mathrm {LS}}_n)\) has the same law as the trace of a simple random walk on \(\frac{1}{n}{\mathbb {Z}}^3\) killed on exiting from D. In particular, as \(n\rightarrow \infty \), \(E({\mathrm {LEW}}_n,{\mathrm {LS}}_n)\) converges weakly in the space \(({\mathcal {K}}_D,d_H)\) to the Brownian motion \({\mathrm {BM}}\).
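For the reader's orientation, the chronological loop erasure \({\mathrm {LE}}\) underlying \({\mathrm {LEW}}_n\) can be sketched for a finite nearest-neighbor path as follows (an illustration only; the helper name `loop_erase` is ours):

```python
def loop_erase(path):
    """Chronological loop erasure LE: scan the path in order and, each time
    the walk revisits a point already on the current erased path, delete the
    loop created by the revisit."""
    erased = []
    index = {}  # position of each point on the current erased path
    for p in path:
        if p in index:
            k = index[p]
            for q in erased[k + 1:]:   # erase the loop just closed
                del index[q]
            del erased[k + 1:]
        else:
            index[p] = len(erased)
            erased.append(p)
    return erased
```

By construction the output is a simple path; the results of this section concern how close \({\mathrm {LEW}}_n\) stays to being "uniformly" simple as \(n\rightarrow \infty \).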
Let U be an open subset of \({\mathbb {R}}^3\) with a smooth boundary and such that \(0\in U\). Fix \(0<\varepsilon <\delta \). We now define the random objects that will be used in the proof.
Since the space \(({\mathcal {K}}_D,d_H)\) is separable and \({\mathrm {LEW}}_n\) converges weakly to \({\mathrm {K}}\), by Skorokhod’s representation theorem we can define \(({\mathrm {LEW}}_n)_{n\ge 1}\) and \({\mathrm {K}}\) on the same probability space so that \(d_H({\mathrm {LEW}}_n,{\mathrm {K}})\rightarrow 0\) almost surely. Consider the event
that if \({\mathrm {K}}\) is in U then the distance from \({\mathrm {K}}\) to the complement of U is \(>3\delta \). By monotonicity, \({\mathbb {P}}[A_0]\rightarrow 1\) as \(\delta \rightarrow 0\). For each \(n\ge 1\), consider also the event
By construction, \({\mathbb {P}}[A_{1,n}]\rightarrow 1\) as \(n\rightarrow \infty \).
For each \(n\ge 1\), let \(({\mathcal R}^1_n,{\mathcal B}^1_n)\) be a pair consisting of the rescaled random walk loop soup of intensity 1 on \(\frac{1}{n}{\mathbb {Z}}^3\) and the Brownian loop soup of intensity 1 on \({\mathbb {R}}^3\), coupled so that on an event \(A_{2,n}\) of probability \(>1-Cn^{-\frac{1}{2}}\) there is a one-to-one correspondence between the loops from \({\mathcal R}^1_n\) of diameter \(>\delta \) rooted in \(D_2\) and those from \({\mathcal B}^1_n\) rooted in \(D_2\), with the Hausdorff distance between paired loops \(<\varepsilon \). This coupling is possible by Theorem 2.2. (In Theorem 2.2 we paired loops of sufficiently large length, but a loop of length s has diameter of order \(\sqrt{s}\).) We also assume that the pairs \(({\mathcal R}^1_n,{\mathcal B}^1_n)\), \(n\ge 1\), are independent of \(({\mathrm {LEW}}_n)_{n\ge 1}\) and \({\mathrm {K}}\).
In addition, for each \(n\ge 1\), we consider the event \(A_{3,n}\) that every loop from \({\mathcal R}^1_n\) with diameter \(>\delta \) which is contained in \((U\cap D)^{+4\varepsilon }\) is also contained in \((U\cap D)^{-4\varepsilon }\), and the event \(A_{4,n}\) that every loop from \({\mathcal R}^1_n\) with diameter \(>\delta \) at distance \(<4\varepsilon \) from \({\mathrm {LEW}}_n\) intersects \({\mathrm {LEW}}_n\). In other words, in the event \(A_{3,n}\) there are no big loops that are contained in \((U\cap D)^{+4\varepsilon }\) and get too close to the boundary of \(U\cap D\), and in the event \(A_{4,n}\) there are no big loops that get too close to the \({\mathrm {LEW}}_n\) without hitting it. It is proved in Propositions 5.1 and 5.2 that for some \(\alpha >0\), \(\inf _n{\mathbb {P}}[A_{3,n}\cap A_{4,n}]\ge 1 - C\,\frac{\varepsilon ^\alpha }{\delta ^5}\).
Note that in the event \(A_{2,n}\cap A_{3,n}\), for every loop from \({\mathcal R}^1_n\) with diameter \(>\delta \) contained in D, its pair from \({\mathcal B}^1_n\) is contained in D, and vice versa.
We denote the restriction of \({\mathcal R}^1_n\) to the loops contained in D by \({\mathrm {LS}}_n\), and the restriction of \({\mathcal B}^1_n\) to the loops contained in D by \({\mathrm {BS}}_n\).
We prove that for any \(n\ge 1\), on the event \(A_n = A_0\cap A_{1,n}\cap A_{2,n}\cap A_{3,n}\cap A_{4,n}\),
We begin with a proof of the first inclusion. Assume that \(E({\mathrm {LEW}}_n,{\mathrm {LS}}_n)\subset U^{-3\varepsilon }\). In particular, \({\mathrm {LEW}}_n\subset U^{-3\varepsilon }\). Since \(A_{1,n}\) holds, this implies that \({\mathrm {K}}\subset U^{-2\varepsilon }\), which is equivalent to \({\mathrm {K}}^{+2\varepsilon }\subset U\).
Let \(\ell \in {\mathrm {BS}}_n^{>\delta }\) be such that \(\ell \cap {\mathrm {K}}^{+2\varepsilon }\ne \emptyset \). Then, since \(A_{1,n}\) occurs, \(\ell \cap {\mathrm {LEW}}_n^{+3\varepsilon }\ne \emptyset \). Since \(A_{2,n}\cap A_{3,n}\) occurs, there is \(\widetilde{\ell }\in {\mathrm {LS}}_n^{>\delta }\) such that \(d_H(\widetilde{\ell },\ell )<\varepsilon \). In particular, \(\widetilde{\ell }\cap {\mathrm {LEW}}_n^{+4\varepsilon }\ne \emptyset \). Since \(A_{4,n}\) occurs, this implies that \(\widetilde{\ell }\cap {\mathrm {LEW}}_n\ne \emptyset \). By our assumption, \(\widetilde{\ell }\subset U^{-3\varepsilon }\). Therefore, \(\ell \subset U^{-2\varepsilon }\subset U\). Thus, any \(\ell \in {\mathrm {BS}}_n^{>\delta }\) such that \(\ell \cap {\mathrm {K}}^{+2\varepsilon }\ne \emptyset \) is contained in U. This implies that \(E({\mathrm {K}}^{+2\varepsilon },{\mathrm {BS}}_n^{>\delta })\subset U\). Since \(A_0\) occurs, the above is true if and only if \(E({\mathrm {K}}^{+2\varepsilon },{\mathrm {BS}}_n)\subset U\).
We proceed with the proof of the second inclusion in (5.6). Assume that \(E({\mathrm {K}}^{+2\varepsilon },{\mathrm {BS}}_n)\subset U\). Since \(A_0\) occurs, this is equivalent to \(E({\mathrm {K}}^{+2\varepsilon },{\mathrm {BS}}_n^{>\delta })\subset U\). Since \(A_{1,n}\) occurs and \({\mathrm {K}}^{+2\varepsilon }\subset U\), we also have \({\mathrm {LEW}}_n\subset U\).
Let \(\widetilde{\ell }\in {\mathrm {LS}}_n^{>\delta }\) such that \(\widetilde{\ell }\cap {\mathrm {LEW}}_n\ne \emptyset \). Since \(A_{1,n}\) occurs, \(\widetilde{\ell }\cap {\mathrm {K}}^{+\varepsilon } \ne \emptyset \). Since \(A_{2,n}\cap A_{3,n}\) occurs, there is \(\ell \in {\mathrm {BS}}_n^{>\delta }\) such that \(d_H(\widetilde{\ell },\ell )<\varepsilon \). In particular, \(\ell \cap {\mathrm {K}}^{+2\varepsilon }\ne \emptyset \). By assumption, \(\ell \subset U\). Therefore, \(\widetilde{\ell }\subset U^{+\varepsilon }\). Since \(A_{3,n}\) occurs, we actually have \(\widetilde{\ell }\subset U\). Thus, any \(\widetilde{\ell }\in {\mathrm {LS}}_n^{>\delta }\) such that \(\widetilde{\ell }\cap {\mathrm {LEW}}_n \ne \emptyset \) is contained in U. This implies that \(E({\mathrm {LEW}}_n,{\mathrm {LS}}_n^{>\delta })\subset U\). Finally, by adding all the loops of diameter \(<\delta \) we get \(E({\mathrm {LEW}}_n,{\mathrm {LS}}_n)\subset U^{+\delta }\). The proof of the inclusion (5.6) is complete.
It follows from (5.6) that for all \(n\ge 1\), \(0<\varepsilon <\delta \),
By monotonicity,
Since \(\limsup _{\delta \rightarrow 0}\limsup _{\varepsilon \rightarrow 0}\limsup _{n\rightarrow \infty }{\mathbb {P}}[A_n^c] = 0\), we have
Since by monotonicity both left and right hand sides are equal to \({\mathbb {P}}\left[ {\mathrm {BM}}\subset U\cap \overline{D}\right] \), we get the desired result. \(\square \)
6 Scaling limit is a simple path
6.1 Quasi-loops
We say that a nearest neighbor path \(\gamma \) in \({\mathbb {Z}}^3\) has a (s, r)-quasi-loop at \(v\in {\mathbb {Z}}^3\) if there exist \(i\le j\) such that \(\gamma (i),\gamma (j)\in B(v,s)\) and \(\gamma [i,j]\nsubseteq B(v,r)\). The set of all such v’s is denoted by \({\mathrm {QL}}(s,r;\gamma )\). Note that the set \({\mathrm {QL}}(s,r; \gamma )\) is non-increasing in r and non-decreasing in s.
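The definition of an (s, r)-quasi-loop can be checked directly on a finite path. The following sketch (our own illustration; the name `has_quasi_loop_at` is not from the text) uses Euclidean balls:

```python
import math

def in_ball(p, v, radius):
    """Is the point p in the closed Euclidean ball B(v, radius)?"""
    return math.dist(p, v) <= radius

def has_quasi_loop_at(path, v, s, r):
    """The path has an (s, r)-quasi-loop at v if there are indices i <= j
    with path[i], path[j] in B(v, s) while path[i..j] leaves B(v, r)."""
    close = [i for i, p in enumerate(path) if in_ball(p, v, s)]
    for a, i in enumerate(close):
        for j in close[a:]:
            if any(not in_ball(p, v, r) for p in path[i:j + 1]):
                return True
    return False
```

A simple path can have quasi-loops: it may return to within distance s of an earlier point after traveling far away, which is exactly the behavior Theorem 6.1 rules out for \({\mathrm {LE}}(R[0,T_{0,n}])\) at the relevant scales.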
Theorem 6.1
There exist \(M<\infty \), \(\alpha >0\), and \(C<\infty \) such that for any \(\varepsilon >0\) and \(n\ge 1\),
Proof
It suffices to consider sufficiently small \(\varepsilon \). Let \(M\ge 1\). The proof is subdivided into three cases: (1) there is a quasi-loop at a vertex of \(B(0,\varepsilon n)\), (2) there is a quasi-loop at a vertex of \(B(0,n-\varepsilon n)^c\), and (3) there is a quasi-loop at a vertex of \(B(0,n-\varepsilon n){\setminus }\overline{B(0,\varepsilon n)}\). We will verify the first and the second cases for \(M=1\) and the last case for a sufficiently large M.
Consider the first case. If \({\mathrm {QL}}\left( \varepsilon n,\sqrt{\varepsilon }n;~{\mathrm {LE}}(R[0,T_{0,n}])\right) \cap B(0,\varepsilon n)\ne \emptyset \), then the random walk R must return to \(B(0,2\varepsilon n)\) after leaving \(B(0,\sqrt{\varepsilon } n - \varepsilon n)\). Thus,
This implies (6.1) for quasi-loops at a vertex in \(B(0,\varepsilon n)\).
Consider the second case. If \({\mathrm {QL}}\left( \varepsilon n,\sqrt{\varepsilon }n;~{\mathrm {LE}}(R[0,T_{0,n}])\right) {\setminus } B(0,n-\varepsilon n)\ne \emptyset \), then the random walk R must hit \(\partial B(0,n-2\varepsilon n)\) and then move to distance \(\sqrt{\varepsilon } n - 3\varepsilon n\) from the hitting point before it exits from B(0, n). Thus, there exist \(\alpha >0\) and \(C<\infty \) such that
where the last inequality follows, for instance, from [11, Lemma 2.5]. This implies (6.1) for quasi-loops at a vertex outside of \(B(0,n- \varepsilon n)\).
It remains to consider the third case. Let \(A = B(0,n-\varepsilon n){\setminus } \overline{B(0,\varepsilon n)}\). We will prove that for some \(M\ge 1\),
which implies (6.1) for quasi-loops at a vertex in A. We cover A by \(s= 10\,\lfloor \varepsilon ^{-6}\rfloor \) balls of radius \(\varepsilon ^2 n\) with centers at \(v_1,\ldots , v_s\in A\). Then, the probability in (6.2) is bounded from above by
and the inequality (6.2) is immediate from the following lemma. \(\square \)
Lemma 6.2
There exist \(M,C<\infty \) such that for all \(v\in A\), \(\varepsilon >0\) and \(n\ge 1\),
The proof of Theorem 6.1 is complete subject to Lemma 6.2, which we prove below.
Proof of Lemma 6.2
Fix \(v\in A\) and assume that \(M>2\). Consider the stopping times \(T_0 = 0\),
where \(\inf \emptyset = \infty \).
Claim 6.3
If \({\mathrm {QL}}\left( \varepsilon ^Mn,\varepsilon n;~{\mathrm {LE}}(R[0,T_{0,n}])\right) \cap B(v,\varepsilon ^2 n)\ne \emptyset \), then there exists i such that
Proof of Claim 6.3
Assume that \({\mathrm {QL}}\left( \varepsilon ^Mn,\varepsilon n;~{\mathrm {LE}}(R[0,T_{0,n}])\right) \) has a point in \(B(v,\varepsilon ^2 n)\). Let I be such that \(T_{2I}< T_{0,n} < T_{2I+1}\).
Note that \({\mathrm {QL}}\left( \varepsilon ^Mn,\varepsilon n;~{\mathrm {LE}}(R[0,T_{2I}])\right) \) also has a point in \(B(v,\varepsilon ^2 n)\), since \(R[T_{2I},T_{0,n}]\) does not intersect \(B(v,\varepsilon ^2n + \varepsilon ^Mn)\). If \({\mathrm {QL}}\left( \varepsilon ^Mn,\varepsilon n;~{\mathrm {LE}}(R[0,T_{2I-1}])\right) \cap B(v,\varepsilon ^2 n)=\emptyset \), then we are done; thus assume the contrary.
Let \(i\le I\) and assume that \({\mathrm {QL}}\left( \varepsilon ^Mn,\varepsilon n;~{\mathrm {LE}}(R[0,T_{2i-1}])\right) \) has a point in \(B(v,\varepsilon ^2 n)\). Since \(R[T_{2i-2},T_{2i-1}]\) does not intersect \(B(v,\varepsilon ^2 n + \varepsilon ^M n)\), the set \({\mathrm {QL}}\left( \varepsilon ^Mn,\varepsilon n;~{\mathrm {LE}}(R[0,T_{2i-2}])\right) \) must also have a point in \(B(v,\varepsilon ^2 n)\). In this case, if \({\mathrm {QL}}\left( \varepsilon ^Mn,\varepsilon n;~{\mathrm {LE}}(R[0,T_{2i-3}])\right) \) does not have a point in \(B(v,\varepsilon ^2 n)\), then we are done, otherwise, we repeat.
Note that the above procedure eventually succeeds, since \({\mathrm {QL}}\big (\varepsilon ^Mn,\varepsilon n;~{\mathrm {LE}}(R[0,T_1])\big )\) does not have a point in \(B(v,\varepsilon ^2 n)\). (In fact, \({\mathrm {QL}}\left( \varepsilon ^Mn,\varepsilon n;~{\mathrm {LE}}(R[0,T_3])\right) \) too.) \(\square \)
We proceed to estimate the probability of event (6.3) for any fixed i. Let i be fixed so that \(T_{2i-1}<\infty \) and define \(L' = {\mathrm {LE}}(R[0,T_{2i-1}])\) and \(R' = R[T_{2i-1},T_{2i}]\). Consider the stopping times \(\tau _0 = 0\),
Claim 6.4
If the event (6.3) occurs, then there exists k and \(x\in B(v,\varepsilon ^2 n)\) such that
Proof of Claim 6.4
Assume that the event (6.3) occurs. Since \({\mathrm {LE}}(R[0,T_{2i}]) = {\mathrm {LE}}(L'\cup R')\) and \(R'\subset \overline{B(v,\frac{1}{2}\varepsilon n)}\), there exists \(x\in B(v,\varepsilon ^2 n)\) such that \(L'[0,\mathrm {len}\,L']\cap B(x,\varepsilon ^Mn)\ne \emptyset \) and \(R'\cap B(x,\varepsilon ^Mn)\ne \emptyset \). Otherwise, \(R'\) would not complete any \((\varepsilon ^Mn,\varepsilon n)\)-quasi-loop in \(B(v,\varepsilon ^2 n)\).
Let k be the smallest index such that for some \(x\in B(v,\varepsilon ^2n)\), \(L'[0,\tau _{2k}]\cap B(x,\varepsilon ^M n)\ne \emptyset \) and \(R'\cap B(x,\varepsilon ^M n)\ne \emptyset \). We claim that k satisfies (6.4), i.e., \(R'\cap L'[0,\tau _{2k}]=\emptyset \).
Assume that \(R'\cap L'[0,\tau _{2k}]\ne \emptyset \). Let s be the smallest number such that \(L'[0,s]\cap R'\ne \emptyset \). By the assumption, \(\tau _{2k}\ge s\). Thus, \(L'[\tau _{2k-1},s]\subset \overline{B(v,\frac{1}{2}\varepsilon n)}\). Since also \(R'\subset \overline{B(v,\frac{1}{2}\varepsilon n)}\), there must exist \(x\in B(v,\varepsilon ^2 n)\) such that \(L'[0,\tau _{2k-1}]\cap B(x,\varepsilon ^M n)\ne \emptyset \) and \(R'\cap B(x,\varepsilon ^M n)\ne \emptyset \). Since \(L'[\tau _{2k-2},\tau _{2k-1}]\cap B(v,\varepsilon ^2n+\varepsilon ^Mn)=\emptyset \), the above conclusion actually gives that for some \(x\in B(v,\varepsilon ^2 n)\), \(L'[0,\tau _{2k-2}]\cap B(x,\varepsilon ^M n)\ne \emptyset \) and \(R'\cap B(x,\varepsilon ^M n)\ne \emptyset \). This contradicts the minimality of k. \(\square \)
Next we use Lemma 3.3 to prove that each \({\mathrm {LE}}(R[0,T_{2i-1}])[0,\tau _{2k}]\) is hittable by a random walk starting nearby.
Claim 6.5
There exist M and C such that for each \(v\notin B(0,\varepsilon n)\), integers i and k, and \(\varepsilon >0\), if \(L_{i,k} = {\mathrm {LE}}(R[0,T_{2i-1}])[0,\tau _{2k}]\), then
Proof of Claim 6.5
Assume that \(\varepsilon \) is small (possibly depending on M). We cover \(B(v,2\varepsilon ^2 n)\) by \(s = 10\,\lfloor \varepsilon ^{-3M}\rfloor \) balls of radius \(\varepsilon ^Mn\) with centers at \(v_1,\ldots , v_s\in B(v,2\varepsilon ^2 n)\). Then the probability in (6.5) is bounded from above by
For each \(x\in B\left( v_j,\varepsilon ^M n\right) \), \(B(x,2\varepsilon ^M n)\subset B(v_j,3\varepsilon ^M n)\) and \(B(x,\frac{1}{4}\varepsilon n)\supset B(v_j,\frac{1}{8}\varepsilon n)\). Thus, the jth probability in the above maximum is at most
By the Harnack inequality applied to the harmonic function
there exists a universal constant \(c_*>0\) such that the jth probability is bounded from above by
We apply Lemma 3.3 to \({\mathrm {LE}}(R[0,T_{2i-1}])[0,\tau _{2k}]\) with \(v=v_j\), \(r = \frac{1}{8}\varepsilon n\), \(s=3\varepsilon ^Mn\), choosing M so that \(\left( \frac{s}{r}\right) ^\eta = \left( 24\,\varepsilon ^{M-1}\right) ^\eta <c_*\varepsilon ^7\), and choosing K so that \(\left( \frac{s}{r}\right) ^K = \left( 24\,\varepsilon ^{M-1}\right) ^K < \varepsilon ^{7+3M}\). This gives that the above probability is \(\le C\varepsilon ^{7+3M}\). Thus, the original probability is \(\le (C\varepsilon ^{7+3M})\cdot s \le C'\varepsilon ^7\). \(\square \)
Let \(E_i\) be the event in (6.3) and \(E_{i,k}\) the event in (6.4) with M as in Claim 6.5. By Claim 6.4, \({\mathbb {P}}^0[E_i]\le i\,\max _k{\mathbb {P}}^0[E_{i,k}]\). On the event \(E_{i,k}\), define the stopping time
By the definition of \(E_{i,k}\), \(\sigma _{i,k}<T_{2i}\) and \(R[\sigma _{i,k},T_{2i}]\cap L_{i,k} = \emptyset \). By Claim 6.5 and the strong Markov property, if \(F_{i,k}\) is the event in (6.5), then
Thus, there exists C such that for all i, \({\mathbb {P}}^0[E_i]\le C\,i\,\varepsilon ^7\). To conclude the proof, we notice that for each \(i\ge 7\),
By Claim 6.3,
The proof of Lemma 6.2 is complete. \(\square \)
Exactly the same argument as in the second case in the proof of Theorem 6.1 gives the following result.
Proposition 6.6
There exist \(\alpha >0\) and \(C<\infty \) such that for all \(\varepsilon >0\) and \(n\ge 1\), if \(L = {\mathrm {LE}}(R[0,T_{0,n}])\) then
6.2 Proof of Theorem 1.2
The proof of the theorem follows from Theorem 6.1 and Proposition 6.6 similarly to [26, Theorem 1.1]. Recall that a compact subset of \({\mathbb {R}}^d\) is a simple path if it is homeomorphic to [0, 1]. We begin with the following observation, which is analogous to [26, Theorem 3.5].
For a simple path \(\gamma \) and \(x,y\in \gamma \), we denote by \(D(x,y;\gamma )\) the diameter (with respect to the Euclidean distance) of the arc of \(\gamma \) joining x and y. For a function \(f:(0,\infty )\rightarrow (0,\infty )\), let \(\Gamma (f)\) be the set of paths from \(\Gamma \) such that for all \(x,y\in \gamma \),
and denote by \(\overline{\Gamma (f)}\) the closure of \(\Gamma (f)\) in \(({\mathcal {K}}_D, d_H)\).
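To fix ideas, the arc diameter \(D(x,y;\gamma )\) can be computed for a polygonal simple path as follows (an illustrative sketch, not from the text; `arc_diameter` identifies the two points by their positions on the path):

```python
import math

def arc_diameter(path, i, j):
    """D(x, y; gamma): the Euclidean diameter of the arc of the simple path
    joining path[i] and path[j], where path is a finite polyline given as a
    list of points."""
    lo, hi = min(i, j), max(i, j)
    arc = path[lo:hi + 1]
    # diameter = largest pairwise Euclidean distance on the arc
    return max(math.dist(p, q) for p in arc for q in arc)
```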
Lemma 6.7
For any monotone increasing and continuous function \(f:(0,\infty )\rightarrow (0,\infty )\),
Proof of Lemma 6.7
Let \(\gamma \in \overline{\Gamma (f)}\). Then \(\gamma \) is a compact, connected subset of \(\overline{D}\), such that \(0\in \gamma \) and \(\gamma \cap \partial D\ne \emptyset \).
We first show that \(|\gamma \cap \partial D| = 1\). Assume that there exist two different points \(p,q\in \gamma \cap \partial D\). Let \(\gamma _n\in \Gamma (f)\) be such that \(d_H(\gamma _n,\gamma )<\frac{1}{n}\) and \(p_n,q_n\in \gamma _n\) such that \(|p_n-p|<\frac{1}{n}\) and \(|q_n-q|<\frac{1}{n}\). In particular, \({\mathrm {dist}}(p_n,\partial D)<\frac{1}{n}\) and \({\mathrm {dist}}(q_n,\partial D)<\frac{1}{n}\). Since \(\gamma _n\in \Gamma (f)\),
By passing to the limit, we conclude that \(p=q\). Thus, \(|\gamma \cap \partial D| = 1\). We denote this point by b.
It remains to show that \(\gamma \) is a simple path with \(\gamma ^e = b\). We will use the following Janiszewski’s topological characterization of arcs (see, e.g., [26, Lemma 3.6]):
Let \(x\in \gamma {\setminus }\{0,b\}\). Let \(\gamma _n\in \Gamma (f)\) be such that \(d_H(\gamma _n,\gamma )<\frac{1}{n}\), and \(x_n\in \gamma _n\) such that \(|x_n-x|<\frac{1}{n}\). For each n, let \(\gamma _n^1\) be the closed arc of \(\gamma _n\) connecting 0 and \(x_n\), and \(\gamma _n^2\) the closed arc of \(\gamma _n\) connecting \(x_n\) and \(\gamma _n^e\). By passing to a subsequence, if necessary, assume with no loss of generality that \(\gamma _n^1\) converges to \(\gamma ^1\) and \(\gamma _n^2\) converges to \(\gamma ^2\) in \(({\mathcal {K}}_D, d_H)\). Note that \(\gamma ^1\) and \(\gamma ^2\) are compact sets with \(\gamma ^1\cup \gamma ^2 = \gamma \). For any \(p\in \gamma ^1\) and \(q\in \gamma ^2\), let \(p_n\in \gamma _n^1\) be a sequence converging to p, and \(q_n\in \gamma _n^2\) a sequence converging to q. Since \(\gamma _n\in \Gamma (f)\),
By passing to the limit, we get that \(|p-q|\ge f(|x-q|)\), for all \(p\in \gamma ^1\) and \(q\in \gamma ^2\), which implies that \(\gamma ^1{\setminus }\{x\}\) and \(\gamma ^2{\setminus }\{x\}\) are disjoint. Therefore, \(\gamma {\setminus }\{x\}\) is disconnected. By (6.6), \(\gamma \) is a simple path from 0 to b.
We have shown that \(\gamma \in \Gamma \). \(\square \)
The main ingredient for the proof of Theorem 1.2 is the following lemma.
Lemma 6.8
There exist \(C,M<\infty \) and \(\alpha >0\) such that for all \(m,n\ge 1\),
where
Proof of Lemma 6.8
Let M be as in Theorem 6.1. Let \(n\ge 1\). Set \(L = {\mathrm {LE}}(R[0,T_{0,n}])\). For \(\varepsilon >0\), consider the events
and write \(E_i = E_{2^{-i}}\) and \(F_i = F_{2^{-i}}\). By Theorem 6.1 and Proposition 6.6, there exist \(C<\infty \) and \(\alpha >0\) such that
Therefore, it suffices to show that
Assume that \(E_i\) does not occur. Let \(x,y\in {\mathrm {LEW}}_n\) with \(2^{-M(i+1)}\le |x-y|< 2^{-Mi}\). Then,
Assume that \(F_i\) does not occur. Let \(x,y\in {\mathrm {LEW}}_n\) with
Then,
Thus, if \(\bigcup _{i=m}^\infty E_i\cup F_i\) does not occur, then for all \(x,y\in {\mathrm {LEW}}_n\) with \(|x-y|< 2^{-Mm}\) or \(\max \left( {\mathrm {dist}}(x,\partial D),{\mathrm {dist}}(y,\partial D)\right) <2^{-Mm}\),
This implies that if \(\bigcup _{i=m}^\infty E_i\cup F_i\) does not occur, then for all \(x,y\in {\mathrm {LEW}}_n\),
Since \({\mathrm {LEW}}_n\in \Gamma \), we have proved that \({\mathrm {LEW}}_n\in \Gamma (f_m)\). The proof of Lemma 6.8 is complete. \(\square \)
Theorem 1.2 easily follows from Lemmas 6.7 and 6.8.
Proof of Theorem 1.2
Let \(f_m\) be as in Lemma 6.8. By Lemma 6.8, there exist \(C<\infty \) and \(\alpha >0\) such that for all \(m,n\ge 1\), \({\mathbb {P}}\left[ {\mathrm {LEW}}_n\in \Gamma (f_m)\right] \ge 1 - C\,2^{-\alpha m}\). By the Portmanteau theorem (see, e.g., [3, Theorem 2.1]), for each \(m\ge 1\),
Since \(f_m\) is increasing and continuous, by Lemma 6.7, for all \(m\ge 1\).
Thus, \({\mathbb {P}}\left[ K\in \Gamma \right] =1\). \(\square \)
7 Hausdorff dimension
7.1 Preliminaries on loop erased walks
In this section we collect some auxiliary results about loop erased random walk. Let
where \(t_n\) is the first time that \({\mathrm {LE}}(R^1[0,\infty ))\) exits from B(0, n), and \(s_m\) is its last visit to B(0, m) before time \(t_n\). We define \({\mathrm {Es}}(m,n) = 1\) for \(m\ge n\). Let
where \(\beta = \lim _{n\rightarrow \infty }\frac{\log {\mathbb {E}}^{0,1}\left[ \mathrm {len}\,{\mathrm {LE}}(R^1[0,T^1_{0,n}])\right] }{\log n}\) is the growth exponent of the loop erased random walk. Its existence is shown in [27]. By [27, Lemma 7.2.2], for all \(\varepsilon >0\) there exist \(C_{1,\varepsilon },C_{2,\varepsilon }\in (0,\infty )\) such that for all m, n,
The following lemma is the main ingredient in the proof of the upper bound on the Hausdorff dimension of \({\mathrm {K}}\). In its proof we will only use the upper bound from (7.1).
Lemma 7.1
For any \(\delta >0\) there exists \(C_\delta <\infty \) such that for all \(\varepsilon >0\), \(n\ge 1\) and \(x\in B(0,n-2\varepsilon n){\setminus } \overline{B(0,2\varepsilon n)}\),
Proof
We write \(B = B(x,\varepsilon n)\) throughout the proof. Let \(L = \sup \{i\le T^1_{0,n}:R^1(i)\in B\}\) be the time of the last visit of \(R^1\) to B before it leaves B(0, n). We will use the following observation,
where for a path \(\gamma \), we write \(\lambda (\gamma )\) for the piece of \(\gamma \) from the start and until the first entrance to B. The probability of the event on the right hand side equals
By the time reversibility of loop erasure, see [12, Lemma 7.2.1], the probability in the sum equals
where for a path \(\gamma \), we write \(\mu (\gamma )\) for the piece of \(\gamma \) from the last visit to B until the end. Summation over t gives
where \(T^1_0 = T^1(\{0\})\). Let \(d_x = {\mathrm {dist}}(x,\partial B(0,n))\) and define \(R_x = \frac{1}{2}\min (|x|,d_x)\). We first consider the case when \(R^1\) returns to \(B(y,2\varepsilon n)\) after leaving \(B(y,\frac{1}{4} R_x)\):
where \((*)\) follows from [12, Proposition 1.5.10], \((**)\) by considering separately the cases \(R_x = |x|\) and \(R_x = d_x\) and using \(R_x\ge \varepsilon n\), and the last inequality from \(\alpha \le 1\).
Thus, to establish (7.2), it remains to consider the case when \(R^1\) does not return to \(B(y,2\varepsilon n)\) after leaving \(B(y,\frac{1}{4} R_x)\). In this case, we have the inclusion
where t is the first time that \({\mathrm {LE}}(R^1[0,T^1_0])\) exits from \(B(y,\frac{1}{4} R_x)\), and s is its last visit time to \(B(y,2\varepsilon n)\) before t. Therefore, in this case,
where \((*)\) follows from the strong Markov property and \((**)\) from [12, Proposition 1.5.10] and the Harnack inequality. It remains to bound the probability
By the results of [24, Sect. 4.1], if X is a random walk from y conditioned to hit 0 before \(\partial B(0,n)\) and killed upon hitting 0, Y is an infinite random walk from y, and the laws of their loop erasures until the first exit times from \(B(y,\frac{1}{4} R_x)\) are \({\mathbb {P}}_X\) and \({\mathbb {P}}_Y\), then the Radon–Nikodym derivative \(\frac{d{\mathbb {P}}_X}{d{\mathbb {P}}_Y}\) is bounded from above and below by universal positive and finite constants \(C_1\) and \(C_2\). Therefore, the probability in the above display is bounded from above by
which is at most
where the last inequality again follows by considering cases \(R_x=|x|\) and \(R_x=d_x\) and using \(\alpha \le 1\). Finally, by summing over y on the interior boundary of B, we get the bound
giving (7.2). The proof of Lemma 7.1 is complete. \(\square \)
Remark 7.2
Essentially the same proof gives the complementary lower bound to (7.2) for x’s away from the boundary of B(0, n). For any \(\delta >0\) there exists \(c_\delta >0\) such that for all \(\varepsilon >0\), \(n\ge 1\) and \(x\in B(0,\frac{1}{2} n){\setminus } \overline{B(0,2\varepsilon n)}\),
7.2 Proof of Theorem 1.4: upper bound
In this section we use Lemma 7.1 to prove that \({\mathrm {dim}_{\mathrm {H}}}({\mathrm {K}})\le 2 - \alpha \) almost surely. Recall that the Hausdorff dimension of a subset S of \({\mathbb {R}}^d\) is defined as
where \({\mathcal {H}}^\delta (S)\) is the \(\delta \)-Hausdorff measure of S, \({\mathcal {H}}^\delta (S) = \lim _{\varepsilon \rightarrow 0}\inf \sum _{j=1}^\infty {\mathrm {diam}}(S_j)^\delta \), and the infimum is taken over all countable collections of sets \(\{S_j\}\) covering S with \({\mathrm {diam}}(S_j)\le \varepsilon \).
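The covering counts used below are box-counting quantities. As a crude numerical illustration (our own sketch, not part of the argument; the two-scale slope is only a heuristic stand-in for the limit defining the dimension):

```python
import math

def covering_count(points, eps):
    """Number of axis-parallel eps-boxes (a stand-in for the balls of radius
    eps/2 in the text) needed to cover a finite point cloud."""
    return len({tuple(math.floor(c / eps) for c in p) for p in points})

def dimension_estimate(points, eps_small, eps_large):
    """Crude box-counting slope log N(eps) / log(1/eps) between two scales."""
    n_small = covering_count(points, eps_small)
    n_large = covering_count(points, eps_large)
    return math.log(n_small / n_large) / math.log(eps_large / eps_small)
```

For points sampled densely from a line segment the estimate is close to 1; for \({\mathrm {K}}\) the bounds of Theorem 1.4 place the analogous limit in \((1,\frac{5}{3}]\).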
Consider the coupling of \({\mathrm {K}}\) and \({\mathrm {LEW}}_n\) such that \(d_H({\mathrm {LEW}}_n,{\mathrm {K}})\rightarrow 0\) as \(n\rightarrow \infty \) almost surely. Let \(N_1(\varepsilon )\) be the number of balls of radius \(\frac{1}{2}\varepsilon \) centered in \(\frac{1}{2}\varepsilon {\mathbb {Z}}^3\cap \left( D_{1-2\varepsilon }{\setminus } D_{2\varepsilon }\right) \) that have non-empty intersection with \({\mathrm {K}}\), and \(N_2(\varepsilon )\) the number of balls of radius \(\frac{1}{2}\varepsilon \) centered in \(\frac{1}{2}\varepsilon {\mathbb {Z}}^3\cap \left( D_{2\varepsilon }\cup D_{1-2\varepsilon }^c\right) \) that have non-empty intersection with \({\mathrm {K}}\). Similarly, define \(N_{1,n}(\varepsilon )\) as the number of balls of radius \(\varepsilon n\) centered in \(\frac{1}{2}\varepsilon n{\mathbb {Z}}^3\cap B(0,n-2\varepsilon n){\setminus }\overline{B(0,2\varepsilon n)}\) that have non-empty intersection with \({\mathrm {LE}}(R[0,T_{0,n}])\), and \(N_{2,n}(\varepsilon )\) as the corresponding number of balls centered in \(\frac{1}{2}\varepsilon n{\mathbb {Z}}^3\cap \left( B(0,2\varepsilon n)\cup B(0,n-2\varepsilon n)^c\right) \). Then, for any positive \(\delta \) and \(\xi \),
By Lemma 7.1,
By sending n to infinity, we obtain that \({\mathbb {P}}\left[ N_1(\varepsilon )\ge \delta \,\varepsilon ^{\alpha -2-\xi }\right] \le \frac{1}{\delta }\, C_\xi \,\varepsilon ^{\frac{1}{2}\xi }\). To obtain a bound for \(N_2(\varepsilon )\), we proceed as above, but use Proposition 6.6 instead of Lemma 7.1. We get \({\mathbb {P}}\left[ N_2(\varepsilon )\ge \delta \,\varepsilon ^{-\frac{1}{2}}\right] \le C_\delta \,\varepsilon ^{\frac{1}{2}\xi }\), if \(\xi \) is sufficiently small. Since \(\alpha \le 1\), this implies that \({\mathbb {P}}\left[ N_2(\varepsilon )\ge \delta \,\varepsilon ^{\alpha -2-\xi }\right] \le C_\delta \,\varepsilon ^{\frac{1}{2}\xi }\).
For \(\gamma \ge 0\), let \({\mathcal {H}}^\gamma _\varepsilon ({\mathrm {K}}) = \inf \sum _{j=1}^\infty {\mathrm {diam}}(S_j)^\gamma \), where the infimum is taken over all coverings of \({\mathrm {K}}\) by sets \(S_j\) with diameter at most \(\varepsilon \). Then, \({\mathcal {H}}^\gamma _\varepsilon ({\mathrm {K}})\le \varepsilon ^\gamma \,\left( N_1(\varepsilon )+N_2(\varepsilon )\right) \), and we obtain from the above estimates that
Note that if \(\varepsilon \searrow 0\) then \({\mathcal {H}}^{2-\alpha +\xi }_\varepsilon ({\mathrm {K}})\nearrow {\mathcal {H}}^{2-\alpha +\xi }({\mathrm {K}})\). Thus, for all \(\delta >0\), \({\mathbb {P}}[{\mathcal {H}}^{2-\alpha +\xi }({\mathrm {K}})\ge 2\delta ]=0\), i.e., \({\mathcal {H}}^{2-\alpha +\xi }({\mathrm {K}}) = 0\) almost surely. Since \(\xi >0\) is arbitrary, we get \({\mathrm {dim}_{\mathrm {H}}}({\mathrm {K}})\le 2-\alpha \). \(\square \)
7.3 Proof of Theorem 1.4: lower bound
Let \({\mathrm {BM}}\) be the Brownian motion in \({\mathbb {R}}^3\) started at 0 and \(\tau = \inf \{t\ge 0:|{\mathrm {BM}}(t)|=1\}\) the first exit time of \({\mathrm {BM}}\) from D. The set of cut points \({\mathrm {C}}\) of \({\mathrm {BM}}[0,\tau ]\) is defined as \({\mathrm {C}} = \left\{ {\mathrm {BM}}(t)\,:\,t\in (0,\tau ),\;{\mathrm {BM}}[0,t)\cap {\mathrm {BM}}(t,\tau ]=\emptyset \right\} \).
It is proved in [14] that \({\mathrm {dim}_{\mathrm {H}}}({\mathrm {C}}) = 2 - \xi \) almost surely, where \(\xi \in (\frac{1}{2},1)\) is the non-intersection exponent for three-dimensional Brownian motion. Note that any path in \({\mathrm {BM}}[0,\tau ]\) from \({\mathrm {BM}}(0)=0\) to \({\mathrm {BM}}(\tau )\in \partial D\) must pass through every cut point; in particular, each cut point disconnects 0 from \(\partial D\) in \({\mathrm {BM}}[0,\tau ]\). We denote by \({\mathcal {S}}(U)\) the set of all points of \(U\subseteq \overline{D}\) which disconnect 0 from \(\partial D\) in \(U\). As noted above, \({\mathcal {S}}({\mathrm {BM}}[0,\tau ])\supseteq {\mathrm {C}}\), and thus \({\mathrm {dim}_{\mathrm {H}}}({\mathcal {S}}({\mathrm {BM}}[0,\tau ])) \ge 2 - \xi \) almost surely.
Now, recall from Theorem 1.1 that \({\mathrm {BM}}[0,\tau ]\) has the same distribution as the union of the independent scaling limit of the loop erased random walk, \({\mathrm {K}}\), and all the loops from the Brownian loop soup of intensity 1 that are contained in D and intersect \({\mathrm {K}}\). Denote this union by X. Then \({\mathcal {S}}(X)\) has the same distribution as \({\mathcal {S}}({\mathrm {BM}}[0,\tau ])\) and, since \({\mathrm {K}}\) connects 0 and \(\partial D\), we have \({\mathcal {S}}(X)\subseteq {\mathrm {K}}\). Thus, \({\mathrm {dim}_{\mathrm {H}}}({\mathrm {K}}) \ge 2 - \xi \) almost surely. \(\square \)
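The disconnection argument hinges on cut points, and these have a clean discrete analogue that may aid intuition. As our own illustration (not part of the paper's argument), the sketch below extracts the cut times of a nearest-neighbour walk on \({\mathbb {Z}}^3\), a crude stand-in for \({\mathrm {BM}}[0,\tau ]\), in linear time: a time t is a cut time exactly when no site visited up to time t is revisited afterwards, which can be checked with a running maximum of last-visit times.

```python
import random

def srw_path(num_steps, rng):
    """Nearest-neighbour random walk on Z^3 started at 0."""
    steps = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    x = (0, 0, 0)
    path = [x]
    for _ in range(num_steps):
        dx = rng.choice(steps)
        x = (x[0] + dx[0], x[1] + dx[1], x[2] + dx[2])
        path.append(x)
    return path

def cut_times(path):
    """Times t with path[0..t] disjoint from path[t+1..], in O(len(path))."""
    last = {}
    for i, site in enumerate(path):
        last[site] = i  # last visit time of each site
    cuts = []
    m = -1  # latest return time among all sites seen so far
    for t, site in enumerate(path):
        m = max(m, last[site])
        if m == t:  # nothing in the prefix is revisited after time t
            cuts.append(t)
    return cuts
```

The walk visits every cut point on its way from the origin to its endpoint, mirroring the fact used above that \({\mathcal {S}}({\mathrm {BM}}[0,\tau ])\supseteq {\mathrm {C}}\).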
References
Ball, K., Sterbenz, J.: Explicit bounds for the return probability of simple random walks. J. Theor. Probab. 18(2), 317–326 (2005)
Beffara, V.: The dimension of the SLE curves. Ann. Probab. 36(4), 1421–1452 (2008)
Billingsley, P.: Convergence of Probability Measures. Wiley Series in Probability and Statistics: Probability and Statistics, 2nd edn. Wiley, New York (1999)
Camia, F.: Brownian loops and conformal fields. Preprint, arXiv:1501.04861
Dvoretzky, A., Erdős, P., Kakutani, S.: Double points of paths of Brownian motion in n-space. Acta Sci. Math. Szeged. 12, 75–81 (1950)
Fristedt, B.: An extension of a theorem of S. J. Taylor concerning the multiple points of the symmetric stable process. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete 9, 62–64 (1967)
Guttmann, A.J., Bursill, R.J.: Critical exponents for the loop erased self-avoiding walk by Monte Carlo methods. J. Stat. Phys. 59(1), 1–9 (1990)
Kakutani, S.: On Brownian motions in n-space. Proc. Imp. Acad. Tokyo 20, 648–652 (1944)
Kenyon, R.: The asymptotic determinant of the discrete Laplacian. Acta Math. 185(2), 239–286 (2000)
Kesten, H.: Hitting probabilities of random walks on \({\mathbb{Z}}^{d}\). Stoch. Process. Appl. 25, 165–184 (1987)
Kozma, G.: The scaling limit of loop-erased random walk in three dimensions. Acta Math. 199(1), 29–152 (2007)
Lawler, G.F.: Intersections of Random Walks. Birkhäuser, Boston (1991)
Lawler, G.F.: A self avoiding walk. Duke Math. J. 47, 655–694 (1980)
Lawler, G.F.: Hausdorff dimension of cut points for Brownian motion. Electron. J. Probab. 1, 1–20 (1996)
Lawler, G.F.: Loop-erased random walk. In: Bramson, M., Durrett, R. (eds.) Perplexing Problems in Probability. Progress in Probability, vol. 44, pp. 197–217. Birkhäuser Boston, Boston, MA (1999)
Lawler, G.F.: The probability that planar loop-erased random walk uses a given edge. Electron. J. Probab. 19, 1–13 (2014)
Lawler, G.F., Limic, V.: Random Walk: A Modern Introduction. Cambridge Studies in Advanced Mathematics, vol. 123. Cambridge University Press, Cambridge (2010)
Lawler, G.F., Schramm, O., Werner, W.: Conformal invariance of planar loop-erased random walks and uniform spanning trees. Ann. Probab. 32(1B), 939–995 (2004)
Lawler, G.F., Schramm, O., Werner, W.: Conformal restriction: the chordal case. J. Am. Math. Soc. 16(4), 917–955 (2003)
Lawler, G.F., Trujillo Ferreras, J.: Random walk loop soup. Trans. Am. Math. Soc. 359(2), 767–787 (2007)
Lawler, G.F., Werner, W.: The Brownian loop soup. Probab. Theory Relat. Fields 128(4), 565–588 (2004)
Le Jan, Y.: Markov Paths, Loops and Fields. Lecture Notes in Mathematics, vol. 2026. Springer, Heidelberg (2011)
Lévy, P.: Le mouvement brownien plan. Am. J. Math. 62, 487–550 (1940)
Masson, R.: The growth exponent for planar loop-erased random walk. Electron. J. Probab. 14, 1012–1073 (2009)
Pemantle, R.: Choosing a spanning tree for the integer lattice uniformly. Ann. Probab. 19(4), 1559–1574 (1991)
Schramm, O.: Scaling limits of loop-erased random walks and uniform spanning trees. Isr. J. Math. 118(1), 221–288 (2000)
Shiraishi, D.: Growth exponent for loop-erased random walk in three dimensions. Ann. Probab. (to appear)
Shiraishi, D.: Hausdorff dimension of the scaling limit of loop-erased random walk in three dimensions. Preprint, arXiv:1604.08091
Slade, G.: Self-avoiding walks. Math. Intell. 16(1), 29–35 (1994)
Symanzik, K.: Euclidean quantum field theory. Scuola internazionale di Fisica “Enrico Fermi” XLV, 152–223 (1969)
Sznitman, A.-S.: Topics in Occupation Times and Gaussian Free Fields. Zürich Lectures in Advanced Mathematics. European Mathematical Society, Zürich (2012)
Taylor, S.J.: Multiple points for the sample paths of the symmetric stable process. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete 5, 247–264 (1966)
Wilson, D.B.: Generating random spanning trees more quickly than the cover time. In: Proceedings of the Twenty-eighth Annual ACM Symposium on the Theory of Computing (Philadelphia, PA, 1996), pp. 296–303. ACM, New York (1996)
Wilson, D.B.: The dimension of loop-erased random walk in 3D. Phys. Rev. E 82(6), 062102 (2010)
Zhan, D.: Loop-erasure of plane Brownian motion. Commun. Math. Phys. 303(3), 709–720 (2011)
Acknowledgements
Enormous thanks go to Alain-Sol Sznitman for helpful discussions, encouragement, and fruitful comments. We also thank Yinshan Chang and Wendelin Werner for useful discussions and suggestions, and Yinshan Chang for a careful reading of the paper. This project was carried out while the second author was enjoying the hospitality of the Forschungsinstitut für Mathematik of the ETH Zürich and the Max Planck Institute for Mathematics in the Sciences. He wishes to thank these institutions. The research of the second author has been supported by the Japan Society for the Promotion of Science (JSPS). Finally, the second author thanks Hidemi Aihara for all her understanding and support.
Sapozhnikov, A., Shiraishi, D. On Brownian motion, simple paths, and loops. Probab. Theory Relat. Fields 172, 615–662 (2018). https://doi.org/10.1007/s00440-017-0817-6