1 Introduction

The study of stochastic processes in disordered media is an important aspect of modern probability. Models in this area for which extensive research has been conducted include the classical model of random walk in random environment, as well as random walks on random graphs, such as Galton–Watson trees and percolation clusters. Typical properties that one is interested in include: (i) recurrence/transience; (ii) laws of large numbers (i.e. the existence of a deterministic limiting velocity); (iii) conditions for ballisticity/sub-ballisticity; (iv) regularity (i.e. continuity, monotonicity or lack thereof) of attributes (e.g. the velocity) in terms of some underlying parameter; and (v) scaling limits.

Fig. 1 A simulation of a small finite piece of the walks \(X^{\scriptscriptstyle (0)},X^{\scriptscriptstyle (1)},X^{\scriptscriptstyle (2)}\) (shaded thinnest/lightest to thickest/darkest), in 2 dimensions with common biases \(p^{\scriptscriptstyle (i)}(e_1)=3/9\), and \(p^{\scriptscriptstyle (i)}(e)=2/9\) for \(e\ne e_1\)

In this paper, we tackle some of these issues for sequences of biased random walks \((X^{\scriptscriptstyle (i)})_{i \ge 0}\) on random graphs, where the initial graph is \(\mathcal {Z}^{\scriptscriptstyle (0)}=\mathbb {Z}^d\) and otherwise the graph \(\mathcal {Z}^{\scriptscriptstyle (i)}\) for the ith walk is the trace of the \((i-1)\)st walk (see Fig. 1). The sequence of biases is chosen so that each walk is transient; somewhat remarkably, this does not require the underlying drift of the ith walk to be oriented in the direction of transience of the initial walk (see Remark 2.3 for an elaboration of this point). By regeneration arguments, which require some care to take into account the multiple processes, we demonstrate the existence of deterministic limiting speeds; see Theorem 1 below for a precise statement of these results. Regarding the issue of ballisticity, we note that the initial walk \(X^{\scriptscriptstyle (0)}\), which has non-trivial bias in the direction \(e_1\), creates traps for subsequent walks. Moreover, except in trivial settings, the walk \(X^{\scriptscriptstyle (i)}\) does not visit every site of \(\mathcal {Z}^{\scriptscriptstyle (i)}\), so its trace \(\mathcal {Z}^{\scriptscriptstyle (i+1)}\) is a strict subset of \(\mathcal {Z}^{\scriptscriptstyle (i)}\), meaning that the walk \(X^{\scriptscriptstyle (i)}\) may delete some traps and create new ones. We will show that the effect of trapping can lead to zero speeds, and in particular we establish a sharp phase transition between ballisticity and sub-ballisticity of the walk \(X^{\scriptscriptstyle (i)}\); see Theorem 2 below. Finally, we exhibit conditions on the sequence of biases that ensure the limiting graph is or is not an infinite simple path; see Theorem 3. In certain cases (such as when the sequence of biases is constant) the law of this limit may be of independent interest.

Before introducing our model and results, let us briefly relate our work to other studies in which trapping has been observed for biased random walk on random graphs. As early as the 1980s, physicists observed that such a phenomenon might be relevant when the random graphs are percolation clusters, empirically demonstrating the non-monotonicity of the speed, and sub-ballisticity in the strong bias regime [3]. Mathematically, a phase transition between ballisticity and sub-ballisticity was first shown rigorously for the simpler model of random walk on supercritical Galton–Watson trees [15] (see also [5, 7, 9] for recent work concerning more detailed properties of such processes), and has since been confirmed to hold in the percolation setting [6, 14, 20]. A relatively up-to-date survey of these developments is given in [4]. Qualitatively, our results match those established for Galton–Watson trees and percolation clusters, and, although we do not confirm it rigorously, we also empirically observe non-monotonic behaviour of the speed that is similar to the behaviour expected for these other models. Moreover, whilst our graphs are more complex than trees, in the sense that there is not a unique shortest path between vertices and the traps are less obviously defined, the model is still more tractable than the percolation case. As a result, we are able to give a more concrete expression for the critical point that separates the ballistic and sub-ballistic phases, which we are even able to evaluate explicitly in examples. Finally, we note that, in another related work, biased random walk on an unbiased random walk has been shown to exhibit localisation on a logarithmic scale [8].

1.1 The model

Let \((\varvec{p}^{\scriptscriptstyle (i)})_{i \in \mathbb {Z}_+\cup \{\infty \}}\) be a sequence of probability distributions on the standard basis vectors \(\{\pm e_j:j \in [d]\}\) in \(\mathbb {Z}^d\), where \([d]:=\{1,2,\dots ,d\}\) and \(d\ge 2\). Given such a probability distribution \(\varvec{p}\), and a connected subgraph \(\mathcal {Z}\) of \(\mathbb {Z}^d\) that contains the origin \(0\in {\mathbb {Z}}^d\), we define a \(\varvec{p}\)-random walk \(X=(X_n)_{n \in \mathbb {Z}_+}\) on \(\mathcal {Z}\) to be the discrete-time Markov chain starting at 0 with transition probabilities

$$\begin{aligned} P^{\mathcal {Z}}(X_{n+1}=x+e|X_n=x)={\left\{ \begin{array}{ll} \frac{p(e)}{\sum _{e':(x,x+e')\in \mathcal {Z}}p(e')}, &{}\quad \text { if }(x,x+e)\in \mathcal {Z},\\ 0, &{}\quad \text { otherwise}. \end{array}\right. } \end{aligned}$$

Let \(\mathcal {Z}^{\scriptscriptstyle (0)}=\mathbb {Z}^d\), and let \(X^{\scriptscriptstyle (0)}\) be a \(\varvec{p}^{\scriptscriptstyle (0)}\)-random walk on \(\mathcal {Z}^{\scriptscriptstyle (0)}\), i.e. a simple random walk on \(\mathbb {Z}^d\) with step distribution \(\varvec{p}^{\scriptscriptstyle (0)}\). Given a walk \(X^{\scriptscriptstyle (i)}\) taking values in \(\mathbb {Z}^d\), we let \(\mathcal {Z}^{\scriptscriptstyle (i+1)}=(\mathcal {V}^{\scriptscriptstyle (i+1)},\mathcal {E}^{\scriptscriptstyle (i+1)})\) be the connected graph with vertex set

$$\begin{aligned} \mathcal {V}^{\scriptscriptstyle (i+1)}=\left\{ X^{\scriptscriptstyle (i)}_n:n \in \mathbb {Z}_+\right\} , \end{aligned}$$

and (undirected) edge set

$$\begin{aligned} \mathcal {E}^{\scriptscriptstyle (i+1)}=\left\{ (X^{\scriptscriptstyle (i)}_n,X^{\scriptscriptstyle (i)}_{n+1}):n \in \mathbb {Z}_+\right\} . \end{aligned}$$

If \(\varvec{p}^{\scriptscriptstyle (0)}\) is biased (\({p}^{\scriptscriptstyle (0)}(e)\ne {p}^{\scriptscriptstyle (0)}(-e)\) for some \(e\)) and non-trivial (\({p}^{\scriptscriptstyle (0)}(e)\ne 1\) for every \(e\)), then the walk \(X^{\scriptscriptstyle (0)}\) is transient (in fact ballistic) and the graph \(\mathcal {Z}^{\scriptscriptstyle (1)}\) is random.

The above procedure can now be iterated, leading to a sequence of walks \((X^{\scriptscriptstyle (i)})_{i \in \mathbb {Z}_+}\) (with \(X^{\scriptscriptstyle (i)}\) being a \(\varvec{p}^{\scriptscriptstyle (i)}\)-random walk on \(\mathcal {Z}^{\scriptscriptstyle (i)}\) for each i) that are conditionally independent given the graphs \(\mathcal {Z}^{\scriptscriptstyle (i)}\) (that is, given \(\mathcal {Z}^{\scriptscriptstyle (i)}\), \(X^{\scriptscriptstyle (i)}\) is conditionally independent of \(X^{\scriptscriptstyle (j)}\) for \(j<i\)). The sequence of graphs is decreasing (\(\mathcal {Z}^{\scriptscriptstyle (i+1)}\subseteq \mathcal {Z}^{\scriptscriptstyle (i)}\) for each i), and as such we can define

$$\begin{aligned} \mathcal {Z}^{\scriptscriptstyle (\infty )}:=\bigcap _{i \in \mathbb {Z}_+}\mathcal {Z}^{\scriptscriptstyle (i)}. \end{aligned}$$

We denote by \(X^{\scriptscriptstyle (\infty )}\) a \(\varvec{p}^{\scriptscriptstyle (\infty )}\)-random walk on \(\mathcal {Z}^{\scriptscriptstyle (\infty )}\). We will suppose the sequence of walks \((X^{\scriptscriptstyle (i)})_{i\in \mathbb {Z}_+\cup \{\infty \}}\) is defined on a probability space \((\Omega , \mathcal {F}, \mathbb {P})\).
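To make the construction concrete, the following is a minimal simulation sketch of the first two walks for \(d=2\) (our illustration, not an implementation from the paper; all function names and parameter choices are ours). Note that a finite run of \(X^{\scriptscriptstyle (0)}\) only produces a finite approximation of \(\mathcal {Z}^{\scriptscriptstyle (1)}\), so the second walk is run for fewer steps to keep it well inside the sampled trace.

```python
# Minimal sketch of the model of Sect. 1.1 (illustrative only): X^(0) moves on
# Z^2 with step law p^(0); X^(1) is then a p^(1)-walk on the trace of X^(0),
# its step weights renormalised over the edges available at the current vertex.
import random

def run_walk(p, edges, n_steps):
    """Run a p-walk for n_steps; edges=None means the full lattice Z^2.
    Returns the path and the (undirected) edge set of the trace."""
    x, path, trace = (0, 0), [(0, 0)], set()
    for _ in range(n_steps):
        moves = [e for e in p
                 if edges is None
                 or frozenset({x, (x[0] + e[0], x[1] + e[1])}) in edges]
        e = random.choices(moves, [p[m] for m in moves])[0]  # renormalisation
        y = (x[0] + e[0], x[1] + e[1])
        trace.add(frozenset({x, y}))
        path.append(y)
        x = y
    return path, trace

# The common biases of Fig. 1: p(e1) = 3/9 and p(e) = 2/9 otherwise.
p = {(1, 0): 3/9, (-1, 0): 2/9, (0, 1): 2/9, (0, -1): 2/9}
path0, trace0 = run_walk(p, None, 10**5)    # X^(0) on Z^2
path1, trace1 = run_walk(p, trace0, 10**4)  # X^(1) on the sampled trace
print(path0[-1], path1[-1])
```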

Let \(\delta ^{\scriptscriptstyle (i)}_j=p^{\scriptscriptstyle (i)}(e_j)-p^{\scriptscriptstyle (i)}(-e_j)\), and \(\delta ^{\scriptscriptstyle (i)}=(\delta ^{\scriptscriptstyle (i)}_j)_{j \in [d]}\). This vector represents the drift of a \(\varvec{p}^{\scriptscriptstyle (i)}\)-random walk on \({\mathbb {Z}}^d\). In particular, \({\mathbb {P}}\)-a.s.,

$$\begin{aligned} v^{\scriptscriptstyle (0)}:=\lim _{n \rightarrow \infty }n^{-1}X^{\scriptscriptstyle (0)}_n=\delta ^{\scriptscriptstyle (0)}. \end{aligned}$$

Moreover, under the assumption that \(p^{\scriptscriptstyle (i)}(e)>0\) for every e, let \(\ell ^{\scriptscriptstyle (i)}={\hat{\ell }}^{\scriptscriptstyle (i)}/\Vert {\hat{\ell }}^{\scriptscriptstyle (i)}\Vert \), where \(\Vert \cdot \Vert \) denotes the Euclidean norm, and \({\hat{\ell }}^{\scriptscriptstyle (i)}=({\hat{\ell }}^{\scriptscriptstyle (i)}_j)_{j \in [d]}\in \mathbb {R}^d\) is defined by

$$\begin{aligned} {\hat{\ell }}^{\scriptscriptstyle (i)}_j=\log \left( \frac{p^{\scriptscriptstyle (i)}(e_j)}{p^{\scriptscriptstyle (i)}(-e_j)}\right) . \end{aligned}$$

In general, \(\ell ^{\scriptscriptstyle (i)}\ne \delta ^{\scriptscriptstyle (i)}/\Vert \delta ^{\scriptscriptstyle (i)}\Vert \). However, if \(\delta ^{\scriptscriptstyle (i)}\ne 0\) then

$$\begin{aligned} \ell ^{\scriptscriptstyle (i)}\cdot \delta ^{\scriptscriptstyle (i)}&=\frac{1}{\Vert {\hat{\ell }}^{\scriptscriptstyle (i)}\Vert }\sum _{j=1}^d [\log p^{\scriptscriptstyle (i)}(e_j) - \log p^{\scriptscriptstyle (i)}(-e_j)][p^{\scriptscriptstyle (i)}(e_j) - p^{\scriptscriptstyle (i)}(-e_j)]>0 \end{aligned}$$

since all of the summands are non-negative and at least one is strictly positive.

We henceforth assume the following.

Condition 1

The \((\varvec{p}^{\scriptscriptstyle (i)})_{i \in \mathbb {Z}_+\cup \{\infty \}}\) are such that

  1. (a)

    \(\delta ^{\scriptscriptstyle (0)}_j\ge 0\) for each \(j\in [d]\), and \(\delta ^{\scriptscriptstyle (0)}_1>0\), and

  2. (b)

    \(p^{\scriptscriptstyle (i)}(e)>0\) for all \(e \in \{\pm e_j:j \in [d]\}\), and

  3. (c)

    for each \(i\in \mathbb {N}\cup \{\infty \}\), \(\delta ^{\scriptscriptstyle (0)}\cdot \ell ^{\scriptscriptstyle (i)}>0\).

We lose no generality in assuming Condition 1(a); it is included solely for the purpose of fixing a direction of transience for the walk \(X^{\scriptscriptstyle (0)}\). Condition 1(b) ensures that the walks always have an available move. Condition 1(c) is the condition required to ensure that all subsequent walks are also transient. We highlight that it involves the vector \(\ell ^{\scriptscriptstyle (i)}\) rather than \(\delta ^{\scriptscriptstyle (i)}\) (the importance of this distinction is discussed in Remark 2.3). Note that, given the other two conditions, it is also sharp (see Lemma 2.2).

A further advantage to assuming Condition 1(b) is that it allows us to express the laws of the processes in terms of conductance networks (see, for example, [2, 12, 16]). More precisely, for each \(i\in \mathbb {Z}_+\cup \{\infty \}\), let

$$\begin{aligned} c_{(i),j}=p^{\scriptscriptstyle (i)}(-e_j),\qquad \beta _{\scriptscriptstyle (i)}=\exp \{\Vert {\hat{\ell }}^{\scriptscriptstyle (i)}\Vert \}\ge 1. \end{aligned}$$

Then for each \(x=(x_1,x_2,\dots ,x_d),y=(y_1,y_2,\dots ,y_d)\in {\mathbb {Z}}^d\) with \(\Vert x-y\Vert =1\), define

$$\begin{aligned} c^{\scriptscriptstyle (i)}(x,y)=\left( \prod _{j=1}^d c_{(i),j}^{|y_j-x_j|}\right) \beta _{\scriptscriptstyle (i)}^{\scriptscriptstyle (x\vee y)\cdot \ell ^{\scriptscriptstyle (i)}}, \end{aligned}$$
(1)

where \(x\vee y =(x_j\vee y_j)_{j\in [d]}\). The quantity \(c^{\scriptscriptstyle (i)}(x,y)\) is called the conductance of the edge (x, y) for \(\varvec{p}^{\scriptscriptstyle (i)}\). Thus \(\ell ^{\scriptscriptstyle (i)}\) (which is non-zero under Condition 1) describes the direction in which the conductances grow most rapidly, and \(\beta _{\scriptscriptstyle (i)}\) (which is strictly greater than one under Condition 1) describes the rate of increase. Let \(c^{\scriptscriptstyle (i)}(x)=\sum _{y:(x,y)\in \mathcal {E}^{\scriptscriptstyle (i)}} c^{\scriptscriptstyle (i)}(x,y)\), and, to simplify notation, write \(P^{\scriptscriptstyle (i)}\) for \(P^{\mathcal {Z}^{\scriptscriptstyle (i)}}\). It is straightforward to check that, given \(\mathcal {Z}^{\scriptscriptstyle (i)}\),

$$\begin{aligned} P^{\scriptscriptstyle (i)}(X^{\scriptscriptstyle (i)}_{n+1}=x+e|X^{\scriptscriptstyle (i)}_n=x)={\left\{ \begin{array}{ll} \frac{c^{\scriptscriptstyle (i)}(x,x+e)}{c^{\scriptscriptstyle (i)}(x)}, &{}\quad \text { if }(x,x+e)\in \mathcal {Z}^{\scriptscriptstyle (i)},\\ 0, &{}\quad \text { otherwise}. \end{array}\right. } \end{aligned}$$
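To see where this representation comes from, note that \(x\vee (x+e_j)=x+e_j\) and \(x\vee (x-e_j)=x\), whilst \(\beta _{\scriptscriptstyle (i)}^{e_j\cdot \ell ^{\scriptscriptstyle (i)}}=\exp \{{\hat{\ell }}^{\scriptscriptstyle (i)}_j\}\), so that (1) gives

$$\begin{aligned} \frac{c^{\scriptscriptstyle (i)}(x,x+e_j)}{c^{\scriptscriptstyle (i)}(x,x-e_j)}=\beta _{\scriptscriptstyle (i)}^{e_j\cdot \ell ^{\scriptscriptstyle (i)}}=\frac{p^{\scriptscriptstyle (i)}(e_j)}{p^{\scriptscriptstyle (i)}(-e_j)},\qquad \frac{c^{\scriptscriptstyle (i)}(x,x+e_j)}{c^{\scriptscriptstyle (i)}(x,x+e_k)}=\frac{c_{(i),j}\exp \{{\hat{\ell }}^{\scriptscriptstyle (i)}_j\}}{c_{(i),k}\exp \{{\hat{\ell }}^{\scriptscriptstyle (i)}_k\}}=\frac{p^{\scriptscriptstyle (i)}(e_j)}{p^{\scriptscriptstyle (i)}(e_k)}. \end{aligned}$$

In particular, on the full lattice \(\mathbb {Z}^d\) the walk associated with the conductances \(c^{\scriptscriptstyle (i)}\) has step distribution exactly \(\varvec{p}^{\scriptscriptstyle (i)}\).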

1.2 Main results

We now introduce the main results that were briefly outlined above. Firstly, we establish directional transience and existence of limiting velocities.

Theorem 1

Assume Condition 1. Then the following hold \(\mathbb {P}\)-a.s.:

  1. (a)

    for each \(i\in {\mathbb {Z}}_+\cup \{\infty \}\), \(X^{\scriptscriptstyle (i)}_n\cdot \ell \rightarrow \infty \) as \(n\rightarrow \infty \) for every \(\ell \) such that \(\ell \cdot \delta ^{\scriptscriptstyle (0)}>0\);

  2. (b)

    for each \(i\in {\mathbb {Z}}_+\), there exists a deterministic \(\kappa _{\scriptscriptstyle (i)}\in [0,\infty )\) such that

    $$\begin{aligned} v^{\scriptscriptstyle (i)}:=\lim _{n \rightarrow \infty } n^{-1}X^{\scriptscriptstyle (i)}_n=\kappa _{\scriptscriptstyle (i)}\delta ^{\scriptscriptstyle (0)}. \end{aligned}$$

Remark 1.1

Condition 1 is sufficient, but not necessary, to obtain the conclusions of Theorem 1. Indeed, if \(d=2\), and, for each i, \(p^{\scriptscriptstyle (i)}(e_1)>0\) and \(p^{\scriptscriptstyle (i)}(-e_1)=0\) (i.e. \(-e_1\) is a forbidden direction), and \(p^{\scriptscriptstyle (i)}(e_2), p^{\scriptscriptstyle (i)}(-e_2)>0\), then Condition 1(b) is violated but the conclusions of Theorem 1 are relatively simple to obtain.

There are many natural questions one can ask about \(\kappa _{\scriptscriptstyle (i)}\), and we do not know the answers to many of them (see Sect. 1.3 for various open problems). However, our second main result gives sharp conditions for ballisticity/sub-ballisticity (i.e. whether or not it is the case that \(\kappa _{\scriptscriptstyle (i)}>0\)). To state this result, we need to introduce some further notation. Let

$$\begin{aligned} \varphi _{\scriptscriptstyle (i)}(t)=\mathbb {E}\left[ \exp \{-tX^{\scriptscriptstyle (0)}_1\cdot \ell ^{\scriptscriptstyle (i)}\}\right] . \end{aligned}$$
(2)

It is then the case that there is a unique positive solution \(t_{\scriptscriptstyle (i)}\) to \(\varphi _{\scriptscriptstyle (i)}(t)=1\) (see Lemma 4.1 below), and we set

$$\begin{aligned} \alpha _{\scriptscriptstyle (i)}=\exp \{t_{\scriptscriptstyle (i)}\}. \end{aligned}$$
(3)

Theorem 2

Assume Condition 1. Then for each \(i\in {\mathbb {N}}\):

  1. (a)

    If \(\beta _{\scriptscriptstyle (i)}<\alpha _{\scriptscriptstyle (i)}\), then \(X^{\scriptscriptstyle (i)}\) is ballistic, i.e. \(v^{\scriptscriptstyle (i)}\ne 0\).

  2. (b)

    If \(\beta _{\scriptscriptstyle (i)}>\alpha _{\scriptscriptstyle (i)}\), then \(X^{\scriptscriptstyle (i)}\) is sub-ballistic, i.e. \(v^{\scriptscriptstyle (i)}=0\).

Figure 2 shows simulations of \(v^{\scriptscriptstyle (1)}\cdot e_1\) for walks on \(\mathbb {Z}^2\) where \(p^{\scriptscriptstyle (0)}(e_1)=2/5\) and \(p^{\scriptscriptstyle (0)}(e)=1/5\) otherwise, while \(p^{\scriptscriptstyle (1)}(e_1)=r/(r+3)\) and \(p^{\scriptscriptstyle (1)}(e)=1/(r+3)\) otherwise, as a function of \(r \in [1,2.5]\). Theorem 2 shows that \(v^{\scriptscriptstyle (1)}\cdot e_1=0\) when \(r>2\) in this case. While non-monotonicity in r is supported by the figure, the variability between realisations shows that \(5\times 10^7\)-step walks are insufficient to identify a phase transition by simulation.

Fig. 2 Four independent simulations (with their average in bold) of \(v^{\scriptscriptstyle (1)}\cdot e_1\) (for walks on \(\mathbb {Z}^2\) where \(p^{\scriptscriptstyle (0)}(e_1)=2/5\) and \(p^{\scriptscriptstyle (0)}(e)=1/5\) otherwise, and \(p^{\scriptscriptstyle (1)}(e_1)=r/(r+3)\) and \(p^{\scriptscriptstyle (1)}(e)=1/(r+3)\) otherwise), as a function of \(r \in [1,3]\). Each point in the graph is calculated based on the endpoint of a \(5\times 10^7\)-step \(p^{\scriptscriptstyle (1)}\)-walk (on the trace of a \(p^{\scriptscriptstyle (0)}\)-walk)
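In this setting the critical value \(r=2\) can also be recovered numerically from (2)–(3): here \({\hat{\ell }}^{\scriptscriptstyle (1)}=(\log r,0)\), so \(\ell ^{\scriptscriptstyle (1)}=e_1\) and \(\beta _{\scriptscriptstyle (1)}=r\), and Theorem 2 reduces to comparing \(r\) with \(\alpha _{\scriptscriptstyle (1)}\). The following is a minimal sketch of the computation (ours, and assuming SciPy is available).

```python
# Solve phi_(1)(t) = 1, cf. (2)-(3), for the Fig. 2 setting: p^(0)(e1) = 2/5
# and p^(0)(e) = 1/5 otherwise, with l^(1) = e1.
import math
from scipy.optimize import brentq

p0 = {(1, 0): 2/5, (-1, 0): 1/5, (0, 1): 1/5, (0, -1): 1/5}

def phi(t):
    # phi_(1)(t) = E[exp(-t X^(0)_1 . e1)]
    return sum(p * math.exp(-t * e[0]) for e, p in p0.items())

# phi(0) = 1, phi is convex and phi'(0) = -delta^(0) . e1 < 0, so the unique
# positive root can be bracketed away from zero.
t1 = brentq(lambda t: phi(t) - 1.0, 1e-6, 10.0)
print(math.exp(t1))  # alpha_(1) = 2.0: here phi(t) = 1 reads (1/5)e^t + (2/5)e^-t = 3/5
```

(Indeed, solving \((2/5)\mathrm {e}^{-t}+(1/5)\mathrm {e}^{t}+2/5=1\) by hand gives \(\mathrm {e}^{t_{\scriptscriptstyle (1)}}=2\), so \(\alpha _{\scriptscriptstyle (1)}=2\), in agreement with the claim that \(X^{\scriptscriptstyle (1)}\) is ballistic for \(r<2\) and sub-ballistic for \(r>2\).)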

As noted above, one interesting feature of our result is that we obtain an explicit representation for the critical point for our model defined on \(\mathbb {Z}^d\). As we will describe in detail in Sects. 4 and 5, the parameter \(\alpha _{\scriptscriptstyle (i)}\) is related to the decay rate of the probability that the random walk \(X^{\scriptscriptstyle (0)}\) backtracks a certain distance in direction \(\ell ^{\scriptscriptstyle (i)}\), an event which can potentially create a trap for the ith walk. For further intuition about the relevance of this parameter, see the discussion prior to the proof of Lemma 5.1, and the detailed construction of a trap for \(X^{\scriptscriptstyle (i)}\) within the proof. To illustrate the simplicity of the condition in Theorem 2, we next present a canonical example, in which it is even possible to compute the parameters \(\alpha _{\scriptscriptstyle (i)}\) analytically.

Example 1.2

Suppose that for each \(i\ge 0\) there exist \(k_i\in [d]\) and \(\gamma _i>1\) such that

$$\begin{aligned} p^{\scriptscriptstyle (i)}(e)= \frac{1+(\gamma _i-1) \mathbf {1}_{\{e\in \{e_1,\dots ,e_{k_i}\}\}}}{2d+k_i(\gamma _i-1)}. \end{aligned}$$

We then have that

$$\begin{aligned} \delta ^{\scriptscriptstyle (i)}_j=\frac{\gamma _i-1}{2d+k_i(\gamma _i-1)}\mathbf {1}_{\{j\le k_i\}}, \qquad {\hat{\ell }}^{\scriptscriptstyle (i)}_j=\log (\gamma _i)\mathbf {1}_{\{j\le k_i\}}, \end{aligned}$$

and the conductances are given by (1) with

$$\begin{aligned} c_{(i),j}=1, \qquad \log \beta _{\scriptscriptstyle (i)}=\sqrt{k_i}\log \gamma _i. \end{aligned}$$

In this case, it is straightforward to check that Condition 1 holds, and to deduce that

$$\begin{aligned} v^{\scriptscriptstyle (0)}:=\lim _{n \rightarrow \infty } \frac{X^{\scriptscriptstyle (0)}_n}{n}=\delta ^{\scriptscriptstyle (0)}=\frac{\gamma _0-1}{2d+k_0(\gamma _0-1)}(e_1+e_2+\dots +e_{k_0}). \end{aligned}$$

We moreover observe that the value of \({\alpha }_{\scriptscriptstyle (i)}\) can be computed explicitly, thus yielding completely transparent criteria for ballisticity/sub-ballisticity. Indeed, the equation \(\varphi _{\scriptscriptstyle (i)}(t)=1\) can be rewritten as

$$\begin{aligned}&{k_i\mathrm {e}^{t/\sqrt{k_i}}+\mathrm {e}^{-t/\sqrt{k_i}}\left( k_i+(k_0\wedge k_i)(\gamma _0-1)\right) +2\left( d-(k_0\vee k_i)\right) +\left( (k_0\vee k_i) -k_i\right) (\gamma _0+1)}\\&\quad =2d+k_0(\gamma _0-1). \end{aligned}$$

Multiplying through by \(\mathrm {e}^{t/\sqrt{k_i}}\) gives a quadratic in \(\mathrm {e}^{t/\sqrt{k_i}}\), which is readily solved to give that

$$\begin{aligned} \log \alpha _{\scriptscriptstyle (i)}=t_{\scriptscriptstyle (i)}=\sqrt{k_i}\log \left( 1+\frac{k_i\wedge k_0}{k_i}\left( \gamma _0-1\right) \right) . \end{aligned}$$

(The other root of the quadratic is obtained by setting \(t=0\).) Thus, the ith walk is ballistic if

$$\begin{aligned} k_i(\gamma _i-1)<(k_i\wedge k_0)(\gamma _0-1), \end{aligned}$$
(4)

and sub-ballistic if the reverse (strict) inequality is true. In particular, if \(k_i\le k_0\), then the condition (4) simplifies further to \(\gamma _i<\gamma _0\).
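The closed form above is easily sanity-checked numerically: the following minimal sketch (ours; the parameter values are arbitrary illustrations) plugs \(t_{\scriptscriptstyle (i)}\) into \(\varphi _{\scriptscriptstyle (i)}\) from (2) and confirms that the value 1 is returned.

```python
# Check that t_(i) = sqrt(k_i) log(1 + (k_i ^ k_0)(gamma_0 - 1)/k_i) solves
# phi_(i)(t) = 1 in Example 1.2, where phi_(i)(t) = E[exp(-t X^(0)_1 . l^(i))].
import math

def phi(t, d, k0, g0, ki):
    Z = 2 * d + k0 * (g0 - 1)        # normalisation of p^(0)
    total = 0.0
    for j in range(d):
        lj = 1 / math.sqrt(ki) if j < ki else 0.0   # jth component of l^(i)
        pj = (g0 if j < k0 else 1.0) / Z            # p^(0)(e_j)
        qj = 1.0 / Z                                # p^(0)(-e_j)
        total += pj * math.exp(-t * lj) + qj * math.exp(t * lj)
    return total

d, k0, g0, ki = 4, 2, 3.0, 3   # illustrative parameters with k_i > k_0
t = math.sqrt(ki) * math.log(1 + min(ki, k0) * (g0 - 1) / ki)
print(phi(t, d, k0, g0, ki))   # 1.0 up to rounding error
```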

Thus far our results have depended only on relations between \(\varvec{p}^{\scriptscriptstyle (i)}\) and \(\varvec{p}^{\scriptscriptstyle (0)}\) for individual i. Our final main result concerns the limiting graph \(\mathcal {Z}^{\scriptscriptstyle (\infty )}\), and depends on the entire sequence \((\varvec{p}^{\scriptscriptstyle (i)})_{i \in \mathbb {Z}_+}\).

Theorem 3

Assume Condition 1.

  1. (a)

    If there exists some \(\varvec{p}^{\scriptscriptstyle (i)}\) such that \(\varvec{p}^{\scriptscriptstyle (i')}=\varvec{p}^{\scriptscriptstyle (i)}\) for infinitely many \(i'\), then \(\mathcal {Z}^{\scriptscriptstyle (\infty )}\) is almost-surely a simple path.

  2. (b)

    If for some \(e\ne e_1\), and all \(c\in (0,\infty )\),

    $$\begin{aligned} \sum _{i=1}^\infty \frac{c^{\scriptscriptstyle (i)}(0,e_1)}{c^{\scriptscriptstyle (i)}(0,e)}\min \left\{ 1,(\beta _{\scriptscriptstyle (i)}^{c\delta ^{\scriptscriptstyle (0)}\cdot \ell ^{\scriptscriptstyle (i)}}-1)\beta _{\scriptscriptstyle (i)}^c\right\} <\infty , \end{aligned}$$

    then \(\mathbb {P}(\mathcal {Z}^{\scriptscriptstyle (\infty )} \text { is not simple})>0\).

The following is an example of a sequence for which all but finitely many \(X^{\scriptscriptstyle (i)}\) are ballistic in direction \(e_1\), but \(\mathcal {Z}^{\scriptscriptstyle (\infty )}\) is almost-surely not a simple path. Note that the random walks become increasingly symmetric as \(i\rightarrow \infty \), and as a result spend increasingly long times in finite sections of the graph.

Example 1.3

Let \(d=2\), and \(\varvec{p}^{\scriptscriptstyle (0)}\) be chosen so that \(\delta ^{\scriptscriptstyle (0)}=\frac{1}{2} e_1\). Set

$$\begin{aligned} p^{\scriptscriptstyle (i)}(e_1)=\frac{1}{4}+\varepsilon _i,\qquad p^{\scriptscriptstyle (i)}(-e_1)=\frac{1}{4}-\varepsilon _i,\qquad p^{\scriptscriptstyle (i)}(e_2)=p^{\scriptscriptstyle (i)}(-e_2)=\frac{1}{4}, \end{aligned}$$

where \(\varepsilon _i\in (0,\frac{1}{4})\) for each i, and \(\sum _{i=1}^{\infty }\varepsilon _i<\infty \). Observe that \(\ell ^{\scriptscriptstyle (i)}=e_1\) (for each i) since

$$\begin{aligned} {\hat{\ell }}^{\scriptscriptstyle (i)}=\left( \log ((1+4\varepsilon _i)/(1-4\varepsilon _i)),0\right) , \end{aligned}$$

so that \(\varphi _{\scriptscriptstyle (i)}\), and hence \(\alpha _{\scriptscriptstyle (i)}=\alpha >1\), does not depend on i. Moreover, \(\beta _{\scriptscriptstyle (i)}=(1+4\varepsilon _i)/(1-4\varepsilon _i)\rightarrow 1\), so Theorem 2(a) applies for all but finitely many i. With \(e=e_2\), the summand in Theorem 3(b) is equal to

$$\begin{aligned} 4\left( \frac{1}{4}+\varepsilon _i\right) \left( \left( \frac{1+4\varepsilon _i}{1-4\varepsilon _i}\right) ^{c/2}-1\right) \left( \frac{1+4\varepsilon _i}{1-4\varepsilon _i}\right) ^{c}, \end{aligned}$$

which is asymptotic to \(4c\varepsilon _i\) as \(i\rightarrow \infty \), and hence Theorem 3(b) applies.
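To verify the final asymptotic claim, observe that as \(i\rightarrow \infty \),

$$\begin{aligned} \left( \frac{1+4\varepsilon _i}{1-4\varepsilon _i}\right) ^{c/2}=\left( 1+8\varepsilon _i+O(\varepsilon _i^2)\right) ^{c/2}=1+4c\varepsilon _i+O(\varepsilon _i^2), \end{aligned}$$

whilst \(4(\frac{1}{4}+\varepsilon _i)\rightarrow 1\) and \(((1+4\varepsilon _i)/(1-4\varepsilon _i))^{c}\rightarrow 1\), so the summand is indeed \(4c\varepsilon _i(1+o(1))\), which is summable since \(\sum _{i}\varepsilon _i<\infty \).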

In our second example, we illustrate an alternative way in which it might transpire that \({\mathcal {Z}}^{\scriptscriptstyle (\infty )}\) is not simple, namely when there is an increasingly large drift into traps.

Example 1.4

Let \(d=2\), and \(\varvec{p}^{\scriptscriptstyle (0)}\) be chosen so that \(\delta ^{\scriptscriptstyle (0)}=\frac{1}{2} e_1\). Set

$$\begin{aligned} p^{\scriptscriptstyle (i)}(e_1)=\frac{1}{2}\varepsilon _i,\qquad p^{\scriptscriptstyle (i)}(-e_1)=\frac{1}{4}\varepsilon _i,\qquad p^{\scriptscriptstyle (i)}(-e_2)=\frac{1}{4}\varepsilon _i,\qquad p^{\scriptscriptstyle (i)}(e_2)=1-\varepsilon _i, \end{aligned}$$

where \(\varepsilon _i\in (0,1)\) for each i, and \(\sum _{i=1}^{\infty }\varepsilon _i<\infty \). Here, \(\ell ^{\scriptscriptstyle (i)}\rightarrow e_2\), \(\beta _{\scriptscriptstyle (i)} \rightarrow \infty \) and \(\alpha _{\scriptscriptstyle (i)}\rightarrow \alpha \ge 1\) as \(i\rightarrow \infty \), but \(\ell ^{\scriptscriptstyle (i)}\cdot \delta ^{\scriptscriptstyle (0)}>0\) for each i. Thus, all walks are transient and all but finitely many are sub-ballistic. We then have that, again with \(e=e_2\), the summand in Theorem 3(b) is bounded above by

$$\begin{aligned} \frac{\varepsilon _i}{2(1-\varepsilon _i)}, \end{aligned}$$

which is summable.

1.3 Open problems

We now collect some open problems that arise from the present work.

Open Problem 1

The regeneration techniques we apply to deduce the existence of velocities in Theorem 1 do not immediately extend to the \(i=\infty \) case. Prove the existence of a deterministic velocity in this case.

Open Problem 2

Theorem 2 gives a phase transition for ballisticity, but it does not resolve the behaviour of the walk at the phase transition. If we assume Condition 1, then is it the case that \(X^{\scriptscriptstyle (i)}\) is sub-ballistic when \(\beta _{\scriptscriptstyle (i)}=\alpha _{\scriptscriptstyle (i)}\)?

Open Problem 3

As has been studied for Galton–Watson trees and percolation clusters, it is natural to consider the scaling limit of the walk \(X^{\scriptscriptstyle (i)}\). In particular, we would expect that the asymptotic behaviour of the process is determined by the parameter \(\gamma _{\scriptscriptstyle (i)}:=\log \alpha _{\scriptscriptstyle (i)}/\log \beta _{\scriptscriptstyle (i)}\). Specifically, if \(\gamma _{\scriptscriptstyle (i)}>2\), then we would expect to see a central limit theorem for \(X^{\scriptscriptstyle (i)}\cdot \delta ^{\scriptscriptstyle (0)}\) around its mean behaviour. For \(\gamma _{\scriptscriptstyle (i)}\in (1,2)\), we would expect to see fluctuations related to a \(\gamma _{\scriptscriptstyle (i)}\)-stable distribution around the mean, and for \(\gamma _{\scriptscriptstyle (i)}\in (0,1)\) we would expect that \(X^{\scriptscriptstyle (i)}\cdot \delta ^{\scriptscriptstyle (0)}\) behaves like the inverse of a \(\gamma _{\scriptscriptstyle (i)}\)-stable subordinator. Moreover, in the latter cases, due to a lattice effect that is also seen in other models, it might be anticipated that the relevant scaling limits only hold along subsequences. See [7] for recent progress in the Galton–Watson tree case, and [4] for a discussion of results and conjectures in the percolation case.

Open Problem 4

How does the velocity \(v^{\scriptscriptstyle (i)}\) (or equivalently speed \(\kappa _{\scriptscriptstyle (i)}\)) vary as a function of \(\varvec{p}^{\scriptscriptstyle (i)}\)? Is it continuous? Where is it maximised? Is it non-monotonic and unimodal as a function of \(\beta _{\scriptscriptstyle (i)}\) (for other parameters fixed)?

Open Problem 5

Perpendicular to the previous question, one might consider how \(v^{\scriptscriptstyle (i)}\) varies with i. To give one concrete question in this direction, suppose that \(X^{\scriptscriptstyle (1)}\) is ballistic, and set \(\varvec{p}^{\scriptscriptstyle (i)}=\varvec{p}^{\scriptscriptstyle (1)}\) for every \(i\ge 2\). Is it the case that the speeds are increasing in i?

Open Problem 6

We believe that solving these problems would be simpler on trees. To what extent is it possible to answer any of the above questions if the original walk has as its state space a regular tree or a supercritical Galton–Watson tree conditioned to survive, for example?

Open Problem 7

Theorem 3 gives a sufficient criterion for \({\mathcal {Z}}^{\scriptscriptstyle (\infty )}\) to almost-surely be a simple path, and a sufficient criterion for this to fail with positive probability. Is it possible to give a sharp criterion for simplicity of \(\mathcal {Z}^{\scriptscriptstyle (\infty )}\) in terms of basic properties of the transition probabilities \(\varvec{p}^{\scriptscriptstyle (i)}\), \(i\in {\mathbb {Z}}_+\)?

Open Problem 8

If \(\varvec{p}^{\scriptscriptstyle (i)}=\varvec{p}^{\scriptscriptstyle (0)}\) for every i, then \({\mathcal {Z}}^{\scriptscriptstyle (\infty )}\) is a simple path by Theorem 3(a). Is there a relationship between \({\mathcal {Z}}^{\scriptscriptstyle (\infty )}\) and the law of the loop-erasure of a \(\varvec{p}^{\scriptscriptstyle (0)}\)-random walk?

1.4 Organisation and notational conventions

The remainder of the article is organised as follows. In Sect. 2, we characterise recurrence and transience, proving Theorem 1(a) in particular. This is followed in Sect. 3 by a proof of Theorem 1(b), namely the existence of deterministic velocities, which utilises a particular regeneration structure for the multiple walks. Sections 4 and 5 contain the proof of Theorem 2, with the first of these sections describing the ballistic regime, and the second the sub-ballistic one. Finally, in Sect. 6, we study the graph \({\mathcal {Z}}^{\scriptscriptstyle (\infty )}\).

Regarding notational conventions, c and C are constants that may change from line to line.

2 Transience

The aim of this section is to prove Theorem 1(a), that is, we will demonstrate transience of the random walks \(X^{\scriptscriptstyle (i)}\) under Condition 1. To do this we will appeal to the well-known criterion that a graph is transient if the effective resistance between a given vertex and \(\infty \) is finite, e.g. [2, Theorem 2.11]. (For background on effective resistance, see [2, Chapter 2] or [16, Chapter 2], for example.) In the proof, we write \(R^{\scriptscriptstyle (i)}\) for the effective resistance metric on \({\mathcal {Z}}^{\scriptscriptstyle (i)}\) when conductances are given by (1). We also write

$$\begin{aligned} r^{\scriptscriptstyle (i)}(x,y)=\frac{1}{c^{\scriptscriptstyle (i)}(x,y)} \end{aligned}$$

for the corresponding edge resistances. To check that \(R^{\scriptscriptstyle (i)}(0,\infty )<\infty \), we will show that the relevant resistance is bounded above by the sum \(\sum _{n=0}^\infty r^{\scriptscriptstyle (i)}(X_n^{\scriptscriptstyle (0)},X_{n+1}^{\scriptscriptstyle (0)})\), which is almost-surely finite.

Proof of Theorem 1(a)

From the law of large numbers, we \(\mathbb {P}\)-a.s. have that

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{X^{\scriptscriptstyle (0)}_n}{n}=\mathbb {E}[X_1^{\scriptscriptstyle (0)}]=\delta ^{\scriptscriptstyle (0)}. \end{aligned}$$
(5)

It follows that for any nearest-neighbour path \(\pi =(\pi _n)_{n\ge 0}\) with \(\pi _0=0\), \(\pi _n\in {\mathcal {V}}^{\scriptscriptstyle (1)}\) and \(|\pi _n|\rightarrow \infty \),

$$\begin{aligned} \lim _{n\rightarrow \infty }\pi _n\cdot \ell =\infty , \end{aligned}$$
(6)

for every \(\ell \) such that \(\ell \cdot \delta ^{\scriptscriptstyle (0)}>0\), i.e. \(\pi \) is transient in the direction \(\ell \). Moreover, taking the supremum over such paths that are also injective, we have for any \(i\in {\mathbb {N}}\cup \{\infty \}\) that (using (1)),

$$\begin{aligned} \sup _{\pi }\sum _{n=0}^\infty r^{\scriptscriptstyle (i)}\left( \pi _n,\pi _{n+1}\right) \le \sum _{n=0}^\infty r^{\scriptscriptstyle (i)}\left( X_n^{\scriptscriptstyle (0)},X_{n+1}^{\scriptscriptstyle (0)}\right) \le C_{\scriptscriptstyle (i)}\sum _{n=0}^\infty \beta _{\scriptscriptstyle (i)}^{-X^{\scriptscriptstyle (0)}_n\cdot \ell ^{\scriptscriptstyle (i)}} \end{aligned}$$

where \(C_{\scriptscriptstyle (i)}\) is a deterministic constant; the first inequality holds because an injective path traverses each edge of \(\mathcal {Z}^{\scriptscriptstyle (1)}\) at most once, whilst the trajectory of \(X^{\scriptscriptstyle (0)}\) traverses every such edge at least once. Now, by (5) and the assumption that \(\delta ^{\scriptscriptstyle (0)}\cdot \ell ^{\scriptscriptstyle (i)}>0\), we have that, \(\mathbb {P}\)-a.s., for large n,

$$\begin{aligned} \beta _{\scriptscriptstyle (i)}^{-X^{\scriptscriptstyle (0)}_n\cdot \ell ^{\scriptscriptstyle (i)}}\le \beta _{\scriptscriptstyle (i)}^{-n\delta ^{\scriptscriptstyle (0)}\cdot \ell ^{\scriptscriptstyle (i)}/2}, \end{aligned}$$

and so we conclude that, \(\mathbb {P}\)-a.s.,

$$\begin{aligned} \sup _{\pi }\sum _{n=0}^\infty r^{\scriptscriptstyle (i)}\left( \pi _n,\pi _{n+1}\right) <\infty . \end{aligned}$$
(7)

We use the preceding deductions as the basis of an inductive argument. To begin with, observe that Rayleigh’s monotonicity principle [16, Chapter 2] implies that for any injective path \(\pi \) of the form considered in the previous paragraph, we have that, \(\mathbb {P}\)-a.s.,

$$\begin{aligned} R^{\scriptscriptstyle (1)}(0,\infty )\le \sum _{n=0}^\infty r^{\scriptscriptstyle (1)}\left( \pi _n,\pi _{n+1}\right) <\infty . \end{aligned}$$

Hence, by [2, Theorem 2.11], \(\mathcal {Z}^{\scriptscriptstyle (1)}\) is \(\mathbb {P}\)-a.s. a \(\varvec{p}^{\scriptscriptstyle (1)}\)-transient graph. Suppose now that we have shown that \(\mathcal {Z}^{\scriptscriptstyle (i)}\) is \(\mathbb {P}\)-a.s. a \(\varvec{p}^{\scriptscriptstyle (i)}\)-transient graph for some \(i\ge 1\). By definition, this implies that \(|X^{\scriptscriptstyle (i)}_n|\rightarrow \infty \), \(\mathbb {P}\)-a.s., and so \(\mathcal {Z}^{\scriptscriptstyle (i+1)}\) contains at least one injective path \(\pi \) from 0 to \(\infty \) with vertices in \({\mathcal {V}}^{\scriptscriptstyle (i)}\subseteq {\mathcal {V}}^{\scriptscriptstyle (1)}\), \(\mathbb {P}\)-a.s. Thus, from (7), \(\mathbb {P}\)-a.s.,

$$\begin{aligned} R^{\scriptscriptstyle (i+1)}(0,\infty )\le \sum _{n=0}^\infty r^{\scriptscriptstyle (i+1)}\left( \pi _n,\pi _{n+1}\right) <\infty , \end{aligned}$$

and so \(\mathcal {Z}^{\scriptscriptstyle (i+1)}\) is a \(\varvec{p}^{\scriptscriptstyle (i+1)}\)-transient graph. We have therefore established that for each \(i\in \mathbb {N}\), \(\mathcal {Z}^{\scriptscriptstyle (i)}\) is a \(\varvec{p}^{\scriptscriptstyle (i)}\)-transient graph \(\mathbb {P}\)-a.s.

We now deal with the \(i=\infty \) case. In particular, it must hold that \(\mathcal {Z}^{\scriptscriptstyle (\infty )}\) is an infinite graph, \(\mathbb {P}\)-a.s. Indeed, if this is not the case, then it must hold that there is a strictly positive probability that one of the graphs \(\mathcal {Z}^{\scriptscriptstyle (i)}\), \(i\in {\mathbb {Z}}_+\) is finite, and this clearly contradicts the conclusion of the previous paragraph. As a consequence, we know that, \(\mathbb {P}\)-a.s., \(\mathcal {Z}^{\scriptscriptstyle (\infty )}\) contains at least one injective path \(\pi \) from 0 to \(\infty \) with vertices in \({\mathcal {V}}^{\scriptscriptstyle (1)}\), and we can use our previous argument to deduce its transience.

To complete the proof, we are required to check transience in the direction \(\ell \). To this end, we observe that the transience we have already established yields that \(|X^{\scriptscriptstyle (i)}_n|\rightarrow \infty \) for every \(i\in {\mathbb {Z}}_+\cup \{\infty \}\), \(\mathbb {P}\)-a.s. Appealing to (6) (and also (5)), we readily see that this implies \(X^{\scriptscriptstyle (i)}_n\cdot \ell \rightarrow \infty \) for every \(i\in {\mathbb {Z}}_+\cup \{\infty \}\) and \(\ell \) such that \(\ell \cdot \delta ^{\scriptscriptstyle (0)}>0\), \(\mathbb {P}\)-a.s. \(\quad \square \)

By a minor adaptation of the previous argument, it is possible to check, as we do in Lemma 2.2 below, that the condition \(\delta ^{\scriptscriptstyle (0)}\cdot \ell ^{\scriptscriptstyle (i)}>0\) is in fact necessary for transience. To proceed with this, and for later in the paper, it will be helpful to introduce a regeneration structure for \(X^{\scriptscriptstyle (0)}\) in the direction \(\ell \), where \(\ell \) can be chosen to be any vector satisfying \(\ell \cdot \delta ^{\scriptscriptstyle (0)}>0\). More precisely, let \({\mathcal {T}}^{\ell }_j\) denote the jth regeneration time for \(X^{\scriptscriptstyle (0)}\) in the direction \(\ell \), i.e. the jth time that \(X^{\scriptscriptstyle (0)}\cdot \ell \) hits a new maximum, from which it does not return to a lower level in the future. We then have the following standard result.

Lemma 2.1

Assume the parts of Condition 1 regarding \(\varvec{p}^{\scriptscriptstyle (0)}\). For any \(\ell \) satisfying \(\ell \cdot \delta ^{\scriptscriptstyle (0)}>0\), the following statements hold.

  1. (i)

    \({\mathbb {P}}\)-a.s., the regeneration times \(({\mathcal {T}}^{\ell }_j)_{j=1}^\infty \) are all finite.

  2. (ii)

    The path segments

    $$\begin{aligned} \left( X^{\scriptscriptstyle (0)}_{{\mathcal {T}}^{\ell }_j+n}-X^{\scriptscriptstyle (0)}_{{\mathcal {T}}^{\ell }_j}\right) _{n=0}^{{\mathcal {T}}^{\ell }_{j+1}-{\mathcal {T}}^{\ell }_{j}},\qquad j\ge 1, \end{aligned}$$

    are independent and identically distributed.

  3. (iii)

    \({\mathbb {P}}\)-a.s., it holds that

    $$\begin{aligned} \lim _{j\rightarrow \infty }\frac{{\mathcal {T}}_j^{\ell }}{j}=C, \end{aligned}$$

    where \(C\in [1,\infty )\) is a deterministic constant (depending on \(\ell \)).

Proof

Note that the Markov property and the directional transience of \(X^{\scriptscriptstyle (0)}\) imply that once \(X^{\scriptscriptstyle (0)}\cdot \ell \) hits a new maximum, there is strictly positive probability that it does not return to a lower level in the future. Part (i) of the lemma readily follows. The remaining parts of the lemma are obtained by an application of results from [21]. Specifically, part (ii) is a consequence of [21, Theorem 1.4]. Moreover, Condition 1 ensures that the uniform ellipticity condition (0.1) and the drift condition (2.37) of [21] hold for \(X^{\scriptscriptstyle (0)}\) with respect to the direction \(\ell \). By [21, Proposition 2.4], this gives us that Kalikow’s condition [21, (0.7)] holds, and so we can apply equation [21, (2.32)] to deduce that, \({\mathbb {P}}\)-a.s.,

$$\begin{aligned} \lim _{j\rightarrow \infty }\frac{X^{\scriptscriptstyle (0)}_{{\mathcal {T}}_j^{\ell }}\cdot \ell }{j}=C, \end{aligned}$$

for some finite deterministic constant C. Together with the almost-sure result that \(n^{-1}X^{\scriptscriptstyle (0)}_n\rightarrow \delta ^{\scriptscriptstyle (0)}\), part (iii) of the lemma follows. \(\quad \square \)
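For intuition, the defining property of \({\mathcal {T}}^{\ell }_j\) can be read off directly from a sample path: it is a time at which \(X^{\scriptscriptstyle (0)}\cdot \ell \) attains a new maximum that the future of the path never goes below. The following is a minimal computational sketch of this definition (ours, for illustration only); on a finite path the late candidates are necessarily provisional, since they could still be overturned by the unobserved future of the walk.

```python
# Extract the (candidate) regeneration times of a finite path, given the
# projections proj[n] = X^(0)_n . l : times at which the projection reaches a
# new maximum that the remainder of the observed path never drops below.
def regeneration_times(proj):
    n = len(proj)
    future_min, m = [0.0] * n, proj[-1]
    for k in range(n - 1, -1, -1):     # running minimum of the future path
        m = min(m, proj[k])
        future_min[k] = m
    times, running_max = [], float("-inf")
    for k in range(n):
        if proj[k] > running_max and future_min[k] >= proj[k]:
            times.append(k)            # new maximum, never undercut later
        running_max = max(running_max, proj[k])
    return times
```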

We now confirm that if \(\delta ^{\scriptscriptstyle (0)}\cdot \ell ^{\scriptscriptstyle (i)}\le 0\), then the ith walk is recurrent. Since this is not central to the main results of this paper, we are somewhat brief in the proof.

Lemma 2.2

Assume Condition 1(a) and 1(b) hold, but Condition 1(c) does not, i.e. \(\delta ^{\scriptscriptstyle (0)}\cdot \ell ^{\scriptscriptstyle (i)}\le 0\) for some \(i\in {\mathbb {N}}\cup \{\infty \}\). It then holds that the random walk \(X^{\scriptscriptstyle (i)}\) is recurrent.

Proof

Let \(({\mathcal {T}}_j)_{j\ge 1}\) be the regeneration times for \(X^{\scriptscriptstyle (0)}\) in direction \(\delta ^{\scriptscriptstyle (0)}\). It then holds that

$$\begin{aligned} R^{\scriptscriptstyle (i)}(0,\infty )\ge \sum _{j=2}^{\infty }r^{\scriptscriptstyle (i)}\left( X^{\scriptscriptstyle (0)}_{{\mathcal {T}}_j-1},X^{\scriptscriptstyle (0)}_{{\mathcal {T}}_j}\right) \ge C \sum _{j=2}^{\infty }\beta _{\scriptscriptstyle (i)}^{-X^{\scriptscriptstyle (0)}_{{\mathcal {T}}_j}\cdot \ell ^{\scriptscriptstyle (i)}}, \end{aligned}$$
(8)

where to deduce the first inequality we again appeal to Rayleigh’s monotonicity principle. We will show that, when \(\delta ^{\scriptscriptstyle (0)}\cdot \ell ^{\scriptscriptstyle (i)}\le 0\), there are \({\mathbb {P}}\)-a.s. infinitely many j for which

$$\begin{aligned} \beta _{\scriptscriptstyle (i)}^{-X^{\scriptscriptstyle (0)}_{{\mathcal {T}}_j}\cdot \ell ^{\scriptscriptstyle (i)}}\ge 1. \end{aligned}$$
(9)

In conjunction with [2, Theorem 2.11], (8) then yields that \({\mathcal {Z}}^{\scriptscriptstyle (i)}\) is a \(\varvec{p}^{\scriptscriptstyle (i)}\)-recurrent graph.

First suppose that \(\delta ^{\scriptscriptstyle (0)}\cdot \ell ^{\scriptscriptstyle (i)}<0\). There then exists a deterministic constant \(c\in (0,\infty )\), such that \({\mathbb {P}}\)-a.s.,

$$\begin{aligned} j^{-1}X^{\scriptscriptstyle (0)}_{{\mathcal {T}}_j}\rightarrow c\delta ^{\scriptscriptstyle (0)}. \end{aligned}$$
(10)

This is a consequence of the law of large numbers for \(X^{\scriptscriptstyle (0)}\) and the law of large numbers for regeneration times that is stated at Lemma 2.1(iii). Hence \(j^{-1}X^{\scriptscriptstyle (0)}_{{\mathcal {T}}_j}\cdot \ell ^{\scriptscriptstyle (i)}\rightarrow c\delta ^{\scriptscriptstyle (0)}\cdot \ell ^{\scriptscriptstyle (i)}<0\), which confirms (9) in this case.

In the case when \(\delta ^{\scriptscriptstyle (0)}\cdot \ell ^{\scriptscriptstyle (i)}=0\), a similar, but slightly more delicate argument can be applied to deduce the same conclusion. In particular, first observe that Lemma 2.1(ii) implies

$$\begin{aligned} \left( \left( X^{\scriptscriptstyle (0)}_{{\mathcal {T}}_{j+1}}-X^{\scriptscriptstyle (0)}_{{\mathcal {T}}_j}\right) \cdot \ell ^{\scriptscriptstyle (i)}\right) _{j\ge 1} \end{aligned}$$

form an i.i.d. sequence, and moreover, (10) (together with [13, Theorems VII.9.1 and VII.9.4]) yields that these random variables have expectation equal to zero. Thus the sum of these increments is a (non-trivial) one-dimensional random walk with zero-mean i.i.d. increments, and standard arguments show that, \({\mathbb {P}}\)-a.s.,

$$\begin{aligned} -\infty =\liminf _{j\rightarrow \infty } X^{\scriptscriptstyle (0)}_{{\mathcal {T}}_j}\cdot \ell ^{\scriptscriptstyle (i)}<\limsup _{j\rightarrow \infty } X^{\scriptscriptstyle (0)}_{{\mathcal {T}}_j}\cdot \ell ^{\scriptscriptstyle (i)}=\infty . \end{aligned}$$

Thus (9) holds in this case as well. \(\quad \square \)

Remark 2.3

The condition \(\delta ^{\scriptscriptstyle (0)}\cdot \ell ^{\scriptscriptstyle (i)}>0\) for transience (in each direction \(\ell \) for which \(\ell \cdot \delta ^{\scriptscriptstyle (0)}>0\)) permits some counterintuitive behaviour whereby a random walk can be transient in the “wrong direction” (to be more precise, the walk \(X^{\scriptscriptstyle (i)}\) can be transient in direction \(\delta ^{\scriptscriptstyle (0)}\) even though \(\delta ^{\scriptscriptstyle (0)}\cdot \delta ^{\scriptscriptstyle (i)}<0\), and vice versa; see Example 2.4). Similar phenomena can be observed, for example, in random walk in i.i.d. random environment, where in one dimension the transience of the walk is determined by the sign of \(\mathbb {E}[\log (\omega _0^{-1}-1)]\) rather than \(\mathbb {E}[\omega _0]\), where \(\omega _0\in (0,1)\) is the random probability of stepping to the right at the origin [19].

Example 2.4

Let

$$\begin{aligned} p^{\scriptscriptstyle (0)}(e_1)=p^{\scriptscriptstyle (0)}(e_2)=\frac{1}{4}+\varepsilon , \quad p^{\scriptscriptstyle (0)}(-e_1)=p^{\scriptscriptstyle (0)}(-e_2)=\frac{1}{4}-\varepsilon , \end{aligned}$$

for some \(\varepsilon \in (0,\frac{1}{4})\), so that \(\delta ^{\scriptscriptstyle (0)}=(2\varepsilon ,2\varepsilon )\). Furthermore, let

$$\begin{aligned} p^{\scriptscriptstyle (1)}(e_1)=\frac{15}{25},\qquad p^{\scriptscriptstyle (1)}(-e_1)=\frac{5}{25},\qquad p^{\scriptscriptstyle (1)}(e_2)=\frac{1}{25},\qquad p^{\scriptscriptstyle (1)}(-e_2)=\frac{4}{25}. \end{aligned}$$
(11)

In this case \(\delta ^{\scriptscriptstyle (1)}=(\frac{10}{25},-\frac{3}{25})\) and \({\hat{\ell }}^{\scriptscriptstyle (1)}=(\log (3),-\log (4))\). Thus

$$\begin{aligned} \delta ^{\scriptscriptstyle (1)}\cdot \delta ^{\scriptscriptstyle (0)}> 0 >\ell ^{\scriptscriptstyle (1)}\cdot \delta ^{\scriptscriptstyle (0)}, \end{aligned}$$
(12)

which implies that even though a random walk on \({\mathbb {Z}}^d\) with transition probabilities \(p^{\scriptscriptstyle (1)}\) is transient in the direction \(\delta ^{\scriptscriptstyle (0)}\), the process \(X^{\scriptscriptstyle (1)}\) on \(\mathcal {Z}^{\scriptscriptstyle (1)}\) is not.
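Both signs in (12) can be read off directly: since \(\delta ^{\scriptscriptstyle (0)}=(2\varepsilon ,2\varepsilon )\), and \(\ell ^{\scriptscriptstyle (1)}\) is a positive multiple of \({\hat{\ell }}^{\scriptscriptstyle (1)}\), we have

$$\begin{aligned} \delta ^{\scriptscriptstyle (1)}\cdot \delta ^{\scriptscriptstyle (0)}=2\varepsilon \left( \frac{10}{25}-\frac{3}{25}\right) =\frac{14\varepsilon }{25}>0, \qquad {\hat{\ell }}^{\scriptscriptstyle (1)}\cdot \delta ^{\scriptscriptstyle (0)}=2\varepsilon \left( \log 3-\log 4\right) <0. \end{aligned}$$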

If instead we rotate the transition probabilities (11) through \(\pi \) radians, i.e. we set

$$\begin{aligned} p^{\scriptscriptstyle (1)}(e_1)=\frac{5}{25},\qquad p^{\scriptscriptstyle (1)}(-e_1)=\frac{15}{25},\qquad p^{\scriptscriptstyle (1)}(e_2)=\frac{4}{25},\qquad p^{\scriptscriptstyle (1)}(-e_2)=\frac{1}{25}, \end{aligned}$$

then the signs in (12) are reversed, so \(X^{\scriptscriptstyle (1)}\) is transient in the direction \(\delta ^{\scriptscriptstyle (0)}\), even though a walk on \({\mathbb {Z}}^d\) with transition probabilities \(p^{\scriptscriptstyle (1)}\) is transient in the direction \(-\delta ^{\scriptscriptstyle (0)}\).

3 Velocity

The main goal of this section is to prove Theorem 1(b). We will frequently make use of regenerations of walks in direction \(e_1\). In particular, we will write \(\mathcal {T}=(\mathcal {T}_k)_{k \in \mathbb {N}}\) for the set of regeneration times for \(X^{\scriptscriptstyle (0)}\), where, in the notation of the previous section, \(\mathcal {T}_k=\mathcal {T}_k^{e_1}\). We note that, by definition, any regeneration time for a walk in direction \(e_1\) is also a first hitting time of some level in direction \(e_1\) by that walk. The corresponding regeneration levels are defined as \(\{X^{\scriptscriptstyle (0)}_{\mathcal {T}_k}\cdot e_1: k \in \mathbb {N}\}\), and the corresponding regeneration points are \(\{X^{\scriptscriptstyle (0)}_{\mathcal {T}_k}: k \in \mathbb {N}\}\).

Lemma 3.1

Assume Condition 1, and let \(i\in {\mathbb {N}}\cup \{\infty \}\).

  1. (1)

    If \(X^{\scriptscriptstyle (0)}\) is not transient in direction \(\pm e_j\), then \(X^{\scriptscriptstyle (i)}\cdot e_j\) almost surely hits 0 infinitely often, and \(\lim _{n \rightarrow \infty }n^{-1}X_n^{\scriptscriptstyle (i)}\cdot e_j=0\).

  2. (2)

    If for some fixed \(\varepsilon _1>0\), \(\lim _{n \rightarrow \infty } n^{-1}X_n^{\scriptscriptstyle (i)}\cdot e_1=\varepsilon _1\), \(\mathbb {P}\)-a.s., then \(\lim _{n \rightarrow \infty } n^{-1}X_n^{\scriptscriptstyle (i)}= (\varepsilon _1/\delta ^{\scriptscriptstyle (0)}_1)\delta ^{\scriptscriptstyle (0)}\), \(\mathbb {P}\)-a.s.

  3. (3)

    If \(\lim _{n \rightarrow \infty } n^{-1}X_n^{\scriptscriptstyle (i)}\cdot e_1=0\), \(\mathbb {P}\)-a.s., then \(\lim _{n \rightarrow \infty }n^{-1}X_n^{\scriptscriptstyle (i)}=0\).

Proof

For (1), note that since the projection of \(X^{\scriptscriptstyle (0)}\) in direction \(e_j\) is a (lazy) simple symmetric random walk, it is recurrent. This implies that \(X^{\scriptscriptstyle (0)}\) has infinitely many \(e_1\)-direction regenerations at which \(X^{\scriptscriptstyle (0)}\cdot e_j>0\) and infinitely many at which \(X^{\scriptscriptstyle (0)}\cdot e_j<0\). All of these regenerations are at points that all subsequent walks must pass through, which proves the first claim of (1). For the second part of (1), note that a.s. there exists some finite random Y such that \(\sup _n n^{-2/3}|X^{\scriptscriptstyle (0)}_n\cdot e_j|\le Y\). On the other hand, there exist some \(\varepsilon >0\) and a random \(N_0\) such that \(X_n^{\scriptscriptstyle (0)}\cdot e_1>\varepsilon n\) for all \(n\ge N_0\) almost surely. This means that, almost surely, for all \(m>N_0\), after walk i has made m steps its first coordinate can be at most m, and therefore its \(e_j\)-coordinate \(X_m^{\scriptscriptstyle (i)}\cdot e_j\) can be at most \(\sup _{k\le m/\varepsilon }|X_k^{\scriptscriptstyle (0)}\cdot e_j|\le Y(m/\varepsilon )^{2/3}\) in absolute value. Therefore \(m^{-1}X_m^{\scriptscriptstyle (i)}\cdot e_j \rightarrow 0\) almost surely.

To prove (2), note that by assumption \(n^{-1}X_n^{\scriptscriptstyle (i)}\cdot e_1\rightarrow \varepsilon _1>0\) almost surely. Let \(M_n=\inf \{m: X_m^{\scriptscriptstyle (0)}=X^{\scriptscriptstyle (i)}_n\}\). Then \(M_n \rightarrow \infty \) since \(X^{\scriptscriptstyle (i)}\) is transient, and

$$\begin{aligned} \frac{X_n^{\scriptscriptstyle (i)}}{n}\cdot e_1=\frac{X_{M_n}^{\scriptscriptstyle (0)}}{M_n}\cdot e_1 \frac{M_n}{n}. \end{aligned}$$

The left-hand side converges a.s. to \(\varepsilon _1\), and \(\frac{X_{M_n}^{\scriptscriptstyle (0)}}{M_n}\cdot e_1\rightarrow \delta ^{\scriptscriptstyle (0)}_1\) a.s., so \(\frac{M_n}{n} \rightarrow \varepsilon _1/\delta ^{\scriptscriptstyle (0)}_1\) a.s. Thus,

$$\begin{aligned} \frac{X_n^{\scriptscriptstyle (i)}}{n}=\frac{X_{M_n}^{\scriptscriptstyle (0)}}{M_n}\frac{M_n}{n}\rightarrow \frac{\varepsilon _1}{\delta ^{\scriptscriptstyle (0)}_1}\delta ^{\scriptscriptstyle (0)}. \end{aligned}$$

It remains to prove (3). Since \(\delta ^{\scriptscriptstyle (0)}\cdot e_1>0\), there exist \(\delta >0\) and a random \(N_1\) such that \(X_n^{\scriptscriptstyle (0)}\cdot e_1>\delta n\) for every \(n>N_1\). Let \(\varepsilon \in (0,1)\). Then there exists \(N_2\) such that \(|X_n^{\scriptscriptstyle (i)}\cdot e_1|< \varepsilon \delta n\) for every \(n\ge N_2\). Let \(N=N_2\vee (N_1/\varepsilon )\). Then for every \(n>N\ge N_2\), \(|X^{\scriptscriptstyle (0)}_{M_n}\cdot e_1|=|X_n^{\scriptscriptstyle (i)}\cdot e_1|\le \varepsilon \delta n\). This implies that \(M_n\le \varepsilon n\), so \(|X_n^{\scriptscriptstyle (i)}\cdot e_j|=|X^{\scriptscriptstyle (0)}_{M_n}\cdot e_j|\le M_n\le \varepsilon n\). This establishes that \(\limsup _{n \rightarrow \infty }n^{-1}|X_n^{\scriptscriptstyle (i)}\cdot e_j|\le \varepsilon \), \({\mathbb {P}}\)-a.s. for every \(\varepsilon \in (0,1)\), and therefore completes the proof. \(\quad \square \)

Next, let \(\mathcal {L}^{\scriptscriptstyle (i)}=(L_1^{\scriptscriptstyle (i)},L_2^{\scriptscriptstyle (i)},\dots )\) denote the (ordered) set of strictly positive levels that are regeneration levels for every one of the walks \(X^{\scriptscriptstyle (0)},\dots , X^{\scriptscriptstyle (i)}\). The elements of \(\mathcal {L}^{\scriptscriptstyle (i)}\) are called i-uber-regeneration levels. The following regeneration result is an easy consequence of the definition of the model. To state it, we use the notation \(T^{\scriptscriptstyle (i)}_n=\inf \{k:X^{\scriptscriptstyle (i)}_k\cdot e_1=n\}\) to denote the first hitting time of level n in the \(e_1\) direction by walk i. Let \(\hat{\mathcal {L}}^{\scriptscriptstyle (i)}=\{x\in \mathbb {Z}^d:x=X^{\scriptscriptstyle (i)}_{T_L^{\scriptscriptstyle (i)}} \text { for some }L \in \mathcal {L}^{\scriptscriptstyle (i)}\}\) denote the corresponding uber-regeneration points.

Lemma 3.2

Fix \(i \in \mathbb {Z}_+\). If \(L\in \mathcal {L}^{\scriptscriptstyle (i)}\), then \(((X^{\scriptscriptstyle (r)}_{n+T_L^{\scriptscriptstyle (r)}}-X^{\scriptscriptstyle (r)}_{T_L^{\scriptscriptstyle (r)}})_{n\ge 0})_{r\le i}\):

  • is independent of \(((X^{\scriptscriptstyle (r)}_n)_{n\le T^{\scriptscriptstyle (r)}_{L}})_{r\le i}\), and

  • has the same law as \(((X^{\scriptscriptstyle (r)}_{n+T_1^{\scriptscriptstyle (r)}}-X^{\scriptscriptstyle (r)}_{T_1^{\scriptscriptstyle (r)}})_{n\ge 0})_{r\le i}\) conditional on \(\{1\in \mathcal {L}^{\scriptscriptstyle (i)}\}\).

If we can show that \(\mathbb {P}(\mathcal {L}^{\scriptscriptstyle (i)} \text { is infinite})=1\) then repeatedly applying Lemma 3.2 yields a common regeneration structure for the first i walks, i.e. that

$$\begin{aligned} \Bigg (\big ((X^{\scriptscriptstyle (r)}_{n} -X^{\scriptscriptstyle (r)}_{T^{\scriptscriptstyle (r)}_{L^{\scriptscriptstyle (i)}_j}})_{n\in [T^{\scriptscriptstyle (r)}_{L^{\scriptscriptstyle (i)}_j},T^{\scriptscriptstyle (r)}_{L^{\scriptscriptstyle (i)}_{j+1}}]}\big )_{r\le i}\Bigg )_{j \in \mathbb {N}} \text { are i.i.d. over { j}.} \end{aligned}$$
(13)

This common regeneration structure will allow us to prove Theorem 1(b). Therefore our current goal is to prove that \(\mathcal {L}^{\scriptscriptstyle (i)}\) is almost surely infinite. To this end, we start by checking that uber-regeneration levels occur at any natural number with strictly positive probability. As in the proof of Theorem 1(a), our argument depends on the almost-sure finiteness of \(\sum _{n=0}^\infty r^{\scriptscriptstyle (i)}(X_n^{\scriptscriptstyle (0)},X_{n+1}^{\scriptscriptstyle (0)})\).

Lemma 3.3

For each \(i\in {\mathbb {Z}}_+\), there exists an \(\varepsilon _i>0\) such that \({\mathbb {P}}(k\in \mathcal {L}^{\scriptscriptstyle (i)})=\varepsilon _i\) for every \(k\ge 1\).

Proof

Firstly, the directional transience of Theorem 1(a) and the strong Markov property imply the result in the case \(i=0\). So, now suppose \(i,k\ge 1\). We then have that

$$\begin{aligned} \mathbb {P}\left( k\in \mathcal {L}^{\scriptscriptstyle (i)}\right) =\mathbb {P}\left( k\in \mathcal {L}^{\scriptscriptstyle (0)}\right) \prod _{j=1}^{i}\mathbb {P}\left( k\in \mathcal {L}^{\scriptscriptstyle (j)}\,\Big |\,k\in \mathcal {L}^{\scriptscriptstyle (j-1)}\right) . \end{aligned}$$

Conditioning on the graph \({\mathcal {Z}}^{\scriptscriptstyle (j)}\) (and noting that, on the event \(k\in \mathcal {L}^{\scriptscriptstyle (j-1)}\), \(X^{\scriptscriptstyle (0)}_{T_k^{\scriptscriptstyle (0)}}\) is a measurable function of \({\mathcal {Z}}^{\scriptscriptstyle (j)}\)), we have that

$$\begin{aligned} \mathbb {P}\left( k\in \mathcal {L}^{\scriptscriptstyle (j)}\,\Big |\,{\mathcal {Z}}^{\scriptscriptstyle (j)},\,k\in \mathcal {L}^{\scriptscriptstyle (j-1)}\right) \ge \frac{1}{{\bar{c}}^{\scriptscriptstyle (j)}\big (X^{\scriptscriptstyle (0)}_{T_k^{\scriptscriptstyle (0)}}\big )R^{\scriptscriptstyle (j)}\big (X^{\scriptscriptstyle (0)}_{T_k^{\scriptscriptstyle (0)}},\infty \big )}\ge \frac{1}{{\bar{c}}^{\scriptscriptstyle (j)}\big (X^{\scriptscriptstyle (0)}_{T_k^{\scriptscriptstyle (0)}}\big )\sum _{m=T_k^{\scriptscriptstyle (0)}}^{\infty }r^{\scriptscriptstyle (j)}\big (X^{\scriptscriptstyle (0)}_m,X^{\scriptscriptstyle (0)}_{m+1}\big )}, \end{aligned}$$

where \({\bar{c}}^{\scriptscriptstyle (i)}(x):=\sum _{y \in \mathbb {Z}^d:y\sim x}c^{\scriptscriptstyle (i)}(x,y)\). Note that we have applied [2, Theorem 2.11] for the first inequality, and bounded the resistance to infinity as in the proof of Theorem 1(a) for the second. We thus obtain that

$$\begin{aligned} \mathbb {P}\left( k\in \mathcal {L}^{\scriptscriptstyle (j)}\,\Big |\,k\in \mathcal {L}^{\scriptscriptstyle (j-1)}\right)&\ge \mathbb {E}\left[ \frac{1}{{\bar{c}}^{\scriptscriptstyle (j)}\big (X^{\scriptscriptstyle (0)}_{T_k^{\scriptscriptstyle (0)}}\big )\sum _{m=T_k^{\scriptscriptstyle (0)}}^{\infty }r^{\scriptscriptstyle (j)}\big (X^{\scriptscriptstyle (0)}_m,X^{\scriptscriptstyle (0)}_{m+1}\big )}\,\Bigg |\,k\in \mathcal {L}^{\scriptscriptstyle (j-1)}\right] \\&=\mathbb {E}\left[ \frac{1}{{\bar{c}}^{\scriptscriptstyle (j)}\big (X^{\scriptscriptstyle (0)}_{T_1^{\scriptscriptstyle (0)}}\big )\sum _{m=T_1^{\scriptscriptstyle (0)}}^{\infty }r^{\scriptscriptstyle (j)}\big (X^{\scriptscriptstyle (0)}_m,X^{\scriptscriptstyle (0)}_{m+1}\big )}\,\Bigg |\,1\in \mathcal {L}^{\scriptscriptstyle (j-1)}\right] , \end{aligned}$$

where the equality follows from Lemma 3.2. Now, as was observed in the proof of Theorem 1(a), the random variable \(\sum _{m=0}^{\infty } r^{\scriptscriptstyle (j)}(X^{\scriptscriptstyle (0)}_m,X^{\scriptscriptstyle (0)}_{m+1})\) is \({\mathbb {P}}\)-a.s. finite. Hence the above expectation is strictly positive. In particular, this confirms that

$$\begin{aligned} \rho _{j}:=\mathbb {P}\left( k\in \mathcal {L}^{\scriptscriptstyle (j)}\,\Big |\,k\in \mathcal {L}^{\scriptscriptstyle (j-1)}\right) >0 \end{aligned}$$
(14)

for each \(j\ge 1\), with the value \(\rho _j\) not depending on k. Setting \(\rho _0:=\varepsilon _0>0\), we thus have established the result with \(\varepsilon _i=\prod _{j=0}^i\rho _j\). \(\quad \square \)
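In computational terms, \(\mathcal {L}^{\scriptscriptstyle (i)}\) is just an intersection of per-walk regeneration level sets. Continuing the illustrative sketch following Lemma 2.1 (and with the same finite-horizon caveat), it can be approximated from finite samples of the walks; `regeneration_times` below refers to the helper defined there.

```python
# The i-uber-regeneration levels are the strictly positive e1-levels at which
# every one of the walks X^(0),...,X^(i) regenerates (direction l = e1 here).
def regeneration_levels(e1_coords):
    """Levels X_T . e1 attained at the regeneration times T of one walk."""
    return {e1_coords[t] for t in regeneration_times(e1_coords)}

def uber_regeneration_levels(walks_e1_coords):
    """walks_e1_coords[r][n] = X^(r)_n . e1 for r = 0, ..., i."""
    common = set.intersection(*(regeneration_levels(w) for w in walks_e1_coords))
    return sorted(L for L in common if L > 0)    # the ordered set L^(i)
```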

The above result allows us to establish the following lemma, which in view of Lemma 3.2 confirms that there are indeed almost-surely an infinite number of i-uber-regeneration levels. In the proof, we write

$$\begin{aligned} \tau ^{\scriptscriptstyle (r,i)}_{j}=\inf \left\{ n:X^{\scriptscriptstyle (r)}_n\cdot e_1=L^{\scriptscriptstyle (i)}_j\right\} \end{aligned}$$

to represent the first hitting time of level \(L_j^{\scriptscriptstyle (i)} \in \mathcal {L}^{\scriptscriptstyle (i)}\) by walk \(X^{\scriptscriptstyle (r)}\).

Lemma 3.4

For each \(i\in \mathbb {Z}_+\),

  1. (0)

    \(\mathbb {P}(L_1^{\scriptscriptstyle (i)}=1)>0\), and

  2. (1)

    \(L_1^{\scriptscriptstyle (i)}\) is almost surely finite, and

  3. (2)

    given that \(L_1^{\scriptscriptstyle (i)}=1\), \(L_2^{\scriptscriptstyle (i)}\) is almost surely finite.

For an interval \(\mathcal {I}\subset \mathbb {Z}\) write \(\mathcal {Z}^{\scriptscriptstyle (i)}_{\mathcal {I}}:=\mathcal {Z}^{\scriptscriptstyle (i)}\cap (\mathcal {I}\times \mathbb {Z}^{d-1})\), and let \(\mathcal {E}_{\mathcal {I}}^{\scriptscriptstyle (i)}\) denote the set of edges of \(\mathcal {Z}^{\scriptscriptstyle (i)}_{\mathcal {I}}\) such that at least one of the end vertices x has \(x\cdot e_1\) in the interior of \(\mathcal {I}\). To prove Lemma 3.4(2) we first consider the probability that a given level \(\ell \in \mathcal {L}^{\scriptscriptstyle (j-1)}\) is also a j-uber-regeneration level, conditional on a particular \(\ell '<\ell \) being a j-uber-regeneration level. For this, first note that if G is a connected graph with at least one edge then the graph can be identified with its set of edges. Given a finite connected graph \(H\subset \mathbb {Z}^d\) with at least two edges, a unique right-most vertex u, and a unique left-most vertex 0, we let \(E_H=\{\mathcal {E}^{\scriptscriptstyle (j)}_{[0,u\cdot e_1]}=H\}\) and

$$\begin{aligned} q_{H}=\mathbb {P}\left( u\in \hat{\mathcal {L}}^{\scriptscriptstyle (j)}\,\Big |\,E_H, u\in \hat{\mathcal {L}}^{\scriptscriptstyle (j-1)},1 \in \mathcal {L}^{\scriptscriptstyle (j)}\right) . \end{aligned}$$

Note that on the event that \(u\in \hat{\mathcal {L}}^{\scriptscriptstyle (j-1)}\) we have that \(u\in \hat{\mathcal {L}}^{\scriptscriptstyle (j)}\) if and only if \((X_n^{\scriptscriptstyle (j)}-u)\cdot e_1\ge 0\) for every \(n \ge \tau _u\), where \(\tau _u\) is the first hitting time of u by \(X^{\scriptscriptstyle (j)}\).

Lemma 3.5

For every H, \(q_H\ge \rho _{j}\), where \(\rho _\cdot \) was defined at (14).

Proof

Define \(R=\{u \in \hat{\mathcal {L}}^{\scriptscriptstyle (j)}\}\), \(R'=\{u\in \hat{\mathcal {L}}^{\scriptscriptstyle (j-1)}\}\), \(F=\{X^{\scriptscriptstyle (j)}_n\cdot e_1\ge 1, \, \forall n\in [1,\tau ]\}\) and \(F'=\{X^{\scriptscriptstyle (j)}_n\cdot e_1\ge 1, \,\forall n>\tau \}\), where \(\tau \) is the hitting time of level \(u\cdot e_1\) by \(X^{\scriptscriptstyle (j)}\). Then,

$$\begin{aligned} q_H&=\mathbb {P}\left( R \Big | E_H,R',F,F' \right) \\&=\dfrac{\mathbb {P}\left( R,E_H,R',F,F' \right) }{\mathbb {P}\left( E_H,R',F,F' \right) }\\&=\dfrac{\mathbb {P}\left( R,E_H,R',F \right) }{\mathbb {P}\left( F', E_H,R',F \right) }\\&=\dfrac{\mathbb {P}(R,E_H,F\,|\,R')}{\mathbb {P}(F',E_H,F\,|\,R')}. \end{aligned}$$

Given \(R'\), the event R only depends on the environment \(\mathcal {E}^{\scriptscriptstyle (j)}_{[u\cdot e_1-1,\infty )}=\mathcal {Z}^{\scriptscriptstyle (j)}_{[u\cdot e_1,\infty )}\cup \{(u-e_1,u)\}\) and the behaviour of the walker \(X^{\scriptscriptstyle (j)}\) on this graph from the first hitting time of u onwards. Given \(R'\), \(E_H\) only depends on \(\mathcal {E}^{\scriptscriptstyle (j)}_{(-\infty ,u\cdot e_1]}\), and F only depends on the behaviour of the walker \(X^{\scriptscriptstyle (j)}\) on that graph until the first hitting time of u. Hence R is conditionally independent of \(E_H,F\) given \(R'\) and

$$\begin{aligned} \dfrac{\mathbb {P}(R,E_H,F\,|\,R')}{\mathbb {P}(F',E_H,F\,|\,R')}=\dfrac{\mathbb {P}(R\,|\,R')\mathbb {P}(E_H,F\,|\,R')}{\mathbb {P}(F',E_H,F\,|\,R')}\ge \mathbb {P}(R\,|\,R'). \end{aligned}$$

The result follows since \(\mathbb {P}(R\,|\,R')=\rho _{j}\). \(\quad \square \)

Proof of Lemma 3.4

For \(i=0\) the result holds trivially, and part (0) is immediate from Lemma 3.3, since \(\mathbb {P}(L_1^{\scriptscriptstyle (i)}=1)=\mathbb {P}(1\in \mathcal {L}^{\scriptscriptstyle (i)})=\varepsilon _i>0\). We will prove (1) and (2) of the lemma for \(i \in \mathbb {Z}_+\) by induction on i.

Suppose that (1) and (2) hold for a fixed \(i\in \mathbb {Z}_+\). Then by Lemma 3.2, \(\mathcal {L}^{\scriptscriptstyle (i)}\) is infinite almost surely, and therefore (13) holds. Let \(A=\{1\in \mathcal {L}^{\scriptscriptstyle (i)}\}\) and \(B=\{1\in \mathcal {L}^{\scriptscriptstyle (i+1)}\}\). Then \(\mathbb {P}(B)=\varepsilon _{i+1}>0\) and \(\mathbb {P}(B|A)=\rho _{i+1}>0\).

Fig. 3
figure 3

A pictorial example where \(1=L_1^{\scriptscriptstyle (i)}\notin \mathcal {L}^{\scriptscriptstyle (i+1)}\). Here the edges traversed by the walker \(X^{\scriptscriptstyle (i+1)}\) before returning to level \(L_1^{\scriptscriptstyle (i)}-1=0\) are represented by the (pink) shaded part of the graph, and \(M_1=13\) and \(L_1^{\scriptscriptstyle (i+1)}\ge {\bar{M}}_1=L_4^{\scriptscriptstyle (i)}=14\)

Let us prove that claim (1) of the lemma holds for \((i+1)\). When the walk \(X^{\scriptscriptstyle (i+1)}\) first reaches \(L_j^{\scriptscriptstyle (i)}\) for some \(j\ge 1\), it has not viewed the environment to the right (which by Lemma 3.2 is independent of the environment to the left) and therefore by Lemma 3.3 the walk \(X^{\scriptscriptstyle (i+1)}\) has probability exactly equal to \(\rho _{i+1}=\varepsilon _{i+1}/\varepsilon _{i}>0\) of never backtracking from its current location. Let \({\bar{M}}_0=L_1^{\scriptscriptstyle (i)}\). If this non-backtracking event occurs with \(j=1\) then \(L_1^{\scriptscriptstyle (i+1)}={\bar{M}}_0\in \mathcal {L}^{\scriptscriptstyle (i+1)}\). If it does not occur for \(j=1\) then the walk \(X^{\scriptscriptstyle (i+1)}\) reached some maximal level \(M_1\) before returning to level 0, and the walk has not viewed the environment to the right of \({\bar{M}}_1:=\inf \{n>M_1: n \in \mathcal {L}^{\scriptscriptstyle (i)}\}\) (note that \({\bar{M}}_1\) is finite since \(\mathcal {L}^{\scriptscriptstyle (i)}\) is an infinite set). See for example Fig. 3. Thus when the walk reaches level \({\bar{M}}_1\in \mathcal {L}^{\scriptscriptstyle (i)}\) we again have probability exactly \(\rho _{i+1}\) that the walk \(X^{\scriptscriptstyle (i+1)}\) never backtracks from level \({\bar{M}}_1\) (and if this occurs then \(L_1^{\scriptscriptstyle (i+1)}={\bar{M}}_1\)). If the walk does backtrack from level \({\bar{M}}_1\) then it reached some maximal level \(M_2\) before backtracking, and we can repeat the above. We succeed in finding an \((i+1)\)-uber-regeneration level after a geometric \((\rho _{i+1})\) number of attempts, verifying part (1) of the lemma.

Fig. 4
figure 4

A pictorial example where \(1=L_1^{\scriptscriptstyle (i+1)}=L_1^{\scriptscriptstyle (i)}\). The edges traversed by the walk \(X^{\scriptscriptstyle (i+1)}\) after first reaching level \(L_2^{\scriptscriptstyle (i)}=5\) until returning to level \(L_2^{\scriptscriptstyle (i)}-1\) are represented by the (pink) shaded part of the graph. This walk cannot return to level \(L_1^{\scriptscriptstyle (i+1)}-1\) by definition of \(L_1^{\scriptscriptstyle (i+1)}\). Here \(N_1=13\) and \(L_2^{\scriptscriptstyle (i+1)}\ge {\bar{N}}_1=L_4^{\scriptscriptstyle (i)}=14\). When the walk \(X^{\scriptscriptstyle (i+1)}\) reaches level \({\bar{N}}_1\) it has probability at least \(\rho _{i+1}\) of succeeding in never visiting level \({\bar{N}}_1-1\) again

We can now prove part (2) of the lemma. From part (1) we know that there is some finite \(L_1^{\scriptscriptstyle (i+1)}\in \mathcal {L}^{\scriptscriptstyle (i+1)}\), and we want to show that \(L_2^{\scriptscriptstyle (i+1)}\in \mathcal {L}^{\scriptscriptstyle (i+1)}\) is almost surely finite. By Lemma 3.2 applied to \(L_1^{\scriptscriptstyle (i+1)}\in \mathcal {L}^{\scriptscriptstyle (i+1)}\) we may assume that \(L_1^{\scriptscriptstyle (i+1)}=1\) and \(X^{\scriptscriptstyle (i+1)}_{\tau _1^{\scriptscriptstyle (i,i+1)}}=e_1\) (i.e. without loss of generality we can fix the first \((i+1)\)-uber-regeneration point to be \(e_1\), and this point was therefore reached from 0). When the walk \(X^{\scriptscriptstyle (i+1)}\) reaches level \({\bar{N}}_0:=\inf \{n>1: n \in \mathcal {L}^{\scriptscriptstyle (i)}\}\) for the first time at some point \(u_0\) (whose first coordinate is \({\bar{N}}_0\)) we can apply Lemma 3.5 (with \(j=i+1\)) to conclude that with probability (depending on \(\mathcal {E}^{\scriptscriptstyle (i+1)}_{[0,u_0\cdot e_1]}\) but) at least \(\rho _{i+1}>0\) the walk never backtracks from \(u_0\). If this non-backtracking event occurs then \(L_2^{\scriptscriptstyle (i+1)}={\bar{N}}_0\in \mathcal {L}^{\scriptscriptstyle (i+1)}\) and we are done. Otherwise there is some maximal distance \(N_1<\infty \) reached by the walk before it returns to level \({\bar{N}}_0-1\). Let \({\bar{N}}_1:=\inf \{n>N_1: n \in \mathcal {L}^{\scriptscriptstyle (i)}\}\). See Fig. 4. When the walk first reaches level \({\bar{N}}_1\) at some point \(u_1\) then we can again apply Lemma 3.5 to get that with probability (depending on \(\mathcal {E}^{\scriptscriptstyle (i+1)}_{[0,u_1\cdot e_1]}\) but) at least \(\rho _{i+1}\) the walk never backtracks from \(u_1\), and if this occurs then \(L_2^{\scriptscriptstyle (i+1)}={\bar{N}}_1\in \mathcal {L}^{\scriptscriptstyle (i+1)}\) and we are done. We can repeat this procedure until we succeed in finding \(L_2^{\scriptscriptstyle (i+1)}\in \mathcal {L}^{\scriptscriptstyle (i+1)}\) after no more than a geometric \((\rho _{i+1})\) number of attempts. \(\quad \square \)

The final ingredient we need to deduce the existence of velocities is the finiteness of the expectation of the inter-regeneration distance.

Lemma 3.6

For each \(i \in \mathbb {Z}_+\),

$$\begin{aligned} \mathbb {E}[L^{\scriptscriptstyle (i)}_2-L_1^{\scriptscriptstyle (i)}]=\varepsilon _i^{-1}<\infty . \end{aligned}$$

Proof

This closely follows the proof of [22, Lemma 3.2.5] (attributed to Martin Zerner), but note that we deal with the (common) regeneration levels rather than the regeneration times (since these are not common to all walks).

By Lemma 3.3, \(\varepsilon _i=\lim _{\ell \rightarrow \infty }\mathbb {P}(\ell \in \mathcal {L}^{\scriptscriptstyle (i)})\). On the other hand,

$$\begin{aligned} \lim _{\ell \rightarrow \infty }\mathbb {P}(\ell \in \mathcal {L}^{\scriptscriptstyle (i)})&=\lim _{\ell \rightarrow \infty }\mathbb {P}(\exists k\ge 2: \ell =L^{\scriptscriptstyle (i)}_k)\\&=\lim _{\ell \rightarrow \infty }\sum _{n \ge 1}\mathbb {P}(L^{\scriptscriptstyle (i)}_1=n,\exists k\ge 2: L^{\scriptscriptstyle (i)}_k-L^{\scriptscriptstyle (i)}_1=\ell -n)\\&=\lim _{\ell \rightarrow \infty }\sum _{n \ge 1}\mathbb {P}(L^{\scriptscriptstyle (i)}_1=n)\mathbb {P}(\exists k\ge 2: L^{\scriptscriptstyle (i)}_k-L^{\scriptscriptstyle (i)}_1=\ell -n). \end{aligned}$$

By the renewal theorem and the fact that \(\mathbb {P}(L^{\scriptscriptstyle (i)}_2-L^{\scriptscriptstyle (i)}_1=m)>0\) for each \(m\in \mathbb {N}\) we have

$$\begin{aligned} \lim _{\ell \rightarrow \infty }\mathbb {P}(\exists k\ge 2: L^{\scriptscriptstyle (i)}_k-L^{\scriptscriptstyle (i)}_1=\ell -n)=\mathbb {E}[L^{\scriptscriptstyle (i)}_2-L_1^{\scriptscriptstyle (i)}]^{-1}. \end{aligned}$$

It is then easy to show that

$$\begin{aligned} \lim _{\ell \rightarrow \infty }\sum _{n \ge 1}\mathbb {P}(L^{\scriptscriptstyle (i)}_1=n)\mathbb {P}(\exists k\ge 2: L^{\scriptscriptstyle (i)}_k-L^{\scriptscriptstyle (i)}_1=\ell -n)=\mathbb {E}[L^{\scriptscriptstyle (i)}_2-L_1^{\scriptscriptstyle (i)}]^{-1}, \end{aligned}$$

and the result follows. \(\quad \square \)
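The identity of Lemma 3.6 is also easy to test numerically. The following Python sketch does so in the simplest illustrative setting of a single nearest-neighbour walk on \(\mathbb {Z}\) with upward step probability \(p>1/2\) (a stand-in for the model, not the multi-walk setting of this paper), for which the no-backtracking probability is \(\varepsilon =1-(1-p)/p\); by the renewal argument above, the empirical density of regeneration levels should then be close to \(\varepsilon =\mathbb {E}[L_2-L_1]^{-1}\).

```python
import random

def regeneration_level_density(p=0.75, n_steps=200_000, seed=0):
    """Empirical density of regeneration levels for a biased walk on Z.

    A level l >= 1 is a regeneration level if, after first hitting l, the
    walk never goes below l; for upward step probability p > 1/2 this has
    probability eps = 1 - (1 - p)/p, which by the renewal theorem equals
    1/E[L_2 - L_1].
    """
    rng = random.Random(seed)
    traj, pos = [0], 0
    for _ in range(n_steps):
        pos += 1 if rng.random() < p else -1
        traj.append(pos)
    # suffix_min[k] = min of traj[k:], so "never below l from time k on"
    # is the condition suffix_min[k] >= l
    suffix_min = traj[:]
    for k in range(len(traj) - 2, -1, -1):
        suffix_min[k] = min(suffix_min[k], suffix_min[k + 1])
    first_hit = {}
    for k, x in enumerate(traj):
        first_hit.setdefault(x, k)
    # only test levels well below the running maximum, so that the
    # no-backtracking event is decided within the simulated horizon
    levels = range(1, max(traj) - 200)
    regen = sum(1 for l in levels if suffix_min[first_hit[l]] >= l)
    eps = 1 - (1 - p) / p
    print(f"empirical density {regen / len(levels):.4f} vs eps = {eps:.4f}")

regeneration_level_density()
```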

Proof of Theorem 1(b)

It now follows from standard arguments invoking the law of large numbers for the i.i.d. sequence of processes between regeneration levels that, almost surely,

$$\begin{aligned} \frac{X^{\scriptscriptstyle (i)}_n}{n}\cdot e_1\rightarrow \frac{\mathbb {E}[L^{\scriptscriptstyle (i)}_{2}-L^{\scriptscriptstyle (i)}_{1}]}{\mathbb {E}[\tau ^{\scriptscriptstyle (i,i)}_{2}-\tau ^{\scriptscriptstyle (i,i)}_{1}]}, \end{aligned}$$

where the right-hand side is deterministic, and is equal to zero if and only if \(\mathbb {E}[\tau ^{\scriptscriptstyle (i,i)}_{2}-\tau ^{\scriptscriptstyle (i,i)}_{1}]=\infty \). If \(v^{\scriptscriptstyle (i)}\cdot e_1>0\) then Lemma 3.1(2) applies, and otherwise Lemma 3.1(3) applies to give the result. \(\quad \square \)

4 Ballistic Phase

The aim of this section is to prove part (a) of Theorem 2, which characterises the ballistic phase. This will be done by estimating the expected time for the ith random walk \(X^{\scriptscriptstyle (i)}\) to progress a unit distance in the \(\ell ^{\scriptscriptstyle (i)}\) direction (see Lemma 4.2). First, though, we confirm that \(t_{\scriptscriptstyle (i)}\), as introduced below (2), is well-defined and can be used to control the backtracking probability for the original random walk \(X^{\scriptscriptstyle (0)}\) in the direction \(\ell ^{\scriptscriptstyle (i)}\). We recall the notation \(\varphi _{\scriptscriptstyle (i)}(t)={\mathbb {E}}[\exp \{-tX^{\scriptscriptstyle (0)}_1\cdot \ell ^{\scriptscriptstyle (i)}\}]\) from (2).

Lemma 4.1

Assume Condition 1. For each \(i\in {\mathbb {N}}\cup \{\infty \}\), there exists a unique solution \(t_{\scriptscriptstyle (i)}>0\) to the equation \(\varphi _{\scriptscriptstyle (i)}(t)=1\). Moreover, for all \(h\ge 0\),

$$\begin{aligned} {\mathbb {P}}\left( -\min _{n\ge 0}X_n^{\scriptscriptstyle (0)}\cdot \ell ^{\scriptscriptstyle (i)}\ge h\right) \le \mathrm {e}^{-t_{\scriptscriptstyle (i)}h}. \end{aligned}$$

Proof

Fix \(i\in {\mathbb {N}}\cup \{\infty \}\). Clearly \(\varphi _{\scriptscriptstyle (i)}(0)=1\). Since \(\ell ^{\scriptscriptstyle (i)}\ne 0\), we have \(\ell ^{\scriptscriptstyle (i)}\cdot e>0\) for some \(e\in \{\pm e_j:\,j=1,\dots ,d\}\). Moreover,

$$\begin{aligned} \varphi _{\scriptscriptstyle (i)}(t)\ge p^{\scriptscriptstyle (0)}(-e)\exp \{t e\cdot \ell ^{\scriptscriptstyle (i)}\}, \end{aligned}$$

so \(\varphi _{\scriptscriptstyle (i)}(t)\rightarrow \infty \) as \(t\rightarrow \infty \). Next, since \(\varphi _{\scriptscriptstyle (i)}\) is a sum of convex functions, at least one of which is strictly convex, it is itself strictly convex. Thus to check the first claim of the lemma it will suffice to show that \(\varphi _{\scriptscriptstyle (i)}'(0)<0\). To this end, we compute the relevant derivative directly,

$$\begin{aligned} \varphi _{\scriptscriptstyle (i)}'(0)=-{\mathbb {E}}[X^{\scriptscriptstyle (0)}_1\cdot \ell ^{\scriptscriptstyle (i)}]=-\delta ^{\scriptscriptstyle (0)}\cdot \ell ^{\scriptscriptstyle (i)}, \end{aligned}$$

and observe that this is strictly negative by Condition 1(c). Given the existence of \(t_{\scriptscriptstyle (i)}>0\), the remaining claim of the lemma is an immediate application of Lundberg’s inequality [1, Corollary II.3.4]. \(\quad \square \)
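To make the lemma concrete, the following Python sketch locates \(t_{\scriptscriptstyle (i)}\) by bisection in an illustrative special case: the d=2 biases of Fig. 1 together with the assumption (made for this example only) that \(\ell ^{\scriptscriptstyle (i)}=e_1\), so that \(\varphi _{\scriptscriptstyle (i)}(t)=(3\mathrm {e}^{-t}+2\mathrm {e}^{t}+4)/9\) and the root is \(t_{\scriptscriptstyle (i)}=\log (3/2)\) exactly.

```python
import math

# Step distribution of X_1 . l for the d = 2 biases of Fig. 1, under the
# illustrative assumption l = e_1:
# P(step . e_1 = +1) = 3/9, P(step . e_1 = -1) = 2/9, P(step . e_1 = 0) = 4/9.
proj = {1: 3 / 9, -1: 2 / 9, 0: 4 / 9}

def phi(t):
    """phi(t) = E[exp(-t X_1 . l)]."""
    return sum(p * math.exp(-t * s) for s, p in proj.items())

# phi is strictly convex with phi(0) = 1 and phi'(0) < 0, so phi(t) = 1
# has a unique root t > 0; bracket it and bisect.
lo, hi = 1e-9, 1.0
while phi(hi) <= 1:
    hi *= 2
for _ in range(100):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if phi(mid) < 1 else (lo, mid)

t_star = (lo + hi) / 2
print(f"t = {t_star:.6f} (exact: log(3/2) = {math.log(1.5):.6f}), "
      f"alpha = {math.exp(t_star):.4f}")
```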

To state the next lemma, we define the stopping times

$$\begin{aligned} T^{\scriptscriptstyle (i,j)}_{n}:=\inf \left\{ m:\,X^{\scriptscriptstyle (i)}_m\cdot \ell ^{\scriptscriptstyle (j)}\ge n\right\} , \end{aligned}$$

which under Condition 1 are almost surely finite by Theorem 1(a), and recall that \(\alpha _{\scriptscriptstyle (i)}:=\mathrm {e}^{t_{\scriptscriptstyle (i)}}>1\).

Lemma 4.2

Assume Condition 1. For each \(i\in {\mathbb {N}}\cup \{\infty \}\), if \(\beta _{\scriptscriptstyle (i)}<\alpha _{\scriptscriptstyle (i)}\), then

$$\begin{aligned} \sup _{n\ge 0}{\mathbb {E}}\left[ T^{\scriptscriptstyle (i,i)}_{n+1}-T^{\scriptscriptstyle (i,i)}_{n}\right] <\infty . \end{aligned}$$

Proof

For \(n\in {\mathbb {R}}\), define \({\mathcal {Z}}^{\scriptscriptstyle (i)}_n:=\{x\in {\mathcal {Z}}^{\scriptscriptstyle (i)}: x\cdot \ell ^{\scriptscriptstyle (i)}\in [n,n+1)\}\). We then have that

$$\begin{aligned} {\mathbb {E}}\left[ T^{\scriptscriptstyle (i,i)}_{n+1}-T^{\scriptscriptstyle (i,i)}_{n}\right] \le {\mathbb {E}}\left[ \max _{x\in {\mathcal {Z}}^{\scriptscriptstyle (i)}_n}E^{\scriptscriptstyle (i)}_x\left[ T^{\scriptscriptstyle (i,i)}_{n+1}\right] \right] , \end{aligned}$$

where \(E_x^{\scriptscriptstyle (i)}\) denotes expectation with respect to the law \(P_x^{\scriptscriptstyle (i)}\) of a \(\varvec{p}^{\scriptscriptstyle (i)}\) random walk on \(\mathcal {Z}^{\scriptscriptstyle (i)}\) starting from \(x\in \mathcal {Z}^{\scriptscriptstyle (i)}\). Now, setting

$$\begin{aligned} T^{\scriptscriptstyle (i,j)}_{-m,n}:=\inf \left\{ k:\,X^{\scriptscriptstyle (i)}_k\cdot \ell ^{\scriptscriptstyle (j)}\not \in (-m,n)\right\} , \end{aligned}$$

the monotone convergence theorem yields that, for \({\mathbb {P}}\)-a.e. realisation of \({\mathcal {Z}}^{\scriptscriptstyle (i)}\), and every x in the finite set \({\mathcal {Z}}^{\scriptscriptstyle (i)}_n\), we have that

$$\begin{aligned} E^{\scriptscriptstyle (i)}_x\left[ T^{\scriptscriptstyle (i,i)}_{n+1}\right] =\lim _{m\rightarrow \infty }E^{\scriptscriptstyle (i)}_x\left[ T^{\scriptscriptstyle (i,i)}_{-m,n+1}\right] . \end{aligned}$$

Moreover, applying well-known estimates for random walks on electrical networks (e.g. characterisations of the Green’s function in terms of effective resistance from [2, Theorem 1.31 and Proposition 2.55]), we have that

$$\begin{aligned} E^{\scriptscriptstyle (i)}_x\left[ T^{\scriptscriptstyle (i,i)}_{-m,n+1}\right] \le R^{\scriptscriptstyle (i)}(x,{\mathcal {Z}}^{\scriptscriptstyle (i)}_{n+1})\sum _{l=-m+1}^{n}\sum _{y\in {\mathcal {Z}}^{\scriptscriptstyle (i)}_l}c^{\scriptscriptstyle (i)}(y), \end{aligned}$$

where \(R^{\scriptscriptstyle (i)}(x,{\mathcal {Z}}^{\scriptscriptstyle (i)}_{n+1})\) is the effective resistance from x to \({\mathcal {Z}}^{\scriptscriptstyle (i)}_{n+1}\) in \({\mathcal {Z}}^{\scriptscriptstyle (i)}\) (equipped with the conductances \(c^{\scriptscriptstyle (i)}(\cdot ,\cdot )\)). Hence (with \({\bar{c}}\) as in the proof of Lemma 3.3) we obtain that

$$\begin{aligned}&{\mathbb {E}}\left[ T^{\scriptscriptstyle (i,i)}_{n+1}-T^{\scriptscriptstyle (i,i)}_{n}\right] \nonumber \\&\quad \le {\mathbb {E}}\left[ \max _{x\in {\mathcal {Z}}^{\scriptscriptstyle (i)}_n}R^{\scriptscriptstyle (i)}(x,{\mathcal {Z}}^{\scriptscriptstyle (i)}_{n+1})\sum _{m\le n}\sum _{y\in {\mathcal {Z}}^{\scriptscriptstyle (i)}_m}c^{\scriptscriptstyle (i)}(y)\right] \nonumber \\&\quad \le {\mathbb {E}}\left[ \sum _{l=T^{\scriptscriptstyle (0,i)}_{n}}^{{\bar{T}}^{\scriptscriptstyle (0,i)}_{n}}r^{\scriptscriptstyle (i)}(X^{\scriptscriptstyle (0)}_l,X^{\scriptscriptstyle (0)}_{l+1})\sum _{m\le {\bar{T}}^{\scriptscriptstyle (0,i)}_{n}}{\bar{c}}^{\scriptscriptstyle (i)}(X^{\scriptscriptstyle (0)}_m){\mathbf {1}}_{\{X^{\scriptscriptstyle (0)}_m\cdot \ell ^{\scriptscriptstyle (i)}<n+1\}}\right] , \end{aligned}$$
(15)

where \({\bar{T}}^{\scriptscriptstyle (0,i)}_{n}\) is the time of the last visit by \(X^{\scriptscriptstyle (0)}\) to the set \({\mathcal {Z}}^{\scriptscriptstyle (i)}_{X^{\scriptscriptstyle (0)}_{T_n^{\scriptscriptstyle (0,i)}}\cdot \ell ^{\scriptscriptstyle (i)}}\). We will estimate this expectation by decomposing it into various pieces. Firstly, define

$$\begin{aligned} a_{(i),n}:=\beta _{\scriptscriptstyle (i)}^{X_{T^{\scriptscriptstyle (0,i)}_{n}}^{\scriptscriptstyle (0)}\cdot \ell ^{\scriptscriptstyle (i)}}, \end{aligned}$$

and then set

$$\begin{aligned} \Upsilon :=a_{(i),n}\sum _{m={T}^{\scriptscriptstyle (0,i)}_{n}}^{{\bar{T}}^{\scriptscriptstyle (0,i)}_{n}}r^{\scriptscriptstyle (i)}(X^{\scriptscriptstyle (0)}_m,X^{\scriptscriptstyle (0)}_{m+1}),\\\Gamma _1:=a_{(i),n}^{-1}\sum _{m\le T^{\scriptscriptstyle (0,i)}_{n}-1}{\bar{c}}^{\scriptscriptstyle (i)}(X^{\scriptscriptstyle (0)}_m), \end{aligned}$$

and

$$\begin{aligned} \Gamma _2:=a_{(i),n}^{-1}\sum _{m=T^{\scriptscriptstyle (0,i)}_{n}}^{{\bar{T}}^{\scriptscriptstyle (0,i)}_{n}}{\bar{c}}^{\scriptscriptstyle (i)}(X^{\scriptscriptstyle (0)}_m){\mathbf {1}}_{\{X^{\scriptscriptstyle (0)}_m\cdot \ell ^{\scriptscriptstyle (i)}<n+1\}}. \end{aligned}$$

Note that \(\Upsilon \) only depends on the path \((X^{\scriptscriptstyle (0)}_{T^{\scriptscriptstyle (0,i)}_{n}+m}-X^{\scriptscriptstyle (0)}_{T^{\scriptscriptstyle (0,i)}_{n}})_{m\ge 0}\), and so is independent of \(\Gamma _1\). Letting \(\Vert X\Vert _p:=\mathbb {E}[|X|^p]^{1/p}\), we see that for any \(p,q>1\) such that \(p^{-1}+q^{-1}=1\), the expectation at (15) is bounded above by

$$\begin{aligned} {\mathbb {E}}\left[ \Upsilon \left( \Gamma _1+\Gamma _2\right) \right] ={\mathbb {E}}\left[ \Upsilon \Gamma _1\right] +{\mathbb {E}}\left[ \Upsilon \Gamma _2\right] \le {\mathbb {E}}\left[ \Upsilon \right] {\mathbb {E}}\left[ \Gamma _1\right] + \left\| \Upsilon \right\| _p \left\| \Gamma _2\right\| _q. \end{aligned}$$

To bound the terms involving \(\Upsilon \), observe that, for some \(C>1\),

$$\begin{aligned}&\sup _{n\ge 0}{\mathbb {P}}\left( \max _{m\in \{T^{\scriptscriptstyle (0,i)}_{n},\dots ,{\bar{T}}^{\scriptscriptstyle (0,i)}_{n}\}}a_{(i),n}r^{\scriptscriptstyle (i)}(X^{\scriptscriptstyle (0)}_m,X^{\scriptscriptstyle (0)}_{m+1})\ge \lambda \right) \\&\quad \le \sup _{n\ge 0}{\mathbb {P}}\left( \max _{m\in \{T^{\scriptscriptstyle (0,i)}_{n},\dots ,{\bar{T}}^{\scriptscriptstyle (0,i)}_{n}\}}C\beta _{\scriptscriptstyle (i)}^{\left( X_{T^{\scriptscriptstyle (0,i)}_{n}}^{\scriptscriptstyle (0)}-X_m^{\scriptscriptstyle (0)}\right) \cdot \ell ^{\scriptscriptstyle (i)}}\ge \lambda \right) . \end{aligned}$$

Taking logs and using the Markov property and Lemma 4.1, for all \(\lambda >C\) this is at most

$$\begin{aligned} {\mathbb {P}}\left( -\min _{m\ge 0}X_m^{\scriptscriptstyle (0)}\cdot \ell ^{\scriptscriptstyle (i)}\ge \frac{\log (\lambda /C)}{\log \beta _{\scriptscriptstyle (i)}}\right) \le \mathrm {e}^{-\frac{t_{\scriptscriptstyle (i)}\log (\lambda /C)}{\log \beta _{\scriptscriptstyle (i)}}}\le c\lambda ^{-\frac{\log \alpha _{\scriptscriptstyle (i)}}{\log \beta _{\scriptscriptstyle (i)}}}. \end{aligned}$$

In particular, for p sufficiently close to 1,

$$\begin{aligned} \sup _{n\ge 0}\left\| \max _{m\in \{T^{\scriptscriptstyle (0,i)}_{n},\dots ,{\bar{T}}^{\scriptscriptstyle (0,i)}_{n}\}}a_{(i),n}r^{\scriptscriptstyle (i)}(X^{\scriptscriptstyle (0)}_m,X^{\scriptscriptstyle (0)}_{m+1})\right\| _{p}<\infty . \end{aligned}$$
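Indeed, writing M for the maximum appearing in this norm, the polynomial tail just derived integrates to a finite pth moment, and it is precisely here that the assumption \(\beta _{\scriptscriptstyle (i)}<\alpha _{\scriptscriptstyle (i)}\) enters: it makes the tail exponent \(\log \alpha _{\scriptscriptstyle (i)}/\log \beta _{\scriptscriptstyle (i)}\) strictly greater than 1, so that for any \(1\le p<\log \alpha _{\scriptscriptstyle (i)}/\log \beta _{\scriptscriptstyle (i)}\),

$$\begin{aligned} {\mathbb {E}}\left[ M^p\right] =\int _0^\infty p\lambda ^{p-1}{\mathbb {P}}\left( M\ge \lambda \right) \mathrm {d}\lambda \le C^p+cp\int _C^\infty \lambda ^{p-1-\frac{\log \alpha _{\scriptscriptstyle (i)}}{\log \beta _{\scriptscriptstyle (i)}}}\mathrm {d}\lambda <\infty . \end{aligned}$$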

Since

$$\begin{aligned} \left\| \Upsilon \right\| _p\le \left\| \max _{m\in \{T^{\scriptscriptstyle (0,i)}_{n},\dots ,{\bar{T}}^{\scriptscriptstyle (0,i)}_{n}\}}a_{(i),n}r^{\scriptscriptstyle (i)}(X^{\scriptscriptstyle (0)}_m,X^{\scriptscriptstyle (0)}_{m+1})\right\| _{p^2}\left\| {\bar{T}}_{n}^{\scriptscriptstyle (0,i)}+1-T_{n}^{\scriptscriptstyle (0,i)}\right\| _{pq}, \end{aligned}$$

to complete the proof that \(\sup _{n\ge 0}\Vert \Upsilon \Vert _p<\infty \), it will thus be sufficient to check that

$$\begin{aligned} \sup _{n\ge 0}\left\| {\bar{T}}_{n}^{\scriptscriptstyle (0,i)}-T_{n}^{\scriptscriptstyle (0,i)}\right\| _q<\infty \end{aligned}$$
(16)

for arbitrary \(q\ge 1\), as we will do below. We continue for the moment, however, by dealing with \(\Gamma _1\), which satisfies

$$\begin{aligned} \Gamma _1\le & {} C\sum _{m\le T^{\scriptscriptstyle (0,i)}_{n}-1}\beta _{\scriptscriptstyle (i)}^{\left( X^{\scriptscriptstyle (0)}_m-X_{T^{\scriptscriptstyle (0,i)}_{n}}^{\scriptscriptstyle (0)}\right) \cdot \ell ^{\scriptscriptstyle (i)}}\\\le & {} C\sum _{m\le n}\beta _{\scriptscriptstyle (i)}^{m+1-n}\#\left\{ l\ge 0:\,X^{\scriptscriptstyle (0)}_l\in {\mathcal {Z}}_m^{\scriptscriptstyle (i)}\right\} . \end{aligned}$$

Thus, taking expectations, we find that

$$\begin{aligned} \sup _{n\ge 0}\mathbb {E}[\Gamma _1]\le C\sup _{n\in {\mathbb {Z}}}\mathbb {E}\left[ \#\left\{ l\ge 0:\,X^{\scriptscriptstyle (0)}_l\in {\mathcal {Z}}_n^{\scriptscriptstyle (i)}\right\} \right] \le \frac{C\sup _{n\ge 0}\mathbb {E}\left[ {\tilde{T}}^{\scriptscriptstyle (0,i)}_{n}-T^{\scriptscriptstyle (0,i)}_{n}\right] }{{\mathbb {P}}\left( \min _{m\ge 0}X_m^{\scriptscriptstyle (0)}\cdot \ell ^{\scriptscriptstyle (i)}\ge 0\right) },\nonumber \\ \end{aligned}$$
(17)

where \({\tilde{T}}^{\scriptscriptstyle (0,i)}_{n}\) is the first time m after \(T_n^{\scriptscriptstyle (0,i)}\) such that \((X_m^{\scriptscriptstyle (0)}-X_{T_n^{\scriptscriptstyle (0,i)}}^{\scriptscriptstyle (0)})\cdot \ell ^{\scriptscriptstyle (i)}\ge 1\). We have obtained the expression on the right-hand side here by considering the number of returns by \(X^{\scriptscriptstyle (0)}\) from \({\mathcal {Z}}_{n+1}^{\scriptscriptstyle (i)}\) to \({\mathcal {Z}}_{n}^{\scriptscriptstyle (i)}\): since \(X^{\scriptscriptstyle (0)}\) is simply a biased random walk on \({\mathbb {Z}}^d\), this number is stochastically dominated by a geometric random variable with parameter \({\mathbb {P}}(\min _{m\ge 0}X_m^{\scriptscriptstyle (0)}\cdot \ell ^{\scriptscriptstyle (i)}\ge 0)\), and the duration of each excursion back into \({\mathcal {Z}}_n^{\scriptscriptstyle (i)}\) is stochastically dominated by \({\tilde{T}}^{\scriptscriptstyle (0,i)}_{n}-T^{\scriptscriptstyle (0,i)}_{n}\). Now, [11, Theorem II] implies that

$$\begin{aligned} \sup _{n\ge 0}{\mathbb {P}}\left( {\tilde{T}}_{n}^{\scriptscriptstyle (0,i)}-T_{n}^{\scriptscriptstyle (0,i)}\ge \lambda \right) \le C\mathrm {e}^{-c\lambda }, \end{aligned}$$
(18)

which in turn yields that the numerator on the right-hand side of (17) is finite. Moreover, we also have that \({\mathbb {P}}(\min _{m\ge 0}X_m^{\scriptscriptstyle (0)}\cdot \ell ^{\scriptscriptstyle (i)}\ge 0)>0\), and so we have established the desired bound for \(\Gamma _1\). Finally, the definition of \(\Gamma _2\) implies

$$\begin{aligned} \Gamma _2\le C\sum _{m=T^{\scriptscriptstyle (0,i)}_{n}}^{{\bar{T}}^{\scriptscriptstyle (0,i)}_{n}}\beta _{\scriptscriptstyle (i)}^{\left( X^{\scriptscriptstyle (0)}_m-X_{T^{\scriptscriptstyle (0,i)}_{n}}^{\scriptscriptstyle (0)}\right) \cdot \ell ^{\scriptscriptstyle (i)}}{\mathbf {1}}_{\{X^{\scriptscriptstyle (0)}_m\cdot \ell ^{\scriptscriptstyle (i)}<n+1\}}\le C\left( {\bar{T}}^{\scriptscriptstyle (0,i)}_{n}-T^{\scriptscriptstyle (0,i)}_{n}\right) . \end{aligned}$$

Now, arguing as in the previous paragraph, we have that \({\bar{T}}^{\scriptscriptstyle (0,i)}_{n}-T^{\scriptscriptstyle (0,i)}_{n}\) is stochastically dominated by \(\sum _{k=1}^G\xi _k\), where G is a geometric \({\mathbb {P}}(\min _{m\ge 0}X_m^{\scriptscriptstyle (0)}\cdot \ell ^{\scriptscriptstyle (i)}\ge 0)\) random variable, and \((\xi _k)_{k\ge 1}\) are independent copies of \({\tilde{T}}^{\scriptscriptstyle (0,i)}_{n}-T^{\scriptscriptstyle (0,i)}_{n}\), independent of G. We thus obtain, by a simple adaptation of the argument leading to Wald's identity, that, for each integer \(q\in {\mathbb {N}}\),

$$\begin{aligned} \sup _{n\ge 0}{\mathbb {E}}[\Gamma _2^q]\le C {\mathbb {E}}[G^q]{\mathbb {E}}[\xi ^q], \end{aligned}$$

and hence we obtain from the finiteness of the moments of the geometric distribution and (18) that \(\sup _{n\ge 0}\Vert \Gamma _2\Vert _q<\infty \) for arbitrary \(q\ge 1\). Since the latter part of the argument also implies (16), the proof is complete. \(\quad \square \)
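To spell out the Wald-type step used here (a short derivation, applying Jensen's inequality conditionally on G and using that G is independent of \((\xi _k)_{k\ge 1}\) and takes values in \({\mathbb {N}}\), so that \(G\ge 1\)):

$$\begin{aligned} {\mathbb {E}}\left[ \Big (\sum _{k=1}^G\xi _k\Big )^q\right] ={\mathbb {E}}\left[ G^q\Big (\frac{1}{G}\sum _{k=1}^G\xi _k\Big )^q\right] \le {\mathbb {E}}\left[ G^{q-1}\sum _{k=1}^G\xi _k^q\right] ={\mathbb {E}}\left[ G^q\right] {\mathbb {E}}\left[ \xi _1^q\right] . \end{aligned}$$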

Proof of Theorem 2(a)

From Theorem 1 and the definition of \(T_n^{\scriptscriptstyle (i,i)}\), we have

$$\begin{aligned} \frac{X^{\scriptscriptstyle (i)}_{T_n^{\scriptscriptstyle (i,i)}}\cdot \ell ^{\scriptscriptstyle (i)}}{T_n^{\scriptscriptstyle (i,i)}}\rightarrow v:=v^{\scriptscriptstyle (i)}\cdot \ell ^{\scriptscriptstyle (i)}, \end{aligned}$$

and

$$\begin{aligned} \frac{X^{\scriptscriptstyle (i)}_{T_n^{\scriptscriptstyle (i,i)}}\cdot \ell ^{\scriptscriptstyle (i)}}{n}\rightarrow 1 \end{aligned}$$

almost surely. If \(v=0\), then from the above we have that \(n^{-1}T_n^{\scriptscriptstyle (i,i)} \rightarrow \infty \) almost surely, so for any M,

$$\begin{aligned} \mathbb {P}(n^{-1}T_n^{\scriptscriptstyle (i,i)}>M) \rightarrow 1 \end{aligned}$$

as \(n\rightarrow \infty \). However, by Lemma 4.2 there exists \(C>0\) such that \(\mathbb {E}[T_n^{\scriptscriptstyle (i,i)}]\le Cn\) for every n. Together with Markov’s inequality, this gives

$$\begin{aligned} \mathbb {P}( n^{-1}T_n^{\scriptscriptstyle (i,i)}>M)\le \frac{\mathbb {E}[n^{-1}T_n^{\scriptscriptstyle (i,i)}]}{M}\le \frac{C}{M}. \end{aligned}$$

For \(M>2C\) this is less than 1/2, and hence we conclude that \(v\ne 0\). \(\quad \square \)

5 Sub-ballistic Phase

In this section, we describe conditions under which the walk \(X^{\scriptscriptstyle (i)}\) is sub-ballistic, namely we prove Theorem 2(b). Throughout we work under the assumption that Condition 1 holds. To establish the main result, we consider a regeneration structure for \(X^{\scriptscriptstyle (0)}\) in the direction \(\ell ^{\scriptscriptstyle (i)}\) for each i, as defined above Lemma 2.1. Moreover, we need to define what it means to have a trap of height h for the walk \(X^{\scriptscriptstyle (i)}\) in the jth regeneration block for \(X^{\scriptscriptstyle (0)}\) with respect to the direction \(\ell ^{\scriptscriptstyle (i)}\). In particular, let \({\mathcal {E}}^{\scriptscriptstyle (i)}_j(h)\) be the event that there exist \(m,n\) with \({\mathcal {T}}^{\ell ^{\scriptscriptstyle (i)}}_j\le m\le n\le {\mathcal {T}}^{\ell ^{\scriptscriptstyle (i)}}_{j+1}\) such that \(X_m^{\scriptscriptstyle (0)}\) and \(X_n^{\scriptscriptstyle (0)}\) are cut-points for \(X^{\scriptscriptstyle (0)}\) (i.e. for \(n'\in \{m,n\}\) we have that \(\{X_{m'}^{\scriptscriptstyle (0)}:\,m'\le n'\}\cap \{X_{m'}^{\scriptscriptstyle (0)}:\,m'> n'\}=\emptyset \)) and

$$\begin{aligned} \left\lfloor \left( X_m^{\scriptscriptstyle (0)}-X_{n}^{\scriptscriptstyle (0)}\right) \cdot \ell ^{\scriptscriptstyle (i)}\right\rfloor = h. \end{aligned}$$
(19)

The key ingredient of the argument to demonstrate sub-ballisticity is the following, which reveals that the probability of creating a trap in a regeneration block decays at an exponential rate no greater than that of the backtracking probability for \(X^{\scriptscriptstyle (0)}\cdot \ell ^{\scriptscriptstyle (i)}\) (recall Lemma 4.1). (NB. Establishing the reverse inequality is much easier, but we do not need it in what follows.)

Lemma 5.1

Assume Condition 1. For each \(i\in {\mathbb {N}}\cup \{\infty \}\),

$$\begin{aligned} \limsup _{h\rightarrow \infty }\frac{-\log {\mathbb {P}}\left( {\mathcal {E}}^{\scriptscriptstyle (i)}_j(h)\right) }{h}\le t_{\scriptscriptstyle (i)}, \end{aligned}$$

where \(t_{\scriptscriptstyle (i)}\) was defined below (2). (NB. By the fact that regeneration blocks are identically distributed, the left-hand side is independent of \(j\ge 1\).)

Before proving this result, we explain how it is used to establish Theorem 2(b). To this end, for \(n\in {\mathbb {N}}\) and \(\varepsilon \in (0,1)\), we introduce a parameter

$$\begin{aligned} h^{\scriptscriptstyle (i)}_{n,\varepsilon }=\frac{(1-\varepsilon )\log n}{t_{\scriptscriptstyle (i)}} \end{aligned}$$

that will be used to define what it means for a trap to be big for \(X^{\scriptscriptstyle (i)}\) at scale n. In particular, we set

$$\begin{aligned} N^{\scriptscriptstyle (i)}_{n,\varepsilon }:=\#\left\{ j\in \{1,\dots ,n-1\}\,:\, {\mathcal {E}}^{\scriptscriptstyle (i)}_j(h^{\scriptscriptstyle (i)}_{n,\varepsilon }) \text{ occurs }\right\} . \end{aligned}$$

The following lemma tells us that this random variable grows polynomially with n.

Lemma 5.2

Assume Condition 1, and let \(i\in {\mathbb {N}}\cup \{\infty \}\). For any \(\varepsilon \in (0,1)\), it holds that

$$\begin{aligned} \lim _{n\rightarrow \infty }{\mathbb {P}}\left( N_{n,\varepsilon }^{\scriptscriptstyle (i)}\ge n^{\varepsilon /2}\right) =1. \end{aligned}$$

Proof

Recall the definition of \(\alpha _{\scriptscriptstyle (i)}\) from (3). Lemma 5.1 gives us that, for large n,

$$\begin{aligned} {\mathbb {P}}\left( {\mathcal {E}}^{\scriptscriptstyle (i)}_j(h^{\scriptscriptstyle (i)}_{n,\varepsilon })\right) \ge \alpha _{\scriptscriptstyle (i)}^{-\left( \frac{1-3\varepsilon /4}{1-\varepsilon }\right) h^{\scriptscriptstyle (i)}_{n,\varepsilon }}=n^{-1+3\varepsilon /4}. \end{aligned}$$
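Here the equality is simply the substitution \(\alpha _{\scriptscriptstyle (i)}=\mathrm {e}^{t_{\scriptscriptstyle (i)}}\) together with the definition of \(h^{\scriptscriptstyle (i)}_{n,\varepsilon }\):

$$\begin{aligned} \alpha _{\scriptscriptstyle (i)}^{-\left( \frac{1-3\varepsilon /4}{1-\varepsilon }\right) h^{\scriptscriptstyle (i)}_{n,\varepsilon }}=\exp \left( -t_{\scriptscriptstyle (i)}\cdot \frac{1-3\varepsilon /4}{1-\varepsilon }\cdot \frac{(1-\varepsilon )\log n}{t_{\scriptscriptstyle (i)}}\right) =n^{-(1-3\varepsilon /4)}. \end{aligned}$$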

Hence, applying the i.i.d. structure of regeneration blocks of Lemma 2.1, and writing \(\mathrm {Bin}(n,p)\) for a binomial random variable with parameters n and p,

$$\begin{aligned} {\mathbb {P}}\left( N^{\scriptscriptstyle (i)}_{n,\varepsilon }<n^{\varepsilon /2}\right)\le & {} {\mathbb {P}}\left( \mathrm {Bin}\left( n-1,n^{-1+3\varepsilon /4}\right) < n^{\varepsilon /2}\right) \\\le & {} \frac{\mathrm {Var}\left( \mathrm {Bin}\left( n-1,n^{-1+3\varepsilon /4}\right) \right) }{\left( (n-1)n^{-1+3\varepsilon /4}-n^{\varepsilon /2}\right) ^2}\\\le & {} C n^{-3\varepsilon /4}, \end{aligned}$$

which completes the proof. \(\quad \square \)

We next present a lemma giving a lower bound for the probability that the time spent in a trap is large. We introduce the notation \(\sigma _j^{\scriptscriptstyle (i)}\) to represent the hitting time of \(X^{\scriptscriptstyle (0)}_{{\mathcal {T}}^{\ell ^{\scriptscriptstyle (i)}}_j}\) by \(X^{\scriptscriptstyle (i)}\), and recall the notation \(P^{\scriptscriptstyle (i)}_x\) from the proof of Lemma 4.2.

Lemma 5.3

Assume Condition 1, and let \(i\in {\mathbb {N}}\cup \{\infty \}\). There exists a deterministic constant c such that, on the event \({\mathcal {E}}^{\scriptscriptstyle (i)}_j(h)\) with \(h\ge 1\),

$$\begin{aligned} P^{\scriptscriptstyle (i)}_{X^{\scriptscriptstyle (0)}_{{\mathcal {T}}_j}}\left( \sigma _{j+1}^{\scriptscriptstyle (i)}\ge \beta _{\scriptscriptstyle (i)}^{h}\right) \ge c. \end{aligned}$$

Proof

Suppose \({\mathcal {E}}^{\scriptscriptstyle (i)}_j(h)\) holds, and let \(x=X_m^{\scriptscriptstyle (0)}\) and \(y=X_n^{\scriptscriptstyle (0)}\) be the two vertices appearing in the definition of this event. (These are not necessarily uniquely defined, but that is unimportant for the proof.) Let

$$\begin{aligned} p=p_{\mathcal {Z}^{\scriptscriptstyle (i)}}:=P^{\scriptscriptstyle (i)}_x\left( X^{\scriptscriptstyle (i)} \text{ hits } y \text{ before returning to } x\right) , \end{aligned}$$

and note that, with \(u=X^{\scriptscriptstyle (0)}_{{\mathcal {T}}_j}\),

$$\begin{aligned} P^{\scriptscriptstyle (i)}_{u}\left( \sigma _{j+1}^{\scriptscriptstyle (i)}\ge \beta _{\scriptscriptstyle (i)}^{h}\right) \ge {\mathbb {P}}\left( \mathrm {Geo}\left( p\right) \ge \beta _{\scriptscriptstyle (i)}^{h}\right) =(1-p)^{\beta _{\scriptscriptstyle (i)}^{h}}, \end{aligned}$$

where (given \(\mathcal {Z}^{\scriptscriptstyle (i)}\)) \(\mathrm {Geo}(p)\) is a geometric random variable taking values in \(\mathbb {N}\) with parameter \(p=p_{\mathcal {Z}^{\scriptscriptstyle (i)}}\). We are thus motivated to find an upper bound for p. For this, we observe from [16, Exercise 2.62], for example, that with \(z:=X^{\scriptscriptstyle (0)}_{m+1}\),

$$\begin{aligned} p\le P^{\scriptscriptstyle (i)}_z\left( X^{\scriptscriptstyle (i)} \text{ hits } y \text{ before } x\right) = \frac{r^{\scriptscriptstyle (i)}\left( x,z\right) }{r^{\scriptscriptstyle (i)}\left( x,z\right) +R^{\scriptscriptstyle (i)}\left( z,y\right) }=\frac{r^{\scriptscriptstyle (i)}\left( x,z\right) }{R^{\scriptscriptstyle (i)}\left( x,y\right) }, \end{aligned}$$

where we note there is a single edge (with resistance \(r^{\scriptscriptstyle (i)}\left( x,z\right) \)) between x and z in the graph \(\mathcal {Z}^{\scriptscriptstyle (i)}\), and we apply the series law for the second equality. Now,

$$\begin{aligned} r^{\scriptscriptstyle (i)}\left( x,z\right) =c^{\scriptscriptstyle (i)}\left( x,z\right) ^{-1}\le C \beta _{\scriptscriptstyle (i)}^{-x\cdot \ell ^{\scriptscriptstyle (i)}}, \end{aligned}$$

and

$$\begin{aligned} R^{\scriptscriptstyle (i)}\left( x,y\right) \ge R^{\scriptscriptstyle (i)}\left( y,\{w:\,w\sim _{\scriptscriptstyle (i)}y\}\right) \ge \frac{\min _{w:\,w\sim _{\scriptscriptstyle (i)}y}r^{\scriptscriptstyle (i)}\left( y,w\right) }{\#\{w:\,w\sim _{\scriptscriptstyle (i)}y\}}\ge C\beta _{\scriptscriptstyle (i)}^{-y\cdot \ell ^{\scriptscriptstyle (i)}}, \end{aligned}$$

where we use the notation \(w\sim _{\scriptscriptstyle (i)}y\) to mean that w is a neighbour of y in \({\mathcal {Z}}^{\scriptscriptstyle (i)}\). Combining these estimates with (19) thus yields

$$\begin{aligned} p\le C \beta _{\scriptscriptstyle (i)}^{-(x-y)\cdot \ell ^{\scriptscriptstyle (i)}}\le C \beta _{\scriptscriptstyle (i)}^{-h}. \end{aligned}$$

Hence we obtain that

$$\begin{aligned} P^{\scriptscriptstyle (i)}_{u}\left( \sigma _{j+1}^{\scriptscriptstyle (i)}\ge \beta _{\scriptscriptstyle (i)}^{h}\right) \ge \left( 1-C \beta _{\scriptscriptstyle (i)}^{-h}\right) ^{\beta _{\scriptscriptstyle (i)}^{h}}, \end{aligned}$$

which is clearly bounded away from 0 as \(h\rightarrow \infty \), and the result follows. \(\quad \square \)
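The mechanism in this proof is easy to caricature: if each attempt to escape a trap of depth h succeeds with probability of order \(\beta _{\scriptscriptstyle (i)}^{-h}\), then the escape time is of order \(\beta _{\scriptscriptstyle (i)}^{h}\). The following Python sketch illustrates this scaling on a toy one-dimensional trap with edge conductances \(\beta ^{k}\) (an illustrative network, not the actual graph \({\mathcal {Z}}^{\scriptscriptstyle (i)}\)).

```python
import math
import random

def mean_exit_time(beta, h, trials, rng):
    """Average time to reach 0 from h for a walk on {0, ..., h} that steps
    towards h (the trap bottom) with probability beta/(1+beta), as for edge
    conductances c(k, k+1) = beta^k; a toy analogue of Lemma 5.3."""
    q = beta / (1 + beta)  # probability of stepping deeper into the trap
    total = 0
    for _ in range(trials):
        pos, steps = h, 0
        while pos > 0:
            steps += 1
            if pos < h and rng.random() < q:
                pos += 1  # deeper into the trap
            else:
                pos -= 1  # towards the exit (forced when reflected at h)
        total += steps
    return total / trials

rng = random.Random(1)
beta = 2.0
for h in (4, 6, 8):
    m = mean_exit_time(beta, h, trials=1000, rng=rng)
    # log(mean exit time)/h should settle near log(beta) as h grows
    print(f"h = {h}: mean exit time = {m:8.1f}, log(mean)/h = {math.log(m) / h:.3f}")
print(f"log(beta) = {math.log(beta):.3f}")
```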

We use the previous two lemmas to establish that the random walk \(X^{\scriptscriptstyle (i)}\) takes a super-linear time to reach the nth regeneration level.

Lemma 5.4

Assume Condition 1, and let \(i\in {\mathbb {N}}\cup \{\infty \}\). If \(\beta _{\scriptscriptstyle (i)}>\alpha _{\scriptscriptstyle (i)}\), then there exists a deterministic constant \(\delta >0\) such that

$$\begin{aligned} \lim _{n\rightarrow \infty }{\mathbb {P}}\left( \sigma _{n}^{\scriptscriptstyle (i)}\ge n^{1+\delta }\right) =1. \end{aligned}$$

Proof

On the event that \(N^{\scriptscriptstyle (i)}_{n,\varepsilon }\ge n^{\varepsilon /2}\), Lemma 5.3 implies that \(\sigma _{n}^{\scriptscriptstyle (i)}\) is stochastically bounded below by a random variable of the form \(\beta _{\scriptscriptstyle (i)}^{h^{\scriptscriptstyle (i)}_{n,\varepsilon }}\mathrm {Bin}(n^{\varepsilon /2},c)\). Note that the multiplicative factor here is given by

$$\begin{aligned} \beta _{\scriptscriptstyle (i)}^{h^{\scriptscriptstyle (i)}_{n,\varepsilon }}=n^{\frac{(1-\varepsilon )\log \beta _{\scriptscriptstyle (i)}}{t_{\scriptscriptstyle (i)}}}, \end{aligned}$$

which, by our assumption that \(\beta _{\scriptscriptstyle (i)}>\alpha _{\scriptscriptstyle (i)}=\mathrm {e}^{t_{\scriptscriptstyle (i)}}\), can be made greater than n by choosing \(\varepsilon >0\) suitably small. Hence for all such \(\varepsilon \) we have that

$$\begin{aligned} {\mathbb {P}}\left( \sigma _{n}^{\scriptscriptstyle (i)}\ge n^{1+\delta }\right)&\ge {\mathbb {P}}\left( \sigma _{n}^{\scriptscriptstyle (i)}\ge n^{1+\delta }\big | N^{\scriptscriptstyle (i)}_{n,\varepsilon }\ge n^{\varepsilon /2} \right) {\mathbb {P}}\left( N^{\scriptscriptstyle (i)}_{n,\varepsilon }\ge n^{\varepsilon /2}\right) \nonumber \\&\ge {\mathbb {P}}\left( \mathrm {Bin}(n^{\varepsilon /2},c)\ge n^{\delta }\right) {\mathbb {P}}\left( N^{\scriptscriptstyle (i)}_{n,\varepsilon }\ge n^{\varepsilon /2}\right) . \end{aligned}$$
(20)

Taking \(\delta <\varepsilon /2\) and applying Lemma 5.2, we see that (20) converges to 1 as \(n\rightarrow \infty \), which establishes the desired conclusion. \(\quad \square \)

From the preceding result, the proof of Theorem 2(b) is straightforward.

Proof of Theorem 2(b)

By Lemma 5.4, we have that

$$\begin{aligned} \lim _{n\rightarrow \infty }{\mathbb {P}}\left( X_{n^{1+\delta }}^{\scriptscriptstyle (i)}\cdot \ell ^{\scriptscriptstyle (i)}\le X^{\scriptscriptstyle (0)}_{{\mathcal {T}}^{\ell ^{\scriptscriptstyle (i)}}_n}\cdot \ell ^{\scriptscriptstyle (i)}\right) =1. \end{aligned}$$

However, we also know from Lemma 2.1 that there exists a finite constant C such that

$$\begin{aligned} \lim _{n\rightarrow \infty }{\mathbb {P}}\left( X^{\scriptscriptstyle (0)}_{{\mathcal {T}}^{\ell ^{\scriptscriptstyle (i)}}_n}\cdot \ell ^{\scriptscriptstyle (i)}\le Cn\right) =1. \end{aligned}$$

Hence we conclude that

$$\begin{aligned} \lim _{n\rightarrow \infty }{\mathbb {P}}\left( X_{n^{1+\delta }}^{\scriptscriptstyle (i)}\cdot \ell ^{\scriptscriptstyle (i)}\le Cn\right) =1, \end{aligned}$$

which yields

$$\begin{aligned} n^{-1}X_{n}^{\scriptscriptstyle (i)}\cdot \ell ^{\scriptscriptstyle (i)}\rightarrow 0 \end{aligned}$$

in probability. Since we also know the almost-sure limit of the left-hand side exists (by Theorem 1), we in fact have that the above convergence holds almost surely. Moreover, recalling Condition 1(c) and the fact that \(v^{\scriptscriptstyle (i)}\) must be a scalar multiple of \(\delta ^{\scriptscriptstyle (0)}\) (i.e. Theorem 1(b)), we conclude that \(v^{\scriptscriptstyle (i)}=0\). \(\quad \square \)

To complete the section, we need to prove Lemma 5.1. Towards deducing an appropriate lower bound for the probability of \({\mathcal {E}}_j^{\scriptscriptstyle (i)}(h)\), we will present a collection of events that together ensure the existence of a trap, and for which we can suitably estimate the probability of their intersection. At the heart of the proof is the application of a large deviations result, the usefulness of which depends on the identification of the direction in which \(X^{\scriptscriptstyle (0)}\) is most likely to drift, conditional on the projection \(X^{\scriptscriptstyle (0)}\cdot \ell ^{\scriptscriptstyle (i)}\) backtracking. In particular, the latter direction transpires to be described by

$$\begin{aligned} {\hat{\delta }}^{\scriptscriptstyle (i)}:={\mathbb {E}}\left[ \mathrm {e}^{-t_{\scriptscriptstyle (i)}X_1^{\scriptscriptstyle (0)}\cdot \ell ^{\scriptscriptstyle (i)}}X_1^{\scriptscriptstyle (0)}\right] . \end{aligned}$$

(Note that, from the properties of \(\varphi _{\scriptscriptstyle (i)}\) set out in the proof of Lemma 4.1, it holds that

$$\begin{aligned} {\hat{\delta }}^{\scriptscriptstyle (i)}\cdot \ell ^{\scriptscriptstyle (i)}=-\varphi _{\scriptscriptstyle (i)}'(t_{\scriptscriptstyle (i)})<0, \end{aligned}$$

and so if \(X^{\scriptscriptstyle (0)}\) follows the direction \({\hat{\delta }}^{\scriptscriptstyle (i)}\), then \(X^{\scriptscriptstyle (0)}\cdot \ell ^{\scriptscriptstyle (i)}\) is indeed decreasing.)

Remark 5.5

Although we will not need this fact in the proof of Lemma 5.1, we observe that \({\hat{\delta }}^{\scriptscriptstyle (i)}\) is the drift of the Markov process with transition probabilities given by the Doob transform

$$\begin{aligned} {\hat{p}}^{\scriptscriptstyle (0)}(y-x)=\frac{{p}^{\scriptscriptstyle (0)}(y-x)h^{\scriptscriptstyle (i)}(y)}{h^{\scriptscriptstyle (i)}(x)}, \end{aligned}$$

where \(h^{\scriptscriptstyle (i)}\) is the \(X^{\scriptscriptstyle (0)}\) harmonic function given by \(h^{\scriptscriptstyle (i)}(x):=\mathrm {e}^{-t_{\scriptscriptstyle (i)}x\cdot \ell ^{\scriptscriptstyle (i)}}\). At least in cases where \(X^{\scriptscriptstyle (0)}\) can only enter the region \(\{x:\,x\cdot \ell ^{\scriptscriptstyle (i)}\le -h\}\) by hitting the hyperplane \(L_h:=\{x:\,x\cdot \ell ^{\scriptscriptstyle (i)}=-h\}\) (as is the case in Example 1.2), the transition probabilities \({\hat{p}}^{\scriptscriptstyle (0)}\) precisely describe the law of \(X^{\scriptscriptstyle (0)}\) conditioned to hit \(L_h\) up to this hitting time (from which time the process proceeds with transition probability \(p^{\scriptscriptstyle (0)}\)).
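As a concrete illustration (again under the assumption, made for the example only, that the biases are those of Fig. 1 and \(\ell ^{\scriptscriptstyle (i)}=e_1\), so that \(t_{\scriptscriptstyle (i)}=\log (3/2)\)), the following Python sketch evaluates \({\hat{\delta }}^{\scriptscriptstyle (i)}\) and the Doob-transformed kernel \({\hat{p}}^{\scriptscriptstyle (0)}\), and checks that the latter is indeed a probability distribution.

```python
import math

# Illustrative d = 2 biases from Fig. 1 (an assumption for this example):
# steps e_1, -e_1, e_2, -e_2 with probabilities 3/9, 2/9, 2/9, 2/9; l = e_1.
steps = {(1, 0): 3 / 9, (-1, 0): 2 / 9, (0, 1): 2 / 9, (0, -1): 2 / 9}
t = math.log(1.5)  # the root of phi(t) = 1 in this example

# Tilted drift: hat_delta = E[exp(-t X_1 . l) X_1]
hat_delta = [sum(p * math.exp(-t * e[0]) * e[k] for e, p in steps.items())
             for k in (0, 1)]
print("hat_delta =", hat_delta)  # (-1/9, 0): drifts against l, as it should

# Doob transform: hat_p(e) = p(e) exp(-t e . l), a probability kernel
# precisely because phi(t) = 1.
hat_p = {e: p * math.exp(-t * e[0]) for e, p in steps.items()}
print("sum of hat_p =", sum(hat_p.values()))  # = 1
print("hat_p =", hat_p)  # the e_1 and -e_1 weights are swapped to 2/9 and 3/9
```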

Proof of Lemma 5.1

Given \(\varepsilon \in (0,1)\) and \(h\ge 1\), we start by defining a sequence of eight events. In understanding their definition, it will aid the reader to refer to Fig. 5. First, let

$$\begin{aligned} x_a=X^{\scriptscriptstyle (0)}_{{\mathcal {T}}_j^{\ell ^{\scriptscriptstyle (i)}}}, \end{aligned}$$

and consider three cylinders \(C_1\), \(C_2\), \(C_3\), as shown in Fig. 5. In particular, \(C_1\) has straight sides parallel to \(\delta ^{\scriptscriptstyle (0)}\), radius \(\varepsilon h\), is positioned so that its centre line runs through \(x_a\), and it has height chosen so that it just touches the planes

$$\begin{aligned} P_1:=\left\{ x:\,x\cdot \ell ^{\scriptscriptstyle (i)}=X^{\scriptscriptstyle (0)}_{{\mathcal {T}}_j^{\ell ^{\scriptscriptstyle (i)}}}\cdot \ell ^{\scriptscriptstyle (i)}\right\} \end{aligned}$$

and

$$\begin{aligned} P_2:=\left\{ x:\,x\cdot \ell ^{\scriptscriptstyle (i)}=X^{\scriptscriptstyle (0)}_{{\mathcal {T}}_j^{\ell ^{\scriptscriptstyle (i)}}}\cdot \ell ^{\scriptscriptstyle (i)}+h\right\} . \end{aligned}$$

We define \(C_2\) similarly, but parallel to \({\hat{\delta }}^{\scriptscriptstyle (i)}\), and positioned so that the ends of \(C_1\) and \(C_2\) closest to the plane \(P_2\) are separated by a distance of \(\varepsilon h\). The cylinder \(C_3\) is also defined in the same way, again parallel to \({\delta }^{\scriptscriptstyle (0)}\), and positioned so that the ends of \(C_2\) and \(C_3\) closest to the plane \(P_1\) are separated by a distance of \(\varepsilon h\). It is moreover possible to complete the above construction in such a way that the three cylinders are disjoint. In addition to these sets, we define points:

  • \(x_b\), to be the lattice point closest to the point \(y_{b,\varepsilon }\), where \(y_{b,\varepsilon }\) is the point on the centreline of \(C_1\) at distance \(2\varepsilon h\) from the end nearest to \(P_1\);

  • \(x_c\), to be a lattice point lying within distance \(2\varepsilon h\) of both \(C_1\) and \(C_2\), but separated by at least one lattice step from one of these sets, and satisfying \((x_c-x_a)\cdot \ell ^{\scriptscriptstyle (i)}\in [h(1+\varepsilon /4),h(1+3\varepsilon /4)]\);

  • \(x_d\), to be the lattice point closest to the point \(y_{d,\varepsilon }\), where \(y_{d,\varepsilon }\) is the point on the centreline of \(C_2\) at distance \(\varepsilon h\) from the end nearest to \(P_2\);

  • \(x_e\), to be a lattice point lying within distance \(2\varepsilon h\) of both \(C_2\) and \(C_3\), but separated by at least one lattice step from one of these sets, and satisfying \(\lfloor (x_c-x_e)\cdot \ell ^{\scriptscriptstyle (i)}\rfloor =h\);

  • \(x_f\), to be the lattice point closest to the point \(y_{f,\varepsilon }\), where \(y_{f,\varepsilon }\) is the point on the centreline of \(C_3\) at distance \(\varepsilon h\) from the end nearest to \(P_1\);

  • \(x_g\), to be a lattice point that lies within distance \(4\varepsilon h\) of \(C_3\), and satisfies \(x_g\cdot \ell ^{\scriptscriptstyle (i)}\ge X^{\scriptscriptstyle (0)}_{{\mathcal {T}}_j^{\ell ^{\scriptscriptstyle (i)}}}\cdot \ell ^{\scriptscriptstyle (i)}+h(1+3\varepsilon )\).

We next define the events of interest \((E_k)_{k=1}^8\).

  • \(E_1=\{\)After \({{\mathcal {T}}_j^{\ell ^{\scriptscriptstyle (i)}}}\), \(X^{\scriptscriptstyle (0)}\) follows a shortest lattice path to \(x_b\), follows the same path back to \(x_a\), and then returns again along that path to \(x_b\}\). (The recrossing of the path ensures that none of the points on it are regenerations.)

  • \(E_2=E_1\cap \{\)From \(x_b\), \(X^{\scriptscriptstyle (0)}\) stays inside \(C_1\) until it reaches a distance \(\le \varepsilon h\) from the end of \(C_1\) closest to \(P_2\}\).

  • \(E_3=E_2\cap \{\)From the previous stopping time, \(X^{\scriptscriptstyle (0)}\) follows a shortest simple lattice path that passes through \(x_c\) and eventually reaches \(x_d\}\).

  • \(E_4=E_3\cap \{\)From \(x_d\), \(X^{\scriptscriptstyle (0)}\) stays inside \(C_2\) until it reaches a distance \(\le \varepsilon h\) from the end of \(C_2\) closest to \(P_1\}\).

  • \(E_5=E_4\cap \{\)From the previous stopping time, \(X^{\scriptscriptstyle (0)}\) follows a shortest simple lattice path that passes through \(x_e\) and eventually reaches \(x_f\), whilst satisfying \((X^{\scriptscriptstyle (0)}-x_a)\cdot \ell ^{\scriptscriptstyle (i)}\ge 0\}\).

  • \(E_6=E_5\cap \{\)From \(x_f\), \(X^{\scriptscriptstyle (0)}\) stays inside \(C_3\) until it reaches a distance \(\le \varepsilon h\) from the end of \(C_3\) closest to \(P_2\}\).

  • \(E_7=E_6\cap \{\)From the previous stopping time, \(X^{\scriptscriptstyle (0)}\) follows a shortest lattice path to \(x_g\}\).

  • \(E_8=E_7\cap \{\)From \(x_g\), \(X^{\scriptscriptstyle (0)}\cdot \ell ^{\scriptscriptstyle (i)}\) never drops below \(x_g\cdot \ell ^{\scriptscriptstyle (i)}\}\).

By construction, for any fixed \(\varepsilon \in (0,1/4)\) and large enough h, we have that \({\mathcal {E}}_j^{\scriptscriptstyle (i)}(h)\supseteq E_8\), with \(x_c\) and \(x_e\) giving the relevant cut-points for \(X^{\scriptscriptstyle (0)}\). Hence we immediately obtain that

$$\begin{aligned} \limsup _{h\rightarrow \infty }\frac{-\log {\mathbb {P}}({\mathcal {E}}_j^{\scriptscriptstyle (i)}(h))}{h}\le \sum _{k=1}^8\lim _{\varepsilon \rightarrow 0}\limsup _{h\rightarrow \infty }\frac{-\log p_k}{h}, \end{aligned}$$

where \(p_1:={\mathbb {P}}(E_1)\) and \(p_k:={\mathbb {P}}(E_k\,|\,E_{k-1})\) for \(k=2,\dots ,8\). Note that, from \(x_a\), the process \(X^{\scriptscriptstyle (0)}\) continues as under its original law, but conditioned so that \(X^{\scriptscriptstyle (0)}\cdot \ell ^{\scriptscriptstyle (i)}\) does not fall below \(x_a\cdot \ell ^{\scriptscriptstyle (i)}\). However, since the event \(E_8\) is contained within the latter event, which has strictly positive probability, it will be enough to estimate the probabilities \((p_k)_{k=1}^8\) for \(X^{\scriptscriptstyle (0)}\) under its original law (conditional on starting at \(x_a\)).

Fig. 5
figure 5

A possible realisation of the path of \(X^{\scriptscriptstyle (0)}\) on the event \(E_8\), as defined in the proof of Lemma 5.1. The black dots mark the positions of \(x_a\) (top left), \(x_b\), \(x_c\), \(x_d\), \(x_e\), \(x_f\) and \(x_g\) (bottom right)

For \(k=1,3,5,7\), since \(E_k=E_{k-1}\cap E_k'\), where \(E_k'\) only requires the random walk \(X^{\scriptscriptstyle (0)}\) to follow an exact path of \(\le C\varepsilon h\) steps (here \(E_0\) is interpreted as the certain event), it follows that

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0}\limsup _{h\rightarrow \infty }\frac{-\log p_k}{h}\le \lim _{\varepsilon \rightarrow 0}\left( -C\varepsilon \log \min _e p^{\scriptscriptstyle (0)}(e)\right) =0. \end{aligned}$$

Next, note that the strong law of large numbers and some basic calculations yield that, for \(\eta >0\),

$$\begin{aligned} \lim _{n\rightarrow \infty }{\mathbb {P}}\left( \left| \frac{1}{n}X^{\scriptscriptstyle (0)}_{\lfloor nt\rfloor }-t{\delta }^{\scriptscriptstyle (0)}\right| \le \eta \text{ for } \text{ all } t\in [0,1]\right) =1. \end{aligned}$$

Applying the change of parameter \(n=h/({\delta }^{\scriptscriptstyle (0)}\cdot \ell ^{\scriptscriptstyle (i)})\) and \(\eta =\varepsilon \delta ^{\scriptscriptstyle (0)}\cdot \ell ^{\scriptscriptstyle (i)}\), the event within the above probability ensures that the process \((X^{\scriptscriptstyle (0)}_{\lfloor th/({\delta }^{\scriptscriptstyle (0)}\cdot \ell ^{\scriptscriptstyle (i)})\rfloor })_{t\in [0,1]}\) stays within a distance \(\varepsilon h\) of the linear function \((th\delta ^{\scriptscriptstyle (0)}/({\delta }^{\scriptscriptstyle (0)}\cdot \ell ^{\scriptscriptstyle (i)}))_{t\in [0,1]}\), which at time 1 in particular has traversed a distance h in the \(\ell ^{\scriptscriptstyle (i)}\) direction. From this observation, we easily obtain that

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0}\limsup _{h\rightarrow \infty }\frac{-\log p_k}{h}=0, \quad \text {for }k=2,6. \end{aligned}$$

Moreover, it is also straightforward to check from the directional transience of \(X^{\scriptscriptstyle (0)}\) that \(p_8\ge c\) for some constant \(c>0\). Hence the previous limit also holds with \(k=8\).

To complete the proof, it is thus sufficient to establish that

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0}\limsup _{h\rightarrow \infty }\frac{-\log p_4}{h}\le t_{\scriptscriptstyle (i)}. \end{aligned}$$
(21)

In this direction, we start by appealing to a sample path large deviations principle of Mogul'skiĭ [17] (see also [10, Theorem 5.1.2 and the following remark (b)]) to deduce that

$$\begin{aligned} \lim _{\eta \rightarrow 0}\limsup _{n\rightarrow \infty }\frac{-\log {\mathbb {P}}\left( \left| \frac{1}{n}X^{\scriptscriptstyle (0)}_{\lfloor nt\rfloor }-t{\hat{\delta }}^{\scriptscriptstyle (i)}\right| \le \eta \text{ for } \text{ all } t\in [0,1]\right) }{n}=\Lambda ({\hat{\delta }}^{\scriptscriptstyle (i)}), \end{aligned}$$

where

$$\begin{aligned} \Lambda ({\hat{\delta }}^{\scriptscriptstyle (i)}):=\sup _{x\in {\mathbb {R}}^d}\left( x\cdot {\hat{\delta }}^{\scriptscriptstyle (i)}-\log {\mathbb {E}}\left[ \mathrm {e}^{X_1^{\scriptscriptstyle (0)}\cdot x}\right] \right) . \end{aligned}$$

Thus, by considering the change of parameter \(n=h/(-{\hat{\delta }}^{\scriptscriptstyle (i)}\cdot \ell ^{\scriptscriptstyle (i)})\) and \(\eta =\varepsilon (-{\hat{\delta }}^{\scriptscriptstyle (i)}\cdot \ell ^{\scriptscriptstyle (i)})\), we obtain that

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0}\limsup _{h\rightarrow \infty }\frac{-\log p_4}{h}\le \frac{\Lambda ({\hat{\delta }}^{\scriptscriptstyle (i)})}{-{\hat{\delta }}^{\scriptscriptstyle (i)}\cdot \ell ^{\scriptscriptstyle (i)}}, \end{aligned}$$

and hence it will be enough for our purposes to show that the right-hand side here is equal to \(t_{\scriptscriptstyle (i)}\). Now, observe that

$$\begin{aligned}&\left. \frac{\partial }{\partial x_j}\left( x\cdot {\hat{\delta }}^{\scriptscriptstyle (i)}-\log {\mathbb {E}}\left[ \mathrm {e}^{X_1^{\scriptscriptstyle (0)}\cdot x}\right] \right) \right| _{x=-t_{\scriptscriptstyle (i)}\ell ^{\scriptscriptstyle (i)}}\\&\quad =\left. \left( {\hat{\delta }}^{\scriptscriptstyle (i)}_j-\frac{{\mathbb {E}}\left[ \mathrm {e}^{X_1^{\scriptscriptstyle (0)}\cdot x}X_1^{\scriptscriptstyle (0)}\cdot e_j\right] }{{\mathbb {E}}\left[ \mathrm {e}^{X_1^{\scriptscriptstyle (0)}\cdot x}\right] }\right) \right| _{x=-t_{\scriptscriptstyle (i)}\ell ^{\scriptscriptstyle (i)}}\\&\quad ={\hat{\delta }}^{\scriptscriptstyle (i)}_j-\frac{{\hat{\delta }}^{\scriptscriptstyle (i)}_j}{\varphi _{\scriptscriptstyle (i)}(t_{\scriptscriptstyle (i)})}\\&\quad =0, \end{aligned}$$

and so \(x=-t_{\scriptscriptstyle (i)}\ell ^{\scriptscriptstyle (i)}\) is a stationary point for \(x\cdot {\hat{\delta }}^{\scriptscriptstyle (i)}-\log {\mathbb {E}}[\mathrm {e}^{X_1^{\scriptscriptstyle (0)}\cdot x}]\). Moreover, since the latter expression is strictly concave as a function of x (see, for example, the second lemma of [18]), the value \(x=-t_{\scriptscriptstyle (i)}\ell ^{\scriptscriptstyle (i)}\) must be its unique maximiser. It follows that

$$\begin{aligned} \Lambda ({\hat{\delta }}^{\scriptscriptstyle (i)})=-t_{\scriptscriptstyle (i)}\ell ^{\scriptscriptstyle (i)}\cdot {\hat{\delta }}^{\scriptscriptstyle (i)}-\log \varphi _{\scriptscriptstyle (i)}(t_{\scriptscriptstyle (i)})= -t_{\scriptscriptstyle (i)}\ell ^{\scriptscriptstyle (i)}\cdot {\hat{\delta }}^{\scriptscriptstyle (i)}, \end{aligned}$$

which confirms (21). \(\quad \square \)
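For the illustrative biases used earlier (those of Fig. 1, with the assumption \(\ell ^{\scriptscriptstyle (i)}=e_1\)), these quantities can be evaluated explicitly: \(t_{\scriptscriptstyle (i)}=\log (3/2)\) and \({\hat{\delta }}^{\scriptscriptstyle (i)}\cdot \ell ^{\scriptscriptstyle (i)}=\frac{3}{9}\cdot \frac{2}{3}-\frac{2}{9}\cdot \frac{3}{2}=-\frac{1}{9}\), so that

$$\begin{aligned} \Lambda ({\hat{\delta }}^{\scriptscriptstyle (i)})=-t_{\scriptscriptstyle (i)}\ell ^{\scriptscriptstyle (i)}\cdot {\hat{\delta }}^{\scriptscriptstyle (i)}=\tfrac{1}{9}\log (3/2)\quad \text {and}\quad \frac{\Lambda ({\hat{\delta }}^{\scriptscriptstyle (i)})}{-{\hat{\delta }}^{\scriptscriptstyle (i)}\cdot \ell ^{\scriptscriptstyle (i)}}=\log (3/2)=t_{\scriptscriptstyle (i)}, \end{aligned}$$

in agreement with (21).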

6 The Limiting Path

To complete the article, in this section we prove Theorem 3, which we recall concerns the simplicity of \({\mathcal {Z}}^{\scriptscriptstyle (\infty )}\).

Proof of Theorem 3(a)

For convenience, in this proof we will suppose that \(\varvec{p}^{\scriptscriptstyle (i)}\) is constant for all \(i\ge 1\). To adapt the argument to the more general assumption of the theorem, one can just use an appropriate subsequence.

Let \(({\mathcal {T}}_j)_{j\ge 1}\) be the regeneration times of \(X^{\scriptscriptstyle (0)}\) in direction \(e_1\). Then, for every \(i\ge 1\), \({\mathcal {Z}}^{\scriptscriptstyle (i)}\) necessarily contains each edge of the form \((X^{\scriptscriptstyle (0)}_{{\mathcal {T}}_{j}-1}, X^{\scriptscriptstyle (0)}_{{\mathcal {T}}_j})\) for \(j\ge 2\). Now, conditional on \({\mathcal {Z}}^{\scriptscriptstyle (i)}\), the probability that the random walk \(X^{\scriptscriptstyle (i)}\) started from \(X^{\scriptscriptstyle (0)}_{{\mathcal {T}}_{j}}\) never hits \(X^{\scriptscriptstyle (0)}_{{\mathcal {T}}_{j}-1}\) is given by

$$\begin{aligned} p_{i,j}:=\frac{r^{\scriptscriptstyle (i)}(X_{{\mathcal {T}}_{j}-1},X_{{\mathcal {T}}_{j}})}{R^{\scriptscriptstyle (i)}(X_{{\mathcal {T}}_{j}-1},\infty )}. \end{aligned}$$

(To check this, one can apply [2, Theorem 2.11], for example.) We can bound the denominator above uniformly in i by the almost-surely finite random variable that appears on the left-hand side of (7). The numerator is constant by assumption. Hence \(p_{i,j}\ge p_j\) for some \({\mathbb {P}}\)-a.s. strictly positive random variable \(p_j\).

Now, fix \(j\ge 2\). Given \(X^{\scriptscriptstyle (0)}\), the configuration of \({\mathcal {Z}}^{\scriptscriptstyle (i)}\) between the vertices \(X_{{\mathcal {T}}_{j-1}}^{\scriptscriptstyle (0)}\) and \(X_{{\mathcal {T}}_j}^{\scriptscriptstyle (0)}\) is given by one of a finite collection of graphs (each containing a simple path between \(X_{{\mathcal {T}}_{j-1}}^{\scriptscriptstyle (0)}\) and \(X_{{\mathcal {T}}_j}^{\scriptscriptstyle (0)}\)). From this observation and the conclusion of the previous paragraph, we deduce that there exists a strictly positive random variable \({\tilde{p}}_j\) depending only on \(X^{\scriptscriptstyle (0)}\) such that, conditional on \({\mathcal {Z}}^{\scriptscriptstyle (i)}\), the probability that the random walk \(X^{\scriptscriptstyle (i)}\) started from \(X^{\scriptscriptstyle (0)}_{{\mathcal {T}}_{j-1}}\) creates a simple path from \(X_{{\mathcal {T}}_{j-1}}^{\scriptscriptstyle (0)}\) to \(X_{{\mathcal {T}}_j}^{\scriptscriptstyle (0)}\), after which it never returns to \(X^{\scriptscriptstyle (0)}_{{\mathcal {T}}_{j}-1}\), is bounded below by \({\tilde{p}}_j\). Since, conditional on \({\mathcal {Z}}^{\scriptscriptstyle (i)}\), \(X^{\scriptscriptstyle (i)}\) is independent of the earlier walks, it readily follows from the latter observation that \({\mathbb {P}}\)-a.s. eventually one of the walks \(X^{\scriptscriptstyle (i)}\) will create a simple path between \(X_{{\mathcal {T}}_{j-1}}^{\scriptscriptstyle (0)}\) and \(X_{{\mathcal {T}}_j}^{\scriptscriptstyle (0)}\), and clearly this will remain as the unique path connecting these two vertices in \({\mathcal {Z}}^{\scriptscriptstyle (\infty )}\).

Since the regeneration times of interest occur infinitely often along the path of \(X^{\scriptscriptstyle (0)}\), we obtain that the part of the graph \({\mathcal {Z}}^{\scriptscriptstyle (\infty )}\) from \(X^{\scriptscriptstyle (0)}_{{\mathcal {T}}_2-1}\) to \(\infty \) is a simple path. Since \({\mathcal {T}}_2\) is a finite random variable, essentially the same argument yields that eventually one of the walks \(X^{\scriptscriptstyle (i)}\) will create a simple path from 0 to \(X^{\scriptscriptstyle (0)}_{{\mathcal {T}}_2}\), and never return to \(X^{\scriptscriptstyle (0)}_{{\mathcal {T}}_2-1}\). From this, we obtain the result. \(\square \)
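Escape probabilities of the form \(r/R\), as used in this proof and the next, can be made concrete on a half-line caricature: for a walk on \(\{0,1,2,\dots \}\) with conductance \(\beta ^{k}\) on the edge \((k,k+1)\) (an illustrative network, not \({\mathcal {Z}}^{\scriptscriptstyle (i)}\) itself), the probability that the walk started at 1 never hits 0 is \(r(0,1)/R(0,\infty )=(\beta -1)/\beta \). The following Python sketch checks this by simulation.

```python
import random

def escape_probability(beta=2.0, trials=10_000, horizon=60, seed=3):
    """P(walk started at 1 never hits 0) on {0, 1, 2, ...} with edge
    conductances beta^k, to be compared with r(0,1)/R(0,infinity)
    = (beta - 1)/beta.  Reaching `horizon` is treated as escaping to
    infinity; for beta > 1 this is a negligible approximation."""
    rng = random.Random(seed)
    q = beta / (1 + beta)  # P(step right) at every interior vertex
    escaped = 0
    for _ in range(trials):
        pos = 1
        while 0 < pos < horizon:
            pos += 1 if rng.random() < q else -1
        escaped += pos == horizon
    print(f"empirical {escaped / trials:.4f} vs exact {(beta - 1) / beta:.4f}")

escape_probability()
```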

Proof of Theorem 3(b)

Suppose \(e\not \in \{-e_1,e_1\}\) is an edge satisfying the assumption of the relevant part of the theorem, and let \(E=E_0\) be the event that \(X^{\scriptscriptstyle (0)}_1=e_1\), \(X^{\scriptscriptstyle (0)}_2=e_1+e\), \(X^{\scriptscriptstyle (0)}_3=e_1\), \(X^{\scriptscriptstyle (0)}_4=2e_1\), and 4 is a regeneration time for \(X^{\scriptscriptstyle (0)}\) in the direction \(e_1\). Moreover, for \(i\ge 1\), let \(E_i\) be the event that \(E_j\) holds for \(j<i\) and also \((e_1,e_1+e)\in {\mathcal {Z}}^{\scriptscriptstyle (i+1)}\), i.e. the vertex \(e_1+e\) is visited by \(X^{\scriptscriptstyle (i)}\). A standard result for random walks (apply [2, Theorem 2.11] with \(x=e\)) then gives that

$$\begin{aligned} {\mathbb {P}}\left( E_i\,|\,E_{i-1},\,{\mathcal {Z}}^{\scriptscriptstyle (i)}\right) =1-\frac{r^{\scriptscriptstyle (i)}(e_1,e_1+e)}{R^{\scriptscriptstyle (i)}(e_1,\infty )+r^{\scriptscriptstyle (i)}(e_1,e_1+e)}\ge 1-\frac{r^{\scriptscriptstyle (i)}(e_1,e_1+e)}{R^{\scriptscriptstyle (i)}(e_1,\infty )}. \end{aligned}$$
(22)

Now, on \(E_{i-1}\), we have that \(R^{\scriptscriptstyle (i)}(e_1,\infty )\ge r^{\scriptscriptstyle (i)}(e_1,2e_1)\), and so the above bound yields

$$\begin{aligned} {\mathbb {P}}\left( E_i\,|\,E_{i-1}\right) \ge 1-\frac{c^{\scriptscriptstyle (i)}(e_1,2e_1)}{c^{\scriptscriptstyle (i)}(e_1,e_1+e)}=1-\frac{c^{\scriptscriptstyle (i)}(0,e_1)}{c^{\scriptscriptstyle (i)}(0,e)}. \end{aligned}$$
(23)

On the other hand, similarly to the proof of Lemma 2.2, we have that

$$\begin{aligned} R^{\scriptscriptstyle (i)}(e_1,\infty )\ge r^{\scriptscriptstyle (i)}(e_1,2e_1)\sum _{j=2}^\infty \beta _{\scriptscriptstyle (i)}^{-(X_{{\mathcal {T}}_j}^{\scriptscriptstyle (0)}-e_1)\cdot \ell ^{\scriptscriptstyle (i)}}, \end{aligned}$$

where \(({\mathcal {T}}_j)_{j\ge 1}\) are the regeneration times of \(X^{\scriptscriptstyle (0)}\) in direction \(e_1\). Hence, using the obvious adaptation of the strong law of large numbers that appears at (10), we deduce that there exist a deterministic constant c and a finite random variable \(N\ge 2\) such that, \({\mathbb {P}}\)-a.s.,

$$\begin{aligned} R^{\scriptscriptstyle (i)}(e_1,\infty )\ge r^{\scriptscriptstyle (i)}(e_1,2e_1)\sum _{j=N}^\infty \beta _{\scriptscriptstyle (i)}^{-cj\delta ^{\scriptscriptstyle (0)}\cdot \ell ^{\scriptscriptstyle (i)}}\ge \frac{r^{\scriptscriptstyle (i)}(e_1,2e_1)\beta _{\scriptscriptstyle (i)}^{-cN}}{1-\beta _{\scriptscriptstyle (i)}^{-c\delta ^{\scriptscriptstyle (0)}\cdot \ell ^{\scriptscriptstyle (i)}}}. \end{aligned}$$

Thus, applying this bound in conjunction with (22) and (23) yields

$$\begin{aligned} {\mathbb {P}}\left( E_i\,|\,E_{i-1},\,N\right) \ge 1-\frac{c^{\scriptscriptstyle (i)}(0,e_1)}{c^{\scriptscriptstyle (i)}(0,e)}\min \left\{ 1,\left( \beta _{\scriptscriptstyle (i)}^{c\delta ^{\scriptscriptstyle (0)}\cdot \ell ^{\scriptscriptstyle (i)}}-1\right) \beta _{\scriptscriptstyle (i)}^{cN}\right\} . \end{aligned}$$

Since the event that \(\mathcal {Z}^{\scriptscriptstyle (\infty )}\) is not simple contains \(\cap _{i\ge 0}E_i\), it follows that

$$\begin{aligned} {\mathbb {P}}\left( \mathcal {Z}^{\scriptscriptstyle (\infty )} \text { is not simple}\,\Big |\,N\right) \ge {\mathbb {P}}\left( E_0\,\Big |\,N\right) \prod _{i=1}^\infty \left( 1-\frac{c^{\scriptscriptstyle (i)}(0,e_1)}{c^{\scriptscriptstyle (i)}(0,e)}\min \left\{ 1,\left( \beta _{\scriptscriptstyle (i)}^{c\delta ^{\scriptscriptstyle (0)}\cdot \ell ^{\scriptscriptstyle (i)}}-1\right) \beta _{\scriptscriptstyle (i)}^{cN}\right\} \right) =:{\mathbb {P}}\left( E_0\,\Big |\,N\right) c_N, \end{aligned}$$

where \(c_N\) is strictly positive whenever
which is strictly positive whenever

$$\begin{aligned} \sum _{i=1}^\infty \frac{c^{\scriptscriptstyle (i)}(0,e_1)}{c^{\scriptscriptstyle (i)}(0,e)}\min \left\{ 1,\left( \beta _{\scriptscriptstyle (i)}^{c\delta ^{\scriptscriptstyle (0)}\cdot \ell ^{\scriptscriptstyle (i)}}-1\right) \beta _{\scriptscriptstyle (i)}^{cN}\right\} <\infty . \end{aligned}$$

By the assumption of the theorem and the fact that N is a finite random variable, the above sum is \({\mathbb {P}}\)-a.s. finite. It is further easy to check that \({\mathbb {P}}(E_0)>0\), and hence we have demonstrated that

$$\begin{aligned} \mathbb {P}\left( \mathcal {Z}^{\scriptscriptstyle (\infty )} \text { is not simple}\right) \ge {\mathbb {E}}\left( c_N{\mathbf {1}}_{E_0}\right) >0, \end{aligned}$$

which completes the proof in the case \(e\not \in \{-e_1,e_1\}\).

If \(e=-e_1\), essentially the same argument works, but we need to modify \(E=E_0\) to be the event that \(X^{\scriptscriptstyle (0)}_1=e_2\), \(X^{\scriptscriptstyle (0)}_2=e_2-e_1\), \(X^{\scriptscriptstyle (0)}_3=e_2\), \(X^{\scriptscriptstyle (0)}_4=e_2+e_1\), and 4 is a regeneration time for \(X^{\scriptscriptstyle (0)}\) in the direction \(e_1\). \(\quad \square \)