1 Introduction

Consider a finitely generated group G acting on a space X (on the right). For a point \(x\in X\) and a generating set S, the Schreier graph \(\varGamma =(xG,E)\) is the graph whose vertex set is the orbit xG of x and whose edges E are the couples of the form \((y,y.s)\) for \(y\in xG\) and \(s\in S\). Throughout this article, we will assume the action to be transitive, that is, \(xG=X\) for every x. We take a measure \(\mu \) on G and study for which \((G,\varGamma ,\mu )\) the induced random walk on \(\varGamma \) converges towards an end of the graph. We recall the definition of the end space. Consider an exhaustive increasing sequence \(K_1\subset K_2\subset \dots \) of finite subsets of X. An end of \(\varGamma \) is a sequence \(U_1\supseteq U_2\supseteq \dots \) where \(U_n\) is an infinite connected component of the subgraph obtained by deleting the vertices in \(K_n\) and the adjacent edges. For more details, see Definition 2.1. Our main result states:

Theorem 1.1

Consider a finitely generated group G acting transitively on a space X. Fix a generating set S and let \(\varGamma =(X,E)\) be the associated Schreier graph. Let \(\mu \) be a measure on G with a finite first moment such that the induced random walk on \(\varGamma \) is transient. Then, the random walk almost surely converges towards a (random) end of the graph.

Notice that for measures with finite support, the result is straightforward. The result is also already known in the case where the action of G on X is non-amenable (this is a particular case of [20, Theorem 21.16], which we recall as Theorem 2.5), under the condition of a finite first moment. An action is non-amenable when there is no G-invariant mean on X. Kesten’s criterion [11] states that for any symmetric non-degenerate measure on the group, the action is non-amenable if and only if the probability of return to the origin of the induced random walk on X decreases exponentially (see Bartholdi [2] for a survey on the amenability of group actions). The general case of the cited [20, Theorem 21.16] does not assume that the random walk is induced by a measure on a group. The result is no longer true if we assume neither that the walk is induced by a measure on a group nor that the probability of return to the origin decreases exponentially. We prove this in Proposition 2.6, where we construct a Markov chain \((X,P)\) that is transient, uniformly irreducible and has a uniform first moment, but does not converge towards an end of \(\varGamma \).

If the action is non-amenable, the random walk induced by any non-degenerate measure is transient (see [20, Lemma 1.9]). In the general case, transience can sometimes be obtained from the graph geometry using Lemma 2.2, a comparison lemma due to Baldi–Lohoué–Peyrière [1]. Combining this lemma and the theorem, we obtain:

Corollary 1.2

Consider a finitely generated group G acting transitively on a space X. Fix a generating set S and let \(\varGamma =(X,E)\) be the associated Schreier graph. Assume that \(\varGamma \) is a transient graph. Then, for all measures \(\mu \) on G with finite first moments such that \({{\,\mathrm{supp}\,}}(\mu )\) generates G as a semigroup, the induced random walk almost surely converges towards an end of the graph.

We will also explain how this result can be applied to Thompson’s group F. Let us recall the definition of this group. The set of dyadic rationals \(\mathbb {Z}[\frac{1}{2}]\) is the set of numbers of the form \(a2^b\) with \(a,b\in \mathbb {Z}\). Thompson’s group F is the group of orientation-preserving piecewise linear self-homeomorphisms of the closed unit interval with slopes that are powers of 2 and finitely many break points, all of them in \(\mathbb {Z}[\frac{1}{2}]\). It is a finitely generated group with a canonical generating set (with two elements). See Cannon–Floyd–Parry [3] or Meier’s book [13, Ch. 10] for details and properties. Its amenability is a celebrated open question. It is well known that amenability is equivalent to the existence of a non-degenerate measure with trivial Poisson boundary (see Kaimanovich–Vershik [10], Rosenblatt [16]). The Poisson boundary of a random walk induced by an action is a quotient of the Poisson boundary of the walk on the group.

The Schreier graph on \(\mathbb {Z}[\frac{1}{2}]\) (of a conjugate action of F) has been described by Savchuk [17, Proposition 1]. It is a tree that can be understood as a combination of a skeleton quasi-isometric to a binary tree and rays attached at each point of the skeleton (see Fig. 2). Understanding the geometry of the graph directly shows that it is transient. Kaimanovich [9, Theorem 14] also proves this result without using the geometry of the graph. Hence, by Corollary 1.2 and Lemma 4.2 we obtain:

Corollary 1.3

Consider a measure on Thompson’s group F with a finite first moment, the support of which generates F as a semigroup. Then, the induced random walk on \(\mathbb {Z}[\frac{1}{2}]\) has non-trivial Poisson boundary.

This extends the following previous results. Kaimanovich [9] and Mishchenko [14] prove that the simple random walk on the Schreier graph given by that action has non-trivial boundary. Kaimanovich [9, Section 6.A] further shows that it is non-trivial for walks induced by measures with supports that are finite and generate F as a semigroup. We have also shown [19] that for any measure with a finite first moment on F, the support of which generates F as a semigroup, the walk on the group has non-trivial Poisson boundary.

The result of the corollary is false without assuming a finite first moment. Juschenko and Zheng [7] have proven that there exists a symmetric non-degenerate measure on F such that the induced random walk has trivial Poisson boundary. If the trajectories almost surely converge towards points of the end space, then the end space endowed with the exit measure is a quotient of the Poisson boundary. However, the self-similarity of the graph implies that the exit measure cannot be trivial, as we prove in Lemma 4.2. Combining the result of Juschenko–Zheng with this lemma, we obtain:

Corollary 1.4

There exists a finitely generated group G, a space X and a symmetric non-degenerate measure on G such that

  • G acts amenably and transitively on X,

  • the induced random walk on the Schreier graph is transient,

  • the induced random walk on the Schreier graph does not converge towards an end of the graph.

In particular, the measure described by Juschenko and Zheng [7, Theorem 3] provides an example for the action of Thompson’s group F on \(\mathbb {Z}[\frac{1}{2}]\).

Concerning Thompson’s group F, studying the Poisson boundary of random walks on it has been highlighted as a possible approach to proving non-amenability in the work of Kaimanovich. The results by him and Mishchenko further suggested that one could consider the boundary of induced random walks on \(\mathbb {Z}[\frac{1}{2}]\), but the result of Juschenko–Zheng shows that this cannot work. In more recent results, Juschenko [6] studied walks on the space of n-element subsets of \(\mathbb {Z}[\frac{1}{2}]\) and gave a combinatorial necessary and sufficient condition for the Poisson boundary of induced walks on that space to be non-trivial for all non-degenerate measures. In that situation, the existence of a measure with trivial boundary is due to Juschenko for \(n=2\) and to Schneider and Thom [18] for general n.

2 Preliminaries

Consider a finitely generated group G acting transitively on a space X and a measure \(\mu \) on G. The random walk on G is defined by multiplication on the right. That is, it is the walk with trajectories \((g_n)\) for \(n\in \mathbb {N}\) where \(g_{n+1}=g_nh_n\) and the increments \(h_n\) are sampled independently according to \(\mu \). In other words, the random walk is defined by the kernel \((g,h)\mapsto \mu (g^{-1}h)\). The trajectory of the induced random walk on X starting at a point \(\mathfrak {o}\) is the image of the trajectory of the walk on the group by the map:

$$\begin{aligned} (g_n)\mapsto (\mathfrak {o}.g_n).\end{aligned}$$

Its kernel is \(P(x,y)=\sum _{x.g=y}\mu (g)\). We now fix a generating set S of G and consider the undirected graph \(\varGamma =(X,E)\) with vertices X and edges \(E=\{(x,x.s) \text{ for } s\in S,x\in X\}\). We recall that this is called the Schreier graph, and that it is connected as we assumed the action to be transitive. It is worth noting that the directed version of the same definition is also referred to as the Schreier graph, and that in the figures in this article, the edges will have an assigned direction for easier visualisation. It is known that every connected regular graph of even degree is isomorphic to a Schreier graph. This was first proved by Gross [5] for finite graphs. For a detailed proof of the infinite case, see [12, Theorem 3.2.5].
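As a concrete illustration (ours, not part of the original text), here is a minimal Python sketch that enumerates a ball of a Schreier graph from a right action given as callables; the function name and conventions are illustrative assumptions.

```python
from collections import deque

def schreier_ball(x0, gens, radius):
    """Breadth-first enumeration of a ball in a Schreier graph.

    `gens` maps generator names to callables y -> y.s (the right action);
    inverses must be listed explicitly.  A sketch for locally finite
    actions; all names here are illustrative.
    """
    seen, edges, frontier = {x0}, [], deque([(x0, 0)])
    while frontier:
        y, d = frontier.popleft()
        if d == radius:
            continue
        for name, act in gens.items():
            z = act(y)
            edges.append((y, z, name))   # the edge (y, y.s)
            if z not in seen:
                seen.add(z)
                frontier.append((z, d + 1))
    return seen, edges

# Z acting on itself by translation: the Schreier graph is the line.
V, E = schreier_ball(0, {"s": lambda n: n + 1, "s^-1": lambda n: n - 1}, 3)
print(sorted(V))  # [-3, -2, -1, 0, 1, 2, 3]
```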

Definition 2.1

For a compact \(K\subset X\) denote by \(\pi _0(X\setminus K)\) the set of connected components of \(X\setminus K\). There is a natural partial order defined by \(K_1\le K_2\) if and only if \(K_1\subseteq K_2\). That order gives rise to a natural morphism \(\pi _{1,2}:\pi _0(X\setminus K_2)\rightarrow \pi _0(X\setminus K_1)\) which sends a connected component to the connected component containing it. This forms an inverse system indexed by the compact subsets \(K\subset X\) (see [15, Section 3.1.2]). The end space is then the inverse limit

$$\begin{aligned} \underset{compact}{\varprojlim _{K \subset X}}\pi _0(X\setminus K)=\Bigl \{(x_K)\in \underset{compact}{\prod _{K\subset X}}\pi _0(X\setminus K)\ \Bigm |\ \pi _{1,2}x_{K_2}=x_{K_1} \text{ whenever } K_1\subseteq K_2\Bigr \}.\end{aligned}$$

In our case, the end space can be described using an increasing exhaustive sequence of finite sets \(K_n\), as such sequences are cofinal in the set of all compact subsets. That is, any compact set is included in \(K_n\) for n large enough.
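To illustrate Definition 2.1 on a finite truncation (our own toy computation, not from the text): deleting a finite set K and computing connected components approximates \(\pi _0(X\setminus K)\); for the line, deleting the origin leaves two components, matching the two ends of \(\mathbb {Z}\).

```python
def components_after_deletion(vertices, edges, K):
    """Connected components of the induced subgraph on vertices minus K,
    via union-find.  On a finite truncation of an infinite graph this
    approximates pi_0(X \\ K); a toy sketch, not an exact computation
    of the end space."""
    parent = {v: v for v in vertices if v not in K}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    for u, v, _label in edges:
        if u in parent and v in parent:
            parent[find(u)] = find(v)
    comps = {}
    for v in parent:
        comps.setdefault(find(v), set()).add(v)
    return list(comps.values())

# A window of the line Z with K = {0}: two components, i.e. two ends.
V = range(-5, 6)
E = [(i, i + 1, "s") for i in range(-5, 5)]
print(components_after_deletion(V, E, {0}))
```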

We use the following comparison lemma by Baldi–Lohoué–Peyrière [1].

Lemma 2.2

(Comparison lemma) Let \(P_1(x,y)\) and \(P_2(x,y)\) be doubly stochastic kernels on a countable set X and assume that \(P_2\) is symmetric. Assume that there exists \(\varepsilon > 0\) such that

$$\begin{aligned} P_1(x,y)\ge \varepsilon P_2(x,y)\end{aligned}$$

for all x, y. Then, if \(P_2\) is transient, so is \(P_1\).

Here, doubly stochastic means that the kernels preserve the counting measure; equivalently, the transpose kernel \((x,y)\mapsto P(y,x)\) of each is also Markov. It is worth noting that the result holds true more generally for operators with any common stationary measure, see Kaimanovich [9, Section 3.C]; see also Woess [20, Sections 2.C, 3.A]. For the walks considered in this article, it is straightforward to verify that for every probability measure, \(p(x,y)=\mu (x^{-1}y)\) is doubly stochastic (its transpose \((x,y)\mapsto \mu (y^{-1}x)\) is the Markov kernel of the inverse measure).
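As a quick sanity check (ours, on a finite group where all sums are finite): for any probability measure \(\mu \) on \(\mathbb {Z}/5\mathbb {Z}\), the kernel \(p(x,y)=\mu (y-x)\) has all row and column sums equal to 1.

```python
from itertools import product

n = 5
mu = {0: 0.5, 1: 0.3, 4: 0.2}  # an arbitrary probability measure on Z/5Z
# Kernel of the induced walk: p(x, y) = mu(y - x)  (additive notation).
p = {(x, y): mu.get((y - x) % n, 0.0) for x, y in product(range(n), repeat=2)}
# Row sums (Markov) and column sums (the transpose kernel is also Markov):
assert all(abs(sum(p[x, y] for y in range(n)) - 1) < 1e-12 for x in range(n))
assert all(abs(sum(p[x, y] for x in range(n)) - 1) < 1e-12 for y in range(n))
print("doubly stochastic: OK")
```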

We recall that a random walk is called transient if, for any point, almost every trajectory visits that point only finitely many times. Otherwise, the walk is called recurrent, and there is then a point that the walk almost surely visits infinitely many times. A graph is called transient (recurrent) if the simple random walk on it is transient (recurrent). The Green function G is defined by \(G_z(x,y)=\sum _{n\in \mathbb {N}}p^{(n)}(x,y)z^n\) where \(p^{(n)}\) is the n-step transition probability of p. In other words, \(p^{(n)}(x,y)\) is the probability that a random walk starting at x is at y after n steps. We will write \(G(x,y)=G_1(x,y)\). A walk is transient if and only if \(G(x,x)<\infty \) for all \(x\in X\).
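A hedged Monte Carlo sketch of this criterion (our own illustration, with illustrative parameters): truncated estimates of \(G(x,x)\) stabilise for a transient walk and keep growing for a recurrent one.

```python
import random

def green_estimate(step, x0, n_steps=5_000, n_walks=500, seed=0):
    """Monte Carlo estimate of G(x0, x0): the expected number of visits
    to x0 (time 0 included) within the first n_steps steps.  For a
    transient walk this stabilises as n_steps grows; for a recurrent
    one it diverges with n_steps."""
    rng, total = random.Random(seed), 0
    for _ in range(n_walks):
        x = x0
        for _ in range(n_steps):
            total += (x == x0)
            x = step(x, rng)
    return total / n_walks

step_z = lambda x, rng: x + rng.choice((-1, 1))       # simple walk on Z
def step_z3(x, rng):                                  # simple walk on Z^3
    i, d = rng.randrange(3), rng.choice((-1, 1))
    return x[:i] + (x[i] + d,) + x[i + 1:]

# Z is recurrent (large estimate, growing with n_steps); Z^3 is transient
# (estimate close to the known value G(0,0) ~ 1.516).
print(green_estimate(step_z, 0), green_estimate(step_z3, (0, 0, 0)))
```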

Remark that recurrent walks do not converge to the end space. However, it is possible for a measure on a group to induce a transient walk even when the simple random walk on the graph is recurrent, in which case we can apply Theorem 1.1. Here, we give an example of that situation in which the graph has infinitely many ends.

Example 2.3

Consider the graph \(\varPsi \) in Fig. 1. Consider the action of the free group on two generators \(F_2\) on it where the first generator a sends each vertex to the right, and the second generator b sends a vertex to the vertex above if possible, and to itself otherwise. The graph is recurrent. Consider the measure \(\mu (a)=\frac{3}{8}\), \(\mu (a^{-1})=\frac{1}{8}\), \(\mu (b)=\mu (b^{-1})=\frac{1}{4}\). The induced random walk is transient and converges towards the ends defined by the right-hand rays.

Fig. 1: A recurrent graph with infinitely many ends
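A minimal simulation of Example 2.3 (our own sketch; it assumes, as Fig. 1 suggests, that a and \(a^{-1}\) translate along bi-infinite horizontal lines, so the horizontal coordinate alone is a lazy biased walk):

```python
import random

def horizontal_drift(n_steps, seed=0):
    """Track only the horizontal coordinate of the induced walk of
    Example 2.3: +1 with probability mu(a) = 3/8, -1 with probability
    mu(a^{-1}) = 1/8, unchanged otherwise (the moves by b, b^{-1} are
    vertical).  A hedged sketch of the mechanism, not of the full graph."""
    rng, x = random.Random(seed), 0
    for _ in range(n_steps):
        u = rng.random()
        if u < 3 / 8:
            x += 1
        elif u < 4 / 8:
            x -= 1
    return x

# Drift 3/8 - 1/8 = 1/4 > 0, so x_n / n -> 1/4 a.s.: the walk escapes to
# the right, towards the ends defined by the right-hand rays.
print([horizontal_drift(10_000, seed=s) / 10_000 for s in range(5)])
```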

If we do not require the measure on \(F_2\) to have a finite first moment, it can be chosen symmetric while the induced walk remains transient. This can be done on any graph containing an infinite ray, see [4, Lemma 7.1]. Furthermore, we can construct recurrent graphs for which it is possible to have symmetric measures (on the acting group) with finite first moments that induce transient walks:

Example 2.4

Consider the graph \(\varPsi '\) obtained from \(\varPsi \) by replacing the horizontal lines with \(\mathbb {Z}^2\) planes. It is a recurrent graph. Consider the free product \(\mathbb {Z}*\mathbb {Z}^2\) with generators \(a\in \mathbb {Z}\) and \(b,c\in \mathbb {Z}^2\). Consider its action on \(\varPsi '\) where a moves a vertex to the vertex above if possible, and to itself otherwise, and b and c act horizontally. There is a symmetric transient measure \(\mu \) on \(\mathbb {Z}^2\) with a finite first moment (for instance, a measure with tails comparable to \(|x|^{-2-\alpha }\) for some \(1<\alpha <2\)). Consider \(\mu '=\frac{1}{4}(\delta _a+\delta _{a^{-1}})+\frac{1}{2}\mu \). It induces a transient walk on \(\varPsi '\), which by Theorem 1.1 almost surely converges to an element of the end space.

Let us recall the exact statement of Theorem 21.16 from the book of Woess [20]. For a graph \( \varGamma =(X,E)\) and a Markov operator P on X, the theorem states:

Theorem 2.5

([20, Theorem 21.16]) If \((X,P)\) is uniformly irreducible, has a uniform first moment and satisfies \(\rho (P)<1\), then the random walk defined by \((X,P)\) converges almost surely to a random end of \(\varGamma \).

Let us define the concepts in the statement. The walk is uniformly irreducible if there exist \(c>0\) and finite \(K\in \mathbb {N}\) such that for all neighbouring vertices x and y, there exists \(k\le K\) such that \(p^{(k)}(x,y)\ge c\). The step distribution at a point \(x\in X\) is defined as \(\sigma _x(n)=\sum _{y:d(x,y)=n}p(x,y)\). The step distributions are tight if there is a distribution \(\sigma \) on \(\mathbb {N}_0\) such that for all x and all n, the tails \(\sigma _x([n,+\infty ))\) are bounded by the tails of \(\sigma \). The walk has uniform first moment if the step distributions are tight with some \(\sigma \) that has a finite first moment. The spectral radius is \(\rho (P)=\limsup _{n\rightarrow \infty }p^{(n)}(x,y)^{1/n}\). (This quantity does not depend on x and y.) It is straightforward to check that if \(\rho (P)<1\), then the random walk is transient. Moreover, applying the definition with \(x=y\), we see that \(\rho (P)<1\) if and only if the probability of return to the origin decreases exponentially. We will show that the result of Theorem 2.5 is not true without the assumption \(\rho (P)<1\). By \({{\,\mathrm{sgn}\,}}\) we denote the sign function on \(\mathbb {Z}\): \({{\,\mathrm{sgn}\,}}(z)=1\) if \(z\ge 0\) and \({{\,\mathrm{sgn}\,}}(z)=-1\) if \(z<0\).
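As a standard worked example (ours, not taken from [20]): for the nearest-neighbour walk on \(\mathbb {Z}\) with \(P(x,x+1)=p\) and \(P(x,x-1)=q=1-p\), Stirling’s formula gives

$$\begin{aligned} p^{(2n)}(0,0)=\binom{2n}{n}(pq)^n\sim \frac{(4pq)^n}{\sqrt{\pi n}}, \quad \text{ so } \rho (P)=2\sqrt{pq},\end{aligned}$$

which is strictly smaller than 1 exactly when \(p\ne q\): a drift makes the return probabilities decay exponentially.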

Proposition 2.6

  1.

    There is a graph \( \varGamma =(X,E)\) and a Markov operator P on X such that \((X,P)\) is transient, uniformly irreducible and has a uniform first moment, but the random walk defined by \((X,P)\) does not converge almost surely to a random end of \(\varGamma \).

  2.

    Consider the Markov chain \((\mathbb {Z},P_{p_n,\varepsilon _n})\) which, given \(p_n\ge 0\) and \(\varepsilon _n\ge 0\) and writing \(n=|x|\), is defined as

    $$\begin{aligned} P(x,y)={\left\{ \begin{array}{ll} (1-p_n)\frac{1+\varepsilon _n}{2} &{} \quad \text{ for } y={{\,\mathrm{sgn}\,}}(x)(|x|+1)\\ (1-p_n)\frac{1-\varepsilon _n}{2} &{} \quad \text{ for } y={{\,\mathrm{sgn}\,}}(x)(|x|-1)\\ p_n &{} \quad \text{ for } y=-x\\ 0&{} \quad \text{ otherwise. } \end{array}\right. } \end{aligned}$$

    There is a choice of \(p_n\ge 0\) and \(\varepsilon _n\ge 0\) such that \((\mathbb {Z},P_{p_n,\varepsilon _n})\) is transient, uniformly irreducible, has uniform first moment and has an infinite expected number of steps where the sign changes. In particular, it verifies the conditions of (1).

The exact values that appear in the proof are \(p_n=\frac{1}{n^2(\ln n)^2}\) and \(\varepsilon _n=\frac{(n+1)(\ln (n+1))^2-n(\ln n)^2}{(n+1)(\ln (n+1))^2+n(\ln n)^2}\).
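Before the proof, a hedged simulation sketch of the chain with these values (ours; the first indices are cut off to avoid \(\ln 1=0\), which the proof allows). In a finite run, \(|x|\) tends to grow while only a handful of sign changes occur; their expected number is infinite but accrues extremely slowly.

```python
import math, random

def p_n(n):
    return 1 / (n**2 * math.log(n)**2) if n >= 2 else 0.0

def eps_n(n):
    if n < 2:
        return 0.0
    a, b = (n + 1) * math.log(n + 1)**2, n * math.log(n)**2
    return (a - b) / (a + b)

def run(n_steps, seed=0):
    """Simulate (Z, P_{p_n, eps_n}) started at 1, counting sign changes."""
    rng, x, flips = random.Random(seed), 1, 0
    for _ in range(n_steps):
        n, u = abs(x), rng.random()
        p, e = p_n(n), eps_n(n)
        s = 1 if x >= 0 else -1
        if u < p:                            # jump x -> -x: sign change
            x, flips = -x, flips + 1
        elif u < p + (1 - p) * (1 + e) / 2:  # move to sgn(x)(|x| + 1)
            x = s * (n + 1)
        else:                                # move to sgn(x)(|x| - 1)
            x = s * (n - 1)
    return x, flips

print(run(10**5))  # e.g. a large |x| (transience) and few sign changes
```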

Proof

We will find sufficient conditions on \(p_n\ge 0\) and \(\varepsilon _n\ge 0\) under which \((\mathbb {Z},P_{p_n,\varepsilon _n})\) verifies the conditions we seek, and then provide a choice that satisfies those conditions. Specifically, the sufficient conditions are (1), (2), (3) and (4).

The tails \(\sigma _x([n,+\infty ))\) are bounded by the tails of the measure \(\sigma \) on \(\mathbb {N}_0\) defined by \(\sigma (0)=\sigma (1)=1\), \(\sigma (2n)=p_n\) for \(n\ge 1\) and \(\sigma (x)=0\) otherwise (normalising \(\sigma \) to a probability distribution changes nothing below). The Markov chain \((\mathbb {Z},P_{p_n,\varepsilon _n})\) has uniform first moment if and only if \(\sigma \) has finite first moment, or equivalently

$$\begin{aligned} \sum _{n\in \mathbb {N}} np_n<\infty . \end{aligned}$$
(1)

For \((\mathbb {Z},P_{p_n,\varepsilon _n})\) to be uniformly irreducible, it suffices that there exist \(c>0\) such that \((1-p_n)\frac{1-\varepsilon _n}{2}\ge c\) for all n. If we have

$$\begin{aligned} p_n\xrightarrow {n\rightarrow \infty }0 \text{ and } \varepsilon _n\xrightarrow {n\rightarrow \infty }0 \end{aligned}$$
(2)

then \((1-p_n)\frac{1-\varepsilon _n}{2}\xrightarrow {n\rightarrow \infty }\frac{1}{2}\). In that case, replacing, if necessary, the values of finitely many \(p_n\) and/or \(\varepsilon _n\) with 0, we can ensure \((1-p_n)\frac{1-\varepsilon _n}{2}\ge c\) for all n.

To study the transience of \((\mathbb {Z},P_{p_n,\varepsilon _n})\), we consider \(\widetilde{P}\) on \(\mathbb {N}_0\) defined by \(\widetilde{P}(k,k+1)=(1-p_k)\frac{1+\varepsilon _k}{2}\), \(\widetilde{P}(k,k-1)=(1-p_k)\frac{1-\varepsilon _k}{2}\) and \(\widetilde{P}(k,k)=p_k\). It is a nearest neighbour random walk on \(\mathbb {N}_0\), and its transience is equivalent to the transience of \((\mathbb {Z},P_{p_n,\varepsilon _n})\). Nearest neighbour random walks on \(\mathbb {N}_0\) are well understood. As seen in [20, Section 2.16], \((\mathbb {N}_0,\widetilde{P})\) is transient if and only if

$$\begin{aligned} \sum _{k=1}^\infty r(e_k)<\infty \end{aligned}$$
(3)

where \(r(e_k)=\frac{\widetilde{P}(k-1,k-2)\dots \widetilde{P}(1,0)}{\widetilde{P}(0,1)\dots \widetilde{P}(k-1,k)}\). We have \(\frac{r(e_{k+1})}{r(e_k)}=\frac{1-\varepsilon _k}{1+\varepsilon _k}\) and therefore defining \(\varepsilon _k\) is equivalent to defining \(r(e_k)\).

Finally, if the Green function of \(\widetilde{P}\) is \(G^{(\widetilde{P})}\), then the expected number of “jumps” between n and \(-n\) is \(G^{(\widetilde{P})}(n,n)p_n\). We wish to obtain \(\sum _nG^{(\widetilde{P})}(n,n)p_n=\infty \). From the results of [20, Example 2.13, Section 2.16], it follows that \(G^{(\widetilde{P})}(n,n)=\frac{1}{r(e_n)\widetilde{P}(n,n-1)}\sum _{k=n+1}^\infty r(e_k)\). If \(\widetilde{P}(n,n-1)\ge c\), it would suffice to have

$$\begin{aligned} \sum _{n\in \mathbb {N}}p_n\frac{1}{r(e_n)}\sum _{k=n+1}^\infty r(e_k)=\infty . \end{aligned}$$
(4)

We now define \(r(e_k)\) and \(p_k\) for \(k\ge 2\) (the first values may be modified as explained above) and claim that those choices verify conditions (1), (2), (3) and (4). Let

$$\begin{aligned} r(e_k)=\frac{1}{k(\ln k)^2} \text{ and } p_k=\frac{1}{k^2(\ln k)^2}.\end{aligned}$$

We first prove condition (1). Since the summand is decreasing, it suffices to observe that

$$\begin{aligned} \sum _{n\ge 3} np_n\le \int _2^\infty \frac{x}{x^2(\ln x)^2}\hbox {d}x=-\frac{1}{\ln x}\Biggr |_2^\infty =\frac{1}{\ln 2},\end{aligned}$$

which is finite. As \(r(e_k)=kp_k\), this also proves condition (3). Condition (2) is straightforward.

We now only need to prove condition (4). Similarly, we have

$$\begin{aligned} \sum _{k=n+1}^\infty r(e_k)\ge \int _{n+1}^\infty \frac{1}{x(\ln x)^2}\hbox {d}x=-\frac{1}{\ln x}\Biggr |_{n+1}^\infty =\frac{1}{\ln (n+1)} \end{aligned}$$

and thus

$$\begin{aligned} \frac{1}{r(e_n)}\sum _{k=n+1}^\infty r(e_k)\ge \frac{n(\ln n)^2}{\ln (n+1)}\approx n\ln n.\end{aligned}$$

Then,

$$\begin{aligned}\sum _{n\ge 2}p_n\frac{1}{r(e_n)}\sum _{k=n+1}^\infty r(e_k)\ge \sum _{n\ge 2}\frac{1}{n\ln (n+1)}\ge \int _3^\infty \frac{1}{x\ln x}\hbox {d}x=\ln (\ln (x))\Biggr |_3^\infty \end{aligned}$$

which diverges. \(\square \)
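As a numerical sanity check of the proof (ours, not part of the argument): partial sums for conditions (1) and (3) stabilise, while the partial sums for condition (4) keep growing like \(\ln \ln N\).

```python
import math

N = 10**6
r = [1 / (k * math.log(k)**2) for k in range(2, N)]     # r(e_k), k >= 2
p = [1 / (k**2 * math.log(k)**2) for k in range(2, N)]  # p_k,    k >= 2

print(sum(k * pk for k, pk in zip(range(2, N), p)))  # condition (1): stabilises
print(sum(r))                                        # condition (3): stabilises

tail, s4 = 0.0, 0.0
for i in range(len(r) - 1, 0, -1):
    tail += r[i]                      # tail = sum_{k >= i+2} r(e_k)
    s4 += p[i - 1] * tail / r[i - 1]  # term for n = i+1 in condition (4)
print(s4)  # grows like ln(ln(N)): diverges as N -> infinity
```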

3 Proof of Main Theorem 1.1

Consider a finite set \(K\subset X\) and denote by \(\varGamma _1,\dots ,\varGamma _k\) the connected components of its complement. We will bound the probability of changing component at step n and prove that the sum of those probabilities over n is finite.

Consider \(x\in X\setminus K\) and \(g\in G\). We will bound the probability that x.g is not in the same component as x. Let \(g=s_1s_2\dots s_n\) where \(|g|=n\) and \(s_i\in S\). If x and x.g are in different components, by definition the path \(x,x.s_1,\dots ,x.g\) passes through K. Therefore, there is \(i\le n\) such that \(x.s_1s_2\dots s_i\in K\). Equivalently, \(\langle \sum _{i\le n}\tau _{s_1s_2\dots s_i}\delta _x,\sum _{k\in K}\delta _k\rangle \ge 1\) where \(\delta _y\) is the indicator function of a given point y and \(\tau _f\) is the translation defined by \(\tau _f\delta _y=\delta _{y.f}\). We observe

$$\begin{aligned} \langle \sum _{i\le n}\tau _{s_1s_2\dots s_i}\delta _x,\sum _{k\in K}\delta _k\rangle =\langle \delta _x,\sum _{i\le n}\sum _{k\in K}\tau _{s_i^{-1}\dots s_2^{-1}s_1^{-1}}\delta _k\rangle .\end{aligned}$$

We denote

$$\begin{aligned} f=\sum _{g\in G}\mu (g)\sum _{i\le |g|}\sum _{k\in K}\tau _{s_i^{-1}\dots s_2^{-1}s_1^{-1}}\delta _k,\end{aligned}$$

where for each g we fix a geodesic representation \(g=s_1s_2\dots s_{|g|}\) as above.

Then, the probability that x and x.g are in different components is not greater than \(\langle \delta _x,f\rangle \). Furthermore, the \(l^1\) norm of f satisfies \(\Vert f\Vert _1\le |K|\Vert \mu \Vert _1\) where \(\Vert \mu \Vert _1\) is the first moment of \(\mu \). In particular, it is finite.

Take a random walk starting at a fixed point \(\mathfrak {o}\) and consider n large enough so that the transient walk has left K. The probability of changing component at step \(n+1\) is then not greater than

$$\begin{aligned} \langle p^{(n)}\delta _\mathfrak {o},f\rangle .\end{aligned}$$

We have

$$\begin{aligned} \sum _{n\in \mathbb {N}}\langle p^{(n)}\delta _\mathfrak {o},f\rangle =\sum _{n\in \mathbb {N}}\sum _{x\in X}p^{(n)}(\mathfrak {o},x)f(x)=\sum _{x\in X}f(x)G(\mathfrak {o},x)\end{aligned}$$

where interchanging the order of summation is justified because all terms are non-negative. Let \(\check{p}\) be the kernel induced by the inverse measure \(\check{\mu }:g\mapsto \mu (g^{-1})\), and \(G^{(\check{p})}\) the Green function corresponding to that kernel. Then, \(G(\mathfrak {o},x)=G^{(\check{p})}(x,\mathfrak {o})\). It is a known property of the Green function that for all \(x,y\in X\), we have \(G^{(\check{p})}(x,y)\le G^{(\check{p})}(y,y)\). Indeed, the left-hand side is the expected number of visits to y of a walk starting at x; every such visit happens after the walk first reaches y, so by the strong Markov property it is at most the expected number of visits starting at y. Thus,

$$\begin{aligned} \sum _{x\in X}f(x)G(\mathfrak {o},x)\le G^{(\check{p})}(\mathfrak {o},\mathfrak {o})\Vert f\Vert _1<\infty .\end{aligned}$$

This proves that the expected number of component changes is finite, and hence, by the Borel–Cantelli lemma, after finite time the walk almost surely stays in the same connected component of the complement of K. Applying this to an increasing exhaustive sequence of sets K, we obtain the result of Theorem 1.1.
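In summary, writing \(q_{n+1}\) for the probability of changing component at step \(n+1\) (our shorthand), the proof condenses to the chain of estimates

$$\begin{aligned} \sum _{n\in \mathbb {N}}q_{n+1}\le \sum _{n\in \mathbb {N}}\langle p^{(n)}\delta _\mathfrak {o},f\rangle =\sum _{x\in X}f(x)G(\mathfrak {o},x)\le G^{(\check{p})}(\mathfrak {o},\mathfrak {o})\Vert f\Vert _1\le G^{(\check{p})}(\mathfrak {o},\mathfrak {o})|K|\Vert \mu \Vert _1<\infty ,\end{aligned}$$

followed by an application of Borel–Cantelli.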

It is worth mentioning that this approach is similar to the one used by Kaimanovich [8, Theorem 3.3] to prove pointwise convergence of the lamp configuration for walks with a finite first moment on lamplighter groups.

4 Thompson’s Group F

We now apply Theorem 1.1 to Thompson’s group F. The Schreier graph on the dyadic numbers has been described by Savchuk [17, Proposition 1] (see Fig. 2). We have the following self-similarity result:

Fig. 2: Schreier graph of the dyadic action of F for the standard generators
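To make Fig. 2 concrete, here is a minimal sketch of the dyadic action in one standard normalisation of the generators A and B of F (after Cannon–Floyd–Parry; Savchuk’s conventions may differ, so the formulas are illustrative). The computed neighbours of \(\frac{5}{8}\) include \(\frac{13}{16}\), \(\frac{9}{16}\) and \(\frac{3}{4}\), the vertices appearing in Lemma 4.1 below.

```python
from fractions import Fraction as Fr

# One standard normalisation of the generators A, B of Thompson's group F
# acting on [0, 1] (after Cannon--Floyd--Parry; conventions vary).
def A(t):
    if t <= Fr(1, 2): return t / 2
    if t <= Fr(3, 4): return t - Fr(1, 4)
    return 2 * t - 1

def A_inv(t):
    if t <= Fr(1, 4): return 2 * t
    if t <= Fr(1, 2): return t + Fr(1, 4)
    return (t + 1) / 2

def B(t):
    if t <= Fr(1, 2): return t
    if t <= Fr(3, 4): return t / 2 + Fr(1, 4)
    if t <= Fr(7, 8): return t - Fr(1, 8)
    return 2 * t - 1

def B_inv(t):
    if t <= Fr(1, 2): return t
    if t <= Fr(5, 8): return 2 * t - Fr(1, 2)
    if t <= Fr(3, 4): return t + Fr(1, 8)
    return (t + 1) / 2

# Neighbours of a dyadic point in the Schreier graph:
x = Fr(5, 8)
print({name: str(g(x)) for name, g in
       [("A", A), ("A^-1", A_inv), ("B", B), ("B^-1", B_inv)]})
# prints {'A': '3/8', 'A^-1': '13/16', 'B': '9/16', 'B^-1': '3/4'}
```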

Lemma 4.1

Consider the Schreier graph of F for its action on \(\mathbb {Z}[\frac{1}{2}]\) (see Fig. 2). We call left (respectively, right) branch the subgraph of the vertices v for which any geodesic between v and \(\frac{5}{8}\) passes through \(\frac{13}{16}\) (respectively, \(\frac{9}{16}\)). In the figure, those are the left and right branches of the tree, along with the rays starting at them. Then, each branch can be embedded into the other as a labelled graph.

Remark that stronger results of self-similarity of this graph have already been observed, see, for example, [9, Section 5.F].

Proof

Each branch is a labelled tree, and thus, an equivariant embedding is uniquely defined by the image of the root. We choose the image of \(\frac{13}{16}\) to be \(\frac{25}{32}\). This defines an embedding of the left branch into the right one. Similarly, choosing \(\frac{11}{16}\) as the image of \(\frac{9}{16}\) defines an embedding of the right branch into the left one. \(\square \)

This implies:

Lemma 4.2

Fix a measure on F, the support of which generates F as a semigroup, such that the induced random walk on the dyadic numbers almost surely converges towards an end of the graph. Then, the exit measure on the end space is not trivial.

Proof

We decompose the end space into five sets: two sets containing, respectively, the ends of the left or the right branch, and three sets that are the singletons corresponding to the rays at \(\frac{5}{8}\) and \(\frac{3}{4}\). The rays have equivariant embeddings into the branches. Combining this with Lemma 4.1, any of those five sets can be equivariantly embedded into another one. In particular, if the restriction of the exit measure to one of them has nonzero mass, then by transitivity the restriction to its image under the embedding also has nonzero mass. \(\square \)