1 Introduction

Let G be a connected infinite graph and suppose that each vertex of G carries a lamp. Consider a lamplighter on the graph who makes the following random movements: first, the lamplighter turns the lamp at the current site on or off with equal probability; then, he/she moves to a uniformly chosen nearest neighbor in G and again turns the lamp at the new site on or off with equal probability. The lamplighter repeats this random movement. Such a movement can be regarded as a random walk on the wreath product of graphs \(\mathbb {Z}_2 \wr G\), which is, roughly speaking, the graph obtained by attaching a copy of \(\mathbb {Z}_2=\{0,1\}\) to each vertex of G (see Definition 2.1 for the precise definition); it is called the “switch-walk-switch walk” or “lamplighter walk” on \(\mathbb {Z}_2 \wr G\). We are interested in the long-time behavior of this walk. Some results are known when G is a specific graph. Pittet and Saloff-Coste [15] established on-diagonal heat kernel asymptotics for the random walk on \(\mathbb {Z}_2 \wr \mathbb {Z}^d\). More precisely, they obtained the following estimates: there exist positive constants \(c_1 , c_2, c_3 , c_4 >0\) such that

$$\begin{aligned} c_1 \exp \left[ - c_2 n^{ \frac{d}{d+2} } \right] \le h_{2n} (g,g) \le c_3 \exp \left[ - c_4n^{ \frac{d}{d+2} } \right] \end{aligned}$$
(1.1)

for all \(g \in \mathbb {Z}_2 \wr \mathbb {Z}^d\), where \(h_n(\cdot ,\cdot )\) is the heat kernel (see [19, 20] for earlier results in the case \(G= \mathbb {Z}\) and [6] for the case that G is a finitely generated group with polynomial volume growth). Revelle [18] considered the lamplighter walk on the wreath product \(H \wr \mathbb {Z}\) when H is either a finite set or in a class of groups, and obtained relations between the rate of escape of random walks on H and the law of the iterated logarithm (LIL for short) on \(H \wr \mathbb {Z}\). In particular, when \(H=\mathbb {Z}_2\), he proved that there exist (non-random) constants \(c_1 , c_2, c_3 , c_4 >0\) such that the following hold for all \(\varvec{x} \in \mathbb {Z}_2 \wr \mathbb {Z}\):

$$\begin{aligned} c_1&\le \limsup _{n \rightarrow \infty } \frac{ d(Y_0, Y_n) }{ n^{1/2} (\log \log n)^{1/2} } \le c_2,\nonumber \\ c_3&\le \liminf _{n \rightarrow \infty } \frac{ d (Y_0, Y_n) }{ n^{1/2} (\log \log n)^{-1/2} } \le c_4, \quad P_{ \varvec{x} }\text {-a.s.}, \end{aligned}$$
(1.2)

where \(\{Y_n\}\) is the lamplighter random walk and \(d(\cdot ,\cdot )\) is the graph distance on \(\mathbb {Z}_2 \wr \mathbb {Z}\).

We are interested in the following question:

(Question) How do the exponents in (1.1), (1.2) change when the graph G is more general?

Fig. 1 The Sierpinski gasket graph

Fig. 2 The Sierpinski carpet graph

In this paper, we consider this question when G is, typically, a fractal graph. Figures 1 and 2 illustrate concrete examples of fractal graphs. It is known that the random walk on such a fractal graph behaves anomalously, in that it diffuses more slowly than the simple random walk on \(\mathbb {Z}^d\). It has been proved that the heat kernel \(h_n (x,y)\) of the random walk \(\{X_n \}_{n\ge 0}\) enjoys the following sub-Gaussian estimates: there exist positive constants \(c_1 , c_2 , c_3 , c_4 >0\) such that

$$\begin{aligned}&\frac{c_1}{n^{d_f/d_w}} \exp \left( -c_2 \left( \frac{d(x,y)^{d_w} }{n} \right) ^{1/(d_w-1)} \right) \le h_{2n} (x,y)\nonumber \\&\quad \le \frac{c_3}{n^{d_f/d_w}} \exp \left( -c_4 \left( \frac{d(x,y)^{d_w} }{n} \right) ^{1/(d_w-1)} \right) \end{aligned}$$
(1.3)

for all \(d(x,y)\le 2n\) (note that \(h_{2n}(x,y)=0\) when \(d(x,y)>2n\)), where \(d(\cdot ,\cdot )\) is the graph distance, \(d_f\) is the volume growth exponent of the fractal graph, and \(d_w\) is the so-called walk dimension, which expresses how fast the random walk on the fractal spreads out. Indeed, by integrating (1.3), one can obtain the following estimates: there exist positive constants \(c_1 , c_2 >0\) such that

$$\begin{aligned} c_1 n^{1/d_w} \le E d(X_{2n}, X_0) \le c_2 n^{1/d_w} \end{aligned}$$

for all \(n >0\). For more details on diffusions on fractals and random walks on fractal graphs, see [1, 11, 13]. As we see, properties of random walks on graphs are closely related to geometric properties of the graphs, and the volume growth is one such property. For graphs with polynomial volume growth, there are well-established general methods to analyze the properties of random walks on them, but for graphs with exponential volume growth these methods are not applicable. In this sense, graphs with exponential volume growth form an interesting research subject. The wreath product \(\mathbb {Z}_2 \wr G\) belongs to this category, and this is another reason why we are interested in lamplighter random walks on fractal graphs.
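The displacement estimate \(E\, d(X_{2n}, X_0) \asymp n^{1/d_w}\) above can be sanity-checked in the simplest case \(G = \mathbb {Z}\), where \(d_w = 2\). The following short Python computation (ours, purely illustrative and not part of the argument) evaluates \(E|X_m|\) exactly from the binomial distribution and confirms that quadrupling the time roughly doubles the mean displacement.

```python
from math import comb

def mean_abs_displacement(m):
    """E|X_m| for the simple random walk on Z started at 0, computed
    exactly from the binomial distribution of the number of up-steps."""
    return sum(abs(2 * k - m) * comb(m, k) for k in range(m + 1)) / 2 ** m

# d_w = 2 on Z, so E|X_m| grows like m^{1/2}:
# quadrupling the time should roughly double the displacement.
r = mean_abs_displacement(1600) / mean_abs_displacement(400)
print(round(r, 2))  # close to 2
```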

We consider the random walk on \(\mathbb {Z}_2 \wr G\), where the random walk on G enjoys the sub-Gaussian heat kernel estimates (1.3). The main results of this paper are the following:

(1) Sharp on-diagonal heat kernel estimates for the random walk on \(\mathbb {Z}_2 \wr G\) (Theorem 2.3);

(2) LILs for the random walk on \(\mathbb {Z}_2 \wr G\) (Theorem 2.4).

The on-diagonal heat kernel estimates are closely related to the asymptotic properties of the spectrum of the corresponding discrete operator. In our framework, we obtain the exponent \(d_f /(d_f + d_w)\) as the generalization of \(d/(d+2)\).

For the LILs, we establish the LIL for \(d_{\mathbb {Z}_2 \wr G} (Y_0, Y_n)\), where \(\{ Y_n \}_{n \ge 0}\) is the random walk on \(\mathbb {Z}_2 \wr G\), as well as the so-called other law of the iterated logarithm, which gives the almost sure asymptotic behavior of the liminf of \(d_{\mathbb {Z}_2 \wr G} (Y_0, Y_n)\). Note that the proof of (1.2) uses various properties specific to \(\mathbb {Z}\), so the generalization to other graphs G is highly non-trivial. We overcome this difficulty by establishing a relationship between the range of the random walk on G and \(d_{\mathbb {Z}_2 \wr G} (Y_0, Y_n)\). To our knowledge, these are the first results on LILs for wreath products beyond the case \(G=\mathbb {Z}\).

The outline of this paper is as follows. In Sect. 2, we explain the framework and the main results of this paper. In Sect. 3, we give some consequences of sub-Gaussian heat kernel estimates. These are preliminary results for Sects. 4 and 5, where we mainly treat the lamplighter random walks on fractal graphs. In Sect. 4, we prove the on-diagonal heat kernel estimates. Section 5 has three subsections. In Sect. 5.1, we give a relation between the range of random walk on G and \(d_{\mathbb {Z}_2 \wr G} (Y_0, Y_n)\). Here, one of the keys is to prove the existence of a path that covers a subgraph of G with the length of the path being (uniformly) comparable to the volume of the subgraph (Lemma 5.3). In Sect. 5.2, we deduce the LILs for the random walk on \(\mathbb {Z}_2 \wr G\) from the LILs for the range of the random walk on G (Theorem 5.5) when G is a strongly recurrent graph. In Sect. 5.3, we prove the LILs for the random walk on \(\mathbb {Z}_2 \wr G\) when G is a transient graph. In the Appendix 1, we give an outline of the proof of Theorem 5.5, on which the proof in Sect. 5.2 is based.

Throughout this paper, we use the following notation.

Notation

(1) For two nonnegative sequences \(\{ a(n) \}_{n \ge 0 }\) and \(\{ b(n) \}_{n \ge 0 }\), we write

    • \(a(n) \asymp b(n)\) if there exist positive constants \(c_1 , c_2>0\) such that \(c_1 a(n) \le b (n) \le c_2 a(n) \) for all n;

    • \(a(n) \approx b(n)\) if there exist positive constants \(c_1 , c_2 , c_3 , c_4>0\) such that \(c_1 a(c_2 n) \le b(n) \le c_3 a(c_4 n)\) for all n.

(2) We use \(c, C, c_1 , c_2 , \ldots \) to denote deterministic positive finite constants whose values are insignificant. These constants do not depend on the time parameters \(n,k, \ldots \), the distance parameters \(r,\ldots \), or the vertices of the graphs.

2 Framework and Main Results

In this section, we introduce the framework and the main results of this paper.

2.1 Framework

Let \(G=(V(G) , E(G) ) \) be an infinite, locally finite, connected graph. We assume V(G) is a countable set. We say that G is a graph of bounded degree if

$$\begin{aligned} M = \sup _{ v \in V(G) } \deg v < \infty . \end{aligned}$$
(2.1)

We denote by \(d(x,y)\) the graph distance between x and y in G, i.e., the minimal length of a path between x and y. When we want to emphasize the graph G, we write \(d_G (x,y)\) instead of \(d(x,y)\).

Next, we introduce the wreath product of graphs.

Definition 2.1

(Wreath product) Let \(G = (V(G),E(G))\) and \(H=(V(H), E(H))\) be graphs. We define the wreath product of G and H (denoted by \(H \wr G\)) in the following way. We define the vertex set of the wreath product as

$$\begin{aligned} V ( H \wr G ) = \left\{ (f , v) \in V(H)^{V(G)} \times V(G) \Biggm | \sharp {\text {Supp}}f < \infty \right\} , \end{aligned}$$

where \({\text {Supp}}f= \{ x \in V(G) \mid f(x) \ne 0 \}\) for a fixed element \(0 \in V(H)\). For \((f ,u ), (g,v) \in V( H \wr G )\), \( \left( (f ,u ), (g,v) \right) \in E ( H \wr G) \) if either (a) or (b) is satisfied:

(a) \(f=g\) and \((u,v) \in E(G) \);

(b) \( u=v \), \( f(x) = g(x) \) for all \( x \in V(G){\setminus } \{ u \} \), and \( (f(u) , g(u)) \in E(H) \).

We call G the underlying graph of \(H \wr G\) and H the fiber graph of \(H \wr G\).

Throughout the paper, we will only consider the case where \(H=\mathbb {Z}_2\), the graph consisting of two vertices \(\{0,1\}\) and one edge connecting them (as noted in Remark 2.5(2), the results in this paper hold whenever H is a finite graph). We denote the elements of \(V( \mathbb {Z}_2 \wr G)\) by boldface letters \(\varvec{x} , \varvec{y}, \ldots \) and the elements of V(G) by standard letters \(x,y, \ldots \).
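To make Definition 2.1 concrete, the following sketch (ours, purely illustrative; the choice of G as a path on three vertices and all helper names are our own) builds \(\mathbb {Z}_2 \wr G\) explicitly and counts its vertices and edges according to rules (a) and (b).

```python
from itertools import product

# G = path graph on {0, 1, 2}; H = Z_2 = {0, 1} with the single edge (0, 1).
G_vertices = [0, 1, 2]
G_edges = {(0, 1), (1, 2)}

# Vertices of Z_2 wr G: pairs (lamp configuration f, position v).
# Since G is finite here, every f automatically has finite support.
V = [(f, v) for f in product((0, 1), repeat=3) for v in G_vertices]

def is_edge(a, b):
    (f, u), (g, v) = a, b
    move = f == g and ((u, v) in G_edges or (v, u) in G_edges)   # rule (a)
    switch = u == v and f[u] != g[u] and all(
        f[x] == g[x] for x in G_vertices if x != u)              # rule (b)
    return move or switch

E = {frozenset((a, b)) for a in V for b in V if a != b and is_edge(a, b)}

print(len(V), len(E))  # 2^3 * 3 = 24 vertices; 16 + 12 = 28 edges
```

Rule (a) contributes \(2 \cdot 2^3 = 16\) edges (one per edge of G and lamp configuration), and rule (b) contributes \(3 \cdot 2^2 = 12\) (one per vertex of G and configuration off that vertex).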

Next, we introduce the notion of weighted graphs. Let \(\mu : V(G) \times V(G) \rightarrow [0,\infty )\) be a symmetric function such that \(\mu _{xy} = \mu (x,y)>0\) if and only if \((x,y) \in E(G)\). We call the pair \((G, \mu )\) a weighted graph. For a weighted graph \((G, \mu )\), we define a measure \(m = m_G\) on V(G) by \(m(A) = \sum _{x \in A} m(x)\) for \(A\subset V(G)\) where \(m(x) = \sum _{y:y \sim x} \mu _{xy}\). We will write \( V(x,r) = V_G (x,r ) = m (B(x,r))\), where \(B(x,r) = \{ y \in V(G) \mid d (x,y) \le r \}\).

Let \(\{ X_n \}_{n \ge 0 }\) be the (time-homogeneous) random walk on G whose transition probability is \(P = (p(x,y) )_{x,y \in V(G)}\), where \(p(x,y)=\mu _{xy}/m(x)\). We call \(\{ X_n \}_{n \ge 0 }\) the random walk associated with the weighted graph \((G , \mu )\). \(\{ X_n \}_{n \ge 0 }\) is reversible w.r.t. m, i.e., \(m (x) p(x,y) = m (y) p(y,x)\) for all \(x,y \in V(G)\). Define

$$\begin{aligned} p_n(x,y):=P_x(X_n=y),\quad ~~\forall x,y \in V(G). \end{aligned}$$

\(p_n(x,y)/m(y)\) is called the heat kernel of the random walk.

We next give a set of conditions for the graph and the random walks.

Assumption 2.2

Let \((G,\mu )\) be a weighted graph. We consider the following assumptions for \((G, \mu )\).

(1) (\(p_0\)-condition): \((G,\mu )\) satisfies the \(p_0\)-condition, i.e., there exists \(p_0 >0\) such that \(\mu _{xy}/m(x) \ge p_0\) for all \(x,y \in V(G)\) with \(\{ x,y \} \in E(G)\).

(2) (\(d_f\)-set condition): There exist positive constants \(d_f, c_1 , c_2 >0\) such that

    $$\begin{aligned} c_1 r^{d_f} \le V (x,r) \le c_2 r^{d_f} \end{aligned}$$
    (2.2)

    for all \(x \in V(G) , r \in \mathbb {N} \cup \{ 0 \} \). Here, we regard \(0^{d_f}\) as 1.

(3) (On-diagonal heat kernel upper bound): There exists \(d_s > 0\) such that the heat kernel \(p_n (x,y)/m(y)\) of \(\{ X_n \}_{ n \ge 0 }\) satisfies the following estimate for all \(x,y \in V(G)\), \(n \ge 1\):

    $$\begin{aligned} \frac{p_n (x,y)}{m(y)}&\le c_1n^{-d_s/2}. \end{aligned}$$
    (2.3)
(4) (Sub-Gaussian heat kernel estimates): There exists \(d_w > 1\) such that the heat kernel \(p_n (x,y)/m(y)\) of \(\{ X_n \}_{ n \ge 0 }\) satisfies the following estimates:

    $$\begin{aligned} \frac{p_n (x,y)}{m(y)}&\le \frac{c_1}{V(x,n^{\frac{1}{d_w}})} \exp \left( -c_2 \left( \frac{d(x,y)^{d_w}}{n} \right) ^{\frac{1}{d_w -1}} \right) \end{aligned}$$
    (2.4)

    for all \(x,y \in V(G), n \ge 1 \), and

    $$\begin{aligned} \frac{p_n (x,y)}{ m(y) } + \frac{ p_{n+1} (x,y) }{ m (y) }&\ge \frac{c_3 }{V(x,n^{\frac{1}{d_w}})} \exp \left( -c_4 \left( \frac{d(x,y)^{d_w}}{n} \right) ^{\frac{1}{d_w -1}} \right) \end{aligned}$$
    (2.5)

    for \(x,y \in V(G), n \ge 1\) with \(d(x,y) \le n\).

We will clarify which of the conditions above are assumed in each statement. As one can easily see, Assumption 2.2 (2) together with (2.4) implies Assumption 2.2 (3) with

$$\begin{aligned} d_s/2 = d_f/d_w. \end{aligned}$$

\(d_s\) is called the spectral dimension.

Fractal graphs such as the Sierpinski gasket graph and the Sierpinski carpet graph given in Sect. 1 satisfy Assumption 2.2 (see [3, 10]). Note that under Assumption 2.2 (1), G satisfies (2.1) with \(M = 1/p_0\). Also note that under Assumption 2.2 (2), we have \(c_1 \le m_G (x) \le c_2\) for all \(x \in V(G)\) and hence

$$\begin{aligned} c_1 \sharp A \le m (A) \le c_2 \sharp A, \quad ~~\forall A \subset V(G), \end{aligned}$$
(2.6)

where \(\sharp A\) is the cardinality of A. Finally, under Assumption 2.2 (1), (2), we have

$$\begin{aligned} 0< \inf _{x,y\in V(G) , x \sim y} \mu _{xy} \le \sup _{x,y \in V(G) , x \sim y} \mu _{xy} < \infty . \end{aligned}$$
(2.7)
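For a concrete instance, the Sierpinski gasket graph of Fig. 1 has \(d_f = \log 3/\log 2\) and \(d_w = \log 5/\log 2\) (see [3, 10]). The short computation below (an illustration, not part of the argument) checks that this graph falls into the regime \(d_s/2 < 1\) of Theorem 2.4(I) and evaluates the exponent \(d_f/(d_f + d_w)\) appearing in Theorem 2.3.

```python
from math import log

# Standard exponents for the Sierpinski gasket graph (see [3, 10]).
d_f = log(3) / log(2)   # volume growth exponent, ~1.585
d_w = log(5) / log(2)   # walk dimension, ~2.322

d_s_half = d_f / d_w             # spectral dimension / 2 = log 3 / log 5
hk_exponent = d_f / (d_f + d_w)  # exponent in Theorem 2.3 = log 3 / log 15

print(round(d_s_half, 3), round(hk_exponent, 3))  # 0.683 0.406
```

Since \(d_s/2 = \log 3/\log 5 < 1\), the gasket is covered by part (I) of Theorem 2.4.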

Next, we define the lamplighter walk on \(\mathbb {Z}_2 \wr G\). We denote the transition probability on \(\mathbb {Z}_2\) by \(P^{ ( \mathbb {Z}_2) } = ( p^{ ( \mathbb {Z}_2) } (a,b))_{a,b\in \mathbb {Z}_2} \), where \( P^{ ( \mathbb {Z}_2) } \) is given by

$$\begin{aligned} p^{ ( \mathbb {Z}_2) } (a,b) = \frac{1}{2}, \quad \text { for all } a,b \in \mathbb {Z}_2. \end{aligned}$$

We can lift \(P=(p(x,y))_{x,y\in G}\) and \(P^{ ( \mathbb {Z}_2) } = ( p^{ ( \mathbb {Z}_2) } (a,b))_{a,b\in \mathbb {Z}_2} \) to \(\mathbb {Z}_2 \wr G\) by

$$\begin{aligned} \tilde{p}^{ (G) } ( (f,x) , (g,y) )&= {\left\{ \begin{array}{ll} p (x,y) &{} \text {if } f=g \\ 0 &{} \text {otherwise} \end{array}\right. }, \\ \tilde{p}^{ (\mathbb {Z}_2) } ( (f,x) , (g,y) )&= {\left\{ \begin{array}{ll} \frac{1}{2} &{} \text {if } x=y \text { and } f(v) = g(v) \text { for all } v \ne x \\ 0 &{} \text {otherwise} \end{array}\right. } . \\ \end{aligned}$$

Let \(\{ Y_n \}_{n \ge 0 } = \{ (\eta _n , X_n ) \}_{n \ge 0 }\) be the random walk on \(\mathbb {Z}_2 \wr G\) whose transition probability \(\tilde{p}\) is given by

$$\begin{aligned}&\tilde{p} ( (f,x) , (g,y) ) = \tilde{p}^{ (\mathbb {Z}_2) } *\tilde{p}^{ (G) } *\tilde{p}^{ (\mathbb {Z}_2) } ( (f,x) , (g,y) ) \\&\quad = \sum _{ (h_1 , w_1) , (h_2 , w_2)} \tilde{p}^{ (\mathbb {Z}_2) } ( (f,x) ,(h_1, w_1) ) \tilde{p}^{ (G) } ( (h_1, w_1) , (h_2, w_2) ) \tilde{p}^{ (\mathbb {Z}_2) } ( (h_2,w_2) , (g,y) ) . \end{aligned}$$

Note that if \((f,x) , (g,y) \in V(\mathbb {Z}_2 \wr G)\) satisfy \(x \sim y\) and \(f(z) = g(z)\) for all \(z \ne x,y\) then

$$\begin{aligned} P(Y_{n+1} = (g,y) \mid Y_n = (f,x) ) = \frac{1}{4} p(x,y), \end{aligned}$$

and otherwise it is zero.

This random walk moves in the following way. Let \(X_n\) be the site of the lamplighter at time n and \(\eta _n\) the on-lamp state at time n. The lamplighter changes the lamp at \(X_n\) with probability 1/2, moves on G according to the transition probability \(P=(p(x,y))_{x,y \in G}\), and then changes the lamp at \(X_{n+1}\) with probability 1/2. The lamplighter repeats this procedure (in the first paragraph of Sect. 1, we discussed the case when \(\{X_n\}\) is the simple random walk on G).
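The transition rule can be verified by direct enumeration. The sketch below (ours, illustrative only) takes G = \(\mathbb {Z}\) with the simple random walk and computes the exact one-step distribution of the switch-walk-switch walk started from the all-lamps-off state at 0; every reachable state indeed has probability \(\frac{1}{4} p(x,y) = \frac{1}{8}\).

```python
from collections import Counter
from fractions import Fraction

def swsw_step(lamps, x):
    """Exact one-step distribution of the switch-walk-switch walk on
    Z_2 wr Z started from (lamps, x), where lamps is the set of lit sites."""
    dist = Counter()
    half = Fraction(1, 2)
    for a in (0, 1):                 # switch at the current site x
        f = lamps ^ {x} if a else lamps
        for y in (x - 1, x + 1):     # walk: p(x, y) = 1/2 on Z
            for b in (0, 1):         # switch at the new site y
                g = f ^ {y} if b else f
                dist[(frozenset(g), y)] += half * half * half
    return dist

dist = swsw_step(set(), 0)
# Each of the 8 reachable states has probability (1/4) p(0, y) = 1/8.
print(len(dist), all(p == Fraction(1, 8) for p in dist.values()))
```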

Note that \(\{ Y_n \}_{n \ge 0 }\) is reversible w.r.t. \(m_{\mathbb {Z}_2 \wr G}\), where

$$\begin{aligned} m_{\mathbb {Z}_2 \wr G} ( (\eta , x) ) = m(x). \end{aligned}$$

We denote the transition probability of \(\{ Y_n \}_{n \ge 0 }\) by \(p( \varvec{x} , \varvec{y} )\) (cf. \(p(x,y)\) is the transition probability of \(\{ X_n \}_{n \ge 0}\)). We sometimes write \(m (\varvec{x} )\) instead of \( m_{\mathbb {Z}_2 \wr G} (\varvec{x} )\).

2.2 Main Results

In this subsection, we state the main results of this paper.

Theorem 2.3

Suppose that Assumption 2.2 (1), (2) and (4) hold. Then, the following holds:

$$\begin{aligned} \frac{ p_{2n} (\varvec{x} , \varvec{x} )}{m_{\mathbb {Z}_2 \wr G} (\varvec{x} )} \approx \exp [ - n^{\frac{d_f}{d_f + d_w}} ],\quad ~~ \forall \varvec{x} \in V( \mathbb {Z}_2 \wr G). \end{aligned}$$

Next we state the results on the LILs when \(d_s/2 < 1\) and \(d_s/2 > 1\), respectively.

Theorem 2.4

Let \((G, \mu )\) be a weighted graph.

(I) Assume that Assumption 2.2 (1), (2), (4) and \(d_s / 2 <1\) hold. Then, there exist (non-random) constants \(c_1 , c_2, c_3 , c_4 >0\) such that the following hold for all \(\varvec{x} \in V(\mathbb {Z}_2 \wr G)\):

$$\begin{aligned} c_1&\le \limsup _{n \rightarrow \infty } \frac{ d_{\mathbb {Z}_2 \wr G } (Y_0, Y_n) }{ n^{d_s/2} (\log \log n)^{1-d_s/2} } \le c_2, \quad P_{ \varvec{x} }\text {-a.s.} \end{aligned}$$
(2.8)
$$\begin{aligned} c_3&\le \liminf _{n \rightarrow \infty } \frac{ d_{\mathbb {Z}_2 \wr G } (Y_0, Y_n) }{ n^{d_s/2} (\log \log n)^{-d_s/2} } \le c_4, \quad P_{ \varvec{x} }\text {-a.s.} \end{aligned}$$
(2.9)

(II) Assume that Assumption 2.2 (1), (2), (3) and \(d_s / 2 >1\) hold. Then, there exist (non-random) positive constants \(c_1,c_2 >0\) such that the following hold for all \(\varvec{x} \in V(\mathbb {Z}_2 \wr G)\):

$$\begin{aligned} c_1&\le \liminf _{n \rightarrow \infty } \frac{d_{\mathbb {Z}_2 \wr G} (Y_0 , Y_n ) }{n} \le \limsup _{n \rightarrow \infty } \frac{d_{\mathbb {Z}_2 \wr G} (Y_0 , Y_n ) }{n} \le c_2, \quad P_{ \varvec{x} } \text {-a.s. } \end{aligned}$$
(2.10)

Remark 2.5

(1) Note that in Theorem 2.4(II), we do not need Assumption 2.2 (4); we only need the on-diagonal upper bound (2.3). Since the transient case is discussed under a general framework in [14] (see Sect. 5.3), we do not pursue the minimal assumption for (2.10) to hold.

(2) We can obtain the same results (by the same proof) if we replace \(\mathbb {Z}_2\) by a finite graph H with \(\sharp H \ge 2\) and \(p^{( \mathbb {Z}_2) }\) by \(p^{(H)}\), where \(p^{(H)}\) is the transition probability on H given by

    $$\begin{aligned} p^{(H)} (a,b) = \frac{1}{ \sharp H }, \quad \text {for all }a,b \in V(H). \end{aligned}$$
(3) For each \(0< \alpha < 1\), Rau [17, Proposition 1.2] constructed a graph \(G_{\alpha }\) such that the random walk on \(G_{\alpha }\) satisfies the following heat kernel estimates:

    $$\begin{aligned} p_{2n} (x,x) \approx \exp ( -n^{\alpha } ). \end{aligned}$$
    (2.11)

For the case \({1}/{3} \le \alpha < 1\), the graphs constructed by Rau are wreath products over \(\mathbb {Z}\), but the fiber graphs differ from site to site (the definition of the wreath product given by Rau is more general than ours). On the other hand, for each \(d_f , d_w \) such that \(2 \le d_w \le 1 + d_f\) and \(d_f \ge 1\), Barlow [2, Theorem 2] constructed weighted graphs that satisfy Assumption 2.2. Combining this with Theorem 2.3, for any given \(1/3 \le \alpha < 1\) we can give an alternative example where the heat kernel enjoys (2.11).

(4) For the case \(d_s/2 = 1\), we have not been able to obtain the LIL in general. However, one can obtain the LIL in the case of \(\mathbb {Z}^2\) as follows (note that \(d_s/2 =1\) in this case since \(d_f = d_w = 2\)). Define \(R_n =\sharp \{ X_0 , \ldots , X_n \}\). Dvoretzky and Erdős [8, Theorem 1,4] proved the following law of large numbers for \(R_n\):

    $$\begin{aligned} \lim _{n \rightarrow \infty } \frac{ R_n}{ \pi n/\log n}&= 1, \quad P\text {-a.s.} \end{aligned}$$

In Propositions 5.1 and 5.2, we will show that \(\frac{1}{4} R_n \le d_{\mathbb {Z}_2 \wr \mathbb {Z}^2} (Y_0 , Y_n) \) for all but finitely many n, and that there exists a positive constant \(C>0\) such that \(d_{\mathbb {Z}_2 \wr \mathbb {Z}^2} (Y_0, Y_n) \le CR_n\) for all n. Using these facts, we see that there exist positive constants \(c_1, c_2 >0\) such that for all \(\varvec{x} \in V(\mathbb {Z}_2 \wr \mathbb {Z}^2)\)

    $$\begin{aligned} c_1 \le \liminf _{n \rightarrow \infty } \frac{ d_{\mathbb {Z}_2 \wr \mathbb {Z}^2} (Y_0 , Y_n)}{ n/\log n } \le \limsup _{n \rightarrow \infty } \frac{ d_{\mathbb {Z}_2 \wr \mathbb {Z}^2} (Y_0 , Y_n)}{ n/\log n} \le c_2, \quad P_{\varvec{x}} \text {-a.s. } \end{aligned}$$

    As we see, the exponents differ from those in the cases of \(d_s/2<1\) and \(d_s/2 > 1\).

3 Consequences of Heat Kernel Estimates

In this section, we give preliminary results obtained from the sub-Gaussian heat kernel estimates (2.3), (2.4) and (2.5). Throughout this section, we assume that Assumption 2.2 (1), (2) hold.

First, the following can be obtained by a simple modification of [1, Lemma 3.9] (note that (2.5) is not needed here).

Lemma 3.1

Suppose (2.4). Then, there exist positive constants \(c_{1} , c_{2} > 0\) such that

$$\begin{aligned} P_y \left( \max _{0 \le j \le n} d(x,X_j) \ge 3r \right) \le c_{1} \exp \left( -c_{2} \left( \frac{r^{d_w}}{n} \right) ^{\frac{1}{d_w -1}} \right) \end{aligned}$$

for all \(n \ge 1, r \ge 1, x,y \in V(G)\) with \( d(x,y) \le r\).

The following lemma will be used in Sect. 5.1. Note that, unlike Lemma 3.1, we only need the weaker condition (2.3) here.

Lemma 3.2

Suppose (2.3). Then, there exist positive constants \(c_1, c_2>0\) such that

$$\begin{aligned} P_x \left( \max _{0 \le j \le n } d(x,X_j) \le r \right) \le c_1\exp \left( -c_2 \frac{n}{r^{d_w}} \right) \end{aligned}$$
(3.1)

for all \(x \in V(G) , n,r \ge 1\).

Proof

We first show that there exist a positive constant \(c_{1} > 0\) and a positive integer R such that

$$\begin{aligned} P_x \left( \max _{0 \le j \le [r^{d_w}] } d(x,X_j) \le 2c_{1} r \right) \le \frac{1}{2} \end{aligned}$$
(3.2)

for all \(x \in V(G)\) and all \(r \ge R\), where \([r^{d_w}]\) denotes the greatest integer less than or equal to \(r^{d_w}\). Using (2.3), for \(r\ge 1\) we have

$$\begin{aligned}&P_x \left( \max _{0 \le j \le [r^{d_w}] } d(x,X_j) \le 2 c_{1} r \right) \le P_x (d(x,X_{ [r^{d_w}] }) \le 2c_{1}r ) \\&\quad \quad \quad \le c_2 \frac{1}{ ([r^{d_w}])^{d_f/d_w} } \sum _{y \in B(x , 2c_{1}r) } m(y) \le c_3 \frac{1}{ r^{d_f}} m( B(x, 2c_{1} r) ). \end{aligned}$$

Recall that \(c_4 r^{d_f} \le m (B(x,r)) \le c_5 r^{d_f}\) for \(r \in \mathbb {N} \cup \{ 0 \} \) by (2.2). Take \(c_1\) to be a positive constant satisfying \(c_3 (2c_1)^{d_f} \le \frac{1}{2c_5}\). We first consider the case \(2c_1 r \ge 1\). In this case, by the above estimate we have

$$\begin{aligned} P_x \left( \max _{0 \le j \le [r^{d_w}] } d(x,X_j) \le 2 c_{1} r \right) \le c_3 \frac{1}{r^{d_f}} m( B(x, 2c_{1} r) ) \le c_3 c_5 (2c_1)^{d_f} \le \frac{1}{2}. \end{aligned}$$

Next, we consider the case \(2c_1 r < 1\). In this case, \(m( B(x,2c_1 r) ) =m ( \{ x \} ) \le c_5\). Take R such that \(R > (2c_3c_5)^{1/d_f}\). By the above estimate, we have

$$\begin{aligned} P_x \left( \max _{0 \le j \le [r^{d_w}] } d(x,X_j) \le 2 c_{1} r \right) \le c_3 \frac{1}{r^{d_f}} m( B(x, 2c_{1} r) ) \le c_3 \frac{c_5}{r^{d_f}} \le \frac{1}{2} \end{aligned}$$

for \(r \ge R\). We thus obtain (3.2).

We now prove (3.1). Let \(r \ge R\). It is enough to consider the case \(n\ge [r^{d_w}]\) since otherwise (3.1) is immediate by adjusting the constants. Let \(k\ge 1\) be such that \(k[r^{d_w}]\le n< (k+1)[r^{d_w}]\) and let \(t_i = i [r^{d_w}] \). Then, by the Markov property and (3.2) we have

$$\begin{aligned} P_x \left( \max _{0 \le j \le n} d(x,X_j ) \le c_{1}r \right)&\le P_x \left( \bigcap _{ 0 \le i \le k-1} \left\{ \max _{t_i \le j \le t_{i+1} } d( X_{t_i} , X_{j } ) \le 2c_{1}r \right\} \right) \\&\le \left\{ \sup _y P_y \left( \max _{0 \le j \le [r^{d_w}] } d(y,X_j ) \le 2c_{1} r \right) \right\} ^k \\&\le \left( \frac{1}{2} \right) ^k \le c_6 \exp (-c_7 k ) \le c_8 \exp (-c_{9} nr^{-d_w}). \end{aligned}$$

The case \(1 \le r \le R\) follows immediately from the above estimate. Hence, we obtain (3.1) by adjusting the constants. \(\square \)
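On \(G = \mathbb {Z}\) (where \(d_w = 2\)), the confinement probability in Lemma 3.2 can be computed exactly by iterating the transition matrix killed outside the ball, and the predicted geometric decay in n is visible numerically: \(-\log P\) grows linearly in n, so doubling the time interval roughly doubles the increment. The script below is our illustration, not part of the proof.

```python
from math import log

def stay_prob(r, n):
    """P_0(max_{0<=j<=n} |X_j| <= r) for the simple random walk on Z,
    computed via the transition matrix killed outside [-r, r]."""
    p = {x: 0.0 for x in range(-r, r + 1)}
    p[0] = 1.0
    for _ in range(n):
        q = {x: 0.0 for x in range(-r, r + 1)}
        for x, mass in p.items():
            for y in (x - 1, x + 1):
                if -r <= y <= r:
                    q[y] += 0.5 * mass
        p = q
    return sum(p.values())

r = 5
# -log P(n) should grow linearly in n, i.e., P(n) ~ C exp(-c n / r^2).
lam = [-log(stay_prob(r, n)) for n in (200, 400, 800)]
print(round((lam[2] - lam[1]) / (lam[1] - lam[0]), 2))  # close to 2
```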

In the next proposition, we show that Lemma 3.2 is sharp up to constants if we assume both (2.4) and (2.5). The idea of the proof is based on [16, Lemma 7.4.3], where a similar estimate was given for a class of random walks on \(\mathbb {Z}^d\).

Proposition 3.3

Suppose (2.4) and (2.5). Then, there exist \(N_0 \ge 1\) and positive constants \( c_1, c_2>0\) such that

$$\begin{aligned} P_x \left( \max _{0 \le j \le n } d(x,X_j) \le r \right) \ge c_1\exp \left( -c_2 \frac{n}{r^{d_w}} \right) \end{aligned}$$

for all \(r \ge N_0\) and \(n \ge 4^{ \frac{d_w}{d_w -1} }\).

The proof consists of the following two lemmas.

Lemma 3.4

Under (2.5) there exists \(\epsilon \in (0,1)\) such that

$$\begin{aligned} P_y (d(x, X_{n} ) > r ) \le 1- \epsilon \end{aligned}$$

for all \(r\ge 1\), \(n \ge 4^{\frac{d_w}{d_w -1}}\) with \(n < r^{d_w}\) and \(x, y \in V(G) \) with \(d(x,y) \le r\).

Proof

We follow the argument in [16, Lemma 7.4.7]. Let \(\gamma = 1/(d_w - 1)\). Let \(\ell _{x,y}\) be a geodesic path from x to y in G. Let \(x_n \in \ell _{x,y}\) be the \(([n^{1 / d_w} ] +1)\)-th vertex from y. Then, \(B(x_n, [n^{1/d_w} ] ) \subset B(x,r-1)\) since for all \( z \in B(x_n , [n^{1/d_w} ] ) \) we have \(d(x,z) \le d(x,x_n) + d(x_n , z ) \le (d(x,y) - [n^{1 / d_w} ]-1) + [n^{1 / d_w} ] \le r-1\). Also for all \(z \in B(x_n , [n^{1/d_w} ] )\), we have \(d(y,z) \le d(y , x_n) + d(x_n ,z) \le 2 [n^{1 / d_w} ] +1\). Hence, by (2.5) and (2.2) we have

$$\begin{aligned}&2P_y \left( d(x,X_{n}) \le r \right) \ge P_y \left( d(x , X_{n-1}) \le r-1 \right) + P_y \left( d(x,X_{n}) \le r-1 \right) \\&\quad \ge c_1 \sum _{ z \in B(x_n , [n^{1/d_w}] ) } \frac{ m( z ) }{ V(y ,(n-1)^{1 / d_w}) } \exp \left[ -c_2 \left( \frac{d( y ,z )^{d_w}}{n-1} \right) ^{ \gamma } \right] \\&\quad \ge c_3 \left( \sum _{ z \in B(x_n , [n^{1/d_w}] ) } \frac{ m(z) }{ n^{d_f / d_w} } \right) \exp \left[ -c_2 (2 \cdot 3^{d_w})^{ \gamma } \right] \ge c_4 \exp \left[ -c_2 (2\cdot 3^{d_w})^{\gamma } \right] , \end{aligned}$$

provided \(n \ge 4^{\frac{d_w}{d_w -1}}\), so that \(d(y,z) \le 2n^{1/d_w} +1 \le 3 n^{1/d_w} \le n-1\) for any \(z \in B(x_n , [n^{1/d_w}])\). The proof is completed by taking \(\epsilon = \frac{1}{2} c_4 \exp \left[ -c_2 (2\cdot 3^{d_w})^{\gamma } \right] \) (note that we may take \(c_4<1\)). \(\square \)

Lemma 3.5

Let \(\epsilon \) be as in Lemma 3.4. Then, under (2.4) and (2.5), there exists \(\eta \ge 1\) such that for all \(x,y \in V(G) \), positive integers \(r \ge 8^{\frac{1}{d_w -1} }\) with \(d(x,y) \le 2r\) and for all integers \(k \ge 0\) and \(\ell \ge 4^{\frac{d_w}{d_w -1}}\) with \(k [r^{d_w}] \le \ell \le (k+1) [r^{d_w}]\), we have

$$\begin{aligned} P_y \left( \max _{0 \le j \le \ell } d(x , X_j) \le 3 \eta r, d(x,X_{\ell } ) \le 2r \right) \ge \left( \frac{ \epsilon }{2} \right) ^{k} \wedge \frac{\epsilon }{2}. \end{aligned}$$

Proof

We follow the argument in the proof of [16, Lemma 7.4.3]. We prove the assertion by induction on k.

Step I: We first prove the cases \(k=0,1\). Let \(\gamma = 1/(d_w - 1)\). In general, \(1 \le P(A) + P(B) + P( (A \cup B)^c )\) holds for any events A, B. As A, B we take \(A = \{ \max _{0 \le j \le \ell } d(x,X_j) > 3 \eta r \}\) and \(B =\{ d(x,X_{\ell } ) > 2r \}\). Let \(4^{\frac{d_w}{d_w -1}} \le \ell \le 2[r^{d_w}]\). By Lemmas 3.1 and 3.4, we have

$$\begin{aligned} 1&\le P_y \left( \max _{ 0 \le j \le \ell } d(x,X_j)> 3 \eta r \right) + P_y \left( d(x,X_{\ell } ) > 2r \right) \\&\quad + P_y \left( \max _{0 \le j \le \ell } d(x , X_j) \le 3 \eta r , d(x,X_{\ell }) \le 2r \right) \\&\le c_1 \exp \left[ - c_2 \left( \frac{ (\eta r)^{d_w} }{\ell } \right) ^{ \gamma } \right] + (1-\epsilon )\\&\quad + P_y \left( \max _{0 \le j \le \ell } d(x , X_j) \le 3 \eta r , d(x,X_{\ell }) \le 2r \right) . \end{aligned}$$

From the above, using \(\ell \le 2[r^{d_w}]\), we have

$$\begin{aligned}&P_y \left( \max _{0 \le j \le \ell } d(x , X_j) \le 3\eta r , d(x,X_{\ell }) \le 2r \right) \\&\quad \ge \epsilon - c_1 \exp \left[ - c_2 \left( \frac{ (\eta r)^{d_w} }{\ell } \right) ^{\gamma } \right] \ge \epsilon - c_1 \exp \left[ -c_2 \left( \frac{ \eta ^{d_w} }{2} \right) ^{\gamma } \right] . \end{aligned}$$

Taking \(\displaystyle \eta > \left\{ \frac{2^{\gamma }}{c_2} \log \left( \frac{2 c_1}{ \epsilon } \right) \right\} ^{1/(\gamma d_w) } \vee 1\), we obtain

$$\begin{aligned} P_y \left( \max _{0 \le j \le \ell } d(x , X_j) \le 3\eta r , d(x,X_{\ell }) \le 2r \right) \ge \frac{\epsilon }{2} \end{aligned}$$

for \(4^{\frac{d_w}{d_w -1}} \le \ell \le 2[r^{d_w}]\).

Step II: Let \(k \ge 1\) and assume that the result holds up to k. Let \(\ell \) satisfy \((k+1) [r^{d_w}] \le \ell \le (k+2) [r^{d_w}]\). Define \(\ell ^{\prime } = k[r^{d_w} ]\). Then, since \(\ell ^{\prime } \wedge (\ell - \ell ^{\prime }) \ge [r^{d_w}] \ge \frac{1}{2} r^{d_w} \ge 4^{ \frac{ d_w}{ d_w -1}}\) (by \(r \ge 8^{\frac{1}{d_w -1}}\)), using the Markov property and the induction hypothesis, we have

$$\begin{aligned}&P_y \left( \max _{ 0 \le j \le \ell } d(x,X_j) \le 3 \eta r , d(x,X_{\ell } ) \le 2r \right) \\&\quad \ge P_y \left( \max _{ 0 \le j \le \ell } d(x,X_j) \le 3 \eta r , d(x,X_{\ell } ) \le 2r, d(x,X_{\ell ^{\prime }} ) \le 2r \right) \\&\quad = E_y \left[ 1_{ \{ \max _{ 0 \le j \le \ell ^{\prime } } d(x,X_j ) \quad \le 3 \eta r , d(x,X_{\ell ^{\prime }} ) \le 2r \} } P_{ X_{\ell ^{\prime }} } \left( d(x, X_{\ell - \ell ^{\prime } } )\right. \right. \nonumber \\&\qquad \left. \left. \le 2r,\max _{0 \le j \le \ell - \ell ^{\prime } } d(x,X_j) \le 3 \eta r \right) \right] \\&\quad \ge \frac{\epsilon }{2} P_y \left( \max _{ 0 \le j \le \ell ^{\prime } } d(x,X_j ) \le 3 \eta r , d(x,X_{\ell ^{\prime }} ) \le 2 r \right) \ge \left( \frac{ \epsilon }{2} \right) ^{k+1} . \end{aligned}$$

We thus complete the proof. \(\square \)

Given Lemma 3.5, it is straightforward to obtain Proposition 3.3.

4 On-Diagonal Heat Kernel

In this section, we give the proof of Theorem 2.3.

The lower bound follows by the same approach as in [16, Section 7] (cf. [15, Section 7] and [21, Section 15.D]). We use Proposition 3.3 for the proof.

Proof of the lower bound of Theorem 2.3

Let \(\varvec{x} = (\eta , x) \in V( \mathbb {Z}_2 \wr G)\). As mentioned before, we write \(m_{\mathbb {Z}_2 \wr G} \) simply as m. For any finite subset \(A \subset V(\mathbb {Z}_2 \wr G)\), using the Cauchy–Schwarz inequality, we have

$$\begin{aligned} \frac{p_{2n} (\varvec{x} ,\varvec{x} )}{ m(\varvec{x} ) }&= \sum _{ \varvec{y} \in V(\mathbb {Z}_2 \wr G) } \frac{p_n ( \varvec{x} , \varvec{y}) p_{n} ( \varvec{y} , \varvec{x} )}{ m(\varvec{x}) }\nonumber \\&= \sum _{ \varvec{y} \in V(\mathbb {Z}_2 \wr G) } \frac{ p_n ( \varvec{x} , \varvec{y} )^2 }{ m(\varvec{y} ) } \ge \sum _{ \varvec{y} \in A} \frac{ p_n (\varvec{x} , \varvec{y} )^2 }{ m(\varvec{y} ) } \ge \frac{1}{ m(A) } P_{\varvec{x} } \left( Y_n \in A \right) ^2. \end{aligned}$$
(4.1)

Set \(A:= \{ \varvec{y} = (f , y) \in V(\mathbb {Z}_2 \wr G) \mid y \in B_G (x,r),\ f(z) = 0 \text { for all } z \in V(G) \text { with } d(x,z) >r\}\). Using (2.2) and (2.6), we have

$$\begin{aligned} m_{\mathbb {Z}_2 \wr G} (A) = \sum _{ y \in B_G (x,r) } m_G (y) 2^{ \sharp B_G (x,r)} \le c_1 r^{d_f} 2^{c_2 r^{d_f} }, \end{aligned}$$

and using Proposition 3.3 we have

$$\begin{aligned} P_{\varvec{x} } \left( Y_n \in A \right) \ge P_x \left( \max _{0 \le j \le n} d(x, X_j) \le r \right) \ge c_3 \exp \left[ - c_4 \frac{n}{r^{d_w}} \right] \end{aligned}$$

provided \(n \ge 4^{\frac{d_w}{d_w -1}}\) and \(r \ge N_0\). Hence, by (4.1), we have

$$\begin{aligned} \frac{ p_{2n} (\varvec{x} , \varvec{x} )}{ m(\varvec{x} ) } \ge c_5 \exp \left[ -c_6 \left( d_f \log r + r^{d_f} + \frac{n}{r^{d_w}} \right) \right] . \end{aligned}$$

Optimizing the right-hand side by taking \(r= n^{1/(d_f + d_w)}\), we obtain

$$\begin{aligned} \frac{p_{2n} (\varvec{x} , \varvec{x} )}{ m( \varvec{x} ) } \ge c \exp \left[ -C n^{\frac{d_f}{ d_f + d_w }} \right] \end{aligned}$$

provided \(n \ge 4^{\frac{d_w}{d_w-1}} \vee N_0^{d_f + d_w}\). The lower bound for \(1 \le n \le 4^{\frac{d_w}{d_w-1}} \vee N_0^{d_f + d_w}\) is obvious, and we thus complete the proof. \(\square \)
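As a numerical aside (not part of the proof), one can confirm that the choice \(r = n^{1/(d_f + d_w)}\) attains the optimal exponent in the bound above. The following minimal Python sketch assumes, purely for illustration, the Sierpinski-gasket values \(d_f = \log 3/\log 2\) and \(d_w = \log 5 / \log 2\):

```python
import math

# Sierpinski-gasket exponents, assumed purely for illustration.
d_f = math.log(3) / math.log(2)   # volume-growth (fractal) dimension
d_w = math.log(5) / math.log(2)   # walk dimension

n = 10**6
# d_f * log(r) is lower order, so minimize the dominant part of the
# exponent, r^{d_f} + n / r^{d_w}, over a geometric grid of r values.
f = lambda r: r**d_f + n / r**d_w
rs = [1.01**k for k in range(1, 2000)]
r_best = min(rs, key=f)

r_theory = n ** (1.0 / (d_f + d_w))   # the choice made in the proof
print(r_best, r_theory, f(r_best))
```

The grid minimizer agrees with \(n^{1/(d_f+d_w)}\) up to a bounded factor, and the minimal exponent is comparable to \(n^{d_f/(d_f+d_w)}\), as used above.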

We next prove the upper bound of Theorem 2.3 (cf. [15, Section 8] and [21, Section 15.D]).

Proof of the upper bound of Theorem 2.3

Without loss of generality, we may and do assume that \(\eta _0\) is identically 0. For the switch-walk-switch random walk \(\{ Y_n = (\eta _n , X_n ) \}_{n \ge 0}\) on \(\mathbb {Z}_2 \wr G\), \(\eta _n\) is uniformly distributed on \(\{ f \in \prod _{z \in V(G)} \mathbb {Z}_2 \mid {\text {Supp}}(f - \eta _0) \subset \bar{R}_n \} \), where \(\bar{R}_n = \{ X_0, X_1, \cdots , X_n \}\). Hence, setting \(R_n = \sharp \bar{R}_n\), we have

$$\begin{aligned} P_{\varvec{x}} \left( Y_n = \varvec{x} \right) = \sum _{k=0}^{n+1} P_{ \varvec{x} } \left( Y_n = \varvec{x} , R_n = k \right) \le \sum _{k=0}^{n+1} E_{ \varvec{x} } \left[ 1_{ \{ R_n = k \} } 2^{-k} \right] \le E_{ \varvec{x} } \left[ 2^{-R_n} \right] . \end{aligned}$$

In [9, Theorem 1.2], Gibson showed the following Donsker–Varadhan type range estimate: for any \(\nu > 0 \) and any \(x \in V(G) \),

$$\begin{aligned} - \log E_x \left[ \exp \left\{ - \nu m ( \bar{R}_{ [n^{d_w} V(x,n)] } ) \right\} \right] \asymp V(x,n). \end{aligned}$$

Note that \(V(x,n) \asymp n^{d_f}\), so the time index \([n^{d_w} V(x,n)]\) is comparable to \(n^{d_f + d_w}\). Replacing n with \(n^{1/(d_f + d_w) }\), we have

$$\begin{aligned} E_x \left[ \exp \left\{ -\nu m( \bar{R}_{n} ) \right\} \right] \approx \exp \left[ - n^{d_f / (d_f + d_w) } \right] . \end{aligned}$$

Since \(c\, m(\bar{R}_n ) \le R_n\) (due to (2.6)), the above estimates yield the desired upper bound, i.e.,

$$\begin{aligned} \frac{p_{n} (\varvec{x} , \varvec{x} )}{ m( \varvec{x} ) } \le c \exp \left[ -C n^{\frac{d_f}{ d_f + d_w }} \right] . \end{aligned}$$

We thus complete the proof. \(\square \)
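As an illustration of the key inequality \(p_n(\varvec{x},\varvec{x})/m(\varvec{x}) \le E_{\varvec{x}}[2^{-R_n}]\) (not part of the proof), one can estimate \(E[2^{-R_n}]\) by simulation. The following minimal Python sketch assumes the base graph \(\mathbb {Z}\), where \(d_f = 1\), \(d_w = 2\) and the predicted decay exponent is \(n^{1/3}\):

```python
import random

random.seed(0)

def diag_bound(n, trials=5000):
    """Monte Carlo estimate of E[2^{-R_n}] for simple random walk on Z,
    which bounds the return probability of the walk on Z_2 wr Z."""
    total = 0.0
    for _ in range(trials):
        x, lo, hi = 0, 0, 0              # the range of a 1d walk is an interval
        for _ in range(n):
            x += random.choice((-1, 1))
            lo, hi = min(lo, x), max(hi, x)
        total += 2.0 ** (-(hi - lo + 1))  # R_n = hi - lo + 1
    return total / trials

est = [diag_bound(n) for n in (10, 100, 400)]
print(est)   # decreasing, consistent with exp(-c n^{1/3}) decay on Z
```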

5 Laws of the Iterated Logarithm

In this section, we will prove Theorem 2.4.

We first explain the idea of the proof. Let \((G, \mu )\) be a weighted graph such that G is of bounded degree. For notational simplicity, let \(o \in V(G)\) be a distinguished point and \(\varvec{0} \) be the element of \((\mathbb {Z}_2)^{V(G)}\) such that \(\varvec{0} (v) = 0\) for all \(v \in V(G)\). In order to realize a given lamp state \((\eta , x) \in V(\mathbb {Z}_2 \wr G)\) from the initial lamp state \((\varvec{0} , o) \in V(\mathbb {Z}_2 \wr G)\), we need to visit all the “on-lamp vertices”. So,

$$\begin{aligned} \sum _{v \in V(G)} \eta (v) \le d_{ \mathbb {Z}_2 \wr G } \left( (\varvec{0} , o), (\eta , x) \right) \le \min \left\{ |\gamma | \mid \gamma \text { is a path from } o \text { to } x \text { with } {\text {Supp}}\, \eta \subset V(\gamma ) \right\} + \sum _{v \in V(G)} \eta (v). \end{aligned}$$
(5.1)

We apply the above observation to the lamplighter walk \(\{ Y_n = (\eta _n , X_n) \}_{n \ge 0}\) on \(\mathbb {Z}_2 \wr G\). Note that the lamp at a vertex z of G cannot be changed without the lamplighter visiting z. From this and (5.1), we see that \(d_{ \mathbb {Z}_2 \wr G } (Y_0 , Y_n ) \) is closely related to the range of the random walk \(\{ X_n \}_{n \ge 0}\) on G. Set \(R_n = \sharp \{ X_0 , X_1 , \cdots , X_n \}\). Intuitively, \(\sum _{i \in V(G) } \eta _n (i) \) is close to \(\frac{1}{2} R_n\). Indeed, we will show the following in Propositions 5.1 and 5.2:

$$\begin{aligned}&\frac{1}{4} R_n \le d_{ \mathbb {Z}_2 \wr G } ( Y_0 ,Y_n ) \quad \text {for all but finitely many }n, \quad a.s., \end{aligned}$$
(5.2)
$$\begin{aligned}&d_{ \mathbb {Z}_2 \wr G } ( Y_0 ,Y_n ) \le (2M+1) R_n, \qquad \text {for all }n, \end{aligned}$$
(5.3)

where M is defined by (2.1). We will prove (5.2) and (5.3) in Sect. 5.1. The behavior of \(R_n\) differs for \(d_s /2< 1\) and \(d_s /2 > 1\). In Sect. 5.2 (resp. 5.3), we prove the LILs of \(R_n\) and \(d_{ \mathbb {Z}_2 \wr G } (Y_0 , Y_n ) \) for \(d_s /2 <1\) (resp. for \(d_s/2 >1\)).
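As an illustration of this heuristic (not used in the proofs), one can simulate the switch-walk-switch walk directly. The following minimal Python sketch assumes the base graph \(\mathbb {Z}\), chosen only as a convenient example:

```python
import random

random.seed(1)

def on_lamp_fraction(n, trials=2000):
    """Average of S_n / R_n for the switch-walk-switch walk on Z_2 wr Z,
    started from all lamps off (S_n = number of lamps on at time n)."""
    acc = 0.0
    for _ in range(trials):
        lamps, x, visited = {}, 0, {0}
        for _ in range(n):
            lamps[x] = random.randint(0, 1)   # switch at the current site
            x += random.choice((-1, 1))       # walk one step
            visited.add(x)
            lamps[x] = random.randint(0, 1)   # switch at the new site
        acc += sum(lamps.values()) / len(visited)
    return acc / trials

frac = on_lamp_fraction(500)
print(frac)   # close to 1/2
```

Each visited lamp was last switched by an independent fair coin, so on average half of the \(R_n\) visited lamps are on.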

5.1 Relations Between the Distance and the Range

The main goal of this subsection is to prove (5.2) and (5.3).

Proposition 5.1

Suppose that Assumption 2.2 (1), (2) and (3) hold. Then, the following holds:

$$\begin{aligned} \frac{1}{4} R_n \le \sum _{i \in \{ X_0, X_1, \ldots , X_n \} } \eta _n (i) ~~ \text {for all but finitely many}\,n, ~~ {P_{(\varvec{0},x)}\text {-a.s., for all }x \in V(G).} \end{aligned}$$

Proof

We fix \(x \in V(G)\) and write P instead of \(P_{(\varvec{0},x)}\). Define \(S_n = \sum _{i \in \{ X_0, X_1, \cdots , X_n \}} \eta _n (i) \). It is easy to see that

$$\begin{aligned} P (S_n = l \mid R_n = k) = \left( \frac{1}{2} \right) ^{k} \begin{pmatrix} k \\ l \end{pmatrix} \end{aligned}$$

for \(0 \le l \le k\). Then, we have

$$\begin{aligned} P \left( S_n \le \frac{1}{4} R_n \right)&= \sum _{l=0}^n P \left( S_n \le \frac{1}{4} l , R_n = l \right) = \sum _{l=0}^n \sum _{m=0}^{ \left[ \frac{1}{4} l \right] } \left( \frac{1}{2} \right) ^{l} \begin{pmatrix} l \\ m \end{pmatrix} P(R_n = l) \\&\le \sum _{l=0}^n \exp \left( - \frac{1}{16} l \right) P(R_n = l) \qquad \text {by the Chernoff bound}\\&\le P \left( R_n \le n^{\frac{1}{2d_w}} \right) + \exp \left( -\frac{1}{16} n^{\frac{1}{2d_w}} \right) P(R_n \ge n^{\frac{1}{2d_w}} ) \\&\le P \left( \sup _{0 \le l \le n } d(X_0 , X_{l} ) \le n^{\frac{1}{2d_w}} \right) + \exp \left( - \frac{1}{16} n^{\frac{1}{2d_w}} \right) \\&\le c_1 \exp ( -c_2 n^{ \frac{1}{2}} ) + \exp \left( -\frac{1}{16} n^{\frac{1}{2d_w}} \right) , \qquad \text { by Lemma~3.2.} \end{aligned}$$

Using the Borel–Cantelli lemma, we complete the proof. \(\square \)
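The elementary binomial tail bound invoked above, \(\sum _{m=0}^{[l/4]} 2^{-l} \binom{l}{m} \le e^{-l/16}\), can also be checked by direct computation; a minimal Python sketch:

```python
import math

def tail(l):
    """Exact P(Bin(l, 1/2) <= l/4), the probability bounded in the proof."""
    return sum(math.comb(l, m) for m in range(l // 4 + 1)) / 2 ** l

# The proof uses P(Bin(l, 1/2) <= l/4) <= exp(-l/16); verify for many l.
ok = all(tail(l) <= math.exp(-l / 16) for l in range(1, 301))
print(ok)
```

For large l the bound also follows from Hoeffding's inequality, which gives the stronger rate \(e^{-l/8}\).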

Proposition 5.2

Let \((G, \mu )\) be a weighted graph such that G is of bounded degree, and set \(M = \sup _{v \in V(G) } \deg v (<\infty )\). Then, for every realization of \(\{ Y_n \}_{n =0}^{\infty }\) and all \(n \ge 0\),

$$\begin{aligned} d_{ \mathbb {Z}_2 \wr G} (Y_0 , Y_n ) \le (2M+1) R_n. \end{aligned}$$

To prove the above proposition, we need the following lemma. To exclude ambiguity, we first introduce some terminology. Let H be a connected subgraph of G.

  • A path \(\gamma \) in H is a sequence of vertices \(v_0 v_1 \ldots v_k\) such that \(v_i \in V(H)\) and \(v_i v_{i+1} \in E(H)\) for all i. For a path \(\gamma \), we set \(V(\gamma ) = \{ v_0 , v_1 , \ldots , v_k \}\) and define the length of \(\gamma \) as \( |\gamma | = k\). The edges of \(\gamma \) are \(e_j = v_j v_{j+1}\) \(( j=0,1, \ldots , k-1)\).

  • For the path \(\gamma \) given above and a given edge \(e \in E(G)\), we define \(F(\gamma , e) = \{ l \in \{ 0,1,\ldots ,k-1 \} \mid e_{l} = e \}\).

  • We write \(\overrightarrow{e} = \overrightarrow{uv}\) if the edge e is directed from u to v, and call \(\overrightarrow{e}\) a directed edge. Two directed edges \(\overrightarrow{e_1} = \overrightarrow{u_1 v_1}\) and \(\overrightarrow{e_2} = \overrightarrow{u_2 v_2}\) are equal if and only if \(u_1 = u_2\) and \(v_1 = v_2\).

Lemma 5.3

Let G be a graph of bounded degree, \(M = \sup _{v \in V(G) } \deg v (<\infty )\) and H be a finite connected subgraph of G.

  1. (1)

    Let \(x,y \in V(H)\). Then, there exists a path \(\gamma = w_0 w_1 \cdots w_k\) in H such that the following hold.

    1. (a)

      \(\{ w_0, w_1, \ldots , w_k \} = V(H)\).

    2. (b)

      Define \(\tilde{e}_j = w_j w_{j+1}\) for \(j=0,1, \cdots , k-1\). Then, \(\sharp \left\{ j \in \{ 0,1,\ldots , k-1 \}\right. \left. \mid \tilde{e}_j = \tilde{e}_s \right\} \le 2\) for all \(s=0,1,\cdots , k-1\).

    3. (c)

      \(w_0 = x\) and \(w_k = y\).

  2. (2)

    Let \(\gamma \) be as in (1). Then, \(|\gamma | \le 2 M \sharp V(H)\).

Proof

(1) Take a path \( \eta = u_0 u_1 \cdots u_n \) in H such that \(u_0 = x\), \(u_n = y\) and \(\{ u_0 , u_1 , \cdots , u_n \} = V(H)\). Define \(f_j = u_j u_{j+1}\). If every edge \(f_j \) satisfies \(\sharp \{ l \in \{ 0,1,\cdots , n-1 \} \mid f_l = f_j \} \le 2\) for \(j=0,1, \ldots , n-1\), then \(\eta \) satisfies the conditions (a), (b) and (c). So, we may assume that there exists \(f_j\) such that \(\sharp \{ l \in \{ 0,1,\cdots , n-1 \} \mid f_l = f_j \} \ge 3\). For such an edge \(f_j\), there exist at least two elements \(s,t \in F(\eta , f_j)\), say with \(s<t\), such that \(\overrightarrow{u_s u_{s+1}} = \overrightarrow{u_t u_{t+1}}\). Define \(\eta _{st} = u_s u_{s+1} u_{s+2} \ldots u_{t-1} u_t \) (see Fig. 3), and replace \(\eta = u_0 \ldots u_{s-1} \eta _{st} u_{t+1} \ldots u_n \) by \(\tilde{ \eta } = u_0 \ldots u_{s-1} \tilde{ \eta }_{st} u_{t+1} \ldots u_n \), where \(\tilde{\eta }_{st} = u_s u_{t-1} u_{t-2} \ldots u_{s+2} \). Then \(\tilde{ \eta } \) is again a path with \(V(\tilde{\eta } ) = V(H)\) and \(\sharp F( \tilde{ \eta }, f_j) =\sharp F(\eta , f_j) - 2\) (see Fig. 4). Repeating this operation for \(f_0, f_1, \ldots , f_{n-1}\) inductively, we obtain a path satisfying (a), (b) and (c).

(2) By (b), each vertex \(w_j \in V(H)\) appears in the path \(\gamma \) at most \(2 \deg (w_j)\) times, so \(|\gamma | \le \sum _{w \in V(H)} 2 \deg w \le 2M \sharp V(H)\). \(\square \)

Fig. 3: An example of the path \(\eta \)

Fig. 4: An example of the surgery of \(\eta \)

Remark 5.4

While the revision of this paper was under review, Pierre Mathieu pointed out to us a simpler proof of this lemma (and therefore a simpler proof of Proposition 5.2). Take a spanning tree of H and let \(\gamma '\) be an exploration path of the spanning tree from x to x (i.e., a path that crosses each edge of the tree exactly twice). Then, one can produce a desired path \(\gamma \) by an easy modification of \(\gamma '\).
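The exploration-path construction in Remark 5.4 is easy to implement. The following minimal Python sketch (with a toy graph chosen purely for illustration) uses depth-first search to produce a closed path that visits every vertex and crosses each spanning-tree edge exactly twice:

```python
def exploration_path(adj, root):
    """Depth-first exploration path of a spanning tree of the graph `adj`
    (dict: vertex -> neighbour list): the path starts and ends at `root`,
    visits every vertex, and crosses each spanning-tree edge exactly twice."""
    path, visited = [root], {root}

    def dfs(v):
        for w in adj[v]:
            if w not in visited:
                visited.add(w)
                path.append(w)   # go down the tree edge vw ...
                dfs(w)
                path.append(v)   # ... and come back, crossing vw again
    dfs(root)
    return path

# A toy graph, assumed only for illustration: a 4-cycle with a pendant vertex.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0, 4], 4: [3]}
gamma = exploration_path(adj, 0)
print(gamma)   # length 2 * (number of vertices - 1) = 8
```

Each edge of the resulting path is used exactly twice, so \(|\gamma | \le 2 (\sharp V(H) - 1) \le 2M \sharp V(H)\), in line with Lemma 5.3(2).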

Proof of Proposition 5.2

The graph \(G^{\prime } = (V(G^{\prime }), E(G^{\prime }))\), where \(V(G^{\prime }) = \{ X_0 , X_1 , \ldots , X_n \}\) and \(E(G^{\prime }) = \{ X_i X_{i+1} \in E(G) \mid 0 \le i \le n-1 \}\), is itself a connected subgraph of G. So, applying Lemma 5.3 to \(G^{\prime }\), we have

$$\begin{aligned} \min \{ | \gamma | \mid \gamma \text { is a path from } X_0 \text { to } X_n \text { with } V(\gamma ) = \{ X_0 , X_1 , \ldots , X_n \} \} \le 2 M R_n . \end{aligned}$$

By this and (5.1), we obtain \(d_{\mathbb {Z}_2 \wr G} (Y_0 , Y_n) \le (2M + 1) R_n\). \(\square \)

5.2 Proof of Theorem 2.4(I)

In this subsection, we prove the LILs for \(\{ Y_n \}_{n \ge 0}\) when \(d_s/2 < 1 \).

Theorem 5.5

Assume that Assumption 2.2 (1), (2), (4) and \(d_s/2 < 1\) hold. Then, there exist (non-random) constants \(c_1 , c_2>0\) such that the following hold:

$$\begin{aligned}&\limsup _{n \rightarrow \infty } \frac{R_n}{ n^{d_s/2} (\log \log n)^{1-d_s/2} } = c_1, \qquad { P_x \text {-a.s.} \,\, \forall x \in V(G)}, \\&\liminf _{n \rightarrow \infty } \frac{R_n}{ n^{d_s/2} (\log \log n)^{-d_s/2} } = c_2, \qquad {P_x \text {-a.s.} \,\, \forall x \in V(G)} . \end{aligned}$$

This is a discrete analog of [5, Propositions 4.9 and 4.10]. Note that the proof of these propositions relies on the self-similarity of the process. Since our random walk does not satisfy this property, we need non-trivial modifications for the proof. Quite recently, Kim et al. [12, Theorem 4.14] proved the LIL of the range for jump processes without using self-similarity of the process. By easy modifications, we can apply their argument to our random walk. The proof of Theorem 5.5 will be given in Appendix 1.

Proof of Theorem 2.4(I)

Note that Assumption 2.2 (1) implies that G is of bounded degree. Thus, by (5.1), Propositions 5.1, 5.2 and Theorem 5.5, we obtain (2.8) and (2.9). \(\square \)

5.3 Proof of Theorem 2.4(II)

In this subsection, we prove the LILs for \(\{ Y_n \}_{n \ge 0}\) when \(d_s/2 >1\).

First, we explain the notion of “uniform condition” defined in [14]. We define the Dirichlet form \(\mathcal {E}\) on the weighted graph \((G, \mu )\) by

$$\begin{aligned}&\mathcal {E} (f,f) = \sum _{x,y \in V(G) } ( f(x) - f(y) )^2 \mu _{xy}, \end{aligned}$$

for \(f: V(G) \rightarrow \mathbb {R}\), and the effective resistance \(R_{ \text {eff} } (\cdot ,\cdot ) \) as

$$\begin{aligned}&R_{ \text {eff}} (A,B)^{-1} = \inf \{ \mathcal {E} (f,f) ; f|A = 1, f|B = 0 \} \end{aligned}$$
(5.4)

for \(A,B \subset V(G)\) with \(A \cap B = \emptyset \). Denote \(\rho (x,n) = R_{ \text {eff}} ( \{ x \} , B(x,n)^c )\) for any \(x \in V(G), n \in \mathbb {N}\) and \(\rho (x) = \lim _{n \rightarrow \infty } \rho (x,n)\).

Definition 5.6

(Okamura [14]) We say that a weighted graph \((G,\mu )\) satisfies the uniform condition (U) if \(\rho (x,n)\) converges uniformly in \(x \in V(G)\) to \(\rho (x)\) as \(n \rightarrow \infty \).

For \(A \subset V(G)\), define

$$\begin{aligned} T_A^{+}&= \inf \{ n \ge 1 \mid X_n \in A \} . \end{aligned}$$

We write \(T_x^+\) instead of \(T_{ \{ x \} }^+\).

The following is an improvement of [14, Corollary 2.3].

Proposition 5.7

Let \((G, \mu )\) be a weighted graph satisfying (U) and (2.7) and assume that G is of bounded degree. If \(\sup _{x \in V(G)} P_x (M< T_x^{+} < \infty ) = O (M^{-\delta })\) for some \(\delta >0\), then

$$\begin{aligned} 1-F_2 \le \liminf _{n \rightarrow \infty } \frac{R_n}{n} \le \limsup _{n \rightarrow \infty } \frac{R_n}{n} \le 1 - F_1 , \quad P_x\text {-a.s.} \end{aligned}$$

for all \(x \in V(G)\), where \(F_1 = \inf _{x \in V(G)} P_x ( T_x^{+} < \infty )\) and \(F_2 = \sup _{x \in V(G)} P_x (T_x^+ < \infty )\).

Remark 5.8

In [14, Corollary 2.3], a stronger condition \(\sup _x P_x (M< T_x^+ < \infty ) = O(M^{-1-\delta } )\) for some \(\delta >0\) is imposed to prove \(1-F_2 \le \liminf _{n \rightarrow \infty } \frac{R_n}{n} \). As we prove below, it is enough to assume \(\sup _x P_x (M< T_x^+ < \infty ) = O(M^{-\delta } )\).

Proof of Proposition 5.7

For the upper bound \(\limsup _{n \rightarrow \infty } \frac{R_n}{n} \le 1 - F_1\), [14, Proof of Corollary 2.3] goes through without any modifications.

Hence, we prove \( 1-F_2 \le \liminf _{n \rightarrow \infty } \frac{R_n}{n} \) under our assumption. Fix \(\epsilon >0\). By [14, (2.5), (2.6), (2.7)] there exists \(a \in (0,1)\) such that for any n and M we have

$$\begin{aligned} P_x \left( \frac{R_n}{n} \le 1-F_2 - \epsilon \right) \le \frac{2}{\epsilon } \sup _{x \in V(G)} P_x (M< T_x^+ < \infty ) + (M+1) a^{n/(M+1)}. \nonumber \\ \end{aligned}$$
(5.5)

Choose \(k > 2/\delta \). Replacing n by \(n^k\) in (5.5), we have

$$\begin{aligned} P_x \left( \frac{R_{n^k}}{n^k} \le 1-F_2 - \epsilon \right)\le & {} \frac{2}{\epsilon } \sup _{x \in V(G)} P_x (M< T_x^+ < \infty ) + (M+1) a^{n^{k}/(M+1)} \\= & {} \frac{2}{\epsilon } O(M^{-\delta } ) + (M+1)a^{n^{k}/(M+1)}. \end{aligned}$$

Taking \(M=M (n) = n^{k/2} -1\), we have

$$\begin{aligned} P_x \left( \frac{R_{n^k}}{n^k} \le 1-F_2 - \epsilon \right) \le \frac{2}{\epsilon } O \left( \frac{1}{n^{k\delta /2} } \right) +n^{k/2} a^{n^{k/2} }. \end{aligned}$$

Since \(k\delta /2 >1\), the Borel–Cantelli lemma yields

$$\begin{aligned} 1 - F_2 \le \liminf _{n \rightarrow \infty } \frac{R_{n^k}}{n^k}, \qquad P_x\text {-a.s.} \end{aligned}$$

For any m, choose n such that \(n^k \le m < (n+1)^{k}\); then we have

$$\begin{aligned} \frac{R_m}{m} \ge \frac{n^k}{m} \frac{R_{n^k}}{n^k} = \left( \frac{n}{n+1} \right) ^k \frac{(n+1)^k}{m} \frac{R_{n^k}}{n^k} \ge \left( \frac{n}{n+1} \right) ^k \frac{R_{n^k}}{n^k}. \end{aligned}$$

Taking \(\liminf _{m \rightarrow \infty }\), we obtain \(1-F_2 \le \liminf _{n \rightarrow \infty } \frac{R_n}{n}\). \(\square \)
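As a sanity check of this linear growth of the range (not part of the proof), one can estimate \(R_n / n\) by Monte Carlo on a concrete transient graph. The following minimal Python sketch assumes the simple random walk on \(\mathbb {Z}^3\), for which \(F_1 = F_2 = F \approx 0.34\) (Pólya's return probability), so \(R_n/n\) should approach \(1 - F \approx 0.66\):

```python
import random

random.seed(2)

def avg_range_fraction(n, trials=200):
    """Monte Carlo average of R_n / n for simple random walk on Z^3."""
    steps = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    acc = 0.0
    for _ in range(trials):
        x = (0, 0, 0)
        visited = {x}
        for _ in range(n):
            dx = random.choice(steps)
            x = (x[0] + dx[0], x[1] + dx[1], x[2] + dx[2])
            visited.add(x)
        acc += len(visited) / n
    return acc / trials

avg_frac = avg_range_fraction(5000)
print(avg_frac)   # approaches 1 - F ~ 0.66 as n grows
```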

Proof of Theorem 2.4(II)

Note that the uniform condition (U) is satisfied in our framework by [14, Proposition 4.6].

Since \(d_s/ 2 > 1\), we have

$$\begin{aligned} P_x (M< T_x^+ < \infty ) \le \sum _{n=M+1}^{\infty } p_n (x,x) \le c\sum _{n=M+1}^{\infty } n^{-d_s/2} = O(M^{1-d_s/2}). \end{aligned}$$
(5.6)

Note that Assumption 2.2 (1), (2) imply (2.1) and (2.7). By (5.6) and Proposition 5.7, we have

$$\begin{aligned} 1-F_2 \le \liminf _{n \rightarrow \infty } \frac{R_n}{n} \le \limsup _{n \rightarrow \infty } \frac{R_n}{n} \le 1 - F_1 , \qquad P_x\text {-a.s.} \end{aligned}$$
(5.7)

Define \(G(x,x) = \sum _{n=0}^{\infty } p_n (x,x)\), and \(F(x,x) = \sum _{n=1}^{\infty } P_x (T_x^{+} = n) = P_x ( T_x^{+} < \infty )\). It is well known that

$$\begin{aligned} G(x,x) -1 = F(x,x) G(x,x). \end{aligned}$$
(5.8)

Since \(d_s/2 >1\), we have

$$\begin{aligned} \sup _{ x \in V(G) } \sum _{n = 0}^{\infty } p_n (x,x) \le c \sum _{n=1}^{\infty } \frac{1}{n^{d_s / 2}} < \infty . \end{aligned}$$

By this and (5.8), we have

$$\begin{aligned} F_2 = \sup _x F(x,x) < 1. \end{aligned}$$
(5.9)

Thus, by (5.1), Propositions 5.1, 5.2, (5.7) and (5.9), we conclude that for all \(\varvec{x} \in V(\mathbb {Z}_2 \wr G)\),

$$\begin{aligned} 0< \frac{1}{4} (1-F_2)\le & {} \liminf _{n \rightarrow \infty } \frac{ d_{\mathbb {Z}_2 \wr G} (Y_0, Y_n) }{n} \\\le & {} \limsup _{n \rightarrow \infty } \frac{ d_{\mathbb {Z}_2 \wr G} (Y_0,Y_n )}{n} \le (2M+1) (1-F_1)< \infty , \qquad P_{\varvec{x}}\text {-a.s.} \end{aligned}$$

Hence, we complete the proof. \(\square \)
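As an aside, the renewal identity (5.8) used above can be verified on a concrete transient example. The following minimal Python sketch assumes the biased simple walk on \(\mathbb {Z}\) with \(p = 0.7\) (a chain chosen only for this illustration, not the walk studied in this paper), for which \(F(0,0) = 1 - |p-q| = 2q\) and \(G(0,0) = \sum _n \binom{2n}{n} (pq)^n = 1/\sqrt{1-4pq}\):

```python
# Biased simple walk on Z with p = 0.7, q = 0.3: an assumption chosen only
# to illustrate the renewal identity G - 1 = F * G from (5.8).
p, q = 0.7, 0.3

# Green function G(0,0) = sum_n p_n(0,0); only even times contribute, with
# p_{2n}(0,0) = C(2n, n) (pq)^n.  Accumulate terms via the ratio
# C(2n, n) / C(2n-2, n-1) = (2n)(2n-1) / n^2 to avoid huge integers.
G, term = 1.0, 1.0
for n in range(1, 2000):
    term *= p * q * (2 * n) * (2 * n - 1) / (n * n)
    G += term

F = 2 * q   # return probability P_0(T_0^+ < infinity) = 1 - |p - q| for p > q
print(G, F)   # G = 1 / sqrt(1 - 4pq) = 2.5, and G - 1 = F * G = 1.5
```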