1 Introduction

Say that a graph G has uniform polynomial growth of degree d if the cardinality of every ball of radius r in the graph metric lies between \(c r^d\) and \(C r^d\) for two absolute constants \(C> c > 0\), for every \(r \geqslant 1\). Say that a graph has nearly-uniform polynomial growth of degree d if the cardinality of balls is trapped between \((\log r)^{-C} r^d\) and \((\log r)^C r^d\) for some universal constant \(C \geqslant 1\).

Planar graphs of uniform (or nearly-uniform) polynomial volume growth of degree \(d > 2\) arise in a number of contexts. In particular, they appear in the study of random triangulations in 2D quantum gravity [1] and as combinatorial approximations to the boundaries of 3-dimensional hyperbolic groups in geometric group theory (see, e.g., [8]).

When the dimension of volume growth disagrees with the topological dimension, one sometimes witnesses certain geometrically or spectrally degenerate behaviors. For instance, it is known that random planar triangulations of the 2-sphere have nearly-uniform polynomial volume growth of degree 4 (in an appropriate statistical, asymptotic sense) [3]. The distributional limit (see Section 1.1.1) of such graphs is called the uniform infinite planar triangulation (UIPT). But this 4-dimensional volume growth does not come with 4-dimensional isoperimetry: With high probability, a ball in the UIPT of radius r about a vertex v can be separated from the complement of a 2r ball about v by removing a set of size O(r). And, indeed, Benjamini and Papasoglu [9] showed that this phenomenon holds generally: such annular separators of size O(r) exist in all planar graphs with uniform polynomial volume growth.

Similarly, it is known that diffusion on the UIPT is anomalous. Specifically, the random walk on the UIPT is almost surely subdiffusive. In other words, if \(\{X_t\}\) is the random walk and \(d_G\) denotes the graph metric, then \({{\,\mathrm{{\mathbb {E}}}\,}}d_G(X_0,X_t) \leqslant t^{1/2-\varepsilon }\) for some \(\varepsilon > 0\). This was established by Benjamini and Curien [6]. In [14], it is shown that on any unimodular random planar graph with nearly-uniform polynomial growth of degree \(d > 3\) (in a suitable statistical sense), the random walk is subdiffusive. So again, a disagreement between the dimension of volume growth and the topological dimension results in a degeneracy typical in the geometry of fractals (see, e.g., [4]).

Finally, consider a seminal result of Benjamini and Schramm [10]: If \((G,\rho )\) is the local distributional limit of a sequence of finite planar graphs with uniformly bounded degrees, then \((G,\rho )\) is almost surely recurrent. In this sense, any such limit is spectrally (at most) two-dimensional. This was extended by Gurel-Gurevich and Nachmias [12] to unimodular random graphs with an exponential tail on the degree of the root, making it applicable to the UIPT. Benjamini [7] has conjectured that this holds for every planar graph with uniform polynomial volume growth. We construct a family of counterexamples. Our focus on rational degrees of growth is largely for simplicity; suitable variants of our construction should yield similar results for all real \(d > 2\) (see Remark 3.6).

Theorem 1.1

For every rational \(d > 2\), there is a transient planar graph with uniform polynomial growth of degree d.

Conversely, it is well-known that any graph with growth rate \(d \leqslant 2\) is recurrent. The examples underlying Theorem 1.1 cannot be unimodular. Nevertheless, we construct unimodular examples addressing some of the issues raised above. Angel and Nachmias (unpublished) showed the existence, for every \(\varepsilon > 0\) sufficiently small, of a unimodular random planar graph \((G,\rho )\) on which the random walk is almost surely diffusive, and which almost surely satisfies

$$\begin{aligned} \lim _{r \rightarrow \infty } \frac{\log |B_G(\rho ,r)|}{\log r} = 3 - \varepsilon . \end{aligned}$$

Here, \(B_G(\rho ,r)\) is the graph ball around \(\rho \) of radius r. In other words, r-balls have an asymptotic growth rate of \(r^{3-\varepsilon }\) as \(r \rightarrow \infty \).

The authors of [9] asked whether in planar graphs with uniform growth of degree \(d \geqslant 2\), the speed of the walk should be at most \(t^{1/d+o(1)}\). We recall the following weaker theorem.

Theorem 1.2

([14]). Suppose \((G,\rho )\) is a unimodular random planar graph and G almost surely has uniform polynomial growth of degree d. Then:

$$\begin{aligned} {{\,\mathrm{{\mathbb {E}}}\,}}\left[ d_G(X_0,X_t) \mid X_0 = \rho \right] \lesssim t^{1/\max (2,d-1)}. \end{aligned}$$

We construct examples where this dependence is nearly tight.

Theorem 1.3

For every rational \(d \geqslant 2\) and \(\varepsilon > 0\), there is a constant \(c(\varepsilon ) > 0\) and a unimodular random planar graph \((G,\rho )\) such that G almost surely has uniform polynomial growth of degree d, and

$$\begin{aligned} {{\,\mathrm{{\mathbb {E}}}\,}}\left[ d_G(X_0,X_t) \mid X_0 = \rho \right] \geqslant c(\varepsilon ) t^{1/\left( \max (2,d-1)+\varepsilon \right) }. \end{aligned}$$

Finally, let us address another question from [9]. In conjunction with the existence of small annular separators, the authors asked whether a planar graph with uniform polynomial growth of degree \(d > 2\) can be such that the complement of every ball is connected. By contrast, in the UIPT there are “baby universes” connected to the rest of the graph via a thin neck, which can be cut off by removing a small graph ball.

Theorem 1.4

For every rational \(d \geqslant 2\), there is a unimodular random planar graph \((G,\rho )\) such that almost surely:

  1. G has uniform polynomial growth of degree d.

  2. The complement of every graph ball in G is connected.

Annular resistances. Our unimodular constructions have the property that the “Einstein relations” (see, e.g., [4]) for various dimensional exponents do not hold. In particular, this implies that the graphs we construct are not strongly recurrent (see, e.g., [13]). Indeed, the effective resistance across annuli can be made very small (see Section 2.3 for the definition of effective resistance).

Theorem 1.5

For every \(\varepsilon > 0\) and \(d \geqslant 3\), there is a unimodular random planar graph \((G,\rho )\) that almost surely has uniform polynomial volume growth of degree d and, moreover, almost surely satisfies

$$\begin{aligned} {\mathsf {R}}_{\mathrm {eff}}^G\left( B_G(\rho ,R) \leftrightarrow V(G) \setminus B_G(\rho ,2R)\right) \leqslant C(\varepsilon ) R^{-(1-\varepsilon )}, \quad \forall R \geqslant 1, \end{aligned}$$
(1.1)

where \(C(\varepsilon ) \geqslant 1\) is a constant depending only on \(\varepsilon \).

Note that the existence of annular separators of size O(R) mentioned previously gives \({\mathsf {R}}_{\mathrm {eff}}^G\left( B_G(\rho ,R) \leftrightarrow V(G) \setminus B_G(\rho ,2R)\right) \gtrsim R^{-1}\) by the Nash-Williams inequality. Moreover, recall that since the graph \((G,\rho )\) from Theorem 1.5 is unimodular and planar, it must be almost surely recurrent (cf. [10]). Therefore the electrical flow witnessing (1.1) cannot spread out “isotropically” from \(B_G(\rho ,R)\) to \(B_G(\rho ,2R)\). Indeed, if one were able to send a flow roughly uniformly from \(B_G(\rho ,2^i)\) to \(B_G(\rho ,2^{i+1})\), then these electrical flows would chain to give

$$\begin{aligned} {\mathsf {R}}_{\mathrm {eff}}^G\left( \rho \leftrightarrow V(G) \setminus B_G(\rho ,2^i)\right) \lesssim \sum _{j \leqslant i} 2^{-(1-\varepsilon )j}, \end{aligned}$$

and taking \(i \rightarrow \infty \) would show that G is transient.

One formalization of this fact is that the graphs in Theorem 1.5 (almost surely) do not satisfy an elliptic Harnack inequality. These graphs are almost surely one-ended, and one can easily pass to a quasi-isometric triangulation that admits a circle packing whose carrier is the entire plane \(\mathbb R^2\). By a result of Murugan [16], this implies that the graph metric \((V(G),d_G)\) on the graphs in Theorem 1.5 is not quasisymmetric to the Euclidean metric induced on the vertices by any such circle packing. (This can also be proved directly from (1.1).)

We remark on one other interesting feature of Theorem 1.5. Suppose that \(\Gamma \) is a Gromov hyperbolic group whose visual boundary \(\partial _{\infty } \Gamma \) is homeomorphic to the 2-sphere \(\mathbb S^2\). The authors of [8] construct a family \(\{G_n : n \geqslant 1\}\) of discrete approximations to \(\partial _{\infty } \Gamma \) such that each \(G_n\) is a planar graph and the family \(\{G_n\}\) has uniform polynomial volume growth. They show that if there is a constant \(c > 0\) so that the annuli in \(G_n\) satisfy uniform effective resistance estimates of the form

$$\begin{aligned} {\mathsf {R}}_{\mathrm {eff}}^{G_n}\left( B_{G_n}(x,R) \leftrightarrow V(G_n) \setminus B_{G_n}(x,2R)\right) \geqslant c, \quad \forall \, 1 \leqslant R \leqslant {\mathrm {diam}}(G_n)/10,\ x \in V(G_n),\ n \geqslant 1, \end{aligned}$$

then \(\partial _{\infty } \Gamma \) is quasisymmetric to \(\mathbb S^2\) (cf. [8, Thm. 11.1]).

In particular, if it were to hold that for any (infinite) planar graph G with uniform polynomial growth we have

$$\begin{aligned} {\mathsf {R}}_{\mathrm {eff}}^{G}\left( B_{G}(x,R) \leftrightarrow V(G) \setminus B_{G}(x,2R)\right) \geqslant c > 0, \quad \forall R \geqslant 1, x \in V(G), \end{aligned}$$

then it would give a positive resolution of Cannon’s conjecture from geometric group theory. Theorem 1.5 exhibits graphs for which this fails in essentially the strongest way possible.

1.1 Preliminaries

We will consider primarily connected, undirected graphs \(G=(V,E)\), which we equip with the associated path metric \(d_G\). We will sometimes write V(G) and E(G), respectively, for the vertex and edge sets of G. If \(U \subseteq V(G)\), we write G[U] for the subgraph induced on U.

For \(v \in V\), let \(\deg _G(v)\) denote the degree of v in G. Let \({\mathrm {diam}}(G) {:}{=}\sup _{x,y \in V} d_G(x,y)\) denote the diameter (which is only finite for G finite). For \(v \in V\) and \(r \geqslant 0\), we use \(B_G(v,r) = \{ u \in V : d_G(u,v) \leqslant r\}\) to denote the closed ball in G. For subsets \(S,T \subseteq V\), we write \(d_G(S,T) {:}{=}\inf \{ d_G(s,t) : s \in S, t \in T\}\).

Say that an infinite graph G has uniform volume growth of rate f(r) if there exist constants \(C,c > 0\) such that

$$\begin{aligned} c f(r) \leqslant |B_G(v,r)| \leqslant C f(r)\qquad \forall v \in V, r \geqslant 1. \end{aligned}$$

A graph has uniform polynomial growth of degree d if it has uniform volume growth of rate \(f(r) = r^d\), and has uniform polynomial growth if this holds for some \(d > 0\).
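As a concrete illustration (not from the paper), the grid \(\mathbb Z^2\) has uniform polynomial growth of degree 2: its closed balls satisfy \(|B(v,r)| = 2r^2+2r+1\), so one may take \(f(r) = r^2\) with \(c = 1\) and \(C = 5\). A minimal sketch checking this by breadth-first search:

```python
from collections import deque

def ball_size_grid(r):
    """Size of the closed graph ball B(0, r) in the grid Z^2, via BFS."""
    seen = {(0, 0)}
    frontier = deque([((0, 0), 0)])
    while frontier:
        (x, y), d = frontier.popleft()
        if d == r:
            continue  # do not expand past radius r
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return len(seen)

# Z^2 has |B(v, r)| = 2r^2 + 2r + 1, hence r^2 <= |B(v, r)| <= 5 r^2 for r >= 1.
for r in range(1, 30):
    b = ball_size_grid(r)
    assert b == 2 * r * r + 2 * r + 1
    assert r ** 2 <= b <= 5 * r ** 2
```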

For two expressions A and B, we use the notation \(A \lesssim B\) to denote that \(A \leqslant C B\) for some universal constant C. The notation \(A \lesssim _{\gamma } B\) denotes that \(A \leqslant C(\gamma ) B\) where \(C(\gamma )\) is a number depending only on the parameter \(\gamma \). We write \(A \asymp B\) for the conjunction \(A \lesssim B \wedge B \lesssim A\).

1.1.1 Distributional limits of graphs

We briefly review the weak local topology on random rooted graphs; one may consult the extensive reference of Aldous and Lyons [2], and [5] for the corresponding theory of reversible random graphs. The paper [10] offers a concise introduction to distributional limits of finite planar graphs.

Let \(\mathcal G\) denote the set of isomorphism classes of connected, locally finite graphs; let \(\mathcal G_{\bullet }\) denote the set of rooted isomorphism classes of rooted, connected, locally finite graphs. Define a metric on \(\mathcal G_{\bullet }\) as follows: \(\mathbb {d}_{\mathrm {loc}}\left( (G_1,\rho _1), (G_2,\rho _2)\right) = 1/(1+\alpha )\), where

$$\begin{aligned} \alpha = \sup \left\{ r > 0 : B_{G_1}(\rho _1, r) \cong _{\rho } B_{G_2}(\rho _2, r) \right\} \,, \end{aligned}$$

and we use \(\cong _{\rho }\) to denote rooted isomorphism of graphs. \((\mathcal G_{\bullet }, \mathbb {d}_{\mathrm {loc}})\) is a separable, complete metric space. For probability measures \(\{\mu _n\}, \mu \) on \(\mathcal G_{\bullet }\), write \(\{\mu _n\} \Rightarrow \mu \) when \(\mu _n\) converges weakly to \(\mu \) with respect to \(\mathbb {d}_{\mathrm {loc}}\).

A random rooted graph \((G,X_0)\) is said to be reversible if \((G,X_0,X_1)\) and \((G,X_1,X_0)\) have the same law, where \(X_1\) is a uniformly random neighbor of \(X_0\) in G. A random rooted graph \((G,\rho )\) is said to be unimodular if it satisfies the Mass Transport Principle (see, e.g., [2]). For our purposes, it suffices to note that if \({{\,\mathrm{{\mathbb {E}}}\,}}[\deg _G(\rho )] < \infty \), then \((G,\rho )\) is unimodular if and only if the random rooted graph \(({\tilde{G}},{\tilde{\rho }})\) is reversible, where \(({\tilde{G}},{\tilde{\rho }})\) has the law of \((G,\rho )\) biased by \(\deg _G(\rho )\).

If \(\{(G_n,\rho _n)\} \Rightarrow (G,\rho )\), we say that \((G,\rho )\) is the distributional limit of the sequence \(\{(G_n,\rho _n)\}\), where we have conflated random variables with their laws in the obvious way. Consider a sequence \(\{G_n\} \subseteq \mathcal G\) of finite graphs, and let \(\rho _n\) denote a uniformly random element of \(V(G_n)\). Then \(\{(G_n,\rho _n)\}\) is a sequence of \(\mathcal G_{\bullet }\)-valued random variables, and one has the following: if \(\{(G_n,\rho _n)\} \Rightarrow (G,\rho )\), then \((G,\rho )\) is unimodular. Equivalently, if \(\{(G_n,\rho _n)\}\) is a sequence of connected finite graphs with \(\rho _n \in V(G_n)\) chosen according to the stationary measure of \(G_n\), and \(\{(G_n,\rho _n)\} \Rightarrow (G,\rho )\), then \((G,\rho )\) is a reversible random graph.

2 A transient planar graph of uniform polynomial growth

We begin by constructing a transient planar graph with uniform polynomial growth of degree \(d > 2\). Our construction in this section has \(d = \log _3(12) \approx 2.26\). In Section 3, this construction is generalized to any rational \(d > 2\).

Fig. 1. Tilings and their dual graph

2.1 Tilings and dual graphs

Our constructions are based on planar tilings by rectangles. A tile is an axis-parallel closed rectangle \(A \subseteq \mathbb R^2\). We will encode such a tile as a triple \((p(A), \ell _1(A), \ell _2(A))\), where \(p(A) \in \mathbb R^2\) denotes its bottom-left corner, \(\ell _1(A)\) its width (length of its projection onto the x-axis), and \(\ell _2(A)\) its height (length of its projection onto the y-axis). A tiling \(\varvec{T}\) is a finite collection of interior-disjoint tiles. Denote \([\![ \varvec{T}]\!] {:}{=}\bigcup _{A \in \varvec{T}} A\). If \(R \subseteq \mathbb R^2\), we say that \(\varvec{T}\) is a tiling of R if \([\![ \varvec{T}]\!] = R\). See Fig. 1a for a tiling of the unit square.

We associate to a tiling its dual graph \(G(\varvec{T})\) with vertex set \(\varvec{T}\) and with an edge between two tiles \(A,B \in \varvec{T}\) whenever \(A \cap B\) has Hausdorff dimension one; in other words, whenever A and B are tangent along a segment, and not merely at a corner. See Fig. 1b. Denote by \(\mathcal T\) the set of all tilings of the unit square. For the remainder of the paper, we will consider only tilings \(\varvec{T}\) for which \(G(\varvec{T})\) is connected.

Fig. 2. An example of the tiling product \(\varvec{S} \circ \varvec{T}\)

Definition 2.1

(Tiling product). For \(\varvec{S}, \varvec{T}\in \mathcal T\), define the product \(\varvec{S}\circ \varvec{T}\in \mathcal T\) as the tiling formed by replacing every tile in \(\varvec{S}\) by an (appropriately scaled) copy of \(\varvec{T}\). More precisely: For every \(A \in \varvec{S}\) and \(B \in \varvec{T}\), there is a tile \(R \in \varvec{S}\circ \varvec{T}\) with \(\ell _i(R) {:}{=}\ell _i(A) \ell _i(B)\), and

$$\begin{aligned} p_i(R) {:}{=}p_i(A)+p_i(B) \ell _i(A), \end{aligned}$$

for each \(i \in \{1,2\}\). See Fig. 2.

If \(\varvec{T}\in \mathcal T\) and \(n \geqslant 0\), we will use \(\varvec{T}^n {:}{=}\varvec{T}\circ \cdots \circ \varvec{T}\) to denote the n-fold tile product of \(\varvec{T}\) with itself. The following observation shows that this is well-defined.

Observation 2.2

The tiling product is associative: \((\varvec{S}\circ \varvec{T}) \circ \varvec{U}= \varvec{S}\circ (\varvec{T}\circ \varvec{U})\) for all \(\varvec{S},\varvec{T},\varvec{U}\in \mathcal T\). Moreover, if \(\varvec{I}\in \mathcal T\) consists of the single tile \([0,1]^2\), then \(\varvec{T}\circ \varvec{I}= \varvec{I}\circ \varvec{T}= \varvec{T}\) for all \(\varvec{T}\in \mathcal T\).
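A minimal computational sketch of Definition 2.1 and Observation 2.2 (illustrative only; the representation of a tile as a triple of bottom-left corner, width, and height follows Section 2.1):

```python
from fractions import Fraction as F

# A tile is ((x, y), w, h): bottom-left corner p, width l_1, height l_2.
def product(S, T):
    """Tiling product S ∘ T: each tile A of S is replaced by a copy of T
    rescaled into A, i.e. p_i(R) = p_i(A) + p_i(B) l_i(A) and l_i(R) = l_i(A) l_i(B)."""
    return [((xA + xB * wA, yA + yB * hA), wA * wB, hA * hB)
            for ((xA, yA), wA, hA) in S
            for ((xB, yB), wB, hB) in T]

half = F(1, 2)
S = [((F(0), F(0)), half, F(1)), ((half, F(0)), half, F(1))]   # two vertical halves
T = [((F(0), F(0)), F(1), half), ((F(0), half), F(1), half)]   # two horizontal halves

ST = product(S, T)
assert len(ST) == len(S) * len(T)              # |S ∘ T| = |S| * |T|
assert sum(w * h for _, w, h in ST) == 1       # still tiles the unit square
# Associativity (Observation 2.2), compared as sets of tiles:
assert sorted(product(product(S, T), S)) == sorted(product(S, product(T, S)))
```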

Fig. 3. An example of the tiling concatenation \(\varvec{S} \mid \varvec{T}\)

Definition 2.3

(Tiling concatenation). Suppose that \(\varvec{S}\) is a tiling of a rectangle R and \(\varvec{T}\) is a tiling of a rectangle \(R'\) and the heights of R and \(R'\) coincide. Let \(R''\) denote the translation of \(R'\) for which the left edge of \(R''\) coincides with the right edge of R, and denote by \(\varvec{S}\mid \varvec{T}\) the induced tiling of the rectangle \(R \cup R''\). See Fig. 3.

Fig. 4. The tiling \(\varvec{H}^1 \mid \varvec{H}^2 \mid \varvec{H}^3\)

Let \(\varvec{H}\) denote the tiling in Fig. 1a, and define \(\mathcal H_n {:}{=}G(\varvec{H}^0 \mid \varvec{H}^1 \mid \cdots \mid \varvec{H}^n)\); see Fig. 4, where we have omitted \(\varvec{H}^0\) for ease of illustration. The next theorem represents our primary goal for the remainder of this section. Note that \(\varvec{H}^0 = \{\rho \}\) consists of a single tile, and that \(\{(\mathcal H_n,\rho )\}\) forms a Cauchy sequence in \((\mathcal G_{\bullet }, \mathbb {d}_{\mathrm {loc}})\), since \((\mathcal H_n,\rho )\) is naturally a rooted subgraph of \((\mathcal H_{n+1},\rho )\). Letting \(\mathcal H_{\infty }\) denote its limit, we will establish the following.

Theorem 2.4

The infinite planar graph \(\mathcal H_{\infty }\) is transient and has uniform polynomial volume growth of degree \(\log _3(12)\).

Uniform growth is established in Lemma 2.11 and transience in Corollary 2.17.

2.2 Volume growth

The following lemma shows that a ball of radius \(r = {\mathrm {diam}}(G(\varvec{H}^n))\) in \(G(\varvec{H}^n)\) has volume \(\asymp r^{\log _3(12)}\). Later on, in Lemma 2.10, we will show that a similar bound holds for balls of arbitrary radius \(1 \leqslant r \leqslant {\mathrm {diam}}(G(\varvec{H}^n))\).

Lemma 2.5

For \(n \geqslant 0\), we have \(|\varvec{H}^n| = 12^n\), and \(3^n \leqslant {\mathrm {diam}}(G(\varvec{H}^n)) \leqslant 3^{n+1}\).

Proof

The first claim is straightforward by induction, since \(|\varvec{S}\circ \varvec{T}| = |\varvec{S}|\cdot |\varvec{T}|\) and \(|\varvec{H}| = 12\). For the second claim, note that \(\ell _1(A) = 3^{-n}\) for every \(A \in \varvec{H}^n\), which yields the lower bound.

Moreover, there are \(3^n\) tiles touching the left-most boundary of \([0,1]^2\), and they form a column. Therefore, to connect any \(A,B \in \varvec{H}^n\) by a path in \(G(\varvec{H}^n)\), we need only go from A to the left-most column in at most \(3^n\) steps, then take at most \(3^n\) steps within the column, and finally move at most \(3^n\) steps to B. \(\square \)
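The tiling \(\varvec{H}\) itself is given only in Fig. 1a, so the following sketch uses a hypothetical stand-in with the properties invoked above (12 tiles in three columns of width 1/3, with 3, 6, and 3 tiles of equal height per column) and checks the counts and diameter bounds of Lemma 2.5 for small n:

```python
from fractions import Fraction as F
from collections import deque
from itertools import combinations

# Hypothetical stand-in for the tiling H of Fig. 1a (the figure is not shown here):
# three columns of width 1/3 containing 3, 6, 3 tiles of equal height, so |H| = 12.
def column(x, w, k):
    h = F(1, k)
    return [((x, i * h), w, h) for i in range(k)]

w = F(1, 3)
H = column(F(0), w, 3) + column(w, w, 6) + column(2 * w, w, 3)

def product(S, T):  # the tiling product of Definition 2.1
    return [((xA + xB * wA, yA + yB * hA), wA * wB, hA * hB)
            for ((xA, yA), wA, hA) in S for ((xB, yB), wB, hB) in T]

def adjacent(A, B):
    """Dual-graph adjacency: a shared boundary segment of positive length."""
    (xA, yA), wA, hA = A
    (xB, yB), wB, hB = B
    overlap_y = min(yA + hA, yB + hB) - max(yA, yB)
    overlap_x = min(xA + wA, xB + wB) - max(xA, xB)
    if xA + wA == xB or xB + wB == xA:   # common vertical edge
        return overlap_y > 0
    if yA + hA == yB or yB + hB == yA:   # common horizontal edge
        return overlap_x > 0
    return False

def diameter(tiles):
    adj = {i: [] for i in range(len(tiles))}
    for i, j in combinations(range(len(tiles)), 2):
        if adjacent(tiles[i], tiles[j]):
            adj[i].append(j); adj[j].append(i)
    diam = 0
    for s in range(len(tiles)):          # BFS from every vertex
        dist = {s: 0}; q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1; q.append(v)
        diam = max(diam, max(dist.values()))
    return diam

Hn = H
for n in (1, 2):
    assert len(Hn) == 12 ** n                            # |H^n| = 12^n
    assert all(wT == F(1, 3 ** n) for _, wT, _ in Hn)    # all widths are 3^{-n}
    assert 3 ** n <= diameter(Hn) <= 3 ** (n + 1)        # Lemma 2.5 diameter bound
    Hn = product(Hn, H)
```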

The next lemma is straightforward.

Lemma 2.6

Consider \(\varvec{S},\varvec{T}\in \mathcal T\) and \(G=G(\varvec{S}\circ \varvec{T})\). For any \(X \in \varvec{S}\circ \varvec{T}\), it holds that \(|B_G(X, {\mathrm {diam}}(G(\varvec{T})))| \geqslant |\varvec{T}|\).

If \(\varvec{T}\) is a tiling, let us partition the edge set \(E(G(\varvec{T})) = E_1(\varvec{T}) \cup E_2(\varvec{T})\) into horizontal and vertical edges. For \(A \in \varvec{T}\) and \(i \in \{1,2\}\), let \(N_{\varvec{T}}(A,i)\) denote the set of tiles adjacent to A in \(G(\varvec{T})\) along the ith direction, meaning that the edge separating them is parallel to the ith axis (see Fig. 5).

Fig. 5. A tile \(X \in \varvec{H}\) and its neighbors are marked. Here we have \(N_{\varvec{H}}(X, 1) = \{T, B\}\) and \(N_{\varvec{H}}(X, 2) = \{L, R\}\)

Further denote \(N_{\varvec{T}}(A) {:}{=}N_{\varvec{T}}(A,1) \cup N_{\varvec{T}}(A,2)\). Moreover, we define:

$$\begin{aligned} \alpha _{\varvec{T}}(A,i)&{:}{=}\max \left\{ \frac{\ell _{j}(A)}{\ell _{j}(B)}: B \in N_{\varvec{T}}(A,i), j \in \{1,2\} \right\} , \quad i \in \{1,2\} \nonumber \\ \alpha _{\varvec{T}}(A)&{:}{=}\max _{i \in \{1,2\}} \alpha _{\varvec{T}}(A,i) \nonumber \\ \alpha _{\varvec{T}}&{:}{=}\max \left\{ \alpha _{\varvec{T}}(A) : A \in \varvec{T}\right\} \nonumber \\ L_{\varvec{T}}&{:}{=}\max \left\{ \ell _i(A) : A \in \varvec{T}, i \in \{1,2\}\right\} . \end{aligned}$$
(2.1)

We take \(\alpha _{\varvec{T}} {:}{=}1\) if \(\varvec{T}\) contains a single tile. It is now straightforward to check that \(\alpha _{\varvec{T}}\) bounds the degrees in \(G(\varvec{T})\).

Lemma 2.7

For a tiling \(\varvec{T}\) and \(A \in \varvec{T}\), it holds that

$$\begin{aligned} \deg _{G(\varvec{T})}(A) \leqslant 4(1+\alpha _{\varvec{T}}) \leqslant 8 \alpha _{\varvec{T}}. \end{aligned}$$

Proof

After accounting for the four tiles that overlap the corners of A, every other tile \(B \in N_{\varvec{T}}(A,i)\) intersects A in a segment of length \(\ell _i(B) \geqslant \ell _i(A)/\alpha _{\varvec{T}}\), so each of the four sides of A meets at most \(\alpha _{\varvec{T}}\) such tiles; this gives the first inequality. The second inequality follows from \(\alpha _{\varvec{T}} \geqslant 1\). \(\square \)

Lemma 2.8

Consider \(\varvec{S},\varvec{T}\in \mathcal T\) and let \(G = G(\varvec{S}\circ \varvec{T})\). Then for any \(X \in \varvec{S}\circ \varvec{T}\), it holds that

$$\begin{aligned} |B_G(X,1/(\alpha ^4_{\varvec{S}} L_{\varvec{T}}))| \leqslant 192 \alpha ^2_{\varvec{S}} |\varvec{T}|. \end{aligned}$$
(2.2)

Proof

For a tile \(Y \in \varvec{S}\circ \varvec{T}\), let \({\hat{Y}} \in \varvec{S}\) denote the unique tile for which \(Y \subseteq {\hat{Y}}\). Let us also define

$$\begin{aligned} {\tilde{N}}_{\varvec{S}}({\hat{X}}) {:}{=}\{{\hat{X}}\} \cup N_{\varvec{S}}({\hat{X}}) \cup N_{\varvec{S}}\left( N_{\varvec{S}}({\hat{X}},1),2\right) \cup N_{\varvec{S}}\left( N_{\varvec{S}}({\hat{X}},2),1\right) , \end{aligned}$$

which is the set of vertices of \(G(\varvec{S})\) that can be reached from \({\hat{X}}\) by following at most one edge in each direction.

We will show that

$$\begin{aligned} {}[\![ B_G(X,1/(\alpha ^4_{\varvec{S}} L_{\varvec{T}}))]\!] \subseteq [\![ {\tilde{N}}_{\varvec{S}}({\hat{X}})]\!]. \end{aligned}$$
(2.3)

It follows that

$$\begin{aligned} |B_G(X,1/(\alpha ^4_{\varvec{S}} L_{\varvec{T}}))| \leqslant |\varvec{T}| \cdot |{\tilde{N}}_{\varvec{S}}({\hat{X}})| \leqslant |\varvec{T}| \cdot 3\left( \max _{A \in \varvec{S}} \deg _{G(\varvec{S})}(A)\right) ^2, \end{aligned}$$

and then (2.2) follows from Lemma 2.7.

To establish (2.3), consider any path \(\left\langle X=X_0,X_1,X_2,\ldots ,X_h\right\rangle \) in G with \({\hat{X}}_h \notin {\tilde{N}}_{\varvec{S}}({\hat{X}})\). Let \(k \leqslant h\) be the smallest index for which \({\hat{X}}_k \notin {\tilde{N}}_{\varvec{S}}({\hat{X}})\). Then:

$$\begin{aligned}&X_0 \cup X_1 \cup \cdots \cup X_{k-1} \subseteq [\![ {{\tilde{N}}}_{\varvec{S}}({\hat{X}})]\!] \end{aligned}$$
(2.4)
$$\begin{aligned}&X_{k-1} \cap \left( \partial [\![ {{\tilde{N}}}_{\varvec{S}}({\hat{X}})]\!] \cap (0,1)^2\right) \ne \emptyset . \end{aligned}$$
(2.5)

Now (2.4) implies that

$$\begin{aligned} \ell _i(X_j) \leqslant L_{\varvec{T}} \ell _i({\hat{X}}_j) \leqslant L_{\varvec{T}} \alpha ^2_{\varvec{S}} \ell _i({\hat{X}}),\qquad j \leqslant k-1, i \in \{1,2\}. \end{aligned}$$
(2.6)

And (2.5) shows that for some \(i' \in \{1,2\}\),

$$\begin{aligned} \sum _{j=0}^{k-1} \ell _{i'}(X_j) \geqslant \min \left\{ \ell _{i'}(Y) : Y \in {\tilde{N}}_{\varvec{S}}({\hat{X}}) \right\} \geqslant \ell _{i'}({\hat{X}})/\alpha ^2_{\varvec{S}}. \end{aligned}$$
(2.7)

To clarify why this is true, note that

$$\begin{aligned} {\hat{X}} + \left[ -\ell _1({\hat{X}})/\alpha ^2_{\varvec{S}}, \ell _1({\hat{X}})/\alpha ^2_{\varvec{S}}\right] \times \left[ -\ell _2({\hat{X}})/\alpha _{\varvec{S}}^2, \ell _2({\hat{X}})/\alpha ^2_{\varvec{S}}\right] \subseteq [\![ {\tilde{N}}_{\varvec{S}}({\hat{X}})]\!], \end{aligned}$$

where ’\(+\)’ here is the Minkowski sum \(R+S {:}{=}\{ r + s : r \in R, s \in S \}\). Indeed, this inclusion motivates our definition of the “\(\ell _{\infty }\) neighborhood” \({\tilde{N}}\) above.

Combining (2.6) and (2.7) now gives

$$\begin{aligned} h \geqslant k \geqslant \frac{1}{\alpha _{\varvec{S}}^4 L_{\varvec{T}}}, \end{aligned}$$

completing the proof. \(\square \)

We can now finish the analysis of the volume growth in the graphs \(\{G(\varvec{H}^n) : n \geqslant 0\}\).

Lemma 2.9

For \(n \geqslant 1\), it holds that \(\alpha _{\varvec{H}^n} \leqslant 2\).

Proof

Consider \(A \in \varvec{H}^n\) and \(B \in N_{\varvec{H}^n}(A)\). First note that, as in the proof of Lemma 2.5, all the tiles in \(\varvec{H}^n\) have the same width \(3^{-n}\), and so \(\ell _1(A) = \ell _1(B)\). Moreover, one can easily verify that every two vertically adjacent tiles in \(\varvec{H}^n\) have the same height, and so we have \(\ell _2(A)=\ell _2(B)\) when \(B \in N_{\varvec{H}^n}(A, 1)\). Now we prove by an induction on n that for all horizontally adjacent tiles \(A, B \in \varvec{H}^n\) we have

$$\begin{aligned} \frac{\ell _2(A)}{\ell _2(B)} \leqslant 2. \end{aligned}$$

The base case \(n = 1\) is clear. For \(n \geqslant 2\), let us write \(\varvec{H}^n = \varvec{H}\circ \varvec{H}^{n-1}\), and let \({\hat{A}},{\hat{B}} \in \varvec{H}\) be the unique tiles for which \(A \subseteq {\hat{A}}\) and \(B \subseteq {\hat{B}}\). If \({\hat{A}}={\hat{B}}\), then the claim follows from the induction hypothesis. Otherwise, as \(B \in N_{\varvec{H}^n}(A, 2)\), it holds that \({\hat{B}} \in N_{\varvec{H}}({\hat{A}}, 2)\) as well. By symmetry, the tiles touching the left and right edges of \([\![ \varvec{H}^{n-1}]\!]\) all have the same height, and therefore it follows that

$$\begin{aligned} \frac{\ell _2(A)}{\ell _2(B)} \leqslant \frac{\ell _2({\hat{A}})}{\ell _2({\hat{B}})} \leqslant 2, \end{aligned}$$

completing the proof. \(\square \)

Lemma 2.10

For any \(n \geqslant 0\), it holds that

$$\begin{aligned} \left| B_{G(\varvec{H}^n)}(A,r)\right| \asymp r^{\log _3(12)},\qquad \forall A \in \varvec{H}^n, 1 \leqslant r \leqslant {\mathrm {diam}}(G(\varvec{H}^n)). \end{aligned}$$

Proof

Writing \(\varvec{H}^n = \varvec{H}^{n-k} \circ \varvec{H}^{k}\) and employing Lemma 2.6 together with Lemma 2.5 gives

$$\begin{aligned} \left| B_{G(\varvec{H}^n)}(A, 3^{k+1})\right| \geqslant |\varvec{H}^k| = 12^k,\qquad \forall A \in \varvec{H}^n, k \in \{0,1,\ldots ,n\}. \end{aligned}$$

The desired lower bound now follows using monotonicity of \(|B_{G(H^n)}(A,r)|\) with respect to r.

To prove the upper bound, first note that we have \(L_{\varvec{H}^k} = 3^{-k}\) (recall the definition (2.1)). Moreover, by Lemma 2.9 we have \(\alpha _{\varvec{H}^n} \leqslant 2\). Hence invoking Lemma 2.8 with \(\varvec{S}= \varvec{H}^{n-k}\) and \(\varvec{T}= \varvec{H}^k\) gives

$$\begin{aligned} \left| B_{G(\varvec{H}^n)}(A, 3^{k}/16)\right| \leqslant 768 \cdot 12^{k},\qquad \forall A \in \varvec{H}^n, k \in \{0,1,\ldots ,n\}, \end{aligned}$$

completing the proof. \(\square \)

Finally, this allows us to establish a uniform polynomial growth rate for \(\mathcal H_{\infty }\).

Lemma 2.11

It holds that

$$\begin{aligned} |B_{\mathcal H_{\infty }}(v,r)| \asymp r^{\log _3(12)}\qquad \forall v \in V(\mathcal H_{\infty }), r \geqslant 1. \end{aligned}$$

Proof

Recall first the natural identification \(\varvec{H}^n \hookrightarrow V(\mathcal H_{\infty })\) under which \(V(\mathcal H_{\infty }) = \bigcup _{n \geqslant 0} \varvec{H}^n\) is a partition. Consider \(v \in V(\mathcal H_{\infty })\) and let \(n \geqslant 0\) be such that \(v \in \varvec{H}^n\). Now Lemma 2.10 in conjunction with Lemma 2.5 yields the bounds:

$$\begin{aligned} |B_{\mathcal H_{\infty }}(v,r)|&\geqslant |B_{\mathcal H_{\infty }}(v,r) \cap \varvec{H}^n| \gtrsim r^{\log _3(12)}&r \leqslant 3^{n+3} \\ |B_{\mathcal H_{\infty }}(v,r)|&\geqslant |\varvec{H}^k| = 12^k \gtrsim r^{\log _3(12)}&r \in [3^{k+3},3^{k+4}), k \geqslant n \\ |B_{\mathcal H_{\infty }}(v,r)|&= |B_{\mathcal H_{\infty }}(v,r) \cap \varvec{H}^{n-1}| + |B_{\mathcal H_{\infty }}(v,r) \cap \varvec{H}^n| + |B_{\mathcal H_{\infty }}(v,r) \cap \varvec{H}^{n+1}| \\&\lesssim r^{\log _3(12)}&r \leqslant 3^{n-1} \\ |B_{\mathcal H_{\infty }}(v,r)|&\leqslant \sum _{j \leqslant \max (k,n+1)} |\varvec{H}^j| \leqslant 2\cdot 12^{\max (k,n+1)} \lesssim r^{\log _3(12)}&r \in [3^{k-1},3^k), k \geqslant n. \end{aligned}$$

These four bounds together verify the desired claim. \(\square \)

2.3 Effective resistances

Consider a weighted, undirected graph \(G=(V,E,c)\) with edge conductances \(c :E \rightarrow \mathbb R_+\). For \(p \geqslant 1\), denote \(\ell _p(V) {:}{=}\{ f : V \rightarrow \mathbb R\mid \sum _{u \in V} |f(u)|^p < \infty \}\), and equip \(\ell _2(V)\) with the inner product \(\langle f,g\rangle = \sum _{u \in V} f(u) g(u)\).

For \(s,t \in \ell _1(V)\) with \(\Vert s\Vert _1=\Vert t\Vert _1\), we define the effective resistance

$$\begin{aligned} {\mathsf {R}}_{\mathrm {eff}}^G(s,t) {:}{=}\left\langle s-t, L_G^{\dagger } (s-t)\right\rangle , \end{aligned}$$

where \(L_G\) is the combinatorial Laplacian of G, and \(L_G^{\dagger }\) is the Moore-Penrose pseudoinverse. Here, \(L_G\) is the operator on \(\ell _2(V)\) defined by

$$\begin{aligned} L_G f(v) = \sum _{u : \{u,v\} \in E} c(\{u,v\}) \left( f(v)-f(u)\right) . \end{aligned}$$

If G is unweighted, we assume it is equipped with unit conductances \(c \equiv \mathbb {1}_{E(G)}\).
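For intuition, the definition above can be evaluated numerically. The following sketch (assuming NumPy; not from the paper) computes \({\mathsf {R}}_{\mathrm {eff}}^G(s,t)\) via the pseudoinverse of the combinatorial Laplacian, and checks it against the classical series and parallel laws:

```python
import numpy as np

def effective_resistance(n, edges, s, t):
    """R_eff(s, t) = <s - t, L^+ (s - t)> for a unit-conductance graph
    on vertices 0..n-1; s and t are unit-mass demand vectors."""
    L = np.zeros((n, n))
    for u, v in edges:           # repeated edges accumulate conductance
        L[u, u] += 1; L[v, v] += 1
        L[u, v] -= 1; L[v, u] -= 1
    d = np.asarray(s, dtype=float) - np.asarray(t, dtype=float)
    return d @ np.linalg.pinv(L) @ d

# A path of k unit edges has R_eff = k between its endpoints (series law).
k = 5
edges = [(i, i + 1) for i in range(k)]
s = [1.0] + [0.0] * k
t = [0.0] * k + [1.0]
assert abs(effective_resistance(k + 1, edges, s, t) - k) < 1e-9

# Two parallel unit edges between a pair of vertices: R_eff = 1/2 (parallel law).
assert abs(effective_resistance(2, [(0, 1), (0, 1)], [1, 0], [0, 1]) - 0.5) < 1e-9
```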

Equivalently, if we consider mappings \(\theta : E \rightarrow \mathbb R\), and define the energy functional

$$\begin{aligned} \mathcal E_G(\theta ) {:}{=}\sum _{e \in E} c(e)^{-1} \theta (e)^2, \end{aligned}$$

then \({\mathsf {R}}_{\mathrm {eff}}^G(s,t)\) is the minimum energy of a flow with demands \(s-t\). (See, for instance, [15, Ch. 2].) For two finite sets \(A,B \subseteq V\) in a graph, we define

$$\begin{aligned}&{\mathsf {R}}_{\mathrm {eff}}^G(A \leftrightarrow B)\\&\quad {:}{=}\inf \left\{ {\mathsf {R}}_{\mathrm {eff}}^G(s,t) : {{\,\mathrm{supp}\,}}(s) \subseteq A, {{\,\mathrm{supp}\,}}(t) \subseteq B, s,t \in \ell _1(V), \Vert s\Vert _1=\Vert t\Vert _1=1\right\} , \end{aligned}$$

and we recall the following standard characterization (see, e.g., [15, Thm. 2.3]).

If we additionally define \(c_v {:}{=}\sum _{u \in V} c(\{u,v\})\) for \(v \in V\), then the weighted random walk \(\{X_t\}\) on G is the Markov chain with transition law

$$\begin{aligned} {{\,\mathrm{{\mathbb {P}}}\,}}\left[ X_{t+1}=v \mid X_t=u\right] = \frac{c(\{u,v\})}{c_u},\qquad u,v \in V. \end{aligned}$$

Theorem 2.12

(Transience criterion). A weighted graph \(G=(V,E,c)\) is transient if and only if there is a vertex \(v \in V\) and an increasing sequence \(V_1 \subseteq \cdots \subseteq V_n \subseteq V_{n+1} \subseteq \cdots \) of finite subsets of vertices satisfying \(\bigcup _{n \geqslant 1} V_n = V\) and

$$\begin{aligned} \sup _{n \geqslant 1} {\mathsf {R}}_{\mathrm {eff}}^G\left( \{v\} \leftrightarrow V \setminus V_n\right) < \infty . \end{aligned}$$

For a tiling \(\varvec{T}\) of a closed rectangle R, let \(\mathscr {L}(\varvec{T})\) and \(\mathscr {R}(\varvec{T})\) denote the sets of tiles that intersect the left and right edges of R, respectively. We define

$$\begin{aligned} \rho (\varvec{T}) {:}{=}{\mathsf {R}}_{\mathrm {eff}}^{G(\varvec{T})}\left( \mathbb {1}_{\mathscr {L}(\varvec{T})}/|\mathscr {L}(\varvec{T})|, \mathbb {1}_{\mathscr {R}(\varvec{T})}/|\mathscr {R}(\varvec{T})|\right) . \end{aligned}$$

Observation 2.13

For any \(\varvec{S},\varvec{T}\in \mathcal T\), we have \(|\mathscr {L}(\varvec{S}\circ \varvec{T})| = |\mathscr {L}(\varvec{S})|\cdot |\mathscr {L}(\varvec{T})|\) and \(|\mathscr {R}(\varvec{S}\circ \varvec{T})| = |\mathscr {R}(\varvec{S})|\cdot |\mathscr {R}(\varvec{T})|\). In particular, \(|\mathscr {L}(\varvec{H}^n)| = |\mathscr {R}(\varvec{H}^n)| = 3^n\).

Lemma 2.14

Suppose that \(\varvec{S},\varvec{T}\) are tilings satisfying the conditions of Definition 2.3. Suppose furthermore that all rectangles in \(\mathscr {R}(\varvec{S})\) have the same height, and the same is true for \(\mathscr {L}(\varvec{T})\). Then we have

$$\begin{aligned} \rho (\varvec{S}\mid \varvec{T}) \leqslant \rho (\varvec{S}) + \rho (\varvec{T}) + \frac{1}{\max (|\mathscr {R}(\varvec{S})|,|\mathscr {L}(\varvec{T})|)}\,. \end{aligned}$$

Proof

By the triangle inequality for effective resistances, it suffices to prove that

$$\begin{aligned} {\mathsf {R}}_{\mathrm {eff}}^{G}\left( \mathbb {1}_{\mathscr {R}(\varvec{S})}/|\mathscr {R}(\varvec{S})|, \mathbb {1}_{\mathscr {L}(\varvec{T})}/|\mathscr {L}(\varvec{T})|\right) \leqslant \frac{1}{\max (|\mathscr {R}(\varvec{S})|,|\mathscr {L}(\varvec{T})|)}\,, \end{aligned}$$

where \(G = G(\varvec{S}\mid \varvec{T})\). We construct a flow from \(\mathscr {R}(\varvec{S})\) to \(\mathscr {L}(\varvec{T})\) as follows: If \(A \in \mathscr {R}(\varvec{S}), B \in \mathscr {L}(\varvec{T})\) and \(\{A,B\} \in E(G)\), then the flow value on \(\{A,B\}\) is

$$\begin{aligned} F_{AB} {:}{=}\frac{\mathrm {len}(A \cap B)}{\ell _2(A)} \frac{1}{|\mathscr {R}(\varvec{S})|}. \end{aligned}$$

Denoting \(m {:}{=}\max (|\mathscr {R}(\varvec{S})|,|\mathscr {L}(\varvec{T})|)\), we clearly have \(F_{AB} \leqslant 1/m\). Moreover,

$$\begin{aligned} \sum _{A \in \mathscr {R}(\varvec{S})} \sum _{\begin{array}{c} B \in \mathscr {L}(\varvec{T}) : \\ \{A,B\} \in E(G) \end{array}} F_{AB} = 1, \end{aligned}$$

hence

$$\begin{aligned} \sum _{A \in \mathscr {R}(\varvec{S})} \sum _{\begin{array}{c} B \in \mathscr {L}(\varvec{T}) : \\ \{A,B\} \in E(G) \end{array}} F_{AB}^2 \leqslant 1/m, \end{aligned}$$

completing the proof. \(\square \)

Say that a tiling \(\varvec{T}\) is non-degenerate if \(\mathscr {L}(\varvec{T}) \cap \mathscr {R}(\varvec{T}) = \emptyset \), i.e., if no tile \(A \in \varvec{T}\) touches both the left and right edges of \([\![ \varvec{T}]\!]\). Let \(\Delta _{\varvec{T}} {:}{=}\max _{A \in \varvec{T}} \deg _{G(\varvec{T})}(A)\). If \(\varvec{T}\) is non-degenerate, we have the simple inequalities \(\rho (\varvec{T}) \geqslant 1/(\Delta _{\varvec{T}} \cdot |\mathscr {L}(\varvec{T})|) \) and \(\rho (\varvec{T}) \geqslant 1/(\Delta _{\varvec{T}} \cdot |\mathscr {R}(\varvec{T})|)\). Together with Lemma 2.7, this yields a fact that we will employ later.

Corollary 2.15

For any two non-degenerate tilings \(\varvec{S},\varvec{T}\) satisfying the assumptions of Lemma 2.14, it holds that

$$\begin{aligned} \rho (\varvec{S}\mid \varvec{T})&\leqslant \rho (\varvec{S}) + \rho (\varvec{T}) + 8 \min (\alpha _S \rho (\varvec{S}),\alpha _T \rho (\varvec{T})) \\&\lesssim _{\alpha _S, \alpha _T} \rho (\varvec{S}) + \rho (\varvec{T}). \end{aligned}$$

Lemma 2.16

For every \(n \geqslant 1\), it holds that

$$\begin{aligned} \rho (\varvec{H}^n) \lesssim (5/6)^n. \end{aligned}$$

Proof

Fix \(n \geqslant 2\). Recalling Fig. 1b, let us consider \(\varvec{H}^n\) as consisting of three (identical) tilings stacked vertically, where each of these three tilings is written as \(\varvec{H}^{n-1} \mid \varvec{S}\mid \varvec{H}^{n-1}\) and \(\varvec{S}\) consists of two copies of \(\varvec{H}^{n-1}\) stacked vertically. Applying Lemma 2.14 to \(\varvec{H}^{n-1} \mid \varvec{S}\mid \varvec{H}^{n-1}\) gives

$$\begin{aligned} \rho (\varvec{H}^n)&\leqslant (1/3)^2 \cdot 3 \left( 2 \rho (\varvec{H}^{n-1}) + \rho (\varvec{S}) + \frac{1}{\max (|\mathscr {R}(\varvec{H}^{n-1})|,|\mathscr {L}(\varvec{S})|)}\right. \\&\quad \left. + \frac{1}{\max (|\mathscr {L}(\varvec{H}^{n-1})|,|\mathscr {R}(\varvec{S})|)}\right) \\&\leqslant (1/3)^2 \cdot 3 \left( 2 \rho (\varvec{H}^{n-1}) + (1/2)^2 \cdot 2 \rho (\varvec{H}^{n-1}) + \frac{2}{2 \cdot 3^{n-1}}\right) \\&= (5/6) \rho (\varvec{H}^{n-1}) + 3^{-n}, \end{aligned}$$

where in the second inequality we have employed Observation 2.13. This yields the desired result by induction on n. \(\square \)
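The induction can also be seen numerically. The snippet below (our own computation, not needed for the proof) iterates the recursion \(\rho _n = (5/6)\,\rho _{n-1} + 3^{-n}\) from a pessimistic seed \(\rho _0 = 1\) and tracks \(\rho _n/(5/6)^n\), which increases to the limit \(1 + \sum _{j \geqslant 1} (2/5)^j = 5/3\), confirming \(\rho (\varvec{H}^n) = O((5/6)^n)\).

```python
# Iterate rho_n = (5/6) rho_{n-1} + 3^{-n} and track the ratio rho_n / (5/6)^n.
# The ratio is 1 + sum_{j<=n} (2/5)^j, which increases to 5/3, so the recursion
# indeed forces geometric decay at rate 5/6.
rho, ratios = 1.0, []
for n in range(1, 40):
    rho = (5 / 6) * rho + 3.0 ** (-n)
    ratios.append(rho / (5 / 6) ** n)
print(ratios[-1])  # approaches 5/3
```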

Corollary 2.17

The graphs \(\mathcal H_n = G(\varvec{H}^0 \mid \varvec{H}^1 \mid \cdots \mid \varvec{H}^n)\) satisfy

$$\begin{aligned} \sup _{n \geqslant 1} \rho (\mathcal H_n) < \infty . \end{aligned}$$
(2.8)

Hence \(\mathcal H_{\infty }\) is transient.

Proof

Employing Lemma 2.14, Observation 2.13, and Lemma 2.16 together yields

$$\begin{aligned} \rho (\mathcal H_n) \lesssim \sum _{j=1}^n \left[ (5/6)^j + 3^{-j}\right] , \end{aligned}$$

verifying (2.8). Now Theorem 2.12 yields the transience of \(\mathcal H_{\infty }\). \(\square \)

3 Generalizations and unimodular constructions

Consider a sequence \(\gamma = \left\langle \gamma _1,\ldots ,\gamma _b\right\rangle \) with \(\gamma _i \in \mathbb N\). Define a tiling \(\varvec{T}_{\gamma } \in \mathcal T\) as follows: The unit square is partitioned into b columns of width 1/b, and for \(i \in \{1,2,\ldots ,b\}\), the ith column has \(\gamma _i\) rectangles of height \(1/\gamma _i\). For instance, the tiling \(\varvec{H}\) from Fig. 1a can be written \(\varvec{H}= \varvec{T}_{\langle 3,6,3\rangle }\).

We will assume throughout this section that \(\min (\gamma )=b\) and \(\gamma _1=\gamma _b\). Let us use the notation \(|\gamma | {:}{=}\gamma _1 + \cdots + \gamma _b\). The proof of the next lemma follows just as for Lemma 2.5, using \(\min (\gamma )=b\) so that there is a column in \(\varvec{T}_{\gamma }^n\) containing \(b^n\) tiles.

Lemma 3.1

For \(n \geqslant 0\), it holds that \(|\varvec{T}_{\gamma }^n| = |\gamma |^n\), and \(b^n \leqslant {\mathrm {diam}}(\varvec{T}_{\gamma }^n) \leqslant 3 b^n\).

Clearly we have \(\alpha _{\varvec{T}_{\gamma }} \leqslant |\gamma |/b\). The following lemma can be shown using a similar argument to that of Lemma 2.9. Note that the only symmetry required in the proof of Lemma 2.9 is that the first and last column of \(\varvec{T}_{\gamma }\) have the same geometry, and this is true since \(\gamma _1=\gamma _b\).

Lemma 3.2

For any \(n \geqslant 1\), it holds that \(\alpha _{\varvec{T}_{\gamma }^n} \leqslant \alpha _{\varvec{T}_{\gamma }} \leqslant |\gamma |/b\).

The next lemma also follows from Lemma 3.1 and the same reasoning used in the proof of Lemma 2.10. The dependence of the implicit constant on \(|\gamma |/b\) comes from Lemma 3.2.

Lemma 3.3

For any \(n \geqslant 0\), it holds that

$$\begin{aligned} \left| B_{G}(A,r)\right| \asymp _{|\gamma |/b} r^{\log _b(|\gamma |)} \qquad \forall A \in \varvec{T}_{\gamma }^n, \ 1 \leqslant r \leqslant {\mathrm {diam}}(G(\varvec{T}_{\gamma }^n)). \end{aligned}$$

3.1 Degrees of growth

Consider \(b, k \in \mathbb N\) with \(k \geqslant b \geqslant 4\), and define the sequence

$$\begin{aligned} \gamma ^{(b,k)} {:}{=}\left\langle b, \underbrace{\left\lceil \frac{k-3}{b-3} \right\rceil b, \ldots , \left\lceil \frac{k-3}{b-3} \right\rceil b}_{(k-3) \bmod (b-3) \text { copies}}, b, \underbrace{\left\lfloor \frac{k-3}{b-3} \right\rfloor b, \ldots , \left\lfloor \frac{k-3}{b-3} \right\rfloor b}_{(b-3)-[(k-3) \bmod (b-3)] \text { copies}}, b\right\rangle . \end{aligned}$$

Denote \(\varvec{T}_{(b,k)} {:}{=}\varvec{T}_{\gamma ^{(b,k)}}\) and note that \(|\gamma ^{(b,k)}|=bk\). Define \({\mathsf {d}}_g(b,k) {:}{=}\log _b(bk)\), and \(\Gamma _{b,k} {:}{=}\sum _{i=1}^b 1/\gamma _i^{(b,k)}\).
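For concreteness, the sequence \(\gamma ^{(b,k)}\) and the quantity \(\Gamma _{b,k}\) are easy to tabulate. The helper below is our transcription of the display above (not part of the formal development); it checks the identities \(\min (\gamma ^{(b,k)})=b\), \(\gamma _1=\gamma _b\), and \(|\gamma ^{(b,k)}|=bk\) for a few parameter pairs.

```python
from math import ceil, floor

def gamma_bk(b, k):
    """The sequence gamma^{(b,k)}: three entries equal to b, and b-3 entries that
    are multiples of b splitting k-3 into near-equal parts."""
    assert k >= b >= 4
    r = (k - 3) % (b - 3)
    hi = ceil((k - 3) / (b - 3)) * b
    lo = floor((k - 3) / (b - 3)) * b
    return [b] + [hi] * r + [b] + [lo] * ((b - 3) - r) + [b]

for b, k in [(4, 4), (4, 7), (5, 9), (6, 20)]:
    g = gamma_bk(b, k)
    assert len(g) == b and min(g) == b and g[0] == g[-1] and sum(g) == b * k

# gamma^{(4,7)} = <4, 4, 16, 4> and Gamma_{4,7} = 13/16.
print(gamma_bk(4, 7), sum(1 / x for x in gamma_bk(4, 7)))
```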

Observation 3.4

The following facts hold for \(k \geqslant b \geqslant 4\) and \(n \geqslant 0\):

(a) There are \(b^n\) tiles in the left- and right-most columns of \(\varvec{T}^n_{(b,k)}\).

(b) If a pair of consecutive columns in \(\varvec{T}^n_{(b,k)}\) have heights h and \(h'\), then \(\min (h,h')\) divides \(\max (h,h')\).

Now observe that Lemma 3.3 yields the following.

Corollary 3.5

The family of graphs \(\mathcal F= \big \{G\big (\varvec{T}_{(b,k)}^n\big ) : n \geqslant 0\big \}\) has uniform polynomial growth of degree \({\mathsf {d}}_g(b,k)\) in the sense that

$$\begin{aligned} |B_G(x,r)| \asymp _k r^{{\mathsf {d}}_g(b,k)},\qquad \forall G \in \mathcal F, x \in V(G), 1 \leqslant r \leqslant {\mathrm {diam}}(G)\,. \end{aligned}$$

For any rational \(p/q \geqslant 2\), one can achieve \({\mathsf {d}}_g(b,k)=p/q\) by taking \(b=4^q\) and \(k=4^{p-q}\).
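The arithmetic behind this choice is immediate: with \(b=4^q\) and \(k=4^{p-q}\) one gets \({\mathsf {d}}_g(b,k) = \log _{4^q}(4^p) = p/q\), and \(k \geqslant b\) holds exactly when \(p/q \geqslant 2\). A short check (our own illustration; the sample target \(p/q = 5/2\) is ours):

```python
from math import log, isclose

def d_g(b, k):
    return log(b * k, b)    # d_g(b,k) = log_b(b k)

p, q = 5, 2                 # target degree p/q = 2.5
b, k = 4 ** q, 4 ** (p - q)
assert k >= b               # requires p/q >= 2
print(d_g(b, k))
assert isclose(d_g(b, k), p / q)
```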

Remark 3.6

(Arbitrary real degrees \(d > 2\)). We note that by considering more general products of tilings, one can obtain planar graphs of uniform polynomial growth of any real degree \(d > 2\) for which the main results of this paper still hold. Instead of working with the family of powers \(\{\varvec{T}_\gamma ^n\}\) for a fixed tiling \(\varvec{T}_\gamma \), one defines an infinite sequence \(\langle \gamma ^{(1)}, \gamma ^{(2)}, \ldots \rangle \), and examines the family of graphs \(\varvec{T}_{\gamma ^{(n)}} \circ \cdots \circ \varvec{T}_{\gamma ^{(1)}}\).

More concretely, fix some real \(d > 2\), and let us consider a sequence \(\{h_n : n \geqslant 1\}\) of nonnegative integers. Also define \(\gamma ^{(n)} {:}{=}\gamma ^{(4,4+h_n)}\) and \(\varvec{T}^{(n)} {:}{=}\varvec{T}_{\gamma ^{(n)}} \circ \cdots \circ \varvec{T}_{\gamma ^{(1)}}\). Then \(|\varvec{T}^{(n)}| = 4^n \prod _{j=1}^n (4+h_j)\), and \({\mathrm {diam}}(\varvec{T}^{(n)}) \asymp 4^n\). By a similar argument as in Lemma 2.10 based on the recursive structure, it holds that for \(i \in \{1,2,\ldots ,n\}\), balls of radius \(\asymp 4^i\) in \(\varvec{T}^{(n)}\) have volume \(\asymp _K 4^i \prod _{j=1}^i (4+h_j)\), where \(K {:}{=}\max \{ h_n : n \geqslant 1\}\). Given our choice of \(\{h_1,\ldots ,h_{n-1}\}\), we choose \(h_n \geqslant 0\) as large as possible subject to

$$\begin{aligned} \sum _{j=1}^n \log _4(1+h_j/4) \leqslant (d-2) n. \end{aligned}$$

It is straightforward to argue that \(K \lesssim _d 1\), and

$$\begin{aligned} \left| \sum _{j=1}^n \log _4(1+h_j/4) - (d-2) n\right| \lesssim _d 1, \end{aligned}$$

implying that \(4^n \prod _{j=1}^n (4+h_j) \asymp _d 4^{dn}\) for every \(n \geqslant 1\). It follows that the graphs \(\{\varvec{T}^{(n)} : n \geqslant 1 \}\) have uniform polynomial growth of degree d.

Let us now return to the graphs \(\varvec{T}^n_{(b,k)}\) and analyze the effective resistance across them.

Lemma 3.7

For every \(n \geqslant 1\), it holds that

$$\begin{aligned} \Gamma ^n_{b,k} \lesssim _k \rho \left( \varvec{T}^n_{(b,k)}\right) \lesssim \Gamma ^n_{b,k}\,. \end{aligned}$$

Proof

Fix \(n \geqslant 2\) and write \(\varvec{T}^n_{(b,k)} = \varvec{T}_{(b,k)} \circ \varvec{T}^{n-1}_{(b,k)}\) as \(\varvec{A}_1 \mid \varvec{A}_2 \mid \cdots \mid \varvec{A}_b\) where, for \(1 \leqslant i \leqslant b\), each \(\varvec{A}_i\) is a vertical stack of \(\gamma _i^{(b,k)}\) copies of \(\varvec{T}^{n-1}_{(b,k)}\). Since \(\rho (\varvec{A}_i) = \rho (\varvec{T}_{(b,k)}^{n-1})/\gamma _i^{(b,k)}\) by the parallel law for effective resistances, applying Lemma 2.14 to \(\varvec{A}_1 \mid \varvec{A}_2 \mid \cdots \mid \varvec{A}_b\) gives

$$\begin{aligned}&\rho \left( \varvec{T}^n_{(b,k)}\right) \leqslant \sum _{i=1}^b \rho (\varvec{T}_{(b,k)}^{n-1})/\gamma _i^{(b,k)} \\&\quad + \sum _{i=1}^{b-1} \frac{1}{\min \left( |\mathscr {R}(\varvec{A}_i)|,|\mathscr {L}(\varvec{A}_{i+1})|\right) } \leqslant \rho (\varvec{T}^{n-1}_{(b,k)}) \Gamma _{b,k} + b^{1-n}, \end{aligned}$$

where in the second inequality we have employed \(\min \left( |\mathscr {R}(\varvec{A}_i)|,|\mathscr {L}(\varvec{A}_{i+1})|\right) \geqslant b^{n}\), which follows from Observation 3.4(a). Finally, observe that \(\Gamma _{b,k} \geqslant 1/\gamma _1^{(b,k)} + 1/\gamma _b^{(b,k)} = 2/b\), and therefore the desired upper bound follows by induction.

For the lower bound, note that since the degrees in \(\varvec{T}^n_{(b,k)}\) are bounded by k, the Nash-Williams inequality (see, e.g., [15, §5]) gives

$$\begin{aligned} \rho (\varvec{T}^n_{(b,k)}) \gtrsim _k \sum _{i=2}^{b^n-1} \frac{1}{|K_i|} \gtrsim \sum _{i=1}^{b^n} \frac{1}{|K_i|} = \Gamma _{b,k}^n, \end{aligned}$$
(3.1)

where \(K_i\) is the ith column of rectangles in \(\varvec{T}^n_{(b,k)}\), and the last equality follows by a simple induction. \(\square \)
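As a sanity check on the upper bound, the recursion \(\rho _n \leqslant \Gamma _{b,k}\,\rho _{n-1} + b^{1-n}\) from the proof can be iterated numerically. The snippet below is our own computation (using \(\gamma ^{(4,7)} = \langle 4,4,16,4\rangle \)); it shows that \(\rho _n/\Gamma _{b,k}^n\) stays bounded, i.e., \(\rho (\varvec{T}^n_{(b,k)}) \lesssim \Gamma _{b,k}^n\).

```python
b = 4
Gamma = 1 / 4 + 1 / 4 + 1 / 16 + 1 / 4   # Gamma_{4,7} = 13/16 for gamma^{(4,7)} = <4,4,16,4>
rho, ratios = 1.0, []
for n in range(1, 60):
    rho = Gamma * rho + b ** (1 - n)
    ratios.append(rho / Gamma ** n)
# ratio_n = 1 + sum_{j<=n} b (b Gamma)^{-j}, which increases to 1 + b/(b*Gamma - 1) = 25/9.
print(ratios[-1])
```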

The next result establishes Theorem 1.1.

Theorem 3.8

For every \(k > b\), the graphs \(\mathcal T_n^{(b,k)} {:}{=}G\left( \varvec{T}^0_{(b,k)} \mid \varvec{T}^1_{(b,k)} \mid \cdots \mid \varvec{T}^n_{(b,k)}\right) \) satisfy

$$\begin{aligned} \sup _{n \geqslant 1} \rho \left( \mathcal T_n^{(b,k)}\right) < \infty . \end{aligned}$$
(3.2)

Hence the limit graph \(\mathcal T_{\infty }^{(b,k)}\) is transient. Moreover, \(\mathcal T_{\infty }^{(b,k)}\) has uniform polynomial growth of degree \({\mathsf {d}}_g(b,k)\).

Proof

Employing Lemma 2.14, Observation 3.4(a), and Lemma 3.7 together yields

$$\begin{aligned} \rho \left( \mathcal T_n^{(b,k)}\right) \lesssim \sum _{j=1}^n \left( \Gamma _{b,k}^j + b^{-j}\right) . \end{aligned}$$

For \(k > b\), we have \(\max (\gamma ^{(b,k)}) > b\) and \(\min (\gamma ^{(b,k)})=b\), hence \(\Gamma _{b,k} < 1\), verifying (3.2). Now Theorem 2.12 yields transience of \(\mathcal T_{\infty }^{(b,k)}\).

Uniform polynomial growth of degree \({\mathsf {d}}_g(b,k)\) follows from Corollary 3.5 as in the proof of Lemma 2.11. \(\square \)

3.2 The distributional limit

Fix \(k \geqslant b \geqslant 4\) and take \(G_n {:}{=}G(\varvec{T}_{(b,k)}^n)\). Since the degrees in \(\{G_n\}\) are uniformly bounded, the sequence has a subsequential distributional limit, and in all arguments that follow, we could consider any such limit. But let us now argue that if \(\mu _n\) is the law of \((G_n,\rho _n)\) with \(\rho _n \in V(G_n)\) chosen according to the stationary measure, then the measures \(\{\mu _n : n \geqslant 0\}\) have a distributional limit.

Lemma 3.9

For any \(k \geqslant b \geqslant 4\), there is a reversible random graph \((G_{b,k},\rho )\) such that \(\{(G_n,\rho _n)\} \Rightarrow (G_{b,k},\rho )\). Moreover, almost surely \(G_{b,k}\) has uniform polynomial volume growth of degree \({\mathsf {d}}_g(b,k)\).

Proof

It suffices to prove that \(\{(G_n,\rho _n)\}\) has a limit \((G_{b,k},\rho )\). Reversibility of the limit then follows automatically (as noted in Section 1.1.1), and the degree of growth is an immediate consequence of Corollary 3.5. It will be slightly easier to show that the sequence \(\{(G_n,{\hat{\rho }}_n)\}\) has a distributional limit, with \({\hat{\rho }}_n \in V(G_n)\) chosen uniformly at random. As noted in Section 1.1.1, the claim then follows from [5, Prop. 2.5] (the correspondence between unimodular and reversible random graphs under degree-biasing).

Let \(\mu _{n,r}\) be the law of \(B_{G_n}({\hat{\rho }}_n,r)\). It suffices to show that the measures \(\{\mu _{n,r} : n \geqslant 0\}\) converge for every fixed \(r \geqslant 1\), and then a standard application of Kolmogorov’s extension theorem proves the existence of a limit.

For a tiling \(\varvec{T}\) of a rectangle R, let \(\partial \varvec{T}\) denote the set of tiles that intersect some side of R. Define the neighborhood \(N_r(\partial \varvec{T}_{(b,k)}^n) {:}{=}\{ v \in \varvec{T}^n_{(b,k)} : d_{G_n}(v, \partial \varvec{T}_{(b,k)}^n) \leqslant r\}\) and abbreviate \({\mathsf {d}} = {\mathsf {d}}_g(b,k)\). Then \(|\partial \varvec{T}^n_{(b,k)}| \leqslant 4b^n\), so Corollary 3.5 gives

$$\begin{aligned} \left| N_r(\partial \varvec{T}^n_{(b,k)})\right| \lesssim _k b^n r^{{\mathsf {d}}}\,. \end{aligned}$$

Since \(|\varvec{T}^n_{(b,k)}| = (bk)^n\), it follows that

$$\begin{aligned} 1-{{\,\mathrm{{\mathbb {P}}}\,}}\left[ \mathcal E_{r,n}\right] \lesssim _k k^{-n} r^{\mathsf {d}}, \end{aligned}$$

where \(\mathcal E_{r,n}\) is the event \(\{ B_{G_{n}}({\hat{\rho }}_{n}, r) \cap \partial \varvec{T}^{n}_{(b,k)} = \emptyset \}\).

Now write \(\varvec{T}^n_{(b,k)} = \varvec{T}_{(b,k)} \circ \varvec{T}^{n-1}_{(b,k)}\), and note that \({\hat{\rho }}_n\) falls into one of the \(|\gamma ^{(b,k)}|=bk\) copies of \(G_{n-1}\) and is, moreover, uniformly distributed in that copy. Therefore we can naturally couple \((G_n,{\hat{\rho }}_n)\) and \((G_{n-1},{\hat{\rho }}_{n-1})\) by identifying \({\hat{\rho }}_n\) with \({\hat{\rho }}_{n-1}\). Moreover, conditioned on the event \(\mathcal E_{r,n-1}\), we can similarly couple \(B_{G_n}({\hat{\rho }}_n,r)\) and \(B_{G_{n-1}}({\hat{\rho }}_{n-1},r)\).

It follows that, for every \(r \geqslant 1\),

$$\begin{aligned} d_{TV}\left( \mu _{n-1,r},\mu _{n,r}\right) \leqslant 1-{{\,\mathrm{{\mathbb {P}}}\,}}[\mathcal E_{r,n-1}] \lesssim _k k^{-n} r^{{\mathsf {d}}}\,. \end{aligned}$$

As the latter sequence is summable, it follows that \(\{\mu _{n,r}\}\) converges for every fixed \(r \geqslant 1\), completing the proof. \(\square \)

3.3 Speed of the random walk

Let \(\{X_t\}\) denote the random walk on \(G_{b,k}\) with \(X_0 = \rho \). Our first goal will be to prove a lower bound on the speed of the walk. Define:

$$\begin{aligned} {\mathsf {d}}_w(b,k)&{:}{=}{\mathsf {d}}_g(b,k) + \log _b(\Gamma _{b,k}). \end{aligned}$$

We will show that \({\mathsf {d}}_w(b,k)\) is related to the speed exponent for the random walk.
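For a concrete feel for the exponent, take \((b,k)=(4,7)\), where \(\gamma ^{(4,7)} = \langle 4,4,16,4\rangle \) and \(\Gamma _{4,7} = 13/16\). The walk dimension then evaluates as follows (our own computation, purely illustrative):

```python
from math import log

b, k = 4, 7
Gamma = 1 / 4 + 1 / 4 + 1 / 16 + 1 / 4   # Gamma_{4,7} = 13/16
d_g = log(b * k, b)                      # growth degree log_4(28), roughly 2.4037
d_w = d_g + log(Gamma, b)                # walk dimension d_g + log_4(13/16), roughly 2.2539
print(d_g, d_w)
```

Note that \({\mathsf {d}}_w(b,k) = \log _b(bk\,\Gamma _{b,k})\), the identity invoked later in the proof of Lemma 3.15.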

Theorem 3.10

Consider any \(k \geqslant b \geqslant 4\). It holds that for all \(T \geqslant 1\),

$$\begin{aligned} {{\,\mathrm{{\mathbb {E}}}\,}}\left[ d_{G_{b,k}}\!\left( X_T,X_0\right) \mid X_0 = \rho \right] \gtrsim _k T^{1/{\mathsf {d}}_w(b,k)}. \end{aligned}$$
(3.3)

Before proving the theorem, let us observe that it yields Theorem 1.3. Fix \(k \geqslant b \geqslant 4\). Observe that for any positive integer \(p \geqslant 1\), we have \({\mathsf {d}}_g(b^p,k^p)={\mathsf {d}}_g(b,k)\). On the other hand,

$$\begin{aligned} {\mathsf {d}}_g(b^p,k^p) - {\mathsf {d}}_w(b^p,k^p)&= -\log _{b^p}(\Gamma _{b^p,k^p}) \nonumber \\&\geqslant - \log _{b^p}\left( 3 b^{-p} + (b/k)^p\right) - o_p(1) \nonumber \\&\geqslant \min \left( 1, \log _b(k)-1\right) - o_p(1) \nonumber \\&\geqslant \min \left( 1, {\mathsf {d}}_g(b^p,k^p)-2\right) - o_p(1)\,. \end{aligned}$$
(3.4)

So for every \(\varepsilon > 0\), there is some \(p=p(\varepsilon )\) such that

$$\begin{aligned} {\mathsf {d}}_w(b^p,k^p) \leqslant \max \left( 2,{\mathsf {d}}_g(b^p,k^p)-1\right) + \varepsilon \,, \end{aligned}$$

and moreover \(G_{b^p,k^p}\) almost surely has uniform polynomial growth of degree \({\mathsf {d}}_g(b,k)\). Combining this with the construction of Corollary 3.5 for all rational \(d \geqslant 2\) yields Theorem 1.3.

3.3.1 The linearized graphs

Fix integers \(k \geqslant b \geqslant 4\) and \(n \geqslant 1\), and let us consider now the (weighted) graph \(L=L_{(b,k)}^n\) derived from \(G=G(\varvec{T}_{(b,k)}^n)\) by identifying every column of rectangles into a single vertex. Thus \(|V(L)| = b^n\).

We connect two vertices \(u,v \in V(L)\) if their corresponding columns \(\mathcal C_u\) and \(\mathcal C_v\) in G are adjacent, and we define the conductances \(c_{uv} {:}{=}|E_G(\mathcal C_u,\mathcal C_v)|\), where \(E_G(S,T)\) denotes the set of edges between two subsets \(S,T \subseteq V(G)\). Define additionally \(c_{uu} {:}{=}2 |\mathcal C_u|\) and

$$\begin{aligned} c_u&{:}{=}c_{uu} + \sum _{v : \{u,v\} \in E(L)} c_{uv}\,. \end{aligned}$$

Let us order the vertices of L from left to right as \(V(L) = \{\ell _1, \ldots , \ell _{b^n}\}\). The series law for effective resistances gives the following.

Observation 3.11

For \(1 \leqslant i \leqslant t \leqslant j\leqslant b^n\), we have

$$\begin{aligned} {\mathsf {R}}_{\mathrm {eff}}^L(\ell _i \leftrightarrow \ell _j) = {\mathsf {R}}_{\mathrm {eff}}^L(\ell _i \leftrightarrow \ell _t) + {\mathsf {R}}_{\mathrm {eff}}^L(\ell _t \leftrightarrow \ell _j). \end{aligned}$$

We will use this to bound the resistance between any pair of columns.

Lemma 3.12

If \(1 \leqslant s < t \leqslant b^n\), then

$$\begin{aligned} {\mathsf {R}}_{\mathrm {eff}}^L(\ell _s \leftrightarrow \ell _t) \cdot \left( c_{\ell _s} + c_{\ell _{s+1}} + \cdots + c_{\ell _t}\right) \asymp _k (\Gamma _{b,k} \cdot bk)^{\log _b(t-s)}\,. \end{aligned}$$
(3.5)

Proof

Let us first establish the upper bound. Denote \(h {:}{=}\lceil \log _b(t-s) \rceil \), \(\varvec{T}{:}{=}\varvec{T}_{(b,k)}\), and \(\Gamma {:}{=}\Gamma _{b,k}\). Write \(\varvec{T}^n = \varvec{T}^{n-h} \circ \varvec{T}^h\) and along this decomposition, partition \(\varvec{T}^n\) into \(b^{n-h}\) sets of tiles \(\mathcal D_1, \ldots , \mathcal D_{b^{n-h}}\), where each \(\mathcal D_i\) is formed from adjacent columns

$$\begin{aligned} \mathcal D_i {:}{=}\mathcal C_{(i-1)\cdot b^h + 1} \cup \cdots \cup \mathcal C_{i\cdot b^h}\,. \end{aligned}$$
(3.6)

Suppose that, for \(1 \leqslant i \leqslant b^{n-h}\), the tiling \(\varvec{T}^{n-h}\) has \(\beta _i\) tiles in its ith column. Then \(\mathcal D_i\) consists of \(\beta _i\) copies of \(\varvec{T}^h\) stacked atop each other.

Thus we have \(|\mathcal D_i| = \beta _i |\varvec{T}^h|\), and furthermore \(\rho (\mathcal D_i) \leqslant \rho (\varvec{T}^h)/\beta _i\), hence

$$\begin{aligned} \rho (\mathcal D_i) \cdot |\mathcal D_i| \leqslant \rho (\varvec{T}^h) \cdot |\varvec{T}^h| \lesssim \Gamma ^h \cdot (bk)^h, \end{aligned}$$
(3.7)

where the last inequality uses Lemma 3.7.

Let \(1 \leqslant i \leqslant j \leqslant b^{n-h}\) be such that \(\mathcal C_s \subseteq \mathcal D_i\) and \(\mathcal C_t \subseteq \mathcal D_j\). Since \(t \leqslant s + b^h\), and each set \(\mathcal D_i\) consists of \(b^h\) consecutive columns, it must be that \(j \leqslant i+1\). If \(i=j\), then \(|\mathcal C_s|+\cdots +|\mathcal C_t| \leqslant |\mathcal D_i|\), and Observation 3.11 gives

$$\begin{aligned} {\mathsf {R}}_{\mathrm {eff}}^L(\ell _s \leftrightarrow \ell _t) \leqslant \rho (\mathcal D_i), \end{aligned}$$

thus (3.7) yields (3.5), as desired.

Suppose, instead, that \(j=i+1\). From Lemma 3.2, we have \(\alpha _{\varvec{T}^{n-h}} \leqslant |\gamma |/b \leqslant k\). Therefore \(1/k \leqslant \beta _{i+1}/\beta _i \leqslant k\). Since the degrees in \(G(\varvec{T}^n_{(b,k)})\) are bounded by k, this yields the following claim, which we will also employ later.

Claim 3.13

For any \(\ell _i \in V(L)\), we have

$$\begin{aligned} c_{\ell _i} \asymp _k |\mathcal C_i|, \end{aligned}$$
(3.8)

and for any \(D > 0\) and columns \(\mathcal C_a \subseteq \mathcal D_i\) and \(\mathcal C_b \subseteq \mathcal D_j\) with \(|i-j| \leqslant D\), it holds that

$$\begin{aligned} c_{\ell _a} \asymp _k |\mathcal C_a| \asymp _{k,D} |\mathcal C_b| \asymp _k c_{\ell _b}. \end{aligned}$$
(3.9)

Thus using Corollary 2.15 (and noting that each \(\mathcal D_i\) is non-degenerate) along with (3.7) gives

$$\begin{aligned} \rho (\mathcal D_i \cup \mathcal D_{i+1}) = \rho (\mathcal D_i \mid \mathcal D_{i+1}) \lesssim _k \rho (\mathcal D_i) + \rho (\mathcal D_{i+1}) \lesssim _k \rho (\varvec{T}^h)/\beta _i. \end{aligned}$$

Since it also holds that \(|\mathcal D_i|+|\mathcal D_{i+1}| = (\beta _i+\beta _{i+1}) |\varvec{T}^h| \leqslant 2k \beta _i |\varvec{T}^h|\), Observation 3.11 gives

$$\begin{aligned} {\mathsf {R}}_{\mathrm {eff}}^L(\ell _s \leftrightarrow \ell _t) \cdot \left( |\mathcal C_s|+\cdots +|\mathcal C_t|\right) \leqslant \rho (\mathcal D_i \cup \mathcal D_{i+1}) |\mathcal D_i \cup \mathcal D_{i+1}| \lesssim _k \rho (\varvec{T}^h) |\varvec{T}^h|, \end{aligned}$$

and again (3.7) establishes (3.5). Now (3.8) completes the proof of the upper bound.

For the lower bound, define \(h' {:}{=}\lfloor \log _b (t-s)\rfloor - 1\) and decompose \(\varvec{T}^n = \varvec{T}^{n-h'} \circ \varvec{T}^{h'}\). Partition \(\varvec{T}^n\) similarly into \(b^{n-h'}\) sets of tiles \(\mathcal D_1,\ldots ,\mathcal D_{b^{n-h'}}\). Suppose that \(\mathcal C_s \subseteq \mathcal D_i\) and \(\mathcal C_t \subseteq \mathcal D_j\), and note that the width of each \(\mathcal D_i\) is \(b^{h'}\) and \(b^{h'+2} \geqslant t-s \geqslant b^{h'+1}\), hence \(j > i+1\). Therefore using again Observation 3.11 and the Nash-Williams inequality, we have

$$\begin{aligned} {\mathsf {R}}_{\mathrm {eff}}^L(\ell _s \leftrightarrow \ell _t) \geqslant \sum _{m=s+1}^{t-1} \frac{1}{c_{\ell _m}} {\mathop {\gtrsim _k}\limits ^{(3.8)}} \sum _{m=s+1}^{t-1} \frac{1}{|\mathcal C_m|} \geqslant \sum _{m=i\cdot b^{h'}+1}^{(i+1)\cdot b^{h'}} \frac{1}{|\mathcal C_m|} = \frac{1}{\beta _{i+1}} \Gamma _{b,k}^{h'}, \end{aligned}$$

where the final equality uses (3.1) applied to \(\varvec{T}^{h'}\). Note also that

$$\begin{aligned} |\mathcal C_s|+\cdots +|\mathcal C_t| \geqslant |\mathcal D_{i+1}| = \beta _{i+1} |\varvec{T}^{h'}| = \beta _{i+1} (bk)^{h'}. \end{aligned}$$

An application of (3.8) completes the proof of the lower bound. \(\square \)

3.3.2 Rate of escape in L

Consider again the linearized graph \(L = L^n_{(b,k)}\) with conductances \(c : E(L) \rightarrow \mathbb R_+\) defined in Section 3.3.1, and let \(\{Y_t\}\) be the random walk on L defined by

$$\begin{aligned} {{\,\mathrm{{\mathbb {P}}}\,}}[Y_{t+1} = v \mid Y_t = u] = \frac{c_{uv}}{c_u}, \quad \{u,v\} \in E(L) \text { or } u=v. \end{aligned}$$
(3.10)

Let \(\pi _L\) be the stationary measure of \(\{Y_t\}\).

For a parameter \(1 \leqslant h \leqslant n\), consider the decomposition \(\varvec{T}^n = \varvec{T}^{n-h} \circ \varvec{T}^h\), and let \(V_1,V_2,\ldots ,V_{b^{n-h}}\) be a partition of V(L) into contiguous subsets with \(|V_1|=|V_2|=\cdots =|V_{b^{n-h}}|=b^h\).

Let \(\{Z_i : i \in \{1,2,\ldots ,b^{n-h}\}\}\) be a collection of independent random variables with

$$\begin{aligned} {{\,\mathrm{{\mathbb {P}}}\,}}[Z_i = v] = \frac{\pi _L(v)}{\pi _L(V_i)},\quad v \in V_i. \end{aligned}$$

Define the random time \(\tau (h)\) as follows: Given \(Y_0 \in V_j\), let \(\tau (h)\) be the first time \(\tau \geqslant 1\) at which

$$\begin{aligned} Y_{\tau } \in \{Z_{j-2},Z_{j+2}\} \qquad&3 \leqslant j \leqslant b^{n-h}-2 \\ Y_{\tau } = Z_{j+2} \qquad&j \in \{1,2\} \\ Y_{\tau } = Z_{j-2} \qquad&j \in \{b^{n-h}-1,b^{n-h}\}. \end{aligned}$$

The next lemma shows that the law of the walk stopped at time \(\tau (h)\) is within a constant factor of the stationary measure.

Lemma 3.14

Suppose \(Y_0\) is chosen according to \(\pi _L\). Then for every \(v \in V(L)\),

$$\begin{aligned} {{\,\mathrm{{\mathbb {P}}}\,}}\left[ Y_{\tau (h)} = v\right] \asymp _k \pi _L(v). \end{aligned}$$

Proof

Consider some \(5 \leqslant j \leqslant b^{n-h}-4\) and \(v \in V_j\). The proof for the other cases is similar. Let \(\mathcal E\) denote the event \(\{Y_0 \in V_{j-2} \cup V_{j+2}\}\). The conditional measure is

$$\begin{aligned} {{\,\mathrm{{\mathbb {P}}}\,}}[Y_0=u \mid \mathcal E] = \frac{\pi _L(u)}{\pi _L(V_{j-2})+\pi _L(V_{j+2})},\quad u \in V_{j-2} \cup V_{j+2}. \end{aligned}$$

Consider three linearly ordered vertices \(v,u,w \in V(L)\), i.e., such that v and w are in distinct connected components of \(L[V(L) \setminus \{u\}]\). Let \(p_{u}^{v \prec w}\) denote the probability that the random walk, started from \(Y_0 = u\), hits v before it hits w. Now we have:

$$\begin{aligned}&{{\,\mathrm{{\mathbb {P}}}\,}}\left[ Y_{\tau (h)} = v\right] = \frac{\pi _L(v)}{\pi _L(V_j)} \left( \sum _{u \in V_{j-2}} \pi _L(u) \sum _{w \in V_{j-4}} \frac{\pi _L(w)}{\pi _L(V_{j-4})} p_u^{v \prec w}\right. \nonumber \\&\quad \left. + \sum _{u \in V_{j+2}} \pi _L(u) \sum _{w \in V_{j+4}} \frac{\pi _L(w)}{\pi _L(V_{j+4})} p_u^{v \prec w}\right) \end{aligned}$$
(3.11)

It is a classical fact (see [15, Ch. 2]) that

$$\begin{aligned} p_u^{v \prec w} = \frac{{\mathsf {R}}_{\mathrm {eff}}^L(u \leftrightarrow w)}{{\mathsf {R}}_{\mathrm {eff}}^L(u \leftrightarrow v) + {\mathsf {R}}_{\mathrm {eff}}^L(u \leftrightarrow w)} = \frac{{\mathsf {R}}_{\mathrm {eff}}^L(u\leftrightarrow w)}{{\mathsf {R}}_{\mathrm {eff}}^L(v \leftrightarrow w)}, \end{aligned}$$

where the final equality uses Observation 3.11 and the fact that v, u, w are linearly ordered.

Thus from Lemma 3.12 and (3.9), whenever \(w \in V_{j-4}, u \in V_{j-2}, v \in V_{j}\) or \(u \in V_{j+2}, v \in V_j, w \in V_{j+4}\), it holds that

$$\begin{aligned} p_{u}^{v \prec w} \asymp _k 1. \end{aligned}$$

Another application of (3.9) gives

$$\begin{aligned} \pi _L(V_{j-2})\asymp _k \pi _L(V_{j-4}) \asymp _k \pi _L(V_j) \asymp _k \pi _L(V_{j+2}), \end{aligned}$$

hence (3.11) gives

$$\begin{aligned} {{\,\mathrm{{\mathbb {P}}}\,}}\left[ Y_{\tau (h)} = v\right] \asymp _k \pi _L(v), \end{aligned}$$

completing the proof. \(\square \)

Lemma 3.15

It holds that \({{\,\mathrm{{\mathbb {E}}}\,}}[\tau (h) \mid Y_0] \lesssim _k b^{h\,{\mathsf {d}}_w(b,k)}\).

Proof

Consider a triple of vertices \(u \in V_{j}, v \in V_{j-2}, w\in V_{j+2}\) for \(3 \leqslant j \leqslant b^{n-h} - 2\). Let \(\tau _{v,w}\) be the smallest time \(\tau \geqslant 0\) such that \(Y_\tau \in \{v,w\}\), and denote

$$\begin{aligned} t_{u}^{v,w} {:}{=}{{\,\mathrm{{\mathbb {E}}}\,}}[\tau _{v,w} \mid Y_0 = u]. \end{aligned}$$

Then the standard connection between hitting times and effective resistances [11] yields

$$\begin{aligned} t_u^{v,w} \leqslant 2 \left( \sum _{i=j-2}^{j+2} \sum _{x \in V_i} c_x\right) \min \left( {\mathsf {R}}_{\mathrm {eff}}^L(u \leftrightarrow v), {\mathsf {R}}_{\mathrm {eff}}^L(u \leftrightarrow w)\right) \lesssim _k \left( \Gamma _{b,k} bk\right) ^{h}, \end{aligned}$$

where the last inequality employs Lemma 3.12. Recalling that \({\mathsf {d}}_w(b,k) = \log _b(bk\, \Gamma _{b,k})\), this yields

$$\begin{aligned} {{\,\mathrm{{\mathbb {E}}}\,}}[\tau (h) \mid Y_0 = v] \lesssim _k b^{h\,{\mathsf {d}}_w(b,k)} \end{aligned}$$

for any \(v \in V_3 \cup V_4 \cup \cdots \cup V_{b^{n-h}-2}\). A one-sided variant of the argument follows in the same manner for \(v \in V_j\) when \(j \leqslant 2\) or \(j \geqslant b^{n-h} - 1\). \(\square \)

Lemma 3.16

Let \(Y_0\) have law \(\pi _L\). There is a number \(c_k > 0\) such that for any \(T \leqslant c_k {\mathrm {diam}}(L)^{{\mathsf {d}}_w(b,k)}\), we have

$$\begin{aligned} {{\,\mathrm{{\mathbb {E}}}\,}}\left[ d_L(Y_0,Y_T)\right] \gtrsim _k T^{1/{\mathsf {d}}_w(b,k)}. \end{aligned}$$

Proof

First, we claim that for every \(T \geqslant 1\),

$$\begin{aligned} {{\,\mathrm{{\mathbb {E}}}\,}}\left[ d_L(Y_0,Y_T)\right] \geqslant \frac{1}{2} \max _{0 \leqslant t \leqslant T} {{\,\mathrm{{\mathbb {E}}}\,}}\left[ d_L(Y_0,Y_t) - d_L(Y_0,Y_1)\right] . \end{aligned}$$
(3.12)

Let \(s' \leqslant T\) be such that

$$\begin{aligned} {{\,\mathrm{{\mathbb {E}}}\,}}[d_L(Y_0,Y_{s'})] = \max _{0 \leqslant t \leqslant T} {{\,\mathrm{{\mathbb {E}}}\,}}[d_L(Y_0,Y_t)]. \end{aligned}$$

Then there exists an even time \(s \in \{s',s'-1\}\) such that \({{\,\mathrm{{\mathbb {E}}}\,}}[d_L(Y_0,Y_s)] \geqslant {{\,\mathrm{{\mathbb {E}}}\,}}[d_L(Y_0,Y_{s'})-d_L(Y_0,Y_1)]\). Consider \(\{Y_t\}\) and an identically distributed walk \(\{{\tilde{Y}}_t\}\) such that \({\tilde{Y}}_t=Y_t\) for \(t \leqslant s/2\) and \({\tilde{Y}}_t\) evolves independently after time s/2. By the triangle inequality, we have

$$\begin{aligned} d_L(Y_0,{\tilde{Y}}_T) + d_L({\tilde{Y}}_T,Y_s) \geqslant d_L(Y_0, Y_s). \end{aligned}$$

But since \(\{Y_t\}\) is stationary and reversible, \((Y_0,{\tilde{Y}}_T)\) and \(({\tilde{Y}}_T, Y_s)\) have the same law as \((Y_0,Y_T)\). Taking expectations yields (3.12).

Let \(h \in \{1,2,\ldots ,n\}\) be the largest value such that \({{\,\mathrm{{\mathbb {E}}}\,}}[\tau (h)] \leqslant T\). We may assume that T is sufficiently large so that \({{\,\mathrm{{\mathbb {E}}}\,}}[\tau (1)] \leqslant T\), and Lemma 3.15 guarantees that

$$\begin{aligned} b^h \gtrsim _k T^{1/{\mathsf {d}}_w(b,k)}, \end{aligned}$$
(3.13)

as long as \(h < n\) (which gives our restriction \(T \leqslant c_k {\mathrm {diam}}(L)^{{\mathsf {d}}_w(b,k)}\) for some \(c_k > 0\), since \({\mathrm {diam}}(L) \asymp b^n\)).

From the definition of \(\tau (h)\), we have

$$\begin{aligned} d_L(Y_0, Y_{\tau (h)}) \geqslant b^h, \end{aligned}$$

hence the triangle inequality implies

$$\begin{aligned} d_L(Y_0,Y_{2T}) \geqslant \mathbb {1}_{\{\tau (h) \leqslant 2T\}} \left( b^h - d_L(Y_{\tau (h)}, Y_{2T})\right) . \end{aligned}$$
(3.14)

Again, let \(\{{\tilde{Y}}_t\}\) be an independent copy of \(\{Y_t\}\). Then since \({{\,\mathrm{{\mathbb {P}}}\,}}(\tau (h) \leqslant 2T) \geqslant 1/2\), Lemma 3.14 implies

$$\begin{aligned} {{\,\mathrm{{\mathbb {P}}}\,}}\left[ Y_{\tau (h)}=v \mid \{\tau (h) \leqslant 2T\}\right] \leqslant 2 {{\,\mathrm{{\mathbb {P}}}\,}}\left[ Y_{\tau (h)}=v\right] \lesssim _k {{\,\mathrm{{\mathbb {P}}}\,}}\left[ {\tilde{Y}}_0=v\right] . \end{aligned}$$

Therefore,

$$\begin{aligned} {{\,\mathrm{{\mathbb {E}}}\,}}\left[ \mathbb {1}_{\{\tau (h) \leqslant 2T\}} \,d_L(Y_{\tau (h)}, Y_{2T})\right]&\lesssim _k {{\,\mathrm{{\mathbb {E}}}\,}}\left[ \mathbb {1}_{\{\tau (h) \leqslant 2T\}} \,d_L({\tilde{Y}}_{0},{\tilde{Y}}_{2T-\tau (h)})\right] . \end{aligned}$$

Define \(\eta {:}{=}b^{-h} \max _{0 \leqslant t \leqslant 2T} {{\,\mathrm{{\mathbb {E}}}\,}}[d_L(Y_0,Y_t)]\). Using the above bound yields

$$\begin{aligned} {{\,\mathrm{{\mathbb {E}}}\,}}\left[ \mathbb {1}_{\{\tau (h) \leqslant 2T\}} \,d_L(Y_{\tau (h)}, Y_{2T})\right] \leqslant C(k) {{\,\mathrm{{\mathbb {P}}}\,}}(\tau (h) \leqslant 2T)\,\eta b^h, \end{aligned}$$

for some number C(k). Taking expectations in (3.14) gives

$$\begin{aligned} {{\,\mathrm{{\mathbb {E}}}\,}}\left[ d_L(Y_0,Y_{2T})\right] \geqslant {{\,\mathrm{{\mathbb {P}}}\,}}(\tau (h) \leqslant 2T) \left( 1-\eta C(k)\right) b^h \geqslant \frac{1}{2} \left( 1-\eta C(k)\right) b^h. \end{aligned}$$

If \(\eta \leqslant 1/(2 C(k))\), then \({{\,\mathrm{{\mathbb {E}}}\,}}[d_L(Y_0,Y_{2T})] \geqslant b^h/4\). If, on the other hand, \(\eta > 1/(2 C(k))\), then (3.12) yields

$$\begin{aligned} {{\,\mathrm{{\mathbb {E}}}\,}}[d_L(Y_0,Y_{2T})] \geqslant \frac{1}{2} \left( \eta b^h - 1\right) \gtrsim _k b^h. \end{aligned}$$

Now (3.13) completes the proof. \(\square \)

3.3.3 Rate of escape in \(G_{b,k}\)

Fig. 6 The cylindrical graph \({\tilde{G}}\) for \(G=G(\varvec{T}_{\langle 3,6,3\rangle })\). The new edges are dashed

Consider now the graphs \(G_n {:}{=}G(\varvec{T}^n_{(b,k)})\) for some \(k \geqslant b \geqslant 4\) and \(n \geqslant 1\). Let us define the cylindrical version \({\tilde{G}}_n\) of \(G_n\) with the same vertex set, but additionally an edge from the top tile to the bottom tile in every column (see Fig. 6). If we choose \({\tilde{\rho }}_n \in V({\tilde{G}}_n)\) according to the stationary measure on \({\tilde{G}}_n\), then clearly \(\{({\tilde{G}}_n,{\tilde{\rho }}_n)\} \Rightarrow (G_{b,k},\rho )\) as well.

Define also \(L_n {:}{=}L^n_{b,k}\). Because of Observation 3.4(b), the graph \({\tilde{G}}_n\) has vertical symmetry: Tiles within a column all have the same degree and, more specifically, have the same number of neighbors on the left and on the right. Let \(\pi _n : V(G_n) \rightarrow V(L_n)\) denote the projection map and observe that

$$\begin{aligned} d_{{\tilde{G}}_n}(u,v) \geqslant d_{L_n}\!\left( \pi _n(u),\pi _n(v)\right) , \qquad \forall u,v \in V(G_n). \end{aligned}$$

Let \(\{X^{(n)}_t\}\) denote the random walk on \({\tilde{G}}_n\) with \(X^{(n)}_0={\tilde{\rho }}_n\), and let \(\{Y^{(n)}_t\}\) be the stationary random walk on \(L_n\) defined in (3.10).

Note that, by construction, \(\{\pi _n(X_t^{(n)})\}\) and \(\{Y_t^{(n)}\}\) have the same law, and therefore

$$\begin{aligned} {{\,\mathrm{{\mathbb {E}}}\,}}\left[ d_{{\tilde{G}}_n}\left( X^{(n)}_0,X^{(n)}_T\right) \right] \geqslant {{\,\mathrm{{\mathbb {E}}}\,}}\left[ d_{L_n}\left( Y^{(n)}_0,Y^{(n)}_T\right) \right] . \end{aligned}$$
(3.15)

With this in hand, we can establish speed lower bounds in the limit \((G_{b,k},\rho )\).

Proof of Theorem 3.10

Observe that (3.15) in conjunction with Lemma 3.16 gives, for every \(T \leqslant c_k ({\mathrm {diam}}({\tilde{G}}_n)/2)^{{\mathsf {d}}_w(b,k)}\),

$$\begin{aligned} {{\,\mathrm{{\mathbb {E}}}\,}}\left[ d_{{\tilde{G}}_n}\!\left( X^{(n)}_0,X^{(n)}_T\right) \right] \gtrsim _k T^{1/{\mathsf {d}}_w(b,k)}\,. \end{aligned}$$

Since \(\{({\tilde{G}}_n,{\tilde{\rho }}_n)\} \Rightarrow (G_{b,k},\rho )\) by Lemma 3.9, it holds that if \(\{X_t\}\) is the random walk on \(G=G_{b,k}\) with \(X_0=\rho \), then for all \(T \geqslant 1\),

$$\begin{aligned} {{\,\mathrm{{\mathbb {E}}}\,}}\left[ d_{G}(X_0,X_T)\right] \gtrsim _k T^{1/{\mathsf {d}}_w(b,k)}. \end{aligned}$$

\(\square \)

3.4 Annular resistances

We will establish Theorem 1.5 by proving the following.

Theorem 3.17

For any \(k \geqslant b \geqslant 4\), there is a constant \(C=C(k)\) such that for \(G=G_{b,k}\), almost surely

$$\begin{aligned} {\mathsf {R}}_{\mathrm {eff}}^{G}\left( B_G(\rho , R) \leftrightarrow V(G) \setminus B_G(\rho , 2 R)\right) \leqslant C R^{\log _b(\Gamma _{b,k})},\quad \forall R \geqslant 1. \end{aligned}$$

To see that this yields Theorem 1.5, consider some \(k \geqslant b^2\), corresponding to the restriction \({\mathsf {d}}_g(b,k) \geqslant 3\). Then for every integer \(p \geqslant 1\) we have \({\mathsf {d}}_g(b^p,k^p)={\mathsf {d}}_g(b,k)\), and recalling (3.4),

$$\begin{aligned} \lim _{p \rightarrow \infty } \log _{b^p}\!\left( \Gamma _{b^p,k^p}\right) = -1\,. \end{aligned}$$

To prove Theorem 3.17, it suffices to show the following.

Lemma 3.18

For every \(n \geqslant 1\), \(k \geqslant b \geqslant 4\), there is a constant \(C=C(k)\) such that for \(G=G(\varvec{T}^n_{(b,k)})\), we have

$$\begin{aligned} {\mathsf {R}}_{\mathrm {eff}}^{G}\left( B_G(x, R) \leftrightarrow V(G) \setminus B_G(x, 2 R)\right) \leqslant C R^{\log _b(\Gamma _{b,k})},\quad \forall x \in V(G),\ 1 \leqslant R \leqslant {\mathrm {diam}}(G)/C\,. \end{aligned}$$

Proof

Denote \(\varvec{T}{:}{=}\varvec{T}_{(b,k)}\). Consider some value \(1 \leqslant R \leqslant {\mathrm {diam}}(G)/C\), and define \(h {:}{=}\lfloor \log _b(R/3)\rfloor \).

Let \(\mathcal C_1,\ldots ,\mathcal C_{b^n}\) denote the columns of \(\varvec{T}^n\). Writing \(\varvec{T}^n = \varvec{T}^{n-h} \circ \varvec{T}^h\), let us partition the columns into consecutive sets \(\mathcal D_1,\ldots ,\mathcal D_{b^{n-h}}\) (as in the proof of Lemma 3.12), where \(\mathcal D_i = \mathcal C_{(i-1) b^h + 1} \cup \cdots \cup \mathcal C_{i b^h}\). For \(1 \leqslant i \leqslant b^{n-h}\), let \(\beta _i\) denote the number of tiles in the ith column of \(\varvec{T}^{n-h}\), so that \(\mathcal D_i\) consists of \(\beta _i\) copies of \(\varvec{T}^h\) stacked vertically.

Fix some vertex \(x \in V(G)\) and suppose that \(x \in \mathcal D_s\) for some \(1 \leqslant s \leqslant b^{n-h}\). Denote \(\Delta {:}{=}9b\). By choosing C sufficiently large, we can assume that \(b^{n-h} > 2\Delta \), so that either \(s \leqslant b^{n-h} -\Delta \) or \(s \geqslant 1+\Delta \). Let us assume that \(s \leqslant b^{n-h}-\Delta \), as the other case is treated symmetrically. Define \(t {:}{=}\lceil s+2+6b\rceil \) so that \(t-s \leqslant \Delta \), and

$$\begin{aligned} d_G(\mathcal D_s, \mathcal D_{t}) \geqslant b^h (t-s-1) \geqslant (t-s-1) \frac{R}{3b} > 2 R\,. \end{aligned}$$
(3.16)
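
The arithmetic behind (3.16) is elementary: since \(h = \lfloor \log_b(R/3)\rfloor\), we have \(b^{h+1} > R/3\), hence \(b^h \geqslant R/(3b)\), and so \((t-s-1)\,b^h = (1+6b)\,b^h \geqslant R/(3b) + 2R > 2R\). A brute-force check over small parameters (Python; the ranges are arbitrary, and we take \(R \geqslant 3\) so that \(h \geqslant 0\)):

```python
def check(b, R):
    # h = floor(log_b(R/3)), computed in integer arithmetic to avoid
    # floating-point issues: the largest h >= 0 with 3 * b**h <= R.
    h = 0
    while 3 * b ** (h + 1) <= R:
        h += 1
    # t := s + 2 + 6b gives t - s - 1 = 1 + 6b, and (3.16) asserts
    # that b^h * (t - s - 1) exceeds 2R.
    return b ** h * (1 + 6 * b) > 2 * R

assert all(check(b, R) for b in range(4, 12) for R in range(3, 5000))
```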

Denote \(\xi {:}{=}\gcd (\beta _s,\beta _{s+1},\ldots ,\beta _t)\). We claim that

$$\begin{aligned} \xi \gtrsim _{k} \max (\beta _s,\beta _{s+1},\ldots ,\beta _{t})\,. \end{aligned}$$
(3.17)

This follows because \(\min (\beta _i,\beta _{i+1}) \mid \max (\beta _i,\beta _{i+1})\) for all \(1 \leqslant i < b^n\) (cf. Observation 3.4(b)), and moreover the ratio \(\max (\beta _i,\beta _{i+1})/\min (\beta _i,\beta _{i+1})\) is bounded by a function depending only on k. Since \(t-s \lesssim _{k} 1\), this verifies (3.17).
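
The mechanism behind (3.17) can be isolated as a general elementary fact: if every consecutive pair of an integer sequence satisfies \(\min \mid \max\) with ratio at most \(M\), then over a window of \(L\) steps the gcd is at least \(\max / M^L\), since each step multiplies or divides by an integer factor at most \(M\), and the gcd can lose at most the product of these factors. A randomized illustration (Python; the values of \(M\), \(L\), and the sampling scheme are arbitrary and not taken from the construction):

```python
import random
from functools import reduce
from math import gcd

def random_chain(length, M, rng):
    """Integer sequence in which each consecutive pair satisfies
    min | max with ratio at most M: every step multiplies or (when
    possible) divides by a uniformly chosen integer factor a <= M."""
    beta = [rng.randrange(1, 10**6)]
    for _ in range(length - 1):
        a = rng.randrange(1, M + 1)
        if rng.random() < 0.5 and beta[-1] % a == 0:
            beta.append(beta[-1] // a)
        else:
            beta.append(beta[-1] * a)
    return beta

rng = random.Random(0)
M, L = 7, 6
for _ in range(10**4):
    beta = random_chain(L + 1, M, rng)
    xi = reduce(gcd, beta)
    # The gcd is within a factor M**L of the maximum, mirroring (3.17)
    # with a constant depending only on the ratio bound and window length.
    assert xi * M ** L >= max(beta)
```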

Denote \({\hat{\mathcal D}} {:}{=}\mathcal D_s \cup \cdots \cup \mathcal D_t\). One can verify that \({\hat{\mathcal D}}\) is a vertical stacking of \(\xi \) copies of \({\hat{\varvec{T}}} {:}{=}\varvec{T}_{\langle \beta _s/\xi ,\ldots ,\beta _t/\xi \rangle } \circ \varvec{T}^h\), and Corollary 2.15 implies that

$$\begin{aligned} \rho ({\hat{\varvec{T}}}) \lesssim _{k} \rho (\varvec{T}^h) \lesssim \Gamma _{b,k}^h\,, \end{aligned}$$
(3.18)

with the final inequality being the content of Lemma 3.7.

Let \({\hat{\varvec{A}}}\) be the copy of \({\hat{\varvec{T}}}\) that contains x, and let \(\varvec{S}\) be the copy of \(\varvec{T}^h\) in \(\varvec{T}^n = \varvec{T}^{n-h} \circ \varvec{T}^h\) that contains x. Since \(\xi \) divides \(\beta _s\), it holds that \(\varvec{S}\subseteq {\hat{\varvec{A}}}\) and \(\mathscr {L}(\varvec{S}) \subseteq \mathscr {L}({\hat{\varvec{A}}})\). We further have

$$\begin{aligned} |\mathscr {L}({\hat{\varvec{A}}})| = |\mathscr {L}({\hat{\varvec{T}}})| = (\beta _s/\xi ) |\mathscr {L}(\varvec{T}^h)| = (\beta _s/\xi ) |\mathscr {L}(\varvec{S})| \lesssim _{k} |\mathscr {L}(\varvec{S})|\,. \end{aligned}$$

This yields

$$\begin{aligned} {\mathsf {R}}_{\mathrm {eff}}^G(\mathbb {1}_{\mathscr {L}(\varvec{S})}/|\mathscr {L}(\varvec{S})| \leftrightarrow \mathscr {R}({\hat{\varvec{A}}})) \lesssim _{k} {\mathsf {R}}_{\mathrm {eff}}^G(\mathbb {1}_{\mathscr {L}({\hat{\varvec{A}}})}/|\mathscr {L}({\hat{\varvec{A}}})| \leftrightarrow \mathscr {R}({\hat{\varvec{A}}})), \end{aligned}$$
(3.19)

where we have used the hybrid notation: For \(s \in \ell _1(V)\),

$$\begin{aligned} {\mathsf {R}}_{\mathrm {eff}}^G(s \leftrightarrow U) {:}{=}\inf \left\{ {\mathsf {R}}_{\mathrm {eff}}^G(s,t) : {{\,\mathrm{supp}\,}}(t) \subseteq U, \Vert t\Vert _1=\Vert s\Vert _1\right\} . \end{aligned}$$

Therefore,

$$\begin{aligned} {\mathsf {R}}_{\mathrm {eff}}^G(\mathscr {L}(\varvec{S}) \leftrightarrow \mathscr {R}({\hat{\varvec{A}}}))&\leqslant {\mathsf {R}}_{\mathrm {eff}}^G(\mathbb {1}_{\mathscr {L}(\varvec{S})}/|\mathscr {L}(\varvec{S})| \leftrightarrow \mathscr {R}({\hat{\varvec{A}}})) \\&{\mathop {\lesssim _{k}}\limits ^{(3.19)}} {\mathsf {R}}_{\mathrm {eff}}^G(\mathbb {1}_{\mathscr {L}({\hat{\varvec{A}}})}/|\mathscr {L}({\hat{\varvec{A}}})| \leftrightarrow \mathscr {R}({\hat{\varvec{A}}})) =\rho ({\hat{\varvec{T}}}) {\mathop {\lesssim _{k}}\limits ^{(3.18)}} \Gamma _{b,k}^h\,. \end{aligned}$$

Since \({\mathrm {diam}}_G(\varvec{S}) \leqslant 3 b^h \leqslant R\) and \(x \in \varvec{S}\), it holds that \(\varvec{S}\subseteq B_G(x,R)\). On the other hand, since \(x \in \varvec{S}\), (3.16) shows that \(B_G(x, 2 R) \cap \mathscr {R}({\hat{\varvec{A}}}) = \emptyset \). We conclude that

$$\begin{aligned}&{\mathsf {R}}_{\mathrm {eff}}^G(B_G(x,R) \leftrightarrow V(G) \setminus B_G(x,2 R)) \leqslant {\mathsf {R}}_{\mathrm {eff}}^G(\mathscr {L}(\varvec{S}) \leftrightarrow \mathscr {R}({\hat{\varvec{A}}}))\\&\quad \lesssim _{k} \Gamma ^h_{b,k} \lesssim _{k} R^{\log _b(\Gamma _{b,k})}, \end{aligned}$$

as desired. \(\square \)

3.5 Complements of balls are connected

Let us finally prove Theorem 1.4. Recall the setup from Section 3.3.3: We take \(k \geqslant b \geqslant 4\), define \(G_n = G(\varvec{T}^n_{(b,k)})\), and use \(({\tilde{G}}_n,{\tilde{\rho }}_n)\) to denote the cylindrical version, which satisfies \(\{({\tilde{G}}_n,{\tilde{\rho }}_n)\} \Rightarrow (G_{b,k},\rho )\).

Partition the vertices of \({\tilde{G}}_n\) into columns \(\mathcal C_1, \ldots , \mathcal C_{b^n}\) in the natural way (as was done in Sects. 3.3 and 3.4). In what follows, we say that a set \(S \subseteq V({\tilde{G}}_n)\) is connected to mean that \({\tilde{G}}_n[S]\) is a connected graph.

Definition 3.19

Say that a set of vertices \(U \subseteq V({\tilde{G}}_n)\) is vertically convex if for all \(1 \leqslant i \leqslant b^n\), either \(U \cap \mathcal C_i = \emptyset \) or \(U \cap \mathcal C_i\) is connected.

Observation 3.20

Consider a connected set \(U \subseteq \mathcal C_i\) for some \(1 \leqslant i \leqslant b^n\). Then for \(1 \leqslant i' \leqslant b^n\) with \(|i - i'| \leqslant 1\), the set \(B_{{\tilde{G}}_n}(U, 1) \cap \mathcal C_{i'}\) is connected.

Momentarily, we will argue that balls in \({\tilde{G}}_n\) are vertically convex.

Lemma 3.21

Consider any tile \(A \in V({\tilde{G}}_n)\) and \(R \geqslant 0\). It holds that \(B_{{\tilde{G}}_n}(A,R)\) is vertically convex.

With this lemma in hand, it is easy to establish the following theorem which, in conjunction with Lemma 3.9, implies Theorem 1.4.

Theorem 3.22

Almost surely, the complement of every ball in \(G_{b,k}\) is connected.

Proof

Since \(\{({\tilde{G}}_n,{\tilde{\rho }}_n)\} \Rightarrow (G_{b,k},\rho )\), it suffices to argue that for every \(n \geqslant 1\), \(x \in V({\tilde{G}}_n)\), and \(R \leqslant b^n/3\), the set \(U {:}{=}V({\tilde{G}}_n) \setminus B_{{\tilde{G}}_n}(x,R)\) is connected.

By Lemma 3.21, it holds that \(B_{{\tilde{G}}_n}(x,R)\) is vertically convex. Since the complement of a vertically convex set is vertically convex (given that every column in \({\tilde{G}}_n\) is isomorphic to a cycle), U is vertically convex as well. To argue that U is connected, it therefore suffices to prove that there is a path from \(\mathscr {L}(\varvec{T}^n_{(b,k)})\) to \(\mathscr {R}(\varvec{T}^n_{(b,k)})\) in \({\tilde{G}}_n[U]\).

Indeed, the tiles in \(\varvec{T}^n_{(b,k)}\) have height at most \(b^{-n}\), and therefore the projection of \(K {:}{=}[\![ B_{{\tilde{G}}_n}(x,R)]\!]\) onto \(\{0\} \times [0,1]\) has length at most \((2R+1) b^{-n} < 1\). Hence there is some height \(h \in [0,1]\) such that the horizontal line \(\ell \) at height h does not intersect K. The set of tiles \(\{ A \in V({\tilde{G}}_n) : \ell \cap [\![ A]\!] \ne \emptyset \}\) therefore contains a path from \(\mathscr {L}(\varvec{T}^n_{(b,k)})\) to \(\mathscr {R}(\varvec{T}^n_{(b,k)})\) that is contained in \({\tilde{G}}_n[U]\), completing the proof. \(\square \)
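
The picture behind Lemma 3.21 and Theorem 3.22 can be verified by brute force on a simplified model: a uniform cylinder whose columns are cycles of equal length, joined by aligned horizontal edges. This ignores the varying column heights of \({\tilde{G}}_n\), so it is only an illustration of the convexity and connectivity claims, not a check of the actual graphs (Python; the dimensions are arbitrary):

```python
from collections import deque

COLS, ROWS = 15, 20  # toy dimensions (hypothetical, not the actual tiling)

def neighbors(v):
    """Toy cylinder: each column is a ROWS-cycle (vertical wrap-around
    edge), and aligned vertices in adjacent columns are joined."""
    i, j = v
    nbrs = [(i, (j + 1) % ROWS), (i, (j - 1) % ROWS)]
    if i > 0:
        nbrs.append((i - 1, j))
    if i < COLS - 1:
        nbrs.append((i + 1, j))
    return nbrs

def ball(center, R):
    """Vertices at graph distance at most R from center (BFS)."""
    dist = {center: 0}
    queue = deque([center])
    while queue:
        v = queue.popleft()
        if dist[v] < R:
            for w in neighbors(v):
                if w not in dist:
                    dist[w] = dist[v] + 1
                    queue.append(w)
    return set(dist)

def is_connected(S):
    """Whether the induced subgraph on S is connected (empty counts)."""
    if not S:
        return True
    start = next(iter(S))
    seen, queue = {start}, deque([start])
    while queue:
        v = queue.popleft()
        for w in neighbors(v):
            if w in S and w not in seen:
                seen.add(w)
                queue.append(w)
    return seen == S

def vertically_convex(S):
    """Every nonempty column slice is an arc of its column cycle."""
    return all(is_connected({v for v in S if v[0] == i}) for i in range(COLS))

vertices = {(i, j) for i in range(COLS) for j in range(ROWS)}
for R in range(ROWS // 2):  # keep the ball from covering a whole column
    B = ball((COLS // 2, 0), R)
    assert vertically_convex(B)        # cf. Lemma 3.21
    assert is_connected(vertices - B)  # cf. Theorem 3.22
```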

We are left to prove Lemma 3.21. To state the next lemma more cleanly, let us denote \(\mathcal C_0 = \mathcal C_{b^n+1} = \emptyset \).

Lemma 3.23

Consider \(1 \leqslant i \leqslant b^n\) and let \(U \subseteq \mathcal C_{i-1} \cup \mathcal C_i \cup \mathcal C_{i+1}\) be a connected, vertically convex subset of vertices. Then \(B_{{\tilde{G}}_n}(U, 1) \cap \mathcal C_i\) is connected as well.

Proof

For \(i' \in \{i-1, i, i+1\}\), denote \(U_{i'} {:}{=}U \cap \mathcal C_{i'}\). Clearly we have

$$\begin{aligned} B_{{\tilde{G}}_n}(U, 1) = B_{{\tilde{G}}_n}(U_{i-1}, 1) \cup B_{{\tilde{G}}_n}(U_i, 1) \cup B_{{\tilde{G}}_n}(U_{i+1}, 1)\,. \end{aligned}$$

If \(U_i = \emptyset \), then since U is connected we must have either \(U = U_{i-1}\) or \(U = U_{i+1}\), and in either case the claim follows from Observation 3.20.

Now suppose \(U_i \ne \emptyset \), and consider some \(i' \in \{i-1, i+1\}\) such that \(U_{i'} \ne \emptyset \). Then Observation 3.20 implies that \(B_{{\tilde{G}}_n}(U_{i'}, 1) \cap \mathcal C_i\) is connected. Furthermore, since U is connected, it holds that \(B_{{\tilde{G}}_n}(U_{i'}, 1) \cap U_i \ne \emptyset \). Now, as we know that \(U_i\) is connected by the assumed vertical convexity of U, we obtain that \(B_{{\tilde{G}}_n}(U, 1) \cap \mathcal C_i\) is connected, completing the proof. \(\square \)

Proof of Lemma 3.21

We proceed by induction on R, where the base case \(R=0\) is trivial, so consider \(R \geqslant 1\). Fix some \(i \in \{1,2,\ldots ,b^n\}\), and suppose \(B_{{\tilde{G}}_n}(A, R) \cap \mathcal C_i \ne \emptyset \). Denote

$$\begin{aligned}U {:}{=}B_{{\tilde{G}}_n}(A, R - 1) \cap (\mathcal C_{i-1} \cup \mathcal C_i \cup \mathcal C_{i+1})\,.\end{aligned}$$

Clearly we have \(B_{{\tilde{G}}_n}(A, R) \cap \mathcal C_i = B_{{\tilde{G}}_n}(U, 1) \cap \mathcal C_i\). The set U is manifestly connected and, by the induction hypothesis, is also vertically convex. Thus from Lemma 3.23, it follows that \(B_{{\tilde{G}}_n}(U, 1) \cap \mathcal C_i\) is connected as well. We conclude that \(B_{{\tilde{G}}_n}(A, R)\) is vertically convex, completing the proof. \(\square \)