1 Introduction

Let \((Y_k)_{k \geqslant 1}\) be a sequence of independent, identically distributed random variables defined on a probability space \((\Omega , \mathcal {F}, \mathbb {P})\), taking values in the integer lattice \(\mathbb {Z}^d\), with distribution \(\mathbb {P}(Y_k = e_i) = \mathbb {P}(Y_k = -e_i) = 1/(2d)\), \(i = 1, 2, \ldots , d\), where \(e_i\) is the \(i\)-th vector of the standard basis of \(\mathbb {R}^d\). A simple symmetric random walk in \(\mathbb {Z}^d\) \((d \geqslant 1)\) starting at \(x \in \mathbb {Z}^d\) is a stochastic process \(Z = (Z_n)_{n \geqslant 0}\) with \(Z_0 = x\) and \(Z_n = x + Y_1 + \cdots + Y_n\).

Let \(Z = (Z_n)_{n \geqslant 0}\) be a simple symmetric random walk in \(\mathbb {Z}^d\) starting at the origin. Further, let

$$\begin{aligned} \phi (\lambda ) := \int _{( 0, \infty )}{\left( 1 - e^{-\lambda t} \right) \mu (\hbox {d}t)} \end{aligned}$$

be a Bernstein function satisfying \(\phi (1)=1\). Here \(\mu \) is a measure on \(( 0,\infty )\), called the Lévy measure, satisfying \(\int _{( 0,\infty )} {(1 \wedge t) \mu (\hbox {d}t)} < \infty \). For \(m \in \mathbb {N}\) denote

$$\begin{aligned} c_{m}^{\phi } := \int _{( 0, \infty )}{\frac{t^m}{m!}e^{-t}\mu (\mathrm{d}t)} \end{aligned}$$
(1.1)

and notice that

$$\begin{aligned} \sum _{m = 1}^{\infty } {c_{m}^{\phi }} = \int _{( 0, \infty )} {(e^t - 1) e^{-t} \mu (\hbox {d}t)} = \int _{( 0, \infty )} {(1 - e^{-t}) \mu (\hbox {d}t)} = \phi (1) = 1. \end{aligned}$$

Hence, we can define a random variable R with \(\mathbb {P}(R = m) = c_{m}^{\phi }\), \(m \in \mathbb {N}\). Now we define the random walk \(T = (T_n)_{n \geqslant 0}\) on \(\mathbb {Z}_+\) by \(T_n := \sum _{k = 1}^{n} {R_k}\), where \((R_k)_{k \geqslant 1}\) is a sequence of independent, identically distributed random variables with the same distribution as R, independent of the process Z. The subordinate random walk is the stochastic process \(X = (X_n)_{n \geqslant 0}\) defined by \(X_n := Z_{T_n}\), \(n \geqslant 0\). It is straightforward to see that the subordinate random walk is indeed a random walk. Hence, there exists a sequence of independent, identically distributed random variables \((\xi _k)_{k \geqslant 1}\) with the same distribution as \(X_1\) such that

$$\begin{aligned} X_n \overset{d}{=} \sum _{k = 1}^n {\xi _k}, \quad n \geqslant 0. \end{aligned}$$
(1.2)
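For intuition, here is a minimal simulation sketch (ours, not part of the construction above) for the stable case \(\phi (\lambda ) = \lambda ^{1/2}\). From the computation \(\mathbb {E}[e^{-\lambda R_1}] = 1 - \phi (1 - e^{-\lambda })\) carried out in the proof of Lemma 4.1 below, \(\mathbb {E}[s^R] = 1 - (1 - s)^{1/2}\), i.e., R has the Sibuya(1/2) distribution, which can be sampled by inverse transform. The function names are ours.

```python
import random

ALPHA = 0.5  # phi(lam) = lam**ALPHA, a complete Bernstein function with phi(1) = 1

def sample_R(alpha=ALPHA):
    # Inverse-transform sampling of P(R = m) = (-1)**(m + 1) * binom(alpha, m),
    # the Sibuya(alpha) law; its generating function is 1 - (1 - s)**alpha.
    u, m, p = random.random(), 1, alpha
    cdf = p
    while u > cdf:
        p *= (m - alpha) / (m + 1)  # ratio of consecutive probabilities
        m += 1
        cdf += p
    return m

def subordinate_walk(n_steps, d=2, alpha=ALPHA):
    # X_k = Z_{T_k}: advance the simple symmetric walk by R_k steps at a time.
    z, path = [0] * d, [tuple([0] * d)]
    for _ in range(n_steps):
        for _ in range(sample_R(alpha)):
            i = random.randrange(d)
            z[i] += random.choice((-1, 1))
        path.append(tuple(z))
    return path

print(subordinate_walk(10))
```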

We can easily find the explicit expression for the distribution of the random variable \(X_1\):

$$\begin{aligned} \mathbb {P}(X_1 = x)&= \mathbb {P}(Z_{T_1} = x) = \mathbb {P}(Z_{R_1} = x) = \sum _{m = 1}^{\infty } {\mathbb {P}(Z_{R_1} = x \mid R_1 = m)} c_{m}^{\phi } \nonumber \\&= \sum \limits _{m = 1}^{\infty } \int _{( 0, \infty )}{\frac{t^m}{m!} e^{-t} \mu (\mathrm{d}t)} \mathbb {P}(Z_m = x), \quad x \in \mathbb {Z}^d. \end{aligned}$$
(1.3)

We denote the transition matrix of the subordinate random walk X by P, i.e., \(P = (p(x, y) : x, y \in \mathbb {Z}^d)\), where \(p(x, y) = \mathbb {P}(x + X_1 = y)\).

We will impose some additional constraints on the Laplace exponent \(\phi \). First, \(\phi \) will be a complete Bernstein function [13, Definition 6.1.] and it will satisfy the following lower scaling condition: there exist \(0< \gamma _1 < 1\) and \(a_1 > 0\) such that

$$\begin{aligned} \frac{\phi (R)}{\phi (r)} \geqslant a_1 \left( \frac{R}{r} \right) ^{\gamma _1}, \quad \forall \,\, 0 < r \leqslant R\leqslant 1. \end{aligned}$$
(1.4)

Additionally, \(\phi \) will satisfy an upper scaling condition of the following form: there exist \(\gamma _1 \leqslant \gamma _2 < 1\) and \(a_2 > 0\) such that

$$\begin{aligned} \frac{\phi (R)}{\phi (r)} \leqslant a_2 \left( \frac{R}{r} \right) ^{\gamma _2}, \quad \forall \,\, 0 < r \leqslant R\leqslant 1. \end{aligned}$$
(1.5)

However, it is well known that if \(\phi \) is a Bernstein function, then \(\phi (\lambda t) \leqslant \lambda \phi (t)\) for all \(\lambda \geqslant 1\), \(t > 0\), which implies that \(\phi (v)/v \leqslant \phi (u)/u\) for \(0 < u \leqslant v\). Using these two facts, the proof of the next lemma is straightforward.

Lemma 1.1

If \(\phi \) is a Bernstein function, then for all \(\lambda , t > 0\), \(1 \wedge \lambda \leqslant \phi (\lambda t) / \phi (t) \leqslant 1 \vee \lambda \).

Using Lemma 1.1 with \(t = r\) and \(\lambda = R/r\), we get

$$\begin{aligned} \frac{\phi (R)}{\phi (r)} \leqslant \frac{R}{r}, \quad \forall \,\, 0 < r \leqslant R\leqslant 1, \end{aligned}$$

and this looks like the upper scaling condition (1.5) with \(a_2 = \gamma _2 = 1\). We will need (1.5) in dimensions \(d \leqslant 2\), but in dimensions \(d \geqslant 3\) Lemma 1.1 will sometimes suffice, so we will not need to additionally assume (1.5).
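The prototypical example to keep in mind is \(\phi (\lambda ) = \lambda ^{\alpha }\), \(0< \alpha < 1\): it is a complete Bernstein function with \(\phi (1) = 1\) and it satisfies both (1.4) and (1.5) with \(\gamma _1 = \gamma _2 = \alpha \) and \(a_1 = a_2 = 1\). The corresponding subordinate random walk is a random-walk analogue of the rotationally invariant \(2\alpha \)-stable process.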

The main result of this paper is a scale-invariant Harnack inequality for subordinate random walks. The proof will be given in the last section. Before we state the result, we define the notion of a harmonic function with respect to the subordinate random walk X.

Definition 1.2

We say that a function \({f}:{\mathbb {Z}^d}\rightarrow {[0, \infty )}\) is harmonic in \(B \subset \mathbb {Z}^d\), with respect to X, if

$$\begin{aligned} f(x) = Pf(x) = \sum _{y \in \mathbb {Z}^d} {p(x, y) f(y)}, \quad \forall \ x \in B. \end{aligned}$$

This definition is equivalent to the mean-value property in terms of the exit from a finite subset of \(\mathbb {Z}^d\): If \(B \subset \mathbb {Z}^d\) is finite then \({f}:{\mathbb {Z}^d}\rightarrow {[0, \infty )}\) is harmonic in B, with respect to X, if and only if \(f(x) = \mathbb {E}_x[f(X_{\tau _B})]\) for every \(x \in B\), where \(\tau _B := \inf \{n \geqslant 1 : X_n \notin B\}\).

For \(x \in \mathbb {Z}^d\) and \(r > 0\) we define \(B(x, r) := \{y \in \mathbb {Z}^d : \vert y - x \vert < r\}\). We use shorthand notation \(B_r\) for B(0, r).

Theorem 1.3

(Harnack inequality) Let \(X = (X_n)_{n \geqslant 0}\) be a subordinate random walk in \(\mathbb {Z}^d\), \(d \geqslant 1\), with \(\phi \) a complete Bernstein function satisfying (1.4) and (1.5). For each \(a < 1\), there exists a constant \(c_a < \infty \) such that if \({f}:{\mathbb {Z}^d}\rightarrow {[0, \infty )}\) is harmonic in \(B(x, n)\), with respect to X, for \(x \in \mathbb {Z}^d\) and \(n \in \mathbb {N}\), then

$$\begin{aligned} f(x_1) \leqslant c_a f(x_2), \quad \forall \ x_1, x_2 \in B(x, an). \end{aligned}$$

Notice that the constant \(c_a\) is uniform over all \(n \in \mathbb {N}\). That is why we call this result a scale-invariant Harnack inequality.

This problem has already been treated by several authors: the Harnack inequality was proved for the symmetric simple random walk in \(\mathbb {Z}^d\) [9, Theorem 1.7.2] and for random walks with steps of infinite range, but under certain assumptions on the Green function and restrictions such as a finite second moment of the step [2, 10].

The notion of discrete subordination was developed in [6] and discussed in detail in [4], but under different assumptions on \(\phi \) than ours. Using discrete subordination, we can obtain random walks whose steps have infinite second moment (see Remark 3.3). A Harnack inequality for specific random walks with steps of infinite second moment was proved in [3], and the random walk considered there can also be obtained by discrete subordination.

In Sect. 2 we state an important result about the gamma function that we use later, we discuss under which conditions the subordinate random walk is transient, and we introduce the functions g and j and examine their properties. The estimates of the one-step transition probabilities of the subordinate random walk are given in Sect. 3. In Sect. 4 we derive estimates for the Green function. This is a valuable result which answers a question related to the one posed in [5, Remark 1]. Using the estimates developed in Sects. 3 and 4 and following [11, Sect. 4], in Sect. 5 we find estimates for the Green function of a ball. In Sect. 6 we introduce the Poisson kernel and prove the Harnack inequality.

Throughout this paper, \(d\geqslant 1\) and the constants \(a_1, a_2\), \(\gamma _1, \gamma _2\) and \(C_i\), \(i = 1, 2, \ldots , 9\), will be fixed. We use \(c_1, c_2, \ldots \) to denote generic constants whose exact values are not important and can change from one appearance to another. The labeling of the constants \(c_1, c_2, \ldots \) starts anew in the statement of each result. The dependence of a constant c on the dimension d will not be mentioned explicitly. We use “:=” to denote a definition, which is read as “is defined to be”. We use \(\hbox {d}x\) to denote the Lebesgue measure in \(\mathbb {R}^d\), and we denote the Euclidean distance between x and y in \(\mathbb {R}^d\) by \(\vert x - y \vert \). For \(a, b \in \mathbb {R}\), \(a \wedge b := \min \{a, b\}\) and \(a \vee b := \max \{a, b\}\). For any two positive functions f and g, we write \(f \asymp g\), read as “f is comparable to g”, if there exist constants \(c_1, c_2 > 0\) such that \(c_1 f \leqslant g \leqslant c_2 f\) on their common domain of definition. We also write \(f \sim g\) if \(\lim _{x \rightarrow \infty } f(x) / g(x) = 1\).

2 Preliminaries

In this section, we first state an important result about the ratio of gamma functions that is needed later. Secondly, we discuss under which conditions the subordinate random walk is transient. At the end of this section, we define the functions g and j that we use later and prove some of their properties.

2.1 Ratio of Gamma Functions

Lemma 2.1

Let \(\Gamma (x, a) = \int _{a}^{\infty } {t^{x - 1} e^{-t} \hbox {d}t}\) be the upper incomplete gamma function. Then

$$\begin{aligned} \lim _{x \rightarrow \infty } {\frac{\Gamma (x + 1, x)}{\Gamma (x + 1)}} = \frac{1}{2}. \end{aligned}$$
(2.1)

Proof

Using the well-known Stirling formula

$$\begin{aligned} \Gamma (x + 1) \sim \sqrt{2 \pi x}\ x^x e^{-x}, \quad x \rightarrow \infty \end{aligned}$$
(2.2)

and [1, Formula 6.5.35], which states

$$\begin{aligned} \Gamma (x + 1, x) \sim \sqrt{2^{-1} \pi x} \ x^x e^{-x}, \quad x \rightarrow \infty \end{aligned}$$

we get

$$\begin{aligned} \lim _{x \rightarrow \infty } {\frac{\Gamma (x + 1, x)}{\Gamma (x + 1)}} = \lim _{x \rightarrow \infty } \frac{\sqrt{2^{-1} \pi x}\ x^x e^{-x}}{\sqrt{2 \pi x} \ x^x e^{-x}} = \frac{1}{2}. \end{aligned}$$

\(\square \)
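The limit (2.1) can also be checked numerically; below is a small sketch (ours, assuming SciPy is available), using the fact that scipy.special.gammaincc(a, x) is the regularized upper incomplete gamma function \(\Gamma (a, x)/\Gamma (a)\):

```python
from scipy.special import gammaincc  # regularized upper incomplete gamma: Gamma(a, x) / Gamma(a)

for x in (10, 100, 1000, 10000):
    print(x, gammaincc(x + 1, x))  # tends to 1/2 as x grows, illustrating (2.1)
```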

2.2 Transience of Subordinate Random Walks

Our considerations only make sense if the random walk we defined is transient; in the case of a recurrent random walk, the Green function takes the value \(+\infty \) for every argument x. We will use the Chung–Fuchs theorem to show under which condition the subordinate random walk is transient. Denote by \(\varphi _{X_1}\) the characteristic function of one step of the subordinate random walk. We want to prove that there exists \(\delta > 0\) such that

$$\begin{aligned} \int _{{( -\delta , \delta )}^d} {{\text {Re}}\left( \frac{1}{1 - \varphi _{X_1} (\theta )} \right) \hbox {d}\theta } < +\infty . \end{aligned}$$

The law of the random variable \(X_1\) is given by (1.3). We denote one step of the simple symmetric random walk \((Z_n)_{n \geqslant 0}\) by \(Y_1\) and the characteristic function of that random variable by \(\varphi \). Assuming \(\vert \theta \vert < 1\) we have

$$\begin{aligned} \varphi _{X_1} (\theta )&= \mathbb {E}\left[ e^{i \theta \cdot X_1} \right] = \sum _{x \in \mathbb {Z}^d} {e^{i \theta \cdot x} \sum _{m = 1}^{\infty } {\int _{( 0, +\infty )} {\frac{t^m}{m!} e^{-t} \mu (\hbox {d}t)} \mathbb {P}(Z_m = x)}} \\&= \sum _{m = 1}^{\infty } {\int _{( 0, +\infty )} {\frac{t^m}{m!} e^{-t} \mu (\hbox {d}t)} \sum _{x \in \mathbb {Z}^d} {e^{i \theta \cdot x} \mathbb {P}(Z_m = x)}} \\&= \sum _{m = 1}^{\infty } {\int _{( 0, +\infty )} {\frac{t^m}{m!} e^{-t} \mu (\hbox {d}t)} (\varphi (\theta ))^m} \\&= \int _{( 0, +\infty )} {\left( e^{t \varphi (\theta )} - 1 \right) e^{-t} \mu (\hbox {d}t)} \\&= \phi (1) - \phi (1 - \varphi (\theta )) = 1 - \phi (1 - \varphi (\theta )). \end{aligned}$$

From [9, Sect. 1.2, page 13] we have

$$\begin{aligned} \varphi (\theta ) = \frac{1}{d} \sum _{m = 1}^{d} {\cos (\theta _m)}, \quad \theta = (\theta _1, \theta _2, \ldots , \theta _d). \end{aligned}$$

This function is real-valued, so

$$\begin{aligned} \int _{{( -\delta , \delta )}^d} {{\text {Re}}\left( \frac{1}{1 - \varphi _{X_1}(\theta )} \right) \hbox {d}\theta } = \int _{{( -\delta , \delta )}^d} {\frac{1}{\phi (1 - \varphi (\theta ))} \hbox {d}\theta }. \end{aligned}$$

From Taylor’s theorem it follows that there exists \(0 < a \leqslant 1\) such that

$$\begin{aligned} \vert \varphi (\theta ) \vert = \varphi (\theta ) \leqslant 1 - \frac{1}{4d} \vert \theta \vert ^2, \quad \theta \in B(0, a). \end{aligned}$$
(2.3)

Now we take \(\delta > 0\) such that \({( -\delta , \delta )}^d \subset B(0, a)\). From (2.3), using the fact that \(\phi \) is increasing, we get

$$\begin{aligned} \frac{1}{\phi \left( 1 - \varphi (\theta ) \right) } \leqslant \frac{1}{\phi \left( \vert \theta \vert ^2 / 4d \right) }, \quad \theta \in B(0, a). \end{aligned}$$

Hence,

$$\begin{aligned} \int _{{( -\delta , \delta )}^d} {\frac{1}{\phi (1 - \varphi (\theta ))} \hbox {d}\theta }&\leqslant \int _{{( -\delta , \delta )}^d} {\frac{1}{\phi \left( \vert \theta \vert ^2 / 4d \right) } \hbox {d}\theta } \leqslant \int _{B(0, a)} {\frac{\phi (\vert \theta \vert ^2)}{\phi \left( \vert \theta \vert ^2 / 4d \right) } \frac{1}{\phi (\vert \theta \vert ^2)} \hbox {d}\theta } \\&\leqslant a_2 (4d)^{\gamma _2} \int _{B(0, a)} {\frac{1}{\phi (\vert \theta \vert ^2)} \hbox {d}\theta } = c_1 (4d)^{\gamma _2} \int _{0}^{a} {\frac{r^{d - 1}}{\phi (r^2)} \hbox {d}r} \\&= \frac{c_1 (4d)^{\gamma _2}}{\phi (a)} \int _{0}^{a} {r^{d - 1} \frac{\phi (a)}{\phi (r^2)} \hbox {d}r} \\&\leqslant \frac{c_1 a_2 (4ad)^{\gamma _2}}{\phi (a)} \int _{0}^{a} {r^{d - 2 \gamma _2 - 1} \hbox {d}r} \end{aligned}$$

and the last integral converges for \(d - 2 \gamma _2 - 1 > -1\). Hence, the subordinate random walk is transient whenever \(\gamma _2 < d/2\). Notice that in the case \(d \geqslant 3\) we have \(\gamma _2 < d/2\) even with \(\gamma _2 = 1\), i.e., with the trivial scaling provided by Lemma 1.1. That is the reason why we sometimes do not need (1.5) when proving results in dimensions greater than or equal to 3. We will always state whether we need (1.5) for all dimensions or only for dimensions \(d \leqslant 2\).
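For example, for \(\phi (\lambda ) = \lambda ^{\alpha }\) we have \(\gamma _2 = \alpha \), so the subordinate random walk is transient for every \(d \geqslant 2\) (since \(\alpha < 1 \leqslant d/2\)), while in dimension \(d = 1\) the above criterion requires \(\alpha < 1/2\).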

2.3 Properties of Functions g and j

Let \({g}:{( 0, +\infty )}\rightarrow {( 0, +\infty )}\) be defined by

$$\begin{aligned} g(r) = \frac{1}{r^d \phi (r^{-2})} \end{aligned}$$
(2.4)

and let \({j}:{( 0, +\infty )}\rightarrow {( 0, +\infty )}\) be defined by

$$\begin{aligned} j(r) = r^{-d} \phi (r^{-2}). \end{aligned}$$
(2.5)

We use these functions in numerous places in our paper. In this subsection, we present some of their properties that we need later.

Lemma 2.2

Assume (1.5), if \(d \leqslant 2\), and let \(1 \leqslant r \leqslant q\). Then \(g(r) \geqslant a_{2}^{-1} g(q)\).

Proof

Using (1.5) and the fact that \(d > 2\gamma _2\) we have

$$\begin{aligned} g(r) = \frac{1}{\frac{r^d}{q^d} q^d \phi (q^{-2}) \frac{\phi (r^{-2})}{\phi (q^{-2})}} \geqslant \frac{1}{a_2} \left( \frac{q}{r} \right) ^{d - 2\gamma _2} g(q) \geqslant \frac{1}{a_2} g(q). \end{aligned}$$

\(\square \)

We prove a similar assertion for the function j.

Lemma 2.3

Assume (1.4) and let \(1 \leqslant r \leqslant q\). Then \(j(r) \geqslant a_1 j(q)\).

Proof

Using (1.4) we have

$$\begin{aligned} j(r) = \frac{r^{-d}}{q^{-d}} q^{-d} \phi (q^{-2}) \frac{\phi (r^{-2})}{\phi (q^{-2})} \geqslant a_1 \left( \frac{q}{r} \right) ^{d + 2\gamma _1} j(q) \geqslant a_1 j(q). \end{aligned}$$

\(\square \)

Using (1.4), (1.5) and Lemma 1.1, we can easily prove many different results about the functions g and j. We state only those that we need in the remaining part of our paper. For the first lemma, we do not need any additional assumptions on the function \(\phi \); for the second one we need (1.4), and for the third one we need (1.5).

Lemma 2.4

Let \(r \geqslant 1\). If \(0 < a \leqslant 1\) then

$$\begin{aligned} j(ar)\leqslant & {} a^{-d - 2} j(r), \end{aligned}$$
(2.6)
$$\begin{aligned} g(ar)\geqslant & {} a^{-d + 2} g(r). \end{aligned}$$
(2.7)

If \(a \geqslant 1\) then

$$\begin{aligned} j(ar) \geqslant a^{-d - 2} j(r). \end{aligned}$$
(2.8)
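For instance, (2.6) follows from Lemma 1.1 applied with \(\lambda = a^{-2} \geqslant 1\):

$$\begin{aligned} j(ar) = a^{-d} r^{-d} \phi (a^{-2} r^{-2}) \leqslant a^{-d} r^{-d}\, a^{-2} \phi (r^{-2}) = a^{-d - 2} j(r), \end{aligned}$$

and (2.7) and (2.8) are obtained in the same way.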

Lemma 2.5

Assume (1.4) and let \(0 < a \leqslant 1\) and \(r \geqslant 1\) be such that \(ar \geqslant 1\). Then

$$\begin{aligned} g(ar) \leqslant \frac{g(r)}{a_{1} a^{d - 2\gamma _1}}. \end{aligned}$$
(2.9)

Lemma 2.6

Assume (1.5) and let \(r \geqslant 1\). If \(0 < a \leqslant 1\) is such that \(ar \geqslant 1\), then

$$\begin{aligned} g(ar) \geqslant \frac{g(r)}{a_{2} a^{d - 2\gamma _2}}. \end{aligned}$$
(2.10)

If \(a \geqslant 1\) then

$$\begin{aligned} g(ar) \leqslant \frac{a_2}{a^{d - 2\gamma _2}} g(r). \end{aligned}$$
(2.11)

3 Transition Probability Estimates

In this section, we investigate the behavior of the expression \(\mathbb {P}(X_1 = z)\) and prove that \(\mathbb {P}(X_1 = z) \asymp j(\vert z \vert )\), \(z \ne 0\). First we have to examine the behavior of the coefficients \(c_{m}^{\phi }\).

Lemma 3.1

Assume (1.4) and (1.5) and let \(c_{m}^{\phi }\) be as in (1.1). Then

$$\begin{aligned} c_{m}^{\phi } \asymp \frac{\phi (m^{-1})}{m}, \quad m\in \mathbb {N}. \end{aligned}$$
(3.1)

Proof

Since \(\phi \) is a complete Bernstein function, there exists a completely monotone density \(\mu (t)\) such that

$$\begin{aligned} c_{m}^{\phi } = \int _{0}^{\infty } {\frac{t^m}{m!} e^{-t} \mu (t) \mathrm{d}t}, \quad m\in \mathbb {N}. \end{aligned}$$

From [8, Proposition 2.5] we have

$$\begin{aligned} \mu (t) \leqslant (1 - 2e^{-1})^{-1} t^{-1} \phi (t^{-1}) = c_1 t^{-1} \phi (t^{-1}), \quad t > 0 \end{aligned}$$
(3.2)

and

$$\begin{aligned} \mu (t) \geqslant c_2 t^{-1} \phi (t^{-1}), \quad t \geqslant 1. \end{aligned}$$
(3.3)

Inequality (3.3) holds if (1.4) and (1.5) are satisfied, while for inequality (3.2) we do not need any scaling conditions. Using the monotonicity of \(\mu \), (2.1) and (3.3), we have

$$\begin{aligned} c_{m}^{\phi } \geqslant \frac{\mu (m)}{m!} \int _{0}^m {t^m e^{-t} \mathrm{d}t} = \mu (m) \left( 1 - \frac{\Gamma (m + 1, m)}{\Gamma (m + 1)} \right) \geqslant \frac{1}{4} \mu (m) \geqslant \frac{c_2}{4} \frac{\phi (m^{-1})}{m} \end{aligned}$$

for m large enough. On the other hand, using inequality (3.2), the monotonicity of \(\mu \) and Lemma 1.1, we get

$$\begin{aligned} c_{m}^{\phi }&\leqslant \frac{1}{m!} \int _{0}^{m} {t^m e^{-t} c_1 \frac{\phi (t^{-1})}{t} \mathrm{d}t} + \frac{\mu (m)}{m!} \int _{m}^{\infty } {t^m e^{-t} \mathrm{d}t} \\&\leqslant \frac{c_1}{m!} \phi (m^{-1}) \int _{0}^{m} {t^{m - 1} e^{-t} \frac{\phi (t^{-1})}{\phi (m^{-1})} \mathrm{d}t} + \frac{\mu (m)}{m!} \int _{0}^{\infty } {t^m e^{-t} \mathrm{d}t} \\&\leqslant c_1 \phi (m^{-1}) \frac{1}{\Gamma (m)} \int _{0}^{\infty } {t^{m - 2} e^{-t} \mathrm{d}t} + \mu (m) = c_1 \phi (m^{-1}) \frac{\Gamma (m - 1)}{\Gamma (m)} + \mu (m) \\&\leqslant c_3 \frac{\phi (m^{-1})}{m} + c_1 \frac{\phi (m^{-1})}{m} = c_4 \frac{\phi (m^{-1})}{m}. \end{aligned}$$

Hence, we have

$$\begin{aligned} \frac{c_2}{4} \frac{\phi (m^{-1})}{m} \leqslant c_{m}^{\phi } \leqslant c_4 \frac{\phi (m^{-1})}{m} \end{aligned}$$

for m large enough; adjusting the constants, we obtain (3.1) for all \(m \in \mathbb {N}\). \(\square \)
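In the stable case \(\phi (\lambda ) = \lambda ^{\alpha }\), (3.1) reads \(c_{m}^{\phi } \asymp m^{-1 - \alpha }\); in fact, there \(c_{m}^{\phi } = (-1)^{m + 1} \binom{\alpha }{m}\) exactly, as can be seen from the generating function \(\sum _{m} {c_{m}^{\phi } s^m} = 1 - \phi (1 - s) = 1 - (1 - s)^{\alpha }\) (cf. the proof of Lemma 4.1).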

We are now ready to examine the expression \(\mathbb {P}(X_1 = z)\).

Proposition 3.2

Assume (1.4) and (1.5). Then

$$\begin{aligned} \mathbb {P}(X_1 = z) \asymp \vert z \vert ^{-d} \phi (\vert z \vert ^{-2}), \quad z\ne 0. \end{aligned}$$

Proof

Using (1.3) and the fact that \(\mathbb {P}(Z_m = z) = 0\) for \(\vert z \vert > m\), we have

$$\begin{aligned} \mathbb {P}(X_1 = z) = \sum _{m \geqslant \vert z \vert } {c_{m}^{\phi } \mathbb {P}(Z_m = z)}. \end{aligned}$$

To get the upper bound for the expression \(\mathbb {P}(X_1 = z)\) we use [7, Theorem 2.1], which states that there are constants \(C' > 0\) and \(C > 0\) such that

$$\begin{aligned} \mathbb {P}(Z_m = z) \leqslant C' m^{-\frac{d}{2}} e^{-\frac{\vert z \vert ^2}{C m}}, \quad \forall \, z \in \mathbb {Z}^d,\, \forall \, m \in \mathbb {N}. \end{aligned}$$
(3.4)

Together with (3.1) this result yields

$$\begin{aligned} \mathbb {P}(X_1 = z)&\leqslant \sum _{m \geqslant \vert z \vert } {c_1 \frac{\phi (m^{-1})}{m} C' m^{-\frac{d}{2}} e^{-\frac{\vert z \vert ^2}{C m}}} \leqslant c_2 \int _{\vert z \vert }^{\infty } {\phi (t^{-1}) t^{-\frac{d}{2} - 1} e^{-\frac{\vert z \vert ^2}{C t}} \mathrm{d}t} \\&= c_2 \int _{0}^{\frac{\vert z \vert }{C}} {\phi (C s\vert z \vert ^{-2}) \left( \frac{\vert z \vert ^2}{C s} \right) ^{-\frac{d}{2} - 1} e^{-s} \ \frac{\vert z \vert ^2}{C s^2} ds} \\&= c_3 \vert z \vert ^{-d} \left( \int _{0}^{\frac{1}{C}} {\phi (C s\vert z \vert ^{-2}) s^{\frac{d}{2} - 1} e^{-s} ds} + \int _{\frac{1}{C}}^{\frac{\vert z \vert }{C}} {\phi (C s\vert z \vert ^{-2}) s^{\frac{d}{2} - 1} e^{-s} ds} \right) \\&=: c_3 \vert z \vert ^{-d} (I_1(z) + I_2(z)). \end{aligned}$$

Let us first examine \(I_1(z)\). Using (1.4), we get

$$\begin{aligned} I_1(z) = \phi (\vert z \vert ^{-2}) \int _{0}^{\frac{1}{C}} {\frac{\phi (C s\vert z \vert ^{-2})}{\phi (\vert z \vert ^{-2})} s^{\frac{d}{2} - 1} e^{-s} ds} \leqslant c_4 \phi (\vert z \vert ^{-2}). \end{aligned}$$

Using Lemma 1.1 instead of (1.4) and replacing the upper limit in the integral by \(\infty \), we get \(I_2(z) \leqslant c_5 \phi (\vert z \vert ^{-2})\). Hence, \(\mathbb {P}(X_1 = z) \leqslant c_6 \vert z \vert ^{-d} \phi (\vert z \vert ^{-2})\).

In finding the matching lower bound for \(\mathbb {P}(X_1 = z)\), the periodicity of the simple random walk plays a very important role. We write \(n \leftrightarrow x\) if n and x have the same parity, i.e., if \(n + x_1 + x_2 + \cdots + x_d\) is even. Directly from [9, Proposition 1.2.5], we get

$$\begin{aligned} \mathbb {P}(Z_m = z) \geqslant c_7 m^{-\frac{d}{2}} e^{-\frac{d \vert z \vert ^2}{2m}} \end{aligned}$$
(3.5)

for \(0 \leftrightarrow z \leftrightarrow m\) and \(\vert z \vert \leqslant m^{\alpha }\), \(\alpha < 2/3\). In the case when \(1 \leftrightarrow z \leftrightarrow m\) we have

$$\begin{aligned} \mathbb {P}(Z_m = z) = \frac{1}{2d} \sum _{i = 1}^d {[\mathbb {P}(Z_{m - 1} = z + e_i) + \mathbb {P}(Z_{m - 1} = z - e_i)]}. \end{aligned}$$
(3.6)

By combining (3.5) and (3.6), we can easily get

$$\begin{aligned} \mathbb {P}(Z_m = z) \geqslant c_{8} m^{-\frac{d}{2}} e^{-\frac{\vert z \vert ^2}{c m}}, \quad \vert z \vert \leqslant m^{\frac{1}{2}}, 1 \leftrightarrow z \leftrightarrow m. \end{aligned}$$
(3.7)

We find the lower bound for \(\mathbb {P}(X_1 = z)\) when \(z \leftrightarrow 0\) by using (3.5); the proof when \(z \leftrightarrow 1\) is analogous, using (3.7). If \(z \leftrightarrow 0\), then \(\mathbb {P}(Z_m = z) = 0\) for \(m = 2l - 1\), \(l \in \mathbb {N}\). Hence,

$$\begin{aligned} \mathbb {P}(X_1 = z)&\geqslant \sum _{m \geqslant \vert z \vert ^2, m = 2l} {c_{9} \frac{\phi (m^{-1})}{m}} m^{-\frac{d}{2}} e^{-\frac{d\vert z \vert ^2}{2m}} \\&= c_{9} \sum _{l \geqslant \vert z \vert ^2 / 2} {\frac{\phi ((2l)^{-1})}{2l} (2l)^{-\frac{d}{2}} e^{-\frac{d\vert z \vert ^2}{4l}}} \\&\geqslant c_{10} \int _{\vert z \vert ^2 / 2}^{\infty } {\frac{\phi ((2t)^{-1})}{2t} (2t)^{-\frac{d}{2}} e^{-\frac{d\vert z \vert ^2}{4t}} \hbox {d}t} \\&= \frac{c_{10}}{2} \int _{\vert z \vert ^2}^{\infty } {\phi (t^{-1}) t^{-\frac{d}{2} - 1} e^{-\frac{d\vert z \vert ^2}{2t}} \hbox {d}t} \\&= \frac{c_{10}}{2} \int _{0}^{\frac{d}{2}} {\phi \left( \frac{2s}{d \vert z \vert ^2} \right) \left( \frac{d \vert z \vert ^2}{2s} \right) ^{-\frac{d}{2} - 1} e^{-s}\ \frac{d \vert z \vert ^2}{2s^2} \hbox {d}s} \\&= c_{11} \vert z \vert ^{-d} \phi (\vert z \vert ^{-2}) \int _{0}^{\frac{d}{2}} {\frac{\phi \left( \frac{2s}{d} \vert z \vert ^{-2} \right) }{\phi (\vert z \vert ^{-2})} s^{\frac{d}{2} - 1} e^{-s} \hbox {d}s} \\&\geqslant c_{11} \vert z \vert ^{-d} \phi (\vert z \vert ^{-2}) \int _{0}^{\frac{d}{2}} {\frac{2}{d} s^{\frac{d}{2}} e^{-s} \hbox {d}s} = c_{12} \vert z \vert ^{-d} \phi (\vert z \vert ^{-2}), \end{aligned}$$

where in the last line we used Lemma 1.1. \(\square \)

Remark 3.3

It follows immediately from Proposition 3.2 that the second moment of the step \(X_1\) is infinite: by (1.5) with \(R = 1\) we have \(\phi (\vert z \vert ^{-2}) \geqslant a_{2}^{-1} \vert z \vert ^{-2 \gamma _2}\), so \(\sum _{z \ne 0} {\vert z \vert ^2\, \mathbb {P}(X_1 = z)} \geqslant c \sum _{z \ne 0} {\vert z \vert ^{2 - 2\gamma _2 - d}} = \infty \) because \(\gamma _2 < 1\).

4 Green Function Estimates

The Green function of X is defined by \(G(x, y) = G(y - x)\), where

$$\begin{aligned} G(y) = \mathbb {E}\left[ \sum \limits _{n = 0}^{\infty } {\mathbb {1}_{\{X_n = y\}}}\right] . \end{aligned}$$

We can rewrite this in the following way:

$$\begin{aligned} G(y)&= \sum _{n = 0}^{\infty } {\mathbb {P}(X_n = y)} = \sum _{n = 0}^{\infty } {\mathbb {P}(Z_{T_n} = y)} = \sum _{n = 0}^{\infty } \sum _{m = 0}^{\infty } {\mathbb {P}(Z_m = y) \mathbb {P}(T_n = m)} \nonumber \\&= \sum _{m = 0}^{\infty } \sum _{n = 0}^{\infty } {\mathbb {P}(T_n = m) \mathbb {P}(Z_m = y)} = \sum _{m = 0}^{\infty } {c(m) \mathbb {P}(Z_m = y)} \end{aligned}$$
(4.1)

where

$$\begin{aligned} c(m) = \sum _{n = 0}^{\infty } {\mathbb {P}(T_n = m)}, \end{aligned}$$
(4.2)

and \(T_n\) is as before. Now we investigate the behavior of the sequence c(m). For this, we do not need \(\phi \) to be a complete Bernstein function; it suffices to assume that \(\phi \) is a special Bernstein function. Under that assumption, we have

$$\begin{aligned} \frac{1}{\phi (\lambda )} = \int _{( 0, \infty )} {e^{-\lambda t} u(t) \hbox {d}t} \end{aligned}$$
(4.3)

for some non-increasing function \({u}:{( 0, \infty )}\rightarrow {( 0, \infty )}\) satisfying \(\int _{0}^{1}{u(t)\hbox {d}t} < \infty \), see [13, Theorem 11.3.].
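For example, for \(\phi (\lambda ) = \lambda ^{\alpha }\), \(0< \alpha < 1\), one can take \(u(t) = t^{\alpha - 1}/\Gamma (\alpha )\), since \(\int _{0}^{\infty } {e^{-\lambda t} t^{\alpha - 1} \hbox {d}t} = \Gamma (\alpha ) \lambda ^{-\alpha }\).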

Lemma 4.1

Let c(m) be as in (4.2). Then

$$\begin{aligned} c(m) = \frac{1}{m!} \int _{( 0, \infty )} {t^m e^{-t} u(t) \hbox {d}t}, \quad m \in \mathbb {N}_0. \end{aligned}$$
(4.4)

Proof

We follow the proof of [4, Theorem 2.3]. Define \(M(x) = \sum _{m \leqslant x} {c(m)}\), \(x \in \mathbb {R}\). The Laplace transform \(\mathcal {L}(M)\) of the measure generated by M is equal to

$$\begin{aligned} \mathcal {L}(M)(\lambda )&= \int _{[0, \infty )} {e^{-\lambda x} \hbox {d}M(x)} = \sum _{m = 0}^{\infty } {c(m) e^{-\lambda m}} = \sum _{m = 0}^{\infty } {e^{-\lambda m}} \sum _{n = 0}^{\infty } {\mathbb {P}(T_n = m)} \nonumber \\&= \sum _{n = 0}^{\infty } \sum _{m = 0}^{\infty } {e^{-\lambda m} \mathbb {P}(T_n = m)} \nonumber \\&= \sum _{n = 0}^{\infty } {\mathbb {E}[e^{-\lambda T_n}]} = \sum _{n = 0}^{\infty } {\left( \mathbb {E}[e^{-\lambda R_1}] \right) ^n} = \frac{1}{1 - \mathbb {E}[e^{-\lambda R_1}]}. \end{aligned}$$
(4.5)

Now we calculate \(\mathbb {E}[e^{-\lambda R_1}]\):

$$\begin{aligned} \mathbb {E}[e^{-\lambda R_1}]&= \sum _{m = 1}^{\infty } {e^{-\lambda m} \int _{( 0, \infty )} {\frac{t^m}{m!} e^{-t}} \mu (\hbox {d}t)} \\&= \int _{( 0, \infty )} {\left( e^{te^{-\lambda }} - 1 \right) e^{-t} \mu (\hbox {d}t)} = 1 - \phi (1 - e^{-\lambda }), \end{aligned}$$

where we used \(\phi (1) = 1\) in the last equality. Hence, \(\mathcal {L}(M)(\lambda ) = 1 / \phi (1 - e^{-\lambda })\). On the other hand

$$\begin{aligned} \sum _{m = 0}^{\infty }&{\frac{1}{m!} \int _{(0, \infty )} {t^m e^{-t} u(t) \hbox {d}t}\, e^{-\lambda m}} = \int _{(0, \infty )} {e^{-t} \sum _{m = 0}^{\infty } {\frac{(t e^{-\lambda })^m}{m!}} u(t) \hbox {d}t} \nonumber \\&= \int _{(0, \infty )} {e^{-t(1 - e^{-\lambda })} u(t) \hbox {d}t} = \frac{1}{\phi (1 - e^{-\lambda })}. \end{aligned}$$
(4.6)

Since \(\mathcal {L}(M)(\lambda ) = 1/\phi (1 - e^{-\lambda })\), comparing (4.5) and (4.6) we get

$$\begin{aligned} \sum _{m = 0}^{\infty } {c(m) e^{-\lambda m}} = \sum _{m = 0}^{\infty } {\frac{1}{m!} \int _{( 0, \infty )} {t^m e^{-t} u(t) \hbox {d}t} \, e^{-\lambda m}}. \end{aligned}$$

The statement of the lemma now follows from the uniqueness of the Laplace transform. \(\square \)

Lemma 4.2

Assume (1.4). Then

$$\begin{aligned} c(m) \asymp \frac{1}{m \phi (m^{-1})}, \quad m \in \mathbb {N}. \end{aligned}$$

Proof

Let u be as in Lemma 4.1. From [8, Corollary 2.4.] we have

$$\begin{aligned} u(t) \leqslant (1 - e^{-1})^{-1} t^{-1} \phi (t^{-1})^{-1} = c_1 t^{-1} \phi (t^{-1})^{-1}, \quad t > 0. \end{aligned}$$
(4.7)

and

$$\begin{aligned} u(t) \geqslant c_2 t^{-1} \phi (t^{-1})^{-1}, \quad t \geqslant 1. \end{aligned}$$
(4.8)

Inequality (4.8) holds if (1.4) is satisfied, while for inequality (4.7) we do not need any scaling conditions. Using the monotonicity of u, (2.1) and (4.8), we get

$$\begin{aligned} c(m)\geqslant & {} u(m) \frac{1}{m!} \int _{0}^{m} {t^m e^{-t} \hbox {d}t} \\= & {} u(m) \left( 1 - \frac{\Gamma (m + 1, m)}{\Gamma (m + 1)} \right) \geqslant \frac{1}{4} u(m) \geqslant \frac{c_3}{m \phi (m^{-1})}, \end{aligned}$$

for m large enough. Now we find the upper bound for c(m):

$$\begin{aligned} c(m)&\leqslant \frac{c_1}{m!} \int _{0}^{m} {t^m e^{-t} \frac{1}{t \phi (t^{-1})} \hbox {d}t} + \frac{u(m)}{m!} \int _{0}^{\infty } {t^m e^{-t} \hbox {d}t} \\&\leqslant \frac{c_1}{m! \phi (m^{-1})} \int _{0}^{m} {t^{m - 1} e^{-t} \hbox {d}t} + u(m) \leqslant \frac{c_4}{m \phi (m^{-1})} \end{aligned}$$

since \(\phi \) is an increasing function. Hence,

$$\begin{aligned} \frac{c_3}{m \phi (m^{-1})} \leqslant c(m) \leqslant \frac{c_4}{m \phi (m^{-1})} \end{aligned}$$

for m large enough. We can adjust the constants so that the statement of the lemma holds for every \(m \in \mathbb {N}\). \(\square \)

Theorem 4.3

Assume (1.4) and, if \(d \leqslant 2\), assume additionally (1.5). Then

$$\begin{aligned} G(x) \asymp \frac{1}{\vert x \vert ^d \phi (\vert x \vert ^{-2})}, \quad \vert x \vert \geqslant 1. \end{aligned}$$
(4.9)

Proof

We assume \(\vert x \vert \geqslant 1\) throughout the proof. In (4.1) we showed that \(G(x) = \sum _{m = 1}^{\infty } {c(m) p(m, x)}\), where \(p(m, x) = \mathbb {P}(Z_m = x)\). Let \(q(m, x) = 2 \left( d / (2 \pi m) \right) ^{\frac{d}{2}} e^{-\frac{d \vert x \vert ^2}{2m}}\) and \(E(m, x) = p(m, x) - q(m, x)\). By [9, Theorem 1.2.1],

$$\begin{aligned} \vert E(m, x) \vert \leqslant c_1 m^{-\frac{d}{2}} / \vert x \vert ^2. \end{aligned}$$
(4.10)

Since \(p(m, x) = 0\) for \(m <\vert x \vert \), we have

$$\begin{aligned} G(x) = \sum _{m > \vert x \vert ^2} {c(m) p(m, x)} + \sum _{\vert x \vert \leqslant m \leqslant \vert x \vert ^2} {c(m) p(m, x)} =: J_1(x) + J_2(x). \end{aligned}$$

First we estimate

$$\begin{aligned} J_1(x) = \sum _{m> \vert x \vert ^2} {c(m) q(m, x)} + \sum _{m > \vert x \vert ^2} {c(m) E(m, x)} =: J_{11}(x) + J_{12}(x). \end{aligned}$$

By Lemma 4.2, (4.10) and (1.5)

$$\begin{aligned} \vert J_{12}(x) \vert&\leqslant c_2 \sum _{m> \vert x \vert ^2} {\frac{1}{m \phi (m^{-1})} \frac{m^{-\frac{d}{2}}}{\vert x \vert ^2}} = \frac{c_2}{\vert x \vert ^2 \phi (\vert x \vert ^{-2})} \sum _{m > \vert x \vert ^2} {\frac{\phi (\vert x \vert ^{-2})}{\phi (m^{-1})} m^{-\frac{d}{2} - 1}} \\&\leqslant \frac{c_3 \vert x \vert ^{-2 \gamma _2}}{\vert x \vert ^2 \phi (\vert x \vert ^{-2})} \int _{\vert x \vert ^2}^{\infty } {t^{\gamma _2 - \frac{d}{2} - 1} \hbox {d}t} = \frac{c_4}{\vert x \vert ^2} \frac{1}{\vert x \vert ^d \phi (\vert x \vert ^{-2})}. \end{aligned}$$

Now we have

$$\begin{aligned} \lim _{\vert x \vert \rightarrow \infty } {\vert x \vert ^d \phi (\vert x \vert ^{-2}) \vert J_{12}(x) \vert } = 0. \end{aligned}$$

By Lemma 4.2, (1.4) and (1.5) (in the display below, \(\gamma _i\) stands for \(\gamma _1\) in the lower bounds and for \(\gamma _2\) in the upper bounds),

$$\begin{aligned} J_{11}(x)&\asymp \int _{\vert x \vert ^2}^{\infty } {\frac{1}{t \phi (t^{-1})} t^{-\frac{d}{2}} e^{-\frac{d \vert x \vert ^2}{2 t}} \hbox {d}t} = \frac{1}{\phi (\vert x \vert ^{-2})} \int _{\vert x \vert ^2}^{\infty } {\frac{\phi (\vert x \vert ^{-2})}{\phi (t^{-1})} t^{-\frac{d}{2} - 1} e^{-\frac{d \vert x \vert ^2}{2 t}} \hbox {d}t} \\&\asymp \frac{\vert x \vert ^{-2 \gamma _i}}{\phi (\vert x \vert ^{-2})} \int _{\vert x \vert ^2}^{\infty } {t^{\gamma _i - \frac{d}{2} - 1} e^{-\frac{d \vert x \vert ^2}{2 t}} \hbox {d}t} \\&= \frac{1}{\vert x \vert ^d \phi (\vert x \vert ^{-2})} \int _{0}^{\frac{d}{2}} {s^{\frac{d}{2} -\gamma _i - 1} e^{-s} \hbox {d}s} \asymp \frac{1}{\vert x \vert ^d \phi (\vert x \vert ^{-2})}, \end{aligned}$$

where the last integral converges because \(\gamma _2 < d/2\). We estimate \(J_2(x)\) using (3.4) and (1.4):

$$\begin{aligned} J_2(x)&\leqslant c_5 \int _{\vert x \vert }^{\vert x \vert ^2} {\frac{t^{-\frac{d}{2} - 1}}{\phi (t^{-1})} e^{-\frac{\vert x \vert ^2}{C t}} \hbox {d}t} = \frac{c_5}{\phi (\vert x \vert ^{-2})} \int _{{\vert x \vert }}^{\vert x \vert ^2} {\frac{\phi (\vert x \vert ^{-2})}{\phi (t^{-1})} t^{-\frac{d}{2} - 1} e^{-\frac{\vert x \vert ^2}{C t}} \hbox {d}t} \\&\leqslant \frac{c_5 \vert x \vert ^{-2 \gamma _1}}{a_1 \phi (\vert x \vert ^{-2})} \int _{{\vert x \vert }}^{\vert x \vert ^2} {t^{\gamma _1 - \frac{d}{2} - 1} e^{-\frac{\vert x \vert ^2}{C t}} \hbox {d}t} \\&= \frac{c_5 \vert x \vert ^{-2 \gamma _1}}{a_1 \phi (\vert x \vert ^{-2})} \int _{\frac{1}{C}}^{\frac{\vert x \vert }{C}} {\left( \frac{\vert x \vert ^2}{Cs} \right) ^{\gamma _1 - \frac{d}{2} - 1} e^{-s} \frac{\vert x \vert ^2}{C s^2} \hbox {d}s} \\&\leqslant \frac{c_6}{\vert x \vert ^d \phi (\vert x \vert ^{-2})} \int _{0}^{\infty } {s^{\frac{d}{2} - \gamma _1 - 1} e^{-s} \hbox {d}s} = \frac{c_7}{\vert x \vert ^d \phi (\vert x \vert ^{-2})}. \end{aligned}$$

Using \(J_{11}(x) \geqslant (2 c_8) / (\vert x \vert ^{d} \phi (\vert x \vert ^{-2}))\) and \(J_{12}(x) \vert x \vert ^d \phi (\vert x \vert ^{-2}) \geqslant - c_8\), valid for \(\vert x \vert \) large enough and some constant \(c_8 > 0\), we get

$$\begin{aligned} G(x) \vert x \vert ^d \phi (\vert x \vert ^{-2}) \geqslant J_{11}(x) \vert x \vert ^d \phi (\vert x \vert ^{-2}) + J_{12}(x) \vert x \vert ^d \phi (\vert x \vert ^{-2}) \geqslant 2c_8 - c_8 = c_8. \end{aligned}$$

On the other hand

$$\begin{aligned} G(x) \vert x \vert ^d \phi (\vert x \vert ^{-2}) \leqslant c_{9} + J_{12}(x) \vert x \vert ^d \phi (\vert x \vert ^{-2}) + c_7 \leqslant 2 c_{9} + c_7 = c_{10}. \end{aligned}$$

Here we used \(J_{11}(x) \leqslant c_9 / (\vert x \vert ^d \phi (\vert x \vert ^{-2}))\), \(J_2(x) \leqslant c_7 / (\vert x \vert ^d \phi (\vert x \vert ^{-2}))\) and \(J_{12}(x) \vert x \vert ^d \phi (\vert x \vert ^{-2}) \leqslant c_{9}\), valid for \(\vert x \vert \) large enough and some constant \(c_9 > 0\). So, we have \(c_8 \leqslant G(x) \vert x \vert ^d \phi (\vert x \vert ^{-2}) \leqslant c_{10}\) for \(\vert x \vert \) large enough. Adjusting the constants \(c_8\) and \(c_{10}\), we get

$$\begin{aligned} G(x) \asymp \frac{1}{\vert x \vert ^d \phi (\vert x \vert ^{-2})}, \quad \text {for all}\,\, \vert x \vert \geqslant 1. \end{aligned}$$

\(\square \)
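In the stable case \(\phi (\lambda ) = \lambda ^{\alpha }\) (with \(\alpha < d/2\)), (4.9) reads \(G(x) \asymp \vert x \vert ^{2\alpha - d}\) for \(\vert x \vert \geqslant 1\), which is the discrete analogue of the Riesz kernel of the rotationally invariant \(2\alpha \)-stable process.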

5 Estimates of the Green Function of a Ball

Let \(B \subset \mathbb {Z}^d\) and define

$$\begin{aligned} G_B (x, y) = \mathbb {E}_x \left[ \sum \limits _{n = 0}^{\tau _B - 1} {\mathbb {1}_{\{X_n = y\}}} \right] \end{aligned}$$

where \(\tau _B\) is as before. A well-known result about the Green function of a set is formulated in the following lemma.

Lemma 5.1

Let B be a finite subset of \(\mathbb {Z}^d\). Then

$$\begin{aligned} G_B (x, y)&= G(x, y) - \mathbb {E}_x \left[ G(X_{\tau _B}, y) \right] , \quad x, y \in B, \\ G_B (x, x)&= \frac{1}{\mathbb {P}_x (\tau _B < \sigma _x)}, \quad x\in B, \end{aligned}$$

where \(\sigma _x = \inf \{n \geqslant 1 : X_n = x\}\).
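Since B is finite, we also have \(G_B = \sum _{k \geqslant 0} {P_{B}^{k}} = (I - P_B)^{-1}\), where \(P_B\) is the restriction of the transition matrix P to B (the walk killed upon exiting B). The following sketch (ours; it assumes \(d = 1\), \(\phi (\lambda ) = \lambda ^{1/2}\), and truncates the series (1.3) at M_MAX, so it is only a numerical illustration, not the paper's method) computes \(G_{B_n}\) directly from this identity:

```python
import numpy as np
from math import comb

ALPHA, M_MAX = 0.5, 400  # phi(lam) = lam**ALPHA; truncation level for the series (1.3)

def sibuya_pmf(m_max, alpha=ALPHA):
    # c_m^phi for phi(lam) = lam**alpha: P(R = m) = (-1)**(m + 1) * binom(alpha, m)
    p = np.empty(m_max + 1)
    p[0], p[1] = 0.0, alpha
    for m in range(1, m_max):
        p[m + 1] = p[m] * (m - alpha) / (m + 1)
    return p

def step_law(z, pmf):
    # P(X_1 = z) = sum_m c_m^phi P(Z_m = z) in d = 1 (binomial walk probabilities)
    return sum(pmf[m] * comb(m, (m + abs(z)) // 2) / 2.0**m
               for m in range(abs(z) or 1, len(pmf)) if (m + z) % 2 == 0)

def green_ball(n):
    pmf = sibuya_pmf(M_MAX)
    pts = range(-n + 1, n)                        # B_n = {x : |x| < n}
    P_B = np.array([[step_law(y - x, pmf) for y in pts] for x in pts])
    return np.linalg.inv(np.eye(len(P_B)) - P_B)  # G_B = sum_k P_B^k

G = green_ball(8)
print(G[7, 7])  # G_{B_8}(0, 0); index 7 corresponds to the origin
```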

Our approach to obtaining estimates for the Green function of a ball uses the maximum principle for the operator A, which we define by

$$\begin{aligned} (Af)(x) := ((P - I)f)(x) = (Pf)(x) - (If)(x) = \sum _{y \in \mathbb {Z}^d} {p(x, y) f(y)} - f(x). \end{aligned}$$
(5.1)

Since \(\sum _{y \in \mathbb {Z}^d} {p(x, y)} = 1\) and \(p(x, y) = \mathbb {P}(X_1 = y - x)\) we have

$$\begin{aligned} (Af)(x) = \sum _{y \in \mathbb {Z}^d} {\mathbb {P}(X_1 = y - x) (f(y) - f(x))}. \end{aligned}$$

Before proving the maximum principle, we show that the function \(\eta (x) := \mathbb {E}_x [\tau _{B_n}]\) satisfies \((A\eta )(x) = -1\) for all \(x \in B_n\). Let \(x \in B_n\). Then

$$\begin{aligned} \eta (x)= & {} \sum _{y \in \mathbb {Z}^d} {\mathbb {E}_x[\tau _{B_n} \mid X_1 = y] \mathbb {P}_x(X_1 = y)} \\= & {} \sum _{y \in \mathbb {Z}^d} {(1 + \mathbb {E}_y[\tau _{B_n}]) \mathbb {P}(X_1 = y - x)} = 1 + (P\eta )(x) \end{aligned}$$

and this is obviously equivalent to \((A\eta )(x) = -1\), for all \( x \in B_n\). It follows from Definition 1.2 that f is harmonic in \(B \subset \mathbb {Z}^d\) if and only if \((Af)(x) = 0\), for all \(x \in B\).

Proposition 5.2

Assume that there exists \(x \in \mathbb {Z}^d\) such that \((Af)(x) < 0\). Then

$$\begin{aligned} f(x) > \inf _{y \in \mathbb {Z}^d} f(y). \end{aligned}$$
(5.2)

Proof

If (5.2) were not true, then \(f(x) \leqslant f(y)\) for all \( y \in \mathbb {Z}^d\). In this case, we have

$$\begin{aligned} (Pf)(x) = \sum _{y \in \mathbb {Z}^d} {\mathbb {P}(X_1 = y - x) f(y)} \geqslant f(x) \sum _{y \in \mathbb {Z}^d} {\mathbb {P}(X_1 = y - x)} = f(x). \end{aligned}$$

This implies \((Af)(x) = (Pf)(x) - f(x) \geqslant 0\), which contradicts the assumption that \((Af)(x) < 0\). \(\square \)

We will now prove a series of lemmas and propositions leading to the estimates for the Green function of a ball. In all of these results, we assume (1.4) and, if \(d \leqslant 2\), we additionally assume (1.5). When we use results from Sect. 3, we need (1.5) even when \(d \geqslant 3\), but in other cases, we can use Lemma 1.1 instead. Throughout the rest of this section, we follow [11, Sect. 4].

Lemma 5.3

There exist \(a \in ( 0, 1/3 )\) and \(C_1 > 0\) such that for every \(n \in \mathbb {N}\)

$$\begin{aligned} G_{B_n}(x, y) \geqslant C_1 G(x, y), \quad \forall \, x, y \in B_{an}. \end{aligned}$$
(5.3)

Proof

From Lemma 5.1 we have

$$\begin{aligned} G_{B_n} (x, y) = G(x, y) - \mathbb {E}_x[G(X_{\tau _{B_n}}, y)]. \end{aligned}$$

We first prove the lemma in the case \(x \ne y\). If we show that \(\mathbb {E}_x[G(X_{\tau _{B_n}}, y)] \leqslant c_1 G(x, y)\) for some \(c_1 \in ( 0, 1 )\), we will have (5.3) with the constant \(c_2 = 1 - c_1\). Let \(a \in ( 0, 1/3 )\) and \(x, y \in B_{an}\). In that case, we have \(\vert x - y \vert \leqslant 2an\). Since \(X_{\tau _{B_n}} \notin B_n\), \(x \ne y\) and \((1 - a) / (2a) > 1\) if and only if \(a < 1/3\), we have

$$\begin{aligned} \vert y - X_{\tau _{B_n}} \vert \geqslant (1 - a)n = \frac{1 - a}{2a} 2an \geqslant \frac{1 - a}{2a} \vert x - y \vert \geqslant 1. \end{aligned}$$
(5.4)

Using Theorem 4.3, (5.4), Lemma 2.2 and (2.11), we get

$$\begin{aligned} G(X_{\tau _{B_n}}, y)&\asymp g(\vert y - X_{\tau _{B_n}} \vert ) \leqslant a_2 g\left( \frac{1 - a}{2a} \vert x - y \vert \right) \\&\leqslant a_{2}^{2} \left( \frac{2a}{1 - a} \right) ^{d - 2\gamma _2} g(\vert x - y \vert ) \asymp a_{2}^{2} \left( \frac{2a}{1 - a} \right) ^{d - 2\gamma _2} G(x, y). \end{aligned}$$

Since \(2a / (1 - a) \longrightarrow 0\) as \(a \rightarrow 0\) and \(d > 2 \gamma _2\), if we take a small enough and then fix it, we have \(\mathbb {E}_x[G(X_{\tau _{B_n}}, y)] \leqslant c_1 G(x, y)\) with \(c_1 \in ( 0, 1 )\), which is what we wanted to prove. Now we deal with the case \(x = y\). From Lemma 5.1 we have \(G_{B_n}(x, x) = (\mathbb {P}_x(\tau _{B_n} < \sigma _x))^{-1}\), and from the definition of the function G and the transience of the random walk we get \(G(x, x) = G(0) \in [1, \infty )\). Now, we can conclude that

$$\begin{aligned} G_{B_n}(x, x) \geqslant 1 = (G(0))^{-1} G(0) = (G(0))^{-1} G(x, x). \end{aligned}$$

If we define \(C_1 := \min \{c_2, (G(0))^{-1}\}\) we have (5.3). \(\square \)

Using Lemma 5.3 we can prove the following result:

Proposition 5.4

There exists a constant \(C_2 > 0\) such that for all \(n \in \mathbb {N}\)

$$\begin{aligned} \mathbb {E}_x[\tau _{B_n}] \geqslant \frac{C_2}{\phi (n^{-2})}, \quad \forall \, x \in B_{\frac{an}{2}}, \end{aligned}$$
(5.5)

where \(a \in ( 0, 1/3 )\) is as in Lemma 5.3.

Proof

Let \(x \in B_{\frac{an}{2}}\). Then \(B(x, an/2) \subseteq B_{an}\). We set \(b = a/2\) to ease notation. Notice that \(\mathbb {E}_x[\tau _{B_n}] = \sum _{y \in B_n} G_{B_n}(x, y)\). Using this equality, Lemma 5.3, Theorem 4.3 and Lemma 1.1, we have

$$\begin{aligned} \mathbb {E}_x[\tau _{B_n}]&\geqslant \sum _{y \in B(x, bn)} {G_{B_n} (x, y)} \geqslant \sum _{y \in B(x, bn) \setminus \{x\}} {C_1 G(x, y)} \asymp \sum _{y \in B(x, bn) \setminus \{x\}} {g(\vert x - y \vert )} \\&\asymp \int _{1}^{bn} {g(r) r^{d - 1} \hbox {d}r} = \int _{1}^{bn} {\frac{1}{r \phi (r^{-2})} \hbox {d}r} = \frac{1}{\phi (n^{-2})} \int _{1}^{bn} {\frac{1}{r} \frac{\phi (n^{-2})}{\phi (r^{-2})} \hbox {d}r} \\&\geqslant \frac{1}{a_2 \phi (n^{-2}) n^{2 \gamma _2}} \int _{1}^{bn} {r^{2 \gamma _2 - 1} \hbox {d}r} \\&= \frac{1}{2a_2 \gamma _2 \phi (n^{-2})} \left[ b^{2 \gamma _2} - \frac{1}{n^{2 \gamma _2}} \right] \geqslant \frac{b^{2 \gamma _2}}{4 a_2 \gamma _2 \phi (n^{-2})}, \end{aligned}$$

for n large enough. Hence, we can conclude that \(\mathbb {E}_x[\tau _{B_n}] \geqslant C_2 / \phi (n^{-2})\) for all \(x \in B_{\frac{an}{2}}\), for n large enough and for some \(C_2 > 0\). As usual, we can adjust the constant to get the statement of this proposition for every \(n \in \mathbb {N}\). Notice that this holds regardless of the dimension because here we can always plug in \(\gamma _2 = 1\). \(\square \)

Now we want to find the upper bound for \(\mathbb {E}_x [\tau _{B_n}]\).

Lemma 5.5

There exists a constant \(C_3 > 0\) such that for all \(n \in \mathbb {N}\)

$$\begin{aligned} \mathbb {E}_x[\tau _{B_n}] \leqslant \frac{C_3}{\phi (n^{-2})}, \quad \forall \, x \in B_n. \end{aligned}$$
(5.6)

Proof

We define the process \(M^f = (M_{n}^{f})_{n \geqslant 0}\) by

$$\begin{aligned} M_{n}^{f} := f(X_{n}) - f(X_{0}) - \sum _{k = 0}^{n - 1} {(Af)(X_{k})} \end{aligned}$$

where f is a function defined on \(\mathbb {Z}^d\) with values in \(\mathbb {R}\), A is defined as in (5.1) and \(X = (X_n)_{n \geqslant 0}\) is the subordinate random walk. By [12, Theorem 4.1.2], the process \(M^f\) is a martingale for every bounded function f. Let \(f := \mathbb {1}_{B_{2n}}\) and \(x \in B_n\). By the optional stopping theorem, we have

$$\begin{aligned} \mathbb {E}_{x} [M_{\tau _{B_n}}^{f}] = \mathbb {E}_{x} \left[ f(X_{\tau _{B_n}}) - f(X_0) - \sum _{k = 0}^{\tau _{B_n} - 1} {(Af)(X_k)} \right] = \mathbb {E}_x [M_{0}^{f}] = 0. \end{aligned}$$

Hence

$$\begin{aligned} \mathbb {E}_x \left[ f(X_{\tau _{B_n}}) - f(X_0) \right] = \mathbb {E}_x \left[ \sum _{k = 0}^{\tau _{B_n} - 1} {(Af)(X_k)} \right] . \end{aligned}$$
(5.7)

We now investigate both sides of relation (5.7). For every \(k < \tau _{B_n}\) we have \(X_k \in B_n\), and for every \(y \in B_n\), using Proposition 3.2, (1.4) and (1.5) (again with \(\gamma _i = \gamma _1\) in the lower and \(\gamma _i = \gamma _2\) in the upper bounds), we have

$$\begin{aligned} (Af)(y)&= \sum _{u \in \mathbb {Z}^d} {\mathbb {P}(X_1 = u - y) (f(u) - f(y))} \asymp -\sum _{u \in B_{2n}^c} {\vert u - y \vert ^{-d} \phi (\vert u - y \vert ^{-2})} \\&\asymp -\int _{n}^{\infty } {r^{-d} \phi (r^{-2}) r^{d - 1} \hbox {d}r} = -\phi (n^{-2}) \int _{n}^{\infty } {r^{-1} \frac{\phi (r^{-2})}{\phi (n^{-2})} \hbox {d}r} \\&\asymp -\phi (n^{-2}) \int _{n}^{\infty } {r^{-1} \frac{n^{2\gamma _i}}{r^{2\gamma _i}} \hbox {d}r} = -\phi (n^{-2}) n^{2\gamma _i} \frac{n^{-2\gamma _i}}{2\gamma _i} \asymp -\phi (n^{-2}). \end{aligned}$$

Using the above estimate, we get

$$\begin{aligned} \mathbb {E}_x \left[ \sum _{k = 0}^{\tau _{B_n} - 1} {(Af)(X_k)} \right] \asymp \mathbb {E}_x \left[ -\sum _{k = 0}^{\tau _{B_n} - 1} {\phi (n^{-2})} \right] = -\phi (n^{-2}) \mathbb {E}_x[\tau _{B_n}]. \end{aligned}$$
(5.8)

Using (5.7), (5.8) and \(\mathbb {E}_x [f(X_{\tau _{B_n}}) - f(X_0)] = \mathbb {P}_x (X_{\tau _{B_n}} \in B_{2n}) - 1 = -\mathbb {P}_x(X_{\tau _{B_n}} \in B_{2n}^c)\), we get

$$\begin{aligned} \mathbb {P}_x (X_{\tau _{B_n}} \in B_{2n}^c) \asymp \phi (n^{-2}) \mathbb {E}_x[\tau _{B_n}] \end{aligned}$$

and this implies

$$\begin{aligned} \mathbb {E}_x [\tau _{B_n}] \leqslant \frac{C_3 \mathbb {P}_x(X_{\tau _{B_n}} \in B_{2n}^c)}{\phi (n^{-2})} \leqslant \frac{C_3}{\phi (n^{-2})}. \end{aligned}$$
(5.9)

\(\square \)
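Proposition 5.4 and Lemma 5.5 together give \(\mathbb {E}_x[\tau _{B_n}] \asymp 1/\phi (n^{-2})\) for \(x \in B_{\frac{an}{2}}\). In the stable case \(\phi (\lambda ) = \lambda ^{\alpha }\) this is the familiar space-time scaling \(\mathbb {E}_x[\tau _{B_n}] \asymp n^{2\alpha }\).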

In the next two results, we develop estimates for the Green function of a ball. For \(r, s \in \mathbb {R}\), \(0< r < s\), we define the annulus \(A(r, s) := \{x \in \mathbb {Z}^d : r \leqslant \vert x \vert < s\}\).

Proposition 5.6

There exists a constant \(C_4 > 0\) such that for all \(n \in \mathbb {N}\)

$$\begin{aligned} G_{B_n}(x, y) \leqslant C_4n^{-d} \eta (y), \quad \forall \, x \in B_{\frac{an}{4}}, y \in A(an/2, n), \end{aligned}$$
(5.10)

where \(\eta (y) = \mathbb {E}_y[\tau _{B_n}]\) and \(a \in ( 0, 1/3 )\) is as in Lemma 5.3.

Proof

Let \(x \in B_{\frac{an}{4}}\) and \(y \in A(an/2, n)\). We define the function \(h(z) := G_{B_n} (x, z)\). Notice that for \(z \in B_n \setminus \{x\}\) we have

$$\begin{aligned} h(z)= & {} G_{B_n}(x, z) = G_{B_n}(z, x) \\= & {} \sum _{w \in \mathbb {Z}^d} {\mathbb {P}(X_1 = w - z) G_{B_n} (w, x)} = \sum _{w \in \mathbb {Z}^d} {\mathbb {P}(X_1 = w - z)\, h(w)}. \end{aligned}$$

Hence, h is a harmonic function in \(B_n \setminus \{x\}\). If we take \(z \in B(x, an/16)^c\), then \(\vert z - x \vert \geqslant an/16 \geqslant 1\) for n large enough. Using Lemma 2.2 and Theorem 4.3, we get

$$\begin{aligned} g(an/16) \geqslant a_{2}^{-1} g(\vert z - x \vert ) \asymp G(x, z) \geqslant G_{B_n}(x, z) = h(z). \end{aligned}$$

Hence, \(h(z) \leqslant kg(an/16)\) for \(z \in B(x, an/16)^c\) and some constant \(k > 0\). Notice that \(A(an/2, n) \subseteq B(x, an/16)^c\); hence \(y \in B(x, an/16)^c\). Using these facts together with Proposition 3.2, we have

$$\begin{aligned}&A (h \wedge kg(an/16))(y) = A(h \wedge kg(an/16) - h)(y) \\&\quad = \sum _{v \in \mathbb {Z}^d} {\mathbb {P}(X_1 = v - y)} (h(v) \wedge kg(an/16) - h(v) - h(y) \wedge kg(an/16) + h(y)) \\&\quad \asymp \sum _{v \in B(x, an/16)} {j(\vert v - y \vert ) (h(v) \wedge kg(an/16) - h(v))} \\&\quad \geqslant -\sum _{v \in B(x, an/16)} {j(\vert v - y \vert ) h(v)} \\&\quad \geqslant -\sum _{v \in B(x, an/16)} {a_{1}^{-1} j(an/16) h(v)} = -a_{1}^{-1} j(an/16) \sum _{v \in B(x, an/16)} {G_{B_n}(x, v)} \\&\quad \geqslant -a_{1}^{-1} j(an/16) \eta (x), \end{aligned}$$

where in the last line we used Lemma 2.3 together with \(\vert v - y \vert \geqslant an/16 \geqslant 1\) for \(v \in B(x, an/16)\) and for n large enough. Using (2.6) we get \(j(an/16) \leqslant (a/16)^{-d-2} j(n)\). Hence, using (5.6), we have

$$\begin{aligned} A(h \wedge kg(an/16))(y)\geqslant & {} -c_1 n^{-d} \phi \left( n^{-2} \right) \eta (x)\\\geqslant & {} -c_1 n^{-d} \phi \left( n^{-2} \right) C_3\left( \phi \left( n^{-2} \right) \right) ^{-1} = -c_2 n^{-d} \end{aligned}$$

for some \(c_2 > 0\). On the other hand, using (2.9) and Proposition 5.4 we get

$$\begin{aligned} g(an/16)&\leqslant a_{1}^{-1} (a/16)^{-d + 2\gamma _1} g(n) \\&\leqslant (a_1 C_2)^{-1} (a/16)^{-d + 2\gamma _1} n^{-d} \eta (z) = c_3n^{-d} \eta (z), \quad \forall z \in B_{an/2}. \end{aligned}$$

Now we define \(C_4 := (c_2 \vee kc_3) + 1\) and using

$$\begin{aligned} h(z) \wedge kg(an/16) \leqslant kg(an/16) \leqslant kc_3n^{-d} \eta (z) \end{aligned}$$

we get

$$\begin{aligned} C_4n^{-d} \eta (z) - h(z) \wedge kg(an/16) \geqslant (C_4 - kc_3)n^{-d} \eta (z) \geqslant 0, \quad \forall \, z \in B_{an/2}. \end{aligned}$$

So, if we define \(u(\cdot ) := C_4n^{-d}\eta (\cdot ) - h(\cdot ) \wedge kg(an/16)\), we have shown that u is a nonnegative function on \(B_{an/2}\). It obviously vanishes on \(B_{n}^{c}\), and for \(y \in A(an/2, n)\) we have

$$\begin{aligned} (Au)(y) = C_{4}n^{-d} (A\eta )(y) - A(h \wedge kg(an/16))(y) \leqslant -C_{4}n^{-d} + c_2n^{-d} < 0. \end{aligned}$$

Since \(u \geqslant 0\) on \(B_{an/2}\) and u vanishes on \(B_{n}^{c}\), if \(\inf _{y \in \mathbb {Z}^d}u(y) < 0\), then there would exist \(y_0 \in A(an/2, n)\) such that \(u(y_0) = \inf _{y \in \mathbb {Z}^d}u(y)\). But then, by Proposition 5.2, we would have \((Au)(y_0) \geqslant 0\), which contradicts \((Au)(y) < 0\) for \(y \in A(an/2, n)\). Hence,

$$\begin{aligned} u(y) = C_4n^{-d}\eta (y) - h(y) \wedge kg(an/16) \geqslant 0, \quad \forall \, y \in \mathbb {Z}^d \end{aligned}$$

and then, because \(h(y) \leqslant kg(an/16)\) for \(y \in A(an/2, n)\) we get

$$\begin{aligned} G_{B_n}(x, y) = h(y) \leqslant C_4n^{-d} \eta (y), \quad \forall \, x \in B_{\frac{an}{4}},\, y \in A(an/2, n). \end{aligned}$$

\(\square \)

Now we prove a proposition that gives the lower bound for the Green function of a ball. We use the fact that \(\vert B_n \cap \mathbb {Z}^d \vert \geqslant c n^d\) for some constant \(c > 0\), where \(\vert \cdot \vert \) denotes the cardinality of a set.

Proposition 5.7

There exist \(C_5 > 0\) and \(b \leqslant a/4\) such that for all \(n \in \mathbb {N}\)

$$\begin{aligned} G_{B_n}(x, y) \geqslant C_5n^{-d}\eta (y), \quad \forall \, x \in B_{bn}, y \in A(an/2, n), \end{aligned}$$
(5.11)

where a is as in Lemma 5.3 and \(\eta (y) = \mathbb {E}_y[\tau _{B_n}]\).

Proof

Let \(a \in ( 0, 1/3 )\) be as in Lemma 5.3. Then there exists \(C_1 > 0\) such that

$$\begin{aligned} G_{B_n} (x, v) \geqslant C_1G(x, v), \quad x, v \in B_{an}. \end{aligned}$$
(5.12)

From Proposition 5.6 it follows that there exists a constant \(C_4 > 0\) such that

$$\begin{aligned} G_{B_n}(x, v) \leqslant C_4n^{-d}\eta (v), \quad x \in B_{an/4}, v \in A(an/2, n). \end{aligned}$$
(5.13)

From Lemma 5.5 we have

$$\begin{aligned} \eta (v) \leqslant \frac{C_3}{\phi \left( n^{-2} \right) }, \quad v\in B_n, \end{aligned}$$
(5.14)

for some constant \(C_3 > 0\). By Theorem 4.3 and (2.4) there exists \(c_1 > 0\) such that \(G(x) \geqslant c_1 g(\vert x \vert )\), \(x \ne 0\). Now we take

$$\begin{aligned} b \leqslant \min \left\{ \frac{a}{4}, \left( \frac{C_1 c_1}{2 a_{2}^{2} C_3 C_4} \right) ^{\frac{1}{d - 2\gamma _2}} \right\} \end{aligned}$$

and fix it. Notice that \((C_1 c_1) / (a_{2}^{2} C_3 b^{d - 2\gamma _2}) \geqslant 2C_4\). Let \(x \in B_{bn}\) and \(v \in B(x, bn)\). Since \(b \leqslant a/4\), we have \(x, v \in B_{an}\). We want to prove that \(G_{B_n}(x, v) \geqslant 2C_4 n^{-d} \eta (v)\). We first prove this assertion for \(x \ne v\); in that case \(1 \leqslant \vert x - v \vert \). Since \(v \in B(x, bn)\), we have \(\vert x - v \vert \leqslant bn\), so we can use (5.12), Lemma 2.2 and (2.10) to get

$$\begin{aligned} G_{B_n}(x, v) \geqslant C_1 G(x, v) \geqslant \frac{C_1 c_1}{a_2}g(bn) \geqslant \frac{C_1 c_1}{a_{2}^{2} b^{d - 2\gamma _2}} g(n) \geqslant \frac{2C_3 C_4}{n^d \phi (n^{-2})}. \end{aligned}$$
(5.15)

Using (5.14) and (5.15), we get \(G_{B_n}(x, v) \geqslant 2C_4 n^{-d} \eta (v)\) for \(x \ne v\). Now we will prove that \(G_{B_n}(x, x) \geqslant 2C_4 n^{-d} \eta (x)\), for \(x \in B_{bn}\) and for n large enough. First note that

$$\begin{aligned} \lim _{n \rightarrow \infty } {n^d \phi (n^{-2})} = \lim _{n \rightarrow \infty } {n^d\ \frac{\phi (n^{-2})}{\phi (1)}} \geqslant \lim _{n \rightarrow \infty } {n^d \frac{1}{a_2 n^{2 \gamma _2}}} = \lim _{n \rightarrow \infty } {\frac{1}{a_2} n^{d - 2\gamma _2}} = \infty , \end{aligned}$$

since \(d - 2\gamma _2 > 0\). Therefore

$$\begin{aligned} 2 C_4 n^{-d} \eta (x) \leqslant \frac{2 C_4 C_3}{n^d \phi (n^{-2})} \leqslant 1 \leqslant G_{B_n}(x, x) \end{aligned}$$

for n large enough. Hence,

$$\begin{aligned} C_4n^{-d} \eta (v) \leqslant \frac{1}{2}G_{B_n}(x, v), \quad \forall \, x \in B_{bn}, v \in B(x, bn). \end{aligned}$$
(5.16)

Now we fix \(x \in B_{bn}\) and define the function

$$\begin{aligned} h(v) := G_{B_n}(x, v) \wedge \left( C_4n^{-d}\eta (v) \right) . \end{aligned}$$

From (5.16) we have \(h(v) \leqslant \frac{1}{2}G_{B_n}(x, v)\) for \(v \in B(x, bn)\). Recall that \(G_{B_n}(x, \cdot )\) is harmonic in \(A(an/2, n)\). Using (5.13), we get \(h(y) = G_{B_n}(x, y)\) for \(y \in A(an/2, n)\). Hence, for \(y \in A(an/2, n)\)

$$\begin{aligned} (Ah)(y)&= A(h(\cdot ) - G_{B_n}(x, \cdot ))(y) \nonumber \\&\asymp \sum _{v \in \mathbb {Z}^d} {j(\vert v - y \vert ) \left( h(v) - G_{B_n}(x, v) - h(y) + G_{B_n}(x, y) \right) } \nonumber \\&\leqslant \sum _{v \in B(x, bn)} {j(\vert v - y \vert )\left( h(v) - G_{B_n}(x, v) \right) } \nonumber \\&\leqslant -(a_1/2) j(2n) \sum _{v \in B(x, bn)} {G_{B_n}(x, v)}, \end{aligned}$$
(5.17)

where we used Proposition 3.2 and Lemma 2.3 together with \(1 \leqslant \vert v - y \vert \leqslant 2n\). Using (5.15) and \(\vert B_n \cap \mathbb {Z}^d \vert \geqslant c_2 n^d\), we get

$$\begin{aligned} \sum _{v \in B(x, bn)} {G_{B_n}(x, v)} \geqslant \frac{2C_3 C_4}{n^d \phi \left( n^{-2} \right) } \vert B_{bn} \cap \mathbb {Z}^d \vert \geqslant \frac{2c_2 C_3 C_4}{n^d \phi \left( n^{-2} \right) } (bn)^d = \frac{c_3}{\phi \left( n^{-2} \right) }. \end{aligned}$$
(5.18)

Using (2.8) we get \(j(2n) \geqslant 2^{-d - 2}j(n)\). When we put this together with (5.17) and (5.18), we get

$$\begin{aligned} (Ah)(y) \leqslant -c_4n^{-d}. \end{aligned}$$

Define \(u := h - \kappa \eta \), where

$$\begin{aligned} \kappa := \min \left\{ \frac{c_4}{2}, \frac{c_5}{2}, \frac{C_4}{2} \right\} n^{-d}, \end{aligned}$$

and \(c_5 > 0\) will be specified below. For \(y \in A(an/2, n)\)

$$\begin{aligned} (Au)(y) = (Ah)(y) - \kappa (A\eta )(y) \leqslant -c_4n^{-d} + \kappa \leqslant -c_4n^{-d} + \frac{c_4}{2} n^{-d} = -\frac{c_4}{2}n^{-d} < 0. \end{aligned}$$

For \(x \in B_{bn} \subseteq B_{an/2}\) and \(v \in B_{an/2}\) we have \(\vert x - v \vert \leqslant an \leqslant n\). We first assume \(x \ne v\), so that we can use Theorem 4.3, Lemma 2.2 and (2.10). In this case, we have

$$\begin{aligned} G_{B_n}(x, v)&\geqslant C_1G(x, v) \asymp g(\vert x - v \vert ) \\&\geqslant \frac{1}{a_2}g(an) \geqslant \frac{1}{a_{2}^{2} a^{d - 2\gamma _2}} g(n) \geqslant \frac{1}{a_{2}^{2} C_3 a^{d - 2\gamma _2}}n^{-d} \eta (v). \end{aligned}$$

So, \(G_{B_n}(x, v) \geqslant c_5 n^{-d} \eta (v)\) for some constant \(c_5 > 0\) and for \(x \ne v\). If \(x = v\), the same arguments used above to prove \(G_{B_n}(x, x) \geqslant 2C_4 n^{-d} \eta (x)\) for n large enough show that \(G_{B_n}(x, x) \geqslant c_5 n^{-d} \eta (x)\) for n large enough. Hence, \(G_{B_n}(x, v) \geqslant c_5 n^{-d} \eta (v)\) for all \(x \in B_{bn}\) and \(v \in B_{an/2}\), for n large enough. Now we have

$$\begin{aligned} h(v)= & {} G_{B_n}(x, v) \wedge \left( C_4n^{-d}\eta (v) \right) \\\geqslant & {} \left( c_5n^{-d}\eta (v) \right) \wedge \left( C_4n^{-d}\eta (v) \right) = (C_4 \wedge c_5) n^{-d}\eta (v). \end{aligned}$$

Hence,

$$\begin{aligned} u(v) = h(v) - \kappa \eta (v) \geqslant (C_4 \wedge c_5) n^{-d} \eta (v) - \left( \frac{C_4}{2} \wedge \frac{c_5}{2} \right) n^{-d} \eta (v) \geqslant 0. \end{aligned}$$

Since \(u(v) \geqslant 0\) for \(v \in B_{an/2}\), \(u(v) = 0\) for \(v \in B_{n}^{c}\) and \((Au)(v) < 0\) for \(v \in A(an/2, n)\), we can use the same argument as in Proposition 5.6 to conclude, by Proposition 5.2, that \(u(y) \geqslant 0\) for all \(y \in \mathbb {Z}^d\). Since \(G_{B_n}(x, y) \leqslant C_4 n^{-d} \eta (y)\) for \(x \in B_{an/4}\) and \(y \in A(an/2, n)\), we have \(h(y) = G_{B_n}(x, y)\). Using that, we have

$$\begin{aligned} G_{B_n}(x, y) \geqslant \kappa \eta (y) = C_5n^{-d}\eta (y), \quad x\in B_{bn}, y \in A(an/2, n), \end{aligned}$$

for n large enough. As before, we can change the constant and get (5.11) for all \(n \in \mathbb {N}\). \(\square \)

Combining the last two propositions, we obtain the following corollary.

Corollary 5.8

Assume (1.4) and (1.5). Then there exist constants \(C_6, C_7 > 0\) and \(b_1, b_2 \in ( 0, \frac{1}{2} )\), \(2b_1 \leqslant b_2\) such that

$$\begin{aligned} C_6 n^{-d} \mathbb {E}_y[\tau _{B_n}] \leqslant G_{B_n}(x, y) \leqslant C_7 n^{-d} \mathbb {E}_y[\tau _{B_n}], \quad \forall \, x \in B_{b_1n}, y \in A(b_2n, n). \end{aligned}$$
(5.19)

Proof

This corollary follows directly from Propositions 5.6 and 5.7: we can set \(b_2 = a/2\), where \(a \in ( 0, 1/3 )\) is as in Lemma 5.3, and \(b_1 = b\), where \(b \leqslant a/4\) is as in Proposition 5.7. \(\square \)

6 Proof of the Harnack Inequality

We start this section with the proof of a proposition that will be crucial for the remaining part of our paper.

Proposition 6.1

Let \({f}:{\mathbb {Z}^d \times \mathbb {Z}^d}\rightarrow {[0, \infty )}\) be a function and \(B \subset \mathbb {Z}^d\) a finite set. For every \(x \in B\) we have

$$\begin{aligned} \mathbb {E}_x \left[ f(X_{\tau _B - 1}, X_{\tau _B}) \right] = \sum _{y \in B} {G_B (x, y) \mathbb {E}\left[ f(y, y + X_1) \mathbb {1}_{\{y + X_1 \notin B\}} \right] }. \end{aligned}$$
(6.1)

Proof

We have

$$\begin{aligned} \mathbb {E}_x \left[ f(X_{\tau _B - 1}, X_{\tau _B}) \right] = \sum _{y \in B, z \in B^c} {\mathbb {P}_x (X_{\tau _B - 1} = y, X_{\tau _B} = z) f(y, z)}. \end{aligned}$$

Using (1.2), we get

$$\begin{aligned} \mathbb {P}_x (X_{\tau _B - 1}&= y, X_{\tau _B} = z) = \sum _{m = 1}^{\infty } {\mathbb {P}_x (X_{\tau _B - 1} = y, X_{\tau _B} = z, \tau _B = m)} \\&= \sum _{m = 1}^{\infty } {\mathbb {P}(x + X_{m - 1} + \xi _m = z, x + X_{m - 1} = y, X_1, \ldots , X_{m - 2} \in B - x)} \\&= \sum _{m = 1}^{\infty } {\mathbb {P}(\xi _m = z - y) \mathbb {P}(x + X_{m - 1} = y, X_1, \ldots , X_{m - 2} \in B - x)} \\&= \mathbb {P}(\xi _1 = z - y) \sum _{m = 1}^{\infty } {\mathbb {P}_x (X_{m - 1} = y, X_1, \ldots , X_{m - 2} \in B)} \\&= \mathbb {P}(X_1 = z - y) \sum _{m = 1}^{\infty } {\mathbb {P}_x (X_{m - 1} = y, \tau _B > m - 1)} \\&= \mathbb {P}(X_1 = z - y) G_B (x, y). \end{aligned}$$

Hence,

$$\begin{aligned} \mathbb {E}_x \left[ f(X_{\tau _B - 1}, X_{\tau _B}) \right]&= \sum _{y \in B, z \in B^c} {f(y, z) G_B (x, y) \mathbb {P}(y + X_1 = z)} \\&= \sum _{y \in B} {G_B (x, y) \mathbb {E}\left[ f(y, y + X_1) \mathbb {1}_{\{y + X_1 \notin B\}} \right] }. \end{aligned}$$

\(\square \)

Remark 6.2

Formula (6.1) can be regarded as a discrete counterpart of the continuous-time Ikeda–Watanabe formula, and we will refer to it as the discrete Ikeda–Watanabe formula.
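As a sanity check, and not as part of the argument, formula (6.1) is easy to test numerically. The following Python sketch assumes, purely for simplicity, the simple symmetric walk on \(\mathbb {Z}\) in place of the subordinate walk X; the proof above only used that the increments are independent and identically distributed, so the identity holds for this walk as well. All names in the snippet are ours and illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
B = list(range(-3, 4))              # the finite set B = {-3, ..., 3}
idx = {y: i for i, y in enumerate(B)}

# Green function of the killed walk: G_B = sum_{k >= 0} P_B^k = (I - P_B)^{-1},
# where P_B is the one-step transition matrix restricted to B.
P_B = np.zeros((len(B), len(B)))
for y in B:
    for z in (y - 1, y + 1):
        if z in idx:
            P_B[idx[y], idx[z]] = 0.5
G = np.linalg.inv(np.eye(len(B)) - P_B)

def f(y, z):
    # a nonnegative test function; here it only depends on the exit point,
    # but any nonnegative f(y, z) works in (6.1)
    return 1.0 if z > 0 else 0.0

x = 1  # starting point inside B

# Right-hand side of (6.1): sum_y G_B(x, y) E[f(y, y + X_1); y + X_1 not in B].
rhs = sum(
    G[idx[x], idx[y]] * 0.5 * sum(f(y, z) for z in (y - 1, y + 1) if z not in idx)
    for y in B
)

# Left-hand side of (6.1) by Monte Carlo: run the walk from x until it leaves B
# and evaluate f at the last position inside and the first position outside.
n_sim = 50_000
total = 0.0
for _ in range(n_sim):
    pos = x
    while pos in idx:
        prev, pos = pos, pos + rng.choice((-1, 1))
    total += f(prev, pos)

print(rhs, total / n_sim)   # the two numbers agree up to Monte Carlo error
```

For this choice of f both sides equal the probability of exiting B to the right, which is \(5/8\) when starting from 1.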

It can be proved that if \({f}:{\mathbb {Z}^d}\rightarrow {[0, \infty )}\) is harmonic in B, with respect to X, then \(\{f(X_{n \wedge \tau _B}) : n \geqslant 0\}\) is a martingale with respect to the natural filtration of X (the proof is the same as that of [9, Proposition 1.4.1], except that our function is nonnegative instead of bounded). Using this fact, we can prove the following lemma.

Lemma 6.3

Let B be a finite subset of \(\,\mathbb {Z}^d\). Then \({f}:{\mathbb {Z}^d}\rightarrow {[0, \infty )}\) is harmonic in B, with respect to X, if and only if \(f(x) = \mathbb {E}_x[f(X_{\tau _B})]\) for every \(x \in B\).

Proof

Let us first assume that \({f}:{\mathbb {Z}^d}\rightarrow {[0, \infty )}\) is harmonic in B, with respect to X, and take an arbitrary \(x \in B\). By the martingale property, \(f(x) = \mathbb {E}_x[f(X_{n \wedge \tau _B})]\) for all \(n \geqslant 1\). Since B is a finite set, we have \(f \leqslant M\) on B, for some constant \(M > 0\), and \(\mathbb {P}_x(\tau _B < \infty ) = 1\). In particular, \(f(X_{n \wedge \tau _B}) \rightarrow f(X_{\tau _B})\) \(\mathbb {P}_x\)-a.s., so by Fatou's lemma \(\mathbb {E}_x[f(X_{\tau _B})] \leqslant f(x)\), and \(f(X_{\tau _B})\) is a \(\mathbb {P}_x\)-integrable random variable. Moreover,

$$\begin{aligned} f(X_{n \wedge \tau _B}) = f(X_n) \mathbb {1}_{\{n < \tau _B\}} + f(X_{\tau _B}) \mathbb {1}_{\{\tau _B \leqslant n\}} \leqslant M + f(X_{\tau _B}). \end{aligned}$$

Since the right-hand side is \(\mathbb {P}_x\)-integrable, the dominated convergence theorem yields

$$\begin{aligned} f(x) = \lim _{n \rightarrow \infty } {\mathbb {E}_x[f(X_{n \wedge \tau _B})]} = \mathbb {E}_x[\lim _{n \rightarrow \infty } {f(X_{n \wedge \tau _B})}] = \mathbb {E}_x[f(X_{\tau _B})]. \end{aligned}$$

On the other hand, if \(f(x) = \mathbb {E}_x[f(X_{\tau _B})]\) for every \(x \in B\), then for \(x \in B\), conditioning on the first step and using the Markov property, we have

$$\begin{aligned} f(x)= & {} \sum _{y \in \mathbb {Z}^d} {\mathbb {E}_x\,[\,f(X_{\tau _B}) \mid X_1 = y\,]\, \mathbb {P}_x(X_1 = y)} \\= & {} \sum _{y \in \mathbb {Z}^d} {p(x, y) \mathbb {E}_y[f(X_{\tau _B})]} = \sum _{y \in \mathbb {Z}^d} {p(x, y) f(y)}. \end{aligned}$$

\(\square \)

Hence, if we take \(B \subset \mathbb {Z}^d\) finite and \({f}:{\mathbb {Z}^d}\rightarrow {[0, \infty )}\) harmonic in B, with respect to X, then by Lemma 6.3 and the discrete Ikeda–Watanabe formula, applied to \((y, z) \mapsto f(z)\), we get

$$\begin{aligned} f(x) = \mathbb {E}_x \left[ f(X_{\tau _B}) \right] = \sum _{y \in B} {G_B (x, y) \mathbb {E}\left[ f(y + X_1) \mathbb {1}_{\{y + X_1 \notin B\}} \right] }. \end{aligned}$$
(6.2)

Let us define the discrete Poisson kernel of a finite set \(B \subset \mathbb {Z}^d\) by

$$\begin{aligned} K_B(x, z):= \sum _{y \in B} {G_{B} (x, y) \mathbb {P}(X_1 = z - y)} , \quad x \in B, z \in B^c. \end{aligned}$$
(6.3)

If the function f is nonnegative and harmonic in \(B_n\), with respect to X, then from (6.2) we have

$$\begin{aligned} f(x)&= \sum _{y \in B_n} {G_{B_n} (x, y) \sum _{z \notin B_n} {\mathbb {E}\left[ f(y + X_1) \mathbb {1}_{\{y + X_1 \notin B_n\}} \mid X_1 = z - y \right] \mathbb {P}(X_1 = z - y)}} \nonumber \\&= \sum _{z \notin B_n} {\sum _{y \in B_n} {G_{B_n} (x, y) f(z) \mathbb {1}_{\{z \notin B_n\}} \mathbb {P}(X_1 = z - y)}} \nonumber \\&= \sum _{z \notin B_n} {f(z) \left( \sum _{y \in B_n} {G_{B_n} (x, y) \mathbb {P}(X_1 = z - y)} \right) } = \sum _{z \notin B_n} {f(z) K_{B_n}(x, z)}. \end{aligned}$$
(6.4)
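Continuing the numerical sketch from Remark 6.2 (same simplifying assumptions), one can check (6.3) and (6.4) directly: for the toy walk, the affine function \(f(y) = y + 5\) is nonnegative on the reachable sites and harmonic in B, and the representation (6.4) holds exactly.

```python
# Discrete Poisson kernel (6.3) for the toy walk above; from B = {-3, ..., 3}
# the walk can only exit to -4 or 4.
exterior = [-4, 4]
K = {
    (x0, z): sum(G[idx[x0], idx[y]] * (0.5 if abs(z - y) == 1 else 0.0) for y in B)
    for x0 in B
    for z in exterior
}

def f_h(y):
    return y + 5    # affine, hence harmonic for this walk; nonnegative here

# Representation (6.4): f(x) = sum_{z not in B} f(z) K_B(x, z).
for x0 in B:
    assert abs(sum(f_h(z) * K[(x0, z)] for z in exterior) - f_h(x0)) < 1e-10
```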

Now we are ready to show that the Poisson kernel \(K_{B_n}(x, z)\) is comparable to an expression that is independent of x. Once we prove that, the Harnack inequality will follow immediately.

Lemma 6.4

Assume (1.4) and (1.5) and let \(b_1, b_2 \in ( 0, \frac{1}{2} )\) be as in Corollary 5.8. Then \(K_{B_n}(x, z) \asymp l(z)\) for all \(x \in B_{b_1n}\) and \(z \in B_{n}^{c}\), where

$$\begin{aligned} l(z) = \frac{j(\vert z \vert )}{\phi \left( n^{-2} \right) } + n^{-d} \sum _{y \in A(b_2n, n)}{\mathbb {E}_y[\tau _{B_n}] j(\vert z - y \vert )}. \end{aligned}$$

Proof

Splitting the sum in (6.3) defining the Poisson kernel into two parts and using Proposition 3.2, we get

$$\begin{aligned} K_{B_n}(x, z) \asymp \sum _{y \in B_{b_2n}} {G_{B_n}(x, y) j(\vert z - y \vert )} + \sum _{y \in A(b_2n, n)} {G_{B_n}(x, y) j(\vert z - y \vert )}. \end{aligned}$$

Since \(G_{B_n}(x, y) \asymp n^{-d}\mathbb {E}_y[\tau _{B_n}]\) for \(x \in B_{b_1n}\), \(y \in A(b_2n, n)\), the second sum in the expression above satisfies

$$\begin{aligned} \sum _{y \in A(b_2n, n)} {G_{B_n}(x, y) j(\vert z - y \vert )} \asymp n^{-d} \sum _{y \in A(b_2n, n)}{\mathbb {E}_y[\tau _{B_n}] j(\vert z - y \vert )}. \end{aligned}$$
(6.5)

Now we take a closer look at the first sum, \(\sum _{y \in B_{b_2n}} {G_{B_n}(x, y) j(\vert z - y \vert )}\). Since \(y \in B_{b_2n}\) with \(b_2 \in ( 0, \frac{1}{2} )\), and \(\vert z \vert \geqslant n\) because \(z \in B_{n}^{c}\), we have

$$\begin{aligned} \vert z - y \vert \leqslant \vert z \vert + \vert y \vert \leqslant \vert z \vert + b_2n \leqslant \vert z \vert + b_2\vert z \vert \leqslant (1 + b_2)\vert z \vert \leqslant 2\vert z \vert . \end{aligned}$$

On the other hand

$$\begin{aligned} \vert z \vert \leqslant \vert z - y \vert + \vert y \vert \leqslant \vert z - y \vert + b_2n \leqslant \vert z - y \vert + b_2\vert z \vert . \end{aligned}$$
(6.6)

Hence,

$$\begin{aligned} \frac{1}{2}\vert z \vert \leqslant (1 - b_2)\vert z \vert \leqslant \vert z - y \vert . \end{aligned}$$
(6.7)

Combining the upper bound \(\vert z - y \vert \leqslant 2\vert z \vert \) with (6.7) and using Lemma 2.3, we have

$$\begin{aligned} \frac{1}{a_1} j\left( \frac{1}{2}\vert z \vert \right) \geqslant j(\vert z - y \vert ) \geqslant a_1 j(2\vert z \vert ). \end{aligned}$$

Using (2.6), we get \(j(\frac{1}{2} \vert z \vert ) \leqslant 2^{d + 2}j(\vert z \vert ) =: c_1 j(\vert z \vert )\). Similarly, from (2.8), we get \(j(2\vert z \vert ) \geqslant 2^{-d -2} j(\vert z \vert ) =: c_2 j(\vert z \vert )\). Hence, \(a_1 c_2 j(\vert z \vert ) \leqslant a_1 j(2\vert z \vert ) \leqslant j(\vert z - y \vert ) \leqslant a_1^{-1} j\left( \frac{1}{2}\vert z \vert \right) \leqslant a_1^{-1} c_1 j(\vert z \vert )\). Therefore,

$$\begin{aligned} j(\vert z - y \vert ) \asymp j(\vert z \vert ), \quad y \in B_{b_2n},\, z \in B_{n}^{c}. \end{aligned}$$
(6.8)

Using (6.8) we have

$$\begin{aligned} \sum _{y \in B_{b_2n}} {G_{B_n}(x, y) j(\vert z - y \vert )} \asymp \sum _{y \in B_{b_2n}} {G_{B_n}(x, y) j(\vert z \vert )} = j(\vert z \vert ) \sum _{y \in B_{b_2n}} {G_{B_n}(x, y)}. \end{aligned}$$

Now we want to show that \(\sum _{y \in B_{b_2n}} {G_{B_n}(x, y)} \asymp 1 / \phi \left( n^{-2} \right) \). Using the fact that \(G_{B_n}\) is a nonnegative function and that \(\mathbb {E}_x[\tau _{B_n}] \leqslant C_3 / \phi \left( n^{-2} \right) \) for \(x \in B_n\), we have

$$\begin{aligned} \sum _{y \in B_{b_2n}} {G_{B_n}(x, y)} \leqslant \sum _{y \in B_n} {G_{B_n}(x, y)} = \mathbb {E}_x[\tau _{B_n}] \leqslant \frac{C_3}{\phi \left( n^{-2} \right) }. \end{aligned}$$
(6.9)
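(As an aside, the identity \(\sum _{y \in B_n} {G_{B_n}(x, y)} = \mathbb {E}_x[\tau _{B_n}]\) used in the display above is easy to test on the toy walk from Remark 6.2, for which the expected exit time from \(\{-3, \ldots , 3\}\) is the classical \((4 - x)(4 + x)\).)

```python
# Row sums of the Green function equal expected exit times: for the toy
# walk on {-3, ..., 3}, E_x[tau_B] = (4 - x)(4 + x).
for x0 in B:
    assert abs(G[idx[x0]].sum() - (4 - x0) * (4 + x0)) < 1e-10
```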

To prove the reverse inequality, we use Lemma 5.3, Theorem 4.3 and Lemma 2.2, together with \(1 \leqslant \vert x - y \vert \leqslant 2b_2n\), \(\vert B_n \cap \mathbb {Z}^d \vert \geqslant c_3n^d\) and Lemma 1.1. Thus,

$$\begin{aligned} \sum _{y \in B_{b_2n}} {G_{B_n}(x, y)}&\geqslant C_1\sum _{y \in B_{b_2n} \setminus \{x\}} {G(x, y)} \asymp \sum _{y \in B_{b_2n} \setminus \{x\}} {g(\vert x - y \vert )} \\&\geqslant \frac{1}{a_2} (\vert B_{b_2n} \cap \mathbb {Z}^d \vert - 1) g(2b_2n) \\&\geqslant \frac{1}{a_2} \frac{c_3}{2}(b_2n)^d \frac{1}{2^d (b_2n)^d} \frac{1}{\phi \left( n^{-2} \right) } \frac{\phi (n^{-2})}{\phi ((2b_2n)^{-2})} \\&\geqslant \frac{c_3}{2 a_2} \frac{1}{2^d \phi (n^{-2})} (2b_2)^2 \geqslant \frac{c_3 (2b_2)^2}{2^{d + 1} a_2} \frac{1}{\phi \left( n^{-2} \right) }. \end{aligned}$$

Hence,

$$\begin{aligned} \sum _{y \in B_{b_2n}} {G_{B_n}(x, y)} \geqslant \frac{c_4}{\phi \left( n^{-2} \right) }. \end{aligned}$$
(6.10)

From (6.9) and (6.10) we have

$$\begin{aligned} \sum _{y \in B_{b_2n}} {G_{B_n}(x, y)} \asymp \frac{1}{\phi \left( n^{-2} \right) }. \end{aligned}$$
(6.11)

Finally, using (6.8) and (6.11) we have

$$\begin{aligned} \sum _{y \in B_{b_2n}} {G_{B_n}(x, y) j(\vert z - y \vert )} \asymp \frac{j(\vert z \vert )}{\phi \left( n^{-2} \right) }. \end{aligned}$$
(6.12)

Now the statement of the lemma follows from (6.12) and (6.5). \(\square \)

Lemma 6.4 says precisely that there exist constants \(C_8, C_9 > 0\) such that

$$\begin{aligned} C_8 l(z) \leqslant K_{B_n}(x, z) \leqslant C_9 l(z), \quad x \in B_{b_1n}, \, z \in B_{n}^{c}. \end{aligned}$$
(6.13)

Now we are ready to prove our main result.

Proof of Theorem 1.3

Notice that, by spatial homogeneity, it is enough to prove the result for balls centered at the origin. We prove the theorem for \(a = b_1\), where \(b_1\) is as in Corollary 5.8; the general case then follows by the standard Harnack chain argument. Let \(x_1, x_2 \in B_{b_1n}\). Using (6.13), we get

$$\begin{aligned} K_{B_n}(x_1, z) \leqslant C_9 l(z) = \frac{C_9}{C_8} C_8 l(z) \leqslant \frac{C_9}{C_8} K_{B_n}(x_2, z). \end{aligned}$$

Multiplying both sides by \(f(z) \geqslant 0\) and summing over all \(z \notin B_n\), we get

$$\begin{aligned} \sum _{z \notin B_n} {f(z) K_{B_n}(x_1, z)} \leqslant \frac{C_9}{C_8} \sum _{z \notin B_n} {f(z) K_{B_n}(x_2, z)}. \end{aligned}$$

In view of (6.4), this means that

$$\begin{aligned} f(x_1) \leqslant \frac{C_9}{C_8} f(x_2) \end{aligned}$$

and that is what we wanted to prove. \(\square \)
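The mechanism of this last proof can also be observed on the toy walk from Remark 6.2: the maximal ratio of the Poisson kernel over interior points plays the role of \(C_9 / C_8\). A short sketch, under the same simplifying assumptions:

```python
# Maximal Poisson-kernel ratio over interior points: the analogue of C_9 / C_8.
inner = [-1, 0, 1]
ratios = [K[(x1, z)] / K[(x2, z)] for x1 in inner for x2 in inner for z in exterior]
print(max(ratios))   # a finite Harnack-type constant for the toy walk
```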