1 Introduction and Main Results

1.1 Description of the Model

We describe our model and then state our main results: see Sect. 1.4 for a discussion of related literature. Write \(x \in {\mathbb R}^2\) in Cartesian coordinates as \(x = (x_1, x_2)\). For parameters \(a^+, a^- > 0\) and \(\beta^+, \beta^- \geq 0\), define, for z ≥ 0, functions \(d^+ ( z ) := a^+ z^{\beta ^+}\) and \(d^- ( z ) := a^- z^{\beta ^-}\). Set

$$\displaystyle \begin{aligned} {\mathcal D} := \left\{ x \in {\mathbb R}^2 : x_1 \geq 0, \, - d^- (x_1 ) \leq x_2 \leq d^+ (x_1) \right\} .\end{aligned}$$

Write ∥ ⋅ ∥ for the Euclidean norm on \({\mathbb R}^2\). For \(x \in {\mathbb R}^2\) and \(A \subseteq {\mathbb R}^2\), write \(d(x, A) := \inf_{y \in A} \| x - y \|\) for the distance from x to A. Suppose that there exist \(B \in (0, \infty)\) and a subset \({\mathcal D}_B\) of \({\mathcal D}\) for which every \(x \in {\mathcal D}_B\) has \(d(x, {\mathbb R}^2 \setminus {\mathcal D} ) \leq B\). Let \({\mathcal D}_I := {\mathcal D} \setminus {\mathcal D}_B\); we call \({\mathcal D}_B\) the boundary and \({\mathcal D}_I\) the interior. Set \({\mathcal D}^\pm _B := \{ x \in {\mathcal D}_B : \pm x_2 > 0\}\) for the parts of \({\mathcal D}_B\) in the upper and lower half-plane, respectively.

Let \(\xi := (\xi_0, \xi_1, \ldots)\) be a discrete-time, time-homogeneous Markov chain on state-space \(S \subseteq {\mathcal D}\). Set \(S_I := S \cap {\mathcal D}_I\), \(S_B := S \cap {\mathcal D}_B\), and \(S^\pm _B := S \cap {\mathcal D}^\pm _B\). Write \({\mathbb P}_{x}\) and \(\mathbb {E}_{x}\) for conditional probabilities and expectations given \(\xi_0 = x \in S\), and suppose that \({\mathbb P}_x ( \xi _n \in S \text{ for all } n \geq 0 ) = 1\) for all x ∈ S. Set \(\varDelta := \xi_1 - \xi_0\). Then \({\mathbb P} ( \xi _{n+1} \in A \mid \xi _n = x ) = {\mathbb P}_{x} ( x + \varDelta \in A )\) for all x ∈ S, all measurable \(A \subseteq {\mathcal D}\), and all \(n \in {\mathbb Z}_+\). In what follows, we will always treat vectors in \({\mathbb R}^2\) as column vectors.

We will assume that the increments of ξ have uniformly bounded moments of order p > 2, that in \(S_I\) the walk has zero drift and a fixed increment covariance matrix, and that in \(S_B\) it reflects, meaning that it has drift away from \(\partial {\mathcal D}\) at a certain angle relative to the inwards-pointing normal vector. In fact we permit perturbations of this situation that are appropriately small as the distance from the origin increases. See Fig. 1 for an illustration.

Fig. 1
figure 1

An illustration of the model parameters, in the case where \(\beta^+ = \beta^- \in (0, 1)\)

To describe the assumptions formally, for \(x_1 > 0\) let \(n^+(x_1)\) denote the inwards-pointing unit normal vector to \(\partial {\mathcal D}\) at \((x_1, d^+(x_1))\), and let \(n^-(x_1)\) be the corresponding normal at \((x_1, -d^-(x_1))\); then \(n^+(x_1)\) is a scalar multiple of \((a^+ \beta ^+ x_1^{\beta ^+-1}, -1)\), and \(n^-(x_1)\) is a scalar multiple of \((a^- \beta ^- x_1^{\beta ^--1}, 1)\). Let \(n^+(x_1, \alpha)\) denote the unit vector obtained by rotating \(n^+(x_1)\) by angle α anticlockwise. Similarly, let \(n^-(x_1, \alpha)\) denote the unit vector obtained by rotating \(n^-(x_1)\) by angle α clockwise. (The orientation is such that, in each case, reflection at angle α < 0 points on the side of the normal towards the origin.)

We write \(\| \cdot \|_{\mathrm{op}}\) for the matrix (operator) norm defined by \(\| M \|_{\mathrm{op}} := \sup_u \| M u \|\), where the supremum is over all unit vectors \(u \in {\mathbb R}^2\). We take \(\xi_0 = x_0 \in S\) fixed, and impose the following assumptions for our main results.

  • (N) Suppose that \({\mathbb P}_x ( \limsup _{n \to \infty } \| \xi _n \| = \infty ) =1\) for all x ∈ S.

  • (Mp) There exists p > 2 such that

    $$\displaystyle \begin{aligned} \sup_{x \in S} \mathbb{E}_x ( \| \varDelta \|{}^p ) < \infty . \end{aligned} $$
    (1)
  • (D) We have that \(\sup _{x \in S_I : \| x \| \geq r } \| \mathbb {E}_x \varDelta \| = o (r^{-1})\) as r → ∞.

  • (R) There exist angles \(\alpha^\pm \in (-\pi/2, \pi/2)\) and functions \(\mu ^\pm : S^\pm _B \to {\mathbb R}\) with \(\liminf_{\|x\| \to \infty} \mu^\pm(x) > 0\), such that, as r → ∞,

    $$\displaystyle \begin{aligned} \sup_{x \in S_B^+ : \| x \| \geq r } \| \mathbb{E}_x \varDelta - \mu^+ (x) n^+(x_1,\alpha^+) \| & = O ( r^{-1}) ; \end{aligned} $$
    (2)
    $$\displaystyle \begin{aligned} \sup_{x \in S_B^- : \| x \| \geq r } \| \mathbb{E}_x \varDelta - \mu^- (x) n^-(x_1,\alpha^-) \| & = O( r^{-1} ) .\end{aligned} $$
    (3)
  • (C) There exists a positive-definite, symmetric 2 × 2 matrix Σ for which, as r → ∞,

    $$\displaystyle \begin{aligned} \sup_{x \in S_I : \| x \| \geq r } \big\| \mathbb{E}_x \big[ \varDelta \varDelta^\top \big] - \varSigma \big\|{}_{\mathrm{op}} = o ( 1 ) . \end{aligned} $$

We write the entries of Σ in (C) as

$$\displaystyle \begin{aligned} \varSigma = \begin{pmatrix} \sigma_1^2 & \rho \\ \rho & \sigma_2^2 \end{pmatrix} .\end{aligned}$$

Here ρ is the asymptotic increment covariance, and, since Σ is positive definite, \(\sigma_1 > 0\), \(\sigma_2 > 0\), and \(\rho ^2 < \sigma _1^2 \sigma _2^2\).

To identify the critically recurrent cases, we need slightly sharper control of the error terms in the drift assumption (D) and covariance assumption (C). In particular, we will in some cases impose the following stronger versions of these assumptions:

  • (D+) There exists ε > 0 such that \(\sup _{x \in S_I : \| x \| \geq r } \| \mathbb {E}_x \varDelta \| = O ( r^{-1-\varepsilon} )\) as r → ∞.

  • (C+) There exist ε > 0 and a positive-definite, symmetric 2 × 2 matrix Σ for which, as r → ∞,

    $$\displaystyle \begin{aligned} \sup_{x \in S_I : \| x \| \geq r } \big\| \mathbb{E}_x \big[ \varDelta \varDelta^\top \big] - \varSigma \big\|{}_{\mathrm{op}} = O ( r^{-\varepsilon} ) . \end{aligned} $$

Without loss of generality, we may use the same constant ε in both (D+) and (C+).

The non-confinement condition (N) ensures our questions of recurrence and transience (see below) are non-trivial, and is implied by standard irreducibility or ellipticity conditions: see [26] and the following example.

Example 1

Let \(S = {\mathbb Z}^2 \cap {\mathcal D}\), and take \({\mathcal D}_B\) to be the set of \(x \in {\mathcal D}\) for which x is within unit Euclidean distance of some \(y \in {\mathbb Z}^2 \setminus {\mathcal D}\). Then \(S_B\) contains those points of S that have a neighbour outside of \({\mathcal D}\), and \(S_I\) consists of those points of S whose neighbours are all in \({\mathcal D}\). If ξ is irreducible on S, then (N) holds (see e.g. Corollary 2.1.10 of [26]). If \(\beta^+ > 0\), then, for all ∥x∥ sufficiently large, every point \(x \in S_B^+\) has its neighbours to the right and below in S, so if \(\alpha^+ = 0\), for instance, we can achieve the asymptotic drift required by (2) using only nearest-neighbour jumps if we wish; similarly in \(S_B^-\).
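To make this concrete, here is a minimal simulation sketch of the example (an illustration only: the parameter values are arbitrary, and the uniform boundary step used below is a crude stand-in for (R) with \(\alpha^+ = \alpha^- = 0\); matching (2) and (3) exactly would require an x-dependent bias of order \(x_1^{\beta^+ - 1}\) in the boundary step probabilities):

```python
import numpy as np

# Sketch of Example 1: nearest-neighbour walk on S = Z^2 intersected with
# D = {x : x_1 >= 0, -a_m * x_1**b_m <= x_2 <= a_p * x_1**b_p}.
# Parameters are illustrative, not taken from the text.
rng = np.random.default_rng(1)
a_p, b_p = 1.0, 0.5   # a^+, beta^+
a_m, b_m = 1.0, 0.5   # a^-, beta^-

def in_domain(x1, x2):
    return x1 >= 0 and -a_m * x1 ** b_m <= x2 <= a_p * x1 ** b_p

def step(x1, x2):
    nbrs = [(x1 + 1, x2), (x1 - 1, x2), (x1, x2 + 1), (x1, x2 - 1)]
    inside = [y for y in nbrs if in_domain(*y)]
    if len(inside) == 4:              # all neighbours in D: x in S_I, zero drift
        return nbrs[rng.integers(4)]
    # x in S_B: uniform step into D, an essentially normal inward kick
    return inside[rng.integers(len(inside))]

x = (1, 0)
for _ in range(100_000):
    x = step(*x)
print("position after 1e5 steps:", x)
```

For simple symmetric steps in the interior, Σ is a multiple of the identity, so \(\sigma_1^2/\sigma_2^2 = 1\), and the choice \(\beta^+ = \beta^- = 1/2\) here would fall in the recurrent regime of Theorem 1(a)(i), modulo the caveat about (2) and (3) above.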

Under the non-confinement condition (N), the first question of interest is whether \(\liminf_{n \to \infty} \| \xi_n \|\) is finite or infinite. We say that ξ is recurrent if there exists \(r_0 \in {\mathbb R}_+\) for which \(\liminf_{n \to \infty} \| \xi_n \| \leq r_0\), a.s., and that ξ is transient if \(\lim_{n \to \infty} \| \xi_n \| = \infty\), a.s. The first main aim of this paper is to classify the process into one or other of these cases (which are not a priori exhaustive) depending on the parameters. Further, in the recurrent cases it is of interest to quantify the recurrence by studying the tails (or moments) of return times to compact sets. This is the second main aim of this paper.

In the present paper we focus on the case where \(\alpha^+ + \alpha^- = 0\), which we call ‘opposed reflection’. This case is the most subtle from the point of view of recurrence/transience, and, as we will see, exhibits a rich phase diagram depending on the model parameters. We emphasize that the model in the case \(\alpha^+ + \alpha^- = 0\) is near-critical in that both recurrence and transience are possible, depending on the parameters, and moreover (i) in the recurrent cases, return-times to bounded sets have heavy tails and are, in particular, non-integrable, so stationary distributions will not exist, and (ii) in the transient cases, escape to infinity will be only diffusive. There is a sense in which the model studied here can be viewed as a perturbation of zero-drift random walks, in the manner of the seminal work of Lamperti [19]: see e.g. [26] for a discussion of near-critical phenomena. We leave for future work the case \(\alpha^+ + \alpha^- \neq 0\), in which very different behaviour will occur: if \(\beta^\pm < 1\), then the case \(\alpha^+ + \alpha^- > 0\) gives super-diffusive (but sub-ballistic) transience, while the case \(\alpha^+ + \alpha^- < 0\) leads to positive recurrence.

Opposed reflection includes the special case where \(\alpha^+ = \alpha^- = 0\), which is ‘normal reflection’. Since the results in the latter case are more easily digested, and since it is an important case in its own right, we present the case of normal reflection first, in Sect. 1.2. We present the general case of opposed reflection in Sect. 1.3. In Sect. 1.4 we review some of the extensive related literature on reflecting processes. Then Sect. 1.5 gives an outline of the remainder of the paper, which consists of the proofs of the results in Sects. 1.2 and 1.3.

1.2 Normal Reflection

First we consider the case of normal (i.e., orthogonal) reflection.

Theorem 1

Suppose that (N), (Mp), (D), (R), and (C) hold with \(\alpha^+ = \alpha^- = 0\).

  1. (a)

    Suppose that \(\beta^+, \beta^- \in [0, 1)\). Let \(\beta := \max ( \beta ^+, \beta ^-)\). Then the following hold.

    1. (i)

      If \(\beta < \sigma _1^2 / \sigma _2^2\) , then ξ is recurrent.

    2. (ii)

      If \(\sigma _1^2 / \sigma _2^2 < \beta < 1\) , then ξ is transient.

    3. (iii)

      If, in addition, (D+) and (C+) hold, then the case \(\beta = \sigma _1^2/\sigma _2^2\) is recurrent.

  2. (b)

    Suppose that (D+) and (C+) hold, and \(\beta^+, \beta^- > 1\). Then ξ is recurrent.

Remark 1

  1. (i)

    Omitted from Theorem 1 is the case when at least one of \(\beta^\pm\) is equal to 1, or their values fall on either side of 1. Here we anticipate behaviour similar to [5].

  2. (ii)

    If \(\sigma _1^2 / \sigma _2^2 < 1\), then Theorem 1 shows a striking non-monotonicity property: there exist regions \({\mathcal D}_1 \subset {\mathcal D}_2 \subset {\mathcal D}_3\) such that the reflecting random walk is recurrent on \({\mathcal D}_1\) and \({\mathcal D}_3\), but transient on \({\mathcal D}_2\). This phenomenon does not occur in the classical case when Σ is the identity: see [28] for a derivation of monotonicity in the case of normally reflecting Brownian motion in unbounded domains in \({\mathbb R}^d\), d ≥ 2.

  3. (iii)

    Note that the correlation ρ and the values of \(a^+, a^-\) play no part in Theorem 1; ρ will, however, play a role in the more general Theorem 3 below.

Let \(\tau _r := \min \{ n \in {\mathbb Z}_+ : \| \xi _n \| \leq r \}\). Define

$$\displaystyle \begin{aligned} s_0 := s_0 (\varSigma,\beta) := \frac{1}{2} \left(1 - \frac{\sigma_2^2 \beta}{\sigma_1^2} \right) .\end{aligned} $$
(4)

Our next result concerns the moments of \(\tau_r\). Since most of our assumptions are asymptotic, we only make statements about r sufficiently large; with appropriate irreducibility assumptions, this restriction could be removed.

Theorem 2

Suppose that (N), (Mp), (D), (R), and (C) hold with \(\alpha^+ = \alpha^- = 0\).

  1. (a)

    Suppose that \(\beta^+, \beta^- \in [0, 1)\). Let \(\beta := \max ( \beta ^+, \beta ^-)\). Then the following hold.

    1. (i)

      If \(\beta < \sigma _1^2 / \sigma _2^2\), then \(\mathbb {E}_x ( \tau _r^s ) < \infty \) for all \(s < s_0\) and all r sufficiently large, but \(\mathbb {E}_x ( \tau _r^s ) = \infty \) for all \(s > s_0\) and all x with ∥x∥ > r for r sufficiently large.

    2. (ii)

      If \(\beta \geq \sigma _1^2 / \sigma _2^2\), then \(\mathbb {E}_x ( \tau _r^s ) = \infty \) for all s > 0 and all x with ∥x∥ > r for r sufficiently large.

  2. (b)

    Suppose that \(\beta^+, \beta^- > 1\). Then \(\mathbb {E}_x ( \tau _r^s ) = \infty \) for all s > 0 and all x with ∥x∥ > r for r sufficiently large.

Remark 2

  1. (i)

    Note that if \(\beta < \sigma _1^2 /\sigma _2^2\), then \(s_0 > 0\), while \(s_0 < 1/2\) for all β > 0, in which case the return time to a bounded set has a heavier tail than that for one-dimensional simple symmetric random walk.

  2. (ii)

    The transience result in Theorem 1(a)(ii) is essentially stronger than the claim in Theorem 2(a)(ii) for \(\beta > \sigma _1^2 / \sigma _2^2\), so the borderline (recurrent) case \(\beta = \sigma _1^2 / \sigma _2^2\) is the main content of the latter.

  3. (iii)

    Part (b) shows that the case \(\beta^\pm > 1\) is critical: no moments of return times exist, as in the case of, say, simple symmetric random walk in \({\mathbb Z}^2\) [26, p. 77].

1.3 Opposed Reflection

We now consider the more general case where \(\alpha^+ + \alpha^- = 0\), i.e., the two reflection angles are equal but opposite, relative to their respective normal vectors. For \(\alpha^+ = -\alpha^- \neq 0\), this is a particular example of oblique reflection. The phase transition in β now depends on ρ and α in addition to \(\sigma _1^2\) and \(\sigma _2^2\). Define

$$\displaystyle \begin{aligned} \beta_{\mathrm{c}} := \beta_{\mathrm{c}} (\varSigma, \alpha ) := \frac{\sigma_1^2}{\sigma_2^2} + \left( \frac{\sigma_2^2-\sigma_1^2}{\sigma_2^2} \right) \sin^2 \alpha + \frac{\rho}{\sigma_2^2} \sin 2 \alpha .\end{aligned} $$
(5)

The next result gives the key properties of the critical threshold function \(\beta_{\mathrm{c}}\) which are needed for interpreting our main result.

Proposition 1

For a fixed, positive-definite Σ such that \(|\sigma _1^2 - \sigma _2^2| + |\rho | > 0\), the function \(\alpha \mapsto \beta_{\mathrm{c}}(\varSigma, \alpha)\) over the interval \([ -\frac {\pi }{2}, \frac {\pi }{2}]\) is strictly positive for |α| ≤ π∕2, with two stationary points, one in \((-\frac {\pi }{2},0)\) and the other in \((0,\frac {\pi }{2})\), at which the function takes its maximum and minimum values of

$$\displaystyle \begin{aligned} \frac{1}{2} + \frac{\sigma_1^2}{2\sigma_2^2} \pm \frac{1}{2\sigma_2^2} \sqrt{ \left( \sigma_1^2 - \sigma_2^2\right)^2 + 4 \rho^2 } . \end{aligned} $$
(6)

The exception is the case where \(\sigma _1^2 - \sigma _2^2 = \rho = 0\), when \(\beta_{\mathrm{c}} \equiv 1\) is constant.
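As a quick numerical sanity check of Proposition 1 (illustrative only; the values \(\sigma_1^2 = 1\), \(\sigma_2^2 = 2\), ρ = 1∕2 are arbitrary), one can compare the extrema of \(\alpha \mapsto \beta_{\mathrm{c}}(\varSigma, \alpha)\) from (5) on a fine grid with the closed-form values (6):

```python
import numpy as np

# Compare grid extrema of beta_c from (5) with the closed form (6).
s1sq, s2sq, rho = 1.0, 2.0, 0.5   # sigma_1^2, sigma_2^2, rho (arbitrary)

def beta_c(alpha):
    return (s1sq / s2sq
            + ((s2sq - s1sq) / s2sq) * np.sin(alpha) ** 2
            + (rho / s2sq) * np.sin(2 * alpha))

alpha = np.linspace(-np.pi / 2, np.pi / 2, 2_000_001)
vals = beta_c(alpha)
disc = np.sqrt((s1sq - s2sq) ** 2 + 4 * rho ** 2)
print(vals.max(), vals.min())                        # grid extrema
print(0.5 + s1sq / (2 * s2sq) + disc / (2 * s2sq),   # (6) with "+"
      0.5 + s1sq / (2 * s2sq) - disc / (2 * s2sq))   # (6) with "-"
```

The two pairs agree (≈ 1.1036 and ≈ 0.3964 here); in particular the minimum lies strictly below 1, consistent with Remark 3(ii) below.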

Here is the recurrence classification in this setting.

Theorem 3

Suppose that (N), (Mp), (D), (R), and (C) hold with \(\alpha^+ = -\alpha^- = \alpha\) for |α| < π∕2.

  1. (a)

    Suppose that \(\beta^+, \beta^- \in [0, 1)\). Let \(\beta := \max ( \beta ^+, \beta ^-)\). Then the following hold.

    1. (i)

      If \(\beta < \beta _{\mathrm {c}}\), then ξ is recurrent.

    2. (ii)

      If \(\beta > \beta _{\mathrm {c}}\), then ξ is transient.

    3. (iii)

      If, in addition, (D+) and (C+) hold, then the case \(\beta = \beta _{\mathrm {c}}\) is recurrent.

  2. (b)

    Suppose that (D+) and (C+) hold, and \(\beta^+, \beta^- > 1\). Then ξ is recurrent.

Remark 3

  1. (i)

    The threshold (5) is invariant under the map (α, ρ)↦(−α, −ρ).

  2. (ii)

    For fixed Σ with \(| \sigma _1^2 - \sigma _2^2 | + |\rho | >0\), Proposition 1 shows that \(\beta_{\mathrm{c}}\) is non-constant and has exactly one maximum and exactly one minimum in \((-\frac {\pi }{2}, \frac {\pi }{2})\). Since \(\beta _{\mathrm {c}} (\varSigma , \pm \frac {\pi }{2} ) =1\), it follows from uniqueness of the minimum that the minimum is strictly less than 1, and so Theorem 3 shows that there is always an open interval of α for which there is transience.

  3. (iii)

    Since \(\beta_{\mathrm{c}} > 0\) always, recurrence is certain for small enough β.

  4. (iv)

    In the case where \(\sigma _1^2 = \sigma _2^2\) and ρ = 0, we have \(\beta_{\mathrm{c}} = 1\), so recurrence is certain for all \(\beta^+, \beta^- < 1\) and all α.

  5. (v)

    If α = 0, then \(\beta _{\mathrm {c}} = \sigma _1^2/\sigma _2^2\), so Theorem 3 generalizes Theorem 1.

Next we turn to passage-time moments. We generalize (4) and define

$$\displaystyle \begin{aligned} s_0 := s_0 (\varSigma,\alpha,\beta) := \frac{1}{2} \left( 1 - \frac{\beta}{\beta_{\mathrm{c}}} \right) ,\end{aligned} $$
(7)

with \(\beta_{\mathrm{c}}\) given by (5). The next result includes Theorem 2 as the special case α = 0.

Theorem 4

Suppose that (N), (Mp), (D), (R), and (C) hold with \(\alpha^+ = -\alpha^- = \alpha\) for |α| < π∕2.

  1. (a)

    Suppose that \(\beta^+, \beta^- \in [0, 1)\). Let \(\beta := \max ( \beta ^+, \beta ^-)\). Then the following hold.

    1. (i)

      If \(\beta < \beta _{\mathrm {c}}\), then \(s_0 \in (0, 1/2]\), and \(\mathbb {E}_x ( \tau _r^s ) < \infty \) for all \(s < s_0\) and all r sufficiently large, but \(\mathbb {E}_x ( \tau _r^s ) = \infty \) for all \(s > s_0\) and all x with ∥x∥ > r for r sufficiently large.

    2. (ii)

      If \(\beta \geq \beta _{\mathrm {c}}\), then \(\mathbb {E}_x ( \tau _r^s ) = \infty \) for all s > 0 and all x with ∥x∥ > r for r sufficiently large.

  2. (b)

    Suppose that \(\beta^+, \beta^- > 1\). Then \(\mathbb {E}_x ( \tau _r^s ) = \infty \) for all s > 0 and all x with ∥x∥ > r for r sufficiently large.

1.4 Related Literature

The stability properties of reflecting random walks or diffusions in unbounded domains in \({\mathbb R}^d\) have been studied for many years. A pre-eminent place in the development of the theory is occupied by processes in the quadrant \({\mathbb R}_+^2\) or quarter-lattice \({\mathbb Z}_+^2\), due to applications arising in queueing theory and other areas. Typically, the process is assumed to be maximally homogeneous in the sense that the transition mechanism is fixed in the interior and on each of the two half-lines making up the boundary. The cases where the motion in the interior of the domain has non-zero drift and where it has zero drift are quite distinct.

It was in 1961, in part motivated by queueing models, that Kingman [18] proposed a general approach to the non-zero drift problem on \({\mathbb Z}_+^2\) via Lyapunov functions and Foster’s Markov chain classification criteria [14]. A formal statement of the classification was given in the early 1970s by Malyshev, who developed both an analytic approach [22] and a Lyapunov function approach [23] (the latter, Malyshev reports, prompted by a question of Kolmogorov). Generically, the classification depends on the drift vector in the interior and the two boundary reflection angles. The Lyapunov function approach was further developed, so that the bounded jumps condition in [23] could be relaxed to finiteness of second moments [10, 27, 29] and, ultimately, of first moments [13, 30, 33]. The analytic approach was also subsequently developed [11], and although it seems to be not as robust as the Lyapunov function approach (the analysis in [22] was restricted to nearest-neighbour jumps), when it is applicable it can yield very precise information: see e.g. [15] for a recent application in the continuum setting. Intrinsically more complicated results are available for the non-zero drift case in \({\mathbb Z}_+^3\) [24] and \({\mathbb Z}_+^4\) [17].

The recurrence classification for the case of zero-drift reflecting random walk in \({\mathbb Z}_+^2\) was given in the early 1990s in [6, 12]; see also [13]. In this case, generically, the classification depends on the increment covariance matrix in the interior as well as the two boundary reflection angles. Subsequently, using a semimartingale approach extending work of Lamperti [19], passage-time moments were studied in [5], with refinements provided in [2, 3].

Parallel continuum developments concern reflecting Brownian motion in wedges in \({\mathbb R}^2\). In the zero-drift case with general (oblique) reflections, in the 1980s Varadhan and Williams [31] showed that the process was well-defined, and then Williams [32] gave the recurrence classification, thus preceding the random walk results of [6, 12], and, in the recurrent cases, asymptotics of stationary measures (cf. [4] for the discrete setting). Passage-time moments were later studied in [7, 25], by providing a continuum version of the results of [5], and in [2], using discrete approximation [1]. The non-zero drift case was studied by Hobson and Rogers [16], who gave an analogue of Malyshev’s theorem in the continuum setting.

For domains like our \({\mathcal D}\), Pinsky [28] established recurrence in the case of reflecting Brownian motion with normal reflections and standard covariance matrix in the interior. The case of general covariance matrix and oblique reflection does not appear to have been considered, and neither has the analysis of passage-time moments. The somewhat related problem of the asymptotics of the first exit time \(\tau_e\) of planar Brownian motion from domains like our \({\mathcal D}\) has been considered [8, 9, 20]: in the case where \(\beta^+ = \beta^- = \beta \in (0, 1)\), \(\log {\mathbb P} ( \tau _e > t )\) is bounded above and below by constants times \(-t^{(1-\beta)/(1+\beta)}\): see [20] and (for the case β = 1∕2) [9].

1.5 Overview of the Proofs

The basic strategy is to construct suitable Lyapunov functions \(f : {\mathbb R}^2 \to {\mathbb R}\) that satisfy appropriate semimartingale (i.e., drift) conditions on \(\mathbb {E}_x [ f (\xi _1 ) - f(\xi _0 ) ]\) for x outside a bounded set. In fact, since the Lyapunov functions that we use are most suitable for the case where the interior increment covariance matrix is Σ = I, the identity, we first apply a linear transformation T of \({\mathbb R}^2\) and work with the transformed walk \(T\xi\). The linear transformation is described in Sect. 2. Of course, one could combine these two steps and work directly with the Lyapunov function given by the composition f ∘ T for the appropriate f. However, for reasons of intuitive understanding and computational convenience, we prefer to separate the two steps.

Let \(\beta^\pm < 1\). Then for \(\alpha^+ = \alpha^- = 0\), the reflection angles are both pointing essentially vertically, with an asymptotically small component in the positive \(x_1\) direction. After the linear transformation T, the reflection angles are no longer almost vertical, but instead are almost opposed at some oblique angle, where the deviation from direct opposition is again asymptotically small, and in the positive \(x_1\) direction. For this reason, the case \(\alpha^+ = -\alpha^- = \alpha \neq 0\) is not conceptually different from the simpler case where α = 0, because after the linear transformation, both cases are oblique. In the case α ≠ 0, however, the details are more involved as both α and the value of the correlation ρ enter into the analysis of the Lyapunov functions, which is presented in Sect. 3, and is the main technical work of the paper. For \(\beta^\pm > 1\), intuition is provided by the case of reflection in the half-plane (see e.g. [32] for the Brownian case).

Once the Lyapunov function estimates are in place, the proofs of the main theorems are given in Sect. 4, using some semimartingale results which are variations on those from [26]. The appendix contains the proof of Proposition 1 on the properties of the threshold function β c defined at (5).

2 Linear Transformation

The inwards-pointing normal vectors to \(\partial {\mathcal D}\) at \((x_1, d^\pm(x_1))\) are

$$\displaystyle \begin{aligned} n^\pm (x_1) = \frac{1}{r^\pm(x_1)} \begin{pmatrix} a^\pm \beta^\pm x_1^{\beta^\pm -1} \\ \mp 1 \end{pmatrix}, \text{ where } r^\pm (x_1 ) := \sqrt{ 1 + (a^\pm )^2 (\beta^\pm)^2 x_1^{2 \beta^\pm -2} } .\end{aligned}$$

Define

$$\displaystyle \begin{aligned} n^\pm_\perp (x_1) := \frac{1}{r^\pm(x_1)} \begin{pmatrix} \pm 1 \\ a^\pm \beta^\pm x_1^{\beta^\pm -1} \end{pmatrix} .\end{aligned}$$

Recall that \(n^\pm(x_1, \alpha^\pm)\) is the unit vector at angle \(\alpha^\pm\) to \(n^\pm(x_1)\), with positive angles measured anticlockwise (for \(n^+\)) or clockwise (for \(n^-\)). Then (see Fig. 2 for the case of \(n^+\)) we have \(n^\pm (x_1, \alpha ^\pm ) = n^\pm (x_1) \cos \alpha ^\pm + n_\perp ^\pm (x_1) \sin \alpha ^\pm \), so

$$\displaystyle \begin{aligned} n^\pm ( x_1, \alpha^\pm ) = \frac{1}{r^\pm(x_1)} \begin{pmatrix} \sin \alpha^\pm + a^\pm \beta^\pm x_1^{\beta^\pm -1} \cos \alpha^\pm \\ \mp \cos \alpha^\pm \pm a^\pm \beta^\pm x_1^{\beta^\pm -1} \sin \alpha^\pm \end{pmatrix} .\end{aligned}$$
Fig. 2
figure 2

Diagram describing oblique reflection at angle \(\alpha^+ > 0\)

In particular, if \(\alpha^+ = -\alpha^- = \alpha\),

$$\displaystyle \begin{aligned} n^\pm ( x_1, \alpha^\pm ) = \frac{1}{r^\pm(x_1)} \begin{pmatrix} \pm \sin \alpha + a^\pm \beta^\pm x_1^{\beta^\pm -1} \cos \alpha \\ \mp \cos \alpha + a^\pm \beta^\pm x_1^{\beta^\pm -1} \sin \alpha \end{pmatrix} =: \begin{pmatrix} n_1^\pm ( x_1, \alpha^\pm ) \\ n_2^\pm ( x_1, \alpha^\pm ) \end{pmatrix} .\end{aligned} $$
(8)

Recall that \(\varDelta = \xi_1 - \xi_0\). Write \(\varDelta = (\varDelta_1, \varDelta_2)\) in components.

Lemma 1

Suppose that (R) holds, with \(\alpha^+ = -\alpha^- = \alpha\) and \(\beta^+, \beta^- \geq 0\). If \(\beta^\pm < 1\), then, for \(x \in S_B^\pm \), as ∥x∥→∞,

$$\displaystyle \begin{aligned} \mathbb{E}_x \varDelta_1 & = \pm \mu^\pm (x) \sin \alpha + a^\pm \beta^\pm \mu^\pm (x) x_1^{\beta^\pm -1} \cos \alpha \\ & {} \qquad {} + O ( \| x\|{}^{2\beta^\pm -2}) + O ( \|x \|{}^{-1}); \end{aligned} $$
(9)
$$\displaystyle \begin{aligned} \mathbb{E}_x \varDelta_2 & = \mp \mu^\pm (x) \cos \alpha + a^\pm \beta^\pm \mu^\pm (x) x_1^{\beta^\pm -1} \sin \alpha \\ & {} \qquad {} + O ( \| x\|{}^{2\beta^\pm -2}) + O ( \|x \|{}^{-1}) . \end{aligned} $$
(10)

If \(\beta^\pm > 1\), then, for \(x \in S_B^\pm \), as ∥x∥→∞,

$$\displaystyle \begin{aligned} \mathbb{E}_x \varDelta_1 & = \mu^\pm (x) \cos \alpha \pm \frac{\mu^\pm (x) \sin \alpha}{a^\pm \beta^\pm} x_1^{1-\beta^\pm} + O (x_1^{2-2\beta^\pm}) + O ( \|x \|{}^{-1}); \end{aligned} $$
(11)
$$\displaystyle \begin{aligned} \mathbb{E}_x \varDelta_2 & = \mu^\pm (x) \sin \alpha \mp \frac{\mu^\pm (x) \cos \alpha}{a^\pm \beta^\pm} x_1^{1-\beta^\pm} + O (x_1^{2-2\beta^\pm} ) + O ( \|x \|{}^{-1}).\end{aligned} $$
(12)

Proof

Suppose that \(x \in S^\pm _B\). By (2) and (3), we have that \(\| \mathbb {E}_x \varDelta - \mu ^\pm (x) n^\pm (x_1,\alpha ^\pm )\| = O( \|x\|{ }^{-1})\). First suppose that \(0 \leq \beta^\pm < 1\). Then \(1/r^\pm (x_1) = 1 + O (x_1^{2\beta ^\pm -2})\), and hence, by (8),

$$\displaystyle \begin{aligned} n_1^\pm (x_1,\alpha^\pm) & = \pm \sin \alpha + a^\pm \beta^\pm x_1^{\beta^\pm -1} \cos \alpha + O (x_1^{2\beta^\pm -2}) ;\\ n_2^\pm (x_1,\alpha^\pm) & = \mp \cos \alpha + a^\pm \beta^\pm x_1^{\beta^\pm -1} \sin \alpha + O (x_1^{2\beta^\pm -2} ).\end{aligned} $$

Then, since \(\| x \| = x_1 + o(x_1)\) as ∥x∥ → ∞ with \(x \in {\mathcal D}\), we obtain (9) and (10).

On the other hand, if \(\beta^\pm > 1\), then

$$\displaystyle \begin{aligned} \frac{1}{r^\pm (x_1)} = \frac{x_1^{1-\beta^\pm}}{a^\pm \beta^\pm} + O (x_1^{3-3\beta^\pm}), \end{aligned}$$

and hence, by (8),

$$\displaystyle \begin{aligned} n_1^\pm (x_1,\alpha^\pm) & = \cos \alpha \pm \frac{\sin \alpha}{a^\pm \beta^\pm} x_1^{1-\beta^\pm} + O (x_1^{2-2\beta^\pm}) ;\\ n_2^\pm (x_1,\alpha^\pm) & = \sin \alpha \mp \frac{\cos \alpha}{a^\pm \beta^\pm} x_1^{1-\beta^\pm} + O (x_1^{2-2\beta^\pm} ).\end{aligned} $$

The expressions (11) and (12) follow. □

It is convenient to introduce a linear transformation of \({\mathbb R}^2\) under which the asymptotic increment covariance matrix Σ appearing in (C) is transformed to the identity. Define

$$\displaystyle \begin{aligned} T := \begin{pmatrix} \frac{\sigma_2}{s} & - \frac{\rho}{s\sigma_2} \\ 0 & \frac{1}{\sigma_2} \end{pmatrix}, \text{ where } s := \sqrt{ \det \varSigma } = \sqrt{ \sigma_1^2 \sigma_2^2 - \rho^2 } ;\end{aligned}$$

recall that \(\sigma_2, s > 0\), since Σ is positive definite. The choice of T is such that \(T \varSigma T^\top = I\) (the identity), and \(x \mapsto Tx\) leaves the horizontal direction unchanged. Explicitly,

$$\displaystyle \begin{aligned} T \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} \frac{\sigma_2}{s} x_1 - \frac{\rho}{s\sigma_2} x_2 \\ \frac{1}{\sigma_2} x_2 \end{pmatrix} . \end{aligned} $$
(13)

Note that T is positive definite, and so ∥Tx∥ is bounded above and below by positive constants times ∥x∥. Also, if \(x \in {\mathcal D}\) and \(\beta^+, \beta^- < 1\), the fact that \(|x_2| = o(x_1)\) means that Tx has the properties (i) \((Tx)_1 > 0\) for all \(x_1\) sufficiently large, and (ii) \(|(Tx)_2| = o((Tx)_1)\) as \(x_1 \to \infty\). See Fig. 3 for a picture.

Fig. 3
figure 3

An illustration of the transformation T with ρ > 0 acting on a domain \({\mathcal D}\) with \(\beta^+ = \beta^- = \beta\) for β ∈ (0, 1) (left) and β > 1 (right). The angle \(\theta_2\) is given by \(\theta _2 = \arctan (\rho /s)\), measured anticlockwise from the positive horizontal axis
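For completeness, a direct computation (using \(s^2 = \sigma_1^2 \sigma_2^2 - \rho^2\)) confirms that T transforms Σ to the identity:

$$\displaystyle \begin{aligned} T \varSigma = \begin{pmatrix} \frac{\sigma_1^2 \sigma_2^2 - \rho^2}{s \sigma_2} & \frac{\rho \sigma_2 - \rho \sigma_2}{s} \\ \frac{\rho}{\sigma_2} & \sigma_2 \end{pmatrix} = \begin{pmatrix} \frac{s}{\sigma_2} & 0 \\ \frac{\rho}{\sigma_2} & \sigma_2 \end{pmatrix} , \qquad T \varSigma T^\top = \begin{pmatrix} \frac{s}{\sigma_2} & 0 \\ \frac{\rho}{\sigma_2} & \sigma_2 \end{pmatrix} \begin{pmatrix} \frac{\sigma_2}{s} & 0 \\ - \frac{\rho}{s \sigma_2} & \frac{1}{\sigma_2} \end{pmatrix} = I . \end{aligned}$$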

The next result describes the increment moment properties of the process under the transformation T. For convenience, we set \(\tilde \varDelta := T \varDelta \) for the transformed increment, with components \(\tilde \varDelta _i = ( T \varDelta )_i\).

Lemma 2

Suppose that (D), (R), and (C) hold, with \(\alpha^+ = -\alpha^- = \alpha\), and \(\beta^+, \beta^- \geq 0\). Then, as ∥x∥→∞ with \(x \in S_I\),

$$\displaystyle \begin{aligned} \mathbb{E}_x \tilde \varDelta = o ( \| x \|{}^{-1} ) , ~\text{and}~ \mathbb{E}_x \big[ \tilde \varDelta \tilde \varDelta^\top \big] = I + o ( 1 ) . \end{aligned} $$
(14)

If, in addition, (D+) and (C+) hold with ε > 0, then, as ∥x∥→∞ with \(x \in S_I\),

$$\displaystyle \begin{aligned} \mathbb{E}_x \tilde \varDelta = O ( \| x \|{}^{-1-\varepsilon} ) , ~\text{and}~ \mathbb{E}_x \big[ \tilde \varDelta \tilde \varDelta^\top \big] = I + O ( \| x \|{}^{-\varepsilon} ) . \end{aligned} $$
(15)

If \(\beta^\pm < 1\), then, as ∥x∥→∞ with \(x \in S_B^\pm\),

$$\displaystyle \begin{aligned} \mathbb{E}_x \tilde \varDelta_1 & = \pm \frac{\sigma_2 \mu^\pm (x)}{s} \sin \alpha \pm \frac{\rho \mu^\pm(x)}{s\sigma_2} \cos \alpha + \frac{\sigma_2 a^\pm \beta^\pm \mu^\pm (x)}{s} x_1^{\beta^\pm-1} \cos \alpha \\ & {} \qquad \qquad {} - \frac{\rho a^\pm \beta^\pm \mu^\pm (x)}{s \sigma_2} x_1^{\beta^\pm-1} \sin \alpha + O (\|x\|{}^{2\beta^\pm -2} ) + O ( \| x \|{}^{-1} ); \end{aligned} $$
(16)
$$\displaystyle \begin{aligned} \mathbb{E}_x \tilde \varDelta_2 & = \mp \frac{ \mu^\pm(x)}{\sigma_2} \cos \alpha + \frac{a^\pm \beta^\pm \mu^\pm (x)}{\sigma_2} x_1^{\beta^\pm-1} \sin \alpha \\ & {} \qquad \qquad {} + O (\|x\|{}^{2\beta^\pm -2} ) + O ( \| x \|{}^{-1} ).\end{aligned} $$
(17)

If \(\beta^\pm > 1\), then, as ∥x∥→∞ with \(x \in S_B^\pm\),

$$\displaystyle \begin{aligned} \mathbb{E}_x \tilde \varDelta_1 & = \frac{\sigma_2 \mu^\pm (x)}{s} \cos \alpha - \frac{\rho \mu^\pm (x)}{s \sigma_2} \sin \alpha \pm \frac{\sigma_2 \mu^\pm (x)}{a^\pm \beta^\pm s} x_1^{1-\beta^\pm} \sin \alpha \\ & {} \qquad \qquad {} \pm \frac{\rho \mu^\pm(x)}{a^\pm \beta^\pm s \sigma_2} x_1^{1-\beta^\pm} \cos \alpha + O( x_1^{2-2\beta^\pm} ) + O ( \| x \|{}^{-1} ); \end{aligned} $$
(18)
$$\displaystyle \begin{aligned} \mathbb{E}_x \tilde \varDelta_2 & = \frac{\mu^\pm (x)}{\sigma_2} \sin \alpha \mp \frac{ \mu^\pm(x)}{a^\pm \beta^\pm \sigma_2} x_1^{1-\beta^\pm} \cos \alpha + O( x_1^{2-2\beta^\pm} ) + O ( \| x \|{}^{-1} ).\end{aligned} $$
(19)

Proof

By linearity,

$$\displaystyle \begin{aligned} \mathbb{E}_x \tilde \varDelta = T \mathbb{E}_x \varDelta, \end{aligned} $$
(20)

which, by (D) or (D+), is, respectively, \(o(\| x \|{}^{-1})\) or \(O(\| x \|{}^{-1-\varepsilon})\) for \(x \in S_I\). Also, since \(\tilde \varDelta \tilde \varDelta^\top = T \varDelta \varDelta^\top T^\top\) and \(T \varSigma T^\top = I\), we have

$$\displaystyle \begin{aligned} \mathbb{E}_x \big[ \tilde \varDelta \tilde \varDelta^\top \big] - I = T \left( \mathbb{E}_x \big[ \varDelta \varDelta^\top \big] - \varSigma \right) T^\top . \end{aligned}$$

For \(x \in S_I\), the middle matrix in the last product here has norm o(1) or \(O(\| x \|{}^{-\varepsilon})\), by (C) or (C+). Thus we obtain (14) and (15). For \(x \in S^\pm _B\), the claimed results follow on using (20), (13), and the expressions for \(\mathbb {E}_x \varDelta \) in Lemma 1. □

3 Lyapunov Functions

For the rest of the paper, we suppose that \(\alpha^+ = -\alpha^- = \alpha\) for some |α| < π∕2. Our proofs will make use of some carefully chosen functions of the process. Most of these functions are conveniently expressed in polar coordinates.

We write x = (r, θ) in polar coordinates, with angles measured relative to the positive horizontal axis: r := r(x) := ∥x∥ and θ := θ(x) ∈ (−π, π] is the angle between the ray through 0 and x and the ray in the Cartesian direction (1, 0), with the convention that anticlockwise angles are positive. Then \(x_1 = r \cos \theta \) and \(x_2 = r \sin \theta \).

For \(w \in {\mathbb R}\), \(\theta_0 \in (-\pi/2, \pi/2)\), and \(\gamma \in {\mathbb R}\), define

$$\displaystyle \begin{aligned} h_w (x) := h_w (r,\theta) := r^w \cos (w\theta-\theta_0), ~\text{and}~ f^\gamma_w (x) := ( h_w (Tx) )^\gamma ,\end{aligned} $$
(21)

where T is the linear transformation described at (13). The functions \(h_w\) were used in analysis of processes in wedges in e.g. [5, 21, 29, 31]. Since the \(h_w\) are harmonic for the Laplacian (see below for a proof), Lemma 2 suggests that \(h_w(T\xi_n)\) will be approximately a martingale in \(S_I\), and the choice of the geometrical parameter \(\theta_0\) gives us the flexibility to try to arrange things so that the level curves of \(h_w\) are incident to the boundary at appropriate angles relative to the reflection vectors. The level curves of \(h_w\) cross the horizontal axis at angle \(\theta_0\): see Fig. 4, and (33) below. In the case \(\beta^\pm < 1\), the interest is near the horizontal axis, and we take \(\theta_0\) to be such that the level curves cut \(\partial {\mathcal D}\) at the reflection angles (asymptotically), so that \(h_w(T\xi_n)\) will be approximately a martingale also in \(S_B\). Then adjusting w and γ will enable us to obtain a supermartingale with the properties suitable to apply some Foster–Lyapunov theorems. This intuition is solidified in Lemma 4 below, where we show that the parameters w, \(\theta_0\), and γ can be chosen so that \(f^\gamma _w (\xi _n)\) satisfies an appropriate supermartingale condition outside a bounded set. For the case \(\beta^\pm < 1\), since we only need to consider θ ≈ 0, we could replace these harmonic functions in polar coordinates by suitable polynomial approximations in Cartesian components, but since we also want to consider \(\beta^\pm > 1\), it is convenient to use the functions in the form given. When \(\beta^\pm > 1\), the recurrence classification is particularly delicate, so we must use another function (see (57) below), although the functions at (21) will still be used to study passage time moments in that case.

Fig. 4
figure 4

Level curves of the function \(h_w(x)\) with \(\theta_0 = \pi/6\) and w = 1∕4. The level curves cut the horizontal axis at angle \(\theta_0\) to the vertical

If \(\beta^+, \beta^- < 1\), then θ(x) → 0 as ∥x∥→∞ with \(x \in {\mathcal D}\), which means that, for any \(|\theta_0| < \pi/2\), \(h_w(x) \geq \delta \| x \|{}^w\) for some δ > 0 and all x ∈ S with ∥x∥ sufficiently large. On the other hand, for \(\beta^+, \beta^- > 1\), we will restrict to the case with w > 0 sufficiently small such that \(\cos ( w \theta - \theta _0 )\) is bounded away from zero, uniformly in θ ∈ [−π∕2, π∕2], so that we again have the estimate \(h_w(x) \geq \delta \| x \|{}^w\) for some δ > 0 and all \(x \in {\mathcal D}\), but where now \({\mathcal D}\) is close to the whole half-plane (see Remark 4). In the calculations that follow, we will often use the fact that \(h_w(x)\) is bounded above and below by a constant times \(\| x \|{}^w\) as ∥x∥→∞ with \(x \in {\mathcal D}\).
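The harmonicity of \(h_w\) is verified by direct differentiation in the proof of Lemma 4 below; as an independent numerical illustration (the test values w = 1∕4, \(\theta_0 = \pi/6\), and the test point are arbitrary), a five-point finite-difference Laplacian of \(h_w\) vanishes up to discretisation error:

```python
import numpy as np

w, theta0 = 0.25, np.pi / 6        # arbitrary test values

def h(x1, x2):
    # h_w(x) = r^w cos(w*theta - theta_0) in polar coordinates, as at (21)
    r, th = np.hypot(x1, x2), np.arctan2(x2, x1)
    return r ** w * np.cos(w * th - theta0)

x1, x2, e = 2.0, 0.7, 1e-4         # test point and mesh size
lap = (h(x1 + e, x2) + h(x1 - e, x2) + h(x1, x2 + e) + h(x1, x2 - e)
       - 4 * h(x1, x2)) / e ** 2
print(lap)                          # approximately 0 (discretisation error only)
```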

We use the notation \(D_i := \frac {{\mathrm d}}{{\mathrm d} x_i}\) for differentials, and for \(f : {\mathbb R}^2 \to {\mathbb R}\) write Df for the vector with components \((Df)_i = D_i f\). We use repeatedly

$$\displaystyle \begin{aligned} D_1 r = \cos \theta, ~ D_2 r = \sin \theta, ~ D_1 \theta = - \frac{\sin \theta}{r}, ~ D_2 \theta = \frac{\cos \theta}{r} .\end{aligned} $$
(22)

Define

$$\displaystyle \begin{aligned} \theta_1 := \theta_1 ( \varSigma, \alpha ) := \arctan \left( \frac{\sigma_2^2}{s} \tan \alpha + \frac{\rho}{s} \right) \in (-\pi/2, \pi/2) . \end{aligned} $$
(23)

For β ± > 1, we will also need

$$\displaystyle \begin{aligned} \theta_2 := \theta_2 (\varSigma) := \arctan \left( \frac{\rho}{s} \right) \in (-\pi/2, \pi/2), \end{aligned} $$
(24)

and θ 3 := θ 3(Σ, α) ∈ (−π, π) for which

$$\displaystyle \begin{aligned} \sin \theta_3 = \frac{s \sin \alpha}{\sigma_2 d}, \text{ and } \cos \theta_3 = \frac{\sigma_2^2 \cos \alpha - \rho \sin \alpha}{\sigma_2 d} ,\end{aligned} $$
(25)

where

$$\displaystyle \begin{aligned} d := d (\varSigma,\alpha) := \sqrt{ \sigma_2^2 \cos^2 \alpha - 2 \rho \sin \alpha \cos \alpha + \sigma_1^2 \sin^2 \alpha } .\end{aligned} $$
(26)

The geometric interpretation of θ 1, θ 2, and θ 3 is as follows.

  • The angle between (0, ±1) and T(0, ±1) has magnitude \(\theta_2\). Thus, if \(\beta^\pm < 1\), then \(\theta_2\) is, as \(x_1 \to \infty\), the limiting angle of the transformed inwards-pointing normal at \(x_1\) relative to the vertical. On the other hand, if \(\beta^\pm > 1\), then \(\theta_2\) is, as \(x_1 \to \infty\), the limiting angle, relative to the horizontal, of the inwards-pointing normal to \(T \partial {\mathcal D}\). See Fig. 3.

  • The angle between (0, −1) and \(T (\sin \alpha , -\cos \alpha )\) is \(\theta_1\). Thus, if \(\beta^\pm < 1\), then \(\theta_1\) is, as \(x_1 \to \infty\), the limiting angle between the vertical and the transformed reflection vector. Since the normal in the transformed domain remains asymptotically vertical, \(\theta_1\) is in this case the limiting reflection angle, relative to the normal, after the transformation.

  • The angle between (1, 0) and \(T ( \cos \alpha , \sin \alpha )\) is \(\theta_3\). Thus, if \(\beta^\pm > 1\), then \(\theta_3\) is, as \(x_1 \to \infty\), the limiting angle between the horizontal and the transformed reflection vector. Since the transformed normal is, asymptotically, at angle \(\theta_2\) relative to the horizontal, the limiting reflection angle, relative to the normal, after the transformation is in this case \(\theta_3 - \theta_2\).

We need two simple facts.

Lemma 3

We have (i) \(\inf _{\alpha \in [-\frac {\pi }{2},\frac {\pi }{2}]} d (\varSigma ,\alpha ) >0\) , and (ii) |θ 3 − θ 2| < π∕2.

Proof

For (i), from (26) we may write

$$\displaystyle \begin{aligned} d^2 = \sigma_2^2 + \left( \sigma_1^2 - \sigma_2^2 \right) \sin^2 \alpha - \rho \sin 2\alpha .\end{aligned} $$
(27)

If \(\sigma _1^2 \neq \sigma _2^2\), then, by Lemma 11, the extrema over \(\alpha \in [-\frac {\pi }{2},\frac {\pi }{2}]\) of (27) are

$$\displaystyle \begin{aligned} \sigma_2^2 + \frac{\sigma_1^2 - \sigma_2^2 }{2} \left(1 \pm \sqrt{1 + \frac{4\rho^2}{(\sigma_1^2 - \sigma_2^2)^2}} \right). \end{aligned} $$

Hence

$$\displaystyle \begin{aligned} d^2 \geq \frac{\sigma_1^2 +\sigma_2^2}{2} - \frac{1}{2} \sqrt{ (\sigma_1^2 - \sigma_2^2)^2 + 4 \rho^2 } ,\end{aligned}$$

which is strictly positive since \(\rho ^2 < \sigma _1^2 \sigma _2^2\). If \(\sigma _1^2 = \sigma _2^2\), then \(d^2 \geq \sigma _2^2 - | \rho |\), and \(| \rho | < | \sigma _1 \sigma _2 | = \sigma _2^2\), so d is also strictly positive in that case.

For (ii), we use the fact that \(\cos (\theta _3 - \theta _2 ) = \cos \theta _3 \cos \theta _2 + \sin \theta _3 \sin \theta _2\), where, by (24), \(\sin \theta _2 = \frac {\rho }{\sigma _1\sigma _2}\) and \(\cos \theta _2 = \frac {s}{\sigma _1\sigma _2}\), and (25), to get \(\cos (\theta _3 - \theta _2 ) = \frac {s}{\sigma _1 d} \cos \alpha > 0\). Since |θ 3 − θ 2| < 3π∕2, it follows that |θ 3 − θ 2| < π∕2, as claimed. □

We estimate the expected increments of our Lyapunov functions in two stages: the main term comes from a Taylor expansion valid when the jump of the walk is not too big compared to its current distance from the origin, while we bound the (smaller) contribution from big jumps using the moments assumption (Mp). For the first stage, let \(B_b (x) := \{ z \in {\mathbb R}^2 : \| x - z \| \leq b \}\) denote the (closed) Euclidean ball centred at x with radius b ≥ 0. We use the multivariable Taylor theorem in the following form. Suppose that \(f : {\mathbb R}^2 \to {\mathbb R}\) is thrice continuously differentiable in \(B_b(x)\). Recall that Df(x) is the vector function whose components are \(D_i f(x)\). Then, for \(y \in B_b(x)\),

$$\displaystyle \begin{aligned} f (x + y) & = f(x) + \langle D f(x) , y \rangle + y_1^2 \frac{D^2_{1} f(x)}{2} + y_2^2 \frac{D^2_{2} f(x)}{2} + y_1 y_2 D_1 D_2 f(x) \\ & {} \qquad {} + R (x,y) ,\end{aligned} $$
(28)

where, for all \(y \in B_b(x)\), \(| R(x, y) | \leq C \| y \|{}^3 R(x)\) for an absolute constant C < ∞ and

$$\displaystyle \begin{aligned} R (x) := \max_{i,j,k} \sup_{ z \in B_b(x) } \left| D_i D_j D_k f (z) \right| .\end{aligned}$$

For dealing with the large jumps, we observe the useful fact that if p > 2 is a constant for which (1) holds, then for some constant C < ∞, all δ ∈ (0, 1), and all q ∈ [0, p],

$$\displaystyle \begin{aligned} \mathbb{E}_x \big[ \| \varDelta \|{}^q {{\mathbf 1}{\{{ \| \varDelta \| \geq \| x \|{}^\delta }\}}} \big] \leq C \| x \|{}^{- \delta(p-q)} ,\end{aligned} $$
(29)

for all ∥x∥ sufficiently large. To see (29), write \(\| \varDelta \|{}^q = \| \varDelta \|{}^p \| \varDelta \|{}^{q-p}\) and use the fact that \(\| \varDelta \| \geq \| x \|{}^\delta\) to bound the second factor.
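Spelled out: since q ≤ p, on the event \(\{ \| \varDelta \| \geq \| x \|{}^\delta \}\) we have \(\| \varDelta \|{}^{q-p} \leq \| x \|{}^{-\delta (p-q)}\), so that

$$\displaystyle \begin{aligned} \mathbb{E}_x \big[ \| \varDelta \|{}^q {{\mathbf 1}{\{{ \| \varDelta \| \geq \| x \|{}^\delta }\}}} \big] \leq \| x \|{}^{-\delta(p-q)} \, \mathbb{E}_x \big[ \| \varDelta \|{}^p \big] , \end{aligned}$$

and the right-hand side is uniformly \(O ( \| x \|{}^{-\delta(p-q)} )\), by (Mp).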

Here is our first main Lyapunov function estimate.

Lemma 4

Suppose that (Mp), (D), (R), and (C) hold, with p > 2, \(\alpha^+ = -\alpha^- = \alpha\) for |α| < π∕2, and \(\beta^+, \beta^- \geq 0\). Let \(w, \gamma \in {\mathbb R}\) be such that 2 − p < γw < p. Take \(\theta_0 \in (-\pi/2, \pi/2)\). Then, as ∥x∥→∞ with \(x \in S_I\),

$$\displaystyle \begin{aligned} \mathbb{E} [ f^\gamma_w(\xi_{n+1}) - f^\gamma_w( \xi_n) \mid \xi_n = x ] & = \frac{\gamma(\gamma-1)}{2} w^2 ( h_w(Tx) )^{\gamma-2} \| Tx \|{}^{2w-2} \\ & {} \qquad {} + o ( \|x\|{}^{\gamma w -2}). \end{aligned} $$
(30)

We separate the boundary behaviour into two cases.

  1. (i)

    If \(0 \leq \beta^\pm < 1\), take \(\theta_0 = \theta_1\) given by (23). Then, as ∥x∥→∞ with \(x \in S^\pm _B\),

    $$\displaystyle \begin{aligned} & {} \mathbb{E} [ f^\gamma_w(\xi_{n+1}) - f^\gamma_w( \xi_n) \mid \xi_n = x ] \\ & {} \quad {} = \gamma w \| Tx \|{}^{w-1} \left( h_w (Tx ) \right)^{\gamma-1} \frac{a^\pm \mu^\pm (x) \sigma_2 \cos \theta_1 }{s \cos \alpha} \left(\beta^\pm - (1-w) \beta_{\mathrm{c}} \right) x_1^{\beta^\pm-1} \\ & {} \qquad {} + o( \| x\|{}^{w \gamma + \beta^\pm-2} ) , \end{aligned} $$
    (31)

    where \(\beta_{\mathrm{c}}\) is given by (5).

  2. (ii)

    If \(\beta^\pm > 1\), suppose that w ∈ (0, 1∕2) and \(\theta_0 = \theta_0(\varSigma, \alpha, w) = \theta_3 - (1-w)\theta_2\), where \(\theta_2\) and \(\theta_3\) are given by (24) and (25), such that \(\sup _{\theta \in [-\frac {\pi }{2},\frac {\pi }{2}]} |w \theta - \theta _0 | < \pi /2\). Then, with d = d(Σ, α) as defined at (26), as ∥x∥→∞ with \(x \in S^\pm _B\),

    $$\displaystyle \begin{aligned} & {} \mathbb{E} [ f^\gamma_w(\xi_{n+1}) - f^\gamma_w( \xi_n) \mid \xi_n = x ] \\ & {} \quad {}= \gamma w \| Tx \|{}^{w-1} \left( h_w (Tx ) \right)^{\gamma-1} \frac{d \mu^\pm (x)}{s} \left( \cos ( (1-w) (\pi/2) ) + o(1) \right) .\end{aligned} $$
    (32)

Remark 4

By Lemma 3(ii), we can choose w > 0 small enough so that \(|\theta_3 - (1-w)\theta_2| < \pi/2\); hence, with \(\theta_0 = \theta_3 - (1-w)\theta_2\), we can always choose w > 0 small enough so that \(\sup _{\theta \in [-\frac {\pi }{2},\frac {\pi }{2}]} |w \theta - \theta _0 | < \pi /2\), as required for the \(\beta^\pm > 1\) part of Lemma 4.

Proof (of Lemma 4 )

Differentiating (21) and using (22) we see that

$$\displaystyle \begin{aligned} D_1 h_w(x) & = w r^{w-1} \cos \left( (w-1) \theta - \theta_0 \right) , \text{ and } \\ D_2 h_w (x) & = - w r^{w-1} \sin \left( (w-1) \theta - \theta_0 \right) {} .\end{aligned} $$
(33)

Moreover,

$$\displaystyle \begin{aligned} D_1^2 h_w(x) = w (w-1) r^{w-2} \cos \left( (w-2)\theta - \theta_0 \right) = - D_2^2 h_w(x) ,\end{aligned}$$

verifying that \(h_w\) is harmonic. Also, for any i, j, k, \(| D_i D_j D_k h_w(x) | = O(r^{w-3})\). Writing \(h_w^\gamma (x) := ( h_w (x) )^\gamma \), we also have that \(D_i h_w^\gamma (x) = \gamma h_w^{\gamma -1} (x) D_i h_w (x)\), that

$$\displaystyle \begin{aligned} D_i D_j h_w^\gamma (x) & = \gamma h_w^{\gamma-1} (x) D_i D_j h_w (x) + \gamma (\gamma -1) h_w^{\gamma -2} (x) ( D_i h_w (x) ) (D_j h_w(x)) ,\end{aligned} $$

and \(| D_i D_j D_k h_w^\gamma (x) | = O ( r^{\gamma w -3} )\). We apply Taylor’s formula (28) in the ball \(B_{r/2}(x)\) together with the harmonic property of \(h_w\), to obtain, for \(y \in B_{r/2}(x)\),

$$\displaystyle \begin{aligned} h^\gamma_w (x +y ) & = h^\gamma_w (x ) + \gamma \langle D h_w(x), y \rangle h_w ^{\gamma -1} (x) + \frac{\gamma (\gamma -1)}{2} \langle D h_w(x), y \rangle^2 h^{\gamma-2}_w (x ) \\ & {} \qquad {} + \gamma \left( \frac{(y_1^2 - y_2^2) D_1^2 h_w(x)}{2} + y_1 y_2 D_{1} D_{2} h_w(x) \right) h^{\gamma-1}_w (x ) \\ & {} \qquad {} + R(x,y) , \end{aligned} $$
(34)

where \(| R(x, y) | \leq C \| y \|{}^3 \| x \|{}^{\gamma w - 3}\), using the fact that \(h_w(x)\) is bounded above and below by a constant times \(\| x \|{}^w\).

Let \(E_x := \{ \| \varDelta \| < \| x \|{}^\delta \}\), where we fix a constant δ satisfying

$$\displaystyle \begin{aligned} \frac{\max \{ 2 , \gamma w , 2 -\gamma w \}}{p} < \delta < 1 ; \end{aligned} $$
(35)

such a choice of δ is possible since p > 2 and 2 − p < γw < p. If \(\xi_0 = x\) and \(E_x\) occurs, then \(Tx + \tilde \varDelta \in B_{r/2} (Tx)\) for all ∥x∥ sufficiently large. Thus, conditioning on \(\xi_0 = x\), on the event \(E_x\) we may use the expansion in (34) for \(h^\gamma _w (Tx + \tilde \varDelta )\), which, after taking expectations, yields

$$\displaystyle \begin{aligned} & {} \mathbb{E}_x \big[ ( f_w^\gamma (\xi_1) -f_w^\gamma (\xi_0) ){\mathbf 1}_{E_x} \big] = \gamma \left( h_w (Tx ) \right)^{\gamma-1} \mathbb{E}_x \big[ \langle D h_w(Tx), \tilde \varDelta \rangle {\mathbf 1}_ {E_x} \big] \\ & {} \qquad {} + \gamma \left( h_w (Tx ) \right)^{\gamma-1} \left[ \frac{ D_1^2 h_w(Tx) \mathbb{E}_x \big[ ( \tilde \varDelta_1^2 - \tilde \varDelta_2^2) {\mathbf 1}_ {E_x} \big]}{2} + D_{1} D_{2} h_w(Tx) \mathbb{E}_x \big[ \tilde \varDelta_1 \tilde \varDelta_2 {\mathbf 1}_ {E_x} \big] \right] \\ & {} \qquad {} + \frac{\gamma (\gamma -1)}{2} \left( h_w (Tx ) \right)^{\gamma-2} \mathbb{E}_x \big[ \langle D h_w(Tx), \tilde \varDelta \rangle^2 {\mathbf 1}_ {E_x} \big] + \mathbb{E}_x \big[ R(Tx,\tilde \varDelta) {\mathbf 1}_ {E_x} \big] .\end{aligned} $$
(36)

Let p′ = p ∧ 3, so that (1) also holds for p′∈ (2, 3]. Then, writing \(\| \tilde \varDelta \|{ }^3 = \| \tilde \varDelta \|{ }^{p'} \| \tilde \varDelta \|{ }^{3-p'}\),

$$\displaystyle \begin{aligned} \mathbb{E}_x \big[ | R(Tx,\tilde \varDelta) | {\mathbf 1}_ {E_x} \big] \leq C \| x \|{}^{\gamma w -3 + (3-p')\delta} \mathbb{E}_x \big[ \| \tilde \varDelta \|{}^{p'} \big] = o ( \| x \|{}^{\gamma w -2} ) ,\end{aligned}$$

since (3 − p′)δ < 1. If \(x \in S_I\), then (14) shows \(| \mathbb {E}_x \langle D h_w(Tx), \tilde \varDelta \rangle | = o ( \| x \|{ }^{w-2} )\), so

$$\displaystyle \begin{aligned} \mathbb{E}_x \big| \langle D h_w(Tx), \tilde \varDelta \rangle {\mathbf 1}_ {E_x} \big| \leq C \| x \|{}^{w-1} \mathbb{E}_x ( \| \varDelta \|{\mathbf 1}_ {E^{\mathrm{c}}_x} ) + o ( \| x \|{}^{w-2} ) .\end{aligned}$$

Note that, by (35), \(\delta > \frac {2}{p} > \frac {1}{p-1}\). Then, using the q = 1 case of (29), we get

$$\displaystyle \begin{aligned} \mathbb{E}_x \big| \langle D h_w(Tx), \tilde \varDelta \rangle {\mathbf 1}_ {E_x} \big| = o ( \| x \|{}^{w-2} ) .\end{aligned} $$
(37)

A similar argument using the q = 2 case of (29) gives

$$\displaystyle \begin{aligned} \mathbb{E}_x \big[ \langle D h_w(Tx), \tilde \varDelta \rangle^2 {\mathbf 1}_ {E^{\mathrm{c}}_x} \big] \leq C \| x \|{}^{2w-2-\delta(p-2)} = o ( \| x \|{}^{2w-2} ).\end{aligned}$$

If \(x \in S_I\), then (14) shows that \(\mathbb {E}_x (\tilde \varDelta _1^2 - \tilde \varDelta _2^2)\) and \(\mathbb {E}_x ( \tilde \varDelta _1 \tilde \varDelta _2 )\) are both o(1), and hence, by the q = 2 case of (29) once more, we see that \(\mathbb {E}_x [ | \tilde \varDelta _1^2 - \tilde \varDelta _2^2| {\mathbf 1}_ {E_x} ]\) and \(\mathbb {E}_x [| \tilde \varDelta _1 \tilde \varDelta _2 | {\mathbf 1}_ {E_x} ]\) are both o(1). Moreover, (14) also shows that

$$\displaystyle \begin{aligned} \mathbb{E}_x \big[ \langle D h_w(Tx), \tilde \varDelta \rangle^2 \big] = \| D h_w(Tx) \|{}^2 + o ( \| x \|{}^{2w-2} ) . \end{aligned}$$

Putting all these estimates into (36) we get, for \(x \in S_I\),

$$\displaystyle \begin{aligned} \mathbb{E}_x \big[ ( f_w^\gamma (\xi_1) -f_w^\gamma (\xi_0) ){\mathbf 1}_ {E_x} \big] & = \frac{\gamma (\gamma -1)}{2} \left( h_w (Tx ) \right)^{\gamma-2} \left( ( D_1 h_w(Tx) )^2 + ( D_2 h_w(Tx) )^2 \right) \\ & {} \qquad {} + o ( \| x \|{}^{\gamma w - 2} ) .\end{aligned} $$
(38)

On the other hand, given \(\xi_0 = x\), if γw ≥ 0, by the triangle inequality,

$$\displaystyle \begin{aligned} \big| f_w^\gamma (\xi_1) -f_w^\gamma (x) \big| & \leq \| T\xi_1 \|{}^{\gamma w} + \| Tx \|{}^{\gamma w} \leq 2 \big( \| T\xi_1 \| + \| Tx \| \big)^{\gamma w} \\ & \leq 2 \big( 2 \| Tx \| + \| \tilde \varDelta \| \big)^{\gamma w} .\end{aligned} $$
(39)

It follows from (39) that \(| f_w^\gamma (\xi _1) -f_w^\gamma (x) | {\mathbf 1}_ {E_x^{\mathrm {c}} } \leq C \| \varDelta \|{ }^{\gamma w /\delta }\), for some constant C < ∞ and all ∥x∥ sufficiently large. Hence

$$\displaystyle \begin{aligned} \mathbb{E}_x \big| ( f_w^\gamma (\xi_1) -f_w^\gamma (\xi_0) ){\mathbf 1}_ {E^{\mathrm{c}}_x} \big| \leq C \mathbb{E}_x \big[ \| \varDelta \|{}^{\gamma w /\delta} {\mathbf 1}_ {E^{\mathrm{c}}_x} \big] .\end{aligned}$$

Since \(\delta > \frac {\gamma w}{p}\), by (35), we may apply (29) with \(q = \frac {\gamma w}{\delta }\) to get

$$\displaystyle \begin{aligned} \mathbb{E}_x \big| ( f_w^\gamma (\xi_1) -f_w^\gamma (\xi_0) ){\mathbf 1}_ {E^{\mathrm{c}}_x} \big| = O ( \| x \|{}^{\gamma w - \delta p } ) = o ( \| x \|{}^{\gamma w - 2} ),\end{aligned} $$
(40)

since \(\delta > \frac {2}{p}\). If  < 0, then we use the fact that \(f_w^\gamma \) is uniformly bounded to get

$$\displaystyle \begin{aligned} \mathbb{E}_x \big| ( f_w^\gamma (\xi_1) -f_w^\gamma (\xi_0) ){\mathbf 1}_ {E^{\mathrm{c}}_x} \big| \leq C {\mathbb P}_x ( E^{\mathrm{c}}_x ) = O ( \| x \|{}^{-\delta p} ) ,\end{aligned}$$

by the q = 0 case of (29). Thus (40) holds in this case too, since γw > 2 − δp by choice of δ at (35). Then (30) follows from combining (38) and (40) with (33).

Next suppose that \(x \in S_B\). Truncating (34), we see that for all \(y \in B_{r/2}(x)\),

$$\displaystyle \begin{aligned} h^\gamma_w (x +y ) = h^\gamma_w (x ) + \gamma \langle D h_w(x), y \rangle h_w ^{\gamma -1} (x) + R(x,y) ,\end{aligned} $$
(41)

where now \(| R(x, y) | \leq C \| y \|{}^2 \| x \|{}^{\gamma w - 2}\). It follows from (41) and (Mp) that

$$\displaystyle \begin{aligned} \mathbb{E}_x \big[ ( f_w^\gamma (\xi_1) -f_w^\gamma (\xi_0) ) {\mathbf 1}_ {E_x} \big] & = \gamma h^{\gamma -1}_w (Tx ) \mathbb{E}_x \big[ \langle D h_w(Tx), \tilde \varDelta \rangle {\mathbf 1}_ {E_x} \big] + O ( \| x \|{}^{\gamma w -2} ). \end{aligned} $$

By the q = 1 case of (29), since \(\delta > \frac {1}{p-1}\), we see that \(\mathbb {E}_x [ \langle D h_w(Tx), \tilde \varDelta \rangle {\mathbf 1}_ {E^{\mathrm {c}}_x} ] = o ( \| x \|{ }^{w-2} )\), while the estimate (40) still applies, so that

$$\displaystyle \begin{aligned} \mathbb{E}_x \big[ f_w^\gamma (\xi_1) -f_w^\gamma (\xi_0) \big] & = \gamma h^{\gamma -1}_w (Tx ) \mathbb{E}_x \langle D h_w(Tx), \tilde \varDelta \rangle + O ( \| x \|{}^{\gamma w -2} ). \end{aligned} $$
(42)

From (33) we have

$$\displaystyle \begin{aligned} D h_w (Tx) = w \| Tx\|{}^{w-1} \begin{pmatrix} \cos ( (1-w)\theta(Tx) + \theta_0 ) \\ \sin ( (1-w)\theta(Tx) + \theta_0 ) \end{pmatrix} .\end{aligned} $$
(43)

First suppose that \(\beta^\pm < 1\). Then, by (13), for \(x \in S_B^\pm \), \(x_2 = \pm a^\pm x_1^{\beta ^\pm } + O(1)\) and

$$\displaystyle \begin{aligned} \sin \theta (Tx) = \pm \frac{s a^\pm}{\sigma_2^2} x_1^{\beta^\pm -1} + O (x_1^{2\beta^\pm -2} ) + O( x_1^{-1} ) .\end{aligned} $$

Since \(\arcsin z = z + O (z^3)\) as z → 0, it follows that

$$\displaystyle \begin{aligned} \theta (Tx) = \pm \frac{s a^\pm}{\sigma_2^2} x_1^{\beta^\pm -1} + O (x_1^{2\beta^\pm -2} ) + O( x_1^{-1} ) .\end{aligned} $$

Hence

$$\displaystyle \begin{aligned} \cos \left( (1-w) \theta (Tx) + \theta_0 \right) & = \cos \theta_0 \mp (1-w) \frac{s a^\pm}{\sigma_2^2} x_1^{\beta^\pm-1} \sin \theta_0 + O (x_1^{2\beta^\pm-2} ) + O( x_1^{-1} ) ; \\ \sin \left( (1-w) \theta (Tx) + \theta_0 \right) & = \sin \theta_0 \pm (1-w) \frac{s a^\pm}{\sigma_2^2} x_1^{\beta^\pm-1} \cos \theta_0 + O (x_1^{2\beta^\pm-2} ) + O( x_1^{-1} ) .\end{aligned} $$

Then (43) with (16) and (17) shows that

$$\displaystyle \begin{aligned} & \mathbb{E}_x \langle D h_w(Tx), \tilde \varDelta \rangle \\ & {} \qquad {} = w \| Tx \|{}^{w-1} \frac{\mu^\pm (x) \cos \theta_0 \cos \alpha}{s \sigma_2} \left( \pm A_1 + ( a^\pm A_2 + o(1)) x_1^{\beta^\pm -1} \right) , \end{aligned} $$
(44)

where, for \(|\theta_0| < \pi/2\), \(A_1 = \sigma ^2_2 \tan \alpha + \rho - s \tan \theta _0\), and

$$\displaystyle \begin{aligned} A_2 & = \sigma_2^2 \beta^\pm - \rho \beta^\pm \tan \alpha - (1-w) s \tan \theta_0 \tan \alpha - (1-w) \frac{s \rho}{\sigma_2^2} \tan \theta_0 \\ & {} \qquad {} + s \beta^\pm \tan \theta_0 \tan \alpha - (1-w) \frac{s^2}{\sigma_2^2} . \end{aligned} $$

Now take \(\theta_0 = \theta_1\) as given by (23), so that \(s \tan \theta _0 = \sigma _2^2 \tan \alpha + \rho \). Then \(A_1 = 0\), eliminating the leading order term in (44). Moreover, with this choice of \(\theta_0\) we get, after some further cancellation and simplification, that

$$\displaystyle \begin{aligned} A_2 = \frac{\sigma_2^2 \left( \beta^\pm - (1-w) \beta_{\mathrm{c}} \right)}{\cos^2 \alpha} ,\end{aligned}$$

with \(\beta_{\mathrm{c}}\) as given by (5). Thus with (44) and (42) we verify (31).

Finally suppose that \(\beta^\pm > 1\), and restrict to the case w ∈ (0, 1∕2). Let \(\theta_2 \in (-\pi/2, \pi/2)\) be as given by (24). Then if \(x = (0, x_2)\), we have \(\theta (Tx) = \theta _2 - \frac {\pi }{2}\) if \(x_2 < 0\) and \(\theta (Tx) = \theta _2 + \frac {\pi }{2}\) if \(x_2 > 0\) (see Fig. 3). It follows from (13) that

$$\displaystyle \begin{aligned} \theta (Tx ) = \theta_2 \pm \frac{\pi}{2} + O ( x_1^{1-\beta^\pm} ), \text{ for } x \in S_B^\pm, \end{aligned}$$

as ∥x∥→∞ (and \(x_1 \to \infty\)). Now (43) with (18) and (19) shows that

$$\displaystyle \begin{aligned} \mathbb{E}_x \langle D h_w (Tx), \tilde \varDelta \rangle & = w \| Tx \|{}^{w-1} \frac{\mu^\pm (x)}{s\sigma_2} \Big( \sigma_2^2 \cos \alpha \cos \left( (1-w) \theta (Tx) + \theta_0 \right) \\ & {} \qquad {} - \rho \sin \alpha \cos \left( (1-w) \theta (Tx) + \theta_0 \right) \\ & {} \qquad {} + s \sin \alpha \sin \left( (1-w) \theta (Tx) + \theta_0 \right) + O ( x_1^{1-\beta^\pm} ) \Big) . \end{aligned} $$
(45)

Set \(\phi := (1-w) \frac {\pi }{2}\). Choose \(\theta_0 = \theta_3 - (1-w)\theta_2\), where \(\theta_3 \in (-\pi, \pi)\) satisfies (25). Then we have that, for \(x \in S_B^\pm \),

$$\displaystyle \begin{aligned} \cos \left( (1-w) \theta (Tx) + \theta_0 \right) & = \cos \left( \theta_3 \pm \phi \right) + O ( x_1^{1-\beta^\pm} ) \\ & = \cos \phi \cos \theta_3 \mp \sin \phi \sin \theta_3 + O ( x_1^{1-\beta^\pm} ).\end{aligned} $$
(46)

Similarly, for \(x \in S_B^\pm \),

$$\displaystyle \begin{aligned} \sin \left( (1-w) \theta (Tx) + \theta_0 \right) =\cos \phi \sin \theta_3 \pm \sin \phi \cos \theta_3 + O ( x_1^{1-\beta^\pm} ). \end{aligned} $$
(47)

Using (46) and (47) in (45), we obtain

$$\displaystyle \begin{aligned} \mathbb{E}_x \langle D h_w (Tx), \tilde \varDelta \rangle = w \| Tx \|{}^{w-1} \frac{\mu^\pm (x)}{s\sigma_2} \left( A_3 \cos \phi \mp A_4 \sin \phi + o(1) \right) ,\end{aligned} $$

where

$$\displaystyle \begin{aligned} A_3 & = \left( \sigma_2^2 \cos \alpha - \rho \sin \alpha \right) \cos \theta_3 + s \sin \alpha \sin \theta_3 \\ & = \sigma_2 d \cos^2 \theta_3 + \sigma_2 d \sin^2 \theta_3 = \sigma_2 d ,\end{aligned} $$

by (25), and, similarly,

$$\displaystyle \begin{aligned} A_4 = \left( \sigma_2^2 \cos \alpha - \rho \sin \alpha \right) \sin \theta_3 - s \sin \alpha \cos \theta_3 = 0.\end{aligned}$$

Then with (42) we obtain (32). □

In the case where \(\beta^+, \beta^- < 1\) with \(\beta^+ \neq \beta^-\), we will in some circumstances need to modify the function \(f_w^\gamma \) so that it can be made insensitive to the behaviour near the boundary with the smaller of \(\beta^+, \beta^-\). To this end, define for \(w, \gamma , \nu , \lambda \in {\mathbb R}\),

$$\displaystyle \begin{aligned} F_w^{\gamma,\nu} (x) := f_w^\gamma (x) + \lambda x_2 \| T x\|{}^{2\nu} .\end{aligned} $$
(48)

We state a result for the case \(\beta^- < \beta^+\); an analogous result holds if \(\beta^+ < \beta^-\).

Lemma 5

Suppose that (Mp), (D), (R), and (C) hold, with p > 2, \(\alpha^+ = -\alpha^- = \alpha\) for |α| < π∕2, and \(0 \leq \beta^- < \beta^+ < 1\). Let \(w, \gamma \in {\mathbb R}\) be such that 2 − p < γw < p. Take \(\theta_0 = \theta_1 \in (-\pi/2, \pi/2)\) given by (23). Suppose that

$$\displaystyle \begin{aligned} \gamma w + \beta^- -2 < 2 \nu < \gamma w + \beta^+ -2 . \end{aligned}$$

Then, as ∥x∥→∞ with \(x \in S_I\),

$$\displaystyle \begin{aligned} & {} \mathbb{E} [ F^{\gamma,\nu}_w(\xi_{n+1}) - F_w^{\gamma,\nu} ( \xi_n) \mid \xi_n = x ] \\ & {} \qquad {} = \frac{1}{2}\gamma(\gamma-1) (w^2 + o(1) ) ( h_w(Tx) )^{\gamma-2} \| Tx \|{}^{2w-2}. \end{aligned} $$
(49)

As ∥x∥→∞ with \(x \in S^+_B\),

$$\displaystyle \begin{aligned} & {} \mathbb{E} [ F^{\gamma,\nu}_w(\xi_{n+1}) - F^{\gamma,\nu}_w( \xi_n) \mid \xi_n = x ] \\ & {} \qquad {} = \gamma w \| Tx \|{}^{w-1} \left( h_w (Tx ) \right)^{\gamma-1} \frac{a^+ \mu^+ (x) \sigma_2 \cos \theta_1 }{s \cos \alpha} \left(\beta^+ - (1-w) \beta_{\mathrm{c}} \right) x_1^{\beta^+-1} \\ & {} \qquad \qquad {} + o( \| x\|{}^{w \gamma + \beta^+-2} ) .\end{aligned} $$
(50)

As ∥x∥→∞ with \(x \in S^-_B\),

$$\displaystyle \begin{aligned} \mathbb{E} [ F^{\gamma,\nu}_w(\xi_{n+1}) - F^{\gamma,\nu}_w( \xi_n) \mid \xi_n = x ] = \lambda \| T x \|{}^{2\nu} \left( \mu^- (x) \cos \alpha + o(1) \right) .\end{aligned} $$
(51)

Proof

Suppose that \(0 \leq \beta^- < \beta^+ < 1\). As in the proof of Lemma 4, let \(E_x = \{ \| \varDelta \| < \| x \|{}^\delta \}\), where δ ∈ (0, 1) satisfies (35). Set \(v_\nu(x) := x_2 \| Tx \|{}^{2\nu}\). Then, using Taylor’s formula in one variable, for \(x, y \in {\mathbb R}^2\) with \(y \in B_{r/2}(x)\),

$$\displaystyle \begin{aligned} \| x + y \|{}^{2\nu} & = \| x \|{}^{2\nu} \left( 1 + \frac{2 \langle x, y \rangle + \| y\|{}^2}{\| x \|{}^2} \right)^\nu = \| x \|{}^{2\nu} + 2 \nu \langle x,y\rangle \| x\|{}^{2\nu -2} + R (x,y) ,\end{aligned} $$

where \(| R(x, y) | \leq C \| y \|{}^2 \| x \|{}^{2\nu - 2}\). Thus, for x ∈ S with \(y \in B_{r/2}(x)\) and x + y ∈ S,

$$\displaystyle \begin{aligned} v_\nu (x+y) - v_\nu (x) & = (x_2 + y_2) \| T x+ T y \|{}^{2\nu} - x_2 \| T x \|{}^{2\nu} \\ & = y_2 \| T x \|{}^{2\nu} + 2 \nu x_2 \langle T x, T y\rangle \| T x\|{}^{2\nu -2} + 2 \nu y_2 \langle T x, T y\rangle \| T x\|{}^{2\nu -2} \\ & {} \qquad {} + R (x,y ),\end{aligned} $$
(52)

where now \(| R(x,y) | \leq C \| y \|{ }^2 \| x \|{ }^{2 \nu + \beta ^+ -2}\), using the fact that both \(|x_2|\) and \(|y_2|\) are \(O ( \| x \|{ }^{\beta ^+} )\). Taking \(x = \xi_0\) and y = Δ so \(Ty = \tilde \varDelta \), we obtain

$$\displaystyle \begin{aligned} \mathbb{E}_x \big[ ( v_\nu ( \xi_1 ) {-} v_\nu (\xi_0 ) ) {\mathbf 1}_ { E_x } \big] & = \| T x \|{}^{2\nu} \mathbb{E}_x \big[ \varDelta_2 {\mathbf 1}_ { E_x } \big] {+} 2 \nu x_2 \| T x\|{}^{2\nu -2} \mathbb{E}_x \big[ \langle T x, \tilde \varDelta\rangle {\mathbf 1}_ { E_x } \big] \\ & {} \qquad {} + 2 \nu \| T x\|{}^{2\nu -2} \mathbb{E} \big[ \varDelta_2 \langle T x, \tilde \varDelta \rangle {\mathbf 1}_ { E_x } \big] \\ & {} \qquad {} + \mathbb{E} \big[ R(x, \varDelta ) {\mathbf 1}_ { E_x } \big].\qquad {} \end{aligned} $$
(53)

Suppose that \(x \in S_I\). Similarly to (37), we have \(\mathbb {E}_x [ \langle T x, \tilde \varDelta \rangle {\mathbf 1}_ { E_x } ] = o(1)\), and, by similar arguments using (29), \(\mathbb {E} [ \varDelta _2 {\mathbf 1}_ { E_x } ] = o ( \| x \|{ }^{-1})\), \(\mathbb {E}_x | \varDelta _2 \langle T x, \tilde \varDelta \rangle {\mathbf 1}_ { E^{\mathrm {c}}_x } | = o ( \| x \| )\), and \(\mathbb {E}_x | R(x, \varDelta ) {\mathbf 1}_ { E_x } | = o ( \|x \|{ }^{2\nu -1} )\), since \(\beta^+ < 1\). Also, by (13),

$$\displaystyle \begin{aligned} \mathbb{E}_x ( \varDelta_2 \langle T x, \tilde \varDelta \rangle ) & = \sigma_2 \mathbb{E}_x ( \tilde \varDelta_2 \langle T x, \tilde \varDelta \rangle ) \\ & = \sigma_2 (T x)_1 \mathbb{E}_x ( \tilde \varDelta_1 \tilde \varDelta_2 ) + \sigma_2 (Tx)_2 \mathbb{E}_x ( \tilde \varDelta_2^2 ) . \end{aligned} $$

Here, by (14), \(\mathbb {E}_x ( \tilde \varDelta _1 \tilde \varDelta _2 ) = o (1)\) and \(\mathbb {E}_x ( \tilde \varDelta _2^2 ) = O(1)\), while \( \sigma _2 (Tx)_2 = x_2 = O ( \| x \|{ }^{\beta ^+})\). Thus \(\mathbb {E}_x ( \varDelta _2 \langle T x, \tilde \varDelta \rangle ) = o ( \| x \| )\). Hence also

$$\displaystyle \begin{aligned} \mathbb{E}_x \big[ \varDelta_2 \langle T x, \tilde \varDelta \rangle {\mathbf 1}_ { E_x } \big] = o ( \| x\|) .\end{aligned}$$

Thus from (53) we get that, for x ∈ S I,

$$\displaystyle \begin{aligned} \mathbb{E}_x \big[ ( v_\nu ( \xi_1 ) - v_\nu (\xi_0 ) ) {\mathbf 1}_ { E_x } \big] = o ( \| x \|{}^{2\nu -1} ) .\end{aligned} $$
(54)

On the other hand, since \(| v_\nu (x+y ) - v_\nu (x) | \leq C ( \| x \| + \| y \| )^{2\nu + \beta ^+}\) we get

$$\displaystyle \begin{aligned} \mathbb{E}_x \big[ | v_\nu ( \xi_1 ) - v_\nu (\xi_0 ) | {\mathbf 1}_ { E^{\mathrm{c}}_x } \big] \leq C \mathbb{E}_x \big[ \| \varDelta \|{}^{(2 \nu+\beta^+) /\delta} {\mathbf 1}_ { E^{\mathrm{c}}_x } \big] .\end{aligned}$$

Here 2ν + β + < 2ν + 1 < γw < δp, by choice of ν and (35), so we may apply (29) with q = (2ν + β +)∕δ to get

$$\displaystyle \begin{aligned} \mathbb{E}_x \big[ | v_\nu ( \xi_1 ) - v_\nu (\xi_0 ) | {\mathbf 1}_ { E^{\mathrm{c}}_x } \big] = O ( \| x \|{}^{2\nu+\beta^+-\delta p} ) = o ( \| x \|{}^{2\nu-1} ) ,\end{aligned} $$
(55)

since δp > 2, by (35). Combining (54), (55) and (30), we obtain (49), provided that 2ν − 1 < γw − 2, which is the case since 2ν < γw + β + − 2 and β + < 1.

Now suppose that \(x \in S_B^\pm \). We truncate (52) to see that, for x ∈ S with y ∈ B r∕2(x) and x + y ∈ S,

$$\displaystyle \begin{aligned} v_\nu (x+y) - v_\nu (x) = y_2 \| T x \|{}^{2\nu} + R (x,y ), \end{aligned}$$

where now \(| R(x,y) | \leq C \| y \| \| x \|{ }^{2\nu + \beta ^\pm -1}\), using the fact that for \(x \in S_B^\pm \), \(|x_2| = O ( \| x \|{ }^{\beta ^\pm } )\). It follows that, for \(x \in S_B^\pm \),

$$\displaystyle \begin{aligned} \mathbb{E}_x \big[ ( v_\nu ( \xi_1 ) - v_\nu (\xi_0 ) ) {\mathbf 1}_ {E_x } \big] = \| T x \|{}^{2\nu} \mathbb{E}_x \big[ \varDelta_2 {\mathbf 1}_ {E_x } \big] + O ( \| x \|{}^{2\nu + \beta^\pm -1} ). \end{aligned}$$

By (29) and (35) we have that \(\mathbb {E} [ |\varDelta _2| {\mathbf 1}_ {E^{\mathrm {c}}_x } ] = O( \| x \|{ }^{-\delta (p-1)} ) = o( \| x\|{ }^{-1} )\), while if \(x \in S_B^\pm \), then, by (10), \(\mathbb {E}_x \varDelta _2 = \mp \mu ^\pm (x) \cos \alpha + O( \|x\|{ }^{\beta ^\pm -1})\). On the other hand, the estimate (55) still applies, so we get, for \(x \in S_B^\pm \),

$$\displaystyle \begin{aligned} \mathbb{E}_x [ v_\nu ( \xi_1 ) - v_\nu (\xi_0 ) ] = \mp \| T x \|{}^{2\nu} \mu^\pm (x) \cos \alpha + O ( \| x \|{}^{2\nu +\beta^\pm-1 } ) .\end{aligned} $$
(56)

If we choose ν such that 2ν < γw + β + − 2, then we combine (56) and (31) to get (50), since the term from (31) dominates. If we choose ν such that 2ν > γw + β − − 2, then the term from (56) dominates that from (31), and we get (51). □
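
For orientation, note that since β − < β + < 1, the window for ν in Lemma 5 is nonempty and is automatically compatible with the truncation used in the proof: a one-line check gives

$$\displaystyle \begin{aligned} \gamma w + \beta^- - 2 < \gamma w + \beta^+ - 2 < \gamma w - 1 ,\end{aligned}$$

so any admissible ν satisfies 2ν < γw − 1, the condition used to obtain (49) and, in the proof of Theorem 3 below, the comparison \(F_w^{\gamma ,\nu } (x) = f_w^\gamma (x) ( 1 + o(1))\).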

In the critically recurrent cases, where \(\max ( \beta ^+,\beta ^- ) = \beta _{\mathrm {c}} \in (0,1)\) or β +, β − > 1, in which no passage-time moments exist, the functions of polynomial growth based on h w as defined at (21) are not sufficient to prove recurrence. Instead we need functions which grow more slowly. For \(\eta \in {\mathbb R}\) let

$$\displaystyle \begin{aligned} h (x) := h(r,\theta ) := \log r + \eta \theta , \text{ and } \ell (x) := \log h( Tx) ,\end{aligned} $$
(57)

where we understand \(\log y\) to mean \(\max (1,\log y)\). The function h is again harmonic (see below) and was used in the context of reflecting Brownian motion in a wedge in [31]. Set

$$\displaystyle \begin{aligned} \eta_0 := \eta_0 ( \varSigma,\alpha ) := \frac{\sigma_2^2 \tan \alpha + \rho}{s}, \text{ and } \eta_1 := \eta_1 (\varSigma, \alpha):= \frac{\sigma_1^2 \tan \alpha - \rho}{s} .\end{aligned} $$
(58)
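
To make (58) concrete, the following numerical sketch may be helpful (an illustration added here, not part of the original analysis; it assumes that Σ denotes the increment covariance matrix with diagonal entries σ 1 2, σ 2 2 and off-diagonal entry ρ, that s 2 = detΣ, and the explicit form of T that can be read off from the display following (66) below):

```python
# Illustrative check (assumed notation: Sigma = [[s1^2, rho], [rho, s2^2]],
# s^2 = det Sigma, and T as read off from the display following (66)).
import numpy as np

s1, s2, rho, alpha = 1.0, 1.5, 0.3, 0.2       # illustrative parameter values
Sigma = np.array([[s1**2, rho], [rho, s2**2]])
s = np.sqrt(np.linalg.det(Sigma))             # s^2 = s1^2 s2^2 - rho^2

T = np.array([[s2 / s, -rho / (s * s2)],
              [0.0,     1.0 / s2      ]])
assert np.allclose(T @ Sigma @ T.T, np.eye(2))  # T makes the increments isotropic

eta0 = (s2**2 * np.tan(alpha) + rho) / s      # tilt used in Lemma 6 when beta^pm < 1
eta1 = (s1**2 * np.tan(alpha) - rho) / s      # tilt used in Lemma 6 when beta^pm > 1
print(eta0, eta1)
```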

Lemma 6

Suppose that (Mp) , (D+) , (R) , and (C+) hold, with p > 2, ε > 0, α + = −α − = α for |α| < π∕2, and β +, β − ≥ 0. For any \(\eta \in {\mathbb R}\) , as ∥x∥→∞ with x ∈ S I,

$$\displaystyle \begin{aligned} \mathbb{E} [ \ell ( \xi_{n+1}) - \ell ( \xi_n) \mid \xi_n = x ] = - \frac{1+\eta^2 +o(1)}{2 \| Tx\|{}^2(\log \| Tx\|)^2} .\end{aligned} $$
(59)

If 0 ≤ β ± < 1, take η = η 0 as defined at (58). Then, as ∥x∥→∞ with \(x \in S^\pm _B\),

$$\displaystyle \begin{aligned} & {} \mathbb{E} [ \ell (\xi_{n+1}) - \ell (\xi_n) \mid \xi_n = x ] \\ & {} = \frac{\sigma_2^2 a^\pm \mu^\pm (x)}{s^2 \cos \alpha} \frac{1}{\|T x\|{}^2 \log\|T x\|} \left( ( \beta^\pm - \beta_{\mathrm{c}} ) x^{\beta^\pm}_1 + O ( \| x \|{}^{2\beta^\pm-1} ) + O(1) \right) .\end{aligned} $$
(60)

If β ± > 1, take η = η 1 as defined at (58). Then as ∥x∥→∞ with \(x \in S^\pm _B\),

$$\displaystyle \begin{aligned} & {} \mathbb{E} [ \ell (\xi_{n+1}) - \ell (\xi_n) \mid \xi_n = x ] \\ & {} \qquad {} = \frac{\mu^\pm (x)}{s^2 \beta^\pm \cos \alpha} \frac{x_1}{\|T x\|{}^2 \log\|T x\|} \left( \beta^\pm \left( \sigma_1^2 \sin^2 \alpha - \rho \sin 2 \alpha + \sigma_2^2 \cos^2 \alpha \right) - \sigma_1^2 + o(1) \right) .\end{aligned} $$
(61)

Proof

Given \(\eta \in {\mathbb R}\), for \(r_0 = r_0 (\eta ) = \exp ( {\mathrm {e}} + | \eta | \pi )\), we have from (57) that both h and \(\log h\) are infinitely differentiable in the domain \({\mathcal R}_{r_0} := \{ x \in {\mathbb R}^2 : x_1 > 0, \, r (x) > r_0 \}\). Differentiating (57) and using (22) we obtain, for \(x \in {\mathcal R}_{r_0}\),

$$\displaystyle \begin{aligned} D_1 h(x) = \frac{1}{r} \left( \cos \theta - \eta \sin \theta \right) , \text{ and } D_2 h(x) = \frac{1}{r} \left( \sin \theta + \eta \cos \theta \right) .\end{aligned} $$
(62)

We verify that h is harmonic in \({\mathcal R}_{r_0}\), since

$$\displaystyle \begin{aligned} D_1^2 h(x) = \frac{\eta \sin 2 \theta}{r^2} - \frac{\cos 2 \theta}{r^2} = - D_2^2 h(x) .\end{aligned}$$
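
The same computation can be checked symbolically; the short script below (a supplementary illustration, using the branch θ = arctan(x 2∕x 1), valid for x 1 > 0) verifies harmonicity directly in Cartesian coordinates:

```python
# Symbolic check (illustrative) that h = log r + eta*theta is harmonic for
# x1 > 0, where r = sqrt(x1^2 + x2^2) and theta = atan(x2/x1).
import sympy as sp

x1, x2 = sp.symbols('x1 x2', positive=True)
eta = sp.symbols('eta', real=True)
h = sp.log(sp.sqrt(x1**2 + x2**2)) + eta * sp.atan(x2 / x1)

laplacian = sp.diff(h, x1, 2) + sp.diff(h, x2, 2)
assert sp.simplify(laplacian) == 0   # log r and theta are each harmonic
```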

Also, for any i, j, k, |D i D j D k h(x)| = O(r −3). Moreover, \(D_i \log h(x) = (h(x))^{-1} D_i h(x)\),

$$\displaystyle \begin{aligned} D_i D_j \log h(x) = \frac{D_i D_j h(x)}{h(x)} - \frac{ (D_i h(x)) (D_j h(x))}{(h(x))^2} ,\end{aligned}$$

and \(| D_i D_j D_k \log h(x) | = O (r^{-3} (\log r )^{-1})\). Recall that Dh(x) is the vector function whose components are D i h(x). Then Taylor’s formula (28) together with the harmonic property of h shows that for \(x \in {\mathcal R}_{2r_0}\) and y ∈ B r∕2(x),

$$\displaystyle \begin{aligned} \log h (x +y ) & = \log h(x) + \frac{\langle D h(x), y \rangle}{h(x)} + \frac{(y_1^2 - y_2^2) D_1^2 h(x)}{2h(x)} + \frac{ y_1 y_2 D_{1} D_{2} h(x)}{h(x)} \\ & {} \qquad {} - \frac{\langle D h(x), y \rangle^2}{2(h(x))^2} + R (x,y) ,\end{aligned} $$
(63)

where \(| R(x,y) | \leq C \| y \|{ }^3 \| x \|{ }^{-3} (\log \| x \|)^{-1}\) for some constant C < ∞, all y ∈ B r∕2(x), and all ∥x∥ sufficiently large. As in the proof of Lemma 4, let \(E_x = \{ \| \varDelta \| < \| x \|{ }^{\delta } \}\) for \(\delta \in (\frac {2}{p},1)\). Then applying the expansion in (63) to \(\log h(Tx + \tilde \varDelta )\), conditioning on ξ 0 = x, and taking expectations, we obtain, for ∥x∥ sufficiently large,

$$\displaystyle \begin{aligned} & {} \mathbb{E}_x \big[ ( \ell (\xi_1) - \ell (\xi_0) ) {\mathbf 1}_ { E_x} \big] = \frac{\mathbb{E}_x \big[ \langle D h(Tx), \tilde \varDelta \rangle {\mathbf 1}_ { E_x} \big] }{h(Tx)} + \frac{D_1^2 h(Tx) \mathbb{E}_x \big[ (\tilde \varDelta_1^2 - \tilde \varDelta_2^2){\mathbf 1}_ { E_x} \big] }{2h(Tx)} \\ & {} \quad {} + \frac{ D_{1} D_{2} h(Tx) \mathbb{E}_x \big[ \tilde \varDelta_1 \tilde \varDelta_2 {\mathbf 1}_ { E_x} \big]}{h(Tx)} - \frac{\mathbb{E}_x \big[ \langle D h(Tx), \tilde \varDelta \rangle^2 {\mathbf 1}_ { E_x} \big]}{2(h(Tx))^2} + \mathbb{E}_x \big[ R (Tx,\tilde \varDelta){\mathbf 1}_ { E_x} \big] .\end{aligned} $$
(64)

Let p′∈ (2, 3] be such that (1) holds. Then

$$\displaystyle \begin{aligned} \mathbb{E}_x \big| R (Tx,\tilde \varDelta){\mathbf 1}_ { E_x} \big| \leq C \| x \|{}^{-3+(3-p')\delta} \mathbb{E}_x ( \| \varDelta \|{}^{p'} ) = O ( \| x \|{}^{-2-\varepsilon'} ) ,\end{aligned}$$

for some ε′ > 0.

Suppose that x ∈ S I. By (15), \(\mathbb {E}_x ( \tilde \varDelta _1 \tilde \varDelta _2 ) = O ( \| x \|{ }^{-\varepsilon } )\) and, by (29), \(\mathbb {E}_x | \tilde \varDelta _1 \tilde \varDelta _2{\mathbf 1}_ {E_x^{\mathrm {c}}} | \leq C \mathbb {E} [ \| \varDelta \|{ }^2 {\mathbf 1}_ {E_x^{\mathrm {c}}} ] = O ( \| x \|{ }^{-\varepsilon '} )\), for some ε′ > 0. Thus \(\mathbb {E}_x ( \tilde \varDelta _1 \tilde \varDelta _2{\mathbf 1}_ {E_x} ) = O ( \| x \|{ }^{-\varepsilon '} )\). A similar argument gives the same bound for \(\mathbb {E}_x [ ( \tilde \varDelta _1^2 - \tilde \varDelta _2^2) {\mathbf 1}_ { E_x} ]\). Also, from (15) and (62), \(\mathbb {E}_x ( \langle D h(Tx), \tilde \varDelta \rangle ) = O ( \| x \|{ }^{-2-\varepsilon } )\) and, by (29), \(\mathbb {E}_x | \langle D h(Tx), \tilde \varDelta \rangle {\mathbf 1}_ {E_x^{\mathrm {c}}} | = O ( \| x \|{ }^{-2-\varepsilon '} )\) for some ε′ > 0. Hence \(\mathbb {E}_x [ \langle D h(Tx), \tilde \varDelta \rangle {\mathbf 1}_ { E_x} ] = O ( \| x \|{ }^{-2-\varepsilon '} )\). Finally, by (15) and (62),

$$\displaystyle \begin{aligned} \mathbb{E}_x \big[ \langle D h(Tx), \tilde \varDelta \rangle^2 \big] = ( D_1 h(Tx) )^2 + ( D_2 h(Tx) )^2 + O ( \| x \|{}^{-2-\varepsilon} ) ,\end{aligned}$$

while, by (29), \( \mathbb {E}_x | \langle D h(Tx), \tilde \varDelta \rangle ^2 {\mathbf 1}_ {E_x^{\mathrm {c}} } | = O ( \| x \|{ }^{-2-\varepsilon '} )\). Putting all these estimates into (64) gives

$$\displaystyle \begin{aligned} \mathbb{E}_x \big[ ( \ell (\xi_1) - \ell (\xi_0) ) {\mathbf 1}_ { E_x} \big] = - \frac{ ( D_1 h(Tx) )^2 + ( D_2 h(Tx) )^2}{2(h(Tx))^2} + O ( \| x \|{}^{-2-\varepsilon'} ) ,\end{aligned} $$

for some ε′ > 0. On the other hand, for all ∥x∥ sufficiently large, \(| \ell (x+y) - \ell (x) | \leq C \log \log \| x \| + C \log \log \| y \|\). For any p > 2 and \(\delta \in (\frac {2}{p}, 1)\), we may (and do) choose q > 0 sufficiently small such that δ(p − q) > 2, and then, by (29),

$$\displaystyle \begin{aligned} \mathbb{E}_x \big[ ( \ell (\xi_1) - \ell (\xi_0) ) {\mathbf 1}_ { E^{\mathrm{c}}_x} \big] & \leq C \mathbb{E}_x \big[ \| \varDelta \|{}^{q} {\mathbf 1}_ { E^{\mathrm{c}}_x} \big] \\ & = O ( \| x \|{}^{-\delta(p-q)} ) = O ( \| x \|{}^{-2-\varepsilon'} ) ,\end{aligned} $$
(65)

for some ε′ > 0. Thus we conclude that

$$\displaystyle \begin{aligned} \mathbb{E}_x \big[ \ell (\xi_1) - \ell (\xi_0) \big] = - \frac{ ( D_1 h(Tx) )^2 + ( D_2 h(Tx) )^2}{2(h(Tx))^2} + O ( \| x \|{}^{-2-\varepsilon'} ), \end{aligned} $$

for some ε′ > 0. Then (59) follows from (62).
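
Indeed, the last step rests on the elementary identity, immediate from (62),

$$\displaystyle \begin{aligned} ( D_1 h(x) )^2 + ( D_2 h(x) )^2 = \frac{(\cos \theta - \eta \sin \theta)^2 + (\sin \theta + \eta \cos \theta)^2}{r^2} = \frac{1+\eta^2}{r^2} .\end{aligned}$$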

Next suppose that x ∈ S B. Truncating (63), we have for \(x \in {\mathcal R}_{2r_0}\) and y ∈ B r∕2(x),

$$\displaystyle \begin{aligned} \log h (x+y ) = \log h (x) + \frac{ \langle D h(x), y \rangle }{h(x)} + R(x,y) ,\end{aligned}$$

where now \(|R(x,y)| \leq C \| y \|{ }^2 \| x \|{ }^{-2} (\log \| x \|)^{-1}\) for ∥x∥ sufficiently large. Hence

$$\displaystyle \begin{aligned} \mathbb{E}_x \big[ ( \ell (\xi_1) - \ell (\xi_0) ) {\mathbf 1}_ { E_x } \big] & = \frac{\mathbb{E}_x \big[ \langle D h(Tx), \tilde \varDelta \rangle {\mathbf 1}_ { E_x } \big] + O ( \| x \|{}^{-2})}{h(Tx)} .\end{aligned} $$

Then by (65) and the fact that \(\mathbb {E}_x | \langle D h(Tx), \tilde \varDelta \rangle {\mathbf 1}_ {E_x^{\mathrm {c}}} | = O ( \| x \|{ }^{-2-\varepsilon '} )\) (as above),

$$\displaystyle \begin{aligned} \mathbb{E}_x \big[ \ell (\xi_1) - \ell (\xi_0) \big] & = \frac{\mathbb{E}_x \big[ \langle D h(Tx), \tilde \varDelta \rangle \big] + O ( \| x \|{}^{-2})}{h(Tx)} .\end{aligned} $$
(66)

From (62) we have

$$\displaystyle \begin{aligned} D h (x) = \frac{1}{\| x \|{}^2} \begin{pmatrix} x_1 - \eta x_2 \\ x_2 + \eta x_1 \end{pmatrix}, \text{ and hence } D h (Tx) = \frac{1}{\| Tx \|{}^2} \begin{pmatrix} \frac{\sigma_2}{s} x_1 -\frac{\rho}{s\sigma_2} x_2 - \frac{\eta}{\sigma_2} x_2 \\ \frac{1}{\sigma_2}x_2 + \frac{\eta\sigma_2}{s} x_1 - \frac{\eta \rho}{s\sigma_2} x_2 \end{pmatrix} ,\end{aligned}$$

using (13). If β ± < 1 and \(x \in S_B^\pm \), we have from (16) and (17) that

$$\displaystyle \begin{aligned} & \mathbb{E}_x \langle D h(Tx), \tilde \varDelta \rangle \\ & {} \quad {} = \frac{\mu^\pm (x)}{s^2} \frac{1}{\| Tx \|{}^2} \bigg\{ a^\pm \Big[ \left( s \eta (\beta^\pm-1) - \rho (1+\beta^\pm) \right) \sin \alpha + \left( \sigma_2^2 \beta^\pm - \sigma_1^2 \right) \cos \alpha \Big] x_1^{\beta^\pm} \\ & {} \qquad {} \pm \Big[ \sigma_2^2 \sin \alpha + (\rho - s \eta ) \cos \alpha \Big] x_1 + O ( x_1^{2\beta^\pm -1} ) + O(1) \bigg\} . \end{aligned} $$

Taking η = η 0 as given by (58), the ± x 1 term vanishes; after simplification, we get

$$\displaystyle \begin{aligned} \mathbb{E}_x \langle D h(Tx), \tilde \varDelta \rangle = \frac{\sigma_2^2 a^\pm \mu^\pm (x)}{\| Tx \|{}^2 s^2 \cos \alpha} \left( \left( \beta^\pm - \beta_{\mathrm{c}} \right) x_1^{\beta^\pm} + O ( x_1^{2\beta^\pm -1} ) + O(1) \right) . \end{aligned} $$
(67)

Using (67) in (66) gives (60).
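
For the reader’s convenience, the cancellation of the ± x 1 term claimed above is the one-line computation, from the definition of η 0 at (58),

$$\displaystyle \begin{aligned} \sigma_2^2 \sin \alpha + ( \rho - s \eta_0 ) \cos \alpha = \sigma_2^2 \sin \alpha + \rho \cos \alpha - ( \sigma_2^2 \tan \alpha + \rho ) \cos \alpha = 0 .\end{aligned}$$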

On the other hand, if β ± > 1 and \(x \in S_B^\pm \), we have from (18) and (19) that

$$\displaystyle \begin{aligned} & \mathbb{E}_x \langle D h(Tx), \tilde \varDelta \rangle \\ & {} \quad {} = \frac{\mu^\pm (x)}{s^2} \frac{1}{\| Tx \|{}^2} \bigg\{ \frac{1}{\beta^\pm} \Big[ \left( s \eta (\beta^\pm-1) - \rho (1+\beta^\pm) \right) \sin \alpha + \left( \sigma_2^2 \beta^\pm - \sigma_1^2 \right) \cos \alpha \Big] x_1 \\ & {} \qquad {} \pm a^\pm \Big[ \sigma_1^2 \sin \alpha - (\rho + s \eta ) \cos \alpha \Big] x_1^{\beta^\pm} + O ( x_1^{2-\beta^\pm} ) + O(1) \bigg\} . \end{aligned} $$

Taking η = η 1 as given by (58), the \(\pm x_1^{\beta ^\pm }\) term vanishes, and we get

$$\displaystyle \begin{aligned} \mathbb{E}_x \langle D h(Tx), \tilde \varDelta \rangle = \frac{\mu^\pm (x)}{s^2 \beta^\pm \cos \alpha} \frac{x_1}{\| Tx \|{}^2} \left( \beta^\pm \left( \sigma_1^2 \sin^2 \alpha - \rho \sin 2 \alpha + \sigma_2^2 \cos^2 \alpha \right) - \sigma_1^2 + o(1) \right) ,\end{aligned}$$

as ∥x∥→∞ (and x 1 →∞). Then using the last display in (66) gives (61). □
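
The cancellation used in the last step is analogous to the one at η 0: from the definition of η 1 at (58),

$$\displaystyle \begin{aligned} \sigma_1^2 \sin \alpha - ( \rho + s \eta_1 ) \cos \alpha = \sigma_1^2 \sin \alpha - \rho \cos \alpha - ( \sigma_1^2 \tan \alpha - \rho ) \cos \alpha = 0 .\end{aligned}$$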

The function ℓ is not by itself enough to prove recurrence in the critical cases, because the estimates in Lemma 6 do not guarantee that ℓ satisfies a supermartingale condition for all parameter values of interest. To proceed, we modify ℓ slightly to improve its properties near the boundary. In the case where \(\max (\beta ^+,\beta ^-) = \beta _{\mathrm {c}} \in (0,1)\), the following function will be used to prove recurrence,

$$\displaystyle \begin{aligned} g_\gamma (x) := g_\gamma (r, \theta) : = \ell (x) + \frac{\theta^2}{(1+r)^{\gamma}} , \end{aligned}$$

where the parameter η in ℓ is chosen as η = η 0 as given by (58).

Lemma 7

Suppose that (Mp) , (D+) , (R) , and (C+) hold, with p > 2, ε > 0, α + = −α − = α for |α| < π∕2, and β +, β − ∈ (0, 1) with β +, β − ≤ β c . Let η = η 0 , and suppose

$$\displaystyle \begin{aligned} 0 < \gamma < \min ( \beta^+, \beta^-, 1 - \beta^+, 1-\beta^- , p-2 ). \end{aligned}$$

Then as ∥x∥→∞ with x ∈ S I,

$$\displaystyle \begin{aligned} \mathbb{E} [ g_\gamma (\xi_{n+1}) - g_\gamma (\xi_n) \mid \xi_n = x] = - \frac{1+\eta^2 + o(1)}{2 \| Tx\|{}^2(\log \| Tx\|)^2} .\end{aligned} $$
(68)

Moreover, as ∥x∥→∞ with \(x \in S_B^\pm \),

$$\displaystyle \begin{aligned} \mathbb{E} [ g_\gamma (\xi_{n+1}) - g_\gamma (\xi_n) \mid \xi_n = x] \leq - 2 a^\pm \mu^\pm (x) (\cos \alpha +o(1)) \| x \|{}^{\beta^\pm -2 - \gamma} .\end{aligned} $$
(69)

Proof

Set \(u_\gamma (x) := u_\gamma (r, \theta ) := \theta ^2 (1 + r)^{-\gamma }\), and note that, by (22), for x 1 > 0,

$$\displaystyle \begin{aligned} D_1 u_\gamma (x) = - \frac{2\theta \sin \theta}{r (1+r)^{\gamma}} - \frac{\gamma \theta^2 \cos \theta}{(1+r)^{1+\gamma}} , ~~~ D_2 u_\gamma (x) = \frac{2\theta \cos \theta}{r (1+r)^{\gamma}} - \frac{\gamma \theta^2 \sin \theta}{(1+r)^{1+\gamma}} , \end{aligned} $$

and |D i D j u γ(x)| = O(r −2−γ) for any i, j. So, by Taylor’s formula (28), for all y ∈ B r∕2(x),

$$\displaystyle \begin{aligned} u_\gamma (x +y ) = u_\gamma (x) + \langle D u_\gamma (x), y \rangle + R(x,y),\end{aligned}$$

where \(| R(x,y) | \leq C \| y \|{ }^2 \| x \|{ }^{-2-\gamma }\) for all ∥x∥ sufficiently large. Once more define the event \(E_x = \{ \| \varDelta \| < \| x \|{ }^{\delta } \}\), where now \(\delta \in ( \frac {2+\gamma }{p},1)\). Then

$$\displaystyle \begin{aligned} \mathbb{E}_x \big[ ( u_\gamma (\xi_1) - u_\gamma (\xi_0) ) {\mathbf 1}_ {E_x } \big] = \mathbb{E}_x \big[ \langle D u_\gamma (x), \varDelta \rangle {\mathbf 1}_ {E_x } \big] + O (\|x\|{}^{-2-\gamma}) .\end{aligned}$$

Moreover, \(\mathbb {E}_x | \langle D u_\gamma (x), \varDelta \rangle {\mathbf 1}_ {E^{\mathrm {c}}_x } | \leq C \| x \|{ }^{-1-\gamma } \mathbb {E}_x ( \| \varDelta \|{\mathbf 1}_ {E^{\mathrm {c}}_x } ) = O ( \| x \|{ }^{-2-\gamma })\), by (29) and the fact that \(\delta > \frac {2}{p} > \frac {1}{p-1}\). Also, since u γ is uniformly bounded,

$$\displaystyle \begin{aligned} \mathbb{E}_x \big[ | u_\gamma (\xi_1) - u_\gamma (\xi_0) | {\mathbf 1}_ {E^{\mathrm{c}}_x } \big] \leq C {\mathbb P}_x ( E^{\mathrm{c}}_x ) = O ( \| x \|{}^{-p\delta} ) ,\end{aligned}$$

by (29). Since pδ > 2 + γ, it follows that

$$\displaystyle \begin{aligned} \mathbb{E}_x \big[ u_\gamma (\xi_1) - u_\gamma (\xi_0) \big] = \mathbb{E}_x \langle D u_\gamma (x), \varDelta \rangle + O (\|x\|{}^{-2-\gamma}) .\end{aligned} $$
(70)

For x ∈ S I, it follows from (70) and (D+) that \(\mathbb {E}_x [ u_\gamma (\xi _1) - u_\gamma (\xi _0) ] = O (\|x\|{ }^{-2-\gamma })\), and combining this with (59) we get (68).

Let \(\beta = \max (\beta ^+, \beta ^- ) < 1\). For x ∈ S, \(|\theta (x)| = O ( r^{\beta -1} )\) as ∥x∥→∞, so (70) gives

$$\displaystyle \begin{aligned} \mathbb{E}_x [ u_\gamma (\xi_1) - u_\gamma (\xi_0) ] = \frac{2\theta \cos \theta \mathbb{E}_x \varDelta_2}{\| x \| (1+\|x\|)^{\gamma}} + O (\|x\|{}^{2\beta -3-\gamma} ) + O (\|x\|{}^{-2-\gamma}) .\end{aligned}$$

If \(x \in S_B^\pm \) then \(\theta = \pm a^\pm (1+o(1)) x_1^{\beta ^\pm -1}\) and, by (10), \(\mathbb {E}_x \varDelta _2 = \mp \mu ^\pm (x) \cos \alpha +o(1)\), so

$$\displaystyle \begin{aligned} \mathbb{E}_x [ u_\gamma (\xi_1) - u_\gamma (\xi_0) ] = - 2 a^\pm \mu^\pm (x) (\cos \alpha +o(1)) \| x \|{}^{\beta^\pm -2 - \gamma} . \end{aligned} $$
(71)

For η = η 0 and β +, β − ≤ β c, we have from (60) that

$$\displaystyle \begin{aligned} \mathbb{E}_x [ \ell (\xi_{1}) - \ell (\xi_0) ] \leq \frac{1}{\|T x\|{}^2 \log\|T x\|} \left( O ( \| x \|{}^{2\beta^\pm-1} ) + O(1) \right) .\end{aligned}$$

Combining this with (71), we obtain (69), provided that we choose γ such that β ±− 2 − γ > 2β ±− 3 and β ±− 2 − γ > −2, that is, γ < 1 − β ± and γ < β ±. □
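
We remark that the constraint γ < p − 2 in Lemma 7 is precisely what makes the truncation in the proof admissible: the interval for δ is nonempty if and only if

$$\displaystyle \begin{aligned} \frac{2+\gamma}{p} < 1 , \quad \text{that is,} \quad \gamma < p - 2 .\end{aligned}$$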

In the case where β +, β − > 1, we will use the function

$$\displaystyle \begin{aligned} w_\gamma (x) := \ell (x) - \frac{x_1}{(1+\| x \|{}^2)^{\gamma}} , \end{aligned}$$

where the parameter η in ℓ is now chosen as η = η 1 as defined at (58). A similar function was used in [6].

Lemma 8

Suppose that (Mp) , (D+) , (R) , and (C+) hold, with p > 2, ε > 0, α + = −α − = α for |α| < π∕2, and β +, β − > 1. Let η = η 1 , and suppose that

$$\displaystyle \begin{aligned} \frac{1}{2} < \gamma < \min \left( 1 - \frac{1}{2\beta^+}, 1 - \frac{1}{2\beta^-} , \frac{p-1}{2} \right) .\end{aligned}$$

Then as ∥x∥→∞ with x ∈ S I,

$$\displaystyle \begin{aligned} \mathbb{E} [ w_\gamma ( \xi_{n+1} ) - w_\gamma ( \xi_n ) \mid \xi_n = x ] = - \frac{1+\eta^2 + o(1)}{2 \| Tx\|{}^2(\log \| Tx\|)^2} .\end{aligned} $$
(72)

Moreover, as ∥x∥→∞ with \(x \in S^\pm _B\),

$$\displaystyle \begin{aligned} \mathbb{E} [ w_\gamma ( \xi_{n+1} ) - w_\gamma ( \xi_n ) \mid \xi_n = x ] = - \frac{\mu^\pm (x) \cos \alpha + o(1)}{\| x \|{}^{2\gamma}} .\end{aligned} $$
(73)

Proof

Let \(q_\gamma (x) := x_1 (1 + \| x \|{ }^2)^{-\gamma }\). Then

$$\displaystyle \begin{aligned} D_1 q_\gamma (x) = \frac{1}{(1+\| x\|{}^2)^{\gamma}} - \frac{2\gamma x_1^2}{(1+\| x \|{}^2)^{1+\gamma}}, ~~~ D_2 q_\gamma (x) = - \frac{2\gamma x_1 x_2}{(1+\| x\|{}^2)^{1+\gamma}} ,\end{aligned}$$

and \(| D_i D_j q_\gamma (x) | = O ( \| x \|{ }^{-1-2\gamma } )\) for any i, j. Thus by Taylor’s formula, for y ∈ B r∕2(x),

$$\displaystyle \begin{aligned} q_\gamma (x + y ) - q_\gamma (x) = \langle D q_\gamma (x), y \rangle + R (x,y), \end{aligned}$$

where \(| R(x,y) | \leq C \| y \|{ }^2 \| x \|{ }^{-1-2\gamma }\) for ∥x∥ sufficiently large. Once more let \(E_x = \{ \| \varDelta \| < \| x \|{ }^{\delta } \}\), where now we take \(\delta \in (\frac {1+2\gamma }{p}, 1)\). Then

$$\displaystyle \begin{aligned} \mathbb{E}_x \big[ ( q_\gamma (\xi_1 ) - q_\gamma (\xi_0) ) {\mathbf 1}_ { E_x} \big] = \mathbb{E}_x \big[ \langle D q_\gamma (x), \varDelta \rangle {\mathbf 1}_ { E_x} \big] + O ( \| x \|{}^{-1-2\gamma} ) .\end{aligned}$$

Moreover, we get from (29) that \(\mathbb {E}_x | \langle D q_\gamma (x), \varDelta \rangle {\mathbf 1}_ { E^{\mathrm {c}}_x} | = O ( \| x \|{ }^{-2\gamma -\delta (p-1) })\), where δ(p − 1) > 2γ > 1, and, since q γ is uniformly bounded for γ > 1∕2,

$$\displaystyle \begin{aligned} \mathbb{E}_x \big[ ( q_\gamma (\xi_1 ) - q_\gamma (\xi_0) ) {\mathbf 1}_ { E^{\mathrm{c}}_x} \big] = O ( \| x \|{}^{-p\delta} ) ,\end{aligned}$$

where pδ > 1 + 2γ. Thus

$$\displaystyle \begin{aligned} \mathbb{E}_x \big[ q_\gamma (\xi_1 ) - q_\gamma (\xi_0) \big] = \mathbb{E}_x \langle D q_\gamma (x), \varDelta \rangle + O ( \| x \|{}^{-1-2\gamma} ) .\end{aligned} $$
(74)

If x ∈ S I, then (D+) gives \(\mathbb {E}_x \langle D q_\gamma (x), \varDelta \rangle = O ( \| x\|{ }^{-1-2\gamma })\) and with (59) we get (72), since γ > 1∕2. On the other hand, suppose that \(x \in S_B^\pm \) and β ± > 1. Then \(\| x\| \geq c x_1^{\beta ^\pm }\) for some c > 0, so \(x_1 = O ( \| x \|{ }^{1/\beta ^\pm } )\). So, by (74),

$$\displaystyle \begin{aligned} \mathbb{E}_x [ q_\gamma (\xi_1) - q_\gamma (\xi_0) ] = \frac{\mathbb{E}_x \varDelta_1}{(1+\| x \|{}^2)^{\gamma}} + O \left( \|x\|{}^{\frac{1}{\beta^\pm} - 1 -2\gamma} \right).\end{aligned}$$

Moreover, by (11), \(\mathbb {E}_x \varDelta _1 = \mu ^\pm (x) \cos \alpha + o(1)\). Combined with (61), this yields (73), provided that 2γ ≤ 2 − (1∕β ±), again using the fact that \(x_1 = O ( \| x \|{ }^{1/\beta ^\pm } )\). This completes the proof. □
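
We remark that the interval for γ in Lemma 8 is nonempty: since β +, β − > 1 and p > 2,

$$\displaystyle \begin{aligned} 1 - \frac{1}{2\beta^\pm} > \frac{1}{2} , \quad \text{and} \quad \frac{p-1}{2} > \frac{1}{2} .\end{aligned}$$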

4 Proofs of Main Results

We obtain our recurrence classification and quantification of passage times via Foster–Lyapunov criteria (cf. [14]). As we do not assume any irreducibility, the most convenient forms of the criteria are those for discrete-time adapted processes presented in [26]. However, the recurrence criteria in [26, §3.5] are formulated for processes on \({\mathbb R}_+\) and, strictly speaking, do not apply directly here. Thus we present appropriate generalizations, which may also be useful elsewhere. The following recurrence result is based on Theorem 3.5.8 of [26].
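
Before the formal criteria, a toy simulation may help fix ideas. The sketch below is an illustration only: the domain (a ± = 1, β + = β − = 1∕2), the boundary band of width 1, the increment laws, the reflection strength μ = 0.5, and the rejection device that keeps the walk inside \({\mathcal D}\) are all choices made here for the example, not constructions from the proofs. It uses zero-drift, identity-covariance increments in the interior and drift along the inward normal (α = 0) in the boundary band, and tracks ∥ξ n∥; the output can be compared with the classification in Theorem 1.

```python
# Toy simulation of a reflecting walk in D with d^+(z) = d^-(z) = z^{1/2}
# (a^pm = 1, beta^pm = 1/2): zero drift and identity covariance in the
# interior, drift of size mu along the inward normal (alpha = 0) in a
# boundary band of width B. All concrete choices are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
B, mu, n_steps = 1.0, 0.5, 10**5

def in_domain(x):
    return x[0] >= 0 and abs(x[1]) <= np.sqrt(x[0])

def inward_normal(x):
    # unit inward normal at the nearer curve x2 = +-sqrt(x1); cf. the
    # multiples (a beta x1^{beta-1}, -+1) in the description of the model
    g = 0.5 * x[0] ** (-0.5) if x[0] > 0 else 1.0
    n = np.array([g, -1.0]) if x[1] > 0 else np.array([g, 1.0])
    return n / np.linalg.norm(n)

x = np.array([10.0, 0.0])
radii = []
for _ in range(n_steps):
    gap = np.sqrt(x[0]) - abs(x[1]) if x[0] > 0 else 0.0  # crude distance proxy
    if gap > B:
        step = rng.standard_normal(2)                     # interior increment
    else:
        step = mu * inward_normal(x) + 0.1 * rng.standard_normal(2)
    y = x + step
    if in_domain(y):          # crude rejection device to keep the walk in D
        x = y
    radii.append(np.linalg.norm(x))

print(min(radii), radii[-1])  # small radii recurring suggests recurrence
```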

Lemma 9

Let X 0, X 1, … be a stochastic process on \({\mathbb R}^d\) adapted to a filtration \({\mathcal F}_0, {\mathcal F}_1, \ldots \) . Let \(f : {\mathbb R}^d \to {\mathbb R}_+\) be such that f(x) →∞ as ∥x∥→∞, and \(\mathbb {E} f(X_0) < \infty \) . Suppose that there exist \(r_0 \in {\mathbb R}_+\) and C < ∞ for which, for all \(n \in {\mathbb Z}_+\),

$$\displaystyle \begin{aligned} \mathbb{E} [ f (X_{n+1} ) - f(X_n) \mid {\mathcal F}_n] & \leq 0, \mathit{\text{ on }} \{ \| X_n \| \geq r_0 \}; \\ \mathbb{E} [ f (X_{n+1} ) - f(X_n) \mid {\mathcal F}_n] & \leq C, \mathit{\text{ on }} \{ \| X_n \| < r_0 \}.\end{aligned} $$

Then if \({\mathbb P} ( \limsup _{n \to \infty } \| X_n \| = \infty ) =1\) , we have \({\mathbb P} ( \liminf _{n \to \infty } \| X_n \| \leq r_0 ) = 1\).

Proof

By hypothesis, \(\mathbb {E} f(X_n) < \infty \) for all n. Fix \(n \in {\mathbb Z}_+\) and let \(\lambda _{n} := \min \{ m \geq n : \| X_m \| \leq r_0 \}\) and, for some r > r 0, set \(\sigma _n := \min \{ m \geq n : \| X_m \| \geq r \}\). Since \(\limsup _{m \to \infty } \| X_m \| = \infty \), a.s., we have that σ n < ∞, a.s. Then \(f(X_{m \wedge \lambda _n \wedge \sigma _n } )\), m ≥ n, is a non-negative supermartingale with \(\lim _{m \to \infty } f(X_{m \wedge \lambda _n \wedge \sigma _n } ) = f(X_{\lambda _n \wedge \sigma _n } )\), a.s. By Fatou’s lemma and the fact that f is non-negative,

$$\displaystyle \begin{aligned} \mathbb{E} f (X_n) \geq \mathbb{E} f(X_{\lambda_n \wedge \sigma_n } ) \geq {\mathbb P} ( \sigma_n < \lambda_n ) \inf_{y : \| y \| \geq r} f (y) .\end{aligned}$$

So

$$\displaystyle \begin{aligned} {\mathbb P} \left( \inf_{m \geq n} \| X_m \| \leq r_0 \right) & \geq {\mathbb P} (\lambda_n < \infty) \geq {\mathbb P} (\lambda_n < \sigma_n ) \geq 1 - \frac{\mathbb{E} f(X_n)}{\inf_{y : \| y \| \geq r} f (y) } .\end{aligned} $$

Since r > r 0 was arbitrary, and \(\inf _{y : \| y \| \geq r} f (y) \to \infty \) as r →∞, it follows that, for fixed \(n \in {\mathbb Z}_+\), \({\mathbb P} ( \inf _{m \geq n} \| X_m \| \leq r_0 ) = 1\). Since this holds for all \(n \in {\mathbb Z}_+\), the result follows. □

The corresponding transience result is based on Theorem 3.5.6 of [26].

Lemma 10

Let X 0, X 1, … be a stochastic process on \({\mathbb R}^d\) adapted to a filtration \({\mathcal F}_0, {\mathcal F}_1, \ldots \) . Let \(f : {\mathbb R}^d \to {\mathbb R}_+\) be such that \(\sup _x f(x) < \infty \) , f(x) → 0 as ∥x∥→∞, and \(\inf _{x : \| x \| \leq r} f(x) > 0\) for all \(r \in {\mathbb R}_+\) . Suppose that there exists \(r_0 \in {\mathbb R}_+\) for which, for all \(n \in {\mathbb Z}_+\),

$$\displaystyle \begin{aligned} \mathbb{E} [ f (X_{n+1} ) - f(X_n) \mid {\mathcal F}_n] & \leq 0, \mathit{\text{ on }} \{ \| X_n \| \geq r_0 \}.\end{aligned} $$

Then if \({\mathbb P} ( \limsup _{n \to \infty } \| X_n \| = \infty ) =1\) , we have that \({\mathbb P} ( \lim _{n \to \infty } \| X_n \| = \infty ) = 1\).

Proof

Since f is bounded, \(\mathbb {E} f(X_n) < \infty \) for all n. Fix r 1 ≥ r 0. For \(r \in {\mathbb Z}_+\) let \(\sigma _r := \min \{ n \in {\mathbb Z}_+ : \| X_n \| \geq r \}\). Since \({\mathbb P} ( \limsup _{n \to \infty } \| X_n \| = \infty ) =1\), we have σ r < ∞, a.s. Let \(\lambda _{r} := \min \{ n \geq \sigma _r : \| X_n \| \leq r_1 \}\). Then \(f(X_{n \wedge \lambda _r} )\), n ≥ σ r, is a non-negative supermartingale, which converges, on {λ r < ∞}, to \(f(X_{\lambda _r} )\). By optional stopping (e.g. Theorem 2.3.11 of [26]), a.s.,

$$\displaystyle \begin{aligned} \sup_{x : \| x \| \geq r} f(x) \geq f (X_{\sigma_r}) \geq \mathbb{E} [ f(X_{\lambda_r} ) \mid {\mathcal F}_{\sigma_r} ] \geq {\mathbb P} ( \lambda_r < \infty \mid {\mathcal F}_{\sigma_r} ) \inf_{x : \| x \| \leq r_1} f(x) .\end{aligned}$$

So

$$\displaystyle \begin{aligned} {\mathbb P} ( \lambda_r < \infty ) \leq \frac{\sup_{x : \| x \| \geq r} f(x)}{\inf_{x : \| x \| \leq r_1} f(x) } ,\end{aligned} $$

which tends to 0 as r →∞, by our hypotheses on f. Thus,

$$\displaystyle \begin{aligned} {\mathbb P} \left( \liminf_{n \to \infty} \| X_n \| \leq r_1 \right) = {\mathbb P} \left( \cap_{r \in {\mathbb Z}_+} \left\{ \lambda_r < \infty \right\} \right) = \lim_{r \to \infty} {\mathbb P} ( \lambda_r < \infty ) = 0 .\end{aligned}$$

Since r 1 ≥ r 0 was arbitrary, we get the result. □

Now we can complete the proof of Theorem 3, which includes Theorem 1 as the special case α = 0.

Proof (of Theorem 3 )

Let \(\beta = \max (\beta ^+,\beta ^-)\), and recall the definition of β c from (5) and that of s 0 from (7). Suppose first that 0 ≤ β < 1 ∧ β c. Then s 0 > 0 and we may (and do) choose w ∈ (0, 2s 0). Also, take γ ∈ (0, 1); note 0 < γw < 1. Consider the function \(f_w^\gamma \) with θ 0 = θ 1 given by (23). Then from (30), we see that there exist c > 0 and r 0 <  such that, for all x ∈ S I,

$$\displaystyle \begin{aligned} \mathbb{E} [ f_w^\gamma ( \xi_{n+1} ) - f_w^\gamma ( \xi_n ) \mid \xi_n = x ] \leq - c \| x \|{}^{\gamma w -2}, \text{ for all } \| x \| \geq r_0 . \end{aligned} $$
(75)

By choice of w, we have β − (1 − w)β c < 0, so (31) shows that, for all \(x \in S_B^\pm \),

$$\displaystyle \begin{aligned} \mathbb{E} [ f_w^\gamma ( \xi_{n+1} ) - f_w^\gamma ( \xi_n ) \mid \xi_n = x ] \leq - c \| x \|{}^{\gamma w -2 +\beta^\pm} ,\end{aligned}$$

for some c > 0 and all ∥x∥ sufficiently large. In particular, this means that (75) holds throughout S. On the other hand, it follows from (39) and (Mp) that there is a constant C <  such that

$$\displaystyle \begin{aligned} \mathbb{E} [ f_w^\gamma ( \xi_{n+1} ) - f_w^\gamma ( \xi_n ) \mid \xi_n = x ] \leq C, \text{ for all } \| x \| \leq r_0 . \end{aligned} $$
(76)

Since w, γ > 0, we have that \( f_w^\gamma (x) \to \infty \) as ∥x∥→∞. Then Lemma 9, applied with conditions (75) and (76) and assumption (N), establishes recurrence.

Next suppose that β c < β < 1. If β + = β − = β, we use the function \(f_w^\gamma \), again with θ 0 = θ 1 given by (23). We may (and do) choose γ ∈ (0, 1) and w < 0 with w > −2|s 0| and γw > w > 2 − p. By choice of w, we have β − (1 − w)β c > 0. We have from (30) and (31) that (75) holds in this case also, but now \(f_w^\gamma (x) \to 0\) as ∥x∥→∞, since γw < 0. Lemma 10 then gives transience when β + = β −.

Suppose now that β c < β < 1 with β + ≠ β −. Without loss of generality, suppose that β = β + > β −. We now use the function \(F_w^{\gamma ,\nu }\) defined at (48), where, as above, we take γ ∈ (0, 1) and w ∈ (−2|s 0|, 0), and we choose the constants λ, ν with λ < 0 and γw + β − − 2 < 2ν < γw + β + − 2. Note that 2ν < γw − 1, so \(F_w^{\gamma ,\nu } (x) = f_w^\gamma (x) ( 1 + o(1))\). With θ 0 = θ 1 given by (23), and this choice of ν, Lemma 5 applies. The choice of γ ensures that the right-hand side of (49) is eventually negative, and the choice of w ensures the same for (50). Since λ < 0, the right-hand side of (51) is also eventually negative. Combining these three estimates shows, for all x ∈ S with ∥x∥ large enough,

$$\displaystyle \begin{aligned} \mathbb{E} [ F_w^{\gamma,\nu} (\xi_{n+1} ) - F_w^{\gamma,\nu} (\xi_n) \mid \xi_n = x ] \leq 0 .\end{aligned}$$

Since \(F_w^{\gamma ,\nu } (x) \to 0\) as ∥x∥→∞, Lemma 10 gives transience.

Of the cases where β +, β − < 1, it remains to consider the borderline case where β = β c ∈ (0, 1). Here Lemma 7 together with Lemma 9 proves recurrence. Finally, if β +, β − > 1, we apply Lemma 8 together with Lemma 9 to obtain recurrence. Note that both of these critical cases require (D+) and (C+). □

Next we turn to moments of passage times: we prove Theorem 4, which includes Theorem 2 as the special case α = 0. Here the criteria we apply are from [26, §2.7], which are heavily based on those from [5].

Proof (of Theorem 4 )

Again let \(\beta = \max (\beta ^+,\beta ^-)\). First we prove the existence of moments part of (a)(i). Suppose that 0 ≤ β < 1 ∧ β c, so s 0 as defined at (7) satisfies s 0 > 0. We use the function \(f_w^\gamma \), with γ ∈ (0, 1) and w ∈ (0, 2s 0) as in the first part of the proof of Theorem 3. We saw in that proof that for these choices of γ, w we have that (75) holds for all x ∈ S. Rewriting this slightly, using the fact that \(f_w^\gamma (x)\) is bounded above and below by constants times \(\| x \|{ }^{\gamma w}\) for all ∥x∥ sufficiently large, we get that there are constants c > 0 and r 0 < ∞ for which

$$\displaystyle \begin{aligned} \mathbb{E} [f_w^\gamma ( \xi_{n+1} ) - f_w^\gamma ( \xi_n ) \mid \xi_n = x ] \leq -c ( f_w^\gamma ( x ) )^{1-\frac{2}{\gamma w}}, \text{ for all }x \in S\text{ with }\| x \| \geq r_0 . \end{aligned} $$
(77)

Then we may apply Corollary 2.7.3 of [26] to get \(\mathbb {E}_x ( \tau _r^s ) < \infty \) for any r ≥ r 0 and any s < γw∕2. Taking γ < 1 and w < 2s 0 arbitrarily close to their upper bounds, we get \(\mathbb {E}_x ( \tau _r^s ) < \infty \) for all s < s 0.
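
The exponent in (77) is simply the bound (75) rewritten: since \(f_w^\gamma (x) \asymp \| x \|{ }^{\gamma w}\) for ∥x∥ sufficiently large,

$$\displaystyle \begin{aligned} \| x \|{}^{\gamma w - 2} \asymp \left( \| x \|{}^{\gamma w} \right)^{1 - \frac{2}{\gamma w}} \asymp ( f_w^\gamma ( x ) )^{1-\frac{2}{\gamma w}} .\end{aligned}$$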

Next suppose that 0 ≤ β ≤ β c. Let s > s 0. First consider the case where β + = β −. Then we consider \(f_w^\gamma \) with γ > 1, w > 2s 0 (so w > 0), and 0 < γw < 2. Then, since β − (1 − w)β c = β c − β + (w − 2s 0)β c > 0, we have from (30) and (31) that

$$\displaystyle \begin{aligned} \mathbb{E} [f_w^\gamma ( \xi_{n+1} ) - f_w^\gamma ( \xi_n ) \mid \xi_n = x ] \geq 0, \end{aligned} $$
(78)

for all x ∈ S with ∥x∥ sufficiently large. Now set \(Y_n := f_w^{1/w} ( \xi _n)\), and note that Y n is bounded above and below by constants times ∥ξ n∥, and \(Y_n^{\gamma w} = f_w^\gamma (\xi _n)\). Write \({\mathcal F}_n = \sigma ( \xi _0, \xi _1, \ldots , \xi _n)\). Then we have shown in (78) that

$$\displaystyle \begin{aligned} \mathbb{E} [ Y^{\gamma w}_{n+1} - Y^{\gamma w}_n \mid {\mathcal F}_n ] \geq 0, \text{ on } \{ Y_n > r_1 \} ,\end{aligned} $$
(79)

for some r 1 sufficiently large. Also, from the γ = 1∕w case of (30) and (31),

$$\displaystyle \begin{aligned} \mathbb{E} [ Y_{n+1} - Y_n \mid {\mathcal F}_n ] \geq - \frac{B}{Y_n}, \text{ on } \{ Y_n > r_2 \} ,\end{aligned} $$
(80)

for some B < ∞ and r 2 sufficiently large. (The right-hand side of (31) is still eventually positive, while the right-hand side of (30) will be eventually negative if γ < 1.) Again let \(E_x = \{ \| \varDelta \| < \| x \|{ }^{\delta } \}\) for δ ∈ (0, 1). Then from the γ = 1∕w case of (41),

$$\displaystyle \begin{aligned} \left| f_w^{1/w} (\xi_1 ) - f_w^{1/w} (\xi_0 ) \right|{}^2 {\mathbf 1}_ {E_x} \leq C \| \varDelta \|{}^2 ,\end{aligned}$$

while from the γ = 1∕w case of (39) we have

$$\displaystyle \begin{aligned} \left| f_w^{1/w} (\xi_1 ) - f_w^{1/w} (\xi_0 ) \right|{}^2 {\mathbf 1}_ {E_x^{\mathrm{c}} } \leq C \| \varDelta \|{}^{2/\delta} .\end{aligned}$$

Taking δ ∈ (2∕p, 1), it follows from (Mp) that for some C < ∞, a.s.,

$$\displaystyle \begin{aligned} \mathbb{E} [ ( Y_{n+1} - Y_n )^2 \mid {\mathcal F}_n ] \leq C .\end{aligned} $$
(81)

The three conditions (79)–(81) show that we may apply Theorem 2.7.4 of [26] to get \(\mathbb {E}_x ( \tau _r^s ) = \infty \) for all s > γw∕2, all r sufficiently large, and all x ∈ S with ∥x∥ > r. Hence, taking γ > 1 and w > 2s 0 arbitrarily close to their lower bounds, we get \(\mathbb {E}_x ( \tau _r^s ) = \infty \) for all s > s 0 and appropriate r, x. This proves the non-existence of moments part of (a)(i) in the case β + = β −.

Next suppose that 0 ≤ β +, β − ≤ β c with β + ≠ β −. Without loss of generality, suppose that 0 ≤ β − < β + = β ≤ β c. Then 0 ≤ s 0 < 1∕2. We consider the function \(F_w^{\gamma ,\nu }\) given by (48) with θ 0 = θ 1 given by (23), λ > 0, w ∈ (2s 0, 1), and γ > 1 such that γw < 1. Also, take ν for which γw + β − − 2 < 2ν < γw + β + − 2. Then by choice of γ and w, we have that the right-hand sides of (49) and (50) are both eventually positive. Since λ > 0, the right-hand side of (51) is also eventually positive. Thus

$$\displaystyle \begin{aligned} \mathbb{E} [ F_w^{\gamma,\nu} ( \xi_{n+1} ) - F_w^{\gamma,\nu} (\xi_n ) \mid \xi_n = x] \geq 0 ,\end{aligned}$$

for all x ∈ S with ∥x∥ sufficiently large. Take \(Y_n := ( F_w^{\gamma ,\nu } (\xi _n) )^{1/(\gamma w)}\). Then we have shown that, for this Y n, the condition (79) holds. Moreover, since γw < 1 we have from convexity that (80) also holds. Again let E x = {∥Δ∥ < ∥xδ}. From (41) and (52),

$$\displaystyle \begin{aligned} \left| F^{\gamma,\nu}_w (x +y ) - F^{\gamma,\nu}_w (x) \right| \leq C \| y \| \| x\|{}^{\gamma w -1} ,\end{aligned}$$

for all y ∈ B r∕2(x). Then, by another Taylor’s theorem calculation,

$$\displaystyle \begin{aligned} \left| \big( F^{\gamma,\nu}_w (x +y ) \big)^{1/(\gamma w)} - \big( F^{\gamma,\nu}_w (x ) \big)^{1/(\gamma w)} \right| \leq C \| y \| , \end{aligned} $$

for all y ∈ B r∕2(x). It follows that \(\mathbb {E}_x [ ( Y_1 - Y_0 )^2 {\mathbf 1}_ { E_x } ] \leq C\). Moreover, by a similar argument to (40), |Y 1 − Y 0|2 ≤ CΔ2γwδ on \(E_x^{\mathrm {c}}\), so taking δ ∈ (2∕p, 1) and using the fact that γw < 1, we get \(\mathbb {E}_x [ ( Y_1 - Y_0 )^2 {\mathbf 1}_ { E^{\mathrm {c}}_x } ] \leq C\) as well. Thus we also verify (81) in this case. Then we may again apply Theorem 2.7.4 of [26] to get \(\mathbb {E}_x ( \tau _r^s ) = \infty \) for all s > γw∕2, and hence all s > s 0. This completes the proof of (a)(i).

For part (a)(ii), suppose first that β + = β − = β, and that β c ≤ β < 1. We apply the function \(f_w^\gamma \) with w > 0 and γ > 1. Then we have from (30) and (31) that (78) holds. Repeating the argument below (78) shows that \(\mathbb {E}_x ( \tau _r^s ) = \infty \) for all s > γw∕2, and hence all s > 0. The case where β + ≠ β − is similar, using an appropriate \(F_w^{\gamma ,\nu }\). This proves (a)(ii).

It remains to consider the case where β +, β − > 1. Now we apply \(f_w^\gamma \) with γ > 1 and w ∈ (0, 1∕2) small enough, noting Remark 4. In this case (30) with (32) and Lemma 3 show that (78) holds, and repeating the argument below (78) shows that \(\mathbb {E}_x ( \tau _r^s ) = \infty \) for all s > 0. This proves part (b). □