1 Introduction

Classical fluctuation theory deals with the fine structure (to steal an expression used by Chung in his textbook [20, Section 8.4]) of ordinary random walks on \(\mathbb {R}\), i.e., partial sums \(S_{n}=\sum _{k=1}^{n}X_{k}\) of iid real-valued random variables \(X_{1},X_{2},\ldots \) It comprises, among others,

  (1) the basic trichotomy regarding the almost sure behavior of \(S_{n}\) as \(n\rightarrow \infty \),

  (2) results about the existence of moments for related quantities like \(\min _{n\ge 0}S_{n}\) or the number of nonpositive sums \(N(0)=\sum _{n\ge 0}\mathbf {1}_{\{S_{n}\le 0\}}\).

A short review of the results relevant for this article will be given in Sect. 3.

The present work aims at providing results of type (1) and (2) for the more general situation when the increments \(X_{1},X_{2},\ldots \) are modulated or driven by a positive recurrent Markov chain \({\varvec{M}}=(M_{n})_{n\ge 0}\) with countable state space \(\mathcal {S}\). More precisely, the \(X_{n}\) are conditionally independent given \({\varvec{M}}\), and

$$\begin{aligned} \mathbb {P}((X_{1},\ldots ,X_{n})\in \cdot \,|M_{0}=i_{0},\ldots ,M_{n}=i_{n})\ =\ K_{i_{0}i_{1}}\otimes \cdots \otimes K_{i_{n-1}i_{n}} \end{aligned}$$

for all \(n\ge 1\), \(i_{0},\ldots ,i_{n}\in \mathcal {S}\) and some stochastic kernel K from \(\mathcal {S}^{2}\) to \(\mathbb {R}\). Then \((M_{n},S_{n})_{n\ge 0}\), and sometimes also its additive part \((S_{n})_{n\ge 0}\), is called a Markov random walk (MRW) or Markov additive process and \({\varvec{M}}\) its driving chain. Let \(\mathbf {P}=(p_{ij})_{i,j\in \mathcal {S}}\) denote the transition matrix of \({\varvec{M}}\) and \(\pi =(\pi _{i})_{i\in \mathcal {S}}\) its unique stationary distribution. For any \(i\in \mathcal {S}\), put further \(\mathbb {P}_{i}:=\mathbb {P}(\cdot |M_{0}=i)\) and let \((\tau _{n}(i))_{n\ge 1}\) denote the renewal sequence of successive return epochs to i.
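To make the modulation mechanism concrete, the following minimal sketch simulates such a MRW for a two-state driving chain. The transition matrix and the kernel \(K_{ij}\) (here a normal law whose mean depends on the transition \((i,j)\)) are purely illustrative choices, not taken from the article:

```python
import random

# Hypothetical example: two-state driving chain with transition probabilities P,
# increment law K_{ij} = Normal(mu[(i, j)], 1) -- an illustrative choice only.
P = {0: [(0, 0.9), (1, 0.1)], 1: [(0, 0.5), (1, 0.5)]}
mu = {(0, 0): 1.0, (0, 1): -2.0, (1, 0): 0.5, (1, 1): -0.5}

def simulate_mrw(n, m0=0, rng=random.Random(1)):
    """Return the paths (M_0,...,M_n) and (S_0,...,S_n) of the MRW."""
    M, S = [m0], [0.0]
    for _ in range(n):
        states, probs = zip(*P[M[-1]])
        j = rng.choices(states, weights=probs)[0]   # next state M_{k+1}
        x = rng.gauss(mu[(M[-1], j)], 1.0)          # increment X_{k+1} ~ K_{M_k M_{k+1}}
        M.append(j)
        S.append(S[-1] + x)
    return M, S

M, S = simulate_mrw(1000)
```

Conditionally on the state path, the increments are independent with law determined by the traversed transition, exactly as in the displayed product formula.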

The well-known fact that, for each \(i\in \mathcal {S}\), \((S_{\tau _{n}(i)})_{n\ge 0}\) with \(\tau _{0}(i):=0\) constitutes an ordinary zero-delayed random walk under \(\mathbb {P}_{i}\) suggests that results of the above kind for \((S_{n})_{n\ge 0}\) should be obtainable by drawing on the known results for these embedded walks. On the other hand, it should also be clear that the excursions between the successive return epochs require additional analysis and may in fact be intriguing and result in surprising effects. For instance, it is possible that \(S_{\tau _{n}(i)}\rightarrow \infty \) a.s. for all \(i\in \mathcal {S}\) while \((S_{n})_{n\ge 0}\) is oscillating (see Example 7.2). Our main results will actually show that the attempt to simply “lift” fluctuation-theoretic results for ordinary random walks to the class of MRW is not at all straightforward and often even fails without proper adjustments. In other words, despite the fact that \((S_{n})_{n\ge 0}\) can be viewed as the (generally infinite) union of the ordinary random walks \((S_{\tau _{n}(i)})_{n\ge 0}\), \(i\in \mathcal {S}\), the way those are intertwined may lead to nontrivial complications that must be taken care of in the analysis.

There is an extensive literature on MRW with discrete driving chain and finite stationary drift \(\mathbb {E}_{\pi }X_{1}\), mostly within the framework of Markov renewal theory and focusing on the Markov renewal theorem and results derived therefrom. Çinlar [21, 22] provides good accounts of the early developments and references, while Asmussen’s monograph [14, Ch. XI] and [6] may be consulted for more recent treatments of some aspects of the theory, including the discrete Markov renewal theorem, the dual process, and Wiener–Hopf factorization; for the latter, see also [12, 24, 44]. The ladder variables and the associated ladder chain of a MRW have been studied in [4, 7] (see also Sect. 4 for further information), an arcsine law for the number of positive sums is derived in [8], and the topological recurrence of \((S_{n})_{n\ge 0}\) in the case when \(\mathbb {E}_{\pi }X_{1}=0\) is shown in [5]. For conditional Markov renewal theorems in the case when \(\mathcal {S}\) is countable, we mention an article by Lalley [34]. Finally, two papers, by Newbould [40] and Prabhu et al. [44], are more closely related to the present work in that they provide the basic trichotomy for MRW, the first in the case when the driving chain has finite state space, the second without this restriction and including a discussion of degeneracy. We refer to Sect. 5 for further details.

Regarding the more general situation when the driving chain has continuous state space and is positive Harris recurrent, we believe that an extension of our results is possible, at least to some extent, but not without considerable additional work. To explain, we note that the natural substitute in our approach for the return times \(\tau _{n}(i)\) to some state i, which are generally no longer a.s. finite, is a sequence of regeneration epochs \((\tau _{n})_{n\ge 1}\), marked by the successive epochs where the Harris chain returns to some recurrent small set as defined by Meyn and Tweedie [39, Sect. 5.2] and a regeneration occurs in the sense of Athreya and Ney [16, 17] or Nummelin [42]. Unfortunately, the associated embedded random walk \((S_{\tau _{n}})_{n\ge 1}\) has iid increments only in special cases, namely when the bivariate chain \((M_{n},X_{n})_{n\ge 0}\) satisfies a certain Harris-type condition (see [15] and also [41]). In general, however, the increments are only stationary and 1-dependent. Therefore, wherever we draw on the rather deep fluctuation-theoretic results for ordinary RW due to Spitzer [46], Erickson [23], and Kesten and Maller [33] (see Sect. 3 for further information), extensions to the case of stationary, 1-dependent increments would be needed. Since this cannot be done briefly, we have restricted this work to the case when the driving chain has countable state space.

Recently, results of a similar type to those in this article have been derived by the first author with Iksanov and Meiners [10] for another generalization of ordinary RW, called perturbed random walks (PRW), which have interesting connections with perpetuities, the Bernoulli sieve, and regenerative and shot-noise processes (see their Sects. 1 and 3 for further information). We refer to Sect. 12 for a more detailed discussion of how the results here relate to those in [10].

Further organization: Three examples where MRW play a prominent role are described in some detail in Sect. 2, in particular random difference equations in Markovian environment, which have been a major motivation for this work. Section 3 provides a short survey of the relevant fluctuation-theoretic results for ordinary RW, followed by a preliminary section giving some basic facts about MRW with discrete driving chain. A classification of MRW as to their fluctuation type, a short discussion of null-homologous MRW, which are the counterpart to ordinary RW with zero increments, and an extension of Kesten’s trichotomy to MRW (Theorem 5.5) form the content of Sect. 5. All main results are presented in Sect. 6. For their proofs, provided in Sect. 8, various quantities, defined as functions of \(i\in \mathcal {S}\), must be considered and shown to share certain properties for all i. Such solidarity results will be collected in Sect. 7. Section 9 is devoted to a further discussion of the level x first passage times \(\inf \{n\ge 1:S_{n}>x\}\), whose behavior is more difficult to describe than for ordinary RW. A short discussion of the strong law of large numbers can be found in Sect. 10, while Sect. 11 collects some counterexamples that, as a supplement to our main results, will show that various equivalences given in the theorems by Spitzer–Erickson and Kesten–Maller for ordinary RW do not carry over to MRW. Finally, Sect. 12 provides the above-mentioned discussion of PRW, followed by an Appendix containing some auxiliary lemmata of a purely technical nature and a Glossary providing a comprehensive list of the most important notation used in this article.

2 Examples

Markov modulation forms a common tool in Applied Probability, for instance in queuing, risk or reliability theory, to provide models of greater variability by allowing input parameters (like interarrival or service times, claim sizes, lifetime distributions or the type of agents in the model) to depend on the state of an underlying Markov chain. Examples of this kind may be found, e.g., in the monographs by Asmussen [13, Chs. VI and VIII], [14, Ch. 11], Prabhu [43, Part III] or Limnios and Oprişan [37]. In the following, we give three examples where the occurrence of MRW may be less known and begin with one related to random difference equations (iterations of random affine linear maps) that has been an area of very active research over the last twenty years, see the recent monograph by Buraczewski et al. [19] and also [11].

2.1 Random Difference Equations (Perpetuities) in Markovian Environment

A main motivation for the present work originated from the question of convergence of iterations of affine linear maps \({\varPsi }_{n}(x)=A_{n}x+B_{n}\), \(n=1,2,\ldots \), in the situation when the sequence \((A_{n},B_{n})_{n\ge 1}\) of \(\mathbb {R}^{2}\)-valued random vectors is modulated by a positive recurrent Markov chain \((M_{n})_{n\ge 0}\) with countable state space \(\mathcal {S}\), see [9]. This means that, conditioned upon \(M_{0}=i_{0},M_{1}=i_{1},\ldots \) for arbitrary \(i_{0},i_{1},\ldots \in \mathcal {S}\),

  • \((A_{1},B_{1}),\,(A_{2},B_{2}),\ldots \) are conditionally independent,

  • the conditional law of \((A_{n},B_{n})\) depends only on \((i_{n-1},i_{n})\) and is temporally homogeneous, i.e.,

    $$\begin{aligned} \mathbb {P}((A_{n},B_{n})\in \cdot |M_{n-1}=i_{n-1},\,M_{n}=i_{n})\ =\ K_{i_{n-1}i_{n}} \end{aligned}$$

    for a stochastic kernel K from \(\mathcal {S}^{2}\) to \(\mathbb {R}^{2}\) and all \(n\ge 1\).

The goal is to find necessary and sufficient conditions for the convergence in law of the iterated function system

$$\begin{aligned} R_{n}\ :=\ {\varPsi }_{n}(R_{n-1})\ =\ {\varPsi }_{n}\circ \cdots \circ {\varPsi }_{1}(R_{0}),\quad n=1,2,\ldots , \end{aligned}$$
(2.1)

also called forward iterations, as well as such conditions for the convergence, almost surely or in law, of the corresponding backward iterations

$$\begin{aligned} Z_{n}\,=\ {\varPsi }_{1}\circ \cdots \circ {\varPsi }_{n}(R_{0})\ =\ {\varPi }_{n}R_{0}\ +\ \sum _{k=1}^{n}{\varPi }_{k-1}B_k,\quad n=1,2,\ldots \end{aligned}$$
(2.2)

where

$$\begin{aligned} {\varPi }_{0}\ :=\ 1\quad \text {and}\quad {\varPi }_{n}\ :=\ A_{1} A_{2} \cdots A_{n},\quad n=1,2,\ldots \end{aligned}$$

and \(R_{0}\) is a real number (this can be generalized but is ignored here for simplicity). Note that, if \(R_{0}=0\) and the a.s. limit of \(Z_{n}\) exists, then it equals \(Z_{\infty }=\sum _{n\ge 1}{\varPi }_{n-1}B_{n}\), called a perpetuity.
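For orientation, here is a small simulation sketch of the two iteration schemes in the iid case (no Markov modulation yet), with an arbitrarily chosen environment in which \(|A_{k}|<1\) forces \({\varPi }_{n}\rightarrow 0\): the backward sequence \(Z_{n}\) stabilizes pathwise via formula (2.2), while the forward sequence \(R_{n}\) keeps fluctuating.

```python
import random

rng = random.Random(7)
n = 200
# Illustrative iid environment: A_k uniform on (0.2, 0.8), so Pi_n -> 0 geometrically.
AB = [(rng.uniform(0.2, 0.8), rng.gauss(0.0, 1.0)) for _ in range(n)]

def forward(AB, r0=0.0):
    # R_m = Psi_m(R_{m-1}), the forward iterations (2.1)
    out, r = [], r0
    for a, b in AB:
        r = a * r + b
        out.append(r)
    return out

def backward(AB, r0=0.0):
    # Z_m = Pi_m * r0 + sum_{k=1}^m Pi_{k-1} B_k, the backward iterations (2.2)
    out, pi, s = [], 1.0, 0.0
    for a, b in AB:
        s += pi * b          # add Pi_{k-1} * B_k
        pi *= a              # update to Pi_k
        out.append(pi * r0 + s)
    return out

R, Z = forward(AB), backward(AB)
# Z_m stabilizes because the tail terms are damped by the vanishing products Pi_{k-1}.
```

For each fixed m, \(R_{m}\) and \(Z_{m}\) have the same law (and coincide for \(m=1\)), but only the backward path converges.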

In the case of iid \((A_{n},B_{n})\), the aforementioned stability questions were finally settled by Goldie and Maller [25, Theorems 2.1 and 3.1], based on earlier work by Vervaat [48] and Grincevičius [26, 27]. One of their central results reads as follows: If

$$\begin{aligned} \mathbb {P}(A_{1}=0)\,=\,0\quad \text {and}\quad \mathbb {P}(B_{1}=0)\,<\,1, \end{aligned}$$

then \(Z_{n}\) converges a.s. to \(Z_{\infty }\) (regardless of the initial value \(Z_{0}=R_{0}\)) iff

$$\begin{aligned} {\varPi }_{n}\,\rightarrow \,0\quad \text {a.s.}\quad \text {and}\quad \mathbb {E}J(\log ^{+}|B|)\,<\,\infty , \end{aligned}$$
(2.3)

where, for \(x>0\),

$$\begin{aligned} J(0)\,:=\,1\quad \text {and}\quad J(x)\,:=\, {\left\{ \begin{array}{ll} \displaystyle {\frac{x}{\mathbb {E}((\log ^{-}|A|)\wedge x)}}, \quad \text {if }\mathbb {P}(\log |A|<0)>0,\\ x, \quad \text {otherwise}. \end{array}\right. } \end{aligned}$$
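As a concrete illustration, take the hypothetical two-point law \(\log |A|\in \{-1,\tfrac{1}{2}\}\) with equal probabilities (chosen only so that everything is computable in closed form). Then \(\mathbb {E}((\log ^{-}|A|)\wedge x)=\tfrac{1}{2}\min (1,x)\), and J can be evaluated exactly:

```python
# Hypothetical example: log|A| = -1 or +1/2, each with probability 1/2,
# so P(log|A| < 0) > 0 and log^-|A| equals 1 or 0.
def trunc_mean_neg_part(x):
    # E((log^-|A|) ∧ x) for the two-point law above
    return 0.5 * min(1.0, x)

def J(x):
    if x == 0:
        return 1.0
    return x / trunc_mean_neg_part(x)

# For x <= 1 this gives J(x) = 2; beyond the truncation point, J grows linearly.
```

In this example \(J(x)\asymp x\), so the moment condition in (2.3) reduces to \(\mathbb {E}\log ^{+}|B|<\infty \); heavier-tailed \(\log ^{-}|A|\) makes the denominator smaller and the condition on B correspondingly stronger.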

Since \(Z_{n}\) and \(R_{n}\) obviously have the same distribution for each n, we also infer the convergence in law of \(R_{n}\) to \(Z_{\infty }\). Let us note here in passing that this equality in law no longer holds in general in Markovian environment. According to Theorem 2.1 in [10], Condition (2.3) is equivalent to the negative divergence of the PRW

$$\begin{aligned} W_{n}\ =\ \log |{\varPi }_{n-1}|+\log |B_{n}|,\quad n\ge 1, \end{aligned}$$

which means that \(W_{n}\rightarrow -\infty \) a.s. (see also Sect. 12). This equivalence in turn is obtained by drawing on a fluctuation-theoretic result, stated as Theorem 3.1 in the next section, due to Spitzer [46] and Erickson [23].

In view of this, it can be expected that an extension of the Goldie–Maller theorem to the Markov-modulated setup requires an extension of the Spitzer–Erickson result to MRW, this being so because \((\log |{\varPi }_{n}|)_{n\ge 0}\) now forms a MRW with driving chain \((M_{n})_{n\ge 0}\). The latter extension is indeed obtained here as Theorem 6.1 and utilized for the proofs of some of the results in [9] (stated there in Sect. 3). For all further details, including relevant references, the interested reader is referred to that paper, which actually gives a complete description of necessary and sufficient conditions for the convergence of forward and backward iterations. Unlike the iid case, this requires the distinction of various additional subregimes related to the lattice-type of \((M_{n},\log |{\varPi }_{n}|)_{n\ge 0}\).

2.2 Branching Random Walk in Random Environment

Recently, Mallein and Miłos [38] have studied the maximal displacement in a supercritical branching random walk (BRW) in random environment, the latter given by a sequence \(\mathcal {L}=(\mathcal {L}_{n})_{n\ge 0}\) of iid point process laws on \(\mathbb {R}\). The basic assumptions about these laws are

$$\begin{aligned} \mathbb {P}(\mathcal {L}_{0}(\text {number of points}=0))&=\ 0, \end{aligned}$$
(2.4)
$$\begin{aligned} \mathbb {P}(\mathcal {L}_{0}(\text {number of points}>1))&>\ 0. \end{aligned}$$
(2.5)

In other words, if \(\mathcal {Z}=\sum _{k=1}^{N}\delta _{Z_{k}}\) has law \(\mathcal {L}_{0}\), then \(N\ge 1\) a.s. and \(\mathbb {P}(N>1)>0\). Then a BRW in random environment \(\mathcal {L}\) originates from an initial particle \(\varnothing \) sitting at the origin at time 0. At time 1, the particle dies while giving birth to a random number of children with random positions relative to their mother in accordance with the law \(\mathcal {L}_{0}\). At time 2, these offspring particles die while independently giving birth to a random number of children with random positions relative to their own position in accordance with the law \(\mathcal {L}_{1}\). Generally, at time n all particles born at time \(n-1\) die and independently give birth to a random number of children with random positions relative to their own position in accordance with the law \(\mathcal {L}_{n-1}\). Due to the assumptions on \(\mathcal {L}\), the genealogy of this process is described by an a.s. non-extinctive, thus supercritical Galton–Watson tree \(\mathbb {T}\), say, in iid random environment. For \(v\in \mathbb {T}\), let S(v) denote the position of the particle v. It is obtained by summing the relative displacements of all particles along the unique path from the ancestor \(\varnothing \) (root of \(\mathbb {T}\)) to v. Then the maximal displacement of the particles born at time n is defined by

$$\begin{aligned} {\varLambda }_{n}\ :=\ \max _{v\in \mathbb {T},|v|=n}S(v), \end{aligned}$$

where |v| is the generation to which v belongs.

Next, let \(\varphi _{n}:\mathbb {R}_{\geqslant }\rightarrow \mathbb {R}\cup \{\infty \}\) for \(n\in \mathbb {N}_{0}\) denote the log-Laplace transform of the random point process law \(\mathcal {L}_{n}\), thus

$$\begin{aligned} \varphi _{n}(\theta )\ :=\ \log \mathbb {E}_{\mathcal {L}}\sum _{x\in \mathcal {Z}_{n}}e^{\theta x} \end{aligned}$$

where \(\mathcal {Z}_{n}\) is a point process with law \(\mathcal {L}_{n}\) under the conditional measure \(\mathbb {P}_{\mathcal {L}}\). Note that \(\varphi _{n}(\theta )>-\infty \) is guaranteed by (2.4) and that \(\varphi _{0},\varphi _{1},\ldots \) are iid random functions. As in [38], we further assume that \(\varphi _{n}(\theta )^{-}\), the negative part of \(\varphi _{n}(\theta )\), has finite mean for all \(\theta \ge 0\). Then \(\overline{\varphi }:\mathbb {R}_{\geqslant }\rightarrow \mathbb {R}\cup \{\infty \}\), \(\theta \mapsto \mathbb {E}\varphi _{0}(\theta )\), is well defined and, moreover, a smooth and convex function on \(\textit{int}\,(\mathbb {D})\), the interior of \(\mathbb {D}=\{\theta :\overline{\varphi }(\theta )<\infty \}\), provided this interval is nonempty. This will be assumed hereafter, along with the existence of \(\theta ^{*}\in \textit{int}\,(\mathbb {D})\) such that

$$\begin{aligned} \theta ^{*}\overline{\varphi }'(\theta ^{*})-\overline{\varphi }(\theta ^{*})\ =\ 0. \end{aligned}$$

It then follows that \(\overline{\varphi }'(\theta ^{*})=\mathbb {E}\varphi _{0}'(\theta ^{*})=\nu \), where

$$\begin{aligned} \nu \ :=\ \inf _{\theta >0}\frac{\overline{\varphi }(\theta )}{\theta }. \end{aligned}$$

Under these conditions and some further technical ones omitted here (see [38, (1.6)–(1.8)]), the main result of Mallein and Miłos asserts that, for some \(\beta ^{*}>0\),

$$\begin{aligned}&\lim _{n\rightarrow \infty }\mathbb {P}_{\mathcal {L}}\left( {\varLambda }_{n}-\frac{1}{\theta ^{*}}\sum _{k=0}^{n-1}\varphi _{k}(\theta ^{*})\ge -\beta \log n\right) \\&\quad = {\left\{ \begin{array}{ll} 1,&{} \text {if }\beta >\beta ^{*},\\ 0,&{}\text {if }\beta <\beta ^{*} \end{array}\right. }\quad \text {in }\mathbb {P}\text {-probability}, \end{aligned}$$

see [38, Theorem 1.1], where also the definition of the threshold \(\beta ^{*}\) can be found. This result extends earlier ones by Addario-Berry and Reed [1], Hu and Shi [30] and Aïdékon [2] for the case of constant environment.

An essential tool in the study of the extremal behavior of BRW is the so-called many-to-one lemma. In the present context of iid random environment, it provides the connection with a MRW. Certain fluctuation-theoretic properties of this walk, see [38, Section 3], are then used in the analysis of \({\varLambda }_{n}\). To see the connection, define the stochastic kernel K by

$$\begin{aligned} K(\ell ,(-\infty ,t])\ :=\ \mathbb {E}\left( \sum _{x\in \mathcal {Z}_{0}}\mathbf {1}_{(-\infty ,t]}(x)e^{\theta ^{*}x-\varphi _{0}(\theta ^{*})}\Bigg |\mathcal {L}_{0}=\ell \right) \end{aligned}$$

where \(\mathcal {Z}_{0}\) is a point process with law \(\ell \) given \(\mathcal {L}_{0}=\ell ,\mathcal {L}_{1},\mathcal {L}_{2},\ldots \) Let \((X_{n})_{n\ge 1}\) be a sequence of real-valued random variables which, conditioned upon \(\mathcal {L}\), are conditionally independent and such that the conditional law of \(X_{n}\) equals \(K(\mathcal {L}_{n},\cdot )\). Putting \(S_{n}:=\sum _{k=1}^{n}X_{k}\) and \(W_{n}:=\theta ^{*}S_{n}-\sum _{k=1}^{n}\varphi _{k}(\theta ^{*})\) for \(n\ge 1\) and \(S_{0}=W_{0}:=0\), it follows that \((\mathcal {L}_{n},S_{n})_{n\ge 0}\) as well as \((\mathcal {L}_{n},W_{n})_{n\ge 0}\) constitute MRW with driving chain \(\mathcal {L}\). They fit into the framework of this article if further the \(\mathcal {L}_{n}\) take values in a countable set. The many-to-one lemma for the given model now states the following, see [38, Lemma 2.1]: For any \(n\in \mathbb {N}\) and any nonnegative measurable function f on \(\mathbb {R}^{n}\), we have

$$\begin{aligned} \mathbb {E}_{\mathcal {L}}\left( \sum _{|v|=n}f(S(v^{1}),\ldots ,S(v^{n}))\right) \ =\ \mathbb {E}_{\mathcal {L}}\left( e^{-W_{n}}f(S_{1},\ldots ,S_{n})\right) \end{aligned}$$

or, equivalently,

$$\begin{aligned} \mathbb {E}_{\mathcal {L}}\left( \sum _{|v|=n}f(W(v^{1}),\ldots ,W(v^{n}))\right) \ =\ \mathbb {E}_{\mathcal {L}}\left( e^{-W_{n}}f(W_{1},\ldots ,W_{n})\right) , \end{aligned}$$

where \(v^{0}=\varnothing \rightarrow v^{1}\rightarrow \cdots \rightarrow v^{n-1}\rightarrow v^{n}=v\) denotes the unique path from the root to v and \(W(u):=\theta ^{*}S(u)-\sum _{k=1}^{|u|}\varphi _{k}(\theta ^{*})\) for \(u\in \mathbb {T}\). This means that, in quenched regime (under \(\mathbb {P}_{\mathcal {L}}\)), the average over all walks along the rays in \(\mathbb {T}\) up to level n is described by a MRW. The relevance of this walk for the asymptotic behavior of \({\varLambda }_{n}\) stems from the fact that, roughly speaking, it carries the main information of the extremal paths in the BRW.
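The identity can be checked by hand in the simplest conceivable setting: a constant environment in which every particle has exactly two children, placed deterministically at displacements a and b relative to their mother (a hypothetical toy case, not the model of [38]). The tilted step law then puts mass \(e^{\theta a-\varphi (\theta )}\) and \(e^{\theta b-\varphi (\theta )}\) on a and b, and both sides of the many-to-one formula can be evaluated by enumerating all ancestral paths:

```python
import math
from itertools import product

# Toy check of the many-to-one identity in a constant environment where every
# particle has exactly two children, displaced deterministically by a and b.
a, b, theta = 0.3, -1.1, 0.7
phi = math.log(math.exp(theta * a) + math.exp(theta * b))  # log-Laplace transform

def lhs(f, n):
    # E[ sum_{|v|=n} f(S(v^1),...,S(v^n)) ]: the tree is deterministic here,
    # so we enumerate all 2^n ancestral paths directly.
    return sum(f([sum(steps[:k]) for k in range(1, n + 1)])
               for steps in product((a, b), repeat=n))

def rhs(f, n):
    # E[ e^{-W_n} f(S_1,...,S_n) ] for the tilted walk with step law
    # K({x}) = e^{theta*x - phi} and W_n = theta*S_n - n*phi.
    total = 0.0
    for steps in product((a, b), repeat=n):
        p = math.exp(theta * sum(steps) - n * phi)   # path probability under K
        s = [sum(steps[:k]) for k in range(1, n + 1)]
        total += p * math.exp(-(theta * s[-1] - n * phi)) * f(s)
    return total

f = lambda s: max(s) ** 2 + 1.0   # an arbitrary nonnegative test function
```

The exponential tilt and the factor \(e^{-W_{n}}\) cancel termwise, which is exactly why the two sides agree.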

2.3 Superpositions of Renewal Processes

Consider a single-server queue with \(p\ge 2\) time-slotted input channels which are described by independent, integrable and discrete renewal sequences \((S_{k,n})_{n\ge 0}\), \(k=1,\ldots ,p\) taking values in \(\mathbb {N}_{0}\). Thus,

$$\begin{aligned} S_{k,n}\ =\ S_{k,0}+X_{k,1}+\cdots +X_{k,n} \end{aligned}$$

for each \(n\ge 1\) and \(k=1,\ldots ,p\), where

  • \((X_{k,n})_{n\ge 1}\), \(k=1,\ldots ,p\), are independent sequences of iid positive integer-valued random variables with \(\mu _{k}:=\mathbb {E}X_{k,1}<\infty \) for each k,

  • \(S_{1,0},\ldots ,S_{p,0}\) take values in \(\mathbb {N}_{0}\) and are mutually independent as well as independent of all \(X_{k,n}\).

Further defining the residual waiting time sequences \((R_{k,n})_{n\ge 0}\) by

$$\begin{aligned} R_{k,n}\ :=\ \min \{S_{k,l}-n:S_{k,l}\ge n,\,l\ge 0\} \end{aligned}$$

for \(k=1,\ldots ,p\), it is a well-known fact from renewal theory that these sequences constitute independent discrete Markov chains on \(\mathbb {N}_{0}\) with stationary distributions \(\lambda _{k,\bullet }:=(\lambda _{k,n})_{n\ge 0}\), where

$$\begin{aligned} \lambda _{k,n}\ =\ \mu _{k}^{-1}\,\mathbb {P}(X_{k,1}>n), \end{aligned}$$

see, for example, [29, Theorem 6.2 on p. 62]. As a consequence, the p-variate sequence

$$\begin{aligned} R_{n}\ :=\ (R_{1,n},\ldots ,R_{p,n}),\quad n\ge 0 \end{aligned}$$

forms a positive recurrent discrete Markov chain on \(\mathbb {N}_{0}^{p}\), its stationary distribution being the product of the \(\lambda _{k,\bullet }\), i.e., \(\lambda =\lambda _{1,\bullet }\otimes \cdots \otimes \lambda _{p,\bullet }\). Let \((M_{n})_{n\ge 0}\) be the associated hit chain of the set

$$\begin{aligned} \mathcal {S}\ :=\ \big \{(n_{1},\ldots ,n_{p})\in \mathbb {N}_{0}^{p}:n_{k}=0\quad \text { for some }\,\,1\le k\le p\big \}, \end{aligned}$$

thus \(M_{n}:=R_{\tau _{n}}\) for \(n\ge 0\) with \(\tau _{0}:=0\) and \(\tau _{n}:=\inf \{k>\tau _{n-1}:R_{k}\in \mathcal {S}\}\) for \(n\ge 1\). Its stationary distribution equals \(\pi :=\lambda (\cdot \cap \mathcal {S})/\lambda (\mathcal {S})\). Note that an arrival in one of the channels occurs iff the Markov chain \((R_{n})_{n\ge 0}\) hits the set \(\mathcal {S}\). Now the aggregated arrival process \((S_{n})_{n\ge 0}\), where simultaneous arrivals from different channels are viewed as one arrival epoch, is obtained as the superposition of the \((S_{k,n})_{n\ge 0}\) (that is, the order statistics of the sample \(\{S_{k,n}:k=1,\ldots ,p,\,n\ge 0\}\) with multiple points aggregated into one) and can be shown to constitute a Markov random walk with driving chain \((M_{n})_{n\ge 0}\). For more details see [3] where this has been verified for the (more difficult) case when the renewal sequences are continuous and, as a consequence, the driving chain has continuous state space. We also refer to this article and [36] for further results on \((S_{n})_{n\ge 0}\) by making use of Markov renewal theory.
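A small simulation sketch (with zero delays and a hypothetical interarrival law, both illustrative simplifications) shows the construction for \(p=2\): the residual-time chain \(R_{n}\) hits \(\mathcal {S}\) exactly at the aggregated arrival epochs.

```python
import random

rng = random.Random(3)
HORIZON = 300

def renewal_epochs(rng, horizon, values=(1, 2, 3)):
    """Zero-delayed renewal sequence with iid interarrival times uniform on `values`."""
    epochs, s = [0], 0
    while s <= horizon:
        s += rng.choice(values)
        epochs.append(s)
    return epochs

channels = [renewal_epochs(rng, HORIZON) for _ in range(2)]

def residual(epochs, n):
    # R_{k,n} = min{ S_{k,l} - n : S_{k,l} >= n, l >= 0 }
    return min(s - n for s in epochs if s >= n)

# Epochs where the bivariate residual chain hits S = {(n1, n2): some n_k = 0} ...
hits = [n for n in range(HORIZON)
        if min(residual(e, n) for e in channels) == 0]
# ... coincide with the superposition of the two renewal sequences.
superposition = sorted({s for e in channels for s in e if s < HORIZON})
```

The equality `hits == superposition` holds by construction, which is precisely the statement that arrivals occur iff \((R_{n})_{n\ge 0}\) visits \(\mathcal {S}\).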

3 Ordinary Random Walks

It is a well-known fact that any ordinary random walk \((S_{n})_{n\ge 0}\) in \(\mathbb {R}\) whose iid increments \(X_{1},X_{2},\ldots \) are not degenerate at 0 exhibits exactly one of the following behaviors:

  • (PD) Positive divergence: \(\lim _{n\rightarrow \infty }\,S_{n}=\infty \) a.s.

  • (ND) Negative divergence: \(\lim _{n\rightarrow \infty }\,S_{n}=-\infty \) a.s.

  • (Osc) Oscillation: \(\liminf _{n\rightarrow \infty }\,S_{n}=-\infty \) and \(\limsup _{n\rightarrow \infty }\,S_{n}=\infty \) a.s.

Let X denote a generic copy of the \(X_{n}\) and suppose that \(\mathbb {E}X\) exists, i.e., \(\mathbb {E}X^{-}<\infty \) or \(\mathbb {E}X^{+}<\infty \). Then

$$\begin{aligned} \textsf {(PD)}&\Leftrightarrow \ \mathbb {E}X^{-}<\mathbb {E}X^{+}\le \infty ,\\ \textsf {(ND)}&\Leftrightarrow \ \mathbb {E}X^{+}<\mathbb {E}X^{-}\le \infty ,\\ \textsf {(Osc)}&\Leftrightarrow \ \mathbb {E}X^{-}=\mathbb {E}X^{+}<\infty ,\text { i.e., }\mathbb {E}X=0, \end{aligned}$$

see Chung [20, Theorem 8.3.4]. Moreover, if \(\mathbb {E}|X|=\infty \), thus \(\mathbb {E}X^{-}\vee \mathbb {E}X^{+}=\infty \), then Kesten [32] showed that the above trichotomy even holds with \(n^{-1}S_{n}\) instead of \(S_{n}\). We refer to this result as Kesten’s trichotomy.

Suppose \(S_{0}=0\) hereafter. Many authors have dealt with the problem of relating the three fundamental types to other quantities of interest related to \((S_{n})_{n\ge 0}\), notably the level x first passage times

$$\begin{aligned}&\sigma ^{>}(x) := \inf \{n\ge 1:S_{n}>x\},\\&\sigma ^{\leqslant }(-x) := \inf \{n\ge 1:S_{n}\le -x\} \end{aligned}$$

for \(x\in \mathbb {R}_{\geqslant }\), the level x last exit time

$$\begin{aligned} \rho (x)\,:=\,\sup \{n\ge 0:S_{n}\le x\} \end{aligned}$$

for \(x\in \mathbb {R}\), the epoch at which the minimum is first attained,

$$\begin{aligned} \sigma _{\min }\ := \inf \left\{ n\ge 1:S_{n}=\inf _{k\ge 1}S_{k}\right\} , \end{aligned}$$

the renewal counting process

$$\begin{aligned} N(x)\,:=\,\sum _{n\ge 1}\mathbf {1}_{\{S_{n}\le x\}} \end{aligned}$$

for \(x\in \mathbb {R}\), and the weighted renewal (or occupation) measures

$$\begin{aligned} \sum _{n\ge 1}n^{\alpha -1}\mathbb {P}(S_{n}\le x)\ =\ \mathbb {E}\left( \sum _{n\ge 1}n^{\alpha -1}\mathbf {1}_{\{S_{n}\le x\}}\right) \end{aligned}$$

for \(x\in \mathbb {R}\) and \(\alpha \ge 0\). The following theorem, treating the positive divergent case, is a blend of results due to Spitzer [46, Theorem 4.1 on p. 331] and Erickson [23] (see also [33, Theorem 2.1 with \(\alpha =0\)]). Let \(A(x):=\mathbb {E}(X^{+}\wedge x)-\mathbb {E}(X^{-}\wedge x)\) for \(x\in \mathbb {R}_{\geqslant }\), and

$$\begin{aligned} J(0)\,:=\,1\quad \text {and}\quad J(x)\,:=\, {\left\{ \begin{array}{ll} \displaystyle {\frac{x}{\mathbb {E}(X^{+}\wedge x)}},\quad \text {if }\mathbb {P}(X>0)>0 \\ x, \ \text {otherwise} \end{array}\right. },\quad x>0. \end{aligned}$$

Theorem 3.1

The following assertions are equivalent:

  (a) \(\lim _{n\rightarrow \infty }S_{n}=\infty \) a.s.

  (b) \(A(x)>0\) for all sufficiently large x and \(\mathbb {E}J(X^{-})<\infty \).

  (c) \(\sum _{n\ge 1}n^{-1}\mathbb {P}(S_{n}\le x)<\infty \) for some/all \(x\in \mathbb {R}_{\geqslant }\).

  (d) \(\mathbb {E}\sigma ^{>}(x)<\infty \) for some/all \(x\in \mathbb {R}_{\geqslant }\).

Remark 3.2

Erickson [23, Corollary 1] actually showed that, if \(\mathbb {E}|X|=\infty \), then the positive divergence of \((S_{n})_{n\ge 0}\) is already entailed by \(\mathbb {E}J(X^{-})<\infty \) alone, i.e., one may dispense with \(A(x)>0\) for all sufficiently large x. On the other hand, if \(\mathbb {E}|X|<\infty \), then \(J(x)=O(x)\) as \(x\rightarrow \infty \) and therefore \(\mathbb {E}J(X^{-})<\infty \). Consequently, (b) reduces to \(0<\mathbb {E}X=\lim _{x\rightarrow \infty }A(x)\) in this case.

Due to the fluctuation-type trichotomy of nontrivial RW, each of \(\mathbb {P}(\sigma _{\min }<\infty )=1\), \(\mathbb {P}(\rho (x)<\infty )=1\) for some/all \(x\in \mathbb {R}_{\geqslant }\), \(\mathbb {P}(N(x)<\infty )=1\) for some/all \(x\in \mathbb {R}_{\geqslant }\), and \(\mathbb {P}(\sigma ^\leqslant (-x)=\infty )>0\) for some/all \(x\in \mathbb {R}_{\geqslant }\) is immediately seen to be also equivalent to the positive divergence of \((S_{n})_{n\ge 0}\).

Plainly, the corresponding result for negative divergent \((S_{n})_{n\ge 0}\) can be read off directly from the previous one by replacing \((S_{n})_{n\ge 0}\) with \((-S_{n})_{n\ge 0}\), and the result for oscillating \((S_{n})_{n\ge 0}\) then follows by contraposition. Let us also note that the function J(x) in (b) may be replaced with \(xA(x)^{-1}\), see [33, proof of Lemma 3.1], since A(x) and \(\mathbb {E}(X^{+}\wedge x)\) are of the same order of magnitude as \(x\rightarrow \infty \). The series occurring in (c) is the harmonic renewal function of \((S_{n})_{n\ge 0}\), and its evaluation at 0 is often called the Spitzer series.

Kesten and Maller [33, Theorem 2.1 and p. 27] generalized the above result, stated as Theorem 3.3 below, by establishing equivalent conditions for the finiteness of moments of \(\sigma ^{>}(x),\rho (x),N(x),\min _{n\ge 0}S_{n}\), and \(\sigma _{\min }\). In the case when \(0<\mathbb {E}X\le \mathbb {E}|X|<\infty \), this had already been done by other authors, most notably Gut [28] and Janson [31], the latter providing a comprehensive result with a total of ten equivalences and economical proofs. For a good survey of the relevant literature, the reader is referred to Gut’s monograph [29, Ch. 3].

Theorem 3.3

Given a positive divergent RW \((S_{n})_{n\ge 0}\), the following conditions are equivalent for \(\alpha >0\):

  (a) \(\mathbb {E}J(X^{-})^{1+\alpha }<\infty \).

  (b) \(\mathbb {E}\rho (x)^{\alpha }<\infty \) for some/all \(x\in \mathbb {R}_{\geqslant }\).

  (c) \(\mathbb {E}\sigma _{\min }^{\alpha }<\infty \).

  (d) \(\mathbb {E}\sigma ^{\leqslant }(-x)^{\alpha }\mathbf {1}_{\{\sigma ^{\leqslant }(-x)<\infty \}}<\infty \) for some/all \(x\in \mathbb {R}_{\geqslant }\).

  (e) \(\sum _{n\ge 1}n^{\alpha -1}\mathbb {P}(S_{n}\le x)<\infty \) for some/all \(x\in \mathbb {R}_{\geqslant }\).

  (f) \(\mathbb {E}N(x)^{\alpha }<\infty \) for some/all \(x\in \mathbb {R}_{\geqslant }\).

  (g) \(\mathbb {E}\sigma ^{>}(x)^{1+\alpha }<\infty \) for some/all \(x\in \mathbb {R}_{\geqslant }\).

Putting \(S_{n}^{*}:=\max _{0\le k\le n}S_{k}\) for \(n\in \mathbb {N}_{0}\), Kesten and Maller [33, Theorem 2.2] further showed that

$$\begin{aligned} \sum _{n\ge 1}\frac{1}{n}\,\mathbb {P}\left( S_{n}^{*}\le x\right) \,\asymp \,\sum _{n\ge 1}\frac{1}{n}\,\mathbb {P}(S_{n}\le x)\,\asymp \,\log J(x) \end{aligned}$$
(3.1)

holds under the conditions of Theorem 3.1, and

$$\begin{aligned} \mathbb {E}\rho (x)^{\alpha }\,\asymp \,\mathbb {E}N(x)^{\alpha }\,\asymp \,\sum _{n\ge 1}n^{\alpha -1}\,\mathbb {P}\left( S_{n}^{*}\le x\right) \,\asymp \,\sum _{n\ge 1}n^{\alpha -1}\,\mathbb {P}(S_{n}\le x)\,\asymp \,J(x)^{\alpha } \end{aligned}$$
(3.2)

for \(\alpha >0\) under the conditions of Theorem 3.3. Here \(f(x)\asymp g(x)\) means that f and g are of the same order of magnitude as \(x\rightarrow \infty \), viz.

$$\begin{aligned} 0<\ \liminf _{x\rightarrow \infty }\frac{f(x)}{g(x)}\quad \text {and}\quad \limsup _{x\rightarrow \infty }\frac{f(x)}{g(x)}\ <\ \infty . \end{aligned}$$
(3.3)

We also write \(f(x)\gtrsim g(x)\) and \(f(x)\lesssim g(x)\) as shorthand for the left and the right relation in (3.3), respectively. Finally, given two expressions A, B (series or integrals), \(A\asymp B\), \(A\lesssim B\) and \(A\gtrsim B\) will be used if, for some \(c\in \mathbb {R}_{>}\), \(c^{-1}B\le A\le cB\), \(A\le cB\) and \(A\ge cB\), respectively. Note that \(\{\sigma ^{>}(x)>n\}=\{S_{n}^{*}\le x\}\) for all \(x\in \mathbb {R}_{\geqslant }\) and \(n\in \mathbb {N}_{0}\) implies

$$\begin{aligned} \sum _{n\ge 1}n^{\alpha -1}\,\mathbb {P}\left( S_{n}^{*}\le x\right) \,\asymp \, {\left\{ \begin{array}{ll} \mathbb {E}\log \sigma ^{>}(x),&{}\text {if }\alpha =0,\\ \mathbb {E}\sigma ^{>}(x)^{\alpha },&{}\text {if }\alpha >0. \end{array}\right. } \end{aligned}$$

Regarding the finiteness of \(\mathbb {E}|\min _{n\ge 0}S_{n}|^{\alpha }\) for \(\alpha >0\), an equivalent condition of similar type as Theorem 3.3(a) has also been given in [33, Proposition 4.1], but is actually stronger unless \(\mathbb {E}X^{+}\) is finite and thus \(\mathbb {E}X>0\).

Theorem 3.4

Given a positive divergent RW \((S_{n})_{n\ge 0}\), the following conditions are equivalent for \(\alpha >0\):

  1. (a)

    \(\mathbb {E}(X^{-})^{\alpha }J(X^{-})<\infty \).

  2. (b)

    \(\mathbb {E}|\min _{n\ge 0}S_{n}|^{\alpha }<\infty \).

  3. (c)

    \(\mathbb {E}|S_{\sigma ^{\leqslant }(-x)}|^{\alpha }\mathbf {1}_{\{\sigma ^{\leqslant }(-x)<\infty \}}<\infty \) for some/all \(x\in \mathbb {R}_{\geqslant }\).

4 Preliminaries

We return to the Markov-modulated setup and suppose for the rest of this article that a MRW \((M_{n},S_{n})_{n\ge 0}\) is given whose discrete driving chain is positive recurrent with stationary distribution \(\pi =(\pi _{i})_{i\in \mathcal {S}}\). For any distribution \(\lambda =(\lambda _{i})_{i\in \mathcal {S}}\) on \(\mathcal {S}\), put \(\mathbb {P}_{\lambda }:=\sum _{i\in \mathcal {S}}\lambda _{i}\mathbb {P}_{i}\). Since \(\pi _{i}=(\mathbb {E}_{i}\tau (i))^{-1}>0\) for all \(i\in \mathcal {S}\), it follows that “\(\,\mathbb {P}_{\pi }\)-a.s.” means the same as “\(\,\mathbb {P}_{i}\)-a.s. for all \(i\in \mathcal {S}\,\),” and this will henceforth be abbreviated by “a.s.”
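As an aside for readers who wish to experiment, the identity \(\pi _{i}=(\mathbb {E}_{i}\tau (i))^{-1}\) just used is easy to check numerically. The following sketch works with a hypothetical two-state driving chain (the matrix and all numbers are illustrative, not taken from the text) and compares \(\pi _{0}\) with the reciprocal of a Monte Carlo estimate of \(\mathbb {E}_{0}\tau (0)\):

```python
import random

# Hypothetical two-state driving chain (all numbers illustrative).
P = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.4, 1: 0.6}}
a, b = P[0][1], P[1][0]
pi = {0: b / (a + b), 1: a / (a + b)}   # stationary distribution of P

def step(i, rng):
    """One transition of the driving chain from state i."""
    return 0 if rng.random() < P[i][0] else 1

def mean_return_time(i, n_cycles, rng):
    """Monte Carlo estimate of E_i tau(i), the mean return time to i."""
    total = 0
    for _ in range(n_cycles):
        state, t = step(i, rng), 1
        while state != i:
            state, t = step(state, rng), t + 1
        total += t
    return total / n_cycles

rng = random.Random(1)
est = mean_return_time(0, 20000, rng)
print(pi[0], 1 / est)   # pi_0 versus 1 / (estimated E_0 tau(0))
```

The two printed values agree up to Monte Carlo error.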

Due to its particular Markovian structure, \((M_{n},X_{n})_{n\ge 1}\) forms a stationary sequence under \(\mathbb {P}_{\pi }\), and for any measurable function \(f:\mathcal {S}\times \mathbb {R}\rightarrow \mathbb {R}\) satisfying

$$\begin{aligned} \mathbb {E}_{\pi }f^{-}(M_{1},X_{1})<\infty \quad \text {or}\quad \mathbb {E}_{\pi }f^{+}(M_{1},X_{1})<\infty \end{aligned}$$

we have the useful occupation measure formula

$$\begin{aligned} \mathbb {E}_{\pi }f(M_{1},X_{1}) = \frac{1}{\mathbb {E}_{i}\tau (i)}\mathbb {E}_{i}\left( \sum _{n=1}^{\tau (i)}f(M_{n},X_{n})\right) , \end{aligned}$$
(4.1)

valid for any \(i\in \mathcal {S}\). As a particular consequence,

$$\begin{aligned} \mathbb {E}_{\pi }X_{1} = \frac{1}{\mathbb {E}_{i}\tau (i)}\mathbb {E}_{i}S_{\tau (i)}\ =\ \pi _{i}\,\mathbb {E}_{i}S_{\tau (i)} \end{aligned}$$
(4.2)

whenever \(\mathbb {E}_{\pi }X_{1}\) exists, i.e., \((\mathbb {E}_{\pi }X_{1}^{-})\wedge (\mathbb {E}_{\pi }X_{1}^{+})<\infty \). Note, however, that the right-hand side in (4.2) may be finite for all \(i\in \mathcal {S}\) even if \(\mathbb {E}_{\pi }X_{1}\) does not exist. In other words, \(\mathbb {P}_{\pi }\)-integrability of the \(S_{\tau (i)}\) does not generally imply the very same for \(X_{1}\) (see Example 10.2).
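The occupation measure formula can also be checked in closed form for a small chain with deterministic transition increments. In the following sketch (a hypothetical two-state chain; all probabilities and increments are illustrative), both sides of (4.2) with \(i=0\) are computed exactly:

```python
# Hypothetical two-state driving chain with deterministic increments
# x[i][j] attached to each transition (i, j); all numbers illustrative.
P = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.4, 1: 0.6}}
a, b = P[0][1], P[1][0]
pi = {0: b / (a + b), 1: a / (a + b)}          # stationary distribution
x = {0: {0: 0.5, 1: -1.0}, 1: {0: 2.0, 1: -0.25}}

# Left-hand side of (4.2): E_pi X_1 = sum_i pi_i sum_j p_ij x_ij.
E_pi_X = sum(pi[i] * sum(P[i][j] * x[i][j] for j in (0, 1)) for i in (0, 1))

# Right-hand side: pi_0 * E_0 S_{tau(0)}. With w the expected additive
# functional accumulated from state 1 until the chain hits 0,
# w = p10*x10 + p11*(x11 + w), i.e. w = (p10*x10 + p11*x11) / p10.
w = (P[1][0] * x[1][0] + P[1][1] * x[1][1]) / P[1][0]
E0_S_tau0 = P[0][0] * x[0][0] + P[0][1] * (x[0][1] + w)

print(E_pi_X, pi[0] * E0_S_tau0)   # the two sides of (4.2) agree
```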

Ladder variables are well known to form an important tool in the analysis of random walks. We define

$$\begin{aligned} \sigma ^{>}= \sigma ^{>}_{1}\,:=\,\inf \{n\ge 1:S_{n}>0\},\quad \sigma ^{\leqslant }\ =\sigma ^{\leqslant }_{1}\,:=\,\inf \{n\ge 1:S_{n}\le 0\}, \end{aligned}$$

thus \(\sigma ^{>}=\sigma ^{>}(0)\) and \(\sigma ^{\leqslant }=\sigma ^{\leqslant }(0)\), and then recursively for \(n\ge 2\)

$$\begin{aligned} \sigma _{n}^{>}\,:=\,\inf \left\{ k>\sigma ^{>}_{n-1}:S_{k}>S_{\sigma ^{>}_{n-1}}\right\} ,\quad \sigma _{n}^{\leqslant }\,:=\,\inf \left\{ k>\sigma ^{\leqslant }_{n-1}:S_{k}\le S_{\sigma ^{\leqslant }_{n-1}}\right\} . \end{aligned}$$

We further put \(M_{n}^{>}:=M_{\sigma _{n}^{>}}\mathbf {1}_{\{\sigma _{n}^{>}<\infty \}}+M_{\sigma _{*}^{>}}\mathbf {1}_{\{\sigma _{n}^{>}=\infty \}}\), where \(\sigma _{*}^{>}:=\sup \{\sigma _{n}^{>}:\sigma _{n}^{>}<\infty \}\).
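In computational terms, the recursive definitions of \(\sigma _{n}^{>}\) and \(\sigma _{n}^{\leqslant }\) amount to a single pass along the path, recording each new strict ascent above the running record (resp. each weak descent below it). A minimal sketch on a hand-picked path (the helper `ladder_epochs` and the sample path are illustrative):

```python
def ladder_epochs(S, strict_up=True):
    """Strictly ascending (sigma_n^>) or weakly descending (sigma_n^<=)
    ladder epochs of a finite path S with S[0] = 0."""
    epochs, level = [], S[0]
    for k in range(1, len(S)):
        if (S[k] > level) if strict_up else (S[k] <= level):
            epochs.append(k)
            level = S[k]       # new ladder height becomes the record
    return epochs

S = [0.0, 1.0, -0.5, 2.0, 1.5, 3.0]
asc = ladder_epochs(S, strict_up=True)    # strictly ascending epochs
desc = ladder_epochs(S, strict_up=False)  # weakly descending epochs
print(asc, desc)
```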

The dual of \((M_{n},S_{n})_{n\ge 0}\), denoted \(({}^{\#}M_{n},{}^{\#}S_{n})_{n\ge 0}\) hereafter, is again a MRW and its driving chain \(({}^{\#}M_{n})_{n\ge 0}\) the time reversal of \((M_{n})_{n\ge 0}\) under \(\mathbb {P}_{\pi }\) with transition matrix

$$\begin{aligned} {}^{\#}\mathbf {P}\ =\ \left( \frac{\pi _{j}p_{ji}}{\pi _{i}}\right) _{i,j\in \mathcal {S}}. \end{aligned}$$

Moreover,

$$\begin{aligned} \mathbb {P}\left( {}^{\#}X_{1}\in \cdot \,|{}^{\#}M_{0}=i,{}^{\#}M_{1}=j\right) = K_{ji} \end{aligned}$$

for all \(i,j\in \mathcal {S}\). More generally, the duality relation

$$\begin{aligned} \begin{aligned} \pi _{i}\,\mathbb {P}_{i}&((M_{k},X_{k})_{0\le k\le n}\in \cdot \,,M_{n}=j)\\&=\ \pi _{j}\,\mathbb {P}_{j}\left( \left( {}^{\#}M_{n-k},{}^{\#}X_{n-k}\right) _{0\le k\le n}\in \cdot \,,{}^{\#}M_{n}=i\right) \end{aligned} \end{aligned}$$
(4.3)

holds true for all \(i,j\in \mathcal {S}\) and \(n\in \mathbb {N}_{0}\). Considering a doubly infinite extension \((M_{n},X_{n})_{n\in \mathbb {Z}}\) of the stationary unilateral chain \((M_{n},X_{n})_{n\ge 1}\) under \(\mathbb {P}_{\pi }\) and putting \(S_{0}:=0\) and \(S_{n}:=S_{n-1}+X_{n}\) for \(n\ne 0\), thus

$$\begin{aligned} S_{n}\,:=\, {\left\{ \begin{array}{ll} X_{1}+\cdots +X_{n},&{}\text {if }n\ge 1,\\ 0, \text { if }n=0, &{}\\ -X_{0}-\cdots -X_{n+1},&{}\text {if }n\le -1, \end{array}\right. } \end{aligned}$$

one can easily verify that the laws of the dual and of \((M_{-n},-S_{-n})_{n\ge 0}\) are the same under \(\mathbb {P}_{\pi }\), thus \(({}^{\#}M_{n},{}^{\#}X_{n})_{n\ge 1}\) has the law of \((M_{-n},X_{-n+1})_{n\ge 1}\) under \(\mathbb {P}_{\pi }\). Let \({}^{\#}\sigma _{n}^{>},{}^{\#}\sigma _{n}^{\leqslant }\) denote the counterparts of \(\sigma _{n}^{>},\sigma _{n}^{\leqslant }\) for \(({}^{\#}M_{n},{}^{\#}S_{n})_{n\ge 0}\). For later use, we quote from [7] the following result about the positive recurrence of the ladder chain \((M_{n}^{>})_{n\ge 0}\).
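The matrix \({}^{\#}\mathbf {P}\) is readily computed for concrete examples. The sketch below takes a deterministic 3-cycle (chosen because it is as non-reversible as possible) and checks that \({}^{\#}\mathbf {P}\) is again stochastic with the same stationary distribution; here \({}^{\#}\mathbf {P}\) is the reversed cycle:

```python
from fractions import Fraction

# Deterministic 3-cycle 0 -> 1 -> 2 -> 0, with uniform stationary law.
P = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]
pi = [Fraction(1, 3)] * 3

# Time reversal under P_pi: #p_ij = pi_j * p_ji / pi_i.
Pd = [[pi[j] * P[j][i] / pi[i] for j in range(3)] for i in range(3)]

rows = [sum(row) for row in Pd]   # each row of #P sums to 1
pi_dual = [sum(pi[i] * Pd[i][j] for i in range(3)) for j in range(3)]
print(Pd)                         # the reversed cycle 0 -> 2 -> 1 -> 0
```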

Proposition 4.1

Given a MRW \((M_{n},S_{n})_{n\ge 0}\) with positive recurrent driving chain, suppose that the dual sequence \(({}^{\#}S_{n})_{n\ge 0}\) is positive divergent, that is \({}^{\#}S_{n}\rightarrow \infty \) a.s. Then the ladder chain \((M_{n}^{>})_{n\ge 0}\) has the unique stationary law

$$\begin{aligned} \pi _{i}^{>}\ =\ \frac{\pi _{i}\mathbb {P}_{i}({}^{\#}\sigma ^{\leqslant }=\infty )}{\mathbb {P}_{\pi }({}^{\#}\sigma ^{\leqslant }=\infty )},\quad i\in \mathcal {S}, \end{aligned}$$

and is positive recurrent on \(\mathcal {S}^{>}=\{i:\pi _{i}^{>}>0\}=\{i:\mathbb {P}_{i}({}^{\#}\sigma ^{\leqslant }=\infty )>0\}\).

Later, we will also need the strictly ascending ladder epochs of \((S_{\tau _{n}(i)})_{n\ge 0}\), denoted \((\tau ^{>}_{n}(i))_{n\ge 0}\) and defined by \(\tau ^{>}_{0}(i):=0\), \(\tau ^{>}_{n}(i):=\tau _{\zeta _{n}}(i)\) for \(n\ge 1\), where

$$\begin{aligned} \zeta _{1}\ =\ \zeta _{1}(i)\ :=\ \inf \{k\ge 1:S_{\tau _{k}(i)}>0\} \end{aligned}$$
(4.4)

and

$$\begin{aligned} \zeta _{n}\ =\ \zeta _{n}(i) := \inf \{k>\zeta _{n-1}(i):S_{\tau _{k}(i)}>S_{\tau ^{>}_{n-1}(i)}\} \end{aligned}$$

for \(n\ge 2\). Put \(\tau ^{>}(i):=\tau ^{>}_{1}(i)\). Finally, let

$$\begin{aligned} \begin{aligned} \nu (x)&=\ \nu (i,x)\ :=\ \inf \{n\ge 1:S_{\tau _{n}(i)}>x\},\\ \nu ^{>}(x)&=\ \nu ^{>}(i,x)\ :=\ \inf \{n\ge 1:S_{\tau ^{>}_{n}(i)}>x\} \end{aligned} \end{aligned}$$
(4.5)

for \(x\in \mathbb {R}_{\geqslant }\) and notice that \(\nu (0)=\zeta _{1}\), \(\nu (x)=\zeta _{\nu ^{>}(x)}\) and \(\sigma ^{>}(x)\le \tau _{\nu (x)}(i)=\tau ^{>}_{\nu ^{>}(x)}(i)\) a.s.

5 Null-Homology and Classification of Markov Random Walks

A natural starting point for our analysis is to provide the basic classification of MRW as to their divergence type. Unlike ordinary RW, which can be bounded only if their increments are a.s. 0 (trivial case), there is an infinite subclass of MRW exhibiting this kind of behavior. After a short discussion in this section, those will therefore be ruled out from the subsequent analysis. For the remaining ones, the same trichotomy as for ordinary RW holds true (Proposition 5.4). This was shown by Newbould [40, Theorem 1] for finite \(\mathcal {S}\) and by Prabhu et al. [44, Theorem 7] in the general situation.

Following Lalley [35], we call a MRW \((M_{n},S_{n})_{n\ge 0}\) null-homologous hereafter if there exists a function \(g:\mathcal {S}\rightarrow \mathbb {R}\) such that

$$\begin{aligned} X_{n}\ =\ g(M_{n})-g(M_{n-1})\quad \mathbb {P}_{\pi }\text {-a.s.} \end{aligned}$$
(5.1)

or, equivalently,

$$\begin{aligned} S_{n}\ =\ g(M_{n})-g(M_{0})\quad \mathbb {P}_{\pi }\text {-a.s.} \end{aligned}$$
(5.2)

for all \(n\in \mathbb {N}\). Otherwise, the MRW is called nontrivial. Obviously, \((S_{n})_{n\ge 0}\) fluctuates within a compact subset of \(\mathbb {R}\) if g is a bounded function. The following two lemmata show that all embedded RW \((S_{\tau _{n}(i)})_{n\ge 0}\), \(i\in \mathcal {S}\), of a null-homologous MRW are trivial and that the finiteness of one of \(\liminf _{n\rightarrow \infty }S_{n}\) or \(\limsup _{n\rightarrow \infty }S_{n}\) entails null-homology. A complete classification of null-homologous MRW as to their asymptotic behavior is then very easy and stated without proof as Proposition 5.3. Regarding the first of the two lemmata, the equivalence of (b) and (c) was again obtained by Prabhu et al. [44, Theorem 2].
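Null-homology is easy to visualize by simulation: if the increments are built from a potential g as in (5.1), the path only moves on the set \(\{g(j)-g(M_{0}):j\in \mathcal {S}\}\). The sketch below (a hypothetical two-state chain and function g, all values illustrative) checks (5.2) together with the vanishing of the embedded walk at the return times to state 0:

```python
import random

# Hypothetical two-state chain and potential g; increments are
# X_n := g(M_n) - g(M_{n-1}), so (5.1) holds by construction.
g = {0: 0.0, 1: 2.5}

rng = random.Random(11)
path = [0]
for _ in range(50):
    path.append(0 if rng.random() < 0.5 else 1)

S = [0.0]
for n in range(1, len(path)):
    S.append(S[-1] + g[path[n]] - g[path[n - 1]])   # telescoping sums

# (5.2): S_n = g(M_n) - g(M_0) for all n.
ok_52 = all(abs(S[n] - (g[path[n]] - g[path[0]])) < 1e-12
            for n in range(len(S)))
# Lemma 5.1: the embedded walk at return times to 0 has zero increments.
returns = [n for n in range(1, len(path)) if path[n] == 0]
ok_embedded = all(abs(S[n]) < 1e-12 for n in returns)
print(ok_52, ok_embedded)
```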

Lemma 5.1

Given a MRW \((M_{n},S_{n})_{n\ge 0}\), the following assertions are equivalent:

  1. (a)

    \((M_{n},S_{n})_{n\ge 0}\) is null-homologous.

  2. (b)

    \((S_{\tau _{n}(i)})_{n\ge 0}\) has zero increments for some \(i\in \mathcal {S}\).

  3. (c)

    \((S_{\tau _{n}(i)})_{n\ge 0}\) has zero increments for all \(i\in \mathcal {S}\).

Proof

Only “(c)\(\Rightarrow \)(a)” remains to be proved, for “(a)\(\Rightarrow \)(c)” is trivial. Let \(\psi _{ij}\) be the characteristic function of \(S_{\tau (j)}\) under \(\mathbb {P}_{i}\) for \(i,j\in \mathcal {S}\). Since \((S_{\tau _{n}(i)})_{n\ge 0}\) has zero increments, we easily find that

$$\begin{aligned} \psi _{ii_{1}}(t)\psi _{i_{1}i_{2}}(t)\cdots \psi _{i_{n-1}i_{n}}(t)\psi _{i_{n}i}(t)\ =\ 1 \end{aligned}$$

for all \(t\in \mathbb {R}\), \(n\ge 2\) and \(i_{1},\ldots ,i_{n}\in \mathcal {S}\), in particular \(|\psi _{ij}|=|\psi _{ij}\psi _{ji}|\equiv 1\) for all \(i,j\in \mathcal {S}\). Therefore, with \(\mathbf {i}:=\sqrt{-1}\), we have \(\psi _{ij}(t)=e^{\mathbf {i}h(i,j)t}\) for some function \(h:\mathcal {S}^{2}\rightarrow \mathbb {R}\) and

$$\begin{aligned} e^{\mathbf {i}h(j,i)t}\ =\ \psi _{ji}(t)\ =\ \overline{\psi _{ij}(t)}\ =\ e^{-\mathbf {i}h(i,j)t}, \end{aligned}$$

thus \(h(j,i)=-h(i,j)\) and particularly \(h(i,i)=0\) for all \(i,j\in \mathcal {S}\). Finally, fix any \(i\in \mathcal {S}\), define \(g(j):=h(i,j)\) for \(j\in \mathcal {S}\) and use \(\psi _{ij}\psi _{jk}\psi _{ki}\equiv 1\) to infer

$$\begin{aligned} 0= & {} h(i,j)+h(j,k)+h(k,i) = h(i,j)+h(j,k)-h(i,k) \\= & {} g(j)+h(j,k)-g(k), \end{aligned}$$

i.e., \(h(j,k)=g(k)-g(j)\) for all \(j,k\in \mathcal {S}\). But the latter means that \(S_{\tau (k)}=g(k)-g(j)\,\mathbb {P}_{j}\)-a.s., and therefore,

$$\begin{aligned} \mathbb {P}_{j}(X_{1}=g(M_{1})-g(M_{0}))&=\ \sum _{k\in \mathcal {S}}\mathbb {P}_{j}(X_{1}=g(k)-g(j),\,M_{1}=k)\\&=\ \sum _{k\in \mathcal {S}}\mathbb {P}_{j}(S_{\tau (k)}=g(k)-g(j),\tau (k)=1)\\&=\ \sum _{k\in \mathcal {S}}\mathbb {P}_{j}(\tau (k)=1)\ =\ 1 \end{aligned}$$

for all \(j\in \mathcal {S}\), which shows that \((M_{n},S_{n})_{n\ge 0}\) is indeed null-homologous.\(\square \)

Lemma 5.2

If \(\liminf _{n\rightarrow \infty }\,S_{n}\) or \(\limsup _{n\rightarrow \infty }\,S_{n}\) is \(\mathbb {P}_{i}\)-a.s. finite for some \(i\in \mathcal {S}\), then \((M_{n},S_{n})_{n\ge 0}\) is null-homologous.

Proof

First observe that, if Y is a copy of \(\liminf _{n\rightarrow \infty }S_{n}\) or \(\limsup _{n\rightarrow \infty }S_{n}\) and is independent of \(S_{\tau (i)}\) under \(\mathbb {P}_{i}\) for each i, then the stochastic fixed point equation

$$\begin{aligned} Y\ \mathop {=}\limits ^{d}\ S_{\tau (i)}+Y\quad \text {under }\mathbb {P}_{i} \end{aligned}$$
(5.3)

holds for all \(i\in \mathcal {S}\), where \(\mathop {=}\limits ^{d}\) means equality in law. This follows from

$$\begin{aligned} \liminf _{n\rightarrow \infty }\,S_{n}\ =\ S_{\tau (i)}\ +\ \liminf _{n\rightarrow \infty }\,(S_{n}-S_{\tau (i)})\quad \mathbb {P}_{i}\text {-a.s.} \end{aligned}$$

and a similar equation for \(\limsup _{n\rightarrow \infty }S_{n}\).

Now if \(\liminf _{n\rightarrow \infty }\,S_{n}\) or \(\limsup _{n\rightarrow \infty }\,S_{n}\) is a.s. real-valued with respect to some \(\mathbb {P}_{i}\), then (5.3) has a real-valued solution which in turn entails \(S_{\tau (i)}=0\) \(\mathbb {P}_{i}\)-a.s. as one can easily check by using characteristic functions. By Lemma 5.1, we infer that \((M_{n},S_{n})_{n\ge 0}\) is null-homologous. \(\square \)

Here is the classification result for null-homologous MRW, the straightforward proof of which can be omitted.

Proposition 5.3

If \((M_{n},S_{n})_{n\ge 0}\) is null-homologous with g as in (5.1), then exactly one of the following five alternatives holds true:

  • (NH-1) \(g\equiv 0\) and \(S_{n}=0\) a.s. for all \(n\ge 1\).

  • (NH-2) \(0\ne \sup _{i\in \mathcal {S}}|g(i)|<\infty \) and

    $$\begin{aligned} -\infty<\ \liminf _{n\rightarrow \infty }\,S_{n}\ \le \ \limsup _{n\rightarrow \infty }\,S_{n} <\ \infty \quad \mathbb {P}_{\pi }\text {-a.s.} \end{aligned}$$
  • (NH-3) \(-\infty =\inf _{i\in \mathcal {S}}g(i)<\sup _{i\in \mathcal {S}}g(i)<\infty \) and

    $$\begin{aligned} \liminf _{n\rightarrow \infty }\,S_{n}\ =\ -\infty \quad \text {and}\quad \limsup _{n\rightarrow \infty }\,S_{n}\ \in \ \mathbb {R}\quad \mathbb {P}_{\pi }\text {-a.s.} \end{aligned}$$
  • (NH-4) \(-\infty<\inf _{i\in \mathcal {S}}g(i)<\sup _{i\in \mathcal {S}}g(i)=\infty \) and

    $$\begin{aligned} \liminf _{n\rightarrow \infty }\,S_{n}\ \in \ \mathbb {R}\quad \text {and}\quad \limsup _{n\rightarrow \infty }\,S_{n}\ =\ \infty \quad \mathbb {P}_{\pi }\text {-a.s.} \end{aligned}$$
  • (NH-5) \(\inf _{i\in \mathcal {S}}g(i)=-\infty \), \(\sup _{i\in \mathcal {S}}g(i)=\infty \) and

    $$\begin{aligned} \liminf _{n\rightarrow \infty }\,S_{n}\ =\ -\infty \quad \text {and}\quad \limsup _{n\rightarrow \infty }\,S_{n}\ =\ \infty \quad \mathbb {P}_{\pi }\text {-a.s.} \end{aligned}$$

Moreover, whenever \(\sigma _{n}^{\dagger }<\infty \) a.s. for some \(\dagger \in \{>,\leqslant \}\) and all \(n\ge 0\), the associated ladder chain \((M_{n}^{\dagger })_{n\ge 0}\) is transient, that is

$$\begin{aligned} \mathbb {P}_{i}\left( M_{n}^{\dagger }=i\text { infinitely often}\right) <\ 1 \end{aligned}$$

for all \(i\in \mathcal {S}\).

Notice that alternatives (NH-3)–(NH-5) are only possible if \(\mathcal {S}\) is infinite. We also point out that the null-homology of \((M_{n},S_{n})_{n\ge 0}\) (with g as in (5.1)) and of its dual (with \(-g\)) are equivalent, as follows immediately from the fact that \(({}^{\#}M_{n},{}^{\#}X_{n})_{n\ge 1}\) and \((M_{-n},X_{-n+1})_{n\ge 1}\) are equal in law under \(\mathbb {P}_{\pi }\) (see Sect. 4).

With the help of Lemma 5.2, it is easy to prove the following zero-one law, which in turn entails the announced trichotomy for nontrivial MRW, Proposition 5.4 below. Recall that \(S_{n}^{*}=\max _{0\le k\le n}S_{k}\) for \(n\in \mathbb {N}_{0}\).

Proposition 5.4

For the additive part of a nontrivial MRW \((M_{n},S_{n})_{n\ge 0}\), exactly one of the three alternatives (PD), (ND), or (Osc) occurs a.s.

Proof

We first show that \(Y\in \{\limsup _{n\rightarrow \infty }S_{n},\liminf _{n\rightarrow \infty }S_{n}\}\) satisfies

$$\begin{aligned} \mathbb {P}_{i}(Y=\infty )\ =\ 1-\mathbb {P}_{i}(Y=-\infty )\ \in \ \{0,1\} \end{aligned}$$

for all \(i\in \mathcal {S}\). Fixing any i, it suffices to consider \(Y=\limsup _{n\rightarrow \infty }S_{n}\). Put \(\kappa :=\mathbb {P}_{i}(Y=\infty )\) and \(\tau _{n}^{*}(i):=\inf \{\tau _{k}(i):\tau _{k}(i)\ge n\}\) for \(n\ge 0\). Then

$$\begin{aligned} \kappa&= \lim _{m\rightarrow \infty }\,\mathbb {P}_{i}\left( Y=\infty ,\,S_{m}^{*}>t\right) \\&= \lim _{m\rightarrow \infty }\,\mathbb {P}_{i}\left( \limsup _{n\rightarrow \infty }\,\left( S_{n}-S_{\tau _{m}^{*}(i)}\right) =\infty ,\,S_{m}^{*}>t\right) \\&= \mathbb {P}_{i}(Y=\infty )\,\lim _{m\rightarrow \infty }\,\mathbb {P}_{i}\left( S_{m}^{*}>t\right) \\&= \kappa \,\mathbb {P}_{i}\left( \sup _{n\ge 0}\,S_{n}>t\right) \end{aligned}$$

for all \(t\in \mathbb {R}_{>}\). Hence, either \(\kappa =0\), or \(\sup _{n\ge 0}S_{n}=\infty \,\mathbb {P}_{i}\)-a.s. and thus \(\kappa =1\).

Now if \((M_{n},S_{n})_{n\ge 0}\) is nontrivial, then Lemma 5.2 implies that any of \(\liminf _{n\rightarrow \infty }S_{n}\) and \(\limsup _{n\rightarrow \infty }S_{n}\) equals one of the values \(-\infty \) or \(+\infty \) with probability one under \(\mathbb {P}_{\pi }\). Hence, if (ND) and (Osc) fail to hold, (PD) remains as the only alternative.\(\square \)

An analog of Kesten’s trichotomy in the case when \(\mathbb {E}_{i}|S_{\tau (i)}|=\infty \) for some \(i\in \mathcal {S}\) can also be given and will be proved in Sect. 10.2. On the other hand, Example 7.2 will show that the more natural, but weaker condition \(\mathbb {E}_{\pi }|X_{1}|=\infty \) is not sufficient for this result.

Theorem 5.5

If \(\mathbb {E}_{i}|S_{\tau (i)}|=\infty \) for some/all \(i\in \mathcal {S}\), then exactly one of the following three alternatives holds true:

  • (PD+) \(\lim _{n\rightarrow \infty }\,n^{-1}S_{n}=\infty \) a.s.

  • (ND+) \(\lim _{n\rightarrow \infty }\,n^{-1}S_{n}=-\infty \) a.s.

  • (Osc+) \(\liminf _{n\rightarrow \infty }\,n^{-1}S_{n}=-\infty \) and \(\limsup _{n\rightarrow \infty }\,n^{-1}S_{n}=\infty \) a.s.

6 Main Results

Our main results are generalizations of Theorems 3.1, 3.3 and 3.4 to nontrivial MRW \((M_{n},S_{n})_{n\ge 0}\). In order to state them, a number of quantities must be introduced. For \(i\in \mathcal {S}\), \(n\in \mathbb {N}\) and \(x\in \mathbb {R}_{\geqslant }\), put

$$\begin{aligned}&\quad A_{i}(x)\,:=\,\mathbb {E}_{i}\left( S_{\tau (i)}^{+}\wedge x\right) -\mathbb {E}_{i}\left( S_{\tau (i)}^{-}\wedge x\right) ,\\&\quad J_{i}(0)\,:=\,1\quad \text {and}\quad J_{i}(x)\,:=\, {\left\{ \begin{array}{ll} \displaystyle {\frac{x}{\mathbb {E}_{i}\left( S_{\tau (i)}^{+}\wedge x\right) }},&{}\text {if }{\mathbb {P}_{i}(S_{\tau (i)}> 0)>0} \\ x, \ \text {otherwise} &{} \end{array}\right. },\quad x>0,\\&\quad D_{n}^{i}\,:=\,\max _{\tau _{n-1}(i)<k\le \tau _{n}(i)}\left( S_{k}-S_{\tau _{n-1}(i)}\right) ^{-} \end{aligned}$$

and let \(D^{i}\) be a copy of the iid \(D_{n}^{i}\) under \(\mathbb {P}_{i}\) which is independent of all other occurring random variables. Note that the \(D_{n}^{i}\) describe the maximal downward excursions within the cycles of the driving chain determined by successive visits of state i. Their common law, thus \(\mathbb {P}_{i}(D^{i}\in \cdot )\), is one of the relevant excursion measures that appear in our results, but we also need the measures \(\mathbb {V}_{i}^{\alpha }\), defined on \(\mathbb {R}_{>}\) by

$$\begin{aligned} \mathbb {V}_{i}^{\alpha }((x,\infty ))\,:=\,\mathbb {E}_{i}\left( \sum _{n=1}^{\tau (i)}\mathbf {1}_{\{S_{n}^{-}>x\}}\right) ^{\alpha },\quad x\in \mathbb {R}_{\geqslant }, \end{aligned}$$
(6.1)

for \(\alpha >0\) and \(i\in \mathcal {S}\). They obviously satisfy

$$\begin{aligned} \mathbb {P}_{i}(D^{i}>x)\ \le \ \mathbb {V}_{i}^{\alpha }((x,\infty ))\ \le \ \mathbb {E}_{i}\big (\tau (i)^{\alpha }\mathbf {1}_{\{D^{i}>x\}}\big ) \end{aligned}$$
(6.2)

for all \(x\in \mathbb {R}_{\geqslant }\) and have therefore heavier tails than \(D^{i}\) under \(\mathbb {P}_{i}\). In the case \(\alpha =1\), we simply write \(\mathbb {V}_{i}\) for \(\mathbb {V}_{i}^{1}\).
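The bounds (6.2) with \(\alpha =1\) hold pathwise cycle by cycle, since \(\mathbf {1}_{\{D^{i}>x\}}\le \sum _{n=1}^{\tau (i)}\mathbf {1}_{\{S_{n}^{-}>x\}}\le \tau (i)\mathbf {1}_{\{D^{i}>x\}}\); accordingly, Monte Carlo estimates of the three quantities are ordered automatically, as the following sketch illustrates (the chain, the increments and the level x are all hypothetical):

```python
import random

# Hypothetical 2-state MRW with transition-dependent deterministic
# increments; the chain, the increments and the level x are illustrative.
P = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.4, 1: 0.6}}
inc = {(0, 0): 1.0, (0, 1): -2.0, (1, 0): 3.0, (1, 1): -1.0}
x = 1.5

rng = random.Random(7)
n_cyc = 5000
lo = mid = hi = 0.0
for _ in range(n_cyc):
    state, s, tau, count = 0, 0.0, 0, 0
    while True:                       # one cycle between visits to i = 0
        nxt = 0 if rng.random() < P[state][0] else 1
        s += inc[(state, nxt)]
        tau += 1
        if -s > x:                    # event {S_n^- > x}
            count += 1
        state = nxt
        if state == 0:
            break
    exceed = 1 if count >= 1 else 0   # {D^0 > x} = {S_n^- > x for some n}
    lo += exceed
    mid += count
    hi += tau * exceed
lo, mid, hi = lo / n_cyc, mid / n_cyc, hi / n_cyc
print(lo, mid, hi)  # P_0(D^0 > x) <= V_0((x, oo)) <= E_0(tau 1{D^0 > x})
```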

Theorem 6.1

For a nontrivial MRW \((M_{n},S_{n})_{n\ge 0}\), consider the following assertions:

  1. (a)

    \((M_{n},S_{n})_{n\ge 0}\) is positive divergent, that is \(\lim _{n\rightarrow \infty }S_{n}=\infty \) a.s.

  2. (b)

    \(A_{i}(x)>0\) for all sufficiently large x and \(\mathbb {E}_{i}J_{i}(D^{i})<\infty \) for some/all \(i\in \mathcal {S}\).

  3. (c)

    \(\sum _{n\ge 1}n^{-1}\,\mathbb {P}_{i}(S_{n}\le x)<\infty \) for some/all \((i,x)\in \mathcal {S}\times \mathbb {R}_{\geqslant }\).

  4. (d)

    \(\mathbb {E}_{i}\sigma ^{>}(x)<\infty \) for all \((i,x)\in \mathcal {S}\times \mathbb {R}_{\geqslant }\).

Then (a) \(\Leftrightarrow \) (b) \(\Longrightarrow \) (c) \(\Longrightarrow \) (d). Moreover, if \(\mathbb {E}_{i}\tau (i)\log \tau (i)<\infty \) holds for some/all \(i\in \mathcal {S}\), then part (c) holds true iff

$$\begin{aligned} \int \log J_{i}(x)\ \mathbb {V}_{i}(dx) <\ \infty \end{aligned}$$
(6.3)

for some/all \(i\in \mathcal {S}\).

Remark 6.2

Using Erickson’s result mentioned in Remark 3.2, our proof will show that, if \(\mathbb {E}_{i}|S_{\tau (i)}|=\infty \) and thus \(\mathbb {E}_{i}S_{\tau (i)}^{+}+\mathbb {E}_{i}D^{i}=\infty \), then (a) follows from (b) without the assumption of ultimate positivity of \(A_{i}(x)\). On the other hand, if \(\mathbb {E}_{i}|S_{\tau (i)}|<\infty \) and thus \(J_{i}(x)=O(x)\) as \(x\rightarrow \infty \), then (b) does not reduce to the ultimate positivity of \(A_{i}(x)\) as in the case of ordinary random walks because \(\mathbb {E}_{i}D^{i}=\infty \) is still possible.

Further conditions equivalent to (a) are \(\mathbb {P}_{i}(\sigma _{\min }<\infty )=1\), \(\mathbb {P}_{i}(\rho (x)<\infty )=1\) and \(\mathbb {P}_{i}(N(x)<\infty )=1\) for some/all \((i,x)\in \mathcal {S}\times \mathbb {R}_{\geqslant }\) and also \(\mathbb {P}_{i}(\sigma ^{\leqslant }(-x)=\infty )>0\) for some \((i,x)\in \mathcal {S}\times \mathbb {R}_{\geqslant }\). This follows, as for ordinary RW (see after Remark 3.2), from the basic fluctuation-type trichotomy for nontrivial MRW. On the other hand, the last condition need not be true for all \((i,x)\). For instance, it is easy to provide an example of a nontrivial positive divergent MRW \((M_{n},S_{n})_{n\ge 0}\) with \(\mathbb {P}_{i}(X_{1}\le -x)=1\) for some \((i,x)\in \mathcal {S}\times \mathbb {R}_{\geqslant }\), thus \(\mathbb {P}_{i}(\sigma ^{\leqslant }(-x)=1)=1\).

Every ordinary random walk \((S_{n})_{n\ge 0}\) can be viewed as a MRW whose driving chain \((M_{n})_{n\ge 0}\) has a single state, say 0, giving \(S_{\tau (0)}^{+}=X_{1}^{+}\) and \(D_{1}^{0}= X_{1}^{-}\). Therefore, assertions (b) of Theorem 6.1 and of Theorem 3.1 are identical. On the other hand, parts (c) and (d) of Theorem 6.1 are no longer sufficient for the positive divergence of \((S_{n})_{n\ge 0}\), as will be shown in Example 7.2.

It would be desirable, and is plausible to believe, that fluctuation-theoretic properties of \((S_{n})_{n\ge 0}\) are encoded in the stationary increment distribution \(\mathbb {P}_{\pi }(X_{1}\in \cdot )\). As for Theorem 6.1, this would mean replacing \(\mathbb {E}_{i}J_{i}(D^{i})<\infty \) in part (b) with

$$\begin{aligned} \int _{\mathbb {R}_{>}}\frac{x}{\mathbb {E}_{\pi }\left( X_{1}^{+}\wedge x\right) }\ \mathbb {P}_{\pi } \left( X_{1}^{-}\in dx\right) <\ \infty . \end{aligned}$$

However, Example 11.3 will show that this fails to work in general. The tail behavior of \(X_{1}^{\pm }\) under \(\mathbb {P}_{\pi }\) can actually be very different from the tail behavior of \(S_{\tau (i)}^{\pm }\) under \(\mathbb {P}_{i}\) in the strong sense that

$$\begin{aligned} \lim _{x\rightarrow \infty }\frac{\mathbb {P}_{\pi }\left( X_{1}^{\pm }>x\right) }{\mathbb {P}_{i}\left( S_{\tau (i)}^{\pm }>x\right) }\ =\ \infty . \end{aligned}$$

The occurrence of \(D^{i}\) in Theorem 6.1 and also in some of the subsequent results actually indicates that within a cycle between two successive visits to a state i of the driving chain, it is the extremal behavior of the random walk rather than the average one that determines some of its characteristic features.

The next four results combined provide the counterpart of Theorem 3.3 and once again a more complex picture than in the case of ordinary random walks. For better presentation, we say that a MRW \((M_{n},S_{n})_{n\ge 0}\) with positive recurrent driving chain \((M_{n})_{n\ge 0}\) is of type \(\alpha \), for \(\alpha >0\), if \(\mathbb {E}_{i}\tau (i)^{1+\alpha }<\infty \) for some/all \(i\in \mathcal {S}\).

Theorem 6.3

Let \(\alpha >0\) and \((M_{n},S_{n})_{n\ge 0}\) be a nontrivial, positive divergent MRW of type \(\alpha \). Then the following assertions are equivalent:

  1. (a)

    \(\mathbb {E}_{i}J_{i}(D^{i})^{1+\alpha }<\infty \) for some/all \(i\in \mathcal {S}\).

  2. (b)

    \(\mathbb {E}_{i}\rho (x)^{\alpha }<\infty \) for some/all \((i,x)\in \mathcal {S}\times \mathbb {R}_{\geqslant }\).

  3. (c)

    \(\mathbb {E}_{i}\sigma _{\min }^{\alpha }<\infty \) for some/all \(i\in \mathcal {S}\).

  4. (d)

    For some/all \(i\in \mathcal {S}\), there exists \(x\in \mathbb {R}_{\geqslant }\) such that \(\mathbb {P}_{i}(\sigma ^{\leqslant }(-x)=\infty )>0\) and \(\mathbb {E}_{i}\sigma ^{\leqslant }(-x)^{\alpha }\mathbf {1}_{\{\sigma ^{\leqslant }(-x)<\infty \}}<\infty \). In this case, the last expectation is also finite for any other \(x\in \mathbb {R}_{\geqslant }\).

Theorem 6.4

Given a nontrivial MRW \((M_{n},S_{n})_{n\ge 0}\) of type \(\alpha >0\), the following assertions are equivalent:

  1. (a)

    \(A_{i}(x)>0\) for all sufficiently large x, \(\mathbb {E}_{i}J_{i}(S_{\tau (i)}^{-})^{1+\alpha }<\infty \) and

    $$\begin{aligned} \int J_{i}(x)^{\alpha }\ \mathbb {V}_{i}(dx) <\ \infty \end{aligned}$$
    (6.4)

    for some/all \(i\in \mathcal {S}\).

  2. (b)

    \(\sum _{n\ge 1}n^{\alpha -1}\mathbb {P}_{i}(S_{n}\le x)<\infty \) for some/all \((i,x)\in \mathcal {S}\times \mathbb {R}_{\geqslant }\).

As stated at the end of Theorem 6.1, the equivalence remains valid in the case \(\alpha =0\) when replacing \(J_{i}(x)^{\alpha }\) with \(\log J_{i}(x)\) in (6.4) and the assumption \(\mathbb {E}_{i}\tau (i)^{1+\alpha }<\infty \) with \(\mathbb {E}_{i}\tau (i)\log \tau (i)<\infty \).

Theorem 6.5

Given a nontrivial, positive divergent MRW \((M_{n},S_{n})_{n\ge 0}\) of type \(\alpha \), consider the following assertions:

  1. (a)

    \(\mathbb {E}_{i}J_{i}(S_{\tau (i)}^{-})^{1+\alpha }<\infty \) and

    $$\begin{aligned} \int J_{i}(x)\ \mathbb {V}_{i}^{\alpha }(dx) <\ \infty \end{aligned}$$
    (6.5)

    for some/all \(i\in \mathcal {S}\).

  2. (b)

    \(\mathbb {E}_{i}N(x)^{\alpha }<\infty \) for some/all \((i,x)\in \mathcal {S}\times \mathbb {R}_{\geqslant }\).

Then (a) \(\Leftrightarrow \) (b) if \(\alpha \ge 1\), and (a) \(\Rightarrow \) (b) if \(0<\alpha <1\).

Theorem 6.6

Given a nontrivial, positive divergent MRW \((M_{n},S_{n})_{n\ge 0}\) of type \(\alpha \), the following implications hold true:

$$\begin{aligned}&\alpha \ge 1:\quad \text {Theorem }6.3\text {(a)--(d)}\ \Longrightarrow \ \text {Theorem }6.4\text {(a), (b)}\ \Longrightarrow \ \text {Theorem }6.5\text {(a), (b)}.\\&\alpha \le 1:\quad \text {Theorem }6.3\text {(a)--(d)}\ \Longrightarrow \ \text {Theorem }6.5\text {(a), (b)}\ \Longrightarrow \ \text {Theorem }6.4\text {(a), (b)}. \end{aligned}$$

Furthermore, any of the conditions in the aforementioned theorems implies

$$\begin{aligned} \mathbb {E}_{i}\sigma ^{>}(x)^{1+\alpha }<\infty \quad \text { for all }(i,x)\in \mathcal {S}\times \mathbb {R}_{\geqslant }. \end{aligned}$$
(6.6)

Again, the previous theorems are weaker than their counterparts in the iid case. For \(\alpha >0\), the condition \(\mathbb {E}_{i}\sigma ^{>}(x)^{1+\alpha }<\infty \) and the conditions provided in Theorems 6.4 and 6.5 are in general only necessary for those in Theorem 6.3. Moreover, Theorem 6.6 holds the surprise that \(\mathbb {E}_{i}N(x)^{\alpha }<\infty \) for all \((i,x)\in \mathcal {S}\times \mathbb {R}_{\geqslant }\) implies \(\sum _{n\ge 1}n^{\alpha -1}\mathbb {P}_{i}(S_{n}\le x)<\infty \) for all \((i,x)\in \mathcal {S}\times \mathbb {R}_{\geqslant }\) if \(\alpha \le 1\), whereas the converse is true if \(\alpha \ge 1\) (in the case \(\alpha =1\), both assertions are obviously identical). Example 11.2 will show that equivalence of all stated conditions generally fails to hold. In fact, the assertions of Theorem 6.4 for \(\alpha \in (0,1)\) may be valid although \((S_{n})_{n\ge 0}\) is not even positive divergent, thus \(\mathbb {P}_{i}(\rho (x)<\infty )=0\) for all \(i\in \mathcal {S}\) and \(x\in \mathbb {R}\). We further point out that one cannot generally dispense with

$$\begin{aligned} \mathbb {E}_{i}\tau (i)^{1+\alpha }<\infty \end{aligned}$$
(6.7)

for some/all \(i\in \mathcal {S}\). To see this, consider a MRW \((M_{n},S_{n})_{n\ge 0}\) such that, for some distribution F on \(\mathbb {R}_{>}\), the conditional law of \(X_{n}\) given \((M_{n-1},M_{n})\) equals F if \(M_{n}=i\) and \(\delta _{0}\) otherwise. Then \(\rho (0)+1=N(0)+1=\sigma ^>(0)=\tau (i)\) \(\mathbb {P}_{i}\)-a.s. and thus \(\mathbb {E}_{i}\rho (0)^{1+\alpha }<\infty \) is indeed equivalent to (6.7).
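This construction is easily simulated. In the sketch below (a hypothetical two-state chain with \(i=0\), and F taken to be the Exp(1) distribution, an arbitrary choice of a distribution on \(\mathbb {R}_{>}\)), the first strictly positive partial sum occurs exactly at the return time \(\tau (0)\) in every simulated cycle:

```python
import random

# Increments drawn from F (here Exp(1), an arbitrary illustrative choice)
# exactly on steps entering state i = 0, and 0 on all other steps.
P = {0: {0: 0.3, 1: 0.7}, 1: {0: 0.5, 1: 0.5}}

rng = random.Random(3)
all_equal = True
for _ in range(500):
    state, s, n, sigma_gt, tau0 = 0, 0.0, 0, None, None
    while tau0 is None:
        nxt = 0 if rng.random() < P[state][0] else 1
        n += 1
        if nxt == 0:
            s += rng.expovariate(1.0)   # X_n ~ F since M_n = 0
        if sigma_gt is None and s > 0:
            sigma_gt = n                # sigma^>(0): first n with S_n > 0
        if nxt == 0:
            tau0 = n                    # tau(0): first return to state 0
        state = nxt
    all_equal = all_equal and (sigma_gt == tau0)
print(all_equal)   # sigma^>(0) == tau(0) pathwise under P_0
```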

The counterpart of Theorem 3.4 is stated as the next theorem.

Theorem 6.7

For \(\alpha >0\) and a positive divergent MRW \((M_{n},S_{n})_{n\ge 0}\), consider the following assertions:

  1. (a)

    \(\mathbb {E}_{i}(D^{i})^{\alpha }J_{i}(D^{i})<\infty \) for some/all \(i\in \mathcal {S}\).

  2. (b)

    \(\mathbb {E}_{i}|\min _{n\ge 0}S_{n}|^{\alpha }<\infty \) for some/all \(i\in \mathcal {S}\).

  3. (c)

    \(\mathbb {E}_{i}|S_{\sigma ^{\leqslant }(-x)}|^{\alpha }\mathbf {1}_{\{\sigma ^{\leqslant }(-x)<\infty \}}<\infty \) for some/all \((i,x)\in \mathcal {S}\times \mathbb {R}_{\geqslant }\).

Then (a) \(\Leftrightarrow \) (b) \(\Longrightarrow \) (c).

Example 11.4 will show that (c) does not generally imply (b).

Despite the preceding cautionary remarks regarding the equivalence of all conditions in the above theorems, it is natural to ask whether equivalence does hold at least when the driving chain has a finite state space. The positive answer is provided by the three subsequent theorems.

Theorem 6.8

Given the situation of Theorem 6.1 with finite state space \(\mathcal {S}\), all its assertions (a)–(d) as well as

$$\begin{aligned} A_{\pi }(x)>0\text { for all sufficiently large }x\quad \text { and}\quad \mathbb {E}_{\pi }J_{\pi }(X_{1}^{-})<\infty \end{aligned}$$
(6.8)

are equivalent, where \(A_{\pi }(x):=\mathbb {E}_{\pi }\left( X_{1}^{+}\wedge x\right) -\mathbb {E}_{\pi }\left( X_{1}^{-}\wedge x\right) \) and

$$\begin{aligned} J_{\pi }(0)\,:=\,1\quad \text {and}\quad J_{\pi }(x)\,:=\, {\left\{ \begin{array}{ll} \displaystyle {\frac{x}{\mathbb {E}_{\pi }(X_{1}\wedge x)^{+}}},&{}\text {if }\mathbb {P}_{\pi }(X_{1}> 0)>0 \\ x, \ \text {otherwise} &{} \end{array}\right. },\quad x>0. \end{aligned}$$

Moreover, \(D^{i}\) in 6.1(b) may be replaced with \(S_{\tau (i)}^{-}\).

Theorem 6.9

Let \((M_{n},S_{n})_{n\ge 0}\) be a MRW of type \(\alpha \) and \(\mathcal {S}\) be finite. Then all assertions of Theorems 6.3, 6.4, 6.5 and also (6.6) are equivalent to

$$\begin{aligned} \mathbb {E}_{\pi }J_{\pi }(X_{1}^{-})^{1+\alpha }<\infty . \end{aligned}$$
(6.9)

Moreover, \(D^{i}\) in 6.3(a) may be replaced with \(S_{\tau (i)}^{-}\).

Theorem 6.10

Given the situation of Theorem 6.7 with finite state space \(\mathcal {S}\), all its assertions (a)–(c) as well as (6.6) and

$$\begin{aligned} \mathbb {E}_{\pi }\left( X_{1}^{-}\right) ^{\alpha }J_{\pi }(X_{1}^{-})<\infty \end{aligned}$$
(6.10)

are equivalent. Moreover, \(D^{i}\) in 6.7(a) may be replaced with \(S_{\tau (i)}^{-}\).

Without the conditions (6.8), (6.9) and (6.10) involving \(J_{\pi }\), the last three results would merely be corollaries to the previous ones. However, the inclusion of these conditions requires some extra work, carried out in Lemmata 8.16 and 8.17.

7 Solidarity Results

Observe that the additive part of a nontrivial MRW \((M_{n},S_{n})_{n\ge 0}\) is actually a countable union of ordinary RW, namely

$$\begin{aligned} \{S_{n}:n\ge 1\}\ =\ \bigcup _{i\in \mathcal {S}}\left\{ S_{\tau _{n}(i)}:n\ge 1\right\} , \end{aligned}$$

which are, however, randomly intertwined. The following simple solidarity lemma shows that these embedded sequences share the fluctuation type, but the subsequent counterexample will disprove the natural conjecture that this type is also shared by \((S_{n})_{n\ge 0}\) itself. It further illustrates that the fluctuation types of \((S_{n})_{n\ge 0}\) and its dual \(({}^{\#}S_{n})_{n\ge 0}\) may be different as well (Fig. 1).

Lemma 7.1

If \((M_{n},S_{n})_{n\ge 0}\) is nontrivial, then all \((S_{\tau _{n}(i)})_{n\ge 0}\), \(i\in \mathcal {S}\), are nontrivial and of the same fluctuation type (PD), (ND) or (Osc).

Proof

All \((S_{\tau _{n}(i)})_{n\ge 0}\), \(i\in \mathcal {S}\), are nontrivial by Lemma 5.1. Fix any distinct \(i,j\in \mathcal {S}\). Since \(\limsup _{n\rightarrow \infty }S_{\tau _{n}(i)}=x\) a.s. for some \(x\in \{\pm \infty \}\), we can choose a subsequence \((\tau _{n}'(i))_{n\ge 1}\) of \((\tau _{n}(i))_{n\ge 1}\) such that each \(\tau _{n}'(i)\) is a stopping time for \((M_{n},S_{n})_{n\ge 0}\) and \(\lim _{n\rightarrow \infty }S_{\tau _{n}'(i)}=x\) a.s. Now pick \(m\in \mathbb {N}\) and \(t>0\) such that \(\mathbb {P}_{i}(M_{m}=j,|S_{m}|\le t)>0\). We then infer by a geometric trials argument that

$$\begin{aligned} \mathbb {P}_{\pi }\left( M_{\tau _{n}'(i)+m}=j,\,|S_{\tau _{n}'(i)+m}-S_{\tau _{n}'(i)}|\le t\text { infinitely often}\right) \ =\ 1 \end{aligned}$$

and thus \(\limsup _{n\rightarrow \infty }S_{\tau _{n}(j)}\ge \limsup _{n\rightarrow \infty }S_{\tau _{n}(i)}\) a.s. The reverse inequality and thus equality follows by interchanging the roles of i and j. Finally, by switching to the reflected MRW \((M_{n},-S_{n})_{n\ge 0}\), we find \(\liminf _{n\rightarrow \infty }S_{\tau _{n}(j)}=\liminf _{n\rightarrow \infty }S_{\tau _{n}(i)}\) a.s. as well.\(\square \)

Example 7.2

(MRW driven by an infinite petal flower chain) Consider a Markov chain \((M_{n})_{n\ge 0}\) on the set \(\mathbb {N}_{0}\) of nonnegative integers which, when in state 0, picks some \(i\in \mathbb {N}\) with positive probability \(p_{0i}\), and, when in state \(i\in \mathbb {N}\), jumps back to 0, i.e., \(p_{i0}=1\). If we picture the states \(i\in \mathbb {N}\) as being placed on a circle around 0, the transition diagram of this chain looks like a flower with infinitely many petals, each petal representing a transition from 0 to some i and back. With all \(p_{0i}\) positive, the chain is clearly irreducible and positive recurrent with stationary probabilities \(\pi _{0}=\frac{1}{2}\) and

$$\begin{aligned} \pi _{i}\ =\ \frac{1}{2}\,\mathbb {E}_{0}\left( \sum _{n=0}^{\tau (0)-1}\mathbf {1}_{\{M_{n}=i\}}\right) \ =\ \frac{1}{2}\,\mathbb {P}_{0}(M_{1}=i)\ =\ \frac{p_{0i}}{2}. \end{aligned}$$

In fact, under \(\mathbb {P}_{0}\), the sequence \((M_{n})_{n\ge 0}\) consists of independent random variables which equal 0 for even n and are iid with common distribution \((p_{0i})_{i\ge 1}\) for odd n.

Fig. 1 Transition graph of an infinite petal flower chain

Turning to the additive component, we define the \(X_{n}\) by

$$\begin{aligned} X_{n}\,:=\, {\left\{ \begin{array}{ll} -p_{0i}^{-1},&{}\text {if }M_{n-1}=0,\,M_{n}=i,\\ 2+p_{0i}^{-1},&{}\text {if }M_{n-1}=i,\,M_{n}=0 \end{array}\right. } \end{aligned}$$

for \(i,n\in \mathbb {N}\), i.e., \(K_{0i}=\delta _{-p_{0i}^{-1}}\) and \(K_{i0}=\delta _{2+p_{0i}^{-1}}\). Then \(\mathbb {E}_{\pi }|X_{1}|=\infty \) and

$$\begin{aligned} S_{n}\,=\, {\left\{ \begin{array}{ll} n-1-p_{0M_{n}}^{-1},&{}\text {if }n\text { is odd},\\ n,&{}\text {if }n\text { is even} \end{array}\right. } \quad \mathbb {P}_{0}\text {-a.s.} \end{aligned}$$

It follows that \((M_{n},S_{n})_{n\ge 0}\) is oscillating, for

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{S_{2n}}{2n} = 1 \end{aligned}$$
(7.1)

and

$$\begin{aligned} \liminf _{n\rightarrow \infty }S_{2n+1}\ =\ \liminf _{n\rightarrow \infty }\left( 2n-\frac{1}{p_{0M_{2n+1}}}\right) \ =\ -\infty \quad \mathbb {P}_{0}\text {-a.s.} \end{aligned}$$

The last assertion follows from the fact that, for any \(a>0\),

$$\begin{aligned} \sum _{n\ge 0}\mathbb {P}_{0}\left( \frac{1}{p_{0M_{2n+1}}}>an\right) \ =\ \sum _{n\ge 0}\mathbb {P}_{0}(X_{1}^->an)\ =\ \frac{\mathbb {E}_{0}X_{1}^-}{a}\ =\ \infty \end{aligned}$$

and an appeal to the Borel–Cantelli lemma, giving

$$\begin{aligned} \mathbb {P}_{0}\left( \frac{1}{p_{0M_{2n+1}}}>an\text { i.o.}\right) \ =\ 1. \end{aligned}$$
(7.2)

On the other hand, \((S_{\tau _{n}(0)})_{n\ge 0}\), which equals \((S_{2n})_{n\ge 0}\) under \(\mathbb {P}_{0}\), is positive divergent, whence the same holds true for all other \((S_{\tau _{n}(i)})_{n\ge 0}\) by Lemma 7.1. Regarding Theorem 5.5, this example shows that Kesten’s trichotomy may fail when \(\mathbb {E}_{\pi }|X_{1}|=\infty \) [see (7.1)].
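The mechanics of this example are easy to check by simulation. The following sketch uses the hypothetical choice \(p_{0i}\propto i^{-2}\), truncated to finitely many petals so that the chain can be sampled; none of these numerical choices is prescribed by the paper. It confirms that \(S_{2n}=2n\) at even times, while at odd times n the value \(n-1-p_{0M_{n}}^{-1}\) dips below zero.

```python
import bisect
import random

random.seed(1)

# Hypothetical petal weights (not prescribed by the paper): p_{0i} ~ 1/i^2,
# truncated to K petals so that the chain can be sampled.
K = 10_000
w = [1.0 / i**2 for i in range(1, K + 1)]
tot = sum(w)
p = [x / tot for x in w]                  # p[i-1] = p_{0i}
cum = []
acc = 0.0
for x in p:
    acc += x
    cum.append(acc)

def petal():
    """Sample i with probability p_{0i}."""
    return min(bisect.bisect_left(cum, random.random()), K - 1) + 1

# Run the MRW under P_0: from 0 jump to petal i (X = -1/p_{0i}),
# from i jump back to 0 (X = 2 + 1/p_{0i}).
M, S, path = 0, 0.0, []
for _ in range(2000):
    if M == 0:
        M = petal()
        S -= 1.0 / p[M - 1]
    else:
        S += 2.0 + 1.0 / p[M - 1]
        M = 0
    path.append(S)

evens = path[1::2]   # S_2, S_4, ... : deterministically 2, 4, 6, ...
odds = path[0::2]    # S_1, S_3, ... : n - 1 - 1/p_{0,M_n}, dips below 0
assert all(abs(s - 2 * (k + 1)) < 1e-4 for k, s in enumerate(evens))
assert min(odds) < 0
print("min over odd times:", min(odds))
```

Note that in the truncated toy model \(\mathbb {E}_{0}X_{1}^{-}=\sum _{i\le K}p_{0i}\,p_{0i}^{-1}=K\) is finite but large, which already produces deep negative excursions; in the untruncated chain this expectation is infinite.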

Let us further point out that the dual \(({}^{\#}M_{n},{}^{\#}S_{n})_{n\ge 0}\) has increments

$$\begin{aligned} {}^{\#}X_{n}\,:=\, {\left\{ \begin{array}{ll} 2+p_{0i}^{-1},&{}\text {if }{}^{\#}M_{n-1}=0,\,{}^{\#}M_{n}=i,\\ -p_{0i}^{-1},&{}\text {if }{}^{\#}M_{n-1}=i,\,{}^{\#}M_{n}=0 \end{array}\right. } \end{aligned}$$

for \(n\ge 1\) and is therefore positive divergent, for

$$\begin{aligned} {}^{\#}S_{n}\,=\, {\left\{ \begin{array}{ll} n+1+p_{0{}^{\#}\!M_{n}}^{-1},&{}\text {if }n\text { is odd},\\ n,&{}\text {if }n\text { is even} \end{array}\right. } \quad \mathbb {P}_{0}\text {-a.s.} \end{aligned}$$

Regarding the strictly descending ladder epochs \(\sigma _{n}^{<}\) and the associated ladder walk \((M_{n}^{<},S_{n}^{<})_{n\ge 0}\), the most notable consequence of the properties established above, especially (7.2), is that all \(\sigma _{n}^{<}\) are a.s. finite, yet the ladder chain \((M_{n}^{<})_{n\ge 0}\) must be transient, for otherwise \(\liminf _{n\rightarrow \infty }S_{\tau _{n}(i)}=-\infty \) \(\mathbb {P}_{i}\)-a.s. would hold for at least one \(i\in \mathcal {S}\).

If we alter the definition of the \(X_{n}\) by setting

$$\begin{aligned} X_{n}\,:=\, {\left\{ \begin{array}{ll} -p_{0i}^{-1},&{}\text {if }M_{n-1}=0,\,M_{n}=i\text { and }i\text { is even},\\ -p_{0i}^{-1},&{}\text {if }M_{n-1}=i,\,M_{n}=0\text { and }i\text { is odd},\\ 2+p_{0i}^{-1},&{}\text {if }M_{n-1}=i,\,M_{n}=0\text { and }i\text { is even},\\ 2+p_{0i}^{-1},&{}\text {if }M_{n-1}=0,\,M_{n}=i\text { and }i\text { is odd} \end{array}\right. } \end{aligned}$$

for \(n\ge 1\), then both \((S_{n})_{n\ge 0}\) and its dual \(({}^{\#}S_{n})_{n\ge 0}\) are easily seen to be oscillating, despite still having positive divergent embedded random walks \((S_{\tau _{n}(i)})_{n\ge 0}\). As a consequence, putting \(\nu (x)=\inf \{n\ge 1:S_{\tau _{n}(i)}>x\}\) for \(x\in \mathbb {R}_{\geqslant }\), we have \(\mathbb {E}_{i}\nu (x)<\infty \) and then, by Wald’s identity, also

$$\begin{aligned} \mathbb {E}_{i}\tau _{\nu (x)}<\infty . \end{aligned}$$

Since \(\sigma ^{>}(x)\le \tau _{\nu (x)}\), we obtain \(\mathbb {E}_{i}\sigma ^{>}(x)<\infty \) for all \(x\in \mathbb {R}_{\geqslant }\), and this also ensures \(\mathbb {E}_{j}\sigma ^{>}(x)<\infty \) for all \(x\in \mathbb {R}_{\geqslant }\) and any other \(j\in \mathcal {S}\), because

$$\begin{aligned} \infty \ >\ \mathbb {E}_{i}\sigma ^{>}(x)&\ \ge \ \int _{(-\infty ,x/2]}\mathbb {E}_{j}\sigma ^{>}(x-y)\ \mathbb {P}_{i}(S_{\tau (j)}\in dy,\,\sigma ^{>}(x)>\tau (j))\\&\ \ge \ \mathbb {P}_{i}(S_{\tau (j)}\le x/2,\,\sigma ^{>}(x)>\tau (j))\,\mathbb {E}_{j}\sigma ^{>}(x/2) \end{aligned}$$

for all \(x\in \mathbb {R}_{\geqslant }\). We have thus shown that Theorem 6.1(d) may hold although \((S_{n})_{n\ge 0}\) is oscillating. In fact, since \(\mathbb {E}_{0}\tau (0)\log \tau (0)<\infty \) trivially holds and (6.3) for \(i=0\) is readily verified to take the form

$$\begin{aligned} \mathbb {E}_{0}\log ^{+}S_{\tau (0)-1}^{-}\ =\ \sum _{i\ge 1}p_{0i}|\log p_{0i}|, \end{aligned}$$

we further see that Theorem 6.1(c) may or may not be valid, depending on whether or not \(\sum _{i\ge 1}p_{0i}|\log p_{0i}|\) is finite. In other words, the two assertions (c) and (d) of that theorem are necessary but not sufficient for the positive divergence of \((S_{n})_{n\ge 0}\).

In view of the previous example, a nontrivial MRW \((M_{n},S_{n})_{n\ge 0}\) is called regular if it shares its fluctuation type with its dual \(({}^{\#}M_{n},{}^{\#}S_{n})_{n\ge 0}\) as well as all its embedded random walks \((S_{\tau _{n}(i)})_{n\ge 0}\). Nonregularity can only occur if \(\mathbb {E}_{\pi }X_{1}^{+}=\mathbb {E}_{\pi }X_{1}^{-}=\infty \) as the next result shows.

Proposition 7.3

A nontrivial MRW \((M_{n},S_{n})_{n\ge 0}\) is regular if its stationary drift \(\mu =\mathbb {E}_{\pi }X_{1}\) exists, i.e., \(\mathbb {E}_{\pi }X_{1}^{+}<\infty \) or \(\mathbb {E}_{\pi }X_{1}^{-}<\infty \).

Proof

If \(\mu =\mathbb {E}_{\pi }X_{1}\) exists, then \(\mathbb {E}_{i}S_{\tau (i)}=\pi _{i}^{-1}\mu \) [see (4.2)] exists as well for any \(i\in \mathcal {S}\). Now use Birkhoff’s ergodic theorem (see e.g., [18, Theorem 6.21]), applied to the ergodic stationary sequences \((X_{n})_{n\ge 1}\) and \(({}^{\#}X_{n})_{n\ge 1}\) under \(\mathbb {P}_{\pi }\), to infer \(n^{-1}S_{n}\rightarrow \mu \) and \(n^{-1}{}^{\#}S_{n}\rightarrow \mu \) a.s. Moreover, \(n^{-1}S_{\tau _{n}(i)}\rightarrow \pi _{i}^{-1}\mu \) a.s. for all \(i\in \mathcal {S}\) by the strong law of large numbers. But this shows that \((S_{n})_{n\ge 0}\), its dual and all embedded sequences \((S_{\tau _{n}(i)})_{n\ge 0}\) do indeed share the fluctuation type. \(\square \)
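The SLLN argument in the proof can be illustrated numerically. The sketch below uses a hypothetical two-state MRW of our own making (uniform transition matrix, so \(\pi =(1/2,1/2)\), with a deterministic increment \(a_{ij}\) attached to each transition \(i\rightarrow j\)) and checks that \(n^{-1}S_{n}\rightarrow \mu \) and that the embedded walk at state 0 satisfies \(n^{-1}S_{\tau _{n}(0)}\rightarrow \pi _{0}^{-1}\mu \).

```python
import random

random.seed(7)

# Hypothetical two-state MRW: P = [[1/2, 1/2], [1/2, 1/2]], so pi = (1/2, 1/2),
# with deterministic increments a[i][j] attached to each transition i -> j.
a = [[1.0, 2.0], [3.0, 4.0]]
mu = 0.5 * (0.5 * 1 + 0.5 * 2) + 0.5 * (0.5 * 3 + 0.5 * 4)   # E_pi X_1 = 2.5

N = 200_000
M, S = 0, 0.0
returns, S_at_return = 0, 0.0
for _ in range(N):
    Mnext = random.randrange(2)        # both rows of P are (1/2, 1/2)
    S += a[M][Mnext]
    M = Mnext
    if M == 0:                         # a return epoch tau_k(0)
        returns += 1
        S_at_return = S

assert abs(S / N - mu) < 0.05                        # n^{-1} S_n -> mu
assert abs(S_at_return / returns - mu / 0.5) < 0.2   # SLLN for embedded walk
print(S / N, S_at_return / returns)
```

Both limits agree with the proof: the time average converges to \(\mu =2.5\), while the embedded walk grows at rate \(\pi _{0}^{-1}\mu =5\) per return cycle.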

For the proof of our main results, it is a crucial ingredient that the embedded RWs \((S_{\tau _{n}(i)})_{n\ge 0}\), \(i\in \mathcal {S}\), not only share their fluctuation type (Lemma 7.1) but also the finiteness of power moments of the fluctuation-theoretic quantities we have introduced. In other words, we need solidarity of these RWs regarding the validity of Theorems 3.3 and 3.4, which in turn follows if the respective criteria

$$\begin{aligned} \mathbb {E}_{i}J_{i}\left( S_{\tau (i)}^{-}\right) ^{1+\alpha }<\ \infty \quad \text {and}\quad \mathbb {E}_{i}\left( S_{\tau (i)}^{-}\right) ^{\alpha }J_{i}\left( S_{\tau (i)}^{-}\right) < \infty \end{aligned}$$
(7.3)

are valid for all \(i\in \mathcal {S}\) if valid for some i. The simple observation that these criteria may be rewritten as integrals involving the tail functions of \(S_{\tau (i)}^{-}\) and \(S_{\tau (i)}^{+}\) suggests arriving at this conclusion by showing \(\mathbb {P}_{i}(S_{\tau (i)}^{\pm }>y)\asymp \mathbb {P}_{j}(S_{\tau (j)}^{\pm }>y)\) as \(y\rightarrow \infty \) for all \(i,j\in \mathcal {S}\) (tail solidarity). Unfortunately, another look at the previous example, where \(\mathbb {P}_0(S_{\tau (0)}>y)=0\) for \(y>2\) but \(\mathbb {P}_{i}(S_{\tau (i)}>y)>0\) for all \(y\in \mathbb {R}_{\geqslant }\) and any \(i\in \mathbb {N}\), shows that this cannot hold in general. We will therefore resort to a weaker but still sufficient kind of tail solidarity. The result will be stated as Lemma 7.6.

For distinct \(i,j\in \mathcal {S}\), let

$$\begin{aligned} \upsilon \ :=\ \upsilon (i,j)\ :=\ \inf \{n\ge 1: \tau _n(i)>\tau (j)\} \end{aligned}$$

denote the first return time to i after the first visit to state j. Notice that \(\mathbb {E}_{i}\upsilon ^{1+\alpha }<\infty \) for any \(\alpha \ge 0\) because \(\mathbb {P}_{i}(\upsilon >n)=\mathbb {P}_{i}(\tau (i)<\tau (j))^n\) for all \(n\ge 1\).
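The geometric tail of \(\upsilon \) is easy to verify numerically. In the sketch below we take a hypothetical two-state chain with \(i=0\), \(j=1\) and \(p_{00}=p_{01}=1/2\) (a toy choice of our own, not from the paper); then \(\mathbb {P}_{0}(\tau (0)<\tau (1))=p_{00}=1/2\), so \(\mathbb {P}_{0}(\upsilon >n)\) should equal \(2^{-n}\) and \(\mathbb {E}_{0}\upsilon =2\).

```python
import random

random.seed(3)

# Hypothetical chain on {0,1} with p_00 = p_01 = 1/2; take i = 0, j = 1.
# Then P_0(tau(0) < tau(1)) = 1/2, so P_0(upsilon > n) = 2^{-n}.
def sample_upsilon():
    k = 0                             # returns to 0 before the first visit to 1
    while random.random() < 0.5:      # next step stays at 0 (a return to 0)
        k += 1
    return k + 1                      # upsilon = (#returns before tau(1)) + 1

R = 100_000
samples = [sample_upsilon() for _ in range(R)]
for n in (1, 2, 3):
    emp = sum(s > n for s in samples) / R
    assert abs(emp - 0.5 ** n) < 0.01     # geometric tail 2^{-n}
print("E_0 upsilon ~", sum(samples) / R)  # should be close to 2
```

The geometric tail makes every moment \(\mathbb {E}_{i}\upsilon ^{1+\alpha }\) finite, exactly as claimed above.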

Lemma 7.4

Let \(i,j\in \mathcal {S}\) be distinct states.

  1. (a)

    There exists \(x\in \mathbb {R}_{\geqslant }\) such that, as \(y\rightarrow \infty \),

    $$\begin{aligned} \mathbb {P}_{j}(S_{\tau (j)}>y)\ \lesssim \ \mathbb {P}_{i}(S_{\tau _{\upsilon }(i)}>y-x). \end{aligned}$$
  2. (b)

    \(\mathbb {P}_{j}(S_{\tau (j)}>0)>0\) implies \(\mathbb {P}_{i}(S_{\tau _{\upsilon }(i)}>0)>0\).

Proof

(a) First note that

$$\begin{aligned} \mathbb {P}_{j}(S_{\tau (j)}>y)\ =\ \mathbb {P}_{j}(S_{\tau (j)}>y,\tau (i)<\tau (j))\ +\ \mathbb {P}_{j}(S_{\tau (j)}>y,\, \tau (i)>\tau (j)). \end{aligned}$$

Regarding the first term on the right-hand side and using

$$\begin{aligned} \mathbb {P}_{i}(S_{\tau _{\upsilon }(i)}-S_{\tau (j)}\in \cdot )\ =\ \mathbb {P}_{j}(S_{\tau (i)}\in \cdot ), \end{aligned}$$

we infer for any \(y\in \mathbb {R}\)

$$\begin{aligned} \mathbb {P}_{j}(S_{\tau (j)}>y,\,\tau (i)<\tau (j))&= \mathbb {P}_{j}(S_{\tau (i)} +(S_{\tau (j)}-S_{\tau (i)})>y,\, \tau (i)<\tau (j))\nonumber \\&\le \int \mathbb {P}_{i}(S_{\tau (j)}>y-x)\ \mathbb {P}_{j}(S_{\tau (i)}\in dx)\nonumber \\&= \int \mathbb {P}_{i}(S_{\tau (j)}>y-x)\ \mathbb {P}_{i}(S_{\tau _{\upsilon }(i)}-S_{\tau (j)}\in dx)\nonumber \\&= \mathbb {P}_{i}(S_{\tau _{\upsilon }(i)}>y), \end{aligned}$$
(7.4)

and this proves the assertion if \(\mathbb {P}_{j}(\tau (i)>\tau (j))=0\). Otherwise, pick \(x\in \mathbb {R}_{\geqslant }\) such that

$$\begin{aligned} p_{1} := \mathbb {P}_{i}(S_{\tau (j)}\ge -x/4,\, \tau (i)>\tau (j))> 0\quad \text {and}\quad p_{2}\ :=\ \mathbb {P}_{j}(S_{\tau (i)}\ge -x/4) > 0 \end{aligned}$$

and note that \(q:=\mathbb {P}_{i}(\tau (i)>\tau (j))>0\). Then we obtain with \(p:=p_{1}p_{2}q>0\)

$$\begin{aligned}&\mathbb {P}_{j}\big (S_{\tau (j)}>y,\, \tau (i)>\tau (j)\big )\\&\quad = q^{-1}\,\mathbb {P}_{i}\big (S_{\tau _{2}(j)}-S_{\tau (j)}>y,\,\tau (i)>\tau _{2}(j)\big )\\&\quad \le q^{-1}\,\mathbb {P}_{i}\big (S_{\tau _{2}(j)}-S_{\tau (j)}>y,\,M_{k}\ne i\text { for }\tau (j)<k<\tau _{2}(j)\big )\\&\quad \le p^{-1}\,\mathbb {P}_{i}\big (S_{\tau (i)}>y-x/2,\,S_{\tau (j)}\ge -x/4,\, S_{\tau (i)}-S_{\tau _{2}(j)}\ge -x/4,\,\tau (i)>\tau _{2}(j)\big )\\&\quad \le p^{-1}\,\mathbb {P}_{i}\big (S_{\tau (i)}>y-x/2\big ). \end{aligned}$$

Moreover,

$$\begin{aligned}&\mathbb {P}_{i}\big (S_{\tau _{\upsilon }(i)}>y-x\big )\\&\quad = \mathbb {P}_{i}\big (S_{\tau (i)}>y-x,\, \upsilon =1\big )\ +\ \mathbb {P}_{i}\big (S_{\tau (i)}+(S_{\tau _{\upsilon }(i)}-S_{\tau (i)})>y-x,\, \upsilon>1\big )\\&\quad \ge \mathbb {P}_{i}\big (S_{\tau (i)}>y-x/2,\,\upsilon =1\big )\\&\qquad \quad +\ \mathbb {P}_{i}\big (S_{\tau (i)}+(S_{\tau _{\upsilon }(i)}-S_{\tau (i)})>y-x,\, S_{\tau _{\upsilon }(i)}-S_{\tau (i)}\ge -x/2,\,\upsilon>1\big )\\&\quad \ge \mathbb {P}_{i}\big (S_{\tau (i)}>y-x/2,\, \upsilon =1\big )\ +\ p_1\, p_{2}\,\mathbb {P}_{i}\big (S_{\tau (i)}>y-x/2,\,\upsilon>1\big )\\&\quad \ge p_{1}\,p_{2}\,\mathbb {P}_{i}\big (S_{\tau (i)}>y-x/2\big ) \end{aligned}$$

which in combination with the previous estimate provides the assertion (with x as chosen above) if \(\mathbb {P}_{j}(\tau (i)>\tau (j))>0\).

(b) If \(\mathbb {P}_{j}(S_{\tau (j)}>0,\tau (i)<\tau (j))>0\), the assertion follows immediately from (7.4). Otherwise,

$$\begin{aligned} \mathbb {P}_{j}(S_{\tau (j)}>\varepsilon ,\, \tau (i)>\tau (j))\ >\ 0 \end{aligned}$$

for some \(\varepsilon >0\). Then upon choosing \(x\in \mathbb {R}_{\geqslant }\) and \(p_{1}, p_{2}>0\) as in (a), the assertion follows from

$$\begin{aligned} \mathbb {P}_{i}(S_{\tau _{\upsilon }(i)}>0)&\ge \ p_{1}\,p_{2}\,\mathbb {P}_{j}(S_{\tau _{n}(j)}>x/2,\,\tau (i)>\tau _{n}(j))\\&\ge \ p_{1}\,p_{2}\,\mathbb {P}_{j}(S_{\tau (j)}>\varepsilon ,\, \tau (i)>\tau (j))^{n}\ >\ 0, \end{aligned}$$

where \(n=\lceil x/2\varepsilon \rceil \). \(\square \)

Defining, for \(i\in \mathcal {S}\), \(\gamma \in [0,1]\) and \(x\in \mathbb {R}_{\geqslant }\),

$$\begin{aligned} J_{i,\gamma }(x)\,&:=\, {\left\{ \begin{array}{ll} \displaystyle {\frac{x}{[\mathbb {E}_{i}(S_{\tau (i)}^{+}\wedge x)]^{\gamma }}},&{}\text {if }\mathbb {P}_{i}(S_{\tau (i)}>0)>0,\\ x,&{}\text {otherwise}, \end{array}\right. } \end{aligned}$$

where \(0/[\mathbb {E}_{i}(S_{\tau (i)}^{+}\wedge 0)]^\gamma :=1\) if \(\gamma >0\), the expectations in (7.3) can hereafter be treated in a unified manner because \(J_{i}(x)=J_{i,1}(x)\) and \(x^{\alpha }J_{i}(x)=J_{i,1/(1+\alpha )}(x)^{1+\alpha }\). The next lemma collects some relevant properties of \(J_{i,\gamma }\) and \(A_{i}\).
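Writing \(m_{i}(x):=\mathbb {E}_{i}(S_{\tau (i)}^{+}\wedge x)\), the definition reads \(J_{i,\gamma }(x)=x/m_{i}(x)^{\gamma }\), whence \(J_{i,1/(1+\alpha )}(x)^{1+\alpha }=x^{1+\alpha }/m_{i}(x)=x^{\alpha }J_{i,1}(x)\) whenever \(m_{i}(x)>0\). A quick numerical sanity check of this algebra, with a made-up sample cloud standing in for \(S_{\tau (i)}^{+}\) (not data from the paper):

```python
import random

random.seed(11)

# Made-up stand-in samples for S_{tau(i)}^+ (exponential, mean 5).
samples = [random.expovariate(0.2) for _ in range(50_000)]

def m(x):
    """Empirical truncated mean m_i(x) = E_i(S_{tau(i)}^+ ^ x)."""
    return sum(min(s, x) for s in samples) / len(samples)

def J(x, gamma):
    """J_{i,gamma}(x) = x / m_i(x)^gamma (positive-tail case)."""
    return x / m(x) ** gamma

alpha = 0.7
for x in (0.5, 3.0, 40.0):
    lhs = x ** alpha * J(x, 1.0)                    # x^alpha * J_i(x)
    rhs = J(x, 1.0 / (1.0 + alpha)) ** (1.0 + alpha)
    assert abs(lhs - rhs) < 1e-9 * max(1.0, abs(lhs))
print("identity x^a J_{i,1}(x) = J_{i,1/(1+a)}(x)^{1+a} holds numerically")
```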

Lemma 7.5

The following assertions are true for any \(\gamma \in [0,1]\):

  1. (a)

    \(J_{i,\gamma }\) is subadditive and nondecreasing for all \(i\in \mathcal {S}\).

  2. (b)

    \(J_{i,\gamma }(y)\asymp J_{i,\gamma }(x+y)\) as \(y\rightarrow \infty \) for all \((i,x)\in \mathcal {S}\times \mathbb {R}\).

  3. (c)

    \(J_{i,\gamma }(y)\asymp J_{j,\gamma }(y)\) as \(y\rightarrow \infty \) for all \(i,j\in \mathcal {S}\).

If the \((S_{\tau _{n}(i)})_{n\ge 0}\) are positive divergent, then furthermore

  1. (d)

    \(A_{i}(y)>0\) for all sufficiently large y,

  2. (e)

    \(A_{i}(y) \asymp \mathbb {E}_{i}(S_{\tau (i)}^{+}\wedge y)\asymp \mathbb {E}_{i}(S_{\tau ^{>}(i)}\wedge y)\) as \(y\rightarrow \infty \)

for all \(i\in \mathcal {S}\).

Proof

(a) Subadditivity follows from the monotonicity of the denominator in the definition of \(J_{i,\gamma }\) and monotonicity from the identity

$$\begin{aligned} J_{i,\gamma }(y)\ = \ \frac{y^{1-\gamma } }{ \left[ y^{-1}\int _{0}^{y} \mathbb {P}_{i}\left( S_{\tau (i)}^{+}>x\right) \ dx\right] ^\gamma }\ = \ \frac{y^{1-\gamma }}{\left[ \int _{0}^{1} \mathbb {P}_{i}\left( S_{\tau (i)}>xy\right) \, dx\right] ^\gamma },\quad y\in \mathbb {R}_{\geqslant } \end{aligned}$$

(cf. the proof of [10, Lemma 5.4(a)]).

(b) follows immediately from the properties asserted in (a).

(c) Fix any distinct \(i,j\in \mathcal {S}\) and then \(x\in \mathbb {R}_{\geqslant }\) such that Lemma 7.4(a) holds. The assertion is obvious if we verify that

$$\begin{aligned} \mathbb {E}_{i}\left( S_{\tau (i)}^{+}\wedge y\right) \, \asymp \, \mathbb {E}_{j}\left( S_{\tau (j)}^{+}\wedge y\right) \end{aligned}$$

as \(y\rightarrow \infty \). We obtain

$$\begin{aligned} \mathbb {E}_{j}(S_{\tau (j)}^{+}\wedge y)&= \int _{0}^{y}\mathbb {P}_{j}(S_{\tau (j)}>z)\ dz\ \lesssim \ \int _{0}^{y} \mathbb {P}_{i}(S_{\tau _{\upsilon }(i)}>z-x)\ dz \\&\lesssim \int _{0}^{y-x} \mathbb {P}_{i}(S_{\tau _{\upsilon }(i)}>z)\ dz\ =\ \mathbb {E}_{i}\big [S_{\tau _{\upsilon }(i)}^{+}\wedge (y-x)\big ] \\&= \sum _{n\ge 1 }\mathbb {P}_{i}(\upsilon =n)\, \mathbb {E}_{i}\big [S_{\tau _n(i)}^{+}\wedge (y-x)\big |\upsilon =n\big ] \\&\le \sum _{n\ge 1 }\mathbb {P}_{i}(\upsilon =n)\, \sum _{k=1}^n \mathbb {E}_{i}\big [(S_{\tau _{k}(i)}-S_{\tau _{k-1}(i)})^{+}\wedge (y-x)\big |\upsilon =n\big ], \end{aligned}$$

where Lemma 7.4(b) has been utilized in the third step. Notice that assertion (b) particularly yields

$$\begin{aligned} \mathbb {E}_{i}\left( S_{\tau (i)}^{+}\wedge (y-x)\right) \ \asymp \ \mathbb {E}_{i}\left( S_{\tau (i)}^{+}\wedge y\right) \end{aligned}$$

as \(y\rightarrow \infty \). Next, we have

$$\begin{aligned} \mathbb {E}_{i}\big [(S_{\tau _n(i)}-S_{\tau _{n-1}(i)})^{+}\wedge (y-x)\big |\upsilon =n\big ]&= \mathbb {E}_{i}[S_{\tau (i)}^{+}\wedge (y-x)|\tau (i)>\tau (j)] \\&\lesssim \mathbb {E}_{i}\left[ S_{\tau (i)}^{+}\wedge (y-x)\right] \ \asymp \ \mathbb {E}_{i}\left( S_{\tau (i)}^{+}\wedge y\right) \end{aligned}$$

as \(y\rightarrow \infty \) and, given \(\mathbb {P}_{i}(\tau (i)<\tau (j))>0\),

$$\begin{aligned} \mathbb {E}_{i}\big [(S_{\tau _{k}(i)}-S_{\tau _{k-1}(i)})^{+}\wedge (y-x)\big |\upsilon =n\big ]&= \mathbb {E}_{i}[S_{\tau (i)}^{+}\wedge (y-x)|\tau (i)<\tau (j)] \\&\lesssim \ \mathbb {E}_{i}(S_{\tau (i)}^{+}\wedge y)\qquad \text {as }y\rightarrow \infty \end{aligned}$$

for \(1\le k<n\). Consequently,

$$\begin{aligned} \mathbb {E}_{j}\left( S_{\tau (j)}^{+}\wedge y\right) \ \lesssim \ \sum _{n\ge 1} \mathbb {P}_{i}(\upsilon =n)\, n\cdot \mathbb {E}_{i}\left( S_{\tau (i)}^{+}\wedge y\right) \ = \ \mathbb {E}_{i}\upsilon \cdot \mathbb {E}_{i}\left( S_{\tau (i)}^{+}\wedge y\right) , \end{aligned}$$

thus \(\mathbb {E}_{j}(S_{\tau (j)}^{+}\wedge y)\lesssim \mathbb {E}_{i}(S_{\tau (i)}^{+}\wedge y)\). The reverse relation follows by symmetry of the argument in i and j.

(d) and (e) can be extracted from [33, Lemma 3.2, the proof of Lemma 3.1 and (4.5)]. \(\square \)

Lemma 7.6

The following assertions hold true either for all \(i\in \mathcal {S}\) or none:

  1. (a)

    \(\mathbb {E}_{i}J_{i}(S_{\tau (i)}^{-})^{1+\alpha }<\infty \).

  2. (b)

    \(\mathbb {E}_{i}(S_{\tau (i)}^{-})^{\alpha }\,J_{i}(S_{\tau (i)}^{-})<\infty \).

  3. (c)

    \(\mathbb {E}_{i}S_{\tau (i)}^{-}<\infty \).

By applying the lemma to the MRW \((M_{n},-S_{n})_{n\ge 0}\), we see that it remains valid with \(S_{\tau (i)}^{+}\) in place of \(S_{\tau (i)}^{-}\), and a combination of both results shows that \(\mathbb {E}_{i}|S_{\tau (i)}|<\infty \) holds either for all \(i\in \mathcal {S}\) or none.

Proof

All parts follow if we can prove that, for any \(\gamma \in [0,1]\), \(\mathbb {E}_{i} J_{i,\gamma }(S_{\tau (i)}^{-})^{1+\alpha }<\infty \) is true either for all \(i\in \mathcal {S}\) or none.

Suppose \(\mathbb {E}_{i} J_{i,\gamma }(S_{\tau (i)}^{-})^{1+\alpha }<\infty \) and pick an arbitrary \(j\in \mathcal {S}\setminus \{i\}\). An application of Lemma 7.4(a) to \((M_{n},-S_{n})_{n\ge 0}\) ensures the existence of \(x\in \mathbb {R}_{\geqslant }\) with

$$\begin{aligned} \mathbb {P}_{j}\left( S_{\tau (j)}^{-}>y\right) \ \lesssim \ \mathbb {P}_{i}\left( S_{\tau _{\upsilon }(i)}^{-}>y-x\right) \end{aligned}$$

as \(y\rightarrow \infty \). Hence, by an appeal to Lemma 7.5(b), it suffices to prove

$$\begin{aligned} \mathbb {E}_{i} J_{i,\gamma }\left( S_{\tau _{\upsilon }(i)}^{-}\right) ^{1+\alpha } <\ \infty . \end{aligned}$$

Setting \(Y_{k}:=S_{\tau _{k}(i)}-S_{\tau _{k-1}(i)}\) for \(k\ge 1\), subadditivity of \(J_{i,\gamma }\) yields

$$\begin{aligned} \mathbb {E}_{i} J_{i,\gamma }(S_{\tau _{\upsilon }(i)}^{-})^{1+\alpha }&\le \mathbb {E}_{i}\left[ \sum _{k=1}^{\upsilon } J_{i,\gamma }\big (Y_{k}^{-}\big )\right] ^{1+\alpha }\ \le \ \mathbb {E}_{i}\left[ \upsilon ^{\alpha }\sum _{k=1}^{\upsilon }J_{i,\gamma }\big (Y_{k}^{-}\big )^{1+\alpha }\right] \\&= \sum _{n\ge 1} \mathbb {P}_{i}(\upsilon =n)\,n^{\alpha }\,\sum _{k=1}^{n}\mathbb {E}_{i}\Big [J_{i,\gamma }\big (Y_{k}^{-}\big )^{1+\alpha }\Big |\upsilon =n\Big ]. \end{aligned}$$

Now use

$$\begin{aligned} \mathbb {E}_{i}\left[ J_{i,\gamma }\big (Y_{n}^{-}\big )^{1+\alpha }\Big |\upsilon =n\right] \,=\,\mathbb {E}_{i} \big (J_{i,\gamma }(S_{\tau (i)}^{-})^{1+\alpha }\big |\tau (i)>\tau (j)\big )\,=:\,c_{1}\,<\,\infty \end{aligned}$$

and, given \(\mathbb {P}_{i}(\tau (i)<\tau (j))>0\),

$$\begin{aligned} \mathbb {E}_{i} \left[ J_{i,\gamma }\big (Y_{k}^{-}\big )^{1+\alpha }\Big |\upsilon =n\right] \,=\,\mathbb {E}_{i} \big (J_{i,\gamma }(S_{\tau (i)}^{-})^{1+\alpha }\big |\tau (i)<\tau (j)\big )\,=:\,c_{2}\,<\,\infty \end{aligned}$$

for \(1\le k<n\) to arrive at the desired conclusion

$$\begin{aligned} \mathbb {E}_{i} J_{i,\gamma }\left( S_{\tau _{\upsilon }(i)}^{-}\right) ^{1+\alpha }\ \le \ (c_{1}\vee c_{2})\, \mathbb {E}_{i}\upsilon ^{1+\alpha } <\ \infty . \end{aligned}$$

\(\square \)

The next solidarity lemma is of similar kind, but for \(D^{i}\) instead of \(S_{\tau (i)}^{-}\).

Lemma 7.7

The following assertions hold either for all \(i\in \mathcal {S}\) or none:

  1. (a)

    \(\mathbb {E}_{i}J_{i}(D^{i})^{1+\alpha }<\infty \).

  2. (b)

    \(\mathbb {E}_{i}(D^{i})^{\alpha }J_{i}(D^{i})<\infty \).

  3. (c)

    \(\mathbb {E}_{i}(D^{i})^{1+\alpha }<\infty \).

Proof

Again, it suffices to prove that, for any \(\gamma \in [0,1]\), \(\mathbb {E}_{i} J_{i,\gamma }(D^{i})^{1+\alpha }<\infty \) holds either for all \(i\in \mathcal {S}\) or none.

Suppose \(\mathbb {E}_{i} J_{i,\gamma }(D^{i})^{1+\alpha }<\infty \) for some \(i\in \mathcal {S}\) and \(\gamma \in [0,1]\). Pick an arbitrary \(j\in \mathcal {S}\backslash \{i\}\), define

$$\begin{aligned} D_{\upsilon }\ := \ \max _{1\le k\le \tau _{\upsilon }(i)} S_{k}^{-} \end{aligned}$$

and use \(D_{\upsilon }\le \sum _{k=1}^\upsilon D_{k}^{i}\) to infer \(\mathbb {E}_{i} J_{i,\gamma }(D_{\upsilon })^{1+\alpha }<\infty \) in the same manner as the finiteness of \(\mathbb {E}_{i} J_{i,\gamma }(S_{\tau _{\upsilon }(i)}^{-})^{1+\alpha }\) in the proof of Lemma 7.6. Then put

$$\begin{aligned} \upsilon _{2}\ :=\ \inf \{n\ge 1: \tau _n(i)>\tau _{2}(j)\}\quad \text {and}\quad D_{\upsilon _{2}}\ := \ \max _{1\le k\le \tau _{\upsilon _{2}}(i)}S_{k}^{-} \end{aligned}$$

and notice that

$$\begin{aligned} D_{\upsilon _{2}}\ \le \ D_{\upsilon }\ +\ \max _{\tau _{\upsilon }(i)< k\le \tau _{\upsilon _{2}}(i)} (S_{k}-S_{\tau _{\upsilon }(i)})^{-}. \end{aligned}$$

The second summand on the right-hand side is 0 if \(\tau _{\upsilon }(i)=\tau _{\upsilon _{2}}(i)\) and an independent copy of \(D_{\upsilon }\) otherwise. Hence, in both cases one easily obtains \(\mathbb {E}_{i} J_{i,\gamma }(D_{\upsilon _{2}})^{1+\alpha }<\infty \). Picking \(x\in \mathbb {R}_{\geqslant }\) with \(\mathbb {P}_{i}(S_{\tau (j)}\le x)>0\), we finally infer with the help of Lemma 7.5

$$\begin{aligned} \infty&> \mathbb {E}_{i} J_{i,\gamma }(D_{\upsilon _{2}})^{1+\alpha }\ \gtrsim \ \int _{(0,\infty )} J_{i,\gamma }(y)^{1+\alpha }\ \mathbb {P}_{i}(D_{\upsilon _{2}}\in dy\,|S_{\tau (j)}\le x)\\&\ge \ \int _{(0,\infty )} J_{i,\gamma }(y)^{1+\alpha }\ \mathbb {P}_{i}\Big (\max _{\tau _{1}(j)< k\le \tau _{2}(j)}S_{k}^{-}\in dy\,\Big |S_{\tau _{1}(j)}\le x\Big )\\&\ge \ \int _{(0,\infty )} J_{i,\gamma }(y)^{1+\alpha }\ \mathbb {P}_{i}\Big ( \max _{\tau _{1}(j)< k\le \tau _{2}(j)}(S_{k}-S_{\tau _{1}(j)})^{-}-x\in dy\,\Big |S_{\tau _{1}(j)}\le x\Big )\\&= \int _{(0,\infty )} J_{i,\gamma }(y)^{1+\alpha }\ \mathbb {P}_{j}\left( D_{1}^{j}-x\in dy\right) \\&\asymp \ \mathbb {E}_{j} J_{j,\gamma }(D^j)^{1+\alpha }. \end{aligned}$$

\(\square \)

Our final solidarity lemma provides sufficient conditions for the existence of power moments of \(\sigma ^{>}\); see Sect. 9. Note that a geometric number of cycles, marked by successive visits to a state \(j\ne i\), contains a visit to \(i\in \mathcal {S}\). Hence, \(\mathbb {E}_{i}\tau (i)^{1+\alpha }<\infty \) is satisfied either for all \(i\in \mathcal {S}\) or none.

Lemma 7.8

Let \(\alpha \ge 0\) and \(\mathbb {E}_{i}\tau (i)^{1+\alpha }<\infty \) for some/all \(i\in \mathcal {S}\). The following conditions are equivalent:

  1. (a)

    \(\mathbb {E}_{i} \tau _{\nu (x)}(i)^{1+\alpha }<\infty \) for some/all \((i,x)\in \mathcal {S}\times \mathbb {R}_{\geqslant }\).

  2. (b)

    \(A_{i}(x)>0\) for all sufficiently large x and \(\mathbb {E}_{i}J_{i}(S_{\tau (i)}^{-})^{1+\alpha }<\infty \) for some/all \(i\in \mathcal {S}\).

In particular, these conditions imply (6.6).

Proof

By Lemmata 7.1, 7.6 and Theorem 3.1, (b) holds true either for all \(i\in \mathcal {S}\) or none. Moreover, (6.6) follows from \(\tau _{\nu (x)}(i)\ge \sigma ^{>}(x)\).

Suppose \(\mathbb {E}_{i}\tau _{\nu (x)}(i)^{1+\alpha }<\infty \) for some \((i,x)\in \mathcal {S}\times \mathbb {R}_{\geqslant }\). Since \(\tau _{\nu (x)}(i)\ge \nu (x)\), we obtain \(\mathbb {E}_{i}\nu (x)^{1+\alpha }<\infty \) which is equivalent to (b) by Theorems 3.1 and 3.3.

Since \(\mathbb {E}_{i}\tau (i)^{1+\alpha }<\infty \), the reverse implication follows directly from \(\mathbb {E}_{i}\nu (x)^{1+\alpha }<\infty \) and [29, Theorem 1.5.4].\(\square \)

8 Proofs of the Main Results

For ease of notation, we write \(\tau ,\tau _{n},{}^{\#}\tau ,\tau ^{>}\), etc., for \(\tau (i),\tau _{n}(i),{}^{\#}\tau (i),\tau ^{>}(i)\), etc., in all subsequent proofs when a fixed \(i\in \mathcal {S}\) is considered. We further put

$$\begin{aligned} {\varSigma }_{\alpha }(i,x)\ :=\ \sum _{n\ge 1}n^{\alpha -1}\,\mathbb {P}_{i}(S_{n}\le x) \end{aligned}$$

for \(\alpha \in \mathbb {R}_{\geqslant }\) and \((i,x)\in \mathcal {S}\times \mathbb {R}_{\geqslant }\). In analogy to \(J_{i}\), let \(J_{i}^{>}\) be defined by

$$\begin{aligned} J_{i}^{>}(0)\,:=\,1\quad \text {and}\quad J_{i}^{>}(x)\,:=\, \frac{x}{\mathbb {E}_{i}(S_{\tau ^{>}(i)}\wedge x)}\quad \text {for }x>0, \end{aligned}$$

and put

$$\begin{aligned} D_{n}^{i,>}\,:=\,\max _{\tau ^{>}_{n-1}(i)<k\le \tau ^{>}_{n}(i)}\left( S_{k}-S_{\tau ^{>}_{n-1}(i)}\right) ^{-} \end{aligned}$$

for \(n\in \mathbb {N}\). Let \(D^{i,>}\) denote a generic copy of these iid random variables under \(\mathbb {P}_{i}\) which is independent of all other occurring random variables. Note that

$$\begin{aligned} D_{1}^{i}\ \le \ D_{1}^{i,>}\ \le \ \sum _{k=1}^{\zeta _{1}}D_{k}^{i}, \end{aligned}$$

where \(\tau ^{>}(i)=\tau _{\zeta _{1}}(i)\) with \(\zeta _{1}=\inf \{n\ge 1:S_{\tau _{n}(i)}>0\}\) should be recalled from the end of Sect. 4. Then use Lemma 7.5(e), giving \(J_{i}(x)\asymp J_{i}^{>}(x)\), in combination with the monotonicity and subadditivity of \(J_{i}\) and Theorem 1.5.4 in [29] to infer that, if \((S_{\tau _{n}(i)})_{n\ge 0}\) is positive divergent, then

$$\begin{aligned} \mathbb {E}_{i}J_{i}^{>}(D^{i,>})^{1+\beta }\,<\,\infty \quad \Longleftrightarrow \quad \mathbb {E}_{i}J_{i}(D^{i})^{1+\beta }\,<\,\infty \end{aligned}$$
(8.1)

for any \(\beta \in \mathbb {R}_{\geqslant }\).

8.1 Proof of Theorem 6.1

The proof of Theorem 6.1 can be found at the end of this subsection after some auxiliary lemmata. We start by quoting the following straightforward generalization of a result by Erickson [23, Lemma 4].

Lemma 8.1

Let \((X_{n},Y_{n})_{n\ge 1}\) be a sequence of iid pairs of nonnegative random variables with generic copy (XY) and \(\mathbb {E}X+\mathbb {E}Y=\infty \). Then

$$\begin{aligned} \limsup _{n\rightarrow \infty }\frac{Y_{n+1}}{\sum _{k=1}^{n}X_{k}} = 0\quad \text {or}\quad = \infty \quad \text {a.s.} \end{aligned}$$

according as

$$\begin{aligned} \int _{\mathbb {R}_{>}}\frac{y}{\mathbb {E}(X\wedge y)} \mathbb {P}(Y\in dy) < \infty \quad \text {or}\quad = \infty . \end{aligned}$$

Proof

(of Theorem 6.1 ) “(a)\(\Rightarrow \)(b).” Suppose that \((S_{n})_{n\ge 0}\) is positive divergent. Noting that, for any \(i\in \mathcal {S}\), \((S_{\tau _{n}}-D_{n+1}^{i})_{n\ge 0}\) forms a subsequence of \((S_{n})_{n\ge 0}\) in the sense that the former sequence equals \((S_{\xi _{n}})_{n\ge 0}\) for an increasing random sequence \((\xi _{n})_{n\ge 0}\), we see that \(S_{n}\rightarrow \infty \) a.s. ensures \(S_{\tau _{n}}-D_{n+1}^{i}\rightarrow \infty \) a.s., hence \(S_{\tau _{n}}\rightarrow \infty \) a.s. and

$$\begin{aligned} \limsup _{n\rightarrow \infty }\frac{D_{n+1}^{i}}{\sum _{k=1}^{n}(S_{\tau _{k}}-S_{\tau _{k-1}})^{+}}\ \le \ \limsup _{n\rightarrow \infty }\frac{D_{n+1}^{i}}{S_{\tau _{n}}} <\ \infty \quad \text {a.s.} \end{aligned}$$

Consequently, \(\mathbb {E}_{i}J_{i}(D^{i})<\infty \) follows by Erickson’s Lemma 8.1. Finally, \(A_{i}(x)>0\) for all sufficiently large x can now be inferred from Lemma 3.2 by Kesten and Maller [33].

“(b)\(\Rightarrow \)(a).” Fix an arbitrary \(i\in \mathcal {S}\) and consider two cases.

Case 1. If \(\mathbb {E}_{i}S_{\tau }^{+}+\mathbb {E}_{i}D^{i}<\infty \), then \(0<\lim _{y\rightarrow \infty }A_{i}(y)=\mathbb {E}_{i}S_{\tau }^{+}<\infty \) ensures the positive divergence of \((S_{\tau _{n}})_{n\ge 0}\) and \(\lim _{n\rightarrow \infty }n^{-1}S_{\tau _{n}}=\mathbb {E}_{i}S_{\tau }\) a.s. Moreover,

$$\begin{aligned} \sum _{n\ge 1}\mathbb {P}_{i}\left( D_{n}^{i}>n\right) \ \le \ \mathbb {E}_{i}D^{i} <\ \infty \end{aligned}$$

entails \(\lim _{n\rightarrow \infty }n^{-1}D_{n}^{i}=0\) a.s. As a consequence,

$$\begin{aligned} \lim _{n\rightarrow \infty }S_{n}&\ge \lim _{n\rightarrow \infty }\left( S_{\tau _{{\varLambda }(n)}}-D_{{\varLambda }(n)+1}^{i}\right) \\&\ge \lim _{n\rightarrow \infty }{\varLambda }(n)\left( \mathbb {E}_{i}S_{\tau (i)}-\frac{D_{{\varLambda }(n)+1}^{i}}{{\varLambda }(n)}\right) = \infty \quad \text {a.s.,} \end{aligned}$$

where \({\varLambda }(x):=\sup \{n\ge 0:\tau _{n}\le x\}\).

Case 2. If \(\mathbb {E}_{i}S_{\tau }^{+}+\mathbb {E}_{i}D^{i}=\infty \), then, again by Erickson’s Lemma 8.1, \(\mathbb {E}_{i}J_{i}(D^{i})<\infty \) is equivalent to

$$\begin{aligned} Q_{n}\,:=\,\frac{D_{n+1}^{i}}{\sum _{k=1}^{n}(S_{\tau _{k}}-S_{\tau _{k-1}})^{+}}\ \mathop {\longrightarrow }\limits ^{n\rightarrow \infty }\ 0\quad \text {a.s.} \end{aligned}$$
(8.2)

Therefore, by invoking a result of Pruitt [45, Lemma 8.1], we also have the equivalence of \(\mathbb {E}_{i}J_{i}(D^{i})<\infty \) with

$$\begin{aligned} R_{n}\,:=\,\frac{\sum _{k=1}^{n}(S_{\tau _{k}}-S_{\tau _{k-1}})^{-}}{\sum _{k=1}^{n}(S_{\tau _{k}}-S_{\tau _{k-1}})^{+}} \mathop {\longrightarrow }\limits ^{n\rightarrow \infty }\ 0\quad \text {a.s.} \end{aligned}$$
(8.3)

Now we arrive at the desired conclusion via

$$\begin{aligned} S_{n}&\ge \sum _{k=1}^{{\varLambda }(n)}(S_{\tau _{k}}-S_{\tau _{k-1}})^{+}-\sum _{k=1}^{{\varLambda }(n)}(S_{\tau _{k}}-S_{\tau _{k-1}})^{-}-D_{{\varLambda }(n)+1}^{i} \\&= \sum _{k=1}^{{\varLambda }(n)}(S_{\tau _{k}}-S_{\tau _{k-1}})^{+}\left( 1-R_{{\varLambda }(n)}-Q_{{\varLambda }(n)}\right) \mathop {\longrightarrow }\limits ^{n\rightarrow \infty } \infty \quad \text {a.s.} \end{aligned}$$

Note that \(A_{i}(x)>0\) for all sufficiently large x has not been used here.

Lemma 8.2 below establishes “(b)\(\Rightarrow \)(c).” For “(c)\(\Leftrightarrow \)(6.3)” under the additional condition on \(\tau (i)\), we refer to the proof of Theorem 6.4 for the case \(\alpha =0\). Left with “(c)\(\Rightarrow \)(d),” Lemma 13.3 in the Appendix provides us with

$$\begin{aligned} \sum _{n\ge 1}\frac{1}{n}\,\mathbb {P}_{i}(S_{\tau _{n}}\le x)&\asymp \ \sum _{n\ge 1}\mathbb {E}_{i}\left( \frac{1}{\tau _{n}}\,\mathbf {1}_{\{S_{\tau _{n}}\le x\}}\right) \ \le \ {\varSigma }_{0}(i,x) <\ \infty , \end{aligned}$$

for all \(x\in \mathbb {R}_{\geqslant }\), which entails \(S_{\tau _{n}}\rightarrow \infty \) a.s. and \(\mathbb {E}_{i}\nu (x)<\infty \) by Theorem 3.1. Finally, use Wald’s equation to conclude

$$\begin{aligned} \mathbb {E}_{i}\sigma ^{>}(x)\ \le \ \mathbb {E}_{i}\tau _{\nu (x)}\ =\ \mathbb {E}_{i}\tau \,\mathbb {E}_{i}\nu (x) <\ \infty \end{aligned}$$
(8.4)

for any \(x\in \mathbb {R}_{\geqslant }\). \(\square \)
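The use of Wald’s equation in (8.4) can be illustrated by a small simulation. The following sketch is not part of the proof; it uses a toy model of our own choosing (cycle lengths \(\chi _{k}\) uniform on {1, 2}, cycle increments \(\xi _{k}\sim \) Exp(1)), with \(\nu (x)\) the first passage index of the embedded walk:

```python
import random

def wald_check(x, n_paths=20000, seed=1):
    """Monte Carlo illustration of Wald's equation
    E[tau_{nu(x)}] = E[tau] * E[nu(x)], where nu(x) is the first n with
    xi_1 + ... + xi_n > x.  Toy model: iid cycle lengths chi_k uniform
    on {1, 2} and iid cycle increments xi_k ~ Exp(1), independent of
    each other, so nu(x) is a stopping time for the xi-filtration."""
    rng = random.Random(seed)
    sum_tau_nu = sum_nu = 0
    for _ in range(n_paths):
        s, tau, nu = 0.0, 0, 0
        while s <= x:
            tau += 1 + (rng.random() < 0.5)   # chi_k in {1, 2}, mean 1.5
            s += rng.expovariate(1.0)         # xi_k ~ Exp(1)
            nu += 1
        sum_tau_nu += tau
        sum_nu += nu
    lhs = sum_tau_nu / n_paths        # estimates E[tau_{nu(x)}]
    rhs = 1.5 * (sum_nu / n_paths)    # estimates E[tau] * E[nu(x)]
    return lhs, rhs

lhs, rhs = wald_check(5.0)
```

With 20,000 paths, the two estimates agree to within a few percent, as Wald’s equation predicts.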

Lemma 8.2

Given a nontrivial MRW \((M_{n},S_{n})_{n\ge 0}\), positive divergence implies \({\varSigma }_{0}(i,x)<\infty \) for some/all \((i,x)\in \mathcal {S}\times \mathbb {R}_{\geqslant }\), and the converse is also true provided that \(\mathbb {E}_{\pi }X_{1}\) exists.

Proof

Suppose that \(S_{n}\rightarrow \infty \) a.s. Fixing any \((i,x)\in \mathcal {S}\times \mathbb {R}_{\geqslant }\) and recalling that \(\mathbb {E}_{i}\tau ^{>}<\infty \), we obtain

$$\begin{aligned} {\varSigma }_{0}(i,x)&= \sum _{n\ge 1}\mathbb {E}_{i}\left( \sum _{k=\tau ^{>}_{n-1}+1}^{\tau ^{>}_{n}}\frac{1}{k}\,\mathbf {1}_{\left\{ S_{\tau ^{>}_{n-1}} +\left( S_{k}-S_{\tau ^{>}_{n-1}}\right) \le x\right\} }\right) \\&\le \sum _{n\ge 1}\mathbb {E}_{i}\left( \frac{1}{\tau ^{>}_{n-1}+1}\sum _{k=\tau ^{>}_{n-1}+1}^{\min (2\tau ^{>}_{n-1},\tau ^{>}_{n})}\mathbf {1}_{ \left\{ S_{\tau ^{>}_{n-1}}-D_{n}^{i,>}\le x\right\} }\right) \\&\qquad + \sum _{n\ge 1}\mathbb {E}_{i}\left( \sum _{k=2\tau ^{>}_{n-1}+1}^{\tau ^{>}_{n}}\frac{1}{k}\,\mathbf {1}_{ \left\{ \tau ^{>}_{n}-\tau ^{>}_{n-1}>\tau ^{>}_{n-1}\right\} }\right) \\&\le \ \sum _{n\ge 0}\mathbb {P}_{i}\left( S_{\tau ^{>}_{n}}-D_{n+1}^{i,>}\le x\right) \ +\ \sum _{n\ge 1}\mathbb {E}_{i}\left( \sum _{k=2\tau ^{>}_{n-1}+1}^{\tau ^{>}_{n}}\frac{1}{k}\,\mathbf {1}_{\left\{ \tau ^{>}_{n}-\tau ^{>}_{n-1}>\tau ^{>}_{n-1}\right\} }\right) . \end{aligned}$$

Finiteness of the first series will be established in the subsequent Lemma 8.3. As for the second one, notice that \(S_{\tau _{n}}\rightarrow \infty \,\mathbb {P}_{i}\)-a.s. ensures \(\mathbb {E}_{i}\tau ^{>}<\infty \). Therefore,

$$\begin{aligned}&\sum _{n\ge 1}\mathbb {E}_{i}\left( \sum _{k=2\tau ^{>}_{n-1}+1}^{\tau ^{>}_{n}}\frac{1}{k}\,\mathbf {1}_{\left\{ \tau ^{>}_{n}-\tau ^{>}_{n-1}>\tau ^{>}_{n-1}\right\} }\right) \\&\quad \le \sum _{n\ge 1} \sum _{k\ge 1} \mathbb {E}_{i} \left[ \frac{1}{2\tau ^{>}_{n-1}+k}\,\mathbf {1}_{\{\tau ^{>}_{n}-\tau ^{>}_{n-1}\ge n-1+k\}}\right] \\&\quad \le \sum _{n\ge 1} \sum _{k\ge n}\frac{1}{k}\,\mathbb {P}_{i}(\tau ^{>}\ge k)\ =\ \sum _{n\ge 1} \mathbb {P}_{i}(\tau ^{>}\ge n) = \mathbb {E}_{i} \tau ^{>}< \infty \end{aligned}$$

as claimed.

Now suppose that \(\mathbb {E}_{\pi }X_{1}\) exists and, furthermore, that \((S_{n})_{n\ge 0}\) is not positive divergent, which, by Proposition 7.3, entails that the same holds true for any \((S_{\tau _{n}(i)})_{n\ge 0}\), \(i\in \mathcal {S}\), and hence \(\sum _{n\ge 1}n^{-1}\mathbb {P}_{i}(S_{\tau _{n}(i)}\le x)=\infty \) for all \(x\in \mathbb {R}_{\geqslant }\). Then use Lemma 13.3 to conclude

$$\begin{aligned} \infty = \mathbb {E}_{i}\left( \sum _{n\ge 1}\frac{1}{\tau _{n}(i)}\,\mathbf {1}_{\{S_{\tau _{n}(i)}\le x\}}\right) \ \le \ {\varSigma }_{0}(i,x) \end{aligned}$$

for all \(x\in \mathbb {R}_{\geqslant }\).\(\square \)

Lemma 8.3

Given a nontrivial MRW \((M_{n},S_{n})_{n\ge 0}\), positive divergence implies

$$\begin{aligned} \sum _{n\ge 1}\mathbb {P}_{i}\left( S_{\tau ^{>}_{n}(i)}-D_{n+1}^{i,>}\le x\right) <\ \infty \end{aligned}$$

for all \((i,x)\in \mathcal {S}\times \mathbb {R}_{\geqslant }\).

Proof

Fix any \((i,x)\in \mathcal {S}\times \mathbb {R}_{\geqslant }\). We have already proved that \(S_{n}\rightarrow \infty \) a.s. implies \(\mathbb {E}_{i}J_{i}(D^{i})<\infty \) and thus also \(\mathbb {E}_{i}J_{i}^{>}(D^{i,>})<\infty \) by (8.1). By combining this fact with

$$\begin{aligned} \sum _{n\ge 1}\mathbb {P}_{i}\left( S_{\tau ^{>}_{n}}\le x+y\right) \ \asymp \ J_{i}^{>}(x+y) \end{aligned}$$

as \(y\rightarrow \infty \) [see (3.2)], we obtain

$$\begin{aligned} \sum _{n\ge 1}\mathbb {P}_{i}\left( S_{\tau ^{>}_{n}}-D_{n+1}^{i,>}\le x\right) \ \asymp \ \mathbb {E}_{i}J_{i}^{>}(x+D^{i,>})\ \asymp \ \mathbb {E}_{i}J_{i}^>(D^{i,>}) <\ \infty \end{aligned}$$

which proves the assertion. \(\square \)

8.2 Proof of Theorem 6.7

Since (c) is a direct consequence of (b) when noting that

$$\begin{aligned} |S_{\sigma ^{\leqslant }(-x)}|\mathbf {1}_{\{\sigma ^{\leqslant }(-x)<\infty \}}\ \le \ |\min _{n\ge 0}S_{n}| \end{aligned}$$

for all \(x\in \mathbb {R}_{\geqslant }\), we need only show the equivalence of (a) and (b), which is accomplished by the subsequent lemma.

Lemma 8.4

Let \((M_{n},S_{n})_{n\ge 0}\) be positive divergent and \(\alpha >0\). Then

$$\begin{aligned} \mathbb {E}_{i}(D^{i})^{\alpha }J_{i}(D^{i}) <\ \infty \end{aligned}$$
(8.5)

and

$$\begin{aligned} \mathbb {E}_{i}\Big |\min _{n\ge 0}S_{n}\Big |^{\alpha } <\ \infty \end{aligned}$$
(8.6)

are equivalent conditions and, if valid for one \(i\in \mathcal {S}\), actually hold for all i.

Proof

We first point out that (8.5), if true for one i, is actually true for all i by Lemma 7.7. A similar solidarity holds for (8.6) because

$$\begin{aligned} \mathbb {E}_{i}\left| \min _{n\ge 0}S_{n}\right| ^{\alpha }&= \mathbb {E}_{i}\left| \min _{n\ge 1}S_{n}\wedge 0 \right| ^{\alpha } \\&\ge \mathbb {E}_{i}\left| \left( \min _{n\ge \tau (i)+1} (S_{n}-S_{\tau (i)})+S_{\tau (i)}\right) \wedge 0\right| ^{\alpha }\,\mathbf {1}_{\{S_{\tau (i)}\le x\}} \\&\ge p\,\mathbb {E}_{i}\left| \left( \min _{n\ge 1} S_{n}+x\right) \wedge 0\right| ^{\alpha }\ \asymp \ \mathbb {E}_{i}\left| \min _{n\ge 0}S_{n}\right| ^{\alpha }, \end{aligned}$$

where \(x\in \mathbb {R}_{\geqslant }\) is chosen so large that \(p:= \mathbb {P}_{i}(S_{\tau (i)}\le x)>0\).

By the positive divergence of \((M_{n},S_{n})_{n\ge 0}\), we can fix i such that \(p:=\mathbb {P}_{i}(\sigma ^{\leqslant }=\infty )>0\). Again, we write \(\tau ,\tau _{n}\) as shorthand for \(\tau (i),\tau _{n}(i)\), respectively. Define

$$\begin{aligned} \eta = \eta _{1} := \inf \left\{ k\ge 1:S_{\tau _{k-1}}-D_{k}^{i}< 0\right\} \end{aligned}$$

and, recursively,

$$\begin{aligned} \eta _{n} := \inf \{k>\eta _{n-1}:S_{\tau _{k-1}}-S_{\tau _{\eta _{n-1}}}-D_{k}^{i}< 0\} \end{aligned}$$

with the usual convention \(\inf \emptyset :=\infty \). Then

$$\begin{aligned} \nu \,:=\,\inf \{n\ge 1:\eta _{n}=\infty \} \end{aligned}$$

has a geometric distribution with parameter p. Moreover,

$$\begin{aligned} \Big |\min _{n\ge 0}S_{n}\Big |&\le \sum _{n=1}^{\nu -1}\left| S_{\tau _{\eta _{n}-1}}-S_{\tau _{\eta _{n-1}}}-D_{\eta _{n}}^{i}\right| \end{aligned}$$

with \(\eta _{0}:=0\) and the fact that the \(S_{\tau _{\eta _{k}-1}}-S_{\tau _{\eta _{k-1}}}-D_{\eta _{k}}^{i}\) for \(k=1,\ldots ,n\) are conditionally iid given \(\nu >n\) can be used as in [31, p. 871,(v)\(_{0}\Rightarrow \)(vii)] to show that (8.6) holds iff

$$\begin{aligned} \mathbb {E}_{i}\left| S_{\tau _{\eta -1}}-D_{\eta }^{i}\right| ^{\alpha }\mathbf {1}_{\{\eta<\infty \}}\ <\ \infty . \end{aligned}$$
(8.7)

Therefore, it suffices to show the equivalence of (8.7) and (8.5).
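The geometric mechanism behind \(\nu \) above admits a quick numerical sanity check. The sketch below is our own toy abstraction, not the MRW itself: each excursion block is treated as an independent trial that is “successful” (i.e., \(\eta _{n}=\infty \)) with probability p, and the empirical mean of \(\nu \) is compared with \(1/p\):

```python
import random

def sample_nu(p, rng):
    """Number of independent excursion blocks until the first one that
    never produces a new record minimum; each block succeeds with
    probability p, so nu is geometric with parameter p."""
    n = 1
    while rng.random() >= p:   # block n fails with probability 1 - p
        n += 1
    return n

rng = random.Random(7)
p = 0.3
samples = [sample_nu(p, rng) for _ in range(100_000)]
mean_nu = sum(samples) / len(samples)   # should be close to 1/p = 10/3
```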

Use Lemma 13.1 to obtain

$$\begin{aligned}&F_{i}(x) := \mathbb {P}_{i}\left( -S_{\tau _{\eta -1}(i)}+D_{\eta }^{i}\ge x,\, \eta <\infty \right) \\&\quad = \sum _{n\ge 1} \mathbb {P}_{i}\left( -S_{\tau _{n-1}(i)}+D_{n}^{i}\ge x,\, \eta =n\right) \\&\quad \le \sum _{n\ge 1} \mathbb {P}_{i}\left( S_{\tau _{n-1}(i)}\ge 0,\, -S_{\tau _{n-1}(i)}+D_{n}^{i}\ge x\right) \\&\quad = \int _{[x,\infty )} \sum _{n\ge 1} \mathbb {P}_{i}\left( 0\le S_{\tau _{n-1}(i)}\le y-x\right) \ \mathbb {P}_{i}(D^{i}\in dy) \\&\quad \asymp \int _{[x,\infty )} J_{i}(y-x)\ \mathbb {P}_{i}(D^{i}\in dy) \end{aligned}$$

for \(x\in \mathbb {R}_{\geqslant }\). Since \(J_{i}\) is nondecreasing, we then infer

$$\begin{aligned} \mathbb {E}_{i}|S_{\tau _{\eta -1}(i)}-D_{\eta }^{i}|^{\alpha }\,\mathbf {1}_{\{\eta <\infty \}}&\asymp \ \int _{0}^{\infty }x^{\alpha -1}\,F_{i}(x)\ dx\\&\lesssim \ \int _0^\infty \bigg (x^{\alpha -1} \int _{[x,\infty )} J_{i}(y-x) \ \mathbb {P}_{i}(D^{i}\in dy)\bigg ) \ dx\\&\le \int \bigg ( \int _0^y x^{\alpha -1}\,J_{i}(y)\ dx\bigg ) \ \mathbb {P}_{i}(D^{i}\in dy)\\&\asymp \int y^\alpha \, J_{i}(y)\ \mathbb {P}_{i}(D^{i}\in dy)\\&= \mathbb {E}_{i}(D^{i})^\alpha \,J_{i}(D^{i}). \end{aligned}$$

The last expression is finite iff (8.5) holds.

On the other hand, Lemma 8.5 below (with \(\alpha =0\)) provides us with

$$\begin{aligned} F_{i}(x)&= \sum _{n\ge 1} \mathbb {P}_{i}(-S_{\tau _{n-1}(i)}+D_{n}^{i}\ge x,\, \eta =n)\\&= \int _{[x,\infty )}\sum _{n\ge 1}\mathbb {P}_{i}\bigg (S_{\tau _{n-1}}\le y-x,\min _{1\le k\le \tau _{n-1}}S_{k}>0\bigg )\ \mathbb {P}_{i}(D^{i}\in dy)\\&\gtrsim \int _{(x,\infty )}J_{i}(y-x)\ \mathbb {P}_{i}(D^{i}\in dy), \end{aligned}$$

and this implies \(\mathbb {E}_{i}\left| S_{\tau _{\eta -1}}-D_{\eta }^{i}\right| ^{\alpha }\mathbf {1}_{\{\eta<\infty \}}\ \gtrsim \ \mathbb {E}_{i}(D^{i})^{\alpha }\,J_{i}(D^{i})\) by the same calculation as above.

We thus see that (8.7) and (8.5) are indeed equivalent.\(\square \)

The next lemma will also be needed in the next subsection (see the Proof of Lemma 8.11).

Lemma 8.5

Suppose that \((M_{n},S_{n})_{n\ge 0}\) is positive divergent and let \((i,x)\in \mathcal {S}\times \mathbb {R}_{\geqslant }\) be such that \(\mathbb {P}_{i}(\sigma ^{\leqslant }(-x)=\infty )>0\). Then as \(y\rightarrow \infty \),

$$\begin{aligned} J_{i}(y)^{1+\alpha }\,\lesssim \ \mathbb {E}_{i}\left( \sum _{n\ge 1}\tau _{n}(i)^{\alpha }\,\mathbf {1}_{\left\{ S_{\tau _{n}(i)}\le y,\min _{1\le k\le \tau _{n}(i)}S_{k}>-x\right\} }\right) \end{aligned}$$
(8.8)

for any \(\alpha \ge 0\).

Proof

We begin with some preliminary considerations. Pick \(j\in \mathcal {S}\) (possibly \(=i\)) such that \(\mathbb {P}_{j}(\sigma ^{\leqslant }=\infty )>0\). By Proposition 4.1, j is a recurrent state for the dual ladder chain \(({}^{\#}M_{n}^{>})_{n\ge 0}\). Hence, if \(\kappa _{m}\) denotes the mth strictly ascending ladder epoch of \(({}^{\#}S_{n})_{n\ge 0}\) with \({}^{\#}M_{\kappa _{m}}=j\) for each \(m\in \mathbb {N}\), then these epochs are all \(\mathbb {P}_{j}\)-a.s. finite and \(({}^{\#}S_{\kappa _{n}})_{n\ge 0}\) forms a subsequence of \(({}^{\#}S_{{}^{\#}\tau ^{>}_{n}(j)})_{n\ge 0}\) and an ordinary RW with positive increments. Moreover,

$$\begin{aligned} \kappa _{1}\ =\ {}^{\#}\tau ^{>}_{\vartheta }(j) \end{aligned}$$

for a stopping time \(\vartheta \) with respect to the filtration

$$\begin{aligned} \sigma \left( {}^{\#}\tau ^{>}_{1}(j),\ldots ,{}^{\#}\tau ^{>}_{n}(j),\, \left( {}^{\#}M_{k},{}^{\#}S_{k}\right) _{1\le k\le {}^{\#}\tau ^{>}_{n}(j)}\right) ,\quad n\ge 0, \end{aligned}$$

and \(\mathbb {E}_{j}\vartheta \le \mathbb {E}_{j}\kappa _{1}<\infty \). As a consequence, by using Wald’s identity,

$$\begin{aligned} \mathbb {E}_{j}\left( {}^{\#}S_{{}^{\#}\tau (j)}^{+}\wedge y\right)&\le \mathbb {E}_{j}\left( {}^{\#}S_{\kappa _{1}}\wedge y\right) \\&\le \mathbb {E}_{j}\left( \sum _{k=1}^{\vartheta }\left( \left( {}^{\#}S_{{}^{\#}\tau ^{>}_{k}(j)}-{}^{\#}S_{{}^{\#}\tau ^{>}_{k-1}(j)}\right) \wedge y\right) \right) \\&= \mathbb {E}_{j}\left( {}^{\#}S_{{}^{\#}\tau ^{>}(j)}\wedge y\right) \,\mathbb {E}_{j}\vartheta , \end{aligned}$$

giving

$$\begin{aligned} \mathbb {E}_{j}\left( {}^{\#}S_{\kappa _{1}}\wedge y\right) \ \asymp \ \mathbb {E}_{j}\big ({}^{\#}S_{{}^{\#}\tau (j)}^{+}\wedge y\big )\ =\ \mathbb {E}_{j}\big (S_{\tau (j)}^{+}\wedge y\big ) \end{aligned}$$
(8.9)

when recalling Lemma 7.5(e) and observing that \(S_{\tau (j)},{}^{\#}S_{{}^{\#}\tau (j)}\) have the same distribution under \(\mathbb {P}_{j}\). Finally, we infer with the help of (3.2) for the ordinary RW \(({}^{\#}S_{\kappa _{m}})_{m\ge 0}\) that

$$\begin{aligned} \sum _{m\ge 1}m^{\alpha }\,\mathbb {P}_{j}\left( {}^{\#}S_{\kappa _{m}}\le y\right)&\asymp \ \left( \frac{y}{\mathbb {E}_{j}\left( {}^{\#}S_{\kappa _{1}}\wedge y\right) }\right) ^{1+\alpha }\ \asymp \ J_{j}(y)^{1+\alpha } \end{aligned}$$
(8.10)

as \(y\rightarrow \infty \).

Now we are ready to prove (8.8). Since \(\mathbb {P}_{i}(\sigma ^{\leqslant }(-x)=\infty )>0\), there exist \(x_{1}\in \mathbb {R}_{\geqslant }\) and \(n_{1},n_{2}\in \mathbb {N}\) such that

$$\begin{aligned} E\ :=\ \left\{ \min _{1\le k\le n_{1}}S_{k}>-x,\,S_{n_{1}}\le x_{1},\,M_{n_{1}}=j,\,\tau _{n_{2}}\le n_{1}<\tau _{n_{2}+1}\right\} \end{aligned}$$

has positive probability under \(\mathbb {P}_{i}\). Then

$$\begin{aligned}&\mathbb {E}_{i}\left( \sum _{n\ge 1}\tau _{n}^{\alpha }\,\mathbf {1}_{\{S_{\tau _{n}}\le y,\min _{1\le k\le \tau _{n}}S_{k}>-x\}}\right) \\&\quad \ge \mathbb {E}_{i}\left( \mathbf {1}_{E}\sum _{n>n_{2}}\tau _{n}^{\alpha }\,\mathbf {1}_{\{S_{\tau _{n}}-S_{n_{1}}\le y-x_{1},\min _{n_{1}<k\le \tau _{n}}(S_{k}-S_{n_{1}})>0\}}\right) \\&\quad \ge \mathbb {P}_{i}(E)\,\mathbb {E}_{j}\left( \sum _{n\ge 1}\tau _{n}^{\alpha }\,\mathbf {1}_{\{S_{\tau _{n}}\le y-x_{1},\min _{1\le k\le \tau _{n}}S_{k}>0\}}\right) \\&\quad \gtrsim \sum _{m,n\ge 1}m^{\alpha }a_{m,n} \end{aligned}$$

where

$$\begin{aligned} a_{m,n} := \mathbb {P}_{j}\big (S_{\tau _{n}}\le y-x_{1},\min _{1\le k\le \tau _{n}}S_{k}>0,\,\tau _{m}(j)\le \tau _{n}<\tau _{m+1}(j)\big ). \end{aligned}$$

Now use the duality relation [see (4.3)]

$$\begin{aligned} a_{m,n}&= \frac{\pi _{i}}{\pi _{j}}\,\mathbb {P}_{i}\big (y-x_{1}\ge {}^{\#}S_{^{\#}\tau _{m+1}(j)}>{}^{\#}S_{k}\text { for }0\le k<{}^{\#}\tau _{m+1}(j),\\&\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad {}^{\#}\tau _{n-1}\le {}^{\#}\tau _{m+1}(j)<{}^{\#}\tau _{n}\big )\\&=\ \frac{\pi _{i}}{\pi _{j}}\sum _{l=1}^{m+1}\mathbb {P}_{i}\big ({}^{\#}S_{\kappa _{l}}\le y-x_{1},{}^{\#}\tau _{n-1}\le \kappa _{l}<{}^{\#}\tau _{n},{}^{\#}\tau _{m+1}(j)=\kappa _{l}\big ) \end{aligned}$$

and choose \(x_{2}\) so large that \(\mathbb {P}_{i}({}^{\#}S_{\kappa _{1}}\le x_{2})>0\). Then

$$\begin{aligned} \sum _{m,n\ge 1}m^{\alpha }a_{m,n}&\ge \frac{\pi _{i}}{\pi _{j}}\sum _{m,n\ge 1}\sum _{l=1}^{m+1}l^{\alpha }\,\mathbb {P}_{i} \big ({}^{\#}S_{\kappa _{l}}\le y-x_{1},\,{}^{\#}\tau _{n-1}\le \kappa _{l}<{}^{\#}\tau _{n},{}^{\#}\tau _{m+1}(j)=\kappa _{l}\big ) \\&= \frac{\pi _{i}}{\pi _{j}}\sum _{l\ge 1}l^{\alpha }\,\mathbb {P}_{i}\big ({}^{\#}S_{\kappa _{l}}\le y-x_{1}\big )\\&\gtrsim \sum _{l\ge 1}l^{\alpha }\,\mathbb {P}_{j}\big ({}^{\#}S_{\kappa _{l}}\le y-x_{1}-x_{2}\big )\ \asymp \ J_{j}(y-x_{1}-x_{2})^{1+\alpha }, \end{aligned}$$

where (8.10) has been used for the last relation. The proof is herewith complete because \(J_{j}(y-x_{1}-x_{2})\asymp J_{j}(y)\asymp J_{i}(y)\) as \(y\rightarrow \infty \) by Lemma 7.5.\(\square \)

8.3 Proof of Theorem 6.3

We have organized the proof of the theorem as follows: After the auxiliary Lemma 8.6, Lemmata 8.7 and 8.8 will establish “(a)\(\Rightarrow \)(b)” and its converse, respectively, the latter even without the assumption that \(\mathbb {E}_{i}\tau (i)^{1+\alpha }<\infty \). Then “(c)\(\Rightarrow \)(d)” will be shown by Lemma 8.10 and “(d)\(\Rightarrow \)(a)” by Lemma 8.11. Since “(b)\(\Rightarrow \)(c)” is clear upon noting that \(\sigma _{\min }\le \rho (0)\), this completes the proof of the equivalence of (a)–(d).

Lemma 8.6

Let \((M_{n},S_{n})_{n\ge 0}\) be a nontrivial MRW with \(\mathbb {E}_{i}\tau (i)^{1+\alpha }<\infty \) and T an arbitrary nonnegative random variable. Then

$$\begin{aligned} \mathbb {E}_{i}T^{\alpha }<\infty \quad \text {iff}\quad \sum _{n\ge 1}n^{\alpha -1}\,\mathbb {P}_{i}(T>\tau _{n}(i))<\infty . \end{aligned}$$

Proof

Use (13.2) in Appendix (cf. Proof of Lemma 13.2) to infer

$$\begin{aligned} \mathbb {E}_{i}T^{\alpha }&\asymp \sum _{n\ge 1}n^{\alpha -1}\,\mathbb {P}_{i}\left( T>\frac{n}{2\,\mathbb {E}_{i}\tau },\,\tau _{n}\le 2n\,\mathbb {E}_{i}\tau \right) \\&\asymp \sum _{n\ge 1}n^{\alpha -1}\,\mathbb {P}_{i}\left( T>\tau _{n},\,\tau _{n}\le 2n\,\mathbb {E}_{i}\tau \right) \\&\asymp \sum _{n\ge 1}n^{\alpha -1}\,\mathbb {P}_{i}(T>\tau _{n}), \end{aligned}$$

and thus the asserted equivalence.\(\square \)

Lemma 8.7

Let \((M_{n},S_{n})_{n\ge 0}\) be a positive divergent MRW with \(\mathbb {E}_{i}\tau (i)^{1+\alpha }<\infty \). Then \(\mathbb {E}_{i}J_{i}(D^{i})^{1+\alpha }<\infty \) implies \(\mathbb {E}_{i}\rho (x)^{\alpha }<\infty \) for all \(x\in \mathbb {R}_{\geqslant }\) and then also \(\mathbb {E}_{j}\rho (x)^{\alpha }<\infty \) for all \((j,x)\in \mathcal {S}\times \mathbb {R}_{\geqslant }\).

Proof

Under the stated assumptions, \((S_{n})_{n\ge 0}\) is positive divergent and either

$$\begin{aligned} \mathbb {E}_{i}S_{\tau }^{+}<\ \infty \quad \text {and}\quad \mathbb {E}_{i} J_{i}(D^{i})^{1+\alpha }\ \asymp \ \mathbb {E}_{i}(D^{i})^{1+\alpha } <\ \infty , \end{aligned}$$
(8.11)

or

$$\begin{aligned} \mathbb {E}_{i}J_{i}(D^{i})^{1+\alpha } <\ \infty = \mathbb {E}_{i}S_{\tau }^{+}. \end{aligned}$$
(8.12)

These two cases will be treated separately hereafter.

Suppose first that (8.11) holds; in particular, \(0<\lim _{y\rightarrow \infty }A_{i}(y)=\mathbb {E}_{i}S_{\tau }<\infty \), for \((S_{\tau _{n}})_{n\ge 0}\) is positive divergent. Put \(\mu :=\mathbb {E}_{i}S_{\tau }/(2\,\mathbb {E}_{i}\tau )\). Then

$$\begin{aligned} \mathbb {E}_{i}(S_{\tau }-\mu \tau )\ =\ \frac{\mathbb {E}_{i}S_{\tau }}{2}\ >\ 0, \end{aligned}$$

and since \(\mathbb {E}_{i}\tau ^{1+\alpha }<\infty \), we have

$$\begin{aligned} \mathbb {E}_{i}(D^{i}+\mu \tau )^{1+\alpha } <\ \infty . \end{aligned}$$

Consequently, we may use Theorem 6.7 for the MRW \((M_{n},S_{n}-\mu n)_{n\ge 0}\) to infer

$$\begin{aligned} \mathbb {E}_{i}\left| \min _{n\ge 0}\left( S_{n}-\mu n\right) \right| ^{\alpha }\ <\ \infty . \end{aligned}$$

Now \(\mathbb {E}_{i}\rho (x)^{\alpha }<\infty \) for all \(x\in \mathbb {R}_{\geqslant }\) follows from

$$\begin{aligned} \mu \rho (x)\ \le \ x-\left( S_{\rho (x)}-\mu \rho (x)\right) \ \le \ x-\min _{n\ge 0}\left( S_{n}-\mu n\right) \end{aligned}$$

(see [31, p. 871,(i)\(\Rightarrow \)(ii)]).

Assuming (8.12), it suffices to prove \(\mathbb {E}_{i}\widehat{\rho }_{i}(x)^{\alpha }<\infty \) for all \(x\in \mathbb {R}_{\geqslant }\), where

$$\begin{aligned} \widehat{\rho }_{i}(x)\,:=\,\sup \left\{ n\ge 0:S_{\tau _{n}}-D_{n+1}^{i}\le x\right\} \end{aligned}$$

because \(\{\rho (x)>\tau _{n}\}=\{\widehat{\rho }_{i}(x)\ge n\}\) for all \(n\ge 1\), so that Lemma 8.6 applies. Put \(\widehat{X}_{n}:=S_{\tau _{n}}-S_{\tau _{n-1}}\) for \(n\ge 1\) and observe that

$$\begin{aligned} S_{\tau _{n}}-D_{n+1}^{i}&= \sum _{k=1}^{n}\widehat{X}_{k}^{+}-\sum _{k=1}^{n}\widehat{X}_{k}^{-}-D_{n+1}^{i} \ge \sum _{k=1}^{n}\widehat{X}_{k}^{+}-\sum _{k=1}^{n+1}D_{k}^{i}\nonumber \\&\ge \sum _{\varepsilon \in \{0,1\}}\left( \sum _{k=1}^{n}R_{k}^{\varepsilon }-\left( R_{n+1}^{\varepsilon }\right) ^{-}\right) , \end{aligned}$$
(8.13)

where

$$\begin{aligned}&R_{n}^{0} := \widehat{X}_{n}^{+}\mathbf {1}_{\{\theta _{n}=0\}}-D_{n}^{i}\mathbf {1}_{\{\theta _{n}=1\}}\quad \text {and}\quad R_{n}^{1} := \widehat{X}_{n}^{+}\mathbf {1}_{\{\theta _{n}=1\}}-D_{n}^{i}\mathbf {1}_{\{\theta _{n}=0\}} \end{aligned}$$

for a sequence \((\theta _{n})_{n\ge 1}\) of iid symmetric Bernoulli variables which are independent of all other occurring random variables. Notice that \((\sum _{k=1}^{n}R_{k}^{\varepsilon })_{n\ge 0}\) forms an ordinary random walk for each \(\varepsilon \in \{0,1\}\). We claim that its last exit time from \((-\infty ,x]\), \(\rho _{\varepsilon }(x)\), is the same as

$$\begin{aligned} \rho _{\varepsilon }'(x) := \sup \left\{ n\ge 0:\sum _{k=1}^{n}R_{k}^{\varepsilon }-\left( R_{n+1}^{\varepsilon }\right) ^{-}\le x\right\} . \end{aligned}$$

Indeed, \(\rho _{\varepsilon }'(x)\ge \rho _{\varepsilon }(x)\) is obvious. For the reverse inequality, suppose \(\rho _{\varepsilon }'(x)=n\) and thus

$$\begin{aligned} \sum _{k=1}^{n}R_{k}^{\varepsilon }-\left( R_{n+1}^{\varepsilon }\right) ^{-}\ \le \ x. \end{aligned}$$
(8.14)

Then \(R_{n+1}^{\varepsilon }\ge 0\) entails \(\rho _{\varepsilon }(x)\ge n\), while \(R_{n+1}^{\varepsilon }<0\) entails \(x\ge \sum _{k=1}^{n}R_{k}^{\varepsilon }-(R_{n+1}^{\varepsilon })^{-}=\sum _{k=1}^{n+1}R_{k}^{\varepsilon }\) and therefore \(\rho _{\varepsilon }(x)\ge n+1\).
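The identity \(\rho _{\varepsilon }'(x)=\rho _{\varepsilon }(x)\) just shown can be confirmed on concrete finite trajectories. The following sketch is ours, with hypothetical increment sequences padded by large positive steps (playing the role of drift to \(+\infty \)) so that both suprema are attained within the horizon; it computes both quantities directly from the definitions:

```python
def last_exit(xs, x):
    """rho(x) = sup{n >= 0 : S_n <= x}, S_0 = 0, S_n = xs[0] + ... + xs[n-1]."""
    s, rho = 0.0, 0
    for n, inc in enumerate(xs, start=1):
        s += inc
        if s <= x:
            rho = n
    return rho

def last_exit_prime(xs, x):
    """rho'(x) = sup{n >= 0 : S_n - (R_{n+1})^- <= x}, with R_{n+1} = xs[n]."""
    partial = [0.0]
    for inc in xs:
        partial.append(partial[-1] + inc)      # partial[n] = S_n
    rho = 0
    for n in range(len(xs)):
        if partial[n] - max(-xs[n], 0.0) <= x:
            rho = n
    return rho

# both definitions give the same last exit time on this toy path
print(last_exit([1, -3, 5, 100, 100], 0), last_exit_prime([1, -3, 5, 100, 100], 0))  # -> 2 2
```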

Now (8.13) implies \(\widehat{\rho }_{i}(x)\le \rho _{0}(x)\vee \rho _{1}(x)\). Since

$$\begin{aligned} \left( R_{1}^{\varepsilon }\right) ^{+} = S_{\tau }^{+}\mathbf {1}_{\{\theta _{1}=\varepsilon \}}\quad \text {and}\quad \left( R_{1}^{\varepsilon }\right) ^{-}\ =\ D_{1}^{i}\mathbf {1}_{\{\theta _{1}=1-\varepsilon \}}, \end{aligned}$$

we have, by (8.12), that

$$\begin{aligned} C(\beta ) := \int \left( \frac{y}{\mathbb {E}_{i}\left( \left( R_{1}^{\varepsilon }\right) ^{+}\wedge y\right) }\right) ^{1+\beta }\ \mathbb {P}_{i}\left( \left( R_{1}^{\varepsilon }\right) ^{-}\in dy\right) <\ \infty \end{aligned}$$

for any \(\beta \in [0,\alpha ]\) and \(\mathbb {E}_{i}|R_{1}^{\varepsilon }|\ge \mathbb {E}_{i}(R_{1}^{\varepsilon })^{+}=\mathbb {E}_{i}S_{\tau }^{+}/2=\infty \). The latter in combination with \(C(0)<\infty \) ensures the positive divergence of \((\sum _{k=1}^{n}R_{k}^{\varepsilon })_{n\ge 0}\) for each \(\varepsilon \in \{0,1\}\) as pointed out in Remark 3.2. Consequently, \(\mathbb {E}_{i}\widehat{\rho }_{i}(x)^{\alpha }\le \mathbb {E}_{i}(\rho _{0}(x)\vee \rho _{1}(x))^{\alpha }<\infty \), and thus \(\mathbb {E}_{i}\rho (x)^{\alpha }<\infty \) by Lemma 8.6. The extension of the last conclusion to all \(i\in \mathcal {S}\) follows because \(\mathbb {E}_{i}\tau (i)^{1+\alpha }<\infty \) and \(\mathbb {E}_{i}J_{i}(D^{i})^{1+\alpha }<\infty \) are solidarity properties (for the last two assertions use Lemma 7.7 with \(\gamma =1\)).\(\square \)

Lemma 8.8

If \(\mathbb {E}_{i}\rho (x)^{\alpha }<\infty \) for some \((i,x)\in \mathcal {S}\times \mathbb {R}_{\geqslant }\) and \(\alpha >0\), then \(\mathbb {E}_{i}J_{i}(D^{i})^{1+\alpha }<\infty \).

Proof

Since \((S_{n})_{n\ge 0}\) is positive divergent under the proviso, there exists \(x_{0}>0\) such that \(\mathbb {P}_{i}(\sigma ^{\leqslant }(-x_{0})=\infty )=:p>0\). Plainly, \(\mathbb {E}_{i}\rho (-x_{0})^{\alpha }\le \mathbb {E}_{i}\rho (x)^{\alpha }<\infty \). Use

$$\begin{aligned}&\{\rho (-x_{0})>n/2\}\ \supset \ \bigcup _{n/2<k\le n}\left\{ S_{\tau ^{>}_{k}}\le D_{k+1}^{i,>}-x_{0}\right\} \\&\quad \supset \ \bigcup _{n/2<k\le n}\left\{ S_{\tau ^{>}_{k}}\le D_{k+1}^{i,>}-x_{0},\,\inf _{l\ge k+1}\left( S_{\tau ^{>}_{l}}-D_{l+1}^{i,>}\right)>-x_{0}\right\} \\&\quad \supset \ \bigcup _{n/2<k\le n}\left\{ S_{\tau ^{>}_{k}}\le D_{k+1}^{i,>}-x_{0},\,\inf _{l\ge \tau ^{>}_{k+1}}\left( S_{l}-S_{\tau ^{>}_{k+1}}\right) >-x_{0}\right\} \end{aligned}$$

to infer that

$$\begin{aligned} \mathbb {P}_{i}(\rho (-x_{0})>n/2)\ \ge \ \frac{pn}{3}\,\mathbb {P}_{i}\left( S_{\tau ^{>}_{n}}\le D_{n+1}^{i,>}-x_{0}\right) \end{aligned}$$

and thereupon

$$\begin{aligned} \sum _{n\ge 1}n^{\alpha }\,\mathbb {P}_{i}\left( S_{\tau ^{>}_{n}}\le D_{n+1}^{i,>}-x_{0}\right) \ \lesssim \ \sum _{n\ge 1}n^{\alpha -1}\,\mathbb {P}_{i}(\rho (-x_{0})>n/2) <\ \infty , \end{aligned}$$

as \(\mathbb {E}_{i}\rho (-x_{0})^{\alpha }<\infty \). Now use (3.2) for the ordinary RW \((S_{\tau ^{>}_{n}})_{n\ge 0}\) to infer

$$\begin{aligned} \sum _{n\ge 1}n^{\alpha }\,\mathbb {P}_{i}\left( S_{\tau ^{>}_{n}}\le y\right) \ \asymp \ J_{i}^{>}(y)^{1+\alpha } \end{aligned}$$

and thereupon

$$\begin{aligned} \infty> \sum _{n\ge 1}n^{\alpha }\,\mathbb {P}_{i}\big (S_{\tau ^{>}_{n}}\le D_{n+1}^{i,>}-x_{0}\big )\ \asymp \ \int _{[x_{0},\infty )}J_{i}^{>}(y-x_0)^{1+\alpha }\ \mathbb {P}_{i}(D^{i,>}\in dy). \end{aligned}$$

Finally, Lemma 7.5(e) provides us with

$$\begin{aligned} J_{i}^{>}(y-x_{0})\ \asymp \ J_{i}(y) \end{aligned}$$

for all \(y\in \mathbb {R}_{\geqslant }\), and since these functions are nondecreasing in y and \(D^{i}\le D^{i,>}\), we see that \(\mathbb {E}_{i}J_{i}(D^{i})^{1+\alpha }<\infty \) as claimed.\(\square \)

Lemma 8.9

For any \(i\in \mathcal {S}\), \(\mathbb {E}_{i}\rho (0)^{\alpha }<\infty \) implies \(\mathbb {E}_{i}\sigma _{\min }^{\alpha }<\infty \).

Proof

The assertion follows directly from \(\rho (S_{\tau })-\tau \mathop {=}\limits ^{d}\rho (0)\) under \(\mathbb {P}_{i}\) and \(\rho (S_{\tau })\ge \sigma _{\min }\) \(\mathbb {P}_{i}\)-a.s.\(\square \)

Lemma 8.10

Let \(\alpha >0\) and \(i\in \mathcal {S}\). Then \(\mathbb {E}_{i}\sigma _{\min }^{\alpha }<\infty \) implies

$$\begin{aligned} \mathbb {E}_{i}\sigma ^{\leqslant }(-x)^{\alpha }\mathbf {1}_{\{\sigma ^{\leqslant }(-x)<\infty \}} <\ \infty \end{aligned}$$

for all \(x\in \mathbb {R}_{\geqslant }\) as well as \(\mathbb {P}_{i}(\sigma ^{\leqslant }(-x)=\infty )>0\) for all sufficiently large x.

Proof

The first assertion follows from the obvious inequality

$$\begin{aligned} \sigma ^{\leqslant }(-x)^{\alpha }\mathbf {1}_{\{\sigma ^{\leqslant }(-x)<\infty \}}\ \le \ \sigma _{\min }^{\alpha }\,\mathbf {1}_{\{\sigma ^{\leqslant }(-x)<\infty \}} \end{aligned}$$


for any \(x\in \mathbb {R}_{\geqslant }\), while the second one must hold because \(\sigma _{\min }<\infty \) \(\mathbb {P}_{i}\)-a.s. entails the positive divergence of \((S_{n})_{n\ge 0}\). \(\square \)

Lemma 8.11

If, for some \((i,x)\in \mathcal {S}\times \mathbb {R}_{\geqslant }\), \(\mathbb {E}_{i}\sigma ^{\leqslant }(-x)^{\alpha }\mathbf {1}_{\{\sigma ^{\leqslant }(-x)<\infty \}}<\infty \) and \(\mathbb {P}_{i}(\sigma ^{\leqslant }(-x)=\infty )>0\), then \(A_{i}(y)>0\) for all sufficiently large y and \(\mathbb {E}_{i}J_{i}(D^{i})^{1+\alpha }<\infty \).

Proof

First, \(\mathbb {P}_{i}(\sigma ^{\leqslant }(-x)=\infty )>0\) implies \(S_{n}\rightarrow \infty \) \(\mathbb {P}_{i}\)-a.s. and thus \(A_{i}(y)>0\) for all sufficiently large y by Theorem 6.1. Next, we infer

$$\begin{aligned} \infty&> \mathbb {E}_{i}\sigma ^{\leqslant }(-x)^{\alpha }\mathbf {1}_{\{\sigma ^{\leqslant }(-x)<\infty \}}\ =\ \mathbb {E}_{i}\left( \sum _{n\ge 1}n^{\alpha }\,\mathbf {1}_{\{\sigma ^{\leqslant }(-x)=n\}}\right) \\&\ge \mathbb {E}_{i}\left( \sum _{n\ge 1}\sum _{k=\tau _{n}+1}^{\tau _{n+1}}k^{\alpha }\,\mathbf {1}_{\{\sigma ^{\leqslant }(-x)=k\}}\right) \\&\ge \mathbb {E}_{i}\left( \sum _{n\ge 1}\tau _{n}^{\alpha }\,\mathbf {1}_{\{\tau _{n}<\sigma ^{\leqslant }(-x)\le \tau _{n+1}\}}\right) \\&= \int \mathbb {E}_{i}\left( \sum _{n\ge 1}\tau _{n}^{\alpha }\,\mathbf {1}_{\{S_{\tau _{n}}\le y-x,\min _{1\le k\le \tau _{n}}S_{k}>-x\}}\right) \,\mathbb {P}_{i}(D^{i}\in dy), \end{aligned}$$

and then \(\mathbb {E}_{i}J_{i}(D^{i})^{1+\alpha }<\infty \) by an appeal to Lemma 8.5.\(\square \)

8.4 Proof of Theorem 6.4

We first point out that if \({\varSigma }_{\alpha }(i,0)<\infty \) for some \(i\in \mathcal {S}\), then \({\varSigma }_{\alpha }(j,0)<\infty \) for all \(j\in \mathcal {S}\). To see this, we note that

$$\begin{aligned} \sum _{n\ge 1}n^{\alpha -1}\mathbb {P}_{i}(S_{\tau _{n}(i)}\le x)\ \asymp \ \mathbb {E}_{i}\left( \sum _{n\ge 1}\tau _{n}(i)^{\alpha -1}\mathbf {1}_{\{S_{\tau _{n}(i)}\le x\}}\right) \ \le \ {\varSigma }_{\alpha }(i,x) <\ \infty \end{aligned}$$
(8.15)

(see Lemma 13.3(b) in Appendix) in combination with Theorem 3.3 implies \(A_{i}(x)>0\) for all sufficiently large x and \(\mathbb {E}_{i}J_{i}(S_{\tau (i)}^{-})<\infty \). In particular, we obtain \(\mathbb {E}_{i}\tau ^{>}(i)^{1+\alpha }<\infty \) and then also \(\mathbb {E}_{j}\tau ^{>}(i)^{1+\alpha }<\infty \) by a straightforward argument. Now

$$\begin{aligned} {\varSigma }_{\alpha }(j,0)&\le \mathbb {E}_{j}\left( \sum _{n=1}^{2 \tau ^{>}(i)}n^{\alpha -1} + \sum _{n\ge 1} (\tau ^{>}(i)+n)^{\alpha -1}\, \mathbf {1}_{\{(S_{\tau ^{>}(i)+n}-S_{\tau ^{>}(i)})+S_{\tau ^{>}(i)}\le 0\}}\right) \\&\lesssim \mathbb {E}_{j}\tau ^{>}(i)^{1+\alpha }\ +\ \mathbb {E}_{j}\left( \sum _{n\ge 1} (2n)^{\alpha -1}\,\mathbf {1}_{\{S_{\tau ^{>}(i)+n}-S_{\tau ^{>}(i)}\le 0\}}\right) \\&\le \mathbb {E}_{j}\tau ^{>}(i)^{1+\alpha }\ +\ 2^{\alpha -1}{\varSigma }_{\alpha }(i,0) <\ \infty . \end{aligned}$$

In order to prove Theorem 6.4, it is therefore enough to show that, for any fixed \((i,x)\in \mathcal {S}\times \mathbb {R}_{\geqslant }\) (and with \(\tau _{n}=\tau _{n}(i)\) and \(\tau ^{>}=\tau ^{>}(i)\)),

$$\begin{aligned} {\varSigma }_{\alpha }(i,x)\ \asymp \ \int \sum _{n\ge 1}n^{\alpha -1}\mathbb {P}_{i}(S_{\tau _{n}}\le x+y)\ \mathbb {V}_{i}(dy) \end{aligned}$$
(8.16)

where

$$\begin{aligned} \sum _{n\ge 1}n^{\alpha -1}\mathbb {P}_{i}(S_{\tau _{n}}\le x+y)\ \asymp \ {\left\{ \begin{array}{ll} \log J_{i}(x+y)\,\asymp \,\log J_{i}(y),&{}\text {if }\alpha =0,\\ J_{i}(x+y)^{\alpha }\,\asymp \,J_{i}(y)^{\alpha },&{}\text {if }\alpha >0, \end{array}\right. } \end{aligned}$$

as \(y\rightarrow \infty \) should be recalled [see Lemma 13.3, (3.2) and the subsequent remark]. Note that we may replace \(\mathbb {V}_{i}\) with \(\mathbb {W}_{i}:=\mathbb {E}_{i}\big (\sum _{n=1}^{\tau (i)}\mathbf {1}_{\{-S_{n}\in \cdot \}}\big )\) in (8.16), for

$$\begin{aligned} \int&\sum _{n\ge 1}n^{\alpha -1}\mathbb {P}_{i}(S_{\tau _{n}}\le x+y)\ \mathbb {V}_{i}(dy)\\&\le \ \int \sum _{n\ge 1}n^{\alpha -1}\mathbb {P}_{i}(S_{\tau _{n}}\le x+y)\ \mathbb {W}_{i}(dy)\ +\ \mathbb {V}_{i}(\{0\})\sum _{n\ge 1}n^{\alpha -1}\mathbb {P}_{i}(S_{\tau _{n}}\le x)\\&\le \ \int \sum _{n\ge 1}n^{\alpha -1}\mathbb {P}_{i}(S_{\tau _{n}}\le x+y)\ \mathbb {V}_{i}(dy)\ +\ \mathbb {V}_{i}(\{0\})\sum _{n\ge 1}n^{\alpha -1}\mathbb {P}_{i}(S_{\tau _{n}}\le x)\\&\le \ 2\int \sum _{n\ge 1}n^{\alpha -1}\mathbb {P}_{i}(S_{\tau _{n}}\le x+y)\ \mathbb {V}_{i}(dy). \end{aligned}$$

We now distinguish between the cases \(0\le \alpha \le 1\) and \(\alpha >1\) and put \(\chi _{n}:=\tau _{n}-\tau _{n-1}\) for \(n\ge 1\) which are iid copies of \(\tau =\tau (i)\) under \(\mathbb {P}_{i}\).

Case 1. \(0\le \alpha \le 1\).

Using \((\tau _{n-1}+k)^{\alpha -1}\le n^{\alpha -1}\) for \(k=1,\ldots ,\chi _{n}\) and \(n\ge 1\), we find

$$\begin{aligned} {\varSigma }_{\alpha }(i,x)&= \mathbb {E}_{i}\left( \sum _{n\ge 1}\sum _{k=1}^{\chi _{n}}(\tau _{n-1}+k)^{\alpha -1}\mathbf {1}_{\{S_{\tau _{n-1}+k}\le x\}}\right) \\&\le \sum _{n\ge 1}n^{\alpha -1}\int \mathbb {W}_{i}([y-x,\infty ))\ \mathbb {P}_{i}(S_{\tau _{n-1}}\in dy)\\&= \int \sum _{n\ge 1}n^{\alpha -1}\,\mathbb {P}_{i}(S_{\tau _{n-1}}\le x+y)\ \mathbb {W}_{i}(dy)\\&\lesssim \int \sum _{n\ge 1}n^{\alpha -1}\,\mathbb {P}_{i}(S_{\tau _{n}}\le x+y)\ \mathbb {V}_{i}(dy). \end{aligned}$$

For the reverse inequality note first that

$$\begin{aligned}&\mathbb {E}_{i}\left( \sum _{n\ge 1}\sum _{k=1}^{\chi _{n}}(\tau _{n-1}+k)^{\alpha -1}\mathbf {1}_{\{\tau _{n-1}>2n\,\mathbb {E}_{i}\tau \}}\right) \\&\quad \lesssim \mathbb {E}_{i}\left( \sum _{n\ge 1}n^{\alpha -1}\chi _{n}\,\mathbf {1}_{\{\tau _{n-1}>2n\,\mathbb {E}_{i}\tau \}}\right) \\&\quad = \mathbb {E}_{i}\tau \sum _{n\ge 1}n^{\alpha -1}\,\mathbb {P}_{i}\left( \tau _{n-1}>2n\,\mathbb {E}_{i}\tau \right) <\ \infty , \end{aligned}$$

the finiteness of the last series following from (13.2) in Appendix. We also need that

$$\begin{aligned} \begin{aligned}&\mathbb {E}_{i}\left( \sum _{n\ge 1}n^{\alpha -1}\chi _{n}\mathbf {1}_{\{\chi _{n}>n\}}\right) \ =\ \mathbb {E}_{i}\left( \sum _{n\ge 1}\sum _{k=1}^{\chi _{n}}n^{\alpha -1}\mathbf {1}_{\{\chi _{n}>n\}}\right) \\&\quad \le \ \mathbb {E}_{i}\left( \sum _{n\ge 1}\sum _{k=1}^{n}n^{\alpha -1}\mathbf {1}_{\{\chi _{n}>n\}}\right) \ +\ \mathbb {E}_{i}\left( \sum _{n\ge 1}\sum _{k\ge n}n^{\alpha -1}\mathbf {1}_{\{\chi _{n}>k\}}\right) \\&\quad \lesssim \sum _{n\ge 1}n^{\alpha }\,\mathbb {P}_{i}(\tau>n)\ +\ \sum _{k\ge 1}\sum _{n=1}^{k}n^{\alpha -1}\,\mathbb {P}_{i}(\tau >k)\\&\quad \lesssim \mathbb {E}_{i}\tau ^{1+\alpha }\vee \mathbb {E}_{i}(\tau \log \tau )<\ \infty , \end{aligned} \end{aligned}$$
(8.17)

where \(\sum _{k=1}^{n}k^{-1}\asymp \log n\) and \(\sum _{k=1}^{n}k^{\alpha -1}\asymp n^{\alpha }\) for any \(\alpha >0\) have been utilized for the final estimate. With this at hand, we infer

$$\begin{aligned} {\varSigma }_{\alpha }(i,x)&= \mathbb {E}_{i}\left( \sum _{n\ge 1}\sum _{k=1}^{\chi _{n}}(\tau _{n-1}+k)^{\alpha -1}\mathbf {1}_{\{S_{\tau _{n-1}+k}\le x\}}\right) \\&\ge \mathbb {E}_{i}\left( \sum _{n\ge 1}\sum _{k=1}^{\chi _{n}}(\tau _{n-1}+k)^{\alpha -1}\mathbf {1}_{\{S_{\tau _{n-1}+k}\le x,\,\tau _{n-1}\le 2n\,\mathbb {E}_{i}\tau \}}\right) \\&\gtrsim \mathbb {E}_{i}\left( \sum _{n\ge 1}\sum _{k=1}^{\chi _{n}}(n+k)^{\alpha -1}\mathbf {1}_{\{S_{\tau _{n-1}+k}\le x,\,\chi _{n}\le n\}}\right) \\&\gtrsim \sum _{n\ge 1}n^{\alpha -1}\,\mathbb {E}_{i}\left( \sum _{k=1}^{\chi _{n}}\mathbf {1}_{\{S_{\tau _{n-1}+k}\le x\}}\right) \qquad \text {[here }(8.17)\text { enters]}\\&\asymp \int \sum _{n\ge 1}n^{\alpha -1}\,\mathbb {P}_{i}(S_{\tau _{n}}\le x+y)\ \mathbb {V}_{i}(dy) \end{aligned}$$

as required.

Case 2. \(\alpha >1\).

Here, \((\tau _{n-1}+k)^{\alpha -1}\ge n^{\alpha -1}\) for \(k=1,\ldots ,\chi _{n}\) and \(n\ge 1\) implies

$$\begin{aligned} {\varSigma }_{\alpha }(i,x)&= \mathbb {E}_{i}\left( \sum _{n\ge 1}\sum _{k=1}^{\chi _{n}}(\tau _{n-1}+k)^{\alpha -1}\mathbf {1}_{\left\{ S_{\tau _{n-1}+k}\le x\right\} }\right) \\&\ge \int \sum _{n\ge 1}n^{\alpha -1}\,\mathbb {P}_{i}(S_{\tau _{n-1}}\le x+y)\ \mathbb {W}_{i}(dy)\\&\gtrsim \int \sum _{n\ge 1}n^{\alpha -1}\,\mathbb {P}_{i}(S_{\tau _{n}}\le x+y)\ \mathbb {V}_{i}(dy). \end{aligned}$$

For the reverse estimate, we start from the inequality

$$\begin{aligned} \mathbb {E}_{i}\left( \sum _{n\ge 1}\sum _{k=1}^{\chi _{n}}(\tau _{n-1}+k)^{\alpha -1}\mathbf {1}_{\left\{ S_{\tau _{n-1}+k}\le x\right\} }\right) \, \lesssim \, I_{1}+I_{2}, \end{aligned}$$

where

$$\begin{aligned} I_{1} := \mathbb {E}_{i}\left( \sum _{n\ge 1}\sum _{k=1}^{\chi _{n}}(n+k)^{\alpha -1}\mathbf {1}_{\left\{ S_{\tau _{n-1}+k}\le x\right\} }\right) \end{aligned}$$

and

$$\begin{aligned} I_{2} := \mathbb {E}_{i}\left( \sum _{n\ge 1}\sum _{k=1}^{\chi _{n}}(\tau _{n-1}+k)^{\alpha -1}\mathbf {1}_{\left\{ \tau _{n-1}>2n\,\mathbb {E}_{i}\tau \right\} }\right) . \end{aligned}$$

As for \(I_{1}\), we then obtain

$$\begin{aligned} I_{1}&\lesssim \mathbb {E}_{i}\left( \sum _{n\ge 1}n^{\alpha -1}\sum _{k=1}^{\chi _{n}}\mathbf {1}_{\{S_{\tau _{n-1}+k}\le x\}}\right) \\&\qquad + \mathbb {E}_{i}\left( \sum _{n\ge 1}\sum _{k=1}^{\chi _{n}}(n+k)^{\alpha -1}\mathbf {1}_{\{S_{\tau _{n-1}+k}\le x,\,\chi _{n}>n\}}\right) \\&\lesssim \int \sum _{n\ge 1}n^{\alpha -1}\,\mathbb {P}_{i}(S_{\tau _{n}}\le x+y)\ \mathbb {V}_{i}(dy)\ +\ \mathbb {E}_{i}\left( \sum _{n\ge 1}\chi _{n}^{\alpha }\mathbf {1}_{\{\chi _{n}>n\}}\right) \\&\lesssim \int \sum _{n\ge 1}n^{\alpha -1}\,\mathbb {P}_{i}(S_{\tau _{n}}\le x+y)\ \mathbb {V}_{i}(dy)\ +\ \mathbb {E}_{i}\tau ^{\alpha +1}. \end{aligned}$$

Finally, since \((\tau _{n-1}+k)^{\alpha -1}\le 2^{\alpha -1}(\tau _{n-1}^{\alpha -1}+\chi _{n}^{\alpha -1})\) for \(k=1,\ldots ,\chi _{n}\) and \(n\ge 1\),

$$\begin{aligned} I_{2}&\lesssim \mathbb {E}_{i}\left( \sum _{n\ge 1}\sum _{k=1}^{\chi _{n}}(\tau _{n-1}^{\alpha -1}+\chi _{n}^{\alpha -1})\mathbf {1}_{\left\{ \tau _{n-1}>2n\,\mathbb {E}_{i}\tau \right\} }\right) \\&\lesssim \mathbb {E}_{i}\left( \sum _{n\ge 1}\tau _{n}^{\alpha -1}\mathbf {1}_{\{\tau _{n}>2n\,\mathbb {E}_{i}\tau \}}\right) \ +\ \mathbb {E}_{i}\left( \sum _{n\ge 1}\chi _{n}^{\alpha }\,\mathbf {1}_{\{\tau _{n-1}>2n\,\mathbb {E}_{i}\tau \}}\right) , \end{aligned}$$

and the last two expectations are finite because \(\mathbb {E}_{i}\tau ^{\alpha +1}<\infty \) (for the second expectation this is obvious; for the first one, see Lemma 13.2 in the Appendix).
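The elementary bound invoked in the last display is the standard \(c_{r}\)-inequality \((s+t)^{\beta }\le 2^{\beta }(s^{\beta }+t^{\beta })\) for \(s,t\ge 0\) and \(\beta >0\), applied with \(\beta =\alpha -1\); a quick numerical check (illustrative only, not part of the proof):

```python
# c_r-type bound: (s + t)**beta <= 2**beta * (s**beta + t**beta) for s, t >= 0,
# beta > 0; above it is applied with beta = alpha - 1, s = tau_{n-1}, t = chi_n
# (using k <= chi_n), which yields the constant 2**(alpha - 1).
def cr_bound_holds(s, t, beta):
    return (s + t) ** beta <= 2 ** beta * (s ** beta + t ** beta)

for beta in (0.5, 1.0, 2.5):
    for s in (0.0, 1.0, 3.7, 100.0):
        for t in (0.0, 0.2, 5.0, 99.0):
            assert cr_bound_holds(s, t, beta)
```

The inequality follows from \(s+t\le 2\max (s,t)\), whence \((s+t)^{\beta }\le 2^{\beta }\max (s,t)^{\beta }\le 2^{\beta }(s^{\beta }+t^{\beta })\).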

8.5 Proof of Theorem 6.5

Here, we start with two auxiliary lemmata.

Lemma 8.12

Let \(\alpha >0\) and \(\mathbb {E}_{i}\tau (i)^{\alpha }<\infty \) for some/all \(i\in \mathcal {S}\). Then the set

$$\begin{aligned} \{(i,x)\in \mathcal {S}\times \mathbb {R}_{\geqslant }:\mathbb {E}_{i}N(x)^{\alpha }<\infty \} \end{aligned}$$

is either empty or equal to \(\mathcal {S}\times \mathbb {R}_{\geqslant }\).

Proof

Suppose that \(\mathbb {E}_{i}N(0)^{\alpha }<\infty \) for some \(i\in \mathcal {S}\), so that in particular

$$\begin{aligned} \mathbb {E}_{i}\left( \sum _{n\ge 1}\mathbf {1}_{\{S_{\tau _{n}}\le 0\}}\right) ^{\alpha } <\ \infty , \end{aligned}$$

which in turn is equivalent to \(\mathbb {E}_{i}\nu (x)^{1+\alpha }<\infty \) for all \(x\in \mathbb {R}_{\geqslant }\) by Theorem 3.3. Using [29, Theorem 1.5.1] for \(\alpha \in (0,1)\) and [29, Theorem 1.5.2] for \(\alpha \ge 1\), we obtain

$$\begin{aligned} \mathbb {E}_{i}\tau _{\nu (x)}^{\alpha }\ \lesssim \ \mathbb {E}_{i}\tau ^{\alpha } \cdot \mathbb {E}_{i}\nu (x)^{1\vee \alpha } <\ \infty \end{aligned}$$

for all \(x\in \mathbb {R}_{\geqslant }\). For arbitrary \(j\in \mathcal {S}\), pick \(x_{1}\in \mathbb {R}_{\geqslant }\) such that

$$\begin{aligned} 0 <\ p\ :=\ \mathbb {P}_{i}(S_{\tau (j)}\le x_{1},\,\tau \ge \tau (j)). \end{aligned}$$

It follows that

$$\begin{aligned} \infty \ >\ \mathbb {E}_{i} \tau _{\nu (x+x_{1})}^{\alpha }\ \ge \ \mathbb {E}_{i} \tau _{\nu (x+x_{1})}^{\alpha }\,\mathbf {1}_{\{S_{\tau (j)}\le x_1,\, \tau \ge \tau (j)\}}\ \ge \ p\,\mathbb {E}_{j}\tau _{\nu (x)}^{\alpha } \end{aligned}$$

for all \(x\in \mathbb {R}_{\geqslant }\). Now put

$$\begin{aligned} \widetilde{N}(0)\ :=\ \sum _{n\ge \tau _{\nu (x)}+1} \mathbf {1}_{\{S_{n}-S_{\tau _{\nu (x)}}\le 0\}}, \end{aligned}$$

having the law \(\mathbb {P}_{i}(N(0)\in \cdot )\) under any \(\mathbb {P}_{j}\), and observe that \(N(x)\le \tau _{\nu (x)}+\widetilde{N}(0)\) \(\mathbb {P}_{j}\)-a.s. Therefore,

$$\begin{aligned} \mathbb {E}_{j}N(x)^{\alpha }\ \lesssim \ \mathbb {E}_{j}\tau _{\nu (x)}^{\alpha }\,+\,\mathbb {E}_{i}\, N(0)^{\alpha } <\ \infty \end{aligned}$$

for all \(x\in \mathbb {R}_{\geqslant }\) which completes the proof because \(j\in \mathcal {S}\) was arbitrarily chosen. \(\square \)

For the next result, recall that \(\chi _{n}(i)=\tau _{n}(i)-\tau _{n-1}(i)\) for \(n\in \mathbb {N}\).

Lemma 8.13

For \(\alpha >0\), let \(\mathbb {E}_{i}\tau (i)^{1+\alpha }<\infty \) and \(\mathbb {E}_{i}J_{i}(S_{\tau (i)}^{-})^{1+\alpha }<\infty \). Then

$$\begin{aligned} \mathbb {E}_{i}\left( \sum _{n\ge 1}\chi _{n}(i)\,\mathbf {1}_{\{S_{\tau _{n-1}(i)}\le 0\}}\right) ^{\alpha } <\ \infty . \end{aligned}$$

Proof

Consider an auxiliary MRW \((M_{n}',S_{n}')_{n\ge 0}\) such that \(M'=(M_{n}')_{n\ge 0}\) has state space \(\mathcal {S}'\subset \{0\}\cup \mathbb {N}^{2}\), transition probabilities (with \(\tau =\tau (i)\) as usual)

$$\begin{aligned} p_{0,(n,1)}'=\mathbb {P}_{i}(\tau =n),\quad p_{(n,n-1),0}'=1\quad \text {and}\quad p_{(n,k-1),(n,k)}'=1 \end{aligned}$$

for \(n\in \mathbb {N}\) and \(k=2,\ldots ,n-1\), and the stationary distribution \(\pi _{0}'=(\mathbb {E}_{i}\tau )^{-1}\), \(\pi _{(n,k)}'=(\mathbb {E}_{i}\tau )^{-1}\mathbb {P}_{i}(\tau =n)\) for \(n\in \mathbb {N}\) and \(k=1,\ldots ,n-1\). The conditional increment distributions of \((S_{n}')_{n\ge 0}\) are given by

$$\begin{aligned}&\mathbb {P}\left( S_{1}'\in \cdot |M_{0}'=j,M_{1}'=k\right) \\&\quad = {\left\{ \begin{array}{ll} \mathbb {P}_{i}(S_{\tau }\in \cdot |\tau =n),&{}\text {if }j=(n,n-1)\text { for }n\in \mathbb {N}\text { and }k=0, \\ \delta _{0}, \ \text {otherwise.}&{} \end{array}\right. } \end{aligned}$$

Then \(\tau ':=\inf \{n\ge 1:M_{n}'=0\}\) and \(D':=\max _{0\le k\le \tau '}{S_{k}'}^{-}\) satisfy

$$\begin{aligned} \mathbf{P}_{0}\left( S_{\tau '}'\in \cdot \right) = \mathbb {P}_{i}(S_{\tau }\in \cdot )\quad \text {and}\quad \mathbf{P}_{0}(D'\in \cdot )\ = \mathbf{P}_{0}\left( {S_{\tau '}'}^{-}\in \cdot \right) , \end{aligned}$$

where \(\mathbf{P}_{0}:=\mathbb {P}(\cdot |M_{0}'=0)\). Hence, Theorem 6.3 for the given \(\alpha \) applies to the MRW \((M_{n}',S_{n}')_{n\ge 0}\) and provides us with \(\mathbf{E}_{0}\rho '(0)^{\alpha }<\infty \) for \(\rho '(0):=\sup \{n\ge 0:S_{n}'\le 0\}\). Furthermore, one can easily infer from the definition of the \(S_{n}'\) that

$$\begin{aligned} \mathbb {E}_{i}\left( \sum _{n\ge 1}\chi _{n}\,\mathbf {1}_{\{S_{\tau _{n-1}}\le 0\}}\right) ^{\alpha }\ \le \ \mathbf{E}_{0}\left( \sum _{n\ge 1}\mathbf {1}_{\{S_{n}'\le 0\}}\right) ^{\alpha }\ \le \ \mathbf{E}_{0}\rho '(0)^{\alpha } <\ \infty \end{aligned}$$

which completes the proof.\(\square \)
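The cycle structure of this auxiliary chain is easy to visualize: every excursion of \((M_{n}')_{n\ge 0}\) away from 0 runs deterministically through \((n,1),\ldots ,(n,n-1)\) and takes exactly n steps, matching \(\tau =n\). A toy sketch (the function names and the restriction to \(n\ge 2\) are ours, purely for illustration):

```python
# Toy sketch of the auxiliary driving chain M': from 0 it jumps to (n, 1) with
# probability P_i(tau = n) and then moves deterministically through (n, 2), ...,
# (n, n-1) and back to 0, so an excursion labeled n takes exactly n steps.

def next_state(state, n=None):
    """One step of the chain; only the initial jump 0 -> (n, 1) has a random label n."""
    if state == 0:
        return (n, 1)              # n would be drawn from P_i(tau = n)
    n, k = state
    return 0 if k == n - 1 else (n, k + 1)

def cycle_length(n):
    """Steps needed for the excursion 0 -> (n, 1) -> ... -> (n, n-1) -> 0."""
    state, steps = next_state(0, n), 1
    while state != 0:
        state, steps = next_state(state), steps + 1
    return steps

for n in range(2, 10):
    assert cycle_length(n) == n    # excursion length reproduces tau = n
```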

Proof

(of Theorem 6.5) By Lemma 8.12, it suffices to verify, for any fixed \(i\in \mathcal {S}\), the equivalence of \(\mathbb {E}_{i}N(0)^{\alpha }<\infty \) with \(A_{i}(x)>0\) for all large x, \(\mathbb {E}_{i}J_{i}(S_{\tau (i)}^{-})^{1+\alpha }<\infty \) and (6.5). Let \(\mathbb {U}_{i}\) denote the renewal measure of \((S_{\tau _{n}})_{n\ge 0}\) under \(\mathbb {P}_{i}\) and note that \(\mathbb {U}_{i}((0,x])\asymp J_{i}(x)\) by Lemma 13.1 in the Appendix. In the following, the cases \(\alpha \le 1\) and \(\alpha >1\) are treated separately.

Case 1. \(0<\alpha \le 1\).

“(a)\(\Rightarrow \)(b)” Since

$$\begin{aligned} N(0)\ \le \ \sum _{n\ge 1}\chi _{n}\mathbf {1}_{\{S_{\tau _{n-1}}\le 0\}}\ +\ \sum _{n\ge 1}\sum _{k=1}^{\chi _{n}}\mathbf {1}_{\{S_{\tau _{n-1}+k}\le 0<S_{\tau _{n-1}}\}} \end{aligned}$$

it suffices, by the previous lemma, to show that

$$\begin{aligned} I\ :=\ \mathbb {E}_{i}\left( \sum _{n\ge 1}\sum _{k=1}^{\chi _{n}}\mathbf {1}_{\{S_{\tau _{n-1}+k}\le 0<S_{\tau _{n-1}}\}}\right) ^{\alpha } <\ \infty . \end{aligned}$$
(8.18)

Using the subadditivity of \(x\mapsto x^{\alpha }\), this follows from

$$\begin{aligned} \begin{aligned} I&\le \ \sum _{n\ge 1}\mathbb {E}_{i}\left( \sum _{k=1}^{\chi _{n}}\mathbf {1}_{\{S_{\tau _{n-1}+k}\le 0<S_{\tau _{n-1}}\}}\right) ^{\alpha }\ =\ \int _{\mathbb {R}_{>}}\mathbb {V}_{i}^{\alpha }([y,\infty ))\ \mathbb {U}_{i}(dy)\\&=\ \int \mathbb {U}_{i}((0,y])\ \mathbb {V}_{i}^{\alpha }(dy)\ \asymp \ \int J_{i}(y)\ \mathbb {V}_{i}^{\alpha }(dy). \end{aligned} \end{aligned}$$
(8.19)

Case 2. \(\alpha \ge 1\).

“(b)\(\Rightarrow \)(a)” If \(\mathbb {E}_{i}N(0)^{\alpha }<\infty \) and thus (8.18) holds, then the superadditivity of \(x\mapsto x^{\alpha }\) implies that (8.19) holds with reversed inequality signs, giving (6.5). Moreover, \(N'(0):=\sum _{n\ge 1}\mathbf {1}_{\{S_{\tau _{n}}\le 0\}}\le N(0)\) a.s. entails \(\mathbb {E}_{i}N'(0)^{\alpha }<\infty \), and thus \(A_{i}(x)>0\) for all large x and \(\mathbb {E}_{i}J_{i}(S_{\tau }^{-})^{1+\alpha }<\infty \) by an appeal to Theorem 3.3. Note that this already completes the proof in the case \(\alpha =1\).

“(a)\(\Rightarrow \)(b)” Regarding the proviso, when stated as \(\int _{\mathbb {R}_{>}}\mathbb {V}_{i}^{\alpha }([x,\infty ))\,\mathbb {U}_{i}(dx)<\infty \) [see (8.19)], we point out that it may be extended to

$$\begin{aligned} c\ :=\ \int _{\mathbb {R}}\mathbb {V}_{i}^{\alpha }([x,\infty ))\ \mathbb {U}_{i}(dx) <\ \infty , \end{aligned}$$

for the integral over \(\mathbb {R}_{\leqslant }\) is bounded by \(\mathbb {E}_{i}\tau ^{\alpha }\,\mathbb {U}_{i}(\mathbb {R}_{\leqslant })<\infty \). As a consequence, we also have

$$\begin{aligned} \sup _{\beta \in (0,\alpha ]}\int _{\mathbb {R}}\mathbb {V}_{i}^{\beta }([x,\infty ))\ \mathbb {U}_{i}(dx) = c. \end{aligned}$$

Let \(q\in \mathbb {N}\) and \(\delta \in (0,1]\) be such that \(\alpha =q+\delta \). The subsequent inductive argument (in m) will show that (a) for some \(\alpha \le m\in \mathbb {N}\) implies

$$\begin{aligned} \mathbb {E}_{i}N(0)^{\beta } <\ \infty \quad \text {for all }0\le \beta \le \alpha . \end{aligned}$$
(8.20)

Since this has already been verified for \(m=1\), we proceed to the inductive step \(m\rightarrow m+1\): assume \(q=m\) and that (8.20) holds for \(\alpha =m\) (inductive hypothesis). The following argument is taken from [10] (see their Theorem 3.7). Defining

$$\begin{aligned} N_{n}(x) := \sum _{k\ge \tau _{n}+1}\mathbf {1}_{\{S_{k}-S_{\tau _{n}}\le x\}}\quad \text {and}\quad L_{n}\ :=\ \sum _{k=1}^{\chi _{n}}\mathbf {1}_{\{S_{\tau _{n-1}+k}\le 0\}} \end{aligned}$$

for \(n\in \mathbb {N}\) and \(x\in \mathbb {R}\), we obviously have

$$\begin{aligned} \mathbb {E}_{i}\left( \sum _{n\ge 1}L_{n}^{\beta }\right) \ \le \ \mathbb {E}_{i}\left( \sum _{n\ge 1}\mathbb {V}_{i}^{\beta }([S_{\tau _{n-1}},\infty ))\right) \ \le \ c <\ \infty . \end{aligned}$$

Observe that \(N_{n}(0)\) and \(L_{n}\) are independent and

$$\begin{aligned} \mathbb {P}_{i}(N_{n}(0)\in \cdot )\ =\ \mathbb {P}_{i}(N(0)\in \cdot ) \end{aligned}$$

for all \(n\ge 1\). Moreover,

$$\begin{aligned} N_{n}(-S_{\tau _{n}})\ =\ L_{n+1}+N_{n+1}(-S_{\tau _{n+1}}) \end{aligned}$$

for all \(n\in \mathbb {N}_{0}\). By making use of the inequality [10, Lemma 5.6]

$$\begin{aligned} (x+y)^{\alpha }\ \le \ x^{\alpha }+y^{\alpha }+\alpha \,2^{\alpha -1}\big (xy^{\alpha -1}+x^{q}y^{\delta }\big ), \end{aligned}$$

valid for all \(x,y\in \mathbb {R}_{\geqslant }\) and \(\alpha =q+\delta \ge 1\), we infer

$$\begin{aligned} N(0)^{\alpha }&= (L_{1}+N_{1}(-S_{\tau }))^{\alpha }\\&\le L_{1}^{\alpha }\ +\ N_{1}(-S_{\tau })^{\alpha } + \alpha \,2^{\alpha -1}\left( L_{1}N_{1}(-S_{\tau })^{\alpha -1}+L_{1}^{q}N_{1}(-S_{\tau })^{\delta }\right) . \end{aligned}$$

Since \(\mathbb {E}_{i}N(0)<\infty \) and thus \(N(0)<\infty \) a.s., we have \(N_{n}(-S_{\tau _{n}})=\sum _{k>\tau _{n}}\mathbf {1}_{\{S_{k}\le 0\}}\rightarrow 0\) a.s. Therefore, by an iteration of the previous inequality, we find

$$\begin{aligned} N(0)^{\alpha } \le \sum _{n\ge 1}L_{n}^{\alpha }\ +\ \alpha \,2^{\alpha -1}\sum _{n\ge 1}\left( L_{n}N_{n}(-S_{\tau _{n}})^{\alpha -1}+L_{n}^{q}N_{n}(-S_{\tau _{n}})^{\delta }\right) . \end{aligned}$$

Using further

$$\begin{aligned} N_{n}(-S_{\tau _{n}})&\le \sum _{l\ge n}\sum _{k=1}^{\chi _{l+1}}\left( \mathbf {1}_{\{S_{\tau _{l}+k}-S_{\tau _{l}}\le -S_{\tau _{l}}<0\}}+\mathbf {1}_{\{S_{\tau _{l}+k}-S_{\tau _{l}}\le -S_{\tau _{l}},\,S_{\tau _{l}}\le 0\}}\right) \\&\le N_{n}(0)\ +\ \sum _{l\ge 1}\chi _{l}\mathbf {1}_{\{S_{\tau _{l-1}}\le 0\}}, \end{aligned}$$

Lemma 8.13 and the inductive hypothesis (8.20) yield upon taking means

$$\begin{aligned} \mathbb {E}_{i}N(0)^{\alpha }\ \lesssim \ c\left( 1+\alpha \,2^{\alpha -1}\left[ \mathbb {E}_{i}N(0)^{\alpha -1}+\mathbb {E}_{i}N(0)^{\delta }\right] \right) \ <\ \infty . \end{aligned}$$

\(\square \)

8.6 Proof of Theorem 6.6

The following two lemmata will easily establish the asserted implications of Theorem 6.6.

Lemma 8.14

Let \(\alpha >0\) and \(\mathbb {E}_{i}\tau (i)^{1+\alpha }<\infty \) for some \(i\in \mathcal {S}\). Then \(\mathbb {E}_{i}J_{i}(D^{i})^{1+\alpha }<\infty \) implies \(\mathbb {E}_{i}J_{i}(S_{\tau (i)}^{-})^{1+\alpha }<\infty \) as well as (6.4) and (6.5).

Proof

The first implication is obvious because \(S_{\tau }^{-}\le D^{i}\). Regarding the other two, use (6.1) and Hölder’s inequality to infer

$$\begin{aligned} \int _{\mathbb {R}_{>}}J_{i}(x)\ \mathbb {V}_{i}^{\alpha }(dx)&\asymp \int _{\mathbb {R}_{>}}J_{i}'(x)\,\mathbb {V}_{i}^{\alpha }((x,\infty ))\ dx\\&\le \mathbb {E}_{i}\left( \tau ^{\alpha }\int _{0}^{D^{i}}J_{i}'(x)\ dx\right) \ =\ \mathbb {E}_{i}\tau ^{\alpha }J_{i}(D^{i})\\&\le \left( \mathbb {E}_{i}\tau ^{1+\alpha }\right) ^{\alpha /(1+\alpha )}\left( \mathbb {E}_{i}J_{i}(D^{i})^{1+\alpha }\right) ^{1/(1+\alpha )} <\ \infty . \end{aligned}$$

Similarly, one finds

$$\begin{aligned} \int _{\mathbb {R}_{>}}J_{i}(x)^{\alpha } \mathbb {V}_{i}(dx)&\asymp \int _{\mathbb {R}_{>}}J_{i}'(x)J_{i}(x)^{\alpha -1}\,\mathbb {V}_{i}((x,\infty ))\ dx\ =\ \mathbb {E}_{i}\tau J_{i}(D^{i})^{\alpha }\\&\le \left( \mathbb {E}_{i}\tau ^{1+\alpha }\right) ^{1/(1+\alpha )}\left( \mathbb {E}_{i}J_{i}(D^{i})^{1+\alpha }\right) ^{\alpha /(1+\alpha )} <\ \infty . \end{aligned}$$

\(\square \)

Lemma 8.15

Let \(\alpha >0\), \(\mathbb {E}_{i}\tau (i)^{1+\alpha }<\infty \) for some \(i\in \mathcal {S}\) and \(x\in \mathbb {R}_{\geqslant }\). If \(\alpha \ge 1\), then \({\varSigma }_{\alpha }(i,x)<\infty \) implies \(\mathbb {E}_{i}N(x)^{\alpha }<\infty \), while the reverse implication holds true if \(0<\alpha \le 1\).

Proof

If \(\alpha \ge 1\) and \({\varSigma }_{\alpha }(i,x)<\infty \), then

$$\begin{aligned} \mathbb {E}_{i}(N(x)\wedge m)^{\alpha }&\le \mathbb {E}_{i}\left( \sum _{n\ge 1}(N(x)\wedge m)^{\alpha -1}\mathbf {1}_{\{S_{n}\le x\}}\right) \\&\le \mathbb {E}_{i}\left( \sum _{n\ge 1}(N(x)\wedge m)^{\alpha -1}\mathbf {1}_{\{N(x)\wedge m\ge 2n\}}\right) \ +\ 2^{\alpha -1}{\varSigma }_{\alpha }(i,x)\\&\le \frac{1}{2}\,\mathbb {E}_{i}(N(x)\wedge m)^{\alpha }\ +\ 2^{\alpha -1}{\varSigma }_{\alpha }(i,x) \end{aligned}$$

for all \(m\ge 1\), thus \(\mathbb {E}_{i}N(x)^{\alpha }\le 2^{\alpha }{\varSigma }_{\alpha }(i,x)<\infty \).

If \(0<\alpha \le 1\) and \(\mathbb {E}_{i}N(x)^{\alpha }<\infty \), then

$$\begin{aligned} \mathbb {E}_{i}N(x)^{\alpha }&= \mathbb {E}_{i}\left( \sum _{n\ge 1}N(x)^{\alpha -1}\mathbf {1}_{\{S_{n}\le x\}}\right) \ \ge \ \mathbb {E}_{i}\left( \sum _{n\ge 1}N(x)^{\alpha -1}\mathbf {1}_{\{N(x)\le n,\,S_{n}\le x\}}\right) \\&\ge \mathbb {E}_{i}\left( \sum _{n\ge 1}n^{\alpha -1}\mathbf {1}_{\{N(x)\le n,\,S_{n}\le x\}}\right) \ \ge \ {\varSigma }_{\alpha }(i,x)\ -\ \mathbb {E}_{i}\left( \sum _{n=1}^{N(x)}n^{\alpha -1}\right) \\&\ge {\varSigma }_{\alpha }(i,x)\ -\ \alpha ^{-1}\,\mathbb {E}_{i}N(x)^{\alpha }, \end{aligned}$$

thus \({\varSigma }_{\alpha }(i,x)\le (1+\alpha ^{-1})\,\mathbb {E}_{i}N(x)^{\alpha }<\infty \), where \(\sum _{n=1}^{N}n^{\alpha -1}\le \alpha ^{-1}N^{\alpha }\) for \(\alpha \in (0,1]\) has been used in the last step. \(\square \)
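The last estimate uses that \(\sum _{n=1}^{N}n^{\alpha -1}\) is of order \(N^{\alpha }\) for \(\alpha \in (0,1]\); the integral comparison \(n^{\alpha -1}\le \int _{n-1}^{n}x^{\alpha -1}\,dx\) gives the explicit bound \(\alpha ^{-1}N^{\alpha }\), checked numerically below (illustrative only):

```python
# Check sum_{n=1}^N n**(alpha-1) <= N**alpha / alpha for alpha in (0, 1],
# which follows since n**(alpha-1) <= integral of x**(alpha-1) over [n-1, n]
# (the integrand x**(alpha-1) is decreasing for alpha <= 1).
def partial_sum(alpha, N):
    return sum(n ** (alpha - 1) for n in range(1, N + 1))

for alpha in (0.25, 0.5, 0.75, 1.0):
    for N in (1, 10, 100, 1000):
        assert partial_sum(alpha, N) <= N ** alpha / alpha
```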

Proof

(of Theorem 6.6 ) In view of the previous two lemmata, the stated implications between Theorems 6.3, 6.4 and 6.5 are now immediate. As for the finiteness of \(\mathbb {E}_{i}\sigma ^{>}(x)^{1+\alpha }\) for all \((i,x)\in \mathcal {S}\times \mathbb {R}_{\geqslant }\) and any given \(\alpha >0\), fix \(i\in \mathcal {S}\) and recall that Theorem 6.4(b) entails \(\sum _{n\ge 1}n^{\alpha -1}\mathbb {P}_{i}(S_{\tau _{n}}\le x)<\infty \) [see (8.15)] and that Theorem 6.5(b) trivially entails \(\mathbb {E}_{i}(\sum _{n\ge 1}\mathbf {1}_{\{S_{\tau _{n}}\le x\}})^{\alpha }<\infty \) for all \(x\in \mathbb {R}_{\geqslant }\). Hence, under any of these conditions, we infer from Theorem 3.3, applied to the ordinary random walk \((S_{\tau _{n}})_{n\ge 0}\), that \(\mathbb {E}_{i}\tau _{\nu (x)}^{1+\alpha }<\infty \) for all \(x\in \mathbb {R}_{\geqslant }\). Since \(\sigma ^{>}(x)\le \tau _{\nu (x)}\), this completes the proof. \(\square \)

8.7 Proofs of Theorems 6.8 and 6.9

Throughout this subsection, the state space \(\mathcal {S}\) of \((M_{n})_{n\ge 0}\) is assumed to be finite. It is well known that the return times \(\tau (i)\) then have exponential moments; in particular, \(\mathbb {E}_{i}\tau (i)^{1+\alpha }<\infty \) for all \(\alpha \in \mathbb {R}_{\geqslant }\). The proofs of the theorems will be furnished by the following two lemmata.

Lemma 8.16

Given a MRW \((M_{n},S_{n})_{n\ge 0}\) with finite state space \(\mathcal {S}\), the following assertions hold true for all \(i\in \mathcal {S}\):

  1. (a)

    There exists \(x\in \mathbb {R}_{\geqslant }\) such that \(\mathbb {P}_{i}(S_{\tau (i)}^{\pm }>y)\gtrsim \mathbb {P}_{\pi }(X_{1}^{\pm }>y+x)\) as \(y\rightarrow \infty \).

  2. (b)

    \(\mathbb {E}_{\pi }|X_{1}|<\infty \) if and only if \(\mathbb {E}_{i}|S_{\tau (i)}|<\infty \).

  3. (c)

    \(\mathbb {E}_{i}(S_{\tau (i)}^{\pm }\wedge y)\asymp \mathbb {E}_{\pi }(X_{1}^{\pm }\wedge y)\) as \(y\rightarrow \infty \).

Proof

We will prove (a) for \((X_{1}^{-},S_{\tau (i)}^{-})\) and (c) for \((X_{1}^{+},S_{\tau (i)}^{+})\) because this allows us to directly refer to results stated in this paper. The other cases then follow by switching to \((M_{n},-S_{n})_{n\ge 0}\).

(a) Fix an arbitrary \(i\in \mathcal {S}\). We will show that for all \(j,k\in \mathcal {S}\) with \(p_{jk}>0\), there exists \(x_{jk}\in \mathbb {R}_{\geqslant }\) such that

$$\begin{aligned} \mathbb {P}_{i}\left( S_{\tau (i)}^{-}>y\right) \ \gtrsim \ \mathbb {P}_{j}\left( X_{1}^{-}>y+x_{jk}|M_{1}=k\right) \end{aligned}$$
(8.21)

as \(y\rightarrow \infty \). Then, by the finiteness of \(\mathcal {S}\), we can choose \(x:=\max _{j,k\in \mathcal {S}:\,p_{jk}>0}x_{jk}<\infty \) to obtain the desired result

$$\begin{aligned} \mathbb {P}_{i}\left( S_{\tau (i)}^->y\right) \ \gtrsim \ \sum _{j,k\in \mathcal {S}} \pi _{j}\,p_{jk}\,\mathbb {P}_{j}\left( X_{1}^{-}>y+x|M_{1}=k\right) \ = \ \mathbb {P}_{\pi }\left( X_{1}^{-}>y+x\right) \end{aligned}$$

as \(y\rightarrow \infty \).

For \(u\in \mathcal {S}\), define \(\tau ^{0}(u):=\inf \{n\ge 0: M_{n}=u\}\). Pick \(j,k\in \mathcal {S}\) with \(p_{jk}>0\). There exist \(m_{1},m_{2}\in \mathbb {N}_{0}\) and \(z\in \mathbb {R}_{\geqslant }\) such that

$$\begin{aligned} p_{1} := \mathbb {P}_{i}\left( \tau ^{0}(j)=m_1<\tau (i),\,|S_{m_{1}}|\le z\right) > 0 \end{aligned}$$

and

$$\begin{aligned} p_{2} := \mathbb {P}_{k}\left( \tau ^{0}(i)=m_{2},\, |S_{m_{2}}|\le z\right) \ >\ 0. \end{aligned}$$

With \(m:=m_{1}+m_{2}+1\), it follows that

$$\begin{aligned}&\mathbb {P}_{i}\left( S_{\tau (i)}^{-}>y\right) \\&\quad \ge \mathbb {P}_{i}\left( \tau ^{0}(j)=m_{1},\, |S_{m_{1}}|\le z,\, M_{m_{1}+1}=k,\, X_{m_{1}+1}^{-}>y+2z, \right. \\&\qquad \quad \left. \tau (i)=m,\,|S_{m}-S_{m_{1}+1}|\le z\right) \\&\quad \ge p_{1}p_{2}\,p_{jk}\,\mathbb {P}_{j}\left( X_{1}^{-}>y+2z|M_{1}=k\right) \ \asymp \ \mathbb {P}_{j}\left( X_{1}^{-}>y+2z|M_{1}=k\right) , \end{aligned}$$

and this proves (8.21) with \(x_{jk}:=2z\).

(b) By using (a) for the positive and the negative part, we infer that \(\mathbb {E}_{i}|S_{\tau (i)}|<\infty \) implies \(\mathbb {E}_{\pi }|X_{1}|<\infty \). The reverse implication follows from (4.2).

(c) In view of (b), we must only consider the case when \(\mathbb {E}_{\pi }X_{1}^{+}=\infty \). Let \(x\in \mathbb {R}_{\geqslant }\) be the constant provided by part (a) for \((S_{\tau (i)}^{+},X_{1}^{+})\). Then

$$\begin{aligned} \mathbb {E}_{i}\left( S_{\tau (i)}^{+}\wedge y\right) \ =\ \int _{0}^{y} \mathbb {P}_{i}\left( S_{\tau (i)}^{+}>z\right) \ dz\ \gtrsim \ \int _{0}^{y} \mathbb {P}_{\pi }\left( X_{1}^{+} >z+x\right) \ dz \end{aligned}$$

as \(y\rightarrow \infty \), and the last integral diverges because \(\mathbb {E}_{\pi } X_{1}^{+}=\infty \). Therefore,

$$\begin{aligned} \int _{0}^{y}\mathbb {P}_{\pi }\left( X_{1}^{+}>z+x\right) \ dz\ \asymp \ \int _{0}^{y+x}\mathbb {P}_{\pi }\left( X_{1}^{+}>z\right) \ dz\ =\ \mathbb {E}_{\pi }\left[ X_{1}^{+}\wedge (y+x)\right] \end{aligned}$$

as \(y\rightarrow \infty \). Using \(\mathbb {E}_{\pi }[X_{1}^{+}\wedge (y+x)]\asymp \mathbb {E}_{\pi }(X_{1}^{+}\wedge y)\) (cf. Lemma 7.5(b)), we arrive at the conclusion \(\mathbb {E}_{\pi }(X_{1}^{+}\wedge y)\lesssim \mathbb {E}_{i}(S_{\tau (i)}^{+}\wedge y)\). For a reverse estimate, use the occupation measure formula (4.1) to infer

$$\begin{aligned} \mathbb {E}_{\pi }\left( X_{1}^{+}\wedge y\right)= & {} \pi _{i}\, \mathbb {E}_{i}\left[ \sum _{k=1}^{\tau (i)}\left( X_{k}^{+}\wedge y\right) \right] \ \ge \ \pi _{i}\,\mathbb {E}_{i}\left[ \left( \sum _{k=1}^{\tau (i)}X_{k}^{+}\right) \wedge y\right] \\\ge & {} \pi _{i}\,\mathbb {E}_{i}\left( S_{\tau (i)}^{+}\wedge y\right) \end{aligned}$$

for all \(y\in \mathbb {R}_{\geqslant }\). \(\square \)

For \(\gamma \in [0,1]\), let

$$\begin{aligned} J_{\pi ,\gamma }(x) := {\left\{ \begin{array}{ll} \displaystyle {\frac{x}{\left[ \mathbb {E}_{\pi }\left( X_{1}^{+}\wedge x\right) \right] ^{\gamma }}},&{} \text {if }\mathbb {P}_{\pi }(X_{1}>0)>0\\ x, &{}\text {otherwise} \end{array}\right. }, \end{aligned}$$

where \(0/[\mathbb {E}_{\pi }(X_{1}^{+}\wedge 0)]^{\gamma }:=1\) if \(\gamma >0\). Note that \(J_{\pi }=J_{\pi ,1}\). The second lemma relates the moments of \(J_{i,\gamma }(S_{\tau (i)}^{-})\) and \(J_{i,\gamma }(D^{i})\) to the respective moments of \(J_{\pi ,\gamma }(X_{1}^{-})\).
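For orientation, a hypothetical computation (our illustration, not taken from the paper): if the truncated mean of the positive part grows like a power, \(\gamma \) acts as a simple exponent shift,

```latex
% Illustrative assumption: the truncated mean E_pi(X_1^+ \wedge x) grows like a
% power x^{1-beta} for some beta in (0,1), as it does for a regularly varying
% positive tail of index -beta.
\mathbb{E}_{\pi}\left(X_{1}^{+}\wedge x\right)\ \asymp\ x^{1-\beta}
\quad\Longrightarrow\quad
J_{\pi,\gamma}(x)\ =\ \frac{x}{\left[\mathbb{E}_{\pi}\left(X_{1}^{+}\wedge x\right)\right]^{\gamma}}
\ \asymp\ x^{1-\gamma(1-\beta)}.
```

In particular, \(J_{\pi ,0}(x)=x\) regardless of the tail, while \(J_{\pi ,1}=J_{\pi }\) discounts x the more heavily the heavier the positive tail.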

Lemma 8.17

Given a nontrivial MRW \((M_{n},S_{n})_{n\ge 0}\) with finite state space \(\mathcal {S}\), the following assertions are equivalent for any \(\alpha \in \mathbb {R}_{\geqslant }\) and \(\gamma \in [0,1]\).

  1. (a)

    \(\mathbb {E}_{i} J_{i,\gamma }(S_{\tau (i)}^{-})^{1+\alpha } < \infty \) for some/all \(i\in \mathcal {S}\).

  2. (b)

    \(\mathbb {E}_{i}J_{i,\gamma }(D^{i})^{1+\alpha } < \infty \) for some/all \(i\in \mathcal {S}\).

  3. (c)

    \(\mathbb {E}_{\pi }J_{\pi ,\gamma }(X_{1}^{-})^{1+\alpha }<\infty \).

Proof

If \(\mathbb {E}_{\pi }X_{1}^{+}<\infty \), then, by another application of the occupation measure formula (4.1),

$$\begin{aligned} \mathbb {E}_{\pi }X_{1}^{+}\ =\ \pi _{i}\,\mathbb {E}_{i}\left( \sum _{k=1}^{\tau (i)}X_{k}^{+}\right) \ \ge \ \pi _{i}\,\mathbb {E}_{i}S_{\tau (i)}^{+} \end{aligned}$$

and therefore \(J_{\pi ,\gamma }(y)\asymp y\asymp J_{i,\gamma }(y)\) as \(y\rightarrow \infty \). If \(\mathbb {E}_{\pi } X_{1}^{+}=\infty \), then Lemma 8.16(c) entails \(J_{\pi ,\gamma }(y) \asymp J_{i,\gamma }(y)\) as \(y\rightarrow \infty \). In any case, it therefore suffices to prove the lemma with \(J_{i,\gamma }\) replaced by \(J_{\pi ,\gamma }\) in (a) and (b). Let \(i\in \mathcal {S}\) be an arbitrarily chosen state hereafter.

“(b)\(\Rightarrow \)(a)” follows directly from \(D^{i}\ge S_{\tau (i)}^{-}\).

“(a)\(\Rightarrow \)(c)” With \(x\in \mathbb {R}_{\geqslant }\) as provided by Lemma 8.16(a), we have \(J_{\pi ,\gamma }(y-x)\asymp J_{\pi ,\gamma }(y)\) and thus

$$\begin{aligned} \infty > \mathbb {E}_{i} J_{\pi ,\gamma }\left( S_{\tau (i)}^- \right) ^{1+\alpha }\ \gtrsim \ \mathbb {E}_{\pi }J_{\pi ,\gamma }\left( \left( X_{1}^{-} -x \right) ^+\right) ^{1+\alpha } \, \asymp \mathbb {E}_{\pi } J_{\pi ,\gamma }\left( X_{1}^{-}\right) ^{1+\alpha }. \end{aligned}$$

“(c)\(\Rightarrow \)(b)” Let \(F_{jk}\) be the distribution function of \(X_{1}^{-}\) given \(M_{0}=j\) and \(M_{1}=k\) and \(F_{jk}^{-1}\) its pseudo-inverse. Given a sequence \((U_{n})_{n\ge 1}\) of iid uniformly distributed random variables on (0, 1) which are independent of all other occurring random variables, the sequence \((M_{n},\widehat{X}_{n})_{n\ge 1}\) with \(\widehat{X}_{n}:=F_{M_{n-1},M_{n}}^{-1}(U_{n})\) forms a distributional copy of \((M_{n},X_{n}^{-})_{n\ge 1}\). Put \(\widehat{S}_n=\sum _{k=1}^{n}\widehat{X}_{k}\) for \(n\ge 1\) and

$$\begin{aligned} G(y) := \min _{j,k\in \mathcal {S}:\,p_{jk}>0}\mathbb {P}_{j}\left( X_{1}^{-}\le y|M_{1}=k\right) , \end{aligned}$$

which is a proper distribution function as \(\mathcal {S}\) is finite. Now \((W_{n})_{n\ge 1}:=(G^{-1}(U_{n}))_{n\ge 1}\) forms an iid sequence independent of \((M_{n})_{n\ge 0}\) and with \(\widehat{X}_{n}\le W_{n}\) for all \(n\ge 1\). Use that \(J_{\pi ,\gamma }\) is subadditive and nondecreasing to infer

$$\begin{aligned} \mathbb {E}_{i}J_{\pi ,\gamma }(D^{i})^{{1+\alpha }}&\le \mathbb {E}_{i}J_{\pi ,\gamma }\left( \sum _{k=1}^{\tau (i)}X_{k}^{-}\right) ^{{1+\alpha }} = \mathbb {E}_{i}J_{\pi ,\gamma }(\widehat{S}_{\tau (i)})^{{1+\alpha }}\\&\le \mathbb {E}_{i}\left( \sum _{k=1}^{\tau (i)}J_{\pi ,\gamma }(\widehat{X}_{k})\right) ^{{1+\alpha }}\ \le \ \mathbb {E}\left( \sum _{k=1}^{\tau (i)}J_{\pi ,\gamma }(W_{k})\right) ^{{1+\alpha }}. \end{aligned}$$

By [29, Theorem 1.5.4], the upper bound is finite iff \(\mathbb {E}_{i}\tau (i)^{{1+\alpha }}\) and \(\mathbb {E}J_{\pi ,\gamma }(W_{1})^{1+\alpha }\) are both finite. We must only verify the finiteness of the second expectation which follows from

$$\begin{aligned} \infty> \mathbb {E}_{\pi } J_{\pi ,\gamma }\left( X_{1}^{-}\right) ^{{1+\alpha }}&= \sum _{j,k\in \mathcal {S}} \pi _{j}\,p_{jk}\, \mathbb {E}_{j}\left( J_{\pi ,\gamma }\left( X_{1}^{-}\right) ^{1+\alpha }|M_{1}=k\right) \\&\ge c \sum _{j,k\in \mathcal {S}:\, p_{jk}>0 }\,\int J_{\pi ,\gamma }(y)^{1+\alpha }\ \mathbb {P}_{j}\left( X_{1}^{-}\in dy|M_{1}=k\right) \\&\ge c\,\mathbb {E}J_{\pi ,\gamma }(W_{1})^{1+\alpha }, \end{aligned}$$

where \(c:=\min \{\pi _{j}\,p_{jk}: j,k\in \mathcal {S}\text { and }p_{jk}>0\}\). \(\square \)
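The coupling behind “(c)\(\Rightarrow \)(b)” is the standard quantile (inverse-transform) trick: whenever a distribution function G lies below every \(F_{jk}\) pointwise, one has \(G^{-1}(u)\ge F_{jk}^{-1}(u)\) for all u, so the \(W_{n}\) built from the same uniforms dominate the \(\widehat{X}_{n}\). A minimal numerical sketch with two hypothetical distribution functions (illustrative only):

```python
# Quantile coupling: if G <= F pointwise, then G^{-1}(u) >= F^{-1}(u), so
# W = G^{-1}(U) stochastically dominates X = F^{-1}(U) for the same uniform U.
def pseudo_inverse(cdf, u, grid):
    """Generalized inverse inf{y : cdf(y) >= u}, evaluated on a discrete grid."""
    return min(y for y in grid if cdf(y) >= u)

grid = [i / 100 for i in range(0, 501)]       # y-values 0.00, 0.01, ..., 5.00
F = lambda y: min(1.0, y / 2)                 # df of the uniform law on [0, 2]
G = lambda y: min(1.0, (y / 4) ** 2)          # lies below F everywhere on [0, 5]

for k in range(1, 20):
    u = k / 20
    assert pseudo_inverse(G, u, grid) >= pseudo_inverse(F, u, grid)
```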

Proof of Theorem 6.8

By Lemma 8.17, we may replace \(D^{i}\) with \(S_{\tau (i)}^{-}\) in (b), which is assumed hereafter. Let us establish the equivalence of this modified condition and (6.8). If \(\mathbb {E}_{\pi }|X_{1}|<\infty \), then \(\mathbb {E}_{i}|S_{\tau (i)}|<\infty \) by Lemma 8.16(b), and since, by another use of (4.1), we then also have

$$\begin{aligned} \pi _{i}\,\lim _{y\rightarrow \infty }A_{i}(y)\ =\ \pi _{i}\,\mathbb {E}_{i}S_{\tau (i)}\ =\ \mathbb {E}_{\pi }X_{1}\ =\ \lim _{y\rightarrow \infty }A_{\pi }(y), \end{aligned}$$

the asserted equivalence follows. If \(\mathbb {E}_{\pi }|X_{1}|=\infty \) and thus \(\mathbb {E}_{\pi }|S_{\tau (i)}|=\infty \), then only equivalence of \(\mathbb {E}_{i}J_{i}(S_{\tau (i)}^{-})<\infty \) and \(\mathbb {E}_{\pi }J_{\pi }(X_{1}^{-})<\infty \) must be verified, as pointed out in Remark 3.2. But this is ensured by the previous lemma for \(\gamma =1\).

It remains to verify that 6.1(d), i.e., \(\mathbb {E}_{i}\sigma ^{>}(x)<\infty \) for all \((i,x)\in \mathcal {S}\times \mathbb {R}_{\geqslant }\), implies the modified condition 6.1(b). By Proposition 4.1 (note that the dual MRW is trivially also positive divergent if \(|\mathcal {S}|<\infty \)), the ladder chain \((M_{n}^{>})_{n\ge 0}\) is positive recurrent on some \(\mathcal {S}^{>}\subset \mathcal {S}\) with unique stationary law \(\pi ^{>}\) vanishing outside \(\mathcal {S}^{>}\). In particular, \(\kappa (i):=\inf \{n\ge 1:M_{n}^{>}=i\}\) has finite mean under \(\mathbb {P}_{i}\) for each \(i\in \mathcal {S}^{>}\). Moreover, the sequence \((M_{n}^{>},\sigma _{n}^{>})_{n\ge 0}\) forms a MRW with \(\mathbb {E}_{\pi ^>}\sigma ^{>}<\infty \), for \(\mathcal {S}\) is finite. Now fix any \(i\in \mathcal {S}^{>}\) and recall that \(\tau ^{>}(i)\) denotes the first ascending ladder epoch of \((S_{\tau _{n}(i)})_{n\ge 0}\). Then we obviously have \(\tau ^{>}(i)\le \sigma ^{>}_{\kappa (i)}=\inf \{\sigma _{n}^{>}:M_{\sigma _{n}^{>}}=i\}\), and thus, by making use of the occupation measure formula (4.1),

$$\begin{aligned} \mathbb {E}_{i}\tau ^{>}(i)\ \le \ \mathbb {E}_{i}\sigma ^{>}_{\kappa (i)}\ =\ \mathbb {E}_{\pi ^>}\sigma ^{>}\,\mathbb {E}_{i}\kappa (i) <\ \infty \end{aligned}$$

which in turn implies the modified 6.1(b) by invoking Theorem 3.1 for the ordinary random walk \((S_{\tau _{n}(i)})_{n\ge 0}\). \(\square \)

Proof of Theorem 6.9

By Lemmata 8.17 and 7.8, only “(6.6)\(\,\Rightarrow \mathbb {E}_{i}J_{i}(S_{\tau (i)}^-)^{1+\alpha }<\infty \)” remains to be proved. However, this requires further results on the moments of \(\sigma ^{>}(x)\), notably Proposition 9.4, and is therefore postponed to the end of Sect. 9. \(\square \)

9 A Closer Look at the Moments of \(\sigma ^{>}(x)\)

This section is devoted to a more detailed discussion of some aspects regarding the moments of \(\sigma ^{>}(x)\) for which none of the conditions provided by Theorems 6.3(a), 6.4(a) and 6.5(a) appears to be necessary.

9.1 A Counterexample

We begin with an example showing that, for any given \(\alpha \ge 0\), one can define a nontrivial oscillating MRW \((M_{n},S_{n})_{n\ge 0}\) with negative divergent embedded RW \((S_{\tau _{n}(i)})_{n\ge 0}\) and yet \(\mathbb {E}_{i}\sigma ^{>}(x)^{1+\alpha }<\infty \) for all \((i,x)\in \mathcal {S}\times \mathbb {R}_{\geqslant }\). In other words, the last property entails neither the positive divergence of \((S_{n})_{n\ge 0}\) nor any of the other assertions stated in the theorems of Sect. 6, in sharp contrast to the case of ordinary RWs.

Example 9.1

Let \((W_{n})_{n\ge 0}\) be an ordinary zero-delayed integer-valued random walk with generic increment Y satisfying \(\mathbb {P}(Y=-n)>0\) for all \(n\in \mathbb {N}_{0}\),

$$\begin{aligned} \mathbb {P}(Y^{-}>n)\,=\,\frac{1}{n^{1+\alpha }}\quad \text {for all sufficiently large }n\in \mathbb {N}, \end{aligned}$$

and \(\mathbb {P}(Y^{+}\in \cdot )\) such that \(\mathbb {E}\big (\inf \{n:W_{n}>0\}\big )^{1+\alpha }=\infty \) for the fixed \(\alpha \ge 0\), a particular choice being \(Y^{+}\equiv 0\). Further defining \(f:\mathbb {R}_{>}\rightarrow \mathbb {R}_{>}\) by \(f(x):=2^{\theta x^{1+\alpha }}\) for some \(\theta >1+\alpha \), we have \(f(x)\ge x\) for all \(x\ge 1\) and also find that

$$\begin{aligned} \begin{aligned} \lim _{n\rightarrow \infty }n\,\mathbb {P}(f(Y^{-})>c2^{n})&= \lim _{n\rightarrow \infty }n\,\mathbb {P}\left( Y^{-}>f^{-1}(c2^{n})\right) \\&= \lim _{n\rightarrow \infty }n\,\mathbb {P}\left( Y^{-}>\left( \frac{n+\log _{2}c}{\theta }\right) ^{1/(1+\alpha )}\right) = \theta \end{aligned} \end{aligned}$$
(9.1)

for all \(c\in \mathbb {R}_{>}\).
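A quick numerical sanity check of (9.1) (illustrative only; it takes the stated tail \(\mathbb {P}(Y^{-}>n)=n^{-(1+\alpha )}\) as exact and plugs in \(f^{-1}\)):

```python
import math

# n * P(f(Y^-) > c * 2**n) with f(x) = 2**(theta * x**(1 + alpha)) and the tail
# P(Y^- > m) = m**(-(1 + alpha)) taken as exact; by (9.1) this tends to theta.
def limit_term(n, alpha, theta, c):
    threshold = ((n + math.log2(c)) / theta) ** (1 / (1 + alpha))  # f^{-1}(c 2^n)
    return n * threshold ** (-(1 + alpha))                         # n * P(Y^- > threshold)

alpha, theta = 0.5, 2.0                       # any theta > 1 + alpha works
for c in (0.5, 1.0, 8.0):
    assert abs(limit_term(10**7, alpha, theta, c) - theta) < 1e-5
```

The term simplifies to \(n\theta /(n+\log _{2}c)\), which makes the convergence to \(\theta \) transparent.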

Now consider a MRW \((M_{n},S_{n})_{n\ge 0}\) such that \((M_{n})_{n\ge 0}\) has state space \(\mathbb {N}_{0}\) and transition probabilities \(p_{00}=\mathbb {P}(Y\ge 0)\), \(p_{0i}=\mathbb {P}(Y=-i)\) and \(p_{i0}=1\) for \(i\in \mathbb {N}\). Furthermore,

$$\begin{aligned} K_{00} = \mathbb {P}(Y\in \cdot \,|Y\ge 0),\quad K_{0i}\ =\ \delta _{f(i)}\quad \text {and}\quad K_{i0}\ =\ \delta _{-f(i)-i} \end{aligned}$$

for all \(i\in \mathbb {N}\). Notice that \(\tau (0)\le 2\) a.s. and that the law of \((S_{\tau _{n}(0)})_{n\ge 0}\) under \(\mathbb {P}_{0}\) equals the law of \((W_{n})_{n\ge 0}\).

Fixing any \(x>0\), the following property of the MRW under \(\mathbb {P}_{0}\) is essential for our considerations, namely

$$\begin{aligned} X_{\tau _{n}(0)+1}\ \le \ x\quad \Longrightarrow \quad S_{\tau _{n+1}(0)}-S_{\tau _{n}(0)}\ \ge \ -x. \end{aligned}$$

As a consequence of this property, we infer that

$$\begin{aligned} \{\sigma ^{>}(x)>\tau (0),M_{0}=0\}&\subset \{S_{\tau (0)}\ge -x,M_{0}=0\},\\ \{\sigma ^{>}(x)>\tau _{2}(0),M_{0}=0\}&\subset \{S_{\tau (0)}\ge -x,X_{\tau (0)+1}\le 2x,M_{0}=0\}\\&\subset \{S_{\tau _{2}(0)}\ge -3x,M_{0}=0\} \end{aligned}$$

and then inductively

$$\begin{aligned} \{\sigma ^{>}(x)>\tau _{n}(0),M_{0}=0\} \subset \left\{ S_{\tau _{n}(0)}\ge -(2^{n}-1)x,M_{0}=0\right\} \end{aligned}$$

for all \(n\in \mathbb {N}\). Defining \(\widehat{\sigma }(x):=\inf \{n\ge 1:X_{\tau _{n}(0)+1}>x2^{n}\}\), this implies \(\sigma ^{>}(x)\le \tau _{\widehat{\sigma }(x)}(0)+1\le 2\widehat{\sigma }(x)+1\) (recall \(\tau (0)\le 2\) a.s.), and the subsequent argument will show that \(\mathbb {E}_{0}\widehat{\sigma }(x)^{1+\alpha }<\infty \) for all \(x\in \mathbb {R}_{\geqslant }\), giving

$$\begin{aligned} \mathbb {E}_{i}\sigma ^{>}(x)^{1+\alpha }\ \le \ \mathbb {E}_{0}[1+\sigma ^{>}(x+i+f(i))]^{1+\alpha } <\ \infty \end{aligned}$$

for all \(i\in \mathbb {N}\) and \(x\in \mathbb {R}_{\geqslant }\) as desired.

Put \(F(x):=\mathbb {P}_{0}(X_{1}\le x)\) and note that

$$\begin{aligned} 1-F(x)\ =\ \mathbb {P}(f(Y^{-})>x) \end{aligned}$$

for \(x\in \mathbb {R}_{>}\). We start by pointing out that

$$\begin{aligned} \mathbb {E}_{0}\widehat{\sigma }(x)^{1+\alpha }\ \asymp \ \sum _{n\ge 1}n^{\alpha }\,\mathbb {P}_{0}(\widehat{\sigma }(x)>n)\ =\ \sum _{n\ge 1}n^{\alpha }\prod _{k=1}^{n}F(x2^{k}) \end{aligned}$$

because the \(X_{\tau _{n}(0)+1}\) are iid under \(\mathbb {P}_{0}\) with distribution function F. Put \(b_{n}:=n^{\alpha }\prod _{k=1}^{n}F(x2^{k})\) for \(n\in \mathbb {N}\). Now \(\mathbb {E}_{0}\widehat{\sigma }(x)^{1+\alpha }<\infty \) follows by Raabe’s test (see, e.g., Stromberg [47, (7.16)]) if

$$\begin{aligned} \liminf _{n\rightarrow \infty }n\left( \frac{b_{n}}{b_{n+1}}-1\right) \ > 1. \end{aligned}$$

To this end, use (9.1) to obtain

$$\begin{aligned} n(1-F(x2^{n+1})) = n\,\mathbb {P}(f(Y^{-})>x2^{n+1})\ =\ \theta +o(1) \end{aligned}$$

as \(n\rightarrow \infty \) and then finally conclude

$$\begin{aligned} \liminf _{n\rightarrow \infty }n\left( \frac{b_{n}}{b_{n+1}}-1\right)&= \liminf _{n\rightarrow \infty }n\left( \frac{n^{\alpha }-(n+1)^{\alpha }F(x2^{n+1})}{(n+1)^{\alpha }F(x2^{n+1})}\right) \\&= \liminf _{n\rightarrow \infty }\frac{n}{F(x2^{n+1})}\left( \left( 1+\frac{1}{n}\right) ^{-\alpha }-F(x2^{n+1})\right) \\&= \liminf _{n\rightarrow \infty }n\left( 1-\frac{\alpha }{n}+o\left( \frac{1}{n}\right) -F(x2^{n+1})\right) \\&= \theta -\alpha > 1. \end{aligned}$$

Finally, the finiteness of \(\mathbb {E}_{0}\sigma ^{>}(x)^{1+\alpha }\) particularly entails \(\limsup _{n\rightarrow \infty }S_{n}=\infty \) a.s. and thus confirms that \((S_{n})_{n\ge 0}\) is indeed oscillating. \(\square \)
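The Raabe computation above can also be followed numerically: plugging the asymptotic tail \(1-F(2^{t})\approx \theta /t\) from (9.1) into the ratio gives a quantity approaching \(\theta -\alpha \). A sketch with the illustrative values \(\theta =3\), \(\alpha =1\) (so the limit should be \(2>1\)):

```python
import math

ALPHA, THETA, x = 1.0, 3.0, 1.0        # illustrative values with theta > 1 + alpha

def F_log2(t):
    # F(2^t) = 1 - theta/t: the asymptotic tail from (9.1), valid for large t
    return 1.0 - THETA / t

def raabe_ratio(n):
    # n (b_n / b_{n+1} - 1) with b_n = n^alpha prod_{k<=n} F(x 2^k)
    ratio = (n / (n + 1.0)) ** ALPHA / F_log2(n + 1 + math.log2(x))
    return n * (ratio - 1.0)

print(raabe_ratio(10**6))              # approaches theta - alpha = 2 > 1
```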

9.2 Solidarity

It is clear that \(\mathbb {E}_{i}\sigma ^{>}(x)^{1+\alpha }<\infty \) for some \((i,x)\in \mathcal {S}\times \mathbb {R}_{\geqslant }\) does not necessarily entail

$$\begin{aligned} \mathbb {E}_{i}\sigma ^{>}(x)^{1+\alpha }\,<\,\infty \quad \text {for all }(i,x)\in \mathcal {S}\times \mathbb {R}_{\geqslant }, \end{aligned}$$
(9.2)

as we may have \(\mathbb {P}_{i}(\sigma ^{>}(x)=1)=1\) for some \((i,x)\) even when \((M_{n},S_{n})_{n\ge 0}\) is negative divergent and thus \(\mathbb {P}_{j}(\sigma ^{>}(y)=\infty )>0\) for some other \((j,y)\in \mathcal {S}\times \mathbb {R}_{\geqslant }\). The subsequent lemma provides sufficient conditions for (9.2). Consider the condition

$$\begin{aligned} q(i,x)\,:=\,\mathbb {P}_{i}(\sigma ^{>}(x)>\tau (i),S_{\tau (i)}<0)\,>\,0 \end{aligned}$$
(9.3)

for some \((i,x)\in \mathcal {S}\times \mathbb {R}_{\geqslant }\) and note that \(q(i,x)\) is nondecreasing in x.

Lemma 9.2

Let \(\alpha \ge 0\) and \((M_{n},S_{n})_{n\ge 0}\) be a nontrivial MRW. Then any of the following three assumptions implies (9.2).

  1. (a)

    \(\mathbb {E}_{i}\sigma ^{>}(x)^{1+\alpha }<\infty \) for some \(i\in \mathcal {S}\) and all \(x\in \mathbb {R}_{\geqslant }\).

  2. (b)

    \(\mathbb {E}_{i}\sigma ^{>}(x)^{1+\alpha }<\infty \) and \(q(i,x)>0\) for some \((i,x)\in \mathcal {S}\times \mathbb {R}_{\geqslant }\).

  3. (c)

    \(\mathbb {E}_{i}\sigma ^{>}(0)^{1+\alpha }<\infty \) and \(\mathbb {E}_{i}\tau (i)^{1+\alpha }<\infty \) for all \(i\in \mathcal {S}\).

Proof

(a) Pick any \(j\ne i\). Then there exists \(x_{0}\in \mathbb {R}_{\geqslant }\) such that

$$\begin{aligned} \mathbb {P}_{i}(\sigma ^{>}(2x)>\tau (j),S_{\tau (j)}\le x)\ \ge \ \mathbb {P}_{i}(\sigma ^{>}(2x_{0})>\tau (j),S_{\tau (j)}\le x_{0}) =: p > 0 \end{aligned}$$

for all \(x\ge x_{0}\). For any such x, we now infer

$$\begin{aligned} \infty> \mathbb {E}_{i}\sigma ^{>}(2x)^{1+\alpha } \ge \mathbb {E}_{i}\sigma ^{>}(2x)^{1+\alpha }\mathbf {1}_{\{\sigma ^{>}(2x)>\tau (j),\,S_{\tau (j)}\le x\}}\ \ge \ p\,\mathbb {E}_{j}\sigma ^{>}(x)^{1+\alpha }, \end{aligned}$$

that is \(\mathbb {E}_{j}\sigma ^{>}(x)^{1+\alpha }<\infty \). Since \(\sigma ^{>}(x)\) is nondecreasing in x, (9.2) follows.

(b) Since \(q(i,x)>0\), we can find \(h>0\) small enough such that

$$\begin{aligned} p(x)\ :=\ \mathbb {P}_{i}(\sigma ^{>}(x)>\tau (i),S_{\tau (i)}\le -h) > 0. \end{aligned}$$

Consequently,

$$\begin{aligned} \infty> \mathbb {E}_{i}\sigma ^{>}(x)^{1+\alpha }\mathbf {1}_{\{\sigma ^{>}(x)>\tau (i),S_{\tau (i)}\le -h\}}\ \ge \ p(x)\,\mathbb {E}_{i}\sigma ^{>}(x+h)^{1+\alpha }. \end{aligned}$$

Since \(p(x+nh)\ge p(x+(n-1)h)\) for all \(n\in \mathbb {N}\), an induction over n provides us with \(\mathbb {E}_{i}\sigma ^{>}(x+nh)^{1+\alpha }<\infty \), and this implies (9.2) by an appeal to (a).

(c) If \(\mathbb {P}_{i}(S_{\tau (i)}\ge 0)=1\) for some \(i\in \mathcal {S}\) and thus \(\mathbb {P}_{i}(S_{\tau (i)}>0)>0\) by nontriviality, then the assertion follows easily from Lemma 7.8.

Left with the case \(\mathbb {P}_{i}(S_{\tau (i)}<0)>0\) for all \(i\in \mathcal {S}\), fix i. If \(q(i,0)>0\), then the assertion follows from (b). Assuming \(q(i,0)=0\) and thus \(\mathbb {P}_{i}(\sigma ^{>}(0)<\tau (i),S_{\tau (i)}<0)>0\), we will show below that

$$\begin{aligned} q_{n}(j,0)\,:=\,\mathbb {P}_{j}(\sigma ^{>}(0)>\tau _{n}(j),S_{\tau _{n}(j)}<0)\,>\,0 \end{aligned}$$

for some \(j\in \mathcal {S}\) and \(n\in \mathbb {N}\). Then (9.2) can be concluded in a similar manner as in (b).

Define

$$\begin{aligned} \kappa := \inf \left\{ 0\le n<\tau (i):S_{n}=\max _{0\le k\le \tau (i)}S_{k}\right\} \end{aligned}$$

By assumption, there exists \(j\in \mathcal {S}\backslash \{i\}\) such that

$$\begin{aligned}&\mathbb {P}_{i}\left( M_{\kappa }=i,\kappa =\tau _{m}(j)\ge \sigma ^{>}(0),S_{\tau (i)}<0,\tau _{m+l}(j)<\tau (i)<\tau _{m+l+1}(j)\right) \\&\quad =:\ p' > 0 \end{aligned}$$

for some \(m\in \mathbb {N}\) and \(l\in \mathbb {N}_{0}\). Put \(E:=\{S_{k}\le 0\text { for }1\le k\le \tau (i),\tau _{l}(j)<\tau (i)<\tau _{l+1}(j)\}\).

Then

$$\begin{aligned} p'&= \mathbb {P}_{i}\big (S_{k}<S_{\tau _{m}(j)}\text { for }0\le k<\tau _{m}(j),\,S_{\tau (i)}-S_{\tau _{m}(j)}\le -S_{\tau _{m}(j)},\\&\quad \qquad S_{\tau _{m}(j)+k}-S_{\tau _{m}(j)}\le 0\text { for }1\le k\le \tau (i)-\tau _{m}(j), \\&\qquad \quad \tau _{m+l}(j)<\tau (i)<\tau _{m+l+1}(j)\big )\\&=\ \int _{\mathbb {R}_{>}}\mathbb {P}_{j}\big (E\cap \{S_{\tau (i)}<-x\}\big )\ \mathbb {P}_{i}(S_{\tau _{m}(j)}\in dx,S_{k}<S_{\tau _{m}(j)} \text { for }0 \le k<\tau _{m}(j)<\tau (i))\\&=\ \int _{\mathbb {R}_{>}}\mathbb {P}_{j}\big (E\cap \{S_{\tau (i)}<-x\}\big ) \\&\qquad \qquad \qquad \quad \times \mathbb {P}_{j}(S_{\tau _{m+l}(j)}-S_{\tau (i)}\in dx,S_{k}<S_{\tau _{m+l}(j)}\text { for }\tau (i)\le k<\tau _{m}(j)<\tau _{2}(i))\\&=\ \mathbb {P}_{j}\big (E\cap \{S_{\tau _{m+l}(j)}<0,S_{k}<S_{\tau _{m+l}(j)}\text { for }\tau (i)\le k<\tau _{m+l}(j)<\tau _{2}(i)\}\big )\\&\le \ \mathbb {P}_{j}\big (S_{\tau _{m+l}(j)}<0,\,S_{k}\le 0\text { for }1\le k<\tau _{m+l}(j)\big )\\&=\ q_{m+l}(j,0) \end{aligned}$$

hence \(q_{m+l}(j,0)>0\). \(\square \)

Remark 9.3

Returning to part (c) of the previous lemma, its proof reveals the perhaps surprising fact that \(\mathbb {E}_{i}\tau (i)^{1+\alpha }<\infty \) is needed when \(\mathbb {P}_{i}(S_{\tau (i)}\ge 0)=1\) for all \(i\in \mathcal {S}\), but not otherwise. The following simple example illustrates that one cannot dispense with this assumption there. Given any \(\alpha \ge 0\), consider a Sisyphus chain on \(\mathbb {N}_{0}\) with transition probabilities

$$\begin{aligned} p_{01}\ =\ 1\quad \text {and}\quad p_{n,n+1}\ =\ 1-p_{n0} = \left( \frac{n}{n+1}\right) ^{1+\alpha }\text { for }n\ge 1, \end{aligned}$$

so that \(\mathbb {P}_{0}(\tau (0)>n)=p_{01}\cdots p_{n-1,n}=n^{-1-\alpha }\) for \(n\ge 1\) and thus \(\mathbb {E}_{0}\tau (0)^{1+\alpha }=\infty \). Further defining \(X_{n}=(M_{n}+1)^{-2}>0\), we obviously have \(\sigma ^{>}(0)=1\) a.s. and

$$\begin{aligned} S_{\tau (0)} = 1+\sum _{k=1}^{\tau (0)-1}\frac{1}{(M_{k}+1)^{2}} < 1+\frac{\pi ^{2}}{6}\ =:\ x\quad \mathbb {P}_{0}\text {-a.s.} \end{aligned}$$

Consequently, \(\mathbb {P}_{0}(\sigma ^{>}(x)>\tau (0))=1\) and therefore \(\mathbb {E}_{0}\sigma ^{>}(x)^{1+\alpha }\ge \mathbb {E}_{0}\tau (0)^{1+\alpha }=\infty \).
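Both ingredients of this example can be checked in exact arithmetic, e.g., for \(\alpha =1\): the survival probabilities of the Sisyphus chain telescope to \(n^{-(1+\alpha )}\), and the partial sums along an excursion stay below \(1+\pi ^{2}/6\). A sketch:

```python
import math
from fractions import Fraction

ALPHA = 1                              # integer alpha keeps the arithmetic exact

def surv(n):
    # P_0(tau(0) > n) = p_01 p_12 ... p_{n-1,n} with p_{k,k+1} = (k/(k+1))^{1+alpha}
    p = Fraction(1)
    for k in range(1, n):
        p *= Fraction(k, k + 1) ** (1 + ALPHA)
    return p

for n in (1, 5, 50):
    assert surv(n) == Fraction(1, n ** (1 + ALPHA))   # telescoping: n^{-(1+alpha)}

# partial sums of (M_k + 1)^{-2} along an excursion through 0,1,2,... stay below pi^2/6
s = 1 + sum(1.0 / (k + 1) ** 2 for k in range(1000))
assert s < 1 + math.pi ** 2 / 6
```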

9.3 Moment Result for a Modification of \(\sigma ^{>}(x)\)

In view of Example 9.1 and Lemma 9.2, a look at the stopping time

$$\begin{aligned} {\overline{\sigma }}^{>}(x)\ :=\ \inf \{n>\tau (M_{0}):S_{n}>x\} \end{aligned}$$

appears to be natural. When the driving chain has initial state \(i\in \mathcal {S}\), it is the post-\(\tau (i)\) passage time of \((S_{n})_{n\ge 0}\) beyond x. Evidently, \(\sigma ^{>}(x)\le {\overline{\sigma }}^{>}(x)\) a.s. for all \((i,x)\in \mathcal {S}\times \mathbb {R}_{\geqslant }\). The next result provides an equivalent condition for the finiteness of \(\mathbb {E}_{i}{\overline{\sigma }}^{>}(x)^{1+\alpha }\).

Proposition 9.4

Let \(\alpha \ge 0\) and \((M_{n},S_{n})_{n\ge 0}\) be a nontrivial MRW with positive divergent embedded RW \((S_{\tau _{n}(i)})_{n\ge 0}\) and \(\mathbb {E}_{i}\tau (i)^{1+\alpha }<\infty \) for some/all \(i\in \mathcal {S}\). Then \(A_{i}(x)\) is ultimately positive for all \(i\in \mathcal {S}\) and the following assertions are equivalent:

  1. (a)

    \(\mathbb {E}_{i}J_{i}(S_{\tau (i)}^{-})^{1+\alpha }<\infty \) for some/all \(i\in \mathcal {S}\).

  2. (b)

    \(\mathbb {E}_{i}{\overline{\sigma }}^{>}(x)^{1+\alpha }<\infty \) for some/all \((i,x)\in \mathcal {S}\times \mathbb {R}_{\geqslant }\).

  3. (c)

    \(\mathbb {E}_{i}\tau _{\nu (i,x)}(i)^{1+\alpha }<\infty \) for some/all \((i,x)\in \mathcal {S}\times \mathbb {R}_{\geqslant }\).

Moreover, these conditions imply \(\mathbb {E}_{i}\sigma ^{>}(x)^{1+\alpha }<\infty \) for all \((i,x)\in \mathcal {S}\times \mathbb {R}_{\geqslant }\).

Proof

Fix any \(i\in \mathcal {S}\) and write again \(\tau ,\tau _{n},\) etc. as shorthand for \(\tau (i),\tau _{n}(i),\) etc. Recalling \(\nu (x)=\inf \{n\ge 1:S_{\tau _{n}}>x\}\), observe also that \({\overline{\sigma }}^{>}(x)\le \tau _{\nu (x)}\) \(\mathbb {P}_{i}\)-a.s.

“(a)\(\Rightarrow \)(b)” By Theorem 3.1 or 3.3, (a) ensures \(\mathbb {E}_{i}\nu (x)^{1+\alpha }<\infty \) for all \(x\in \mathbb {R}_{\geqslant }\) which in combination with \(\mathbb {E}_{i}\tau ^{1+\alpha }<\infty \) provides us with \(\mathbb {E}_{i}\tau _{\nu (x)}^{1+\alpha }<\infty \) for all \(x\in \mathbb {R}_{\geqslant }\) as pointed out earlier (use Wald’s equation for \(\alpha =0\) and see the Proof of Lemma 8.12 for \(\alpha >0\)). Hence, (b) follows from \({\overline{\sigma }}^{>}(x)\le \tau _{\nu (x)}\).

“(b)\(\Rightarrow \)(a)” Assuming \(\mathbb {E}_{i}{\overline{\sigma }}^{>}(0)^{1+\alpha }<\infty \) and putting \(p:=\mathbb {P}_{i}({\overline{\sigma }}^{>}(0)=\tau )\), we distinguish the two cases \(p=1\) and \(p<1\).

If \(p=1\) and thus \({\overline{\sigma }}^{>}(0)=\tau _{\nu (0)}\), then (a) follows by an appeal to Theorem 3.1 or 3.3. If \(p<1\), then we infer with the help of Lemma 9.5 below

$$\begin{aligned} \infty&> \mathbb {E}_{i}{\overline{\sigma }}^{>}(0)^{1+\alpha }\ \ge \ \mathbb {E}_{i}{\overline{\sigma }}^{>}(0)^{1+\alpha }\mathbf {1}_{\{{\overline{\sigma }}^{>}(0)>\tau \}}\\&\ge \mathbb {E}_{i}({\overline{\sigma }}^{>}(0)-\tau )^{1+\alpha }\mathbf {1}_{\{{\overline{\sigma }}^{>}(0)>\tau \}}\\&= \int \mathbb {E}_{i}\sigma ^{>}(y)^{1+\alpha }\ \mathbb {P}_{i}\left( S_{\tau }^{-}\in dy,\,{\overline{\sigma }}^{>}(0)>\tau \right) \\&\ge \int \mathbb {E}_{i}\sigma ^{>}(y)^{1+\alpha }\ \mathbb {P}_{i} \left( S_{\tau }^{-}\in dy,\,S_{\tau }<0\right) \\&\gtrsim \int J_{i}(y)^{1+\alpha }\ \mathbb {P}_{i}\left( S_{\tau }^{-}\in dy\right) \end{aligned}$$

and thus the assertion.

“(a)\(\Leftrightarrow \)(c)” This follows directly from Lemma 7.8 when noting that its proviso in (b), namely \(A_{i}(y)>0\) for all sufficiently large y, is guaranteed here by the positive divergence of \((S_{\tau _{n}})_{n\ge 0}\). \(\square \)

The following auxiliary lemma extends Lemma 3.5 in [33] to MRW \((M_{n},S_{n})_{n\ge 0}\); we give the rather technical proof only under the stronger assumption of positive divergence of the \((S_{\tau _{n}(i)})_{n\ge 0}\), because this is enough for our purposes.

Lemma 9.5

Let \(\alpha \ge 0\) and \((M_{n},S_{n})_{n\ge 0}\) be a nontrivial MRW such that \(S_{\tau _{n}(i)}\rightarrow \infty \) in probability for some \(i\in \mathcal {S}\). Then as \(x\rightarrow \infty \),

$$\begin{aligned} \sum _{n\ge 1}\frac{1}{n}\,\mathbb {P}_{j}\left( S_{n}^{*}\le x\right) \ \gtrsim \ \log J_{j}(x) \end{aligned}$$

and

$$\begin{aligned} \mathbb {E}_{j}\sigma ^{>}(x)^{\alpha }\ \asymp \ \sum _{n\ge 1}n^{\alpha -1}\,\mathbb {P}_{j}\left( S_{n}^{*}\le x\right) \gtrsim J_{j}(x)^{\alpha } \end{aligned}$$

for all \(j\in \mathcal {S}\) and \(\alpha >0\).

Proof

By Theorem 3.1, positive divergence of the \((S_{\tau _{n}(i)})_{n\ge 0}\) ensures \(A_{i}(x)>0\) for sufficiently large x and \(\lim _{n\rightarrow \infty }\mathbb {P}_{i}(S_{n}^{*}\le x)=0\) for all \((i,x)\in \mathcal {S}\times \mathbb {R}_{\geqslant }\), the latter because \(\sigma ^{>}(x)\le \tau _{\nu (i,x)}(i)<\infty \) a.s. Fix \(i\in \mathcal {S}\) and define

$$\begin{aligned} m_{\delta }(x)\ :=\ \inf \left\{ n\ge 1:\mathbb {P}_{i}\left( S_{n}^{*}\le x\right) <1-\delta \right\} \end{aligned}$$

for \(x\in \mathbb {R}_{>}\) and \(0<\delta <1\) which are all finite with \(m_{\delta }(x)\uparrow \infty \) as \(x\uparrow \infty \). Then

$$\begin{aligned} \sum _{n\ge 1}n^{\alpha -1}\,\mathbb {P}_{i}\left( S_{n}^{*}\le x\right)&\ge \ \sum _{n=1}^{m_{\delta }(x)}n^{\alpha -1}\,\mathbb {P}_{i}\left( S_{n}^{*}\le x\right) \ \ge \ {\left\{ \begin{array}{ll} (1-\delta )\log m_{\delta }(x),&{}\text {if }\alpha =0,\\ c(1-\delta )m_{\delta }(x)^{\alpha },&{}\text {if }\alpha >0, \end{array}\right. } \end{aligned}$$

for some \(c\in \mathbb {R}_{>}\). Using \(J_{i}(x)\le x/A_{i}(x)\), it therefore remains to show that

$$\begin{aligned} \frac{x}{A_{i}(x)} \lesssim m_{\delta }(x)\quad \text {as }x\rightarrow \infty . \end{aligned}$$
(9.4)

We will now assume that (9.4) fails and produce a contradiction. First note that under this assumption, we find for all \(\varepsilon \in (0,1)\) an increasing nonnegative and unbounded sequence \((x_{l})_{l\ge 1}\) (depending on \(\varepsilon \)) such that

$$\begin{aligned} \sup _{l\ge 1}\,2m_{\delta }(x_{l})\frac{A_{i}(x_{l})}{x_{l}} \le \varepsilon \end{aligned}$$
(9.5)

which may be restated as

$$\begin{aligned} (1-\varepsilon )x_{l}+2m_{\delta }(x_{l})A_{i}(x_{l}) \le x_{l}\quad \text {for all }l\ge 1. \end{aligned}$$
(9.6)

Putting

$$\begin{aligned} H_{n}^{i} := \max _{\tau _{n-1}<k\le \tau _{n}}(S_{k}-S_{\tau _{n-1}})^{+} \end{aligned}$$
(9.7)

for \(n\in \mathbb {N}\) with generic copy \(H^{i}\) under \(\mathbb {P}_{i}\) and writing \(m_{l}\) as shorthand for \(m_{\delta }(x_{l})\), we infer

$$\begin{aligned} A\,:=\mathbb {P}_{i}\left( S_{\tau _{m_{l}}}^{*}>x_{l}\right)&= \mathbb {P}_{i}\left( \max _{1\le k\le m_{l}}\left( S_{\tau _{k-1}}+H_{k}^{i}\right) >x_{l}\right) \\&= \sum _{n=1}^{m_{l}}\mathbb {P}_{i}\left( \max _{1\le k\le n-1}\left( S_{\tau _{k-1}}+H_{k}^{i}\right) \le x_{l}<S_{\tau _{n-1}}+H_{n}^{i}\right) \\&= \sum _{n=1}^{m_{l}}\mathbb {P}_{i}(E_{n}), \end{aligned}$$

where \(W_{j,k}:=S_{\tau _{j+k-1}}-S_{\tau _{j}}\) and

$$\begin{aligned} E_{n} := \left\{ \max _{1\le k\le n-1}\left( W_{2m_{l}-n+1,k}+H_{2m_{l}-n+k+1}^{i}\right) \le x_{l}<W_{2m_{l}-n+1,n}+H_{2m_{l}+1}^{i}\right\} . \end{aligned}$$

Moreover, we have used that \((S_{\tau _{n}}-S_{\tau _{n-1}},H_{n}^{i})_{n\ge 1}\) forms a sequence of iid random vectors under \(\mathbb {P}_{i}\). Since \(S_{\tau _{n}}\rightarrow \infty \) a.s., we can pick \(h>0\) and \(n_{0}\in \mathbb {N}\) such that

$$\begin{aligned} \mathbb {P}_{i}(H^{i}>h) < \frac{\delta }{4}\quad \text {and}\quad \theta := \inf _{n\ge n_{0}}\mathbb {P}_{i}(S_{\tau _{n}}>h)\ >\ \frac{1}{2}. \end{aligned}$$

Choosing l so large that \(m_{l}>n_{0}\), we now further estimate

$$\begin{aligned} A&\le \sum _{n=1}^{m_{l}}\mathbb {P}_{i}(E_{n})\,\frac{1}{\theta }\,\mathbb {P}_{i}(S_{\tau _{2m_{l}-n+1}}>h) \\&\le \frac{1}{\theta }\sum _{n=1}^{m_{l}}\mathbb {P}_{i}\big (E_{n}\cap \{S_{\tau _{2m_{l}}}+H_{2m_{l}+1}^{i}>x_{l}+h\}\big )\\&\le \frac{1}{\theta }\,\mathbb {P}_{i}\left( S_{\tau _{2m_{l}}}+H_{2m_{l}+1}^{i}>x_{l}+h\right) \\&\le \ \frac{1}{\theta }\left( \mathbb {P}_{i}(S_{\tau _{2m_{l}}}>x_{l}) + \mathbb {P}_{i}(H^{i}>h)\right) \\&\le \frac{1}{\theta }\,\mathbb {P}_{i}(S_{\tau _{2m_{l}}}>x_{l})\ +\ \frac{\delta }{2}. \end{aligned}$$

Put \(\zeta _{k,l}:=[(S_{\tau _{k}}-S_{\tau _{k-1}})\vee (-x_{l})]\wedge x_{l}\) and observe that \(A_{i}(x_{l})=\mathbb {E}_{i}\zeta _{k,l}\). Use (3.71) in [33] to obtain

$$\begin{aligned} \mathbb {P}_{i}(S_{\tau _{2m_{l}}}> x_{l})\ \le \ \mathbb {P}_{i}\left( \sum _{k=1}^{2m_{l}}\zeta _{k,l}>x_{l}\right) \ +\ 2m_{l}\,\mathbb {P}_{i}(S_{\tau }>x_{l}). \end{aligned}$$

Now use (9.6), Chebyshev’s inequality and \(x_{l}^{2}\,\mathbb {P}_{i}(S_{\tau }>x_{l})\le \mathbb {E}_{i}\zeta _{1,l}^{2}\) to obtain

$$\begin{aligned} \mathbb {P}_{i}(S_{\tau _{2m_{l}}}> x_{l})&\le \mathbb {P}_{i}\left( \sum _{k=1}^{2m_{l}}\big (\zeta _{k,l}-A_{i}(x_{l})\big )>(1-\varepsilon )x_{l}\right) \,+\,2m_{l}\,\mathbb {P}_{i}(S_{\tau }>x_{l})\\&\le \frac{2m_{l}\,\mathbb {E}_{i}\zeta _{1,l}^{2}}{(1-\varepsilon )^{2}x_{l}^{2}}\ +\ 2m_{l}\,\mathbb {P}_{i}(S_{\tau }>x_{l})\ \le \ \frac{4m_{l}\,\mathbb {E}_{i}\zeta _{1,l}^{2}}{(1-\varepsilon )^{2}x_{l}^{2}} \end{aligned}$$

for all \(l\ge 1\). By [33, Lemma 3.2], \(\mathbb {E}_{i}\zeta _{1,l}^{2}\le 3x_{l}A_{i}(x_{l})\) for all sufficiently large l which in combination with (9.5) provides us with

$$\begin{aligned} \mathbb {P}_{i}\left( S_{\tau _{2m_{l}}}>x_{l}\right) \ \le \ \frac{12m_{l}A_{i}(x_{l})}{(1-\varepsilon )^{2}x_{l}}\ \le \ \frac{6\varepsilon }{(1-\varepsilon )^{2}} \end{aligned}$$

for all such l. We have thus shown that

$$\begin{aligned} \mathbb {P}_{i}\left( S_{\tau _{m_{l}}}^{*}>x_{l}\right) \ \le \ \frac{6\varepsilon }{\theta (1-\varepsilon )^{2}}\,+\,\frac{\delta }{2} \end{aligned}$$

for any \(\varepsilon \in (0,1)\) and sufficiently large l (not depending on \(\varepsilon \)), say \(l\ge l_{0}\). Finally, fix any \(l\ge l_{0}\) and choose \(\varepsilon \) so small that

$$\begin{aligned} \frac{6\varepsilon }{\theta (1-\varepsilon )^{2}} <\ \frac{\delta }{2}. \end{aligned}$$

Then we arrive at

$$\begin{aligned} \delta> \mathbb {P}_{i}\left( S_{\tau _{m_{l}}}^{*}>x_{l}\right) \ \ge \ \mathbb {P}_{i}\left( S_{m_{l}}^{*}>x_{l}\right) \end{aligned}$$

which contradicts our definition of \(m_{l}=m_{\delta }(x_{l})\). \(\square \)

Proof

(of Theorem 6.9, “(6.6) \(\Rightarrow \mathbb {E}_{i}J_{i}(S_{\tau (i)}^-)^{1+\alpha }<\infty \)”) Suppose (6.6) is true and recall that \(\mathcal {S}^{>}\) denotes the set of recurrent states of the ladder chain \((M_{n}^{>})_{n\ge 0}\). Positive divergence ensures \(\mathcal {S}^{>}\ne \emptyset \). Pick any \(i\in \mathcal {S}^{>}\) and put \(\kappa :=\inf \{n\ge 0: S_{n}=H_{1}^{i}\}\), \(H_{1}^{i}\) as defined in (9.7), and \(\widehat{\sigma }^{>}:=\inf \{n\ge 1: S_{\kappa +n}-S_{\kappa }>0\}\). Using \(|\mathcal {S}^{>}|<\infty \), we then obtain

$$\begin{aligned} \mathbb {E}_{i}{\overline{\sigma }}^{>}(0)^{1+\alpha }&\le \mathbb {E}_{i}\left( \sigma ^>\,\mathbf {1}_{\left\{ H_{1}^{i}=0\right\} }+ \kappa \,\mathbf {1}_{\{H_{1}^{i}>0\}} +\sum _{j\in \mathcal {S}^{>}}\mathbf {1}_{\left\{ M_{\kappa }=j,\,H_{1}^{i}>0\right\} }\,\widehat{\sigma }^{>}\right) ^{1+\alpha }\\&\le (|\mathcal {S}^{>}|+2)^\alpha \,\left( \mathbb {E}_{i}(\sigma ^{>})^{1+\alpha }+\mathbb {E}_{i}\tau (i)^{1+\alpha }+ \sum _{j\in \mathcal {S}^{>}}\mathbb {E}_j(\sigma ^>)^{1+\alpha }\right) < \infty , \end{aligned}$$

which is equivalent to (6.6) by Proposition 9.4. \(\square \)

10 Asymptotic Behavior of \(S_{n}/n\)

10.1 Strong Law of Large Numbers

It is well known that an ordinary random walk \((S_{n})_{n\ge 0}\) satisfies the strong law of large numbers (SLLN), viz. \(n^{-1}S_{n}\rightarrow \mu \) a.s. for some \(\mu \in \mathbb {R}\), iff \(X_{1}\) is integrable and \(\mu =\mathbb {E}X_{1}\). The natural substitute for the latter condition in the case of a MRW \((M_{n},S_{n})_{n\ge 0}\) is that \(X_{1}\) is \(\mathbb {P}_{\pi }\)-integrable and \(\mathbb {E}_{\pi }X_{1}=\mu \). However, this is only sufficient but not necessary for the SLLN to hold as shown by the next theorem and a subsequent example. For \(n\ge 1\), put

$$\begin{aligned} S_{n}^{\oplus }\,:=\,\sum _{k=1}^{n}X_{k}^{+}\quad \text {and}\quad S_{n}^{\ominus }\,:=\,\sum _{k=1}^{n}X_{k}^{-} \end{aligned}$$

which are clearly again MRW with driving chain \((M_{n})_{n\ge 0}\).

Theorem 10.1

Given a MRW \((M_{n},S_{n})_{n\ge 0}\), the following assertions are equivalent for any \(\mu \in \mathbb {R}\):

  1. (a)

    \(X_{1}\) is \(\mathbb {P}_{\pi }\)-integrable and \(\mathbb {E}_{\pi }X_{1}=\mu \).

  2. (b)

    \(n^{-1}S_{n}\rightarrow \mu \) a.s. and \(\mathbb {E}_{\pi }X_{1}\) exists, i.e., \(\mathbb {E}_{\pi }X_{1}^{-}<\infty \) or \(\mathbb {E}_{\pi }X_{1}^{+}<\infty \).

  3. (c)

    \(n^{-1}S_{n}^{\,\ominus }\rightarrow \mu ^{-}\), \(n^{-1}S_{n}^{\,\oplus }\rightarrow \mu ^{+}\) a.s. and \(\mu ^{+}-\mu ^{-}=\mu \).

  4. (d)

    \(\tau _{n}(i)^{-1}S_{\tau _{n}(i)}^{\,\ominus }\rightarrow \mu ^{-}\), \(\tau _{n}(i)^{-1}S_{\tau _{n}(i)}^{\,\oplus }\rightarrow \mu ^{+}\) \(\mathbb {P}_{i}\)-a.s. for some/all \(i\in \mathcal {S}\) and \(\mu ^{+}-\mu ^{-}=\mu \).

  5. (e)

    \(S_{\tau (i)}^{\,\ominus },S_{\tau (i)}^{\,\oplus }\) are \(\mathbb {P}_{i}\)-integrable and \(\mathbb {E}_{i}S_{\tau (i)}=\pi _{i}^{-1}\mu \) for some/all \(i\in \mathcal {S}\).

  6. (f)

    \(\mathbb {E}_{\pi }X_{1}\) exists and \(\sum _{n\ge 1}n^{-1}\mathbb {P}_{i}(|n^{-1}S_{n}-\mu |>\varepsilon )<\infty \) for all \(\varepsilon >0\) and some/all \(i\in \mathcal {S}\).

Proof

“(a)\(\Rightarrow \)(b)” Since \((X_{n}^{-})_{n\ge 1}\) and \((X_{n}^{+})_{n\ge 1}\) are ergodic stationary sequences under \(\mathbb {P}_{\pi }\) with finite means \(\mu ^{-}=\mathbb {E}_{\pi }X_{1}^{-}\) and \(\mu ^{+}=\mathbb {E}_{\pi }X_{1}^{+}\), respectively, \(n^{-1}S_{n}\rightarrow \mu \) a.s. with \(\mu =\mu ^{+}-\mu ^{-}\) follows from Birkhoff’s ergodic theorem.

“(b)\(\Rightarrow \)(c)” Suppose that \(\mu ^{+}:=\mathbb {E}_{\pi }X_{1}^{+}<\infty \). By using “(a)\(\Rightarrow \)(b)” for the MRW \((M_{n},S_{n}^{\oplus })_{n\ge 0}\), we infer \(n^{-1}S_{n}^{\oplus }\rightarrow \mu ^{+}\) a.s. and then further

$$\begin{aligned} n^{-1}S_{n}^{\ominus }\ =\ n^{-1}\left( S_{n}-S_{n}^{\oplus }\right) \ \rightarrow \ \mu -\mu ^{+}\ =:\ \mu ^{-}\quad \mathbb {P}_{\pi }\text {-a.s.} \end{aligned}$$

Assuming \(\mathbb {E}_{\pi }X_{1}^{-}<\infty \), we may argue in a similar manner.

“(c)\(\Rightarrow \)(d)” is trivial.

“(d)\(\Rightarrow \)(e)” Since \(n^{-1}\tau _{n}(i)\rightarrow \mathbb {E}_{i}\tau (i)=\pi _{i}^{-1}\) \(\mathbb {P}_{i}\)-a.s., we have that

$$\begin{aligned} n^{-1}S_{\tau _{n}(i)}^{\,\ominus }\rightarrow \pi _{i}^{-1}\mu ^{-}\quad \text {and}\quad n^{-1}S_{\tau _{n}(i)}^{\,\oplus }\rightarrow \pi _{i}^{-1}\mu ^{+}\quad \mathbb {P}_{i}\text {-a.s.} \end{aligned}$$

which implies the \(\mathbb {P}_{i}\)-integrability of \(S_{\tau (i)}^{\,\ominus },S_{\tau (i)}^{\,\oplus }\) and

$$\begin{aligned} \mathbb {E}_{i}S_{\tau (i)}\ =\ \mathbb {E}_{i}S_{\tau (i)}^{\,\oplus }-\mathbb {E}_{i}S_{\tau (i)}^{\,\ominus }\ =\ \pi _{i}^{-1}(\mu ^{+}-\mu ^{-})\ =\ \pi _{i}^{-1}\mu . \end{aligned}$$

“(e)\(\Rightarrow \)(a)” If \(S_{\tau (i)}^{\,\ominus }\) is \(\mathbb {P}_{i}\)-integrable, then \(\mathbb {E}_{i}S_{\tau (i)}^{\,\ominus }=\pi _{i}^{-1}\mathbb {E}_{\pi }X_{1}^{-}\) implies that \(X_{1}^{-}\) is \(\mathbb {P}_{\pi }\)-integrable. Similarly, the \(\mathbb {P}_{i}\)-integrability of \(S_{\tau (i)}^{\,\oplus }\) implies the \(\mathbb {P}_{\pi }\)-integrability of \(X_{1}^{+}\). Hence, we may use (4.2) to obtain

$$\begin{aligned} \mathbb {E}_{\pi }X_{1}\,\mathbb {E}_{i}\tau (i)\ =\ \mathbb {E}_{i}S_{\tau (i)}\ =\ \mu \,\pi _{i}^{-1} \end{aligned}$$

and thus \(\mathbb {E}_{\pi }X_{1}=\mu \).

“(b)\(\Rightarrow \)(f)” If \(n^{-1}S_{n}\rightarrow \mu \) a.s., then, for any \(\varepsilon >0\), \((S_{n}-(\mu -\varepsilon )n)_{n\ge 0}\) is positive divergent and \((S_{n}-(\mu +\varepsilon )n)_{n\ge 0}\) negative divergent and hence

$$\begin{aligned} \sum _{n\ge 1}\frac{1}{n}\mathbb {P}_{i}(S_{n}\le (\mu -\varepsilon )n)<\ \infty \quad \text {and}\quad \sum _{n\ge 1}\frac{1}{n}\mathbb {P}_{i}(S_{n}\ge (\mu +\varepsilon )n) <\ \infty \end{aligned}$$

for all \(i\in \mathcal {S}\) by Theorem 6.1(c), which in turn yields the asserted finiteness of the series \(\sum _{n\ge 1}n^{-1}\mathbb {P}_{i}(|n^{-1}S_{n}-\mu |\ge \varepsilon )\).

“(f)\(\Rightarrow \)(b)” Since \(\mathbb {E}_{\pi }X_{1}\) exists, Birkhoff’s ergodic theorem provides \(n^{-1}S_{n}\rightarrow \mathbb {E}_{\pi }X_{1}\) a.s. in the extended sense. Moreover, \(\sum _{n\ge 1}n^{-1}\mathbb {P}(|n^{-1}S_{n}-\mu |\ge \varepsilon )<\infty \) for all \(\varepsilon >0\) in combination with a simple Borel–Cantelli argument entails the negative divergence of \((S_{n}-n(\mu +\varepsilon ))_{n\ge 0}\) and the positive divergence of \((S_{n}-n(\mu -\varepsilon ))_{n\ge 0}\) for any such \(\varepsilon \). Consequently, by another appeal to Birkhoff’s ergodic theorem, we infer

$$\begin{aligned} \mathbb {E}_{\pi }X_{1}-(\mu -\varepsilon )\ =\ \lim _{n\rightarrow \infty }\frac{S_{n}-n(\mu -\varepsilon )}{n}\ \ge \ 0\quad \text {a.s.} \end{aligned}$$

for all \(\varepsilon >0\) and thus \(\mathbb {E}_{\pi }X_{1}\ge \mu \), and a similar argument for \((S_{n}-n(\mu +\varepsilon ))_{n\ge 0}\) shows \(\mathbb {E}_{\pi }X_{1}\le \mu \). Hence, \(n^{-1}S_{n}\rightarrow \mu \) a.s. as claimed.\(\square \)
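The equivalence “(a)\(\Leftrightarrow \)(b)” is easy to watch numerically for a small MRW. In the sketch below, the two-state driving chain, the deterministic kernels \(K_{ij}=\delta _{c_{ij}}\), and all constants are illustrative choices, not taken from the text:

```python
import random

# two-state driving chain and deterministic kernels K_ij = delta_{c_ij}
P0, P1 = 0.3, 0.4                      # p_00 = 0.3, p_10 = 0.4
c = {(0, 0): 1.0, (0, 1): -2.0, (1, 0): 3.0, (1, 1): 0.5}

pi = (4 / 11, 7 / 11)                  # solves pi P = pi for this chain
mu = (pi[0] * (P0 * c[(0, 0)] + (1 - P0) * c[(0, 1)])
      + pi[1] * (P1 * c[(1, 0)] + (1 - P1) * c[(1, 1)]))   # E_pi X_1 = 6.1/11

rng = random.Random(0)
state, S, N = 0, 0.0, 200_000
for _ in range(N):
    p_to_0 = P0 if state == 0 else P1
    nxt = 0 if rng.random() < p_to_0 else 1
    S += c[(state, nxt)]
    state = nxt

assert abs(S / N - mu) < 0.05          # SLLN: n^{-1} S_n -> E_pi X_1
```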

It is readily seen that the above equivalences (a)-(e) remain valid if \(\mu =\mathbb {E}_{\pi }X_{1}\in \{\pm \infty \}\). On the other hand, in contrast to ordinary random walks with iid increments, one cannot dispense with the finiteness of \(\mathbb {E}_{\pi }X_{1}^{-}\) or \(\mathbb {E}_{\pi }X_{1}^{+}\) in condition (b) as shown by the following example, where \(n^{-1}S_{n}\) converges a.s. to 0 although \(\mathbb {E}_{\pi }X_{1}^{-}=\mathbb {E}_{\pi }X_{1}^{+}=\infty \).

Example 10.2

Let \((M_{n})_{n\ge 0}\) be a birth–death chain on \(\mathbb {N}_{0}\) with \(p_{01}=1\) and transition probabilities

$$\begin{aligned} p_{i,i-1}\ =\ 1-p_{i,i+1}\ =\ \frac{i+2}{2(i+1)}\quad \text {for }i\ge 1. \end{aligned}$$

The stationary distribution is given by

$$\begin{aligned} \pi _{i} = \frac{p_{01}\cdots p_{i-1,i}}{p_{10}\cdots p_{i,i-1}}\pi _{0} \asymp \frac{1}{i^{2}} \end{aligned}$$

for all \(i\in \mathbb {N}_{0}\) and \(\pi _{0}\) such that \(\sum _{i\ge 0}\pi _{i}=1\). Define \(\gamma :\mathbb {N}_{0}\rightarrow \mathbb {R}\) by \(\gamma (0)=0\) and

$$\begin{aligned} \gamma (2i)\ =\ -\gamma (2i-1)\,:=\,i \end{aligned}$$

for \(i\ge 1\), and put \(X_{n}:=\gamma (M_{n})-\gamma (M_{n-1})\) for \(n\ge 1\). Since \(p_{2i-1\,2i}\rightarrow \frac{1}{2}\) as \(i\rightarrow \infty \), we obtain

$$\begin{aligned} \mathbb {E}_{\pi }X_{1}^{+}&\ge \sum _{i\ge 1}\pi _{2i-1}\cdot p_{2i-1\, 2i}\cdot \mathbb {E}(X_1|M_0=2i-1, M_1=2i) \\&\asymp \ \sum _{i\ge 1} \frac{1}{(2i)^2}\cdot \frac{1}{2}\cdot 2i\ =\ \infty \end{aligned}$$

and \(\mathbb {E}_{\pi } X_{1}^{-}=\infty \) follows analogously, whereas each \(S_{\tau (i)}\) is \(\mathbb {P}_{i}\)-a.s. vanishing and thus particularly integrable. Put \(N_n:=\inf \{k: \tau _k(0)\ge n\}\). Since

$$\begin{aligned} |S_{k}|\ \le \ |X_{k}|\ \le \ k-\tau _{n}(0)\quad \mathbb {P}_{0}\text {-a.s.} \end{aligned}$$

for all \(n\in \mathbb {N}_{0}\) and \(\tau _{n}(0)\le k<\tau _{n+1}(0)\) and \(\mathbb {E}_{0}\tau (0)<\infty \), we infer that

$$\begin{aligned} \left| \frac{S_{n}}{n}\right| \ \le \ \left| \frac{X_{n}}{n}\right| \ \le \ \frac{\tau _{N_{n}}(0)-\tau _{N_{n}-1}(0)}{n}\ \mathop {\longrightarrow }\limits ^{n\rightarrow \infty }\ 0\quad \mathbb {P}_{0}\text {-a.s.} \end{aligned}$$

and then the same convergence a.s.
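The path bound just used can be confirmed by simulation. The sketch below takes \(X_{n}=\gamma (M_{n})-\gamma (M_{n-1})\) (the increments underlying the computation above), so that \(S_{n}=\gamma (M_{n})\) telescopes under \(\mathbb {P}_{0}\) and \(|S_{n}|\) is dominated by the age of the running excursion:

```python
import random

def gamma(m):                          # gamma(0) = 0, gamma(2i) = i, gamma(2i-1) = -i
    return m // 2 if m % 2 == 0 else -(m + 1) // 2

def step(i, rng):                      # one transition of the birth-death chain
    if i == 0:
        return 1                       # p_01 = 1
    return i - 1 if rng.random() < (i + 2) / (2 * (i + 1)) else i + 1

rng = random.Random(3)
M, S, last0 = 0, 0, 0
for n in range(1, 20001):
    M_next = step(M, rng)
    S += gamma(M_next) - gamma(M)      # X_n = gamma(M_n) - gamma(M_{n-1})
    M = M_next
    if M == 0:
        last0 = n
    assert S == gamma(M)               # S_n telescopes to gamma(M_n) under P_0
    assert abs(S) <= n - last0         # dominated by the age of the current excursion
```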

Now choosing any integrable sequence \((Y_{n})_{n\ge 1}\) of nondegenerate iid random variables independent of \((M_{n})_{n\ge 0}\) and putting

$$\begin{aligned} X_{n}\,:=\,Y_{n}+\gamma (M_{n})-\gamma (M_{n-1}) \end{aligned}$$

for \(n\ge 1\), the associated MRW \((M_{n},S_{n})_{n\ge 0}\) is easily seen to be regular with \(n^{-1}S_{n}\rightarrow \mu :=\mathbb {E}Y_{1}\) a.s., and

$$\begin{aligned} \mathbb {E}_{i}S_{\tau (i)} = \mathbb {E}_{i}\left( \sum _{k=1}^{\tau (i)}Y_{k}\right) = \mu \,\mathbb {E}_{i}\tau (i) <\ \infty \end{aligned}$$

for all \(i\in \mathbb {N}_{0}\), although \(\mathbb {E}_{\pi }X_{1}\) does not exist.
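The stationary distribution used in this example can be checked in exact arithmetic. The closed form \(\pi _{i}/\pi _{0}=4/(i(i+2))\) asserted below is obtained by telescoping the displayed product (our computation, not stated in the text) and confirms \(\pi _{i}\asymp i^{-2}\):

```python
from fractions import Fraction

def up(k):                             # p_{k,k+1} = k / (2(k+1)), k >= 1
    return Fraction(k, 2 * (k + 1))

def down(k):                           # p_{k,k-1} = (k+2) / (2(k+1)), k >= 1
    return Fraction(k + 2, 2 * (k + 1))

def pi_ratio(i):
    # pi_i / pi_0 = p_01 p_12 ... p_{i-1,i} / (p_10 p_21 ... p_{i,i-1}), with p_01 = 1
    num, den = Fraction(1), Fraction(1)
    for k in range(1, i):
        num *= up(k)
    for k in range(1, i + 1):
        den *= down(k)
    return num / den

for i in (1, 10, 100):
    assert pi_ratio(i) == Fraction(4, i * (i + 2))    # hence pi_i ~ 4 pi_0 / i^2
```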

10.2 Proof of Theorem 5.5 (Kesten Trichotomy)

Since \((X_{n})_{n\ge 1}\) forms an ergodic stationary sequence under \(\mathbb {P}_{\pi }\),

$$\begin{aligned} \liminf _{n\rightarrow \infty }\,n^{-1}\,S_{n}\quad \text {and}\quad \limsup _{n\rightarrow \infty }\,n^{-1}\,S_{n} \end{aligned}$$

are both a.s. constants. Therefore, it suffices to prove the trichotomy under \(\mathbb {P}_{i}\) for any fixed \(i\in \mathcal {S}\). Note that \(\mathbb {E}_{i}|S_{\tau (i)}|=\infty \) rules out that the given MRW is null-homologous.

If \((S_{n})_{n\ge 0}\) is positive divergent, then (8.2) and (8.3) provide us with

$$\begin{aligned} \liminf _{n\rightarrow \infty }\,\frac{S_{n}}{n}\ \ge \ \liminf _{n\rightarrow \infty }\,\frac{S_{\tau _{n}}-D_{n+1}^{i}}{\tau _{n+1}}\ =\ \liminf _{n\rightarrow \infty }\, \frac{S_{\tau _{n}}^{+}\,(1-o(1))}{\tau _{n+1}}\ =\ \infty \quad \text {a.s.} \end{aligned}$$

and thus the desired result (PD+). In the negative divergent case, the same conclusion holds for \((-S_{n})_{n\ge 0}\), thus giving (ND+).

If \((S_{n})_{n\ge 0}\) is oscillating, then \(\mathbb {E}_{i}|S_{\tau (i)}|=\infty \) entails at least one of

$$\begin{aligned} \liminf _{n\rightarrow \infty }n^{-1}S_{\tau _{n}}\ =\ -\infty \quad \text {or}\quad \limsup _{n\rightarrow \infty }n^{-1}S_{\tau _{n}}\ =\ \infty \quad \text {a.s.} \end{aligned}$$

by Kesten’s trichotomy for ordinary random walks, and thus also

$$\begin{aligned} \liminf _{n\rightarrow \infty }n^{-1}S_{n}\ =\ -\infty \quad \text {or}\quad \limsup _{n\rightarrow \infty }n^{-1}S_{n}\ =\ \infty \quad \text {a.s.} \end{aligned}$$

W.l.o.g. assuming the second alternative, let \(c:=\liminf _{n\rightarrow \infty }n^{-1}S_{n}\le 0\) be finite. Then \(\liminf _{n\rightarrow \infty }\,n^{-1}(S_{n}+n\,(|c|+1))=1>0\) which in turn entails the positive divergence of \((S_{n}+n(|c|+1))_{n\ge 0}\) and so, by the first part of this proof,

$$\begin{aligned} \infty = \liminf _{n\rightarrow \infty }\,n^{-1}(S_{n}+ n\,(|c|+1))\ =\ \liminf _{n\rightarrow \infty }\,n^{-1}S_{n}+|c|+1\quad \text {a.s.} \end{aligned}$$

which contradicts the finiteness of c. \(\square \)

11 Counterexamples

11.1 Theorem 6.1: \(\mathbb {E}_{i}\tau (i)\log \tau (i)<\infty \) is Necessary for the Equivalence of Part (c) and (6.3)

Our first counterexample will show that one cannot dispense with \(\mathbb {E}_{i}\tau (i)\log \tau (i)<\infty \) for the equivalence of part (c) and the integral criterion (6.3) in Theorem 6.1, not even when \((M_{n},S_{n})_{n\ge 0}\) is positive divergent with finite stationary drift.

Example 11.1

The infinite petal flower chain from Example 7.2 may be generalized by having excursions of variable length away from the central state 0. To be more precise, suppose that \((M_{n})_{n\ge 0}\) has state space \(\mathcal {S}\subset \{0\}\cup \mathbb {N}^{2}\) and satisfies

$$\begin{aligned} p_{0,(n,1)}\,=\,\mathbb {P}({\varGamma }=n)\quad \text {and}\quad p_{(n,k),(n,k+1)}\,=\,1\,=\,p_{(n,n-1),0} \end{aligned}$$

for all \(n\ge 2\) and \(k=1,\ldots ,n-2\), where \({\varGamma }\) denotes an \(\mathbb {N}\)-valued random variable with finite mean. In other words, whenever in state 0, the chain enters a petal at \((n,1)\) with probability \(\mathbb {P}({\varGamma }=n)\) and then moves deterministically through \((n,2),\ldots ,(n,n-1)\) before returning to 0; hence \(\mathbb {P}_{0}(\tau (0)\in \cdot )=\mathbb {P}({\varGamma }\in \cdot )\).

Let \({\varGamma }\) further be such that \(\mathbb {E}{\varGamma }\log {\varGamma }=\infty \). Define the increments of \((M_{n},S_{n})_{n\ge 0}\) by

$$\begin{aligned} X_{n} := {\left\{ \begin{array}{ll} -(k-1),&{}\text {if }M_{n}=(k,1)\text { for }k\in \mathbb {N},\\ k,&{}\text {if }M_{n-1}=(k,k-1),\,M_{n}=0\text { for }k\in \mathbb {N},\\ 0,&{}\text {otherwise}. \end{array}\right. } \end{aligned}$$

Again, we then have \(S_{\tau (0)}=1\), in particular positive divergence of \((S_{\tau _{n}(0)})_{n\ge 0}\) and also \(\mathbb {E}_{\pi }|X_{1}|<\infty \), for

$$\begin{aligned} \mathbb {E}_{\pi }|X_{1}|\ =\ \frac{1}{\mathbb {E}_{0}\tau (0)}\sum _{n\ge 1}(2n-1)\,\mathbb {P}_{0}(\tau (0)=n)\ =\ \frac{2\,\mathbb {E}_{0}\tau (0)-1}{\mathbb {E}_{0}\tau (0)}. \end{aligned}$$

Hence, by Proposition 7.3, \((M_{n},S_{n})_{n\ge 0}\) is regular and therefore also positive divergent, in particular Theorem 6.1(c) holds true. On the other hand, using \(J_{0}(x)\asymp x\) as \(x\rightarrow \infty \), we find

$$\begin{aligned} \int \log J_{0}(x)\ \mathbb {V}_{0}(dx)&\asymp \int \log x\ \mathbb {V}_{0}(dx)\ =\ \mathbb {E}_{0}\left( \sum _{n=1}^{\tau (0)}\log S_{n}^{-}\right) \\&= \mathbb {E}_{0}(\tau (0)-1)\log (\tau (0)-1) \\&= \mathbb {E}({\varGamma }-1)\log ({\varGamma }-1)\ =\ \infty . \end{aligned}$$

Here we have used (4.1) and the fact that \(S_{1}^{-}=\cdots =S_{\tau (0)-1}^{-}=\tau (0)-1\) \(\mathbb {P}_{0}\)-a.s.
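The bookkeeping within a single excursion can be checked mechanically. The following sketch (function name and test values are ours, purely illustrative) lists the states and increments of one petal of length \(n\) and confirms \(\tau (0)=n\), \(S_{\tau (0)}=1\), and that the absolute increments over a cycle sum to \(2n-1\), as used above.

```python
def petal_excursion(n):
    """States and increments of one excursion 0 -> (n,1) -> ... -> (n,n-1) -> 0.

    Follows the transition rules p_{0,(n,1)} = P(Gamma = n),
    p_{(n,k),(n,k+1)} = 1 = p_{(n,n-1),0}; requires n >= 2.
    """
    assert n >= 2
    states = [(n, k) for k in range(1, n)] + [0]   # M_1, ..., M_{tau(0)}
    increments = []
    prev = 0
    for state in states:
        if state == (n, 1):                       # step into the petal
            increments.append(-(n - 1))
        elif state == 0 and prev == (n, n - 1):   # return step to 0
            increments.append(n)
        else:                                     # deterministic moves inside
            increments.append(0)
        prev = state
    return states, increments

for n in range(2, 50):
    states, incs = petal_excursion(n)
    assert len(incs) == n                          # tau(0) = n
    assert sum(incs) == 1                          # S_{tau(0)} = 1
    assert sum(abs(x) for x in incs) == 2 * n - 1  # per-cycle |X| mass
```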

11.2 The Integral Criteria 6.3(a), (6.4) and (6.5)

Let \(\alpha >0\) and \((M_{n},S_{n})_{n\ge 0}\) be such that \(S_{\tau _{n}(i)}\rightarrow \infty \) a.s. and \(\mathbb {E}_{i}J_{i}(S_{\tau (i)}^{-})^{1+\alpha }<\infty \). The following example shows that the integral criteria

  1. (1)

    \(\mathbb {E}_{i}J_{i}(D^{i})^{1+\alpha }<\infty \) (equivalent to \(\mathbb {E}_{i}\rho (0)^{\alpha }<\infty \)),

  2. (2)

    \(\int J_{i}(y)^\alpha \ \mathbb {V}_{i}^1(dy)<\infty \) (equivalent to \(\sum _{n\ge 1} n^{\alpha -1}\,\mathbb {P}_{i}(S_{n}\le 0) <\infty \)),

  3. (3)

    \( \int J_{i}(y)\ \mathbb {V}_{i}^{\alpha }(dy)<\infty \) (equivalent to \(\mathbb {E}_{i}N(0)^{\alpha }<\infty \) for \(\alpha \ge 1\))

are generally not equivalent.

Example 11.2

Let \((M_{n},S_{n})_{n\ge 0}\) be a MRW with infinite petal flower driving chain from Example 7.2 and

$$\begin{aligned} X_{n} := {\left\{ \begin{array}{ll} -x_{i},&{} \text {if }M_{n-1}=0,\,M_{n}=i,\\ x_{i}+2,&{} \text {if }M_{n-1}=i,\,M_{n}=0, \end{array}\right. } \quad n\ge 1, \end{aligned}$$

for a sequence of positive numbers \((x_{i})_{i\ge 1}\). Then \(\tau (0)=2\) and thus, by (6.2),

$$\begin{aligned} \mathbb {V}_{0}^{\alpha }((y,\infty ))\ \asymp \ \mathbb {P}_{0}(D^{0}>y) \end{aligned}$$

for any \(\alpha >0\). Moreover, \(S_{\tau (0)}=2\), thus \(J_{0}(x)\asymp x\), and \(D_{1}^{0}=x_{M_{1}}\,\mathbb {P}_{0}\)-a.s. Therefore, (1)–(3) for \(i=0\) may here be restated as

  1. (1)

    \(\mathbb {E}_{0}(D^{0})^{1+\alpha }=\sum _{i\ge 1}p_{0i}\,x_{i}^{1+\alpha }<\infty \),

  2. (2)

    \(\int y^{\alpha }\ \mathbb {V}_{0}^{1}(dy)=\mathbb {E}_{0}(D^{0})^{\alpha }=\sum _{i\ge 1}p_{0i}\,x_{i}^{\alpha }<\infty \),

  3. (3)

    \(\int y\ \mathbb {V}_{0}^{\alpha }(dy)=\mathbb {E}_{0}D^{0}=\sum _{i\ge 1}p_{0i}\,x_{i}<\infty \),

respectively, and it is now obvious that, by appropriate choice of the \(x_{i}\), each of the three cases “only (3) holds,” “(2) and (3) hold, but (1) fails,” and “(1)–(3) all fail to hold” may occur, while at the same time \(\mathbb {E}_{0}J_{0}(S_{\tau (0)}^{-})^{\beta }<\infty \) is trivially satisfied for all \(\beta \in \mathbb {R}_{>}\) (recall \(S_{\tau (0)}=2\), so \(S_{\tau (0)}^{-}=0\)).
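To make the separation of (1)–(3) concrete, one may take, purely for illustration (these parameter choices are ours, not from the text), \(p_{0i}\propto i^{-2}\) and \(x_{i}=i^{\theta }\) for some \(\theta >0\); then \(\sum _{i\ge 1}p_{0i}\,x_{i}^{\gamma }<\infty \) iff \(\theta \gamma <1\), so (1), (2), (3) reduce to \(\theta (1+\alpha )<1\), \(\theta \alpha <1\) and \(\theta <1\), respectively:

```python
def criteria(alpha, theta):
    """Which of (1)-(3) hold for p_{0i} ~ i^{-2}, x_i = i^theta.

    Uses: sum_i i^{-2} * i^{theta*gamma} < inf  iff  theta*gamma < 1.
    Returns the tuple (holds_1, holds_2, holds_3).
    """
    return (theta * (1 + alpha) < 1, theta * alpha < 1, theta < 1)

alpha = 2.0
assert criteria(alpha, 0.7) == (False, False, True)   # only (3) holds
assert criteria(alpha, 0.4) == (False, True, True)    # (2),(3) hold, (1) fails
assert criteria(alpha, 1.5) == (False, False, False)  # (1)-(3) all fail
```

With \(\alpha =2\), the values \(\theta =0.7\), \(\theta =0.4\) and \(\theta =1.5\) thus realize the three cases listed above.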

11.3 Tail Comparison: \(\mathbb {P}_{\pi }(X_{1}\in \cdot )\) Versus \(\mathbb {P}_{i}(S_{\tau (i)}\in \cdot )\)

We show next that the tails of \(\mathbb {P}_{\pi }(X_{1}\in \cdot )\) versus \(\mathbb {P}_{i}(S_{\tau (i)}\in \cdot )\) can be very different.

Example 11.3

Let \(\alpha \in (1,2)\) and \((M_{n})_{n\ge 0}\) be the Sisyphus chain on \(\mathbb {N}_{0}\) as introduced in Remark 9.3. This chain has stationary probabilities

$$\begin{aligned} \pi _{n}\ =\ c\,\mathbb {E}_{0}\left( \sum _{k=1}^{\tau (0)}\mathbf {1}_{\{M_{k}=n\}}\right) \ =\ c\,\mathbb {P}_{0}(\tau (0)>n)\ =\ \frac{c}{n^{\alpha }} \end{aligned}$$

with \(c=1/\mathbb {E}_{0}\tau (0)\). Define \(X_{n}=M_{n}\) and thus \(S_{n}=M_{1}+\cdots +M_{n}\) for \(n\ge 1\). Moreover, \(S_{\tau (0)}=(\tau (0)-1)\tau (0)/2\) \(\mathbb {P}_{0}\)-a.s. and therefore

$$\begin{aligned} \mathbb {P}_{0}(S_{\tau (0)}>n)\ =\ \mathbb {P}((\tau (0)-1)\tau (0)>2n)\ \asymp \ \mathbb {P}(\tau (0)>n^{1/2})\ \asymp \ \frac{1}{n^{\alpha /2}} \end{aligned}$$

as \(n\rightarrow \infty \). On the other hand,

$$\begin{aligned} \mathbb {P}_{\pi }(X_{1}>n)\ =\ \mathbb {P}_{\pi }(M_{1}>n)\ =\ \sum _{k>n}\pi _{k}\ \asymp \ \frac{1}{n^{\alpha -1}}. \end{aligned}$$

As a consequence, \(\mathbb {E}_{0}(S_{\tau (0)}\wedge x)=\int _{0}^{x}\mathbb {P}_{0}(S_{\tau (0)}>y)\ dy \) grows like \(x^{(2-\alpha )/2}\), while \(\mathbb {E}_{\pi }(X_{1}\wedge x)\) grows like \(x^{2-\alpha }\), as \(x\rightarrow \infty \).
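The two tail exponents can also be checked numerically. Assuming the idealized exact tail \(\mathbb {P}(\tau (0)>k)=k^{-\alpha }\) for \(k\ge 1\) (our simplification of the Sisyphus chain), \(\mathbb {P}_{0}(S_{\tau (0)}>n)=\mathbb {P}(\tau (0)(\tau (0)-1)>2n)=(m-1)^{-\alpha }\) for the smallest integer \(m\) with \(m(m-1)>2n\), which behaves like \(2^{-\alpha /2}\,n^{-\alpha /2}\):

```python
import math

ALPHA = 1.5   # any alpha in (1,2)

def tail_S_tau(n, alpha=ALPHA):
    """P_0(S_{tau(0)} > n) under the idealized tail P(tau(0) > k) = k^{-alpha}.

    S_{tau(0)} = tau(0)(tau(0)-1)/2 > n  iff  tau(0) >= m, where m is the
    smallest integer with m(m-1) > 2n; then P(tau >= m) = (m-1)^{-alpha}.
    """
    m = math.isqrt(2 * n) + 1       # lower bound for the smallest such m
    while m * (m - 1) <= 2 * n:
        m += 1
    return (m - 1) ** (-alpha)

# the prefactor 2^{-alpha/2} emerges since m - 1 ~ sqrt(2n)
n = 10**6
ratio = tail_S_tau(n) / n ** (-ALPHA / 2)
assert abs(ratio - 2 ** (-ALPHA / 2)) < 0.01
```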

11.4 Theorem 6.7: Part (c) Does Not Imply Part (b)

Our last example will provide an instance where \(\mathbb {E}_{i}|S_{\sigma ^{\leqslant }(-x)}|^{\alpha }\mathbf {1}_{\{\sigma ^{\leqslant }(-x)<\infty \}}<\infty \) for all \((i,x)\in \mathcal {S}\times \mathbb {R}_{\geqslant }\), while \(\mathbb {E}_{i}\left| \min _{n\ge 0}S_{n}\right| ^{\alpha }=\infty \).

Example 11.4

Given any \(\alpha >1\), let \((M_{n})_{n\ge 0}\) be the generalized infinite petal flower chain from Example 11.1, but with \({\varGamma }\) satisfying the moment conditions

$$\begin{aligned} \mathbb {E}{\varGamma }^{2(1+1/\alpha )}<\ \infty \ =\ \mathbb {E}{\varGamma }^{(1+\alpha )(1+1/\alpha )}\quad \text {and}\quad \mathbb {E}{\varGamma }^{1+\alpha } < \infty . \end{aligned}$$
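Such a \({\varGamma }\) exists. For instance (a parametrization of ours, not from the text), if \({\varGamma }\) has a Pareto-type tail \(\mathbb {P}({\varGamma }>n)\asymp n^{-\gamma }\), then \(\mathbb {E}{\varGamma }^{p}<\infty \) iff \(p<\gamma \), and the three conditions hold precisely for \(\gamma \) in a nonempty window:

```python
def gamma_window(alpha):
    """Admissible Pareto tail indices gamma for the displayed moment conditions.

    Requires E Gamma^{2(1+1/alpha)} < inf, E Gamma^{(1+alpha)(1+1/alpha)} = inf
    and E Gamma^{1+alpha} < inf, using E Gamma^p < inf  iff  p < gamma.
    Returns (lo, hi): any gamma with lo < gamma <= hi works.
    """
    lo = max(2 * (1 + 1 / alpha), 1 + alpha)   # gamma must exceed this
    hi = (1 + alpha) * (1 + 1 / alpha)         # gamma must not exceed this
    return lo, hi

for alpha in (1.5, 2.0, 3.0, 10.0):
    lo, hi = gamma_window(alpha)
    assert lo < hi   # the window is nonempty for every alpha > 1
```

For \(\alpha =2\) the window is \((3,4.5]\), so e.g. \(\gamma =4\) works.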

Define the increments of \((M_{n},S_{n})_{n\ge 0}\) by

$$\begin{aligned} X_{n} := {\left\{ \begin{array}{ll} -l^{1/\alpha },&{}\text {if }M_{n}=(k,l)\text { for }k,l\in \mathbb {N},\\ 1+\sum _{l=1}^{k-1}l^{1/\alpha },&{}\text {if }M_{n-1}=(k,k-1),\,M_{n}=0\text { for }k\in \mathbb {N}, \end{array}\right. } \end{aligned}$$

hence \(S_{\tau (0)}=1\,\mathbb {P}_{0}\)-a.s., in particular \(J_{0}(x)\asymp x\) as \(x\rightarrow \infty \). Observe that

$$\begin{aligned} D^{0} = \sum _{k=1}^{\tau (0)-1}k^{1/\alpha } \asymp \tau (0)^{1+1/\alpha }\quad \mathbb {P}_{0}\text {-a.s.} \end{aligned}$$

which, by construction, entails

$$\begin{aligned} \mathbb {E}_{0}D^{0}\ \le \ \mathbb {E}_{0}(D^{0})^{2}\ \asymp \ \mathbb {E}_{0}\tau (0)^{2(1+1/\alpha )} <\ \infty \ =\ \mathbb {E}_{0}\tau (0)^{(1+\alpha )(1+1/\alpha )}\ \asymp \ \mathbb {E}_{0}(D^{0})^{1+\alpha } \end{aligned}$$

and thereupon the positive divergence of \((M_{n},S_{n})_{n\ge 0}\) and \(\mathbb {E}_{i}\left| \min _{n\ge 0}S_{n}\right| ^{\alpha }=\infty \) for all \(i\in \mathcal {S}\) by invoking Theorem 6.7.

Finally, we prove \(\mathbb {E}_{0}|S_{\sigma ^{\leqslant }(-x)}|^{\alpha }\mathbf {1}_{\{\sigma ^{\leqslant }(-x)<\infty \}}<\infty \) for all \(x\in \mathbb {R}_{\geqslant }\), which easily implies \(\mathbb {E}_{i}|S_{\sigma ^{\leqslant }(-x)}|^{\alpha }\mathbf {1}_{\{\sigma ^{\leqslant }(-x)<\infty \}}<\infty \) for all \((i,x)\in \mathcal {S}\times \mathbb {R}_{\geqslant }\). Define

$$\begin{aligned} \kappa (x) := \inf \{n\ge 1:\tau _{n}(0)\ge \sigma ^{\leqslant }(-x)\} \end{aligned}$$

and notice that \(\kappa (x)\le \sigma ^{\leqslant }(-x)\) as well as

$$\begin{aligned} |S_{\sigma ^{\leqslant }(-x)}|\ \le \ x+\tau _{\kappa (x)}(0)^{1/\alpha }\quad \mathbb {P}_{0}\text {-a.s. on }\{\sigma ^{\leqslant }(-x)<\infty \}, \end{aligned}$$

for the overshoot below \(-x\) is bounded by the last (negative) increment, whose modulus is at most \(\tau _{\kappa (x)}(0)^{1/\alpha }\).

Consequently, by making use of Wald’s identity,

$$\begin{aligned} \mathbb {E}_{0}|S_{\sigma ^{\leqslant }(-x)}|^{\alpha }\mathbf {1}_{\{\sigma ^{\leqslant }(-x)<\infty \}}&\lesssim \mathbb {E}_{0}\tau _{\kappa (x)}(0)\mathbf {1}_{\{\kappa (x)<\infty \}}\\&= \mathbb {E}_{0}\tau (0)\,\mathbb {E}_{0}\kappa (x)\mathbf {1}_{\{\kappa (x)<\infty \}} \lesssim \mathbb {E}_{0}\sigma ^{\leqslant }(-x)\mathbf {1}_{\{\sigma ^{\leqslant }(-x)<\infty \}}. \end{aligned}$$

The last expectation is finite by an appeal to Theorem 6.3.

12 Comparison with Perturbed Random Walks

There is partial overlap of the present work with a recent article by the first author with Iksanov and Meiners [10] on fluctuation theory for perturbed random walks (PRW), defined by \((\sum _{k=1}^{n-1}Z_{k}+\eta _{n})_{n\ge 1}\) for an iid \(\mathbb {R}^{2}\)-valued sequence \((Z_{n},\eta _{n})_{n\ge 1}\). Although MRW and PRW are quite different stochastic sequences, in some regards and under additional assumptions, their study reduces to similar objects. For example, positive divergence of a MRW \((M_{n},S_{n})_{n\ge 0}\) discussed in this paper is equivalent to the positive divergence of the PRW \((S_{\tau _{n-1}(i)}-D_{n}^{i})_{n\ge 1}\) for some/all \(i\in \mathcal {S}\). Moreover, if \(\mathbb {E}_{i}\tau (i)^{1+\alpha }<\infty \), then Lemma 8.6 implies that \(\mathbb {E}_{i}\rho (i)^{\alpha }<\infty \) holds iff the \(\alpha \)-moment of the last exit time of \((S_{\tau _{n-1}(i)}-D_{n}^{i})_{n\ge 1}\) is finite. This indicates that one can translate results for a MRW in terms of a suitable PRW and then draw on the fluctuation theory for the latter class of sequences developed in [10]. The translation actually also goes the other way when assuming the \(\eta _{n}\) to be integer-valued, which is possible without loss of generality since only the tails of their distributions matter here. On the other hand, this correspondence has its limitations. There are in fact equivalences with no counterpart for PRW where we have used the particular structure of a MRW, notably its dual \(({}^{\#}M_{n},{}^{\#}S_{n})_{n\ge 0}\) and the ladder chain \((M_{n}^{>})_{n\ge 0}\). Theorem 6.7 on the power moments of

$$\begin{aligned} \left| \min _{n\ge 0}S_{n}\right| \ =\ \left| \min _{n\ge 1}\,(S_{\tau _{n-1}(i)}-D_{n}^{i})\right| \quad \mathbb {P}_{i}\text {-a.s.} \end{aligned}$$

may serve as another example where we have benefitted from the use of MRW and which has no counterpart in [10]. Last but not least, the study of power moments of N(x) and of the weighted renewal measures \(\sum _{n\ge 1}n^{\alpha -1}\,\mathbb {P}_{i}(S_{n}\le x)\) provides instances where PRW cannot be used at all because the behavior of these quantities depends on the entire excursions of the \(S_{n}\) between visits of the driving chain to a state i rather than just their minima \(D_{n}^{i}\).
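The displayed identity is a pathwise statement: reading \(D_{n}^{i}\) as the maximal drop of the walk below its level at the start of the n-th excursion from i (our paraphrase of the notation; the precise definition appears earlier in the paper), the minimum of the whole path coincides with the minimum over excursions of \(S_{\tau _{n-1}(i)}-D_{n}^{i}\). A short simulation of the MRW from Example 11.1 illustrates this:

```python
import random

random.seed(7)

def petal_cycle(n):
    """Increments of one excursion of length n of the MRW from Example 11.1."""
    return [-(n - 1)] + [0] * (n - 2) + [n]

# simulate the walk over many excursions
incs, cycle_lens = [], []
for _ in range(200):
    n = random.randint(2, 30)        # stand-in for Gamma
    incs += petal_cycle(n)
    cycle_lens.append(n)

# full path S_0 = 0, S_1, ..., and its global minimum
path = [0]
for x in incs:
    path.append(path[-1] + x)
global_min = min(path)

# per excursion: level at excursion start minus maximal in-excursion drop D_n
mins = []
pos = 0
for n in cycle_lens:
    start = path[pos]
    drop = start - min(path[pos:pos + n + 1])   # = D_n >= 0
    mins.append(start - drop)
    pos += n

assert min(mins) == global_min
```

Since every path value lies in some excursion, the equality holds for any realization; the simulation merely makes the bookkeeping explicit.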