1 Introduction and Previous Results

Let \(\left( X_n\right) _{n\geqslant 1}\) be independent identically distributed random variables. Consider the random walk \(S_n = X_1+ \dots + X_n.\) For a starting point \(y>0\) denote by \(\tau _y\) the exit time of the process \(\left( y+S_n \right) _{n\geqslant 1}\) from the positive half-line. Many authors have investigated the asymptotic behavior of the probability of the event \(\tau _y \geqslant n\) and of the conditional law of \(y+S_n\) given \(\tau _y \geqslant n\) as \(n \rightarrow +\infty \). There is a vast literature on this subject; we refer the reader to Iglehart [18], Bolthausen [2], Doney [11], Bertoin and Doney [1], Borovkov [3, 4]. Eichelsbacher and König [12], Denisov, Vatutin and Wachtel [7], Denisov and Wachtel [8, 10] have considered random walks in \(\mathbb {R}^d\) and studied the exit times from cones. Walks with increments forming a Markov chain have been considered by Presman [21, 22], Varopoulos [23, 24], Dembo [6], Denisov and Wachtel [9]. In particular, Varopoulos [23, 24] studied Markov chains with bounded increments and obtained lower and upper bounds for the probabilities of the exit time from cones.

The purpose of this paper is to present some recent results on the asymptotic behavior of the exit time and of the conditioned law for two particular classes of Markov chains. In Sect. 2 we treat products of i.i.d. random matrices, which lead to the study of a certain Markov chain. The results of this section have been obtained in collaboration with Émile Le Page and Marc Peigné [15]. The second case deals with a Markov chain defined by affine transformations of the real line. The results of Sect. 3 have been obtained in collaboration with Ronan Lauvergnat and Émile Le Page [16]. In both cases our proofs rely upon a strong approximation result for Markov chains established in [14]. A short sketch of the proofs, based on the results of [15], is given in Sect. 4.

2 Products of i.i.d. Random Matrices

Let \(\mathbb {G}=GL\left( d,\mathbb {R}\right) \) be the general linear group of \(d\times d\) invertible matrices w.r.t. ordinary matrix multiplication. If g is an element of \(\mathbb {G}\), by \(\left\| g \right\| \) we mean its operator norm, and if v is an element of the vector space \(\mathbb {V}=\mathbb {R}^{d}\), the norm \(\left\| v \right\| \) is Euclidean. Let \(\varvec{\mu }\) be a probability measure on \(\mathbb {G}\) and suppose that on the probability space \(\left( \Omega ,\mathcal {F},\mathbf {Pr}\right) \) we are given an i.i.d. sequence \(\left( g_{n}\right) _{n\ge 1}\) of \(\mathbb {G}\)-valued random elements with common law \(\mathbf {Pr}\left( g_{1}\in dg\right) =\varvec{\mu }\left( dg\right) .\) Let \(G_{n}=g_{n} \dots g_{1}\) and let \(v\in \mathbb {V}\smallsetminus \left\{ 0\right\} \) be a starting point. The object of interest is the size of the vector \(G_{n}v\), which is controlled by the quantity \(\log \left\| G_{n}v\right\| .\) It follows from the results of Le Page [19] that, under appropriate assumptions, the sequence \(\left( \log \left\| G_{n}v\right\| \right) _{n\ge 1}\) behaves like a sum of i.i.d. random variables and satisfies classical properties such as the law of large numbers, the law of the iterated logarithm and the central limit theorem.
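
To make the analogy with sums of i.i.d. random variables concrete, here is a minimal simulation sketch (our illustration, not taken from [19] or [15]); the law \(\varvec{\mu }\) chosen below, \(2\times 2\) matrices with i.i.d. standard Gaussian entries, is a hypothetical example and we do not verify any conditions on it. The running average \(\frac{1}{n}\log \left\| G_{n}v\right\| \) stabilizes, in line with the law of large numbers.

```python
# A minimal sketch (assumption: mu = law of a 2x2 matrix with i.i.d. N(0,1)
# entries; such matrices are a.s. invertible). We accumulate log||G_n v||
# incrementally and renormalize the vector to avoid numerical overflow.
import numpy as np

rng = np.random.default_rng(0)
d, n = 2, 10_000
w = np.array([1.0, 0.0])          # starting vector v with ||v|| = 1
log_norm = 0.0                    # running value of log ||G_k v||

for _ in range(n):
    w = rng.normal(size=(d, d)) @ w
    s = np.linalg.norm(w)
    log_norm += np.log(s)
    w /= s                        # keep only the direction of G_k v

print(log_norm / n)               # approximates the upper Lyapunov exponent gamma
```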

Introduce the following conditions. Let \(N\left( g\right) =\max \left\{ \left\| g\right\| ,\left\| g\right\| ^{-1}\right\} \), supp\(\varvec{\mu }\) be the support of the measure \(\varvec{\mu }\) and \(\mathbb {P}\left( \mathbb {V}\right) \) be the projective space of \(\mathbb {V}.\)

P1. There exists \(\delta _{0}>0\) such that

$$\begin{aligned} \int _{\mathbb {G}}N\left( g\right) ^{\delta _{0}}\varvec{\mu }\left( dg\right) <\infty . \end{aligned}$$

The next condition requires, roughly speaking, that the dimension of supp\(\varvec{\mu }\) cannot be reduced.

P2 (Strong irreducibility). The support supp\(\varvec{\mu }\) of \(\varvec{\mu }\) acts strongly irreducibly on \(\mathbb {V}\), i.e. no finite union of proper vector subspaces of \(\mathbb {V}\) is invariant with respect to all elements g of the group generated by supp\(\varvec{\mu }.\)

The sequence \(\left( h_{n}\right) _{n\ge 1}\) of elements of \( \mathbb {G}\) is said to be contracting for the projective space \(\mathbb {P}\left( \mathbb {V}\right) \) if \(\lim _{n\rightarrow \infty }\log \frac{a_{1}\left( n\right) }{a_{2}\left( n\right) }=\infty ,\) where \(a_{1}\left( n\right) \ge \ldots \ge a_{d}\left( n\right) \) are the eigenvalues of the symmetric matrix \( h_{n}^{\prime }h_{n}\) and \(h_{n}^{\prime }\) is the transpose of \(h_{n}.\)

P3 (Proximality). The closed semigroup generated by supp\(\varvec{\mu }\) contains a contracting sequence for the projective space \( \,\mathbb {P}\left( \mathbb {V} \right) \).

For example, P3 is satisfied if the closed semigroup generated by supp\(\varvec{\mu }\) contains a matrix with a unique simple eigenvalue of maximal modulus. For more details we refer to Bougerol and Lacroix [5] and to the references therein.
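
Numerically, the contraction property behind P3 can be observed on products of matrices (our sketch, with the same hypothetical Gaussian law as above): since \(a_{1}\left( n\right) ,a_{2}\left( n\right) \) are the squared singular values of \(h_{n}\), the quantity \(\log \frac{a_{1}\left( n\right) }{a_{2}\left( n\right) }\) is twice the log-ratio of the singular values, and it typically grows linearly in n.

```python
# Sketch: the singular values of h_n = g_n ... g_1 separate exponentially,
# which is the contraction property behind P3 (hypothetical Gaussian mu).
import numpy as np

rng = np.random.default_rng(2)
H = np.eye(2)
for k in range(1, 1001):
    H = rng.normal(size=(2, 2)) @ H
    H /= np.linalg.norm(H)                 # rescaling leaves singular-value ratios unchanged
    if k in (10, 100, 1000):
        s = np.linalg.svd(H, compute_uv=False)
        print(k, 2 * np.log(s[0] / s[1]))  # log(a_1(k)/a_2(k)); grows roughly linearly in k
```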

In the sequel, for any \(v\in \mathbb {V}\smallsetminus \left\{ 0\right\} \) we denote by \(\overline{v}=\mathbb {R}v\in \mathbb {P}\left( \mathbb {V}\right) \) its direction, and for any direction \(\overline{v}\in \mathbb {P}\left( \mathbb {V}\right) \) we denote by v a vector in \(\mathbb {V}\smallsetminus \left\{ 0\right\} \) of direction \(\overline{v}.\) Define the function \(\rho :\mathbb {G}\times \mathbb {P}\left( \mathbb {V} \right) \rightarrow \mathbb {R}\), called the norm cocycle, by setting

$$\begin{aligned} \rho \left( g,\overline{v}\right) :=\log \frac{\left\| gv\right\| }{ \left\| v\right\| },\ \text {for}\ \left( g,\overline{v}\right) \in \mathbb {G}\times \mathbb {P}\left( \mathbb {V}\right) . \end{aligned}$$
(1)

It is well known (see Le Page [19] and Bougerol and Lacroix [5]) that under conditions P1–P3 there exists a unique \(\varvec{\mu }\)-invariant measure \(\varvec{\nu }\) on \( \mathbb {P}\left( \mathbb {V}\right) \) such that, for any continuous function \( \varphi \) on \(\mathbb {P}\left( \mathbb {V}\right) \),

$$\begin{aligned} \left( \varvec{\mu }*\varvec{\nu }\right) \left( \varphi \right) =\varvec{\nu }\left( \varphi \right) . \end{aligned}$$

Moreover the upper Lyapunov exponent

$$\begin{aligned} \gamma =\gamma _{\varvec{\mu }}=\int _{\mathbb {G\times P}\left( \mathbb {V} \right) }\rho \left( g,\overline{v}\right) \varvec{\mu }\left( dg\right) \varvec{\nu }\left( d\overline{v}\right) \end{aligned}$$

is finite and there exists a constant \(\sigma >0\) such that for any \(v\in \mathbb {V\smallsetminus }\left\{ 0\right\} \) and any \(t\in \mathbb {R},\)

$$\begin{aligned} \lim _{n\rightarrow \infty }\mathbf {Pr}\left( \frac{\log \left\| G_{n}v\right\| -n\gamma }{\sigma \sqrt{n}}\le t\right) =\Phi \left( t\right) , \end{aligned}$$

where \(\Phi \left( \cdot \right) \) is the standard normal distribution function.
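
As an aside (our illustration, again with the hypothetical Gaussian law): the direction of \(G_{n}v\), viewed for \(d=2\) as an angle in \([0,\pi )\), equidistributes according to \(\varvec{\nu }\), so a histogram of simulated directions approximates the invariant measure.

```python
# Sketch (hypothetical Gaussian mu): sample the direction of G_n v, encoded
# as an angle in [0, pi) representing a point of the projective space P(R^2).
import numpy as np

rng = np.random.default_rng(1)

def sampled_direction(n=200):
    w = np.array([1.0, 0.0])
    for _ in range(n):
        w = rng.normal(size=(2, 2)) @ w
        w /= np.linalg.norm(w)
    return np.arctan2(w[1], w[0]) % np.pi    # identify w and -w

angles = np.array([sampled_direction() for _ in range(1_000)])
print(np.histogram(angles, bins=8, range=(0.0, np.pi))[0])  # approximates nu
```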

Denote by \(\mathbb {B}\) the closed unit ball in \(\mathbb {V}\) and by \(\mathbb {B}^{c}\) its complement. For any \(v\in \mathbb {B}^{c}\) define the exit time of the random process \(G_{n}v\) from \(\mathbb {B}^{c}\) by

$$\begin{aligned} \tau _{v}=\min \left\{ n\ge 1:G_{n}v\in \mathbb {B}\right\} . \end{aligned}$$

In the sequel, we assume that the upper Lyapunov exponent \(\gamma \) is equal to 0. The fact that \(\gamma =0\) does not by itself imply that the events

$$\begin{aligned} \left\{ \tau _{v}>n\right\} =\left\{ G_{k}v\in \mathbb {B}^{c}:k=1,\ldots ,n \right\} ,\ n\ge 1 \end{aligned}$$

occur with positive probability for any \(v\in \mathbb {B}^{c}.\) To ensure this we need the following additional condition:

P4. There exists \(\delta >0\) such that

$$\begin{aligned} \inf _{s\in \mathbb {S}^{d-1}}\varvec{\mu }\left( g:\log \left\| gs\right\|>\delta \right) >0. \end{aligned}$$

Our first result gives the asymptotic behavior of the probability of the exit time.

Theorem 2.1

Under conditions P1–P4, for any \(v\in \mathbb {B}^{c},\)

$$\begin{aligned} \mathbf {Pr}\left( \tau _{v}>n\right) =\frac{2V\left( v\right) }{\sigma \sqrt{ 2\pi n}}\left( 1+o\left( 1\right) \right) \;\text {as }n\rightarrow \infty , \end{aligned}$$

where V is a positive function on \(\mathbb {B}^{c}\).
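
The \(n^{-1/2}\) decay can be checked by a crude Monte Carlo experiment (our sketch, with the hypothetical Gaussian law; we do not verify conditions P1–P4 for this toy example). Replacing each g by \(e^{-\gamma }g\) replaces \(\log \left\| G_{n}v\right\| \) by \(\log \left\| G_{n}v\right\| -n\gamma \) and shifts the Lyapunov exponent to 0, after which \(\sqrt{n}\,\mathbf {Pr}\left( \tau _{v}>n\right) \) should be roughly constant in n.

```python
# Sketch of Theorem 2.1 (hypothetical Gaussian mu, centered so that gamma = 0).
import numpy as np

rng = np.random.default_rng(3)

def walk(n, gamma=0.0):
    """One path of log||G_k v|| - k*gamma, k = 1..n, for a unit starting vector."""
    w, s, path = np.array([1.0, 0.0]), 0.0, np.empty(n)
    for k in range(n):
        w = rng.normal(size=(2, 2)) @ w
        nrm = np.linalg.norm(w)
        s += np.log(nrm)
        w /= nrm
        path[k] = s - (k + 1) * gamma
    return path

gamma = walk(20_000)[-1] / 20_000          # crude estimate of the Lyapunov exponent
y = 2.0                                    # y = log||v||; exit happens when y + path <= 0
for n in (100, 400, 1600):
    surv = np.mean([(y + walk(n, gamma)).min() > 0 for _ in range(2_000)])
    print(n, surv, surv * np.sqrt(n))      # last column should be roughly constant
```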

Moreover, we prove that the limit law of the quantity \(\frac{1}{\sigma \sqrt{n}}\log \left\| G_{n}v\right\| ,\) given the event \(\left\{ \tau _{v}>n\right\} ,\) coincides with the Rayleigh distribution function \(\Phi ^{+}\left( t\right) =1-\exp \left( -\frac{t^{2}}{2}\right) :\)

Theorem 2.2

Under conditions P1–P4, for any \(v\in \mathbb {B}^{c}\) and for any \( t\ge 0,\)

$$\begin{aligned} \lim _{n\rightarrow \infty }\mathbf {Pr}\left( \left. \frac{\log \left\| G_{n}v\right\| }{\sigma \sqrt{n}}\le t\right| \tau _{v}>n\right) =\Phi ^{+}\left( t\right) . \end{aligned}$$

The study of products of random matrices is reduced to that of a Markov chain in the following way. Define the homogeneous Markov chain \(\left( X_{n}\right) _{n\ge 0}\) with values in the product space \(\mathbb {X}=\mathbb {G}\times \mathbb {P}\left( \mathbb {V}\right) \) and initial value \(X_{0}=\left( g,\overline{v}\right) \in \mathbb {X}\) by setting \(X_{1}=\left( g_{1},g\cdot \overline{v}\right) \) and

$$\begin{aligned} X_{n+1}=\left( g_{n+1},g_{n}\ldots g_{1}g\cdot \overline{v}\right) ,\;n\ge 1. \end{aligned}$$

Let \(v\in \mathbb {V\smallsetminus }\left\{ 0\right\} \) be a starting vector and \(\overline{v}\) be its direction. Iterating the cocycle property \(\rho \left( g_2 g_1,\overline{v}\right) =\rho \left( g_2,g_1\cdot \overline{v}\right) +\rho \left( g_1,\overline{v} \right) \) one gets the basic representation

$$\begin{aligned} \log \left\| G_{n}gv\right\| =y+\sum _{k=1}^{n}\rho \left( X_{k}\right) ,\ n\ge 1, \end{aligned}$$

where \(y=\log \left\| g v\right\| \) determines the “size” of the vector gv. We deal with the random walk \(\left( y+S_{n}\right) _{n\ge 0}\) associated to the Markov chain \(\left( X_{n}\right) _{n\ge 0},\) where \(X_{0}=x=\left( g,\overline{v}\right) \) is an arbitrary element of \( \mathbb {X},\) y is any real number and

$$\begin{aligned} S_{0}=0,\;S_{n}=\sum _{k=1}^{n}\rho \left( X_{k}\right) ,\;n\ge 1. \end{aligned}$$

The results for \(\log \left\| G_{n}v\right\| \) stated in this section are obtained by taking \(X_{0}=x=\left( I,\overline{v}\right) \) as the initial state of the Markov chain \(\left( X_{n}\right) _{n\ge 0}\) and setting \(y=\log \Vert v\Vert \) and \(V\left( v\right) =V\left( \left( I,\overline{v}\right) ,\log \Vert v\Vert \right) .\) The function V is the harmonic function related to the transition probability of the Markov chain \(\left( X_{n}, y+S_{n}\right) _{n\ge 0}\).
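
The basic representation can be verified directly (our sketch): since \(\rho \left( g,\overline{v}\right) \) depends on v only through its direction, the sum \(\sum _{k=1}^{n}\rho \left( X_{k}\right) \) can be accumulated along renormalized vectors and recovers \(\log \left\| G_{n}v\right\| \) exactly (here with \(g=I\)).

```python
# Sketch verifying log||G_n v|| = y + sum_k rho(X_k) with g = I, y = log||v||
# (hypothetical Gaussian matrices; rho is the norm cocycle (1)).
import numpy as np

rng = np.random.default_rng(4)

def rho(g, v):
    return np.log(np.linalg.norm(g @ v) / np.linalg.norm(v))

v = np.array([3.0, 4.0])
y = np.log(np.linalg.norm(v))
gs = [rng.normal(size=(2, 2)) for _ in range(5)]

S, w, G = 0.0, v / np.linalg.norm(v), np.eye(2)
for g in gs:
    S += rho(g, w)                         # rho depends only on the direction of w
    w = g @ w
    w /= np.linalg.norm(w)
    G = g @ G                              # G = g_n ... g_1
print(np.isclose(np.log(np.linalg.norm(G @ v)), y + S))   # True: the sum telescopes
```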

3 Results for Affine Markov Walks

On the probability space \((\Omega ,\mathcal F, \mathbb {P})\) consider the affine recursion

$$ X_{n+1} = a_{n+1} X_n + b_{n+1}, \qquad n\ge 0, $$

where \((a_i,b_i)\), \(i\ge 1\), is a sequence of independent real random pairs with the same law as the generic random pair \((a,b)\), and \(X_0=x\in \mathbb {R}\) is a starting point. Denote by \(\mathbb {E}\) the expectation pertaining to \(\mathbb {P}\). Denote by \(\mathbb {P}_x\) and \(\mathbb {E}_x\) the probability and the corresponding expectation generated by the finite-dimensional distributions of \((X_n)_{n\ge 0}\) starting at \(X_0=x\).

We make use of the following conditions:

A1. 1. There exists a constant \(\alpha >2\) such that \(\mathbb {E} \left( \left|a\right|^{\alpha } \right) < 1\) and \(\mathbb {E} \left( \left|b\right|^{\alpha } \right) < +\infty .\)

2. The random variable b is non-zero with positive probability, \(\mathbb {P} (b\not =0) > 0\), and centered, \(\mathbb {E} (b) = 0\).

A2. For all \(x \in \mathbb {R}\) and \(y>0\),

$$ \mathbb {P}_x \left( \tau _y> 1 \right) = \mathbb {P} \left( a x + b> -y \right) > 0. $$

A3. For any \(x \in \mathbb {R}\) and \(y>0\), there exists \(p_0 \in (2,\alpha )\) such that for any constant \(c>0\), there exists \(n_0 \ge 1\) such that,

$$ \mathbb {P}_x \left( \left( X_{n_0}, y+S_{n_0} \right) \in K_{p_0,c} \,,\, \tau _y> n_0 \right) > 0, $$

where

$$ K_{p_0, c} = \left\{ (x,y) \in \mathbb {R} \times \mathbb {R}_+^* : y \ge c \left( 1+ \left|x\right|^{p_0} \right) \right\} . $$

Using the techniques from [17] it can be shown that, under condition A1, the Markov chain \((X_n)_{n\ge 0}\) has a unique invariant measure \(\mathbf m\) and the partial sums \(S_n = X_1 + \dots + X_n\) satisfy the central limit theorem

$$\begin{aligned} \mathbb {P}_x \left( \frac{S_n - n \mu }{\sigma \sqrt{n} } \le t \right) \rightarrow \varvec{\Phi } \left( t \right) \quad \text {as} \quad n\rightarrow +\infty , \end{aligned}$$

with

$$\begin{aligned} \mu = \frac{ \mathbb E (b)}{ 1-\mathbb E (a)} = 0 \end{aligned}$$

and

$$\begin{aligned} \sigma ^2 = \frac{\mathbb {E}(b^2)}{1-\mathbb {E}(a^2)} \frac{1+\mathbb {E}(a)}{1-\mathbb {E}(a)} >0. \end{aligned}$$

Moreover, it is easy to see that under A1 the Markov chain \((X_n)_{n\ge 0}\) has no fixed point: \(\mathbb {P} \left( a x +b = x \right) < 1\), for any \(x\in \mathbb {R}\).
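
As a sanity check (ours, not from [16]), take the hypothetical law a uniform on \(\left[ -\frac{1}{2},\frac{1}{2}\right] \) and b standard normal, independent, so that A1 holds; then \(\mathrm {Var}\left( S_n\right) /n\) should approach the value of \(\sigma ^2\) given above.

```python
# Sanity check of the formula for sigma^2 (hypothetical law: a ~ U[-1/2, 1/2],
# b ~ N(0, 1), independent; then E(a) = 0, E(a^2) = 1/12, E(b^2) = 1).
import numpy as np

rng = np.random.default_rng(5)
Ea, Ea2, Eb2 = 0.0, 1 / 12, 1.0
sigma2 = Eb2 / (1 - Ea2) * (1 + Ea) / (1 - Ea)      # = 12/11

n, reps = 2_000, 2_000
X = np.zeros(reps)
S = np.zeros(reps)
for _ in range(n):
    X = rng.uniform(-0.5, 0.5, reps) * X + rng.normal(size=reps)
    S += X
print(S.var() / n, sigma2)                           # both close to 12/11
```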

For any \(y \in \mathbb {R}\) consider the affine Markov walk \(\left( y+ S_n\right) _{n\ge 0}\) starting at y and define its exit time

$$ \tau _y = \min \{ k \ge 1 : y+S_k \le 0 \}. $$
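
A Monte Carlo sketch of the exit probability for the same hypothetical law is given below; the stabilization of \(\sqrt{n}\,\mathbb {P}_x\left( \tau _y>n\right) \) anticipates Theorem 3.1.

```python
# Sketch: estimate P_x(tau_y > n) for the affine walk (hypothetical law:
# a ~ U[-1/2, 1/2], b ~ N(0, 1), independent).
import numpy as np

rng = np.random.default_rng(6)

def survives(x, y, n):
    """True iff y + S_k > 0 for all k = 1..n on one sampled path started at x."""
    X, S = x, 0.0
    for _ in range(n):
        X = rng.uniform(-0.5, 0.5) * X + rng.normal()
        S += X
        if y + S <= 0:
            return False
    return True

x, y, reps = 0.0, 1.0, 5_000
for n in (50, 200, 800):
    p = sum(survives(x, y, n) for _ in range(reps)) / reps
    print(n, p, p * np.sqrt(n))    # last column anticipates 2V(x,y)/(sigma*sqrt(2*pi))
```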

Our first result gives the asymptotic behavior of the probability of the exit time.

Theorem 3.1

Assume either conditions A1, A2, A3 and \(\mathbb {E}(a) \ge 0\), or conditions A1 and A3. For any \(x \in \mathbb {R}\) and \(y>0\),

$$ \mathbb {P}_x \left( \tau _y > n \right) = \frac{2V(x,y)}{\sqrt{2\pi n} \sigma } \left( 1+o\left( 1\right) \right) \;\text {as }n\rightarrow \infty , $$

where V is a positive function on \(\mathbb {R} \times \mathbb {R}_{+}^{*}\).

As in the previous section, the function V is the harmonic function related to the transition probability of the two-dimensional Markov chain \(\left( X_{n}, y+S_{n} \right) _{n\ge 0}\).

Our second result gives the asymptotic behavior of the law of \(\left( y+S_n\right) _{n\ge 0}\) conditioned to stay positive.

Theorem 3.2

Assume either conditions A1, A2, A3 and \(\mathbb {E}(a) \ge 0\), or conditions A1 and A3. For any \(x \in \mathbb {R}\), \(y>0\) and \(t>0\),

$$ \mathbb {P}_x \left( { \frac{y+S_n}{\sigma \sqrt{n}} \le t}\Big |{\tau _y > n} \right) \underset{n\rightarrow +\infty }{\longrightarrow } \varvec{\Phi }^+(t), $$

where \(\varvec{\Phi }^+(t) = 1-{{\mathrm{e}}}^{-\frac{t^2}{2}}\) is the Rayleigh distribution function.
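
Theorem 3.2 can likewise be watched numerically (our sketch, same hypothetical law as above): among the simulated paths that survive up to time n, the empirical distribution function of \(\left( y+S_n\right) /\left( \sigma \sqrt{n}\right) \) should be close to \(\varvec{\Phi }^+\).

```python
# Sketch of Theorem 3.2: the law of (y+S_n)/(sigma*sqrt(n)) on {tau_y > n}
# should be close to the Rayleigh law (hypothetical law as before).
import numpy as np

rng = np.random.default_rng(7)
n, reps, y = 400, 20_000, 1.0
sigma = np.sqrt(12 / 11)                    # sigma^2 computed above for this law
X = np.zeros(reps)
S = np.zeros(reps)
alive = np.ones(reps, dtype=bool)
for _ in range(n):
    X = rng.uniform(-0.5, 0.5, reps) * X + rng.normal(size=reps)
    S += X
    alive &= (y + S > 0)                    # paths with tau_y > current step
scaled = (y + S[alive]) / (sigma * np.sqrt(n))
for t in (0.5, 1.0, 2.0):
    print(t, (scaled <= t).mean(), 1 - np.exp(-t ** 2 / 2))  # empirical vs Rayleigh
```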

4 Sketch of the Proof

We start by giving a sketch of the proof of the results in Sect. 2.

We follow the arguments in [15] (we also refer the reader to the proof in [10], where the case of sums of independent random variables in \(\mathbb R^d\) is considered). Denote by \(\mathbb {P}_{x}\) the probability measure generated by the finite-dimensional distributions of \(\left( X_{k}\right) _{k\ge 0}\) starting at \( X_{0}=x\in \mathbb {X}\), and by \(\mathbb {E}_{x}\) the corresponding expectation. For any \(\left( x,y\right) \in \mathbb {X}\times \mathbb {R}\) consider the transition kernel

$$\begin{aligned} \mathbf {Q}_{+}\left( x,y,\cdot \right) =1_{\mathbb {X}\times \mathbb {R}^{*}_{+}}\left( \cdot \right) \mathbf {Q}\left( x,y,\cdot \right) , \end{aligned}$$

where \(\mathbf {Q}\left( x,y,dx^{\prime }\times dy^{\prime }\right) \) is the transition probability of the two-dimensional Markov chain \(\left( X_{n},y+S_{n}\right) _{n\ge 0}\) under the measure \(\mathbb {P}_{x}.\) A positive \(\mathbf {Q}_{+}\)-harmonic function V is any function \(V:\mathbb {X}\times \mathbb {R}^{*}_{+}\rightarrow \mathbb {R}^{*}_{+}\) satisfying

$$\begin{aligned} \mathbf {Q}_{+}V=V. \end{aligned}$$
(2)

Extend V by setting \(V\left( x,y\right) = 0\) for \(\left( x,y\right) \in \mathbb {X}\times \mathbb {R}_{-}.\)

The first step is to prove the existence of a positive \(\mathbf {Q}_{+}\)-harmonic function. For any \(y>0\) denote by \(\tau _{y}\) the first time when the Markov walk \( \left( y+S_{n}\right) _{n\ge 0}\) becomes non-positive: \( \tau _{y}=\min \left\{ n\ge 1:y+S_{n}\le 0\right\} \).

Theorem 4.1

Assume hypotheses P1–P5.

1. For any \(x\in \mathbb {X}\) and \(y>0\) the limit

$$\begin{aligned} V\left( x,y\right) =\lim _{n\rightarrow +\infty }\mathbb {E}_{x}\left( y+S_{n};\tau _{y}>n\right) \end{aligned}$$

exists and satisfies \(V\left( x,y\right) >0.\)

2. The function V is \(\mathbf {Q}_{+}\)-harmonic, i.e., for any \( x\in \mathbb {X}\) and \(y>0,\)

$$\begin{aligned} \mathbb {E}_{x}\left( V\left( X_{1},y+S_{1}\right) ;\tau _{y}>1\right) =V\left( x,y\right) . \end{aligned}$$

The proof of this theorem is rather lengthy. Skipping the technical details, the main difficulty is to show the integrability of the random variable \(S_{\tau _{y}}\), i.e., that for any \(x \in \mathbb X\) and \(y>0\) it holds that \( \mathbb {E}_{x}\left( \left| y+S_{\tau _{y}}\right| \right) \le c\left( 1+y\right) < +\infty . \) The integrability is obtained by using the martingale approximation (see Gordin [13]) \(M_{n}=\sum _{k=1}^{n}\left( \theta \left( X_{k}\right) -\mathbf {P}\theta \left( X_{k-1}\right) \right) \), \(n\ge 1\), where \(\theta \) is the solution of the Poisson equation \(\rho =\theta -\mathbf {P}\theta \) and the norm cocycle \(\rho \) is defined in (1).

Lemma 4.2

It holds that \( \displaystyle \sup _{n\ge 0}\left| S_{n}-M_{n}\right| \le a=2\left\| \mathbf {P}\theta \right\| _{\infty }\), \(\mathbb P_x\)-a.s., for any \(x \in \mathbb X\).

Indeed, since \(\rho =\theta -\mathbf {P}\theta \), the difference telescopes: \(S_{n}-M_{n}=\mathbf {P}\theta \left( X_{0}\right) -\mathbf {P}\theta \left( X_{n}\right) .\) Once the integrability of \(S_{\tau _{y}}\) is established, for any \(x\in \mathbb {X}\) set

$$\begin{aligned} V\left( x,y\right) =\left\{ \begin{array}{cc} -\mathbb {E}_{x}M_{\tau _{y}} & \text {if }y>0, \\ 0 & \text {if }y\le 0. \end{array} \right. \end{aligned}$$

The following proposition presents some properties of the function V.

Proposition 4.3

The function V satisfies

1. For any \(y>0\) and \(x\in \mathbb {X}\),

$$\begin{aligned} V\left( x,y\right) = \lim _{n\rightarrow +\infty }\mathbb {E}_{x}\left( y+M_{n};\tau _{y}>n\right) =\lim _{n\rightarrow +\infty }\mathbb {E}_{x}\left( y+S_{n};\tau _{y}>n\right) . \end{aligned}$$

2. For any \(y>0\) and \(x\in \mathbb {X},\)

$$\begin{aligned} 0\vee \left( y-a\right) \le V\left( x,y\right) \le c\left( 1+y\right) . \end{aligned}$$

3. For any \(x\in \mathbb {X}, \quad \lim _{y\rightarrow +\infty }\frac{V\left( x,y\right) }{y}=1\).

4. For any \(x\in \mathbb {X},\) the function \(V\left( x,\cdot \right) \) is increasing.

The harmonicity of V is established in the following way. Let \(x\in \mathbb {X}\) and \(y>0\) and set \(V_{n}\left( x,y\right) =\mathbb {E} _{x}\left( y+S_{n};\tau _{y}>n\right) \), for any \(n\ge 1.\) By the Markov property we have

$$\begin{aligned} V_{n+1}\left( x,y\right) &= \mathbb {E}_{x}\left( y+S_{n+1};\tau _{y}>n+1\right) \\ &= \mathbb {E}_{x}\left( V_{n}\left( X_{1},y+S_{1}\right) ;\tau _{y}>1\right) . \end{aligned}$$

Taking the limit as \(n\rightarrow +\infty \), by Lebesgue’s dominated convergence theorem, we get

$$\begin{aligned} V\left( x,y\right) &= \lim _{n\rightarrow +\infty }\mathbb {E}_{x}\left( V_{n}\left( X_{1},y+S_{1}\right) ;\tau _{y}>1\right) \nonumber \\ &= \mathbb {E}_{x}\left( \lim _{n\rightarrow +\infty }V_{n}\left( X_{1},y+S_{1}\right) ;\tau _{y}>1\right) \nonumber \\ &= \mathbb {E}_{x}V\left( X_{1},y+S_{1}\right) 1_{\left\{ \tau _{y}>1\right\} } \nonumber \\ &= \mathbf {Q}_{+}V\left( x,y\right) , \end{aligned}$$
(3)

which proves that V is harmonic. We refer to [15] for all the details.
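
For intuition, the limit defining V can be observed numerically on the affine example of Sect. 3 (our sketch, hypothetical law as before): the Monte Carlo estimates of \(V_{n}\left( x,y\right) =\mathbb {E}_{x}\left( y+S_{n};\tau _{y}>n\right) \) stabilize as n grows.

```python
# Sketch: Monte Carlo estimate of V_n(x,y) = E_x[(y+S_n); tau_y > n] for the
# affine walk of Sect. 3 (hypothetical law a ~ U[-1/2, 1/2], b ~ N(0, 1)).
import numpy as np

rng = np.random.default_rng(8)

def Vn(x, y, n, reps=100_000):
    X = np.full(reps, x)
    S = np.zeros(reps)
    alive = np.ones(reps, dtype=bool)
    for _ in range(n):
        X = rng.uniform(-0.5, 0.5, reps) * X + rng.normal(size=reps)
        S += X
        alive &= (y + S > 0)
    return float(((y + S) * alive).mean())   # (y+S_n) restricted to {tau_y > n}

for n in (10, 40, 160, 640):
    print(n, Vn(0.0, 1.0, n))                # values should stabilize near V(x,y)
```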

We now give some hints on how to prove Theorem 4.5, stated below. From the strong approximation result of [14] it follows that, without loss of generality, we may reconstruct the Markov walk \(\left( S_n \right) _{n\geqslant 1}\) on the same probability space as a standard Brownian motion \(\left( B_{t}\right) _{t\ge 0}\) such that for any \(\varepsilon \in (0, \varepsilon _0),\) \(x\in \mathbb {X}\) and \(n\ge 1\),

$$\begin{aligned} \mathbb {P}_{x}\left( \sup _{0\le t\le 1}\left| S_{\left[ nt\right] }-\sigma B_{nt}\right| > n^{1/2-2\varepsilon }\right) \le c_{\varepsilon } n^{-2\varepsilon }, \end{aligned}$$
(4)

where the constant \(c_{\varepsilon }\) depends only on \(\varepsilon \), and \(\varepsilon _0>0\) is fixed. Using the strong approximation (4) and the well-known results on the exit time of the standard Brownian motion (see Lévy [20]), we establish the following:

Lemma 4.4

Let \(\varepsilon \in (0, \varepsilon _0)\) and \(\left( \theta _{n}\right) _{n\ge 1}\) be a sequence of positive numbers such that \( \theta _{n}\rightarrow 0\) and \(\theta _{n}n^{\varepsilon /4 }\rightarrow +\infty \) as \(n\rightarrow +\infty \). Then

1. There exists a constant \(c>0\) such that, for n sufficiently large,

$$\begin{aligned} \sup _{x\in \mathbb {X},\ y\in \left[ n^{1/2-\varepsilon },\theta _{n}n^{1/2}\right] } \left| \frac{\mathbb {P}_{x}\left( \tau _{y}>n\right) }{\frac{2y}{\sqrt{2\pi n}\sigma }}-1\right| \le c\theta _{n}. \end{aligned}$$

2. There exists a constant \(c_\varepsilon >0\) such that for any \(n\ge 1\) and \(y\ge n^{1/2-\varepsilon },\)

$$\begin{aligned} \sup _{x\in \mathbb {X}}\mathbb {P}_{x}\left( \tau _{y}>n\right) \le c_{\varepsilon }\frac{y}{\sqrt{n}}. \end{aligned}$$
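
For orientation (our addition): the Brownian benchmark behind Lemma 4.4 is explicit. By the reflection principle, \(\mathbf {Pr}\left( \min _{t\le n}\left( y+\sigma B_{t}\right)>0\right) =2\Phi \left( \frac{y}{\sigma \sqrt{n}}\right) -1\sim \frac{2y}{\sigma \sqrt{2\pi n}}\) as \(y/\sqrt{n}\rightarrow 0\), which is the comparison quantity appearing in part 1.

```python
# The Brownian benchmark behind Lemma 4.4: exact survival probability versus
# its asymptotics 2y/(sigma*sqrt(2*pi*n)) (reflection principle; our sketch).
from math import erf, sqrt, pi

def brownian_survival(y, sigma, n):
    # P(min_{t<=n} (y + sigma*B_t) > 0) = 2*Phi(y/(sigma*sqrt(n))) - 1
    return erf(y / (sigma * sqrt(2 * n)))

y, sigma = 1.0, 1.0
for n in (100, 10_000, 1_000_000):
    print(n, brownian_survival(y, sigma, n), 2 * y / (sigma * sqrt(2 * pi * n)))
```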

Lemma 4.4 holds for y in the interval \( \left[ n^{1/2-\varepsilon },\theta _{n}n^{1/2}\right] \). To extend it to a fixed \(y>0\), consider the first time \(\nu _{n}\) when \(\left| y+M_{k}\right| \) exceeds \(2 n^{1/2-\varepsilon }:\)

$$\begin{aligned} \nu _{n}=\min \left\{ k\ge 1:\left| y+M_{k}\right| \ge 2 n^{1/2-\varepsilon }\right\} , \end{aligned}$$
(5)

where \(\varepsilon >0\) is small enough. Using the Markov property and Lemma 4.4 we show that

$$\begin{aligned} \mathbb {P}_{x}\left( \tau _{y}>n\right) = \frac{2 }{\sqrt{2\pi n}\sigma }\mathbb {E}_{x}\left( y+S_{\nu _{n}};\tau _{y}>\nu _{n},\nu _{n}\le n^{1-\varepsilon }\right) +o\left( n^{-1/2} \right) . \end{aligned}$$

To end the proof it remains to show that for any \(x\in \mathbb {X}\) and \(y>0,\)

$$\begin{aligned} \lim _{n\rightarrow +\infty }\mathbb {E}_{x}\left( y+S_{\nu _{n}};\tau _{y}>\nu _{n},\nu _{n}\le n^{1-\varepsilon }\right) =V\left( x,y\right) . \end{aligned}$$
(6)

Again, for details, we refer to [15]. Our main result concerning the limit behavior of the exit time \(\tau _{y}\) is as follows:

Theorem 4.5

Assume hypotheses P1–P5. Then, for any \(x\in \mathbb {X}\) and \(y>0,\)

$$\begin{aligned} \mathbb {P}_{x}\left( \tau _{y}>n\right) \sim \frac{2V\left( x,y\right) }{ \sigma \sqrt{2\pi n}}\text { as }n\rightarrow +\infty . \end{aligned}$$

Moreover, there exists a constant c such that for any \(y>0\) and \(x \in \mathbb X,\)

$$\begin{aligned} \sup _{n \ge 1}\sqrt{n}\mathbb {P}_{x}\left( \tau _{y}>n\right) \le c\frac{1+y}{\sigma }. \end{aligned}$$

The proof of Theorem 2.2 follows the same lines, using the following:

Lemma 4.6

Let \(\varepsilon \in (0, \varepsilon _0),\) \(t>0\) and \(\left( \theta _{n}\right) _{n\ge 1}\) be a sequence such that \(\theta _{n}\rightarrow 0\) and \(\theta _{n}n^{\varepsilon /4 }\rightarrow +\infty \) as \(n\rightarrow +\infty \). Then

$$\begin{aligned} \lim _{n\rightarrow +\infty }\sup \left| \frac{\mathbb {P}_{x}\left( \tau _{y}>n-k,\frac{y+S_{n-k}}{\sqrt{n}}\le t\right) }{\frac{2y}{\sqrt{2\pi n} } \frac{1}{\sigma ^{3}}\int _{0}^{t}u\exp \left( -\frac{u^{2}}{2\sigma ^{2}} \right) du}-1\right| =0, \end{aligned}$$
(7)

where \(\sup \) is taken over \(x\in \mathbb {X},\ k\le n^{1-\varepsilon }\) and \( n^{1/2-\varepsilon } \le y\le \theta _{n}n^{1/2}\).

The results presented in Sect. 3 are more delicate but can be proved using similar techniques, which can be found in [16].