1 Introduction

Assume that on the probability space \((\varOmega , {\mathscr {F}}, {\mathbb {P}})\) we are given a sequence of real valued random variables \(( X_n )_{n\geqslant 1}\). Consider the random walk \(S_n=\sum _{k=1}^{n}X_k\), \(n\geqslant 1.\) Suppose first that \(( X_n )_{n\geqslant 1}\) are independent identically distributed with zero mean and finite variance. For any \(y>0\) denote by \(\tau _y\) the first time when \(y+S_n\) becomes non-positive. The study of the asymptotic behaviour of the probability \({\mathbb {P}}(\tau _y>n)\) and of the law of \(y+S_n\) conditioned to stay positive (i.e. given the event \(\{\tau _y>n\}\)) was initiated by Spitzer [31] and developed subsequently by Iglehart [20], Bolthausen [2], Doney [10], Bertoin and Doney [1], and Borovkov [3, 4], to cite only a few. Important progress has been achieved recently by Denisov and Wachtel [6,7,8] through a new approach based on the existence of a harmonic function (see also Varopoulos [33, 34] and Eichelsbacher and König [11]). Along these lines, Grama, Le Page and Peigné [17] and the authors in [13, 14] have studied sums of functions defined on Markov chains under spectral gap assumptions. The goal of the present paper is to complete these investigations by establishing local limit theorems for random walks defined on finite Markov chains and conditioned to stay positive.

Local limit theorems for sums of independent random variables without conditioning have attracted much attention since the pioneering work of Gnedenko [12] and Stone [32]. The first local limit theorem for a random walk conditioned to stay positive was established by Iglehart [21] in the context of walks with negative drift \({\mathbb {E}} X_1 < 0\). Caravenna [5] studied conditioned local limit theorems for random variables in the domain of attraction of the normal law, and Vatutin and Wachtel [35] for random variables \(X_k\) in the domain of attraction of a stable law. Denisov and Wachtel [8] obtained a local limit theorem for random walks in \({\mathbb {Z}}^d\) conditioned to stay in a cone, based on the harmonic function approach.

Local limit theorems without conditioning for Markov chains go back to the work of Kolmogorov [24] and the foundational contributions of Nagaev [27, 28], who initiated the study of Markov chains by spectral methods. The work of Doeblin and Fortet [9] and the theorem of Ionescu-Tulcea and Marinescu [22] made it possible to weaken Nagaev's conditions so as to deal with Markov kernels having a contraction property. In this spirit Le Page [25] proved a local limit theorem for products of random matrices, and Guivarc'h and Hardy [18] and Hennion and Hervé [19] obtained local limit theorems for sums \(S_n=\sum _{k=1}^{n} f(X_k)\), where \(( X_n )_{n\geqslant 0}\) is a Markov chain and f is a real function defined on the state space of the chain; this is also the setting of our paper.

Much less is known about conditioned local limit theorems. We are aware only of the results of Presman [29, 30], who considered finite Markov chains in a more general setting, but whose rather stringent assumptions do not cover the results of this paper. We mention also the work of Le Page and Peigné [26], where a conditioned local limit theorem is established for the stochastic recursion in a rather different setting.

Let us briefly review the main results of the paper concerning the conditioned local limit behaviour of the walk \(S_n=\sum _{k=1}^{n} f(X_k)\) defined on a finite Markov chain \(( X_n )_{n\geqslant 0}\). From the more general statement of Theorem 2.4, under the conditions that the underlying Markov chain is irreducible and aperiodic and that \(( S_n )_{n\geqslant 0}\) is centred and non-lattice, it follows that, for fixed \(x\in {\mathbb {X}}\) and \(y\in {\mathbb {R}}\), uniformly in \(z \geqslant 0,\)

$$\begin{aligned} \lim _{n\rightarrow \infty } \left( n {\mathbb {P}}_x \left( y+S_{n} \in [z,z+a],\, \tau _y > n \right) - \frac{2a V(x,y)}{\sqrt{2\pi } \sigma ^2} \varphi _+ \left( \frac{z}{\sqrt{n}\sigma } \right) \right) =0, \end{aligned}$$
(1.1)

where \(\varphi _+(t) = te^{-\frac{t^2}{2}} \mathbb {1}_{\{t\geqslant 0\}}\) is the Rayleigh density. The relation (1.1) extends the classical local limit theorem of Stone [32] to the case of Markov chains. We refer to Caravenna [5] and Vatutin and Wachtel [35], where the corresponding results have been obtained for independent random variables in the domains of attraction of the normal and stable laws, respectively.

We note that while (1.1) is informative for large z, it says little for z in a compact set. A meaningful local limit behaviour for fixed values of z can be obtained from our Theorem 2.5. Under the same assumptions, for any fixed \(x\in {\mathbb {X}}\), \(y\in {\mathbb {R}}\) and \(z \geqslant 0,\)

$$\begin{aligned}&\lim _{n\rightarrow +\infty } n^{3/2} {\mathbb {P}}_x \left( y+S_{n} \in [z,z+a] ,\, \tau _y > n \right) \nonumber \\&\qquad = \frac{2V(x,y)}{\sqrt{2\pi }\sigma ^3} \int _z^{z+a} \int _{{\mathbb {X}}} V^*\left( x', z' \right) {\varvec{\nu }} (\text {d}x') \text {d}z'. \end{aligned}$$
(1.2)

For sums of independent random variables a similar limit behaviour was found in Vatutin and Wachtel [35]. It should be noted that (1.1) and (1.2) complement each other: the main term in (1.1) is meaningful for large z with \(z \sim n^{1/2} \) as \(n\rightarrow \infty \), while (1.2) holds for z in compact sets.

We also state extensions of (1.1) and (1.2) to the joint law of \(X_n\) and \(y+S_n\). These extensions are useful in applications, in particular, for determining the exact asymptotic behaviour of the survival time for branching processes in a Markovian environment. They also allow us to infer the local limit behaviour of the exit time \(\tau _y\) (see Theorem 2.8): under the assumptions mentioned before, for any \(x\in {\mathbb {X}}\) and \(y \in {\mathbb {R}}\),

$$\begin{aligned} \lim _{n\rightarrow +\infty } n^{3/2}{\mathbb {P}}_x \left( \tau _y = n \right) = \frac{2V(x,y)}{\sqrt{2\pi } \sigma ^3} \int _0^{+\infty } {\mathbb {E}}_{{\varvec{\nu }}}^* \left( V^*(X_1^*,z);\, S_1^* \geqslant z \right) \text {d}z. \end{aligned}$$

The approach employed in this paper is different from that in [26, 29, 30], which are all based on Wiener–Hopf arguments. Our technique is close to that of Denisov and Wachtel [8]; however, in order to make it work for the random walk \(S_n=\sum _{k=1}^{n} f(X_k)\) defined on the Markov chain \(( X_n )_{n\geqslant 0}\), we have to overcome some essential difficulties. One of them is related to the reversibility of the Markov walk \((S_n )_{n\geqslant 0}\). Let us explain this point in more detail. When \(( X_n )_{n\geqslant 1}\) are \({\mathbb {Z}}\)-valued independent identically distributed random variables, let \(( S_n^* )_{n\geqslant 1}\) be the reversed walk given by \(S_n^* = \sum _{k=1}^n X_k^*\), where \(( X_n^* )_{n\geqslant 1}\) is a sequence of independent identically distributed random variables with the same law as \(-X_1\). Denote by \(\tau ^*_z\) the first time when \(( z+S_k^* )_{k\geqslant 0}\) becomes non-positive. Then, due to the exchangeability of the random variables \(( X_n )_{n\geqslant 1}\), we have

$$\begin{aligned} {\mathbb {P}}(y+S_n=z, \tau _y>n) = {\mathbb {P}}(z+S^*_n=y, \tau ^*_z >n). \end{aligned}$$
(1.3)

This relation no longer holds for the walk \(S_n=\sum _{k=1}^{n} f(X_k)\), where \(( X_n )_{n\geqslant 0}\) is a Markov chain. Even though \(( X_n )_{n\geqslant 0}\) takes values in a finite state space \({\mathbb {X}}\) and there exists a dual chain \(( X^*_n )_{n\geqslant 0},\) the main difficulty is that the function \(f:{\mathbb {X}} \mapsto {\mathbb {R}}\) can be arbitrary and therefore the Markov walk \((S_n )_{n\geqslant 0}\) is not necessarily lattice valued. In this case the Markov chain formed by the couple \(( X_n, y+S_n )_{n\geqslant 0}\) cannot be reversed directly as in (1.3). We cope with this by altering the arrival interval \([z,z+h]\) in the following two-sided bound

$$\begin{aligned}&\sum _{x^* \in {\mathbb {X}}} {\mathbb {E}}_{x^*}^* \left( \psi _{x}^*(X_n^*) \mathbb {1}_{\left\{ z+S_n^* \in [y-h,y],\, \tau _{z}^*> n \right\} } \right) {\varvec{\nu }}(x^*) \nonumber \\&\qquad \leqslant {\mathbb {P}}_x (y+S_n\in [z,z+h], \tau _y> n) \nonumber \\&\qquad \leqslant \sum _{x^* \in {\mathbb {X}}} {\mathbb {E}}_{x^*}^* \left( \psi _{x}^*(X_n^*) \mathbb {1}_{\left\{ z+h+S_n^* \in [y,y+h],\, \tau _{z+h}^* > n \right\} } \right) {\varvec{\nu }}(x^*), \end{aligned}$$
(1.4)

where \({\varvec{\nu }}\) is the invariant probability of the Markov chain \(( X_n )_{n\geqslant 1}\), \(\psi _{x}^*: {\mathbb {X}} \mapsto {\mathbb {R}}_+\) is a function such that \({\varvec{\nu }} \left( \psi _{x}^* \right) = 1\) (see (6.2) for a precise definition) and \(S_n^* = -\sum _{k=1}^n f \left( X_k^* \right) \), \(\forall n \geqslant 1.\) Following this idea, for a fixed \(a >0\) we split the interval \([z,z+a]\) into p subintervals of length \(h=a/p\) and determine exact upper and lower bounds for the corresponding expectations in (1.4). We then combine the resulting bounds to obtain a precise asymptotic, as \(n \rightarrow +\infty \), for the probabilities \({\mathbb {P}}_x (y+S_n\in [z,z+a], \tau _y > n)\) for fixed \(a>0\), and finally let p go to \(+\infty \). This summarizes, very succinctly, how we generalize (1.3) to the non-lattice case. Together with some further developments in Sects. 7 and 8, this allows us to establish Theorems 2.4 and 2.5.

The outline of the paper is as follows:

  • Section 2: We give the necessary notations and formulate the main results.

  • Section 3: We introduce the dual Markov chain and state some of its properties.

  • Section 4: We introduce and study the perturbed transition operator.

  • Section 5: We prove a local limit theorem for sums defined on Markov chains.

  • Section 6: We collect some auxiliary bounds.

  • Sections 7, 8 and 9: We prove Theorems 2.4, 2.5 and 2.7–2.8, respectively.

  • Section 10: We state auxiliary assertions which are necessary for the proofs.

Let us end this section by fixing some notations. The symbol c denotes a positive constant depending on all previously introduced constants. Sometimes, to stress the dependence of the constants on some parameters \(\alpha ,\beta ,\dots \) we use the notations \( c_{\alpha }, c_{\alpha ,\beta },\dots \). All these constants may change their values from one occurrence to another. The indicator of an event A is denoted by \(\mathbb {1}_A\). For any bounded measurable function f on \({\mathbb {X}}\), random variable X in \({\mathbb {X}}\) and event A, the integral \(\int _{{\mathbb {X}}} f(x) {\mathbb {P}} (X \in \text {d}x, A)\) means the expectation \({\mathbb {E}}\left( f(X); A\right) ={\mathbb {E}} \left( f(X) \mathbb {1}_A\right) \).

2 Notations and results

Let \(( X_n )_{n\geqslant 0}\) be a homogeneous Markov chain on the probability space \((\varOmega , {\mathscr {F}}, {\mathbb {P}})\) with values in the finite state space \({\mathbb {X}}\). Denote by \({\mathscr {C}}\) the set of complex functions defined on \({\mathbb {X}}\) endowed with the norm \(\left\| \cdot \right\| _{\infty }\): \(\left\| g\right\| _{\infty } = \sup _{x\in {\mathbb {X}}} \left|g(x)\right|\), for any \(g\in {\mathscr {C}}\). Let \({\mathbf {P}}\) be the transition kernel of the Markov chain \(( X_n )_{n\geqslant 0}\) to which we associate the following transition operator: for any \(x\in {\mathbb {X}}\) and \(g \in {\mathscr {C}}\),

$$\begin{aligned} {\mathbf {P}} g(x) = \sum _{x'\in {\mathbb {X}}} g(x') {\mathbf {P}}(x,x'). \end{aligned}$$

For any \(x\in {\mathbb {X}}\), denote by \({\mathbb {P}}_x\) and \({\mathbb {E}}_x\) the probability, respectively the expectation, generated by the finite dimensional distributions of the Markov chain \(( X_n )_{n\geqslant 0}\) starting at \(X_0 = x\). We assume that the Markov chain is irreducible and aperiodic, which is equivalent to the following hypothesis.

Hypothesis M1

The matrix \({\mathbf {P}}\) is primitive: there exists \(k_0 \geqslant 1\) such that, for any \(x \in {\mathbb {X}}\) and any non-negative, not identically zero function \(g \in {\mathscr {C}}\),

$$\begin{aligned} {\mathbf {P}}^{k_0}g(x) > 0. \end{aligned}$$

Let f be a real valued function defined on \({\mathbb {X}}\) and let \((S_n)_{n\geqslant 0}\) be the process defined by

$$\begin{aligned} S_0 = 0 \qquad \text {and} \qquad S_n = f \left( X_1 \right) + \cdots + f \left( X_n \right) , \quad \forall n \geqslant 1. \end{aligned}$$

For any starting point \(y \in {\mathbb {R}}\) we consider the Markov walk \((y+S_n)_{n\geqslant 0}\) and we denote by \(\tau _y\) the first time when the Markov walk becomes non-positive:

$$\begin{aligned} \tau _y := \inf \left\{ k \geqslant 1, \; y+S_k \leqslant 0 \right\} . \end{aligned}$$

Under M1, by the Perron–Frobenius theorem, there is a unique positive invariant probability \({\varvec{\nu }}\) on \({\mathbb {X}}\) satisfying the following property: there exist \(c_1>0\) and \(c_2>0\) such that for any function \(g \in {\mathscr {C}}\) and \(n \geqslant 1\),

$$\begin{aligned} \sup _{x\in {\mathbb {X}}} \left|{\mathbb {E}}_x \left( g\left( X_n \right) \right) - {\varvec{\nu }}(g)\right| = \left\| {\mathbf {P}}^ng - {\varvec{\nu }}(g)\right\| _{\infty } \leqslant \left\| g\right\| _{\infty } c_1e^{-c_2n}, \end{aligned}$$
(2.1)

where \({\varvec{\nu }}(g) = \sum _{x\in {\mathbb {X}}} g(x) {\varvec{\nu }}(x)\).
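Since \({\mathbb {X}}\) is finite, (2.1) can be checked numerically on small examples. The following sketch (a toy two-state kernel with hypothetical entries; all names and values are our own, not from the paper) computes the invariant probability \({\varvec{\nu }}\) and observes the uniform geometric convergence of \({\mathbf {P}}^n\) towards \({\varvec{\nu }}\).

```python
def mat_mult(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Hypothetical transition kernel on X = {0, 1}.
P = [[0.7, 0.3],
     [0.2, 0.8]]

# For a two-state chain with P(0,1) = a and P(1,0) = b, the invariant
# probability is nu = (b, a) / (a + b).
a, b = P[0][1], P[1][0]
nu = [b / (a + b), a / (a + b)]

# Invariance: (nu P)(x') = sum_x nu(x) P(x, x') = nu(x').
nu_P = [sum(nu[x] * P[x][xp] for x in range(2)) for xp in range(2)]

# Uniform geometric convergence of P^n towards nu, as in (2.1).
Pn = P
for _ in range(49):          # Pn becomes P^50
    Pn = mat_mult(Pn, P)
max_err = max(abs(Pn[x][xp] - nu[xp]) for x in range(2) for xp in range(2))
```

Here the second eigenvalue of \({\mathbf {P}}\) is \(1/2\), so the error in (2.1) decays like \(2^{-n}\).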

The following two hypotheses ensure that the Markov walk has no drift and is non-lattice, respectively.

Hypothesis M2

The function f is centred:

$$\begin{aligned} {\varvec{\nu }} \left( f \right) = 0. \end{aligned}$$

Hypothesis M3

For any \((\theta ,a) \in {\mathbb {R}}^2\), there exists a sequence \(x_0, \dots , x_n\) in \({\mathbb {X}}\) such that

$$\begin{aligned} {\mathbf {P}}(x_0,x_1) \cdots {\mathbf {P}}(x_{n-1},x_n) {\mathbf {P}}(x_n,x_0) > 0 \end{aligned}$$

and

$$\begin{aligned} f(x_0) + \cdots + f(x_n) - (n+1)\theta \notin a{\mathbb {Z}}. \end{aligned}$$

Under Hypothesis M1, it is shown in Sect. 4 that Hypothesis M3 is equivalent to the condition that the perturbed operator \({\mathbf {P}}_t \) has spectral radius less than 1 for any \( t\ne 0\). Furthermore, in the Appendix (see Lemma 10.3, Sect. 10), we show that Hypotheses M1–M3 imply that the following number \(\sigma ^2\), which is the limit of \({\mathbb {E}}_x ( S_n^2 )/n\) as \(n \rightarrow +\infty \) for any \(x \in {\mathbb {X}}\), is not zero:

$$\begin{aligned} \sigma ^2 := {\varvec{\nu }}(f^2) + 2\sum _{n=1}^{+\infty } {\varvec{\nu }} \left( f {\mathbf {P}}^n f \right) > 0. \end{aligned}$$
(2.2)
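Since the terms \({\varvec{\nu }} ( f {\mathbf {P}}^n f )\) decay geometrically by (2.1), the series (2.2) can be evaluated by truncation. A minimal numerical sketch (a hypothetical two-state kernel and a centred f of our own choosing, not from the paper):

```python
P = [[0.7, 0.3], [0.2, 0.8]]           # hypothetical kernel
nu = [0.4, 0.6]                         # its invariant probability
f = [3.0, -2.0]                         # nu(f) = 0.4*3 - 0.6*2 = 0  (Hypothesis M2)

def P_apply(g):
    """Return Pg, i.e. (Pg)(x) = sum_{x'} g(x') P(x, x')."""
    return [sum(P[x][xp] * g[xp] for xp in range(2)) for x in range(2)]

def nu_dot(g, h):
    """Return nu(g h) = sum_x g(x) h(x) nu(x)."""
    return sum(nu[x] * g[x] * h[x] for x in range(2))

sigma2 = nu_dot(f, f)                   # nu(f^2)
Pnf = f[:]
for _ in range(200):                    # truncation; the terms decay geometrically
    Pnf = P_apply(Pnf)                  # Pnf = P^n f
    sigma2 += 2.0 * nu_dot(f, Pnf)
```

For this particular f one has \({\mathbf {P}}f = f/2\), so the series sums explicitly to \(\sigma ^2 = 6 + 2\cdot 6\cdot \frac{1/2}{1-1/2} = 18 > 0\), which the truncated sum reproduces.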

Under spectral gap assumptions, the asymptotic behaviour of the probability \({\mathbb {P}}_x \left( \tau _y > n \right) \) and of the conditional law of the Markov walk \(\frac{y+S_n}{\sqrt{n}}\) given the event \(\{ \tau _y > n \}\) have been studied in [14]. It is easy to see that under M1, M2 and (2.2) the conditions of [14] are satisfied (see Sect. 10). We summarize the main results of [14] in the following propositions.

Proposition 2.1

(Preliminary results, part I) Assume Hypotheses M1–M3. There exists a non-degenerate non-negative function V on \({\mathbb {X}} \times {\mathbb {R}}\) such that

  1.

    For any \((x,y) \in {\mathbb {X}} \times {\mathbb {R}}\) and \(n \geqslant 1\),

    $$\begin{aligned} {\mathbb {E}}_x \left( V \left( X_n, y+S_n \right) ;\, \tau _y > n \right) = V(x,y). \end{aligned}$$
  2.

    For any \(x \in {\mathbb {X}}\), the function \(V(x,\cdot )\) is non-decreasing and for any \((x,y) \in {\mathbb {X}} \times {\mathbb {R}}\),

    $$\begin{aligned} V(x,y) \leqslant c \left( 1+\max (y,0) \right) . \end{aligned}$$
  3.

    For any \(x \in {\mathbb {X}}\), \(y \in {\mathbb {R}}\) and \(\delta \in (0,1)\),

    $$\begin{aligned} \left( 1- \delta \right) \max (y,0) - c_{\delta } \leqslant V(x,y) \leqslant \left( 1+\delta \right) \max (y,0) + c_{\delta }. \end{aligned}$$

Since the function V satisfies point 1, it is said to be harmonic for the killed Markov walk \((y+S_n)_{n\geqslant 0}\).

Proposition 2.2

(Preliminary results, part II) Assume Hypotheses M1–M3.

  1.

    For any \((x,y) \in {\mathbb {X}} \times {\mathbb {R}}\),

    $$\begin{aligned} \lim _{n\rightarrow +\infty } \sqrt{n}{\mathbb {P}}_x \left( \tau _y > n \right) = \frac{2V(x,y)}{\sqrt{2\pi } \sigma }, \end{aligned}$$

    where \(\sigma \) is defined by (2.2).

  2.

    For any \((x,y) \in {\mathbb {X}} \times {\mathbb {R}}\) and \(n\geqslant 1\),

    $$\begin{aligned} {\mathbb {P}}_x \left( \tau _y > n \right) \leqslant c\frac{ 1 + \max (y,0) }{\sqrt{n}}. \end{aligned}$$

Define the support of V by

$$\begin{aligned} supp(V) := \{ (x,y) \in {\mathbb {X}} \times {\mathbb {R}} : V(x,y)> 0 \}. \end{aligned}$$
(2.3)

Note that from property 3 of Proposition 2.1, for any fixed \(x\in {\mathbb {X}}\), the function \(y \mapsto V(x,y)\) is positive for large y. For further details on the properties of \(supp(V)\) we refer to [14].
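The \(n^{-1/2}\) decay in point 1 of Proposition 2.2 can be observed by simulation. The following Monte Carlo sketch is a toy two-state example entirely of our own making (the kernel, the centred f, the starting point and the seed are arbitrary choices, and the constant \(2V(x,y)/(\sqrt{2\pi }\sigma )\) is not computed here); it merely estimates \({\mathbb {P}}_x(\tau _y > n)\).

```python
import random

random.seed(1)                       # fixed seed for reproducibility
P = [[0.7, 0.3], [0.2, 0.8]]         # hypothetical transition kernel on X = {0, 1}
f = [3.0, -2.0]                      # centred: the invariant law is (0.4, 0.6)

def survives(x, y, n):
    """Return True if y + S_k > 0 for every k = 1, ..., n, starting from X_0 = x."""
    s = y
    for _ in range(n):
        x = 0 if random.random() < P[x][0] else 1
        s += f[x]
        if s <= 0:                   # the walk is killed at time tau_y
            return False
    return True

n, trials = 100, 20000
count = sum(survives(0, 1.0, n) for _ in range(trials))
estimate = count / trials            # should decay like c / sqrt(n) as n grows
```

Repeating the experiment for several values of n and plotting \(\sqrt{n}\,\cdot\) estimate would exhibit the convergence of point 1.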

Proposition 2.3

(Preliminary results, part III) Assume Hypotheses M1–M3.

  1.

    For any \((x,y) \in supp(V)\) and \(t\geqslant 0\),

    $$\begin{aligned} \lim _{n\rightarrow +\infty } {\mathbb {P}}_x \left( \frac{y+S_n}{\sqrt{n}\sigma } \leqslant t \,\Big |\, \tau _y > n \right) = {\varvec{\Phi }}^+(t), \end{aligned}$$

    where \({\varvec{\Phi }}^+(t) = 1-e^{-\frac{t^2}{2}}\) is the Rayleigh distribution function.

  2.

    There exists \(\varepsilon _0 >0\) such that, for any \(\varepsilon \in (0,\varepsilon _0)\), \(n\geqslant 1\), \(t_0 > 0\), \(t\in [0,t_0]\) and \((x,y) \in {\mathbb {X}} \times {\mathbb {R}}\),

    $$\begin{aligned} \left| {\mathbb {P}}_x \left( y+S_n \leqslant t \sqrt{n} \sigma ,\, \tau _y > n \right) - \frac{2V(x,y)}{\sqrt{2\pi n}\sigma } {\varvec{\Phi }}^+(t) \right| \leqslant c_{\varepsilon ,t_0} \frac{\left( 1+\max (y,0)^2 \right) }{n^{1/2+\varepsilon }}. \end{aligned}$$

In point 1 of Proposition 2.2 and point 2 of Proposition 2.3 the function V can be zero, so that for all pairs \((x,y)\) satisfying \(V(x,y)=0\) it holds that

$$\begin{aligned} \lim _{n\rightarrow +\infty } \sqrt{n} {\mathbb {P}}_x \left( \tau _y > n \right) = 0 \end{aligned}$$

and

$$\begin{aligned} \lim _{n\rightarrow +\infty } \sqrt{n} {\mathbb {P}}_x \left( y+S_n \leqslant t \sqrt{n} \sigma ,\, \tau _y > n \right) = 0. \end{aligned}$$

We note that, for the convenience of the reader, Propositions 2.1, 2.2 and 2.3 are formulated here under Hypotheses M1–M3, but they can be stated under much more general conditions, in particular for Markov chains with countable state spaces; see [14].

Now we proceed to formulate the main results of the paper. Our first result is an extension of the Gnedenko–Stone local limit theorem, originally stated for sums of independent random variables. The following theorem generalizes it to sums of random variables defined on Markov chains conditioned to stay positive.

Theorem 2.4

Assume Hypotheses M1–M3. Let \(a>0\). Then there exists \(\varepsilon _0 \in (0,1/4)\) such that for any \(\varepsilon \in (0,\varepsilon _0)\), non-negative function \(\psi \in {\mathscr {C}}\), \(y \in {\mathbb {R}}\) and \( n \geqslant 3\varepsilon ^{-3}\), we have

$$\begin{aligned}&\sup _{x\in {\mathbb {X}},\, z \geqslant 0} n\left| {\mathbb {E}}_x \left( \psi \left( X_{n} \right) ;\, y+S_{n} \in [z,z+a],\, \tau _y > n \right) - \frac{2a {\varvec{\nu }} \left( \psi \right) V(x,y)}{\sqrt{2\pi } \sigma ^2 n} \varphi _+ \left( \frac{z}{\sqrt{n}\sigma } \right) \right| \\&\quad \leqslant c \left( 1 + \max (y,0) \right) \left\| \psi \right\| _{\infty } \left( \sqrt{\varepsilon } + \frac{c_{\varepsilon }\left( 1+\max (y,0) \right) }{n^{\varepsilon }} \right) , \end{aligned}$$

where \(\varphi _+(t) = te^{-\frac{t^2}{2}} \mathbb {1}_{\{t\geqslant 0\}}\) is the Rayleigh density and the constants c and \(c_{\varepsilon }\) may depend on a.

Note that Theorem 2.4 is meaningful only for large values of z with \(z \sim n^{1/2} \) as \(n\rightarrow \infty \). Indeed, the remainder term is of order \(n^{-1-\varepsilon },\) for some small \(\varepsilon >0,\) while for fixed z the leading term is of order \(n^{-3/2}\). When \(z = cn^{1/2}\) the leading term becomes of order \(n^{-1}\), while the remainder is still \(o(n^{-1})\). To deal with z in compact sets a more refined result will be given below. We deduce it from Theorem 2.4; however, the proof requires the concept of duality.

Let us introduce the dual Markov chain and the corresponding associated Markov walk. Since \({\varvec{\nu }}\) is positive on \({\mathbb {X}}\), the following dual Markov kernel \({\mathbf {P}}^*\) is well defined:

$$\begin{aligned} {\mathbf {P}}^* \left( x,x^* \right) = \frac{{\varvec{\nu }} \left( x^* \right) }{{\varvec{\nu }} (x)} {\mathbf {P}} \left( x^*,x \right) , \quad \forall (x,x^*) \in {\mathbb {X}}^2. \end{aligned}$$
(2.4)

It is easy to see that \({\varvec{\nu }}\) is also \({\mathbf {P}}^*\)-invariant. The dual of \(( X_n)_{n\geqslant 0}\) is the Markov chain \(\left( X_n^* \right) _{n\geqslant 0}\) with values in \({\mathbb {X}}\) and transition probability \({\mathbf {P}}^*\). Without loss of generality we may assume that the dual Markov chain \(\left( X_n^* \right) _{n\geqslant 0}\) is defined on an extension of the probability space \((\varOmega , {\mathscr {F}}, {\mathbb {P}})\) and that it is independent of the Markov chain \(( X_n)_{n\geqslant 0}\). We define the associated dual Markov walk by

$$\begin{aligned} S_0^* = 0 \qquad \text {and} \qquad S_n^* = \sum _{k=1}^n -f \left( X_k^* \right) , \quad \forall n \geqslant 1. \end{aligned}$$
(2.5)

For any \(z\in {\mathbb {R}}\), define also the exit time

$$\begin{aligned} \tau _z^* := \inf \left\{ k \geqslant 1 : z+S_k^* \leqslant 0 \right\} . \end{aligned}$$
(2.6)

For any \(x \in {\mathbb {X}}\), denote by \({\mathbb {P}}_x^*\) and \({\mathbb {E}}_x^*\) the probability, respectively the expectation, generated by the finite dimensional distributions of the Markov chain \(( X_n^* )_{n\geqslant 0}\) starting at \(X_0^* = x\). It is shown in Sect. 3 that the dual Markov chain \(\left( X_n^* \right) _{n\geqslant 0}\) satisfies Hypotheses M1–M3, as does the original chain \(\left( X_n \right) _{n\geqslant 0}\). Thus, Propositions 2.1–2.3 hold also for \(\left( X_n^* \right) _{n\geqslant 0}\), with \(V\), \(\tau \), \((S_n)_{n\geqslant 0}\) and \({\mathbb {P}}_x\) replaced by \(V^*\), \(\tau ^*\), \((S_n^*)_{n\geqslant 0}\) and \({\mathbb {P}}_x^*\), respectively. Note also that both chains have the same invariant probability \({\varvec{\nu }}\). Denote by \({\mathbb {E}}_{{\varvec{\nu }}}\), \({\mathbb {E}}_{{\varvec{\nu }}}^*\) the expectations generated by the finite dimensional distributions of the Markov chains \(( X_n )_{n\geqslant 0}\) and \(( X_n^* )_{n\geqslant 0}\) in the stationary regime.
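The dual kernel (2.4) is straightforward to construct numerically. The sketch below (a toy two-state example; the kernel and its invariant law are our own hypothetical choices) builds \({\mathbf {P}}^*\) from \({\mathbf {P}}\) and \({\varvec{\nu }}\), and checks that each row of \({\mathbf {P}}^*\) is a probability vector and that \({\varvec{\nu }}\) is \({\mathbf {P}}^*\)-invariant.

```python
P = [[0.7, 0.3], [0.2, 0.8]]            # hypothetical kernel
nu = [0.4, 0.6]                          # its invariant probability

# Dual kernel (2.4): P*(x, x*) = nu(x*) P(x*, x) / nu(x).
Pstar = [[nu[xs] * P[xs][x] / nu[x] for xs in range(2)] for x in range(2)]

# Each row of P* is a probability vector ...
row_sums = [sum(Pstar[x]) for x in range(2)]

# ... and nu is P*-invariant: sum_x nu(x) P*(x, x*) = nu(x*).
nu_Pstar = [sum(nu[x] * Pstar[x][xs] for x in range(2)) for xs in range(2)]
```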

Our second result is a conditional version of the local limit theorem for fixed x, y and z.

Theorem 2.5

Assume Hypotheses M1–M3.

  1.

    For any non-negative function \(\psi \in {\mathscr {C}}\), \(a>0\), \(x\in {\mathbb {X}}\), \(y \in {\mathbb {R}}\) and \(z \geqslant 0\),

    $$\begin{aligned} \lim _{n\rightarrow +\infty } n^{3/2}&{\mathbb {E}}_x \left( \psi \left( X_{n} \right) ;\, y+S_{n} \in [z,z+a],\, \tau _y> n \right) \\&\qquad = \frac{2V(x,y)}{\sqrt{2\pi }\sigma ^3} \int _z^{z+a} {\mathbb {E}}_{{\varvec{\nu }}}^* \left( \psi \left( X_1^* \right) V^*\left( X_1^*, z'+S_1^* \right) ;\, \tau _{z'}^* > 1 \right) \text {d}z'. \end{aligned}$$
  2.

    Moreover, there exists \(c > 0\) such that for any \(a>0\), non-negative function \(\psi \in {\mathscr {C}}\), \(y \in {\mathbb {R}}\), \(z \geqslant 0\) and \(n \geqslant 1\),

    $$\begin{aligned}&\sup _{x\in {\mathbb {X}}} {\mathbb {E}}_x \left( \psi \left( X_{n} \right) ;\, y+S_{n} \in [z,z+a],\, \tau _y > n \right) \\&\quad \leqslant \frac{c \left\| \psi \right\| _{\infty }}{n^{3/2}} \left( 1+a^3 \right) \left( 1+z \right) \left( 1+\max (y,0) \right) . \end{aligned}$$

In the particular case when \(\psi =1\), the previous theorem reads as follows:

Corollary 2.6

Assume Hypotheses M1–M3.

  1.

    For any \(a>0\), \(x\in {\mathbb {X}}\), \(y \in {\mathbb {R}}\) and \(z \geqslant 0\),

    $$\begin{aligned} \lim _{n\rightarrow +\infty } n^{3/2}&{\mathbb {P}}_x \left( y+S_{n} \in [z,z+a],\, \tau _y > n \right) \\&\qquad = \frac{2V(x,y)}{\sqrt{2\pi }\sigma ^3} \int _z^{z+a} \int _{{\mathbb {X}}} V^*\left( x', z' \right) {\varvec{\nu }} (\text {d}x') \text {d}z'. \end{aligned}$$
  2.

    Moreover, there exists \(c > 0\) such that for any \(a>0\), \(y \in {\mathbb {R}}\), \(z \geqslant 0\) and \(n \geqslant 1\),

    $$\begin{aligned} \sup _{x\in {\mathbb {X}}} {\mathbb {P}}_x \left( y+S_{n} \in [z,z+a],\, \tau _y > n \right) \leqslant \frac{c}{n^{3/2}} \left( 1+a^3 \right) \left( 1+z \right) \left( 1+\max (y,0) \right) . \end{aligned}$$

Note that assertion 1 of Theorem 2.5 and assertion 1 of Corollary 2.6 hold for fixed \(a>0\), \(x\in {\mathbb {X}}\), \(y \in {\mathbb {R}}\) and \(z \geqslant 0\); these results do not cover the case when z is not in a compact set, for instance when \(z \sim n^{1/2}\).

The following result extends Theorem 2.5 to some functionals of the trajectories of the chain \(( X_n )_{n\geqslant 0}\). For any \((x,x^*) \in {\mathbb {X}}^2\), the probability generated by the finite dimensional distributions of the two dimensional Markov chain \(( X_n, X_n^*)_{n\geqslant 0}\) starting at \((X_0,X_0^*) = (x,x^*)\) is given by \({\mathbb {P}}_{x,x^*}={\mathbb {P}}_{x} \times {\mathbb {P}}_{x^*}^*\). Let \({\mathbb {E}}_{x,x^*}\) be the corresponding expectation. For any \(l \geqslant 1\), denote by \({\mathscr {C}}^+ ( {\mathbb {X}}^l \times {\mathbb {R}}_+ )\) the set of non-negative functions g: \({\mathbb {X}}^l \times {\mathbb {R}}_+ \rightarrow {\mathbb {R}}_+\) satisfying the following properties:

  • for any \((x_1,\dots ,x_l) \in {\mathbb {X}}^l\), the function \(z \mapsto g(x_1,\dots ,x_l,z)\) is continuous,

  • there exists \(\varepsilon > 0\) such that \(\max _{x_1,\dots ,x_l \in {\mathbb {X}}} \sup _{z \geqslant 0} g(x_1,\dots ,x_l,z) (1+z)^{2+\varepsilon } < +\infty \).

Theorem 2.7

Assume Hypotheses M1–M3. For any \(x \in {\mathbb {X}}\), \(y \in {\mathbb {R}}\), \(l \geqslant 1\), \(m \geqslant 1\) and \(g \in {\mathscr {C}}^+ \left( {\mathbb {X}}^{l+m} \times {\mathbb {R}}_+ \right) \),

$$\begin{aligned}&\lim _{n\rightarrow +\infty } n^{3/2} {\mathbb {E}}_x \left( g \left( X_1, \dots , X_l, X_{n-m+1}, \dots , X_n, y+S_n \right) ;\, \tau _y> n \right) \\&\quad = \frac{2}{\sqrt{2\pi }\sigma ^3} \int _0^{+\infty } \sum _{x^* \in {\mathbb {X}}} {\mathbb {E}}_{x,x^*} \left( g \left( X_1, \dots , X_l,X_m^{*},\dots ,X_1^{*},z \right) \right. \\&\qquad \left. \times V \left( X_l, y+S_l \right) V^* \left( X_m^*, z+S_m^* \right) ;\, \tau _y> l,\, \tau _z^* > m \right) {\varvec{\nu }}(x^*) \text {d}z. \end{aligned}$$

As a consequence of Theorem 2.7 we deduce the following asymptotic behaviour of the probability of the event \(\left\{ \tau _y=n \right\} \) as \(n\rightarrow +\infty \).

Theorem 2.8

Assume Hypotheses M1–M3. For any \(x\in {\mathbb {X}}\) and \(y \in {\mathbb {R}}\),

$$\begin{aligned} \lim _{n\rightarrow +\infty } n^{3/2}{\mathbb {P}}_x \left( \tau _y = n \right) = \frac{2V(x,y)}{\sqrt{2\pi } \sigma ^3} \int _0^{+\infty } {\mathbb {E}}_{{\varvec{\nu }}}^* \left( V^*(X_1^*,z);\, S_1^* \geqslant z \right) \text {d}z. \end{aligned}$$

3 Properties of the dual Markov chain

In this section we establish some properties of the dual Markov chain and of the corresponding Markov walk.

Lemma 3.1

Suppose that the operator \({\mathbf {P}}\) satisfies Hypotheses M1–M3. Then the dual operator \({\mathbf {P}}^*\) also satisfies M1–M3.

Proof

By the definition of \({\mathbf {P}}^*,\) for any \(x^* \in {\mathbb {X}}\),

$$\begin{aligned} \sum _{x \in {\mathbb {X}}} {\varvec{\nu }} (x) {\mathbf {P}}^* \left( x,x^* \right) = \sum _{x \in {\mathbb {X}}} {\mathbf {P}} \left( x^*,x \right) {\varvec{\nu }} \left( x^* \right) = {\varvec{\nu }} (x^*), \end{aligned}$$

which proves that \({\varvec{\nu }}\) is also \({\mathbf {P}}^*\)-invariant. Thus Hypothesis M2, \({\varvec{\nu }}(f) = {\varvec{\nu }}(-f)=0\), is satisfied for both chains. Moreover, it is easy to see that for any \(n \geqslant 1\), \((x,x^*) \in {\mathbb {X}}^2\),

$$\begin{aligned} \left( {\mathbf {P}}^* \right) ^n (x,x^*) = {\mathbf {P}}^n (x^*,x) \frac{{\varvec{\nu }}(x^*)}{{\varvec{\nu }}(x)}. \end{aligned}$$

This shows that \({\mathbf {P}}^*\) satisfies M1 and M3. \(\square \)

Note that the operator \({\mathbf {P}}^*\) is the adjoint operator of \({\mathbf {P}}\) in the space \(L^2 \left( {\varvec{\nu }} \right) :\) for any functions g and h on \({\mathbb {X}},\)

$$\begin{aligned} {\varvec{\nu }} \left( g \left( {\mathbf {P}}^*\right) ^n h \right) = {\varvec{\nu }} \left( h {\mathbf {P}}^n g \right) . \end{aligned}$$

In particular, for any \(n\geqslant 1\), \({\varvec{\nu }} \left( f \left( {\mathbf {P}}^*\right) ^n f \right) = {\varvec{\nu }} \left( f {\mathbf {P}}^n f \right) \) and we note that

$$\begin{aligned} \sigma ^2 = {\varvec{\nu }} \left( (-f)^2 \right) + 2\sum _{n=1}^{+\infty } {\varvec{\nu }} \left( (-f) \left( {\mathbf {P}}^* \right) ^n (-f) \right) . \end{aligned}$$

The following assertion plays a key role in the proofs.

Lemma 3.2

(Duality) For any probability measure \({\mathfrak {m}}\) on \({\mathbb {X}}\), any \(n\geqslant 1\) and any function F from \({\mathbb {X}}^n\) to \({\mathbb {R}}\),

$$\begin{aligned} {\mathbb {E}}_{{\mathfrak {m}}} \left( F \left( X_1, \dots , X_{n-1}, X_n \right) \right) = {\mathbb {E}}_{{\varvec{\nu }}}^* \left( F \left( X_n^*, X_{n-1}^*, \dots , X_1^* \right) \frac{{\mathfrak {m}} \left( X_{n+1}^* \right) }{{\varvec{\nu }} \left( X_{n+1}^* \right) } \right) . \end{aligned}$$

Proof

We write

$$\begin{aligned} {\mathbb {E}}_{{\mathfrak {m}}}&\left( F \left( X_1, \dots , X_{n-1}, X_n \right) \right) \\&= \sum _{x_0,x_1, \dots , x_{n-1}, x_n, x_{n+1} \in {\mathbb {X}}} F \left( x_1, \dots , x_{n-1}, x_n \right) {\mathfrak {m}}(x_0) \\&\quad {\mathbb {P}}_{x_0} \left( X_1 = x_1, X_2 = x_2, \dots , X_{n-1} = x_{n-1}, X_n = x_n, X_{n+1} = x_{n+1} \right) . \end{aligned}$$

By the definition of \({\mathbf {P}}^*\), we have

$$\begin{aligned}&{\mathbb {P}}_{x_0} \left( X_1 = x_1, X_2 = x_2, \dots , X_{n-1} = x_{n-1}, X_n = x_n, X_{n+1} = x_{n+1} \right) \\&\quad = {\mathbf {P}}(x_0,x_1) {\mathbf {P}}(x_1,x_2) \dots {\mathbf {P}}(x_{n-1},x_n) {\mathbf {P}}(x_n,x_{n+1}) \\&\quad = {\mathbf {P}}^*(x_1,x_0) \frac{{\varvec{\nu }} (x_1)}{{\varvec{\nu }} (x_0)} {\mathbf {P}}^*(x_2,x_1) \frac{{\varvec{\nu }} (x_2)}{{\varvec{\nu }} (x_1)} \dots {\mathbf {P}}^*(x_n,x_{n-1}) \frac{{\varvec{\nu }} (x_n)}{{\varvec{\nu }} (x_{n-1})} {\mathbf {P}}^*(x_{n+1},x_n)\frac{{\varvec{\nu }} (x_{n+1})}{{\varvec{\nu }} (x_n)} \\&\quad = \frac{{\varvec{\nu }} (x_{n+1})}{{\varvec{\nu }} (x_0)} {\mathbb {P}}^*_{x_{n+1}} \left( X_1^* = x_n, X_2^* = x_{n-1}, \dots , X_n^* = x_1, X_{n+1}^* = x_0 \right) \end{aligned}$$

and the result of the lemma follows. \(\square \)
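Since the state space is finite, the identity of Lemma 3.2 can also be verified exactly by exhaustive summation. The sketch below (a toy two-state chain; the kernel, the initial law \({\mathfrak {m}}\) and the test function F are all arbitrary choices of ours) checks the case \(n=2\).

```python
from itertools import product

P = [[0.7, 0.3], [0.2, 0.8]]            # hypothetical kernel
nu = [0.4, 0.6]                          # its invariant probability
Pstar = [[nu[xs] * P[xs][x] / nu[x] for xs in range(2)] for x in range(2)]

m = [0.25, 0.75]                         # an arbitrary initial law
F = lambda x1, x2: 1.0 + 2.0 * x1 + 5.0 * x2   # an arbitrary test function

# Left-hand side: E_m[ F(X_1, X_2) ].
lhs = sum(m[x0] * P[x0][x1] * P[x1][x2] * F(x1, x2)
          for x0, x1, x2 in product(range(2), repeat=3))

# Right-hand side: E*_nu[ F(X_2*, X_1*) m(X_3*) / nu(X_3*) ].
rhs = sum(nu[y0] * Pstar[y0][y1] * Pstar[y1][y2] * Pstar[y2][y3]
          * F(y2, y1) * m[y3] / nu[y3]
          for y0, y1, y2, y3 in product(range(2), repeat=4))
```

Telescoping the factors \({\varvec{\nu }}(y_k)\) as in the proof shows that the two sums agree term by term.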

4 The perturbed operator

For any \(t \in {\mathbb {R}}\), denote by \({\mathbf {P}}_t\) the perturbed transition operator defined by

$$\begin{aligned} {\mathbf {P}}_tg(x) = {\mathbf {P}} \left( e^{{\mathbf {i}}tf} g \right) (x) = {\mathbb {E}}_x \left( e^{{\mathbf {i}}tf(X_1)} g(X_1) \right) , \quad \text {for any } g \in {\mathscr {C}},\; x \in {\mathbb {X}}, \end{aligned}$$

where \({\mathbf {i}}\) is the imaginary unit, \({\mathbf {i}}^2 = -1\). Let also \(r_t\) be the spectral radius of \({\mathbf {P}}_t\). Note that for any \(g \in {\mathscr {C}}\), \(\left\| {\mathbf {P}}_t g\right\| _{\infty } \leqslant \left\| e^{{\mathbf {i}}tf} g\right\| _{\infty } = \left\| g\right\| _{\infty }\) and so

$$\begin{aligned} r_t \leqslant 1. \end{aligned}$$
(4.1)
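As a numerical illustration of (4.1), the following sketch (a toy two-state example of our own; the kernel and f are hypothetical) computes the spectral radius of the matrix \(({\mathbf {P}}_t)_{x,x'} = {\mathbf {P}}(x,x') e^{{\mathbf {i}}tf(x')}\). Since f takes only the two values 3 and \(-2\), which are congruent mod 5, this toy example is lattice (point 1 of Lemma 4.1 below holds with \(\theta =3\), \(a=5\)): one finds \(r_0=1\), \(r_t<1\) for generic t, and \(r_t=1\) again at \(t=2\pi /5\).

```python
import cmath

P = [[0.7, 0.3], [0.2, 0.8]]     # hypothetical kernel
f = [3.0, -2.0]                   # values congruent mod 5: the lattice case

def spectral_radius_Pt(t):
    """Spectral radius of the 2x2 matrix (P_t)_{x,x'} = P(x,x') exp(i t f(x'))."""
    a = P[0][0] * cmath.exp(1j * t * f[0])
    b = P[0][1] * cmath.exp(1j * t * f[1])
    c = P[1][0] * cmath.exp(1j * t * f[0])
    d = P[1][1] * cmath.exp(1j * t * f[1])
    # Eigenvalues of a 2x2 matrix via the quadratic formula.
    disc = cmath.sqrt((a - d) ** 2 + 4 * b * c)
    return max(abs((a + d + disc) / 2), abs((a + d - disc) / 2))

r0 = spectral_radius_Pt(0.0)                       # Perron-Frobenius eigenvalue: r_0 = 1
r_generic = spectral_radius_Pt(0.3)                # < 1 for t outside (2*pi/5) * Z
r_lattice = spectral_radius_Pt(2 * cmath.pi / 5)   # back to 1: the lattice case
```

A non-lattice f (which requires at least three states, since any two-valued f is arithmetic in the above sense) would instead give \(r_t<1\) for all \(t\ne 0\).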

We introduce the following two definitions:

  • A sequence \(x_0, x_1, \dots , x_n \in {\mathbb {X}}\) is a path (between \(x_0\) and \(x_n\)) if

    $$\begin{aligned} {\mathbf {P}}(x_0,x_1) \cdots {\mathbf {P}}(x_{n-1},x_n) > 0. \end{aligned}$$
  • A sequence \(x_0, x_1, \dots , x_n \in {\mathbb {X}}\) is an orbit if \(x_0, x_1, \dots , x_n, x_0\) is a path.

Note that under Hypothesis M1, for any \(x_0, x\in {\mathbb {X}}\) it is always possible to connect \(x_0\) and \(x\) by a path \(x_0, x_1, \dots , x_n, x\) in \({\mathbb {X}}\).
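These notions are straightforward to test on a transition matrix. The following sketch uses a hypothetical three-state kernel in which the direct step \(0 \rightarrow 2\) is forbidden but \(0\) and \(2\) remain connected by a path, as guaranteed under Hypothesis M1:

```python
import numpy as np

# Hypothetical 3-state kernel (not from the paper); note P[0, 2] = 0
P = np.array([[0.5, 0.5, 0.0],
              [0.3, 0.3, 0.4],
              [0.2, 0.4, 0.4]])

def is_path(states):
    # x_0, ..., x_n is a path iff P(x_0,x_1) ... P(x_{n-1},x_n) > 0
    return all(P[a, b] > 0 for a, b in zip(states, states[1:]))

def is_orbit(states):
    # x_0, ..., x_n is an orbit iff x_0, ..., x_n, x_0 is a path
    return is_path(list(states) + [states[0]])

assert not is_path([0, 2])      # the direct transition has probability 0
assert is_path([0, 1, 2])       # but a path connecting 0 and 2 exists
assert is_orbit([0, 1, 2])      # and it closes into an orbit since P(2,0) > 0
```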

Lemma 4.1

Assume Hypothesis M1. The following statements are equivalent:

  1.

    There exists \((\theta ,a) \in {\mathbb {R}}^2\) such that for any orbit \(x_0, \dots , x_n\) in \({\mathbb {X}}\), we have

    $$\begin{aligned} f(x_0) + \cdots + f(x_n) - (n+1)\theta \in a{\mathbb {Z}}. \end{aligned}$$
  2.

    There exist \(t\in {\mathbb {R}}^*\), \(h\in {\mathscr {C}}{\setminus } \{0\}\) and \(\theta \in {\mathbb {R}}\) such that for any \((x,x') \in {\mathbb {X}}^2\),

    $$\begin{aligned} h(x')e^{{\mathbf {i}}tf(x')}{\mathbf {P}}(x,x') = h(x) e^{{\mathbf {i}}t\theta } {\mathbf {P}}(x,x'). \end{aligned}$$
  3.

    There exists \(t \in {\mathbb {R}}^*\) such that

    $$\begin{aligned} r_t = 1. \end{aligned}$$

Proof

The point 1 implies the point 2. Suppose that the point 1 holds. Fix \(x_0 \in {\mathbb {X}}\) and set \(h(x_0) = 1\). For any \(x \in {\mathbb {X}}\), define \(h(x)\) in the following way: for any path \(x_0, \dots , x_n, x\) in \({\mathbb {X}}\) we set

$$\begin{aligned} h(x) = e^{{\mathbf {i}}t\theta (n+1)} e^{-{\mathbf {i}} t \left( f(x_1) + \cdots + f(x_n) + f(x) \right) }, \end{aligned}$$

where \(t = \frac{2\pi }{a}\). Note that if \(a=0\), then the point 1 also holds with \(a=1\) and so, without loss of generality, we assume that \(a\ne 0\). We first verify that \(h\) is well defined on \({\mathbb {X}}\). Recall that under Hypothesis M1, for any \(x\in {\mathbb {X}}\) it is always possible to connect \(x_0\) and \(x\) by a path. We have to check that the value of \(h(x)\) does not depend on the choice of the path. Let \(p,q \geqslant 1\) and \(x_0,x_1, \dots , x_p, x\) in \({\mathbb {X}}\) and \(x_0,y_1, \dots , y_q, x\) in \({\mathbb {X}}\) be two paths between \(x_0\) and \(x\). We complete these paths to orbits as follows. Under Hypothesis M1, there exist \(n \geqslant 1\) and \(z_1, \dots , z_n\) in \({\mathbb {X}}\) such that

$$\begin{aligned} {\mathbf {P}}(x,z_1) \cdots {\mathbf {P}}(z_n,x_0) > 0, \end{aligned}$$

i.e. the sequence \(x, z_1, \dots , z_n, x_0\) is a path. So, the sequences \(x_0,x_1,\dots ,x_p,x,z_1,\dots ,z_n\) and \(x_0,y_1,\dots ,y_q,x,z_1,\dots ,z_n\) are orbits. By the point 1, there exist \(l_1,l_2 \in {\mathbb {Z}}\) such that

$$\begin{aligned}&f(x_1) + \cdots + f(x_p) + f(x) \\&\quad = al_1 - \left( f(z_1) + \cdots + f(z_n) + f(x_0) \right) + (p+n+2)\theta \\&\quad = al_1 - al_2 + \left( f(y_1) + \cdots + f(y_q) + f(x) \right) \\&\quad \quad - (q+n+2)\theta + (p+n+2)\theta . \end{aligned}$$

Therefore,

$$\begin{aligned} e^{{\mathbf {i}}t\theta (p+1)} e^{-{\mathbf {i}} t \left( f(x_1) + \cdots + f(x_p) + f(x) \right) } = e^{-{\mathbf {i}} t \left( al_1 - al_2 \right) } e^{{\mathbf {i}}t\theta (q+1)} e^{-{\mathbf {i}} t \left( f(y_1) + \cdots + f(y_q) + f(x) \right) } \end{aligned}$$

and since \(ta=2\pi \), this proves that \(h\) is well defined. Now let \((x,x') \in {\mathbb {X}}^2\) be such that \({\mathbf {P}}(x,x') > 0\). There exists a path \(x_0, x_1, \dots , x_n, x\) between \(x_0\) and \(x\), and so

$$\begin{aligned} h(x) = e^{{\mathbf {i}}t\theta (n+1)} e^{-{\mathbf {i}} t \left( f(x_1) + \cdots + f(x_n) + f(x) \right) }. \end{aligned}$$

Since \(x_0,x_1, \dots , x_n,x,x'\) is a path between \(x_0\) and \(x'\), we have also

$$\begin{aligned} h(x') = e^{{\mathbf {i}}t\theta (n+2)} e^{-{\mathbf {i}} t \left( f(x_1) + \cdots + f(x_n) + f(x)+f(x') \right) } = h(x) e^{{\mathbf {i}}t\theta } e^{-{\mathbf {i}} t f(x')}. \end{aligned}$$

Note that, since the modulus of \(h\) is 1, this function belongs to \({\mathscr {C}} {\setminus }\{0\}\).

The point 2 implies the point 1. Suppose that the point 2 holds and let \(x_0, \dots , x_n\) be an orbit. Using the point 2 repeatedly, we have

$$\begin{aligned} h(x_0)&= h(x_1) e^{{\mathbf {i}}t\theta } e^{-{\mathbf {i}} t f(x_0)} = \cdots \\&= h(x_n) e^{{\mathbf {i}}t\theta n} e^{-{\mathbf {i}} t \left( f(x_0)+\cdots +f(x_{n-1}) \right) } = h(x_0) e^{{\mathbf {i}}t\theta (n+1)} e^{-{\mathbf {i}} t \left( f(x_0)+\cdots +f(x_n) \right) }. \end{aligned}$$

Since \(h\) is not identically zero and has constant modulus, it never vanishes, and so \(f(x_0)+\cdots +f(x_n) -(n+1)\theta \in \frac{2\pi }{t} {\mathbb {Z}}\).

The point 2 implies the point 3. Suppose that the point 2 holds. Summing over \(x'\), we have, for any \(x \in {\mathbb {X}}\),

$$\begin{aligned} {\mathbf {P}} \left( h e^{itf} \right) (x) = {\mathbf {P}}_t h(x) = h(x)e^{{\mathbf {i}} t \theta }. \end{aligned}$$

Therefore \(h\) is an eigenvector of \({\mathbf {P}}_t\) associated to the eigenvalue \(e^{{\mathbf {i}} t \theta }\), which implies that \(r_t \geqslant \left|e^{{\mathbf {i}} t \theta }\right| = 1\) and, by (4.1), \(r_t = 1\).

The point 3 implies the point 2. Suppose that the point 3 holds. There exist \(h \in {\mathscr {C}} {\setminus } \{0\}\) and \(\theta \in {\mathbb {R}}\) such that \({\mathbf {P}}_t h = he^{{\mathbf {i}} t \theta }\). Without loss of generality, we suppose that \(\left\| h\right\| _{\infty } = 1\). Since \({\mathbf {P}}_t^n h = he^{{\mathbf {i}} t n \theta }\) for any \(n \geqslant 1\), by (2.1), for any \(x \in {\mathbb {X}}\), we have

$$\begin{aligned} \left|h(x)\right| = \left|{\mathbf {P}}_t^n h(x)\right| \leqslant {\mathbf {P}}^n \left|h\right|(x) \underset{n\rightarrow +\infty }{\longrightarrow } {\varvec{\nu }} \left( \left|h\right| \right) . \end{aligned}$$
(4.2)

From (4.2), letting \(x_0 \in {\mathbb {X}}\) be such that \(\left|h(x_0)\right| = \left\| h\right\| _{\infty } = 1\), it is easy to see that

$$\begin{aligned} \left|h(x_0)\right| \leqslant \sum _{x \in {\mathbb {X}}} \left|h(x)\right| {\varvec{\nu }} (x) \leqslant \left|h(x_0)\right|. \end{aligned}$$

From this it follows that the modulus of \(h\) is constant on \({\mathbb {X}}\): \(\left|h(x)\right| = \left|h(x_0)\right| = 1\) for any \(x \in {\mathbb {X}}\). Consequently, there exists \(\alpha :{\mathbb {X}} \rightarrow {\mathbb {R}}\) such that for any \(x \in {\mathbb {X}}\),

$$\begin{aligned} h(x) = e^{{\mathbf {i}} \alpha (x)}. \end{aligned}$$
(4.3)

With (4.3) the equation \({\mathbf {P}}_t h = he^{{\mathbf {i}} t \theta }\) can be rewritten as

$$\begin{aligned} \forall x \in {\mathbb {X}}, \qquad \sum _{x'\in {\mathbb {X}}} e^{{\mathbf {i}} \alpha (x')} e^{{\mathbf {i}} t f(x')} {\mathbf {P}}(x,x') = e^{{\mathbf {i}} \alpha (x)} e^{{\mathbf {i}} t \theta }. \end{aligned}$$

Since \(e^{{\mathbf {i}} \alpha (x)} e^{{\mathbf {i}} t \theta } \in \left\{ z \in {\mathbb {C}} : \left|z\right|=1 \right\} \) and \(e^{{\mathbf {i}} \alpha (x')} e^{{\mathbf {i}} t f(x')} \in \left\{ z \in {\mathbb {C}} : \left|z\right|=1 \right\} \), for any \(x' \in {\mathbb {X}}\), the left hand side is a convex combination (with weights \({\mathbf {P}}(x,x')\)) of points of the unit circle which is itself on the unit circle; this is possible only if all the terms charged by \({\mathbf {P}}(x,\cdot )\) coincide. Therefore \(h(x')e^{{\mathbf {i}} t f(x')} = e^{{\mathbf {i}} \alpha (x')} e^{{\mathbf {i}} t f(x')} = e^{{\mathbf {i}} \alpha (x)} e^{{\mathbf {i}} t \theta } = h(x) e^{{\mathbf {i}} t \theta }\) for any \(x' \in {\mathbb {X}}\) such that \({\mathbf {P}}(x,x') > 0\). \(\square \)
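The dichotomy of Lemma 4.1 can be observed numerically (an illustrative sketch, with hypothetical choices of \({\mathbf {P}}\) and \(f\)): for an integer valued \(f\), the point 1 holds with \(\theta = 0\) and \(a = 1\), and indeed \(r_{2\pi } = 1\); for a non-lattice \(f\), the spectral radius stays strictly below 1 for \(t \ne 0\):

```python
import numpy as np

P = np.array([[0.2, 0.5, 0.3],
              [0.4, 0.1, 0.5],
              [0.3, 0.3, 0.4]])

def r(t, f):
    # spectral radius of P_t = P with columns multiplied by e^{i t f(x')}
    return np.max(np.abs(np.linalg.eigvals(P * np.exp(1j * t * f)[np.newaxis, :])))

# lattice case: f integer valued, so the point 1 holds with theta = 0, a = 1,
# and the point 3 then gives r_t = 1 at t = 2*pi/a (here P_{2pi} = P exactly)
f_lat = np.array([1.0, -2.0, 3.0])
assert abs(r(2.0 * np.pi, f_lat) - 1.0) < 1e-8

# non-lattice case (illustrative): the lattice condition fails and the
# spectral radius drops strictly below 1 for t != 0
f_non = np.array([1.0, np.sqrt(2.0), 0.3])
assert r(2.0 * np.pi, f_non) < 0.99
```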

Define the operator norm \(\left\| \cdot \right\| _{{\mathscr {C}} \rightarrow {\mathscr {C}}}\) on \({\mathscr {C}}\) as follows: for any operator \(R :{\mathscr {C}} \rightarrow {\mathscr {C}}\), set

$$\begin{aligned} \left\| R\right\| _{{\mathscr {C}} \rightarrow {\mathscr {C}}} := \sup _{g \in {\mathscr {C}} {\setminus } \{0\}} \frac{\left\| R(g)\right\| _{\infty }}{\left\| g\right\| _{\infty }}. \end{aligned}$$

Lemma 4.2

Assume Hypotheses M1 and M3. For any compact set K included in \({\mathbb {R}}^*\) there exist constants \(c_K > 0\) and \(c_K' >0\) such that for any \(n \geqslant 1\),

$$\begin{aligned} \sup _{t\in K} \left\| {\mathbf {P}}_t^n\right\| _{{\mathscr {C}} \rightarrow {\mathscr {C}}} \leqslant c_K e^{-c_K'n}. \end{aligned}$$

Proof

By Lemma 4.1, under Hypotheses M1 and M3, we have \(r_t \ne 1\) for any \(t\ne 0\) and hence, using (4.1),

$$\begin{aligned} r_t < 1, \qquad \forall t \in {\mathbb {R}}^*. \end{aligned}$$

It is well known (Gelfand’s formula) that

$$\begin{aligned} r_t = \lim _{n\rightarrow +\infty } \left\| {\mathbf {P}}_t^n\right\| _{{\mathscr {C}} \rightarrow {\mathscr {C}}}^{1/n}. \end{aligned}$$

Since \(t \mapsto {\mathbf {P}}_t\) is continuous, each function \(t \mapsto \left\| {\mathbf {P}}_t^n\right\| _{{\mathscr {C}} \rightarrow {\mathscr {C}}}^{1/n}\) is continuous, and the function \(t \mapsto r_t\), being the infimum of this sequence, is upper semi-continuous. In particular, for any compact set \(K\) included in \({\mathbb {R}}^*\), there exists \(t_0 \in K\) such that

$$\begin{aligned} \sup _{t\in K} r_t = r_{t_0} < 1. \end{aligned}$$

We deduce that for \(\varepsilon = (1- \sup _{t\in K} r_t)/2 >0\) there exists \(n_0 \geqslant 1\) such that for any \(n \geqslant n_0\),

$$\begin{aligned} \left\| {\mathbf {P}}_t^n\right\| _{{\mathscr {C}} \rightarrow {\mathscr {C}}}^{1/n} \leqslant \sup _{t\in K} r_t + \varepsilon < 1. \end{aligned}$$

Choosing \(c_K' = -\ln \left( \sup _{t\in K} r_t + \varepsilon \right) \) and \(c_K = \max _{n\leqslant n_0} \sup _{t\in K} \left\| {\mathbf {P}}_t^n\right\| _{{\mathscr {C}} \rightarrow {\mathscr {C}}} e^{c_K'n} +1\), the lemma is proved. \(\square \)
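A numerical sketch of Lemma 4.2 (hypothetical chain, non-lattice \(f\)): for the sup norm on functions, the operator norm of \({\mathbf {P}}_t^n\) is the maximal absolute row sum of the corresponding matrix power, and its supremum over a grid of a compact set \(K \subset {\mathbb {R}}^*\) decays geometrically in \(n\), by submultiplicativity, once it falls below 1:

```python
import numpy as np

P = np.array([[0.2, 0.5, 0.3],
              [0.4, 0.1, 0.5],
              [0.3, 0.3, 0.4]])
f = np.array([1.0, np.sqrt(2.0), 0.3])      # non-lattice (illustrative)

def op_norm(M):
    # operator norm for the sup norm on functions = max absolute row sum
    return np.max(np.sum(np.abs(M), axis=1))

def sup_norm_over_K(n, K):
    Pt = lambda t: P * np.exp(1j * t * f)[np.newaxis, :]
    return max(op_norm(np.linalg.matrix_power(Pt(t), n)) for t in K)

K = np.linspace(2.0, 4.0, 21)               # a grid of a compact set away from 0
s20 = sup_norm_over_K(20, K)
s60 = sup_norm_over_K(60, K)

assert s20 < 1.0                            # the sup falls below 1
assert s60 < s20 ** 3 + 1e-12               # submultiplicativity: s60 <= s20^3
```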

In the proofs we make use of the following assertion which is a consequence of the perturbation theory of linear operators (see for example [23]). The point 5 is proved in Lemma 2 of Guivarc’h and Hardy [18].

Proposition 4.3

Assume Hypotheses M1 and M2. There exist a real \(\varepsilon _0>0\), a complex valued function \(t \mapsto \lambda _t\) and operator valued functions \(t \mapsto \varPi _t\) and \(t \mapsto Q_t\), defined on \([-\varepsilon _0,\varepsilon _0]\) and taking values in the set of operators on \({\mathscr {C}} \), such that

  1.

    the maps \(t \mapsto \varPi _t\), \(t \mapsto Q_t\) and \(t \mapsto \lambda _t\) are analytic at 0,

  2.

    the operator \({\mathbf {P}}_t\) has the following decomposition,

    $$\begin{aligned} {\mathbf {P}}_t = \lambda _t \varPi _t +Q_t, \qquad \forall t \in [-\varepsilon _0,\varepsilon _0], \end{aligned}$$
  3.

    for any \(t\in [-\varepsilon _0,\varepsilon _0]\), \(\varPi _t\) is a one-dimensional projector and \(\varPi _t Q_t = Q_t \varPi _t = 0\),

  4.

    there exist \(c_1>0\) and \(c_2>0\) such that, for any \(n\in {\mathbb {N}}^*\),

    $$\begin{aligned} \sup _{t\in [-\varepsilon _0,\varepsilon _0]} \left\| Q_t^n\right\| _{{\mathscr {C}} \rightarrow {\mathscr {C}}} \leqslant c_1 e^{-c_2 n}, \end{aligned}$$
  5.

    the function \(\lambda _t\) has the following expansion at 0: for any \(t \in [-\varepsilon _0,\varepsilon _0]\),

    $$\begin{aligned} \left|\lambda _t - 1 + \frac{t^2 \sigma ^2}{2}\right| \leqslant c \left|t\right|^3. \end{aligned}$$

Note that \(\lambda _0=1\) and \(\varPi _0(\cdot ) = \varPi (\cdot ) = {\varvec{\nu }} (\cdot ) e\), where \(e\) is the unit function on \({\mathbb {X}}\): \(e(x) = 1\) for any \(x\in {\mathbb {X}}\).
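The expansion of \(\lambda _t\) in the point 5 can be checked numerically. In the sketch below (hypothetical chain and function), \(f\) is centred so that \({\varvec{\nu }}(f)=0\) (which we take here as the centring assumption), and \(\sigma ^2\) is computed through the standard covariance series for Markov chains, \(\sigma ^2 = {\varvec{\nu }}(f^2) + 2\sum _{k\geqslant 1}{\varvec{\nu }}(f\,{\mathbf {P}}^kf)\); the second derivative \(-\lambda _0''\) then matches \(\sigma ^2\):

```python
import numpy as np

P = np.array([[0.2, 0.5, 0.3],
              [0.4, 0.1, 0.5],
              [0.3, 0.3, 0.4]])
f = np.array([1.0, -0.7, 0.3])

# stationary law nu and the centred function fc, so that nu(fc) = 0
w, V = np.linalg.eig(P.T)
nu = np.real(V[:, np.argmax(np.real(w))])
nu /= nu.sum()
fc = f - nu @ f

def lam(t):
    # leading eigenvalue lambda_t of P_t (the isolated perturbation of 1)
    ev = np.linalg.eigvals(P * np.exp(1j * t * fc)[np.newaxis, :])
    return ev[np.argmax(np.abs(ev))]

# asymptotic variance via the covariance series (standard Markov CLT formula)
sigma2 = nu @ (fc * fc)
Pk = np.eye(3)
for _ in range(200):                 # the series converges geometrically
    Pk = Pk @ P
    sigma2 += 2.0 * (nu @ (fc * (Pk @ fc)))

assert abs(lam(0.0) - 1.0) < 1e-8    # lambda_0 = 1
h = 1e-2                             # second-order Taylor check at 0:
assert abs(2.0 * (1.0 - lam(h).real) / h**2 - sigma2) < 1e-2
```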

Lemma 4.4

Assume Hypotheses M1 and M2. There exists \(\varepsilon _0 > 0\) such that for any \(n \geqslant 1\) and \(t \in [-\varepsilon _0 \sqrt{n}, \varepsilon _0\sqrt{n}]\),

$$\begin{aligned} \left\| {\mathbf {P}}_{\frac{t}{\sqrt{n}}}^n - e^{-\frac{t^2 \sigma ^2}{2}} \varPi \right\| _{{\mathscr {C}} \rightarrow {\mathscr {C}}} \leqslant \frac{c}{\sqrt{n}}e^{-\frac{t^2 \sigma ^2}{4}} + c e^{-c n}. \end{aligned}$$

Proof

By the points 2 and 3 of Proposition 4.3, for any \(t/\sqrt{n} \in [-\varepsilon _0,\varepsilon _0]\),

$$\begin{aligned} {\mathbf {P}}_{\frac{t}{\sqrt{n}}}^n = \lambda _{\frac{t}{\sqrt{n}}}^n \varPi _{\frac{t}{\sqrt{n}}} + Q_{\frac{t}{\sqrt{n}}}^n. \end{aligned}$$

By the points 1 and 4 of Proposition 4.3, for \(n\geqslant 1\),

$$\begin{aligned} \left\| \varPi _{\frac{t}{\sqrt{n}}} - \varPi \right\| _{{\mathscr {C}} \rightarrow {\mathscr {C}}}&\leqslant \sup _{u\in [-\varepsilon _0,\varepsilon _0]} \left\| \varPi _{u}'\right\| _{{\mathscr {C}} \rightarrow {\mathscr {C}}} \frac{\left|t\right|}{\sqrt{n}} \leqslant c\frac{\left|t\right|}{\sqrt{n}}, \end{aligned}$$
(4.4)
$$\begin{aligned} \sup _{t/\sqrt{n}\in [-\varepsilon _0,\varepsilon _0]} \left\| Q_{\frac{t}{\sqrt{n}}}^n\right\| _{{\mathscr {C}} \rightarrow {\mathscr {C}}}&\leqslant c e^{-c n}. \end{aligned}$$
(4.5)

Let \(\alpha \) be the complex valued function defined on \([-\varepsilon _0,\varepsilon _0]\) by \(\alpha (t) = \frac{1}{t^3} \left( \lambda _t - 1 + \frac{t^2 \sigma ^2}{2} \right) \) for any \(t \in [-\varepsilon _0,\varepsilon _0] {\setminus } \{0\}\) and \(\alpha (0) = 0\). By the point 5 of Proposition 4.3, there exists \(c >0\) such that

$$\begin{aligned} \forall t \in [-\varepsilon _0,\varepsilon _0], \qquad \left|\alpha (t)\right| \leqslant c. \end{aligned}$$
(4.6)

With this notation, we have for any \(t/\sqrt{n} \in [-\varepsilon _0,\varepsilon _0]\),

$$\begin{aligned} \left|\lambda _{\frac{t}{\sqrt{n}}}^n - e^{-\frac{t^2 \sigma ^2}{2}}\right|&\leqslant \underbrace{\left|\left( 1 - \frac{t^2 \sigma ^2}{2n} + \frac{t^3}{n^{3/2}} \alpha \left( \frac{t}{\sqrt{n}} \right) \right) ^n - \left( 1-\frac{t^2 \sigma ^2}{2n} \right) ^n\right|}_{=: I_1} \nonumber \\&\quad + \underbrace{\left|\left( 1-\frac{t^2 \sigma ^2}{2n} \right) ^n - e^{-\frac{t^2 \sigma ^2}{2}}\right|}_{=:I_2}. \end{aligned}$$
(4.7)

Without loss of generality, the value of \(\varepsilon _0> 0\) can be chosen such that \(\varepsilon _0^2 \sigma ^2 \leqslant 1\) and so for any \(t/\sqrt{n} \in [-\varepsilon _0,\varepsilon _0]\), we have \(1-\frac{t^2 \sigma ^2}{2n} \geqslant 1/2\). Therefore,

$$\begin{aligned} I_1&\leqslant \left( 1-\frac{t^2 \sigma ^2}{2n} \right) ^n \left|\left( 1 + \frac{t^3}{n^{3/2}\left( 1-\frac{t^2 \sigma ^2}{2n} \right) } \alpha \left( \frac{t}{\sqrt{n}} \right) \right) ^n - 1\right| \\&\leqslant \left( 1-\frac{t^2 \sigma ^2}{2n} \right) ^n \sum _{k=1}^n \begin{pmatrix} n \\ k \end{pmatrix} \left|\frac{t^3}{n^{3/2}\left( 1-\frac{t^2 \sigma ^2}{2n} \right) } \alpha \left( \frac{t}{\sqrt{n}} \right) \right|^k \\&= \left( 1-\frac{t^2 \sigma ^2}{2n} \right) ^n \left[ \left( 1 + \frac{\left|t\right|^3}{n^{3/2}\left( 1-\frac{t^2 \sigma ^2}{2n} \right) } \left|\alpha \left( \frac{t}{\sqrt{n}} \right) \right| \right) ^n - 1 \right] . \end{aligned}$$

Using the inequality \(1+u \leqslant e^{u}\) for \(u \in {\mathbb {R}}\), the fact that \(1-\frac{t^2 \sigma ^2}{2n} \geqslant 1/2\) and the bound (4.6), we have

$$\begin{aligned} I_1 \leqslant e^{-\frac{t^2 \sigma ^2}{2}} \left( e^{\frac{c\left|t\right|^3}{\sqrt{n}}} - 1 \right) . \end{aligned}$$

Next, using the inequality \(e^{u}-1 \leqslant u e^{u}\) for \(u \geqslant 0\) and the fact that \(\left|t\right|/\sqrt{n} \leqslant \varepsilon _0\),

$$\begin{aligned} I_1 \leqslant e^{-\frac{t^2 \sigma ^2}{2}} \frac{c}{\sqrt{n}} \left|t\right|^3 e^{c \varepsilon _0 t^2}. \end{aligned}$$
(4.8)

Again, without loss of generality, the value of \(\varepsilon _0> 0\) can be chosen such that \(c \varepsilon _0 \leqslant \sigma ^2/8\) (this has no impact on (4.6), which holds on any interval \([-\varepsilon _0',\varepsilon _0'] \subseteq [-\varepsilon _0,\varepsilon _0]\)). Thus, from (4.8) it follows that

$$\begin{aligned} I_1 \leqslant \frac{c}{\sqrt{n}} e^{-\frac{t^2 \sigma ^2}{4}}. \end{aligned}$$
(4.9)

Using the inequalities \(1-u \leqslant e^{-u}\) for \(u \in {\mathbb {R}}\) and \(\ln (1-u) \geqslant -u-u^2\) for \(0 \leqslant u \leqslant 1/2\) (note that \(\frac{t^2 \sigma ^2}{2n} \leqslant \frac{\varepsilon _0^2 \sigma ^2}{2} \leqslant \frac{1}{2}\)), we have

$$\begin{aligned} I_2= & {} e^{-\frac{t^2 \sigma ^2}{2}} - \left( 1-\frac{t^2 \sigma ^2}{2n} \right) ^n \leqslant e^{-\frac{t^2 \sigma ^2}{2}} - e^{-\frac{t^2 \sigma ^2}{2}-\frac{t^4 \sigma ^4}{4n}} \nonumber \\\leqslant & {} \frac{t^4 \sigma ^4}{4n}e^{-\frac{t^2 \sigma ^2}{2}} \leqslant \frac{c}{\sqrt{n}}e^{-\frac{t^2 \sigma ^2}{4}}. \end{aligned}$$
(4.10)

Putting together (4.7), (4.9) and (4.10), we obtain that, for any \(t/\sqrt{n} \in [-\varepsilon _0,\varepsilon _0]\),

$$\begin{aligned} \left|\lambda _{\frac{t}{\sqrt{n}}}^n - e^{-\frac{t^2 \sigma ^2}{2}}\right| \leqslant \frac{c}{\sqrt{n}}e^{-\frac{t^2 \sigma ^2}{4}}. \end{aligned}$$
(4.11)

In the same way, one can prove that

$$\begin{aligned} \left|t\right|\left|\lambda _{\frac{t}{\sqrt{n}}}^n\right| \leqslant c e^{-\frac{t^2 \sigma ^2}{4}}. \end{aligned}$$
(4.12)

The left hand side in the assertion of the lemma can be bounded as follows:

$$\begin{aligned} \left\| {\mathbf {P}}_{\frac{t}{\sqrt{n}}}^n - e^{-\frac{t^2 \sigma ^2}{2}} \varPi \right\| _{{\mathscr {C}} \rightarrow {\mathscr {C}}}\leqslant & {} \left|\lambda _{\frac{t}{\sqrt{n}}}^n\right| \left\| \varPi _{\frac{t}{\sqrt{n}}} - \varPi \right\| _{{\mathscr {C}} \rightarrow {\mathscr {C}}} \\&+ \left|\lambda _{\frac{t}{\sqrt{n}}}^n - e^{-\frac{t^2 \sigma ^2}{2}}\right| \left\| \varPi \right\| _{{\mathscr {C}} \rightarrow {\mathscr {C}}} + \left\| Q_{\frac{t}{\sqrt{n}}}^n\right\| _{{\mathscr {C}} \rightarrow {\mathscr {C}}}. \end{aligned}$$

Using (4.4), (4.5), (4.11) and (4.12), we obtain that, for any \(t/ \sqrt{n} \in [-\varepsilon _0,\varepsilon _0]\),

$$\begin{aligned} \left\| {\mathbf {P}}_{\frac{t}{\sqrt{n}}}^n - e^{-\frac{t^2 \sigma ^2}{2}} \varPi \right\| _{{\mathscr {C}} \rightarrow {\mathscr {C}}} \leqslant \frac{c}{\sqrt{n}} e^{-\frac{t^2 \sigma ^2}{4}} + c e^{-c n}. \end{aligned}$$

\(\square \)
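On a hypothetical chain, the conclusion of Lemma 4.4 can be tested directly, comparing the matrix power \({\mathbf {P}}_{t/\sqrt{n}}^n\) with \(e^{-t^2\sigma ^2/2}\varPi \), where \(\varPi \) is the rank one matrix whose rows all equal \({\varvec{\nu }}\); the function \(f\) is centred, \(\sigma ^2\) is computed through the covariance series as before, and the tolerance below is a loose illustrative bound, not an optimal constant:

```python
import numpy as np

P = np.array([[0.2, 0.5, 0.3],
              [0.4, 0.1, 0.5],
              [0.3, 0.3, 0.4]])
f = np.array([1.0, -0.7, 0.3])

w, V = np.linalg.eig(P.T)
nu = np.real(V[:, np.argmax(np.real(w))])
nu /= nu.sum()
fc = f - nu @ f                              # centred version of f

sigma2 = nu @ (fc * fc)                      # covariance series for sigma^2
Pk = np.eye(3)
for _ in range(200):
    Pk = Pk @ P
    sigma2 += 2.0 * (nu @ (fc * (Pk @ fc)))

Pi = np.outer(np.ones(3), nu)                # Pi g = nu(g) e: all rows equal nu

def op_norm(M):
    return np.max(np.sum(np.abs(M), axis=1))

t, n = 1.0, 6400
Pt = P * np.exp(1j * (t / np.sqrt(n)) * fc)[np.newaxis, :]
D = op_norm(np.linalg.matrix_power(Pt, n) - np.exp(-t**2 * sigma2 / 2.0) * Pi)
assert D < 0.3                               # loose illustrative tolerance
```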

5 A non-asymptotic local limit theorem

In this section we establish a local limit theorem for the Markov walk jointly with the Markov chain. Our result is similar to that in Grama and Le Page [15], where the case of sums of independent random variables is considered under the Cramér condition. We refer to Guivarc’h and Hardy [18] for a local limit theorem for Markov chains with compact state spaces. In contrast to previous results for Markov chains, our local limit theorem gives an explicit dependence of the constant in the remainder term on the target function \(h\) applied to the random walk \(y+S_n\) (see Lemmata 5.1, 5.4 and Corollary 5.5). All these results are stated for Markov chains with finite state spaces to shorten the exposition, but a closer analysis of the proofs shows that, under appropriate spectral gap assumptions, these assertions can be extended to more general Markov chains, including chains with denumerable state spaces.

We first establish a local limit theorem for integrable functions whose Fourier transforms have compact support. For any integrable function \(h :{\mathbb {R}} \rightarrow {\mathbb {R}}\), denote by \({\widehat{h}}\) its Fourier transform:

$$\begin{aligned} {\widehat{h}}(t) = \int _{{\mathbb {R}}} e^{-itu} h(u) \text {d}u, \quad \forall t \in {\mathbb {R}}. \end{aligned}$$

When \({\widehat{h}}\) is integrable, by the inversion formula,

$$\begin{aligned} h(u) = \frac{1}{2\pi } \int _{{\mathbb {R}}} e^{itu} {\widehat{h}}(t) \text {d}t, \quad \forall u \in {\mathbb {R}}. \end{aligned}$$

For any integrable functions \(h\) and \(g\), let

$$\begin{aligned} h*g(u) = \int _{{\mathbb {R}}} h(v)g(u-v) \text {d}v \end{aligned}$$

be the convolution of \(h\) and \(g\). Denote by \(\varphi _{\sigma }\) the density of the centred normal law with variance \(\sigma ^2\):

$$\begin{aligned} \varphi _\sigma (u) = \frac{1}{\sqrt{2\pi }\sigma }e^{-\frac{u^2}{2\sigma ^2}}, \quad \forall u \in {\mathbb {R}}. \end{aligned}$$
(5.1)

Lemma 5.1

Assume Hypotheses M1–M3. For any \(A > 0\), any integrable function \(h\) on \({\mathbb {R}}\) whose Fourier transform \({\widehat{h}}\) has a compact support included in \([-A,A]\), any real function \(\psi \) defined on \({\mathbb {X}}\) and any \(n \geqslant 1\),

$$\begin{aligned} \underset{y\in {\mathbb {R}}}{\sup } \sqrt{n}&\left|{\mathbb {E}}_x \left( h\left( y+S_n \right) \psi \left( X_n \right) \right) - h*\varphi _{\sqrt{n}\sigma }(y) {\varvec{\nu }} \left( \psi \right) \right| \\&\quad \leqslant \left\| \psi \right\| _{\infty } \left( \frac{c}{\sqrt{n}} \left\| h\right\| _{L^1} + \left\| {\widehat{h}}\right\| _{L^1} c_{A}e^{-c_{A}n} \right) . \end{aligned}$$

Proof

By the inversion formula and the Fubini theorem,

$$\begin{aligned} I_0&:= \sqrt{n}\left|{\mathbb {E}}_x \left( h\left( y+S_n \right) \psi \left( X_n \right) \right) - h*\varphi _{\sqrt{n}\sigma }(y) {\varvec{\nu }} \left( \psi \right) \right|\\&= \frac{\sqrt{n}}{2\pi }\left|{\mathbb {E}}_x \left( \int _{{\mathbb {R}}} e^{it\left( y+S_n \right) } {\widehat{h}}(t) \text {d}t \psi \left( X_n \right) \right) - \int _{{\mathbb {R}}} {\widehat{h}}(t) {\widehat{\varphi }}_{\sqrt{n}\sigma }(t) e^{ity} \text {d}t {\varvec{\nu }} \left( \psi \right) \right|\\&= \frac{\sqrt{n}}{2\pi }\left|\int _{{\mathbb {R}}} e^{ity} \left( {\mathbf {P}}_{t}^n \psi (x) - e^{-\frac{t^2\sigma ^2 n}{2}} {\varvec{\nu }} \left( \psi \right) \right) {\widehat{h}}(t) \text {d}t\right|. \end{aligned}$$

Since \({\widehat{h}}( t ) = 0\) for any \(t \notin [-A,A]\), we write

$$\begin{aligned} I_0 \leqslant \;&\underbrace{\frac{\sqrt{n}}{2\pi }\left|\int _{\varepsilon _0 \leqslant \left|t\right|\leqslant A} e^{ity} \left( {\mathbf {P}}_{t}^n \psi (x) - e^{-\frac{t^2\sigma ^2 n}{2}} {\varvec{\nu }} \left( \psi \right) \right) {\widehat{h}}(t) \text {d}t\right|}_{=:I_1} \nonumber \\&\quad + \underbrace{\frac{\sqrt{n}}{2\pi }\left|\int _{\left|t\right|\leqslant \varepsilon _0} e^{ity} \left( {\mathbf {P}}_{t}^n \psi (x) - e^{-\frac{t^2\sigma ^2 n}{2}} {\varvec{\nu }} \left( \psi \right) \right) {\widehat{h}}(t) \text {d}t\right|}_{=:I_2}, \end{aligned}$$
(5.2)

where \(\varepsilon _0\) is defined by Lemma 4.4.

Bound of \(I_1\). By Lemma 4.2, for any \(\varepsilon _0 \leqslant \left|t\right| \leqslant A\), we have

$$\begin{aligned} \left\| {\mathbf {P}}_{t}^n \psi \right\| _{\infty } \leqslant \left\| \psi \right\| _{\infty } c_{A,\varepsilon _0} e^{-c_{A,\varepsilon _0}n}. \end{aligned}$$

Consequently,

$$\begin{aligned} I_1&\leqslant \frac{\sqrt{n}}{2\pi } \left( \left\| \psi \right\| _{\infty } c_{A,\varepsilon _0} e^{-c_{A,\varepsilon _0}n} + e^{-\frac{\varepsilon _0^2 \sigma ^2 n}{2}} \left|{\varvec{\nu }}(\psi )\right| \right) \left\| {\widehat{h}}\right\| _{L^1} \nonumber \\&\leqslant \left\| \psi \right\| _{\infty } \left\| {\widehat{h}}\right\| _{L^1} c_{A,\varepsilon _0}e^{-c_{A,\varepsilon _0}n}. \end{aligned}$$
(5.3)

Bound of \(I_2\). Substituting \(s=t\sqrt{n}\), we write

$$\begin{aligned} I_2&= \frac{1}{2\pi }\left|\int _{\left|s\right|\leqslant \varepsilon _0 \sqrt{n}} e^{i\frac{sy}{\sqrt{n}}} \left( {\mathbf {P}}_{\frac{s}{\sqrt{n}}}^n \psi (x) - e^{-\frac{s^2\sigma ^2}{2}} {\varvec{\nu }} \left( \psi \right) \right) {\widehat{h}} \left( \frac{s}{\sqrt{n}} \right) \text {d}s\right| \\&\leqslant \frac{1}{2\pi }\int _{\left|s\right|\leqslant \varepsilon _0 \sqrt{n}} \left|{\mathbf {P}}_{\frac{s}{\sqrt{n}}}^n \psi (x) - e^{-\frac{s^2\sigma ^2}{2}} {\varvec{\nu }} \left( \psi \right) \right| \left|{\widehat{h}} \left( \frac{s}{\sqrt{n}} \right) \right| \text {d}s. \end{aligned}$$

By Lemma 4.4, for any \(\left|s\right|\leqslant \varepsilon _0 \sqrt{n}\), we have

$$\begin{aligned} \left|{\mathbf {P}}_{\frac{s}{\sqrt{n}}}^n \psi (x) - e^{-\frac{s^2\sigma ^2}{2}} {\varvec{\nu }} \left( \psi \right) \right|&\leqslant \left\| {\mathbf {P}}_{\frac{s}{\sqrt{n}}}^n \left( \psi \right) - e^{-\frac{s^2\sigma ^2}{2}} \varPi \left( \psi \right) \right\| _{\infty } \\&\leqslant \left\| \psi \right\| _{\infty } \left\| {\mathbf {P}}_{\frac{s}{\sqrt{n}}}^n - e^{-\frac{s^2\sigma ^2}{2}} \varPi \right\| _{{\mathscr {C}} \rightarrow {\mathscr {C}}} \\&\leqslant \left\| \psi \right\| _{\infty } \left( \frac{c}{\sqrt{n}}e^{-\frac{s^2 \sigma ^2}{4}} + c e^{-c n} \right) . \end{aligned}$$

Therefore, using \(\left\| {\widehat{h}}\right\| _{\infty } \leqslant \left\| h\right\| _{L^1}\),

$$\begin{aligned} I_2&\leqslant \left\| \psi \right\| _{\infty } \left( \frac{c}{\sqrt{n}} \int _{{\mathbb {R}}} e^{-\frac{s^2 \sigma ^2}{4}} \left\| {\widehat{h}}\right\| _{\infty } \text {d}s + c e^{-c n} \left\| {\widehat{h}}\right\| _{L^1} \right) \nonumber \\&\leqslant \left\| \psi \right\| _{\infty } \left( \frac{c}{\sqrt{n}} \left\| h\right\| _{L^1} + c e^{-c n} \left\| {\widehat{h}}\right\| _{L^1} \right) . \end{aligned}$$
(5.4)

Putting together (5.2), (5.3) and (5.4) concludes the proof. \(\square \)

We extend the result of Lemma 5.1 to arbitrary integrable functions (whose Fourier transforms are not necessarily integrable). As in Stone [32], we introduce the kernel \(\kappa \) defined on \({\mathbb {R}}\) by

$$\begin{aligned} \kappa (u) = \frac{1}{2\pi } \left( \frac{\sin \left( \frac{u}{2} \right) }{\frac{u}{2}} \right) ^2, \quad \forall u \in {\mathbb {R}}^* \qquad \text {and} \qquad \kappa (0) = \frac{1}{2\pi }. \end{aligned}$$

The function \(\kappa \) is integrable and its Fourier transform is given by

$$\begin{aligned} {\widehat{\kappa }}(t) = 1-\left|t\right|, \quad \forall t \in [-1,1], \qquad \text {and} \qquad {\widehat{\kappa }}(t) = 0 \quad \text {otherwise.} \end{aligned}$$

Note that

$$\begin{aligned} \int _{{\mathbb {R}}} \kappa (u) \text {d}u = {\widehat{\kappa }}(0) = 1 = \int _{{\mathbb {R}}} {\widehat{\kappa }}(t) \text {d}t. \end{aligned}$$

For any \(\varepsilon >0\), we define the function \(\kappa _{\varepsilon }\) on \({\mathbb {R}}\) by

$$\begin{aligned} \kappa _{\varepsilon } (u) = \frac{1}{\varepsilon } \kappa \left( \frac{u}{\varepsilon } \right) . \end{aligned}$$

Its Fourier transform is given by \({\widehat{\kappa }}_{\varepsilon } (t) = {\widehat{\kappa }}(\varepsilon t)\). Note also that, for any \(\varepsilon > 0\), we have

$$\begin{aligned} \int _{\left|u\right| \geqslant \frac{1}{\varepsilon }} \kappa (u) \text {d}u \leqslant \frac{1}{\pi } \int _{\frac{1}{\varepsilon }}^{+\infty } \frac{4}{u^2} \text {d}u = \frac{4}{\pi } \varepsilon . \end{aligned}$$
(5.5)
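The normalisation of \(\kappa \) and the tail bound (5.5) are easy to confirm numerically (the grid and the value \(\varepsilon = 0.1\) below are illustrative):

```python
import numpy as np

def kappa(u):
    # kappa(u) = (sin(u/2)/(u/2))^2 / (2*pi); np.sinc handles u = 0,
    # since np.sinc(x) = sin(pi x)/(pi x)
    return np.sinc(u / (2.0 * np.pi)) ** 2 / (2.0 * np.pi)

du = 1e-3
u = np.linspace(-1000.0, 1000.0, 2_000_001)
k = kappa(u)

total = np.sum(k) * du              # Riemann sum for the integral of kappa
assert abs(total - 1.0) < 5e-3      # mass outside [-1000, 1000] is O(1e-3)

eps = 0.1
tail = 2.0 * np.sum(k[u >= 1.0 / eps]) * du   # integral over |u| >= 1/eps
assert tail <= 4.0 * eps / np.pi              # the bound (5.5)
```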

For any non-negative and locally bounded function \(h\) defined on \({\mathbb {R}}\) and any \(\varepsilon >0\), let \({\overline{h}}_{\varepsilon }\) and \({\underline{h}}_{\varepsilon }\) be the “thickened” functions: for any \(u \in {\mathbb {R}}\),

$$\begin{aligned} {\overline{h}}_{\varepsilon }(u) = \sup _{v \in [u-\varepsilon ,u+\varepsilon ]} h(v) \qquad \text {and} \qquad {\underline{h}}_{\varepsilon }(u) = \inf _{v \in [u-\varepsilon ,u+\varepsilon ]} h(v). \end{aligned}$$

For any \(\varepsilon > 0\), denote by \({\mathscr {H}}_{\varepsilon }\) the set of non-negative and locally bounded functions \(h\) such that \(h\), \({\overline{h}}_{\varepsilon }\) and \({\underline{h}}_{\varepsilon }\) are measurable from \(\left( {\mathbb {R}}, {\mathscr {B}} \left( {\mathbb {R}} \right) \right) \) to \(\left( {\mathbb {R}}_+, {\mathscr {B}} \left( {\mathbb {R}}_+ \right) \right) \) and Lebesgue-integrable (where \({\mathscr {B}} \left( {\mathbb {R}} \right) \) and \({\mathscr {B}} \left( {\mathbb {R}}_+ \right) \) are the Borel \(\sigma \)-algebras).
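The thickened functions are running suprema and infima over windows of half-width \(\varepsilon \); a discrete sketch with a hypothetical triangular bump \(h\):

```python
import numpy as np

du = 0.01
u = np.linspace(-3.0, 3.0, 601)                 # grid of step du
h = np.maximum(0.0, 1.0 - np.abs(u))            # triangular bump (illustrative)

def thicken(vals, eps, op):
    # running sup (op = np.max) or inf (op = np.min) over [u - eps, u + eps]
    w = int(round(eps / du))
    return np.array([op(vals[max(0, i - w):i + w + 1]) for i in range(len(vals))])

eps = 0.25
h_bar = thicken(h, eps, np.max)
h_und = thicken(h, eps, np.min)

# the elementary sandwich h_und <= h <= h_bar holds pointwise
assert np.all(h_und <= h) and np.all(h <= h_bar)

# at u = 0.5, the sup and inf of the bump over [0.25, 0.75] are 0.75 and 0.25
i = int(np.argmin(np.abs(u - 0.5)))
assert abs(h_bar[i] - 0.75) < 1e-9 and abs(h_und[i] - 0.25) < 1e-9
```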

Lemma 5.2

For any \(\varepsilon \in (0,1/4)\), any function \(h \in {\mathscr {H}}_{\varepsilon }\) and any \(u \in {\mathbb {R}}\),

$$\begin{aligned} {\underline{h}}_{\varepsilon }*\kappa _{\varepsilon ^2} (u) - \int _{\left|v\right| \geqslant \varepsilon } {\underline{h}}_{\varepsilon } \left( u- v \right) \kappa _{\varepsilon ^2} (v) \text {d}v \leqslant h(u) \leqslant \left( 1+4\varepsilon \right) {\overline{h}}_{\varepsilon }*\kappa _{\varepsilon ^2} (u). \end{aligned}$$

Proof

Note that for any \(\left|v\right| \leqslant \varepsilon \) and \(u\in {\mathbb {R}}\), we have \(u\in [ u- v - \varepsilon , u- v+\varepsilon ]\). So,

$$\begin{aligned} {\underline{h}}_{\varepsilon } \left( u- v \right) \leqslant h (u) \leqslant {\overline{h}}_{\varepsilon } \left( u- v \right) . \end{aligned}$$
(5.6)

Using the fact that \(\int _{{\mathbb {R}}} \kappa _{\varepsilon ^2} (u) \text {d}u = 1\) and (5.5), we write

$$\begin{aligned} h (u)&= \int _{\left|v\right| \leqslant \varepsilon } h(u) \kappa _{\varepsilon ^2} (v) \text {d}v + h(u) \int _{\left|v\right| \geqslant \varepsilon } \kappa _{\varepsilon ^2} (v) \text {d}v \\&\leqslant \int _{\left|v\right| \leqslant \varepsilon } {\overline{h}}_{\varepsilon } \left( u- v \right) \kappa _{\varepsilon ^2} (v) \text {d}v + h(u) \frac{4}{\pi } \varepsilon . \end{aligned}$$

Therefore,

$$\begin{aligned} h(u) \left( 1- \frac{4}{\pi } \varepsilon \right) \leqslant \int _{{\mathbb {R}}} {\overline{h}}_{\varepsilon } \left( u-v \right) \kappa _{\varepsilon ^2} (v) \text {d}v = {\overline{h}}_{\varepsilon }*\kappa _{\varepsilon ^2} (u). \end{aligned}$$

Since \(\frac{4}{\pi } \leqslant 2\), we have \(1-\frac{4}{\pi }\varepsilon \geqslant 1-2\varepsilon \), and thus for any \(\varepsilon \in (0,1/4)\),

$$\begin{aligned} h(u) \leqslant \frac{1}{1-2\varepsilon } {\overline{h}}_{\varepsilon }*\kappa _{\varepsilon ^2} (u) \leqslant \left( 1+4\varepsilon \right) {\overline{h}}_{\varepsilon }*\kappa _{\varepsilon ^2} (u). \end{aligned}$$

Moreover, from (5.6),

$$\begin{aligned} h(u)&\geqslant \int _{\left|v\right| \leqslant \varepsilon } h(u) \kappa _{\varepsilon ^2} (v) \text {d}v \\&\geqslant \int _{\left|v\right| \leqslant \varepsilon } {\underline{h}}_{\varepsilon } \left( u- v \right) \kappa _{\varepsilon ^2} (v) \text {d}v \\&= {\underline{h}}_{\varepsilon }*\kappa _{\varepsilon ^2} (u) - \int _{\left|v\right| \geqslant \varepsilon } {\underline{h}}_{\varepsilon } \left( u- v \right) \kappa _{\varepsilon ^2} (v) \text {d}v. \end{aligned}$$

\(\square \)
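Both inequalities of Lemma 5.2 can be observed numerically by discretising the convolutions; all choices below (the grid, \(\varepsilon = 0.2\), the triangular bump \(h\)) are illustrative:

```python
import numpy as np

du = 0.005
u = np.linspace(-4.0, 4.0, 1601)
h = np.maximum(0.0, 1.0 - np.abs(u))            # triangular bump

eps = 0.2
w = int(round(eps / du))
h_bar = np.array([h[max(0, i - w):i + w + 1].max() for i in range(len(h))])
h_und = np.array([h[max(0, i - w):i + w + 1].min() for i in range(len(h))])

# kernel kappa_{eps^2}(v) = kappa(v / eps^2) / eps^2, sampled on the same grid;
# np.sinc(x) = sin(pi x)/(pi x) gives kappa(w) = sinc(w/(2 pi))^2 / (2 pi)
kap = np.sinc(u / eps**2 / (2.0 * np.pi)) ** 2 / (2.0 * np.pi) / eps**2

conv = lambda a, b: np.convolve(a, b, mode="same") * du

# upper bound of Lemma 5.2: h <= (1 + 4 eps) * (h_bar * kappa_{eps^2})
assert np.all(h <= (1.0 + 4.0 * eps) * conv(h_bar, kap) + 1e-6)

# lower bound: h >= h_und * kappa_{eps^2} - integral over |v| >= eps
kap_far = np.where(np.abs(u) >= eps, kap, 0.0)
assert np.all(h >= conv(h_und, kap) - conv(h_und, kap_far) - 1e-6)
```

The difference of the two convolutions in the last line is exactly the convolution with the kernel restricted to \(\left|v\right| < \varepsilon \), matching the left hand side of the lemma.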

Lemma 5.3

Let \(\varepsilon >0\) and \(h \in {\mathscr {H}}_{\varepsilon }\).

  1.

    For any \(y \in {\mathbb {R}}\) and \(n\geqslant 1\),

    $$\begin{aligned} \sqrt{n} \left( {\overline{h}}_{\varepsilon }*\kappa _{\varepsilon ^2} \right) *\varphi _{\sqrt{n}\sigma }(y) \leqslant \sqrt{n}\left( h*\varphi _{\sqrt{n}\sigma } \right) (y) + c \left\| {\overline{h}}_{2\varepsilon } - h\right\| _{L^1} + c \varepsilon \left\| h\right\| _{L^1}, \end{aligned}$$

    where \(\varphi _{\sqrt{n}\sigma }(\cdot )\) is defined by (5.1).

  2.

    For any \(y \in {\mathbb {R}}\) and \(n\geqslant 1\),

    $$\begin{aligned} \sqrt{n}\left( {\overline{h}}_{\varepsilon }*\kappa _{\varepsilon ^2} \right) *\varphi _{\sqrt{n}\sigma }(y) \leqslant c \left\| {\overline{h}}_{\varepsilon }\right\| _{L^1}. \end{aligned}$$
  3.

    For any \(y\in {\mathbb {R}}\) and \(n\geqslant 1\),

    $$\begin{aligned} \sqrt{n} \left( {\underline{h}}_{\varepsilon }*\kappa _{\varepsilon ^2} \right) *\varphi _{\sqrt{n}\sigma }(y) \geqslant \sqrt{n} \left( h*\varphi _{\sqrt{n}\sigma }\right) (y) - c \left\| h-{\underline{h}}_{2\varepsilon }\right\| _{L^1} - c \varepsilon \left\| h\right\| _{L^1}. \end{aligned}$$

Proof

For any \(\varepsilon > 0\), \(\left|v\right| \leqslant \varepsilon \) and \(u\in {\mathbb {R}}\), we have \([u-v-\varepsilon ,u-v+\varepsilon ] \subset [u-2\varepsilon ,u+2\varepsilon ]\). Therefore,

$$\begin{aligned} {\underline{h}}_{\varepsilon }(u-v) \geqslant {\underline{h}}_{2\varepsilon }(u) \qquad \text {and} \qquad {\overline{h}}_{\varepsilon }(u-v) \leqslant {\overline{h}}_{2\varepsilon }(u). \end{aligned}$$
(5.7)

Consequently, for any \(u\in {\mathbb {R}}\),

$$\begin{aligned} {\overline{h}}_{\varepsilon }*\kappa _{\varepsilon ^2} (u)&\leqslant {\overline{h}}_{2\varepsilon }(u) \int _{\left|v\right|\leqslant \varepsilon } \kappa _{\varepsilon ^2}(v) \text {d}v + \int _{\left|v\right|\geqslant \varepsilon } {\overline{h}}_{\varepsilon }(u-v)\kappa _{\varepsilon ^2}(v) \text {d}v \\&\leqslant {\overline{h}}_{2\varepsilon }(u) + \int _{\left|v\right|\geqslant \varepsilon } {\overline{h}}_{\varepsilon }(u-v)\kappa _{\varepsilon ^2}(v) \text {d}v. \end{aligned}$$

From this, using the bound \(\sqrt{n}\varphi _{\sqrt{n}\sigma }(\cdot ) \leqslant 1/(\sqrt{2\pi }\sigma )\) and (5.5), we obtain that

$$\begin{aligned} \sqrt{n} \left( {\overline{h}}_{\varepsilon }*\kappa _{\varepsilon ^2} \right) *\varphi _{\sqrt{n}\sigma }(y)&\leqslant \sqrt{n} \left( {\overline{h}}_{2\varepsilon }*\varphi _{\sqrt{n}\sigma } \right) (y) \\&\quad + \frac{1}{\sqrt{2\pi }\sigma } \int _{{\mathbb {R}}} \int _{\left|v\right|\geqslant \varepsilon } {\overline{h}}_{\varepsilon }(u-v)\kappa _{\varepsilon ^2}(v) \text {d}v \text {d}u \\&\leqslant \sqrt{n} \left( {\overline{h}}_{2\varepsilon }*\varphi _{\sqrt{n}\sigma } \right) (y) + \frac{2\sqrt{2}}{\pi ^{3/2}\sigma } \varepsilon \left\| {\overline{h}}_{\varepsilon }\right\| _{L^1}. \end{aligned}$$

Using again the bound \(\sqrt{n}\varphi _{\sqrt{n}\sigma }(\cdot ) \leqslant 1/(\sqrt{2\pi }\sigma )\), we get

$$\begin{aligned}&\sqrt{n} \left( {\overline{h}}_{\varepsilon }*\kappa _{\varepsilon ^2} \right) *\varphi _{\sqrt{n}\sigma }(y) \\&\quad \leqslant \sqrt{n} \left( h*\varphi _{\sqrt{n}\sigma } \right) (y) + \int _{{\mathbb {R}}} \left|{\overline{h}}_{2\varepsilon }(u) - h(u)\right| \frac{\text {d}u}{\sqrt{2\pi }\sigma } + c \varepsilon \left\| {\overline{h}}_{\varepsilon }\right\| _{L^1} \\&\quad \leqslant \sqrt{n} \left( h*\varphi _{\sqrt{n}\sigma } \right) (y) + c \left\| {\overline{h}}_{2\varepsilon } - h\right\| _{L^1} + c \varepsilon \left\| {\overline{h}}_{2\varepsilon }\right\| _{L^1} \\&\quad \leqslant \sqrt{n} \left( h*\varphi _{\sqrt{n}\sigma } \right) (y) + \left( c+ c\varepsilon \right) \left\| {\overline{h}}_{2\varepsilon } - h\right\| _{L^1} + c \varepsilon \left\| h\right\| _{L^1}, \end{aligned}$$

which proves claim 1.

In the same way,

$$\begin{aligned} \sqrt{n} \left( {\overline{h}}_{\varepsilon }*\kappa _{\varepsilon ^2} \right) *\varphi _{\sqrt{n}\sigma }(y) \leqslant \frac{1}{\sqrt{2\pi }\sigma } \left\| {\overline{h}}_{\varepsilon }*\kappa _{\varepsilon ^2}\right\| _{L^1} = \frac{1}{\sqrt{2\pi }\sigma } \left\| {\overline{h}}_{\varepsilon }\right\| _{L^1}, \end{aligned}$$

which establishes claim 2.

By (5.7) and (5.5),

$$\begin{aligned} {\underline{h}}_{\varepsilon }*\kappa _{\varepsilon ^2}(u) \geqslant {\underline{h}}_{2\varepsilon }(u)\int _{\left|v\right|\leqslant \varepsilon } \kappa _{\varepsilon ^2}(v) \text {d}v \geqslant \left( 1-\frac{4}{\pi } \varepsilon \right) {\underline{h}}_{2\varepsilon }(u). \end{aligned}$$

Integrating this inequality and using once again the bound \(\sqrt{n} \varphi _{\sqrt{n}\sigma }(\cdot ) \leqslant \frac{1}{\sqrt{2\pi }\sigma }\), we have

$$\begin{aligned} \sqrt{n} \left( {\underline{h}}_{\varepsilon }*\kappa _{\varepsilon ^2} \right) *\varphi _{\sqrt{n}\sigma }(y)&\geqslant \sqrt{n} \left( 1-\frac{4}{\pi } \varepsilon \right) {\underline{h}}_{2\varepsilon }*\varphi _{\sqrt{n}\sigma }(y) \\&\geqslant \sqrt{n} \left( {\underline{h}}_{2\varepsilon }*\varphi _{\sqrt{n}\sigma }\right) (y) - \frac{4}{\pi } \varepsilon \frac{1}{\sqrt{2\pi }\sigma } \left\| {\underline{h}}_{2\varepsilon }\right\| _{L^1}. \end{aligned}$$

Inserting h, we conclude that

$$\begin{aligned}&\sqrt{n} \left( {\underline{h}}_{\varepsilon }*\kappa _{\varepsilon ^2} \right) *\varphi _{\sqrt{n}\sigma }(y) \\&\quad \geqslant \sqrt{n} \left( h*\varphi _{\sqrt{n}\sigma }\right) (y) - \frac{1}{\sqrt{2\pi }\sigma } \left\| h-{\underline{h}}_{2\varepsilon }\right\| _{L^1} - c \varepsilon \left\| {\underline{h}}_{2\varepsilon }\right\| _{L^1} \\&\quad \geqslant \sqrt{n} \left( h*\varphi _{\sqrt{n}\sigma }\right) (y) - c \left\| h-{\underline{h}}_{2\varepsilon }\right\| _{L^1} - c \varepsilon \left\| h\right\| _{L^1}. \end{aligned}$$

\(\square \)
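As a quick sanity check, the window inequality (5.7) used above can be verified numerically on a grid; in the sketch below the sample function, grid size and window radius are illustrative choices, not taken from the text.

```python
# Numerical sanity check of (5.7): for |v| <= eps,
#   underline{h}_eps(u - v) >= underline{h}_{2eps}(u)  and
#   overline{h}_eps(u - v) <= overline{h}_{2eps}(u),
# since [u - v - eps, u - v + eps] is contained in [u - 2eps, u + 2eps].
import random

random.seed(0)
N = 200
h = [random.random() for _ in range(N)]  # arbitrary sample function on a grid

def h_sup(eps, i):  # discrete analogue of overline{h}_eps at grid point i
    return max(h[i - eps:i + eps + 1])

def h_inf(eps, i):  # discrete analogue of underline{h}_eps at grid point i
    return min(h[i - eps:i + eps + 1])

eps = 5  # window radius, in grid cells
ok = True
for i in range(2 * eps, N - 2 * eps):
    for v in range(-eps, eps + 1):  # |v| <= eps
        ok &= h_inf(eps, i - v) >= h_inf(2 * eps, i)
        ok &= h_sup(eps, i - v) <= h_sup(2 * eps, i)
print(ok)  # True
```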

We are now equipped to prove a non-asymptotic local limit theorem for a large class of functions h.

Lemma 5.4

Assume Hypotheses M1–M3. Let \(\varepsilon \in (0,1/4)\). For any function \(h \in {\mathscr {H}}_{\varepsilon }\), any non-negative function \(\psi \in {\mathscr {C}}\) and any \(n \geqslant 1\),

$$\begin{aligned}&\underset{x \in {\mathbb {X}}, \,y\in {\mathbb {R}}}{\sup } \sqrt{n} \left| {\mathbb {E}}_x \left( h\left( y+S_n \right) \psi \left( X_n \right) \right) - h*\varphi _{\sqrt{n}\sigma }(y) {\varvec{\nu }} \left( \psi \right) \right| \\&\quad \leqslant c \left\| \psi \right\| _{\infty } \left( \left\| h-{\underline{h}}_{2\varepsilon }\right\| _{L^1} + \left\| {\overline{h}}_{2\varepsilon }-h\right\| _{L^1} \right) \\&\qquad + c \left\| \psi \right\| _{\infty } \left\| {\overline{h}}_{2\varepsilon }\right\| _{L^1} \left( \frac{1}{\sqrt{n}} + \varepsilon + c_{\varepsilon }e^{-c_{\varepsilon }n} \right) , \end{aligned}$$

where \(\varphi _{\sqrt{n}\sigma }(\cdot )\) is defined by (5.1). Moreover,

$$\begin{aligned} \underset{x \in {\mathbb {X}}, \,y\in {\mathbb {R}}}{\sup } \sqrt{n} {\mathbb {E}}_x \left( h\left( y+S_n \right) \psi \left( X_n \right) \right) \leqslant c \left\| \psi \right\| _{\infty } \left\| {\overline{h}}_{2\varepsilon }\right\| _{L^1} \left( 1 + c_{\varepsilon }e^{-c_{\varepsilon }n} \right) . \end{aligned}$$

Proof

We prove upper and lower bounds for \(\sqrt{n}{\mathbb {E}}_x \left( h\left( y+S_n \right) \psi \left( X_n \right) \right) \) from which the claim will follow.

The upper bound. By Lemma 5.2, we have, for any \(x\in {\mathbb {X}}\), \(n\geqslant 1\), \(y\in {\mathbb {R}}\) and \(\varepsilon \in (0,1/4)\),

$$\begin{aligned} {\mathbb {E}}_x \left( h\left( y+S_n \right) \psi \left( X_n \right) \right) \leqslant \left( 1+4\varepsilon \right) {\mathbb {E}}_x \left( {\overline{h}}_{\varepsilon }*\kappa _{\varepsilon ^2}\left( y+S_n \right) \psi \left( X_n \right) \right) . \end{aligned}$$

Since \({\overline{h}}_{\varepsilon }\) is integrable, the function \(u\mapsto {\overline{h}}_{\varepsilon }*\kappa _{\varepsilon ^2}(u)\) is integrable and its Fourier transform \(u\mapsto \widehat{{\overline{h}}}_{\varepsilon }(u) {\widehat{\kappa }}_{\varepsilon ^2}(u)\) has support contained in \([-1/\varepsilon ^2,1/\varepsilon ^2]\). Consequently, by Lemma 5.1,

$$\begin{aligned} I_0&:= \sqrt{n} {\mathbb {E}}_x \left( h\left( y+S_n \right) \psi \left( X_n \right) \right) \\&\leqslant \sqrt{n}\left( 1+4\varepsilon \right) \left( {\overline{h}}_{\varepsilon }*\kappa _{\varepsilon ^2} \right) *\varphi _{\sqrt{n}\sigma }(y) {\varvec{\nu }} \left( \psi \right) \\&\quad + 2\left\| \psi \right\| _{\infty } \left( \frac{c}{\sqrt{n}} \left\| {\overline{h}}_{\varepsilon }*\kappa _{\varepsilon ^2}\right\| _{L^1} + \left\| \widehat{{\overline{h}}}_{\varepsilon } {\widehat{\kappa }}_{\varepsilon ^2}\right\| _{L^1} c_{\varepsilon }e^{-c_{\varepsilon }n} \right) . \end{aligned}$$

Using claims 1 and 2 of Lemma 5.3 and the fact that \(\left|{\varvec{\nu }} \left( \psi \right) \right| \leqslant \left\| \psi \right\| _{\infty }\), we deduce that

$$\begin{aligned} I_0&\leqslant \sqrt{n} \left( h*\varphi _{\sqrt{n}\sigma } \right) (y) {\varvec{\nu }} \left( \psi \right) + \left\| \psi \right\| _{\infty } \left( c \left\| {\overline{h}}_{2\varepsilon } - h\right\| _{L^1} + c \varepsilon \left\| h\right\| _{L^1} \right) \\&\quad + 4\varepsilon c \left\| {\overline{h}}_{\varepsilon }\right\| _{L^1} \left\| \psi \right\| _{\infty } \\&\quad + 2 \left\| \psi \right\| _{\infty } \left( \frac{c}{\sqrt{n}} \left\| {\overline{h}}_{\varepsilon }*\kappa _{\varepsilon ^2}\right\| _{L^1} + \left\| \widehat{{\overline{h}}}_{\varepsilon } {\widehat{\kappa }}_{\varepsilon ^2}\right\| _{L^1} c_{\varepsilon }e^{-c_{\varepsilon }n} \right) . \end{aligned}$$

Note that \(\left\| {\overline{h}}_{\varepsilon }*\kappa _{\varepsilon ^2}\right\| _{L^1} = \left\| {\overline{h}}_{\varepsilon }\right\| _{L^1}\) and

$$\begin{aligned} \left\| \widehat{{\overline{h}}}_{\varepsilon } {\widehat{\kappa }}_{\varepsilon ^2}\right\| _{L^1} \leqslant \left\| {\overline{h}}_{\varepsilon }\right\| _{L^1} \int _{{\mathbb {R}}} {\widehat{\kappa }}_{\varepsilon ^2} (t) \text {d}t =\left\| {\overline{h}}_{\varepsilon }\right\| _{L^1} \int _{{\mathbb {R}}} {\widehat{\kappa }} (\varepsilon ^2 t) \text {d}t = \frac{1}{\varepsilon ^2} \left\| {\overline{h}}_{\varepsilon }\right\| _{L^1}. \end{aligned}$$

Consequently,

$$\begin{aligned} I_0&\leqslant \sqrt{n} \left( h*\varphi _{\sqrt{n}\sigma } \right) (y) {\varvec{\nu }} \left( \psi \right) + c\left\| \psi \right\| _{\infty } \left\| {\overline{h}}_{2\varepsilon } - h\right\| _{L^1} \nonumber \\&\quad + c\left\| \psi \right\| _{\infty } \left\| {\overline{h}}_{\varepsilon }\right\| _{L^1} \left( \frac{1}{\sqrt{n}} + \varepsilon + c_{\varepsilon }e^{-c_{\varepsilon }n} \right) . \end{aligned}$$
(5.8)

From (5.8), taking into account that \(\sqrt{n} \left( h*\varphi _{\sqrt{n}\sigma } \right) (y) \leqslant c\left\| h\right\| _{L^1}\), we deduce, in addition, that

$$\begin{aligned} I_0 \leqslant c\left\| \psi \right\| _{\infty } \left\| {\overline{h}}_{2\varepsilon }\right\| _{L^1} \left( 1 + c_{\varepsilon }e^{-c_{\varepsilon }n} \right) . \end{aligned}$$
(5.9)

The lower bound. By Lemma 5.2, we write that

$$\begin{aligned} I_0&\geqslant \underbrace{\sqrt{n} {\mathbb {E}}_x \left( {\underline{h}}_{\varepsilon }*\kappa _{\varepsilon ^2} \left( y+S_n \right) \psi \left( X_n \right) \right) }_{=:I_1} \nonumber \\&\quad - \underbrace{\sqrt{n} {\mathbb {E}}_x \left( \int _{\left|v\right| \geqslant \varepsilon } {\underline{h}}_{\varepsilon } \left( y+S_n - v \right) \kappa _{\varepsilon ^2} (v) \text {d}v \psi \left( X_n \right) \right) }_{=:I_2}. \end{aligned}$$
(5.10)

Bound of \(I_1\). The Fourier transform of the convolution \({\underline{h}}_{\varepsilon }*\kappa _{\varepsilon ^2}\) has compact support contained in \([-1/\varepsilon ^2,1/\varepsilon ^2]\). So, by Lemma 5.1,

$$\begin{aligned} I_1\geqslant & {} \sqrt{n} \left( {\underline{h}}_{\varepsilon }*\kappa _{\varepsilon ^2} \right) *\varphi _{\sqrt{n}\sigma }(y) {\varvec{\nu }} \left( \psi \right) \\&- \left\| \psi \right\| _{\infty } \left( \frac{c}{\sqrt{n}}\left\| {\underline{h}}_{\varepsilon }*\kappa _{\varepsilon ^2}\right\| _{L^1} + \left\| \widehat{{\underline{h}}_{\varepsilon }*\kappa _{\varepsilon ^2}}\right\| _{L^1} c_{\varepsilon }e^{-c_{\varepsilon }n} \right) . \end{aligned}$$

Using claim 3 of Lemma 5.3 and the fact that \(\left|{\varvec{\nu }} \left( \psi \right) \right| \leqslant \left\| \psi \right\| _{\infty }\), we get

$$\begin{aligned} I_1&\geqslant \sqrt{n} \left( h*\varphi _{\sqrt{n}\sigma } \right) (y) {\varvec{\nu }} \left( \psi \right) - c \left\| \psi \right\| _{\infty } \left( \left\| h-{\underline{h}}_{2\varepsilon }\right\| _{L^1} + \varepsilon \left\| h\right\| _{L^1} \right) \\&\quad - \left\| \psi \right\| _{\infty } \left( \frac{c}{\sqrt{n}} \left\| {\underline{h}}_{\varepsilon }*\kappa _{\varepsilon ^2}\right\| _{L^1} + \left\| \widehat{{\underline{h}}_{\varepsilon }*\kappa _{\varepsilon ^2}}\right\| _{L^1} c_{\varepsilon }e^{-c_{\varepsilon }n} \right) . \end{aligned}$$

Since \(\left\| {\underline{h}}_{\varepsilon }*\kappa _{\varepsilon ^2}\right\| _{L^1} = \left\| {\underline{h}}_{\varepsilon }\right\| _{L^1} \leqslant \left\| h\right\| _{L^1}\) and since \(\left\| \widehat{{\underline{h}}_{\varepsilon }*\kappa _{\varepsilon ^2}}\right\| _{L^1} \leqslant \left\| {\underline{h}}_{\varepsilon }\right\| _{L^1} \left\| {\widehat{\kappa }}_{\varepsilon ^2}\right\| _{L^1} = \frac{1}{\varepsilon ^2}\left\| {\underline{h}}_{\varepsilon }\right\| _{L^1} \leqslant \frac{1}{\varepsilon ^2} \left\| h\right\| _{L^1}\), we deduce that

$$\begin{aligned} I_1&\geqslant \sqrt{n} \left( h*\varphi _{\sqrt{n}\sigma } \right) (y) {\varvec{\nu }} \left( \psi \right) - c \left\| \psi \right\| _{\infty } \left\| h-{\underline{h}}_{2\varepsilon }\right\| _{L^1} \nonumber \\&\quad - c \left\| \psi \right\| _{\infty } \left\| h\right\| _{L^1} \left( \frac{1}{\sqrt{n}} + \varepsilon + c_{\varepsilon }e^{-c_{\varepsilon }n} \right) . \end{aligned}$$
(5.11)

Bound of \(I_2\). With the notation \(g_{\varepsilon ,v}(u) = {\underline{h}}_{\varepsilon } \left( u - v \right) \), we have

$$\begin{aligned} I_2 = \int _{\left|v\right| \geqslant \varepsilon } \sqrt{n} {\mathbb {E}}_x \left( g_{\varepsilon ,v} \left( y+S_n \right) \psi \left( X_n \right) \right) \kappa _{\varepsilon ^2} (v) \text {d}v. \end{aligned}$$

Consequently, using (5.9), we find that

$$\begin{aligned} I_2 \leqslant c\left\| \psi \right\| _{\infty } \left( 1 + c_{\varepsilon }e^{-c_{\varepsilon }n} \right) \int _{\left|v\right| \geqslant \varepsilon } \left\| \overline{\left( g_{\varepsilon ,v} \right) }_{2\varepsilon }\right\| _{L^1} \kappa _{\varepsilon ^2} (v) \text {d}v. \end{aligned}$$

Note that, for any u and \(v \in {\mathbb {R}}\),

$$\begin{aligned} \overline{\left( g_{\varepsilon ,v} \right) }_{2\varepsilon } (u) = \sup _{w\in [u-2\varepsilon ,u+2\varepsilon ]} {\underline{h}}_{\varepsilon } \left( w - v \right) \leqslant \sup _{w\in [u-2\varepsilon ,u+2\varepsilon ]} h \left( w - v \right) = {\overline{h}}_{2\varepsilon }(u-v). \end{aligned}$$

So, \(\left\| \overline{\left( g_{\varepsilon ,v} \right) }_{2\varepsilon }\right\| _{L^1} \leqslant \left\| {\overline{h}}_{2\varepsilon }\right\| _{L^1}\) and

$$\begin{aligned} I_2 \leqslant c\left\| \psi \right\| _{\infty } \left\| {\overline{h}}_{2\varepsilon }\right\| _{L^1} \left( 1 + c_{\varepsilon }e^{-c_{\varepsilon }n} \right) \int _{\left|v\right| \geqslant \varepsilon } \kappa _{\varepsilon ^2} (v) \text {d}v. \end{aligned}$$

By (5.5),

$$\begin{aligned} I_2 \leqslant c\left\| \psi \right\| _{\infty } \left\| {\overline{h}}_{2\varepsilon }\right\| _{L^1} \left( \varepsilon + c_{\varepsilon }e^{-c_{\varepsilon }n} \right) . \end{aligned}$$
(5.12)

Putting together (5.10), (5.11) and (5.12), we obtain that

$$\begin{aligned} I_0&\geqslant \sqrt{n} \left( h*\varphi _{\sqrt{n}\sigma } \right) (y) {\varvec{\nu }} \left( \psi \right) - c \left\| \psi \right\| _{\infty } \left\| h-{\underline{h}}_{2\varepsilon }\right\| _{L^1} \nonumber \\&\quad - c \left\| \psi \right\| _{\infty } \left\| {\overline{h}}_{2\varepsilon }\right\| _{L^1} \left( \frac{1}{\sqrt{n}} + \varepsilon + c_{\varepsilon }e^{-c_{\varepsilon }n} \right) . \end{aligned}$$
(5.13)

Putting together the upper bound (5.8) and the lower bound (5.13), the first inequality of the lemma follows. The second inequality is proved in (5.9). \(\square \)

We now apply Lemma 5.4 when the function h is the indicator function of an interval.

Corollary 5.5

Assume Hypotheses M1–M3. For any \(a>0\), \(\varepsilon \in (0,1/4)\), any non-negative function \(\psi \in {\mathscr {C}}\) and any \(n\geqslant 1\),

$$\begin{aligned}&\sup _{x\in {\mathbb {X}},\,y\in {\mathbb {R}},\, z \geqslant 0} \sqrt{n} \left|{\mathbb {E}}_x \left( \psi \left( X_n \right) ;\, y+S_n \in [z,z+a] \right) - a\varphi _{\sqrt{n}\sigma } (z-y) {\varvec{\nu }} \left( \psi \right) \right| \\&\quad \leqslant c (a+\varepsilon ) \left\| \psi \right\| _{\infty } \left( \frac{1}{\sqrt{n}} + \frac{a}{n} + \varepsilon + c_{\varepsilon }e^{-c_{\varepsilon }n} \right) , \end{aligned}$$

where \(\varphi _{\sqrt{n}\sigma }(\cdot )\) is defined by (5.1). In particular, there exists \(c> 0\) such that for any \(a >0\),

$$\begin{aligned} \sup _{x\in {\mathbb {X}},\, y\in {\mathbb {R}},\, z \geqslant 0} \sqrt{n} {\mathbb {E}}_x \left( \psi \left( X_n \right) ;\, y+S_n \in [z,z+a] \right) \leqslant c (1+a^2) \left\| \psi \right\| _{\infty }. \end{aligned}$$
(5.14)

Proof

Let \(z \geqslant 0\), \(a>0\), \(\varepsilon \in (0,1/4)\). For any \(y \in {\mathbb {R}}\) set

$$\begin{aligned} h(y) = \mathbb {1}_{[z,z+a]}(y). \end{aligned}$$

It is clear that

$$\begin{aligned} {\overline{h}}_{\varepsilon }(y) = \mathbb {1}_{[z-\varepsilon ,z+a+\varepsilon ]}(y) \qquad \text {and} \qquad {\underline{h}}_{\varepsilon }(y) = \mathbb {1}_{[z+\varepsilon ,z+a-\varepsilon ]}(y), \end{aligned}$$

where by convention \(\mathbb {1}_{[z+\varepsilon ,z+a-\varepsilon ]}(y) = 0\) when \(a \leqslant 2\varepsilon \). It is also easy to see that

$$\begin{aligned} \left\| h-{\underline{h}}_{2\varepsilon }\right\| _{L^1} = \left\| {\overline{h}}_{2\varepsilon }-h\right\| _{L^1} = 4\varepsilon \qquad \text {and} \qquad \left\| {\overline{h}}_{2\varepsilon }\right\| _{L^1} = a+4\varepsilon . \end{aligned}$$

Taking into account these last equalities and using Lemma 5.4, we find that

$$\begin{aligned}&\left|{\mathbb {E}}_x \left( \psi \left( X_n \right) ;\, y+S_n \in [z,z+a] \right) - \mathbb {1}_{[z,z+a]}*\varphi _{\sqrt{n}\sigma }(y) {\varvec{\nu }} \left( \psi \right) \right| \nonumber \\&\quad \leqslant c (a+\varepsilon ) \left\| \psi \right\| _{\infty } \left( \frac{1}{\sqrt{n}} + \varepsilon + c_{\varepsilon }e^{-c_{\varepsilon }n} \right) . \end{aligned}$$
(5.15)

Moreover, the convolution \(\mathbb {1}_{[z,z+a]}*\varphi _{\sqrt{n}\sigma }\) is equal to

$$\begin{aligned} \mathbb {1}_{[z,z+a]}*\varphi _{\sqrt{n}\sigma }(y)= & {} \int _{{\mathbb {R}}} \mathbb {1}_{\left\{ z \leqslant y-u \leqslant z+a \right\} } \frac{e^{-\frac{u^2}{2n\sigma ^2}}}{\sqrt{2\pi n}\sigma } \text {d}u \\= & {} \varPhi _{\sqrt{n}\sigma }(y-z) - \varPhi _{\sqrt{n}\sigma }(y-z-a), \end{aligned}$$

where \(\varPhi _{\sqrt{n}\sigma }(t) = \int _{-\infty }^{t} \frac{e^{-\frac{u^2}{2n\sigma ^2}}}{\sqrt{2\pi n}\sigma } \text {d}u\) is the distribution function of the centred normal law of variance \(n\sigma ^2\). By the Taylor-Lagrange formula, there exists \(\xi \in (y-z-a,y-z)\) such that

$$\begin{aligned} \varPhi _{\sqrt{n}\sigma }(y-z-a) = \varPhi _{\sqrt{n}\sigma }(y-z) - a\varphi _{\sqrt{n}\sigma }(y-z) + \frac{a^2}{2} \varphi _{\sqrt{n}\sigma }'(\xi ). \end{aligned}$$

Using the fact that \(\sup _{u\in {\mathbb {R}}} \left|u\right|e^{-u^2} \leqslant c\),

$$\begin{aligned} \left|\mathbb {1}_{[z,z+a]}*\varphi _{\sqrt{n}\sigma }(y) - a\varphi _{\sqrt{n}\sigma }(z-y)\right| \leqslant \frac{c a^2}{n}. \end{aligned}$$
(5.16)

Putting together (5.15) and (5.16), we conclude that

$$\begin{aligned}&\left|{\mathbb {E}}_x \left( \psi \left( X_n \right) ;\, y+S_n \in [z,z+a] \right) - a\varphi _{\sqrt{n}\sigma }(z-y) {\varvec{\nu }} \left( \psi \right) \right| \\&\quad \leqslant c (a+\varepsilon ) \left\| \psi \right\| _{\infty } \left( \frac{1}{\sqrt{n}} + \frac{a}{n} + \varepsilon + c_{\varepsilon }e^{-c_{\varepsilon }n} \right) . \end{aligned}$$

\(\square \)
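The two ingredients of the proof above, the exact formula \(\mathbb {1}_{[z,z+a]}*\varphi _{\sqrt{n}\sigma }(y) = \varPhi _{\sqrt{n}\sigma }(y-z) - \varPhi _{\sqrt{n}\sigma }(y-z-a)\) and the Taylor bound (5.16), can be checked numerically. The sketch below is purely illustrative; the values of \(\sigma \), a, y, z are arbitrary choices.

```python
# Checks that the convolution of 1_{[z,z+a]} with the N(0, n sigma^2) density
# equals Phi(y - z) - Phi(y - z - a), and that its distance to
# a * phi_{sqrt(n) sigma}(z - y) shrinks as n grows, consistently with (5.16).
import math

def Phi(t, s):  # distribution function of N(0, s^2)
    return 0.5 * (1.0 + math.erf(t / (s * math.sqrt(2.0))))

def phi(t, s):  # density of N(0, s^2)
    return math.exp(-t * t / (2 * s * s)) / (math.sqrt(2.0 * math.pi) * s)

sigma, a, y, z = 1.0, 0.5, 1.1, 0.1  # illustrative values
errors = []
for n in (100, 400, 1600):
    s = math.sqrt(n) * sigma
    conv = Phi(y - z, s) - Phi(y - z - a, s)  # exact value of the convolution
    errors.append(abs(conv - a * phi(z - y, s)))
# each step multiplies n by 4; the observed error ratios are close to
# 4^{3/2} = 8, comfortably within the c a^2 / n bound of (5.16)
print(errors[0] / errors[1], errors[1] / errors[2])
```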

6 Auxiliary bounds

We state two bounds on the expectation \({\mathbb {E}}_x \left( \psi (X_n) \,;\, y+S_n \in [z,z+a],\, \tau _y > n \right) \). The first one is of order \(1/n\) and independent of z. Then we reverse the Markov chain to improve it to a bound of order \(1/n^{3/2}\). We refer to Denisov and Wachtel [8] for related results in the case of lattice valued independent random variables.

Lemma 6.1

Assume Hypotheses M1–M3. There exists \(c > 0\) such that for any \(a >0\), non-negative function \(\psi \in \mathscr {C}\), \(y \in {\mathbb {R}}\) and \(n \geqslant 1\)

$$\begin{aligned}&\sup _{x\in {\mathbb {X}},\, z \geqslant 0} {\mathbb {E}}_x \left( \psi \left( X_{n} \right) ;\, y+S_{n} \in [z,z+a],\, \tau _y > n \right) \\&\quad \leqslant \frac{c}{n} \left\| \psi \right\| _{\infty } (1+a^2) \left( 1 + \max (y,0) \right) . \end{aligned}$$

Proof

We split the time n into two parts \(k := \left\lfloor n/2\right\rfloor \) and \(n-k\). By the Markov property,

$$\begin{aligned} E_0&:= {\mathbb {E}}_x \left( \psi \left( X_n \right) ;\, y+S_n \in [z,z+a],\, \tau _y> n \right) \\&= \sum _{x'\in {\mathbb {X}}} \int _{0}^{+\infty } {\mathbb {E}}_{x'} \left( \psi \left( X_k \right) ;\, y'+S_k \in [z,z+a],\, \tau _{y'}> k \right) \\&\quad \times {\mathbb {P}}_x \left( X_{n-k} = x',\, y+S_{n-k} \in \text {d}y',\, \tau _y> n-k \right) \\&\leqslant \sum _{x'\in {\mathbb {X}}} \int _{0}^{+\infty } {\mathbb {E}}_{x'} \left( \psi \left( X_k \right) ;\, y'+S_k \in [z,z+a] \right) \\&\quad \times {\mathbb {P}}_x \left( X_{n-k} = x',\, y+S_{n-k} \in \text {d}y',\, \tau _y > n-k \right) . \end{aligned}$$

Using the uniform bound (5.14) in Corollary 5.5, we obtain that

$$\begin{aligned} E_0 \leqslant \frac{c \left\| \psi \right\| _{\infty }}{\sqrt{k}} (1+a^2) {\mathbb {P}}_x \left( \tau _y > n-k \right) . \end{aligned}$$

By point 2 of Proposition 2.2, we get

$$\begin{aligned} E_0 \leqslant \frac{c \left\| \psi \right\| _{\infty } (1+a^2) \left( 1+\max (y,0) \right) }{\sqrt{k}\sqrt{n-k}}. \end{aligned}$$

Since \(n-k \geqslant n/2\) and \(k \geqslant n/4\) for any \(n \geqslant 4\), the lemma is proved (the case when \(n\leqslant 4\) is trivial). \(\square \)
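The elementary inequalities used in this last step can be confirmed directly; the sweep below is a trivial numerical check over an arbitrary range.

```python
# With k = floor(n/2): n - k >= n/2 holds for all n, and k >= n/4 for n >= 2,
# so in particular both hold in the regime n >= 4 used above.
ok = all(n - n // 2 >= n / 2 and n // 2 >= n / 4 for n in range(4, 10001))
print(ok)  # True
```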

Lemma 6.2

Assume Hypotheses M1–M3. There exists \(c > 0\) such that for any \(a >0\), non-negative function \(\psi \in \mathscr {C}\), \(y \in {\mathbb {R}}\), \(z \geqslant 0\) and \(n \geqslant 1\)

$$\begin{aligned}&\sup _{x\in {\mathbb {X}}} {\mathbb {E}}_x \left( \psi \left( X_{n} \right) ;\, y+S_{n} \in [z,z+a],\, \tau _y > n \right) \\&\quad \leqslant \frac{c \left\| \psi \right\| _{\infty }}{n^{3/2}} (1+a^3)\left( 1+z \right) \left( 1+\max (y,0) \right) . \end{aligned}$$

Proof

Set again \(k=\left\lfloor n/2\right\rfloor \). By the Markov property,

$$\begin{aligned} E_0&:= {\mathbb {E}}_x \left( \psi \left( X_n \right) ;\, y+S_n \in [z,z+a],\, \tau _y> n \right) \nonumber \\&= \sum _{x'\in {\mathbb {X}}} \int _{0}^{+\infty } \underbrace{{\mathbb {E}}_{x'} \left( \psi \left( X_k \right) ;\, y'+S_k \in [z,z+a],\, \tau _{y'}> k \right) }_{=:E_0'} \nonumber \\&\quad \times {\mathbb {P}}_x \left( X_{n-k} = x',\, y+S_{n-k} \in \text {d}y',\, \tau _y > n-k \right) . \end{aligned}$$
(6.1)

Using Lemma 3.2 with \({\mathfrak {m}} = {\varvec{\delta }}_{x'}\) and

$$\begin{aligned} F(x_1,\dots ,x_k) = \psi (x_k) \mathbb {1}_{\left\{ y'+f(x_1)+\cdots +f(x_k) \in [z,z+a],\, \forall i \in \{ 1, \dots , k \},\, y'+f(x_1)+\cdots +f(x_i) > 0 \right\} }, \end{aligned}$$

we have

$$\begin{aligned} E_0'&= {\mathbb {E}}_{{\varvec{\nu }}}^* \left( \psi \left( X_1^* \right) \frac{\mathbb {1}_{\{x'\}}\left( X_{k+1}^* \right) }{{\varvec{\nu }} \left( X_{k+1}^* \right) };\, y'+f\left( X_k^* \right) + \cdots + f\left( X_1^* \right) \in [z,z+a],\, \right. \\&\quad \left. \forall i \in \{ 1, \dots , k \},\, y'+f\left( X_k^* \right) +\cdots +f \left( X_{k-i+1}^* \right) > 0 \right) . \end{aligned}$$

By the Markov property,

$$\begin{aligned} E_0'&= {\mathbb {E}}_{{\varvec{\nu }}}^* \left( \psi \left( X_1^* \right) \psi _{x'}^*\left( X_k^* \right) ;\, y'+f\left( X_k^* \right) + \cdots +f\left( X_1^* \right) \in [z,z+a],\, \right. \\&\quad \left. \forall i \in \{ 1, \dots , k \},\, y'+f\left( X_k^* \right) +\cdots +f \left( X_{k-i+1}^* \right) > 0 \right) , \end{aligned}$$

where

$$\begin{aligned} \psi _{x'}^*(x^*) = {\mathbb {E}}_{x^*}^* \left( \frac{\mathbb {1}_{\{x'\}}\left( X_1^* \right) }{{\varvec{\nu }} \left( X_1^* \right) } \right) = \frac{ {\mathbf {P}}^*(x^*,x')}{{\varvec{\nu }}(x')} = \frac{ {\mathbf {P}}(x',x^*)}{{\varvec{\nu }}(x^*)} \leqslant \frac{1}{\inf _{x\in {\mathbb {X}}} {\varvec{\nu }}(x)}. \end{aligned}$$
(6.2)

On the event \(\left\{ y'+f\left( X_k^* \right) + \cdots + f\left( X_1^* \right) \in \left[ z,z+a \right] \right\} = \left\{ z+a + S_k^* \in \left[ y',y'+a \right] \right\} \), we have

$$\begin{aligned}&\left\{ \forall i \in \{ 1, \dots , k \},\, y'+f\left( X_k^* \right) +\cdots +f \left( X_{k-i+1}^* \right)> 0, y'>0 \right\} \\&\qquad \subset \left\{ \forall i \in \{ 1, \dots , k-1 \},\, z+a-f \left( X_{k-i}^* \right) -\cdots -f\left( X_1^* \right)> 0, z+a+S_k^*> 0 \right\} \\&\qquad = \left\{ \tau _{z+a}^* > k \right\} . \end{aligned}$$

So, for any \(y' > 0\),

$$\begin{aligned} E_0' \leqslant c\left\| \psi \right\| _{\infty } {\mathbb {P}}_{{\varvec{\nu }}}^* \left( z+a+S_k^* \in [y',y'+a],\, \tau _{z+a}^* > k \right) . \end{aligned}$$

Using Lemma 6.1, we have, uniformly in \(y' >0\),

$$\begin{aligned} E_0' \leqslant \frac{c \left\| \psi \right\| _{\infty }}{k} (1+a^2) \left( 1+\max (z+a,0) \right) \leqslant \frac{c \left\| \psi \right\| _{\infty }}{k} (1+a^3) \left( 1+z \right) . \end{aligned}$$
(6.3)

Putting together (6.3) and (6.1) and using point 2 of Proposition 2.2,

$$\begin{aligned} E_0\leqslant & {} \frac{c \left\| \psi \right\| _{\infty }}{k} (1+a^3) \left( 1+z \right) {\mathbb {P}}_x \left( \tau _y > n-k \right) \\\leqslant & {} \frac{c \left\| \psi \right\| _{\infty }}{k\sqrt{n-k}} (1+a^3) \left( 1+z \right) \left( 1+\max (y,0) \right) . \end{aligned}$$

Since \(n-k \geqslant n/2\) and \(k \geqslant n/4\) for any \(n \geqslant 4\), the lemma is proved. \(\square \)
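The absorption step in (6.3), \((1+a^2)\left( 1+\max (z+a,0) \right) \leqslant c(1+a^3)(1+z)\) for \(z \geqslant 0\), holds in fact with \(c=2\); the grid sweep below is an illustrative numerical confirmation, not a proof.

```python
# Sweeps a grid of (a, z) values and records the worst ratio
#   (1 + a^2)(1 + z + a) / ((1 + a^3)(1 + z)),
# which stays below 2 (the maximum is attained near a = 1, z = 0).
worst = 0.0
for i in range(1, 2001):
    a = i * 0.01          # a in (0, 20]
    for j in range(0, 201):
        z = j * 0.1       # z in [0, 20]
        ratio = (1 + a * a) * (1 + z + a) / ((1 + a ** 3) * (1 + z))
        worst = max(worst, ratio)
print(1.99 < worst <= 2.0 + 1e-9)  # True
```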

7 Proof of Theorem 2.4

The aim of this section is to bound

$$\begin{aligned} E_0 := {\mathbb {E}}_x \left( \psi \left( X_{n} \right) ;\, y+S_{n} \in [z,z+a],\, \tau _y > n \right) \end{aligned}$$
(7.1)

uniformly in the end point z. The idea is to split the time n into \(n=n_1+n_2\), where \(n_2 = \left\lfloor \varepsilon ^3 n\right\rfloor \), \(n_1 = n- \left\lfloor \varepsilon ^3 n\right\rfloor \) and \(\varepsilon \in (0,1)\). Using the Markov property, we shall bound the process between \(n_1\) and n by the local limit theorem (Corollary 5.5) and between 1 and \(n_1\) by the integral theorem (Proposition 2.3). Following this idea we write

$$\begin{aligned} E_0&= \underbrace{{\mathbb {E}}_x \left( \psi \left( X_{n} \right) ;\, y+S_{n} \in [z,z+a],\, \tau _y > n_1 \right) }_{=:E_1} \nonumber \\&\quad - \underbrace{{\mathbb {E}}_x \left( \psi \left( X_{n} \right) ;\, y+S_{n} \in [z,z+a],\, n_1 < \tau _y \leqslant n \right) }_{=:E_2}. \end{aligned}$$
(7.2)

For ease of reading, the bounds of \(E_1\) and \(E_2\) are given in separate sections.

7.1 Control of \(E_1\)

Lemma 7.1

Assume Hypotheses M1–M3. For any \(a>0\) and \(\varepsilon \in (0, 1/4)\) there exist \(c = c_a >0\) depending only on a and \(c_{\varepsilon }>0\) such that for any non-negative function \(\psi \in {\mathscr {C}}\), any \(y \in {\mathbb {R}}\) and any \(n \in {\mathbb {N}}\) such that \(\varepsilon ^3 n\geqslant 1\), we have

$$\begin{aligned}&\sup _{x\in {\mathbb {X}},z\geqslant 0} n\left|E_1 - \frac{a}{\sqrt{n_2}\sigma } {\varvec{\nu }} \left( \psi \right) {\mathbb {E}}_x \left( \varphi \left( \frac{y-z+S_{n_1}}{\sqrt{n_2}\sigma } \right) ;\, \tau _y > n_1 \right) \right| \\&\quad \leqslant c \left( 1 + \max (y,0) \right) \left\| \psi \right\| _{\infty } \left( \varepsilon + \frac{c_{\varepsilon }}{\sqrt{n}} \right) , \end{aligned}$$

where \(E_1 = {\mathbb {E}}_x \left( \psi \left( X_{n} \right) ;\, y+S_{n} \in [z,z+a],\, \tau _y > n_1 \right) \), \(n_2= \left\lfloor \varepsilon ^3 n\right\rfloor \), \(n_1 = n- \left\lfloor \varepsilon ^3 n\right\rfloor \) and \(\varphi (t) = e^{-\frac{t^2}{2}}/\sqrt{2\pi }\).

Proof

By the Markov property,

$$\begin{aligned} E_1&= \sum _{x' \in {\mathbb {X}}} \int _{0}^{+\infty } \underbrace{{\mathbb {E}}_{x'} \left( \psi \left( X_{n_2} \right) ;\, y'+S_{n_2} \in [z,z+a] \right) }_{=:E_1'} \nonumber \\&\quad \times {\mathbb {P}}_x \left( y+S_{n_1} \in \text {d}y',\, X_{n_1} = x',\, \tau _y > n_1 \right) . \end{aligned}$$
(7.3)

From now on, the real \(a>0\) is fixed. By Corollary 5.5 applied with \(\varepsilon ^{5/2}\) in place of \(\varepsilon \) (note that \(\varepsilon ^{5/2} \leqslant \varepsilon \) for \(\varepsilon \in (0,1/4)\)),

$$\begin{aligned} \sqrt{n_2} \left|E_1' - a\varphi _{\sqrt{n_2}\sigma } (z-y') {\varvec{\nu }} \left( \psi \right) \right| \leqslant c \left\| \psi \right\| _{\infty } \left( \frac{1}{\sqrt{n_2}} + \varepsilon ^{5/2} + c_{\varepsilon }e^{-c_{\varepsilon }n_2} \right) , \end{aligned}$$

with c depending only on a. Consequently, using (7.3) and the fact that \(n_2 = \left\lfloor \varepsilon ^3 n\right\rfloor \geqslant c_{\varepsilon } n\),

$$\begin{aligned}&\left|E_1 - a{\varvec{\nu }} \left( \psi \right) {\mathbb {E}}_x \left( \varphi _{\sqrt{n_2}\sigma } \left( y-z+S_{n_1} \right) ;\, \tau _y> n_1 \right) \right| \\&\quad \leqslant \frac{c\left\| \psi \right\| _{\infty }}{\sqrt{n_2}} \left( \frac{c_{\varepsilon }}{\sqrt{n}} + \varepsilon ^{5/2} + c_{\varepsilon }e^{-c_{\varepsilon }n} \right) {\mathbb {P}}_x \left( \tau _y > n_1 \right) . \end{aligned}$$

Therefore, by (5.1) and point 2 of Proposition 2.2, we obtain that

$$\begin{aligned}&\left|E_1 - \frac{a}{\sqrt{n_2}\sigma } {\varvec{\nu }} \left( \psi \right) {\mathbb {E}}_x \left( \varphi \left( \frac{y-z+S_{n_1}}{\sqrt{n_2}\sigma } \right) ;\, \tau _y > n_1 \right) \right| \\&\quad \leqslant c \left\| \psi \right\| _{\infty } \frac{1 + \max (y,0)}{\sqrt{n_2}\sqrt{n_1}} \left( \frac{c_{\varepsilon }}{\sqrt{n}} + \varepsilon ^{5/2} \right) . \end{aligned}$$

Since \(n_2 \geqslant \varepsilon ^3 n \left( 1 - \frac{1}{\varepsilon ^3 n} \right) \) and \(n_1 \geqslant \frac{n}{2}\), we have

$$\begin{aligned}&c \left\| \psi \right\| _{\infty } \frac{1 + \max (y,0)}{\sqrt{n_2}\sqrt{n_1}} \left( \frac{c_{\varepsilon }}{\sqrt{n}} + \varepsilon ^{5/2} \right) \\&\quad \leqslant c \left\| \psi \right\| _{\infty } \frac{1 + \max (y,0)}{\varepsilon ^{3/2} n} \left( 1 + \frac{c_{\varepsilon }}{n} \right) \left( \frac{c_{\varepsilon }}{\sqrt{n}} + \varepsilon ^{5/2} \right) \\&\quad \leqslant c \left\| \psi \right\| _{\infty } \frac{1 + \max (y,0)}{n} \left( \varepsilon + \frac{c_{\varepsilon }}{\sqrt{n}} \right) \end{aligned}$$

and the lemma follows. \(\square \)
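The two elementary facts about the splitting \(n = n_1+n_2\) used in the proof, namely \(n_1 \geqslant n/2\) and \(n_2 \geqslant \varepsilon ^3 n \left( 1 - \frac{1}{\varepsilon ^3 n} \right) \), can also be confirmed numerically; the ranges swept below are arbitrary.

```python
# For eps in (0, 1/4) and eps^3 * n >= 1, with n2 = floor(eps^3 n) and n1 = n - n2:
#   n1 >= n/2   and   n2 >= eps^3 * n * (1 - 1/(eps^3 * n)),
# the latter being just floor(x) >= x - 1.
import math

ok = True
for eps in (0.05, 0.1, 0.2, 0.24):
    for n in range(1, 5001):
        if eps ** 3 * n < 1:
            continue
        n2 = math.floor(eps ** 3 * n)
        n1 = n - n2
        ok &= n1 >= n / 2
        ok &= n2 >= eps ** 3 * n * (1 - 1 / (eps ** 3 * n))
print(ok)  # True
```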

To find the limit behaviour of \(E_1\), we shall expand \(\frac{1}{\sqrt{n_2}}{\mathbb {E}}_x \left( \varphi \left( \frac{y+S_{n_1}-z}{\sqrt{n_2}\sigma } \right) ;\, \tau _y > n_1 \right) \). To this end, we prove the following lemma, which we will apply first with the standard normal density function \(\varphi \), and later on with the Rayleigh density \(\varphi _+\).

Lemma 7.2

Assume Hypotheses M1–M3. Let \(\varPsi :{\mathbb {R}} \rightarrow {\mathbb {R}}\) be a non-negative differentiable function such that \(\varPsi (t) \rightarrow 0\) as \(t \rightarrow +\infty \). Moreover, we suppose that \(\varPsi '\) is continuous on \({\mathbb {R}}\) and that \(\max (\left|\varPsi (t)\right|,\left|\varPsi '(t)\right|) \leqslant c e^{-\frac{t^2}{4}}\). There exists \(\varepsilon _0 \in (0,1/2)\) such that for any \(\varepsilon \in (0,\varepsilon _0)\), \(y \in {\mathbb {R}}\), \(m_1 \geqslant 1\) and \(m_2 \geqslant 1\), we have

$$\begin{aligned}&\sup _{x\in {\mathbb {X}}, \, z \geqslant 0} \left| {\mathbb {E}}_x \left( \varPsi \left( \frac{y+S_{m_1}-z}{\sqrt{m_2}\sigma } \right) ;\, \tau _y > m_1 \right) \right. \\&\left. \qquad - \frac{2V(x,y)}{\sqrt{2\pi m_1}\sigma } \int _{0}^{+\infty } \varPsi \left( \sqrt{\frac{m_1}{m_2}}t- \frac{z}{\sqrt{m_2}\sigma } \right) \varphi _+ (t) \text {d}t\right| \\&\quad \leqslant c_{\varepsilon } \frac{\left( 1+\max (y,0) \right) ^2}{m_1^{\varepsilon } \sqrt{m_2}} + c \frac{1+\max (y,0)}{\sqrt{m_1}} \left( e^{-c\frac{m_1}{m_2} } + \varepsilon ^4 \right) , \end{aligned}$$

where \(\varphi _+(t) = te^{-\frac{t^2}{2}}\) is the Rayleigh density.

Proof

Let \(x \in {\mathbb {X}}\), \(y \in {\mathbb {R}}\), \(z \geqslant 0\), \(m_1 \geqslant 1\) and \(m_2 \geqslant 1\) and fix \(\varepsilon _1 \in (0,1)\). We consider two cases. Assume first that \(z \leqslant \sqrt{m_1}\sigma /\varepsilon _1\). Using the regularity of the function \(\varPsi \), we note that

$$\begin{aligned} J_0&:= {\mathbb {E}}_x \left( \varPsi \left( \frac{y+S_{m_1}-z}{\sqrt{m_2}\sigma } \right) ;\, \tau _y> m_1 \right) \\&= -\int _0^{+\infty } \sqrt{\frac{m_1}{m_2}} \varPsi ' \left( \sqrt{\frac{m_1}{m_2}} t - \frac{z}{\sqrt{m_2}\sigma } \right) {\mathbb {P}}_x \left( \frac{y+S_{m_1}}{\sqrt{m_1}\sigma } \leqslant t,\, \tau _y > m_1 \right) \text {d}t. \end{aligned}$$

Denote by \(J_1\) the following integral:

$$\begin{aligned} J_1 := -\frac{2V(x,y)}{\sqrt{2\pi m_1}\sigma } \int _0^{+\infty } \sqrt{\frac{m_1}{m_2}} \varPsi ' \left( \sqrt{\frac{m_1}{m_2}} t - \frac{z}{\sqrt{m_2}\sigma } \right) \left( 1 - e^{-\frac{t^2}{2}} \right) \text {d}t. \end{aligned}$$
(7.4)

Using point 2 of Proposition 2.3 with \(t_0 = 2/\varepsilon _1\), there exists \(\varepsilon _0 > 0\) such that for any \(\varepsilon \in (0,\varepsilon _0)\),

$$\begin{aligned} \left|J_0 - J_1\right|&\leqslant c_{\varepsilon ,\varepsilon _1} \frac{\left( 1+\max (y,0) \right) ^2}{m_1^{1/2+\varepsilon }} \int _0^{\frac{2}{\varepsilon _1}} \sqrt{\frac{m_1}{m_2}} \left|\varPsi ' \left( \sqrt{\frac{m_1}{m_2}} t - \frac{z}{\sqrt{m_2}\sigma } \right) \right| \text {d}t \\&\quad + \left( \frac{2V(x,y)}{\sqrt{2\pi m_1}\sigma } + {\mathbb {P}}_x \left( \tau _y > m_1 \right) \right) \int _{\frac{2}{\varepsilon _1}}^{+\infty } \sqrt{\frac{m_1}{m_2}} \left|\varPsi ' \left( \sqrt{\frac{m_1}{m_2}} t - \frac{z}{\sqrt{m_2}\sigma } \right) \right| \text {d}t. \end{aligned}$$

By point 2 of Proposition 2.1 and point 2 of Proposition 2.2, with \(\left\| \varPsi '\right\| _{\infty } = \sup _{t\in {\mathbb {R}}} \left|\varPsi '(t)\right|\),

$$\begin{aligned} \left|J_0 - J_1\right|&\leqslant c_{\varepsilon ,\varepsilon _1} \frac{\left( 1+\max (y,0) \right) ^2}{m_1^{\varepsilon } \sqrt{m_2}} \left\| \varPsi '\right\| _{\infty } \\&\quad \quad + c \frac{1+\max (y,0)}{\sqrt{m_1}} \sqrt{\frac{m_1}{m_2}} \int _{\frac{2}{\varepsilon _1}}^{+\infty } e^{-\frac{ \left( \sqrt{\frac{m_1}{m_2}} t - \frac{z}{\sqrt{m_2}\sigma } \right) ^2}{4} } \text {d}t \\&\leqslant c_{\varepsilon ,\varepsilon _1} \frac{\left( 1+\max (y,0) \right) ^2}{m_1^{\varepsilon } \sqrt{m_2}} + c \frac{1+\max (y,0)}{\sqrt{m_1}} \int _{\sqrt{\frac{m_1}{m_2}} \left( \frac{2}{\varepsilon _1}-\frac{z}{\sqrt{m_1}\sigma } \right) }^{+\infty } e^{-\frac{s^2}{4} } \text {d}s. \end{aligned}$$

Since \(z \leqslant \frac{\sqrt{m_1}\sigma }{\varepsilon _1}\), we have \(\frac{2}{\varepsilon _1} - \frac{z}{\sqrt{m_1}\sigma } \geqslant \frac{1}{\varepsilon _1} \geqslant 1\) and so

$$\begin{aligned} \left|J_0 - J_1\right| \leqslant c_{\varepsilon ,\varepsilon _1} \frac{\left( 1+\max (y,0) \right) ^2}{m_1^{\varepsilon } \sqrt{m_2}} + c \frac{1+\max (y,0)}{\sqrt{m_1}} e^{-\frac{m_1}{8m_2} } \int _{{\mathbb {R}}} e^{-\frac{s^2}{8} } \text {d}s. \end{aligned}$$
(7.5)
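For clarity, the last step is the elementary Gaussian splitting, valid for any \(a \geqslant 0\),

$$\begin{aligned} \int _a^{+\infty } e^{-\frac{s^2}{4}} \text {d}s = \int _a^{+\infty } e^{-\frac{s^2}{8}} e^{-\frac{s^2}{8}} \text {d}s \leqslant e^{-\frac{a^2}{8}} \int _{{\mathbb {R}}} e^{-\frac{s^2}{8}} \text {d}s, \end{aligned}$$

applied with \(a = \sqrt{\frac{m_1}{m_2}} \left( \frac{2}{\varepsilon _1}-\frac{z}{\sqrt{m_1}\sigma } \right) \geqslant \sqrt{\frac{m_1}{m_2}}\).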

Moreover, by the definition of \(J_1\) in (7.4), we have

$$\begin{aligned} J_1&= \frac{2V(x,y)}{\sqrt{2\pi m_1}\sigma } \left[ - \varPsi \left( \sqrt{\frac{m_1}{m_2}} t - \frac{z}{\sqrt{m_2}\sigma } \right) \left( 1 - e^{-\frac{t^2}{2}} \right) \right] _{t=0}^{t=+\infty } \nonumber \\&\quad + \frac{2V(x,y)}{\sqrt{2\pi m_1}\sigma } \int _{0}^{+\infty } \varPsi \left( \sqrt{\frac{m_1}{m_2}} t - \frac{z}{\sqrt{m_2}\sigma } \right) t e^{-\frac{t^2}{2}} \text {d}t \nonumber \\&=\frac{2V(x,y)}{\sqrt{2\pi m_1}\sigma } \int _{0}^{+\infty } \varPsi \left( \sqrt{\frac{m_1}{m_2}} t - \frac{z}{\sqrt{m_2}\sigma } \right) \varphi _+(t) \text {d}t. \end{aligned}$$
(7.6)

Now, assume that \(z > \frac{\sqrt{m_1}\sigma }{\varepsilon _1}\). We write

$$\begin{aligned} J_0&\leqslant c{\mathbb {E}}_x \left( e^{-\frac{\left( y+S_{m_1}-z \right) ^2}{4m_2\sigma ^2}};\, y+S_{m_1} \leqslant \frac{\sqrt{m_1}\sigma }{2\varepsilon _1},\, \tau _y> m_1 \right) \\&\quad + \left\| \varPsi \right\| _{\infty } {\mathbb {P}}_x \left( y+S_{m_1}> \frac{\sqrt{m_1}\sigma }{2\varepsilon _1},\, \tau _y> m_1 \right) \\&\leqslant c e^{-\frac{m_1}{16m_2\varepsilon _1^2}} {\mathbb {P}}_x \left( \tau _y> m_1 \right) + \left\| \varPsi \right\| _{\infty } \frac{2\varepsilon _1}{\sqrt{m_1}\sigma } {\mathbb {E}}_x \left( y+S_{m_1} ;\, \tau _y > m_1 \right) . \end{aligned}$$
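The first inequality only uses that both kernels considered in this section, \(\varPsi = \varphi \) and \(\varPsi = \varphi _+\), satisfy the Gaussian-type decay \(\varPsi (u) \leqslant c\, e^{-u^2/4}\) (with \(c=1\); \(\varphi _+\) vanishes for \(u<0\)). A quick numerical check of this decay, our own illustration rather than part of the proof:

```python
import math

# Both kernels used for Psi satisfy Psi(u) <= exp(-u^2/4):
# phi(u) = exp(-u^2/2)/sqrt(2*pi) for all u, phi_plus(u) = u*exp(-u^2/2) for u >= 0 (else 0).
grid = [k * 1e-3 for k in range(5001)]  # t in [0, 5]; both ratios decay beyond this range
sup_phi = max(math.exp(-t * t / 2) / math.sqrt(2 * math.pi) * math.exp(t * t / 4) for t in grid)
sup_phi_plus = max(t * math.exp(-t * t / 2) * math.exp(t * t / 4) for t in grid)
```

The second supremum equals \(\sqrt{2/e} \approx 0.858\), attained at \(t=\sqrt{2}\), so \(c=1\) indeed works for both kernels.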

Using the points 3 and 1 of Proposition 2.1, we can verify that

$$\begin{aligned} {\mathbb {E}}_x \left( y+S_{m_1} ;\, \tau _y> m_1 \right) \leqslant {\mathbb {E}}_x \left( 2V \left( X_{m_1}, y+S_{m_1} \right) + c ;\, \tau _y > m_1 \right) \leqslant 2V(x,y) + c. \end{aligned}$$

So by the point 2 of Proposition 2.2 and the point 2 of Proposition 2.1,

$$\begin{aligned} J_0 \leqslant c \frac{1+\max (y,0)}{\sqrt{m_1}} e^{-\frac{cm_1}{m_2}} + \frac{c\varepsilon _1}{\sqrt{m_1}} \left( 1+\max (y,0) \right) . \end{aligned}$$

In the same way,

$$\begin{aligned} J_1&= \frac{2V(x,y)}{\sqrt{2\pi m_1}\sigma } \int _{0}^{+\infty } \varPsi \left( \sqrt{\frac{m_1}{m_2}} t - \frac{z}{\sqrt{m_2}\sigma } \right) \varphi _+(t) \text {d}t \\&\leqslant \frac{c\left( 1+\max (y,0) \right) }{\sqrt{m_1}} \left[ \int _0^{\frac{1}{2\varepsilon _1}} e^{-\frac{m_1}{4m_2} \left( t-\frac{z}{\sqrt{m_1}\sigma } \right) ^2} \varphi _+(t) \text {d}t + \left\| \varPsi \right\| _{\infty } \int _{\frac{1}{2\varepsilon _1}}^{+\infty } te^{-\frac{t^2}{2}} \text {d}t \right] \\&\leqslant \frac{c\left( 1+\max (y,0) \right) }{\sqrt{m_1}} \left[ e^{-\frac{m_1}{16m_2\varepsilon _1^2}} \int _0^{+\infty } \varphi _+(t) \text {d}t + \left\| \varPsi \right\| _{\infty } e^{-\frac{1}{16 \varepsilon _1^2}} \int _0^{+\infty } te^{-\frac{t^2}{4}} \text {d}t \right] \\&\leqslant \frac{c\left( 1+\max (y,0) \right) }{\sqrt{m_1}} \left( e^{-\frac{cm_1}{m_2}} + e^{-\frac{c}{\varepsilon _1^2}} \right) . \end{aligned}$$

From the last two bounds it follows that for any \(z > \frac{\sqrt{m_1}\sigma }{\varepsilon _1}\),

$$\begin{aligned} \left|J_0 - J_1\right| \leqslant J_0 + J_1 \leqslant \frac{c\left( 1+\max (y,0) \right) }{\sqrt{m_1}} \left( e^{-\frac{cm_1}{m_2}} + \varepsilon _1 \right) . \end{aligned}$$
(7.7)

Putting together (7.6), (7.7) and (7.5) and taking \(\varepsilon _1 = \varepsilon ^4\), we obtain the desired inequality for any \(z \geqslant 0\),

$$\begin{aligned} \left|J_0 - J_1\right| \leqslant c_{\varepsilon } \frac{\left( 1+\max (y,0) \right) ^2}{m_1^{\varepsilon } \sqrt{m_2}} + \frac{c\left( 1+\max (y,0) \right) }{\sqrt{m_1}} \left( e^{-\frac{cm_1}{m_2}} + \varepsilon ^4 \right) . \end{aligned}$$

\(\square \)

Lemma 7.3

Assume Hypotheses M1–M3. There exists \(\varepsilon _0 \in (0,1/2)\) such that for any \(\varepsilon \in (0,\varepsilon _0)\), \(y \in {\mathbb {R}}\) and \(n \in {\mathbb {N}}\) with \(\varepsilon ^3 n \geqslant 1\), we have

$$\begin{aligned} \sup _{x\in {\mathbb {X}}, \, z \geqslant 0}&\left|\frac{n}{\sqrt{n_2}} {\mathbb {E}}_x \left( \varphi \left( \frac{y+S_{n_1}-z}{\sqrt{n_2}\sigma } \right) \,;\, \tau _y > n_1 \right) - \frac{2V(x,y)}{\sqrt{2\pi }\sigma } \varphi _+ \left( \frac{z}{\sqrt{n}\sigma } \right) \right| \\&\quad \leqslant c_{\varepsilon } \frac{\left( 1+\max (y,0) \right) ^2}{n^{\varepsilon }} + c \left( 1+\max (y,0) \right) \varepsilon , \end{aligned}$$

where \(\varphi (t) = e^{-\frac{t^2}{2}}/\sqrt{2\pi }\), \(\varphi _+(t) = te^{-\frac{t^2}{2}} \mathbb {1}_{\{t\geqslant 0\}}\), \(n_2 = \left\lfloor \varepsilon ^3 n\right\rfloor \) and \(n_1 = n-\left\lfloor \varepsilon ^3 n\right\rfloor \).

Proof

Denote

$$\begin{aligned} J_0 := {\mathbb {E}}_x \left( \varphi \left( \frac{y+S_{n_1}-z}{\sqrt{n_2}\sigma } \right) ;\, \tau _y > n_1 \right) \end{aligned}$$

and

$$\begin{aligned} J_1&:= \frac{2V(x,y)}{\sqrt{2\pi n_1}\sigma } \int _{0}^{+\infty } \varphi \left( \sqrt{\frac{n_1}{n_2}} t - \frac{z}{\sqrt{n_2}\sigma } \right) \varphi _+(t) \text {d}t \nonumber \\&=\frac{2V(x,y)}{\sqrt{2\pi n_1}\sigma } \int _{0}^{+\infty } \sqrt{\frac{n_2}{n_1}} \varphi _{\sqrt{\frac{n_2}{n_1}}} \left( t - \frac{z}{\sqrt{n_1}\sigma } \right) \varphi _+(t) \text {d}t \nonumber \\&= \frac{2V(x,y)}{\sqrt{2\pi }\sigma } \frac{\sqrt{n_2}}{n_1} \varphi _{\sqrt{\frac{n_2}{n_1}}}*\varphi _+ \left( \frac{z}{\sqrt{n_1}\sigma } \right) , \end{aligned}$$
(7.8)

where \(\varphi _{\{\cdot \}}(\cdot )\) is defined in (5.1). By Lemma 7.2 we have

$$\begin{aligned} \frac{n_1}{\sqrt{n_2}} \left|J_0 - J_1\right| \leqslant c_{\varepsilon } n_1 \frac{\left( 1+\max (y,0) \right) ^2}{n_1^{\varepsilon } n_2} + c n_1 \frac{ 1+\max (y,0) }{\sqrt{n_1}\sqrt{n_2}} \left( e^{-c\frac{n_1}{n_2} } + \varepsilon ^4 \right) . \end{aligned}$$

Since \(\frac{n}{2} \leqslant n_1 \leqslant n\) and \(\varepsilon ^3n-1 \leqslant n_2 \leqslant \varepsilon ^3 n\),

$$\begin{aligned} \frac{n}{\sqrt{n_2}} \left|J_0 - J_1\right|&\leqslant c_{\varepsilon } \frac{\left( 1+\max (y,0) \right) ^2}{n^{\varepsilon }} + c \frac{ 1+\max (y,0) }{\varepsilon ^{3/2}} \left( 1+\frac{c_{\varepsilon }}{n} \right) \left( e^{-\frac{c}{\varepsilon ^3}} + \varepsilon ^4 \right) \nonumber \\&\leqslant c_{\varepsilon } \frac{\left( 1+\max (y,0) \right) ^2}{n^{\varepsilon }} + c \left( 1+\max (y,0) \right) \varepsilon . \end{aligned}$$
(7.9)

Let \(J_2\) be the following term:

$$\begin{aligned} J_2 := \frac{2V(x,y)}{\sqrt{2\pi }\sigma } \frac{\sqrt{n_2}}{n_1} \varphi _+ \left( \frac{z}{\sqrt{n_1}\sigma } \right) . \end{aligned}$$
(7.10)

Using (7.8),

$$\begin{aligned} \left|J_1-J_2\right| \leqslant \frac{2V(x,y)}{\sqrt{2\pi }\sigma } \frac{\sqrt{n_2}}{n_1} \int _{{\mathbb {R}}} \varphi _{\sqrt{\frac{n_2}{n_1}}} (t) \left| \varphi _+ \left( \frac{z}{\sqrt{n_1}\sigma } - t \right) - \varphi _+ \left( \frac{z}{\sqrt{n_1}\sigma } \right) \right| \text {d}t. \end{aligned}$$

By the point 2 of Proposition 2.1, we write

$$\begin{aligned} \frac{n}{\sqrt{n_2}}\left|J_1-J_2\right|\leqslant & {} c \left( 1+\max (y,0) \right) \left\| \varphi _+' \right\| _{\infty } \int _{{\mathbb {R}}} \varphi _{\sqrt{\frac{n_2}{n_1}}} (t) \left|t\right| \text {d}t \nonumber \\\leqslant & {} c \left( 1+\max (y,0) \right) \sqrt{\frac{n_2}{n_1}} \int _{{\mathbb {R}}} \varphi (s) \left|s\right| \text {d}s \nonumber \\\leqslant & {} c \left( 1+\max (y,0) \right) \varepsilon ^{3/2}. \end{aligned}$$
(7.11)

Putting together (7.9) and (7.11), we obtain that

$$\begin{aligned} \sup _{x\in {\mathbb {X}},z \geqslant 0} \frac{n}{\sqrt{n_2}} \left|J_0 - J_2\right| \leqslant c_{\varepsilon } \frac{\left( 1+\max (y,0) \right) ^2}{n^{\varepsilon }} + c \left( 1+\max (y,0) \right) \varepsilon . \end{aligned}$$
(7.12)

It remains to link \(J_2\) from (7.10) to the desired equivalent. We distinguish two cases. If \(\frac{z}{\sigma } \leqslant \frac{\sqrt{n}}{\varepsilon }\),

$$\begin{aligned}&\left|\frac{n}{\sqrt{n_2}} J_2 - \frac{2V(x,y)}{\sqrt{2\pi }\sigma } \varphi _+ \left( \frac{z}{\sqrt{n}\sigma } \right) \right| \\&\quad \leqslant c V(x,y) \left| \frac{n}{n_1}\varphi _+ \left( \frac{z}{\sqrt{n_1}\sigma } \right) - \varphi _+ \left( \frac{z}{\sqrt{n}\sigma } \right) \right| \\&\quad \leqslant c V(x,y) \left( \left\| \varphi _+\right\| _{\infty } \left|\frac{n}{n_1} - 1\right| + \left|\frac{1}{\sqrt{n_1}} - \frac{1}{\sqrt{n}}\right| \left|\frac{z}{\sigma }\right| \left\| \varphi _+'\right\| _{\infty } \right) \\&\quad \leqslant c V(x,y) \left( \frac{n_2}{n_1} + \frac{1}{\sqrt{n_1}} \left| 1 - \sqrt{1-\frac{n_2}{n}} \right| \frac{\sqrt{n}}{\varepsilon } \right) \\&\quad \leqslant c V(x,y) \left( \varepsilon ^3 + \frac{\varepsilon ^3}{\varepsilon } \right) . \end{aligned}$$
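The penultimate inequality in the display above uses the elementary bound \(1-\sqrt{1-x} \leqslant x\) for \(x \in [0,1]\), here with \(x = n_2/n \leqslant \varepsilon ^3\). A minimal numerical confirmation (our own illustration):

```python
import math

# Verify 1 - sqrt(1 - x) <= x on a grid of [0, 1]; equivalent to sqrt(1 - x) >= 1 - x.
worst_gap = min(x - (1 - math.sqrt(1 - x)) for x in (k / 1000 for k in range(1001)))
```

The gap vanishes exactly at \(x=0\) and \(x=1\) and is positive in between.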

If \(\frac{z}{\sigma } > \frac{\sqrt{n}}{\varepsilon } \geqslant \frac{\sqrt{n_1}}{\varepsilon }\), we have

$$\begin{aligned} \left|\frac{n}{\sqrt{n_2}} J_2 - \frac{2V(x,y)}{\sqrt{2\pi }\sigma } \varphi _+ \left( \frac{z}{\sqrt{n}\sigma } \right) \right| \leqslant c V(x,y) \sup _{u \geqslant \frac{1}{\varepsilon }} \varphi _+ \left( u \right) \leqslant c V(x,y) e^{-\frac{c}{\varepsilon ^2}}. \end{aligned}$$

Therefore, using the point 2 of Proposition 2.1, we obtain that in each case

$$\begin{aligned} \left|\frac{n}{\sqrt{n_2}} J_2 - \frac{2V(x,y)}{\sqrt{2\pi }\sigma } \varphi _+ \left( \frac{z}{\sqrt{n}\sigma } \right) \right| \leqslant c \left( 1+\max (y,0) \right) \varepsilon ^2. \end{aligned}$$
(7.13)

Putting together (7.12) and (7.13) proves the lemma. \(\square \)
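As a sanity check on the normalisations in Lemma 7.3 (our own illustration, not part of the argument): \(\varphi \) is the standard Gaussian density and \(\varphi _+\) the Rayleigh density, and both integrate to one.

```python
import math

def midpoint(f, a, b, n=200_000):
    # Midpoint rule; ample accuracy for these smooth, rapidly decaying integrands.
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

phi = lambda t: math.exp(-t * t / 2) / math.sqrt(2 * math.pi)  # standard Gaussian density
phi_plus = lambda t: t * math.exp(-t * t / 2)                  # Rayleigh density on t >= 0

total_phi = midpoint(phi, -10.0, 10.0)        # mass outside [-10, 10] is ~1e-23
total_phi_plus = midpoint(phi_plus, 0.0, 10.0)
```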

Another consequence of Lemma 7.2 is the following lemma, which will be used in Sect. 8.

Lemma 7.4

Assume Hypotheses M1–M3. There exists \(\varepsilon _0 \in (0,1/2)\) such that for any \(\varepsilon \in (0,\varepsilon _0)\), \(y \in {\mathbb {R}}\) and \(n \in {\mathbb {N}}\) with \(\varepsilon ^3 n \geqslant 2\), we have

$$\begin{aligned} \sup _{x\in {\mathbb {X}}}&\left|\frac{n^{3/2}}{n_2-1} {\mathbb {E}}_x \left( \varphi _+ \left( \frac{y+S_{n_1}}{\sqrt{n_2-1}\sigma } \right) ;\, \tau _y > n_1 \right) - \frac{V(x,y)}{\sigma }\right| \\&\quad \leqslant c_{\varepsilon } \frac{\left( 1+\max (y,0) \right) ^2}{n^{\varepsilon }} + c \left( 1+\max (y,0)\right) \varepsilon , \end{aligned}$$

where \(\varphi _+(t) = te^{-\frac{t^2}{2}} \mathbb {1}_{\{t\geqslant 0\}}\) is the Rayleigh density function, \(n_1 = n-\left\lfloor \varepsilon ^3n\right\rfloor \) and \(n_2 = \left\lfloor \varepsilon ^3n\right\rfloor \).

Proof

Using Lemma 7.2 with \(\varPsi = \varphi _+\), \(m_1=n_1\), \(m_2 = n_2-1\) and \(z=0,\)

$$\begin{aligned}&\frac{n^{3/2}}{n_2-1} \left|J_0-J_1\right| \nonumber \\&\quad \leqslant c_{\varepsilon } \frac{\left( 1+\max (y,0) \right) ^2 n^{3/2}}{(n_2-1)^{3/2} n_1^{\varepsilon }} + c \frac{\left( 1+\max (y,0)\right) n^{3/2}}{(n_2-1)\sqrt{n_1}} \left( e^{-c\frac{n_1}{(n_2-1)}} + \varepsilon ^4 \right) \nonumber \\&\quad \leqslant c_{\varepsilon } \frac{\left( 1+\max (y,0) \right) ^2}{n^{\varepsilon }} + c \frac{\left( 1+\max (y,0)\right) }{\varepsilon ^3} \left( 1+\frac{c_{\varepsilon }}{n} \right) \left( e^{-\frac{c}{\varepsilon ^3}} + \varepsilon ^4 \right) \nonumber \\&\quad \leqslant c_{\varepsilon } \frac{\left( 1+\max (y,0) \right) ^2}{n^{\varepsilon }} + c \left( 1+\max (y,0)\right) \varepsilon , \end{aligned}$$
(7.14)

where

$$\begin{aligned} J_0 := {\mathbb {E}}_x \left( \varphi _+ \left( \frac{y+S_{n_1}}{\sqrt{n_2-1}\sigma } \right) ;\, \tau _y > n_1 \right) \end{aligned}$$

and

$$\begin{aligned} \frac{n^{3/2}}{n_2-1} J_1&:= \frac{n^{3/2}}{n_2-1} \frac{2V(x,y)}{\sqrt{2\pi n_1}\sigma } \int _{0}^{+\infty } \varphi _+ \left( \sqrt{\frac{n_1}{n_2-1}}t \right) \varphi _+ (t) \text {d}t \\&= \frac{n^{3/2}}{n_2-1} \frac{2V(x,y)}{\sqrt{2\pi n_1}\sigma } \sqrt{\frac{n_1}{n_2-1}} \int _{0}^{+\infty } t^2 e^{ -\frac{\left( \frac{n_1}{n_2-1}+1 \right) t^2}{2} } \text {d}t \\&= \frac{n^{3/2}}{(n_2-1)^{3/2}} \frac{2V(x,y)}{\sqrt{2\pi }\sigma } \int _{0}^{+\infty } t^2 \sqrt{\frac{2\pi (n_2-1)}{n-1}} \varphi _{\sqrt{\frac{n_2-1}{n-1}}} (t) \text {d}t \end{aligned}$$

where \(\varphi _{\{\cdot \}}(\cdot )\) is defined in (5.1). So,

$$\begin{aligned} \frac{n^{3/2}}{n_2-1} J_1&= \frac{n^{3/2}}{\sqrt{n-1}(n_2-1)} \frac{2V(x,y)}{\sigma } \frac{n_2-1}{2(n-1)} \\&= \frac{n^{3/2}}{(n-1)^{3/2}} \frac{V(x,y)}{\sigma }. \end{aligned}$$
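The integral evaluated in the two displays above is the Gaussian half-line second moment \(\int _0^{+\infty } t^2 e^{-bt^2/2} \text {d}t = b^{-1}\sqrt{\pi /(2b)}\), applied with \(b = (n-1)/(n_2-1)\); this closed form is what produces the factor \((n-1)^{-3/2}\). A numerical check at an arbitrary illustrative value \(b=3\) (ours, not from the paper):

```python
import math

def half_line_second_moment(b, n=200_000, upper=12.0):
    # Midpoint rule for I(b) = \int_0^infty t^2 exp(-b t^2 / 2) dt; tail beyond `upper` is negligible.
    h = upper / n
    return h * sum(((k + 0.5) * h) ** 2 * math.exp(-b * ((k + 0.5) * h) ** 2 / 2)
                   for k in range(n))

b = 3.0  # stands in for (n - 1) / (n_2 - 1)
numeric = half_line_second_moment(b)
closed_form = math.sqrt(math.pi / (2 * b)) / b  # b^{-1} * sqrt(pi / (2b))
```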

By the point 2 of Proposition 2.1,

$$\begin{aligned} \left|\frac{n^{3/2}}{n_2-1} J_1 - \frac{V(x,y)}{\sigma }\right| \leqslant \frac{c}{n} \left( 1+\max (y,0) \right) . \end{aligned}$$
(7.15)

The lemma follows from (7.14) and (7.15). \(\square \)

Thanks to Lemmas 7.1 and 7.3 we can bound \(E_1\) from (7.2) as follows.

Lemma 7.5

Assume Hypotheses M1–M3. For any \(a > 0\) there exists \(\varepsilon _0 \in (0, 1/4)\) such that for any \(\varepsilon \in (0,\varepsilon _0)\), any non-negative function \(\psi \in {\mathscr {C}}\), any \(y \in {\mathbb {R}}\) and any \(n \in {\mathbb {N}}\) with \(\varepsilon ^3 n \geqslant 1\), we have

$$\begin{aligned}&\sup _{x\in {\mathbb {X}},\, z \geqslant 0} n \left| E_1 - \frac{2a {\varvec{\nu }} \left( \psi \right) V(x,y)}{\sqrt{2\pi }\sigma ^2} \varphi _+ \left( \frac{z}{\sqrt{n}\sigma } \right) \right| \\&\quad \leqslant c \left( 1 + \max (y,0) \right) \left\| \psi \right\| _{\infty } \left( \varepsilon + \frac{c_{\varepsilon }\left( 1+\max (y,0) \right) }{n^{\varepsilon }} \right) , \end{aligned}$$

where \(E_1 = {\mathbb {E}}_x \left( \psi \left( X_{n} \right) ;\, y+S_{n} \in [z,z+a],\, \tau _y > n_1 \right) \), \(n_1 = n- \left\lfloor \varepsilon ^3 n\right\rfloor \) and \(\varphi _+\) is the Rayleigh density function: \(\varphi _+(t) = te^{-\frac{t^2}{2}} \mathbb {1}_{\{t\geqslant 0\}}\).

Proof

From Lemmas 7.1 and 7.3, it follows that

$$\begin{aligned}&n \left| E_1 - \frac{2a {\varvec{\nu }} \left( \psi \right) V(x,y)}{\sqrt{2\pi }\sigma ^2} \varphi _+ \left( \frac{z}{\sqrt{n}\sigma } \right) \right| \\&\quad \leqslant c \left( 1 + \max (y,0) \right) \left\| \psi \right\| _{\infty } \left( \varepsilon + \frac{c_{\varepsilon }}{\sqrt{n}} \right) \\&\qquad + \left|\frac{a {\varvec{\nu }} \left( \psi \right) }{\sigma }\right| \left( c_{\varepsilon } \frac{\left( 1+\max (y,0) \right) ^2}{n^{\varepsilon }} + c \left( 1+\max (y,0) \right) \varepsilon \right) \\&\quad \leqslant c \left( 1 + \max (y,0) \right) \left\| \psi \right\| _{\infty } \left( \varepsilon + \frac{c_{\varepsilon }\left( 1+\max (y,0) \right) }{n^{\varepsilon }} \right) . \end{aligned}$$

\(\square \)

7.2 Control of \(E_2\)

In this section we bound the term \(E_2\) defined by (7.2). To this end, let us recall and introduce some notation: for any \(\varepsilon \in (0,1)\), we consider \(n_2 = \left\lfloor \varepsilon ^3 n\right\rfloor \), \(n_1 = n-n_2 = n-\left\lfloor \varepsilon ^3 n\right\rfloor \), \(n_3 = \left\lfloor \frac{n_2}{2}\right\rfloor \) and \(n_4 = n_2-n_3\). We also define

$$\begin{aligned} E_{21}&:= {\mathbb {E}}_x \left( \psi \left( X_{n} \right) ;\, y+S_{n} \in [z,z+a],\, y+S_{n_1} \leqslant \varepsilon \sqrt{n},\, n_1 < \tau _y \leqslant n \right) \end{aligned}$$
(7.16)
$$\begin{aligned} E_{22}&:= {\mathbb {E}}_x \left( \psi \left( X_{n} \right) ;\, y+S_{n} \in [z,z+a],\, y+S_{n_1} > \varepsilon \sqrt{n},\, n_1 < \tau _y \leqslant n_1+n_3 \right) \end{aligned}$$
(7.17)
$$\begin{aligned} E_{23}&:= {\mathbb {E}}_x \left( \psi \left( X_{n} \right) ;\, y+S_{n} \in [z,z+a],\, y+S_{n_1} > \varepsilon \sqrt{n},\, n_1+n_3 < \tau _y \leqslant n \right) \end{aligned}$$
(7.18)

and we note that

$$\begin{aligned} E_2 = E_{21}+E_{22}+E_{23}. \end{aligned}$$
(7.19)

Lemma 7.6

Assume Hypotheses M1–M3. For any \(a>0\) there exists \(\varepsilon _0 \in (0,1/4)\) such that for any \(\varepsilon \in (0,\varepsilon _0)\), any non-negative function \(\psi \in {\mathscr {C}}\), any \(y \in {\mathbb {R}}\) and any \(n \in {\mathbb {N}}\) with \(\varepsilon ^3 n \geqslant 1\), we have

$$\begin{aligned} \sup _{x\in {\mathbb {X}}, z \geqslant 0} n E_{21} \leqslant c \left\| \psi \right\| _{\infty }\left( 1+ \max (y,0) \right) \left( \sqrt{\varepsilon } + \frac{c_{\varepsilon } \left( 1+\max (y,0) \right) }{n^{\varepsilon }} \right) \end{aligned}$$

where \(E_{21}\) is given as in (7.16) by

$$\begin{aligned} E_{21} = {\mathbb {E}}_x \left( \psi \left( X_{n} \right) ;\, y+S_{n} \in [z,z+a],\, y+S_{n_1} \leqslant \varepsilon \sqrt{n},\, n_1 < \tau _y \leqslant n \right) \end{aligned}$$

and \(n_1 = n-\left\lfloor \varepsilon ^3 n\right\rfloor \).

Proof

Using the Markov property and the uniform bound (5.14) of Corollary 5.5, with \(n_2 = \left\lfloor \varepsilon ^3 n\right\rfloor \),

$$\begin{aligned} E_{21}&= \sum _{x' \in {\mathbb {X}}} \int _{0}^{+\infty } {\mathbb {E}}_{x'} \left( \psi \left( X_{n_2} \right) ;\, y'+S_{n_2} \in [z,z+a],\, \tau _{y'} \leqslant n_2 \right) \\&\quad \times {\mathbb {P}}_x \left( X_{n_1} = x',\, y+S_{n_1} \in \text {d}y',\, y+S_{n_1} \leqslant \varepsilon \sqrt{n},\, \tau _y> n_1 \right) \\&\leqslant \frac{c \left\| \psi \right\| _{\infty }}{\sqrt{n_2}} {\mathbb {P}}_x \left( y+S_{n_1} \leqslant \varepsilon \sqrt{n},\, \tau _y > n_1 \right) . \end{aligned}$$

We note that \(\frac{\varepsilon \sqrt{n}}{\sigma \sqrt{n_1}} \leqslant \frac{\varepsilon }{\sigma \sqrt{1-\varepsilon ^3}} \leqslant \frac{2}{\sigma } \varepsilon \) and so by the point 2 of Proposition 2.3 with \(t_0=2\varepsilon /\sigma \):

$$\begin{aligned} E_{21}&\leqslant \frac{c \left\| \psi \right\| _{\infty }}{\sqrt{n_2}} \left( \frac{cV(x,y)}{\sqrt{n_1}} {\varvec{\Phi }}^+ \left( \frac{\varepsilon \sqrt{n}}{\sigma \sqrt{n_1}} \right) + \frac{c_{\varepsilon } \left( 1+\max (y,0)^2 \right) }{n_1^{1/2+\varepsilon }} \right) . \end{aligned}$$

Using the point 2 of Proposition 2.1 and taking into account that \(n_2 \geqslant \varepsilon ^3 n \left( 1-\frac{c_{\varepsilon }}{n} \right) \), \(n_1 \geqslant n/2\) and that \({\varvec{\Phi }}^+(t) \leqslant {\varvec{\Phi }}^+(t_0) \leqslant \frac{t_0^2}{2}\) for any \(t\in (0,t_0)\),

$$\begin{aligned} n E_{21}&\leqslant \frac{c \left\| \psi \right\| _{\infty }}{\varepsilon ^{3/2}} \left( 1+\frac{c_{\varepsilon }}{n} \right) \left( 1+ \max (y,0) \right) \left( \varepsilon ^2 + \frac{c_{\varepsilon } \left( 1+\max (y,0) \right) }{n^{\varepsilon }} \right) \\&\leqslant c \left\| \psi \right\| _{\infty } \left( 1+ \max (y,0) \right) \left( \sqrt{\varepsilon } + \frac{c_{\varepsilon } \left( 1+\max (y,0) \right) }{n^{\varepsilon }} \right) , \end{aligned}$$

which implies the assertion of the lemma. \(\square \)
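The bound \({\varvec{\Phi }}^+(t) \leqslant t_0^2/2\) used in this proof follows from \(1-e^{-u} \leqslant u\), assuming \({\varvec{\Phi }}^+\) denotes the Rayleigh distribution function \({\varvec{\Phi }}^+(t) = 1-e^{-t^2/2}\), the distribution with density \(\varphi _+\) (this is our reading; the definition is given earlier in the paper). A quick numerical confirmation:

```python
import math

# Check 1 - exp(-t^2/2) <= t^2/2 on a grid of [0, 5]; equality holds only at t = 0.
max_violation = max((1 - math.exp(-t * t / 2)) - t * t / 2 for t in (k / 100 for k in range(501)))
```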

Lemma 7.7

Assume Hypotheses M1–M3. For any \(a>0\) there exists \(\varepsilon _0 \in (0,1/4)\) such that for any \(\varepsilon \in (0,\varepsilon _0)\), any non-negative function \(\psi \in {\mathscr {C}}\), any \(y \in {\mathbb {R}}\) and any \(n \in {\mathbb {N}}\) satisfying \(\varepsilon ^3 n \geqslant 2\), we have

$$\begin{aligned} \sup _{x\in {\mathbb {X}}, z \geqslant 0} n E_{22} \leqslant c\left\| \psi \right\| _{\infty } \left( 1+\max (y,0) \right) \left( e^{-\frac{c}{\varepsilon }} + \frac{c_{\varepsilon }}{n^{\varepsilon }} \right) , \end{aligned}$$

where \(E_{22}\) is given as in (7.17) by

$$\begin{aligned} E_{22} = {\mathbb {E}}_x \left( \psi \left( X_{n} \right) ;\, y+S_{n} \in [z,z+a],\, y+S_{n_1} > \varepsilon \sqrt{n},\, n_1 < \tau _y \leqslant n_1+n_3 \right) \end{aligned}$$

and \(n_1 = n-\left\lfloor \varepsilon ^3 n\right\rfloor \), \(n_2 = \left\lfloor \varepsilon ^3 n\right\rfloor \) and \(n_3 = \left\lfloor \frac{n_2}{2}\right\rfloor \).

Proof

By the Markov property,

$$\begin{aligned} E_{22}&= \sum _{x' \in {\mathbb {X}}} \int _0^{+\infty } \underbrace{{\mathbb {E}}_{x'} \left( \psi \left( X_{n_2} \right) ;\, y'+S_{n_2} \in [z,z+a],\, \tau _{y'} \leqslant n_3 \right) }_{E_{22}'} \nonumber \\&\quad \times {\mathbb {P}}_x \left( X_{n_1} = x',\, y+S_{n_1} \in \text {d}y',\, y+S_{n_1}> \varepsilon \sqrt{n},\, \tau _y > n_1 \right) . \end{aligned}$$
(7.20)

Bound of \(E_{22}'\). By the Markov property and the uniform bound (5.14) in Corollary 5.5, with \(n_4 = n_2 - n_3 = n - n_1-n_3\),

$$\begin{aligned} E_{22}'&= \sum _{x'' \in {\mathbb {X}}} \int _{{\mathbb {R}}} {\mathbb {E}}_{x''} \left( \psi \left( X_{n_4} \right) ;\, y''+S_{n_4} \in [z,z+a] \right) \\&\quad \times {\mathbb {P}}_{x'} \left( X_{n_3} = x'',\, y'+S_{n_3} \in \text {d}y'',\, \tau _{y'} \leqslant n_3 \right) \\&\leqslant \frac{c\left\| \psi \right\| _{\infty }}{\sqrt{n_4}} {\mathbb {P}}_{x'} \left( \tau _{y'} \leqslant n_3 \right) . \end{aligned}$$

Let \((B_t)_{t\geqslant 0}\) be the Brownian motion defined by Proposition 10.4. Denote by \(A_n\) the following event:

$$\begin{aligned} A_n = \left\{ \sup _{t\in [0,1]} \left|S_{\left\lfloor tn\right\rfloor } - \sigma B_{tn}\right| \leqslant n^{1/2-\varepsilon } \right\} , \end{aligned}$$

and by \({\overline{A}}_n\) its complement. We have

$$\begin{aligned} E_{22}' \leqslant \frac{c\left\| \psi \right\| _{\infty }}{\sqrt{n_4}} \left[ {\mathbb {P}}_{x'} \left( \tau _{y'} \leqslant n_3,\, A_{n_3} \right) + {\mathbb {P}}_{x'} \left( \tau _{y'} \leqslant n_3,\, {\overline{A}}_{n_3} \right) \right] . \end{aligned}$$
(7.21)

Note that for any \(x' \in {\mathbb {X}}\) and any \(y' > \varepsilon \sqrt{n}\),

$$\begin{aligned} {\mathbb {P}}_{x'} \left( \tau _{y'} \leqslant n_3,\, A_{n_3} \right) \leqslant {\mathbb {P}} \left( \tau _{y'-n_3^{1/2-\varepsilon }}^{bm} \leqslant n_3 \right) , \end{aligned}$$

where, for any \(y'' > 0\), \(\tau _{y''}^{bm}\) is the exit time of the Brownian motion starting at \(y''\) defined by (10.7). Since \(y' > \varepsilon \sqrt{n}\), we deduce that

$$\begin{aligned} {\mathbb {P}}_{x'} \left( \tau _{y'} \leqslant n_3,\, A_{n_3} \right)&\leqslant {\mathbb {P}} \left( \inf _{t\in [0,1]} \sigma B_{tn_3} \leqslant n_3^{1/2-\varepsilon } - y' \right) \\&\leqslant {\mathbb {P}} \left( \inf _{t\in [0,1]} \sigma B_{tn_3} \leqslant \left( \frac{\varepsilon ^3 n}{2} \right) ^{1/2-\varepsilon } - \varepsilon \sqrt{n} \right) \\&\leqslant {\mathbb {P}} \left( \inf _{t\in [0,1]} \sigma B_{tn_3} \leqslant -\varepsilon \sqrt{n} \left( 1- \frac{\varepsilon ^{1/2-3\varepsilon }}{n^{\varepsilon }} \right) \right) . \end{aligned}$$

Since \(\sqrt{n}/\sqrt{n_3} \geqslant \sqrt{2}/\varepsilon ^{3/2}\),

$$\begin{aligned} {\mathbb {P}}_{x'} \left( \tau _{y'} \leqslant n_3,\, A_{n_3} \right)&\leqslant {\mathbb {P}} \left( \left|\frac{B_{n_3}}{\sqrt{n_3}}\right| \geqslant \frac{\varepsilon \sqrt{n}}{\sigma \sqrt{n_3}} \left( 1- \frac{1}{n^{\varepsilon }} \right) \right) \nonumber \\&\leqslant {\mathbb {P}} \left( \left|B_1\right| \geqslant \frac{\sqrt{2}}{\sigma \sqrt{\varepsilon }} \left( 1- \frac{1}{n^{\varepsilon }} \right) \right) \nonumber \\&\leqslant c e^{-\frac{c}{\varepsilon } \left( 1- \frac{c}{n^{\varepsilon }} \right) }. \end{aligned}$$
(7.22)
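The last step of (7.22) is the standard Gaussian tail estimate \({\mathbb {P}}(|B_1| \geqslant u) \leqslant 2e^{-u^2/2}\), a Chernoff-type bound. Comparing with the exact tail \(\mathrm {erfc}(u/\sqrt{2})\) at a few points (our own check, for illustration):

```python
import math

# Exact two-sided Gaussian tail P(|B_1| >= u) versus the Chernoff bound 2*exp(-u^2/2).
checks = []
for u in (0.5, 1.0, 2.0, 4.0, 8.0):
    exact = math.erfc(u / math.sqrt(2.0))   # P(|B_1| >= u) for standard Brownian motion at time 1
    bound = 2.0 * math.exp(-u * u / 2.0)
    checks.append(exact <= bound)
```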

Therefore, putting together (7.21) and (7.22) and using Proposition 10.4,

$$\begin{aligned} E_{22}' \leqslant \frac{c\left\| \psi \right\| _{\infty }}{\sqrt{n_4}} \left( c e^{-\frac{c}{\varepsilon } \left( 1- \frac{c}{n^{\varepsilon }} \right) } + {\mathbb {P}}_{x'} \left( {\overline{A}}_{n_3} \right) \right) \leqslant \frac{c\left\| \psi \right\| _{\infty }}{\sqrt{n_4}} \left( e^{-\frac{c}{\varepsilon } \left( 1- \frac{c}{n^{\varepsilon }} \right) } + \frac{c_{\varepsilon }}{n_3^{\varepsilon }} \right) . \end{aligned}$$

Since \(n_4 \geqslant n_2/2 \geqslant \frac{\varepsilon ^3 n}{2} \left( 1-\frac{c_{\varepsilon }}{n} \right) \) and \(n_3 \geqslant n_2/2-1 \geqslant \frac{\varepsilon ^3 n}{2} \left( 1-\frac{c_{\varepsilon }}{n} \right) \), we have

$$\begin{aligned} E_{22}' \leqslant \frac{c\left\| \psi \right\| _{\infty }}{\varepsilon ^{3/2} \sqrt{n}} \left( 1+\frac{c_{\varepsilon }}{n} \right) \left( e^{-\frac{c}{\varepsilon }} e^{\frac{c_{\varepsilon }}{n^{\varepsilon }}} + \frac{c_{\varepsilon }}{n^{\varepsilon }} \right) \leqslant \frac{c\left\| \psi \right\| _{\infty }}{\sqrt{n}} \left( e^{-\frac{c}{\varepsilon }} + \frac{c_{\varepsilon }}{n^{\varepsilon }} \right) . \end{aligned}$$
(7.23)

Inserting (7.23) in (7.20) and using the point 2 of Proposition 2.2 and the fact that \(n_1 \geqslant n/2\), we conclude that

$$\begin{aligned} E_{22} \leqslant \frac{c\left\| \psi \right\| _{\infty } \left( 1+\max (y,0) \right) }{n} \left( e^{-\frac{c}{\varepsilon }} + \frac{c_{\varepsilon }}{n^{\varepsilon }} \right) . \end{aligned}$$

\(\square \)

Lemma 7.8

Assume Hypotheses M1–M3. For any \(a>0\) there exists \(\varepsilon _0 \in (0,1/4)\) such that for any \(\varepsilon \in (0,\varepsilon _0)\), any non-negative function \(\psi \in {\mathscr {C}}\), any \(y \in {\mathbb {R}}\) and any \(n \in {\mathbb {N}}\) with \(\varepsilon ^3 n \geqslant 3\), we have

$$\begin{aligned} \sup _{x\in {\mathbb {X}}, z\geqslant 0} nE_{23} \leqslant c \left\| \psi \right\| _{\infty } \left( 1+\max (y,0) \right) \left( \varepsilon + \frac{c_{\varepsilon }}{n^{\varepsilon }} \right) , \end{aligned}$$

where \(E_{23}\) is given as in (7.18) by

$$\begin{aligned} E_{23} = {\mathbb {E}}_x \left( \psi \left( X_{n} \right) ;\, y+S_{n} \in [z,z+a],\, y+S_{n_1} > \varepsilon \sqrt{n},\, n_1+n_3 < \tau _y \leqslant n \right) \end{aligned}$$

and \(n_1 = n-\left\lfloor \varepsilon ^3 n\right\rfloor \), \(n_2 = \left\lfloor \varepsilon ^3 n\right\rfloor \) and \(n_3 = \left\lfloor \frac{n_2}{2}\right\rfloor \).

Proof

By the Markov property,

$$\begin{aligned} E_{23}&\leqslant \sum _{x'\in {\mathbb {X}}} \int _{0}^{+\infty } \underbrace{{\mathbb {E}}_{x'} \left( \psi \left( X_{n_2} \right) ;\, y'+S_{n_2} \in [z,z+a],\, n_3 < \tau _{y'} \leqslant n_2 \right) }_{=:E_{23}'} \nonumber \\&\quad {\mathbb {P}}_{x} \left( X_{n_1} = x',\, y+S_{n_1} \in \text {d}y',\, y+S_{n_1}> \varepsilon \sqrt{n},\, \tau _y > n_1 \right) . \end{aligned}$$
(7.24)

We consider two cases: when \(z \leqslant \frac{\varepsilon \sqrt{n}}{2}\) and when \(z > \frac{\varepsilon \sqrt{n}}{2}\).

Fix first \(0 \leqslant z \leqslant \frac{\varepsilon \sqrt{n}}{2}\). Using Corollary 5.5, we have for any \(y' > \varepsilon \sqrt{n}\),

$$\begin{aligned} E_{23}'&\leqslant {\mathbb {E}}_{x'} \left( \psi \left( X_{n_2} \right) ;\, y'+S_{n_2} \in [z,z+a] \right) \\&\leqslant \frac{a {\varvec{\nu }}(\psi )}{\sqrt{2\pi n_2} \sigma } e^{-\frac{(z-y')^2}{2n_2\sigma ^2}} + \frac{c \left\| \psi \right\| _{\infty }}{\sqrt{n_2}} \left( \frac{1}{\sqrt{n_2}} + \varepsilon ^{5/2} + c_{\varepsilon }e^{-c_{\varepsilon }n_2} \right) \\&\leqslant \frac{c \left\| \psi \right\| _{\infty }}{\varepsilon ^{3/2}\sqrt{n}} \left( 1+\frac{c_{\varepsilon }}{n} \right) \left( e^{-\frac{\varepsilon ^2 n}{8n_2\sigma ^2}} + \frac{c_{\varepsilon }}{\sqrt{n}} + \varepsilon ^{5/2} + c_{\varepsilon }e^{-c_{\varepsilon }n} \right) \\&\leqslant \frac{c \left\| \psi \right\| _{\infty }}{\varepsilon ^{3/2}\sqrt{n}} \left( 1+\frac{c_{\varepsilon }}{n} \right) \left( e^{-\frac{c}{\varepsilon }} + \frac{c_{\varepsilon }}{\sqrt{n}} + \varepsilon ^{5/2} \right) . \end{aligned}$$

So, when \(0 \leqslant z \leqslant \frac{\varepsilon \sqrt{n}}{2}\), we have

$$\begin{aligned} E_{23}' \leqslant \frac{c \left\| \psi \right\| _{\infty }}{\sqrt{n}} \left( \frac{c_{\varepsilon }}{\sqrt{n}} + \varepsilon \right) . \end{aligned}$$
(7.25)

Assume now that \(z > \frac{\varepsilon \sqrt{n}}{2}\). Using Lemma 3.2 with \({\mathfrak {m}} = {\varvec{\delta }}_{x'}\) and

$$\begin{aligned}&F(x_1,\dots ,x_{n_2}) \\&\quad = \psi (x_{n_2}) \mathbb {1}_{\left\{ y'+f(x_1)+\cdots +f(x_{n_2}) \in [z,z+a],\, \exists k \in \{ n_3+1, \dots , n_2-1\},\, y'+f(x_1)+\cdots +f(x_k) \leqslant 0 \right\} }, \end{aligned}$$

we obtain

$$\begin{aligned} E_{23}'&:= {\mathbb {E}}_{x'} \left( \psi \left( X_{n_2} \right) ;\, y'+S_{n_2} \in [z,z+a],\, n_3 < \tau _{y'} \leqslant n_2 \right) \\&\leqslant {\mathbb {E}}_{{\varvec{\nu }}}^* \left( \psi \left( X_1^* \right) \frac{\mathbb {1}_{\{x'\}}\left( X_{n_2+1}^* \right) }{{\varvec{\nu }} \left( X_{n_2+1}^* \right) };\, y'+f\left( X_{n_2}^* \right) + \cdots + f\left( X_1^* \right) \in [z,z+a],\, \right. \\&\quad \quad \left. \exists k \in \{ n_3+1, \dots , n_2-1\},\, y'+f\left( X_{n_2}^* \right) +\cdots +f \left( X_{n_2-k+1}^* \right) \leqslant 0 \right) . \end{aligned}$$

By the Markov property,

$$\begin{aligned} E_{23}'&\leqslant \left\| \psi \right\| _{\infty } {\mathbb {E}}_{{\varvec{\nu }}}^* \left( \psi _{x'}^*\left( X_{n_2}^* \right) ;\, y'+f\left( X_{n_2}^* \right) + \cdots + f\left( X_1^* \right) \in [z,z+a],\, \right. \\&\quad \left. \exists k \in \{ n_3+1, \dots , n_2-1 \},\, y'+f\left( X_{n_2}^* \right) +\cdots +f \left( X_{n_2-k+1}^* \right) \leqslant 0 \right) , \end{aligned}$$

where \(\psi _{x'}^*\) is the function on \({\mathbb {X}}\) defined by equation (6.2). We note that, on the event \(\left\{ y'+f\left( X_{n_2}^* \right) + \cdots + f\left( X_1^* \right) \in \left[ z,z+a \right] \right\} = \left\{ z + S_{n_2}^* \in \left[ y'-a,y' \right] \right\} \), we have

$$\begin{aligned}&\left\{ \exists k \in \{ n_3+1, \dots , n_2-1\},\, y'+f\left( X_{n_2}^* \right) +\cdots +f \left( X_{n_2-k+1}^* \right) \leqslant 0 \right\} \\&\quad \subset \left\{ \exists k \in \{ n_3+1, \dots , n_2-1 \},\, z-f \left( X_{n_2-k}^* \right) -\cdots -f\left( X_1^* \right) \leqslant 0 \right\} \\&\quad = \left\{ \tau _z^* \leqslant n_2-n_3-1 \right\} . \end{aligned}$$

Consequently,

$$\begin{aligned} E_{23}' \leqslant c\left\| \psi \right\| _{\infty } {\mathbb {P}}_{{\varvec{\nu }}}^* \left( z+S_{n_2}^* \in [y'-a,y'],\, \tau _z^* \leqslant n_4-1 \right) , \end{aligned}$$

with \(n_4 = n_2-n_3 = \left\lfloor \varepsilon ^3 n\right\rfloor - \left\lfloor \frac{\varepsilon ^3 n}{2}\right\rfloor \geqslant \frac{\varepsilon ^3n}{2} \left( 1-\frac{c_{\varepsilon }}{n} \right) \). Proceeding in the same way as for the term \(E_{22}'\) in (7.23) and using the fact that \(z\) is larger than \(c\varepsilon \sqrt{n}\), we have

$$\begin{aligned} E_{23}' \leqslant \frac{c\left\| \psi \right\| _{\infty }}{\sqrt{n}} \left( e^{-\frac{c}{\varepsilon }} + \frac{c_{\varepsilon }}{n^{\varepsilon }} \right) . \end{aligned}$$
(7.26)

Putting together (7.25) and (7.26), for any \(z \geqslant 0\), we obtain

$$\begin{aligned} E_{23}' \leqslant \frac{c\left\| \psi \right\| _{\infty }}{\sqrt{n}} \left( \varepsilon + \frac{c_{\varepsilon }}{n^{\varepsilon }} \right) . \end{aligned}$$

Inserting this bound in (7.24) and using the point 2 of Proposition 2.2, we conclude that

$$\begin{aligned} E_{23} \leqslant \frac{c \left\| \psi \right\| _{\infty } \left( 1+\max (y,0) \right) }{n} \left( \varepsilon + \frac{c_{\varepsilon }}{n^{\varepsilon }} \right) . \end{aligned}$$

\(\square \)

Putting together Lemmas 7.6, 7.7 and 7.8, by (7.19), we obtain the following bound for \(E_2\):

Lemma 7.9

Assume Hypotheses M1–M3. For any \(a>0\) there exists \(\varepsilon _0 \in (0,1/4)\) such that for any \(\varepsilon \in (0,\varepsilon _0)\), any non-negative function \(\psi \in {\mathscr {C}}\), any \(y \in {\mathbb {R}}\) and any \(n \in {\mathbb {N}}\) with \(\varepsilon ^3 n \geqslant 3\), we have

$$\begin{aligned} \sup _{x\in {\mathbb {X}}, z\geqslant 0} nE_2 \leqslant c \left\| \psi \right\| _{\infty } \left( 1+\max (y,0) \right) \left( \sqrt{\varepsilon } + \frac{c_{\varepsilon } \left( 1+\max (y,0) \right) }{n^{\varepsilon }} \right) , \end{aligned}$$

where \(E_2\) is given as in (7.2) by

$$\begin{aligned} E_2 = {\mathbb {E}}_x \left( \psi \left( X_n \right) ;\, y+S_n \in [z,z+a],\, n_1 < \tau _y \leqslant n \right) \end{aligned}$$

and \(n_1 = n-\left\lfloor \varepsilon ^3 n\right\rfloor \).

7.3 Proof of Theorem 2.4

By (7.1) and (7.2),

$$\begin{aligned} {\mathbb {E}}_x \left( \psi \left( X_{n} \right) ;\, y+S_{n} \in [z,z+a],\, \tau _y > n \right) = E_1+E_2. \end{aligned}$$

Lemma 7.5 estimates \(E_1\) and Lemma 7.9 bounds \(E_2\). Theorem 2.4 follows by combining these two lemmas.

8 Proof of Theorem 2.5

8.1 Preliminary results

Lemma 8.1

Assume Hypotheses M1–M3. For any \(a>0\) and \(p \in {\mathbb {N}}^*\), there exists \(\varepsilon _0 \in (0,1/4)\) such that for any \(\varepsilon \in (0,\varepsilon _0)\) there exists \(n_0(\varepsilon ) \geqslant 1\) such that for any non-negative function \(\psi \in {\mathscr {C}}\), any \(y' >0\), \(z \geqslant 0\), \(k\in \{0, \dots , p-1\}\) and \(n \geqslant n_0(\varepsilon )\), we have

$$\begin{aligned} \sup _{x' \in {\mathbb {X}}} E_k'&\leqslant \frac{2a}{\sqrt{2\pi }p(n_2-1)\sigma ^2} \varphi _+ \left( \frac{y'}{\sigma \sqrt{n_2-1}} \right) \\&\quad \quad {\mathbb {E}}_{{\varvec{\nu }}}^* \left( \psi \left( X_1^* \right) V^*\left( X_1^*, z_k+\frac{a}{p}+S_1^* \right) \,;\, \tau _{z_k+\frac{a}{p}}^* > 1 \right) \\&\quad + \frac{c \left\| \psi \right\| _{\infty }}{n} (1+z)\left( \varepsilon + \frac{c_{\varepsilon }\left( 1+z \right) }{n^{\varepsilon ^8}} \right) \end{aligned}$$

and

$$\begin{aligned} \inf _{x' \in {\mathbb {X}}} E_k'&\geqslant \frac{2a}{\sqrt{2\pi }p(n_2-1)\sigma ^2} \varphi _+ \left( \frac{y'}{\sigma \sqrt{n_2-1}} \right) \\&\qquad {\mathbb {E}}_{{\varvec{\nu }}}^* \left( \psi \left( X_1^* \right) V^*\left( X_1^*, z_k+S_1^* \right) ;\, \tau _{z_k}^* > 1 \right) \\&\quad - \frac{c \left\| \psi \right\| _{\infty }}{n} (1+z)\left( \varepsilon + \frac{c_{\varepsilon }\left( 1+z \right) }{n^{\varepsilon ^8}} \right) \end{aligned}$$

where \(E_k' = {\mathbb {E}}_{x'} \left( \psi \left( X_{n_2} \right) ;\, y'+S_{n_2} \in \left( z_k,z_k+\frac{a}{p} \right] ,\, \tau _{y'} > n_2 \right) \), \(z_k =z+\frac{ka}{p}\) and \(n_2 = \left\lfloor \varepsilon ^3 n\right\rfloor \).

Proof

Using Lemma 3.2 with \({\mathfrak {m}} = {\varvec{\delta }}_{x'}\) and

$$\begin{aligned}&F(x_1,\dots ,x_{n_2}) \\&\quad = \psi (x_{n_2}) \mathbb {1}_{\left\{ y'+f(x_1) \dots +f(x_{n_2}) \in \left( z_k,z_k+\frac{a}{p} \right] ,\, \forall i \in \{ 1, \dots , n_2 \},\, y'+f(x_1)+\cdots +f(x_i) > 0 \right\} }, \end{aligned}$$

we have

$$\begin{aligned} E_k'&= {\mathbb {E}}_{{\varvec{\nu }}}^* \left( \psi \left( X_1^* \right) \psi _{x'}^*\left( X_{n_2}^* \right) ;\, y'+f\left( X_{n_2}^* \right) + \cdots + f\left( X_1^* \right) \in \left( z_k,z_k+\frac{a}{p} \right] ,\, \right. \\&\qquad \left. \forall i \in \{ 1, \dots , n_2 \},\, y'+f\left( X_{n_2}^* \right) +\cdots +f \left( X_{n_2-i+1}^* \right) > 0 \right) . \end{aligned}$$

where \(\psi _{x'}^*\) is the function defined on \({\mathbb {X}}\) by (6.2).

The upper bound. Note that, on the event \(\left\{ y'+f\left( X_{n_2}^* \right) + \cdots + f\left( X_1^* \right) \in \right. \left. \left( z_k,z_k+\frac{a}{p} \right] \right\} = \left\{ z_k +\frac{a}{p} + S_{n_2}^* \in \left[ y',y'+\frac{a}{p} \right) \right\} \), we have

$$\begin{aligned}&\left\{ \forall i \in \{ 1, \dots , n_2 \},\, y'+f\left( X_{n_2}^* \right) +\cdots +f \left( X_{n_2-i+1}^* \right)> 0, y'>0 \right\} \nonumber \\&\quad \subset \left\{ \forall i \in \{ 1, \dots , n_2-1 \},\, z_k +\frac{a}{p}-f \left( X_{n_2-i}^* \right) -\cdots -f\left( X_1^* \right)> 0, \right. \nonumber \\&\quad \quad \left. z_k +\frac{a}{p} + S_{n_2}^*>0 \right\} \nonumber \\&\quad = \left\{ \tau _{z_k+\frac{a}{p}}^* > n_2 \right\} . \end{aligned}$$
(8.1)

So, for any \(y'>0\),

$$\begin{aligned} E_k'&\leqslant {\mathbb {E}}_{{\varvec{\nu }}}^* \left( \psi \left( X_1^* \right) \psi _{x'}^*\left( X_{n_2}^* \right) ;\, z_k +\frac{a}{p} + S_{n_2}^* \in \left[ y',y'+\frac{a}{p} \right) ,\, \tau _{z_k+\frac{a}{p}}^*> n_2 \right) \\&\leqslant \sum _{x'' \in {\mathbb {X}}} \int _{0}^{+\infty } \psi \left( x'' \right) {\mathbb {E}}_{x''}^* \left( \psi _{x'}^*\left( X_{n_2-1}^* \right) \,;\, z'' \right. \\&\left. \qquad + S_{n_2-1}^* \in \left[ y',y'+\frac{a}{p} \right] ,\, \tau _{z''}^*> n_2-1 \right) \\&\quad \times {\mathbb {P}}_{{\varvec{\nu }}}^* \left( X_1^* = \text {d}x'',\, z_k +\frac{a}{p} +S_1^* \in \text {d}z'',\, \tau _{z_k +\frac{a}{p}}^* > 1 \right) . \end{aligned}$$

Using Theorem 2.4 for the reversed chain with \(\varepsilon '=\varepsilon ^{8}\), we obtain that

$$\begin{aligned} E_k' \leqslant&\frac{2a {\varvec{\nu }} \left( \psi _{x'}^* \right) }{\sqrt{2\pi }(n_2-1)p\sigma ^2} \varphi _+ \left( \frac{y'}{\sqrt{n_2-1}\sigma } \right) \sum _{x'' \in {\mathbb {X}}} \int _{0}^{+\infty } \psi \left( x'' \right) V^* \left( x'',z'' \right) \\&\times {\mathbb {P}}_{{\varvec{\nu }}}^* \left( X_1^* = \text {d}x'',\, z_k +\frac{a}{p} +S_1^* \in \text {d}z'',\,\tau _{z_k +\frac{a}{p}}^*> 1 \right) \\&+ \frac{c \left\| \psi _{x'}^*\right\| _{\infty } \left\| \psi \right\| _{\infty }}{n_2-1} {\mathbb {E}}_{{\varvec{\nu }}}^* \left( \left( 1 + \max \left( z_k +\frac{a}{p} +S_1^*,0 \right) \right) \right. \\&\times \left. \left( \sqrt{\varepsilon ^{8}} + \frac{c_{\varepsilon }\left( 1+\max \left( z_k +\frac{a}{p} +S_1^*,0 \right) \right) }{(n_2-1)^{\varepsilon ^{8}}} \right) ,\, \tau _{z_k +\frac{a}{p}}^* > 1 \right) . \end{aligned}$$

Note that by (6.2), \({\varvec{\nu }} \left( \psi _{x'}^* \right) = 1\) and \(\left\| \psi _{x'}^*\right\| _{\infty } \leqslant c\). So,

$$\begin{aligned} E_k'&\leqslant \frac{2a}{\sqrt{2\pi }(n_2-1)p\sigma ^2} \varphi _+ \left( \frac{y'}{\sqrt{n_2-1}\sigma } \right) \\&\quad \times {\mathbb {E}}_{{\varvec{\nu }}}^* \left( \psi \left( X_1^* \right) V^* \left( X_1^*,z_k +\frac{a}{p} +S_1^* \right) ,\, \tau _{z_k +\frac{a}{p}}^* > 1 \right) \\&\quad + \frac{c \left\| \psi \right\| _{\infty }}{\varepsilon ^3 n} \left( 1+\frac{c_{\varepsilon }}{n} \right) (1+z)\left( \varepsilon ^4 + \frac{c_{\varepsilon }\left( 1+z \right) }{n^{\varepsilon ^8}} \right) \end{aligned}$$

and the upper bound of the lemma is proved.

The lower bound. As in the proof of the upper bound, we note that, on the event \(\left\{ y'+f\left( X_{n_2}^* \right) + \cdots + f\left( X_1^* \right) \in \left( z_k,z_k+\frac{a}{p} \right] \right\} = \left\{ z_k + S_{n_2}^* \in \left[ y'-\frac{a}{p},y' \right) \right\} \), we have

$$\begin{aligned}&\left\{ \forall i \in \{ 1, \dots , n_2 \},\, y'+f\left( X_{n_2}^* \right) +\cdots +f \left( X_{n_2-i+1}^* \right)> 0 \right\} \nonumber \\&\quad \supset \left\{ \forall i \in \{ 1, \dots , n_2-1 \},\, z_k -f \left( X_{n_2-i}^* \right) -\cdots -f\left( X_1^* \right)> 0 \right\} \nonumber \\&\quad = \left\{ \tau _{z_k}^*> n_2-1 \right\} \supset \left\{ \tau _{z_k}^* > n_2 \right\} . \end{aligned}$$
(8.2)

Let \(y_+' := \max (y'-a/p,0)\) and \(a' := \min (y',a/p) \in (0,a]\). For any \(\eta \in (0,a')\),

$$\begin{aligned} E_k'&\geqslant {\mathbb {E}}_{{\varvec{\nu }}}^* \left( \psi \left( X_1^* \right) \psi _{x'}^*\left( X_{n_2}^* \right) ;\, z_k + S_{n_2}^* \in \left[ y'-\frac{a}{p},y' \right) ,\, \tau _{z_k}^*> n_2 \right) \\&\geqslant \sum _{x'' \in {\mathbb {X}}} \int _{0}^{+\infty } \psi \left( x'' \right) {\mathbb {E}}_{x''}^* \left( \psi _{x'}^*\left( X_{n_2-1}^* \right) \,;\, z'' \right. \\&\left. \quad + S_{n_2-1}^* \in \left[ y_+',y_+'+a'-\eta \right] ,\, \tau _{z''}^*> n_2-1 \right) \\&\quad \times {\mathbb {P}}_{{\varvec{\nu }}}^* \left( X_1^* = \text {d}x'',\, z_k +S_1^* \in \text {d}z'',\, \tau _{z_k}^* > 1 \right) . \end{aligned}$$

Using Theorem 2.4,

$$\begin{aligned} E_k' \geqslant&\frac{2(a'-\eta ) {\varvec{\nu }} \left( \psi _{x'}^* \right) }{\sqrt{2\pi }(n_2-1)\sigma ^2} \varphi _+ \left( \frac{y_+'}{\sqrt{n_2-1}\sigma } \right) \sum _{x'' \in {\mathbb {X}}} \int _{0}^{+\infty } \psi \left( x'' \right) V^* \left( x'',z'' \right) \\&\times {\mathbb {P}}_{{\varvec{\nu }}}^* \left( X_1^* = \text {d}x'',\, z_k +S_1^* \in \text {d}z'',\, \tau _{z_k}^*> 1 \right) \\&- \frac{c \left\| \psi _{x'}^*\right\| _{\infty } \left\| \psi \right\| _{\infty }}{n_2-1} {\mathbb {E}}_{{\varvec{\nu }}}^* \left( \left( 1 + \max \left( z_k +S_1^*,0 \right) \right) \right. \\&\times \left. \left( \sqrt{\varepsilon ^{8}} + \frac{c_{\varepsilon }\left( 1+\max \left( z_k +S_1^*,0 \right) \right) }{(n_2-1)^{\varepsilon ^{8}}} \right) ,\, \tau _{z_k}^*> 1 \right) \\ \geqslant&\frac{2(a'-\eta )}{\sqrt{2\pi }(n_2-1)\sigma ^2} \varphi _+ \left( \frac{y_+'}{\sqrt{n_2-1}\sigma } \right) {\mathbb {E}}_{{\varvec{\nu }}}^* \left( \psi \left( X_1^* \right) V^* \left( X_1^*,z_k +S_1^* \right) ,\, \tau _{z_k}^* > 1 \right) \\&- \frac{c \left\| \psi \right\| _{\infty }}{\varepsilon ^3 n} \left( 1+\frac{c_{\varepsilon }}{n} \right) (1+z)\left( \varepsilon ^4 + \frac{c_{\varepsilon }\left( 1+z \right) }{n^{\varepsilon ^8}} \right) . \end{aligned}$$

Note that, if \(y' \geqslant a/p\) we have

$$\begin{aligned} (a'-\eta ) \varphi _+ \left( \frac{y_+'}{\sqrt{n_2-1}\sigma } \right)&= \left( \frac{a}{p}-\eta \right) \varphi _+ \left( \frac{y'-\frac{a}{p}}{\sqrt{n_2-1}\sigma } \right) \\&\geqslant \left( \frac{a}{p}-\eta \right) \varphi _+ \left( \frac{y'}{\sqrt{n_2-1}\sigma } \right) - \left\| \varphi _+'\right\| _{\infty }\frac{a^2}{p^2 \sqrt{n_2-1}\sigma } \end{aligned}$$

and if \(0 < y' \leqslant a/p\) we have

$$\begin{aligned}&(a'-\eta ) \varphi _+ \left( \frac{y_+'}{\sqrt{n_2-1}\sigma } \right) = 0 \\&\quad \geqslant \left( \frac{a}{p}-\eta \right) \varphi _+ \left( \frac{y'}{\sqrt{n_2-1}\sigma } \right) - \left\| \varphi _+'\right\| _{\infty }\frac{a y'}{p \sqrt{n_2-1}\sigma } \\&\quad \geqslant \left( \frac{a}{p}-\eta \right) \varphi _+ \left( \frac{y'}{\sqrt{n_2-1}\sigma } \right) - \left\| \varphi _+'\right\| _{\infty }\frac{a^2}{p^2 \sqrt{n_2-1}\sigma }. \end{aligned}$$

Moreover, using the points 1 and 2 of Proposition 2.1, we observe that

$$\begin{aligned} {\mathbb {E}}_{{\varvec{\nu }}}^* \left( \psi \left( X_1^* \right) V^* \left( X_1^* ,z_k +S_1^* \right) ,\, \tau _{z_k}^* > 1 \right) \leqslant c \left\| \psi \right\| _{\infty } \left( 1+z \right) . \end{aligned}$$

Consequently, for any \(y' > 0\),

$$\begin{aligned} E_k'&\geqslant \frac{2\left( \frac{a}{p}-\eta \right) }{\sqrt{2\pi }(n_2-1)\sigma ^2} \varphi _+ \left( \frac{y'}{\sqrt{n_2-1}\sigma } \right) {\mathbb {E}}_{{\varvec{\nu }}}^* \left( \psi \left( X_1^* \right) V^* \left( X_1^*,z_k +S_1^* \right) ,\, \tau _{z_k}^* > 1 \right) \\&\quad - \frac{c_{\varepsilon } \left\| \psi \right\| _{\infty }}{n^{3/2}} \left( 1+z \right) - \frac{c \left\| \psi \right\| _{\infty }}{n} (1+z)\left( \varepsilon + \frac{c_{\varepsilon }\left( 1+z \right) }{n^{\varepsilon ^8}} \right) . \end{aligned}$$

Taking the limit as \(\eta \rightarrow 0\), the lower bound of the lemma follows. \(\square \)

Lemma 8.2

Assume Hypotheses M1–M3. For any \(a>0\) and \(p \in {\mathbb {N}}^*\), there exists \(\varepsilon _0 \in (0,1/4)\) such that for any \(\varepsilon \in (0,\varepsilon _0)\) there exists \(n_0(\varepsilon ) \geqslant 1\) such that for any non-negative function \(\psi \in {\mathscr {C}}\), any \(y \in {\mathbb {R}}\), \(z \geqslant 0\) and \(n \geqslant n_0(\varepsilon )\), we have

$$\begin{aligned} \sup _{x \in {\mathbb {X}}} n^{3/2} E_0&\leqslant \frac{2aV(x,y)}{p\sqrt{2\pi }\sigma ^3} \sum _{k=0}^{p-1} {\mathbb {E}}_{{\varvec{\nu }}}^* \left( \psi \left( X_1^* \right) V^*\left( X_1^*, z_k+\frac{a}{p}+S_1^* \right) ;\, \tau _{z_k+\frac{a}{p}}^* > 1 \right) \\&\quad + p c \left\| \psi \right\| _{\infty } (1+z)\left( 1+\max (y,0) \right) \left( \varepsilon + \frac{c_{\varepsilon }\left( 1+z+\max (y,0) \right) }{n^{\varepsilon ^8}} \right) \end{aligned}$$

and

$$\begin{aligned} \inf _{x \in {\mathbb {X}}} n^{3/2} E_0&\geqslant \frac{2aV(x,y)}{p\sqrt{2\pi }\sigma ^3} \sum _{k=0}^{p-1} {\mathbb {E}}_{{\varvec{\nu }}}^* \left( \psi \left( X_1^* \right) V^*\left( X_1^*, z_k+S_1^* \right) ;\, \tau _{z_k}^* > 1 \right) \\&\quad - p c \left\| \psi \right\| _{\infty } (1+z)\left( 1+\max (y,0) \right) \left( \varepsilon + \frac{c_{\varepsilon }\left( 1+z+\max (y,0) \right) }{n^{\varepsilon ^8}} \right) \end{aligned}$$

where \(E_0 = {\mathbb {E}}_{x} \left( \psi \left( X_n \right) ;\, y+S_n \in \left( z,z+a \right] ,\, \tau _y > n \right) \) and for any \(k\in \{0,\dots ,p-1\}\), \(z_k = z+\frac{ka}{p}\).

Proof

Set \(n_1 = n-\left\lfloor \varepsilon ^3 n\right\rfloor \) and \(n_2 = \left\lfloor \varepsilon ^3n\right\rfloor \). By the Markov property, for any \(p \geqslant 1\),

$$\begin{aligned} E_0&= \sum _{x'\in {\mathbb {X}}} \int _{0}^{+\infty } {\mathbb {E}}_{x'} \left( \psi \left( X_{n_2} \right) ;\, y'+S_{n_2} \in \left( z,z+a \right] ,\, \tau _{y'}> n_2 \right) \\&\quad \times {\mathbb {P}}_x \left( X_{n_1} = \text {d}x',\, y+S_{n_1} \in \text {d}y',\, \tau _y> n_1 \right) \\&= \sum _{x'\in {\mathbb {X}}} \int _{0}^{+\infty } \sum _{k=0}^{p-1} \; E_k' \; \times {\mathbb {P}}_x \left( X_{n_1} = \text {d}x',\, y+S_{n_1} \in \text {d}y',\, \tau _y > n_1 \right) , \end{aligned}$$

where for any \(k \in \{ 0, \dots , p-1 \}\),

$$\begin{aligned} E_k' = {\mathbb {E}}_{x'} \left( \psi \left( X_{n_2} \right) ;\, y'+S_{n_2} \in \left( z_k,z_k+\frac{a}{p} \right] ,\, \tau _{y'} > n_2 \right) \end{aligned}$$

and \(z_k = z+\frac{ka}{p}\).
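The decomposition above relies only on the Markov property and on the fact that the sub-intervals \(\left( z_k,z_k+\frac{a}{p} \right] \), \(0 \leqslant k \leqslant p-1\), partition \((z,z+a]\). As a sanity check, the sketch below verifies exactly, in the degenerate i.i.d. case of a \(\pm 1\) walk (a hypothetical stand-in for the Markov walk of the lemma), that the mass the killed walk puts on \((z,z+a]\) equals the sum of the masses of the \(p\) sub-intervals; the law on \(\{\tau _y > n\}\) is computed by dynamic programming with exact rationals.

```python
from fractions import Fraction

def survival_law(y, n):
    """Exact law of y+S_n on the event {tau_y > n} for a +/-1 walk with
    probability 1/2 each way, via dynamic programming on the positive integers."""
    dist = {y: Fraction(1)}
    for _ in range(n):
        new = {}
        for j, pr in dist.items():
            for j2 in (j - 1, j + 1):
                if j2 > 0:                      # paths touching 0 are killed
                    new[j2] = new.get(j2, Fraction(0)) + pr / 2
        dist = new
    return dist

z, a, p, y, n = 2, 6, 3, 3, 12                  # illustrative parameters
dist = survival_law(y, n)
# mass of (z, z+a] versus the sum over the p sub-intervals (z_k, z_k + a/p]
total = sum(pr for j, pr in dist.items() if z < j <= z + a)
parts = [sum(pr for j, pr in dist.items()
             if z + Fraction(k * a, p) < j <= z + Fraction((k + 1) * a, p))
         for k in range(p)]
```

Because the arithmetic is exact, `total == sum(parts)` holds as an identity, mirroring the regrouping of \(E_0\) into the \(E_k'\).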

The upper bound. By Lemma 8.1,

$$\begin{aligned} E_0&\leqslant \frac{2a}{p(n_2-1)\sqrt{2\pi }\sigma ^2} \sum _{k=0}^{p-1} {\mathbb {E}}_x \left( \varphi _+ \left( \frac{y+S_{n_1}}{\sigma \sqrt{n_2-1}} \right) ;\, \tau _y> n_1 \right) J_1(k) \\&\quad + \sum _{k=0}^{p-1} \frac{c \left\| \psi \right\| _{\infty }}{n} (1+z)\left( \varepsilon + \frac{c_{\varepsilon }\left( 1+z \right) }{n^{\varepsilon ^8}} \right) {\mathbb {P}}_x \left( \tau _y > n_1 \right) , \end{aligned}$$

where \(J_1(k)= {\mathbb {E}}_{{\varvec{\nu }}}^* \left( \psi \left( X_1^* \right) V^*\left( X_1^*, z_k+\frac{a}{p}+S_1^* \right) ;\, \tau _{z_k+\frac{a}{p}}^* > 1 \right) \), for any \(k\in \{0,\dots ,p-1\}\). By Lemma 7.4 and the point 2 of Proposition 2.2,

$$\begin{aligned} n^{3/2} E_0&\leqslant \frac{2a}{p\sqrt{2\pi }\sigma ^2} \sum _{k=0}^{p-1} J_1(k) \frac{V(x,y)}{\sigma } \\&\quad + \frac{1}{p} \sum _{k=0}^{p-1} J_1(k) \left( \frac{c_{\varepsilon }\left( 1+\max (y,0) \right) ^2}{n^{\varepsilon }} + c \left( 1+\max (y,0)\right) \varepsilon \right) \\&\quad + p c \left\| \psi \right\| _{\infty }(1+z)\left( \varepsilon + \frac{c_{\varepsilon }\left( 1+z \right) }{n^{\varepsilon ^8}} \right) \left( 1+\max (y,0) \right) . \end{aligned}$$

Note that, using the points 1 and 2 of Proposition 2.1, we have

$$\begin{aligned} \frac{1}{p} \sum _{k=0}^{p-1} J_1(k) \leqslant c \left\| \psi \right\| _{\infty } (1+z). \end{aligned}$$

Therefore

$$\begin{aligned} n^{3/2} E_0&\leqslant \frac{2aV(x,y)}{p\sqrt{2\pi }\sigma ^3} \sum _{k=0}^{p-1} J_1(k) \\&\quad + p c \left\| \psi \right\| _{\infty } (1+z)\left( 1+\max (y,0) \right) \left( \varepsilon + \frac{c_{\varepsilon }\left( 1+z+\max (y,0) \right) }{n^{\varepsilon ^8}} \right) \end{aligned}$$

and the upper bound of the lemma is proved.

The lower bound. The proof of the lower bound is similar to that of the upper bound and will therefore not be detailed. \(\square \)

8.2 Proof of Theorem 2.5

The second point of Theorem 2.5 was proved in Lemma 6.2. It remains to prove the first point. Let \(\psi \in {\mathscr {C}}\), \(a>0\), \(x\in {\mathbb {X}}\), \(y \in {\mathbb {R}}\) and \(z \geqslant 0\). Suppose first that \(z>0\). For any \(n \geqslant 1\) and \(\eta \in (0,\min (z,1))\),

$$\begin{aligned} {\mathbb {E}}_x \left( \psi \left( X_{n} \right) ;\, y+S_{n} \in [z,z+a],\, \tau _y > n \right) \leqslant E_0(\eta ), \end{aligned}$$
(8.3)

where \(E_0(\eta ) = {\mathbb {E}}_{x} \left( \psi \left( X_n \right) ;\, y+S_n \in \left( z-\eta ,z+a \right] ,\, \tau _y > n \right) \). Taking the limit as \(n\rightarrow +\infty \) in Lemma 8.2, we have, for any \(p \in {\mathbb {N}}^*\) and \(\varepsilon \in (0, \varepsilon _0(p))\),

$$\begin{aligned}&\limsup _{n\rightarrow +\infty } n^{3/2} E_0(\eta ) \\&\quad \leqslant \frac{2(a+\eta )V(x,y)}{\sqrt{2\pi }p\sigma ^3} \sum _{k=0}^{p-1}{\mathbb {E}}_{{\varvec{\nu }}}^* ( \psi (X_1^*) V^*( X_1^*, z_{k,\eta } \\&\qquad +\frac{a+\eta }{p}+S_1^* ) ;\, \tau _{z_{k,\eta }+\frac{a+\eta }{p}}^* > 1 ) \\&\qquad + p c \left\| \psi \right\| _{\infty } (1+z-\eta )\left( 1+\max (y,0) \right) \varepsilon , \end{aligned}$$

with \(z_{k,\eta } = z-\eta +\frac{k(a+\eta )}{p}\) for \(k\in \{0,\dots ,p-1\}\). Taking the limit as \(\varepsilon \rightarrow 0\),

$$\begin{aligned}&\limsup _{n\rightarrow +\infty } n^{3/2} E_0(\eta ) \\&\quad \leqslant \frac{2(a+\eta )V(x,y)}{\sqrt{2\pi }p\sigma ^3} \sum _{k=0}^{p-1}{\mathbb {E}}_{{\varvec{\nu }}}^* (\psi (X_1^*) V^*( X_1^*, z_{k,\eta } \\&\qquad +\frac{a+\eta }{p}+S_1^*) ;\, \tau _{z_{k,\eta }+\frac{a+\eta }{p}}^* > 1). \end{aligned}$$

By the point 2 of Proposition 2.1, the function \(u \mapsto V^*\left( x^*, u-f(x^*) \right) \mathbb {1}_{\left\{ u-f(x^*) >0 \right\} }\) is monotone, hence Riemann integrable. Since \({\mathbb {X}}\) is finite, we have

$$\begin{aligned}&\lim _{p\rightarrow +\infty } \frac{a+\eta }{p} \sum _{k=0}^{p-1} {\mathbb {E}}_{{\varvec{\nu }}}^* \left( \psi \left( X_1^* \right) V^*\left( X_1^*, z_{k,\eta } +\frac{a+\eta }{p}+S_1^* \right) ;\, \tau _{z_{k,\eta } +\frac{a+\eta }{p}}^*> 1 \right) \\&\quad = {\mathbb {E}}_{{\varvec{\nu }}}^* \left( \psi \left( X_1^* \right) \int _{z-\eta }^{z+a} V^*\left( X_1^*, z'+S_1^* \right) \mathbb {1}_{\left\{ z'+S_1^*>0 \right\} } \text {d}z' \right) \\&\quad = \int _{z-\eta }^{z+a} {\mathbb {E}}_{{\varvec{\nu }}}^* \left( \psi \left( X_1^* \right) V^*\left( X_1^*, z'+S_1^* \right) ;\, \tau _{z'}^* > 1 \right) \text {d}z'. \end{aligned}$$

Therefore,

$$\begin{aligned} \limsup _{n\rightarrow +\infty } n^{3/2} E_0(\eta ) \leqslant \frac{2V(x,y)}{\sqrt{2\pi }\sigma ^3} \int _{z-\eta }^{z+a} {\mathbb {E}}_{{\varvec{\nu }}}^* \left( \psi \left( X_1^* \right) V^*\left( X_1^*, z'+S_1^* \right) ;\, \tau _{z'}^* > 1 \right) \text {d}z'. \end{aligned}$$

Taking the limit as \(\eta \rightarrow 0\) and using (8.3), we obtain that, for any \(z >0\),

$$\begin{aligned}&\limsup _{n\rightarrow +\infty } n^{3/2} {\mathbb {E}}_x \left( \psi \left( X_{n} \right) ;\, y+S_{n} \in [z,z+a],\, \tau _y> n \right) \nonumber \\&\quad \leqslant \frac{2V(x,y)}{\sqrt{2\pi }\sigma ^3} \int _z^{z+a} {\mathbb {E}}_{{\varvec{\nu }}}^* \left( \psi \left( X_1^* \right) V^*\left( X_1^*, z'+S_1^* \right) ;\, \tau _{z'}^* > 1 \right) \text {d}z'. \end{aligned}$$
(8.4)

If \(z=0\), we have

$$\begin{aligned} {\mathbb {E}}_x \left( \psi \left( X_n \right) ;\, y+S_n \in [0,a],\, \tau _y> n \right) = {\mathbb {E}}_x \left( \psi \left( X_n \right) ;\, y+S_n \in (0,a],\, \tau _y > n \right) . \end{aligned}$$

Using Lemma 8.2 and the same arguments as before, it is easy to see that (8.4) holds for \(z=0\).

Since \([z,z+a] \supset (z,z+a]\), we obviously have

$$\begin{aligned}&{\mathbb {E}}_x \left( \psi \left( X_{n} \right) ;\, y+S_{n} \in [z,z+a],\, \tau _y> n \right) \\&\quad \geqslant {\mathbb {E}}_x \left( \psi \left( X_{n} \right) ;\, y+S_{n} \in (z,z+a],\, \tau _y > n \right) . \end{aligned}$$

Using this and Lemma 8.2, we obtain (8.4) with \(\liminf \) instead of \(\limsup \), which concludes the proof of the theorem.
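The \(n^{3/2}\) normalisation in Theorem 2.5 can be illustrated numerically. The sketch below treats only the degenerate i.i.d. case of a \(\pm 1\) walk (a one-point state space, a hypothetical stand-in for the general Markov setting): it computes \({\mathbb {P}} \left( \tau _y > n,\, y+S_n = z \right) \) exactly by dynamic programming and checks that \(n^{3/2}\) times this probability stabilises as \(n\) grows.

```python
def local_prob(y, z, n):
    """Exact P(tau_y > n, y+S_n = z) for a +/-1 walk with probability 1/2 each
    way, started so that y+S_0 = y > 0, by dynamic programming."""
    dist = {y: 1.0}
    for _ in range(n):
        new = {}
        for j, pr in dist.items():
            if j - 1 > 0:                       # down-step allowed only if it survives
                new[j - 1] = new.get(j - 1, 0.0) + pr / 2
            new[j + 1] = new.get(j + 1, 0.0) + pr / 2
        dist = new
    return dist.get(z, 0.0)

# n^{3/2} * P(tau_y > n, y+S_n = z) should approach a positive constant;
# y = z = 2 and the values of n are illustrative (note y + n and z share parity).
vals = [n ** 1.5 * local_prob(2, 2, n) for n in (100, 400)]
ratio = vals[1] / vals[0]
```

The two rescaled values agree to within a few percent, consistent with the \(n^{-3/2}\) local rate; the residual gap is the expected \(O(1/n)\) correction.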

9 Proof of Theorems 2.7 and 2.8

9.1 Preliminary results

Lemma 9.1

Assume Hypotheses M1–M3. For any \(x \in {\mathbb {X}}\), \(y \in {\mathbb {R}}\), \(z \geqslant 0\), \(a >0\), any non-negative function \(\psi \): \({\mathbb {X}} \rightarrow {\mathbb {R}}_+\) and any non-negative and continuous function g: \([z,z+a] \rightarrow {\mathbb {R}}_+\), we have

$$\begin{aligned}&\lim _{n\rightarrow +\infty } n^{3/2} {\mathbb {E}}_x \left( g \left( y+S_n \right) \psi \left( X_n \right) ;\, y+S_n \in [z,z+a),\, \tau _y> n \right) \\&\quad = \frac{2V(x,y)}{\sqrt{2\pi }\sigma ^3} \int _z^{z+a} g(z') {\mathbb {E}}_{{\varvec{\nu }}}^* \left( \psi \left( X_1^* \right) V^*\left( X_1^*, z'+S_1^* \right) ;\, \tau _{z'}^* > 1 \right) \text {d}z'. \end{aligned}$$

Proof

Fix \(x \in {\mathbb {X}}\), \(y \in {\mathbb {R}}\), \(z \geqslant 0\), \(a >0\), and let \(\psi \): \({\mathbb {X}} \rightarrow {\mathbb {R}}_+\) be a non-negative function and g: \([z,z+a] \rightarrow {\mathbb {R}}_+\) be a non-negative and continuous function. For any measurable non-negative and bounded function \(\varphi \): \({\mathbb {R}} \rightarrow {\mathbb {R}}_+\), we define

$$\begin{aligned} I_0(\varphi ) := n^{3/2} {\mathbb {E}}_x \left( \psi \left( X_n \right) \varphi \left( y+S_n \right) ;\, \tau _y > n \right) . \end{aligned}$$

We first prove that for any \(0 \leqslant \alpha < \beta \) we have

$$\begin{aligned} I_0 \left( \mathbb {1}_{[\alpha ,\beta )} \right) \underset{n \rightarrow +\infty }{\longrightarrow } \frac{2V(x,y)}{\sqrt{2\pi }\sigma ^3} \int _\alpha ^\beta {\mathbb {E}}_{{\varvec{\nu }}}^* \left( \psi \left( X_1^* \right) V^*\left( X_1^*, z'+S_1^* \right) ;\, \tau _{z'}^* > 1 \right) \text {d}z'. \nonumber \\ \end{aligned}$$
(9.1)

Since \([\alpha ,\beta ) \subset [\alpha ,\beta ]\), the upper limit is a straightforward consequence of Theorem 2.5:

$$\begin{aligned} \limsup _{n\rightarrow +\infty } I_0 \left( \mathbb {1}_{[\alpha ,\beta )} \right)&\leqslant \limsup _{n\rightarrow +\infty } n^{3/2} {\mathbb {E}}_x \left( \psi \left( X_n \right) ;\, y+S_n \in [\alpha ,\beta ],\, \tau _y> n \right) \\&= \frac{2V(x,y)}{\sqrt{2\pi }\sigma ^3} \int _\alpha ^\beta {\mathbb {E}}_{{\varvec{\nu }}}^* \left( \psi \left( X_1^* \right) V^*\left( X_1^*, z'+S_1^* \right) ;\, \tau _{z'}^* > 1 \right) \text {d}z'. \end{aligned}$$

For the lower limit, we write, for any \(\eta \in (0,\beta -\alpha )\),

$$\begin{aligned} \liminf _{n\rightarrow +\infty } I_0 \left( \mathbb {1}_{[\alpha ,\beta )} \right)&\geqslant \liminf _{n\rightarrow +\infty } n^{3/2} {\mathbb {E}}_x \left( \psi \left( X_n \right) ;\, y+S_n \in [\alpha ,\beta -\eta ],\, \tau _y> n \right) \\&= \frac{2V(x,y)}{\sqrt{2\pi }\sigma ^3} \int _\alpha ^{\beta -\eta } {\mathbb {E}}_{{\varvec{\nu }}}^* \left( \psi \left( X_1^* \right) V^*\left( X_1^*, z'+S_1^* \right) ;\, \tau _{z'}^* > 1 \right) \text {d}z'. \end{aligned}$$

Taking the limit as \(\eta \rightarrow 0\) proves (9.1).

From (9.1), by linearity, for any non-negative step function \(\varphi = \sum _{k=1}^N \gamma _k \mathbb {1}_{[\alpha _k,\beta _k)}\), where \(N \geqslant 1\), \(\gamma _1, \dots , \gamma _N \in {\mathbb {R}}_+\) and \(0< \alpha _1< \beta _1 = \alpha _2< \cdots < \beta _N\), we have

$$\begin{aligned} \lim _{n\rightarrow +\infty } I_0 \left( \varphi \right) = \frac{2V(x,y)}{\sqrt{2\pi }\sigma ^3} \int _{\alpha _1}^{\beta _N} \varphi (z') {\mathbb {E}}_{{\varvec{\nu }}}^* \left( \psi \left( X_1^* \right) V^*\left( X_1^*, z'+S_1^* \right) ;\, \tau _{z'}^* > 1 \right) \text {d}z'. \end{aligned}$$

Since g is continuous on \([z,z+a]\), for any \(\varepsilon \in (0,1)\) there exist two step functions \(\varphi _{1,\varepsilon }\) and \(\varphi _{2,\varepsilon }\) on \([z,z+a)\) such that \(g-\varepsilon \leqslant \varphi _{1,\varepsilon } \leqslant g \leqslant \varphi _{2,\varepsilon } \leqslant g+\varepsilon \). Consequently,

$$\begin{aligned}&\left|\lim _{n\rightarrow +\infty } I_0(g) - \frac{2V(x,y)}{\sqrt{2\pi }\sigma ^3} \int _{z}^{z+a} g(z') {\mathbb {E}}_{{\varvec{\nu }}}^* \left( \psi \left( X_1^* \right) V^*\left( X_1^*, z'+S_1^* \right) ;\, \tau _{z'}^*> 1 \right) \text {d}z'\right| \\&\quad \leqslant \frac{2V(x,y)}{\sqrt{2\pi }\sigma ^3} \varepsilon \int _{z}^{z+a} {\mathbb {E}}_{{\varvec{\nu }}}^* \left( \psi \left( X_1^* \right) V^*\left( X_1^*, z'+S_1^* \right) ;\, \tau _{z'}^* > 1 \right) \text {d}z'. \end{aligned}$$

Taking the limit as \(\varepsilon \rightarrow 0\) concludes the proof of the lemma. \(\square \)
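The step-function sandwich used in this proof can be sketched numerically. In the code below, the function g, the interval and the mesh are arbitrary illustrative choices; the infimum and supremum of g over each cell are approximated by dense sampling, which suffices for a continuous g.

```python
import math

def step_sandwich(g, z, a, N):
    """Piecewise-constant minorant (lo) and majorant (hi) of g on [z, z+a),
    constant on each of N equal cells; per-cell inf/sup approximated by
    sampling 51 points in the cell."""
    edges = [z + a * k / N for k in range(N + 1)]
    lo, hi = [], []
    for k in range(N):
        w = edges[k + 1] - edges[k]
        samples = [g(edges[k] + w * j / 50) for j in range(51)]
        lo.append(min(samples))
        hi.append(max(samples))
    return edges, lo, hi

g = lambda t: math.sin(t) + 2.0                 # an arbitrary continuous g >= 0
edges, lo, hi = step_sandwich(g, 0.0, 3.0, 60)
```

On this example the gap between the majorant and the minorant is at most the mesh size times the Lipschitz constant of g, so refining the mesh yields the sandwich \(g-\varepsilon \leqslant \varphi _{1,\varepsilon } \leqslant g \leqslant \varphi _{2,\varepsilon } \leqslant g+\varepsilon \) for any prescribed \(\varepsilon \).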

For any \(l \geqslant 1\) we denote by \({\mathscr {C}}_b^+ \left( {\mathbb {X}}^l \times {\mathbb {R}} \right) \) the set of bounded measurable non-negative functions g: \({\mathbb {X}}^l \times {\mathbb {R}} \rightarrow {\mathbb {R}}_+\) such that for any \((x_1,\dots ,x_l) \in {\mathbb {X}}^l\), the function \(z \mapsto g(x_1,\dots ,x_l,z)\) is continuous.

Lemma 9.2

Assume Hypotheses M1–M3. For any \(x \in {\mathbb {X}}\), \(y \in {\mathbb {R}}\), \(z \geqslant 0\), \(a >0\), \(l \geqslant 1\), any non-negative function \(\psi \): \({\mathbb {X}} \rightarrow {\mathbb {R}}_+\) and any \(g\in \mathscr {C}_b^+ \left( {\mathbb {X}}^l \times {\mathbb {R}} \right) \), we have

$$\begin{aligned}&\lim _{n\rightarrow +\infty } n^{3/2} {\mathbb {E}}_x \left( g \left( X_1, \dots , X_l, y+S_n \right) \psi \left( X_n \right) ;\, y+S_n \in [z,z+a),\, \tau _y> n \right) \\&\quad = \frac{2}{\sqrt{2\pi }\sigma ^3} \int _z^{z+a} {\mathbb {E}}_x \left( g \left( X_1, \dots , X_l, z' \right) V\left( X_l, y+S_l \right) \,;\, \tau _y> l \right) \\&\qquad \times {\mathbb {E}}_{{\varvec{\nu }}}^* \left( \psi \left( X_1^* \right) V^*\left( X_1^*, z'+S_1^* \right) ;\, \tau _{z'}^* > 1 \right) \text {d}z'. \end{aligned}$$

Proof

We reduce the proof to the previous case using the Markov property. Fix \(x \in {\mathbb {X}}\), \(y \in {\mathbb {R}}\), \(z \geqslant 0\), \(a >0\), \(l \geqslant 1\), \(\psi \): \({\mathbb {X}} \rightarrow {\mathbb {R}}_+\) and \(g \in {\mathscr {C}}_b^+ \left( {\mathbb {X}}^l \times {\mathbb {R}} \right) \). For any \(n \geqslant l+1\), by the Markov property,

$$\begin{aligned} I_0&:= n^{3/2} {\mathbb {E}}_x \left( g \left( X_1, \dots , X_l,y+S_n \right) \psi \left( X_n \right) ;\, y+S_n \in [z,z+a),\, \tau _y> n \right) \\&= {\mathbb {E}}_x \left( n^{3/2} J_{n-l} \left( X_1, \dots , X_l, y+S_l \right) ,\, \tau _y > l \right) , \end{aligned}$$

where for any \((x_1,\dots ,x_l) \in {\mathbb {X}}^l\), \(y' \in {\mathbb {R}}\) and \(k \geqslant 1\),

$$\begin{aligned} J_k(x_1,\dots ,x_l,y')= & {} {\mathbb {E}}_{x_l} \left( g \left( x_1,\dots ,x_l,y'+S_k \right) \psi \left( X_k \right) ;\, y'\right. \\&\left. +S_k \in [z,z+a),\, \tau _{y'} > k \right) . \end{aligned}$$

By the point 2 of Theorem 2.5,

$$\begin{aligned} n^{3/2} J_{n-l} \left( X_1, \dots , X_l, y+S_l \right) \leqslant c \left\| g\right\| _{\infty } \left\| \psi \right\| _{\infty } (1+z)\left( 1+\max \left( y+S_l,0 \right) \right) . \end{aligned}$$

Consequently, by the Lebesgue dominated convergence theorem (in fact the expectation \({\mathbb {E}}_x\) is a finite sum) and Lemma 9.1,

$$\begin{aligned} \lim _{n\rightarrow +\infty } I_0&= \frac{2}{\sqrt{2\pi }\sigma ^3} \int _z^{z+a} {\mathbb {E}}_x \left( g \left( X_1, \dots , X_l,z' \right) V \left( X_l, y+S_l \right) ;\, \tau _y> l \right) \\&\quad \times {\mathbb {E}}_{{\varvec{\nu }}}^* \left( \psi \left( X_1^* \right) V^*\left( X_1^*, z'+S_1^* \right) ;\, \tau _{z'}^* > 1 \right) \text {d}z'. \end{aligned}$$

\(\square \)

Lemma 9.2 can be reformulated for the dual Markov walk as follows:

Lemma 9.3

Assume Hypotheses M1–M3. For any \(x' \in {\mathbb {X}}\), \(z \geqslant 0\), \(y' \geqslant 0\), \(a >0\), \(m \geqslant 1\) and any function \(g \in {\mathscr {C}}_b^+ \left( {\mathbb {X}}^m \times {\mathbb {R}} \right) \), we have

$$\begin{aligned}&\lim _{n\rightarrow +\infty } n^{3/2} {\mathbb {E}}_{{\varvec{\nu }}}^* \left( g \left( X_m^*, \dots , X_1^*,y'-S_n^* \right) \frac{\mathbb {1}_{\left\{ X_{n+1}^* = x' \right\} }}{{\varvec{\nu }} \left( X_{n+1}^* \right) };\, \right. \\&\left. \qquad z+S_n^* \in [y',y'+a),\, \tau _z^*> n \right) \\&\quad = \frac{2}{\sqrt{2\pi }\sigma ^3} \int _{y'}^{y'+a} {\mathbb {E}}_{{\varvec{\nu }}}^* \left( g \left( X_m^*, \dots , X_1^*, y'-y''+z \right) V^* \left( X_m^*, z+S_m^* \right) ;\, \tau _z^* > m \right) \\&\qquad V\left( x', y'' \right) \text {d}y''. \end{aligned}$$

Proof

Fix \(x' \in {\mathbb {X}}\), \(z \geqslant 0\), \(y' \geqslant 0\), \(a >0\), \(m \geqslant 1\) and \(g \in {\mathscr {C}}_b^+ \left( {\mathbb {X}}^m \times {\mathbb {R}} \right) \). Let \(\psi _{x'}^*\) be the function defined on \({\mathbb {X}}\) by (6.2) and consider for any \(n \geqslant m+1\),

$$\begin{aligned} I_0 := n^{3/2} {\mathbb {E}}_{{\varvec{\nu }}}^* \left( g \left( X_m^*, \dots , X_1^*,y'-S_n^* \right) \psi _{x'}^* \left( X_n^* \right) ;\, z+S_n^* \in [y',y'+a),\, \tau _z^* > n \right) . \end{aligned}$$

By Lemma 9.2 applied to the dual Markov walk, we have

$$\begin{aligned}&I_0 \underset{n\rightarrow +\infty }{\longrightarrow } \frac{2}{\sqrt{2\pi }\sigma ^3} \sum _{x^* \in {\mathbb {X}}} \int _{y'}^{y'+a} {\mathbb {E}}_{x^*}^* \left( g \left( X_m^*, \dots , X_1^*, y'+z-y'' \right) \right. \\&\left. \qquad V^* \left( X_m^*, z+S_m^* \right) ;\, \tau _z^*> m \right) \\&\quad \times {\mathbb {E}}_{{\varvec{\nu }}} \left( \psi _{x'}^* \left( X_1 \right) \right. \\&\left. \qquad V\left( X_1, y''+S_1 \right) ;\, \tau _{y''} > 1 \right) \text {d}y'' {\varvec{\nu }}(x^*). \end{aligned}$$

Moreover, using (6.2) and the fact that \({\varvec{\nu }}\) is \({\mathbf {P}}\)-invariant, for any \(x' \in {\mathbb {X}}\), \(y'' \geqslant 0\),

$$\begin{aligned}&{\mathbb {E}}_{{\varvec{\nu }}} \left( \psi _{x'}^* \left( X_1 \right) V\left( X_1, y''+S_1 \right) ;\, \tau _{y''}> 1 \right) \\&\quad = \sum _{x_1 \in {\mathbb {X}}} \frac{{\mathbf {P}}(x',x_1)}{{\varvec{\nu }}(x_1)} V\left( x_1, y''+f(x_1) \right) \mathbb {1}_{\{ y''+f(x_1)> 0 \}} {\varvec{\nu }}(x_1) \\&\quad = {\mathbb {E}}_{x'} \left( V\left( X_1, y''+S_1 \right) ;\, \tau _{y''} > 1 \right) . \end{aligned}$$

By the point 1 of Proposition 2.1, the function V is harmonic and so

$$\begin{aligned} \lim _{n\rightarrow +\infty } I_0&= \frac{2}{\sqrt{2\pi }\sigma ^3} \int _{y'}^{y'+a} {\mathbb {E}}_{{\varvec{\nu }}}^* \left( g \left( X_m^*, \dots , X_1^*, y'-y''+z \right) \right. \\&\left. \qquad V^* \left( X_m^*, z+S_m^* \right) ;\, \tau _z^* > m \right) \\&\quad \times V\left( x', y'' \right) \text {d}y''. \end{aligned}$$

\(\square \)

Lemma 9.4

Assume Hypotheses M1–M3. For any \(x \in {\mathbb {X}}\), \(y \in {\mathbb {R}}\), \(z \geqslant 0\), \(a >0\), \(m \geqslant 1\) and any function \(g \in {\mathscr {C}}_b^+ \left( {\mathbb {X}}^m \times {\mathbb {R}} \right) \), we have

$$\begin{aligned}&\lim _{n\rightarrow +\infty } n^{3/2} {\mathbb {E}}_x \left( g \left( X_{n-m+1}, \dots , X_n, y+S_n \right) ;\, y+S_n \in (z,z+a],\, \tau _y> n \right) \\&\quad = \frac{2 V(x,y)}{\sqrt{2\pi }\sigma ^3} \int _z^{z+a} {\mathbb {E}}_{{\varvec{\nu }}}^* \left( g \left( X_m^*, \dots , X_1^*, z' \right) V^* \left( X_m^*, z'+S_m^* \right) ;\, \tau _{z'}^* > m \right) \text {d}z'. \end{aligned}$$

Proof

Fix \(x \in {\mathbb {X}}\), \(y \in {\mathbb {R}}\), \(z \geqslant 0\), \(a >0\), \(m \geqslant 1\) and \(g \in {\mathscr {C}}_b^+ \left( {\mathbb {X}}^m \times {\mathbb {R}} \right) \). For any \(n \geqslant m\), consider

$$\begin{aligned} I_n(x,y) := {\mathbb {E}}_x \left( g \left( X_{n-m+1}, \dots , X_n, y+S_n \right) ;\, y+S_n \in (z,z+a],\, \tau _y > n \right) . \end{aligned}$$
(9.2)

For any \(l \geqslant 1\) and \(n \geqslant l+m\), by the Markov property, we have

$$\begin{aligned} n^{3/2} I_n(x,y) = {\mathbb {E}}_x \left( n^{3/2} I_{n-l} \left( X_l, y+S_l \right) ;\, \tau _y > l \right) . \end{aligned}$$
(9.3)

For any \(p \geqslant 1\) and \(0 \leqslant k \leqslant p\) we define \(z_k := z+\frac{a k}{p}\). For any \(x' \in {\mathbb {X}}\), \(y' > 0\), \(n \geqslant l+m\) and \(p \geqslant 1\), we write

$$\begin{aligned} n^{3/2} I_{n-l}(x',y')&= \sum _{k=0}^{p-1} n^{3/2} {\mathbb {E}}_{x'} \left( g \left( X_{n-l-m+1}, \dots , X_{n-l}, y'+S_{n-l} \right) ;\,\right. \\&\quad \left. y' +S_{n-l} \in ( z_k,z_{k+1}],\, \tau _{y'} > n-l \right) . \end{aligned}$$

Using Lemma 3.2, we get

$$\begin{aligned} n^{3/2} I_{n-l}(x',y')&= \sum _{k=0}^{p-1} n^{3/2} {\mathbb {E}}_{{\varvec{\nu }}}^* \left( g \left( X_m^*, \dots , X_1^*, y'-S_{n-l}^* \right) \psi _{x'}^* \left( X_{n-l}^* \right) ;\, y'\right. \\&\quad -S_{n-l}^* \in ( z_k,z_{k+1} ],\, \\&\quad \left. \forall i \in \{1, \dots , n-l \}, \; y'+f\left( X_{n-l}^* \right) + \cdots + f\left( X_{n-l-i+1}^* \right) > 0 \right) , \end{aligned}$$

where \(\psi _{x'}^*\) is defined by (6.2).

The upper bound. Using (8.1), we have

$$\begin{aligned}&n^{3/2} I_{n-l}(x',y') \\&\quad \leqslant \sum _{k=0}^{p-1} n^{3/2} {\mathbb {E}}_{{\varvec{\nu }}}^* \left( g \left( X_m^*, \dots , X_1^*, y'-S_{n-l}^* \right) \psi _{x'}^* \left( X_{n-l}^* \right) ;\, \right. \\&\quad \quad \left. z_{k+1} + S_{n-l}^* \in \left[ y',y'+a/p \right) ,\, \tau _{z_{k+1}}^* > n-l \right) . \end{aligned}$$

By Lemma 9.3,

$$\begin{aligned}&\limsup _{n\rightarrow +\infty } n^{3/2} I_{n-l}(x',y') \\&\quad \leqslant \frac{2}{\sqrt{2\pi } \sigma ^3} \sum _{k=0}^{p-1} \int _{y'}^{y'+a/p} J_k(y'-y'') V\left( x', y'' \right) \text {d}y'', \end{aligned}$$

where for any \(k \geqslant 0\) and \(t \in {\mathbb {R}}\),

$$\begin{aligned} J_k(t) := {\mathbb {E}}_{{\varvec{\nu }}}^* \left( g \left( X_m^*, \dots , X_1^*, t+z_{k+1} \right) V^* \left( X_m^*, z_{k+1}+S_m^* \right) ;\, \tau _{z_{k+1}}^* > m \right) . \end{aligned}$$

Note that for any \(t \in [-a/p,0]\)

$$\begin{aligned} J_k(t) \leqslant \underbrace{{\mathbb {E}}_{{\varvec{\nu }}}^* \left( \sup _{t \in [-a/p,0]} g \left( X_m^*, \dots , X_1^*, t+z_{k+1} \right) V^* \left( X_m^*, z_{k+1}+S_m^* \right) ;\, \tau _{z_{k+1}}^* > m \right) }_{=:J_k^p}.\nonumber \\ \end{aligned}$$
(9.4)

Since \(y'' \mapsto V\left( x', y'' \right) \) is non-decreasing (see the point 2 of Proposition 2.1), we have

$$\begin{aligned} \limsup _{n\rightarrow +\infty } n^{3/2} I_{n-l}(x',y') \leqslant \frac{a}{p} \sum _{k=0}^{p-1} \frac{2 J_k^p}{\sqrt{2\pi } \sigma ^3} V\left( x', y'+\frac{a}{p} \right) . \end{aligned}$$

Moreover, by (9.2) and the point 2 of Theorem 2.5,

$$\begin{aligned} n^{3/2} I_{n-l}(X_l,y+S_l) \leqslant \left\| g\right\| _{\infty } c \left( 1+z \right) \left( 1+\max (y+S_l,0) \right) . \end{aligned}$$

Consequently, by (9.3) and the Lebesgue dominated convergence theorem (or using just the fact that \({\mathbb {X}}\) is finite),

$$\begin{aligned} \limsup _{n\rightarrow +\infty } n^{3/2} I_n(x,y) \leqslant \frac{a}{p} \sum _{k=0}^{p-1} \frac{2 J_k^p}{\sqrt{2\pi }\sigma ^3} {\mathbb {E}}_x \left( V\left( X_l, y+S_l+\frac{a}{p} \right) ;\, \tau _y > l \right) . \end{aligned}$$

Using the point 3 of Proposition 2.1, for any \(\delta \in (0,1)\),

$$\begin{aligned} \limsup _{n\rightarrow +\infty } n^{3/2} I_n(x,y) \leqslant \frac{a}{p} \sum _{k=0}^{p-1} \frac{2 J_k^p}{\sqrt{2\pi }\sigma ^3} {\mathbb {E}}_x \left( \left( 1+\delta \right) \left( y+S_l+\frac{a}{p} \right) + c_{\delta } \,;\, \tau _y > l \right) \end{aligned}$$

and again using the point 3 of Proposition 2.1, for any \(\delta \in (0,1)\),

$$\begin{aligned}&\limsup _{n\rightarrow +\infty } n^{3/2} I_n(x,y) \\&\quad \leqslant \frac{a}{p} \sum _{k=0}^{p-1} \frac{2 J_k^p}{\sqrt{2\pi }\sigma ^3} {\mathbb {E}}_x \left( \frac{1+\delta }{1-\delta } V \left( X_l, y+S_l \right) +2\frac{a}{p} + c_{\delta };\, \tau _y > l \right) . \end{aligned}$$

Using the point 1 of Proposition 2.1 and the point 2 of Proposition 2.2 and taking the limit as \(l \rightarrow +\infty \),

$$\begin{aligned} \limsup _{n\rightarrow +\infty } n^{3/2} I_n(x,y) \leqslant \frac{a}{p} \sum _{k=0}^{p-1} \frac{2 J_k^p}{\sqrt{2\pi }\sigma ^3} \frac{1+\delta }{1-\delta } V(x,y). \end{aligned}$$

Taking the limit as \(\delta \rightarrow 0\),

$$\begin{aligned} \limsup _{n\rightarrow +\infty } n^{3/2} I_n(x,y) \leqslant \frac{a}{p} \sum _{k=0}^{p-1} \frac{2 J_k^p}{\sqrt{2\pi }\sigma ^3} V(x,y). \end{aligned}$$
(9.5)

For any \((x_1^*,\dots ,x_m^*) \in {\mathbb {X}}^m\) and \(u \in {\mathbb {R}}\), let

$$\begin{aligned} g_m(u)&:= g \left( x_m^*, \dots , x_1^*, u \right) , \nonumber \\ V_m^*(u)&:= V^* (x_m^*, u-f(x_1^*) - \cdots - f(x_m^*)) \mathbb {1}_{\{u-f(x_1^*)> 0, \dots , u-f(x_1^*)-\cdots -f(x_m^*) > 0\}}. \end{aligned}$$
(9.6)

The function \(u \mapsto g_m(u)\) is uniformly continuous on \([z,z+a]\). Consequently, for any \(\varepsilon > 0\), there exists \(p_0 \geqslant 1\) such that for any \(p \geqslant p_0\),

$$\begin{aligned} \frac{a}{p} \sum _{k=0}^{p-1} \sup _{t \in [-a/p,0]} g_m \left( t+z_{k+1} \right) V_m^* (z_{k+1}) \leqslant \frac{a}{p} \sum _{k=0}^{p-1} \left( g_m \left( z_{k+1} \right) + \varepsilon \right) V_m^* (z_{k+1}). \end{aligned}$$

Moreover, using the point 2 of Proposition 2.1, it is easy to see that the function \(u \mapsto V_m^*(u)\) is non-decreasing and hence Riemann integrable. Therefore, we have

$$\begin{aligned} \limsup _{p\rightarrow +\infty } \frac{a}{p} \sum _{k=0}^{p-1} \sup _{t \in [-a/p,0]} g_m \left( t+z_{k+1} \right) V_m^* (z_{k+1}) \leqslant \int _z^{z+a} \left( g_m \left( z' \right) + \varepsilon \right) V_m^* (z') \text {d}z'. \end{aligned}$$

Taking the limit as \(\varepsilon \rightarrow 0\),

$$\begin{aligned} \limsup _{p\rightarrow +\infty } \frac{a}{p} \sum _{k=0}^{p-1} \sup _{t \in [-a/p,0]} g_m \left( t+z_{k+1} \right) V_m^* (z_{k+1}) \leqslant \int _z^{z+a} g_m \left( z' \right) V_m^* (z') \text {d}z'. \end{aligned}$$
(9.7)
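The Riemann-sum step above can be illustrated numerically. The sketch below (Python, with an arbitrary non-decreasing stand-in for \(u \mapsto g_m(u) V_m^*(u)\); none of the paper's objects are used) checks that the upper Riemann sums built from right endpoints of a mesh of size \(a/p\) dominate the integral of a non-decreasing function and converge to it as \(p \rightarrow +\infty \):

```python
# Upper Riemann sums of a non-decreasing function: the sup over each cell
# (z_k, z_{k+1}] is attained at the right endpoint z_{k+1}.
def upper_riemann_sum(f, z, a, p):
    h = a / p
    return h * sum(f(z + (k + 1) * h) for k in range(p))

def integral_u2(z, a):
    # exact integral of u^2 over [z, z+a]
    return ((z + a) ** 3 - z ** 3) / 3.0

f = lambda u: u * u  # non-decreasing stand-in on [1, 3]
z, a = 1.0, 2.0
# the upper sums dominate the integral, and the gap shrinks like a/p
errors = [upper_riemann_sum(f, z, a, p) - integral_u2(z, a)
          for p in (10, 100, 1000)]
```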

Moreover, since \(u \mapsto V_m^*(u)\) is non-decreasing,

$$\begin{aligned} \frac{a}{p} \sum _{k=0}^{p-1} \sup _{t \in [-a/p,0]} g_m \left( t+z_{k+1} \right) V_m^* (z_{k+1}) \leqslant \left\| g\right\| _{\infty } V_m^* (z+a) a. \end{aligned}$$

Consequently, by the Lebesgue dominated convergence theorem, (9.4), (9.7) and the Fubini theorem,

$$\begin{aligned}&\limsup _{p\rightarrow +\infty } \frac{a}{p} \sum _{k=0}^{p-1} \frac{2 J_k^p}{\sqrt{2\pi }\sigma ^3} V(x,y) \\&\quad = \frac{2 V(x,y)}{\sqrt{2\pi }\sigma ^3} {\mathbb {E}}_{{\varvec{\nu }}}^* \left( \limsup _{p\rightarrow +\infty } \frac{a}{p} \sum _{k=0}^{p-1} \sup _{t \in [-a/p,0]} g \left( X_m^*, \dots , X_1^*, t+z_{k+1} \right) \right. \\&\qquad \left. \times V^* \left( X_m^*, z_{k+1}+S_m^* \right) ;\, \tau _{z_{k+1}}^*> m \right) \\&\quad \leqslant \frac{2 V(x,y)}{\sqrt{2\pi }\sigma ^3} \int _z^{z+a} {\mathbb {E}}_{{\varvec{\nu }}}^* \left( g \left( X_m^*, \dots , X_1^*, z' \right) V^* \left( X_m^*, z'+S_m^* \right) ;\, \tau _{z'}^* > m \right) \text {d}z'. \end{aligned}$$

Combining this with (9.5), we obtain

$$\begin{aligned}&\limsup _{n\rightarrow +\infty } n^{3/2} I_n(x,y) \\&\quad \leqslant \frac{2 V(x,y)}{\sqrt{2\pi }\sigma ^3} \int _z^{z+a} {\mathbb {E}}_{{\varvec{\nu }}}^* \left( g \left( X_m^*, \dots , X_1^*, z' \right) V^* \left( X_m^*, z'+S_m^* \right) ;\, \tau _{z'}^* > m \right) \text {d}z'. \end{aligned}$$

The lower bound Repeating arguments similar to those used for the upper bound, by (8.2), we have for any \(x' \in {\mathbb {X}}\), \(y' > 0\), \(l \geqslant 1\), \(n \geqslant l+m+1\), \(p \geqslant 1\),

$$\begin{aligned} n^{3/2} I_{n-l}(x',y')&\geqslant \sum _{k=0}^{p-1} n^{3/2} {\mathbb {E}}_{{\varvec{\nu }}}^* \left( g \left( X_m^*, \dots , X_1^*,y'-S_{n-l}^* \right) \psi _{x'}^* \left( X_{n-l}^* \right) ;\, \right. \\&\quad \left. z_k+S_{n-l}^* \in [ y'-a/p,y' ),\, \tau _{z_k}^*> n-l \right) \\&= \sum _{k=0}^{p-1} n^{3/2} {\mathbb {E}}_{{\varvec{\nu }}}^* \left( g \left( X_m^*, \dots , X_1^*,y_+'+a'-S_{n-l}^* \right) \psi _{x'}^* \left( X_{n-l}^* \right) ;\, \right. \\&\quad \left. z_k+S_{n-l}^* \in [ y_+',y_+'+a' ),\, \tau _{z_k}^* > n-l \right) , \end{aligned}$$

where \(y_+' = \max (y'-a/p,0)\) and \(a' = \min (y',a/p) \in (0,a/p]\). Using Lemma 9.3,

$$\begin{aligned} \liminf _{n\rightarrow +\infty } n^{3/2} I_{n-l}(x',y') \geqslant \sum _{k=0}^{p-1} \frac{2}{\sqrt{2\pi }\sigma ^3} \int _{y_+'}^{y_+'+a'} L_k(y_+'+a'-y'') V\left( x', y'' \right) \text {d}y'', \end{aligned}$$

where, for any \(t \in {\mathbb {R}}\),

$$\begin{aligned} L_k(t) := {\mathbb {E}}_{{\varvec{\nu }}}^* \left( g \left( X_m^*, \dots , X_1^*, t+z_k \right) V^* \left( X_m^*, z_k+S_m^* \right) ;\, \tau _{z_k}^* > m \right) . \end{aligned}$$

Since \(y'' \mapsto V\left( x', y'' \right) \) is non-decreasing (see the point 2 of Proposition 2.1), we have

$$\begin{aligned} \liminf _{n\rightarrow +\infty } n^{3/2} I_{n-l}(x',y') \geqslant a' \sum _{k=0}^{p-1} \frac{2 L_k^p}{\sqrt{2\pi }\sigma ^3} V\left( x', y_+' \right) , \end{aligned}$$

where

$$\begin{aligned} L_k^p := {\mathbb {E}}_{{\varvec{\nu }}}^* \left( \inf _{t\in [0,a/p]} g \left( X_m^*, \dots , X_1^*, t+z_k \right) V^* \left( X_m^*, z_k+S_m^* \right) ;\, \tau _{z_k}^* > m \right) . \end{aligned}$$
(9.8)

Moreover, by the point 3 of Proposition 2.1, for any \(\delta \in (0,1)\),

$$\begin{aligned} a' V(x', y_+')&\geqslant (1-\delta ) a' y_+' - c_{\delta } \geqslant (1-\delta ) \left( y' - \frac{a}{p} \right) \frac{a}{p} - c_{\delta } \\&\geqslant \frac{a}{p} \frac{1-\delta }{1+\delta } V(x',y') - \frac{a}{p} c_{\delta } - \left( \frac{a}{p} \right) ^2 -c_{\delta }. \end{aligned}$$

Consequently, using (9.3) and Fatou's lemma,

$$\begin{aligned}&\liminf _{n\rightarrow +\infty } n^{3/2} I_n(x,y) \\&\quad \geqslant \sum _{k=0}^{p-1} \frac{2 L_k^p}{\sqrt{2\pi }\sigma ^3} {\mathbb {E}}_x \left( \frac{a}{p}\frac{1-\delta }{1+\delta } V \left( X_l, y+S_l \right) -c_{\delta } \left( 1+a^2\right) ;\, \tau _y > l \right) . \end{aligned}$$

Using the point 1 of Proposition 2.1 and the point 2 of Proposition 2.2 and taking the limit as \(l \rightarrow +\infty \) and then as \(\delta \rightarrow 0\),

$$\begin{aligned} \liminf _{n\rightarrow +\infty } n^{3/2} I_n(x,y) \geqslant \frac{a}{p} \sum _{k=0}^{p-1} \frac{2 L_k^p}{\sqrt{2\pi }\sigma ^3} V(x,y). \end{aligned}$$
(9.9)

Using the notation from (9.6) and the fact that \(u \mapsto g_m(u)\) is uniformly continuous on \([z,z+a]\), for any \(\varepsilon > 0\),

$$\begin{aligned} \liminf _{p\rightarrow +\infty } \frac{a}{p} \sum _{k=0}^{p-1} \inf _{t \in [0,a/p]} g_m \left( t+z_k \right) V_m^* (z_k) \geqslant \int _z^{z+a} \left( g_m \left( z' \right) - \varepsilon \right) V_m^* (z') \text {d}z'. \end{aligned}$$

Taking the limit as \(\varepsilon \rightarrow 0\),

$$\begin{aligned} \liminf _{p\rightarrow +\infty } \frac{a}{p} \sum _{k=0}^{p-1} \inf _{t \in [0,a/p]} g_m \left( t+z_k \right) V_m^* (z_k) \geqslant \int _z^{z+a} g_m \left( z' \right) V_m^* (z') \text {d}z'. \end{aligned}$$

By Fatou's lemma, (9.8) and (9.9), we conclude that

$$\begin{aligned}&\liminf _{n\rightarrow +\infty } n^{3/2} I_n(x,y) \geqslant \frac{2V(x,y)}{\sqrt{2\pi }\sigma ^3} {\mathbb {E}}_{{\varvec{\nu }}}^* \left( \liminf _{p\rightarrow +\infty } \frac{a}{p} \sum _{k=0}^{p-1} \inf _{t\in [0,a/p]} g \left( X_m^*, \dots , X_1^*, t+z_k \right) \right. \\&\qquad \left. \times V^* \left( X_m^*, z_k+S_m^* \right) ;\, \tau _{z_k}^*> m \right) \\&\quad \geqslant \frac{2V(x,y)}{\sqrt{2\pi }\sigma ^3} \int _z^{z+a} {\mathbb {E}}_{{\varvec{\nu }}}^* \left( g \left( X_m^*, \dots , X_1^*, z' \right) V^* \left( X_m^*, z'+S_m^* \right) ;\, \tau _{z'}^* > m \right) \text {d}z'. \end{aligned}$$

\(\square \)

From now on, we assume that the dual Markov chain \(\left( X_n^* \right) _{n\geqslant 0}\) is independent of \(\left( X_n \right) _{n\geqslant 0}\). Recall that its transition probability \({\mathbf {P}}^*\) is defined by (2.4) and that, for any \(z \geqslant 0\), the associated Markov walk \(( z+S_n^* )_{n\geqslant 0}\) and the associated exit time \(\tau _z^*\) are defined by (2.5) and (2.6) respectively. Recall also that for any \((x,x^*) \in {\mathbb {X}}^2\), we denote by \({\mathbb {P}}_{x,x^*}\) and \({\mathbb {E}}_{x,x^*}\) the probability and the expectation generated by the finite dimensional distributions of the Markov chains \(( X_n )_{n\geqslant 0}\) and \(( X_n^* )_{n\geqslant 0}\) starting at \(X_0 = x\) and \(X_0^* = x^*\) respectively.
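For a finite state space this duality is easy to make concrete. The sketch below (Python with NumPy) assumes that \({\mathbf {P}}^*\) in (2.4) is the usual time reversal of \({\mathbf {P}}\) with respect to its invariant probability \({\varvec{\nu }}\), that is \({\mathbf {P}}^*(x,x') = {\varvec{\nu }}(x'){\mathbf {P}}(x',x)/{\varvec{\nu }}(x)\) (the paper's (2.4) is not reproduced here); the \(3\times 3\) matrix is arbitrary and only serves to check that the dual kernel is again stochastic and leaves \({\varvec{\nu }}\) invariant:

```python
import numpy as np

# An arbitrary irreducible transition matrix P on a 3-point state space.
P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.5, 0.3, 0.2]])

# Invariant probability nu: normalized left eigenvector for eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
nu = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
nu = nu / nu.sum()

# Dual (time-reversed) kernel: P*(x, x') = nu(x') P(x', x) / nu(x).
P_star = P.T * nu[None, :] / nu[:, None]
```

Both checks below reduce to the invariance \({\varvec{\nu }}{\mathbf {P}} = {\varvec{\nu }}\).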

Lemma 9.5

Assume Hypotheses M1–M3. For any \(x \in {\mathbb {X}}\), \(y \in {\mathbb {R}}\), \(z \geqslant 0\), \(a >0\), \(l \geqslant 1\), \(m \geqslant 1\) and any function \(g \in {\mathscr {C}}_b^+ \left( {\mathbb {X}}^{l+m} \times {\mathbb {R}} \right) \), we have

$$\begin{aligned}&\lim _{n\rightarrow +\infty } n^{3/2} {\mathbb {E}}_x \left( g \left( X_1, \dots , X_l, X_{n-m+1}, \dots , X_n, y+S_n \right) ;\, y\right. \\&\left. \qquad +S_n \in (z,z+a],\, \tau _y> n \right) \\&\quad = \frac{2}{\sqrt{2\pi }\sigma ^3} \int _{z}^{z+a} \sum _{x^* \in {\mathbb {X}}} {\mathbb {E}}_{x,x^*} \left( g \left( X_1, \dots , X_l, X_m^*, \dots , X_1^*, z' \right) \right. \\&\qquad \left. \times V\left( X_l, y+S_l \right) V^* \left( X_m^*, z'+S_m^* \right) ;\, \tau _y> l,\, \tau _{z'}^* > m \right) \text {d}z' {\varvec{\nu }}(x^*). \end{aligned}$$

Proof

Fix \(x \in {\mathbb {X}}\), \(y \in {\mathbb {R}}\), \(z \geqslant 0\), \(a >0\), \(l \geqslant 1\), \(m \geqslant 1\) and \(g \in {\mathscr {C}}_b^+ \left( {\mathbb {X}}^{l+m} \times {\mathbb {R}} \right) \). For any \(n \geqslant l+m\), by the Markov property,

$$\begin{aligned} I_0&:= n^{3/2} {\mathbb {E}}_x \left( g \left( X_1, \dots , X_l, X_{n-m+1}, \dots , X_n, y+S_n \right) ;\, y+S_n \in (z,z+a],\, \tau _y> n \right) \\&= \sum _{x_1, \dots , x_l \in {\mathbb {X}}^l} n^{3/2} {\mathbb {E}}_{x_l} \left( g \left( x_1, \dots , x_l, X_{n-l-m+1}, \dots , X_{n-l}, y_l+S_{n-l} \right) ;\, \right. \\&\quad \left. y_l+S_{n-l} \in (z,z+a],\, \tau _{y_l}> n-l \right) \times {\mathbb {P}}_x \left( X_1 = x_1, \dots , X_l = x_l, \tau _y > l \right) , \end{aligned}$$

where \(y_l = y+f(x_1)+\cdots +f(x_l)\). Using the Lebesgue dominated convergence theorem (or simply the fact that \({\mathbb {X}}^l\) is finite) and Lemma 9.4, we conclude that

$$\begin{aligned} \lim _{n\rightarrow +\infty } I_0&= \frac{2}{\sqrt{2\pi }\sigma ^3} \sum _{x_1, \dots , x_l \in {\mathbb {X}}^l} V\left( x_l, y_l \right) {\mathbb {P}}_x \left( X_1 = x_1, \dots , X_l = x_l, \tau _y> l \right) \\&\quad \times \int _{z}^{z+a} {\mathbb {E}}_{{\varvec{\nu }}}^* \left( g \left( x_1,\dots ,x_l, X_m^*, \dots , X_1^*,z' \right) \right. \\&\left. \qquad V^* \left( X_m^*, z'+S_m^* \right) ;\, \tau _{z'}^* > m \right) \text {d}z'. \end{aligned}$$

\(\square \)

9.2 Proof of Theorem 2.7

For any \(l \geqslant 1\), denote by \({\mathscr {C}}^+ ( {\mathbb {X}}^l \times {\mathbb {R}}_+ )\) the set of non-negative functions \(g :{\mathbb {X}}^l \times {\mathbb {R}}_+ \rightarrow {\mathbb {R}}_+\) satisfying the following properties:

  • for any \((x_1,\dots ,x_l) \in {\mathbb {X}}^l\), the function \(z \mapsto g(x_1,\dots ,x_l,z)\) is continuous,

  • there exists \(\varepsilon > 0\) such that \(\max _{x_1,\dots ,x_l \in {\mathbb {X}}} \sup _{z \geqslant 0} g(x_1,\dots ,x_l,z) (1+z)^{2+\varepsilon } < +\infty \).

Fix \(x \in {\mathbb {X}}\), \(y \in {\mathbb {R}}\), \(l \geqslant 1\), \(m \geqslant 1\) and \(g \in {\mathscr {C}}^+ \left( {\mathbb {X}}^{l+m} \times {\mathbb {R}}_+ \right) \). For brevity, denote

$$\begin{aligned} g_{l,m}(y+S_n) = g \left( X_1, \dots , X_l, X_{n-m+1}, \dots , X_n, y+S_n \right) . \end{aligned}$$

Set

$$\begin{aligned} I_0&:= n^{3/2} {\mathbb {E}}_x \left( g_{l,m}(y+S_n);\, \tau _y> n \right) \\&= \sum _{k=0}^{+\infty } \underbrace{n^{3/2} {\mathbb {E}}_x \left( g_{l,m}(y+S_n);\, y+S_n \in (k,k+1],\, \tau _y > n \right) }_{=:I_k(n)}. \end{aligned}$$

Since \(g \in {\mathscr {C}}^+ \left( {\mathbb {X}}^{l+m} \times {\mathbb {R}}_+ \right) \), we have

$$\begin{aligned} I_k(n) \leqslant \frac{N(g)}{(1+k)^{2+\varepsilon }} n^{3/2} {\mathbb {P}}_x \left( y+S_n \in (k,k+1],\, \tau _y > n \right) , \end{aligned}$$

where \(N(g) = \max _{x_1,\dots ,x_{l+m} \in {\mathbb {X}}} \sup _{z \geqslant 0} g(x_1,\dots ,x_{l+m},z)(1+z)^{2+\varepsilon } <+\infty \). By the point 2 of Theorem 2.5, we have

$$\begin{aligned} I_k(n) \leqslant \frac{c N(g) (1+\max (y,0))}{(k+1)^{1+\varepsilon }}. \end{aligned}$$

Consequently, by the Lebesgue dominated convergence theorem,

$$\begin{aligned} \lim _{n\rightarrow +\infty } I_0 = \sum _{k=0}^{+\infty } \lim _{n\rightarrow +\infty } n^{3/2} {\mathbb {E}}_x \left( g_{l,m}(y+S_n);\, y+S_n \in (k,k+1],\, \tau _y > n \right) . \end{aligned}$$

By Lemma 9.5,

$$\begin{aligned}&\lim _{n\rightarrow +\infty } I_0 \\&\quad = \frac{2}{\sqrt{2\pi }\sigma ^3} \sum _{k=0}^{+\infty } \int _k^{k+1} \sum _{x^* \in {\mathbb {X}}} {\mathbb {E}}_{x,x^*} \left( g \left( X_1, \dots , X_l, X_m^*, \dots , X_1^*,z' \right) V\left( X_l, y+S_l \right) \right. \\&\quad \quad \left. \times V^* \left( X_m^*, z'+S_m^* \right) ;\, \tau _y> l,\, \tau _{z'}^* > m \right) \text {d}z' {\varvec{\nu }}(x^*), \end{aligned}$$

which establishes Theorem 2.7.

9.3 Proof of Theorem 2.8

Theorem 2.8 will be deduced from Theorem 2.7.

Let \(x \in {\mathbb {X}}\), \(y \in {\mathbb {R}}\) and \(n \geqslant 1\). Since \({\mathbb {X}}\) is finite, \(\left\| f\right\| _{\infty } = \max _{x \in {\mathbb {X}}} \left|f(x)\right|\) is finite. Since the walk can exit the positive half-line at time \(n+1\) only from a height at most \(\left\| f\right\| _{\infty }\), this implies

$$\begin{aligned} {\mathbb {P}}_x \left( \tau _y = n+1 \right) = {\mathbb {P}}_x \left( y+S_n+f(X_{n+1}) \leqslant 0,\, y+S_n \in \left[ 0,\left\| f\right\| _{\infty } \right] ,\, \tau _y > n \right) . \end{aligned}$$

By the Markov property,

$$\begin{aligned} {\mathbb {P}}_x \left( \tau _y = n+1 \right) = {\mathbb {E}}_x \left( g(X_n,y+S_n) \,;\, \tau _y > n \right) , \end{aligned}$$

where, for any \((x',y') \in {\mathbb {X}} \times {\mathbb {R}}\),

$$\begin{aligned} g(x',y')= & {} {\mathbb {P}}_{x'} \left( y'+f(X_1) \leqslant 0 \right) \mathbb {1}_{\left\{ y' \in \left[ 0,\left\| f\right\| _{\infty } \right] \right\} } \\= & {} \mathbb {1}_{\left\{ y' \in \left[ 0,\left\| f\right\| _{\infty } \right] \right\} } \sum _{x_1\in {\mathbb {X}}} {\mathbf {P}}(x',x_1) \mathbb {1}_{\{y'+f(x_1) \leqslant 0 \}}. \end{aligned}$$
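Since \({\mathbb {X}}\) is finite, this \(g\) is an explicit finite sum of indicators. A minimal sketch (Python), with a hypothetical two-state chain and a hypothetical drift function \(f\), both invented for illustration and unrelated to the paper's data:

```python
# g(x', y') = 1{y' in [0, ||f||_inf]} * sum_{x1} P(x', x1) * 1{y' + f(x1) <= 0}
# Hypothetical two-state chain: states 0 and 1, with invented P and f.
P = {0: {0: 0.7, 1: 0.3},
     1: {0: 0.4, 1: 0.6}}
f = {0: -1.0, 1: 2.0}
f_inf = max(abs(v) for v in f.values())   # ||f||_inf = 2

def g(x_prime, y_prime):
    if not (0.0 <= y_prime <= f_inf):
        return 0.0
    return sum(p for x1, p in P[x_prime].items() if y_prime + f[x1] <= 0.0)
```

For instance \(g(0, 0.5) = {\mathbf {P}}(0,0) = 0.7\), since only the transition to state 0 makes \(y'+f(x_1) \leqslant 0\); as a function of \(y'\), \(g(x',\cdot )\) is piecewise constant with finitely many jumps.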

Since \(g(x',\cdot )\) is a step function, for any \(\varepsilon > 0\) there exist two functions \(\varphi _{\varepsilon }\) and \(\psi _{\varepsilon }\) on \({\mathbb {X}} \times {\mathbb {R}}\) and a set \(N \subset {\mathbb {X}} \times {\mathbb {R}}\) such that

  • for any \(x' \in {\mathbb {X}}\), the functions \(\varphi _{\varepsilon }(x',\cdot )\) and \(\psi _{\varepsilon }(x',\cdot )\) are continuous and have a compact support included in \(\left[ -1,\left\| f\right\| _{\infty }+1 \right] \),

  • for any \((x',y') \in \left( {\mathbb {X}} \times {\mathbb {R}} \right) {\setminus } N\), it holds \(\varphi _{\varepsilon }(x',y') = g(x',y') = \psi _{\varepsilon }(x',y')\),

  • for any \((x',y') \in {\mathbb {X}} \times {\mathbb {R}}\), it holds \(0 \leqslant \varphi _{\varepsilon }(x',y') \leqslant g(x',y') \leqslant \psi _{\varepsilon }(x',y') \leqslant 1\),

  • the set N is sufficiently small:

    $$\begin{aligned} \int _{-1}^{\left\| f\right\| _{\infty }+1} {\mathbb {E}}_{{\varvec{\nu }}}^* \left( V^*\left( X_1, z+S_1^* \right) ;\, \tau _z^* > 1,\, \left( X_1, z \right) \in N \right) \text {d}z \leqslant \varepsilon . \end{aligned}$$
    (9.10)

The upper bound For any \(\varepsilon > 0\), using Theorem 2.7, we have

$$\begin{aligned} I^+&:= \limsup _{n\rightarrow +\infty } n^{3/2}{\mathbb {P}}_x \left( \tau _y = n+1 \right) \\&\leqslant \limsup _{n\rightarrow +\infty } n^{3/2}{\mathbb {E}}_x \left( \psi _{\varepsilon }(X_n,y+S_n);\, \tau _y> n \right) \\&= \frac{2}{\sqrt{2\pi } \sigma ^3} \int _0^{+\infty } \sum _{x^* \in {\mathbb {X}}} {\mathbb {E}}_{x,x^*} \left( \psi _{\varepsilon } \left( X_1^*,z \right) V(X_1,y+S_1) \right. \\&\quad \left. V^*(X_1^*,z+S_1^*);\, \tau _y> 1,\, \tau _z^* > 1 \right) {\varvec{\nu }}(x^*) \text {d}z. \end{aligned}$$

Using the point 1 of Proposition 2.1,

$$\begin{aligned} I^+&\leqslant \frac{2V(x,y)}{\sqrt{2\pi } \sigma ^3} \int _0^{\left\| f\right\| _{\infty }+1} {\mathbb {E}}_{{\varvec{\nu }}}^* \left( \psi _{\varepsilon } \left( X_1^*,z \right) V^*(X_1^*,z+S_1^*);\, \tau _z^*> 1 \right) \text {d}z \nonumber \\&\leqslant \underbrace{\frac{2V(x,y)}{\sqrt{2\pi } \sigma ^3} \int _0^{\left\| f\right\| _{\infty }} {\mathbb {E}}_{{\varvec{\nu }}}^* \left( g \left( X_1^*,z \right) V^*(X_1^*,z+S_1^*);\, \tau _z^*> 1 \right) \text {d}z}_{=:I_1} \nonumber \\&\quad + \underbrace{\frac{2V(x,y)}{\sqrt{2\pi } \sigma ^3} \int _0^{\left\| f\right\| _{\infty }+1} {\mathbb {E}}_{{\varvec{\nu }}}^* \left( V^*(X_1^*,z+S_1^*);\, \tau _z^* > 1,\, \left( X_1^*,z \right) \in N \right) \text {d}z}_{=:I_2}. \end{aligned}$$
(9.11)

Since \({\varvec{\nu }}\) is \({\mathbf {P}}^*\)-invariant, we have

$$\begin{aligned} I_1&= \frac{2V(x,y)}{\sqrt{2\pi } \sigma ^3} \int _0^{\left\| f\right\| _{\infty }} \sum _{x^* \in {\mathbb {X}}} g \left( x^*,z \right) V^*(x^*,z-f(x^*)) \\&\quad \quad \mathbb {1}_{\left\{ z-f(x^*)> 0 \right\} } {\varvec{\nu }}(x^*) \text {d}z \\&= \frac{2V(x,y)}{\sqrt{2\pi } \sigma ^3} \int _0^{\left\| f\right\| _{\infty }} \sum _{x^*,x_1 \in {\mathbb {X}}} \mathbb {1}_{\left\{ z+f(x_1) \leqslant 0 \right\} } {\mathbf {P}}(x^*,x_1){\varvec{\nu }}(x^*) V^*(x^*,z-f(x^*)) \\&\quad \quad \mathbb {1}_{\left\{ z-f(x^*)> 0 \right\} } \text {d}z \\&= \frac{2V(x,y)}{\sqrt{2\pi } \sigma ^3} \int _0^{\left\| f\right\| _{\infty }} \sum _{x^*,x_1 \in {\mathbb {X}}} \mathbb {1}_{\left\{ z+f(x_1) \leqslant 0 \right\} } {\mathbf {P}}^*(x_1,x^*){\varvec{\nu }}(x_1) V^*(x^*,z-f(x^*)) \\&\qquad \mathbb {1}_{\left\{ z-f(x^*)> 0 \right\} } \text {d}z \\&= \frac{2V(x,y)}{\sqrt{2\pi } \sigma ^3} \int _0^{\left\| f\right\| _{\infty }} \sum _{x_1 \in {\mathbb {X}}} \mathbb {1}_{\left\{ z+f(x_1) \leqslant 0 \right\} } {\varvec{\nu }}(x_1) {\mathbb {E}}_{x_1}^* \left( V^*(X_1^*,z+S_1^*);\, \tau _z^* > 1 \right) \text {d}z. \end{aligned}$$

Using the point 1 of Proposition 2.1,

$$\begin{aligned} I_1 = \frac{2V(x,y)}{\sqrt{2\pi } \sigma ^3} \int _0^{\left\| f\right\| _{\infty }} {\mathbb {E}}_{{\varvec{\nu }}}^* \left( V^*(X_1^*,z) \,;\, S_1^* \geqslant z \right) \text {d}z. \end{aligned}$$
(9.12)

Moreover, by (9.10), we get

$$\begin{aligned} I_2 \leqslant \frac{2V(x,y)}{\sqrt{2\pi } \sigma ^3} \varepsilon . \end{aligned}$$
(9.13)

Putting together (9.11), (9.12) and (9.13) and taking the limit as \(\varepsilon \rightarrow 0\), we obtain that

$$\begin{aligned} I^+ \leqslant \frac{2V(x,y)}{\sqrt{2\pi } \sigma ^3} \int _0^{\left\| f\right\| _{\infty }} {\mathbb {E}}_{{\varvec{\nu }}}^* \left( V^*(X_1^*,z) \,;\, S_1^* \geqslant z \right) \text {d}z. \end{aligned}$$
(9.14)

The lower bound In a similar way, using Theorem 2.7, we write

$$\begin{aligned} I^-&:= \liminf _{n\rightarrow +\infty } n^{3/2}{\mathbb {P}}_x \left( \tau _y = n+1 \right) \\&\geqslant \liminf _{n\rightarrow +\infty } n^{3/2}{\mathbb {E}}_x \left( \varphi _{\varepsilon }(X_n,y+S_n);\, \tau _y> n \right) \\&= \frac{2V(x,y)}{\sqrt{2\pi } \sigma ^3} \int _0^{\left\| f\right\| _{\infty }+1} {\mathbb {E}}_{{\varvec{\nu }}}^* \left( \varphi _{\varepsilon } \left( X_1^*,z \right) V^*(X_1^*,z+S_1^*);\, \tau _z^* > 1 \right) \text {d}z \\&\geqslant I_1 - I_2. \end{aligned}$$

Using (9.12) and (9.13) and taking the limit as \(\varepsilon \rightarrow 0\), we obtain that

$$\begin{aligned} I^- \geqslant \frac{2V(x,y)}{\sqrt{2\pi } \sigma ^3} \int _0^{\left\| f\right\| _{\infty }} {\mathbb {E}}_{{\varvec{\nu }}}^* \left( V^*(X_1^*,z);\, S_1^* \geqslant z \right) \text {d}z, \end{aligned}$$

which together with (9.14) concludes the proof.

\(\square \)
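The \(n^{-3/2}\) rate in Theorem 2.8 can be checked numerically in the classical i.i.d. special case: for the simple symmetric \(\pm 1\) walk started at \(y=1\) (so \(\sigma = 1\) and \(V(1)=1\)), the first-passage probabilities are computable exactly by dynamic programming on the killed walk, and \(n^{3/2}\,{\mathbb {P}}(\tau = n)\) approaches the classical constant \(\sqrt{2/\pi }\) along odd \(n\). The sketch below is only a sanity check of the rate in this degenerate case, not of the Markov-dependent constant in the theorem:

```python
import math

def tau_pmf(n_max, y=1):
    """Exact P(tau_y = n), n = 1..n_max, for the simple symmetric +-1 walk
    started at height y; tau_y is the first time the walk becomes <= 0.
    Dynamic programming on the walk killed at the boundary."""
    alive = {y: 1.0}
    pmf = []
    for _ in range(n_max):
        nxt = {}
        exited = 0.0
        for pos, mass in alive.items():
            for pos2 in (pos - 1, pos + 1):
                if pos2 <= 0:
                    exited += 0.5 * mass
                else:
                    nxt[pos2] = nxt.get(pos2, 0.0) + 0.5 * mass
        pmf.append(exited)
        alive = nxt
    return pmf

pmf = tau_pmf(1001)
# n^{3/2} P(tau = n) along odd n approaches sqrt(2/pi) ~ 0.798
scaled = [n ** 1.5 * pmf[n - 1] for n in (251, 501, 1001)]
```

From \(y=1\) the exit time is odd, so \(\texttt{pmf}\) vanishes at even indices; the first values are \({\mathbb {P}}(\tau = 1) = 1/2\) and \({\mathbb {P}}(\tau = 3) = 1/8\).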