1 Introduction

The general ruin problem can be formulated as follows. We are given a family of scalar processes \(X^{u}\) with initial values \(u> 0\). The object of interest is the exit probability of \(X^{u}\) from the positive half-line as a function of \(u\). More formally, let \(\tau ^{u}:= \inf \{t: X^{u}_{t}\le 0\}\). The question is to determine the function

$$ \Psi (u,T):=\mathbf{P}[\tau ^{u}\le T] $$

(the ruin probability on a finite interval \([0,T]\)) or \(\Psi (u):= \mathbf{P}[\tau ^{u}< \infty ]\) (the ruin probability on \([0,\infty )\)).

An exact solution of the problem is available only in a few rare cases. For example, if \(X^{u}=u+W\), where \(W\) is a Wiener process, then \(\Psi (u,T)= \mathbf{P}[\sup _{t\le T}W_{t}\ge u]\), and the explicit formula for the distribution of the supremum of the Wiener process was obtained already in Louis Bachelier’s thesis of 1900, probably the first mathematical study of continuous-time stochastic processes. Another example is the well-known explicit formula for \(\Psi (u)\) in the Lundberg model of the ruin of an insurance company with exponential claims, i.e., when \(X^{u}=u+P\) and \(P\) is a compound Poisson process with drift and exponentially distributed jumps. For more complicated cases, explicit formulae are not available, and only asymptotic results or bounds can be obtained, as is done e.g. in the Lundberg–Cramér theory. In particular, if \(\mathbf{E}[P _{1}]>0\) and the jump sizes are random variables satisfying the Cramér condition (i.e., with finite exponential moments), then \(\Psi (u)\) decreases exponentially as \(u\to \infty \).
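In the Wiener case, the reflection principle gives the closed form \(\Psi (u,T)=2\mathbf{P}[W_{T}\ge u]=\operatorname{erfc}(u/\sqrt{2T})\). The following sketch (Python, standard library only; the function names and parameter values are ours and purely illustrative) evaluates this formula and cross-checks it by a crude Monte Carlo simulation on a discretised path.

```python
import math
import random

def psi_wiener(u, T):
    # Reflection principle:
    # P[sup_{t<=T} W_t >= u] = 2 P[W_T >= u] = erfc(u / sqrt(2 T))
    return math.erfc(u / math.sqrt(2.0 * T))

def psi_mc(u, T, n_paths=4000, n_steps=400, seed=1):
    # Crude Monte Carlo on a discretised path; the discrete maximum
    # slightly underestimates the true supremum.
    rng = random.Random(seed)
    dt = T / n_steps
    sd = math.sqrt(dt)
    hits = 0
    for _ in range(n_paths):
        w = m = 0.0
        for _ in range(n_steps):
            w += rng.gauss(0.0, sd)
            m = max(m, w)
        hits += m >= u
    return hits / n_paths

print(psi_wiener(1.0, 1.0))  # about 0.3173
print(psi_mc(1.0, 1.0))      # slightly below 0.3173 (discretisation bias)
```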

In this paper, we consider the ruin problem for a rather general model, suggested by Paulsen in [30], in which \(X^{u}\) (sometimes called a generalised Ornstein–Uhlenbeck process) is given as the solution of the linear stochastic equation

$$ X^{u}_{t}=u+P_{t}+\int _{(0,t]}X^{u}_{s-}\,dR_{s}, $$
(1.1)

where \(R\) and \(P\) are independent Lévy processes with Lévy triplets \((a,\sigma ^{2},\Pi )\) and \((a_{P},\sigma _{P}^{2},\Pi _{P})\), respectively.

There is a growing interest in models of this type because they describe the evolution of the reserves of insurance companies investing in a risky asset with price process \(S\). In the financial–actuarial context, \(R\) is interpreted as the relative price process with \(dR_{t}=dS_{t}/S_{t-}\), i.e., the price process \(S\) is the stochastic (Doléans) exponential \(\mathcal{E}(R)\). Equation (1.1) means that the (infinitesimal) increment \(dX^{u}_{t}\) of the capital reserve is the sum of the increment \(dP_{t}\) due to the insurance business activity and the increment due to the risky investment, which is the product of the number \(X^{u}_{t-}/S_{t-}\) of owned shares and the price increment \(dS_{t}\) of a share, that is, \(X^{u}_{t-}\,dR_{t}\).

In this model, the log-price process \(V=\ln {\mathcal{E}}(R)\) is also a Lévy process with the triplet \((a_{V},\sigma ^{2},\Pi _{V})\). Recall that the behaviour of the ruin probability in such models is radically different from that in classical actuarial models. For instance, if the price of the risky asset follows a geometric Brownian motion, that is, \(R_{t}=at+\sigma W_{t}\), and the risk process \(P\) is as in the Lundberg model, then \(\Psi (u)=O(u^{1-2a/\sigma ^{2}})\), \(u\to \infty \), if \(2a/\sigma ^{2}> 1\), and \(\Psi (u)\equiv 1\) otherwise; see [14, 21, 34].

We exclude degenerate cases by assuming that \(\Pi ((-\infty ,-1])=0\) (otherwise \(\Psi (u)=1\) for all \(u>0\); see the discussion in Sect. 2) and that \(P\) is not a subordinator (otherwise \(\Psi (u)=0\) for all \(u>0\) because \(X^{u}>0\); see (3.2), (3.1)). We also exclude the case \(R \equiv 0\), which is well studied in the literature; see [24].

We are especially interested in the case where the process \(P\) describing the “business part” of the model has only upward jumps (in other words, \(P\) is spectrally positive). In the classical actuarial literature, such models are referred to as annuity insurance models (or models with negative risk sums); see [16, Sect. 1.1], [36]. In modern sources, they also serve to describe the capital reserve of a venture company investing in the development of new technologies and selling innovations, and are sometimes referred to as dual models; see [1], [2, Chap. 3], [3, 5], etc.

In models with only upward jumps, the downcrossing of zero may happen only in a continuous way. This allows us to obtain the exact (up to a multiplicative constant) asymptotics of the ruin probability under weak assumptions on the price dynamics.

Let \(H: q\mapsto \ln {\mathbf{E}}[e^{-q V_{1}}]\) be the cumulant-generating function of the increment of the log-price process \(V\) on the interval \([0,1]\). The function \(H\) is convex and its effective domain \(\mathrm{dom\,} H\) is a convex subset of ℝ containing zero.

If the distribution of the jumps of the business process does not have too heavy tails, the asymptotic behaviour of the ruin probability \(\Psi (u)\) as \(u\to \infty \) is determined by the strictly positive root \(\beta \) of \(H\), assumed to exist and to lie in the interior of \(\mathrm{dom\,} H\). Unfortunately, the existing results are overloaded by numerous integrability assumptions on the processes \(R\) and \(P\), while the law \(\mathcal{L}(V_{T})\) of the random variable \(V_{T}\), where \(T\) is an independent random variable uniformly distributed on \([0,1]\), is required to contain an absolutely continuous component; see e.g. [32, Theorem 3.2], whose part (b) provides information on how heavier tails may change the asymptotics.

The aim of our study is to obtain the exact asymptotics of the exit probability in this now classical framework under the weakest conditions. Our main result has the following easy-to-memorise formulation.

Theorem 1.1

Suppose that \(H\) has a root \(\beta >0\) not lying on the boundary of \(\mathrm{dom\,} H\) and \(\int _{{\mathbb{R}}} |x|^{\beta }I_{\{|x|>1\}} \Pi _{P}(dx)<\infty \). Then

$$ 0< \liminf _{u\to \infty } u^{\beta }\Psi (u)\le \limsup _{u\to \infty } u^{\beta }\Psi (u)< \infty . $$

If, moreover, \(P\) jumps only upward and the distribution \(\mathcal{L}(V_{1})\) is non-arithmetic, then \(\Psi (u)\sim C_{\infty }u^{-\beta }\) as \(u\to \infty \), where \(C_{\infty }>0\) is a constant.

Our argument is based, as in many other papers, on the theory of distributional equations as presented in the paper by Goldie [15]. Unfortunately, Goldie’s theorem does not give a clear answer to the question when the constant defining the asymptotics of the tail of the solution of an affine distributional equation is strictly positive. The striking simplicity of our formulation is due to recent progress in this theory, namely the criterion of Guivarc’h and Le Page [18]; a simple proof of it can be found in the paper [9] by Buraczewski and Damek. This criterion gives a necessary and sufficient condition for the strict positivity of the constant in the Kesten–Goldie theorem determining the rate of decay of the tail of the solution at infinity. An obvious corollary of it allows us to simplify the proofs radically and get rid of additional assumptions present in earlier papers; see [4, 22, 28, 29, 30, 31, 32, 33] and the references therein. Our technique involves only affine distributional equations and avoids more demanding Letac-type equations.

The question whether the concluding statement of the theorem holds when \(P\) has downward jumps remains open.

The structure of the paper is the following. In Sect. 2, we formulate the model and provide some prerequisites from the theory of Lévy processes. Section 3 contains a well-known reduction of the ruin problem to the study of the asymptotic behaviour of a stochastic integral (called a continuous perpetuity in the actuarial literature; see [11]). In Sect. 4, we prove moment inequalities for maximal functions of stochastic integrals needed to analyse the limiting behaviour of an exponential functional in Sect. 5. The latter section is concluded by the proof of the main result and some comments on its formulation. In Sect. 6, we establish Theorem 6.4 on ruin with probability one, using the technique suggested in [34]. This theorem implies in particular that in the classical model with negative risk sums and investments in a risky asset whose price follows a geometric Brownian motion, ruin is certain if \(a\le \sigma ^{2}/2\); see [21]. In Sect. 7, we discuss examples.

Our presentation is oriented towards a reader with a preference for Lévy processes rather than the theory of distributional equations (also called implicit renewal theory). That is why, in the Appendix, we provide rather detailed information on the latter, covering also the arithmetic case. In particular, we give a proof of a version of the Grincevičius theorem under slightly weaker conditions than in the original paper [17].

We express our gratitude to E. Damek, D. Buraczewski and Z. Palmowski for fruitful discussions and a number of useful references on distributional equations.

2 Preliminaries from the theory of Lévy processes

Let \((a,\sigma ^{2},\Pi )\) and \((a_{P},\sigma _{P}^{2},\Pi _{P})\) be the Lévy triplets of the processes \(R\) and \(P\) corresponding to the standard truncation function \(h(x):=xI_{\{|x|\le 1\}}\).

Putting \(\bar{h}(x):=xI_{\{|x|> 1\}}\), we can write the canonical decomposition of \(R\) in the form

$$ R_{t}=at+\sigma W_{t}+h*(\mu -\nu )_{t}+\bar{h}*\mu _{t}, $$

where \(W\) is a standard Wiener process and the Poisson random measure \(\mu (dt,dx)\) is the jump measure of \(R\) having a deterministic compensator (the mean of \(\mu \)) of the form \(\nu (dt,dx)=dt\Pi (dx)\). For notions and results, see the books [20, Chap. 2] and also [10, Chaps. 2 and 3].

As in [20], we use ∗ for the standard notation of stochastic calculus for integrals with respect to random measures. For instance,

$$ h*(\mu -\nu )_{t}=\int _{0}^{t}\int _{{\mathbb{R}}} h(x)(\mu -\nu )(ds,dx). $$

We hope that the reader will not be confused that \(f(x)\) may denote either the whole function \(f\) or its value at \(x\); a typical example is \(\ln (1+x)\), which explains why such flexibility is convenient. The symbol \(\Pi (f)\) or \(\Pi (f(x))\) stands for the integral of \(f\) with respect to the measure \(\Pi \). Recall that

$$ \Pi (x^{2}\wedge 1):=\int _{{\mathbb{R}}} (x^{2}\wedge 1) \Pi (dx)< \infty , $$

and that the condition \(\sigma =0\) and \(\Pi (|h|)<\infty \) is necessary and sufficient for \(R\) to have trajectories of (locally) finite variation; see [10, Proposition 3.9].

The process \(P\) describing the actuarial (“business”) part of the model admits a similar representation as

$$ P_{t}=a_{P}t+\sigma _{P} W^{P}_{t}+h*(\mu ^{P} -\nu ^{P})_{t}+\bar{h}*\mu ^{P}_{t}. $$

The Lévy processes \(R\) and \(P\) generate the filtration \(\mathbf{F}^{R,P}=(\mathcal{F}^{R,P}_{t})_{t\ge 0}\), completed to satisfy the usual conditions. Our standing assumption is

Assumption 2.1

The Lévy measure \(\Pi \) is concentrated on the interval \((-1,\infty )\); \(\sigma ^{2}\) and \(\Pi \) do not vanish simultaneously; the process \(P\) is not a subordinator.

Recall that if \(\Pi \) charges \((-\infty ,-1]\), then ruin happens at the instant \(\tau \) of the first jump of the Poisson process \(I_{\{x \le -1\}}*\mu \) having strictly positive intensity. Indeed, the independence of the processes \(P\) and \(R\) implies that their trajectories have no common instants of jumps (except on a null set).

Note that \(\tau =\inf \{t\ge 0: xI_{\{x\le -1\}}*\mu _{t}\le -1\}< \infty \) when \(\Pi ((-\infty ,-1])>0\), and \(\Delta R_{\tau }\le -1\). According to (1.1), \(\Delta X_{\tau }=X_{\tau -}\Delta R_{ \tau }\), that is,

$$ X_{\tau }=X_{\tau -}(\Delta R_{\tau }+1). $$

It follows that \(\tau ^{u}\le \tau <\infty \).

If \(\Pi \) does not charge \((-\infty ,-1]\) but \(P\) is a subordinator, that is, an increasing Lévy process, then ruin never happens. According to [10, Proposition 3.10], the process \(P\) is not a subordinator if and only if either \(\sigma ^{2}_{P}>0\) or one of the following three conditions holds:

1) \(\Pi _{P}((-\infty ,0))>0\);

2) \(\Pi _{P}((-\infty ,0))=0\), \(\Pi _{P}(xI_{\{x>0\}})=\infty \);

3) \(\Pi _{P}((-\infty ,0))=0\), \(\Pi _{P}(xI_{\{x>0\}})<\infty \), \(a_{P}-\Pi _{P}(xI_{\{0< x\le 1\}})<0\).

The first condition in Assumption 2.1 implies that \(\Delta R>-1\), and the stochastic exponential, i.e., the solution of the linear equation \(dZ=Z_{-}\,dR\) with the initial condition \(Z_{0}=1\), has the form

$$ \mathcal{E}_{t}(R)=e^{R_{t} -\frac{1}{2}\sigma ^{2}t+\sum _{0< s\le t}( \ln (1+\Delta R_{s})-\Delta R_{s})}. $$

In the context of financial models, this stands for the price of a risky asset (e.g. stock). The log price \(V:=\ln {\mathcal{E}}(R)\) is a Lévy process and can be written in the form

$$ V_{t}=at-\frac{1}{2} \sigma ^{2} t + \sigma W_{t}+ h*(\mu -\nu )_{t}+ \big(\ln (1+x)-h\big)*\mu _{t}. $$
(2.1)

Its Lévy triplet is \((a_{V},\sigma ^{2},\Pi _{V})\), where

$$ a_{V}=a-\frac{\sigma ^{2}}{2}+\Pi \Big(h\big(\ln (1+x)\big)-h\Big) $$

and \(\Pi _{V}=\Pi \varphi ^{-1}\) with \(\varphi : x\mapsto \ln (1+x)\).
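For a trajectory with \(\sigma =0\) and finitely many jumps, the formulas above reduce to \(\mathcal{E}_{t}(R)=e^{at}\prod _{s\le t}(1+\Delta R_{s})\) and \(V_{t}=at+\sum _{s\le t}\ln (1+\Delta R_{s})\), since then \(R_{t}=at+\sum _{s\le t}\Delta R_{s}\). A minimal sketch (Python; the drift and jump sizes are ours, chosen for illustration) checking the product form against the exponent in the display for \(\mathcal{E}_{t}(R)\):

```python
import math

a = 0.05                     # drift of R; sigma = 0, finitely many jumps
jumps = [0.10, -0.30, 0.25]  # jump sizes Delta R_s, all > -1
t = 1.0

# Pathwise product form of the Doleans exponential
E_prod = math.exp(a * t)
for dr in jumps:
    E_prod *= 1.0 + dr

# Exponent form with sigma = 0:
# E_t(R) = exp( R_t + sum(ln(1 + dr) - dr) ),  R_t = a t + sum(dr)
R_t = a * t + sum(jumps)
E_exp = math.exp(R_t + sum(math.log(1.0 + dr) - dr for dr in jumps))

# Log-price V_t = ln E_t(R) = a t + sum ln(1 + Delta R_s)
V_t = a * t + sum(math.log(1.0 + dr) for dr in jumps)

print(E_prod, E_exp, math.exp(V_t))  # all three coincide
```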

The cumulant-generating function \(H:q\mapsto \ln {\mathbf{E}}[e^{-qV_{1}}]\) of the random variable \(V_{1}\) admits an explicit expression, namely

$$ H(q):=-a_{V} q+\frac{\sigma ^{2}}{2}q^{2} +\Pi \Big(e^{-q \ln (1+x)}-1+q h\big(\ln (1+x)\big)\Big). $$

Its effective domain \(\mathrm{dom\,} H=\{q: \ H(q)<\infty \}\) is the set \(\{J(q)<\infty \}\), where

$$ J(q):= \Pi \big(I_{\{|\ln (1+x)|>1\}} e^{-q\ln (1+x)}\big)=\Pi \big(I _{\{|\ln (1+x)|>1\}}(1+x)^{-q}\big). $$

Its interior is the open interval \((\underline{q},\bar{q})\) with

$$ \underline{q}:= \inf \{q\le 0: J(q)< \infty \},\qquad \bar{q}:= \sup \{q\ge 0: J(q)< \infty \}. $$

Being a convex function, \(H\) is continuous and admits finite right and left derivatives on \((\underline{q},\bar{q})\). If \(\bar{q}>0\), then the right derivative

$$ D^{+}H(0)=-a_{V}-\Pi \Big(\bar{h}\big(\ln (1+x)\big)\Big)< \infty , $$

though it may be equal to \(-\infty \), a case we do not exclude.

In the formulations of our asymptotic results, we always assume that \(\bar{q}>0\) and the equation \(H(q)=0\) has a root \(\beta \in (0,\bar{q})\). Since \(H\) is not constant, such a root is unique. Clearly, it exists if and only if \(D^{+}H(0)<0\) and \(\limsup _{q\uparrow \bar{q}}H(q)/q>0\). In the case where \(\underline{q}<0\), the condition \(D^{-}H(0)>0\) is necessary to ensure that \(H(q)<0\) for \(q<0\) sufficiently small in absolute value. If \(J(q)<\infty \), then the process \(m=(m_{t}(q))_{t\le 1}\) with

$$ m_{t}(q):=e^{-qV_{t}-tH(q)} $$

is a martingale and

$$ {\mathbf{E}}\big[e^{-qV_{t}}\big]= e^{tH(q)}, \qquad t\in [0,1]. $$

In particular, we have that \(H(q)=\ln {\mathbf{E}}[e^{-qV_{1}} ]= \ln {\mathbf{E}}[M^{q}]\), where \(M:=e^{-V_{1}}\). For the above properties, see e.g. [35, Theorem 25.17]. Note that

$$ {\mathbf{E}}\bigg[\sup _{t\le 1}e^{-qV_{t}}\bigg]< \infty , \qquad \forall q\in (\underline{q},\bar{q}). $$
(2.2)

Indeed, let \(q\in (0,\bar{q})\). Take \(r\in (1,\bar{q}/q)\). Then \(\mathbf{E}[m^{r}_{1}(q) ]=e^{H(qr)-rH(q)}<\infty \). By virtue of the Doob inequality, the maximal function \(m^{*}_{1}(q):=\sup _{t\le 1}m _{t}(q)\) belongs to \(L^{r}\), and it remains to observe that \(e^{-qV_{t}}\le C_{q} m_{t}(q)\) with \(C_{q}=\sup _{t\le 1}e^{t H(q)}\). Similar arguments work for \(q\in (\underline{q},0)\).
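In the geometric Brownian motion example of Sect. 1 (\(\Pi =0\), \(R_{t}=at+\sigma W_{t}\)), we have \(V_{t}=(a-\sigma ^{2}/2)t+\sigma W_{t}\) and hence \(H(q)=-(a-\sigma ^{2}/2)q+\sigma ^{2}q^{2}/2\), whose strictly positive root is \(\beta =2a/\sigma ^{2}-1\) when \(2a/\sigma ^{2}>1\), in agreement with the decay rate \(u^{1-2a/\sigma ^{2}}\) quoted in Sect. 1. A sketch (Python; bisection on the convex function \(H\), parameter values illustrative) locating this root numerically:

```python
# H for the geometric Brownian motion case:
# V_1 ~ N(a - sigma^2/2, sigma^2), so
# H(q) = ln E[exp(-q V_1)] = -(a - sigma^2/2) q + sigma^2 q^2 / 2.
a, sigma = 0.07, 0.2  # illustrative values with 2 a / sigma^2 > 1

def H(q):
    return -(a - sigma**2 / 2.0) * q + sigma**2 * q**2 / 2.0

def positive_root(lo=1e-9, hi=100.0, tol=1e-12):
    # H is convex with H(0) = 0 and D+H(0) < 0 here, so H has a
    # unique zero on (0, hi); plain bisection suffices.
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if H(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

beta = positive_root()
print(beta, 2.0 * a / sigma**2 - 1.0)  # both equal 2.5
```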

3 Ruin problem: a reduction

Let us introduce the process

$$ Y_{t}:=-\int _{(0,t]} {\mathcal{E}}^{-1}_{s-}(R)\,dP_{s}=-\int _{(0,t]} e ^{-V_{s-}}\,dP_{s}. $$
(3.1)

Due to the independence of \(P\) and \(R\), the quadratic covariation \([P,R]\) is zero, and a straightforward application of the product formula for semimartingales shows that the process

$$ X_{t}^{u}:=\mathcal{E}_{t}(R)(u-Y_{t}) $$
(3.2)

solves the non-homogeneous linear equation (1.1), i.e., the solution of the latter is given by this stochastic version of the Cauchy formula. The strict positivity of the process \(\mathcal{E}(R)=e^{V}\) implies that \(\tau ^{u}=\inf \{t\ge 0:\ Y_{t} \ge u\}\).
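In the deterministic special case \(R_{t}=at\) and \(P_{t}=ct\) (the constants are ours, chosen for illustration), (3.2) reads \(X^{u}_{t}=e^{at}(u-Y_{t})\) with \(Y_{t}=-c(1-e^{-at})/a\), and (1.1) reduces to the linear ODE \(dX=c\,dt+aX\,dt\). A sketch comparing the Cauchy formula with an Euler scheme for this ODE:

```python
import math

a, c, u = 0.5, 1.0, 2.0  # R_t = a t, P_t = c t, initial capital u
T, n = 1.0, 100000
dt = T / n

# Euler scheme for (1.1), here dX = c dt + a X dt, X_0 = u
x = u
for _ in range(n):
    x += c * dt + a * x * dt

# Cauchy formula (3.2): X_T = E_T(R) (u - Y_T), with E_T(R) = e^{a T}
# and Y_T = -int_0^T e^{-a s} c ds = -c (1 - e^{-a T}) / a
Y_T = -c * (1.0 - math.exp(-a * T)) / a
x_formula = math.exp(a * T) * (u - Y_T)

print(x, x_formula)  # Euler value approaches the closed form
```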

The following lemma is due to Paulsen [30].

Lemma 3.1

If \(Y_{t}\to Y_{\infty }\) almost surely as \(t\to \infty \), where \(Y_{\infty }\) is a finite random variable unbounded from above, then for all \(u>0\), we have

$$ \bar{G}(u)\le \Psi (u)=\frac{\bar{G}(u)}{\mathbf{E}[\bar{G}(X_{\tau ^{u}})\, \vert \, \tau ^{u}< \infty ]}\le \frac{\bar{G}(u)}{\bar{G}(0)}, $$
(3.3)

where \(\bar{G}(u):=\mathbf{P}[Y_{\infty }>u]\). If \(\Pi _{P}((-\infty ,0))=0\), then \(\Psi (u)= \bar{G}(u)/\bar{G}(0)\).

Proof

Let \(\tau \) be an arbitrary stopping time with respect to the filtration \(\mathbf{F}^{R,P}\). As we assume that the finite limit \(Y_{\infty }\) exists, the random variable

$$ Y_{\tau ,\infty }:= \textstyle\begin{cases} -\lim _{N\to \infty } \int _{(\tau ,\tau +N]} e^{-(V_{t-}-V_{\tau })}\,dP _{t}, &\quad \tau < \infty , \\ 0, & \quad \tau =\infty , \end{cases} $$

is well defined. On the set \(\{\tau <\infty \}\), we have

$$ Y_{\tau ,\infty }=e^{V_{\tau }}(Y_{\infty }-Y_{\tau })=X_{\tau }^{u} +e ^{V_{\tau }}(Y_{\infty }-u). $$
(3.4)

Let \(\xi \) be an \(\mathcal{F}_{\tau }^{R,P}\)-measurable random variable. Since the driving Lévy processes start afresh at \(\tau \), the conditional distribution of \(Y_{\tau ,\infty }\) given \((\tau ,\xi )\) is the same as the distribution of \(Y_{\infty }\). It follows that

$$ \mathbf{P}[ Y_{\tau ,\infty }>\xi , \tau < \infty ] =\mathbf{E}[ \bar{G}(\xi ) \mathbf{1}_{\{ \tau < \infty \}}]. $$

Thus if \(\mathbf{P}[\tau <\infty ]>0\), then

$$ \mathbf{P}[Y_{\tau ,\infty }>\xi , \tau < \infty ] =\mathbf{E}[\bar{G}( \xi )\, \vert \, \tau < \infty ]{\mathbf{P}}[\tau < \infty ]. $$

Noting that \(\Psi (u):=\mathbf{P}[\tau ^{u}<\infty ]\ge {\mathbf{P}}[Y _{\infty }>u]>0\), we deduce from this, using (3.4), that

$$\begin{aligned} \bar{G}(u) &= \mathbf{P}[ Y_{\infty }>u, \tau ^{u}< \infty ]= \mathbf{P}[Y _{\tau ^{u},\infty }>X_{\tau ^{u}}^{u}, \tau ^{u}< \infty ] \\ &=\mathbf{E}[\bar{G}(X_{\tau ^{u}}^{u})\, \vert \, \tau ^{u}< \infty ] {\mathbf{P}}[\tau ^{u}< \infty ] \end{aligned}$$

which implies the equality in (3.3). The result follows since \(X_{\tau ^{u}}^{u}\le 0\) on \(\{\tau ^{u}<\infty \}\), and in the case where \(\Pi _{P}((-\infty ,0))=0\), the process \(X^{u}\) crosses zero in a continuous way, i.e., \(X_{\tau ^{u}}^{u}= 0\) on this set. □

In view of the above lemma, the proof of Theorem 1.1 is reduced to establishing the existence of a finite limit \(Y_{\infty }\) and finding the asymptotics of the tail of its distribution.

4 Moments of the maximal function

In this section, we prove a simple but important result implying the existence of moments of the random variable \(Y_{1}^{*}\). Here and in the sequel, we use the standard notation of stochastic calculus for the maximal function of a process, i.e., \(Y_{t}^{*}:=\sup _{s \le t} |Y_{s}|\).

Before the formulation, we recall the Novikov inequalities [27], also referred to as the Bichteler–Jacod inequalities (see [8, 26]), which provide bounds for the moments of the maximal function \(I^{*}_{1}\) of a stochastic integral \(I=g*(\mu ^{P}-\nu ^{P})\), where \(g^{2}*\nu ^{P}_{1}<\infty \). Depending on the parameter \(\alpha \in [1,2]\), they have the form

$$ {\mathbf{E}}[I_{1}^{*p} ]\le C_{p,\alpha } \textstyle\begin{cases} {\mathbf{E}}[ (|g|^{\alpha }*\nu ^{P}_{1})^{p/\alpha } ], &\quad p \in (0,\alpha ], \\ {\mathbf{E}}[ (|g|^{\alpha }*\nu ^{P}_{1})^{p/\alpha } ]+\mathbf{E}[|g|^{p}* \nu ^{P}_{1}], &\quad p\in [\alpha ,\infty ). \end{cases} $$

Let \(U\) be a càdlàg process adapted with respect to a filtration under which the semimartingale \(P\) has deterministic triplet \((a_{P},\sigma _{P}^{2},\Pi _{P})\) and let \(\Upsilon _{t}:=\int _{(0,t]} U _{s-}\,dP_{s}\).

Lemma 4.1

If \(p>0\) is such that \(\Pi _{P}(|\bar{h}|^{p})<\infty \) and \(K_{p}:= \mathbf{E}[U_{1}^{*p} ]<\infty \), then \(\mathbf{E}[\Upsilon _{1}^{*p}]< \infty \).

Proof

The two elementary inequalities \(|x+y|^{p}\le |x|^{p}+|y|^{p}\) for \(p\in (0,1]\) and \(|x+y|^{p}\le 2^{p-1}(|x|^{p}+|y|^{p})\) for \(p>1\) allow us to treat separately the integrals corresponding to each term in the representation

$$ P_{t}=a_{P}t+ \sigma _{P}W^{P}_{t}+h*(\mu ^{P} -\nu ^{P})_{t}+\bar{h}*\mu ^{P}_{t}, $$

that is, by assuming that the other terms are zero.

The case of the integral with respect to \(dt\) is obvious (we dominate \(U\) by \(U^{*}\)). The estimation for the integral with respect to \(dW^{P}\) is reduced, by applying the Burkholder–Davis–Gundy inequality, to the estimation of the integral with respect to \(dt\).

Let \(p\le 1\). In more detailed notation, \(f*\mu ^{P}_{1}= \sum _{0< s\le 1:\ \Delta P_{s}\neq 0}f(s,\Delta P_{s})\) and \(U_{-}=(U_{t-})\). Therefore we have

$$ \mathbf{E}[(|U_{-}||\bar{h}|*\mu ^{P}_{1})^{p} ]\le {\mathbf{E}}[ |U _{-}|^{p}|\bar{h}|^{p}*\mu ^{P}_{1} ]= \mathbf{E}[ |U_{-}|^{p}|\bar{h}|^{p}* \nu ^{P}_{1}]\le \Pi _{P}(|\bar{h}|^{p}) K_{p}. $$

Using the Novikov inequality (with \(\alpha =2\)), we have

$$\begin{aligned} {\mathbf{E}}\big[ \big( U_{-}h*(\mu ^{P} -\nu ^{P}) \big)_{1}^{*p} \big] \le & C_{p,2}\big(\Pi _{P}(h^{2})\big)^{p/2}{\mathbf{E}}\bigg[ \bigg(\int _{0}^{1}U_{t}^{2}\,dt\bigg)^{p/2}\bigg] \\ \le & C_{p,2}\big(\Pi _{P}(h^{2})\big)^{p/2} K_{p}. \end{aligned}$$

Let \(p\in (1,2)\). By the Novikov inequality with \(\alpha =1\), we have

$$\begin{aligned} {\mathbf{E}}\big[ \big(U_{-}\bar{h}*(\mu ^{P}-\nu ^{P})\big)_{1}^{*p} \big] \le & C_{p,1}\big(\mathbf{E}[ (|U_{-}||\bar{h}|*\nu _{1}^{P})^{p} ] +\mathbf{E}[ |U_{-}|^{p}|\bar{h}|^{p}*\nu _{1}^{P}]\big) \\ \le & \tilde{C}_{p,1} K_{p}, \end{aligned}$$

where \(\tilde{C}_{p,1}:=C_{p,1}((\Pi _{P}(|\bar{h}|) )^{p}+\Pi _{P}(| \bar{h}|^{p}))\). Using again the Novikov inequality but with \(\alpha =2\), we obtain that

$$ \mathbf{E}\big[\big(U_{-} h*(\mu ^{P}-\nu ^{P})\big)_{1}^{*p}\big] \le C_{p,2}{\mathbf{E}}[ (U_{-}^{2}h^{2}*\nu ^{P}_{1})^{p/2}]\le C_{p,2} \big(\Pi _{P}(h^{2})\big)^{p/2} K_{p}. $$

Finally, let \(p\ge 2\). Using the Novikov inequality with \(\alpha = 2\), we have

$$\begin{aligned} {\mathbf{E}}\big[\big(U_{-} x*(\mu ^{P}-\nu ^{P})\big)_{1}^{*p}\big] \le & C_{p,2} \big(\Pi _{P} (|x|^{2})\big)^{p/2} {\mathbf{E}}\bigg[ \bigg(\int ^{1}_{0} U^{2}\,d t \bigg)^{p/2}\bigg] \\ &{} +C_{p,2} \Pi _{P} (|x|^{p}) \mathbf{E}\bigg[\int ^{1}_{0} |U|^{p}\,d t \bigg] \\ \le & C_{p,2} \Big( \big(\Pi _{P} (|x|^{2})\big)^{p/2} + \Pi _{P} (|x|^{p}) \Big) K_{p}. \end{aligned}$$

Combining the above estimates, we conclude that \(\mathbf{E}[\Upsilon _{1}^{*p}]\le C K_{p}\) for some constant \(C\). □

5 Convergence of \(Y_{t}\)

Using Lemma 4.1, the almost sure convergence of \((Y_{t})\) given by (3.1) to a finite random variable \(Y_{\infty }\) can be easily established under very weak assumptions ensuring also that \(Y_{\infty }\) solves an affine distributional equation and is unbounded from above. Namely, we have the following result.

Proposition 5.1

If there is \(p>0\) such that \(H(p)<0\) and \(\Pi _{P}(|\bar{h}|^{p})< \infty \), then \((Y_{t})\) converges a.s. to a finite random variable \(Y_{\infty }\) unbounded from above. Its law \(\mathcal{L}(Y_{\infty })\) is the unique solution of the distributional equation

$$ Y_{\infty } \stackrel{d}{=}Y_{1}+M_{1} Y_{\infty },\qquad Y_{\infty } \textit{ independent of } (M_{1},Y_{1}), $$
(5.1)

where \(M_{1}:=e^{-V_{1}}\).

Proof

If the hypotheses hold for some \(p\), they hold also for smaller values. We assume without loss of generality that \(p<1\) and \(H(p+)< \infty \). For any integer \(j\ge 1\), we have the identity

$$ Y_{j}-Y_{j-1} = M_{1} \cdots M_{j-1}Q_{j}, $$

where \((M_{j},Q_{j})\) are independent random vectors with the components

$$ M_{j}:=e^{-(V_{j}-V_{{j-1}})}, \qquad Q_{j}:=-\int _{({j-1},j]} e ^{-(V_{v-}-V_{{j-1}})}\,dP_{v} $$
(5.2)

having distributions \(\mathcal{L}(M_{j})=\mathcal{L}(M_{1})\) and \(\mathcal{L}(Q_{j})=\mathcal{L}(Y_{1})\). By assumption, we have \(\rho :=\mathbf{E}[ M_{1}^{p}]=e^{H(p)}<1\) and \(\mathbf{E}[ |Y_{1}|^{p}]< \infty \) by virtue of (2.2) and Lemma 4.1. Since \(\mathbf{E}[( M_{1}\cdots M_{j-1}|Q_{j}|)^{p}]=\rho ^{j-1}{\mathbf{E}}[ |Y_{1}|^{p}]\), we have that

$$ \mathbf{E}\bigg[ \sum _{j\ge 1 } |Y_{j}-Y_{j-1}|^{p}\bigg]< \infty $$

and therefore \(\sum _{j\ge 1 } |Y_{j}-Y_{j-1}|^{p}<\infty \) a.s. But then also \(\sum _{j\ge 1} |Y_{j}-Y_{j-1}|<\infty \) a.s. and therefore the sequence \((Y_{n})\) converges almost surely to the random variable \(Y_{\infty }:=\sum _{j\ge 1} (Y_{j}-Y_{j-1})\). Put

$$ \Delta _{n}:=\sup _{n-1\le v\le n} \bigg\vert \int _{(n-1,v]} e^{-V_{s-}}\, d P_{s} \bigg\vert , \qquad n\ge 1. $$

Note that

$$ \mathbf{E}[\Delta _{n}^{p} ] = \mathbf{E}\bigg[\prod ^{n-1}_{j=1} M^{p} _{j} \sup _{n-1\le v\le n} \bigg\vert \int _{(n-1,v]} e^{-(V_{s-}-V _{n-1})}\, d P_{s} \bigg\vert ^{p}\bigg] =\rho ^{n-1} {\mathbf{E}}[Y_{1} ^{*p}]< \infty . $$

For any \({\varepsilon }>0\), we get by using the Chebyshev inequality that

$$ \sum _{n\ge 1} {\mathbf{P}}[\Delta _{n}>{\varepsilon }]\le {\varepsilon }^{-p} {\mathbf{E}}[Y_{1}^{*p}] \sum _{n\ge 1}\rho ^{n-1}< \infty . $$

By the Borel–Cantelli lemma, for every \(\omega \) outside a null set, there is \(n_{0}(\omega )\) such that \(\Delta _{n}(\omega )\le {\varepsilon }\) for all \(n\ge n_{0}(\omega )\). This implies the convergence \(Y_{t}\to Y_{\infty }\) a.s. as \(t\to \infty \).

Let us consider the sequence

$$ Y_{1,n}:=Q_{2}+M_{2}Q_{3}+\dots +M_{2}\cdots M_{n}Q_{n+1} $$

converging a.s. to a random variable \(Y_{1,\infty }\) distributed as \(Y_{\infty }\). Passing to the limit in the obvious identity \(Y_{n}=Q_{1}+M_{1}Y_{1,n-1}\), we get that \(Y_{\infty }=Q_{1}+M_{1}Y _{1,\infty }\). For finite \(n\), the random variables \(Y_{1,n}\) and \((M_{1},Q_{1})\) are independent and \(\mathcal{L}(Y_{1,n})=\mathcal{L}(Y _{n})\). Therefore \(Y_{1,\infty }\) and \((M_{1},Q_{1})\) are independent random variables, \(\mathcal{L}(Y_{1,\infty })=\mathcal{L}(Y_{\infty })\) and \(\mathcal{L}(Y_{\infty })=\mathcal{L}(Q_{1}+M_{1}Y_{1,\infty })\). These are exactly the properties abbreviated by (5.1).

Note that our hypothesis ensures the uniqueness of the solution to the affine distributional equation (5.1). Indeed, any solution \(\tilde{Y}_{\infty }\) can be realised on the same probability space as \(Y_{\infty }\) as a random variable independent of the sequence \((M_{j},Q_{j})\). Then

$$ \mathcal{L}(\tilde{Y}_{\infty })=\mathcal{L}(Q_{1}+M_{1}\tilde{Y}_{ \infty })= \mathcal{L}(Q_{1}+M_{1}Q_{2}+ \cdots +M_{1}\cdots M_{n-1}Q _{n}+M_{1} \cdots M_{n} \tilde{Y}_{\infty }). $$

Since the product \(M_{1} \cdots M_{n}\to 0\) in \(L^{p}\) as \(n\to \infty \), hence in probability, the residual term \(M_{1} \cdots M_{n} \tilde{Y}_{\infty }\) also tends to zero in probability, hence in law. Thus \(\mathcal{L}(\tilde{Y}_{\infty })=\mathcal{L}(Y_{\infty })\). □
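The series constructed in the proof is a discrete perpetuity \(Y_{\infty }=\sum _{j\ge 1}M_{1}\cdots M_{j-1}Q_{j}\). If, in addition, \(\mathbf{E}[M_{1}]<1\) and \(\mathbf{E}[|Q_{1}|]<\infty \), independence yields \(\mathbf{E}[Y_{\infty }]=\mathbf{E}[Q_{1}]/(1-\mathbf{E}[M_{1}])\), which is easy to verify by simulation. A sketch (Python; lognormal \(M_{1}\) and Gaussian \(Q_{1}\) with illustrative parameters, not tied to a particular model from the text) iterating the truncated series:

```python
import math
import random

rng = random.Random(7)
mu_V, s_V = 0.10, 0.20     # V_1 ~ N(mu_V, s_V^2), M_1 = e^{-V_1}
q_mean, q_sd = 0.50, 1.00  # Q_1 ~ N(q_mean, q_sd^2), independent of M_1

EM = math.exp(-mu_V + s_V**2 / 2.0)  # E[M_1] = e^{-mu_V + s_V^2/2} < 1
target = q_mean / (1.0 - EM)         # E[Y_inf] = E[Q_1] / (1 - E[M_1])

def perpetuity(n_terms=150):
    # Truncation of Y_inf = Q_1 + M_1 Q_2 + M_1 M_2 Q_3 + ...
    y, prod = 0.0, 1.0
    for _ in range(n_terms):
        y += prod * rng.gauss(q_mean, q_sd)
        prod *= math.exp(-rng.gauss(mu_V, s_V))
    return y

n_paths = 10000
est = sum(perpetuity() for _ in range(n_paths)) / n_paths
print(est, target)  # Monte Carlo mean close to the closed form
```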

It remains to check that \(Y_{\infty }\) is unbounded from above. For this, the following simple observation is useful.

Lemma 5.2

If the random variables \(Q_{1}\) and \(Q_{1}/M_{1}\) are unbounded from above, then \(Y_{\infty }\) is also unbounded from above.

Proof

Since \(Q_{1}/M_{1}\) is unbounded from above and independent of \(Y_{1,\infty }\), we have that \(\mathbf{P}[Y_{1,\infty }>0]=\mathbf{P}[Y _{\infty }>0]= \mathbf{P}[Q_{1}/M_{1}+Y_{1,\infty }>0]>0\). Take an arbitrary \(u>0\). Then

$$\begin{aligned} {\mathbf{P}}[Y_{\infty }> u] \ge & \mathbf{P}[Q_{1}+M_{1}Y_{1,\infty }>u, Y_{1,\infty }> 0] \ge {\mathbf{P}}[Q_{1}>u, Y_{1,\infty }> 0] \\ =& \mathbf{P}[Q_{1}>u]{\mathbf{P}}[Y_{1,\infty }> 0]>0 \end{aligned}$$

and the lemma is proved. □

Notation

\({\mathcal{J}}_{\theta }:=\int _{[0,1]} e^{-\theta V_{v}}\,d v\), \({Q}_{\theta }:=-\int _{(0,1]} e^{-\theta V_{v-}}\,dP_{v}\), where \(\theta =\pm 1\).

Lemma 5.3

\(\mathcal{L}(Q_{-1})=\mathcal{L}(Q_{1}/M_{1})\).

Proof

We have

$$\begin{aligned} \int _{(0,1]} \sum _{k=1}^{n} e^{V_{k/n-}}I_{((k-1)/n,k/n]}(v)\,dP_{v} = & \sum _{k=1}^{n} e^{V_{k/n}} (P_{k/n}-P_{(k-1)/n}), \\ e^{V_{1}}\int _{(0,1]} \sum _{k=1}^{n} e^{-V_{k/n-}}I_{((k-1)/n,k/n]}(v)\,dP _{v} = &\sum _{k=1}^{n} e^{V_{1}-V_{k/n}} (P_{k/n}-P_{(k-1)/n}). \end{aligned}$$

Note that \(V\) and \(P\) are independent, the increments \(P_{k/n}-P_{(k-1)/n}\) are independent and identically distributed, and \(\mathcal{L}(V_{1}-V_{k/n})=\mathcal{L}(V_{(n-k)/n})\). Thus the right-hand sides of the above identities have the same distribution. The result follows because the left-hand sides tend in probability, respectively, to \(-Q_{-1}\) and \(-Q_{1}/M_{1}\). □

Thus \(Y_{\infty }\) is unbounded from above if so are the stochastic integrals \(Q_{\theta }\). Lemma 5.4 below shows that the \(Q_{\theta }\) are unbounded from above if the ordinary integrals \({\mathcal{J}}_{\theta }\) are unbounded from above. For the latter property, we prove necessary and sufficient conditions in terms of defining characteristics (Lemma 5.7). The case where these conditions are not fulfilled is treated separately (Lemma 5.8).

Lemma 5.4

If \(\mathcal{J}_{\theta }\) is unbounded from above, so is \(Q_{\theta }\).

Proof

We argue by using the following observation. Let \(\xi \) be a real-valued random variable and \(\eta \) a random variable taking values in a Polish space, with distributions \(\mathbf{P}_{\xi }\) and \(\mathbf{P}_{\eta }\). Let \(\mathbf{P}_{\xi | y}\) be a regular conditional distribution of \(\xi \) given \(\eta =y\). If for every real \(N\), the set \(\{y: \mathbf{P}_{\xi | y} [\xi \ge N]>0\}\) is a \(\mathbf{P}_{\eta }\)-nonnull set, then \(\xi \) is unbounded from above.

In the case \(\sigma _{P}^{2}>0\), we use the representation

$$ Q_{\theta }=-\sigma _{P}\int _{[0,1]} e^{-\theta V_{v}}\,dW^{P}_{v} + \int _{(0,1]} e^{-\theta V_{v-}}\,d(\sigma _{P}W^{P}_{v}-P_{v}). $$

Applying the above observation with \(\eta =(R,P-\sigma _{P} W^{P})\) and \(\xi \) the integral with respect to \(W^{P}\), and noting that the Wiener integral of a nonzero deterministic function is a nondegenerate Gaussian random variable, we get that \(Q_{\theta }\) is unbounded from above.

Now consider the case where \(\sigma _{P}^{2}=0\). For \({\varepsilon }>0\), we denote by \(\zeta ^{\varepsilon }\) the locally square-integrable martingale with

$$ \zeta ^{\varepsilon }_{t}:= e^{-\theta V_{-}} I_{\{|x|\le {\varepsilon }\}} x*(\mu ^{P}-\nu ^{P})_{t}. $$
(5.3)

Since \(\langle \zeta ^{\varepsilon }\rangle _{1}=e^{-2\theta V_{-}} I _{\{|x|\le {\varepsilon }\}} x^{2}*\nu ^{P}_{1}\to 0\) as \({\varepsilon }\to 0\), we have that \(\sup _{t\le 1}|\zeta ^{\varepsilon }_{t}|\to 0\) in probability. Note that

$$ Q_{\theta }=\big(\Pi _{P}(xI_{\{{\varepsilon }\le |x|\le 1\}})-a_{P} \big) \mathcal{J}_{\theta }- \zeta ^{\varepsilon }_{1} - e^{-\theta V _{-}} I_{\{|x|> {\varepsilon }\}} x*\mu ^{P}_{1}. $$

Take \(N>1\). Since \(\mathcal{J}_{\theta }\) is unbounded from above, there is \(N_{1}>N+1\) such that the set \(\{N\le {\mathcal{J}}_{\theta } \le N_{1}, \inf _{t\le 1}e^{-V_{t}}\ge 1/N_{1}\} \) is nonnull. Then

$$ \Gamma ^{\varepsilon }:= \Big\{ N\le {\mathcal{J}}_{\theta }\le N_{1}, \inf _{t\le 1}e^{-V_{t}}\ge 1/N_{1}, |\zeta ^{\varepsilon }_{1}|\le 1 \Big\} $$

is also a nonnull set for all sufficiently small \({\varepsilon }>0\).

As the process \(P\) is not a subordinator, we have only three possible cases:

1) \(\Pi _{P}((-\infty ,0))>0\): Then \(\Pi _{P}((-\infty ,-{\varepsilon } _{0}))>0\) for some \({\varepsilon }_{0}>0\). Due to their independence, the intersection of \(\Gamma ^{\varepsilon }\) with the set

$$ \{| I_{\{x< -{\varepsilon }\}} x*\mu ^{P}_{1}|\ge N_{1}(a_{P}^{+}N_{1}+N), I_{\{x >{\varepsilon }\}}* \mu _{1}^{P}=0 \} $$

is nonnull when \({\varepsilon }\in (0,{\varepsilon }_{0})\). On this intersection, we have that

$$ Q_{\theta }\ge -a_{P} {\mathcal{J}}_{\theta }- \zeta ^{\varepsilon } _{1} - e^{-\theta V_{-}} I_{\{x< -{\varepsilon }\}} x*\mu ^{P}_{1} \ge -a_{P}^{+}N_{1}-1+a_{P}^{+}N_{1}+N\ge N-1. $$

2) \(\Pi _{P}((-\infty ,0))=0\), \(\Pi _{P}(h)=\infty \): Decreasing \({\varepsilon }\) if necessary to ensure the inequality \(\Pi _{P}(xI_{\{{\varepsilon }< x\le 1\}})\ge N_{1}(a_{P}^{+}N_{1}+N)\), we have that

$$ Q_{\theta }=-a_{P} {\mathcal{J}}_{\theta }- \zeta ^{\varepsilon }_{1} + e^{-\theta V_{-}} xI_{\{{\varepsilon }< x\le 1\}} *\nu ^{P}_{1} \ge -a_{P}^{+}N_{1}-1+a_{P}^{+}N_{1}+N\ge N-1 $$

on the nonnull set \(\Gamma ^{\varepsilon }\cap \{I_{\{x>{\varepsilon } \}}*\mu ^{P}_{1}=0\}\).

3) \(\Pi _{P}((-\infty ,0))=0\), \(\Pi _{P}(h)<\infty \) and \(\Pi _{P}(h)-a _{P}>0\): Then on the nonnull set \(\{\mathcal{J}_{\theta }\ge N\} \cap \{I_{\{x>0\}}*\mu ^{P}_{1}=0\}\), we have that

$$ Q_{\theta }= \big(\Pi _{P}(h)-a_{P}\big)\mathcal{J}_{\theta }\ge \big(\Pi _{P}(h)-a_{P}\big)N. $$

Since \(N\) is arbitrary, \(Q_{\theta }\) is unbounded from above in all three cases. □

Remark 5.5

If \(\mathcal{J}_{1}I_{\{V_{1}<0\}}\) is unbounded from above, so is \(Q_{1}I_{\{V_{1}<0\}}\).

Remark 5.6

The proof above shows that in the case where \(\sigma _{P}=0\), there is a constant \(\kappa >0\) such that if the set \(\{\mathcal{J}_{\theta }>N \}\) is nonnull, then \(Q_{\theta }>\kappa N\) on an \(\mathcal{F}^{R,P}_{1}\)-measurable nonnull subset of the latter set. The statement remains valid, with obvious changes, if integration over the interval \([0,1]\) is replaced by integration over an arbitrary finite interval \([0,T]\).

Lemma 5.7

(i) The random variable \(\mathcal{J}_{1}\) is unbounded from above if and only if \(\sigma ^{2}+\Pi ((-1,0))>0\) or \(\Pi (xI_{\{0< x\le 1\}})=\infty \).

(ii) The random variable \(\mathcal{J}_{-1}\) is unbounded from above if and only if \(\sigma ^{2}+\Pi ((0,\infty ))>0\) or \(\Pi (xI_{\{x<0\}})=-\infty \).

Proof

In the case where \(\sigma ^{2}>0\), the “if” parts of the statements are obvious: \(W\) is independent of the jump part of \(V\), and the distribution of the random variable \(\int _{0}^{1}e^{-\sigma \theta W_{v}}g(v)\,dv\), where \(g>0\) is a deterministic function, has support unbounded from above. So suppose that \(\sigma =0\) and consider the “if” parts separately. Note that in this case,

$$ V_{t}=at + h*(\mu -\nu )_{t}+(\varphi -h)*\mu _{t}, $$
(5.4)

where \(\varphi =\varphi (x)=\ln (1+x)\).

(i) Consider first the case where \(\Pi ((-1,0))>0\), i.e., \(\Pi ((-1,-{\varepsilon }))>0\) for some \({\varepsilon }\in (0,1)\). Then the process \(V\) given by (5.4) admits the decomposition

$$ V_{t} =\big(a-\Pi (xI_{\{-1< x\le -{\varepsilon }\}})\big)t+V^{(1)} _{t}+V^{(2)}_{t}, $$

where \(V^{(1)}_{t}:=I^{(1)} x*(\mu -\nu )_{t}+(\varphi (x)-x)I^{(1)}* \mu _{t}+\varphi (x)I_{\{x> 1\}}*\mu _{t}\) with \(I^{(1)}=I_{\{ -{\varepsilon }< x\le 1\}}\) and \(V^{(2)}_{t}:=\varphi (x)I_{\{-1< x\le -{\varepsilon }\}}*\mu _{t}\). The processes \(V^{(1)}\) and \(V^{(2)}\) are independent. The decreasing process \(V^{(2)}\) has jumps of size not less than \(|\ln (1-{\varepsilon })|\) and the number of jumps on the interval \([0,t]\) is a Poisson random variable with parameter \(t\Pi ((-1,-{\varepsilon }))>0\). Hence \(V^{(2)}_{t}\) is unbounded from below for any \(t\in (0,1)\). In particular, for any \(N>0\), the set where \(e^{- V^{(2)}}\ge N\) on the interval \([1/2,1]\) is nonnull. The required property follows from these considerations.

Now suppose that we have \(\Pi (h(x)I_{\{x>0\}})=\infty \). We assume without loss of generality that \(\Pi ((-1,0))=0\). In this case, the process \(V\) has only positive jumps. Take arbitrary \(N>1\) and choose \({\varepsilon }>0\) such that we have \(\Pi (xI_{\{{\varepsilon }< x \le 1 \}})>2N\) and \(\Pi (I_{\{0< x\le {\varepsilon }\}}\varphi ^{2}(x)) \le 1/(32N^{2})\). We have the decomposition

$$ V_{t}=ct +V^{(1)}_{t}+V^{(2)}_{t}+V^{(3)}_{t}, $$

where the processes \(V^{(1)}:=I_{\{0< x\le {\varepsilon }\}}\varphi (x)*( \mu -\nu )\), \(V^{(2)}:=I_{\{{\varepsilon }< x\le 1\}}\varphi (x)*( \mu -\nu )\) and \(V^{(3)}:= I_{\{x> 1\}}\varphi (x)*\mu \) are independent and \(c:= a+\Pi ((\varphi (x)-x)I_{\{0< x\le 1\}})<\infty \). By the Doob inequality, \(\mathbf{P}[\sup _{t\le 1}V^{(1)}_{t} < N/2]>1/2\). The processes \(V^{(2)}\) and \(V^{(3)}\) have no jumps on \([0,1]\) on a nonnull set. In the absence of jumps, the trajectory of \(V^{(2)}\) is the linear function \(y_{t}=-\Pi (\varphi (x)I_{\{{\varepsilon }< x \le 1 \}})t\le -2Nt\). It follows that

$$ \sup _{1/2\le t\le 1}V_{t}\le c-N/2 $$

on a set of positive probability. This implies that \(\mathcal{J}_{1}\) is unbounded from above.

(ii) Let first \(\Pi ((0,\infty ))>0\), i.e., \(\Pi (({\varepsilon },\infty ))>0\) for some \({\varepsilon }> 0\). Then

$$ V_{t} =\big(a-\Pi (hI_{\{x>{\varepsilon }\}})\big)t+ V^{(1)}_{t}+ V ^{(2)}_{t}, $$

where

$$\begin{aligned} V^{(1)}_{t} :=& I_{\{ x\le {\varepsilon }\}}h*(\mu -\nu )_{t}+\big( \varphi (x)-h\big)I_{\{x\le {\varepsilon }\}}*\mu _{t}, \\ V^{(2)}_{t} :=&\varphi (x)I_{\{x> {\varepsilon }\}}*\mu _{t}. \end{aligned}$$

The processes \(V^{(1)}\) and \(V^{(2)}\) are independent. The increasing process \(V^{(2)}\) has jumps of size not less than \(\varphi ({\varepsilon })\) and the number of jumps on the interval \([0,t]\) is a Poisson random variable with parameter \(t\Pi (({\varepsilon },\infty ))>0\). Hence \(V^{(2)}_{t}\) is unbounded from above for any \(t\in (0,1)\). In particular, for any \(N>0\), the set where \(e^{V^{(2)}}\ge N\) on the interval \([1/2,1]\) is nonnull. These facts imply the required property.

It remains to consider the case \(\Pi (xI_{\{x<0\}})=-\infty \) and \(\Pi ((0,\infty ))=0\). The process \(V\) has only negative jumps. Take arbitrary \(N>1\) and choose \({\varepsilon }\in (0,1/2)\) such that \(-\Pi (\varphi (x)I_{\{-1/2< x\le -{\varepsilon }\}})>2N\) and \(\Pi (I_{\{-{\varepsilon }< x<0 \}}\varphi ^{2}(x))\le 1/(32N^{2})\). This time, we use the representation

$$ V_{t}=ct +V^{(1)}_{t}+ V^{(2)}_{t}+ V^{(3)}_{t}, $$

where the processes

$$\begin{aligned} V^{(1)} &:=I_{\{-{\varepsilon }< x< 0\}}\varphi (x)*(\mu -\nu ), \\ V^{(2)} &:=I_{\{-1/2< x\le -{\varepsilon }\}}\varphi (x)*(\mu -\nu ), \\ V^{(3)} &:=I_{\{-1< x\le -1/2\}}\varphi (x)*\mu \end{aligned}$$

are independent and \(c:=a+\Pi (\varphi (x) I_{\{-1/2< x<0\}}-h)\). Due to the Doob inequality, \(\mathbf{P}[\sup _{t\le 1} |V^{(1)}_{t}| < N/2]>1/2\). The processes \(V^{(2)}\) and \(V^{(3)}\) have no jumps on \([0,1]\) with strictly positive probability. In the absence of jumps, the trajectory of \(V^{(2)}\) is the linear function \(y_{t}=-\Pi (\varphi (x)I_{\{-1/2< x \le -{\varepsilon }\}})t\ge 2Nt\). It follows that

$$ \inf _{1/2\le t\le 1}V_{t}\ge c+N/2 $$

on a nonnull set. This implies that \(\mathcal{J}_{-1}\) is unbounded from above.

Finally, the “only if” parts of the lemma are obvious. □

Summarising, we conclude that \(Q_{1}\) and \(Q_{-1}\) (and hence \(Y_{\infty }\)) are unbounded from above if \(\sigma ^{2}>0\), or \(\sigma ^{2}_{P}>0\), or \(\Pi (|h|)=\infty \), or both \(\Pi ((-1,0))>0\) and \(\Pi ((0,\infty ))>0\). The remaining cases are treated in the following result.

Lemma 5.8

Let \(\sigma =0\), \(\Pi (|h|)<\infty \), \(\sigma _{P}=0\). If \(\Pi ((-1,0))=0\) or \(\Pi ((0,\infty ))=0\), then the random variable \(Y_{\infty }\) is unbounded from above.

Proof

By our assumptions, \(V_{t}=ct+L_{t}\) with the constant \(c:=a-\Pi (h)\) and \(L_{t}:=\varphi *\mu _{t}\). The assumption \(\beta >0\) implies that \(\mathbf{P}[V_{1}<0]>0\) and \(\mathbf{P}[V_{1}>0]>0\). So there are two cases which we consider separately.

(i) \(c<0\) and \(\Pi ((0,\infty ))>0\): Take any \(T>1\). Then \(\int _{[0,T]}e^{-V_{t}}\,dt\ge T/e\) on the nonnull set \(\{L_{T}\le 1\}\). By virtue of Remark 5.6, on the nonnull \(\mathcal{F}^{R,P}_{T}\)-measurable subset \(\Gamma _{T}\subseteq \{L _{T}\le 1\}\), we have \(-\int _{[0,T]}e^{-V_{t-}}\,dP_{t}\ge K_{T} \), where \(K_{T}\to \infty \) as \(T\to \infty \). For every \(T>1\),

$$ \mathbf{P}[\Gamma _{T}\cap \{ L_{T+1}-L_{T}\ge |c|(T+1)\}]=\mathbf{P}[ \Gamma _{T}] {\mathbf{P}}[L_{T+1}-L_{T}\ge |c|(T+1)]>0. $$

Let \(\zeta ^{\varepsilon }\) be the square-integrable martingale given by (5.3) (note that \(-V\) is here bounded above by a deterministic function) with \(\theta =1\). Take \(N>1\) sufficiently large and \({\varepsilon }>0\) sufficiently small to ensure that the set \(\Gamma _{T}^{{\varepsilon },N}\) defined as the intersection of \(\Gamma _{T}\cap \{ L_{T+1}-L_{T}\ge |c|(T+1)\}\) and

$$ \bigg\{ \sup _{s\in [T,T+1]}e^{-V_{s}}\le N, \inf _{s\in [T,T+1]}e^{-V _{s}}\ge 1/N\bigg\} \cap \{ |\zeta ^{\varepsilon }_{T+1}- \zeta ^{\varepsilon }_{T}|\le 1\} $$

is nonnull. Let us consider the representation

$$\begin{aligned} Y_{\infty } =&-\int _{(0,T]}e^{-V_{t-}}\,dP_{t}+a_{P}^{\varepsilon } \int _{(T,T+1]}e^{-V_{t}} \,dt - \zeta _{T+1}^{\varepsilon }+\zeta _{T} ^{\varepsilon } \\ &-I_{(T,\infty )} e^{-V_{-}}xI_{\{|x|>{\varepsilon }\}}*\mu ^{P}_{T+1} + e^{-V_{T+1}}Y_{T+1,\infty }. \end{aligned}$$

Take an arbitrary \(y<0\) such that the set \(\{Y_{T+1,\infty }>y\}\) is nonnull. Since the process \(P\) is not a subordinator and \(\sigma _{P}=0\), it must satisfy one of the characterising conditions 1)–3) of Sect. 2. Let us consider them consecutively. If \(\Pi _{P}((-\infty ,0))>0\), then there is \({\varepsilon }_{0}>0\) such that \(\Pi _{P}((-\infty ,-{\varepsilon }_{0}))>0\). Due to their independence, the intersection of \(\Gamma _{T}^{{\varepsilon },N}\) with the set

$$ \tilde{\Gamma }_{T}^{{\varepsilon },N}:= \{ I_{[T,\infty )} I_{\{x< - {\varepsilon }\}} *\mu ^{P}_{T+1}\ge -(1/{\varepsilon }) N^{2}a_{P} ^{\varepsilon }, I_{[T,\infty )}I_{\{x >{\varepsilon }\}}*\mu _{T+1} ^{P}=0 \} $$

is nonnull when \({\varepsilon }\in (0,{\varepsilon }_{0})\). Due to their independence, the intersection of \(\Gamma _{T}^{{\varepsilon },N} \,{\cap}\, \tilde{\Gamma }_{T}^{{\varepsilon },N}\) and \(\{Y_{T+1,\infty }>y \}\) is also a nonnull set. But on this intersection, we have the inequality \(Y_{\infty }\ge K_{T}-1+y\), implying that \(Y_{\infty }\) is unbounded from above.

Suppose next that \(\Pi _{P}((-\infty ,0))=0\) and \(\Pi _{P}(h)=\infty \). Thus for sufficiently small \({\varepsilon }>0\), we have \(a_{P}^{\varepsilon }>0\). On the nonnull set

$$ \Gamma _{T}^{{\varepsilon },N}\cap \{I_{[T,\infty )}I_{\{x >{\varepsilon }\}}*\mu _{T+1}^{P}=0\} \cap \{Y_{T+1,\infty }>y\}, $$

the inequality \(Y_{\infty }\ge K_{T}-1+y\) holds and we conclude as above.

Finally, suppose that \(\Pi _{P}((-\infty ,0))=0\), \(\Pi _{P}(h)<\infty \) and \(\Pi _{P}(h)-a_{P}>0\). In this case, we can use the representation

$$\begin{aligned} Y_{\infty } =&-\int _{(0,T]}e^{-V_{t-}}\,dP_{t}+\big(\Pi _{P}(h)-a_{P} \big)\int _{(T,T+1]}e^{-V_{t}} \,dt \\ &{}-I_{(T,\infty )} e^{-V_{-}}xI_{\{x>0\}}*\mu ^{P}_{T+1} + e^{-V_{T+1}}Y _{T+1,\infty }. \end{aligned}$$

On the nonnull set \(\Gamma _{T}^{{\varepsilon },N}\cap \{I_{(T,\infty )} I_{\{x>0\}}*\mu ^{P}_{T+1}=0\}\cap \{Y_{T+1,\infty }>y\}\), we have that \(Y_{\infty }\ge K_{T}+y\), implying that \(Y_{\infty }\) is unbounded from above.

(ii) \(c>0\) and \(\Pi ((-1,0))>0\): In this case, there are \(0<\gamma <\gamma _{1}<1\) such that the sets \(\{I_{(-1,-\gamma _{1})}* \mu _{1}= 0\}\), \(\{I_{(-\gamma _{1},-\gamma )}*\mu _{1/2} =I_{(-\gamma _{1},-\gamma )}*\mu _{1}=N\}\) and \(\{\varphi I_{(-\gamma _{1},0)}*\mu _{1}\ge -1\}\) are nonnull. Due to their independence, their intersection \(A_{N}\) is also nonnull. On \(A_{N}\), we have the bounds

$$ \begin{gathered} c + N\ln (1-\gamma _{1}) -1\le V_{1} \le c + N\ln (1-\gamma ), \\ \mathcal{J}_{1}:=\int _{[0,1]}e^{-V_{t}}\,dt \ge e^{-c}\int _{[1/2,1]}e ^{-\ln (1+x)*\mu _{t}}\,dt\ge \frac{1}{2} e^{-c}(1-\gamma )^{-N}. \end{gathered} $$

By virtue of Remark 5.6, there are a constant \(\kappa _{N}\) and an \(\mathcal{F}^{R,P}_{1}\)-measurable nonnull subset \(B_{N}\) of \(A_{N}\) such that \(Q_{1}\ge \kappa _{N}\) on \(B_{N}\) and \(\kappa _{N} \to \infty \) as \(N\to \infty \). Take \(T=T_{N}>0\) such that \(cT+N\ln (1-\gamma _{1}) - 2\ge 0 \). Then the set \(\{I_{(1,1+T)} \varphi (x)*\mu _{1+T}\ge -1\}\) is nonnull and its intersection with \(B_{N}\) is also nonnull. On this intersection, we have \(e^{-V_{1+T}} \le 1\) and \(c_{1}(N)\le V_{1+T}\le c_{2}(N) \), where \(c_{1}(N):=c + N \ln (1-\gamma _{1}) -2\) and \(c_{2}(N):=c(T+1) + N\ln (1-\gamma )\). With this, we complete the argument by considering the cases corresponding to the properties 1)–3) with obvious modifications. □

With the above lemma, the proof of Proposition 5.1 is complete.  □

Proof of Theorem 1.1

First, we relate the notations and hypotheses of Theorem 1.1 with those used in the results from implicit renewal theory summarised in Theorem A.6 of Appendix A. The hypothesis that \(H(\beta )=0\) means that \(\mathbf{E}[M^{\beta }]=1\) with \(M=M_{1}=e^{-V_{1}}\). Also, \(\mathbf{E}[M^{\beta +{\varepsilon }}]< \infty \) for some \({\varepsilon }>0\) since \(\beta \) does not belong to the boundary of the effective domain of the function \(H\). In view of (2.2) and Lemma 4.1, we have that \(\mathbf{E}[|Q|^{\beta }]<\infty \), where \(Q=Q_{1}=-\int _{(0,1]}e^{-V_{t-}}\,dP_{t}\). Proposition 5.1 provides the information that the almost sure limit \(Y_{\infty }\) of the process \(Y\) given by (3.1) exists, is finite, unbounded from above and has a law solving the distributional equation \(\mathcal{L}(Y_{\infty })=\mathcal{L}(Q+MY_{\infty })\), which can be written in the form (A.1). Thus all the conditions of Theorem A.6 are fulfilled. The latter gives the statements on the asymptotic behaviour of the tail function \(\bar{G}(u)=\mathbf{P}[Y_{\infty }>u]\) as \(u\to \infty \). Using Lemma 3.1 allows us to transform them into statements on the asymptotic behaviour of the ruin probability \(\Psi (u)\) and complete the proof. □
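The mechanism behind this argument is that the fixed point of \(\mathcal{L}(Y_{\infty })=\mathcal{L}(Q+MY_{\infty })\) has a power tail of index \(\beta \). This is easy to see in a Monte Carlo sketch; the one below uses the Brownian setting of Example 7.1 with hypothetical parameters \(a=\sigma =1\) (so that \(\beta =2a/\sigma ^{2}-1=1\)) and, purely for illustration, a standard exponential law for \(Q\):

```python
import math
import random

# Simulate the perpetuity Y = sum_n Q_n prod_{j<n} M_j with
# M_j = exp(-V_1) and V_1 ~ N(a - sigma^2/2, sigma^2), so that
# E[M^beta] = 1 holds with beta = 2a/sigma^2 - 1.  The exponential
# law of Q is an arbitrary illustrative choice.
a, sigma = 1.0, 1.0
beta = 2 * a / sigma**2 - 1  # = 1.0 in this illustration

def perpetuity(rng, n_terms=80):
    y, prod_m = 0.0, 1.0
    for _ in range(n_terms):
        y += prod_m * rng.expovariate(1.0)
        prod_m *= math.exp(-rng.gauss(a - sigma**2 / 2, sigma))
    return y

rng = random.Random(7)
ys = [perpetuity(rng) for _ in range(20000)]
p_lo = sum(y > 2.0 for y in ys) / len(ys)
p_hi = sum(y > 8.0 for y in ys) / len(ys)
slope = math.log(p_lo / p_hi) / math.log(8.0 / 2.0)
print(round(slope, 2))  # crude tail-index estimate, roughly beta
```

The slope of the empirical log tail between two thresholds gives a rough estimate of \(\beta \); it is pre-asymptotic, so only the order of magnitude should be trusted.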

Remark 5.9

The constant \(C_{\infty }\) in Theorem 1.1 is of the form \(C_{\infty }=C_{+}/\bar{G}(0)\), where \(C_{+}\) is given in (A.3).

Remark 5.10

Note that the hypothesis \(\beta \in {\mathrm{int\,}} {\mathrm{dom \,}} H\) can be replaced by the slightly weaker assumption \(\mathbf{E}[e ^{-\beta V_{1}}V_{1}^{-}]<\infty \).

Remark 5.11

The hypothesis that \(\mathcal{L}(V_{1})\) is non-arithmetic can also be replaced by a weaker one: one can assume that \(\mathcal{L}(V _{T})\) is non-arithmetic for some \(T>0\). Indeed, due to the identity \(\ln {\mathbf{E}}[e^{-\beta V_{T}}]=TH(\beta )\), the root \(\beta \) does not depend on the choice of the time unit.

The following lemma shows that the condition on \(\mathcal{L}(V_{1})\) can be formulated in terms of the Lévy triplet.

Lemma 5.12

The (non-degenerate) distribution of the random variable \(V_{1}\) is arithmetic if and only if \(\sigma =0\), \(\Pi ({\mathbb{R}})<\infty \) and there is \(d>0\) such that \(\Pi _{V}\) is concentrated on the lattice \(\Pi (h)-a+{\mathbb{Z}}d\).

Proof

Recall that \(\sigma _{V}=\sigma \) and \(\Pi _{V}=\Pi \varphi ^{-1}\), where \(\varphi : x\mapsto \varphi (x)\). So we have \(\Pi _{V}({\mathbb{R}})= \Pi ({\mathbb{R}})\). If \(\sigma _{V}>0\) or \(\Pi _{V}({\mathbb{R}})= \infty \), the distribution of \(V_{1}\) has a density; see [10, Proposition 3.12]. If \(\sigma =0\) and \(0<\Pi _{V}({\mathbb{R}})< \infty \), then \(V\) is a compound Poisson process with drift \(c=a-\Pi (h)\) and distribution of jumps \(F_{V}:=\Pi _{V}/\Pi _{V}({\mathbb{R}})\). In that case, \(\mathcal{L}(V _{1})\) is concentrated on the lattice \({\mathbb{Z}}d\) if and only if \(\Pi _{V}\) is concentrated on the lattice \(-c+{\mathbb{Z}}d\). □

Remark 5.13

The property that \(Y_{\infty }\) is unbounded from above can be deduced from the much more general Theorem 1 on the support of exponential functionals from the paper [6]. However, the results for the supports of \(\mathcal{J}_{\theta }\) and \(Q_{\theta }\) and the arguments presented here are of independent interest and can also be used without assuming, as in [6], that the limit \(Y_{\infty }\) exists.

6 Ruin with probability one

In this section, we give conditions under which ruin occurs with probability one for any initial reserve.

Recall the following ergodic property of the autoregressive process \((X_{n}^{u})_{n\ge 1}\) with random coefficients which is defined recursively by the relations

$$ X_{n}^{u}=A_{n}X_{n-1}^{u} +B_{n}, \qquad n\ge 1, \qquad X_{0}^{u}=u, $$
(6.1)

where \((A_{n},B_{n})_{n\ge 1}\) is a sequence of i.i.d. random vectors in \({\mathbb{R}}^{2}\) (see [34, Proposition 7.1], and [12] for a deeper result).

Lemma 6.1

Suppose that \(\mathbf{E}[|A_{n}|^{\delta }]<1\) and \(\mathbf{E}[|B_{n}|^{\delta }]<\infty \) for some \(\delta \in (0,1)\). Then for any \(u\in {\mathbb{R}}\), the sequence \((X_{n}^{u})\) converges in \(L^{\delta }\) (hence in probability) to the random variable

$$ X_{\infty }^{0}=\sum _{n=1}^{\infty }B_{n}\prod _{j=1}^{n-1}A_{j}, $$

and for any bounded uniformly continuous function \(f\),

$$ \frac{1}{N}\sum _{n=1}^{N} f(X_{n}^{u})\longrightarrow {\mathbf{E}}[f(X _{\infty }^{0})] \qquad \textit{in probability as } N\to \infty . $$
(6.2)
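Lemma 6.1 is easy to visualise numerically. In the sketch below (hypothetical coefficients: log-normal \(A_{n}\) with \(\mathbf{E}[\ln A_{n}]<0\), so \(\mathbf{E}[|A_{n}|^{\delta }]<1\) for small \(\delta \), and Gaussian \(B_{n}\)), two chains driven by the same noise forget their initial values, since the influence of \(u\) is the factor \(\prod _{j}A_{j}\to 0\):

```python
import math
import random

def run_chain(u, n_steps, rng):
    # Iterates X_n = A_n X_{n-1} + B_n from (6.1); here A_n is log-normal
    # with E[ln A_n] = -1 and B_n is standard Gaussian (illustrative choices).
    x = u
    for _ in range(n_steps):
        a = math.exp(rng.gauss(-1.0, 0.5))
        b = rng.gauss(0.0, 1.0)
        x = a * x + b
    return x

# Two chains with the same driving noise but very different starting points:
x1 = run_chain(100.0, 200, random.Random(42))
x2 = run_chain(-100.0, 200, random.Random(42))
print(abs(x1 - x2))  # negligible: the initial value is forgotten
```

The difference of the two chains is exactly \(200\prod _{j=1}^{200}A_{j}\), which is astronomically small here.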

Corollary 6.2

Suppose that \(\mathbf{E}[|A_{n}|^{\delta }]<1\) and \(\mathbf{E}[|B_{n}|^{\delta }]<\infty \) for some \(\delta \in (0,1)\).

(i) If \(\mathbf{P}[X_{\infty }^{0}<0]>0\), then \(\inf _{n\ge 1}X_{n}^{u}<0\).

(ii) If \(A_{1}>0\) and \(B_{1}/A_{1}\) is unbounded from below, then \(\inf _{n\ge 1}X_{n}^{u}<0\).

Proof

We get (i) by a straightforward application of (6.2) to the function

$$ f(x):=I_{\{x< -1\}}+x I_{\{-1\le x< 0\}}. $$

The statement (ii) follows from (i). Indeed, put \(X_{\infty }^{0,1}:=\sum _{n= 2}^{\infty }B_{n}\prod _{j= 2}^{n-1}A_{j}\). Then

$$ X_{\infty }^{0}=B_{1}+A_{1} X_{\infty }^{0,1}=A_{1}(X_{\infty }^{0,1}+B _{1}/A_{1}). $$

Since \(B_{1}/A_{1}\) and \(X_{\infty }^{0,1}\) are independent and the random variable \(B_{1}/A_{1}\) is unbounded from below, \(\mathbf{P}[X_{\infty }^{0}<0]>0\). □

Let \(M_{j}\) and \(Q_{j}\) be the same as in (5.2).

Proposition 6.3

Suppose that \(\mathbf{E}[M_{1}^{-\delta }]<1\) and \(\mathbf{E}[M_{1}^{-\delta }|Q_{1}|^{\delta }]<\infty \) for \(\delta \in (0,1)\). If \(Q_{1}\) is unbounded from above, then \(\Psi (u)\equiv 1\).

Proof

The process \(X^{u}\) solving (1.1) and restricted to integer values of the time scale admits the representation

$$ X_{n}^{u}=e^{V_{n}-V_{n-1}}X_{n-1}^{u} +e^{V_{n}} \int _{(n-1,n]}e^{-V_{t-}}\,dP_{t}, \qquad n\ge 1, \qquad X_{0}^{u}=u. $$

That is, \(X_{n}^{u}\) is given by (6.1) with \(A_{n}=M_{n}^{-1}\) and \(B_{n}=-M_{n}^{-1}Q_{n}\). The result follows from statement (ii) of Corollary 6.2. □

Now we give more specific conditions for ruin with probability one in terms of the triplets.

Theorem 6.4

Suppose that \(0\in {\mathrm{int\,}} {\mathrm{dom\,}} H\) and \(\Pi _{P}(|\bar{h}|^{\varepsilon })<\infty \) for some \({\varepsilon }>0\). If \(a_{V}+\Pi (\bar{h}(\ln (1+x)))\le 0\), then \(\Psi (u)\equiv 1\).

Proof

Note that \(D^{-}H(0)=-a_{V}-\Pi (\bar{h}(\ln (1+x)))\). If \(D^{-}H(0)>0\), then for all \(q<0\) sufficiently close to zero, we have \(H(q)<0\), that is, \(\mathbf{E}[M_{1}^{q}]<1\). By virtue of Lemma 5.3, we have \(\mathcal{L}(M_{1}^{-1}Q_{1})=\mathcal{L}(Q_{-1})\). If \(\Pi _{P}(|\bar{h}|^{\varepsilon })<\infty \) for some \({\varepsilon }>0\), Lemma 4.1 implies that \(\mathbf{E}[|Q_{-1}|^{q}]<\infty \) for sufficiently small \(q>0\). To get the result, we can use Proposition 6.3. Indeed, by virtue of Lemmas 5.4 and 5.7 (i), the random variable \(Q_{1}\) is unbounded from above, except possibly in the case where \(\sigma ^{2}=0\), \(\sigma ^{2}_{P}=0\), \(\Pi (|h|)<\infty \), \(\Pi ((-1,0))=0\) and \(\Pi ((0,\infty ))>0\). Recall that in this special case, we have \(V_{t}=ct+L_{t}\), where \(c:=a-\Pi (h)\) and \(L_{t}:=\ln (1+x)* \mu _{t}\). Note that

$$ X_{n}^{0}=\int _{(0,n]}e^{V_{n}-V_{t-}}\,dP_{t} \stackrel{d}{=} \int _{(0,n]}e ^{V_{t-}}\,dP_{t}=:-\widehat{Y}_{n}, $$

where the equality in law holds by virtue of Lemma 5.3 (the latter is formulated for \([0,1]\), but its extension to arbitrary intervals is obvious). The random variable \(\widehat{Y}_{n}\) is defined by the same formula as \(Y_{n}\) with \(V\) replaced by \(-V\). As in Proposition 5.1, we show that \((\widehat{Y}_{n})\) converges to a finite value \(\widehat{Y}_{\infty }\) in probability. It follows that \(\mathcal{L}(X _{n}^{0})=\mathcal{L}(-\widehat{Y}_{n})\). As in Lemma 5.8 (i), we can show that \(\widehat{Y}_{\infty }\) is unbounded from above.

In the case where \(D^{-}H(0)= 0\), we consider, following [34], the discrete-time process \((\tilde{X}^{u}_{n})_{n \in {{\mathbb{N}}}}\), where \(\tilde{X}^{u}_{n}=X_{T_{n}}\) and the descending ladder times \(T_{n}\) of the random walk \((V_{n})_{n\in {{\mathbb{N}}}}\) are defined by \(T_{0}:=0\) and

$$ T_{n}:=\inf \{k>T_{n-1}: V_{k}-V_{T_{n-1}}< 0\}. $$

Since \(J(q)=\Pi (I_{\{|\ln (1+x)|>1\}}(1+x)^{-q})<\infty \) for any \(q\in (\underline{q},\bar{q})\), we have that \(\Pi (\ln ^{2}(1+x))<\infty \). The formula (2.1) can be written as

$$ V_{t}=\sigma W_{t}+ \ln (1+x)*(\mu -\nu )_{t}, $$

i.e., \(V\) is a square-integrable martingale so that \(\mathbf{E}[V_{1}]=0\) and \(\mathbf{E}[V_{1}^{2}]<\infty \). According to Feller’s book [13, Chap. XII.7, Theorem 1a and the remark preceding it], the above properties imply that there is a finite constant \(c\) such that

$$ {\mathbf{P}}\left [T_{1} > n\right ] \le cn^{-1/2}. $$
(6.3)

It follows in particular that the differences \(T_{n}-T_{n-1}\) are well defined and form a sequence of finite independent random variables distributed as \(T_{1}\). The discrete-time process \((\tilde{X}^{u}_{n})=(X ^{u}_{T_{n}})\) has the representation

$$ \tilde{X}_{n}^{u}=e^{V_{T_{n}}-V_{T_{n-1}}}\tilde{X}_{n-1}^{u} +e^{V _{T_{n}}} \int _{(T_{n-1},T_{n}]}e^{-V_{t-}}\,dP_{t}, \quad n\ge 1, \qquad \tilde{X}_{0}^{u}=u, $$

and solves the linear equation

$$ \tilde{X}_{n}^{u}=\tilde{A}_{n}\tilde{X}_{n-1}^{u} +\tilde{B}_{n}, \quad n \ge 1, \qquad \tilde{X}_{0}^{u}=u, $$

where

$$ \tilde{A}_{n}:=e^{V_{T_{n}}-V_{T_{n-1}}}, \qquad \tilde{B}_{n}:= e ^{V_{T_{n}}} \int _{(T_{n-1},T_{n}]}e^{-V_{t-}}\,dP_{t} $$

and \(\tilde{B}_{1}/\tilde{A}_{1}=-Y_{T_{1}}\), where \(Y\) is given by (3.1). By construction, \(\tilde{A}^{\delta }_{1}<1\) for any \(\delta >0\). Using the definition of \(Q_{j}\) given by (5.2), we have that

$$ \vert \tilde{B}_{1} \vert \le \sum ^{T_{1}}_{j=1} e^{V_{T_{1}}-V_{j-1}} \vert {Q}_{j}\vert \le \sum ^{T_{1}}_{j=1}\vert Q_{j}\vert . $$

According to Lemma 4.1, \(\mathbf{E}[\vert Q_{1}\vert ^{p}]<\infty \) for some \(p\in (0,1)\). Taking \(r\in (0,p/5)\) and defining the sequence \(\ell _{n}:=\lfloor n^{4r}\rfloor \), using the Chebyshev inequality and (6.3) gives

$$\begin{aligned} {\mathbf{E}}[\vert \tilde{B}_{1} \vert ^{r} ] &\le 1+r\sum _{n\ge 1} {n^{r-1}}{\mathbf{P}}\bigg[ \sum ^{T_{1}}_{j=1} \vert Q_{j}\vert >n \bigg] \\ &\le 1+r\sum _{n\ge 1} {n^{r-1}} {\mathbf{P}}\bigg[ \sum ^{\ell _{n}} _{j=1} \vert Q_{j}\vert >n\bigg]+ r\sum _{n\ge 1} {n^{r-1}} {\mathbf{P}} [ T_{1}>\ell _{n}] \\ &\le 1+r\mathbf{E}[\vert Q_{1}\vert ^{p}] \sum _{n\ge 1} \ell _{n}n^{r-1-p} +rc\sum _{n\ge 1} n^{r-1}\ell _{n}^{-1/2}< \infty . \end{aligned}$$

To apply Corollary 6.2 (ii), it remains to check that \(Y_{T_{1}}\) is unbounded from above. Since \(\{Q_{1}>N, V_{1}<0\} \subseteq \{Y_{T_{1}}>N\}\), it is sufficient to check that the probability of the set on the left-hand side is strictly positive for all \(N>0\), or, by virtue of Remark 5.5, that

$$ {\mathbf{P}}[{\mathcal{J}}_{1}>N, V_{1}< 0]>0, \qquad \forall N>0. $$
(6.4)

If \(\sigma ^{2}>0\), the conditional distribution of the process \((W_{s})_{s\le 1}\) given \(W_{1}=x\) coincides with the (unconditional) distribution of the Brownian bridge \(B^{x}=(B_{s}^{x})_{s\le 1}\) with \(B^{x}_{s} =W_{s}+s(x-W_{1})\). Using this, we easily get for any bounded positive function \(g\) and any \(y,M \in {\mathbb{R}}\) that

$$ \mathbf{P}\bigg[ \int _{0}^{1} e^{-\sigma W_{v}}g(v)\,d v>y , W_{1}< M \bigg]>0; $$

cf. [21, Lemma 4.2]. This implies (6.4).
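This conditioning device is easy to test numerically. A minimal Monte Carlo sketch (grid size, sample count and the target value \(x=2\) are arbitrary choices) checks that \(B^{x}_{s}=W_{s}+s(x-W_{1})\) has the Brownian-bridge mean \(sx\) and covariance \(s(1-t)\) for \(s\le t\):

```python
import random

# Monte Carlo check that W_s + s(x - W_1) is a Brownian bridge from 0 to x:
# mean s*x, covariance s(1-t) for s <= t.
x, n_steps, n_paths = 2.0, 100, 20000
rng = random.Random(5)
vals_25, vals_50 = [], []
for _ in range(n_paths):
    w, path = 0.0, []
    for _ in range(n_steps):
        w += rng.gauss(0.0, (1.0 / n_steps) ** 0.5)
        path.append(w)               # path[k-1] = W_{k/n_steps}
    w1 = path[-1]
    vals_25.append(path[n_steps // 4 - 1] + 0.25 * (x - w1))   # B^x_{1/4}
    vals_50.append(path[n_steps // 2 - 1] + 0.50 * (x - w1))   # B^x_{1/2}

m25 = sum(vals_25) / n_paths
m50 = sum(vals_50) / n_paths
cov = sum((u - m25) * (v - m50) for u, v in zip(vals_25, vals_50)) / n_paths
print(round(m50, 2), round(cov, 3))  # near 0.5*x = 1.0 and 0.25*(1-0.5) = 0.125
```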

Now suppose that \(\sigma ^{2}=0\), but \(\Pi ((-1,0))>0\), i.e., \(\Pi ((-1,-{\varepsilon }))>0\) for some \({\varepsilon }\in (0,1)\). In the decomposition \(V=V^{(1)}+V^{(2)}\), where

$$\begin{aligned} V^{(1)}_{t} :=&I_{\{-1< x\le -{\varepsilon }\}}\ln (1+x)*\mu _{t}, \\ V^{(2)}_{t} :=&\big(a-\Pi (hI_{\{-1< x\le -{\varepsilon }\}})\big)t+I _{\{x> -{\varepsilon }\}}h*(\mu -\nu )_{t} \\ &+I_{\{x> -{\varepsilon }\}}\big(\ln (1+x)-h\big)*\mu _{t}, \end{aligned}$$

the processes \(V^{(1)}\) and \(V^{(2)}\) are independent. The process \(V^{(1)}\) is decreasing by negative jumps whose absolute values are at least \(|\ln (1-{\varepsilon })|\), and the number of jumps on the interval \([0,1/2]\) has a Poisson distribution with parameter \((1/2)\Pi ((-1,-{\varepsilon }))>0\). Thus \(\mathbf{P}[V^{(1)}_{1/2}< -n]>0\) for any real \(n\). It follows that

$$\begin{aligned} {\mathbf{P}}[{\mathcal{J}}_{1}>N, V_{1}< 0] \ge & \mathbf{P}\bigg[\int _{0}^{1}e^{-V_{t}} \,dt>N, V_{1}< 0, V^{(1)}_{1/2}< -n \bigg] \\ \ge & \mathbf{P}\bigg[e^{n}\int _{1/2}^{1}e^{-V^{(2)}_{t}}\, dt>N, V ^{(2)}_{1}< n, V^{(1)}_{1/2}< -n \bigg] \\ =& \mathbf{P}\bigg[\int _{1/2}^{1}e^{-V^{(2)}_{t}} \,dt>Ne^{-n}, V^{(2)} _{1}< n\bigg] {\mathbf{P}}[ V^{(1)}_{1/2}< -n ]. \end{aligned}$$

The right-hand side is strictly positive for sufficiently large \(n\) and so (6.4) holds. Finally, the case where \(\Pi (xI_{\{0< x\le 1\}})=\infty \) is treated similarly to the last part of the proof of Lemma 5.7 (i). The exceptional case \(\Pi (|h|)<\infty \), \(\Pi ((-1,0))=0\), \(\Pi ((0,\infty ))>0\) is treated by a reduction to Corollary 6.2 (i). □
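The \(n^{-1/2}\) tail bound (6.3) for the first descending ladder epoch can be illustrated numerically for a simple mean-zero, finite-variance walk; standard Gaussian increments below are an illustrative choice, since (6.3) only uses \(\mathbf{E}[V_{1}]=0\) and \(\mathbf{E}[V_{1}^{2}]<\infty \):

```python
import random

# Tail of T_1 = inf{k : V_k < 0} for a random walk with N(0,1) increments.
def ladder_epoch(rng, cap=10000):
    v = 0.0
    for k in range(1, cap + 1):
        v += rng.gauss(0.0, 1.0)
        if v < 0:
            return k
    return cap  # censored sample; the true T_1 exceeds the cap

rng = random.Random(3)
samples = [ladder_epoch(rng) for _ in range(20000)]
for n in (10, 40, 160):
    p = sum(t > n for t in samples) / len(samples)
    print(n, round(p * n**0.5, 2))  # roughly constant, consistent with c*n^{-1/2}
```

The products \(\mathbf{P}[T_{1}>n]\sqrt{n}\) stabilise near a constant, as (6.3) predicts.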

7 Examples

Example 7.1

Let us consider a model with negative risk sums and Lévy measure \(\Pi _{P}(dx)=\lambda F_{P}(dx)\) with a constant \(\lambda >0\), where the probability distribution \(F_{P}(dx)\) is concentrated on \((0,\infty )\), and set

$$ a_{P}^{0}:=\lambda \int _{[0,1]}x F_{P}(d x)-a_{P}. $$

The process \(P\) admits a representation as the sum of a Wiener process with drift and an independent compound Poisson process, i.e.,

$$ P_{t}=-a_{P}^{0}t +\sigma _{P}W_{t}^{P}+\sum ^{N^{P}_{t}}_{j=1} \xi _{j}, $$
(7.1)

where the Poisson process \(N^{P}\) with intensity \(\lambda \) is independent of the sequence \((\xi _{j})_{j\ge 1}\) of positive i.i.d. random variables with common distribution \(F_{P}\). Suppose that the price process is a geometric Brownian motion, i.e.,

$$ \mathcal{E}_{t}(R)=e^{V_{t}}=e^{(a-\sigma ^{2}/2)t+\sigma W_{t}}, $$

so that \(\sigma \neq 0\) and \(\Pi \equiv 0\).

For this model, we have \(\underline{q}=-\infty \) and \(\bar{q}=\infty \). The condition \(D^{+}H(0)<0\) reduces to the inequality \(\sigma ^{2}/2< a\), and the function \(H(q)=(\sigma ^{2}/2-a+q\sigma ^{2}/2)q\) has the root \(\beta =2a/\sigma ^{2}-1>0\). Suppose that \(\sigma ^{2}_{P}+(a_{P}^{0})^{+}>0\). By Theorem 1.1, the exact asymptotics \(\Psi (u)\sim C_{\infty }u^{-\beta }\) as \(u\to \infty \) holds if \(\mathbf{E}[\xi _{1}^{\beta }]<\infty \). Since the exponential distribution has finite moments of all orders, we recover as a very particular case the asymptotic result of [21], where it was assumed that \(\sigma ^{2}_{P}=0\) and \(a_{P}^{0}>0\).
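In this example everything is explicit, so the root and the condition \(\mathbf{E}[M^{\beta }]=1\) can be verified directly. A small sketch (the parameter values \(a=0.08\), \(\sigma =0.2\) are purely illustrative):

```python
import math

# Explicit quantities of Example 7.1 with hypothetical parameter values.
a, sigma = 0.08, 0.2

def H(q):
    return (sigma**2 / 2 - a + q * sigma**2 / 2) * q

beta = 2 * a / sigma**2 - 1   # closed-form root of H(q) = 0; here beta = 3
print(round(beta, 6), abs(H(beta)) < 1e-12)

# H(beta) = 0 is exactly E[M^beta] = 1 with M = e^{-V_1} and
# V_1 ~ N(a - sigma^2/2, sigma^2), since
# E[e^{-q V_1}] = exp(-q(a - sigma^2/2) + q^2 sigma^2/2) = exp(H(q)).
em_beta = math.exp(-beta * (a - sigma**2 / 2) + beta**2 * sigma**2 / 2)
print(abs(em_beta - 1.0) < 1e-9)
```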

If \(\sigma ^{2}_{P}+(a_{P}^{0})^{+}>0\), \(\sigma ^{2}/2\ge a\) and \(\mathbf{E}[\xi _{1}^{\epsilon }]<\infty \) for some \(\epsilon >0\), then Theorem 6.4 implies that \(\Psi (u)\equiv 1\).

Models with a price process given by a geometric Brownian motion have been intensively studied by using the representation of \(\Psi \) as a solution of integro-differential equations. Readers interested not only in asymptotic results, but also in the behaviour of ruin probabilities for finite values of the initial capital, are referred to the very detailed study [7], which contains a number of simulation results.

Example 7.2

Let the process \(P\) again be given by (7.1) and suppose that the price process has a jump component, namely,

$$ \mathcal{E}_{t}(R)=\exp \bigg((a-\sigma ^{2}/2)t+\sigma W_{t}+\sum _{j=1} ^{N_{t}}\ln (1+\eta _{j})\bigg), $$

where the Poisson process \(N\) with intensity \(\lambda >0\) is independent of the sequence \((\eta _{j})_{j\ge 1}\) of i.i.d. random variables with common distribution \(F\) not concentrated at zero and such that \(F((-\infty ,-1])=0\); see [25, Chap. 7]. That is, the log price process is represented as

$$ V_{t}=(a-\sigma ^{2}/2)t+\sigma W_{t}+ \ln (1+x)*\mu _{t}, $$

where \(\Pi (dx)=\lambda F(dx)\). The function \(H\) is given by the formula

$$ H(q)=(\sigma ^{2}/2-a+q\sigma ^{2}/2)q+\lambda \big(\mathbf{E}[(1+\eta _{1})^{-q}]-1\big). $$

Suppose that \(\mathbf{E}[(1+\eta _{1})^{-q}]<\infty \) for all \(q>0\). Then \(\bar{q}=\infty \). Let \(\sigma \neq 0\). Then \(\limsup _{q\to \infty } H(q)/q= \infty \). If

$$ D^{+}H(0)=\sigma ^{2}/2-a -\lambda {\mathbf{E}}[\ln (1+\eta _{1})]< 0, $$

then the root \(\beta >0\) of the equation \(H(q)=0\) exists. Thus if \(\mathbf{E}[\xi _{1}^{\beta }]<\infty \), then Theorem 1.1 can be applied to get that \(\Psi (u)\sim C_{\infty }u^{-\beta }\), where \(C_{\infty }>0\).

If \(\mathbf{E}[(1+\eta _{1})^{1-2a/\sigma ^{2}}]<1\) (resp. \(\mathbf{E}[(1+ \eta _{1})^{1-2a/\sigma ^{2}}]>1\)), the root \(\beta \) is larger (resp. smaller) than \(2a/\sigma ^{2}-1\), the value of the root of \(H\) in the model of Example 7.1 where the price process is continuous.
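When the price process jumps, \(\beta \) is no longer explicit, but \(H\) is cheap to evaluate and the root can be located by bisection. A sketch with a hypothetical two-point jump distribution \(\mathbf{P}[\eta =0.1]=\mathbf{P}[\eta =-0.1]=1/2\) and illustrative parameters:

```python
import math

# Locating beta for Example 7.2 by bisection; all parameter values and the
# two-point law of eta are illustrative assumptions.
a, sigma, lam = 0.08, 0.2, 1.0

def moment(q):
    # E[(1 + eta)^{-q}] for the two-point eta
    return 0.5 * (1.1 ** (-q) + 0.9 ** (-q))

def H(q):
    return (sigma**2 / 2 - a + q * sigma**2 / 2) * q + lam * (moment(q) - 1)

# D+H(0) = sigma^2/2 - a - lam*E[ln(1+eta)] must be negative for a root beta > 0:
assert sigma**2 / 2 - a - lam * 0.5 * (math.log(1.1) + math.log(0.9)) < 0

lo, hi = 1e-9, 50.0          # H(lo) < 0 < H(hi); H is convex, so the root is unique
for _ in range(200):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if H(mid) < 0 else (lo, mid)
beta = (lo + hi) / 2
print(round(beta, 3))

# Consistency with the comparison above: here E[(1+eta)^{1-2a/sigma^2}] > 1,
# so beta must be smaller than the continuous-price root 2a/sigma^2 - 1 = 3.
assert moment(2 * a / sigma**2 - 1) > 1 and beta < 2 * a / sigma**2 - 1
```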

Now let \(\sigma = 0\). If

$$ D^{+}H(0)=-a -\lambda {\mathbf{E}}[\ln (1+\eta _{1})]< 0 $$

and

$$ \limsup _{q\to \infty } q^{-1}{\mathbf{E}}[(1+\eta _{1})^{-q}-1]>a/ \lambda , $$

then the root \(\beta >0\) also exists. Theorem 1.1 can be applied when \(\mathbf{P}[\eta _{1}>0]\in (0,1)\), and then we have the exact asymptotics if the distribution of \(\ln (1+\eta _{1})\) is non-arithmetic.

Suppose again that \(\mathbf{E}[(1+\eta _{1})^{-q}]<\infty \) for all \(q\in {\mathbb{R}}\). Then \(\underline{q}=-\infty \) and \(\bar{q}= \infty \). If the conditions \(\sigma ^{2}/2-a -\lambda {\mathbf{E}}[ \ln (1+\eta _{1})]\ge 0\), \(\sigma ^{2}+\mathbf{P}[\eta _{1}<0]>0\) and \(\mathbf{E}[ |\xi _{1}|^{\varepsilon }]<\infty \) for some \({\varepsilon }>0\) hold, then \(\Psi (u)\equiv 1\) by virtue of Theorem 6.4.