1 Introduction

Modern insurance companies operate in a financial environment: in particular, they invest their reserves in various assets, and this may add risk to the business. To our knowledge, the first model of an insurance company investing its capital into a risky asset appeared in the short note [12], where the author provided arguments showing that the asymptotic behavior of the ruin probability is radically different from that in the classical Lundberg–Cramér model. A rigorous analysis in [13] confirmed the conjecture. Using Kalashnikov’s [20] estimates for linear finite difference equations with random coefficients, it was shown that independently of the safety loading, ruin is imminent with probability one when the volatility \(\sigma\) of the stock price is large with respect to the instantaneous rate of return \(a\) (namely, when \(2a/\sigma ^{2}<1\)), and the ruin probability decreases as a power function when the volatility is small (namely, when \(2a/\sigma^{2}>1\)). For the model with exponentially distributed claims, the exact asymptotics was found. The threshold case \(2a/\sigma^{2}=1\) was studied in [28, 29], where it was shown, using techniques based on an ergodic theorem, that ruin is imminent with probability one. The setting of [13] and [28] is that of so-called non-life insurance: the company receives a flow of contributions and pays claims. It is also referred to as a model with positive risk sums. Ruin occurs when a new claim arrives and its value is too large to be covered by the reserve: the risk process exits from the positive half-axis by a jump. The model can be studied in the discrete-time framework, but the method of differential equations turns out to be more efficient in the case of exponential claims, where it yields the exact asymptotics.

In the present note, we consider the model of a company with an outgoing constant flow of payments and incoming random benefits; such models are called models with negative risk sums. The classical literature relates this setting to life annuity, or pension, insurance, where the company pays a pension (or annuity) to the policyholder and earns a premium when the insured person dies; see e.g. [16, Example 8, Chap. 1] or [31]. The modern literature refers to the balance sheet of a venture company funding R&D and selling innovations [4].

For the classical model, where the reserve process has upward jumps and may leave the positive half-axis only in a continuous way, the ruin problem can be easily reduced to the ruin problem for non-life insurance using the so-called duality method [2]. Its idea is to define the “dual” process by replacing the line segments between consecutive jumps of the original process by downward jumps of the same depth and the upward jumps by the line segments of the same height with positive fixed slope. Note that in the literature, models with upward jumps are often referred to as dual models [1, 3].

The duality method does not work in our setting, where the capital of the company, or a fraction of it, is invested into a risky asset. Changing two signs to the opposite ones in the equation defining the dynamics of the reserve leads to technical complications. In particular, ruin may happen before the instant of the first jump, and the latter is no longer a regeneration time after which the process starts afresh provided it is strictly positive.

Nevertheless, a suitable modification of the arguments of [13] and [28], combined with some additional ideas, allows us to obtain the asymptotics of the ruin probability as the initial reserve tends to infinity. As expected, it is the same as in the non-life insurance case. In many countries, there are rules allowing insurance companies to invest only a small share of their reserves into risky assets. Our simple model confirms that this is reasonable and even provides a quantitative answer. To avoid a situation where ruin happens with probability one, the proportion of investment into the risky asset should be strictly less than \(2a/\sigma^{2}\).

It should be emphasized that the case of the model with negative risk sums is rather different, and its study is not a straightforward exercise. The main difficulty in deriving the integro-differential equation is to prove the smoothness of the ruin probability and the integrability of its derivatives. This issue is already delicate in the non-life insurance case. Unfortunately, it was not discussed in [13], where the reader was referred to the literature for this point. In retrospect, we are not sure that a reliable reference was available at that time. The smoothness of the exit probability is discussed in many papers; see e.g. the interesting article [32], where the explicit formula for an exponential functional of Brownian motion due to Marc Yor [33] is used, but the needed smoothness property was established only under constraints on the coefficients. The first author of the present note had a fruitful discussion with Marc on the possibility of deducing the smoothness of the ruin probability and the integrability of its derivatives without using complicated explicit formulae. Marc’s suggestions are realized here for a model with negative risk sums. Note that in the literature on non-life insurance, one can find other methods to establish smoothness. For example, an approach similar to verification theorems in stochastic control theory was developed in [5].

The structure of the paper is as follows. Section 2 contains the formulation of the main results. In Sect. 3, we establish an upper asymptotic bound for the exit probability (from \((0,\infty)\)) for the solution of a non-homogeneous linear stochastic equation, and a lower asymptotic bound for the small volatility case. As a tool, we use the Dufresne theorem rather than Goldie’s renewal theorem from [15], which was the key ingredient of the arguments in [28]. The proof of Theorem 2.2 asserting that in the case of large volatility ruin is imminent is given in Sect. 4. The regularity of the non-ruin probability \(\varPhi\) is studied in Sect. 5 using a method based on integral representations. At the end of this section, we derive the integro-differential equation for \(\varPhi\). Section 6 contains the proof of the main theorem. Finally, in the Appendix we provide a formulation of an ergodic theorem for an autoregression with random coefficients proved in [28].

Kalashnikov’s approach (developed further in his joint work with Ragnar Norberg [21]) plays an important role in our study. Our technique is elementary. More profound and general results can be found in [19, 23–27], among others.

2 The model

We are given a stochastic basis \((\varOmega,{\mathcal {F}},{\mathbf {F}}=({\mathcal {F}}_{t})_{t\ge 0},{\mathbf {P}} )\) with a Wiener process \(W\) independent of the integer-valued random measure \(p(dt,dx)\) with the compensator \(\tilde{p}(dt,dx)\).

Let us consider a process \(X=X^{u}\) of the form

$$ X_{t}=u+ a \int_{0}^{t} X_{s} \,ds+ \sigma\int_{0}^{t} X_{s} \,dW_{s} -ct +\int_{0}^{t}\int x p(ds,dx), $$
(2.1)

where \(a\) and \(\sigma\) are arbitrary constants and \(c> 0\).

We shall assume that \({\tilde{p}}(dt,dx)=\alpha \,dt\,F(dx)\), where \(F(dx)\) is a probability distribution on \((0,\infty)\). In this case, the integral with respect to the jump measure is simply a compound Poisson process. It can be written as \(\sum_{i=1}^{N_{t}}\xi_{i}\), where \(N\) is a Poisson process with intensity \(\alpha\) and the \(\xi_{i}\) are random variables with common distribution \(F\); moreover, \(W\), \(N\), \(\xi_{i}\), \(i\in {\mathbf {N}}\), are independent. We denote by \(T_{n}\) the successive jump times of \(N\); the interarrival times \(T_{i}-T_{i-1}\) are independent and exponentially distributed with parameter \(\alpha\).
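Although the analysis below is purely analytical, the dynamics (2.1) is straightforward to simulate. The following Python sketch (an editorial illustration, not part of the original text; all parameter values and names are ours) discretizes (2.1) by an Euler scheme, generating the jump part as a compound Poisson process:

```python
import numpy as np

def simulate_reserve(u, a, sigma, c, alpha, xi_sampler, T=1.0, dt=1e-3, seed=None):
    """Euler scheme for (2.1): dX = a X dt + sigma X dW - c dt + dJ,
    where J is a compound Poisson process with intensity alpha."""
    rng = np.random.default_rng(seed)
    n = int(round(T / dt))
    x = u
    path = [x]
    for _ in range(n):
        # a jump of size xi occurs on [t, t+dt) with probability ~ alpha*dt
        jump = xi_sampler(rng) if rng.random() < alpha * dt else 0.0
        x += a * x * dt + sigma * x * np.sqrt(dt) * rng.standard_normal() - c * dt + jump
        path.append(x)
    return np.array(path)

# Sanity check: with a = sigma = 0 and no jumps, the path is X_t = u - c t.
path = simulate_reserve(u=10.0, a=0.0, sigma=0.0, c=2.0, alpha=0.0,
                        xi_sampler=lambda rng: rng.exponential(1.0), T=1.0, seed=0)
final = float(path[-1])
```

In the degenerate case above the scheme reproduces the deterministic path \(X_{t}=u-ct\), which serves as a basic check of the discretization.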

In our main result (Theorem 2.1), we assume that \(F\) is the exponential distribution with parameter \(\mu\).

Let \(\tau^{u}:=\inf\{t: X^{u}_{t}\le0\}\) (the instant of ruin), \(\varPsi(u):={\mathbf {P}}[\tau^{u}<\infty]\) (the ruin probability), and \(\varPhi(u):=1-\varPsi(u)\) (the survival probability).

The parameter values \(a=0\), \(\sigma=0\) correspond to the negative risk sum version of the Lundberg–Cramér model for which the risk process is usually written as

$$ r_{t}:=u - ct +\sum_{i=1}^{N_{t}}\xi_{i}. $$

This means that the capital evolves due to a continuously outgoing cash flow with rate \(c\) and incoming random payoffs \(\xi_{i}\) at times forming an independent Poisson process \(N\) with intensity \(\alpha\). For the classical model with positive safety loading and \(F\) having a “non-heavy” tail, the Lundberg inequality provides encouraging information: the ruin probability decreases exponentially as the initial capital \(u\) tends to infinity. Moreover, for exponentially distributed claims, the ruin probability admits an explicit expression; see [2, Ch. IV.3b] or [16, Sect. 1.1].

The more realistic case \(a>0\), \(\sigma=0\), corresponding to non-risky investments, does not pose any problem (see e.g. [18] for estimates of the exit probabilities covering this case).

We study here the case \(\sigma>0\). Now (2.1) describes the evolution of the reserve of an insurance company which pays an annuity and continuously reinvests its capital into an asset with the price following a geometric Brownian motion.

Notations. Throughout the paper, we use the following abbreviations:

$$ \kappa:=a-\frac{1}{2} \sigma^{2}, \qquad\beta:= \frac{2\kappa}{\sigma ^{2}}=\frac{2a}{\sigma^{2}}-1, \qquad\eta_{t}:=\kappa t+\sigma W_{t}. $$

The solution of the linear stochastic equation (2.1) can be written, using the Cauchy formula, as

$$ X_{t}=e^{\eta_{t}} \left(u+\int_{[0,t]}e^{-\eta_{s}}\,dr_{s}\right). $$
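As a numerical illustration (ours, not the paper's), the Cauchy formula can be checked against a direct Euler discretization of (2.1) on the same Brownian path, taking for simplicity no jumps on \([0,T]\); the parameter values below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
u, a, sigma, c, T = 5.0, 0.05, 0.3, 1.0, 1.0   # illustrative parameters
n = 100_000
dt = T / n
dW = np.sqrt(dt) * rng.standard_normal(n)
W = np.concatenate(([0.0], np.cumsum(dW)))
t = np.linspace(0.0, T, n + 1)
kappa = a - 0.5 * sigma ** 2
eta = kappa * t + sigma * W

# Euler scheme for dX = a X dt + sigma X dW - c dt on this Brownian path
x = u
for i in range(n):
    x += a * x * dt + sigma * x * dW[i] - c * dt

# Cauchy formula without jumps: X_T = e^{eta_T} (u - c * int_0^T e^{-eta_s} ds)
integral = float(np.sum(np.exp(-eta[:-1])) * dt)
x_formula = float(np.exp(eta[-1]) * (u - c * integral))
rel_err = abs(x - x_formula) / abs(x_formula)
```

On a fine grid the two values agree up to discretization error, confirming the closed-form solution of the linear equation.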

Theorem 2.1

Let \(F(x)=1-e^{-x/\mu}\), \(x>0\). Assume that \(\sigma,c>0\).

(i) If \(\beta>0\), then for some \(K>0\),

$$ \varPsi(u)= Ku^{-\beta}\big(1+o(1)\big), \quad u\to\infty. $$

(ii) If \(\beta\le0\), then \(\varPsi(u)=1\) for all \(u>0\).

The formulation of this theorem is exactly the same as in [13] for the non-life insurance model (the case \(\beta=0\) was analyzed in [28, 29]).

One must admit that in annuity insurance models, especially in the case of a company keeping a portfolio of viager contracts (a lifetime annuity in exchange for real estate), the hypothesis that the benefits (i.e., the prices of houses) follow an exponential distribution is highly unrealistic. Nevertheless, we can claim, without any assumption on the distribution, that the ruin probabilities lie between two power functions; see Propositions 3.1 and 3.2.

The next result (implying statement (ii) above) says that for \(\beta\le0\), ruin is imminent. It requires only that the distribution \(F\) has a finite moment of some positive order.

Theorem 2.2

Assume that there is \(\delta>0\) such that \({\mathbf {E}}\xi^{\delta}_{1}<\infty\). If \(\beta\le0\), then \(\varPsi(u)=1\) for any \(u>0\).

The same basic model serves well in the situation where only a fixed part \(\gamma\in(0,1)\) of the capital is invested in the risky asset: one should only replace the parameters \(a\) and \(\sigma\) in (2.1) by \(a\gamma\) and \(\sigma\gamma\). Theorem 2.1 implies that ruin with probability one is avoided only if \(2a\gamma/(\sigma\gamma)^{2}>1\), i.e., when the share \(\gamma\) of investment into the risky asset is strictly less than \(2a/\sigma^{2}\).
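A quick numerical illustration of this threshold (with made-up parameter values): replacing \((a,\sigma)\) by \((a\gamma,\sigma\gamma)\) gives \(\beta(\gamma)=2a/(\sigma^{2}\gamma)-1\), which changes sign exactly at \(\gamma=2a/\sigma^{2}\).

```python
a, sigma = 0.04, 0.35                      # illustrative drift and volatility
gamma_max = 2 * a / sigma ** 2             # critical share, here about 0.653

def beta_of(gamma, a=a, sigma=sigma):
    # beta for the model with parameters (a*gamma, sigma*gamma)
    return 2 * a * gamma / (sigma * gamma) ** 2 - 1

below, above = beta_of(0.9 * gamma_max), beta_of(1.1 * gamma_max)
```

For these values, investing slightly less than the critical share gives \(\beta>0\) (power decay of the ruin probability), while slightly more gives \(\beta<0\) (ruin with probability one).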

It is worth mentioning that our conclusions are robust and hold for more general models. The reader may criticize the hypothesis that the rate \(c\) of outgoing payments is constant. Indeed, after the death of the person receiving the annuity, the payments stop and the rate must decrease, while it increases with a new customer. Easy comparison arguments show that the above statements hold in the more realistic situation where \(c=(c_{t})\) is a random process such that \(0< C_{1}\le c\le C_{2}\), where \(C_{1}\) and \(C_{2}\) are constants.

The crucial part of the asymptotic analysis in Theorem 2.1 is based on the fact that for the Markov process given by (2.1), the non-exit probability \(\varPhi(u)\) is smooth and satisfies the equation

$$ \frac{1}{2} \sigma^{2}u^{2} \varPhi''(u)+ (au-c)\varPhi'(u) - \alpha\varPhi(u)+\alpha\int_{0}^{\infty}\varPhi(u+y)\,dF(y)=0. $$
(2.2)

With \(\sigma>0\), this equation is of the second order and hence requires two boundary conditions—in contrast to the classical case (\(a=0\), \(\sigma=0\)) where it degenerates to an equation of the first order requiring a single boundary condition; see [16]. The estimate given in Proposition 3.1 shows that \(\varPhi(\infty)=1\).

There is an extensive literature concerning the regularity of the survival probability for processes with jumps in the context of non-life insurance models and models based on Lévy processes; see e.g. [6, 7, 14, 17, 32]. Our Theorem 5.1 requiring only the smoothness of \(F\) and the integrability of \(F''\) seems to be the first result on the regularity of the survival probability in the considered setting.

3 Asymptotic bounds for the small volatility case

3.1 Upper asymptotic bound

Let us consider the exit probability problem for a more general process \(X=X^{u}\) of the form

$$ X_{t}=u+ a \int_{0}^{t} X_{s} \,ds+ \sigma\int_{0}^{t} X_{s}\,dW_{s} - ct + Z_{t}, $$

where \(a, \sigma, c\) are strictly positive constants and \(Z=(Z_{t})_{t\ge0}\) is an increasing adapted càdlàg process starting from zero.

Proposition 3.1

Assume that \(\beta>0\) in the general model (2.1). Then

$$ \limsup_{u\to\infty}\,u^{\beta}\,\varPsi(u)\le\frac{2^{\beta}\, c^{\beta}}{\sigma^{2\beta}\beta\varGamma(\beta)}. $$

Proof

Let \(Y\) be the solution of the linear stochastic equation

$$ Y_{t}=u+a\int^{t}_{0}\,Y_{s}\,d s+\sigma\,\int^{t}_{0}\,Y_{s}\,d W_{s}-c t. $$

Introducing the notation

$$ R_{t}:=c\int^{t}_{0}\,e^{-\eta_{s}}\,d s, $$
(3.1)

we express the solution as

$$ Y_{t}:=e^{\eta_{t}}\big(u-R_{t}\big). $$

The difference \(X-Y\) satisfies the same linear equation as \(X\), but with zero initial condition, with \(c=0\), and driven by \(Z\). Since \(Z\) is increasing, we have the inequality \(X\ge Y\), showing that the exit of \(X\) from \((0,\infty)\) implies the exit of \(Y\). Thus,

$$ \varPsi(u)\le{\mathbf {P}}[R_{\infty}>u], $$
(3.2)

and the asymptotic behavior of the ruin probability for this general model can be estimated by the tail behavior of the distribution function of \(R_{\infty}\). Using the change of variable \(s=(4/\sigma^{2}) t\) and observing that \(B_{t}:=-(1/2)\sigma W_{(4/\sigma^{2})t}\) is a Wiener process, we obtain the representation

$$ R_{\infty}=c\int_{0}^{\infty}e^{-(a-\sigma^{2}/2)s-\sigma W_{s}}\,ds= \frac{4c}{\sigma^{2}} \int_{0}^{\infty}e^{-2\beta t+2B_{t}}\,dt=:\frac {4c}{\sigma^{2}} A_{\infty}^{(-\beta)}. $$

The Dufresne theorem (see [9] or [22, Theorem 6.2]) claims that \(A_{\infty}^{(-\beta)}\) is distributed as the random variable \(1/(2\gamma)\), where \(\gamma\) has the gamma distribution with parameter \(\beta\). Thus,

$$\begin{aligned} {\mathbf {P}}[R_{\infty}>u] =&{\mathbf {P}}[2c/(\sigma^{2}\gamma) >u]\\ =&{\mathbf {P}}[\gamma< 2c/(\sigma^{2}u)]= \frac{1}{\varGamma(\beta)}\int_{0}^{2c/(\sigma^{2}u)}x^{\beta-1}e^{-x}\, dx\\ \sim& \frac{2^{\beta}\,c^{\beta}}{\sigma^{2\beta}\beta\varGamma(\beta )}u^{-\beta}, \end{aligned}$$

and the result follows from (3.2). □
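The Dufresne identity used above is easy to test by simulation. The Monte Carlo sketch below (an editorial illustration with a truncated horizon and illustrative \(\beta\)) compares the empirical mean of the truncated integral \(A_{\infty}^{(-\beta)}\) with the exact mean \(1/(2(\beta-1))\) of \(1/(2\gamma)\), which is finite for \(\beta>1\):

```python
import numpy as np

rng = np.random.default_rng(2)
beta = 3.0                       # illustrative; beta > 1 so that E[1/(2*gamma)] is finite
n_paths, T, dt = 1000, 6.0, 2e-3
n = int(T / dt)
dB = np.sqrt(dt) * rng.standard_normal((n_paths, n))
B = np.cumsum(dB, axis=1)
t = dt * np.arange(1, n + 1)

# Truncated version of A_infinity^{(-beta)} = int_0^infty exp(-2*beta*s + 2*B_s) ds
A = np.exp(-2 * beta * t + 2 * B).sum(axis=1) * dt
mc_mean = float(A.mean())

# Dufresne: A_infinity^{(-beta)} ~ 1/(2*gamma) with gamma ~ Gamma(beta),
# hence E A_infinity^{(-beta)} = 1/(2*(beta - 1)) for beta > 1.
exact_mean = 1.0 / (2.0 * (beta - 1.0))
```

The truncation at \(T=6\) is harmless here since the integrand decays like \(e^{-2(\beta-1)t}\) in the mean.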

3.2 Lower asymptotic bound

The next result shows that the ruin probability decreases, as the initial capital tends to infinity, not faster than a certain power function.

Proposition 3.2

Assume that \(\beta>0\). Then there exists \(\beta_{*}>0\) such that

$$ \liminf_{u\to\infty}\,u^{\beta_{*}}\,\varPsi(u)>0. $$

Proof

Let \(Y=(Y_{k})_{k\ge1}\) be the embedded discrete-time process, i.e., the sequence of random variables defined recursively by

$$ Y_{k}=M_{k} Y_{k-1}+Q_{k},\qquad Y_{0}=u, $$
(3.3)

where

$$ M_{k}=e^{\eta_{T_{k}}-\eta_{T_{k-1}}} \quad\mbox{and}\quad Q_{k}=\xi_{k}-c\,\int^{T_{k}}_{T_{k-1}}\, e^{\eta_{T_{k}}-\eta _{s}}\,d s. $$

Let \(\theta^{u}:=\inf\{k: Y_{k}\le0\}\). It is clear that \(X_{T_{k}}= Y_{k}\) for any \(k\ge1\). So, for any \(u>0\),

$$ \varPsi(u)= {\mathbf {P}}[\tau^{u}< \infty] \ge {\mathbf {P}}[\theta^{u}< \infty]. $$
(3.4)
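Iterating (3.3) gives the explicit expansion \(Y_{n}=u\prod_{j=1}^{n}M_{j}+\sum_{k=1}^{n}Q_{k}\prod_{j=k+1}^{n}M_{j}\), which is used repeatedly below. A short sanity check of this expansion with arbitrary illustrative coefficients (an editorial addition):

```python
import numpy as np

rng = np.random.default_rng(3)
n, u = 6, 4.0
M = rng.uniform(0.2, 1.5, size=n)    # arbitrary illustrative coefficients
Q = rng.uniform(-1.0, 1.0, size=n)

y = u                                # recursion (3.3): Y_k = M_k Y_{k-1} + Q_k
for k in range(n):
    y = M[k] * y + Q[k]

# explicit expansion: Y_n = u * prod_j M_j + sum_k Q_k * prod_{j>k} M_j
explicit = u * np.prod(M) + sum(Q[k] * np.prod(M[k + 1:]) for k in range(n))
gap = abs(y - explicit)
```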

Take \(\varrho\in(0,1)\) and choose \(B\) sufficiently large to ensure that

$$ B_{1}=B-\frac{1}{\varrho^{2}(1-\varrho)}>0. $$

Define the sets

$$\varGamma_{k}=\{M_{k}\le\varrho\}\cap\{Q_{k}\le\varrho^{-1}\}, \qquad D_{k}=\{M_{k}\le\varrho^{-1}\}\cap\{Q_{k}\le-B\}. $$

On the set \(\bigcap^{n}_{k=1}\,\varGamma_{k}\), we have

$$ Y_{n}=u\prod^{n}_{j=1}\,M_{j} +\sum^{n}_{k=1}\,Q_{k}\,\prod^{n}_{j=k+1}\,M_{j} \le u \varrho^{n}+\frac{1}{\varrho(1-\varrho)}. $$

Therefore, on the set \(\bigcap^{n}_{k=1}\,\varGamma_{k}\cap D_{n+1}\),

$$ Y_{n+1}=M_{n+1} Y_{n} + Q_{n+1}\le u \varrho^{n-1}+\frac{1}{\varrho ^{2}(1-\varrho)} -B= u \varrho^{n-1} -B_{1}. $$

It is easy to check that \(u \varrho^{n-1}\le B_{1}\) when \(u>B_{1}\) and

$$ n=3+\left[\frac{\ln(u/B_{1})}{\vert\ln\varrho\vert} \right], $$

where \([\,\cdot\,]\) means the integer part. Therefore,

$$ {\mathbf {P}}[\theta^{u}< \infty]\ge{\mathbf {P}}\Bigg[ \bigcap ^{n}_{k=1}\,\varGamma_{k}\cap D_{n+1}\Bigg] =({\mathbf {P}}[\varGamma_{1}])^{n}{\mathbf {P}}[D_{1}]. $$

Taking into account that \({\mathbf {P}}[\varGamma_{1}]>0\) and \({\mathbf {P}}[D_{1}]>0\), we obtain that

$$ \lim_{u\to\infty}\,u^{\beta_{*}} {\mathbf {P}}[\theta^{u}< \infty ]=\infty $$

for any

$$ \beta_{*}>\frac{\ln{\mathbf {P}}[\varGamma_{1}]}{\ln\varrho}. $$

This implies the claim. □
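The choice of \(n\) in the proof can also be verified numerically; the sketch below (with illustrative values of \(\varrho\) and \(B_{1}\), an editorial addition) confirms that \(u\varrho^{n-1}\le B_{1}\) for \(n=3+[\ln(u/B_{1})/\vert\ln\varrho\vert]\):

```python
import math

# Check u * rho**(n-1) <= B1 for n = 3 + floor(ln(u/B1)/|ln rho|), u > B1.
rho, B1 = 0.6, 2.0
ok = True
for u in (2.5, 10.0, 1e3, 1e8):
    n = 3 + math.floor(math.log(u / B1) / abs(math.log(rho)))
    ok = ok and (u * rho ** (n - 1) <= B1)
```

The inequality holds because \(n-1\) exceeds \(\ln(u/B_{1})/\vert\ln\varrho\vert\) by more than one.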

4 Large volatility: proof of Theorem 2.2

We consider separately the two cases \(\beta<0\) and \(\beta=0\) and show that in both cases, the ruin probability is equal to one.

Proposition 4.1

Assume that \(\beta<0\) and \({\mathbf {E}}\xi_{1}^{\delta}<\infty\) for some \(\delta >0\). Then \(\varPsi(u)=1\) for all \(u>0\).

Proof

As in the proof of Proposition 3.2, we consider the embedded discrete-time process \(Y=(Y_{k})_{k\ge1}\) defined by (3.3). By virtue of (3.4), it is sufficient to show that \({\mathbf {P}}[\theta^{u}<\infty]=1\). Note that for \(\delta\in(0,-\beta)\),

$$ {\mathbf {E}}e^{\delta\eta_{t}}= {\mathbf {E}}e^{\delta(\kappa t+\sigma W_{t})}=e^{\delta t(\beta+\delta)\sigma^{2}/2}< 1, $$

and therefore

$$ {\mathbf {E}}M_{1} ^{\delta}=\int_{0}^{\infty}{\mathbf {E}}e^{\delta\eta _{t}}\alpha e^{-\alpha t}\,dt < 1. $$
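The integral above can be computed in closed form: \({\mathbf {E}}M_{1}^{\delta}=\alpha/\big(\alpha-\delta(\beta+\delta)\sigma^{2}/2\big)\), which is finite and strictly less than one for \(\delta\in(0,-\beta)\). A numerical cross-check with illustrative parameters (an editorial addition):

```python
import numpy as np

# E M_1^delta = int_0^infty alpha e^{-alpha t} exp(delta*(beta+delta)*sigma^2*t/2) dt
alpha, sigma, beta, delta = 1.0, 0.4, -0.5, 0.25    # illustrative, delta < -beta
rate = delta * (beta + delta) * sigma ** 2 / 2.0    # negative for delta in (0, -beta)
closed_form = alpha / (alpha - rate)

# midpoint-rule quadrature of the same integral, truncated far in the tail
dt, Tmax = 1e-4, 60.0
t = np.arange(0.0, Tmax, dt) + dt / 2.0
numeric = float(np.sum(alpha * np.exp((rate - alpha) * t)) * dt)
```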

According to [8, Ch. 2, Sect. 1, Eq. (1.1.1)], if a random variable \(\nu\) is independent of \(W\) and has the exponential distribution with parameter \(\alpha\), then

$$ {\mathbf {E}}e^{q\max_{s\le\nu}(\mu s+W_{s})} =\frac{\sqrt{2\alpha+\mu^{2}}-\mu}{\sqrt{2\alpha+\mu^{2}}-\mu-q}, $$
(4.1)

provided that

$$ \sqrt{2\alpha+\mu^{2}}-\mu-q>0. $$

Changing variables and estimating the integrand by its maximal value, we get

$$\begin{aligned} {\mathbf {E}}\bigg(\int^{T_{1}}_{0}\, e^{\eta_{T_{1}}-\eta_{s}}\,d s \bigg)^{\delta} =& {\mathbf {E}}\bigg(\int^{T_{1}}_{0}\, e^{\kappa s+\sigma W_{s}}\,d s \bigg)^{\delta} \le {\mathbf {E}}T^{\delta}_{1} e^{\delta\max_{s\le T_{1}}(\kappa s+\sigma W_{s})}\\ =&\int_{0}^{\infty}t ^{\delta}{\mathbf {E}}e^{\delta\max_{s\le t}(\kappa s+\sigma W_{s})}\alpha e^{-\alpha t}\,dt\\ \le& 2 \sup_{t\ge0} \big(t^{\delta}\,e^{-\alpha t/2}\big) {\mathbf {E}}e^{\delta\max_{s\le\nu'}(\kappa s+\sigma W_{s})}\\ =& 2e^{\delta(\ln(2\delta/\alpha)-1)} {\mathbf {E}}e^{\delta\max_{s\le\nu'}(\kappa s+\sigma W_{s})}, \end{aligned}$$

where \(\nu'\) is an exponential random variable with parameter \(\alpha /2\). In view of the equality (4.1),

$$ {\mathbf {E}}e^{\delta\max_{s\le\nu'}(\kappa s+\sigma W_{s})}= \frac{\sqrt{\sigma^{2}\alpha+\kappa^{2}}-\kappa}{\sqrt{\sigma ^{2}\alpha +\kappa^{2}}-\kappa-\delta\sigma^{2}}, $$

provided that

$$ \delta< \frac{\sqrt{\sigma^{2}\alpha+\kappa^{2}}-\kappa}{\sigma^{2}}, $$

i.e., for such \(\delta>0\) we have the bound

$$ {\mathbf {E}}\bigg(\int^{T_{1}}_{0}\, e^{\eta_{T_{1}}-\eta_{s}}\,d s \bigg)^{\delta} \le 2\,e^{\delta(\ln(2\delta/\alpha)-1)}\, \frac{\sqrt{\sigma^{2}\alpha+\kappa^{2}}-\kappa}{\sqrt{\sigma ^{2}\alpha +\kappa^{2}}-\kappa-\delta\sigma^{2}}. $$
(4.2)

Using these estimates and the assumption of the proposition, we conclude that \({\mathbf {E}}|Q_{1}|^{\delta}<\infty\) for sufficiently small \(\delta>0\). Thus, the hypothesis of the ergodic theorem for an autoregression with random coefficients is fulfilled (see Proposition A.1). The latter claims that for any bounded uniformly continuous function \(f\),

$$ {\mathbf {P}}\hbox{-}\lim_{N}\frac{1}{N}\sum^{N}_{k=1}\,f(Y_{k})= {\mathbf {E}}f(\zeta), $$
(4.3)

where

$$ \zeta=Q_{1}+\sum^{\infty}_{k=2}\,Q_{k}\, \prod^{k-1}_{j=1}\, M_{j}. $$

Let us represent \(\zeta\) in the form

$$ \zeta=\xi_{1}-c\int_{0}^{T_{1}}e^{\eta_{T_{1}}-\eta_{s}}\,ds+e^{\eta _{T_{1}}}\zeta_{1}, \qquad\zeta_{1}:=\sum^{\infty}_{k=2}\,Q_{k}\, \prod^{k-1}_{j=2}\, M_{j}. $$

Clearly, the random variables \(\xi_{1}\), \(\zeta_{1}\) and \((\eta_{T_{1}},\int _{0}^{T_{1}}e^{\eta_{T_{1}}-\eta_{s}}\,ds)\) are independent. Moreover, Lemma 4.2 below implies that the support of the conditional distribution of the integral \(\int_{0}^{t}e^{\eta_{t}-\eta _{s}}\,ds\) given \(\eta_{t}=x\) is unbounded from above. From this we easily infer that the support of the distribution of \(\zeta \) is unbounded from below. Thus, for

$$ f(x)={\mathbf{1}}_{\{x\le-1\}}+\vert x\vert\,{\mathbf{1}}_{\{-1< x< 0\}}, $$

the right-hand side of (4.3) is strictly positive, and therefore \({\mathbf {P}}[\inf_{k\ge1}Y_{k}<0]=1\). □

Lemma 4.2

Let \(\sigma>0\). Then the support of the conditional distribution of the random variable

$$ I=\int_{0}^{1} e^{\sigma W_{s}}\,ds $$

given \(W_{1}=y\) is unbounded from above.

Proof

It is well known (see e.g. [30, Ch. 1, Eq. (3.16)]) that the conditional distribution of the Wiener process \((W_{s})_{ s\le1}\) given \(W_{1}=y\) coincides with the distribution of the Brownian bridge \(B^{y}\) with \(B^{y}_{t} =W_{t}+t(y-W_{1})\). Thus, the conditional distribution of \(I\) is the same as the unconditional distribution of

$$ \tilde{I}:=\int_{0}^{1} e^{\sigma(W_{s}+s(y-W_{1}))}\,ds. $$

Since Wiener measure has full support in the space \(C_{0}[0,1]\) of continuous functions on \([0,1]\) with zero initial value, the support of the distribution of \(\tilde{I}\) is unbounded from above. □

Proposition 4.3

Assume that \(\beta=0\) and \({\mathbf {E}}\xi_{1}^{\delta}<\infty\) for some \(\delta >0\). Then \(\varPsi(u)=1\) for all \(u>0\).

Proof

In the considered case, the embedded discrete-time process is defined by (3.3) with

$$ M_{k}=e^{\sigma\Delta V_{k}} \quad\mbox{and}\quad Q_{k}=\xi_{k}-c \int^{T_{k}}_{T_{k-1}}\, e^{\sigma( W_{T_{k}}-W_{s})}\,ds, $$

where \(V_{k}=W_{T_{k}}\) and \(\Delta V_{k}:=V_{k}-V_{k-1}\). To study the asymptotic properties of Eq. (3.3), we use the approach proposed in [28] for non-life insurance models. To this end, define recursively a sequence of random variables by \(\theta_{0}:=0\) and

$$ \theta_{n}:=\inf\{k>\theta_{n-1}\colon V_{k}-V_{\theta_{n-1}} < 0 \}, \qquad n\ge1. $$

Note that \(\theta_{n}=\sum^{n}_{j=1}\,\Delta\theta_{j} \), where \((\Delta\theta _{j})_{j\ge1}\) is a sequence of i.i.d. random variables distributed as \(\theta_{1}\). It is known (see e.g. [11, Theorem XII.7.1a]) that

$$ C:= \sup_{n\ge1}\,n^{1/2}{\mathbf {P}}[\theta_{1}>n]< \infty. $$

Putting \(y_{k}=Y_{\theta_{k}}\), we obtain from (3.3) that

$$ y_{k}={a}_{k}\,y_{k-1}+{b}_{k},\qquad {y}_{0}=u, $$

where

$$ {a}_{k}=\prod^{\Delta\theta_{k}}_{j=1} M_{\theta_{k-1}+j} = e^{\sigma(V_{\theta_{k}}-V_{\theta_{k-1}})} $$

and

$$ {b}_{k}= \sum^{\Delta\theta_{k}}_{\ell=1} \Bigg( \prod^{\Delta\theta_{k}}_{j=\ell+1}\,M_{\theta_{k-1}+j} \Bigg) Q_{\theta_{k-1}+\ell}. $$

It is clear that \({a}_{k}<1\) a.s. Moreover, the moment condition in Theorem 2.2 and the inequality (4.2) with \(\kappa=0\) imply that \({\mathbf {E}}\vert Q_{1} \vert^{\delta}<\infty\) for any sufficiently small \(\delta\). Now, taking into account that

$$ \vert{b}_{1} \vert\le\sum^{\Delta\theta_{1}}_{\ell=1} \Bigg( \prod^{\Delta\theta_{1}}_{j=\ell+1}M_{j} \Bigg) \vert Q_{\ell}\vert = \sum^{\Delta\theta_{1}}_{\ell=1} \frac{{a}_{1}}{\prod^{\ell}_{j=1}\,M_{j}} \vert Q_{\ell}\vert \le \sum^{\Delta\theta_{1}}_{\ell=1} \vert Q_{\ell}\vert, $$

we can get for \(r\in(0,1)\) and an increasing sequence of integers \(\ell _{n}\) that

$$\begin{aligned} {\mathbf {E}}\vert{b}_{1} \vert^{r}&\le1+r\sum_{n\ge1}\,\frac {1}{n^{1-r}}\, {\mathbf {P}}[ \vert{b}_{1} \vert>n]\\ &\le 1+r\sum_{n\ge1}\,\frac{1}{n^{1-r}}\, {\mathbf {P}}\Bigg[ \sum^{\ell_{n}}_{j=1}\vert Q_{j} \vert>n\Bigg]+ r\sum_{n\ge1}\,\frac{1}{n^{1-r}}\, {\mathbf {P}}[ \vert\theta_{1} \vert>\ell_{n}] \\ &\le 1+r{\mathbf {E}}\vert Q_{1} \vert^{\delta}\, \sum_{n\ge1}\,\frac{\ell_{n}}{n^{1-r+\delta}}\, +r\,C\, \sum_{n\ge1}\,\frac{1}{n^{1-r}\ell^{1/2}_{n}}. \end{aligned}$$

Putting here \(\ell_{n}=[n^{4r}]\), we obtain that \({\mathbf {E}}\vert{b}_{1} \vert ^{r}<\infty\) for any \(r\in(0,\delta/5)\). Therefore, due to Proposition A.1, we obtain that for any bounded uniformly continuous function \(f\),

$$ {\mathbf {P}}\hbox{-}\lim_{N}\frac{1}{N}\sum^{N}_{k=1}\,f(y_{k})= {\mathbf {E}}f(\zeta), $$

where

$$ \zeta={b}_{1}+\sum^{\infty}_{k=2}\,{b}_{k}\, \prod^{k-1}_{j=1}\, {a}_{j}. $$
(4.4)

Now we show that

$$ {\mathbf {P}}[\zeta< -x]>0 \qquad \mbox{for any } x>0. $$

Indeed, the random variable (4.4) can be represented as

$$ \zeta={b}_{1}+{a}_{1}\,\zeta_{1}, \qquad\zeta_{1}=\sum^{\infty}_{k=2}{b}_{k}\, \prod^{k-1}_{j=2}{a}_{j}. $$

It is clear that \(\zeta_{1}\) is independent of \({b}_{1}\) and \({a}_{1}\). Note that on the set \(\{\Delta\theta_{1}=1\}\), we have \({a}_{1}=M_{1}\) and \({b}_{1}=Q_{1}\). Therefore, for any \(x>0\),

$$ {\mathbf {P}}[\zeta< -x]\ge {\mathbf {P}}[{b}_{1}+{a}_{1}\,\zeta_{1}< -x,\;\Delta\theta_{1}=1] = {\mathbf {P}}[Q_{1}+M_{1}\,\zeta_{1}< -x,\;W_{T_{1}}< 0], $$

and we conclude as in the previous proposition. □

5 Regularity of the ruin probability

5.1 Integral representations

The proof of smoothness of a function \(H\) admitting an integral representation is based on a simple idea which merits explanation.

First, let us recall the following classical result on the differentiability of the integral \(H(u)=\int f(u,z)\,dz\), where \(f(u,\cdot)\in L^{1}\) for each \(u\) from an open subset \(U\subseteq {\mathbb {R}}\). If \(f(\cdot,z)\) is differentiable on an open interval \((u_{0}-{\varepsilon }, u_{0}+{\varepsilon})\) for almost all \(z\) and on this interval, \(|\partial f(\cdot,z)/\partial u| \le g(z)\) (a.e.), where \(g\in L^{1}\), then \(H\) is differentiable at \(u_{0}\) and \(H'(u_{0})=\int\partial f(u_{0},z)/\partial u\, dz\).

Suppose that we are given a bounded measurable function \(h(z)\) and a Gaussian random variable \(\zeta\sim N(0,1)\). Let \(H(u)={\mathbf {E}}h(u+\zeta)=\int\,h(u+x)\varphi_{0,1} (x)\,dx\). Then \(H\) is differentiable and even of class \(C^{\infty}\). Of course, the above result cannot be applied directly. But using a change of variables, we get the representation

$$ H(u)=\int\,h(u+x)\varphi_{0,1} (x)\,dx= \int h(z)\varphi_{0,1} (z-u)\,dz. $$

Now the parameter \(u\) appears only in the function \(\varphi_{0,1}\), the integrand is differentiable in \(u\), and we can apply the classical sufficient condition.
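The smoothing mechanism can be illustrated numerically (an editorial sketch, not part of the argument): take the discontinuous function \(h={\mathbf 1}_{(0,\infty)}\), for which \(H\) is the standard normal distribution function; differentiating the shifted density recovers \(H'(u_{0})=\varphi_{0,1}(u_{0})\).

```python
import numpy as np

def phi(x):                           # standard normal density
    return np.exp(-x ** 2 / 2.0) / np.sqrt(2.0 * np.pi)

h = lambda z: (z > 0).astype(float)   # bounded but discontinuous at 0

z = np.linspace(-10.0, 10.0, 200_001)
dz = z[1] - z[0]
u0 = 0.7

# After the change of variables, u sits only in the smooth density:
# H(u) = int h(z) phi(z - u) dz, so H'(u0) = int h(z) (z - u0) phi(z - u0) dz.
H_prime = float(np.sum(h(z) * (z - u0) * phi(z - u0)) * dz)

# For this h, H is the standard normal CDF, hence H'(u0) = phi(u0).
exact = float(phi(u0))
```

Differentiation under the integral is legitimate here because the derivative hits only the Gaussian density, never the merely measurable \(h\).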

The issues here are: an integral representation, the smoothness of the density, and the integrability of its derivatives. In the case of the survival probability \(\varPhi\), the integral representation is obtained from the strong Markov property. Unfortunately, the structure of the representation is rather complicated, the random variable standing for \(\zeta\) is not Gaussian, and its density is not given by a simple formula. Nevertheless, the idea of using a change of variables to move the parameter from the unknown function, on which we have only limited information (essentially, boundedness and measurability), to the density still works. The main difficulty is checking the smoothness of the density and finding appropriate bounds for its derivatives.

Theorem 5.1

Assume that the distribution of \(\xi_{1}\) has a density \(f\) differentiable on \({\mathbb {R}}_{+}\) and such that \(f'\in L^{1}({\mathbb {R}}_{+})\). Then \(\varPhi(u)\) is twice continuously differentiable on \((0,\infty)\).

Proof

We again consider the process

$$ Y^{u}_{t}=e^{\eta_{t}}\left(u -R_{t}\right), $$
(5.1)

where \(R_{t}\) is defined in (3.1). Put

$$ \theta^{u}:=\inf\{t\ge0\colon Y^{u}_{t}\le0\}. $$

By virtue of the strong Markov property of \(X ^{u}\),

$$ \varPhi(u)={\mathbf {E}}\varPhi(X^{u}_{\theta^{u}\wedge T_{1}}). $$
(5.2)

Note that the process \(Y^{u}\) is strictly positive before the time \(\theta ^{u}\), zero at \(\theta^{u}\), and strictly negative afterwards. Due to the independence of the Wiener process and the instants of jumps, \(\theta ^{u}\neq T_{1}\) a.s. Thus, \(\{Y^{u}_{T_{1}}>0\}=\{\theta^{u}>T_{1}\}\) a.s. Taking into account that \(\varPhi (0)=0\), we get that

$$ \varPhi(u)={\mathbf {E}}\, {\mathbf{1}}_{\{Y^{u}_{T_{1}}>0\}}\varPhi (X^{u}_{T_{1}})={\mathbf {E}}\, {\mathbf{1}}_{\{ Y^{u}_{T_{1}}>0\}}\varPhi(Y^{u}_{T_{1}}+\xi_{1})=\varPhi_{1}(u)+\varPhi_{2}(u), $$

where

$$ \varPhi_{1}(u):=\alpha\int_{0}^{2} {\mathbf {E}}G(Y_{t}^{u})e^{-\alpha t}\,dt,\qquad \varPhi_{2}(u):=\alpha\int_{2}^{\infty}{\mathbf {E}}G(Y_{t}^{u})e^{-\alpha t}\,dt $$

with

$$ G(y):= {\mathbf{1}}_{\{y> 0\}} {\mathbf {E}}\varPhi(y+\xi_{1})={\mathbf{1}}_{\{y> 0\}}\int_{0}^{\infty}\varPhi (y+x)\,dF(x). $$

We analyze separately the smoothness of \(\varPhi_{1}\) and \(\varPhi_{2}\) by using for these functions appropriate integral representations.

5.2 Smoothness of \(\varPhi_{2}\)

We start with the simpler case of \(\varPhi_{2}\) and show that this function is infinitely often differentiable without any assumptions on the distribution of \(\xi_{1}\).

From the representation

$$ Y_{t}^{u}= e^{\eta_{t}-\eta_{1}}Y_{1}^{u}-c \int_{1}^{t} e^{\eta_{t}-\eta_{s}} \,ds, \qquad t\ge1, $$

we obtain, using the independence of \(Y_{1}^{u}\) and the process \((\eta _{s}-\eta_{1})_{s\ge1}\), that

$$ {\mathbf {E}}[G(Y^{u}_{t})|Y^{u}_{1}]=G(t,Y_{1}^{u}), $$

where

$$ G(t,y):= {\mathbf {E}}G\left(e^{\eta_{t}-\eta_{1}}y- c\int_{1}^{t} e^{ \eta_{t} - \eta_{s}}\,d s\right). $$

Substituting the expression for \(Y^{u}_{1}\) given by (5.1), we have

$$ \varPhi_{2}(u)={\mathbf {E}}\int_{2}^{\infty}{\mathbf {E}}[G(Y^{u}_{t})|Y^{u}_{1}]\alpha e^{-\alpha t}\,dt = {\mathbf {E}}H\big(e^{\kappa+\sigma W_{1}}(u-R_{1}) \big), $$

where \(H\) is a function taking values in \([0,1]\) and given by the formula

$$ H(y):=\alpha\,\int_{2}^{\infty}G(t, y)e^{-\alpha t}\,d t. $$

Taking into account that the conditional distribution of the process \((W_{s})_{s\le1}\) given \(W_{1}=x\) is the same as the distribution of the Brownian bridge \(B^{x}=(B_{s}^{x})_{s\le1}\) with \(B^{x}_{s} =W_{s}+s(x-W_{1})\), we obtain the representation

$$ \varPhi_{2}(u)=\int{\mathbf {E}}H\big(e^{\kappa+\sigma x}(u - \zeta ^{x})\big)\varphi_{0,1}(x)\,dx, $$
(5.3)

where

$$ \zeta^{x}:=c\int_{0}^{1}e^{-(\kappa s +\sigma s x +\sigma(W_{s}-sW_{1}))}\,ds. $$
(5.4)

Lemma 5.2 below asserts that for every \(x\), the random variable \(\zeta^{x}\) has a density \(\rho(x,\cdot)\) on \((0,\infty)\), and we easily obtain from (5.3) by changing variables that

$$ \varPhi_{2}(u)=\int_{0}^{u}\int H(e^{\kappa+\sigma x} z) \rho(x,u - z)\varphi _{0,1}(x)\,dx\,dz. $$
(5.5)

Lemma 5.2

The random variable \(\zeta^{x}\) has a density \(\rho(x,\cdot)\in C^{\infty}\) such that for any \(n\ge0\),

$$ \sup_{y\ge0}\, \left\vert\frac{\partial^{n}}{\partial y^{n}} \rho(x,y) \right\vert \le C_{n} e^{C_{n} |x|} $$
(5.6)

with some constant \(C_{n}\), and \((\partial^{n} /\partial y^{n})\rho(x,0)=0\).

Proof

We obtain the result using again the integral representation. Let us introduce the random process

$$\begin{aligned} D_{s} :=&\big((W_{s}-2sW_{1/2})+s(W_{1/2}-W_{1})\big){\mathbf{1}}_{\{s\le 1/2\}}\\ &{} +\big((1-s)(W_{s}-W_{1/2})-s(W_{1}-W_{s})\big){\mathbf{1}}_{\{s>1/2 \}} \end{aligned}$$

and the piecewise linear function

$$ \gamma_{s}:=s{\mathbf{1}}_{\{s\le1/2\}}+(1-s){\mathbf{1}}_{\{s>1/2 \}}, \quad s\in[0,1]. $$

The following identity is obvious:

$$ W_{s}-sW_{1}=D_{s}+ \gamma_{s}W_{1/2}. $$
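The identity can be checked numerically on a discrete grid (an illustrative sketch; the grid size is arbitrary but even, so that \(s=1/2\) is a grid point):

```python
import numpy as np

# Discrete-grid check of the decomposition W_s - s W_1 = D_s + gamma_s W_{1/2}
# (an illustrative sketch, not part of the proof).
rng = np.random.default_rng(1)
n = 1000
s = np.linspace(0.0, 1.0, n + 1)
W = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(1.0 / n), size=n))])
W_half, W_one = W[n // 2], W[-1]

low = s <= 0.5
D = np.where(low,
             (W - 2 * s * W_half) + s * (W_half - W_one),
             (1 - s) * (W - W_half) - s * (W_one - W))
gamma = np.where(low, s, 1 - s)

err = np.max(np.abs((W - s * W_one) - (D + gamma * W_half)))
print(err)   # zero up to floating-point rounding
```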

Since the process \((D_{s})_{s\in[0,1]}\) and the random variable \(W_{1/2}\) are independent, we have for any bounded Borel function \(g\) that

$$ {\mathbf {E}}g(\zeta^{x})={\mathbf {E}}\int g\bigg(c\int_{0}^{1}e^{-(\kappa s +sx + \sigma D_{s}+ \sigma\gamma_{s}v)}\,ds\bigg)\varphi_{0,1/2}(v)\,dv. $$

Let \(v(x,\cdot)\) be the inverse of the continuous strictly decreasing function

$$ y\mapsto c\int_{0}^{1}e^{-(\kappa s +sx +\sigma D_{s}+\sigma\gamma_{s}y)}\,ds, $$

depending on the parameter \(x\) (and also on \(\omega\), which is omitted as usual). Note that \(v(x,0+)=\infty\) and \(v(x,\infty)=-\infty\). After a change of variables, we obtain, using the notation

$$ K(x,z):=c\sigma\int_{0}^{1}\gamma_{s}e^{-(\kappa s +sx +\sigma D_{s}+ \sigma \gamma_{s} z)}\,ds, $$

that

$$ {\mathbf {E}}g(\zeta^{x})=\int^{\infty}_{0} g(y)\rho(x,y) \,dy, $$

where

$$ \rho(x,\cdot):= {\mathbf {E}}\frac{\varphi_{0,1/2}(v(x,\cdot ))}{K(x,v(x,\cdot))}. $$

Thus, \(\rho(x,\cdot)\) is the density of the distribution of the random variable \(\zeta^{x}\). It remains to check that it is infinitely differentiable and to find appropriate bounds for its derivatives.
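The representation of \(\rho(x,\cdot)\) can be turned into a numerical sketch: for each simulated path of \(D\), invert the strictly decreasing function by root-finding and average \(\varphi_{0,1/2}(v)/K(x,v)\) over paths. Parameter values are illustrative; note that for each path the \(y\)-integral of the integrand equals one by the change of variables \(y\mapsto v\), so the computed density should integrate to one.

```python
import numpy as np
from scipy.optimize import brentq

# Sketch of the density rho(x, .) via the inverse-function representation;
# all parameter values are arbitrary illustrations.
rng = np.random.default_rng(2)
c, kappa, sigma, x = 1.0, 0.0, 0.5, 0.0
n, N = 400, 50
ds = 1.0 / n
s = np.linspace(0.0, 1.0, n + 1)
gamma = np.minimum(s, 1 - s)

def trap(f):                            # trapezoidal rule on the time grid
    return float(((f[:-1] + f[1:]) / 2).sum() * ds)

def phi_half(z):                        # N(0, 1/2) density
    return np.exp(-z * z) / np.sqrt(np.pi)

ys = np.linspace(0.15, 4.0, 80)
rho = np.zeros_like(ys)
for _ in range(N):
    W = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(ds), size=n))])
    W_half, W_one = W[n // 2], W[-1]
    D = np.where(s <= 0.5,
                 (W - 2 * s * W_half) + s * (W_half - W_one),
                 (1 - s) * (W - W_half) - s * (W_one - W))
    base = kappa * s + s * x + sigma * D
    f = lambda v: c * trap(np.exp(-(base + sigma * gamma * v)))
    K = lambda v: c * sigma * trap(gamma * np.exp(-(base + sigma * gamma * v)))
    for i, y in enumerate(ys):
        lo, hi = -10.0, 10.0            # expand the bracket until it straddles y
        while f(hi) > y:
            hi *= 2
        while f(lo) < y:
            lo *= 2
        v = brentq(lambda z: f(z) - y, lo, hi)
        rho[i] += phi_half(v) / K(v)
rho /= N

mass = float(((rho[:-1] + rho[1:]) / 2 * np.diff(ys)).sum())
print(mass)                             # close to 1
```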

Put

$$ Q^{(0)}(x,z):=\frac{\varphi_{0,1/2}(z)}{K(x,z)}, \qquad Q^{(n)}(x,z):=-\frac{Q^{(n-1)}_{z}(x,z)}{K(x,z)}, \quad n \ge1. $$

Then

$$\begin{aligned} \frac{\partial}{\partial y}\,Q^{(0)}\big(x,v(x,y)\big) =& Q^{(0)}_{z}\big(x,v(x,y)\big)v_{y}(x,y)\\ =& -\frac{Q^{(0)}_{z}(x,v(x,y))}{K(x,v(x,y))} =Q^{(1)}\big(x,v(x,y)\big), \end{aligned}$$

and similarly,

$$ \frac{\partial^{n} }{\partial y^{n}}\,Q^{(0)}\big(x,v(x,y)\big)= - \frac{Q^{(n-1)}_{z}(x,v(x,y))}{K(x,v(x,y))} =Q^{(n)}\big(x,v(x,y)\big). $$

It is easily seen that

$$ Q^{(n)}(x,z)=\frac{\varphi_{0,1/2}(z)}{K^{n+1}(x,z)}\sum _{k=0}^{n}P_{k}(z)R_{n-k}(x,z), $$
(5.7)

where \(P_{k}(z)\) is a polynomial of order \(k\) and \(R_{n-k}(x,z)\) is a linear combination of products of derivatives of \(K(x,z)\) in the variable \(z\). Note that for any \(s\in[0,1]\) and all real \(x\) and \(z\), we have the bounds

$$ {-|\kappa|-|x| -\sigma|z| - 3\sigma W^{*}_{1}}\le{\kappa s +sx +\sigma D_{s}+ \sigma\gamma_{s} z}\le{|\kappa|+|x| +\sigma|z| +3\sigma W^{*}_{1}}, $$

where \(W^{*}_{1}=\sup_{s\le1}|W_{s}|\). It follows that there exists a constant \(C_{n}>0\) such that

$$ \vert Q^{(n)}(x,z)\vert \le C_{n} e^{C_{n} |x|}(1+|z|^{n})e^{-z^{2}} e^{C_{n}\,W^{*}_{1}}. $$
(5.8)

Since \({\mathbf {E}}e^{C_{n}\,W^{*}_{1}}<\infty\), the derivative \(({\partial^{n} }/{\partial y^{n}})\,Q^{(0)}(x,v(x,y))\) admits for each \(x,y\) and \(n\) a \({\mathbf {P}}\)-integrable bound. Thus, we can differentiate under the expectation sign and obtain that

$$\begin{aligned} \frac{\partial^{n} }{\partial y^{n}}\rho(x,y) & ={\mathbf {E}}\frac{\partial^{n} }{\partial y^{n}}\frac{\varphi _{0,1/2}(v(x,y))}{K(x,v(x,y))}\\ & = {\mathbf {E}}\frac{\partial^{n} }{\partial y^{n}}\,Q^{(0)}\big(x,v(x,y)\big)\\ &={\mathbf {E}}Q^{(n)}\big(x,v(x,y)\big). \end{aligned}$$

Moreover, the bound (5.8) ensures that for some constant \(\tilde{C}_{n}\),

$$ \sup_{y\ge0} {\mathbf {E}}\left\vert\frac{\partial^{n} }{\partial y^{n}}\,Q^{(0)}\big(x,v(x,y)\big) \right\vert \le {\mathbf {E}}\sup_{z\in{\mathbb {R}}} \vert Q^{(n)}(x,z)\vert \le\tilde{C}_{n} e^{C_{n}\vert x\vert} $$

and the bound (5.6) holds. Finally, since \(v(x,0+)=\infty\), the bound (5.8) implies that \((\partial^{n} /\partial y^{n})\rho(x,0)=0\). □

Remark 5.3

It is of interest to trace in these arguments the dependence of the constant \(\tilde{C}_{n}\) on the parameters \(c\) and \(\sigma\) when they are approaching zero. From the formula (5.7), it is clear that \(\tilde{C}_{n}\) should be proportional to \((c\sigma)^{-n}\).

Proposition 5.4

The function \(\varPhi_{2}(u)\) belongs to \(C^{\infty}((0,\infty))\).

Proof

Putting

$$ \widetilde{H}(u,z):=\int H(e^{\kappa+\sigma x} z) \rho(x,u - z)\varphi_{0,1}(x) \,dx, $$

we rewrite the formula (5.5) as

$$ \varPhi_{2}(u)= \int^{u}_{0}\,\widetilde{H}(u,z)\, dz. $$

Clearly, the function \(\widetilde{H}\) is continuous on \((0,\infty )\times (0,\infty)\). Using Lemma 5.2, we obtain

$$ \frac{\partial}{\partial u}\,\widetilde{H}(u,z)\, = \int H(e^{\kappa+\sigma x} z) \frac{\partial}{\partial u}\, \rho (x,u - z) \varphi_{0,1}(x) \,dx $$

and

$$ \sup_{u,z}\, \left\vert \frac{\partial}{\partial u}\,\widetilde{H}(u,z)\, \right\vert < \,\infty. $$

By induction, for every \(n\ge1\),

$$ \frac{\partial^{n}}{\partial u^{n}}\,\widetilde{H}(u,z) = \int H(e^{\kappa+\sigma x} z) \frac{\partial^{n}}{\partial u^{n}}\, \rho(x,u - z) \varphi_{0,1}(x) \,dx $$

and

$$ \sup_{u,z}\, \left\vert \frac{\partial^{n}}{\partial u^{n}}\,\widetilde{H}(u,z)\, \right\vert < \infty. $$

By virtue of Lemma 5.2, \(\rho(x,0)=0\), i.e., \(\widetilde{H}(u,u)=0\). So,

$$ \frac{d}{du}\varPhi_{2}(u) = \widetilde{H}(u,u)+ \int^{u}_{0}\, \frac{\partial}{\partial u}\, \widetilde{H}(u,z)\, dz = \int^{u}_{0}\, \frac{\partial}{\partial u}\, \widetilde{H}(u,z)\,dz. $$

In the same way, we check that

$$ \frac{d^{n}}{du^{n}}\varPhi_{2}(u) = \int^{u}_{0} \frac{\partial^{n}}{\partial u^{n}} \widetilde{H}(u,z)\, dz $$

for any \(n\ge1\). □
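The last differentiation is the Leibniz rule \(\frac{d}{du}\int_{0}^{u}\widetilde{H}(u,z)\,dz=\widetilde{H}(u,u)+\int_{0}^{u}\frac{\partial}{\partial u}\widetilde{H}(u,z)\,dz\); a quick numerical illustration with a hypothetical smooth integrand vanishing on the diagonal, as \(\widetilde{H}\) does:

```python
import numpy as np
from scipy.integrate import quad

# Illustration of the Leibniz rule used above, with a hypothetical smooth
# integrand that vanishes on the diagonal (as H~ does, since rho(x,0)=0).
H = lambda u, z: np.sin(u * z) * (u - z) ** 2
H_u = lambda u, z: z * np.cos(u * z) * (u - z) ** 2 + 2 * np.sin(u * z) * (u - z)

u0, h = 1.3, 1e-5
F = lambda u: quad(lambda z: H(u, z), 0.0, u)[0]
lhs = (F(u0 + h) - F(u0 - h)) / (2 * h)       # numerical d/du of the integral
rhs = quad(lambda z: H_u(u0, z), 0.0, u0)[0]  # no boundary term: H(u, u) = 0
print(lhs, rhs)
```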

5.3 Smoothness of \(\varPhi_{1}\)

Arguing in the same spirit as in the previous subsection, but this time taking the conditional expectation with respect to \(W_{t}\), we obtain that

$$ {\mathbf {E}}\, {\mathbf{1}}_{\{R_{t}< u\}}\,h\big(e^{\kappa{t}+\sigma W_{t}}(u-R_{t})\big)= \frac{1}{\sqrt{t}}{\mathbf {E}} \int {\mathbf{1}}_{\{\zeta^{t,x}< u\}} h(u,t,x)\varphi_{0,1}\left(\frac{x}{\sqrt{t}}\right)\,dx, $$

where we use the abbreviations

$$ h(u,t,x):=h\big(e^{\kappa t+\sigma x} (u-\zeta^{t,x})\big),\qquad h(y)={\mathbf {E}}\varPhi(y+\xi_{1}) $$

and

$$ \zeta^{t,x}:=c\int^{t}_{0}e^{- (s x/t+\kappa s+ \sigma(W_{s}-(s/t)W_{t})) }\,d s. $$

It is easily seen that the random variable \(\zeta^{t,x}\) has an infinitely differentiable density (the same as that of \(\zeta^{x}\) defined in (5.4), but with the parameters \(ct\), \(\kappa t\), and \(\sigma t^{1/2}\)). Unfortunately, the derivatives of this density have non-integrable singularities as \(t\) tends to zero (see Remark 5.3). For this reason, we cannot use the strategy of the proof given for \(\varPhi _{2}\). Nevertheless, the hypothesis on the distribution of \(\xi_{1}\) allows us to establish the claimed result.

Note that the function \(x\to\zeta^{t,x}\) is strictly decreasing and maps ℝ onto \({\mathbb {R}}_{+}\). Let \(z(t,\cdot)\) denote its inverse. The derivative of the latter is given by the formula

$$ z_{x}(t,x)=-\frac{t}{L(t,z(t,x))}, $$

where

$$ L(t,z)=c \int^{t}_{0}se^{- (s z/t+\kappa s+ \sigma(W_{s}-(s/t)W_{t}))}\,ds. $$
(5.9)

Changing variables, we obtain that

$$ \int {\mathbf{1}}_{\{\zeta^{t,z}< u\}} {h}(u,t,z) \varphi_{0,1}\left(\frac{z}{\sqrt{t}}\right)\,d z =t \int^{u}_{0} {h}\big(u,t,z(t,x)\big) D\big(t,z(t,x)\big) \,d x, $$

where

$$ D(t,z)=\frac{\varphi_{0,1}({z}/\sqrt{t})}{L(t,z)}. $$

Summarizing, we get that

$$ \varPhi_{1}(u)=\alpha {\mathbf {E}}\int^{2}_{0} \sqrt{t} H(t,u)\,e^{-\alpha t}\,d t, $$

where

$$ H(t,u):= \int^{u}_{0} {h}\big(u,t,z(t,x)\big) D\big(t,z(t,x)\big) \,d x. $$
(5.10)

Proposition 5.5

Under the conditions of Theorem  5.1, the function \(H(t,u)\) defined in (5.10) satisfies for any fixed \(u_{0}>0\) the inequality

$$ \sup_{t\in(0, 2]} {\mathbf {E}} \sup_{u\ge u_{0}}\, \big(\vert H_{u}(t,u)\vert + \vert H_{uu}(t,u) \vert \big) < \infty. $$

Proof

By virtue of the hypothesis, the function \(h(y)={\mathbf {E}}\varPhi (y+\xi _{1})\) is differentiable. Differentiating (5.10), we get

$$ H_{u}(t,u)=h(0)\,D\big(t,z(t,u)\big)+ \int^{u}_{0}h_{u}\big(u,t,z(t,x)\big) D\big(t,z(t,x)\big) \,d x, $$

where \(h(0)={\mathbf {E}}\varPhi(\xi_{1})\) and

$$ h_{u}\big(u,t,z(t,x)\big) =h'\big(e^{\kappa t+\sigma z(t,x)}(u-x)\big) e^{\kappa t+\sigma z(t,x)}. $$

Note that

$$ \frac{\partial}{\partial x}h\big(u,t,z(t,x)\big)=h_{u}\big(u,t,z(t,x)\big)\big(\sigma z_{x}(t,x)(u-x)-1\big). $$

Therefore,

$$\begin{aligned} &\int^{u}_{0}h_{u}\big(u,t,z(t,x)\big) D\big(t,z(t,x)\big)\,d x\\ &= -\int^{u}_{0}\, \frac{\varphi_{0,1}({z(t,x)}/{\sqrt{t}})}{\sigma t(u-x)+L(t,z(t,x))} \,d_{x}\, h\big(u,t,z(t,x)\big). \end{aligned}$$

Integrating by parts and taking into account that \(z(t,0+)=\infty\), we get that

$$ H_{u}(t,u)= \int^{u}_{0} h\big(u,t,z(t,x)\big)\, \varTheta\big(u,t,z(t,x)\big)\,\varphi_{0,1}\big({z(t,x)}/{\sqrt {t}}\big)\, d x, $$
(5.11)

where

$$ \varTheta(u,t,z) =\frac{z}{L(t,z)\left(\sigma t(u-\zeta^{t,z})+L(t,z)\right)} + \frac{t\left(\sigma L(t,z)+ L_{z}(t,z)\right)}{L(t,z)\left(\sigma t(u-\zeta^{t,z})+L(t,z)\right)^{2}}. $$

Inspecting the formula (5.9) defining \(L(t,z)\), we conclude that there exist positive constants \(C_{0}\) (“small”) and \(C_{1}\) (“large”) such that for any \(t\in(0,2]\),

$$ L(t,z)+ t|L_{z}(t,z)| \le C_{1} e^{C_{1}(|z|+W^{*}_{t})} $$

and

$$ |L(t,z)|\ge C_{0}\,t^{2}\, e^{- C_{1}(|z|+W^{*}_{t})}, $$

where \(W^{*}_{t}:=\max_{s\le t}|W_{s}|\). Taking this into account, we obtain that for some \(C>0\),

$$ \left\vert \varTheta(u,t,z) \right\vert \le C\,t^{-6}\, e^{C(|z|+W^{*}_{t})}. $$
(5.12)

Using the generic notation \(C\) for a constant (which may vary even within a single formula), we obtain for any \(t\in(0, 2]\) that

$$\begin{aligned} \left\vert H_{u}(t,u) \right\vert &\le Ct^{-7}e^{CW^{*}_{t}}\int^{\infty}_{z(t,u)}\,e^{C|z|-\frac {z^{2}}{2t}}L(t,z)\, d z \\ &\le Ct^{-7} e^{C\,W^{*}_{2}}\,e^{-\frac{z^{2}(t,u)}{4t}}\, \int^{\infty}_{0}\,e^{C|z|-\frac{z^{2}}{4}}\,d z. \end{aligned}$$

So we have the bound

$$ \left\vert H_{u}(t,u) \right\vert \le C t^{-7} e^{CW^{*}_{2}}\,e^{-\frac{z^{2}(t,u)}{4t}}. $$

For any \(u\ge u_{0}>0\) and \(t\in(0,2]\),

$$ u_{0}\le u=\zeta^{t,z(t,u)}\le c t\, e^{2|\kappa|+\sigma|z(t,u)|+ \sigma W^{*}_{t}}, $$

i.e.,

$$ |z(t,u)|\ge\frac{1}{\sigma}\,\ln\,\frac{e^{-2|\kappa |}u_{0}}{tc}-W^{*}_{t}. $$

Put

$$ t_{0}:=\min\left(\frac{e^{-2|\kappa|-3\sigma} u_{0}}{c},\,2\right), \qquad \varGamma:=\{W^{*}_{t}\le t^{1/4}\}. $$

Thus, for \(t\in(0,t_{0}]\), we have on the set \(\varGamma\) the inequality

$$ |z(t,u)|\ge1. $$

Taking into account that \({\mathbf {E}}e^{aW^{*}_{t}}<\infty\) for any \(a\) and \(t>0\), we obtain that

$$ {\mathbf {E}} \max_{t\in[t_{0},2]}\,\sup_{u\ge0}\,\left\vert H_{u}(t,u) \right\vert\, \le C t_{0}^{-7}{\mathbf {E}} e^{CW^{*}_{2}} < \infty. $$

For \(t\in(0,t_{0}]\), we have

$$\begin{aligned} {\mathbf {E}}\max_{u\ge u_{0}}\,\vert H_{u}(t,u) \vert\,& \le C t^{-7} \big( e^{-\frac{1}{4t}} + {\mathbf {E}}e^{CW^{*}_{t}}\,{\mathbf{1}}_{\varGamma^{c}} \big)\\ &\le C t^{-7} \Big( e^{-\frac{1}{4t}} + \sqrt{{\mathbf {E}}e^{CW^{*}_{2}}} \sqrt{{\mathbf {P}}[W^{*}_{t}\ge t^{1/4}]} \Big). \end{aligned}$$

By the Chebyshev inequality, we have

$$ {\mathbf {P}}[W^{*}_{t}\ge t^{1/4}]\le e^{-t^{-1/4}}{\mathbf {E}}e^{\frac {W^{*}_{t}}{\sqrt {t}}}=e^{-t^{-1/4}}{\mathbf {E}}e^{W^{*}_{1}}, $$

so that

$$ \sup_{t>0}\, e^{t^{-1/4}}{\mathbf {P}}[W^{*}_{t}\ge t^{1/4}]< \infty. $$

This implies that

$$ \max_{t\in(0, t_{0}]} {\mathbf {E}}\max_{u\ge u_{0}}\left\vert H_{u}(t,u) \right\vert < \infty. $$

Therefore,

$$ \max_{t\in(0, 2]} {\mathbf {E}}\max_{u\ge u_{0}}\,\left\vert H_{u}(t,u) \right\vert < \infty. $$
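The two probabilistic facts used in the preceding estimates, the scaling \(W^{*}_{t}/\sqrt{t}\stackrel{d}{=}W^{*}_{1}\) and the exponential Chebyshev bound, can be illustrated by a rough Monte Carlo sketch (grid discretization slightly underestimates the supremum; \(t=0.5\) and the grid size are arbitrary):

```python
import numpy as np

# Monte Carlo illustration (not a proof) of the scaling W*_t/sqrt(t) =d W*_1
# and of the bound P[W*_t >= t^{1/4}] <= e^{-t^{-1/4}} E e^{W*_1}.
rng = np.random.default_rng(3)
t, n, N = 0.5, 1000, 5000

W_t = np.cumsum(rng.normal(0.0, np.sqrt(t / n), size=(N, n)), axis=1)
W_star_t = np.abs(W_t).max(axis=1)            # sup of |W| on [0, t]
W_1 = np.cumsum(rng.normal(0.0, np.sqrt(1.0 / n), size=(N, n)), axis=1)
W_star_1 = np.abs(W_1).max(axis=1)            # sup of |W| on [0, 1]

lhs = float((W_star_t >= t ** 0.25).mean())
rhs = float(np.exp(-t ** -0.25) * np.exp(W_star_1).mean())
print(lhs, rhs)                               # lhs is well below rhs
print(float(W_star_t.mean() / np.sqrt(t)), float(W_star_1.mean()))
```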

Differentiating (5.11), we find that

$$\begin{aligned} H_{uu}(t,u) =& h(0) \varTheta\big(u,t,z(t,u)\big)\,\varphi\left(\frac{z(t,u)}{\sqrt {t}}\right )\\ &{}+ \int^{u}_{0} \varUpsilon\big(u,t,z(t,x)\big)\, \varphi\left(\frac{z(t,x)}{\sqrt{t}}\right)\, d x, \end{aligned}$$

where

$$ \varUpsilon(u,t,z)= {h}_{u}(u,t,z) \varTheta(u,t,z) + {h}(u,t,z) \varTheta_{u}(u,t,z). $$

By assumption, the distribution function \(F\) has the density \(f\) whose derivative \(f'\) is a continuous function on \({\mathbb {R}}_{+}\) which is integrable with respect to Lebesgue measure. Changing variables, we get that

$$\begin{aligned} {h'}(y)&=\frac{d}{dy}\int_{0}^{\infty}\varPhi(y+x)f(x)\,dx= \frac{d}{dy}\int_{y}^{\infty}\varPhi(x)f(x-y)\,dx\\ &=-\varPhi(y)\,f(0) - \int^{\infty}_{y}\,\varPhi(z){f'}(z-y)\,d z, \end{aligned}$$

i.e.,

$$ \sup_{x\ge0}\,|{h'}(x)|< \infty. $$

Similarly to (5.12), we obtain that

$$ \left\vert \varUpsilon(u,t,z) \right\vert \le C\,t^{-8}\, e^{C(|z|+W^{*}_{t})}. $$

Therefore,

$$ \left\vert H_{uu}(t,u) \right\vert \le C t^{-9}\, e^{C\,W^{*}_{t}}\,e^{-\frac{z^{2}(t,u)}{4t}}, $$

implying that

$$ \max_{t\in(0, 2]} {\mathbf {E}}\max_{u\ge u_{0}}\,\left\vert H_{uu}(t,u) \right\vert < \infty. $$

Proposition 5.5 is proved. □

5.4 Integro-differential equation for the survival probability

Proposition 5.6

Suppose that \(\varPhi\in C^{2}\). Then \(\varPhi\) satisfies Eq. (2.2).

Proof

For \(h>0\) and \(\epsilon>0\) small enough that \(u\in(\epsilon, \epsilon^{-1})\), we put

$$ \tau^{\epsilon}_{h}:=\inf\big\{ t\ge0: Y^{u}_{t} \notin[\epsilon,\epsilon^{-1}]\big\} \wedge h. $$

Using the Itô formula and taking into account that on the interval \([0,T_{1})\), the process \(X^{u}\) coincides with \(Y^{u}\), we obtain the representation

$$\begin{aligned} \varPhi(X^{u}_{\tau^{\epsilon}_{h}\wedge T_{1}}) =& \varPhi(u)+\sigma\int_{0}^{\tau^{\epsilon}_{h}\wedge T_{1}}\varPhi '(Y^{u}_{s})\, dW_{s}\\ &{}+ \int_{0}^{\tau^{\epsilon}_{h}\wedge T_{1}}\bigg(\frac{1}{2}{\sigma^{2}} (Y^{u}_{s})^{2}\varPhi''(Y^{u}_{s})+(aY^{u}_{s}-c)\varPhi'(Y^{u}_{s})\bigg)\,ds\\ &{}+ \big(\varPhi(Y^{u}_{T_{1}}+\xi_{1})-\varPhi(Y^{u}_{T_{1}})\big){\mathbf{1}}_{\{ T_{1}\le {\tau ^{\epsilon}_{h}}\}}. \end{aligned}$$

Due to the strong Markov property, \(\varPhi(u)={\mathbf {E}}\varPhi(X^{u}_{\tau ^{\epsilon }_{h}\wedge T_{1}})\). For every \(\epsilon>0\), the integrands above are bounded by constants, and hence the expectation of the stochastic integral is zero. Moreover, \(\tau^{\epsilon}_{h}\wedge T_{1}=h\) when \(h\) is sufficiently small (the threshold below which we have this equality of course depends on \(\omega\)). It follows that, independently of \(\epsilon\),

$$\begin{aligned} &\frac{1}{h} {\mathbf {E}}\int_{0}^{\tau^{\epsilon}_{h}\wedge T_{1}}\bigg(\frac{1}{2}{\sigma^{2}} (Y^{u}_{s})^{2}\varPhi''(Y^{u}_{s})+(aY^{u}_{s}-c)\varPhi'(Y^{u}_{s})\bigg)\,ds\\ &\longrightarrow\frac{1}{2}{\sigma^{2}} u^{2}\varPhi''(u)+(au-c)\varPhi'(u). \end{aligned}$$

Finally,

$$ \frac{1}{h} {\mathbf {E}}\big(\varPhi(Y^{u}_{T_{1}}+\xi_{1})-\varPhi(Y^{u}_{T_{1}})\big){\mathbf{1}} _{\{ T_{1}\le{\tau^{\epsilon}_{h}}\}}=\alpha{\mathbf {E}}\frac{1}{h} \int _{0}^{\tau ^{\epsilon}_{h}} \big(\varPhi(Y^{u}_{t}+\xi_{1})-\varPhi(Y^{u}_{t})\big)e^{-\alpha t}\,dt. $$

The right-hand side converges to \(\alpha({\mathbf {E}}\varPhi(u+\xi _{1})-\varPhi(u))\) as \(h\to0\) by the Lebesgue theorem on dominated convergence. It follows that \(\varPhi\) satisfies Eq. (2.2). □
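The dynamics used in the proof, \(dY^{u}_{t}=(aY^{u}_{t}-c)\,dt+\sigma Y^{u}_{t}\,dW_{t}\) between benefit arrivals, can be simulated to get a crude Monte Carlo view of the ruin probability. The sketch below uses an Euler scheme with exponentially distributed benefits and arbitrary illustrative parameters; it resolves at most one arrival per step, which is adequate when \(\alpha\,\Delta t\ll1\):

```python
import numpy as np

# Euler-scheme Monte Carlo sketch of the risk process: between benefit
# arrivals dX = (aX - c) dt + sigma X dW; benefits are Exp(mu) at Poisson
# rate alpha.  Parameters and the finite horizon are illustrative only.
rng = np.random.default_rng(4)
a, sigma, c, alpha, mu = 0.1, 0.3, 1.0, 1.0, 1.0
T, n, N = 50.0, 5000, 4000
dt = T / n

def ruin_prob(u):
    X = np.full(N, float(u))
    alive = np.ones(N, dtype=bool)
    for _ in range(n):
        dW = rng.normal(0.0, np.sqrt(dt), size=N)
        arrival = rng.random(N) < alpha * dt   # at most one arrival per step
        X = X + (a * X - c) * dt + sigma * X * dW \
              + arrival * rng.exponential(mu, size=N)
        alive &= X > 0
        X = np.where(alive, X, 0.0)
    return 1.0 - alive.mean()                  # ruin before the horizon T

p_small, p_large = ruin_prob(2.0), ruin_prob(20.0)
print(p_small, p_large)                        # ruin probability decreases in u
```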

6 Proof of Theorem 2.1

Assume that the claims are exponentially distributed, i.e., \(F(x)=1-e^{-x/\mu}\). As in the classical case, this assumption allows us to derive for the ruin probability an ordinary differential equation (but of higher order). Indeed, the integro-differential equation (2.2) now takes the form

$$ \frac{1}{2} \sigma^{2}u^{2} \varPhi''(u)+ (au-c)\varPhi'(u) - \alpha\varPhi(u)+\frac{\alpha}{\mu} \int_{0}^{\infty}\varPhi (u+y)e^{-y/\mu}\,dy=0. $$
(6.1)

Notice that

$$ \frac{d}{du}\int_{0}^{\infty}\varPhi(u+y)e^{-y/\mu}\,dy=-\varPhi(u) + \frac{1}{\mu}\int_{0}^{\infty}\varPhi(u+y)e^{-y/\mu}\,dy. $$
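This identity can be verified symbolically for a hypothetical test function, say \(\varPhi(v)=e^{-v}\) (a sketch; any choice making the integral finite works):

```python
import sympy as sp

# Symbolic check of the differentiation identity, with the hypothetical
# test function Phi(v) = exp(-v).
u, y = sp.symbols('u y', positive=True)
mu = sp.Symbol('mu', positive=True)

J = sp.integrate(sp.exp(-(u + y)) * sp.exp(-y / mu), (y, 0, sp.oo),
                 conds='none')                  # the integral term
identity = sp.simplify(sp.diff(J, u) - (-sp.exp(-u) + J / mu))
print(identity)   # 0
```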

Differentiating (6.1), multiplying the result by \(\mu\) and subtracting (6.1), we eliminate the integral term and arrive at a third-order ordinary differential equation. The latter does not contain the function \(\varPhi\) itself and is therefore reduced to a second-order differential equation for the function \(G=\varPhi'\), which can easily be transformed to the form

$$ G^{\prime\prime}- p(u)G^{\prime}+ p_{0}(u)G=0, $$
(6.2)

where

$$\begin{aligned} p(u)&:=\frac{1}{\mu}- 2\bigg(1+\frac{a}{\sigma^{2}}\bigg) \frac{1}{u}+\frac{2c}{\sigma^{2}} \frac{1}{u^{2}},\\ p_{0}(u)&:=- \frac{2a}{\mu\sigma^{2}}\frac{1}{u}+\left(a-\alpha+c/\mu\right) \frac{2}{\sigma^{2}}\frac{1}{u^{2}}. \end{aligned}$$

The substitution \(G(u)=R(u)Z(u)\) with

$$ R(u):=\exp\bigg(\frac{1}{2} \int_{1}^{u} p(s)\,d s\bigg) $$

eliminates the first derivative and leads to the equation

$$ Z''- q(u) Z=0, $$
(6.3)

where

$$ q(u):=\frac{1}{4\mu^{2}}+\frac{1}{\mu}\bigg(\frac{a}{\sigma^{2}}-1\bigg)\frac {1}{u}+\sum_{i=2}^{4} A_{i}\frac{1}{u^{i}} $$

with certain constants \(A_{i}\) which are of no importance in our asymptotic analysis. It is easy to check that

$$ \int^{\infty}_{x_{0}}\,\left( \frac{|q^{\prime\prime}(x)|}{q^{3/2}(x)} + \frac{|q^{\prime}(x)|^{2}}{q^{5/2}(x)} \right)\,d x< \infty, $$

where \(x_{0}=\sup\{x\ge1\colon q(x)\le0\}+1\).
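The passage from (6.2) to (6.3) is the standard Liouville substitution, under which \(q=p^{2}/4-p'/2-p_{0}\). A symbolic sketch confirming the expansion of \(q\) at infinity (the symbol `sigma2` stands for \(\sigma^{2}\)):

```python
import sympy as sp

# Liouville substitution behind (6.3): q = p^2/4 - p'/2 - p_0.  The sketch
# verifies the first two terms of the expansion of q at infinity.
u = sp.Symbol('u', positive=True)
a, sigma2, mu, c, alpha = sp.symbols('a sigma2 mu c alpha', positive=True)

p = 1/mu - 2*(1 + a/sigma2)/u + (2*c/sigma2)/u**2
p0 = -2*a/(mu*sigma2*u) + (a - alpha + c/mu) * 2/(sigma2*u**2)
q = sp.together(p**2/4 - sp.diff(p, u)/2 - p0)

lead = sp.limit(q, u, sp.oo)                        # constant term
next_coeff = sp.limit(u * (q - lead), u, sp.oo)     # coefficient of 1/u
print(lead, next_coeff)
```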

According to [10, Ch. 2.6], Eq. (6.3) has two fundamental solutions \(Z_{+}\) and \(Z_{-}\) given by

$$ Z_{\pm}(x)=\sqrt{2\mu}\,\exp\bigg(\pm\int^{x}_{x_{0}}\,\sqrt{q(z)}\,d z\bigg)\big(1+o(1)\big), \qquad x\to\infty, $$

i.e.,

$$ Z_{\pm}\sim\exp\bigg(\pm\Big(\frac{x}{2\mu}+\frac{a-\sigma ^{2}}{\sigma ^{2}}\,\ln x\Big)\bigg). $$

Since

$$ R(x)\sim\exp\bigg(\frac{x}{2\mu}-\frac{a+\sigma^{2}}{\sigma ^{2}}\,\ln x \bigg), $$

we obtain that (6.2) admits as solutions functions with the asymptotics

$$ G_{+}(x)\sim x^{-2}e^{\frac{1}{\mu}x}, \qquad G_{-}(x)\sim x^{-2a/\sigma^{2}}. $$
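As a sanity check, substituting \(G_{-}(x)=x^{-2a/\sigma^{2}}\) into (6.2) one sees that the terms of order \(x^{-2a/\sigma^{2}-1}\) cancel, so the residual decays one power of \(x\) faster. A symbolic sketch with arbitrary rational parameter values:

```python
import sympy as sp

# Check that G_-(x) = x^{-2a/sigma^2} solves (6.2) to leading order; the
# parameter values are arbitrary rationals chosen only for the test.
x = sp.Symbol('x', positive=True)
a, sigma2, mu, c, alpha = (sp.Rational(1, 10), sp.Rational(4, 25),
                           sp.Integer(1), sp.Integer(1), sp.Rational(1, 5))

p = 1/mu - 2*(1 + a/sigma2)/x + (2*c/sigma2)/x**2
p0 = -2*a/(mu*sigma2*x) + (a - alpha + c/mu) * 2/(sigma2*x**2)
gamma = 2*a/sigma2                     # the exponent 2a/sigma^2 (= 5/4 here)
G = x**(-gamma)

residual = sp.diff(G, x, 2) - p*sp.diff(G, x) + p0*G
lim = sp.limit(residual * x**(gamma + 1), x, sp.oo)
print(lim)   # 0: the terms of order x^{-gamma-1} cancel
```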

The third-order differential equation for \(\varPhi\) has three linearly independent solutions: \(\varPhi_{0}(u)=1\),

$$ \varPhi_{+}(u)=\int^{u}_{x_{0}}\,G_{+}(x)\,d x, \qquad \varPhi_{-}(u)=-\int^{+\infty}_{u}\,G_{-}(x)\,d x. $$

The ruin probability \(\varPsi:=1-\varPhi\) is a linear combination of these functions, i.e.,

$$ \varPsi(u)=C_{0}+C_{1}\varPhi_{+}(u)+C_{2}\varPhi_{-}(u). $$

Since \(\varPhi_{+}(u)\to\infty\) as \(u\to\infty\), we obtain immediately that \(C_{1}=0\). For the case \(\beta>0\), we know from Proposition 3.1 that \(\varPsi(\infty)=0\). Thus, \(C_{0}=0\) and

$$ \varPsi(u)= C_{2}\,\int_{u}^{\infty}\, x^{-2a/\sigma^{2}}\,\big(1+\delta (x)\big)\,d x, $$

where \(\delta(x)\to0\) as \(x\to\infty\) and \(C_{2}>0\). The integral decreases at infinity as the power function \(u^{-\beta}/\beta\), and we obtain Theorem 2.1.  □