Abstract
We investigate the asymptotics of ruin probabilities when the company invests its reserve in a risky asset with a regime-switching price. We assume that the asset price is a conditional geometric Brownian motion with parameters modulated by a Markov process with a finite number of states. Using techniques from implicit renewal theory, we obtain the rate of convergence to zero of the ruin probabilities as the initial capital tends to infinity.
1 Introduction
Models where an insurance company invests its reserve (or a part of it) in risky assets form an important class currently under active study. Considering a single risky asset is justified by the common practice of investing in a market portfolio or in an index fund (one tracking an index like the DAX or the S&P 500), which is an economically reasonable strategy. Since insurance contracts are usually of long duration and returns may depend, e.g., on the business cycles of the economy, models with regime switching have become increasingly popular. The main question addressed here is the rate of decay of the ruin probability as the initial reserve tends to infinity.
In this note, we extend the recent result of Ellanskaya and Kabanov [9], established for a model with characteristics of the asset price depending on a telegraph process, i.e., on a Markov process with two states, 0 and 1. In the present paper, we study the case where the characteristics depend on an ergodic Markov process \(\theta \) with a finite number of states, as was suggested in Di Masi et al. [7]. When \(\theta _{t}=k\), the asset price evolves as a geometric Brownian motion with drift \(a_{k}\) and volatility \(\sigma _{k}\). It is well known, see e.g. Frolova et al. [11], Kabanov and Pergamenshchikov [15], Kabanov and Pukhlyakov [17], Eberlein et al. [8], that in the case of a single regime, i.e., when the price process is the classic geometric Brownian motion with drift \(a\) and volatility \(\sigma \), the ruin probability decreases to zero, as the initial capital \(u\) tends to infinity, with the rate \(u^{-\beta}\), where \(\beta :=2a/\sigma ^{2} -1\), provided that \(\beta >0\). In [9], it was shown that in a model with two regimes, 0 and 1, the ruin probabilities decrease with a rate \(u^{-\beta}\), where \(\beta \) is a number between the values \(\beta _{k}:=2a_{k}/\sigma ^{2}_{k} -1\), \(k=0,1\), which are assumed to be strictly positive. This \(\beta \) is the root of an algebraic equation of third order and does not depend on the initial value of \(\theta \).
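For concreteness, the single-regime exponent can be computed in one line. In the sketch below, only the formula \(\beta =2a/\sigma ^{2}-1\) comes from the text; the drift and volatility values are purely hypothetical, chosen for illustration.

```python
def beta(a, sigma):
    """Decay exponent beta = 2a/sigma^2 - 1 of the ruin probability u^(-beta)."""
    return 2.0 * a / sigma**2 - 1.0

# Hypothetical regimes k = 0, 1 with (drift a_k, volatility sigma_k).
regimes = {0: (0.06, 0.20), 1: (0.03, 0.15)}
for k, (a_k, s_k) in regimes.items():
    b = beta(a_k, s_k)
    # beta_k > 0 is required for the power-law decay u^(-beta_k)
    print(f"regime {k}: beta_{k} = {b:.4f}")
```

In a single-regime model the ruin probability then decays like \(u^{-\beta}\); the result recalled below identifies the analogous exponent when the regime switches.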
In the present paper, we extend the above result to the case where the number of states of the hidden Markov process \(\theta \) is \(K\ge 2\). It turns out that, provided all \(\beta _{k}>0\), \(k=0,\dots , K-1\), the rate of convergence to zero of the ruin probabilities, depending in general on the initial state \(i\), is determined by the root \(\gamma _{i}>0\) of the cumulant generating function of the log-price process at the first return time of \(\theta \) to the state \(i\). It is worth noting that the switching by a telegraph signal is rather specific: the latter returns to the initial state after the second jump, while for a general Markov process, the return may happen after an arbitrary number of jumps.
Although the main idea is again based on implicit renewal theory, the analysis of the present model turns out to be considerably more involved, and the calculation of the rate parameter, which in general depends on the initial state, is not straightforward. We hope that the results of this paper shed light on the challenging problem of estimating ruin probabilities for other stochastic volatility models.
2 The model
Let \((\Omega ,{\mathcal{F}},{\mathbf{F}}=({\mathcal{F}}_{t})_{t\ge 0},{\mathbf{P}})\) be a stochastic basis with a Wiener process \(W=(W_{t})\), a Poisson random measure \(\pi (dt,dx)\) on \({\mathbb{R}}_{+}\times {\mathbb{R}}\) with mean \(\tilde{\pi}(dt,dx)=\Pi (dx)dt\), and a piecewise constant right-continuous Markov process \(\theta =(\theta _{t})\). For the latter, we assume that it takes values in the finite set \(\Theta :=\{0,1,\dots ,K-1\}\), has the \(K\times K\) transition intensity matrix \(\Lambda =(\lambda _{jk})\) with communicating states, and the initial value \(\theta _{0}=i\) (so that \(\theta =\theta ^{i}\), where the superscript \(i\) as usual denotes the starting value of the process). The \(\sigma \)-algebras generated by \(W\), \(\pi \) and \(\theta \) are independent.
Recall that \(\lambda _{jj}=-\sum _{k\neq j}\lambda _{jk}\) and \(\lambda _{i}:=-\lambda _{ii}>0\) for each \(i\).
Let \(T_{n}\) denote the successive jump times of the Poisson process associated with the random measure \(\pi \), and let \(\tau _{n}\) denote the successive jump times of \(\theta \), with the convention \(T_{0}=0\) and \(\tau _{0}=0\). Recall that the lengths of the intervals between the consecutive jump times of \(\theta \) are independent exponentially distributed random variables.
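The mechanics of \(\theta \) can be made concrete by simulation. The sketch below uses only the facts just recalled: exponential holding times with rates \(\lambda _{k}\) and jumps governed by the embedded chain with probabilities \(\lambda _{k\ell}/\lambda _{k}\). The intensity matrix is purely hypothetical.

```python
import numpy as np

def first_return_excursion(Lam, i, rng):
    """Simulate theta from state i until its first return to i.

    Lam is the K x K transition intensity matrix (rows sum to zero).
    Returns the states visited before the return (starting at i) and the
    exponential holding times spent in each of them.
    """
    Lam = np.asarray(Lam, dtype=float)
    K = Lam.shape[0]
    states, holds = [], []
    k = i
    while True:
        lam_k = -Lam[k, k]                    # rate of leaving state k
        states.append(k)
        holds.append(rng.exponential(1.0 / lam_k))
        # embedded chain: jump to l != k with probability lambda_kl / lambda_k
        probs = Lam[k].copy()
        probs[k] = 0.0
        k = rng.choice(K, p=probs / lam_k)
        if k == i:                            # first return: stop
            return states, holds

rng = np.random.default_rng(0)
# hypothetical 3-state intensity matrix
Lam = [[-2.0, 1.5, 0.5],
       [1.0, -3.0, 2.0],
       [0.5, 0.5, -1.0]]
states, holds = first_return_excursion(Lam, 0, rng)
print(states, sum(holds))   # states visited before the return; excursion length
```

The sum of the holding times is one sample of the first return time of \(\theta \) to the state \(i\), an object that plays a central role below.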
The reserve \(X=X^{u}\) of an insurance company evolves not only due to the business activity part, described as in the classical Cramér–Lundberg model, but also due to some risky investments. We assume that the reserve is fully invested in a risky asset whose price \(S\) is a conditional geometric Brownian motion, given the Markov process \(\theta \). That is, \(S\) is given by a so-called hidden Markov model with
where \(a_{k}\in {\mathbb{R}}\), \(\sigma _{k}>0\), \(k=0,\dots ,K-1\). In this case, \(X\) is of the form
where \(dR_{t}=a_{\theta _{t}}dt+\sigma _{\theta _{t}} dW_{t}=dS_{t}/S_{t}\), that is, \(R\) is the relative price process, and
So the reserve evolution is described by the process \((X^{u},\theta )= (X^{u,i},\theta ^{i})\), where \(u>0\) is the initial capital and \(i\) is the initial regime, i.e., the initial value of \(\theta \).
We assume that the process \(P\) is not increasing (otherwise ruin never happens).
We work assuming that \(\Pi ({\mathbb{R}})<\infty \). In this case, \(\Pi (dx)=\alpha _{1} F_{1}(dx)+\alpha _{2}F_{2}(dx)\), where \(F_{1}(dx)\) is a probability measure on \((-\infty ,0)\) and \(F_{2}(dx)\) is a probability measure on \((0,\infty )\). This means that the integral with respect to \(\Pi \) represents the difference of two independent compound Poisson processes with intensities \(\alpha _{1}\), \(\alpha _{2}\) with negative and positive jumps, whose absolute values have the distributions \(F_{1}(dx)\) and \(F_{2}(dx)\), respectively.
The hypothesis that the parameters of the business process \(P\) are also Markov modulated is economically reasonable; see the Cox models in Grandell [13, Chap. 4] and, for a more general setting with risky investments, in Behme and Sideris [4]. To make the presentation more transparent, we do not assume this and leave this generalisation, which is mathematically not significant, to the reader (one only needs to adjust the proofs of Lemmas 4.4 and 5.1).
The solution of the linear equation (2.1) can be represented as
where
the stochastic exponential \({\mathcal{E}}_{t}(R)\) is equal to \({S_{t}}\), and the log-price process \(V=\ln {\mathcal{E}}(R)\) admits the stochastic differential
Of course, \(S\), \(R\), \(Y\) and \(V\) depend on \(i\) (we omitted the superscript \(i\) in the above formulas).
Let \(\tau ^{u,i}:=\inf \{t>0:\ X^{u,i}_{t}\le 0\}\) be the instant of ruin corresponding to the initial capital \(u\) and the initial regime \(i\). Then \(\Psi _{i}(u):={\mathbf{P}}[\tau ^{u,i}<\infty ]\) is the ruin probability and \(\Phi _{i}(u):=1-\Psi _{i}(u)\) is the survival probability. It is clear that \({ \tau ^{u,i}= \inf \{t\ge 0:\ Y^{i}_{t} \ge u\}}\).
The constant parameter values \(a=0\), \(\sigma =0\) correspond to the Cramér–Lundberg setup where \(X^{u,i}_{t}=u+P_{t}\) and the compound Poisson process \(P\) is usually written in the form
In the case of non-life insurance, the jumps are negative, i.e., \(\xi _{k}\ge 0\), \(c> 0\) (i.e., \(F_{2}=0\)). The case \(\xi _{k}\le 0\), \(c< 0\) (i.e., \(F_{1}=0\)) corresponds to the model of life insurance or annuity payments (sometimes referred to as the dual model). Models with both kinds of jumps are mathematically more difficult and less frequent in the literature; but see e.g. Albrecher et al. [1], Kabanov and Pukhlyakov [17] and the references therein. For these classical models with a positive average trend and “non-heavy” tail of \(F\), the Lundberg inequality says that the ruin probability decreases exponentially fast as the initial reserve \(u\) tends to infinity. For exponentially distributed claims, the ruin probability can be computed explicitly; see Asmussen and Albrecher [2, Chap. IV.3b] or Grandell [13, Sect. 1.1].
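As an illustration of the last remark, for the classical non-life model with claim sizes that are exponential with rate \(\mu \), claim intensity \(\alpha \) and premium rate \(c>\alpha /\mu \), the classical explicit formula is \(\Psi (u)=(\alpha /(c\mu ))\,e^{-(\mu -\alpha /c)u}\); see Asmussen and Albrecher [2]. A quick numerical sketch (all parameter values hypothetical):

```python
import math

def psi_exponential(u, alpha=1.0, mu=2.0, c=0.75):
    """Ruin probability in the Cramer-Lundberg model with Exp(mu) claims,
    claim intensity alpha and premium rate c (net profit: c > alpha/mu):
    Psi(u) = (alpha/(c*mu)) * exp(-(mu - alpha/c) * u)."""
    if c <= alpha / mu:
        raise ValueError("net profit condition violated")
    R = mu - alpha / c               # Lundberg adjustment coefficient
    return alpha / (c * mu) * math.exp(-R * u)

for u in (0.0, 1.0, 5.0, 10.0):
    print(u, psi_exponential(u))     # exponentially fast decay in u
```

The exponential decay here contrasts sharply with the power-law decay that appears once the reserve is invested in a risky asset.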
For models with risky investments, the situation is radically different. For example, for a model with exponentially distributed jumps and the price of the risky asset given by a geometric Brownian motion with drift coefficient \(a\) and volatility \(\sigma >0\), the ruin probability, as a function of the initial capital \(u\), decreases as \(C u^{1-2a/\sigma ^{2}}\) if \(2a/\sigma ^{2}-1>0\). If \(2a/\sigma ^{2}-1\le 0\), ruin happens almost surely; see Frolova et al. [11], Kabanov and Pergamenshchikov [15], Pergamenshchikov and Zeitouni [23] and Kabanov and Pukhlyakov [17].
To formulate our result for a model where the volatility and drift are modulated by a finite-state Markov process, we assume throughout the paper except Sect. 7 that \(\beta _{k}:=2a_{k}/\sigma _{k}^{2}-1>0\) for all \(k=0,\dots ,K-1\) (in other words, \(\beta _{*}:=\min _{j} \beta _{j}>0\)). See a comment on this hypothesis in Sect. 8.
Let \({\upsilon }^{i}_{1}:=\inf \{t>0\colon \theta ^{i}_{t-}\neq i, \ \theta ^{i}_{t}=i\}\) be the first return time of the (continuous-time) Markov process \(\theta =\theta ^{i}\) to its initial state \(i\). We consider further the consecutive return times defined recursively by
Recalling that \(V\) also depends on \(i\), we introduce the random variable \(M_{i1}:=e^{-V_{{\upsilon }^{i}_{1}}}\) and define the moment-generating function \({\Upsilon }_{i}(q):={\mathbf{E}}[M_{i1}^{q}]\), \(q\ge 0\).
Proposition 2.1
The function \({\Upsilon }_{i}\) is strictly convex and continuous, and there is a unique \(\gamma _{i}>0\) such that \({\Upsilon }_{i}(\gamma _{i})=1\).
Note that one can characterise \(\gamma _{i}\) also as the strictly positive root of the cumulant generating function \(H_{i}(q):=\ln {\mathbf{E}}[ e^{-qV_{{\upsilon }^{i}_{1}}}]\), which is strictly convex and continuous.
Postponing the proof of Proposition 2.1 to the next section, we formulate our main result.
Theorem 2.2
Fix the initial value \(i\) and suppose that \(\Pi (|x|^{\gamma _{i}}):=\int |x|^{\gamma _{i}} \Pi (dx) <\infty \). Then \(0<\liminf _{u\to \infty} u^{\gamma _{i}}\,\Psi _{i}(u)\le \limsup _{u\to \infty} u^{\gamma _{i}}\,\Psi _{i}(u)<\infty \).
Remark 2.3
In the case where \(\theta \) is a telegraph signal, i.e., a two-state Markov process, the values \(\gamma _{0}\) and \(\gamma _{1}\) coincide (see Ellanskaya and Kabanov [9]). In the general case considered here, \(\gamma _{i}\) may depend on the initial value \(i\). To alleviate the formulas, we fix the initial value \(i=0\) and omit the index \(i=0\) when this does not lead to ambiguity.
The proof of Theorem 2.2 is based on implicit renewal theory. Namely, we deduce it from Theorem 2.4 below which is the Kesten–Goldie theorem, see Goldie [12, Theorem 4.1], combined with a statement on strict positivity of \(C_{+}\) due to Guivarc’h and Le Page [14] (for a simpler proof of the latter, see Buraczewski and Damek [6], and an extended discussion in Kabanov and Pergamenshchikov [16]).
Theorem 2.4
Suppose that the pair of random variables \((M,Q)\) is such that \(M>0\), the law of \(\ln M\) is non-arithmetic, and for some \(\gamma >0\), \({\mathbf{E}}[M^{\gamma}]=1\), \({\mathbf{E}}[M^{\gamma}\ln ^{+} M]<\infty \) and \({\mathbf{E}}[|Q|^{\gamma}]<\infty \).
Let the random variable \(Z\) be independent of \((M,Q)\) and have the same law as \(Q+MZ\). Then \(\lim _{u\to \infty} u^{\gamma }{\mathbf{P}}[Z>u]=C_{+}\) and \(\lim _{u\to \infty} u^{\gamma }{\mathbf{P}}[Z<-u]=C_{-}\),
where \(C_{+}+C_{-}>0\). If the random variable \(Z \) is unbounded from above, then \(C_{+}>0\).
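Theorem 2.4 can be illustrated numerically. In the sketch below, the pair \((M,Q)\) is hypothetical and chosen so that \(\gamma \) is known in closed form: for \(M=e^{-\xi}\) with \(\xi \sim N(\mu ,s^{2})\), one has \({\mathbf{E}}[M^{q}]=e^{-q\mu +q^{2}s^{2}/2}\), which equals 1 at \(\gamma =2\mu /s^{2}\). The fixed point \(Z\) is sampled by truncating the perpetuity series \(Z=Q_{1}+M_{1}Q_{2}+M_{1}M_{2}Q_{3}+\cdots \).

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical specification: M = exp(-xi) with xi ~ N(mu, s^2), Q ~ Exp(1).
# Then E[M^q] = exp(-q*mu + q^2 s^2/2), so E[M^gamma] = 1 at gamma = 2*mu/s^2.
mu, s = 0.5, 1.0
gamma = 2.0 * mu / s**2

# Sample Z = Q_1 + M_1 Q_2 + M_1 M_2 Q_3 + ... (truncated series; it
# converges a.s. because E[ln M] = -mu < 0).
n, m = 20000, 200
Q = rng.exponential(1.0, size=(n, m))
xi = rng.normal(mu, s, size=(n, m))
log_prod = np.cumsum(xi, axis=1) - xi        # ln(M_1 ... M_{k-1}); zero for k = 1
zs = (np.exp(-log_prod) * Q).sum(axis=1)

# Theorem 2.4 predicts P[Z > u] ~ C_+ u^{-gamma} with gamma = 1 here.
for u in (5.0, 10.0, 20.0):
    print(f"P[Z > {u}] ~ {(zs > u).mean():.4f}")
```

The empirical tail probabilities roughly halve as \(u\) doubles, consistent with the predicted exponent \(\gamma =1\) for this toy specification.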
We apply Theorem 2.4 with \(Q=Q_{1}:=- e^{-V}\cdot P_{{\upsilon }_{1}}\), \(M=M_{1}:=e^{-V_{{\upsilon }_{1}}}\) and \(Z:=Y_{\infty}\). First, we check that its conditions are fulfilled under our model specification. We proceed as follows. The existence of \(\gamma \) such that \({\Upsilon }(\gamma ):= {\mathbf{E}}[M_{1}^{\gamma}]=1\) with \({\mathbf{E}}[ M_{1}^{\gamma +{\varepsilon }}]<\infty \) for some \({\varepsilon }>0\) is shown in Proposition 2.1; the crucial part of its proof is Lemma 3.1 on the continuity of \(\Upsilon \). Clearly, the law of the random variable \(\ln M_{1}\) is non-arithmetic. We verify that \(Q_{1}\in L^{\gamma}(\Omega )\) in Sect. 4, which turns out to be the most technical part. To prove the existence of the limit \(Y_{\infty}\) with the needed properties, we observe that
Using the abbreviations
we rewrite the above identity in a more transparent form as
Note that the random variables \({\upsilon }_{k}-{\upsilon }_{k-1}\), that is, the lengths of the intervals between the successive returns to the initial state, form an i.i.d. sequence. The random variables \((Q_{k},M_{k})\) all have the same law and are for each \(k\) independent of the \(\sigma \)-algebra \(\sigma \{(M_{1},Q_{1}),\dots , (M_{k-1},Q_{k-1})\}\). With these observations, the arguments are rather standard and given in Sect. 5. We conclude the proof of the theorem by establishing the bounds \(\bar{G}_{i}(u)\le \Psi _{i} (u)\le C \bar{G}_{i}(u)\), with \(\bar{G}_{i}(u)={\mathbf{P}}[Y^{i}_{\infty }>u]\) and a constant \(C>0\), in Lemma 6.1.
3 Properties of the moment-generating function: the proof of Proposition 2.1
Recall that \(\tau _{n}\) are the times of consecutive jumps of \(\theta \), that is, \(\tau _{0}:=0\),
We introduce the imbedded Markov chain \(\vartheta _{n}:=\theta _{\tau _{n}}\), \(n=0,1,\dots \), with transition probabilities \(P_{k\ell}=\lambda _{k\ell}/\lambda _{k}\), \(k\neq \ell \), and \(P_{kk}=0\). Then \(\varpi :=\inf \{j\ge 2\colon \vartheta _{j}=0\}\) is the first return time of the (discrete-time) Markov chain \(\vartheta \) to the starting point 0 and \({\upsilon }_{1}=\tau _{\varpi}\).
Put
The random variable \(M_{1}\) admits the representation
where
The conditional law of the random variables \(\zeta ^{0}_{1}\),…,\(\zeta ^{i_{k-1}}_{k}\) given \(\vartheta _{1}=i_{1}\), \(\vartheta _{2}=i_{2}\), …, \(\vartheta _{k}=i_{k}\) is the same as the unconditional law of independent random variables \(\tilde{\zeta}^{0}_{1}\),…,\(\tilde{\zeta}^{i_{k-1}}_{k}\), where for any \(m\), we have \({\mathcal{L}}(\tilde{\zeta}^{j}_{m})={\mathcal{L}}(\sigma _{j} W_{\tau }+(1/2) \sigma _{j}^{2}\beta _{j}\tau )\) with an exponential random variable \(\tau \) with parameter \(\lambda _{j}\) independent of the Wiener process \(W\). It follows that
where
if the denominator is positive, and \(f_{j}(q)=\infty \) otherwise.
Clearly, \(f_{j}(q)<\infty \) if \(q\in [0,r_{j})\), \(f_{j}(q)=\infty \) if \(q\in [r_{j},\infty )\), and \(f_{j}(r_{j}-)=\infty \), where \(r_{j}\) is the positive root of the equation
that is, \(r_{j}=r(\lambda _{j},\beta _{j},\sigma _{j})\) with
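For the reader's orientation, the following computation, which uses only the law of \(\tilde{\zeta}^{j}_{m}\) given above, identifies \(f_{j}\) and \(r_{j}\); it is a sketch, and the normalisation should be checked against the displayed formulas (3.1) and (3.2).

```latex
% Sketch: f_j(q) = E[e^{-q\tilde\zeta^j}] with
% \tilde\zeta^j = \sigma_j W_\tau + \tfrac12\sigma_j^2\beta_j\tau,
% \tau \sim \mathrm{Exp}(\lambda_j) independent of W.
% Conditioning on \tau = t and using E[e^{-q\sigma_j W_t}] = e^{q^2\sigma_j^2 t/2}:
f_j(q) = \int_0^\infty \lambda_j e^{-\lambda_j t}\,
         e^{\frac12\sigma_j^2 q^2 t - \frac12\sigma_j^2\beta_j q t}\,dt
       = \frac{\lambda_j}{\lambda_j - \tfrac12\sigma_j^2\, q(q-\beta_j)},
\qquad \text{provided } \lambda_j > \tfrac12\sigma_j^2\, q(q-\beta_j).
% The threshold r_j is the positive root of \tfrac12\sigma_j^2 r(r-\beta_j) = \lambda_j:
r(\lambda_j,\beta_j,\sigma_j)
  = \frac{\beta_j}{2} + \sqrt{\frac{\beta_j^2}{4} + \frac{2\lambda_j}{\sigma_j^2}}\,.
```

In particular, \(f_{j}(q)\le 1\) exactly when \(q(q-\beta _{j})\le 0\), which explains the role of the interval \([0,\beta _{j}]\) below.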
Note that the formula (3.1) can be written in a shorter form
If \(q\le \beta _{*}:=\min _{j} \beta _{j}\), then all \(f_{j}(q)\le 1\) and \({\Upsilon }(q)\) is dominated by the probability of return of \(\theta \) to the starting point 0, that is, \([0,\beta _{*}]\subseteq {\mathrm{dom}}\, {\Upsilon }\). Also, \(f_{j}(\beta _{*}/2)<1\) for all \(j\), and therefore \({\Upsilon }(\beta _{*}/2)<1\). Since we assume that any state of \(\theta \) can be reached from any other state, we have \({\mathrm{dom}}\, {\Upsilon }\subseteq [0,\underline{r})\), where
More precise information is given by the following lemma.
Lemma 3.1
We have \({\mathrm{dom}}\, {\Upsilon }=[0,\underline{r})\) and \(\lim _{q\uparrow \underline{r}}{\Upsilon }(q)=\infty \) (which means that the function is continuous).
Proof
To explain the idea, let us first consider the case \(K=3\). Regrouping terms in the formula (3.1) according to the four pairs “exit from 0 to \(\ell \), return back from \(m\)”, \(\ell ,m\in \{1,2\}\), we get the representation
Note that if \(P_{1,2}f_{1}(q)P_{2,1}f_{2}(q)<1\), then
otherwise the above sum is equal to infinity. Thus \({\Upsilon }\) is a product of two continuous functions with values in \((0,\infty ]\), hence has the same property, and the result follows.
For a model with an arbitrary \(K\), we get the continuity of \({\Upsilon }\) from a continuity result for more general functions. Let us consider a subset \(A\subseteq \{0,1,\dots ,K-1\}\). For \(i,k\notin A \), we denote by \(\Gamma ^{A}_{ik}\) the set formed by the vectors \((i,k)\) and \((i,i_{1},i_{2},\dots ,i_{m},k)\), \(i_{j}\in A\), \(j=1,\dots ,m\), \(m\ge 1\). The elements of \(\Gamma ^{A}_{ik}\) are interpreted as cuts of sample paths of the Markov chain entering \(A\) from the state \(i\), evolving in \(A\), and leaving \(A\) to the state \(k\).
Putting \(h_{i,j}(q)=P_{i,j}f_{i}(q)\) with the natural convention \(0\cdot \infty =0\), we associate with elements of \(\Gamma ^{A}_{ik}\) the continuous functions \(q\mapsto h_{i,k}(q)\) and
with values in \([0,\infty ]\), and consider the sum over \(\Gamma _{ik}^{A}\) of all these functions,
Since \(f_{j}<1\) on the interval \((0,\beta _{*})\), also \(U^{A}_{ik}<1\) on this interval. We show by induction that \(q\mapsto U^{A}_{ik}(q)\) is a continuous function with \(U^{A}_{ik}(0)\le 1\). Since \({\Upsilon }=U^{\Theta \setminus \{0\}}_{00}\), this gives the assertion of the lemma.
The idea of the proof consists in representing \(U^{A}_{ik}\) as a sum of a finite number of positive continuous functions using an appropriate partition of \(\Gamma ^{A}_{ik}\). Namely, for \(i_{1}\in A\) and \(n\ge 0\), we define the sets \(\Delta _{ik}^{i_{1},0}:=\{(i,k)\}\),
composed of the vectors with the first component \(i\), followed by \(n \ge 0\) blocks formed by vectors from \(\Gamma ^{A\setminus \{i_{1}\}}_{i_{1},i_{1}}\), and completed by vectors from \(\Gamma ^{A\setminus \{i_{1}\}}_{i_{1},k}\). Clearly, the countable family \(\Delta _{ik}^{i_{1},n}\), \(i_{1}\in A\), \(n\ge 0\), is a partition of \(\Gamma ^{A}_{ik}\) and
The result is obvious when \(A\) is a singleton, i.e., \(|A|=1\). Supposing that the assertion is already proved for the case \(|A|=K_{1}-1\), we consider the case \(|A|=K_{1}\). By the induction hypothesis, \(U^{A\setminus \{i_{1}\}}_{i_{1},m}\) is a continuous function for every \(i_{1}\in A\), \(m\notin A\setminus {\{i_{1}\}}\). The result follows from (3.4) and the formula for the geometric series. □
The strictly convex function \({\Upsilon }\) is less than or equal to 1 on \([0,\beta _{*}]\), finite on \([0,\underline{r})\) and tends to infinity at \(\underline{r}\). Hence there is a unique \(\gamma \in (\beta _{*},\underline{r})\) such that \({\Upsilon }(\gamma )=1\). Moreover, \({\Upsilon }(\gamma +{\varepsilon })<\infty \) for some \({\varepsilon }>0\).
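The existence argument above translates directly into a numerical scheme. The sketch below uses hypothetical parameters; the closed form \(f_{j}(q)=\lambda _{j}/(\lambda _{j}-\frac{1}{2}\sigma _{j}^{2}q(q-\beta _{j}))\) is the standard exponential-functional identity for \(\tau \sim \mathrm{Exp}(\lambda _{j})\), to be checked against (3.2). It evaluates \(\Upsilon \) by summing the products of \(f\)'s over excursions of the embedded chain, which reduces to a linear system, and then locates \(\gamma \) by bisection.

```python
import numpy as np

def f(q, lam, beta, sigma):
    """f_j(q) = lam / (lam - (1/2) sigma^2 q (q - beta)); +inf outside [0, r_j)."""
    d = lam - 0.5 * sigma**2 * q * (q - beta)
    return lam / d if d > 0 else np.inf

def upsilon(q, Lam, betas, sigmas, i=0):
    """Upsilon_i(q) = E[M_{i1}^q]: sum of products of f's over excursions of
    the embedded chain from i back to i, computed via a linear system."""
    Lam = np.asarray(Lam, float)
    K = Lam.shape[0]
    lam = -np.diag(Lam)
    P = Lam / lam[:, None]                   # embedded chain P_{kl} = lambda_kl/lambda_k
    np.fill_diagonal(P, 0.0)
    fs = np.array([f(q, lam[j], betas[j], sigmas[j]) for j in range(K)])
    if not np.all(np.isfinite(fs)):
        return np.inf
    other = [j for j in range(K) if j != i]
    F = np.diag(fs[other])
    # x_j = f_j (P_{ji} + sum_{k != i} P_{jk} x_k) for j != i
    A = np.eye(K - 1) - F @ P[np.ix_(other, other)]
    x = np.linalg.solve(A, F @ P[other, i])
    if np.any(x < 0):                        # series diverges: outside dom(Upsilon)
        return np.inf
    return fs[i] * (P[i, other] @ x)

def gamma_root(Lam, betas, sigmas, i=0, hi=50.0, tol=1e-10):
    """Bisect for the root gamma_i of Upsilon_i(gamma_i) = 1
    (Upsilon_i <= 1 on [0, beta_*] and increases to infinity afterwards)."""
    lo = min(betas)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if upsilon(mid, Lam, betas, sigmas, i) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Sanity check: with identical regimes, f_j(beta) = 1, so gamma = beta exactly.
Lam2 = [[-1.0, 1.0], [1.0, -1.0]]
print(gamma_root(Lam2, betas=[2.0, 2.0], sigmas=[0.2, 0.2]))  # ≈ 2.0
```

For \(K=2\) the scheme reduces to \(\Upsilon =f_{0}f_{1}\), recovering the telegraph case of [9]; for \(K\ge 3\) the linear system replaces the explicit geometric-series summation of the proof.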
4 Integrability of \(Q_{1}\)
We start the study of the integrability properties of \(Q_{1}\) with a general lemma involving parameters \(\beta ,\sigma >0\) and an exponential random variable \(\tau \) independent of \(W\) with parameter \(\lambda >0\).
Lemma 4.1
Let \(0< q< r(\lambda ,\beta ,\sigma )\), where \(r(\lambda ,\beta ,\sigma )\) is given by (3.2). Then
Proof
Put \(W^{(\sigma \beta /2)}_{s}:=W_{s}+(1/2)\sigma \beta s\). Take \(\rho ,\rho '>1\) such that \(1/\rho +1/\rho '=1\) and \(\rho q< r(\lambda ,\beta ,\sigma )\). Dominating the integrand by its supremum and using the Hölder inequality, we get that
Since an exponential random variable has moments of any order, the first factor on the right-hand side is finite. According to Borodin and Salminen [5, Eq. (1.2.1) in Chap. 2],
The lemma is proved. □
Note that the condition \({\Upsilon }(q)<\infty \) holds only for \(q<r_{j}\), \(j=0,\dots ,K-1\), and the above lemma implies the following useful result.
Corollary 4.2
If \({\Upsilon }(q)<\infty \), then \(C^{*}(q):=\max _{j} C(q,\lambda _{j},\beta _{j},\sigma _{j})<\infty \).
Lemma 4.3
Let \(q>0\) be such that \({\Upsilon }(q)<\infty \). Then
Proof
Let \(\tau \) be a random variable exponentially distributed with parameter \(\lambda >0\) and \(W\) a Wiener process independent of \(\tau \). Then
if the denominator on the right-hand side is strictly greater than zero, and infinity otherwise.
Using conditioning with respect to \({\mathcal{F}}_{k-1}=\sigma (\vartheta _{1},\dots ,\vartheta _{k-1})\) and the Markov property, we get from (3.3) with the abbreviation \(\bar{f}_{k}(q):=f_{0}(q)f_{\vartheta _{1}}(q)\cdots f_{\vartheta _{k-1}}(q)\), \(k\ge 1\), that
where \(p_{*}>0\) is the minimum of the values \(P_{j,0}\) different from zero. Note that
It follows that
Since
where \(\lambda _{*}:=\min _{i} \lambda _{i}\), we obtain from the above inequalities that
The first integrability property in (4.1) is proved.
To prove the second property in (4.1), we start with the case \(q\le 1\). Using the elementary inequality \((\sum |x_{i}|)^{q}\le \sum |x_{i}|^{q}\) and Corollary 4.2, we get that
Now let \(q>1\). Due to the continuity of \({\Upsilon }\), there exists \(q'>q\) such that \({\Upsilon }(q')<\infty \). Applying first the Jensen inequality and then the Young inequality for products with the conjugate exponents \(q'/q\) and \(q'/(q'-q)\), we get that
where \(q_{1}:=(q-1)q'/(q'-q)\). The expectation of the integral on the right-hand side is finite by the first inequality in (4.1), applied with \(q'\). It remains to recall that the first return time of the finite state Markov process \(\theta \) has moments of any order.
For the reader’s convenience, we give the proof of the above fact. Take an arbitrary \(m>1\) and denote by \(\varrho _{j}:=\tau _{j}-\tau _{j-1}\) the inter-jump times of the Markov process \(\theta \). Recall that the conditional distribution of the vector \((\varrho _{1},\dots ,\varrho _{k})\) given \(\vartheta _{1}=i_{1},\dots ,\vartheta _{k-1}=i_{k-1}\) is the same as the distribution of the vector \((\tilde{\varrho}_{1},\dots ,\tilde{\varrho}_{k})\) with independent components having, respectively, exponential distributions with the parameters \(\lambda _{0}, \lambda _{i_{1}}, \dots , \lambda _{i_{k-1}}\). Using the Hölder inequality (now for the sum) and this fact, we get that
where \(\Gamma \) is the Gamma function and \(\lambda _{*}:=\min _{j} \lambda _{j}\). It remains to make a reference to the fact that the first return time \(\varpi \) for the Markov chain \(\vartheta \) has moments of any order; see e.g. Feller [10, Chap. XV, exercises 18–20]. □
The following lemma provides the required integrability property of \(Q_{1}\).
Lemma 4.4
Suppose that \(\Pi (|x|^{\gamma}):=\int |x|^{\gamma}\Pi (dx)<\infty \). Then \({\mathbf{E}}[ |Q_{1}|^{\gamma} ]<\infty \).
Proof
1) For \(\gamma \le 1\), the inequality \((|x|+|y|)^{\gamma}\le |x|^{\gamma}+|y|^{\gamma}\) allows us to check separately the finiteness of the moments of the integral of \(e^{-V}\) with respect to Lebesgue measure (this is already done, see the second property in (4.1)) and of the integral with respect to the jump component of the process \(P\). The latter integral is just a sum. Since the jump measure \(\pi (dt,dx)\) has the compensator \(\tilde{\pi}(dt,dx)=\Pi (dx)dt\), we have
by the first property in (4.1).
2) For \(\gamma > 1\), we split the integrals using the elementary inequality
Because of the second property in (4.1), we need to consider only the integral with respect to the jump component of \(P\). Note that \(e^{-V}|x| *\tilde{\pi}_{{\upsilon }_{1}}<\infty \). Then
Due to the first property in (4.1),
Let \(I_{s}:=e^{-V}|x| *(\pi -\tilde{\pi})_{s}\). According to the Novikov inequalities with \(\alpha =1\), the moment of order \(\gamma >1\) of the random variable \(I^{*}_{t}:=\sup _{s\le t}|I_{s}|\) admits the bound
where \(C'_{\gamma ,1}:=C_{\gamma ,1}(\Pi (|x|))^{\gamma}<\infty \), \(C''_{\gamma ,1}:=C_{\gamma ,1}\Pi (|x|^{\gamma})<\infty \) due to our assumption. But as we proved, both integrals on the right-hand side are finite. □
5 Study of the process \(Y\)
Lemma 5.1
The process \(Y\) has the following properties:
\({\mathrm{(i)}}\) \(Y_{t}\) converges almost surely as \(t\to \infty \) to a finite random variable \(Y_{\infty}\).
\({\mathrm{(ii)}}\) \(Y_{\infty}=Q_{1}+M_{1}Y_{1,\infty}\) where \(Y_{1,\infty}\) is a random variable independent of \((Q_{1},M_{1})\) and having the same law as \(Y_{\infty}\).
\({\mathrm{(iii)}}\) \(Y_{\infty}\) is unbounded from above.
Proof
(i) Take \(p\in (0,\gamma \wedge 1 )\). Then \(r:={\mathbf{E}}[M_{1}^{p} ]<1\), and \({\mathbf{E}}[|Q_{1}|^{p} ]<\infty \) by Lemma 4.4. It follows that \({\mathbf{E}}[ |Y_{{\upsilon }_{n+1}}-Y_{{\upsilon }_{n}}|^{p} ]={\mathbf{E}}[ M_{1}^{p} \cdots M_{n}^{p} |Q_{n+1}|^{p} ]=r^{n} {\mathbf{E}}[ |Q_{1}|^{p} ]\) and therefore
Thus \(\sum _{n} |Y_{{\upsilon }_{n+1}}-Y_{{\upsilon }_{n}}|<\infty \) a.s., implying that \(Y_{{\upsilon }_{n}}\) converges a.s. to some finite random variable we denote by \(Y_{\infty}\).
Let \(Y_{t}^{*}:=\sup _{s\le t}|Y_{s}|\). Then
Put
Then
and therefore, for any \({\varepsilon }>0\),
By the Borel–Cantelli lemma, we obtain \(\Delta _{n}(\omega )\le {\varepsilon }\) for all \(n\ge n(\omega )\) for all \(\omega \) except a nullset. This implies that \(Y_{t}\) converges a.s. to the same limit as the sequence \(Y_{{\upsilon }_{n}}\).
(ii) Rewriting (2.4) in the form
and observing that the sequence of random variables in parentheses converges almost surely to a random variable with the same law as \(Y_{\infty}\) and independent of \((Q_{1},M_{1})\), we get the assertion.
(iii) In view of (ii), it is sufficient to check that the set \(\{Q_{1}\ge N,\ M_{1}\le 1/N\}\) is non-null for any \(N\ge 1\). Recall that
where \(dV_{s}=\sigma _{\theta _{s}}dW_{s}+(1/2)\sigma ^{2}_{\theta _{s}} \beta _{\theta _{s}}ds\). We consider several cases.
\(1)\) \(c<0\): Using conditioning with respect to \(\theta \), we may argue as if \(\theta \) were deterministic, i.e., assuming that \(V\) is a process with a deterministic switching of parameters and \({\upsilon }_{1}\) is just a number, say \(t>0\). On the set \(\{T_{1}>{\upsilon }_{1}\}\), we have \(Q_{1}=-c\int _{0}^{{\upsilon }_{1}} e^{-V_{s}}ds\). Since \(T_{1}\) is independent of \(W\) and of the set \(\{T_{1}>{\upsilon }_{1}\}\), we need to check only that the set
is non-null. If \(\theta \) has no jumps on \([0,t]\), then \(V_{s}= \sigma _{0}W_{s}+(1/2)\sigma ^{2}_{0}\beta _{0}s\) on this interval, and we get the Brownian bridge property using conditioning with respect to \(W_{t}=x\). Indeed, the conditional distribution of \((W_{s})_{s\le t}\) given \(W_{t}=x\) is the same as the (unconditional) distribution of the Brownian bridge \(B^{x}=(B^{x}_{s})_{s\le t}\) ending at time \(t\) at the value \(x\). The latter is a continuous Gaussian process. This implies that the conditional distribution of the integral involved in \(B_{N}(t)\) is unbounded from above. Integrating over a suitable set with respect to the distribution of \(W_{t}\) shows that \(B_{N}(t)\) is non-null.
In the case of several jumps of \(\theta \) at the instants \(t_{1},\dots , t_{k}\), we can show that the integral over the interval \([0,t_{1}]\) has unbounded conditional distribution given \((W_{t_{1}},W_{t_{2}}-W_{t_{1}},\dots ,W_{t}-W_{t_{k}})=(x_{1},x_{2}, \dots ,x_{k+1})\) and conclude by integrating with respect to the distribution of the increments of \(W\) over a set \([x,\infty )^{k+1}\) for sufficiently large \(x\).
\(2)\) \(c\ge 0\): Put \(\sigma ^{*}:= \max _{j} \sigma _{j} \), \(\kappa ^{*}:=\max _{j} (1/2)\sigma _{j}^{2}\beta _{j}\), \(\kappa _{*}:= \min _{j} (1/2)\sigma _{j}^{2}\beta _{j}\). Let \(\delta >0\) and \(r_{N}:=(2K\sigma ^{*} \delta + \ln N)/\kappa _{*} \). The set
is non-null. On this set, we have for all \(s\in [0,{\upsilon }_{1}]\) the bounds
implying that
and
Since \(P\) is not an increasing process, \(\Pi ((-\infty ,0))>0\). Hence the set
is non-null, and its intersection with \(A_{N}\) is also non-null. But this intersection is a subset of the set \(\{Q_{1}\ge N, M_{1}\le 1/N\}\). □
Remark 5.2
The statement of Lemma 5.1 (iii) can also be deduced from a much more general result in Behme et al. [3, Theorem 4.1 (ii)].
6 Bounds for the ruin probability
Lemma 6.1
For every \(u>0\),
where \(\bar{G}_{i}(u):={\mathbf{P}}[Y_{\infty}^{i}>u]\).
Proof
Let \(\tau \) be an arbitrary stopping time with respect to the filtration \(({\mathcal{F}}^{P,R,\theta}_{t})\). As the finite limit \(Y^{i}_{\infty}\) exists, the random variable
is well defined. On the set \(\{\tau <\infty \}\), we have
Let \(\xi \) be an \({\mathcal{F}}_{\tau}^{P,R,\theta}\)-measurable random variable. Note that on the set \(\{\tau <\infty ,\ \theta _{\tau}=j\}\), the conditional distribution of \(Y_{\tau ,\infty}^{i}\) given \({\mathcal{F}}_{\tau}^{P,R,\theta}\) is the same as the distribution of \(Y_{\infty}^{j}\). It follows that
Thus if \({\mathbf{P}}[\tau <\infty ]>0\), then
Noting that \(\Psi _{i}(u):={\mathbf{P}}[\tau ^{u,i}<\infty ]\ge {\mathbf{P}}[Y^{i}_{\infty}>u ]=\bar{G}_{i}(u)>0\), we deduce from here using (6.2) that
implying the equality in (6.1). Also, we have
implying the result. □
Remark 6.2
The representation of the ruin probability in the form (6.1) goes back to the seminal paper by Paulsen [19]. A more general result for a Markov-modulated risk process can be found in the very recent paper by Behme and Sideris [4, Theorem 4.2].
7 Ruin with probability one
Assuming that \(\beta ^{*}:=\max _{j}\beta _{j}\) is strictly negative, we give a sufficient condition under which ruin happens with probability one.
Theorem 7.1
Suppose that \(\beta ^{*}\!\!<0\), \(\Pi ((-\infty ,0))\!>\!0\) and there exists \(\delta \in (0,|\beta ^{*}|\wedge 1)\) for which \(\Pi (|x|^{\delta} )<\infty \). Then \(\Psi _{i}(u)=1\) for any \(u>0\) and \(i\).
Proof
Put \(\widetilde{X}_{n}=\widetilde{X}^{i}_{n}:=X^{i}_{\upsilon _{n}}\). Note that (2.3) implies that the sequence \((\widetilde{X}_{n})\) satisfies the difference equation
where \({A}_{n}:=M^{-1}_{n}:=e^{V_{{\upsilon }_{n}}-V_{{\upsilon }_{n-1}}}\) and
According to Kabanov and Pergamenshchikov [16, Corollary 6.2], \(\inf _{n} \widetilde{X}_{n}<0\) a.s. if the ratio \(B_{1}/A_{1}\) is unbounded from below and there is \(\delta \in (0,1) \) such that \({\mathbf{E}}[A^{\delta}_{1}]<1\) and \({\mathbf{E}}[|B_{1}|^{\delta}]<\infty \). By our assumption, the event that on a fixed finite interval, the process \(P\) has arbitrarily many downward jumps of size larger than some \({\varepsilon }>0\) and no jumps upward is of strictly positive probability. Due to the independence of \(P\) and \((W,\theta )\), this implies that \(-Q_{1}=B_{1}/A_{1}\) is unbounded from below.
Noting that
we get that
Finally, the property \({\mathbf{E}}[|B_{1}|^{\delta}]<\infty \) can be proved by the same arguments as in the proof of Lemma 4.4 with \(\gamma \) and \(V\) replaced by \(\delta \) and \(V_{{\upsilon }_{1}}-V\) and the reference to (4.1) replaced by the reference to (7.2) in Lemma 7.2 below. □
Lemma 7.2
Suppose that \(\beta ^{*}<0\). Then for any \(\delta \in (0,|\beta ^{*}|)\),
Proof
The arguments are very similar to those for Lemma 4.3 and we only sketch them. The only new feature is that we need to consider processes of the form \((V_{T}-V_{s})_{s\in [0,T]}\) rather than \((V_{s})_{s\in [0,T]}\). The crucial observation is that the process \((W_{T}-W_{s})_{s\in [0,T]}\) in the reversed time \(s':=T-s\) is a Wiener process.
First observe that
Given a trajectory of \(\theta \), the exponential and the integral in each summand are conditionally independent and their conditional expectations admit explicit expressions. For the integral, it is \(1/\lambda _{\vartheta _{k-1}}\tilde{f}_{\vartheta _{k-1}}(\delta )\), where \(\tilde{f}_{j}\) is given in (7.1). Note that for \(\delta \in (0,|\beta ^{*}|)\), the conditional expectation of the integral is dominated by \(1/\lambda _{*}\), implying that
Due to the choice of \(\delta \), we have \(\tilde{f}^{*}:=\max _{j} \tilde{f}_{j}(\delta )<1\) and therefore
The first property in (7.2) is proved.
Let \(\tau \) be an exponential random variable with parameter \(\lambda >0\). For any \(\delta \in (0,\widetilde{r})\), where
we have according to Borodin and Salminen [5, (1.1.1) in Chap. 2] that
We get (as in Corollary 4.2) that for all \(k\ge 1\),
with some constant \(\tilde{C}^{*}(\delta )<\infty \), and we complete the proof of the second property in (7.2) as in Lemma 4.3. □
8 Comments and ramifications
As was observed by Paulsen [19] for the model where \(R\) and \(P\) are independent Lévy processes, the asymptotics of the ruin probability are related to the tail behaviour of the law of the integral \(Y_{\infty}=-(1/S_{-})\cdot P_{\infty}\), where \(S={\mathcal{E}}(R)\). If \(Y_{\infty}\) satisfies the affine distributional equation \(Z \stackrel{d}{=}Q+MZ\), then the tail behaviour can be obtained via implicit renewal theory. In the Paulsen model, extensively studied in the papers Paulsen [20, 21, 22], one can take \(M=1/S_{1}\) and \(Q=-(1/S_{-})\cdot P_{1}\). The function \(\Upsilon (q) := {\mathbf{E}}[M^{q}]\) has an explicit form; it can be expressed in terms of the Lévy triplet of \(R\).
Ellanskaya and Kabanov [9] considered a model with stochastic volatility (also called a model with a hidden Markov process), supposing that \(S\) is a conditional geometric Brownian motion with parameters depending on a two-state Markov process \(\theta \) and \(P\) is a compound Poisson process with drift. It was shown in [9] that the \(Y^{i}_{\infty}\) satisfy the affine distributional equations \(Z^{i} \stackrel{d}{=}Q^{i}+M^{i} Z^{i}\), where \(M^{i}=e^{-V^{i}_{\tau _{2}}}\) and \(Q^{i}=-(1/S^{i}_{-})\cdot P_{\tau _{2}}\) have distributions that do not depend on the initial value \(i\) of the process \(\theta \). In [9], the function \(\Upsilon (q) := {\mathbf{E}}[ (M^{i})^{q}]\) has an explicit form in terms of the model characteristics and does not depend on \(i\).
The present paper extends the model of [9] to the case where the hidden finite-state Markov process has all states communicating. This extension is non-trivial. Now \(Y^{i}_{\infty}\) satisfies the affine distributional equation \(Z \stackrel{d}{=}Q^{i}+M^{i} Z\), where the distributions of the random variables \(M^{i}=e^{-V_{{\upsilon }^{i}_{1}}}\) and \(Q^{i}=-(1/S_{-})\cdot P_{{\upsilon }^{i}_{1}}\) may depend on \(i\). The difficulty arises because, in contrast to the two-state case, the moment-generating function \(\Upsilon _{i}(q):= {\mathbf{E}}[(M^{i})^{q}]\) does not admit an explicit expression in terms of the model specification, and neither its continuity properties nor the shape of its graph are clear a priori.
The contribution of the present paper consists in proving that if all parameters \(\beta _{k}\) are strictly positive, then each function \(\Upsilon _{i}\) is continuous and the equation \(\Upsilon _{i}(q)=1\) has a strictly positive root \(\gamma _{i}\). The existence of such a root is the crucial property: once the model has it, the other hypotheses of the Kesten–Goldie theorem can be verified for a wider class of models than considered here. Our arguments rely heavily on the assumption that all \(\beta _{k}>0\). This assumption is not necessary: the needed root may exist even when some of the \(\beta _{k}\) are negative. We show this in the following case study for the model where \(\theta \) is a telegraph process and \(\Upsilon \) has an explicit form.
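Although \(\Upsilon _{i}\) has no explicit expression in the general finite-state case, it can be estimated by simulation. The sketch below is only an illustration of the mechanism, not the method of proof: it uses hypothetical parameters for a two-state chain (both \(\beta _{k}>0\)), simulates \(V\) exactly over one regeneration cycle starting from state 0 (one sojourn in each state, with the Gaussian increment drawn conditionally on the sojourn length), and locates a root of the Monte Carlo estimate of \(\Upsilon _{0}(q)={\mathbf{E}}[e^{-qV_{\upsilon ^{0}_{1}}}]\).

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(0)

# Hypothetical two-state example: in state k the log-price has drift
# a_k - sigma_k^2/2 and volatility sigma_k; lam_k is the rate of
# leaving state k.  Here beta_0 = 1/3 and beta_1 = 3/2 are positive.
a = np.array([0.06, 0.05])
sigma = np.array([0.3, 0.2])
lam = np.array([1.0, 1.5])

n = 200_000
# One regeneration cycle started at state 0: an exponential sojourn
# in state 0 followed by an exponential sojourn in state 1.
t0 = rng.exponential(1.0 / lam[0], n)
t1 = rng.exponential(1.0 / lam[1], n)
v = ((a[0] - sigma[0] ** 2 / 2) * t0
     + sigma[0] * np.sqrt(t0) * rng.standard_normal(n)
     + (a[1] - sigma[1] ** 2 / 2) * t1
     + sigma[1] * np.sqrt(t1) * rng.standard_normal(n))

def upsilon(q):
    # Monte Carlo estimate of E[exp(-q * V)] over one cycle
    return np.mean(np.exp(-q * v))

# Upsilon(0) = 1 and Upsilon'(0) = -E[V] < 0, so we bracket the
# strictly positive root away from zero.
gamma = brentq(lambda q: upsilon(q) - 1.0, 0.05, 2.0)
```

For these parameters, the estimated root lies strictly between \(\beta _{0}\) and \(\beta _{1}\), in line with the two-regime result of [9].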
Let us consider the model with a two-state Markov process \(\theta \) as studied in [9]. Suppose first that \(\sigma _{j}\neq 0\) for \(j=0,1\). Then \(\varpi =2\) and the convex continuous function \(\Upsilon \) admits, on its domain
the explicit expression
Then
The equation \(\Upsilon (q)=1\) has a root \(\gamma >0\) if and only if \(\lambda ^{10}\sigma _{0}^{2}\beta _{0}+\sigma _{1}^{2}\lambda ^{01} \beta _{1}>0\), and in that case, this root coincides with the unique strictly positive root in \({\mathrm{dom}}\, \Upsilon \) of the equation \(\tilde{\Upsilon}(q)=1\). Of course, the above inequality always holds if at least one of the coefficients \(\beta _{k}\) is strictly positive.
Assuming \(\sigma _{j}\neq 0\) allows us to work with the parameters \(\beta _{j}:=2a_{j}/\sigma ^{2}_{j}-1\). A model with risky investments where some of the \(\sigma _{j}\) vanish is also of interest: one can imagine that risky investments are prohibited in some states of the economy. To explain the situation, we consider again the model with a two-state process \(\theta \), supposing now that \(\sigma _{0}\neq 0\) and \(\sigma _{1}=0\).
Now the convex continuous function \(\Upsilon \) has the domain
on which it is given by the formula
Then \(\Upsilon '(0)=-\sigma _{0}^{2}\beta _{0}/(2\lambda ^{01})-a_{1}/ \lambda ^{10}\), and the inequality \(\Upsilon '(0)<0\) is equivalent to the inequality
If \(a_{1}=0\), then (8.1) reduces to \(\beta _{0}>0\), and \(\gamma =\beta _{0}\) is the strictly positive root of the equation \(\Upsilon (q)=1\). In the general case, the equation \(\Upsilon (q)=1\) on the set \({\mathrm{dom}}\, \Upsilon \) has the form
If \(a_{1}\neq 0\), then this equation can be written as
Under the assumption (8.1), the larger of the roots
belongs to \({\mathrm{dom}}\,\Upsilon \).
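The degenerate case \(a_{1}=0\) admits a quick numerical check. In the hedged sketch below (parameters hypothetical), the state-1 sojourns contribute nothing to \(V\) since \(\sigma _{1}=0\) and \(a_{1}=0\), so over one cycle \(\Upsilon (q)=\lambda ^{01}/(\lambda ^{01}-\psi _{0}(q))\) with \(\psi _{0}(q)=q^{2}\sigma _{0}^{2}/2-q(a_{0}-\sigma _{0}^{2}/2)\); hence \(\Upsilon (q)=1\) reduces to \(\psi _{0}(q)=0\), whose strictly positive root is \(\gamma =\beta _{0}=2a_{0}/\sigma _{0}^{2}-1\).

```python
from scipy.optimize import brentq

# Hypothetical parameters with sigma_1 = 0 and a_1 = 0: state 1
# contributes nothing to V, so Upsilon reduces to the state-0 factor.
a0, sigma0, lam01 = 0.09, 0.3, 1.0
beta0 = 2 * a0 / sigma0**2 - 1          # = 1.0 for these parameters

def psi0(q):
    # Laplace exponent of the state-0 increment of V
    return q**2 * sigma0**2 / 2 - q * (a0 - sigma0**2 / 2)

def upsilon(q):
    # E[exp(tau * psi0(q))] for tau ~ Exp(lam01); valid while psi0(q) < lam01
    return lam01 / (lam01 - psi0(q))

# Upsilon(q) = 1 iff psi0(q) = 0; the strictly positive root is beta0.
gamma = brentq(lambda q: upsilon(q) - 1.0, 0.1, 3.0)
```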
References
Albrecher, H., Gerber, H., Yang, H.: A direct approach to the discounted penalty function. N. Am. Actuar. J. 14, 420–434 (2010)
Asmussen, S., Albrecher, H.: Ruin Probabilities. World Scientific, Singapore (2010)
Behme, A., Lindner, A., Reker, J., Rivero, V.: Continuity properties and the support of killed exponential functionals. Stoch. Process. Appl. 140, 115–146 (2021)
Behme, A., Sideris, A.: Markov-modulated generalized Ornstein–Uhlenbeck processes and an application in risk theory. Bernoulli 28, 1309–1339 (2022)
Borodin, A.N., Salminen, P.: Handbook of Brownian Motion – Facts and Formulae, 2nd edn. Springer Basel AG/Birkhäuser, Basel (2002)
Buraczewski, D., Damek, E.: A simple proof of heavy tail estimates for affine type Lipschitz recursions. Stoch. Process. Appl. 127, 657–668 (2017)
Di Masi, G.B., Kabanov, Yu.M., Runggaldier, W.J.: Mean-square hedging of options on a stock with Markov volatilities. Theory Probab. Appl. 39, 172–182 (1994)
Eberlein, E., Kabanov, Yu., Schmidt, T.: Ruin probabilities for a Sparre Andersen model with investments. Stoch. Process. Appl. 144, 72–84 (2022)
Ellanskaya, A., Kabanov, Yu.: On ruin probabilities with risky investments in a stock with stochastic volatility. Extremes 24, 687–697 (2021)
Feller, W.: An Introduction to Probability Theory and Its Applications, vol. 1, 3rd edn. Wiley, New York (1991)
Frolova, A., Kabanov, Yu., Pergamenshchikov, S.: In the insurance business risky investments are dangerous. Finance Stoch. 6, 227–235 (2002)
Goldie, C.M.: Implicit renewal theory and tails of solutions of random equations. Ann. Appl. Probab. 1, 126–166 (1991)
Grandell, J.: Aspects of Risk Theory. Springer, Berlin (1990)
Guivarc’h, Y., Le Page, E.: On the homogeneity at infinity of the stationary probability for affine random walk. In: Bhattacharya, S., et al. (eds.) Contemporary Mathematics. Recent Trends in Ergodic Theory and Dynamical Systems, pp. 119–130. Am. Math. Soc., Providence (2015)
Kabanov, Yu., Pergamenshchikov, S.: In the insurance business risky investments are dangerous: the case of negative risk sums. Finance Stoch. 20, 355–379 (2016)
Kabanov, Yu., Pergamenshchikov, S.: Ruin probabilities for a Lévy-driven generalized Ornstein–Uhlenbeck process. Finance Stoch. 24, 39–69 (2020)
Kabanov, Yu., Pukhlyakov, N.: Ruin probabilities with investments: smoothness, IDE and ODE, asymptotic behavior. J. Appl. Probab. 59, 556–570 (2022)
Novikov, A.A.: On discontinuous martingales. Theory Probab. Appl. 20, 11–26 (1975)
Paulsen, J.: Risk theory in a stochastic economic environment. Stoch. Process. Appl. 46, 327–361 (1993)
Paulsen, J.: Sharp conditions for certain ruin in a risk process with stochastic return on investments. Stoch. Process. Appl. 75, 135–148 (1998)
Paulsen, J.: On Cramér-like asymptotics for risk processes with stochastic return on investments. Ann. Appl. Probab. 12, 1247–1260 (2002)
Paulsen, J., Gjessing, H.K.: Ruin theory with stochastic return on investments. Adv. Appl. Probab. 29, 965–985 (1997)
Pergamenshchikov, S., Zeitouni, O.: Ruin probability in the presence of risky investments. Stoch. Process. Appl. 116, 267–278 (2006). Erratum to: “Ruin probability in the presence of risky investments”. Stoch. Process. Appl. 119, 305–306 (2009)
Acknowledgements
This work was supported by the Russian Science Foundation associated grants 20-68-47030 and 20-61-47043.
Kabanov, Y., Pergamenshchikov, S. On ruin probabilities with investments in a risky asset with a regime-switching price. Finance Stoch 26, 877–897 (2022). https://doi.org/10.1007/s00780-022-00483-w
Keywords
- Ruin probabilities
- Risky investments
- Stochastic volatility
- Hidden Markov model
- Regime switching
- Implicit renewal theory