1 Introduction

Exponential functionals of Lévy processes play a very important role in various domains of probability theory, such as self-similar Markov processes, random processes in random environment, fragmentation processes, branching processes, mathematical finance, to name but a few. In general, the distribution of the exponential functional of a Lévy process X=(X t ,t≥0) with lifetime ζ, defined as

$$I := \int_0^{\zeta} e^{- X_t}{\mathrm{d}}{t}, $$

can be rather complicated. Nonetheless, this distribution is known explicitly when X is a standard Poisson process, a Brownian motion with drift, a member of a particular class of spectrally negative Lamperti-stable processes (see for instance [20, 32, 38]), or a spectrally positive Lévy process satisfying the Cramér condition (see for instance [40]). In the class of Lévy processes with double-sided jumps, the distribution of the exponential functional is known in closed form only for Lévy processes with hyper-exponential jumps; see the recent paper by Cai and Kou [14]. An overview of this topic can be found in Bertoin and Yor [7].

For many applications, it is enough to have estimates of ℙ(I<t) as t→0+ and of ℙ(I>t) as t→+∞. However, it is quite difficult to obtain such estimates in the general case. The behaviour of the tail ℙ(I>t) has been studied the most: there are results in special cases that correspond to heavy tails, light tails and convolution-equivalent tails (see for instance [19, 20, 24, 36, 41, 42]). On the other hand, the behaviour of ℙ(I<t) has been studied only in two particular cases: when X has exponential moments and its Laplace exponent is regularly varying at infinity with index θ∈(1,2) (see [37]), and when X is a subordinator whose Laplace exponent is regularly varying at zero (see [11]).

At the same time, the problem of finding the distribution of the supremum of a stable process has also stimulated a lot of research. The explicit expressions for the Wiener-Hopf factors for a dense set of parameters were first obtained by Doney [22]. In the spectrally positive case a convergent series representation for the density of the supremum was first obtained by Bernyk, Dalang and Peskir [4], and a complete asymptotic expansion was derived by Patie [39]. The general case was treated recently in [25] and [27], where the authors have derived explicit formulas for the Wiener-Hopf factors, the Mellin transform of the supremum and also convergent and asymptotic series representations for the density of the supremum.

In this paper we pursue two goals. First, we study the exponential functionals of hypergeometric processes; in particular, we obtain the Mellin transform and both convergent and asymptotic series representations for the density of the exponential functional. This gives us the first explicit results on exponential functionals of Lévy processes which have double-sided jumps of infinite activity or infinite variation. The Mellin transform is identified explicitly with a new method which is of independent interest, since it can also be used to determine the Mellin transform of exponential functionals of some other (possibly killed) Lévy processes satisfying Cramér's condition. For instance, using this method one can derive the well-known results on the exponential functionals of Brownian motion with drift and the very interesting recent results by Cai and Kou [14] for processes with double-sided hyper-exponential jumps. At the same time, the techniques that we use to determine convergent and asymptotic series representations are based on arguments similar to those used in [25] and [27].

Our second goal is to present applications of these results. Using the Lamperti transformation and the fact that hypergeometric processes include the Lamperti-stable processes we will prove several interesting results on fluctuations of stable processes. In particular, we obtain a new proof for the series representations for the density of the supremum of stable processes found in [25] and [27], which is more straightforward and somewhat less technical. We also derive the first known result on the density of the entrance law of the excursion measure of a stable process reflected at its past infimum, which, coupled with the recent paper by Chaumont [17], provides a huge number of explicit identities for functionals of stable processes. We also determine the density of the entrance law of the stable process conditioned to stay positive. Finally, we obtain several new results related to multidimensional symmetric stable processes, such as the density of the last passage time from the sphere and the entrance law at zero of the radial process which, to the best of our knowledge, are only known in the Brownian case (see for instance Getoor [23]).

The paper is organized as follows: in Sect. 2 we introduce hypergeometric processes and establish the connections between this class and the Lamperti-stable processes. In Sect. 3 we study the Mellin transform of the exponential functional of a hypergeometric process and in Sect. 4 we derive the convergent and asymptotic series representations for the density of the exponential functional. Finally, in Sect. 5 we present some applications of these results to fluctuations of stable processes.

2 Hypergeometric and Lamperti-Stable Processes

Hypergeometric processes were first introduced in [33] and, more generally, in [29]. These processes were originally constructed using Vigon’s theory of philanthropy (see [45]) and they provide examples of Lévy processes with an explicit Wiener-Hopf factorization. The class of processes which we will study in this paper should be considered as a subclass of the hypergeometric processes studied in [33] and as a generalization of Lamperti-stable processes, which were introduced by Caballero and Chaumont [9].

We start by defining a function ψ(z) as

$$ \psi(z) = -\frac{{\varGamma}(1-\beta+\gamma-z)}{{\varGamma}(1-\beta-z)} \frac{{\varGamma}(\hat{\beta}+\hat{\gamma}+z)}{{\varGamma}(\hat{\beta}+z)}, $$
(1)

where \((\beta,\gamma,\hat{\beta},\hat{\gamma})\) belong to the admissible set of parameters

$$ {\mathcal{A}} = \bigl\{ \beta \le 1,\ \gamma \in (0,1),\ \hat{\beta}\ge 0,\ \hat{\gamma}\in (0,1)\bigr\}. $$
(2)

Our first goal is to show that ψ(z) is the Laplace exponent of a (possibly killed) Lévy process X, that is \(\psi(z)=\ln {\mathbb{E}}[\exp(z X_{1}) ]\). We will call this process X a hypergeometric process.
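
For readers who wish to experiment numerically with this Laplace exponent, here is a minimal sketch (in Python, using the mpmath library); the parameter values are arbitrary illustrative choices from the admissible set (2), and the helper name psi is ours rather than anything taken from the references.

    # Minimal numerical sketch of the Laplace exponent (1); the parameters below
    # are illustrative values from the admissible set (2).
    import mpmath as mp

    def psi(z, beta, gam, beta_h, gam_h):
        """Laplace exponent (1) of the hypergeometric process."""
        return (-mp.gamma(1 - beta + gam - z) / mp.gamma(1 - beta - z)
                * mp.gamma(beta_h + gam_h + z) / mp.gamma(beta_h + z))

    print(psi(mp.mpc(0.1, 0.2), 0.5, 0.6, 0.7, 0.4))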

From now on we will use the following notation

$$ \eta=1-\beta+\gamma+\hat{\beta}+\hat{\gamma}. $$
(3)

Recall that the hypergeometric function is defined for |z|<1, z∈ℂ, by the convergent series

$${}_2F_1(a,b;c;z) = \sum_{n\ge 0} \frac{(a)_n(b)_n}{(c)_n} \frac{ z^n}{n!}, $$

where \((a)_{n}=\Gamma(a+n)/\Gamma(a)\) is the Pochhammer symbol (see Sect. 9.1 in [26]).
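
As a quick sanity check of this definition (not needed anywhere in the sequel), one can compare partial sums of the series with a library implementation; the sketch below is in Python with mpmath, and the argument values are arbitrary.

    # Compare a partial sum of the Gauss series with mpmath's built-in 2F1;
    # the series requires |z| < 1.
    import mpmath as mp

    def hyp2f1_series(a, b, c, z, N=200):
        # (a)_n = Gamma(a+n)/Gamma(a) is the rising factorial mp.rf(a, n)
        return mp.fsum(mp.rf(a, n) * mp.rf(b, n) / mp.rf(c, n) * z**n / mp.factorial(n)
                       for n in range(N))

    a, b, c, z = 0.3, 1.2, 2.5, 0.4
    print(hyp2f1_series(a, b, c, z) - mp.hyp2f1(a, b, c, z))   # should be ~ 0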

Proposition 1

  1. (i)

    The function ψ(z) defined by (1) is the Laplace exponent of a Lévy process X. The density of the Lévy measure of X is given by

    (4)
  2. (ii)

    When β<1 and \(\hat{\beta}>0\), the process X is killed at the rate

    $$q = -\psi(0) = \frac{{\varGamma}(1-\beta+\gamma)}{{\varGamma}(1-\beta)} \frac{{\varGamma}(\hat{\beta}+\hat{\gamma})}{{\varGamma}(\hat{\beta})}. $$

    When β=1 and \(\hat{\beta}> 0\) {β<1 and \(\hat{\beta}=0\)}, X drifts to +∞ {−∞} and

    $${\mathbb{E}}[X_1]=\frac{{\varGamma}(\gamma){\varGamma}(\hat{\beta}+\hat{\gamma})}{{\varGamma}(\hat{\beta})} \quad \biggl\{ {\mathbb{E}}[X_1]=-\frac{{\varGamma}(\hat{\gamma}){\varGamma}(1-\beta+\gamma)}{{\varGamma}(1-\beta)}\biggr\}. $$

    When β=1 and \(\hat{\beta}=0\), we have \({\mathbb{E}}[X_{1}]=0\) and the process X oscillates.

  3. (iii)

    The process X has no Gaussian component. When \(\gamma+\hat{\gamma}<1\) {\(1\le \gamma+\hat{\gamma}<2\)}, the process has paths of bounded variation and no linear drift {paths of unbounded variation}.

  4. (iv)

    We have an explicit Wiener-Hopf factorization \(-\psi(z)=\kappa(q,-z)\hat{\kappa}(q,z)\) where the Wiener-Hopf factors are given by

    $$ \kappa(q,z) = \frac{{\varGamma}(1-\beta+\gamma+z)}{{\varGamma}(1-\beta+z)},\qquad \hat{\kappa}(q,z)=\frac{{\varGamma}(\hat{\beta}+\hat{\gamma}+z)}{{\varGamma}(\hat{\beta}+z)}. $$
    (5)
  5. (v)

    The process \(\hat{X}=-X\) is also a hypergeometric process with parameters \((1-\hat{\beta},\hat{\gamma},1-\beta,\gamma)\).

Proof

First let us prove (i). Let \(X^{(1)}\) be a general hypergeometric Lévy process (see Sect. 3.2 in [29]) with parameters

$$\sigma=d=k_1=\delta_1=\delta_2=0,\qquad \beta=1,\qquad c_1=-\frac{1}{{\varGamma}(-\gamma)},\qquad c_2=-\frac{1}{{\varGamma}(-\hat{\gamma})}, $$
$$\alpha_1=\beta,\qquad \alpha_2=1-\hat{\beta},\qquad \gamma_1=\gamma,\qquad \gamma_2=\hat{\gamma},\qquad k_2=\frac{{\varGamma}(\hat{\beta}+\hat{\gamma})}{{\varGamma}(\hat{\beta})}. $$

This process is constructed using Vigon’s theory of philanthropy (see [45]) from two subordinators \(H^{(1)}\) and \(\hat{H}^{(1)}\) which have Laplace exponents

$$\kappa^{(1)}(q,z) = \kappa(q,z)-\kappa(q,0),\qquad \hat{\kappa}^{(1)}(q,z) = \hat{\kappa}(q,z) $$

where κ(q,z) and \(\hat{\kappa}(q,z)\) are given by (5). We see that the Laplace exponent \(\psi^{(1)}(z)\) of the process \(X^{(1)}\) satisfies

$$\psi^{(1)}(z)=\psi(z)+k \hat{\kappa}(q,z) $$

where k=κ(q,0). Therefore the process \(X^{(1)}_{t}\) has the same distribution as \(X_{t}-\hat{H}_{kt}\); in particular, the distribution of the positive jumps of \(X^{(1)}\) coincides with the distribution of the positive jumps of X. From [29], we find that the Lévy measure of \(X^{(1)}\) restricted to x>0 coincides with (4). The expression of the Lévy measure for x<0 follows easily by symmetry considerations. This proves that ψ(z) defined by (1) is the Laplace exponent of a (possibly killed) Lévy process with the density of its Lévy measure given by (4).

The rest of the proof is rather straightforward. Property (v) follows easily from the definition of the Laplace exponent (1). The Wiener-Hopf factorization (iv) follows easily by construction: we know that both κ(q,z) and \(\hat{\kappa}(q,z)\) defined by (5) are Laplace exponents of (possibly killed) subordinators, and the result follows from the identity \(-\psi(z)=\kappa(q,-z)\hat{\kappa}(q,z)\) and the uniqueness of the Wiener-Hopf factorization (see Corollary 6.19 in [31]).

Let us prove (ii). The fact that X drifts to +∞ when β=1 and \(\hat{\beta}>0\) follows from the Wiener-Hopf factorization (iv): in this case κ(q,0)=0, therefore the ascending ladder height process has infinite lifetime, while the descending ladder height process is killed at rate \(\hat{\kappa}(q,0)>0\), so X drifts to +∞. The expression for \({\mathbb{E}}[X_{1}]\) follows from (1) using the fact that \({\mathbb{E}}[X_{1}]=\psi'(0)\). The other statements in (ii) can be verified in a similar way.

Let us prove (iii). Formula (1) and the asymptotic expansion for the Gamma function (see formula 8.328.1 in [26]) imply that

$$ \psi({\mathrm{i}}z)=O\bigl(|z|^{\gamma+\hat{\gamma}}\bigr) = o\bigl(|z|^2\bigr),\quad z\to \infty,\ z\in {\mathbb{R}}. $$
(6)

Applying Proposition 2 in [5], we conclude that X has no Gaussian component, and that when \(\gamma+\hat{\gamma}<1\) {\(1<\gamma+\hat{\gamma}<2\)} the process has paths of bounded variation and no linear drift {paths of unbounded variation}. In the remaining case \(\gamma+\hat{\gamma}=1\) the density of the Lévy measure has a singularity of the form \(Cx^{-2}+o(x^{-2})\) as x→0+ (see formula 15.3.12 in [1]), which implies that the process has paths of unbounded variation. □
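
The factorization in part (iv) and the killing rate in part (ii) are easy to confirm numerically. The following sketch (Python/mpmath) simply evaluates both sides of \(-\psi(z)=\kappa(q,-z)\hat{\kappa}(q,z)\) at a few points; the parameter values are illustrative and the helper names are ours.

    # Numerical check of -psi(z) = kappa(q,-z) * kappa_hat(q,z) from (5), and of the
    # killing rate q = -psi(0) from Proposition 1(ii); illustrative parameters with
    # beta < 1 and beta_hat > 0.
    import mpmath as mp

    beta, gam, beta_h, gam_h = mp.mpf('0.5'), mp.mpf('0.6'), mp.mpf('0.7'), mp.mpf('0.4')

    def psi(z):        # formula (1)
        return (-mp.gamma(1 - beta + gam - z) / mp.gamma(1 - beta - z)
                * mp.gamma(beta_h + gam_h + z) / mp.gamma(beta_h + z))

    def kappa(z):      # ascending Wiener-Hopf factor in (5)
        return mp.gamma(1 - beta + gam + z) / mp.gamma(1 - beta + z)

    def kappa_hat(z):  # descending Wiener-Hopf factor in (5)
        return mp.gamma(beta_h + gam_h + z) / mp.gamma(beta_h + z)

    for z in [mp.mpc(0.2, 0.5), mp.mpc(-0.1, 1.3)]:
        print(-psi(z) - kappa(-z) * kappa_hat(z))   # ~ 0

    print(-psi(0))                                  # killing rate q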

Note that hypergeometric processes belong to the larger family of meromorphic processes, which were introduced recently in [30]. There are several equivalent definitions of this class of processes (see Theorem 1 in [30]). In particular, a Lévy process is meromorphic if the density of the Lévy measure is given by an infinite series of exponential functions with positive coefficients. This definition and the formula (4) confirm the fact that hypergeometric processes are meromorphic.

The three Lamperti-stable processes \(\xi^{*}\), \(\xi^{\uparrow}\) and \(\xi^{\downarrow}\) were introduced by Caballero and Chaumont [9] by applying the Lamperti transformation (see [34]) to the positive self-similar Markov processes constructed from a stable process. In particular, the process \(\xi^{*}\) is obtained from a stable process started at x>0 and killed upon exit from the positive half-line, while the process \(\xi^{\uparrow}\) {\(\xi^{\downarrow}\)} is obtained from a stable process conditioned to stay positive {conditioned to hit zero continuously}. We refer to [9, 12, 20] for all the details on these processes.

Our next goal is to show that the three Lamperti-stable processes are in fact hypergeometric processes, that is their Laplace exponent is given by (1). First we present several definitions and notations. We assume that Y is a strictly stable Lévy process, which is started at zero and is described by the stability parameter α∈(0,1)∪(1,2) and the skewness parameter β∈[−1,1]. The characteristic exponent of Y is given by

$$ {\varPsi}_Y(z) = -\ln {\mathbb{E}}\bigl[ \exp({\mathrm{i}}z Y_1)\bigr] = c|z|^{\alpha} \biggl(1-{\mathrm{i}}\beta \tan\biggl(\frac{\pi \alpha}{2}\biggr) {\mathrm{sign}}(z)\biggr),\quad z\in {\mathbb{R}}, $$
(7)

and the Lévy measure of Y is

$$ \pi_Y(x) = c_+ x^{-1-\alpha}{\mathbf{1}}_{\{x>0\}} + c_- |x|^{-1-\alpha}{\mathbf{1}}_{\{x<0\}}. $$
(8)

The parameters c and β are given in terms of \(c_{+}\), \(c_{-}\) as follows

$$ c=c_++c_-,\qquad \beta = \frac{c_+-c_-}{c_++c_-}. $$
(9)

These classic results can be found in Chap. 8 in [5] or Theorem 14.15 in [44].

Stable processes satisfy the scaling property, which states that the processes \(\{Y_{at}: t\ge 0\}\) and \(\{a^{\frac{1}{\alpha}}Y_{t}: t\ge 0\}\) have the same distribution. We see that c in (7) is just a scaling parameter, thus without loss of generality we set

$$ c = {\varGamma}(1+\alpha) \frac{2}{\pi} \sin\biggl( \frac{\pi \alpha}{2}\biggr) \biggl(1+\beta^2 \tan\biggl(\frac{\pi \alpha}{2}\biggr)^2\biggr)^{-\frac{1}{2}}. $$
(10)

This is, of course, a rather non-obvious choice of the normalization constant, but, as we will see later, it is the appropriate one for our purpose of connecting Lamperti-stable processes with hypergeometric processes.

Usually it is more convenient to parameterize stable processes by the parameters (α,ρ) instead of (α,β), where the positivity parameter \(\rho={\mathbb{P}}(Y_1>0)\) can be expressed in terms of (α,β) as follows

$$ \rho = \frac{1}{2} + \frac{1}{\pi\alpha} \tan^{-1} \biggl(\beta \tan\biggl(\frac{\pi \alpha}{2}\biggr)\biggr). $$
(11)

One can check that with our normalization (10) the parameters \(c_+\), \(c_-\) must be given by

$$ c_+ = {\varGamma}(1+\alpha) \frac{\sin(\pi \alpha\rho)}{\pi},\qquad c_-={\varGamma}(1+\alpha)\frac{\sin(\pi \alpha(1-\rho))}{\pi}. $$
(12)

Conversely, if \(c_+\) and \(c_-\) are given as above and we define c and β as in (9), then we recover the identities (10) and (11). Thus (12) is just one possible way to parameterize the Lévy measure (8) of a general stable process, and it is consistent with (10) and (11).
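
The consistency between (9)-(12) claimed here is easy to verify numerically; the sketch below (Python/mpmath) starts from (12) for an arbitrary admissible pair (α,ρ) and recovers (10) and (11).

    # Starting from c_+ and c_- as in (12), form c and beta via (9) and check that
    # they satisfy the normalization (10) and the expression (11) for rho.
    import mpmath as mp

    alpha, rho = mp.mpf('1.3'), mp.mpf('0.6')   # illustrative admissible values

    c_plus  = mp.gamma(1 + alpha) * mp.sin(mp.pi * alpha * rho) / mp.pi
    c_minus = mp.gamma(1 + alpha) * mp.sin(mp.pi * alpha * (1 - rho)) / mp.pi

    c    = c_plus + c_minus                              # (9)
    beta = (c_plus - c_minus) / (c_plus + c_minus)       # (9)

    c_check = (mp.gamma(1 + alpha) * 2 / mp.pi * mp.sin(mp.pi * alpha / 2)
               / mp.sqrt(1 + (beta * mp.tan(mp.pi * alpha / 2))**2))                          # (10)
    rho_check = mp.mpf('0.5') + mp.atan(beta * mp.tan(mp.pi * alpha / 2)) / (mp.pi * alpha)   # (11)

    print(c - c_check, rho - rho_check)                  # both ~ 0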

According to Caballero and Chaumont [9], the Laplace exponent \(\psi_{\xi^{*}}(z)=\ln {\mathbb{E}}[\exp(z \xi^{*}_{1})]\) of the Lamperti-stable process \(\xi^{*}\), which is associated to a stable process Y started at x>0 and killed upon the first exit from the positive half-line, is given by

(13)

Note that when α<1, the Laplace exponent (13) can be rewritten as

$$\psi_{\xi^*}(z) = \int_{{\mathbb{R}}{\setminus}\{0\}} \bigl(e^{zx}-1\bigr) e^x\pi_Y\bigl(e^x-1\bigr){\mathrm{d}}x -c_- \alpha^{-1}, $$

so that in this case \(\xi^{*}\) is a process of bounded variation with no linear drift.

Theorem 1

Lamperti-stable processes \(\xi^{*}\), \(\xi^{\uparrow}\), \(\xi^{\downarrow}\) can be identified as hypergeometric processes with the following sets of parameters:

$$ \begin{array}{c|cccc} & \beta & \gamma & \hat{\beta} & \hat{\gamma} \\ \hline \xi^{*} & 1-\alpha(1-\rho) & \alpha\rho & 1-\alpha(1-\rho) & \alpha(1-\rho) \\ \xi^{\uparrow} & 1 & \alpha\rho & 1 & \alpha(1-\rho) \\ \xi^{\downarrow} & 0 & \alpha\rho & 0 & \alpha(1-\rho) \end{array} $$

Proof

For the proof of the result for \(\xi^{\uparrow}\) see Proposition 2 in [33]. The result for \(\xi^{\downarrow}\) follows from Proposition 1 in [20]. Thus we only need to prove the result for \(\xi^{*}\).

Let us set

$$ (\beta,\gamma,\hat{\beta},\hat{\gamma}) = \bigl(1-\alpha (1-\rho), \alpha \rho, 1-\alpha (1-\rho), \alpha(1-\rho)\bigr) $$
(14)

and compute the Lévy measure of the hypergeometric process X defined by these parameters. Note that due to (3) we have η=1+α. We find that formulas (4), (8) and (12) imply that for x>0 we have

In order to derive the above identity we have also used the reflection formula for the gamma function

$$ {\varGamma}(s){\varGamma}(1-s) = \frac{\pi}{\sin(\pi s)}, $$
(15)

and the fact that \({}_2F_1(a,b;a;z)=(1-z)^{-b}\) (see formulas 8.334.3 and 9.131.1 in [26]). Similarly, we find that \(\pi(x)=e^{x}\pi_Y(e^{x}-1)\) for x<0. We see that, given our normalization (10), the Lévy measure of the hypergeometric process is the same as the Lévy measure of the Lamperti-stable process. We know that in the case α<1 both of these processes have paths of finite variation and no linear drift, which proves that \(X\stackrel{d}{=}\xi^{*}\).

In the case α>1 the processes X and \(\xi^{*}\) have infinite variation, no Gaussian component and identical Lévy measures. Thus their Laplace exponents may differ only by a linear function. In order to establish that the Laplace exponents are equal it is enough to show that \(\psi_{X}'(0)=\psi'_{\xi^{*}}(0)\). Using (1) and (15), we find that the Laplace exponent of the hypergeometric process X defined by parameters (14) is given by

$$\psi_X(z) = \frac{1}{\pi}{\varGamma}(\alpha-z){\varGamma}(1+z) \sin\bigl(\pi\bigl(z-\alpha(1-\rho)\bigr)\bigr) $$

Therefore we have

$$ \psi_X'(0) = {\varGamma}(\alpha) \frac{\sin(\pi \alpha(1-\rho))}{\pi} \bigl({\varPsi}(\alpha)-{\varPsi}(1)\bigr)+{\varGamma}(\alpha) \cos\bigl(\pi\alpha (1-\rho)\bigr), $$
(16)

where Ψ(z)=Γ′(z)/Γ(z) is the digamma function (see Sect. 8.36 in [26]). On the other hand, from (13) we find that

(17)

where \(c_+\) and \(c_-\) are defined in (12). Let us show that the expressions on the right-hand sides of (16) and (17) are identical.

First we will deal with the integrals multiplying \(c_+\) in (17). We rearrange the terms as follows

(18)

Performing the change of variables u=exp(y) it is easy to see that the second integral in the right-hand side of (18) is equal to 1/(α−1). In order to compute the first integral we use integration by parts

(19)

where in the last step we have used the reflection formula for the gamma function (15) and the following integral formula

$$ \int_0^{\infty} \frac{e^{a u}}{(e^{bu}-1)^{c}} {\mathrm{d}}{u} = \frac{1}{b} \frac{{\varGamma}(c-\frac{a}{b}){\varGamma}(1-c)}{{\varGamma}(1-\frac{a}{b})},\qquad \frac{a}{b}<c<1, $$
(20)

which can be obtained by a change of variables \(e^{u}=x\) from the integral representation of the beta function, see formula 3.191.3 in [26]. The last integral in (17) can also be computed using integration by parts

(21)

where we have again used (20). Combining (17), (18), (19) and (21) we see that

$$\psi'_{\xi^*}(0)=\frac{c_+-c_-}{1-\alpha}+c_ + \biggl[ \frac{1}{\alpha-1}+\frac{\pi}{\alpha\sin(\pi\alpha)}\biggr] + c_-\biggl[-\frac{1}{\alpha} - \frac{1}{\alpha}\bigl({\varPsi}(1)-{\varPsi}(2-\alpha)\bigr)\biggr]. $$

Using (12), the reflection formula for the digamma function Ψ(2−α)=Ψ(α)+π cot(πα)+1/(1−α) (see formulas 8.365.1 and 8.365.8 in [26]), the identity Γ(z+1)=zΓ(z) and the addition formula for the sine function, it is not hard to reduce the above expression to (16). We leave the remaining details to the reader. □
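
The closed form for \(\psi_X\) displayed in the course of the proof can be checked directly against (1); the sketch below (Python/mpmath, with arbitrary α>1 and an admissible ρ) compares the two expressions at a couple of points.

    # Check that the Laplace exponent (1) with parameters (14) coincides with the
    # closed form (1/pi) * Gamma(alpha - z) * Gamma(1 + z) * sin(pi*(z - alpha*(1-rho)))
    # obtained via the reflection formula (15).
    import mpmath as mp

    alpha, rho = mp.mpf('1.4'), mp.mpf('0.55')
    beta, gam = 1 - alpha * (1 - rho), alpha * rho
    beta_h, gam_h = 1 - alpha * (1 - rho), alpha * (1 - rho)

    def psi(z):          # formula (1) with parameters (14)
        return (-mp.gamma(1 - beta + gam - z) / mp.gamma(1 - beta - z)
                * mp.gamma(beta_h + gam_h + z) / mp.gamma(beta_h + z))

    def psi_closed(z):   # closed form from the proof of Theorem 1
        return mp.gamma(alpha - z) * mp.gamma(1 + z) * mp.sin(mp.pi * (z - alpha * (1 - rho))) / mp.pi

    for z in [mp.mpc(0.2, 0.3), mp.mpc(-0.4, 1.1)]:
        print(psi(z) - psi_closed(z))   # ~ 0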

3 Mellin Transform of the Exponential Functional

Let X be a hypergeometric Lévy process with parameters \((\beta,\gamma,\hat{\beta},\hat{\gamma}) \in {\mathcal{A}}\) and \(\hat{\beta}>0\). We assume that α>0 and define the exponential functional as

$$ I(\alpha,X) = \int_{0}^{\zeta} e^{-\alpha X_t} {\mathrm{d}}{t}, $$
(22)

where ζ is the lifetime of the process X. Note that the above integral converges with probability one: according to Proposition 1, either ζ is finite (if β<1) or ζ=+∞ and the process X drifts to +∞ (if β=1). Everywhere in this paper we will denote δ=1/α.

Our main tool in studying the exponential functional is the Mellin transform, which is defined as

$$ {\mathcal{M}}(s)={\mathcal{M}}(s;\alpha,\beta,\gamma,\hat{\beta},\hat{\gamma}) = {\mathbb{E}}\bigl[I(\alpha,X)^{s-1}\bigr]. $$
(23)

From the definition of the Laplace exponent (1) we find that X satisfies Cramér’s condition, that is to say \({\mathbb{E}}[\exp(-\hat{\beta}X_{1})]=1\), therefore applying Lemma 2 from [43] we conclude that \({\mathcal{M}}(s)\) exists for \(s \in (0,1+\hat{\beta}\delta)\).
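
In terms of (1), Cramér's condition simply reads \(\psi(-\hat{\beta})=0\), because the factor \({\varGamma}(\hat{\beta}+z)\) in the denominator has a pole at \(z=-\hat{\beta}\). A small numerical illustration of this (Python/mpmath, illustrative parameters):

    # psi(z) -> 0 as z -> -beta_hat, since Gamma(beta_hat + z) blows up there;
    # this is the Cramer condition used throughout this section.
    import mpmath as mp

    beta, gam, beta_h, gam_h = 0.5, 0.6, 0.7, 0.4   # illustrative parameters

    def psi(z):   # formula (1)
        return (-mp.gamma(1 - beta + gam - z) / mp.gamma(1 - beta - z)
                * mp.gamma(beta_h + gam_h + z) / mp.gamma(beta_h + z))

    for eps in [mp.mpf('1e-2'), mp.mpf('1e-4'), mp.mpf('1e-6')]:
        print(psi(-beta_h + eps))   # tends to 0 (linearly in eps)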

In order to describe our main result in this section, we need to define the double gamma function, G(z;τ). This function was introduced by Alexeiewsky in 1889 and was extensively studied by Barnes [2, 3]. The double gamma function is defined by an infinite product in Weierstrass’s form

(24)

Here the prime in the second product means that the term corresponding to m=n=0 is omitted. Note that by definition G(z;τ) is an entire function in z and, if τ∉ℚ, it has simple zeros on the lattice \(m\tau+n\), m≤0, n≤0. Barnes has shown that it is possible to define the constants a=a(τ) and b=b(τ) in such a way that G(1;τ)=1 and G(z;τ) satisfies the following three functional identities (see [2] and [27])

(25)
(26)
(27)

The function G(z;τ) can also be expressed as an infinite product of gamma functions, see [2]. An integral representation for ln(G(z;τ)) was obtained in [35] and several important asymptotic expansions were established in [8].

Next we introduce a function which will play a central role in our study of exponential functionals. Recall that we have denoted δ=1/α.

Definition 1

For s∈ℂ we define

(28)

where the constant C is such that M(1)=1.

Our main result in this section is the following Theorem, which provides an explicit expression for the Mellin transform of the exponential functional.

Theorem 2

Assume that α>0, \((\beta,\gamma,\hat{\beta},\hat{\gamma}) \in {\mathcal{A}}\) and \(\hat{\beta}>0\). Then \({\mathcal{M}}(s)\equiv {\varGamma}(s)M(s)\) for all s∈ℂ.

Before we are able to prove Theorem 2, we need to establish several auxiliary results.

Lemma 1

  1. (i)

    Assume that τ>0. When s→∞ in the domain |arg(s)|<π−ϵ<π, we have

    (29)
  2. (ii)

    When s→∞ in the domain 0<ϵ<arg(s)<π−ϵ, we have

    $$ \ln\bigl(M(s)\bigr) = -(\gamma+\hat{\gamma}) s \ln(s) + s \bigl(\bigl(1+\ln(\delta)\bigr) (\gamma+\hat{\gamma})+\pi {\mathrm{i}}\hat{\gamma}\bigr)+O\bigl(\ln(s)\bigr). $$
    (30)
  3. (iii)

    When s→∞ in a vertical strip a<Re(s)<b, we have

    $$ \bigl| M(s)\bigr| = \exp\biggl(\frac{\pi}{2} (\gamma-\hat{\gamma}) \bigl|\mathrm {Im}(s)\bigr|+O\bigl(\ln\bigl|\mathrm {Im}(s)\bigr|\bigr)\biggr). $$
    (31)

Proof

Part (i) follows from the asymptotic expansion for G(z;τ), given in formula (4.5) in [8], while parts (ii) and (iii) are simple corollaries of (i) and (28). □

The following notation will be used extensively in this paper: if X is a hypergeometric process with parameters \((\beta,\gamma,\hat{\beta},\hat{\gamma})\), then \(\tilde{X}\) will denote the hypergeometric process with parameters \((\delta \beta, \delta \gamma,\delta\hat{\beta}, \delta\hat{\gamma})\), provided that this parameter set is admissible. In particular, the Laplace exponent of \(\tilde{X}\) is given by

$$ \tilde{\psi}(z) = -\frac{{\varGamma}(1-\delta(\beta-\gamma)-z) {\varGamma}(\delta(\hat{\beta}+ \hat{\gamma})+z)}{{\varGamma}(1-\delta\beta-z) {\varGamma}(\delta\hat{\beta}+z)}. $$
(32)

Lemma 2

  1. (i)

    M(s) is a real meromorphic function which has zeros

    $$ \bigl\{-(1-\beta)\delta-m\delta - n,\ 1+(\hat{\beta}+\hat{\gamma})\delta+m\delta+n\big\}_{m,n\ge 0}, $$
    (33)

    and poles

    $$ \bigl\{z^{-}_{m,n}=-(1-\beta+\gamma)\delta-m\delta-n,\ z^{+}_{m,n}=1+\hat{\beta}\delta+m\delta + n\bigr\}_{m,n\ge 0}. $$
    (34)

    All zeros/poles are simple if α∉ℚ.

  2. (ii)

    M(s) satisfies the following functional identities

    (35)
    (36)
    (37)

Proof

The proof of (i) follows from the definition of the double gamma function (24), while the functional identity (35) is a simple corollary of the functional identity for the double gamma function (25).

Let us prove (37). We use (27) and find that for all s and x

$$\frac{G(s+x;\delta)}{G(s;\delta)} = \alpha^{\alpha s x} C \frac{G(\alpha(s+x);\alpha)}{G(\alpha s;\alpha)}, $$

where C=C(x) depends only on x. The above identity implies that

where \(\tilde{C}\) does not depend on s. It is easy to see that (37) follows from the above identity and (28).

The functional identity (36) follows from (37) and (35); we leave the details to the reader. □

The following proposition will be central in the proof of Theorem 2. It allows us to identify explicitly the Mellin transform of the exponential functional. This result is also applicable to some other Lévy processes, including Brownian motion with drift and, more generally, processes with hyper-exponential or phase-type jumps; therefore it is of independent interest.

First let us present the main ingredients. Let Y be a (possibly killed) Lévy process started from zero, and let \(\psi_{Y}(z)=\ln {\mathbb{E}}[\exp(z Y_{1})]\) denote its Laplace exponent. In the case when ψ Y (0)=0 (the process is not killed) we will also assume that \({\mathbb{E}}[Y_{1}]>0\), so that Y drifts to +∞. As usual we define the exponential functional \(I=\int_{0}^{\zeta} \exp(-Y_{t}) {\mathrm{d}}{t}\) (where ζ is the lifetime of Y) and the Mellin transform \({\mathcal{M}}_{Y}(s)={\mathbb{E}}[I^{s-1}]\).

Proposition 2

(Verification result)

Assume that Cramér’s condition is satisfied: there exists z 0<0 such that ψ Y (z) is finite for all z∈(z 0,0) and ψ Y (−θ)=0 for some θ∈(0,−z 0). If f(s) satisfies the following three properties

  1. (i)

    f(s) is analytic and zero-free in the strip Re(s)∈(0,1+θ),

  2. (ii)

    f(1)=1 and \(f(s+1)=-sf(s)/\psi_Y(-s)\) for all s∈(0,θ),

  3. (iii)

    \(|f(s)|^{-1}=o(\exp(2\pi|\mathrm{Im}(s)|))\) as Im(s)→∞, uniformly in Re(s)∈(0,1+θ),

then \({\mathcal{M}}_{Y}(s) \equiv f(s)\) for Re(s)∈(0,1+θ).

Proof

The Cramér’s condition and Lemma 2 in [43] imply that \({\mathcal{M}}_{Y}(s)\) can be extended to an analytic function in the strip Re(s)∈(0,1+θ). In the case when ψ Y (0)=0 {ψ Y (0)<0} we use Lemma 2.1 in [36] {Proposition 3.1 from [15]} to conclude that \({\mathcal{M}}_{Y}(s)\) satisfies the functional identity

$$ {\mathcal{M}}_Y(s+1)=-\frac{s}{\psi_Y(-s)} {\mathcal{M}}_Y(s) $$
(38)

for all s∈(0,θ). Since f(s) satisfies the same functional identity we conclude that the function \(F(s)={\mathcal{M}}_{Y}(s)/f(s)\) satisfies F(s+1)=F(s) for all s∈(0,θ). Using the assumption that f(s) is analytic and zero-free we conclude that F(s) is an analytic function in the strip Re(s)∈(0,1+θ). Since F(s) is also periodic with period equal to one, it can be extended to an analytic and periodic function in the entire complex plane.

Our goal now is to prove that the function F(s) is in fact constant. Since F(s) is analytic and periodic in the entire complex plane, it can be represented as a Fourier series

$$F(s) = \sum_{n \in {\mathbb{Z}}} c_n e^{2\pi {\mathrm{i}}n s}, $$

where the series converges in the entire complex plane. This means that the two functions

$$F_1(z) = \sum_{n\ge 1} c_n z^n,\qquad F_2(z)=\sum_{n\ge 1} c_{-n} z^n $$

are analytic in the entire complex plane, and that for all s∈ℂ

$$ F(s)=c_0+F_1\bigl(\exp(2\pi {\mathrm{i}}s)\bigr)+F_2\bigl(\exp(-2\pi {\mathrm{i}}s)\bigr). $$
(39)

Due to the inequality \(|{\mathcal{M}}_{Y}(s)|<{\mathcal{M}}_{Y}(\mathrm {Re}(s))\), assumption (iii) and periodicity of F(s) we conclude that uniformly in Re(s) we have

$$ F(s) =o\bigl(\exp\bigl(2 \pi \bigl|\mathrm {Im}(s)\bigr|\bigr)\bigr),\quad \hbox{as}\ \mathrm {Im}(s)\to \infty. $$
(40)

In particular, when Im(s)→+∞ we have \(F_1(\exp(2\pi {\mathrm{i}}s))\to F_1(0)=0\), therefore the estimates (39) and (40) imply that \(F_2(z)=o(|z|)\) as z→∞ in the entire complex plane, and using Cauchy's estimates (Proposition 2.14 in [21]) we conclude that \(F_2(z)\equiv 0\). Similarly, considering the case when Im(s)→−∞, we find that \(F_1(z)\equiv 0\). Therefore F(s) must be constant, and the value of this constant is equal to one, since \(F(1)= {\mathcal{M}}_{Y}(1)/f(1)=1\). □

We would like to stress that Proposition 2 is an important result of independent interest. We know that if Cramér's condition is satisfied then the Mellin transform \({\mathcal{M}}_{Y}(s)\) satisfies the functional identity (38); however, it is clear that there are infinitely many functions which satisfy the same functional identity. Proposition 2 tells us that if we have found such a function f(s), which satisfies (38), and if we can verify the two conditions on the zeros of this function and on its asymptotic behaviour, then we can in fact uniquely identify \({\mathcal{M}}_{Y}(s)\equiv f(s)\). In particular, this proposition can be used to provide a very simple and short proof of the well-known result on the exponential functional of Brownian motion with drift and of the recent results on exponential functionals of processes with double-sided hyper-exponential jumps (see [14]).

Proof of Theorem 2

First of all, we check that Cramér’s condition is satisfied with \(\theta=\hat{\beta}\delta\). Let f(s)=Γ(s)M(s), where M(s) is defined by (28). From Lemma 2(i) we know that f(s) is analytic and zero-free in the strip \(\mathrm {Re}(s)\in (0,1+\hat{\beta}\delta)\). By construction we have f(1)=1, and from formula (35) we find that f(s) satisfies f(s+1)=−sf(s)/ψ(−αs) for \(s\in (0,\hat{\beta}\delta)\). Next, Lemma 1 (iii) and Stirling’s asymptotic formula for the gamma function (see formula 8.327.3 in [26])

(41)

imply that as s→∞ in the vertical strip \(\mathrm {Re}(s)\in (0,1+\hat{\beta}\delta)\) we have

$$\bigl|f(s)\bigr|^{-1} = \exp\biggl(\frac{\pi}{2} (1-\gamma+\hat{\gamma}) \bigl|\mathrm {Im}(s)\bigr|+o\bigl(\mathrm {Im}(s)\bigr)\biggr) = o\bigl(\exp\bigl(\pi\bigl|\mathrm {Im}(s)\bigr|\bigr)\bigr), $$

where in the last step we have also used the fact that both γ and \(\hat{\gamma}\) belong to the interval (0,1).

We see that the function f(s) satisfies all conditions of Proposition 2, thus we can conclude that \({\mathcal{M}}(s)\equiv f(s)\). □

Corollary 1

Assume that α>0, \(\hat{\beta}>0\) and that both sets of parameters \((\beta, \gamma, \hat{\beta},\hat{\gamma})\) and \((\delta\beta, \delta \gamma, \delta \hat{\beta}, \delta \hat{\gamma})\) belong to the admissible set \({\mathcal{A}}\). Then we have the following identity in distribution

$$ \epsilon_1^{\alpha} \times I(\alpha;X ) \buildrel {d}\over {=} \alpha^{\gamma +\hat{\gamma}} \times \epsilon_1 \times I(\delta; \tilde{X})^{\alpha} $$
(42)

where \(\epsilon_1\sim\mathrm{Exp}(1)\) and all random variables are assumed to be independent.

Proof

Rewrite (37) as

$${\varGamma}(1-\alpha+\alpha s){\mathcal{M}}(s;\alpha,\beta,\gamma,\hat{\beta},\hat{\gamma}) = \alpha^{(s-1)(\gamma+\hat{\gamma})} {\varGamma}(s){\mathcal{M}}(1-\alpha+\alpha s;\delta, \delta \beta,\delta \gamma,\delta \hat{\beta},\delta \hat{\gamma}) $$

and use the following facts: (i) \({\varGamma}(s)={\mathbb{E}}[\epsilon_{1}^{s-1}]\); (ii) if f(s) is the Mellin transform of a random variable ξ then f(1−α+αs) is the Mellin transform of the random variable ξ α; (iii) the Mellin transform of the product of independent random variables is the product of their Mellin transforms. □

4 Density of the Exponential Functional

In this section we will study the density of the exponential functional, defined as

$$p(x) = \frac{{\mathrm{d}}}{{\mathrm{d}}{x}} {\mathbb{P}}\bigl(I(\alpha,X) \le x\bigr),\quad x\ge 0. $$

As in the previous section, X is a hypergeometric process with parameters \((\beta,\gamma,\hat{\beta},\hat{\gamma}) \in {\mathcal{A}}\) and \(\hat{\beta}>0\). The main results of this section are the convergent series representations and complete asymptotic expansions of this function as x→0+ or x→+∞. These results should be seen as extensions of related results in [25] and [27].

Let us define the following three sets of parameters which will be used extensively later. In the following definition (and everywhere else in this paper) ψ(⋅), \(\tilde{\psi}(\cdot)\) and M(⋅) denote the functions which were defined in (1), (32) and (28); the sequences \(z^{-}_{m,n}\) and \(z^{+}_{m,n}\) represent the poles of M(⋅) and were defined in (34); the constant η is defined by (3).

Definition 2

Define the coefficients \(\{a_n\}_{n\ge 0}\) as

$$ a_{n} = -\frac{1}{n!} \prod_{j=0}^n \psi(\alpha j),\quad n\ge 0. $$
(43)

The coefficients \(\{b_{m,n}\}_{m,n\ge 0}\) are defined recursively

(44)

Similarly, \(\{c_{m,n}\}_{m,n\ge 0}\) are defined recursively

(45)

Note that if β=1 we have ψ(0)=0, which implies that \(a_n=0\) for all n≥0.
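
The coefficients (43) are straightforward to compute once ψ is available; the sketch below (Python/mpmath, illustrative parameters with β<1) evaluates the first few of them. The recursively defined coefficients \(b_{m,n}\) and \(c_{m,n}\) from (44) and (45) are not reproduced here.

    # First coefficients from (43): a_n = -(1/n!) * prod_{j=0}^{n} psi(alpha*j);
    # parameters are illustrative, with beta < 1 so that psi(0) != 0.
    import mpmath as mp

    alpha = mp.mpf('1.2')
    beta, gam, beta_h, gam_h = mp.mpf('0.5'), mp.mpf('0.6'), mp.mpf('0.7'), mp.mpf('0.4')

    def psi(z):   # Laplace exponent (1)
        return (-mp.gamma(1 - beta + gam - z) / mp.gamma(1 - beta - z)
                * mp.gamma(beta_h + gam_h + z) / mp.gamma(beta_h + z))

    def a(n):
        prod = mp.mpf(1)
        for j in range(n + 1):
            prod *= psi(alpha * j)
        return -prod / mp.factorial(n)

    print([a(n) for n in range(4)])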

Proposition 3

Assume that α∉ℚ. For all m,n≥0, we have

Proof

Let us prove that the residue of \({\mathcal{M}}(s)\) at \(s=z^{-}_{m,n}\) is equal to \(b_{m,n}\). First, we use Theorem 2 and rearrange the terms in the functional identity (35) to find that

$${\mathcal{M}}(s) = \frac{\delta M(s+1)}{s+(1-\beta+\gamma)\delta} \frac{{\varGamma}(2-\beta+\gamma+\alpha s){\varGamma}(\hat{\beta}+\hat{\gamma}-\alpha s)}{{\varGamma}(1-\beta+\alpha s){\varGamma}(\hat{\beta}-\alpha s)}. $$

The above identity and the definition (44) imply that as s→−(1−β+γ)δ

$${\mathcal{M}}(s) = \frac{b_{0,0}}{s+(1-\beta+\gamma)\delta}+O(1), $$

which means that the residue of \({\mathcal{M}}(s)\) at \(z^{-}_{0,0}=-(1-\beta+\gamma)\delta\) is equal to \(b_{0,0}\).

Next, let us prove that the residues satisfy the second recursive identity in (44). We rewrite (35) as

$$ {\mathcal{M}}(s) = -\frac{\psi(-\alpha s)}{s} {\mathcal{M}}(s+1). $$
(46)

We know that \({\mathcal{M}}(s)\) has a simple pole at \(s=z^{-}_{m,n}\) while \({\mathcal{M}}(s+1)\) has a simple pole at \(z^{-}_{m,n}+1=z^{-}_{m-1,n}\). One can also check that the function ψ(−αs) is analytic at \(s=z^{-}_{m,n}\) for m≥1. Therefore we have as \(s\to z^{-}_{m,n}\)

which, together with (46), implies that

$$\operatorname{Res}\bigl({\mathcal{M}}(s):s=z^-_{m,n}\bigr)=-\frac{\psi(-\alpha z^-_{m,n})}{z^-_{m,n}} \times \operatorname{Res}\bigl({\mathcal{M}}(s):s=z^-_{m-1,n}\bigr). $$

The proof of all remaining cases is very similar and we leave the details to the reader. □

Proposition 3 immediately gives us a complete asymptotic expansion of p(x) as x→0+ and x→+∞, which we present in the next Theorem.

Theorem 3

Assume that α∉ℚ. Then

(47)
(48)

Proof

The starting point of the proof is the expression of p(x) as the inverse Mellin transform

$$ p(x) = \frac{1}{2\pi {\mathrm{i}}} \int_{1+{\mathrm{i}}{\mathbb{R}}} {\mathcal{M}}(s) x^{-s} {\mathrm{d}}{s},\quad x>0. $$
(49)

Due to (31), Theorem 2 and Stirling’s formula (41) we know that \(|{\mathcal{M}}(x+{\mathrm{i}}u)|\) decreases exponentially as u→∞ (uniformly in x in any finite interval), therefore the integral in the right-hand side of (49) converges absolutely and p(x) is a smooth function for x>0. Assume that c<0 and that c satisfies \(c \ne z^{-}_{m,n}\) and c≠−n for all m,n. Shifting the contour of integration 1+iℝ↦c+iℝ and taking into account the residues at the poles \(s=z^{-}_{m,n}\) and s=−n, we find that

(50)

where the first summation is over all m≥0,n≥0, such that \(z^{-}_{m,n}>c\). Next, we perform a change of variables s=c+iu and obtain the following estimate

$$\biggl|\int_{c+{\mathrm{i}}{\mathbb{R}}} {\mathcal{M}}(s) x^{-s} {\mathrm{d}}{s}\biggr| = x^{-c} \biggl|\int_{{\mathbb{R}}} {\mathcal{M}}(c+{\mathrm{i}}u) x^{-{\mathrm{i}}u} {\mathrm{d}}{u}\biggr|< x^{-c} \int_{{\mathbb{R}}} \bigl|{\mathcal{M}}(c+{\mathrm{i}}u)\bigr| {\mathrm{d}}{u} =O\bigl(x^{-c}\bigr) $$

which proves (47). The proof of (48) is identical, except that we have to shift the contour of integration in the opposite direction. □

It turns out that for almost all parameters α the asymptotic series (47) and (48) converge to p(x) for all x>0. In order to state this result, we need to define the following set of real numbers.

Definition 3

Let \({\mathcal{L}}\) be the set of real irrational numbers x, for which there exists a constant b>1 such that the inequality

$$ \biggl|x -\frac{p}{q}\biggr| < \frac{1}{b^{q}} $$
(51)

is satisfied for infinitely many integers p and q.

This set was introduced in [27] in connection with the distribution of the supremum of the stable process and it was later studied in [25]. It was proved in [25] that \(x\notin {\mathcal{L}}\cup {\mathbb{Q}}\) if and only if

$$ \lim_{q\to +\infty} \frac{\ln \| qx \| }{q} =0. $$
(52)

where \(\| x \|=\min\{ |x-i| \; : \; i\in {\mathbb{Z}}\}\). There also exists a characterization of the elements of \({\mathcal{L}}\) in terms of their continued fraction expansion (see Proposition 1 in [25]). It is known that \({\mathcal{L}}\) is a proper subset of the Liouville numbers, that it is dense in ℝ and that the Hausdorff dimension of \({\mathcal{L}}\) is zero (which implies that it has Lebesgue measure zero). This set is closed under addition/multiplication by rational numbers. It is also known that \(x\in {\mathcal{L}}\) if and only if \(x^{-1} \in {\mathcal{L}}\). See [25] for proofs of these results and for some further references.
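
The criterion (52) is easy to explore numerically; the sketch below (Python/mpmath) prints \(\ln\|qx\|/q\) for a few values of q, taking x=√2 as an example of an algebraic irrational number, hence a number outside \({\mathcal{L}}\cup{\mathbb{Q}}\).

    # Illustration of criterion (52): ||y|| denotes the distance from y to the
    # nearest integer; for x = sqrt(2) the ratio ln||q*x|| / q tends to 0.
    import mpmath as mp

    def dist_to_int(y):
        return abs(y - mp.nint(y))

    x = mp.sqrt(2)
    for q in [10, 100, 1000, 10000]:
        print(q, mp.log(dist_to_int(q * x)) / q)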

The following Theorem is our second main result in this section.

Theorem 4

Assume that \(\alpha \notin {\mathcal{L}} \cup {\mathbb{Q}}\). Then for all x>0

(53)

First let us establish the following technical result, which gives us a formula for M(s) similar to the reflection formula for the Gamma function (15).

Lemma 3

Define

$$ c_k=1-(1-\beta+\gamma)\delta-\delta/2-k. $$
(54)

Then for all u∈ℂ

(55)

where we have defined

(56)

Proof

Let us set \(s=c_k+{\mathrm{i}}u\). Then iterating the functional identity (35) k times we obtain

$$ {\mathcal{M}}(s)={\mathcal{M}}(s+k)\frac{{\varGamma}(s)}{{\varGamma}(s+k)} \prod\limits_{j=0}^{k-1} \frac{{\varGamma}(1-\beta+\gamma+\alpha s +\alpha j)}{{\varGamma}( 1-\beta+\alpha s +\alpha j)} \frac{{\varGamma}(\hat{\beta}+\hat{\gamma}-\alpha s -\alpha j)}{{\varGamma}( \hat{\beta}-\alpha s -\alpha j)}. $$
(57)

We use the identity

$$\frac{{\varGamma}(s)}{{\varGamma}(s+k)}=(-1)^k \frac{{\varGamma}(1-k-s)}{{\varGamma}(1-s)} $$

and (15) and rewrite (57) as

(58)

Next we change the index \(j \mapsto k-1-j\) and use (15) to obtain

(59)

The proof of (55) follows from (58), (59) and the following identity

$$\prod_{j=0}^{k-1} {\varGamma}(\alpha j + z)=\frac{G(\delta z+k;\delta)}{G(\delta z;\delta)}, $$

which is obtained by iterating formula (25) k times. □

Proof of Theorem 4

We will use a technique similar to the one in the proof of Theorem 2 in [25]. Let us define \(B=1-\gamma-\hat{\gamma}\) and assume that B>0. We start with (50) and set \(c=c_k\), where \(c_k\) is defined by (54). Note that \({\mathcal{M}}(s)\) does not have singularities on the vertical line \(c_k+{\mathrm{i}}{\mathbb{R}}\); if this were not the case then \(c_k\) would coincide with one of the poles \(z^{-}_{m,n}\), which would imply that α is rational.

We define

$$ \everymath{\displaystyle} \begin{array}{l} I_1(x,k)=x^{-c_k}\mathrm {Re}\biggl[\int_{0}^{k} {\mathcal{M}}(c_k+{\mathrm{i}}u) x^{-{\mathrm{i}}u}{\mathrm{d}}{u}\biggr], \\[12pt] I_2(x,k)=x^{-c_k}\mathrm {Re}\biggl[\int_{k}^{\infty} {\mathcal{M}}(c_k+{\mathrm{i}}u) x^{-{\mathrm{i}}u} {\mathrm{d}}{u}\biggr]. \end{array} $$
(60)

It is clear that the integral on the right-hand side of (50) is equal to \(2(I_1(x,k)+I_2(x,k))\). Our goal is to prove that \(I_j(x,k)\to 0\) as k→+∞ for all x>0.

First, let us deal with \(I_2(x,k)\). Using Theorem 2, formula (30) and Stirling's asymptotic formula (41) we find that there exists a constant \(C_1>0\) such that for all s in the domain

$${\mathcal{D}} = \biggl\{s\in {\mathbb{C}}: |s|>2\ \hbox{and}\ \frac{\pi}{8}<\arg(s)<\frac{7\pi}{8}\biggr\} $$

we have an upper bound

$$\bigl|{\mathcal{M}}(s)\bigr|< | s|^{C_1} \exp\bigl(\mathrm {Re}\bigl[Bs \ln(s)+s\bigl(\bigl(1+\ln(\delta)\bigr)(1-B) + \pi {\mathrm{i}}\hat{\gamma}-1\bigr)\bigr]\bigr). $$

From now on we will denote u=Im(s). Computing the real part in the above expression we obtain that for all \(s\in {\mathcal{D}}\)

(61)

Next we check that for all k large enough, the conditions u>k and \(s=c_k+{\mathrm{i}}u\) imply \(s \in {\mathcal{D}}\), so the integrand in formula (60) defining \(I_2(x,k)\) can be bounded from above as given in (61). Let us restrict s to the line of integration \(L_{2}=\{s=c_{k}+{\mathrm{i}}u \; :\; u\ge k\}\) and simplify this upper bound. Basically, we want to isolate a term of the form \(\exp(-Bk\ln(k))\) and show that everything else does not grow faster than an exponential function of k. First of all, when \(s\in L_2\) we have \(\mathrm{Re}(s)=c_k<1-k\) and \(|s|>|\mathrm{Re}(s)|=|c_k|>k-1\), therefore

$$\exp\bigl(B \mathrm {Re}(s) \ln|s|\bigr)<\exp\bigl(-B(k-1)\ln(k-1)\bigr). $$

Next, for \(s\in L_2\) and k sufficiently large it is true that \(-2k<\mathrm{Re}(s)<1-k\), which implies that there exists a constant \(C_2>0\) such that

$$\exp\bigl(\mathrm {Re}(s)\bigl(\bigl(1+\ln(\delta)\bigr)(1-B)-1\bigr)\bigr)< C_2^k. $$

Finally, for \(s\in L_2\) we have arg(s)>0, which together with the assumption B>0 shows that for all u>0

$$\exp\bigl(-\bigl(\pi \hat{\gamma}+B\arg(s)\bigr)u\bigr) < \exp(-\pi \hat{\gamma}u). $$

Combining the above three estimates with (60) and (61) we see that

(62)

The right-hand side of the above inequality converges to 0 as k→+∞, therefore \(I_2(x,k)\to 0\) as k→+∞.

Now we will deal with \(I_1(x,k)\). Our first goal is to find an upper bound for the product of trigonometric functions in (55). We will follow the proof of Theorem 2 in [25]: we use the trigonometric identities

which imply that |cos(x)|cosh(y)≤|cos(x+iy)|≤cosh(y), therefore

$$\biggl|\frac{\cos(a+{\mathrm{i}}y)}{\cos(b+{\mathrm{i}}y)}\biggr|\le \frac{1}{|\cos(b)|}. $$

Applying the above estimate and Lemma 1 from [25] we conclude that for \(\alpha \notin {\mathcal{L}} \cup {\mathbb{Q}}\) and for k large enough

$$\Biggl|\prod_{j=0}^{k-1} \frac{\cos(\pi \alpha(j-{\mathrm{i}}u + \gamma \delta))}{\cos(\pi \alpha(j - {\mathrm{i}}u))}\Biggr| \le \prod_{j=0}^{k-1} \bigl|\sec(\pi \alpha j)\bigr|=2^{k+o(k)}<3^k. $$

Using (57), (60) and the above inequality we conclude that for all k large enough

$$ \bigl|I_1(x,k)\bigr| < x^{-c_k} 3^k \int_{0}^{k} \bigl|{\mathcal{M}}(c_k+k+{\mathrm{i}}u)\bigr|\times \biggl|\frac{F(-{\mathrm{i}}u)}{F(-{\mathrm{i}}u + k )}\biggr| {\mathrm{d}}{u}. $$
(63)

Now our goal is to prove that \(F(-{\mathrm{i}}u)/F(-{\mathrm{i}}u + k)\) converges to zero faster than any exponential function of k as k→+∞. We use (56), Stirling's formula (41) and the asymptotic expansion (29) to conclude that when w→∞ in the domain |arg(w)|<3π/4 we have

$$\ln\bigl(F(w)\bigr) = Bw \ln(w) + O(w). $$

This asymptotic result implies that there exists a constant \(C_3>0\) such that for k large enough and for all v∈[0,1]

where in the last step we have used the fact that arctan(v)<π/2 and B>0. Thus we see that for all k large enough we have

$$\max_{0\le u \le k} \biggl|\frac{F(-{\mathrm{i}}u)}{F(-{\mathrm{i}}u + k)}\biggr| = \max_{0\le v \le 1} \biggl|\frac{F(-{\mathrm{i}}k v)}{F(-{\mathrm{i}}kv + k)}\biggr| < C_3^k \exp\bigl(-B k \ln(k)\bigr). $$

Combining the above estimate with (63) we obtain that for all k large enough

$$\bigl|I_1(x,k)\bigr| < x^{-c_k} 3^k C_3^k e^{-B k \ln(k)} \int_{0}^{\infty} \bigl|{\mathcal{M}}\bigl(1-(1-\beta+\gamma)\delta-\delta/2+{\mathrm{i}}u\bigr)\bigr|{\mathrm{d}}{u} $$

and we see that \(I_1(x,k)\to 0\) as k→+∞. Thus when \(\gamma+\hat{\gamma}<1\) the first series in (53) converges to p(x) for all x>0.

When \(\gamma+\hat{\gamma}>1\) the proof is very similar, except that one has to shift the contour of integration in the opposite direction. □

As we have mentioned in Sect. 2, hypergeometric processes belong to a bigger class of meromorphic Lévy processes, which was introduced in [30]. It is natural to ask whether results similar to the ones presented in Theorems 2, 3 and 4 can be obtained for this bigger class of meromorphic processes. We think that there is a good chance that Theorem 2, which gives an explicit expression for the Mellin transform of the exponential functional, can be generalized. The Mellin transform of the exponential functional would probably be given by an infinite product of gamma functions and would involve the roots and poles of the function ψ(z), where ψ(z) is the Laplace exponent of the meromorphic process. This is certainly the case for hypergeometric processes, as formula (28) can be rewritten as an infinite product of gamma functions with the help of formula (4.5) in [27]. Another piece of evidence which supports our guess is provided by the case when the process X has hyper-exponential jumps, which can be considered as a simple example of a meromorphic process: here the Mellin transform can be given explicitly by a finite product of gamma functions, see [14].

At the same time, we are almost certain that Theorems 3 and 4, which provide asymptotics and series expansions for the density of the exponential functional, cannot be extended to the case of general meromorphic processes. One of the main facts used in the proofs of Theorems 3 and 4 is that the Mellin transform \({\mathcal{M}}(s)\) has only simple poles. Considering the functional equation (46), it is easy to see that this condition is equivalent to requiring that no roots/poles of the Laplace exponent ψ(αz) differ by an integer. The case of hypergeometric processes is very special (and unique) in that we know explicitly the roots/poles of the Laplace exponent ψ(z) (which is given by (1)), and by imposing the condition \(\alpha \not \in {\mathbb{Q}}\) we can ensure that the Mellin transform \({\mathcal{M}}(s)\) has no multiple poles. This will not work for general meromorphic processes: since we do not know explicitly the roots of ψ(z), there is always a possibility that the Mellin transform has poles of multiplicity greater than one, and this would have two important implications. First of all, it would make it hard (maybe impossible?) to compute the residues at these poles (as we do in Proposition 3). Second, it would also imply that the asymptotic expansions for the density of the exponential functional (similar to the ones presented in Theorem 3) would be more complicated and would contain logarithmic terms.

An unusual feature of the results presented in Theorems 3 and 4 (and of the similar results in [25] and [27]) is that they do not hold for rational values of α, which is clearly the case that would be most interesting for applications. As we’ve discussed above, the problem lies in the fact that for α∈ℚ the Mellin transform \({\mathcal{M}}(s)\) has poles of multiplicity greater than one, which makes the picture much more complicated. Some consolation can be provided by the fact that if α is rational then the formula (28) can be simplified and the Mellin transform \({\mathcal{M}}(s)\) can be given in terms of simpler functions, see the recent paper [28] for an example of these computations in the related case of the supremum of a stable process. The density of the exponential functional can be recovered then by inverting the Mellin transform numerically. Another possible way of approximating the density of the exponential functional when α is rational is based on the following procedure: (i) first approximate α by an algebraic but irrational \(\tilde{\alpha}\) (which will ensure that \(\tilde{\alpha}\notin {\mathcal{L}} \cup {\mathbb{Q}}\)) and (ii) compute the density using series expansions given in Theorem 4. Again, see [28] for several numerical examples.

5 Applications

In this section we will present several applications of the above results on exponential functionals of hypergeometric Lévy processes. In particular, we will study various functionals related to strictly stable Lévy processes. Our main tool will be the Lamperti transformation, which links a positive self-similar Markov process (pssMp) to an associated Lévy process. By studying this associated Lévy process and its exponential functional we can obtain many interesting results about the original self-similar Markov process.

We assume that the reader is familiar with the Lamperti transformation [34] and Lamperti-stable processes (see [9, 12, 13, 20]).

5.1 Extrema of Stable Processes

As our first example, we will present a new proof of some results on the distribution of extrema of general stable processes, which were obtained in [25] and [27]. Let us define \(S_t=\sup\{Y_s: 0\le s\le t\}\), where Y is a stable process (started from zero) with parameters (α,ρ), whose characteristic exponent is given by (7) and the parameter c is normalized as in (10). The infinite series representation for the density of \(S_1\) (valid for almost all values of α) was derived recently in [25]. This formula was obtained using an explicit expression for the Mellin transform of \(S_1\), which was found in [27] via the Wiener-Hopf factorization of stable processes. Our goal in this section is to present a more direct route which leads to the density of \(S_1\).

We start with a hypergeometric Lévy process \(\hat{\xi}\), which is defined by Laplace exponent (1) and

$$\hbox{parameters of $\hat{\xi}$} : (\beta,\gamma,\hat{\beta},\hat{\gamma}) = \bigl(1-\alpha \rho,\alpha(1-\rho),1-\alpha \rho,\alpha \rho\bigr). $$

Let \(\hat{Y}=-Y\) denote the dual process of Y. Note that \(\hat{Y}\) is a stable process with parameters (α,1−ρ). From [9] and Theorem 1 we find that \(\hat{\xi}\) is associated by the Lamperti transformation [34] to the positive self-similar Markov process \(\hat{Z}\), which is defined as the process \(\hat{Y}\) killed at the first exit from the positive half-line. More precisely,

where \(\hat{T}_{x}=\inf\{t>0:\hat{Y}_{t}\le -x\}\) and Δ denotes the cemetery state. It is a well-known result from the theory of the Lamperti transformation that the lifetime of \(\hat{Z}\) started at one is equal in distribution to the exponential functional of the associated Lévy process, which gives us the following identity

$$\hat{T}_1 \buildrel {d}\over {=} \int_0^{\zeta} e^{\alpha \hat{\xi}_t} {\mathrm{d}}{t}, $$

where ζ is the lifetime of \(\hat{\xi}\). Using our notation (22) this identity can be written as

$$ \hat{T}_1 \buildrel {d}\over {=} I(\alpha,\xi), $$
(64)

where \(\xi=-\hat{\xi}\) is also a hypergeometric Lévy process, and from Proposition 1(v) we find that

$$ \hbox{parameters of $\xi$} : (\beta,\gamma,\hat{\beta},\hat{\gamma}) = \bigl(\alpha\rho, \alpha\rho, \alpha\rho, \alpha(1-\rho)\bigr). $$
(65)

Finally, using the fact that \(\hat{Y}=-Y\) and the scaling property of stable processes we obtain

The above identity combined with (64) shows that \(S_{1}^{-\alpha}\stackrel{d}{=}I(\alpha,\xi)\). In particular, the density of \(S_1\) can be represented in terms of the density of I(α,ξ) (which we will denote by p(x)) as follows

$$\frac{{\mathrm{d}}}{{\mathrm{d}}{x}} {\mathbb{P}}(S_1\le x) = \alpha x^{-1-\alpha}p\bigl(x^{-\alpha}\bigr), \quad x>0. $$

Using this expression, the fact that ξ has parameters (65) and applying Theorem 3 {Theorem 4}, we recover the asymptotic expansions that appear in Theorem 9 in [27] {the convergent series representations given in Theorem 2 in [25]}.
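
The change of variables used in the last display is elementary but worth recording separately; the sketch below (Python/mpmath) checks it by quadrature with a toy density standing in for p, since the actual p is only available through the series of Theorems 3 and 4.

    # If I has density p on (0, infinity), then S := I^(-1/alpha) has density
    # alpha * x^(-1-alpha) * p(x^(-alpha)); we verify total mass and first moment
    # with a toy density (standard exponential) in place of p.
    import mpmath as mp

    alpha  = mp.mpf('1.5')
    p_toy  = lambda y: mp.exp(-y)                                      # toy density of I
    dens_S = lambda x: alpha * x**(-1 - alpha) * p_toy(x**(-alpha))    # claimed density of I^(-1/alpha)

    print(mp.quad(dens_S, [0, mp.inf]) - 1)                            # total mass ~ 1
    print(mp.quad(lambda x: x * dens_S(x), [0, mp.inf])
          - mp.quad(lambda y: y**(-1 / alpha) * p_toy(y), [0, mp.inf]))  # ~ 0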

Remark 1

Note that the convergent and the asymptotic series given in Theorems 3 and 4 have identical form. Combining these two results we obtain the following picture: depending on whether \(\gamma+\hat{\gamma}<1\) or \(\gamma+\hat{\gamma}>1\) one of the series in (53) converges for all x∈(0,∞), and this convergent series is also asymptotic at one of the boundaries of this interval (at 0 or +∞). At the same time the second series in (53) provides an asymptotic series at the other boundary. Therefore in what follows we will only present the statements about the convergent series representation, with the understanding that the asymptotic expansions are implicitly imbedded into these results.

5.2 Entrance Laws

In this section we will obtain some explicit results on the entrance law of the stable process conditioned to stay positive and the entrance law of the excursion measure of the stable process reflected at its past infimum. We will use the following result from [6]: if a non-arithmetic Lévy process X satisfies \({\mathbb{E}}[|X_{1}|]<\infty\) and \({\mathbb{E}}[X_{1}]>0\), then as x→0+ its corresponding pssMp \((Z,\mathbf{Q}_x)\) in the Lamperti representation converges weakly (in the sense of finite-dimensional distributions) towards a nondegenerate probability law \((Z,\mathbf{Q}_0)\). Under these conditions, the entrance law under \(\mathbf{Q}_0\) is described as follows: for every t>0 and every measurable function f:ℝ+→ℝ+,

$$ \mathbf{Q}_0\bigl[f(Z_t)\bigr] = \frac{1}{\alpha \mathbb{E}[X_1]} {\mathbb{E}}\biggl[\frac{1}{I}f\biggl(\frac{t}{I}\biggr)\biggr], $$
(66)

where \(I=I(\alpha,X)=\int_{0}^{\infty} \exp\{-\alpha X_{s}\}\,{\mathrm{d}} s\). Necessary and sufficient conditions for the weak convergence of \((Z,\mathbf{Q}_x)\) on the Skorokhod space were given in [10].

As before, we consider a stable process \((Y,{\mathbb{P}}_x)\) and denote by \((Y, {\mathbb{P}}_{x}^{\uparrow})\) the stable process conditioned to stay positive (see [16, 18] for a proper definition). It is known that \((Y, {\mathbb{P}}_{x}^{\uparrow})\) is a pssMp with index α. According to Corollary 2 in [9] (and Theorem 1 above) its associated Lévy process is given by \(\xi^{\uparrow}\), which is a hypergeometric Lévy process with parameters (1,αρ,1,α(1−ρ)). From Proposition 1 (ii) we see that \({\mathbb{E}}[\xi^{\uparrow}_{1}]>0\), and the fact that hypergeometric processes have Lévy measures with exponentially decaying tails implies that \({\mathbb{E}}[|\xi^{\uparrow}_{1}|]<\infty\), thus we have the weak convergence of \((Y, {\mathbb{P}}_{x}^{\uparrow})\) as x→0+. We denote the limiting law by \({\mathbb{P}}^{\uparrow}\). Note also that in this particular case, the weak convergence of \((Y, {\mathbb{P}}_{x}^{\uparrow})\) has been proved in a direct way in [16].

In our next result the coefficients \(\{b_{m,n}\}_{m,n\ge 0}\) and \(\{c_{m,n}\}_{m,n\ge 0}\) are defined as in Definition 2 with parameters \(\beta=\hat{\beta}=1\), γ=αρ and \(\hat{\gamma}=\alpha(1-\rho)\).

Proposition 4

Assume that \(\alpha\notin\mathcal{L}\cup \mathbb{Q}\).

  1. (i)

    Let \(p^{\uparrow}_{t}\) be the density of the entrance law of \((Y,{\mathbb{P}}^{\uparrow})\). For x>0 we have

    (67)
  2. (ii)

    Let \(q_t\) be the density of the entrance law of the excursion measure of the reflected process \((Y-\underline{Y}, {\mathbb{P}})\), where \(\underline{Y}_{t}=\inf_{0\leq s\leq t}Y_{s}\). Then \(q_{1}(x)=x^{-\alpha(1-\rho)}p_{1}^{\uparrow}(x)\) can be computed via (67).

Proof

Let p(x) be the density of \(I(\alpha,\xi^{\uparrow})\). From (66) we find that the density of the entrance law of \((Y,{\mathbb{P}}^{\uparrow})\) is given by

$$p_1^\uparrow(x) = \frac{1}{\alpha {\mathbb{E}}[\xi_1^{\uparrow}]} x^{-1} p\bigl(x^{-1}\bigr). $$

Using Proposition 1 (ii) we obtain \({\mathbb{E}}[\xi^{\uparrow}_{1}]={\varGamma}(\alpha \rho){\varGamma}(1+\alpha(1-\rho))\). In order to finish the proof of part (i) we only need to apply the results of Theorem 4. The identity \(q_{1}(x)=x^{-\alpha(1-\rho)}p_{1}^{\uparrow}(x)\) in part (ii) was established in [16]. □

5.3 The Distribution of the Lifetime of a Stable Process Conditioned to Hit Zero Continuously

Our third application deals with stable processes conditioned to hit zero continuously. Processes in this class are defined as Doob's h-transform with respect to the function \(h(x)=\alpha\rho x^{\alpha\rho-1}\), which is excessive for the killed process \((Y_t {\mathbf{1}}_{\{t<T\}},{\mathbb{P}}_x)\). Its law \({\mathbb{P}}^{\downarrow}_{x}\), which is defined on each σ-field \(\mathcal{F}_{t}\) by

$$ \frac{{\mathrm{d}} \mathbb{P}^{\downarrow}_x}{{\mathrm{d}} \mathbb{P}_x}\bigg|_{\mathcal{F}_t} = \frac{Y^{\alpha\rho-1}_t}{x^{\alpha\rho-1}}\mathbf{1}_{\{t<T\}}, $$
(68)

is that of a pssMp that hits zero in a continuous way. According to Corollary 3 in [9] (and Theorem 1 above), its associated Lévy process in the Lamperti representation is the hypergeometric Lévy process \(\xi^{\downarrow}\) with parameters (0,αρ,0,α(1−ρ)). In particular, from Proposition 1(ii) we find that the process \(\xi^{\downarrow}\) drifts to −∞. Let \(T^{\downarrow}\) be the lifetime of the stable process conditioned to hit zero continuously. From the Lamperti transformation it follows that \(T^{\downarrow}\buildrel {d}\over {=}I(\alpha, -\xi^{\downarrow})\), therefore \(T^{\downarrow}\buildrel {d}\over {=}I(\alpha, \xi)\), where ξ is a hypergeometric process with parameters (1,α(1−ρ),1,αρ). Let us define the coefficients \(\{b_{m,n}\}_{m,n\ge 0}\) and \(\{c_{m,n}\}_{m,n\ge 0}\) as in Definition 2 with \(\beta=\hat{\beta}=1\), γ=α(1−ρ) and \(\hat{\gamma}=\alpha\rho\). Then according to Theorem 4, if \(\alpha\notin \mathcal{L}\cup \mathbb{Q}\) and x>0 we have

5.4 Distribution of Some Functionals Related to the Radial Part of a Symmetric Stable Process

Our last application deals with the radial part of symmetric stable processes in \({\mathbb{R}}^d\). Let \({\mathbf{Y}}=({\mathbf{Y}}_t, t\ge 0)\) be a symmetric stable Lévy process of index α∈(0,2) in \({\mathbb{R}}^d\) (d≥1), defined by

$${\mathbb{E}}_0\bigl[e^{i \langle \lambda, {\mathbf{Y}}_t \rangle}\bigr] = e^{-t\|\lambda\|^{\alpha}}, $$

for all t≥0 and \(\lambda\in{\mathbb{R}}^d\). Here \({\mathbb{P}}_y\) denotes the law of the process \({\mathbf{Y}}\) started from \(y\in{\mathbb{R}}^d\), ∥⋅∥ is the norm in \({\mathbb{R}}^d\) and 〈⋅,⋅〉 is the Euclidean inner product.

According to Caballero et al. [13], when α<d the radial process \(R_{t}=\frac{1}{2}\|{\mathbf{Y}}_{t}\|\) is a transient positive self-similar Markov process with index α and infinite lifetime. From Theorem 7 in [13] we find that the Laplace exponent of its associated Lévy process ξ is given by

$$ \psi_{\xi}(z) = -\frac{{\varGamma}((\alpha-z)/2)}{{\varGamma}(-z/2)} \frac{{\varGamma}((z +d)/2)}{{\varGamma}((z +d-\alpha)/2)}. $$
(69)

This shows that the process X:=2ξ is a hypergeometric Lévy process with parameters (1,α/2,(dα)/2,α/2). From Proposition 1(ii) we see that ξ drifts to +∞ and that

$$ {\mathbb{E}}[\xi_1] = \frac{1}{2}\frac{{\varGamma}(\frac{\alpha}{2}){\varGamma}(\frac{d}{2})}{{\varGamma}(\frac{d-\alpha}{2})}. $$
(70)
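
Both identifications are easy to confirm numerically; the sketch below (Python/mpmath, illustrative α and d with α<d) compares \(\psi_{\xi}(2z)\) from (69) with the hypergeometric Laplace exponent (1) for the parameters (1,α/2,(d−α)/2,α/2), and evaluates the mean (70).

    # X = 2*xi: psi_xi(2z) from (69) should match the hypergeometric Laplace
    # exponent (1) with parameters (1, alpha/2, (d-alpha)/2, alpha/2); we also
    # print the mean (70). Illustrative values with alpha < d.
    import mpmath as mp

    alpha, d = mp.mpf('1.2'), mp.mpf(3)

    def psi_xi(z):    # formula (69)
        return (-mp.gamma((alpha - z) / 2) / mp.gamma(-z / 2)
                * mp.gamma((z + d) / 2) / mp.gamma((z + d - alpha) / 2))

    def psi_hyp(z):   # formula (1) with (beta, gamma, beta_hat, gamma_hat) = (1, alpha/2, (d-alpha)/2, alpha/2)
        return (-mp.gamma(alpha / 2 - z) / mp.gamma(-z)
                * mp.gamma(d / 2 + z) / mp.gamma((d - alpha) / 2 + z))

    for z in [mp.mpc(0.3, 0.4), mp.mpc(-0.2, 0.9)]:
        print(psi_xi(2 * z) - psi_hyp(z))                  # ~ 0

    print(mp.gamma(alpha / 2) * mp.gamma(d / 2) / (2 * mp.gamma((d - alpha) / 2)))   # (70)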

Let \(P_x\) denote the law of the process R starting from x>0. The process ξ satisfies \({\mathbb{E}}[\xi_{1}]>0\) and \({\mathbb{E}}[|\xi_{1}|]<\infty\), and according to [6, 10] we have the weak convergence of \((R,P_x)\) towards \((R,P_0)\) as x→0+.

Proposition 5

Let \(\tilde{p}_{t}\) be the density of the entrance law of \((R,P_0)\). If α<d then for x>0 we have

(71)

Proof

We follow the same approach as in the proof of Proposition 4. Using (66) we conclude that \(\tilde{p}_{1}(x)=(2/\alpha {\mathbb{E}}[\xi_{1}]) x^{-1}p(x^{-1})\), where p(x) is the density of I(α,ξ)=I(α/2,X). In the case \(\alpha\notin \mathcal{L}\cup \mathbb{Q}\) the infinite series representation for p(x) and the expression for the coefficients follow from Theorem 4 and Definition 2 with β=1, γ=α/2, \(\hat{\beta}= (d-\alpha)/2\) and \(\hat{\gamma}=\alpha/2\). Note that both infinite series in (71) converge for all α, thus we can remove the assumption \(\alpha\notin \mathcal{L}\cup \mathbb{Q}\).

There is also a simpler way to derive an explicit series representation for p(x). Theorem 2 and Definition 1 tell us that the Mellin transform of I(α/2,X) is given by

(72)

where C is a constant such that the right-hand side of the above identity is equal to one for s=1. Using the functional identity (25) for the double gamma function we simplify the right-hand side in (72) and obtain

$$ {\mathbb{E}}\bigl[I(\alpha/2,X)^{s-1}\bigr] = \frac{{\varGamma}(\frac{\alpha}{2})}{{\varGamma}(\frac{d-\alpha}{2})}{\varGamma}(s) \frac{{\varGamma}(\frac{d-\alpha s}{2})}{{\varGamma}(\frac{\alpha s}{2})}. $$
(73)

We see that \({\mathbb{E}}[I(\alpha/2,X)^{s-1}]\) has simple poles at points {−n} n≥1 and {(d+2m)/α} m≥0. The residues at these points (which give us the coefficients in (71)) can be easily found using the fact that the residue of Γ(s) at s=−n is equal to (−1)n/n!. □

Next, we will study the last passage time of \(({\mathbf{Y}},{\mathbb{P}}_0)\) from the sphere in \({\mathbb{R}}^d\) of radius r, i.e.

$$U_r = \sup\bigl\{s\ge 0: \|{\mathbf{Y}}_s\|<r\bigr\}. $$

Due to the self-similarity property of \({\mathbf{Y}}\) we find that the random variables \(U_r\) satisfy the scaling property \(b^{\alpha}U_{r}\stackrel{d}{=} U_{br}\) (valid for any b,r>0), thus it is sufficient to consider the case of \(U_2\).

Proposition 6

If α<d then

(74)

Proof

Let us define the last passage time of R as \(L_r=\sup\{s\ge 0: R_s<r\}\). It is clear that \(U_{2}\buildrel {d}\over {=}L_{1}\). According to Proposition 1 in [19], the random variable \(L_1\) has the same law as \({\mathcal{G}}^{\alpha} I(\alpha, \xi)\), where ξ is a Lévy process associated to R by the Lamperti transformation and \({\mathcal{G}}\) is independent of ξ. Lemma 1 in [19] tells us that \({\mathcal{G}}\buildrel {d}\over {=}e^{-\mathcal{U}\mathcal{Z}}\), where \(\mathcal{U}\) and \(\mathcal{Z}\) are independent random variables, such that \(\mathcal{U}\) is uniformly distributed over [0,1] and the law of \(\mathcal{Z}\) is given by

$${\mathbb{P}}(\mathcal{Z}>u) = \frac{1}{{\mathbb{E}}[H_1]} \int_{(u,\infty)} x\,\nu({\mathrm{d}}{x}), \quad u\ge 0, $$

where \(H=(H_t, t\ge 0)\) denotes the ascending ladder height process of ξ and ν its Lévy measure. From the proof of Lemma 3 in [13] we know that H has no linear drift and that its Lévy measure is given by (up to a multiplicative constant)

$$\nu({\mathrm{d}}{x}) = \frac{e^{2x}}{(e^{2x}-1)^{1+\alpha/2}}{\mathrm{d}}{x}. $$

Using the above expression, integration by parts, the integral identity (20) and the reflection formula for the gamma function (15) we obtain

$${\mathbb{E}}[H_1] = \int_0^{\infty} x \nu({\mathrm{d}}{x}) = \frac{\pi}{\alpha \sin(\frac{\pi \alpha}{2})}. $$

Let us find the Mellin transform of \(L_1\). We use the independence of \(\mathcal{U}\) and \({\mathcal{Z}}\) and obtain

where we have again applied integration by parts and the integral identity (20). Combining the above two expressions with (73) we have

$${\mathbb{E}}\bigl[L_1^{s-1}\bigr] = {\mathbb{E}}\bigl[{\mathcal{G}}^{s-1}\bigr]\times {\mathbb{E}}\bigl[I(\alpha,\xi)^{s-1}\bigr] = \frac{{\varGamma}(s)}{{\varGamma}(\frac{d-\alpha}{2})} \frac{{\varGamma}(\frac{d-\alpha s}{2})}{{\varGamma}(1-\frac{\alpha(1-s)}{2})}. $$

The function on the right-hand side of the above equation has simple poles at the points \(\{-n\}_{n\ge 0}\) and \(\{(d+2m)/\alpha\}_{m\ge 0}\). We express the density of \(L_1\) as the inverse Mellin transform, compute the residues and use a technique similar to the one in the proof of Theorem 4 to obtain the series representation (74). The details are left to the reader. □