
3.1 Lévy-Itô Decomposition

Let \((\Omega , \fancyscript{F}, \mathrm P )\) be a probability space and \((R^d, \fancyscript{B}(R^d), \langle \cdot ,\cdot \rangle )\) be a d-dimensional Euclidean space \(R^d\) with the \(\sigma \)-algebra of Borel subsets \(\fancyscript{B}(R^d)\), the scalar product \(\langle x,y\rangle =\sum\nolimits ^{d}_{j=1}x_jy_j\) for row vectors \(x=(x_1,\ldots ,x_d)\), \(y=(y_1,\ldots ,y_d)\), and the norm \(|x|=\sqrt{\langle x,x\rangle }\).

We assume that the reader is familiar with the foundations of probability theory based on measure theory.

A mapping \(X:R_+\times \Omega \rightarrow R^d\) such that for each \(B\in \fancyscript{B}(R^d)\) and \(t\ge 0\) \(\left\{ \omega :X(t,\omega )\in B\right\} \in \fancyscript{F}\) is called a d-dimensional stochastic process.

For fixed \(\omega \in \Omega \) a function \(X(\cdot ,\omega )\) is called a sample path of \(X\). Later we shall use the notation \(X=\left\{ X_t,t\ge 0\right\} \).

If for each \(t\ge 0\) and \(\varepsilon >0\)

$$\begin{aligned} \lim \limits _{h\rightarrow 0}\mathrm P \left\{ |X_{t+h}-X_t|>\varepsilon \right\} =0, \end{aligned}$$

a process \(X=\{X_t,t\ge 0\}\) is called stochastically continuous.

Definition 3.1

A d-dimensional stochastic process \(X=\left\{ X_t,t\ge 0\right\} \) is an additive process if the following conditions are satisfied:

  1. (1)

    for any \(n\ge 1\) and \(0\le t_0<t_1<\cdots <t_n\), increments \(X_{t_0}, X_{t_1}-X_{t_0}, \ldots , X_{t_n}-X_{t_{n-1}}\) are independent;

  2. (2)

    \(X_0=0\) \(\mathrm P \)-a.e.;

  3. (3)

    \(X\) is stochastically continuous;

  4. (4)

    \(\mathrm P \)-a.e. sample paths are right-continuous in \(t\ge 0\) and have left limits in \(t>0\).

An additive process in law is a stochastic process satisfying (1)–(3).

Let \(X=\left\{ X_t, t\ge 0\right\} \) be a d-dimensional additive process.

Let \(U_{\varepsilon }=\left\{ x\in R^d:|x|>\varepsilon \right\} \), \(\varepsilon >0\), \(\fancyscript{B}_{\varepsilon }(R^d)=\fancyscript{B}(R^d)\cap U_{\varepsilon }\).

For \(B\in \fancyscript{B}_\varepsilon (R^d)\) define

$$\begin{aligned} p(t, B)=\sum \limits _{0\le s\le t}1_{B}(X_s-X_{s-}) \end{aligned}$$

and

$$\begin{aligned} X^{B}_t=\sum \limits _{0\le s\le t}(X_s-X_{s-})1_B(X_s-X_{s-}), \quad t\ge 0. \end{aligned}$$

The following properties hold true (see, e.g., [13]).

  1. (i)

    For each \(B\in \fancyscript{B}_{\varepsilon }(R^d)\), \(t>0\), the function

    $$\begin{aligned} \Pi (t,B):=\mathrm E p(t,B) \end{aligned}$$

    is finite and continuous in \(t\). The stochastic process \(p(t,B)\), \(t\ge 0\), is a Poisson additive process with mean function \(\Pi (t,B)\), \(t\ge 0\), i.e., it satisfies the assumptions (1)–(4) and for each \(t>0\), \(k=0,1,\ldots \)

    $$\begin{aligned} \mathrm P \left\{ p(t,B)=k\right\} =e^{-\Pi (t,B)}\frac{(\Pi (t,B))^k}{k!}. \end{aligned}$$

    Moreover,

    $$\begin{aligned} \int \limits _{R^d\backslash \{0\}}(|x|^2\wedge 1)\Pi (t,\text{ d}x)<\infty . \end{aligned}$$
  2. (ii)

    For each \(B_1,\ldots ,B_m\in \fancyscript{B}_{\varepsilon }(R^d)\) such that \(B_j\cap B_k=\emptyset \), \(j\ne k\), stochastic processes

    $$\begin{aligned} \left\{ X_t^{B_1},t\ge 0\right\} ,\ldots ,\left\{ X_t^{B_m}, t\ge 0\right\} \quad \mathrm and \quad \left\{ X_t-\sum ^m_{j=1}X^{B_j}_t,t\ge 0\right\} \end{aligned}$$

    are additive mutually independent processes and for each \(\varepsilon >0\), \(t>0\)

    $$\begin{aligned} \mathrm E |X_t-X_t^{U_{\varepsilon }}|^2<\infty . \end{aligned}$$
  3. (iii)

    Let \(0<\varepsilon _n\downarrow 0\), as \(n\rightarrow \infty \), and \(\Delta _k=\left\{ x\in R^d:\varepsilon _k<|x|\le \varepsilon _{k-1}\right\} \), \(k=2,3,\ldots \), \(\Delta _1=\left\{ x\in R^d:|x|>\varepsilon _1\right\} \). There exists a subsequence \(\left\{ n_k,k=1,2,\ldots \right\} \) such that, as \(k\rightarrow \infty \), the sequence

    $$\begin{aligned} X_t^{(k)}:=X_t-X_t^{\Delta _1}-\sum \limits ^{n_k}_{j=2}(X_t^{\Delta _j}-\mathrm E X_t^{\Delta _j}), \quad t\ge 0, \end{aligned}$$

    converges uniformly on each finite time interval \(\mathrm P \)-a.e. to the continuous Gaussian additive process \(X^0=\left\{ X_t^0,t\ge 0\right\} \) such that

    $$\begin{aligned} \mathrm E e^{i\langle z,X_t^0\rangle }=\exp \left\{ i\langle z,a(t)\rangle -\frac{1}{2}\langle zA(t),z\rangle \right\} , \quad z\in R^d, \end{aligned}$$

    where \(a(t)\), \(t\ge 0\), is a continuous d-dimensional function and \(A(t)\) is a continuous symmetric nonnegative definite \(d\times d\) matrix valued function.

  4. (iv)

    For each \(z\in R^d\) and \(t>0\)

    $$\begin{aligned} \mathrm E \exp \left\{ i\langle z,X_t\rangle \right\}&=\exp \Bigg \{i\langle z,a(t)\rangle -\frac{1}{2}\langle zA(t),z\rangle \Bigg . \nonumber \\&\quad + \Bigg .\int \limits _{R^d\backslash \{0\}}\left(e^{i\langle z,x\rangle }-1-i\langle z,x\rangle 1_{\{|x|\le 1\}}\right)\Pi (t,\text{ d}x)\Bigg \}, \end{aligned}$$
    (3.1)

    implying the Lévy-Khinchine formula for \(t=1\).
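
As a simple sanity check of formula (3.1) (an illustration added here, not part of the original text): for the standard Poisson process with intensity \(\lambda \), the triplet is \(a(t)=\lambda t\), \(A(t)=0\), \(\Pi (t,\cdot )=\lambda t\,\varepsilon _1\), and (3.1) reduces to \(\mathrm E e^{izX_1}=e^{\lambda (e^{iz}-1)}\). A minimal Python sketch comparing the two sides:

```python
import cmath
import math

lam, z = 1.7, 0.9  # intensity and argument, chosen arbitrarily

# left side: E exp(izX_1) summed directly over the Poisson distribution
lhs = sum(math.exp(-lam) * lam**k / math.factorial(k) * cmath.exp(1j * z * k)
          for k in range(60))

# right side: formula (3.1) with a(1) = lam, A(1) = 0, Pi(1,.) = lam * delta_1;
# the jump size 1 satisfies |x| <= 1, so the compensating term -iz is kept
rhs = cmath.exp(1j * z * lam + lam * (cmath.exp(1j * z) - 1 - 1j * z))

err = abs(lhs - rhs)
```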

Definition 3.2

A d-dimensional additive process \(X=\left\{ X_t,t\ge 0\right\} \) is called a Lévy process if it is temporally homogeneous, i.e., for each \(s,t>0\),

$$\begin{aligned} \fancyscript{L}\left(X_{t+s}-X_s\right)=\fancyscript{L}(X_t). \end{aligned}$$

Definition 3.3

A d-dimensional stochastic process \(X=\{X_t,t\ge 0\}\) is called a Lévy process in law if it is temporally homogeneous and satisfies the assumptions (1)–(3).

An additive process is a Lévy one if and only if the functions \(a(t)\), \(A(t)\) and \(\Pi (t,B)\), \(t\ge 0\), are linear in \(t\), i.e., \(a(t)=at\), \(A(t)=At\) and \(\Pi (t, B)=\Pi (B)t\). The triplet \((a, A, \Pi )\), where \(a\in R^d\), \(A\) is a symmetric nonnegative definite \(d\times d\) matrix and \(\Pi (B)\), \(B\in \fancyscript{B}(R^d_0)\), is a measure such that

$$\begin{aligned} \int \limits _{R^d_0}(|x|^2\wedge 1) \Pi (\text{ d}x)<\infty , \end{aligned}$$

is called the triplet of Lévy characteristics; \(R^d_0:=R^d\backslash \{0\}\). This triplet uniquely defines the finite dimensional distributions \(\fancyscript{L}\left(X_{t_1},X_{t_2},\ldots ,X_{t_n}\right)\), \(0\le t_1<t_2<\cdots <t_n\), \(n\ge 1\).

The class of Lévy triplets corresponds one-to-one with the class of Lévy processes in law.

A d-dimensional Lévy process \(X\) with the triplet of Lévy characteristics \((0,I_d,0)\), where \(I_d\) is the \(d\times d\) unit matrix, is called the standard d-dimensional Brownian motion.

Definition 3.4

A probability distribution \(\mu \) on \(R^d\) is called infinitely divisible if, for any positive integer \(n\), there exists a probability measure \(\mu _n\) on \(R^d\) such that \(\mu =\underbrace{\mu _{n}*\cdots *\mu _{n}}_{n\ \mathrm{times}}\). Here “\(*\)” means the convolution of probability distributions.

We shall write that \(\mu \in ID(R^d)\).
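
For instance (a standard example, added here for illustration), the Poisson distribution with mean \(\lambda \) is infinitely divisible: \(\mu =\mu _n^{*n}\) with \(\mu _n\) Poisson with mean \(\lambda /n\). A short numerical check of the convolution identity:

```python
import math

def poisson_pmf(lam, K):
    """First K probabilities of the Poisson(lam) distribution."""
    return [math.exp(-lam) * lam**k / math.factorial(k) for k in range(K)]

def convolve(p, q):
    """Convolution of two pmfs on {0,1,...}, truncated to len(p) terms."""
    return [sum(p[j] * q[k - j] for j in range(k + 1)) for k in range(len(p))]

lam, n, K = 2.0, 4, 30
mu_n = poisson_pmf(lam / n, K)
mu = mu_n
for _ in range(n - 1):          # n-fold convolution mu_n * ... * mu_n
    mu = convolve(mu, mu_n)

err = max(abs(a - b) for a, b in zip(mu, poisson_pmf(lam, K)))
```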

From the celebrated Lévy-Khinchine formula and (3.1) it follows that the class of infinitely divisible distributions \(\mu \) corresponds one-to-one with the class of Lévy processes in law by means of the equality

$$\begin{aligned}&\mathrm E \exp \left\{ i\langle z,X_1\rangle \right\} \\&\qquad =\exp \left\{ i\langle z,a\rangle -\frac{1}{2}\langle zA, z\rangle +\int \limits _{R_0^d}\left(e^{i\langle z, x\rangle }-1-i\langle z,x\rangle 1_{\left\{ |x|\le 1\right\} }\right)\Pi (\text{ d}x)\right\} \\&\qquad = \int \limits _{R^d}e^{i\langle z,x\rangle }\mu (\text{ d}x). \end{aligned}$$

For each Lévy process in law \(X=\left\{ X_t,t\ge 0\right\} \) there exists a modification \(Y=\left\{ Y_t,t\ge 0\right\} \) with right-continuous sample paths in \(t\ge 0\), having left limits in \(t>0\) and satisfying \(P(X_t\ne Y_t)=0\), \(t\ge 0\).

3.2 Self-Decomposable Lévy Processes

Definition 3.5

A probability distribution \(\mu \) on \(R^d\) is called self-decomposable, or of class \(L(R^d)\), if, for any \(b>1\), there exists a probability measure \(\rho _b\) on \(R^d\) such that

$$\begin{aligned} \hat{\mu }(z)=\hat{\mu }(b^{-1}z)\hat{\rho }_b(z), \quad z\in R^d, \end{aligned}$$
(3.2)

where \(\hat{\mu }\) means the characteristic function of the probability distribution \(\mu \) on \(R^d\).

If \(\mu \) is self-decomposable, then \(\mu \) is infinitely divisible and, for any \(b>1\), \(\rho _b\) in the decomposition (3.2) is uniquely determined and \(\rho _b\) is infinitely divisible.

Definition 3.6

A Lévy process \(X=\left\{ X_t, t\ge 0\right\} \) in law is said to be self-decomposable if the probability distribution \(\fancyscript{L}(X_1)\) is self-decomposable.

The Gaussian Lévy processes in law are, obviously, self-decomposable, because in this case (3.2) is satisfied with

$$\begin{aligned} \hat{\rho }_b(z)=\exp \left\{ i\langle z,(1-b^{-1})a\rangle -\frac{1}{2}\langle z(1-b^{-2})A,z\rangle \right\} . \end{aligned}$$
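
In dimension \(d=1\), this factorization can be checked numerically for arbitrary illustrative values of \(a\), \(A\) and \(b\) (a sketch added here, not part of the original text):

```python
import cmath

a, A, b = 0.7, 1.3, 2.5         # d=1 Gaussian triplet and scaling b > 1, arbitrary

def mu_hat(z):
    # characteristic function of the Gaussian law with mean a and variance A
    return cmath.exp(1j * z * a - 0.5 * A * z * z)

def rho_hat(z):
    # the component rho_b given in the text for the Gaussian case
    return cmath.exp(1j * z * (1 - 1 / b) * a - 0.5 * (1 - 1 / b**2) * A * z * z)

z = 1.1
err = abs(mu_hat(z) - mu_hat(z / b) * rho_hat(z))   # residual of (3.2)
```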

A criterion of self-decomposability of non-Gaussian \(\mu \in ID(R^d)\) with the triplet \((a, A, \Pi )\) of Lévy characteristics will be formulated using the canonical polar decomposition of a Lévy measure \(\Pi \) (see Remark 16 in [4], Lemma 1 in [5] and Proposition 2 in [6]).

Write

$$\begin{aligned} S^{d-1}=\left\{ x\in R^d:|x|=1\right\} , \quad K=\int \limits _{R_0^d}(|x|^2\wedge 1)\Pi (\text{ d}x)>0. \end{aligned}$$

Proposition 3.7

There exists a pair \((\lambda , \Pi _{\xi })\), where \(\lambda \) is a probability measure on \(S^{d-1}\) and \(\Pi _{\xi }\) is a \(\sigma \)-finite measure on \((0,\infty )\) such that \(\Pi _{\xi }(C)\) is measurable in \(\xi \in S^{d-1}\) for every \(C\in \fancyscript{B}\left((0,\infty )\right)\),

$$\begin{aligned} \int \limits ^{\infty }_{0}(r^2\wedge 1)\Pi _{\xi }(\text{ d}r)\equiv K \end{aligned}$$
(3.3)

and

$$\begin{aligned} \Pi (B)=\int \limits _{S^{d-1}}\lambda (\text{ d}\xi )\int \limits ^{\infty }_{0}1_B(r\xi )\Pi _{\xi }(\text{ d}r), \quad B\in {\fancyscript{B}}(R_{0}^{d}). \end{aligned}$$
(3.4)

If a pair \((\lambda ^{\prime },\Pi ^{\prime }_{\xi })\) satisfies (3.3) and (3.4), then \(\lambda ^{\prime }=\lambda \) and \(\Pi _{\xi }=\Pi ^{\prime }_{\xi }\) \(\lambda \)-a.e.

Proof

Existence. Consider the probability space \((R_0^d, \fancyscript{B}(R_0^d),\mathrm P _{\Pi })\), where

$$\begin{aligned} \mathrm P _{\Pi }(B)=K^{-1}\int \limits _{B}|x|^2\wedge 1 \Pi (\text{ d}x), \quad B\in \fancyscript{B}(R_0^d). \end{aligned}$$

Let \(N(x)=x\), \(R(x)=|x|\), \(\Xi (x)=\frac{x}{|x|}\), \(x\in R_0^d\), \(\lambda (B)=\mathrm P _{\Pi }\{\Xi \in B\}\), \(B\in \fancyscript{B}(S^{d-1})\), \(\Pi ^0_{\xi }(C)=\mathrm P _{\Pi }\{R\in C|\Xi =\xi \}\) (a regular version of the conditional distribution), and

$$\begin{aligned} \Pi _{\xi }(C)=\int \limits _{C}K(r^2\wedge 1)^{-1}\Pi ^0_{\xi }(\text{ d}r), \quad C\in \fancyscript{B}\left((0,\infty )\right), \quad \xi \in S^{d-1}. \end{aligned}$$

The pair \((\lambda ,\Pi _{\xi })\) satisfies (3.3) and (3.4). Indeed,

$$\begin{aligned} \int \limits ^{\infty }_{0}(r^2\wedge 1)\Pi _{\xi }(\text{ d}r)=K\int \limits ^{\infty }_{0}\Pi ^0_{\xi }(\text{ d}r)\equiv K, \end{aligned}$$

and for every nonnegative measurable function \(f(x)\), \(x\in R_0^d\),

$$\begin{aligned} \int \limits _{S^{d-1}}\lambda (\text{ d}\xi )\int \limits ^{\infty }_{0}f(r\xi )\Pi _{\xi }(\text{ d}r)&=\int \limits _{S^{d-1}}\lambda (\text{ d}\xi )\int \limits ^{\infty }_{0}\frac{Kf(r\xi )}{r^2\wedge 1}\Pi ^0_{\xi }(\text{ d}r)\\&=\mathrm E _{\Pi }\left[\mathrm E _{\Pi }\left(\frac{Kf(R\Xi )}{R^2 \wedge 1}|\Xi \right)\right] \\&= \mathrm E _{\Pi }\left(\frac{Kf(N)}{R^2\wedge 1}\right)=\int \limits _{R_0^d}f(x)\Pi (\text{ d}x). \end{aligned}$$

It remains to take \(f(x)=1_B(x)\), \(B\in \fancyscript{B}(R_0^d)\).

Uniqueness. Let

$$\begin{aligned} \Pi (B)=\int \limits _{S^{d-1}}\lambda ^{\prime }(\text{ d}\xi )\int \limits ^{\infty }_{0}1_B(r\xi )\Pi ^{\prime }_{\xi }(\text{ d}r) \end{aligned}$$
(3.5)

and

$$\begin{aligned} \int \limits ^{\infty }_{0}(r^2\wedge 1)\Pi ^{\prime }_{\xi }(\text{ d}r)\equiv K. \end{aligned}$$
(3.6)

Then, for all \(B\in \fancyscript{B}(S^{d-1})\), from (3.3)–(3.6) we find that

$$\begin{aligned}&\int \limits _{R_0^d}1_B\left(\frac{x}{|x|}\right)K^{-1}(|x|^2\wedge 1)\Pi (\text{ d}x)\\&\qquad =\int \limits _{S^{d-1}}\lambda (\text{ d}\xi )\int \limits ^{\infty }_{0}1_B(\xi )K^{-1}(r^2\wedge 1)\Pi _{\xi }(\text{ d}r)=\lambda (B) \end{aligned}$$

and

$$\begin{aligned}&\int \limits _{R_0^d}1_B\left(\frac{x}{|x|}\right)K^{-1}(|x|^2\wedge 1)\Pi (\text{ d}x)\\&\qquad =\int \limits _{S^{d-1}}\lambda ^{\prime }(\text{ d}\xi )\int \limits ^{\infty }_{0}1_B(\xi )K^{-1}(r^2\wedge 1)\Pi ^{\prime }_{\xi }(\text{ d}r)=\lambda ^{\prime }(B), \end{aligned}$$

proving that \(\lambda =\lambda ^{\prime }\).

Finally, for every nonnegative measurable function \(h(r)\), \(r>0\),

$$\begin{aligned} \int \limits _{R^d_0}h(|x|)\Pi (\text{ d}x)=\int \limits _{S^{d-1}}\lambda (\text{ d}\xi )\int \limits ^{\infty }_{0}h(r)\Pi _{\xi }(\text{ d}r)=\int \limits _{S^{d-1}}\lambda (\text{ d}\xi )\int \limits ^{\infty }_{0}h(r)\Pi ^{\prime }_{\xi }(\text{ d}r), \end{aligned}$$

implying that \(\Pi _{\xi }=\Pi ^{\prime }_{\xi }\) \(\lambda \)-a.e. \(\square \)

Proposition 3.8

[7]. If

$$\begin{aligned} \Pi (B)=\int \limits _{B}g(x)\text{ d}x,\quad B\in \fancyscript{B}(R_0^d), \end{aligned}$$
(3.7)

then (3.3) and (3.4) hold with

$$\begin{aligned} \lambda (\text{ d}\xi )&=c(\xi )\text{ d}\xi ,\\ \Pi _{\xi }(\text{ d}r)&=r^{d-1}g(r\xi )c^{-1}(\xi )\text{ d}r, \end{aligned}$$

where

$$\begin{aligned} c(\xi )=K^{-1}\int \limits ^{\infty }_{0}(r^2\wedge 1)r^{d-1}g(r\xi )\text{ d}r, \end{aligned}$$

assuming that

$$\begin{aligned} K:=\int \limits _{R_0^d}(|x|^2\wedge 1)g(x)\text{ d}x>0. \end{aligned}$$

Proof

Write

$$\begin{aligned}\begin{array}{rcl} x_1&=& r\cos \varphi _1,\\x_2 &=&r\sin \varphi _1\cos \varphi _2,\\&\cdots & \\x_{d-1}&=& r\sin \varphi _1\sin \varphi _2\cdots \sin \varphi _{d-2}\cos \varphi _{d-1},\\ x_{d}&=& r\sin \varphi _1\sin \varphi _2\cdots \sin \varphi _{d-2}\sin \varphi _{d-1},\end{array} \end{aligned}$$

where \(r\ge 0\), \(0\le \varphi _1\le \pi ,\) \(\ldots \), \(0\le \varphi _{d-2}\le \pi \), \(0\le \varphi _{d-1}<2\pi \). It is well-known that the Jacobian

$$\begin{aligned} J=\frac{D(x_1,x_2,\ldots ,x_d)}{D(r,\varphi _1,\varphi _2,\ldots ,\varphi _{d-1})}=r^{d-1}\sin ^{d-2}\varphi _{1}\sin ^{d-3}\varphi _2\cdots \sin \varphi _{d-2}. \end{aligned}$$

Denoting \(\xi =\frac{x}{r}\) and \(\text{ d}\xi =\sin ^{d-2}\varphi _1\sin ^{d-3}\varphi _{2}\cdots \sin \varphi _{d-2}\text{ d}\varphi _1\text{ d}\varphi _2\cdots \text{ d}\varphi _{d-2}\text{ d}\varphi _{d-1}\), for any function \(f(x)\) that is Borel measurable and integrable with respect to the Lebesgue measure on \(R^d\), we find that

$$\begin{aligned} \int \limits _{R^d}f(x)\text{ d}x=\int \limits _{S^{d-1}}\text{ d}\xi \int \limits ^{\infty }_{0}f(r\xi )r^{d-1}\text{ d}r=\int \limits _{S^{d-1}}c(\xi )\text{ d}\xi \int \limits ^{\infty }_{0}f(r\xi )r^{d-1}c^{-1}(\xi )\text{ d}r \end{aligned}$$
(3.8)

and apply formula (3.8) to the functions \(f_{B}(x)=g(x)1_B(x)\), \(x\in R^d\), \(B\in \fancyscript{B}(R_0^d)\). The identity (3.3) is trivially satisfied. \(\square \)
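
Formula (3.8) can be checked numerically; for example, with \(d=2\) and \(f(x)=e^{-|x|^2}\), both sides equal \(\pi \) (an illustrative sketch, not part of the original text):

```python
import math

# left side: the Gaussian integral over R^2 of exp(-|x|^2) equals pi
lhs = math.pi

# right side of (3.8): surface measure of S^1 (= 2*pi) times the radial
# integral of exp(-r^2) * r over (0, infinity), midpoint rule
n, R = 200000, 12.0
h = R / n
radial = sum(math.exp(-((i + 0.5) * h) ** 2) * (i + 0.5) * h for i in range(n)) * h
rhs = 2 * math.pi * radial

err = abs(lhs - rhs)
```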

The following criterion of self-decomposability of a probability distribution \(\mu \in ID (R^d)\) with the triplet of Lévy characteristics \((a, A, \Pi )\) is well-known (see Theorem 15.10 of [2] and [8]).

Theorem 3.9

A probability distribution \(\mu \in ID (R^d)\) or a Lévy process in law with the triplet \((a, A, \Pi )\) of Lévy characteristics is self-decomposable if and only if in (3.4)

$$\begin{aligned} \Pi _{\xi }(\text{ d}r)=\frac{k_{\xi }(r)}{r}\text{ d}r, \end{aligned}$$

where the nonnegative function \(k_{\xi }(r)\) is measurable in \(\xi \in S^{d-1}\) and decreasing in \(r>0\) for \(\lambda \)-a.e. \(\xi \).

Corollary 3.10

If (3.7) is satisfied, then \(\mu \in ID(R^d)\) or a corresponding Lévy process in law with the triplet \((a, A, \Pi )\) of Lévy characteristics is self-decomposable if and only if the function \(k_{\xi }(r):=r^dg(r\xi )\) is decreasing in \(r>0\) for a.e. \(\xi \in S^{d-1}\) with respect to the Lebesgue surface measure on \(S^{d-1}\).
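
As a standard one-dimensional illustration of Corollary 3.10 (added here as an example; the gamma Lévy density below is a well-known fact, not taken from the text above): the gamma distribution on \(R_+\) with shape \(\gamma >0\) and rate \(\beta >0\) has Lévy density \(g(x)=\gamma x^{-1}e^{-\beta x}1_{(0,\infty )}(x)\), so that, with \(d=1\) and \(\xi =1\),

$$\begin{aligned} k_{\xi }(r)=r^{d}g(r\xi )=\gamma e^{-\beta r}, \quad r>0, \end{aligned}$$

which is decreasing in \(r\); hence the gamma distribution is self-decomposable.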

3.3 Lévy Subordinators

Definition 3.11

A univariate Lévy process with nonnegative increments is called a Lévy subordinator.

The class of Lévy subordinators corresponds one-to-one with the class \(ID(R_+)\) of infinitely divisible distributions on \(R_+\). It is well-known (see, e.g., [2, 3, 9, 10]) that for \(\tau \in ID(R_+)\) the Laplace exponent

$$\begin{aligned} \psi (\theta ):=-\log \left(\int \limits ^{\infty }_{0}e^{-\theta u}\tau (\text{ d}u)\right)=\beta _0\theta +\int \limits ^{\infty }_{0}\left(1-e^{-\theta u}\right)\rho (\text{ d}u), \quad \theta \ge 0, \end{aligned}$$

is defined uniquely by the characteristics \((\beta _0,\rho )\), where \(\beta _0\ge 0\) and \(\rho \) is a \(\sigma \)-finite measure on \((0,\infty )\), satisfying

$$\begin{aligned} \int \limits ^{\infty }_{0}(u\wedge 1)\rho (\text{ d}u)<\infty . \end{aligned}$$
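
For example (an illustrative computation, not from the original text), the gamma subordinator with characteristics \(\beta _0=0\), \(\rho (\text{ d}u)=\gamma u^{-1}e^{-\beta u}\text{ d}u\) has \(\psi (\theta )=\gamma \log (1+\theta /\beta )\), which can be verified by quadrature:

```python
import math

g, beta, theta = 1.3, 2.0, 0.7    # illustrative parameter values

# psi(theta) = integral over (0, inf) of (1 - e^{-theta*u}) g u^{-1} e^{-beta*u} du,
# midpoint rule; the integrand is bounded near 0 since (1 - e^{-theta*u})/u -> theta
n, U = 400000, 40.0
h = U / n
psi = sum((1 - math.exp(-theta * u)) * g / u * math.exp(-beta * u)
          for u in ((i + 0.5) * h for i in range(n))) * h

err = abs(psi - g * math.log(1 + theta / beta))
```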

Extending the Thorin class and following Bondesson [11], we introduce the scale of Thorin classes \(T_{\varkappa }(R_+)\), \(0<\varkappa \le \infty \), as increasing subclasses of \(ID(R_+)\); here \(T_{\infty }(R_+)\) is the minimal class of probability distributions on \(R_+\), closed under convolutions and weak limits, containing all classes \(T_{\varkappa }(R_+)\), \(\varkappa >0\), and it turns out that \(T_{\infty }(R_+)=ID(R_+)\) (Theorem 3.15).

Definition 3.12

An infinitely divisible distribution \(\tau \) on \(R_+\) with the characteristics \((\beta _0,\rho )\) is of the Thorin class \(T_{\varkappa }(R_+)\), \(\varkappa >0\), if \(\rho (\text{ d}t)=l(t)\text{ d}t\) and \(k_{\varkappa }(t):=t^{2-\varkappa }l(t)\), \(t>0\), is completely monotone, i.e., \(k_{\varkappa }\) is infinitely differentiable and \((-1)^nk^{(n)}_{\varkappa }(t)\ge 0\) for all \(n\ge 0\) and \(t>0\).

Lévy subordinators corresponding to the distributions from \(T_{\varkappa }(R_+)\), \(0<\varkappa <\infty \), are called Thorin subordinators.

According to Bernstein’s theorem (see, e.g., [12]) there exists a unique positive measure \(Q_{\varkappa }\) on \(R_+\) such that

$$\begin{aligned} k_{\varkappa }(t)=\int \limits ^{\infty }_{0}e^{-vt}Q_{\varkappa }(\text{ d}v), \quad t>0, \end{aligned}$$

and

$$\begin{aligned} Q_{\varkappa }(\{0\})=\lim \limits _{t\rightarrow \infty }k_{\varkappa }(t). \end{aligned}$$

Write

$$\begin{aligned} a_{\varkappa }(t)=t^{-\varkappa }\int \limits ^{t}_{0}v^{\varkappa -1}e^{-v}\text{ d}v+t^{-\varkappa +1}\int \limits ^{\infty }_{t}v^{\varkappa -2}e^{-v}\text{ d}v, \quad t>0, \end{aligned}$$

and observe that, as \(t\rightarrow \infty \),

$$\begin{aligned} a_{\varkappa }(t)\sim \Gamma (\varkappa )t^{-\varkappa }, \end{aligned}$$

and

$$\begin{aligned} a_{\varkappa }(t)\sim \left\{ \begin{array}{l@{\quad }l} (\varkappa (1-\varkappa ))^{-1},&\mathrm if \quad 0<\varkappa <1, \\ \log \dfrac{1}{t},&\mathrm if \quad \varkappa =1, \\ \Gamma (\varkappa -1)t^{1-\varkappa },&\mathrm if \quad \varkappa >1, \end{array} \right. \end{aligned}$$
(3.9)

as \(t\rightarrow 0\).
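
The small-\(t\) behaviour in (3.9) can be checked numerically, e.g. for \(\varkappa =\frac{1}{2}\), where the limit is \((\varkappa (1-\varkappa ))^{-1}=4\) (an illustrative sketch, not part of the original text):

```python
import math

def a(kappa, t, n=100000):
    """a_kappa(t) computed by the midpoint rule after substituting v = e^w,
    which removes the integrable singularities of v^{kappa-1} and v^{kappa-2}."""
    def quad(p, w0, w1):
        h = (w1 - w0) / n
        s = 0.0
        for i in range(n):
            v = math.exp(w0 + (i + 0.5) * h)
            s += v ** p * math.exp(-v) * v      # dv = v dw
        return s * h
    I1 = quad(kappa - 1, -40.0, math.log(t))    # integral of v^{kappa-1} e^{-v} over (0, t)
    I2 = quad(kappa - 2, math.log(t), 5.0)      # integral of v^{kappa-2} e^{-v} over (t, inf)
    return t ** (-kappa) * I1 + t ** (1 - kappa) * I2

kappa = 0.5
err = abs(a(kappa, 1e-6) - 1 / (kappa * (1 - kappa)))
```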

Proposition 3.13

An infinitely divisible distribution \(\tau \) on \(R_+\) with the characteristics \((\beta _0, \rho )\) is of the Thorin class \(T_{\varkappa }(R_+)\), \(\varkappa >0\), if and only if its Laplace exponent has the form

$$\begin{aligned} \psi _{\varkappa }(\theta )=\left\{ \begin{array}{l} \beta _0\theta +\Gamma (\varkappa -1)\int \limits ^{\infty }_{0}\left(v^{-\varkappa +1}-(\theta +v)^{-\varkappa +1}\right)Q_{\varkappa }(\text{ d}v), \quad \mathrm if \quad \varkappa \ne 1, \\ \beta _0\theta +\int \limits ^{\infty }_{0}\log \left(1+\frac{\theta }{v}\right)Q_1(\text{ d}v), \quad \mathrm if \quad \varkappa =1, \end{array} \right. \end{aligned}$$
(3.10)

where the measure \(Q_{\varkappa }\), called the Thorin measure, satisfies

$$\begin{aligned} \int \limits ^{\infty }_{0}a_{\varkappa }(v)Q_{\varkappa }(\text{ d}v)<\infty , \end{aligned}$$
(3.11)

implying that \(Q_{\varkappa }(\{0\})=0\) for \(\varkappa \ge 1\).

Proof

We have that

$$\begin{aligned} \int \limits ^{\infty }_{0}(t\wedge 1)l(t)\text{ d}t=\int \limits ^{\infty }_{0}(t\wedge 1)t^{\varkappa -2}\int \limits ^{\infty }_{0}e^{-tv}Q_{\varkappa }(\text{ d}v)\text{ d}t=\int \limits ^{\infty }_{0}a_{\varkappa }(v)Q_{\varkappa }(\text{ d}v) \end{aligned}$$

and

$$\begin{aligned} \psi _{\varkappa }(\theta )&=\beta _0\theta +\int \limits ^{\infty }_{0}\left(1-e^{-\theta t}\right)t^{\varkappa -2}\int \limits ^{\infty }_{0}e^{-vt}Q_{\varkappa }(\text{ d}v)\text{ d}t\\&=\beta _0\theta +\int \limits ^{\infty }_{0}\left(\int \limits ^{\infty }_{0}t^{\varkappa -2}\left(1-e^{-\theta t}\right)e^{-vt}\text{ d}t\right)Q_{\varkappa }(\text{ d}v). \end{aligned}$$

However, for \(0<\varkappa <1\),

$$\begin{aligned} \int \limits ^{\infty }_{0}t^{\varkappa -2}\left(1-e^{-\theta t}\right)e^{-vt}\text{ d}t&=-\int \limits ^{\infty }_{0}e^{-vt}t^{\varkappa -2}\sum \limits ^{\infty }_{k=1}\frac{(-t\theta )^k}{k!}\text{ d}t \\&= -\sum \limits ^{\infty }_{k=1}\frac{(-\theta )^k}{k!}v^{-k-\varkappa +1}\Gamma (k+\varkappa -1)\\&=-v^{-\varkappa +1}\sum \limits ^{\infty }_{k=1}\frac{\Gamma (k+\varkappa -1)}{k!}\left(-\frac{\theta }{v}\right)^k \\&=v^{-\varkappa +1}\Gamma (\varkappa -1)\left(1-\left(1+\frac{\theta }{v}\right)^{-\varkappa +1}\right), \end{aligned}$$

for \(\varkappa =1\), as a Frullani integral,

$$\begin{aligned} \int \limits ^{\infty }_{0}\left(1-e^{-\theta t}\right)t^{-1}e^{-vt}\text{ d}t=\log \left(1+\frac{\theta }{v}\right) \end{aligned}$$

and, for \(\varkappa >1\),

$$\begin{aligned} \int \limits ^{\infty }_{0}t^{\varkappa -2}\left(1-e^{-\theta t}\right) e^{-vt} \text{ d}t&=\frac{1}{\varkappa -1}\int \limits ^{\infty }_{0}\left(1-e^{-\theta t}\right)e^{-vt} \text{ d}t^{\varkappa -1} \\&=\frac{1}{\varkappa -1}\int \limits ^{\infty }_{0}t^{\varkappa -1}\left[ve^{-vt}-(\theta +v)e^{-(\theta +v)t}\right]\text{ d}t\\&=\frac{\Gamma (\varkappa )}{\varkappa -1}\left(v^{-\varkappa +1}-(\theta +v)^{-\varkappa +1}\right)\\&=\Gamma (\varkappa -1)\left(v^{-\varkappa +1}-(\theta +v)^{-\varkappa +1}\right). \end{aligned}$$

\(\square \)
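
The \(\varkappa >1\) computation above can be spot-checked numerically, e.g. for \(\varkappa =\frac{5}{2}\) (an illustration added here, not part of the original text):

```python
import math

kappa, theta, v = 2.5, 1.2, 0.7   # illustrative values with kappa > 1

# left side: integral over (0, inf) of t^{kappa-2} (1 - e^{-theta*t}) e^{-v*t} dt,
# midpoint rule; the integrand behaves like theta * t^{kappa-1} near 0
n, U = 200000, 60.0
h = U / n
lhs = sum(((i + 0.5) * h) ** (kappa - 2)
          * (1 - math.exp(-theta * (i + 0.5) * h))
          * math.exp(-v * (i + 0.5) * h) for i in range(n)) * h

# right side: Gamma(kappa-1) * (v^{1-kappa} - (theta+v)^{1-kappa})
rhs = math.gamma(kappa - 1) * (v ** (1 - kappa) - (theta + v) ** (1 - kappa))

err = abs(lhs - rhs)
```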

Remark 3.14

In view of (3.9), inequality (3.11) is satisfied if and only if the measure \(Q_{\varkappa }\) is a Radon measure such that for \(\varkappa \ne 1\)

$$\begin{aligned} \int \limits ^1_{0}u^{0\wedge (1-\varkappa )}Q_{\varkappa }(\text{ d}u)<\infty \quad \mathrm and \quad \int \limits ^{\infty }_{1}u^{-\varkappa }Q_{\varkappa }(\text{ d}u)<\infty , \end{aligned}$$

and for \(\varkappa =1\)

$$\begin{aligned} \int \limits ^{1}_{0}\log \left(\frac{1}{u}\right)Q_1(\text{ d}u)<\infty \quad \mathrm and \quad \int \limits ^{\infty }_{1}u^{-1}Q_1(\text{ d}u)<\infty . \end{aligned}$$

Recall now that the families of Tweedie or power-variance distributions

$$\begin{aligned} \left\{ Tw_p(\alpha ,\lambda ),\alpha >0,\lambda >0\right\} , \quad p\in R^1\backslash [0,1), \end{aligned}$$

are defined as exponential dispersion models (see [13–15]), satisfying the following properties: for each \(\alpha >0\), \(\lambda >0\) and given \(p\)

$$\begin{aligned} \int \limits _{R^1}xTw_{p}(\alpha ,\lambda )(\text{ d}x)&=\alpha ,\\ \int \limits _{R^1}(x-\alpha )^2Tw_p(\alpha ,\lambda )(\text{ d}x)&=\lambda ^{-1}\alpha ^p \end{aligned}$$

and \(Tw_0(\alpha ,\lambda ):=N(\alpha ,\lambda ^{-1})\), \(\alpha \in R^1\), \(\lambda >0\). It is known that for \(p\ge 1\), \(Tw_p(\alpha ,\lambda )\in ID(R_+)\). Moreover, for \(p>1\), \(\alpha >0\), \(\lambda >0\), \(Tw_p(\alpha ,\lambda )\in T_{\frac{1}{p-1}}(R_+)\), because their characteristics are

$$\begin{aligned} \left(0,c_{p,\lambda }t^{-2+\frac{1}{p-1}}\exp \left\{ -\frac{\alpha ^{1-p}}{p-1}\lambda t\right\} \text{ d}t\right), \end{aligned}$$

where

$$\begin{aligned} c_{p,\lambda }=\frac{\lambda ^{\frac{1}{p-1}}}{\Gamma \left(\frac{p}{p-1}\right)(p-1)^{\frac{p}{p-1}}}, \end{aligned}$$

and \(Tw_1(\alpha ,\lambda )\in ID(R_+)\) with characteristics \((0,\alpha \lambda \varepsilon _{\lambda ^{-1}}(\text{ d}t))\). The Thorin measure \(Q_{\frac{1}{p-1}}\) of \(Tw_p(\alpha ,\lambda )\), \(p>1\), obviously, equals

$$\begin{aligned} c_{p,\lambda }\varepsilon _{\frac{\lambda \alpha ^{1-p}}{p-1}}(\text{ d}t). \end{aligned}$$

Theorem 3.15

[16].

  1. (i)

    Thorin classes \(T_{\varkappa }(R_+)\), \(0<\varkappa <\infty \), are increasing, closed under convolutions and weak limits;

  2. (ii)

    \(T_{\infty }(R_+)=ID(R_+)\);

  3. (iii)

    Thorin classes \(T_{\varkappa }(R_+)\), \(0<\varkappa \le \infty \), are generalized convolutions of Tweedie distributions \(Tw_{\frac{\varkappa +1}{\varkappa }}(\alpha ,\lambda )\), \(\alpha >0\), \(\lambda >0\).

Proof

Because for \(0<\varkappa _1<\varkappa _2\)

$$\begin{aligned} k_{\varkappa _2}(t)=t^{\varkappa _1-\varkappa _2}k_{\varkappa _1}(t), \quad t>0, \end{aligned}$$

\(t^{-\gamma }\), \(t>0\), \(\gamma >0\), are completely monotone functions and the complete monotonicity is preserved under multiplication, from Definition 3.12 it follows that \(T_{\varkappa _1}(R_+)\subset T_{\varkappa _2}(R_+)\).

Closedness of \(T_{\varkappa }(R_+)\) under convolutions and weak limits follows from the well-known properties that the complete monotonicity is preserved under formation of linear combinations and pointwise limits (see, e.g., [12]).

  1. (ii)

    Observe that the characteristics and the Laplace exponent of \(Tw_{\frac{\varkappa +1}{\varkappa }}(\alpha ,\lambda )\) are equal to

    $$\begin{aligned} \left(0,\frac{\lambda ^{\varkappa }\varkappa ^{1+\varkappa }}{\Gamma (1+\varkappa )}t^{-2+\varkappa }e^{-\varkappa \lambda \alpha ^{-\frac{1}{\varkappa }}t}\text{ d}t\right) \end{aligned}$$

    and

    $$\begin{aligned} \psi _{\frac{1+\varkappa }{\varkappa },\alpha ,\lambda }(\theta )=\frac{\lambda \varkappa }{\varkappa ^{1-\varkappa }(\varkappa -1)}\left[\alpha ^{\frac{\varkappa -1}{\varkappa }}\varkappa ^{1-\varkappa }-\left(\varkappa \alpha ^{-\frac{1}{\varkappa }}+\frac{\theta }{\lambda }\right)^{1-\varkappa }\right]. \end{aligned}$$
    (3.12)

    Because for each \(\theta \ge 0\)

    $$\begin{aligned} \lim \limits _{\varkappa \rightarrow \infty }\psi _{\frac{1+\varkappa }{\varkappa },\alpha ,\lambda }(\theta )&= \lim \limits _{\varkappa \rightarrow \infty }\frac{\lambda \varkappa \alpha ^{\frac{\varkappa -1}{\varkappa }}}{\varkappa -1}\left[1-\left(1+\frac{\theta }{\varkappa \lambda }\alpha ^{\frac{1}{\varkappa }}\right)^{1-\varkappa }\right]\nonumber \\&=\alpha \lambda \left(1-e^{-\frac{\theta }{\lambda }}\right), \end{aligned}$$
    (3.13)

    it follows that for all \(\alpha >0\), \(\lambda >0\) the scaled Poisson distributions \(Tw_1(\alpha ,\lambda )\in T_{\infty }(R_+)\). Having in mind property (i), we conclude from (3.13) that \(T_{\infty }(R_+)=ID(R_+)\).

  2. (iii)

    The case \(\varkappa =\infty \) is contained in (ii).

Let \(0<\varkappa <\infty \). The statement (iii) follows easily from Proposition 3.13, formula (3.12) and statement (i). \(\square \)
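
The limit (3.13) can also be verified numerically from (3.12); to avoid floating-point underflow of \(\varkappa ^{1-\varkappa }\) for large \(\varkappa \), the algebraically equivalent form \(\psi =\frac{\lambda \varkappa }{\varkappa -1}\big [\alpha ^{\frac{\varkappa -1}{\varkappa }}-\big (\alpha ^{-\frac{1}{\varkappa }}+\frac{\theta }{\varkappa \lambda }\big )^{1-\varkappa }\big ]\) is used below (an illustrative sketch; the parameter values are arbitrary):

```python
import math

def psi(kappa, alpha, lam, theta):
    # formula (3.12), rewritten by factoring kappa^{1-kappa} into the bracket
    return (lam * kappa / (kappa - 1)
            * (alpha ** ((kappa - 1) / kappa)
               - (alpha ** (-1 / kappa) + theta / (kappa * lam)) ** (1 - kappa)))

alpha, lam, theta = 2.0, 3.0, 1.0
limit = alpha * lam * (1 - math.exp(-theta / lam))   # right side of (3.13)

err_coarse = abs(psi(400.0, alpha, lam, theta) - limit)
err_fine = abs(psi(4000.0, alpha, lam, theta) - limit)
```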

Remark 3.16

Because

$$\begin{aligned} Tw_2(\alpha ,\lambda )(\text{ d}t)=\frac{(\lambda \alpha ^{-1})^{\lambda }}{\Gamma (\lambda )}t^{\lambda -1}e^{-\lambda \alpha ^{-1}t}1_{(0,\infty )}(t)\text{ d}t \end{aligned}$$

is a gamma distribution, we have \(T_1(R_+)=GGC\), the class of generalized gamma convolutions.

Because

$$\begin{aligned} Tw_{\frac{3}{2}}(\alpha ,\lambda )(C)=e^{-2\lambda \sqrt{\alpha }}\varepsilon _0(C)+\int \limits _{C}\frac{2\lambda }{\sqrt{u}}e^{-2\frac{\lambda }{\sqrt{\alpha }}(u+\alpha )}I_1(4\lambda \sqrt{u})\text{ d}u, \quad C\in \fancyscript{B}(R_+), \end{aligned}$$

where \(I_{\gamma }(z)\) is the modified Bessel function of the first kind (see Appendix), i.e.

$$\begin{aligned} I_{\gamma }(z)=\sum \limits ^{\infty }_{k=0}\frac{\left(\frac{z}{2}\right)^{2k+\gamma }}{k!\Gamma (\gamma +k+1)}, \quad \gamma \ge -1, \end{aligned}$$

is a compound Poisson-exponential distribution, \(T_2(R_+)\) is the class of generalized convolutions of compound Poisson-exponential distributions, which coincides with the class of generalized mixed exponential convolutions studied by Goldie [17], Steutel [18, 19] and Bondesson [11].

Example 3.17

(noncentral gamma distribution) Following Fisher [20] (see also [21, 22]) we say that \(\Gamma _{\beta ,\gamma ,\lambda }\) is a noncentral gamma distribution with the shape parameter \(\beta >0\), the scale parameter \(\gamma >0\) and the noncentrality parameter \(\lambda >0\) if its pdf \(f_{\beta ,\gamma ,\lambda }\) is the Poisson mixture of the gamma densities:

$$\begin{aligned} f_{\beta ,\gamma ,\lambda }(x)&=e^{-\lambda }\sum \limits ^{\infty }_{j=0}\frac{\lambda ^j}{j!}\frac{\beta ^{\gamma +j}x^{\gamma +j-1}}{\Gamma (\gamma +j)}e^{-\beta x}\\&=e^{-\lambda -\beta x}\beta \left(\frac{\beta x}{\lambda }\right)^{\frac{\gamma -1}{2}}I_{\gamma -1}\left(2\sqrt{\beta \lambda x}\right), \quad x>0. \end{aligned}$$

Fisher in [20] derived that the probability law

$$\begin{aligned} \fancyscript{L}\left(\sum \limits ^{n}_{j=1}X^2_j\right)=\Gamma _{\frac{1}{2},\frac{n}{2},\lambda }, \end{aligned}$$

where \(X_1,\ldots ,X_n\) are independent, \(\fancyscript{L}(X_j)=N(\alpha _j,1)\) and \(\lambda =\frac{1}{2}\sum \limits ^{n}_{j=1}\alpha ^2_j\).

Let \(\text{ Bess}_{\beta ,\lambda }\), \(\beta >0\), \(\lambda >0\), be a probability distribution on \(R_+\), defined by the formula:

$$\begin{aligned} \text{ Bess}_{\beta ,\lambda }(\text{ d}x)=e^{-\lambda }\varepsilon _0(\text{ d}x)+\sqrt{\frac{\lambda \beta }{x}}e^{-\lambda -\beta x}I_1\left(2\sqrt{\beta \lambda x}\right)\text{ d}x. \end{aligned}$$

Because

$$\begin{aligned} \int \limits ^{\infty }_{0}e^{-\theta x}f_{\beta ,\gamma ,\lambda }(x)\text{ d}x&=e^{-\lambda }\sum \limits ^{\infty }_{j=0}\frac{\lambda ^j}{j!}\frac{\beta ^{\gamma +j}}{\Gamma (\gamma +j)}\int \limits ^{\infty }_{0}x^{\gamma +j-1}e^{-(\beta +\theta )x}\text{ d}x \nonumber \\&=e^{-\lambda }\left(\frac{\beta }{\theta +\beta }\right)^{\gamma }\sum \limits ^{\infty }_{j=0}\frac{1}{j!}\left(\frac{\beta \lambda }{\theta +\beta }\right)^j\nonumber \\&=\left(\frac{\beta }{\theta +\beta }\right)^{\gamma }e^{\frac{-\lambda \theta }{\theta +\beta }} \end{aligned}$$
(3.14)

and

$$\begin{aligned} \int \limits ^{\infty }_{0}e^{-\theta x}\text{ Bess}_{\beta ,\lambda }(\text{ d}x)&=e^{-\lambda }+\lambda \beta e^{-\lambda }\sum \limits ^{\infty }_{k=0}\int \limits ^{\infty }_{0}e^{-(\theta +\beta )x}\frac{(\beta \lambda x)^k}{k!(k+1)!}\text{ d}x \nonumber \\&=e^{-\lambda }+e^{-\lambda }\sum \limits ^{\infty }_{k=0}\left(\frac{\beta \lambda }{\theta +\beta }\right)^{k+1}\frac{1}{(k+1)!}=e^{\frac{-\lambda \theta }{\theta +\beta }}, \end{aligned}$$

we find that

$$\begin{aligned} \Gamma _{\beta ,\gamma ,\lambda }=\Gamma _{\beta ,\gamma }*\text{ Bess}_{\beta ,\lambda }, \end{aligned}$$

implying equalities:

$$\begin{aligned} \Gamma _{\beta ,\gamma _1,\lambda _1}*\Gamma _{\beta ,\gamma _2,\lambda _2}&=\Gamma _{\beta ,\gamma _1+\gamma _2,\lambda _1+\lambda _2},\\ \Gamma _{\beta ,\gamma _1,\lambda }*\Gamma _{\beta ,\gamma _2}&=\Gamma _{\beta ,\gamma _1+\gamma _2,\lambda } \end{aligned}$$

and

$$\begin{aligned} \Gamma _{\beta ,\gamma ,\lambda _1}*\text{ Bess}_{\beta ,\lambda _2}=\Gamma _{\beta ,\gamma ,\lambda _1+\lambda _2}. \end{aligned}$$
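
The factorization \(\Gamma _{\beta ,\gamma ,\lambda }=\Gamma _{\beta ,\gamma }*\text{ Bess}_{\beta ,\lambda }\) can be confirmed on the level of Laplace transforms, using the Poisson-mixture series behind (3.14) (a numerical illustration with arbitrary parameter values, added here as an example):

```python
import math

beta, g, lam, theta = 1.5, 2.0, 0.8, 0.6   # illustrative parameter values

# Laplace transform of the noncentral gamma via its Poisson-mixture series
lt_noncentral = sum(math.exp(-lam) * lam**j / math.factorial(j)
                    * (beta / (theta + beta)) ** (g + j) for j in range(80))

# product of the central gamma transform and the Bessel-distribution transform
lt_product = (beta / (theta + beta)) ** g * math.exp(-lam * theta / (theta + beta))

err = abs(lt_noncentral - lt_product)
```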

From (3.14) it follows that

$$\begin{aligned} \int \limits ^{\infty }_{0}e^{-\theta x}f_{\beta ,\gamma ,\lambda }(x)\text{ d}x=\exp \left\{ \int \limits ^{\infty }_{0}\left(e^{-\theta u}-1\right)\left(\frac{\gamma }{u}+\lambda \right)e^{-\beta u} \text{ d}u\right\} , \end{aligned}$$

proving that the noncentral gamma distributions are infinitely divisible on \(R_+\) with characteristics \((0,l_{\beta ,\gamma ,\lambda }(u)\text{ d}u)\), where

$$\begin{aligned} l_{\beta ,\gamma , \lambda }(u)=\left(\frac{\gamma }{u}+\lambda \right)e^{-\beta u}, \quad u>0. \end{aligned}$$

This function is completely monotone, implying that \(\Gamma _{\beta , \gamma , \lambda }\in T_2(R_+)\).

Because the function

$$\begin{aligned} k_{\beta ,\gamma , \lambda }(u):=ul_{\beta ,\gamma ,\lambda }(u)=(\gamma +\lambda \beta u)e^{-\beta u}, \quad u>0 \end{aligned}$$
(3.15)

is not completely monotone, \(\Gamma _{\beta ,\gamma ,\lambda }\notin T_1(R_+)\). From (3.15) it follows that \(k_{\beta ,\gamma ,\lambda }\) is nonincreasing if and only if \(\lambda \le \gamma \). Only in this case is the noncentral gamma distribution \(\Gamma _{\beta ,\gamma ,\lambda }\) self-decomposable.
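The monotonicity criterion is easy to probe numerically. The sketch below (with \(\beta =1\) and \(\gamma =1\) for simplicity; all values illustrative) evaluates \(k(u)=ul(u)\) on a grid and checks whether it is nonincreasing.

```python
import math

def k(u, lam, beta=1.0, gamma_=1.0):
    """k(u) = u * l(u) for the noncentral gamma Levy density."""
    return (gamma_ + lam * beta * u) * math.exp(-beta * u)

def is_nonincreasing(lam, n=10000, umax=10.0):
    grid = [i * umax / n for i in range(1, n + 1)]
    vals = [k(u, lam) for u in grid]
    return all(a >= b for a, b in zip(vals, vals[1:]))

print(is_nonincreasing(0.8))  # below the threshold: k is nonincreasing
print(is_nonincreasing(1.5))  # above the threshold: k increases near the origin
```

With these parameters \(k'(0+)\) is positive exactly when \(\lambda \) exceeds the threshold, so the second call detects an increase near \(u=0\), in line with the failure of self-decomposability there.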

The inverse noncentral gamma distribution

$$\begin{aligned} I\Gamma _{\beta ,\gamma ,\lambda }(\text{ d}x):=x^{-2}f_{\beta ,\gamma ,\lambda }(x^{-1})\text{ d}x \end{aligned}$$

makes it possible to define the noncentral Student's \(t\)-distribution \(T_d(\nu ,\Sigma ,\alpha ,\lambda )\) with \(\nu >0\) degrees of freedom, a scaling matrix \(\Sigma \), a location vector \(\alpha \in R^d\), and a noncentrality parameter \(\lambda >0\) by means of the pdf \(f_{\nu ,\Sigma ,\lambda }(x-\alpha )\), \(x\in R^d\), where

$$\begin{aligned} f_{\nu ,\Sigma ,\lambda }(x)&=\int \limits ^{\infty }_{0}g_{0,u\Sigma }(x)u^{-2}f_{\frac{\nu }{2},\frac{\nu }{2},\lambda }\left(\frac{1}{u}\right)\text{ d}u\\&=\int \limits ^{\infty }_{0}g_{0,u\Sigma }(x)e^{-\lambda }\sum \limits ^{\infty }_{j=0}\frac{\lambda ^j}{j!}\frac{\left(\frac{\nu }{2}\right)^{\frac{\nu }{2}+j}}{\Gamma \left(\frac{\nu }{2}+j\right)}u^{-\frac{\nu }{2}-j-1}e^{-\frac{\nu }{2u}}\text{ d}u\\&= \frac{e^{-\lambda }\left(\frac{\nu }{2}\right)^{\frac{\nu }{2}}}{(2\pi )^{\frac{d}{2}}\sqrt{|\Sigma |}}\left(\frac{\nu }{2}+\frac{1}{2}\langle x\Sigma ^{-1}, x\rangle \right)^{-\frac{\nu +d}{2}}\\&\quad \times \sum \limits ^{\infty }_{j=0}\frac{\Gamma \left(\frac{\nu +d}{2}+j\right)}{j!\Gamma \left(\frac{\nu }{2}+j\right)}\left(\frac{\lambda \nu }{\nu +\langle x\Sigma ^{-1},x\rangle }\right)^j\\&= \frac{e^{-\lambda }}{(\nu \pi )^{\frac{d}{2}}\sqrt{|\Sigma |}}\left(\frac{\nu +\langle x\Sigma ^{-1},x\rangle }{\nu }\right)^{-\frac{\nu +d}{2}}\\&\quad \times \sum \limits ^{\infty }_{j=0}\frac{\Gamma \left(\frac{\nu +d}{2}+j\right)}{j!\Gamma \left(\frac{\nu }{2}+j\right)}\left(\frac{\lambda \nu }{\nu +\langle x\Sigma ^{-1},x\rangle }\right)^j. \end{aligned}$$
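For \(d=1\) and \(\Sigma =1\) the closed form can be compared with the defining mixture integral numerically. The following sketch (illustrative parameters, series truncated) evaluates both sides at a single point; the substitution \(y=1/u\) turns the mixture into an integral of \(g_{0,1/y}(x)f_{\frac{\nu }{2},\frac{\nu }{2},\lambda }(y)\) over \(y>0\).

```python
import math

nu, lam, x = 3.0, 0.7, 1.2
beta = gamma_ = nu / 2.0

# Series coefficients of the noncentral gamma density f_{nu/2, nu/2, lam}
COEF = [lam ** j * beta ** (gamma_ + j) / (math.factorial(j) * math.gamma(gamma_ + j))
        for j in range(60)]

def f_noncentral_gamma(y):
    return math.exp(-lam - beta * y) * sum(c * y ** (gamma_ + j - 1)
                                           for j, c in enumerate(COEF))

def simpson(g, a, b, n=20000):
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

# Mixture over the precision y = 1/u: N(0, 1/y) density at x times f(y)
def integrand(y):
    if y <= 0.0:
        return 0.0
    return math.sqrt(y / (2 * math.pi)) * math.exp(-x * x * y / 2) * f_noncentral_gamma(y)

mixture = simpson(integrand, 0.0, 60.0)

# Closed form of the noncentral Student pdf for d = 1, Sigma = 1
q = x * x
series = sum(math.gamma((nu + 1) / 2 + j) / (math.factorial(j) * math.gamma(nu / 2 + j))
             * (lam * nu / (nu + q)) ** j for j in range(60))
closed = (math.exp(-lam) / math.sqrt(nu * math.pi)
          * ((nu + q) / nu) ** (-(nu + 1) / 2) * series)
print(mixture, closed)
```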

Analogously, we define the doubly noncentral Student's \(t\)-distributions \(T_d(\nu ,\Sigma ,\alpha ,a,\lambda )\) with \(\nu >0\) degrees of freedom, a scaling matrix \(\Sigma \), a location vector \(\alpha \in R^d\), a noncentrality vector \(a\in R^d\), and parameter \(\lambda >0\) by means of the pdf \(f_{\nu ,\Sigma ,a,\lambda }(x-\alpha )\), \(x\in R^d\), where

$$\begin{aligned} f_{\nu ,\Sigma ,a,\lambda }(x)&=\int \limits ^{\infty }_{0}g_{ua,u\Sigma }(x)u^{-2}f_{\frac{\nu }{2},\frac{\nu }{2},\lambda }\left(\frac{1}{u}\right)\text{ d}u\\&=\int \limits ^{\infty }_{0}g_{ua,u\Sigma }(x)e^{-\lambda }\sum \limits ^{\infty }_{j=0}\frac{\lambda ^j}{j!}\frac{\left(\frac{\nu }{2}\right)^{\frac{\nu }{2}+j}}{\Gamma \left(\frac{\nu }{2}+j\right)}u^{-\frac{\nu }{2}-j-1}e^{-\frac{\nu }{2u}}\text{ d}u\\&=\frac{e^{-\lambda }\left(\frac{\nu }{2}\right)^{\frac{\nu }{2}}2\exp \left\{ \langle x\Sigma ^{-1},a\rangle \right\} }{(2\pi )^{\frac{d}{2}}\sqrt{|\Sigma |}}\left(\frac{\langle a\Sigma ^{-1},a\rangle }{\nu +\langle x\Sigma ^{-1},x\rangle }\right)^{\frac{\nu +d}{4}}\\&\quad \times \sum \limits ^{\infty }_{j=0}\frac{\left(\frac{\lambda \nu }{2}\right)^j}{j!\Gamma \left(\frac{\nu }{2}+j\right)}\left(\frac{\langle a\Sigma ^{-1},a\rangle }{\nu +\langle x\Sigma ^{-1},x\rangle }\right)^{\frac{j}{2}}\\&\quad \times K_{\frac{\nu +d}{2}+j}\left(\left[\langle a\Sigma ^{-1},a\rangle \left(\nu +\langle x\Sigma ^{-1},x\rangle \right)\right]^{\frac{1}{2}}\right). \end{aligned}$$

Most likely, the distributions \(I\Gamma _{\beta ,\gamma ,\lambda }\), \(T_d(\nu ,\Sigma ,\alpha ,\lambda )\) and \(T_d(\nu ,\Sigma ,\alpha ,a,\lambda )\) are not infinitely divisible and do not correspond to any Lévy processes.

Example 3.18

(generalized gamma distribution). Recall that Bondesson introduced and studied in [11] a remarkable subclass of GGC densities on \((0,\infty )\), the hyperbolically completely monotone densities (HCM for short). A pdf \(f\) is said to be HCM if, for every \(u>0\), \(f(uv)f\left(\frac{u}{v}\right)\) is a completely monotone function of \(w=v+v^{-1}\). For instance, GIG densities are HCM, because

$$\begin{aligned} gig(uv;\lambda ,\chi ,\psi )gig\left(\frac{u}{v};\lambda ,\chi ,\psi \right)=\frac{\left(\frac{\psi }{\chi }\right)^{\lambda }u^{2(\lambda -1)}}{\left(2K_{\lambda }(\sqrt{\chi \psi })\right)^2}\exp \left\{ -\frac{1}{2}\left(\chi u^{-1}+\psi u\right)w\right\} \end{aligned}$$

and the function \(e^{-ax}\), \(x>0\), \(a>0\), is obviously completely monotone.

The generalized gamma density functions \(g_{\beta ,\gamma ,\delta }\) are defined by the formula (see, e.g., [11]):

$$\begin{aligned} g_{\beta ,\gamma ,\delta }(x)=\frac{|\delta |}{\Gamma (\gamma )}\beta ^{\gamma }x^{\delta \gamma -1}\exp \left\{ -\beta x^{\delta }\right\} , \quad x>0, \quad \delta \in R^1_0, \quad \beta >0, \quad \gamma >0. \end{aligned}$$
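A convenient way to work with this family is the substitution \(Y=X^{\delta }\): if \(Y\) has the gamma distribution \(\Gamma _{\beta ,\gamma }\), then \(X=Y^{1/\delta }\) has density \(g_{\beta ,\gamma ,\delta }\), and moments of \(X\) follow from gamma moments. A small Monte Carlo sketch (illustrative parameters) checks \(\mathrm E X=\Gamma (\gamma +1/\delta )/(\Gamma (\gamma )\beta ^{1/\delta })\).

```python
import math, random

random.seed(7)
beta, gamma_, delta = 2.0, 1.5, 0.5

# If Y ~ Gamma(shape gamma_, rate beta), then X = Y**(1/delta) has
# density g_{beta, gamma_, delta} (works for delta < 0 as well)
xs = [random.gammavariate(gamma_, 1.0 / beta) ** (1.0 / delta) for _ in range(200000)]
mc_mean = sum(xs) / len(xs)

# E[X] = E[Y^(1/delta)] = Gamma(gamma_ + 1/delta) / (Gamma(gamma_) * beta**(1/delta))
exact_mean = math.gamma(gamma_ + 1.0 / delta) / (math.gamma(gamma_) * beta ** (1.0 / delta))
print(mc_mean, exact_mean)
```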

It is proved in [11] that, for \(0<|\delta |\le 1\), the densities \(g_{\beta ,\gamma ,\delta }\) are HCM, because

$$\begin{aligned} g_{\beta ,\gamma ,\delta }(uv)g_{\beta ,\gamma ,\delta }\left(\frac{u}{v}\right)=\left(\frac{|\delta |\beta ^{\gamma }}{\Gamma (\gamma )}\right)^2u^{2(\delta \gamma -1)}\exp \left\{ -\beta u^{\delta }(v^{\delta }+v^{-\delta })\right\} \end{aligned}$$

and

$$\begin{aligned} \frac{\text{ d}}{\text{ d}w}\left(v^{\delta }+v^{-\delta }\right)=\frac{\delta \sin (\delta \pi )}{\pi }\int \limits ^{0}_{-\infty }\frac{|t|^{\delta }}{1+t^2-tw}\text{ d}t. \end{aligned}$$

The statement now follows from the known properties of completely monotone functions (see, e.g., [12]).
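The integral representation of \(\frac{\text{ d}}{\text{ d}w}(v^{\delta }+v^{-\delta })\) can be checked numerically. For \(\delta =\frac{1}{2}\) there is even a closed form, since \(v^{1/2}+v^{-1/2}=\sqrt{w+2}\), so the derivative equals \(1/(2\sqrt{w+2})\). The sketch below (the choice \(w=3\) is illustrative) compares the integral, a central-difference derivative, and the closed form; the integral over \((-\infty ,0)\) is first mapped to \((0,\infty )\) by \(t=-s\), then folded onto \((0,1]\) and smoothed by \(s=q^2\).

```python
import math

delta, w = 0.5, 3.0

def simpson(f, a, b, n=20000):
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# I = int_0^inf s^delta / (1 + s^2 + w s) ds
#   = int_0^1 2*(q^(2*delta+1) + q^(1-2*delta)) / (1 + q^4 + w q^2) dq
integral = simpson(lambda q: 2 * (q ** (2 * delta + 1) + q ** (1 - 2 * delta))
                   / (1 + q ** 4 + w * q ** 2), 0.0, 1.0)
formula = delta * math.sin(delta * math.pi) / math.pi * integral

# Direct derivative: v(w) solves v + 1/v = w with v >= 1
def h_of_w(w):
    v = (w + math.sqrt(w * w - 4)) / 2
    return v ** delta + v ** (-delta)

eps = 1e-5
numeric = (h_of_w(w + eps) - h_of_w(w - eps)) / (2 * eps)
print(formula, numeric, 1 / (2 * math.sqrt(w + 2)))
```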

For \(\delta >1\), the pdf \(g_{\beta ,\gamma ,\delta }\) are not infinitely divisible (see [11]), and, for \(\delta <-1\), it is unknown whether or not they are infinitely divisible.

Following Definitions 1.1 and 1.2 and using the densities \(g_{\beta ,\gamma ,\delta }\), \(\delta <0\), \(\beta >0\), \(\gamma >0\), it is natural to define the generalized Student's \(t\)-distributions with pdf given by the mixtures

$$\begin{aligned} \int \limits ^{\infty }_{0}g_{0,u\Sigma }(x-\alpha )g_{\beta ,\gamma ,\delta }(u)\text{ d}u, \quad x\in R^d \end{aligned}$$

and the generalized noncentral Student's \(t\)-distributions with pdf given by the mixtures

$$\begin{aligned} \int \limits ^{\infty }_{0}g_{ua,u\Sigma }(x-\alpha )g_{\beta ,\gamma ,\delta }(u)\text{ d}u, \quad x\in R^d. \end{aligned}$$

In the case \(-1\le \delta <0\) their pdf are infinitely divisible but, except for \(\delta =-1\), their Lévy measures have no tractable expressions.

 

3.4 Subordinated Lévy Processes

Subordination of Markov processes as a transformation through a random time change was introduced by Bochner in 1949 (see [23, 24]). In the context of Lévy processes, subordination makes it possible to construct and investigate statistical models with desirable features of the marginal distributions.

Let \(X=\{X_t,t\ge 0\}\) be a Lévy process in \(R^d\), \(X_0\equiv 0\), with the triplet of Lévy characteristics \((a,A,\Pi )\) and the characteristic exponent

$$\begin{aligned} \varphi (z)&:=-\log \mathrm E e^{i\langle z,X_1\rangle }\\&=-i\langle a,z\rangle +\frac{1}{2}\langle zA,z\rangle -\int \limits _{R^d_0}\left(e^{i\langle z,x\rangle }-1-i\langle z,x\rangle 1_{\{|x|\le 1\}}\right)\Pi (\text{ d}x), \quad z\in R^d, \end{aligned}$$

called the subordinand process.

Let \(T=\{T_t,t\ge 0\}\) be a Lévy subordinator, \(T_0\equiv 0\), with the Laplace exponent

$$\begin{aligned} \psi (\theta ):=-\log \mathrm E e^{-\theta T_1}=\beta _0\theta +\int \limits ^{\infty }_{0}\left(1-e^{-\theta x}\right)\rho (\text{ d}x), \quad \theta \ge 0, \end{aligned}$$

and characteristics \((\beta _0,\rho )\), independent of \(X\).

The subordinated process \(\tilde{X}=\{\tilde{X}_t,t\ge 0\}\) is defined as a superposition

$$\begin{aligned} \tilde{X}_t=X_{T_t}, \quad t\ge 0. \end{aligned}$$

The following theorem was obtained by Zolotarev [25], Bochner [24], Ikeda and Watanabe [26], and Rogozin [27]; it was also treated by Feller [28] and Sato [2]. These ideas were extended to multivariate subordination of Lévy processes by Barndorff-Nielsen et al. in 2001 (see [5]).

Let

$$\begin{aligned} \mu ^t(B)&=\mathrm P \{X_t\in B\}, \quad B\in \fancyscript{B}(R^d), \quad t\ge 0,\\ \tau ^t(C)&=\mathrm P \{T_t\in C\}, \quad C\in \fancyscript{B}(R_+), \quad t\ge 0 \end{aligned}$$

and

$$\begin{aligned} \tilde{\mu }^t(B)=\mathrm P \{\tilde{X}_t\in B\}, \quad B\in \fancyscript{B}(R^d), \quad t\ge 0. \end{aligned}$$

 

Theorem 3.19

  1. (i)

    The subordinated process \(\tilde{X}=\{\tilde{X}_t,t\ge 0\}\) is a Lévy process with characteristic exponent \(\tilde{\varphi }(z)=\psi \left(\varphi (z)\right)\), \(z\in R^d\), and triplet of Lévy characteristics \((\tilde{a}, \tilde{A}, \tilde{\Pi })\), where

    $$\begin{aligned} \tilde{a}&=\beta _0a+\int \limits _{(0,\infty )}\left(\int \limits _{|x|\le 1}x\mu ^s(\text{ d}x)\right)\rho (\text{ d}s),\nonumber \\ \tilde{A}&=\beta _0A \end{aligned}$$
    (3.16)

    and

    $$\begin{aligned} \tilde{\Pi }(B)=\beta _0\Pi (B)+\int \limits _{(0,\infty )}\mu ^s(B)\rho (\text{ d}s), \quad B\in \fancyscript{B}(R_0^d). \end{aligned}$$
  2. (ii)

    For \(t\ge 0\) and \(B\in \fancyscript{B}(R^d)\),

    $$\begin{aligned} \tilde{\mu }^t(B)=\int \limits _{R_+}\mu ^s(B)\tau ^t(\text{ d}s). \end{aligned}$$
    (3.17)

For the proof, we refer the reader to [2].
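Part (i) of Theorem 3.19 can be illustrated by simulation. For Brownian motion, \(\varphi (z)=z^2/2\), subordinated by a gamma subordinator with \(\psi (\theta )=c\log (1+\theta /b)\), the subordinated process is a variance gamma process, and \(\mathrm E e^{iz\tilde{X}_1}=e^{-\psi (\varphi (z))}=(1+z^2/(2b))^{-c}\). A Monte Carlo sketch (all parameters illustrative):

```python
import math, random

random.seed(1)
c, b = 2.0, 1.0   # gamma subordinator: T_1 ~ Gamma(shape c, rate b)
z = 1.0
n = 200000

acc = 0.0
for _ in range(n):
    t = random.gammavariate(c, 1.0 / b)   # T_1
    x = random.gauss(0.0, math.sqrt(t))   # X_{T_1} for a standard Brownian motion X
    acc += math.cos(z * x)                # real part of e^{izx}; imaginary part averages to 0
mc = acc / n

exact = (1.0 + z * z / (2.0 * b)) ** (-c)  # exp(-psi(phi(z)))
print(mc, exact)
```

The empirical characteristic function matches \(e^{-\psi (\varphi (z))}\) within Monte Carlo error, in agreement with \(\tilde{\varphi }=\psi \circ \varphi \) and the mixture formula (3.17).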