Abstract
Bondesson (Generalized Gamma Convolutions and Related Classes of Distributions and Densities, Lecture Notes in Statistics, vol. 76, Springer, Berlin, 1992) said “Since a lot of the standard distributions now are known to be infinitely divisible, the class of infinitely divisible distributions has perhaps partly lost its interest. Smaller classes should be more in focus.” This view was presented more than two decades ago, yet has not been fully addressed. Over the last decade, many classes of infinitely divisible distributions have been studied and characterized. In this article, we summarize such “smaller classes” and try to find classes which known infinitely divisible distributions belong to, as precisely as possible.
AMS Subject Classification 2000: Primary: 60E07, 60H05; Secondary: 60E10, 60G51
Keywords
- Generalized gamma convolution
- Infinitely divisible distribution
- Lévy measure
- Mixture of exponential distributions
- Nested subclasses
- Selfdecomposable distribution
- Stable distribution
- Stochastic integral with respect to Lévy process
1 Introduction
The theory of infinitely divisible distributions has been a core topic of probability theory and the subject of extensive study over the years. One reason is that many important distributions are infinitely divisible, such as the Gaussian, stable, exponential, Poisson, compound Poisson and gamma distributions. Another reason is that the set of infinitely divisible distributions on \(\mathbb{R}^{d}\) coincides with the set of distributions which are limits of distributions of sums \(\sum _{j=1}^{k_{n}}\xi _{ n,j}\) of \(\mathbb{R}^{d}\)-valued triangular arrays \(\{\xi _{n,j},1 \leq j \leq k_{n},n \geq 1\},k_{n} \uparrow \infty \) as \(n \rightarrow \infty \), where for each n, \(\xi _{n,1},\xi _{n,2},\ldots\) are independent, under the condition of infinite smallness, namely \(\lim _{n\rightarrow \infty }\max _{1\leq j\leq k_{n}}P(\vert \xi _{n,j}\vert \geq \varepsilon ) = 0\) for any \(\varepsilon > 0\). Suppose that \(\xi _{n,j} = a_{n}^{-1}(\xi _{j} - b_{j})\), with \(a_{n} > 0\), \(\lim _{n\rightarrow \infty }a_{n} = \infty \), \(\lim _{n\rightarrow \infty }a_{n+1}a_{n}^{-1} = 1\), \(b_{j} \in \mathbb{R}^{d}\) and \(k_{n} = n\). If \(\{\xi _{j}\}\) are independent, then the resulting class is the class of selfdecomposable distributions, and if furthermore \(\{\xi _{j}\}\) are identically distributed, then the resulting class is the class of stable distributions, which includes the Gaussian distributions. These two classes are important classes of infinitely divisible distributions. Selfdecomposable distributions are known as the marginal distributions of stationary processes of Ornstein-Uhlenbeck type, that is, stationary solutions of Langevin equations driven by Lévy noise.
In 1977, Thorin [85, 86] introduced a class between the classes of stable and selfdecomposable distributions, called now the Thorin class, whose elements are called Generalized Gamma Convolutions (GGCs for short), when he wanted to prove the infinite divisibility of the Pareto and the log-normal distributions. Bondesson [16] published a monograph on this topic in 1992.
In 1983, Jurek and Vervaat [38] and Sato and Yamazato [79] showed that any selfdecomposable distribution \(\tilde{\mu }\) can be characterized as a stochastic integral with respect to some Lévy process {X t } with \(E[\log ^{+}\vert X_{1}\vert ] < \infty \), namely
$$\displaystyle{ \tilde{\mu }= \mathcal{L}\left (\int _{0}^{\infty }e^{-t}dX_{t}\right ), }$$(1)
where \(\mathcal{L}(X)\) denotes the law of the random variable X. (The paper by Jurek [36] is a short historical survey of stochastic integral representations of classes of infinitely divisible distributions.) Since a Lévy process {X t } is determined, uniquely in law, by the infinitely divisible distribution μ satisfying \(\mathcal{L}(X_{1}) =\mu\), (1) can be regarded as a mapping Φ, say, from the class of infinitely divisible distributions with finite log-moments to the class of selfdecomposable distributions:
$$\displaystyle{ \varPhi:\mu = \mathcal{L}(X_{1})\mapsto \mathcal{L}\left (\int _{0}^{\infty }e^{-t}dX_{t}\right ). }$$(2)
If we denote by \(\{X_{t}^{(\mu )}\}\) a Lévy process such that \(\mathcal{L}(X_{1}^{(\mu )}) =\mu\), (1) and (2) give us
$$\displaystyle{ \varPhi (\mu ) = \mathcal{L}\left (\int _{0}^{\infty }e^{-t}dX_{t}^{(\mu )}\right ). }$$
Barndorff-Nielsen and Thorbjørnsen [7–10] introduced a mapping
$$\displaystyle{ \varUpsilon (\mu ) = \mathcal{L}\left (\int _{0}^{1}\log (t^{-1})dX_{t}^{(\mu )}\right ) }$$
related to the Bercovici-Pata bijection between free probability and classical probability. Then in Barndorff-Nielsen et al. [12], we investigated the range of the mapping \(\varUpsilon\) and characterized several classes of infinitely divisible distributions in terms of the mappings Φ and \(\varUpsilon\). Among others, we found that the composition of these two mappings produces the Thorin class. Since then, many mappings have been studied, both as mappings constructing classes of infinitely divisible distributions, giving new probabilistic interpretations of such classes, and as mappings of interest in their own right.
Let us recall one sentence by Bondesson [16]. “Since a lot of the standard distributions now are known to be infinitely divisible, the class of infinitely divisible distributions has perhaps partly lost its interest. Smaller classes should be more in focus.” In this article, we survey such “smaller classes” and try to find classes which known infinitely divisible distributions belong to, as precisely as possible. All infinitely divisible distributions we treat here are finite dimensional and most of the examples are one-dimensional.
In Sect. 2, we give some preliminaries on infinitely divisible distributions on \(\mathbb{R}^{d}\), Lévy processes and stochastic integrals with respect to Lévy processes.
In Sect. 3, we explain some known classes of infinitely divisible distributions and their relationships, and the characterization in terms of stochastic integral mappings is discussed in Sect. 4. Section 5 is devoted to some other mappings. These three sections form the first main subject of this article. Also, compositions of mappings are discussed in Sect. 6.
Since we have mappings to construct classes in hand, we can construct nested subclasses by the iteration of those mappings. This is the topic in Sect. 7. For the class of selfdecomposable distributions, these nested subclasses were already studied by Urbanik [90] and later by Sato [70].
Once we have a general theory for infinitely divisible distributions, it is necessary to provide specific examples. We know that many distributions are infinitely divisible. The next question, then, is which classes such known infinitely divisible distributions belong to. This is the second main subject of this article and is discussed in Sects. 8–10. Section 8 treats known distributions. Since the monograph by Bondesson [16] and the later paper by James et al. [28], GGCs have received much attention, and thus examples of GGCs that have recently appeared in quite different problems are explained separately in Sect. 9. Section 10 discusses new examples of α-selfdecomposable distributions.
We conclude the article with a short Sect. 11 on fixed points of the mapping for α-selfdecomposable distributions, offering a new perspective on the class of stable distributions.
Since this is a survey article, only a few statements are given with explicit proofs. For the statements without proofs, readers may consult the original proofs in the papers cited.
2 Preliminaries
2.1 Infinitely Divisible Distributions on \(\mathbb{R}^{d}\)
In the following, \(\mathcal{P}(\mathbb{R}^{d})\) is the set of all probability distributions on \(\mathbb{R}^{d}\) and \(\hat{\mu }(z):=\int _{\mathbb{R}^{d}}e^{i\langle z,x\rangle }\mu (dx),z \in \mathbb{R}^{d}\), is the characteristic function of \(\mu \in \mathcal{P}(\mathbb{R}^{d})\).
Definition 2.1
\(\mu \in \mathcal{P}(\mathbb{R}^{d})\) is infinitely divisible if, for any \(n \in \mathbb{N}\), there exists \(\mu _{n} \in \mathcal{P}(\mathbb{R}^{d})\) such that \(\hat{\mu }(z) =\hat{\mu } _{n}(z)^{n}\). \(ID(\mathbb{R}^{d})\) denotes the class of all infinitely divisible distributions on \(\mathbb{R}^{d}\).
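As a concrete numerical illustration of Definition 2.1 (a sketch, not part of the original text; the rate λ = 2 and n = 7 are arbitrary choices), the Poisson distribution with rate λ has characteristic function \(\exp \{\lambda (e^{iz} - 1)\}\), and one may take \(\mu _{n}\) to be Poisson with rate λ∕n:

```python
import numpy as np

def poisson_cf(z, rate):
    # characteristic function of Poisson(rate): exp(rate * (e^{iz} - 1))
    return np.exp(rate * (np.exp(1j * z) - 1.0))

lam, n = 2.0, 7                      # arbitrary rate and number of factors
z = np.linspace(-5.0, 5.0, 101)
# mu_n = Poisson(lam / n) satisfies  mu_n_hat(z)^n = mu_hat(z)
assert np.allclose(poisson_cf(z, lam / n) ** n, poisson_cf(z, lam))
```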
We also use
$$\displaystyle{ID_{\log }(\mathbb{R}^{d}):=\left \{\mu \in ID(\mathbb{R}^{d}):\int _{\mathbb{R}^{d}}\log ^{+}\vert x\vert \,\mu (dx) < \infty \right \}}$$
and
$$\displaystyle{ID_{\log ^{m}}(\mathbb{R}^{d}):=\left \{\mu \in ID(\mathbb{R}^{d}):\int _{\mathbb{R}^{d}}(\log ^{+}\vert x\vert )^{m}\,\mu (dx) < \infty \right \},\quad m \in \mathbb{N},}$$
where \(\log ^{+}a =\max \{\log a,0\}\).
The so-called Lévy-Khintchine representation of infinitely divisible distributions is given in the following proposition.
Proposition 2.2 (The Lévy-Khintchine Representation; See e.g. Sato [73, Theorem 8.1])
-
(1)
If \(\mu \in ID(\mathbb{R}^{d})\) , then
$$\displaystyle{ \hat{\mu }(z) =\exp \left \{-2^{-1}\langle z,Az\rangle + i\langle \gamma,z\rangle +\int _{ \mathbb{R}^{d}}\left (e^{i\langle z,x\rangle } - 1 - \frac{i\langle z,x\rangle } {1 + \vert x\vert ^{2}}\right )\nu (dx)\right \},\quad z \in \mathbb{R}^{d}, }$$(3)where A is a symmetric nonnegative-definite d × d matrix, ν is a measure on \(\mathbb{R}^{d}\) satisfying
$$\displaystyle{ \nu (\{0\}) = 0\quad \text{and}\quad \int _{\mathbb{R}^{d}}(\vert x\vert ^{2} \wedge 1)\nu (dx) < \infty, }$$(4)and γ is a vector in \(\mathbb{R}^{d}\) .
-
(2)
The representation of \(\hat{\mu }\) in (3) by A,ν and γ is unique.
-
(3)
Conversely, if A is a symmetric nonnegative-definite d × d matrix, ν is a measure satisfying (4) and \(\gamma \in \mathbb{R}^{d}\) , then there exists a \(\mu \in ID(\mathbb{R}^{d})\) whose characteristic function is given by (3) .
A is called the Gaussian covariance matrix or the Gaussian part and ν is called the Lévy measure. The triplet (A, ν, γ) is called the Lévy-Khintchine triplet of μ. When we want to emphasize the Lévy-Khintchine triplet, we may write μ = μ (A, ν, γ). If the Lévy measure ν of μ satisfies \(\int _{\vert x\vert >1}\vert x\vert \nu (dx) < \infty \), then the mean \(\gamma ^{1} \in \mathbb{R}^{d}\) of μ exists and
$$\displaystyle{ \hat{\mu }(z) =\exp \left \{-2^{-1}\langle z,Az\rangle + i\langle \gamma ^{1},z\rangle +\int _{\mathbb{R}^{d}}\left (e^{i\langle z,x\rangle } - 1 - i\langle z,x\rangle \right )\nu (dx)\right \}. }$$
In this case, we will write \(\mu =\mu _{(A,\nu,\gamma ^{1})_{1}}\). If ν of μ satisfies \(\int _{\vert x\vert \leq 1}\vert x\vert \nu (dx) < \infty \), then there exists \(\gamma ^{0} \in \mathbb{R}^{d}\) (called the drift of μ) such that
$$\displaystyle{ \hat{\mu }(z) =\exp \left \{-2^{-1}\langle z,Az\rangle + i\langle \gamma ^{0},z\rangle +\int _{\mathbb{R}^{d}}\left (e^{i\langle z,x\rangle } - 1\right )\nu (dx)\right \}. }$$
We write \(\mu =\mu _{(A,\nu,\gamma ^{0})_{0}}\) in this case. We also write ν μ for ν when ν is the Lévy measure of μ.
In the following, the notation 1 B denotes the indicator function of the set \(B \in \mathcal{B}(\mathbb{R}^{d})\). Here and in what follows, \(\mathcal{B}(C)\) is the set of Borel sets in C.
Proposition 2.3 (Polar Decomposition of Lévy Measure; See e.g. Barndorff-Nielsen et al. [12, Lemma 2.1])
Let ν μ be the Lévy measure of some \(\mu \in ID(\mathbb{R}^{d})\) with \(0 <\nu _{\mu }(\mathbb{R}^{d}) \leq \infty \) . Then there exist a \(\sigma\) -finite measure \(\lambda\) on \(S:=\{\xi \in \mathbb{R}^{d}: \vert \xi \vert = 1\}\) with \(0 <\lambda (S) \leq \infty \) and a family \(\{\nu _{\xi }: \xi \in S\}\) of measures on \((0,\infty )\) such that
$$\displaystyle{ \nu _{\xi }(E)\ \text{is measurable in}\ \xi \in S\ \text{for each}\ E \in \mathcal{B}((0,\infty )), }$$(5)
$$\displaystyle{ 0 <\nu _{\xi }((0,\infty )) \leq \infty \quad \text{for each}\ \xi \in S, }$$(6)
$$\displaystyle{ \nu _{\mu }(B) =\int _{S}\lambda (d\xi )\int _{0}^{\infty }1_{B}(r\xi )\nu _{\xi }(dr),\quad B \in \mathcal{B}(\mathbb{R}^{d}\setminus \{0\}). }$$(7)
Here \(\lambda\) and \(\{\nu _{\xi }\}\) are uniquely determined by ν μ in the following sense: if \(\lambda\) , \(\{\nu _{\xi }\}\) and \(\lambda '\) , \(\{\nu '_{\xi }\}\) both have properties (5)–(7) , then there is a measurable function \(c(\xi )\) on S with \(0 < c(\xi ) < \infty \) such that
$$\displaystyle{\lambda '(d\xi ) = c(\xi )\lambda (d\xi )\quad \text{and}\quad c(\xi )\nu '_{\xi }(dr) =\nu _{\xi }(dr)\quad \text{for }\lambda \text{-a.e.}\ \xi \in S.}$$
We call \(\nu _{\xi }\) the radial component of ν μ and, when \(\nu _{\xi }\) is absolutely continuous, we call its density the Lévy density.
Definition 2.4 (The Cumulant of \(\hat{\mu }\))
For \(\mu \in ID(\mathbb{R}^{d})\), \(C_{\mu }(z) =\log \hat{\mu } (z)\) is called the cumulant of μ, where \(\log\) is the distinguished logarithm. (For the definition of the distinguished logarithm, see e.g. Sato [73], the sentence after Lemma 7.6.)
2.2 Stochastic Integrals with Respect to Lévy Processes
Definition 2.5
A stochastic process {X t , t ≥ 0} on \(\mathbb{R}^{d}\) is called a Lévy process, if the following conditions are satisfied.
-
(1)
X 0 = 0 a.s.
-
(2)
For any 0 ≤ t 0 < t 1 < ⋯ < t n , n ≥ 1, \(X_{t_{0}},X_{t_{1}} - X_{t_{0}},\ldots,X_{t_{n}} - X_{t_{n-1}}\) are independent.
-
(3)
For h > 0, the distribution of X t+h − X t does not depend on t.
-
(4)
For any t ≥ 0 and \(\varepsilon > 0\), \(\lim _{h\rightarrow 0}P(\vert X_{t+h} - X_{t}\vert >\varepsilon ) = 0\).
-
(5)
For almost all ω, the sample paths X t (ω) are right-continuous in t ≥ 0 and have left limits in t > 0.
Dropping the condition (5) in Definition 2.5, we call any process satisfying (1)−(4) a Lévy process in law. In the following, “Lévy process” simply means “Lévy process in law”. It is known (see e.g. Sato [73, Theorem 7.10(i)]) that if {X t } is a Lévy process on \(\mathbb{R}^{d}\), then for any \(t \geq 0,\mathcal{L}(X_{t}) \in ID(\mathbb{R}^{d})\) and if we let \(\mathcal{L}(X_{1}) =\mu\), then \(\mathcal{L}(X_{t}) =\mu ^{t{\ast}}\), where μ t∗ is the distribution with characteristic function \(\hat{\mu }(z)^{t}\). Thus the distribution of a Lévy process {X t } is determined by that of X 1. Further, a stochastic process {X t , t ≥ 0} on \(\mathbb{R}^{d}\) is called an additive process (in law), if (1), (2) and (4) are satisfied.
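The relation \(\mathcal{L}(X_{t}) =\mu ^{t{\ast}}\) can be checked on the level of characteristic functions. A numerical sketch (not in the original; the gamma distribution with shape a and scale 1, whose characteristic function is \((1 - iz)^{-a}\), and the values a = 1.5, t = 0.7 are illustrative choices):

```python
import numpy as np

def gamma_cf(z, shape):
    # characteristic function of Gamma(shape, scale 1): (1 - iz)^(-shape)
    return (1.0 - 1j * z) ** (-shape)

shape, t = 1.5, 0.7
z = np.linspace(-4.0, 4.0, 81)
# L(X_t) = mu^{t*}: its characteristic function is mu_hat(z)^t, which here
# is again a gamma characteristic function, with shape parameter shape * t
assert np.allclose(gamma_cf(z, shape) ** t, gamma_cf(z, shape * t))
```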
Proposition 2.6 (Stochastic Integral with Respect to Lévy Process; See Sato [77, Sect. 3.4])
Let {X t } be a Lévy process on \(\mathbb{R}^{d}\) with \(\mathcal{L}(X_{1}) =\mu _{(A,\nu,\gamma )}\) .
-
(1)
Let f(t) be a real-valued locally square integrable measurable function on \([0,\infty )\) . Then the stochastic integral \(X:=\int _{ 0}^{a}f(t)dX_{t}\) exists and \(\mathcal{L}(X) \in ID(\mathbb{R}^{d})\) . Its cumulant is represented as
$$\displaystyle{ C_{\mathcal{L}(X)}(z) =\int _{ 0}^{a}C_{\mu }(\,f(t)z)dt. }$$
The Lévy-Khintchine triplet (A X ,ν X ,γ X ) of \(\mathcal{L}(X)\) is the following:
$$\displaystyle\begin{array}{rcl} A^{X}& =& \int _{0}^{a}f(t)^{2}dt\,A, {}\\ \nu ^{X}(B)& =& \int _{0}^{a}dt\int _{\mathbb{R}^{d}}1_{B}(\,f(t)x)\nu (dx),\quad B \in \mathcal{B}(\mathbb{R}^{d}\setminus \{0\}), {}\\ \gamma ^{X}& =& \int _{0}^{a}\left (f(t)\gamma +\int _{\mathbb{R}^{d}}f(t)x\left ( \frac{1} {1 + \vert f(t)x\vert ^{2}} - \frac{1} {1 + \vert x\vert ^{2}}\right )\nu (dx)\right )dt. {}\\ \end{array}$$
-
(2)
The improper stochastic integral over \([0,\infty )\) is defined as follows, whenever the limit exists:
$$\displaystyle{X:=\int _{ 0}^{\infty }f(t)dX_{ t} =\lim _{a\rightarrow \infty }\int _{0}^{a}f(t)dX_{ t}\quad \text{in probability.}}$$
Suppose f(t) is locally square integrable on \([0,\infty )\) . Then \(\int _{0}^{\infty }f(t)dX_{t}\) exists if and only if \(\lim _{a\rightarrow \infty }\int _{0}^{a}C_{\mu }(f(t)z)dt\) exists in \(\mathbb{C}\) for all \(z \in \mathbb{R}^{d}\) . We have
$$\displaystyle{C_{\mathcal{L}(X)}(z) =\int _{0}^{\infty }C_{\mu }(\,f(t)z)dt.}$$
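For instance (an illustrative sketch with hypothetical numerical choices, not part of the original text), take μ = N(0, 1), so that \(C_{\mu }(z) = -z^{2}/2\), and \(f(t) = e^{-t}\). Then \(\int _{0}^{\infty }C_{\mu }(e^{-t}z)dt = -z^{2}/4\), i.e. \(\int _{0}^{\infty }e^{-t}dX_{t}\) has law N(0, 1∕2); numerical integration confirms this:

```python
import numpy as np

def C_mu(z):
    # cumulant of N(0,1)
    return -0.5 * z ** 2

z = 1.3
# midpoint rule for the improper integral, truncated at t = 40
dt = 1e-4
t = (np.arange(400000) + 0.5) * dt
integral = np.sum(C_mu(np.exp(-t) * z)) * dt
assert abs(integral - (-z ** 2 / 4.0)) < 1e-6   # matches the cumulant of N(0, 1/2)
```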
Remark 2.7
We will treat many f(t)’s which have a singularity at t = 0: (i) \(f(t) = G_{\alpha,\beta }^{{\ast}}(t)\), t > 0, the inverse function of \(t = G_{\alpha,\beta }(s) =\int _{ s}^{\infty }u^{-\alpha -1}e^{-u^{\beta } }du,s \geq 0\), in Sect. 5.2, which specializes to the kernels of the \(\varUpsilon\)-, Ψ-, \(\mathcal{G}\)- and \(\mathcal{M}\)-mappings in Sect. 4.1; (ii) \(f(t) = t^{-1/\alpha }\), t > 0, which is the kernel of the stable mapping in Sect. 5.4.
3 Some Known Classes of Infinitely Divisible Distributions
As mentioned in Sect. 1, the main concern of this article is to discuss known and new classes of infinitely divisible distributions and characterize them in several ways. We start with some known classes in Sects. 3.2 and 3.3, and show the relationships among themselves in Sect. 3.4.
3.1 Completely Monotone Functions
In the following, the concept of completely monotone function plays an important role. So, we start with the definition of completely monotone function and some properties of it.
Definition 3.1 (Completely Monotone Function)
A function \(\varphi (x)\) on \((0,\infty )\) is completely monotone if it has derivatives \(\varphi ^{(n)}\) of all orders and \((-1)^{n}\varphi ^{(n)}(x) \geq 0,n \in \mathbb{Z}_{+},x > 0\).
Two typical examples of completely monotone functions are e −x and x −p, p > 0.
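The sign pattern of Definition 3.1 for these two examples can be spot-checked symbolically (a sketch, not a proof; the derivative orders and sample points are arbitrary choices):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
p = sp.Rational(3, 2)                       # any p > 0 works

for phi in (sp.exp(-x), x ** (-p)):
    for n in range(6):
        deriv = sp.diff(phi, x, n)
        for val in (sp.Rational(1, 10), 1, 5):
            # complete monotonicity: (-1)^n phi^(n)(x) >= 0 for x > 0
            assert float(((-1) ** n * deriv).subs(x, val)) >= 0.0
```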
Proposition 3.2 (Bernstein’s Theorem. See e.g. Feller [21, Chap. XIII, 4])
A function \(\varphi\) on \((0,\infty )\) is completely monotone if and only if it is the Laplace transform of a measure μ on \((0,\infty )\) .
Proposition 3.3 (See e.g. Feller [21, Chap. XIII, 4])
-
(1)
The product of two completely monotone functions on \((0,\infty )\) is also completely monotone.
-
(2)
If \(\varphi\) is completely monotone on \((0,\infty )\) and if ψ is a positive function with a completely monotone derivative on \((0,\infty )\) , then the composed function \(\varphi (\psi )\) is also completely monotone on \((0,\infty )\) .
3.2 The Classes of Stable and Semi-stable Distributions
Definition 3.4
Let \(\mu \in ID(\mathbb{R}^{d})\).
-
(1)
It is called stable if, for any a > 0, there exist b > 0 and \(c \in \mathbb{R}^{d}\) such that
$$\displaystyle{ \hat{\mu }(z)^{a} =\hat{\mu } (bz)e^{i\langle c,z\rangle }. }$$(8)\(S(\mathbb{R}^{d})\) denotes the class of all stable distributions on \(\mathbb{R}^{d}\).
-
(2)
It is called strictly stable if, for any a > 0, there is b > 0 such that \(\hat{\mu }(z)^{a} =\hat{\mu } (bz).\)
-
(3)
It is called semi-stable if, for some a > 0 with a ≠ 1, there exist b > 0 and \(c \in \mathbb{R}^{d}\) satisfying (8). \(SS(\mathbb{R}^{d})\) denotes the class of all semi-stable distributions on \(\mathbb{R}^{d}\).
-
(4)
It is called strictly semi-stable if, for some a > 0 with a ≠ 1, there exists b > 0 satisfying \(\hat{\mu }(z)^{a} =\hat{\mu } (bz)\).
\(\mu \in \mathcal{P}(\mathbb{R}^{d})\) is called trivial if it is a distribution of a random variable concentrated at one point, otherwise it is called non-trivial. When this one point is \(c \in \mathbb{R}^{d}\), we write μ = δ c .
Theorem 3.5 (See e.g. Sato [73, Theorem 13.15] or Sato [72, Theorem 3.3])
If μ is non-trivial stable, then there exists a unique α ∈ (0,2] such that b = a 1∕α in (8) .
In this case, we say that such a μ is α-stable. The Gaussian and Cauchy distributions are 2-stable and 1-stable, respectively. Note that any trivial distribution is stable in the sense that (8) is satisfied, but α is not uniquely determined. In the following, when we speak of α-stable distributions, we always include all trivial distributions. Also note that trivial distributions other than δ 0 are strictly stable only when α = 1.
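For example (a quick numerical check, not in the original text; a = 3 is an arbitrary choice), the standard Cauchy distribution, with characteristic function \(e^{-\vert z\vert }\), satisfies the strict 1-stability relation \(\hat{\mu }(z)^{a} =\hat{\mu } (az)\):

```python
import numpy as np

def cauchy_cf(z):
    # characteristic function of the standard Cauchy distribution
    return np.exp(-np.abs(z))

a = 3.0
z = np.linspace(-5.0, 5.0, 101)
# strict 1-stability: mu_hat(z)^a = mu_hat(a z), i.e. b = a^{1/1} and c = 0
assert np.allclose(cauchy_cf(z) ** a, cauchy_cf(a * z))
```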
3.3 Some Known Classes of Infinitely Divisible Distributions
We start with the following six classes which are well-studied in the literature. We call Vx an elementary gamma random variable (resp. elementary mixed-exponential random variable, elementary compound Poisson random variable) on \(\mathbb{R}^{d}\) if x is a nonrandom, nonzero element of \(\mathbb{R}^{d}\) and V is a real random variable having gamma distribution (resp. a mixture of a finite number of exponential distributions, compound Poisson distribution whose jump size distribution is uniform on the interval [0, a] for some a > 0).
-
(1)
The class \(U(\mathbb{R}^{d})\) (the Jurek class): \(\mu \in U(\mathbb{R}^{d})\) if and only if \(\mu \in ID(\mathbb{R}^{d})\) and either ν μ = 0 or ν μ ≠ 0 and, in case ν μ ≠ 0, the radial component \(\nu _{\xi }\) of ν μ is expressed as
$$\displaystyle{ \nu _{\xi }(dr) =\ell _{\xi }(r)dr, }$$(9)where \(\ell_{\xi }(r)\) is a nonnegative function measurable in \(\xi \in S\) and nonincreasing on \((0,\infty )\) as a function of r.
The class \(U(\mathbb{R}^{d})\) was introduced by Jurek [31] and \(\mu \in U(\mathbb{R}^{d})\) is called s-selfdecomposable. Jurek [31] proved that \(\mu \in U(\mathbb{R}^{d})\) if and only if for any b > 1 there exists \(\mu _{b} \in ID(\mathbb{R}^{d})\) such that \(\hat{\mu }(z) =\hat{\mu } (b^{-1}z)^{b^{-1} }\hat{\mu }_{b}(z).\) Sato [77] also formulated \(U(\mathbb{R}^{d})\) as the smallest class of distributions on \(\mathbb{R}^{d}\) closed under convolution and weak convergence and containing all distributions of elementary compound Poisson random variables on \(\mathbb{R}^{d}\).
-
(2)
The class \(B(\mathbb{R}^{d})\) (the Goldie–Steutel–Bondesson class): \(\mu \in B(\mathbb{R}^{d})\) if and only if \(\mu \in ID(\mathbb{R}^{d})\) and either ν μ = 0 or ν μ ≠ 0 and, in case ν μ ≠ 0, the radial component \(\nu _{\xi }\) of ν μ is expressed as
$$\displaystyle{ \nu _{\xi }(dr) =\ell _{\xi }(r)dr, }$$(10)where \(\ell_{\xi }(r)\) is a nonnegative function measurable in \(\xi \in S\) and completely monotone on \((0,\infty )\) as a function of r.
Historically, Goldie [23] proved the infinite divisibility of mixtures of exponential distributions and Steutel [82] found the description of their Lévy measures. Then Bondesson [16] studied generalized convolutions of mixtures of exponential distributions on \(\mathbb{R}_{+}\). The resulting class, \(B(\mathbb{R}_{+})\), is the smallest class of distributions on \(\mathbb{R}_{+}\) that contains all mixtures of exponential distributions and that is closed under convolution and weak convergence on \(\mathbb{R}_{+}\). \(B(\mathbb{R}^{d})\) is its generalization by Barndorff-Nielsen et al. [12], where all mixtures of exponential distributions are replaced by all distributions of elementary mixed-exponential random variables on \(\mathbb{R}^{d}\).
-
(3)
The class \(L(\mathbb{R}^{d})\) (the class of selfdecomposable distributions): \(\mu \in L(\mathbb{R}^{d})\) if and only if \(\mu \in ID(\mathbb{R}^{d})\) and either ν μ = 0 or, in case ν μ ≠ 0, the radial component \(\nu _{\xi }\) of ν μ is expressed as
$$\displaystyle{ \nu _{\xi }(dr) = r^{-1}k_{\xi }(r)dr, }$$(11)where \(k_{\xi }(r)\) is a nonnegative function measurable in \(\xi \in S\) and nonincreasing on \((0,\infty )\) as a function of r.
It is known (see e.g. Sato [73, Theorem 15.10]) that \(\mu \in L(\mathbb{R}^{d})\) if and only if for any b > 1, there exists some \(\rho _{b} \in \mathcal{P}(\mathbb{R}^{d})\) such that
$$\displaystyle{ \hat{\mu }(z) =\hat{\mu } (b^{-1}z)\hat{\rho }_{b}(z). }$$(12)
This statement is usually used as the definition of selfdecomposability. The distribution ρ b in (12) can be shown to be infinitely divisible. Hence, we may replace \(\rho _{b} \in \mathcal{P}(\mathbb{R}^{d})\) by \(\rho _{b} \in ID(\mathbb{R}^{d})\) in the previous statement.
-
(4)
The class \(T(\mathbb{R}^{d})\) (the Thorin class): \(\mu \in T(\mathbb{R}^{d})\) if and only if \(\mu \in ID(\mathbb{R}^{d})\) and either ν μ = 0 or ν μ ≠ 0 and, in case ν μ ≠ 0, the radial component \(\nu _{\xi }\) of ν μ is expressed as
$$\displaystyle{ \nu _{\xi }(dr) = r^{-1}k_{\xi }(r)dr, }$$(13)where \(k_{\xi }(r)\) is a nonnegative function measurable in \(\xi \in S\) and completely monotone on \((0,\infty )\) as a function of r.
Originally this class was studied by Thorin [85, 86] when he wanted to prove the infinite divisibility of the Pareto and the log-normal distributions, as mentioned in Sect. 1. The class \(T(\mathbb{R}_{+})\) (resp. \(T(\mathbb{R})\)) is defined as the smallest class of distributions on \(\mathbb{R}_{+}\) (resp. \(\mathbb{R}\)) that contains all positive (resp. positive and negative) gamma distributions and that is closed under convolution and weak convergence on \(\mathbb{R}_{+}\) (resp. \(\mathbb{R}\)). The distributions in \(T(\mathbb{R}_{+})\) are called generalized gamma convolutions (GGCs) and those in \(T(\mathbb{R})\) are called extended generalized gamma convolutions (EGGCs). Thorin showed that the Pareto and the log-normal distributions are GGCs, and thus are selfdecomposable and infinitely divisible. The infinite divisibility of the log-normal distribution was not known before the theory on hyperbolic complete monotonicity which was developed by Thorin.
\(T(\mathbb{R}^{d})\) is a generalization of \(T(\mathbb{R})\) by Barndorff-Nielsen et al. [12], where all positive and negative gamma distributions are replaced by all distributions of elementary gamma random variable on \(\mathbb{R}^{d}\).
-
(5)
The class \(G(\mathbb{R}^{d})\) (the class of type G distributions): \(\mu \in G(\mathbb{R}^{d})\) if and only if \(\mu \in ID(\mathbb{R}^{d})\) and either ν μ = 0 or ν μ ≠ 0 and, in case ν μ ≠ 0, the radial component \(\nu _{\xi }\) of ν μ is expressed as
$$\displaystyle{ \nu _{\xi }(dr) = g_{\xi }(r^{2})dr, }$$(14)where \(g_{\xi }(r)\) is a nonnegative function measurable in \(\xi \in S\) and completely monotone on \((0,\infty )\) as a function of r.
When d = 1, \(\mu \in G(\mathbb{R}) \cap ID_{\mathrm{sym}}(\mathbb{R})\) if and only if \(\mu = \mathcal{L}(V ^{1/2}Z)\), where V > 0, \(\mathcal{L}(V ) \in ID(\mathbb{R})\), Z is a standard normal random variable, and V and Z are independent. When d ≥ 1, \(\mu =\mu _{(A,\nu,\gamma )} \in G(\mathbb{R}^{d}) \cap ID_{\mathrm{sym}}(\mathbb{R}^{d})\) if and only if ν(B) = E[ν 0(Z −1 B)] for some Lévy measure ν 0. (See Maejima and Rosiński [47].) Previously only symmetric distributions in \(G(\mathbb{R}^{d})\) were said to be of type G. In this article, however, we say that any distribution from \(G(\mathbb{R}^{d})\) is of type G.
-
(6)
The class \(M(\mathbb{R}^{d})\) (Aoyama et al. [4]): \(\mu \in M(\mathbb{R}^{d})\) if and only if \(\mu \in ID(\mathbb{R}^{d})\) and either ν μ = 0 or ν μ ≠ 0 and, in case ν μ ≠ 0, the radial component \(\nu _{\xi }\) of ν μ is expressed as
$$\displaystyle{ \nu _{\xi }(dr) = r^{-1}g_{\xi }(r^{2})dr, }$$(15)where \(g_{\xi }(r)\) is a nonnegative function measurable in \(\xi \in S\) and completely monotone on \((0,\infty )\) as a function of r. (Originally, in Aoyama et al. [4], the class \(M(\mathbb{R}^{d})\) was defined within \(ID_{\mathrm{sym}}(\mathbb{R}^{d})\). However, in this article, we do not assume the symmetry of \(\mu \in M(\mathbb{R}^{d})\). The requirement (15) is independent of the symmetry of the distribution.)
This class was introduced by asking what the class becomes if we replace \(g_{\xi }(r^{2})\) in (14) by \(r^{-1}g_{\xi }(r^{2})\), that is, if we multiply the Lévy density by an extra \(r^{-1}\), in the same way as one passes from (1) to (3) and from (2) to (4).
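The radial conditions in (9)–(15) can be spot-checked for concrete Lévy densities. A numerical sketch (not in the original; the sample points and derivative order are arbitrary choices): the gamma distribution has \(\nu (dr) = r^{-1}e^{-r}dr\), and \(k(r) = e^{-r}\) is completely monotone, consistent with membership in the Thorin class, whereas \(e^{-r^{2}}\) already violates the complete-monotonicity sign pattern at the second derivative:

```python
import sympy as sp

r = sp.symbols('r', positive=True)

def cm_sign_pattern_holds(phi, order=6):
    # spot-check (-1)^n phi^(n)(r) >= 0 on a small grid; this is a necessary
    # condition for complete monotonicity, checked up to the given order
    pts = (sp.Rational(1, 10), sp.Rational(1, 2), 1, 3)
    return all(
        float(((-1) ** n * sp.diff(phi, r, n)).subs(r, p)) >= 0.0
        for n in range(order) for p in pts
    )

assert cm_sign_pattern_holds(sp.exp(-r))           # k(r) = e^{-r}: Thorin class
assert not cm_sign_pattern_holds(sp.exp(-r ** 2))  # fails at n = 2, small r
```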
3.4 Relationships Among the Classes
With respect to relationships among the classes mentioned in Sect. 3.3, we have the following.
-
(1)
\(L(\mathbb{R}^{d}) \cup G(\mathbb{R}^{d}) \subsetneq U(\mathbb{R}^{d})\) and \(T(\mathbb{R}^{d}) \subsetneq L(\mathbb{R}^{d})\). (By definition.)
-
(2)
Each class in Sect. 3.3 includes \(S(\mathbb{R}^{d})\). This is because if \(\mu \in S(\mathbb{R}^{d})\), then either A ≠ 0 and ν μ = 0 or A = 0 and \(\nu _{\xi }(dr) = r^{-1-\alpha }dr\) for some α ∈ (0, 2), (see e.g. Sato [73, Theorem 14.3]).
-
(3)
\(T(\mathbb{R}^{d}) \subsetneq B(\mathbb{R}^{d}) \subsetneq G(\mathbb{R}^{d})\). These inclusions follow from the properties of completely monotone functions. It follows from Proposition 3.3(1) that \(T(\mathbb{R}^{d}) \subset B(\mathbb{R}^{d})\). If we put \(g_{\xi }(x) = l_{\xi }(x^{1/2})\), then it follows from Proposition 3.3(2) that \(B(\mathbb{R}^{d}) \subset G(\mathbb{R}^{d})\). The relation \(\subsetneq \) can be shown by choosing suitable Lévy densities.
-
(4)
\(T(\mathbb{R}^{d}) \subsetneq M(\mathbb{R}^{d}) \subsetneq L(\mathbb{R}^{d}) \cap G(\mathbb{R}^{d})\). The proof is as follows (Aoyama et al. [4]):
We first show that \(M(\mathbb{R}^{d}) \subsetneq L(\mathbb{R}^{d}) \cap G(\mathbb{R}^{d})\). Note that \(r^{-1/2}\) is completely monotone and that, by Proposition 3.3(1), the product of two completely monotone functions is also completely monotone. Thus, by the definition of \(M(\mathbb{R}^{d})\), it is clear that \(M(\mathbb{R}^{d}) \subset L(\mathbb{R}^{d}) \cap G(\mathbb{R}^{d})\). To show that \(M(\mathbb{R}^{d})\neq L(\mathbb{R}^{d}) \cap G(\mathbb{R}^{d})\), it is enough to construct \(\mu \in ID(\mathbb{R}^{d})\) such that \(\mu \in L(\mathbb{R}^{d}) \cap G(\mathbb{R}^{d})\) but \(\mu \notin M(\mathbb{R}^{d})\).
First consider the case d = 1. Let \(\mu \in ID(\mathbb{R})\) be such that
$$\displaystyle{\nu _{\mu }(dr) = r^{-1}g(r^{2})dr,\quad r > 0.}$$
For our purpose, it is enough to construct a function \(g: (0,\infty )\mapsto (0,\infty )\) such that (a) r −1∕2 g(r) is completely monotone on \((0,\infty )\) (meaning that the corresponding μ belongs to \(G(\mathbb{R})\)), (b) g(r 2) or, equivalently, g(r) is nonincreasing on \((0,\infty )\) (meaning that the corresponding μ belongs to \(L(\mathbb{R})\)), and (c) g(r) is not completely monotone on \((0,\infty )\) (meaning that the corresponding μ does not belong to \(M(\mathbb{R})\)). We show that
$$\displaystyle{g(r) = r^{-1/2}h(r),\quad \text{where}\quad h(r) = e^{-0.9r} - e^{-r} + 0.1e^{-1.1r},}$$
satisfies the requirements (a)−(c) above.
-
(a)
We have
$$\displaystyle{ r^{-1/2}g(r) =\int _{ 0.9}^{1}e^{-ru}du + 0.1\int _{ 1.1}^{\infty }e^{-ru}du, }$$
which is a sum of two completely monotone functions, and thus r −1∕2 g(r) is completely monotone.
-
(b)
If h(r) is nonincreasing, then so is \(g(r) = r^{-1/2}h(r)\), being the product of two positive nonincreasing functions. To show that h is nonincreasing, we compute
$$\displaystyle\begin{array}{rcl} h'(r)& =& -0.9e^{-0.9r} + e^{-r} - 0.11e^{-1.1r} = -0.9e^{-1.1r}\left [\left (e^{0.1r} - \frac{1} {1.8}\right )^{2} -\frac{0.604} {3.24} \right ] {}\\ & \leq &-0.9e^{-1.1r}\left [\left (1 - \frac{1} {1.8}\right )^{2} -\frac{0.604} {3.24} \right ] = -0.01e^{-1.1r} < 0,\quad r > 0. {}\\ \end{array}$$ -
(c)
To show (c), we see that
$$\displaystyle{ h(r) =\int _{ 0}^{\infty }e^{-ru}Q(du), }$$
where Q is a signed measure such that Q = Q 1 + Q 2 + Q 3 and
$$\displaystyle{Q_{1}(du) =\delta _{0.9}(du),\quad Q_{2}(du) = -\delta _{1}(du),\quad Q_{3}(du) = 0.1\delta _{1.1}(du).}$$
On the other hand,
$$\displaystyle{r^{-1/2} =\pi ^{-1/2}\int _{0}^{\infty }e^{-ru}u^{-1/2}du =\int _{0}^{\infty }e^{-ru}R(du),}$$
where
$$\displaystyle{R(du) =\pi ^{-1/2}u^{-1/2}du,\quad u > 0.}$$
Thus
$$\displaystyle{g(r) = r^{-1/2}h(r) =\int _{0}^{\infty }e^{-ru}U(du),}$$
where
$$\displaystyle{U = R {\ast} Q = R {\ast} Q_{1} + R {\ast} Q_{2} + R {\ast} Q_{3}.}$$
We are going to show that U is a genuinely signed measure, namely, that \(U\left ((a,b)\right ) < 0\) for some interval (a, b). If so, g is not completely monotone by Bernstein’s theorem (Proposition 3.2). We have
$$\displaystyle{U((a,b)) =\pi ^{-1/2}\int _{a}^{b}\left ((u - 0.9)^{-1/2}1_{\{u>0.9\}} - (u - 1)^{-1/2}1_{\{u>1\}} + 0.1(u - 1.1)^{-1/2}1_{\{u>1.1\}}\right )du.}$$
Take (a, b) = (1.15, 1.35). Then
$$\displaystyle\begin{array}{rcl} \pi ^{1/2}U((1.15,1.35))& =& 2(\sqrt{0.45} -\sqrt{0.25}) - 2(\sqrt{0.35} -\sqrt{0.15}) + 0.2(\sqrt{0.25} -\sqrt{0.05}) {}\\ & \approx & 0.342 - 0.409 + 0.055 = -0.012 < 0. {}\\ \end{array}$$
This shows that g is not completely monotone.
A d-dimensional example of \(\mu \in ID(\mathbb{R}^{d})\) such that \(\mu \in L(\mathbb{R}^{d}) \cap G(\mathbb{R}^{d})\) but \(\mu \notin M(\mathbb{R}^{d})\) is given by taking the example of the Lévy measure on \(\mathbb{R}\) constructed above as the radial component of a Lévy measure on \(\mathbb{R}^{d}\). This completes the proof of \(M(\mathbb{R}^{d}) \subsetneq L(\mathbb{R}^{d}) \cap G(\mathbb{R}^{d})\).
We next show that \(T(\mathbb{R}^{d}) \subsetneq M(\mathbb{R}^{d})\). If \(\mu \in T(\mathbb{R}^{d})\), then the radial component of the Lévy measure of μ has the form \(\nu _{\xi }(dr) = r^{-1}k_{\xi }(r)dr\), where \(k_{\xi }\) is completely monotone. By Proposition 3.3(2) and the fact that \(\psi (r) = r^{1/2}\) has a completely monotone derivative, \(g_{\xi }(r):= k_{\xi }(r^{1/2})\) is completely monotone. Thus \(\nu _{\xi }(dr)\) can be written as \(r^{-1}g_{\xi }(r^{2})dr\), where \(g_{\xi }\) is completely monotone, concluding that \(\mu \in M(\mathbb{R}^{d})\).
To show that \(T(\mathbb{R}^{d})\neq M(\mathbb{R}^{d})\), it is enough to find a completely monotone function \(g_{\xi }\) such that \(k_{\xi }(r) = g_{\xi }(r^{2})\) is not completely monotone. The function \(g_{\xi }(r) = e^{-r}\) has this property: although e −r is completely monotone, \((-1)^{2} \frac{d^{2}} {dr^{2}} e^{-r^{2} } < 0\) for small r > 0. This completes the proof of the strict inclusion \(T(\mathbb{R}^{d}) \subsetneq M(\mathbb{R}^{d})\).
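The two key inequalities in the proofs above can be confirmed numerically (an illustrative sketch, not part of the original text; the grids are arbitrary choices): the bound \(h'(r) \leq -0.01e^{-1.1r}\) from step (b), and the negativity of \((d^{2}/dr^{2})e^{-r^{2}} = (4r^{2} - 2)e^{-r^{2}}\) for \(r < 1/\sqrt{2}\):

```python
import numpy as np

def h_prime(r):
    # h'(r) = -0.9 e^{-0.9 r} + e^{-r} - 0.11 e^{-1.1 r}, as in step (b)
    return -0.9 * np.exp(-0.9 * r) + np.exp(-r) - 0.11 * np.exp(-1.1 * r)

r = np.linspace(1e-6, 50.0, 200001)
# step (b): h'(r) <= -0.01 e^{-1.1 r} < 0 on (0, infinity)
assert np.all(h_prime(r) <= -0.01 * np.exp(-1.1 * r) + 1e-12)

s = np.linspace(1e-3, 0.7, 1000)
# T != M: the second derivative of e^{-r^2} is negative for small r
assert np.all((4 * s ** 2 - 2) * np.exp(-s ** 2) < 0)
```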
Remark 3.6
It is important to remark that any distribution in \(L(\mathbb{R})\) is unimodal, (a result by Yamazato [97]), which implies the unimodality of any distribution in \(T(\mathbb{R})\), since \(T(\mathbb{R}) \subset L(\mathbb{R})\).
The following are examples for non-inclusion among classes. (See Schilling et al. [80, Chap. 9].)
-
(5)
\(L(\mathbb{R})\not\subset B(\mathbb{R})\). Let \(\nu _{\mu }(dx) = x^{-1}1_{(0,1)}(x)dx\), x > 0. Then \(k(x) = 1_{(0,1)}(x)\) is nonincreasing and thus \(\mu \in L(\mathbb{R})\), but \(\ell(x) = x^{-1}1_{(0,1)}(x)\) is not completely monotone and thus \(\mu \notin B(\mathbb{R})\). Hence \(L(\mathbb{R})\not\subset B(\mathbb{R})\).
-
(6)
\(B(\mathbb{R})\not\subset L(\mathbb{R})\). Let \(\nu _{\mu }(dx) = e^{-x}dx\), x > 0. Then \(\ell(x) = e^{-x}\) is completely monotone, so \(\mu \in B(\mathbb{R})\), but \(k(x) = xe^{-x}\) is not nonincreasing, so \(\mu \notin L(\mathbb{R})\). Therefore \(B(\mathbb{R})\not\subset L(\mathbb{R})\).
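Both counterexamples can be verified numerically (a sketch, not in the original; the chosen test points are arbitrary): for (5), completely monotone functions are convex, and \(\ell(x) = x^{-1}1_{(0,1)}(x)\) fails convexity across x = 1; for (6), \(k(x) = x\ell(x) = xe^{-x}\) is increasing on (0, 1), hence not nonincreasing:

```python
import numpy as np

# (5): l(x) = x^{-1} 1_{(0,1)}(x) is not convex, hence not completely monotone
def l(x):
    return 1.0 / x if x < 1.0 else 0.0

lam = 4.0 / 7.0                       # (1 - lam)*0.5 + lam*1.2 = 0.9
assert l(0.9) > (1 - lam) * l(0.5) + lam * l(1.2)   # convexity violated

# (6): for nu(dx) = e^{-x} dx we have k(x) = x e^{-x}, which increases on (0,1)
x = np.linspace(0.01, 0.99, 99)
assert np.all(np.diff(x * np.exp(-x)) > 0)
```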
4 Stochastic Integral Mappings and Characterizations of Classes (I)
This section is one of the main subjects of this article, as mentioned in Sect. 1. In Sect. 4.1 we explain six well-studied stochastic integral mappings, and in Sect. 4.2 we characterize the classes of Sect. 3.3 in terms of these mappings.
4.1 Six Stochastic Integral Mappings
For \(\mu \in ID(\mathbb{R}^{d})\), let {X t (μ), t ≥ 0} be the Lévy process with \(\mathcal{L}(X_{1}^{(\mu )}) =\mu\). Let f(t) be a real-valued measurable function on \((0,\infty )\) which is square integrable on [a, b] for any \(0 < a < b < \infty \), and suppose that the stochastic integral \(\int _{0}^{\infty }f(t)dX_{t}^{(\mu )}\) is definable in the sense of Proposition 2.6. Then we can define a mapping μ ↦ Φ f (μ). We denote by \(\mathfrak{D}(\varPhi _{f})\) the domain of Φ f , that is, the class of \(\mu \in ID(\mathbb{R}^{d})\) for which Φ f (μ) is definable, and by \(\mathfrak{R}(\varPhi _{f}) =\varPhi _{f}(\mathfrak{D}(\varPhi _{f}))\) the range of Φ f .
Now the following are well-studied mappings.
-
(1)
\(\mathcal{U}\)-mapping (Jurek [31]). For \(\mu \in \mathfrak{D}(\mathcal{U}) = ID(\mathbb{R}^{d})\), \(\mathcal{U}(\mu ) = \mathcal{L}\left (\int _{0}^{1}tdX_{t}^{(\mu )}\right ).\)
-
(2)
\(\varUpsilon\)-mapping (Barndorff-Nielsen et al. [12]). For \(\mu \in \mathfrak{D}(\varUpsilon ) = ID(\mathbb{R}^{d})\), \(\varUpsilon (\mu ) =\)
\(\mathcal{L}\left (\int _{0}^{1}\log (t^{-1})dX_{t}^{(\mu )}\right ).\)
-
(3)
Φ-mapping (Jurek and Vervaat [38], Sato and Yamazato [79], Wolfe [96]). For \(\mu \in \mathfrak{D}(\varPhi ) = ID_{\log }(\mathbb{R}^{d})\), \(\varPhi (\mu ) = \mathcal{L}\left (\int _{0}^{\infty }e^{-t}dX_{t}^{(\mu )}\right ).\)
-
(4)
Ψ-mapping (Barndorff-Nielsen et al. [12]). Let \(p(s) =\int _{ s}^{\infty }e^{-u}u^{-1}du,s > 0\), and denote its inverse function by p ∗(t). For \(\mu \in \mathfrak{D}(\varPsi ) = ID_{\log }(\mathbb{R}^{d})\),
\(\varPsi (\mu ) = \mathcal{L}\left (\int _{0}^{\infty }p^{{\ast}}(t)dX_{t}^{(\mu )}\right ).\)
-
(5)
\(\mathcal{G}\)-mapping (Maejima and Sato [48]). Let \(g(s) =\int _{ s}^{\infty }e^{-u^{2} }du,s > 0\), and denote its inverse function by g ∗(t). For \(\mu \in \mathfrak{D}(\mathcal{G}) = ID(\mathbb{R}^{d})\), \(\mathcal{G}(\mu ) = \mathcal{L}\left (\int _{0}^{\sqrt{\pi }/2}g^{{\ast}}(t)dX_{ t}^{(\mu )}\right ).\)
-
(6)
\(\mathcal{M}\)-mapping (Maejima and Nakahara [44]). Let \(m(s) =\int _{ s}^{\infty }e^{-u^{2} }u^{-1}du,s > 0\), and denote its inverse function by m ∗(t). For \(\mu \in \mathfrak{D}(\mathcal{M}) = ID_{\log }(\mathbb{R}^{d})\), \(\mathcal{M}(\mu ) = \mathcal{L}\left (\int _{0}^{\infty }m^{{\ast}}(t)dX_{t}^{(\mu )}\right ).\)
In the above, it is easy to see that the domains of the mappings are \(ID(\mathbb{R}^{d})\) when the intervals of the stochastic integrals are finite. However, in the cases where the stochastic integrals are improper at infinity, proofs are needed. As an example, we treat the case of the Φ-mapping below. (For (4), see Barndorff-Nielsen et al. [12, Theorem C], and for (6), see Maejima and Nakahara [44, Theorem 2.3].) Note that in the six examples above, the singularity of the kernel at t = 0 has no influence on the determination of the domains of the mappings.
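As a numerical illustration of the Φ-mapping (a hypothetical sketch of ours, not from the source; the gamma driving process and all parameter values are our choices): for a driving Lévy process with finite second moment, the integral \(\int _{0}^{\infty }f(t)dX_{t}^{(\mu )}\) has mean \(E[X_{1}]\int _{0}^{\infty }f(t)dt\) and variance \(\mathrm{Var}(X_{1})\int _{0}^{\infty }f(t)^{2}dt\), which a crude Riemann-sum simulation reproduces for f(t) = e −t:

```python
import math
import random

rng = random.Random(7)

def phi_sample(T=10.0, dt=0.02):
    # One approximate draw of \int_0^T e^{-t} dX_t for a standard gamma
    # Levy process X (so X_1 ~ Gamma(1,1)), via a left Riemann sum:
    # the increment of X over [t, t+dt] is Gamma(shape=dt, scale=1).
    t, total = 0.0, 0.0
    while t < T:
        total += math.exp(-t) * rng.gammavariate(dt, 1.0)
        t += dt
    return total

samples = [phi_sample() for _ in range(1000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)

# Exact values: E[X_1] * \int_0^infty e^{-t} dt = 1 and
# Var(X_1) * \int_0^infty e^{-2t} dt = 1/2.
print(round(mean, 3), round(var, 3))  # typically near 1.0 and 0.5
```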
To show that \(\mathfrak{D}(\varPhi ) = ID_{\log }(\mathbb{R}^{d})\), we use Proposition 2.6(2) with f(t) = e −t. Let (A, ν, γ) and \((\tilde{A},\tilde{\nu },\tilde{\gamma })\) be the Lévy-Khintchine triplets of μ and Φ(μ), respectively. If we show that \(\tilde{A}\) and \(\tilde{\gamma }\) are finite and that \(\tilde{\nu }\) is a Lévy measure, then Proposition 2.6(2) ensures the existence of the stochastic integral defining Φ(μ).
-
(i)
(Gaussian part): \(\tilde{A} =\int _{ 0}^{\infty }e^{-2t}Adt\) exists.
-
(ii)
(Lévy measure): We are going to show that
$$\displaystyle{\tilde{\nu }(B) =\int _{\mathbb{R}^{d}}\nu (dx)\int _{0}^{\infty }1_{ B}(e^{-t}x)dt,\quad B \in \mathcal{B}(\mathbb{R}^{d}),}$$
satisfies \(\int _{\mathbb{R}^{d}}(\vert x\vert ^{2} \wedge 1)\tilde{\nu }(dx) < \infty \). We have
$$\displaystyle{\int _{\mathbb{R}^{d}}(\vert x\vert ^{2} \wedge 1)\tilde{\nu }(dx) =\int _{\mathbb{R}^{d}}\nu (dx)\int _{0}^{\infty }(e^{-2t}\vert x\vert ^{2} \wedge 1)dt,}$$
where, for \(\vert x\vert \leq 1\),
$$\displaystyle{\int _{0}^{\infty }(e^{-2t}\vert x\vert ^{2} \wedge 1)dt =\int _{0}^{\infty }e^{-2t}\vert x\vert ^{2}dt = \frac{1} {2}\vert x\vert ^{2},}$$
and, for \(\vert x\vert > 1\),
$$\displaystyle{\int _{0}^{\infty }(e^{-2t}\vert x\vert ^{2} \wedge 1)dt =\log \vert x\vert +\int _{\log \vert x\vert }^{\infty }e^{-2t}\vert x\vert ^{2}dt =\log \vert x\vert + \frac{1} {2}.}$$
Thus,
$$\displaystyle{\int _{\mathbb{R}^{d}}(\vert x\vert ^{2} \wedge 1)\tilde{\nu }(dx) < \infty }$$
if and only if \(\int _{\vert x\vert >1}\log \vert x\vert \,\nu (dx) < \infty \), that is, \(\mu \in ID_{\log }(\mathbb{R}^{d})\).
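The finiteness computation for \(\tilde{\nu }\) rests on the elementary identity \(\int _{0}^{\infty }(e^{-2t}\vert x\vert ^{2} \wedge 1)dt = \vert x\vert ^{2}/2\) for \(\vert x\vert \leq 1\) and \(=\log \vert x\vert + 1/2\) for \(\vert x\vert > 1\), which is where the logarithmic moment condition comes from. A quick numerical check (an illustrative sketch, not part of the source):

```python
import math

def inner(x, n=100000, T=30.0):
    # left Riemann sum of \int_0^T (e^{-2t} x^2 /\ 1) dt;
    # the tail beyond T = 30 is negligible for moderate x
    dt = T / n
    return sum(min(math.exp(-2 * i * dt) * x * x, 1.0) for i in range(n)) * dt

def exact(x):
    # claimed closed form: x^2/2 for x <= 1, log(x) + 1/2 for x > 1
    return x * x / 2 if x <= 1 else math.log(x) + 0.5

for x in (0.3, 1.0, 2.0, 10.0):
    print(x, round(inner(x), 4), round(exact(x), 4))
```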
-
(iii)
(γ-part): To complete the proof, it is enough to show that
$$\displaystyle{\tilde{\gamma }=\int _{ 0}^{\infty }e^{-t}dt \cdot \gamma +\int _{ \mathbb{R}^{d}}\nu (dx)\int _{0}^{\infty }e^{-t}x\left ( \frac{1} {1 + \vert e^{-t}x\vert ^{2}} - \frac{1} {1 + \vert x\vert ^{2}}\right )dt < \infty,}$$
whenever \(\int _{\vert x\vert >1}\log \vert x\vert \nu (dx) < \infty \). The first integral is trivial. As to the second integral, we have
Here
and
where
and
The proof is completed.
4.2 Characterization of Classes as the Ranges of the Mappings
The six classes of infinitely divisible distributions in Sect. 3.3 can be characterized as the ranges of the mappings discussed in the previous section, as follows.
Proposition 4.1
We have the following.
-
(1)
\(U(\mathbb{R}^{d}) = \mathcal{U}(ID(\mathbb{R}^{d}))\) . (Jurek [31] .)
-
(2)
\(B(\mathbb{R}^{d}) =\varUpsilon (ID(\mathbb{R}^{d}))\) . (Barndorff-Nielsen et al. [12] .)
-
(3)
\(L(\mathbb{R}^{d}) =\varPhi (ID_{\log }(\mathbb{R}^{d}))\) . (Jurek and Vervaat [38] , Sato and Yamazato [79] , Wolfe [96] .)
-
(4)
\(T(\mathbb{R}^{d}) =\varPsi (ID_{\log }(\mathbb{R}^{d}))\) . (Barndorff-Nielsen et al. [12] .)
-
(5)
\(G(\mathbb{R}^{d}) = \mathcal{G}(ID(\mathbb{R}^{d}))\) . (Aoyama and Maejima [3] for symmetric case and Maejima and Sato [48] for general case.)
-
(6)
\(M(\mathbb{R}^{d}) = \mathcal{M}(ID_{\log }(\mathbb{R}^{d}))\) . (Aoyama et al. [4] for symmetric case and Maejima and Nakahara [44] for general case.)
For the readers’ convenience, we give here the proof of (3) \(L(\mathbb{R}^{d}) =\varPhi (ID_{\log }(\mathbb{R}^{d}))\) as an example. We show that \(L(\mathbb{R}^{d}) \supset \varPhi (ID_{\log }(\mathbb{R}^{d}))\) and that \(L(\mathbb{R}^{d}) \subset \varPhi (ID_{\log }(\mathbb{R}^{d}))\), separately.
-
(a)
(\(L(\mathbb{R}^{d}) \supset \varPhi (ID_{\log }(\mathbb{R}^{d}))\)): Suppose that \(\mu = \mathcal{L}\left (\int _{0}^{\infty }e^{-t}dX_{t}\right )\) for some Lévy process {X t } satisfying that \(\mathcal{L}(X_{1}) \in ID_{\log }(\mathbb{R}^{d})\). Let b > 1 and let \(\{\tilde{X} _{t}\}\) be an independent copy of {X t }. In what follows, the notation \(\mathop{=}\limits^{\mathrm{ d}}\) means the equality in law. We have
$$\displaystyle{b^{-1}\int _{ 0}^{\infty }e^{-t}d\tilde{X}_{ t} =\int _{ 0}^{\infty }e^{-(t+\log b)}d\tilde{X}_{ t}\mathop{ =}\limits^{\mathrm{ d}}\int _{\log b}^{\infty }e^{-t}dX_{ t},}$$
and
which shows the relation (12) and \(\mu \in L(\mathbb{R}^{d})\).
-
(b)
(\(L(\mathbb{R}^{d}) \subset \varPhi (ID_{\log }(\mathbb{R}^{d}))\)): We need a lemma on selfsimilar additive processes.
Definition 4.2
Let H > 0. A stochastic process {X t , t ≥ 0} on \(\mathbb{R}^{d}\) is H-selfsimilar if for any c > 0, \(\{X_{ct}\}\mathop{ =}\limits^{\mathrm{ d}}\{c^{H}X_{t}\}\).
Lemma 4.3 (Sato [71])
\(\mu \in L(\mathbb{R}^{d})\) if and only if there exists a 1-selfsimilar additive process {Y t } such that \(\mathcal{L}(Y _{1}) =\mu\) .
The following proof is due to Jeanblanc et al. [29]. Let \(\mu \in L(\mathbb{R}^{d})\). By Lemma 4.3, there exists a 1-selfsimilar additive process {Y t } such that \(\mathcal{L}(Y _{1}) =\mu\). Define
Since {Y t } is additive, {X t } is also additive. Further, for h > 0,
Thus {X t } is a Lévy process. By (16),
and thus
implying that \(\mathcal{L}(X_{1}) \in \mathfrak{D}(\varPhi )\) and \(\mu = \mathcal{L}\left (\int _{0}^{\infty }e^{-v}dX_{v}\right )\) so that \(\mu \in \varPhi (ID_{\log }(\mathbb{R}^{d}))\).
5 Stochastic Integral Mappings and Characterizations of Classes (II)
Some other mappings in addition to the six mappings in Sect. 4.1 above will be explained in this section. Let
5.1 Φ α -Mapping
We define Φ α -mapping as follows:
The domain of Φ α is given as
(For \(\alpha \in (0,1) \cup (1,2)\), see Sato [75, Theorem 2.4], and for α = 1, Sato [77, Theorem 4.4].)
Here we introduce a notion of “weak mean” for later use.
Definition 5.1 (The Weak Mean of \(\mu \in ID(\mathbb{R}^{d})\) [77, Definition 3.6])
Let \(\mu =\mu _{(A,\nu,\gamma )} \in ID(\mathbb{R}^{d})\). It is said that μ has weak mean m μ if
and
The range of Φ α is as follows.
Theorem 5.2 (Sato [77, Theorem 4.18]. \(\mathfrak{R}(K_{1,\alpha })\) in the Notation There)
Let 0 < α < 2. Then \(\mu \in \mathfrak{R}(\varPhi _{\alpha })\) if and only if \(\mu \in ID(\mathbb{R}^{d})\) and either ν μ = 0 or ν μ ≠ 0, and in case ν μ ≠ 0, the radial component \(\nu _{\xi }\) of ν μ is expressed as, for some \(k_{\xi }(r)\) which is a nonnegative function measurable in \(\xi\) and nonincreasing on \((0,\infty )\) as a function of r,
-
(1)
\((\alpha < 1)\ \nu _{\xi }(dr) = r^{-\alpha -1}k_{\xi }(r)dr\) ,
-
(2)
\((\alpha = 1)\ \nu _{\xi }(dr) = r^{-2}k_{\xi }(r)dr\) , and the weak mean of μ is 0,
-
(3)
\((1 <\alpha < 2)\ \nu _{\xi }(dr) = r^{-\alpha -1}k_{\xi }(r)dr\) , and the mean of μ is 0.
We introduce the class \(L^{\langle \alpha \rangle }(\mathbb{R}^{d})\) (the class of α-selfdecomposable distributions). Let \(\alpha \in \mathbb{R}\). We say that \(\mu \in ID(\mathbb{R}^{d})\) is α-selfdecomposable if for any b > 1 there exists \(\rho _{b} \in ID(\mathbb{R}^{d})\) satisfying
Theorem 5.3 (Maejima and Ueda [52])
-
(1)
For \(\beta <\alpha,\quad L^{\langle \beta \rangle }(\mathbb{R}^{d}) \supset L^{\langle \alpha \rangle }(\mathbb{R}^{d})\) .
-
(2)
For \(\alpha > 2\), \(L^{\langle \alpha \rangle }(\mathbb{R}^{d}) =\{\delta _{\gamma }: \gamma \in \mathbb{R}^{d}\}\) .
-
(3)
\(L^{\langle 2\rangle }(\mathbb{R}^{d}) =\{ \text{all Gaussian distributions}\}\) .
-
(4)
\(L^{\langle \alpha \rangle }(\mathbb{R}^{d})\) is left-continuous in \(\alpha \in \mathbb{R}\) , namely,
$$\displaystyle{\bigcap _{\beta <\alpha }L^{\langle \beta \rangle }(\mathbb{R}^{d}) = L^{\langle \alpha \rangle }(\mathbb{R}^{d})\quad \text{for all }\alpha \in \mathbb{R}.}$$ -
(5)
Let \(\alpha \in (-\infty,2)\) . Then, \(\mu \in L^{\langle \alpha \rangle }(\mathbb{R}^{d})\) if and only if \(\mu \in ID(\mathbb{R}^{d})\) and either ν μ = 0 or ν μ ≠ 0 and, in case ν μ ≠ 0, the radial component \(\nu _{\xi }\) of ν μ is expressed as
$$\displaystyle{ \nu _{\xi }(dr) = r^{-\alpha -1}\ell_{ \xi }(r)dr, }$$(18)
where \(\ell_{\xi }(r)\) is a nonnegative function which is measurable in \(\xi\) , and nonincreasing on \((0,\infty )\) as a function of r.
Remark 5.4
-
(1)
We have \(L^{\langle -1\rangle }(\mathbb{R}^{d}) = U(\mathbb{R}^{d})\) and \(L^{\langle 0\rangle }(\mathbb{R}^{d}) = L(\mathbb{R}^{d})\). Thus by Theorem 5.3(1), if α < −1, then \(L^{\langle \alpha \rangle }(\mathbb{R}^{d}) \supset U(\mathbb{R}^{d}) \supset L(\mathbb{R}^{d})\).
-
(2)
Another class bigger than \(U(\mathbb{R}^{d})\) is
$$\displaystyle{A(\mathbb{R}^{d}):= \left \{\varPhi _{\cos }(\mu ) = \mathcal{L}\left (\int _{ 0}^{1}\cos (2^{-1}\pi t)dX_{ t}^{(\mu )}\right )\,\,:\,\,\mu \in ID(\mathbb{R}^{d})\right \}.}$$(See Maejima et al. [58, Theorem 2.6].)
-
(3)
It is an open problem to find the relationship between \(L^{\langle \alpha \rangle }(\mathbb{R}^{d}),\alpha < -1,\) and \(A(\mathbb{R}^{d})\).
The relations between the mappings Φ α and the classes \(L^{\langle \alpha \rangle }(\mathbb{R}^{d})\) are as follows. (The case α = 0 is nothing but Proposition 4.1(3).)
Theorem 5.5 (Maejima et al. [57, Theorem 4.6])
Let α < 0. \(\tilde{\mu }\in L^{\langle \alpha \rangle }(\mathbb{R}^{d})\) if and only if \(\tilde{\mu }=\varPhi _{\alpha }(\mu )\) for some \(\mu \in ID(\mathbb{R}^{d})\) .
Theorem 5.6 (Maejima and Ueda [52, Theorem 5.1(ii) and (iv)])
Let \(\alpha \in (0,1) \cup (1,2)\) .
-
(1)
When 0 < α < 1, \(\tilde{\mu }\in L^{\langle \alpha \rangle }(\mathbb{R}^{d})\) if and only if
$$\displaystyle{ \tilde{\mu }=\sigma _{\alpha } {\ast}\varPhi _{\alpha }(\mu ), }$$(19)where \(\mu \in ID_{\alpha }(\mathbb{R}^{d})\) and \(\sigma _{\alpha }\) is a strictly α-stable distribution or a trivial distribution; here ∗ denotes convolution.
-
(2)
When 1 < α < 2, \(\tilde{\mu }\in L^{\langle \alpha \rangle }(\mathbb{R}^{d})\) if and only if (19) holds for some \(\mu \in ID_{\alpha }^{0}(\mathbb{R}^{d})\) and some α-stable distribution \(\sigma _{\alpha }\) .
For the case α = 1 we need a slightly different mapping, called the essentially improper stochastic integral, introduced by Sato [74, 76] and defined as
(The term “essentially improper stochastic integral” is changed to “essentially definable improper stochastic integral” in Sato [76].) When f(t) = (1 + t)−1, which is the integrand of Φ 1(μ), we write Φ f, es(μ) as Φ 1, es(μ).
Theorem 5.7
When α = 1, \(\tilde{\mu }\in L^{\langle 1\rangle }(\mathbb{R}^{d})\) if and only if \(\tilde{\mu }=\sigma _{1}{\ast}\tilde{\rho }\) , where \(\tilde{\rho }\in \varPhi _{1,\mathrm{es}}(\rho )\) for some \(\rho \in ID_{1}(\mathbb{R}^{d})\) and \(\sigma _{1}\) is a 1-stable distribution.
Remark 5.8
The classes \(L^{\langle \alpha \rangle }(\mathbb{R}^{d}),\alpha \in \mathbb{R}\), were already studied by many authors before Maejima and Ueda [52]. Alf and O’Connor [2] and O’Connor [62] studied the class of all infinitely divisible distributions on \(\mathbb{R}\) with unimodal Lévy measures with mode 0, and showed that this class equals \(L^{\langle -1\rangle }(\mathbb{R})\). For this class, Alf and O’Connor [2] studied stochastic integral characterizations with respect to Lévy processes. O’Connor [62] studied the decomposability (17) for d = 1 and α = −1, and characterized the class by a limit theorem. O’Connor [61, 63] also studied the classes \(L^{\langle \alpha \rangle }(\mathbb{R}),\alpha \in (-1,2)\). He defined these classes by a condition on the radial components of the Lévy measures, and characterized them by stochastic integrals with respect to Lévy processes, by the decomposability (17) for d = 1, and by limit theorems similar to that for \(L^{\langle -1\rangle }(\mathbb{R})\). Jurek [30, 31, 35] and Iksanov et al. [26] defined and studied the so-called s-selfdecomposable distributions on a real separable Hilbert space H. The totality of s-selfdecomposable distributions, denoted by \(\mathcal{U}(H)\) in their papers, equals \(L^{\langle -1\rangle }(\mathbb{R}^{d})\) when \(H = \mathbb{R}^{d}\). Jurek [32–34] and Jurek and Schreiber [37] studied the classes \(\mathcal{U}_{\beta }(Q),\beta \in \mathbb{R}\), of distributions on a real separable Banach space E, where Q is a linear operator on E with certain properties. These classes equal \(L^{\langle -\beta \rangle }(\mathbb{R}^{d})\) if \(E = \mathbb{R}^{d}\) and Q is the identity operator. They defined the classes \(\mathcal{U}_{\beta }(Q)\) by limit theorems, and studied a decomposability similar to (17) as well as stochastic integral characterizations, although some results are only for the case where Q is the identity operator.
Remark 5.9
Maejima et al. [57] studied the classes \(K_{\alpha }(\mathbb{R}^{d}),\alpha < 2\): \(\mu \in K_{\alpha }(\mathbb{R}^{d})\) if \(\mu \in ID(\mathbb{R}^{d})\) and either ν μ = 0 or ν μ ≠ 0 and, in case ν μ ≠ 0, the radial component \(\nu _{\xi }\) of ν μ is expressed as
where \(\ell_{\xi }(r)\) is a nonnegative function which is measurable in \(\xi\) and nonincreasing on \((0,\infty )\) as a function of r, and \(\ell_{\xi }(\infty ) = 0\). The relation between \(K_{\alpha }(\mathbb{R}^{d})\) and \(L^{\langle \alpha \rangle }(\mathbb{R}^{d})\) for α < 2 is
where \(\mathcal{C}_{\alpha }(\mathbb{R}^{d})\) is the totality of \(\mu \in ID(\mathbb{R}^{d})\) whose Lévy measure ν μ satisfies
\(\lim _{r\rightarrow \infty }r^{\alpha }\int _{\vert x\vert >r}\nu _{\mu }(dx) = 0\). (Maejima and Ueda [52], Maejima et al. [57].)
Remark 5.10
Recall that the difference between \(U(\mathbb{R}^{d})\) and \(B(\mathbb{R}^{d})\) in terms of Lévy measure is that \(\ell_{\xi }(r)\) in (9) is nonincreasing while \(\ell_{\xi }(r)\) in (10) is completely monotone. Also, the difference between \(L(\mathbb{R}^{d})\) and \(T(\mathbb{R}^{d})\) in terms of Lévy measure is that \(k_{\xi }(r)\) in (11) is nonincreasing while \(k_{\xi }(r)\) in (13) is completely monotone. From this point of view, the nonincreasing function \(\ell_{\xi }(r)\) in (20) can be replaced by a completely monotone function \(\ell_{\xi }(r)\) with \(\ell_{\xi }(\infty ) = 0\). Actually, if we do so, we get (31) in Sect. 5.4 later, which leads to the tempered stable distributions of Rosiński [69].
5.2 Ψ α, β -Mapping
We define a more general family of mappings, which we call Ψ α, β -mappings. For β > 0, let
$$\displaystyle{G_{\alpha,\beta }(s) =\int _{ s}^{\infty }u^{-\alpha -1}e^{-u^{\beta } }du,\quad s > 0,}$$
and let s = G α, β ∗(t) be its inverse function. Define Ψ α, β -mapping by
with
where Γ(⋅ ) is the gamma function. These mappings were first introduced by Sato [75] for β = 1 and later by Maejima and Nakahara [44] for general β > 0. By Sato [75] and Maejima and Nakahara [44], the domains \(\mathfrak{D}(\varPsi _{\alpha,\beta })\) are given as follows; they do not depend on the value of β > 0.
The six mappings in Sect. 4.1 are special cases of the Φ α - and Ψ α, β -mappings, as follows.
Remark 5.11
\(\mathcal{U} =\varPhi _{-1}\), \(\varUpsilon =\varPsi _{-1,1}\), Φ = Φ 0, Ψ = Ψ 0, 1, \(\mathcal{G} =\varPsi _{-1,2}\), \(\mathcal{M} =\varPsi _{0,2}\).
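Remark 5.11 can be sanity-checked numerically. The sketch below (illustrative, not from the source) assumes the kernel function \(G_{\alpha,\beta }(s) =\int _{s}^{\infty }u^{-\alpha -1}e^{-u^{\beta }}du\), an assumption consistent with all six special cases; it confirms that \(G_{-1,1}(s) = e^{-s}\), whose inverse is \(\log (t^{-1})\) (the \(\varUpsilon\) kernel), and evaluates \(G_{0,1} = p\) and \(G_{-1,2} = g\) at sample points:

```python
import math

def G(alpha, beta, s, n=100000, U=40.0):
    # midpoint-rule approximation of \int_s^U u^{-alpha-1} e^{-u^beta} du;
    # the tail beyond U = 40 is negligible
    du = (U - s) / n
    total = 0.0
    for i in range(n):
        u = s + (i + 0.5) * du
        total += u ** (-alpha - 1.0) * math.exp(-(u ** beta))
    return total * du

# Upsilon = Psi_{-1,1}: G_{-1,1}(s) = e^{-s}, so its inverse is log(1/t)
for s in (0.1, 1.0, 3.0):
    assert abs(G(-1, 1, s) - math.exp(-s)) < 1e-5

# Psi = Psi_{0,1}: G_{0,1}(s) = p(s) = \int_s^infty u^{-1} e^{-u} du,
# and the G-mapping = Psi_{-1,2}: G_{-1,2}(s) = \int_s^infty e^{-u^2} du
print(round(G(0, 1, 1.0), 6), round(G(-1, 2, 0.5), 6))
```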
5.3 Φ (b)-Mapping
Let b > 1. Define Φ (b)-mapping by
where [t] denotes the largest integer not greater than \(t \in \mathbb{R}\).
\(\mu \in ID(\mathbb{R}^{d})\) is called semi-selfdecomposable if there exist b > 1 and \(\rho \in ID(\mathbb{R}^{d})\) such that \(\hat{\mu }(z) =\hat{\mu } (b^{-1}z)\hat{\rho }(z)\). We call this b a span of μ, and we denote the class of all semi-selfdecomposable distributions with span b by \(L(b, \mathbb{R}^{d})\). From the definitions, \(L(b, \mathbb{R}^{d}) \supsetneq L(\mathbb{R}^{d})\) and \(L(\mathbb{R}^{d}) =\bigcap _{b>1}L(b, \mathbb{R}^{d})\). \(\mu \in L(b, \mathbb{R}^{d})\) is also realized as a limiting distribution of normalized partial sums of independent random variables under the condition of infinite smallness when the limit is taken through a geometric subsequence. A typical example is a semi-stable distribution.
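To make the definition concrete (an illustration of ours, not from the source): for a centered Gaussian μ on \(\mathbb{R}\) with variance σ 2, \(\hat{\mu }(z) = e^{-\sigma ^{2}z^{2}/2}\), so \(\hat{\mu }(z)/\hat{\mu }(b^{-1}z) = e^{-\sigma ^{2}(1-b^{-2})z^{2}/2}\) is again a Gaussian characteristic function. Hence every Gaussian belongs to \(L(b, \mathbb{R})\) for every b > 1, consistent with \(L(\mathbb{R}^{d}) \subset L(b, \mathbb{R}^{d})\):

```python
import cmath

def gauss_cf(z, var):
    # characteristic function of N(0, var)
    return cmath.exp(-0.5 * var * z * z)

b, var = 2.0, 1.5
for z in (-3.0, -0.5, 0.0, 1.0, 4.0):
    ratio = gauss_cf(z, var) / gauss_cf(z / b, var)
    # the cofactor rho-hat is the cf of N(0, var * (1 - b^{-2}))
    assert abs(ratio - gauss_cf(z, var * (1 - b ** -2))) < 1e-12
print("Gaussian is semi-selfdecomposable for span b =", b)
```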
Theorem 5.12
Fix any b > 1. Then, the range \(\mathfrak{R}(\varPhi _{(b)})\) is the class of all semi-selfdecomposable distributions with span b on \(\mathbb{R}^{d}\) , namely,
(For the proof, see Maejima and Ueda [50].)
5.4 Stable Mapping
This section is from Maejima et al. [59]. Let 0 < α < 2. Define a mapping by
Note that the kernel above has a singularity at t = 0 and is not square integrable near t = 0. This affects the determination of the domain of the mapping. The following characterization of \(\mathfrak{D}(\varXi _{\alpha })\) follows from Proposition 5.3 and Example 4.5 of Sato [76].
Theorem 5.13
-
(1)
If 0 < α < 1, then
$$\displaystyle{\mathfrak{D}(\varXi _{\alpha }) = \left \{\mu =\mu _{(0,\nu,0)_{0}} \in ID(\mathbb{R}^{d})\,\,:\,\,\int _{ \mathbb{R}^{d}}\vert x\vert ^{\alpha }\nu (dx) < \infty \right \}.}$$ -
(2)
If α = 1, then
$$\displaystyle\begin{array}{rcl} \mathfrak{D}(\varXi _{1})& \,=\,& \biggl \{\mu =\mu _{(0,\nu,0)_{0}}\,=\,\mu _{(0,\nu,0)_{1}} \in ID(\mathbb{R}^{d})\,\,:\,\,\int _{ \mathbb{R}^{d}}\vert x\vert \ \nu (dx) < \infty,\int _{\mathbb{R}^{d}}x\ \nu (dx) = 0, {}\\ & \lim \limits _{\varepsilon \downarrow 0}& \int _{\vert x\vert \leq 1}x\log (\vert x\vert \vee \varepsilon )\ \nu (dx)\ \text{and}\ \lim _{T\rightarrow \infty }\int _{\vert x\vert >1}x\log (\vert x\vert \wedge T)\ \nu (dx)\ \text{exist}\biggr \}. {}\\ \end{array}$$ -
(3)
If 1 < α < 2, then
$$\displaystyle{\mathfrak{D}(\varXi _{\alpha }) = \left \{\mu =\mu _{(0,\nu,0)_{1}} \in ID(\mathbb{R}^{d})\,\,:\,\,\int _{ \mathbb{R}^{d}}\vert x\vert ^{\alpha }\nu (dx) < \infty \right \}.}$$
Remark 5.14
There is a simple sufficient condition for μ in (2). Namely, \(\mu =\mu _{(0,\nu,\gamma )} \in \mathfrak{D}(\varXi _{1})\) if \(\int _{\mathbb{R}^{d}}\vert x\vert \,\vert \log \vert x\vert \vert \ \nu (dx) < \infty \), \(\int _{\mathbb{R}^{d}}x\ \nu (dx) = 0\), and \(\gamma =\int _{\mathbb{R}^{d}} \frac{x} {1+\vert x\vert ^{2}} \nu (dx)\).
The next theorem gives a full characterization of \(\mathfrak{R}(\varXi _{\alpha })\). \(S_{\alpha }^{0}(\mathbb{R}^{d})\) denotes the class of strictly α-stable distributions on \(\mathbb{R}^{d}\). Note that in the case α = 1, \(\hat{\mu }\) can be written as follows:
where \(\lambda _{1}\) is a finite measure on S and \(\tau \in \mathbb{R}^{d}\), and where \(\int _{S}\xi \lambda _{1}(d\xi ) = 0.\) (See e.g. Sato [73, Theorem 14.10].)
Theorem 5.15 (Maejima et al. [59])
Let 0 < α < 2.
-
(1)
When α ≠ 1, we have
$$\displaystyle{\varXi _{\alpha }(\mathfrak{D}(\varXi _{\alpha })) = S_{\alpha }^{0}(\mathbb{R}^{d}).}$$ -
(2)
When α = 1, we have
$$\displaystyle{ \varXi _{1}(\mathfrak{D}(\varXi _{1})) = \left \{\mu \in S_{1}^{0}(\mathbb{R}^{d})\,\,:\,\,\tau \in \mathrm{ span\ supp}(\lambda _{ 1})\right \}, }$$
where, respectively, \(\lambda _{1}\) and τ are those in (22) . Here \(\mathrm{supp}(\lambda _{1})\) denotes the support of \(\lambda _{1}\) . If \(\lambda _{1} = 0\) , then we put \(\mathrm{span\ supp}(\lambda _{1}) =\{ 0\}\) by convention.
6 Compositions of Stochastic Integral Mappings
The motivation for the paper by Barndorff-Nielsen et al. [12] was to see whether the Thorin class can be realized as the composition of Φ and \(\varUpsilon\), where Φ produces the class of selfdecomposable distributions and \(\varUpsilon\) produces the Goldie-Steutel-Bondesson class. We believed that compositions of stochastic integral mappings would be important and useful in many respects, and this has since been confirmed by several observations. This is why we discuss compositions of stochastic integral mappings here.
Let Φ f and Φ g be two stochastic integral mappings. The composition of two mappings is defined as
with
We have the following.
Theorem 6.1
We have
-
(1)
\(\varPsi =\varPhi \circ \varUpsilon =\varUpsilon \circ \varPhi\) ,
-
(2)
\(\varUpsilon \circ \mathcal{U} = \mathcal{U}\circ \varUpsilon\) ,
-
(3)
\(\mathcal{G}\circ \mathcal{U} = \mathcal{U}\circ \mathcal{G}\) ,
-
(4)
\(\varPhi \circ \mathcal{U} = \mathcal{U}\circ \varPhi\) .
Proof of (1) of Theorem 6.1 (Barndorff-Nielsen et al. [12, Theorem C(ii)])
Note that \(\mathfrak{D}(\varPsi ) = ID_{\log }(\mathbb{R}^{d})\). If \(\mu \in ID_{\log }(\mathbb{R}^{d})\), then
On the other hand, if \(\mu \in ID(\mathbb{R}^{d})\), then
Also note that
(see Barndorff-Nielsen et al. [12, Theorem C(i)]). Thus, if \(\mu \in ID_{\log }(\mathbb{R}^{d})\), then \(\varUpsilon (\mu ) \in ID_{\log }(\mathbb{R}^{d})\) by (23), and hence
and
If we could show
then we can apply Fubini’s theorem to get
meaning \(\varPhi \circ \varUpsilon =\varUpsilon \circ \varPhi\), and
concluding \(\varPhi \circ \varUpsilon =\varPsi\). It remains to prove (24). We need the following lemma.
Lemma 6.2
Let \(\mu =\mu _{(A,\nu,\gamma )} \in ID(\mathbb{R}^{d})\). For each fixed \(z \in \mathbb{R}^{d}\) ,
where c z > 0 is a finite constant depending only on z.
Proof of Lemma 6.2
Let \(g(z,x) = e^{i\langle z,x\rangle } - 1 - \frac{i\langle z,x\rangle } {1+\vert x\vert ^{2}}\). Since
we have
The inequalities
and
conclude the proof of the lemma.
We then have, by Lemma 6.2 that
\(I_{1} < \infty \) and \(I_{2} < \infty \) are trivial. As to I 3,
where
and
As to I 4, we omit the proof, since the basic ideas are the same as for I 3. Equation (24) is thus proved.
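Theorem 6.1(1) can also be illustrated numerically at the level of cumulant functions, using the standard identity \(C_{\varPhi _{f}(\mu )}(z) =\int C_{\mu }(f(t)z)dt\). For Poisson μ (our choice in this hypothetical sketch, not from the source) one has \(C_{\varUpsilon (\mu )}(z) =\int _{0}^{\infty }(e^{isz} - 1)e^{-s}ds = iz/(1 - iz)\) in closed form, so both sides of \(\varPsi =\varPhi \circ \varUpsilon\) reduce to one-dimensional integrals; the sketch compares them with each other and with the closed form \(-\log (1 - iz)\):

```python
import cmath

def quad(f, a, b, n=100000):
    # midpoint rule for a complex-valued integrand on [a, b]
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

z = 1.7  # test point

# cumulant of Upsilon(mu) for Poisson mu: iw/(1 - iw), exact
c_ups = lambda w: 1j * w / (1 - 1j * w)

# Phi after Upsilon: integrate C_{Upsilon(mu)}(e^{-t} z) over t
lhs = quad(lambda t: c_ups(cmath.exp(-t) * z), 0.0, 40.0)

# Psi directly: \int_0^infty (e^{iuz} - 1) u^{-1} e^{-u} du
rhs = quad(lambda u: (cmath.exp(1j * u * z) - 1) / u * cmath.exp(-u),
           1e-12, 40.0)

print(abs(lhs - rhs) < 1e-4)
print(abs(lhs - (-cmath.log(1 - 1j * z))) < 1e-4)  # known closed form
```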
Proof of (2) of Theorem 6.1
We have
Proof of (3) of Theorem 6.1
We have
Proof of (4) of Theorem 6.1
We first show that \(\mathcal{U}(\mu ) \in ID_{\log }(\mathbb{R}^{d})\) if and only if \(\mu \in ID_{\log }(\mathbb{R}^{d})\). Let \(\mu \in ID(\mathbb{R}^{d})\) and \(\tilde{\mu }= \mathcal{U}(\mu )\). We have
where \(h(x) \sim \log \vert x\vert \) as \(\vert x\vert \rightarrow \infty \). Thus, \(\int _{\vert x\vert >2}\log \vert x\vert \nu _{\tilde{\mu }}(dx) < \infty \) if and only if \(\int _{\vert x\vert >2}\log \vert x\vert \nu _{\mu }(dx) < \infty \). Hence, if \(\mu \in ID_{\log }(\mathbb{R}^{d})\), then using \(\mathcal{U}(\mu ) \in ID_{\log }(\mathbb{R}^{d})\), we have
For the applicability of Fubini’s theorem above, we have to check that
By Lemma 6.2, we have
\(I_{1} < \infty \) and \(I_{2} < \infty \) are trivial. We have
where
I 4 can be handled similarly. This completes the proof of (4) of Theorem 6.1.
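A similar numerical sanity check is possible for (4) (an illustration of ours, not from the source). Using the cumulant identity \(C_{\varPhi _{f}(\mu )}(z) =\int C_{\mu }(f(t)z)dt\) and a Fubini argument as in the proof above, both \(\varPhi (\mathcal{U}(\mu ))\) and \(\mathcal{U}(\varPhi (\mu ))\) reduce to the cumulant \(\int _{0}^{1}C_{\mu }(uz)u^{-1}(1 - u)du\); the sketch below checks this agreement for Poisson μ:

```python
import cmath

def quad(f, a, b, n=100000):
    # midpoint rule for a complex-valued integrand on [a, b]
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

z = 2.3
C = lambda w: cmath.exp(1j * w) - 1  # Poisson cumulant function

# cumulant of U(mu): \int_0^1 C(sw) ds, exact in the Poisson case
CU = lambda w: (cmath.exp(1j * w) - 1) / (1j * w) - 1

# Phi∘U: substituting v = e^{-t} turns \int_0^infty C_U(e^{-t}z) dt
# into \int_0^1 C_U(vz) v^{-1} dv
lhs = quad(lambda v: CU(v * z) / v, 1e-9, 1.0)

# the common reduced form \int_0^1 C(uz) u^{-1} (1 - u) du,
# obtained from U∘Phi by Fubini
rhs = quad(lambda u: C(u * z) * (1 - u) / u, 1e-9, 1.0)

print(abs(lhs - rhs) < 1e-4)
```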
The following Proposition 6.3 is a special case of Theorem 3.1 of Sato [75], and Proposition 6.4 can be proved similarly; nevertheless, we give a proof of Proposition 6.4 here.
Proposition 6.3
Let
and let k ∗ (t) be its inverse function. Define a mapping \(\mathcal{K}\) from \(\mathfrak{D}(\mathcal{K})\) into \(ID(\mathbb{R}^{d})\) by
Then \(\mathfrak{D}(\mathcal{K}) = ID(\mathbb{R}^{d})\) and
Proposition 6.4
Let
and let a ∗ (t) be its inverse function. Define a mapping \(\mathcal{A}\) from \(\mathfrak{D}(\mathcal{A})\) into \(ID(\mathbb{R}^{d})\) by
Then \(\mathfrak{D}(\mathcal{A}) = ID(\mathbb{R}^{d})\) and
Proof
With respect to the domain of the \(\mathcal{A}\)-mapping, it is enough to show
but (26) follows from
Next applying Proposition 2.6(1), we have
and
Thus,
and
If we are allowed to exchange the order of the integrations by Fubini’s theorem, then we have
implying \(\mathcal{A}\circ \mathcal{U} = \mathcal{U}\circ \mathcal{A}\), and we have
concluding \(\mathcal{A}\circ \mathcal{U} = \mathcal{G}\). In order to assure the exchange of the order of the integrations by Fubini’s theorem, it is enough to show that
For \(\mu =\mu _{(A,\nu,\gamma )} \in ID(\mathbb{R}^{d})\), we have
where
Hence
The finiteness of \(\int _{0}^{1}du\int _{0}^{\infty }(I_{1} + I_{2})v^{2}e^{-v^{2} }dv\) is trivial. Noting that \(\vert g(z,x)\vert \leq c_{z}\vert x\vert ^{2}(1 + \vert x\vert ^{2})^{-1}\) with a positive constant c z depending on z, we have
where
As to I 4, note that for \(a \in \mathbb{R}\),
since \(\vert b\vert (1 + b^{2})^{-1} \leq 2^{-1}\). Then
This completes the proof of (27).
We can give a more general result than Propositions 6.3 and 6.4.
Theorem 6.5 (Maejima and Ueda [53])
Let \(\mathcal{X}_{\beta }(\mu ) = \mathcal{L}\left (X_{\beta }^{(\mu )}\right ),\beta > 0\) . Then for \(\alpha \in (-\infty,1) \cup (1,2)\) and β > 0,
(Note that \(\mathcal{X}_{\beta }(\mu ) =\mu ^{\beta {\ast}}\).)
Proposition 6.3 is the case of Theorem 6.5 with α = −1 and β = 1, since \(\mathcal{K} = \mathcal{X}_{1} \circ \varPsi _{-2,1} =\varPsi _{-2,1}\), and Proposition 6.4 is the case of Theorem 6.5 with α = −1 and β = 2, since \(\mathcal{A} = \mathcal{X}_{2} \circ \varPsi _{-3,2}\).
7 Nested Subclasses of Classes of Infinitely Divisible Distributions
As already mentioned in Sect. 1, once we have mappings in hand, it is natural to consider their iteration. In our case, this procedure gives nested subclasses of the original class, which, without the use of mappings, were already studied in the case of selfdecomposable distributions by Urbanik [90] and Sato [70].
7.1 Iteration of Mappings
Let Φ f be a stochastic integral mapping. The iteration Φ f m is defined by Φ f 1 = Φ f and
with
We have
implying
and
Therefore, if we write
\(K_{m}^{f}(\mathbb{R}^{d}),m = 2,3,\ldots,\) are nested subclasses of \(K_{1}^{f}(\mathbb{R}^{d}) =\varPhi _{f}(\mathfrak{D}(\varPhi _{f}))\).
With respect to the domains of the mappings, if Φ f is a proper stochastic integral mapping, then \(\mathfrak{D}(\varPhi _{f}^{m}) = ID(\mathbb{R}^{d})\), as mentioned before. For Φ f = Φ or Ψ (which are improper stochastic integral mappings), we have the following.
Lemma 7.1
We have
-
(1)
\(\mathfrak{D}(\varPhi ^{m}) = ID_{\log ^{m}}(\mathbb{R}^{d})\) ,
-
(2)
\(\mathfrak{D}(\varPsi ^{m}) = ID_{\log ^{m}}(\mathbb{R}^{d})\) .
Proof
-
(1)
See e.g. Rocha-Arteaga and Sato [67, Theorem 49].
-
(2)
We first show that
$$\displaystyle{\varUpsilon (\mu ) \in ID_{\log ^{m}}(\mathbb{R}^{d})\,\,\text{if and only if}\,\,\mu \in ID_{\log ^{m }}(\mathbb{R}^{d}).}$$Let \(\mu \in ID(\mathbb{R}^{d})\) and \(\tilde{\mu }=\varUpsilon (\mu )\). We have
$$\displaystyle\begin{array}{rcl} \int _{\vert x\vert >2}\log ^{m}\vert x\vert \nu _{\tilde{\mu }}(dx)& =& \int _{ 0}^{\infty }e^{-s}ds\int _{ \vert x\vert >2/s}\log ^{m}(s\vert x\vert )\nu _{\mu }(dx) {}\\ & =& \int _{\mathbb{R}^{d}}\nu _{\mu }(dx)\int _{2/\vert x\vert }^{\infty }e^{-s}(\log s +\log \vert x\vert )^{m}ds {}\\ & =:& \int _{\mathbb{R}^{d}}h(x)\nu _{\mu }(dx). {}\\ \end{array}$$
Here h(x) = o( | x | 2) as \(\vert x\vert \downarrow 0\) and \(h(x) \sim \log ^{m}\vert x\vert \) as \(\vert x\vert \rightarrow \infty \). Thus, \(\int _{\vert x\vert >2}\log ^{m}\vert x\vert \nu _{\tilde{\mu }}(dx) < \infty \) if and only if \(\int _{\vert x\vert >2}\log ^{m}\vert x\vert \nu _{\mu }(dx) < \infty \). By Theorem 6.1(1), we know that \(\varPsi ^{m} =\varPhi ^{m} \circ \varUpsilon ^{m}\). Since \(\mathfrak{D}(\varUpsilon ^{m}) = ID(\mathbb{R}^{d})\) and \(\mathfrak{D}(\varPhi ^{m}) = ID_{\log ^{m}}(\mathbb{R}^{d})\), we have \(\mathfrak{D}(\varPsi ^{m}) = ID_{\log ^{m}}(\mathbb{R}^{d})\).
7.2 Definitions and Some Properties of Nested Subclasses
Put
Definition 7.2
For \(m = 0,1,2,\ldots\), define
\(U_{m}(\mathbb{R}^{d}) = \mathcal{U}^{m+1}(ID(\mathbb{R}^{d}))\),
\(B_{m}(\mathbb{R}^{d}) = \varUpsilon ^{m+1}(ID(\mathbb{R}^{d}))\),
\(L_{m}(\mathbb{R}^{d}) = \varPhi ^{m+1}(ID_{\log ^{m+1}}(\mathbb{R}^{d}))\),
\(T_{m}(\mathbb{R}^{d}) = \varPsi ^{m+1}(ID_{\log ^{m+1}}(\mathbb{R}^{d}))\),
\(G_{m}(\mathbb{R}^{d}) = \mathcal{G}^{m+1}(ID(\mathbb{R}^{d}))\),
\(M_{m}(\mathbb{R}^{d}) = \mathcal{M}^{m+1}(ID_{\log ^{m+1}}(\mathbb{R}^{d}))\)
and further \(U_{\infty }(\mathbb{R}^{d}) =\bigcap _{ m=0}^{\infty }U_{m}(\mathbb{R}^{d}),B_{\infty }(\mathbb{R}^{d}) =\bigcap _{ m=0}^{\infty }B_{m}(\mathbb{R}^{d}),L_{\infty }(\mathbb{R}^{d}) =\bigcap _{ m=0}^{\infty }L_{m}(\mathbb{R}^{d})\), \(T_{\infty }(\mathbb{R}^{d}) =\bigcap _{ m=0}^{\infty }T_{m}(\mathbb{R}^{d})\), \(G_{\infty }(\mathbb{R}^{d}) =\bigcap _{ m=0}^{\infty }G_{m}(\mathbb{R}^{d})\), \(M_{\infty }(\mathbb{R}^{d}) =\bigcap _{ m=0}^{\infty }M_{m}(\mathbb{R}^{d})\).
Distributions in \(L_{\infty }(\mathbb{R}^{d})\) are called completely selfdecomposable distributions.
We start with the following.
Definition 7.3
A class \(H \subset ID(\mathbb{R}^{d})\) is said to be completely closed in the strong sense (c.c.s.s.), if the following are satisfied.
-
(1)
It is closed under convolution.
-
(2)
It is closed under weak convergence.
-
(3)
If X is an \(\mathbb{R}^{d}\)-valued random variable with \(\mathcal{L}(X) \in H\), then \(\mathcal{L}(aX + b) \in H\) for any a > 0 and \(b \in \mathbb{R}^{d}\).
-
(4)
μ ∈ H implies μ s∗ ∈ H for any s > 0.
Proposition 7.4 (Maejima and Sato [48, Proposition 3.2])
Fix \(0 < a < \infty \) . Suppose that f is square integrable on (0,a) and \(\int _{0}^{a}f(t)dt\neq 0\) . Define a mapping Φ f by
Then the following are true.
-
(1)
\(\mathfrak{D}(\varPhi _{f}) = ID(\mathbb{R}^{d})\) .
-
(2)
If H is c.c.s.s., then \(\varPhi _{f}(H) \subset H\) .
-
(3)
If H is c.c.s.s., then Φ f (H) is also c.c.s.s.
Remark 7.5
-
(1)
Note that Proposition 7.4 can be applied to \(\varUpsilon\)- and \(\mathcal{G}\)-mappings, because in those mappings the stochastic integral is proper, f is square integrable and \(\int _{0}^{a}f(t)dt\neq 0\). Since \(ID(\mathbb{R}^{d})\) is c.c.s.s., \(B(\mathbb{R}^{d})\) and \(G(\mathbb{R}^{d})\) are c.c.s.s.
-
(2)
Proposition 7.4(3) is not necessarily true when \(a = \infty \). Namely, there is a mapping Φ f defined by \(\varPhi _{f}(\mu ) = \mathcal{L}\left (\int _{0}^{\infty }f(t)dX_{t}^{(\mu )}\right )\) such that \(\varPhi _{f}(H \cap \mathfrak{D}(\varPhi _{f}))\) is not closed under weak convergence for some H which is c.c.s.s. Indeed, the mapping Ψ α with 0 < α < 1 in Theorem 4.2 of Sato [75] serves as an example.
-
(3)
However, it is known that when Φ f = Φ, Proposition 7.4(2) and (3) are true with Φ f (H) replaced by \(\varPhi (H \cap \mathfrak{D}(\varPhi ))\), even if \(a = \infty \). See Lemma 4.1 of Barndorff-Nielsen et al. [12]. In particular, \(L_{m}(\mathbb{R}^{d})\) is c.c.s.s. for m = 0, 1, ….
-
(4)
We also have that \(T_{\infty }(\mathbb{R}^{d})\) is c.c.s.s. (Maejima and Sato [48, Lemma 3.8].)
Theorem 7.6
We have the following.
-
(1)
\(B_{m}(\mathbb{R}^{d}) \subset U_{m}(\mathbb{R}^{d})\) ,
-
(2)
\(G_{m}(\mathbb{R}^{d}) \subset U_{m}(\mathbb{R}^{d})\) ,
-
(3)
\(L_{m}(\mathbb{R}^{d}) \subset U_{m}(\mathbb{R}^{d})\) ,
-
(4)
\(T_{m}(\mathbb{R}^{d}) \subset L_{m}(\mathbb{R}^{d})\) .
Proof
-
(1)
We know that \(B_{0}(\mathbb{R}^{d}) \subset U_{0}(\mathbb{R}^{d})\). Suppose that \(B_{m}(\mathbb{R}^{d}) \subset U_{m}(\mathbb{R}^{d})\) for some m ≥ 0, as the induction hypothesis. Then
$$\displaystyle\begin{array}{rcl} B_{m+1}(\mathbb{R}^{d})& =& \varUpsilon ^{m+2}(ID(\mathbb{R}^{d})) =\varUpsilon (\varUpsilon ^{m+1}(ID(\mathbb{R}^{d}))) =\varUpsilon (B_{m}(\mathbb{R}^{d})) {}\\ & \subset &\varUpsilon (U_{m}(\mathbb{R}^{d})) =\varUpsilon (\mathcal{U}^{m+1}(ID(\mathbb{R}^{d}))) = \mathcal{U}^{m+1}(\varUpsilon (ID(\mathbb{R}^{d}))) {}\\ & & \mbox{ (since $\varUpsilon \circ \mathcal{U} = \mathcal{U}\circ \varUpsilon $ (Theorem 6.1(2)))} {}\\ & \subset & \mathcal{U}^{m+1}(\mathcal{U}(ID(\mathbb{R}^{d}))) = U_{m+1}(\mathbb{R}^{d}). {}\\ \end{array}$$ -
(2)
The same proof as above works if we apply the relation \(\mathcal{G}\circ \mathcal{U} = \mathcal{U}\circ \mathcal{G}\) (Theorem 6.1(3)) instead of \(\varUpsilon \circ \mathcal{U} = \mathcal{U}\circ \varUpsilon\).
-
(3)
We know that \(L_{0}(\mathbb{R}^{d}) \subset U_{0}(\mathbb{R}^{d})\). Suppose that \(L_{m}(\mathbb{R}^{d}) \subset U_{m}(\mathbb{R}^{d})\) for some m ≥ 0, as the induction hypothesis. Then
$$\displaystyle\begin{array}{rcl} L_{m+1}(\mathbb{R}^{d})& =& \varPhi ^{m+2}(ID_{\log ^{ m+2}}(\mathbb{R}^{d})) =\varPhi (\varPhi ^{m+1}(ID_{\log ^{ m+2}}(\mathbb{R}^{d}))) {}\\ & \subset &\varPhi (\varPhi ^{m+1}(ID_{\log ^{ m+1}}(\mathbb{R}^{d}))) =\varPhi (L_{ m}(\mathbb{R}^{d}) \cap ID_{\log }(\mathbb{R}^{d})) {}\\ & \subset &\varPhi (U_{m}(\mathbb{R}^{d}) \cap ID_{\log }(\mathbb{R}^{d})) =\varPhi (\mathcal{U}^{m+1}(ID(\mathbb{R}^{d})) \cap ID_{\log }(\mathbb{R}^{d})) {}\\ & =& \varPhi (\mathcal{U}^{m+1}(ID_{\log }(\mathbb{R}^{d}))) = \mathcal{U}^{m+1}(\varPhi (ID_{\log }(\mathbb{R}^{d}))) {}\\ & & \mbox{ (by $\varPhi \circ \mathcal{U} = \mathcal{U}\circ \varPhi $ (Theorem 6.1 (4)))} {}\\ & =& \mathcal{U}^{m+1}(L_{ 0}(\mathbb{R}^{d})) \subset \mathcal{U}^{m+1}(U_{ 0}(\mathbb{R}^{d})) = U_{ m+1}(\mathbb{R}^{d}). {}\\ \end{array}$$ -
(4)
We show \(T_{m}(\mathbb{R}^{d}) \subset L_{m}(\mathbb{R}^{d})\). We can show that, for any m ≥ 0,
$$\displaystyle\begin{array}{rcl} T_{m}(\mathbb{R}^{d})& =& (\varPhi \varUpsilon )^{m+1}(ID_{\log ^{ m+1}}(\mathbb{R}^{d})) = (\varUpsilon ^{m+1}\varPhi ^{m+1})(ID_{\log ^{ m+1}}(\mathbb{R}^{d})) {}\\ & =& \varUpsilon ^{m+1}(L_{ m}(\mathbb{R}^{d})). {}\\ \end{array}$$Then by Proposition 7.4(2) and Remark 7.5(3),
$$\displaystyle{\varUpsilon ^{m+1}(L_{ m}(\mathbb{R}^{d})) \subset L_{ m}(\mathbb{R}^{d}).}$$
The proof is completed.
7.3 Limits of Nested Subclasses
The following is a main result on the limits of nested subclasses.
Theorem 7.7 (Maejima and Sato [48], Aoyama et al. [5])
Let \(\overline{S(\mathbb{R}^{d})}\) be the closure of \(S(\mathbb{R}^{d})\) , where the closure is taken under weak convergence and convolution. We have
To prove this theorem, we start with the following two known results.
Theorem 7.8 (The Class of Completely Selfdecomposable Distributions. Urbanik [90] and Sato [70])
\(L_{\infty }(\mathbb{R}^{d}) = \overline{S(\mathbb{R}^{d})}.\)
Theorem 7.9 (Jurek [35], See Also Maejima and Sato [48])
\(U_{\infty }(\mathbb{R}^{d}) = L_{\infty }(\mathbb{R}^{d}).\)
We also have the following two propositions.
Proposition 7.10
Proof
Trivial from Theorem 7.6.
Proposition 7.11
We have
Proof
It follows from Remark 7.5(1) that \(B_{\infty }(\mathbb{R}^{d})\) and \(G_{\infty }(\mathbb{R}^{d})\) are c.c.s.s., and from Remark 7.5 (4) that \(T_{\infty }(\mathbb{R}^{d})\) is also c.c.s.s. Thus, we have
We know that each class includes \(S(\mathbb{R}^{d})\). Thus,
The proof is completed.
Proof of Theorem 7.7
The statement follows from Theorems 7.8 and 7.9 and Propositions 7.10 and 7.11.
7.4 Limits of the Iterations of Stochastic Integral Mappings
A natural question is whether \(L_{\infty }(\mathbb{R}^{d})\) is the only class which can appear as the limit of iterations of stochastic integral mappings. In this section, we give an answer to this question. We start with the following.
Theorem 7.12 (A Characterization of \(L_{\infty }(\mathbb{R}^{d})\) (Sato [70]))
\(\mu \in L_{\infty }(\mathbb{R}^{d})\) if and only if \(\mu \in ID(\mathbb{R}^{d})\) and
where Γ μ is a measure on (0,2) satisfying
and \(\lambda _{\alpha }\) is a probability measure on S for each α and it is measurable in α. Here Γ μ is unique and so it can be considered a characteristic of μ.
Definition 7.13
For \(A \in \mathcal{B}((0,2))\), define \(L_{\infty }^{A}(\mathbb{R}^{d}):=\{\mu \in L_{\infty }(\mathbb{R}^{d}): \varGamma ^{\mu }\) \(\left ((0,2)\setminus A\right ) = 0\}.\)
Theorem 7.14 (Sato [78], Maejima and Ueda [54])
We have
7.5 Characterizations of Some Nested Subclasses
Here we treat three cases, \(L_{m}(\mathbb{R}^{d})\), \(B_{m}(\mathbb{R}^{d})\) and \(G_{m}(\mathbb{R}^{d})\).
Sato [70] characterized the classes \(L_{m}(\mathbb{R}^{d})\) in terms of \(\nu _{\xi }\) as follows. Recall the functions \(k_{\xi }\) in (11). We call the function \(h_{\xi }(u)\) defined by \(h_{\xi }(u) = k_{\xi }(e^{-u})\) the h-function of μ.
Let f be a real-valued function on \(\mathbb{R}\). For \(\varepsilon > 0,n = 1,2,\ldots,\) denote
Define \(\varDelta _{\varepsilon }^{0}f = f\). We say that \(f(u),u \in \mathbb{R},\) is monotone of order n if \(\varDelta _{\varepsilon }^{j}f \geq 0\) for \(\varepsilon > 0,j = 0,1,2,\ldots,n\).
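Monotonicity of finite order can be probed numerically. The following Python sketch is our own illustration; it assumes the difference convention \(\varDelta _{\varepsilon }f(u) = f(u) - f(u+\varepsilon )\) (the displayed definition of \(\varDelta _{\varepsilon }^{n}f\) is not reproduced in this excerpt), under which \(\varDelta _{\varepsilon }^{j}e^{-u} = e^{-u}(1 - e^{-\varepsilon })^{j} \geq 0\), so \(e^{-u}\) is monotone of every order, while a decreasing function that is concave near 0 fails already at order 2:

```python
import math

def iterated_diff(f, u, eps, n):
    """n-th iterated difference Delta_eps^n f(u), with the (assumed)
    convention Delta_eps f(u) = f(u) - f(u + eps)."""
    if n == 0:
        return f(u)
    return iterated_diff(lambda x: f(x) - f(x + eps), u, eps, n - 1)

def monotone_of_order(f, n, us, epss):
    """Check Delta_eps^j f >= 0 for j = 0,...,n on finite grids."""
    return all(iterated_diff(f, u, e, j) >= -1e-12
               for j in range(n + 1) for u in us for e in epss)

us = [0.1 * k for k in range(30)]
epss = [0.05, 0.2, 1.0]
# e^{-u}: monotone of every order, since Delta_eps^j e^{-u} = e^{-u}(1-e^{-eps})^j
ok_exp = monotone_of_order(lambda u: math.exp(-u), 4, us, epss)
# 1/(1+u^2): decreasing on [0, oo) but concave near 0, so order 2 fails at u = 0
ok_cauchy = monotone_of_order(lambda u: 1.0 / (1.0 + u * u), 2, [0.0], [0.1])
```

A grid check of this kind is of course only a sanity test, not a proof of monotonicity of a given order.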
Theorem 7.15 (Sato [70, Theorem 3.2])
Let m = 1,2,…. Then \(\mu \in L_{m}(\mathbb{R}^{d})\) if and only if \(\mu \in L(\mathbb{R}^{d})\) and the h-function of μ is monotone of order m + 1 for \(\lambda\) -a.e. \(\xi\) , where \(\lambda\) is the measure appearing in (2.3) .
Another characterization of \(L_{m}(\mathbb{R}^{d})\) in terms of the decomposability is the following.
Theorem 7.16 (See Sato [70, Theorem 2.1] and Rocha-Arteaga and Sato [67, Theorem 49])
For \(m = 1,2,\ldots,\infty,\mu \in L_{m}(\mathbb{R}^{d})\) if and only if \(\mu \in L_{0}(\mathbb{R}^{d})\) and for any b > 1, there exists some \(\rho _{b} \in L_{m-1}(\mathbb{R}^{d})\) such that (12) is satisfied.
For the characterization of \(B_{m}(\mathbb{R}^{d})\), we introduce a sequence of functions \(\varepsilon _{m}(x),m = 0,1,2,\ldots.\) For x ≥ 0, let
We have
Theorem 7.17 (Maejima [43])
Let m = 1,2,…. Then \(\mu \in B_{m}(\mathbb{R}^{d})\) if and only if \(\mu \in B_{0}(\mathbb{R}^{d})\) and ν μ is either 0 or it is expressed as
for the Lévy measure ν 0 of some \(\mu _{0} \in ID(\mathbb{R}^{d})\) .
For the characterization of \(G_{m}(\mathbb{R}^{d})\), we restrict ourselves to the symmetric distributions, which is easier. For \(m \in \mathbb{N}\), let ϕ m (x) be the probability density function of the product of (m + 1) independent standard normal random variables.
Theorem 7.18 (Aoyama and Maejima [3])
Let \(\mu \in ID_{\mathrm{sym}}(\mathbb{R}^{d})\) . Then for each \(m \in \mathbb{N}\) , \(\mu \in G_{m}(\mathbb{R}^{d})\) if and only if \(\mu \in G_{0}(\mathbb{R}^{d})\) and ν μ is either 0 or it is expressed as
where ν 0 is the Lévy measure of some \(\mu _{0} \in G_{0}(\mathbb{R}^{d})\) .
Another characterization is the following.
Theorem 7.19 (Aoyama and Maejima [3])
Let \(m \in \mathbb{N}\) . A \(\mu \in ID_{\mathrm{sym}}(\mathbb{R}^{d})\) belongs to \(G_{m}(\mathbb{R}^{d})\) if and only if \(\mu \in G_{0}(\mathbb{R}^{d})\) and ν μ is either 0 or it is expressed as
where \(\lambda\) is a symmetric measure on the unit sphere S on \(\mathbb{R}^{d}\) and \(g_{m,\xi }(r)\) is represented as
where \(g_{\xi }(r)\) on \((0,\infty )\) is a jointly measurable function such that \(g_{\xi } = g_{-\xi },\lambda -a.e.\) for any fixed \(\xi \in S\) , \(g_{\xi }(\cdot )\) is completely monotone on \((0,\infty )\) and satisfies
with c independent of \(\xi\) .
7.6 Some Nested Subclasses Appearing in Finance
Carr et al. [19] discussed the problem of pricing options with Lévy processes and Sato processes (which are the selfsimilar additive processes) for asset returns. They then showed the importance of the distributions in \(L_{1}(\mathbb{R}_{+})\) or \(L_{2}(\mathbb{R}_{+})\), and also \(L_{\infty }(\mathbb{R}_{+})\). Actually, some tempered stable distributions belong to \(L_{1}(\mathbb{R}^{d})\) and \(L_{2}(\mathbb{R}^{d})\), which will be seen in Sect. 5.4 later, and Rosiński [69] mentioned that tempered stable processes were introduced in mathematical finance to model stochastic volatility (see e.g. the CGMY model in Carr et al. [18] discussed in Sect. 7.7 later), and that option pricing based on such processes was considered.
7.7 Nested Subclasses of \(L(\mathbb{R}^{d})\) with Continuous Parameter
We have discussed nested subclasses \(L_{m}(\mathbb{R}^{d}),m = 1,2,\ldots,\) of \(L(\mathbb{R}^{d})\). Nguyen Van Thu [91–93] extended \(L_{m}(\mathbb{R}^{d})\) to \(L_{p}(\mathbb{R}^{d})\) by replacing the integers m by positive real numbers p > 0. It turns out that his classes \(L_{p}(\mathbb{R}^{d})\) are special cases of \(L_{p,\alpha }(\mathbb{R}^{d})\), recently studied by Sato [77]. For p > 0 and \(\alpha \in \mathbb{R}\), let
and denote its inverse function by j p, α ∗(t). Define
Then
and the classes \(L_{p}(\mathbb{R}^{d})\) by Nguyen Van Thu [91–93] are
For the details of \(L_{p,\alpha }(\mathbb{R}^{d})\), see Sato [77].
Also note that \(\varepsilon _{\alpha,m}^{{\ast}}(t)\) in Maejima et al. [57] is the same as j p, α ∗(t) above with p = m + 1. Hence, \(L_{m,\alpha }(\mathbb{R}^{d}),m = 1,2,\ldots,\alpha < 2\), in Maejima et al. [57] is the special case of \(L_{p,\alpha }(\mathbb{R}^{d})\) in Sato [77].
8 Examples (I)
All examples in this section are one-dimensional distributions except in Sect. 8.5, and we show which classes such known distributions belong to.
8.1 Gamma, χ 2-, Student t- and F-Distributions
-
(a)
Let \(\varGamma _{c,\lambda }\) be a gamma random variable with parameters c > 0 and \(\lambda > 0\). Namely, \(P(\varGamma _{c,\lambda } \in B) = \lambda ^{c}\varGamma (c)^{-1}\int _{B\cap (0,\infty )}x^{c-1}e^{-\lambda x}dx\). (When c = 1, it is exponential.) In its Lévy-Khintchine representation, the Gaussian part is 0 and the Lévy measure is \(\nu (dr) = ce^{-\lambda r}r^{-1}1_{(0,\infty )}(r)dr\) (see e.g. Steutel and van Harn [83, Chap. III, Example 4.8]). Then \(\mathcal{L}(\varGamma _{c,\lambda }) \in T(\mathbb{R}_{+})\) (from the form of the Lévy measure of \(\mathcal{L}(\varGamma _{c,\lambda })\)), but \(\mathcal{L}(\varGamma _{c,\lambda })\notin L_{1}(\mathbb{R}_{+})\) (Maejima et al. [56, Example 1(i)]).
-
(b)
Let \(n \in \mathbb{N}\) and let Z 1, …, Z n be independent standard normal random variables. The distribution of
$$\displaystyle{\chi ^{2}(n):= Z_{ 1}^{2} + \cdots + Z_{ n}^{2}}$$is called the χ 2-distribution with n degrees of freedom. It is known that
$$\displaystyle{\mathcal{L}(\chi ^{2}(n)) = \mathcal{L}(\varGamma _{ n/2,1/2}),}$$and hence \(\mathcal{L}(\chi ^{2}(n)) \in T(\mathbb{R}_{+})\).
-
(c)
Let Z be the standard normal random variable and χ 2(n) a χ 2-random variable with n degrees of freedom and suppose that they are independent. Then the distribution of
$$\displaystyle{ t(n):= \frac{Z} {\sqrt{\chi ^{2 } (n)/n}} }$$(28) is called the Student t-distribution with n degrees of freedom. Its density is
$$\displaystyle{\mu (dx) = \frac{1} {B(n/2,1/2)\sqrt{n}}\left (1 + \frac{x^{2}} {n} \right )^{-(n+1)/2}dx,}$$where B(⋅ , ⋅ ) is the Beta function. It is known that \(\mathcal{L}(t(n)) \in L(\mathbb{R})\), (see Steutel and van Harn [83, Chap. VI, Theorem 11.15]).
-
(d)
Let χ 1 2(n) and χ 2 2(m) be two independent χ 2-random variables with n and m degrees of freedom, respectively. Then the distribution of
$$\displaystyle{ F(n,m):= \frac{\chi _{1}^{2}(n)/n} {\chi _{2}^{2}(m)/m} }$$(29) is called the F-distribution, and its density is
$$\displaystyle{\mu (dx) = \frac{1} {B(n/2,m/2)x}\left ( \frac{nx} {nx + m}\right )^{n/2}\left (1 - \frac{nx} {nx + m}\right )^{m/2}dx,\quad x > 0.}$$It is known that \(\mathcal{L}(F) \in I(\mathbb{R}_{+})\) (Ismail and Kelker [27]), and moreover that \(\mathcal{L}(F) \in T(\mathbb{R}_{+})\) (Bondesson [16, Example 4.3.1]).
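The identity \(\mathcal{L}(\chi ^{2}(n)) = \mathcal{L}(\varGamma _{n/2,1/2})\) in (b) can be checked pointwise on the densities. The following Python sketch (ours; the function names are only illustrative) compares the two closed forms:

```python
import math

def gamma_density(x, c, lam):
    # Gamma(c, lam) density: lam^c x^{c-1} e^{-lam x} / Gamma(c), x > 0
    return lam ** c * x ** (c - 1) * math.exp(-lam * x) / math.gamma(c)

def chi2_density(x, n):
    # chi^2(n) density: x^{n/2 - 1} e^{-x/2} / (2^{n/2} Gamma(n/2)), x > 0
    return x ** (n / 2 - 1) * math.exp(-x / 2) / (2 ** (n / 2) * math.gamma(n / 2))

# The two densities coincide for c = n/2 and lam = 1/2
agree = all(abs(chi2_density(x, n) - gamma_density(x, n / 2, 0.5)) < 1e-12
            for n in (1, 2, 3, 5, 10) for x in (0.3, 1.0, 2.7, 8.0))
```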
8.2 Logarithm of Gamma Random Variable
It is known that \(\nu _{\xi }\) corresponding to \(\log \varGamma _{c,\lambda }\) is ν 1 = 0 and
(see e.g. Linnik and Ostrovskii [41, Eq. (2.6.13)]). This does not depend on the parameter \(\lambda > 0\).
-
(a)
\(\mathcal{L}(\log \varGamma _{c,\lambda }) \in L(\mathbb{R})\), (Shanbhag and Sreehari [81]). Shanbhag and Sreehari proved the selfdecomposability by showing (12) without using (30). However, once we know (30), we can show it by (11) and (30).
-
(b)
\(\mathcal{L}(\log \varGamma _{c,\lambda }) \in L_{1}(\mathbb{R})\) if c ≥ 1∕2, (Akita and Maejima [1]). It is enough to apply Theorem 7.15.
-
(c)
\(\mathcal{L}(\log \varGamma _{c,\lambda }) \in L_{2}(\mathbb{R})\) if c ≥ 1, (Akita and Maejima [1]). It is enough to apply Theorem 7.15 again.
-
(d)
\(\mathcal{L}(\log \varGamma _{c,1}) \in T(\mathbb{R})\). (See Bondesson [16, p. 112].)
8.3 Symmetrized Gamma Distribution
The symmetrized gamma distribution with parameter c > 0 and \(\lambda > 0\), is written as sym-gamma \((c,\lambda )\). Its characteristic function is \(\varphi (z) = \left (\lambda ^{2}/(\lambda ^{2} + z^{2})\right )^{c}\) and in its Lévy-Khintchine representation, the Gaussian part is 0 and the Lévy measure is \(\nu (dr) = c\vert r\vert ^{-1}e^{-\lambda \vert r\vert }dr,\,\,(r\neq 0)\). (See Steutel and van Harn [83, Chap. V, Example 6.17].) (When c = 1 it is the Laplace distribution.)
We have
-
(a)
sym-gamma \((c,\lambda ) \in T(\mathbb{R})\), (from the form of the Lévy measure above).
Thus
-
(b)
sym-gamma \((c,\lambda ) \in G(\mathbb{R})\), (see Rosiński [68]).
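Since sym-gamma \((c,\lambda )\) is the law of the difference of two independent \(\varGamma _{c,\lambda }\) random variables, its characteristic function is the squared modulus of that of \(\varGamma _{c,\lambda }\). A Python sketch (our own check; the parameter values are arbitrary) verifies the closed form \(\left (\lambda ^{2}/(\lambda ^{2} + z^{2})\right )^{c}\) both algebraically and by Monte Carlo:

```python
import math
import random

c, lam = 1.5, 2.0

def phi_gamma(z):
    # characteristic function of Gamma(c, lam): (lam / (lam - i z))^c
    return (lam / (lam - 1j * z)) ** c

def phi_sym(z):
    # claimed characteristic function of sym-gamma(c, lam)
    return (lam ** 2 / (lam ** 2 + z ** 2)) ** c

# |phi_gamma(z)|^2 should equal phi_sym(z)
alg_ok = all(abs(abs(phi_gamma(z)) ** 2 - phi_sym(z)) < 1e-12
             for z in (0.0, 0.5, 1.0, 3.0))

# Monte Carlo: sample G - G' for independent gammas (scale 1/lam)
random.seed(7)
z0, n = 1.0, 200_000
est = sum(math.cos(z0 * (random.gammavariate(c, 1.0 / lam)
                         - random.gammavariate(c, 1.0 / lam)))
          for _ in range(n)) / n
```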
8.4 More Examples Related to Gamma Random Variables
-
(a)
Product of independent gamma random variables. (Steutel and van Harn [83, Chap. VI, Theorem 5.20].) Let \(\varGamma _{1},\varGamma _{2},\ldots,\varGamma _{n}\) be independent gamma random variables, and let \(q_{1},q_{2},\ldots,q_{n} \in \mathbb{R}\) with \(\vert q_{j}\vert \geq 1\). Then
$$\displaystyle{\mathcal{L}(\varGamma _{1}^{q_{1} }\varGamma _{2}^{q_{2} }\cdots \varGamma _{n}^{q_{n} }) \in L(\mathbb{R}_{+}).}$$
(b)
When n = 1 above, we can say more. Namely,
$$\displaystyle{\mathcal{L}(\varGamma _{1}^{q_{1} }) \in T(\mathbb{R}_{+}).}$$(Thorin [87].)
-
(c)
Power of a gamma random variable. (Bosch and Simon [17].) Let Γ be a gamma random variable and p ∈ (−1, 0). Then \(\mathcal{L}(\varGamma ^{p}) \in L(\mathbb{R}_{+})\). The proof is as follows: Let
$$\displaystyle{g(u) = \frac{u\varGamma (1 - p(u + 1))} {\varGamma (1 - pu)},}$$and let X = { X t } be the Lévy process such that
$$\displaystyle{E\left [e^{-uX_{t} }\right ] = e^{-ug(u)},\quad u,t \geq 0.}$$Then by an application of Proposition 2 of Bertoin and Yor [13] (see Bosch and Simon [17] for the details), we have
$$\displaystyle{\varGamma ^{p}\mathop{ =}\limits^{\mathrm{ d}}\int _{ 0}^{\infty }e^{-X_{t} }dt(=: I).}$$Let \(T_{y} =\inf \{ t > 0: X_{t} = y\}\) for every y > 0. The fact that \(X_{t} \rightarrow \infty \) a.s. as \(t \rightarrow \infty \) and the absence of positive jumps assure that \(T_{y} < \infty \) a.s. We thus have
$$\displaystyle{I =\int _{ 0}^{T_{y} }e^{-X_{t} }dt +\int _{ T_{y}}^{\infty }e^{-X_{t} }dt\mathop{ =}\limits^{\mathrm{ d}}\int _{0}^{T_{y} }e^{-X_{t} }dt + e^{-y}\int _{ 0}^{\infty }e^{-X'_{t} }dt,}$$where X′ is an independent copy of X and the second equality follows from the Markov property at T y . This shows that I satisfies (12), and hence \(\varGamma ^{p}\,(\mathop{=}\limits^{\mathrm{ d}}I)\) is selfdecomposable. We remark here that \(\mathcal{L}(\varGamma ^{p})\), \(p \in (0,1)\), is not infinitely divisible. (See Bosch and Simon [17, p. 627].)
-
(d)
Exponential function of a gamma random variable. (Bondesson [16, p. 94].) Let X be a denumerable convolution of gamma random variables \(\varGamma _{c_{j},\lambda _{j}}\) with c j ≥ 1. Then
$$\displaystyle{\mathcal{L}(e^{X}) \in T(\mathbb{R}_{ +}).}$$
(e)
Let Γ be a gamma random variable and let \(a,b \in \mathbb{R}\). Then
$$\displaystyle{\mathcal{L}(a\varGamma + b\varGamma ^{2}) \in T(\mathbb{R}).}$$(Privault and Yang [65].)
8.5 Tempered Stable Distribution
The tempered stable distributions were defined by Rosiński [69]. Let 0 < α < 2. T α is called a tempered α-stable random variable on \(\mathbb{R}^{d}\), if \(\mathcal{L}(T_{\alpha }) =\mu _{(A,\nu,\gamma )}\) is such that A = 0 and ν μ has polar decomposition
where \(q_{\xi }(r)\) is completely monotone in r, measurable in \(\xi\), and \(\lambda (S) < \infty,q_{\xi }(\infty ) = 0\). Because of the assumption that \(q_{\xi }(\infty ) = 0\), \(q_{\xi }(r)\) cannot be constant; thus an α-stable distribution is not tempered α-stable, although it is tempered β-stable for any β < α.
We have the following. It is easy to see by checking (13) for (a) and Theorem 7.15 for (b)–(d). (See Barndorff-Nielsen et al. [12].)
-
(a)
If 0 < α < 2, then \(\mathcal{L}(T_{\alpha }) \in T(\mathbb{R}^{d})\).
-
(b)
If 1∕4 ≤ α < 2, then \(\mathcal{L}(T_{\alpha }) \in L_{1}(\mathbb{R}^{d})\).
-
(c)
If 0 < α < 1∕4, and \(q_{\xi }(r) = c(\xi )e^{-b(\xi )r}\) for all \(\xi\) in a set of positive \(\lambda\)-measure, where \(c(\xi )\) and \(b(\xi )\) are positive measurable functions of \(\xi\), then \(\mathcal{L}(T_{\alpha })\notin L_{1}(\mathbb{R}^{d})\).
-
(d)
If 2∕3 ≤ α < 2, then \(\mathcal{L}(T_{\alpha }) \in L_{2}(\mathbb{R}^{d})\).
8.6 Limits of Generalized Ornstein-Uhlenbeck Processes (Exponential Integrals of Lévy Processes)
-
(a)
Let {(X t , Y t ), t ≥ 0} be a two-dimensional Lévy process. Suppose that {X t } does not have positive jumps, \(0 < E[X_{1}] < \infty \) and \(\mathcal{L}(Y _{1}) \in ID_{\log }(\mathbb{R})\). Then
$$\displaystyle{\mathcal{L}\left (\int _{0}^{\infty }e^{-X_{t-} }dY _{t}\right ) \in L(\mathbb{R}).}$$(Bertoin et al. [15].)
-
(b)
Let {N t } be a Poisson process, and let {Y t } be a strictly stable Lévy process or a Brownian motion with drift. Then
$$\displaystyle{\mathcal{L}\left (\int _{0}^{\infty }e^{-N_{t-} }dY _{t}\right ) \in L(\mathbb{R}).}$$(Kondo et al. [39].)
-
(c)
Let {N t } t ≥ 0 be a Poisson process such that E[N 1] < 1. Then
$$\displaystyle{ \mathcal{L}\left (\int _{0}^{\infty }e^{-(t-N_{t})}dt\right ) \in L(\mathbb{R}) \cap L_{ 1}(\mathbb{R})^{c}. }$$(32)
(Lindner and Maejima [40].) The proof of (32) is as follows: Let X t : = t − N t and \(V:=\int _{ 0}^{\infty }e^{-X_{t}}dt\). For c > 0, let \(\tau _{c}:=\inf \{ t \geq 0: X_{t} = c\}\). Since \(X_{t} \rightarrow \infty \) a.s. as \(t \rightarrow \infty \) and {X t } does not have positive jumps, \(\tau _{c} < \infty \) almost surely. Then
where V c and Y c are independent. We have
Denote
By the strong Markov property, \(\{X_{t} - X_{\tau _{c}}\}_{t>\tau _{c}}\) is independent of Y c and has the same distribution as {X t } t > 0. Thus we conclude that for all c > 0
where \(\mathcal{L}(X_{c}') = \mathcal{L}(V )\). Thus \(\mathcal{L}(V ) \in L(\mathbb{R})\) by (12). But in order that it belong to \(L_{1}(\mathbb{R})\), it is necessary, by Theorem 7.16, that \(\mathcal{L}(Y _{c}) \in L(\mathbb{R})\). This, however, is not the case. For instance, we have
This means that Y 1 has a point mass at \(\int _{0}^{1}e^{-t}dt = 1 - e^{-1}\), but is not a constant, namely, \(\mathcal{L}(Y _{1})\) is a non-trivial distribution with a point mass. Recall that any non-trivial selfdecomposable distribution on \(\mathbb{R}\) must be absolutely continuous (see e.g. Sato [73, Theorem 27.13]), and thus \(\mathcal{L}(Y _{1})\notin L(\mathbb{R})\). We then conclude that \(\mathcal{L}(V )\notin L_{1}(\mathbb{R})\).
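The exponential functional in (32) is easy to simulate, since between the jumps of {N t } the integrand \(e^{-(t-N_{t})}\) is an explicit exponential. The following Python sketch is our own; the closed-form mean \(E[V ] = (1 -\lambda (e - 1))^{-1}\) for jump rate \(\lambda\) with \(\lambda (e - 1) < 1\) is a side computation of ours used only to sanity-check the sampler, and says nothing about selfdecomposability:

```python
import math
import random

def sample_V(rate, horizon, rng):
    """One draw of V = \\int_0^infty e^{-(t - N_t)} dt, truncated at `horizon`,
    for a unit-jump Poisson process {N_t} with the given rate."""
    total, t, n = 0.0, 0.0, 0
    while t < horizon:
        gap = rng.expovariate(rate)          # exponential interarrival time
        end = min(t + gap, horizon)
        # on [t, end) we have N_t = n, so \int_t^end e^{n-s} ds below
        total += math.exp(n - t) - math.exp(n - end)
        t = end
        n += 1
    return total

rng = random.Random(42)
rate, horizon, n_paths = 0.2, 30.0, 20_000
est_mean = sum(sample_V(rate, horizon, rng) for _ in range(n_paths)) / n_paths
# E[V] = \int_0^infty e^{-t} E[e^{N_t}] dt = 1/(1 - rate*(e-1)), if rate*(e-1) < 1
true_mean = 1.0 / (1.0 - rate * (math.e - 1.0))
```

The truncation at a finite horizon is harmless here because the expected tail decays like \(e^{-(1-\lambda (e-1))T}\).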
8.7 Type S Random Variable
For 0 < α < 2, define \(X:= V ^{1/\alpha }Z_{\alpha }\), where V is a positive infinitely divisible random variable and \(Z_{\alpha }\) is a symmetric α-stable random variable on \(\mathbb{R}\), and where V and \(Z_{\alpha }\) are independent. We call X a type S random variable.
Here we explain subordination of Lévy processes.
Theorem 8.1 (Sato [73, Theorem 30.1])
Let {V t ,t ≥ 0} be a subordinator (a nondecreasing Lévy process on \(\mathbb{R}\) ) and let {Z t ,t ≥ 0} be a Lévy process on \(\mathbb{R}^{d}\) , independent of {V t }. Then \(X_{t}:= Z_{V _{t}}\) is a Lévy process on \(\mathbb{R}^{d}\) , and \(\mathcal{L}(X_{t}) \in ID(\mathbb{R}^{d})\) .
The transformation of {Z t } to {X t } is called subordination by the subordinator {V t }.
Theorem 8.2
Let {V t ,t ≥ 0} be a subordinator and let {Z α (t)} be a symmetric α(∈ (0,2])-stable Lévy process on \(\mathbb{R}\) , independent of {V t }. Then if we write V = V 1 ,
Thus, \(\mathcal{L}(V ^{1/\alpha }Z_{\alpha }) \in ID(\mathbb{R})\) , implying that type S random variables are infinitely divisible.
Proof
We compare the characteristic functions on both sides of (33). Note that \(E[e^{izZ_{\alpha }}] =\exp \{ -c\vert z\vert ^{\alpha }\}\) with some c > 0, and for the Lévy process {X t }, \(E[e^{izX_{t}}] = \left (E[e^{izX_{1}}]\right )^{t}\). We then have
and
implying that both sides of (33) are equal in law.
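Theorem 8.2 can be probed numerically in the boundary case α = 2, where \(Z_{\alpha }\) is Gaussian: both \(Z_{V }\) and \(V ^{1/2}Z_{1}\) then have characteristic function \(E[e^{-z^{2}V/2}]\). A Python sketch (ours; we take V standard exponential, which is a positive infinitely divisible choice, and Z standard normal):

```python
import math
import random

random.seed(2024)
n = 200_000
z = 1.0
# V^{1/2} Z has characteristic function E[exp(-z^2 V / 2)];
# for V standard exponential this Laplace transform is 1 / (1 + z^2/2).
acc = 0.0
for _ in range(n):
    v = random.expovariate(1.0)
    g = random.gauss(0.0, 1.0)
    acc += math.cos(z * math.sqrt(v) * g)
est = acc / n
true_val = 1.0 / (1.0 + z * z / 2.0)
```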
Notice that a symmetric stable random variable is of type G. Indeed, we can check, by computing characteristic functions, that
where Z α∕2 + is a positive α∕2-stable random variable.
Theorem 8.3
Type S random variables are of type G.
Proof
By (34), we have
It remains to show that \(\mathcal{L}(V ^{2/\alpha }Z_{\alpha /2}^{+}) \in I(\mathbb{R}_{+})\), but this can be shown in the same way as in the proof of (33), completing the proof.
-
(a)
If \(V \mathop{ =}\limits^{\mathrm{ d}}\varGamma _{1,\lambda }\), then \(\mathcal{L}(Z_{\alpha }(V )) \in \mathcal{G}(T(\mathbb{R})) \subset T(\mathbb{R})\). (See Bondesson [16, p. 38].)
-
(b)
Let \(\lambda > 0\) and {B t } a standard Brownian motion, and let {Z t } be a symmetric stable Lévy process. Then \(\int _{0}^{\infty }e^{-B_{t}-\lambda t}dZ_{t}\) is of type S. (See Maejima and Niiyama [45], Aoyama et al. [4] and Kondo et al. [39].)
8.8 Convolution of Symmetric Stable Distributions of Different Indexes
The characteristic function of the convolution of symmetric stable distributions of different indexes is \(\varphi (z) =\exp \left \{-\int _{(0,2)}\vert z\vert ^{\alpha }m(d\alpha )\right \}\), where m is a measure on the interval (0, 2). It belongs to \(L_{\infty }(\mathbb{R})\). (See e.g. Sato [67].)
8.9 Product of Independent Standard Normal Random Variables
Let Z 1, Z 2, … be independent standard normal random variables.
-
(a)
\(\mathcal{L}(Z_{1}Z_{2}) \in T(\mathbb{R})\). This is because \(\mathcal{L}(Z_{1}Z_{2}) = \mathcal{L}(\text{sym-gamma}(1/2,1))\), (see Steutel and van Harn [83, p. 504]).
-
(b)
(Maejima and Rosiński [46, Example 5.1].) \(\mathcal{L}(Z_{1}\cdots Z_{n}) \in G(\mathbb{R}),\,\,n \geq 2\). The proof is as follows: Recall that if V > 0 with \(\mathcal{L}(V ) \in I(\mathbb{R})\), Z is the standard normal random variable, and V and Z are independent, then \(\mu = \mathcal{L}(V ^{1/2}Z) \in G(\mathbb{R})\). Here we need a lemma.
Lemma 8.4 (Shanbhag and Sreehari [81, Corollary 4])
Let Z be the standard normal random variable and Y a positive random variable independent of Z. Then |Z| p Y is infinitely divisible for any p ≥ 2.
We have \(Z_{1}\cdots Z_{n}\mathop{ =}\limits^{\mathrm{ d}}Z_{1}\vert Z_{2}\cdots Z_{n}\vert \) and | Z 2⋯Z n | 2 is infinitely divisible by Lemma 8.4, which implies that \(\mathcal{L}(Z_{1}\cdots Z_{n}) \in G(\mathbb{R}),\,\,n \geq 2\).
-
(c)
When n = 2, we can say more, namely, \(\mathcal{L}(Z_{1}Z_{2}) \in G_{1}(\mathbb{R})\). (For the proof, see Maejima and Rosiński [46, Example 5.2].)
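The computation behind (a), namely \(E[e^{izZ_{1}Z_{2}}] = E[e^{-z^{2}Z_{2}^{2}/2}] = (1 + z^{2})^{-1/2}\), which is exactly the characteristic function of sym-gamma (1∕2,1), can be checked by Monte Carlo (a Python sketch of ours):

```python
import math
import random

random.seed(11)
n = 200_000
z = 1.0
# E[exp(i z Z1 Z2)] = E[exp(-z^2 Z2^2 / 2)] = (1 + z^2)^{-1/2}
est = sum(math.cos(z * random.gauss(0.0, 1.0) * random.gauss(0.0, 1.0))
          for _ in range(n)) / n
true_val = (1.0 + z * z) ** (-0.5)
# sym-gamma(1/2, 1) characteristic function (lam^2/(lam^2+z^2))^c, c=1/2, lam=1
sym_gamma_cf = (1.0 / (1.0 + z * z)) ** 0.5
```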
9 Examples (II)
In this section, we list examples of distributions in the classes \(L(\mathbb{R}),B(\mathbb{R}),T(\mathbb{R})\) and \(G(\mathbb{R})\), in addition to what we have explained in the previous section.
9.1 Examples in \(L(\mathbb{R})\)
There are many examples in \(L(\mathbb{R})\). The following are some of them.
-
(a)
Let Z be the standard normal random variable, t(n) a Student t-random variable and F(n, m) an F-random variable. Then (i) \(\mathcal{L}(\log \vert Z\vert ) \in L(\mathbb{R})\), (ii) \(\mathcal{L}(\log \vert t\vert ) \in L(\mathbb{R})\) and (iii) \(\mathcal{L}(\log F) \in L(\mathbb{R})\). (Shanbhag and Sreehari [81].) These follow from the following facts:
-
(i)
Since \(\vert Z\vert ^{2}\mathop{ =}\limits^{\mathrm{ d}}\chi ^{2}(1)\), \(\log \vert Z\vert \mathop{ =}\limits^{\mathrm{ d}}\frac{1} {2}\log \chi ^{2}(1)\).
-
(ii)
By (28),
$$\displaystyle{\log \vert t(n)\vert \mathop{ =}\limits^{\mathrm{ d}}\log \vert Z\vert -\frac{1} {2}\log \varGamma _{n/2,1/2} + \frac{1} {2}\log n,}$$where Z and Γ n∕2, 1∕2 are independent.
-
(iii)
By (29),
$$\displaystyle{\log F(n,m)\mathop{ =}\limits^{\mathrm{ d}}\log \varGamma _{n/2,1/2} -\log \varGamma _{m/2,1/2} -\log n +\log m,}$$
where Γ n∕2, 1∕2 and Γ m∕2, 1∕2 are independent.
-
(i)
-
(b)
Let E be a standard exponential random variable and consider \(X\mathop{ =}\limits^{\mathrm{ d}} -\log E\). Then the distribution function G 1 of X is \(G_{1}(x) = e^{-e^{-x} },x \in \mathbb{R}\), called the Gumbel distribution. (See Steutel and van Harn [83, Chap. IV, Example 11.1].) By Sect. 5.2(a), \(\mathcal{L}(X) \in L(\mathbb{R})\). Also \(G_{2}(x) = 1 - e^{-e^{x} },x \in \mathbb{R}\), is selfdecomposable, because G 2(x) = 1 − G 1(−x) and so \(G_{2} = \mathcal{L}(-X)\).
-
(c)
Let Y be a beta random variable. Then \(\mathcal{L}\left (\log Y (1 - Y )^{-1}\right ) \in L(\mathbb{R})\). (Barndorff-Nielsen et al. [11].)
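The Gumbel distribution function in (b) follows from \(P(-\log E \leq x) = P(E \geq e^{-x}) = e^{-e^{-x}}\); a short Python simulation (our own sketch) compares the empirical distribution of −log E with this closed form:

```python
import math
import random

random.seed(5)
n = 100_000
# samples of X = -log E for E standard exponential
samples = [-math.log(random.expovariate(1.0)) for _ in range(n)]

def gumbel_cdf(x):
    # G1(x) = exp(-exp(-x))
    return math.exp(-math.exp(-x))

emp = {x: sum(s <= x for s in samples) / n for x in (-1.0, 0.0, 1.0)}
```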
9.2 Examples in \(L_{1}(\mathbb{R}_{+})\)
The following is Maejima et al. [56, Example 1(ii)]. Let \(\mu \in ID(\mathbb{R}_{+})\) be such that \(k_{+1}(r)\) in (11) is \(cr^{-\alpha }e^{-ar}\), r > 0, with a, c > 0 and 0 < α < 2. Then \(\mu \in L_{1}(\mathbb{R}_{+})\). It is enough to apply Theorem 7.15.
9.3 Examples in \(B(\mathbb{R})\)
-
(a)
(Bondesson [16, p. 143].) Let {Y j } be i.i.d. exponential random variables and N a Poisson random variable independent of {Y j }. Put \(X =\sum _{ j=1}^{N}Y _{j}\). Then \(\mathcal{L}(X) \in B(\mathbb{R}_{+})\).
-
(b)
(Bondesson [16, pp. 143–144].) Let Y = Y (α, β) be a beta random variable with parameters α and β and let \(X = -\log Y\). Then
-
(b1)
\(\mathcal{L}(X) \in B(\mathbb{R}_{+})\).
-
(b2)
\(\mathcal{L}(X) \in L(\mathbb{R}_{+})\) if and only if 2α +β ≥ 1.
-
(b1)
9.4 Examples in \(G(\mathbb{R})\)
More examples of distributions in \(G(\mathbb{R})\) are the following by Fukuyama and Takahashi [22]. Let \(([0,1],\mathfrak{B},\lambda )\) be the Lebesgue probability space with Lebesgue measure \(\lambda\). For any \(\mu \in G(\mathbb{R}) \cap ID_{\mathrm{sym}}(\mathbb{R})\), there exist {a j }, \(A_{n}(\rightarrow \infty )\) and \(\{\beta _{j}\} \subset \mathbb{R}\) such that
converges weakly to μ on the Lebesgue probability space.
9.5 Examples in \(T(\mathbb{R})\)
There are many examples in \(T(\mathbb{R})\). (See e.g. Bondesson [16].) The following are some of them.
-
(a)
(Log-normal distribution.) Let Z be the standard normal random variable and put X = e Z. The distribution of X is called the log-normal distribution, and its density is
$$\displaystyle{\mu (dx) = \frac{1} {\sqrt{2\pi }} \frac{1} {x}\exp \left \{-\frac{1} {2}(\log x)^{2}\right \}1_{ (0,\infty )}(x)dx.}$$The log-normal distribution belongs to \(T(\mathbb{R}_{+})\). (See Steutel and van Harn [83, Chap. VI, Theorems 5.18 and 5.21].)
-
(b)
(Pareto distribution.) Let Γ 1, 1 and Γ c, 1, c > 0 be two independent gamma random variables and put X = Γ 1, 1∕Γ c, 1. Then its density is
$$\displaystyle{\mu (dx) = \frac{1} {B(1,c)}\left ( \frac{1} {1 + x}\right )^{1+c}1_{ (0,\infty )}(x)dx,}$$and the corresponding distribution is called the Pareto distribution and belongs to \(T(\mathbb{R})\). (See Steutel and van Harn [83, Chap. VI, Example 12.9 and Theorems 5.18 and 5.19(ii)].)
-
(c)
Generalized inverse Gaussian distributions belong to \(T(\mathbb{R})\). (See e.g. Bondesson [16, Example 4.3.2].)
-
(d)
Let X α be a positive α-stable random variable with 0 < α < 1. Then \(\mathcal{L}(\log X_{\alpha }) \in T(\mathbb{R})\). (See Bondesson [16, Example 7.2.5].)
-
(e)
(Lévy’s stochastic area X of the two-dimensional Brownian motion. See e.g. Sato [73, Example 15.15].) The density of X is \(f(x) = (\pi \cosh x)^{-1}\) and k ±1(r) in (11) is \(\vert 2\sinh r\vert ^{-1}\). Since \(\vert 2\sinh r\vert ^{-1}\) is completely monotone in \(r \in (0,\infty )\), we have \(\mathcal{L}(X) \in T(\mathbb{R})\). This distribution μ 1 with a slightly different scaling (the density is \(f_{1}(x) = (2\pi \cosh \frac{1} {2}x)^{-1}\)) is called the hyperbolic cosine distribution (see e.g. Steutel and van Harn [83, p. 505], for this and below). It is also known that μ 1 is \(\mathcal{L}(\log (Y/Z))\) with independent Y and Z both of which are Γ 1∕2, 1. The distribution μ 2 with density \(f_{2}(x) = (2\pi ^{2}\sinh \frac{1} {2}x)^{-1}x\) is called the hyperbolic sine distribution. It is known that μ 2 is \(\mathcal{L}(Y + Z)\) with independent Y and Z both of which are distributed as the hyperbolic cosine distribution. \(k_{\xi }(r)\) is \(\vert \sinh r\vert ^{-1}\) up to scaling, and thus also \(\mu _{2} \in T(\mathbb{R})\).
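The Pareto construction in (b) can be simulated directly: integrating the density there gives the distribution function \(1 - (1 + x)^{-c}\) (our own computation, using B(1,c) = 1∕c). A Python sketch:

```python
import random

# X = Gamma(1,1)/Gamma(c,1): density c (1+x)^{-(1+c)}, CDF 1 - (1+x)^{-c}
random.seed(3)
c = 2.0
n = 100_000
samples = [random.expovariate(1.0) / random.gammavariate(c, 1.0)
           for _ in range(n)]
xs = (0.5, 1.0, 3.0)
emp = {x: sum(s <= x for s in samples) / n for x in xs}
cdf = {x: 1.0 - (1.0 + x) ** (-c) for x in xs}
```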
9.6 Examples in \(T(\mathbb{R}) \cap L_{1}(\mathbb{R})^{c}\) (Revisited)
-
(a)
\(\mathcal{L}(\varGamma _{c,\lambda }).\) (Section 8.1.)
-
(b)
\(\mathcal{L}(T_{\alpha })\) if 0 < α < 1∕4 and \(q_{\xi }(r) = c(\xi )e^{-b(\xi )r}\) for all \(\xi\) in a set of positive \(\lambda\)-measure, where \(c(\xi )\) and \(b(\xi )\) are positive measurable functions of \(\xi\). (Section 8.5, (a) and (c).)
10 Examples (III)
The class of GGCs, which is the Thorin class, has been generating renewed interest, since many examples have recently appeared in quite different problems. We explain some of them below.
10.1 The Rosenblatt Process and the Rosenblatt Distribution
Let 0 < D < 1∕2. The Rosenblatt process is defined, for t ≥ 0, as
where \(\{B_{s},s \in \mathbb{R}\}\) is a standard Brownian motion, \(\int _{\mathbb{R}^{2}}'\) is the Wiener-Itô multiple integral on \(\mathbb{R}^{2}\) and C(D) is a normalizing constant. The distribution of Z D (1) is called the Rosenblatt distribution.
The Rosenblatt process is H-selfsimilar with H = 1 − D and has stationary increments. The Rosenblatt process lives in the so-called second Wiener chaos. Consequently, it is not a Gaussian process.
In the last few years, this stochastic process has been the object of several papers. (See Pipiras and Taqqu [64], Tudor [88], Tudor and Viens [89], Veillette and Taqqu [94] among others.)
Let
and for every t ≥ 0 define an integral operator A t by
Since A t is a self-adjoint Hilbert-Schmidt operator (see Dobrushin and Major [20]), all eigenvalues \(\lambda _{n}(t),n = 1,2,\ldots,\) are real and satisfy \(\sum _{n=1}^{\infty }\lambda _{n}^{2}(t) < \infty \).
We start with the following.
Theorem 10.1 (Maejima and Tudor [49])
For every t 1 ,…,t d ≥ 0,
where \(\{\varepsilon _{n}\}\) are i.i.d. standard normal random variables.
The case d = 1 was shown by Taqqu (see Proposition 2 of Dobrushin and Major [20]). For the proof, it is enough to extend Taqqu's idea from one dimension to several dimensions.
Theorem 10.2 (Maejima and Tudor [49])
For every t 1 ,…,t d ≥ 0, the law of (Z D (t 1 ),…,Z D (t d )) belongs to \(T(\mathbb{R}^{d})\) .
Proof
By Theorem 10.1,
where \(\varepsilon _{n}^{2}(\lambda _{n}(t_{1}),\ldots,\lambda _{n}(t_{d})),n = 1,2,\ldots,\) are the elementary gamma random variables in \(\mathbb{R}^{d}\). Since they are independent and since the class \(T(\mathbb{R}^{d})\) is closed under convolution and weak convergence, we see that the law of \(\sum _{n=1}^{\infty }\varepsilon _{n}^{2}(\lambda _{n}(t_{1}),\ldots,\lambda _{n}(t_{d}))\) belongs to \(T(\mathbb{R}^{d})\), and so does the law of (Z D (t 1), …, Z D (t d )). This completes the proof.
In general, let I 2 B(f) be a double Wiener-Itô integral with respect to standard Brownian motion B, where \(f \in L_{\mathrm{sym}}^{2}(\mathbb{R}_{+}^{2})\). Then we have a more general result as follows:
Proposition 10.3
where the series converges in L 2 (Ω) and almost surely. Also
Thus \(\mathcal{L}\left (I_{2}^{B}(f))\right ) \in T(\mathbb{R})\) .
(For the proof, see e.g. Nourdin and Peccati [60].)
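The display of Proposition 10.3 is omitted above; assuming the standard second-chaos expansion \(I_{2}^{B}(f) =\sum _{n}\lambda _{n}(\varepsilon _{n}^{2} - 1)\) with i.i.d. standard normal \(\{\varepsilon _{n}\}\), the mean is 0 and the variance is \(2\sum _{n}\lambda _{n}^{2}\). A small Monte Carlo check (ours, with an arbitrary finite eigenvalue sequence) reproduces these first two cumulants:

```python
import random

# X = sum_n lambda_n (eps_n^2 - 1), eps_n iid standard normal:
# E[X] = 0 and Var(X) = 2 * sum_n lambda_n^2
lams = [0.5, 0.3, 0.2]
random.seed(9)
n = 200_000
acc = acc2 = 0.0
for _ in range(n):
    x = sum(l * (random.gauss(0.0, 1.0) ** 2 - 1.0) for l in lams)
    acc += x
    acc2 += x * x
mean_est = acc / n
var_est = acc2 / n - mean_est ** 2
var_true = 2.0 * sum(l * l for l in lams)
```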
The Rosenblatt distribution is represented by double Wiener-Itô integrals. However, we have seen that it belongs to the Thorin class \(T(\mathbb{R})\). The distributions in \(T(\mathbb{R})\) have several stochastic integral representations with respect to Lévy processes. Here we take one example. We regard them as members of the class of selfdecomposable distributions, which is a larger class than the Thorin class. This allows us to obtain a new result related to the Rosenblatt distribution.
The following is known. (Aoyama et al. [6, Corollary 2.1].) If \(\{\varGamma _{t,\lambda },t \geq 0\}\) is a gamma process with parameter \(\lambda > 0\), {N(t), t ≥ 0} is a Poisson process with unit rate and they are independent, then for any \(c > 0,\lambda > 0\),
Let
Note that {Y t , t ≥ 0} is a Lévy process. Then we have
where Γ 1∕2, 1∕2 (n) and {Y t (n)} are independent copies of Γ 1∕2, 1∕2 and {Y t }, respectively. Thus
Remark 10.4
\(\sum _{n=1}^{\infty }\lambda _{n}Y _{t}^{(n)}\) is convergent a.s. and in \(L^{2}\) because
Remark 10.5
Since {Y t (n)}, n = 1, 2, …, are independent and identically distributed Lévy processes, their infinite weighted sum {Z t } is a Lévy process.
We thus finally have the following theorem.
Theorem 10.6 (Maejima and Tudor [49])
where {Z t } is a Lévy process in Remark 10.5 .
10.2 The Duration of Bessel Excursions Straddling Independent Exponential Times
This section is from Bertoin et al. [14].
Let {R t , t ≥ 0} be a Bessel process with R 0 = 0, with dimension d = 2(1 −α), (0 < α < 1, equivalently 0 < d < 2). When α = 1∕2, {R t } is a Brownian motion. Let
and
which is the length of the excursion above 0, straddling t, for the process {R u , u ≥ 0}, and let \(\varepsilon\) be a standard exponential random variable independent of {R u , u ≥ 0}. Let \(\varDelta _{\alpha }:=\varDelta _{ \varepsilon }^{(\alpha )}\), which is the duration of Bessel excursions straddling independent exponential times.
Theorem 10.7
\(\mathcal{L}(\varDelta _{\alpha }) \in T(\mathbb{R}_{+})\) .
The idea of the proof is the following. They showed that
with a nonnegative random variable G α on [0, 1]. (The density function of G α is explicitly given.) Since \(k(x):= E[e^{-xG_{\alpha }}]\) is completely monotone by Bernstein’s theorem (Proposition 3.2), the statement of the theorem follows from (13).
10.3 Continuous State Branching Processes with Immigration
We start with some general theory on GGCs. Any GGC \(\mu \in T(\mathbb{R}_{+})\) has the Laplace transform:
where γ ≥ 0, \(\int _{0}^{\infty }\frac{(1\wedge x)} {x} k(x)dx < \infty \) and k(x) is completely monotone on \((0,\infty )\). By Bernstein’s theorem (Proposition 3.2), there exists a positive measure \(\sigma\) such that
We call this \(\sigma\) the Thorin measure (see James et al. [28, Sect. 1.2.b]). Therefore, \(\mu \in T(\mathbb{R}_{+})\) can be parameterized by the pair \((\gamma,\sigma )\). Recall
The integrability condition for the Lévy measure ν of a GGC turns out, in terms of \(\sigma\), to be
(see James et al. [28, Eq. (3)]) which is equivalent to
The following is from Handa [24]. Consider continuous state branching processes with immigration (CBCI-processes, for short) with quadruplet (a, b, ρ, δ) having the generator
where ρ is a measure on \((0,\infty )\) satisfying \(\int _{0}^{\infty }(y \wedge y^{2})\rho (dy) < \infty \).
Theorem 10.8
Let γ ≥ 0 and suppose that \(\sigma\) is a non-zero Thorin measure.
(1) There exist (a,b,M) such that
$$\displaystyle{\gamma +\int \frac{1} {s + u}\sigma (du) = \frac{1} {a}s + b +\int \frac{s} {s + u}M(du),\quad s > 0.}$$
(2) Any GGC with pair \((\gamma,\sigma )\) is the unique stationary solution of the CBCI-process with quadruplet (a,b,ρ,1), where ρ is the measure on \((0,\infty )\) defined by
$$\displaystyle{\rho (dy) = \left (\int _{0}^{\infty }u^{2}e^{-yu}M(du)\right )dy.}$$
10.4 Lévy Density of Inverse Local Time of Some Diffusion Processes
This section is from Takemura and Tomisaki [84].
Example 10.9 (See also Schilling et al. [80, p. 201])
Let \(I = (0,\infty )\) and − 1 < p < 0. Let \(\mathcal{G}^{(p)} = \frac{1} {2} \frac{d^{2}} {dx^{2}} + \frac{2p+1} {2x} \frac{d} {dx}\). Assume the end point 0 is reflecting. Let \(\mathbb{D}^{(p)}\) be the diffusion process on I with the generator \(\mathcal{G}^{(p)}\) and \(\ell^{(p)}\) the Lévy density of the inverse local time at 0 for \(\mathbb{D}^{(p)}\). Then we have \(\ell^{(p)}(x) = C \frac{1} {x}x^{-\vert p\vert }\), which is the Lévy density of a GGC.
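Indeed, the factor \(x^{-\vert p\vert}\) is completely monotone, with an explicit Bernstein representation given by the standard Gamma integral (recorded here as a one-line check that \(\ell^{(p)}\) has the GGC form \(x^{-1}k(x)\) with \(k\) completely monotone):

```latex
x^{-\vert p\vert } = \frac{1}{\varGamma (\vert p\vert )}\int _{0}^{\infty }u^{\vert p\vert -1}e^{-xu}\,du,
\qquad 0 < \vert p\vert < 1.
```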
Example 10.10
Let \(I = (0,\infty )\) and − 1 < p < 0. Let \(\mathbb{D}^{(p)}\) be the diffusion process with the generator \(\mathcal{G}^{(p)} = 2x \frac{d^{2}} {dx^{2}} + (2p + 2) \frac{d} {dx}\) and suppose that the end point 0 is reflecting. If \(\ell^{(p)}\) is the Lévy density of the inverse local time at 0 for \(\mathbb{D}^{(p)}\), then \(\ell^{(p)}(x) = C \frac{1} {x}x^{-\vert p\vert }\), which is again the Lévy density of a GGC.
Example 10.11
Let − 1 < p < 1 and β > 0. Let
where \(K_p(x)\) is the modified Bessel function, and let \(\mathbb{D}^{(p,\beta )}\) be the diffusion process on I with the generator \(\mathcal{G}^{(p,\beta )}\). Suppose that the end point 0 is reflecting. Then \(\ell^{(p,\beta)}\), the Lévy density of the inverse local time at 0 for \(\mathbb{D}^{(p,\beta )}\), satisfies
which is the Lévy density of a GGC. (For p = 0, see Schilling et al. [80, p. 202].)
Example 10.12
Let − 1 < p < 1 and β > 0. Let
If \(\mathbb{D}^{(p,\beta )}\) is the diffusion process with the generator \(\mathcal{G}^{(p,\beta )}\) and the end point 0 is reflecting, then \(\ell^{(p,\beta)}\), the Lévy density of the inverse local time at 0 for \(\mathbb{D}^{(p,\beta )}\), is \(\ell^{(p,\beta )}(x) = C \frac{1} {x}x^{-\vert p\vert }e^{-\beta x}\), which is again the Lévy density of a GGC.
10.5 GGCs in Finance
Lévy processes play an important role in asset modeling, and a typical pure-jump Lévy process is obtained by subordinating Brownian motion. One such process is the variance-gamma process \(\{Y_t\}\) of Madan and Seneta [42], a time-changed Brownian motion B = { B t } on \(\mathbb{R}\) subordinated by the gamma process Γ = {Γ(t)}; namely
where the gamma process {Γ(t)} is a Lévy process on \(\mathbb{R}\) such that \(\mathcal{L}(\varGamma (1))\) is the distribution of a gamma random variable \(\varGamma _{1,\lambda }\). This is a special case of Example 30.8 of Sato [73], where B is a general Lévy process on \(\mathbb{R}^{d}\), and when B is the standard Brownian motion on \(\mathbb{R}\), for \(z \in \mathbb{R}\),
This is sym-gamma \((t,\sqrt{\lambda })\) in Sect. 5.3.
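The subordination structure can be checked by a short Monte Carlo experiment. Conditioning on the subordinator gives \(E[e^{izB_{\varGamma (t)}}] = E[e^{-z^{2}\varGamma (t)/2}] = (\lambda /(\lambda +z^{2}/2))^{t}\) for the standard Brownian motion B, assuming the parameterization \(E[e^{-s\varGamma _{1,\lambda }}] =\lambda /(\lambda +s)\); the parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
lam, t, N = 2.0, 1.0, 400_000

# Gamma(t) has shape t and rate lam, so that L(Gamma(1)) = Gamma_{1, lam}
gamma_t = rng.gamma(shape=t, scale=1.0 / lam, size=N)
y = rng.normal(0.0, 1.0, N) * np.sqrt(gamma_t)      # Y_t = B_{Gamma(t)}

z = 1.5
empirical = np.cos(z * y).mean()                     # ch.f. is real by symmetry
theoretical = (lam / (lam + z * z / 2.0)) ** t
```

Up to Monte Carlo error, `empirical` agrees with `theoretical`.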
The variance-gamma processes, which are studied in finance, have been generalized to variance-GGC processes. The variance-GGC process is \(\{Y_t\}\) in (35) with the gamma process Γ replaced by a GGC process \(\tilde{\varGamma }=\{\tilde{\varGamma } _{t}\}\), that is, a Lévy process on \(\mathbb{R}\) such that \(\mathcal{L}(\tilde{\varGamma }_{1})\) is a GGC. The following is known.
Proposition 10.13 (Privault and Yang [66])
Let B be a Brownian motion with drift and \(Y _{t} = B_{\tilde{\varGamma }_{t}}\) . Then \(Y_t\) is decomposed as \(Y_t = U_t - W_t\), where \(\{U_t\}\) and \(\{W_t\}\) are two independent GGC processes, and thus \(\mathcal{L}(Y _{t}) \in T(\mathbb{R})\) .
The next example is the so-called CGMY model (Carr et al. [18]). It is an EGGC with Lévy density \(r^{-1}k_{\xi }(r)\) and
where C > 0, G, M ≥ 0 and Y < 2. The case Y = 0 is the variance-gamma model. The CGMY model has been used as a model for asset returns which, in contrast to standard models like the Black-Scholes model, allows for jump components displaying finite or infinite activity and variation.
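In the standard parameterization of Carr et al. [18], the CGMY Lévy density is \(Ce^{-G\vert x\vert }\vert x\vert ^{-1-Y}\) for \(x < 0\) and \(Ce^{-Mx}x^{-1-Y}\) for \(x > 0\). The sketch below, with illustrative parameter values, evaluates this density and confirms numerically that the small-jump contribution \(\int _{0}^{1}x^{2}\nu (x)dx\) is finite, as required of a Lévy measure when Y < 2:

```python
import math

def cgmy_levy_density(x, C=1.0, G=5.0, M=5.0, Y=0.5):
    """Standard CGMY Levy density (Carr et al.); parameter values are illustrative."""
    if x > 0:
        return C * math.exp(-M * x) / x ** (1.0 + Y)
    return C * math.exp(-G * abs(x)) / abs(x) ** (1.0 + Y)

# trapezoid rule on a log-spaced grid over (1e-8, 1); integrand ~ x^{1-Y} near 0
xs = [10.0 ** (-8.0 + 8.0 * i / 2000) for i in range(2001)]
vals = [x * x * cgmy_levy_density(x) for x in xs]
small_jump_mass = sum(0.5 * (vals[i] + vals[i + 1]) * (xs[i + 1] - xs[i])
                      for i in range(2000))
```

With G = M the density is symmetric, and the truncated second moment is a finite positive number.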
11 Examples of α-Selfdecomposable Distributions
In this section, we give two examples of α-selfdecomposable distributions. The first one is two-dimensional.
11.1 The First Example
Many examples of selfdecomposable distributions, that is, of distributions in \(L^{\langle 0\rangle }(\mathbb{R}) = L(\mathbb{R})\), are known, but there are fewer examples of distributions in \(L^{\langle \alpha \rangle }(\mathbb{R}^{d}),\alpha \neq 0.\) In this section, we give an example in \(L^{\langle -2\rangle }(\mathbb{R}^{2})\). This section is from Maejima and Ueda [51].
Let \((Z_1, Z_2)\) be a bivariate Gaussian random variable, where \(Z_1\) and \(Z_2\) are standard Gaussian random variables with correlation coefficient \(\sigma \in (-1,1)\). Define a bivariate gamma random variable by \(W = (Z_1^2, Z_2^2)\). Our concern is whether W is selfdecomposable and, if not, which class its distribution belongs to.
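A quick numerical sanity check for W (with an illustrative value of σ): each marginal is a \(\varGamma_{1/2,1/2}\) law with mean 1 and variance 2, and Isserlis' formula gives \(\mathrm{Cov}(Z_1^2, Z_2^2) = 2\sigma^2\).

```python
import numpy as np

rng = np.random.default_rng(2)
sigma, N = 0.5, 500_000

z1 = rng.normal(size=N)
z2 = sigma * z1 + np.sqrt(1.0 - sigma ** 2) * rng.normal(size=N)   # Corr(z1, z2) = sigma
w1, w2 = z1 ** 2, z2 ** 2                                          # W = (Z1^2, Z2^2)

mean1, var1 = w1.mean(), w1.var()          # close to 1 and 2 (chi^2 with one degree of freedom)
cov12 = np.cov(w1, w2)[0, 1]               # close to 2 * sigma**2 by Isserlis' formula
```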
Theorem 11.1
Suppose \(\sigma \neq 0\) . Then
Remark 11.2
This is an example showing that \(L^{\langle \alpha \rangle }(\mathbb{R}^{2})\) is not right-continuous in α at α = −2, namely
Proof of Theorem 11.1
Let \(\overline{W}:= \frac{1} {2}(W_{1} + W_{2})\), where \(W_1, W_2\) are independent copies of W. Note that \(\overline{W}\) is α-selfdecomposable if and only if W is α-selfdecomposable. Vere-Jones [95] gave the form of the moment generating function of \(\overline{W}\). Then we can see that the Lévy measure ν of \(\overline{W}\) is
where
where \(I_1(\cdot)\) is the modified Bessel function of the first kind. To show \(\mathcal{L}(\overline{W}) \in L^{\langle -2\rangle }(\mathbb{R}^{2})\), it is enough to check that \(\ell_{\xi }(r)\), r > 0, is nonincreasing, which is proved in Maejima and Ueda [51].
To see that \(\mathcal{L}(\overline{W})\notin L^{\langle \alpha \rangle }(\mathbb{R}^{2})\) for α > −2, it is enough to check that for any β > 0, \(r^{\beta }\ell_{\xi }(r)\), r > 0, fails to be nonincreasing, which is easily shown.
11.2 The Second Example
This section is from Maejima and Ueda [55].
Remark 11.3
Thus, \(L^{\langle \alpha \rangle }(\mathbb{R})\) is not right-continuous at α = 0.
Consider \(\mathcal{L}(\log \varGamma _{c,\lambda })\). It is known that \(\mathcal{L}(\log \varGamma _{c,\lambda }) \in L(\mathbb{R}) = L^{\langle 0\rangle }(\mathbb{R})\) (Sect. 5.2 (a)). Let
Denote the solution of k(r) = α by \(r = r_\alpha\). Let
and
Theorem 11.4
Proof
As we have seen in (30) in Sect. 5.2, the measure \(\nu_{-1}\) of \(\log \varGamma _{c,\lambda }\) is \(\nu _{-1}(dr) = \frac{e^{-cr}} {r(1-e^{-r})}dr,r > 0\). Thus
and it is enough to check the monotonicity or non-monotonicity of \(\ell_{c,\alpha}(r)\), r > 0, depending on (c, α). (For the details of the proof, see Maejima and Ueda [55].)
Corollary 11.5
\(L^{\langle \alpha \rangle }(\mathbb{R})\) is not right-continuous at α ∈ (0,1].
Remark 11.6
(i) For any \(c > 0\), \(\mathcal{L}(\log \varGamma _{c,\lambda })\notin L^{\langle \alpha \rangle }(\mathbb{R})\), \(\alpha > 1\).
(ii) Let E be an exponential random variable. Then
$$\displaystyle{\mathcal{L}(\log E)\left \{\begin{array}{@{}l@{\quad }l@{}} \in L^{\langle 1\rangle }(\mathbb{R}), \quad \\ \notin L^{\langle \alpha \rangle }(\mathbb{R}),\,\,\alpha > 1.\quad \end{array} \right.}$$
(iii) Let Z be a standard normal random variable. Then
$$\displaystyle{\mathcal{L}(\log \vert Z\vert )\left \{\begin{array}{@{}l@{\quad }l@{}} \in L^{\langle 1\rangle }(\mathbb{R}), \quad \\ \notin L^{\langle \alpha \rangle }(\mathbb{R}),\,\,\alpha > 1,\quad \end{array} \right.}$$
since \(Z^{2}\mathop{ =}\limits^{\mathrm{ d}}\varGamma _{1/2,1/2}\).
12 Fixed Points of Stochastic Integral Mappings: A New Sight of \(S(\mathbb{R}^{d})\) and Related Topics
Following Jurek and Vervaat [38], Jurek [31] and Jurek [32], we define a fixed point μ under a mapping Φ f as follows.
Definition 12.1
\(\mu \in \mathfrak{D}(\varPhi _{f})\) is called a fixed point under the mapping Φ f , if there exist a > 0 and \(c \in \mathbb{R}^{d}\) such that
Remark 12.2
Given a mapping \(\varPhi_f\), the natural definition of its fixed point may be μ satisfying \(\varPhi_f(\mu) = \mu\). However, if we restrict ourselves to the mapping \(\varPhi_\alpha\), for instance, only the Cauchy distribution satisfies \(\varPhi_\alpha(\mu) = \mu\). Then what is the meaning of (36)? We know that \(\mu \in ID(\mathbb{R}^{d})\) determines a Lévy process \(\{X_t\}\) such that \(\mu = \mathcal{L}(X_{1})\), and \(\mu ^{a{\ast}}{\ast}\delta _{c} = \mathcal{L}(X_{a} + c)\). Therefore, (36) means that some Lévy process is a “fixed point” in some sense.
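For instance, for the standard Cauchy distribution on \(\mathbb{R}\), with characteristic function \(\hat{\mu }(z) = e^{-\vert z\vert }\), strict 1-stability gives

```latex
\hat{\mu }(z)^{a} = e^{-a\vert z\vert } = \hat{\mu }(az),\qquad a > 0,
```

so \(\mu ^{a{\ast}} = \mathcal{L}(X_{a}) = \mathcal{L}(aX_{1})\): the distribution at time a of the Cauchy Lévy process is simply a dilation of μ.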
We consider here only \(\varPhi_\alpha\). The set of all fixed points under the mapping \(\varPhi_\alpha\) is denoted by \(FP(\varPhi_\alpha)\). For 0 < p ≤ 2, let \(S_{p}(\mathbb{R}^{d})\) be the class of all p-stable distributions on \(\mathbb{R}^{d}\), so that \(S(\mathbb{R}^{d}) =\bigcup _{0<p\leq 2}S_{p}(\mathbb{R}^{d})\). Furthermore, for 1 < p ≤ 2, let \(S_{p}^{0}(\mathbb{R}^{d})\) be the class of p-stable distributions on \(\mathbb{R}^{d}\) with mean 0.
Theorem 12.3
We have
Remark 12.4
Theorem 12.3 for α ≤ 0 was already proved in Jurek and Vervaat [38], Jurek [31] and Jurek [32], even in the general setting of a real separable Banach space. The case 0 < α < 2 is due to Ichifuji et al. [25]. One meaning of this theorem is that it gives new characterizations of the classes \(S(\mathbb{R}^{d})\), \(\bigcup _{p\in (\alpha,2]}S_{p}(\mathbb{R}^{d})\) with 0 < α < 1, and \(\bigcup _{p\in (\alpha,2]}S_{p}^{0}(\mathbb{R}^{d})\) with 1 ≤ α < 2.
References
K. Akita, M. Maejima, On certain self-decomposable self-similar processes with independent increments. Stat. Probab. Lett. 59, 53–59 (2002)
C. Alf, T.A. O’Connor, Unimodality of the Lévy spectral function. Pac. J. Math. 69, 285–290 (1977)
T. Aoyama, M. Maejima, Characterizations of subclasses of type G distributions on \(\mathbb{R}^{d}\) by stochastic integral representations. Bernoulli 13, 148–160 (2007)
T. Aoyama, M. Maejima, J. Rosiński, A subclass of type G selfdecomposable distributions on \(\mathbb{R}^{d}\). J. Theor. Probab. 21, 14–34 (2008)
T. Aoyama, A. Lindner, M. Maejima, A new family of mappings of infinitely divisible distributions related to the Goldie-Steutel-Bondesson class. Electron. J. Probab. 15, 1119–1142 (2010)
T. Aoyama, M. Maejima, Y. Ueda, Several forms of stochastic integral representations of gamma random variables and related topics. Probab. Math. Stat. 32, 99–118 (2011)
O.E. Barndorff-Nielsen, S. Thorbjørnsen, Lévy laws in free probability. Proc. Natl. Acad. Sci. U.S.A. 99, 16568–16575 (2002)
O.E. Barndorff-Nielsen, S. Thorbjørnsen, Lévy processes in free probability. Proc. Natl. Acad. Sci. U.S.A. 99, 16576–16580 (2002)
O.E. Barndorff-Nielsen, S. Thorbjørnsen, A connection between free and classical infinite divisibility. Infin. Dimens. Anal. Quantum Probab. Relat. Top. 7, 573–590 (2004)
O.E. Barndorff-Nielsen, S. Thorbjørnsen, Regularising mappings of Lévy measures. Stoch. Process. Appl. 116, 423–446 (2006)
O.E. Barndorff-Nielsen, J. Kent, M. Sørensen, Normal variance-mean mixtures and z distributions. Int. Stat. Rev. 50, 145–159 (1982)
O.E. Barndorff-Nielsen, M. Maejima, K. Sato, Some classes of multivariate infinitely divisible distributions admitting stochastic integral representations. Bernoulli 12, 1–33 (2006)
J. Bertoin, M. Yor, On the entire moments of self-similar Markov processes and exponential functionals. Ann. Fac. Sci. Toulouse Math. VI 11, 33–45 (2002)
J. Bertoin, T. Fujita, B. Roynette, M. Yor, On a particular class of self decomposable random variables: the duration of a Bessel excursion straddling an independent exponential time. Probab. Math. Stat. 26, 315–366 (2006)
J. Bertoin, A. Lindner, R. Maller, On continuity properties of the law of integrals of Lévy processes, in Séminaire de Probabilités XLI, ed. by C. Donati-Martin et al. Lecture Notes in Mathematics, vol. 1934 (Springer, Berlin, 2008), pp. 137–159
L. Bondesson, Generalized Gamma Convolutions and Related Classes of Distributions and Densities. Lecture Notes in Statistics, vol. 76 (Springer, Berlin, 1992)
P. Bosch, T. Simon, On the self-decomposability of the Fréchet distribution. Indag. Math. 24, 626–636 (2013)
P. Carr, H. Geman, D.B. Madan, M. Yor, The fine structure of asset returns: an empirical investigation. J. Bus. 75, 305–333 (2002)
P. Carr, H. Geman, D.B. Madan, M. Yor, Pricing options on realized variance. Finance Stochast. 9, 453–475 (2005)
R.L. Dobrushin, P. Major, Non-central limit theorem for non-linear functions of Gaussian fields. Zeit. Wahrschein. verw. Geb. 50, 27–52 (1979)
W. Feller, An Introduction to Probability Theory and Its Applications, vol. II (Wiley, New York, 1971)
K. Fukuyama, S. Takahashi, On limit distributions of trigonometric sums. Rev. Roum. Math. Pure Appl. 53, 19–24 (2008)
C. Goldie, A class of infinitely divisible random variables. Proc. Camb. Philos. Soc. 63, 1141–1143 (1967)
K. Handa, The sector constants of continuous state branching processes with immigration. J. Funct. Anal. 262, 4488–4524 (2012)
K. Ichifuji, M. Maejima, Y. Ueda, Fixed points of mappings of infinitely divisible distributions on \(\mathbb{R}^{d}\). Stat. Probab. Lett. 80, 1320–1328 (2010)
A.M. Iksanov, Z.J. Jurek, B.M. Schreiber, A new factorization property of the selfdecomposable probability measures. Ann. Probab. 32, 1356–1369 (2004)
M.E.H. Ismail, D.H. Kelker, Special functions, Stieltjes transforms and infinite divisibility. SIAM J. Math. Anal. 10, 884–901 (1979)
L.F. James, B. Roynette, M. Yor, Generalized gamma convolutions, Dirichlet means, Thorin measures, with explicit examples. Probab. Surv. 5, 346–415 (2008)
M. Jeanblanc, J. Pitman, M. Yor, Self-similar processes with independent increments associated with Lévy and Bessel processes. Stoch. Process. Appl. 100, 223–231 (2002)
Z.J. Jurek, Limit distributions for sums of shrunken random variables. Diss. Math. 185 (1981)
Z.J. Jurek, Relations between the s-selfdecomposable and selfdecomposable measures. Ann. Probab. 13, 592–608 (1985)
Z.J. Jurek, Random integral representations for classes of limit distributions similar to Lévy class \(L_{0}\). Probab. Theory Relat. Fields 78, 473–490 (1988)
Z.J. Jurek, Random integral representations for classes of limit distributions similar to Lévy class \(L_{0}\). II. Nagoya Math. J. 114, 53–64 (1989)
Z.J. Jurek, Random integral representations for classes of limit distributions similar to Lévy class \(L_{0}\). III, in Probability in Banach Spaces, 8: Proceedings of the Eighth International Conference. Bowdoin College in Summer of 1991, ed. by R.M. Dudley et al. (Birkhäuser, Boston, MA, 1992), pp. 137–151
Z.J. Jurek, The random integral representation hypothesis revisited: new classes of s-selfdecomposable laws, in Abstract and Applied Analysis (World Scientific, Singapore, 2004), pp. 479–498
Z.J. Jurek, The random integral representation conjecture: a quarter of a century later. Lith. Math. J. 51, 362–369 (2011)
Z.J. Jurek, B.M. Schreiber, Fourier transforms of measures from the classes \(\mathcal{U}_{\beta },\ -2 <\beta \leq -1\). J. Multivar. Anal. 41, 194–211 (1992)
Z.J. Jurek, W. Vervaat, An integral representation for selfdecomposable Banach space valued random variables. Zeit. Wahrschein. verw. Geb. 62, 247–262 (1983)
H. Kondo, M. Maejima, K. Sato, Some properties of exponential integrals of Lévy processes and examples. Electron. Commun. Probab. 11, 291–303 (2006)
A. Lindner, M. Maejima, Unpublished note, Marburg University (2007)
J.V. Linnik, I.V. Ostrovskii, Decomposition of Random Variables and Vectors (American Mathematical Society, Providence, RI, 1977)
D.B. Madan, E. Seneta, The variance gamma (VG) model for share market returns. J. Bus. 63, 511–524 (1990)
M. Maejima, Subclasses of Goldie-Steutel-Bondesson class of infinitely divisible distributions on \(\mathbb{R}^{d}\) by \(\varUpsilon\)-mapping. Lat. Am. J. Probab. Math. Stat. 3, 55–66 (2007)
M. Maejima, G. Nakahara, A note on new classes of infinitely divisible distributions on \(\mathbb{R}^{d}\). Electron. Commun. Probab. 14, 358–371 (2009)
M. Maejima, Y. Niiyama, The generalized Langevin equation and an example of type G distributions. Inst. Stat. Math. Coop. Res. Rep. 175, 126–129 (2005)
M. Maejima, J. Rosiński, The class of type G distributions on \(\mathbb{R}^{d}\) and related subclasses of infinitely divisible distributions. Demonstratio Math. 34, 251–266 (2001)
M. Maejima, J. Rosiński, Type G distributions on \(\mathbb{R}^{d}\). J. Theor. Probab. 15, 323–341 (2002)
M. Maejima, K. Sato, The limits of nested subclasses of several classes of infinitely divisible distributions are identical with the closure of the class of stable distributions. Probab. Theory Relat. Fields 145, 119–142 (2009)
M. Maejima, C.A. Tudor, On the distribution of the Rosenblatt process. Stat. Probab. Lett. 83, 1490–1495 (2013)
M. Maejima, Y. Ueda, Stochastic integral characterizations of semi-selfdecomposable distributions and related Ornstein-Uhlenbeck type processes. Commun. Stoch. Anal. 3, 349–367 (2009)
M. Maejima, Y. Ueda, A note on a bivariate gamma distribution. Stat. Probab. Lett. 80, 1991–1994 (2010)
M. Maejima, Y. Ueda, α-Selfdecomposable distributions and related Ornstein-Uhlenbeck type processes. Stoch. Process. Appl. 120, 2363–2389 (2010)
M. Maejima, Y. Ueda, Compositions of mappings of infinitely divisible distributions with applications to finding the limits of some nested subclasses. Electron. Commun. Probab. 15, 227–239 (2010)
M. Maejima, Y. Ueda, Nested subclasses of the class of α-selfdecomposable distributions. Tokyo J. Math. 34, 383–406 (2011)
M. Maejima, Y. Ueda, Examples of α-selfdecomposable distributions. Stat. Probab. Lett. 83, 286–291 (2013)
M. Maejima, K. Sato, T. Watanabe, Distributions of selfsimilar and semi-selfsimilar processes with independent increments. Stat. Probab. Lett. 47, 395–401 (2000)
M. Maejima, M. Matsui, M. Suzuki, Classes of infinitely divisible distributions on \(\mathbb{R}^{d}\) related to the class of selfdecomposable distributions. Tokyo J. Math. 33, 453–486 (2010)
M. Maejima, V. Pérez-Abreu, K. Sato, A class of multivariate infinitely divisible distributions related to arcsine density. Bernoulli 18, 476–495 (2012)
M. Maejima, J. Rosiński, Y. Ueda, Stochastic integral and series representations for strictly stable distributions. J. Theor. Probab. (2015). doi:10.1007/s10959-013-0518-8. Published online: 27 September 2013
I. Nourdin, G. Peccati, Normal Approximations with Malliavin Calculus (Cambridge University Press, Cambridge, 2012)
T.A. O’Connor, Infinitely divisible distributions similar to class L distributions. Z. Wahrsch. Verw. Gebiete 50, 265–271 (1979)
T.A. O’Connor, Infinitely divisible distributions with unimodal Lévy spectral functions. Ann. Probab. 7, 494–499 (1979)
T.A. O’Connor, Some classes of limit laws containing the stable distributions. Z. Wahrsch. Verw. Gebiete 55, 25–33 (1981)
V. Pipiras, M.S. Taqqu, Regularization and integral representations of Hermite processes. Stat. Probab. Lett. 80, 2014–2023 (2010)
N. Privault, D. Yang, Infinite divisibility of interpolated gamma powers. J. Math. Anal. Appl. 405, 373–387 (2013)
N. Privault, D. Yang, Variance-GGC asset price models and their sensitivity analysis. Preprint (2015), ntu.edu.sg/home/nprivault
A. Rocha-Arteaga, K. Sato, Topics in Infinitely Divisible Distributions and Lévy Processes. Aportaciones Matemáticas, Investigación 17 (Sociedad Matemática Mexicana, Mexico, 2003)
J. Rosiński, On a class of infinitely divisible processes represented as mixtures of Gaussian processes, in Stable Processes and Related Topics, ed. by S. Cambanis et al. (Birkhäuser, Basel, 1991), pp. 27–41
J. Rosiński, Tempering stable processes. Stoch. Process. Appl. 117, 677–707 (2007)
K. Sato, Class L of multivariate distributions and its subclasses. J. Multivar. Anal. 10, 207–232 (1980)
K. Sato, Self-similar processes with independent increments. Probab. Theory Relat. Fields 89, 285–300 (1991)
K. Sato, Time evolutions of Lévy processes, in Trend in Probability and Related Analysis, ed. by N. Kôno, N.-R. Shieh (World Scientific, Singapore, 1997), pp. 35–82
K. Sato, Lévy Processes and Infinitely Divisible Distributions (Cambridge University Press, Cambridge, 1999)
K. Sato, Monotonicity and non-monotonicity of domains of stochastic integral operators. Probab. Math. Stat. 26, 23–39 (2006)
K. Sato, Two families of improper stochastic integrals with respect to Lévy processes. Lat. Am. J. Probab. Math. Stat. 1, 47–87 (2006)
K. Sato, Transformations of infinitely divisible distributions via improper stochastic integrals. Lat. Am. J. Probab. Math. Stat. 3, 67–110 (2007)
K. Sato, Fractional integrals and extensions of selfdecomposability, in Lévy Matters I, ed. by T. Duquesne et al. Lecture Notes in Mathematics, vol. 2010 (Springer, Berlin, 2010), pp. 1–91
K. Sato, Description of limits of ranges of iterations of stochastic integral mappings of infinitely divisible distributions. Lat. Am. J. Probab. Math. Stat. 8, 1–17 (2011)
K. Sato, M. Yamazato, Stationary processes of Ornstein-Uhlenbeck type, in Probability Theory and Mathematical Statistics, ed. by K. Ito, J.V. Prokhorov. Lecture Notes in Mathematics, vol. 1021 (Springer, Berlin, 1983), pp. 541–551
R.L. Schilling, R. Song, Z. Vondraček, Bernstein Functions. Theory and Applications (De Gruyter, New York, 2010)
P.R. Shanbhag, M. Sreehari, On certain self-decomposable distributions. Z. Wahrsch. verw. Gebiete 38, 217–222 (1977)
F.W. Steutel, Note on the infinitely divisibility of exponential mixtures. Ann. Math. Stat. 38, 1303–1305 (1967)
F.W. Steutel, K. van Harn, Infinite Divisibility of Probability Distributions on the Real Line (Dekker, New York, 2004)
T. Takemura, M. Tomisaki, Lévy measure density corresponding to inverse local time. Publ. RIMS Kyoto Univ. 49, 563–599 (2013)
O. Thorin, On the infinite divisibility of the Pareto distribution. Scand. Actuar. J. 1977, 31–40 (1977)
O. Thorin, On the infinite divisibility of the lognormal distribution. Scand. Actuar. J. 1977, 121–148 (1977)
O. Thorin, Proof of a conjecture of L. Bondesson concerning infinite divisibility of powers of a gamma variable. Scand. Actuar. J. 1978, 151–164 (1978)
C.A. Tudor, Analysis of the Rosenblatt process. ESAIM Probab. Stat. 12, 230–257 (2008)
C.A. Tudor, F. Viens, Variations and estimators for the selfsimilarity order through Malliavin calculus. Ann. Probab. 37, 2093–2134 (2009)
K. Urbanik, Limit laws for sequences of normed sums satisfying some stability conditions, in Multivariate Analysis–III, ed. by P.R. Krishnaiah (Academic, New York, 1973), pp. 225–237
N. Van Thu, Universal multiply self-decomposable probability measures on Banach spaces. Probab. Math. Stat. 3, 71–84 (1982)
N. Van Thu, Fractional calculus in probability. Probab. Math. Stat. 3, 173–189 (1984)
N. Van Thu, An alternative approach to multiple self-decomposable probability measures on Banach spaces. Probab. Theory Relat. Fields 72, 35–54 (1986)
M.S. Veillette, M.S. Taqqu, Properties and numerical evaluation of the Rosenblatt distribution. Bernoulli 19, 982–1005 (2013)
D. Vere-Jones, The infinite divisibility of a bivariate gamma distribution. Sankhyā Ser. A 29, 421–422 (1967)
S.J. Wolfe, On a continuous analogue of the stochastic difference equation \(X_{n} =\rho X_{n-1} + B_{n}\). Stoch. Process. Appl. 12, 301–312 (1982)
M. Yamazato, Unimodality of infinitely divisible distribution functions of class L. Ann. Probab. 6, 523–531 (1978)
Acknowledgements
I would like to express my sincere gratitude to people who collaborated on the papers cited in this article, Ole E. Barndorff-Nielsen, Alexander Lindner, Muneya Matsui, Víctor Pérez-Abreu, Jan Rosiński, Ken-iti Sato, Ciprian A. Tudor, and my students at Keio University. Special thanks to Ken-iti Sato, who first led me to this interesting subject in the theory of infinitely divisible distributions and also read the first draft of this article, giving me many valuable comments. My former Ph.D. student, Yohei Ueda, also deserves special mention for working with me for many years on this topic at Keio University. I would like to acknowledge as well an anonymous referee for suggestions that helped to enhance this article.
© 2015 Springer International Publishing Switzerland
Maejima, M. (2015). Classes of Infinitely Divisible Distributions and Examples. In: Lévy Matters V. Lecture Notes in Mathematics(), vol 2149. Springer, Cham. https://doi.org/10.1007/978-3-319-23138-9_1
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-23137-2
Online ISBN: 978-3-319-23138-9