1 Introduction

The theory of infinitely divisible distributions has been a core topic of probability theory and the subject of extensive study over the years. One reason is that many important distributions are infinitely divisible, such as Gaussian, stable, exponential, Poisson, compound Poisson and gamma distributions. Another reason is that the set of infinitely divisible distributions on \(\mathbb{R}^{d}\) coincides with the set of distributions which are limits of distributions of sums \(\sum _{j=1}^{k_{n}}\xi _{ n,j}\) of \(\mathbb{R}^{d}\)-valued triangular arrays \(\{\xi _{n,j},1 \leq j \leq k_{n},n \geq 1\},k_{n} \uparrow \infty \) as \(n \rightarrow \infty \), where for each n, \(\xi _{n,1},\xi _{n,2},\ldots\) are independent, under the condition of infinite smallness, that is, \(\lim _{n\rightarrow \infty }\max _{1\leq j\leq k_{n}}P(\vert \xi _{n,j}\vert \geq \varepsilon ) = 0\) for any \(\varepsilon > 0\). Suppose that \(\xi _{n,j} = a_{n}^{-1}(\xi _{j} - b_{j})\), for \(a_{n} > 0\) with \(\lim _{n\rightarrow \infty }a_{n} = \infty \), \(\lim _{n\rightarrow \infty }a_{n+1}a_{n}^{-1} = 1\), \(b_{j} \in \mathbb{R}^{d}\) and \(k_{n} = n\). If \(\{\xi _{j}\}\) are independent, then the resulting class is the class of selfdecomposable distributions, and if furthermore \(\{\xi _{j}\}\) are identically distributed, then the resulting class is the class of stable distributions, which includes the Gaussian distributions. These two classes are important classes of infinitely divisible distributions. Selfdecomposable distributions are known as the marginal distributions of stationary processes of Ornstein-Uhlenbeck type, which are the stationary solutions of Langevin equations with Lévy noise.

In 1977, Thorin [85, 86] introduced a class between the classes of stable and selfdecomposable distributions, called now the Thorin class, whose elements are called Generalized Gamma Convolutions (GGCs for short), when he wanted to prove the infinite divisibility of the Pareto and the log-normal distributions. Bondesson [16] published a monograph on this topic in 1992.

In 1983, Jurek and Vervaat [38] and Sato and Yamazato [79] showed that any selfdecomposable distribution \(\tilde{\mu }\) can be represented as a stochastic integral with respect to some Lévy process {X t } with \(E[\log \vert X_{1}\vert ] < \infty \), namely

$$\displaystyle{ \tilde{\mu }= \mathcal{L}\left (\int _{0}^{\infty }e^{-t}dX_{ t}\right ), }$$
(1)

where \(\mathcal{L}(X)\) denotes the law of a random variable X. (The paper by Jurek [36] is a short historical survey of stochastic integral representations of classes of infinitely divisible distributions.) Since a Lévy process {X t } is determined uniquely in law by an infinitely divisible distribution μ satisfying \(\mathcal{L}(X_{1}) =\mu\), (1) can be regarded as a mapping, Φ say, from the class of infinitely divisible distributions with finite log-moments to the class of selfdecomposable distributions as

$$\displaystyle{ \tilde{\mu }=\varPhi (\mu ). }$$
(2)

If we denote by {X t (μ)} a Lévy process such that \(\mathcal{L}(X_{1}^{(\mu )}) =\mu\), then (1) and (2) give us

$$\displaystyle{\tilde{\mu }=\varPhi (\mu ) = \mathcal{L}\left (\int _{0}^{\infty }e^{-t}dX_{ t}^{(\mu )}\right ).}$$

Barndorff-Nielsen and Thorbjørnsen [7–10] introduced a mapping

$$\displaystyle{\varUpsilon (\mu ) = \mathcal{L}\left (\int _{0}^{1}\log (t^{-1})dX_{ t}^{(\mu )}\right )}$$

related to the Bercovici-Pata bijection between free probability and classical probability. Then in Barndorff-Nielsen et al. [12], we investigated the range of the mapping \(\varUpsilon\) and characterized several classes of infinitely divisible distributions in terms of the mappings Φ and \(\varUpsilon\). Among other things, we found that the composition of these two mappings produces the Thorin class. Since then, many mappings have been studied, both as mappings constructing classes of infinitely divisible distributions, giving new probabilistic explanations of such classes, and as mappings of interest in their own right from a mathematical point of view.

Let us recall one sentence from Bondesson [16]. “Since a lot of the standard distributions now are known to be infinitely divisible, the class of infinitely divisible distributions has perhaps partly lost its interest. Smaller classes should be more in focus.” In this article, we survey such “smaller classes” and try to determine, as precisely as possible, which classes known infinitely divisible distributions belong to. All infinitely divisible distributions we treat here are finite dimensional and most of the examples are one-dimensional.

In Sect. 2, we give some preliminaries on infinitely divisible distributions on \(\mathbb{R}^{d}\), Lévy processes and stochastic integrals with respect to Lévy processes.

In Sect. 3, we explain some known classes of infinitely divisible distributions and their relationships, and the characterization in terms of stochastic integral mappings is discussed in Sect. 4. Section 5 is devoted to some other mappings. These three sections form the first main subject of this article. Also, compositions of mappings are discussed in Sect. 6.

Since we have mappings to construct classes in hand, we can construct nested subclasses by the iteration of those mappings. This is the topic in Sect. 7. For the class of selfdecomposable distributions, these nested subclasses were already studied by Urbanik [90] and later by Sato [70].

Once we have a general theory for infinitely divisible distributions, it is necessary to provide specific examples. We know that many distributions are infinitely divisible. The next question, related to the above, is which classes such known infinitely divisible distributions belong to. This is the second main subject of this article and is discussed in Sects. 8–10. Section 8 treats known distributions. After the monograph by Bondesson [16] and later a paper by James et al. [28], GGCs have been highlighted, and thus examples of GGCs recently appearing in quite different problems are explained separately in Sect. 9. Section 10 discusses new examples of α-selfdecomposable distributions.

We conclude the article with a short Sect. 11 on fixed points of the mapping for α-selfdecomposable distributions, offering a new perspective on the class of stable distributions.

Since this is a survey article, only a few statements have explicit proofs. However, even when statements are given without proofs, readers may consult the original proofs in the papers cited.

2 Preliminaries

2.1 Infinitely Divisible Distributions on \(\mathbb{R}^{d}\)

In the following, \(\mathcal{P}(\mathbb{R}^{d})\) is the set of all probability distributions on \(\mathbb{R}^{d}\) and \(\hat{\mu }(z):=\int _{\mathbb{R}^{d}}e^{i\langle z,x\rangle }\mu (dx),z \in \mathbb{R}^{d}\), is the characteristic function of \(\mu \in \mathcal{P}(\mathbb{R}^{d})\).

Definition 2.1

\(\mu \in \mathcal{P}(\mathbb{R}^{d})\) is infinitely divisible if, for any \(n \in \mathbb{N}\), there exists \(\mu _{n} \in \mathcal{P}(\mathbb{R}^{d})\) such that \(\hat{\mu }(z) =\hat{\mu } _{n}(z)^{n}\). \(ID(\mathbb{R}^{d})\) denotes the class of all infinitely divisible distributions on \(\mathbb{R}^{d}\).

We also use

$$\displaystyle\begin{array}{rcl} ID_{\mathrm{sym}}(\mathbb{R}^{d})&:=& \{\mu \in ID(\mathbb{R}^{d}):\mu \,\, \text{is symmetric on}\,\,\mathbb{R}^{d}\}, {}\\ ID_{\log }(\mathbb{R}^{d})&:=& \{\mu \in ID(\mathbb{R}^{d}):\int _{ \mathbb{R}^{d}}\log ^{+}\vert x\vert \mu (dx) < \infty \} {}\\ \end{array}$$

and

$$\displaystyle{ID_{\log ^{m}}(\mathbb{R}^{d}):=\{\mu \in ID(\mathbb{R}^{d}):\int _{ \mathbb{R}^{d}}(\log ^{+}\vert x\vert )^{m}\mu (dx) < \infty \},\quad m = 1,2,\ldots,}$$

where \(\log ^{+}a =\max \{\log a,0\}\).

The so-called Lévy-Khintchine representation of an infinitely divisible distribution is provided in the following proposition.

Proposition 2.2 (The Lévy-Khintchine Representation; See e.g. Sato [73, Theorem 8.1])

  1. (1)

    If \(\mu \in ID(\mathbb{R}^{d})\) , then

    $$\displaystyle{ \hat{\mu }(z) =\exp \left \{-2^{-1}\langle z,Az\rangle + i\langle \gamma,z\rangle +\int _{ \mathbb{R}^{d}}\left (e^{i\langle z,x\rangle } - 1 - \frac{i\langle z,x\rangle } {1 + \vert x\vert ^{2}}\right )\nu (dx)\right \},\quad z \in \mathbb{R}^{d}, }$$
    (3)

    where A is a symmetric nonnegative-definite d × d matrix, ν is a measure on \(\mathbb{R}^{d}\) satisfying

    $$\displaystyle{ \nu (\{0\}) = 0\quad \text{and}\quad \int _{\mathbb{R}^{d}}(\vert x\vert ^{2} \wedge 1)\nu (dx) < \infty, }$$
    (4)

    and γ is a vector in \(\mathbb{R}^{d}\) .

  2. (2)

    The representation of \(\hat{\mu }\) in (1) by A,ν and γ is unique.

  3. (3)

    Conversely, if A is a symmetric nonnegative-definite d × d matrix, ν is a measure satisfying  (4) and \(\gamma \in \mathbb{R}^{d}\) , then there exists a \(\mu \in ID(\mathbb{R}^{d})\) whose characteristic function is given by  (3) .

A is called the Gaussian covariance matrix or the Gaussian part and ν is called the Lévy measure. The triplet (A, ν, γ) is called the Lévy-Khintchine triplet of μ. When we want to emphasize the Lévy-Khintchine triplet, we may write \(\mu =\mu _{(A,\nu,\gamma )}\). If the Lévy measure ν of μ satisfies \(\int _{\vert x\vert >1}\vert x\vert \nu (dx) < \infty \), then the mean \(\gamma ^{1} \in \mathbb{R}^{d}\) of μ exists and

$$\displaystyle{\hat{\mu }(z) =\exp \left \{-2^{-1}\langle z,Az\rangle + i\langle \gamma ^{1},z\rangle +\int _{ \mathbb{R}^{d}}\left (e^{i\langle z,x\rangle } - 1 - i\langle z,x\rangle \right )\nu (dx)\right \}.}$$

In this case, we will write \(\mu =\mu _{(A,\nu,\gamma ^{1})_{1}}\). If ν of μ satisfies \(\int _{\vert x\vert \leq 1}\vert x\vert \nu (dx) < \infty \), then there exists \(\gamma ^{0} \in \mathbb{R}^{d}\) (called the drift of μ) such that

$$\displaystyle{\hat{\mu }(z) =\exp \left \{-2^{-1}\langle z,Az\rangle + i\langle \gamma ^{0},z\rangle +\int _{ \mathbb{R}^{d}}\left (e^{i\langle z,x\rangle } - 1\right )\nu (dx)\right \}.}$$

We write \(\mu =\mu _{(A,\nu,\gamma ^{0})_{0}}\) in this case. We also write \(\nu _{\mu }\) for ν when ν is the Lévy measure of μ.
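As a small numerical illustration (ours, not from the literature cited here), the Lévy-Khintchine exponent can be evaluated directly from a triplet. The sketch below takes a compound Poisson distribution with A = 0, drift 0 and Lévy measure \(\nu (dx) =\lambda \varphi (x)dx\), \(\varphi\) the N(0,1) density, and compares the numerically integrated exponent with the closed form \(\lambda (e^{-z^{2}/2} - 1)\); the rate λ is an arbitrary choice.

```python
import numpy as np
from scipy.integrate import quad

lam = 2.0  # jump rate; an arbitrary illustrative value
phi = lambda x: np.exp(-x**2/2) / np.sqrt(2*np.pi)  # N(0,1) jump density

def cumulant(z):
    # C_mu(z) = int (e^{izx} - 1) nu(dx) with nu(dx) = lam*phi(x)dx  (A = 0, drift 0)
    re = quad(lambda x: (np.cos(z*x) - 1)*lam*phi(x), -np.inf, np.inf)[0]
    im = quad(lambda x: np.sin(z*x)*lam*phi(x), -np.inf, np.inf)[0]
    return complex(re, im)

for z in (0.5, 1.0, 2.0):
    print(z, cumulant(z), lam*(np.exp(-z**2/2) - 1))  # the two values agree
```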

In the following, the notation \(1_{B}\) denotes the indicator function of the set \(B \in \mathcal{B}(\mathbb{R}^{d})\). Here and in what follows, \(\mathcal{B}(C)\) is the set of Borel sets in C.

Proposition 2.3 (Polar Decomposition of Lévy Measure; See e.g. Barndorff-Nielsen et al. [12, Lemma 2.1])

Let \(\nu _{\mu }\) be the Lévy measure of some \(\mu \in ID(\mathbb{R}^{d})\) with \(0 <\nu _{\mu }(\mathbb{R}^{d}) \leq \infty \) . Then there exist a \(\sigma\) -finite measure \(\lambda\) on \(S:=\{\xi \in \mathbb{R}^{d}: \vert \xi \vert = 1\}\) with \(0 <\lambda (S) \leq \infty \) and a family \(\{\nu _{\xi }: \xi \in S\}\) of measures on \((0,\infty )\) such that

$$\displaystyle{ \nu _{\xi }(B)\,\,\text{is measurable in}\,\,\xi \,\,\text{for each}\,\,B \in \mathcal{B}((0,\infty )), }$$
(5)
$$\displaystyle{ 0 <\nu _{\xi }((0,\infty )) \leq \infty \quad \text{for each }\xi \in S, }$$
(6)
$$\displaystyle{ \nu _{\mu }(B) =\int _{S}\lambda (d\xi )\int _{0}^{\infty }1_{ B}(r\xi )\nu _{\xi }(dr)\quad \text{for }B \in \mathcal{B}(\mathbb{R}^{d}\setminus \{0\}). }$$
(7)

Here \(\lambda\) and \(\{\nu _{\xi }\}\) are uniquely determined by ν μ in the following sense: if \(\lambda\) , \(\{\nu _{\xi }\}\) and \(\lambda '\) , \(\{\nu '_{\xi }\}\) both have properties  (5)–(7) , then there is a measurable function \(c(\xi )\) on S such that

$$\displaystyle\begin{array}{rcl} 0 < c(\xi ) < \infty,\,\,\lambda '(d\xi ) = c(\xi )\lambda (d\xi ),\,\,c(\xi )\nu '_{\xi }(dr) =\nu _{\xi }(dr)\quad \mbox{ for $\lambda $-a.e. }\xi \in S.& & {}\\ \end{array}$$

We call \(\nu _{\xi }\) the radial component of \(\nu _{\mu }\) and, when \(\nu _{\xi }\) is absolutely continuous, we call its density the Lévy density.
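As a sketch of how the polar decomposition works in a concrete case (our illustration; the measure and the test set are arbitrary choices), take on \(\mathbb{R}\) the measure \(\nu (dx) = \vert x\vert ^{-1-\alpha }dx\), the Lévy measure of a symmetric α-stable law up to a constant. Here \(S =\{ -1,+1\}\), \(\lambda\) is counting measure and \(\nu _{\xi }(dr) = r^{-1-\alpha }dr\), and (7) can be checked numerically:

```python
import numpy as np
from scipy.integrate import quad

alpha = 1.5  # an arbitrary index in (0,2)
nu = lambda x: abs(x)**(-1 - alpha)                  # Levy density on R \ {0}
indic = lambda x: float(1 < x < 2 or -3 < x < -1)    # indicator of the test set B

# direct computation of nu(B)
direct = quad(nu, 1, 2)[0] + quad(nu, -3, -1)[0]

# polar form (7): lambda({-1}) = lambda({+1}) = 1, nu_xi(dr) = r^{-1-alpha} dr
polar = sum(quad(lambda r: indic(r*xi)*r**(-1 - alpha), 0, 10,
                 points=[1, 2, 3], limit=200)[0]
            for xi in (-1.0, 1.0))
print(direct, polar)  # the two values agree
```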

Definition 2.4 (The Cumulant of μ)

For \(\mu \in ID(\mathbb{R}^{d})\), \(C_{\mu }(z) =\log \hat{\mu } (z)\) is called the cumulant of μ, where \(\log\) is the distinguished logarithm. (For the definition of the distinguished logarithm, see e.g. Sato [73], the sentence after Lemma 7.6.)

2.2 Stochastic Integrals with Respect to Lévy Processes

Definition 2.5

A stochastic process {X t , t ≥ 0} on \(\mathbb{R}^{d}\) is called a Lévy process, if the following conditions are satisfied.

  1. (1)

    X 0 = 0 a.s.

  2. (2)

    For any 0 ≤ t 0 < t 1 < ⋯ < t n , n ≥ 1, \(X_{t_{0}},X_{t_{1}} - X_{t_{0}},\ldots,X_{t_{n}} - X_{t_{n-1}}\) are independent.

  3. (3)

    For h > 0, the distribution of \(X_{t+h} - X_{t}\) does not depend on t.

  4. (4)

    For any t ≥ 0 and \(\varepsilon > 0\), \(\lim _{h\rightarrow 0}P(\vert X_{t+h} - X_{t}\vert >\varepsilon ) = 0\).

  5. (5)

    For almost all ω, the sample paths X t (ω) are right-continuous in t ≥ 0 and have left limits in t > 0.

Dropping the condition (5) in Definition 2.5, we call any process satisfying (1)−(4) a Lévy process in law. In the following, “Lévy process” simply means “Lévy process in law”. It is known (see e.g. Sato [73, Theorem 7.10(i)]) that if {X t } is a Lévy process on \(\mathbb{R}^{d}\), then for any \(t \geq 0,\mathcal{L}(X_{t}) \in ID(\mathbb{R}^{d})\) and if we let \(\mathcal{L}(X_{1}) =\mu\), then \(\mathcal{L}(X_{t}) =\mu ^{t{\ast}}\), where \(\mu ^{t{\ast}}\) is the distribution with characteristic function \(\hat{\mu }(z)^{t}\). Thus the distribution of a Lévy process {X t } is determined by that of X 1. Further, a stochastic process {X t , t ≥ 0} on \(\mathbb{R}^{d}\) is called an additive process (in law) if (1), (2) and (4) are satisfied.

Proposition 2.6 (Stochastic Integral with Respect to Lévy Process; See Sato [77, Sect. 3.4])

Let {X t } be a Lévy process on \(\mathbb{R}^{d}\) with \(\mathcal{L}(X_{1}) =\mu _{(A,\nu,\gamma )}\) .

  1. (1)

    Let f(t) be a real-valued locally square integrable measurable function on \([0,\infty )\) . Then, for each a > 0, the stochastic integral \(X:=\int _{ 0}^{a}f(t)dX_{t}\) exists and \(\mathcal{L}(X) \in ID(\mathbb{R}^{d})\) . Its cumulant is represented as

    $$\displaystyle{ C_{\mathcal{L}(X)}(z) =\int _{ 0}^{a}C_{\mu }(\,f(t)z)dt. }$$

The Lévy-Khintchine triplet (A X X X ) of \(\mathcal{L}(X)\) is the following:

$$\displaystyle\begin{array}{rcl} A_{X}& =& \int _{0}^{a}f(t)^{2}Adt, {}\\ \nu _{X}(B)& =& \int _{0}^{a}dt\int _{ \mathbb{R}^{d}}1_{B}(f(t)x)\nu (dx),\quad B \in \mathcal{B}(\mathbb{R}^{d}\setminus \{0\}), {}\\ \gamma _{X}& =& \int _{0}^{a}f(t)dt\left (\gamma +\int _{ \mathbb{R}^{d}}x\left ( \frac{1} {1 + \vert \,f(t)x\vert ^{2}} - \frac{1} {1 + \vert x\vert ^{2}}\right )\nu (dx)\right ). {}\\ \end{array}$$
  2. (2)

    The improper stochastic integral over \([0,\infty )\) is defined as follows, whenever the limit exists:

    $$\displaystyle{X:=\int _{ 0}^{\infty }f(t)dX_{ t} =\lim _{a\rightarrow \infty }\int _{0}^{a}f(t)dX_{ t}\quad \text{in probability.}}$$

Suppose f(t) is locally square integrable on \([0,\infty )\) . Then \(\int _{0}^{\infty }f(t)dX_{t}\) exists if and only if \(\lim _{a\rightarrow \infty }\int _{0}^{a}C_{\mu }(f(t)z)dt\) exists in \(\mathbb{C}\) for all \(z \in \mathbb{R}^{d}\) . We have

$$\displaystyle\begin{array}{rcl} C_{\mathcal{L}(X)}(z)& =& \lim _{a\rightarrow \infty }\int _{0}^{a}C_{\mu }(f(t)z)dt, {}\\ A_{X}& =& \int _{0}^{\infty }f(t)^{2}Adt, {}\\ \nu _{X}(B)& =& \int _{0}^{\infty }dt\int _{ \mathbb{R}^{d}}1_{B}(f(t)x)\nu (dx),\quad B \in \mathcal{B}(\mathbb{R}^{d}\setminus \{0\}), {}\\ \gamma _{X}& =& \lim _{a\rightarrow \infty }\int _{0}^{a}f(t)dt\left (\gamma +\int _{ \mathbb{R}^{d}}x\left ( \frac{1} {1 + \vert f(t)x\vert ^{2}} - \frac{1} {1 + \vert x\vert ^{2}}\right )\nu (dx)\right ). {}\\ \end{array}$$
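The cumulant formula can be checked by simulation. In the following minimal sketch (ours; the rate, the test point and the truncation horizon are arbitrary choices), μ is a Poisson distribution with parameter λ, so \(C_{\mu }(w) =\lambda (e^{iw} - 1)\), and \(f(t) = e^{-t}\); the stochastic integral is then the sum of \(e^{-T_{i}}\) over the jump times \(T_{i}\) of the Poisson process, and its empirical characteristic function is compared with \(\exp \left (\int _{0}^{\infty }C_{\mu }(e^{-t}z)dt\right )\).

```python
import numpy as np
from scipy.integrate import quad

rng = np.random.default_rng(0)
lam, z, T, n = 1.0, 1.3, 30.0, 200_000

# X = int_0^T e^{-t} dN_t = sum_i e^{-T_i}; given N_T = k, the jump times are
# i.i.d. uniform on [0, T].  The neglected tail beyond T is of order e^{-T}.
counts = rng.poisson(lam*T, size=n)
X = np.array([np.exp(-rng.uniform(0, T, k)).sum() for k in counts])
emp = np.mean(np.exp(1j*z*X))

# Proposition 2.6: C(z) = int_0^infty C_mu(e^{-t}z) dt with C_mu(w) = lam(e^{iw}-1)
re = quad(lambda t: lam*(np.cos(z*np.exp(-t)) - 1), 0, np.inf)[0]
im = quad(lambda t: lam*np.sin(z*np.exp(-t)), 0, np.inf)[0]
print(emp, np.exp(complex(re, im)))  # empirical vs. exact characteristic function
```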

Remark 2.7

We will treat many f(t)’s which have a singularity at t = 0: (i) \(f(t) = G_{\alpha,\beta }^{{\ast}}(t)\), t > 0, the inverse function of \(t = G_{\alpha,\beta }(s) =\int _{ s}^{\infty }u^{-\alpha -1}e^{-u^{\beta } }du,s \geq 0\), in Sect. 5.2, which specializes to the kernels of the \(\varUpsilon\)-, Ψ-, \(\mathcal{G}\)- and \(\mathcal{M}\)-mappings in Sect. 4.1. (ii) \(f(t) = t^{-1/\alpha }\), t > 0, which is the kernel of the stable mapping in Sect. 5.4.

3 Some Known Classes of Infinitely Divisible Distributions

As mentioned in Sect. 1, the main concern of this article is to discuss known and new classes of infinitely divisible distributions and to characterize them in several ways. We start with some known classes in Sects. 3.2 and 3.3, and show the relationships among them in Sect. 3.4.

3.1 Completely Monotone Functions

In the following, the concept of a completely monotone function plays an important role. So, we start with the definition of completely monotone functions and some of their properties.

Definition 3.1 (Completely Monotone Function)

A function \(\varphi (x)\) on \((0,\infty )\) is completely monotone if it has derivatives \(\varphi ^{(n)}\) of all orders and \((-1)^{n}\varphi ^{(n)}(x) \geq 0,n \in \mathbb{Z}_{+},x > 0\).

Two typical examples of completely monotone functions are \(e^{-x}\) and \(x^{-p}\), p > 0.
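Complete monotonicity can be probed numerically: it forces \((-1)^{n}\Delta _{h}^{n}\varphi (x) \geq 0\) for all iterated forward differences, so a single negative value certifies failure (the converse direction cannot be certified by finitely many checks). The following sketch (ours) confirms that \(e^{-x}\) passes the test while \(e^{-x^{2}}\) fails it, a fact used again in Sect. 3.4.

```python
import numpy as np

def passes_cm_test(phi, x=0.3, h=0.4, nmax=8):
    # complete monotonicity forces (-1)^n (forward difference)^n phi >= 0;
    # a negative value is a certificate of failure (falsification test only)
    vals = phi(x + h*np.arange(nmax + 1))
    for n in range(nmax + 1):
        d = vals.copy()
        for _ in range(n):
            d = np.diff(d)
        if (-1)**n * d[0] < -1e-12:
            return False
    return True

print(passes_cm_test(lambda x: np.exp(-x)))     # True: e^{-x} is completely monotone
print(passes_cm_test(lambda x: np.exp(-x**2)))  # False: e^{-x^2} is not
```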

Proposition 3.2 (Bernstein’s Theorem. See e.g. Feller [21, Chap. XIII, 4])

A function \(\varphi\) on \((0,\infty )\) is completely monotone if and only if it is the Laplace transform of a measure μ on \((0,\infty )\) .

Proposition 3.3 (See e.g. Feller [21, Chap. XIII, 4])

  1. (1)

    The product of two completely monotone functions on \((0,\infty )\) is also completely monotone.

  2. (2)

    If \(\varphi\) is completely monotone on \((0,\infty )\) and if ψ is a positive function with a completely monotone derivative on \((0,\infty )\) , then the composed function \(\varphi (\psi )\) is also completely monotone on \((0,\infty )\) .

3.2 The Classes of Stable and Semi-stable Distributions

Definition 3.4

Let \(\mu \in ID(\mathbb{R}^{d})\).

  1. (1)

    It is called stable if, for any a > 0, there exist b > 0 and \(c \in \mathbb{R}^{d}\) such that

    $$\displaystyle{ \hat{\mu }(z)^{a} =\hat{\mu } (bz)e^{i\langle c,z\rangle }. }$$
    (8)

    \(S(\mathbb{R}^{d})\) denotes the class of all stable distributions on \(\mathbb{R}^{d}\).

  2. (2)

    It is called strictly stable if, for any a > 0, there is b > 0 such that \(\hat{\mu }(z)^{a} =\hat{\mu } (bz).\)

  3. (3)

    It is called semi-stable if, for some a > 0 with a ≠ 1, there exists b > 0 and \(c \in \mathbb{R}^{d}\) satisfying (8). \(SS(\mathbb{R}^{d})\) denotes the class of all semi-stable distributions on \(\mathbb{R}^{d}\).

  4. (4)

    It is called strictly semi-stable if, for some a > 0 with a ≠ 1, there exists b > 0 satisfying \(\hat{\mu }(z)^{a} =\hat{\mu } (bz)\).

\(\mu \in \mathcal{P}(\mathbb{R}^{d})\) is called trivial if it is the distribution of a random variable concentrated at a single point; otherwise it is called non-trivial. When this point is \(c \in \mathbb{R}^{d}\), we write \(\mu =\delta _{c}\).

Theorem 3.5 (See e.g. Sato [73, Theorem 13.15] or Sato [72, Theorem 3.3])

If μ is non-trivial stable, then there exists a unique α ∈ (0,2] such that \(b = a^{1/\alpha }\) in  (8) .

In this case, we say that such a μ is α-stable. The Gaussian and Cauchy distributions are 2-stable and 1-stable, respectively. Note that any trivial distribution is stable in the sense that (8) is satisfied, and α is not uniquely determined. In the following, when we speak of α-stable distributions, we always include all trivial distributions. Also note that a trivial distribution other than \(\delta _{0}\) is not strictly α-stable except when α = 1.
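As a quick check of Theorem 3.5 (our own illustration, not from the cited sources), take the symmetric α-stable characteristic function \(\hat{\mu }(z) = e^{-\vert z\vert ^{\alpha }}\); then (8) holds with c = 0 precisely for \(b = a^{1/\alpha }\):

```python
import numpy as np

alpha = 0.8                                    # an arbitrary index in (0,2]
mu_hat = lambda z: np.exp(-np.abs(z)**alpha)   # symmetric alpha-stable

z = np.linspace(-5, 5, 1001)
for a in (0.5, 2.0, 10.0):
    b = a**(1/alpha)                           # the unique b of Theorem 3.5
    print(a, np.max(np.abs(mu_hat(z)**a - mu_hat(b*z))))  # (8) holds, c = 0
```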

3.3 Some Known Classes of Infinitely Divisible Distributions

We start with the following six classes which are well-studied in the literature. We call Vx an elementary gamma random variable (resp. elementary mixed-exponential random variable, elementary compound Poisson random variable) on \(\mathbb{R}^{d}\) if x is a nonrandom, nonzero element of \(\mathbb{R}^{d}\) and V is a real random variable having gamma distribution (resp. a mixture of a finite number of exponential distributions, compound Poisson distribution whose jump size distribution is uniform on the interval [0, a] for some a > 0).

  1. (1)

    The class \(U(\mathbb{R}^{d})\) (the Jurek class): \(\mu \in U(\mathbb{R}^{d})\) if and only if \(\mu \in ID(\mathbb{R}^{d})\) and either ν μ  = 0 or ν μ ≠ 0 and, in case ν μ ≠ 0, the radial component \(\nu _{\xi }\) of ν μ is expressed as

    $$\displaystyle{ \nu _{\xi }(dr) =\ell _{\xi }(r)dr, }$$
    (9)

    where \(\ell_{\xi }(r)\) is a nonnegative function measurable in \(\xi \in S\) and nonincreasing on \((0,\infty )\) as a function of r.

The class \(U(\mathbb{R}^{d})\) was introduced by Jurek [31] and \(\mu \in U(\mathbb{R}^{d})\) is called s-selfdecomposable. Jurek [31] proved that \(\mu \in U(\mathbb{R}^{d})\) if and only if for any b > 1 there exists \(\mu _{b} \in ID(\mathbb{R}^{d})\) such that \(\hat{\mu }(z) =\hat{\mu } (b^{-1}z)^{b^{-1} }\hat{\mu }_{b}(z).\) Sato [77] also formulated \(U(\mathbb{R}^{d})\) as the smallest class of distributions on \(\mathbb{R}^{d}\) closed under convolution and weak convergence and containing all distributions of elementary compound Poisson random variables on \(\mathbb{R}^{d}\).

  2. (2)

    The class \(B(\mathbb{R}^{d})\) (the Goldie–Steutel–Bondesson class): \(\mu \in B(\mathbb{R}^{d})\) if and only if \(\mu \in ID(\mathbb{R}^{d})\) and either ν μ  = 0 or ν μ ≠ 0 and, in case ν μ ≠ 0, the radial component \(\nu _{\xi }\) of ν μ is expressed as

    $$\displaystyle{ \nu _{\xi }(dr) =\ell _{\xi }(r)dr, }$$
    (10)

    where \(\ell_{\xi }(r)\) is a nonnegative function measurable in \(\xi \in S\) and completely monotone on \((0,\infty )\) as a function of r.

Historically, Goldie [23] proved the infinite divisibility of mixtures of exponential distributions and Steutel [82] found the description of their Lévy measures. Then Bondesson [16] studied generalized convolutions of mixtures of exponential distributions on \(\mathbb{R}_{+}\); the resulting class is the smallest class of distributions on \(\mathbb{R}_{+}\) that contains all mixtures of exponential distributions and that is closed under convolution and weak convergence on \(\mathbb{R}_{+}\). \(B(\mathbb{R}^{d})\) is its generalization by Barndorff-Nielsen et al. [12], where all mixtures of exponential distributions are replaced by all distributions of elementary mixed-exponential random variables on \(\mathbb{R}^{d}\).

  3. (3)

    The class \(L(\mathbb{R}^{d})\) (the class of selfdecomposable distributions): \(\mu \in L(\mathbb{R}^{d})\) if and only if \(\mu \in ID(\mathbb{R}^{d})\) and either \(\nu _{\mu } = 0\) or \(\nu _{\mu }\neq 0\) and, in case \(\nu _{\mu }\neq 0\), the radial component \(\nu _{\xi }\) of \(\nu _{\mu }\) is expressed as

    $$\displaystyle{ \nu _{\xi }(dr) = r^{-1}k_{\xi }(r)dr, }$$
    (11)

    where \(k_{\xi }(r)\) is a nonnegative function measurable in \(\xi \in S\) and nonincreasing on \((0,\infty )\) as a function of r.

It is known (see e.g. Sato [73, Theorem 15.10]) that \(\mu \in L(\mathbb{R}^{d})\) if and only if for any b > 1, there exists some \(\rho _{b} \in \mathcal{P}(\mathbb{R}^{d})\) such that

$$\displaystyle{ \hat{\mu }(z) =\hat{\mu } (b^{-1}z)\hat{\rho }_{ b}(z). }$$
(12)

This statement is usually used as the definition of selfdecomposability. \(\rho _{b}\) in (12) can be shown to be infinitely divisible. Hence, we may replace \(\rho _{b} \in \mathcal{P}(\mathbb{R}^{d})\) by \(\rho _{b} \in ID(\mathbb{R}^{d})\) in the previous statement.

  4. (4)

    The class \(T(\mathbb{R}^{d})\) (the Thorin class): \(\mu \in T(\mathbb{R}^{d})\) if and only if \(\mu \in ID(\mathbb{R}^{d})\) and either ν μ  = 0 or ν μ ≠ 0 and, in case ν μ ≠ 0, the radial component \(\nu _{\xi }\) of ν μ is expressed as

    $$\displaystyle{ \nu _{\xi }(dr) = r^{-1}k_{\xi }(r)dr, }$$
    (13)

    where \(k_{\xi }(r)\) is a nonnegative function measurable in \(\xi \in S\) and completely monotone on \((0,\infty )\) as a function of r.

Originally this class was studied by Thorin [85, 86] when he wanted to prove the infinite divisibility of the Pareto and the log-normal distributions, as mentioned in Sect. 1. The class \(T(\mathbb{R}_{+})\) (resp. \(T(\mathbb{R})\)) is defined as the smallest class of distributions on \(\mathbb{R}_{+}\) (resp. \(\mathbb{R}\)) that contains all positive (resp. positive and negative) gamma distributions and that is closed under convolution and weak convergence on \(\mathbb{R}_{+}\) (resp. \(\mathbb{R}\)). The distributions in \(T(\mathbb{R}_{+})\) are called generalized gamma convolutions (GGCs) and those in \(T(\mathbb{R})\) are called extended generalized gamma convolutions (EGGCs). Thorin showed that the Pareto and the log-normal distributions are GGCs, and thus are selfdecomposable and infinitely divisible. The infinite divisibility of the log-normal distribution was not known before the theory of hyperbolic complete monotonicity was developed by Thorin.

\(T(\mathbb{R}^{d})\) is a generalization of \(T(\mathbb{R})\) by Barndorff-Nielsen et al. [12], where all positive and negative gamma distributions are replaced by all distributions of elementary gamma random variable on \(\mathbb{R}^{d}\).

  5. (5)

    The class \(G(\mathbb{R}^{d})\) (the class of type G distributions): \(\mu \in G(\mathbb{R}^{d})\) if and only if \(\mu \in ID(\mathbb{R}^{d})\) and either ν μ  = 0 or ν μ ≠ 0 and, in case ν μ ≠ 0, the radial component \(\nu _{\xi }\) of ν μ is expressed as

    $$\displaystyle{ \nu _{\xi }(dr) = g_{\xi }(r^{2})dr, }$$
    (14)

    where \(g_{\xi }(r)\) is a nonnegative function measurable in \(\xi \in S\) and completely monotone on \((0,\infty )\) as a function of r.

When d = 1, \(\mu \in G(\mathbb{R}) \cap ID_{\mathrm{sym}}(\mathbb{R})\) if and only if \(\mu = \mathcal{L}(V ^{1/2}Z)\), where V > 0, \(\mathcal{L}(V ) \in ID(\mathbb{R})\), Z is the standard normal random variable, and V and Z are independent. When d ≥ 1, \(\mu =\mu _{(A,\nu,\gamma )} \in G(\mathbb{R}^{d}) \cap ID_{\mathrm{sym}}(\mathbb{R}^{d})\) if and only if \(\nu (B) = E[\nu _{0}(Z^{-1}B)]\) for some Lévy measure \(\nu _{0}\). (See Maejima and Rosiński [47].) Previously only symmetric distributions in \(G(\mathbb{R}^{d})\) were said to be of type G. In this article, however, we say that any distribution from \(G(\mathbb{R}^{d})\) is of type G.
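A classical one-dimensional instance of this characterization (our worked sketch, not taken from [47]): if V is exponentially distributed with mean 1, then \(V ^{1/2}Z\) has characteristic function \(E[e^{-z^{2}V/2}] = (1 + z^{2}/2)^{-1}\), which is that of a Laplace distribution; hence Laplace distributions are of type G. A Monte Carlo confirmation:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500_000
V = rng.exponential(1.0, n)   # L(V) is exponential, hence infinitely divisible
Z = rng.standard_normal(n)
X = np.sqrt(V) * Z            # a variance mixture of the normal: type G

for z in (0.7, 1.5, 3.0):
    emp = np.mean(np.exp(1j*z*X)).real   # characteristic function (real by symmetry)
    print(z, emp, 1/(1 + z**2/2))        # E e^{izX} = E e^{-z^2 V/2} = 1/(1+z^2/2)
```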

  6. (6)

    The class \(M(\mathbb{R}^{d})\) (Aoyama et al. [4]): \(\mu \in M(\mathbb{R}^{d})\) if and only if \(\mu \in ID(\mathbb{R}^{d})\) and either ν μ  = 0 or ν μ ≠ 0 and, in case ν μ ≠ 0, the radial component \(\nu _{\xi }\) of ν μ is expressed as

    $$\displaystyle{ \nu _{\xi }(dr) = r^{-1}g_{\xi }(r^{2})dr, }$$
    (15)

    where \(g_{\xi }(r)\) is a nonnegative function measurable in \(\xi \in S\) and completely monotone on \((0,\infty )\) as a function of r. (Originally, in Aoyama et al. [4], we defined the class \(M(\mathbb{R}^{d})\) restricted to \(ID_{\mathrm{sym}}(\mathbb{R}^{d})\). However, in this article, we do not assume the symmetry of \(\mu \in M(\mathbb{R}^{d})\). The requirement (15) is independent of the symmetry of the distribution.)

This class was introduced with the following motivation: what does the class become if we replace \(g_{\xi }(r^{2})\) in (14) by \(r^{-1}g_{\xi }(r^{2})\), that is, if we multiply the Lévy density by an extra \(r^{-1}\), as is done in passing from the class (1) to the class (3) and from the class (2) to the class (4)?

3.4 Relationships Among the Classes

With respect to relationships among the classes mentioned in Sect. 3.3, we have the following.

  1. (1)

    \(L(\mathbb{R}^{d}) \cup G(\mathbb{R}^{d}) \subsetneq U(\mathbb{R}^{d})\) and \(T(\mathbb{R}^{d}) \subsetneq L(\mathbb{R}^{d})\). (By definition.)

  2. (2)

    Each class in Sect. 3.3 includes \(S(\mathbb{R}^{d})\). This is because if \(\mu \in S(\mathbb{R}^{d})\), then either A ≠ 0 and \(\nu _{\mu } = 0\), or A = 0 and \(\nu _{\xi }(dr) = r^{-1-\alpha }dr\) for some α ∈ (0, 2) (see e.g. Sato [73, Theorem 14.3]).

  3. (3)

    \(T(\mathbb{R}^{d}) \subsetneq B(\mathbb{R}^{d}) \subsetneq G(\mathbb{R}^{d})\). These inclusions follow from the properties of completely monotone functions. It follows from Proposition 3.3(1) that \(T(\mathbb{R}^{d}) \subset B(\mathbb{R}^{d})\). If we put \(g_{\xi }(x) =\ell _{\xi }(x^{1/2})\), then it follows from Proposition 3.3(2) that \(B(\mathbb{R}^{d}) \subset G(\mathbb{R}^{d})\). The relation \(\subsetneq \) can be shown by choosing suitable Lévy densities.

  4. (4)

    \(T(\mathbb{R}^{d}) \subsetneq M(\mathbb{R}^{d}) \subsetneq L(\mathbb{R}^{d}) \cap G(\mathbb{R}^{d})\). The proof is as follows (Aoyama et al. [4]):

We first show that \(M(\mathbb{R}^{d}) \subsetneq L(\mathbb{R}^{d}) \cap G(\mathbb{R}^{d})\). Note that \(r^{-1/2}\) is completely monotone and, by Proposition 3.3(1), that the product of two completely monotone functions is also completely monotone. Thus by the definition of \(M(\mathbb{R}^{d})\), it is clear that \(M(\mathbb{R}^{d}) \subset L(\mathbb{R}^{d}) \cap G(\mathbb{R}^{d})\). To show that \(M(\mathbb{R}^{d})\neq L(\mathbb{R}^{d}) \cap G(\mathbb{R}^{d})\), it is enough to construct \(\mu \in ID(\mathbb{R}^{d})\) such that \(\mu \in L(\mathbb{R}^{d}) \cap G(\mathbb{R}^{d})\) but \(\mu \notin M(\mathbb{R}^{d})\).

First consider the case d = 1. Let

$$\displaystyle{ \nu (dr) = r^{-1}g(r^{2})dr,\quad r > 0. }$$

For our purpose, it is enough to construct a function \(g: (0,\infty )\rightarrow (0,\infty )\) such that (a) \(r^{-1/2}g(r)\) is completely monotone on \((0,\infty )\) (meaning that the corresponding μ belongs to \(G(\mathbb{R})\)), (b) \(g(r^{2})\) or, equivalently, g(r) is nonincreasing on \((0,\infty )\) (meaning that the corresponding μ belongs to \(L(\mathbb{R})\)), and (c) g(r) is not completely monotone on \((0,\infty )\) (meaning that the corresponding μ does not belong to \(M(\mathbb{R})\)). We show that

$$\displaystyle{ g(r):= r^{-1/2}h(r):= r^{-1/2}\left (e^{-0.9r} - e^{-r} + 0.1e^{-1.1r}\right ),\quad r > 0, }$$

satisfies the requirements (a)−(c) above.

  1. (a)

    We have

    $$\displaystyle{ r^{-1/2}g(r) =\int _{ 0.9}^{1}e^{-ru}du + 0.1\int _{ 1.1}^{\infty }e^{-ru}du, }$$

which is a sum of two completely monotone functions, and thus \(r^{-1/2}g(r)\) is completely monotone.

  2. (b)

    If h(r) is nonincreasing, then so is \(g(r) = r^{-1/2}h(r)\). To show this, we compute

    $$\displaystyle\begin{array}{rcl} h'(r)& =& -0.9e^{-0.9r} + e^{-r} - 0.11e^{-1.1r} = -0.9e^{-1.1r}\left [\left (e^{0.1r} - \frac{1} {1.8}\right )^{2} -\frac{0.604} {3.24} \right ] {}\\ & \leq &-0.9e^{-1.1r}\left [\left (1 - \frac{1} {1.8}\right )^{2} -\frac{0.604} {3.24} \right ] = -0.01e^{-1.1r} < 0,\quad r > 0. {}\\ \end{array}$$
  3. (c)

    To show (c), we see that

    $$\displaystyle{ h(r) =\int _{ 0}^{\infty }e^{-ru}Q(du), }$$

where Q is a signed measure such that \(Q = Q_{1} + Q_{2} + Q_{3}\) and

$$\displaystyle{Q_{1}(\{0.9\}) = 1,\,\,Q_{2}(\{1\}) = -1,\,\,Q_{3}(\{1.1\}) = 0.1.}$$

On the other hand,

$$\displaystyle{r^{-1/2} = \pi ^{-1/2}\int _{ 0}^{\infty }e^{-ru}u^{-1/2}du =:\int _{ 0}^{\infty }e^{-ru}R(du),}$$

where

$$\displaystyle{ R(du) = (\pi u)^{-1/2}du. }$$

Thus

$$\displaystyle{g(r) =\int _{ 0}^{\infty }e^{-ru}R(du)\int _{ 0}^{\infty }e^{-rv}Q(dv) =\int _{ 0}^{\infty }e^{-rw}U(dw),}$$

where

$$\displaystyle{ U(B) =\int _{ 0}^{\infty }Q(B - y)R(dy). }$$

We are going to show that U is not a (nonnegative) measure, namely, that \(U\left ((a,b)\right ) < 0\) for some interval (a, b). If so, g is not completely monotone by Bernstein’s theorem (Proposition 3.2). We have

$$\displaystyle\begin{array}{rcl} U\left ((a,b)\right )& =& \pi ^{-1/2}\int _{ 0}^{\infty }Q\left ((a - y,b - y)\right )y^{-1/2}dy {}\\ & =& \pi ^{-1/2}\sum _{ j=1}^{3}\int _{ 0}^{\infty }Q_{ j}\left ((a - y,b - y)\right )y^{-1/2}dy {}\\ & =& \pi ^{-1/2}\left [\int _{ a-0.9}^{b-0.9}y^{-1/2}dy -\int _{ a-1}^{b-1}y^{-1/2}dy + 0.1\int _{ a-1.1}^{b-1.1}y^{-1/2}dy\right ] {}\\ & =& 2\pi ^{-1/2}\left [\left (\sqrt{b - 0.9} -\sqrt{a - 0.9}\right ) -\left (\sqrt{b - 1} -\sqrt{a - 1}\right )\right. {}\\ & & \qquad \qquad \quad \left.+0.1\left (\sqrt{b - 1.1} -\sqrt{a - 1.1}\right )\right ]. {}\\ \end{array}$$

Take (a, b) = (1.15, 1.35). Then

$$\displaystyle\begin{array}{rcl} & & U\left ((1.15,1.35)\right ) {}\\ & & \quad = 2\pi ^{-1/2}\left [(\sqrt{0.45} -\sqrt{0.25}) - (\sqrt{0.35} -\sqrt{0.15}) + 0.1(\sqrt{0.25} -\sqrt{0.05})\right ] {}\\ & & \quad < -0.01\pi ^{-1/2} < 0. {}\\ \end{array}$$

This shows that g is not completely monotone.
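The inequalities used in (b) and (c) can be cross-checked numerically; the following sketch (ours) evaluates h′ on a grid and the closed-form expression for \(U\left ((1.15,1.35)\right )\) derived above.

```python
import numpy as np

h_prime = lambda r: -0.9*np.exp(-0.9*r) + np.exp(-r) - 0.11*np.exp(-1.1*r)

r = np.linspace(1e-4, 50, 200_000)
print(h_prime(r).max())    # negative on the whole grid: h is decreasing, so (b) holds

a, b = 1.15, 1.35          # the interval chosen above
U = 2/np.sqrt(np.pi) * ((np.sqrt(b-0.9) - np.sqrt(a-0.9))
                        - (np.sqrt(b-1.0) - np.sqrt(a-1.0))
                        + 0.1*(np.sqrt(b-1.1) - np.sqrt(a-1.1)))
print(U)                   # about -0.0066 < 0, so g is not completely monotone
```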

A d-dimensional example of \(\mu \in ID(\mathbb{R}^{d})\) such that \(\mu \in L(\mathbb{R}^{d}) \cap G(\mathbb{R}^{d})\) but \(\mu \notin M(\mathbb{R}^{d})\) is given by taking the example of the Lévy measure on \(\mathbb{R}\) constructed above as the radial component of a Lévy measure on \(\mathbb{R}^{d}\). This completes the proof of \(M(\mathbb{R}^{d}) \subsetneq L(\mathbb{R}^{d}) \cap G(\mathbb{R}^{d})\).

We next show that \(T(\mathbb{R}^{d}) \subsetneq M(\mathbb{R}^{d})\). If \(\mu \in T(\mathbb{R}^{d})\), then the radial component of the Lévy measure of μ has the form \(\nu _{\xi }(dr) = r^{-1}k_{\xi }(r)dr\), where \(k_{\xi }\) is completely monotone. By Proposition 3.3(2) and the fact that \(\psi (r) = r^{1/2}\) has a completely monotone derivative, \(g_{\xi }(r):= k_{\xi }(r^{1/2})\) is completely monotone. Thus \(\nu _{\xi }(dr)\) can be read as \(r^{-1}g_{\xi }(r^{2})dr\), where \(g_{\xi }\) is completely monotone, concluding that \(\mu \in M(\mathbb{R}^{d})\).

To show that \(T(\mathbb{R}^{d})\neq M(\mathbb{R}^{d})\), it is enough to find a completely monotone function \(g_{\xi }\) such that \(k_{\xi }(r) = g_{\xi }(r^{2})\) is not completely monotone. The function \(g_{\xi }(r) = e^{-r}\) has this property: although \(e^{-r}\) is completely monotone, \((-1)^{2} \frac{d^{2}} {dr^{2}} e^{-r^{2} } < 0\) for small r > 0. This completes the proof of the inclusion \(T(\mathbb{R}^{d}) \subsetneq M(\mathbb{R}^{d})\).

Remark 3.6

It is important to remark that any distribution in \(L(\mathbb{R})\) is unimodal (a result by Yamazato [97]), which implies the unimodality of any distribution in \(T(\mathbb{R})\), since \(T(\mathbb{R}) \subset L(\mathbb{R})\).

The following are examples of non-inclusions among the classes. (See Schilling et al. [80, Chap. 9].)

  5. (5)

    \(L(\mathbb{R})\not\subset B(\mathbb{R})\). Let \(\nu _{\mu }(dx) = x^{-1}1_{(0,1)}(x)dx\), x > 0. Then \(k(x) = 1_{(0,1)}(x)\) is nonincreasing and thus \(\mu \in L(\mathbb{R})\), but \(\ell(x) = x^{-1}1_{(0,1)}(x)\) is not completely monotone and thus \(\mu \notin B(\mathbb{R})\). Hence \(L(\mathbb{R})\not\subset B(\mathbb{R})\).

  6. (6)

    \(B(\mathbb{R})\not\subset L(\mathbb{R})\). Let \(\nu _{\mu }(dx) = e^{-x}dx\), x > 0. Then it is easy to see that \(\mu \in B(\mathbb{R})\), but \(\mu \notin L(\mathbb{R})\). Therefore \(B(\mathbb{R})\not\subset L(\mathbb{R})\).

4 Stochastic Integral Mappings and Characterizations of Classes (I)

This section is one of the main subjects of this article, as mentioned in Sect. 1. In Sect. 4.1 we explain six well-studied stochastic integral mappings, and then in Sect. 4.2 we characterize the classes of Sect. 3.3 as the ranges of these mappings.

4.1 Six Stochastic Integral Mappings

For \(\mu \in ID(\mathbb{R}^{d})\), let {X t (μ), t ≥ 0} be the Lévy process with \(\mathcal{L}(X_{1}^{(\mu )}) =\mu\). Let f(t) be a real-valued measurable function which is square integrable on [a, b] for any \(0 < a < b < \infty \), and suppose that the stochastic integral \(\int _{0}^{\infty }f(t)dX_{t}^{(\mu )}\) is definable in the sense of Proposition 2.6. Then we can define a mapping \(\mu \mapsto \varPhi _{f}(\mu )\). We denote the domain of Φ f by \(\mathfrak{D}(\varPhi _{f})\), which is the class of \(\mu \in ID(\mathbb{R}^{d})\) for which Φ f (μ) is definable. We also denote the range of Φ f by \(\mathfrak{R}(\varPhi _{f}) =\varPhi _{f}(\mathfrak{D}(\varPhi _{f}))\).

Now the following are well-studied mappings.

  1. (1)

    \(\mathcal{U}\)-mapping (Jurek [31]). For \(\mu \in \mathfrak{D}(\mathcal{U}) = ID(\mathbb{R}^{d})\), \(\mathcal{U}(\mu ) = \mathcal{L}\left (\int _{0}^{1}tdX_{t}^{(\mu )}\right ).\)

  2. (2)

    \(\varUpsilon\)-mapping (Barndorff-Nielsen et al. [12]). For \(\mu \in \mathfrak{D}(\varUpsilon ) = ID(\mathbb{R}^{d})\), \(\varUpsilon (\mu ) =\)

    \(\mathcal{L}\left (\int _{0}^{1}\log (t^{-1})dX_{t}^{(\mu )}\right ).\)

  3. (3)

    Φ-mapping (Jurek and Vervaat [38], Sato and Yamazato [79], Wolfe [96]). For \(\mu \in \mathfrak{D}(\varPhi ) = ID_{\log }(\mathbb{R}^{d})\), \(\varPhi (\mu ) = \mathcal{L}\left (\int _{0}^{\infty }e^{-t}dX_{t}^{(\mu )}\right ).\)

  4. (4)

    Ψ-mapping (Barndorff-Nielsen et al. [12]). Let \(p(s) =\int _{ s}^{\infty }e^{-u}u^{-1}du,s > 0\), and denote its inverse function by \(p^{{\ast}}(t)\). For \(\mu \in \mathfrak{D}(\varPsi ) = ID_{\log }(\mathbb{R}^{d})\),

    \(\varPsi (\mu ) = \mathcal{L}\left (\int _{0}^{\infty }p^{{\ast}}(t)dX_{t}^{(\mu )}\right ).\)

  5. (5)

    \(\mathcal{G}\)-mapping (Maejima and Sato [48]). Let \(g(s) =\int _{ s}^{\infty }e^{-u^{2} }du,s > 0\), and denote its inverse function by \(g^{{\ast}}(t)\). For \(\mu \in \mathfrak{D}(\mathcal{G}) = ID(\mathbb{R}^{d})\), \(\mathcal{G}(\mu ) = \mathcal{L}\left (\int _{0}^{\sqrt{\pi }/2}g^{{\ast}}(t)dX_{ t}^{(\mu )}\right ).\)

  6. (6)

    \(\mathcal{M}\)-mapping (Maejima and Nakahara [44]). Let \(m(s) =\int _{ s}^{\infty }e^{-u^{2} }u^{-1}du,s > 0\), and denote its inverse function by \(m^{{\ast}}(t)\). For \(\mu \in \mathfrak{D}(\mathcal{M}) = ID_{\log }(\mathbb{R}^{d})\), \(\mathcal{M}(\mu ) = \mathcal{L}\left (\int _{0}^{\infty }m^{{\ast}}(t)dX_{t}^{(\mu )}\right ).\)

In the above, it is easy to see that the domains of the mappings are \(ID(\mathbb{R}^{d})\) when the intervals of the stochastic integrals are finite. However, when the stochastic integrals are improper at infinity, proofs are needed. As an example, we show the case of the Φ-mapping below. (For (4), see Barndorff-Nielsen et al. [12, Theorem C], and for (6), see Maejima and Nakahara [44, Theorem 2.3].) Note that in the six examples above, the singularity of the kernel at t = 0 has no influence on the determination of the domains of the mappings.

To show that \(\mathfrak{D}(\varPhi ) = ID_{\log }(\mathbb{R}^{d})\), we use Proposition 2.6(2) with \(f(t) = e^{-t}\). Let (A, ν, γ) and \((\tilde{A},\tilde{\nu },\tilde{\gamma })\) be the Lévy-Khintchine triplets of μ and Φ(μ), respectively. If we show that \(\tilde{A}\) and \(\tilde{\gamma }\) are finite and that \(\tilde{\nu }\) is a Lévy measure, then Proposition 2.6(2) ensures the existence of the stochastic integral defining Φ(μ).

  1. (i)

    (Gaussian part): \(\tilde{A} =\int _{ 0}^{\infty }e^{-2t}Adt\) exists.

  2. (ii)

    (Lévy measure): We are going to show that

    $$\displaystyle{\tilde{\nu }(B) =\int _{\mathbb{R}^{d}}\nu (dx)\int _{0}^{\infty }1_{ B}(e^{-t}x)dt,\quad B \in \mathcal{B}(\mathbb{R}^{d}),}$$

satisfies that \(\int _{\mathbb{R}^{d}}(\vert x\vert ^{2} \wedge 1)\tilde{\nu }(dx) < \infty \). We have

$$\displaystyle{\int _{\mathbb{R}^{d}}(\vert x\vert ^{2} \wedge 1)\tilde{\nu }(dx) =\int _{ \vert x\vert \leq 1}\vert x\vert ^{2}\tilde{\nu }(dx) +\int _{ \vert x\vert >1}\tilde{\nu }(dx),}$$

where

$$\displaystyle\begin{array}{rcl} \int _{\vert x\vert \leq 1}\vert x\vert ^{2}\tilde{\nu }(dx)& =& \int _{ \mathbb{R}^{d}}\nu (dx)\int _{0}^{\infty }\vert e^{-t}x\vert ^{2}1_{\{ \vert e^{-t}x\vert \leq 1\}}dt {}\\ & =& \int _{\mathbb{R}^{d}}\vert x\vert ^{2}\nu (dx)\int _{ 0}^{\infty }e^{-2t}1_{\{ \vert x\vert \leq e^{t}\}}dt =\int _{\mathbb{R}^{d}}\vert x\vert ^{2}\nu (dx)\int _{ 0\vee \log \vert x\vert }^{\infty }e^{-2t}dt {}\\ & =& \frac{1} {2}\int _{\mathbb{R}^{d}}\vert x\vert ^{2}(1 \wedge \vert x\vert ^{-2})\nu (dx) = \frac{1} {2}\int _{\mathbb{R}^{d}}(\vert x\vert ^{2} \wedge 1)\nu (dx) {}\\ \end{array}$$

and

$$\displaystyle\begin{array}{rcl} \int _{\vert x\vert >1}\tilde{\nu }(dx)& =& \int _{\mathbb{R}^{d}}\nu (dx)\int _{0}^{\infty }1_{\{ \vert e^{-t}x\vert >1\}}dt =\int _{\mathbb{R}^{d}}\nu (dx)\int _{0}^{\infty }1_{\{ \vert x\vert >e^{t}\}}dt {}\\ & =& \int _{\vert x\vert >1}\nu (dx)\int _{0}^{\infty }1_{\{ t<\log \vert x\vert \}}dt =\int _{\vert x\vert >1}\log \vert x\vert \nu (dx). {}\\ \end{array}$$

Thus,

$$\displaystyle{\int _{\mathbb{R}^{d}}(\vert x\vert ^{2} \wedge 1)\tilde{\nu }(dx) < \infty }$$

if and only if

$$\displaystyle{\int _{\vert x\vert >1}\log \vert x\vert \nu (dx) < \infty.}$$
  3. (iii)

    (γ-part): To complete the proof, it is enough to show that

    $$\displaystyle{\tilde{\gamma }=\int _{ 0}^{\infty }e^{-t}dt \cdot \gamma +\int _{ \mathbb{R}^{d}}\nu (dx)\int _{0}^{\infty }e^{-t}x\left ( \frac{1} {1 + \vert e^{-t}x\vert ^{2}} - \frac{1} {1 + \vert x\vert ^{2}}\right )dt < \infty,}$$

whenever \(\int _{\vert x\vert >1}\log \vert x\vert \nu (dx) < \infty \). The first integral is trivially finite. As to the second integral, its norm is bounded by

$$\displaystyle\begin{array}{rcl} & & \int _{\mathbb{R}^{d}}\nu (dx)\int _{0}^{\infty } \frac{e^{-t}\vert x\vert ^{3}} {(1 + \vert e^{-t}x\vert ^{2})(1 + \vert x\vert ^{2})}dt {}\\ & & \quad = \left (\int _{\vert x\vert \leq 1} +\int _{\vert x\vert >1}\right )\nu (dx)\int _{0}^{\infty } \frac{e^{-t}\vert x\vert ^{3}} {(1 + \vert e^{-t}x\vert ^{2})(1 + \vert x\vert ^{2})}dt {}\\ & & \quad =: I_{1} + I_{2}. {}\\ \end{array}$$

Here

$$\displaystyle{ I_{1} \leq \int _{\vert x\vert \leq 1}\vert x\vert ^{3}\nu (dx)\int _{ 0}^{\infty }e^{-t}dt \leq \int _{ \vert x\vert \leq 1}\vert x\vert ^{2}\nu (dx) < \infty }$$

and

$$\displaystyle\begin{array}{rcl} I_{2}& =& \int _{\vert x\vert >1}\nu (dx)\int _{0}^{\infty } \frac{e^{-t}\vert x\vert ^{3}} {(1 + \vert e^{-t}x\vert ^{2})(1 + \vert x\vert ^{2})}dt {}\\ & =& \int _{\vert x\vert >1}\nu (dx)\left (\int _{0}^{\log \vert x\vert } +\int _{ \log \vert x\vert }^{\infty }\right ) \frac{e^{-t}\vert x\vert ^{3}} {(1 + \vert e^{-t}x\vert ^{2})(1 + \vert x\vert ^{2})}dt =: I_{3} + I_{4}, {}\\ \end{array}$$

where

$$\displaystyle\begin{array}{rcl} I_{3}& =& \int _{\vert x\vert >1} \frac{\vert x\vert ^{2}} {1 + \vert x\vert ^{2}}\nu (dx)\int _{0}^{\log \vert x\vert } \frac{e^{-t}\vert x\vert } {1 + \vert e^{-t}x\vert ^{2}}dt, {}\\ & \leq & \int _{\vert x\vert >1} \frac{\vert x\vert ^{2}} {1 + \vert x\vert ^{2}}\nu (dx)\int _{0}^{\log \vert x\vert }\frac{1} {2}dt \leq \int _{\vert x\vert >1}\log \vert x\vert \nu (dx) < \infty {}\\ \end{array}$$

and

$$\displaystyle\begin{array}{rcl} I_{4}& \leq & \int _{\vert x\vert >1}\nu (dx)\int _{\log \vert x\vert }^{\infty }\frac{e^{-t}\vert x\vert ^{3}} {1 + \vert x\vert ^{2}}dt =\int _{\vert x\vert >1} \frac{\vert x\vert ^{3}} {1 + \vert x\vert ^{2}}e^{-\log \vert x\vert }\nu (dx) {}\\ & \leq & \int _{\vert x\vert >1} \frac{\vert x\vert ^{2}} {1 + \vert x\vert ^{2}}\nu (dx) < \infty. {}\\ \end{array}$$

The proof is completed.
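The log-moment condition isolated in step (ii) can be sanity-checked on a concrete one-dimensional Lévy measure. In the sketch below (ours; the test measure is an arbitrary choice) we take \(\nu (dx) = x^{-3}dx\) on \((1,\infty )\) and compute both sides of the identity \(\int _{\vert x\vert >1}\tilde{\nu }(dx) =\int _{\vert x\vert >1}\log \vert x\vert \nu (dx)\); both equal 1∕4.

```python
import numpy as np
from scipy.integrate import quad, dblquad

nu = lambda x: x**(-3)   # test Levy measure nu(dx) = x^{-3} dx on (1, infinity)

# mass of tilde-nu outside the unit ball: |e^{-t}x| > 1 exactly for t in (0, log x)
tail = dblquad(lambda t, x: nu(x), 1, np.inf, 0, lambda x: np.log(x))[0]

# the log-moment of nu, which step (ii) shows must be equal
logmom = quad(lambda x: np.log(x)*nu(x), 1, np.inf)[0]
print(tail, logmom)   # both equal 0.25
```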

4.2 Characterization of Classes as the Ranges of the Mappings

The six classes of infinitely divisible distributions in Sect. 3.3 can be characterized as the ranges of the mappings discussed in the previous section, as follows.

Proposition 4.1

We have the following.

  1. (1)

    \(U(\mathbb{R}^{d}) = \mathcal{U}(ID(\mathbb{R}^{d}))\) . (Jurek  [31] .)

  2. (2)

    \(B(\mathbb{R}^{d}) =\varUpsilon (ID(\mathbb{R}^{d}))\) . (Barndorff-Nielsen et al.  [12] .)

  3. (3)

    \(L(\mathbb{R}^{d}) =\varPhi (ID_{\log }(\mathbb{R}^{d}))\) . (Jurek and Vervaat  [38] , Sato and Yamazato  [79] , Wolfe  [96] .)

  4. (4)

    \(T(\mathbb{R}^{d}) =\varPsi (ID_{\log }(\mathbb{R}^{d}))\) . (Barndorff-Nielsen et al.  [12] .)

  5. (5)

    \(G(\mathbb{R}^{d}) = \mathcal{G}(ID(\mathbb{R}^{d}))\) . (Aoyama and Maejima  [3] for the symmetric case and Maejima and Sato  [48] for the general case.)

  6. (6)

    \(M(\mathbb{R}^{d}) = \mathcal{M}(ID_{\log }(\mathbb{R}^{d}))\) . (Aoyama et al.  [4] for the symmetric case and Maejima and Nakahara  [44] for the general case.)

For the readers’ convenience, we give here the proof of (3) \(L(\mathbb{R}^{d}) =\varPhi (ID_{\log }(\mathbb{R}^{d}))\) as an example. We show that \(L(\mathbb{R}^{d}) \supset \varPhi (ID_{\log }(\mathbb{R}^{d}))\) and that \(L(\mathbb{R}^{d}) \subset \varPhi (ID_{\log }(\mathbb{R}^{d}))\), separately.

  1. (a)

    (\(L(\mathbb{R}^{d}) \supset \varPhi (ID_{\log }(\mathbb{R}^{d}))\)): Suppose that \(\mu = \mathcal{L}\left (\int _{0}^{\infty }e^{-t}dX_{t}\right )\) for some Lévy process {X t } satisfying that \(\mathcal{L}(X_{1}) \in ID_{\log }(\mathbb{R}^{d})\). Let b > 1 and let \(\{\tilde{X} _{t}\}\) be an independent copy of {X t }. In what follows, the notation \(\mathop{=}\limits^{\mathrm{ d}}\) means the equality in law. We have

    $$\displaystyle{b^{-1}\int _{ 0}^{\infty }e^{-t}d\tilde{X}_{ t} =\int _{ 0}^{\infty }e^{-(t+\log b)}d\tilde{X}_{ t}\mathop{ =}\limits^{\mathrm{ d}}\int _{\log b}^{\infty }e^{-t}dX_{ t},}$$

and

$$\displaystyle\begin{array}{rcl} \int _{0}^{\infty }e^{-t}dX_{ t}& = & \int _{\log b}^{\infty }e^{-t}dX_{ t} +\int _{ 0}^{\log b}e^{-t}dX_{ t} {}\\ & \mathop{=}\limits^{\mathrm{ d}}& b^{-1}\int _{ 0}^{\infty }e^{-t}d\tilde{X}_{ t} +\int _{ 0}^{\log b}e^{-t}dX_{ t}, {}\\ \end{array}$$

which shows the relation (12) and \(\mu \in L(\mathbb{R}^{d})\).

  2. (b)

    (\(L(\mathbb{R}^{d}) \subset \varPhi (ID_{\log }(\mathbb{R}^{d}))\)): We need a lemma on selfsimilar additive processes.

Definition 4.2

Let H > 0. A stochastic process {X t , t ≥ 0} on \(\mathbb{R}^{d}\) is H-selfsimilar if for any c > 0, \(\{X_{ct}\}\mathop{ =}\limits^{\mathrm{ d}}\{c^{H}X_{t}\}\).

Lemma 4.3 (Sato [71])

\(\mu \in L(\mathbb{R}^{d})\) if and only if there exists a 1-selfsimilar additive process {Y t } such that \(\mathcal{L}(Y _{1}) =\mu\) .

The following proof is due to Jeanblanc et al. [29]. Let \(\mu \in L(\mathbb{R}^{d})\). By Lemma 4.3, there exists a 1-selfsimilar additive process {Y t } such that \(\mathcal{L}(Y _{1}) =\mu\). Define

$$\displaystyle{ X_{t} =\int _{ e^{-t}}^{1}s^{-1}dY _{ s}. }$$
(16)

Since {Y t } is additive, {X t } is also additive. Further, for h > 0,

$$\displaystyle\begin{array}{rcl} X_{t+h} - X_{t}& = & \int _{e^{-(t+h)}}^{e^{-t} }s^{-1}dY _{ s} {}\\ & = & \int _{e^{-h}}^{1}(e^{-t}u)^{-1}dY _{ e^{-t}u} {}\\ & \mathop{=}\limits^{\mathrm{ d}}& \int _{e^{-h}}^{1}(e^{-t}u)^{-1}e^{-t}dY _{ u}\quad (\text{by}\,\,\{Y _{cu}\}\mathop{ =}\limits^{\mathrm{ d}}\{cY _{u}\}) {}\\ & = & X_{h}. {}\\ \end{array}$$

Thus {X t } is a Lévy process. By (16),

$$\displaystyle{X_{t} = -\int _{0}^{t}(e^{-v})^{-1}d_{ v}Y _{e^{-v}}}$$

and thus

$$\displaystyle{\int _{0}^{\infty }e^{-v}dX_{ v} = -\int _{0}^{\infty }dY _{ e^{-v}} = Y _{1} - Y _{0} = Y _{1},}$$

implying that \(\mathcal{L}(X_{1}) \in \mathfrak{D}(\varPhi )\) and \(\mu = \mathcal{L}\left (\int _{0}^{\infty }e^{-v}dX_{v}\right )\) so that \(\mu \in \varPhi (ID_{\log }(\mathbb{R}^{d}))\).

5 Stochastic Integral Mappings and Characterizations of Classes (II)

Some other mappings in addition to the six mappings in Sect. 4.1 above will be explained in this section. Let

$$\displaystyle\begin{array}{rcl} ID_{\alpha }(\mathbb{R}^{d})& =& \left \{\mu \in ID(\mathbb{R}^{d}): \int _{ \mathbb{R}^{d}}\vert x\vert ^{\alpha }\mu (dx) < \infty \right \},\quad \text{for }\alpha > 0, {}\\ ID_{\alpha }^{0}(\mathbb{R}^{d})& =& \left \{\mu \in ID_{\alpha }(\mathbb{R}^{d}): \int _{ \mathbb{R}^{d}}x\mu (dx) = 0\right \},\quad \text{for }\alpha \geq 1, {}\\ ID_{1}^{{\ast}}(\mathbb{R}^{d})& =& \left \{\mu =\mu _{ (A,\nu,\gamma )} \in ID_{1}^{0}(\mathbb{R}^{d}): \lim _{ T\rightarrow \infty }\int _{1}^{T}t^{-1}dt\int _{ \vert x\vert >t}x\nu (dx)\text{ exists in }\mathbb{R}^{d}\right \}. {}\\ \end{array}$$

5.1 Φ α -Mapping

We define the Φ α -mapping as follows:

$$\displaystyle{ \varPhi _{\alpha }(\mu ) = \left \{\begin{array}{@{}l@{\quad }l@{}} \mathcal{L}\left (\int _{0}^{-1/\alpha }(1 +\alpha t)^{-1/\alpha }dX_{ t}^{(\mu )}\right ),\quad &\text{when }\alpha < 0, \\ [15pt]\mathcal{L}\left (\int _{0}^{\infty }e^{-t}dX_{ t}^{(\mu )}\right ), \quad &\text{when }\alpha = 0, \\ \mathcal{L}\left (\int _{0}^{\infty }(1 +\alpha t)^{-1/\alpha }dX_{ t}^{(\mu )}\right ), \quad &\text{when }0 <\alpha < 2. \end{array} \right. }$$

The domain of Φ α is given as

$$\displaystyle{\mathfrak{D}(\varPhi _{\alpha }) = \left \{\begin{array}{@{}l@{\quad }l@{}} ID(\mathbb{R}^{d}), \quad &\text{when }\alpha < 0, \\ ID_{\log }(\mathbb{R}^{d}), \quad &\text{when }\alpha = 0, \\ ID_{\alpha }(\mathbb{R}^{d}), \quad &\text{when }0 <\alpha < 1, \\ ID_{1}^{{\ast}}(\mathbb{R}^{d}),\quad &\text{when }\alpha = 1, \\ ID_{\alpha }^{0}(\mathbb{R}^{d}), \quad &\text{when }1 <\alpha < 2. \end{array} \right.}$$

(For \(\alpha \in (0,1) \cup (1,2)\), see Sato [75, Theorem 2.4], and for α = 1, Sato [77, Theorem 4.4].)
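The three branches of the definition of \(\varPhi _{\alpha }\) fit together: as α → 0 the kernel \((1 +\alpha t)^{-1/\alpha }\) converges to \(e^{-t}\), so Φ_0 = Φ appears as the limiting case. A quick numerical confirmation of this elementary limit (ours):

```python
import numpy as np

f = lambda alpha, t: (1 + alpha*t)**(-1/alpha)   # kernel of Phi_alpha, alpha != 0

t = np.linspace(0.0, 5.0, 6)
for alpha in (0.5, 0.1, 0.01, 0.001):
    print(alpha, np.max(np.abs(f(alpha, t) - np.exp(-t))))
# the maximal gap shrinks to 0: (1 + alpha*t)^{-1/alpha} -> e^{-t} as alpha -> 0
```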

Here we introduce a notion of “weak mean” for later use.

Definition 5.1 (The Weak Mean of \(\mu \in ID(\mathbb{R}^{d})\) [77, Definition 3.6])

Let \(\mu =\mu _{(A,\nu,\gamma )} \in ID(\mathbb{R}^{d})\). It is said that μ has the weak mean \(m_{\mu }\) if

$$\displaystyle{\int _{1<\vert x\vert \leq a}x\nu (dx)\,\,\text{is convergent in}\,\,\mathbb{R}^{d}\,\,\text{as}\,\,a \rightarrow \infty,}$$

and

$$\displaystyle{C_{\mu }(z) = -2^{-1}\langle z,Az\rangle +\lim _{ a\rightarrow \infty }\int _{\vert x\vert \leq a}(e^{i\langle z,x\rangle } - 1 - i\langle z,x\rangle )\nu (dx) + i\langle m_{\mu },z\rangle.}$$

The range of Φ α is as follows.

Theorem 5.2 (Sato [77, Theorem 4.18]. \(\mathfrak{R}(K_{1,\alpha })\) in the Notation There)

Let 0 < α < 2. Then \(\mu \in \mathfrak{R}(\varPhi _{\alpha })\) if and only if \(\mu \in ID(\mathbb{R}^{d})\) and either ν μ = 0 or ν μ ≠ 0, and in case ν μ ≠ 0, the radial component \(\nu _{\xi }\) of ν μ is expressed as, for some \(k_{\xi }(r)\) which is a nonnegative function measurable in \(\xi\) and nonincreasing on \((0,\infty )\) as a function of r,

  1. (1)

    \((\alpha < 1)\ \nu _{\xi }(dr) = r^{-\alpha -1}k_{\xi }(r)dr\) ,

  2. (2)

    \((\alpha = 1)\ \nu _{\xi }(dr) = r^{-2}k_{\xi }(r)dr\) , and the weak mean of μ is 0,

  3. (3)

    \((1 <\alpha < 2)\ \nu _{\xi }(dr) = r^{-\alpha -1}k_{\xi }(r)dr\) , and the mean of μ is 0.

We introduce the class \(L^{\langle \alpha \rangle }(\mathbb{R}^{d})\) (the class of α-selfdecomposable distributions). Let \(\alpha \in \mathbb{R}\). We say that \(\mu \in ID(\mathbb{R}^{d})\) is α-selfdecomposable, if for any b > 1, there exists \(\rho _{b} \in ID(\mathbb{R}^{d})\) satisfying

$$\displaystyle{ \hat{\mu }(z) =\hat{\mu } (b^{-1}z)^{b^{\alpha }}\hat{\rho }_{ b}(z),\quad z \in \mathbb{R}^{d}. }$$
(17)
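For instance (our own elementary check), a symmetric strictly α-stable distribution satisfies (17) with \(\rho _{b} =\delta _{0}\): for \(\hat{\mu }(z) = e^{-c\vert z\vert ^{\alpha }}\) one has \(\hat{\mu }(b^{-1}z)^{b^{\alpha }} =\hat{\mu } (z)\) exactly, for every b > 1.

```python
import numpy as np

alpha, c = 1.5, 1.0                                # arbitrary illustrative values
mu_hat = lambda z: np.exp(-c*np.abs(z)**alpha)     # symmetric alpha-stable

z = np.linspace(-10, 10, 2001)
for b in (1.5, 2.0, 7.0):
    gap = np.max(np.abs(mu_hat(z) - mu_hat(z/b)**(b**alpha)))
    print(b, gap)   # zero up to rounding: (17) holds with rho_b = delta_0
```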

Theorem 5.3 (Maejima and Ueda [52])

  1. (1)

    For \(\beta <\alpha,\quad L^{\langle \beta \rangle }(\mathbb{R}^{d}) \supset L^{\langle \alpha \rangle }(\mathbb{R}^{d})\) .

  2. (2)

    For \(\alpha > 2\), \(L^{\langle \alpha \rangle }(\mathbb{R}^{d}) =\{\delta _{\gamma }: \gamma \in \mathbb{R}^{d}\}\) .

  3. (3)

    \(L^{\langle 2\rangle }(\mathbb{R}^{d}) =\{ \text{all Gaussian distributions}\}\) .

  4. (4)

    \(L^{\langle \alpha \rangle }(\mathbb{R}^{d})\) is left-continuous in \(\alpha \in \mathbb{R}\) , namely,

    $$\displaystyle{\bigcap _{\beta <\alpha }L^{\langle \beta \rangle }(\mathbb{R}^{d}) = L^{\langle \alpha \rangle }(\mathbb{R}^{d})\quad \text{for all }\alpha \in \mathbb{R}.}$$
  5. (5)

    Let \(\alpha \in (-\infty,2)\) . Then, \(\mu \in L^{\langle \alpha \rangle }(\mathbb{R}^{d})\) if and only if \(\mu \in ID(\mathbb{R}^{d})\) and either ν μ = 0 or ν μ ≠ 0 and, in case ν μ ≠ 0, the radial component \(\nu _{\xi }\) of ν μ is expressed as

    $$\displaystyle{ \nu _{\xi }(dr) = r^{-\alpha -1}\ell_{ \xi }(r)dr, }$$
    (18)

where \(\ell_{\xi }(r)\) is a nonnegative function which is measurable in \(\xi\) , and nonincreasing on \((0,\infty )\) as a function of r.

Remark 5.4

  1. (1)

    We have \(L^{\langle -1\rangle }(\mathbb{R}^{d}) = U(\mathbb{R}^{d})\) and \(L^{\langle 0\rangle }(\mathbb{R}^{d}) = L(\mathbb{R}^{d})\). Thus by Theorem 5.3(1), if α < −1, then \(L^{\langle \alpha \rangle }(\mathbb{R}^{d}) \supset U(\mathbb{R}^{d}) \supset L(\mathbb{R}^{d})\).

  2. (2)

    Another class bigger than \(U(\mathbb{R}^{d})\) is

    $$\displaystyle{A(\mathbb{R}^{d}):= \left \{\varPhi _{\cos }(\mu ) = \mathcal{L}\left (\int _{ 0}^{1}\cos (2^{-1}\pi t)dX_{ t}^{(\mu )}\right )\,\,:\,\,\mu \in ID(\mathbb{R}^{d})\right \}.}$$

    (See Maejima et al. [58, Theorem 2.6].)

  3. (3)

    It is an open problem to find the relationship between \(L^{\langle \alpha \rangle }(\mathbb{R}^{d}),\alpha < -1,\) and \(A(\mathbb{R}^{d})\).

The relations between the mappings Φ α and the classes \(L^{\langle \alpha \rangle }(\mathbb{R}^{d})\) are as follows. (The case α = 0 is nothing but Proposition 4.1(3).)

Theorem 5.5 (Maejima et al. [57, Theorem 4.6])

Let α < 0. \(\tilde{\mu }\in L^{\langle \alpha \rangle }(\mathbb{R}^{d})\) if and only if \(\tilde{\mu }=\varPhi _{\alpha }(\mu )\) for some \(\mu \in ID(\mathbb{R}^{d})\) .

Theorem 5.6 (Maejima and Ueda [52, Theorem 5.1(ii) and (iv)])

Let \(\alpha \in (0,1) \cup (1,2)\) .

  1. (1)

    When 0 < α < 1, \(\tilde{\mu }\in L^{\langle \alpha \rangle }(\mathbb{R}^{d})\) if and only if

    $$\displaystyle{ \tilde{\mu }=\sigma _{\alpha } {\ast}\varPhi _{\alpha }(\mu ), }$$
    (19)

    where \(\mu \in ID_{\alpha }(\mathbb{R}^{d})\) and \(\sigma _{\alpha }\) is a strictly α-stable distribution or a trivial distribution; here ∗ means convolution.

  2. (2)

    When 1 < α < 2, \(\tilde{\mu }\in L^{\langle \alpha \rangle }(\mathbb{R}^{d})\) if and only if  (19) holds for some \(\mu \in ID_{\alpha }^{0}(\mathbb{R}^{d})\) and some α-stable distribution \(\sigma _{\alpha }\) .

For the case α = 1 we need a slightly different mapping, called the essential improper stochastic integral, introduced by Sato [74, 76] and defined as

$$\displaystyle\begin{array}{rcl} \varPhi _{f,\mathrm{es}}(\mu )&:=& \Bigg\{\mathcal{L}\left (\mathop{\mbox{ p-$\lim $}}_{t\rightarrow \infty }\left (\int _{0}^{t}f(s)dX_{ s}^{(\mu )} - q(t)\right )\right ): q\ \text{is an}\ \mathbb{R}^{d}\text{-valued nonrandom } {}\\ & & \mbox{ function such that}\ \int _{0}^{t}f(s)dX_{ s}^{(\mu )} - q(t)\ \mbox{ converges in probability} {}\\ & & \mbox{ as $t \rightarrow \infty $}\Bigg\}. {}\\ \end{array}$$

(The term “essentially improper stochastic integral” is changed to “essentially definable improper stochastic integral” in Sato [76].) When f(t) = (1 + t)−1, which is the integrand of Φ 1(μ), we write Φ f, es(μ) as Φ 1, es(μ).

Theorem 5.7

When α = 1, \(\tilde{\mu }\in L^{\langle 1\rangle }(\mathbb{R}^{d})\) if and only if \(\tilde{\mu }=\sigma _{1}{\ast}\tilde{\rho }\) , where \(\tilde{\rho }\in \varPhi _{1,\mathrm{es}}(\rho )\) for some \(\rho \in ID_{1}(\mathbb{R}^{d})\) and \(\sigma _{1}\) is a 1-stable distribution.

Remark 5.8

The classes \(L^{\langle \alpha \rangle }(\mathbb{R}^{d}),\alpha \in \mathbb{R}\), were already studied by many authors before Maejima and Ueda [52]. Alf and O’Connor [2] and O’Connor [62] studied the class of all infinitely divisible distributions on \(\mathbb{R}\) with unimodal Lévy measures with mode 0, and showed that the class is equal to \(L^{\langle -1\rangle }(\mathbb{R})\). As to this class, Alf and O’Connor [2] studied stochastic integral characterizations with respect to Lévy processes. O’Connor [62] studied the decomposability (17) for d = 1 and α = −1, and characterized this class by some limit theorem. O’Connor [61, 63] also studied the classes \(L^{\langle \alpha \rangle }(\mathbb{R}),\alpha \in (-1,2)\). He defined these classes by a condition on the radial components of Lévy measures, and characterized them by stochastic integrals with respect to Lévy processes, by the decomposability (17) for d = 1, and by limit theorems similar to that in the case \(L^{\langle -1\rangle }(\mathbb{R})\). Jurek [30, 31, 35] and Iksanov et al. [26] defined and studied so-called s-selfdecomposable distributions on a real separable Hilbert space H. The totality of s-selfdecomposable distributions, denoted by \(\mathcal{U}(H)\) in their papers, is equal to \(L^{\langle -1\rangle }(\mathbb{R}^{d})\) when \(H = \mathbb{R}^{d}\). Jurek [32–34] and Jurek and Schreiber [37] studied the classes \(\mathcal{U}_{\beta }(Q),\beta \in \mathbb{R}\), of distributions on a real separable Banach space E, where Q is a linear operator on E with certain properties. These classes are equal to \(L^{\langle -\beta \rangle }(\mathbb{R}^{d})\) if \(E = \mathbb{R}^{d}\) and Q is the identity operator. They defined the classes \(\mathcal{U}_{\beta }(Q)\) by some limit theorems. As to these classes, they studied a decomposability similar to (17) and stochastic integral characterizations, although some results are only for the case that Q is the identity operator.

Remark 5.9

Maejima et al. [57] studied the classes \(K_{\alpha }(\mathbb{R}^{d}),\alpha < 2\): \(\mu \in K_{\alpha }(\mathbb{R}^{d})\) if \(\mu \in ID(\mathbb{R}^{d})\) and either ν μ  = 0 or ν μ ≠ 0 and, in case ν μ ≠ 0, the radial component \(\nu _{\xi }\) of ν μ is expressed as

$$\displaystyle{ \nu _{\xi }(dr) = r^{-\alpha -1}\ell_{ \xi }(r)dr, }$$
(20)

where \(\ell_{\xi }(r)\) is a nonnegative function which is measurable in \(\xi\) and nonincreasing on \((0,\infty )\) as a function of r, and \(\ell_{\xi }(\infty ) = 0\). The relation between \(K_{\alpha }(\mathbb{R}^{d})\) and \(L^{\langle \alpha \rangle }(\mathbb{R}^{d})\) for α < 2 is

$$\displaystyle{K_{\alpha }(\mathbb{R}^{d}) = L^{\langle \alpha \rangle }(\mathbb{R}^{d}) \cap \mathcal{C}_{\alpha }(\mathbb{R}^{d}),}$$

where \(\mathcal{C}_{\alpha }(\mathbb{R}^{d})\) is the totality of \(\mu \in ID(\mathbb{R}^{d})\) whose Lévy measure ν μ satisfies

\(\lim _{r\rightarrow \infty }r^{\alpha }\int _{\vert x\vert >r}\nu _{\mu }(dx) = 0\). (Maejima and Ueda [52], Maejima et al. [57].)

Remark 5.10

Recall that the difference between \(U(\mathbb{R}^{d})\) and \(B(\mathbb{R}^{d})\) in terms of Lévy measure is that \(\ell_{\xi }(r)\) in (9) is nonincreasing and \(\ell_{\xi }(r)\) in (10) is completely monotone. Also, the difference between \(L(\mathbb{R}^{d})\) and \(T(\mathbb{R}^{d})\) in terms of Lévy measure is that \(k_{\xi }(r)\) in (11) is nonincreasing and \(k_{\xi }(r)\) in (13) is completely monotone. From this point of view, the nonincreasing function \(\ell_{\xi }(r)\) in (20) can be replaced by a completely monotone function \(\ell_{\xi }(r)\) with \(\ell_{\xi }(\infty ) = 0\). Actually, if we do so, we obtain (31) in Sect. 8.5 later, which leads to the tempered stable distributions of Rosiński [69].

5.2 Ψ α, β -Mapping

We define a more general family of mappings, which we call Ψ α, β -mappings. Let

$$\displaystyle{t = G_{\alpha,\beta }(s) =\int _{ s}^{\infty }u^{-\alpha -1}e^{-u^{\beta }}du,\quad s \geq 0,}$$

and let \(s = G_{\alpha,\beta }^{{\ast}}(t)\) be its inverse function. Define the Ψ α, β -mapping by

$$\displaystyle{\varPsi _{\alpha,\beta }(\mu ) = \mathcal{L}\left (\int _{0}^{G_{\alpha,\beta }(0)}G_{\alpha,\beta }^{{\ast}}(t)dX_{ t}^{(\mu )}\right ),}$$

with

$$\displaystyle{G_{\alpha,\beta }(0) = \left \{\begin{array}{@{}l@{\quad }l@{}} \beta ^{-1}\varGamma (-\alpha \beta ^{-1}),\quad &\text{when }\alpha < 0, \\ \infty, \quad &\text{when }\alpha \geq 0, \end{array} \right.}$$

where Γ(⋅ ) is the gamma function. These mappings were introduced first by Sato [75] for β = 1 and later by Maejima and Nakahara [44] for general β > 0. By Sato [75] and Maejima and Nakahara [44], the domains \(\mathfrak{D}(\varPsi _{\alpha,\beta })\) are as follows; note that they do not depend on the value of β > 0.

$$\displaystyle{\mathfrak{D}(\varPsi _{\alpha,\beta }) = \left \{\begin{array}{@{}l@{\quad }l@{}} ID(\mathbb{R}^{d}), \quad &\text{when }\alpha < 0, \\ ID_{\log }(\mathbb{R}^{d}), \quad &\text{when }\alpha = 0, \\ ID_{\alpha }(\mathbb{R}^{d}), \quad &\text{when }0 <\alpha < 1, \\ ID_{1}^{{\ast}}(\mathbb{R}^{d}),\quad &\text{when }\alpha = 1, \\ ID_{\alpha }^{0}(\mathbb{R}^{d}), \quad &\text{when }1 <\alpha < 2. \end{array} \right.}$$
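To make \(G_{\alpha,\beta }\) and its inverse concrete, here is a minimal numerical sketch (Python with NumPy/SciPy; the function names and the bisection bracket are our own illustrative choices, not from the papers cited above). For α = −1, β = 1 one has \(G_{-1,1}(s) = e^{-s}\), so the inverse is \(-\log t\), consistent with \(\varUpsilon =\varPsi _{-1,1}\) in Remark 5.11 below.

```python
import numpy as np
from scipy import integrate, optimize

def G(s, alpha, beta):
    """G_{alpha,beta}(s) = int_s^infty u^{-alpha-1} e^{-u^beta} du."""
    val, _ = integrate.quad(lambda u: u ** (-alpha - 1) * np.exp(-u ** beta), s, np.inf)
    return val

def G_star(t, alpha, beta, lo=1e-12, hi=1e3):
    """Inverse function s = G*_{alpha,beta}(t), by bisection (G is strictly decreasing)."""
    return optimize.brentq(lambda s: G(s, alpha, beta) - t, lo, hi)

# sanity check for alpha = -1, beta = 1: G(s) = e^{-s}, so G*(t) = -log t
print(G(0.5, -1.0, 1.0), np.exp(-0.5))         # ~0.6065 twice
print(G_star(0.25, -1.0, 1.0), -np.log(0.25))  # ~1.3863 twice
```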

The six mappings in Sect. 4.1 are special cases of the Φ α - and Ψ α, β -mappings as follows.

Remark 5.11

\(\mathcal{U} =\varPhi _{-1}\), \(\varUpsilon =\varPsi _{-1,1}\), Φ = Φ 0, Ψ = Ψ 0, 1, \(\mathcal{G} =\varPsi _{-1,2}\), \(\mathcal{M} =\varPsi _{0,2}\).

5.3 Φ (b)-Mapping

Let b > 1. Define Φ (b)-mapping by

$$\displaystyle{\varPhi _{(b)}(\mu ) = \mathcal{L}\left (\int _{0}^{\infty }b^{-[t]}dX_{ t}^{(\mu )}\right ),\quad \mathfrak{D}(\varPhi _{ (b)}) = ID_{\log }(\mathbb{R}^{d}),}$$

where [t] denotes the largest integer not greater than \(t \in \mathbb{R}\).

\(\mu \in ID(\mathbb{R}^{d})\) is called semi-selfdecomposable if there exist b > 1 and \(\rho \in ID(\mathbb{R}^{d})\) such that \(\hat{\mu }(z) =\hat{\mu } (b^{-1}z)\hat{\rho }(z)\). We call this b a span of μ, and we denote the class of all semi-selfdecomposable distributions with span b by \(L(b, \mathbb{R}^{d})\). From the definitions, \(L(b, \mathbb{R}^{d}) \supsetneq L(\mathbb{R}^{d})\) and \(L(\mathbb{R}^{d}) =\bigcap _{b>1}L(b, \mathbb{R}^{d})\). \(\mu \in L(b, \mathbb{R}^{d})\) is also realized as a limiting distribution of normalized partial sums of independent random variables under the condition of infinite smallness when the limit is taken through a geometric subsequence. A typical example is a semi-stable distribution.
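Since \(b^{-[t]}\) equals \(b^{-k}\) on [k,k + 1), the stochastic integral defining Φ (b)(μ) collapses to the series \(\sum _{k=0}^{\infty }b^{-k}(X_{k+1} - X_{k})\) with i.i.d. summands distributed as μ, which makes Φ (b)(μ) easy to sample. A minimal Monte Carlo sketch (Python/NumPy; our own illustration, with μ = Poisson(1), which has finite log-moment, chosen for concreteness):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_Phi_b(sample_mu, b, n_terms=60):
    """One sample of Phi_(b)(mu) via sum_k b^{-k} (X_{k+1} - X_k);
    truncating the series at n_terms leaves a geometrically small error."""
    xi = sample_mu(n_terms)                       # i.i.d. copies of X_1^{(mu)}
    return np.sum(b ** (-np.arange(n_terms)) * xi)

Y = np.array([sample_Phi_b(lambda n: rng.poisson(1.0, n), 2.0) for _ in range(10_000)])
# Writing Y = xi_0 + b^{-1} Y' with Y' an independent copy of Y exhibits
# semi-selfdecomposability with span b; moment check: E[Y] = sum_k 2^{-k} = 2.
print(Y.mean())
```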

Theorem 5.12

Fix any b > 1. Then, the range \(\mathfrak{R}(\varPhi _{(b)})\) is the class of all semi-selfdecomposable distributions with span b on \(\mathbb{R}^{d}\) , namely,

$$\displaystyle{\varPhi _{(b)}\left (ID_{\log }(\mathbb{R}^{d})\right ) = L(b, \mathbb{R}^{d}).}$$

(For the proof, see Maejima and Ueda [50].)

5.4 Stable Mapping

This section is from Maejima et al. [59]. Let 0 < α < 2. Define a mapping by

$$\displaystyle{ \varXi _{\alpha }(\mu ) = \mathcal{L}\left (\int _{0}^{\infty }t^{-1/\alpha }dX_{ t}^{(\mu )}\right ). }$$
(21)

Note that the kernel above has a singularity at t = 0 and is not square integrable around t = 0. This fact influences the determination of the domain of the mapping. The following characterization of \(\mathfrak{D}(\varXi _{\alpha })\) follows from Proposition 5.3 and Example 4.5 of Sato [76].

Theorem 5.13

  1. (1)

    If 0 < α < 1, then

    $$\displaystyle{\mathfrak{D}(\varXi _{\alpha }) = \left \{\mu =\mu _{(0,\nu,0)_{0}} \in ID(\mathbb{R}^{d})\,\,:\,\,\int _{ \mathbb{R}^{d}}\vert x\vert ^{\alpha }\nu (dx) < \infty \right \}.}$$
  2. (2)

    If α = 1, then

    $$\displaystyle\begin{array}{rcl} \mathfrak{D}(\varXi _{1})& \,=\,& \biggl \{\mu =\mu _{(0,\nu,0)_{0}}\,=\,\mu _{(0,\nu,0)_{1}} \in ID(\mathbb{R}^{d})\,\,:\,\,\int _{ \mathbb{R}^{d}}\vert x\vert \ \nu (dx) < \infty,\int _{\mathbb{R}^{d}}x\ \nu (dx) = 0, {}\\ & \lim \limits _{\varepsilon \downarrow 0}& \int _{\vert x\vert \leq 1}x\log (\vert x\vert \vee \varepsilon )\ \nu (dx)\ \text{and}\ \lim _{T\rightarrow \infty }\int _{\vert x\vert >1}x\log (\vert x\vert \wedge T)\ \nu (dx)\ \text{exist}\biggr \}. {}\\ \end{array}$$
  3. (3)

    If 1 < α < 2, then

    $$\displaystyle{\mathfrak{D}(\varXi _{\alpha }) = \left \{\mu =\mu _{(0,\nu,0)_{1}} \in ID(\mathbb{R}^{d})\,\,:\,\,\int _{ \mathbb{R}^{d}}\vert x\vert ^{\alpha }\nu (dx) < \infty \right \}.}$$

Remark 5.14

There is a simple sufficient condition for μ to belong to \(\mathfrak{D}(\varXi _{1})\) in (2). Namely, \(\mu =\mu _{(0,\nu,\gamma )} \in \mathfrak{D}(\varXi _{1})\) if \(\int _{\mathbb{R}^{d}}\vert x\vert \,\vert \log \vert x\vert \vert \ \nu (dx) < \infty \), \(\int _{\mathbb{R}^{d}}x\ \nu (dx) = 0\), and \(\gamma =\int _{\mathbb{R}^{d}} \frac{x} {1+\vert x\vert ^{2}} \nu (dx)\).

The next theorem gives a full characterization of \(\mathfrak{R}(\varXi _{\alpha })\). \(S_{\alpha }^{0}(\mathbb{R}^{d})\) denotes the class of strictly α-stable distributions on \(\mathbb{R}^{d}\). Note that in the case \(\alpha = 1\), \(\hat{\mu }\) can be written as follows:

$$\displaystyle{ \hat{\mu }(z) =\exp \left \{-\int _{S}\left (\vert \langle z,\xi \rangle \vert + i2\pi ^{-1}\langle z,\xi \rangle \log \vert \langle z,\xi \rangle \vert \right )\lambda _{ 1}(d\xi ) + i\langle z,\tau \rangle \right \}, }$$
(22)

where \(\lambda _{1}\) is a finite measure on S and \(\tau \in \mathbb{R}^{d}\), and where \(\int _{S}\xi \lambda _{1}(d\xi ) = 0.\) (See e.g. Sato [73, Theorem 14.10].)

Theorem 5.15 (Maejima et al. [59])

Let 0 < α < 2.

  1. (1)

    When α ≠ 1, we have

    $$\displaystyle{\varXi _{\alpha }(\mathfrak{D}(\varXi _{\alpha })) = S_{\alpha }^{0}(\mathbb{R}^{d}).}$$
  2. (2)

    When α = 1, we have

    $$\displaystyle{ \varXi _{1}(\mathfrak{D}(\varXi _{1})) = \left \{\mu \in S_{1}^{0}(\mathbb{R}^{d})\,\,:\,\,\tau \in \mathrm{ span\ supp}(\lambda _{ 1})\right \}, }$$

where, respectively, \(\lambda _{1}\) and τ are those in  (22) . Here \(\mathrm{supp}(\lambda _{1})\) denotes the support of \(\lambda _{1}\) . If \(\lambda _{1} = 0\) , then we put \(\mathrm{span\ supp}(\lambda _{1}) =\{ 0\}\) by convention.

6 Compositions of Stochastic Integral Mappings

The motivation for the paper by Barndorff-Nielsen et al. [12] was to see whether the Thorin class can be realized through the composition of the mappings Φ and \(\varUpsilon\), where Φ produces the class of selfdecomposable distributions and \(\varUpsilon\) produces the Goldie-Steutel-Bondesson class. We therefore believed that compositions of stochastic integral mappings would be important and useful in many respects, and this belief has since been verified by several observations. This is why we discuss compositions of stochastic integral mappings here.

Let Φ f and Φ g be two stochastic integral mappings. The composition of two mappings is defined as

$$\displaystyle{(\varPhi _{f} \circ \varPhi _{g})(\mu ) =\varPhi _{f}(\varPhi _{g}(\mu )),}$$

with

$$\displaystyle{\mathfrak{D}(\varPhi _{f} \circ \varPhi _{g}) =\{\mu \in \mathfrak{D}(\varPhi _{g})\,\,:\,\,\varPhi _{g}(\mu ) \in \mathfrak{D}(\varPhi _{f})\}.}$$

We have the following.

Theorem 6.1

We have

  1. (1)

    \(\varPsi =\varPhi \circ \varUpsilon =\varUpsilon \circ \varPhi\) ,

  2. (2)

    \(\varUpsilon \circ \mathcal{U} = \mathcal{U}\circ \varUpsilon\) ,

  3. (3)

    \(\mathcal{G}\circ \mathcal{U} = \mathcal{U}\circ \mathcal{G}\) ,

  4. (4)

    \(\varPhi \circ \mathcal{U} = \mathcal{U}\circ \varPhi\) .

Proof of (1) of Theorem 6.1 (Barndorff-Nielsen et al. [12, Theorem C(ii)])

Note that \(\mathfrak{D}(\varPsi ) = ID_{\log }(\mathbb{R}^{d})\). If \(\mu \in ID_{\log }(\mathbb{R}^{d})\), then

$$\displaystyle{C_{\varPhi (\mu )}(z) =\int _{ 0}^{\infty }C_{\mu }(e^{-t}z)dt.}$$

On the other hand, if \(\mu \in ID(\mathbb{R}^{d})\), then

$$\displaystyle{C_{\varUpsilon (\mu )}(z) =\int _{ 0}^{1}C_{\mu }(\log (t^{-1})z)dt =\int _{ 0}^{\infty }e^{-s}C_{\mu }(sz)ds.}$$

Also note that

$$\displaystyle{ \varUpsilon (\mu ) \in ID_{\log }(\mathbb{R}^{d})\,\,\text{if and only if}\,\,\mu \in ID_{\log }(\mathbb{R}^{d}), }$$
(23)

(see Barndorff-Nielsen et al. [12, Theorem C(i)]). Thus, if \(\mu \in ID_{\log }(\mathbb{R}^{d})\), then \(\varUpsilon (\mu ) \in ID_{\log }(\mathbb{R}^{d})\) by (23), and hence

$$\displaystyle{C_{(\varPhi \circ \varUpsilon )(\mu )}(z) =\int _{ 0}^{\infty }dt\int _{ 0}^{\infty }e^{-s}C_{\mu }(e^{-t}sz)ds}$$

and

$$\displaystyle{C_{(\varUpsilon \circ \varPhi )(\mu )}(z) =\int _{ 0}^{\infty }e^{-s}ds\int _{ 0}^{\infty }C_{\mu }(e^{-t}sz)dt.}$$

If we can show

$$\displaystyle{ \int _{0}^{\infty }e^{-s}ds\int _{ 0}^{\infty }\vert C_{\mu }(e^{-t}sz)\vert dt < \infty,\quad \text{for each}\,\,z \in \mathbb{R}^{d}, }$$
(24)

then we can apply Fubini’s theorem to get

$$\displaystyle{C_{(\varPhi \circ \varUpsilon )(\mu )}(z) = C_{(\varUpsilon \circ \varPhi )(\mu )}(z),}$$

meaning \(\varPhi \circ \varUpsilon =\varUpsilon \circ \varPhi\), and

$$\displaystyle\begin{array}{rcl} C_{(\varPhi \circ \varUpsilon )(\mu )}(z)& =& \int _{0}^{\infty }dt\int _{ 0}^{\infty }e^{-s}C_{\mu }(uz)e^{t-ue^{t} }du =\int _{ 0}^{\infty }C_{\mu }(uz)e^{-u}u^{-1}du {}\\ & =& -\int _{0}^{\infty }C_{\mu }(uz)dp(u) =\int _{ 0}^{\infty }C_{\mu }(p^{{\ast}}(t)z)dt = C_{\varPsi (\mu )}(z), {}\\ \end{array}$$

concluding \(\varPhi \circ \varUpsilon =\varPsi\). It remains to prove (24). We need the following lemma.

Lemma 6.2

Let \(\mu =\mu _{(A,\nu,\gamma )} \in ID(\mathbb{R}^{d})\). For each fixed \(z \in \mathbb{R}^{d}\),

$$\displaystyle{\vert C_{\mu }(az)\vert \leq c_{z}\left [a^{2} + \vert a\vert +\int _{ \mathbb{R}^{d}} \frac{\vert ax\vert ^{2}} {1 + \vert ax\vert ^{2}}\nu (dx) +\int _{\mathbb{R}^{d}} \frac{(\vert a\vert + \vert a\vert ^{3})\vert x\vert ^{3}} {(1 + \vert x\vert ^{2})(1 + \vert ax\vert ^{2})}\nu (dx)\right ],}$$

where c z > 0 is a finite constant depending only on z.

Proof of Lemma 6.2

Let \(g(z,x) = e^{i\langle z,x\rangle } - 1 - \frac{i\langle z,x\rangle } {1+\vert x\vert ^{2}}\). Since

$$\displaystyle{\vert C_{\mu }(z)\vert \leq \frac{1} {2}(\mathrm{tr}A)\vert z\vert ^{2} + \vert \gamma \vert \vert z\vert +\int _{ \mathbb{R}^{d}}\vert g(z,x)\vert \nu (dx),}$$

we have

$$\displaystyle{ \vert C_{\mu }(az)\vert \leq c_{z}(a^{2} + \vert a\vert ) +\int _{ \mathbb{R}^{d}}\vert g(z,ax)\vert \nu (dx) +\int _{\mathbb{R}^{d}}\vert g(az,x) - g(z,ax)\vert \nu (dx). }$$

The inequalities

$$\displaystyle{\vert g(z,x)\vert \leq c_{z} \frac{\vert x\vert ^{2}} {1 + \vert x\vert ^{2}}}$$

and

$$\displaystyle{\vert g(az,x) - g(z,ax)\vert \leq c_{z} \frac{(\vert a\vert + \vert a\vert ^{3})\vert x\vert ^{3}} {(1 + \vert x\vert ^{2})(1 + \vert ax\vert ^{2})}}$$

conclude the proof of the lemma.

We then have, by Lemma 6.2, that

$$\displaystyle\begin{array}{rcl} & & \int _{0}^{\infty }e^{-s}ds\int _{ 0}^{\infty }\vert C_{\mu }(e^{-t}sz)\vert dt {}\\ & & \quad \leq \int _{0}^{\infty }e^{-s}ds\int _{ 0}^{\infty }c_{ z}\left [e^{-2t}s^{2} + e^{-t}s +\int _{ \mathbb{R}^{d}} \frac{\vert e^{-t}sx\vert ^{2}} {1 + \vert e^{-t}sx\vert ^{2}}\nu (dx)\right. {}\\ & & \qquad \left.+\int _{\mathbb{R}^{d}} \frac{(e^{-t}s + e^{-3t}s^{3})\vert x\vert ^{3}} {(1 + \vert x\vert ^{2})(1 + \vert e^{-t}sx\vert ^{2})}\nu (dx)\right ]dt {}\\ & & \quad =: I_{1} + I_{2} + I_{3} + I_{4}. {}\\ \end{array}$$

\(I_{1} < \infty \) and \(I_{2} < \infty \) are trivial. As to I 3,

$$\displaystyle\begin{array}{rcl} I_{3}& =& c_{z}\int _{0}^{\infty }e^{-s}ds\int _{ 0}^{\infty }dt\int _{ \mathbb{R}^{d}} \frac{\vert e^{-t}sx\vert ^{2}} {1 + \vert e^{-t}sx\vert ^{2}}\nu (dx) {}\\ & =& c_{z}\int _{0}^{\infty }e^{-s}ds\int _{ 0}^{\infty }dt\left (\int _{ \vert x\vert \leq 1} +\int _{\vert x\vert >1}\right ) \frac{\vert e^{-t}sx\vert ^{2}} {1 + \vert e^{-t}sx\vert ^{2}}\nu (dx), {}\\ \end{array}$$

where

$$\displaystyle\begin{array}{rcl} & & \int _{0}^{\infty }e^{-s}ds\int _{ 0}^{\infty }dt\int _{ \vert x\vert \leq 1} \frac{\vert e^{-t}sx\vert ^{2}} {1 + \vert e^{-t}sx\vert ^{2}}\nu (dx) {}\\ & & \quad \leq \int _{0}^{\infty }e^{-s}s^{2}ds\int _{ 0}^{\infty }e^{-2t}dt\int _{ \vert x\vert \leq 1}\vert x\vert ^{2}\nu (dx) < \infty {}\\ \end{array}$$

and

$$\displaystyle\begin{array}{rcl} & & \int _{0}^{\infty }e^{-s}ds\int _{ \vert x\vert >1}\nu (dx)\int _{0}^{\infty } \frac{\vert e^{-t}sx\vert ^{2}} {1 + \vert e^{-t}sx\vert ^{2}}dt {}\\ & & \quad =\int _{ 0}^{\infty }e^{-s}ds\int _{ \vert x\vert >1}\nu (dx)\left (\int _{0}^{\log \vert x\vert } +\int _{ \log \vert x\vert }^{\infty }\right ) \frac{\vert e^{-t}sx\vert ^{2}} {1 + \vert e^{-t}sx\vert ^{2}}dt {}\\ & & \quad \leq \int _{0}^{\infty }e^{-s}ds\int _{ \vert x\vert >1}\left (\log \vert x\vert + s^{2}\vert x\vert ^{2}\frac{1} {2}e^{-2\log \vert x\vert }\right )\nu (dx) < \infty. {}\\ \end{array}$$

As to I 4, we omit the proof, since the basic ideas are the same as for I 3. Inequality (24) is thus proved.
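The identity \(C_{(\varPhi \circ \varUpsilon )(\mu )} = C_{\varPsi (\mu )}\) established above can also be checked numerically for a concrete μ. A small sketch (Python/SciPy; our own illustration with μ = Poisson(1), for which \(C_{\mu }(z) = e^{iz} - 1\); the truncation levels T and S are our own choices):

```python
import numpy as np
from scipy import integrate

def C_mu(z):
    """Cumulant (log-characteristic) function of mu = Poisson(1) on R."""
    return np.exp(1j * z) - 1.0

def C_phi_upsilon(z, T=50.0, S=200.0):
    # C_{(Phi o Upsilon)(mu)}(z) = int_0^inf dt int_0^inf e^{-s} C_mu(e^{-t} s z) ds,
    # truncated at t = T and s = S (the integrand decays like e^{-s} and e^{-t}).
    f = lambda s, t: np.exp(-s) * C_mu(np.exp(-t) * s * z)
    re, _ = integrate.dblquad(lambda s, t: f(s, t).real, 0.0, T, lambda t: 0.0, lambda t: S)
    im, _ = integrate.dblquad(lambda s, t: f(s, t).imag, 0.0, T, lambda t: 0.0, lambda t: S)
    return re + 1j * im

def C_psi(z):
    # Equivalent single-integral form int_0^inf C_mu(uz) e^{-u} u^{-1} du;
    # the integrand stays bounded near u = 0 since C_mu(uz) ~ iuz.
    g = lambda u: C_mu(u * z) * np.exp(-u) / u
    re, _ = integrate.quad(lambda u: g(u).real, 0.0, np.inf, limit=200)
    im, _ = integrate.quad(lambda u: g(u).imag, 0.0, np.inf, limit=200)
    return re + 1j * im

z = 1.7
print(C_phi_upsilon(z))   # the two values agree to quadrature accuracy,
print(C_psi(z))           # illustrating Phi o Upsilon = Psi for this mu
```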

Proof of (2) of Theorem 6.1

We have

$$\displaystyle\begin{array}{rcl} C_{(\varUpsilon \circ \mathcal{U})(\mu )}(z)& =& \int _{0}^{1}C_{ \mathcal{U}(\mu )}(\log (t^{-1})z)dt =\int _{ 0}^{1}dt\int _{ 0}^{1}C_{\mu }(\log (t^{-1})sz)ds {}\\ & & \qquad \text{(by Fubini's theorem)} {}\\ & =& \int _{0}^{1}ds\int _{ 0}^{1}C_{\mu }(\log (t^{-1})sz)dt =\int _{ 0}^{1}C_{\varUpsilon (\mu )}(sz)ds = C_{(\mathcal{U}\circ \varUpsilon )(\mu )}(z). {}\\ \end{array}$$

Proof of (3) of Theorem 6.1

We have

$$\displaystyle\begin{array}{rcl} C_{(\mathcal{G}\circ \mathcal{U})(\mu )}(z)& =& \int _{0}^{\sqrt{\pi }/2}C_{ \mathcal{U}(\mu )}(g^{{\ast}}(t)z)dt =\int _{ 0}^{\sqrt{\pi }/2}dt\int _{ 0}^{1}C_{\mu }(g^{{\ast}}(t)sz)ds {}\\ & & \qquad \text{(by Fubini's theorem)} {}\\ & =& \int _{0}^{1}ds\int _{ 0}^{\sqrt{\pi }/2}C_{\mu }(g^{{\ast}}(t)sz)dt =\int _{ 0}^{1}C_{ \mathcal{G}(\mu )}(sz)ds = C_{(\mathcal{U}\circ \mathcal{G})(\mu )}(z). {}\\ \end{array}$$

Proof of (4) of Theorem 6.1

We first show that \(\mathcal{U}(\mu ) \in ID_{\log }(\mathbb{R}^{d})\) if and only if \(\mu \in ID_{\log }(\mathbb{R}^{d})\). Let \(\mu \in ID(\mathbb{R}^{d})\) and \(\tilde{\mu }= \mathcal{U}(\mu )\). We have

$$\displaystyle\begin{array}{rcl} \int _{\vert x\vert >2}\log \vert x\vert \nu _{\tilde{\mu }}(dx)& =& \int _{0}^{1}sds\int _{ \vert x\vert >2/s}\log (s\vert x\vert )\nu _{\mu }(dx) {}\\ & =& \int _{\vert x\vert >2}\nu _{\mu }(dx)\int _{2/\vert x\vert }^{1}\left (s\log s + s\log \vert x\vert \right )ds {}\\ & =:& \int _{\vert x\vert >2}h(x)\nu _{\mu }(dx), {}\\ \end{array}$$

where \(h(x) \sim \log \vert x\vert \) as \(\vert x\vert \rightarrow \infty \). Thus, \(\int _{\vert x\vert >2}\log \vert x\vert \nu _{\tilde{\mu }}(dx) < \infty \) if and only if \(\int _{\vert x\vert >2}\log \vert x\vert \nu _{\mu }(dx) < \infty \). Hence, if \(\mu \in ID_{\log }(\mathbb{R}^{d})\), then \(\mathcal{U}(\mu ) \in ID_{\log }(\mathbb{R}^{d})\), and we have

$$\displaystyle\begin{array}{rcl} C_{(\varPhi \circ \mathcal{U})(\mu )}(z)& =& \int _{0}^{\infty }C_{ \mathcal{U}(\mu )}(e^{-t}z)dt =\int _{ 0}^{\infty }dt\int _{ 0}^{1}C_{\mu }(se^{-t}z)ds {}\\ & & \qquad \text{(by Fubini's theorem)} {}\\ & =& \int _{0}^{1}ds\int _{ 0}^{\infty }C_{\mu }(se^{-t}z)dt =\int _{ 0}^{1}C_{\varPhi (\mu )}(sz)ds = C_{(\mathcal{U}\circ \varPhi )(\mu )}(z) {}\\ \end{array}$$

For the applicability of Fubini’s theorem above, we have to check that

$$\displaystyle{\int _{0}^{1}ds\int _{ 0}^{\infty }\vert C_{\mu }(se^{-t}z)\vert dt < \infty.}$$

By Lemma 6.2, we have

$$\displaystyle\begin{array}{rcl} \int _{0}^{1}ds\int _{ 0}^{\infty }\vert C_{\mu }(se^{-t}z)\vert dt& \leq & \int _{ 0}^{1}ds\int _{ 0}^{\infty }c_{ z}\Bigg[e^{-2t}s^{2} + e^{-t}s +\int _{ \mathbb{R}^{d}} \frac{\vert e^{-t}sx\vert ^{2}} {1 + \vert e^{-t}sx\vert ^{2}}\nu (dx) {}\\ & & \quad +\int _{\mathbb{R}^{d}} \frac{(e^{-t}s + e^{-3t}s^{3})\vert x\vert ^{3}} {(1 + \vert x\vert ^{2})(1 + \vert e^{-t}sx\vert ^{2})}\nu (dx)\Bigg]dt {}\\ & =:& I_{1} + I_{2} + I_{3} + I_{4}. {}\\ \end{array}$$

\(I_{1} < \infty \) and \(I_{2} < \infty \) are trivial. We have

$$\displaystyle\begin{array}{rcl} I_{3} =\int _{ 0}^{1}ds\int _{ 0}^{\infty }dt\left (\int _{ \vert x\vert \leq 1} +\int _{\vert x\vert >1}\right ) \frac{\vert e^{-t}sx\vert ^{2}} {1 + \vert e^{-t}sx\vert ^{2}}\nu (dx),& & {}\\ \end{array}$$

where

$$\displaystyle\begin{array}{rcl} & & \int _{0}^{1}ds\int _{ 0}^{\infty }dt\int _{ \vert x\vert \leq 1} \frac{\vert e^{-t}sx\vert ^{2}} {1 + \vert e^{-t}sx\vert ^{2}}\nu (dx) {}\\ & & \quad \leq \int _{0}^{1}s^{2}ds\int _{ 0}^{\infty }e^{-2t}dt\int _{ \vert x\vert \leq 1}\vert x\vert ^{2}\nu (dx) < \infty, {}\\ & & \int _{0}^{1}ds\int _{ \vert x\vert >1}\nu (dx)\int _{0}^{\infty } \frac{\vert e^{-t}sx\vert ^{2}} {1 + \vert e^{-t}sx\vert ^{2}}dt {}\\ & & \quad =\int _{ 0}^{1}ds\int _{ \vert x\vert >1}\nu (dx)\left (\int _{0}^{\log \vert x\vert } +\int _{ \log \vert x\vert }^{\infty }\right ) \frac{\vert e^{-t}sx\vert ^{2}} {1 + \vert e^{-t}sx\vert ^{2}}dt {}\\ & & \quad \leq \int _{0}^{1}ds\int _{ \vert x\vert >1}\left (\log \vert x\vert + s^{2}\vert x\vert ^{2}\frac{1} {2}e^{-2\log \vert x\vert }\right )\nu (dx) < \infty. {}\\ \end{array}$$

I 4 can be handled similarly. This completes the proof of (4) of Theorem 6.1.

The following Proposition 6.3 is a special case of Theorem 3.1 of Sato [75] and Proposition 6.4 can be proved similarly, but we give a proof of Proposition 6.4 here.

Proposition 6.3

Let

$$\displaystyle{k(s) =\int _{ s}^{\infty }ue^{-u}du,\quad s \geq 0,}$$

and let \(k^{{\ast}}(t)\) be its inverse function. Define a mapping \(\mathcal{K}\) from \(\mathfrak{D}(\mathcal{K})\) into \(ID(\mathbb{R}^{d})\) by

$$\displaystyle{\mathcal{K}(\mu ) = \mathcal{L}\left (\int _{0}^{1}k^{{\ast}}(t)dX_{ t}^{(\mu )}\right ).}$$

Then \(\mathfrak{D}(\mathcal{K}) = ID(\mathbb{R}^{d})\) and

$$\displaystyle{ \varUpsilon = \mathcal{K}\circ \mathcal{U} = \mathcal{U}\circ \mathcal{K}. }$$

Proposition 6.4

Let

$$\displaystyle{a(s) = 2\int _{s}^{\infty }u^{2}e^{-u^{2} }du,\quad s \geq 0,}$$

and let \(a^{{\ast}}(t)\) be its inverse function. Define a mapping \(\mathcal{A}\) from \(\mathfrak{D}(\mathcal{A})\) into \(ID(\mathbb{R}^{d})\) by

$$\displaystyle{ \mathcal{A}(\mu ) = \mathcal{L}\left (\int _{0}^{\varGamma (3/2)}a^{{\ast}}(t)dX_{ t}^{(\mu )}\right ). }$$
(25)

Then \(\mathfrak{D}(\mathcal{A}) = ID(\mathbb{R}^{d})\) and

$$\displaystyle{\mathcal{G} = \mathcal{A}\circ \mathcal{U} = \mathcal{U}\circ \mathcal{A}.}$$

Proof

With respect to the domain of the \(\mathcal{A}\)-mapping, it is enough to show

$$\displaystyle{ \int _{0}^{\varGamma (3/2)}(a^{{\ast}}(t))^{2}dt < \infty, }$$
(26)

but (26) follows from

$$\displaystyle{\int _{0}^{\varGamma (3/2)}(a^{{\ast}}(t))^{2}dt = 2\int _{ 0}^{\infty }u^{4}e^{-u^{2} }du < \infty.}$$
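This change of variables is easy to confirm numerically (a sketch of our own in Python/SciPy; both quadratures should return a value close to \(3\sqrt{\pi }/4 \approx 1.3293\); the nested quadratures are simple rather than fast):

```python
import numpy as np
from scipy import integrate, optimize, special

def a(s):
    """a(s) = 2 int_s^infty u^2 e^{-u^2} du, strictly decreasing, a(0) = Gamma(3/2)."""
    val, _ = integrate.quad(lambda u: 2.0 * u ** 2 * np.exp(-u ** 2), s, np.inf)
    return val

a0 = special.gamma(1.5)                                    # = a(0)
a_star = lambda t: optimize.brentq(lambda s: a(s) - t, 0.0, 10.0)   # inverse of a

lhs, _ = integrate.quad(lambda t: a_star(t) ** 2, 1e-10, a0, limit=200)
rhs, _ = integrate.quad(lambda u: 2.0 * u ** 4 * np.exp(-u ** 2), 0.0, np.inf)
print(lhs, rhs)   # both ~ 3*sqrt(pi)/4
```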

Next applying Proposition 2.6(1), we have

$$\displaystyle\begin{array}{rcl} C_{\mathcal{U}(\mu )}(z)& =& \int _{0}^{1}C_{\mu }(tz)dt, {}\\ C_{\mathcal{G}(\mu )}(z)& =& \int _{0}^{\sqrt{\pi }/2}C_{\mu }(g^{{\ast}}(t)z)dt {}\\ \end{array}$$

and

$$\displaystyle{C_{\mathcal{A}(\mu )}(z) =\int _{ 0}^{\varGamma (3/2)}C_{\mu }(a^{{\ast}}(t)z)dt.}$$

Thus,

$$\displaystyle{C_{(\mathcal{A}\circ \mathcal{U})(\mu )}(z) =\int _{ 0}^{\varGamma (3/2)}dt\int _{ 0}^{1}C_{\mu }(a^{{\ast}}(t)uz)du}$$

and

$$\displaystyle{C_{(\mathcal{U}\circ \mathcal{A})(\mu )}(z) =\int _{ 0}^{1}dt\int _{ 0}^{\varGamma (3/2)}C_{\mu }(ta^{{\ast}}(u)z)du.}$$

If we are allowed to exchange the order of the integrations by Fubini’s theorem, then we have

$$\displaystyle{C_{(\mathcal{A}\circ \mathcal{U})(\mu )}(z) = C_{(\mathcal{U}\circ \mathcal{A})(\mu )}(z),}$$

implying \(\mathcal{A}\circ \mathcal{U} = \mathcal{U}\circ \mathcal{A}\), and we have

$$\displaystyle\begin{array}{rcl} C_{(\mathcal{A}\circ \mathcal{U})(\mu )}(z)& =& \int _{0}^{1}du\int _{ 0}^{\varGamma (3/2)}C_{\mu }(a^{{\ast}}(t)uz)dt = 2\int _{ 0}^{1}du\int _{ 0}^{\infty }C_{\mu }(vuz)v^{2}e^{-v^{2} }dv {}\\ & =& 2\int _{0}^{1}u^{-3}du\int _{ 0}^{\infty }C_{\mu }(yz)y^{2}e^{-y^{2}/u^{2} }dy {}\\ & =& 2\int _{0}^{\infty }C_{\mu }(yz)y^{2}dy\int _{ 0}^{1}u^{-3}e^{-y^{2}/u^{2} }du = 2\int _{0}^{\infty }C_{\mu }(yz)dy\int _{ y}^{\infty }te^{-t^{2} }dt {}\\ & =& \int _{0}^{\infty }C_{\mu }(yz)e^{-y^{2} }dy =\int _{ 0}^{\sqrt{\pi }/2}C_{\mu }(g^{{\ast}}(t)z)dt = C_{ \mathcal{G}(\mu )}(z), {}\\ \end{array}$$

concluding \(\mathcal{A}\circ \mathcal{U} = \mathcal{G}\). In order to assure the exchange of the order of the integrations by Fubini’s theorem, it is enough to show that

$$\displaystyle{ \int _{0}^{1}du\int _{ 0}^{\infty }\left \vert C_{\mu }(uvz)\right \vert v^{2}e^{-v^{2} }dv < \infty. }$$
(27)

For \(\mu =\mu _{(A,\nu,\gamma )} \in ID(\mathbb{R}^{d})\), we have

$$\displaystyle{\vert C_{\mu }(z)\vert \leq 2^{-1}(\mathrm{tr}A)\vert z\vert ^{2} + \vert \gamma \vert \vert z\vert +\int _{ \mathbb{R}^{d}}\vert g(z,x)\vert \nu (dx),}$$

where

$$\displaystyle{g(z,x) = e^{i\langle z,x\rangle } - 1 - i\langle z,x\rangle (1 + \vert x\vert ^{2})^{-1}.}$$

Hence

$$\displaystyle\begin{array}{rcl} \vert C_{\mu }(uvz)\vert & \leq & 2^{-1}(\mathrm{tr}A)u^{2}v^{2}\vert z\vert ^{2} + \vert \gamma \vert \vert u\vert \vert v\vert \vert z\vert +\int _{ \mathbb{R}^{d}}\vert g(z,uvx)\vert \nu (dx) {}\\ & & +\int _{\mathbb{R}^{d}}\vert g(uvz,x) - g(z,uvx)\vert \nu (dx) =: I_{1} + I_{2} + I_{3} + I_{4}. {}\\ \end{array}$$

The finiteness of \(\int _{0}^{1}du\int _{0}^{\infty }(I_{1} + I_{2})v^{2}e^{-v^{2} }dv\) is trivial. Noting that | g(z, x) | ≤ c z  | x | 2(1 + | x | 2)−1 with a positive constant c z depending on z, we have

$$\displaystyle\begin{array}{rcl} & & \int _{0}^{1}du\int _{ 0}^{\infty }I_{ 3}v^{2}e^{-v^{2} }dv {}\\ & & \quad \leq c_{z}\int _{\mathbb{R}^{d}}\nu (dx)\int _{0}^{1}du\int _{ 0}^{\infty } \frac{(uv\vert x\vert )^{2}} {1 + (uv\vert x\vert )^{2}}v^{2}e^{-v^{2} }dv {}\\ & & \quad = c_{z}\left (\int _{\vert x\vert \leq 1}\nu (dx) +\int _{\vert x\vert >1}\nu (dx)\right )\int _{0}^{1}du\int _{ 0}^{\infty } \frac{(uv\vert x\vert )^{2}} {1 + (uv\vert x\vert )^{2}}v^{2}e^{-v^{2} }dv {}\\ & & \quad =: I_{31} + I_{32}, {}\\ \end{array}$$

where

$$\displaystyle\begin{array}{rcl} I_{31}& \leq & c_{z}\int _{\vert x\vert \leq 1}\vert x\vert ^{2}\nu (dx)\int _{ 0}^{1}u^{2}du\int _{ 0}^{\infty }v^{4}e^{-v^{2} }dv < \infty, {}\\ I_{32}& \leq & c_{z}\int _{\vert x\vert >1}\nu (dx)\int _{0}^{1}du\int _{ 0}^{\infty }v^{2}e^{-v^{2} }dv < \infty. {}\\ \end{array}$$

As to I 4, note that for \(a \in \mathbb{R}\),

$$\displaystyle\begin{array}{rcl} \vert g(az,x) - g(z,ax)\vert & =& \frac{\vert \langle az,x\rangle \vert \vert x\vert ^{2}\vert 1 - a^{2}\vert } {(1 + \vert x\vert ^{2})(1 + \vert ax\vert ^{2})} {}\\ & \leq & \frac{\vert z\vert \vert x\vert ^{3}(\vert a\vert + \vert a\vert ^{3})} {(1 + \vert x\vert ^{2})(1 + \vert ax\vert ^{2})} \leq \frac{\vert z\vert \vert x\vert ^{2}(1 + \vert a\vert ^{2})} {2(1 + \vert x\vert ^{2})}, {}\\ \end{array}$$

since \(\vert b\vert (1 + b^{2})^{-1} \leq 2^{-1}\) for all \(b \in \mathbb{R}\). Then

$$\displaystyle\begin{array}{rcl} \int _{0}^{1}du\int _{ 0}^{\infty }I_{ 4}v^{2}e^{-v^{2} }dv \leq 2^{-1}\vert z\vert \int _{ \mathbb{R}^{d}} \frac{\vert x\vert ^{2}} {1 + \vert x\vert ^{2}}\nu (dx)\int _{0}^{1}du\int _{ 0}^{\infty }(1 + u^{2}v^{2})ve^{-v}dv < \infty.& & {}\\ \end{array}$$

This completes the proof of (27).

We can give a more general result than Propositions 6.3 and 6.4.

Theorem 6.5 (Maejima and Ueda [53])

Let \(\mathcal{X}_{\beta }(\mu ) = \mathcal{L}\left (X_{\beta }^{(\mu )}\right ),\beta > 0\) . Then for \(\alpha \in (-\infty,1) \cup (1,2)\) and β > 0,

$$\displaystyle{\varPsi _{\alpha,\beta } = \mathcal{X}_{\beta }\circ \varPhi _{\alpha }\circ \varPsi _{\alpha -\beta,\beta } = \mathcal{X}_{\beta }\circ \varPsi _{\alpha -\beta,\beta } \circ \varPhi _{\alpha }.}$$

(Remark that \(\mathcal{X}_{\beta }(\mu ) =\mu ^{\beta {\ast}}.)\)

Proposition 6.3 is the case of Theorem 6.5 with α = −1 and β = 1, since \(\mathcal{K} = \mathcal{X}_{1} \circ \varPsi _{-2,1} =\varPsi _{-2,1}\), and Proposition 6.4 is the case of Theorem 6.5 with α = −1 and β = 2, since \(\mathcal{A} = \mathcal{X}_{2} \circ \varPsi _{-3,2}\).

7 Nested Subclasses of Classes of Infinitely Divisible Distributions

As mentioned in Sect. 1 already, once we have mappings in hand, it is natural to consider the iteration of mappings. In our case, this procedure gives us nested subclasses of the original class; such subclasses were already studied, before the mapping approach, in the case of selfdecomposable distributions by Urbanik [90] and Sato [70].

7.1 Iteration of Mappings

Let Φ f be a stochastic integral mapping. The iteration \(\varPhi _{f}^{m}\) is defined by \(\varPhi _{f}^{1} =\varPhi _{f}\) and

$$\displaystyle{\varPhi _{f}^{m+1}(\mu ) =\varPhi _{ f}(\varPhi _{f}^{m}(\mu ))}$$

with

$$\displaystyle{\mathfrak{D}(\varPhi _{f}^{m+1}) =\{\mu \in \mathfrak{D}(\varPhi _{ f}^{m})\,\,:\,\,\varPhi _{ f}^{m}(\mu ) \in \mathfrak{D}(\varPhi _{ f})\}.}$$

We have

$$\displaystyle{\varPhi _{f}^{m+1}(\mathfrak{D}(\varPhi _{ f}^{m+1})) =\varPhi _{ f}^{m}(\varPhi _{ f}(\mathfrak{D}(\varPhi _{f}^{m+1}))),}$$

implying

$$\displaystyle{\varPhi _{f}(\mathfrak{D}(\varPhi _{f}^{m+1})) \subset \mathfrak{D}(\varPhi _{ f}^{m}),}$$

and

$$\displaystyle{\varPhi _{f}^{m+1}(\mathfrak{D}(\varPhi _{ f}^{m+1})) \subset \varPhi _{ f}^{m}(\mathfrak{D}(\varPhi _{ f}^{m})).}$$

Therefore, if we write

$$\displaystyle{K_{m}^{f}(\mathbb{R}^{d}):=\varPhi _{ f}^{m}(\mathfrak{D}(\varPhi _{ f}^{m})),}$$

\(K_{m}^{f}(\mathbb{R}^{d}),m = 2,3,\ldots,\) are nested subclasses of \(K_{1}^{f}(\mathbb{R}^{d}) =\varPhi _{f}(\mathfrak{D}(\varPhi _{f}))\).
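For a proper mapping such as \(\mathcal{U}\), the iteration can be carried out numerically on the level of cumulant functions, using \(C_{\mathcal{U}(\mu )}(z) =\int _{0}^{1}C_{\mu }(tz)dt\) (cf. Proposition 2.6(1)). A minimal sketch (Python/SciPy; our own illustration with μ = Poisson(1)):

```python
import numpy as np
from scipy import integrate

def U_map(C):
    """Given the cumulant function C of mu, return that of U(mu),
    using C_{U(mu)}(z) = int_0^1 C_mu(t z) dt."""
    def C_new(z):
        re, _ = integrate.quad(lambda t: C(t * z).real, 0.0, 1.0)
        im, _ = integrate.quad(lambda t: C(t * z).imag, 0.0, 1.0)
        return re + 1j * im
    return C_new

C = lambda z: np.exp(1j * z) - 1.0   # mu = Poisson(1)
for _ in range(3):
    C = U_map(C)                     # after the loop: cumulant function of U^3(mu),
print(C(1.0))                        # a member of the nested subclass U_2
```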

With respect to the domain of mappings, if Φ f is a proper stochastic integral mapping, then \(\mathfrak{D}(\varPhi _{f}^{m}) = ID(\mathbb{R}^{d})\) as mentioned before. For Φ f  = Φ or Ψ (which are improper stochastic integral mappings), we have the following.

Lemma 7.1

We have

  1. (1)

    \(\mathfrak{D}(\varPhi ^{m}) = ID_{\log ^{m}}(\mathbb{R}^{d})\) ,

  2. (2)

    \(\mathfrak{D}(\varPsi ^{m}) = ID_{\log ^{m}}(\mathbb{R}^{d})\) .

Proof

 

  1. (1)

    See e.g. Rocha-Arteaga and Sato [67, Theorem 49].

  2. (2)

    We first show that

    $$\displaystyle{\varUpsilon (\mu ) \in ID_{\log ^{m}}(\mathbb{R}^{d})\,\,\text{if and only if}\,\,\mu \in ID_{\log ^{m }}(\mathbb{R}^{d}).}$$

    Let \(\mu \in ID(\mathbb{R}^{d})\) and \(\tilde{\mu }=\varUpsilon (\mu )\). We have

    $$\displaystyle\begin{array}{rcl} \int _{\vert x\vert >2}\log ^{m}\vert x\vert \nu _{\tilde{\mu }}(dx)& =& \int _{ 0}^{\infty }e^{-s}ds\int _{ \vert x\vert >2/s}\log ^{m}(s\vert x\vert )\nu _{\mu }(dx) {}\\ & =& \int _{\mathbb{R}^{d}}\nu _{\mu }(dx)\int _{2/\vert x\vert }^{\infty }e^{-s}(\log s +\log \vert x\vert )^{m}ds {}\\ & =:& \int _{\mathbb{R}^{d}}h(x)\nu _{\mu }(dx). {}\\ \end{array}$$

Here h(x) = o( | x | 2) as \(\vert x\vert \downarrow 0\) and \(h(x) \sim \log ^{m}\vert x\vert \) as \(\vert x\vert \rightarrow \infty \). Thus, \(\int _{\vert x\vert >2}\log ^{m}\vert x\vert \nu _{\tilde{\mu }}(dx) < \infty \) if and only if \(\int _{\vert x\vert >2}\log ^{m}\vert x\vert \nu _{\mu }(dx) < \infty \). By Theorem 6.1(1), we know that \(\varPsi ^{m} =\varPhi ^{m} \circ \varUpsilon ^{m}\). Since \(\mathfrak{D}(\varUpsilon ^{m}) = ID(\mathbb{R}^{d})\) and \(\mathfrak{D}(\varPhi ^{m}) = ID_{\log ^{m}}(\mathbb{R}^{d})\), we have \(\mathfrak{D}(\varPsi ^{m}) = ID_{\log ^{m}}(\mathbb{R}^{d})\).

7.2 Definitions and Some Properties of Nested Subclasses

Put

$$\displaystyle\begin{array}{rcl} U_{0}(\mathbb{R}^{d})& =& U(\mathbb{R}^{d}),\quad B_{ 0}(\mathbb{R}^{d}) = B(\mathbb{R}^{d}),\quad L_{ 0}(\mathbb{R}^{d}) = L(\mathbb{R}^{d}), {}\\ T_{0}(\mathbb{R}^{d})& =& T(\mathbb{R}^{d}),\quad G_{ 0}(\mathbb{R}^{d}) = G(\mathbb{R}^{d}),\quad M_{ 0}(\mathbb{R}^{d}) = M(\mathbb{R}^{d}). {}\\ \end{array}$$

Definition 7.2

For \(m = 0,1,2,\ldots\), define

\(U_{m}(\mathbb{R}^{d}) = \mathcal{U}^{m+1}(ID(\mathbb{R}^{d}))\),

\(B_{m}(\mathbb{R}^{d}) = \varUpsilon ^{m+1}(ID(\mathbb{R}^{d}))\),

\(L_{m}(\mathbb{R}^{d}) = \varPhi ^{m+1}(ID_{\log ^{m+1}}(\mathbb{R}^{d}))\),

\(T_{m}(\mathbb{R}^{d}) = \varPsi ^{m+1}(ID_{\log ^{m+1}}(\mathbb{R}^{d}))\),

\(G_{m}(\mathbb{R}^{d}) = \mathcal{G}^{m+1}(ID(\mathbb{R}^{d}))\),

\(M_{m}(\mathbb{R}^{d}) = \mathcal{M}^{m+1}(ID(\mathbb{R}^{d}))\)

and further \(U_{\infty }(\mathbb{R}^{d}) =\bigcap _{ m=0}^{\infty }U_{m}(\mathbb{R}^{d}),B_{\infty }(\mathbb{R}^{d}) =\bigcap _{ m=0}^{\infty }B_{m}(\mathbb{R}^{d}),L_{\infty }(\mathbb{R}^{d}) =\bigcap _{ m=0}^{\infty }L_{m}(\mathbb{R}^{d})\), \(T_{\infty }(\mathbb{R}^{d}) =\bigcap _{ m=0}^{\infty }T_{m}(\mathbb{R}^{d})\), \(G_{\infty }(\mathbb{R}^{d}) =\bigcap _{ m=0}^{\infty }G_{m}(\mathbb{R}^{d})\), \(M_{\infty }(\mathbb{R}^{d}) =\bigcap _{ m=0}^{\infty }M_{m}(\mathbb{R}^{d})\).

Distributions in \(L_{\infty }(\mathbb{R}^{d})\) are called completely selfdecomposable distributions.

We start with the following.

Definition 7.3

A class \(H \subset ID(\mathbb{R}^{d})\) is said to be completely closed in the strong sense (c.c.s.s.), if the following are satisfied.

  1. (1)

    It is closed under convolution.

  2. (2)

    It is closed under weak convergence.

  3. (3)

    If X is an \(\mathbb{R}^{d}\)-valued random variable with \(\mathcal{L}(X) \in H\), then \(\mathcal{L}(aX + b) \in H\) for any a > 0 and \(b \in \mathbb{R}^{d}\).

  4. (4)

μ ∈ H implies \(\mu ^{s{\ast}}\in H\) for any s > 0.

Proposition 7.4 (Maejima and Sato [48, Proposition 3.2])

Fix \(0 < a < \infty \) . Suppose that f is square integrable on (0,a) and \(\int _{0}^{a}f(t)dt\neq 0\) . Define a mapping Φ f by

$$\displaystyle{\varPhi _{f}(\mu ) = \mathcal{L}\left (\int _{0}^{a}f(t)dX_{ t}^{(\mu )}\right ).}$$

Then the following are true.

  1. (1)

    \(\mathfrak{D}(\varPhi _{f}) = ID(\mathbb{R}^{d})\) .

  2. (2)

    If H is c.c.s.s., then \(\varPhi _{f}(H) \subset H\) .

  3. (3)

    If H is c.c.s.s., then Φ f (H) is also c.c.s.s.

Remark 7.5

 

  1. (1)

    Note that Proposition 7.4 can be applied to \(\varUpsilon\)- and \(\mathcal{G}\)-mappings, because in those mappings the stochastic integral is proper, f is square integrable and \(\int _{0}^{a}f(t)dt\neq 0\). Since \(ID(\mathbb{R}^{d})\) is c.c.s.s., \(B(\mathbb{R}^{d})\) and \(G(\mathbb{R}^{d})\) are c.c.s.s.

  2. (2)

    Proposition 7.4(3) is not necessarily true when \(a = \infty \). Namely, there is a mapping Φ f defined by \(\varPhi _{f}(\mu ) = \mathcal{L}\left (\int _{0}^{\infty }f(t)dX_{t}^{(\mu )}\right )\) such that \(\varPhi _{f}(H \cap \mathfrak{D}(\varPhi _{f}))\) is not closed under weak convergence for some H which is c.c.s.s. Indeed, the mapping Ψ α with 0 < α < 1 in Theorem 4.2 of Sato [75] serves as an example.

  3. (3)

However, it is known that when Φ f  = Φ, Proposition 7.4(2) and (3) are true with Φ f (H) replaced by \(\varPhi (H \cap \mathfrak{D}(\varPhi ))\), even if \(a = \infty \). See Lemma 4.1 of Barndorff-Nielsen et al. [12]. In particular, \(L_{m}(\mathbb{R}^{d})\) is c.c.s.s. for \(m = 0,1,\ldots\).

  4. (4)

    We also have that \(T_{\infty }(\mathbb{R}^{d})\) is c.c.s.s. (Maejima and Sato [48, Lemma 3.8].)

Theorem 7.6

We have the following.

  1. (1)

    \(B_{m}(\mathbb{R}^{d}) \subset U_{m}(\mathbb{R}^{d})\) ,

  2. (2)

    \(G_{m}(\mathbb{R}^{d}) \subset U_{m}(\mathbb{R}^{d})\) ,

  3. (3)

    \(L_{m}(\mathbb{R}^{d}) \subset U_{m}(\mathbb{R}^{d})\) ,

  4. (4)

    \(T_{m}(\mathbb{R}^{d}) \subset L_{m}(\mathbb{R}^{d})\) .

Proof

  1. (1)

    We know that \(B_{0}(\mathbb{R}^{d}) \subset U_{0}(\mathbb{R}^{d})\). Suppose that \(B_{m}(\mathbb{R}^{d}) \subset U_{m}(\mathbb{R}^{d})\) for some m ≥ 0, as the induction hypothesis. Then

$$\displaystyle\begin{array}{rcl} B_{m+1}(\mathbb{R}^{d})& =& \varUpsilon ^{m+2}(ID(\mathbb{R}^{d})) =\varUpsilon (\varUpsilon ^{m+1}(ID(\mathbb{R}^{d}))) =\varUpsilon (B_{ m}(\mathbb{R}^{d})) {}\\ & \subset &\varUpsilon (U_{m}(\mathbb{R}^{d})) =\varUpsilon (\mathcal{U}^{m+1}(ID(\mathbb{R}^{d}))) = \mathcal{U}^{m+1}(\varUpsilon (ID(\mathbb{R}^{d}))) {}\\ & & \mbox{ (since $\varUpsilon \circ \mathcal{U} = \mathcal{U}\circ \varUpsilon $ (Theorem 6.1(2)))} {}\\ & \subset & \mathcal{U}^{m+1}(\mathcal{U}(ID(\mathbb{R}^{d}))) = U_{ m+1}(\mathbb{R}^{d}). {}\\ \end{array}$$
  2. (2)

    The same proof as above works if we apply the relation \(\mathcal{G}\circ \mathcal{U} = \mathcal{U}\circ \mathcal{G}\) (Theorem 6.1(3)) instead of \(\varUpsilon \circ \mathcal{U} = \mathcal{U}\circ \varUpsilon\).

  3. (3)

    We know that \(L_{0}(\mathbb{R}^{d}) \subset U_{0}(\mathbb{R}^{d})\). Suppose that \(L_{m}(\mathbb{R}^{d}) \subset U_{m}(\mathbb{R}^{d})\) for some m ≥ 0, as the induction hypothesis. Then

$$\displaystyle\begin{array}{rcl} L_{m+1}(\mathbb{R}^{d})& =& \varPhi ^{m+2}(ID_{\log ^{ m+2}}(\mathbb{R}^{d})) =\varPhi (\varPhi ^{m+1}(ID_{\log ^{ m+2}}(\mathbb{R}^{d}))) {}\\ & \subset &\varPhi (\varPhi ^{m+1}(ID_{\log ^{ m+1}}(\mathbb{R}^{d}))) =\varPhi (L_{ m}(\mathbb{R}^{d}) \cap ID_{\log }(\mathbb{R}^{d})) {}\\ & \subset &\varPhi (U_{m}(\mathbb{R}^{d}) \cap ID_{\log }(\mathbb{R}^{d})) =\varPhi (\mathcal{U}^{m+1}(ID(\mathbb{R}^{d})) \cap ID_{\log }(\mathbb{R}^{d})) {}\\ & =& \varPhi (\mathcal{U}^{m+1}(ID_{\log }(\mathbb{R}^{d}))) = \mathcal{U}^{m+1}(\varPhi (ID_{\log }(\mathbb{R}^{d}))) {}\\ & & \mbox{ (by $\varPhi \circ \mathcal{U} = \mathcal{U}\circ \varPhi $ (Theorem 6.1 (4)))} {}\\ & =& \mathcal{U}^{m+1}(L_{ 0}(\mathbb{R}^{d})) \subset \mathcal{U}^{m+1}(U_{ 0}(\mathbb{R}^{d})) = U_{ m+1}(\mathbb{R}^{d}). {}\\ \end{array}$$
  4. (4)

    We show \(T_{m}(\mathbb{R}^{d}) \subset L_{m}(\mathbb{R}^{d})\). We can show that, for any m ≥ 0,

    $$\displaystyle\begin{array}{rcl} T_{m}(\mathbb{R}^{d})& =& (\varPhi \varUpsilon )^{m+1}(ID_{\log ^{ m+1}}(\mathbb{R}^{d})) = (\varUpsilon ^{m+1}\varPhi ^{m+1})(ID_{\log ^{ m+1}}(\mathbb{R}^{d})) {}\\ & =& \varUpsilon ^{m+1}(L_{ m}(\mathbb{R}^{d})). {}\\ \end{array}$$

    Then by Proposition 7.4(2) and Remark 7.5(3),

    $$\displaystyle{\varUpsilon ^{m+1}(L_{ m}(\mathbb{R}^{d})) \subset L_{ m}(\mathbb{R}^{d}).}$$

The proof is completed.

7.3 Limits of Nested Subclasses

The following is a main result on the limits of nested subclasses.

Theorem 7.7 (Maejima and Sato [48], Aoyama et al. [5])

Let \(\overline{S(\mathbb{R}^{d})}\) be the closure of \(S(\mathbb{R}^{d})\) , where the closure is taken under weak convergence and convolution. We have

$$\displaystyle\begin{array}{rcl} U_{\infty }(\mathbb{R}^{d}) = B_{ \infty }(\mathbb{R}^{d}) = L_{ \infty }(\mathbb{R}^{d}) = T_{ \infty }(\mathbb{R}^{d}) = G_{ \infty }(\mathbb{R}^{d}) = M_{ \infty }(\mathbb{R}^{d}) = \overline{S(\mathbb{R}^{d})}.& & {}\\ \end{array}$$

To prove this theorem, we start with the following two known results.

Theorem 7.8 (The Class of Completely Selfdecomposable Distributions. Urbanik [90] and Sato [70])

\(L_{\infty }(\mathbb{R}^{d}) = \overline{S(\mathbb{R}^{d})}.\)

Theorem 7.9 (Jurek [35], See Also Maejima and Sato [48])

\(U_{\infty }(\mathbb{R}^{d}) = L_{\infty }(\mathbb{R}^{d}).\)

We also have the following two propositions.

Proposition 7.10

$$\displaystyle{T_{\infty }(\mathbb{R}^{d}) \subset U_{ \infty }(\mathbb{R}^{d}),\quad B_{ \infty }(\mathbb{R}^{d}) \subset U_{ \infty }(\mathbb{R}^{d})\quad \text{and}\quad G_{ \infty }(\mathbb{R}^{d}) \subset U_{ \infty }(\mathbb{R}^{d}).}$$

Proof

Trivial from Theorem 7.6.

Proposition 7.11

We have

$$\displaystyle{B_{\infty }(\mathbb{R}^{d}) \supset \overline{S(\mathbb{R}^{d})},\quad G_{ \infty }(\mathbb{R}^{d}) \supset \overline{S(\mathbb{R}^{d})}\quad \text{and}\quad T_{ \infty }(\mathbb{R}^{d}) \supset \overline{S(\mathbb{R}^{d})}.}$$

Proof

It follows from Remark 7.5(1) that \(B_{\infty }(\mathbb{R}^{d})\) and \(G_{\infty }(\mathbb{R}^{d})\) are c.c.s.s., and from Remark 7.5 (4) that \(T_{\infty }(\mathbb{R}^{d})\) is also c.c.s.s. Thus, we have

$$\displaystyle{B_{\infty }(\mathbb{R}^{d}) = \overline{B_{ \infty }(\mathbb{R}^{d})},\quad G_{ \infty }(\mathbb{R}^{d}) = \overline{G_{ \infty }(\mathbb{R}^{d})}\quad \text{and}\quad T_{ \infty }(\mathbb{R}^{d}) = \overline{T_{ \infty }(\mathbb{R}^{d})}.}$$

We know that each class includes \(S(\mathbb{R}^{d})\). Thus,

$$\displaystyle{B_{\infty }(\mathbb{R}^{d}) \supset \overline{S(\mathbb{R}^{d})},\quad G_{ \infty }(\mathbb{R}^{d}) \supset \overline{S(\mathbb{R}^{d})}\quad \text{and}\quad T_{ \infty }(\mathbb{R}^{d}) \supset \overline{S(\mathbb{R}^{d})}.}$$

The proof is completed.

Proof of Theorem 7.7

The statement follows from Theorems 7.8 and 7.9 and Propositions 7.10 and 7.11.

7.4 Limits of the Iterations of Stochastic Integral Mappings

A natural question is whether \(L_{\infty }(\mathbb{R}^{d})\) is the only class which can appear as the limit of iterations of stochastic integral mappings. In this section, we give an answer to this question. We start with the following.

Theorem 7.12 (A Characterization of \(L_{\infty }(\mathbb{R}^{d})\) (Sato [70]))

\(\mu \in L_{\infty }(\mathbb{R}^{d})\) if and only if \(\mu \in ID(\mathbb{R}^{d})\) and

$$\displaystyle{\nu _{\mu }(B) =\int _{(0,2)}\varGamma ^{\mu }(d\alpha )\int _{ S}\lambda _{\alpha }(d\xi )\int _{0}^{\infty }1_{ B}(r\xi )r^{-\alpha -1}dr,\quad B \in \mathcal{B}(\mathbb{R}^{d}),}$$

where Γ μ is a measure on (0,2) satisfying

$$\displaystyle{\int _{(0,2)}\left (\frac{1} {\alpha } + \frac{1} {2-\alpha }\right )\varGamma ^{\mu }(d\alpha ) < \infty }$$

and \(\lambda _{\alpha }\) is a probability measure on S for each α and it is measurable in α. Here Γ μ is unique and so it can be considered a characteristic of μ.

Definition 7.13

For \(A \in \mathcal{B}((0,2))\), define \(L_{\infty }^{A}(\mathbb{R}^{d}):=\{\mu \in L_{\infty }(\mathbb{R}^{d}): \varGamma ^{\mu }\) \(\left ((0,2)\setminus A\right ) = 0\}.\)

Theorem 7.14 (Sato [78], Maejima and Ueda [54])

We have

$$\displaystyle\begin{array}{rcl} \bigcap _{m=1}^{\infty }\mathfrak{R}(\varPhi _{\alpha }^{m})& =& \bigcap _{ m=1}^{\infty }\mathfrak{R}(\varPsi _{\alpha,1}^{m}) {}\\ & =& \left \{\begin{array}{@{}l@{\quad }l@{}} L_{\infty }(\mathbb{R}^{d}), \quad &\text{for }\alpha \in (-\infty,0], \\ L_{\infty }^{(\alpha,2)}(\mathbb{R}^{d}), \quad &\text{for }\alpha \in (0,1), \\ \left \{\mu \in L_{\infty }^{(1,2)}(\mathbb{R}^{d}): \mbox{ the weak mean of $\mu $ is $0$}\right \},\quad &\text{for }\alpha = 1, \\ \left \{\mu \in L_{\infty }^{(\alpha,2)}(\mathbb{R}^{d}):\int _{\mathbb{R}^{d}}x\mu (dx) = 0\right \}, \quad &\text{for }\alpha \in (1,2). \end{array} \right.{}\\ \end{array}$$

7.5 Characterizations of Some Nested Subclasses

Here we treat three cases, \(L_{m}(\mathbb{R}^{d})\), \(B_{m}(\mathbb{R}^{d})\) and \(G_{m}(\mathbb{R}^{d})\).

Sato [70] characterized the classes \(L_{m}(\mathbb{R}^{d})\) in terms of \(\nu _{\xi }\) as follows. Recall the functions \(k_{\xi }\) in (11). We call the function \(h_{\xi }(u)\) defined by \(h_{\xi }(u) = k_{\xi }(e^{-u})\) the h-function of μ.

Let f be a real-valued function on \(\mathbb{R}\). For \(\varepsilon > 0,n = 1,2,\ldots,\) denote

$$\displaystyle{\varDelta _{\varepsilon }^{n}f(u) =\sum _{ j=0}^{n}(-1)^{n-j}{n\choose j}f(u + j\varepsilon ).}$$

Define \(\varDelta _{\varepsilon }^{0}f = f\). We say that \(f(u),u \in \mathbb{R},\) is monotone of order n if \(\varDelta _{\varepsilon }^{j}f \geq 0\) for \(\varepsilon > 0,j = 0,1,2,\ldots,n\).
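The higher-order differences \(\varDelta _{\varepsilon }^{n}f\) and the resulting monotonicity check are straightforward to code. The following sketch (Python; the grids and the tolerance are our own choices) verifies, for instance, that \(e^{u}\) is monotone of order 3 while \(e^{-u}\) is not even monotone of order 1:

```python
import numpy as np
from math import comb

def delta_n(f, u, eps, n):
    """Delta_eps^n f(u) = sum_{j=0}^n (-1)^{n-j} C(n,j) f(u + j*eps)."""
    return sum((-1) ** (n - j) * comb(n, j) * f(u + j * eps) for j in range(n + 1))

def monotone_of_order(f, n, us, epss):
    """Check Delta_eps^j f >= 0 for j = 0,...,n on grids of u and eps > 0."""
    return all(delta_n(f, u, e, j) >= -1e-12
               for j in range(n + 1) for u in us for e in epss)

us, epss = np.linspace(-2.0, 2.0, 9), [0.1, 0.5, 1.0]
print(monotone_of_order(np.exp, 3, us, epss))                  # True
print(monotone_of_order(lambda u: np.exp(-u), 1, us, epss))    # False
```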

Theorem 7.15 (Sato [70, Theorem 3.2])

Let m = 1,2,…. Then \(\mu \in L_{m}(\mathbb{R}^{d})\) if and only if \(\mu \in L(\mathbb{R}^{d})\) and the h-function of μ is monotone of order m + 1 for \(\lambda\) -a.e.  \(\xi\) , where \(\lambda\) is the measure appearing in  (2.3) .

Another characterization of \(L_{m}(\mathbb{R}^{d})\) in terms of the decomposability is the following.

Theorem 7.16 (See Sato [70, Theorem 2.1] and Rocha-Arteaga and Sato [67, Theorem 49])

For \(m = 1,2,\ldots,\infty,\mu \in L_{m}(\mathbb{R}^{d})\) if and only if \(\mu \in L_{0}(\mathbb{R}^{d})\) and for any b > 1, there exists some \(\rho _{b} \in L_{m-1}(\mathbb{R}^{d})\) such that  (12) is satisfied.

For the characterization of \(B_{m}(\mathbb{R}^{d})\), we introduce a sequence of functions \(\varepsilon _{m}(x),m = 0,1,2,\ldots.\) For x ≥ 0, let

$$\displaystyle\begin{array}{rcl} \varepsilon _{0}(x)& =& e^{-x}, {}\\ \varepsilon _{1}(x)& =& -\int _{0}^{\infty }e^{-x/u}d\varepsilon _{ 0}(u) > 0, {}\\ & & \quad \cdots {}\\ \varepsilon _{m}(x)& =& -\int _{0}^{\infty }e^{-x/u}d\varepsilon _{ m-1}(u) > 0. {}\\ \end{array}$$

We have

Theorem 7.17 (Maejima [43])

Let m = 1,2,…. Then \(\mu \in B_{m}(\mathbb{R}^{d})\) if and only if \(\mu \in B_{0}(\mathbb{R}^{d})\) and ν μ is either 0 or it is expressed as

$$\displaystyle{\nu _{\mu }(B) = -\int _{0}^{\infty }\nu _{ 0}(t^{-1}B)d\varepsilon _{ m}(t)}$$

for the Lévy measure ν 0 of some \(\mu _{0} \in ID(\mathbb{R}^{d})\) .

For the characterization of \(G_{m}(\mathbb{R}^{d})\), we restrict ourselves to the symmetric distributions, which is easier. For \(m \in \mathbb{N}\), let ϕ m (x) be the probability density function of the product of (m + 1) independent standard normal random variables.

Theorem 7.18 (Aoyama and Maejima [3])

Let \(\mu \in ID_{\mathrm{sym}}(\mathbb{R}^{d})\) . Then for each \(m \in \mathbb{N}\) , \(\mu \in G_{m}(\mathbb{R}^{d})\) if and only if \(\mu \in G_{0}(\mathbb{R}^{d})\) and ν μ is either 0 or it is expressed as

$$\displaystyle{ \nu _{\mu }(B) =\int _{ -\infty }^{\infty }\nu _{ 0}(u^{-1}B)\phi _{ m-1}(u)du, }$$

where ν 0 is the Lévy measure of some \(\mu _{0} \in G_{0}(\mathbb{R}^{d})\) .

Another characterization is the following.

Theorem 7.19 (Aoyama and Maejima [3])

Let \(m \in \mathbb{N}\) . A \(\mu \in ID_{\mathrm{sym}}(\mathbb{R}^{d})\) belongs to \(G_{m}(\mathbb{R}^{d})\) if and only if \(\mu \in G_{0}(\mathbb{R}^{d})\) and ν μ is either 0 or it is expressed as

$$\displaystyle{ \nu _{\mu }(B) =\int _{S}\lambda (d\xi )\int _{0}^{\infty }1_{ B}(r\xi )g_{m,\xi }(r^{2})dr,\quad B \in \mathcal{B}_{ 0}(\mathbb{R}^{d}), }$$

where \(\lambda\) is a symmetric measure on the unit sphere S in \(\mathbb{R}^{d}\) and \(g_{m,\xi }(s)\) is represented as

$$\displaystyle\begin{array}{rcl} g_{m,\xi }(s) =\int _{ -\infty }^{\infty }\phi _{ m-1}(\sqrt{s}\vert r\vert ^{-1})\vert r\vert ^{-1}g_{\xi }(r^{2})dr,& & {}\\ \end{array}$$

where \(g_{\xi }(r)\) on \((0,\infty )\) is a jointly measurable function such that \(g_{\xi } = g_{-\xi },\lambda -a.e.\) for any fixed \(\xi \in S\) , \(g_{\xi }(\cdot )\) is completely monotone on \((0,\infty )\) and satisfies

$$\displaystyle{ \int _{0}^{\infty }(1 \wedge r^{2})g_{\xi }(r^{2})dr = c \in (0,\infty ) }$$

with c independent of \(\xi\) .

7.6 Some Nested Subclasses Appearing in Finance

Carr et al. [19] discussed the problem of pricing options with Lévy processes and Sato processes (which are the selfsimilar additive processes) for asset returns. They showed the importance of the distributions in \(L_{1}(\mathbb{R}_{+})\) or \(L_{2}(\mathbb{R}_{+})\), and also in \(L_{\infty }(\mathbb{R}_{+})\). Actually, some tempered stable distributions belong to \(L_{1}(\mathbb{R}^{d})\) and \(L_{2}(\mathbb{R}^{d})\), as will be seen in Sect. 8.5 below, and Rosiński [69] mentioned that tempered stable processes were introduced in mathematical finance to model stochastic volatility (see e.g. the CGMY model in Carr et al. [18] discussed in Sect. 7.7 later), and that option pricing based on such processes was considered.

7.7 Nested Subclasses of \(L(\mathbb{R}^{d})\) with Continuous Parameter

We have discussed nested subclasses \(L_{m}(\mathbb{R}^{d}),m = 1,2,\ldots,\) of \(L(\mathbb{R}^{d})\). Nguyen Van Thu [91–93] extended \(L_{m}(\mathbb{R}^{d})\) to \(L_{p}(\mathbb{R}^{d})\) by replacing the integers m by positive real numbers p > 0. It turns out that his classes \(L_{p}(\mathbb{R}^{d})\) are special cases of \(L_{p,\alpha }(\mathbb{R}^{d})\), recently studied by Sato [77]. For p > 0 and \(\alpha \in \mathbb{R}\), let

$$\displaystyle{j_{p,\alpha }(s) = \frac{1} {\varGamma (p)}\int _{s}^{1}(-\log u)^{p-1}u^{-\alpha -1}du,\quad 0 < s \leq 1,}$$

and denote its inverse function by \(j_{p,\alpha }^{{\ast}}(t)\). Define

$$\displaystyle{\varLambda _{p,\alpha }:=\varPhi _{j_{p,\alpha }^{{\ast}}}\quad \text{and}\quad L_{p,\alpha }(\mathbb{R}^{d}):= \mathfrak{R}(\varLambda _{ p,\alpha }).}$$

Then

$$\displaystyle{L_{m}(\mathbb{R}^{d}) = L_{ m+1,1}(\mathbb{R}^{d}),\quad m = 1,2,\ldots,}$$

and the classes \(L_{p}(\mathbb{R}^{d})\) by Nguyen Van Thu [91–93] are

$$\displaystyle{L_{p}(\mathbb{R}^{d}) = L_{ p,1}(\mathbb{R}^{d}),\quad p > 0.}$$

For the details of \(L_{p,\alpha }(\mathbb{R}^{d})\), see Sato [77].

Also note that \(\varepsilon _{\alpha,m}^{{\ast}}(t)\) in Maejima et al. [57] is the same as \(j_{p,\alpha }^{{\ast}}(t)\) above with p = m + 1. Hence, \(L_{m,\alpha }(\mathbb{R}^{d}),m = 1,2,\ldots,\alpha < 2\), in Maejima et al. [57] is a special case of \(L_{p,\alpha }(\mathbb{R}^{d})\) in Sato [77].

8 Examples (I)

All examples in this section are one-dimensional distributions except in Sect. 8.5, and we show which classes such known distributions belong to.

8.1 Gamma, χ 2-, Student t- and F-Distributions

  1. (a)

Let \(\varGamma _{c,\lambda }\) be a gamma random variable with parameters c > 0 and \(\lambda > 0\). Namely, \(P(\varGamma _{c,\lambda } \in B) = \lambda ^{c}\varGamma (c)^{-1}\int _{B\cap (0,\infty )}x^{c-1}e^{-\lambda x}dx\). (When c = 1, it is exponential.) In its Lévy-Khintchine representation, the Gaussian part is 0 and the Lévy measure is \(\nu (dr) = ce^{-\lambda r}r^{-1}1_{(0,\infty )}(r)dr\), (see e.g. Steutel and van Harn [83, Chap. III, Example 4.8]). Then \(\mathcal{L}(\varGamma _{c,\lambda }) \in T(\mathbb{R}_{+})\), (from the form of the Lévy measure of \(\mathcal{L}(\varGamma _{c,\lambda })\)), but \(\mathcal{L}(\varGamma _{c,\lambda })\notin L_{1}(\mathbb{R}_{+})\), (Maejima et al. [56, Example 1(i)]).

  2. (b)

    Let \(n \in \mathbb{N}\) and let Z 1, , Z n be independent standard normal random variables. The distribution of

    $$\displaystyle{\chi ^{2}(n):= Z_{ 1}^{2} + \cdots + Z_{ n}^{2}}$$

    is called the χ 2-distribution with n degrees of freedom. It is known that

    $$\displaystyle{\mathcal{L}(\chi ^{2}(n)) = \mathcal{L}(\varGamma _{ n/2,1/2}),}$$

    and hence \(\mathcal{L}(\chi ^{2}(n)) \in T(\mathbb{R}_{+})\).

  3. (c)

    Let Z be the standard normal random variable and χ 2(n) a χ 2-random variable with n degrees of freedom and suppose that they are independent. Then the distribution of

    $$\displaystyle{ t(n):= \frac{Z} {\sqrt{\chi ^{2 } (n)/n}} }$$
    (28)

is called the Student t-distribution with n degrees of freedom. Its density is

    $$\displaystyle{\mu (dx) = \frac{1} {B(n/2,1/2)\sqrt{n}}\left (1 + \frac{x^{2}} {n} \right )^{-(n+1)/2}dx,}$$

    where B(⋅ , ⋅ ) is the Beta function. It is known that \(\mathcal{L}(t(n)) \in L(\mathbb{R})\), (see Steutel and van Harn [83, Chap. VI, Theorem 11.15]).

  4. (d)

    Let χ 1 2(n) and χ 2 2(m) be two independent χ 2-random variables with n and m degrees of freedom, respectively. Then the distribution of

    $$\displaystyle{ F(n,m):= \frac{\chi _{1}^{2}(n)/n} {\chi _{2}^{2}(m)/m} }$$
    (29)

    is called F-distribution, and its density is

    $$\displaystyle{\mu (dx) = \frac{1} {B(n/2,m/2)x}\left ( \frac{nx} {nx + m}\right )^{n/2}\left (1 - \frac{nx} {nx + m}\right )^{m/2}dx,\quad x > 0.}$$

It is known that \(\mathcal{L}(F) \in I(\mathbb{R}_{+})\), (Ismail and Kelker [27]), and moreover that \(\mathcal{L}(F) \in T(\mathbb{R}_{+})\), (Bondesson [16, Example 4.3.1]). A Monte Carlo check of the constructions (28) and (29), and of the identity in (b), is sketched below.
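Here is the simulation check announced above (Python/SciPy; the sample sizes, seeds and degrees of freedom are our own choices):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, m, N = 5, 7, 100_000

Z = rng.standard_normal(N)
chi2_n = rng.chisquare(n, N)
t_samples = Z / np.sqrt(chi2_n / n)                                  # definition (28)
F_samples = (rng.chisquare(n, N) / n) / (rng.chisquare(m, N) / m)    # definition (29)

# Kolmogorov-Smirnov tests against the closed-form laws; large p-values expected.
print(stats.kstest(t_samples, stats.t(df=n).cdf))
print(stats.kstest(F_samples, stats.f(dfn=n, dfd=m).cdf))
# identity of (b): L(chi^2(n)) = L(Gamma_{n/2,1/2}), i.e. shape n/2 and scale 2
print(stats.kstest(chi2_n, stats.gamma(a=n / 2, scale=2.0).cdf))
```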

8.2 Logarithm of Gamma Random Variable

It is known that \(\nu _{\xi }\) corresponding to \(\log \varGamma _{c,\lambda }\) is ν 1 = 0 and

$$\displaystyle{ \nu _{-1}(dr) = \frac{e^{-cr}} {r(1 - e^{-r})}dr,\quad r > 0, }$$
(30)

(see e.g. Linnik and Ostrovskii [41, Eq. (2.6.13)]). This does not depend on the parameter \(\lambda > 0\).

  1. (a)

    \(\mathcal{L}(\log \varGamma _{c,\lambda }) \in L(\mathbb{R})\), (Shanbhag and Sreehari [81]). Shanbhag and Sreehari proved the selfdecomposability by showing (12) without using (30). However, once we know (30), we can show it by (11) and (30).

  2. (b)

    \(\mathcal{L}(\log \varGamma _{c,\lambda }) \in L_{1}(\mathbb{R})\) if c ≥ 1∕2, (Akita and Maejima [1]). It is enough to apply Theorem 7.15.

  3. (c)

    \(\mathcal{L}(\log \varGamma _{c,\lambda }) \in L_{2}(\mathbb{R})\) if c ≥ 1, (Akita and Maejima [1]). It is enough to apply Theorem 7.15 again.

  4. (d)

    \(\mathcal{L}(\log \varGamma _{c,1}) \in T(\mathbb{R})\). (See Bondesson [16, p. 112].)

8.3 Symmetrized Gamma Distribution

The symmetrized gamma distribution with parameters c > 0 and \(\lambda > 0\) is written as sym-gamma \((c,\lambda )\). Its characteristic function is \(\varphi (z) = \left (\lambda ^{2}/(\lambda ^{2} + z^{2})\right )^{c}\) and in its Lévy-Khintchine representation, the Gaussian part is 0 and the Lévy measure is \(\nu (dr) = c\vert r\vert ^{-1}e^{-\lambda \vert r\vert }dr,\,\,(r\neq 0)\). (See Steutel and van Harn [83, Chap. V, Example 6.17].) (When c = 1, it is the Laplace distribution.)

We have

  1. (a)

    sym-gamma \((c,\lambda ) \in T(\mathbb{R})\), (from the form of the Lévy measure above).

Thus

  1. (b)

    sym-gamma \((c,\lambda ) \in G(\mathbb{R})\), (see Rosiński [68]).

8.4 More Examples Related to Gamma Random Variables

  1. (a)

    Product of independent gamma random variables. (Steutel and van Harn [83, Chap. VI, Theorem 5.20].) Let Γ 1, Γ 2, … Γ n be independent gamma random variables, and let \(q_{1},q_{2},\ldots,q_{n} \in \mathbb{R}\) with | q j  | ≥ 1. Then

    $$\displaystyle{\mathcal{L}(\varGamma _{1}^{q_{1} }\varGamma _{2}^{q_{2} }\cdots \varGamma _{n}^{q_{n} }) \in L(\mathbb{R}_{+}).}$$
  2. (b)

    When n = 1 above, we can say more. Namely,

    $$\displaystyle{\mathcal{L}(\varGamma _{1}^{q_{1} }) \in T(\mathbb{R}_{+}).}$$

    (Thorin [87].)

  3. (c)

    Power of gamma random variables. (Bosch and Simon [17].) Let Γ be a gamma random variable and p ∈ (−1, 0). Then \(\mathcal{L}(\varGamma ^{p}) \in L(\mathbb{R}_{+})\). The proof is as follows: Let

    $$\displaystyle{g(u) = \frac{u\varGamma (1 - p(u + 1))} {\varGamma (1 - pu)},}$$

    and let X = { X t } be the Lévy process such that

$$\displaystyle{E\left [e^{-uX_{t} }\right ] = e^{-tug(u)},\quad u,t \geq 0.}$$

    Then by an application of Proposition 2 of Bertoin and Yor [13] (see Bosch and Simon [17] for the details), we have

    $$\displaystyle{\varGamma ^{p}\mathop{ =}\limits^{\mathrm{ d}}\int _{ 0}^{\infty }e^{-X_{t} }dt(=: I).}$$

Let \(T_{y} =\inf \{ t > 0: X_{t} = y\}\) for every y > 0. The fact that \(X_{t} \rightarrow \infty \) a.s. as \(t \rightarrow \infty \) and the absence of positive jumps ensure that \(T_{y} < \infty \) a.s. We thus have

    $$\displaystyle{I =\int _{ 0}^{T_{y} }e^{-X_{t} }dt +\int _{ T_{y}}^{\infty }e^{-X_{t} }dt\mathop{ =}\limits^{\mathrm{ d}}\int _{0}^{T_{y} }e^{-X_{t} }dt + e^{-y}\int _{ 0}^{\infty }e^{-X'_{t} }dt,}$$

    where X′ is an independent copy of X and the second equality follows from the Markov property at T y . This shows that I satisfies (12), and hence \(\varGamma ^{p}(\mathop{=}\limits^{\mathrm{ d}}I)\) is selfdecomposable by (12). We remark here that \(\mathcal{L}(\varGamma ^{p}),p \in (0,1)\) is not infinitely divisible. (See Bosch and Simon [17, p. 627].)

  1. (d)

Exponential function of a gamma random variable. (Bondesson [16, p. 94].) Let X be a denumerable convolution of independent gamma random variables \(\varGamma _{c_{j},\lambda _{j}}\) with \(c_{j} \geq 1\). Then

    $$\displaystyle{\mathcal{L}(e^{X}) \in T(\mathbb{R}_{ +}).}$$
  2. (e)

    Let Γ be a gamma random variable and let \(a,b \in \mathbb{R}\). Then

    $$\displaystyle{\mathcal{L}(a\varGamma + b\varGamma ^{2}) \in T(\mathbb{R}).}$$

    (Privault and Yang [65].)

8.5 Tempered Stable Distribution

The tempered stable distributions were defined by Rosiński [69]. Let 0 < α < 2. T α is called a tempered α-stable random variable on \(\mathbb{R}^{d}\), if \(\mathcal{L}(T_{\alpha }) =\mu _{(A,\nu,\gamma )}\) is such that A = 0 and ν μ has polar decomposition

$$\displaystyle{ \nu _{\mu }(B) =\int _{S}\lambda (d\xi )\int _{0}^{\infty }1_{ B}(r\xi )r^{-\alpha -1}q_{\xi }(r)dr, }$$
(31)

where \(q_{\xi }(r)\) is completely monotone in r, measurable in \(\xi\), and \(\lambda (S) < \infty,q_{\xi }(\infty ) = 0\). Because of the assumption that \(q_{\xi }(\infty ) = 0\), \(q_{\xi }(r)\) cannot be constant, and thus an α-stable distribution is not tempered α-stable, but it is tempered β-stable for any β < α.

We have the following. It is easy to see by checking (13) for (a) and Theorem 7.15 for (b)–(d). (See Barndorff-Nielsen et al. [12].)

  1. (a)

    If 0 < α < 2, then \(\mathcal{L}(T_{\alpha }) \in T(\mathbb{R}^{d})\).

  2. (b)

    If 1∕4 ≤ α < 2, then \(\mathcal{L}(T_{\alpha }) \in L_{1}(\mathbb{R}^{d})\).

  3. (c)

    If 0 < α < 1∕4, and \(q_{\xi }(r) = c(\xi )e^{-b(\xi )r}\) for all \(\xi\) in a set of positive \(\lambda\)-measure, where \(c(\xi )\) and \(b(\xi )\) are positive measurable functions of \(\xi\), then \(\mathcal{L}(T_{\alpha })\notin L_{1}(\mathbb{R}^{d})\).

  4. (d)

    If 2∕3 ≤ α < 2, then \(\mathcal{L}(T_{\alpha }) \in L_{2}(\mathbb{R}^{d})\).

8.6 Limits of Generalized Ornstein-Uhlenbeck Processes (Exponential Integrals of Lévy Processes)

  1. (a)

    Let {(X t , Y t ), t ≥ 0} be a two-dimensional Lévy process. Suppose that {X t } does not have positive jumps, \(0 < E[X_{1}] < \infty \) and \(\mathcal{L}(Y _{1}) \in ID_{\log }(\mathbb{R})\). Then

    $$\displaystyle{\mathcal{L}\left (\int _{0}^{\infty }e^{-X_{t-} }dY _{t}\right ) \in L(\mathbb{R}).}$$

    (Bertoin et al. [15].)

  2. (b)

    Let {N t } be a Poisson process, and let {Y t } be a strictly stable Lévy process or a Brownian motion with drift. Then

    $$\displaystyle{\mathcal{L}\left (\int _{0}^{\infty }e^{-N_{t-} }dY _{t}\right ) \in L(\mathbb{R}).}$$

    (Kondo et al. [39].)

  3. (c)

    Let {N t } t ≥ 0 be a Poisson process such that E[N 1] < 1. Then

    $$\displaystyle{ \mathcal{L}\left (\int _{0}^{\infty }e^{-(t-N_{t})}dt\right ) \in L(\mathbb{R}) \cap L_{ 1}(\mathbb{R})^{c}. }$$
    (32)

(Lindner and Maejima [40].) The proof of (32) is as follows: Let X t : = tN t and \(V:=\int _{ 0}^{\infty }e^{-X_{t}}dt\). For c > 0, let \(\tau _{c}:=\inf \{ t \geq 0: X_{t} = c\}\). Since \(X_{t} \rightarrow \infty \) a.s. as \(t \rightarrow \infty \) and {X t } does not have positive jumps, \(\tau _{c} < \infty \) almost surely. Then

$$\displaystyle{V =\int _{ 0}^{\tau _{c} }e^{-X_{t} }dt +\int _{ \tau _{c}}^{\infty }e^{-X_{t} }dt =: Y _{c} + V _{c},}$$

where V c and Y c are independent. We have

$$\displaystyle{V _{c} =\int _{ \tau _{c}}^{\infty }e^{-(X_{t}-X_{\tau _{c}})}e^{-X_{\tau _{c}} }dt = e^{-c}\int _{ \tau _{c}}^{\infty }e^{-(X_{t}-X_{\tau _{c}})}dt.}$$

Denote

$$\displaystyle{V _{c}':=\int _{ \tau _{c}}^{\infty }e^{-(X_{t}-X_{\tau _{c}})}dt.}$$

By the strong Markov property, \(\{X_{t} - X_{\tau _{c}}\}_{t>\tau _{c}}\) is independent of Y c and has the same distribution as {X t } t > 0. Thus we conclude that for all c > 0

$$\displaystyle{V = Y _{c} + e^{-c}V _{ c}',}$$

where \(\mathcal{L}(V _{c}') = \mathcal{L}(V )\). Thus \(\mathcal{L}(V ) \in L(\mathbb{R})\) by (12). But in order that it be in \(L_{1}(\mathbb{R})\), Theorem 7.16 requires that \(\mathcal{L}(Y _{c}) \in L(\mathbb{R})\). This, however, is not the case. For instance, we have

$$\displaystyle{P\left (Y _{1} =\int _{ 0}^{1}e^{-t}dt\right ) \geq P(N_{ t}\,\,\mbox{ does not jump until time $1$}) = e^{-E[N_{1}]} > 0.}$$

This means that Y 1 has a point mass at \(\int _{0}^{1}e^{-t}dt = 1 - e^{-1}\), but is not a constant, namely, \(\mathcal{L}(Y _{1})\) is a non-trivial distribution with a point mass. Recall that any non-trivial selfdecomposable distribution on \(\mathbb{R}\) must be absolutely continuous (see e.g. Sato [73, Theorem 27.13]), and thus \(\mathcal{L}(Y _{1})\notin L(\mathbb{R})\). We then conclude that \(\mathcal{L}(V )\notin L_{1}(\mathbb{R})\).
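The random variable V above is easy to simulate exactly up to a truncation, since between consecutive jump times of {N t } the integrand is an explicit exponential. A minimal sampler (Python/NumPy; the rate, truncation level and seed are our own choices):

```python
import numpy as np

rng = np.random.default_rng(5)

def sample_V(lam, n_jumps=200):
    """One sample of V = int_0^infty e^{-(t - N_t)} dt, {N_t} Poisson of rate lam < 1.
    On [T_k, T_{k+1}) we have N_t = k, so each piece integrates exactly to
    e^{k - T_k} - e^{k - T_{k+1}}; the tail beyond T_{n_jumps} is a.s. negligible
    because t - N_t -> infinity."""
    T = np.concatenate(([0.0], np.cumsum(rng.exponential(1.0 / lam, n_jumps))))
    k = np.arange(n_jumps)
    return np.sum(np.exp(k - T[:-1]) - np.exp(k - T[1:]))

V = np.array([sample_V(0.3) for _ in range(10_000)])
print(V.mean())   # for lam*(e-1) < 1, E[V] = 1/(1 - lam*(e-1)) ~ 2.064
```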

8.7 Type S Random Variable

For 0 < α < 2, define X: = V 1∕α Z α , where V is a positive infinitely divisible random variable and Z α is a symmetric α-stable random variable on \(\mathbb{R}\), and where V and Z α are independent. We call X the type S random variable.

Here we explain subordination of Lévy processes.

Theorem 8.1 (Sato [73, Theorem 30.1])

Let {V t ,t ≥ 0} be a subordinator (a nondecreasing Lévy process on \(\mathbb{R}\) ) and let {Z t ,t ≥ 0} be a Lévy process on \(\mathbb{R}^{d}\) , independent of {V t }. Then \(X_{t}:= Z_{V _{t}}\) is a Lévy process on \(\mathbb{R}^{d}\) , and \(\mathcal{L}(X_{t}) \in ID(\mathbb{R}^{d})\) .

The transformation of {Z t } to {X t } is called subordination by the subordinator {V t }.

Theorem 8.2

Let {V t ,t ≥ 0} be a subordinator and let {Z α (t)} be a symmetric α(∈ (0,2])-stable Lévy process on \(\mathbb{R}\) , independent of {V t }. Then if we write V = V 1 ,

$$\displaystyle{ Z_{\alpha }(V )\mathop{ =}\limits^{\mathrm{ d}}V ^{1/\alpha }Z_{\alpha }. }$$
(33)

Thus, \(\mathcal{L}(V ^{1/\alpha }Z_{\alpha }) \in ID(\mathbb{R})\) , implying that type S random variables are infinitely divisible.

Proof

We compare the characteristic functions on both sides of (33). Note that \(E[e^{izZ_{\alpha }}] =\exp \{ -c\vert z\vert ^{\alpha }\}\) with some c > 0, and for the Lévy process {X t }, \(E[e^{izX_{t}}] = \left (E[e^{izX_{1}}]\right )^{t}\). We then have

$$\displaystyle{E\left [\exp \left \{izZ_{\alpha }(V )\right \}\right ] = E_{V }\left [\exp \left \{-cV \vert z\vert ^{\alpha }\right \}\right ]}$$

and

$$\displaystyle{E\left [\exp \left \{izV ^{1/\alpha }Z_{\alpha }\right \}\right ] = E_{ V }\left [\exp \left \{-c\vert V ^{1/\alpha }z\vert ^{\alpha }\right \}\right ],}$$

implying that both sides of (33) are equal in law.

Notice that a symmetric stable random variable is of type G. For, we can check, by the characteristic functions,

$$\displaystyle{ Z_{\alpha }\mathop{ =}\limits^{\mathrm{ d}}(Z_{\alpha /2}^{+})^{1/2}Z_{ 2}, }$$
(34)

where Z α∕2 + is a positive α∕2-stable random variable.

Theorem 8.3

Type S random variables are of type G.

Proof

By (34), we have

$$\displaystyle{V ^{1/\alpha }Z_{\alpha }\mathop{ =}\limits^{\mathrm{ d}}V ^{1/\alpha }(Z_{\alpha /2}^{+})^{1/2}Z_{ 2} = (V ^{2/\alpha }Z_{\alpha /2}^{+})^{1/2}Z_{ 2}.}$$

It remains to show that \(\mathcal{L}(V ^{2/\alpha }Z_{\alpha /2}^{+}) \in I(\mathbb{R}_{+})\), but this can be shown in the same way as in the proof of (33), completing the proof.

  1. (a)

    If \(V \mathop{ =}\limits^{\mathrm{ d}}\varGamma _{1,\lambda }\), then \(\mathcal{L}(Z_{\alpha }(V )) \in \mathcal{G}(T(\mathbb{R})) \subset T(\mathbb{R})\). (See Bondesson [16, p. 38].)

  2. (b)

    Let \(\lambda > 0\) and {B t } a standard Brownian motion, and let {Z t } be a symmetric stable Lévy process. Then \(\int _{0}^{\infty }e^{-B_{t}-\lambda t}dZ_{t}\) is of type S. (See Maejima and Niiyama [45], Aoyama et al. [4] and Kondo et al. [39].)

8.8 Convolution of Symmetric Stable Distributions of Different Indexes

The characteristic function of the convolution of symmetric stable distributions of different indexes is \(\varphi (z) =\exp \left \{\int _{(0,2)} -\vert z\vert ^{\alpha }m(d\alpha )\right \}\), where m is a measure on the interval (0, 2). It belongs to \(L_{\infty }(\mathbb{R})\). (See e.g. Sato [67].)

8.9 Product of Independent Standard Normal Random Variables

Let Z 1, Z 2,  be independent standard normal random variables.

  1. (a)

    \(\mathcal{L}(Z_{1}Z_{2}) \in T(\mathbb{R})\). This is because \(\mathcal{L}(Z_{1}Z_{2}) = \mathcal{L}(\text{sym-gamma}(1/2,1))\), (see Steutel and van Harn [83, p. 504]),

  2. (b)

    (Maejima and Rosiński [46, Example 5.1].) \(\mathcal{L}(Z_{1}\cdots Z_{n}) \in G(\mathbb{R}),\,\,n \geq 2\). The proof is as follows: Recall that if \(V > 0,\mathcal{L}(V ) \in I(\mathbb{R}),Z\) is the standard normal random variable and V and Z are independent, then \(\mu = \mathcal{L}(V ^{1/2}Z) \in G(\mathbb{R})\). Here we need a lemma.

Lemma 8.4 (Shanbhag and Sreehari [81, Corollary 4])

Let Z be the standard normal random variable and Y a positive random variable independent of Z. Then |Z| p Y is infinitely divisible for any p ≥ 2.

We have \(Z_{1}\cdots Z_{n}\mathop{ =}\limits^{\mathrm{ d}}Z_{1}\vert Z_{2}\cdots Z_{n}\vert \) and | Z 2Z n  | 2 is infinitely divisible by Lemma 8.4, which implies that \(\mathcal{L}(Z_{1}\cdots Z_{n}) \in G(\mathbb{R}),\,\,n \geq 2\).

  1. (c)

    When n = 2, we can say more, namely, \(\mathcal{L}(Z_{1}Z_{2}) \in G_{1}(\mathbb{R})\). (For the proof, see Maejima and Rosiński [46, Example 5.2].)

9 Examples (II)

In this section, we list examples of distributions in the classes \(L(\mathbb{R}),B(\mathbb{R}),T(\mathbb{R})\) and \(G(\mathbb{R})\), in addition to what we have explained in the previous section.

9.1 Examples in \(L(\mathbb{R})\)

There are many examples in \(L(\mathbb{R})\). The following are some of them.

  1. (a)

    Let Z be the standard normal random variable, t(n) Student’s t-random variable and let F(n, m) be F-random variable. Then (i) \(\mathcal{L}(\log \vert Z\vert ) \in L(\mathbb{R})\), (ii) \(\mathcal{L}(\log \vert t\vert ) \in L(\mathbb{R})\) and (iii) \(\mathcal{L}(\log F) \in L(\mathbb{R})\). (Shanbhag and Sreehari [81].) These follow from the following facts:

    1. (i)

      Since \(\vert Z\vert ^{2}\mathop{ =}\limits^{\mathrm{ d}}\chi ^{2}(1)\), \(\log \vert Z\vert \mathop{ =}\limits^{\mathrm{ d}}\frac{1} {2}\log \chi ^{2}(1)\).

    2. (ii)

      By (28),

      $$\displaystyle{\log \vert t(n)\vert \mathop{ =}\limits^{\mathrm{ d}}\log \vert Z\vert -\frac{1} {2}\log \varGamma _{n/2,1/2} + \frac{1} {2}\log n,}$$

      where Z and Γ n∕2, 1∕2 are independent.

    3. (iii)

      By (29),

      $$\displaystyle{\log F(n,m)\mathop{ =}\limits^{\mathrm{ d}}\log \varGamma _{n/2,1/2} -\log \varGamma _{m/2,1/2} -\log n +\log m,}$$

    where Γ n∕2, 1∕2 and Γ m∕2, 1∕2 are independent.

  2. (b)

    Let E have a standard exponential random variable. Consider \(X\mathop{ =}\limits^{\mathrm{ d}} -\log E\). Then the distribution function G 1 of X is \(G_{1}(x) = e^{-e^{-x} },x \in \mathbb{R}\), called Gumbel distribution. (See Steutel and van Harn [83, Chap. IV, Example 11.1].) By Sect. 5.2(a), \(\mathcal{L}(X) \in L(\mathbb{R})\). Also \(G_{2}(x) = 1 - e^{-e^{x} },x \in \mathbb{R}\), is selfdecomposable, because G 2(x) = 1 − G 1(−x) and so \(G_{2} = \mathcal{L}(-X)\).

  3. (c)

    Let Y be a beta random variable. Then \(\mathcal{L}\left (\log Y (1 - Y )^{-1}\right ) \in L(\mathbb{R})\). (Barndorff-Nielsen et al. [11].)

9.2 Examples in \(L_{1}(\mathbb{R}_{+})\)

The following is Maejima et al. [56, Example 1(ii)]. Let \(\mu \in ID(\mathbb{R}_{+})\) be such that k +1(r) in (11) is cx α e ar, r > 0 with a, c > 0 and 0 < α < 2. Then \(\mu \in L_{1}(\mathbb{R}_{+})\). It is enough to apply Theorem 7.15.

9.3 Examples in \(B(\mathbb{R})\)

  1. (a)

    (Bondesson [16, p. 143].) Let {Y j } be i.i.d. exponential random variables and N a Poisson random variable independent of {Y j }. Put \(X =\sum _{ j=1}^{N}Y _{j}\). Then \(\mathcal{L}(X) \in B(\mathbb{R}_{+})\).

  2. (b)

    (Bondesson [16, pp. 143–144].) Let Y = Y (α, β) be a beta random variable with parameters α and β and let \(X = -\log Y\). Then

    1. (b1)

      \(\mathcal{L}(X) \in B(\mathbb{R}_{+})\).

    2. (b2)

      \(\mathcal{L}(X) \in L(\mathbb{R}_{+})\) if and only if 2α +β ≥ 1.

9.4 Examples in \(G(\mathbb{R})\)

More examples of distributions in \(G(\mathbb{R})\) are the following by Fukuyama and Takahashi [22]. Let \(([0,1],\mathfrak{B},\lambda )\) be the Lebesgue probability space with Lebesgue measure \(\lambda\). For any \(\mu \in G(\mathbb{R}) \cap ID_{\mathrm{sym}}(\mathbb{R})\), there exist {a j }, \(A_{n}(\rightarrow \infty )\) and \(\{\beta _{j}\} \subset \mathbb{R}\) such that

$$\displaystyle{ \frac{1} {A_{n}}\sum _{j=1}^{n}a_{ j}\cos \left (2\pi j(\omega +\beta _{j})\right ),\quad \omega \in [0,1],}$$

converges weakly to μ on the Lebesgue probability space.

9.5 Examples in \(T(\mathbb{R})\)

There are many examples in \(T(\mathbb{R})\). (See e.g. Bondesson [16].) The following are some of them.

  1. (a)

    (Log-normal distribution.) Let Z be the standard normal random variable and put X = e Z. The distribution of X is called the log-normal distribution, and its density is

    $$\displaystyle{\mu (dx) = \frac{1} {\sqrt{2\pi }} \frac{1} {x}\exp \left \{-\frac{1} {2}(\log x)^{2}\right \}1_{ (0,\infty )}(x)dx.}$$

    The log-normal distribution belongs to \(T(\mathbb{R}_{+})\). (See Steutel and van Harn [83, Chap. VI, Theorems 5.18 and 5.21].)

  2. (b)

    (Pareto distribution.) Let Γ 1, 1 and Γ c, 1, c > 0 be two independent gamma random variables and put X = Γ 1, 1Γ c, 1. Then its density is

    $$\displaystyle{\mu (dx) = \frac{1} {B(1,c)}\left ( \frac{1} {1 + x}\right )^{1+c}1_{ (0,\infty )}(x)dx,}$$

    and the corresponding distribution is called the Pareto distribution and belongs to \(T(\mathbb{R})\). (See Steutel and van Harn [83, Chap. VI, Example 12.9 and Theorems 5.18 and 5.19(ii)].)

  3. (c)

    Generalized inverse Gaussian distributions belong to \(T(\mathbb{R})\). (See e.g. Bondesson [16, Example 4.3.2].)

  4. (d)

    Let X α be a positive α-stable random variable with 0 < α < 1. Then \(\mathcal{L}(\log X_{\alpha }) \in T(\mathbb{R})\). (See Bondesson [16, Example 7.2.5].)

  5. (e)

    (Lévy ’s stochastic area X of the two-dimensional Brownian motion. See e.g. Sato [73, Example 15.15].) The density of X is \(f(x) = (\pi \cosh x)^{-1}\) and k ±1(r) in (11) is \(\vert 2\sinh r\vert ^{-1}\). Since \(\vert 2\sinh r\vert ^{-1}\) is completely monotone in \(r \in (0,\infty )\), we have \(\mathcal{L}(X) \in T(\mathbb{R}_{+})\). This distribution μ 1 with a bit different scaling (the density is \(f_{1}(x) = (2\pi \cosh \frac{1} {2}x)^{-1}\)) is called the hyperbolic cosine distribution, (see e.g. Steutel and van Harn [83, p. 505], for this and below). It is also known that μ 1 is \(\mathcal{L}(\log (Y/Z))\) with independent Y and Z both of which are Γ 1∕2, 1. The distribution μ 2 with density \(f_{2}(x) = (2\pi ^{2}\sinh \frac{1} {2}x)^{-1}x\) is called hyperbolic sine distribution. It is known that μ 2 is \(\mathcal{L}(Y + Z)\) with independent Y and Z both of which are distributed as hyperbolic cosine distribution. \(k_{\xi }(r)\) is \(\vert \sinh r\vert ^{-1}\) up to scaling, and thus also \(\mu _{2} \in T(\mathbb{R}_{+})\).

9.6 Examples in \(T(\mathbb{R}) \cap L_{1}(\mathbb{R})^{c}\) (Revisited)

  1. (a)

    \(\mathcal{L}(\varGamma _{c,\lambda }).\) (Section 8.1.)

  2. (b)

    \(\mathcal{L}(T_{\alpha })\) if 0 < α < 1∕4 and \(q_{\xi }(r) = c(\xi )e^{-b(\xi )r}\) for all \(\xi\) in a set of positive \(\lambda\)-measure, where \(c(\xi )\) and \(b(\xi )\) are positive measurable functions of \(\xi\). (Section 8.5, (a) and (c).)

10 Examples (III)

The class of GGCs, which is the Thorin class, is generating renewed interest, since many examples have recently appeared in quite different problems. We explain some of them below.

10.1 The Rosenblatt Process and the Rosenblatt Distribution

Let 0 < D < 1∕2. The Rosenblatt process is defined, for t ≥ 0, as

$$\displaystyle\begin{array}{rcl} Z_{D}(t)& =& C(D)\int _{\mathbb{R}^{2}}'\left (\int _{0}^{t}(u - s_{ 1})_{+}^{-(1+D)/2}(u - s_{ 2})_{+}^{-(1+D)/2}du\right )dB_{ s_{1}}dB_{s_{2}}, {}\\ \end{array}$$

where \(\{B_{s},s \in \mathbb{R}\}\) is a standard Brownian motion, \(\int _{\mathbb{R}^{2}}'\) is the Wiener-Itô multiple integral on \(\mathbb{R}^{2}\) and C(D) is a normalizing constant. The distribution of Z D (1) is called the Rosenblatt distribution.

The Rosenblatt process is H-selfsimilar with H = 1 − D and has stationary increments. The Rosenblatt process lives in the so-called second Wiener chaos. Consequently, it is not a Gaussian process.

In the last few years, this stochastic process has been the object of several papers. (See Pipiras and Taqqu [64], Tudor [88], Tudor and Viens [89], Veillette and Taqqu [94] among others.)

Let

$$\displaystyle\begin{array}{rcl} \mathcal{H}_{D}& =& \Big\{h: h\,\,\text{is a complex-valued function on}\,\mathbb{R},h(x) = \overline{h(-x)}, {}\\ & & \quad \quad \int _{\mathbb{R}}\vert h(x)\vert ^{2}\vert x\vert ^{D-1}dx < \infty \Big\} {}\\ \end{array}$$

and for every t ≥ 0 define an integral operator A t by

$$\displaystyle{ A_{t}h(x) = C(D)\int _{-\infty }^{\infty }\frac{e^{it(x-y)-1}} {i(x - y)} h(y)\vert y\vert ^{D-1}dy,\quad h \in \mathcal{H}_{ D}. }$$

Since A t is a self-adjoint Hilbert-Schmidt operator (see Dobrushin and Major [20]), all eigenvalues \(\lambda _{n}(t),n = 1,2,\ldots,\) are real and satisfy \(\sum _{n=1}^{\infty }\lambda _{n}^{2}(t) < \infty \).

We start with the following.

Theorem 10.1 (Maejima and Tudor [49])

For every t 1 ,…,t d ≥ 0,

$$\displaystyle{(Z_{D}(t_{1}),\ldots,Z_{D}(t_{d}))\mathop{ =}\limits^{\mathrm{ d}}\left (\sum _{n=1}^{\infty }\lambda _{ n}(t_{1})(\varepsilon _{n}^{2} - 1),\ldots,\sum _{ n=1}^{\infty }\lambda _{ n}(t_{d})(\varepsilon _{n}^{2} - 1)\right ),}$$

where \(\{\varepsilon _{n}\}\) are i.i.d. standard normal random variables.

The case d = 1 was shown by Taqqu (see Proposition 2 of Dobrushin and Major [20]). The proof is enough to extend the idea of Taqqu from one dimension to multi-dimensions.

Theorem 10.2 (Maejima and Tudor [49])

For every t 1 ,…,t d ≥ 0, the law of (Z D (t 1 ),…,Z D (t d )) belongs to \(T(\mathbb{R}^{d})\) .

Proof

By Theorem 10.1,

$$\displaystyle\begin{array}{rcl} (Z_{D}(t_{1}),\ldots,Z_{D}(t_{d}))& \mathop{=}\limits^{\mathrm{ d}}& \left (\sum _{n=1}^{\infty }\lambda _{ n}(t_{1})(\varepsilon _{n}^{2} - 1),\ldots,\sum _{ n=1}^{\infty }\lambda _{ n}(t_{d})(\varepsilon _{n}^{2} - 1)\right ) {}\\ & = & \sum _{n=1}^{\infty }\varepsilon _{ n}^{2}(\lambda _{ n}(t_{1}),\ldots,\lambda _{n}(t_{d})) -\left (\sum _{n=1}^{\infty }\lambda _{ n}(t_{1}),\ldots,\sum _{n=1}^{\infty }\lambda _{ n}(t_{d})\right ), {}\\ \end{array}$$

where \(\varepsilon _{n}^{2}(\lambda _{n}(t_{1}),\ldots,\lambda _{n}(t_{d})),n = 1,2,\ldots,\) are the elementary gamma random variables in \(\mathbb{R}^{d}\). Since they are independent and since the class \(T(\mathbb{R}^{d})\) is closed under convolution and weak convergence, we see that the law of \(\sum _{n=1}^{\infty }\varepsilon _{n}^{2}(\lambda _{n}(t_{1}),\ldots,\lambda _{n}(t_{d}))\) belongs to \(T(\mathbb{R}^{d})\), and so does the law of (Z D (t 1), , Z D (t d )). This completes the proof.

In general, let I 2 B(f) be a double Wiener-Itô integral with respect to standard Brownian motion B, where \(f \in L_{\mathrm{sym}}^{2}(\mathbb{R}_{+}^{2})\). Then we have a more general result as follows:

Proposition 10.3

$$\displaystyle{I_{2}^{B}(f)\mathop{ =}\limits^{\mathrm{ d}}\sum _{ n=1}^{\infty }\lambda _{ n}(f)(\varepsilon _{n}^{2} - 1),}$$

where the series converges in L 2 (Ω) and almost surely. Also

$$\displaystyle{\hat{\mu }_{I_{2}^{B}(f)}(z) =\exp \left \{\frac{1} {2}\int _{\mathbb{R}_{+}}(e^{izx} - 1 - izx)\frac{1} {x}\left (\sum _{n=1}^{\infty }e^{-x/\lambda _{n} }\right )dx\right \}.}$$

Thus \(\mathcal{L}\left (I_{2}^{B}(f))\right ) \in T(\mathbb{R})\) .

(For the proof, see e.g. Nourdin and Peccati [60].)

The Rosenblatt distribution is represented by double Wiener-Itô integrals. However, we have seen that it belongs to the Thorin class \(T(\mathbb{R})\). The distributions in \(T(\mathbb{R})\) have several stochastic integral representations with respect to Lévy processes. Here we take one example. We regard them as members of the class of selfdecomposable distributions, which is a larger class than the Thorin class. This allows us to obtain a new result related to the Rosenblatt distribution.

The following is known. (Aoyama et al. [6, Corollary 2.1].) If \(\{\varGamma _{t,\lambda },t \geq 0\}\) is a gamma process with parameter \(\lambda > 0\), {N(t), t ≥ 0} is a Poisson process with unit rate and they are independent, then for any \(c > 0,\lambda > 0\),

$$\displaystyle{\varGamma _{c,\lambda }\mathop{ =}\limits^{\mathrm{ d}}\int _{0}^{\infty }e^{-t}d\varGamma _{ N(ct),\lambda }.}$$

Let

$$\displaystyle{Y _{t} =\varGamma _{N(t/2),1/2} - t.}$$

Note that {Y t , t ≥ 0} is a Lévy process. Then we have

$$\displaystyle{\varepsilon _{n}^{2} - 1\mathop{ =}\limits^{\mathrm{ d}}\varGamma _{ 1/2,1/2}^{(n)} - 1\mathop{ =}\limits^{\mathrm{ d}}\int _{ 0}^{\infty }e^{-t}dY _{ t}^{(n)},}$$

where Γ 1∕2, 1∕2 (n) and {Y t (n)} are independent copies of Γ 1∕2, 1∕2 and {Y t }, respectively. Thus

$$\displaystyle{Z_{D}\mathop{ =}\limits^{\mathrm{ d}}\int _{0}^{\infty }e^{-t}d\left (\sum _{ n=1}^{\infty }\lambda _{ n}Y _{t}^{(n)}\right ) =:\int _{ 0}^{\infty }e^{-t}dZ_{ t}.}$$

Remark 10.4

\(\sum _{n=1}^{\infty }\lambda _{n}Y _{t}^{(n)}\) is convergent a.s. and in L 2 because

$$\displaystyle{\sum _{n=1}^{\infty }E\left [\left (\lambda _{ n}Y _{t}^{(n)}\right )^{2}\right ] = E\left [Y _{ t}^{2}\right ]\sum _{ n=1}^{\infty }\lambda _{ n}^{2} < \infty.}$$

Remark 10.5

Since {Y t (n)}, n = 1, 2, , are independent and identically distributed Lévy processes, their infinite weighted sum {Z t } is a Lévy process.

We thus finally have the following theorem.

Theorem 10.6 (Maejima and Tudor [49])

$$\displaystyle{Z_{D}\mathop{ =}\limits^{\mathrm{ d}}\int _{0}^{\infty }e^{-t}dZ_{ t},}$$

where {Z t } is a Lévy process in Remark  10.5 .

10.2 The Duration of Bessel Excursions Straddling Independent Exponential Times

This section is from Bertoin et al. [14].

Let {R t , t ≥ 0} be a Bessel process with R 0 = 0, with dimension d = 2(1 −α), (0 < α < 1, equivalently 0 < d < 2). When α = 1∕2, {R t } is a Brownian motion. Let

$$\displaystyle\begin{array}{rcl} g_{t}^{(\alpha )}&:=& \sup \{s \leq t: R_{ s} = 0\}, {}\\ d_{t}^{(\alpha )}&:=& \inf \{s \geq t: R_{ s} = 0\} {}\\ \end{array}$$

and

$$\displaystyle{\varDelta _{t}^{(\alpha )}:= d_{ t}^{(\alpha )} - g_{ t}^{(\alpha )},}$$

which is the length of the excursion above 0, straddling t, for the process {R u , u ≥ 0}, and let \(\varepsilon\) be a standard exponential random variable independent of {R u , u ≥ 0}. Let \(\varDelta _{\alpha }:=\varDelta _{ \varepsilon }^{(\alpha )}\), which is the duration of Bessel excursions straddling independent exponential times.

Theorem 10.7

\(\mathcal{L}(\varDelta _{\alpha }) \in T(\mathbb{R}_{+})\) .

The idea of the proof is the following. They showed that

$$\displaystyle{E\left [e^{-s\varDelta _{\alpha }}\right ] =\exp \left \{-(1-\alpha )\int _{ 0}^{\infty }\left (1 - e^{-sx}\right )\frac{E[e^{-xG_{\alpha }}]} {x} dx\right \},\,\,s > 0,}$$

with a nonnegative random variable G α on [0, 1]. (The density function of G α is explicitly given.) Since \(k(x):= E[e^{-xG_{\alpha }}]\) is completely monotone by Bernstein’s theorem (Proposition 3.2), the statement of the theorem follows from (13).

10.3 Continuous State Branching Processes with Immigration

We start with some general theory on GGCs. Any GGC \(\mu \in T(\mathbb{R}_{+})\) has the Laplace transform:

$$\displaystyle\begin{array}{rcl} \pi (s):=\int _{ 0}^{\infty }e^{-sx}\mu (dx) =\exp \left \{-\gamma s -\int _{ 0}^{\infty }(1 - e^{-sx})\frac{k(x)} {x} dx\right \},\quad s > 0,& & {}\\ \end{array}$$

where γ ≥ 0, \(\int _{0}^{\infty }\frac{(1\wedge x)} {x} k(x)dx < \infty \) and k(x) is completely monotone on \((0,\infty )\). By Bernstein’s theorem (Proposition 3.2), there exists a positive measure \(\sigma\) such that

$$\displaystyle{k(x) =\int _{ 0}^{\infty }e^{-xy}\sigma (dy).}$$

We call this \(\sigma\) the Thorin measure, (see James et al. [28, Sect. 1.2.b]). Therefore, \(\mu \in T(\mathbb{R}_{+})\) can be parameterized by the pair \((\gamma,\sigma )\). Recall

$$\displaystyle\begin{array}{rcl} \pi (s) =\exp \left \{-\gamma s -\int _{0}^{\infty }(1 - e^{-sx})\frac{1} {x}\left (\int _{0}^{\infty }e^{-xy}\sigma (dy)\right )dx\right \},\quad s > 0.& & {}\\ \end{array}$$

The integrability condition for the Lévy measure ν of GGC is, in terms of \(\sigma\), turned out to be

$$\displaystyle{ \int _{0}^{\infty }\log \left (1 + \frac{s} {y}\right )\sigma (dy) < \infty \quad \text{for all}\,\,s > 0, }$$

(see James et al. [28, Eq. (3)]) which is equivalent to

$$\displaystyle{ \int _{(0,1/2]}\vert \log y\vert \sigma (dy) +\int _{(1/2,\infty )}\frac{1} {y}\sigma (dy) < \infty. }$$

The following is from Handa [24]. Consider continuous state branching processes with immigration (CBCI-process, in short) with quadruplet (a, b, ρ, δ) having the generator

$$\displaystyle{L_{\delta }f(x) = axf''(x) - bxf'(x) + x\int _{0}^{\infty }[f(x + y) - f(x) - yf'(x)]\rho (dy) +\delta f'(x),}$$

where ρ is a measure on \((0,\infty )\) satisfying \(\int _{0}^{\infty }(y \wedge y^{2})\rho (dy) < \infty \).

Theorem 10.8

Let γ ≥ 0 and suppose that \(\sigma\) is a non-zero Thorin measure.

  1. (1)

    There exist (a,b,M) such that

    $$\displaystyle{\gamma +\int \frac{1} {s + u}\sigma (du) = \frac{1} {a}s + b +\int \frac{s} {s + u}M(du),\quad s > 0.}$$
  2. (2)

    Any GGC with pair ( \(\gamma,\sigma\) ) is a unique stationary solution of the CBCI-process with quadruplet (a,b,ρ,1), where ρ is a measure on \((0,\infty )\) defined by

    $$\displaystyle{\rho (dy) = \left (\int _{0}^{\infty }u^{2}e^{-yu}M(du)\right )dy.}$$

10.4 Lévy Density of Inverse Local Time of Some Diffusion Processes

This section is from Takemura and Tomisaki [84].

Example 10.9 (Also, Shilling et al. [80, p. 201])

Let \(I = (0,\infty )\) and − 1 < p < 0. Let \(\mathcal{G}^{(p)} = \frac{1} {2} \frac{d^{2}} {dx^{2}} + \frac{2p+1} {2x} \frac{d} {dx}\). Assume 0 is reflecting. Let \(\mathbb{D}^{(p)}\) be the diffusion process on I with the generator \(\mathcal{G}^{(p)}\) and (p) the Lévy density of the inverse local time at 0 for \(\mathbb{D}^{(p)}\). Then we have \(\ell^{(p)}(x) = C \frac{1} {x}x^{-\vert p\vert }\), which is the Lévy density of a GGC.

Example 10.10

Let \(I = (0,\infty )\) and − 1 < p < 0. Let \(\mathbb{D}^{(p)}\) be the diffusion process with the generator \(\mathcal{G}^{(p)} = 2x \frac{d^{2}} {dx^{2}} + (2p + 2) \frac{d} {dx}\) and suppose that the end point 0 is reflecting. If (p) is the Lévy density of the inverse local time at 0 for \(\mathbb{D}^{(p)}\), then \(\ell^{(p)}(x) = C \frac{1} {x}x^{-\vert p\vert }\), which is again the Lévy density of a GGC.

Example 10.11

Let − 1 < p < 1 and β > 0. Let

$$\displaystyle{\mathcal{G}^{(p,\beta )} = \frac{1} {2} \frac{d^{2}} {dx^{2}} + \left \{ \frac{1} {2x} + \sqrt{2\beta }\frac{K_{p}'(\sqrt{2\beta }x)} {K_{p}(\sqrt{2\beta }x)}\right \} \frac{d} {dx},}$$

where K p (x) is the modified Bessel function and, let \(\mathbb{D}^{(p,\beta )}\) be the diffusion process on I with the generator \(\mathcal{G}^{(p,\beta )}\). Suppose that the end point 0 is reflecting. Then (p, β), the Lévy density of the inverse local time at 0 for \(\mathbb{D}^{(p,\beta )}\), satisfies

$$\displaystyle{\ell^{(p,\beta )}(x) = C \frac{1} {x}x^{-\vert p\vert }e^{-\beta x},}$$

which is the Lévy density of a GGC. (When p = 0, Shilling et al. [80, p. 202].)

Example 10.12

Let − 1 < p < 1 and β > 0. Let

$$\displaystyle{\mathcal{G}^{(p,\beta )} = 2x \frac{d^{2}} {dx^{2}} + 2\left \{1 + \sqrt{2\beta x}\frac{K'_{p}(\sqrt{2\beta x})} {K_{p}(\sqrt{2\beta x})}\right \} \frac{d} {dx}.}$$

If \(\mathbb{D}^{(p,\beta )}\) is the diffusion process with the generator \(\mathcal{G}^{(p,\beta )}\) and the end point 0 is reflecting, then (p, β), the Lévy density of the inverse local time at 0 for \(\mathbb{D}^{(p,\beta )}\), is \(\ell^{(p,\beta )}(x) = C \frac{1} {x}x^{-\vert p\vert }e^{-\beta x}\), which is a GGC.

10.5 GGCs in Finance

Lévy processes play an important role in asset modeling, and among others a typical pure jump Lévy process is a subordination of Brownian motion. One of them is the variance-gamma process {Y t } by Madan and Seneta [42], which is a time-changed Brownian motion B = { B t } on \(\mathbb{R}\) subordinated by the gamma process Γ = {Γ(t)}; namely

$$\displaystyle{ Y _{t} = B_{\varGamma (t)}, }$$
(35)

where the gamma process {Γ(t)} is a Lévy process on \(\mathbb{R}\) such that \(\mathcal{L}(\varGamma (1))\) is the distribution of a gamma random variable \(\varGamma _{1,\lambda }\). This is a special case of Example 30.8 of Sato [73], where B is a general Lévy process on \(\mathbb{R}^{d}\), and when B is the standard Brownian motion on \(\mathbb{R}\), for \(z \in \mathbb{R}\),

$$\displaystyle{E\left [e^{izY _{t} }\right ] = \left ( \frac{\lambda } {\lambda +z^{2}}\right )^{t}.}$$

This is sym-gamma \((t,\sqrt{\lambda })\) in Sect. 5.3.

The variance-gamma processes, which are studied in finance, are generalized to the variance-GGC process. The variance-GGC process is {Y t } in (35) with the replacement of the gamma process Γ by the GGC process \(\tilde{\varGamma }=\{\tilde{\varGamma } _{t}\}\), which is a Lévy process on \(\mathbb{R}\) such that \(\mathcal{L}(\tilde{\varGamma }_{1})\) is a GGC. The following is known.

Proposition 10.13 (Privault and Yang [66])

Let B is a Brownian motion with drift and \(Y _{t} = B_{\tilde{\varGamma }_{t}}\) . Then Y t is decomposed as Y t = U t − W t , where {U t } and {W t } are two independent GGC process, and thus \(\mathcal{L}(Y _{t}) \in T(\mathbb{R})\) .

The next example is the so-called CGMY model (Carr et al. [18]). It is EGGC with the Lévy density \(r^{-1}k_{\xi }(r)\) and

$$\displaystyle{k_{\xi }(r) = \left \{\begin{array}{@{}l@{\quad }l@{}} Ce^{-Gr}r^{-1-Y }, \quad &\text{for}\,\,\xi = -1, \\ Ce^{-Mr}r^{-1-Y },\quad &\text{for}\,\,\xi = +1. \end{array} \right.}$$

where C > 0, G, M ≥ 0, Y < 2. The case Y = 0 is the special case of the variance gamma model. This model has been used as a new model for asset returns, which, in contrast to standard models like Black-Scholes model, allows for jump components displaying finite or infinite activity and variation.

11 Examples of α-Selfdecomposable Distributions

In this section, we give two examples of α-selfdecomposable distributions. The first one is two-dimensional.

11.1 The First Example

Many examples in \(L^{\langle 0\rangle }(\mathbb{R}) = L(\mathbb{R})\) are known as selfdecomposable distributions, but we have less examples of distributions in \(L^{\langle \alpha \rangle }(\mathbb{R}^{d}),\alpha \neq 0.\) In this section, we give an example in \(L^{\langle -2\rangle }(\mathbb{R}^{2})\). This section is from Maejima and Ueda [51].

Let (Z 1, Z 2) be a bivariate Gaussian random variable, where Z 1 and Z 2 are standard Gaussian random variables with correlation coefficient \(\sigma \in (-1,1)\). Define a bivariate gamma random variable by W = (Z 1 2, Z 2 2). Our concerns are whether W is selfdecomposable or not and if not, which class its distribution belongs to.

Theorem 11.1

Suppose \(\sigma \neq 0\) . Then

$$\displaystyle{\mathcal{L}(W)\left \{\begin{array}{@{}l@{\quad }l@{}} \in L^{\langle \alpha \rangle }(\mathbb{R}^{2})\quad &\text{for all }\alpha \leq -2, \\ \notin L^{\langle \alpha \rangle }(\mathbb{R}^{2}) \quad &\text{for all }\alpha > -2. \end{array} \right.}$$

Remark 11.2

This is an example showing that \(L^{\langle \alpha \rangle }(\mathbb{R}^{2})\) is not right-continuous in α at α = −2, namely

$$\displaystyle{L^{\langle -2\rangle }(\mathbb{R}^{2}) \supsetneq \bigcup _{\beta >-2}L^{\langle \beta \rangle }(\mathbb{R}^{2}).}$$

Proof of Theorem 11.1

Let \(\overline{W}:= \frac{1} {2}(W_{1} + W_{2})\) , where W 1, W 2 are independent copies of W. Note that \(\overline{W}\) is α-selfdecomposable if and only if W is α-selfdecomposable. Vere-Jones [95] gave the form of the moment generating function of \(\overline{W}\). Then we can see that the Lévy measure ν of \(\overline{W}\) is

$$\displaystyle{\nu (B) =\int _{S}\lambda (d\xi )\int _{0}^{\infty }1_{ B}(r\xi ) \frac{1} {r^{(-2)+1}}\ell_{\xi }(r)dr,}$$

where

$$\displaystyle\begin{array}{rcl} \ell_{\xi }(r)& =& \frac{\vert \sigma \vert } {r(1 -\sigma ^{2})\sqrt{\cos \theta \sin \theta }}\exp \left \{- \frac{\cos \theta +\sin \theta } {1 -\sigma ^{2}}r\right \}I_{1}\left ( \frac{2\vert \sigma \vert \sqrt{\cos \theta \sin \theta }} {1 -\sigma ^{2}}r\right ), {}\\ & & \quad (\xi = (\cos \theta,\sin \theta ),\theta \in (0,\pi /2)), {}\\ \end{array}$$

where I 1(⋅ ) is the modified Bessel function of the first kind. To show \(\mathcal{L}(\overline{W}) \in L^{\langle -2\rangle }(\mathbb{R}^{2})\), it is enough to check that \(\ell_{\xi }(r),r > 0\) is nonincreasing, which is proved in Maejima and Ueda [51].

To see that \(\mathcal{L}(\overline{W})\notin L^{\langle \alpha \rangle }(\mathbb{R}^{2}),\alpha > -2\), it is enough to check that for any \(\beta > 0,r^{\beta }\ell_{\xi }(r),r > 0\) is “not” nonincreasing, which is easily shown.

11.2 The Second Example

This section is from Maejima and Ueda [55].

Remark 11.3

$$\displaystyle{\mathcal{L}(\varGamma _{c,\lambda })\left \{\begin{array}{@{}l@{\quad }l@{}} \in L^{\langle 0\rangle }(\mathbb{R}), \quad \\ \notin L^{\langle \alpha \rangle }(\mathbb{R}),\quad \alpha > 0.\quad \end{array} \right.}$$

Thus, \(L^{\langle \alpha \rangle }(\mathbb{R})\) is not right-continuous at α = 0.

Consider \(\mathcal{L}(\log \varGamma _{c,\lambda })\). It is known that \(\mathcal{L}(\log \varGamma _{c,\lambda }) \in L(\mathbb{R}) = L^{\langle 0\rangle }(\mathbb{R})\) (Sect. 5.2 (a)). Let

$$\displaystyle\begin{array}{rcl} h(\alpha;r)&:=& \frac{\alpha } {r} - \frac{e^{-r}} {1 - e^{r}},\quad r > 0, {}\\ k(r)&:=& \frac{r^{2}e^{-r}} {(1 - e^{-r})^{2}},\quad r > 0. {}\\ \end{array}$$

Write the solution of k(r) = α by r = r α . Let

$$\displaystyle{A_{1} =\{ (c,\alpha ) \in (0,\infty ) \times \mathbb{R}\,\,:\,\, 0 <\alpha < 1,\,\,c \geq h(\alpha;r_{\alpha })\}}$$

and

$$\displaystyle{A_{2} =\{ (c,\alpha ) \in (0,\infty ) \times \mathbb{R}\,\,:\,\,\alpha = 1,\,\,c \geq \frac{1} {2}\}.}$$

Theorem 11.4

$$\displaystyle{\mathcal{L}(\log \varGamma _{c,\lambda })\left \{\begin{array}{@{}l@{\quad }l@{}} \in L^{\langle \alpha \rangle }(\mathbb{R}),\quad &\,\,\text{if}\,\,(c,\alpha ) \in ((0,\infty ) \times (-\infty,0]) \cup A_{ 1} \cup A_{2}, \\ \notin L^{\langle \alpha \rangle }(\mathbb{R}),\,\, \quad &\,\,\text{if}\,\,(c,\alpha )\notin ((0,\infty ) \times (-\infty,0]) \cup A_{1} \cup A_{2}. \end{array} \right.}$$

Proof

As we have seen in (30) in Sect. 5.2, ν −1 of \(\log \varGamma _{c,\lambda }\) is \(\nu _{-1}(dr) = \frac{e^{-cr}} {r(1-e^{-r})}dr,r > 0\). Thus

$$\displaystyle{\nu _{-1}(dr) = \frac{1} {r^{\alpha +1}} \cdot \frac{r^{\alpha }e^{-cr}} {1 - e^{-r}}dr =: \frac{1} {r^{\alpha +1}}\ell_{c,\alpha }(r)dr}$$

and it is enough to check the monotonicity or non-monotonicity of c, α (r), r > 0, depending on (c, α). (For the details of the proof, see Maejima and Ueda [55].)

Corollary 11.5

\(L^{\langle \alpha \rangle }(\mathbb{R})\) is not right-continuous at α ∈ (0,1].

Remark 11.6

  1. (i)

    For any \(c > 0,\,\,\mathcal{L}(\log \varGamma _{c,\lambda })\notin L^{\langle \alpha \rangle }(\mathbb{R}),\,\,\alpha > 1\).

  2. (ii)

    Let E be an exponential random variable. Then

    $$\displaystyle{\mathcal{L}(\log E)\left \{\begin{array}{@{}l@{\quad }l@{}} \in L^{\langle 1\rangle }(\mathbb{R}), \quad \\ \notin L^{\langle \alpha \rangle }(\mathbb{R}),\,\,\alpha > 1.\quad \end{array} \right.}$$
  3. (iii)

    Let Z be a standard normal random variable. Then

    $$\displaystyle{\mathcal{L}(\log \vert Z\vert )\left \{\begin{array}{@{}l@{\quad }l@{}} \in L^{\langle 1\rangle }(\mathbb{R}), \quad \\ \notin L^{\langle \alpha \rangle }(\mathbb{R}),\,\,\alpha > 1,\quad \end{array} \right.}$$

    since \(Z^{2}\mathop{ =}\limits^{\mathrm{ d}}\varGamma _{1/2,1/2}\).

12 Fixed Points of Stochastic Integral Mappings: A New Sight of \(S(\mathbb{R}^{d})\) and Related Topics

Following Jurek and Vervaat [38], Jurek [31] and Jurek [32], we define a fixed point μ under a mapping Φ f as follows.

Definition 12.1

\(\mu \in \mathfrak{D}(\varPhi _{f})\) is called a fixed point under the mapping Φ f , if there exist a > 0 and \(c \in \mathbb{R}^{d}\) such that

$$\displaystyle{ \varPhi _{f}(\mu ) =\mu ^{a{\ast}}{\ast}\delta _{ c}. }$$
(36)

Remark 12.2

Given a mapping Φ f , the natural definition of its fixed point may be μ satisfying Φ f (μ) = μ. However, if we restrict ourselves to the mapping Φ α for instance, only the Cauchy distribution satisfies Φ α (μ) = μ. Then what is the meaning of (36)? We know that \(\mu \in ID(\mathbb{R}^{d})\) determines a Lévy process {X t } such that \(\mu = \mathcal{L}(X_{1})\), and \(\mu ^{a{\ast}}{\ast}\delta _{c} = \mathcal{L}(X_{a} + c)\). Therefore, (36) means that some Lévy process is a “fixed point” in some sense.

We consider here only Φ α . The set of all fixed points under the mapping Φ α is denoted by FP(Φ α ). For 0 < p ≤ 2, let \(S_{p}(\mathbb{R}^{d})\) be the class of all p-stable distributions on \(\mathbb{R}^{d}\) and thus \(S(\mathbb{R}^{d}) =\bigcup _{0<p\leq 2}S_{p}(\mathbb{R}^{d})\). Furthermore, for 1 < p ≤ 2, let \(S_{p}^{0}(\mathbb{R}^{d})\) be the class of p-stable distributions on \(\mathbb{R}^{d}\) with mean 0.

Theorem 12.3

We have

$$\displaystyle{\mathrm{FP}(\varPhi _{\alpha }) = \left \{\begin{array}{@{}l@{\quad }l@{}} S(\mathbb{R}^{d}), \quad &\text{when }\alpha \leq 0, \\ \bigcup _{p\in (\alpha,2]}S_{p}(\mathbb{R}^{d}), \quad &\text{when }0 <\alpha < 1, \\ \bigcup _{p\in (\alpha,2]}S_{p}^{0}(\mathbb{R}^{d}),\quad &\text{when }1 \leq \alpha < 2. \end{array} \right.}$$

Remark 12.4

Theorem 12.3 for α ≤ 0 was already proved in Jurek and Vervaat [38], Jurek [31] and Jurek [32] even in a general setting of a real separable Banach space. The case for 0 < α < 2 is by Ichifuji et al. [25]. One meaning of this theorem is to give new characterizations of the classes \(S(\mathbb{R}^{d})\), \(\bigcup _{p\in (\alpha,2]}S_{p}(\mathbb{R}^{d})\) with 0 < α < 1 and \(\bigcup _{p\in (\alpha,2]}S_{p}^{0}(\mathbb{R}^{d})\) with 1 ≤ α < 2.