1 Introduction

Ipsen et al. [3] and Mason [7] have proved under general conditions that a trimmed subordinator satisfies a self-standardized central limit theorem (CLT). One of their basic tools was a classic representation for subordinators (e.g., Rosiński [9]). Ipsen et al. [3] used conditional characteristic function methods to prove their CLT, whereas Mason [7] applied a powerful normal approximation result for standardized infinitely divisible random variables by Zaitsev [12]. In this note, we shall examine self-standardized CLTs for trimmed subordinated subordinators. It turns out that there are two ways to trim a subordinated subordinator. One way leads to CLTs for the usual trimmed subordinator treated in [3] and [7]; the second leads to a closely related subordinated trimmed subordinator and to CLTs for it.

We begin by describing our setup and establishing some basic notation. Let \(V=\left ( V\left ( t\right ) ,t\geq 0\right ) \) and \(X=\left ( X\left ( t\right ) ,t\geq 0\right ) \) be independent zero-drift subordinators with Lévy measures ΛV and ΛX on \(\mathbb {R}^{+}=\left ( 0,\infty \right ) \), with tail functions \(\overline {\Lambda }_{V}(x)=\Lambda _{V}((x,\infty ))\) and \(\overline {\Lambda }_{X}(x)=\Lambda _{X}((x,\infty ))\), defined for x > 0, satisfying

$$\displaystyle \begin{aligned} \overline{\Lambda}_{V}\left( 0+\right) =\overline{\Lambda}_{X}\left( 0+\right) =\infty\text{.} {} \end{aligned} $$
(1)

For u > 0, let \(\varphi _{V}(u)=\sup \{x:\overline {\Lambda }_{V}(x)>u\},\) where \(\sup \varnothing :=0\). In the same way, define φX.
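Concretely, φV is the generalized inverse of the nonincreasing tail function. The following is a minimal numerical sketch, not taken from the source: the helper name `phi_from_tail` and the bisection scheme are our own illustration, assuming `tail` is nonincreasing on \((0,\infty)\).

```python
def phi_from_tail(tail, u, hi=1e12, probe=1e-12):
    """phi(u) = sup{x > 0 : tail(x) > u} for a nonincreasing tail function,
    located by bisection; returns 0.0 when the set is empty (the sup of the
    empty set is taken to be 0, as in the text)."""
    if tail(probe) <= u:          # tail(0+) <= u: no x > 0 with tail(x) > u
        return 0.0
    lo = 0.0
    for _ in range(200):          # halve the bracket until it is negligible
        mid = 0.5 * (lo + hi)
        if tail(mid) > u:
            lo = mid
        else:
            hi = mid
    return lo
```

For the Pareto-type tail \(\overline{\Lambda}(x)=x^{-\alpha}\), this recovers \(\varphi(u)=u^{-1/\alpha}\).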

Remark 1

Observe that we always have

$$\displaystyle \begin{aligned} \varphi_{V}(u)\rightarrow0\text{, as }u\rightarrow\infty. \end{aligned}$$

Moreover, whenever \(\overline {\Lambda }_{V}(0+)=\infty \), we have

$$\displaystyle \begin{aligned} \varphi_{V}(u)>0\ \text{for all }u>0. \end{aligned}$$

For details, see Remark 1 of Mason [7]. The same statement holds for φX.

Recall that the Lévy measure ΛV of a subordinator V  satisfies

$$\displaystyle \begin{aligned} \int_{0}^{1}x\Lambda_{V}(\mathrm{d}x)<\infty\text{, equivalently, for all }y>0,\ \int_{y}^{\infty}\varphi_{V}\left( x\right) \mathrm{d}x<\infty. \end{aligned}$$

The subordinator V  has Laplace transform defined for t ≥ 0 by

$$\displaystyle \begin{aligned} E\exp\left( -\theta V\left( t\right) \right) =\exp\left( -t\Phi _{V}\left( \theta\right) \right) ,\theta\geq0, \end{aligned}$$

where

$$\displaystyle \begin{aligned} \Phi_{V}\left( \theta\right) =\int_{0}^{\infty}\left( 1-\exp\left( -\theta v\right) \right) \Lambda_{V}\left( \mathrm{d}v\right) , \end{aligned}$$

which, after a change of variables, can be written as

$$\displaystyle \begin{aligned} =\int_{0}^{\infty}\left( 1-\exp\left( -\theta\varphi_{V}\left( u\right) \right) \right) \mathrm{d}u\text{.} \end{aligned}$$

In the same way, we define the Laplace transform of X.

Consider the subordinated subordinator process

$$\displaystyle \begin{aligned} W=\left( W\left( t\right) =V\left( X\left( t\right) \right) \text{, }t\geq0\right) . {} \end{aligned} $$
(2)

Applying Theorem 30.1 and Theorem 30.4 of Sato [11], we get that the process W is a zero-drift subordinator with Lévy measure ΛW defined for Borel subsets B of \(\left ( 0,\infty \right ) \) by

$$\displaystyle \begin{aligned} \Lambda_{W}\left( B\right) =\int_{0}^{\infty}P\left\{ V\left( y\right) \in B\right\} \Lambda_{X}\left( \mathrm{d}y\right) , {} \end{aligned} $$
(3)

with Lévy tail function

$$\displaystyle \begin{aligned} \overline{\Lambda}_{W}\left( x\right) =\Lambda_{W}\left( \left( x,\infty\right) \right) ,\text{ for }x>0. \end{aligned}$$

Remark 2

Notice that (1) implies

$$\displaystyle \begin{aligned} \overline{\Lambda}_{W}\left( 0+\right) =\infty\text{.} \end{aligned}$$

To see this, we have by (3) that

$$\displaystyle \begin{aligned} \overline{\Lambda}_{W}\left( 0+\right) =\lim_{n\rightarrow\infty}\int _{0}^{\infty}P\left\{ V\left( y\right) \in\left( \frac{1}{n} ,\infty\right) \right\} \Lambda_{X}\left( \mathrm{d}y\right) \text{.} \end{aligned}$$

Now \(\overline {\Lambda }_{V}\left ( 0+\right ) =\infty \) implies that for all y > 0, \(P\left \{ V\left ( y\right ) \in \left ( 0,\infty \right ) \right \} =1.\) Hence by the monotone convergence theorem,

$$\displaystyle \begin{aligned} \lim_{n\rightarrow\infty}\int_{0}^{\infty}P\left\{ V\left( y\right) \in\left( \frac{1}{n},\infty\right) \right\} \Lambda_{X}\left( \mathrm{d}y\right) =\overline{\Lambda}_{X}\left( 0+\right) =\infty\text{.} \end{aligned}$$

For later use, we note that W has Laplace transform defined for t ≥ 0 by

$$\displaystyle \begin{aligned} E\exp\left( -\theta W\left( t\right) \right) =\exp\left( -t\Phi _{W}\left( \theta\right) \right) ,\theta\geq0, \end{aligned}$$

where

$$\displaystyle \begin{aligned} \Phi_{W}\left( \theta\right) =\int_{0}^{\infty}\left( 1-e^{-\theta x}\right) \Lambda_{W}\left( \mathrm{d}x\right) \end{aligned}$$
$$\displaystyle \begin{aligned} =\int_{0}^{\infty}\int_{0}^{\infty}\left( 1-e^{-\theta x}\right) P\left( V\left( y\right) \in\mathrm{d}x\right) \Lambda_{X}\left( \mathrm{d} y\right) \end{aligned}$$
$$\displaystyle \begin{aligned} =\int_{0}^{\infty}\left( 1-e^{-y\Phi_{V}\left( \theta\right) }\right) \Lambda_{X}\left( \mathrm{d}y\right) . \end{aligned}$$

Definition 30.2 of Sato [11] calls the transformation of V  into W given by \(W\left ( t\right ) =V\left ( X\left ( t\right ) \right ) \) subordination by the subordinator X, which is sometimes called the directing process.

2 Two Methods of Trimming W

In order to talk about trimming W, we must first discuss the ordered jump sequences of V , X, and W. For any t > 0, denote by \(m_{V}^{\left ( 1\right ) }(t)\geq m_{V}^{\left ( 2\right ) }(t)\geq \cdots \) the ordered jump sequence of V  on the interval \(\left [ 0,t\right ] \). Let ω1, ω2, … be i.i.d. exponential random variables with parameter 1, and for each n ≥ 1, let Γn = ω1 + … + ωn. It is well known that for each t > 0,

$$\displaystyle \begin{aligned} \left( m_{V}^{\left( r\right) }(t)\right) _{r\geq1}\overset{\mathrm{D}} {=}\left( \varphi_{V}\left( \frac{\Gamma_{r}}{t}\right) \right) _{r\geq1}, {} \end{aligned} $$
(4)

and hence for each t > 0,

$$\displaystyle \begin{aligned} V(t)=\sum_{r=1}^{\infty}m_{V}^{\left( r\right) }(t)\overset{\mathrm{D}} {=}\sum_{r=1}^{\infty}\varphi_{V}\left( \frac{\Gamma_{r}}{t}\right) =:\widetilde{V}(t). {} \end{aligned} $$
(5)

See, for instance, equation (1.3) in IMR [3] and the references therein. It can also be inferred from a general representation for subordinators due to Rosiński [9].
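The representation (4)–(5) also suggests a direct way to simulate a subordinator approximately: generate the partial sums Γr of i.i.d. standard exponentials and sum \(\varphi_{V}(\Gamma_{r}/t)\) over a large number of terms. A rough illustrative sketch follows; the function names are ours, and \(\varphi(u)=e^{-u}\) below is merely one convenient nonincreasing choice with finite tail integrals, not a φ appearing in the source.

```python
import math
import random

def gamma_partial_sums(n, rng):
    # Gamma_1 < Gamma_2 < ...: partial sums of i.i.d. standard exponentials.
    total, sums = 0.0, []
    for _ in range(n):
        total += rng.expovariate(1.0)
        sums.append(total)
    return sums

def series_representation(phi, t, n_terms, rng):
    # Truncation of (5): V(t) =_D sum_{r >= 1} phi(Gamma_r / t); the tail of
    # the series is summable because integral_y^infty phi(x) dx < infty.
    gammas = gamma_partial_sums(n_terms, rng)
    jumps = [phi(g / t) for g in gammas]   # nonincreasing, since phi is
    return sum(jumps), jumps
```

Since the Γr are increasing and φ is nonincreasing, the simulated summands come out already ordered, which is convenient when discussing trimming.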

In the same way, we define for each t > 0, \(\left ( m_{X}^{\left ( r\right ) }(t)\right ) _{r\geq 1}\) and \(\left ( m_{W}^{\left ( r\right ) }(t)\right ) _{r\geq 1}\), and we see that the analogs of the distributional identity (4) hold with \(m_{V}^{\left ( r\right ) }\) and φV replaced by \(m_{X}^{\left ( r\right ) }\) and φX, and by \(m_{W}^{\left ( r\right ) }\) and φW, respectively. Recalling (2), observe that for all t > 0,

$$\displaystyle \begin{aligned} W\left( t\right) =\sum_{0<s\leq t}\Delta W\left( s\right) =V\left( X\left( t\right) \right) =\sum_{0<s\leq X\left( t\right) }\Delta V\left( s\right) . {} \end{aligned} $$
(6)

From (6) and the version of (4) with \(m_{V}^{\left ( r\right ) }\) and φV replaced by \(m_{W}^{\left ( r\right ) }\) and φW, we have for each t > 0

$$\displaystyle \begin{aligned} W(t)=\sum_{r=1}^{\infty}m_{W}^{\left( r\right) }(t)\overset{\mathrm{D}} {=}\sum_{r=1}^{\infty}\varphi_{W}\left( \frac{\Gamma_{r}}{t}\right) =:\widetilde{W}(t). \end{aligned}$$

Let V, X and \(\left ( \Gamma _{r}\right ) _{r\geq 1}\) be independent. In particular, V  is independent of

$$\displaystyle \begin{aligned} \left\{ \left( m_{X}^{\left( r\right) }(y)\right) _{r\geq1},y>0\right\} \text{ and }\left( \Gamma_{r}\right) _{r\geq1}. \end{aligned}$$

Next consider for each t > 0

$$\displaystyle \begin{aligned} \left( m_{V}^{\left( r\right) }(X\left( t\right) )\right) _{r\geq1}. \end{aligned}$$

Note that conditioned on \(X\left ( t\right ) =y\)

$$\displaystyle \begin{aligned} \left( m_{V}^{\left( r\right) }(X\left( t\right) )\right) _{r\geq 1}\overset{\mathrm{D}}{=}\left( m_{V}^{\left( r\right) }(y)\right) _{r\geq1}. \end{aligned}$$

Therefore, using (4), we get for each t > 0

$$\displaystyle \begin{aligned} \left( m_{V}^{\left( r\right) }(X\left( t\right) )\right) _{r\geq 1}\overset{\mathrm{D}}{=}\left( \varphi_{V}\left( \frac{\Gamma_{r}}{X\left( t\right) }\right) \right) _{r\geq1}, \end{aligned}$$

and thus by (5),

$$\displaystyle \begin{aligned} V(X\left( t\right) )=\sum_{r=1}^{\infty}m_{V}^{\left( r\right) }(X\left( t\right) )\overset{\mathrm{D}}{=}\sum_{r=1}^{\infty}\varphi_{V}\left( \frac{\Gamma_{r}}{X\left( t\right) }\right) =:\widetilde{V}(X\left( t\right) ). \end{aligned}$$

Here are two methods of trimming \(W(t)=V\left ( X(t)\right ) \).

Method I

For each t > 0, trim \(W(t)=V\left ( X(t)\right ) \) based on the ordered jumps of V  on the interval \(\left ( 0,X\left ( t\right ) \right ] .\) In this case, for each t > 0 and k ≥ 1, define the kth trimmed version of \(V(X\left ( t\right ) )\) by

$$\displaystyle \begin{aligned} V^{\left( k\right) }(X\left( t\right) ):=V(X\left( t\right) )-\sum _{r=1}^{k}m_{V}^{\left( r\right) }(X\left( t\right) ), \end{aligned}$$

which we will call the subordinated trimmed subordinator process. We note that

$$\displaystyle \begin{aligned} V^{\left( k\right) }(X\left( t\right) )\overset{\mathrm{D}}{=} \widetilde{V}(X\left( t\right) )-\sum_{r=1}^{k}\varphi_{V}\left( \frac{\Gamma_{r}}{X\left( t\right) }\right) =:\widetilde{V}^{\left( k\right) }(X\left( t\right) ). \end{aligned}$$

Method II

For each t > 0, trim W(t) based on the ordered jumps of W on the interval \(\left ( 0,t\right ] .\) In this case, for each t > 0 and k ≥ 1, define the kth trimmed version of W(t) by

$$\displaystyle \begin{aligned} W^{\left( k\right) }(t):=W(t)-\sum_{r=1}^{k}m_{W}^{\left( r\right) }(t) \end{aligned}$$
$$\displaystyle \begin{aligned} \overset{\mathrm{D}}{=}\widetilde{W}(t)-\sum_{r=1}^{k}\varphi_{W}\left( \frac{\Gamma_{r}}{t}\right) =:\widetilde{W}^{\left( k\right) }(t). \end{aligned}$$
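Both methods perform the same elementary operation, discarding the k largest jumps from a sum, but apply it to different jump collections: method I to the jumps of V  on \(\left ( 0,X\left ( t\right ) \right ] \), method II to the jumps of W on \(\left ( 0,t\right ] \). A sketch of that operation alone (the helper `trim_largest` is our own, hypothetical name):

```python
def trim_largest(jumps, k):
    # S - m^(1) - ... - m^(k): drop the k largest jumps, keep the rest.
    ordered = sorted(jumps, reverse=True)
    return sum(ordered[k:])
```

Method I would feed in the jumps of V accumulated up to the random time X(t), and method II the jumps of W up to time t; Remark 4 below explains why the two results need not agree in distribution.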

Remark 3

Notice that in method I trimming for each t > 0, we treat \(V(X\left ( t\right ) )\) as the subordinator V  randomly evaluated at \(X\left ( t\right ) \), whereas in method II trimming we consider W = V (X) as the subordinator, which results when the subordinator V  is randomly time changed by the subordinator X.

Remark 4

Though for each t > 0, \(V(X\left ( t\right ) )=W(t) \), typically we cannot conclude that for each t > 0 and k ≥ 1

$$\displaystyle \begin{aligned} V^{\left( k\right) }(X\left( t\right) )\overset{\mathrm{D}}{=}W^{\left( k\right) }(t). \end{aligned}$$

This is because it is not necessarily true that

$$\displaystyle \begin{aligned} \left( m_{V}^{\left( r\right) }(X\left( t\right) )\right) _{r\geq 1}\overset{\mathrm{D}}{=}\left( m_{W}^{\left( r\right) }(t)\right) _{r\geq1}. \end{aligned}$$

See the example in Appendix 1.

3 Self-Standardized CLTs for W

3.1 Self-Standardized CLTs for Method I Trimming

Set \(V^{\left ( 0\right ) }(t):=V(t)\), and for any integer k ≥ 1, consider the trimmed subordinator

$$\displaystyle \begin{aligned} V^{\left( k\right) }(t):=V(t)-m_{V}^{\left( 1\right) }(t)-\dots -m_{V}^{\left( k\right) }(t), \end{aligned}$$

which on account of (4) says for any integer k ≥ 0 and t > 0

$$\displaystyle \begin{aligned} V^{\left( k\right) }(t)\overset{\mathrm{D}}{=}\sum_{i=k+1}^{\infty} \varphi_{V}\left( \frac{\Gamma_{i}}{t}\right) =:\widetilde{V}^{\left( k\right) }(t). {} \end{aligned} $$
(7)

Let T be a strictly positive random variable independent of

$$\displaystyle \begin{aligned} \left\{ \left( m_{V}^{\left( r\right) }(t)\right) _{r\geq1},t>0\right\} \text{ and }\left( \Gamma_{r}\right) _{r\geq1}. {} \end{aligned} $$
(8)

Clearly, by (4), (7), and (8), we have for any integer k ≥ 0

$$\displaystyle \begin{aligned} V^{\left( k\right) }(T)\overset{\mathrm{D}}{=}\widetilde{V}^{\left( k\right) }(T). \end{aligned}$$

Set for any y > 0

$$\displaystyle \begin{aligned} \mu_{V}\left( y\right) :=\int_{y}^{\infty}\varphi_{V}\left( x\right) \mathrm{d}x\quad\text{and}\quad\sigma_{V}^{2}\left( y\right) :=\int _{y}^{\infty}\varphi_{V}^{2}\left( x\right) \mathrm{d}x\text{.} \end{aligned}$$

We see by Remark 1 that (1) implies that

$$\displaystyle \begin{aligned} \sigma_{V}^{2}\left( y\right) >0 \text{ for all } y>0. \end{aligned}$$

Throughout these notes, Z denotes a standard normal random variable. We shall need the following formal extension of Theorem 1 of Mason [7]. Its proof is nearly identical to that of the version in Mason [7]; one need only replace the sequence of positive constants \(\left \{ t_{n}\right \} _{n\geq 1}\) in the proof of Theorem 1 of Mason [7] by \(\left \{ T_{n}\right \} _{n\geq 1}\). The proof of Theorem 1 of Mason [7] is based on a special case of Theorem 1.2 of Zaitsev [12], which we state in the digression below. Here is our self-standardized CLT for method I trimmed subordinated subordinators.

Theorem 1

Assume that \(\overline {\Lambda }_{V}(0+)=\infty \) . For any sequence of positive integers \(\left \{ k_{n}\right \} _{n\geq 1}\) and sequence of strictly positive random variables \(\left \{ T_{n}\right \} _{n\geq 1}\) independent of \(\left ( \Gamma _{k}\right ) _{k\geq 1}\) satisfying

$$\displaystyle \begin{aligned} \frac{\sqrt{T_{n}}\sigma_{V}\left( \Gamma_{k_{n}}/T_{n}\right) }{\varphi _{V}\left( \Gamma_{k_{n}}/T_{n}\right) }\overset{\mathrm{P}}{\rightarrow }\infty,\mathit{\text{ as }}n\rightarrow\infty, \end{aligned}$$

we have uniformly in x, as \(n\rightarrow \infty \),

$$\displaystyle \begin{aligned} \left\vert P\left\{ \frac{\widetilde{V}^{\left( k_{n}\right) }\left( T_{n}\right) -T_{n}\mu_{V}\left( \Gamma_{k_{n}}/T_{n}\right) }{\sqrt{T_{n} }\sigma_{V}\left( \Gamma_{k_{n}}/T_{n}\right) }\leq x|\Gamma_{k_{n}} ,T_{n}\right\} -P\left\{ Z\leq x\right\} \right\vert \overset{\mathrm{P} }{\rightarrow}0\mathit{\text{,}} \end{aligned}$$

which implies as \(n\rightarrow \infty \)

$$\displaystyle \begin{aligned} \frac{\widetilde{V}^{\left( k_{n}\right) }\left( T_{n}\right) -T_{n} \mu_{V}\left( \Gamma_{k_{n}}/T_{n}\right) }{\sqrt{T_{n}}\sigma_{V}\left( \Gamma_{k_{n}}/T_{n}\right) }\overset{\mathrm{D}}{\rightarrow}Z. {} \end{aligned} $$
(9)

The remainder of this subsection will be devoted to examining two special cases of the following example of Theorem 1.

Example

For each 0 < α < 1, let \(V_{\alpha }=\left ( V_{\alpha }\left ( t\right ) ,t\geq 0\right ) \) be an α-stable process with Laplace transform defined for θ > 0 by

$$\displaystyle \begin{aligned} E\exp\left( -\theta V_{\alpha}(t)\right) =\exp\left( -t\int_{0}^{\infty }\left( 1-\exp(-\theta x)\right) \frac{\alpha}{\Gamma\left( 1-\alpha\right) }x^{-1-\alpha}\mathrm{d}x\right) \end{aligned}$$
$$\displaystyle \begin{aligned} =\exp\left( -t\int_{0}^{\infty}\left( 1-\exp(-\theta c_{\alpha}u^{-1/\alpha })\right) \mathrm{d}u\right) =\exp\left( -t\theta^{\alpha}\right) \text{,} {} \end{aligned} $$
(10)

where

$$\displaystyle \begin{aligned} c_{\alpha}=1/\Gamma^{1/\alpha}\left( 1-\alpha\right) . \end{aligned}$$

(See Example 24.12 of Sato [11].) Note that for Vα,

$$\displaystyle \begin{aligned} \varphi_{V_{\alpha}}\left( x\right) =: \varphi_{\alpha}(x)=c_{\alpha}x^{-1/\alpha}{\mathbf{1}}_{\left\{ x>0\right\} }. \end{aligned}$$

We record that for each t > 0

$$\displaystyle \begin{aligned} V_{\alpha}\left( t\right) \overset{\mathrm{D}}{=}\widetilde{V}_{\alpha }(t):=c_{\alpha}\sum_{i=1}^{\infty}\left( \frac{\Gamma_{i}}{t}\right) ^{-1/\alpha}. {} \end{aligned} $$
(11)

For any t > 0, denote by \(m_{\alpha }^{\left ( 1\right ) }\left ( t\right ) \geq m_{\alpha }^{\left ( 2\right ) }\left ( t\right ) \geq \dots \) the ordered jump sequence of Vα on the interval \(\left [ 0,t\right ] \). Consider the kth trimmed version of \(V_{\alpha }\left ( t\right ) \) defined for each integer k ≥ 1 by

$$\displaystyle \begin{aligned} V_{\alpha}^{\left( k\right) }\left( t\right) =V_{\alpha}\left( t\right) -m_{\alpha}^{\left( 1\right) }\left( t\right) -\dots-m_{\alpha}^{\left( k\right) }\left( t\right) , {} \end{aligned} $$
(12)

which for each t > 0

$$\displaystyle \begin{aligned} \overset{\mathrm{D}}{=}\widetilde{V}_{\alpha}^{\left( k\right) }\left( t\right) :=c_{\alpha}\sum_{i=1}^{\infty}\left( \frac{\Gamma_{k+i}} {t}\right) ^{-1/\alpha}. {} \end{aligned} $$
(13)

In this example, for ease of notation, write for each 0 < α < 1 and y > 0, \(\mu _{V_{\alpha }}\left ( y\right ) =\mu _{\alpha }\left ( y\right ) \) and \(\sigma _{V_{\alpha }}^{2}\left ( y\right ) =\sigma _{\alpha }^{2}\left ( y\right ) \). With this notation, we get that

$$\displaystyle \begin{aligned} \mu_{\alpha}\left( y\right) =\int_{y}^{\infty}c_{\alpha}v^{-1/\alpha }\mathrm{d}v=\frac{c_{\alpha}\alpha}{1-\alpha}y^{1-1/\alpha} \end{aligned}$$

and

$$\displaystyle \begin{aligned} \sigma_{\alpha}^{2}\left( y\right) =\int_{y}^{\infty}c_{\alpha} ^{2}v^{-2/\alpha}\mathrm{d}v=\frac{c_{\alpha}^{2}\alpha}{2-\alpha }y^{1-2/\alpha}. \end{aligned}$$
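As a quick sanity check on these closed forms, one can compare them with a direct numerical evaluation of the defining tail integrals. The sketch below (helper names ours) uses the substitution \(x=ye^{u}\), which maps \((y,\infty)\) to \((0,\infty)\), and takes α = 1/2:

```python
import math

def tail_integral(f, y, u_max=40.0, n=20000):
    # integral_y^infty f(x) dx via the substitution x = y * exp(u),
    # midpoint rule on u in [0, u_max]; the neglected tail is O(exp(-u_max)).
    h = u_max / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * h
        x = y * math.exp(u)
        total += f(x) * x          # Jacobian: dx = x du
    return total * h

alpha = 0.5
c = 1.0 / math.gamma(1.0 - alpha) ** (1.0 / alpha)   # c_alpha from the Example
phi = lambda x: c * x ** (-1.0 / alpha)              # phi_alpha(x) for x > 0

y = 2.0
mu_closed = c * alpha / (1.0 - alpha) * y ** (1.0 - 1.0 / alpha)
sig2_closed = c ** 2 * alpha / (2.0 - alpha) * y ** (1.0 - 2.0 / alpha)
mu_num = tail_integral(phi, y)
sig2_num = tail_integral(lambda x: phi(x) ** 2, y)
```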

From (13), we have that for any k ≥ 1 and T > 0

$$\displaystyle \begin{aligned} \frac{\widetilde{V}_{\alpha}^{\left( k\right) }\left( T\right) -T\mu_{\alpha}\left( \frac{\Gamma_{k}}{T}\right) }{T^{1/2}\sigma_{\alpha }\left( \frac{\Gamma_{k}}{T}\right) }=\frac{\sum_{i=1}^{\infty}\left( \Gamma_{k+i}\right) ^{-1/\alpha}-\frac{\alpha}{1-\alpha}\Gamma_{k} ^{1-1/\alpha}}{\sqrt{\frac{\alpha}{2-\alpha}}\Gamma_{k}^{1/2-1/\alpha}}. {} \end{aligned} $$
(14)

Notice that

$$\displaystyle \begin{aligned} \frac{\sqrt{T}\sigma_{\alpha}\left( \frac{\Gamma_{k}}{T}\right) } {\varphi_{\alpha}(\frac{\Gamma_{k}}{T})}=\left( \Gamma_{k}\right) ^{1/2}\sqrt{\frac{\alpha}{2-\alpha}}. {} \end{aligned} $$
(15)

Clearly by (15) for any sequence of positive integers \(\left \{ k_{n}\right \} _{n\geq 1}\) converging to infinity and sequence of strictly positive random variables \(\left \{ T_{n}\right \} _{n\geq 1}\) independent of \(\left ( \Gamma _{k}\right ) _{k\geq 1}\),

$$\displaystyle \begin{aligned} \frac{\sqrt{T_{n}}\sigma_{\alpha}\left( \Gamma_{k_{n}}/T_{n}\right) }{\varphi_{\alpha}\left( \Gamma_{k_{n}}/T_{n}\right) }=\left( \Gamma _{k_{n}}\right) ^{1/2}\sqrt{\frac{\alpha}{2-\alpha}}\overset{\mathrm{P} }{\rightarrow}\infty,\text{ as }n\rightarrow\infty. \end{aligned}$$

Hence, by rewriting (9) in the above notation, we have by Theorem 1 that as \(n\rightarrow \infty \)

$$\displaystyle \begin{aligned} \frac{\widetilde{V}_{\alpha}^{\left( k_{n}\right) }\left( T_{n}\right) -T_{n}\mu_{\alpha}\left( \frac{\Gamma_{k_{n}}}{T_{n}}\right) }{T_{n} ^{1/2}\sigma_{\alpha}\left( \frac{\Gamma_{k_{n}}}{T_{n}}\right) } \overset{\mathrm{D}}{\rightarrow}Z. {} \end{aligned} $$
(16)

Digression

To make the presentation of our Example more self-contained, we shall show in this digression how a special case of Theorem 1.2 of Zaitsev [12] can be used to give a direct proof of (16).

It is pointed out in Mason [7] that Theorem 1.2 of Zaitsev [12] implies the following normal approximation. Let Y  be an infinitely divisible mean 0 and variance 1 random variable with Lévy measure Λ, and let Z be a standard normal random variable. Assume that the support of Λ is contained in a closed interval \(\left [ -\tau ,\tau \right ] \) with τ > 0; then there exist universal positive constants C1 and C2 such that for any λ > 0 and all \(x\in \mathbb {R}\),

$$\displaystyle \begin{aligned} P\left\{ Z\leq x-\lambda\right\} -C_{1}\exp\left( -\frac{\lambda}{C_{2} \tau}\right) \leq P\left\{ Y\leq x\right\} \end{aligned}$$
$$\displaystyle \begin{aligned} \leq P\left\{ Z\leq x+\lambda\right\} +C_{1}\exp\left( -\frac{\lambda }{C_{2}\tau}\right) . {} \end{aligned} $$
(17)

We shall show how to derive (16) from (17). Note that

$$\displaystyle \begin{aligned} \frac{\sum_{i=1}^{\infty}\left( \Gamma_{k+i}\right) ^{-1/\alpha} -\frac{\alpha}{1-\alpha}\Gamma_{k}^{1-1/\alpha}}{\sqrt{\frac{\alpha}{2-\alpha }}\Gamma_{k}^{1/2-1/\alpha}}\overset{\mathrm{D}}{=}\frac{\sum_{i=1}^{\infty }\left( 1+\frac{\Gamma_{i}^{\prime}}{\Gamma_{k}}\right) ^{-1/\alpha} -\frac{\alpha}{1-\alpha}\Gamma_{k}}{\sqrt{\frac{\alpha}{2-\alpha}}\Gamma _{k}^{1/2}}, {} \end{aligned} $$
(18)

where \(\left ( \Gamma _{i}^{\prime }\right ) _{i\geq 1}\overset {\mathrm {D}} {=}\left ( \Gamma _{i}\right ) _{i\geq 1}\) and \(\left ( \Gamma _{i}^{\prime }\right ) _{i\geq 1}\) is independent of \(\left ( \Gamma _{i}\right ) _{i\geq 1}\). Let \(Y_{\alpha }=\left ( Y_{\alpha }\left ( y\right ) ,y\geq 0\right ) \) be the subordinator with Laplace transform defined for each y > 0 and θ ≥ 0 by

$$\displaystyle \begin{aligned} E\exp\left( -\theta Y_{\alpha}\left( y\right) \right) =\exp\left( -y\int_{0}^{1}\left( 1-\exp(-\theta x)\right) \alpha x^{-\alpha-1} \mathrm{d}x\right) \end{aligned}$$
$$\displaystyle \begin{aligned} =:\exp\left( -y\int_{0}^{1}\left( 1-\exp(-\theta x)\right) \Lambda_{\alpha }\left( \mathrm{d}x\right) \right) . {} \end{aligned} $$
(19)

Observe that the Lévy measure Λα of Yα has Lévy tail function on \(\left ( 0,\infty \right ) \)

$$\displaystyle \begin{aligned} \overline{\Lambda}_{\alpha}\left( x\right) =\left( x^{-\alpha}-1\right) {\mathbf{1}}_{\left\{ 0<x\leq1\right\} } \end{aligned}$$

with φ function

$$\displaystyle \begin{aligned} \varphi_{Y_{\alpha}}\left( u\right) =\left( 1+u\right) ^{-1/\alpha }{\mathbf{1}}_{\left\{ u>0\right\} }. \end{aligned}$$

Thus from (5), for each y > 0,

$$\displaystyle \begin{aligned} Y_{\alpha}\left( y\right) \overset{\mathrm{D}}{=}\sum_{i=1}^{\infty}\left( 1+\frac{\Gamma_{i}^{\prime}}{y}\right) ^{-1/\alpha}. \end{aligned}$$

Also, we find by differentiating the Laplace transform of \(Y_{\alpha }\left ( y\right ) \) that for each y > 0

$$\displaystyle \begin{aligned} EY_{\alpha}\left( y\right) =\frac{\alpha y}{1-\alpha}=:\beta_{\alpha}y\quad\text{and}\quad\operatorname{Var}Y_{\alpha}\left( y\right) =\frac{\alpha y}{2-\alpha }=:\gamma_{\alpha}^{2}y, {} \end{aligned} $$
(20)

and hence,

$$\displaystyle \begin{aligned} Z_{\alpha}\left( y\right) :=\frac{Y_{\alpha}\left( y\right) -\beta _{\alpha}y}{\gamma_{\alpha}\sqrt{y}} \end{aligned}$$

is a mean 0 and variance 1 infinitely divisible random variable whose Lévy measure has support contained in the closed interval \(\left [ -\tau \left ( y\right ) ,\tau \left ( y\right ) \right ] \), where

$$\displaystyle \begin{aligned} \tau\left( y\right) =1/\left( \gamma_{\alpha}\sqrt{y}\right) . {} \end{aligned} $$
(21)

Thus by (17), for universal positive constants C1 and C2, for any λ > 0 and all \(x\in \mathbb {R}\),

$$\displaystyle \begin{aligned} P\left\{ Z\leq x-\lambda\right\} -C_{1}\exp\left( -\frac{\lambda}{C_{2} \tau\left( y\right) }\right) \leq P\left\{ Z_{\alpha}\left( y\right) \leq x\right\} \end{aligned}$$
$$\displaystyle \begin{aligned} \leq P\left\{ Z\leq x+\lambda\right\} +C_{1}\exp\left( -\frac{\lambda }{C_{2}\tau\left( y\right) }\right) . {} \end{aligned} $$
(22)

Clearly, since \(\left ( \Gamma _{i}^{\prime }\right ) _{i\geq 1}\overset {\mathrm {D}}{=}\left ( \Gamma _{i}\right ) _{i\geq 1}\) and \(\left ( \Gamma _{i}^{\prime }\right ) _{i\geq 1}\) is independent of \(\left ( \Gamma _{k_{n}}\right ) _{n\geq 1}\), we conclude by (22) and (21) that

$$\displaystyle \begin{aligned} P\left\{ Z\leq x-\lambda\right\} -C_{1}\exp\left( -\frac{\lambda \gamma_{\alpha}\sqrt{\Gamma_{k_{n}}}}{C_{2}}\right) \leq P\left\{ Z_{\alpha }\left( \Gamma_{k_{n}}\right) \leq x|\Gamma_{k_{n}}\right\} \end{aligned}$$
$$\displaystyle \begin{aligned} \leq P\left\{ Z\leq x+\lambda\right\} +C_{1}\exp\left( -\frac{\lambda \gamma_{\alpha}\sqrt{\Gamma_{k_{n}}}}{C_{2}}\right) . {} \end{aligned} $$
(23)

Now by the arbitrary choice of λ > 0, we get from (23) that uniformly in x, as \(k_{n}\rightarrow \infty \),

$$\displaystyle \begin{aligned} \left\vert P\left\{ \frac{Y_{\alpha}\left( \Gamma_{k_{n}}\right) -\beta_{\alpha}\Gamma_{k_{n}}}{\gamma_{\alpha}\sqrt{\Gamma_{k_{n}}}}\leq x|\Gamma_{k_{n}}\right\} -P\left\{ Z\leq x\right\} \right\vert \overset{\mathrm{P}}{\rightarrow}0. \end{aligned}$$

This implies as \(n\rightarrow \infty \)

$$\displaystyle \begin{aligned} \frac{Y_{\alpha}\left( \Gamma_{k_{n}}\right) -\beta_{\alpha}\Gamma_{k_{n}} }{\gamma_{\alpha}\sqrt{\Gamma_{k_{n}}}}\overset{\mathrm{D}}{\rightarrow}Z. {} \end{aligned} $$
(24)

Since the identity (14) holds for any k ≥ 1 and T > 0, (16) follows from (18) and (24). Of course, there are other ways to establish (24). For instance, (24) can be shown to be a consequence of Anscombe’s Theorem for Lévy processes. For details, see Appendix 2.

Remark 5

For any 0 < α < 1 and k ≥ 1, the random variable \(Y_{\alpha }\left ( \Gamma _{k}\right ) \) has Laplace transform

$$\displaystyle \begin{aligned} E\exp\left( -\theta Y_{\alpha}\left( \Gamma_{k}\right) \right) =\left( 1+\int_{0}^{1}\left( 1-\exp(-\theta x)\right) \Lambda_{\alpha}\left( \mathrm{d}x\right) \right) ^{-k}\text{, }\theta\geq0. \end{aligned}$$

It turns out that for any t > 0

$$\displaystyle \begin{aligned} Y_{\alpha}\left( \Gamma_{k}\right) \overset{\mathrm{D}}{=}V_{\alpha }^{\left( k\right) }\left( t\right) /m_{\alpha}^{\left( k\right) }\left( t\right) \text{,} \end{aligned}$$

where \(V_{\alpha }^{\left ( k\right ) }\left ( t\right ) \) and \(m_{\alpha }^{\left ( k\right ) }\left ( t\right ) \) are as in (12). See Theorem 1.1 (i) of Kevei and Mason [6]. Also refer to page 1979 of Ipsen et al [4].
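The Laplace transform in Remark 5 follows by conditioning on Γk: since Γk is a Gamma(k, 1) random variable, \(E\exp(-s\Gamma_{k})=(1+s)^{-k}\), applied with \(s=\int_{0}^{1}(1-\exp(-\theta x))\Lambda_{\alpha}(\mathrm{d}x)\). The following seeded Monte Carlo sketch of that conditioning step is our own illustration (names and tolerances are not from the source):

```python
import math
import random

def psi(theta, alpha, n=200000):
    # s = integral_0^1 (1 - e^{-theta x}) * alpha * x^{-alpha-1} dx, midpoint rule.
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        total += (1.0 - math.exp(-theta * x)) * alpha * x ** (-alpha - 1.0)
    return total * h

def mc_laplace(s, k, reps, rng):
    # Monte Carlo estimate of E exp(-s * Gamma_k), Gamma_k ~ Gamma(k, 1).
    acc = 0.0
    for _ in range(reps):
        g = sum(rng.expovariate(1.0) for _ in range(k))
        acc += math.exp(-s * g)
    return acc / reps

alpha, theta, k = 0.5, 1.0, 3
s = psi(theta, alpha)
estimate = mc_laplace(s, k, 40000, random.Random(7))
exact = (1.0 + s) ** (-k)        # Remark 5's Laplace transform at theta
```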

Next we give two special cases of our example, which we shall return to in the next subsection when we discuss self-standardized CLTs for method II trimming.

Special Case 1: Subordination of Two Independent Stable Subordinators

For 0 < α1, α2 < 1, let \(V_{\alpha _{1}}\) and \(V_{\alpha _{2}}\) be independent α1-stable and α2-stable processes, each with a Laplace transform of the form (10). Set for t ≥ 0

$$\displaystyle \begin{aligned} W\left( t\right) =V_{\alpha_{1}}\left( V_{\alpha_{2}}\left( t\right) \right) \end{aligned}$$

and

$$\displaystyle \begin{aligned} W=\left( W\left( t\right) \text{, }t\geq0\right) . \end{aligned}$$

One finds that for each t ≥ 0

$$\displaystyle \begin{aligned} W\left( t\right) =V_{\alpha_{1}}\left( V_{\alpha_{2}}\left( t\right) \right) =\sum_{0<s\leq V_{\alpha_{2}}\left( t\right) }\Delta V_{\alpha_{1} }\left( s\right) \text{.} \end{aligned}$$

Moreover, W is a stationary independent increment process, and for each t ≥ 0 and θ ≥ 0,

$$\displaystyle \begin{aligned} E\exp\left( -\theta W\left( t\right) \right) =E\exp\left( -V_{\alpha_{2} }\left( t\right) \theta^{\alpha_{1}}\right) \end{aligned}$$
$$\displaystyle \begin{aligned} =\exp\left( -t\theta^{\alpha_{1}\alpha_{2}}\right) . {} \end{aligned} $$
(25)

This says that W is the α1α2-stable subordinator \(V_{\alpha _{1}\alpha _{2}}\) with Laplace transform (25). (See Example 30.5 on page 202 of Sato [11].) Thus for each t ≥ 0 and θ ≥ 0,

$$\displaystyle \begin{aligned} E\exp\left( -\theta W\left( t\right) \right) =E\exp\left( -\theta V_{\alpha_{1}\alpha_{2}}\left( t\right) \right) \text{.} {} \end{aligned} $$
(26)

Therefore, with \(c\left ( \alpha _{1}\alpha _{2}\right ) =\frac {1} {\Gamma ^{1/\left ( \alpha _{1}\alpha _{2}\right ) }\left ( 1-\alpha _{1} \alpha _{2}\right ) }\), we get

$$\displaystyle \begin{aligned} c\left( \alpha_{1}\alpha_{2}\right) \sum_{i=1}^{\infty}\left( \frac {\Gamma_{i}}{t}\right) ^{-1/\left( \alpha_{1}\alpha_{2}\right) }=:\widetilde{V}_{\alpha_{1}\alpha_{2}}(t), \end{aligned}$$

which by (11), (25), and (26) for each fixed t > 0 is

$$\displaystyle \begin{aligned} \overset{\mathrm{D}}{=}V_{\alpha_{1}}\left( V_{\alpha_{2}}\left( t\right) \right) . \end{aligned}$$

Here we get that for any sequence of positive integers \(\left \{ k_{n}\right \} _{n\geq 1}\) converging to infinity and sequence of positive constants \(\left \{ s_{n}\right \} _{n\geq 1}\), by setting \(T_{n} =V_{\alpha _{2}}\left ( s_{n}\right ) ,\) for n ≥ 1, we have by (16) that as \(n\rightarrow \infty \)

$$\displaystyle \begin{aligned} \frac{\widetilde{V}_{\alpha_{1}}^{\left( k_{n}\right) }\left( V_{\alpha _{2}}\left( s_{n}\right) \right) -V_{\alpha_{2}}\left( s_{n}\right) \mu_{\alpha_{1}}\left( \frac{\Gamma_{k_{n}}}{V_{\alpha_{2}}\left( s_{n}\right) }\right) }{\sqrt{V_{\alpha_{2}}\left( s_{n}\right) } \sigma_{\alpha_{1}}\left( \frac{\Gamma_{k_{n}}}{V_{\alpha_{2}}\left( s_{n}\right) }\right) }\overset{\mathrm{D}}{\rightarrow}Z. \end{aligned}$$

Special Case 2: Mittag-Leffler Process

For each 0 < α < 1, let Vα be the α-stable process with Laplace transform (10). Now, independently of Vα, let \(X=\left ( X\left ( s\right ) ,s\geq 0\right ) \) be the standard Gamma process, i.e., X is a zero-drift subordinator with density for each s > 0

$$\displaystyle \begin{aligned} f_{X\left( s\right) }\left( x\right) =\frac{1}{\Gamma\left( s\right) }x^{s-1}e^{-x}\text{, for }x>0, \end{aligned}$$

mean and variance

$$\displaystyle \begin{aligned} EX\left( s\right) =s\quad\text{and}\quad\operatorname{Var}X\left( s\right) =s, \end{aligned}$$

and Laplace transform for θ ≥ 0

$$\displaystyle \begin{aligned} E\exp\left( -\theta X\left( s\right) \right) =\left( 1+\theta\right) ^{-s}\text{,} \end{aligned}$$

which, after a little computation, is

$$\displaystyle \begin{aligned} =\exp\left[ -s\int_{0}^{\infty}\left( 1-\exp\left( -\theta x\right) \right) x^{-1}e^{-x}\mathrm{d}x\right] \text{.} \end{aligned}$$

Notice that X has Lévy density

$$\displaystyle \begin{aligned} \rho\left( x\right) =x^{-1}e^{-x}\text{, for }x>0. \end{aligned}$$

(See Applebaum [1] pages 54–55.)
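The "little computation" above rests on the Frullani-type identity \(\int_{0}^{\infty}(1-e^{-\theta x})x^{-1}e^{-x}\mathrm{d}x=\log(1+\theta)\), which is easy to confirm numerically. A sketch (helper name and cutoff are our own choices; the integrand extends continuously to θ at x = 0):

```python
import math

def gamma_exponent(theta, cutoff=50.0, n=300000):
    # Numerical check of integral_0^infty (1 - e^{-theta x}) x^{-1} e^{-x} dx,
    # which should equal log(1 + theta); the tail beyond the cutoff is
    # bounded by exp(-cutoff)/cutoff and is negligible.
    h = cutoff / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        total += (1.0 - math.exp(-theta * x)) / x * math.exp(-x)
    return total * h
```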

Consider the subordinated process

$$\displaystyle \begin{aligned} W=\left( W\left( s\right) :=V_{\alpha}\left( X\left( s\right) \right) \text{, }s\geq0\right) . \end{aligned}$$

Applying Theorem 30.1 and Theorem 30.4 of Sato [11], we see that W is a zero-drift subordinator with Laplace transform

$$\displaystyle \begin{aligned} E\exp\left( -\theta W\left( s\right) \right) =E\exp\left( -\theta V_{\alpha}\left( X\left( s\right) \right) \right) \end{aligned}$$
$$\displaystyle \begin{aligned} =E\exp\left( -X\left( s\right) \theta^{\alpha}\right) =\left( 1+\theta^{\alpha}\right) ^{-s} \end{aligned}$$
$$\displaystyle \begin{aligned} =\exp\left[ -s\int_{0}^{\infty}\left( 1-\exp\left( -\theta^{\alpha }y\right) \right) y^{-1}e^{-y}\mathrm{d}y\right] ,\theta\geq0. \end{aligned}$$

It has Lévy measure ΛW defined for Borel subsets B of \(\left ( 0,\infty \right ) \), by

$$\displaystyle \begin{aligned} \Lambda_{W}\left( B\right) =\int_{0}^{\infty}P\left\{ V_{\alpha}\left( y\right) \in B\right\} y^{-1}e^{-y}\mathrm{d}y. \end{aligned}$$

In particular, it has Lévy tail function

$$\displaystyle \begin{aligned} \overline{\Lambda}_{W}\left( x\right) =\int_{0}^{\infty}P\left\{ V_{\alpha}\left( y\right) \in\left( x,\infty\right) \right\} y^{-1}e^{-y}\mathrm{d}y\text{, for }x>0. \end{aligned}$$

For later use, we note that

$$\displaystyle \begin{aligned} \int_{0}^{\infty}\left( 1-e^{-\theta x}\right) \Lambda_{W}\left( \mathrm{d}x\right) =\int_{0}^{\infty}\int_{0}^{\infty}\left( 1-e^{-\theta x}\right) P_{V_{\alpha}\left( y\right) }\left( \mathrm{d}x\right) y^{-1}e^{-y}\mathrm{d}y \end{aligned}$$
$$\displaystyle \begin{aligned} =\int_{0}^{\infty}\left( 1-e^{-y\theta^{\alpha}}\right) y^{-1}e^{-y} \mathrm{d}y. \end{aligned}$$

Such a process W is called the Mittag-Leffler process. See, e.g., Pillai [8].

By Theorem 4.3 of Pillai [8], for each s > 0, the exact distribution function Fα,s(x) of \(W\left ( s\right ) \) is, for x ≥ 0,

$$\displaystyle \begin{aligned} F_{\alpha,s}(x)=\sum_{r=0}^{\infty}\left( -1\right) ^{r}\frac{\Gamma\left( s+r\right) x^{\alpha\left( s+r\right) }}{\Gamma\left( s\right) r!\Gamma\left( 1+\alpha\left( s+r\right) \right) }, \end{aligned}$$

which says that for each s > 0 and x ≥ 0

$$\displaystyle \begin{aligned} P\left\{ W\left( s\right) \leq x\right\} =P\left\{ V_{\alpha}\left( X\left( s\right) \right) \leq x\right\} \end{aligned}$$
$$\displaystyle \begin{aligned} =P\left\{ \widetilde{V}_{\alpha}\left( X\left( s\right) \right) \leq x\right\} =F_{\alpha,s}(x). \end{aligned}$$
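Pillai's series is straightforward to evaluate by truncation, and for α = 1/2, s = 1 it can be cross-checked against the classical closed form \(F_{1/2,1}(x)=1-e^{x}\operatorname{erfc}(\sqrt{x})\), a standard Mittag-Leffler identity. The function name and truncation level below are ours:

```python
import math

def mittag_leffler_cdf(alpha, s, x, terms=60):
    # Truncation of Pillai's series for F_{alpha,s}(x); the Gamma factor in
    # the denominator makes the series converge fast for moderate x.
    total = 0.0
    for r in range(terms):
        total += ((-1.0) ** r * math.gamma(s + r) * x ** (alpha * (s + r))
                  / (math.gamma(s) * math.factorial(r)
                     * math.gamma(1.0 + alpha * (s + r))))
    return total
```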

In this special case, for any sequence of positive integers \(\left \{ k_{n}\right \} _{n\geq 1}\) converging to infinity and sequence of positive constants \(\left \{ s_{n}\right \} _{n\geq 1}\), by setting \(T_{n}=X\left ( s_{n}\right ) ,\) for n ≥ 1, we get by (16) that as \(n\rightarrow \infty \)

$$\displaystyle \begin{aligned} \frac{\widetilde{V}_{\alpha}^{\left( k_{n}\right) }\left( X\left( s_{n}\right) \right) -X\left( s_{n}\right) \mu_{\alpha}\left( \Gamma_{k_{n}}/X\left( s_{n}\right) \right) }{\sqrt{X\left( s_{n}\right) }\sigma_{\alpha}\left( \Gamma_{k_{n}}/X\left( s_{n}\right) \right) }\overset{\mathrm{D}}{\rightarrow}Z. \end{aligned}$$

3.2 Self-Standardized CLTs for Method II Trimming

Let W be a subordinator of the form (2). Set for any y > 0

$$\displaystyle \begin{aligned} \mu_{W}\left( y\right) :=\int_{y}^{\infty}\varphi_{W}\left( x\right) \mathrm{d}x\text{ and }\sigma_{W}^{2}\left( y\right) :=\int _{y}^{\infty}\varphi_{W}^{2}\left( x\right) \mathrm{d}x\text{.} \end{aligned}$$

We see by Remarks 1 and 2 that (1) implies that

$$\displaystyle \begin{aligned} \sigma_{W}^{2}\left( y\right) >0\text{ for all }y>0. \end{aligned}$$
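
To make these quantities concrete, here is a small sketch (entirely our own, for the hypothetical power-law tail \(\overline {\Lambda }_{W}(x)=x^{-\alpha }\), which is not one of the cases treated above): the generalized inverse \(\varphi _{W}\) is computed by bisection and checked against its closed form, and \(\mu _{W}\), \(\sigma _{W}^{2}\) reduce to the elementary integrals \(\int _{y}^{\infty }u^{-1/\alpha }\mathrm {d}u\) and \(\int _{y}^{\infty }u^{-2/\alpha }\mathrm {d}u\):

```python
import math

ALPHA = 0.5  # hypothetical tail index, 0 < ALPHA < 1

def tail(x):
    """Model Levy tail function: Lambda_bar(x) = x^{-ALPHA}, x > 0."""
    return x ** (-ALPHA)

def phi(u, lo=1e-12, hi=1e12, iters=200):
    """Generalized inverse phi(u) = sup{x : Lambda_bar(x) > u} by bisection.

    For a continuous strictly decreasing tail this is the ordinary inverse.
    """
    for _ in range(iters):
        mid = math.sqrt(lo * hi)  # geometric bisection suits power laws
        if tail(mid) > u:
            lo = mid
        else:
            hi = mid
    return lo

def mu(y):
    """mu_W(y) = int_y^inf phi(x) dx, in closed form for the power-law tail."""
    return y ** (1.0 - 1.0 / ALPHA) / (1.0 / ALPHA - 1.0)

def sigma2(y):
    """sigma_W^2(y) = int_y^inf phi(x)^2 dx, again in closed form."""
    return y ** (1.0 - 2.0 / ALPHA) / (2.0 / ALPHA - 1.0)

# phi agrees with the exact inverse u^{-1/ALPHA}, and sigma_W^2(y) > 0:
for u in (0.5, 2.0, 10.0):
    assert abs(phi(u) - u ** (-1.0 / ALPHA)) < 1e-6 * u ** (-1.0 / ALPHA)
assert sigma2(1.0) > 0 and sigma2(5.0) > 0
```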

For ease of reference, we state here a version of Theorem 1 of Mason [7], phrased as a self-standardized CLT for the method II trimmed subordinated subordinator W.

Theorem 2

Assume that \(\overline {\Lambda }_{W}(0+)=\infty \) . For any sequence of positive integers \(\left \{ k_{n}\right \} _{n\geq 1}\) and sequence of positive constants \(\left \{ t_{n}\right \} _{n\geq 1}\) satisfying

$$\displaystyle \begin{aligned} \frac{\sqrt{t_{n}}\sigma_{W}\left( \Gamma_{k_{n}}/t_{n}\right) }{\varphi _{W}\left( \Gamma_{k_{n}}/t_{n}\right) }\overset{\mathrm{P}}{\rightarrow }\infty,\mathit{\text{ as }}n\rightarrow\infty, \end{aligned}$$

we have uniformly in x, as \(n\rightarrow \infty \),

$$\displaystyle \begin{aligned} \left\vert P\left\{ \frac{\widetilde{W}^{\left( k_{n}\right) }\left( t_{n}\right) -t_{n}\mu_{W}\left( \Gamma_{k_{n}}/t_{n}\right) }{\sqrt{t_{n} }\sigma_{W}\left( \Gamma_{k_{n}}/t_{n}\right) }\leq x|\Gamma_{k_{n} }\right\} -P\left\{ Z\leq x\right\} \right\vert \overset{\mathrm{P} }{\rightarrow}0\mathit{\text{,}} \end{aligned}$$

which implies as \(n\rightarrow \infty \)

$$\displaystyle \begin{aligned} \frac{\widetilde{W}^{\left( k_{n}\right) }\left( t_{n}\right) -t_{n} \mu_{W}\left( \Gamma_{k_{n}}/t_{n}\right) }{\sqrt{t_{n}}\sigma_{W}\left( \Gamma_{k_{n}}/t_{n}\right) }\overset{\mathrm{D}}{\rightarrow}Z. \end{aligned}$$

Remark 6

Theorem 1 of Mason [7] contains the added assumption that \(k_{n}\rightarrow \infty \), as \(n\rightarrow \infty \). An examination of its proof shows that this assumption is unnecessary. We also note in passing that Theorem 1 implies Theorem 2.

For the convenience of the reader, we state the following results. Corollary 1 is from Mason [7]. The proof of Corollary 2 follows that of Corollary 1 after some obvious changes of notation.

Corollary 1

Assume that \(W\left ( t\right ) \), t ≥ 0, is a subordinator with drift 0, whose Lévy tail function \(\overline {\Lambda }_{W}\) is regularly varying at zero with index − α, where 0 < α < 1. For any sequence of positive integers \(\left \{ k_{n}\right \} _{n\geq 1}\) converging to infinity and sequence of positive constants \(\left \{ t_{n}\right \} _{n\geq 1}\) satisfying \(k_{n}/t_{n}\rightarrow \infty \), we have, as \(n\rightarrow \infty \),

$$\displaystyle \begin{aligned} \frac{\widetilde{W}^{\left( k_{n}\right) }\left( t_{n}\right) -t_{n} \mu_{W}\left( k_{n}/t_{n}\right) }{\sqrt{t_{n}}\sigma_{W}\left( k_{n} /t_{n}\right) }\overset{\mathrm{D}}{\rightarrow}\sqrt{\frac{2}{\alpha}}Z. {} \end{aligned} $$
(27)

Corollary 2

Assume that \(W\left ( t\right ) \), t ≥ 0, is a subordinator with drift 0, whose Lévy tail function \(\overline {\Lambda }_{W}\) is regularly varying at infinity with index − α, where 0 < α < 1. For any sequence of positive integers \(\left \{ k_{n}\right \} _{n\geq 1}\) converging to infinity and sequence of positive constants \(\left \{ t_{n}\right \} _{n\geq 1}\) satisfying \(k_{n}/t_{n}\rightarrow 0\), as \(n\rightarrow \infty \), we have (27) .

The subordinated subordinator introduced in Special Case 1 above satisfies the conditions of Corollary 1, and the subordinated subordinator in Special Case 2 above fulfills the conditions of Corollary 2. Consider the two cases.

  • To see this, notice that in Special Case 1, by (25), W necessarily has Lévy tail function on \(\left ( 0,\infty \right ) \)

    $$\displaystyle \begin{aligned} \overline{\Lambda}_{W}(y)=\Gamma\left( 1-\alpha_{1}\alpha_{2}\right) y^{-\alpha_{1}\alpha_{2}}{\mathbf{1}}_{\left\{ y>0\right\} }, \end{aligned}$$

    for 0 <  α1, α2 < 1, which is regularly varying at zero with index − α, where 0 < α = α1α2 < 1. In this case, from Corollary 1, we get (27) as long as \(k_{n}\rightarrow \infty \) and \(k_{n}/t_{n}\rightarrow \infty \), as \(n\rightarrow \infty .\)

  • In Special Case 2, observe that \(W=V_{\alpha }\left ( X\right ) ,\) with 0 < α < 1, where \(V_{\alpha }=\left ( V_{\alpha }\left ( t\right ) ,t\geq 0\right ) \) is an α-stable process with Laplace transform (10), \(X=\left ( X\left ( s\right ) ,s\geq 0\right ) \) is a standard Gamma process, and Vα and X are independent. The process \(r^{-1/\alpha }W\left ( r\right ) \) has Laplace transform \(\left ( 1+\theta ^{\alpha }/r\right ) ^{-r}\), for θ ≥ 0, which converges to \(\exp \left ( -\theta ^{\alpha }\right ) \) as \(r\rightarrow \infty \). This implies that for all t > 0

    $$\displaystyle \begin{aligned} r^{-1/\alpha}W\left( rt\right) \overset{\mathrm{D}}{\rightarrow}V_{\alpha }\left( t\right) \text{, as }r\rightarrow\infty\text{.} \end{aligned}$$

    By part (ii) of Theorem 15.14 of Kallenberg [5] and (10), for all x > 0,

    $$\displaystyle \begin{aligned} r\overline{\Lambda}_{W}\left( r^{1/\alpha}x\right) \rightarrow\Gamma\left( 1-\alpha\right) x^{-\alpha}\text{, as }r\rightarrow \infty\text{.} \end{aligned}$$

    This implies that W has a Lévy tail function \(\overline {\Lambda }_{W}(y)\) on \(\left ( 0,\infty \right ) \), which is regularly varying at infinity with index − α, 0 < α < 1. In this case, by Corollary 2, we can conclude (27) as long as \(k_{n}\rightarrow \infty \) and \(k_{n}/t_{n}\rightarrow 0\), as \(n\rightarrow \infty .\)
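
The Laplace transform convergence used in the second bullet, \(\left ( 1+\theta ^{\alpha }/r\right ) ^{-r}\rightarrow \exp \left ( -\theta ^{\alpha }\right ) \), is easy to confirm numerically (the grid of θ values, the choice of α, and the tolerances are our own):

```python
import math

ALPHA = 0.6  # any fixed 0 < ALPHA < 1 will do

def lt_scaled_w(theta, r):
    """Laplace transform of r^{-1/alpha} W(r): (1 + theta^alpha / r)^{-r}."""
    return (1.0 + theta ** ALPHA / r) ** (-r)

# As r grows, this approaches exp(-theta^alpha), the transform of V_alpha(1):
for theta in (0.1, 1.0, 3.0):
    limit = math.exp(-theta ** ALPHA)
    for r in (1e2, 1e4, 1e6):
        print(theta, r, abs(lt_scaled_w(theta, r) - limit))  # shrinks in r
```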

4 Appendix 1

Recall the notation of Special Case 1. Let \(V_{\alpha _{1}}\), \(V_{\alpha _{2}}\), and \(\left ( \Gamma _{k}\right ) _{k\geq 1}\) be independent and \(W=V_{\alpha _{1}}\left ( V_{\alpha _{2}}\right ) \). For any t > 0, let \(m_{\alpha _{1}}^{\left ( 1\right ) }(V_{\alpha _{2}}\left ( t\right ) )\geq m_{\alpha _{1}}^{\left ( 2\right ) }(V_{\alpha _{2}}\left ( t\right ) )\geq \cdots \) denote the ordered jumps of \(V_{\alpha _{1}}\) on the interval \(\left [ 0,V_{\alpha _{2}}\left ( t\right ) \right ] \). They satisfy

$$\displaystyle \begin{aligned} \left( m_{\alpha_{1}}^{\left( k\right) }(V_{\alpha_{2}}\left( t\right) )\right) _{k\geq1}\overset{\mathrm{D}}{=}\left( c\left( \alpha_{1}\right) \left( \frac{\Gamma_{k}}{V_{\alpha_{2}}\left( t\right) }\right) ^{-1/\alpha_{1}}\right) _{k\geq1}. \end{aligned}$$

Let \(m_{W}^{\left ( 1\right ) }(t)\geq m_{W}^{\left ( 2\right ) }(t)\geq \cdots \) denote the ordered jumps of W on the interval \(\left [ 0,t\right ] \). In this case, for each t > 0

$$\displaystyle \begin{aligned} \left( m_{W}^{\left( k\right) }(t)\right) _{k\geq1}\overset{\mathrm{D}} {=}\left( c\left( \alpha_{1}\alpha_{2}\right) \left( \frac{\Gamma_{k}} {t}\right) ^{-1/\left( \alpha_{1}\alpha_{2}\right) }\right) _{k\geq1}. \end{aligned}$$

Observe that for all t > 0

$$\displaystyle \begin{aligned} W\left( t\right) =\sum_{0<s\leq t}\Delta W\left( s\right) =\sum_{0<s\leq V_{\alpha_{2}}\left( t\right) }\Delta V_{\alpha_{1}}\left( s\right) =\sum_{k=1}^{\infty}m_{\alpha_{1}}^{\left( k\right) }(V_{\alpha_{2}}\left( t\right) ). {} \end{aligned} $$
(28)

Note that though (28) holds, \(\left ( m_{\alpha _{1}}^{\left ( k\right ) }(V_{\alpha _{2}}\left ( t\right ) )\right ) _{k\geq 1}\) is not equal in distribution to \(\left ( m_{W}^{\left ( k\right ) }(t)\right ) _{k\geq 1}\). To see this, notice that

$$\displaystyle \begin{aligned} \left( \frac{m_{\alpha_{1}}^{\left( k\right) }(V_{\alpha_{2}}\left( t\right) )}{m_{\alpha_{1}}^{\left( 1\right) }(V_{\alpha_{2}}\left( t\right) )}\right) _{k\geq1}\overset{\mathrm{D}}{=}\left( \left( \frac{\Gamma_{k}}{\Gamma_{1}}\right) ^{-1/\alpha_{1}}\right) _{k\geq1}, {} \end{aligned} $$
(29)

whereas

$$\displaystyle \begin{aligned} \left( \frac{m_{W}^{\left( k\right) }(t)}{m_{W}^{\left( 1\right) } (t)}\right) _{k\geq1}\overset{\mathrm{D}}{=}\left( \left( \frac{\Gamma_{k} }{\Gamma_{1}}\right) ^{-1/\left( \alpha_{1}\alpha_{2}\right) }\right) _{k\geq1}. {} \end{aligned} $$
(30)

Obviously, the sequences (29) and (30) are not equal in distribution and thus

$$\displaystyle \begin{aligned} \left( m_{\alpha_{1}}^{\left( k\right) }(V_{\alpha_{2}}\left( t\right) )\right) _{k\geq1}\overset{\mathrm{D}}{\neq}\left( m_{W}^{\left( k\right) }(t)\right) _{k\geq1}. \end{aligned}$$
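
The distinction between (29) and (30) is visible directly in a simulation sketch (the values of α1, α2 and the number of jumps are our own choices): the scale constants c(⋅) and the random time cancel from the ratios, leaving deterministic powers of \(\Gamma _{k}/\Gamma _{1}\):

```python
import random

random.seed(12345)
a1, a2 = 0.7, 0.6  # hypothetical stable indices, 0 < a1, a2 < 1

# Gamma_k: arrival times of a unit-rate Poisson process (cumulative Exp(1) sums)
gam, s = [], 0.0
for _ in range(10):
    s += random.expovariate(1.0)
    gam.append(s)

# Ratios of ordered jumps as in (29) and (30): the constants c(.) and the
# random time V_{a2}(t) cancel, leaving deterministic powers of Gamma_k/Gamma_1.
r_inner = [(g / gam[0]) ** (-1.0 / a1) for g in gam]        # jumps of V_{a1}, eq. (29)
r_w = [(g / gam[0]) ** (-1.0 / (a1 * a2)) for g in gam]     # jumps of W, eq. (30)

print(r_inner[:4])
print(r_w[:4])
# Since 1/(a1*a2) > 1/a1, the jump ratios of W decay strictly faster.
```

Because the two ratio sequences are different deterministic functions of the same \(\left ( \Gamma _{k}\right ) _{k\geq 1}\), their laws differ, reflecting the displayed non-identity in distribution.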

5 Appendix 2

A straightforward modification of the proof of Theorem 1 of Rényi [10] gives the following Anscombe-type theorem for Lévy processes.

Theorem A

Let \(X=\left ( X\left ( t\right ) ,t\geq 0\right ) \) be a mean zero Lévy process with \(EX^{2}\left ( t\right ) =t\) for t ≥ 0, and let \(\eta =\left ( \eta \left ( t\right ) ,t>0\right ) \) be a random process such that \(\eta \left ( t\right ) >0\) for all t > 0. If for some c > 0, \(\eta \left ( t\right ) /t\overset {\mathrm {P}}{\rightarrow }c\), as \(t\rightarrow \infty \), then

$$\displaystyle \begin{aligned} X\left( \eta\left( t\right) \right) /\sqrt{\eta\left( t\right) } \overset{\mathrm{D}}{\rightarrow}Z. \end{aligned}$$

A version of Anscombe’s theorem is given as Theorem 3.1 in Gut [2]. In our notation, it requires that \(\left \{ \eta \left ( t\right ) ,t\geq 0\right \} \) be a family of stopping times.
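
A minimal Monte Carlo sketch of Theorem A (entirely our own construction, not from the text: the mean zero Lévy process is a centered unit-rate Poisson process, for which \(EX^{2}\left ( t\right ) =t\), and η is an independent standard Gamma process, so \(\eta \left ( t\right ) /t\overset {\mathrm {P}}{\rightarrow }1\)):

```python
import math
import random

random.seed(7)

def poisson(lam):
    """Knuth's method; adequate for the moderate means used here."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

T = 50.0   # "large time": eta(T) ~ Gamma(T, 1), so eta(T)/T is close to 1
N = 3000   # Monte Carlo replications
samples = []
for _ in range(N):
    eta = random.gammavariate(T, 1.0)      # random time eta(T)
    x = poisson(eta) - eta                 # X(eta) for X(t) = N(t) - t
    samples.append(x / math.sqrt(eta))     # the self-normalized statistic

mean = sum(samples) / N
var = sum(v * v for v in samples) / N - mean ** 2
print(mean, var)  # near 0 and 1: the statistic is approximately N(0, 1)
```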

Example A

Let \(Y_{\alpha }=\left ( Y_{\alpha }\left ( y\right ) ,y\geq 0\right ) \) be the Lévy process with Laplace transform (19) and mean and variance functions (20). We see that

$$\displaystyle \begin{aligned} X:=\left( X\left( y\right) =\frac{Y_{\alpha}\left( y\right) -\beta_{\alpha}y}{\gamma_{\alpha}},y\geq0\right) \end{aligned}$$

defines a mean zero Lévy process with \(EX^{2}\left ( y\right ) =y \) for y ≥ 0. Now let \(\eta =\left ( \eta \left ( t\right ) , t\geq 0\right ) \) be a standard Gamma process independent of X. Notice that \(\eta \left ( t\right ) /t\overset {\mathrm {P}}{\rightarrow }1\), as \(t\rightarrow \infty \). Applying Theorem A, we get as \(t\rightarrow \infty \),

$$\displaystyle \begin{aligned} X\left( \eta\left( t\right) \right) /\sqrt{\eta\left( t\right) } \overset{\mathrm{D}}{\rightarrow}Z. \end{aligned}$$

In particular, since for each integer k ≥ 1, \(\eta \left ( k\right ) \overset {\mathrm {D}}{=}\Gamma _{k}\), this implies that (24) holds for any sequence of positive integers \(\left ( k_{n}\right ) _{n\geq 1}\) converging to infinity as \(n\rightarrow \infty \), i.e.,

$$\displaystyle \begin{aligned} \frac{Y_{\alpha}\left( \Gamma_{k_{n}}\right) -\beta_{\alpha}\Gamma_{k_{n}} }{\gamma_{\alpha}\sqrt{\Gamma_{k_{n}}}}\overset{\mathrm{D}}{\rightarrow}Z. \end{aligned}$$