1 Introduction

We shall begin by stating our results for trimmed subordinators. Special cases of our main result for subordinators, Theorem 1 below, have already been proved by Ipsen, Maller and Resnick (IMR) [6], using classical methods. See, in particular, their Theorem 4.1. Our approach is based on a powerful distributional approximation result of Zaitsev [11], which, as we shall see in Sect. 5, extends to general trimmed Lévy processes. We shall first establish some basic notation.

Let \(V_{t}\), \(t\ge 0\), be a subordinator with Lévy measure \(\varLambda \) on \( {\mathbb {R}}^{+}=\left( 0,\infty \right) \) and drift 0. Define the tail function \({\overline{\varLambda }}(x)=\varLambda ((x,\infty ))\), for \(x>0\), and for \(u>0\) let

$$\begin{aligned} \varphi (u)=\sup \{x:{\overline{\varLambda }}(x)>u\}, \end{aligned}$$
(1)

where \(\sup \varnothing :=0\).

Remark 1

For later use, we observe that we always have

$$\begin{aligned} \varphi (u)\rightarrow 0{, \text {as }}u\rightarrow \infty . \end{aligned}$$
(2)

Notice that (2) is trivially true if \({\overline{\varLambda }}(0+)=c>0,\) since in this case, for all \(u>c\), the set \(\{x:{\overline{\varLambda }} (x)>u\}\) is empty, so by the convention \(\sup \varnothing :=0\) we have \( \varphi (u)=0\) for \(u>c\). The limit (2) also holds whenever

$$\begin{aligned} {\overline{\varLambda }}(0+)=\infty . \end{aligned}$$
(3)

To see this, assume (3) and choose any sequence \(x_{n}\searrow 0\) such that \(u_{n}:={\overline{\varLambda }}(x_{n})>0\) for \(n\ge 1.\) Clearly, \(u_{n} \rightarrow \infty \) as \(n\rightarrow \infty \). By definition (1), since \({\overline{\varLambda }}\) is nonincreasing on \(\left( 0,\infty \right) \) and \(x_{n}\notin \{x:{\overline{\varLambda }}(x)>u_{n}\}\), necessarily \(\varphi (u_{n})\le x_{n}\); thus, since \(\varphi \) is nonincreasing, (2) holds. Furthermore, when (3) holds,

$$\begin{aligned} \varphi (u)>0\,\,{ \text {for all }}u>0. \end{aligned}$$
(4)

To verify this, choose \(0<y_{n+1}<y_{n}\) such that \(y_{n}\searrow 0\), as \(n\rightarrow \infty \), and \(v_{n+1}={\overline{\varLambda }}(y_{n+1})>v_{n} ={\overline{\varLambda }}(y_{n})\) for \(n\ge 1.\) Therefore, \(y_{n+1}\in \{x:\overline{\varLambda }(x)>v_{n}\}\) and hence \(\varphi (v_{n})\ge y_{n+1}>0\) for all \(n\ge 1\). Since \(v_{n}\nearrow \infty \), we have (4).
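Definition (1) is a generalized inverse, which for a concrete nonincreasing tail function can be computed by bisection. The following sketch is not part of the paper; the stable-type tail \({\overline{\varLambda }}(x)=x^{-\alpha }\) used to check it (with \(\varphi (u)=u^{-1/\alpha }\)) is our illustrative assumption.

```python
import math

def phi(tail, u, x_hi=1e12, iters=200):
    """Generalized inverse (1): phi(u) = sup{x : tail(x) > u} for a
    nonincreasing tail function, computed by bisection on (0, x_hi].
    Returns 0.0 when the set is empty (the convention sup emptyset := 0)."""
    if tail(x_hi) > u:
        # sup lies beyond x_hi; this illustrative sketch does not handle it
        return x_hi
    lo, hi = 0.0, x_hi
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if tail(mid) > u:   # mid belongs to the set, so phi(u) >= mid
            lo = mid
        else:
            hi = mid
    return lo

# Illustrative stable-type tail: tail(x) = x^{-alpha}, phi(u) = u^{-1/alpha}
alpha = 0.5
stable_tail = lambda x: x ** (-alpha)
```

A bounded tail, say \(2\cdot 1\{x<1\}\), reproduces the convention in Remark 1: for \(u>2\) the set is empty and the routine returns 0.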

Recall that the Lévy measure of a subordinator satisfies

$$\begin{aligned} \int _{0}^{1}x\varLambda (\mathrm {d}x)<\infty {, \text {equivalently, for all }}y>0,\int _{y}^{\infty }\varphi \left( x\right) \mathrm {d}x<\infty . \end{aligned}$$
(5)

The subordinator \(V_{t}\), \(t\ge 0\), has Laplace transform

$$\begin{aligned} E\exp \left( -\lambda V_{t}\right) =\exp \left( -t\varPhi \left( \lambda \right) \right) , \lambda \ge 0, \end{aligned}$$
(6)

where

$$\begin{aligned} \varPhi \left( \lambda \right) =\int _{0}^{\infty }\left( 1-\exp \left( -\lambda v\right) \right) \varLambda \left( \mathrm {d}v\right) , \end{aligned}$$

which, after a change of variables, can be written as

$$\begin{aligned} =\int _{0}^{\infty }\left( 1-\exp \left( -\lambda \varphi \left( u\right) \right) \right) \mathrm {d}u. \end{aligned}$$
(7)

For any \(t>0\), let \(m_{t}^{\left( 1\right) }\ge m_{t}^{\left( 2\right) }\ge \cdots \) denote the ordered jump sequence of \(V\) on the interval \(\left[ 0,t \right] \). Let \(\omega _{1},\omega _{2},\ldots \) be i.i.d. exponential random variables with parameter 1 and for each \(n\ge 1\) let \(\varGamma _{n}=\omega _{1}+\cdots +\omega _{n}\). It is well known that for each \(t>0\)

$$\begin{aligned} \left( m_{t}^{\left( k\right) }\right) _{k\ge 1}\overset{\mathrm {D}}{=} \left( \varphi \left( \frac{\varGamma _{k}}{t}\right) \right) _{k\ge 1}. \end{aligned}$$
(8)

See, for instance, equation (1.3) in IMR [6] and the references therein. It can also be inferred from a general representation for subordinators due to Rosiński [9].
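Representation (8) also yields a direct way to simulate the largest jumps. The sketch below is ours, not the paper's, and assumes the illustrative stable-type case \({\overline{\varLambda }}(x)=x^{-\alpha }\), so that \(\varphi (u)=u^{-1/\alpha }\).

```python
import random

def largest_jumps(t, k, alpha=0.5, seed=42):
    """Simulate (m_t^{(1)}, ..., m_t^{(k)}) via representation (8):
    m_t^{(j)} =_D phi(Gamma_j / t), where Gamma_j is a sum of j standard
    exponentials and, in this illustrative stable-type case,
    phi(u) = u^{-1/alpha}."""
    rng = random.Random(seed)
    gamma = 0.0
    jumps = []
    for _ in range(k):
        gamma += rng.expovariate(1.0)   # Gamma_j = omega_1 + ... + omega_j
        jumps.append((gamma / t) ** (-1.0 / alpha))
    return jumps
```

Because \(\varGamma _{j}\) is increasing and \(\varphi \) is nonincreasing, the output comes out in decreasing order automatically, matching the ordering of the \(m_{t}^{\left( k\right) }\).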

Set \(V_{t}^{\left( 0\right) }:=V_{t}\) and for any integer \(k\ge 1\) consider the trimmed subordinator

$$\begin{aligned} V_{t}^{\left( k\right) }:=V_{t}-m_{t}^{\left( 1\right) }-\dots -m_{t}^{\left( k\right) }, \end{aligned}$$
(9)

which, on account of (8), gives for any integer \(k\ge 1\) and \(t>0\)

$$\begin{aligned} V_{t}^{\left( k\right) }\overset{\mathrm {D}}{=}\sum _{i=k+1}^{\infty }\varphi \left( \frac{\varGamma _{i}}{t}\right) =:{\widetilde{V}}_{t}^{\left( k\right) }. \end{aligned}$$
(10)

Set for any \(y>0\)

$$\begin{aligned} \mu \left( y\right) :=\int _{y}^{\infty }\varphi \left( x\right) \mathrm {d}x\,\, { \text {and} }\,\,\sigma ^{2}\left( y\right) :=\int _{y}^{\infty }\varphi ^{2}\left( x\right) \mathrm {d}x. \end{aligned}$$

We see by Remark 1 that (3) implies that

$$\begin{aligned} \sigma ^{2}\left( y\right)>0\,\,{ \hbox {for all} }\,\,y>0. \end{aligned}$$
(11)

Throughout these notes, Z, \(Z_{1},Z_{2}\) denote standard normal random variables. Here is our self-standardized central limit theorem (SSCLT) for trimmed subordinators. In Examples 4 and 5 we show that our theorem implies Theorem 4.1 and Remark 4.1 of IMR [6], who treat the case when \( t_{n}=t\) is fixed and \(k_{n}\rightarrow \infty \).

Theorem 1

Assume that \({\overline{\varLambda }}(0+)=\infty \). For any sequence of positive integers \(\left\{ k_{n}\right\} _{n\ge 1}\) converging to infinity and sequence of positive constants \( \left\{ t_{n}\right\} _{n\ge 1}\) satisfying

$$\begin{aligned} \frac{\sqrt{t_{n}}\sigma \left( \varGamma _{k_{n}}/t_{n}\right) }{\varphi \left( \varGamma _{k_{n}}/t_{n}\right) }\overset{\mathrm {P}}{\rightarrow }\infty ,{ \hbox {as} }\,\,n\rightarrow \infty , \end{aligned}$$
(12)

we have uniformly in x, as \(n\rightarrow \infty \),

$$\begin{aligned} \left| {\mathbb {P}}\left\{ \frac{{\widetilde{V}}_{t_{n}}^{\left( k_{n}\right) }-t_{n}\mu \left( \varGamma _{k_{n}}/t_{n}\right) }{\sqrt{t_{n}} \sigma \left( \varGamma _{k_{n}}/t_{n}\right) }\le x|\varGamma _{k_{n}}\right\} - {\mathbb {P}}\left\{ Z\le x\right\} \right| \overset{\mathrm {P}}{ \rightarrow }0, \end{aligned}$$
(13)

which implies as \(n\rightarrow \infty \)

$$\begin{aligned} \frac{{\widetilde{V}}_{t_{n}}^{\left( k_{n}\right) }-t_{n}\mu \left( \varGamma _{k_{n}}/t_{n}\right) }{\sqrt{t_{n}}\sigma \left( \varGamma _{k_{n}}/t_{n}\right) }\overset{\mathrm {D}}{\rightarrow }Z. \end{aligned}$$
(14)

Corollary 1

Assume that \(V_{t}\), \(t\ge 0\), is a subordinator with drift 0, whose Lévy tail function \({\overline{\varLambda }}\) is regularly varying at zero with index \(-\alpha \), where \( 0<\alpha <1\). For any sequence of positive integers \(\left\{ k_{n}\right\} _{n\ge 1}\) converging to infinity and sequence of positive constants \(\left\{ t_{n}\right\} _{n\ge 1}\) satisfying \( k_{n}/t_{n}\rightarrow \infty \), we have as \(n\rightarrow \infty \),

$$\begin{aligned} \frac{{\widetilde{V}}_{t_{n}}^{\left( k_{n}\right) }-t_{n}\mu \left( k_{n}/t_{n}\right) }{\sqrt{t_{n}}\sigma \left( k_{n}/t_{n}\right) }\overset{ \mathrm {D}}{\rightarrow }\sqrt{\frac{2}{\alpha }}Z. \end{aligned}$$
(15)

Remark 2

Notice that whenever

$$\begin{aligned} \liminf _{w\rightarrow \infty }\int _{w}^{\infty }\varphi ^{2}\left( x\right) \mathrm {d}x/\left( w\varphi ^{2}\left( w\right) \right) =:\beta >0, \end{aligned}$$
(16)

\(\varGamma _{k_{n}}/t_{n}\overset{\mathrm {P}}{\rightarrow }\infty \) and \( k_{n}\rightarrow \infty \), then

$$\begin{aligned} \sqrt{\varGamma _{k_{n}}t_{n}\sigma ^{2}\left( \varGamma _{k_{n}}/t_{n}\right) /\left( \varGamma _{k_{n}}\varphi ^{2}\left( \varGamma _{k_{n}}/t_{n}\right) \right) } =\frac{\sqrt{t_{n}}\sigma \left( \varGamma _{k_{n}}/t_{n}\right) }{\varphi \left( \varGamma _{k_{n}}/t_{n}\right) }\overset{\mathrm {P}}{\rightarrow }\infty , \end{aligned}$$

and thus (12) holds. In particular, (16) is satisfied whenever \(\varphi \) is regularly varying at infinity with index \(-1/\alpha \), where \(0<\alpha <2\).

Using the following change of variables formula: for \(p\ge 1\) and \(r>0\), whenever the integrals exist,

$$\begin{aligned} \int _{0}^{\varphi \left( r\right) }x^{p}\varLambda \left( \mathrm {d}x\right) =\int _{r}^{\infty }\varphi ^{p}\left( u\right) \mathrm {d}u, \end{aligned}$$
(17)

(for (17), see p. 301 of Brémaud [3]) one readily sees that (16) is fulfilled whenever the Feller class at zero condition holds (e.g., Maller and Mason [8]):

$$\begin{aligned} \limsup _{x\downarrow 0}\frac{x^{2}{\overline{\varLambda }}(x)}{ \int _{0}^{x}u^{2}\varLambda (\mathrm {d}u)}<\infty . \end{aligned}$$
(18)

(For more details, refer to Example 2.)
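As a sanity check on (17), in the illustrative stable case \(\varLambda \left( \mathrm {d}x\right) =\alpha x^{-\alpha -1}\mathrm {d}x\) (so \(\varphi (u)=u^{-1/\alpha }\)), both sides can be computed numerically; this special case is our assumption, not taken from the text.

```python
def lhs_17(p, r, alpha, n=200000):
    """Left side of (17) in the stable case Lambda(dx) = alpha x^{-alpha-1} dx:
    the integral over (0, phi(r)) of x^p Lambda(dx), by the midpoint rule."""
    top = r ** (-1.0 / alpha)                      # phi(r) = r^{-1/alpha}
    h = top / n
    return sum(alpha * ((i + 0.5) * h) ** (p - alpha - 1.0) * h
               for i in range(n))

def rhs_17(p, r, alpha, n=200000, cutoff=100.0):
    """Right side of (17): the integral over (r, cutoff) of phi(u)^p du,
    with phi(u) = u^{-1/alpha}; the tail beyond `cutoff` is negligible here."""
    h = (cutoff - r) / n
    return sum((r + (i + 0.5) * h) ** (-p / alpha) * h for i in range(n))
```

For \(p=2\), \(r=1\), \(\alpha =1/2\) both integrals equal \(1/3\), and the two quadratures agree to the expected accuracy.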

Remark 3

Corollary 1 implies part of Theorem 9.1 of IMR [6], namely, whenever for \(0<\alpha <1\),

$$\begin{aligned} {\overline{\varLambda }}(x)=x^{-\alpha }, \quad x>0, \end{aligned}$$

then for each fixed \(t>0\), as \(n\rightarrow \infty \),

$$\begin{aligned} \frac{{\widetilde{V}}_{t}^{\left( n\right) }-t\mu \left( n/t\right) }{\sqrt{t} \sigma \left( n/t\right) }\overset{\mathrm {D}}{\rightarrow }\sqrt{\frac{2}{ \alpha }}Z. \end{aligned}$$
(19)

The first part of their Theorem 9.1 can be shown to be equivalent to (19).

Remark 4

The analog of Corollary 1 for a sequence of i.i.d. positive random variables \(\xi _{1},\xi _{2},\dots \) in the domain of attraction of a stable law of index \(0<\alpha <2\) says that as \( n\rightarrow \infty \),

$$\begin{aligned} \frac{\sum _{i=r_{n}+1}^{n}\xi _{n}^{\left( i\right) }-nc\left( r_{n}/n\right) }{\sqrt{n}a\left( r_{n}/n\right) }\overset{\mathrm {D}}{ \rightarrow }\sqrt{\frac{2}{2-\alpha }}Z, \end{aligned}$$

where for each \(n\ge 2\), \(\xi _{n}^{\left( 1\right) }\ge \dots \ge \xi _{n}^{\left( n\right) }\) denote the order statistics of \(\xi _{1},\dots ,\xi _{n}\), \(\left\{ r_{n}\right\} _{n\ge 1}\) is a sequence of positive integers \(1\le r_{n}\le n\) satisfying \(r_{n}\rightarrow \infty \) and \( r_{n}/n\rightarrow 0\) as \(n\rightarrow \infty \), and \(c\left( r_{n}/n\right) \) and \(a\left( r_{n}/n\right) \) are appropriate centering and norming constants. For details refer to S. Csörgő, Horváth and Mason [4]. The proof of our Corollary 1 borrows ideas from the proof of their Theorem 1.

2 Preliminaries for Proofs

In this section, we collect some facts that are needed in our proofs. Lemmas 1 and 2 are elementary; however, for completeness we indicate proofs.

2.1 A Useful Special Case of a Result of Zaitsev [11]

We shall be making use of the following special case of Theorem 1.2 of Zaitsev [11], which in this paper we shall call the Zaitsev Fact.

Fact (Zaitsev [11]) Let Y be an infinitely divisible mean 0 and variance 1 random variable with Lévy measure \(\varLambda \) and let Z be a standard normal random variable. Assume that the support of \(\varLambda \) is contained in a closed ball with center 0 and radius \(\tau >0\). Then there exist universal positive constants \(C_{1}\) and \(C_{2}\) such that for any \(\lambda >0\)

$$\begin{aligned} \varPi \left( Y,Z;\lambda \right) \le C_{1}\exp \left( -\frac{\lambda }{ C_{2}\tau }\right) , \end{aligned}$$

where

$$\begin{aligned} \varPi \left( Y,Z;\lambda \right) :=\sup _{B\in {\mathcal {B}}}\max \left\{ \mathbb { P}\left\{ Y\in B\right\} -{\mathbb {P}}\left\{ Z\in B^{\lambda }\right\} , {\mathbb {P}}\left\{ Z\in B\right\} -{\mathbb {P}}\left\{ Y\in B^{\lambda }\right\} \right\} , \end{aligned}$$

with \(B^{\lambda }=\left\{ y\in {\mathbb {R}}\text {:}\inf _{x\in B}\left| x-y\right| <\lambda \right\} \) for \(B\in {\mathcal {B}}\), the Borel sets of \({\mathbb {R}}\).

Notice that under the conditions of the Zaitsev Fact, for all x, \(\lambda >0\) and \(\varepsilon >\varPi \left( Y,Z;\lambda \right) \),

$$\begin{aligned} {\mathbb {P}}\left\{ Y\le x\right\} \le {\mathbb {P}}\left\{ Z\le x+\lambda \right\} +\varepsilon \end{aligned}$$

and

$$\begin{aligned} {\mathbb {P}}\left\{ Z\le x-\lambda \right\} \le {\mathbb {P}}\left\{ Y\le x\right\} +\varepsilon , \end{aligned}$$

and thus

$$\begin{aligned} {\mathbb {P}}\left\{ Z\le x-\lambda \right\} -\varepsilon \le {\mathbb {P}}\left\{ Y\le x\right\} \le {\mathbb {P}}\left\{ Z\le x+\lambda \right\} +\varepsilon \text {.} \end{aligned}$$

In particular, the Zaitsev Fact says that for all \(x\in {\mathbb {R}}\) and \(\lambda >0\),

$$\begin{aligned}&{\mathbb {P}}\left\{ Z\le x-\lambda \right\} -C_{1}\exp \left( -\frac{\lambda }{ C_{2}\tau }\right) \le {\mathbb {P}}\left\{ Y\le x\right\} \\&\quad \le {\mathbb {P}}\left\{ Z\le x+\lambda \right\} +C_{1}\exp \left( -\frac{\lambda }{C_{2}\tau }\right) . \end{aligned}$$

2.2 Moments of a Positive Random Variable

Given \(t>0\), let \(X_{t}\) be a positive random variable with Laplace transform

$$\begin{aligned} \varPsi _{X_{t}}\left( \lambda \right) :=E\exp \left( -\lambda X_{t}\right) =\exp \left( -t\varPhi \left( \lambda \right) \right) , \end{aligned}$$

where \(\varPhi \) is the Laplace exponent

$$\begin{aligned} \varPhi \left( \lambda \right) =\int _{0}^{\infty }\left( 1-\exp \left( -\lambda \varphi \left( u\right) \right) \right) \mathrm {d}u\text {,} \end{aligned}$$

and \(\varphi \) a nonincreasing positive function on \(\left( 0,\infty \right) \) such that \(\varphi \left( u\right) \rightarrow 0\) as \(u\rightarrow \infty \) . Assume that

$$\begin{aligned} \mu :=\int _{0}^{\infty }\varphi \left( u\right) \mathrm {d}u<\infty \,\,{ \text {and} }\,\,\sigma ^{2}:=\int _{0}^{\infty }\varphi ^{2}\left( u\right) \mathrm {d}u<\infty , \end{aligned}$$

which implies that \(\varPhi \left( \lambda \right) <\infty \) for all \(\lambda >0\) and that \(\varPhi \) is twice differentiable on \(\left( 0,\infty \right) .\) Differentiating \(\varPsi _{X_{t}}\left( \lambda \right) \) with respect to \(\lambda \) twice and evaluating \(\varPsi _{X_{t}}^{\prime }\left( 0+\right) \) and \(\varPsi _{X_{t}}^{\prime \prime }\left( 0+\right) \), we get the following moments:

Lemma 1

Under the above assumptions,

$$\begin{aligned} \hbox {E}X_t=t\mu \,\,{ \hbox {and}}\,\, \mathrm{Var}X_t=t\sigma ^{2}. \end{aligned}$$
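Lemma 1 can be checked numerically: since \(\hbox {E}X_{t}=-\varPsi _{X_{t}}^{\prime }\left( 0+\right) =t\varPhi ^{\prime }\left( 0+\right) =t\mu \), a forward difference of the Laplace exponent at 0 should recover \(t\mu \). The choice \(\varphi \left( u\right) =\mathrm {e}^{-u}\) (so \(\mu =1\)) is ours, purely for illustration.

```python
import math

def Phi(lam, n=200000, cutoff=60.0):
    """Laplace exponent Phi(lam) = int_0^inf (1 - exp(-lam*phi(u))) du,
    computed by the midpoint rule for the illustrative choice
    phi(u) = exp(-u); the tail beyond `cutoff` is negligible."""
    h = cutoff / n
    return sum((1.0 - math.exp(-lam * math.exp(-(i + 0.5) * h))) * h
               for i in range(n))

# E X_t = t * Phi'(0+) = t * mu, with mu = int_0^inf exp(-u) du = 1 here,
# so a forward difference at 0 should return approximately t.
t, lam = 3.0, 1e-3
mean_approx = t * Phi(lam) / lam
```

The small bias of order \(\lambda \sigma ^{2}/2\) in the forward difference is visible but well below the tolerance used below.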

2.3 An Asymptotic Independence Result

We shall need the following elementary asymptotic independence result.

Lemma 2

Let \(\left( X_{n},Y_{n}\right) _{n\ge 1}\) be a sequence of pairs of real-valued random variables on the same probability space, and for each \(n\ge 1\) let \(\phi _{n}\) be a measurable function. Suppose that, for distribution functions F and G, for all continuity points x of F and y of G,

$$\begin{aligned} {\mathbb {P}}\left\{ X_{n}\le x|Y_{n}\right\} \overset{\mathrm {P}}{\rightarrow }F\left( x\right) \,\,{ \hbox {and }}\,\,{\mathbb {P}}\left\{ \phi _{n}\left( Y_{n}\right) \le y\right\} \rightarrow G\left( y\right) , \end{aligned}$$
(20)

then

$$\begin{aligned} {\mathbb {P}}\left\{ X_{n}\le x,\phi _{n}\left( Y_{n}\right) \le y\right\} \rightarrow F\left( x\right) G\left( y\right) . \end{aligned}$$
(21)

Proof

Notice that

$$\begin{aligned}&\left| {\mathbb {P}}\left\{ X_{n}\le x,\phi _{n}\left( Y_{n}\right) \le y\right\} -F\left( x\right) G\left( y\right) \right| \\&\quad \le \left| E\left[ \left( {\mathbb {P}}\left\{ X_{n}\le x|Y_{n}\right\} -F\left( x\right) \right) 1\left\{ \phi _{n}\left( Y_{n}\right) \le y\right\} \right] \right| \\&\qquad +\left| F\left( x\right) {\mathbb {P}}\left\{ \phi _{n}\left( Y_{n}\right) \le y\right\} -F(x)G(y)\right| \\&\quad \le E\left| {\mathbb {P}}\left\{ X_{n}\le x|Y_{n}\right\} -F\left( x\right) \right| +\left| {\mathbb {P}}\left\{ \phi _{n}\left( Y_{n}\right) \le y\right\} -G(y)\right| , \end{aligned}$$

which by (20) converges to zero. \(\square \)

3 Proof of Subordinator Results

3.1 Proof of Theorem 1

For each \(t>0\) and \(y>0\), consider the random variable

$$\begin{aligned} T\left( t,y\right) =\sum _{i=1}^{\infty }\varphi \left( \frac{y}{t}+\frac{ \varGamma _{i}^{\prime }}{t}\right) , \end{aligned}$$

with \(\left( \varGamma _{i}^{\prime }\right) _{i\ge 1}\) \(\overset{\mathrm {D}}{= }\) \(\left( \varGamma _{i}\right) _{i\ge 1},\) which has Laplace transform

$$\begin{aligned} \varUpsilon _{t,y}\left( \lambda \right) :=E\exp \left( -\lambda T\left( t,y\right) \right) =\exp \left( -t\varPhi _{t,y}\left( \lambda \right) \right) , \end{aligned}$$

where \(\varPhi _{t,y}\left( \lambda \right) \) is the Laplace exponent,

$$\begin{aligned} \varPhi _{t,y}\left( \lambda \right) =\int _{0}^{\infty }\left( 1-\exp \left( -\lambda \varphi \left( \frac{y}{t}+u\right) \right) \right) \mathrm {d}u \text {.} \end{aligned}$$

Introducing the Lévy measure \(\varLambda _{y/t}\) defined on \(\left( 0,\infty \right) \) by the tail function

$$\begin{aligned} {\overline{\varLambda }}_{y/t}(u)=\left\{ \begin{array}{c} {\overline{\varLambda }}(u)-\frac{y}{t}, \quad { \hbox {for} }\,\,0<u<\varphi \left( \frac{y}{t}\right) \\ 0,\quad \quad { \hbox {for} }\,\,u\ge \varphi \left( \frac{y}{t}\right) \end{array} \right. , \end{aligned}$$

we see that

$$\begin{aligned} \sup \{x:{\overline{\varLambda }}_{y/t}(x)>u\}= & {} \sup \left\{ x:{\overline{\varLambda }}(x)- \frac{y}{t}>u\right\} \\= & {} \varphi \left( \frac{y}{t}+u\right) \end{aligned}$$

and thus

$$\begin{aligned} \varPhi _{t,y}\left( \lambda \right) =\int _{0}^{\infty }\left( 1-\exp \left( -\lambda v\right) \right) \varLambda _{y/t}\left( \mathrm {d}v\right) . \end{aligned}$$

Clearly, \(T\left( t,y\right) \) is an infinitely divisible random variable and the support of \(\varLambda _{y/t}\) is contained in \(\left[ 0,\varphi (y/t) \right] \). Applying Lemma 1, one finds that

$$\begin{aligned} \hbox {E}T\left( t,y\right) =t\int _{y/t}^{\infty }\varphi \left( u\right) \mathrm {d} u=:t\mu \left( \frac{y}{t}\right) \end{aligned}$$

and

$$\begin{aligned} \hbox {Var}T\left( t,y\right) =t\int _{y/t}^{\infty }\varphi ^{2}\left( u\right) \mathrm {d}u=:t\sigma ^{2}\left( \frac{y}{t}\right) . \end{aligned}$$

Note that (3) implies (11) and thus for all \(y>0\), \( \sigma ^{2}\left( \frac{y}{t}\right) >0\). For each \(t>0\) and \(y>0\), consider the standardized version of \(T\left( t,y\right) \)

$$\begin{aligned} S\left( t,y\right) =\frac{T\left( t,y\right) -\hbox {E}T\left( t,y\right) }{\sqrt{ \hbox {Var}T\left( t,y\right) }}. \end{aligned}$$

We can write

$$\begin{aligned} S\left( t,y\right) =\frac{T\left( t,y\right) -t\mu \left( \frac{y}{t}\right) }{\sqrt{t}\sigma \left( \frac{y}{t}\right) }\text {.} \end{aligned}$$

Now \(S\left( t,y\right) \) is an infinitely divisible random variable with

$$\begin{aligned} \hbox {E}S\left( t,y\right) =0\,\, { \text {and }}\,\,\hbox {Var}S\left( t,y\right) =1, \end{aligned}$$

whose Lévy measure has support contained in \(\left[ 0,\varphi (y/t)/\left( \sqrt{t}\sigma \left( \frac{y}{t}\right) \right) \right] \). Applying the Zaitsev Fact to the infinitely divisible random variable \(S\left( t,y\right) \), we get for any \(t>0,\) \(y>0\) and \(\lambda >0\) and for universal positive constants \(C_{1} \) and \(C_{2}\)

$$\begin{aligned} \varPi \left( S\left( t,y\right) ,Z;\lambda \right) \le C_{1}\exp \left( - \frac{\lambda \sqrt{t}\sigma \left( \frac{y}{t}\right) }{C_{2}\varphi (y/t)} \right) . \end{aligned}$$

This implies that whenever \(\left\{ t_{n}\right\} _{n\ge 1}\) is a sequence of positive constants and \(Y_{k_{n}}\) is a sequence of positive random variables such that each \(Y_{k_{n}}\) is independent of \( \left( \varGamma _{i}^{\prime }\right) _{i\ge 1}\) and

$$\begin{aligned} \frac{\sqrt{t_{n}}\sigma \left( Y_{k_{n}}/t_{n}\right) }{\varphi \left( Y_{k_{n}}/t_{n}\right) }\overset{\mathrm {P}}{\rightarrow }\infty ,{ \text {as} }\,\,n\rightarrow \infty , \end{aligned}$$
(22)

then uniformly in x

$$\begin{aligned} \left| {\mathbb {P}}\left\{ S\left( t_{n},Y_{k_{n}}\right) \le x|Y_{k_{n}}\right\} -{\mathbb {P}}\left\{ Z\le x\right\} \right| \overset{ \mathrm {P}}{\rightarrow }0{,\,\, \text {as} }\,\, n\rightarrow \infty , \end{aligned}$$
(23)

and thus we have

$$\begin{aligned} \left| {\mathbb {P}}\left\{ S\left( t_{n},Y_{k_{n}}\right) \le x\right\} - {\mathbb {P}}\left\{ Z\le x\right\} \right| \rightarrow 0{,\,\, \text {as } }\,\, n\rightarrow \infty . \end{aligned}$$
(24)

By choosing \(Y_{k_{n}}=\varGamma _{k_{n}}\) and independent of \( \left( \varGamma _{i}^{\prime }\right) _{i\ge 1},\) with \(\left( \varGamma _{i}^{\prime }\right) _{i\ge 1}\overset{\mathrm {D}}{=}\) \(\left( \varGamma _{i}\right) _{i\ge 1}\), we get by (10) that

$$\begin{aligned} \frac{{\widetilde{V}}_{t_{n}}^{\left( k_{n}\right) }-t_{n}\mu \left( \frac{ \varGamma _{k_{n}}}{t_{n}}\right) }{\sqrt{t_{n}}\sigma \left( \frac{\varGamma _{k_{n}}}{t_{n}}\right) }\overset{\mathrm {D}}{=}\frac{\sum _{i=1}^{\infty }\varphi (\left( Y_{k_{n}}+\varGamma _{i}^{\prime }\right) /t_{n})-t_{n}\mu \left( \frac{Y_{k_{n}}}{t_{n}}\right) }{\sqrt{t_{n}}\sigma \left( \frac{ Y_{k_{n}}}{t_{n}}\right) } \\ =\frac{T\left( t_{n},Y_{k_{n}}\right) -t_{n}\mu \left( \frac{Y_{k_{n}}}{t_{n} }\right) }{\sqrt{t_{n}}\sigma \left( \frac{Y_{k_{n}}}{t_{n}}\right) } =S\left( t_{n},Y_{k_{n}}\right) . \end{aligned}$$

Keeping (12) in mind, (13) and (14) follow from (23) and (24), respectively. \(\square \)

3.2 Proof of Corollary 1

The proof will be a consequence of Theorem 1 and Lemma 2. Note that \(V_{t}\) has Laplace transform

$$\begin{aligned} E\exp \left( -\lambda V_{t}\right) =\exp \left( -t\varPhi \left( \lambda \right) \right) , \lambda \ge 0, \end{aligned}$$

of the form given by (6). Since \({\overline{\varLambda }}\) is assumed to be regularly varying at 0 with index \(-\alpha \), \(0<\alpha <1\), the \( \varphi \) in (7) is regularly varying at \(\infty \) with index \( -1/\alpha \) and thus for \(x>0\),

$$\begin{aligned} \varphi (x)=L\left( x\right) x^{-1/\alpha }\text {, } \end{aligned}$$
(25)

where \(L\left( x\right) \) is slowly varying at infinity. This implies that as \(z\rightarrow \infty ,\)

$$\begin{aligned} \mu \left( z\right) =\int _{z}^{\infty }\varphi \left( u\right) \mathrm {d} u\sim a_{\alpha }L\left( z\right) z^{-1/\alpha +1}\text {, } \end{aligned}$$
(26)

and

$$\begin{aligned} \sigma ^{2}\left( z\right) =\int _{z}^{\infty }\varphi ^{2}\left( u\right) \mathrm {d}u\sim b_{\alpha }^{2}L^{2}\left( z\right) z^{-2/\alpha +1}, \end{aligned}$$
(27)

where \(a_{\alpha }=\alpha /\left( 1-\alpha \right) \) and \(b_{\alpha }^{2}=\alpha /\left( 2-\alpha \right) \).
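The constants \(a_{\alpha }\) and \(b_{\alpha }^{2}\) in (26) and (27) can be verified numerically in the pure-power case \(\varphi \left( x\right) =x^{-1/\alpha }\) (slowly varying part \(L\equiv 1\)), where the asymptotic relations hold exactly up to the quadrature truncation; this special case is our assumption for illustration.

```python
def tail_int(power, z, cutoff=1e4, n=400000):
    """Midpoint-rule approximation of the integral of u^{-power}
    over [z, cutoff]."""
    h = (cutoff - z) / n
    return sum(((z + (i + 0.5) * h) ** (-power)) * h for i in range(n))

# Pure-power case phi(x) = x^{-1/alpha}: check the constants in (26), (27).
alpha = 0.5
a_alpha = alpha / (1.0 - alpha)     # expected constant in (26); = 1 here
b_alpha2 = alpha / (2.0 - alpha)    # expected constant in (27); = 1/3 here
z = 2.0
mu_z = tail_int(1.0 / alpha, z)     # mu(z)      = int_z^inf u^{-1/alpha} du
sig2_z = tail_int(2.0 / alpha, z)   # sigma^2(z) = int_z^inf u^{-2/alpha} du
```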

With this notation, we can write

$$\begin{aligned} \frac{t_{n}\mu \left( \frac{\varGamma _{k_{n}}}{t_{n}}\right) -t_{n}\mu \left( \frac{k_{n}}{t_{n}}\right) }{\sqrt{t_{n}}\sigma \left( \frac{\varGamma _{k_{n}}}{ t_{n}}\right) }=-\frac{t_{n}\int _{k_{n}/t_{n}}^{\varGamma _{k_{n}}/t_{n}}\varphi \left( u\right) \mathrm {d}u}{\sqrt{t_{n}}\sigma \left( \frac{\varGamma _{k_{n}}}{ t_{n}}\right) }, \end{aligned}$$

which equals

$$\begin{aligned} -\frac{\varphi \left( k_{n}/t_{n}\right) \left( \varGamma _{k_{n}}-k_{n}\right) }{ \sqrt{t_{n}}\sigma \left( \frac{\varGamma _{k_{n}}}{t_{n}}\right) }-\frac{\sqrt{ t_{n}}}{\sigma \left( \frac{\varGamma _{k_{n}}}{t_{n}}\right) }\int _{k_{n}/t_{n}}^{\varGamma _{k_{n}}/t_{n}}\left( \varphi \left( u\right) -\varphi \left( k_{n}/t_{n}\right) \right) \mathrm {d}u. \end{aligned}$$
(28)

Claim 1

As \(n\rightarrow \infty \),

$$\begin{aligned} \sigma \left( \varGamma _{k_{n}}/t_{n}\right) /\sigma \left( k_{n}/t_{n}\right) \overset{\mathrm {P}}{\rightarrow }1. \end{aligned}$$

Proof

This follows from the fact that \(\varGamma _{k_{n}}/k_{n}\overset{ \mathrm {P}}{\rightarrow }1\), \(k_{n}/t_{n}\rightarrow \infty \) and \(\sigma \left( z\right) \) is regularly varying at \(\infty \) with index \(-1/\alpha +1/2.\) \(\square \)

Claim 2

As \(n\rightarrow \infty \),

$$\begin{aligned} \sqrt{k_{n}}\varphi \left( k_{n}/t_{n}\right) /\left( \sqrt{t_{n}} \sigma \left( k_{n}/t_{n}\right) \right) \rightarrow b_{\alpha }^{-1}=\sqrt{ \frac{2-\alpha }{\alpha }}. \end{aligned}$$

Proof

This is a consequence of \(k_{n}/t_{n}\rightarrow \infty \) combined with (25) and (27), which together say

$$\begin{aligned} \sqrt{k_{n}}\varphi \left( k_{n}/t_{n}\right) \sim \sqrt{k_{n}}L\left( k_{n}/t_{n}\right) \left( k_{n}/t_{n}\right) ^{-1/\alpha } \end{aligned}$$

and

$$\begin{aligned} \sqrt{t_{n}}\sigma \left( k_{n}/t_{n}\right) \sim b_{\alpha }\sqrt{t_{n}} L\left( k_{n}/t_{n}\right) \left( k_{n}/t_{n}\right) ^{-1/\alpha +1/2}. \end{aligned}$$

\(\square \)

Claim 3

As \(n\rightarrow \infty \),

$$\begin{aligned} t_{n}\int _{k_{n}/t_{n}}^{\varGamma _{k_{n}}/t_{n}}\left( \varphi \left( u\right) -\varphi \left( k_{n}/t_{n}\right) \right) \mathrm {d}u/\left( \sqrt{t_{n}} \sigma \left( k_{n}/t_{n}\right) \right) \overset{\mathrm {P}}{\rightarrow }0. \end{aligned}$$

Proof

Since

$$\begin{aligned} \left( \varGamma _{k_{n}}-k_{n}\right) /\sqrt{k_{n}}\overset{\mathrm {D}}{ \rightarrow }Z{, \text {as} }\,\,n\rightarrow \infty , \end{aligned}$$
(29)

for any \(0<\varepsilon <1\) there exists a \(c>0\) such that

$$\begin{aligned} {\mathbb {P}}\left\{ \varGamma _{k_{n}}\in \left[ k_{n}-c\sqrt{k_{n}},k_{n}+c\sqrt{ k_{n}}\right] \right\} >1-\varepsilon \end{aligned}$$

for all large enough n. When \(\varGamma _{k_{n}}\in \left[ k_{n}-c\sqrt{k_{n}} ,k_{n}+c\sqrt{k_{n}}\right] \),

$$\begin{aligned}&\frac{t_{n}}{\sqrt{k_{n}}\varphi \left( k_{n}/t_{n}\right) }\left| \int _{k_{n}/t_{n}}^{\varGamma _{k_{n}}/t_{n}}\left( \varphi \left( u\right) -\varphi \left( k_{n}/t_{n}\right) \right) \mathrm {d}u\right| \\&\quad \le \frac{t_{n}}{\sqrt{k_{n}}\varphi \left( k_{n}/t_{n}\right) }\int _{\left( k_{n}-c\sqrt{k_{n}}\right) /t_{n}}^{\left( k_{n}+c\sqrt{k_{n}}\right) /t_{n}} \left[ \varphi \left( \frac{ k_{n}-c\sqrt{k_{n}}}{t_{n}}\right) -\varphi \left( \frac{ k_{n}+c\sqrt{k_{n}}}{t_{n}}\right) \right] \mathrm {d}u\\&\quad =\frac{2c}{\varphi \left( k_{n}/t_{n}\right) }\left[ \varphi \left( \frac{ k_{n}-c\sqrt{k_{n}}}{t_{n}}\right) -\varphi \left( \frac{ k_{n}+c\sqrt{k_{n} }}{t_{n}}\right) \right] . \end{aligned}$$

Now for any \(\lambda >1\), for all large enough n

$$\begin{aligned}&\frac{2c}{\varphi \left( k_{n}/t_{n}\right) }\left[ \varphi \left( \frac{ k_{n}-c\sqrt{k_{n}}}{t_{n}}\right) -\varphi \left( \frac{ k_{n}+c\sqrt{k_{n} }}{t_{n}}\right) \right] \\&\quad \le \frac{2c}{\varphi \left( k_{n}/t_{n}\right) }\left[ \varphi \left( \frac{ k_{n}}{\lambda t_{n}}\right) -\varphi \left( \frac{\lambda k_{n}}{t_{n}} \right) \right] , \end{aligned}$$

which converges to

$$\begin{aligned} 2c\left( \lambda ^{1/\alpha }-\lambda ^{-1/\alpha }\right) . \end{aligned}$$

Since \(\lambda >1\) can be made arbitrarily close to 1 and \(\varepsilon >0\) can be chosen arbitrarily close to 0, we see using Claim 2 that Claim 3 is true. \(\square \)

Putting everything together, keeping (29) in mind, we conclude that as \( n\rightarrow \infty \),

$$\begin{aligned} \frac{t_{n}\mu \left( \frac{\varGamma _{k_{n}}}{t_{n}}\right) -t_{n}\mu \left( \frac{k_{n}}{t_{n}}\right) }{\sqrt{t_{n}}\sigma \left( \frac{\varGamma _{k_{n}}}{ t_{n}}\right) }\overset{\mathrm {D}}{\rightarrow }-\sqrt{\frac{2-\alpha }{ \alpha }}Z. \end{aligned}$$
(30)

Choose \(Y_{k_{n}}=\varGamma _{k_{n}}\) and independent of \(\left( \varGamma _{i}^{\prime }\right) _{i\ge 1}\) \(\overset{\mathrm {D}}{=}\left( \varGamma _{i}\right) _{i\ge 1}\). We get by Remark 2 that (12) holds, which implies (13). Thus, by (13) and Lemma 2, for independent standard normal random variables \(Z_{1}\) and \(Z_{2}\), as \( n\rightarrow \infty \)

$$\begin{aligned}&\frac{T\left( t_{n},Y_{k_{n}}\right) -t_{n}\mu \left( \frac{k_{n}}{t_{n}} \right) }{\sqrt{t_{n}}\sigma \left( \frac{Y_{k_{n}}}{t_{n}}\right) } \\&\quad =\frac{T\left( t_{n},Y_{k_{n}}\right) -t_{n}\mu \left( \frac{Y_{k_{n}}}{t_{n}} \right) }{\sqrt{t_{n}}\sigma \left( \frac{Y_{k_{n}}}{t_{n}}\right) }+\frac{ t_{n}\mu \left( \frac{Y_{k_{n}}}{t_{n}}\right) -t_{n}\mu \left( \frac{k_{n}}{ t_{n}}\right) }{\sqrt{t_{n}}\sigma \left( \frac{Y_{k_{n}}}{t_{n}}\right) } \overset{\mathrm {D}}{\rightarrow }Z_{1}+\sqrt{\frac{2-\alpha }{\alpha }}Z_{2}. \end{aligned}$$

Noting that \(\sigma \left( \frac{Y_{k_{n}}}{t_{n}}\right) /\sigma \left( \frac{ k_{n}}{t_{n}}\right) \overset{\mathrm {P}}{\rightarrow }1\) and \(Z_{1}+\sqrt{ \frac{2-\alpha }{\alpha }}Z_{2}\overset{\mathrm {D}}{=}\sqrt{\frac{2}{\alpha }} Z \), we get as \(n\rightarrow \infty \),

$$\begin{aligned} \frac{T\left( t_{n},Y_{k_{n}}\right) -t_{n}\mu \left( \frac{k_{n}}{t_{n}} \right) }{\sqrt{t_{n}}\sigma \left( \frac{k_{n}}{t_{n}}\right) }\overset{ \mathrm {D}}{\rightarrow }\sqrt{\frac{2}{\alpha }}Z\text {,} \end{aligned}$$

which since

$$\begin{aligned} \frac{T\left( t_{n},Y_{k_{n}}\right) -t_{n}\mu \left( \frac{k_{n}}{t_{n}} \right) }{\sqrt{t_{n}}\sigma \left( \frac{k_{n}}{t_{n}}\right) }\overset{ \mathrm {D}}{=}\frac{{\widetilde{V}}_{t_{n}}^{\left( k_{n}\right) }-t_{n}\mu \left( \frac{k_{n}}{t_{n}}\right) }{\sqrt{t_{n}} \sigma \left( \frac{k_{n}}{t_{n}}\right) }, \end{aligned}$$

gives (15). \(\square \)

4 Examples of Theorem 1

In the following examples, we always assume that (3) holds.

Example 1

There always exist \(k_{n}\rightarrow \infty \) and \(t_{n}\rightarrow \infty \) such that (12) holds. For example, for any \(k_{n}\rightarrow \infty \), let \(t_{n}=\rho k_{n}\) for some \(\rho >0\). Since \(\varGamma _{k_{n}}/k_{n}\overset{\mathrm {P}}{\rightarrow }1\), we have \( \varGamma _{k_{n}}/t_{n}\overset{\mathrm {P}}{\rightarrow }1/\rho \), which implies that

$$\begin{aligned} {\mathbb {P}}\left\{ \frac{\sqrt{t_{n}}\sigma \left( \varGamma _{k_{n}}/t_{n}\right) }{\varphi \left( \varGamma _{k_{n}}/t_{n}\right) }>\frac{ \sqrt{\rho k_{n}}\sigma \left( 2/\rho \right) }{\varphi \left( 1/\left( 2\rho \right) \right) }\right\} \rightarrow 1 \end{aligned}$$

and thus (12) holds; hence, by Theorem 1, we conclude (13) and (14).

Example 2

Assume the Feller class at zero condition (18). Noting that \({\overline{\varLambda }}(\varphi (y)-)\ge y\), we get from (18) that

$$\begin{aligned} \limsup _{y\rightarrow \infty }\frac{\varphi ^{2}\left( y\right) y}{ \int _{0}^{\varphi \left( y\right) }u^{2}\varLambda (\mathrm {d}u)}\le & {} \limsup _{y\rightarrow \infty }\frac{\varphi ^{2}\left( y\right) y}{ \int _{0}^{\varphi \left( y\right) -}u^{2}\varLambda (\mathrm {d}u)}\\\le & {} \limsup _{y\rightarrow \infty }\frac{\varphi ^{2}\left( y\right) {\overline{\varLambda }}(\varphi \left( y\right) -)}{\int _{0}^{\varphi \left( y\right) -}u^{2}\varLambda (\mathrm {d}u)}<\infty , \end{aligned}$$

which says

$$\begin{aligned} \limsup _{y\rightarrow \infty }\frac{\varphi ^{2}\left( y\right) y}{ \int _{y}^{\infty }\varphi ^{2}\left( x\right) \mathrm {d}x}<\infty . \end{aligned}$$

This implies that

$$\begin{aligned} \liminf _{y\rightarrow \infty }\int _{y}^{\infty }\varphi ^{2}\left( x\right) \mathrm {d}x/\left( y\varphi ^{2}\left( y\right) \right) =:\beta >0. \end{aligned}$$

Therefore, as in Remark 2, we see that if \(\varGamma _{k_{n}}/t_{n}\overset{ \mathrm {P}}{\rightarrow }\infty \) and \(k_{n}\rightarrow \infty \), then (12) holds and thus by Theorem 1, we infer (13) and (14).

Example 3

Let

$$\begin{aligned} {\overline{\varLambda }}(x)=\left\{ \begin{array}{ll} \log \left( 1/x\right) ,&{} \quad 0<x<1 \\ 0,&{} \quad x\ge 1 \end{array} \right. \end{aligned}$$

Clearly, \(\varphi (u)=\exp \left( -u\right) \), \(0<u<\infty \), and for \(0<x<1\)

$$\begin{aligned} \frac{x^{2}{\overline{\varLambda }}(x)}{\int _{0}^{x}u^{2}\varLambda (\mathrm {d}u)} =2\log \left( 1/x\right) , \end{aligned}$$

which \(\nearrow \infty \), as \(x\searrow 0\). Thus, the Feller class at zero condition does not hold. However, the condition for the domain of attraction of the normal at infinity holds (e.g., Doney and Maller [5] and Maller and Mason [7]), since for all \(x\ge 1\)

$$\begin{aligned} \frac{x^{2}{\overline{\varLambda }}(x)}{\int _{0}^{x}u^{2}\varLambda (\mathrm {d}u)}=0 \text {.} \end{aligned}$$

In this example for all \(y>0\) and \(t>0\),

$$\begin{aligned} \frac{\sigma \left( y/t\right) }{\varphi \left( y/t\right) }=\frac{1}{\sqrt{2 }}. \end{aligned}$$

Thus, for any sequence of positive integers \(k_{n}\rightarrow \infty \) and sequence of positive constants \(t_{n}\rightarrow \infty \)

$$\begin{aligned} \frac{\sqrt{t_{n}}\sigma \left( \varGamma _{k_{n}}/t_{n}\right) }{\varphi \left( \varGamma _{k_{n}}/t_{n}\right) }\overset{\mathrm {P}}{\rightarrow } \infty ,{ \text {as} }\,\,n\rightarrow \infty , \end{aligned}$$

which says that (12) is satisfied and hence by Theorem 1, (13) and (14) hold.
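The closed forms in this example are easy to confirm numerically. The following is a small sanity check (ours, not part of the argument; the function names are ad hoc): since \({\overline{\varLambda }}(x)=\log (1/x)\) has density \(u^{-1}\) on \((0,1)\), we have \(\int _{0}^{x}u^{2}\varLambda (\mathrm {d}u)=x^{2}/2\), and \(\sigma ^{2}(y)=\int _{y}^{\infty }\mathrm{e}^{-2x}\,\mathrm {d}x=\mathrm{e}^{-2y}/2\).

```python
import math

# Numeric sanity check for Example 3 (function names are ours).
def Lambda_bar(x):
    # tail function: log(1/x) on (0,1), 0 for x >= 1
    return math.log(1.0 / x) if 0 < x < 1 else 0.0

def phi(u):
    # generalized inverse of Lambda_bar: phi(u) = exp(-u)
    return math.exp(-u)

def m2(x, n=200000):
    # midpoint Riemann sum for int_0^x u^2 Lambda(du); Lambda has density
    # 1/u on (0,1), so the integrand is u^2 * (1/u) = u and the integral is x^2/2
    h = x / n
    return sum((i * h + h / 2) * h for i in range(n))

x = 0.05
ratio = x ** 2 * Lambda_bar(x) / m2(x)
assert abs(ratio - 2 * math.log(1 / x)) < 1e-6   # equals 2 log(1/x)

# sigma^2(y) = int_y^inf phi(x)^2 dx = exp(-2y)/2, so sigma(y)/phi(y) = 1/sqrt(2)
y = 3.0
sigma = math.sqrt(math.exp(-2 * y) / 2)
assert abs(sigma / phi(y) - 1 / math.sqrt(2)) < 1e-12
```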

Next we show that as a special case of Theorem 1, we get Theorem 4.1 and Remark 4.1 of IMR [6], who consider the case when \(t_{n}=t\) is fixed and \(k_{n}\rightarrow \infty .\) Their Theorem 4.1 and Remark 4.1 say that whenever there exist constants \(a_{n}\) and \(b_{n}\) such that for a nondegenerate random variable \(\varDelta \)

$$\begin{aligned} \frac{m_{1}^{\left( n\right) }-b_{n}}{a_{n}}\overset{\mathrm {D}}{=}\frac{ \varphi \left( \varGamma _{n}\right) -b_{n}}{a_{n}}\overset{\mathrm {D}}{ \rightarrow }\varDelta \end{aligned}$$
(31)

then for all \(t>0\) the following self-standardized trimmed central limit theorem (CLT) holds

$$\begin{aligned} \frac{{\widetilde{V}}_{t}^{\left( n\right) }-t\mu \left( \varGamma _{n}/t\right) }{ \sqrt{t}\sigma \left( \varGamma _{n}/t\right) }\overset{\mathrm {D}}{\rightarrow }Z \text {.} \end{aligned}$$

Remark 5

We should note that in the statements of Theorem 4.1 and Remark 4.1 of IMR [6], “\(\mu \)” should be “\(t\mu \)”, and, in equation (4.2), “\(\lim _{r\rightarrow \infty }\)” should be removed and “\(=\varPhi \left( x\right) \), \(x\in {\mathbb {R}}\).” should be replaced by “\(\Rightarrow \varPhi \left( x\right) \), \(x\in {\mathbb {R}}\), as \(r\rightarrow \infty .\)”

IMR [6] have shown in their Theorem 2.1 that for (31) to hold it is necessary and sufficient that there exist functions \(a\left( r\right) \) and \(b\left( r\right) \) of \(r>0\) such that whenever \(a\left( r\right) x+b\left( r\right) >0\)

$$\begin{aligned} \lim _{r\rightarrow \infty }\frac{r-{\overline{\varLambda }}(a\left( r\right) x+b\left( r\right) )}{\sqrt{r}}=h\left( x\right) , \end{aligned}$$
(32)

where \(h\left( x\right) \in {\mathbb {R}}\) is a nondecreasing function having, for some \(\gamma \le 0\), the form

$$\begin{aligned} h\left( x\right) =\left\{ \begin{array}{ll} 2x,&{} \quad \hbox {if}\, \gamma =0, \\ -\frac{2}{\gamma }\log \left( 1-\gamma x\right) , &{}\quad \hbox {when}\,\gamma <0\,\text {and}\,\,1-\gamma x>0. \end{array} \right. \end{aligned}$$
(33)

In this case, \(P\left\{ \varDelta \le x\right\} =P\left\{ Z\le h\left( x\right) \right\} \).

The next two examples show that whenever (31) holds and hence (32) with \(h\left( x\right) \) as in (33) is satisfied, then special cases of condition (12) are fulfilled. Example 4 treats the case when \(\gamma <0\) in (33), and Example 5 considers the case when \( \gamma =0\) in (33).

Example 4 [The case \(\gamma <0\) in (33)] From Proposition 4.1 of IMR [6], we get that whenever (31) holds and we have (33) for some \(\gamma <0\) then

$$\begin{aligned} \int _{0}^{x}u^{2}\varLambda (\mathrm {d}u)\sim \frac{2x^{2}\sqrt{\overline{ \varLambda }(x)}}{\left| \gamma \right| }{, \text {as} }\,\,x\downarrow 0\text {,} \end{aligned}$$
(34)

and \({\overline{\varLambda }}(x)\) is slowly varying at 0. Since \(\varphi \left( z\right) \searrow 0\) as \(z\nearrow \infty \), this implies that as y/t converges to \(\infty \),

$$\begin{aligned}&\frac{t\sigma ^{2}\left( y/t\right) }{\varphi ^{2}\left( y/t\right) }=\frac{t\int _{0}^{\varphi \left( y/t\right) }u^{2}\varLambda (\mathrm {d}u)}{\varphi ^{2}\left( y/t\right) }\\&\quad \sim \frac{2t\sqrt{{\overline{\varLambda }}(\varphi \left( y/t\right) )}}{ \left| \gamma \right| }, \ \ { \text {as} \ \ }\,y/t\rightarrow \infty \text {,} \end{aligned}$$

which by (3), for each fixed \(t>0\), converges to infinity as \( y\rightarrow \infty \). We readily see then that (12) is satisfied, whenever \(k_{n}\rightarrow \infty \) and \(t_{n}=t>0\), fixed, as \(n\rightarrow \infty \), and thus by Theorem 1, (13) and (14) hold. Notice that a Lévy measure that satisfies (34) is not in the Feller class at zero.

Example 5 [The case \(\gamma =0\) in (33)] Using the notation from Proposition 4.2 of IMR [6], set

$$\begin{aligned} H\left( r\right) =e^{2\sqrt{r}}\text {, }V\left( x\right) =\varphi \left( \frac{1}{4}\left( \log x\right) ^{2}\right) { \text {and} }g_{2}\left( e^{2\sqrt{r}}\right) =\varphi ^{2}\left( r\right) \sqrt{r}\text {.} \end{aligned}$$

Proposition 4.2 of IMR [6] says when \(\gamma =0\) in (33) that for a function \(\pi _{2}\)

$$\begin{aligned} \int _{0}^{\varphi \left( x\right) }u^{2}\varLambda (\mathrm {d} u)=\int _{x}^{\infty }\varphi ^{2}\left( s\right) \mathrm {d}s=\pi _{2}\left( e^{2\sqrt{x}}\right) \text {,} \end{aligned}$$

which from (4.13) in IMR [6] satisfies

$$\begin{aligned} \frac{\int _{0}^{\varphi \left( x\right) }u^{2}\varLambda (\mathrm {d}u)}{\varphi ^{2}\left( x\right) \sqrt{x}}=\frac{\pi _{2}\left( e^{2\sqrt{x}}\right) }{ g_{2}\left( e^{2\sqrt{x}}\right) }\rightarrow \infty {, \text {as} }\,\,x\rightarrow \infty \text {.} \end{aligned}$$

This implies that if \(y/t\) converges to \(\infty \) and ty is bounded away from 0, then

$$\begin{aligned} \frac{t\sigma ^{2}\left( y/t\right) }{\varphi ^{2}\left( y/t\right) }=\frac{\sqrt{ty}\int _{0}^{\varphi \left( y/t\right) }u^{2}\varLambda (\mathrm {d}u)}{\varphi ^{2}\left( y/t\right) \sqrt{y/t}} \rightarrow \infty \text {.} \end{aligned}$$

Thus, if \(\varGamma _{k_{n}}/t_{n}\overset{\mathrm {P}}{\rightarrow }\infty \) and for some \(\varepsilon >0\), \({\mathbb {P}}\left\{ t_{n}\varGamma _{k_{n}}>\varepsilon \right\} \rightarrow 1\), then (12) is fulfilled and hence by Theorem 1, (13) and (14) hold. In particular, this is satisfied when \(k_{n}\rightarrow \infty \) and \( t_{n}=t>0\), fixed, as \(n\rightarrow \infty \).

5 A SSCLT for a Trimmed Lévy Process

Before we can state a SSCLT for a trimmed Lévy process, we must first establish a pointwise representation for the Lévy process that we shall consider, as well as the notation and auxiliary results needed to define what we mean by a trimmed Lévy process and to prove a SSCLT for it.

5.1 A Pointwise Representation for the Lévy Process

Let \((\varOmega ,{\mathcal {F}},{\mathbb {P}})\) be a probability space carrying a real-valued Lévy process \((X_{t})_{t\ge 0}\), with \(X_{0}=0\) and canonical triplet \((\gamma ,\sigma ^{2},\varLambda )\), where \(\gamma \in {\mathbb {R}}\), \(\sigma ^{2}\ge 0\), and \(\varLambda \) is a Lévy measure, that is, a nonnegative measure on \({\mathbb {R}}\) satisfying

$$\begin{aligned} \int _{{\mathbb {R}}{\setminus } \{0\}}(x^{2}\wedge 1)\varLambda (\mathrm {d}x)<\infty . \end{aligned}$$

For \(x>0\), put

$$\begin{aligned} {\overline{\varLambda }}_{+}(x)=\varLambda ((x,\infty ))\,\,{ \text {and} }\,\,{\overline{\varLambda }}_{-}(x)=\varLambda ((-\infty ,-x)), \end{aligned}$$
(35)

with corresponding Lévy measures \(\varLambda _{+}\) and \(\varLambda _{-}\) on \( {\mathbb {R}}^{+}=\left( 0,\infty \right) \) and set

$$\begin{aligned} {\overline{\varLambda }}(x)={\overline{\varLambda }}_{+}(x)+{\overline{\varLambda }}_{-}(x). \end{aligned}$$
(36)

We assume always that

$$\begin{aligned} {\overline{\varLambda }}_{+}(0+)={\overline{\varLambda }}_{-}(0+)=\infty . \end{aligned}$$
(37)

For \(u>0\) let

$$\begin{aligned} \varphi _{+}(u)=\sup \{x:{\overline{\varLambda }}_{+}(x)>u\}\,{ \text {and} }\, \varphi _{-}(u)=\sup \{x:{\overline{\varLambda }}_{-}(x)>u\}. \end{aligned}$$

By Remark 1, we have

$$\begin{aligned} \varphi _{+}(u)\rightarrow 0\, { \text {and} }\,\varphi _{-}(u)\rightarrow 0,\, \text {as}\,u\rightarrow \infty . \end{aligned}$$
(38)

The process \((X_{t})_{t\ge 0}\) has the representation (e.g., Bertoin [2] and Sato [10])

$$\begin{aligned} X_{t}=\sigma Z_{t}+\gamma t+X_{t}^{\left( 1\right) }+X_{t}^{\left( 2\right) }, \end{aligned}$$

with

$$\begin{aligned} X_{t}^{\left( 1\right) }:=\lim _{\varepsilon \searrow 0}\left( \sum _{0<s\le t}\varDelta X_{s}1\left\{ \varepsilon <\left| \varDelta X_{s}\right| \le 1\right\} -t\mu _{\varepsilon }\right) , \end{aligned}$$
(39)

where for \(0<\varepsilon <1\)

$$\begin{aligned} \mu _{\varepsilon }:=&\int _{{\mathbb {R}}{\setminus } \{0\}}x1\left\{ \varepsilon<\left| x\right| \le 1\right\} \varLambda \left( \mathrm {d}x\right) ,\\ X_{t}^{\left( 2\right) }:=&\sum _{0<s\le t}\varDelta X_{s}1\left\{ \left| \varDelta X_{s}\right| >1\right\} , \end{aligned}$$

and \(\left( Z_{t}\right) _{t\ge 0}\) is a standard Wiener process independent of \(\left( X_{t}^{\left( 1\right) }\right) _{t\ge 0}\) and \( \left( X_{t}^{\left( 2\right) }\right) _{t\ge 0}\). (As usual \(\varDelta X_{s} =X_{s}-X_{s-}.)\) The limit in (39) is defined as in pages 14–15 of Bertoin [2].

Decomposing further, we get

$$\begin{aligned} X_{t}=\sigma Z_{t}+\gamma t+X_{t}^{\left( 1,+\right) }+X_{t}^{\left( 1,-\right) }+X_{t}^{\left( 2,+\right) }+X_{t}^{\left( 2,-\right) }, \end{aligned}$$
(40)

with

$$\begin{aligned} X_{t}^{\left( 1,\pm \right) }=\lim _{\varepsilon \searrow 0}\left( \sum _{0<s\le t}\varDelta X_{s}1\left\{ \varepsilon <\pm \varDelta X_{s}\le 1\right\} -t\mu _{\varepsilon }^{\pm }\right) , \end{aligned}$$

where for \(0<\varepsilon <1\)

$$\begin{aligned} \mu _{\varepsilon }^{\pm }:=\pm \int _{0}^{\infty }x1\left\{ \varepsilon <x\le 1\right\} \varLambda _{\pm }(\mathrm {d}x) \end{aligned}$$

and

$$\begin{aligned} X_{t}^{\left( 2,\pm \right) }=\sum _{0<s\le t}\varDelta X_{s}1\left\{ \pm \varDelta X_{s}>1\right\} . \end{aligned}$$

For any \(t>0\), let

$$\begin{aligned} m_{t}^{\left( 1,+\right) }\ge m_{t}^{\left( 2,+\right) }\ge \cdots \end{aligned}$$

denote the ordered sequence of positive jumps of \(X\) on the interval \(\left[ 0,t\right] \), and let

$$\begin{aligned} m_{t}^{\left( 1,-\right) }\le m_{t}^{\left( 2,-\right) }\le \cdots \end{aligned}$$

denote the corresponding ordered sequence of negative jumps of \(X\) on \(\left[ 0,t\right] \). Note that the positive and negative jumps are independent. With this notation, we can write

$$\begin{aligned} X_{t}^{\left( 1,\pm \right) }=\lim _{\varepsilon \searrow 0}\left( \sum _{i=1}^{\infty }m_{t}^{\left( i,\pm \right) }1\left\{ \varepsilon <\pm m_{t}^{\left( i,\pm \right) }\le 1\right\} -t\mu _{\varepsilon }^{\pm }\right) , \end{aligned}$$

and

$$\begin{aligned} X_{t}^{\left( 2,\pm \right) }=\sum _{i=1}^{\infty }m_{t}^{\left( i,\pm \right) }1\left\{ \pm m_{t}^{\left( i,\pm \right) }>1\right\} . \end{aligned}$$

Let \(\left( \varGamma _{i}^{+}\right) _{i\ge 1}\) \(\overset{\mathrm {D}}{=}\) \(\left( \varGamma _{i}^{-}\right) _{i\ge 1}\overset{\mathrm {D}}{=}\left( \varGamma _{i}\right) _{i\ge 1}\), with \(\left( \varGamma _{i}^{+}\right) _{i\ge 1}\) and \(\left( \varGamma _{i}^{-}\right) _{i\ge 1}\) independent. It turns out that by the same arguments that lead to (8), for each \(t>0\)

$$\begin{aligned} \left( m_{t}^{\left( 1,+\right) },m_{t}^{\left( 2,+\right) },\dots \right) \overset{\mathrm {D}}{=}\left( \varphi _{+}\left( \frac{\varGamma _{1}^{+}}{t} \right) , \varphi _{+}\left( \frac{\varGamma _{2}^{+}}{t}\right) ,\dots \right) \end{aligned}$$
(41)

and

$$\begin{aligned} \left( m_{t}^{\left( 1,-\right) },m_{t}^{\left( 2,-\right) },\dots \right) \overset{\mathrm {D}}{=}\left( -\varphi _{-}\left( \frac{\varGamma _{1}^{-}}{t} \right) ,-\varphi _{-}\left( \frac{\varGamma _{2}^{-}}{t}\right) ,\dots \right) . \end{aligned}$$
(42)

Let \({\widehat{X}}_{t}^{\left( 1,\pm \right) }\) and \({\widehat{X}}_{t}^{\left( 2,\pm \right) }\) be defined as \(X_{t}^{\left( 1,\pm \right) }\) and \( X_{t}^{\left( 2,\pm \right) }\) with \(m_{t}^{\left( i,\pm \right) }\) replaced by \(\pm \varphi _{\pm }\left( \frac{\varGamma _{i}^{\pm }}{t}\right) \). We see then by (40) that for each fixed \(t\ge 0\)

$$\begin{aligned} X_{t}\overset{\mathrm {D}}{=}{\widehat{X}}_{t}:=\sigma Z_{t}+\gamma t+\widehat{X }_{t}^{\left( 1,+\right) }+{\widehat{X}}_{t}^{\left( 1,-\right) }+{\widehat{X}} _{t}^{\left( 2,+\right) }+{\widehat{X}}_{t}^{\left( 2,-\right) }, \end{aligned}$$
(43)

where \(\left( Z_{t}\right) _{t\ge 0}\) is a Wiener process independent of \( \left( \varGamma _{i}^{+}\right) _{i\ge 1}\) and \(\left( \varGamma _{i}^{-}\right) _{i\ge 1}\).
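To make the series representation (41) concrete, here is a minimal simulation sketch (ours, not from the paper), assuming the one-sided tail \({\overline{\varLambda }}_{+}(x)=1/x\), \(x>0\), whose inverse is \(\varphi _{+}(u)=1/u\); the ordered positive jumps on \(\left[ 0,t\right] \) are then realized as \(t/\varGamma _{i}^{+}\).

```python
import random

# Illustration only: simulating the ordered positive jump representation (41)
# for an assumed tail Lambda_bar_plus(x) = 1/x, x > 0, with inverse
# phi_plus(u) = 1/u, so m_t^(i,+) =D phi_plus(Gamma_i^+ / t) = t / Gamma_i^+.
def ordered_jumps(t, n, rng):
    jumps = []
    s = 0.0
    for _ in range(n):
        s += rng.expovariate(1.0)   # Gamma_i^+ : partial sums of i.i.d. Exp(1)
        jumps.append(t / s)         # phi_plus(Gamma_i^+ / t)
    return jumps                    # automatically nonincreasing

rng = random.Random(0)
jumps = ordered_jumps(t=2.0, n=10, rng=rng)
assert all(a >= b for a, b in zip(jumps, jumps[1:]))  # largest jump first
```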

Our aim is to show that for a trimmed version \({\widetilde{T}} _{t_{n}}^{(k_{n},\ell _{n})}\) of \({\widehat{X}}_{t_{n}}\), defined for suitable sequences of positive integers \(\left( k_{n}\right) _{n\ge 1}\) and \(\left( \ell _{n}\right) _{n\ge 1}\) and positive constants \(\left( t_{n}\right) _{n\ge 1}\), under appropriate regularity conditions there exist centering and norming functions \(A_{n}\left( \cdot ,\cdot \right) \) and \( B_{n}\left( \cdot ,\cdot \right) \) such that uniformly in \(x\in {\mathbb {R}}\), as \(n\rightarrow \infty \),

$$\begin{aligned} {\mathbb {P}}\left\{ \frac{{\widetilde{T}}_{t_{n}}^{(k_{n},\ell _{n})}-A_{n}\left( \varGamma _{k_{n}}^{+},\varGamma _{\ell _{n}}^{-}\right) }{ B_{n}\left( \varGamma _{k_{n}}^{+},\varGamma _{\ell _{n}}^{-}\right) }\le x|\varGamma _{k_{n}}^{+},\varGamma _{\ell _{n}}^{-}\right\} \overset{\mathrm {P}}{ \rightarrow }{\mathbb {P}}\left\{ Z\le x\right\} , \end{aligned}$$
(44)

which implies

$$\begin{aligned} \frac{{\widetilde{T}}_{t_{n}}^{(k_{n},\ell _{n})}-A_{n}\left( \varGamma _{k_{n}}^{+},\varGamma _{\ell _{n}}^{-}\right) }{B_{n}\left( \varGamma _{k_{n}}^{+},\varGamma _{\ell _{n}}^{-}\right) }\overset{\mathrm {D}}{ \rightarrow }Z\text {.} \end{aligned}$$
(45)

Statement (45) is what we call a SSCLT for a trimmed Lévy process. In order to define \({\widetilde{T}}_{t_{n}}^{(k_{n},\ell _{n})}\), specify the centering and norming functions \(A_{n}\left( \cdot ,\cdot \right) \) and \( B_{n}\left( \cdot ,\cdot \right) \), and state and prove our versions of (44) and (45) given in Theorem 2 in Sect. 5.6, we must first introduce some notation and preliminary results, which we shall do in the next four subsections.

5.2 A Useful Spectrally Positive Lévy Process

Let \(\left( P_{t}\right) _{t\ge 0}\) be a nondegenerate spectrally positive Lévy process without a normal component and having zero drift, with infinitely divisible characteristic function

$$\begin{aligned} \mathrm{E e}^{\mathrm {i}\theta P_{t}}=\mathrm{e}^{t\varUpsilon (\theta )},\quad \theta \in \mathbb { R}, \end{aligned}$$

where

$$\begin{aligned} \varUpsilon (\theta )=\int _{\left( 0,\infty \right) }\left( \mathrm{e}^{\mathrm {i}\theta x}-1-\mathrm {i}\theta x{\mathbf {1}}_{\{0<x\le 1\}}\right) \pi (\mathrm {d}x) \end{aligned}$$

and \(\pi \) is a Lévy measure on \({\mathbb {R}}^{+}\) with \(\int _{\left( 0,\infty \right) }(x^{2}\wedge 1)\pi (\mathrm {d}x)\) finite. Such a process has no negative jumps. Again we shall assume

$$\begin{aligned} {\overline{\pi }}(0+)=\infty . \end{aligned}$$
(46)

As above for \(u>0\) let \(\varphi _{\pi }(u)=\sup \{x:{\overline{\pi }}(x)>u\}\). Applying Remark 1, we see that (46) implies

$$\begin{aligned} \varphi _{\pi }(u)>0\text { for all }u>0\text { and }\lim _{u\rightarrow \infty }\varphi _{\pi }(u)=0. \end{aligned}$$
(47)

(Often in the definition of a spectrally positive Lévy process it is assumed that it is not a subordinator. See Abdel-Hameed [1].)

The process \(\left( P_{t}\right) _{t\ge 0}\) has the representation

$$\begin{aligned} P_{t}=P_{t}^{\left( 1\right) }+P_{t}^{\left( 2\right) }, \end{aligned}$$

where \(P_{t}^{\left( 1\right) }=\)

$$\begin{aligned} \lim _{\varepsilon \searrow 0}\left( \sum _{0<s\le t}\varDelta P_{s}1\left\{ \varepsilon<\varDelta P_{s}\le 1\right\} -t\int _{0}^{\infty }x1\left\{ \varepsilon <x\le 1\right\} \pi \left( \mathrm {d}x\right) \right) \end{aligned}$$
(48)

and

$$\begin{aligned} P_{t}^{\left( 2\right) }=\sum _{0<s\le t}\varDelta P_{s}1\left\{ \varDelta P_{s}>1\right\} . \end{aligned}$$
(49)

The processes \(\left( P_{t}^{\left( 1\right) }\right) _{t\ge 0}\) and \( \left( P_{t}^{\left( 2\right) }\right) _{t\ge 0}\) are independent Lévy processes. Observe that for any \(t>0\), we can write

$$\begin{aligned} P_{t}^{\left( 1\right) }\overset{\mathrm {D}}{=}{\widehat{P}}_{t}^{\left( 1\right) }, \end{aligned}$$

with \({\widehat{P}}_{t}^{\left( 1\right) }=\)

$$\begin{aligned} \lim _{\varepsilon \searrow 0}\left( \sum _{i=1}^{\infty }\varphi _{\pi }\left( \varGamma _{i}/t\right) 1\left\{ \varepsilon<\varphi _{\pi }\left( \varGamma _{i}/t\right) \le 1\right\} -t\int _{0}^{\infty }x1\left\{ \varepsilon <x\le 1\right\} \pi \left( \mathrm {d}x\right) \right) , \end{aligned}$$

where \(\left\{ \varGamma _{i}\right\} _{i\ge 1}\) is as above. Also write

$$\begin{aligned} {\widehat{P}}_{t}^{\left( 2\right) }=\sum _{i=1}^{\infty }\varphi _{\pi }\left( \varGamma _{i}/t\right) 1\left\{ \varphi _{\pi }\left( \varGamma _{i}/t\right) >1\right\} . \end{aligned}$$

For each \(t>0\), we have

$$\begin{aligned} P_{t}\overset{\mathrm {D}}{=}{\widehat{P}}_{t}^{\left( 1\right) }+{\widehat{P}} _{t}^{\left( 2\right) }. \end{aligned}$$
(50)

The random variable \({\widehat{P}}_{t}^{\left( 1\right) }\) has characteristic function

$$\begin{aligned} E\mathrm{e}^{\mathrm {i}\theta {\widehat{P}}_{t}^{\left( 1\right) }}=\mathrm{e}^{t\varUpsilon _{1}(\theta )},\quad \theta \in {\mathbb {R}}, \end{aligned}$$
(51)

where

$$\begin{aligned} \varUpsilon _{1}(\theta )=\int _{\left( 0,1\right] }\left( \mathrm{e}^{\mathrm {i}\theta x}-1-\mathrm {i}\theta x{\mathbf {1}}_{\{0<x\le 1\}}\right) \pi (\mathrm {d}x). \end{aligned}$$
(52)

5.3 A Useful Infinitely Divisible Random Variable

For each \(t>0\) and \(y>0\) with \(\left( \varGamma _{i}^{\prime }\right) _{i\ge 1} \) \(\overset{\mathrm {D}}{=}\) \(\left( \varGamma _{i}\right) _{i\ge 1}\) , consider the random variable

$$\begin{aligned} {\widehat{P}}_{t}^{\left( 1\right) }\left( y\right) =\lim _{\varepsilon \searrow 0}{\widehat{P}}_{t}^{\left( 1\right) }\left( y,\varepsilon \right) , \end{aligned}$$
(53)

where for \(0<\varepsilon <1\)

$$\begin{aligned} {\widehat{P}}_{t}^{\left( 1\right) }\left( y,\varepsilon \right) ={\widehat{P}} _{t}^{\left( 1,1\right) }\left( y,\varepsilon \right) -{\mathbf {E}}{\widehat{P}} _{t}^{\left( 1,1\right) }\left( y,\varepsilon \right) , \end{aligned}$$
(54)

with

$$\begin{aligned} {\widehat{P}}_{t}^{\left( 1,1\right) }\left( y,\varepsilon \right) =\sum _{i=1}^{\infty }\varphi _{\pi }\left( \frac{y}{t}+\frac{\varGamma _{i}^{\prime }}{t}\right) 1\left\{ \varepsilon <\varphi _{\pi }\left( \frac{y }{t}+\frac{\varGamma _{i}^{\prime }}{t}\right) \le 1\right\} \end{aligned}$$
(55)

and

$$\begin{aligned} E{\widehat{P}}_{t}^{\left( 1,1\right) }\left( y,\varepsilon \right)= & {} \int _{0}^{\infty }\varphi _{\pi }\left( \frac{y}{t}+\frac{x}{t}\right) 1\left\{ \varepsilon<\varphi _{\pi }\left( \frac{y}{t}+\frac{x}{t}\right) \le 1\right\} \mathrm {d}x\nonumber \\= & {} t\int _{0}^{\infty }\varphi _{\pi }\left( \frac{y}{t}+x\right) 1\left\{ \varepsilon <\varphi _{\pi }\left( \frac{y}{t}+x\right) \le 1\right\} \mathrm {d}x=:t\mu _{\pi }\left( \varepsilon ,\frac{y}{t}\right) . \nonumber \\ \end{aligned}$$
(56)

Also let

$$\begin{aligned} {\widehat{P}}_{t}^{\left( 2\right) }\left( y\right) =\sum _{i=1}^{\infty }\varphi _{\pi }\left( \frac{y}{t}+\frac{\varGamma _{i}^{\prime }}{t}\right) 1\left\{ \varphi _{\pi }\left( \frac{y}{t}+\frac{\varGamma _{i}^{\prime }}{t} \right) >1\right\} . \end{aligned}$$
(57)

Introduce the rate 1 Poisson process

$$\begin{aligned} N\left( x\right) =\sum _{i=1}^{\infty }1\left\{ \varGamma _{i}^{\prime }\le x\right\} ,\quad x\ge 0. \end{aligned}$$
(58)

We can write (53) as

$$\begin{aligned} \lim _{\varepsilon \searrow 0}\left( \int _{0}^{\infty }\varphi _{\pi }\left( \frac{y}{t}+\frac{x}{t}\right) 1\left\{ \varepsilon <\varphi _{\pi }\left( \frac{y}{t}+\frac{x}{t}\right) \le 1\right\} \left( N\left( \mathrm {d} x\right) -\mathrm {d}x\right) \right) . \end{aligned}$$
(59)
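The limit (59) is a compensated Poisson integral, so it has mean zero by Campbell's theorem. The following Monte Carlo sketch (an illustration under an assumed integrand \(f(x)=\mathrm{e}^{-x}\) in place of the paper's \(\varphi _{\pi }\)) checks the compensation numerically.

```python
import math
import random

# Monte Carlo check of the compensation in (59): for a rate 1 Poisson process
# with points Gamma_i' and integrable f, E sum_i f(Gamma_i') = int_0^inf f(x) dx
# (Campbell's theorem), so the compensated sum has mean 0.
# Assumed integrand f(x) = exp(-x); int_0^inf f(x) dx = 1.
def compensated_sum(rng, horizon=40.0):
    total, s = 0.0, 0.0
    while True:
        s += rng.expovariate(1.0)   # next Poisson point Gamma_i'
        if s > horizon:
            break
        total += math.exp(-s)
    return total - 1.0              # subtract the compensator int_0^inf e^{-x} dx

rng = random.Random(1)
n = 20000
avg = sum(compensated_sum(rng) for _ in range(n)) / n
assert abs(avg) < 0.05              # sample mean is near 0 (std ~ 1/sqrt(2n))
```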

Consider the Lévy measure \(\pi _{y/t}\) defined on \(\left( 0,\infty \right) \) by the tail function

$$\begin{aligned} {\overline{\pi }}_{y/t}(x)=\left\{ \begin{array}{ll} {\overline{\pi }}(x)-\frac{y}{t},&{}\quad \hbox { for }0<x<\varphi _{\pi }\left( \frac{y }{t}\right) \\ 0,&{} \quad \hbox { for }x\ge \varphi _{\pi }\left( \frac{y}{t}\right) . \end{array} \right. \end{aligned}$$

Note that for all \(u>0\)

$$\begin{aligned} \sup \left\{ x:{\overline{\pi }}_{y/t}(x)>u\right\} =\varphi _{\pi }\left( \frac{y}{t}+u\right) . \end{aligned}$$
(60)

For future reference, we record that \({\widehat{P}}_{t}^{\left( 1\right) }\left( y\right) \) has characteristic function

$$\begin{aligned} E\mathrm{e}^{\mathrm {i}\theta {\widehat{P}}_{t}^{\left( 1\right) }\left( y\right) }=\mathrm{e}^{t\varUpsilon _{1}(\theta ,y)},\quad \theta \in {\mathbb {R}}, \end{aligned}$$
(61)

where

$$\begin{aligned} \varUpsilon _{1}(\theta ,y)= & {} \int _{\left( 0,\infty \right) }\left( \mathrm{e}^{\mathrm {i} \theta x}-1-\mathrm {i}\theta x{\mathbf {1}}_{\{0<x\le 1\}}\right) \pi _{y/t}( \mathrm {d}x)\nonumber \\= & {} \int _{\left( 0,\infty \right) }\left( \mathrm{e}^{\mathrm {i}\theta \varphi _{\pi }\left( \frac{y}{t}+u\right) }-1-\mathrm {i}\theta \varphi _{\pi }\left( \frac{y}{t}+u\right) {\mathbf {1}}_{\{0<\varphi _{\pi }\left( \frac{y}{t} +u\right) \le 1\}}\right) \mathrm {d}u. \end{aligned}$$
(62)

By an examination of (61) and (62), we see that \({\widehat{P}} _{t}^{\left( 1\right) }\left( y\right) \) is an infinitely divisible random variable. Clearly, from (59), we get

$$\begin{aligned} \hbox {E}{\widehat{P}}_{t}^{\left( 1\right) }\left( y\right) =0 \end{aligned}$$

and

$$\begin{aligned}&\lim _{\varepsilon \searrow 0}\hbox {E}\left( {\widehat{P}}_{t}^{\left( 1\right) }\left( y,\varepsilon \right) \right) ^{2}=\int _{0}^{\infty }\varphi _{\pi }^{2}\left( \frac{y}{t}+\frac{x}{t}\right) 1\left\{ 0<\varphi _{\pi }\left( \frac{y}{t}+\frac{x}{t}\right) \le 1\right\} \mathrm {d}x\nonumber \\&\quad =t\int _{0}^{\infty }\varphi _{\pi }^{2}\left( \frac{y}{t}+x\right) 1\left\{ 0<\varphi _{\pi }\left( \frac{y}{t}+x\right) \le 1\right\} \mathrm {d}x\nonumber \\&\quad =t\int _{y/t}^{\infty }\varphi _{\pi }^{2}\left( u\right) 1\left\{ 0<\varphi _{\pi }\left( u\right) \le 1\right\} \mathrm {d}u=:t\sigma _{\pi }^{2}\left( y/t\right) >0, \end{aligned}$$
(63)

where the fact that \(\sigma _{\pi }^{2}\left( y/t\right) >0\) follows since (46) implies (47).
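The change of variables used in (63) (\(u=y/t+x/t\), so \(\mathrm {d}x=t\,\mathrm {d}u\)) can be checked numerically. A sketch under an assumed inverse \(\varphi _{\pi }(u)=\mathrm{e}^{-u}\) (which satisfies \(0<\varphi _{\pi }\le 1\), so the indicator is always 1): both sides then equal \((t/2)\mathrm{e}^{-2y/t}\).

```python
import math

# Check of the substitution in (63):
#   int_0^inf phi((y+x)/t)^2 dx = t * int_{y/t}^inf phi(u)^2 du.
# Assumed inverse phi(u) = exp(-u); both sides equal (t/2) * exp(-2y/t).
def lhs(y, t, n=200000, upper=200.0):
    h = upper / n                   # midpoint Riemann sum on [0, upper]
    return sum(math.exp(-2.0 * (y + (i + 0.5) * h) / t) * h for i in range(n))

y, t = 3.0, 2.0
closed_form = (t / 2.0) * math.exp(-2.0 * y / t)
assert abs(lhs(y, t) - closed_form) / closed_form < 1e-6
```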

5.4 Application of the Above Constructions

For any fixed \(y>0\) and \(t>0\), consider the tail functions defined for \(x>0\) by

$$\begin{aligned} {\overline{\varLambda }}_{y/t,+}(x)=\left\{ \begin{array}{ll} {\overline{\varLambda }}_{+}(x)-\frac{y}{t}, &{}\quad 0<x<\varphi _{+}\left( \frac{y}{t}\right) \\ 0, &{}\quad x\ge \varphi _{+}(\frac{y}{t}). \end{array} \right. \end{aligned}$$

and

$$\begin{aligned} {\overline{\varLambda }}_{y/t,-}(x)=\left\{ \begin{array}{ll} {\overline{\varLambda }}_{-}(x)-\frac{y}{t}, &{}\quad 0<x<\varphi _{-}\left( \frac{y}{t}\right) \\ 0, &{}\quad x\ge \varphi _{-}(\frac{y}{t}). \end{array} \right. \end{aligned}$$

Let \(N_{1}\) and \(N_{2}\) be two independent rate 1 Poisson processes on \( \left( 0,\infty \right) \) with jumps \(\varGamma _{i}^{\left( 1\right) }\), \( i\ge 1\), and \(\varGamma _{i}^{\left( 2\right) }\), \(i\ge 1,\) respectively. Now for \(t>0\) and \(y_{1}>0\) let \({\widehat{X}}_{t}^{\left( 1,+\right) }\left( y_{1}\right) \) and \({\widehat{X}}_{t}^{\left( 2,+\right) }\left( y_{1}\right) \) be constructed as \({\widehat{P}}_{t}^{\left( 1\right) }\left( y_{1}\right) \) and \({\widehat{P}}_{t}^{\left( 2\right) }\left( y_{1}\right) \) using the Poisson process \(N_{1}\) and the Lévy measure \(\varLambda _{+}\) with inverse \(\varphi _{+}\). In the same way for \(t>0\) and \(y_{2}>0\), construct \({\widehat{X}}_{t}^{\left( 1,-\right) }\left( y_{2}\right) \) and \({\widehat{X}}_{t}^{\left( 2,-\right) }\left( y_{2}\right) \) using the Poisson process \(N_{2}\) and the Lévy measure \(\varLambda _{-}\) with inverse \(\varphi _{-}\). One finds that \( {\widehat{X}}_{t}^{\left( 1,+\right) }\left( y_{1}\right) \) and \({\widehat{X}} _{t}^{\left( 1,-\right) }\left( y_{2}\right) \) are independent infinitely divisible random variables with Lévy measures defined via the above tail functions \({\overline{\varLambda }}_{y_{1}/t,+}\) and \({\overline{\varLambda }}_{y_{2}/t,-}\), respectively, whose supports are contained in \(\left[ 0,\varphi _{+}(y_{1}/t)\right] \) and \(\left[ 0,\varphi _{-}(y_{2}/t)\right] \), respectively. Moreover,

$$\begin{aligned} \hbox {E}{\widehat{X}}_{t}^{\left( 1,+\right) }\left( y_{1}\right) =\hbox {E}{\widehat{X}} _{t}^{\left( 1,-\right) }\left( y_{2}\right) =0, \end{aligned}$$

and by (63),

$$\begin{aligned}&\hbox {Var}\left( {\widehat{X}}_{t}^{\left( 1,+\right) }\left( y_{1}\right) \right) \nonumber \\&\quad =t\int _{y_{1}/t}^{\infty }\varphi _{+}^{2}\left( u\right) 1\left\{ 0<\varphi _{+}\left( u\right) \le 1\right\} \mathrm {d}u=:t\sigma _{+}^{2}\left( y_{1}/t\right) >0 \end{aligned}$$
(64)

and

$$\begin{aligned}&\hbox {Var}\left( {\widehat{X}}_{t}^{\left( 1,-\right) }\left( y_{2}\right) \right) \nonumber \\&\quad =t\int _{y_{2}/t}^{\infty }\varphi _{-}^{2}\left( u\right) 1\left\{ 0<\varphi _{-}\left( u\right) \le 1\right\} \mathrm {d}u=:t\sigma _{-}^{2}\left( y_{2}/t\right) >0. \end{aligned}$$
(65)

For \(t>0\), \(y_{1}>0\) and \(y_{2}>0\), consider the random variable

$$\begin{aligned} {\widehat{Y}}_{t}^{\left( 1\right) }\left( y_{1},y_{2}\right) =\sigma Z_{t}+ {\widehat{X}}_{t}^{\left( 1,+\right) }\left( y_{1}\right) -{\widehat{X}} _{t}^{\left( 1,-\right) }\left( y_{2}\right) \text {,} \end{aligned}$$

where \(\sigma \ge 0\) and \(\left( Z_{t}\right) _{t\ge 0}\) is a standard Brownian motion independent of the variables \({\widehat{X}}_{t}^{\left( 1,+\right) }\left( y_{1}\right) \) and \({\widehat{X}}_{t}^{\left( 1,-\right) }\left( y_{2}\right) \). Set

$$\begin{aligned} \hbox {Var}{\widehat{Y}}_{t}^{\left( 1\right) }\left( y_{1},y_{2}\right) =t\sigma ^{2}+t\sigma _{+}^{2}\left( y_{1}/t\right) +t\sigma _{-}^{2}\left( y_{2}/t\right) =:t\sigma ^{2}\left( t,y_{1},y_{2}\right) , \end{aligned}$$
(66)

where by (64) and (65), \(\sigma ^{2}\left( t,y_{1},y_{2}\right) >0.\)

A basic step toward extending Theorem 1 from subordinators to general Lévy processes is the following result: For each \(t>0\), \(y_{1}>0 \) and \(y_{2}>0\) consider the standardized version of \({\widehat{Y}} _{t}^{\left( 1\right) }\left( y_{1},y_{2}\right) \) given by

$$\begin{aligned} S^{(1)}\left( t,y_{1},y_{2}\right) =\frac{{\widehat{Y}}_{t}^{\left( 1\right) }\left( y_{1},y_{2}\right) }{\sqrt{\hbox {Var}{\widehat{Y}}_{t}^{\left( 1\right) }\left( y_{1},y_{2}\right) }}=\frac{{\widehat{Y}}_{t}^{\left( 1\right) }\left( y_{1},y_{2}\right) }{\sqrt{t}\sqrt{\sigma ^{2}\left( t,y_{1},y_{2}\right) }}. \end{aligned}$$

The random variable \(S^{(1)}\left( t,y_{1},y_{2}\right) \) is infinitely divisible with

$$\begin{aligned} \hbox {E}S^{(1)}\left( t,y_{1},y_{2}\right) =0\,\, \mathrm { and }\,\, \hbox {Var}S^{(1)}\left( t,y_{1},y_{2}\right) =1, \end{aligned}$$

whose Lévy measure has support contained in

$$\begin{aligned} \left[ \frac{-\varphi _{-}(y_{2}/t)}{\sqrt{t}\sigma \left( t,y_{1},y_{2}\right) },\frac{\varphi _{+}(y_{1}/t)}{\sqrt{t}\sigma \left( t,y_{1},y_{2}\right) }\right] . \end{aligned}$$

Since the random variable \(S^{(1)}\left( t,y_{1},y_{2}\right) \) is infinitely divisible, we can apply the Zaitsev Fact to get, for \(t>0\), \(y_{1}>0\), \(y_{2}>0\) and \(\lambda >0\), and for universal positive constants \(C_{1}\) and \(C_{2}\),

$$\begin{aligned} \varPi \left( S^{(1)}\left( t,y_{1},y_{2}\right) ,Z;\lambda \right) \le C_{1}\exp \left( -\frac{\lambda \sqrt{t}\sigma \left( t,y_{1},y_{2}\right) }{ C_{2}\varphi \left( t,y_{1},y_{2}\right) }\right) , \end{aligned}$$
(67)

where \(\varphi \left( t,y_{1},y_{2}\right) =\max \left\{ \varphi _{+}(y_{1}/t),\varphi _{-}(y_{2}/t)\right\} \).

5.5 Definition of Trimmed Lévy Process

Set for \(0<\varepsilon <1\), \(t>0\) and \(y>0\)

$$\begin{aligned} \mu _{\pm }\left( \varepsilon ,\frac{y}{t}\right) :=\int _{0}^{\infty }\varphi _{\pm }\left( \frac{y}{t}+x\right) 1\left\{ \varepsilon <\varphi _{\pm }\left( \frac{y}{t}+x\right) \le 1\right\} \mathrm {d}x. \end{aligned}$$
(68)

Let \(\left( Z_{t}\right) _{t\ge 0}\), \(\left( \varGamma _{i}^{+}\right) _{i\ge 1}\) and \(\left( \varGamma _{i}^{-}\right) _{i\ge 1}\) be as in (43). We shall consider for sequences of positive constants \(t_{n}\) and positive integers \(k_{n}\) and \(\ell _{n}\) trimmed versions of the Lévy process \( X_{t}\) at \(t_{n}\), namely \({\widehat{X}}_{t_{n}}\), given by

$$\begin{aligned} {\widetilde{T}}_{t_{n}}^{(k_{n},\ell _{n})}:=\sigma Z_{t_{n}}+\gamma t_{n}+ {\widetilde{T}}_{t_{n}}^{(k_{n},+)}+{\widetilde{T}}_{t_{n}}^{(\ell _{n},-)}\text { ,} \end{aligned}$$
(69)

where \({\widetilde{T}}_{t_{n}}^{(k_{n},+)}=\)

$$\begin{aligned}&\lim _{\varepsilon \searrow 0}\left( \sum _{i=k_{n}+1}^{\infty }\varphi _{+}\left( \frac{\varGamma _{i}^{+}}{t_{n}}\right) 1\left\{ \varepsilon <\varphi _{+}\left( \frac{\varGamma _{i}^{+}}{t_{n}}\right) \le 1\right\} -t_{n}\mu _{+}\left( \varepsilon ,\frac{\varGamma _{k_{n}}^{+}}{t_{n}}\right) \right) \\&\quad +\sum _{i=k_{n}+1}^{\infty }\varphi _{+}\left( \frac{\varGamma _{i}^{+}}{t_{n}} \right) 1\left\{ \varphi _{+}\left( \frac{\varGamma _{i}^{+}}{t_{n}}\right) >1\right\} \end{aligned}$$

and \({\widetilde{T}}_{t_{n}}^{(\ell _{n},-)}=\)

$$\begin{aligned}&-\lim _{\varepsilon \searrow 0}\left( \sum _{i=\ell _{n}+1}^{\infty }\varphi _{-}\left( \frac{\varGamma _{i}^{-}}{t_{n}}\right) 1\left\{ \varepsilon <\varphi _{-}\left( \frac{\varGamma _{i}^{-}}{t_{n}}\right) \le 1\right\} -t_{n}\mu _{-}\left( \varepsilon ,\frac{\varGamma _{\ell _{n}}^{-}}{t_{n}} \right) \right) \\&\quad -\sum _{i=\ell _{n}+1}^{\infty }\varphi _{-}\left( \frac{\varGamma _{i}^{-}}{ t_{n}}\right) 1\left\{ \varphi _{-}\left( \frac{\varGamma _{i}^{-}}{t_{n}} \right) >1\right\} . \end{aligned}$$

Notice that by construction, \(Z_{t_{n}},{\widetilde{T}}_{t_{n}}^{(k_{n},+)}\) and \({\widetilde{T}}_{t_{n}}^{(\ell _{n},-)}\) are independent.

5.6 Our SSCLT for a Trimmed Lévy Process

Armed with the notation and auxiliary results established in the previous subsections, we now state and prove our SSCLT for the trimmed Lévy process defined in (69). We note in passing that assumption (37) can be relaxed a bit; however, the present version of our SSCLT and its proof suffice to reveal the main ideas.

Theorem 2

Assume that (37) holds. For any two sequences of positive integers \(\left\{ k_{n}\right\} _{n\ge 1}\) and \(\left\{ \ell _{n}\right\} _{n\ge 1}\) converging to infinity and sequence of positive constants \(\left\{ t_{n}\right\} _{n\ge 1}\) satisfying

$$\begin{aligned} \frac{\sqrt{t_{n}}\sigma \left( t_{n},\varGamma _{k_{n}}^{+},\varGamma _{\ell _{n}}^{-}\right) }{\varphi \left( t_{n},\varGamma _{k_{n}}^{+},\varGamma _{\ell _{n}}^{-}\right) }\overset{\mathrm {P}}{\rightarrow }\infty {, \text {as } }n\rightarrow \infty \end{aligned}$$
(70)

and

$$\begin{aligned} \frac{\varGamma _{k_{n}}^{+}}{t_{n}}\overset{\mathrm {P}}{\rightarrow }\infty \,\, { \text {and} }\,\,\frac{\varGamma _{\ell _{n}}^{-}}{t_{n}}\overset{\mathrm {P} }{\rightarrow }\infty {, \text {as} }\,\,n\rightarrow \infty , \end{aligned}$$
(71)

we have uniformly in x, as \(n\rightarrow \infty \)

$$\begin{aligned} \left| {\mathbb {P}}\left\{ \frac{{\widetilde{T}}_{t_{n}}^{(k_{n},\ell _{n})}-\gamma t_{n}}{\sqrt{t_{n}}\sqrt{\sigma ^{2}\left( t_{n},\varGamma _{k_{n}}^{+},\varGamma _{\ell _{n}}^{-}\right) }}\le x|\varGamma _{k_{n}}^{+},\varGamma _{\ell _{n}}^{-}\right\} -{\mathbb {P}}\left\{ Z\le x\right\} \right| \overset{\mathrm {P}}{\rightarrow }0, \end{aligned}$$
(72)

which implies as \(n\rightarrow \infty \)

$$\begin{aligned} \frac{{\widetilde{T}}_{t_{n}}^{(k_{n},\ell _{n})}-\gamma t_{n}}{\sqrt{t_{n}} \sqrt{\sigma ^{2}\left( t_{n},\varGamma _{k_{n}}^{+},\varGamma _{\ell _{n}}^{-}\right) }}\overset{\mathrm {D}}{\rightarrow }Z. \end{aligned}$$
(73)

A simple example. Before we prove Theorem 2, we shall give a simple example. Let \((X_{t})_{t\ge 0}\) be a Lévy process with canonical triplet \((0,0,\varLambda )\). Recall the notation (35). Assume that \(\varLambda _{+}=\varLambda _{-}\) and \(\varLambda _{+}\) is regularly varying at zero with index \(-\alpha \), where \(0<\alpha <2\). This implies that \(\varphi _{+}=\varphi _{-}\) is regularly varying at \(\infty \) with index \(-1/\alpha \) and thus for \(x>0\),

$$\begin{aligned} \varphi _{+}(x)=\varphi _{-}\left( x\right) =L\left( x\right) x^{-1/\alpha }, \end{aligned}$$
(74)

where \(L\left( x\right) \) is slowly varying at infinity.
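To make (74) concrete, consider the special case of an exact power tail, an assumption made for this illustration only:

```latex
% Illustration (assuming, for this example only, an exact power tail):
% if \overline{\Lambda}_{+}(x) = c\,x^{-\alpha} for some c > 0, then
% \overline{\Lambda}_{+}(x) > u \iff x < (c/u)^{1/\alpha}, so by (1)
\varphi_{+}(u) \;=\; \sup\{x : c\,x^{-\alpha} > u\}
            \;=\; (c/u)^{1/\alpha}
            \;=\; c^{1/\alpha}\,u^{-1/\alpha},
% i.e., (74) holds with L \equiv c^{1/\alpha}, a constant
% (hence slowly varying) function.
```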

Applying (64), we see that

$$\begin{aligned} \sigma _{+}^{2}\left( y_{1}/t\right) =\int _{y_{1}/t}^{\infty }\varphi _{+}^{2}\left( u\right) 1\left\{ 0<\varphi _{+}\left( u\right) \le 1\right\} \mathrm {d}u, \end{aligned}$$

which, by (74), satisfies, as \(y_{1}/t\rightarrow \infty \),

$$\begin{aligned} \sim b_{\alpha }^{2}L^{2}\left( y_{1}/t\right) \left( y_{1}/t\right) ^{-2/\alpha +1}, \end{aligned}$$
(75)

where \(b_{\alpha }^{2}=\alpha /\left( 2-\alpha \right) \). In the same way, we get as \(y_{2}/t\rightarrow \infty \)

$$\begin{aligned} \sigma _{-}^{2}\left( y_{2}/t\right) \sim b_{\alpha }^{2}L^{2}\left( y_{2}/t\right) \left( y_{2}/t\right) ^{-2/\alpha +1}. \end{aligned}$$
(76)
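The constant \(b_{\alpha }^{2}\) can be checked directly in the special case \(L\equiv 1\), an assumption made for this sketch only; the general case follows from Karamata's theorem:

```latex
% Sketch with L \equiv 1 (the general L follows from Karamata's theorem).
% For s large enough that \varphi_{+}(u) = u^{-1/\alpha} \le 1 on [s,\infty):
\sigma_{+}^{2}(s)
  \;=\; \int_{s}^{\infty} u^{-2/\alpha}\,\mathrm{d}u
  \;=\; \frac{s^{\,1-2/\alpha}}{2/\alpha - 1}
  \;=\; \frac{\alpha}{2-\alpha}\, s^{-2/\alpha +1}
  \;=\; b_{\alpha}^{2}\, s^{-2/\alpha +1},
% where the integral converges because 0 < \alpha < 2 gives 2/\alpha > 1.
```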

Note that in this example \(\sigma ^{2}=0\), so that

$$\begin{aligned} \sigma ^{2}\left( t,y_{1},y_{2}\right) =\sigma _{+}^{2}\left( y_{1}/t\right) +\sigma _{-}^{2}\left( y_{2}/t\right) . \end{aligned}$$

Assuming \(k_{n}\rightarrow \infty \) and \(k_{n}/t_{n}\rightarrow \infty \), we get that

$$\begin{aligned} \frac{\varGamma _{k_{n}}^{+}}{k_{n}}\overset{\mathrm {P}}{\rightarrow }1\,\,\text {and}\,\,\frac{\varGamma _{k_{n}}^{-}}{k_{n}}\overset{\mathrm {P}}{ \rightarrow }1{, \text {as} }\,\,n\rightarrow \infty , \end{aligned}$$

and thus

$$\begin{aligned} \frac{\varGamma _{k_{n}}^{+}}{t_{n}}\overset{\mathrm {P}}{\rightarrow }\infty \,\, \text {and}\,\, \frac{\varGamma _{k_{n}}^{-}}{t_{n}}\overset{\mathrm {P}}{ \rightarrow }\infty {, \text {as} }\,\,n\rightarrow \infty . \end{aligned}$$

This implies that

$$\begin{aligned} \sigma _{\pm }^{2}\left( \varGamma _{k_{n}}^{\pm }/t_{n}\right) /\left( b_{\alpha }^{2}L^{2}\left( k_{n}/t_{n}\right) \left( k_{n}/t_{n}\right) ^{-2/\alpha +1}\right) \overset{\mathrm {P}}{\rightarrow }1{,\,\, \text {as} }\,\,n\rightarrow \infty \end{aligned}$$
(77)

and

$$\begin{aligned} \varphi \left( t_{n},\varGamma _{k_{n}}^{+},\varGamma _{k_{n}}^{-}\right) /\left( L\left( k_{n}/t_{n}\right) \left( k_{n}/t_{n}\right) ^{-1/\alpha }\right) \overset{\mathrm {P}}{\rightarrow }1 \,\,{, \text {as} }\,\,n\rightarrow \infty , \end{aligned}$$

from which we readily infer that

$$\begin{aligned} \frac{\sqrt{t_{n}}\sigma \left( t_{n},\varGamma _{k_{n}}^{+},\varGamma _{k_{n}}^{-}\right) }{\varphi \left( t_{n},\varGamma _{k_{n}}^{+},\varGamma _{k_{n}}^{-}\right) }\overset{\mathrm {P}}{\rightarrow }\infty \,\,{, \text {as} }\,\,n\rightarrow \infty . \end{aligned}$$
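To spell out the inference: combining (77) with the preceding limit for \(\varphi \), and ignoring the slowly varying factors, which cancel in the ratio, the quantity above is of exact order \(\sqrt{k_{n}}\):

```latex
% Order computation behind the preceding display
% (slowly varying factors cancel between numerator and denominator):
\frac{\sqrt{t_{n}}\,\sigma\left(t_{n},\varGamma_{k_{n}}^{+},\varGamma_{k_{n}}^{-}\right)}
     {\varphi\left(t_{n},\varGamma_{k_{n}}^{+},\varGamma_{k_{n}}^{-}\right)}
\;\approx\;
\frac{\sqrt{t_{n}}\,\sqrt{2}\,b_{\alpha}\,(k_{n}/t_{n})^{1/2-1/\alpha}}
     {(k_{n}/t_{n})^{-1/\alpha}}
\;=\; \sqrt{2}\,b_{\alpha}\,\sqrt{t_{n}}\,(k_{n}/t_{n})^{1/2}
\;=\; \sqrt{2}\,b_{\alpha}\,\sqrt{k_{n}}
\;\longrightarrow\; \infty ,
% since k_{n} \to \infty by assumption.
```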

Thus, by Theorem 2 we have uniformly in x, as \(n\rightarrow \infty \)

$$\begin{aligned} \left| {\mathbb {P}}\left\{ \frac{{\widetilde{T}}_{t_{n}}^{(k_{n},k_{n})}}{ \sqrt{t_{n}}\sqrt{\sigma ^{2}\left( t_{n},\varGamma _{k_{n}}^{+},\varGamma _{k_{n}}^{-}\right) }}\le x|\varGamma _{k_{n}}^{+},\varGamma _{k_{n}}^{-}\right\} -{\mathbb {P}}\left\{ Z\le x\right\} \right| \overset{\mathrm {P}}{\rightarrow }0. \end{aligned}$$
(78)

By (77) we can replace the random norming in (78) by a deterministic norming to get uniformly in x, as \(n\rightarrow \infty \)

$$\begin{aligned} \left| {\mathbb {P}}\left\{ \frac{{\widetilde{T}}_{t_{n}}^{(k_{n},k_{n})}}{ \sqrt{t_{n}}\sqrt{2\sigma _{+}^{2}\left( k_{n}/t_{n}\right) }}\le x|\varGamma _{k_{n}}^{+},\varGamma _{k_{n}}^{-}\right\} - {\mathbb {P}}\left\{ Z\le x\right\} \right| \overset{\mathrm {P}}{ \rightarrow }0. \end{aligned}$$
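The law of large numbers invoked at the start of this example, namely \(\varGamma _{k_{n}}/k_{n}\overset{\mathrm {P}}{\rightarrow }1\), can be checked numerically. The sketch below, using NumPy with a fixed seed, is only an illustration and not part of the proof:

```python
import numpy as np

# Numerical illustration (not part of the proof): Gamma_k, the k-th arrival
# time of a rate-1 Poisson process, is a sum of k i.i.d. Exp(1) variables,
# so by the strong law of large numbers Gamma_k / k -> 1 almost surely.
rng = np.random.default_rng(0)  # fixed seed, for reproducibility

for k in (10**2, 10**4, 10**6):
    gamma_k = rng.exponential(scale=1.0, size=k).sum()
    print(f"k = {k:>7}: Gamma_k / k = {gamma_k / k:.4f}")
```

The printed ratios approach 1 as \(k\) grows, in line with the \(\sqrt{k}\) rate of the central limit theorem.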

Proof of Theorem 2

Consider two sequences of random variables \((Y_{1,k_{n}}) _{n\ge 1}\), independent of \((\varGamma _{i}^{(1) }) _{i\ge 1}\), and \(( Y_{2,\ell _{n}}) _{n\ge 1}\), independent of \((\varGamma _{i}^{(2) }) _{i\ge 1}\), and independent of each other. Assume that \( t_{n}>0\), \(k_{n}>0\) and \(\ell _{n}>0\) are such that

$$\begin{aligned} \frac{\sqrt{t_{n}}\sigma \left( t_{n},Y_{1,k_{n}},Y_{2,\ell _{n}}\right) }{ \varphi \left( t_{n},Y_{1,k_{n}},Y_{2,\ell _{n}}\right) }\overset{\mathrm {P}}{\rightarrow }\infty {, \text {as} }\,\,n\rightarrow \infty , \end{aligned}$$
(79)

then by applying (67) we get uniformly in x, as \(n\rightarrow \infty \),

$$\begin{aligned} \left| {\mathbb {P}}\left\{ S^{\left( 1\right) }\left( t_{n},Y_{1,k_{n}},Y_{2,\ell _{n}}\right) \le x|Y_{1,k_{n}},Y_{2,\ell _{n}}\right\} -{\mathbb {P}}\left\{ Z\le x\right\} \right| \overset{ \mathrm {P}}{\rightarrow }0 \text {.} \end{aligned}$$
(80)

For \(t>0\), \(y_{1}>0\) and \(y_{2}>0\), set

$$\begin{aligned} {\widehat{Y}}_{t}^{\left( 2\right) }\left( y_{1},y_{2}\right)= & {} {\widehat{X}} _{t}^{\left( 2,+\right) }\left( y_{1}\right) -{\widehat{X}}_{t}^{\left( 2,-\right) }\left( y_{2}\right) \\= & {} \sum _{i=1}^{\infty }\varphi _{+}\left( \frac{y_{1}}{t}+\frac{\varGamma _{i}^{\left( 1\right) }}{t}\right) 1\left\{ \varphi _{+}\left( \frac{y_{1}}{t }+\frac{\varGamma _{i}^{\left( 1\right) }}{t}\right)>1\right\} \\&-\sum _{i=1}^{\infty }\varphi _{-}\left( \frac{y_{2}}{t}+\frac{\varGamma _{i}^{\left( 2\right) }}{t}\right) 1\left\{ \varphi _{-}\left( \frac{y_{2}}{t }+\frac{\varGamma _{i}^{\left( 2\right) }}{t}\right) >1\right\} . \end{aligned}$$

Further, let

$$\begin{aligned} {\widehat{Y}}_{t}\left( y_{1},y_{2}\right) ={\widehat{Y}}_{t}^{\left( 1\right) }\left( y_{1},y_{2}\right) +{\widehat{Y}}_{t}^{\left( 2\right) }\left( y_{1},y_{2}\right) \end{aligned}$$

and

$$\begin{aligned} S\left( t,y_{1},y_{2}\right) =\frac{{\widehat{Y}}_{t}\left( y_{1},y_{2}\right) }{\sqrt{t}\sqrt{\sigma ^{2}\left( t,y_{1},y_{2}\right) }}. \end{aligned}$$

We see that if, in addition to (79), we assume that \(t_{n}>0\), \( k_{n}>0\) and \(\ell _{n}>0\) are such that

$$\begin{aligned} \frac{Y_{1,k_{n}}}{t_{n}}\overset{\mathrm {P}}{\rightarrow }\infty \text { and }\frac{Y_{2,\ell _{n}}}{t_{n}}\overset{\mathrm {P}}{\rightarrow }\infty { , \text {as} }\,\,n\rightarrow \infty , \end{aligned}$$
(81)

then by (38)

$$\begin{aligned} 1\left\{ \varphi _{+}\left( \frac{Y_{1,k_{n}}}{t_{n}}\right)>1\right\} \overset{\mathrm {P}}{\rightarrow }0\text { and }1\left\{ \varphi _{-}\left( \frac{Y_{2,\ell _{n}}}{t_{n}}\right) >1\right\} \overset{\mathrm {P}}{ \rightarrow }0, { \text {as} }\,\,n\rightarrow \infty , \end{aligned}$$

which implies that

$$\begin{aligned} {\mathbb {P}}\left\{ {\widehat{Y}}_{t_{n}}^{\left( 2\right) }\left( Y_{1,k_{n}},Y_{2,\ell _{n}}\right) \ne 0|Y_{1,k_{n}},Y_{2,\ell _{n}}\right\} \overset{\mathrm {P}}{\rightarrow }0, { \text {as} }\,\, n\rightarrow \infty . \end{aligned}$$

This gives

$$\begin{aligned} {\mathbb {P}}\left\{ {\widehat{Y}}_{t_{n}}\left( Y_{1,k_{n}},Y_{2,\ell _{n}}\right) ={\widehat{Y}}_{t_{n}}^{\left( 1\right) }\left( Y_{1,k_{n}},Y_{2,\ell _{n}}\right) |Y_{1,k_{n}},Y_{2,\ell _{n}}\right\} \overset{\mathrm {P}}{\rightarrow }1{, \text {as} }\,\,n\rightarrow \infty , \end{aligned}$$
(82)

which in combination with (80) implies that uniformly in x

$$\begin{aligned} \left| {\mathbb {P}}\left\{ S\left( t_{n},Y_{1,k_{n}},Y_{2,\ell _{n}}\right) \le x|Y_{1,k_{n}},Y_{2,\ell _{n}}\right\} -{\mathbb {P}}\left\{ Z\le x\right\} \right| \overset{\mathrm {P}}{\rightarrow }0{, \text {as} }\,\,n\rightarrow \infty \text {.} \end{aligned}$$
(83)

Let \(\left( Y_{1,k_{n}},Y_{2,\ell _{n}}\right) =\left( \varGamma _{k_{n}}^{+},\varGamma _{\ell _{n}}^{-}\right) \), chosen to be independent of \( \left( \varGamma _{i}^{\left( 1\right) }\right) _{i\ge 1}\) and \(\left( \varGamma _{i}^{\left( 2\right) }\right) _{i\ge 1}\). We see that

$$\begin{aligned} {\mathbb {P}}\left\{ S\left( t_{n},\varGamma _{k_{n}}^{+},\varGamma _{\ell _{n}}^{-}\right) \le x|\varGamma _{k_{n}}^{+},\varGamma _{\ell _{n}}^{-}\right\} \overset{\mathrm {D}}{=}{\mathbb {P}}\left\{ \frac{{\widetilde{T}}_{t_{n}}^{(k_{n},\ell _{n})}-\gamma t_{n} }{\sqrt{t_{n}}\sqrt{\sigma ^{2}\left( t_{n},\varGamma _{k_{n}}^{+},\varGamma _{\ell _{n}}^{-}\right) }}\le x|\varGamma _{k_{n}}^{+},\varGamma _{\ell _{n}}^{-}\right\} . \end{aligned}$$
(84)

Combining (83) with (84), we get (72) and (73). \(\square \)