1 Introduction

Let \(\{T_{n};n\ge 0\}\) be a classical supercritical Galton–Watson branching process with offspring distribution \(\{b_{n};n\ge 0\}\) and mean \(m:=\sum _{n=0}^{\infty }nb_{n}, 1<m<\infty \). Define \(S_{n}=T_{n}/m^{n}\); it is well known that there exists a nonnegative random variable \(S\) such that \(S=\lim _{n\rightarrow \infty }S_{n} ~ a.s. \) C. C. Heyde [10–12] derived the central limit theorem (CLT) and the law of the iterated logarithm (LIL) for \(\{T_{n};n\ge 0\}\):

(I) Suppose that \(\tau _{1}^{2}:=Var(T_{1})<\infty \) and \(P(S>0)=1\). Set \(\tau _{r}^{2}:=Var(T_{r})\) for \(r\ge 1\); then

$$\begin{aligned}&\tau _{r}^{-1}T_{n}^{-\frac{1}{2}}(T_{n+r}-m^{r}T_{n})\xrightarrow []{d} N(0,1)~(n\rightarrow \infty ),\\&(m^{2}-m)^{\frac{1}{2}}\tau _{1}^{-1}T_{n}^{-\frac{1}{2}}m^{n}(S-S_{n}) \xrightarrow []{d} N(0,1)~(n\rightarrow \infty ). \end{aligned}$$

(II) Suppose that \(E(T_{1}^{3})<\infty \); then

$$\begin{aligned}&\limsup _{n\rightarrow \infty }(\liminf _{n\rightarrow \infty }) \frac{T_{n+r}-m^{r}T_{n}}{(2\tau _{r}^{2}T_{n}\log n)^{\frac{1}{2}}}=1(-1)~~a.s.~ \text {on} ~\{S>0\},\\&\limsup _{n\rightarrow \infty }(\liminf _{n\rightarrow \infty })\frac{m^{n}S-T_{n}}{(2\tau _{1}^{2}(m^{2}-m)^{-1}T_{n}\log n)^{\frac{1}{2}}}=1(-1)~~a.s.~ \text {on} ~\{S>0\}. \end{aligned}$$

Similar limit theorems for a classical Galton–Watson branching process with immigration were studied in [13].

A branching process in a random environment is a natural and important extension of the Galton–Watson process: it is a class of non-homogeneous Galton–Watson processes indexed by a time environment, and it has been studied by many authors; see [1–3, 5, 6, 14, 15, 19–22].

In this article, we consider the Galton–Watson branching process with time-dependent immigration in varying environments (IGWVE), defined as follows:

Definition 1.1

Let \(Z_{0}\equiv 1\) and for any \(n\ge 0\),

$$\begin{aligned} Z_{n+1}=\sum _{j=1}^{Z_{n}}\xi _{n,j}+Y_{n+1} ~~a.s., \end{aligned}$$

where the random variables \(\{\xi _{n,j}; n\ge 0,j\ge 1\}\) are independent and, within each row \(n\), identically distributed, and \(\{Y_{n},n\ge 1\}\) is a sequence of independent random variables taking values in \({\mathbb {N}}\), independent of \(\{\xi _{n,j}; n\ge 0,j\ge 1\}\). Then \(\{Z_{n}, n\ge 0\}\) is said to be a Galton–Watson branching process with time-dependent immigration in varying environments. In particular, if \(Y_n\equiv 0\) for all \(n\ge 1\), then \(\{Z_{n}, n\ge 0\}\) is said to be a Galton–Watson branching process in varying environments (GWVE).
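The recursion in Definition 1.1 is easy to simulate directly. The following minimal sketch fixes concrete distributions that are purely illustrative and are not part of the definition: offspring \(\xi _{n,j}=1+\text {Poisson}(\mu _{n}-1)\), so that \(P(\xi _{n,1}=0)=0\) as assumed below, and Poisson immigration.

```python
import numpy as np

def simulate_igwve(mu, nu, rng):
    """One IGWVE path Z_0, ..., Z_N following the recursion of Definition 1.1.

    Illustrative assumptions (not from the paper): offspring
    xi_{n,j} = 1 + Poisson(mu[n] - 1) and immigration Y_{n+1} ~ Poisson(nu[n]).
    A sum of Z_n i.i.d. 1 + Poisson(lam) variables has the same law as
    Z_n + Poisson(Z_n * lam), which avoids simulating every particle.
    """
    Z = [1]  # Z_0 = 1
    for n in range(len(mu)):
        births = Z[-1] + rng.poisson(Z[-1] * (mu[n] - 1.0))  # sum of xi_{n,j}
        Z.append(births + rng.poisson(nu[n]))                # add Y_{n+1}
    return np.array(Z)

rng = np.random.default_rng(0)
mu = 1.5 + 0.4 * np.cos(np.arange(30))       # varying offspring means mu_n > 1
nu = np.ones(30)                             # immigration means nu_n = 1
Z = simulate_igwve(mu, nu, rng)
m = np.concatenate(([1.0], np.cumprod(mu)))  # m_n = prod_{i<n} mu_i
print(Z / m)                                 # W_n = Z_n / m_n (cf. Theorem 1.1)
```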

Throughout this paper we assume that the variances of \(\{\xi _{n,j}; n\ge 0,j\ge 1\}\) and \(\{Y_{n},n\ge 1\}\) exist. Define \(\mu _{n}=E(\xi _{n,j}), \delta _{n}^{2}=Var(\xi _{n,j})\) and \( \nu _{n}=E(Y_{n})\); we also assume that

$$\begin{aligned} P(\xi _{n,1}=0)\equiv 0,~~1<\mu _{n}<\infty , ~~\nu _{n}<\infty ,~~ \delta _{n}^{2}>0. \end{aligned}$$

Our first main result is the following result on the growth rate of the IGWVE.

Theorem 1.1

Let \(\{Z_{n}, n\ge 0\}\) be an IGWVE. Define \(m_{n}=\prod _{i=0}^{n-1}\mu _{i}\) and \(W_{n}=Z_{n}/m_{n}\) for \(n\ge 1\); then \(\{W_{n}, n\ge 1\}\) is a nonnegative submartingale. If

$$\begin{aligned} a:=\sum _{n=1}^{\infty }\frac{\nu _{n}}{m_{n}}<\infty , \end{aligned}$$
(1.1)

then there exists a nonnegative random variable \(W\) with \(E(W)<\infty \) such that \(W_{n}\xrightarrow {a.s.} W.\) Furthermore, if

$$\begin{aligned} b:=\sup _{n\ge 1}\sum _{j=n}^{\infty }\frac{\delta _{j}^{2}}{\mu _{j}^{2}m_{n,j-n}}<\infty , \end{aligned}$$
(1.2)

where \(m_{n,k}=\prod _{i=n}^{n+k-1}\mu _{i}\), then \(W_{n}\xrightarrow {L^{2}}W\).
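To illustrate conditions (1.1) and (1.2), consider the artificial constant environment \(\mu _{n}\equiv 2\), \(\delta _{n}^{2}\equiv \sigma ^{2}\in (0,\infty )\), \(\nu _{n}\equiv 1\). Then \(m_{n}=2^{n}\), \(m_{n,k}=2^{k}\), and

$$\begin{aligned} a=\sum _{n=1}^{\infty }\frac{1}{2^{n}}=1<\infty ,~~~~ b=\sup _{n\ge 1}\sum _{j=n}^{\infty }\frac{\sigma ^{2}}{4\cdot 2^{j-n}}=\frac{\sigma ^{2}}{2}<\infty , \end{aligned}$$

so both parts of Theorem 1.1 apply and \(W_{n}\rightarrow W\) a.s. and in \(L^{2}\).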

To prove the CLT and the LIL for IGWVE, we need the following two decomposition results.

(A) For any fixed \(n\ge 0, r\ge 1\), one has

$$\begin{aligned} Z_{n+r}=\sum _{j=1}^{Z_{n}} X_{n,r}^{(j)} +Y_{n,r}~~ a.s., \end{aligned}$$

where \(\{X_{n,r}^{(j)},j\ge 1\}\) are i.i.d. and independent of \(Y_{n,r}\). The variances of \(\{X_{n,r}^{(j)},j\ge 1\}\) and \(Y_{n,r}\) exist. We define \(m_{n,r}=E(X_{n,r}^{(1)}), \sigma _{n,r}^{2}=Var(X_{n,r}^{(1)}), \pi _{n,r}=E(Y_{n,r}), \theta _{n,r}^{2}=Var(Y_{n,r}).\)

(B) For any fixed \(n\ge 0\) one has

$$\begin{aligned} Z_{n}-m_{n}W=\sum _{j=1}^{Z_{n}}(1-V_{n}^{(j)})-I_{n}~~a.s., \end{aligned}$$

where \(\{V_{n}^{(j)};j\ge 1\}\) are i.i.d. and independent of \(I_{n}\). If (1.2) is satisfied, then the variances of \(\{V_{n}^{(j)},j\ge 1\}\) exist. Furthermore, if

$$\begin{aligned} c:=\sup _{n\ge 0}\sum _{k=n+1}^{\infty }\frac{\nu _{k}}{m_{n,k-n}}<\infty , \end{aligned}$$
(1.3)

then the variance of \(I_{n}\) exists. We define \(\sigma _{n}^{2}\,=\,Var(V_{n}^{(1)}), \pi _{n}=E(I_{n}), \theta _{n}^{2}=Var(I_{n}).\)

Using the above two decomposition results, we obtain the CLT and the LIL for IGWVE.

Theorem 1.2

Let \(\{Z_{n}, n\ge 0\}\) be an IGWVE. If \(Z_{n}\xrightarrow {P}\infty \), conditions (1.2) and (1.3) are satisfied, and

$$\begin{aligned} 0<d=\inf _{n\ge 0}\frac{\delta _{n}^{2}}{\mu _{n}^{2}}, \end{aligned}$$
(1.4)

then for any fixed \(r\ge 1\), as \(n\rightarrow \infty \) one has

$$\begin{aligned} \frac{Z_{n+r}-m_{n,r}Z_{n}}{\sigma _{n,r}\sqrt{Z_{n}}}\xrightarrow {d}N(0,1), ~~~~\frac{Z_{n}-m_{n}W}{\sigma _{n}\sqrt{Z_{n}}}\xrightarrow {d}N(0,1). \end{aligned}$$

Remark 1.3

Note that \(Z_n\ge 1\). A necessary and sufficient condition for \(Z_{n}\xrightarrow {a.s.}\infty \) is

$$\begin{aligned} \sum _{n=0}^{\infty }(1-P(\xi _{n,1}=1))=+\infty , \end{aligned}$$

which implies \(Z_{n}\xrightarrow {P}\infty \) [17]. For example, this condition holds whenever \(\sup _{n}P(\xi _{n,1}=1)<1\).

Assume that there exists a constant \(0<\delta <1\) such that for any \(r\ge 1\),

$$\begin{aligned} \sup _{n}E\left( \left| \frac{X_{n,r}^{(j)}-m_{n,r} }{\sigma _{n,r}}\right| ^{2+\delta }\right) <\infty ,~~~~ \sup _{n}E\left( \left| \frac{V_{n}^{(j)}-1 }{\sigma _{n}}\right| ^{2+\delta }\right) <\infty . \end{aligned}$$
(1.5)

For any \(n\ge 1,~x\in {\mathbb {R}} \) and \(r\ge 1\), define

$$\begin{aligned} Q_{n,r}(x)=P\left( \frac{Z_{n+r}-m_{n,r}Z_{n}}{\sigma _{n,r}\sqrt{Z_{n}}}\le x\right) ,~~Q_{n}(x)=P\left( \frac{Z_{n}-m_{n}W}{\sigma _{n}\sqrt{Z_{n}}}\le x\right) .\quad \end{aligned}$$
(1.6)

Theorem 1.4

Let \(\{Z_{n}, n\ge 0\}\) be an IGWVE. If \(Z_n\xrightarrow {a.s.}+\infty \) and (1.2)–(1.5) are satisfied, then there exist constants \(\{C_{r},r\ge 1\}\), \(C\) and \(D\) such that

$$\begin{aligned}&\sup _{x}|Q_{n,r}(x)-\Phi (x)|\le C_{r}E\left( Z_{n}^{-\frac{\delta }{2}}\right) + C\left[ E\left( Z_{n}^{-\frac{\delta }{2}}\right) \right] ^{\frac{1}{2}},\end{aligned}$$
(1.7)
$$\begin{aligned}&\sup _{x}|Q_{n}(x)-\Phi (x)|\le DE\left( Z_{n}^{-\frac{\delta }{2}}\right) +C\left[ E\left( Z_{n}^{-\frac{\delta }{2}}\right) \right] ^{\frac{1}{2}}. \end{aligned}$$
(1.8)

Theorem 1.5

Let \(\{Z_{n};n\ge 0\}\) be an IGWVE. Suppose that there exist five constants \(\alpha ,\beta ,\tau ,\gamma ,\delta \) with \(\beta >\alpha >1,\tau >\gamma >0\) and \( 0<\delta <1\) such that for any \(n\ge 0\),

$$\begin{aligned} \alpha \le \mu _{n}\le \beta , ~~\gamma ^{2}\le \delta _{n}^{2}\le \tau ^{2}, ~~\sum _{n=0}^{\infty }\left[ E\left( Z_{n}^{-\frac{\delta }{2}}\right) \right] ^{\frac{1}{2}}<\infty . \end{aligned}$$
(1.9)

If (1.3) and (1.5) are satisfied, then

$$\begin{aligned}&\limsup _{n\rightarrow \infty }(\liminf _{n\rightarrow \infty })\frac{Z_{n+r}-m_{n,r} Z_{n}}{(2\sigma _{n,r}^{2}Z_{n}\log n)^{\frac{1}{2}}}=1(-1)~~a.s.;\end{aligned}$$
(1.10)
$$\begin{aligned}&\limsup _{n\rightarrow \infty }(\liminf _{n\rightarrow \infty }) \frac{Z_{n}-m_{n}W}{(2\sigma _{n}^{2}Z_{n}\log n)^{\frac{1}{2}}}=1(-1) ~~a.s.. \end{aligned}$$
(1.11)

Remark 1.6

If (1.9) is satisfied, then \(Z_n\xrightarrow {a.s.}+\infty \), and (1.2) and (1.4) hold.

2 A Growth Rate for IGWVE

In order to prove Theorem 1.1, we need the following lemma:

Lemma 2.1

Let \(\{X_{n},n\ge 0\}\) be a GWVE. For any fixed \(n\ge 0, r\ge 1\) one has

$$\begin{aligned} X_{n+r}=\sum _{j=1}^{X_{n}}X_{n,r}^{(j)}, \end{aligned}$$

where \(\{X_{n,r}^{(j)}; j\ge 1\}\) are i.i.d. and independent of \(X_{n}\). Furthermore,

$$\begin{aligned} m_{n,r}=E(X_{n,r}^{(j)})=\prod _{j=n}^{n+r-1}\mu _{j},~~~ \sigma _{n,r}^{2}=Var(X_{n,r}^{(j)})=(m_{n,r})^{2}\sum _{j=n}^{n+r-1} \frac{\delta _{j}^{2}}{\mu _{j}^{2}m_{n,j-n}}. \end{aligned}$$

Note that \(m_{n, 0}=1,~~ m_{n, 1}=\mu _{n},~~ X_{n,1}^{(j)}=\xi _{n,j}\) and \(\sigma _{n,1}^{2}=\delta _{n}^{2}.\)

Proof

Let \(X_{n,r}^{(j)}\) be the size of the \(r\)th generation of the GWVE initiated by the \(j\)th particle alive at time \(n\). The first claim of Lemma 2.1 then follows from Definition 1.1; the remaining claims follow as in Propositions 4 and 6 of [8]. \(\square \)
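As a consistency check of the variance formula in Lemma 2.1, take \(r=2\): \(X_{n,2}^{(1)}\) is a sum of \(X_{n,1}^{(1)}\) i.i.d. copies of \(\xi _{n+1,1}\), so the variance decomposition for random sums gives

$$\begin{aligned} \sigma _{n,2}^{2}=\mu _{n}\delta _{n+1}^{2}+\mu _{n+1}^{2}\delta _{n}^{2} =(\mu _{n}\mu _{n+1})^{2}\left[ \frac{\delta _{n}^{2}}{\mu _{n}^{2}\,m_{n,0}} +\frac{\delta _{n+1}^{2}}{\mu _{n+1}^{2}\,m_{n,1}}\right] , \end{aligned}$$

which agrees with the displayed formula, since \(m_{n,2}=\mu _{n}\mu _{n+1}\), \(m_{n,0}=1\) and \(m_{n,1}=\mu _{n}\).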

Proof of Theorem 1.1

By our basic assumption, it is obvious that \(W_n\) is integrable for all \(n\ge 1\). By the independence of \(\{Y_n,n\ge 1\}\) and \(\{\xi _{n,j},n\ge 0,j\ge 1\}\), one has

$$\begin{aligned} E(W_{n+1}|W_{n},W_{n-1},\cdots ,W_1)&= E\left( \frac{Z_{n+1}}{m_{n+1}} |Z_n,Z_{n-1},\cdots ,Z_1\right) \\&= \frac{1}{m_n}E\left[ \frac{1}{\mu _n}\left( \sum _{i=1}^{Z_n}\xi _{n,i} +Y_{n+1}\right) |Z_n\right] \\&= \frac{Z_n}{m_n}+\frac{\nu _{n+1}}{m_{n+1}}>\frac{Z_n}{m_n}=W_n, \end{aligned}$$

which means that \(\{W_n, n\ge 1\}\) is a nonnegative submartingale.

Let \(X_n\) be the size of the original GWVE at time \(n\), and for \(k\le n\) let \(U_{k,n}\) be the number of descendants at time \(n\) of the particles that immigrated in generation \(k\); then

$$\begin{aligned} Z_n=X_n+\sum _{k=1}^nU_{k,n}, \end{aligned}$$
(2.1)

where \(U_{n,n}=Y_n\). Let \({\mathcal {G}}\) be the \(\sigma -\)field generated by \(\{Y_n,n\ge 1\}\); by the independence of \(\{Y_n,n\ge 1\}\) and \(\{\xi _{n,j},n\ge 0,j\ge 1\}\) one has

$$\begin{aligned} E(W_n|\mathcal {G})=E\left( \frac{Z_n}{m_n}|\mathcal {G}\right) = E\left( \frac{X_n}{m_n}\right) + E\left( \sum _{k=1}^{n}\frac{U_{k,n}}{m_n}|\mathcal {G}\right) . \end{aligned}$$
(2.2)

Now for \(k\le n\), the random variable \(U_{k,n}\) is the size of the \((n-k)\)th generation of an ordinary GWVE started with \(Y_k\) particles at time \(k\). Therefore, by the independence of \(\{Y_n,n\ge 1\}\) and Lemma 2.1, its conditional expectation given \(\mathcal {G}\) is \(Y_{k}m_{k,n-k}\). According to (2.2) and Lemma 2.1, one has

$$\begin{aligned} E(W_n|\mathcal {G})=1+\sum _{k=1}^{n}\frac{Y_{k}}{m_k}~~\text { and}~~ E(W_n)=1+\sum _{k=1}^{n}\frac{\nu _{k}}{m_k}. \end{aligned}$$

Since (1.1) is satisfied, \(\{W_n,n\ge 1\}\) is \(L^1\)-bounded. By the submartingale convergence theorem (see [9], Theorem 2.5), there exists a nonnegative random variable \(W\) with \(E(W)<\infty \) such that \(W_n\xrightarrow {a.s.}W.\)

In order to prove the last claim of Theorem 1.1, we only need to prove that \(\{W_n,n\ge 1\}\) is \(L^2\)-bounded (see Theorem 7.6.10 of [4]). In fact, by (2.1) and the independence of \(\{Y_n,n\ge 1\}\) and \(\{\xi _{n,j},n\ge 0,j\ge 1\}\) one has

$$\begin{aligned} E(W_n^2|\mathcal {G})&= E\left( \frac{1}{m_n^2} \left[ X_n+\sum _{k=1}^nU_{k,n}\right] ^2|\mathcal {G}\right) \\&= E\left\{ \frac{1}{m_n^2}\left[ X_n^2+2X_n\sum _{k=1}^nU_{k,n} +\left( \sum _{k=1}^nU_{k,n}\right) ^2\right] |\mathcal {G}\right\} \\&= \frac{1}{m_n^2}E(X_n^2)+2\sum _{k=1}^n\frac{Y_k}{m_k}+\frac{1}{m_n^2} E\left[ \left( \sum _{k=1}^n \left( \sum _{j=1+\sum _{i=1}^{k-1}Y_{i}}^{\sum _{i=1}^{k} Y_{i}}X_{k,n-k}^{(j)}\right) \right) ^{2}|\mathcal {G}\right] \\&= \frac{1}{m_n^2}E(X_n^2)+2\sum _{k=1}^n\frac{Y_k}{m_k}\\&+\,\frac{1}{m_n^2}\left[ \sum _{k=1}^n(Y_kVar(X_{k,n-k}^{(1)}))+ \sum _{k=1}^nY_k\left( E(X_{k,n-k}^{(1)})\right) ^{2}\right] , \end{aligned}$$

where \(X_{k,n-k}^{(j)}\), defined in Lemma 2.1, is the size of the \((n-k)\)th generation of the GWVE initiated by the \(j\)th particle that immigrated at time \(k\). By Lemma 2.1 one has

$$\begin{aligned} E(W_n^2)&= \frac{1}{m_n^2}E(X_n^2)+2\sum _{k=1}^n\frac{\nu _k}{m_k}+ \frac{1}{m_n^2}\left\{ \sum _{k=1}^n (\nu _k \sigma _{k,n-k}^{2})+\sum _{k=1}^n (\nu _k m_{k,n-k}^{2})\right\} \\&= \frac{1}{m_n^2}E(X_n^2)+2\sum _{k=1}^n\frac{\nu _k}{m_k}+\sum _{k=1}^n \left[ \frac{\nu _k}{m_k^2}\cdot \left( \frac{\sigma _{k,n-k}^{2}}{m_{k,n-k}^{2}} +1\right) \right] \\&\le (1+b)+2a +ab+a<\infty , \end{aligned}$$

which means that \(\{W_n,n\ge 1\}\) is \(L^2\)-bounded. We complete the proof. \(\square \)

3 The CLT for IGWVE

In order to prove Theorem 1.2, we need the following lemmas.

Lemma 3.1

Let \(\{Z_n,n\ge 0\}\) be an IGWVE. For any fixed \(n\ge 0, r\ge 1\) one has

$$\begin{aligned} Z_{n+r}=\sum _{j=1}^{Z_n}X_{n,r}^{(j)}+Y_{n,r}~~~~~~a.s., \end{aligned}$$
(3.1)

where \(\{X_{n,r}^{(j)},j\ge 1\}\) are i.i.d. and independent of \(Y_{n,r}\). Furthermore,

$$\begin{aligned} \pi _{n,r}&= E(Y_{n,r})=\sum _{i=n+1}^{n+r}\nu _im_{i,n+r-i},\end{aligned}$$
(3.2)
$$\begin{aligned} \theta _{n,r}^{2}&= Var (Y_{n,r})= \sum _{i=n+1}^{n+r}\nu _i\sigma _{i,n+r-i}^2. \end{aligned}$$
(3.3)

Proof

Similar to Lemma 2.1, let \(X_{n,r}^{(j)}\) be the size of the \(r\)th generation of the ordinary GWVE starting with the \(j\)th particle at time \(n\). Let \(U_{k,n}\) be the number of descendants at time \(n\) of the particles that immigrated in generation \(k\). Define \(Y_{n,r}=\sum _{k=n+1}^{n+r}U_{ k,n+r}.\) Then

$$\begin{aligned} Z_{n+r}=\sum _{j=1}^{Z_n}X_{n,r}^{(j)}+Y_{n,r},~~~~a.s.. \end{aligned}$$

By the independence of \(\{Y_n,n\ge 1\}\) and \(\{\xi _{n,j},n\ge 0,j\ge 1\}\) we know that \(\{X_{n,r}^{(j)},j\ge 1\}\) are i.i.d. and independent of \(Y_{n,r}\).

Let \({\mathcal {G}}\) be the \(\sigma -\)field generated by \(\{Y_n,n\ge 1\}\). Similar to the calculation of (2.2) we have

$$\begin{aligned} E(Y_{n,r}|{\mathcal {G}})=E\left( \sum _{k=n+1}^{n+r}U_{ k,n+r}|{\mathcal {G}}\right) =\sum _{k=n+1}^{n+r}Y_{k}E(X_{k,n+r-k}^{(1)}). \end{aligned}$$

So by Lemma 2.1, we have \(E(Y_{n,r})=\sum _{k=n+1}^{n+r}\nu _{k}m_{k,n+r-k}.\) On the other hand, by the independence of \(\{Y_n,n\ge 1\}\) and \(\{\xi _{n,j},n\ge 0,j\ge 1\}\) one has

$$\begin{aligned} Var(Y_{n,r}|{\mathcal {G}})&= Var\left( \sum _{k=n+1}^{n+r}U_{ k,n+r}|{\mathcal {G}}\right) =\sum _{k=n+1}^{n+r}Var(U_{ k,n+r}|{\mathcal {G}})\\&= \!\sum _{k=n+1}^{n+r}Var\left( \sum _{j=1+\sum _{i=n+1}^{k-1} Y_{i}}^{\sum _{i=n+1}^{k}Y_{i}}X_{k,n+r-k}^{(j)}|{\mathcal {G}}\!\right) \!=\! \sum _{k=n+1}^{n+r}Y_{k}Var(X_{k,n+r-k}^{(1)}). \end{aligned}$$

So by Lemma 2.1 we have \(Var(Y_{n,r})=\sum _{k=n+1}^{n+r}\nu _{k}\sigma _{k,n+r-k}^{2}.\) We complete the proof. \(\square \)

Lemma 3.2

[8] Let \(\{X_{n},n\ge 0\}\) be a GWVE. Define \(V_{n}=X_{n}/m_{n}\) for all \(n\ge 1\), then \(\{V_{n},n\ge 1\}\) is a nonnegative martingale and there exists a nonnegative random variable \(V\) such that \(V_{n}\xrightarrow {a.s.}V\) when \(n\rightarrow \infty \).

Lemma 3.3

Let \(\{X_{n},n\ge 0\}\) be a GWVE. Define \(V_{n,r}^{(j)}=X_{n,r}^{(j)}/m_{n,r}\), where \(X_{n,r}^{(j)}\) and \(m_{n,r}\) are defined in Lemma 2.1. Then, for \(n\) and \(j\) fixed, \(\{V_{n,r}^{(j)},r\ge 1\}\) is a nonnegative martingale, and there exists a nonnegative random variable \(V_{n}^{(j)}\) such that \(V_{n,r}^{(j)}\xrightarrow {a.s.}V_{n}^{(j)}\) as \(r\rightarrow \infty \). Moreover, for any fixed \(n\ge 1\) one has

$$\begin{aligned} X_{n}-m_{n}V=\sum _{j=1}^{X_{n}}(1-V_{n}^{(j)})~~a.s., \end{aligned}$$

where \(V\) is defined in Lemma 3.2, \(\{V_{n}^{(j)};j\ge 1\}\) are i.i.d. and independent of \(X_{n}\).

Furthermore, if (1.2) is satisfied, then \(E(V_{n}^{(j)})\equiv 1\) and

$$\begin{aligned} Var(V_{n}^{(j)})=\sigma _{n}^{2}= \sum _{j=n}^{\infty }\frac{\delta _{j}^{2}}{\mu _{j}^{2}m_{n,j-n}}. \end{aligned}$$

Proof

Similar to Proposition 5 and Corollary 2 of [8], the first claim of Lemma 3.3 is true, and the second claim then follows from Lemma 2.1. Using the idea of Theorem 1 in [8], we can prove the last claim. \(\square \)
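For instance, in the constant environment \(\mu _{j}\equiv 2\), \(\delta _{j}^{2}\equiv \sigma ^{2}\) considered after Theorem 1.1, the above series evaluates to

$$\begin{aligned} \sigma _{n}^{2}=\sum _{j=n}^{\infty }\frac{\sigma ^{2}}{4\cdot 2^{j-n}}=\frac{\sigma ^{2}}{2} \end{aligned}$$

for every \(n\), so the normalization in Theorem 1.2 then uses one and the same constant in all generations.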

Lemma 3.4

Let \(\{Z_{n},n\ge 0\}\) be an IGWVE. For any fixed \(n\ge 0\), one has

$$\begin{aligned} Z_{n}-m_{n}W=\sum _{j=1}^{Z_{n}}(1-V_{n}^{(j)})- I_{n} ~~a.s., \end{aligned}$$
(3.4)

where \(\{V_{n}^{(j)},j\ge 1\}\) are i.i.d. and independent of \(I_{n}\). Furthermore, if (1.2) and (1.3) are satisfied,

$$\begin{aligned} \pi _{n}&= E(I_{n})=\sum _{k=n+1}^{\infty }\frac{\nu _{k}}{m_{n,k-n}},\end{aligned}$$
(3.5)
$$\begin{aligned} \theta _{n}^{2}&= Var(I_{n})=\sum _{k=n+1}^{\infty } \frac{\nu _{k}\sigma _{k}^{2}}{m_{n,k-n}}. \end{aligned}$$
(3.6)

Proof

Applying Theorem 1.1 one has that for each \(n\ge 1\),

$$\begin{aligned} \lim _{r\rightarrow \infty }\frac{Z_{n+r}}{m_{n+r}}=\lim _{r\rightarrow \infty }W_{n+r} =W ~~a.s.. \end{aligned}$$

So by Lemma 3.1 one deduces that

$$\begin{aligned} Z_{n}-m_{n}W&= Z_{n}-m_{n}\lim _{r\rightarrow \infty }\frac{Z_{n+r}}{m_{n+r}}\nonumber \\&= \lim _{r\rightarrow \infty }\left[ Z_{n}-\frac{m_{n}}{m_{n+r}} \left( \sum _{j=1}^{Z_{n}}X_{n,r}^{(j)}+Y_{n,r}\right) \right] \nonumber \\&= \lim _{r\rightarrow \infty }\left\{ \sum _{j=1}^{Z_{n}}\left[ 1- \frac{X_{n,r}^{(j)}}{m_{n,r}}\right] -\frac{Y_{n,r}}{m_{n,r}}\right\} ~~a.s.. \end{aligned}$$
(3.7)

By Lemma 3.3 one has

$$\begin{aligned} \lim _{r\rightarrow \infty }\sum _{j=1}^{Z_{n}}\left[ 1-\frac{X_{n,r}^{(j)}}{m_{n,r}}\right] =\sum _{j=1}^{Z_{n}}\left[ 1-V_{n}^{(j)}\right] ~~a.s.. \end{aligned}$$
(3.8)

On the other hand, define \(T_{n,r}=\frac{Y_{n,r}}{m_{n,r}}=\sum _{k=n+1}^{n+r}U_{ k,n+r}/m_{n,r}\) for each \(r\ge 1\). Let \({\mathcal {F}}_{n,r}\) be the \(\sigma -\)field generated by \(\{Y_{n},\cdots ,Y_{n+r}, \xi _{i,j}, n-1\le i\le n+r-1,j\ge 1 \}\). Note that by Lemma 3.1 one has

$$\begin{aligned} E(T_{n,r})=\frac{1}{m_{n,r}}\sum _{k=n+1}^{n+r}\nu _{k}m_{k,n+r-k}= \sum _{k=n+1}^{n+r}\frac{\nu _{k}}{m_{n,k-n}}\le c, \end{aligned}$$

which means that \(\{T_{n,r},r\ge 1\}\) is \(L^{1}-\)bounded. Now

$$\begin{aligned} E(T_{n,r+1}|{\mathcal {F}}_{n,r})&= E\bigg (\frac{Y_{n,r+1}}{m_{n,r+1}} |{\mathcal {F}}_{n,r}\bigg )=E\bigg (\frac{\sum _{k=n+1}^{n+r+1}U_{ k,n+r+1}}{m_{n,r+1}}|{\mathcal {F}}_{n,r}\bigg )\\&= \frac{1}{m_{n,r+1}}E\bigg \{Y_{n+r+1}+\sum _{k=n+1}^{n+r} \bigg [\sum _{j=1+\sum _{i=n+1}^{k-1}Y_{i}}^{\sum _{i=n+1}^{k} Y_{i}}X_{k,n+r+1-k}^{(j)}\bigg ]|{\mathcal {F}}_{n,r}\bigg \}\\&= \frac{1}{m_{n,r+1}}\bigg \{E(Y_{n+r+1})+\sum _{k=n+1}^{n+r} \bigg [\sum _{j=1+\sum _{i=n+1}^{k-1}Y_{i}}^{\sum _{i=n+1}^{k} Y_{i}}X_{k,n+r-k}^{(j)}E(\xi _{n+r,1})\bigg ]\bigg \}\\&= \frac{Y_{n,r}}{m_{n,r}}+\frac{\nu _{n+r+1}}{m_{n,r+1}}>T_{n,r}, \end{aligned}$$

which means that \(\{T_{n,r},r\ge 1\}\) is a nonnegative submartingale with respect to \(\{{\mathcal {F}}_{n,r},r\ge 1\}\); hence there exists a nonnegative random variable \(I_{n}\) with \(E(I_{n})<\infty \) such that

$$\begin{aligned} \lim _{r\rightarrow \infty }T_{n,r}=I_{n}~~a.s.. \end{aligned}$$
(3.9)

By (3.7), (3.8), and (3.9) we obtain (3.4).

Now by Lemma 3.1 one has

$$\begin{aligned} E(T_{n,r}^{2})&= Var(T_{n,r})+(E(T_{n,r}))^{2}\\ {}&= \frac{1}{m_{n,r}^{2}} \left[ \sum _{k=n+1}^{n+r}\nu _k\sigma _{k,n+r-k}^2+\left( \sum _{k=n+1}^{n+r} \nu _km_{k,n+r-k}\right) ^{2}\right] \\&\le \sum _{k=n+1}^{n+r}\left[ \frac{\nu _{k}}{m_{n,k-n}^{2}}\cdot \frac{\sigma _{k,n+r-k}^2}{m_{k,n+r-k}^2}\right] +\left( \sum _{k=n+1}^{n+r} \frac{\nu _{k}}{m_{n,k-n}}\right) ^{2}\\&\le bc+c^{2}<\infty , \end{aligned}$$

which means that \(\{T_{n,r},r\ge 1\}\) is \(L^{2}-\)bounded, so \(T_{n,r}\xrightarrow {L^{2}}I_{n}\) when \(r\rightarrow \infty \). Thus,

$$\begin{aligned} E(I_{n})&= \lim _{r\rightarrow \infty }E(T_{n,r})=\sum _{k=n+1}^{\infty } \frac{\nu _{k}}{m_{n,k-n}},\\ Var(I_{n})&= \lim _{r\rightarrow \infty }Var(T_{n,r})=\lim _{r\rightarrow \infty } \sum _{k=n+1}^{n+r}\left[ \frac{\nu _{k}}{m_{n,k-n}}\cdot \frac{\sigma _{k,n+r-k}^2}{m_{k,n+r-k}^2}\right] \\&= \sum _{k=n+1}^{\infty } \left[ \frac{\nu _{k}\sigma _{k}^{2}}{m_{n,k-n}}\right] . \end{aligned}$$

We complete the proof of Lemma 3.4. \(\square \)
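Continuing the constant-environment illustration (\(\mu _{k}\equiv 2\), \(\delta _{k}^{2}\equiv \sigma ^{2}\), \(\nu _{k}\equiv 1\), hence \(\sigma _{k}^{2}\equiv \sigma ^{2}/2\) as computed after Lemma 3.3), formulas (3.5) and (3.6) give

$$\begin{aligned} \pi _{n}=\sum _{k=n+1}^{\infty }\frac{1}{2^{k-n}}=1,~~~~ \theta _{n}^{2}=\sum _{k=n+1}^{\infty }\frac{\sigma ^{2}/2}{2^{k-n}}=\frac{\sigma ^{2}}{2}, \end{aligned}$$

so the correction term \(I_{n}\) has mean and variance bounded uniformly in \(n\).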

Now we consider a double sequence of random variables \(\{\zeta _{n,j}, j\ge 1,n\ge 1\}\), where for any \(n\ge 1\), \(\{\zeta _{n,j}, j\ge 1\}\) are i.i.d. Let \(\{N_{n},n\ge 1\}\) be a sequence of random variables taking values in \({\mathbb {Z}}_{+}:=\{1,2,\cdots \}\) such that \(N_{n}\xrightarrow {P}\infty \) and, for any \(n\ge 1\), \(N_{n}\) is independent of \(\{\zeta _{n,j}, j\ge 1\}\). Define

$$\begin{aligned} S_{N_{n}}=\sum _{j=1}^{N_{n}}\zeta _{n,j}, \end{aligned}$$

then we have the following result:

Lemma 3.5

If \(E(\zeta _{n,j})\equiv 0\) and \(Var(\zeta _{n,j})\equiv 1\), then one has

$$\begin{aligned} \frac{S_{N_{n}}}{\sqrt{N_{n}}}\xrightarrow []{d}N(0,1),n\rightarrow \infty . \end{aligned}$$

Proof

Since \(\{\zeta _{n,j},j\ge 1\}\) have the same distribution, we can set

$$\begin{aligned} \varphi _{n}(t)=E\left( \exp \left( it\zeta _{n,j}\right) \right) . \end{aligned}$$

Note that \(E(\zeta _{n,j})\equiv 0\) and \(Var(\zeta _{n,j})\equiv 1\); according to (3.8) on p. 101 of [18], one obtains

$$\begin{aligned} \varphi _{n}(s)=\varphi _{n}(0)+\varphi '_{n}(0)s+\frac{\varphi ''_{n}(0)}{2!} s^{2}+o(s^{2})=1-\frac{s^{2}}{2}+o(s^{2}),~~(s\rightarrow 0). \end{aligned}$$

For any fixed \(t\), and for \(k\) large enough,

$$\begin{aligned} \varphi _{n}\left( \frac{t}{\sqrt{k}}\right) =1-\frac{t^{2}}{2k}+ o\left( \frac{1}{2k}\right) . \end{aligned}$$
(3.10)

Since for any \(n\ge 1\), \(\{N_{n},\zeta _{n,j},j\ge 1\}\) are independent, we have

$$\begin{aligned} E\left( \exp \left( it\frac{S_{N_{n}}}{\sqrt{N_{n}}}\right) \right)&= \sum _{k=1}^{\infty }E\left( \exp \left( it\frac{S_{N_{n}}}{\sqrt{N_{n}}}\right) |N_{n}=k\right) P(N_{n}=k)\nonumber \\&= \sum _{k=1}^{\infty }E\left( \exp \left( it\frac{S_{k}}{\sqrt{k}}\right) \right) P(N_{n}=k)\nonumber \\&= \sum _{k=1}^{\infty }\left( E\left( \exp \left( it\frac{\zeta _{n,1}}{\sqrt{k}}\right) \right) \right) ^{k}P(N_{n}=k). \end{aligned}$$
(3.11)

Fix \(t\in {\mathbb {R}}:=(-\infty ,+\infty )\) and \(\varepsilon >0\). Note that \((1-\frac{x}{n}+o(\frac{1}{n}))^{n}\rightarrow e^{-x}~(n\rightarrow \infty ), \) so there exists a constant \(M=M(\varepsilon )>0\) such that for any \(k\ge M\) one has

$$\begin{aligned} \left| \left( 1-\frac{t^{2}}{2k}+o\left( \frac{1}{2k}\right) \right) ^{k}- \exp \left( -\frac{t^{2}}{2}\right) \right| \le \frac{\varepsilon }{4}. \end{aligned}$$
(3.12)

Since \(N_{n}\xrightarrow {P}\infty \), there exists a constant \(N=N(\varepsilon )>1\) such that \(P(N_{n}\le M)<\frac{\varepsilon }{4}\) for any \(n\ge N\). Thus, when \(n\ge N\), by (3.10), (3.11) and (3.12) one has

$$\begin{aligned} \Delta&:= \left| E\left( \exp \left( it\frac{S_{N_{n}}}{\sqrt{N_{n}}}\right) \right) -\exp \left( -\frac{t^{2}}{2}\right) \right| \\&= \left| \sum _{k=1}^{\infty }\left( E\left( \exp \left( it\frac{\zeta _{n,1}}{\sqrt{k}}\right) \right) \right) ^{k}P(N_{n}=k) -\exp \left( -\frac{t^{2}}{2}\right) \right| \nonumber \\&\le \left| \sum _{k=1}^{M}\left[ \left( E\left( \exp \left( it \frac{\zeta _{n,1}}{\sqrt{k}}\right) \right) \right) ^{k} -\exp \left( -\frac{t^{2}}{2}\right) \right] P(N_{n}=k)\right| \nonumber \\&+\left| \sum _{k=M+1}^{\infty }\left[ \left( 1-\frac{t^{2}}{2k} +o\left( \frac{1}{2k}\right) \right) ^{k}- \exp \left( -\frac{t^{2}}{2}\right) \right] P(N_{n}=k)\right| \nonumber \\&\le 2\cdot \frac{\varepsilon }{4}+ \frac{\varepsilon }{4}\cdot \sum _{k=M+1}^{\infty }P(N_{n}=k) <\varepsilon . \end{aligned}$$

This means that the characteristic function of \(S_{N_{n}}/\sqrt{N_{n}}\) converges to that of a standard normal random variable, so by Lévy's continuity theorem we complete the proof of this lemma. \(\square \)
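Lemma 3.5 is also easy to probe numerically. A minimal sketch, with the purely illustrative choices of \(\zeta _{n,j}\) as centered unit-rate exponentials and \(N_{n}=1+\text {Poisson}(n)\) (neither is prescribed by the lemma):

```python
import numpy as np
from scipy import stats

def randomized_clt_samples(n, reps, rng):
    """Draw reps copies of S_{N_n} / sqrt(N_n) as in Lemma 3.5.

    Illustrative choices: zeta_{n,j} = Exp(1) - 1 (mean 0, variance 1)
    and N_n = 1 + Poisson(n), drawn independently of the zeta's.
    """
    out = np.empty(reps)
    for k in range(reps):
        N = 1 + rng.poisson(n)
        zeta = rng.exponential(1.0, size=N) - 1.0
        out[k] = zeta.sum() / np.sqrt(N)
    return out

rng = np.random.default_rng(1)
samples = randomized_clt_samples(n=200, reps=5000, rng=rng)
print(samples.mean(), samples.var())  # close to 0 and 1
print(stats.kstest(samples, "norm"))  # small distance to N(0,1)
```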

Proof of Theorem 1.2

By Lemma 3.1, for any \(n\ge 0,r\ge 1\) we have

$$\begin{aligned} Z_{n+r}-m_{n,r}Z_{n}=\sum _{j=1}^{Z_n}(X_{n,r}^{(j)}-m_{n,r})+Y_{n,r}~~a.s., \end{aligned}$$

where \(\{X_{n,r}^{(j)}, j\ge 1\}\) are i.i.d. and independent of \(Z_{n}\), \(E(X_{n,r}^{(j)})\equiv m_{n,r}\), \(Var(X_{n,r}^{(j)})\equiv \sigma _{n,r}^{2}\). According to Lemma 3.5, if we can prove that \(Y_{n,r}/(\sigma _{n,r}\sqrt{Z_{n}})\xrightarrow {P}0\) as \(n\rightarrow \infty \) with \(r\) fixed, then we complete the proof of the first part of Theorem 1.2. In fact, for any \(\epsilon >0\),

$$\begin{aligned} P\left( \frac{Y_{n,r}}{\sigma _{n,r}\sqrt{Z_{n}}}\ge \epsilon \right) =E\left( P\left( \frac{Y_{n,r}}{\sigma _{n,r}\sqrt{Z_{n}}}\ge \epsilon |Z_{n}\right) \right) \le \frac{\theta _{n,r}^{2}}{\epsilon ^{2}\sigma _{n,r}^{2}}\cdot E\left( \frac{1}{Z_{n}}\right) . \end{aligned}$$

Note that \(Z_{n}\xrightarrow {P}\infty \) and \(Z_n\ge 1\); for any \(\epsilon >0\), there exist two constants \(N,M>0\) such that \(1/M<\epsilon /2\) and \(P(Z_n<M)<\epsilon /2\) for all \(n\ge N.\) When \(n\ge N,\)

$$\begin{aligned} E\left( \frac{1}{Z_{n}}\right)&= E\left( \frac{1}{Z_{n}}I_{[Z_n<M]}\right) +E\left( \frac{1}{Z_{n}}I_{[Z_n\ge M]}\right) \\&\le 1\cdot P(Z_n<M) +\frac{1}{M}P(Z_n\ge M)< \epsilon . \end{aligned}$$

Since (1.2), (1.3) and (1.4) are satisfied,

$$\begin{aligned} P\left( \frac{Y_{n,r}}{\sigma _{n,r}\sqrt{Z_{n}}}\ge \epsilon \right) \le \frac{bc}{\epsilon ^{2}d}\cdot E\left( \frac{1}{Z_{n}}\right) \rightarrow 0,~~n\rightarrow \infty . \end{aligned}$$

By Lemma 3.4, for any \(n\ge 1\) one has

$$\begin{aligned} Z_{n}-m_{n}W=\sum _{j=1}^{Z_{n}}\left( 1-V_{n}^{(j)}\right) -I_{n}~~a.s., \end{aligned}$$

where \(\{V_{n}^{(j)};j\ge 1\}\) are i.i.d. and independent of \(Z_{n}\), \(E(V_{n}^{(j)})\equiv 1\), \(Var(V_{n}^{(j)})\equiv \sigma _{n}^{2}\). Similarly, according to Lemma 3.5, if we can prove that \(I_{n}/(\sigma _{n}\sqrt{Z_{n}})\xrightarrow {P}0\) as \(n\rightarrow \infty \), then we complete the proof of the second part of Theorem 1.2. In fact, for any \(\epsilon >0\),

$$\begin{aligned} P\left( \frac{I_{n}}{\sigma _{n}\sqrt{Z_{n}}}\ge \epsilon \right) =E\left( P\left( \frac{I_{n}}{\sigma _{n}\sqrt{Z_{n}}}\ge \epsilon |Z_{n}\right) \right) \le \frac{\theta _{n}^{2}}{\epsilon ^{2}\sigma _{n}^{2}}\cdot E\left( \frac{1}{Z_{n}}\right) . \end{aligned}$$

Note that \(Z_{n}\xrightarrow {P}\infty \) and that (1.2)–(1.4) are satisfied; then

$$\begin{aligned} P\left( \frac{I_{n}}{\sigma _{n}\sqrt{Z_{n}}}\ge \epsilon \right) \le \frac{bc}{\epsilon ^{2}d}\cdot E\left( \frac{1}{Z_{n}}\right) \rightarrow 0,~~n\rightarrow \infty . \end{aligned}$$

We complete the proof of Theorem 1.2. \(\square \)
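The first convergence in Theorem 1.2 can be checked by a small Monte Carlo experiment, reusing the illustrative offspring and immigration choices from the sketch after Definition 1.1 (there \(\delta _{n}^{2}=\mu _{n}-1\), since \(Var(1+\text {Poisson}(\lambda ))=\lambda \)), with \(m_{n,r}\) and \(\sigma _{n,r}^{2}\) computed from Lemma 2.1:

```python
import numpy as np

rng = np.random.default_rng(2)
mu = 1.5 + 0.4 * np.cos(np.arange(25))  # environment as in the earlier sketch
delta2 = mu - 1.0                       # Var(1 + Poisson(mu_n - 1)) = mu_n - 1
nu = np.ones(25)
n, r = 20, 5

def m_sigma2(n, r):
    """m_{n,r} and sigma_{n,r}^2 from Lemma 2.1."""
    prod, s = 1.0, 0.0
    for j in range(n, n + r):
        s += delta2[j] / (mu[j] ** 2 * prod)  # delta_j^2 / (mu_j^2 m_{n,j-n})
        prod *= mu[j]                         # prod is now m_{n,j-n+1}
    return prod, prod ** 2 * s

def step(Z, k):
    # one generation: offspring 1 + Poisson(mu_k - 1), then Poisson immigration
    return Z + rng.poisson(Z * (mu[k] - 1.0)) + rng.poisson(nu[k])

m_nr, s2 = m_sigma2(n, r)
vals = []
for _ in range(4000):
    Z = 1
    for k in range(n):
        Z = step(Z, k)
    Zn = Z
    for k in range(n, n + r):
        Z = step(Z, k)
    vals.append((Z - m_nr * Zn) / np.sqrt(s2 * Zn))
vals = np.array(vals)
print(vals.mean(), vals.var())  # approximately 0 and 1 for large n
```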

4 Convergence Rate in the CLT for IGWVE

In order to prove Theorem 1.4, we need the following lemmas.

Lemma 4.1

([7], p. 322) Let \(\{X_n,n\ge 1\}\) be a sequence of independent random variables with \(E(X_{n})=0, n=1,2,\cdots \). Define

$$\begin{aligned} L_{n}=\sum _{j=1}^{n}X_{j},~~~~ G_{n}(x)=P\left( \frac{L_{n}}{\sqrt{Var(L_{n})}}\le x\right) , n\ge 1,x\in {\mathbb {R}}. \end{aligned}$$

If there exists a constant \(0<\delta <1\) such that

$$\begin{aligned} \Gamma _{n}^{2+\delta }:=\sum _{j=1}^{n}E|X_{j}|^{2+\delta }<\infty , \end{aligned}$$

then there exists a constant \(A\) such that

$$\begin{aligned} \sup _{x}|G_{n}(x)-\Phi (x)|\le A\Gamma _{n}^{2+\delta }[Var(L_{n})]^{-(1+\delta /2)}, \end{aligned}$$

where \(\Phi (x)\) is the standard normal distribution function.

Lemma 4.2

Let \(\{\zeta _{n,j}, j\ge 1,n\ge 1\}\) be a double sequence of random variables with mean zero and variance 1, where for any \(n\ge 1\), \(\{\zeta _{n,j}, j\ge 1\}\) are i.i.d. Let \(\{N_{n},n\ge 1\}\) be a sequence of integer-valued random variables with \(P(N_{n}\rightarrow \infty )=1\) such that, for any \(n\ge 1\), \(N_{n}\) and \(\{\zeta _{n,j}, j\ge 1\}\) are independent, and let \(\{k_n,n\ge 1\}\) be a sequence of strictly increasing positive integers. Define

$$\begin{aligned} \tilde{L}_{n}&= \sum _{j=1}^{k_{n}}\zeta _{n,j},~~~~ \tilde{G}_{n}(x)=P\left( \frac{\tilde{L}_{n}}{\sqrt{k_{n}}}\le x\right) .\\ \widehat{L}_{n}&= \sum _{j=1}^{N_{n}}\zeta _{n,j},~~~~ \widehat{G}_{n}(x)=P\left( \frac{\widehat{L}_{n}}{\sqrt{N_{n}}}\le x\right) . \end{aligned}$$

If there exist two constants \(0<\delta <1\) and \(M>0\) such that for any \(n\ge 1\),

$$\begin{aligned} \gamma _{n}:=E(|\zeta _{n,j}|^{2+\delta })\le M, \end{aligned}$$

then there exists a constant \(C\) (not depending on \(\{k_{n}\}\)) such that

$$\begin{aligned} \sup _{x}|\tilde{G}_{n}(x)-\Phi (x)|\le Ck_{n}^{-\frac{\delta }{2}}, ~~~~\sup _{x}|\widehat{G}_{n}(x)-\Phi (x)|\le CE(N_{n}^{-\frac{\delta }{2}}). \end{aligned}$$
(4.1)

Proof

Let \(C=AM\), where \(A\) is the constant in Lemma 4.1; then the first inequality of (4.1) follows from Lemma 4.1. As for the second inequality, since for any \(n\ge 1\), \(N_{n}\) and \(\{\zeta _{n,j}, j\ge 1\}\) are independent, one has

$$\begin{aligned} \widehat{G}_{n}(x)-\Phi (x)&= P\left( \frac{\sum _{i=1}^{N_{n}}\zeta _{n,i}}{\sqrt{N_{n}}} \le x\right) -\Phi (x)\nonumber \\&= \sum _{j=1}^{\infty }\left[ P\left( \frac{\sum _{i=1}^{j}\zeta _{n,i}}{\sqrt{j}}\le x\right) -\Phi (x)\right] P(N_{n}=j). \end{aligned}$$

By the first inequality of (4.1), one has

$$\begin{aligned} -CE\left( N_{n}^{-\frac{\delta }{2}}\right)&= -C\sum _{j=1}^{\infty } j^{-\frac{\delta }{2}}P(N_{n}=j)=\sum _{j=1}^{\infty }\left( -C\cdot j^{-\frac{\delta }{2}}\right) P(N_{n}=j)\nonumber \\&\le \sum _{j=1}^{\infty }\left( P\left( \frac{\sum _{i=1}^{j}\zeta _{n,i}}{\sqrt{j}}\le x\right) -\Phi (x)\right) P(N_{n}=j)\le CE\left( N_{n}^{-\frac{\delta }{2}}\right) . \end{aligned}$$

We complete the proof. \(\square \)

Lemma 4.3

Assume that \(\{\zeta _{n,j}, j\ge 1,n\ge 1\}, \{N_{n},n\ge 1\}\) satisfy the conditions of Lemma 4.2. Let \(\{\eta _{n},n\ge 1\}\) be a sequence of independent random variables with \(E(|\eta _{n}|)<\infty \) and for any \(n\ge 1\), \(\eta _{n}\) is independent of \(\{\zeta _{n,j}, j\ge 1\}\) and \(N_{n}\). Define

$$\begin{aligned} \widehat{H}_{n}(x)=P\left( \frac{\widehat{L}_{n}+\eta _{n}}{\sqrt{N_{n}}}\le x\right) , \end{aligned}$$

then for any sequence \(\epsilon _{n}\) of positive constants one has

$$\begin{aligned} \sup _{x}|\widehat{H}_{n}(x)-\Phi (x)|\le CE\left( N_{n}^{-\frac{\delta }{2}}\right) +\epsilon _{n}^{-1}E\left( N_{n}^{-\frac{1}{2}}\right) E(|\eta _{n}|)+\frac{\epsilon _{n}}{2}. \end{aligned}$$
(4.2)

Proof

Let \(\Phi (x)\) be the distribution function of the standard normal distribution. By Lemma 4.2 and the independence of \(\eta _{n}\) and \(\{\zeta _{n,j}, j\ge 1\}\) one has

$$\begin{aligned} -CE(N_{n}^{-\frac{\delta }{2}})&= -C\sum _{j=1}^{\infty } j^{-\frac{\delta }{2}}P(N_{n}=j)\nonumber \\&= -C\sum _{j=1}^{\infty }j^{-\frac{\delta }{2}}\left[ \int _{-\infty }^{\infty } dP(j^{-\frac{1}{2}}\eta _{n}\le y)\right] P(N_{n}=j)\nonumber \\&\le \sum _{j=1}^{\infty }\bigg [\int _{-\infty }^{\infty }\bigg \{P\bigg (j^{-\frac{1}{2}} \sum _{i=1}^{j}\zeta _{n,i}\le x-y\bigg )\nonumber \\&-\Phi (x-y)\}dP(j^{-\frac{1}{2}} \eta _{n}\le y)\bigg ] P(N_{n}=j) \le CE(N_{n}^{-\frac{\delta }{2}}).\qquad \end{aligned}$$
(4.3)

But, since \(\eta _{n}\) is independent of \(N_{n}\) and \(\{\zeta _{n,j},j\ge 1\}\),

$$\begin{aligned}&P\left( \frac{\widehat{L}_{n}+\eta _{n}}{\sqrt{N_{n}}}\le x\right) -P\left( \zeta +N_{n}^{-\frac{1}{2}}\eta _{n}\le x\right) \nonumber \\&\quad =\sum _{j=1}^{\infty }\bigg [\int _{-\infty }^{\infty }\bigg \{P\bigg (j^{-\frac{1}{2}} \sum _{i=1}^{j}\zeta _{n,i}\le x-y\bigg )-\Phi (x-y)\bigg \}dP(j^{-\frac{1}{2}} \eta _{n}\le y)\bigg ]P(N_{n}=j), \end{aligned}$$
(4.4)

where \(\zeta \) has the standard normal distribution and is independent of \(\eta _{n}\). By (4.3) and (4.4), one has

$$\begin{aligned} \sup _{x}\left| P\left( \frac{\widehat{L}_{n}+\eta _{n}}{\sqrt{N_{n}}}\le x\right) -P\left( \zeta +N_{n}^{-\frac{1}{2}}\eta _{n}\le x\right) \right| \le CE\left( N_{n}^{-\frac{\delta }{2}}\right) . \end{aligned}$$
(4.5)

Now for any \(\epsilon _{n}>0\)

$$\begin{aligned} P(\zeta +N_{n}^{-\frac{1}{2}}\eta _{n}\le x)&= P(\zeta +N_{n}^{-\frac{1}{2}}\eta _{n}\le x, N_{n}^{-\frac{1}{2}}|\eta _{n}|\le \epsilon _{n})\nonumber \\&+P(\zeta +N_{n}^{-\frac{1}{2}}\eta _{n}\le x, N_{n}^{-\frac{1}{2}}|\eta _{n}|> \epsilon _{n})\nonumber \\&\le P(\zeta \le x+\epsilon _{n})+P(N_{n}^{-\frac{1}{2}}|\eta _{n}|> \epsilon _{n}). \end{aligned}$$
(4.6)

Similarly, we have

$$\begin{aligned} P(\zeta +N_{n}^{-\frac{1}{2}}\eta _{n}> x)\le P(\zeta > x-\epsilon _{n})+P(N_{n}^{-\frac{1}{2}}|\eta _{n}|> \epsilon _{n}), \end{aligned}$$

or equivalently,

$$\begin{aligned} P(\zeta +N_{n}^{-\frac{1}{2}}\eta _{n}\le x)\ge P(\zeta \le x-\epsilon _{n})-P(N_{n}^{-\frac{1}{2}}|\eta _{n}|> \epsilon _{n}). \end{aligned}$$
(4.7)

Also, by the mean value theorem, we know that

$$\begin{aligned} \sup _{x}|P(\zeta \le x-\epsilon _{n})-P(\zeta \le x)|&= \sup _{x}|\Phi (x-\epsilon _{n})-\Phi (x)|\nonumber \\&= \sup _{x}\Phi '(\alpha (x))\epsilon _{n}<\frac{\epsilon _{n}}{2}. \end{aligned}$$
(4.8)

Similarly,

$$\begin{aligned} \sup _{x}|P(\zeta \le x+\epsilon _{n})-P(\zeta \le x)|<\frac{\epsilon _{n}}{2}. \end{aligned}$$
(4.9)

By (4.6)–(4.9) and the Markov inequality one has

$$\begin{aligned} \Lambda _{n}&:= \sup _{x}|P(\zeta +N_{n}^{-\frac{1}{2}}\eta _{n}\le x)-\Phi (x)|\le P(N_{n}^{-\frac{1}{2}}|\eta _{n}|> \epsilon _{n})+\frac{\epsilon _{n}}{2}\nonumber \\&\le \epsilon _{n}^{-1}E(N_{n}^{-\frac{1}{2}}| \eta _{n}|)+\frac{\epsilon _{n}}{2}=\epsilon _{n}^{-1} E(N_{n}^{-\frac{1}{2}})E(|\eta _{n}|)+\frac{\epsilon _{n}}{2}. \end{aligned}$$
(4.10)

By (4.5) and (4.10) one obtains (4.2). We complete the proof of Lemma 4.3. \(\square \)
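The role of \(\epsilon _{n}\) in (4.2) is a simple trade-off: writing \(K_{n}=E(N_{n}^{-\frac{1}{2}})E(|\eta _{n}|)\), the function \(\epsilon \mapsto K_{n}\epsilon ^{-1}+\epsilon /2\) is minimized at \(\epsilon =\sqrt{2K_{n}}\), with minimal value \(\sqrt{2K_{n}}\). The choice of \(\epsilon _{n}\) in the proof of Theorem 1.4 below realizes this balance up to constants.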

Lemma 4.4

Let \(\{Z_{n}, n\ge 0\}\) be an IGWVE. If (1.2)–(1.5) are satisfied, then there exist constants \(\{C_{r},r\ge 1\}\) and \(D\) such that for any sequence \(\{\epsilon _{n},n\ge 1\}\) of positive constants,

$$\begin{aligned}&\sup _{x}|Q_{n,r}(x)-\Phi (x)|\le C_{r}E\left( Z_{n}^{-\frac{\delta }{2}}\right) +(\epsilon _{n}\sigma _{n,r})^{-1}\pi _{n,r}E\left( Z_{n}^{-\frac{\delta }{2}}\right) +\frac{\epsilon _{n}}{2},\end{aligned}$$
(4.11)
$$\begin{aligned}&\sup _{x}|Q_{n}(x)-\Phi (x)|\le DE(Z_{n}^{-\frac{\delta }{2}})+(\epsilon _{n} \sigma _{n})^{-1}\pi _{n}E\left( Z_{n}^{-\frac{\delta }{2}}\right) +\frac{\epsilon _{n}}{2}. \end{aligned}$$
(4.12)

Proof

By Lemma 3.1 one has

$$\begin{aligned} \frac{Z_{n+r}-m_{n,r}Z_{n}}{\sigma _{n,r}}= \sum _{j=1}^{Z_{n}} \frac{X_{n,r}^{(j)}-m_{n,r}}{\sigma _{n,r}}+\frac{Y_{n,r}}{\sigma _{n,r}}~~a.s., \end{aligned}$$

where \(\{X_{n,r}^{(j)}; j\ge 1\}\) are i.i.d. and are independent of \(Z_{n}\) and \(Y_{n,r}\). Furthermore,

$$\begin{aligned} E\left( \frac{X_{n,r}^{(j)}-m_{n,r}}{\sigma _{n,r}}\right) \equiv 0,~~~~ Var\left( \frac{X_{n,r}^{(j)}-m_{n,r}}{\sigma _{n,r}}\right) \equiv 1. \end{aligned}$$

Note that \(Y_{n,r}\) is independent of \(Z_{n}\), by Lemma 4.3 and (1.5) we obtain (4.11).

Similarly, by Lemma 3.4 one has

$$\begin{aligned} \frac{Z_{n}-m_{n}W}{\sigma _{n}}=\sum _{j=1}^{Z_{n}} \frac{1-V_{n}^{(j)}}{\sigma _{n}}+\frac{I_{n}}{\sigma _{n}}, \end{aligned}$$

where \(\{V_{n}^{(j)}; j\ge 1\}\) are i.i.d. and are independent of \(Z_{n}\) and \(I_{n}\). Furthermore,

$$\begin{aligned} E\left( \frac{V_{n}^{(j)}-1}{\sigma _{n}}\right) \equiv 0,~~~~ Var\left( \frac{V_{n}^{(j)}-1}{\sigma _{n}}\right) \equiv 1. \end{aligned}$$

Note that \(I_{n}\) is independent of \(Z_{n}\), by Lemma 4.3 and (1.5) we obtain (4.12). \(\square \)

Proof of Theorem 1.4

Note that by Lemmas 2.1 and 3.1, for any \(n\ge 1,r\ge 1\), one has

$$\begin{aligned} \frac{\pi _{n,r}}{\sigma _{n,r}}=\frac{\sum _{i=n+1}^{n+r}\nu _im_{i,n+r-i}}{m_{n,r} \sqrt{\sum _{j=n}^{n+r-1}\frac{\delta _j^2}{\mu _j^2m_{n,j-n}}}}\le \frac{1}{\sqrt{d}}\sum _{i=n+1}^{n+r}\frac{\nu _i}{m_{n,i-n}}\le \frac{c}{\sqrt{d}} \end{aligned}$$

and by Lemmas 3.3 and 3.4, for any \(n\ge 1\) we have

$$\begin{aligned} \frac{\pi _{n}}{\sigma _{n}}=\frac{\sum _{i=n+1}^{\infty }\frac{\nu _i}{m_{n,i-n}}}{ \sqrt{\sum _{j=n}^{\infty }\frac{\delta _j^2}{\mu _j^2m_{n,j-n}}}}\le \frac{c}{\sqrt{d}}. \end{aligned}$$

Taking \(\epsilon _{n}=\left[ E\left( Z_{n}^{-\delta /2}\right) \right] ^{\frac{1}{2}}\) in (4.11) and (4.12) and \(C=c/\sqrt{d}+1/2\), one obtains Theorem 1.4. \(\square \)

5 The LIL for IGWVE

In order to prove Theorem 1.5, we need the following lemmas.

Lemma 5.1

For any \(\varepsilon >0\) one has

$$\begin{aligned}&\sum _{n=2}^{\infty }[1-\Phi ((1+\varepsilon )(2\log n)^{\frac{1}{2}})]<\infty ,\end{aligned}$$
(5.1)
$$\begin{aligned}&\sum _{n=2}^{\infty }[1-\Phi ((1-\varepsilon )(2\log n)^{\frac{1}{2}})]=\infty . \end{aligned}$$
(5.2)

Proof

For any \(\varepsilon >0\), one has

$$\begin{aligned} 1-\Phi ((1+\varepsilon )(2\log n)^{\frac{1}{2}})&= \int _{(1+\varepsilon )(2\log n)^{\frac{1}{2}}}^{\infty }\frac{1}{\sqrt{2\pi }}e^{-\frac{t^{2}}{2}}dt \le \sum _{m=m_{0}}^{\infty }\frac{1}{\sqrt{2\pi }}e^{-\frac{m^{2}}{2}}\nonumber \\&\le \sum _{i=0}^{\infty }\frac{1}{\sqrt{2\pi }}e^{-\frac{m_{0}(m_{0}+i)}{2}} =N\sum _{i=0}^{\infty }\left( \frac{1}{\sqrt{e^{m_{0}}}}\right) ^{i}, \end{aligned}$$
(5.3)

where \(m_{0}=\left[ (1+\varepsilon )(2\log n)^{\frac{1}{2}}\right] -1\) and \(N=\frac{1}{\sqrt{2\pi }}e^{-\frac{m_{0}^{2}}{2}}\). When \(n\) is large enough one has \(\frac{1}{\sqrt{e^{m_{0}}}}<\frac{1}{2}\), so

$$\begin{aligned} 1-\Phi ((1+\varepsilon )(2\log n)^{\frac{1}{2}})\le N\sum _{i=0}^{\infty }\left( \frac{1}{\sqrt{e^{m_{0}}}}\right) ^{i}\le 2N =O(n^{-(1+\varepsilon )^{2}}), \end{aligned}$$
(5.4)

so we obtain (5.1). On the other hand,

$$\begin{aligned} 1-\Phi ((1-\varepsilon )(2\log n)^{\frac{1}{2}})=\int _{(1-\varepsilon )(2\log n)^{\frac{1}{2}}}^{\infty }\frac{1}{\sqrt{2\pi }}e^{-\frac{t^{2}}{2}}dt \ge cn^{-(1-\varepsilon )^{2}}(\log n)^{-\frac{1}{2}}, \end{aligned}$$
(5.5)

where \(c\) is some positive constant; since \((1-\varepsilon )^{2}<1\), the resulting series diverges, and (5.2) follows from (5.5). \(\square \)
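Alternatively, both (5.1) and (5.2) follow at once from the standard Gaussian tail bounds

$$\begin{aligned} \frac{x}{1+x^{2}}\cdot \frac{1}{\sqrt{2\pi }}e^{-\frac{x^{2}}{2}}\le 1-\Phi (x)\le \frac{1}{x\sqrt{2\pi }}e^{-\frac{x^{2}}{2}},~~~~x>0. \end{aligned}$$

With \(x=(1+\varepsilon )(2\log n)^{\frac{1}{2}}\) the upper bound is \(O(n^{-(1+\varepsilon )^{2}}(\log n)^{-\frac{1}{2}})\), which is summable, while with \(x=(1-\varepsilon )(2\log n)^{\frac{1}{2}}\) the lower bound is of order \(n^{-(1-\varepsilon )^{2}}(\log n)^{-\frac{1}{2}}\), whose sum diverges since \((1-\varepsilon )^{2}<1\).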

Lemma 5.2

Let \(\{Z_{n}; n\ge 0\}\) be an IGWVE; then for any fixed \(r\ge 1\), \(\{Z_{rn}; n\ge 0\}\) is an IGWVE. If (1.2)–(1.5) are satisfied, then

$$\begin{aligned} \sup _{x}|Q_{rn,r}(x)-\Phi (x)|\le C_{r}E\left( Z_{rn}^{-\frac{\delta }{2}}\right) + C\left[ E\left( Z_{rn}^{-\frac{\delta }{2}}\right) \right] ^{\frac{1}{2}}, \end{aligned}$$
(5.6)

where \(C_{r}\) and \(C\) are defined in Theorem 1.4.

Proof

For any fixed \(r\ge 1\), let \(Z'_{n}=Z_{rn}\), by Lemma 3.1 one has

$$\begin{aligned} Z'_{n+1}=Z_{rn+r}=\sum _{j=1}^{Z_{rn}}X_{rn,r}^{(j)} +Y_{rn,r}=\sum _{j=1}^{Z'_{n}}X_{rn,r}^{(j)}+Y_{rn,r}, \end{aligned}$$

and for any \(n\ge 0 \), \(\{X_{rn,r}^{(j)},j\ge 1\}\) are i.i.d. By the proof of Lemma 3.1 one may see that \(\{X_{rn,r}^{(j)},j\ge 1\}\) are independent of \(\{Z'_{1},Z'_{2},\cdots ,Z'_{n}\}\) and \(Y_{rn,r}\), \(\{Y_{rn,r},n\ge 0\}\) are independent and for any \(n\ge 1\), \(Y_{rn,r}\) is independent of \(Z'_{n}\), so \(\{Z_{rn}; n\ge 0\}\) is in fact an IGWVE.

(5.6) is derived from Theorem 1.4. \(\square \)

Proof of Theorem 1.5

We only need to prove the \(\limsup \) part because of symmetry. Note that (1.2) and (1.4) are satisfied by (1.9). According to Theorem 1.4 and the assumption that \(\sum _{n=0}^{\infty }\left[ E \left( Z_{n}^{-\frac{\delta }{2}}\right) \right] ^{\frac{1}{2}}<\infty \) one has

$$\begin{aligned} \sum _{n=1}^{\infty }\sup _{x}\left| Q_{n,r}(x)-\Phi (x)\right| <\infty . \end{aligned}$$
(5.7)

By Lemma 5.1, one has that for any \(\varepsilon >0\),

$$\begin{aligned} \sum _{n=2}^{\infty }\left[ 1-\Phi ((1+\varepsilon )(2\log n)^{\frac{1}{2}})\right] <\infty , \end{aligned}$$

so applying (5.7) one obtains

$$\begin{aligned} \sum _{n=2}^{\infty }\left[ 1-Q_{n,r}\left( (1+\varepsilon )(2\log n)^{\frac{1}{2}} \right) \right] <\infty . \end{aligned}$$
(5.8)

Using (5.8) and the Borel-Cantelli lemma one has

$$\begin{aligned} \limsup _{n\rightarrow \infty }\frac{Z_{n+r}-m_{n,r}Z_{n}}{(2\sigma _{n,r}^{2}Z_{n}\log n)^{\frac{1}{2}}}\le 1~~a.s. \end{aligned}$$
(5.9)

and

$$\begin{aligned} \liminf _{n\rightarrow \infty }\frac{Z_{n+r}-m_{n,r}Z_{n}}{(2\sigma _{n,r}^{2}Z_{n}\log n)^{\frac{1}{2}}}\ge -1~~a.s.. \end{aligned}$$
(5.10)

Fix \(r=1\). For any \(0<\varepsilon <1, n\ge 2\), define

$$\begin{aligned} A_{n}=\left\{ Z_{n}-\mu _{n-1}Z_{n-1}>(1-\varepsilon ) \delta _{n-1}(2Z_{n-1}\log (n-1))^{\frac{1}{2}}\right\} , \end{aligned}$$

we know that \(A_{n}\in \sigma (Z_{1},Z_{2},\cdots ,Z_{n})\). Observe that

$$\begin{aligned} P(A_{n+1}|Z_{1},Z_{2},\cdots ,Z_{n})&= \!P\left( \sum _{i=1}^{Z_{n}} (\xi _{n,i}\!-\!\mu _{n})\!+\!Y_{n+1}\!>\!(1\!-\!\varepsilon )\delta _{n}(2Z_{n}\log n)^{\frac{1}{2}}|Z_{n}\right) \nonumber \\&\ge P\left( \sum _{i=1}^{Z_{n}}(\xi _{n,i}-\mu _{n})>(1-\varepsilon ) \delta _{n}(2Z_{n}\log n)^{\frac{1}{2}}|Z_{n}\right) \nonumber \\&\ge 1-\Phi \left( (1-\varepsilon )(2\log n)^{\frac{1}{2}}\right) -CZ_{n}^{-\frac{\delta }{2}}, \end{aligned}$$
(5.11)

where the last inequality is derived from Lemma 4.3 when \(N_{n}\) is a constant. By Lemma 5.1,

$$\begin{aligned} \sum _{n=2}^{\infty }\left[ 1-\Phi ((1-\varepsilon )(2\log n)^{\frac{1}{2}})\right] =\infty . \end{aligned}$$
(5.12)

According to the assumption that \(\sum _{n=0}^{\infty }\left[ E \left( Z_{n}^{-\frac{\delta }{2}}\right) \right] ^{\frac{1}{2}}<\infty \), together with (5.11) and (5.12), one has

$$\begin{aligned} \sum _{n=1}^{\infty }P(A_{n+1}|Z_{1},Z_{2},\cdots ,Z_{n})=\infty , \end{aligned}$$

then by the extension of the Borel-Cantelli lemma ([16], Corollary 7.20) one has \(P(A_{n}~i.o.)=1\), which implies

$$\begin{aligned} \limsup _{n\rightarrow \infty }\frac{Z_{n+1}-\mu _{n}Z_{n}}{(2\delta _{n}^{2}Z_{n}\log n)^{\frac{1}{2}}}\ge 1~~a.s.. \end{aligned}$$
(5.13)

Given \(r>1\), we use \(Z'_{n}\) to denote \(Z_{rn}\). Then by Lemma 5.2, \(\{Z'_{n},n\ge 0\}\) is an IGWVE. Applying the method used in the proof of the case \(r=1\) to the process \(\{Z'_{n},n\ge 0\}\), one has

$$\begin{aligned} \limsup _{n\rightarrow \infty }\frac{Z'_{n+1}-m_{rn,r}Z'_{n}}{(2\sigma _{rn,r}^{2}Z'_{n}\log n)^{\frac{1}{2}}}\ge 1~~a.s., \end{aligned}$$

which means

$$\begin{aligned} \limsup _{n\rightarrow \infty }\frac{Z_{rn+r}-m_{rn,r}Z_{rn}}{(2\sigma _{rn,r}^{2}Z_{rn}\log rn)^{\frac{1}{2}}}\ge 1~~a.s.. \end{aligned}$$

But for any fixed \(r>1,\)

$$\begin{aligned} \limsup _{n\rightarrow \infty }\frac{Z_{n+r}-m_{n,r} Z_{n}}{(2\sigma _{n,r}^{2}Z_{n}\log n)^{\frac{1}{2}}} \ge \limsup _{n\rightarrow \infty }\frac{Z_{rn+r}-m_{rn,r} Z_{rn}}{(2\sigma _{rn,r}^{2}Z_{rn}\log rn)^{\frac{1}{2}}}\ge 1~~a.s.. \end{aligned}$$
(5.14)

Finally, we derive (1.10) from (5.9) and (5.14).

According to Theorem 1.4 one has

$$\begin{aligned} \sum _{n=1}^{\infty }\sup _{x}\left| Q_{n}(x)-\Phi (x)\right| <\infty , \end{aligned}$$
(5.15)

but Lemma 5.1 tells us for any \(\varepsilon >0\)

$$\begin{aligned} \sum _{n=2}^{\infty }\left[ 1-\Phi \left( (1+\varepsilon )(2\log n)^{\frac{1}{2}}\right) \right] <\infty , \end{aligned}$$

so by (5.15) one has

$$\begin{aligned} \sum _{n=2}^{\infty }\left[ 1-Q_{n}\left( (1+\varepsilon )(2\log n)^{\frac{1}{2}} \right) \right] <\infty . \end{aligned}$$

Thus by the Borel-Cantelli Lemma,

$$\begin{aligned}&\limsup _{n\rightarrow \infty }\frac{Z_{n}-m_{n}W}{(2\sigma _{n}^{2}Z_{n}\log n)^{\frac{1}{2}}}\le 1 ~~~~a.s.,\end{aligned}$$
(5.16)
$$\begin{aligned}&\liminf _{n\rightarrow \infty }\frac{Z_{n}-m_{n}W}{(2\sigma _{n}^{2}Z_{n}\log n)^{\frac{1}{2}}}\ge -1 ~~~~a.s.. \end{aligned}$$
(5.17)

As for the lower bound, first note that

$$\begin{aligned} \liminf _{n\rightarrow \infty }\frac{m_{n}W-Z_{n}}{(2\sigma _{n}^{2}Z_{n}\log n)^{\frac{1}{2}}}&\le \liminf _{n\rightarrow \infty }\frac{m_{n,r}^{-1} Z_{n+r}-Z_{n}}{(2\sigma _{n}^{2}Z_{n}\log n)^{\frac{1}{2}}}\nonumber \\&-\liminf _{n\rightarrow \infty }\frac{m_{n,r}^{-1}Z_{n+r}-m_{n}W}{(2\sigma _{n}^{2}Z_{n}\log n)^{\frac{1}{2}}}=:I_{1}-I_{2}, \end{aligned}$$
(5.18)

but by (1.10) we have

$$\begin{aligned} I_{1}\le -\inf _{n}\frac{\sigma _{n,r}}{\sigma _{n}m_{n,r}}~~~a.s., \end{aligned}$$
(5.19)

Recalling the assumption that \(\alpha \le \mu _{k}\le \beta \) and \(\gamma ^{2}\le \delta _{k}^{2}\le \tau ^{2}\) for any \(k\ge 1\), and the definitions of \(\sigma _{n,r}\) and \(\sigma _{n}\), one derives

$$\begin{aligned} 1-K_{2}\beta ^{-r}\ge \frac{\sigma _{n,r}^{2}}{\sigma _{n}^{2}m_{n,r}^{2}}=1- \frac{\sum _{j=n+r}^{\infty }\frac{\delta _{j}^{2}}{\mu _{j}^{2}m_{n,j-n}}}{ \sum _{j=n}^{\infty }\frac{\delta _{j}^{2}}{\mu _{j}^{2}m_{n,j-n}}}\ge 1-K_{1}\alpha ^{-r}, \end{aligned}$$
(5.20)

where \(K_1\) and \(K_2\) are two positive constants. Now, we consider \(I_{2}\). One sees that

$$\begin{aligned} \frac{m_{n,r}^{-1}Z_{n+r}-m_{n}W}{(2\sigma _{n}^{2}Z_{n}\log n)^{\frac{1}{2}}}&= \frac{Z_{n+r}-m_{n+r}W}{(2\sigma _{n+r}^{2}Z_{n+r}\log (n+r))^{\frac{1}{2}}}\nonumber \\&\cdot \left[ \frac{\frac{Z_{n+r}}{m_{n+r}}}{\frac{Z_{n}}{m_{n}}}\right] ^{\frac{1}{2}} \cdot \left[ \frac{\sigma _{n+r}^{2}}{m_{n,r}\sigma _{n}^{2}}\right] ^{\frac{1}{2}}\cdot \left[ \frac{\log (n+r)}{\log n}\right] ^{\frac{1}{2}}, \end{aligned}$$
(5.21)

so, since the last factor in (5.21) tends to 1 as \(n\rightarrow \infty \), according to (5.21) we have

$$\begin{aligned} I_{2}\ge -\sup _{n}\left[ \frac{\sigma _{n+r}^{2}}{m_{n,r} \sigma _{n}^{2}}\right] ^{\frac{1}{2}}~~a.s., \end{aligned}$$
(5.22)

but \(\sup _{n}\frac{1}{m_{n,r}}\le \alpha ^{-r}\) and

$$\begin{aligned} \sup _{n}\frac{\sigma _{n+r}^{2}}{\sigma _{n}^{2}}\le \frac{\sum _{j=n+r}^{\infty }\frac{\tau ^{2}}{\alpha ^{j-n-r+2}}}{ \sum _{j=n}^{\infty }\frac{\gamma ^{2}}{\beta ^{j-n+2}}}= \frac{\tau ^{2}\beta (\beta -1)}{\gamma ^{2}\alpha (\alpha -1)}=:K_{3}. \end{aligned}$$
(5.23)

Since \(r\) is arbitrary, by (5.18)–(5.23) one has

$$\begin{aligned} \liminf _{n\rightarrow \infty }\frac{m_{n}W-Z_{n}}{(2\sigma _{n}^{2}Z_{n}\log n)^{\frac{1}{2}}}\le -1~~ a.s., \end{aligned}$$

which implies

$$\begin{aligned} \limsup _{n\rightarrow \infty }\frac{Z_{n}-m_{n}W}{(2\sigma _{n}^{2}Z_{n}\log n)^{\frac{1}{2}}}\ge 1 ~~~~a.s.. \end{aligned}$$

\(\square \)