1 Introduction

Throughout this paper we assume that \(\{X, X_{n}\}_{n\in \mathbb {N}}\) is a sequence of independent and identically distributed (i.i.d.) random variables with zero mean and finite second moment. For \(1\le k< n\), let \(S_{n}=\sum _{i=1}^{n} X_i\), \(S_{k, n}=\sum _{i=k+1}^{n} X_i\), \(V^2_n=\sum _{i=1}^nX^2_i\), \(M_n=\max _{1\le i\le n}X_i\), and \(M_{k, n}=\max _{k+1\le i\le n}X_i\). The quantities \(S_n/V_n\) and \(M_n\) are the self-normalized partial sum and the maximum, respectively. The sequence \(\{X_n\}_{n\in \mathbb {N}}\) is said to satisfy the central limit theorem (CLT), or the random variable X is said to belong to the domain of attraction of the normal law, if there exist constants \(a_n>0\) and \(b_n\in \mathbb {R}\) such that \((S_n-b_n)/a_n\mathop {\longrightarrow }\limits ^{d} \mathcal {N},\) where \(\mathcal {N}\) is a standard normal random variable and \(\mathop {\longrightarrow }\limits ^{d}\) denotes convergence in distribution. It is well known that the CLT is equivalent to

$$\begin{aligned} \mathop {\mathrm{lim}} \limits _{x\rightarrow \infty }\frac{x^2\mathbb {P}(|X|>x)}{\mathbb {E}X^2I(|X|\le x)}=0. \end{aligned}$$
(1.1)

Giné et al. [7] considered the limiting properties of self-normalized partial sums and obtained the following self-normalized version of the central limit theorem: \( {S_n}/V_n\mathop {\longrightarrow }\limits ^{d} \mathcal {N}\) as \(n\rightarrow \infty \) if and only if (1.1) holds.
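
As a numerical illustration of this result (our addition, not part of the original argument), the following minimal Monte Carlo sketch samples from the symmetric Pareto density \(f(x)=|x|^{-3}\), \(|x|\ge 1\), which has mean zero and infinite variance yet satisfies (1.1), and checks that the empirical distribution of \(S_n/V_n\) is close to standard normal. The sample size, replication count, and seed are arbitrary choices.

```python
# Monte Carlo sketch of the self-normalized CLT of Gine et al. [7].
# X has the symmetric Pareto density f(x) = |x|^{-3} for |x| >= 1:
# mean zero, infinite variance, but in the domain of attraction of
# the normal law, so S_n/V_n should be approximately N(0, 1).
from math import erf, sqrt
import numpy as np

rng = np.random.default_rng(0)
n, reps = 10_000, 2_000

# Inverse-CDF sampling: |X| = U^{-1/2} gives P(|X| > x) = x^{-2}, x >= 1.
u = 1.0 - rng.random((reps, n))                     # U in (0, 1]
x = rng.choice([-1.0, 1.0], size=(reps, n)) * u ** -0.5

t = x.sum(axis=1) / np.sqrt((x ** 2).sum(axis=1))   # S_n / V_n

for q in (-1.0, 0.0, 1.0):
    phi = 0.5 * (1.0 + erf(q / sqrt(2.0)))          # Phi(q)
    print(f"P(S_n/V_n <= {q:+.1f}): empirical {np.mean(t <= q):.3f}, "
          f"normal {phi:.3f}")
```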

Brosamler [3] and Schatte [15] obtained the almost sure central limit theorem (ASCLT) for partial sums. Improved and generalized ASCLT results for partial sums were obtained by Miao [12], Berkes and Csáki [1], Hörmann [8], and Wu [16, 17]. Huang and Pang [10], Wu [18], Wu and Chen [21], and Zhang and Yang [23] obtained ASCLT results for the self-normalized version, while Lin [11] and Cao and Peng [4] obtained ASCLT results for maxima. Further, Zang [22] and Peng et al. [14] obtained ASCLT results for partial sums and maxima jointly. In particular, Peng et al. [14] proved the following: let \(\{X, X_{n}\}_{n\in \mathbb {N}}\) be i.i.d. random variables with \(\mathbb {E}X=0\) and \(\mathbb {E}X^2=1\), and suppose there exist constants \(a_{n}>0\), \(b_n\in \mathbb {R}\) and a nondegenerate distribution function G(y) such that

$$\begin{aligned} \mathop {\mathrm{lim}} \limits _{n\rightarrow \infty }\mathbb {P}\left( \frac{M_n-b_n}{a_n}\le y\right) =G(y),\quad -\infty<y<\infty . \end{aligned}$$
(1.2)

Then

$$\begin{aligned} \mathop {\mathrm{lim}} \limits _{n\rightarrow \infty }\frac{1}{D_ n}\sum \limits _{k=1}^{n}d_kI\left( \frac{S_k}{\sqrt{k}}\le x, \frac{M_k-b_k}{a_k}\le y \right) =\Phi (x)G(y)\quad \mathrm{a.s.}\quad \mathrm{for}\,\mathrm{all}\quad x, y\in \mathbb {R}, \end{aligned}$$
(1.3)

with \(d_k=1/k\) and \(D_n=\sum _{k=1}^{n}d_k\), where I denotes the indicator function and \(\Phi (x)\) is the standard normal distribution function.
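
As a rough numerical illustration of (1.3) (our addition; the norming constants below are the classical ones for standard normal X, for which (1.2) holds with the Gumbel law \(G(y)=\exp (-e^{-y})\)), one can evaluate the weighted average along a single sample path. Since the averaging is logarithmic, the agreement is only approximate.

```python
# One-path sketch of the ASCLT (1.3) of Peng et al. [14] with d_k = 1/k.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
x0, y0 = 0.0, 1.0                       # evaluation point (x, y)

xs = rng.standard_normal(n)
S = np.cumsum(xs)
M = np.maximum.accumulate(xs)
k = np.arange(1, n + 1)

ln = np.log(k[2:])                      # start at k = 3 so log(log k) > 0
a = 1.0 / np.sqrt(2.0 * ln)             # classical norming for normal maxima
b = np.sqrt(2.0 * ln) - (np.log(ln) + np.log(4.0 * np.pi)) / (2.0 * np.sqrt(2.0 * ln))

d = 1.0 / k[2:]                         # weights d_k = 1/k
ind = (S[2:] / np.sqrt(k[2:]) <= x0) & ((M[2:] - b) / a <= y0)
lhs = np.sum(d * ind) / np.sum(d)
print(f"weighted average {lhs:.3f} vs Phi(0)G(1) = {0.5 * np.exp(-np.exp(-1.0)):.3f}")
```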

However, no ASCLT for self-normalized partial sums and maxima has been reported. Since the denominator of the self-normalized partial sum is itself random, the ASCLT for self-normalized partial sums and maxima is more difficult to study.

The ASCLT differs from the CLT through the weight sequence. By a classical theorem of Hardy (see e.g. [5], p. 35), if (1.3) holds with a weight sequence \(\{d_k; k\ge 1\}\), then, under certain regularity conditions, it also holds for every smaller weight sequence; one should therefore expect stronger results from larger weights. Schatte [15] pointed out that the ASCLT fails for the weight \(d_k=1\). It is thus of considerable interest to determine the optimal weights.

The purpose of this paper is to establish the ASCLT for self-normalized partial sums and maxima of i.i.d. random variables. We show that the ASCLT holds under the fairly general weight sequence \(d_k=k^{-1}\exp (\ln ^\alpha k)\), \(0\le \alpha <1\).

Our theorem is formulated as follows.

Theorem 1.1

Let \(\{X,X_n\}_{n\in \mathbb {N}}\) be a sequence of i.i.d. random variables with \(\mathbb {E}X=0\) and \(\mathbb {E}X^2=1\). Set

$$\begin{aligned} d_k=\frac{\exp (\ln ^\alpha k)}{k}, D_n=\sum \limits _{k=1}^{n}d_k,\quad \mathrm{for}\quad 0\le \alpha <1. \end{aligned}$$
(1.4)

Suppose that (1.2) holds. Then

$$\begin{aligned} \mathop {\mathrm{lim}} \limits _{n\rightarrow \infty }\frac{1}{D_n} \sum \limits _{k=1}^{n}d_{k}I\left( \frac{S_k}{V_k}\le x, \frac{M_k-b_k}{a_k}\le y \right) =\Phi (x)G(y)\quad \mathrm{a.s.}\quad \mathrm{for}\,\mathrm{any}\quad x, y\in \mathbb {R}. \end{aligned}$$
(1.5)
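
The following one-path sketch (our addition, with the same illustrative norming constants as in the sketch after (1.3)) evaluates the left-hand side of (1.5) for standard normal X with the weights (1.4) and \(\alpha =0.5\).

```python
# One-path sketch of (1.5): self-normalized sums with the weights (1.4).
import numpy as np

rng = np.random.default_rng(2)
n, alpha = 200_000, 0.5
x0, y0 = 0.0, 1.0

xs = rng.standard_normal(n)
S = np.cumsum(xs)
V = np.sqrt(np.cumsum(xs ** 2))
M = np.maximum.accumulate(xs)
k = np.arange(1, n + 1)

ln = np.log(k[2:])
a = 1.0 / np.sqrt(2.0 * ln)
b = np.sqrt(2.0 * ln) - (np.log(ln) + np.log(4.0 * np.pi)) / (2.0 * np.sqrt(2.0 * ln))

d = np.exp(ln ** alpha) / k[2:]         # d_k = exp(ln^alpha k) / k from (1.4)
ind = (S[2:] / V[2:] <= x0) & ((M[2:] - b) / a <= y0)
print("weighted average:", np.sum(d * ind) / np.sum(d),
      " target Phi(0)G(1):", 0.5 * np.exp(-np.exp(-1.0)))
```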

Remark 1.2

In the terminology of summation procedures, Theorem 1.1 remains valid if we replace the weight sequence \(\{d_k\}_{k\in \mathbb {N}}\) by any \(\{d_k^*\}_{k\in \mathbb {N}}\) with \(0\le d_k^*\le d_k\) and \(\sum _{k=1}^\infty d_k^*=\infty \).

Remark 1.3

Our results not only extend the ASCLT for partial sums and maxima obtained by Peng et al. [14] to self-normalized partial sums and maxima, but also substantially improve the weight sequence used in Corollary 2.2 of Peng et al. [14].

Remark 1.4

By Theorem 1 of Schatte [15], the ASCLT does not hold for \(\alpha =1\), i.e., for \(d_k=1\). Therefore, in a sense, our Theorem 1.1 is of optimal form.

Remark 1.5

Since \(x^2\mathbb {P}(|X|>x)\le \mathbb {E}X^2I(|X|>x)\rightarrow 0\) and \(\mathbb {E}X^2I(|X|\le x)\rightarrow \mathbb {E}X^2=1\) as \(x\rightarrow \infty \), the assumption \(\mathbb {E}X^2=1<\infty \) implies that (1.1) holds, so X is in the domain of attraction of the normal law.

2 Proofs

In the following, \(a_n\sim b_n\) denotes \(\mathop {\mathrm{lim}}\nolimits _{n\rightarrow \infty } a_n/b_n=1\) and \(a_n\ll b_n\) denotes that there exists a constant \(c>0\) such that \(a_n\le cb_n\) for sufficiently large n. The symbol c stands for a generic positive constant which may differ from one place to another.

To prove Theorem 1.1, the following three lemmas play an important role. Lemma 2.1 is due to Csörgő et al. [6]; the proofs of Lemmas 2.2 and 2.3 are rather involved and are given in the Appendix.

Lemma 2.1

Let X be a random variable, and denote \(l(x)=\mathbb {E}X^2I\{|X|\le x\}\). The following statements are equivalent:

(i) X is in the domain of attraction of the normal law.

(ii) \(x^2\mathbb {P}(|X|>x)=o(l(x))\).

(iii) \(x\mathbb {E}(|X|I(|X|>x))=o(l(x))\).

(iv) \(\mathbb {E}(|X|^\beta I(|X|\le x))=o(x^{\beta -2}l(x))\) for \(\beta >2\).
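
As a concrete illustration (our addition, not needed in the sequel), let X have the symmetric Pareto density \(f(x)=|x|^{-3}\) for \(|x|\ge 1\), so that \(\mathbb {E}X=0\) and \(\mathbb {E}X^2=\infty \). Direct computation gives, for \(x\ge 1\),

$$\begin{aligned} l(x)=2\ln x,\quad x^2\mathbb {P}(|X|>x)=1,\quad x\mathbb {E}(|X|I(|X|>x))=2,\quad \mathbb {E}(|X|^3I(|X|\le x))=2(x-1), \end{aligned}$$

so (ii)–(iv) hold (each ratio is of order \(1/\ln x\)), and X lies in the domain of attraction of the normal law even though its variance is infinite.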

Lemma 2.2

Let \(\{X_n\}_{n\in \mathbb {N}}\) be a sequence of random variables, and let \(\xi _{k,j}:={f}(X_{k+1},\ldots ,X_j)\) and \(\xi _{j}:=\xi _{0,j}={g}(X_1,\ldots ,X_j)\) be functions depending only on \(X_{k+1},\ldots ,X_j\) and \(X_1,\ldots ,X_j\), respectively. If there exist constants \(c>0\) and \(\delta >0\) such that

$$\begin{aligned} |\xi _{k,j}|\le c,\quad \mathrm{for}\,\mathrm{any}\quad 0\le k<j, \end{aligned}$$
(2.1)

and

$$\begin{aligned} |\mathbb {E}\xi _k\xi _j|\ll \left( \frac{k}{j}\right) ^\delta , \mathbb {E}|\xi _j-\xi _{k,j}|\ll \left( \frac{k}{j}\right) ^\delta ,\quad \mathrm{for}\quad 1\le k<j, \end{aligned}$$
(2.2)

then

$$\begin{aligned} \mathop {\mathrm{lim}} \limits _{n\rightarrow \infty }\frac{1}{D_n}\sum \limits _{k=1}^{n}d_{k}\xi _k=0\quad \mathrm{a.s.}, \end{aligned}$$
(2.3)

where \(d_k\) and \(D_n\) are defined by (1.4).

Let \(l(x)=\mathbb {E}X^2I\{|X|\le x\}\), \(b=\inf \{x\ge 1; l(x)>0\}\) and

$$\begin{aligned} \eta _j=\inf \left\{ s; s\ge b+1, \frac{l(s)}{s^2}\le \frac{1}{j}\right\} \quad \mathrm{for}\quad j\ge 1. \end{aligned}$$

By the definition of \(\eta _j\), we have \(jl(\eta _j)\le \eta ^2_j\) and \(jl(\eta _j-\varepsilon )>(\eta _j-\varepsilon )^2\) for any \(\varepsilon >0\). This implies that

$$\begin{aligned} nl(\eta _n)\sim \eta ^2_n,\quad \mathrm{as}\quad n\rightarrow \infty . \end{aligned}$$
(2.4)
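
For instance (continuing the symmetric Pareto illustration above, an example we add for concreteness), \(l(s)=2\ln s\) is continuous and \(l(s)/s^2\) is strictly decreasing for \(s\ge 2\), so for all large j, \(\eta _j\) solves \(l(\eta _j)/\eta _j^2=1/j\) exactly; that is,

$$\begin{aligned} \eta _j^2=2j\ln \eta _j,\qquad jl(\eta _j)=\eta _j^2,\qquad \eta _j\sim \sqrt{j\ln j},\quad \mathrm{as}\quad j\rightarrow \infty . \end{aligned}$$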

Let

$$\begin{aligned} \bar{X}_{ni}=X_iI(|X_i|\le \eta _n),\bar{S}_n=\sum \limits _{i=1}^{n}\bar{X}_{ni}, \bar{V}^2_n=\sum \limits _{i=1}^{n}\bar{X}^2_{ni},\quad \mathrm{for}\quad 1\le i\le n, \end{aligned}$$
$$\begin{aligned} \bar{S}_{k,n}=\sum \limits _{i=k+1}^{n}\bar{X}_{ni},\quad \bar{V}^2_{k,n}=\sum \limits _{i=k+1}^{n}\bar{X}^2_{ni} ,\quad \mathrm{for}\quad 0\le k< n. \end{aligned}$$

Lemma 2.3

Suppose that the assumptions of Theorem 1.1 hold. Then

$$\begin{aligned} \mathop {\mathrm{lim}} \limits _{n\rightarrow \infty }\frac{1}{D_n}\sum \limits _{k=1}^{n}d_{k} I\left( \frac{\bar{S}_k-\mathbb {E}\bar{S}_k}{\sqrt{kl(\eta _k)}}\le x, \frac{M_k-b_k}{a_k}\le y\right) =\Phi (x)G(y)\quad \mathrm{a.s.}\quad \mathrm{for}\,\mathrm{any}\quad x, y\in \mathbb {R}, \end{aligned}$$
(2.5)
$$\begin{aligned} \mathop {\mathrm{lim}} \limits _{n\rightarrow \infty }\frac{1}{D_n}\sum \limits _{k=1}^{n}d_{k} \left( I\left( \bigcup \limits _{i=1}^k(|X_i|>\eta _k)\right) -\mathbb {E}I\left( \bigcup \limits _{i=1}^k(|X_i|>\eta _k)\right) \right) =0\quad \mathrm{a.s.}, \end{aligned}$$
(2.6)
$$\begin{aligned} \mathop {\mathrm{lim}} \limits _{n\rightarrow \infty }\frac{1}{D_n}\sum \limits _{k=1}^{n}d_{k} \left( f\left( \frac{\bar{V}_k^2}{kl(\eta _k)}\right) - \mathbb {E}f\left( \frac{\bar{V}_k^2}{kl(\eta _k)}\right) \right) =0\quad \mathrm{a.s.}, \end{aligned}$$
(2.7)

where \(d_k\) and \(D_n\) are defined by (1.4) and f is a non-negative, bounded Lipschitz function.

Proof of Theorem 1.1

For any given \(0<\varepsilon <1\), note that

$$\begin{aligned}&I\left( \frac{S_k}{V_k}\le x, \frac{M_k-b_k}{a_k}\le y\right) \le I\left( \frac{\bar{S}_k}{\sqrt{(1+\varepsilon )kl(\eta _k)}}\le x, \frac{M_k-b_k}{a_k}\le y\right) \\&\quad +\,I\left( \bar{V}^2_k>(1+\varepsilon )kl(\eta _k)\right) +\,I\left( \bigcup \limits _{i=1}^k(|X_i|>\eta _k)\right) ,\quad \mathrm{for}\quad x\ge 0, \end{aligned}$$
$$\begin{aligned}&I\left( \frac{S_k}{V_k}\le x, \frac{M_k-b_k}{a_k}\le y\right) \le I\left( \frac{\bar{S}_k}{\sqrt{(1-\varepsilon )kl(\eta _k)}}\le x, \frac{M_k-b_k}{a_k}\le y\right) \\&\quad +\,I\left( \bar{V}^2_k<(1-\varepsilon )kl(\eta _k)\right) +\,I\left( \bigcup \limits _{i=1}^k(|X_i|>\eta _k)\right) ,\quad \mathrm{for}\quad x<0, \end{aligned}$$

and

$$\begin{aligned}&I\left( \frac{S_k}{V_k}\le x, \frac{M_k-b_k}{a_k}\le y\right) \ge I\left( \frac{\bar{S}_k}{\sqrt{(1-\varepsilon )kl(\eta _k)}}\le x, \frac{M_k-b_k}{a_k}\le y\right) \\&\quad -\,I\left( \bar{V}^2_k<(1-\varepsilon )kl(\eta _k)\right) -\,I\left( \bigcup \limits _{i=1}^k(|X_i|>\eta _k)\right) ,\quad \mathrm{for}\quad x\ge 0, \end{aligned}$$
$$\begin{aligned}&I\left( \frac{S_k}{V_k}\le x, \frac{M_k-b_k}{a_k}\le y\right) \ge I\left( \frac{\bar{S}_k}{\sqrt{(1+\varepsilon )kl(\eta _k)}}\le x, \frac{M_k-b_k}{a_k}\le y\right) \\&\quad -\,I\left( \bar{V}^2_k>(1+\varepsilon )kl(\eta _k)\right) -\,I\left( \bigcup \limits _{i=1}^k(|X_i|>\eta _k)\right) ,\quad \mathrm{for}\quad x<0. \end{aligned}$$

Hence, to prove (1.5), it suffices to prove

$$\begin{aligned} \mathop {\mathrm{lim}} \limits _{n\rightarrow \infty }\frac{1}{D_n}\sum \limits _{k=1}^{n}d_k I\left( \frac{\bar{S}_k}{\sqrt{kl(\eta _k)}}\le x\sqrt{1\pm \varepsilon }, \frac{M_k-b_k}{a_k}\le y\right) =\Phi (x\sqrt{1\pm \varepsilon })G(y)\quad \mathrm{a.s.}, \end{aligned}$$
(2.8)
$$\begin{aligned} \mathop {\mathrm{lim}} \limits _{n\rightarrow \infty }\frac{1}{D_n}\sum \limits _{k=1}^{n}d_k I\left( \bigcup \limits _{i=1}^k(|X_i|>\eta _k)\right) =0\quad \mathrm{a.s.}, \end{aligned}$$
(2.9)
$$\begin{aligned} \mathop {\mathrm{lim}} \limits _{n\rightarrow \infty }\frac{1}{D_n}\sum \limits _{k=1}^{n}d_k I(\bar{V}^2_k>(1+\varepsilon )kl(\eta _k))=0\quad \mathrm{a.s.}, \end{aligned}$$
(2.10)
$$\begin{aligned} \mathop {\mathrm{lim}} \limits _{n\rightarrow \infty }\frac{1}{D_n}\sum \limits _{k=1}^{n}d_k I(\bar{V}^2_k<(1-\varepsilon )kl(\eta _k))=0\quad \mathrm{a.s.} \end{aligned}$$
(2.11)

by the arbitrariness of \(\varepsilon >0\).

We first prove (2.8). Let \(0<\beta <1/2\) and let \(h(\cdot ,\cdot )\) be a real-valued function such that, for any given \(x, y\in \mathbb {R}\),

$$\begin{aligned} I\left( \chi \le \sqrt{1\pm \varepsilon }x-\beta , \frac{M_k-b_k}{a_k}\le y\right) \le h(\chi , y)\le I\left( \chi \le \sqrt{1\pm \varepsilon }x+\beta , \frac{M_k-b_k}{a_k}\le y\right) . \end{aligned}$$
(2.12)

As noted in Remark 1.5, X is in the domain of attraction of the normal law. By \(\mathbb {E}X=0\), Lemma 2.1 (iii), and (2.4), we have

$$\begin{aligned} |\mathbb {E}\bar{S}_k|= & {} |k\mathbb {E}XI(|X|\le \eta _k)|=|k\mathbb {E}XI(|X|>\eta _k)|\le k\mathbb {E}|X|I(|X|>\eta _k)\\= & {} o(\sqrt{kl(\eta _k)}),\quad \mathrm{as}\quad k\rightarrow \infty . \end{aligned}$$

Combining this with (2.5), (2.12), and the arbitrariness of \(\beta \) in (2.12), we conclude that (2.8) holds.

By Lemma 2.1 (ii) and (2.4), we get

$$\begin{aligned} \mathbb {P}(|X|>\eta _j)=o(1)\frac{l(\eta _j)}{\eta ^2_j}=\frac{o(1)}{j},\quad \mathrm{as}\quad j\rightarrow \infty . \end{aligned}$$
(2.13)

Combining this with (2.6) and the Toeplitz lemma yields

$$\begin{aligned} 0\le & {} \frac{1}{D_n}\sum \limits _{k=1}^{n}d_k I\left( \bigcup \limits _{i=1}^k(|X_i|>\eta _k)\right) \sim \frac{1}{D_n}\sum \limits _{k=1}^{n}d_k \mathbb {E}I\left( \bigcup \limits _{i=1}^k(|X_i|>\eta _k)\right) \\\le & {} \frac{1}{D_n}\sum \limits _{k=1}^{n}d_k k\mathbb {P}(|X|>\eta _k)\rightarrow 0\quad \mathrm{a.s.},\quad \mathrm{as}\quad n\rightarrow \infty . \end{aligned}$$

That is, (2.9) holds.

Now we prove (2.10). For any \(\mu >0\), let f be a non-negative, bounded Lipschitz function such that

$$\begin{aligned} I(x>1+\mu )\le f(x)\le I(x>1+\mu /2). \end{aligned}$$
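
One admissible choice, which we record here for concreteness, is the piecewise linear function

$$\begin{aligned} f(x)=\min \left( 1, \max \left( 0, \frac{2}{\mu }\left( x-1-\frac{\mu }{2}\right) \right) \right) , \end{aligned}$$

which vanishes for \(x\le 1+\mu /2\), equals 1 for \(x\ge 1+\mu \), and is Lipschitz with constant \(2/\mu \).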

Since \(\mathbb {E}\bar{V}^2_k=kl(\eta _k)\) and the \(\bar{X}_{ki}\), \(1\le i\le k\), are i.i.d., Chebyshev's inequality, Lemma 2.1 (iv), and (2.4) give

$$\begin{aligned} \mathbb {P}\left( \bar{V}^2_k>\left( 1+\frac{\mu }{2}\right) kl(\eta _k)\right)= & {} \mathbb {P}\left( \bar{V}^2_k-\mathbb {E}\bar{V}^2_k>\frac{\mu }{2}kl(\eta _k)\right) \\\ll & {} \frac{\mathbb {E}(\bar{V}^2_k-\mathbb {E}\bar{V}^2_k)^2}{k^2l^2(\eta _k)} \ll \frac{\mathbb {E}X^4I(|X|\le \eta _k)}{kl^2(\eta _k)}\\= & {} \frac{o(1)\eta ^2_k}{kl(\eta _k)}=o(1),\quad \mathrm{as}\quad k\rightarrow \infty . \end{aligned}$$

Therefore, from (2.7) and the Toeplitz lemma,

$$\begin{aligned} 0\le & {} \frac{1}{D_n}\sum \limits _{k=1}^{n}d_k I\left( \bar{V}^2_k>(1+\mu )kl(\eta _k)\right) \le \frac{1}{D_n}\sum \limits _{k=1}^{n}d_k f\left( \frac{\bar{V}^2_k}{kl(\eta _k)}\right) \\\sim & {} \frac{1}{D_n}\sum \limits _{k=1}^{n}d_k \mathbb {E}f\left( \frac{\bar{V}^2_k}{kl(\eta _k)}\right) \le \frac{1}{D_n}\sum \limits _{k=1}^{n}d_k \mathbb {E}I\left( \bar{V}^2_k>(1+\mu /2)kl(\eta _k)\right) \\= & {} \frac{1}{D_n}\sum \limits _{k=1}^{n}d_k \mathbb {P}(\bar{V}^2_k>(1+\mu /2)kl(\eta _k))\\\rightarrow & {} 0\quad \mathrm{a.s.},\quad \mathrm{as}\quad n\rightarrow \infty . \end{aligned}$$

Hence, (2.10) holds. The proof of (2.11) is similar to that of (2.10). This completes the proof of Theorem 1.1.

3 Appendix

Proof of Lemma 2.2

Without loss of generality, we may suppose that \(\alpha >0\). By the proof of Lemma 2.2 in Wu [19], conditions (2.1) and (2.2) imply

$$\begin{aligned} \mathbb {E}\left| \sum \limits _{k=1}^{n}d_k\xi _k\right| ^p\ll \left( \sum \limits _{1\le k\le j\le n}d_k d_j \left( \frac{k}{j}\right) ^{\delta }\right) ^{p/2}\quad \mathrm{for}\,\mathrm{any}\quad p>0. \end{aligned}$$
(3.1)

Note that

$$\begin{aligned} \sum \limits _{1\le k\le j\le n}d_kd_j\left( \frac{k}{j}\right) ^{\delta }\le & {} \sum \limits _{1\le k\le j\le n,k/j\le (\ln D_n)^{-2/\delta }}d_kd_j\left( \frac{k}{j}\right) ^{\delta }+\sum \limits _{1\le k\le j\le n,k/j>(\ln D_n)^{-2/\delta }}d_kd_j \nonumber \\:= & {} T_{n_1}+T_{n_2}. \end{aligned}$$
(3.2)
$$\begin{aligned} T_{n_1}\le \sum \limits _{1\le k\le j\le n,k/j\le (\ln D_n)^{-2/\delta }}d_kd_j\frac{1}{\ln ^2 D_n}\le \frac{D_n^2}{\ln ^2D_n}. \end{aligned}$$
(3.3)

By (2.10) in Wu [20],

$$\begin{aligned} D_n\sim \frac{1}{\alpha }\ln ^{1-\alpha }n\exp (\ln ^{\alpha }n),\quad \ln D_n\sim \ln ^\alpha n,\quad \exp (\ln ^{\alpha }n)\sim \frac{\alpha D_n}{(\ln D_n)^{\frac{1-\alpha }{\alpha }}},\quad \ln \ln D_n\sim \alpha \ln \ln n. \end{aligned}$$
(3.4)
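
The first relation in (3.4) is easy to check numerically; the short sketch below (our addition; the value \(\alpha =0.5\) and the range of n are arbitrary) computes \(D_n\) by direct summation. The ratio approaches 1 only slowly, with a relative correction roughly of order \(\ln ^{-\alpha }n\).

```python
# Numerical check of D_n ~ (1/alpha) ln^{1-alpha}(n) exp(ln^alpha(n)) in (3.4).
import numpy as np

alpha = 0.5
for n in (10**4, 10**5, 10**6):
    k = np.arange(1, n + 1, dtype=float)
    D_n = np.sum(np.exp(np.log(k) ** alpha) / k)      # exact partial sum
    approx = np.log(n) ** (1 - alpha) * np.exp(np.log(n) ** alpha) / alpha
    print(f"n = {n:>7}: D_n / approx = {D_n / approx:.3f}")
```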

Hence, by (3.4),

$$\begin{aligned} T_{n_2}\le \sum \limits _{k=1}^nd_k\sum \limits _{k\le j<k(\ln D_n)^{2/\delta }}\frac{1}{j}\exp (\ln ^{\alpha }n)\ll \frac{D_n}{(\ln D_n)^\frac{1-\alpha }{\alpha }}\sum \limits _{k=1}^nd_k\ln \ln D_n\ll \frac{D_n^2\ln \ln D_n}{(\ln D_n)^{\frac{1-\alpha }{\alpha }}}. \end{aligned}$$
(3.5)

Thus, letting \(\alpha _1=\min (2, (1-\alpha )/\alpha )>0\), by (3.2), (3.3) and (3.5) we get

$$\begin{aligned} \sum \limits _{1\le k\le l\le n}d_kd_l\left( \frac{k}{l}\right) ^{\delta }\ll \frac{D_n^2\ln \ln D_n}{(\ln D_n)^{\alpha _1}}. \end{aligned}$$
(3.6)

Let \(p>2(3\alpha +1)/(\alpha _1\alpha )\), so that \(\alpha _1p/2-1>2\). By the Markov inequality, the fact that \((\ln \ln D_n)^{p/2}=o(\ln D_n)\) for any fixed \(p>0\), (3.1), and (3.6), for sufficiently large n we have

$$\begin{aligned} \mathbb {P}\left( \left| \frac{1}{D_n}\sum \limits _{k=1}^nd_k\xi _k\right| >\varepsilon \right)\ll & {} \frac{1}{D_n^p}\mathbb {E}\left| \sum \limits _{k=1}^nd_k\xi _k\right| ^p \ll \frac{1}{D_n^p}\left( \sum \limits _{1\le k\le l\le n}d_kd_l\left( \frac{k}{l}\right) ^{\delta }\right) ^{p/2}\\\ll & {} \frac{1}{D_n^p}\left( \frac{D_n^2\ln \ln D_n}{(\ln D_n)^{\alpha _1}}\right) ^{p/2}\ll \frac{1}{(\ln D_n)^{\alpha _1p/2-1}}\\\le & {} \frac{1}{\ln ^2D_n}. \end{aligned}$$

By (3.4), we have \(D_{n+1}\sim D_n\). Let \(n_k=\inf \{n; D_n\ge \exp (k^{2/3})\}\); then \(D_{n_k}\ge \exp (k^{2/3})\) and \(D_{n_k-1}<\exp (k^{2/3})\). Therefore

$$\begin{aligned} 1\le \frac{D_{n_k}}{\exp (k^{2/3})}\sim \frac{D_{n_k-1}}{\exp (k^{2/3})}<1,\quad k\rightarrow \infty , \end{aligned}$$

i.e.,

$$\begin{aligned} D_{n_k}\sim \exp (k^{2/3}). \end{aligned}$$

Therefore, setting \(T_n:=\frac{1}{D_n}\sum \nolimits _{i=1}^{n}d_i\xi _i\) and noting that \(\ln D_{n_k}\ge k^{2/3}\), we have

$$\begin{aligned} \sum \limits _{k=1}^{\infty }\mathbb {P}\left( \left| \frac{1}{D_{n_k}}\sum \limits _{i=1} ^{n_k}d_i\xi _i\right| >\varepsilon \right) \ll \sum \limits _{k=1}^{\infty }\frac{1}{k^{4/3}}<\infty , \end{aligned}$$

i.e.,

$$\begin{aligned} T_{n_k}\rightarrow 0\quad \mathrm{a.s.} \end{aligned}$$

For \(n_k\le n< n_{k+1}\), by (2.1),

$$\begin{aligned} |T_n|\le |T_{n_k}|+\frac{2 c}{D_{n_k}}(D_{n_{k+1}}-D_{n_k})\rightarrow 0\quad \mathrm{a.s.} \end{aligned}$$

since \(D_{n_{k+1}}/D_{n_k}\sim \exp ((k+1)^{2/3}-k^{2/3})=\exp (k^{2/3}[(1+1/k)^{2/3}-1])\sim \exp (2k^{-1/3}/3)\rightarrow 1\). Therefore, (2.3) holds. This completes the proof of Lemma 2.2.

Proof of Lemma 2.3

By the central limit theorem for i.i.d. random variables, together with \(\mathrm{Var}\,\bar{S}_n\sim nl(\eta _n)\) as \(n\rightarrow \infty \) (which follows from \(\mathbb {E}X=0\), Lemma 2.1 (iii), and (2.4)), it follows that

$$\begin{aligned} \frac{\bar{S}_n-\mathbb {E}\bar{S}_n}{\sqrt{nl(\eta _n)}}\mathop {\longrightarrow }\limits ^{d}\mathcal {N},\quad \mathrm{as}\quad n\rightarrow \infty , \end{aligned}$$

where \(\mathcal {N}\) denotes the standard normal random variable. By Theorem 1.1 in Hsing [9], we get

$$\begin{aligned} \mathop {\mathrm{lim}} \limits _{n\rightarrow \infty }\mathbb {P} \left( \frac{\bar{S}_n-\mathbb {E}\bar{S}_n}{\sqrt{nl(\eta _n)}}\le x, \frac{M_n-b_n}{a_n}\le y\right) =\Phi (x)G(y)\quad \mathrm{for}\,\mathrm{any}\quad x, y\in \mathbb {R}. \end{aligned}$$

This implies that, for any non-negative, bounded Lipschitz function \(g(x,y)\),

$$\begin{aligned} \mathop {\mathrm{lim}} \limits _{n\rightarrow \infty }\mathbb {E}g\left( \frac{\bar{S}_n-\mathbb {E}\bar{S}_n}{\sqrt{nl(\eta _n)}}, \frac{M_n-b_n}{a_n}\right) = \int \limits _{-\infty }^\infty \int \limits _{-\infty }^\infty g(x,y)\Phi (\mathrm{d}x)G(\mathrm{d}y). \end{aligned}$$

Hence, we obtain

$$\begin{aligned} \mathop {\mathrm{lim}} \limits _{n\rightarrow \infty }\frac{1}{D_n}\sum \limits _{k=1}^{n}d_{k} \mathbb {E}g\left( \frac{\bar{S}_k-\mathbb {E}\bar{S}_k}{\sqrt{kl(\eta _k)}}, \frac{M_k-b_k}{a_k}\right) =\int \limits _{-\infty }^\infty \int \limits _{-\infty }^\infty g(x,y)\Phi (\mathrm{d}x)G(\mathrm{d}y) \end{aligned}$$

from the Toeplitz lemma.

On the other hand, note that (2.5) is equivalent to

$$\begin{aligned} \mathop {\mathrm{lim}} \limits _{n\rightarrow \infty }\frac{1}{D_n}\sum \limits _{k=1}^{n}d_{k} g\left( \frac{\bar{S}_k-\mathbb {E}\bar{S}_k}{\sqrt{kl(\eta _k)}}, \frac{M_k-b_k}{a_k}\right) =\int \limits _{-\infty }^\infty \int \limits _{-\infty }^\infty g(x,y)\Phi (\mathrm{d}x)G(\mathrm{d}y)\quad \mathrm{a.s.} \end{aligned}$$

from Theorem 7.1 of Billingsley [2] and Section 2 of Peligrad and Shao [13]. Hence, to prove (2.5), it suffices to prove

$$\begin{aligned} \mathop {\mathrm{lim}} \limits _{n\rightarrow \infty }\frac{1}{D_n}\sum \limits _{k=1}^{n}d_{k} \left( g\left( \frac{\bar{S}_k-\mathbb {E}\bar{S}_k}{\sqrt{kl(\eta _k)}}, \frac{M_k-b_k}{a_k}\right) - \mathbb {E}g\left( \frac{\bar{S}_k-\mathbb {E}\bar{S}_k}{\sqrt{kl(\eta _k)}}, \frac{M_k-b_k}{a_k}\right) \right) =0\quad \mathrm{a.s.}, \end{aligned}$$
(3.7)

for any non-negative, bounded Lipschitz function \(g(x,y)\).

For any \(1\le k<j\), let

$$\begin{aligned} \xi _k:=g\left( \frac{\bar{S}_k-\mathbb {E}\bar{S}_k}{\sqrt{kl(\eta _k)}}, \frac{M_k-b_k}{a_k}\right) - \mathbb {E}g\left( \frac{\bar{S}_k-\mathbb {E}\bar{S}_k}{\sqrt{kl(\eta _k)}}, \frac{M_k-b_k}{a_k}\right) , \end{aligned}$$
$$\begin{aligned} \xi _{k,j}:=g\left( \frac{\bar{S}_{k,j}-\mathbb {E}\bar{S}_{k,j}}{\sqrt{j l(\eta _j)}}, \frac{M_{k,j}-b_j}{a_j}\right) - \mathbb {E}g\left( \frac{\bar{S}_{k,j}-\mathbb {E}\bar{S}_{k,j}}{\sqrt{j l(\eta _j)}}, \frac{M_{k,j}-b_j}{a_j}\right) . \end{aligned}$$

For any \(1\le k<j\), noting that \(g\left( \frac{\bar{S}_k-\mathbb {E}\bar{S}_k}{\sqrt{kl(\eta _k)}}, \frac{M_k-b_k}{a_k}\right) \) and \(g\left( \frac{\bar{S}_{k,j}-\mathbb {E}\bar{S}_{k,j}}{\sqrt{j l(\eta _j)}}, \frac{M_{k,j}-b_j}{a_j}\right) \) are independent, and that \(g(x,y)\) is a non-negative, bounded Lipschitz function, we see that

$$\begin{aligned} |\mathbb {E}\xi _k\xi _j|= & {} \left| \mathrm{Cov}\left( g\left( \frac{\bar{S}_k-\mathbb {E}\bar{S}_k}{\sqrt{kl(\eta _k)}}, \frac{M_k-b_k}{a_k}\right) , g\left( \frac{\bar{S}_j-\mathbb {E}\bar{S}_j}{\sqrt{j l(\eta _j)}}, \frac{M_j-b_j}{a_j}\right) \right) \right| \nonumber \\\le & {} \left| \mathrm{Cov}\left( g\left( \frac{\bar{S}_k-\mathbb {E}\bar{S}_k}{\sqrt{kl(\eta _k)}}, \frac{M_k-b_k}{a_k}\right) , g\left( \frac{\bar{S}_j-\mathbb {E}\bar{S}_j}{\sqrt{j l(\eta _j)}}, \frac{M_j-b_j}{a_j}\right) \right. \right. \nonumber \\&\left. \left. -\,g\left( \frac{\bar{S}_j-\mathbb {E}\bar{S}_j}{\sqrt{j l(\eta _j)}}, \frac{M_{k,j}-b_j}{a_j}\right) \right) \right| \nonumber \\&+\left| \mathrm{Cov}\left( g\left( \frac{\bar{S}_k-\mathbb {E}\bar{S}_k}{\sqrt{kl(\eta _k)}}, \frac{M_k-b_k}{a_k}\right) , g\left( \frac{\bar{S}_j-\mathbb {E}\bar{S}_j}{\sqrt{j l(\eta _j)}}, \frac{M_{k,j}-b_j}{a_j}\right) \right. \right. \nonumber \\&\left. \left. -\,g\left( \frac{\bar{S}_{k,j}-\mathbb {E}\bar{S}_{k,j}}{\sqrt{j l(\eta _j)}}, \frac{M_{k,j}-b_j}{a_j}\right) \right) \right| \nonumber \\&+\left| \mathrm{Cov}\left( g\left( \frac{\bar{S}_k-\mathbb {E}\bar{S}_k}{\sqrt{kl(\eta _k)}}, \frac{M_k-b_k}{a_k}\right) , g\left( \frac{\bar{S}_{k,j}-\mathbb {E}\bar{S}_{k,j}}{\sqrt{j l(\eta _j)}}, \frac{M_{k,j}-b_j}{a_j}\right) \right) \right| \nonumber \\\ll & {} \mathbb {E}\left| g\left( \frac{\bar{S}_j-\mathbb {E}\bar{S}_j}{\sqrt{j l(\eta _j)}}, \frac{M_j-b_j}{a_j}\right) -g\left( \frac{\bar{S}_j-\mathbb {E}\bar{S}_j}{\sqrt{j l(\eta _j)}}, \frac{M_{k,j}-b_j}{a_j}\right) \right| \nonumber \\&+\,\mathbb {E}\left| g\left( \frac{\bar{S}_j-\mathbb {E}\bar{S}_j}{\sqrt{j l(\eta _j)}},\frac{M_{k,j}-b_j}{a_j}\right) -g\left( \frac{\bar{S}_{k,j}-\mathbb {E}\bar{S}_{k,j}}{\sqrt{j l(\eta _j)}}, \frac{M_{k,j}-b_j}{a_j}\right) \right| \nonumber \\:= & {} H_1+H_2. \end{aligned}$$
(3.8)

Since \(g(x,y)\) is a non-negative, bounded Lipschitz function, and since \(M_j\ne M_{k,j}\) means that the maximum of \(X_1,\ldots ,X_j\) is attained among the first k variables (an event of probability at most k/j by exchangeability), it follows that

$$\begin{aligned} H_1\ll \mathbb {E}\left( \min \left( \frac{M_{j}-M_{k,j}}{a_j}, 2\right) \right) \ll \mathbb {P}(M_{j}\ne M_{k,j})=\mathbb {P}(M_{j}> M_{k,j})\ll \frac{k}{j}. \end{aligned}$$
(3.9)

By the definition of \(\eta _j\) and the Cauchy–Schwarz inequality, we get

$$\begin{aligned} H_2\ll \mathbb {E}\left| \frac{\bar{S}_j-\bar{S}_{k,j}-\mathbb {E}(\bar{S}_j-\bar{S}_{k,j})}{\sqrt{j l(\eta _j)}}\right| \ll \frac{\sqrt{k\mathbb {E}X^2I(|X|\le \eta _j)}}{\sqrt{j l(\eta _j)}}=\left( \frac{k}{j}\right) ^{1/2}. \end{aligned}$$
(3.10)

On the other hand, by (3.9) and (3.10),

$$\begin{aligned} \mathbb {E}|\xi _j-\xi _{k,j}|\ll & {} \mathbb {E}\left| g\left( \frac{\bar{S}_j-\mathbb {E}\bar{S}_j}{\sqrt{j l(\eta _j)}}, \frac{M_j-b_j}{a_j}\right) -g\left( \frac{\bar{S}_{k,j}-\mathbb {E}\bar{S}_{k,j}}{\sqrt{j l(\eta _j)}}, \frac{M_{k,j}-b_j}{a_j}\right) \right| \nonumber \\\le & {} \mathbb {E}\left| g\left( \frac{\bar{S}_j-\mathbb {E}\bar{S}_j}{\sqrt{j l(\eta _j)}}, \frac{M_j-b_j}{a_j}\right) -g\left( \frac{\bar{S}_{j}-\mathbb {E}\bar{S}_{j}}{\sqrt{j l(\eta _j)}}, \frac{M_{k,j}-b_j}{a_j}\right) \right| \nonumber \\&+\,\mathbb {E}\left| g\left( \frac{\bar{S}_j-\mathbb {E}\bar{S}_j}{\sqrt{j l(\eta _j)}},\frac{M_{k,j}-b_j}{a_j}\right) -g\left( \frac{\bar{S}_{k,j}-\mathbb {E}\bar{S}_{k,j}}{\sqrt{j l(\eta _j)}}, \frac{M_{k,j}-b_j}{a_j}\right) \right| \nonumber \\= & {} H_1+H_2\ll \left( \frac{k}{j}\right) ^{1/2}. \end{aligned}$$
(3.11)

By (3.8)–(3.11) and Lemma 2.2, (3.7) holds, i.e., (2.5) holds.

In a similar way, we prove (2.6). For any \(1\le k< j\), let

$$\begin{aligned} Z_k:=I\left( \bigcup \limits _{i=1}^k(|X_i|>\eta _k)\right) -\mathbb {E}I\left( \bigcup \limits _{i=1}^k(|X_i|>\eta _k)\right) , \end{aligned}$$

and

$$\begin{aligned} Z_{k,j}:=I\left( \bigcup \limits _{i=k+1}^j(|X_i|>\eta _j)\right) - \mathbb {E}I\left( \bigcup \limits _{i=k+1}^j(|X_i|>\eta _j)\right) . \end{aligned}$$

Note that \(I(A\cup B)-I(B)\le I(A)\) for any events A and B. Then, for \(1\le k< j\), by (2.13),

$$\begin{aligned} |\mathbb {E}Z_kZ_j|= & {} \left| \mathrm{Cov}\left( I\left( \bigcup \limits _{i=1}^k(|X_i|>\eta _k)\right) , I\left( \bigcup \limits _{i=1}^j(|X_i|>\eta _j)\right) \right) \right| \\= & {} \left| \mathrm{Cov}\left( I\left( \bigcup \limits _{i=1}^k(|X_i|>\eta _k)\right) , I\left( \bigcup \limits _{i=1}^j(|X_i|>\eta _j)\right) -I\left( \bigcup \limits _{i=k+1}^j(|X_i|>\eta _j)\right) \right) \right| \\\le & {} \mathbb {E}\left| I\left( \bigcup \limits _{i=1}^j(|X_i|>\eta _j)\right) - I\left( \bigcup \limits _{i=k+1}^j(|X_i|>\eta _j)\right) \right| \\\le & {} \mathbb {E}I\left( \bigcup \limits _{i=1}^k(|X_i|>\eta _j)\right) \le k\mathbb {P}(|X|>\eta _j)\\\le & {} \frac{k}{j}, \end{aligned}$$

and

$$\begin{aligned} \mathbb {E}|Z_j-Z_{k,j}|\ll \mathbb {E}I\left( \bigcup \limits _{i=1}^k(|X_i|>\eta _j)\right) \le \frac{k}{j}. \end{aligned}$$

By Lemma 2.2, (2.6) holds.

Finally, we prove (2.7). For any \(1\le k<j\), let

$$\begin{aligned} \zeta _k:=f\left( \frac{\bar{V}^2_k}{kl(\eta _k)}\right) - \mathbb {E}f\left( \frac{\bar{V}^2_k}{kl(\eta _k)}\right) , \end{aligned}$$

and

$$\begin{aligned} \zeta _{k,j}:=f\left( \frac{\bar{V}^2_{k,j}}{j l(\eta _j)}\right) - \mathbb {E}f\left( \frac{\bar{V}^2_{k,j}}{j l(\eta _j)}\right) . \end{aligned}$$

For \(1\le k< j\), noting that \(f\left( \frac{\bar{V}^2_k}{kl(\eta _k)}\right) \) and \(f\left( \frac{\bar{V}^2_{k,j}}{j l(\eta _j)}\right) \) are independent, we get

$$\begin{aligned} |\mathbb {E}\zeta _k\zeta _j|= & {} \left| \mathrm{Cov}\left( f\left( \frac{\bar{V}^2_k}{kl(\eta _k)}\right) , f\left( \frac{\bar{V}^2_j}{j l(\eta _j)}\right) \right) \right| \\= & {} \left| \mathrm{Cov}\left( f\left( \frac{\bar{V}^2_k}{kl(\eta _k)}\right) , f\left( \frac{\bar{V}^2_j}{j l(\eta _j)}\right) -f\left( \frac{\bar{V}^2_{k,j}}{j l(\eta _j)}\right) \right) \right| \\\ll & {} \frac{\mathbb {E}\left( \sum \limits _{i=1}^{k}X^2_i I(|X_i|\le \eta _j)\right) }{j l(\eta _j)}= \frac{k\mathbb {E}X^2I(|X|\le \eta _j)}{j l(\eta _j)}=\frac{k l(\eta _j)}{j l(\eta _j)}\\= & {} \frac{k}{j}, \end{aligned}$$

and

$$\begin{aligned} \mathbb {E}|\zeta _j-\zeta _{k,j}|\ll \mathbb {E}\left| f\left( \frac{\bar{V}^2_j}{j l(\eta _j)}\right) -f\left( \frac{\bar{V}^2_{k,j}}{j l(\eta _j)} \right) \right| \ll \frac{\mathbb {E}\left( \sum \limits _{i=1}^{k}X^2_i I(|X_i|\le \eta _j)\right) }{j l(\eta _j)}=\frac{k}{j}. \end{aligned}$$

By Lemma 2.2, (2.7) holds. This completes the proof of Lemma 2.3.