Abstract
Let \(X, X_1, X_2,\ldots \) be a sequence of independent and identically distributed random variables with zero mean and finite second moment. A universal almost sure central limit theorem is established for the self-normalized partial sums \(S_n/V_n\) and the maxima \(M_n\), where \(S_n=\sum _{i=1}^nX_i\), \(V^2_n=\sum _{i=1}^nX^2_i\), and \(M_n=\max _{1\le i\le n}X_i\).
1 Introduction
Throughout this paper we assume \(\{X, X_{n}\}_{n\in \mathbb {N}}\) is a sequence of independent and identically distributed (i.i.d.) random variables with zero mean and finite second moment. For each \(1\le k\le n\), let \(S_{n}=\sum _{i=1}^{n} X_i\), \(S_{k, n}=\sum _{i=k+1}^{n} X_i\), \(V^2_n=\sum _{i=1}^nX^2_i\), \(M_n=\max _{1\le i\le n}X_i\), \(M_{k, n}=\max _{k+1\le i\le n}X_i\). The quantities \(S_n/V_n\) and \(M_n\) denote the self-normalized partial sums and maxima, respectively. The sequence \(\{X_n\}_{n\in \mathbb {N}}\) is said to satisfy the central limit theorem (CLT), or the random variable X is said to belong to the domain of attraction of the normal law, if there exist constants \(a_n>0\), \(b_n\in \mathbb {R}\) such that \((S_n-b_n)/a_n\mathop {\longrightarrow }\limits ^{d} \mathcal {N}\), where \(\mathcal {N}\) is a standard normal random variable and \(\mathop {\longrightarrow }\limits ^{d}\) denotes convergence in distribution. It is known that the CLT is equivalent to

$$\begin{aligned} l(x)=\mathbb {E}X^2I\{|X|\le x\}\ \text{ is a slowly varying function at }\ \infty . \end{aligned}$$
(1.1)
Giné et al. [7] considered the limiting properties of self-normalized partial sums and obtained the following self-normalized version of the central limit theorem: \( {S_n}/V_n\mathop {\longrightarrow }\limits ^{d} \mathcal {N}\) as \(n\rightarrow \infty \) if and only if (1.1) holds.
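As a purely illustrative sketch (not part of the paper), the self-normalized CLT of Giné et al. can be checked by simulation; the choice \(X\sim U(-1,1)\) and all numerical settings below are our own.

```python
import numpy as np

# Monte Carlo check of the self-normalized CLT: S_n / V_n is asymptotically
# standard normal whenever X is in the domain of attraction of the normal law.
# Here X ~ Uniform(-1, 1) (mean zero, finite variance), so (1.1) holds.
rng = np.random.default_rng(0)

n, reps = 500, 4000
X = rng.uniform(-1.0, 1.0, size=(reps, n))
S = X.sum(axis=1)                    # partial sums S_n
V = np.sqrt((X ** 2).sum(axis=1))    # self-normalizer V_n
T = S / V                            # self-normalized sum S_n / V_n

# Empirical CDF of T at two points versus the standard normal CDF.
frac_le_0 = np.mean(T <= 0.0)        # should be close to Phi(0) = 0.5
frac_le_196 = np.mean(T <= 1.96)     # should be close to Phi(1.96) ~ 0.975
print(frac_le_0, frac_le_196)
```

The same experiment with a heavier-tailed centered distribution of finite variance behaves similarly, which is exactly the content of the equivalence above.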
Brosamler [3] and Schatte [15] obtained the almost sure central limit theorem (ASCLT) for partial sums. Improved and generalized ASCLT results for partial sums were obtained by Miao [12], Berkes and Csáki [1], Hörmann [8], and Wu [16, 17]. Huang and Pang [10], Wu [18], Wu and Chen [21] and Zhang and Yang [23] obtained ASCLT results for the self-normalized version. Lin [11] and Cao and Peng [4] obtained ASCLT results for maxima. Further, Zang [22] and Peng et al. [14] obtained ASCLT results jointly for partial sums and maxima. In particular, Peng et al. [14] obtained the following ASCLT for partial sums and maxima: Let \(\{X, X_{n}\}_{n\in \mathbb {N}}\) be i.i.d. random variables with \(EX=0\) and \(EX^2=1\). Suppose there exist constants \(a_{n}>0, b_n\in \mathbb {R}\) and a nondegenerate distribution G(y) such that

$$\begin{aligned} \lim _{n\rightarrow \infty }\mathbb {P}\left( \frac{M_n-b_n}{a_n}\le y\right) =G(y) \end{aligned}$$
(1.2)

at every continuity point y of G.
Then

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{1}{D_n}\sum _{k=1}^{n}d_k I\left( \frac{S_k}{\sqrt{k}}\le x,\ \frac{M_k-b_k}{a_k}\le y\right) =\Phi (x)G(y)\quad \mathrm{a.s.} \end{aligned}$$
(1.3)

for all x and all continuity points y of G,
with \(d_k=1/k\) and \(D_n=\sum _{k=1}^{n}d_k\), where I denotes the indicator function and \(\Phi (x)\) is the standard normal distribution function.
However, no ASCLT for self-normalized partial sums and maxima has been reported so far. Because the denominator of the self-normalized partial sum is itself random, the ASCLT for self-normalized partial sums and maxima is more difficult to establish.
The difference between the CLT and the ASCLT lies in the weights appearing in the ASCLT. By a classical theorem of Hardy (see e.g. [5], p. 35), if (1.3) holds for a weight sequence \(\{d_k; k\ge 1\}\), then, under certain regularity conditions, it also holds for every smaller weight sequence. Conversely, one should expect stronger results from larger weights. Schatte [15] pointed out that the ASCLT fails for the weight \(d_k=1\). It is therefore of considerable interest to determine the optimal weights.
The purpose of this paper is to establish the ASCLT for self-normalized partial sums and maxima of i.i.d. random variables. We show that the ASCLT holds for the fairly general weight sequence \(d_k=k^{-1}\exp (\ln ^\alpha k)\), \(0\le \alpha <1\).
Our theorem is formulated as follows.
Theorem 1.1
Let \(\{X,X_n\}_{n\in \mathbb {N}}\) be a sequence of i.i.d. random variables with \(EX=0\) and \(EX^2=1\). Set

$$\begin{aligned} d_k=\frac{\exp (\ln ^\alpha k)}{k},\quad D_n=\sum _{k=1}^{n}d_k,\quad 0\le \alpha <1. \end{aligned}$$
(1.4)
Suppose that (1.2) holds. Then

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{1}{D_n}\sum _{k=1}^{n}d_k I\left( \frac{S_k}{V_k}\le x,\ \frac{M_k-b_k}{a_k}\le y\right) =\Phi (x)G(y)\quad \mathrm{a.s.} \end{aligned}$$
(1.5)

for all x and all continuity points y of G.
Remark 1.2
By the terminology of summation procedures, Theorem 1.1 remains valid if we replace the weight sequence \(\{d_k\}_{k\in \mathbb {N}}\) by any \(\{d_k^*\}_{k\in \mathbb {N}}\) such that \(0\le d_k^*\le d_k\), \(\sum _{k=1}^\infty d_k^*=\infty \).
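Remark 1.2 licenses in particular the classical logarithmic weight \(d_k=1/k\). As an illustrative one-path sketch (ours, not part of the paper), the conclusion of Theorem 1.1 can be watched numerically with this weight; the distribution \(X=\mathrm{Exp}(1)-1\), the Gumbel normalizers \(a_k=1\), \(b_k=\ln k-1\), and all numerical choices below are our assumptions.

```python
import numpy as np

# One-path sketch of the ASCLT for (S_k/V_k, M_k) with the classical
# logarithmic weight d_k = 1/k (a smaller weight, admissible by Remark 1.2).
# X = Exp(1) - 1 has mean 0, variance 1, and its maxima are attracted to
# the Gumbel law G(y) = exp(-e^{-y}) with a_n = 1, b_n = ln n - 1.
rng = np.random.default_rng(1)

n = 200_000
X = rng.exponential(1.0, size=n) - 1.0
S = np.cumsum(X)                       # S_k
V = np.sqrt(np.cumsum(X ** 2))         # V_k
M = np.maximum.accumulate(X)           # M_k

k = np.arange(1, n + 1)
d = 1.0 / k
D = d.cumsum()

x, y = 0.0, 1.0                        # evaluation point (our choice)
ind = (S / V <= x) & ((M - (np.log(k) - 1.0)) <= y)
asclt_avg = np.sum(d * ind) / D[-1]    # weighted empirical frequency

target = 0.5 * np.exp(-np.exp(-y))     # Phi(0) * G(1), about 0.346
print(asclt_avg, target)
```

Convergence under logarithmic averaging is very slow (the effective sample size is of order \(D_n\approx \ln n\)), so at feasible n the weighted frequency tracks \(\Phi (x)G(y)\) only loosely; the almost sure convergence is an asymptotic statement.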
Remark 1.3
Our results not only extend the ASCLT for partial sums and maxima obtained by Peng et al. [14] to self-normalized partial sums and maxima, but also substantially improve the weight sequence of Corollary 2.2 in Peng et al. [14].
Remark 1.4
By Theorem 1 of Schatte [15], the ASCLT fails for \(\alpha =1\), i.e., for \(d_k=1\). Therefore, in a sense, our Theorem 1.1 is of optimal form.
Remark 1.5
Obviously, \(\mathbb {E}X^2=1<\infty \) implies that (1.1) holds, so X is in the domain of attraction of the normal law.
2 Proofs
In the following, \(a_n\sim b_n\) denotes \(\mathop {\mathrm{lim}}\nolimits _{n\rightarrow \infty } a_n/b_n=1\) and \(a_n\ll b_n\) denotes that there exists a constant \(c>0\) such that \(a_n\le cb_n\) for sufficiently large n. The symbol c stands for a generic positive constant which may differ from one place to another.
To prove Theorem 1.1, the following three lemmas play an important role. Lemma 2.1 is due to Csörgő et al. [6]; the proofs of Lemmas 2.2 and 2.3 are rather involved and are given in the Appendix.
Lemma 2.1
Let X be a random variable, and denote \(l(x)=\mathbb {E}X^2I\{|X|\le x\}\). The following statements are equivalent:
(i) X is in the domain of attraction of the normal law.

(ii) \(x^2\mathbb {P}(|X|>x)=o(l(x))\) as \(x\rightarrow \infty \).

(iii) \(x\mathbb {E}(|X|I(|X|>x))=o(l(x))\) as \(x\rightarrow \infty \).

(iv) \(\mathbb {E}(|X|^\beta I(|X|\le x))=o(x^{\beta -2}l(x))\) as \(x\rightarrow \infty \) for \(\beta >2\).
Lemma 2.2
Let \(\{X_n\}_{n\in \mathbb {N}}\) be a sequence of random variables, and set \(\xi _{k,j}:={f}(X_{k+1},\ldots ,X_j)\) and \(\xi _{j}:=\xi _{0,j}={g}(X_1,\ldots ,X_j)\), where f and g are functions depending only on \(X_{k+1},\ldots ,X_j\) and \(X_1,\ldots ,X_j\), respectively. If there exist constants \(c>0\) and \(\delta >0\) such that
and
then
where \(d_k\) and \(D_n\) are defined by (1.4).
Let \(l(x)=\mathbb {E}X^2I\{|X|\le x\}\), \(b=\inf \{x\ge 1; l(x)>0\}\) and

$$\begin{aligned} \eta _j=\inf \left\{ s: s\ge b+1,\ \frac{l(s)}{s^2}\le \frac{1}{j}\right\} ,\quad j=1,2,\ldots \end{aligned}$$
By the definition of \(\eta _j\), we have \(jl(\eta _j)\le \eta ^2_j\) and \(jl(\eta _j-\varepsilon )>(\eta _j-\varepsilon )^2\) for any \(\varepsilon >0\). This implies that
Let
Lemma 2.3
Suppose that the assumptions of Theorem 1.1 hold. Then
where \(d_k\) and \(D_n\) are defined by (1.4) and f is a non-negative, bounded Lipschitz function.
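As an aside (our sketch, not part of the paper), the truncation level \(\eta _j\) introduced above can be computed explicitly in simple cases. Assuming the standard definition \(\eta _j=\inf \{s: s\ge b+1,\ l(s)/s^2\le 1/j\}\), consistent with the inequalities \(jl(\eta _j)\le \eta ^2_j\) stated above, take \(X\sim U(-\sqrt{3},\sqrt{3})\) (so \(\mathbb {E}X^2=1\)); then l(x) is explicit and \(\eta _j\) can be located by bisection. Since \(l(x)\rightarrow 1\), one expects \(\eta _j\approx \sqrt{j}\) for large j.

```python
import numpy as np

SQRT3 = np.sqrt(3.0)

def l(x):
    # truncated second moment l(x) = E X^2 I{|X| <= x} for X ~ U(-sqrt3, sqrt3)
    m = min(x, SQRT3)
    return m ** 3 / (3.0 * SQRT3)

def eta(j, b=1.0, upper=1e6):
    # eta_j = inf{ s >= b + 1 : l(s)/s^2 <= 1/j }, found by bisection;
    # l(s)/s^2 is non-increasing on s >= b + 1 in this example.
    lo_s, hi_s = b + 1.0, upper
    if l(lo_s) / lo_s ** 2 <= 1.0 / j:
        return lo_s
    for _ in range(100):
        mid = 0.5 * (lo_s + hi_s)
        if l(mid) / mid ** 2 <= 1.0 / j:
            hi_s = mid
        else:
            lo_s = mid
    return hi_s

# Since E X^2 = 1 (l(x) -> 1 for x >= sqrt(3)), eta_j grows like sqrt(j):
print(eta(100), eta(10_000))   # approximately 10 and 100
```

Here \(b=1\) because \(l(1)>0\), so the search starts at \(s=2\); the computed values satisfy \(jl(\eta _j)\le \eta _j^2\), as required.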
Proof of Theorem 1.1
For any given \(0<\varepsilon <1\), note that
and
Hence, to prove (1.5), it suffices to prove
by the arbitrariness of \(\varepsilon >0\).
We first prove (2.8). Let \(0<\beta <1/2\) and \(h(\cdot ,\cdot )\) be a real function, such that for any given \(x, y\in \mathbb {R}\),
Obviously, \(\mathbb {E}X^2=1<\infty \) implies that (1.1) holds, so X is in the domain of attraction of the normal law. By \(\mathbb {E}X=0\), Lemma 2.1 (iii) and (2.4), we have
Combining this with (2.5), (2.12) and the arbitrariness of \(\beta \) in (2.12), we conclude that (2.8) holds.
By Lemma 2.1 (ii) and (2.4), we get
Combining this with (2.6) and the Toeplitz lemma, we obtain
That is, (2.9) holds.
Now we prove (2.10). For any \(\mu >0\), let f be a non-negative, bounded Lipschitz function such that
From \(\mathbb {E}\bar{V}^2_k=kl(\eta _k)\), the fact that the \(\bar{X}_{ni}\) are i.i.d., Lemma 2.1 (iv), and (2.4),
Therefore, from (2.7) and the Toeplitz lemma,
Hence, (2.10) holds. By similar methods used to prove (2.10), we can prove (2.11). This completes the proof of Theorem 1.1.
3 Appendix
Proof of Lemma 2.2
Without loss of generality, we may suppose that \(\alpha >0\). By the proof of Lemma 2.2 in Wu [19], (2.1) and (2.2) imply the following.
Note that
By (2.10) in Wu [20],
Hence,
Thus, let \(\alpha _1=\min (2, (1-\alpha )/\alpha )>0\), by (3.2), (3.3) and (3.5), we get
Let \(p>2(3\alpha +1)/(\alpha _1\alpha )\), so that \(\alpha _1p/2-1>2\). By the Markov inequality, the relation \((\ln \ln D_n)^{p/2}=o(\ln D_n)\) for any \(p>0\), (3.1) and (3.6), for sufficiently large n we have
By (3.4), we have \(D_{n+1}\sim D_n\). Let \(n_k=\inf \{n; D_n\ge \exp (k^{2/3})\}\); then \(D_{n_k}\ge \exp (k^{2/3})\) and \(D_{n_k-1}<\exp (k^{2/3})\). Therefore
i.e.,
Therefore, setting \(T_n:=\frac{1}{D_n}\sum \nolimits _{i=1}^{n}d_i\xi _i\), we have
i.e.,
For \(n_k\le n< n_{k+1}\), by (2.1)
since \(D_{n_{k+1}}/D_{n_k}=\exp ((k+1)^{2/3}-k^{2/3})=\exp (k^{2/3}[(1+1/k)^{2/3}-1])\sim \exp (2k^{-1/3}/3)\rightarrow 1\). Therefore, (2.3) holds. This completes the proof of Lemma 2.2.
Proof of Lemma 2.3
By the central limit theorem for i.i.d. random variables and \(\mathrm{Var}\bar{S}_n\sim nl(\eta _n)\) as \(n\rightarrow \infty \) from \(\mathbb {E}X=0\), Lemma 2.1 (iii), and (2.4), it follows that
where \(\mathcal {N}\) denotes the standard normal random variable. By Theorem 1.1 in Hsing [9], we get
This implies that for any g(x, y) which is a non-negative, bounded Lipschitz function
Hence, we obtain
from the Toeplitz lemma.
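The asymptotic independence of sums and maxima due to Hsing [9], which drives this step, can be illustrated by simulation. This sketch (ours, with \(X=\mathrm{Exp}(1)-1\) and all numerical choices assumed) compares the joint probability of a sum event and a maximum event with the product of the marginals.

```python
import numpy as np

# Monte Carlo sketch: asymptotic independence of the normalized sum and
# maximum for i.i.d. X = Exp(1) - 1 (mean 0, variance 1, Gumbel-attracted
# maxima with a_n = 1, b_n = ln n - 1).
rng = np.random.default_rng(2)

n, reps = 2000, 5000
X = rng.exponential(1.0, size=(reps, n)) - 1.0
S = X.sum(axis=1) / np.sqrt(n)          # approximately N(0, 1)
M = X.max(axis=1) - (np.log(n) - 1.0)   # approximately Gumbel

A = S <= 0.0
B = M <= 1.0
p_joint = np.mean(A & B)
p_prod = np.mean(A) * np.mean(B)
print(p_joint, p_prod)                   # nearly equal
```

The near-equality of `p_joint` and `p_prod` is the finite-sample shadow of the factorization \(\Phi (x)G(y)\) in the limit.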
On the other hand, note that (2.5) is equivalent to
from Theorem 7.1 of Billingsley [2] and Section 2 of Peligrad and Shao [13]. Hence, to prove (2.5), it suffices to prove
for any g(x, y) which is a non-negative, bounded Lipschitz function.
For any \(1\le k<j\), let
For any \(1\le k<j\), noting that \(g\left( \frac{\bar{S}_k-\mathbb {E}\bar{S}_k}{\sqrt{kl(\eta _k)}}, \frac{M_k-b_k}{a_k}\right) \) and \(g\left( \frac{\bar{S}_{k,j}-\mathbb {E}\bar{S}_{k,j}}{\sqrt{j l(\eta _j)}}, \frac{M_{k,j}-b_j}{a_j}\right) \) are independent, and the fact that g(x, y) is a non-negative, bounded Lipschitz function, it is easy to see that
From the fact that g(x, y) is a non-negative, bounded Lipschitz function, it follows that
By the definition of \(\eta _j\) and Cauchy–Schwarz inequality, we get
On the other hand, by (3.9) and (3.10),
By Lemma 2.2, (3.7) holds from (3.8)–(3.11), i.e., (2.5) holds.
In a similar way, we prove (2.6). For any \(1\le k< j\), let
and
Since \(I(A\cup B)-I(B)\le I(A)\) for any sets A and B, for \(1\le k< j\), by (2.13),
and
Finally, we prove (2.7). For any \(1\le k<j\), let
and
For \(1\le k< j\), noting that \(f\left( \frac{\bar{V}^2_k}{\sqrt{kl(\eta _k)}}\right) \) and \(f\left( \frac{\bar{V}^2_{k,j}}{\sqrt{j l(\eta _j)}}\right) \) are independent, we get
and
By Lemma 2.2, (2.7) holds. This completes the proof of Lemma 2.3.
References
Berkes, I., Csáki, E.: A universal result in almost sure central limit theory. Stoch. Process. Appl. 94, 105–134 (2001)
Billingsley, P.: Convergence of Probability Measures. Wiley, New York (1968)
Brosamler, G.A.: An almost everywhere central limit theorem. Math. Proc. Camb. Philos. Soc. 104, 561–574 (1988)
Cao, L.F., Peng, Z.X.: Asymptotic distributions of maxima of complete and incomplete samples from strongly dependent stationary Gaussian sequences. Appl. Math. Lett. 24, 243–247 (2011)
Chandrasekharan, K., Minakshisundaram, S.: Typical Means. Oxford University Press, Oxford (1952)
Csörgő, M., Szyszkowicz, B., Wang, Q.Y.: Donsker’s theorem for self-normalized partial sums processes. Ann. Probab. 31(3), 1228–1240 (2003)
Giné, E., Götze, F., Mason, D.M.: When is the Student t-statistic asymptotically standard normal? Ann. Probab. 25, 1514–1531 (1997)
Hörmann, S.: Critical behavior in almost sure central limit theory. J. Theor. Probab. 20, 613–636 (2007)
Hsing, T.: A note on the asymptotic independence of the sum and maximum of strongly mixing stationary random variables. Ann. Probab. 23, 938–947 (1995)
Huang, S.H., Pang, T.X.: An almost sure central limit theorem for self-normalized partial sums. Comput. Math. Appl. 60, 2639–2644 (2010)
Lin, F.M.: Almost sure limit theorem for the maxima of strongly dependent Gaussian sequences. Electron. Commun. Probab. 14, 224–231 (2009)
Miao, Y.: Central limit theorem and almost sure central limit theorem for the product of some partial sums. Proc. Indian Acad. Sci. C Math. Sci. 118(2), 289–294 (2008)
Peligrad, M., Shao, Q.M.: A note on the almost sure central limit theorem for weakly dependent random variables. Stat. Probab. Lett. 22, 131–136 (1995)
Peng, Z.X., Wang, L.L., Nadarajah, S.: Almost sure central limit theorem for partial sums and maxima. Math. Nachr. 282(4), 632–636 (2009)
Schatte, P.: On strong versions of the central limit theorem. Math. Nachr. 137, 249–256 (1988)
Wu, Q.Y.: Almost sure limit theorems for stable distribution. Stat. Probab. Lett. 81(6), 662–672 (2011a)
Wu, Q.Y.: An almost sure central limit theorem for the weight function sequences of NA random variables. Proc. Math. Sci. 121(3), 369–377 (2011b)
Wu, Q.Y.: A note on the almost sure limit theorem for self-normalized partial sums of random variables in the domain of attraction of the normal law. J. Inequal. Appl. 2012(2012), 17 (2012). doi:10.1186/1029-242X-2012-17
Wu, Q.Y.: Almost sure central limit theory for products of sums of partial sums. Appl. Math. A J. Chin. Univ. Ser. B 27(2), 169–180 (2012b)
Wu, Q.Y.: An improved result in almost sure central limit theory for products of partial sums with stable distribution. Chin. Ann. Math. Ser. B 33(6), 919–930 (2012c)
Wu, Q.Y., Chen, P.Y.: An improved result in almost sure central limit theorem for self-normalized products of partial sums. J. Inequal. Appl. 2013, 129 (2013). doi:10.1186/1029-242X-2013-129
Zang, Q.P.: A note on the almost sure central limit theorems for the maxima and sums. J. Inequal. Appl. 2012(2012), 223 (2012)
Zhang, Y., Yang, X.Y.: An almost sure central limit theorem for self-normalized products of sums of i.i.d. random variables. J. Math. Anal. Appl. 376, 29–41 (2011)
Acknowledgments
We are very grateful to the referees and the Editors for their valuable comments and some helpful suggestions that improved the clarity and readability of the article.
Supported by the National Natural Science Foundation of China (11361019), project supported by Program to Sponsor Teams for Innovation in the Construction of Talent Highlands in Guangxi Institutions of Higher Learning ([2011] 47), and the Support Program of the Guangxi China Science Foundation (2013GXNSFDA019001, 2015GXNSFAA139008).
Qunying Wu is a professor working in the field of probability and statistics.
Wu, Q., Jiang, Y. Almost sure central limit theorem for self-normalized partial sums and maxima. RACSAM 110, 699–710 (2016). https://doi.org/10.1007/s13398-015-0259-x