1 Introduction

In applications of statistics to data indexed by location, there is often an apparent lack of both stationarity and independence, but with a reasonable indication of “weak dependence” between data whose locations are “far apart.” This has motivated a large amount of research on the theoretical question of the extent to which central limit theorems hold for non-stationary random fields. This paper will examine that theoretical question for “arrays of (non-stationary) random fields” under mixing assumptions analogous to those studied by Peligrad [3] in central limit theorems for “arrays of random sequences.”

Let \((\varOmega , \mathcal {F}, \mathbb {P})\) be a probability space. For any two \(\sigma \)-fields \(\mathcal {A}, \ \mathcal {B} \subseteq \mathcal {F}\), define the strong mixing coefficient

$$\begin{aligned} \alpha (\mathcal {A}, \mathcal {B}):=\sup _{A \in \mathcal {A}, B \in \mathcal {B}}|\mathbb {P}(A\cap B)-\mathbb {P}(A)\mathbb {P}(B)| \end{aligned}$$

and the maximal coefficient of correlation

$$\begin{aligned} \rho (\mathcal {A},\mathcal {B}):=\sup \left\{ |\mathrm {Corr}(f, g)| : f \in L^2_{\text {real}}(\mathcal {A}), \ g \in L^2_{\text {real}}(\mathcal {B})\right\} . \end{aligned}$$
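As an illustration only (not used in the sequel), when \(\mathcal {A}=\sigma (X)\) and \(\mathcal {B}=\sigma (Y)\) for discrete random variables X and Y taking finitely many values, every event of \(\sigma (X)\) is a union of atoms \(\{X=x\}\), so \(\alpha (\mathcal {A}, \mathcal {B})\) can be computed by brute force over all such unions. The following minimal Python sketch does this for a hypothetical joint probability mass function.

\begin{verbatim}
import itertools
import numpy as np

# Hypothetical joint pmf: joint[i, j] = P(X = i, Y = j).
joint = np.array([[0.20, 0.05, 0.05],
                  [0.05, 0.25, 0.10],
                  [0.05, 0.10, 0.15]])
px, py = joint.sum(axis=1), joint.sum(axis=0)

def strong_mixing_coefficient(joint, px, py):
    # sup over all events {X in A}, {Y in B} of |P(A and B) - P(A) P(B)|
    best = 0.0
    rows, cols = range(joint.shape[0]), range(joint.shape[1])
    for rA in range(len(px) + 1):
        for A in itertools.combinations(rows, rA):
            for rB in range(len(py) + 1):
                for B in itertools.combinations(cols, rB):
                    pAB = joint[np.ix_(A, B)].sum() if A and B else 0.0
                    best = max(best, abs(pAB - px[list(A)].sum() * py[list(B)].sum()))
    return best

print(strong_mixing_coefficient(joint, px, py))
\end{verbatim}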

Suppose d is a positive integer and \(X:=(X_k, k\in \mathbb {Z}^d)\) is a (not necessarily strictly stationary) random field. In this context, for each positive integer n, define the following quantity:

$$\begin{aligned} \alpha (X, n):=\sup \alpha (\sigma (X_k, k\in Q), \sigma (X_k, k \in S)), \end{aligned}$$

where the supremum is taken over all pairs of nonempty, disjoint sets Q, \(S \subset \mathbb {Z}^d\) with the following property: There exist \(u \in \{1, 2, \ldots , d\}\) and \(j \in \mathbb {Z}\) such that \(Q \subset \{k:=(k_1, k_2, \ldots , k_d) \in \mathbb {Z}^d: k_u \le j \}\) and \(S \subset \{k:=(k_1, k_2, \ldots , k_d) \in \mathbb {Z}^d: k_u \ge j+n \}\).

The random field \(X:=(X_k, k\in \mathbb {Z}^d)\) is said to be “strongly mixing” (or “\(\alpha \)-mixing”) if \(\alpha (X, n)\rightarrow 0\) as \(n\rightarrow \infty \).

Also, for each positive integer n, define the following quantity:

$$\begin{aligned} \rho '(X, n):=\sup \rho (\sigma (X_k, k\in Q), \sigma (X_k, k \in S)), \end{aligned}$$

where the supremum is taken over all pairs of nonempty, finite disjoint sets Q, \(S\subset \mathbb {Z}^d\) with the following property: There exist \(u \in \{1,2,\ldots , d\}\) and nonempty disjoint sets A, \(B \subset \mathbb {Z}\), with \(dist(A, B):=\min _{a\in A, b\in B}|a-b|\ge n\) such that \(Q\subset \{k:=(k_1, k_2, \ldots , k_d) \in \mathbb {Z}^d: k_u \in A \}\) and \(S\subset \{k:=(k_1, k_2, \ldots , k_d) \in \mathbb {Z}^d: k_u \in B \}\).

The random field \(X:=(X_k, k\in \mathbb {Z}^d)\) is said to be “\(\rho '\)-mixing” if \(\rho '(X, n)\rightarrow 0\) as \(n\rightarrow \infty \).

Again, suppose d is a positive integer. For each \(L:=(L_{1}, L_{2}, \ldots , L_{d}) \in \mathbb {N}^d\), define the “box”

$$\begin{aligned} B(L):=\{ k:=(k_1, k_2, \ldots ,k_d)\in \mathbb {N}^d: \forall u \in \{1, 2, \ldots , d\}, 1\le k_u\le L_u\}. \end{aligned}$$
(1.1)

Obviously, the number of elements in the set B(L) is \(L_1 \cdot L_2 \cdot \ldots \cdot L_d\).

For any given \(L \in \mathbb {N}^d\) and any given “collection” \(X:=(X_k, k \in B(L))\), the dependence coefficients defined above can be carried over to this setting as follows: Extend the collection X trivially to a random field \(\widetilde{X}:=(\widetilde{X}_k, k \in \mathbb {Z}^d)\) by setting \(\widetilde{X}_k:=X_k\) for \(k \in B(L)\) and \(\widetilde{X}_k:=0\) for \(k \in \mathbb {Z}^d-B(L)\), and then define, for \(n \in \mathbb {N}\), \(\alpha (X, n):=\alpha (\widetilde{X}, n)\) and \(\rho '(X, n):=\rho '(\widetilde{X}, n)\).

We are interested in obtaining CLT’s for non-stationary strongly mixing random fields, in the presence of an extra condition involving the maximal correlation coefficient \(\rho '(X, n)\) defined above.

Our main result presents a central limit theorem for sequences of random fields that satisfy a Lindeberg condition and uniformly satisfy both strong mixing and an upper bound less than 1 on \(\rho '(\cdot \ , 1)\), in the absence of stationarity. There is no requirement of either a mixing rate assumption or the existence of moments of order higher than two. The additional assumption of a uniform upper bound less than 1 for \(\rho '(\cdot \ , 1)\) cannot simply be deleted altogether from the theorem, even in the case of strict stationarity. For the case \(d=1\), that can be seen from any (finite-variance) strictly stationary, strongly mixing counterexample to the CLT such that the rate of growth of the variances of the partial sums is at least linear; for several such examples, see, e.g., [1], Theorem 10.25 and Chapters 30–33. Our main theorem and an extension of it, given at the end of the paper, extend certain central limit theorems of Peligrad [3] involving “arrays of random sequences.”

The main result of this paper will be given in Theorem 1.1 below. The rest of the article is organized as follows: Background results needed in the proof of the main result will be given in Sect. 2. Sections 3, 4, and 5 will contain the proof of Theorem 1.1. More precisely, Sect. 3 will set up the induction assumption of the proof and will treat two special cases, given respectively in Lemmas 3.1 and 3.2, which will then be used in the general case. The general case will be presented in Lemma 4.1, which occupies all of Sect. 4. Section 5 will deal with the Lindeberg condition and the truncation argument. Finally, Sect. 6 will state an extension of Theorem 1.1 to a more general setup.

Theorem 1.1

Suppose d is a positive integer. For each \(n \in \mathbb {N}\), suppose \(L_n:=(L_{n1}, L_{n2}, \ldots , L_{nd})\) is an element of \(\mathbb {N}^d\), and suppose \(X^{(n)}:=\left( X_k^{(n)},k\in B(L_n) \right) \) is an array of random variables such that for each \(k \in B(L_n)\), \(EX_{k}^{(n)}=0\) and \(E \left( X_{k}^{(n)}\right) ^2<\infty \). Suppose the following mixing assumptions hold:

$$\begin{aligned} \alpha (m):= & {} \sup _n \alpha (X^{(n)}, m)\rightarrow 0 \text { as } m\rightarrow \infty \text { and} \end{aligned}$$
(1.2)
$$\begin{aligned} \rho '(1):= & {} \sup _n \rho '(X^{(n)},1)<1. \end{aligned}$$
(1.3)

For each \(n \in \mathbb {N}\), define the random sum \(S(X^{(n)}, L_n )=\sum _{k \in B(L_n)} X_k^{(n)}\), define the quantity \(\sigma _n^2:=E(S(X^{(n)}, L_n))^2\), and assume that \(\sigma _n^2>0\). Suppose also that the Lindeberg condition

$$\begin{aligned} \forall \varepsilon >0,\ \lim _{n\rightarrow \infty } \frac{1}{\sigma _n^2}\sum _{k\in B(L_n)} E \left( X_{k}^{(n)} \right) ^2I \left( \left| X_k^{(n)}\right| >\varepsilon \sigma _n\right) =0 \end{aligned}$$
(1.4)

holds. Then

$$\begin{aligned} \sigma _n^{-1}S(X^{(n)}, L_n)\Rightarrow N(0, 1) \text { as } n\rightarrow \infty . \end{aligned}$$

(Here and throughout the paper \(\Rightarrow \) denotes convergence in distribution.)
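As a purely numerical illustration of Theorem 1.1 (not part of the argument below), the following Python sketch simulates a simple array with \(d=2\): a 1-dependent Gaussian moving-average field, multiplied by bounded deterministic site-dependent weights so that it is not stationary. For such a field \(\alpha (X^{(n)}, m)=0\) for \(m \ge 2\), and the Lindeberg condition holds since the summands are Gaussian with uniformly bounded variances while \(\sigma _n^2\) grows proportionally to \(\mathrm {card}\, B(L_n)\); condition (1.3) is plausible for this nondegenerate Gaussian field but is not verified here. All parameter choices are hypothetical.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def normalized_sum(L1, L2):
    """One realization of sigma_n^{-1} S(X^{(n)}, L_n)."""
    e = rng.standard_normal((L1 + 1, L2 + 1))               # i.i.d. innovations
    ma = (e[:-1, :-1] + e[1:, :-1] + e[:-1, 1:] + e[1:, 1:]) / 2.0
    i, j = np.meshgrid(np.arange(L1), np.arange(L2), indexing="ij")
    w = 1.0 + ((i + j) % 2)                                  # weights in {1, 2}
    S = (w * ma).sum()                                       # S(X^{(n)}, L_n)
    # Exact sigma_n^2: S = sum_m c_m e_m, so sigma_n^2 = sum_m c_m^2.
    c = np.zeros((L1 + 1, L2 + 1))
    for a in (0, 1):
        for b in (0, 1):
            c[a:a + L1, b:b + L2] += w / 2.0
    return S / np.sqrt((c ** 2).sum())

for L in [(10, 10), (30, 30), (80, 80)]:
    z = np.array([normalized_sum(*L) for _ in range(2000)])
    q = np.quantile(z, [0.1, 0.5, 0.9])
    print(L, round(z.var(), 3), np.round(q, 2))  # compare with N(0,1): 1, [-1.28, 0, 1.28]
\end{verbatim}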

This result extends a theorem of Peligrad (see [3], Theorem 2.2), which is Theorem 1.1 for the case \(d=1\). Later, Peligrad and Utev [5] obtained an invariance principle for the partial-sum processes of strongly mixing triangular arrays of random variables satisfying a condition on the interlaced mixing coefficients \(\rho ^*_n\). Their invariance principle generalizes the corresponding results for independent random variables treated, e.g., by Prohorov [6]. For the strictly stationary case, see Peligrad [4].

For a sequence of strictly stationary random fields that are uniformly \(\rho '\)-mixing and satisfy a Lindeberg condition, a central limit theorem for sequences of “rectangular” sums is obtained in [7]. That “Lindeberg CLT” is then used there to prove a CLT for certain kernel estimators of the probability density of strictly stationary \(\rho '\)-mixing random fields whose (one-dimensional and joint) probability densities are absolutely continuous; this generalizes the results of [2], which were obtained under \(\rho ^*\)-mixing.

2 Background Results

The proof of Theorem 1.1 makes frequent use of the following results. The first one is a consequence of Theorem 28.10(I) of [1]; it gives an upper bound for the variance of partial sums.

Theorem 2.1

Suppose d is a positive integer, \(L\in \mathbb {N}^d\), and \(X:=\left( X_k, k \in B(L) \right) \) is a (not necessarily strictly stationary) random field such that for each \(k\in B(L)\), the random variable \( X_k\) has mean zero and finite second moment. Suppose \(\rho '(X, j)<1\) for some \(j \in \mathbb {N}\). Then for any nonempty finite set \(S \subseteq B(L)\),

$$\begin{aligned} E \left| \sum _{k \in S} X_k \right| ^2 \le C \sum _{k \in S} E \left( X_k \right) ^2, \end{aligned}$$
(2.1)

where \(C:=j^d \left( 1+\rho '(X, j) \right) ^d/\left( 1- \rho '(X, j) \right) ^d.\)

The second result is a consequence of Theorem 28.9 of [1]; it gives lower and upper bounds for the variance of partial sums.

Theorem 2.2

Suppose d is a positive integer, \(L \in \mathbb {N}^d\), and \(X:=\left( X_k, k \in B(L) \right) \) is a (not necessarily strictly stationary) random field such that for each \(k\in B(L)\), the random variable \(X_k\) has mean zero and finite second moment. Suppose \(\rho '(X,1)<1\). Then for any nonempty finite set \(S \subseteq B(L)\),

$$\begin{aligned} C^{-1} \sum _{k \in S} E \left| X_k \right| ^2 \le E \left| \sum _{k \in S} X_k \right| ^2 \le C \sum _{k \in S}E \left| X_k \right| ^2, \end{aligned}$$
(2.2)

\(\text {where }C:=(1+ \rho '(X, 1))^d / (1- \rho '(X, 1))^d\).
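As an illustration only, the bound (2.2) can be checked numerically for a small jointly Gaussian field: for Gaussian vectors the maximal correlation coefficient equals the largest canonical correlation between them (a classical fact, taken here as given), so \(\rho '(X, 1)\) is computable from covariance blocks; by monotonicity of \(\rho \) in the \(\sigma \)-fields, it suffices to compare, for each coordinate u, the two “slabs” determined by splitting the values of that coordinate into two disjoint nonempty sets. A minimal Python sketch, with a hypothetical covariance model:

\begin{verbatim}
import itertools
import numpy as np

d, L = 2, (3, 3)
sites = list(itertools.product(range(1, L[0] + 1), range(1, L[1] + 1)))
m = len(sites)
dist = np.array([[np.hypot(a[0] - b[0], a[1] - b[1]) for b in sites] for a in sites])
cov = 0.4 ** dist                               # exponential covariance, unit variances

def inv_sqrt(S):
    w, V = np.linalg.eigh(S)
    return V @ np.diag(w ** -0.5) @ V.T

def max_canonical_corr(ia, ib):
    M = inv_sqrt(cov[np.ix_(ia, ia)]) @ cov[np.ix_(ia, ib)] @ inv_sqrt(cov[np.ix_(ib, ib)])
    return np.linalg.svd(M, compute_uv=False)[0]

rho1 = 0.0
for u in range(d):
    for r in range(1, L[u]):
        for A in itertools.combinations(range(1, L[u] + 1), r):
            Q = [i for i, k in enumerate(sites) if k[u] in A]
            S = [i for i, k in enumerate(sites) if k[u] not in A]
            rho1 = max(rho1, max_canonical_corr(Q, S))

C = ((1 + rho1) / (1 - rho1)) ** d
rng = np.random.default_rng(0)
for _ in range(5):                              # check (2.2) on random nonempty subsets
    idx = rng.choice(m, size=rng.integers(1, m + 1), replace=False)
    ones = np.ones(len(idx))
    var_of_sum = ones @ cov[np.ix_(idx, idx)] @ ones   # E |sum_{k in S} X_k|^2
    sum_of_var = np.trace(cov[np.ix_(idx, idx)])       # sum_{k in S} E |X_k|^2
    assert var_of_sum <= C * sum_of_var + 1e-9 and sum_of_var <= C * var_of_sum + 1e-9
print("rho'(X, 1) =", round(rho1, 3), "  C =", round(C, 2))
\end{verbatim}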

The next result is a particular case of the Rosenthal inequality (see [1], Theorem 29.30) for the exponent 4.

Theorem 2.3

Suppose d and m are positive integers and \(r \in [0, 1)\). Then there exists a constant \(C:=C(d, 4, r, m)\) such that the following holds:

Suppose \(L \in \mathbb {N}^d\) and \(X:=\left( X_k, k \in B(L) \right) \) is a (not necessarily strictly stationary) random field such that for each \(k \in B(L)\), \(EX_k=0\) and \(E|X_k|^4<\infty \), and \(\rho '(X, m)\le r\). Then for any nonempty finite set \(S \subseteq B(L)\), one has that

$$\begin{aligned} E \left| \sum _{k \in S} X_k\right| ^4 \le C \cdot \left[ \sum _{k \in S}E \left| X_k \right| ^4+\left( \sum _{k \in S}E \left| X_k\right| ^2 \right) ^2 \right] . \end{aligned}$$
(2.3)

3 Induction Assumption

The proof of Theorem 1.1 will be done by induction on d. For \(d=1\), Theorem 1.1 was proved by Peligrad ([3], Theorem 2.2). Now suppose d is an integer such that \(d \ge 2\). As the induction hypothesis, suppose Theorem 1.1 holds in the case where d is replaced by the particular integer \(d-1\). To complete the induction step (and thereby the proof of Theorem 1.1), it suffices to prove Theorem 1.1 in the case of the given integer d.

To carry out the induction step, we will first treat the case where

$$\begin{aligned} \inf _{n\in \mathbb {N}} \sigma _n^2>0 \end{aligned}$$
(3.1)

and

$$\begin{aligned} \theta _n:=\sup _{k \in B(L_n)} \left\| X_k^{(n)} \right\| _{\infty } \rightarrow 0. \end{aligned}$$
(3.2)

Notice that (3.2) [together with (3.1)] implies the Lindeberg condition (1.4): indeed, for any \(\varepsilon >0\), once \(\theta _n < \varepsilon \inf _{m \ge 1} \sigma _m \le \varepsilon \sigma _n\), every indicator in (1.4) vanishes. Our goal in Sects. 3 and 4 is to show that for \(X^{(n)}:=\left( X_k^{(n)}, k \in B(L_n) \right) \) satisfying (1.2), (1.3), (3.1), and (3.2), the CLT holds, that is

$$\begin{aligned} \frac{1}{\sigma _n}\sum _{k \in B(L_n)} X_k^{(n)} \Rightarrow N(0, 1) \text { as } n\rightarrow \infty . \end{aligned}$$
(3.3)

Then, in Sect. 5, the induction step will be completed with the use of a standard truncation argument that reduces the general case to that of the restrictions (3.1)–(3.2).

In what follows, for convenience, we shall use the notation \(L_n:=L^{(n)}:=\left( L_1^{(n)}, L_2^{(n)},\ldots , L_d^{(n)} \right) \).

Lemma 3.1

Suppose in addition to the properties (1.2), (1.3), (3.1), and (3.2) that \(\sup _{n \in \mathbb {N}} L_1^{(n)}<\infty \). For each \(n \ge 1\), define the element \( \widetilde{L}^{(n)} \in \mathbb {N}^{d-1}\) by \(\widetilde{L}^{(n)}:=\left( L_2^{(n)}, L_3^{(n)}, \ldots , L_d^{(n)} \right) \). For each \(n \ge 1\), define the random field \(W^{(n)}:=\left( W^{(n)}_k, k \in B(\widetilde{L}^{(n)}) \right) \) as follows: For each \(k:=(k_2, k_3, \ldots , k_d) \in B(\widetilde{L}^{(n)})\),

$$\begin{aligned} W_{k}^{(n)}:=\sum _{u \in \left\{ 1, 2, \ldots , L_1^{(n)} \right\} }X^{(n)}_{ \left( u, \ k\right) }. \end{aligned}$$

Then

$$\begin{aligned} \frac{1}{\sigma _n}\sum _{k \in B(L^{(n)})} X_k^{(n)}= \left( E \left( \sum _{k \in B(\widetilde{L}^{(n)})} W_k^{(n)} \right) ^{2} \right) ^{-1/2} \sum _{k \in B(\widetilde{L}^{(n)})} W_k^{(n)} \Rightarrow N(0, 1) \end{aligned}$$

\(\text {as } n\rightarrow \infty \).

Proof

It is easy to see that

$$\begin{aligned} E \left( \sum _{k \in B(\widetilde{L}^{(n)}) }W_k^{(n)} \right) ^{2} =E \left( \sum _{k \in B(\widetilde{L}^{(n)}) }\sum _{u=1}^{L_1^{(n)}} X_{(u,k)}^{(n)} \right) ^{2}=\sigma _n^2. \end{aligned}$$

The random field \( W^{(n)}\) inherits the mixing and moment properties from the parent random field \(X^{(n)}\): each \(W^{(n)}_k\) has mean zero and finite second moment, and since \(W^{(n)}_k\) is a measurable function of \(\left( X^{(n)}_{(u, k)}, 1 \le u \le L_1^{(n)}\right) \), the dependence coefficients of \(W^{(n)}\) are bounded by the corresponding ones of \(X^{(n)}\), so that (1.2) and (1.3) are preserved. In addition,

$$\begin{aligned}&\sup _{k \in B(\widetilde{L}^{(n)}) } \left\| W_k^{(n)}\right\| _{\infty }= \sup _{k \in B(\widetilde{L}^{(n)}) } \left\| \sum _{u=1}^{L_1^{(n)}} X_{(u,k)}^{(n)}\right\| _{\infty } \le \sup _{k \in B(\widetilde{L}^{(n)}) } \sum _{u=1}^{L_1^{(n)}} \left\| X_{(u,k)}^{(n)}\right\| _{\infty }\\&\quad \le \sum _{u=1}^{L_1^{(n)}} \sup _{k \in B(\widetilde{L}^{(n)}) }\left\| X_{(u,k)}^{(n)}\right\| _{\infty } \le \sum _{u=1}^{L_1^{(n)}} \sup _{k \in B(L^{(n)} )} \left\| X_k^{(n)}\right\| _{\infty } = L_1^{(n)} \theta _n \rightarrow 0 \text { as } n\rightarrow \infty . \end{aligned}$$

By the induction hypothesis for \(d-1\), the CLT holds, and the proof of Lemma 3.1 is complete. \(\square \)
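Computationally, the collapsing of the first coordinate used in Lemma 3.1 simply amounts to summing a d-dimensional array over its first axis, which leaves the total sum (and hence \(\sigma _n^2\)) unchanged. A toy illustration with hypothetical shapes:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((4, 6, 5))      # a field on B((4, 6, 5)), d = 3
W = X.sum(axis=0)                        # the field W^{(n)} on B((6, 5)), d - 1 = 2
assert np.isclose(X.sum(), W.sum())      # same total sum S(X, L) = S(W, L-tilde)
\end{verbatim}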

Lemma 3.2

Suppose, in addition to the properties (1.2), (1.3), (3.1), and (3.2), that \( L_1^{(n)} \rightarrow \infty \) as \(n\rightarrow \infty \). For each \(n \in \mathbb {N}\) and each \(j \in \{1,2,\ldots , L_1^{(n)}\}\), define the random variable

$$\begin{aligned} Y_j^{(n)}=\sum _{\{ k=(k_1, \ldots , k_d) \in B(L^{(n)}) : k_1=j \} } X^{(n)}_k. \end{aligned}$$

Assume also that

$$\begin{aligned} \sup _{j \in \{1, 2, \ldots , L_1^{(n)} \} } \left( s_j^{(n)} \right) ^2 \rightarrow 0 \text { as } n\rightarrow \infty , \text { where } \left( s_j^{(n)} \right) ^2= E \left( Y_j^{(n)} \right) ^2. \end{aligned}$$
(3.4)

Then

$$\begin{aligned} \frac{1}{\sigma _n}\sum _{k \in B(L^{(n)})} X_k^{(n)}=\frac{1}{\sigma _n} \sum _{j=1}^{L_1^{(n)}} Y_j^{(n)} \Rightarrow N(0, 1) \text { as } n\rightarrow \infty . \end{aligned}$$
(3.5)

Proof

We shall first give some notations and basic observations that will be used in both the main argument below for Lemma 3.2 and the argument for Lemma 4.1 in Sect. 4.

For each \(n \in \mathbb {N}\) and each \(j \in \left\{ 1, 2, \ldots , L_1^{(n)} \right\} \), define the (“slice”) set

$$\begin{aligned} \text {slice}_j^{(n)}:=\left\{ k:=(k_1, \ldots , k_d) \in B(L^{(n)}) : k_1=j \right\} . \end{aligned}$$

Then for each such n and j, \(Y_j^{(n)}=\sum _{k \in \text {slice}_j^{(n)}} X^{(n)}_k\). By Theorem 2.2, for each such n and j, the two numbers \(( s_j^{(n)} )^2=E ( Y_j^{(n)})^2= E ( \sum _{k \in \text {slice}_j^{(n)}} X^{(n)}_k )^2\) and \(\sum _{k \in \text {slice}_j^{(n)}}E ( X^{(n)}_k )^2\) either are both 0 or are both positive and within a constant factor (in \([c^{-1}, c]\), where \(c :=(1+\rho '(1) )^d/ (1-\rho '(1))^d \)) of each other. Similarly, by (3.1) and Theorem 2.2, for each \(n \in \mathbb {N}\), the following three quantities are positive and are within a constant factor (in the same interval \([c^{-1}, c]\)) of each other:

$$\begin{aligned}&\sigma _n^2= E \left( \sum _{k \in B(L_n)} X_k^{(n)}\right) ^2 = E \left( \sum _{j=1}^{L_1^{(n)}} Y_j^{(n)}\right) ^2;\\&\sum _{j=1}^{L_1^{(n)} } \left( s_j^{(n)}\right) ^2= \sum _{j=1}^{L_1^{(n)} } E \left( Y_j^{(n)} \right) ^2=\sum _{j=1}^{L_1^{(n)} } E \left( \sum _{k \in \text {slice}_j^{(n)} } X_k^{(n)} \right) ^2;\\&\sum _{j=1}^{L_1^{(n)} } \sum _{k \in \text {slice}_j^{(n)} } E \left( X_k^{(n)} \right) ^2 =\sum _{k \in B(L^{(n)})}E \left( X_k^{(n)} \right) ^2. \end{aligned}$$

Finally, by (3.1), \(\sigma ^2_n \ll \sigma _n^4\) as \( n \rightarrow \infty \), since \(\sigma _n^2 = \sigma _n^4/\sigma _n^2 \le \left( \inf _{m \ge 1} \sigma _m^2 \right) ^{-1} \sigma _n^4\). Here and below, for sequences of nonnegative numbers \(a_n\) and \(b_n\), the notation \(a_n \ll b_n\) means \(a_n=O(b_n)\) as \(n \rightarrow \infty \).

To prove (3.5), the main task will be to show that Lyapounov’s condition holds (with exponent 4), that is,

$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{1}{\sigma _n^4} \sum _{j=1}^{L_1^{(n)} }E \left( Y_j^{(n)} \right) ^4=0. \end{aligned}$$
(3.6)

For each \(n \in \mathbb {N}\), applying (1.3) and Theorem 2.3 (and using its constant C) and then adding up over all \(j \in \left\{ 1, 2, \ldots , L_1^{(n)} \right\} \), we obtain that

$$\begin{aligned} \sum _{j=1}^{L_1^{(n)} } E \left( Y_j^{(n)} \right) ^4 \le C \left[ \sum _{j=1}^{L_1^{(n)} } \sum _{k \in \text {slice}_j^{(n)}} E \left( X_k^{(n)} \right) ^4 + \sum _{j=1}^{L_1^{(n)} } \left( \sum _{k \in \text {slice}_j^{(n)} } E \left( X_k^{(n)} \right) ^2 \right) ^2 \right] \end{aligned}$$
(3.7)

Using (3.2) and Theorem 2.2, the first term on the right-hand side of (3.7) can be bounded above in the following way:

$$\begin{aligned} \begin{array}{l} \sum \limits _{j=1}^{L_1^{(n)} } \sum \limits _{k \in \text {slice}_j^{(n)}} E \left( X_k^{(n)} \right) ^4 = \sum \limits _{j=1}^{L_1^{(n)} } \sum \limits _{k \in \text {slice}_j^{(n)}} E \left[ \left( X_k^{(n)} \right) ^2 \left( X_k^{(n)} \right) ^2 \right] \\ \quad \le \theta _n^2 \sum \limits _{j=1}^{L_1^{(n)} } \sum \limits _{k \in \text {slice}_j^{(n)}} E \left( X_k^{(n)} \right) ^2 \ll \theta _n^2 \sigma _n^2 \ll \theta _n^2 \sigma _n^4=o(\sigma _n^4) \text { as } n \rightarrow \infty . \end{array} \end{aligned}$$

By (3.4) (and the fact \(\sigma _n^2 \ll \sigma _n^4\)), the second term on the right-hand side of (3.7) can be bounded above in the following way:

$$\begin{aligned}&\sum _{j=1}^{L_1^{(n)} } \left( \sum _{k \in \text {slice}_j^{(n)} } E \left( X_k^{(n)} \right) ^2 \right) ^2 = \sum _{j=1}^{L_1^{(n)} } \left( \sum _{k \in \text {slice}_j^{(n)} } E \left( X_k^{(n)} \right) ^2 \right) \left( \sum _{k \in \text {slice}_j^{(n)} } E \left( X_k^{(n)} \right) ^2 \right) \\&\quad \ll \left[ \sup _{j \in \{1, 2, \ldots , L_1^{(n)} \} } \left( s_j^{(n)} \right) ^2 \right] \sum _{j=1}^{L_1^{(n)} } \sum _{k \in \text {slice}_j^{(n)} } E \left( X_k^{(n)} \right) ^2\\&\quad \ll \left[ \sup _{j \in \{1, 2,\ldots , L_1^{(n)} \} } \left( s_j^{(n)} \right) ^2 \right] \sigma _n^2 =o(\sigma _n^4) \text { as } n \rightarrow \infty . \end{aligned}$$

Hence, (3.6) holds; as a consequence, the Lindeberg condition is satisfied for the rows \(\left( Y_j^{(n)}, \ j\in \left\{ 1, 2, \ldots , L_1^{(n)} \right\} \right) \), since for each \(\varepsilon >0\), \(\sigma _n^{-2}\sum _{j=1}^{L_1^{(n)}} E \left( Y_j^{(n)} \right) ^2 I\left( \left| Y_j^{(n)}\right| >\varepsilon \sigma _n\right) \le \varepsilon ^{-2}\sigma _n^{-4}\sum _{j=1}^{L_1^{(n)}} E \left( Y_j^{(n)} \right) ^4 \rightarrow 0\) by (3.6). Applying Peligrad’s CLT for \(d=1\) (see [3], Theorem 2.2) to the array \(\left( Y_j^{(n)},\ n\in \mathbb {N}, \ j\in \left\{ 1, 2, \ldots , L_1^{(n)} \right\} \right) \), one has that (3.5) holds. The proof of Lemma 3.2 is complete. \(\square \)

4 “General Lemma”

The following lemma deals with the most general case under the restrictions (3.1) and (3.2).

Lemma 4.1

Suppose that for each \(n \in \mathbb {N}\), \(L_n \in \mathbb {N}^d\), \(X^{(n)}:=\left( X_k^{(n)}, k \in B(L_n) \right) \) is a (not necessarily strictly stationary) random field such that for each \(k\in B(L_n)\), \( X_k^{(n)}\) has mean zero and finite second moment. Suppose that (1.2), (1.3), (3.1), and (3.2) are satisfied. Then

$$\begin{aligned} \frac{1}{\sigma _n}\sum _{k \in B(L_n)} X_k^{(n)} \Rightarrow N(0, 1) \text { as } n\rightarrow \infty . \end{aligned}$$

Proof

It suffices to show that for an arbitrary fixed infinite set \(S \subseteq \mathbb {N}\), there exists an infinite set \(T \subseteq S\) such that

$$\begin{aligned} \frac{1}{\sigma _n}\sum _{k \in B(L_n)} X_k^{(n)} \Rightarrow N(0, 1) \text { as } n\rightarrow \infty , \ n \in T. \end{aligned}$$
(4.1)

Again we write \(L_n\) as \(L^{(n)}:=\left( L_1^{(n)}, L_2^{(n)}, \ldots , L_d^{(n)} \right) \). We freely use the notations \(Y_j^{(n)}, \left( s_j^{(n)} \right) ^2\) and \(\text {slice}_j^{(n)}\) from Lemma 3.2 and its proof. The observations in the first part of the proof of Lemma 3.2 (that is, prior to the paragraph containing Eq. (3.6)) hold in our context here, and will be used freely. (Of course the convergence to 0 in (3.4) is not assumed, and may not hold, in our context here.) Applying those observations, without loss of generality (that is, without sacrificing (3.1) or (3.2)), we now normalize so that

$$\begin{aligned} \forall n \ge 1, \ \sum _{j=1}^{L_1^{(n)} } \left( s_j^{(n)} \right) ^2=1. \end{aligned}$$
(4.2)

The proof of (4.1) (including the choice of an appropriate infinite set \(T \subseteq S\)) will be divided into 12 “steps.”

Step 1: Consider first the case where \(\sup _{n \in S} L_1^{(n)}< \infty \). By Lemma 3.1, the asymptotic normality in (4.1) holds with \(T:=S\), and for this case we are done.

Step 2: Now henceforth suppose that \(\sup _{n \in S} L_1^{(n)}= \infty \).

Let us choose an infinite set \(S_0 \subseteq S\) such that \(L_1^{(n)}\rightarrow \infty \text { as } n\rightarrow \infty , \ n \in S_0\). For each \(n \ge 1\), let \(p(n, j)\), \(j \in \{1, 2, \ldots , L_1^{(n)} \} \), be a permutation of the set \( \{1, 2, \ldots , L_1^{(n)} \} \) such that

$$\begin{aligned} \left( s_{p(n, 1)}^{(n)} \right) ^2 \ge \left( s_{p(n, 2)}^{(n)} \right) ^2 \ge \ldots \ge \left( s_{p(n, L_1^{(n)} )}^{(n)} \right) ^2. \end{aligned}$$
(4.3)

By (4.2), we obtain that

$$\begin{aligned} \sum _{j=1}^{L_1^{(n)} } \left( s_{p(n, j)}^{(n)} \right) ^2=1. \end{aligned}$$
(4.4)

As a consequence, by (4.3) and (4.4) (since the \(j\) largest of the numbers \(\left( s_{p(n, i)}^{(n)} \right) ^2\), \(1 \le i \le L_1^{(n)}\), are each at least \(\left( s_{p(n, j)}^{(n)} \right) ^2\) and sum to at most 1),

$$\begin{aligned} \forall n \ge 1,\ \forall j \in \left\{ 1, 2, \ldots ,L_1^{(n)}\right\} , \ \left( s_{p(n, j)}^{(n)} \right) ^2 \le \frac{1}{j}. \end{aligned}$$
(4.5)

Of course since \(L_1^{(n)} \rightarrow \infty \) as \(n \rightarrow \infty , \ n \in S_0\), one has that for each \(l\ge 1\), the index \(p(n, l)\) and the number \( \left( s_{p(n, l)}^{(n)} \right) ^2\) are defined for all sufficiently large \(n \in S_0\). That will be used repeatedly in what follows.

Let us now define the following infinite sets:

$$\begin{aligned} S_1\subseteq & {} S_0\text { such that } \lambda _1=\lim _{ n \rightarrow \infty , \ n \in S_1}\left( s_{p(n, 1)}^{(n)} \right) ^2 \text { exists};\\ S_2\subseteq & {} S_1 \text { such that } \lambda _2=\lim _{ n \rightarrow \infty ,\ n \in S_2} \left( s_{p(n, 2)}^{(n)} \right) ^2 \text { exists};\\ S_3\subseteq & {} S_2 \text { such that } \lambda _3=\lim _{ n \rightarrow \infty ,\ n \in S_3} \left( s_{p(n, 3)}^{(n)} \right) ^2 \text { exists}; \end{aligned}$$

and so on. By the Cantor diagonalization method, we obtain an infinite set \(S_{00}:=\{ \widetilde{n}_1<\widetilde{n}_2<\widetilde{n}_3< \ldots \} \) such that \(\widetilde{n}_l \in S_l\) and \(S_l \supseteq \{\widetilde{n}_l, \widetilde{n}_{l+1}, \widetilde{n}_{l+2}, \ldots \}\). For the resulting infinite set \(S_{00}\), one has that \(S_{00} \subseteq S_0 \subseteq S\), and by (4.3) one also has that

$$\begin{aligned} \forall l\ge 1,\ \lim _{ n \rightarrow \infty , \ n \in S_{00}} \left( s_{p(n, l)}^{(n)} \right) ^2 =\lambda _l; \text { with } \lambda _1 \ge \lambda _2 \ge \lambda _3 \ldots . \end{aligned}$$
(4.6)

In addition, \(\forall m\ge 1\), one has by (4.4) that \( \sum _{j=1}^{m } \left( s_{p(n, j)}^{(n)} \right) ^2\le 1 \text { for all } n \in S_{00}\) sufficiently large such that \(L_1^{(n)} \ge m\); and hence for every \(m \ge 1\), \( \sum _{j=1}^m \lambda _j \le 1 \) by (4.6). Hence

$$\begin{aligned} \lambda :=\sum _{j=1}^{\infty } \lambda _j \le 1. \end{aligned}$$
(4.7)

Step 3: Consider first the case where \(\lambda =0\). Then \(\lambda _j=0\) for all \( j \ge 1\). By (4.5), (4.6), and a simple argument, \(\sup _{j \in \{1, 2, \ldots , L_1^{(n)} \} } \left( s_{p(n, j)}^{(n)} \right) ^2 \rightarrow 0 \text { as } n\rightarrow \infty , \ n \in S_{00}.\) By Lemma 3.2,

$$\begin{aligned} \frac{1}{\sigma _n} \sum _{j=1}^{L_1^{(n)}} Y_j^{(n)}=\frac{1}{\sigma _n}\sum _{k \in B(L^{(n)})} X_k^{(n)} \Rightarrow N(0, 1) \text { as } n\rightarrow \infty , \ n \in S_{00}. \end{aligned}$$
(4.8)

Thus (4.1) holds with \(T:=S_{00}\), and for this case we are done.

Step 4: Now henceforth suppose that \( \lambda >0\). (Then by (4.6) and (4.7), \(\lambda _1>0\).) Our task now is to show that (4.1) holds for some infinite set \(T \subseteq S_{00}\).

Recall again that \(L_1^{(n)} \rightarrow \infty \text { as }n\rightarrow \infty , \ n \in S_{00}\). For each \(q \ge 1\) and each \( n \in S_{00}\) such that \(L_1^{(n)} >q\), define the set

$$\begin{aligned} \overline{\varGamma }_1^{(q, n)}=\{ p(n, 1), p(n, 2), \ldots , p(n, q)\} \end{aligned}$$

and the random variable

$$\begin{aligned} W^{(q, n)}:= \sum _{j \in \overline{\varGamma }_1^{(q, n)} } Y_j^{(n)}. \end{aligned}$$

Recall that (here in Step 4 and henceforth) \(\lambda _1>0\). By (4.6), \(E \left( Y_{p(n, 1)}^{(n)} \right) ^2=\left( s_{p(n, 1)}^{(n)} \right) ^2>\lambda _1/2\) for all \(n \in S_{00}\) sufficiently large.

For each positive integer q, the following observations hold: Trivially, we have that \(\sum _{j \in \overline{\varGamma }_1^{(q, n)}} E \left( Y_j^{(n)} \right) ^2 \ge E \left( Y_{p(n, 1)}^{(n)} \right) ^2 \ge \lambda _1/2 \) for all \(n \in S_{00}\) sufficiently large. Hence, by Theorem 2.2, there exists a positive number \(c_0\) (not even depending on q) such that \(E \left( W^{(q, n)} \right) ^2\ge c_0\) for all \(n \in S_{00}\) sufficiently large. That is the analog of (3.1) for sufficiently large \(n \in S_{00}\) when the indices \(k:=(k_1, \ldots , k_d) \in B(L^{(n)})\) are restricted to the ones such that \(k_1 \in \overline{\varGamma }_1^{(q, n)}\). Hence, one can apply Lemma 3.1, and one obtains that

$$\begin{aligned} \frac{W^{(q, n)}}{ \Vert W^{(q, n)} \Vert _2} \Rightarrow N(0, 1)\text { as } n\rightarrow \infty , \ n \in S_{00}. \end{aligned}$$

The convergence above was shown for arbitrary \(q\ge 1\). By a well-known theorem for continuous limiting distributions, one now has that

$$\begin{aligned} \forall q \ge 1,\ \sup _{x\in \mathbb {R}} \left| F_{ W^{(q, n)} / \Vert W^{(q, n)}\Vert _2}(x ) -\varPhi (x) \right| \rightarrow 0 \text { as } n\rightarrow \infty , \ n \in S_{00}. \end{aligned}$$

Here \(\varPhi (x)\) represents the distribution function of a N(0, 1) random variable and \(F_V\) is the distribution function of a given random variable V.

Step 5: For each \(q \ge 1\), let \(m_q \in \mathbb {N}\) be such that

$$\begin{aligned} \alpha (m_q)<\frac{1}{q^2}. \end{aligned}$$
(4.9)

Let \(n_1<n_2< \ldots \in S_{00}\) be such that for all \(q \ge 1\), the following hold:

$$\begin{aligned}&L_1^{(n_q)}>q^2m_q; \end{aligned}$$
(4.10)
$$\begin{aligned}&\Vert W^{(q, n_q)} \Vert _2>0 \text { and } \sup _{x\in \mathbb {R}} \left| F_{W^{(q, n_q)} / \Vert W^{(q, n_q)} \Vert _2}(x)-\varPhi (x) \right| \le \frac{1}{q}, \text { and} \end{aligned}$$
(4.11)
$$\begin{aligned}&\left| \sum _{j=q+1}^{q^2m_q} \left( s_{p(n_q, j)}^{(n_q)} \right) ^2 - \sum _{j=q+1}^{q^2m_q} \lambda _j \right| \le \frac{1}{q}. \end{aligned}$$
(4.12)

(To justify (4.12), see (4.6).)

For each \(q \ge 1\), define the following four index sets:

$$\begin{aligned} {\left\{ \begin{array}{ll} \varGamma _1^{(q)}=\{ p(n_q, 1), p(n_q, 2), \ldots , p(n_q, q)\}, \\ \varGamma _2^{(q)}=\{ p(n_q, q+1), p(n_q, q+2), \ldots , p(n_q, q^2m_q)\}, \\ \varGamma _3^{(q)}=\{ j \in \{1, \ldots , L_1^{(n_q)} \} -( \varGamma _1^{(q)} \cup \varGamma _2^{(q)} ) : \ \exists i \in \varGamma _1^{(q)} \text { such that } \ |i-j|\le m_q \},\\ \varGamma _4^{(q)}=\{ j \in \{1, \ldots , L_1^{(n_q)} \} -( \varGamma _1^{(q)} \cup \varGamma _2^{(q)} ) : \ \forall i \in \varGamma _1^{(q)} , \ |i-j|> m_q \}. \end{array}\right. }\nonumber \\ \end{aligned}$$
(4.13)

For each \(q \ge 1\), those four sets in (4.13) form a partition of the set \( \left\{ 1, 2, \ldots , L_1^{(n_q)} \right\} \) [see (4.10)]. For a given \(q \ge 1\), one of the latter two sets \(\varGamma _3^{(q)}\), \(\varGamma _4^{(q)}\) could perhaps be empty. Note that for each \(q \ge 1\), the set \(\varGamma _1^{(q)}\) here is the set \(\overline{\varGamma }_1^{(q, n_q)}\) in the notations in Step 4.
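For concreteness, the construction (4.13) can be sketched in a few lines of Python, with hypothetical values of q, \(m_q\), \(L_1^{(n_q)}\) and with a random permutation standing in for \(p(n_q, \cdot )\) (which, in the proof, orders the slice variances decreasingly):

\begin{verbatim}
import random

q, m_q, L1 = 3, 2, 30                               # hypothetical; note L1 > q^2 m_q
p = random.Random(0).sample(range(1, L1 + 1), L1)   # stand-in for p(n_q, 1), ..., p(n_q, L1)
G1 = set(p[:q])                                     # Gamma_1^{(q)}
G2 = set(p[q:q * q * m_q])                          # Gamma_2^{(q)}
rest = set(range(1, L1 + 1)) - G1 - G2
G3 = {j for j in rest if any(abs(i - j) <= m_q for i in G1)}   # within m_q of Gamma_1
G4 = rest - G3                                      # farther than m_q from all of Gamma_1
assert (G1 | G2 | G3 | G4) == set(range(1, L1 + 1))            # a partition, as claimed
\end{verbatim}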

For each \(q \ge 1\) and each \(i \in \{1, 2, 3, 4\}\), define the random variable

$$\begin{aligned} U_i^{(q)}=\sum _{j \in \varGamma _i^{(q)}} Y_j^{(n_q)}. \end{aligned}$$
(4.14)

Note that for each \(q \ge 1\), \(U_1^{(q)}=W^{(q, n_q)}\) by (4.14) (see Step 4), and also

$$\begin{aligned} \sum _{i=1}^4 U_i^{(q)}=\sum _{j=1}^{L_1^{(n_q)}} Y_j^{(n_q)}=\sum _{k \in B \left( L^{(n_q)} \right) } X_k^{(n_q)} . \end{aligned}$$
(4.15)

Step 6: Notice that by (1.3) and Theorem 2.2 (applied, with \(d=1\), to the one-dimensional random field \(\left( Y_j^{(n_q)}, \ 1 \le j \le L_1^{(n_q)}\right) \)), followed by (4.2), we obtain that for each \(q \ge 1\),

$$\begin{aligned} 0&\le \left( \frac{1-\rho '(1)}{1+\rho '(1)} \right) \sum _{j \in \varGamma _1^{(q)} } E \left( Y_j^{(n_q)} \right) ^2 \le E \left( U_1^{(q)} \right) ^2 \le \left( \frac{1+\rho '(1)}{1-\rho '(1)} \right) \sum _{j \in \varGamma _1^{(q)} } E \left( Y_j^{(n_q)} \right) ^2\\&\le \left( \frac{1+\rho '(1)}{1-\rho '(1)} \right) < \infty . \end{aligned}$$

Similarly, for each \(q \ge 1\),

$$\begin{aligned} 0 \le E \left( U_4^{(q)} \right) ^2 \le \left( \frac{1+\rho '(1)}{1-\rho '(1)} \right) < \infty . \end{aligned}$$

Hence, there exists an infinite set \(T \subseteq \mathbb {N}\) such that

$$\begin{aligned} \eta _1^2:= & {} \lim _{q \rightarrow \infty , \ q \in T} E \left( U_1^{(q)} \right) ^2 \text { exists (in } \mathbb {R}) \text {, and}\\ \eta _4^2:= & {} \lim _{q \rightarrow \infty , \ q \in T} E \left( U_4^{(q)} \right) ^2 \text { exists (in } \mathbb {R}). \end{aligned}$$

Our goal now is to prove that for the infinite set T just specified here,

$$\begin{aligned} \sigma _{n_q}^{-1} \sum _{k \in B\left( L^{(n_q)} \right) } X_k^{(n_q)} \Rightarrow N(0, 1) \text { as } q \rightarrow \infty , \ q \in T. \end{aligned}$$

That will accomplish (4.1) (and therefore complete the proof of Lemma 4.1) with the set T in (4.1) replaced here by the set \(\{n_q: q \in T\}\), which is an infinite subset of \(S_{00}\) and hence of S.

In what follows, the “N(0, 0) distribution” will of course mean the degenerate “point mass at 0.” It will be tacitly kept in mind and used freely that if a sequence of random variables converges to 0 in the 2-norm, then it converges to 0 in probability and hence converges to N(0, 0) in distribution.

Step 7: “The asymptotic normality of \(U_1^{(q)}\).” By (4.11), we obtain that

$$\begin{aligned}&\sup _{x\in \mathbb {R}} \left| F_{ W^{(q, n_q)} / \Vert W^{(q, n_q)} \Vert _2}(x )-\varPhi (x) \right| \rightarrow 0 \text { as } q \rightarrow \infty , \text { hence}\\&\frac{ U_1^{(q)}}{\left\| U_1^{(q)} \right\| _2 } \Rightarrow N(0, 1) \text { as } q \rightarrow \infty , \ q \in T. \end{aligned}$$

So, we obtain the asymptotic normality of the random variable \(U_1^{(q)}\), namely

$$\begin{aligned} U_1^{(q)} \Rightarrow N(0, \eta _1^2) \text { as } q \rightarrow \infty , \ q \in T. \end{aligned}$$
(4.16)

Step 8: “The asymptotic normality of \(U_4^{(q)}\).” Recall from (4.10) that \(L_1^{(n_q)} \rightarrow \infty \) as \( q \rightarrow \infty \). In addition, by (4.5) and the definition of \(\varGamma _4^{(q)}\) in (4.13),

$$\begin{aligned} \sup _{j \in \varGamma _4^{(q)} } E \left( Y_j^{(n_q)} \right) ^2 \le \frac{1}{q^2 m_q+1} \rightarrow 0\text { as } q \rightarrow \infty , \ q \in T. \end{aligned}$$

Whether \(\eta _4^2=0\) (in which case the convergence below holds trivially, by the remark preceding Step 7) or \(\eta _4^2>0\) (in which case it follows from Lemma 3.2, with the indices \(k:=(k_1, \ldots , k_d) \in B\left( L^{(n_q)} \right) \) restricted to the ones such that \(k_1 \in \varGamma _4^{(q)}\)), one has that

$$\begin{aligned} U_4^{(q)} \Rightarrow N\left( 0,\eta _4^2\right) \text { as } q \rightarrow \infty , \ q \in T. \end{aligned}$$
(4.17)

Step 9: “Negligibility of \(U_2^{(q)}\).” By (4.7),

$$\begin{aligned} \sum _{j=q+1}^{\infty } \lambda _j \rightarrow 0 \text { as } q \rightarrow \infty , \ q \in T. \end{aligned}$$

Therefore,

$$\begin{aligned} \sum _{j=q+1}^{q^2m_q} \lambda _j \rightarrow 0 \text { as } q \rightarrow \infty , \ q \in T, \end{aligned}$$

which gives us by (4.12) that

$$\begin{aligned} \sum _{j=q+1}^{q^2m_q}E \left( Y_{p(n_q, j)}^{(n_q)} \right) ^2 \rightarrow 0 \text { as } q \rightarrow \infty , \ q \in T. \end{aligned}$$

As a consequence, referring to (4.13) and (4.14) and bounding above the second moment of the random variable \(U_2^{(q)}\) using Theorem 2.2, we obtain that

$$\begin{aligned} E \left( \sum _{j \in \varGamma _2^{(q)}} Y_j^{(n_q)} \right) ^2 \rightarrow 0\text { as } q \rightarrow \infty , \ q \in T, \end{aligned}$$

hence

$$\begin{aligned} U_2^{(q)} \rightarrow 0 \text { in probability as } q \rightarrow \infty , \ q \in T. \end{aligned}$$
(4.18)

Step 10: “Negligibility of \(U_3^{(q)}\).” By (4.13), for each \(q \ge 1\), \(\text {card }\varGamma _1^{(q)}=q\) and hence by a simple argument, \(\text {card }\varGamma _3^{(q)} \le 2q \cdot m_q\). Using the definition of \(U_3^{(q)} \) given in (4.14), by Theorem 2.2 and Eqs. (4.5), (4.10), and (4.13) (and using an obvious constant C),

$$\begin{aligned}&E \left( U_3^{(q)}\right) ^2= E \left( \sum _{j \in \varGamma _3^{(q)} } Y_j^{(n_q)} \right) ^2 \le \left( \frac{1+\rho '(1)}{1-\rho '(1)} \right) ^d \sum _{j \in \varGamma _3^{(q)} }\left( s_j^{(n_q)}\right) ^2\\&\le C \cdot \sum _{j \in \varGamma _3^{(q)} } \frac{1}{q^2m_q} \le \frac{C \cdot 2q \cdot m_q}{q^2m_q} \rightarrow 0 \text { as } q \rightarrow \infty , \ q \in T. \end{aligned}$$

Therefore,

$$\begin{aligned} U_3^{(q)}\rightarrow 0 \text { in probability as } q \rightarrow \infty ,\,q\in T. \end{aligned}$$
(4.19)

Step 11: “A Special Blocking Argument.” We now return to the index sets \(\varGamma _1^{(q)}\) and \(\varGamma _4^{(q)}\) and the random variables \(U_1^{(q)}\) and \(U_4^{(q)}\), from (4.13), (4.14), and Steps 6, 7, and 8. We will set up (possibly “porous”) “blocks” that alternate between indices in \(\varGamma _1^{(q)}\) and \(\varGamma _4^{(q)}\). We carry out this process for the case where, for a given \(q\ge 1\), the minimum and maximum elements of \(\varGamma _1^{(q)}\cup \varGamma _4^{(q)}\) both belong to \(\varGamma _4^{(q)}\). Then we will indicate the trivial changes needed for the other cases.

Suppose \(q \ge 1\). Suppose that \(\min \left( \varGamma _1^{(q)}\cup \varGamma _4^{(q)} \right) \) and \(\max \left( \varGamma _1^{(q)}\cup \varGamma _4^{(q)} \right) \) each belong to \(\varGamma _4^{(q)}\). Recall from (4.13) that \(\text {card } \varGamma _1^{(q)}=q\). For some positive integer h(q) such that \(h(q) \le q\), there exists an “alternating sequence” of nonempty, finite, (pairwise) disjoint subsets of \(\mathbb {Z}\), namely \( \beta _1^{(q)}\), \(\gamma _1^{(q)}\), \( \beta _2^{(q)}\), \( \gamma _2^{(q)}, \ldots , \beta _{h(q)}^{(q)}\), \(\gamma _{h(q)}^{(q)}\), and \(\beta _{h(q)+1}^{(q)}\), with the following properties:

$$\begin{aligned}&\varGamma _1^{(q)}=\bigcup _{i=1}^{h(q)} \gamma _i^{(q)};\\&\varGamma _4^{(q)}=\bigcup _{i=1}^{h(q)+1} \beta _i^{(q)};\\&\forall i \in \{1, 2, \ldots , h(q)\}, \ m_q+\max \beta _i^{(q)} \le \min \gamma _i^{(q)};\\&\forall i \in \{1, 2, \ldots , h(q)\}, \ m_q+\max \gamma _i^{(q)} \le \min \beta _{i+1}^{(q)}. \end{aligned}$$

(The last two properties come from the definition of \(\varGamma _4^{(q)}\) in (4.13).) Next, define the following random variables:

$$\begin{aligned} \forall i\in & {} \{1, 2, \ldots , h(q)+1\}, \ V_i^{(q)}:=\sum _{j \in \beta _i^{(q)} }Y_j^{(n_q)} \text { and} \end{aligned}$$
(4.20)
$$\begin{aligned} \forall i\in & {} \{1, 2, \ldots , h(q)\}, \ Z_i^{(q)}:=\sum _{j \in \gamma _i^{(q)} }Y_j^{(n_q)}. \end{aligned}$$
(4.21)

Then by (4.14), we have the following identities:

$$\begin{aligned} U_1^{(q)}= & {} \sum _{i=1}^{h(q)} Z_i^{(q)}; \end{aligned}$$
(4.22)
$$\begin{aligned} U_4^{(q)}= & {} \sum _{i=1}^{h(q)+1} V_i^{(q)}. \end{aligned}$$
(4.23)

For a given \(q \ge 1\), those notations were defined in the case where \(\min ( \varGamma _1^{(q)}\cup \varGamma _4^{(q)} )\) and \(\max ( \varGamma _1^{(q)}\cup \varGamma _4^{(q)})\) both belong to \(\varGamma _4^{(q)}\). In the other cases, the notations are the same, but with one or both of the following trivial changes: (i) If \(\min (\varGamma _1^{(q)}\cup \varGamma _4^{(q)})\) belongs to \(\varGamma _1^{(q)}\), then the set \(\beta _1^{(q)}\) is empty and the random variable \(V_1^{(q)}\) is identically 0. (ii) If \(\max (\varGamma _1^{(q)}\cup \varGamma _4^{(q)})\) belongs to \(\varGamma _1^{(q)}\), then the set \(\beta _{h(q)+1}^{(q)}\) is empty and the random variable \(V_{h(q)+1}^{(q)}\) is identically 0.

The rest of the argument here in Step 11 will be carried out in the case where for each \(q\ge 1\), \(\min \left( \varGamma _1^{(q)}\cup \varGamma _4^{(q)} \right) \) and \(\max \left( \varGamma _1^{(q)}\cup \varGamma _4^{(q)} \right) \) both belong to \(\varGamma _4^{(q)}\). The changes needed in the argument to accommodate all other cases are trivial and need not be spelled out here.
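For concreteness, the alternating blocks can be obtained by scanning \(\varGamma _1^{(q)} \cup \varGamma _4^{(q)}\) in increasing order and grouping maximal consecutive runs by membership. A minimal Python sketch, with small hypothetical stand-ins for \(\varGamma _1^{(q)}\) and \(\varGamma _4^{(q)}\) (whose minimum and maximum lie in \(\varGamma _4^{(q)}\), as in the case treated here):

\begin{verbatim}
from itertools import groupby

G1 = {4, 9, 10}                 # stand-in for Gamma_1^{(q)}
G4 = {1, 2, 6, 7, 13, 14}       # stand-in for Gamma_4^{(q)}
runs = [(in_g1, list(grp))
        for in_g1, grp in groupby(sorted(G1 | G4), key=lambda j: j in G1)]
gammas = [blk for in_g1, blk in runs if in_g1]       # gamma_1, ..., gamma_{h(q)}
betas = [blk for in_g1, blk in runs if not in_g1]    # beta_1, ..., beta_{h(q)+1}
print(betas)    # [[1, 2], [6, 7], [13, 14]]
print(gammas)   # [[4], [9, 10]]  (so h(q) = 2 here)
\end{verbatim}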

For each \(q \ge 1\), construct (on some probability space) random variables \( \widetilde{V}_1^{(q)}\), \( \widetilde{Z}_1^{(q)}\), \( \widetilde{V}_2^{(q)}\), \( \widetilde{Z}_2^{(q)}\), \(\dots , \widetilde{V}_{h(q)}^{(q)}\), \(\widetilde{Z}_{h(q)}^{(q)}\), \(\widetilde{V}_{h(q)+1}^{(q)}\) that are independent of each other and such that each \(\widetilde{V}_i^{(q)}\) has the same distribution as the random variable \(V_i^{(q)}\) in (4.20) and each \(\widetilde{Z}_i^{(q)}\) has the same distribution as the random variable \(Z_i^{(q)}\) in (4.21). By (4.22) and Step 7, we obtain that

$$\begin{aligned} \sum _{i=1}^{h(q)}Z_i^{(q)} \Rightarrow N(0, \eta _1^2) \text { as } q \rightarrow \infty , \ q \in T. \end{aligned}$$

By (4.9), the following holds:

$$\begin{aligned} \sum ^{h(q)-1}_{k=1} \alpha \left( \sigma \left( Z^{(q)}_i, 1\le i \le k \right) , \sigma \left( Z^{(q)}_{k+1} \right) \right)\le & {} \sum ^{h(q)-1}_{k=1} \alpha ( 2m_q ) \\&\le \frac{q}{q^2} \rightarrow 0 \text { as } q \rightarrow \infty , \ q\in T. \end{aligned}$$

Hence, by [1] (Theorem 25.56),

$$\begin{aligned} \sum _{i=1}^{h(q)} \widetilde{Z}_i^{(q)} \Rightarrow N(0, \eta _1^2) \text { as } q \rightarrow \infty , \ q \in T. \end{aligned}$$
(4.24)

Similarly, we obtain that

$$\begin{aligned} \sum ^{h(q)}_{k=1} \alpha \left( \sigma \left( V^{(q)}_i, 1\le i \le k \right) , \sigma \left( V^{(q)}_{k+1} \right) \right) \le \sum ^{h(q)}_{k=1} \alpha ( 2m_q ) \le \frac{q}{q^2} \rightarrow 0 \text { as } q \rightarrow \infty , \ q\in T, \end{aligned}$$

and hence,

$$\begin{aligned} \sum _{i=1}^{h(q)+1} \widetilde{V}_i^{(q)} \Rightarrow N(0, \eta _4^2) \text { as } q \rightarrow \infty , \ q \in T. \end{aligned}$$
(4.25)

By Eqs. (4.24), (4.25), and independence of the random variables \(\widetilde{V}_{i}^{(q)}\), \(\widetilde{Z}_{j}^{(q)}\), with \(i \in \{1, 2, \ldots , h(q)+1\} \) and \(j \in \{1, 2, \ldots , h(q)\} \), we obtain that

$$\begin{aligned} \sum _{i=1}^{h(q)} \widetilde{Z}_i^{(q)} + \sum _{i=1}^{h(q)+1} \widetilde{V}_i^{(q)} \Rightarrow N(0, \eta _1^2+\eta _4^2) \text { as } q \rightarrow \infty , \ q \in T. \end{aligned}$$
(4.26)

Next, for the entire “alternating sequence” \(V_1^{(q)}, Z_1^{(q)}, V_2^{(q)}, Z_2^{(q)}, \ldots , V_{h(q)+1}^{(q)}\), we note from (4.9) that

$$\begin{aligned} 2q\cdot \alpha ( m_q ) \le \frac{2q}{q^2} \rightarrow 0 \text { as } q\rightarrow \infty , \ q\in T, \end{aligned}$$

and applying again [1] (Theorem 25.56) and (4.26), we obtain the analog of (4.26) with \(\widetilde{Z}_i^{(q)}\) and \(\widetilde{V}_i^{(q)}\) replaced by \(Z_i^{(q)}\) and \( V_i^{(q)}\), that is,

$$\begin{aligned} U_1^{(q)} + U_4^{(q)} \Rightarrow N(0, \eta _1^2+\eta _4^2) \text { as } q \rightarrow \infty , \ q \in T. \end{aligned}$$
(4.27)

Applying Slutsky’s theorem, by (4.18), (4.19), and (4.27), we obtain that

$$\begin{aligned} \sum _{k \in B(L^{(n_q)} ) }X_k^{(n_q)} = \sum _{j=1}^{L_1^{(n_q)}} Y_j^{(n_q)}= \sum _{i=1}^4 U_i^{(q)} \Rightarrow N(0, \eta _1^2+\eta _4^2) \text { as } q \rightarrow \infty , \ q \in T. \end{aligned}$$
(4.28)

Step 12: “Convergence of Variance.” Refer to (3.1), the last paragraph of Step 6 and the last line of Step 11. To complete the proof of Lemma 4.1, we now only need to show that

$$\begin{aligned} \sigma ^2_{n_q} \rightarrow \eta _1^2+\eta _4^2\text { as } q \rightarrow \infty , \ q \in T. \end{aligned}$$
(4.29)

To accomplish that, it will suffice (by a well-known uniform integrability argument: a uniform bound on the fourth moments makes the squares \(\left( \sum _{i=1}^4 U_i^{(q)}\right) ^2\), \(q \in T\), uniformly integrable, so that by (4.28) the second moments converge to the variance \(\eta _1^2+\eta _4^2\) of the limit law) to show that there is an upper bound on the fourth moments of the random variables \( \sum _{i=1}^4 U_i^{(q)} \), \(q \in T\).

Referring to the first equality in (4.28), one of course has by (4.2), (1.3), and Theorem 2.2 that the set of numbers \(\sigma _{n_q}^2, \ q \in T\) is bounded.

Since \(\rho '(1)<1\), by Theorem 2.3, we obtain (for the constant C in Theorem 2.3) that

$$\begin{aligned}&E \left( \sum _{i=1}^{4 } U_i^{(q)} \right) ^4 =E \left( \sum _{k \in B(L^{(n_q)})} X_k^{(n_q)} \right) ^4\nonumber \\&\quad \le C \left[ \sum _{k \in B(L^{(n_q)})} E \left( X_k^{(n_q)} \right) ^4 + \left( \sum _{k \in B(L^{(n_q)})} E \left( X_k^{(n_q)} \right) ^2 \right) ^2 \right] . \end{aligned}$$
(4.30)

Using (3.2) and Theorem 2.2, the first term on the right-hand side of (4.30) can be bounded above in the following way:

$$\begin{aligned} \begin{array}{l} \sum \limits _{k \in B(L^{(n_q)}) } E \left( X_k^{(n_q)} \right) ^4 = \sum \limits _{k \in B(L^{(n_q)}) } E \left[ \left( X_k^{(n_q)} \right) ^2 \left( X_k^{(n_q)} \right) ^2 \right] \\ \quad \le \theta _{n_q}^2 \sum \limits _{k \in B(L^{(n_q)}) } E \left( X_k^{(n_q)} \right) ^2 \ll \theta _{n_q}^2 \cdot \sigma _{n_q}^2 \rightarrow 0 \text { as } q \rightarrow \infty , \ q \in T. \end{array} \end{aligned}$$

The second term on the right-hand side of (4.30) can be bounded above as follows: As \(q \rightarrow \infty , \ q \in T\), by Theorem 2.2 again,

$$\begin{aligned} \left( \sum _{k \in B(L^{(n_q)})} E \left( X_k^{(n_q)} \right) ^2 \right) ^2&= \left( \sum _{k \in B(L^{(n_q)}) } E \left( X_k^{(n_q)} \right) ^2 \right) \left( \sum _{k \in B(L^{(n_q)}) } E \left( X_k^{(n_q)} \right) ^2 \right) \\&\ll \left( \sigma _{n_q}^2 \right) ^2\ll 1. \end{aligned}$$

Hence, \(\sup _{q \in T} E \left( \sum _{i=1}^{4 } U_i^{(q)} \right) ^4 <\infty \). That completes the proof of Lemma 4.1. \(\square \)

5 Lindeberg Condition and Truncation

Recall the Lindeberg condition in (1.4). Without loss of generality, we can assume \(\sigma ^2_n=1\) for each \(n \in \mathbb {N}\). Then by a simple argument,

$$\begin{aligned} \exists \epsilon _1 \ge \epsilon _2 \ge \ldots \downarrow 0 \text { such that } \lim _{n\rightarrow \infty } \sum _{k\in B(L_n)} E \left( X_{k}^{(n)} \right) ^2I \left( \left| X_k^{(n)}\right| >\epsilon _n\right) =0. \end{aligned}$$
(5.1)
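One standard way to produce such a sequence (we sketch it for completeness) is the following: for each \(m \in \mathbb {N}\), the quantity \(g_n(1/m):=\sum _{k\in B(L_n)} E ( X_{k}^{(n)} )^2 I ( | X_k^{(n)}| >1/m )\) tends to 0 as \(n \rightarrow \infty \) by (1.4) (recall \(\sigma _n^2=1\)); hence one can choose integers \(N_1<N_2< \ldots \) such that \(g_n(1/m) \le 1/m\) for all \(n \ge N_m\). Setting \(\epsilon _n:=1\) for \(n<N_1\) and \(\epsilon _n:=1/m\) for \(N_m \le n <N_{m+1}\) gives a nonincreasing sequence with \(\epsilon _n \downarrow 0\) and \(g_n(\epsilon _n) \rightarrow 0\), as in (5.1).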

We now truncate at the level \(\epsilon _n\). Define the following random variables: for every \( n \in \mathbb {N}\) and every \(k \in B(L_n)\),

$$\begin{aligned} X^{'(n)}_k:= & {} X_k^{(n)}I\left( \left| X_k^{(n)}\right| \le \epsilon _n\right) -EX_k^{(n)}I\left( \left| X_k^{(n)}\right| \le \epsilon _n\right) \text { and} \end{aligned}$$
(5.2)
$$\begin{aligned} X^{''(n)}_k:= & {} X_k^{(n)}I\left( \left| X_k^{(n)}\right| > \epsilon _n\right) -EX_k^{(n)}I\left( \left| X_k^{(n)}\right| > \epsilon _n\right) . \end{aligned}$$
(5.3)

Obviously (since \(EX_k^{(n)}=0\) for each n and k),

$$\begin{aligned} \sum _{k \in B(L_n)} X^{(n)}_k= \sum _{k \in B(L_n)} X^{'(n)}_k+ \sum _{k \in B(L_n)} X^{''(n)}_k. \end{aligned}$$
(5.4)

Since \(\rho '(1)<1\) (and since each \(X^{''(n)}_k\) is a measurable function of \(X^{(n)}_k\), so that \(\rho '(X^{''(n)}, 1) \le \rho '(X^{(n)}, 1)\)), we can again apply Theorem 2.2, and by (5.1) we obtain that

$$\begin{aligned} 0&\le E \left( \sum _{k \in B(L_n)} X^{''(n)}_k \right) ^2 \le \left( \frac{1+\rho '(1)}{1-\rho '(1)} \right) ^d \sum _{k \in B(L_n)}E \left( X^{''(n)}_k \right) ^2\\&\le C \sum _{k \in B(L_n)} E \left( X_k^{(n)} \right) ^2I(|X_k^{(n)}|> \epsilon _n) \rightarrow 0 \text { as }n \rightarrow \infty . \end{aligned}$$

Therefore,

$$\begin{aligned} \sum _{k \in B(L_n)} X^{''(n)}_k \rightarrow 0 \text { in probability as }n \rightarrow \infty . \ \end{aligned}$$

As a consequence, by Slutsky’s theorem, to prove that

$$\begin{aligned} \sum _{k \in B(L_n)} X_k^{(n)} \Rightarrow N(0, 1) \text { as } n\rightarrow \infty , \end{aligned}$$
(5.5)

we only have left to show that

$$\begin{aligned} \sum _{k \in B(L_n)} X^{'(n)}_k \Rightarrow N(0, 1) \text { as } n \rightarrow \infty . \end{aligned}$$
(5.6)

Note that \(\Vert X^{'(n)}_k \Vert _{\infty } \le 2\epsilon _n\) for every \(n \in \mathbb {N}\) and every \(k \in B(L_n)\). Since \(\epsilon _n \rightarrow 0\) as \(n \rightarrow \infty \) by (5.1), we have that

$$\begin{aligned} \sup _{k \in B(L_n)}\Vert X^{'(n)}_k \Vert _{\infty } \rightarrow 0 \text { as } n \rightarrow \infty . \end{aligned}$$

Note also that, by (5.4), Minkowski’s inequality in \(L^2\), and the convergence \(\Vert \sum _{k \in B(L_n)} X^{''(n)}_k \Vert _2 \rightarrow 0\) just proved, one has \(E \left( \sum _{k \in B(L_n)} X^{'(n)}_k \right) ^2 \rightarrow 1\); in particular, (3.1) holds for the fields \(X^{'(n)}:=\left( X^{'(n)}_k, k \in B(L_n)\right) \) for all sufficiently large n, and the corresponding normalizing constants in Lemma 4.1 tend to 1. Hence, by Lemma 4.1 (applied to the fields \(X^{'(n)}\), whose dependence coefficients are bounded by those of \(X^{(n)}\)), (5.6) holds, and hence also (5.5). The proof of Theorem 1.1 is complete.

6 Generalization

Theorem 6.1

Suppose d is a positive integer. For each \(n \in \mathbb {N}\), suppose \(L_n:=(L_{n1}, L_{n2}, \ldots , L_{nd})\) is an element of \(\mathbb {N}^d\), and suppose \(X^{(n)}:=\left( X_k^{(n)}, k \in B(L_n) \right) \) is an array of random variables such that for each \(k \in B(L_n)\), \(EX_{k}^{(n)}=0\) and \(E \left( X_{k}^{(n)} \right) ^2<\infty \), and for at least one \(k \in B(L_n)\), \(E \left( X_{k}^{(n)} \right) ^2>0\). Suppose also that the mixing assumptions (1.2) and

$$\begin{aligned} \lim _{m\rightarrow \infty }\rho '(m)<1 \end{aligned}$$
(6.1)

hold, where for each \(m \in \mathbb {N}\),

$$\begin{aligned} \rho '(m):=\sup _{n \in \mathbb {N}} \rho '(X^{(n)}, m). \end{aligned}$$

For each \(n \in \mathbb {N}\), define the random sum \(S \left( X^{(n)}, L_n \right) :=\sum _{k \in B(L_n)} X_k^{(n)}\) and define the quantity \(\sigma _n^2:= E \left( S \left( X^{(n)}, L_n \right) \right) ^2\). Suppose there exists a positive constant C such that for every \(n \in \mathbb {N}\) and every nonempty set \( S \subseteq B(L_n)\),

$$\begin{aligned} E \left( \sum _{ k \in S} X_k^{(n)} \right) ^2 \ge C \cdot \sum _{k \in S} E \left( X_k^{(n)} \right) ^2. \end{aligned}$$
(6.2)

Suppose the Lindeberg condition (1.4) holds. Then

$$\begin{aligned} \sigma _n^{-1}S(X^{(n)}, L_n)\Rightarrow N(0, 1) \text { as } n\rightarrow \infty . \end{aligned}$$

For \(d=1\), this result was proved by Peligrad ([3], Theorem 2.1), with (6.2) replaced by a weaker assumption. The proof of Theorem 6.1 again involves induction on the dimension d, and is just a slight modification of the argument in Sects. 3, 4, and 5 for Theorem 1.1. In essence, in place of (1.3) and Theorem 2.2, one uses (6.1), Theorem 2.1, and (6.2).

In fact, to make that argument work smoothly, it suffices to have a weaker version of (6.2) in which, for a given \(n \in \mathbb {N}\), the sets \(S \subseteq B(L_n)\) are restricted to certain special “rectangles” of the form \(S=S_1\times S_2 \times \ldots \times S_d\) where for each \( j \in \{ 1, 2, \ldots , d\}\), the set \(S_j\) either is \(\{1, 2, \ldots , L_{nj} \}\) or is \(\{k\}\) for some \(k \in \{1, 2, \ldots , L_{nj}\}\).