1 Introduction

We consider oriented bond percolation on the two-dimensional integer lattice. For background on this process, we refer to the review [15]. We show that the process exhibits classic central limit theorem (CLT) behavior throughout the supercritical phase; that is, the law of the diffusively rescaled cardinality of the process started from a single site and conditioned to percolate converges in distribution to the standard normal law. The continuous-time analog of two-dimensional oriented percolation is the basic contact process in one (spatial) dimension, and our result and approach carry over analogously to that process. The contact process on integer lattices was introduced in [33]. The corresponding strong law of large numbers (SLLN) was shown in [21]. The CLT was posed as an open problem in [15], later in [16], and, more recently, in [17]. The contact process falls within the subject of interacting particle systems (IPS), for background on which we refer to the classic accounts [16, 26, 39], and also to the later, more comprehensive accounts [18, 41], as the subject has received considerable further attention in the literature. Percolation theory originates in [9]; for more in this regard, we refer to the classic accounts [8, 28].

Harris’ Growth Theorem [35] states that the rate of growth of the highly supercritical contact process conditioned to percolate is almost surely linear in all dimensions. The corresponding \(L^{1}\)-LLN for the supercritical process in one dimension was shown by means of subadditivity and coupling arguments in [14]. We note that this result is considered a precursor to the general subadditive ergodic theorem shown in [40]. The range of parameter values for which Harris’ Growth Theorem holds was extended by means of improvements to Peierls’ argument in continuous time in [27]. The SLLN for the process in all dimensions, with parameter value larger than the critical value of the one-sided process in dimension one, was derived as a corollary of the general shape theorem in [20]. The SLLN valid throughout the supercritical phase in dimension one was shown by means of renormalization group techniques in [21]. Furthermore, the important property that the invariant measure possesses exponentially decaying correlations, together with other exponential estimates, was shown there. This key property, together with the SLLN for the position of the endpoints, a result shown earlier in [14], enabled the proof of the SLLN in [21]. Among other landmark results, the shape theorem, and hence the SLLN, valid throughout the supercritical phase and in all dimensions, was shown by means of renormalization techniques in [5]; see also the review [17]. To date, the following CLT’s regarding other functionals of the contact process have been derived in the literature. The CLT for time-averages of finite-support functions of the infinite one-dimensional supercritical contact process was shown in [52], by following an approach of [12], using an exponential decay property from [20], and applying general results of [44, 45]. Further, we mention that the CLT for the endpoints of the process was shown by means of mixing techniques in [25], and later by means of elementary arguments in [37]. In addition, for a detailed literature account of known CLT’s in classic percolation, we refer to \(\S \) 11.6 in [28]; see also the later [48, 49].

Furthermore, we derive certain CLT’s regarding randomly-indexed partial sums of non-stationary, associated r.v.’s (random variables), as byproducts of our proof technique. To the best of our knowledge, randomly-indexed CLT’s for families of associated r.v.’s have not been considered elsewhere in the literature. The introduction and appreciation of the usefulness of association in percolation date back to Harris’ Lemma [32], with the most prominent extensions to non-product measures being the FKG inequality [23], the Holley inequality [36], and the Ahlswede–Daykin inequality [1]. The systematic study of this concept as a general dependence structure was initiated in [22]. The realization that asymptotics for the correlation structure are useful in studying approximate independence of associated r.v.’s originates in [38], where necessary and sufficient conditions of this sort for the ergodic theorem to extend to this case were shown. The first corresponding CLT was derived in [43], whereas the key notion of demimartingales was introduced in [44]. Other notable CLT’s, which replace the stationarity assumption with moment conditions, are those due to [13], and also [7]. For background and comprehensive expositions on association, classic limit theorems for associated r.v.’s in particular, and much more about recent advances on the subject, we refer to the reviews [10, 46, 47, 50].

Our main result is the CLT for supercritical oriented percolation in two dimensions, which we state explicitly in Theorem 2.1 in Sect. 2.1. The CLT’s for randomly-indexed associated r.v.’s are stated in Sect. 2.2.

1.1 Definition of the Process

We let \({\mathscr {L}}= {\mathscr {L}}( {\mathbb {L}}, {\mathbb {B}})\) be the usual two-dimensional oriented percolation lattice graph, for which the set of sites is \({\mathbb {L}}= \{ (x,n) \in {\mathbb {Z}}^{2}: x+n \in 2{\mathbb {Z}} \text{ and } n \ge 0\}\), \(2{\mathbb {Z}}= \{2k: k \in {\mathbb {Z}}\}\), and the set of bonds is \({\mathbb {B}}= \{[(x,n), (y, n+1)\rangle : |x-y| =1\}\), where \([s, u\rangle \) means that an arrow (or bond) directed from site s to site u is present; see Fig. 1, p. 1001, in [15] for this and other representations of \({\mathscr {L}}\) in the plane. We consider independent bond percolation on \({\mathscr {L}}\) with open (or retaining) probability parameter \(p \in [0,1]\), defined as follows. We consider the configuration space \(\Omega = \{0,1\}^{{\mathbb {B}}} = \{ \omega : {\mathbb {B}}\rightarrow \{0,1\} \}\). We let \({\mathbb {P}}(= {\mathbb {P}}_{p})\) denote the joint distribution of \((\omega (b): b \in {\mathbb {B}})\), an ensemble of i.i.d. p-Bernoulli r.v.’s, that is, \({\mathbb {P}}(\omega (b)= 1) = 1- {\mathbb {P}}(\omega (b)= 0) = p\). We note that \({\mathbb {P}}\) yields a probability measure on \(\Omega \), equipped as usual with \({\mathcal {F}}\), the \(\sigma \)-field of subsets of \(\Omega \) generated by finite-dimensional cylinders. We may further let \({\mathscr {G}}= {\mathscr {G}}( {\mathbb {L}}, {\mathbb {B}}_{1})\), \({\mathbb {B}}_{1} = \{b : \omega (b) =1\}\), be the subgraph of \({\mathscr {L}}\) in which b is retained if and only if \(\omega (b) = 1\).

For given \(\omega \in \Omega \), bonds b such that \(\omega (b)= 1\) are thought of as open (or retained); whereas if b is assigned value \(\omega (b)= 0\), we consider b as closed (or removed), which may be thought of as flow being disallowed. If \(s_{m}, s_{n} \in {\mathbb {L}}\), \(s_{m}= (x_{m}, m)\), \(s_{n}= (x_{n},n)\), \(m \le n\), then, given \(\omega \in \Omega \), we write \( s_{m} \rightarrow s_{n}\) whenever there is a directed path from \( s_{m}\) to \(s_{n}\) in \({\mathscr {G}}(\omega )\), that is, there is \(s_{m+1} = (x_{m+1}, m+1), \dots , s_{n-1} = (x_{n-1}, n-1)\) such that \( \omega ([s_{k}, s_{k+1}\rangle ) = 1\) for all \(m \le k \le n-1\).

We let \(\xi _{n}^{\eta } = \{x: \left( y,0 \right) \rightarrow (x,n), \text{ for } \text{ some } y \in \eta \}\), \(\eta \subseteq 2{\mathbb {Z}}\). We note also that, when convenient, we shall use the coordinate-wise notation \(\xi _{n}^{\eta }(x) = 1(x \in \xi _{n}^{\eta })\), where we denote by 1(A) the indicator random variable of an event A. We note that \((\xi _{n}^{\eta }, \eta \subset 2{\mathbb {Z}})\) furthermore admits a Markovian definition, and hence, we may think of the vertices’ first and second coordinates as space and time, respectively. We also note that, by definition of \({\mathbb {L}}\), \(\xi _{n}^{\eta }\subset 2{\mathbb {Z}}\), for \(n \in 2{\mathbb {Z}}_{+}\), and \(\xi _{n}^{\eta }\subset 2{\mathbb {Z}}+1\), for \(n \in 2{\mathbb {Z}}_{+} +1\).

We will denote simply by \((\xi _{n})\) the process started from \(O = \{0\}\) and, in general, we will drop superscripts associated to the starting set in our notation when referring to \(O = \{0\}\).
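For concreteness, the following minimal Python sketch (ours, and not part of the formal development; the parameter p = 0.7, the horizon, and the seed are arbitrary illustrative choices) simulates the trajectory \((\xi _{0}, \dots , \xi _{n})\) started from a finite set by drawing, at each step, the two outgoing bond variables of every occupied site.

```python
import random

def step(xi, p, rng):
    """One time step of oriented bond percolation: each occupied site x opens
    its two outgoing bonds, to x - 1 and to x + 1, independently with prob. p."""
    nxt = set()
    for x in xi:
        if rng.random() < p:
            nxt.add(x - 1)
        if rng.random() < p:
            nxt.add(x + 1)
    return nxt

def simulate(eta, p, n, seed=0):
    """Return the trajectory (xi_0, ..., xi_n) started from the finite set eta."""
    rng = random.Random(seed)
    xi = set(eta)
    traj = [set(xi)]
    for _ in range(n):
        xi = step(xi, p, rng)
        traj.append(set(xi))
    return traj

if __name__ == "__main__":
    traj = simulate({0}, p=0.7, n=50, seed=1)
    for n in (10, 25, 50):
        xi = traj[n]
        print(n, len(xi), (min(xi), max(xi)) if xi else "died out")
```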

1.2 The Critical Value and the Upper-Invariant Measure

We state here some basic definitions and facts; for a more detailed exposition, see for instance [15, 29, 42]. We let \(\Omega _{\infty }^{\eta }\) be the percolation event for initial configuration \(\eta \), \(|\eta |< \infty \), that is, we let

$$\begin{aligned} \Omega _{\infty }^{\eta } := \cap _{n\ge 1} \Omega _{n}^{\eta } \qquad \text{ and } \qquad \Omega _{n}^{\eta } := \left\{ |\xi _{n}^{\eta }| \ge 1\right\} , \end{aligned}$$
(1)

where \(| \cdot |\) denotes cardinality; and we also note that \(\Omega _{n}^{\eta } \supseteq \Omega _{n+1}^{\eta }\), \({\mathbb {P}}\)-a.s.. Further, we let \(\rho = \rho (p)\) be the so-called asymptotic density, defined as follows

$$\begin{aligned} \rho (p) = {\mathbb {P}}(\Omega _{\infty }) = \lim _{n \rightarrow \infty } \rho _{n}, \quad \quad \rho _{n} := {\mathbb {P}}(\Omega _{n}), \end{aligned}$$
(2)

where the appellation derives, in view of (5), from the identity \(\rho (p) = {\bar{\nu }}(\eta : \eta \cap O \not = \emptyset )\), obtained by applying (6) below with \(B = O\). We let in addition \(p_{c}\) be the critical value, defined as follows

$$\begin{aligned} p_{c} = \inf \{ p: \rho (p) >0\}. \end{aligned}$$
(3)

We recall that it is elementary that \(p_{c} \in (0,1)\), and that \(p_{c}\) is well defined because \(\rho (p)\) is non-decreasing in p, which is an elementary consequence of the construction, by the superposition property of Bernoulli r.v.’s. We also note that the assumption \(p>p_{c}\) we require here may be replaced by the a priori weaker assumption that \(\rho (p)>0\), since the two assumptions are equivalent due to the fact that \(\rho (p_{c})=0\), shown for all dimensions in [5], see also [17], and also [4] for the extension of this result to general attractive spin systems.
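As an aside for readers who wish to experiment numerically (ours, and not part of the formal development; the horizon, the number of trials, and the values of p are illustrative, and the commonly quoted numerical estimate \(p_{c} \approx 0.645\) is an outside assumption rather than something derived here), the following Python sketch estimates \(\rho _{n}(p) = {\mathbb {P}}(\Omega _{n})\) by Monte Carlo for several values of p, illustrating the monotonicity of \(\rho (p)\) in p.

```python
import random

def survives(p, n, rng):
    """Return True on the event Omega_n: the process from the origin is
    non-empty after n steps of oriented bond percolation with parameter p."""
    xi = {0}
    for _ in range(n):
        nxt = set()
        for x in xi:
            if rng.random() < p:
                nxt.add(x - 1)
            if rng.random() < p:
                nxt.add(x + 1)
        xi = nxt
        if not xi:
            return False
    return True

def rho_n(p, n, trials, seed=0):
    """Monte Carlo estimate of rho_n(p) = P(Omega_n)."""
    rng = random.Random(seed)
    return sum(survives(p, n, rng) for _ in range(trials)) / trials

if __name__ == "__main__":
    n, trials = 100, 1000
    for p in (0.55, 0.60, 0.65, 0.70, 0.75, 0.80):
        print(f"p = {p:.2f}   rho_{n}(p) ~= {rho_n(p, n, trials):.3f}")
```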

Further, let

$$\begin{aligned} \Sigma _{0} =\{\eta \subset 2{\mathbb {Z}}: |\eta | < \infty \}, \qquad \Sigma =\{\eta \subset 2{\mathbb {Z}}: |\eta | = \infty \}. \end{aligned}$$
(4)

We recall also that, if \(\mu _{n}\) denotes the distribution of \(\xi _{2n}^{2{\mathbb {Z}}}\), then we have that

$$\begin{aligned} \mu _{n} \Rightarrow {\bar{\nu }}, \text{ as } n \rightarrow \infty , \end{aligned}$$
(5)

where \({\bar{\nu }}\) is the so-called upper-invariant measure, defined on \(\Sigma \) and uniquely determined by its values on cylinder events, which, in view of the so-called self-duality property (see, for example, (34) below), is such that

$$\begin{aligned} {\bar{\nu }}(\eta : \eta \cap B \not = \emptyset ) = {\mathbb {P}}\left( \xi _{n}^{B} \not = \emptyset , \text{ for } \text{ all } n \ge 1\right) , \end{aligned}$$
(6)

\(B \in \Sigma _{0}\), and where ‘\(\Rightarrow \)’ denotes weak convergence, which we define as convergence of the finite-dimensional distributions

$$\begin{aligned} {\mathbb {P}}\left( \xi _{2n}^{2{\mathbb {Z}}} \cap B = C\right) , \text{ for } C \subset B \in \Sigma _{0}, \end{aligned}$$

as \(n \rightarrow \infty \). To see the reason that we refer to \(\rho (p)\) as the asymptotic density, note that by (6) we have that \({\bar{\nu }}(\eta : \eta \cap O \not = \emptyset ) = \rho \). Further, we note that (5) is denoted below simply as follows,

$$\begin{aligned} \xi _{2n}^{2{\mathbb {Z}}} \Rightarrow {\bar{\xi }}, \quad \quad n \rightarrow \infty , \end{aligned}$$

where \({\bar{\xi }}\) is a random field distributed according to \({\bar{\nu }}\), denoted as \({\bar{\xi }} \sim {\bar{\nu }}\) below.

Finally, prior to turning to our main statement in the next section, we give some additional notation. The shorthands \({\mathcal {L}}(X_{n}) \xrightarrow {w} {\mathcal {N}}(0,\sigma ^{2})\), as \(n \rightarrow \infty \), \(n \in {\mathbb {N}}\), as well as \(X_{n} \xrightarrow {w} {\mathcal {N}}(0,\sigma ^{2})\), as \(n \rightarrow \infty \), will be in force in the sequel in order to denote weak convergence to a normal distribution with mean 0 and variance \(\sigma ^{2}\); that is, if we let \(F_{n}(x)\) be the cumulative distribution function associated with \(X_{n}\), we have that \(F_{n}(x) \rightarrow \int _{- \infty }^{x} (2\pi \sigma ^{2})^{- 1/2} e^{- u^{2}/(2\sigma ^{2})} \text{ d } u\), as \(n \rightarrow \infty \).

2 Results

2.1 The CLT

To state next our main result, we let \(p>p_{c}\) and we let \({\bar{\xi }} \sim {\bar{\nu }}\). We let also

$$\begin{aligned} \sigma ^{2} = \sum _{x \in 2{\mathbb {Z}}} \mathrm {Cov}\left( {\bar{\xi }}(x), {\bar{\xi }}(O)\right) < \infty . \end{aligned}$$
(7)

Furthermore, we let \({\bar{{\mathbb {P}}}}\) be the probability measure induced by the original \({\mathbb {P}}\) by conditioning on \(\Omega _{\infty }\), that is, \({\bar{{\mathbb {P}}}}(\cdot ) = {\mathbb {P}}(\cdot | \Omega _{\infty })\). We denote by \({\mathcal {L}}(X| \Omega _{\infty })\) the law of a r.v. X under \({\bar{{\mathbb {P}}}}\). In addition, we let \(r_{n} = \sup \xi _{n}\) and \(l_{n} = \inf \xi _{n}\), and we further let \(d_{n} = \frac{1}{2} (r_{n} - l_{n}) +1\). We recall also that \(\rho _{n} = {\mathbb {P}}(\Omega _{n})\).

Theorem 2.1

Let \(p > p_{c}\). We have that, as \(n \rightarrow \infty \),

$$\begin{aligned} {\mathcal {L}}\left( \frac{ | \xi _{n}| - d_{n} \rho _{n} }{\sigma \sqrt{d_{n}}} \,\, \Bigg \vert \,\, \Omega _{\infty } \right) \overset{w}{\longrightarrow } {\mathcal {N}}(0, 1). \end{aligned}$$

From a technical perspective, the main novelty in our proof approach is the consideration of the (non-stopping) time of the last intersection of the endpoint processes of the two processes started from infinite half-lines. To briefly elaborate on this coupling observation here (see the Lemmas subsequent to its definition in (44) for the exact statements), we let

$$\begin{aligned} r^{-}_{n} = \sup \xi _{n}^{2{\mathbb {Z}}_{-}} \text{ and } l_{n}^{+}= \inf \xi _{n}^{2{\mathbb {Z}}_{+}}, \end{aligned}$$

where \(2{\mathbb {Z}}_{-} = \{ \dots ,-2, 0\}\), and \(2{\mathbb {Z}}_{+} = \{ 0, 2, \dots \}\). We note that, as will be seen from the proof, the distribution of \(|\xi _{n}| = \sum _{x=l_{n}}^{r_{n}} \xi _{n}(x)\) conditioned on \(\Omega _{\infty }\) is equal to that of \(\sum _{x=l_{n}^{+}}^{r^{-}_{n}} \xi _{n}^{2 {\mathbb {Z}}}(x)\) for all n after this random time occurs, and therefore asymptotics for the latter process permit us to infer the same asymptotics for the former one. In this manner, we circumvent the effects of altering the distribution of \(\xi _{n}\) when conditioning on \(\Omega _{\infty }\), and thus we are able to deduce Theorem 2.1 by working on the whole probability space and dealing instead with partial sums of the infinite processes involved in \(\sum _{x=l_{n}^{+}}^{r^{-}_{n}} \xi _{n}^{2 {\mathbb {Z}}}(x)\). We further note that, in order to deal with the fact that this is not a stopping time, our proof relies on independence inherited from the independence of the underlying Bernoulli r.v.’s in disjoint parts of \({\mathscr {L}}\), an observation applied in a different context in [37]. We also note that this coupling is intrinsic to two dimensions, since it relies on path intersection properties, and hence we expect that new methods will be required for the extension of this result to higher dimensions. On the other hand, we believe, and pursue in forthcoming work [54], that the techniques we develop can be used to give a proof of the law of the iterated logarithm corresponding to Theorem 2.1. Furthermore, we note that the method of proof of Theorem 2.1 relies on Proposition 2.3, stated in Sect. 2.3 below, and also incorporates an earlier observation due to [15]. We note that a key ingredient for Proposition 2.3 to apply in the context of supercritical oriented percolation is Harris’ correlation inequality [34]; see also Theorem B.17 in [41] and the references therein. In addition, we note that our proof approach, and the techniques involved, differ from those devised in known CLT’s for percolation processes, due to the fact that we consider partial sums that are indexed randomly, depending on the state of the process itself.
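To make the preceding coupling discussion concrete, the following self-contained Python sketch (ours; the parameter p = 0.75, the time horizon, and the truncation of the half-line initial conditions to a finite window are illustrative assumptions) runs \(\xi _{n}\), \(\xi _{n}^{2{\mathbb {Z}}_{-}}\) and \(\xi _{n}^{2{\mathbb {Z}}_{+}}\) with the same bond variables, checks the identities \(r_{n} = r_{n}^{-}\) and \(l_{n} = l_{n}^{+}\) on \(\Omega _{n}\) (see (23) below), and records the last observed meeting time of \(r_{n}^{-}\) and \(l_{n}^{+}\), an empirical proxy for the last-intersection time defined in (44) below.

```python
import random

def step(occupied, bonds):
    """Advance one time step: each occupied site x uses its two bond variables
    (bonds[x] = (left_open, right_open)) to reach x - 1 and/or x + 1."""
    nxt = set()
    for x in occupied:
        left, right = bonds[x]
        if left:
            nxt.add(x - 1)
        if right:
            nxt.add(x + 1)
    return nxt

def coupled_run(p, nsteps, seed):
    """Run xi (from {0}), xi^- (from 2Z_-) and xi^+ (from 2Z_+) with the SAME
    bond variables; the half-lines are truncated to [-L, L], an approximation
    that is exact up to events of negligible probability for this window."""
    L = 2 * nsteps + 2
    bound = L + nsteps + 1
    rng = random.Random(seed)
    xi = {0}
    xi_minus = set(range(-L, 1, 2))      # truncated 2Z_-
    xi_plus = set(range(0, L + 1, 2))    # truncated 2Z_+
    last_meet = 0                        # last time observed with r^-_n = l^+_n
    for n in range(1, nsteps + 1):
        bonds = {x: (rng.random() < p, rng.random() < p)
                 for x in range(-bound, bound + 1)}
        xi = step(xi, bonds)
        xi_minus = step(xi_minus, bonds)
        xi_plus = step(xi_plus, bonds)
        r_minus, l_plus = max(xi_minus), min(xi_plus)
        if r_minus == l_plus:
            last_meet = n
        if xi:   # on Omega_n, (23) asserts r_n = r_n^- and l_n = l_n^+
            assert max(xi) == r_minus and min(xi) == l_plus
    return bool(xi), last_meet

if __name__ == "__main__":
    p, nsteps = 0.75, 120
    for seed in range(5):
        alive, last_meet = coupled_run(p, nsteps, seed)
        print(f"seed {seed}: survived to n={nsteps}: {alive}; "
              f"last meeting of r^- and l^+ observed at n={last_meet}")
```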

2.2 Random-Indices CLT’s

We recall here the following definition.

Association A collection of r.v.’s \((X_{i}: i \in I)\), with I possibly infinite, is associated if for all finite sub-collections \(X_{1}, \dots , X_{m}\) and all coordinate-wise non-decreasing \(f_{1}, f_{2}: {\mathbb {R}}^{m} \rightarrow {\mathbb {R}}\) we have that \(\mathrm {Cov}({\tilde{f}}_{1}, {\tilde{f}}_{2}) \ge 0\), \({\tilde{f}}_{j} := f_{j}(X_{1}, \dots , X_{m})\), \(j=1,2\), whenever this covariance exists.

Our proof approach, combined with known results, provides certain random-index central limit theorems for associated triangular arrays of r.v.’s, which we obtain as direct byproducts. One important aspect of these statements is that nothing is assumed regarding independence between the summands and the index family of r.v.’s. Corollary 2.2, stated next, is in particular a random-index extension of the CLT in Theorem 1 of [13], under the additional proviso (11) below. Furthermore, we note that we may obtain, in a directly analogous manner which is thus omitted, the corresponding random-index extensions of Theorem 3 in [7], or Theorem 3 in [44], under the said additional proviso.

Corollary 2.2

Let \(\{X_{n}(j): 0 \le j \le n\}\) be such that \({\mathbb {E}}(X_{n}(j))= 0\), \(\forall \, n, j\), and that, for each n,

$$\begin{aligned} \{X_{n}(j)\} \text{ are } \text{ associated }. \end{aligned}$$
(8)

Suppose also that

$$\begin{aligned} \inf _{j, n} \mathrm {Var}(X_{n}(j))>0 \qquad \text{ and } \qquad \sup _{j, n}{\mathbb {E}}(|X_{n}(j)|^{3}) < \infty . \end{aligned}$$
(9)

Furthermore, suppose that \(u(r) = \sup _{j, n}\sum _{|k - j| \ge r} \mathrm {Cov}(X_{n}(j), X_{n}(k))\), \(r \ge 0\), is such that

$$\begin{aligned} u(r) < \infty , \text{ for } \text{ all } r, \text{ and } \text{ that } u(r) \rightarrow 0, \text{ as } r \rightarrow \infty . \end{aligned}$$
(10)

Let \(S_{n}(i) = \sum _{j=0}^{i} X_{n}(j)\), and assume in addition that

$$\begin{aligned} \sup _{j, n}\mathrm {Cov}\left( X_{n}(j), S_{n}(j-1) \right) <\infty . \end{aligned}$$
(11)

Let \((N_{n}, n \in {\mathbb {N}})\) be integer-valued and positive r.v.’s, such that

$$\begin{aligned} \frac{N_{n}}{n} \xrightarrow {w} \theta , \text{ as } n \rightarrow \infty , \end{aligned}$$
(12)

for some \(0<\theta \le 1\).

We then have that

$$\begin{aligned} \frac{S_{n}(N_{n})}{\sqrt{N_{n}}} \xrightarrow {w} {\mathcal {N}}(0,\sigma ^{2}), \text{ as } n \rightarrow \infty \end{aligned}$$

and also that

$$\begin{aligned} \frac{S_{n}(N_{n})}{\sqrt{ \theta n}} \xrightarrow {w} {\mathcal {N}}(0,\sigma ^{2}), \text{ as } n \rightarrow \infty , \end{aligned}$$

where \(\sigma ^{2}:= \lim _{n \rightarrow \infty } \text{ Var } \left( S_{n}([ \theta n]) / \sqrt{[\theta n]}\right) \), \(0<\sigma ^{2}<\infty \).

In addition to Theorem 1 in [13], the proof of Corollary 2.2 relies on an application of Lemma 4.2, which we derive on the way to the proof of Proposition 2.3 below.
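As an informal illustration of Corollary 2.2 (the array, the scales, the random index, and all parameters below are our own illustrative assumptions, not part of the statement), the following Python sketch uses a simple associated, non-stationary triangular array, namely positively weighted moving averages of i.i.d. Gaussian r.v.’s, together with a random index \(N_{n}\) that depends on the summands themselves, and compares the random-index and deterministic-index normalized sums.

```python
import numpy as np

rng = np.random.default_rng(0)

def row(n):
    """One row X_n(0..n) of an associated, non-stationary triangular array:
    positively weighted moving averages of i.i.d. Gaussians are non-decreasing
    functions of independent r.v.'s and hence associated; the varying scales
    a_j make the row non-stationary while keeping (9)-(11) in force."""
    z = rng.standard_normal(n + 2)
    a = 1.0 + 0.5 * np.sin(np.arange(n + 1))
    return a * (z[:-1] + z[1:]) / np.sqrt(2.0)

theta, n, reps = 0.8, 4000, 2000
vals_rand, vals_det = [], []
for _ in range(reps):
    x = row(n)
    s = np.cumsum(x)                                  # s[i] = S_n(i)
    # a random index N_n that depends on the summands themselves
    N = int(theta * n) + int(np.sum(x[: int(np.sqrt(n))] > 0))
    vals_rand.append(s[N] / np.sqrt(N))
    vals_det.append(s[int(theta * n)] / np.sqrt(int(theta * n)))

vals_rand, vals_det = np.array(vals_rand), np.array(vals_det)
print("variance with deterministic index (~ sigma^2):", round(vals_det.var(), 3))
print("variance with random index                   :", round(vals_rand.var(), 3))
print("mean with random index                       :", round(vals_rand.mean(), 3))
```

The two reported variances should be close to each other (and to \(\sigma ^{2}\)), while the random-index sample mean should be close to 0, in line with the conclusion of Corollary 2.2.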

2.3 Anscombe’s Condition

Proposition 2.3 next regards a condition on the deviations of randomly-indexed partial sums from deterministically-indexed ones. This condition was shown in the i.i.d. case in [3]. The validity of this condition in the generality of Proposition 2.3 had not been anticipated; see Remark 3.3. To state it, we write \(X_{n} \xrightarrow {p} X\), as \(n \rightarrow \infty \), to denote convergence in probability. Further, we let \(\{X_{t}(j): (j, t) \in {\mathbb {L}}\}\) and let \(S_{t}(u, v) = \sum _{j =u}^{v} X_{t}(j)\). We introduce the following assumptions, which we will invoke there.

$$\begin{aligned} {\mathbb {E}}(X_{t}(j))= 0, \text{ for } \text{ all } (j, t) \in {\mathbb {L}}, \end{aligned}$$
(13)
$$\begin{aligned} \{X_{t}(j)\} \text{ is } \text{ associated } \text{ for } \text{ each } t, \end{aligned}$$
(14)
$$\begin{aligned} \sup _{j, t}{\mathbb {E}}(X_{t}(j)^{2})< \infty ; \end{aligned}$$
(15)

furthermore, let \(S_{t}^{+}(v) = \sum _{j = 0}^{v} X_{t}(j)\), \(S_{t}^{-}(u) = \sum _{j = 0}^{u} X'_{t}(j)\), \(X'_{t}(j) = X_{t}(-j-1)\), \(u, v \ge 0\), \(j\ge 0\) and assume that

$$\begin{aligned} C^{+} := \sup _{j, t \ge 0}\mathrm {Cov}\left( X_{t}(j), S_{t}^{+}(j-1) \right)<\infty \qquad \text{ and } \qquad C^{-} := \sup _{j, t \ge 0}\mathrm {Cov}\left( X_{t}'(j), S_{t}^{-}(j-1) \right) <\infty . \end{aligned}$$
(16)

We further let \((M_{t}: t\ge 0)\) and \((m_{t}: t\ge 0)\) be such that \((M_{t}, t) \in {\mathbb {L}}\) and that \((m_{t}, t) \in {\mathbb {L}}\); we assume that, for some \(0<\theta <\infty \),

$$\begin{aligned} \frac{M_{t}}{t} \xrightarrow {w} \theta \qquad \text{ and } \qquad \frac{m_{t}}{t} \xrightarrow {w} - \theta , \text{ as } t \rightarrow \infty . \end{aligned}$$
(17)

Proposition 2.3

We let \(\{X_{t}(j): (j, t) \in {\mathbb {L}}\}\) and let \(S_{t}(u, v) = \sum _{j =u}^{v} X_{t}(j)\). Let us assume that conditions (13), (14), (15), and (16) are fulfilled. We let \((M_{t}: t\ge 0)\) and \((m_{t}: t\ge 0)\) be such that \((M_{t}, t) \in {\mathbb {L}}\) and that \((m_{t}, t) \in {\mathbb {L}}\) and, further, assume that (17) is fulfilled. We then have that

$$\begin{aligned} \frac{S_{t}(m_t, M_{t}) - S_{t}(-\theta t, \theta t)}{ \sqrt{ \theta t} } \xrightarrow {p} 0, \text{ as } t \rightarrow \infty . \end{aligned}$$
(18)

Here, and in the above statement in particular, we write \(\sum _{x = -c}^{C}\) for \(\sum _{x = -[c] -1 }^{ [C] }\), where \([\cdot ]\) denotes the largest integer smaller than the argument, and we also use the notational convention \(\sum _{0}^{-1} := 0\).
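As an informal numerical illustration of (18) (ours; the associated field, the random endpoints, and all parameters are illustrative assumptions, and the parity constraint of \({\mathbb {L}}\) is ignored for simplicity), the following Python sketch uses a positively weighted moving average of i.i.d. Gaussians as an associated centered field, endpoints \(M_{t}\) and \(m_{t}\) with fluctuations of order \(\sqrt{t}\) driven by the field itself, and reports the root mean square of the left-hand side of (18), which is seen to decrease as t grows.

```python
import numpy as np

rng = np.random.default_rng(1)

def field(length):
    """Associated centered field: moving average of i.i.d. Gaussians with
    positive weights (non-decreasing functions of independent r.v.'s)."""
    z = rng.standard_normal(length + 1)
    return (z[:-1] + z[1:]) / np.sqrt(2.0)

theta = 1.0
for t in (1_000, 10_000, 100_000):
    lo, hi = -2 * t, 2 * t
    devs = []
    for _ in range(200):
        x = field(hi - lo + 1)          # x[k] plays the role of X_t(lo + k)
        c = np.cumsum(x)

        def S(u, v):                    # partial sum over indices u..v
            a = u - lo
            return c[v - lo] - (c[a - 1] if a > 0 else 0.0)

        # random endpoints with sqrt(t)-size fluctuations driven by the field
        M = int(theta * t + np.sqrt(t) * x[0 - lo])
        m = -int(theta * t + np.sqrt(t) * x[1 - lo])
        devs.append((S(m, M) - S(-int(theta * t), int(theta * t)))
                    / np.sqrt(theta * t))
    print(f"t = {t:>7}:  RMS of the deviation in (18) ~= "
          f"{np.sqrt(np.mean(np.square(devs))):.3f}")
```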

The method of proof of Proposition 2.3 extends the direct proof approach due to [51] for showing the Anscombe condition in the case of i.i.d. summands. We note that our proof invokes the so-called Hajek–Rényi inequality for associated r.v.’s, due to [11]. The proof of Theorem 2.1 given here relies on Proposition 2.3 and thus, follows an elementary approach, see also Remark 3.12 for a different approach. The random-index CLT’s in Sect. 2.2, as we noted above, are in addition consequences of Proposition 2.3, which we find of independent interest.

Outline of Proofs The remainder of this paper is organized as follows. The proof of Theorem 2.1, by means of applying Proposition 2.3, is given in Sect. 3. Preliminaries we will invoke in this proof are stated separately first, in Sect. 3.1, whereas another proof of Proposition 3.2, stated there, is provided for completeness in the Appendix. In Sect. 4, the proof of Proposition 2.3 is provided, see Sect. 4.1. That of Corollary 2.2 is also given there, in Sect. 4.2.

3 Theorem 2.1

3.1 Preliminaries

We briefly state certain facts on oriented percolation that we use later on.

Some Notation We introduce the following definitions, which will also be useful for simplifying notation below. Recall that we let \(r^{-}_{n} = \sup \xi _{n}^{2{\mathbb {Z}}_{-}}\) and \(l_{n}^{+}= \inf \xi _{n}^{2{\mathbb {Z}}_{+}}\), where \(2{\mathbb {Z}}_{-} = \{ \dots ,-2, 0\}\), and \(2{\mathbb {Z}}_{+} = \{ 0, 2, \dots \}\). We let

$$\begin{aligned} {\mathcal {I}}_{n} = \{x: l_{n} \le x \le r_{n}, (x,n) \in {\mathbb {L}}\} \end{aligned}$$
(19)

if \(l_{n} \le r_{n}\), and \({\mathcal {I}}_{n} = O\), otherwise. Similarly, we let

$$\begin{aligned} {\mathcal {J}}_{n} = \left\{ x: l_{n}^{+} \le x \le r_{n}^{-}, (x,n) \in {\mathbb {L}}\right\} \end{aligned}$$
(20)

if \(l_{n}^{+} \le r_{n}^{-}\), and \({\mathcal {J}}_{n} = O\), otherwise. To see our motivation for considering \({\mathcal {J}}_{n}\), and \({\mathcal {I}}_{n}\) analogously, note that

$$\begin{aligned} |{\mathcal {J}}_{n}| = \frac{r_{n}^{-} - l^{+}_{n}}{2} + 1, \,\, \text{ on } \,\, \{l^{+}_{n} \le r_{n}^{-}\}, \end{aligned}$$

and \(|{\mathcal {J}}_{n}| = 1\), otherwise. We define in addition the family of centered r.v.’s \(({\hat{\xi }}_{n}^{2{\mathbb {Z}}}(x): x \in 2{\mathbb {Z}})\), which will play a central rôle in our analysis below, as follows. We let

$$\begin{aligned} {\hat{\xi }}_{n}^{2{\mathbb {Z}}}(x) = \xi _{n}^{2{\mathbb {Z}}}(x) - \rho _{n}, \text{ for } \text{ all } n \ge 1, \end{aligned}$$
(21)

where \(\rho _{n} = {\mathbb {P}}(\Omega _{n})\) is defined in (2). We note that \(({\hat{\xi }}_{n}^{2{\mathbb {Z}}}(x): x \in 2{\mathbb {Z}})\) are zero-mean, since by (34) below, \({\mathbb {E}}(\xi _{n}^{2{\mathbb {Z}}}(x)) = \rho _{n}\).

The Basic Coupling We state an important observation due to [14], which comprises the following consequence of path intersection properties. We have that

$$\begin{aligned} \xi _{n} = \xi _{n}^{2 {\mathbb {Z}}} \cap [l_{n}, r_{n}] = \xi _{n}^{2 {\mathbb {Z}}} \cap \left[ l_{n}^{+}, r_{n}^{-}\right] \quad \quad \text{ on } \Omega _{n}, \end{aligned}$$
(22)

\({\mathbb {P}}\text{-a.s. }\) and, in particular,

$$\begin{aligned} r_{n} = r_{n}^{-} \qquad \text{ and } \qquad l_{n} = l_{n}^{+}, \quad \quad \text{ on } \Omega _{n}, \end{aligned}$$
(23)

\({\mathbb {P}}\text{-a.s. }\) and further, (22) gives that

$$\begin{aligned} |\xi _{n}| = \sum _{x \in {\mathcal {J}}_{n}} \xi _{n}^{2{\mathbb {Z}}}(x), \,\, \quad \quad \text{ on } \Omega _{n} \end{aligned}$$
(24)

\({\mathbb {P}}\text{-a.s. }\). Furthermore, since \(\Omega _{n} = \{r_{k} \ge l_{k}, \forall k \le n\}\), we also have that

$$\begin{aligned} \Omega _{n} = \left\{ r_{k}^{-} \ge l_{k}^{+}, \forall k \le n \right\} . \end{aligned}$$
(25)

Further, we note that (25) is in fact a special case of the following statement, regarding general initial configurations. Let \(\eta ^{-}\) and \(\eta ^{+}\) be such that \(\eta ^{-}(O) =\eta ^{+}(O) =1\), and also \(\eta ^{-}(x)=0\), for \( x\ge 2\), whereas, \(\eta ^{+}(x) = 0\), \(x \le -2\), and otherwise arbitrary. Letting \(r_{n}^{\eta ^{-}} = \sup \{x: \xi _{n}^{\eta ^{-}}(x) =1 \}\) and \(l_{n}^{\eta ^{+}}= \inf \{x: \xi _{n}^{\eta ^{+}}(x) = 1\}\), we have that, for all \(n \ge 1\), on \(\Omega _{n}\), \(r_{n} = r^{\eta ^{-}}_{n}\) and \(l_{n} = l_{n}^{\eta ^{+}}\) and, further that

$$\begin{aligned} \Omega _{n} = \left\{ r_{m}^{\eta ^{-}} \ge l_{m}^{\eta ^{+}}, \text{ for } \text{ all } m \le n\right\} . \end{aligned}$$
(26)

For proofs of these statements, see for instance, \(\S \)3, [15], see also [14, 27].

The Asymptotic Velocity For all \(p > p_{c}\), there is \(\alpha = \alpha (p)>0\), such that

$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{r^{-}_{n}}{n} = \alpha \quad \text{ and } \qquad \lim _{n \rightarrow \infty }\frac{l^{+}_{n}}{n} = -\alpha , \end{aligned}$$
(27)

\({\mathbb {P}}\)-a.s. Further, we have that (27), together with (23), yields that

$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{r_{n}}{n} = \lim _{n \rightarrow \infty } \frac{-l_{n}}{n} = \alpha , \,\, {\bar{{\mathbb {P}}}} \text{ a.s. }, \end{aligned}$$
(28)

where we refer to \(\alpha := \alpha (p)\) as the asymptotic velocity. For a proof of (27) we refer to Theorem 1.4 in [14], and also (7) in \(\S \) 3 in [15].

The SLLN Let \(p > p_{c}\). Let \(\rho \) and \(\alpha \) be the asymptotic density and the asymptotic velocity, as defined in (2) and in (27), respectively. We have that

$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{|\xi _{n}| }{n} = \alpha \rho , \, {\bar{{\mathbb {P}}}} \text{-a.s. }, \end{aligned}$$
(29)

For a proof of (29) we refer to Theorem 9 in [21], see also (2) in \(\S \) 13 in [15]. We mention here that from (22) and (28), since \(|{\mathcal {I}}_{n}| = \frac{r_{n}-l_{n}}{2}+1\), we have that, as \(n \rightarrow \infty \), \(\displaystyle {\frac{|{\mathcal {I}}_{n}|}{n} \rightarrow \alpha ,\, {\bar{{\mathbb {P}}}} \text{ a.s. }}\). Thus, we have that (29) yields that, as \(n \rightarrow \infty \), \(\displaystyle {\frac{\sum _{x \in {\mathcal {I}}_{n}} \xi _{n}(x)}{|{\mathcal {I}}_{n}|} \rightarrow \rho , \quad {\bar{{\mathbb {P}}}} \text{ a.s. }}\).
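As a quick numerical sanity check of (27)–(29) (ours; the parameter p = 0.75, the horizon n, and the number of trials are illustrative, and conditioning on \(\Omega _{\infty }\) is approximated by conditioning on survival up to time n), the following Python sketch estimates \(\alpha \) and \(\rho \) and compares \(|\xi _{n}|/n\) with \(\alpha \rho \), and \(|\xi _{n}|/|{\mathcal {I}}_{n}|\) with \(\rho \).

```python
import random
from statistics import mean

def run(p, n, rng):
    """One run of xi (started from the origin) for n steps; returns the final
    set, which is empty if the process has died out by time n."""
    xi = {0}
    for _ in range(n):
        nxt = set()
        for x in xi:
            if rng.random() < p:
                nxt.add(x - 1)
            if rng.random() < p:
                nxt.add(x + 1)
        xi = nxt
        if not xi:
            break
    return xi

if __name__ == "__main__":
    p, n, trials = 0.75, 300, 300
    rng = random.Random(3)
    surv = [xi for xi in (run(p, n, rng) for _ in range(trials)) if xi]
    rho_hat = len(surv) / trials                   # estimate of rho_n ~ rho
    alpha_hat = mean(max(xi) / n for xi in surv)   # r_n / n on surviving runs
    dens = mean(len(xi) / ((max(xi) - min(xi)) / 2 + 1) for xi in surv)
    growth = mean(len(xi) / n for xi in surv)
    print(f"rho_hat ~= {rho_hat:.3f},  alpha_hat ~= {alpha_hat:.3f}")
    print(f"mean |xi_n|/|I_n| ~= {dens:.3f}   (compare rho_hat)")
    print(f"mean |xi_n|/n     ~= {growth:.3f}   (compare alpha_hat * rho_hat)")
```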

Large Deviations Let \(p > p_{c}\) and let \(a< \alpha (p)\). Then, the following limit exists and is strictly negative,

$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{1}{n} \log {\mathbb {P}}\left( r_{n}^{-} < an\right) . \end{aligned}$$
(30)

We require in addition below, the following known elementary consequence of (30). There are \(C, \gamma \in (0, \infty )\), such that

$$\begin{aligned} {\mathbb {P}}\left( \exists m\ge n: r^{-}_{m} < 0\right) \le Ce^{- \gamma n}, \end{aligned}$$
(31)

\(n \ge 1\), see, for instance [15], p. 1031, \(\S \) 12, first display in the proof of (1).

Monotonicity and Self-Duality An immediate consequence of the construction is that

$$\begin{aligned} A \subseteq B \quad \Longrightarrow \quad \xi _{n}^{A} \subseteq \xi _{n}^{B}, \end{aligned}$$
(32)

\({\mathbb {P}}\)-a.s. Further, we have that, for all n even,

$$\begin{aligned} {\mathbb {P}}(\xi _{n}^{A} \cap B \not = \emptyset ) = {\mathbb {P}}\left( \xi _{n}^{B} \cap A \not = \emptyset \right) , \end{aligned}$$
(33)

\(A, B \subset 2{\mathbb {Z}}\), and analogously for n odd. The proof of (33) (see \(\S \) 8, (2), p. 1021 in [15]) relies on the observation that, after reversing the direction of all arrows in any realization, the law of the process started from (B, 2n), defined analogously by these new paths and going backwards in time, is the same as that of (B, 0); and, moreover, that a path connecting (A, 0) to (B, 2n) exists in the original sample point if and only if there is a backwards-in-time path connecting (B, 2n) to (A, 0) in the corresponding sample point. By an application of (33) and by the definition of the upper invariant measure \({\bar{\nu }}\), see (5), we note that,

$$\begin{aligned} {\mathbb {P}}\left( \xi _{n}^{A} \not = \emptyset , \text{ for } \text{ all } n \ge 1 \right)= & {} \lim _{n \rightarrow \infty } {\mathbb {P}}\left( \xi _{2n}^{2{\mathbb {Z}}} \cap A \not = \emptyset \right) \nonumber \\= & {} {\bar{\nu }}( \eta : \eta \cap A \not = \emptyset ), \end{aligned}$$
(34)

\(A \in \Sigma _{0}\). Furthermore, (34), together with the definition of \(\rho \) in (2), gives that

$$\begin{aligned} \rho = {\bar{\nu }}( \eta : \eta \cap O \not = \emptyset ) = \lim _{n \rightarrow \infty } {\mathbb {E}}\left( \xi _{2n}^{2 {\mathbb {Z}}}(O)\right) . \end{aligned}$$
(35)
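As an informal illustration of the self-duality relation (33) (ours; the sets A and B, the even time n, and the parameter p are arbitrary illustrative choices), the following Python sketch estimates both sides of (33) by independent Monte Carlo runs; the two estimates should agree up to Monte Carlo error.

```python
import random

def hits(init, target, p, n, rng):
    """Return True if xi_n^{init} intersects the set target after n steps."""
    xi = set(init)
    for _ in range(n):
        nxt = set()
        for x in xi:
            if rng.random() < p:
                nxt.add(x - 1)
            if rng.random() < p:
                nxt.add(x + 1)
        xi = nxt
        if not xi:
            return False
    return bool(xi & set(target))

if __name__ == "__main__":
    p, n, trials = 0.7, 20, 10000
    A, B = {0}, {2, 6}          # finite subsets of 2Z, n even
    rng = random.Random(7)
    lhs = sum(hits(A, B, p, n, rng) for _ in range(trials)) / trials
    rhs = sum(hits(B, A, p, n, rng) for _ in range(trials)) / trials
    print(f"P(xi_n^A cap B != empty) ~= {lhs:.3f}   "
          f"P(xi_n^B cap A != empty) ~= {rhs:.3f}")
```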

CLT for the Upper Invariant Measure: Decay of Correlations Whenever \(p > p_{c}\), the upper invariant measure \({\bar{\nu }}\) possesses positive, and exponentially decaying, correlations. That is, if \({\bar{\xi }} \sim {\bar{\nu }}\), we have that there are \(C, \gamma \in (0, \infty )\), such that

$$\begin{aligned} 0 \le \mathrm {Cov}({\bar{\xi }}(0), {\bar{\xi }}(x)) \le C e^{- \gamma |x|}, \end{aligned}$$
(36)

\(x \in 2{\mathbb {Z}}\). As pointed out in [21], p. 2, see also the final Remark in \(\S \) 6 in [27], property (36) implies the following Lemma by general results for random fields, see for instance, Theorem 12 in [46], or the list of references before the statement of Proposition 4.18 in Chpt. I, [39].

Lemma 3.1

\(\displaystyle {{\mathcal {L}}\left( \frac{\sum _{x = -\alpha n}^{\alpha n} ({\bar{\xi }}(x) - \rho )}{\sqrt{\alpha n}}\right) \xrightarrow {w} {\mathcal {N}}(0, \sigma ^{2})}\), as \(n \rightarrow \infty \), \(\sigma ^{2} < \infty \),

where, because \({\bar{\xi }}\) is strictly stationary (translation invariant), we have that \(\sigma ^{2} = \mathrm {Var}({\bar{\xi }}(0)) + 2 \sum _{x \ge 2} \mathrm {Cov}({\bar{\xi }}(0), {\bar{\xi }}(x))\) and \(\mathrm {Var}({\bar{\xi }}(0)) = \rho - \rho ^{2}\), so that \(\sigma ^{2} < \infty \), by (36).

Furthermore, the following property, stronger than (36), is shown in [21]. To state it, consider \(({\hat{\xi }}_{n}^{2{\mathbb {Z}}}(x): x \in 2{\mathbb {Z}})\), as defined in (21). We have that, for all \(p >p_{c}\), there are \(C, \gamma \in (0, \infty )\), such that, for any n and any \((x_{i} \in 2{\mathbb {Z}}: i =1, \dots , k)\), \(k < \infty \), with \(|x_{i}- x_{j}| > 2m\) for all \(i \ne j\), we have that

$$\begin{aligned} \left| {\mathbb {E}}\left( \prod _{i = 1}^{k} {\hat{\xi }}_{n}^{2{\mathbb {Z}}}(x_{i})\right) \right| \le Ce^{-\gamma m}, \end{aligned}$$
(37)

and we refer to Theorem 8 in [21], see also (1), p. 1033, [15], for a proof of (37). We finally mention two other routes to derive Lemma 3.1. One of them is provided in the discussion prior to Theorem 3.23 in Chpt. VI, [39]. This approach relies on deriving, by means of (30), that the convergence in (5) occurs exponentially fast, which then implies, as shown there by general results, that \({\bar{\nu }}\) has exponentially decaying correlations, from which the conclusion follows as noted above. The other route is provided in \(\S \) 6 of [27], where Lemma 3.1 is derived under the condition that there exist \(C, \gamma \in (0, \infty )\) such that \({\mathbb {P}}({\bar{\Omega }}_{\infty }^{\{0, \dots , 2n\}}) \le C e ^{-\gamma n}\), for all \(n \ge 1\); this condition is shown there to be valid for sufficiently large values of p, and was later shown to hold for all \(p> p_{c}\) in [21]. Here \({\bar{E}}\) denotes the complement of an event E.

CLT for the Infinite Process in a Cone We state here an observation pointed out in [15], see \(\S \) 13, (4); see also p. 286 in [16]. Property (37), together with the corresponding extension of Theorem 3 in [44] to triangular arrays, yields that \(\xi _{n}^{2 {\mathbb {Z}}}\) obeys classic CLT behavior. To state this explicitly, recall the definition of \({\hat{\xi }}_{n}^{2{\mathbb {Z}}}(x) = \xi _{n}^{2{\mathbb {Z}}}(x) - \rho _{n}\) in (21). We let \(p >p_{c}\), let \(\alpha >0\) be the associated asymptotic velocity, let \({\bar{\xi }} \sim {\bar{\nu }}\), and let \(\sigma ^{2}:= \sum _{x \in 2{\mathbb {Z}}} \mathrm {Cov}({\bar{\xi }}(O), {\bar{\xi }}(x))\), as in (7).

Proposition 3.2

\(\displaystyle {{\mathcal {L}}\left( \frac{\sum _{x = - \alpha n}^{\alpha n} {\hat{\xi }}_{n}^{2{\mathbb {Z}}}(x)}{\sqrt{\alpha n}}\right) \xrightarrow {w} {\mathcal {N}}(0, \sigma ^{2}), \text{ as } n \rightarrow \infty ,}\) and \(\sigma ^{2} < \infty \).

Remark 3.3

The following heuristic, suggested by properties of the basic coupling above, is stated in (5) of \(\S \) 13 in [15]:

$$\begin{aligned} \left| \xi _{n}^{2{\mathbb {Z}}} \cap \left[ l_{n}^{+}, r_{n}^{-}\right] \right| - \left| \xi _{n}^{2{\mathbb {Z}}} \cap [-\alpha n, \alpha n]\right| \approx \rho _{n}\frac{r_{n}^{-}-\alpha n}{2} + \rho _{n}\frac{-\alpha n -l^{+}_{n}}{2}. \end{aligned}$$

Note that, as expected there, and proved later in [25] and in [37], each term of the RHS, when diffusively normalized, converges in distribution to a normal law, which then suggests that the diffusively rescaled fluctuations of the RHS do not converge in distribution to zero; see also the form of the variance conjectured in the latter reference for Theorem 2.1. We note that Proposition 2.3 will allow us to show that, when normalized, the LHS converges in law to zero.

An Exponential Estimate We require below the following known estimate. Recall that \(\rho _{n} ={\mathbb {P}}(\Omega _{n})\) and \(\rho = {\mathbb {P}}(\Omega _{\infty })\). Let \(p > p_{c}\). There are \(C, \gamma \in (0, \infty )\), such that

$$\begin{aligned} |\rho _{n} - \rho | \le C e^{- \gamma n}, \end{aligned}$$
(38)

\(n \ge 1\). To see that (38) follows from known facts note that, if \(p > p_{c}\), then there are \(C, \gamma \in (0, \infty )\), such that

$$\begin{aligned} {\mathbb {P}}(\Omega _{n} \cap {\bar{\Omega }}_{\infty }) \le C e^{- \gamma n}, \end{aligned}$$

\(n \ge 1\), where for a proof of the above display, see [21], see also [15, (1), p. 1031]. By the law of total probability, and because \(\Omega _{n} \supseteq \Omega _{\infty }\), we have that

$$\begin{aligned} {\mathbb {P}}(\Omega _{n}) - {\mathbb {P}}(\Omega _{\infty }) = {\mathbb {P}}(\Omega _{n} \cap {\bar{\Omega }}_{\infty }). \end{aligned}$$

Combining the two displays above, and noting that by definition, if \(m \le n\), then \(\Omega _{n} \subseteq \Omega _{m}\), and therefore \({\mathbb {P}}(\Omega _{n}) - {\mathbb {P}}(\Omega _{\infty }) \ge 0\), we arrive at (38).

Elementary Facts We give next certain elementary probability statements. To this end, we let \((X_{n}: n\ge 0)\) and \((Y_{n}: n \ge 0)\) be collections of r.v.’s. For a proof of Lemma 3.4, stated next, see for instance 5.11.4 in [31]. Lemma 3.5 regards the basic fact that almost sure convergence is stronger than convergence in distribution; see for instance Theorem 5.3.1 in [31]. Finally, Lemma 3.6 follows by noting that, as \(n \rightarrow \infty \), \(X_{n}(\omega ) - X_{k + n}(\omega ) \rightarrow 0\), \(\forall \omega \in \{\tau = k\}\) and any \(k \ge 0\), and then considering the partition \(\cup _{k \ge 0} \{\tau = k\}\), to conclude that \(X_{n} - X_{\tau + n} \rightarrow 0\) a.s.

Lemma 3.4

We have that

$$\begin{aligned} {\mathcal {L}}(X_{n}) \xrightarrow {w} X \text{ and } {\mathcal {L}}(X_{n} - Y_{n}) \xrightarrow {w} 0 \Longrightarrow {\mathcal {L}}(Y_{n}) \xrightarrow {w} X, \end{aligned}$$
(39)

as \(n \rightarrow \infty \). Furthermore,

$$\begin{aligned} {\mathcal {L}}(X_{n}) \xrightarrow {w} \gamma \text{ and } {\mathcal {L}}(Y_{n}) \xrightarrow {w} Y \Longrightarrow {\mathcal {L}}\left( \frac{Y_{n}}{X_{n}}\right) \xrightarrow {w} \frac{Y}{\gamma }, \end{aligned}$$
(40)

\(\gamma \in {\mathbb {R}}\backslash \{0\}\), as \(n \rightarrow \infty \).

Lemma 3.5

We have that, as \(n \rightarrow \infty \),

$$\begin{aligned} X_{n} \rightarrow X \text{ a.s. } \Longrightarrow {\mathcal {L}}(X_{n}) \xrightarrow {w} X. \end{aligned}$$
(41)

Lemma 3.6

Let \((X_{n})\) be such that, for each fixed \(k \ge 0\), \(X_{n+k} - X_{n} \rightarrow 0\) a.s., as \(n \rightarrow \infty \), and let \(\tau < \infty \) a.s., but otherwise arbitrary. We have that \(X_{\tau + n} - X_{n} \rightarrow 0\) a.s..

3.2 Proof of Theorem 2.1

The contents of this section comprise primarily the proof of Theorem 2.1 and may be outlined as follows. The proof is given first, relying on Propositions 3.7 and 3.9, which we state within it and prove immediately afterward, in the order stated. Proofs of various auxiliary statements, denominated Lemmas, required along the way are further postponed, in order not to interrupt the course of the argument, to the end of this section. In Remark 3.12 we briefly discuss the modifications required to obtain the invariance principles (IP’s) associated to Theorem 2.1.

Certain remarks regarding alternative approaches are provided at the end of these proofs.

Proof of Theorem  2.1

Note that, since \({\mathcal {I}}_{n} = \{x: l_{n} \le x \le r_{n}, (x,n) \in {\mathbb {L}}\}\), the result amounts to the statement

$$\begin{aligned} {\mathcal {L}} \left. \left( \frac{\sum _{x \in {\mathcal {I}}_{n}} \xi _{n}(x) - |{\mathcal {I}}_{n}| \rho _{n}}{ \sqrt{|{\mathcal {I}}_{n}|}} \right| \Omega _{\infty }\right) \xrightarrow {w} {\mathcal {N}}(0, \sigma ^{2}), \text{ as } n \rightarrow \infty . \end{aligned}$$

Taking into account also that \(\sum _{x \in {\mathcal {I}}_{n}}( \xi _{n}(x) - \rho _{n}) = |\xi _{n}| - | {\mathcal {I}}_{n}|\rho _{n}, \text{ on } \Omega _{\infty },\) we have that, if we let \(A_{n} = \frac{\sum _{x \in {\mathcal {I}}_{n}}(\xi _{n}^{O}(x) - \rho _{n})}{\sqrt{|{\mathcal {I}}_{n}|}}\), then to complete this proof, it suffices to show that

$$\begin{aligned} {\mathcal {L}}(A_{n}| \Omega _{\infty }) \xrightarrow {w} {\mathcal {N}}(0, \sigma ^{2}), \text{ as } n \rightarrow \infty . \end{aligned}$$
(42)

We state next a key Proposition and subsequently state an auxiliary Lemma we require. We recall that we provide the proofs of the Propositions and Lemmas stated here afterward. Recall first that \(r^{-}_{n} = \sup \xi _{n}^{2{\mathbb {Z}}_{-}}\) and \(l_{n}^{+}= \inf \xi _{n}^{2 {\mathbb {Z}}_{+}}\), \(2{\mathbb {Z}}_{-}= \{ \dots ,-2, 0\}\), \(2 {\mathbb {Z}}_{+} = \{ 0, 2, \dots \}\). Recall further that we let \(({\hat{\xi }}_{n}^{2{\mathbb {Z}}}(x): (x,n) \in {\mathbb {L}})\), \({\hat{\xi }}_{n}^{2{\mathbb {Z}}}(x) = \xi _{n}^{2{\mathbb {Z}}}(x) - \rho _{n}\), \(\rho _{n} = {\mathbb {P}}(\Omega _{n})\), as defined in (21). Recall also that \({\mathcal {J}}_{n} = \{x: l_{n}^{+} \le x \le r_{n}^{-}, (x,n) \in {\mathbb {L}}\}\), whenever \(l_{n}^{+} \le r_{n}^{-}\), and \({\mathcal {J}}_{n} = O\), otherwise, as in (20).

Proposition 3.7

Let \(\displaystyle {{\bar{A}}_{n} = \frac{\sum _{x \in {\mathcal {J}}_{n}} {\hat{\xi }}_{n}^{2{\mathbb {Z}}}(x)}{\sqrt{|{\mathcal {J}}_{n}|}}}\). We have that

$$\begin{aligned} {\mathcal {L}}({\bar{A}}_{n}) \xrightarrow {w} {\mathcal {N}}(0, \sigma ^{2}), \text{ as } n \rightarrow \infty . \end{aligned}$$
(43)

We state the other key Proposition below, after the following auxiliary Lemma required first.

Lemma 3.8

Let

$$\begin{aligned} \tau := \inf \left\{ n \ge 0: {r^{-}_{n}= l^{+}_{n}} \text{ and } {r^{-}_{m} \ge l^{+}_{m}} \,\, \forall \,\,m> n \right\} . \end{aligned}$$
(44)

We have that \(\tau < \infty \), a.s.

We may now give the said Proposition.

Proposition 3.9

\({\mathcal {L}}({\bar{A}}_{n + \tau }) = {\mathcal {L}}(A_{n}| \Omega _{\infty })\), for all \(n \ge 0\).

Note that Lemma 3.8, by an application of Lemma 3.6, gives that, as \(n \rightarrow \infty \),

$$\begin{aligned} {\bar{A}}_{n + \tau } - {\bar{A}}_{n} \rightarrow 0, \text{ a.s. }. \end{aligned}$$
(45)

However, (45) by Lemma 3.5 gives that \({\mathcal {L}}({\bar{A}}_{n + \tau } - {\bar{A}}_{n}) \rightarrow 0\), as \(n \rightarrow \infty \), and hence Proposition 3.7, by an application of (39), yields that

$$\begin{aligned} {\mathcal {L}}({\bar{A}}_{n + \tau }) \xrightarrow {w} {\mathcal {N}}(0, \sigma ^{2}), \text{ as } n \rightarrow \infty . \end{aligned}$$
(46)

From Proposition 3.9 and (46), we have that (42) follows, and, therefore, the proof is complete. \(\square \)

Proof of Proposition 3.7

From Proposition 3.2 we have that

$$\begin{aligned} {\mathcal {L}}\left( \frac{\sum _{x = - \alpha n}^{ \alpha n} {\hat{\xi }}_{n}^{2{\mathbb {Z}}}(x)}{\sqrt{\alpha n}}\right) \xrightarrow {w} {\mathcal {N}}(0, \sigma ^{2}), \text{ as } n \rightarrow \infty \end{aligned}$$

where we recall that \(\alpha : = \alpha (p)>0\), \(p > p_{c}\), is the asymptotic velocity, as defined in (27); however, note that (27) gives that \(\sqrt{\frac{|{\mathcal {J}}_{n}|}{[\alpha n]}} \rightarrow 1\), as \(n \rightarrow \infty \), a.s., therefore, by Lemma 3.4, (40), we have that

$$\begin{aligned} {\mathcal {L}}\left( \frac{\sum _{x = - \alpha n}^{ \alpha n} {\hat{\xi }}_{n}^{2{\mathbb {Z}}}(x)}{\sqrt{|{\mathcal {J}}_{n}|}}\right) \xrightarrow {w} {\mathcal {N}}(0, \sigma ^{2}), \text{ as } n \rightarrow \infty . \end{aligned}$$
(47)

Hence, if we assume that

$$\begin{aligned} \frac{\sum _{x \in {\mathcal {J}}_{n}} {\hat{\xi }}_{n}^{2{\mathbb {Z}}}(x)- \sum _{x = - \alpha n}^{ \alpha n} {\hat{\xi }}_{n}^{2{\mathbb {Z}}}(x)}{\sqrt{|{\mathcal {J}}_{n}|}} \xrightarrow {p}0, \text{ as } n \rightarrow \infty , \end{aligned}$$
(48)

then, (47) together with an application of Lemma 3.4, (39), yields (43), and the result is proved.

We prove the remaining (48). To do this, we will show that the hypotheses of the general Proposition 2.3 are fulfilled when setting \(({\hat{\xi }}_{n}^{2{\mathbb {Z}}}(x), l_{n}^{+}, r_{n}^{-})\) equal to \((X_{t}(j), m_{t}, M_{t})\) there. We have the following: a) Recall that \({\mathbb {E}}(\xi _{n}^{2{\mathbb {Z}}}(x)) = \rho _{n}\), where this equality comes from self-duality, see (33). We therefore have that assumption (13) holds, since \(({\hat{\xi }}_{n}^{2{\mathbb {Z}}}(x))\) are centered r.v.’s. b) We now show that assumption (14) is satisfied for \(({\hat{\xi }}_{n}^{2{\mathbb {Z}}}(x))\), as follows. Note that, due to a corollary of Harris’ correlation inequality [34], see [39, Thm. 2.14, Chpt. II], which applies since every deterministic configuration is positively correlated, we have that \((\xi _{n}^{2 {\mathbb {Z}}})\) has positive correlations for all n. Because \(\xi _{n}^{2 {\mathbb {Z}}}\) takes values in a partially ordered set, this gives that \(\{\xi _{n}^{2 {\mathbb {Z}}}(x)\}\) are associated, and hence also \(({\hat{\xi }}_{n}^{2{\mathbb {Z}}}(x))\) are associated, because non-decreasing functions of associated r.v.’s are again associated, directly by the definition. c) Because \(\xi _{n}^{2 {\mathbb {Z}}}(x) \in \{0,1\}\), we have that \({\mathbb {E}}|\xi _{n}^{2 {\mathbb {Z}}}(x)| ^{2} \le 1\), and therefore assumption (15) is also fulfilled. d) Furthermore, we have that (37) gives that (16) is valid because, using that the covariance is linear in one argument when the other is fixed, we have that \(C^{-} = C^{+} \le \frac{C}{1-e^{-\gamma }} < \infty \). e) Finally, we have that (17) is valid for \(m_{t} = l_{n}^{+}\) and \(M_{t} = r^{-}_{n}\) due to (27). Hence, we have that (48) holds, and the proof is thus complete. \(\square \)

Proof of Proposition 3.9

We will show that

$$\begin{aligned} {\mathbb {P}}(A_{n} \ge a | \Omega _{\infty }) = {\mathbb {P}}({\bar{A}}_{n+k} \ge a | \tau = k), \end{aligned}$$
(49)

for all \(k \ge 0, a\in {\mathbb {R}}\). To see that proving the above display suffices to complete this proof, note that, due to Lemma 3.8, we have by the law of total probability and (49) that

$$\begin{aligned} {\mathbb {P}}({\bar{A}}_{n + \tau } \ge a)= & {} \sum _{k = 0}^{\infty } {\mathbb {P}}({\bar{A}}_{n + k} \ge a | \tau = k){\mathbb {P}}(\tau = k) \\= & {} {\mathbb {P}}(A_{n} \ge a | \Omega _{\infty }). \end{aligned}$$

To state the next auxiliary lemma we require, we let \({\mathcal {F}}_{n}\) denote the \(\sigma \)-algebra generated by the part of the construction involving bonds whose end-vertices have time-coordinate no greater than n.

Lemma 3.10

We let

$$\begin{aligned} \xi ^{(x,n)}_{m} = \{y: (x,n) \rightarrow (y,m+n)\}, \, \Omega _{(x,n)} = \left\{ \xi ^{(x,n)}_{m} \not = \emptyset , \text{ for } \text{ all } m \ge 0\right\} , \end{aligned}$$
(50)

\((x,n) \in {\mathbb {L}}\), \(m \ge 0\). We have the following representation

$$\begin{aligned} \left\{ \tau = n, r^{-}_{n} = l^{+}_{n} = x \right\} = \Omega _{(x, n)} \cap F, \end{aligned}$$
(51)

for some \(F \in {\mathcal {F}}_{n-1}\).

We state next another general auxiliary statement we will use below.

Lemma 3.11

Let \(\eta ^{-}, \eta ^{+}\) be such that \(\eta ^{-}(O) =\eta ^{+}(O) =1\) and \(\eta ^{-}(x)=0\), \( x\ge 2\), \(\eta ^{+}(x) = 0\), \(x \le -2\). Let \(r_{n}^{\eta ^{-}} = \sup \{x: \xi _{n}^{\eta ^{-}}(x) =1 \}\) and \(l_{n}^{\eta ^{+}}= \inf \{x: \xi _{n}^{\eta ^{+}}(x) = 1\}\). We have that \(\Omega _{\infty } =\{r_{n}^{\eta ^{-}} \ge l_{n}^{\eta ^{+}}, \text{ for } \text{ all } n \ge 1\}\).

We now have that, for any \((x,k) \in {\mathbb {L}}\),

$$\begin{aligned} {\mathbb {P}}({\bar{A}}_{n+k} \ge a | \tau = k, r_{k}^{-} = l^{+}_{k} = x)= & {} {\mathbb {P}}\left( {\bar{A}}_{n+k} \ge a | \Omega _{(x, k)}, r_{k}^{-} = l^{+}_{k} = x, F\right) \end{aligned}$$
(52)
$$\begin{aligned}= & {} {\mathbb {P}}\left( {\bar{A}}_{n+k} \ge a | \Omega _{(x, k)}, r_{k}^{-} = l^{+}_{k} = x\right) \end{aligned}$$
(53)
$$\begin{aligned}= & {} {\mathbb {P}}({\bar{A}}_{n} \ge a | \Omega _{\infty }) \end{aligned}$$
(54)
$$\begin{aligned}= & {} {\mathbb {P}}(A_{n} \ge a | \Omega _{\infty }), \end{aligned}$$
(55)

where in (52) we plug in (51) from Lemma 3.10, in (53) we use independence of events measurable with respect to disjoint parts of \({\mathscr {L}}\) by construction, in (54) we use translation-invariance with respect to (x, k), and finally in (55) we use that, by (22), \({\bar{A}}_{n} = A_{n}\) a.s. on \(\Omega _{\infty }\).

The law of total probability gives

$$\begin{aligned} {\mathbb {P}}({\bar{A}}_{n+k} \ge a | \tau = k)= & {} \sum _{x: (x, k) \in {\mathbb {L}}} {\mathbb {P}}\left( {\bar{A}}_{n+k} \ge a | \tau = k, r_{\tau }^{-} = x\right) {\mathbb {P}}(r_{\tau }^{-} = x |\tau = k) \nonumber \\= & {} {\mathbb {P}}(A_{n} \ge a | \Omega _{\infty }) \sum _{x : (x, k) \in {\mathbb {L}}} {\mathbb {P}}\left( r_{\tau }^{-} = x |\tau = k\right) \end{aligned}$$
(56)
$$\begin{aligned}= & {} {\mathbb {P}}(A_{n} \ge a | \Omega _{\infty }), \end{aligned}$$
(57)

where (56) follows from (55), and (57) follows from the fact that \({\mathbb {P}}(|r_{k}^{-}| < \infty | \tau = k) = 1\), for all k, since \({\mathbb {P}}(|r_{n}^{-}| < \infty ) = 1\), which in turn follows from (27). This proof is thus complete. \(\square \)

We prove the remaining Lemmas 3.8, 3.10, and 3.11 that we stated and used above.

Proof of Lemma 3.8

We will derive the estimate that there are \(C, \gamma \in (0, \infty )\) such that

$$\begin{aligned} {\mathbb {P}}( \tau \ge n) \le C e^{-\gamma n}, \end{aligned}$$
(58)

for all \(n \ge 1\).

Let \(E^{r}_{n} = \{\exists m\ge n: r^{-}_{m} < 0\} \text{ and } E^{l}_{n} = \{\exists m\ge n: l^{+}_{m} >0\}.\) From (31), we have that

$$\begin{aligned} {\mathbb {P}}(E^{r}_{n})\le Ce^{- \gamma n}, \end{aligned}$$
(59)

\( n \ge 1\). Further, note that

$$\begin{aligned} \{ \tau \ge n\} \subseteq E_{n}^{r} \cup E_{n}^{l}, \end{aligned}$$
(60)

where (60) follows from (44) by considering complements, that is, noting that

$$\begin{aligned} \bar{E_{n}^{r}} \cap \bar{E_{n}^{l}} \subseteq \{ \tau \le n\}. \end{aligned}$$

Hence, subadditivity of \({\mathbb {P}}\), together with the fact that, by symmetry, \({\mathbb {P}}(E^{r}_{n}) = {\mathbb {P}}(E^{l}_{n})\), gives

$$\begin{aligned} {\mathbb {P}}( \tau \ge n) \le 2{\mathbb {P}}\left( E^{r}_{n}\right) , \end{aligned}$$

from which the proof of (58) is complete by (59). \(\square \)

Proof of Lemma 3.10

To prove (51), note that it suffices to show that

$$\begin{aligned} \tau = \inf \left\{ n \ge 0: \cup _{(x,n) \in {\mathbb {L}}}\left\{ r^{-}_{n}= l^{+}_{n} = x\right\} \cap \Omega _{(x, n)}\right\} . \end{aligned}$$
(61)

However, Lemma 3.11 and translation invariance give that, for any \((x, n) \in {\mathbb {L}}\),

$$\begin{aligned} \Omega _{(x,n)} = \left\{ r^{-}_{m} \ge l^{+}_{m}, \,\, \forall m> n\right\} , \qquad \text{ on } \left\{ r^{-}_{n}= l^{+}_{n} = x\right\} , \end{aligned}$$

hence (61) is identical to (44), and (51) follows. \(\square \)

Proof of Lemma 3.11

Note that, since \(\Omega _{\infty } = \cap _{n \ge 1} \Omega _{n}\), this statement follows directly from (26). \(\square \)

Remark 3.12

To derive the IP corresponding to Proposition 3.2, that is that, if we let \(V_{t}^{n} = \frac{1}{\sigma \sqrt{\alpha n}}\sum _{x = - \alpha n}^{\alpha n} {\hat{\xi }}_{n}^{2{\mathbb {Z}}}(x)\), then we have that

$$\begin{aligned} V^{n} \Rightarrow W, \end{aligned}$$
(62)

which means that the random functions \(V^{n}\) converge to the Wiener measure, W, in the space D, see Section 13 in [6] for definitions and background in this regard. To derive (62) one may either modify the proof of Theorem 3 in [44] to the case of arrays of r.v.’s in which the length of each row grows linearly with the row number, or alternatively, one may modify the proof of Proposition 3.2 in the Appendix by invoking the IP extension of Lemma 3.1 instead there. Letting \(U_{t}^{n} = \frac{1}{\sigma \sqrt{\alpha n}}\sum _{x \in {\mathcal {J}}_{n}} {\hat{\xi }}_{n}^{2{\mathbb {Z}}}(x)\), we then have that by (62) and (27) the assumptions of Theorem 14.4 in [6] are fulfilled, hence yielding that

$$\begin{aligned} U^{n} \Rightarrow W. \end{aligned}$$
(63)

By appropriately modifying the argument from (44) onwards in the proof above, we have that the IP corresponding to Theorem 2.1 may also be derived from (63). Our proof of Theorem 2.1 contrasts with the approach briefly sketched here in that the former does not require invoking any general statements. Further, note that the latter approach does not go through Proposition 2.3 and hence does not yield the random-index CLT’s we provide in Sect. 2.2.

4 Proposition 2.3 and Corollary 2.2

4.1 Proof of Proposition 2.3

The proof of Proposition 2.3 is divided into two parts. We will first derive Proposition 2.3 by relying on Lemma 4.2, which is stated next and proved immediately thereafter in this section. Prior to that, we also state here the Hajek–Rényi inequality for associated r.v.’s, due to [11], see also [53]. Recall the definition of association given in Sect. 2.2.

Lemma 4.1

Let \((X_{j}: j = 1, \dots , n)\) be associated r.v.’s such that \({\mathbb {E}}(X_{j})= 0\) for all j, and let also \((c_{j}: j = 1, \dots , n)\) be a sequence of non-increasing and positive numbers. Let \(S_{k} = \sum _{j=1}^{k} X_{j}\). Then, we have that

$$\begin{aligned} {\mathbb {P}}\left( \max _{1 \le k \le n} c_{k} \left| S_{k}\right| \ge \epsilon \right) \le 2\epsilon ^{-2}\left( 2 \sum _{j=1}^{n} c_{j}^{2} \mathrm {Cov}\left( X_{j}, S_{j-1} \right) + \sum _{j=1}^{n} c_{j}^{2} {\mathbb {E}}\left( X_{j}^{2}\right) \right) . \end{aligned}$$
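As an informal numerical check of Lemma 4.1 (ours; the Gaussian model, the weights, and all parameters are illustrative assumptions), the following Python sketch uses a Gaussian vector with non-negative covariances, which is associated by Pitt’s theorem, computes the right-hand side of the inequality exactly from the covariance matrix, and estimates the left-hand side by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(5)
n, eps, reps, r = 50, 6.0, 100_000, 0.5

# Gaussian vector with non-negative covariances (AR(1)-type): such vectors are
# associated by Pitt's theorem, so the hypotheses of Lemma 4.1 are met.
cov = r ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
L = np.linalg.cholesky(cov)
c = 1.0 / np.sqrt(np.arange(1, n + 1))          # non-increasing positive weights

# exact right-hand side of the inequality, computed from the covariance matrix
cov_X_S = np.array([cov[j, :j].sum() for j in range(n)])   # Cov(X_j, S_{j-1})
rhs = (2.0 / eps**2) * (2.0 * np.sum(c**2 * cov_X_S) + np.sum(c**2 * np.diag(cov)))

# Monte Carlo estimate of the left-hand side
X = rng.standard_normal((reps, n)) @ L.T
S = np.cumsum(X, axis=1)
lhs = np.mean(np.max(c * np.abs(S), axis=1) >= eps)

print(f"P(max_k c_k |S_k| >= {eps}) ~= {lhs:.4f}    Hajek-Renyi bound = {rhs:.4f}")
```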

We may now state the following Lemma.

Lemma 4.2

Let \(\{X_{t}(j): j, t \ge 0 \}\) be such that \({\mathbb {E}}(X_{t}(j))= 0\), \(\forall \,t, j\). Let also \({S_{t}(i) = \sum _{j =0}^{i} X_{t}(j)}\). We assume the following:

$$\begin{aligned}&\{X_{t}(j): j \ge 0 \} \text{ is } \text{ associated } \text{ for } \text{ each } t. \end{aligned}$$
(64)
$$\begin{aligned}&\quad \sup _{j, t}{\mathbb {E}}(X_{t}(j)^{2})< \infty , \end{aligned}$$
(65)
$$\begin{aligned}&\quad \sup _{j, t}\mathrm {Cov}\left( X_{t}(j), S_{t}(j-1) \right) <\infty , \end{aligned}$$
(66)

Furthermore, we let \((N_{t}: t\ge 0)\) be integer-valued and non-negative r.v.’s, such that, for some \(0<\theta <\infty \),

$$\begin{aligned} {\mathcal {L}}\left( \frac{N_{t}}{t}\right) \xrightarrow {w} \theta , \text{ as } t \rightarrow \infty . \end{aligned}$$
(67)

Then, we have that

$$\begin{aligned} {\frac{S_{t}(N_t) - S_{t}([\theta t])}{ \sqrt{ [\theta t]} } \xrightarrow {p} 0, \text{ as } t \rightarrow \infty .} \end{aligned}$$

Proof of Proposition 2.3

Let \(M'_{t}= M_{t} \cdot 1\{M_{t} \ge 0\}\) and \(m_{t}' = m_{t}\cdot 1\{m_{t} \le 0\}\), where we recall that \(M_{t}\) and \(m_{t}\) are as in (17). Note that in the notation introduced we have that

$$\begin{aligned} S_{t}(m'_{t}, M'_{t}) = S_{t}^{-}(m'_{t}) +S_{t}^{+}(M'_{t}), \text{ and } S_{t}(-[\theta t], [\theta t]) = S_{t}^{-}([\theta t]) +S_{t}^{+}([\theta t]), \end{aligned}$$

and thus, by the triangle inequality, we have that

$$\begin{aligned} \frac{|S_{t}(m'_t, M'_{t}) - S_{t}(-[\theta t], [\theta t])|}{ \sqrt{ [\theta t]}} \le \frac{|S_{t}^{-}(m'_{t})- S_{t}^{-}([\theta t])|}{ \sqrt{ [\theta t]}} + \frac{|S_{t}^{+}(M'_{t})- S_{t}^{+}([\theta t])|}{ \sqrt{ [\theta t]}}. \end{aligned}$$
(68)

However, the assumptions of Lemma 4.2 are appropriately satisfied, yielding that

$$\begin{aligned} \frac{|S_{t}^{-}(m'_{t})- S_{t}^{-}([\theta t])|}{ \sqrt{ [\frac{\theta t}{2}]}} \xrightarrow {p} 0, \text{ and } \frac{|S_{t}^{+}(M'_{t})- S_{t}^{+}([\theta t])|}{ \sqrt{ [\frac{\theta t}{2}]}} \xrightarrow {p} 0. \end{aligned}$$

as \(t \rightarrow \infty \). Hence, from (68) and the display above, we have that

$$\begin{aligned} \frac{S_{t}(m_t', M_{t}') - S_{t}(-[\theta t], [\theta t])}{ \sqrt{ [\theta t]} } \xrightarrow {p} 0, \text{ as } t \rightarrow \infty . \end{aligned}$$
(69)

To conclude the proof of (18), note that, again by the triangle inequality,

$$\begin{aligned} \frac{|S_{t}(m_t, M_{t}) - S_{t}(-[\theta t], [\theta t])|}{ \sqrt{ [\theta t]} }\le & {} \frac{|S_{t}(m_t, M_{t}) - S_{t}(m_t', M_{t}')|}{ \sqrt{ [\theta t]} } \\&+ \frac{|S_{t}(m_t', M_{t}') - S_{t}(-[\theta t], [\theta t])|}{ \sqrt{ [\theta t]}}, \end{aligned}$$

so that in view of (69), it suffices to show that

$$\begin{aligned} \lim _{t \rightarrow \infty } {\mathbb {P}}(|S_{t}(m_t, M_{t}) - S_{t}(m'_t, M'_{t})| > \epsilon ) = 0, \end{aligned}$$
(70)

which follows simply by noting that, for all \(\epsilon >0\),

$$\begin{aligned} {\mathbb {P}}(|S_{t}(m_t, M_{t}) - S_{t}(m'_t, M'_{t})| > \epsilon ) \le {\mathbb {P}}(M_{t} \le -1 \text{ or } m_{t} \ge 1), \end{aligned}$$

and hence (70) follows, since from (17) we have that \(\lim _{t \rightarrow \infty } {\mathbb {P}}(M_{t} \le -1) = 0\) and \(\lim _{t \rightarrow \infty } {\mathbb {P}}(m_{t} \ge 1) = 0\). The proof is thus complete. \(\square \)

Proof of Lemma 4.2

Note that without loss of generality we may take \(\theta = 1\). Let \(\epsilon \in (0,1)\), and also let \(m(t) = [t(1-\epsilon ^{3})]+1\) and \(n(t) = [t(1+\epsilon ^{3})]\). Let \(Y_{i}(t) = X_{t}(t+i)\), \(i = 1, \dots , n(t) - t\), and denote their partial sums by \(Z_{k}(t) = \sum _{i=1}^{k} Y_{i}(t)\). Then we have that

$$\begin{aligned} \max _{k=t, \dots , n(t)} |S_{t}(k) - S_{t}(t)|= & {} \max _{k=t+1, \dots , n(t)} \left| \sum _{j = t+1}^{k} X_{t}(j) \right| \nonumber \\= & {} \max _{k=1, \dots , [t\epsilon ^{3}]} |Z_{k}(t)|. \end{aligned}$$
(71)

From (64) we have that Lemma 4.1 applies and choosing there \(c_{k} = \frac{1}{\sqrt{t}}\), gives that

$$\begin{aligned} {\mathbb {P}}\left( \max _{k=t, \dots , n(t)} |S_{t}(k) - S_{t}(t)| \ge \epsilon \sqrt{t}\right)= & {} {\mathbb {P}}\left( \max _{k=1, \dots , [t\epsilon ^{3}]} |Z_{k}(t)| \ge \epsilon \sqrt{t}\right) \nonumber \\\le & {} \frac{2}{\epsilon ^{2} t} \left( 2 \sum _{j =1}^{[t \epsilon ^{3}]} \mathrm {Cov}(Y_{j}(t), Z_{j-1}(t)) + \sum _{j =1}^{[t \epsilon ^{3}]} {\mathbb {E}}\left( Y_{j}(t)^{2}\right) \right) \nonumber \\\le & {} C \epsilon , \end{aligned}$$
(72)

where C is independent of t, and (72) comes from (65) and (66). Similarly, letting \(Y'_{i}(t) =X_{t}(t+1-i)\), for \(i = 1, \dots , t - m(t) +1\), and \(Z'_{k}(t) = \sum _{i=1}^{k}Y'_{i}(t)\), we have that

$$\begin{aligned} \max _{k=m(t), \dots , t} |S_{t}(k) - S_{t}(t)|= & {} \max _{k=m(t), \dots , t-1} \left| \sum _{j = k+1}^{t} X_{t}(j) \right| \nonumber \\= & {} \max _{k = 1, \dots , [t \epsilon ^{3}]-1} |Z_{k}'(t)|. \end{aligned}$$
(73)

Again, we can apply Lemma 4.1 with \(c_{k} = \frac{1}{\sqrt{t}}\) from (64), so that

$$\begin{aligned}&{\mathbb {P}}\left( \max _{k=m(t), \dots , t} |S_{t}(k) - S_{t}(t)| \ge \epsilon \sqrt{t}\right) = {\mathbb {P}}\left( \max _{k = 1, \dots , [t \epsilon ^{3}]-1} |Z_{k}'(t)| \ge \epsilon \sqrt{t}\right) \nonumber \\&\quad \le \frac{2}{\epsilon ^{2} t} \left( 2 \sum _{j =1}^{[t \epsilon ^{3}]-1} \mathrm {Cov}(Y'_{j}(t), Z'_{j-1}(t)) + \sum _{j =1}^{[t \epsilon ^{3}]-1} {\mathbb {E}}\left( Y_{j}'(t)^{2}\right) \right) \nonumber \\&\quad \le C \epsilon , \end{aligned}$$
(74)

where again we use (65) and (66) in (74).

Partitioning according to the event \(N_{t} \in [m(t), n(t)]\) and its complement and then using (72) and (74) gives that

$$\begin{aligned} {\mathbb {P}}(|S_{t}(N_{t}) - S_{t}(t)| \ge \epsilon \sqrt{ t})\le & {} {\mathbb {P}}(|S_{t}(N_{t}) - S_{t}(t)| \ge \epsilon \sqrt{ t}, N_{t} \in [m(t), n(t)]) + {\mathbb {P}}(N_{t} \not \in [m(t), n(t)]) \nonumber \\\le & {} {\mathbb {P}}\left( \max _{m(t)\le k \le t} |S_{t}(k) - S_{t}(t)| \ge \epsilon \sqrt{t}\right) \nonumber \\&+ \quad {\mathbb {P}}\left( \max _{t \le k \le n(t)} |S_{t}(k) - S_{t}(t)| \ge \epsilon \sqrt{ t}\right) + {\mathbb {P}}(N_{t} \not \in [m(t), n(t)]) \nonumber \\\le & {} 2C \epsilon + {\mathbb {P}}\left( N_{t} \not \in [m(t), n(t)]\right) , \end{aligned}$$
(75)

where for the last inequality we invoke (72) and (74). However, (67) gives

$$\begin{aligned} \limsup _{t\rightarrow \infty }{\mathbb {P}}\left( N_{t} \not \in [m(t), n(t)]\right) = 0, \end{aligned}$$

and hence, from (75) we get that

$$\begin{aligned} \limsup _{t\rightarrow \infty }{\mathbb {P}}(|S_{t}(N_{t}) - S_{t}(t)| \ge \epsilon \sqrt{ t}) \le 2C \epsilon , \end{aligned}$$

which, since \(\epsilon \) is arbitrary, completes the proof. \(\square \)

4.2 Proof of Corollary 2.2

The following theorem, due to [13], is applied in the proof of Corollary 2.2, which is given next.

Theorem 4.3

Let \(\{X_{n}(j): 0 \le j \le n\}\) be such that \({\mathbb {E}}(X_{n}(j))= 0\), \(\forall \,n, j\), and suppose that (8), (9), and (10) hold. Letting \(S_{n}(n) = \sum _{j =0}^{n} X_{n}(j)\), we then have that

$$\begin{aligned} \frac{S_{n}(n)}{\sqrt{n}} \xrightarrow {w} {\mathcal {N}}(0,\sigma ^{2}), \text{ as } n \rightarrow \infty , \end{aligned}$$
(76)

where \(\sigma ^{2}:= \lim _{n \rightarrow \infty } {\text{ Var }} (S_{n}(n) / \sqrt{n})\), \(0<\sigma ^{2}<\infty \).

Proof of Corollary 2.2

Note that it suffices to show only the first conclusion of Corollary 2.2, since the second one then follows by an application of Lemma 3.4. Note also that there is no loss of generality in assuming \(\theta =1\). We thus have that the hypotheses of Theorem 4.3 are met, and hence (76) holds. From this and Lemma 4.2, the proof is complete by an application of Lemma 3.4. \(\square \)