1 Introduction

Let \(\{X_n,n\ge 1\}\) be a sequence of identically distributed random variables, \(\{a_{nk}, n\ge 1, 1\le k\le n\}\) an array of constants. A weighted sum is defined by

$$\begin{aligned} \sum _{k=1}^n a_{nk}X_k. \end{aligned}$$
(1.1)

The \(a_{nk}\) are called weights. Since many useful linear statistics, e.g., least-squares estimators and nonparametric regression function estimators, are of the form (1.1), it is of considerable interest to study the limiting behavior of weighted sums of random variables.
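
As a simple illustration (ours, not part of the original discussion; the simulated design, noise and sample size below are arbitrary), the least-squares slope estimator in an ordinary linear regression can be written as \(n^{-1}\sum ^n_{k=1}a_{nk}y_k\), a normalized weighted sum of the form (1.1) with weights \(a_{nk}=n(x_k-\bar{x}_n)/\sum ^n_{j=1}(x_j-\bar{x}_n)^2\); these are exactly the weights that reappear in Section 3.

```python
import numpy as np

# Illustrative only: the least-squares slope estimator is a weighted sum
# of the observations, with weights a_{nk} = n (x_k - xbar_n) / s_n,
# where s_n = sum_j (x_j - xbar_n)^2.
rng = np.random.default_rng(0)

n = 1000
x = np.linspace(0.0, 10.0, n)         # fixed design points (our choice)
beta = 2.0
y = beta * x + rng.normal(size=n)     # responses with simulated noise

xbar = x.mean()
s_n = np.sum((x - xbar) ** 2)

# Classical least-squares slope estimator.
beta_hat = np.sum((x - xbar) * (y - y.mean())) / s_n

# The same statistic written as n^{-1} * sum_k a_{nk} * y_k.
a_nk = n * (x - xbar) / s_n
beta_hat_weighted = np.mean(a_nk * y)

print(beta_hat, beta_hat_weighted)    # the two values coincide
```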

The classical Kolmogorov strong law of large numbers states that if \(\{X_n,n\ge 1\}\) is a sequence of independent and identically distributed random variables with \(EX_1=0,\) then \(n^{-1}\sum _{k=1}^nX_k\rightarrow 0\) a.s. The Kolmogorov strong law of large numbers has been extended to weighted sums by many authors. Let \(\{X_n,n\ge 1\}\) be a sequence of independent and identically distributed random variables, \(\{a_{nk}, n\ge 1, 1\le k\le n\}\) an array of uniformly bounded constants, i.e., \(\sup _{n\ge 1}\max _{1\le k\le n}|a_{nk}|<\infty \). Choi and Sung (1987) showed that if \(EX_1=0\), then

$$\begin{aligned} n^{-1}\sum ^n_{k=1}a_{nk}X_k\rightarrow 0\ \ \mathrm{a.s.} \end{aligned}$$
(1.2)

When the weights \(a_{nk}\) are \(\alpha \)-th Cesàro uniformly bounded for some \(1<\alpha \le \infty ,\) that is, \(\sup _{n\ge 1}n^{-1}\sum ^n_{k=1}|a_{nk}|^\alpha <\infty \) (when \(\alpha =\infty \) we interpret this as \(\sup _{n\ge 1}\max _{1\le k\le n}|a_{nk}|<\infty \)), Cuzick (1995) showed that (1.2) holds under the moment conditions that \(EX_1=0\) and \(E|X_1|^\beta <\infty ,\) where \(1/\alpha +1/\beta =1\). When \(\alpha =\infty ,\) this reduces to the result of Choi and Sung (1987). Bai and Cheng (2000) extended and generalized the result of Cuzick (1995) to the Marcinkiewicz type strong law, and Chen and Gan (2007) generalized the result of Bai and Cheng (2000) in some directions. Huang et al. (2014) extended the corresponding result of Bai and Cheng (2000) to \(\varphi \)-mixing random variables with \(\sum ^\infty _{n=1}\varphi ^{1/2}(n)<\infty \).

When the weights \(a_{nk}\) are independent of n,  i.e., \(a_{kk}=a_{k+1, k}=\cdots ,\) it is possible to prove (1.2) under weaker moment conditions. In fact, Baxter et al. (2004) showed that if \(\{X_n,n\ge 1\}\) is a sequence of independent and identically distributed random variables with \(EX_1=0,\) and \(\{a_n,n\ge 1\}\) is a sequence of \(\alpha \)-th Cesàro uniformly bounded constants for some \(1<\alpha <\infty \), then

$$\begin{aligned} n^{-1}\sum ^n_{k=1}a_kX_k\rightarrow 0\ \ \mathrm{a.s.} \end{aligned}$$
(1.3)

For the sake of clarity, we first recall the concept of \(\psi \)-mixing random variables or random vectors.

Definition 1.1

For a sequence \(\{X_n,n\ge 1\}\) of random variables or random vectors, the \(\psi \)-mixing coefficient \(\psi (n)\) is defined as

$$\begin{aligned} \psi (n)=\sup _{m\ge 1}\sup _{A\in \mathcal{F}^m_1, B\in \mathcal{F}^\infty _{m+n}, P(A)P(B)\not = 0} \left| \frac{P(AB)}{P(A)P(B)}-1\right| , \end{aligned}$$

where \(\mathcal{F}^m_n=\sigma (X_i: n\le i\le m)\). Then \(\{X_n,n\ge 1\}\) is said to be \(\psi \)-mixing or *-mixing if \(\psi (n)\rightarrow 0\) as \(n\rightarrow \infty \).

The concept of \(\psi \)-mixing was introduced by Blum et al. (1963), who proved the Kolmogorov strong law of large numbers for identically distributed \(\psi \)-mixing random variables without any condition on the mixing rate. For \(\psi \)-mixing random variables with \(\sum ^\infty _{n=1}\psi (n)<\infty \), Yang (1995) obtained a moment inequality, an exponential inequality and a strong law for weighted sums; Wang et al. (2010) obtained a maximal inequality and gave some applications; Xu and Tang (2013) discussed the strong law for Jamison’s type weighted sums. Since \(\psi \)-mixing is stronger than \(\varphi \)-mixing (see, for example, Lin and Lu 1997), results established for \(\varphi \)-mixing random variables also hold in the \(\psi \)-mixing case.

Motivated by the work of Blum et al. (1963), it is interesting to obtain limit theorems for mixing random variables without any condition on the mixing rate. Shao (1988) obtained complete convergence for \(\varphi \)-mixing random variables without any condition on the mixing rate, and Chen et al. (2009) extended the result of Shao (1988) to moving average processes based on \(\varphi \)-mixing random variables. Since \(\psi \)-mixing implies \(\varphi \)-mixing, their results hold for \(\psi \)-mixing random variables without any condition on the mixing rate. However, it is not known whether (1.2) or (1.3) holds for \(\varphi \)-mixing random variables without any condition on the mixing rate. In this paper, we prove that (1.2) and (1.3) hold for \(\psi \)-mixing random variables without any condition on the mixing rate.

We now state the main results. The first one extends and generalizes the result of Baxter et al. (2004) from the independent case to the \(\psi \)-mixing case, and that of Blum et al. (1963) from partial sums to weighted sums.

Theorem 1.1

Let \(\{X_n,n\ge 1\}\) be a sequence of identically distributed \(\psi \)-mixing random variables, \(\{a_n, n\ge 1\}\) a sequence of constants satisfying \(\sup _{n\ge 1}n^{-1}\sum ^n_{k=1}|a_k|^\alpha <\infty \) for some \(1<\alpha <\infty \). Then \(EX_1=0\) and \(E|X_1|<\infty \) imply that (1.3) holds, i.e., \(n^{-1} \sum _{k=1}^n a_k X_k\rightarrow 0\) a.s. In particular,

$$\begin{aligned} n^{-1}\sum ^n_{k=1}X_k\rightarrow 0\ \ \mathrm{a.s.} \end{aligned}$$
(1.4)

The second one extends and generalizes the results of Choi and Sung (1987) and Cuzick (1995) from the independent case to the \(\psi \)-mixing case.

Theorem 1.2

Let \(\{X_n,n\ge 1\}\) be a sequence of identically distributed \(\psi \)-mixing random variables, \(\{a_{nk}, n\ge 1, 1\le k\le n\}\) an array of constants satisfying

$$\begin{aligned} \sup _{n\ge 1} n^{-1}\sum ^n_{k=1}|a_{nk}|^\alpha <\infty \end{aligned}$$
(1.5)

for some \(1<\alpha \le \infty \) (when \(\alpha =\infty \) we interpret this as \(\sup _{n\ge 1}\max _{1\le k\le n}|a_{nk}|<\infty \)). If \(EX_1=0\) and \(E|X_1|^\beta <\infty ,\) where \(1/\alpha +1/\beta =1\), then (1.2) holds, i.e., \(n^{-1}\sum _{k=1}^n a_{nk}X_k\rightarrow 0\) a.s.

Some lemmas and the proofs of the main results will be detailed in the next section. The applications of Theorems 1.1 and 1.2 to the least-squares estimators will be shown in Section 3.

Throughout this paper, C denotes a positive constant which is not necessarily the same in each appearance. The symbol I(A) denotes the indicator function of the event \(A\), \(\lfloor x\rfloor \) denotes the integer part of x, and \(\#B\) denotes the number of elements of the set B.

2 Lemmas and Proofs

To prove Theorem 1.1, we need an analog of the Chung strong law of large numbers which slightly extends Theorem 2 of Blum et al. (1963) and Theorem 2.20 of Hall and Heyde (1980).

Lemma 2.1

Let \(1<p\le 2, \{Y_n,n\ge 1\}\) be a sequence of \(\psi \)-mixing random variables with \(EY_n=0\) and \(E|Y_n|^p<\infty \) for all \(n\ge 1\). Suppose that

$$\begin{aligned} \sum ^\infty _{n=1}\frac{E|Y_n|^p}{n^p}<\infty \end{aligned}$$
(2.1)

and

$$\begin{aligned} \sup _{n\ge 1}n^{-1}\sum ^n_{k=1}E|Y_k|<\infty . \end{aligned}$$
(2.2)

Then

$$\begin{aligned} n^{-1}\sum ^n_{k=1}Y_k\rightarrow 0\ \ \mathrm{a.s.} \end{aligned}$$
(2.3)

Proof

By Markov’s inequality and (2.1)

$$\begin{aligned} \sum ^\infty _{n=1}P\{Y_n\not =Y_nI(|Y_n|\le n)\}=\sum ^\infty _{n=1}P\{|Y_n|>n\}\le \sum ^\infty _{n=1}\frac{E|Y_n|^p}{n^p}<\infty . \end{aligned}$$

So to prove (2.3), by the Borel–Cantelli lemma, it suffices to prove that

$$\begin{aligned} n^{-1}\sum ^n_{k=1}Y_kI(|Y_k|\le k)\rightarrow 0\ \ \mathrm{a.s.} \end{aligned}$$
(2.4)

By \(EY_n=0\) for all \(n\ge 1\), (2.1) and Kronecker’s lemma

$$\begin{aligned} \left| n^{-1}\sum ^n_{k=1}EY_kI(|Y_k|\le k)\right| \le n^{-1}\sum ^n_{k=1}E|Y_k|I(|Y_k|>k) \le n^{-1}\sum ^n_{k=1}\frac{E|Y_k|^p}{k^{p-1}}\rightarrow 0 \end{aligned}$$

as \(n\rightarrow \infty \). So to prove (2.4), it suffices to prove that

$$\begin{aligned} n^{-1}\sum ^n_{k=1}[Y_kI(|Y_k|\le k)-EY_kI(|Y_k|\le k)]\rightarrow 0\ \ \mathrm{a.s.} \end{aligned}$$
(2.5)

Note that by (2.1)

$$\begin{aligned}&\sum ^\infty _{n=1}\frac{E|Y_nI(|Y_n|\le n)-EY_nI(|Y_n|\le n)|^2}{n^2}\\&\quad \le \sum ^\infty _{n=1}\frac{EY_n^2I(|Y_n|\le n)}{n^2} \le \sum ^\infty _{n=1}\frac{E|Y_n|^p}{n^p}<\infty \end{aligned}$$

and by (2.2)

$$\begin{aligned} \sup _{n\ge 1}n^{-1}\sum ^n_{k=1}E|Y_kI(|Y_k|\le k)-EY_kI(|Y_k|\le k)| \le 2\sup _{n\ge 1}n^{-1}\sum ^n_{k=1}E|Y_k|<\infty . \end{aligned}$$

Then (2.5) holds from an application of Theorem 2.20 of Hall and Heyde (1980), and the proof is completed. \(\square \)

Proof of Theorem 1.1

By Hölder’s inequality, we may assume without loss of generality that \(1<\alpha \le 2\): if \(\alpha >2\), then \(\sup _{n\ge 1}n^{-1}\sum ^n_{k=1}|a_k|^2\le \left( \sup _{n\ge 1}n^{-1}\sum ^n_{k=1}|a_k|^\alpha \right) ^{2/\alpha }<\infty \), so the weight condition also holds with \(\alpha \) replaced by 2. By Abel’s method, we have that for all \(k\ge 1\)

$$\begin{aligned} \sum ^\infty _{n=k}\frac{|a_n|^\alpha }{n^{\alpha }} \le \frac{\alpha }{\alpha -1}\left( \sup _{n\ge 1}n^{-1}\sum ^n_{k=1}|a_k|^\alpha \right) \cdot k^{1-\alpha }. \end{aligned}$$
(2.6)
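
For completeness, one way to verify (2.6) is by summation by parts (a sketch in our notation: set \(A_n=\sum ^n_{j=1}|a_j|^\alpha \) and \(C_0=\sup _{n\ge 1}n^{-1}A_n\), so that \(A_n\le C_0n\)):

$$\begin{aligned} \sum ^\infty _{n=k}\frac{|a_n|^\alpha }{n^{\alpha }}&=\sum ^\infty _{n=k}(A_n-A_{n-1})n^{-\alpha } \le \sum ^\infty _{n=k}A_n\left[ n^{-\alpha }-(n+1)^{-\alpha }\right] \\&\le C_0\sum ^\infty _{n=k}n\int ^{n+1}_n\alpha x^{-\alpha -1}\,dx \le C_0\alpha \int ^\infty _k x^{-\alpha }\,dx=\frac{\alpha }{\alpha -1}C_0k^{1-\alpha }. \end{aligned}$$

Here the summation by parts drops the nonnegative term \(A_{k-1}k^{-\alpha }\) and uses \(A_NN^{-\alpha }\le C_0N^{1-\alpha }\rightarrow 0\) as \(N\rightarrow \infty \), and the second inequality uses \(nx^{-\alpha -1}\le x^{-\alpha }\) for \(x\in [n,n+1]\).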

On account of \(E|X_1|<\infty \)

$$\begin{aligned} \sum ^\infty _{n=1}P\{X_n\not =X_nI(|X_n|\le n)\}=\sum ^\infty _{n=1}P\{|X_n|>n\}=\sum ^\infty _{n=1}P\{|X_1|>n\}\le E|X_1|<\infty , \end{aligned}$$

and so to prove (1.3), by the Borel–Cantelli lemma, it suffices to prove that

$$\begin{aligned} n^{-1}\sum ^n_{k=1}a_k[X_kI(|X_k|\le k)-EX_kI(|X_k|\le k)]\rightarrow 0\ \ \mathrm{a.s.} \end{aligned}$$
(2.7)

and

$$\begin{aligned} n^{-1}\sum ^n_{k=1}a_kEX_kI(|X_k|\le k)\rightarrow 0. \end{aligned}$$
(2.8)

By Lemma 2.1, to prove (2.7), it suffices to prove that (2.1) and (2.2) hold for \(p=\alpha \) and \(Y_n=a_n[X_nI(|X_n|\le n)-EX_nI(|X_n|\le n)], n\ge 1\). In fact, by the \(c_r\)-inequality, Hölder’s inequality and (2.6)

$$\begin{aligned} \sum ^\infty _{n=1}\frac{E|Y_n|^\alpha }{n^\alpha }\le & {} C\sum ^\infty _{n=1}\frac{|a_n|^\alpha E|X_n|^\alpha I(|X_n|\le n)}{n^{\alpha }} =C\sum ^\infty _{n=1}\frac{|a_n|^\alpha E|X_1|^\alpha I(|X_1|\le n)}{n^{\alpha }}\\= & {} C\sum ^\infty _{n=1}\frac{|a_n|^\alpha }{n^\alpha }\sum ^n_{k=1}E|X_1|^\alpha I(k-1<|X_1|\le k)\\= & {} C\sum ^\infty _{k=1}E|X_1|^\alpha I(k-1<|X_1|\le k)\sum ^\infty _{n=k}\frac{|a_n|^\alpha }{n^\alpha }\\\le & {} C\sum ^\infty _{k=1}k^{1-\alpha }E|X_1|^\alpha I(k-1<|X_1|\le k)\\\le & {} CE|X_1|<\infty . \end{aligned}$$

By Hölder’s inequality, we have that for all \(n\ge 1\)

$$\begin{aligned} n^{-1}\sum ^n_{k=1}E|Y_k|\le & {} 2n^{-1}\sum ^n_{k=1}|a_k|E|X_k|I(|X_k|\le k)\le 2(E|X_1|)\left( n^{-1}\sum ^n_{k=1}|a_k|^\alpha \right) ^{1/\alpha }\\\le & {} 2(E|X_1|)\left( \sup _{n\ge 1}n^{-1}\sum ^n_{k=1}|a_k|^\alpha \right) ^{1/\alpha }<\infty . \end{aligned}$$

Therefore, (2.7) holds. Since \(E|X_1|<\infty \), we have \(E|X_1|I(|X_1|>n)\rightarrow 0\) as \(n\rightarrow \infty \), and hence

$$\begin{aligned} n^{-1}\sum ^n_{k=1}[E|X_1|I(|X_1|>k)]^s\rightarrow 0 \end{aligned}$$

as \(n\rightarrow \infty \) for any \(s>0\). By \(EX_1=0\) and Hölder’s inequality

$$\begin{aligned}&\left| n^{-1}\sum ^n_{k=1}a_kEX_kI(|X_k|\le k)\right| \le n^{-1}\sum ^n_{k=1}|a_k|E|X_1|I(|X_1|>k)\\&\quad \le \left( \sup _{n\ge 1}n^{-1}\sum ^n_{k=1}|a_k|^\alpha \right) ^{1/\alpha }\cdot \left( n^{-1}\sum ^n_{k=1}[E|X_1|I(|X_1|>k)]^\beta \right) ^{1/\beta }\\&\quad \rightarrow 0\ \mathrm{as}\ n\rightarrow \infty , \end{aligned}$$

i.e., (2.8) holds for \(1/\alpha +1/\beta =1\). The proof is completed. \(\square \)
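
The conclusion (1.3) is easy to check numerically. The following sketch is purely illustrative and not part of the proof; the particular 1-dependent Gaussian sequence (which is \(\psi \)-mixing with \(\psi (k)=0\) for \(k\ge 2\)), the particular weights and the sample sizes are our own choices. The weights are unbounded but satisfy the Cesàro condition with \(\alpha =2\).

```python
import numpy as np

rng = np.random.default_rng(1)

n = 200_000
# 1-dependent, identically distributed, mean-zero sequence:
# X_k = (Z_k + Z_{k+1}) / sqrt(2).  One-dependent sequences are
# psi-mixing, since events separated by a lag of 2 or more are independent.
z = rng.standard_normal(n + 1)
x = (z[:-1] + z[1:]) / np.sqrt(2.0)

# Weights a_k not depending on n: a_k = k^{1/4} if k is a perfect square
# and a_k = (-1)^k otherwise.  Then sup_n n^{-1} sum_{k<=n} a_k^2 < infinity,
# although the weights themselves are unbounded.
k = np.arange(1, n + 1)
is_square = np.round(np.sqrt(k)) ** 2 == k
a = np.where(is_square, k ** 0.25, (-1.0) ** k)

weighted_means = np.cumsum(a * x) / k   # n^{-1} sum_{k=1}^n a_k X_k
for m in (10**3, 10**4, 10**5, n):
    print(m, weighted_means[m - 1])     # tends to 0, illustrating (1.3)
```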

To prove Theorem 1.2, the following two lemmas on \(\varphi \)-mixing random variables are needed. The first one is a Rosenthal type inequality for \(\varphi \)-mixing random variables (see Shao 1988). The second one shows that Theorem 1.1 holds for uniformly bounded \(\varphi \)-mixing random variables without any condition on the mixing rate, and is of independent interest. Since \(\psi \)-mixing implies \(\varphi \)-mixing, Lemma 2.3 also holds for \(\psi \)-mixing random variables.

Lemma 2.2

Let \(\{Y_n, n\ge 1 \}\) be a sequence of \(\varphi \)-mixing random variables with \(E|Y_n|^s<\infty \) for all \(n\ge 1\) and for some \(s\ge 2\). Then there exists a positive constant C depending only on s and the \(\varphi \)-mixing coefficient \(\varphi (\cdot )\) such that for all \(n\ge 1\)

$$\begin{aligned}&E\left| \sum _{k=1}^n (Y_k-EY_k)\right| ^s\le C\\&\quad \times \left\{ \left[ \exp \left( 6\sum ^{\lfloor \log n\rfloor }_{i=1}\varphi ^{1/2}(2^i)\right) \cdot n\max _{1\le k\le n}EY_k^2\right] ^{s/2}+\sum _{k=1}^n E|Y_k|^s\right\} . \end{aligned}$$

Remark 2.1

Set \(a(x)=\sum ^{\lfloor \log x\rfloor }_{i=1}\varphi ^{1/2}(2^i)\), \(x>0\). Then by \(\varphi (2^n)\rightarrow 0\) as \(n\rightarrow \infty \), \(\lim _{x\rightarrow \infty }a(x)/\log x=0\) and hence \(\lim _{x\rightarrow \infty }x^{-\delta }\exp (sa(x))=0\) for any \(s>0\) and \(\delta >0\). Therefore, the series \(\sum ^\infty _{n=1}n^{-\lambda }\exp (sa(n))\) converges for any \(s>0\) and \(\lambda >1.\)

Lemma 2.3

Let \(\{Y_n,n\ge 1\}\) be a sequence of \(\varphi \)-mixing random variables with \(\sup _{n\ge 1}|Y_n|\le M\) a.s. for some constant \(M>0, \{a_{nk}, n\ge 1, 1\le k\le n\}\) an array of constants satisfying (1.5) for some \(1<\alpha \le \infty \). Then

$$\begin{aligned} n^{-1}\sum ^n_{k=1}a_{nk}(Y_k-EY_k)\rightarrow 0\ \ \mathrm{a.s.} \end{aligned}$$
(2.9)

Proof

Set \(a(n)=\sum ^{\lfloor \log n\rfloor }_{i=1}\varphi ^{1/2}(2^i)\), \(n\ge 1\). We first prove (2.9) for the case \(\alpha =\infty \). When \(\alpha =\infty \), we have \(\sup _{n\ge 1}\max _{1\le k\le n}|a_{nk}|<\infty \). By Markov’s inequality and Lemma 2.2, we have that for any \(s>2\) and any \(\varepsilon >0\)

$$\begin{aligned}&P\left\{ \left| \sum ^n_{k=1}a_{nk}(Y_k-EY_k)\right| >\varepsilon n\right\} \\&\quad \le Cn^{-s}E\left| \sum ^n_{k=1}a_{nk}(Y_k-EY_k)\right| ^s\\&\quad \le Cn^{-s}\left\{ \left[ \exp (6a(n)) \cdot n\max _{1\le k\le n}E(a_{nk}Y_k)^2\right] ^{s/2}+\sum _{k=1}^n E|a_{nk}Y_k|^s\right\} \\&\quad \le C\exp (3sa(n))\cdot n^{-s/2}+Cn^{-s+1}, \end{aligned}$$

which ensures by Remark 2.1 that

$$\begin{aligned} \sum ^\infty _{n=1}P\left\{ \left| \sum ^n_{k=1}a_{nk}(Y_k-EY_k)\right|>\varepsilon n\right\} <\infty ,\ \forall \ \varepsilon >0. \end{aligned}$$

Then (2.9) holds by the Borel–Cantelli lemma.

We now prove (2.9) for the case \(1<\alpha <\infty \). Without loss of generality, we can assume that for all \(n\ge 1\)

$$\begin{aligned} \sum ^n_{k=1}|a_{nk}|^\alpha \le n. \end{aligned}$$
(2.10)

Then by Hölder’s inequality

$$\begin{aligned} \max _{1\le k\le n}|a_{nk}|\le n^{1/\alpha },\ \ \sum ^n_{k=1}|a_{nk}|^s\le n^{s/\alpha } \end{aligned}$$
(2.11)

for all \(n\ge 1\) and for any \(s>\alpha \). Set

$$\begin{aligned} A_n^{(0)}=\{1,2,\ldots , n\}, \quad A_n^{(m)}=\{1\le k\le n: |a_{nk}|^\alpha >n^{m/\beta }\},\ m\ge 1, \end{aligned}$$

where \(1/\alpha +1/\beta =1\). Then \(A_n^{(0)}\supset A_n^{(1)}\supset \cdots \), and by (2.10) and the definition of \(A_n^{(m)}\)

$$\begin{aligned} n\ge \sum ^n_{k=1}|a_{nk}|^\alpha \ge \sum _{k \in A_n^{(m)}}|a_{nk}|^\alpha >n^{m/\beta }\#A_n^{(m)}, \end{aligned}$$

which implies that

$$\begin{aligned} \#A_n^{(m)}\le n^{1-m/\beta }, m\ge 1. \end{aligned}$$
(2.12)

Take \(m=5\) if \(\alpha >7/6\), and take \(m\ge 6\) such that \((m+2)/(m+1)<\alpha \le (m+1)/m\) if \(1<\alpha \le 7/6\). Then to prove (2.9), it suffices to prove that

$$\begin{aligned} n^{-1}\sum _{k\in A_n^{(j-1)}\setminus A_n^{(j)}}a_{nk}(Y_k-EY_k)\rightarrow 0\ \ \mathrm{a.s.}, \ j=1,\ldots ,m, \end{aligned}$$
(2.13)

and

$$\begin{aligned} n^{-1}\sum _{k\in A_n^{(m)}}a_{nk}(Y_k-EY_k)\rightarrow 0\ \ \mathrm{a.s.} \end{aligned}$$
(2.14)

By Lemma 2.2 and (2.11), we obtain that for any \(s>\max \{\alpha ,2\}\) and \(\varepsilon >0\)

$$\begin{aligned}&P\left\{ \left| \sum _{k\in A_n^{(j-1)}\setminus A_n^{(j)}}a_{nk}(Y_k-EY_k)\right| >\varepsilon n\right\} \nonumber \\&\quad \le Cn^{-s}\left\{ \left[ \exp (6a(n)) \cdot \#A_n^{(j-1)}\max _{k\in A_n^{(j-1)}\setminus A_n^{(j)}}E|a_{nk}Y_k|^2\right] ^{s/2}\right. \nonumber \\&\qquad \left. +\sum _{k\in A_n^{(j-1)}\setminus A_n^{(j)}}E|a_{nk}Y_k|^s\right\} \nonumber \\&\quad \le C\exp (3sa(n))\cdot n^{-s[2j/\alpha ^2-(3j-1)/\alpha +j]/2}+ Cn^{-s/\beta }. \end{aligned}$$
(2.15)

It is easy to show that

$$\begin{aligned} 2j/\alpha ^2-(3j-1)/\alpha +j>0 \end{aligned}$$

for \(j=1,\ldots ,5\) and for any \(1<\alpha <\infty \); indeed, multiplying by \(\alpha ^2\) gives the quadratic \(j\alpha ^2-(3j-1)\alpha +2j\), whose leading coefficient is positive and whose discriminant \((3j-1)^2-8j^2=j^2-6j+1\) is negative for \(j\le 5\). The above also holds for \(1\le j\le m\) when \((m+2)/(m+1)< \alpha \le (m+1)/m\), since in that case \(j\alpha ^2-(3j-1)\alpha +2j=\alpha -j(\alpha -1)(2-\alpha )\ge \alpha -m(\alpha -1)\ge \alpha -1>0\) (using \(2-\alpha <1\), \(j\le m\) and \(m(\alpha -1)\le 1\)). Then we can take s large enough such that

$$\begin{aligned} s[2j/\alpha ^2-(3j-1)/\alpha +j]/2>1, s/\beta >1, \end{aligned}$$

which ensures by Remark 2.1 that

$$\begin{aligned} \sum ^\infty _{n=1}P\left\{ \left| \sum _{k\in A_n^{(j-1)}\setminus A_n^{(j)}}a_{nk}(Y_k-EY_k)\right|>\varepsilon n\right\} <\infty ,\ \ \forall \ \varepsilon >0. \end{aligned}$$

Then by the Borel–Cantelli lemma, (2.13) holds.

An argument similar to that leading to (2.15) gives

$$\begin{aligned}&P\left\{ \left| \sum _{k\in A_n^{(m)}}a_{nk}(Y_k-EY_k)\right| >\varepsilon n\right\} \\&\quad \le C\exp (3sa(n))\cdot n^{-s[m+1-(m+2)/\alpha ]/2}+ Cn^{-s/\beta }. \end{aligned}$$

Since we always have \(\alpha >(m+2)/(m+1)\), we can take s large enough such that

$$\begin{aligned} s[m+1-(m+2)/\alpha ]/2>1, \quad \ s/\beta >1, \end{aligned}$$

which ensures by Remark 2.1 that

$$\begin{aligned} \sum ^\infty _{n=1}P\left\{ \left| \sum _{k\in A_n^{(m)}}a_{nk}(Y_k-EY_k)\right|>\varepsilon n\right\} <\infty ,\ \ \forall \ \varepsilon >0. \end{aligned}$$

By the Borel–Cantelli lemma, (2.14) holds. The proof is completed. \(\square \)

Proof of Theorem 1.2

For any \(\varDelta >0\), set

$$\begin{aligned}&Y_n=X_nI(|X_n|\le \varDelta )-EX_nI(|X_n|\le \varDelta ),\\&Z_n=X_nI(|X_n|>\varDelta )-EX_nI(|X_n|>\varDelta ) \end{aligned}$$

for \(n\ge 1\). Since \(|Y_n|\le 2\varDelta \) for all \(n\ge 1,\) Lemma 2.3 implies that

$$\begin{aligned} n^{-1}\sum ^n_{k=1}a_{nk}Y_k\rightarrow 0\ \ \mathrm{a.s.} \end{aligned}$$

By Hölder’s inequality

$$\begin{aligned} \frac{1}{n}\left| \sum ^n_{k=1}a_{nk}Z_k\right| \le \left( \frac{1}{n}\sum ^n_{k=1}|a_{nk}|^\alpha \right) ^{1/\alpha } \cdot \left( \frac{1}{n}\sum ^n_{k=1}|Z_k|^\beta \right) ^{1/\beta } \ \ \mathrm{a.s.} \end{aligned}$$

and by Theorem 1.1

$$\begin{aligned}&n^{-1}\sum ^n_{k=1}|Z_k|^\beta \rightarrow E|X_1I(|X_1|>\varDelta )-EX_1I(|X_1|>\varDelta )|^\beta \\&\quad \le 2^\beta E|X_1|^\beta I(|X_1|>\varDelta )\ \ \mathrm{a.s.} \end{aligned}$$

Noting that \(X_n=Y_n+Z_n\) for all \(n\ge 1\) and \(EX_1=0\), we have

$$\begin{aligned} \limsup _{n\rightarrow \infty }\frac{1}{n}\left| \sum ^n_{k=1}a_{nk}X_k\right| \le 2\sup _{n\ge 1}\left( \frac{1}{n}\sum ^n_{k=1}|a_{nk}|^\alpha \right) ^{1/\alpha }\cdot (E|X_1|^\beta I(|X_1|>\varDelta ))^{1/\beta }\ \ \mathrm{a.s.}, \end{aligned}$$

which ensures (1.2) by letting \(\varDelta \rightarrow \infty \). \(\square \)

Remark 2.2

The Kolmogorov strong law of large numbers holds for identically distributed \(\varphi \)-mixing random variables with \(\sum ^\infty _{n=1}\varphi ^{1/2}(2^n)<\infty \) (see, for example, Theorem 8.2.2 of Lin and Lu 1997). By the same proof as that of Theorem 1.2, with Theorem 1.1 replaced by Theorem 8.2.2 of Lin and Lu (1997), we obtain that Theorem 1.2 also holds for \(\varphi \)-mixing random variables with \(\sum ^\infty _{n=1}\varphi ^{1/2}(2^n)<\infty .\)

3 Applications

We consider the simple linear errors-in-variables (EV) regression model:

$$\begin{aligned} \eta _k=\theta +\beta x_k +\varepsilon _k, \quad \ \xi _k=x_k+\delta _k,\ \ 1\le k\le n, \end{aligned}$$
(3.1)

where \(\theta ,\beta , x_1, \ldots , x_n\) are unknown parameters or constants, \((\varepsilon _k,\delta _k), 1\le k\le n,\) are random vectors and \(\xi _k,\eta _k, 1\le k\le n,\) are observable variables. From (3.1), we have

$$\begin{aligned} \eta _k=\theta +\beta \xi _k+(\varepsilon _k-\beta \delta _k), \quad \ 1\le k\le n. \end{aligned}$$

Treating this as a usual regression model of \(\eta _k\) on \(\xi _k\) with errors \(\varepsilon _k-\beta \delta _k\), the least-squares (LS) estimators of \(\beta \) and \(\theta \) are given by

$$\begin{aligned} \hat{\beta }_n=\frac{\sum ^n_{k=1}(\xi _k-\bar{\xi }_n)(\eta _k-\bar{\eta }_n)}{\sum ^n_{k=1}(\xi _k-\bar{\xi }_n)^2}, \ \ \hat{\theta }_n=\bar{\eta }_n-\hat{\beta }_n\bar{\xi }_n, \end{aligned}$$

where \(\bar{\xi }_n=n^{-1}\sum ^n_{k=1}\xi _k\), and other similar notations, such as \(\bar{\eta }_n, \bar{\delta }_n\) and \(\bar{x}_n\), are defined in the same way.
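
For concreteness, the LS estimators can be computed directly from the observables. The following minimal Python sketch (the function name and array handling are ours and not part of the original paper) mirrors the displayed formulas:

```python
import numpy as np

def ls_estimators(xi, eta):
    """LS estimators (beta_hat, theta_hat) for model (3.1), computed from
    the observable variables xi_k and eta_k as in the formulas above."""
    xi = np.asarray(xi, dtype=float)
    eta = np.asarray(eta, dtype=float)
    xi_bar, eta_bar = xi.mean(), eta.mean()
    beta_hat = (np.sum((xi - xi_bar) * (eta - eta_bar))
                / np.sum((xi - xi_bar) ** 2))
    theta_hat = eta_bar - beta_hat * xi_bar
    return beta_hat, theta_hat
```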

Define \(s_n=\sum ^n_{k=1}(x_k-\bar{x}_n)^2\) for each \(n\ge 1\). With the above notation, we have

$$\begin{aligned} \hat{\beta }_n-\beta =\frac{\sum ^n_{k=1}(\delta _k-\bar{\delta }_n)\varepsilon _k +\sum ^n_{k=1}(x_k-\bar{x}_n)(\varepsilon _k-\beta \delta _k) -\beta \sum ^n_{k=1}(\delta _k-\bar{\delta }_n)^2}{\sum ^n_{k=1}(\xi _k-\bar{\xi }_n)^2}\nonumber \\ \end{aligned}$$
(3.2)

and

$$\begin{aligned} \hat{\theta }_n-\theta =\bar{x}_n(\beta -\hat{\beta }_n)+(\beta -\hat{\beta }_n)\bar{\delta }_n+\bar{\varepsilon }_n -\beta \bar{\delta }_n. \end{aligned}$$
(3.3)

This model was proposed by Deaton (1985) to correct for the effects of sampling errors and is somewhat more practical than the ordinary regression model. Fuller (1987) summarized many early works on EV models. Over the last two decades, the EV model has attracted much attention due to its simple form and wide applicability; for more details, we refer to Liu and Chen (2005), Miao et al. (2011), Wang et al. (2015) and the references therein. In particular, Liu and Chen (2005) obtained necessary and sufficient conditions for the strong consistency of \(\hat{\beta }_n\) and the weak consistency of \(\hat{\theta }_n\) under the assumption that the errors \((\varepsilon _n, \delta _n), n\ge 1,\) are independent and identically distributed random vectors. However, the independence assumption on the errors is not always valid in applications; in particular, when the data are collected sequentially in time, e.g., a consumer price index or yearly rainfall, the errors need not be independent. The result of Liu and Chen (2005) has been partially extended to models with dependent errors. Fan et al. (2010) proved a sufficient condition for the strong consistency of \(\hat{\beta }_n\) when the errors are stationary \(\alpha \)-mixing, under a condition on the mixing rate and higher-order moment conditions.

As applications of Theorems 1.1 and 1.2, we extend and generalize the results of Liu and Chen (2005) from the independent case to the \(\psi \)-mixing setting.

In the following, we establish the strong consistency of the LS estimators of the unknown parameters. The first result concerns the strong consistency of the estimator of \(\beta \).

Theorem 3.1

Under the model (3.1), assume that \(\{(\varepsilon _n, \delta _n),n\ge 1\}\) is a sequence of identically distributed \(\psi \)-mixing random vectors with \(E\varepsilon _1=E\delta _1=0\), \(0<E\varepsilon _1^2, E\delta _1^2<\infty \). Then \(s_n/n\rightarrow \infty \) implies that

$$\begin{aligned} \hat{\beta }_n\rightarrow \beta \ \ \mathrm{a.s.} \end{aligned}$$

Conversely, if \(E(\varepsilon _1\delta _1)-\beta E\delta _1^2\not =0\), then \(\hat{\beta }_n\rightarrow \beta \ \mathrm{a.s.}\) implies that \(s_n/n\rightarrow \infty \).

Proof of sufficiency

Assume that \(s_n/n\rightarrow \infty \). To prove \(\hat{\beta }_n\rightarrow \beta \) a.s., by (3.2), it suffices to prove that

$$\begin{aligned}&s_n^{-1}\sum ^n_{k=1}(\delta _k-\bar{\delta }_n)\varepsilon _k\rightarrow 0\ \ \mathrm{a.s.,} \end{aligned}$$
(3.4)
$$\begin{aligned}&s_n^{-1}\sum ^n_{k=1}(x_k-\bar{x}_n)(\varepsilon _k-\beta \delta _k)\rightarrow 0\ \ \mathrm{a.s.,} \end{aligned}$$
(3.5)
$$\begin{aligned}&s_n^{-1}\sum ^n_{k=1}(\delta _k-\bar{\delta }_n)^2\rightarrow 0\ \ \mathrm{a.s.,} \end{aligned}$$
(3.6)
$$\begin{aligned}&s_n^{-1}\sum ^n_{k=1}(\xi _k-\bar{\xi }_n)^2\rightarrow 1\ \ \mathrm{a.s.} \end{aligned}$$
(3.7)

By Theorem 1.1

$$\begin{aligned} s_n^{-1}\sum ^n_{k=1}(\delta _k-\bar{\delta }_n)\varepsilon _k =\frac{n}{s_n}\cdot \left( \frac{1}{n}\sum ^n_{k=1}\varepsilon _k\delta _k-\bar{\varepsilon }_n\bar{\delta }_n\right) \rightarrow 0\times [E(\varepsilon _1\delta _1)-0]=0\ \ \mathrm{a.s.} \end{aligned}$$

and

$$\begin{aligned} s_n^{-1}\sum ^n_{k=1}(\delta _k-\bar{\delta }_n)^2 =\frac{n}{s_n}\cdot \left( \frac{1}{n}\sum ^n_{k=1}\delta _k^2-\bar{\delta }_n^2\right) \rightarrow 0\times (E\delta _1^2-0)=0\ \ \mathrm{a.s.} \end{aligned}$$

Hence, (3.4) and (3.6) hold. Set \(a_{nk}=n(x_k-\bar{x}_n)/s_n\) for \(n\ge 1\) and \(1\le k\le n\). Then

$$\begin{aligned} \sup _{n\ge 1}n^{-1}\sum ^n_{k=1}|a_{nk}|^2=\sup _{n\ge 1} n/s_n<\infty . \end{aligned}$$

Therefore, by Theorem 1.2

$$\begin{aligned} s_n^{-1}\sum ^n_{k=1}(x_k-\bar{x}_n)\varepsilon _k=n^{-1}\sum ^n_{k=1}a_{nk}\varepsilon _k\rightarrow 0\ \ \mathrm{a.s.} \end{aligned}$$
(3.8)

and

$$\begin{aligned} s_n^{-1}\sum ^n_{k=1}(x_k-\bar{x}_n)\delta _k=n^{-1}\sum ^n_{k=1}a_{nk}\delta _k\rightarrow 0\ \ \mathrm{a.s.} \end{aligned}$$
(3.9)

Then (3.5) holds from (3.8) and (3.9). Note that

$$\begin{aligned} s_n^{-1}\sum ^n_{k=1}(\xi _k-\bar{\xi }_n)^2 =1+2s_n^{-1}\sum ^n_{k=1}(x_k-\bar{x}_n)\delta _k+s_n^{-1}\sum ^n_{k=1}(\delta _k-\bar{\delta }_n)^2. \end{aligned}$$

Then (3.7) holds by (3.6) and (3.9).

Proof of necessity

Suppose that \(s_n/n\rightarrow \infty \) does not hold. Taking a subsequence if necessary, we may assume that

$$\begin{aligned} s_n/n\rightarrow c\in [0,\infty )\ \mathrm{as}\ n\rightarrow \infty . \end{aligned}$$
(3.10)

By Theorem 1.1

$$\begin{aligned} n^{-1}\sum ^n_{k=1}(\delta _k-\bar{\delta }_n)\varepsilon _k\rightarrow E(\varepsilon _1\delta _1)\ \ \mathrm{a.s.} \end{aligned}$$
(3.11)

and

$$\begin{aligned} n^{-1}\sum ^n_{k=1}(\delta _k-\bar{\delta }_n)^2\rightarrow E\delta _1^2\ \ \mathrm{a.s.} \end{aligned}$$
(3.12)

Set \(a_{nk}=x_k-\bar{x}_n\) for \(n\ge 1\) and \(1\le k\le n\). By (3.10)

$$\begin{aligned} \sup _{n\ge 1}n^{-1}\sum ^n_{k=1}a_{nk}^2=\sup _{n\ge 1}s_n/n<\infty . \end{aligned}$$

Then by Theorem 1.2

$$\begin{aligned}&n^{-1}\sum ^n_{k=1}(x_k-\bar{x}_n)\varepsilon _k\rightarrow 0\ \ \mathrm{a.s.} \nonumber \\&n^{-1}\sum ^n_{k=1}(x_k-\bar{x}_n)\delta _k\rightarrow 0\ \ \mathrm{a.s.}, \end{aligned}$$
(3.13)

which imply that

$$\begin{aligned} n^{-1}\sum ^n_{k=1}(x_k-\bar{x}_n)(\varepsilon _k-\beta \delta _k)\rightarrow 0\ \ \mathrm{a.s.} \end{aligned}$$
(3.14)

By (3.10), (3.12) and (3.13)

$$\begin{aligned} n^{-1}\sum ^n_{k=1}(\xi _k-\bar{\xi }_n)^2=\frac{s_n}{n}+\frac{2}{n}\sum ^n_{k=1}(x_k-\bar{x}_n)\delta _k+\frac{1}{n}\sum ^n_{k=1}(\delta _k-\bar{\delta }_n)^2 \rightarrow c+E\delta _1^2\ \ \mathrm{a.s.}\nonumber \\ \end{aligned}$$
(3.15)

Thus by (3.2), (3.11), (3.12), (3.14) and (3.15)

$$\begin{aligned} \hat{\beta }_n-\beta \rightarrow \frac{E(\varepsilon _1\delta _1)-\beta E\delta _1^2}{c+E\delta _1^2}\ \ \mathrm{a.s.}, \end{aligned}$$

which contradicts \(\hat{\beta }_n\rightarrow \beta \ \mathrm{a.s.}\), since \(E(\varepsilon _1\delta _1)-\beta E\delta _1^2\not =0\). Hence \(s_n/n\rightarrow \infty \), and the proof is completed. \(\square \)
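
As a numerical companion to Theorem 3.1 (purely illustrative and not part of the proof; the design \(x_k=k/100\), the 1-dependent Gaussian error construction, which makes \(\{(\varepsilon _n,\delta _n),n\ge 1\}\) \(\psi \)-mixing, and the sample sizes are our own choices), the following sketch simulates model (3.1) with \(s_n/n\rightarrow \infty \) and shows \(\hat{\beta }_n\) approaching \(\beta \):

```python
import numpy as np

rng = np.random.default_rng(2)

theta, beta = 1.0, 2.0
n = 100_000

# Design constants with s_n / n -> infinity (here x_k = k / 100).
x = np.arange(1, n + 1) / 100.0

# 1-dependent (hence psi-mixing), identically distributed error vectors.
u = rng.standard_normal(n + 1)
v = rng.standard_normal(n + 1)
eps = (u[:-1] + u[1:]) / np.sqrt(2.0)
delta = (v[:-1] + v[1:]) / np.sqrt(2.0)

# Observables from model (3.1).
eta = theta + beta * x + eps
xi = x + delta

for m in (10**3, 10**4, n):
    xi_m, eta_m = xi[:m], eta[:m]
    b = (np.sum((xi_m - xi_m.mean()) * (eta_m - eta_m.mean()))
         / np.sum((xi_m - xi_m.mean()) ** 2))
    print(m, b)          # approaches beta = 2 as m grows
```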

Remark 3.1

When \(\{(\varepsilon _n, \delta _n),n\ge 1\}\) is a sequence of independent and identically distributed random vectors, Theorem 3.1 was proved by Liu and Chen (2005). When \(\{(\varepsilon _n, \delta _n),n\ge 1\}\) is a stationary \(\alpha \)-mixing sequence with the higher order moment conditions \(E|\varepsilon _1|^{2+t}<\infty \) and \(E|\delta _1|^{2+t}<\infty \) for some \(t>0\) and with the mixing rate condition \(\alpha (n)=O(\log ^{-\gamma }n)\) for some \(\gamma >1+2/t,\) Fan et al. (2010) proved the sufficiency part of Theorem 3.1. Although \(\psi \)-mixing is stronger than \(\alpha \)-mixing, Theorem 3.1 is a complete extension of the result of Liu and Chen (2005).

The second one is the strong consistency for the estimator of \(\theta .\)

Theorem 3.2

Under the assumptions of Theorem 3.1, further assume that \(\sup _{n\ge 1}n\bar{x}_n^2/s_n^*<\infty \), where \(s_n^*=\max \{n,s_n\}.\) Then \(n\bar{x}_n/s_n^*\rightarrow 0\) implies that

$$\begin{aligned} \hat{\theta }_n\rightarrow \theta \ \mathrm{a.s.} \end{aligned}$$

Conversely, if \(E(\varepsilon _1\delta _1)-E\delta _1^2\not =0\), then \(\hat{\theta }_n\rightarrow \theta \ \mathrm{a.s.}\) implies that \(n\bar{x}_n/s_n^*\rightarrow 0\).

Proof of sufficiency

Assume that \(n\bar{x}_n/s_n^*\rightarrow 0\). By (3.11)

$$\begin{aligned} \frac{\bar{x}_n}{s_n^*}\sum ^n_{k=1}(\delta _k-\bar{\delta }_n)\varepsilon _k =\frac{n\bar{x}_n}{s_n^*}\cdot \frac{1}{n}\sum ^n_{k=1}(\delta _k-\bar{\delta }_n)\varepsilon _k \rightarrow 0\ \ \mathrm{a.s.} \end{aligned}$$
(3.16)

and

$$\begin{aligned} \limsup _{n\rightarrow \infty }\left| \frac{1}{s_n^*}\sum ^n_{k=1}(\delta _k-\bar{\delta }_n)\varepsilon _k\right| =\limsup _{n\rightarrow \infty }\left| \frac{n}{s_n^*}\cdot \frac{1}{n}\sum ^n_{k=1}(\delta _k-\bar{\delta }_n)\varepsilon _k\right| \le |E(\varepsilon _1\delta _1)|\ \ \mathrm{a.s.}\nonumber \\ \end{aligned}$$
(3.17)

By (3.12)

$$\begin{aligned} \frac{\bar{x}_n}{s_n^*}\sum ^n_{k=1}(\delta _k-\bar{\delta }_n)^2 =\frac{n\bar{x}_n}{s_n^*}\cdot \frac{1}{n}\sum ^n_{k=1}(\delta _k-\bar{\delta }_n)^2 \rightarrow 0\ \ \mathrm{a.s.} \end{aligned}$$
(3.18)

and

$$\begin{aligned} \limsup _{n\rightarrow \infty }\frac{1}{s_n^*}\sum ^n_{k=1}(\delta _k-\bar{\delta }_n)^2 =\limsup _{n\rightarrow \infty }\frac{n}{s_n^*}\cdot \frac{1}{n}\sum ^n_{k=1}(\delta _k-\bar{\delta }_n)^2 \le E\delta _1^2\ \ \mathrm{a.s.} \end{aligned}$$
(3.19)

Note that

$$\begin{aligned} \sup _{n\ge 1}\frac{1}{n}\sum ^n_{k=1}\left| \frac{n\bar{x}_n(x_k-\bar{x}_n)}{s_n^*}\right| ^2 =\sup _{n\ge 1}\frac{n\bar{x}_n^2s_n}{(s_n^*)^2} \le \sup _{n\ge 1}\frac{n\bar{x}_n^2}{s_n^*}<\infty \end{aligned}$$

and

$$\begin{aligned} \sup _{n\ge 1}\frac{1}{n}\sum ^n_{k=1}\left| \frac{n(x_k-\bar{x}_n)}{s_n^*}\right| ^2 =\sup _{n\ge 1}\frac{ns_n}{(s_n^*)^2}\le 1<\infty . \end{aligned}$$

Hence, by Theorem 1.2

$$\begin{aligned} \frac{\bar{x}_n}{s_n^*}\sum ^n_{k=1}(x_k-\bar{x}_n)(\varepsilon _k-\beta \delta _k) =\frac{1}{n}\sum ^n_{k=1}\frac{n\bar{x}_n(x_k-\bar{x}_n)}{s_n^*}(\varepsilon _k-\beta \delta _k) \rightarrow 0\ \ \mathrm{a.s.}\qquad \end{aligned}$$
(3.20)

and

$$\begin{aligned} \frac{1}{s_n^*}\sum ^n_{k=1}(x_k-\bar{x}_n)(\varepsilon _k -\beta \delta _k) =\frac{1}{n}\sum ^n_{k=1}\frac{n(x_k-\bar{x}_n)(\varepsilon _k -\beta \delta _k)}{s_n^*} \rightarrow 0\ \ \mathrm{a.s.} \end{aligned}$$
(3.21)

By the definition of \(s_n^*\), (3.19) and (3.21)

$$\begin{aligned}&\min \{1, E\delta _1^2\}\le \liminf _{n\rightarrow \infty }\frac{1}{s_n^*}\sum ^n_{k=1}(\xi _k-\bar{\xi }_n)^2\nonumber \\&\quad \le \limsup _{n\rightarrow \infty } \frac{1}{s_n^*}\sum ^n_{k=1}(\xi _k-\bar{\xi }_n)^2\le 1+E\delta _1^2\ \ \mathrm{a.s.} \end{aligned}$$
(3.22)

By (3.2), (3.16), (3.18), (3.20) and (3.22)

$$\begin{aligned} \bar{x}_n(\hat{\beta }_n-\beta )\rightarrow 0\ \ \mathrm{a.s.} \end{aligned}$$
(3.23)

By (3.2), (3.17), (3.19), (3.21) and (3.22)

$$\begin{aligned} \limsup _{n\rightarrow \infty }|\hat{\beta }_n-\beta |\le \frac{|E(\varepsilon _1\delta _1)|+E\delta _1^2}{\min \{1,E\delta _1^2\}}\ \ \mathrm{a.s.} \end{aligned}$$
(3.24)

By (3.3), (3.23), (3.24), and noting that \(\bar{\varepsilon }_n\rightarrow 0\ \ \mathrm{a.s.}\) and \(\bar{\delta }_n\rightarrow 0\ \ \mathrm{a.s.}\) by Theorem 1.1

$$\begin{aligned} \hat{\theta }_n\rightarrow \theta \ \ \mathrm{a.s.} \end{aligned}$$

Proof of necessity

Suppose that \(n\bar{x}_n/s_n^*\rightarrow 0\) does not hold. Taking a subsequence if necessary, we may assume that

$$\begin{aligned} n\bar{x}_n/s_n^*\rightarrow c\in [-\infty , 0)\cup (0, \infty ]\ \ \mathrm{as}\ \ n\rightarrow \infty . \end{aligned}$$
(3.25)

Then

$$\begin{aligned} \frac{\bar{x}_n}{s_n^*}\sum ^n_{k=1}(\delta _k-\bar{\delta }_n)\varepsilon _k =\frac{n\bar{x}_n}{s_n^*}\cdot \frac{1}{n}\sum ^n_{k=1}(\delta _k-\bar{\delta }_n)\varepsilon _k \rightarrow cE(\varepsilon _1\delta _1)\ \ \mathrm{a.s.} \end{aligned}$$
(3.26)

and

$$\begin{aligned} \frac{\bar{x}_n}{s_n^*}\sum ^n_{k=1}(\delta _k-\bar{\delta }_n)^2 =\frac{n\bar{x}_n}{s_n^*}\cdot \frac{1}{n}\sum ^n_{k=1}(\delta _k-\bar{\delta }_n)^2 \rightarrow cE\delta _1^2\ \ \mathrm{a.s.} \end{aligned}$$
(3.27)

Hence, by (3.2), (3.20), (3.22), (3.26) and (3.27)

$$\begin{aligned} \liminf _{n\rightarrow \infty }|\bar{x}_n(\hat{\beta }_n-\beta )|\ge \frac{|c[E(\varepsilon _1\delta _1)-E\delta _1^2]|}{1+E\delta _1^2}\ \ \mathrm{a.s.} \end{aligned}$$
(3.28)

Therefore, by (3.3), (3.28), (3.24), \(\bar{\varepsilon }_n\rightarrow 0\ \ \mathrm{a.s.}\), and \(\bar{\delta }_n\rightarrow 0\ \ \mathrm{a.s.}\), we have

$$\begin{aligned} \liminf _{n\rightarrow \infty }|\hat{\theta }_n-\theta |\ge \frac{|c[E(\varepsilon _1\delta _1)-E\delta _1^2]|}{1+E\delta _1^2}\ \ \mathrm{a.s.}, \end{aligned}$$

which contradicts \(\hat{\theta }_n\rightarrow \theta \ \mathrm{a.s.}\), since \(c\not =0\) and \(E(\varepsilon _1\delta _1)-E\delta _1^2\not =0\). Hence \(n\bar{x}_n/s_n^*\rightarrow 0\), and the proof is completed.\(\square \)

Remark 3.2

Even in the case where the errors are independent, Theorem 3.2 appears to be new.

4 Conclusions

For a sequence \(\{X_n, n\ge 1 \}\) of identically distributed \(\psi \)-mixing random variables with \(EX_1=0\) and an array \(\{a_{nk}, n\ge 1, 1\le k\le n\}\) of constant weights, conditions on both the weights and the moment of \(X_1\) have been given under which the weighted sums \(n^{-1}\sum _{k=1}^n a_{nk}X_k\) converge to 0 a.s. In general, the condition on weights \(a_{nk}\) is \(\sup _{n\ge 1} n^{-1}\sum ^n_{k=1}|a_{nk}|^\alpha <\infty \) for some \(1<\alpha \le \infty \) (when \(\alpha =\infty \) we interpret this as \(\sup _{n\ge 1}\max _{1\le k\le n}|a_{nk}|<\infty \)), and the moment condition is \(E|X_1|^\beta <\infty ,\) where \(1/\alpha +1/\beta =1.\) When the weights \(a_{nk}\) are independent of n,  i.e., \(a_{kk}=a_{k+1, k}=\cdots ,\) the moment condition can be weakened to \(E|X_1|<\infty .\)

The above results based on two types of weights have been applied to the simple linear errors-in-variables regression model:

$$\begin{aligned} \eta _k=\theta +\beta x_k +\varepsilon _k, \quad \ \xi _k=x_k+\delta _k,\ \ 1\le k\le n, \end{aligned}$$

where \(\theta ,\beta , x_1, \ldots , x_n\) are unknown parameters or constants, \(\{(\varepsilon _n,\delta _n),n\ge 1\}\) is a sequence of identically distributed \(\psi \)-mixing random vectors with \(E\varepsilon _1=E\delta _1=0\), \(0<E\varepsilon _1^2, E\delta _1^2<\infty .\) Under the condition of \(E(\varepsilon _1\delta _1)-\beta E\delta _1^2\not =0,\) the necessary and sufficient condition for the strong consistency of LS estimator \(\hat{\beta }_n\) is \(s_n/n\rightarrow \infty .\) Namely,

$$\begin{aligned} \hat{\beta }_n\rightarrow \beta \ \mathrm{a.s.} \quad \Longleftrightarrow \quad s_n/n\rightarrow \infty . \end{aligned}$$

Furthermore, under the conditions of \(\sup _{n\ge 1}n\bar{x}_n^2/s_n^*<\infty \) and \(E(\varepsilon _1\delta _1)-E\delta _1^2\not =0\), the necessary and sufficient condition for the strong consistency of LS estimator \(\hat{\theta }_n\) is \(n\bar{x}_n/s_n^*\rightarrow 0,\) where \(s_n^*=\max \{n,s_n\}.\)