Abstract
In this paper, we establish strong laws for weighted sums of identically distributed \(\psi \)-mixing random variables without any conditions on mixing rate. The classical Kolmogorov strong law of large numbers is extended to weighted sums of \(\psi \)-mixing random variables. Two types of weights are considered for the weighted sums. These results are applied to the least-squares estimators in the simple linear errors-in-variables regression model when the errors are \(\psi \)-mixing random vectors.
1 Introduction
Let \(\{X_n,n\ge 1\}\) be a sequence of identically distributed random variables and \(\{a_{nk}, n\ge 1, 1\le k\le n\}\) an array of constants. A weighted sum is defined by
$$\sum _{k=1}^{n} a_{nk}X_k. \qquad (1.1)$$
The \(a_{nk}\) are called weights. Since many useful linear statistics, e.g., least-squares estimators and nonparametric regression function estimators, are of the form (1.1), it is very interesting and meaningful to study the limiting behaviors for the weighted sums of random variables.
The classical Kolmogorov strong law of large numbers states that if \(\{X_n,n\ge 1\}\) is a sequence of independent and identically distributed random variables with \(EX_1=0,\) then \(n^{-1}\sum _{k=1}^nX_k\rightarrow 0\) a.s. The Kolmogorov strong law of large numbers has been extended to weighted sums by many authors. Let \(\{X_n,n\ge 1\}\) be a sequence of independent and identically distributed random variables and \(\{a_{nk}, n\ge 1, 1\le k\le n\}\) an array of uniformly bounded constants, i.e., \(\sup _{n\ge 1}\max _{1\le k\le n}|a_{nk}|<\infty \). Choi and Sung (1987) showed that if \(EX_1=0\), then
$$n^{-1}\sum _{k=1}^{n} a_{nk}X_k\rightarrow 0\ \ \mathrm{a.s.} \qquad (1.2)$$
When the weights \(a_{nk}\) are \(\alpha \)-th Cesàro uniformly bounded for some \(1<\alpha \le \infty ,\) that is, \(\sup _{n\ge 1}n^{-1}\sum ^n_{k=1}|a_{nk}|^\alpha <\infty \) (when \(\alpha =\infty \) we interpret this as \(\sup _{n\ge 1}\max _{1\le k\le n}|a_{nk}|<\infty \)), Cuzick (1995) showed that (1.2) holds under the moment conditions that \(EX_1=0\) and \(E|X_1|^\beta <\infty ,\) where \(1/\alpha +1/\beta =1\). When \(\alpha =\infty ,\) this reduces to the result of Choi and Sung (1987). Bai and Cheng (2000) extended and generalized the result of Cuzick (1995) to the Marcinkiewicz type strong law, and Chen and Gan (2007) generalized the result of Bai and Cheng (2000) in some directions. Huang et al. (2014) extended the corresponding result of Bai and Cheng (2000) to \(\varphi \)-mixing random variables with \(\sum ^\infty _{n=1}\varphi ^{1/2}(n)<\infty \).
When the weights \(a_{nk}\) are independent of n, i.e., \(a_{kk}=a_{k+1, k}=\cdots ,\) it is possible to prove (1.2) under weaker moment conditions. In fact, Baxter et al. (2004) showed that if \(\{X_n,n\ge 1\}\) is a sequence of independent and identically distributed random variables with \(EX_1=0,\) and \(\{a_n,n\ge 1\}\) is a sequence of \(\alpha \)-th Cesàro uniformly bounded constants for some \(1<\alpha <\infty \), then
$$n^{-1}\sum _{k=1}^{n} a_{k}X_k\rightarrow 0\ \ \mathrm{a.s.} \qquad (1.3)$$
For the sake of clarity, let us recall the concept of \(\psi \)-mixing random variables or random vectors.
Definition 1.1
For a sequence \(\{X_n,n\ge 1\}\) of random variables or random vectors, the \(\psi \)-mixing coefficient \(\psi (n)\) is defined as
$$\psi (n)=\sup _{m\ge 1}\sup \left\{ \left| \frac{P(A\cap B)}{P(A)P(B)}-1\right| : A\in \mathcal{F}^m_1,\ B\in \mathcal{F}^\infty _{m+n},\ P(A)P(B)>0\right\} ,$$
where \(\mathcal{F}^m_n=\sigma (X_i: n\le i\le m)\). Then \(\{X_n,n\ge 1\}\) is said to be \(\psi \)-mixing or *-mixing if \(\psi (n)\rightarrow 0\) as \(n\rightarrow \infty \).
The concept of \(\psi \)-mixing was introduced by Blum et al. (1963), who proved the Kolmogorov strong law of large numbers for identically distributed \(\psi \)-mixing random variables without any conditions on the mixing rate. For \(\psi \)-mixing random variables with \(\sum ^\infty _{n=1}\psi (n)<\infty \), Yang (1995) obtained a moment inequality, an exponential inequality and a strong law for weighted sums, Wang et al. (2010) obtained a maximal inequality and gave some applications, and Xu and Tang (2013) discussed the strong law for Jamison type weighted sums. Since \(\psi \)-mixing is stronger than \(\varphi \)-mixing (see, for example, Lin and Lu 1997), results on \(\varphi \)-mixing also hold for \(\psi \)-mixing.
Motivated by the work of Blum et al. (1963), it is interesting to obtain the limiting behavior of various kinds of mixing random variables without any conditions on the mixing rate. Shao (1988) obtained complete convergence for \(\varphi \)-mixing random variables without any conditions on the mixing rate, and Chen et al. (2009) extended the result of Shao (1988) to moving average processes based on \(\varphi \)-mixing random variables. Since \(\psi \)-mixing implies \(\varphi \)-mixing, their result holds for \(\psi \)-mixing random variables without any conditions on the mixing rate. However, it is not known whether (1.2) or (1.3) holds for \(\varphi \)-mixing random variables without any conditions on the mixing rate. In this paper, we will prove that (1.2) and (1.3) hold for \(\psi \)-mixing random variables.
We now state the main results. The first one extends and generalizes that of Baxter et al. (2004) from the independent case to the \(\psi \)-mixing case, and that of Blum et al. (1963) from partial sums to weighted sums.
Theorem 1.1
Let \(\{X_n,n\ge 1\}\) be a sequence of identically distributed \(\psi \)-mixing random variables and \(\{a_n, n\ge 1\}\) a sequence of constants satisfying \(\sup _{n\ge 1}n^{-1}\sum ^n_{k=1}|a_k|^\alpha <\infty \) for some \(1<\alpha <\infty \). Then \(EX_1=0\) and \(E|X_1|<\infty \) imply that (1.3) holds, i.e., \(n^{-1} \sum _{k=1}^n a_k X_k\rightarrow 0\) a.s. In particular, taking \(a_k\equiv 1\),
$$n^{-1}\sum _{k=1}^{n} X_k\rightarrow 0\ \ \mathrm{a.s.} \qquad (1.4)$$
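Theorem 1.1 can be checked numerically in a special case. The sketch below (a minimal illustration, not part of the paper) uses an independent sequence, which is \(\psi \)-mixing with \(\psi (n)\equiv 0\); Laplace-distributed \(X_k\) with \(EX_1=0\) and \(E|X_1|=1<\infty \); and weights that are unbounded yet \(\alpha \)-th Cesàro uniformly bounded for \(\alpha =2\). The weight pattern, sample size and seed are arbitrary choices.

```python
import random

random.seed(2017)

ALPHA = 2.0
N = 200_000

def laplace():
    # Symmetric Laplace variate: EX = 0 and E|X| = 1 < infinity,
    # so only the first-moment condition of Theorem 1.1 is used.
    sign = 1.0 if random.random() < 0.5 else -1.0
    return sign * random.expovariate(1.0)

def weight(k):
    # a_k = k^{1/alpha} when k is a power of 2, else 1: the weights are
    # unbounded, but n^{-1} * sum_{k<=n} |a_k|^alpha stays bounded.
    return k ** (1.0 / ALPHA) if k & (k - 1) == 0 else 1.0

cesaro = sum(weight(k) ** ALPHA for k in range(1, N + 1)) / N
weighted_mean = sum(weight(k) * laplace() for k in range(1, N + 1)) / N

print(f"Cesaro bound n^(-1)*sum|a_k|^alpha = {cesaro:.3f}")
print(f"n^(-1)*sum a_k*X_k = {weighted_mean:.5f}")
```

For large \(n\) the weighted average is close to 0, consistent with (1.3), even though only \(E|X_1|<\infty \) is assumed.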
The second one extends and generalizes the results of Choi and Sung (1987) and Cuzick (1995) from the independent case to the \(\psi \)-mixing case.
Theorem 1.2
Let \(\{X_n,n\ge 1\}\) be a sequence of identically distributed \(\psi \)-mixing random variables and \(\{a_{nk}, n\ge 1, 1\le k\le n\}\) an array of constants satisfying
$$\sup _{n\ge 1}n^{-1}\sum ^n_{k=1}|a_{nk}|^\alpha <\infty \qquad (1.5)$$
for some \(1<\alpha \le \infty \) (when \(\alpha =\infty \) we interpret this as \(\sup _{n\ge 1}\max _{1\le k\le n}|a_{nk}|<\infty \)). If \(EX_1=0\) and \(E|X_1|^\beta <\infty ,\) where \(1/\alpha +1/\beta =1\), then (1.2) holds, i.e., \(n^{-1}\sum _{k=1}^n a_{nk}X_k\rightarrow 0\) a.s.
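A quick sanity check of Theorem 1.2 in the independent special case (\(\psi (n)\equiv 0\)): the row-dependent weights \(a_{nk}=\sqrt{2}\cos (2\pi k/n)\) below satisfy (1.5) with \(\alpha =2\), so \(\beta =2\) and the theorem requires \(EX_1^2<\infty \), which standard normal variables satisfy. All concrete choices here are illustrative, not from the paper.

```python
import math
import random

random.seed(11)

n = 100_000
# Row-dependent weights: n^{-1} * sum_k |a_nk|^2 is approximately 1,
# so condition (1.5) holds with alpha = 2 (hence beta = 2).
a = [math.sqrt(2.0) * math.cos(2.0 * math.pi * k / n) for k in range(1, n + 1)]
x = [random.gauss(0.0, 1.0) for _ in range(n)]  # EX = 0, EX^2 = 1 < infinity

cesaro = sum(w * w for w in a) / n
wmean = sum(w * xi for w, xi in zip(a, x)) / n

print(f"n^(-1)*sum|a_nk|^2 = {cesaro:.4f}")
print(f"n^(-1)*sum a_nk*X_k = {wmean:.5f}")
```

The weighted mean is near 0 for this single large \(n\), in line with (1.2).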
Some lemmas and the proofs of the main results will be detailed in the next section. The applications of Theorems 1.1 and 1.2 to the least-squares estimators will be shown in Section 3.
Throughout this paper, C denotes a positive constant which is not necessarily the same at each appearance. The symbol \(I(A)\) denotes the indicator function of the event \(A\), \(\lfloor x\rfloor \) denotes the integer part of x, and \(\#B\) denotes the number of elements in the set B.
2 Lemmas and Proofs
To prove Theorem 1.1, we need an analog of the Chung strong law of large numbers, which slightly extends Theorem 2 of Blum et al. (1963), and Theorem 2.20 of Hall and Heyde (1980).
Lemma 2.1
Let \(1<p\le 2, \{Y_n,n\ge 1\}\) be a sequence of \(\psi \)-mixing random variables with \(EY_n=0\) and \(E|Y_n|^p<\infty \) for all \(n\ge 1\). Suppose that
and
Then
Proof
By Markov’s inequality and (2.1)
So to prove (2.3), by the Borel–Cantelli lemma, it suffices to prove that
By \(EY_n=0\) for all \(n\ge 1\), (2.1) and Kronecker’s lemma
as \(n\rightarrow \infty \). So to prove (2.4), it suffices to prove that
Note that by (2.1)
and by (2.2)
Then (2.5) holds from an application of Theorem 2.20 of Hall and Heyde (1980), and the proof is completed. \(\square \)
Proof of Theorem 1.1
By Hölder’s inequality, we can assume that \(1<\alpha \le 2\) such that \(\sup _{n\ge 1}n^{-1}\sum ^n_{k=1}|a_k|^\alpha <\infty \). By Abel’s method, we have that for all \(k\ge 1\)
On account of \(E|X_1|<\infty \)
and so to prove (1.3), by the Borel–Cantelli lemma, it suffices to prove that
and
By Lemma 2.1, to prove (2.7), it suffices to prove that (2.1) and (2.2) hold for \(p=\alpha \) and \(Y_n=a_n[X_nI(|X_n|\le n)-EX_nI(|X_n|\le n)], n\ge 1\). In fact, by the \(c_r\)-inequality, Hölder’s inequality and (2.6)
By Hölder’s inequality, we have that for all \(n\ge 1\)
Therefore, (2.7) holds. By \(E|X_1|<\infty \), \(E|X_1|I(|X_1|>n)\rightarrow 0\) and hence
as \(n\rightarrow \infty \) for any \(s>0\). By \(EX_1=0\) and Hölder’s inequality
i.e., (2.8) holds for \(1/\alpha +1/\beta =1\). The proof is completed. \(\square \)
To prove Theorem 1.2, the following two lemmas on \(\varphi \)-mixing random variables are needed. The first one is a Rosenthal type inequality for \(\varphi \)-mixing random variables (see Shao 1988). The second one shows that Theorem 1.1 holds for uniformly bounded \(\varphi \)-mixing random variables without any conditions on mixing rate and is interesting in itself. Since \(\psi \)-mixing implies \(\varphi \)-mixing, Lemma 2.3 also holds for \(\psi \)-mixing random variables.
Lemma 2.2
Let \(\{Y_n, n\ge 1 \}\) be a sequence of \(\varphi \)-mixing random variables with \(E|Y_n|^s<\infty \) for all \(n\ge 1\) and for some \(s\ge 2\). Then there exists a positive constant C depending only on s and the \(\varphi \)-mixing coefficient \(\varphi (\cdot )\) such that for all \(n\ge 1\)
Remark 2.1
Set \(a(x)=\sum ^{\lfloor \log x\rfloor }_{i=1}\varphi ^{1/2}(2^i)\), \(x>0\). Then by \(\varphi (2^n)\rightarrow 0\) as \(n\rightarrow \infty \), \(\lim _{x\rightarrow \infty }a(x)/\log x=0\) and hence \(\lim _{x\rightarrow \infty }x^{-\delta }\exp (sa(x))=0\) for any \(s>0\) and \(\delta >0\). Therefore, the series \(\sum ^\infty _{n=1}n^{-\lambda }\exp (sa(n))\) converges for any \(s>0\) and \(\lambda >1.\)
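Remark 2.1 can be illustrated with a concrete, hypothetical mixing rate. Take \(\varphi (2^i)=1/i\), so that \(\varphi (2^n)\rightarrow 0\) while \(\sum _i\varphi ^{1/2}(2^i)\) diverges; even then \(a(x)/\log x\rightarrow 0\), which is what drives the convergence of \(\sum _n n^{-\lambda }\exp (sa(n))\). Reading the logarithm in \(a(x)\) as base 2 (an assumption made for this sketch), the check is:

```python
import math

def a_at(p):
    # a(x) evaluated at x = 2^p, i.e. floor(log2(x)) = p, under the
    # hypothetical rate phi(2^i) = 1/i, so phi^{1/2}(2^i) = i^{-1/2}.
    return sum(i ** -0.5 for i in range(1, p + 1))

# a(x)/log(x) at x = 2^p for growing p: tends to 0 even though
# the partial sums a_at(p) ~ 2*sqrt(p) themselves diverge.
ratios = [a_at(p) / (p * math.log(2.0)) for p in (10, 100, 1000, 10000)]
print([round(r, 4) for r in ratios])
```

The ratios decrease toward 0, so \(x^{-\delta }\exp (sa(x))\rightarrow 0\) for any fixed \(s,\delta >0\).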
Lemma 2.3
Let \(\{Y_n,n\ge 1\}\) be a sequence of \(\varphi \)-mixing random variables with \(\sup _{n\ge 1}|Y_n|\le M\) a.s. for some constant \(M>0, \{a_{nk}, n\ge 1, 1\le k\le n\}\) an array of constants satisfying (1.5) for some \(1<\alpha \le \infty \). Then
Proof
Set \(a(n)=\sum ^{\lfloor \log n\rfloor }_{i=1}\varphi ^{1/2}(2^i)\), \(n\ge 1\). We first prove (2.9) for the case \(\alpha =\infty \). When \(\alpha =\infty \), we have \(\sup _{n\ge 1}\max _{1\le k\le n}|a_{nk}|<\infty \). By Markov’s inequality and Lemma 2.2, we have that for any \(s>2\) and any \(\varepsilon >0\)
which ensures by Remark 2.1 that
Then (2.9) holds by the Borel–Cantelli lemma.
We now prove (2.9) for the case \(1<\alpha <\infty \). Without loss of generality, we can assume that for all \(n\ge 1\)
Then by Hölder’s inequality
for all \(n\ge 1\) and for any \(s>\alpha \). Set
where \(1/\alpha +1/\beta =1\). Then \(A_n^{(0)}\supset A_n^{(1)}\supset \cdots \), and by (2.10) and the definition of \(A_n^{(m)}\)
which implies that
Take \(m=5\) if \(\alpha >7/6\), and take \(m\ge 6\) such that \((m+2)/(m+1)<\alpha \le (m+1)/m\) if \(1<\alpha \le 7/6\). Then to prove (2.9), it suffices to prove that
and
By Lemma 2.2 and (2.11), we obtain that for any \(s>\max \{\alpha ,2\}\) and \(\varepsilon >0\)
It is easy to show that
for \(j=1,\ldots ,5\) and for any \(1<\alpha <\infty \). The above also holds for \(1\le j\le m\) when \((m+2)/(m+1)< \alpha \le (m+1)/m\). Then we can take s large enough such that
which ensures by Remark 2.1 that
Then by the Borel–Cantelli lemma, (2.13) holds.
An argument similar to (2.15) leads to
Since we always have \(\alpha >(m+2)/(m+1)\), we can take s large enough such that
which ensures by Remark 2.1 that
By the Borel–Cantelli lemma, (2.14) holds. The proof is completed. \(\square \)
Proof of Theorem 1.2
For any \(\varDelta >0\), set
for \(n\ge 1\). Since \(|Y_n|\le 2\varDelta \) for all \(n\ge 1,\) Lemma 2.3 implies that
By Hölder’s inequality
and by Theorem 1.1
Noting that \(X_n=Y_n+Z_n\) for all \(n\ge 1\) and \(EX_1=0\), we have
which ensures (1.2) by letting \(\varDelta \rightarrow \infty \). \(\square \)
Remark 2.2
The Kolmogorov strong law of large numbers holds for identically distributed \(\varphi \)-mixing random variables with \(\sum ^\infty _{n=1}\varphi ^{1/2}(2^n)<\infty \) (see, for example, Theorem 8.2.2 of Lin and Lu 1997). By the same proof of Theorem 1.2, except that Theorem 1.1 is replaced by Theorem 8.2.2 of Lin and Lu (1997), we can obtain that Theorem 1.2 also holds for \(\varphi \)-mixing random variables with \(\sum ^\infty _{n=1}\varphi ^{1/2}(2^n)<\infty .\)
3 Applications
We consider the simple linear errors-in-variables (EV) regression model:
$$\eta _k=\theta +\beta x_k+\varepsilon _k,\quad \xi _k=x_k+\delta _k,\quad 1\le k\le n, \qquad (3.1)$$
where \(\theta ,\beta , x_1, \ldots , x_n\) are unknown parameters or constants, \((\varepsilon _k,\delta _k), 1\le k\le n,\) are random vectors and \(\xi _k,\eta _k, 1\le k\le n,\) are observable variables. From (3.1), we have
As a usual regression model of \(\eta _k\) on \(\xi _k\) with the errors \(\varepsilon _k-\beta \delta _k\), the least-squares (LS) estimators of \(\beta \) and \(\theta \) are given as
where \(\bar{\xi }_n=n^{-1}\sum ^n_{k=1}\xi _k\), and other similar notations, such as \(\bar{\eta }_n, \bar{\delta }_n\) and \(\bar{x}_n\), are defined in the same way.
Define \(s_n=\sum ^n_{k=1}(x_k-\bar{x}_n)^2\) for each \(n\ge 1\). Based on the above notations, we have
and
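To make the estimators concrete: \(\hat{\beta }_n\) and \(\hat{\theta }_n\) are the standard least-squares solutions of the regression of \(\eta _k\) on \(\xi _k\), namely \(\hat{\beta }_n=\sum ^n_{k=1}(\xi _k-\bar{\xi }_n)(\eta _k-\bar{\eta }_n)/\sum ^n_{k=1}(\xi _k-\bar{\xi }_n)^2\) and \(\hat{\theta }_n=\bar{\eta }_n-\hat{\beta }_n\bar{\xi }_n\). The sketch below simulates the EV model with independent normal errors (a \(\psi \)-mixing special case) and the hypothetical design \(x_k=k\), for which \(s_n/n\rightarrow \infty \); all numerical values are illustrative.

```python
import random

random.seed(7)

BETA, THETA = 2.0, 1.0
n = 5_000

x = [float(k) for k in range(1, n + 1)]            # design: s_n/n -> infinity
eps = [random.gauss(0.0, 1.0) for _ in range(n)]   # errors in eta
dlt = [random.gauss(0.0, 1.0) for _ in range(n)]   # errors in xi
xi = [xk + dk for xk, dk in zip(x, dlt)]           # observed regressor
eta = [THETA + BETA * xk + ek for xk, ek in zip(x, eps)]

xi_bar, eta_bar = sum(xi) / n, sum(eta) / n
beta_hat = (sum((u - xi_bar) * (v - eta_bar) for u, v in zip(xi, eta))
            / sum((u - xi_bar) ** 2 for u in xi))
theta_hat = eta_bar - beta_hat * xi_bar

print(f"beta_hat = {beta_hat:.4f} (true {BETA})")
print(f"theta_hat = {theta_hat:.4f} (true {THETA})")
```

With this rapidly spreading design the measurement error in \(\xi _k\) causes essentially no attenuation, matching the consistency asserted in Theorem 3.1.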
This model was proposed by Deaton (1985) to correct for the effects of sampling errors, and it is somewhat more practical than the ordinary regression model. Fuller (1987) summarized many early works on EV models. Over the last two decades, studies of the EV model have attracted much attention due to its simple form and wide applicability; for more details, we refer to Liu and Chen (2005), Miao et al. (2011), Wang et al. (2015) and their references. In particular, Liu and Chen (2005) discussed necessary and sufficient conditions for the strong consistency of \(\hat{\beta }_n\) and the weak consistency of \(\hat{\theta }_n\) under the assumption that the errors \((\varepsilon _n, \delta _n), n\ge 1,\) are independent and identically distributed. However, the independence assumption on the errors is not always valid in applications. In particular, when the data are collected sequentially in time, e.g., consumer price indices or annual rainfall, the errors are typically not independent. The result of Liu and Chen (2005) has been partially extended to the model with dependent errors: Fan et al. (2010) proved a sufficient condition for the strong consistency of \(\hat{\beta }_n\) when the errors are stationary \(\alpha \)-mixing, under a condition on the mixing rate and higher-order moment conditions.
As applications of Theorems 1.1 and 1.2, we will extend and generalize the results of Liu and Chen (2005) from the independent case to the \(\psi \)-mixing setting.
In the following, the strong consistency of the LS estimators of the unknown parameters is established. The first result is the strong consistency of the estimator of \(\beta \).
Theorem 3.1
Under the model (3.1), assume that \(\{(\varepsilon _n, \delta _n),n\ge 1\}\) is a sequence of identically distributed \(\psi \)-mixing random vectors with \(E\varepsilon _1=E\delta _1=0\), \(0<E\varepsilon _1^2, E\delta _1^2<\infty \). Then \(s_n/n\rightarrow \infty \) implies that
Conversely, if \(E(\varepsilon _1\delta _1)-\beta E\delta _1^2\not =0\), then \(\hat{\beta }_n\rightarrow \beta \ \mathrm{a.s.}\) implies that \(s_n/n\rightarrow \infty \).
Proof of sufficiency
Assume that \(s_n/n\rightarrow \infty \). To prove \(\hat{\beta }_n\rightarrow \beta \) a.s., by (3.2), it suffices to prove that
By Theorem 1.1
and
Hence, (3.4) and (3.6) hold. Set \(a_{nk}=n(x_k-\bar{x}_n)/s_n\) for \(n\ge 1\) and \(1\le k\le n\). Then
Therefore, by Theorem 1.2
and
Then (3.5) holds from (3.8) and (3.9). Note that
Then (3.7) holds by (3.6) and (3.9).
Proof of necessity
Suppose that \(s_n/n\rightarrow \infty \) does not hold. Taking a subsequence if necessary, we may assume that
By Theorem 1.1
and
Set \(a_{nk}=x_k-\bar{x}_n\) for \(n\ge 1\) and \(1\le k\le n\). By (3.10)
Then by Theorem 1.2
which implies that
Thus by (3.2), (3.11), (3.12), (3.14) and (3.15)
which leads to a contradiction to \(\hat{\beta }_n\rightarrow \beta \ \mathrm{a.s.}\), so we have \(s_n/n\rightarrow \infty \). The proof is completed. \(\square \)
Remark 3.1
When \(\{(\varepsilon _n, \delta _n),n\ge 1\}\) is a sequence of independent and identically distributed random vectors, Theorem 3.1 was proved by Liu and Chen (2005). When \(\{(\varepsilon _n, \delta _n),n\ge 1\}\) is a stationary \(\alpha \)-mixing sequence satisfying the higher-order moment conditions \(E|\varepsilon _1|^{2+t}<\infty \) and \(E|\delta _1|^{2+t}<\infty \) for some \(t>0\) and the mixing rate condition \(\alpha (n)=O(\log ^{-\gamma }n)\) for some \(\gamma >1+2/t,\) Fan et al. (2010) proved the sufficiency part of Theorem 3.1. Although \(\psi \)-mixing is stronger than \(\alpha \)-mixing, Theorem 3.1 is a complete extension of the result of Liu and Chen (2005).
The second result is the strong consistency of the estimator of \(\theta .\)
Theorem 3.2
Under the assumptions of Theorem 3.1, further assume that \(\sup _{n\ge 1}n\bar{x}_n^2/s_n^*<\infty \), where \(s_n^*=\max \{n,s_n\}.\) Then \(n\bar{x}_n/s_n^*\rightarrow 0\) implies that
Conversely, if \(E(\varepsilon _1\delta _1)-E\delta _1^2\not =0\), then \(\hat{\theta }_n\rightarrow \theta \ \mathrm{a.s.}\) implies that \(n\bar{x}_n/s_n^*\rightarrow 0\).
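The design conditions in Theorem 3.2 are easy to check numerically. The hypothetical designs below contrast \(x_k=k\), for which \(n\bar{x}_n/s_n^*\rightarrow 0\), with the degenerate design \(x_k\equiv 1\), for which \(n\bar{x}_n/s_n^*=1\) for every \(n\); both designs and sample sizes are illustrative choices.

```python
def design_ratio(xs):
    # n * xbar_n / s_n^*  with  s_n^* = max(n, s_n).
    n = len(xs)
    xbar = sum(xs) / n
    s_n = sum((x - xbar) ** 2 for x in xs)
    return n * xbar / max(float(n), s_n)

sizes = (100, 1000, 10000)
linear = [design_ratio([float(k) for k in range(1, n + 1)]) for n in sizes]
constant = [design_ratio([1.0] * n) for n in sizes]
print("x_k = k :", [round(r, 5) for r in linear])  # tends to 0 (it equals 6/(n-1))
print("x_k = 1 :", constant)                       # stays at 1
```

So \(\hat{\theta }_n\) is strongly consistent for the first design, while for the second the necessary condition of Theorem 3.2 fails.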
Proof of sufficiency
Assume that \(n\bar{x}_n/s_n^*\rightarrow 0\). By (3.11)
and
By (3.12)
and
Note that
and
Hence, by Theorem 1.2
and
By the definition of \(s_n^*\), (3.19) and (3.21)
By (3.2), (3.16), (3.18), (3.20) and (3.22)
By (3.2), (3.17), (3.19), (3.21) and (3.22)
By (3.3), (3.23), (3.24), and noting that \(\bar{\varepsilon }_n\rightarrow 0\ \ \mathrm{a.s.}\) and \(\bar{\delta }_n\rightarrow 0\ \ \mathrm{a.s.}\) by Theorem 1.1
Proof of necessity
Suppose that \(n\bar{x}_n/s_n^*\rightarrow 0\) does not hold. Taking a subsequence if necessary, we may assume that
Then
and
Hence, by (3.2), (3.20), (3.22), (3.26) and (3.27)
Therefore, by (3.3), (3.28), (3.24), \(\bar{\varepsilon }_n\rightarrow 0\ \ \mathrm{a.s.}\), and \(\bar{\delta }_n\rightarrow 0\ \ \mathrm{a.s.}\), we have
which leads to a contradiction to \(\hat{\theta }_n\rightarrow \theta \ \mathrm{a.s.}\), so we have \(n\bar{x}_n/s_n^*\rightarrow 0\). The proof is completed. \(\square \)
Remark 3.2
Even in the case of independent errors, Theorem 3.2 appears to be new.
4 Conclusions
For a sequence \(\{X_n, n\ge 1 \}\) of identically distributed \(\psi \)-mixing random variables with \(EX_1=0\) and an array \(\{a_{nk}, n\ge 1, 1\le k\le n\}\) of constant weights, conditions on both the weights and the moment of \(X_1\) have been given under which the weighted sums \(n^{-1}\sum _{k=1}^n a_{nk}X_k\) converge to 0 a.s. In general, the condition on weights \(a_{nk}\) is \(\sup _{n\ge 1} n^{-1}\sum ^n_{k=1}|a_{nk}|^\alpha <\infty \) for some \(1<\alpha \le \infty \) (when \(\alpha =\infty \) we interpret this as \(\sup _{n\ge 1}\max _{1\le k\le n}|a_{nk}|<\infty \)), and the moment condition is \(E|X_1|^\beta <\infty ,\) where \(1/\alpha +1/\beta =1.\) When the weights \(a_{nk}\) are independent of n, i.e., \(a_{kk}=a_{k+1, k}=\cdots ,\) the moment condition can be weakened to \(E|X_1|<\infty .\)
The above results, based on two types of weights, have been applied to the simple linear errors-in-variables regression model
$$\eta _k=\theta +\beta x_k+\varepsilon _k,\quad \xi _k=x_k+\delta _k,\quad 1\le k\le n,$$
where \(\theta ,\beta , x_1, \ldots , x_n\) are unknown parameters or constants, and \(\{(\varepsilon _n,\delta _n),n\ge 1\}\) is a sequence of identically distributed \(\psi \)-mixing random vectors with \(E\varepsilon _1=E\delta _1=0\) and \(0<E\varepsilon _1^2, E\delta _1^2<\infty .\) Under the condition \(E(\varepsilon _1\delta _1)-\beta E\delta _1^2\not =0,\) the necessary and sufficient condition for the strong consistency of the LS estimator \(\hat{\beta }_n\) is \(s_n/n\rightarrow \infty .\) Namely,
Furthermore, under the conditions \(\sup _{n\ge 1}n\bar{x}_n^2/s_n^*<\infty \) and \(E(\varepsilon _1\delta _1)-E\delta _1^2\not =0\), the necessary and sufficient condition for the strong consistency of the LS estimator \(\hat{\theta }_n\) is \(n\bar{x}_n/s_n^*\rightarrow 0,\) where \(s_n^*=\max \{n,s_n\}.\)
References
Bai ZD, Cheng PE (2000) Marcinkiewicz strong laws for linear statistics. Stat Probab Lett 46:105–112
Baxter J, Jones R, Lin M, Olsen J (2004) SLLN for weighted independent identically distributed random variables. J Theor Probab 17:165–181
Blum JR, Hanson DL, Koopmans LH (1963) On the strong law of large numbers for a class of stochastic processes. Z Wahrsch Verw Gebiete 2:1–11
Chen P, Hu T-C, Volodin A (2009) Limiting behaviour of moving average processes under \(\varphi \)-mixing assumption. Stat Probab Lett 79:105–111
Chen P, Gan S (2007) Limiting behavior of weighted sums of i.i.d. random variables. Stat Probab Lett 77:1589–1599
Choi BD, Sung SH (1987) Almost sure convergence theorems of weighted sums of random variables. Stoch Anal Appl 5:365–377
Cuzick J (1995) A strong law for weighted sums of i.i.d. random variables. J Theor Probab 8:625–641
Deaton A (1985) Panel data from time series of cross-sections. J Econom 30:109–126
Fan GL, Liang HY, Wang JF, Xu HX (2010) Asymptotic properties for LS estimators in EV regression model with dependent errors. AStA Adv Stat Anal 94:89–103
Fuller W (1987) Measurement error models. Wiley, New York
Hall P, Heyde CC (1980) Martingale limit theory and its application. Academic Press, New York
Huang H, Wang D, Peng J (2014) On the strong law of large numbers for weighted sums of \(\varphi \)-mixing random variables. J Math Inequal 8:465–473
Lin Z, Lu C (1997) Limit theory for mixing dependent random variables. Kluwer Academic Publishers/Science Press, Dordrecht/Beijing
Liu JX, Chen XR (2005) Consistency of LS estimator in simple linear EV regression models. Acta Math Sci Ser B Engl Ed 25:50–58
Miao Y, Wang K, Zhao F (2011) Some limit behaviors for the LS estimator in simple linear EV regression models. Stat Probab Lett 81:92–102
Shao Q (1988) A moment inequality and its application. Acta Math Sin 31:736–747 (in Chinese)
Wang X, Hu S, Shen Y, Yang W (2010) Maximal inequality for \(\psi \)-mixing sequences and its applications. Appl Math Lett 23:1156–1161
Wang X, Shen A, Chen Z, Hu S (2015) Complete convergence for weighted sums of NSD random variables and its application in the EV regression model. Test 24:166–184
Xu H, Tang L (2013) Strong convergence properties for \(\psi \)-mixing random variables. J Inequal Appl 2013:360
Yang SC (1995) Almost sure convergence of weighted sums of mixing sequences. J Syst Sci Math Sci 15:254–265
Acknowledgements
The authors would like to thank the Editor-in-Chief Ana F. Militino and the referees for the helpful comments and suggestions that considerably improved the presentation of this paper.
The research of Pingyan Chen is supported by the National Natural Science Foundation of China (No. 11271161). The research of Soo Hak Sung is supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2014R1A1A2058041).
Hu, D., Chen, P. & Sung, S.H. Strong laws for weighted sums of \(\psi \)-mixing random variables and applications in errors-in-variables regression models. TEST 26, 600–617 (2017). https://doi.org/10.1007/s11749-017-0526-6
Keywords
- Strong law of large numbers
- Weighted sum
- \(\psi \)-Mixing
- Strong consistency
- Simple linear errors-in-variables regression model