1 Introduction

In many statistical problems, the random variables involved are assumed to be independent. However, this assumption is often unrealistic in practice, for example when dealing with financial time series. Therefore, various dependence structures have been introduced by many authors. The widely orthant-dependent (WOD, for short) structure is one of the most general of these; it was introduced by Wang et al. [25] as follows.

Definition 1.1

For the random variables \(\{X_{n},n\ge 1\}\), if there exists a finite real sequence \(\{g_{U}(n),n\ge 1\}\) satisfying for each \(n\ge 1\) and for all \(x_{k}\in (-\infty ,\infty ),1\le k\le n\),

$$\begin{aligned} P(X_{1}>x_{1}, X_{2}> x_{2},\ldots ,X_{n}> x_{n})\le g_{U}(n)\prod _{k=1}^{n}P(X_{k}> x_{k}), \end{aligned}$$

then we say that the \(\{X_{n},n\ge 1\}\) are widely upper orthant dependent (WUOD, for short); if there exists a finite real sequence \(\{g_{L}(n),n\ge 1\}\) satisfying for each \(n\ge 1\) and for all \(x_{k}\in (-\infty ,\infty ),1\le k\le n\),

$$\begin{aligned}P(X_{1}\le x_{1}, X_{2}\le x_{2},\ldots ,X_{n}\le x_{n})\le g_{L}(n)\prod _{k=1}^{n}P(X_{k}\le x_{k}), \end{aligned}$$

then we say that the \(\{X_{n},n\ge 1\}\) are widely lower orthant dependent (WLOD, for short); if they are both WUOD and WLOD, then we say that the \(\{X_{n},n\ge 1\}\) are WOD random variables, and \(g_{U}(n)\), \(g_{L}(n), n\ge 1,\) are called dominating coefficients.
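
For intuition, negatively associated random variables (see Joag-Dev and Proschan [11]) satisfy both inequalities with \(g_{U}(n)=g_{L}(n)=1\) and are therefore WOD, so the defining inequalities can be checked empirically. The following minimal Python sketch — our illustration, not part of the original development — does so for a negatively correlated bivariate normal pair; the correlation \(-0.5\), the thresholds and the sample size are arbitrary choices.

```python
# Monte Carlo check of the WUOD inequality for a negatively correlated
# bivariate normal pair (an NA, hence WOD, vector with g_U(2) = 1).
import numpy as np

rng = np.random.default_rng(0)
rho = -0.5                                   # any rho <= 0 gives an NA pair
cov = [[1.0, rho], [rho, 1.0]]
x = rng.multivariate_normal([0.0, 0.0], cov, size=1_000_000)

x1, x2 = 0.5, 0.8                            # arbitrary thresholds
joint = np.mean((x[:, 0] > x1) & (x[:, 1] > x2))        # P(X1>x1, X2>x2)
prod = np.mean(x[:, 0] > x1) * np.mean(x[:, 1] > x2)    # P(X1>x1) P(X2>x2)
print(f"joint tail {joint:.4f} <= product {prod:.4f}")  # holds with g_U = 1
```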

Since the concept of WOD random variables was introduced, many authors have focused on the limit behavior of WOD random variables. For example, Wang et al. [26] studied the precise large deviations for WOD random variables with dominated varying tails; Yang et al. [36] investigated the Bahadur representation of sample quantiles for WOD random variables; Gao et al. [6] established the precise large deviations for WOD random variables with different distributions; Lu et al. [14] investigated the complete f-moment convergence for WOD random variables and gave its applications; Shen et al. [19] studied the asymptotic properties of the estimators of the survival function and the failure rate function based on WOD samples; He [8] established the strong consistency and complete consistency for the Priestley–Chao estimator in nonparametric regression models with widely orthant dependent errors under some general conditions; Wu et al. [32] studied the limiting behavior for arrays of rowwise WOD random variables under the condition of Rh-integrability and gave its application; Shen et al. [18] provided some sufficient conditions for the complete convergence of widely negatively orthant dependent (WNOD, for short) random variables; Ning et al. [17] investigated the complete convergence of weighted sums of WNOD random variables and gave its application to nonparametric regression models. We now introduce the problem on which we focus.

It is well known that, for a sequence of independent and identically distributed random variables \(\{X,X_{n},n\ge 1\}\), Spitzer [21] proved that \(EX=0\) is equivalent to

$$\begin{aligned} \sum _{n=1}^{\infty }n^{-1}P\left( \left| \sum _{i=1}^{n}X_{i}\right|>\varepsilon n\right) <\infty ,\text {~~for~~all~~}\varepsilon >0, \end{aligned}$$
(1.1)

and (1.1) is equivalent to

$$\begin{aligned} \sum _{n=1}^{\infty }n^{-1}P\left( \max _{1\le k \le n}\left| \sum _{i=1}^{k}X_{i}\right|>\varepsilon n\right) <\infty ,\text {~~for~~all~~}\varepsilon >0. \end{aligned}$$
(1.2)

Chen and Sung [2] extended the result (1.2) to WOD random variables.

Theorem A

Let \(\{X,X_n,n\ge 1\}\) be a sequence of identically distributed WOD random variables with dominating coefficients \(g_{L}(n), g_{U}(n)\) for \(n\ge 1\). Suppose that there exists a nondecreasing positive function g(x) on \([0,\infty )\) such that \(\max \{g_{L}(n),g_{U}(n)\}\le g(n)\) for \(n\ge 1\). Assume that one of the following conditions holds:

  1. (i)

    \(g(x)/x^{\tau }\downarrow \) for some \(0<\tau <1\), and \(E|X|g(|X|)<\infty \);

  2. (ii)

    There exists a nondecreasing positive function h(x) on \([0,\infty )\), such that \(h(x)/x\downarrow \) and \(\sum _{n=1}^{\infty }g(n)/(nh^\gamma (n))<\infty \) for some \(\gamma >0\), and \(E|X|h(|X|)<\infty \).

Then

$$\begin{aligned} \sum _{n=1}^{\infty }n^{-1}P\left( \max _{1\le k \le n}\left| \sum _{i=1}^{k}(X_{i}-EX_{i})\right|>\varepsilon n\right) <\infty ,\text {~~for~~all~~}\varepsilon >0. \end{aligned}$$

However, the above result only considers the special moment condition \(E|X|g(|X|)<\infty \) or \(E|X|h(|X|) <\infty \), and it treats only unweighted partial sums, while general constant weights can also be considered. Furthermore, the assumption of identical distribution seems too strong.

The above type of convergence is called complete convergence, which was introduced by Hsu and Robbins [10] as follows:

Definition 1.2

A sequence \(\{X_{n},n\ge 1\}\) of random variables converges completely to the constant C, if

$$\begin{aligned}\ \sum _{n=1}^{\infty }P(|X_{n}-C|>\varepsilon )<\infty ,\text {~~for~~all~~} \varepsilon >0. \end{aligned}$$

By the Borel–Cantelli lemma, complete convergence implies \(X_{n}\rightarrow C\) almost surely (a.s., for short). Therefore, complete convergence is a useful tool for proving almost sure convergence.
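
To make this implication explicit, fix \(\varepsilon >0\); the first Borel–Cantelli lemma gives

$$\begin{aligned} \sum _{n=1}^{\infty }P(|X_{n}-C|>\varepsilon )<\infty \Longrightarrow P(|X_{n}-C|>\varepsilon \text {~~i.o.})=0, \end{aligned}$$

and letting \(\varepsilon \downarrow 0\) along a countable sequence yields \(X_{n}\rightarrow C\) a.s.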

Complete convergence for various dependence structures has been studied by many authors. For example, Baum and Katz [1] established a rate of convergence in the Marcinkiewicz–Zygmund law of large numbers; Sung [23] investigated the complete convergence for weighted sums of \(\rho ^{*}\)-mixing random variables; Wang et al. [27] studied the complete convergence for weighted sums of negatively superadditive dependent (NSD, for short) random variables; Li et al. [12] established the complete convergence for randomly weighted extended negatively dependent (END, for short) random variables and gave its application; Wu et al. [34] investigated the complete convergence for identically distributed END random variables; Zhang et al. [37] studied the complete consistency for the weighted estimator of a nonparametric regression model; Hosseini and Nezakati [9] investigated the complete moment convergence for END linear processes with random coefficients and gave its applications; Liang and Wu [13] provided the complete convergence and complete moment convergence for END random variables under sub-linear expectations; Lu and Wang [16] studied the complete moment convergence for widely orthant dependent linear processes with random coefficients; Lu et al. [15] established the complete and complete moment convergence for randomly weighted sums of \(\rho ^{*}\)-mixing random variables and gave applications to linear-time-invariant systems and nonparametric regression models; Deng and Wang [4] investigated the complete convergence for extended independent random variables under sub-linear expectations.

Inspired by the research above and the problem just described, the main purpose of this paper is to extend the result of Chen and Sung [2] to a much more general type of complete convergence for WOD random variables, with the assumption of identical distribution replaced by stochastic domination. Furthermore, we give an application to the complete consistency of the estimator in a nonparametric regression model and present a simulation study.

The following concept of stochastic domination will be used in this paper; it is weaker than identical distribution, since any identically distributed sequence is stochastically dominated by \(X=X_{1}\) with \(C=1\).

Definition 1.3

A sequence \(\{X_{n},n\ge 1\}\) of random variables is said to be stochastically dominated by a random variable X if there exists a constant C, such that

$$\begin{aligned} P(|X_{n}|>x)\le CP(|X|>x)\text {~~for all~~}x>0 \text {~~and~~} n\ge 1. \end{aligned}$$

The layout of this paper is as follows: some preliminary lemmas are collected in Sect. 2. The main results and their proofs are given in Sect. 3. An application of our main results to the nonparametric regression model and a numerical simulation are presented in Sect. 4. Finally, conclusions are given in Sect. 5.

Throughout this paper, the symbol C denotes a positive constant which is not necessarily the same one in each appearance. Denote \(\log x=\ln (\max \{x,e\})\). \(A_{n}=O(B_{n})\) stands for \(|A_{n}|\le C|B_{n}|\) for all \(n\ge 1\). Let \(g(n)=\max \{g_{L}(n),g_{U}(n)\}\) be the dominating coefficients of the WOD sequence and I(A) be the indicator function of the set A.

2 Preliminary lemmas

To prove our main results, we need the following important lemmas. The first one is a basic property for WOD random variables, which was established by Wang et al. [28].

Lemma 2.1

  Let \(\{X_n,n\ge 1\}\) be a sequence of WOD random variables. If \(\{f_n,n\ge 1\}\) is a sequence of all nondecreasing (or all nonincreasing) functions, then \(\{f_n(X_n),n\ge 1\}\) is also a sequence of WOD random variables with the same dominating coefficients.

We need some moment inequalities to prove our main results. The first one is the Rosenthal-type moment inequality for WOD random variables which was obtained by Wang et al. [28].

Lemma 2.2

Let \(p\ge 2\) and \(\{X_n,n\ge 1\}\) be a sequence of zero mean WOD random variables with the dominating coefficients g(n) and \(E|X_n|^p<\infty \) for each \(n\ge 1\). Then there exists a positive constant \(C_p\) depending only on p such that

$$\begin{aligned} E\left| \sum \limits _{i=1}^{n}X_i\right| ^p\le C_p\left\{ \sum \limits _{i=1}^{n}E|X_i|^p+g(n)\left( \sum \limits _{i=1}^{n}EX_i^2\right) ^{p/2}\right\} . \end{aligned}$$

By Corollary 2.3 in Wang et al. [28] and Theorem 2.3.1 in Stout [24], we can obtain the Rosenthal-type moment inequality for the maximum of partial sums of WOD random variables.

Lemma 2.3

Let \(p\ge 2\) and \(\{X_n,n\ge 1\}\) be a sequence of zero mean WOD random variables with the dominating coefficients g(n) and \(E|X_n|^p<\infty \) for each \(n\ge 1\). Then there exists a positive constant \(C_p\) depending only on p such that

$$\begin{aligned} E\max _{1\le j\le n}\left| \sum \limits _{i=1}^{j}X_i\right| ^p\le C_p(\log n)^p\left\{ \sum \limits _{i=1}^{n}E|X_i|^p+g(n)\left( \sum \limits _{i=1}^{n}EX_i^2\right) ^{p/2}\right\} . \end{aligned}$$

To relax the assumption of identical distribution to stochastic domination, we need the following lemma, which can be found in Wu [30] or Shen et al. [20].

Lemma 2.4

Let \(\lbrace X_n,n\ge 1\rbrace \) be a sequence of random variables which is stochastically dominated by a random variable X. For any \(\alpha >0\) and \(b>0\), the following two statements hold:

$$\begin{aligned} E\vert X_n\vert ^{\alpha }I(\vert X_n\vert \le b)&\le C_1\left[ E\vert X\vert ^{\alpha }I(\vert X\vert \le b)+b^{\alpha } P(\vert X\vert>b)\right] ,\\ E\vert X_n\vert ^{\alpha }I(\vert X_n\vert>b)&\le C_2E\vert X\vert ^{\alpha }I(\vert X\vert >b), \end{aligned}$$

where \(C_1\) and \(C_2\) are positive constants. Consequently, \(E\vert X_n\vert ^\alpha \le CE\vert X\vert ^\alpha \), where C is a positive constant.
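
Indeed, the last claim follows by adding the two bounds for any fixed \(b>0\) and noting that \(b^{\alpha }P(\vert X\vert>b)\le E\vert X\vert ^{\alpha }I(\vert X\vert >b)\):

$$\begin{aligned} E\vert X_n\vert ^{\alpha }\le C_1E\vert X\vert ^{\alpha }I(\vert X\vert \le b)+(C_1+C_2)E\vert X\vert ^{\alpha }I(\vert X\vert >b)\le CE\vert X\vert ^{\alpha }. \end{aligned}$$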

For the convenience of our proofs, we also need the following lemma, which can be extracted from the proof of Theorem 1.1 in Chen and Sung [2].

Lemma 2.5

For a random variable X, let g(x) be a nondecreasing positive function on \([0,\infty )\). Assume that \(g(x)/x^\tau \downarrow \) for some \(0<\tau <1\) and \(E|X|g(|X|)<\infty \). Then

$$\begin{aligned} \sum \limits _{n=1}^{\infty }n^{-2}g(n)EX^2I(|X|\le n)+\sum \limits _{n=1}^{\infty }g(n)P(|X|>n)<\infty . \end{aligned}$$

3 The main results and their proofs

Before presenting our main results and their proofs, we need the following assumptions.

(A.1):

Let \(\{X_i,i\ge 1\}\) be a sequence of WOD random variables with dominating coefficients \(g(n), n\ge 1\), which is stochastically dominated by a random variable X.

(A.2):

Let g(x) be a nondecreasing positive function on \([0,\infty )\), such that \(g(x)/x^\tau \downarrow \) for some \(0<\tau <1\).

(A.3):

There exists a nondecreasing positive function h(x) on \([0,\infty )\), such that \(h(x)/x\downarrow \) and \(\sum _{n=1}^{\infty }g(n)/(nh^\gamma (n))<\infty \) for some \(\gamma >0\).

Theorem 3.1

Let (A.1) hold and \(p\ge 1\). When \(p=1\), assume that either (A.2) holds and \(E|X|g(|X|)<\infty \), or (A.3) holds and \(E|X|h(|X|)<\infty \). When \(p>1\), assume that \(E|X|^{p}<\infty \) and that (A.2) or (A.3) holds. Let \(\{a_{ni},1\le i\le n,n\ge 1\}\) be an array of real numbers satisfying

$$\begin{aligned} \max _{1\le i\le n}|a_{ni}|=O(n^{-1}). \end{aligned}$$
(3.1)

Then, for all \(\varepsilon >0\),

$$\begin{aligned} \sum _{n=1}^{\infty }n^{p-2}P\left( \max _{1\le k\le n}\left| \sum _{i=1}^{k}a_{ni}(X_{i}-EX_{i})\right| >\varepsilon \right) <\infty . \end{aligned}$$
(3.2)

Proof

Without loss of generality, we may assume that \(X_{i}\ge 0\) for \(i\ge 1\) and \(a_{ni}\ge 0\) for \(1\le i\le n\) and all \(n\ge 1\). To simplify our proof, we may assume that \(\max \limits _{1\le i\le n}a_{ni}\le n^{-1}\), and thus,

$$\begin{aligned} \sum _{i=1}^{n}a_{ni}^{2}\le \left( \max _{1\le i\le n}a_{ni}\right) ^{2}\sum _{i=1}^{n}1\le n^{-1}. \end{aligned}$$
(3.3)

When \(p=1\), it is obvious that \(E|X|<\infty \) by \(E|X|g(|X|)<\infty \) or \(E|X|h(|X|)<\infty \). When \(p>1\), \(E|X|<\infty \) follows from \(E|X|^{p}<\infty \). Therefore, there exists a positive integer N such that \(E|X|I(|X|>N)<\frac{\varepsilon }{8C}.\)

Denote for \(i\ge 1\) that

$$\begin{aligned} X_{i}'=X_{i}I(X_{i}\le N)+NI(X_{i}> N), X_{i}''=X_i-X_{i}'=(X_{i}-N)I(X_{i}> N). \end{aligned}$$

It is easily checked that

$$\begin{aligned}&\sum _{n=1}^{\infty }n^{p-2}P\left( \max _{1\le k\le n}\left| \sum _{i=1}^{k}a_{ni}(X_{i}-EX_{i})\right|>\varepsilon \right) \\&\quad \le \sum _{n=1}^{\infty }n^{p-2}P\left( \max _{1\le k\le n}\left| \sum _{i=1}^{k}a_{ni}(X_{i}'-EX_{i}')\right|>\frac{\varepsilon }{2}\right) \\&\qquad +\sum _{n=1}^{\infty }n^{p-2}P\left( \max _{1\le k\le n}\left| \sum _{i=1}^{k}a_{ni}(X_{i}''-EX_{i}'')\right| >\frac{\varepsilon }{2}\right) \\&\quad =:I_{1}+I_{2}. \end{aligned}$$

Thus, to prove the desired result (3.2), we only need to show \(I_{1}<\infty \) and \(I_{2}<\infty \).

For each \(n\ge 1\), it follows from Lemma 2.1 that \(\{a_{ni}(X_{i}'-EX_{i}'),1\le i\le n\}\) are also WOD random variables with the same dominating coefficients.

Taking \(q>\max \{2p,2(p+\gamma -1)\}\), we have by Lemma 2.3 and Markov’s inequality that

$$\begin{aligned} I_{1}\le & {} C\sum _{n=1}^{\infty }n^{p-2}E\left( \max _{1\le k\le n}\left| \sum _{i=1}^{k}a_{ni}(X_{i}'-EX_{i}')\right| \right) ^{q}\\\le & {} C\sum _{n=1}^{\infty }n^{p-2}(\log n)^{q}\left[ \sum _{i=1}^{n}E\left| a_{ni}(X_{i}'-EX_{i}')\right| ^{q}\right. \\&+\left. g(n)\left( \sum _{i=1}^{n}E\left( a_{ni}(X_{i}'-EX_{i}')\right) ^{2}\right) ^{q/2}\right] \\=: & {} I_{11}+I_{12}. \end{aligned}$$

For \(I_{11}\), it follows from the definition of \(\{X_{i}',i\ge 1\}\), Jensen’s inequality and (3.1) that

$$\begin{aligned} I_{11}\le & {} C\sum _{n=1}^{\infty }n^{p-2}(\log n)^{q}\sum _{i=1}^{n}a_{ni}^{q}E|X_{i}'|^q\\\le & {} C\sum _{n=1}^{\infty }n^{p-2}(\log n)^{q}\sum _{i=1}^{n}a_{ni}^{q}\\\le & {} C\sum _{n=1}^{\infty }n^{p-2}(\log n)^{q}\left( \max _{1\le i\le n}a_{ni}\right) ^{q}\sum _{i=1}^{n}1\\\le & {} C\sum _{n=1}^{\infty }n^{p-1-q}(\log n)^{q}<\infty . \end{aligned}$$

For \(I_{12}\), similarly, we have by (3.3) and (A.2) or (A.3) that

$$\begin{aligned} I_{12}\le & {} C\sum _{n=1}^{\infty }n^{p-2}(\log n)^{q}g(n)\left( \sum _{i=1}^{n}a_{ni}^{2}E(X_{i}')^2\right) ^{\frac{q}{2}}\\\le & {} C\sum _{n=1}^{\infty }n^{p-2}(\log n)^{q}g(n)\left( \sum _{i=1}^{n}a_{ni}^{2}\right) ^{\frac{q}{2}}\\\le & {} {\left\{ \begin{array}{ll}C\sum \limits _{n=1}^{\infty }n^{p-2+\tau }(\log n)^{q}\left( \sum _{i=1}^{n}a_{ni}^{2}\right) ^{\frac{q}{2}},&{}\text { if } \mathbf (A.2) \text { holds },\\ C\sum \limits _{n=1}^{\infty }n^{p-2-\frac{q}{2}}(\log n)^{q}\frac{g(n)h^{\gamma }(n)n^{\gamma +1}}{nh^{\gamma }(n)n^{\gamma }},&{}\text {if }{} \mathbf (A.3) \text { holds },\\ \end{array}\right. }\\\le & {} {\left\{ \begin{array}{ll}C\sum \limits _{n=1}^{\infty }n^{p-2+\tau -\frac{q}{2}}(\log n)^{q},&{}\text {if } \mathbf (A.2) \text { holds },\\ C\sum \limits _{n=1}^{\infty }n^{p-1+\gamma -\frac{q}{2}}(\log n)^{q}\frac{g(n)}{nh^{\gamma }(n)},&{}\text {if }{} \mathbf (A.3) \text { holds },\\ \end{array}\right. }\\< & {} \infty . \end{aligned}$$

Thus, we have proved \(I_{1}<\infty \).

Now, we deal with \(I_{2}\). For \(n>N\), denote for \(i\ge 1\) that

$$\begin{aligned}Y_{ni}=(X_{i}-N)I(N<X_{i}\le n)+(n-N)I(X_{i}>n). \end{aligned}$$

Then, we have

$$\begin{aligned} I_{2}\le & {} \sum _{n=1}^{\infty }n^{p-2}P\left( \sum _{i=1}^{n}a_{ni}X_{i}''+\sum _{i=1}^{n}a_{ni}EX_{i}''>\frac{\varepsilon }{2}\right) \\\le & {} \sum _{n=1}^{\infty }n^{p-2}\sum _{i=1}^{n}P(X_{i}>n)+\sum _{n=1}^{\infty }n^{p-2}P\left( \sum _{i=1}^{n}a_{ni}Y_{ni}+\sum _{i=1}^{n}a_{ni}EX_{i}''>\frac{\varepsilon }{2}\right) \\=: & {} I_{3}+I_{4}. \end{aligned}$$

For \(I_3\), we can get by \(E|X|^{p}<\infty \) that

$$\begin{aligned} I_{3}\le & {} C\sum _{n=1}^{\infty }n^{p-1}P(|X|>n)\nonumber \\= & {} C\sum _{n=1}^{\infty }n^{p-1}EI(|X|>n)\nonumber \\= & {} C\sum _{n=1}^{\infty }n^{p-1}\sum _{k=n}^{\infty }EI(k<|X|\le (k+1))\nonumber \\= & {} C\sum _{k=1}^{\infty }EI(k<|X|\le (k+1))\sum _{n=1}^{k}n^{p-1}\nonumber \\\le & {} C\sum _{k=1}^{\infty }k^{p}EI(k<|X|\le (k+1))\nonumber \\\le & {} CE|X|^{p}<\infty . \end{aligned}$$
(3.4)

It follows from Lemma 2.4 and (3.1) that

$$\begin{aligned} \sum _{i=1}^{n}a_{ni}EX_{i}''\le & {} \sum _{i=1}^{n}a_{ni}EX_{i}I(X_{i}>N)\nonumber \\\le & {} C\max _{1\le i\le n}a_{ni}\sum _{i=1}^{n}E|X|I(|X|>N)\nonumber \\\le & {} C E|X|I(|X|>N)\nonumber \\\le & {} \frac{\varepsilon }{8} \end{aligned}$$
(3.5)

and

$$\begin{aligned} \sum _{i=1}^{n}a_{ni}EY_{ni}\le & {} \sum _{i=1}^{n}a_{ni}EX_{i}I(X_{i}>N)\le \frac{\varepsilon }{8}. \end{aligned}$$
(3.6)

We have by (3.5) and (3.6) that

$$\begin{aligned} I_{4}\le & {} \sum _{n=1}^{\infty }n^{p-2}P\left( \sum _{i=1}^{n}a_{ni}Y_{ni}>\frac{\varepsilon }{4}\right) +\sum _{n=1}^{\infty }n^{p-2}P\left( \sum _{i=1}^{n}a_{ni}EX_{i}''>\frac{\varepsilon }{4}\right) \nonumber \\\le & {} C\sum _{n=1}^{\infty }n^{p-2}P\left( \left| \sum _{i=1}^{n}a_{ni}(Y_{ni}-EY_{ni})\right|>\frac{\varepsilon }{8}\right) \\&+C\sum _{n=1}^{\infty }n^{p-2}P\left( \sum _{i=1}^{n}a_{ni}EY_{ni}>\frac{\varepsilon }{8}\right) \nonumber \\= & {} C\sum _{n=1}^{\infty }n^{p-2}P\left( \left| \sum _{i=1}^{n}a_{ni}(Y_{ni}-EY_{ni})\right| >\frac{\varepsilon }{8}\right) . \end{aligned}$$

To estimate \(I_{4}\), we consider the following two cases.

Case 1. \(p>1\).

For each \(n\ge 1\), it follows from Lemma 2.1 that \(\{a_{ni}(Y_{ni}-EY_{ni}),1\le i \le n\}\) are WOD random variables with the same dominating coefficients.

Taking \(q>\max \{2p,2(p+\gamma -1)\}\) when \(p\ge 2\) and \(q>\max \{\frac{2p}{p-1},\frac{2(p+\gamma -1)}{p-1}\}\) when \(1<p<2\), we have by Lemma 2.2 and Markov’s inequality that

$$\begin{aligned} I_{4}\le & {} C\sum _{n=1}^{\infty }n^{p-2}E\left( \left| \sum _{i=1}^{n}a_{ni}(Y_{ni}-EY_{ni})\right| ^{q}\right) \\\le & {} C\sum _{n=1}^{\infty }n^{p-2}\left\{ \sum _{i=1}^{n}E|a_{ni}(Y_{ni}-EY_{ni})|^{q}\right. \\&+\left. g(n)\left( \sum _{i=1}^{n}E(a_{ni}(Y_{ni}-EY_{ni}))^{2}\right) ^{q/2}\right\} \\=: & {} I_{5}+I_{6}. \end{aligned}$$

For \(I_{5}\), it follows from \(E|X|^{p}<\infty \), (3.4) and Lemma 2.4 that

$$\begin{aligned} I_{5}\le & {} C\sum _{n=1}^{\infty }n^{p-2}\sum _{i=1}^{n}a_{ni}^{q}EY_{ni}^{q}\\\le & {} C\sum _{n=1}^{\infty }n^{p-2}\sum _{i=1}^{n}a_{ni}^{q}(EX_i^qI(X_i\le n)+n^qP(X_i>n))\\\le & {} C\sum _{n=1}^{\infty }n^{p-2}\sum _{i=1}^{n}a_{ni}^{q}(E|X|^qI(|X|\le n)+n^qP(|X|>n))\\\le & {} C\sum _{n=1}^{\infty }n^{p-q-1}E|X|^qI(|X|\le n)+C\sum _{n=1}^{\infty }n^{p-1}P(|X|>n)\\\le & {} C\sum _{n=1}^{\infty }n^{p-q-1}\sum _{k=1}^{n}E|X|^qI(k-1<|X|\le k)+CE|X|^p\\= & {} C\sum _{k=1}^{\infty }E|X|^qI(k-1<|X|\le k)\sum _{n=k}^{\infty }n^{p-q-1}+CE|X|^p\\\le & {} C\sum _{k=1}^{\infty }k^{p-q}E|X|^qI(k-1<|X|\le k)+CE|X|^p\\\le & {} CE|X|^p<\infty . \end{aligned}$$

For \(I_6\), we consider the following two cases.

Case 1.1. \(p\ge 2\).

We have by Lemma 2.4, (3.3) and \(I_{12}<\infty \) that

$$\begin{aligned} I_{6}\le & {} C\sum _{n=1}^{\infty }n^{p-2}g(n)\left( \sum _{i=1}^{n}a_{ni}^{2}EY_{ni}^{2}\right) ^{q/2}\\\le & {} C\sum _{n=1}^{\infty }n^{p-2}g(n)\left( \sum _{i=1}^{n}a_{ni}^{2}\left( EX_{i}^{2}I(X_i\le n)+n^2P(X_i> n)\right) \right) ^{q/2}\\\le & {} C\sum _{n=1}^{\infty }n^{p-2}g(n)\left( \sum _{i=1}^{n}a_{ni}^{2}\left( EX^{2}I(|X|\le n)+n^2P(|X|> n)\right) \right) ^{q/2}\\\le & {} C\sum _{n=1}^{\infty }n^{p-2}g(n)\left( \sum _{i=1}^{n}a_{ni}^{2}\right) ^{q/2}\left( EX^{2}\right) ^{q/2}\\< & {} \infty . \end{aligned}$$

Case 1.2. \(1<p<2\).

Similar to Case 1.1, we have

$$\begin{aligned} I_{6}\le & {} C\sum _{n=1}^{\infty }n^{p-2}g(n)\left( \sum _{i=1}^{n}a_{ni}^{2}\right) ^{q/2}\left( EX^2I(|X|\le n)+n^2P(|X|>n)\right) ^{q/2}\\\le & {} C\sum _{n=1}^{\infty }n^{p-2-q/2+(2-p)q/2}g(n)\left( E|X|^{p}\right) ^{q/2}\\\le & {} {\left\{ \begin{array}{ll}C\sum _{n=1}^{\infty }n^{p-2-q/2+(2-p)q/2+\tau },&{} \quad \text {if }{} \mathbf (A.2) \text { holds},\\ C\sum _{n=1}^{\infty }n^{p-2-q/2+(2-p)q/2+\gamma +1}\frac{g(n)}{nh^\gamma (n)},&{} \quad \text {if } \mathbf (A.3) \text { holds },\\ \end{array}\right. }\\< & {} \infty . \end{aligned}$$

Case 2. \(p=1\).

If (A.2) holds and \(E|X|g(|X|)<\infty \), then, similar to Case 1, we have by Lemma 2.2, Markov’s inequality, Lemmas 2.4 and 2.5, and (3.3) that

$$\begin{aligned} I_{4}\le & {} C\sum _{n=1}^{\infty }n^{-1}E\left( \left| \sum _{i=1}^{n}a_{ni}(Y_{ni}-EY_{ni})\right| \right) ^{2}\nonumber \\\le & {} C\sum _{n=1}^{\infty }n^{-1}g(n)\sum _{i=1}^{n}a_{ni}^2EY_{ni}^2\nonumber \\\le & {} C\sum _{n=1}^{\infty }n^{-1}g(n)\sum _{i=1}^{n}a_{ni}^2\left[ EX^2I(|X|\le n)+n^2P(|X|>n)\right] \nonumber \\\le & {} C\sum _{n=1}^{\infty }n^{-2}g(n)EX^2I(|X|\le n)+C\sum _{n=1}^{\infty }g(n)P(|X|>n)\nonumber \\< & {} \infty . \end{aligned}$$
(3.7)

If (A.3) holds and \(E|X|h(|X|)<\infty \), taking \(q>\max \{2,2\gamma \}\), we have by Lemma 2.2 and Markov’s inequality that

$$\begin{aligned} I_{4}\le & {} C\sum _{n=1}^{\infty }n^{-1}E\left| \sum _{i=1}^{n}a_{ni}(Y_{ni}-EY_{ni})\right| ^{q}\nonumber \\\le & {} C\sum _{n=1}^{\infty }n^{-1}\left[ \sum _{i=1}^{n}E \left| a_{ni}(Y_{ni}-EY_{ni})\right| ^q \right. \nonumber \\&+\left. g(n)\left( \sum _{i=1}^{n}E \left( a_{ni}(Y_{ni}-EY_{ni})\right) ^2\right) ^{q/2}\right] \nonumber \\=: & {} I_{41}+I_{42}. \end{aligned}$$
(3.8)

It follows from Lemma 2.4 and (3.4) that

$$\begin{aligned} I_{41}\le & {} C\sum _{n=1}^{\infty }n^{-q}\left[ E|X|^qI(|X|\le n)+n^qP(|X|>n)\right] \nonumber \\= & {} C\sum _{n=1}^{\infty }n^{-q}E|X|^qI(|X|\le n)+C\sum _{n=1}^{\infty }P(|X|>n)\nonumber \\\le & {} C\sum _{k=1}^{\infty }E|X|^qI(k-1< |X|\le k)\sum _{n=k}^{\infty }n^{-q}+CE|X|\nonumber \\\le & {} CE|X|<\infty . \end{aligned}$$
(3.9)

Now we deal with \(I_{42}\). Since \(h(x)\uparrow \) and \(h(x)/x\downarrow \), we can get \(h(x)x\uparrow \). Thus, we have by (A.3) that

$$\begin{aligned} I_{42}\le & {} C\sum _{n=1}^{\infty }n^{-1}g(n)\left( \sum _{i=1}^{n}a_{ni}^2EY_{ni}^2\right) ^{q/2}\nonumber \\\le & {} C\sum _{n=1}^{\infty }n^{-1-q/2}g(n)\left[ EX^2I(|X|\le n)+n^2P(|X|>n)\right] ^{q/2}\nonumber \\\le & {} C\sum _{n=1}^{\infty }n^{-1-q/2}g(n)\left[ E\frac{|X|h(|X|)|X|}{h(|X|)}I(|X|\le n)\right. \nonumber \\&+\left. n^2E\left( \frac{|X|h(|X|)}{nh(n)}I(|X|>n)\right) \right] ^{q/2}\nonumber \\\le & {} C\sum _{n=1}^{\infty }n^{-1-q/2}g(n)\left[ \frac{n}{h(n)}E|X|h(|X|)I(|X|\le n)\right. \nonumber \\&+\left. \frac{n}{h(n)}E|X|h(|X|)I(|X|>n)\right] ^{q/2}\nonumber \\\le & {} C\sum _{n=1}^{\infty }n^{-1-q/2}g(n)\left( \frac{n}{h(n)}E|X|h(|X|)\right) ^{q/2}\nonumber \\\le & {} C\left( E|X|h(|X|)\right) ^{q/2}\sum _{n=1}^{\infty }\frac{g(n)}{nh^{q/2}(n)}\nonumber \\< & {} \infty . \end{aligned}$$
(3.10)

From (3.7)–(3.10), we have proved that \(I_4<\infty \) in the case of \(p=1\).

The proof is completed. \(\square \)

Theorem 3.1 considers special constant weights satisfying (3.1). The next theorem deals with much more general weights.

Theorem 3.2

Let (A.1) hold, \(1/p\le \alpha <1\) and \(p>1\). Assume that \(E|X|^p<\infty \) and that (A.2) or (A.3) holds. Let \(\{a_{ni},1\le i\le n,n\ge 1\}\) be an array of real numbers satisfying

$$\begin{aligned} \sum \limits _{i=1}^{n}a_{ni}^2=O(n^{-\alpha }) \end{aligned}$$
(3.11)

and

$$\begin{aligned} \max _{1\le i\le n}|a_{ni}|=O(n^{-\alpha }). \end{aligned}$$
(3.12)

Then, for all \(\varepsilon >0\),

$$\begin{aligned} \sum \limits _{n=1}^{\infty }n^{\alpha p-2}P\left( \max _{1\le k\le n}\left| \sum \limits _{i=1}^{k}a_{ni}(X_i-EX_i)\right| >\varepsilon \right) <\infty . \end{aligned}$$
(3.13)

Proof

Without loss of generality, we may assume that \(X_{i}\ge 0\) for \(i\ge 1\) and \(a_{ni}\ge 0\) for \(1\le i\le n\) and all \(n\ge 1\). For fixed \(n\ge 1\), denote for \(1\le i\le n\) that

$$\begin{aligned} Y_{ni}=X_iI(X_i\le n^\alpha )+n^\alpha I(X_i>n^\alpha ). \end{aligned}$$

We have

$$\begin{aligned}&\sum \limits _{n=1}^{\infty }n^{\alpha p-2}P\left( \max _{1\le k\le n}\left| \sum \limits _{i=1}^{k}a_{ni}(X_i-EX_i)\right|>\varepsilon \right) \\&\quad \le \sum \limits _{n=1}^{\infty }n^{\alpha p-2}\sum \limits _{i=1}^{n}P(X_i>n^\alpha )+\sum \limits _{n=1}^{\infty }n^{\alpha p-2}P\left( \max _{1\le k\le n}\left| \sum \limits _{i=1}^{k}a_{ni}(Y_{ni}-EX_{i})\right| >\varepsilon \right) \\&\quad =:I_1+I_2. \end{aligned}$$

Similar to the proof of (3.4), we can easily get that \(I_1<\infty \). To prove the desired result (3.13), we only need to show \(I_2<\infty \).

First, we will show that

$$\begin{aligned} \max _{1\le k\le n}\left| \sum _{i=1}^{k}a_{ni}(EY_{ni}-EX_i)\right| \rightarrow 0\text {~~as~~} n\rightarrow \infty . \end{aligned}$$
(3.14)

We have by (3.11), Hölder’s inequality and Lemma 2.4 that

$$\begin{aligned}&\max _{1\le k\le n}\left| \sum _{i=1}^{k}a_{ni}(EY_{ni}-EX_i)\right| \\&\quad =\max _{1\le k\le n}\left| \sum _{i=1}^{k}a_{ni}(EX_{i}I(X_i>n^{\alpha })-n^{\alpha }EI(X_i>n^{\alpha }))\right| \\&\quad \le C\max _{1\le k\le n}\sum _{i=1}^{k}a_{ni}EX_iI(X_i>n^{\alpha })\\&\quad \le C\sum _{i=1}^{n}a_{ni}E|X|I(|X|>n^{\alpha })\\&\quad \le C\left( \sum _{i=1}^{n}a_{ni}^2\right) ^{1/2}\left( \sum _{i=1}^{n}1\right) ^{1/2}E|X|^{1-p}|X|^pI(|X|>n^{\alpha })\\&\quad \le Cn^{\frac{1+\alpha }{2}-\alpha p}E|X|^pI(|X|>n^{\alpha })\rightarrow 0\text {~~as~~} n\rightarrow \infty , \end{aligned}$$

which implies (3.14).

For each \(n\ge 1\), \(\{a_{ni}(Y_{ni}-EY_{ni}),1\le i\le n\}\) are still WOD random variables with the same dominating coefficients by Lemma 2.1. Take \(q>\max \{2p,\frac{2(\alpha p-1+\gamma )}{\alpha }\}\) when \(p\ge 2\) and \(q>\max \{\frac{2\alpha p}{\alpha p-\alpha },\frac{2(\alpha p-1+\gamma )}{\alpha p-\alpha }\}\) when \(1<p<2\). We have by Markov’s inequality, (3.14) and Lemma 2.3 that

$$\begin{aligned} I_2\le & {} \sum \limits _{n=1}^{\infty }n^{\alpha p-2}P\left( \max _{1\le k\le n}\left| \sum \limits _{i=1}^{k}a_{ni}(Y_{ni}-EY_{ni})\right| >\frac{\varepsilon }{2}\right) \\\le & {} C\sum \limits _{n=1}^{\infty }n^{\alpha p-2}E\left( \max _{1\le k\le n}\left| \sum \limits _{i=1}^{k}a_{ni}(Y_{ni}-EY_{ni})\right| ^q\right) \\\le & {} C\sum \limits _{n=1}^{\infty }n^{\alpha p-2}(\log n)^q\left[ \sum \limits _{i=1}^{n}E|a_{ni}(Y_{ni}\right. \\&\left. -EY_{ni})|^q+g(n)\left( \sum \limits _{i=1}^{n}E(a_{ni}(Y_{ni}-EY_{ni}))^2\right) ^{q/2}\right] \\=: & {} I_3+I_4. \end{aligned}$$

For \(I_3\), we have by Lemma 2.4, (3.11) and (3.12) that

$$\begin{aligned} I_3\le & {} C\sum \limits _{n=1}^{\infty }n^{\alpha p-2}(\log n)^q\sum \limits _{i=1}^{n}a_{ni}^qEY_{ni}^q\\\le & {} C\sum \limits _{n=1}^{\infty }n^{\alpha p-2}(\log n)^q\sum \limits _{i=1}^{n}a_{ni}^q\left[ EX_i^qI(X_i\le n^\alpha )+n^{\alpha q}P(X_i>n^\alpha )\right] \\\le & {} C\sum \limits _{n=1}^{\infty }n^{\alpha p-2}(\log n)^q\left( \max _{1\le i\le n}a_{ni}\right) ^{q-2}\sum \limits _{i=1}^{n}a_{ni}^2\left[ EX_i^qI(X_i\le n^\alpha )\right. \\&+\left. n^{\alpha q}P(X_i>n^\alpha )\right] \\\le & {} C\sum \limits _{n=1}^{\infty }n^{\alpha p-\alpha q+\alpha -2}(\log n)^q\left[ E|X|^qI(|X|\le n^\alpha )+n^{\alpha q}P(|X|>n^\alpha )\right] \\\le & {} C\sum \limits _{n=1}^{\infty }n^{\alpha -2}(\log n)^qE|X|^p\\< & {} \infty . \end{aligned}$$

[Figs. 1–12, referenced in Sect. 4.2: boxplots of \(f_n(x)-f(x)\) at \(x=0.25, 0.5, 0.75\) for \(f(x)=-x^3\) and \(f(x)=\sin x\), with \(\theta =0.8\) (Figs. 1–6) and \(\theta =0.4\) (Figs. 7–12). Tables 1 and 2: MSE of the estimator \(f_{n}(x)\) when \(\theta =0.8\) and \(\theta =0.4\), respectively.]

To deal with \(I_4\), we consider the following two cases.

Case 1. \(p\ge 2\).

We have by Lemma 2.4 and (3.11) that

$$\begin{aligned} I_4\le & {} C\sum \limits _{n=1}^{\infty }n^{\alpha p-2}(\log n)^qg(n)\left( \sum \limits _{i=1}^{n}E(a_{ni}Y_{ni})^2\right) ^{q/2}\\\le & {} C\sum \limits _{n=1}^{\infty }n^{\alpha p-2}(\log n)^qg(n)\left( \sum \limits _{i=1}^{n}a_{ni}^2(EX_i^2I(X_i\le n^\alpha )+n^{2\alpha }P(X_i>n^{\alpha }))\right) ^{q/2}\nonumber \\\le & {} C\sum \limits _{n=1}^{\infty }n^{\alpha p-2}(\log n)^qg(n)\left( \sum \limits _{i=1}^{n}a_{ni}^2\right) ^{q/2}(EX^2)^{q/2}\nonumber \\\le & {} C\sum \limits _{n=1}^{\infty }n^{\alpha p-2-\alpha q/2}(\log n)^qg(n)\\\le & {} {\left\{ \begin{array}{ll} C\sum \limits _{n=1}^{\infty }n^{\alpha p-2-\alpha q/2+\tau }(\log n)^q,&{} \quad \text {if } \mathbf (A.2) \text { holds },\\ C\sum \limits _{n=1}^{\infty }n^{\alpha p-2-\alpha q/2+\gamma +1}\frac{g(n)}{nh^\gamma (n)}(\log n)^q,&{} \quad \text {if } \mathbf (A.3) \text { holds },\\ \end{array}\right. }\\< & {} \infty . \end{aligned}$$

Case 2. \(1<p<2\).

Similar to Case 1, it can be checked that

$$\begin{aligned} I_4\le & {} C\sum \limits _{n=1}^{\infty }n^{\alpha p-2}(\log n)^qg(n)\left( \sum \limits _{i=1}^{n}a_{ni}^2(EX^2I(|X|\le n^\alpha )+n^{2\alpha }P(|X|>n^\alpha ))\right) ^{q/2}\\\le & {} C\sum \limits _{n=1}^{\infty }n^{\alpha p-2+\alpha (2-p)q/2-\alpha q/2}g(n)(\log n)^q(E|X|^p)^{q/2}\\\le & {} {\left\{ \begin{array}{ll} C\sum \limits _{n=1}^{\infty }n^{\alpha p-2+\alpha (2-p)q/2-\alpha q/2+\tau }(\log n)^q,&{} \quad \text {if } \mathbf (A.2) \text { holds },\\ C\sum \limits _{n=1}^{\infty }n^{\alpha p-2+\alpha (2-p)q/2-\alpha q/2+\gamma +1}\frac{g(n)}{nh^\gamma (n)}(\log n)^q,&{} \quad \text {if } \mathbf (A.3) \text { holds },\\ \end{array}\right. }\\< & {} \infty . \end{aligned}$$

The proof is completed. \(\square \)

Note that when \(\alpha =1\), condition (3.12) reduces to (3.1), so Theorem 3.1 yields (3.13) for all \(p\ge 1\) in this case. Combining Theorem 3.1 and Theorem 3.2, we obtain the following Theorem 3.3.

Theorem 3.3

Let (A.1) hold, \(1/p\le \alpha \le 1\) and \(p\ge 1\). When \(p=1\), assume that either (A.2) holds and \(E|X|g(|X|)<\infty \), or (A.3) holds and \(E|X|h(|X|)<\infty \). When \(p>1\), assume that \(E|X|^p<\infty \) and that (A.2) or (A.3) holds. Let \(\{a_{ni},1\le i\le n,n\ge 1\}\) be an array of real numbers satisfying (3.12), and also (3.11) when \(\alpha <1\). Then (3.13) holds.

Remark 3.1

It is easy to see that Theorem A corresponds to the case \(a_{ni}=1/n\) and \(\alpha =p=1\) of Theorem 3.3. Thus, compared with Theorem A, we make the following generalizations or improvements: (i) we generalize the moment condition on X; (ii) we consider much more general weights; (iii) we relax the assumption of identical distribution to stochastic domination.

Remark 3.2

The constant weights satisfying (3.11) and (3.12) have been considered by several researchers. For example, Wu et al. [34] investigated the complete convergence for END random variables with the same weights. However, we consider the much more general WOD setting, and the ranges of \(\alpha \) and p are extended from \(1/p\le \alpha <1\) and \(p\ge 2\) to \(1/p\le \alpha \le 1\) and \(p\ge 1\). Thus, our results extend those of Wu et al. [34] as well.

Taking \(\alpha p=2\) in Theorem 3.3, we obtain the following strong law of large numbers for weighted sums of WOD random variables by the Borel–Cantelli lemma.

Corollary 3.1

Let (A.1) hold and \(0<\alpha \le 1\). Assume that \(E|X|^{2/\alpha }<\infty \) and that (A.2) or (A.3) holds. Let \(\{a_{ni},1\le i\le n,n\ge 1\}\) satisfy the same conditions as in Theorem 3.3. Then

$$\begin{aligned} \sum \limits _{i=1}^{n}a_{ni}(X_i-EX_i)\rightarrow 0~a.s.,n\rightarrow \infty . \end{aligned}$$
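
To see this, note that \(\alpha p=2\) makes the factor \(n^{\alpha p-2}\) in (3.13) equal to 1, so (3.13) reads

$$\begin{aligned} \sum \limits _{n=1}^{\infty }P\left( \max _{1\le k\le n}\left| \sum \limits _{i=1}^{k}a_{ni}(X_i-EX_i)\right| >\varepsilon \right) <\infty ,\text {~~for~~all~~}\varepsilon >0, \end{aligned}$$

and the Borel–Cantelli lemma then yields the almost sure convergence above.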

4 An application in nonparametric regression models

4.1 Theoretical result

In what follows, we will apply the result of Theorem 3.3 to a nonparametric regression model and investigate the complete consistency for the nonparametric regression estimator based on WOD errors.

Consider the following nonparametric regression model:

$$\begin{aligned} Y_{ni}=f(x_{ni})+\varepsilon _{ni}, i=1,2,\ldots ,n,~n\ge 1, \end{aligned}$$
(4.1)

where \(x_{ni}\) are known fixed design points from a given compact set \(A\subset {\mathbb {R}}^{m}\) for some \(m\ge 1\), \(f(\cdot )\) is an unknown regression function defined on A and \(\varepsilon _{ni}\) are random errors. Assume that for each \(n\ge 1\), \((\varepsilon _{n1},\varepsilon _{n2},\ldots ,\varepsilon _{nn})\) has the same distribution as \((\varepsilon _1,\varepsilon _2,\ldots ,\varepsilon _n)\). As an estimator of \(f(\cdot )\), we consider the following weighted regression estimator:

$$\begin{aligned} f_{n}(x)=\sum \limits _{i=1}^{n}W_{ni}(x)Y_{ni},~x\in A\subset {\mathbb {R}}^{m}, \end{aligned}$$
(4.2)

where \(W_{ni}(x)=W_{ni}(x;x_{n1},x_{n2},\ldots ,x_{nn}), i=1,2,\ldots ,n\) are the weight functions.

The above estimator with constant weights was first proposed by Stone [22] and adapted by Georgiev [7] to the fixed design case. Many interesting results on it have been obtained in recent years. For example, Wang et al. [28] discussed the complete convergence for weighted sums of arrays of rowwise WOD random variables in some special cases and gave applications to the nonparametric regression model; Wang et al. [29] obtained the complete consistency for the estimator of nonparametric regression models based on END errors; Wu et al. [31] studied the complete consistency for the weighted estimator in a nonparametric regression model based on \(\rho ^*\)-mixing errors; Wu et al. [33] investigated the complete consistency for the estimator in a nonparametric regression model based on arrays of rowwise negatively associated (NA, for short) random errors; Chen et al. [3] established the complete consistency for the weighted estimator in a nonparametric regression model based on asymptotically negatively associated (ANA, for short) random errors; Yan [35] provided some sufficient conditions for the complete convergence of maximal weighted sums of END random variables and gave some applications to a nonparametric regression model.

In this section, let c(f) denote all continuity points of the function f on A. The symbol \(\Vert X\Vert \) stands for the Euclidean norm. For any fixed point \(x\in A\), the following assumptions on weight functions \(W_{ni}(x)\) will be used:

\((H_{1})\):

\(\sum \limits _{i=1}^{n}W_{ni}(x)\rightarrow 1~\text {as}~n\rightarrow \infty ;\)

\((H_{2})\):

\(\sum \limits _{i=1}^{n}|W_{ni}(x)|\le C<\infty \) for all \(n\ge 1\);

\((H_{3})\):

\(\sum \limits _{i=1}^{n}|W_{ni}(x)|\cdot |f(x_{ni})-f(x)|I(\Vert x_{ni}-x\Vert >a)\rightarrow 0\) as \(n\rightarrow \infty \) for all \(a>0.\)

Based on the assumptions above, we can get the following result on complete consistency for the nonparametric regression estimator \(f_{n}(x)\).

Theorem 4.1

Let \(p\ge 2\) and \(\{\varepsilon _{n},n\ge 1\}\) be a sequence of zero mean WOD random errors with dominating coefficients g(n), \(n\ge 1\), which is stochastically dominated by a random variable X. Assume that one of the following conditions holds.

  1. (I)

     Let g(x) be a nondecreasing positive function on \([0,\infty )\), such that \(g(x)/x^\tau \downarrow \) for some \(0<\tau <1\).

  2. (II)

     There exists a nondecreasing positive function h(x) on \([0,\infty )\), such that \(h(x)/x\downarrow \) and \(\sum _{n=1}^{\infty }g(n)/(nh^\gamma (n))<\infty \) for some \(\gamma >0\).

Suppose that conditions \((H_1)-(H_3)\) hold. If

$$\begin{aligned} \max _{1\le i\le n}\vert W_{ni}(x)\vert =O(n^{-2/p}) \end{aligned}$$
(4.3)

and \(E\vert X\vert ^{p}<\infty \), then for any \(x\in c(f)\),

$$\begin{aligned} f_{n}(x)\rightarrow f(x) ~\text {completely,}~~as~n\rightarrow \infty . \end{aligned}$$
(4.4)

Proof

For any \(a>0\) and \(x\in c(f)\), it follows from (4.1) and (4.2) that

$$\begin{aligned} |Ef_{n}(x)-f(x)|\le & {} \sum \limits _{i=1}^{n}|W_{ni}(x)|\cdot |f(x_{ni})-f(x)|I(\Vert x_{ni}-x\Vert \le a)\nonumber \\&+\sum \limits _{i=1}^{n}|W_{ni}(x)|\cdot |f(x_{ni})-f(x)|I(\Vert x_{ni}-x\Vert >a)\nonumber \\&+|f(x)|\cdot \left| \sum \limits _{i=1}^{n}W_{ni}(x)-1\right| . \end{aligned}$$
(4.5)

Since \(x\in c(f)\), for any \(\delta >0\), there exists a \(\theta >0\), such that \(|f(x')-f(x)|<\delta \) when \(\Vert x'-x\Vert <\theta \). Setting \(0<a<\theta \) in (4.5), we have

$$\begin{aligned} |Ef_{n}(x)-f(x)|\le & {} \delta \sum \limits _{i=1}^{n}|W_{ni}(x)|+\sum \limits _{i=1}^{n}|W_{ni}(x)|\cdot |f(x_{ni})-f(x)|I(\Vert x_{ni}-x\Vert >a)\\&+|f(x)|\cdot \left| \sum \limits _{i=1}^{n}W_{ni}(x)-1\right| . \end{aligned}$$

Therefore by assumptions \((H_1)-(H_3)\) and the arbitrariness of \(\delta >0\), we have

$$\begin{aligned} \lim _{n\rightarrow \infty }Ef_{n}(x)=f(x). \end{aligned}$$
(4.6)

In view of (4.6), to prove (4.4), we only need to verify that for all \(\varepsilon >0\),

$$\begin{aligned} \sum \limits _{n=1}^{\infty }P\left( \left| f_n(x)-Ef_n(x)\right|>\varepsilon \right) =\sum \limits _{n=1}^{\infty }P\left( \left| \sum \limits _{i=1}^{n}W_{ni}(x)\varepsilon _i\right| >\varepsilon \right) <\infty . \end{aligned}$$
(4.7)

We have by (4.3) and \((H_2)\) that

$$\begin{aligned} \sum \limits _{i=1}^{n}W_{ni}^2(x)\le & {} \max _{1\le i\le n}|W_{ni}(x)|\cdot \sum \limits _{i=1}^{n}|W_{ni}(x)|\le Cn^{-2/p}. \end{aligned}$$

Applying Theorem 3.3 with \(X_i=\varepsilon _i\), \(a_{ni}=W_{ni}(x)\) and \(\alpha =2/p\) (so that \(\alpha p=2\)), we obtain (4.7) immediately, and thus (4.4) holds.

The proof is completed. \(\square \)

Remark 4.1

Compared with Theorem 1.1 in Zhang et al. [37], condition (I) is similar to the condition on the dominating coefficients g(n) there. However, as stated in Chen and Sung [2], condition (II) is not comparable to condition (I). Thus, we consider a wider assumption on g(n) than that in Zhang et al. [37]. Therefore, Theorem 4.1 extends and improves Theorem 1.1 in Zhang et al. [37] to some extent.

4.2 Numerical simulation

In this section, we illustrate that the assumptions \((H_1)-(H_3)\) and (4.3) are satisfied by the nearest neighbor weights. By Theorem 4.1, the estimator \(f_{n}(x)\) then converges to f(x) completely. We now examine the numerical performance of \(f_{n}(x)\). First, let us recall the nearest neighbor weight function.

Put \(A=[0,1]\) and let \(x_{ni}=i/n,~i=1,2,\ldots ,n\). For any \(x\in A\), we rewrite

$$\begin{aligned} |x_{n1}-x|,|x_{n2}-x|,\ldots ,|x_{nn}-x| \end{aligned}$$

as follows:

$$\begin{aligned} |x_{n,R_1(x)}-x|\le |x_{n,R_2(x)}-x|\le \cdots \le |x_{n,R_n(x)}-x|, \end{aligned}$$

where ties are broken in favor of the smaller design point: if \(|x_{nk}-x|=|x_{nj}-x|\) and \(x_{nk}<x_{nj}\), then \(|x_{nk}-x|\) is placed before \(|x_{nj}-x|\).

Let \(1\le k_n\le n\); the nearest neighbor weight function is then defined as follows:

$$\begin{aligned} W_{ni}(x)=\left\{ \begin{array}{ll} 1/k_{n}, &{} \quad if~|x_{ni}-x|\le |x_{n,R_{k_{n}}(x)}-x|,\\ 0, &{} \quad otherwise. \end{array} \right. \end{aligned}$$
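
For concreteness, the following Python sketch — our rendering, with the hypothetical name `nn_weights`; the computations reported below were done in Matlab — implements these weights with the tie-breaking rule above.

```python
# Nearest neighbor weights W_{ni}(x) on the design points x_{ni} = i/n;
# ties in distance are broken in favor of the smaller design point.
import numpy as np

def nn_weights(x: float, n: int, k_n: int) -> np.ndarray:
    """Return (W_{n1}(x), ..., W_{nn}(x)): 1/k_n on the k_n nearest points."""
    xs = np.arange(1, n + 1) / n          # design points x_{ni} = i/n
    dist = np.abs(xs - x)
    order = np.lexsort((xs, dist))        # primary key: distance; ties: x_{ni}
    w = np.zeros(n)
    w[order[:k_n]] = 1.0 / k_n
    return w
```

Note that \(\sum _{i=1}^{n}W_{ni}(x)=1\) and \(\max _{1\le i\le n}|W_{ni}(x)|=1/k_{n}\), so with \(k_n=\lfloor n^{2/3}\rfloor \) and \(p=3\) as chosen below, condition (4.3) holds.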

We now generate the data. For any fixed \(n\ge 3\), let the normal random vector \((\varepsilon _1,\varepsilon _2,\ldots ,\varepsilon _n)\sim N_n({\mathbf {0}},\varvec{\Sigma })\), where \({\mathbf {0}}\) denotes the zero vector and

$$\begin{aligned} \varvec{\Sigma }=\left( \begin{array}{ccccccc} 1+\theta ^2 &{} \quad -\theta &{} \quad 0 &{} \quad \cdots &{} \quad 0 &{} \quad 0 &{} \quad 0\\ -\theta &{} \quad 1+\theta ^2 &{} \quad -\theta &{} \quad \cdots &{} \quad 0 &{} \quad 0 &{} \quad 0\\ 0 &{} \quad -\theta &{} \quad 1+\theta ^2 &{} \quad \cdots &{} \quad 0 &{} \quad 0 &{} \quad 0\\ \vdots &{} \quad \vdots &{} \quad \vdots &{} \quad &{} \quad \vdots &{} \quad \vdots &{} \quad \vdots \\ 0 &{} \quad 0 &{} \quad 0 &{} \quad \cdots &{} \quad 1+\theta ^2 &{} \quad -\theta &{} \quad 0\\ 0 &{} \quad 0 &{} \quad 0 &{} \quad \cdots &{} \quad -\theta &{} \quad 1+\theta ^2 &{} \quad -\theta \\ 0 &{} \quad 0 &{} \quad 0 &{} \quad \cdots &{} \quad 0 &{} \quad -\theta &{} \quad 1+\theta ^2\\ \end{array} \right) _{n\times n} ,\end{aligned}$$

where \(0<\theta <1\). By Joag-Dev and Proschan [11], \((\varepsilon _1,\varepsilon _2,\ldots ,\varepsilon _n)\) is an NA vector for each \(n\ge 3\) with finite moments of all orders, and thus a WOD vector satisfying conditions (I) and (II). We arbitrarily choose \(\theta =0.8\) and \(\theta =0.4\), \(p=3\) and \(k_n=\lfloor n^{2/3}\rfloor \), where \(\lfloor x\rfloor \) stands for the integer part of x. As stated in Wang et al. [28], the assumptions \((H_{1})-(H_3)\) hold, and condition (4.3) is easily checked. Taking the points \(x=0.25,0.5,0.75\) and the sample sizes \(n=100,400,800,1200\), we use Matlab to compute \(f_{n}(x)-f(x)\) with \(f(x)=-x^3\) and \(f(x)= \sin x\) over 500 replications, and obtain the boxplots of \(f_n(x)-f(x)\) in Figs. 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11 and 12. The mean square error (MSE, for short) of \(f_n(x)\) is shown in Tables 1 and 2.
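
The simulation itself can be sketched in Python as follows; this is our reconstruction under the settings just described (not the original Matlab code), with hypothetical names `na_errors` and `mse_fn`, and `nn_weights` from the sketch above. (Equivalently, \(\varepsilon _i=e_i-\theta e_{i+1}\) with \(e_1,\ldots ,e_{n+1}\) i.i.d. standard normal has covariance matrix \(\varvec{\Sigma }\).)

```python
# One cell of the simulation study: Monte Carlo MSE of f_n(x) with NA
# Gaussian errors, p = 3, k_n = floor(n^(2/3)) and 500 replications.
import numpy as np

rng = np.random.default_rng(2023)

def na_errors(n: int, theta: float) -> np.ndarray:
    """Sample (eps_1, ..., eps_n) ~ N_n(0, Sigma) with the tridiagonal Sigma:
    1 + theta^2 on the diagonal, -theta on the first off-diagonals."""
    sigma = ((1 + theta ** 2) * np.eye(n)
             - theta * np.eye(n, k=1) - theta * np.eye(n, k=-1))
    return rng.multivariate_normal(np.zeros(n), sigma)

def mse_fn(f, x: float, n: int, theta: float, reps: int = 500) -> float:
    """Monte Carlo MSE of f_n(x) = sum_i W_{ni}(x) Y_{ni}."""
    k_n = int(n ** (2 / 3))               # k_n = floor(n^(2/3)), i.e. p = 3
    w = nn_weights(x, n, k_n)             # from the sketch above
    xs = np.arange(1, n + 1) / n
    errs = [w @ (f(xs) + na_errors(n, theta)) - f(x) for _ in range(reps)]
    return float(np.mean(np.square(errs)))

# e.g. an MSE value analogous to an entry of Table 1:
print(mse_fn(lambda t: -t ** 3, x=0.5, n=100, theta=0.8))
```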

When \(\theta =0.8\), Figs. 1, 2 and 3 are the boxplots of \(f_n(x)-f(x)\) for \(f(x)=-x^3\), and Figs. 4, 5 and 6 are those for \(f(x)=\sin x\), with \(x=0.25,0.5,0.75\), respectively. We find that, for both \(f(x)=-x^3\) and \(f(x)=\sin x\) and for each of \(x=0.25,0.5,0.75\), the differences \(f_n(x)-f(x)\) fluctuate around zero and the range of variation decreases markedly as the sample size n increases. Table 1 shows that the MSE of \(f_n(x)\) also decreases markedly as n increases. These simulation results agree with the theoretical result.

When \(\theta =0.4\), Figs. 7, 8 and 9 are the boxplots of \(f_n(x)-f(x)\) for \(f(x)=-x^3\), and Figs. 10, 11 and 12 are those for \(f(x)=\sin x\), with \(x=0.25,0.5,0.75\), respectively. We reach the same conclusions as in the case \(\theta =0.8\); in particular, Table 2 shows that the MSE of \(f_n(x)\) decreases markedly as n increases. These simulation results again agree with the theoretical result.

5 Conclusions

The goal of this paper is to establish the complete convergence for WOD random variables. By using Rosenthal-type moment inequalities and inequalities for stochastic domination, we extend the results of Chen and Sung [2] and further investigate the complete consistency of the estimator in nonparametric regression models based on WOD errors. Finally, a simulation study is provided to assess the finite-sample performance of the theoretical results.

As for further directions of research, it would be interesting to consider real data analysis in financial markets. Moreover, the theoretical results presented in this paper can also serve as a useful tool in establishing the complete and strong consistency of estimators in semiparametric regression models, which were proposed by Engle et al. [5], based on various types of dependent errors.