1 Introduction

In many statistical applications, the random variables are usually assumed to be independent. However, this assumption is often unrealistic, and many statisticians have therefore extended independence to various dependence and mixing structures. In this paper, we are interested in the widely orthant dependent (WOD, for short) structure, which includes independent random variables, negatively associated (NA, for short) random variables, negatively superadditive dependent (NSD, for short) random variables, negatively orthant dependent (NOD, for short) random variables, and extended negatively dependent (END, for short) random variables as special cases.

Firstly, let us recall the concepts of complete convergence and stochastic domination. The concept of complete convergence was introduced by Hsu and Robbins [7] as follows: a sequence \(\{X_n,n\ge 1\}\) of random variables converges completely to a constant C if for all \(\varepsilon >0\),

$$\begin{aligned} \sum \limits _{n=1}^{\infty }P(|X_{n}-C|>\varepsilon )<\infty . \end{aligned}$$

By the Borel-Cantelli lemma, complete convergence implies that \(X_{n}\rightarrow C\) a.s., and so complete convergence is stronger than a.s. convergence.
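To spell out this step: by the first Borel-Cantelli lemma, for each \(\varepsilon >0\),

$$\begin{aligned} \sum \limits _{n=1}^{\infty }P(|X_{n}-C|>\varepsilon )<\infty \quad \Longrightarrow \quad P(|X_{n}-C|>\varepsilon \ \text {infinitely often})=0, \end{aligned}$$

and intersecting the complements of these exceptional events over \(\varepsilon =1/k\), \(k\ge 1\), yields \(X_{n}\rightarrow C\) a.s.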

The concept of stochastic domination below will be used frequently throughout the paper.

Definition 1.1

A sequence \(\{X_{n}, n\ge 1\}\) of random variables is said to be stochastically dominated by a nonnegative random variable X if there exists a positive constant C such that

$$\begin{aligned} P(|X_{n}|>x)\le CP(X>x) \end{aligned}$$

for all \(x\ge 0\) and \(n\ge 1\).
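For example, identically distributed random variables \(\{X_{n}, n\ge 1\}\) are stochastically dominated by \(X=|X_{1}|\) with \(C=1\); the definition thus allows non-identical distributions as long as the tails are uniformly controlled.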

Now, let us recall the concept of WOD random variables, which was introduced by Wang et al. [27] as follows.

Definition 1.2

A finite collection of random variables \(X_1,X_2,\ldots ,X_n\) is said to be widely upper orthant dependent (WUOD, for short) if there exists a finite real number \(g_{U}(n)\) such that for all finite real numbers \(x_{i}, 1\le i\le n\),

$$\begin{aligned} P(X_1>x_1,X_2>x_2,\ldots ,X_n>x_n)\le g_{U}(n)\prod _{i=1}^nP(X_i>x_i). \end{aligned}$$
(1.1)

A finite collection of random variables \(X_1,X_2,\ldots ,X_n\) is said to be widely lower orthant dependent (WLOD, for short) if there exists a finite real number \(g_{L}(n)\) such that for all finite real numbers \(x_{i}, 1\le i\le n\),

$$\begin{aligned} P(X_1\le x_1, X_2\le x_2,\ldots ,X_n\le x_n)\le g_{L}(n)\prod _{i=1}^nP(X_i\le x_i). \end{aligned}$$
(1.2)

If \(X_1,X_2,\ldots ,X_n\) are both WUOD and WLOD, we then say that \(X_1,X_2,\ldots ,X_n\) are widely orthant dependent (WOD, for short) random variables, and \(g_{U}(n), g_{L}(n)\) are called dominating coefficients. A sequence \(\{X_n, n\ge 1\}\) of random variables is said to be WOD if every finite subcollection is WOD.
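For example, if \(X_1,X_2,\ldots ,X_n\) are independent, then (1.1) and (1.2) hold with equality, so one may take \(g_{U}(n)=g_{L}(n)=1\); the dominating coefficients thus measure how far the joint tail probabilities may deviate from the independent case.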

An array \(\{X_{ni}, 1\le i\le k_n, n\ge 1\}\) of random variables is said to be rowwise WOD if for every \(n\ge 1\), \(\{X_{ni}, 1\le i\le k_n\}\) are WOD random variables.

It follows from (1.1) and (1.2) that \(g_{U}(n)\ge 1\) and \(g_{L}(n)\ge 1\). If \(g_{U}(n)=g_{L}(n)=M\) for all \(n\ge 1\), where M is a positive constant, then \(\{X_n, n\ge 1\}\) is called END, which was introduced by Liu [14]. If \(M=1\), then \(\{X_n, n\ge 1\}\) is called NOD, which was introduced by Lehmann [12] and carefully studied by Joag-Dev and Proschan [11]. Note that NA implies NOD. Furthermore, Hu [10] pointed out that NSD random variables are NOD. Hence, the class of WOD random variables includes independent random variables, NA random variables, NSD random variables, NOD random variables, and END random variables as special cases. Therefore, studying the limit behavior of WOD random variables and its applications is of great interest.

Since Wang et al. [27] introduced the concept of WOD random variables, many authors have devoted themselves to studying the corresponding probability limit theory and statistical large sample theory. Wang et al. [27] provided some examples showing that the class of WOD random variables contains some common negatively dependent random variables, some positively dependent random variables and some others; in addition, they studied the uniform asymptotics for the finite-time ruin probability of a new dependent risk model with a constant interest rate. Wang and Cheng [31] presented some basic renewal theorems for a random walk with widely dependent increments and gave some applications. Wang et al. [32] studied the asymptotics of the finite-time ruin probability for a generalized renewal risk model with independent strong subexponential claim sizes and widely lower orthant dependent inter-occurrence times. Chen et al. [1] considered uniform asymptotics for the finite-time ruin probabilities of two kinds of nonstandard bidimensional renewal risk models with constant interest forces and diffusion generated by Brownian motions. Shen [19] established the Bernstein type inequality for WOD random variables and gave some applications. Wang et al. [29] studied the complete convergence for WOD random variables and gave its applications in nonparametric regression models. Yang et al. [34] established the Bahadur representation of sample quantiles for WOD random variables under some mild conditions. Shen [21] obtained the asymptotic approximation of inverse moments for a class of nonnegative random variables, including WOD random variables as special cases. Qiu and Chen [16] studied the complete and complete moment convergence for weighted sums of WOD random variables. Wang and Hu [28] investigated the consistency of the nearest neighbor estimator of the density function based on WOD samples. Chen et al. [2] established a more accurate inequality for identically distributed WOD random variables, and gave its application to limit theorems, including the strong law of large numbers, the complete convergence, the a.s. elementary renewal theorem and the weighted elementary renewal theorem. Shen et al. [23] obtained some exponential probability inequalities for WOD random variables and gave some applications, and so on. In this work, we further study the convergence properties of WOD random variables and then apply them to the nonparametric regression model.

Consider the following nonparametric regression model:

$$\begin{aligned} Y_{ni} = f\left( x_{ni}\right) + \varepsilon _{ni},\quad i = 1, 2, \ldots , n,\quad n\ge 1, \end{aligned}$$
(1.3)

where \(x_{ni}\) are known fixed design points from A, where \(A\subset {\mathbb {R}}^d\) is a given compact set for some \(d\ge 1\), \(f(\cdot )\) is an unknown regression function defined on A and \(\varepsilon _{ni}\) are random errors. Assume that for each n, \(\{\varepsilon _{ni},1\le i\le n\}\) has the same distribution as that of \(\{\varepsilon _{i},1\le i\le n\}\). As an estimator of \(f (\cdot )\), the following weighted regression estimator will be considered:

$$\begin{aligned} f_n(x)=\sum _{i=1}^n W_{ni}(x)Y_{ni},\quad x\in A\subset {\mathbb {R}}^d, \end{aligned}$$
(1.4)

where \(W_{ni}(x)= W_{ni}(x; x_{n1},x_{n2},\ldots ,x_{nn})\), \(i=1,2,\ldots ,n\) are the weight functions.
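For concreteness, here is a minimal R sketch of (1.4) (R is the software used for the simulations in Section 2; the function name and argument names are ours):

```r
# Weighted estimator (1.4) at a fixed point x:
# f_n(x) = sum_{i=1}^n W_ni(x) * Y_ni,
# where `w` holds the weights W_n1(x), ..., W_nn(x)
# and `y` holds the responses Y_n1, ..., Y_nn from model (1.3).
fn_hat <- function(w, y) sum(w * y)
```

With the nearest neighbor weights recalled in Section 2, `w` has \(k_n\) entries equal to \(1/k_n\) and the rest equal to zero.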

The above weighted estimator \(f_{n}(x)\) was first proposed by Stone [25], adapted by Georgiev [4] to the fixed design case, and has since been studied by many authors. For instance, Georgiev and Greblicki [6], Georgiev [5] and Müller [15], among others, studied the consistency and asymptotic normality of the weighted estimator \(f_n(x)\) when the \(\varepsilon _{ni}\) are assumed to be independent. When the \(\varepsilon _{ni}\) are dependent errors, many authors have also obtained many interesting results in recent years. Fan [3] extended the work of Georgiev [5] and Müller [15] on the estimation of the regression model to the case where the errors form an \(L_q\)-mixingale sequence for some \(1\le q\le 2\). Roussas [17] discussed strong consistency and quadratic mean consistency for \(f_n(x)\) under mixing conditions. Roussas et al. [18] established the asymptotic normality of \(f_n(x)\) assuming that the errors come from a strictly stationary stochastic process satisfying the strong mixing condition. Tran et al. [26] discussed the asymptotic normality of \(f_n(x)\) assuming that the errors form a linear time series, more precisely, a weakly stationary linear process based on a martingale difference sequence. Hu et al. [8] studied the asymptotic normality for double array sums of linear time series. Hu et al. [9] gave the mean consistency, complete consistency and asymptotic normality of regression models with linear process errors. Liang and Jing [13] presented some asymptotic properties for estimates of nonparametric regression models based on negatively associated sequences. Shen [19] presented the Bernstein-type inequality for widely dependent sequences and gave its applications to nonparametric regression models. Wang et al. [29] studied the complete convergence for WOD random variables and gave its application to nonparametric regression models. Wang et al. [28] established some results on complete consistency for the weighted estimator of nonparametric regression models based on END random errors. Shen et al. [24] presented the Rosenthal-type inequality for NSD random variables and gave its application to nonparametric regression models. Yang et al. [35] provided the convergence rate for the complete consistency of the weighted estimator of nonparametric regression models based on END random errors, and so on.

Unless otherwise specified, we assume throughout the paper that \(f_n(x)\) is defined by (1.4). For any function f, we use c(f) to denote the set of all continuity points of f on A. The norm \(\Vert x\Vert \) is the Euclidean norm. For any fixed design point \(x \in A\), the following assumptions on the weight functions \(W_{ni}(x)\) will be used:

(\(H_1\)):

\(\sum _{i=1}^nW_{ni}(x)\rightarrow 1\) as \(n\rightarrow \infty \);

(\(H_2\)):

\(\sum _{i=1}^n\left| W_{ni}(x)\right| \le C<\infty \) for all n;

(\(H_3\)):

\(\sum _{i=1}^n\left| W_{ni}(x)\right| \cdot \left| f(x_{ni})-f(x)\right| I(\Vert x_{ni}-x\Vert >a)\rightarrow 0\) as \(n\rightarrow \infty \) for all \(a>0\).

Recently, Wang et al. [29] established the following result on complete consistency for the weighted estimator \(f_n(x)\) based on the assumptions above.

Theorem A

Let \(\{\varepsilon _{n}, n \ge 1\}\) be a sequence of WOD random variables with mean zero, which is stochastically dominated by a random variable X. Suppose that the conditions (\(H_1\))–(\(H_3\)) hold true, and

$$\begin{aligned} \max _{1\le i\le n}|W_{ni}(x)|=O\left( n^{-1/p}\right) \end{aligned}$$
(1.5)

holds for some \(p \ge 1\). Assume further that there exists some \(0\le \lambda <1\) such that \(g(n)=O(n^{\lambda /p})\), where \(g(n)\doteq \max \{g_{U}(n),g_{L}(n)\}\) denotes the dominating coefficient. If \(E|X|^{2p+\lambda }<\infty \), then for any \(x \in c(f)\),

$$\begin{aligned} f_n(x)\rightarrow f(x)~~\text {completely},\quad \text {as}\; n\rightarrow \infty . \end{aligned}$$
(1.6)

In Theorem A, the moment condition \(E|X|^{2p+\lambda }<\infty \) depends not only on p but also on \(\lambda \), which seems strange. We wonder whether \(E|X|^{2p+\lambda }<\infty \) could be improved to \(E|X|^{2p}<\infty \), and whether \(g(n)=O(n^{\lambda /p})\) for some \(0\le \lambda <1\) and \(p \ge 1\) could be replaced by the more general condition \(g(n)=O(n^{\lambda })\) for some \(\lambda \ge 0\). The answers are positive. Our main result is as follows.

Theorem 1.1

Let \(\{\varepsilon _{n}, n \ge 1\}\) be a sequence of WOD random variables with mean zero, which is stochastically dominated by a random variable X. Suppose that the conditions (\(H_1\))–(\(H_3\)) hold true, and (1.5) holds for some \(p\ge 1\). Assume further that there exists some \(\lambda \ge 0\) such that \(g(n)=O(n^{\lambda })\). If \(E|X|^{2p}<\infty \), then (1.6) holds for any \(x \in c(f)\).

Remark 1.1

Comparing Theorem 1.1 with Theorem A, we can see that the moment condition \(E|X|^{2p}<\infty \) in Theorem 1.1 is weaker than \(E|X|^{2p+\lambda }<\infty \) in Theorem A. In addition, the condition on dominating coefficients \(g(n)=O(n^{\lambda })\) in Theorem 1.1 is also weaker than \(g(n)=O(n^{\lambda /p})\) in Theorem A. Hence, the result of Theorem 1.1 generalizes and improves the corresponding one of Theorem A.

Remark 1.2

If \(g(n)=O(1)\), then the sequence of WOD random variables reduces to a sequence of END random variables. Hence, Theorem 1.1 holds for END random variables. In addition, the result of Theorem 1.1 generalizes the corresponding one of Shen [22, Corollary 3.1] for END random variables to the case of WOD random variables.

2 An application to nearest neighbor estimation and numerical simulation

In this section, we will give an application of the main result to nearest neighbor estimation and carry out a numerical simulation to verify the result that we obtained. Wang et al. [29] have shown that conditions (\(H_{1}\))–(\(H_{3}\)) are satisfied for the nearest neighbor estimator by choosing \(k_{n}=\lfloor n^{1/p}\rfloor \) for some \(p>1\), where here and below \(\lfloor x\rfloor \) denotes the integer part of x. We immediately obtain the following result by Theorem 1.1. The details are omitted.

Theorem 2.1

Let \(\{\varepsilon _{n}, n \ge 1\}\) be a sequence of WOD random variables with mean zero, which is stochastically dominated by a random variable X. Suppose that \(f_{n}(x)\) is the nearest neighbor estimator of f(x) and \(k_{n}=\lfloor n^{1/p}\rfloor \) for some \(p>1\). Assume further that there exists some \(\lambda \ge 0\) such that \(g(n)=O(n^{\lambda })\). If \(E|X|^{2p}<\infty \), then (1.6) holds for any \(x \in c(f)\).
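In Theorem 2.1, \(f_{n}(x)\) is understood with the usual nearest neighbor weights (cf. Wang et al. [29]; the tie-breaking rule below is one common convention), which we recall for the reader's convenience. For fixed \(x\in A\), rearrange \(\Vert x_{n1}-x\Vert ,\Vert x_{n2}-x\Vert ,\ldots ,\Vert x_{nn}-x\Vert \) in nondecreasing order, breaking ties by the original index, and let \(R_{ni}(x)\) denote the resulting rank of \(\Vert x_{ni}-x\Vert \). The nearest neighbor weights are

$$\begin{aligned} W_{ni}(x)=\left\{ \begin{array}{ll} 1/k_{n}, &{} \text {if }R_{ni}(x)\le k_{n},\\ 0, &{} \text {otherwise}, \end{array}\right. \end{aligned}$$

so that \(\sum _{i=1}^{n}W_{ni}(x)=1\) and \(\max _{1\le i\le n}W_{ni}(x)=1/k_{n}\); with \(k_{n}=\lfloor n^{1/p}\rfloor \), this is exactly condition (1.5).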

Now, we present the simulation study. The data are generated from model (1.3). For any fixed \(n\ge 3\), let \((\varepsilon _{1},\varepsilon _{2},\ldots ,\varepsilon _{n})\sim N_{n}({\varvec{0}},{\varvec{\Sigma }})\), where \({\varvec{0}}\) represents the zero vector and

$$\begin{aligned} {\varvec{\Sigma }}=\left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c}1.25 &{}-0.5 &{}0&{}\cdots &{} 0 &{} 0 &{} 0 \\ -0.5 &{}1.25 &{}-0.5&{}\cdots &{} 0 &{} 0 &{} 0 \\ 0 &{}-0.5 &{}1.25&{}\cdots &{} 0 &{} 0 &{} 0 \\ \vdots &{}\vdots &{} \vdots &{} &{} \vdots &{} \vdots &{} \vdots \\ 0 &{}0 &{}0 &{}\cdots &{} 1.25 &{} -0.5 &{}0 \\ 0 &{}0 &{}0 &{}\cdots &{} -0.5 &{} 1.25 &{} -0.5 \\ 0 &{}0 &{}0 &{}\cdots &{} 0&{}-0.5 &{} 1.25 \end{array}\right) _{n\times n}. \end{aligned}$$

By Joag-Dev and Proschan [11], it can be seen that \((\varepsilon _{1},\varepsilon _{2},\ldots ,\varepsilon _{n})\) is an NA vector for each \(n\ge 3\), and thus a WOD vector. Choosing \(k_{n}=\lfloor n^{0.5}\rfloor \), taking the points \(x=i/n\) for \(i=1,2,\ldots ,n\) and the sample sizes \(n=50,100,200\), we use R software to compute the estimator \(f_{n}(x)\) of f(x) with \(f(x)=\sin (2\pi x)\) and \(f(x)=x^{2}\) over 500 replications. The comparisons of \(f_{n}(x)\) and f(x) are presented in Figs. 1, 2, 3, 4, 5 and 6.
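A minimal R sketch of this simulation is as follows; the random seed, the Cholesky-based sampling of \(N_{n}({\varvec{0}},{\varvec{\Sigma }})\), and the averaging of the 500 replications before plotting are our own choices, since these details are not reported above.

```r
set.seed(1)                          # our choice; not specified in the paper
n  <- 100                            # sample size (also try 50 and 200)
kn <- floor(n^0.5)                   # k_n = floor(n^{0.5})
x  <- (1:n) / n                      # fixed design points x_{ni} = i/n
f  <- function(t) sin(2 * pi * t)    # true regression function

# Tridiagonal covariance: 1.25 on the diagonal, -0.5 on the off-diagonals,
# so that (eps_1, ..., eps_n) is an NA (hence WOD) Gaussian vector.
Sigma <- diag(1.25, n)
Sigma[abs(row(Sigma) - col(Sigma)) == 1] <- -0.5
L <- chol(Sigma)                     # Sigma = t(L) %*% L

R  <- 500                            # number of replications
fn <- matrix(0, R, n)                # nearest neighbor estimates
for (r in 1:R) {
  eps <- as.vector(t(L) %*% rnorm(n))      # errors ~ N_n(0, Sigma)
  Y   <- f(x) + eps                        # model (1.3)
  for (j in 1:n) {
    idx      <- order(abs(x - x[j]))[1:kn] # k_n nearest design points
    fn[r, j] <- mean(Y[idx])               # weights 1/k_n, as in (1.4)
  }
}

plot(x, f(x), type = "l", ylab = "f(x)")   # solid line: true function
lines(x, colMeans(fn), lty = 2)            # dashed line: averaged estimator
```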

Fig. 1 Comparison of \(f_{n}(x)\) and \(f(x)=\sin (2\pi x)\) with \(n=50\)

Fig. 2 Comparison of \(f_{n}(x)\) and \(f(x)=\sin (2\pi x)\) with \(n=100\)

Fig. 3 Comparison of \(f_{n}(x)\) and \(f(x)=\sin (2\pi x)\) with \(n=200\)

Figures 1, 2 and 3 compare \(f_{n}(x)\) with \(f(x)=\sin (2\pi x)\), and Figs. 4, 5 and 6 compare \(f_{n}(x)\) with \(f(x)=x^2\), where the solid lines are the true functions and the dashed lines are the estimators. From Figs. 1, 2 and 3 we can see a good fit to the true function. There are some fluctuations in Figs. 2 and 3 because the values of x are divided into more fragments as n increases. In particular, when x is close to 0 or 1, the estimator approaches the true value as n increases. Figures 4, 5 and 6 reflect the same behavior, that is, the estimator converges to the true function as the sample size n increases. These observations basically agree with the results obtained in the paper.

Fig. 4 Comparison of \(f_{n}(x)\) and \(f(x)=x^{2}\) with \(n=50\)

Fig. 5 Comparison of \(f_{n}(x)\) and \(f(x)=x^{2}\) with \(n=100\)

Fig. 6 Comparison of \(f_{n}(x)\) and \(f(x)=x^{2}\) with \(n=200\)

3 Some lemmas

In this section, we present some crucial lemmas which will be used to prove the main result. The first one is a basic property of WOD random variables, which can be found in Wang et al. [29].

Lemma 3.1

Let \(\{X_n, n\ge 1\}\) be a sequence of WOD random variables.

(i):

If \(\{f_n(\cdot ), n\ge 1\}\) are all nondecreasing (or all nonincreasing), then \(\{f_n(X_n), n\ge 1\}\) are still WOD;

(ii):

For each \(n\ge 1\) and any \(s\in {\mathbb {R}}\),

$$\begin{aligned} E\exp \left\{ s\sum _{i=1}^nX_i\right\} \le g(n)\prod _{i=1}^nE\exp \{sX_i\}. \end{aligned}$$

The next one is a complete convergence result for arrays of rowwise WOD random variables, which is of some interest in itself and plays a key role in the proof of the main result of the paper.

Lemma 3.2

Let \(\{X_{ni}, 1\le i\le n, n\ge 1\}\) be an array of rowwise WOD random variables with mean zero and \(\{a_n, n\ge 1\}\) be a sequence of positive constants. Suppose that the following conditions hold:

(i):

\(\max \nolimits _{1\le i\le n}|X_{ni}|\le Ca_n\) a.s.;

(ii):

\(\sum \nolimits _{i=1}^{n}EX_{ni}^2=o(a_n)\);

(iii):

there exists some \(\beta >0\) such that

$$\begin{aligned} \sum \limits _{n=1}^{\infty }g(n)e^{-\frac{\beta }{a_n}}<\infty . \end{aligned}$$

Then for any \(\varepsilon >0\),

$$\begin{aligned} \sum \limits _{n=1}^{\infty }P\left( \left| \sum \limits _{i=1}^{n}X_{ni}\right| \ge \varepsilon \right) <\infty , \end{aligned}$$
(3.1)

i.e. \(\sum \limits _{i=1}^{n}X_{ni}\rightarrow 0\) completely as \(n\rightarrow \infty \).
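Before turning to the proof, it may help to see how condition (iii) is verified in the typical case used in Section 4: if \(a_n=(\log n)^{-1}\) and \(g(n)=O(n^{\lambda })\) for some \(\lambda \ge 0\), then

$$\begin{aligned} \sum \limits _{n=1}^{\infty }g(n)e^{-\frac{\beta }{a_n}}\le C\sum \limits _{n=1}^{\infty }n^{\lambda }\cdot n^{-\beta }<\infty \end{aligned}$$

for any choice of \(\beta >\lambda +1\).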

Proof

Noting that \(EX_{ni}=0\), we have for any \(t>0\), by the elementary inequality \(e^x\le 1+x+\frac{1}{2}x^2 e^{|x|}\) for \(x\in {\mathbb {R}}\) and condition (i), that

$$\begin{aligned} E\exp (tX_{ni})\le & {} E\left\{ 1+tX_{ni}+\frac{1}{2}t^2X_{ni}^2\exp (|tX_{ni}|)\right\} \nonumber \\= & {} 1+\frac{1}{2}t^2E\left[ X_{ni}^2\exp (|tX_{ni}|)\right] \nonumber \\\le & {} \exp \left\{ \frac{1}{2}t^2E\left[ X_{ni}^2\exp (|tX_{ni}|)\right] \right\} \nonumber \\\le & {} \exp \left\{ \frac{1}{2}t^2e^{Cta_n}EX_{ni}^2\right\} . \end{aligned}$$
(3.2)

By Markov’s inequality and Lemma 3.1 (ii), we can get that

$$\begin{aligned} P\left( \sum \limits _{i=1}^{n}X_{ni} \ge \varepsilon \right)\le & {} e^{-t\varepsilon }E\exp \left\{ t\sum \limits _{i=1}^{n}X_{ni}\right\} \nonumber \\\le & {} g(n)e^{-t\varepsilon }\prod \limits _{i=1}^{n}E\exp (tX_{ni}), \end{aligned}$$

which together with (3.2) and condition (ii) yields that

$$\begin{aligned} P\left( \sum \limits _{i=1}^{n}X_{ni} \ge \varepsilon \right)\le & {} g(n)e^{-t\varepsilon }\exp \left\{ \frac{1}{2}t^2e^{Cta_n}\sum \limits _{i=1}^{n}EX_{ni}^2\right\} \nonumber \\\le & {} g(n)e^{-t\varepsilon }\exp \left\{ \frac{1}{2}t^2e^{Cta_n}o(a_n)\right\} . \end{aligned}$$
(3.3)

Setting \(t=\frac{\beta +1}{\varepsilon a_n}\) in (3.3), we can see that

$$\begin{aligned} P\left( \sum \limits _{i=1}^{n}X_{ni} \ge \varepsilon \right)\le & {} g(n)e^{-\frac{\beta +1}{a_n}}\exp \left\{ \frac{1}{2}\left( \frac{\beta +1}{\varepsilon }\right) ^2o\left( \frac{1}{a_n}\right) e^{\frac{C(\beta +1)}{\varepsilon }}\right\} \nonumber \\\le & {} g(n)e^{-\frac{\beta }{a_n}} \end{aligned}$$
(3.4)

holds for all sufficiently large n. Since finitely many terms do not affect the convergence of a series, we have by (3.4) and condition (iii) that

$$\begin{aligned} \sum \limits _{n=1}^{\infty }P\left( \sum \limits _{i=1}^{n}X_{ni} \ge \varepsilon \right) \le \sum \limits _{n=1}^{\infty }g(n)e^{-\frac{\beta }{a_n}}<\infty . \end{aligned}$$
(3.5)

It follows from Lemma 3.1 (i) that \(\{-X_{ni},1\le i\le n,n\ge 1\}\) is still an array of rowwise WOD random variables with mean zero, satisfying the same conditions (i)–(iii). Hence, applying (3.5) to \(\{-X_{ni}\}\), we obtain

$$\begin{aligned} \sum \limits _{n=1}^{\infty }P\left( \sum \limits _{i=1}^{n}X_{ni} \le - \varepsilon \right) <\infty . \end{aligned}$$
(3.6)

Note for any \(\varepsilon >0\),

$$\begin{aligned} P\left( \left| \sum \limits _{i=1}^{n}X_{ni}\right| \ge \varepsilon \right) =P\left( \sum \limits _{i=1}^{n}X_{ni} \ge \varepsilon \right) +P\left( \sum \limits _{i=1}^{n}X_{ni} \le - \varepsilon \right) . \end{aligned}$$
(3.7)

Therefore, the desired result (3.1) follows from (3.5)–(3.7) immediately. The proof is completed. \(\square \)

The last one is a basic property of stochastic domination, which can be found in Wu [33] and Shen [20], among others.

Lemma 3.3

Let \(\{X_{ni},1\le i\le n,n\ge 1\}\) be an array of random variables which is stochastically dominated by a random variable X. For any \(\alpha >0\) and \(b>0\), it follows that

$$\begin{aligned} E|X_{ni}|^\alpha I(|X_{ni}|\le b)\le & {} C_1[E|X|^\alpha I(|X|\le b)+b^\alpha P(|X|>b)],\nonumber \\ E|X_{ni}|^\alpha I(|X_{ni}|>b)\le & {} C_2E|X|^\alpha I(|X|>b), \end{aligned}$$

where \(C_1\) and \(C_2\) are positive constants.

4 Proof of Theorem 1.1

For any \(x\in c(f)\) and \(a>0\), it follows by (1.3) and (1.4) that

$$\begin{aligned} |Ef_n(x)-f(x)|\le & {} \sum \limits _{i=1}^{n}|W_{ni}(x)|\cdot |f(x_{ni})-f(x)|I(\Vert x_{ni}-x\Vert \le a)\nonumber \\&+\sum \limits _{i=1}^{n}|W_{ni}(x)|\cdot |f(x_{ni})-f(x)|I(\Vert x_{ni}-x\Vert >a)\nonumber \\&+|f(x)|\cdot \left| \sum \limits _{i=1}^{n}W_{ni}(x)-1\right| . \end{aligned}$$
(4.1)

Because \(x\in c(f)\), it follows that for any \(\varepsilon >0\), there exists a \(\delta >0\) such that \(|f(x^*)-f(x)|<\varepsilon \) when \(\Vert x^*-x\Vert <\delta \). Letting \(a\in (0,\delta )\) in (4.1), we have

$$\begin{aligned} |Ef_n(x)-f(x)|\le & {} \varepsilon \sum \limits _{i=1}^{n}|W_{ni}(x)|+|f(x)|\cdot \left| \sum \limits _{i=1}^{n} W_{ni}(x)-1\right| \nonumber \\&+\sum \limits _{i=1}^{n}|W_{ni}(x)|\cdot |f(x_{ni})-f(x)|I(\Vert x_{ni}-x\Vert >a). \end{aligned}$$
(4.2)

By (4.2), conditions \((H_1)\)–\((H_3)\) and the arbitrariness of \(\varepsilon \), we can get that for any \(x\in c(f)\),

$$\begin{aligned} \lim _{n\rightarrow \infty }Ef_n(x)=f(x). \end{aligned}$$
(4.3)

Noting that \(W_{ni}(x)=W_{ni}(x)^+-W_{ni}(x)^-\), we may assume without loss of generality that \(W_{ni}(x)>0\) and, in view of (1.5), that \(\max \nolimits _{1\le i\le n}W_{ni}(x)\le n^{-1/p}\) for any \(x\in c(f)\). In view of (4.3), to prove (1.6), we only need to show

$$\begin{aligned} f_n(x)-Ef_n(x)=\sum \limits _{i=1}^{n}W_{ni}(x)\varepsilon _{ni}\doteq \sum \limits _{i=1}^{n}T_{ni}\rightarrow 0 \quad \text {completely, \quad as}~n\rightarrow \infty , \end{aligned}$$

i.e.

$$\begin{aligned} \sum _{n=1}^\infty P\left( \left| \sum \limits _{i=1}^{n}T_{ni}\right|>\varepsilon \right) <\infty \quad \text {for any }\varepsilon >0, \end{aligned}$$
(4.4)

where \(T_{ni}=W_{ni}(x)\cdot \varepsilon _{ni}\).

For any \(\varepsilon >0\), take q such that \(p<q<2p\) and denote for \(i=1,2,\ldots ,n\) and \(n\ge 1\) that

$$\begin{aligned} X_{ni}(1)= & {} -n^{\frac{1}{q}}I(\varepsilon _{ni}<-n^{\frac{1}{q}})+\varepsilon _{ni}I(|\varepsilon _{ni}|\le n^{\frac{1}{q}})+n^{\frac{1}{q}}I(\varepsilon _{ni}>n^{\frac{1}{q}}),\\ X_{ni}(2)= & {} (\varepsilon _{ni}-n^{\frac{1}{q}})I(n^{\frac{1}{q}}<\varepsilon _{ni}\le \varepsilon n^{1/p}/(4N)),\\ X_{ni}(3)= & {} (\varepsilon _{ni}+n^{\frac{1}{q}})I(-n^{\frac{1}{q}}>\varepsilon _{ni}\ge -\varepsilon n^{1/p}/(4N)),\\ X_{ni}(4)= & {} (\varepsilon _{ni}-n^{\frac{1}{q}})I(\varepsilon _{ni}>\varepsilon n^{1/p}/(4N))+(\varepsilon _{ni}+n^{\frac{1}{q}})I(\varepsilon _{ni}<-\varepsilon n^{1/p}/(4N)), \end{aligned}$$

where N is a positive integer whose value will be specified later. Noting that \(X_{ni}(1)+X_{ni}(2)+X_{ni}(3)+X_{ni}(4)=\varepsilon _{ni}\) (for instance, if \(\varepsilon _{ni}>\varepsilon n^{1/p}/(4N)\), then \(X_{ni}(1)=n^{1/q}\), \(X_{ni}(2)=X_{ni}(3)=0\) and \(X_{ni}(4)=\varepsilon _{ni}-n^{1/q}\)), we know that

$$\begin{aligned} \sum _{n=1}^\infty P\left( \left| \sum \limits _{i=1}^{n}T_{ni}\right|>\varepsilon \right)\le & {} \sum _{j=1}^4 \sum _{n=1}^\infty P\left( \left| \sum \limits _{i=1}^{n}W_{ni}(x)X_{ni}(j)\right| >\frac{\varepsilon }{4}\right) \nonumber \\\doteq & {} I_1+I_2+I_3+I_4. \end{aligned}$$
(4.5)

Thus, in order to prove (4.4), we only need to prove \(I_j<\infty \) for \(j=1,2,3,4\).

First, we will prove \(I_1<\infty \). Noting that \(E\varepsilon _{i}=0\), we have by Markov’s inequality, condition \((H_2)\) and Lemma 3.3 that

$$\begin{aligned} \left| \sum \limits _{i=1}^{n}W_{ni}(x)EX_{ni}(1)\right|\le & {} n^{1/q}\sum \limits _{i=1}^{n}W_{ni}(x)P\left( |\varepsilon _{ni}|>n^{1/q}\right) +\left| \sum \limits _{i=1}^{n}W_{ni}(x)E\varepsilon _{ni}I(|\varepsilon _{ni}|\le n^{1/q})\right| \nonumber \\= & {} n^{1/q}\sum \limits _{i=1}^{n}W_{ni}(x)P\left( |\varepsilon _{i}|>n^{1/q}\right) +\left| \sum \limits _{i=1}^{n}W_{ni}(x)E\varepsilon _{i}I(|\varepsilon _{i}|> n^{1/q})\right| \nonumber \\\le & {} C\sum \limits _{i=1}^{n}W_{ni}(x)E|X|I(|X|> n^{1/q})\nonumber \\\le & {} Cn^{-(2p-1)/q}\sum \limits _{i=1}^{n}W_{ni}(x)E|X|^{2p}I(|X|>n^{1/q})\nonumber \\\le & {} CE|X|^{2p}n^{-(2p-1)/q}\rightarrow 0, \quad \text {as} \; n\rightarrow \infty , \end{aligned}$$

which implies that for all n large enough,

$$\begin{aligned} \left| \sum \limits _{i=1}^{n}W_{ni}(x)EX_{ni}(1)\right| < \frac{\varepsilon }{8}. \end{aligned}$$

Thus, to prove \(I_1<\infty \), we only need to show

$$\begin{aligned} \sum _{n=1}^\infty P\left( \left| \sum \limits _{i=1}^{n}W_{ni}(x)(X_{ni}(1)-EX_{ni}(1))\right| >\frac{\varepsilon }{8}\right) <\infty . \end{aligned}$$
(4.6)

For fixed \(n\ge 1\) and \(x\in c(f)\), since \(X_{ni}(1)\) is a nondecreasing function of \(\varepsilon _{ni}\) and \(W_{ni}(x)>0\), we can see by Lemma 3.1 (i) that \(\{W_{ni}(x)(X_{ni}(1)-EX_{ni}(1)), 1\le i\le n\}\) are still mean zero WOD random variables. To prove (4.6), we will make use of Lemma 3.2 with \(X_{ni}=W_{ni}(x)(X_{ni}(1)-EX_{ni}(1))\) and \(a_n=(\log n)^{-1}\). Now, we will check that all conditions of Lemma 3.2 hold true.

Since \(p<q\), we can get that

$$\begin{aligned} \max \limits _{1\le i\le n}\left| W_{ni}(x)(X_{ni}(1)-EX_{ni}(1))\right| \le \frac{2n^{1/q}}{n^{1/p}}=2n^{-(1/p-1/q)}=O((\log n)^{-1})=O(a_n), \end{aligned}$$
(4.7)

which implies that condition (i) of Lemma 3.2 is satisfied.

Since \(p\ge 1\), the condition \(E|X|^{2p}<\infty \) implies \(EX^2<\infty \). Hence, we have by Lemma 3.3, condition \((H_2)\) and (1.5) that

$$\begin{aligned} \sum \limits _{i=1}^{n}E|W_{ni}(x)(X_{ni}(1)-EX_{ni}(1))|^2\le & {} \sum \limits _{i=1}^{n}W_{ni}^2(x)EX^2_{ni}(1)\nonumber \\\le & {} \mathop {\max }\limits _{1\le i\le n}W_{ni}(x)\sum \nolimits _{i=1}^{n}W_{ni}(x)E\varepsilon _{i}^2\nonumber \\\le & {} Cn^{-1/p}~=~o((\log n)^{-1}), \end{aligned}$$
(4.8)

which implies that condition (ii) of Lemma 3.2 is satisfied.

For condition (iii) of Lemma 3.2, note that \(g(n)=O(n^{\lambda })\) for some \(\lambda \ge 0\) and \(e^{-\beta /a_n}=n^{-\beta }\), so that \(\sum _{n=1}^{\infty }g(n)e^{-\beta /a_n}\le C\sum _{n=1}^{\infty }n^{\lambda -\beta }<\infty \) provided that we choose \(\beta >\lambda +1\). Hence, condition (iii) of Lemma 3.2 is also satisfied. Therefore, (4.6) follows from Lemma 3.2 immediately, and thus, \(I_1<\infty \) holds.

Next, we will show that \(I_2<\infty \). By the definition of \(X_{ni}(2)\), we can see that \(0\le X_{ni}(2)<\varepsilon n^{1/p}/(4N)\). On the other hand, we have \(0<W_{ni}(x)\le n^{-1/p}\), so each summand satisfies \(W_{ni}(x)X_{ni}(2)<\varepsilon /(4N)\). Hence, for any \(\varepsilon >0\), \(\left| \sum \nolimits _{i=1}^{n}W_{ni}(x)X_{ni}(2)\right| =\sum \nolimits _{i=1}^{n}W_{ni}(x)X_{ni}(2)>\varepsilon /4\) implies that at least N of the \(X_{ni}(2)\), \(1\le i\le n\), are nonzero. Thus, we have by the definition of WOD random variables and \(E|X|^{2p}<\infty \) that

$$\begin{aligned}&P\left( \left| \sum \limits _{i=1}^{n}W_{ni}(x)X_{ni}(2)\right|>\frac{\varepsilon }{4}\right) \nonumber \\&\le P\left( \text {at least }N\text { of the }X_{ni}(2)\text { are nonzero}\right) \nonumber \\&\le \sum \limits _{1\le i_1<i_2<\cdots<i_N\le n}P(X_{ni_1}(2)\ne 0,X_{ni_2}(2)\ne 0,\ldots ,X_{ni_N}(2)\ne 0)\nonumber \\&\le \sum \limits _{1\le i_1<i_2<\cdots<i_N\le n}P(\varepsilon _{ni_1}>n^{1/q},\varepsilon _{ni_2}>n^{1/q},\ldots ,\varepsilon _{ni_N}>n^{1/q})\nonumber \\&\le g(n)\sum \limits _{1\le i_1<i_2<\cdots <i_N\le n}P(\varepsilon _{ni_1}>n^{1/q})P(\varepsilon _{ni_2}>n^{1/q})\cdots P(\varepsilon _{ni_N}>n^{1/q})\nonumber \\&\le g(n)\left[ \sum \limits _{i=1}^{n}P(\varepsilon _{i}>n^{1/q})\right] ^N~\le ~ Cg(n) \left[ nP(|X|>n^{1/q})\right] ^N\nonumber \\&\le Cg(n) n^{-(2p/q-1)N}. \end{aligned}$$
(4.9)

Noting that \(g(n)=O(n^{\lambda })\) for some \(\lambda \ge 0\), we have by (4.9) that

$$\begin{aligned} I_2=\sum _{n=1}^\infty P\left( \left| \sum \limits _{i=1}^{n}W_{ni}(x)X_{ni}(2)\right| >\frac{\varepsilon }{4}\right) \le C \sum _{n=1}^\infty n^{\lambda -(2p/q-1)N}<\infty , \end{aligned}$$
(4.10)

provided that \(N>\frac{q(\lambda +1)}{2p-q}\); such a choice of N is possible since \(q<2p\).

For \(I_3\), due to \(-\varepsilon n^{1/p}/(4N)<X_{ni}(3)\le 0\) and \(0<W_{ni}(x)\le n^{-1/p}\), \(\left| \sum \nolimits _{i=1}^{n}W_{ni}(x)X_{ni}(3)\right| =-\sum \nolimits _{i=1}^{n}W_{ni}(x)X_{ni}(3)>\varepsilon /4\) implies that at least N of the \(X_{ni}(3)\) are nonzero. Analogous to the proof of \(I_2<\infty \), we have \(I_3<\infty \).

Finally, we will show that \(I_4<\infty \). Noting that \(X_{ni}(4)\ne 0\) implies \(|\varepsilon _{ni}|>\varepsilon n^{1/p}/(4N)\) and that \(E|X|^{2p}<\infty \), we have

$$\begin{aligned} I_4\doteq & {} \sum \limits _{n=1}^{\infty }P\left( \left| \sum \limits _{i=1}^{n}W_{ni}(x)X_{ni}(4)\right|>\frac{\varepsilon }{4}\right) \nonumber \\\le & {} \sum \limits _{n=1}^{\infty }\sum \limits _{i=1}^{n}P(|\varepsilon _{i}|>\varepsilon n^{1/p}/(4N))\nonumber \\\le & {} C\sum \limits _{n=1}^{\infty }nP(|X|>\varepsilon n^{1/p}/(4N))\nonumber \\\le & {} CE|X|^{2p}<\infty , \end{aligned}$$
(4.11)

which implies that \(I_4<\infty \). Thus, (4.4) follows from (4.5) and \(I_1<\infty ,I_2<\infty ,I_3<\infty ,I_4<\infty \) immediately. This completes the proof of the theorem. \(\square \)