1 Introduction

Let \(\{X_n,n\ge 1\}\) be a sequence of random variables defined on a fixed probability space \((\Omega ,\mathcal {F},P)\). The Rosenthal-type inequality for the maximum partial sum \(\max _{1\le m\le n}\sum _{i=1}^m X_i\) plays an important role in probability limit theory and mathematical statistics. The main purpose of the paper is to present some applications of the Rosenthal-type inequality for negatively superadditive dependent random variables, such as the complete consistency of an estimator in a nonparametric regression model and a weak law of large numbers. First, let us recall the concept of negatively superadditive dependent random variables, which was introduced by Hu (2000).

Definition 1.1

(cf. Kemperman 1977) A function \(\phi ~:\mathbb {R}^{n}\rightarrow \mathbb {R}\) is called superadditive if \(\phi (\mathbf{x} \vee \mathbf{y})+\phi (\mathbf{x} \wedge \mathbf{y})\ge \phi (\mathbf{x})+\phi (\mathbf{y})\) for all \(\mathbf{x},\mathbf{y}\in \mathbb {R}^{n}\), where \(\vee \) stands for componentwise maximum and \(\wedge \) denotes componentwise minimum.
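For orientation, we record a standard example (given here only for illustration): the quadratic form \(\phi (x_1,x_2,\ldots ,x_n)=\sum _{1\le i<j\le n}x_ix_j\) is superadditive, since for each pair \(i<j\),

$$\begin{aligned} (x_i\vee y_i)(x_j\vee y_j)+(x_i\wedge y_i)(x_j\wedge y_j)\ge x_ix_j+y_iy_j, \end{aligned}$$

which can be checked directly: if \(x_i\le y_i\) and \(x_j>y_j\), the difference of the two sides equals \((y_i-x_i)(x_j-y_j)\ge 0\); the case \(x_i>y_i\), \(x_j\le y_j\) is symmetric; and in the remaining cases the two sides are equal. Summing over all pairs \(i<j\) yields \(\phi (\mathbf{x} \vee \mathbf{y})+\phi (\mathbf{x} \wedge \mathbf{y})\ge \phi (\mathbf{x})+\phi (\mathbf{y})\). Quadratic forms of this type appear again in the results of Eghbal et al. (2010, 2011) quoted below.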

Definition 1.2

(cf. Hu 2000) A random vector \(\mathbf{X}=(X_1,X_2,\ldots ,X_n)\) is said to be negatively superadditive dependent (NSD) if

$$\begin{aligned} E\phi (X_1,X_2,\ldots ,X_n)\le E\phi (X_1^{*},X_2^{*},\ldots ,X_n^{*}), \end{aligned}$$
(1.1)

where \(X_1^{*},X_2^{*},\ldots ,X_n^{*}\) are independent such that \(X_i^{*}\) and \(X_i\) have the same distribution for each \(i\), and \(\phi \) is a superadditive function such that the expectations in (1.1) exist.

A sequence \(\{X_n,n\ge 1\}\) of random variables is said to be NSD if for all \(n\ge 1,\, (X_1,X_2,\ldots ,X_n)\) is NSD.

The concept of NSD random variables was introduced by Hu (2000) and is based on the class of superadditive functions. Hu (2000) gave an example illustrating that NSD does not imply negative association (NA, in short; see Joag-Dev and Proschan 1983; Wu and Jiang 2010a, b; or Wang et al. 2011), and posed the open problem of whether NA implies NSD. Christofides and Vaggelatou (2004) solved this open problem and showed that NA implies NSD. The concept of negatively superadditive dependence thus extends that of negative association and is sometimes more useful than the latter, since it can be used to obtain many important probability inequalities. Eghbal et al. (2010) derived two maximal inequalities and a strong law of large numbers for quadratic forms of NSD random variables under the assumption that \(\{X_i,i\ge 1\}\) is a sequence of nonnegative NSD random variables with \(EX_i^{r}<\infty \) for all \(i\ge 1\) and some \(r>1\). Eghbal et al. (2011) provided a Kolmogorov-type inequality for quadratic forms \(T_n=\sum _{1\le i<j\le n}X_iX_j\) and weighted quadratic forms \(Q_n=\sum _{1\le i<j\le n}a_{ij}X_iX_j\), where \(\{X_i,i\ge 1\}\) is a sequence of nonnegative, uniformly bounded NSD random variables. Shen et al. (2013a) studied almost sure convergence and strong stability for weighted sums of NSD random variables; Shen et al. (2013b) studied strong convergence for NSD random variables and presented some moment inequalities. Wang et al. (2013a) studied complete convergence for arrays of rowwise NSD random variables, with applications to nonparametric regression.

Since NSD is much weaker than NA, the extension of the limit properties of independent or NA random variables to the case of NSD random variables is highly desirable and of considerable significance in theory and application.

This work focuses on applications of the Rosenthal-type inequality for NSD random variables. The paper is organized as follows: some preliminary lemmas are provided in Sect. 2, the complete consistency of an estimator in a nonparametric regression model based on NSD errors is studied in Sect. 3, and a weak law of large numbers for NSD random variables is proved in Sect. 4.

Throughout the paper, \(C\) denotes a positive constant not depending on \(n\), which may be different in various places. \(a_{n}\ll b_{n}\) or \(a_{n}=O(b_{n})\) represents \(a_n\le Cb_n\) for all \(n\ge 1\). Let \(\lfloor x\rfloor \) denote the integer part of \(x\) and \(I(A)\) the indicator function of the set \(A\). Denote \(x^+ = xI(x \ge 0)\) and \(x^- = -xI(x < 0)\).

2 Preliminaries

In this section, we will present some important lemmas which will be used to prove the main results of the paper. The first one was provided by Hu (2000).

Lemma 2.1

If \((X_1,X_2,\ldots ,X_n)\) is NSD and \(g_1,g_2,\ldots ,g_n\) are nondecreasing functions, then \((g_{1}(X_1),g_{2}(X_2),\ldots ,g_{n}(X_n))\) is NSD.

The next one is the Rosenthal-type inequality for NSD random variables. For the proof, one can refer to Hu (2000) or Wang et al. (2013a). Here we omit the details.

Lemma 2.2

(Rosenthal-type inequality) Let \(p>1\) and \(\{X_n,n\ge 1\}\) be a sequence of NSD random variables with \(EX_i=0\) and \(E|X_i|^p<\infty \) for each \(i\ge 1\). Then for all \(n\ge 1\),

$$\begin{aligned} E\left( \max _{1\le k \le n}\left| \sum _{i=1}^kX_i\right| ^p\right)&\le 2^{3-p}\sum _{i=1}^nE\left| X_i\right| ^p, \quad \hbox { for }~1<p\le 2 \end{aligned}$$
(2.1)

and

$$\begin{aligned} E\left( \max _{1\le k \le n}\left| \sum _{i=1}^kX_i\right| ^p\right) \le 2\left( \frac{15p}{\ln p}\right) ^p\left[ \sum _{i=1}^nE\left| X_i\right| ^p+\left( \sum _{i=1}^nE X_i^2\right) ^{p/2}\right] , \quad \hbox {for}~p>2.\nonumber \\ \end{aligned}$$
(2.2)
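As a quick sanity check (not part of the original argument), the following Python sketch estimates both sides of (2.1) with \(p=2\) by Monte Carlo for mean-zero i.i.d. standard normal variables; an independent sequence is NSD, since (1.1) holds for it with equality. The sample size, number of replications and distribution are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps, p = 50, 20000, 2.0

# Mean-zero i.i.d. N(0,1) variables; an independent sequence is NSD because
# equality holds in (1.1), so inequality (2.1) should apply with p = 2.
x = rng.normal(size=(reps, n))
partial_sums = np.cumsum(x, axis=1)                       # S_1, ..., S_n in each replication
lhs = np.mean(np.max(np.abs(partial_sums), axis=1) ** p)  # Monte Carlo estimate of E max_k |S_k|^p
rhs = 2 ** (3 - p) * n                                    # 2^{3-p} * sum_i E|X_i|^p, since E X_i^2 = 1
print(f"E max_k |S_k|^2 ~ {lhs:.2f},  bound 2^(3-p)*n = {rhs:.2f},  ratio = {lhs / rhs:.2f}")
```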

The concept of stochastic domination will be used in this work.

Definition 2.1

A sequence \(\{X_{n}, n\ge 1\}\) of random variables is said to be stochastically dominated by a random variable \(X\) if there exists a positive constant \(C\) such that

$$\begin{aligned} P(|X_{n}|>x)\le CP(|X|>x) \end{aligned}$$
(2.3)

for all \(x\ge 0\) and \(n\ge 1\).

By the definition of stochastic domination and integration by parts, we can get the following property for stochastic domination. For the details of the proof, one can refer to Wu (2010, 2012) or Tang (2013a, b).

Lemma 2.3

Let \(\{X_n,n\ge 1\}\) be a sequence of random variables which is stochastically dominated by a random variable \(X\). For any \(\alpha >0\) and \(b>0\), the following two statements hold:

$$\begin{aligned}&E|X_n|^\alpha I\left( |X_n|\le b\right) \le C_1\left[ E|X|^\alpha I\left( |X|\le b\right) +b^\alpha P\left( |X|>b\right) \right] ,\end{aligned}$$
(2.4)
$$\begin{aligned}&E|X_n|^\alpha I\left( |X_n|> b\right) \le C_2 E|X|^\alpha I\left( |X|> b\right) , \end{aligned}$$
(2.5)

where \(C_1\) and \(C_2\) are positive constants.
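For the reader's convenience, we sketch the standard argument behind (2.4); this is our own outline of the routine computation, and the cited papers contain complete proofs. Writing \(E|X_n|^\alpha I(|X_n|\le b)=\int _0^{b^\alpha }P\left( |X_n|^\alpha I(|X_n|\le b)>t\right) dt\) and noting that \(\{|X_n|^\alpha I(|X_n|\le b)>t\}\subset \{|X_n|>t^{1/\alpha }\}\), we obtain from (2.3) that

$$\begin{aligned} E|X_n|^\alpha I(|X_n|\le b)&\le C\int _0^{b^\alpha }P\left( |X|>t^{1/\alpha }\right) dt\\&\le C\int _0^{b^\alpha }\left[ P\left( |X|^\alpha I(|X|\le b)>t\right) +P\left( |X|>b\right) \right] dt\\&\le C\left[ E|X|^\alpha I\left( |X|\le b\right) +b^\alpha P\left( |X|>b\right) \right] , \end{aligned}$$

and (2.5) follows in the same way from \(E|X_n|^\alpha I(|X_n|> b)=\int _0^{\infty }P\left( |X_n|^\alpha I(|X_n|> b)>t\right) dt\) together with (2.3).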

3 The consistency of an estimator in a nonparametric regression model based on NSD errors

In this section, we will give one application of the Rosenthal-type inequality to nonparametric regression.

Let \(A\subset \mathbb {R}^p\) be a given compact set for some \(p\ge 1\). Consider the following fixed design nonparametric regression model

$$\begin{aligned} Y_{ni} = g\left( x_{ni}\right) + \varepsilon _{ni},\quad i = 1, 2, \ldots , n, ~n\ge 1, \end{aligned}$$
(3.1)

where \(x_{n1},x_{n2},\ldots ,x_{nn}\) are known fixed design points from \(A,\, g(\cdot )\) is an unknown real-valued regression function defined on \(A\), and \(\varepsilon _{n1}, \varepsilon _{n2}, \ldots , \varepsilon _{nn}\) are random errors. Assume that for each \(n\ge 1\), \((\varepsilon _{n1}, \varepsilon _{n2}, \ldots , \varepsilon _{nn})\) has the same distribution as \((\varepsilon _{1}, \varepsilon _{2}, \ldots , \varepsilon _{n})\). As an estimator of \(g (\cdot )\), we shall consider the following weighted regression estimator:

$$\begin{aligned} g_n(x)=\sum _{i=1}^n W_{ni}(x)Y_{ni},\quad x\in A\subset \mathbb {R}^p, \end{aligned}$$
(3.2)

where the weight functions \(W_{ni}(x)\) are of the form \(W_{ni}(x)= W_{ni}(x; x_{n1},x_{n2},\ldots ,\) \(x_{nn}),\, i=1,2,\ldots ,n\).

The class of estimators (3.2) was first introduced by Stone (1977) and later adapted by Georgiev (1983) to the fixed design case. Up to now, the estimator (3.2) has been studied by many authors. For instance, when the \(\varepsilon _{ni}\) are assumed to be independent, consistency and asymptotic normality have been studied by Georgiev and Greblicki (1986), Georgiev (1988) and Müller (1987), among others. The case in which the \(\varepsilon _{ni}\) are dependent has also been studied by various authors in recent years. Fan (1990) extended the work of Georgiev (1988) and Müller (1987) on the estimation of the regression model to the case where the \(\varepsilon _{ni}\) form an \(L_q\)-mixingale sequence for some \(1\le q\le 2\). Roussas (1989) discussed strong consistency and quadratic mean consistency for \(g_n(x)\) under mixing conditions. Roussas et al. (1992) established asymptotic normality of \(g_n(x)\) assuming that the errors come from a strictly stationary stochastic process and satisfy the strong mixing condition. Tran et al. (1996) also discussed asymptotic normality of \(g_n(x)\) assuming that the errors form a linear time series, more precisely, a weakly stationary linear process based on a martingale difference sequence. Hu et al. (2002) obtained asymptotic normality for a double array sum of a linear time series, with applications to the regression model. Liang and Jing (2005) presented some asymptotic properties for estimates of nonparametric regression models based on negatively associated sequences, and Yang et al. (2012) extended the results of Liang and Jing (2005) to the case of negatively dependent errors. Wang et al. (2012) studied the complete consistency of the estimator of nonparametric regression models based on \(\tilde{\rho }\)-mixing sequences. Wang et al. (2013b) established the strong consistency of the estimator of the fixed-design regression model under negatively dependent sequences, and so forth. The main purpose of this section is to investigate the complete consistency of an estimator of the nonparametric regression model based on NSD random variables, which generalizes the corresponding results of Liang and Jing (2005) for negatively associated random variables.

Unless otherwise specified, we assume throughout the paper that the samples \((x_{ni}, Y_{ni})\), \(1\le i\le n\), come from the fixed design nonparametric regression model (3.1) and that \(g_n(x)\) is defined by (3.2). For any function \(g\), we use \(c(g)\) to denote the set of continuity points of \(g\) on \(A\). The norm \(\Vert x\Vert \) is the Euclidean norm. For any fixed design point \(x \in A\), the following assumptions on the weight function \(W_{ni}(x)= W_{ni}(x; x_{n1},x_{n2},\ldots ,x_{nn})\) will be used:

  1. (a)

    \(\sum _{i=1}^nW_{ni}(x)\rightarrow 1\) as \(n\rightarrow \infty \);

  2. (b)

    \(\sum _{i=1}^n\left| W_{ni}(x)\right| \le C<\infty \) for all \(n\);

  3. (c)

    \(\sum _{i=1}^n\left| W_{ni}(x)\right| \cdot \left| g(x_{ni})-g(x)\right| I(\Vert x_{ni}-x\Vert >a)\rightarrow 0\) as \(n\rightarrow \infty \) for all \(a>0\).

Based on the assumptions above, we can obtain the following complete consistency of the nonparametric regression estimator \(g_n(x)\) defined by (3.2). Recall that a sequence \(\{U_n, n\ge 1\}\) of random variables is said to converge completely to a constant \(u\) if \(\sum _{n=1}^\infty P\left( |U_n-u|\ge \varepsilon \right) <\infty \) for every \(\varepsilon >0\); this is the mode of convergence appearing in (3.4) below.

Theorem 3.1

Let \(p>0\) and \(\{\varepsilon _{n}, n \ge 1\}\) be a sequence of NSD random variables with mean zero. Suppose that the following conditions are satisfied:

  1. (i)

    conditions \((a)\)\((c)\) hold true;

  2. (ii)

    \(\sup _{n\ge 1}E\varepsilon _n^2<\infty \) if \(p\in (0, 2]\) and \(\sup _{n\ge 1}E\left| \varepsilon _n\right| ^p<\infty \) if \(p>2\);

  3. (iii)

    for any \(x \in c(g)\),

    $$\begin{aligned} \sum _{n=1}^\infty \left[ \sum _{i=1}^nW_{ni}^2(x)\right] ^{p/2}<\infty . \end{aligned}$$
    (3.3)

Then

$$\begin{aligned} g_n(x)\rightarrow g(x)\quad \hbox {completely},\quad x \in c(g). \end{aligned}$$
(3.4)

Proof

For \(x \in c(g)\) and \(a>0\), we have by (3.1) and (3.2) that

$$\begin{aligned} \left| Eg_n(x)-g(x)\right|&\le \sum _{i=1}^n\left| W_{ni}(x)\right| \cdot \left| g(x_{ni})-g(x)\right| I(\Vert x_{ni}-x\Vert \le a)\nonumber \\&\quad +~\sum _{i=1}^n\left| W_{ni}(x)\right| \cdot \left| g(x_{ni}) -g(x)\right| I(\Vert x_{ni}-x\Vert >a)\quad \nonumber \\&\quad +~|g(x)|\cdot \left| \sum _{i=1}^nW_{ni}(x)-1\right| . \end{aligned}$$
(3.5)

Since \(x \in c(g)\), for any \(\varepsilon >0\) there exists a \(\delta >0\) such that \(|g(x^{'})-g(x)|<\varepsilon \) whenever \(\Vert x^{'}-x\Vert <\delta \). Thus, by setting \(0<a<\delta \) in (3.5), we obtain

$$\begin{aligned} \left| Eg_n(x)-g(x)\right|&\le \varepsilon \sum _{i=1}^n \left| W_{ni}(x)\right| ~+~|g(x)|\cdot \left| \sum _{i=1}^nW_{ni}(x)-1\right| \nonumber \\&+ \sum _{i=1}^n\left| W_{ni}(x)\right| \cdot \left| g(x_{ni})-g(x)\right| I(\Vert x_{ni}-x\Vert >a).\quad \end{aligned}$$
(3.6)

By conditions (\(a\))–(\(c\)) and the arbitrariness of \(\varepsilon >0\), it follows that

$$\begin{aligned} \lim _{n\rightarrow \infty }Eg_n(x)=g(x),~~x \in c(g). \end{aligned}$$
(3.7)

For a fixed design point \(x\in c(g)\) and \(n\ge 1\), it is easily seen that \(\{W_{ni}^{+}(x)\varepsilon _i, 1\le i\le n\}\) and \(\{W_{ni}^{-}(x)\varepsilon _i, 1\le i\le n\}\) are still NSD random variables by Lemma 2.1. Note that \(W_{ni}(x)=W_{ni}^{+}(x)-W_{ni}^{-}(x)\), so without loss of generality, we assume that \(W_{ni}(x)\ge 0\) in what follows.

If \(0<p\le 2\), Jensen’s inequality, Lemma 2.2 and \(\sup _{n\ge 1}E\varepsilon _n^2<\infty \) yield

$$\begin{aligned} E\left| g_n(x)-Eg_n(x)\right| ^p&= E\left| \sum _{i=1}^nW_{ni}(x)\varepsilon _{ni}\right| ^p~=~E\left| \sum _{i=1}^nW_{ni}(x)\varepsilon _{i}\right| ^p\nonumber \\&\le \left( E\left| \sum _{i=1}^nW_{ni}(x)\varepsilon _{i}\right| ^2\right) ^{p/2}~\le ~C\left( \sum _{i=1}^nW_{ni}^2(x)E\varepsilon _{i}^2\right) ^{p/2}\nonumber \\&\le C\left( \sum _{i=1}^nW_{ni}^2(x)\right) ^{p/2}. \end{aligned}$$
(3.8)

If \(p>2\), Lemma 2.2 and \(\sup _{n\ge 1}E\left| \varepsilon _n\right| ^p<\infty \) give

$$\begin{aligned} E\left| g_n(x)-Eg_n(x)\right| ^p&= E\left| \sum _{i=1}^nW_{ni}(x)\varepsilon _{ni}\right| ^p~=~E\left| \sum _{i=1}^nW_{ni}(x)\varepsilon _{i}\right| ^p\nonumber \\&\le C\left\{ \sum _{i=1}^nW_{ni}^p(x)E\left| \varepsilon _i\right| ^p+ \left[ \sum _{i=1}^nW_{ni}^2(x)E\varepsilon _i^2\right] ^{p/2}\right\} \nonumber \\&\le C\left\{ \left[ \sum _{i=1}^nW_{ni}^2(x)\right] ^{p/2}+ \left[ \sum _{i=1}^nW_{ni}^2(x)\right] ^{p/2}\right\} , \end{aligned}$$
(3.9)

since \(\left( \sum _{i=1}^na_i^\beta \right) ^{1/\beta }\le \left( \sum _{i=1}^na_i^\alpha \right) ^{1/\alpha }\) for any positive numbers \(\{a_i, 1\le i\le n\}\) and \(0< \alpha \le \beta \); applying this with \(a_i=W_{ni}(x)\), \(\alpha =2\) and \(\beta =p\) gives \(\sum _{i=1}^nW_{ni}^p(x)\le \left( \sum _{i=1}^nW_{ni}^2(x)\right) ^{p/2}\).

For any \(\varepsilon >0\), we have by Markov’s inequality, (3.8), (3.9) and (3.3) that

$$\begin{aligned} \sum _{n=1}^\infty P\left( |g_n(x)-Eg_n(x)|\ge \varepsilon \right)&\le \sum _{n=1}^\infty \frac{E\left| g_n(x)-Eg_n(x)\right| ^p}{\varepsilon ^p}\nonumber \\&\le C\sum _{n=1}^\infty \left[ \sum _{i=1}^nW_{ni}^2(x)\right] ^{p/2}~<~\infty . \end{aligned}$$
(3.10)

Combining (3.7) and (3.10), and noting that (3.7) implies \(P\left( |Eg_n(x)-g(x)|\ge \varepsilon /2\right) =0\) for all sufficiently large \(n\), we can get that

$$\begin{aligned}&\sum _{n=1}^\infty P\left( |g_n(x)-g(x)|\ge \varepsilon \right) \\&\quad \le \sum _{n=1}^\infty P\left( |g_n(x)-Eg_n(x)|\ge \frac{\varepsilon }{2}\right) +\sum _{n=1}^\infty P\left( |Eg_n(x)-g(x)|\ge \frac{\varepsilon }{2}\right) \\&\quad \le \sum _{n=1}^\infty P\left( |g_n(x)-Eg_n(x)|\ge \frac{\varepsilon }{2}\right) +C\\&\quad <\infty , \end{aligned}$$

which implies (3.4). This completes the proof of the theorem. \(\square \)

Similar to the proof of Theorem 3.1, we can obtain the following result. We only need to note that for \(0<p\le 2\), the \(C_r\) inequality (when \(0<p\le 1\)) or Lemma 2.2 (when \(1<p\le 2\)), together with \(\sup _{n\ge 1}E\left| \varepsilon _n\right| ^p<\infty \), gives

$$\begin{aligned} E\left| g_n(x)-Eg_n(x)\right| ^p&= E\left| \sum _{i=1}^nW_{ni}(x)\varepsilon _{ni}\right| ^p~=~E\left| \sum _{i=1}^nW_{ni}(x)\varepsilon _{i}\right| ^p\nonumber \\&\le C\sum _{i=1}^nE\left| W_{ni}(x)\varepsilon _{i}\right| ^p~\le ~C\sum _{i=1}^n\left| W_{ni}(x)\right| ^p. \end{aligned}$$
(3.11)

Theorem 3.2

Let \(\{\varepsilon _{n}, n \ge 1\}\) be a sequence of NSD random variables with mean zero and \(\sup _{n\ge 1}E\left| \varepsilon _n\right| ^p<\infty \) for some \(0<p\le 2\). Assume that conditions \((a)\)\((c)\) hold and

$$\begin{aligned} \sum _{n=1}^\infty \sum _{i=1}^n\left| W_{ni}(x)\right| ^p<\infty ,~~x \in c(g). \end{aligned}$$

Then (3.4) holds.

As an application of Theorem 3.1, we give the complete consistency for the nearest neighbor estimator of \(g(x)\). Without loss of generality, put \(A=[0, 1]\), and take \(x_{ni}=\frac{i}{n}\), \(i=1, 2,\ldots , n\). For any \(x\in A\), let \(\left| x_{R_1(x)}^{(n)}-x\right| ,\left| x_{R_2(x)}^{(n)}-x\right| ,\ldots ,\left| x_{R_n(x)}^{(n)}-x\right| \) be a permutation of \(\left| x_{n1}-x\right| ,\left| x_{n2}-x\right| ,\ldots ,\left| x_{nn}-x\right| \) such that

$$\begin{aligned} \left| x_{R_1(x)}^{(n)}-x\right| \le \left| x_{R_2(x)}^{(n)}-x\right| \le \cdots \le \left| x_{R_n(x)}^{(n)}-x\right| , \end{aligned}$$

where ties are broken as follows: if \(\left| x_{ni}-x\right| =\left| x_{nj}-x\right| \), then \(\left| x_{ni}-x\right| \) is permuted before \(\left| x_{nj}-x\right| \) when \(x_{ni}<x_{nj}\).

Let \(1\le k_n\le n\). The nearest neighbor weight function estimator of \(g(x)\) in model (3.1) is

$$\begin{aligned} \tilde{g}_n(x)=\sum _{i=1}^n\tilde{W}_{ni}(x)Y_{ni}, \end{aligned}$$
(3.12)

where

$$\begin{aligned} \tilde{W}_{ni}(x)= \left\{ \begin{array}{lll} 1/k_n,&{}\quad \hbox {if}\,|x_{ni}-x|\le \left| x^{(n)}_{R_{k_n}(x)}-x\right| ,\\ 0,&{}\quad \hbox {otherwise}. \end{array}\right. \end{aligned}$$
(3.13)
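For illustration, the following Python sketch implements the weights (3.13) and the estimator (3.12) on the design \(x_{ni}=i/n\); the regression function, the error distribution (i.i.d. normal errors, a special case of NSD) and all numerical values are arbitrary choices made only for this example.

```python
import numpy as np

def nn_weights(x, design, k_n):
    # Nearest neighbour weights from (3.13): weight 1/k_n on the k_n design
    # points closest to x, 0 elsewhere; ties are broken in favour of the
    # smaller design point, as described in the text.
    order = sorted(range(len(design)), key=lambda i: (abs(design[i] - x), design[i]))
    w = np.zeros(len(design))
    w[order[:k_n]] = 1.0 / k_n
    return w

def nn_estimator(x, design, y, k_n):
    # tilde g_n(x) = sum_i tilde W_{ni}(x) Y_{ni}, as in (3.12).
    return nn_weights(x, design, k_n) @ y

rng = np.random.default_rng(0)
n = 2000
alpha = 0.7                        # k_n = floor(n^alpha) with 2/p < alpha < 1, cf. Corollary 3.1
k_n = int(np.floor(n ** alpha))
design = np.arange(1, n + 1) / n   # fixed design points x_{ni} = i/n on A = [0, 1]
g = lambda t: np.sin(2 * np.pi * t)        # hypothetical regression function
y = g(design) + rng.normal(0.0, 0.5, n)    # model (3.1) with i.i.d. N(0, 0.25) errors

for x0 in (0.2, 0.5, 0.8):
    print(f"x = {x0}:  g(x) = {g(x0):+.3f},  estimate = {nn_estimator(x0, design, y, k_n):+.3f}")
```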

Based on the notations above, we can get the following result by using Theorem 3.1.

Corollary 3.1

Let \(\{\varepsilon _{n}, n \ge 1\}\) be a sequence of NSD random variables with mean zero. Let \(g\) be continuous on the compact set \(A\). Assume that \(k_n=\lfloor n^\alpha \rfloor \) and \(\sup _{n\ge 1}E\left| \varepsilon _n\right| ^p<\infty \) for some \(p>2\) and \(2/p<\alpha <1\). Then (3.4) holds with \(g_n(x)\) replaced by \(\tilde{g}_n(x)\).

Proof

It suffices to show that the conditions of Theorem 3.1 are satisfied. For any \(x\in [0,1]\), it follows from the definition of \(R_i(x)\) and \(\tilde{W}_{ni}(x)\) that

$$\begin{aligned} \sum _{i=1}^n\tilde{W}_{ni}(x)&= \sum _{i=1}^n\tilde{W}_{nR_i(x)}(x)~=~\sum _{i=1}^{k_n}\frac{1}{k_n}~=~1,\\ \sum _{i=1}^n\tilde{W}_{ni}^2(x)&= \sum _{i=1}^{k_n}\frac{1}{k_n^2}~=~\frac{1}{k_n},~~\tilde{W}_{ni}(x)\ge 0, \end{aligned}$$

and

$$\begin{aligned} \sum _{i=1}^n\left| \tilde{W}_{ni}(x)\right| I(|x_{ni}-x|>a)&\le \sum _{i=1}^n\frac{(x_{ni}-x)^2\left| \tilde{W}_{ni}(x)\right| }{a^2}\\ {}&= \sum _{i=1}^{k_n}\frac{\left( x_{R_i(x)}^{(n)}-x\right) ^2}{k_n a^2}~\le ~\sum _{i=1}^{k_n}\frac{\left( \frac{i}{n} \right) ^2}{k_n a^2}\\&\le \left( \frac{k_n}{na}\right) ^2,~~\hbox {for any }~~a>0. \end{aligned}$$

Hence, noting that \(g\) is bounded on the compact set \(A\) and that \(k_n/n\le n^{\alpha -1}\rightarrow 0\) since \(\alpha <1\), conditions (\(a\))–(\(c\)) are satisfied. For \(2/p<\alpha <1\) and \(p>2\), we have \(k_n=\lfloor n^\alpha \rfloor \) and \(\alpha p/2>1\), so

$$\begin{aligned} \sum _{n=1}^\infty \left[ \sum _{i=1}^n\tilde{W}_{ni}^2(x)\right] ^{p/2}=\sum _{n=1}^\infty k_n^{-p/2}<\infty , \end{aligned}$$

which together with (\(a\))–(\(c\)) implies (3.4) by Theorem 3.1. This completes the proof of the corollary.\(\square \)

Theorem 3.3

Let \(\{\varepsilon _{n}, n \ge 1\}\) be a sequence of NSD random variables with mean zero, which is stochastically dominated by a random variable \(X\). Assume that conditions (\(a\))-(\(c\)) hold. If there exists some \(r>0\) such that \(E|X|^{1+1/r}<\infty \) and

$$\begin{aligned} \max _{1\le i\le n}|W_{ni}(x)|=O(n^{-r}), \end{aligned}$$
(3.14)

then (3.4) holds.

Proof

Without loss of generality, we assume that \(W_{ni}(x)\ge 0\). By condition (3.14), there is a constant \(C>0\) such that \(\max _{1\le i\le n}W_{ni}(x)\le Cn^{-r}\) for all \(n\ge 1\); since this constant can be absorbed into the generic constant \(C\) below, we may further assume that

$$\begin{aligned} \max _{1\le i\le n}W_{ni}(x)\le n^{-r},~n\ge 1. \end{aligned}$$
(3.15)

By (3.7), we can see that in order to prove (3.4), we only need to show that

$$\begin{aligned} g_n(x)-Eg_n(x)=\sum _{i=1}^nW_{ni}(x)\varepsilon _{ni}\rightarrow 0~~\hbox {completely as}~~n\rightarrow \infty . \end{aligned}$$
(3.16)

That is to say, it suffices to show that for all \(\varepsilon >0\),

$$\begin{aligned} \sum _{n=1}^\infty P\left( \left| \sum _{i=1}^nW_{ni}(x)\varepsilon _{ni}\right| > \varepsilon \right) <\infty . \end{aligned}$$
(3.17)

For fixed \(n\ge 1\), put

$$\begin{aligned} X_{ni}&= -I(W_{ni}(x)\varepsilon _{ni}< -1)+W_{ni}(x)\varepsilon _{ni}I(|W_{ni}(x)\varepsilon _{ni}|\le 1)\\&\quad +I(W_{ni}(x)\varepsilon _{ni}> 1),\quad i=1,2,\ldots ,n. \end{aligned}$$

It is easy to check that for any \(\varepsilon >0\),

$$\begin{aligned} \left( \left| \sum _{i=1}^n W_{ni}(x)\varepsilon _{ni}\right| >\varepsilon \right) \subset \left( \max _{1\le i\le n}|W_{ni}(x)\varepsilon _{ni}|>1\right) \bigcup \left( \left| \sum _{i=1}^n X_{ni}\right| >\varepsilon \right) , \end{aligned}$$

which implies that

$$\begin{aligned} \sum _{n=1}^\infty P\left( \left| \sum _{i=1}^n W_{ni}(x)\varepsilon _{ni}\right| >\varepsilon \right)&\le \sum _{n=1}^\infty \sum _{i=1}^n P\left( |W_{ni}(x)\varepsilon _{ni}|>1\right) \nonumber \\&\quad +\sum _{n=1}^\infty P\left( \left| \sum _{i=1}^n X_{ni}\right| >\varepsilon \right) \nonumber \\&\doteq I+J. \end{aligned}$$
(3.18)

Hence, to prove (3.17), it suffices to show that \(I<\infty \) and \(J<\infty \).

By the stochastic domination condition, Markov’s inequality, condition \((b)\), (3.15) and \(E|X|^{1+1/r}<\infty \), we have

$$\begin{aligned} \sum _{n=1}^\infty \sum _{i=1}^n P\left( |W_{ni}(x)\varepsilon _{ni}|>1\right)&\le C\sum _{n=1}^\infty \sum _{i=1}^n P\left( |W_{ni}(x)X|>1\right) \nonumber \\&\le C\sum _{n=1}^\infty \sum _{i=1}^n W_{ni}(x) E|X| I\left( |W_{ni}(x)X|>1\right) \nonumber \\&\le C\sum _{n=1}^\infty E|X| I\left( |X|>n^r\right) \nonumber \\&\le C\sum _{n=1}^\infty \sum _{k=n}^\infty E|X| I\left( k^r\le |X|<(k+1)^r\right) \nonumber \\&= C\sum _{k=1}^\infty \sum _{n=1}^k E|X| I\left( k^r\le |X|<(k+1)^r\right) \nonumber \\&= C\sum _{k=1}^\infty k E|X| I\left( k^r\le |X|<(k+1)^r\right) \nonumber \\&\le C\sum _{k=1}^\infty E|X|^{1+1/r}I\left( k^r\le |X|<(k+1)^r\right) \nonumber \\&\le C E|X|^{1+1/r}<\infty , \end{aligned}$$
(3.19)

which implies \(I<\infty \).

Next, we will prove that \(J<\infty \). Firstly, we will show that

$$\begin{aligned} \left| \sum _{i=1}^n EX_{ni}\right| \rightarrow 0,\quad \hbox {as}\quad n\rightarrow \infty . \end{aligned}$$
(3.20)

Actually, by the conditions \(E\varepsilon _{i}=0\), Lemma 2.3, (3.15) and \(E|X|^{1+1/r}<\infty \), we have

$$\begin{aligned} \left| \sum _{i=1}^n EX_{ni}\right|&\le \left| \sum _{i=1}^n EW_{ni}(x)\varepsilon _{i}I(|W_{ni}(x)\varepsilon _{i}|\le 1)\right| +\sum _{i=1}^n P\left( |W_{ni}(x)\varepsilon _{i}|> 1\right) \nonumber \\ {}&= \left| \sum _{i=1}^n EW_{ni}(x)\varepsilon _{i}I(|W_{ni}(x)\varepsilon _{i}|> 1)\right| +\sum _{i=1}^n P\left( |W_{ni}(x)\varepsilon _{i}|> 1\right) \nonumber \\&\le C\sum _{i=1}^n E\left| W_{ni}(x)\varepsilon _{i}\right| ^{1+1/r}I\left( |W_{ni}(x)\varepsilon _{i}|>1\right) \nonumber \\&\le C\sum _{i=1}^n W_{ni}^{1+1/r}(x) E|X|^{1+1/r}I\left( |W_{ni}(x)X|>1\right) \nonumber \\&\le C\left( \max _{1\le i\le n}W_{ni}(x)\right) ^{1/r}\sum _{i=1}^n W_{ni}(x) E|X|^{1+1/r}I\left( |X|>{n^r}\right) \nonumber \\&\le C\left( n^{-r}\right) ^{1/r} E|X|^{1+1/r}I\left( |X|>{n^r}\right) \nonumber \\ {}&= Cn^{-1}E|X|^{1+1/r}I\left( |X|>{n^r}\right) \rightarrow 0,~~\hbox {as}~~n\rightarrow \infty , \end{aligned}$$
(3.21)

which implies (3.20). Hence, to prove \(J<\infty \), we only need to show that for all \(\varepsilon >0\),

$$\begin{aligned} J^*\doteq \sum _{n=1}^\infty P\left( \left| \sum _{i=1}^n\left( X_{ni}-EX_{ni}\right) \right| >\frac{\varepsilon }{2}\right) <\infty . \end{aligned}$$
(3.22)

Note that \(X_{ni}\) is a nondecreasing function of \(\varepsilon _{ni}\), so \(\{X_{ni}-EX_{ni}, 1\le i\le n\}\) is still NSD by Lemma 2.1. Hence, by Markov’s inequality, Lemma 2.2, the \(C_r\) inequality and Jensen’s inequality, we have for \(M\ge 2\) that

$$\begin{aligned} J^*&\le C\sum _{n=1}^\infty E\left( \left| \sum _{i=1}^n\left( X_{ni} -EX_{ni} \right) \right| ^M\right) \nonumber \\&\le C\sum _{n=1}^\infty \left( \sum _{i=1}^n E|X_{ni}|^2\right) ^{M/2}+C\sum _{n=1}^\infty \sum _{i=1}^n E|X_{ni}|^M\nonumber \\&\doteq J_1+J_2. \end{aligned}$$
(3.23)

Take

$$\begin{aligned} M>\max \left\{ 2, 2/r, 1+1/r\right\} , \end{aligned}$$

which implies \(-rM/2<-1\) and \(-r(M-1)<-1\). By \(C_r\) inequality and Lemma 2.3, we can get

$$\begin{aligned} J_1\le C\sum _{n=1}^\infty \left[ \sum _{i=1}^n P\left( |W_{ni}(x)X|>1\right) +\sum _{i=1}^n E|W_{ni}(x)X|^2I\left( |W_{ni}(x)X|\le 1\right) \right] ^{M/2}.\nonumber \\ \end{aligned}$$
(3.24)

If \(r>1\), Markov’s inequality, \(E|X|^{1+1/r}<\infty \) and (3.15) yield

$$\begin{aligned} J_1&\le C\sum _{n=1}^\infty \left( \sum _{i=1}^n W_{ni}^{1+1/r}(x) E|X|^{1+1/r}\right) ^{M/2}\nonumber \\&\le C\sum _{n=1}^\infty \left[ \left( \max _{1\le i\le n}W_{ni}(x)\right) ^{1/r}\sum _{i=1}^n W_{ni}(x)\right] ^{M/2}\nonumber \\&\le C\sum _{n=1}^\infty n^{-M/2}~<~\infty . \end{aligned}$$
(3.25)

If \(0<r\le 1\), we again have by Markov’s inequality, \(E|X|^{1+1/r}<\infty \) and (3.15) that

$$\begin{aligned} J_1&\le C\sum _{n=1}^\infty \left( \sum _{i=1}^n W_{ni}^2(x) E|X|^2\right) ^{M/2}\nonumber \\&\le C\sum _{n=1}^\infty \left[ \left( \max _{1\le i\le n}W_{ni}(x)\right) \sum _{i=1}^n W_{ni}(x)\right] ^{M/2}\nonumber \\&\le C\sum _{n=1}^\infty n^{-rM/2}~<~\infty . \end{aligned}$$
(3.26)

From (3.24)–(3.26), we have proved that \(J_1<\infty \).

By \(C_r\) inequality and Lemma 2.3, we can see that

$$\begin{aligned} J_2&\le C\sum _{n=1}^\infty \sum _{i=1}^n\left[ E|W_{ni}(x)\varepsilon _{i}|^MI\left( |W_{ni}(x)\varepsilon _{i}|\le 1\right) +P\left( |W_{ni}(x)\varepsilon _{i}|>1\right) \right] \nonumber \\&\le C\sum _{n=1}^\infty \sum _{i=1}^n P\left( |W_{ni}(x)X|>1\right) +C\sum _{n=1}^\infty \sum _{i=1}^n E|W_{ni}(x)X|^MI\left( |W_{ni}(x)X|\le 1\right) \nonumber \\&\doteq J_3+J_4. \end{aligned}$$
(3.27)

\(J_3<\infty \) has been proved by (3.19). In the following, we will show that \(J_4<\infty \). Put

$$\begin{aligned} I_{nj}=\left\{ i: [n(j+1)]^{-r}<W_{ni}(x)\le (nj)^{-r}\right\} ,~~n\ge 1,~j\ge 1. \end{aligned}$$
(3.28)

It is easily seen that \(I_{nk}\bigcap I_{nj}=\emptyset \) for \(k\ne j\) and, by (3.15), \(\bigcup _{j=1}^\infty I_{nj}=\{1\le i\le n: W_{ni}(x)>0\}\) for all \(n\ge 1\); since indices with \(W_{ni}(x)=0\) contribute nothing to \(J_4\) or to the sums below, we may assume that all \(W_{ni}(x)>0\). Writing \(\sharp B\) for the cardinality of a set \(B\), we thus have

$$\begin{aligned} J_4&\le C\sum _{n=1}^\infty \sum _{j=1}^\infty \sum _{i\in I_{nj}}E|W_{ni}(x)X|^MI\left( |W_{ni}(x)X|\le 1\right) \nonumber \\&\le C\sum _{n=1}^\infty \sum _{j=1}^\infty \left( \sharp I_{nj}\right) (nj)^{-rM}E|X|^MI\left( |X|\le [n(j+1)]^r\right) \nonumber \\&\le C\sum _{n=1}^\infty \sum _{j=1}^\infty \left( \sharp I_{nj}\right) (nj)^{-rM}\sum _{k=0}^{n(j+1)}E|X|^MI\left( k\le |X|^{\frac{1}{r}}<k+1\right) \nonumber \\&= C\sum _{n=1}^\infty \sum _{j=1}^\infty \left( \sharp I_{nj}\right) (nj)^{-rM}\sum _{k=0}^{2n}E|X|^MI\left( k\le |X|^{\frac{1}{r}}<k+1\right) \nonumber \\&\quad +C\sum _{n=1}^\infty \sum _{j=1}^\infty \left( \sharp I_{nj}\right) (nj)^{-rM}\sum _{k=2n+1}^{n(j+1)}E|X|^MI\left( k\le |X|^{\frac{1}{r}}< k+1\right) \nonumber \\&\doteq J_5+J_6. \end{aligned}$$
(3.29)

It is easily seen that for all \(m\ge 1\),

$$\begin{aligned} C&\ge \sum _{i=1}^n W_{ni}(x)=\sum _{j=1}^\infty \sum _{i\in I_{nj}}W_{ni}(x)~\ge ~\sum _{j=1}^\infty \left( \sharp I_{nj}\right) \left[ n(j+1)\right] ^{-r}\\&\ge \sum _{j=m}^\infty \left( \sharp I_{nj}\right) \left[ n(j+1)\right] ^{-r}~\ge ~\sum _{j=m}^\infty \left( \sharp I_{nj}\right) \left[ n(j+1)\right] ^{-r}\left[ \frac{n(m+1)}{n(j+1)}\right] ^{r(M-1)}\\&= \sum _{j=m}^\infty \left( \sharp I_{nj}\right) \left[ n(j+1)\right] ^{-rM}\left[ n(m+1)\right] ^{r(M-1)}, \end{aligned}$$

which implies that for all \(m\ge 1\),

$$\begin{aligned} \sum _{j=m}^\infty \left( \sharp I_{nj}\right) \left( nj\right) ^{-rM}\le C n^{-r(M-1)}\cdot m^{-r(M-1)}. \end{aligned}$$
(3.30)

Therefore,

$$\begin{aligned} J_5&\doteq C\sum _{n=1}^\infty \sum _{j=1}^\infty \left( \sharp I_{nj}\right) (nj)^{-rM}\sum _{k=0}^{2n}E|X|^MI\left( k\le |X|^{\frac{1}{r}}<k+1\right) \nonumber \\&\le C\sum _{n=1}^\infty n^{-r(M-1)}\sum _{k=0}^{2n}E|X|^MI\left( k\le |X|^{\frac{1}{r}}<k+1\right) \nonumber \\&\le C\sum _{k=0}^{2}\sum _{n=1}^\infty n^{-r(M-1)}E|X|^MI\left( k\le |X|^{\frac{1}{r}}<k+1\right) \nonumber \\&\quad +C\sum _{k=2}^{\infty }\sum _{n=\lfloor k/2\rfloor }^\infty n^{-r(M-1)}E|X|^MI\left( k\le |X|^{\frac{1}{r}}<k+1\right) \nonumber \\&\le C+C\sum _{k=2}^{\infty }k^{1-r(M-1)}E|X|^MI\left( k\le |X|^{\frac{1}{r}}<k+1\right) \nonumber \\&\le C+C\sum _{k=2}^{\infty }E|X|^{M+1/r-(M-1)}I\left( k\le |X|^{\frac{1}{r}}<k+1\right) \nonumber \\&\le C+CE|X|^{1+1/r}~<~\infty \end{aligned}$$
(3.31)

and

$$\begin{aligned} J_6&\doteq C\sum _{n=1}^\infty \sum _{j=1}^\infty \left( \sharp I_{nj}\right) (nj)^{-rM}\sum _{k=2n+1}^{n(j+1)}E|X|^MI\left( k\le |X|^{\frac{1}{r}}< k+1\right) \nonumber \\&\le C\sum _{n=1}^\infty \sum _{k=2n+1}^\infty \sum _{j\ge \frac{k}{n}-1}\left( \sharp I_{nj}\right) (nj)^{-rM}E|X|^MI\left( k\le |X|^{\frac{1}{r}}< k+1\right) \nonumber \\&\le C\sum _{n=1}^\infty \sum _{k=2n+1}^\infty n^{-r(M-1)}\left( \frac{k}{n}\right) ^{-r(M-1)}E|X|^MI\left( k\le |X|^{\frac{1}{r}}< k+1\right) \nonumber \\&\le C\sum _{k=2}^\infty \sum _{n=1}^{\lfloor k/2\rfloor } k^{-r(M-1)}E|X|^MI\left( k\le |X|^{\frac{1}{r}}< k+1\right) \nonumber \\&\le C\sum _{k=2}^\infty k^{1-r(M-1)}E|X|^MI\left( k\le |X|^{\frac{1}{r}}< k+1\right) \nonumber \\&\le C\sum _{k=2}^\infty E|X|^{M+1/r-(M-1)}I\left( k\le |X|^{\frac{1}{r}}< k+1\right) \nonumber \\&\le CE|X|^{1+1/r}~<~\infty . \end{aligned}$$
(3.32)

Thus, the inequality (3.22) follows from (3.23)–(3.27), (3.29), (3.31) and (3.32) immediately. This completes the proof of the theorem. \(\square \)

Remark 3.1

As mentioned in Sect. 1, NA implies NSD, so Theorems 3.1–3.3 and Corollary 3.1 hold for NA random variables without adding any extra conditions.

4 A weak law of large numbers for NSD random variables

In this section, we will present a weak law of large numbers for sequences of NSD random variables by using the Rosenthal-type inequality. The main result is as follows.

Theorem 4.1

Let \(\alpha > 1/2\) and \(\{X, X_n, n\ge 1\}\) be a sequence of identically distributed NSD random variables. Denote \(S_n=\sum _{i=1}^nX_i\). If

$$\begin{aligned} \lim _{n\rightarrow \infty }nP(|X| > n^\alpha ) = 0, \end{aligned}$$
(4.1)

then

$$\begin{aligned} \frac{S_n}{n^\alpha }-n^{1-\alpha } EXI\left( |X|\le n^\alpha \right) \xrightarrow {P}0. \end{aligned}$$
(4.2)

Proof

For fixed \(n\ge 1\), let

$$\begin{aligned} Y_{ni}=-n^\alpha I(X_i<- n^\alpha )+X_iI(|X_i|\le n^\alpha )+n^\alpha I(X_i>n^\alpha ),\quad i=1,2,\ldots \end{aligned}$$

and \(T_n=\sum _{i=1}^nY_{ni}\) for each \(n\ge 1\). By (4.1), we have for any \(\varepsilon >0\),

$$\begin{aligned} P\left( \left| \frac{S_n}{n^\alpha }-\frac{T_n}{n^\alpha }\right| >\varepsilon \right)&\le P\left( S_n\ne T_n\right) ~ \le ~P\left( \bigcup _{i=1}^n(X_i\ne Y_{ni})\right) \\&\le \sum _{i=1}^nP\left( |X_i|>n^\alpha \right) ~=~nP\left( |X|>n^\alpha \right) ~\rightarrow ~0,~~n\rightarrow \infty , \end{aligned}$$

which implies

$$\begin{aligned} \frac{S_n}{n^\alpha }-\frac{T_n}{n^\alpha }\xrightarrow {P}0. \end{aligned}$$
(4.3)

Hence, in order to prove (4.2), we only need to show that

$$\begin{aligned} \frac{T_n}{n^\alpha }-\frac{ET_n}{n^\alpha }\xrightarrow {P}0. \end{aligned}$$
(4.4)

By (4.1) and Toeplitz’s lemma (note that \(kP(|X|>k^\alpha )\rightarrow 0\) by (4.1) and that \(\sum _{k=1}^nk^{2\alpha -2}\rightarrow \infty \) as \(n\rightarrow \infty \), since \(\alpha >1/2\)), we have

$$\begin{aligned} \frac{\sum _{k=1}^nk^{2\alpha -2}\cdot kP(|X|>k^\alpha )}{\sum _{k=1}^nk^{2\alpha -2}}\rightarrow 0,\quad n\rightarrow \infty . \end{aligned}$$
(4.5)

Note that

$$\begin{aligned} \sum _{k=1}^nk^{2\alpha -2}\ll n^{2\alpha -1}, \quad \hbox {for}\quad \alpha >1/2. \end{aligned}$$
(4.6)

Combining (4.5) and (4.6), we have

$$\begin{aligned} n^{-2\alpha +1}\sum _{k=1}^nk^{2\alpha -1} P(|X|>k^\alpha )\rightarrow 0,\quad n\rightarrow \infty . \end{aligned}$$
(4.7)

Note that \(Y_{ni}\) is a nondecreasing function of \(X_i\), so \(\{Y_{ni}-EY_{ni}, 1\le i\le n\}\) is still NSD by Lemma 2.1. Hence, by Markov’s inequality, Lemmas 2.2 and 2.3, (4.1) and (4.7), it follows that

$$\begin{aligned}&P\left( |T_n-ET_n|>\varepsilon n^\alpha \right) ~\ll ~n^{-2\alpha }E|T_n-ET_n|^2~\ll ~n^{-2\alpha }\sum _{i=1}^nE Y_{ni}^2\\&\quad \ll n^{-2\alpha +1}\left[ EX^2I(|X|\le n^\alpha )+n^{2\alpha }P\left( |X|>n^\alpha \right) \right] \\&\quad =n^{-2\alpha +1} EX^2I(|X|\le n^\alpha )+nP\left( |X|>n^\alpha \right) \\&\quad =n^{-2\alpha +1} \sum _{k=1}^nEX^2I((k-1)^\alpha <|X|\le k^\alpha )+nP\left( |X|>n^\alpha \right) \\&\quad \le n^{-2\alpha +1} \sum _{k=1}^nk^{2\alpha }\left[ P\left( |X|>(k-1)^\alpha \right) -P\left( |X|>k^\alpha \right) \right] +nP\left( |X|>n^\alpha \right) \\&\quad =n^{-2\alpha +1}\left[ \sum _{k=1}^{n-1}\left( (k+1)^{2\alpha }-k^{2\alpha }\right) P(|X|>k^\alpha )+P(|X|>0)-n^{2\alpha }P(|X|>n^\alpha )\right] \\&\quad \quad +nP\left( |X|>n^\alpha \right) \\&\quad \ll n^{-2\alpha +1}\left[ \sum _{k=1}^nk^{2\alpha -1}P(|X|>k^\alpha )+1\right] +nP\left( |X|>n^\alpha \right) \\&\quad \rightarrow 0,~~n\rightarrow \infty . \end{aligned}$$

This completes the proof of the theorem. \(\square \)

Remark 4.1

When \(\alpha = 1\) and \(\{X, X_n, n\ge 1\}\) is a sequence of independent and identically distributed random variables, Theorem 4.1 reduces to the weak law of large numbers due to Feller (1946). Hence, Theorem 4.1 extends the sufficiency part of Feller’s weak law of large numbers for independent and identically distributed random variables to the case of NSD random variables.
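As a purely numerical illustration (not part of the original argument), the Python sketch below simulates Theorem 4.1 with i.i.d. standard Cauchy variables, which are NSD because (1.1) holds with equality for independent sequences. Here \(E|X|=\infty \), yet \(nP(|X|>n^\alpha )\sim (2/\pi )n^{1-\alpha }\rightarrow 0\) for \(\alpha >1\), so (4.1) holds, and by symmetry the centering term in (4.2) vanishes, so \(S_n/n^\alpha \xrightarrow {P}0\). The values of \(\alpha \), \(\varepsilon \) and the sample sizes are arbitrary choices for this illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, eps, reps = 1.5, 0.5, 1000

# Empirical estimate of P(|S_n / n^alpha| > eps) for i.i.d. standard Cauchy
# variables; Theorem 4.1 predicts that this probability tends to 0.
for n in (10**2, 10**3, 10**4):
    s = rng.standard_cauchy(size=(reps, n)).sum(axis=1)
    frac = np.mean(np.abs(s / n**alpha) > eps)
    print(f"n = {n:>6}:  estimated P(|S_n/n^alpha| > {eps}) = {frac:.3f}")
```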