1 Introduction

It is well known that exponential probability inequalities play an important role in proving the strong law of large numbers, strong convergence rates, complete convergence, consistency, asymptotic normality, and so on. There is a large body of literature on exponential probability inequalities for independent sequences and for some dependent sequences, such as the negatively associated (NA, in short) sequence, the positively associated (PA, in short) sequence, the negatively orthant dependent (NOD, in short) sequence, the extended negatively dependent (END, in short) sequence, and so on. In contrast, there is very little literature on exponential probability inequalities for the widely negative orthant dependent (WNOD, in short) sequence, which includes the NA sequence, the NOD sequence, the END sequence and some positively dependent sequences as special cases. The main purpose of this paper is to provide some exponential probability inequalities for WNOD sequences and to give applications to complete convergence and complete moment convergence.

The concept of widely negative dependence structure was introduced by Wang et al. [23] as follows.

Definition 1.1

For the random variables \(\{X_n, n\ge 1\}\), if there exists a finite positive sequence \(\{g_U (n), n \ge 1\}\) satisfying for each \(n \ge 1\) and for all \(x_i \in (-\infty ,\infty )\), \(1\le i\le n\),

$$\begin{aligned} P(X_1>x_1, X_2>x_2, \ldots , X_n>x_n)\le g_U (n)\prod _{i=1}^nP(X_i>x_i), \end{aligned}$$
(1.1)

then we say that the random variables \(\{X_n, n\ge 1\}\) are widely negative upper orthant dependent (WNUOD, in short); if there exists a finite positive sequence \(\{g_L (n), n \ge 1\}\) satisfying for each \(n \ge 1\) and for all \(x_i \in (-\infty ,\infty )\), \(1\le i\le n\),

$$\begin{aligned} P(X_1\le x_1, X_2\le x_2, \ldots , X_n\le x_n)\le g_L (n)\prod _{i=1}^nP(X_i\le x_i), \end{aligned}$$
(1.2)

then we say that the \(\{X_n, n\ge 1\}\) are widely negative lower orthant dependent (WNLOD, in short); if they are both WNUOD and WNLOD, then we say that the \(\{X_n, n\ge 1\}\) are widely negative orthant dependent (WNOD, in short), and \(g_U (n)\), \(g_L (n)\), \(n\ge 1\), are called dominating coefficients.

An array \(\{X_{ni}, i\ge 1, n\ge 1\}\) of random variables is called an array of rowwise WNOD random variables if, for every \(n\ge 1\), \(\{X_{ni}, i\ge 1\}\) is a sequence of WNOD random variables.

For examples of WNOD random variables with various dominating coefficients, we refer the reader to Wang et al. [23, 33]. These examples show that the class of WNOD random variables contains some common negatively dependent random variables, some positively dependent random variables and some others.

Throughout, assume that \(g_U(n)\ge 1\) and \(g_L(n)\ge 1\). If both (1.1) and (1.2) hold with \(g_L(n) = g_U (n) = M\) for every \(n\ge 1\), where M is a positive constant, then the random variables \(\{X_n, n\ge 1\}\) are called extended negatively dependent (END, in short). For details about the concept and the probability limit theory of END sequences, one can refer to Liu [9], Chen et al. [3], Shen [17], Wang and Chen [34], Wang and Wang [24], Wang et al. [28, 29, 32], Qiu et al. [13], and so forth. If both (1.1) and (1.2) hold with \(g_L(n) = g_U (n)=1\) for every \(n\ge 1\), then the random variables \(\{X_n, n\ge 1\}\) are called negatively orthant dependent (NOD, in short). For more details about NOD sequences, one can refer to Fakoor and Azarnoosh [6], Asadian et al. [1], Wang et al. [26, 27], Wu [37, 38], Wu and Jiang [39], Sung [21], Nili Sani et al. [11], Li et al. [8], Shen [16, 18, 20], and so on. It is well known that NA random variables are NOD random variables. Hu [7] pointed out that negatively superadditive dependent (NSD, in short) random variables are NOD. Hence, the class of WNOD random variables includes independent sequences, NA sequences, NSD sequences, NOD sequences and END sequences as special cases. Studying probability inequalities, moment inequalities and convergence theorems for WNOD random variables is therefore of great interest.
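As a simple illustration of the definition, if \(\{X_n, n\ge 1\}\) are independent, then for each \(n\ge 1\) and all \(x_i \in (-\infty ,\infty )\), \(1\le i\le n\),

$$\begin{aligned} P(X_1>x_1, \ldots , X_n>x_n)=\prod _{i=1}^nP(X_i>x_i)\quad \text {and}\quad P(X_1\le x_1, \ldots , X_n\le x_n)=\prod _{i=1}^nP(X_i\le x_i), \end{aligned}$$

so (1.1) and (1.2) hold with \(g_U(n)=g_L(n)=1\); hence independent random variables are NOD and, a fortiori, WNOD.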

So far only a few works have studied the probability limiting behavior of WNOD random variables, for example Wang et al. [23], Wang and Cheng [34], Wang et al. [35], Liu et al. [10], Chen et al. [4], Shen [19], Qiu and Chen [12], Qiu and Hu [14], Wang et al. [31], Yang et al. [43], and so on. Among these works, Wang et al. [31] made a great contribution to the probability limit theory and statistical large-sample theory for WNOD random variables; they established some general results on complete convergence for weighted sums of arrays of rowwise WNOD random variables and presented some sufficient conditions for the complete consistency of the estimator in the nonparametric regression model based on WNOD errors. In this work, we provide some exponential probability inequalities for WNOD random variables. As applications, we study the complete convergence and complete moment convergence for WNOD random variables by using the exponential probability inequalities that we establish.

Throughout the paper, let \(\{X_n, n\ge 1\}\) be a sequence of WNOD random variables with dominating coefficients \(g_U (n)\), \(g_L (n)\), \(n\ge 1\), and let \(\{X_{ni}, i\ge 1, n \ge 1\}\) be an array of rowwise WNOD random variables with dominating coefficients \(g_U (n)\), \(g_L (n)\), \(n\ge 1\), in each row. Denote \(g(n)=\max \{g_U (n), g_L (n)\}\), \(x\vee y=\max \{x, y\}\), \(S_n=\sum _{i=1}^nX_i\) and \(M_{t,n}=\sum _{i=1}^nE|X_i|^t\) for some \(t>0\) and each \(n\ge 1\). Let C denote a positive constant, which may differ from place to place.

The organization of the paper is as follows: Some useful lemmas are presented in Sect. 2. The exponential probability inequalities for WNOD random variables are established in Sect. 3. The complete convergence and complete moment convergence for arrays of rowwise WNOD random variables are obtained in Sects. 4 and 5, respectively.

2 Preliminaries

In this section, we will present some important lemmas, which will be used to prove the main results of this work. The first one is a basic property for WNOD random variables, which can be found in Wang et al. [23].

Lemma 2.1

  1. (i)

    Let \(\{X_n, n\ge 1\}\) be WNLOD (WNUOD) with dominating coefficients \(g_L (n), n \ge 1\) (\(g_U (n), n \ge 1\)). If \(\{f_n(\cdot ), n\ge 1\}\) are nondecreasing, then \(\{f_n(X_n), n\ge 1\}\) are still WNLOD (WNUOD) with dominating coefficients \(g_L (n), n \ge 1\) (\(g_U (n), n \ge 1\)); if \(\{f_n(\cdot ), n\ge 1\}\) are nonincreasing, then \(\{f_n(X_n), n\ge 1\}\) are WNUOD (WNLOD) with dominating coefficients \(g_L (n), n \ge 1\) (\(g_U (n), n \ge 1\)).

  2. (ii)

    If \(\{X_n, n\ge 1\}\) are nonnegative and WNUOD with dominating coefficients \(g_U (n), n \ge 1\), then for each \(n\ge 1\),

    $$\begin{aligned} E\prod _{i=1}^n X_i\le g_U (n)\prod _{i=1}^nEX_i. \end{aligned}$$
    (2.1)

    In particular, if \(\{X_n, n\ge 1\}\) are WNUOD with dominating coefficients \(g_U (n), n \ge 1\), then for each \(n\ge 1\) and any \(s>0\),

    $$\begin{aligned} E\exp \left\{ s\sum _{i=1}^nX_i\right\} \le g_U (n)\prod _{i=1}^nE\exp \{sX_i\}. \end{aligned}$$
    (2.2)

By Lemma 2.1, we can get the following corollary immediately, which has been obtained by Shen [19].

Corollary 2.1

  1. (i)

    Let \(\{X_n, n\ge 1\}\) be WNOD. If \(\{f_n(\cdot ), n\ge 1\}\) are nondecreasing (or nonincreasing), then \(\{f_n(X_n), n\ge 1\}\) are still WNOD.

  2. (ii)

    If \(\{X_n, n\ge 1\}\) are WNOD, then for each \(n\ge 1\) and any \(s\in \mathbb {R}\),

    $$\begin{aligned} E\exp \left\{ s\sum _{i=1}^nX_i\right\} \le g(n)\prod _{i=1}^nE\exp \{sX_i\}. \end{aligned}$$
    (2.3)
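As a brief indication of how (2.3) follows from Lemma 2.1: for \(s>0\), (2.3) follows directly from (2.2) together with \(g_U(n)\le g(n)\); for \(s<0\), the random variables \(\{-X_n, n\ge 1\}\) are WNUOD with dominating coefficients \(g_L (n)\) by Lemma 2.1 (i), and applying (2.2) to them with the positive parameter \(-s\) gives

$$\begin{aligned} E\exp \left\{ s\sum _{i=1}^nX_i\right\} =E\exp \left\{ (-s)\sum _{i=1}^n(-X_i)\right\} \le g_L (n)\prod _{i=1}^nE\exp \{sX_i\}\le g(n)\prod _{i=1}^nE\exp \{sX_i\}; \end{aligned}$$

the case \(s=0\) is trivial since \(g(n)\ge 1\).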

The following lemma gives a very important property of stochastic domination. For the details of the proof, one can refer to Wu [36] or Wang et al. [30].

Lemma 2.2

Assume that the random variable Y is stochastically dominated by a nonnegative random variable X; that is, there exists a positive constant C such that

$$\begin{aligned} P(|Y|>x)\le CP(X>x) \end{aligned}$$

for all \(x\ge 0\). Then the following two statements

$$\begin{aligned} E|Y|^\alpha I\left( |Y|\le b\right) \le C\left[ EX^\alpha I\left( X\le b\right) +b^\alpha P\left( X>b\right) \right] \end{aligned}$$
(2.4)

and

$$\begin{aligned} E|Y|^\alpha I\left( |Y|> b\right) \le CEX^\alpha I\left( X> b\right) \end{aligned}$$
(2.5)

hold for all \(\alpha >0\) and \(b>0\).
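As a quick indication of (2.5) (full proofs can be found in the references above), one may use the tail-integral formula \(EZ=\int _0^\infty P(Z>s)ds\) for nonnegative Z: for \(\alpha >0\) and \(b>0\),

$$\begin{aligned} E|Y|^\alpha I\left( |Y|>b\right) =b^\alpha P(|Y|>b)+\int _b^\infty \alpha s^{\alpha -1}P(|Y|>s)ds\le C\left[ b^\alpha P(X>b)+\int _b^\infty \alpha s^{\alpha -1}P(X>s)ds\right] =CEX^\alpha I\left( X>b\right) \!. \end{aligned}$$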

Lemma 2.3

(cf. Shao [15]) For any \(x\ge 0\),

$$\begin{aligned} \ln (1+x)\ge \frac{x}{1+x}+\frac{x^2}{2(1+x)^2}\left[ 1+\frac{2}{3}\ln (1+x)\right] \!. \end{aligned}$$

3 Exponential probability inequalities for WNOD random variables

In this section, we establish exponential probability inequalities for WNOD random variables, which can be applied to establish probability limit theorems for WNOD random variables, such as weak convergence, \(L_r\) convergence, strong convergence, complete convergence, complete moment convergence, consistency, asymptotic normality, and so on. The proofs of these exponential probability inequalities are mainly inspired by Fakoor and Azarnoosh [6] and Asadian et al. [1]. Our main results on exponential probability inequalities are as follows.

Theorem 3.1

Let \(0<t\le 2\) and \(\{X_n, n\ge 1\}\) be a sequence of WNOD random variables with \(E|X_n|^t<\infty \) for each \(n\ge 1\). Assume further that \(EX_n=0\) for each \(n\ge 1\) when \(1<t\le 2\). Then for all \(x>0\) and \(y>0\),

$$\begin{aligned} P(|S_n|\ge x)\le P\left( \max _{1\le i\le n} |X_i|\ge y\right) +2g(n)\exp \left\{ \frac{x}{y}-\frac{x}{y}\ln \left( 1+\frac{xy^{t-1}}{M_{t,n}}\right) \right\} \!. \end{aligned}$$
(3.1)

If \(xy^{t-1}>M_{t,n}\), then

$$\begin{aligned} P(|S_n|\ge x)\le P\left( \max _{1\le i\le n} |X_i|\ge y\right) +2g(n)\exp \left\{ \frac{x}{y}-\frac{M_{t,n}}{y^t}-\frac{x}{y}\ln \left( \frac{xy^{t-1}}{M_{t,n}}\right) \right\} \!. \end{aligned}$$
(3.2)

Proof

For \(y>0\), denote \(Y_i=\min (X_i, y)\), \(i=1,2,\ldots ,n\) and \(S_n'=\sum _{i=1}^nY_i\), \(n\ge 1\). It is easily seen that for any \(h>0\),

$$\begin{aligned} P(S_n\ge x)\le P\left( \max _{1\le i\le n} X_i\ge y\right) +e^{-hx}Ee^{hS_n'}. \end{aligned}$$
(3.3)
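Indeed, since \(Y_i=\min (X_i, y)\), we have \(S_n=S_n'\) on the event \(\{\max _{1\le i\le n}X_i<y\}\), so that

$$\begin{aligned} P(S_n\ge x)\le P\left( \max _{1\le i\le n} X_i\ge y\right) +P(S_n'\ge x), \end{aligned}$$

and \(P(S_n'\ge x)\le e^{-hx}Ee^{hS_n'}\) by Markov's inequality.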

It follows by (3.3) and Corollary 2.1 that

$$\begin{aligned} P(S_n\ge x)\le P\left( \max _{1\le i\le n} X_i\ge y\right) +g (n)e^{-hx}\prod _{i=1}^n Ee^{hY_i}. \end{aligned}$$
(3.4)

For \(0<t\le 1\), the function \((e^{hu}-1)/u^t=\frac{e^{hu}-1}{u}\cdot u^{1-t}\) is increasing on \(u>0\), since both factors are positive and nondecreasing. Thus,

$$\begin{aligned} Ee^{hY_i}\le & {} \int _{0}^y (e^{hu}-1)dF_i(u)+\int _{y}^\infty (e^{hy}-1)dF_i(u)+1\\\le & {} \frac{e^{hy}-1}{y^t}\int _0^yu^tdF_i(u)+\frac{e^{hy}-1}{y^t}\int _{y}^\infty u^tdF_i(u)+1\\\le & {} 1+\frac{e^{hy}-1}{y^t}E|X_i|^t~\le ~ \exp \left\{ \frac{e^{hy}-1}{y^t}E|X_i|^t\right\} \!. \end{aligned}$$

Combining the inequality above and (3.4), we can get that

$$\begin{aligned} P(S_n\ge x)\le P\left( \max _{1\le i\le n} X_i\ge y\right) +g (n)\exp \left\{ \frac{e^{hy}-1}{y^t}M_{t,n}-hx\right\} \!. \end{aligned}$$
(3.5)

Replacing \(X_i\) by \(-X_i\) in (3.5), we have

$$\begin{aligned} P(-S_n\ge x)\le P\left( \max _{1\le i\le n} (-X_i)\ge y\right) +g (n)\exp \left\{ \frac{e^{hy}-1}{y^t}M_{t,n}-hx\right\} \!. \end{aligned}$$
(3.6)

It follows by (3.5) and (3.6) that

$$\begin{aligned} P(|S_n|\ge x)\le P\left( \max _{1\le i\le n} |X_i|\ge y\right) +2g (n)\exp \left\{ \frac{e^{hy}-1}{y^t}M_{t,n}-hx\right\} \!. \end{aligned}$$
(3.7)

Taking \(h=\frac{1}{y}\ln \left( 1+\frac{xy^{t-1}}{M_{t,n}}\right) \) in the right-hand side of (3.7), we can get (3.1) immediately.
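In fact, this choice of h gives \(e^{hy}-1=xy^{t-1}/M_{t,n}\), so that the exponent in (3.7) becomes

$$\begin{aligned} \frac{e^{hy}-1}{y^t}M_{t,n}-hx=\frac{x}{y}-\frac{x}{y}\ln \left( 1+\frac{xy^{t-1}}{M_{t,n}}\right) \!, \end{aligned}$$

which is exactly the exponent in (3.1).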

For \(1<t\le 2\), we can still get (3.1) by an argument similar to that of Theorem 3 in Fakoor and Azarnoosh [6] and Theorem 2.2 in Asadian et al. [1]. The details are omitted.

If \(xy^{t-1}>M_{t,n}\), then the exponent \(\frac{e^{hy}-1}{y^t}M_{t,n}-hx\) on the right-hand side of (3.7), viewed as a function of h, attains its minimum at \(h=\frac{1}{y}\ln \left( \frac{xy^{t-1}}{M_{t,n}}\right) >0\), since its derivative \(e^{hy}y^{1-t}M_{t,n}-x\) vanishes there. Substituting this value of h into the right-hand side of (3.7), we can get (3.2) immediately. This completes the proof of the theorem. \(\square \)

For \(t=2\), we have the following exponential probability inequality, which is more precise than (3.1).

Theorem 3.2

Let \(\{X_n, n\ge 1\}\) be a sequence of WNOD random variables with \(EX_n=0\) and \(EX_n^2<\infty \) for each \(n\ge 1\). Then for any \(h,x,y>0\),

$$\begin{aligned} P(|S_n|\ge x)\le P\left( \max _{1\le i\le n} |X_i|\ge y\right) +2g(n)\exp \left\{ \frac{e^{hy}-1-hy}{y^2}M_{2,n}-hx\right\} \!. \end{aligned}$$
(3.8)

If we take \(h=\frac{1}{y}\ln \left( 1+\frac{xy}{M_{2,n}}\right) \), then

$$\begin{aligned} P(|S_n|\ge x)\le P\left( \max _{1\le i\le n} |X_i|\ge y\right) +2g(n)\exp \left\{ \frac{x}{y}-\frac{xy+M_{2,n}}{y^2}\ln \left( 1+\frac{xy}{M_{2,n}}\right) \right\} \end{aligned}$$
(3.9)

and

$$\begin{aligned} P(|S_n|\ge x)\le P\left( \max _{1\le i\le n} |X_i|\ge y\right) +2g(n)\exp \left\{ -\frac{x^2}{2(xy+M_{2,n})}\left[ 1+\frac{2}{3}\ln \left( 1+\frac{xy}{M_{2,n}}\right) \right] \right\} \!. \end{aligned}$$
(3.10)

Proof

We use the same notation as in the proof of Theorem 3.1. Similar to the proof of Theorem 3.1 and that of Lemma 2.4 in Shen [20], we have

$$\begin{aligned} P(S_n\ge x) \le P\left( \max _{1\le i\le n} X_i\ge y\right) +g (n)\exp \left\{ \frac{e^{hy}-1-hy}{y^2}M_{2,n}-hx\right\} \!. \end{aligned}$$
(3.11)

Replacing \(X_i\) by \(-X_i\) in the inequality above, we have

$$\begin{aligned} P(-S_n\ge x) \le P\left( \max _{1\le i\le n} (-X_i)\ge y\right) +g (n)\exp \left\{ \frac{e^{hy}-1-hy}{y^2}M_{2,n}-hx\right\} \!. \end{aligned}$$
(3.12)

Therefore, the desired result (3.8) follows from (3.11) and (3.12) immediately.

Inequality (3.9) can be easily obtained by taking \(h=\frac{1}{y}\ln \left( 1+\frac{xy}{M_{2,n}}\right) \) in (3.8).
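Indeed, with this choice of h we have \(e^{hy}=1+\frac{xy}{M_{2,n}}\) and \(hy=\ln \left( 1+\frac{xy}{M_{2,n}}\right) \), so that

$$\begin{aligned} \frac{e^{hy}-1-hy}{y^2}M_{2,n}-hx=\frac{x}{y}-\frac{M_{2,n}}{y^2}\ln \left( 1+\frac{xy}{M_{2,n}}\right) -\frac{x}{y}\ln \left( 1+\frac{xy}{M_{2,n}}\right) =\frac{x}{y}-\frac{xy+M_{2,n}}{y^2}\ln \left( 1+\frac{xy}{M_{2,n}}\right) \!. \end{aligned}$$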

By Lemma 2.3, we have that

$$\begin{aligned} \frac{x}{y}-\frac{xy+M_{2,n}}{y^2}\ln \left( 1+\frac{xy}{M_{2,n}}\right)\le & {} -\frac{x^2}{2(xy+M_{2,n})}\left[ 1+\frac{2}{3}\ln \left( 1+\frac{xy}{M_{2,n}}\right) \right] \!. \end{aligned}$$
(3.13)
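To see this, write \(u=xy/M_{2,n}\), so that \(\frac{x}{y}=\frac{M_{2,n}}{y^2}u\) and \(\frac{xy+M_{2,n}}{y^2}=\frac{M_{2,n}}{y^2}(1+u)\); then Lemma 2.3 yields

$$\begin{aligned} \frac{x}{y}-\frac{xy+M_{2,n}}{y^2}\ln (1+u)=\frac{M_{2,n}}{y^2}\left[ u-(1+u)\ln (1+u)\right] \le -\frac{M_{2,n}}{y^2}\cdot \frac{u^2}{2(1+u)}\left[ 1+\frac{2}{3}\ln (1+u)\right] =-\frac{x^2}{2(xy+M_{2,n})}\left[ 1+\frac{2}{3}\ln (1+u)\right] \!. \end{aligned}$$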

Hence, the desired result (3.10) follows from (3.9) and (3.13) immediately. This completes the proof of the theorem. \(\square \)

Remark 3.1

Under the conditions of Theorem 3.1, Wang et al. [31] obtained the following inequality:

$$\begin{aligned} P(|S_n|\ge x)\le \sum _{i=1}^nP\left( |X_i|\ge y\right) +2g(n)\exp \left\{ \frac{x}{y}-\frac{x}{y}\ln \left( 1+\frac{xy^{t-1}}{M_{t,n}}\right) \right\} \!. \end{aligned}$$
(3.14)

Noting that

$$\begin{aligned} P\left( \max _{1\le i\le n} |X_i|\ge y\right) \le \sum _{i=1}^nP\left( |X_i|\ge y\right) , \end{aligned}$$

we can see that (3.1) is more precise than (3.14). In addition, we also obtain (3.2), which was not established in Wang et al. [31].

4 Complete convergence for arrays of rowwise WNOD random variables

In the previous section, we established some exponential probability inequalities for WNOD random variables. In this section, we will study the complete convergence for weighted sums of arrays of rowwise WNOD random variables by using the exponential probability inequalities that we established. The main ideas are inspired by Chen et al. [2] and Wu et al. [40].

Our main results on complete convergence for WNOD random variables are as follows. The first one (Theorem 4.1) is a very general result on complete convergence for weighted sums of WNOD random variables, which can be applied to establish further results on complete convergence and strong convergence. The proof is similar to that of Shen [20].

Theorem 4.1

Let \(\{X_{ni}, i\ge 1, n\ge 1\}\) be an array of rowwise WNOD random variables with finite second moments and \(\{a_{ni}, i\ge 1, n\ge 1\}\) be an array of constants. Let \(\{c_{n}, n\ge 1\}\) be a sequence of positive constants. Suppose that the following two conditions hold:

  1. (i)

    for every \(\epsilon >0\),

    $$\begin{aligned} \sum \limits _{n=1}^\infty c_n g(n)\sum \limits _{i=1}^n P\left( \left| a_{ni}X_{ni}\right| >\epsilon \right) <\infty ; \end{aligned}$$
  2. (ii)

    for some \(\delta >0\) and \(J\ge 1\),

    $$\begin{aligned} \sum \limits _{n=1}^\infty c_n g(n)\left[ \sum _{i=1}^n Ea_{ni}^2X_{ni}^2I\left( \left| a_{ni}X_{ni}\right| \le \delta \right) \right] ^J<\infty . \end{aligned}$$

    Then

    $$\begin{aligned} \sum \limits _{n=1}^\infty c_nP\left( \left| \sum _{i=1}^n\left[ a_{ni}X_{ni}-Ea_{ni}X_{ni}I\left( \left| a_{ni}X_{ni}\right| \le \delta \right) \right] \right| >\varepsilon \right) <\infty \quad \text {for any}\quad \varepsilon >0. \end{aligned}$$
    (4.1)

Proof

Without loss of generality, we assume that \(a_{ni}\ge 0\) for all \(i\ge 1\) and \(n\ge 1\) (otherwise, we use \(a_{ni}^+\) and \(a_{ni}^-\) instead of \(a_{ni}\), respectively, and note that \(a_{ni}=a_{ni}^+-a_{ni}^-\)). Denote

$$\begin{aligned} Y_{ni}=-\delta I\left( a_{ni}X_{ni}< -\delta \right) +a_{ni}X_{ni}I\left( \left| a_{ni}X_{ni}\right| \le \delta \right) +\delta I\left( a_{ni}X_{ni}>\delta \right) ,\quad i\ge 1,\quad n\ge 1. \end{aligned}$$

Since \(a_{ni}\ge 0\) and \(Y_{ni}=\max \left\{ -\delta , \min \left\{ a_{ni}X_{ni}, \delta \right\} \right\} \) is a nondecreasing function of \(X_{ni}\), it is easy to check by Corollary 2.1 (i) that \(\{Y_{ni}-EY_{ni}, i\ge 1, n\ge 1\}\) is an array of rowwise WNOD random variables.

For fixed \(n\ge 1\), denote

$$\begin{aligned} T_n= & {} \sum _{i=1}^n Y_{ni},\quad S_n=\sum _{i=1}^na_{ni}X_{ni},\quad S_n^{'}=\sum _{i=1}^na_{ni}X_{ni}I\left( \left| a_{ni}X_{ni}\right| \le \delta \right) \\Z_n= & {} \sum _{i=1}^n\left[ \delta I\left( a_{ni}X_{ni}< -\delta \right) -\delta I\left( a_{ni}X_{ni}>\delta \right) \right] =S_n^{'}-T_n. \end{aligned}$$

Thus,

$$\begin{aligned}P\left( |S_n-ES_n^{'}|>\varepsilon \right)\le & {} P\left( |S_n^{'}-ES_n^{'}|>\varepsilon \right) +\sum _{i=1}^n P\left( \left| a_{ni}X_{ni}\right| > \delta \right) \\\le & {} C\sum _{i=1}^n P\left( \left| a_{ni}X_{ni}\right| > \delta \right) +P\left( |T_n-ET_n|>\frac{\varepsilon }{2}\right) \!. \end{aligned}$$

By condition (i) and noting that \(g(n)\ge 1\), to prove (4.1), it suffices to show that for any \(\varepsilon >0\),

$$\begin{aligned} \sum _{n=1}^\infty c_nP\left( |T_n-ET_n|>\varepsilon \right) <\infty . \end{aligned}$$
(4.2)

Let \(B_n^2=\sum _{i=1}^nVar(Y_{ni})\). For any \(\varepsilon >0\) and \(a>0\), denote \(d=\min \left\{ 1, \frac{a\varepsilon }{6\delta ^2}, \frac{a}{12\delta }\right\} \),

$$\begin{aligned} \mathbf {N_1}= & {} \left\{ n: B_n^2>a\varepsilon \right\} ,\quad \mathbf {N_2}=\left\{ n: \sum _{i=1}^nP\left( \left| a_{ni}X_{ni}\right| >\min \left\{ \delta , \frac{a}{3}\right\} \right) >d\right\} \!,\\ \mathbf {N_3}= & {} \left\{ n: \sum _{i=1}^n Var\left( a_{ni}X_{ni}I\left( \left| a_{ni}X_{ni}\right| \le \delta \right) \right) >\frac{a\varepsilon }{2}\right\} ,\quad \mathbf {N_4}=\mathbf {N}-\left( \mathbf {N_2}\bigcup \mathbf {N_3}\right) \!, \end{aligned}$$

where \(\mathbf {N}\) is the set of positive integers.

It is easily seen that

$$\begin{aligned} Var \left( \delta I\left( a_{ni}X_{ni}>\delta \right) -\delta I\left( a_{ni}X_{ni}< -\delta \right) \right) \le \delta ^2 P\left( \left| a_{ni}X_{ni}\right| >\delta \right) \end{aligned}$$

and

$$\begin{aligned} Cov \left( a_{ni}X_{ni}I\left( \left| a_{ni}X_{ni}\right| \le \delta \right) , \delta I\left( a_{ni}X_{ni}>\delta \right) -\delta I\left( a_{ni}X_{ni}< -\delta \right) \right) \le \delta ^2P\left( \left| a_{ni}X_{ni}\right| >\delta \right) \!. \end{aligned}$$

Therefore,

$$\begin{aligned} B_n^2\le & {} \sum _{i=1}^n Var \left( a_{ni}X_{ni}I\left( \left| a_{ni}X_{ni}\right| \le \delta \right) \right) +3\delta ^2\sum _{i=1}^nP\left( \left| a_{ni}X_{ni}\right| >\delta \right) \!, \end{aligned}$$

which implies that \(\mathbf {N_1}\subset \mathbf {N_2}\bigcup \mathbf {N_3}\).
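Indeed, if \(n\notin \mathbf {N_2}\bigcup \mathbf {N_3}\), then the inequality above and the definition of d give

$$\begin{aligned} B_n^2\le \frac{a\varepsilon }{2}+3\delta ^2\sum _{i=1}^nP\left( \left| a_{ni}X_{ni}\right| >\min \left\{ \delta , \frac{a}{3}\right\} \right) \le \frac{a\varepsilon }{2}+3\delta ^2 d\le \frac{a\varepsilon }{2}+3\delta ^2\cdot \frac{a\varepsilon }{6\delta ^2}=a\varepsilon , \end{aligned}$$

so \(n\notin \mathbf {N_1}\). In particular, \(B_n^2\le a\varepsilon \) and \(\sum _{i=1}^nP\left( \left| a_{ni}X_{ni}\right| >\delta \right) \le d\) for every \(n\in \mathbf {N_4}\), which will be used below.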

Note that

$$\begin{aligned} \sum _{n\in \mathbf {N_2}\bigcup \mathbf {N_3}} c_nP\left( |T_n-ET_n|>\varepsilon \right)\le & {} \frac{1}{d}\sum _{n=1}^\infty c_n\sum _{i=1}^n P\left( \left| a_{ni}X_{ni}\right| >\min \left\{ \delta , \frac{a}{3}\right\} \right) \\&+\,\frac{2^J}{(a\varepsilon )^J}\sum _{n=1}^\infty c_n\left[ \sum _{i=1}^n Var \left( a_{ni}X_{ni}I\left( \left| a_{ni}X_{ni}\right| \le \delta \right) \right) \right] ^J\\< & {} \infty . \end{aligned}$$

Therefore, in order to prove (4.2), we need only to show that

$$\begin{aligned} \sum _{n\in \mathbf {N_4}} c_nP\left( |T_n-ET_n|>\varepsilon \right) <\infty . \end{aligned}$$
(4.3)

By Theorem 3.2, we can see that

$$\begin{aligned} \sum _{n\in \mathbf {N_4}} c_nP\left( |T_n-ET_n|>\varepsilon \right)\le & {} \sum _{n\in \mathbf {N_4}} c_nP\left( \max _{1\le i\le n}|Y_{ni}-EY_{ni}|>a\right) \nonumber \\&+\,2\sum _{n\in \mathbf {N_4}} c_ng(n)\exp \left\{ -\frac{\varepsilon ^2}{2(a\varepsilon +B_n^2)}\left[ 1+\frac{2}{3}\ln \left( 1+\frac{a\varepsilon }{B_n^2}\right) \right] \right\} \!. \end{aligned}$$

Firstly, we will show that for any \(n\in \mathbf {N_4}\), \(\max \nolimits _{1\le i\le n}|EY_{ni}|\le \frac{a}{2}\). Actually, for any \(n\in \mathbf {N_4}\),

$$\begin{aligned} \max \limits _{1\le i\le n}|EY_{ni}|\le & {} \max \limits _{1\le i\le n}E|Y_{ni}|\nonumber \\\le & {} \max \limits _{1\le i\le n}\left[ \delta P\left( \left| a_{ni}X_{ni}\right| >\delta \right) +E\left| a_{ni}X_{ni}\right| I\left( \left| a_{ni}X_{ni}\right| \le \frac{a}{3}\right) \right. \nonumber \\&\left. +\,E\left| a_{ni}X_{ni}\right| I\left( \frac{a}{3}<\left| a_{ni}X_{ni}\right| \le \delta \right) \right] \nonumber \\\le & {} 2\delta \sum _{i=1}^nP\left( \left| a_{ni}X_{ni}\right| >\min \left\{ \delta , \frac{a}{3}\right\} \right) +\frac{a}{3}\le 2\delta d+\frac{a}{3}\le \frac{a}{2},\nonumber \end{aligned}$$

which implies that

$$\begin{aligned} \sum _{n\in \mathbf {N_4}} c_nP\left( \max _{1\le i\le n}|Y_{ni}-EY_{ni}|>a\right)\le & {} \sum _{n=1}^\infty c_nP\left( \max _{1\le i\le n}|Y_{ni}|>\frac{a}{2}\right) \nonumber \\\le & {} \sum _{n=1}^\infty c_n\sum _{i=1}^nP\left( \left| a_{ni}X_{ni}\right| >\min \left\{ \delta , \frac{a}{2}\right\} \right) \nonumber \\< & {} \infty . \end{aligned}$$

It is easy to check that \(B_n^2\le a\varepsilon \) and \(\sum \nolimits _{i=1}^n P\left( \left| a_{ni}X_{ni}\right| > \delta \right) \le d\) when \(n\in \mathbf {N_4}\). Set \(a=\frac{\varepsilon }{6J}\). We have by conditions (i) and (ii) that

$$\begin{aligned}&\sum _{n\in \mathbf {N_4}} c_ng(n)\exp \left\{ -\frac{\varepsilon ^2}{2(a\varepsilon +B_n^2)}\left[ 1+\frac{2}{3}\ln \left( 1+\frac{a\varepsilon }{B_n^2}\right) \right] \right\} \\&\quad \le C\sum _{n\in \mathbf {N_4}} c_ng(n)\left( \frac{B_n^2}{B_n^2+a\varepsilon }\right) ^J~\le ~C\sum _{n\in \mathbf {N_4}} c_ng(n)\left( B_n^2\right) ^J\\&\quad \le C\sum _{n\in \mathbf {N_4}} c_ng(n)\left\{ \left[ \sum _{i=1}^n Var \left( a_{ni}X_{ni}I\left( \left| a_{ni}X_{ni}\right| \le \delta \right) \right) \right] ^J\right. \\&\qquad \left. +\left( 3\delta ^2\right) ^J\left[ \sum _{i=1}^nP\left( \left| a_{ni}X_{ni}\right| >\delta \right) \right] ^J\right\} <\infty . \end{aligned}$$
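Here the first two inequalities in the display above hold because \(B_n^2\le a\varepsilon \) for \(n\in \mathbf {N_4}\) and \(a=\varepsilon /(6J)\), so that \(\frac{\varepsilon ^2}{2(a\varepsilon +B_n^2)}\cdot \frac{2}{3}\ge \frac{\varepsilon ^2}{4a\varepsilon }\cdot \frac{2}{3}=\frac{\varepsilon }{6a}=J\), and therefore

$$\begin{aligned} \exp \left\{ -\frac{\varepsilon ^2}{2(a\varepsilon +B_n^2)}\left[ 1+\frac{2}{3}\ln \left( 1+\frac{a\varepsilon }{B_n^2}\right) \right] \right\} \le \left( 1+\frac{a\varepsilon }{B_n^2}\right) ^{-J}=\left( \frac{B_n^2}{B_n^2+a\varepsilon }\right) ^{J}\le \left( \frac{1}{a\varepsilon }\right) ^J\left( B_n^2\right) ^J. \end{aligned}$$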

This completes the proof of the theorem. \(\square \)

By Theorem 4.1, we can get the following corollaries.

Corollary 4.1

Let \(\{X_{ni}, i\ge 1, n\ge 1\}\) be an array of rowwise WNOD random variables with finite second moments and \(\{a_{ni}, i\ge 1, n\ge 1\}\) be an array of constants. Let \(\{c_{n}, n\ge 1\}\) be a sequence of positive constants such that conditions (i) and (ii) of Theorem 4.1 hold for every \(\epsilon >0\) and some \(\delta >0\), \(J\ge 1\). Assume further that

$$\begin{aligned} \sum \limits _{i=1}^nEa_{ni}X_{ni}I\left( \left| a_{ni}X_{ni}\right| \le \delta \right) \rightarrow 0\quad \text {as}\quad n\rightarrow \infty . \end{aligned}$$
(4.4)

Then

$$\begin{aligned} \sum \limits _{n=1}^\infty c_nP\left( \left| \sum _{i=1}^na_{ni}X_{ni}\right| >\varepsilon \right) <\infty \quad \text {for any}\quad \varepsilon >0. \end{aligned}$$
(4.5)

Corollary 4.2

Let \(\{X_{ni}, i\ge 1, n\ge 1\}\) be an array of rowwise WNOD random variables with mean zero and finite second moments, and \(\{a_{ni}, i\ge 1, n\ge 1\}\) be an array of constants. Let \(\{c_{n}, n\ge 1\}\) be a sequence of positive constants such that conditions (i) and (ii) of Theorem 4.1 hold for every \(\epsilon >0\) and some \(\delta >0\), \(J\ge 1\). Assume further that

$$\begin{aligned} \sum \limits _{i=1}^nEa_{ni}X_{ni}I\left( \left| a_{ni}X_{ni}\right| > \delta \right) \rightarrow 0\quad \text {as}\quad n\rightarrow \infty . \end{aligned}$$
(4.6)

Then (4.5) holds for any \(\varepsilon >0\).

If we take \(c_n\equiv 1\) and \(a_{ni}\equiv n^{-1/p}\) for \( i\ge 1\) and \(n\ge 1\) in Corollary 4.2, then we obtain the following corollary.

Corollary 4.3

Let \(p>0\) and \(\{X_{ni}, i\ge 1, n\ge 1\}\) be an array of rowwise WNOD random variables with mean zero and finite second moments. Suppose that the following conditions hold:

  1. (i)

    for every \(\epsilon >0\)

    $$\begin{aligned} \sum \limits _{n=1}^\infty g(n)\sum \limits _{i=1}^n P\left( \left| X_{ni}\right| >n^{1/p}\epsilon \right) <\infty , \end{aligned}$$
    (4.7)
  2. (ii)

    for some \(\delta >0\) and \(J\ge 1\),

    $$\begin{aligned} \sum \limits _{n=1}^\infty g(n) \left[ n^{-2/p}\sum _{i=1}^n EX_{ni}^2I\left( \left| X_{ni}\right| \le n^{1/p}\delta \right) \right] ^J<\infty , \end{aligned}$$
    (4.8)
  3. (iii)
    $$\begin{aligned} n^{-1/p}\sum \limits _{i=1}^nEX_{ni}I\left( \left| X_{ni}\right| > n^{1/p}\delta \right) \rightarrow 0\quad \text {as}\quad n\rightarrow \infty . \end{aligned}$$
    (4.9)

    Then

    $$\begin{aligned} \sum \limits _{n=1}^\infty P\left( \left| \sum _{i=1}^nX_{ni}\right| >n^{1/p}\varepsilon \right) <\infty \quad \text {for any}\quad \varepsilon >0, \end{aligned}$$
    (4.10)

    and \(n^{-1/p}\sum _{i=1}^nX_{ni}\rightarrow 0\quad a.s.\)
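The almost sure convergence stated above follows from (4.10) by the Borel–Cantelli lemma: for every \(\varepsilon >0\),

$$\begin{aligned} \sum \limits _{n=1}^\infty P\left( \left| \sum _{i=1}^nX_{ni}\right| >n^{1/p}\varepsilon \right) <\infty \quad \text {implies}\quad P\left( \left| n^{-1/p}\sum _{i=1}^nX_{ni}\right| >\varepsilon ~\text {i.o.}\right) =0, \end{aligned}$$

and letting \(\varepsilon \downarrow 0\) along a countable sequence yields \(n^{-1/p}\sum _{i=1}^nX_{ni}\rightarrow 0\) a.s.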

As an application of Corollary 4.3, we can get the following complete convergence result and Marcinkiewicz–Zygmund type strong law of large numbers for WNOD random variables.

Corollary 4.4

Let \(\{X_{ni}, i\ge 1, n\ge 1\}\) be an array of rowwise WNOD random variables satisfying

$$\begin{aligned} P(|X_{ni}|>x)\le CP(X>x) \end{aligned}$$
(4.11)

for all \(x\ge 0\), \(i\ge 1\) and \(n\ge 1\), where X is some nonnegative random variable. Assume that \(EX_{ni}=0\) and \(g (n)=O(n^\lambda )\) for some \(\lambda >0\). If \(EX^{(2+\lambda )p}<\infty \) for some \(1\le p<2\), then (4.10) holds for any \(\varepsilon >0\) and \(n^{-1/p}\sum _{i=1}^nX_{ni}\rightarrow 0\) a.s.

Proof

We only need to prove that conditions (i)–(iii) of Corollary 4.3 hold true.

For any \(\epsilon >0\),

$$\begin{aligned} \sum \limits _{n=1}^\infty g(n) \sum \limits _{i=1}^n P\left( \left| X_{ni}\right| >n^{1/p}\epsilon \right)\le & {} C\sum \limits _{n=1}^\infty n^{\lambda +1} P\left( \left| X\right| >n^{1/p}\epsilon \right) \\= & {} C\sum \limits _{n=1}^\infty n^{\lambda +1} \sum _{i=n}^\infty P\left( i^{1/p}\epsilon <\left| X\right| \le (i+1)^{1/p}\epsilon \right) \\\le & {} CE|X|^{(2+\lambda )p}~<~\infty , \end{aligned}$$
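The last inequality can be seen by interchanging the order of summation:

$$\begin{aligned} \sum \limits _{n=1}^\infty n^{\lambda +1} \sum _{i=n}^\infty P\left( i^{1/p}\epsilon <\left| X\right| \le (i+1)^{1/p}\epsilon \right) =\sum _{i=1}^\infty P\left( i^{1/p}\epsilon <\left| X\right| \le (i+1)^{1/p}\epsilon \right) \sum _{n=1}^i n^{\lambda +1}\le C\sum _{i=1}^\infty i^{\lambda +2}P\left( i^{1/p}\epsilon <\left| X\right| \le (i+1)^{1/p}\epsilon \right) \!, \end{aligned}$$

and on the event \(\{|X|>i^{1/p}\epsilon \}\) we have \(i<(|X|/\epsilon )^p\), so the last sum is at most \(C\epsilon ^{-(2+\lambda )p}E|X|^{(2+\lambda )p}\).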

which implies (4.7).

Take J large enough such that \(J>\frac{p(1+\lambda )}{2-p}\), which implies that \((1-2/p)J+\lambda <-1\). It follows from \(E|X|^{(2+\lambda )p}<\infty \) and \((2+\lambda )p\ge 2\) that \(EX^2<\infty \). Noting that \(EX_{ni}^2\le CEX^2\) by Lemma 2.2, we can get that

$$\begin{aligned}&\sum \limits _{n=1}^\infty g(n)\left[ n^{-2/p}\sum _{i=1}^n EX_{ni}^2I\left( \left| X_{ni}\right| \le n^{1/p}\delta \right) \right] ^J\\&\quad \le C\sum \limits _{n=1}^\infty n^\lambda \left[ n^{-2/p}\sum _{i=1}^n EX_{ni}^2I\left( \left| X_{ni}\right| \le n^{1/p}\delta \right) \right] ^J\\&\quad \le C\left( EX^2\right) ^J\sum \limits _{n=1}^\infty n^{\lambda +(1-2/p)J}~ <~\infty , \end{aligned}$$

which implies (4.8).

Noting that \(E|X|^{2p}<\infty \), we have by Lemma 2.2 that

$$\begin{aligned} \left| n^{-1/p}\sum \limits _{i=1}^nEX_{ni}I\left( \left| X_{ni}\right| > n^{1/p}\delta \right) \right|\le & {} n^{-1/p}\sum \limits _{i=1}^nE\left| X_{ni}\right| I\left( \left| X_{ni}\right| > n^{1/p}\delta \right) \\\le & {} Cn^{1-1/p}E\left| X\right| I\left( \left| X\right| > n^{1/p}\delta \right) \\\le & {} Cn^{-1}E\left| X\right| ^{2p}I\left( \left| X\right| > n^{1/p}\delta \right) \\\rightarrow & {} 0\quad \text {as}\quad n\rightarrow \infty , \end{aligned}$$
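Here the third inequality uses the fact that \(|X|>n^{1/p}\delta \) implies \(1\le \left( |X|/(n^{1/p}\delta )\right) ^{2p-1}\), so that

$$\begin{aligned} n^{1-1/p}E\left| X\right| I\left( \left| X\right| > n^{1/p}\delta \right) \le \delta ^{1-2p}n^{1-1/p}n^{-(2p-1)/p}E\left| X\right| ^{2p}I\left( \left| X\right| > n^{1/p}\delta \right) =\delta ^{1-2p}n^{-1}E\left| X\right| ^{2p}I\left( \left| X\right| > n^{1/p}\delta \right) \!. \end{aligned}$$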

which implies (4.9).

From the statements above and Corollary 4.3, we can get (4.10) immediately. This completes the proof of the corollary. \(\square \)

If we take \(c_n\equiv 1\) and \(a_{ni}=a_n^{-1}\) in Corollary 4.2, where \(\{a_{n}, n\ge 1\}\) is a sequence of positive constants, then we can get the following corollary.

Corollary 4.5

Let \(\{X_{ni}, i\ge 1, n\ge 1\}\) be an array of rowwise mean zero WNOD random variables with finite second moments and \(g(n)=O(1)\), and let \(\{a_{n}, n\ge 1\}\) be a sequence of positive constants. Let \(\{g_i(t), i\ge 1\}\) be a sequence of nonnegative, even functions. Assume that there exist some \(\beta \in (1,2]\) and \(\gamma >0\) such that \(g_i(t)\ge \gamma t^\beta \) for \(0<t\le 1\) and \(g_i(t)\ge \gamma t\) for \(t>1\). If

$$\begin{aligned} \sum _{n=1}^\infty \sum _{i=1}^{n} Eg_i\left( \frac{X_{ni}}{a_n}\right) <\infty , \end{aligned}$$
(4.12)

then

$$\begin{aligned} \sum \limits _{n=1}^\infty P\left( \frac{1}{a_n}\left| \sum _{i=1}^nX_{ni}\right| >\varepsilon \right) <\infty \quad \text {for any}\quad \varepsilon >0. \end{aligned}$$
(4.13)

Proof

We will prove that the conditions of Corollary 4.2 hold with \(c_n\equiv 1\) and \(a_{ni}=a_n^{-1}\). Without loss of generality, we assume that \(0<\epsilon <1\). Noting that each \(g_i\) is even and that \(g_i(t)\ge \gamma t^\beta \) for \(0<t\le 1\) and \(g_i(t)\ge \gamma t\) for \(t>1\), we have by (4.12) that

$$\begin{aligned}&\sum \limits _{n=1}^\infty \sum \limits _{i=1}^{n} P\left( \left| \frac{X_{ni}}{a_n}\right| >\epsilon \right) =\sum \limits _{n=1}^\infty \sum \limits _{i=1}^{n}EI\left( \left| X_{ni}\right| >{a_n}\right) +\sum \limits _{n=1}^\infty \sum \limits _{i=1}^{n}EI\left( \epsilon a_n<\left| X_{ni}\right| \le {a_n}\right) \nonumber \\&\quad \le \sum \limits _{n=1}^\infty \sum \limits _{i=1}^{n}\frac{E|X_{ni}|}{a_n}I\left( \left| X_{ni}\right| >{a_n}\right) +\sum \limits _{n=1}^\infty \sum \limits _{i=1}^{n}\frac{E|X_{ni}|^\beta }{\epsilon ^\beta a_n^\beta }I\left( \epsilon {a_n}<\left| X_{ni}\right| \le a_n\right) \nonumber \\&\quad \le \frac{1}{\gamma }\sum \limits _{n=1}^\infty \sum \limits _{i=1}^{n}Eg_i\left( \frac{X_{ni}}{a_n}\right) +\frac{1}{\gamma \epsilon ^\beta }\sum \limits _{n=1}^\infty \sum \limits _{i=1}^{n}Eg_i\left( \frac{X_{ni}}{a_n}\right) \nonumber \\&\quad <\infty , \end{aligned}$$

which implies that condition (i) in Theorem 4.1 holds.

For \(0<\delta <1\) and \(J\ge 1\), we have by the assumptions on \(g_i(t)\) and condition (4.12) that

$$\begin{aligned} \sum \limits _{n=1}^\infty \left[ a_n^{-2}\sum _{i=1}^{n} EX_{ni}^2I\left( \left| X_{ni}\right| \le a_n\right) \right] ^J\le & {} \sum \limits _{n=1}^\infty \left[ \sum _{i=1}^{n} \frac{E|X_{ni}|^\beta }{a_n^\beta }I\left( \left| X_{ni}\right| \le a_n\right) \right] ^J\nonumber \\\le & {} \frac{1}{\gamma ^J}\sum \limits _{n=1}^\infty \left[ \sum _{i=1}^{n}Eg_i\left( \frac{X_{ni}}{a_n}\right) \right] ^J\nonumber \\\le & {} \frac{1}{\gamma ^J}\left[ \sum \limits _{n=1}^\infty \sum _{i=1}^{n}Eg_i\left( \frac{X_{ni}}{a_n}\right) \right] ^J\nonumber \\< & {} \infty \end{aligned}$$
(4.14)

and

$$\begin{aligned} \sum \limits _{i=1}^{n}\frac{E|X_{ni}|}{a_n}I\left( \left| X_{ni}\right| > \delta a_n\right)= & {} \sum \limits _{i=1}^{n}\frac{E|X_{ni}|}{a_n}I\left( \left| X_{ni}\right| > a_n\right) +\sum \limits _{i=1}^{n}\frac{E|X_{ni}|}{a_n}I\left( \delta a_n<\left| X_{ni}\right| \le a_n\right) \nonumber \\\le & {} \frac{1}{\gamma }\sum \limits _{i=1}^{n}Eg_i\left( \frac{X_{ni}}{a_n}\right) +\,\delta ^{1-\beta }\sum \limits _{i=1}^{n}\frac{E|X_{ni}|^\beta }{a_n^\beta }I\left( \delta a_n<\left| X_{ni}\right| \le a_n\right) \nonumber \\\le & {} \frac{1}{\gamma }\sum \limits _{i=1}^{n}Eg_i\left( \frac{X_{ni}}{a_n}\right) +\frac{\delta ^{1-\beta }}{\gamma }\sum \limits _{i=1}^{n}Eg_i\left( \frac{X_{ni}}{a_n}\right) \nonumber \\\rightarrow & {} 0\quad \text {as}\quad n\rightarrow \infty . \end{aligned}$$
(4.15)

By (4.14) and (4.15), we can see that condition (ii) in Theorem 4.1 and (4.4) in Corollary 4.2 are satisfied. Hence, the desired result (4.13) follows by Corollary 4.2 immediately. This completes the proof of the corollary. \(\square \)

Remark 4.1

It is easily seen that the conditions in Corollary 4.2 are weaker than those in Corollary 4.5. In addition, the results of Corollary 4.5 for NOD random variables have been obtained by Shen [16]. So our results of Theorem 4.1 and Corollary 4.2 generalize and improve the corresponding ones of Shen [16].

5 Complete moment convergence for arrays of rowwise WNOD random variables

In this section, we will study the complete moment convergence for arrays of rowwise WNOD random variables by using the exponential probability inequalities and complete convergence result that we obtained in Sects. 3 and 4. The main ideas are inspired by Wu et al. [40].

The concept of complete moment convergence was introduced by Chow [5] as follows: Let \(\{X_n, n\ge 1\}\) be a sequence of random variables and \(a_n>0\), \(b_n>0\), \(q>0\). If

$$\begin{aligned} \sum _{n=1}^\infty a_nE\left\{ b_n^{-1} |X_n|-\varepsilon \right\} ^q_+ < \infty \end{aligned}$$

for all \(\varepsilon >0\), then \(\{X_n, n\ge 1\}\) is said to satisfy the complete moment convergence. For more details about complete moment convergence, one can refer to Wu and Zhu [41], Wang and Hu [25], Yang et al. [42], Sung [22], and so on.
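It is worth noting that complete moment convergence is stronger than complete convergence: since \(\left\{ b_n^{-1} |X_n|-\varepsilon \right\} _+^q\ge \varepsilon ^q I\left( b_n^{-1}|X_n|>2\varepsilon \right) \) for any \(\varepsilon >0\),

$$\begin{aligned} \sum _{n=1}^\infty a_nE\left\{ b_n^{-1} |X_n|-\varepsilon \right\} ^q_+<\infty \quad \text {yields}\quad \sum _{n=1}^\infty a_nP\left( b_n^{-1}|X_n|>2\varepsilon \right) <\infty . \end{aligned}$$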

Our main results on complete moment convergence for arrays of rowwise WNOD random variables are as follows.

Theorem 5.1

Let \(\{X_{ni}, i\ge 1, n\ge 1\}\) be an array of rowwise WNOD random variables with mean zero and finite second moments, and \(\{a_{ni}, i\ge 1, n\ge 1\}\) be an array of constants. Let \(\{c_{n}, n\ge 1\}\) be a sequence of positive constants such that conditions (i) and (ii) of Theorem 4.1 hold for every \(\epsilon >0\) and some \(\delta >0\), \(J>1\). Assume further that

$$\begin{aligned} \sum _{n=1}^\infty (c_n\vee 1)\sum \limits _{i=1}^nE|a_{ni}X_{ni}|I\left( \left| a_{ni}X_{ni}\right| > \delta /(4J)\right) <\infty , \end{aligned}$$
(5.1)
$$\begin{aligned} \sum _{n=1}^\infty c_ng(n) \sum \limits _{i=1}^nE|a_{ni}X_{ni}|I\left( \left| a_{ni}X_{ni}\right| > \delta \right) <\infty . \end{aligned}$$
(5.2)

Then

$$\begin{aligned} \sum _{n=1}^\infty c_nE\left\{ \left| \sum _{i=1}^na_{ni}X_{ni}\right| -\varepsilon \right\} _+<\infty \quad \text {for any}\quad \varepsilon >0. \end{aligned}$$
(5.3)

Proof

Note that (5.1) implies

$$\begin{aligned} \sum _{n=1}^\infty c_n\sum \limits _{i=1}^nE|a_{ni}X_{ni}|I\left( \left| a_{ni}X_{ni}\right| > \delta \right) <\infty , \end{aligned}$$
(5.4)
$$\begin{aligned} \sum _{n=1}^\infty \sum \limits _{i=1}^nE|a_{ni}X_{ni}|I\left( \left| a_{ni}X_{ni}\right| > \delta \right) <\infty , \end{aligned}$$
(5.5)

and thus, since the terms of the convergent series in (5.5) tend to zero as \(n\rightarrow \infty \), condition (4.6) holds true. It follows by Corollary 4.2 that (4.5) holds for any \(\varepsilon >0\). Furthermore, by (5.2) and (5.5), we have

$$\begin{aligned} \sum _{n=1}^\infty c_ng(n)\left( \sum \limits _{i=1}^nE|a_{ni}X_{ni}|I\left( \left| a_{ni}X_{ni}\right| > \delta \right) \right) ^J<\infty . \end{aligned}$$
(5.6)

Therefore,

$$\begin{aligned}&\sum _{n=1}^\infty c_nE\left\{ \left| \sum _{i=1}^na_{ni}X_{ni}\right| -\varepsilon \right\} _+=\sum _{n=1}^\infty c_n\int _0^\infty P\left( \left| \sum _{i=1}^na_{ni}X_{ni}\right| -\varepsilon >t\right) dt\\&\quad =\sum _{n=1}^\infty c_n\left[ \int _0^\delta P\left( \left| \sum _{i=1}^na_{ni}X_{ni}\right| >\varepsilon +t\right) dt+\int _\delta ^\infty P\left( \left| \sum _{i=1}^na_{ni}X_{ni}\right| >\varepsilon +t\right) dt\right] \\&\quad \le \delta \sum _{n=1}^\infty c_nP\left( \left| \sum _{i=1}^na_{ni}X_{ni}\right| >\varepsilon \right) +\sum _{n=1}^\infty c_n\int _\delta ^\infty P\left( \left| \sum _{i=1}^na_{ni}X_{ni}\right| >t\right) dt\\&\quad \le C+\sum _{n=1}^\infty c_n\int _\delta ^\infty P\left( \left| \sum _{i=1}^na_{ni}X_{ni}\right| >t\right) dt. \end{aligned}$$

In order to prove (5.3), we only need to prove that

$$\begin{aligned} H:=\sum _{n=1}^\infty c_n\int _\delta ^\infty P\left( \left| \sum _{i=1}^na_{ni}X_{ni}\right| >t\right) dt<\infty . \end{aligned}$$
(5.7)

For fixed \(t>0\), denote for \(i\ge 1\) and \(n\ge 1\) that

$$\begin{aligned} Y_{ni}= & {} -tI\left( a_{ni}X_{ni}<-t\right) +a_{ni}X_{ni}I\left( |a_{ni}X_{ni}|\le t\right) +tI\left( a_{ni}X_{ni}>t\right) \!,\\ Z_{ni}= & {} a_{ni}X_{ni}-Y_{ni}=\left( a_{ni}X_{ni}+t\right) I\left( a_{ni}X_{ni}<-t\right) +\left( a_{ni}X_{ni}-t\right) I\left( a_{ni}X_{ni}>t\right) \!. \end{aligned}$$

Note that this decomposition depends on t.

It is easily seen that

$$\begin{aligned} P\left( \left| \sum _{i=1}^na_{ni}X_{ni}\right| >t\right)= & {} P\left( \left| \sum _{i=1}^na_{ni}X_{ni}\right| >t, \bigcup _{i=1}^n(\left| a_{ni}X_{ni}\right| >t)\right) \\&+\,P\left( \left| \sum _{i=1}^na_{ni}X_{ni}\right| >t, \bigcap _{i=1}^n(\left| a_{ni}X_{ni}\right| \le t)\right) \\\le & {} \sum _{i=1}^n P\left( \left| a_{ni}X_{ni}\right| >t\right) +P\left( \left| \sum _{i=1}^nY_{ni}\right| >t\right) , \end{aligned}$$

which implies that

$$\begin{aligned} H\le & {} \sum _{n=1}^\infty c_n\sum _{i=1}^n\int _\delta ^\infty P\left( \left| a_{ni}X_{ni}\right| >t\right) dt+\sum _{n=1}^\infty c_n\int _\delta ^\infty P\left( \left| \sum _{i=1}^nY_{ni}\right| >t\right) dt\\=: & {} H_1+H_2. \end{aligned}$$

By (5.4), we can get that

$$\begin{aligned} H_1\le C\sum _{n=1}^\infty c_n\sum _{i=1}^nE\left| a_{ni}X_{ni}\right| I\left( \left| a_{ni}X_{ni}\right| >\delta \right) <\infty .\end{aligned}$$

To prove (5.7), it suffices to show that \(H_2<\infty \). Firstly, we will show that

$$\begin{aligned} \max _{t\ge \delta }t^{-1}\left| \sum _{i=1}^nEY_{ni}\right| \rightarrow 0~~\text {as}~~n\rightarrow \infty . \end{aligned}$$
(5.8)

Note that \(|Z_{ni}|\le |a_{ni}X_{ni}|I(|a_{ni}X_{ni}|>t)\). It follows by \(EX_{ni}=0\) and (5.5) that

$$\begin{aligned} \max _{t\ge \delta }t^{-1}\left| \sum _{i=1}^nEY_{ni}\right|= & {} \max _{t\ge \delta }t^{-1}\left| \sum _{i=1}^nEZ_{ni}\right| \\\le & {} \delta ^{-1}\sum _{i=1}^n E|a_{ni}X_{ni}|I(|a_{ni}X_{ni}|>\delta )\rightarrow 0\quad \text {as}\quad n\rightarrow \infty , \end{aligned}$$

which implies (5.8) and \(\left| \sum _{i=1}^nEY_{ni}\right| \le t/2\) holds for any \(t\ge \delta \) and all n large enough. Hence,

$$\begin{aligned} H_2\le C\sum _{n=1}^\infty c_n\int _\delta ^\infty P\left( \left| \sum _{i=1}^n(Y_{ni}-EY_{ni})\right| >t/2\right) dt. \end{aligned}$$
(5.9)

It follows by Corollary 2.1 that \(\{Y_{ni}, i\ge 1, n\ge 1\}\) is an array of rowwise WNOD random variables. Denote \(B_n^2=\sum _{i=1}^n E(Y_{ni}-EY_{ni})^2\). Applying Theorem 3.1 with \(x=t/2\) and \(y=t/(2J)\), we can get by (5.9) that

$$\begin{aligned} H_2\le & {} C\sum _{n=1}^\infty c_n\sum _{i=1}^n\int _\delta ^\infty P\left( \left| Y_{ni}-EY_{ni}\right| >t/(2J)\right) dt\\&+\,C\sum _{n=1}^\infty c_ng(n)\int _\delta ^\infty \left( \frac{B_n^2}{B_n^2+t^2/(4J)}\right) ^Jdt\\=: & {} H_{21}+H_{22}. \end{aligned}$$
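Here, for fixed \(t\ge \delta \), Theorem 3.1 is applied (with moment exponent 2, so that the quantity \(M_{2,n}\) there equals \(B_n^2\)) to the mean zero WNOD random variables \(\{Y_{ni}-EY_{ni}, 1\le i\le n\}\) with \(x=t/2\) and \(y=t/(2J)\); since \(x/y=J\) and \(xy=t^2/(4J)\), the exponential term in (3.1) becomes

$$\begin{aligned} 2g(n)\exp \left\{ J-J\ln \left( 1+\frac{t^2/(4J)}{B_n^2}\right) \right\} =2e^Jg(n)\left( \frac{B_n^2}{B_n^2+t^2/(4J)}\right) ^J, \end{aligned}$$

and the factor \(2e^J\) is absorbed into the constant C.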

Similar to the proof of (5.8), we can see that \(|EY_{ni}|\le t/(4J)\) holds for any \(t\ge \delta \) and all n large enough. Noting that \(|Y_{ni}|\le |a_{ni}X_{ni}|\), we have by (5.1) that

$$\begin{aligned} H_{21}\le & {} C\sum _{n=1}^\infty c_n\sum _{i=1}^n\int _\delta ^\infty P\left( \left| Y_{ni}\right| >t/(4J)\right) dt\\\le & {} C\sum _{n=1}^\infty c_n\sum _{i=1}^n\int _\delta ^\infty P\left( \left| a_{ni}X_{ni}\right| >t/(4J)\right) dt\\\le & {} C\sum _{n=1}^\infty c_n\sum _{i=1}^nE\left| a_{ni}X_{ni}\right| I\left( \left| a_{ni}X_{ni}\right| >\delta /(4J)\right) \\< & {} \infty . \end{aligned}$$

In the following, we will show that \(H_{22}<\infty \).

By the definition of \(Y_{ni}\), we can see that

$$\begin{aligned} B_n^2\le \sum _{i=1}^n EY_{ni}^2=\sum _{i=1}^nEa_{ni}^2X_{ni}^2I\left( \left| a_{ni}X_{ni}\right| \le t\right) +\sum _{i=1}^nt^2P\left( \left| a_{ni}X_{ni}\right| > t\right) \!, \end{aligned}$$

which together with the \(C_r\)-inequality yields that

$$\begin{aligned} H_{22}\le & {} C\sum _{n=1}^\infty c_ng(n)\int _\delta ^\infty \left( t^{-2}B_n^2\right) ^Jdt\\\le & {} C\sum _{n=1}^\infty c_ng(n)\int _\delta ^\infty \left( t^{-2}\sum _{i=1}^nEa_{ni}^2X_{ni}^2I\left( \left| a_{ni}X_{ni}\right| \le t\right) +\sum _{i=1}^nP\left( \left| a_{ni}X_{ni}\right| > t\right) \right) ^Jdt\\\le & {} C\sum _{n=1}^\infty c_ng(n)\int _\delta ^\infty \left( t^{-2}\sum _{i=1}^nEa_{ni}^2X_{ni}^2I\left( \left| a_{ni}X_{ni}\right| \le \delta \right) \right) ^Jdt\\&+\,C\sum _{n=1}^\infty c_ng(n)\int _\delta ^\infty \left( t^{-2}\sum _{i=1}^nEa_{ni}^2X_{ni}^2I\left( \delta <\left| a_{ni}X_{ni}\right| \le t\right) \right) ^Jdt\\&+\,C\sum _{n=1}^\infty c_ng(n)\int _\delta ^\infty \left( \sum _{i=1}^nP\left( \left| a_{ni}X_{ni}\right| > t\right) \right) ^Jdt\\=: & {} H_{221}+H_{222}+H_{223}. \end{aligned}$$

Since \(J>1\), it follows by condition (ii) of Theorem 4.1 and (5.6) that

$$\begin{aligned} H_{221}\le & {} C\sum _{n=1}^\infty c_ng(n)\left( \sum _{i=1}^nEa_{ni}^2X_{ni}^2I\left( \left| a_{ni}X_{ni}\right| \le \delta \right) \right) ^J\\< & {} \infty , \end{aligned}$$

and

$$\begin{aligned} H_{222}\le & {} C\sum _{n=1}^\infty c_ng(n)\int _\delta ^\infty \left( t^{-1}\sum _{i=1}^nE|a_{ni}X_{ni}|I\left( \delta <\left| a_{ni}X_{ni}\right| \le t\right) \right) ^Jdt\\\le & {} C\sum _{n=1}^\infty c_ng(n)\left( \sum _{i=1}^nE|a_{ni}X_{ni}|I\left( \left| a_{ni}X_{ni}\right| >\delta \right) \right) ^J\\< & {} \infty . \end{aligned}$$

For any \(t\ge \delta \), it follows by (5.5) that

$$\begin{aligned} \sum _{i=1}^nP\left( \left| a_{ni}X_{ni}\right| > t\right)\le & {} \frac{1}{\delta }\sum _{i=1}^nE\left| a_{ni}X_{ni}\right| I\left( \left| a_{ni}X_{ni}\right| > \delta \right) \\\rightarrow & {} 0\quad \text {as}\quad n\rightarrow \infty , \end{aligned}$$

which implies that \(\sum _{i=1}^nP\left( \left| a_{ni}X_{ni}\right| > t\right) <1\) holds for any \(t\ge \delta \) and all n large enough. Hence, we have by (5.2) that

$$\begin{aligned} H_{223}\le & {} C\sum _{n=1}^\infty c_ng(n)\int _\delta ^\infty \sum _{i=1}^nP\left( \left| a_{ni}X_{ni}\right| > t\right) dt\\\le & {} C\sum _{n=1}^\infty c_ng(n)\sum _{i=1}^nE\left| a_{ni}X_{ni}\right| I\left( \left| a_{ni}X_{ni}\right| >\delta \right) \\< & {} \infty . \end{aligned}$$

The desired result (5.3) follows from the statements above immediately. This completes the proof of the theorem. \(\square \)