1 Introduction

In 2003, Jajte [11] gave a strong law of large numbers for general weighted sums of i.i.d. random variables using the class of functions \(\phi \) satisfying the following conditions:

  1. (i)

    For some \(d \ge 1, \phi \) is strictly increasing on \([d,\infty )\) with range \([0,\infty )\),

  2. (ii)

    There exist \(C>0\) and a positive integer \(k_0\ge d\) such that \(\phi (y+1)/\phi (y)\le C\) for all \(y \ge k_0\),

  3. (iii)

There exist constants \(a\) and \(b\) such that for all \(s > d\), \(\phi ^2(s)\int _{s}^\infty \frac{1}{\phi ^2(x)}dx \le as+b.\)

Inspired by Jajte's work, we develop his techniques to obtain complete convergence for randomly weighted sums of coordinatewise pairwise negative quadrant dependent random variables taking values in Hilbert spaces, with general normalizing sequences. Let us first recall the notion of negative quadrant dependent (NQD, for short) random variables, introduced by Lehmann [13] in 1966: two random variables \(X_1\) and \(X_2\) are called NQD if

$$\begin{aligned} P(X_1\le x_1, X_2 \le x_2)\le P(X_1\le x_1)P(X_2 \le x_2) \end{aligned}$$

for any real numbers \(x_1, x_2\). A sequence of random variables \(\{X_n, n\ge 1\}\) is said to be pairwise NQD if every pair of random variables in the sequence is NQD. This is a very broad class of dependence: many later notions of negative dependence, such as negatively associated and negatively orthant dependent random variables, are built upon it (see Joag-Dev and Proschan [12]). Moreover, a pairwise NQD assumption on the random variables in a statistical model is often more realistic than an independence assumption, so many statisticians have investigated pairwise NQD random variables (see Li et al. [14], Li and Yang [15], Matula [16], Wu [20]).
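As a concrete illustration (our own sketch, not from the paper), the antithetic pair \((Z,-Z)\) for a standard normal \(Z\) is NQD, and the defining inequality can be checked by simulation at a few grid points; the seed, sample size and slack are arbitrary choices:

```python
import numpy as np

# Monte Carlo check of the NQD inequality for the antithetic pair (Z, -Z):
# the empirical joint CDF should never exceed the product of the marginals.
rng = np.random.default_rng(0)
z = rng.standard_normal(200_000)
x1, x2 = z, -z

violations = 0
for t1 in (-1.0, 0.0, 1.0):
    for t2 in (-1.0, 0.0, 1.0):
        joint = np.mean((x1 <= t1) & (x2 <= t2))
        product = np.mean(x1 <= t1) * np.mean(x2 <= t2)
        if joint > product + 3e-3:   # small slack for Monte Carlo error
            violations += 1
```

With this seed no grid point violates the inequality, consistent with \((Z,-Z)\) being NQD.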

The aim of this paper is to obtain conditions for complete convergence of randomly weighted sums of coordinatewise pairwise NQD random variables taking values in Hilbert spaces. Based on Jajte's definition, we consider functions \(\phi \) that satisfy the following conditions:

  1. (A1)

    \(\phi \) is strictly increasing on \([s,\infty )\) with range \([0,\infty )\) and \(\sup _{n \ge 1} \frac{\phi (n+1)}{\phi (n)} < \infty \),

  2. (A2)

    \(\phi (s)\int _{1}^s \frac{x^{r-1}}{\phi (x)}dx \le Cs^r, \)

  3. (A3)

    \(\phi ^2(s)\int _{s}^\infty \frac{x^{r-1}}{\phi ^2 (x)}dx \le Cs^r, \)

  4. (A4)

    \(\phi ^2(s)\int _{s}^\infty \frac{x^{r-1}\log _+^2(x)}{\phi ^2 (x)}dx \le Cs^r\log _+^2(s). \)

Here and hereafter, \(r\ge 1\), \(s>0\) and \(\log _+(x) = \max \{1, \log (x)\}\), where \(\log \) denotes the logarithm to base 2. Using the properties of the class of functions \(\phi \) satisfying the conditions (A1), (A2), and (A3), we establish conditions for the convergence

$$ \sum _{n=1}^{\infty }\dfrac{1}{n^{2-r}} P\left( \left\| \sum _{i=1}^{n} (A_{ni}X_{i} -E(A_{ni}X_{i})) \right\| \ge \varepsilon \phi (n)\right) <\infty , $$

where \(\{A_{ni}, n \ge 1, 1 \le i \le n\}\) is an array of rowwise pairwise NQD random variables, \(1 \le r \le 2\) and \(\{X_{n}, n\ge 1\}\) is a sequence of \(\mathbb {H}\)-valued coordinatewise pairwise NQD random variables, in Theorem 9. Next, based on Theorem 9, we establish Theorem 10, which gives the convergence

$$ \sum _{n=1}^{\infty }\dfrac{1}{n^{2-r}} P\left( \max \limits _{1 \le k \le n} \left\| \sum _{i=1}^{k}(A_{ni}X_{i} - E(A_{ni}X_{i})) \right\| \ge \varepsilon \phi (n)\right) <\infty , $$

by using the conditions (A1), (A2) and (A4). Letting \(r=1\) in Theorem 10, we obtain a Marcinkiewicz-Zygmund type SLLN for randomly weighted sums, with general normalizing sequences, of coordinatewise pairwise NQD random variables. The main results are obtained by developing Jajte's techniques together with an analysis of the properties of the class of functions \(\phi \).
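For a concrete feel for the conditions on \(\phi \), the following numerical sketch (our own illustration, not part of the paper's argument) checks that \(\phi (x)=x^{1/p}\) keeps the ratios in (A2) and (A3) bounded; the values \(r=1\), \(p=1.5\) are arbitrary choices with \(1<rp<2\), for which both conditions hold:

```python
import numpy as np

# Numerical sanity check (an illustration, not a proof): phi(x) = x^(1/p)
# satisfies (A2) and (A3) when 1 < r*p < 2.  Here r = 1, p = 1.5.
r, p = 1.0, 1.5

def phi(x):
    return x ** (1.0 / p)

def trapezoid(y, x):
    # simple trapezoidal quadrature on a (possibly non-uniform) grid
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def ratio_A2(s):
    x = np.linspace(1.0, s, 200_000)
    return phi(s) * trapezoid(x ** (r - 1) / phi(x), x) / s ** r

def ratio_A3(s, cutoff=1e9):
    # the integrand decays like x^(-4/3), so a large finite cutoff suffices
    x = np.geomspace(s, cutoff, 200_000)
    return phi(s) ** 2 * trapezoid(x ** (r - 1) / phi(x) ** 2, x) / s ** r

ratios_A2 = [ratio_A2(s) for s in (10.0, 100.0, 1000.0)]  # stays below 3
ratios_A3 = [ratio_A3(s) for s in (10.0, 100.0, 1000.0)]  # stays below 3
```

In closed form both ratios equal \(3(1-o(1))\) for this \(\phi \), so boundedness in \(s\) is exactly what (A2) and (A3) require.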

2 Preliminaries

Let \(\mathbb {H}\) be a real separable Hilbert space with the norm \(\Vert \cdot \Vert \) generated by an inner product \(\langle \cdot ,\cdot \rangle \), and let \(\{e_j, j\in B\}\) be an orthonormal basis in \(\mathbb {H}\). For an \(\mathbb {H}\)-valued random variable X, we denote \(\langle X, e_{j} \rangle \) by \(X^j\). We now recall the concept of coordinatewise pairwise NQD random variables taking values in Hilbert spaces.

Definition 1

(Dung et al. [6], Hien et al. [8]) A sequence \(\{X_n, n\ge 1\}\) of \(\mathbb {H}\)-valued random variables is said to be coordinatewise pairwise NQD if for any \(j\in B\), the sequence of random variables \(\{ X_n^j , n\ge 1\}\) is pairwise NQD.

We would like to note that the notion of coordinatewise pairwise NQD random variables with values in Hilbert spaces imposes no NQD requirement between two different coordinates of the same random variable; repeated coordinates are even permitted (for more details, see Hien et al. [8]). If a sequence of \(\mathbb {H}\)-valued random variables is pairwise independent, then it is coordinatewise pairwise NQD.

The following example shows a sequence of \(\mathbb {H}\)-valued coordinatewise pairwise NQD random vectors which is not pairwise independent (see Dung et al. [6, Example 2.3] or see Hien et al. [9, Example 3.1]).

Example 1

Let \(\{Z_n, n \ge 1\}\) be i.i.d. N(0, 1) random variables. Then \(\{Z_n - Z_{n+1}, n \ge 1\}\) are identically distributed N(0, 2) random variables. Let F be the N(0, 2) distribution function and \(\{F_n, n \ge 1\}\) be a sequence of continuous distribution functions. For \(n\ge 1\), put

$$ F_n^{-1}(t) = \inf \{x\, :\, F_n(x) \ge t\}\;\; \text{ and } \;\; Y_n = F_n^{-1}(F(Z_{n}-Z_{n+1})). $$

Li et al. [14] showed that \(\{Y_n, n\ge 1\}\) is a sequence of pairwise NQD random variables and for all \(n \ge 1\), the distribution function of \(Y_n\) is \(F_n\). For each \(n \ge 1, j \in B\), put \(X_n^j = a_jY_n\) where \(a_j\ge 0\) for all \(j\ge 1\) and \(\sum _{ j \in B}a_j^{\alpha /2}<\infty \). We have

$$ \textrm{Cov}(Z_n-Z_{n+1}, Z_{n+1} - Z_{n+2}) = -1. $$

Therefore, \(X_n\) and \(X_{n+1}\) are not independent. Consequently, \(\{X_n, n \ge 1\}\) is not a sequence of pairwise independent random vectors.
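A quick simulation (our own companion check, with an arbitrary seed and sample size) confirms the two distributional facts used above: the differences \(Z_n - Z_{n+1}\) have variance 2, and adjacent differences have covariance \(-1\):

```python
import numpy as np

# Numerical companion to Example 1: successive differences of i.i.d. N(0,1)
# variables are N(0,2), and adjacent differences have covariance -1.
rng = np.random.default_rng(1)
z = rng.standard_normal(1_000_001)
d = z[:-1] - z[1:]                        # d[n] = Z_n - Z_{n+1}, each N(0, 2)

var_d = np.var(d)                         # close to 2
cov_adjacent = np.mean(d[:-1] * d[1:])    # close to -1 (the means are 0)
```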

To prove our main results, we need the following lemmas.

Lemma 2

(Lehmann, [13]) Let X and Y be \(\mathbb {R}\)-valued NQD random variables. Then,

  1. (i)

\(E(XY) \le E(X)\,E(Y)\),

  2. (ii)

    \(P(X> x, Y>y) \le P(X>x) P(Y > y) \quad \forall x, y \in \mathbb {R}\),

  3. (iii)

If f and g are Borel functions, both of which are monotone increasing (or both monotone decreasing), then f(X) and g(Y) are NQD.

Lemma 3

(Dung et al. [6], Hien et al. [8]) Let \(\left\{ {{X_n},n \ge 1} \right\} \) be a sequence of \(\mathbb {H}\)-valued coordinatewise pairwise NQD random variables with mean 0 and finite second moments. Then,

$$ E{\left\| {\sum \limits _{i = 1}^n {{X_i}} } \right\| ^2} \le \sum \limits _{i = 1}^n {E{{\left\| {{X_i}} \right\| }^2}}, $$

and

$$ E\left( {\underset{1 \le k \le n}{\max }\ {{\left\| {\sum \limits _{i = 1}^k {{X_i}} } \right\| }^2}} \right) \le {\log }^2\left( 2n\right) \sum \limits _{i = 1}^n {E{{\left\| {{X_i}} \right\| }^2}}. $$

Lemma 4

Let \(\{X_n, n \ge 1\}\) be a sequence of \(\mathbb {H}\)-valued coordinatewise pairwise NQD random variables such that each coordinate \(X^j\) is nonnegative and \(\{Y_n, n \ge 1\}\) be a sequence of \(\mathbb {R}\)-valued nonnegative pairwise NQD random variables. Assume that \(\{X_n, n \ge 1\}\) and \(\{Y_n, n \ge 1\}\) are independent. Then \(\{X_nY_n, n \ge 1\}\) is also a sequence of \(\mathbb {H}\)-valued coordinatewise pairwise NQD random variables.

Proof

Using the assumptions on the sequences \(\{X_n, n \ge 1\}\) and \(\{Y_n, n \ge 1\}\) together with Lemma 2, we have that for all \( t \ge 0\), \(z \ge 0\), \(k\ne l \) and each \(j\in B\),

$$\begin{aligned} P(X^j_kY_k\le t,X^j_lY_l\le z)&=\int \dots \int I(x_ky_k\le t,x_ly_l\le z)\,dF_{X^j_k,X^j_l,Y_k,Y_l}(x_k,x_l,y_k,y_l)\\&=\int \dots \int I(x_ky_k\le t,x_ly_l\le z)\,dF_{X^j_k,X^j_l}(x_k,x_l)\,dF_{Y_k,Y_l}(y_k,y_l)\\&=\int \dots \int P(x_kY_k\le t,x_lY_l\le z)\,dF_{X^j_k,X^j_l}(x_k,x_l)\\&\le \int \dots \int P(x_kY_k\le t)P(x_lY_l\le z)\,dF_{X^j_k,X^j_l}(x_k,x_l)\\&\le E \Big (F_{Y_k}(t/X^j_k)F_{Y_l}(z/X^j_l) \Big ) \\&\le E(F_{Y_k}(t/X^j_k)) \; E(F_{Y_l}(z/X^j_l))\\&= \int \hspace{-7.0pt}\int I(x_ky_k\le t)\,dF_{Y_k}(y_k)\,dF_{X^j_k}(x_k) \int \hspace{-7.0pt}\int I(x_ly_l\le z)\,dF_{Y_l}(y_l)\,dF_{X^j_l}(x_l)\\&= \int \hspace{-7.0pt}\int I(x_ky_k\le t)\,dF_{X^j_k,Y_k}(x_k,y_k) \int \hspace{-7.0pt}\int I(x_ly_l\le z)\,dF_{X^j_l,Y_l}(x_l,y_l)\\&= P(X^j_kY_k\le t)\,P(X^j_lY_l\le z). \end{aligned}$$

\(\square \)

We recall the concept of regularly and slowly varying functions as follows.

Definition 5

Let \(a \ge 0\). A positive measurable function f defined on \([a,\infty )\) is called regularly varying at infinity with index \(\rho \), written \(f \in \mathcal {RV_{\rho }}\), if for each \(\lambda >0\),

$$ \lim \limits _{x \rightarrow \infty } \dfrac{f(\lambda x)}{f(x)} =\lambda ^{\rho }. $$

In particular, when \(\rho =0\), the function f is called slowly varying at infinity, written \(f\in \mathcal{R}\mathcal{V}\).

Clearly, \(x^r\), \(x^r\log _+(x)\), \(x^r\log _+(\log _+(x))\) and \(x^r\frac{\log _+(x)}{\log _+(\log _+(x))}\) are regularly varying functions at infinity with index r. The following result of Karamata is often applicable.

Lemma 6

(Karamata’s theorem, Cirstea et al. [4]) Let \(f \in \mathcal {RV_{\rho }}\) be locally bounded on \([a, \infty )\). Then

  1. (i)

    For \(\sigma \ge -(\rho +1)\)

    $$\begin{aligned} \lim _{x\rightarrow \infty }\dfrac{x^{\sigma +1}f(x)}{\int _{a}^x t^{\sigma } f(t) dt} = \sigma + \rho +1. \end{aligned}$$
  2. (ii)

    For \(\sigma < -(\rho +1)\) (and for \(\sigma = -(\rho +1)\) if \(\int _{x} ^\infty t^{-(\rho +1)} f(t) dt<\infty \))

    $$\begin{aligned} \lim \limits _{x\rightarrow \infty }\dfrac{x^{\sigma +1}f(x)}{\int _x^{\infty } t^{\sigma } f(t) dt} = -(\sigma + \rho +1). \end{aligned}$$

Karamata's theorem essentially says that integrals of regularly varying functions are again regularly varying; more precisely, one can take the slowly varying function out of the integral. For more details on regularly varying functions, the reader may refer to Bingham [1].
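The defining limit of regular variation can be observed numerically. The sketch below (our own illustration; \(r=1.5\) and \(\lambda =2\) are arbitrary choices) uses \(f(x)=x^r\log _+(x)\), which has index \(r\), and tracks how \(f(\lambda x)/f(x)\) approaches \(\lambda ^r\):

```python
import math

# Illustrative check: f(x) = x^r * log_+(x) is regularly varying with
# index r, so f(lambda * x) / f(x) -> lambda^r as x -> infinity.
r, lam = 1.5, 2.0

def f(x):
    return x ** r * max(1.0, math.log(x))

ratios = [f(lam * x) / f(x) for x in (1e2, 1e4, 1e8)]
errors = [abs(rr - lam ** r) for rr in ratios]   # shrink as x grows
```

The error decays only like \(\log \lambda /\log x\), which is why the convergence is visibly slow; this is typical of slowly varying factors.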

Definition 7

(Bingham [1, Theorem 1.5.13]) Let \(\ell (.)\) be a slowly varying function. Then, there exists a slowly varying function \(\ell ^{\#}(.)\), unique up to asymptotic equivalence, satisfying

$$\begin{aligned} \lim \limits _{x \rightarrow \infty } \ell (x) \ell ^{\#}(x\ell (x)) =1 \quad \text {and} \quad \lim \limits _{x \rightarrow \infty } \ell ^{\#}(x) \ell (x\ell ^{\#}(x)) =1. \end{aligned}$$

The function \(\ell ^{\#}\) is called the de Bruijn conjugate of \(\ell \), and \((\ell , \ell ^{\#})\) is called a (slowly varying) conjugate pair.

For \(a, b >0\), each of \(( \ell (ax), \ell ^{\#}(bx)), \, ( a\ell (x), a^{-1} \ell ^{\#}(x) ), \, ( (\ell (x^a))^{1/a},(\ell ^{\#}(x^a))^{1/a} )\) is a conjugate pair by [1, Proposition 5.1.14]. Bojanic and Seneta [2] proved that if \(\ell (.)\) is a slowly varying function satisfying

$$\begin{aligned} \lim \limits _{x \rightarrow \infty } \Big (\dfrac{\ell (\lambda _0 x)}{\ell (x)} -1 \Big )\log (\ell (x)) =0 \end{aligned}$$
(1)

for some \(\lambda _0>1\), then for every \(a \in \mathbb {R}\),

$$\begin{aligned} \lim \limits _{x \rightarrow \infty } \dfrac{\ell (x\ell ^a (x))}{\ell (x)} =1, \end{aligned}$$

and therefore, we can choose (up to asymptotic equivalence) \(\ell ^{\#}(x)=1/\ell (x)\). In particular, if \(\ell (x)=\log (x)\) then \(\ell ^{\#}(x)=1/ \log (x)\).
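The conjugate relation for \(\ell (x)=\log (x)\) can be checked numerically; the sketch below (our own illustration, using the natural logarithm for concreteness) evaluates \(\ell (x)\,\ell ^{\#}(x\ell (x))\) at increasing arguments:

```python
import math

# Numerical sketch of the de Bruijn conjugate relation for l(x) = log(x),
# whose conjugate is l_sharp(x) = 1/log(x): l(x) * l_sharp(x * l(x)) -> 1.
def l(x):
    return math.log(x)

def l_sharp(x):
    return 1.0 / math.log(x)

values = [l(x) * l_sharp(x * l(x)) for x in (1e3, 1e12, 1e100)]
errors = [abs(v - 1.0) for v in values]   # slow but steady decay to 0
```

The deviation behaves like \(\log \log x/\log x\), so very large arguments are needed before the product is close to 1.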

Lemma 8

(Bingham [1, Section 1.7.7]) If \(f \in \mathcal {RV_{\rho }}\) with \(\rho \ne 0\), then there exists \(g \in \mathcal{R}\mathcal{V}_{1 / \rho }\) such that

$$\begin{aligned} \lim \limits _{x \rightarrow \infty } \dfrac{ f(g(x))}{x}= \lim \limits _{x \rightarrow \infty } \dfrac{ g(f(x))}{x} =1. \end{aligned}$$

The function g is determined uniquely up to asymptotic equivalence. In particular, if \(f(x)= x^{ab}\ell ^a(x^b)\), where \(\ell (x)\) is a slowly varying function and \(a, b >0\), then

$$\begin{aligned} g(x) = x^{\frac{1}{ab}}\big (\ell ^{\#}(x^{\frac{1}{a}})\big )^{\frac{1}{b}}. \end{aligned}$$
(2)

Throughout this paper, by saying that \(\left\{ {{X_n},n \ge 1} \right\} \) is a sequence of \(\mathbb {H}\)-valued coordinatewise pairwise NQD random variables, we mean that the \(\mathbb {H}\)-valued random variables are coordinatewise pairwise NQD with respect to the orthonormal basis \(\{e_j, j\in B\}\). Let \(\{a_n, n \ge 1\}\) and \(\{b_n, n \ge 1\}\) be sequences of positive real numbers. We write \(a_n =O(b_n)\) to mean that \(a_n \le C b_n\) for some \(0<C<\infty \). The pair \((\ell , \ell ^{\#})\) always denotes a slowly varying conjugate pair. The indicator function of a set A is denoted by \(I_A\). The symbol C denotes a generic positive constant whose value may differ at each appearance.

The organization of the paper is as follows. In Section 3, we state our main results on complete convergence for coordinatewise pairwise NQD random variables taking values in a Hilbert space. We present an application of our results to general von Mises statistics in Section 4.

3 Results

We establish complete convergence for randomly weighted sums of coordinatewise pairwise NQD random variables taking values in a Hilbert space with general normalizing sequences.

Theorem 9

Let \(1\le r \le 2\) and \(\{X, X_{n}, n\ge 1\}\) be a sequence of \(\mathbb {H}\)-valued coordinatewise pairwise NQD and identically distributed random variables, \(\{A_{ni}, 1 \le i \le n,n \ge 1\}\) be an array of real-valued rowwise pairwise NQD random variables satisfying

$$\begin{aligned} \sum _{i=1}^{n}E(A_{ni})^2=O(n). \end{aligned}$$
(3)

Assume that \(\{X, X_{n}, n\ge 1\}\) and \(\{A_{ni}, 1 \le i \le n,n \ge 1\}\) are independent. If there exists \(\phi \) satisfying the conditions (A1)-(A3) such that \(\sum _{j \in B} E(\phi ^{-1}(|X^j|))^{r} <\infty \) , then for every \(\varepsilon >0\)

$$\begin{aligned} \sum _{n=1}^{\infty }\dfrac{1}{n^{2-r}} P\left( \left\| \sum _{i=1}^{n}( A_{ni}X_{i}- E(A_{ni}X_{i})) \right\| \ge \varepsilon \phi (n)\right) <\infty . \end{aligned}$$

Proof

Without loss of generality, we may assume that \(A_{ni}\ge 0\) a.s. and \(X^j_{n}\ge 0\) a.s. for each \(j \in B\). Otherwise, we may use \(A_{ni}^+\), \(A_{ni}^-\) instead of \(A_{ni}\) and \(X^+_{i}:= \sum _{j \in B}(X_i^j)^+e_j,X^-_{i}:=\sum _{j \in B}(X_i^j)^-e_j\) instead of \(X_{i}\), respectively, and note that

$$ A_{ni} X_i = A_{ni}^{+}X_i^{+} - A_{ni}^{+}X_i^{-} - A_{ni}^{-}X_i^{+} + A_{ni}^{-}X_i^{-}. $$

For all \(n\ge 1\), \(1\le i\le n\), \(j \in B\), we denote

$$\begin{aligned} Y_{ni}^j&= X_{i}^j I_{( X_{i}^j \le \phi (n))}+\phi (n) I_{(X_{i}^j>\phi (n))},\; Y_{ni} = \sum \limits _{j \in B} Y_{ni}^j e_j, \\ Z_{ni}^{j}&= \phi (n)I_{(X_{i}^j>\phi (n))}, \;\; Z_{ni} = \sum \limits _{j \in B} Z_{ni}^je_j,\\ U_n&= \sum \limits _{i=1}^n ( A_{ni}Y_{ni} - E(A_{ni}Y_{ni})), \;\; V_n = \sum \limits _{i=1}^n ( A_{ni}Z_{ni} - E(A_{ni}Z_{ni})), \\ M_{ni}^{j}&= X_i^j I_{(X_{i}^j >\phi (n))}, M_{ni} = \sum \limits _{j \in B} M_{ni}^je_j, \;\; K_n = \sum \limits _{i=1}^n E(A_{ni}M_{ni}). \end{aligned}$$

For any fixed \(\varepsilon >0\), we see that

$$\begin{aligned}&P\left( \left\| \sum _{i=1}^{n} (A_{ni}X_{i} - E(A_{ni}X_{i}))\right\| \ge \varepsilon \phi (n)\right) \\ \le&\sum _{i=1}^{n} \sum \limits _{j \in B} P\biggl ( |X_i^j|> \phi (n) \biggr ) + P\Big (\ \left\| U_n - V_n - K_n \right\|> \varepsilon \phi (n)\Big ) \\ \le&\sum _{i=1}^{n} \sum \limits _{j \in B} P\biggl ( |X_i^j| > \phi (n) \biggr ) + P\left( \left\| U_n - V_n \right\| + \sum _{i=1}^{n} \left\| E(A_{ni}M_{ni})\right\| \ge \varepsilon \phi (n) \right) . \end{aligned}$$

On the other hand,

$$\begin{aligned} \sum _{n=1}^{\infty }\dfrac{1}{n^{2-r}}\sum _{i=1}^{n} \sum \limits _{j \in B} P\biggl ( |X_i^j| > \phi (n) \biggr )= & {} \sum _{n=1}^{\infty } n^{r-1} \sum \limits _{j \in B}\sum \limits _{k=n}^{\infty } P\biggl ( \phi (k)<|X^j|\le \phi (k+1) \biggr ) \nonumber \\= & {} \sum \limits _{j \in B}\sum _{k=1}^{\infty } P\biggl ( \phi (k)<|X^j|\le \phi (k+1) \biggr ) \sum _{n=1}^{k} n^{r-1} \nonumber \\\le & {} \sum \limits _{j \in B}\sum _{k=1}^{\infty } {k^r} P\biggl (k<\phi ^{-1} (|X^j|)\le k+1 \biggr )\nonumber \\\le & {} \sum \limits _{j \in B} E(\phi ^{-1}(|X^j|))^{r}\ <\infty . \end{aligned}$$
(4)

Now, we prove that

$$\begin{aligned} \frac{\sum _{i=1}^{n}\left\| E(A_{ni}M_{ni}) \right\| }{\phi (n)} \rightarrow 0 \text{ as } n \rightarrow \infty . \end{aligned}$$
(5)

For \(n \ge 1\), by Hölder's inequality and assumption (3),

$$\begin{aligned} \sum \limits _{i=1}^n E |A_{ni}| \le n^{1-\frac{1}{2}} \left( \sum \limits _{i=1}^n E(|A_{ni}|^{2})\right) ^{\frac{1}{2}} \le Cn. \end{aligned}$$
(6)

Therefore,

$$\begin{aligned} \frac{\sum _{i=1}^{n}\left\| E(A_{ni}M_{ni}) \right\| }{\phi (n)}&\le \frac{E\left\| M_{n1}\right\| \sum _{i=1}^{n}E|A_{ni}|}{\phi (n)} \le \frac{Cn\sum _{j \in B} E|M_{n1}^j|}{\phi (n)} \\&\le \frac{Cn}{\phi (n)} \sum _{j \in B}E\left( |X^j|I_{(|X^j| > \phi (n))} \right) . \end{aligned}$$

Using the condition (A2) we have

$$\begin{aligned} \sum \limits _{n=1}^\infty \frac{1}{\phi (n)} \sum \limits _{j \in B} E\left( |X^j|I_{(|X^j| > \phi (n))} \right)&= \sum \limits _{n=1}^\infty \sum \limits _{j \in B} \frac{1}{\phi (n)} \sum \limits _{k=n}^\infty E\left( |X^j| I_{(\phi (k)< |X^j| \le \phi (k+1))}\right) \\&= \sum \limits _{j \in B} \sum \limits _{k=1}^\infty E\left( |X^j| I_{(\phi (k)< |X^j| \le \phi (k+1))} \right) \sum \limits _{n=1}^k \frac{1}{\phi (n)}\\&\le C\sum \limits _{j \in B} \sum \limits _{k=1}^\infty E\left( \phi (k) I_{(\phi (k)< |X^j| \le \phi (k+1))} \right) \sum \limits _{n=1}^k \frac{n^{r-1}}{\phi (n)}\\&\le C\sum \limits _{j \in B} \sum \limits _{k=1}^\infty E \left( k^r I_{(\phi (k)< |X^j| \le \phi (k+1))}\right) \\&\le C\sum \limits _{j \in B} E \left( \phi ^{-1}(|X^j|)\right) ^r < \infty . \end{aligned}$$

By Kronecker’s Lemma, we obtain

$$ \frac{Cn}{\phi (n)} \sum \limits _{j \in B}E|X^j|I_{(|X^j|> \phi (n))} \le \frac{C}{\phi (n)} \sum \limits _{k=1}^n\sum \limits _{j\in B} E\left( |X^j| I_{(|X^j| > \phi (k))}\right) \rightarrow 0 \text{ as } n \rightarrow \infty . $$
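As a brief aside, Kronecker's lemma used here can be illustrated numerically with toy sequences (our own sketch; \(x_k=(-1)^k k^{0.4}\) and \(b_k=k\) are arbitrary choices, unrelated to the proof): if \(b_n\uparrow \infty \) and \(\sum _k x_k/b_k\) converges, then \(b_n^{-1}\sum _{k\le n} x_k \rightarrow 0\).

```python
# Generic numeric illustration of Kronecker's lemma with b_k = k and
# x_k = (-1)^k * k^0.4: sum x_k / b_k converges (alternating series with
# terms k^(-0.6) -> 0), so (1/b_n) * sum_{k<=n} x_k must tend to 0.
def scaled_partial_sum(n):
    s = sum((-1) ** k * k ** 0.4 for k in range(1, n + 1))
    return abs(s) / n

vals = [scaled_partial_sum(n) for n in (100, 10_000, 1_000_000)]
```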

Hence, we need only to prove that

$$ \sum _{n=1}^{\infty }\dfrac{1}{n^{2-r}} P\left( \left\| U_n - V_n \right\| \ge \varepsilon \phi (n) \right) < \infty . $$

It follows from Lemmas 2 and 4 that, for each n, \(\{A_{ni}Y_{ni}-E(A_{ni}Y_{ni}), 1 \le i \le n\}\) and \(\{ A_{ni}Z_{ni}-E(A_{ni}Z_{ni}), 1 \le i \le n\}\) are still coordinatewise pairwise NQD. Therefore,

$$\begin{aligned}{} & {} \sum _{n=1}^{\infty }\dfrac{1}{n^{2-r}} P\left( \left\| U_n - V_n \right\| \ge \varepsilon \phi (n) \right) \\\le & {} \sum _{n=1}^{\infty }\dfrac{1}{n^{2-r}} P\left( \left\| V_n \right\| \ge \frac{\varepsilon \phi (n)}{2} \right) + \sum _{n=1}^{\infty }\dfrac{1}{n^{2-r}} P\left( \left\| U_n\right\| \ge \frac{\varepsilon \phi (n)}{2} \right) := I_1 + I_2. \end{aligned}$$

For \(I_1\), by Markov's inequality, Lemma 3 and condition (3), we have

$$\begin{aligned} I_1&\le \frac{4}{\varepsilon ^2}\sum _{n=1}^{\infty }\frac{1}{n^{2-r}\phi ^2(n)}E(\left\| V_{n}\right\| ^2) \le \frac{C}{\varepsilon ^2}\sum _{n=1}^{\infty }\frac{1}{n^{2-r}\phi ^2(n)}\sum _{i=1}^{n}\sum \limits _{j \in B}E|A_{ni}Z^j_{ni}|^2\\&\le \frac{C}{\varepsilon ^2}\sum _{n=1}^{\infty }\frac{1}{n^{1-r}}\sum \limits _{j \in B}P(|X^j|>\phi (n)) \le \frac{C}{\varepsilon ^2}\sum \limits _{j \in B}E(\phi ^{-1}(|X^j|))^r<\infty . \end{aligned}$$

Next, we shall show that \(I_2 < \infty \). Again applying Markov's inequality, Lemma 3 and (3), we obtain

$$\begin{aligned} I_2&\le \sum _{n=1}^{\infty } \dfrac{n^{r-2}}{\varepsilon ^2 \phi ^2(n)} E \left\| \sum _{i=1}^{n} (A_{ni}Y_{ni} -EA_{ni}Y_{ni}) \right\| ^2\\&\le \sum _{n=1}^{\infty } \dfrac{n^{r-2}}{\varepsilon ^2 \phi ^2(n)} \sum _{i=1}^{n} \sum \limits _{j \in B} E|A_{ni}Y_{ni}^j|^2\\&\le \sum _{n=1}^{\infty } \dfrac{n^{r-2}}{\varepsilon ^2 \phi ^2(n)} \sum _{i=1}^{n} \sum \limits _{j \in B} E \bigg (A_{ni}^2\phi ^2(n)I_{(|X^j|> \phi (n))} + A_{ni}^2 (X^j)^2 I_{(|X^j|\le \phi (n))} \bigg ) \\&= \sum _{n=1}^{\infty } \dfrac{n^{r-2}}{\varepsilon ^2} \sum _{i=1}^{n} \sum \limits _{j \in B} E(A_{ni}^2I_{(|X^j|> \phi (n))}) + \sum _{n=1}^{\infty } \dfrac{n^{r-2}}{\varepsilon ^2 \phi ^2(n)} \sum _{i=1}^{n} \sum \limits _{j \in B} E(A_{ni}^2 (X^j)^2 I_{(|X^j|\le \phi (n))}) \\&:= I_3+I_4. \end{aligned}$$

Using (3) and (4), we obtain

$$\begin{aligned} I_3&= \sum _{n=1}^{\infty } \dfrac{n^{r-2}}{\varepsilon ^2}\sum _{i=1}^{n} \sum \limits _{j \in B} E(A_{ni}^2) P(|X^j|> \phi (n)) \\&= \sum _{n=1}^{\infty } \dfrac{n^{r-2}}{\varepsilon ^2} \sum \limits _{j \in B} P(|X^j|> \phi (n)) \sum _{i=1}^{n} E(A_{ni}^2) \\&\le C \sum _{n=1}^{\infty } n^{r-1} \sum \limits _{j \in B}P(|X^j|> \phi (n))\\&\le C \sum \limits _{j \in B} E\left( \phi ^{-1}(|X^j|)^{r} \right) <\infty . \end{aligned}$$

Next, we shall show that \(I_4<\infty \). By (3) and (A3), we get

$$\begin{aligned} I_4&= \sum _{n=1}^{\infty } \dfrac{n^{r-2}}{\varepsilon ^2 \phi ^2(n)} \sum _{i=1}^{n} \sum \limits _{j \in B} E(A_{ni}^2) E((X^j)^2 I_{(|X^j|\le \phi (n))}) \\&\le C \sum \limits _{j \in B} E\left( (X^j)^2 \sum _{n=1}^{\infty } \dfrac{n^{r-1}}{ \phi ^2(n)} I_{(|X^j|\le \phi (n))}\right) \\&\le C \sum \limits _{j \in B} E\left( \phi ^2(\phi ^{-1}(|X^j|)) \sum \limits _{n=\phi ^{-1}(|X^j|)}^{\infty } \dfrac{n^{r-1}}{ \phi ^2(n)} \right) \\&\le C \sum \limits _{j \in B} E(\phi ^{-1}(|X^j|)^{r})\ <\infty . \end{aligned}$$

The proof of Theorem 9 is complete. \(\square \)

We present the following example to illustrate Theorem 9. It concerns random variables with a two-sided Pareto distribution.

Example 2

Let \(l_2\) denote the real separable Hilbert space of all square summable real sequences with the inner product

$$\begin{aligned} \langle x,y\rangle =\sum _{i=1}^\infty x_iy_i, \end{aligned}$$

for \(x=(x_1,x_2,\dots )\in l_2\) and \(y=(y_1,y_2,\dots ) \in l_2\). Let \(\left\{ Y_{nk}, n\ge 1, k\ge 1\right\} \) be an array of real-valued identically distributed random variables with the common density function

$$\begin{aligned} f(x)={\left\{ \begin{array}{ll} \dfrac{\alpha }{2|x|^{\alpha +1}}&{} \ \text{ for } \quad |x|>1,\\ 0 &{} \ \text{ otherwise, } \end{array}\right. } \end{aligned}$$
(7)

where \(0< \alpha < 2\). Furthermore, assume that for each \(k\ge 1\), \(\left\{ Y_{nk}, n\ge 1\right\} \) is a sequence of pairwise NQD random variables constructed as in Example 1, where each \(F_n\) is the distribution function corresponding to the density (7), and that for each \(n\ge 1\), \(\left\{ Y_{nk}, k\ge 1\right\} \) is a sequence of independent random variables. Put \(X_{n}^{j}=a_jY_{nj}\) for \(n\ge 1\) and \(j\ge 1\), where \(a_j\ge 0\) for all \(j\ge 1\) and \(\sum _{j=1}^\infty a_j^{\alpha /2}<\infty \). We shall prove that \(\left\{ X_n=(X_n^{1},X_n^{2},\dots ),n\ge 1\right\} \) is a sequence of \(l_2\)-valued random variables, i.e., for each \(n\ge 1\),

$$\begin{aligned} \sum \limits _{j= 1}^\infty (X_n^{j})^2=\sum \limits _{j= 1}^\infty a_j^2Y_{nj}^2<\infty \ \ \text{ a.s. }. \end{aligned}$$
(8)

Putting \(\xi _{nj}=a^2_jY^2_{nj}I_{(|a_jY_{nj}|< 1)}\), we have

$$\begin{aligned} \sum _{j=1}^\infty P(a_j^2Y_{nj}^2\ne \xi _{nj})&= \sum _{j=1}^\infty P(|Y_{nj}|>a_j^{-1}) \le C \sum _{j=1}^\infty a_j^{\alpha /2}<\infty . \end{aligned}$$

From the Borel-Cantelli lemma, we have \(P(a_j^2Y_{nj}^2\ne \xi _{nj}, i.o.)=0\). Hence, to prove (8), it is enough to show that

$$\begin{aligned} \sum _{j=1}^\infty \xi _{nj}<\infty \ \ \text{ a.s. }. \end{aligned}$$

We have that

$$\begin{aligned} \sum _{j=1}^\infty E|\xi _{nj}-E\xi _{nj}|^2\le \sum _{j=1}^\infty E\xi _{nj}^2&=\sum _{j=1}^\infty a^4_jE(Y^4_{nj}I_{(|a_jY_{nj}|< 1)}) \le C\sum _{j=1}^\infty a_j^{\alpha /2}<\infty . \end{aligned}$$

It follows from the Khintchine-Kolmogorov convergence theorem that \(\sum _{j=1}^\infty (\xi _{nj}-E\xi _{nj})\) converges a.s. Moreover,

$$ \sum _{j=1}^\infty E\xi _{nj}=\sum _{j=1}^\infty a^2_jE(Y^2_{nj}I_{(|a_jY_{nj}|< 1)})\le C\sum _{j=1}^\infty a_j^{\alpha /2}<\infty . $$

Thus, \(\sum _{j=1}^\infty \xi _{nj}<\infty \) a.s. We consider the standard orthonormal basis \(\{e_n, n\ge 1\}\) of \(l_2\), where \(e_n\) denotes the element of \(l_2\) having 1 in its nth position and 0 elsewhere. Since, for each \(j\ge 1\), \(\left\{ Y_{nj},n\ge 1\right\} \) is a sequence of pairwise NQD random variables with mean 0, the sequence \(\left\{ X_n,n\ge 1\right\} \) is a sequence of \(l_2\)-valued coordinatewise pairwise NQD random variables with mean 0.

When \(1<\alpha <2\), let \(\phi (x) = x^{1/p}\), where \(1 \le rp < \alpha \); one can easily check that \(\phi \) satisfies the assumptions (A1), (A2), and (A3). We get

$$\begin{aligned} \sum _{j=1}^{\infty } E(\phi ^{-1}(|X^j|))^r = \sum _{j=1}^{\infty } E (|X^j|)^{rp} = E|Y_{11}|^{rp}\sum \limits _{j=1}^\infty a_j^{rp}< \infty . \end{aligned}$$

Thus, for an array of rowwise pairwise NQD random variables \(\{A_{ni}, 1 \le i \le n,n \ge 1\}\) satisfying condition (3), for every \(\varepsilon >0\) we get

$$\begin{aligned} \sum _{n=1}^{\infty }\dfrac{1}{n^{2-r}} P\left( \left\| \sum _{i=1}^{n} A_{ni} X_{i} \right\| \ge \varepsilon n^{1/p} \right) <\infty . \end{aligned}$$

However, for \(rp \ge \alpha \), assume further that for each \(j\ge 1\), \(\left\{ Y_{nj}, n\ge 1\right\} \) is a sequence of independent random variables and that \(a_1=1\), \(a_j=0\) for all \(j\ge 2\). Applying Gut [7, Proposition 6.1.4], we have that for \(0<\delta <1\) and n large,

$$\begin{aligned} 2P\left( \left\| \sum _{k=1}^{n}X_k \right\|>x\right) = 2P\left( \left| \sum _{k=1}^{n}Y_{k1} \right|>x\right) \ge (1-\delta )n\dfrac{1}{(2x)^\alpha }, \ \ x>0. \end{aligned}$$

This implies that

$$\begin{aligned} \sum \limits _{n = 1}^\infty \frac{1}{n^{2-r}} {P\left( {\left\| {\sum \limits _{k= 1}^{{n}} {{X_{k}}} } \right\| >n^{1/p} \varepsilon } \right) }\ge \dfrac{1-\delta }{2(2\varepsilon )^\alpha }\sum \limits _{n = 1}^\infty \dfrac{1}{n^{1-(rp-\alpha )/p}} =\infty . \end{aligned}$$
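The two-sided Pareto density (7) is easy to sample, which makes the moment threshold in this example visible numerically. The sketch below (our own illustration; \(\alpha =1.5\), the seed, and the probed values are arbitrary choices) uses inverse-CDF sampling and checks the exact tail \(P(|Y|>t)=t^{-\alpha }\) for \(t\ge 1\) and the moment formula \(E|Y|^q=\alpha /(\alpha -q)\) for \(q<\alpha \):

```python
import numpy as np

# Sampling sketch for the two-sided Pareto density (7) with alpha = 1.5:
# |Y| has tail P(|Y| > t) = t^(-alpha) for t >= 1, so E|Y|^q is finite
# exactly when q < alpha, with E|Y|^q = alpha / (alpha - q).
alpha = 1.5
rng = np.random.default_rng(2)
n = 1_000_000
magnitude = (1.0 - rng.random(n)) ** (-1.0 / alpha)  # inverse-CDF sample of |Y|
sign = rng.choice([-1.0, 1.0], size=n)
y = sign * magnitude

emp_tail = np.mean(np.abs(y) > 4.0)     # theory: 4^(-1.5) = 0.125
moment_q = np.mean(np.abs(y) ** 0.7)    # theory: 1.5 / (1.5 - 0.7) = 1.875
```

For \(q\ge \alpha \) the corresponding empirical moment would not stabilize at any finite value, mirroring the divergence established above for \(rp\ge \alpha \).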

The following theorem can be viewed as a Marcinkiewicz-Zygmund type strong law of large numbers for coordinatewise pairwise NQD random vectors in Hilbert spaces.

Theorem 10

Let \(1\le r \le 2\) and \(\{X, X_{n}, n\ge 1\}\) be a sequence of \(\mathbb {H}\)-valued coordinatewise pairwise NQD and identically distributed random variables, \(\{A_{ni}, 1 \le i \le n,n \ge 1\}\) be an array of real-valued rowwise pairwise NQD random variables satisfying (3). Assume that \(\{X, X_{n}, n\ge 1\}\) and \(\{A_{ni}, 1 \le i \le n,n \ge 1\}\) are independent. If there exists \(\phi \) satisfying (A1), (A2), and (A4) such that

$$ \sum \limits _{j \in B} E \big ((\phi ^{-1}(|X^j|))^{r} \log _+^2(\phi ^{-1}(|X^j|) ) \big ) <\infty , $$

then for every \(\varepsilon >0\),

$$\begin{aligned} \sum _{n=1}^{\infty }\dfrac{1}{n^{2-r}} P\left( \max \limits _{1 \le k \le n} \left\| \sum _{i=1}^{k} (A_{ni}X_{i} - E(A_{ni}X_{i})) \right\| \ge \varepsilon \phi (n)\right) <\infty . \end{aligned}$$
(9)

In particular, when \(r=2\) we have

$$\begin{aligned} \frac{\max \limits _{1\le k \le n}\left\| \sum \limits _{i=1}^k (A_{ni}X_{i} - E(A_{ni}X_{i})) \right\| }{\phi (n)} \rightarrow 0 \ \ \text{ completely } \text{ as } n\rightarrow \infty . \end{aligned}$$

Proof

Similar to the proof of Theorem 9 above, for any fixed \(\varepsilon >0\), we see that

$$\begin{aligned}&P\left( \max \limits _{1 \le k \le n} \left\| \sum _{i=1}^{k} (A_{ni}X_{i} - E(A_{ni}X_{i})) \right\| \ge \varepsilon \phi (n)\right) \\ \le&\sum _{i=1}^{n} \sum \limits _{j \in B} P\biggl ( |X_i^j|> \phi (n) \biggr ) + P\left( \max \limits _{1 \le k \le n} \left\| U_k - V_k - K_k \right\| \ge \varepsilon \phi (n)\right) \\ \le&\sum _{i=1}^{n} \sum \limits _{j \in B} P\biggl ( |X_i^j| > \phi (n) \biggr ) + P\left( \max \limits _{1 \le k \le n} \sum _{i=1}^{k}\left\| E(A_{ni}M_{ni}) \right\| \ge \frac{\varepsilon \phi (n)}{2}\right) \\&+ P\left( \max \limits _{1 \le k \le n} \left\| U_{k} - V_k \right\| \ge \frac{\varepsilon \phi (n)}{2}\right) . \end{aligned}$$

By (4) we have

$$ \sum _{n=1}^{\infty }\dfrac{1}{n^{2-r}}\sum _{i=1}^{n} \sum \limits _{j \in B} P\biggl ( |X_i^j|> \phi (n) \biggr ) = \sum _{n=1}^{\infty }\dfrac{1}{n^{1-r}} \sum \limits _{j \in B} P\biggl ( |X^j| > \phi (n) \biggr ) < \infty . $$

Using the result (5) in the proof of Theorem 9, we have

$$\begin{aligned} \frac{\max \limits _{1 \le k \le n} \sum _{i=1}^{k}\left\| E(A_{ni}M_{ni}) \right\| }{\phi (n)} \le \dfrac{\sum _{i=1}^{n}\left\| E(A_{ni}M_{ni}) \right\| }{\phi (n)} \rightarrow 0 \text{ as } n \rightarrow \infty . \end{aligned}$$

Hence, we need only to prove that

$$\begin{aligned} \sum _{n=1}^{\infty }\dfrac{1}{n^{2-r}} P\left( \max \limits _{1 \le k \le n} \left\| U_{k} -V_k\right\| \ge \dfrac{\varepsilon \phi (n)}{2} \right) <\infty . \end{aligned}$$

It follows from Lemmas 2 and 4 that, for each n, \(\{A_{ni}Y_{ni}-E(A_{ni}Y_{ni}), 1 \le i \le n\}\) and \(\{ A_{ni}Z_{ni}-E(A_{ni}Z_{ni}), 1 \le i \le n\}\) are still coordinatewise pairwise NQD. Clearly,

$$\begin{aligned} P \left( \max \limits _{1 \le k \le n} \left\| U_{k} -V_k \right\| \ge \dfrac{\varepsilon \phi (n)}{2}\right) \le P\left( \max \limits _{1 \le k \le n} \left\| V_{k}\right\| \ge \dfrac{\varepsilon \phi (n)}{4} \right) + P\left( \max \limits _{1 \le k \le n} \left\| U_{k}\right\| \ge \dfrac{\varepsilon \phi (n)}{4} \right) . \end{aligned}$$

By Markov’s inequality, (6) and (4) we obtain

$$\begin{aligned} \sum _{n=1}^{\infty }\dfrac{1}{n^{2-r}} P\left( \max \limits _{1 \le k \le n} \left\| V_{k}\right\| \ge \dfrac{\varepsilon \phi (n)}{4} \right)\le & {} \sum _{n=1}^{\infty }\dfrac{4}{\varepsilon n^{2-r}\phi (n)} E \left( \max \limits _{1 \le k\le n} \left\| V_k\right\| \right) \\\le & {} \sum _{n=1}^{\infty } \frac{4}{\varepsilon n^{2-r}\phi (n)} E \left( \max \limits _{1 \le k\le n} \sum \limits _{i=1}^k \left\| A_{ni}Z_{ni} - E(A_{ni}Z_{ni})\right\| \right) \\\le & {} \sum _{n=1}^{\infty }\frac{C}{ n^{2-r} \phi (n)}E \left( \sum \limits _{i=1}^n \sum \limits _{j\in B}|A_{ni}Z^j_{ni}|\right) \\= & {} \sum _{n=1}^{\infty }\frac{C}{n^{2-r}\phi (n)}\sum \limits _{j\in B}E(|Z^j_{n1}|)E\left( \sum \limits _{i=1}^n |A_{ni}|\right) \\\le & {} \sum _{n=1}^{\infty }\frac{C}{n^{1-r}}\sum \limits _{j\in B}P(|X^j|> \phi (n)) <\infty . \end{aligned}$$

Using Markov’s inequality and Lemma 3, we get

$$\begin{aligned}{} & {} \sum _{n=1}^{\infty }\dfrac{1}{n^{2-r}} P\left( \max \limits _{1 \le k \le n} \left\| U_{k} \right\| \ge \dfrac{\varepsilon \phi (n)}{4}\right) \\\le & {} \sum _{n=1}^{\infty } \dfrac{C}{n^{2-r}\varepsilon ^2 \phi ^2(n)} E \left[ \max \limits _{1 \le k \le n} \left\| U_{k}\right\| ^2\right] \\\le & {} \sum _{n=1}^{\infty } \dfrac{C\log ^2 (n)}{n^{2-r} \phi ^2(n)} \sum _{i=1}^{n} \sum \limits _{j \in B} E|A_{ni}Y_{ni}^j|^2\\\le & {} \sum _{n=1}^{\infty } \dfrac{C \log ^2(n)}{n^{2-r} \phi ^2(n)} \sum _{i=1}^{n} \sum \limits _{j \in B} \left( E[A_{ni}^2\phi ^2(n)I_{(|X^j|> \phi (n))} ] + E[A_{ni}^2 (X^j)^2 I_{(|X^j|\le \phi (n))}] \right) \\= & {} \sum _{n=1}^{\infty } \dfrac{C\log ^2(n)}{n^{2-r}} \sum _{i=1}^{n} \sum \limits _{j \in B} E \left( A_{ni}^2 I_{(|X^j|> \phi (n))}\right) + \sum _{n=1}^{\infty } \dfrac{C \log ^2(n)}{n^{2-r} \phi ^2(n)} \sum _{i=1}^{n} \sum \limits _{j \in B} E \left( A_{ni}^2 (X^j)^2 I_{(|X^j| \le \phi (n))}\right) \\:= & {} I_1+I_2. \end{aligned}$$

For \(I_1\), by the assumption \(\sum _{j \in B} E\big ((\phi ^{-1}(|X^j|))^{r} \log _+^2(\phi ^{-1}(|X^j|) ) \big ) < \infty \), Markov’s inequality and (3), we obtain

$$\begin{aligned} I_1&= \sum _{n=1}^{\infty } \dfrac{C \log ^2(n)}{n^{2-r} } \sum \limits _{j \in B} P(|X^j|> \phi (n)) \sum _{i=1}^{n} E(A_{ni}^2) \\&\le C\sum \limits _{j \in B} \sum _{n=1}^{\infty } \dfrac{\log ^2(n)}{n^{1-r}} \sum _{k=n}^{\infty } P\biggl ( \phi (k)<|X^j|\le \phi (k+1) \biggr )\\&\le C \sum \limits _{j \in B} \sum _{k=1}^{\infty } P\biggl ( \phi (k)<|X^j|\le \phi (k+1) \biggr ) \sum _{n=1}^{k} \frac{\log _+^2(n)}{n^{1-r}} \\&\le C\sum \limits _{j \in B} \sum _{k=1}^{\infty } k^r\log _+^2(k) P\biggl (k<\phi ^{-1} (|X^j|)\le k+1 \biggr )\\&\le C \sum \limits _{j \in B} E[(\phi ^{-1}(|X^j|))^r \log _+^2(\phi ^{-1}(|X^j|) ) ] <\infty . \end{aligned}$$

For \(I_2\), by Markov’s inequality, (3) and the assumption (A4), we get

$$\begin{aligned} I_2= & {} \sum _{n=1}^{\infty } \dfrac{ C \log ^2(n)}{n^{2-r} \phi ^2(n)}\sum \limits _{j \in B} E\left( (X^j)^2 I_{(|X^j|\le \phi (n))}\right) \sum _{i=1}^{n} E(A_{ni}^2) \\\le & {} C \sum \limits _{j \in B} E\left( (X^j)^2 \sum _{n=1}^{\infty } \dfrac{n^{r-1}\log _+^2(n)}{ \phi ^2(n)} I_{(|X^j|\le \phi (n))}\right) \\\le & {} C \sum \limits _{j \in B} E\left( \phi ^2(\phi ^{-1}(|X^j|)) \sum \limits _{n=\phi ^{-1}(|X^j|)}^{\infty } \dfrac{n^{r-1}\log _+^2(n)}{ \phi ^2(n)} \right) \\\le & {} C\sum \limits _{j \in B} E\left( (\phi ^{-1}(|X^j|))^r \log _+^2(\phi ^{-1}(|X^j|) ) \right) <\infty . \end{aligned}$$

The proof of Theorem 10 is complete. \(\square \)

By letting \(r = 1\) in Theorem 10, we get the Marcinkiewicz-Zygmund type strong law of large numbers for coordinatewise pairwise NQD random vectors in the following corollary. The proof is standard.

Corollary 11

Let \(\{X, X_{n}, n\ge 1\}\) be a sequence of \(\mathbb {H}\)-valued coordinatewise pairwise NQD and identically distributed random variables with mean 0, and let \(\{B_{n}, n \ge 1\}\) be a sequence of real-valued pairwise NQD random variables satisfying

$$\begin{aligned} \sum _{k=1}^{n}E(B_{k}^2)=O(n). \end{aligned}$$

Assume that \(\{X, X_{n}, n\ge 1\}\) and \(\{B_{n}, n \ge 1\}\) are independent. If there exists a function \(\phi \) satisfying (A1), (A2), (A4) such that

$$ \sum \limits _{j \in B} E \big ( \phi ^{-1}(|X^j|)\log _+^2(\phi ^{-1}(|X^j|)) \big ) <\infty , $$

then

$$\begin{aligned} \frac{1}{\phi (n)} \sum \limits _{i=1}^n B_{i}X_i \rightarrow 0 \text{ a.s. } \text{ as } n \rightarrow \infty . \end{aligned}$$

Proof

For each \(n \ge 1\), define the array \(\{A_{ni}, 1\le i \le n\}\) by \(A_{ni} = B_i\) for \(1 \le i \le n\). For any \(\varepsilon >0\), applying Theorem 10 with \(r = 1\), we have

$$\begin{aligned} \infty&> \sum \limits _{n=1}^{\infty } \dfrac{1}{n}P \left( \max \limits _{1 \le k \le n} \Big \Vert \sum \limits _{i=1}^{k}B_iX_i \Big \Vert> \varepsilon \phi (n) \right) \\&= \sum \limits _{l=0}^{\infty } \sum \limits _{n=2^l}^{2^{l+1}-1} \dfrac{1}{n}P \left( \max \limits _{1 \le k \le n} \Big \Vert \sum \limits _{i=1}^{k}B_i X_i \Big \Vert> \varepsilon \phi (n) \right) \\&\ge \sum \limits _{l=0}^{\infty } \sum \limits _{n=2^l}^{2^{l+1}-1} \dfrac{1}{2^{l+1}}P \left( \max \limits _{1 \le k \le 2^l} \Big \Vert \sum \limits _{i=1}^{k} B_i X_i \Big \Vert> \varepsilon \phi (2^l) \right) \\&= \dfrac{1}{2} \sum \limits _{l=0}^{\infty } P \left( \max \limits _{1 \le k \le 2^l} \Big \Vert \sum \limits _{i=1}^{k} B_i X_i \Big \Vert > \varepsilon \phi (2^l) \right) . \end{aligned}$$

Using the Borel-Cantelli lemma and (9), we get

$$\begin{aligned} \dfrac{1}{\phi (2^m)} \max \limits _{1 \le k \le 2^m} \Big \Vert \sum \limits _{i=1}^{k}B_iX_i \Big \Vert \rightarrow 0 \quad \text {a.s. as}\; \; m \rightarrow \infty . \end{aligned}$$
(10)

For \(2^m \le n < 2^{m+1}\), since \(\phi \) is increasing, we have

$$\begin{aligned} 0 \le \dfrac{1}{\phi (n)} \Big \Vert \sum \limits _{i=1}^n B_iX_i \Big \Vert \le \dfrac{1}{\phi (2^m)} \max \limits _{1 \le k \le n} \Big \Vert \sum \limits _{i=1}^{k}B_i X_i \Big \Vert . \end{aligned}$$
(11)

The conclusion of the corollary follows from (10) and (11). \(\square \)

From (2), it is easy to check that if we choose \(\phi (x) = x^{1/p}\ell ^{1/p}(x)\) (so that \(\phi ^{-1}(x)= x^p\ell ^{\#}(x^p)\)), where \(1\le rp <2\), then the conditions (A1)–(A4) hold. We note that in the case \(rp=2\), we can choose \(r=2\), \(p=1\), \(\ell (x)=\log _+^2(x)\); the conditions (A3) and (A4) still hold. We obtain the following corollary.

Corollary 12

Let \(1\le r\le 2\), \(1 \le rp < 2\) and \(\{X, X_{n}, n\ge 1\}\) be a sequence of \(\mathbb {H}\)-valued coordinatewise pairwise NQD and identically distributed random variables with mean 0, \(\{B_{n}, n \ge 1\}\) be a sequence of real-valued pairwise NQD random variables satisfying

$$\begin{aligned} \sum _{k=1}^{n}E(B_{k}^2)=O(n). \end{aligned}$$

Assume that \(\{ X_{n}, n\ge 1\}\) and \(\{B_{n}, n \ge 1\}\) are independent. If

$$ \sum \limits _{j \in B} E \big ( \left( |X^j|^{p} \ell ^\#(|X^j|^{p})\right) ^r \log _+^2\left( |X^j|^{p}\ell ^\#(|X^j|^{p}) \right) \big ) <\infty $$

then

$$\begin{aligned} \frac{1}{n^{1/p} \ell ^{1/p}(n)} \sum \limits _{i=1}^n B_{i}X_i \rightarrow 0 \text{ a.s. } \text{ as } n \rightarrow \infty . \end{aligned}$$
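As a quick numerical sanity check (a minimal sketch under simplifying assumptions, not part of the proof), one may simulate the special case \(p = 1\), \(r=1\), \(\ell \equiv 1\) of Corollary 12 with real-valued data, i.e., a one-dimensional Hilbert space, and independent weights, which satisfy the pairwise NQD inequality with equality:

```python
import random

# Sketch of Corollary 12 with p = 1, r = 1, ell == 1, so the normalizing
# sequence is n^{1/p} * ell^{1/p}(n) = n.  The data X_i are bounded and
# mean-zero; the weights B_i are independent (hence trivially pairwise
# NQD) with E(B_k^2) = 1, so sum_{k<=n} E(B_k^2) = n = O(n).
random.seed(0)

n = 200_000
acc = 0.0
for _ in range(n):
    x = random.choice([-1.0, 1.0])   # mean-zero, bounded data X_i
    b = random.gauss(0.0, 1.0)       # weight B_i with E(B_i^2) = 1
    acc += b * x

# The normalized weighted sum should be small for large n.
print(abs(acc) / n)
```

The printed value shrinks roughly like \(n^{-1/2}\) as \(n\) grows, consistent with the almost sure convergence in the corollary.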

4 Application to General von Mises Statistics

Statistics of Cramér-von Mises type are an important tool for testing statistical hypotheses. Next, we consider general bivariate and degenerate von Mises statistics (V-statistics). Let \(h: \mathbb {R}^2\rightarrow \mathbb {R}\) be a symmetric, measurable function. We call

$$\begin{aligned} V_n=\sum _{i,j=1}^nA_{ni}A_{nj}h(X_i,X_j) \end{aligned}$$
(12)

the V-statistic with kernel \(h\), where \(\{A_{ni}, 1\le i \le n, n \ge 1\}\) is an array of real-valued rowwise pairwise NQD random variables. The kernel and the related V-statistic are called degenerate if \(E(h(x,X_i))=0\) for all \(x\in \mathbb {R}\). Furthermore, we assume that \(h\) is Lipschitz-continuous and positive definite, i.e.,

$$\sum _{i,j=1}^mc_ic_jh(x_i,x_j)\ge 0$$

for all \(m \ge 1\) and all \(c_1,\dots ,c_m, x_1,\dots ,x_m\in \mathbb {R}\). Dung and Son [17, 18] gave the almost sure convergence of degenerate von Mises statistics for sequences of independent and pairwise independent real-valued data with weights given by a sequence of real numbers. Here, we consider the V-statistic defined in model (12), where the weights form an array of real-valued rowwise pairwise NQD random variables.
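For instance (a simple example, not used in the sequel), the product kernel \(h(x,y)=xy\) is symmetric and positive definite, since for all \(m \ge 1\) and \(c_1,\dots ,c_m, x_1,\dots ,x_m\in \mathbb {R}\),

$$ \sum _{i,j=1}^m c_ic_jx_ix_j=\left( \sum _{i=1}^m c_ix_i\right) ^2\ge 0, $$

and it is degenerate whenever \(E(X_i)=0\), because \(E(h(x,X_i))=x\,E(X_i)=0\) for all \(x\in \mathbb {R}\).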

In this section, we use methods for random variables taking values in Hilbert spaces to obtain conditions for the complete convergence of degenerate von Mises statistics with pairwise independent real-valued data.

Theorem 13

Let \(\left\{ X,X_{n}, n \ge 1\right\} \) be a sequence of real-valued pairwise independent and identically distributed random variables with mean 0, and let \(\{A_{ni}, 1\le i \le n, n \ge 1\}\) be an array of real-valued rowwise pairwise NQD random variables satisfying (3). Assume that \(\{ X_{n}, n\ge 1\}\) and \(\{A_{ni},1 \le i \le n, n \ge 1\}\) are independent. Let \(h\) be a degenerate, Lipschitz-continuous and positive definite kernel function such that

$$\begin{aligned} E\left| h \left( X,X \right) \right| < \infty . \end{aligned}$$

Then

$$\begin{aligned} \frac{1}{n^2\log _+^4(n)} \max \limits _{1\le k \le n}V_k \rightarrow 0 \text{ completely } \text{ as } n \rightarrow \infty . \end{aligned}$$

Proof

By Sun’s version of Mercer’s theorem in [19] (see also [5]), under the above conditions, \(h\) admits the representation

$$ h(x,y)=\sum _{l=1}^\infty \lambda _l\phi _l(x)\phi _l(y) $$

for orthonormal eigenfunctions \((\phi _l)_{l\in \mathbb {N}}\) with the following properties:

  • \(E\phi _l(X_n)=0\) and \(E\phi _{l}^2(X_n)=1\) for all \(l\in \mathbb {N},\)

  • \(\lambda _l\ge 0\) for all \(l\in \mathbb {N}\) and \(\sum _{l=1}^\infty \lambda _l<\infty .\)

We can treat such V-statistics in the setting of Hilbert spaces. Let \(\mathbb {H}\) be the Hilbert space of real-valued sequences \(y=(y_l)_{l\in \mathbb {N}}\) equipped with the inner product and norm

$$ \langle y,z\rangle = \sum _{l=1}^\infty \lambda _ly_lz_l,\quad \text{ and }\quad \left\| y\right\| ^2 = \sum _{l=1}^\infty \lambda _ly_l^2. $$

We consider the \(\mathbb {H}\)-valued random variables \(Y_{n}=(\phi _l(X_{n}))_{l\in \mathbb {N}}\). Then \(\{Y_n, n\ge 1\}\) is a sequence of \(\mathbb {H}\)-valued coordinatewise pairwise NQD random variables with mean 0 and

$$\begin{aligned} \frac{1}{n^2\log _+^4(n)} \max \limits _{1\le k \le n}V_k&=\dfrac{1}{n^2\log _+^4(n)}\max \limits _{1\le k \le n} \sum _{i,j=1}^k A_{ni} A_{nj} h(X_i, X_j) \\&= \dfrac{1}{n^2\log _+^4(n)} \max \limits _{1\le k \le n}\sum _{i,j=1}^k \sum _{l=1}^{\infty } \lambda _l A_{ni} A_{nj} \phi _l(X_i) \phi _l(X_j) \\&= \dfrac{1}{n^2\log _+^4(n)} \max \limits _{1\le k \le n} \sum _{l=1}^\infty \lambda _l \Big (\sum _{i=1}^k A_{ni}\phi _{l}(X_i) \Big )^2 \\&=\left( \frac{1}{n\log _+^2(n)}\max \limits _{1\le k \le n} \left\| \sum _{i=1}^kA_{ni}Y_{i} \right\| \right) ^2. \end{aligned}$$

We apply Theorem 10 with \(r=2\) and \(\phi (x) = x\log ^2_+ (x)\); from (1) and (2), we have \(\phi ^{-1} (x)= x/ \log ^2_+ (x)\) up to asymptotic equivalence. Moreover,

$$\begin{aligned}&\sum \limits _{j \in B} E \left( \left( \phi ^{-1}(|X^j|)\right) ^2 \log _+^2\left( \phi ^{-1}(|X^j|) \right) \right) \\ =&\sum \limits _{j \in B} E \left( \frac{|X^j|^2}{\log ^4_+(|X^j|)}\log ^2_+\left( \frac{|X^j|}{\log ^2_+(|X^j|)}\right) \right) \\ \le&\sum \limits _{j \in B} E \left( (X^j)^2 \right) = E|h(X,X)| <\infty . \end{aligned}$$
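The asymptotic form of the inverse used above can be checked directly: writing \(y = x/\log _+^2(x)\), as \(x \rightarrow \infty \) we have

$$\begin{aligned} \phi (y)= \dfrac{x}{\log ^2_+(x)}\, \log ^2_+\left( \dfrac{x}{\log ^2_+(x)}\right) \sim \dfrac{x}{\log ^2_+(x)}\, \log ^2_+(x) = x, \end{aligned}$$

since \(\log _+\big ( x/\log ^2_+(x)\big ) \sim \log _+(x)\) as \(x \rightarrow \infty \).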

Hence, we obtain

$$ \frac{\max \limits _{1\le k \le n} \left\| \sum \limits _{i=1}^kA_{ni}Y_{i} \right\| }{n\log _+^2(n)}\rightarrow 0 \quad \text { completely as } \;\; n \rightarrow \infty . $$

This implies that

$$ \frac{1}{n^2\log _+^4(n)} \max \limits _{1\le k \le n}V_k \rightarrow 0 \text{ completely } \text{ as } n \rightarrow \infty .$$

\(\square \)
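To illustrate Theorem 13 numerically (an assumption-laden sketch, not part of the paper), consider the kernel \(h(x,y)=xy\), which is symmetric, positive definite, degenerate for mean-zero data, and Lipschitz on the bounded support of the data below, with constant weights \(A_{ni}=1\) (trivially rowwise pairwise NQD), so that \(V_k=\big (\sum _{i=1}^k X_i\big )^2\):

```python
import math
import random

# Sketch of Theorem 13 with h(x, y) = x * y and A_{ni} = 1, so that
# V_k = (sum_{i<=k} X_i)^2.  The data X_i are bounded, mean-zero and
# independent (hence pairwise independent).
random.seed(0)

n = 100_000
s = 0.0
max_v = 0.0
for _ in range(n):
    s += random.choice([-1.0, 1.0])   # partial sum of mean-zero X_i
    max_v = max(max_v, s * s)         # running max_{1<=k<=n} V_k

norm = (n * max(math.log(n), 1.0) ** 2) ** 2   # n^2 * log_+^4(n)
# The normalized maximum should be small for large n.
print(max_v / norm)
```

Since \(\max _{k\le n} V_k\) grows like \(n\) up to logarithmic factors while the normalization grows like \(n^2\log _+^4(n)\), the printed ratio is tiny for large \(n\), consistent with the complete convergence in Theorem 13.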