1 Introduction

The concept of uniform integrability plays an important role not only in Probability Theory but also in Functional Analysis.

In the latter field, uniform integrability of a set of random variables is equivalent, via the Dunford–Pettis theorem, to its relative weak sequential compactness in the \(L^1\) space, which opens the door to the use of this concept in fruitful functional–analytic techniques.

In the field of Probability Theory, particularly in the area of limit theorems, uniform integrability is the right condition for relaxing the assumption of identical distribution in weak laws of large numbers. At the same time, it is precisely the condition under which convergence in probability implies mean convergence; consequently, almost sure convergence combined with uniform integrability also implies mean convergence (see, e.g., Chow and Teicher 1997, p. 99).

The main motivation of summability theory is to make a non-convergent sequence or series converge in a more general sense. Summability theory therefore has many applications in probability limit theorems, approximation theory with positive linear operators, and differential equations whenever the ordinary limit does not exist (see Ordóñez Cabrera and Volodin 2005; Hu et al. 2001; Giuliano Antonini et al. 2013; Gadjiev and Orhan 2002; Söylemez and Ünver 2017; Braha 2018; Ünver 2014; Atlihan et al. 2017; Tas and Yurdakadim 2017; Balser and Miyake 1999; Balser 2000).

Statistical-type generalizations of the concepts of mathematical analysis play an important role in these areas whenever the ordinary versions fail. In this paper, we introduce the concept of B-statistical uniform integrability with respect to \(\left\{ a_{nk}\right\} \), where B is a nonnegative regular summability matrix and \(\left\{ a_{nk}\right\} \) is an array of real numbers. Using the concept of the B-statistical supremum, we obtain a notion that is strictly more general (that is, strictly weaker as a condition) than uniform integrability with respect to \(\left\{ a_{nk}\right\} \).

Let \(x=\left\{ x_{k}:k\ge 1\right\} \) be a real sequence and let \(B=\left\{ b_{nk}:n\ge 1,k\ge 1\right\} \) be a summability matrix (an array of real numbers). If the sequence \(\left\{ \left( Bx\right) _{n}:n\ge 1\right\} \) converges to a real number \(\alpha \), then we say that the sequence x is B-summable to \(\alpha \), provided that the series

$$\begin{aligned} (Bx)_{n}=\sum _{k=1}^{\infty }b_{nk}x_{k} \end{aligned}$$

is convergent for every \(n\in \mathbb {N}\) (here \(\mathbb {N}\) denotes the set of positive integers). A summability matrix B is said to be regular if \( {\lim }_{n\rightarrow \infty }(Bx)_{n}=L\) whenever \({\lim }_{k\rightarrow \infty }x_{k}=L\) (see Boos 2000). Throughout this paper, we assume that \(B = \{b_{nk}\}\) is a nonnegative regular summability matrix.
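
For intuition (an illustration of ours, not part of the paper's argument), the Cesàro matrix introduced below is the standard example of a nonnegative regular summability matrix: it sends the divergent sequence \(x_{k}=(-1)^{k}\) to its running averages, which converge to 0. A minimal numerical sketch:

```python
import numpy as np

# Sketch (illustrative, using the Cesaro matrix c_nk = 1/n for k <= n):
# the divergent sequence x_k = (-1)^k is B-summable to 0, since
# (Cx)_n is just the running average of x_1, ..., x_n.
N = 10_000
k = np.arange(1, N + 1)
x = (-1.0) ** k

Cx = np.cumsum(x) / k        # (Cx)_n = (1/n) * sum_{k<=n} x_k
print(Cx[-1])                # tends to 0 as n grows, although x diverges
```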

Let \(K\subset \mathbb {N}\). Then, the number

$$\begin{aligned} \delta _{B}(K):=\lim _{n\rightarrow \infty }\sum _{k\in K}b_{nk} \end{aligned}$$

is said to be the B-density of K whenever the limit exists (see Buck 1946, 1953; Fridy 1985; Kolk 1993). Regularity of the summability matrix B ensures that \(0\le \delta _{B}(K)\le 1\) whenever \(\delta _{B}(K)\) exists. If we consider \(B=C\), the Cesàro matrix, then \(\delta (K):=\delta _{C}(K)\) is called the (natural or asymptotic) density of K (see Freedman and Sember 1981) where \(C=(c_{nk})\) is the summability matrix defined by

$$\begin{aligned} c_{nk}=\left\{ \begin{array}{ll} \frac{1}{n}, &{}\quad \mathrm{{if}}\quad k\le n\\ 0, &{}\quad \mathrm{{otherwise.}} \end{array} \right. \end{aligned}$$
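
For intuition (our own illustration): with \(B=C\), the partial sums defining the B-density of a set \(K\) reduce to the proportion of \(\{1,\dots ,n\}\) occupied by \(K\). For the set of perfect squares this proportion is \(\lfloor \sqrt{n}\rfloor /n\rightarrow 0\), so \(\delta (K)=0\):

```python
import numpy as np

# Sketch: for B = C (Cesaro), sum_{k in K} c_nk = |K ∩ {1,...,n}| / n.
# For K = {j^2 : j >= 1} this is floor(sqrt(n)) / n, so delta_C(K) = 0.
n = 1_000_000
k = np.arange(1, n + 1)
in_K = np.round(np.sqrt(k)) ** 2 == k    # indicator of the squares

partial_sum = in_K.sum() / n             # sum_{k in K} c_nk at this n
print(partial_sum)                       # = 0.001 here; tends to 0 with n
```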

A real sequence \(x=\left\{ x_{k}\right\} \) is said to be B-statistically convergent (see Connor 1989; Fridy and Miller 1991) to a real number \(\alpha \) if for any \(\varepsilon >0\),

$$\begin{aligned} \delta _{B}\left( \left\{ k\in \mathbb {N}:\left| x_{k}-\alpha \right| \ge \varepsilon \right\} \right) =0. \end{aligned}$$

In this case, we write \({\mathrm {st}_{B}-\lim }_{k\rightarrow \infty }x_{k}=\alpha \). If we consider the Cesàro matrix, then C-statistical convergence is called statistical convergence (Fast 1951; Steinhaus 1951; Šalát 1980). B-statistical convergence is regular (i.e., it preserves ordinary limits), and there exist sequences which are B-statistically convergent but not convergent in the ordinary sense. Recall that if a sequence \(x=\left\{ x_{k}\right\} \) is B-statistically convergent to a real number \(\alpha \), then there exists a subsequence \(\left\{ x_{k_{j}}\right\} \) such that \(\lim \nolimits _{j\rightarrow \infty }x_{k_{j}}=\alpha \) and \(\delta _{B}(\left\{ k_{j}:j\in \mathbb {N}\right\} )=1\) (see Šalát 1980; Miller 1995).
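
To illustrate (a toy example of our own, consistent with the definition): the sequence with \(x_{k}=k\) when \(k\) is a perfect square and \(x_{k}=1/k\) otherwise is unbounded, hence not convergent, yet it is statistically convergent to 0, since for each \(\varepsilon >0\) the exceptional set \(\left\{ k:\left| x_{k}\right| \ge \varepsilon \right\} \) has natural density 0:

```python
import numpy as np

# Sketch: x_k = k on the squares, 1/k elsewhere.  For eps > 0 the set
# {k : |x_k| >= eps} consists of the squares plus finitely many small k,
# so its natural density is 0 and st-lim x_k = 0, although sup_k x_k is
# infinite.
n = 200_000
k = np.arange(1, n + 1)
is_square = np.round(np.sqrt(k)) ** 2 == k
x = np.where(is_square, k.astype(float), 1.0 / k)

eps = 0.01
density = (np.abs(x) >= eps).mean()   # proportion of exceptional indices
print(x.max(), density)               # x is huge somewhere, yet density ~ 0
```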

A real number M is said to be a B-statistical upper bound of a sequence \(\left\{ x_{k}\right\} \) if

$$\begin{aligned} \delta _{B}\left( \left\{ k\in \mathbb {N}:x_{k}>M\right\} \right) =0. \end{aligned}$$

In this case, \(\left\{ x_{k}\right\} \) is said to be B-statistically upper bounded. The infimum of the set of all B-statistical upper bounds of a B-statistically upper bounded sequence is said to be the B-statistical supremum of \(\left\{ x_{k}\right\} \) and is denoted by \({\sup _{\mathrm {st}_{B}}}_{k\in \mathbb {N}}x_{k}\) (Altinok and Küçükaslan 2014). If the sequence \(x=\left\{ x_{k}\right\} \) is not B-statistically upper bounded, then we define \({\sup _{\mathrm{st}_{B}}}_{k\in \mathbb {N} }x_{k}=\infty \). We use the notation \({\sup _\mathrm{st}}_{k\in \mathbb {N} }x_{k}\), whenever \(B=C\).
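
A quick illustration of the gap between \(\sup \) and the statistical supremum (a toy example of ours): for \(x_{k}=1+k\,I\{k\text { is a square}\}\) we have \(\sup _{k}x_{k}=\infty \), while the statistical supremum is 1, because every \(M\ge 1\) is exceeded only on a density-zero set and every \(M<1\) is exceeded everywhere:

```python
import numpy as np

# Sketch: empirical density of {k : x_k > M} for the toy sequence
# x_k = 1 + k on the squares and x_k = 1 off the squares.
n = 100_000
k = np.arange(1, n + 1)
is_square = np.round(np.sqrt(k)) ** 2 == k
x = 1.0 + np.where(is_square, k, 0)

def exceed_density(M: float) -> float:
    """Proportion of the first n indices with x_k > M."""
    return (x > M).mean()

print(exceed_density(1.0))    # ~ 1/sqrt(n): 1 is a statistical upper bound
print(exceed_density(0.99))   # = 1.0: no M < 1 is a statistical upper bound
```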

A similar definition was given in Altinok and Küçükaslan (2014) for the B-statistical infimum, \( {\inf _{\mathrm {st}_{B}}}_{k\in \mathbb {N}}x_{k}\). It is also known from Altinok and Küçükaslan (2014) that

$$\begin{aligned} \inf _{k\in \mathbb {N}}x_{k}\le \underset{k\in \mathbb {N}}{\inf \nolimits _{\mathrm {st}_{B} }}x_{k}\le \underset{k\in \mathbb {N}}{\sup \nolimits _{\mathrm {st}_{B}}}x_{k}\le \sup _{k\in \mathbb {N}}x_{k}. \end{aligned}$$
(1)

The following remark is used in our proofs:

Remark 1

If \({\sup \nolimits _{\mathrm {st}_{B}}}_{k\in \mathbb {N}} x_{k}=M<\infty \), then, since M is the infimum of the set of B-statistical upper bounds, for any \(\varepsilon >0\) there exists a B-statistical upper bound \(b<M+\varepsilon \), that is, \(\delta _{B}\left( \left\{ k\in \mathbb {N}:x_{k}>b\right\} \right) =0\). As \(\left\{ k\in \mathbb {N}:x_{k}>M+\varepsilon \right\} \subset \left\{ k\in \mathbb {N}:x_{k}>b\right\} \), we have

$$\begin{aligned} \delta _{B}\left( \left\{ k\in \mathbb {N}:x_{k}>M+\varepsilon \right\} \right) =0. \end{aligned}$$

Throughout this paper, all random variables are defined on a fixed but otherwise arbitrary probability space \((\Omega ,\mathcal {F},P)\). The expected value of a random variable X is denoted by \(\mathbb {E}X\), and we use the notation I for the indicator function. We assume that \(\left\{ a_{nk}\right\} \) is an array of real numbers such that

$$\begin{aligned} \underset{n\in \mathbb {N}}{\sup \nolimits _{\mathrm {st}_{B}}}\sum _{k=1}^{\infty }\left| a_{nk}\right| <\infty . \end{aligned}$$
(2)

Let us now review the concept of uniform integrability for a sequence of random variables \(\left\{ X_{k}\right\} \) and provide several equivalent formulations of this concept.

A sequence of random variables \(\left\{ X_{k}\right\} \) is said to be uniformly integrable (see Chung 2001, p. 100) if

$$\begin{aligned} \lim _{c\rightarrow \infty }\sup _{k\in \mathbb {N}}\mathbb {E}\left| X_{k}\right| I_{\left\{ \left| X_{k}\right| >c\right\} }=0. \end{aligned}$$

The uniform integrability criterion (see Chow and Teicher 1997, p. 94) asserts that \(\left\{ X_{k}\right\} \) is uniformly integrable if and only if (i) \(\sup \nolimits _{k\in \mathbb {N}}\mathbb {E}\left| X_{k}\right| <\infty \) and (ii) for all \(\varepsilon >0\), there exists \(\delta >0\) such that for every event A with \(P(A)<\delta \),

$$\begin{aligned} \sup \limits _{k\in \mathbb {N}}\mathbb {E}\left| X_{k}\right| I_{A}<\varepsilon . \end{aligned}$$

It is easy to show via examples that (i) and (ii) are independent conditions in the sense that neither implies the other. Hu and Rosalsky (2011) noted that (ii) is indeed equivalent to the apparently stronger condition (ii\(^{'}\)): for all \(\varepsilon >0\), there exists \(\delta >0\) such that for every sequence of events \(\left\{ A_{k}\right\} \) with \(P(A_{k})<\delta \), \(k\in \mathbb {N}\),

$$\begin{aligned} \sup \limits _{k\in \mathbb {N}}\mathbb {E}\left| X_{k}\right| I_{A_{k} }<\varepsilon . \end{aligned}$$

The following classical result of Charles de La Vallée Poussin (see Meyer 1966, p. 19) provides another characterization of uniform integrability. We refer to it as the de La Vallée Poussin criterion for uniform integrability.

Proposition 1

(de La Vallée Poussin). A sequence of random variables \(\{X_{k}\}\) is uniformly integrable if and only if there exists a convex monotone function G defined on \([0,\infty )\) with \(G(0)=0\) such that

$$\begin{aligned} \lim _{x\rightarrow \infty }\frac{G(x)}{x}=\infty \text { and }\sup _{k\in \mathbb {N}}\mathbb {E}G(\left| X_{k}\right| )<\infty . \end{aligned}$$

The proof of the necessity half is far more difficult than the proof of the sufficiency half. On the other hand, the sufficiency half provides a very useful method for establishing uniform integrability of a sequence of random variables. For the sufficiency half, the condition that G is a convex monotone function defined on \([0,\infty )\) with \(G(0)=0\) is not needed; it can be weakened to the condition that G is a nonnegative Borel measurable function defined on \([0,\infty )\).
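
As an illustration of the sufficiency half (a sketch of ours, not the paper's proof): taking \(G(x)=x^{2}\), a uniform second-moment bound forces uniform integrability, since \(\left| X\right| \le X^{2}/c\) on \(\left\{ \left| X\right| >c\right\} \) gives \(\mathbb {E}\left| X\right| I_{\left\{ \left| X\right| >c\right\} }\le \sup _{k}\mathbb {E}X_{k}^{2}/c\rightarrow 0\). A Monte Carlo check for a standard normal variable:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sketch of the de La Vallee Poussin sufficiency half with G(x) = x^2:
# on {|X| > c} we have |X| <= X^2 / c, hence
# E|X| 1{|X|>c} <= E X^2 / c, which tends to 0 uniformly over any family
# with bounded second moments.  Monte Carlo illustration, X ~ N(0, 1):
X = rng.standard_normal(1_000_000)

for c in (1.0, 5.0, 10.0):
    tail = np.mean(np.abs(X) * (np.abs(X) > c))   # E|X| 1{|X|>c}
    bound = np.mean(X ** 2) / c                   # E X^2 / c
    print(c, tail, bound)                         # tail <= bound, both -> 0
```

The inequality holds samplewise, so the empirical means satisfy it exactly, not just approximately.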

An alternative proof of the de La Vallée Poussin criterion for uniform integrability was provided by Chong (1979).

Now, we define a new version of uniform integrability which is called B-statistical uniform integrability with respect to \(\{ a_{nk}\}\).

Definition 1

A sequence of random variables \(\left\{ X_{k}\right\} \) is said to be B-statistically uniformly integrable with respect to \(\left\{ a_{nk}\right\} \) if

$$\begin{aligned} \lim _{c\rightarrow \infty }\underset{n\in \mathbb {N}}{\sup \nolimits _{\mathrm {st}_{B}}} \sum _{k=1}^{\infty }\left| a_{nk}\right| \mathbb {E}\left| X_{k}\right| I_{\left\{ \left| X_{k}\right| >c\right\} }=0. \end{aligned}$$

By considering (1), it is easy to see that if a sequence of random variables \(\left\{ X_{k}\right\} \) is uniformly integrable with respect to \(\left\{ a_{nk}\right\} \), then it is B-statistically uniformly integrable with respect to \(\left\{ a_{nk}\right\} \). Moreover, since uniform integrability implies uniform integrability with respect to \(\left\{ a_{nk}\right\} \) (see Ordóñez Cabrera 1994), we conclude that uniform integrability implies B-statistical uniform integrability with respect to \(\left\{ a_{nk}\right\} \). Finally, if we take the identity matrix as B, then B-statistical uniform integrability with respect to \(\left\{ a_{nk}\right\} \) reduces to uniform integrability with respect to \(\left\{ a_{nk}\right\} \).

The following example shows that B-statistical uniform integrability with respect to \(\left\{ a_{nk}\right\} \) does not imply uniform integrability with respect to \(\left\{ a_{nk}\right\} \) in general:

Example 1

Let \(A=(a_{nk})\) be the array defined by

$$\begin{aligned} a_{nk}=\left\{ \begin{array}{ll} 1, &{}\quad \mathrm{{if }}\;n=j^{2}\;\mathrm{{ and }}\;k=n\\ \frac{1}{2j}, &{}\quad \mathrm{{if }}\;n=j^{2}+1\;\mathrm{{ and }}\;n\le k\le (j+1)^{2}-1\\ 0, &{}\quad \text {otherwise}. \end{array} \right. \end{aligned}$$

Consider the sequence of random variables \(\left\{ X_{k}\right\} \) defined by

$$\begin{aligned} X_{k}=\left\{ \begin{array}{ll} \pm k, &{}\quad \mathrm{{with\; probability }}\;1/2\;\mathrm{{ if }}\;k=j^{2}\\ 0, &{}\quad \mathrm{{otherwise.}} \end{array} \right. \end{aligned}$$

As

$$\begin{aligned} \mathbb {E}\left| X_{k}\right| =\left\{ \begin{array}{ll} k, &{}\quad \mathrm{{if }}\;k=j^{2}\\ 0, &{}\quad \mathrm{{otherwise}}, \end{array} \right. \end{aligned}$$

we have

$$\begin{aligned} {\displaystyle \sum _{k=1}^{\infty }} \left| a_{nk}\right| \mathbb {E}\left| X_{k}\right| =\left\{ \begin{array}{ll} n, &{}\quad \mathrm{{if}} \; n=j^{2}\\ 0, &{}\quad \mathrm{{otherwise.}} \end{array} \right. \end{aligned}$$

Therefore, we obtain \(\sup \nolimits _{n\in \mathbb {N}} {\sum _{k=1}^{\infty }} \left| a_{nk}\right| \mathbb {E}\left| X_{k}\right| =\infty \), which yields that \(\left\{ X_{k}\right\} \) is not uniformly integrable with respect to A (see Theorem 2 of Ordóñez Cabrera 1994). On the other hand, we have for any \(c>0\) that

$$\begin{aligned} \mathbb {E}\left| X_{k}\right| I_{\left\{ \left| X_{k}\right|>c\right\} }&\le \mathbb {E}\left| X_{k}\right| I_{\left\{ \left| X_{k}\right| >0\right\} }\\&=\left\{ \begin{array}{ll} k, &{}\quad \mathrm{{if }}\;k=j^{2}\\ 0, &{}\quad \mathrm{{otherwise}} \end{array} \right. \end{aligned}$$

which implies that

$$\begin{aligned} {\displaystyle \sum _{k=1}^{\infty }} \left| a_{nk}\right| \mathbb {E}\left| X_{k}\right| I_{\left\{ \left| X_{k}\right| >0\right\} }=\left\{ \begin{array}{ll} n, &{}\quad \mathrm{{if }}\;n=j^{2}\\ 0, &{}\quad \mathrm{{otherwise.}} \end{array} \right. \end{aligned}$$

Thus, since the set of squares has natural density zero, we get \({\sup \nolimits _{\mathrm {st}_{C}}}_{n\in \mathbb {N}} {\sum _{k=1}^{\infty }} \left| a_{nk}\right| \mathbb {E} \left| X_{k}\right| I_{\left\{ \left| X_{k}\right| >c\right\} }=0\) for every \(c>0\). Hence, the sequence \(\left\{ X_{k}\right\} \) is C-statistically uniformly integrable with respect to A.
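
The computation in Example 1 can be double-checked numerically (an illustration of ours): the row sums \(S_{n}(c)=\sum _{k}\left| a_{nk}\right| \mathbb {E}\left| X_{k}\right| I_{\left\{ \left| X_{k}\right| >c\right\} }\) equal \(n\) on squares \(n>c\) and vanish elsewhere, so they are unbounded in \(n\) while exceeding any positive level only on a density-zero set:

```python
import numpy as np

# Sketch of Example 1: E|X_k| 1{|X_k| > c} = k if k = j^2 > c, else 0.
# A row n = j^2 has a_nn = 1, so S_n(c) = n when n = j^2 > c; a row
# n = j^2 + 1 charges only indices k that are not squares, where X_k = 0.
def S(n: int, c: float) -> float:
    j = round(np.sqrt(n))
    if j * j == n and n > c:
        return float(n)
    return 0.0

N, c = 10_000, 100.0
S_vals = np.array([S(n, c) for n in range(1, N + 1)])

print(S_vals.max())          # grows with N: sup_n S_n(c) is infinite, no UI
print((S_vals > 0).mean())   # ~ 1/sqrt(N): density 0, st_C-sup of S_n(c) = 0
```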

2 Main results

The following theorem characterizes the concept of B-statistical uniform integrability with respect to \(\left\{ a_{nk}\right\} \):

Theorem 1

A sequence of random variables \(\left\{ X_{k}\right\} \) is B-statistically uniformly integrable with respect to \(\left\{ a_{nk}\right\} \) if and only if the following two conditions hold: (i) \({\sup \nolimits _{\mathrm {st}_{B}}}_{n\in \mathbb {N}} {\sum _{k=1}^{\infty }} \left| a_{nk}\right| \mathbb {E}\left| X_{k}\right| <\infty \); (ii) for every \(\varepsilon >0\), there exists \(\nu (\varepsilon )>0\) such that

$$\begin{aligned} \underset{n\in \mathbb {N}}{\sup \nolimits _{\mathrm {st}_{B}}} {\displaystyle \sum _{k=1}^{\infty }} \left| a_{nk}\right| \mathbb {E}\left| X_{k}\right| I_{F_{k} }\le \varepsilon \end{aligned}$$

for any sequence \(\left\{ F_{k}\right\} \) of events with

$$\begin{aligned} \underset{n\in \mathbb {N}}{\sup \nolimits _{\mathrm {st}_{B}}} {\displaystyle \sum _{k=1}^{\infty }} \left| a_{nk}\right| P(F_{k})\le \nu (\varepsilon ). \end{aligned}$$
(3)

Proof

Let \(\left\{ X_{k}\right\} \) be a B-statistically uniformly integrable sequence of random variables with respect to \(\left\{ a_{nk}\right\} \) and let \(\varepsilon >0\). Then, there exists \(a>0\) such that

$$\begin{aligned} \delta _{B}\left( \left\{ n\in \mathbb {N}: {\displaystyle \sum _{k=1}^{\infty }} \left| a_{nk}\right| \mathbb {E}\left| X_{k}\right| I_{\left\{ \left| X_{k}\right|>a\right\} }>\varepsilon /2\right\} \right) =0. \end{aligned}$$
(4)

By (2) and Remark 1, we can choose \(M>0\) such that

$$\begin{aligned} \delta _{B}\left( \left\{ n\in \mathbb {N}: {\displaystyle \sum _{k=1}^{\infty }} \left| a_{nk}\right| >M\right\} \right) =0. \end{aligned}$$
(*)

Note that with \(a>0\) as in (4), since \(\mathbb {E}\left| X_{k}\right| \le a+\mathbb {E}\left| X_{k}\right| I_{\left\{ \left| X_{k}\right| >a\right\} }\) for every \(k\in \mathbb {N}\), we have

$$\begin{aligned} \left\{ n\in \mathbb {N}: {\displaystyle \sum _{k=1}^{\infty }} \left| a_{nk}\right| \mathbb {E}\left| X_{k}\right|>Ma+\frac{\varepsilon }{2}\right\} \subset \left\{ n\in \mathbb {N}: {\displaystyle \sum _{k=1}^{\infty }} \left| a_{nk}\right|>M\right\} \cup \left\{ n\in \mathbb {N}: {\displaystyle \sum _{k=1}^{\infty }} \left| a_{nk}\right| \mathbb {E}\left| X_{k}\right| I_{\left\{ \left| X_{k}\right| >a\right\} }>\frac{\varepsilon }{2}\right\} . \end{aligned}$$
(**)

Combining (*), (4), and (**), we obtain that

$$\begin{aligned} 0&\le \delta _B \left( \left\{ n \in \mathbb {N}: \sum _{k=1}^{\infty } |a_{nk}| E|X_k|> Ma + \frac{\varepsilon }{2} \right\} \right) \\&\le \delta _B \left( \left\{ n \in \mathbb {N}: \sum _{k=1}^{\infty } |a_{nk}|> M \right\} \right) \\&\quad + \delta _B \left( \left\{ n \in \mathbb {N}: \sum _{k=1}^{\infty } |a_{nk}| E|X_k| I_{\{|X_k|> a\}} > \frac{\varepsilon }{2} \right\} \right) \\&= 0 + 0 = 0 \end{aligned}$$

and so

$$\begin{aligned} \delta _B \left( \left\{ n \in \mathbb {N}: \sum _{k=1}^{\infty } |a_{nk}| E|X_k| > Ma + \frac{\varepsilon }{2} \right\} \right) =0. \end{aligned}$$

Thus, the real number \(Ma+\frac{\varepsilon }{2}\) is a B-statistical upper bound of the sequence \(\left\{ \sum _{k=1}^{\infty } |a_{nk}| E|X_k|: n \in \mathbb {N} \right\} \). Hence (i) holds.

To verify (ii), choose \(\nu (\varepsilon )=\dfrac{\varepsilon }{2a}\). Then, for any sequence \(\left\{ F_{k}\right\} \) of events such that (3) holds, we have

$$\begin{aligned} \delta _{B}\left( \left\{ n\in \mathbb {N}:a {\displaystyle \sum _{k=1}^{\infty }} \left| a_{nk}\right| P(F_{k})>\varepsilon /2\right\} \right) =0. \end{aligned}$$
(5)

Moreover, we obtain

$$\begin{aligned}&\left\{ n\in \mathbb {N}: {\textstyle \sum _{k=1}^{\infty }} \left| a_{nk}\right| \mathbb {E}\left| X_{k}\right| I_{F_{k} }>\varepsilon \right\} \\&\quad \subset \left\{ n\in \mathbb {N}: {\textstyle \sum _{k=1}^{\infty }} \left| a_{nk}\right| \mathbb {E}\left| X_{k}\right| I_{F_{k}\cap \left\{ \left| X_{k}\right| \le a\right\} }>\varepsilon /2\right\} \\&\qquad \cup \left\{ n\in \mathbb {N}: {\textstyle \sum _{k=1}^{\infty }} \left| a_{nk}\right| \mathbb {E}\left| X_{k}\right| I_{F_{k}\cap \left\{ \left| X_{k}\right|>a\right\} }>\varepsilon /2\right\} \\&\quad \subset \left\{ n\in \mathbb {N}:a {\textstyle \sum _{k=1}^{\infty }} \left| a_{nk}\right| P(F_{k})>\varepsilon /2\right\} \\&\qquad \cup \left\{ n\in \mathbb {N}: {\textstyle \sum _{k=1}^{\infty }} \left| a_{nk}\right| \mathbb {E}\left| X_{k}\right| I_{\left\{ \left| X_{k}\right|>a\right\} }>\varepsilon /2\right\} . \end{aligned}$$

Thus, we obtain by (4) and (5) that

$$\begin{aligned} 0\le & {} \delta _{B}\left( \left\{ n\in \mathbb {N}: {\displaystyle \sum _{k=1}^{\infty }} \left| a_{nk}\right| \mathbb {E}\left| X_{k}\right| I_{F_{k} }>\varepsilon \right\} \right) \\\le & {} \delta _{B}\left( \left\{ n\in \mathbb {N}:a {\displaystyle \sum _{k=1}^{\infty }} \left| a_{nk}\right| P(F_{k})>\varepsilon /2\right\} \right) \\&\quad +\,\delta _{B}\left( \left\{ n\in \mathbb {N}: {\displaystyle \sum _{k=1}^{\infty }} \left| a_{nk}\right| \mathbb {E}\left| X_{k}\right| I_{\left\{ \left| X_{k}\right|>a\right\} }>\varepsilon /2\right\} \right) \\= & {} 0, \end{aligned}$$

which implies \(\delta _{B}\left( \left\{ n\in \mathbb {N}: {\sum _{k=1}^{\infty }} \left| a_{nk}\right| \mathbb {E}\left| X_{k}\right| I_{F_{k} }>\varepsilon \right\} \right) =0\). Therefore, (ii) is satisfied.

Conversely, suppose that (i) and (ii) hold, let \(\varepsilon >0\), and set \(M:= {\sup \nolimits _{\mathrm {st}_{B}}}_{n\in \mathbb {N}} {\displaystyle \sum \nolimits _{k=1}^{\infty }} \left| a_{nk}\right| \mathbb {E}\left| X_{k}\right| <\infty \). Then by Remark 1 we get

$$\begin{aligned} \delta _{B}\left( \left\{ n\in \mathbb {N}: {\displaystyle \sum \limits _{k=1}^{\infty }} \left| a_{nk}\right| \mathbb {E}\left| X_{k}\right| >M+\varepsilon \right\} \right) =0. \end{aligned}$$
(6)

By using Markov’s inequality, we have for any \(c>0\) that

$$\begin{aligned} \left\{ n\in \mathbb {N}: {\displaystyle \sum \limits _{k=1}^{\infty }} \left| a_{nk}\right| P(\left| X_{k}\right|>c)>\frac{M+\varepsilon }{c}\right\} \subset \left\{ n\in \mathbb {N}: {\displaystyle \sum \limits _{k=1}^{\infty }} \left| a_{nk}\right| \mathbb {E}\left| X_{k}\right| >M+\varepsilon \right\} . \end{aligned}$$

Therefore, by (6), we have

$$\begin{aligned} \delta _{B}\left( \left\{ n\in \mathbb {N}: {\displaystyle \sum \limits _{k=1}^{\infty }} \left| a_{nk}\right| P(\left| X_{k}\right|>c)>\frac{M+\varepsilon }{c}\right\} \right) =0. \end{aligned}$$
(7)

Now, let \(\nu =\nu (\varepsilon )\) be as in (ii) and let \(c>\frac{M+\varepsilon }{\nu }\). Then, we get

$$\begin{aligned} \left\{ n\in \mathbb {N}: {\displaystyle \sum \limits _{k=1}^{\infty }} \left| a_{nk}\right| P(\left| X_{k}\right|>c)>\nu \right\} \subset \left\{ n\in \mathbb {N}: {\displaystyle \sum \limits _{k=1}^{\infty }} \left| a_{nk}\right| P(\left| X_{k}\right|>c)>\frac{M+\varepsilon }{c}\right\} . \nonumber \\ \end{aligned}$$
(8)

Thus, we obtain by (7) and (8) that

$$\begin{aligned} \delta _{B}\left( \left\{ n\in \mathbb {N}: {\displaystyle \sum \limits _{k=1}^{\infty }} \left| a_{nk}\right| P(\left| X_{k}\right|>c)>\nu \right\} \right) =0. \end{aligned}$$
(9)

On the other hand, if we use (ii) for the sequence of events \({\left\{ \left| X_{k}\right| >c\right\} }\), we can write

$$\begin{aligned} \left\{ n\in \mathbb {N}: {\displaystyle \sum \limits _{k=1}^{\infty }} \left| a_{nk}\right| \mathbb {E}\left| X_{k}\right| I_{\left\{ \left| X_{k}\right|>c\right\} }>\varepsilon \right\} \subset \left\{ n\in \mathbb {N}: {\displaystyle \sum \limits _{k=1}^{\infty }} \left| a_{nk}\right| P(\left| X_{k}\right|>c)>\nu \right\} . \end{aligned}$$

Hence, by (9), we have \(\delta _{B}\left( \left\{ n\in \mathbb {N}: {\displaystyle \sum \nolimits _{k=1}^{\infty }} \left| a_{nk}\right| \mathbb {E}\left| X_{k}\right| I_{\left\{ \left| X_{k}\right|>c\right\} }>\varepsilon \right\} \right) =0\) which implies that

$$\begin{aligned} \underset{n\in \mathbb {N}}{\sup \nolimits _{\mathrm {st}_{B}}} {\displaystyle \sum \limits _{k=1}^{\infty }} \left| a_{nk}\right| \mathbb {E}\left| X_{k}\right| I_{\left\{ \left| X_{k}\right| >c\right\} }\le \varepsilon . \end{aligned}$$

This completes the proof.

Remark 2

If we take B to be the identity matrix, then Theorem 1 immediately yields Theorem 2 of Ordóñez Cabrera (1994); if, in addition, we take \(\left\{ a_{nk}\right\} \) to be the Cesàro array, then Theorem 1 yields Theorem 3 of Chandra (1989).

A de La Vallée Poussin-type characterization of uniform integrability with respect to \(\left\{ a_{nk}\right\} \) can be found in Ordóñez Cabrera (1994). Motivated by these characterizations, we now prove the following de La Vallée Poussin-type characterization of B-statistical uniform integrability with respect to \(\left\{ a_{nk}\right\} \).

Theorem 2

A sequence of random variables \(\left\{ X_{k}\right\} \) is B-statistically uniformly integrable with respect to \(\left\{ a_{nk}\right\} \) if and only if there exists a Borel measurable function \(\phi :(0,\infty )\rightarrow (0,\infty )\) such that \(\lim \nolimits _{t\rightarrow \infty }\dfrac{\phi (t)}{t}=\infty \) and

$$\begin{aligned} \underset{n\in \mathbb {N}}{\sup \nolimits _{\mathrm {st}_{B}}} {\displaystyle \sum \limits _{k=1}^{\infty }} \left| a_{nk}\right| \mathbb {E}\phi \left( \left| X_{k}\right| \right) <\infty . \end{aligned}$$

Proof

If \(\left\{ X_{k}\right\} \) is B-statistically uniformly integrable with respect to \(\left\{ a_{nk}\right\} \), then we can choose a sequence of positive integers \(\{n_{j}\}\) such that for any \(j\in \mathbb {N}\)

$$\begin{aligned} \underset{n\in \mathbb {N}}{\sup \nolimits _{\mathrm {st}_{B}}} {\displaystyle \sum \limits _{k=1}^{\infty }} \left| a_{nk}\right| \mathbb {E}\left| X_{k}\right| I_{\left\{ \left| X_{k}\right| >n_{j}\right\} }<\frac{1}{2^{j}}. \end{aligned}$$

Therefore, for any \(j\in \mathbb {N}\), we have

$$\begin{aligned} \delta _{B}\left( \left\{ n\in \mathbb {N}: {\displaystyle \sum \limits _{k=1}^{\infty }} \left| a_{nk}\right| \mathbb {E}\left| X_{k}\right| I_{\left\{ \left| X_{k}\right|>n_{j}\right\} }>\frac{1}{2^{j}}\right\} \right) =0. \end{aligned}$$
(10)

On the other hand, if \(c_{j}\) and \(c_{j}^{\prime }\) are positive real numbers for every \(j\in \mathbb {N}\) with \( {\sum \nolimits _{j=1}^{\infty }} c_{j}> {\displaystyle \sum \nolimits _{j=1}^{\infty }} c_{j}^{\prime }\), then there exists \(j_{0}\in \mathbb {N}\) such that \(c_{j_{0}}>c_{j_{0}}^{\prime }\). Using this fact, it is easy to see that there exists \(j_{0}\in \mathbb {N}\) such that

$$\begin{aligned}&\left\{ n\in \mathbb {N}:\sum _{j=1}^{\infty } {\displaystyle \sum \limits _{k=1}^{\infty }} \left| a_{nk}\right| \mathbb {E}\left| X_{k}\right| I_{\left\{ \left| X_{k}\right|>n_{j}\right\} }>\sum _{j=1}^{\infty }\frac{1}{2^{j}}\right\} \nonumber \\&\quad \subset \left\{ n\in \mathbb {N}: {\displaystyle \sum \limits _{k=1}^{\infty }} \left| a_{nk}\right| \mathbb {E}\left| X_{k}\right| I_{\left\{ \left| X_{k}\right|>n_{j_{0}}\right\} }>\frac{1}{2^{j_{0}}}\right\} . \end{aligned}$$
(11)

If we consider that \( {\displaystyle \sum \nolimits _{j=1}^{\infty }} \dfrac{1}{2^{j}}=1\), then we obtain by (10) and (11) that

$$\begin{aligned} \delta _{B}\left( \left\{ n\in \mathbb {N}:\sum _{j=1}^{\infty } {\displaystyle \sum \limits _{k=1}^{\infty }} \left| a_{nk}\right| \mathbb {E}\left| X_{k}\right| I_{\left\{ \left| X_{k}\right|>n_{j}\right\} }>1\right\} \right) =0. \end{aligned}$$
(12)

Moreover, there exists a Borel measurable function (see Chandra 1989; Ordóñez Cabrera 1994) \(\phi :(0,\infty )\rightarrow (0,\infty )\) such that \(\lim \nolimits _{t\rightarrow \infty }\dfrac{\phi (t)}{t}=\infty \) and for any \(k\in \mathbb {N}\)

$$\begin{aligned} \mathbb {E}\phi \left( \left| X_{k}\right| \right) \le \sum _{j=1}^{\infty }\sum _{i=n_{j}}^{\infty }P\left( \left| X_{k}\right| >i\right) \end{aligned}$$

which implies (see Ordóñez Cabrera 1994) that

$$\begin{aligned}&\left\{ n\in \mathbb {N}: {\displaystyle \sum \limits _{k=1}^{\infty }} \left| a_{nk}\right| \mathbb {E}\phi \left( \left| X_{k}\right| \right)>1\right\} \nonumber \\&\quad \subset \left\{ n\in \mathbb {N}: {\displaystyle \sum \limits _{k=1}^{\infty }} \left( \left| a_{nk}\right| \sum _{j=1}^{\infty }\sum _{i=n_{j}} ^{\infty }P\left( \left| X_{k}\right|>i\right) \right)>1\right\} \nonumber \\&\quad \subset \left\{ n\in \mathbb {N}:\sum _{j=1}^{\infty } {\displaystyle \sum \limits _{k=1}^{\infty }} \left| a_{nk}\right| \mathbb {E}\left| X_{k}\right| I_{\left\{ \left| X_{k}\right|>n_{j}\right\} }>1\right\} . \end{aligned}$$
(13)

Therefore, by (12) and (13), we obtain

$$\begin{aligned} \delta _{B}\left( \left\{ n\in \mathbb {N}: {\displaystyle \sum \limits _{k=1}^{\infty }} \left| a_{nk}\right| \mathbb {E}\phi \left( \left| X_{k}\right| \right) >1\right\} \right) =0 \end{aligned}$$

which implies that \({\sup _{\mathrm {st}_{B}}}_{n\in \mathbb {N}}{\displaystyle \sum \nolimits _{k=1}^{\infty }} \left| a_{nk}\right| \mathbb {E}\phi \left( \left| X_{k}\right| \right) \le 1\).

Conversely, suppose that such a function \(\phi \) exists and let \(\varepsilon >0\). Then, by Remark 1 we get

$$\begin{aligned} \delta _{B}\left( \left\{ n\in \mathbb {N}: {\displaystyle \sum \limits _{k=1}^{\infty }} \left| a_{nk}\right| \mathbb {E}\phi \left( \left| X_{k}\right| \right) >M+\varepsilon \right\} \right) =0 \end{aligned}$$
(14)

where \(M:={\sup _{\mathrm {st}_{B}}}_{n\in \mathbb {N}}{\sum \nolimits _{k=1}^{\infty }} \left| a_{nk}\right| \mathbb {E}\phi \left( \left| X_{k}\right| \right) \), and there exists \(a>0\) such that \(\dfrac{\phi (t)}{t}>\dfrac{M+\varepsilon +1}{\varepsilon }\) whenever \(t>a\). Therefore,

$$\begin{aligned}&\left\{ n\in \mathbb {N}: {\textstyle \sum \limits _{k=1}^{\infty }} \left| a_{nk}\right| \mathbb {E}\left| X_{k}\right| I_{\left\{ \left| X_{k}\right|>a\right\} }>\varepsilon \right\} \\&\quad \subset \left\{ n\in \mathbb {N}:\tfrac{\varepsilon }{M+\varepsilon +1} {\textstyle \sum \limits _{k=1}^{\infty }} \left| a_{nk}\right| \mathbb {E}\phi \left( \left| X_{k}\right| \right) I_{\left\{ \left| X_{k}\right|>a\right\} }>\varepsilon \right\} \\&\quad =\left\{ n\in \mathbb {N}:\tfrac{1}{M+\varepsilon +1} {\textstyle \sum \limits _{k=1}^{\infty }} \left| a_{nk}\right| \mathbb {E}\phi \left( \left| X_{k}\right| \right) I_{\left\{ \left| X_{k}\right|>a\right\} }>1\right\} \\&\quad \subset \left\{ n\in \mathbb {N}:\tfrac{1}{M+\varepsilon +1}>\tfrac{1}{M+\varepsilon }\right\} \cup \left\{ n\in \mathbb {N}: {\textstyle \sum \limits _{k=1}^{\infty }} \left| a_{nk}\right| \mathbb {E}\phi \left( \left| X_{k}\right| \right) I_{\left\{ \left| X_{k}\right|>a\right\} }>M+\varepsilon \right\} \\&\quad =\left\{ n\in \mathbb {N}: {\textstyle \sum \limits _{k=1}^{\infty }} \left| a_{nk}\right| \mathbb {E}\phi \left( \left| X_{k}\right| \right) I_{\left\{ \left| X_{k}\right|>a\right\} }>M+\varepsilon \right\} . \end{aligned}$$

Hence, (14) implies

$$\begin{aligned} \delta _{B}\left( \left\{ n\in \mathbb {N}: {\displaystyle \sum \limits _{k=1}^{\infty }} \left| a_{nk}\right| \mathbb {E}\left| X_{k}\right| I_{\left\{ \left| X_{k}\right|>a\right\} }>\varepsilon \right\} \right) =0 \end{aligned}$$

which means that \(\left\{ X_{k}\right\} \) is B-statistically uniformly integrable with respect to \(\left\{ a_{nk}\right\} \).

Remark 3

If we take B to be the identity matrix, then Theorem 2 immediately yields Theorem 3 of Ordóñez Cabrera (1994); if, in addition, we take \(\left\{ a_{nk}\right\} \) to be the Cesàro array, then Theorem 2 yields Theorem 1 of Chandra (1989).

3 The law of large numbers

The concept of uniform integrability is a useful tool for establishing laws of large numbers with mean convergence (Chandra 1989; Ordóñez Cabrera 1994; Chung 2001). For instance, uniform integrability with respect to \(\left\{ a_{nk}\right\} \) of a sequence \(\{X_{k}\}\) of pairwise independent random variables implies the following law of large numbers with mean convergence (see Chandra 1989):

$$\begin{aligned} \lim _{n\rightarrow \infty }\mathbb {E}\left| \sum _{k=1}^{n}a_{nk} (X_{k}-\mathbb {E}X_{k})\right| =0. \end{aligned}$$
(15)

After introducing the concept of B-statistical uniform integrability, we can consider a result about the law of large numbers with mean convergence in the statistical sense. Let \(p\ge 1\). The B-statistical version of convergence in the pth mean of a sequence of random variables \(\{X_{k}\}\) to a random variable X is defined by:

$$\begin{aligned} \underset{k\rightarrow \infty }{\mathrm {st}_{B}-\lim }\mathbb {E}\left| X_{k} -X\right| ^{p}=0. \end{aligned}$$

Moreover, \(\{X_{k}\}\) is said to be B-statistically convergent to X in probability if for any \(\varepsilon ,\nu >0\)

$$\begin{aligned} \delta _{B}\left( \left\{ k\in \mathbb {N}:P\left( \left| X_{k} -X\right| \ge \nu \right) \ge \varepsilon \right\} \right) =0. \end{aligned}$$

In this case, we write \(X_{k}\overset{\mathrm {st}_{B,P}}{\rightarrow }X\). For the statistical version of this definition, see Ghosal (2013). By Markov's inequality, B-statistical convergence in pth mean implies B-statistical convergence in probability for any \(p>0\).

Remark 4

Let \(p>q>0\). Assume that a sequence of random variables \(\{X_{k}\}\) is B-statistically convergent to a random variable X in pth mean, that is, the real sequence \(\left\{ \mathbb {E}\left| X_{k}-X\right| ^{p}\right\} \) is B-statistically convergent to zero. Therefore, there exists \(\left\{ k_{j}:j\in \mathbb {N}\right\} \) such that \(\delta _{B}\left( \left\{ k_{j}:j\in \mathbb {N}\right\} \right) =1\) and

$$\begin{aligned} \lim _{j\rightarrow \infty }\mathbb {E}\left| X_{k_{j}}-X\right| ^{p}=0. \end{aligned}$$

Thus, the subsequence \(\left\{ X_{k_{j}}\right\} \) converges to X in the pth mean and, since \(p>q\), also in the qth mean. Hence, we get

$$\begin{aligned} \lim _{j\rightarrow \infty }\mathbb {E}\left| X_{k_{j}}-X\right| ^{q}=0. \end{aligned}$$
(16)

As \(\delta _{B}\left( \left\{ k_{j}:j\in \mathbb {N}\right\} \right) =1\), (16) implies that

$$\begin{aligned} \underset{k\rightarrow \infty }{\mathrm {st}_{B}-\lim }\text { }\mathbb {E}\left| X_{k}-X\right| ^{q}=0. \end{aligned}$$

Hence, \(\{X_{k}\}\) is B-statistically convergent to X in qth mean.

Various notions on convergence of sequences of random variables in the statistical sense may be found in Ghosal (2013, 2016).

Lemma 1

Let \(\{X_{k}\}\) be a sequence of uniformly bounded pairwise independent random variables and let \(\left\{ a_{nk}\right\} \) be an array such that (2) holds and

$$\begin{aligned} \underset{n\rightarrow \infty }{\mathrm {st}_{B}-\lim }\text { }\sup _{k\in \mathbb {N} }\left| a_{nk}\right| =0. \end{aligned}$$
(17)

Then,

$$\begin{aligned} \underset{n\rightarrow \infty }{\mathrm {st}_{B}-\lim }\text { }\mathbb {E}\left( \sum _{k=1}^{\infty }a_{nk}(X_{k}-\mathbb {E}X_{k})\right) ^{2}=0. \end{aligned}$$
(18)

Proof

By (2), we can choose \(M>0\) such that \(\underset{n\in \mathbb {N}}{\sup \nolimits _{\mathrm {st}_{B}}}\sum _{k=1}^{\infty }\left| a_{nk}\right| <M\). Then, by Remark 1, we have

$$\begin{aligned} \delta _{B}\left( \left\{ n\in \mathbb {N}:\sum _{k=1}^{\infty }\left| a_{nk}\right| \ge M\right\} \right) =0. \end{aligned}$$
(19)

If \(H>0\) is a uniform bound for \(\{X_{k}\}\) (i.e., \(\left| X_{k}\right| \le H\) almost surely for every k), then by (17) we get for any \(\varepsilon >0\) that

$$\begin{aligned} \delta _{B}\left( \left\{ n\in \mathbb {N}:\sup _{k\in \mathbb {N}}\left| a_{nk}\right| \ge \varepsilon /MH^{2}\right\} \right) =0. \end{aligned}$$
(20)

Now, we have from the pairwise independence that

$$\begin{aligned}&\left\{ n\in \mathbb {N}:\mathbb {E}\left( \sum _{k=1}^{\infty }a_{nk} (X_{k}-\mathbb {E}X_{k})\right) ^{2}\ge \varepsilon \right\} \nonumber \\&\quad \subset \left\{ n\in \mathbb {N}:\sum _{k=1}^{\infty }a_{nk}^{2}\mathbb {E} (X_{k}-\mathbb {E}X_{k})^{2}\ge \varepsilon \right\} \nonumber \\&\quad \subset \left\{ n\in \mathbb {N}:H^{2}\sup _{k\in \mathbb {N}}\left| a_{nk}\right| \sum _{k=1}^{\infty }\left| a_{nk}\right| \ge \varepsilon \right\} \nonumber \\&\quad \subset \left\{ n\in \mathbb {N}:\sup _{k\in \mathbb {N}}\left| a_{nk}\right| \ge \varepsilon /MH^{2}\right\} \cup \left\{ n\in \mathbb {N}:\sum _{k=1}^{\infty }\left| a_{nk}\right| \ge M\right\} . \end{aligned}$$
(21)

By (19), (20) and (21), we have

$$\begin{aligned} 0\le & {} \delta _{B}\left( \left\{ n\in \mathbb {N}:\mathbb {E}\left( \sum _{k=1}^{\infty }a_{nk}(X_{k}-\mathbb {E}X_{k})\right) ^{2}\ge \varepsilon \right\} \right) \\\le & {} \delta _{B}\left( \left\{ n\in \mathbb {N}:\sup _{k\in \mathbb {N} }\left| a_{nk}\right| \ge \varepsilon /MH^{2}\right\} \right) \\&+\,\delta _{B}\left( \left\{ n\in \mathbb {N}:\sum _{k=1} ^{\infty }\left| a_{nk}\right| \ge M\right\} \right) \\= & {} 0 \end{aligned}$$

which yields that

$$\begin{aligned} \delta _{B}\left( \left\{ n\in \mathbb {N}:\mathbb {E}\left( \sum _{k=1} ^{\infty }a_{nk}(X_{k}-\mathbb {E}X_{k})\right) ^{2}\ge \varepsilon \right\} \right) =0. \end{aligned}$$

Therefore, (18) holds.
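The chain of inequalities behind (21) can be made concrete. The sketch below (an illustration of ours, with Cesàro weights \(a_{nk}=1/n\) for \(k\le n\) and independent \(X_{k}\) uniform on \([-H,H]\) chosen only as an example) computes the exact second moment \(\sum _{k}a_{nk}^{2}\mathrm {Var}(X_{k})\) and checks it against the bound \(H^{2}\sup _{k}\left| a_{nk}\right| \sum _{k}\left| a_{nk}\right| \); the bound shrinks as n grows, in line with (18).

```python
# Illustrative check (not part of the proof) of the key bound in Lemma 1:
#   E(sum_k a_nk (X_k - EX_k))^2 = sum_k a_nk^2 Var(X_k)
#                               <= H^2 * sup_k|a_nk| * sum_k|a_nk|.
# Weights (Cesaro rows) and the Uniform[-H, H] distribution are arbitrary.
H = 3.0
var = H * H / 3.0          # Var of Uniform[-H, H]

for n in (10, 100, 1000):
    a = [1.0 / n] * n      # Cesaro row: a_nk = 1/n for k <= n, else 0
    lhs = sum(w * w for w in a) * var                   # exact second moment
    rhs = H * H * max(abs(w) for w in a) * sum(abs(w) for w in a)
    assert lhs <= rhs
    # lhs = var/n, so the second moment vanishes as n grows, matching (18)
```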

Theorem 3

Let \(\{X_{k}\}\) be a sequence of pairwise independent random variables and let \(\left\{ a_{nk}\right\} \) be an array such that (2) and (17) hold. If \(\{X_{k}\}\) is B-statistically uniformly integrable with respect to \(\left\{ a_{nk}\right\} \), then

$$\begin{aligned} \underset{n\rightarrow \infty }{\mathrm {st}_{B}-\lim }\text { }\mathbb {E}\left| \sum _{k=1}^{\infty }a_{nk}(X_{k}-\mathbb {E}X_{k})\right| =0 \end{aligned}$$

and, a fortiori,

$$\begin{aligned} \sum _{k=1}^{\infty }a_{nk}(X_{k}-\mathbb {E}X_{k})\overset{\mathrm {st}_{B,P}}{\rightarrow }0. \end{aligned}$$

Proof

Since \(\{X_{k}\}\) is B-statistically uniformly integrable with respect to \(\left\{ a_{nk}\right\} \), for any \(\varepsilon >0\) there exists \(a>0\) such that

$$\begin{aligned} \underset{n\in \mathbb {N}}{\sup \nolimits _{\mathrm {st}_{B}}} {\displaystyle \sum \limits _{k=1}^{\infty }} \left| a_{nk}\right| \mathbb {E}\left| X_{k}\right| I_{\left\{ \left| X_{k}\right| >a\right\} }<\varepsilon /4 \end{aligned}$$

which implies

$$\begin{aligned} \delta _{B}\left( \left\{ n\in \mathbb {N}: {\sum \limits _{k=1}^{\infty }} \left| a_{nk}\right| \mathbb {E}\left| X_{k}\right| I_{\left\{ \left| X_{k}\right|>a\right\} }>\varepsilon /4\right\} \right) =0. \end{aligned}$$
(22)

Now, we define for any \(k\in \mathbb {N}\)

$$\begin{aligned} U_{k}&=X_{k}I_{\left\{ \left| X_{k}\right| \le a\right\} }\\ V_{k}&=X_{k}I_{\left\{ \left| X_{k}\right| >a\right\} }. \end{aligned}$$

The sequence of random variables \(\left\{ U_{k}-\mathbb {E}U_{k}\right\} \) is pairwise independent with uniform bound 2a. By Lemma 1 and Remark 4, we have

$$\begin{aligned} \underset{n\rightarrow \infty }{\mathrm {st}_{B}-\lim }\text { }\mathbb {E}\left| \sum _{k=1}^{\infty }a_{nk}(U_{k}-\mathbb {E}U_{k})\right| =0. \end{aligned}$$

Therefore,

$$\begin{aligned} \delta _{B}\left( \left\{ n\in \mathbb {N}:\mathbb {E}\left| \sum _{k=1}^{\infty }a_{nk}(U_{k}-\mathbb {E}U_{k})\right| >\varepsilon /2\right\} \right) =0. \end{aligned}$$
(23)

On the other hand,

$$\begin{aligned}&\left\{ n\in \mathbb {N}:\sum _{k=1}^{\infty }\left| a_{nk}\right| \mathbb {E}\left| V_{k}-\mathbb {E}V_{k}\right|>\varepsilon /2\right\} \nonumber \\&\quad \subset \left\{ n\in \mathbb {N}:2\sum _{k=1}^{\infty }\left| a_{nk}\right| \mathbb {E}\left| V_{k}\right|>\varepsilon /2\right\} \nonumber \\&\quad =\left\{ n\in \mathbb {N}:\sum _{k=1}^{\infty }\left| a_{nk}\right| \mathbb {E}\left| X_{k}I_{\left\{ \left| X_{k}\right|>a\right\} }\right| >\varepsilon /4\right\} . \end{aligned}$$
(24)

Now, by (22) and (24), we obtain

$$\begin{aligned} \delta _{B}\left( \left\{ n\in \mathbb {N}:\sum _{k=1}^{\infty }\left| a_{nk}\right| \mathbb {E}\left| V_{k}-\mathbb {E}V_{k}\right| >\varepsilon /2\right\} \right) =0. \end{aligned}$$

Finally, as

$$\begin{aligned}&\left\{ n\in \mathbb {N}:\mathbb {E}\left| {\textstyle \sum \limits _{k=1}^{\infty }} a_{nk}(X_{k}-\mathbb {E}X_{k})\right|>\varepsilon \right\} \nonumber \\&\quad \subset \left\{ n\in \mathbb {N}:\mathbb {E}\left| {\textstyle \sum \limits _{k=1}^{\infty }} a_{nk}(U_{k}-\mathbb {E}U_{k})\right|>\varepsilon /2\right\} \nonumber \\&\qquad \cup \left\{ n\in \mathbb {N}: {\textstyle \sum \limits _{k=1}^{\infty }} \left| a_{nk}\right| \mathbb {E}\left| V_{k}-\mathbb {E}V_{k}\right| >\varepsilon /2\right\} , \end{aligned}$$
(25)

by (23) and (25), we have

$$\begin{aligned} \delta _{B}\left( \left\{ n\in \mathbb {N}:\mathbb {E}\left| \sum _{k=1}^{\infty }a_{nk}(X_{k}-\mathbb {E}X_{k})\right| >\varepsilon \right\} \right) =0 \end{aligned}$$

which finishes the proof.

Remark 5

If we take B as the identity matrix, then Theorem 3 immediately yields Theorem 4 of Ordóñez Cabrera (1994); if, in addition, we take \(\left\{ a_{nk}\right\} \) as the Cesàro array, then Theorem 3 yields Theorem 1 of Chandra (1989).
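The Cesàro special case of Theorem 3 admits a quick Monte Carlo illustration. The sketch below (ours; the centered uniform distribution, sample sizes, and replication count are arbitrary choices) estimates \(\mathbb {E}\left| \sum _{k=1}^{n}n^{-1}(X_{k}-\mathbb {E}X_{k})\right| \) and checks that it shrinks as n grows, as the \(L_1\) law of large numbers predicts.

```python
import random

# Monte Carlo sketch (illustration only) of the Cesaro case of Theorem 3:
# with B the identity and a_nk = 1/n, the L1-norm E|sum_k a_nk (X_k - EX_k)|
# decreases as n grows.  X_k ~ Uniform[-1, 1], so EX_k = 0 already.
random.seed(1)

def l1_norm(n, reps=2000):
    """Estimate E|(X_1 + ... + X_n)/n| by averaging over `reps` samples."""
    total = 0.0
    for _ in range(reps):
        xs = [random.uniform(-1.0, 1.0) for _ in range(n)]
        total += abs(sum(xs) / n)        # |sum_k a_nk X_k| with a_nk = 1/n
    return total / reps

# The estimated L1-norm at n = 400 should be well below the one at n = 10.
assert l1_norm(400) < l1_norm(10)
```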

In the following theorem, we drop the pairwise independence assumption by strengthening the other conditions.

Theorem 4

Let \(0<p<1\) and let \(\left\{ a_{nk}\right\} \) be an array such that

$$\begin{aligned} \underset{n\in \mathbb {N}}{\sup \nolimits _{\mathrm {st}_{B}}} {\displaystyle \sum \limits _{k=1}^{\infty }} \left| a_{nk}\right| ^{p}<\infty \end{aligned}$$

and

$$\begin{aligned} \underset{n\rightarrow \infty }{\mathrm {st}_{B}-\lim }\text { }\sup _{k\in \mathbb {N} }\left| a_{nk}\right| =0. \end{aligned}$$

If \(\left\{ \left| X_{k}\right| ^{p}\right\} \) is a B-statistically uniformly integrable sequence of random variables with respect to \(\left\{ \left| a_{nk}\right| ^{p}\right\} \), then

$$\begin{aligned} \underset{n\rightarrow \infty }{\mathrm {st}_{B}-\lim }\text { }\mathbb {E}\left| \sum _{k=1}^{\infty }a_{nk}X_{k}\right| ^{p}=0 \end{aligned}$$

and, a fortiori,

$$\begin{aligned} \sum _{k=1}^{\infty }a_{nk}X_{k}\overset{\mathrm {st}_{B,P}}{\rightarrow }0. \end{aligned}$$

Proof

Let \(\varepsilon >0\). Then, there exists \(a>0\) such that

$$\begin{aligned} \underset{n\in \mathbb {N}}{\sup \nolimits _{\mathrm {st}_{B}}} {\displaystyle \sum \limits _{k=1}^{\infty }} \left| a_{nk}\right| ^{p}\mathbb {E}\left| X_{k}\right| ^{p}I_{\left\{ \left| X_{k}\right| >a\right\} }<\varepsilon /2 \end{aligned}$$

which implies

$$\begin{aligned} \delta _{B}\left( \left\{ n\in \mathbb {N}: {\displaystyle \sum \limits _{k=1}^{\infty }} \left| a_{nk}\right| ^{p}\mathbb {E}\left| X_{k}\right| ^{p}I_{\left\{ \left| X_{k}\right|>a\right\} }>\varepsilon /2\right\} \right) =0. \end{aligned}$$
(26)

If \(\underset{n\in \mathbb {N}}{\sup \nolimits _{\mathrm {st}_{B}}} {\displaystyle \sum \nolimits _{k=1}^{\infty }} \left| a_{nk}\right| ^{p}<M<\infty \), then we obtain

$$\begin{aligned} \delta _{B}\left( \left\{ n\in \mathbb {N}: {\displaystyle \sum \limits _{k=1}^{\infty }} \left| a_{nk}\right| ^{p}>M\right\} \right) =0. \end{aligned}$$
(27)

Since \(\underset{n\rightarrow \infty }{\mathrm {st}_{B}-\lim }\) \(\sup \nolimits _{k\in \mathbb {N} }\left| a_{nk}\right| =0\), we get

$$\begin{aligned} \delta _{B}\left( \left\{ n\in \mathbb {N}:\sup \limits _{k\in \mathbb {N} }\left| a_{nk}\right| >\left( \varepsilon /aM\right) ^{1/(1-p)} \right\} \right) =0. \end{aligned}$$
(28)

Now, if we define the sequences of random variables \(\left\{ U_{k}\right\} \) and \(\left\{ V_{k}\right\} \) as in the proof of Theorem 3, then we have

$$\begin{aligned}&\left\{ n\in \mathbb {N}:\mathbb {E}\left| {\displaystyle \sum \limits _{k=1}^{\infty }} a_{nk}U_{k}\right|>\varepsilon \right\} \nonumber \\&\quad \subset \left\{ n\in \mathbb {N}: {\displaystyle \sum \limits _{k=1}^{\infty }} \left| a_{nk}\right| \mathbb {E}\left| U_{k}\right|>\varepsilon \right\} \nonumber \\&\quad \subset \left\{ n\in \mathbb {N}:a\sup \limits _{k\in \mathbb {N}}\left| a_{nk}\right| ^{1-p} {\displaystyle \sum \limits _{k=1}^{\infty }} \left| a_{nk}\right| ^{p}>\varepsilon \right\} \nonumber \\&\quad \subset \left\{ n\in \mathbb {N}:\sup \limits _{k\in \mathbb {N}}\left| a_{nk}\right| ^{1-p}>\varepsilon /aM\right\} \cup \left\{ n\in \mathbb {N}: {\displaystyle \sum \limits _{k=1}^{\infty }} \left| a_{nk}\right| ^{p}>M\right\} . \end{aligned}$$
(29)

Thus, by (27), (28) and (29), we have

$$\begin{aligned} \delta _{B}\left( \left\{ n\in \mathbb {N}:\mathbb {E}\left| {\displaystyle \sum \nolimits _{k=1}^{\infty }} a_{nk}U_{k}\right| >\varepsilon \right\} \right) =0. \end{aligned}$$

Hence, the sequence \(\left\{ \mathbb {E}\left| {\displaystyle \sum \nolimits _{k=1}^{\infty }} a_{nk}U_{k}\right| :n\in \mathbb {N}\right\} \) is B-statistically convergent to zero. Since \(0<p<1\), Jensen's inequality gives \(\mathbb {E}\left| Y\right| ^{p}\le \left( \mathbb {E}\left| Y\right| \right) ^{p}\), so Remark 4 implies that

$$\begin{aligned} \underset{n\rightarrow \infty }{\mathrm {st}_{B}-\lim }\text { }\mathbb {E}\left| {\displaystyle \sum \limits _{k=1}^{\infty }} a_{nk}U_{k}\right| ^{p}=0. \end{aligned}$$
(30)

Therefore,

$$\begin{aligned} \delta _{B}\left( \left\{ n\in \mathbb {N}:\mathbb {E}\left| {\displaystyle \sum \limits _{k=1}^{\infty }} a_{nk}U_{k}\right| ^{p}>\varepsilon /2\right\} \right) =0. \end{aligned}$$

On the other hand,

$$\begin{aligned} \left\{ n\in \mathbb {N}:\mathbb {E}\left| {\displaystyle \sum \limits _{k=1}^{\infty }} a_{nk}V_{k}\right| ^{p}>\varepsilon /2\right\}\subset & {} \left\{ n\in \mathbb {N}: {\displaystyle \sum \limits _{k=1}^{\infty }} \left| a_{nk}\right| ^{p}\mathbb {E}\left| V_{k}\right| ^{p}>\varepsilon /2\right\} \\= & {} \left\{ n\in \mathbb {N}: {\displaystyle \sum \limits _{k=1}^{\infty }} \left| a_{nk}\right| ^{p}\mathbb {E}\left| X_{k}\right| ^{p}I_{\left\{ \left| X_{k}\right|>a\right\} }>\varepsilon /2\right\} . \end{aligned}$$

Thus, by (26), we obtain

$$\begin{aligned} \delta _{B}\left( \left\{ n\in \mathbb {N}:\mathbb {E}\left| {\displaystyle \sum \limits _{k=1}^{\infty }} a_{nk}V_{k}\right| ^{p}>\varepsilon /2\right\} \right) =0. \end{aligned}$$
(31)

Moreover, we have

$$\begin{aligned}&\left\{ n\in \mathbb {N}:\mathbb {E}\left| \sum _{k=1}^{\infty }a_{nk} X_{k}\right| ^{p}>\varepsilon \right\} \nonumber \\&\quad \subset \left\{ n\in \mathbb {N}:\mathbb {E}\left| \sum _{k=1}^{\infty }a_{nk}U_{k}\right| ^{p}+\mathbb {E}\left| \sum _{k=1}^{\infty } a_{nk}V_{k}\right| ^{p}>\varepsilon \right\} \nonumber \\&\quad \subset \left\{ n\in \mathbb {N}:\mathbb {E}\left| \sum _{k=1}^{\infty }a_{nk}U_{k}\right| ^{p}>\varepsilon /2\right\} \cup \left\{ n\in \mathbb {N}:\mathbb {E}\left| \sum _{k=1}^{\infty }a_{nk}V_{k}\right| ^{p}>\varepsilon /2\right\} .\nonumber \\ \end{aligned}$$
(32)

Now, by (30), (31) and (32), we conclude

$$\begin{aligned} \delta _{B}\left( \left\{ n\in \mathbb {N}:\mathbb {E}\left| \sum _{k=1}^{\infty }a_{nk}X_{k}\right| ^{p}>\varepsilon \right\} \right) =0. \end{aligned}$$

This finishes the proof.
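Inequality (32) rests on the subadditivity \(\left| x+y\right| ^{p}\le \left| x\right| ^{p}+\left| y\right| ^{p}\) for \(0<p<1\) (the \(c_{r}\)-inequality), which also gives the bound on \(\mathbb {E}\left| \sum _{k}a_{nk}V_{k}\right| ^{p}\) above. The following sketch (ours, purely illustrative) checks it on random inputs.

```python
import random

# Hedged numerical check (not part of the proof) of the subadditivity
#   |x + y|^p <= |x|^p + |y|^p   for 0 < p < 1,
# which underlies (32) and the bound on E|sum a_nk V_k|^p.  The exponent
# and the sampling range are arbitrary choices.
random.seed(2)
p = 0.5
for _ in range(10_000):
    x = random.uniform(-10.0, 10.0)
    y = random.uniform(-10.0, 10.0)
    # Small tolerance guards against floating-point rounding at equality.
    assert abs(x + y) ** p <= abs(x) ** p + abs(y) ** p + 1e-12
```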

Remark 6

If we take B as the identity matrix, then by Theorem 4 we immediately obtain Theorem 5 of Ordóñez Cabrera (1994).

4 Conclusion and an open problem

The main goal of this article is to introduce the concept of B-statistical uniform integrability of a sequence of random variables with respect to an array of weights, where B is a nonnegative regular summability matrix. This notion is then applied to obtain new laws of large numbers that generalize some well-known results from the literature.

One of the main results of the article, Theorem 3, provides the law of large numbers with respect to \(L_1\)-convergence for a sequence of pairwise independent random variables, while the subsequent Theorem 4 provides the law of large numbers for an arbitrary sequence of random variables without any assumptions on their joint distributions. Theorem 4 is proved under the assumption that the sequence \(\left\{ \left| X_{k}\right| ^{p}\right\} \) is a B-statistically uniformly integrable sequence of random variables with respect to the array of weights, where \(0<p<1\). It would be interesting to obtain a version of Theorem 4 for \(p \ge 1\).