Abstract
Many important probabilistic problems require finding the limit of a sequence of random variables. However, this limit can be understood in different ways, and various kinds of convergence can be defined. Among the relationships between these types of convergence, we can highlight, for example, that convergence in \(L^p\)-sense implies convergence in probability, which, in turn, implies convergence in distribution, and that all of these implications are strict. In this paper, the relationships between several types of convergence of sequences of random variables will be analyzed from the perspective of lineability theory.
1 Preliminaries and background
Since the beginning of the twenty-first century, the study of linearity within nonlinear settings (nowadays known as lineability theory) has become a trend within many areas of mathematical research, such as real and complex analysis [4, 8, 9, 24, 37], linear dynamics [13, 22, 23], set theory [27], operator theory [14, 37], measure theory [9], and algebraic geometry [12].
Let us recall that if V is a vector space and \(\kappa \) is a (finite or infinite) cardinal number, a subset \(A\subset V\) is said to be \(\kappa \)-lineable if there exists a vector subspace \(M\subset V\) such that the dimension of M is \(\kappa \) and \(M\backslash \{0\}\subset A.\) In addition, if A is contained in some commutative algebra, then A is called strongly \(\kappa \)-algebrable if there exists a \(\kappa \)-generated free algebra M with \(M\backslash \{0\}\subset A;\) that is, there exists a subset B of cardinality \(\kappa \) with the following property: for any positive integer m, any nonzero polynomial P in m variables without constant term and any distinct elements \(x_1,\ldots ,x_m\in B,\) we have \(P(x_1,\ldots ,x_m)\ne 0\) and \(P(x_1,\ldots ,x_m)\in A.\)
Recently, the notion of convex lineability was introduced in [11], and it will also be explored in this work. As usual, conv(B) will denote the convex hull of a subset B of a vector space V, that is,
$$\begin{aligned} \text {conv}(B)=\left\{ \lambda _1x_1+\cdots +\lambda _nx_n:n\in {\mathbb {N}},\ x_1,\ldots ,x_n\in B,\ \lambda _1,\ldots ,\lambda _n\ge 0,\ \lambda _1+\cdots +\lambda _n=1\right\} . \end{aligned}$$
We say that a subset A of a vector space V is \(\kappa \)-convex lineable if there exists a linearly independent subset \(B\subset V\) such that B has cardinality \(\kappa \) and conv\((B)\subset A.\)
The notion of lineability dates back to 2004 (see [4]). The term was originally coined by V. I. Gurariy, and it was only recently adopted by the American Mathematical Society in the 2020 Mathematics Subject Classification under the references 15A03 and 46B87. The original motivation for this concept probably came from the famous example of Weierstrass (also known as Weierstrass’ monster, a continuous nowhere differentiable function on \({\mathbb {R}},\) see [39]). In 1966, V. I. Gurariy showed that the set of Weierstrass’ monsters contains, except for the 0 function, an infinite-dimensional vector space (see [28, 29]). The current state of the art of this area of research can be consulted, for instance, in [1, 3, 5, 10, 12, 13, 18–20, 25, 33].
In this work, we link the topic of lineability with probability theory. This paper’s goal is to continue the ongoing research started in [21, 26]. On this occasion, we shall analyze the many different types of convergence of sequences of random variables from the perspective of lineability theory. A recent article that also studies this topic is [7].
Throughout the paper, \(\left( \Omega ,{\mathcal {A}},\mu \right) \) will always denote a probability space. Any measurable function defined on it will be called a random variable. Two random variables X and Y on \(\left( \Omega ,{\mathcal {A}},\mu \right) \) will be said to be equal if \(X=Y\) except on a set of probability zero. Given a random variable X and a real number \(\varepsilon ,\) we will frequently write \(\mu (X>\varepsilon )\) instead of \(\mu \left( \left\{ \omega \in \Omega :X(\omega )>\varepsilon \right\} \right) \) (and similarly for other inequalities). We define below some basic kinds of convergence considered in probability theory that will be studied in the article.
(1) A sequence \(\left\{ X_n\right\} _{n=1}^{\infty }\) of random variables converges almost surely to another random variable X if there exists a subset \(B\in {\mathcal {A}}\) such that \(\mu (B)=1\) and \(\lim _{n\rightarrow \infty } X_n(\omega )=X(\omega )\) for every \(\omega \in B.\)

(2) The sequence \(\left\{ X_n\right\} _{n=1}^{\infty }\) converges almost surely uniformly to X if there exists a set \(B\in {\mathcal {A}}\) such that \(\mu (B)=1\) and \(\lim _{n\rightarrow \infty }X_n(\omega )=X(\omega )\) uniformly on B.

(3) It is said that \(\left\{ X_n\right\} _{n=1}^{\infty }\) converges in probability to X if
$$\begin{aligned} \lim _{n\rightarrow \infty }\mu \left( |X_n-X|>\varepsilon \right) =0 \end{aligned}$$
for every \(\varepsilon >0.\)

(4) The sequence \(\left\{ X_n\right\} _{n=1}^{\infty }\) converges completely to X if
$$\begin{aligned} \sum _{n=1}^{\infty }\mu \left( |X_n-X|>\varepsilon \right) <\infty \end{aligned}$$
for every \(\varepsilon >0.\)

(5) Let us suppose that X and \(X_n,\) for all \(n\in {\mathbb {N}},\) belong to the space \(L^p\left( \Omega ,{\mathcal {A}},\mu \right) \) for some \(p>0.\) The sequence \(\left\{ X_n\right\} _{n=1}^{\infty }\) converges to X in \(L^p\)-sense or in p-mean if
$$\begin{aligned} \lim _{n\rightarrow \infty } E\left( |X_n-X|^p\right) =0, \end{aligned}$$
where E(X) denotes the expectation of the random variable X.

(6) Let F and \(F_n\) be the distribution functions of X and \(X_n,\) respectively. The sequence \(\left\{ X_n\right\} _{n=1}^{\infty }\) converges in distribution to X if \(\lim _{n\rightarrow \infty }F_n(x)=F(x)\) for all \(x\in {\mathbb {R}}\) at which F is continuous. When this property holds, it is also said that \(\left\{ F_n\right\} _{n=1}^{\infty }\) converges weakly to F.

(7) Let us suppose that F and \(F_n,\) for all \(n\in {\mathbb {N}},\) are absolutely continuous distribution functions with densities f and \(f_n,\) respectively. The sequence of distribution functions \(\left\{ F_n\right\} _{n=1}^{\infty }\) converges in variation to F if
$$\begin{aligned} \lim _{n\rightarrow \infty }\int _{-\infty }^{+\infty }|f_n(x)-f(x)|dx = 0. \end{aligned}$$
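As a concrete illustration (my own, not taken from the paper), consider \(X_n={\textbf{1}}_{(0,1/n)}\) on \(\left( [0,1],{\mathcal {B}},\lambda \right) ;\) the quantities in definitions (3), (4), and (5) can then be evaluated in closed form. The following sketch (function names are mine) checks them numerically.

```python
# Sketch (my illustration): X_n = indicator of (0, 1/n) on [0, 1] with Lebesgue
# measure. Then mu(|X_n| > eps) = 1/n for 0 <= eps < 1, and E(|X_n|^p) = 1/n.

def prob_exceeds(n, eps):
    """mu(|X_n| > eps): the set where X_n = 1 has measure 1/n."""
    return 1.0 / n if eps < 1 else 0.0

def p_mean(n, p):
    """E(|X_n|^p) = integral of 1 over (0, 1/n) = 1/n, for every p > 0."""
    return 1.0 / n

# Convergence in probability and in L^p-sense: both quantities tend to 0.
print([prob_exceeds(n, 0.5) for n in (1, 10, 100)])   # [1.0, 0.1, 0.01]
print([p_mean(n, 2.0) for n in (1, 10, 100)])         # [1.0, 0.1, 0.01]

# Complete convergence (definition (4)) fails: the harmonic series diverges.
partial = sum(prob_exceeds(n, 0.5) for n in range(1, 100001))
print(partial)  # grows like log(n); already above 12 here
```

The same example converges almost surely (indeed at every point of (0, 1]), so it separates several of the modes above.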
The following diagram contains the implications that hold among the modes of convergence defined above.
2 Algebrability of sets of sequences of random variables
2.1 Almost sure uniform convergence and complete convergence
Let us suppose that \(\left\{ X_n\right\} _{n=1}^{\infty }\) is a sequence of random variables on a probability space \(\left( \Omega ,{\mathcal {A}},\mu \right) \) and that \(X_n\rightarrow 0\) uniformly on a subset B with \(\mu (B)=1.\) Given \(\varepsilon >0,\) there exists \(n_0\in {\mathbb {N}}\) such that \(|X_n(\omega )|\le \varepsilon \) for every \(n>n_0\) and every \(\omega \in B.\) As a consequence,
$$\begin{aligned} \sum _{n=1}^{\infty }\mu \left( |X_n|>\varepsilon \right) =\sum _{n=1}^{n_0}\mu \left( |X_n|>\varepsilon \right) \le n_0<\infty . \end{aligned}$$
That is, \(\left\{ X_n\right\} _{n=1}^{\infty }\) also converges completely to zero. However, the converse is not true. Based on the example given in [38, Section 14.15], we will prove the strong \({\mathfrak {c}}\)-algebrability of the collection of sequences that converge completely but do not converge almost surely uniformly. We will consider the termwise product of sequences of random variables:
$$\begin{aligned} \left\{ X_n\right\} _{n=1}^{\infty }\cdot \left\{ Y_n\right\} _{n=1}^{\infty }=\left\{ X_nY_n\right\} _{n=1}^{\infty }. \end{aligned}$$
The symbols \({\mathcal {B}}\) and \(\lambda \) will denote the Borel \(\sigma \)-algebra in [0, 1] and the Lebesgue measure on [0, 1], respectively, and \({\mathfrak {c}}\) will denote the cardinality of the continuum.
Theorem 1
Let \({\mathcal {S}}\) be the set of all sequences \(\left\{ X_n\right\} _{n=1}^{\infty }\) of random variables on \(\left( [0,1],{\mathcal {B}},\lambda \right) \) such that \(\left\{ X_n\right\} _{n=1}^{\infty }\) converges to 0 completely but does not converge almost surely uniformly. Then \({\mathcal {S}}\) is strongly \({\mathfrak {c}}\)-algebrable.
Proof
The proof is based on a general algebrability criterion that appears in [6, Proposition 7]. For each \(b>0\) and each \(n\in {\mathbb {N}},\) the random variable \(X_{b,n}\) is defined as follows:
$$\begin{aligned} X_{b,n}(x)={\left\{ \begin{array}{ll} e^{b/x} &{} \text {if } x\in \left( 0,1/n^2\right] ,\\ 0 &{} \text {otherwise.} \end{array}\right. } \end{aligned}$$
Let \({\mathcal {H}}\subset (0,+\infty )\) be a Hamel basis for \({\mathbb {R}}\) over the field \({\mathbb {Q}}.\) It is known that the cardinality of such a basis is \({\mathfrak {c}}\) (see [35, Theorem 4.2.3]). We will prove that if \(h_1,\ldots ,h_m\) are distinct elements of \({\mathcal {H}}\) and P is a non-zero polynomial in m variables without constant term, then the sequence
$$\begin{aligned} \left\{ P\left( X_{h_1,n},\ldots ,X_{h_m,n}\right) \right\} _{n=1}^{\infty } \end{aligned}$$
is not zero and belongs to \({\mathcal {S}}.\) The polynomial P can be written as
$$\begin{aligned} P(x_1,\ldots ,x_m)=\sum _{i=1}^{k}a_ix_1^{s_{i1}}\cdots x_m^{s_{im}}, \end{aligned}$$
where \(k\in {\mathbb {N}},\) \(a_i\in {\mathbb {R}}\backslash \{0\},\) \(s_{ij}\in {\mathbb {N}}\cup \{0\}\) for \(1 \le i \le k,\) \(1 \le j \le m,\) and \(s_{i1}+\cdots +s_{im}>0.\) We can assume that all the vectors of exponents \((s_{i1},\ldots ,s_{im})\) are different. Note that, for every \(x\in \left( 0,1/n^2\right] ,\)
$$\begin{aligned} X_{h_1,n}(x)^{s_{i1}}\cdots X_{h_m,n}(x)^{s_{im}}=e^{\left( s_{i1}h_1+\cdots +s_{im}h_m\right) /x}. \end{aligned}$$
Hence, setting \(b_i=s_{i1}h_1+\cdots +s_{im}h_m\) for each \(i\in \{1,\ldots ,k\},\) we have
$$\begin{aligned} P\left( X_{h_1,n},\ldots ,X_{h_m,n}\right) =\sum _{i=1}^{k}a_iX_{b_i,n}. \end{aligned}$$
Since \({\mathcal {H}}\) is a basis for \({\mathbb {R}}\) over \({\mathbb {Q}},\) it follows that \(b_1,\ldots ,b_k\) are distinct. Moreover, each \(b_i\) is strictly positive because \(h_1,\ldots ,h_m\) are also positive and \(s_{i1}+\cdots +s_{im}>0.\) Without loss of generality, we can suppose that \(0<b_1<\cdots <b_k.\)
If \(P\left( X_{h_1,n},\ldots ,X_{h_m,n}\right) \) were zero at every point of [0, 1], then \(\sum _{i=1}^{k}a_{i}e^{b_i/x}\) would be zero for every \(x\in \left( 0,1/n^2\right] ,\) which is not possible because
$$\begin{aligned} \lim _{x\rightarrow 0^+}e^{-b_k/x}\sum _{i=1}^{k}a_{i}e^{b_i/x}=a_k\ne 0. \end{aligned}$$
Furthermore, if \(\varepsilon >0,\) then, since \(X_{b_i,n}\) vanishes outside \(\left( 0,1/n^2\right] ,\)
$$\begin{aligned} \sum _{n=1}^{\infty }\lambda \left( \left| \sum _{i=1}^{k}a_iX_{b_i,n}\right| >\varepsilon \right) \le \sum _{n=1}^{\infty }\frac{1}{n^2}<\infty . \end{aligned}$$
That is, the sequence \(\left\{ \sum _{i=1}^{k}a_iX_{b_i,n}\right\} _{n=1}^{\infty }\) converges completely to 0.
Let us suppose that \(\left\{ \sum _{i=1}^{k}a_iX_{b_i,n}\right\} _{n=1}^{\infty }\) also converges almost surely uniformly to 0. Then there is some measurable set B with \(\lambda (B)=1\) and there is \(n_0\in {\mathbb {N}}\) such that \(\left| \sum _{i=1}^{k}a_iX_{b_i,n}(x)\right| <1\) for all \(n\ge n_0\) and all \(x\in B.\) Moreover, the fact that \(\lambda (B)=1\) implies that 0 is an accumulation point of B. Hence, fixing \(n\ge n_0\) and \(A= B\cap \left( 0,1/n^2\right] ,\) we have
$$\begin{aligned} \sup _{x\in A}\left| \sum _{i=1}^{k}a_iX_{b_i,n}(x)\right| =\sup _{x\in A}\left| \sum _{i=1}^{k}a_ie^{b_i/x}\right| =+\infty , \end{aligned}$$
since 0 is an accumulation point of A and \(\left| \sum _{i=1}^{k}a_ie^{b_i/x}\right| \rightarrow +\infty \) as \(x\rightarrow 0^+.\)
This contradiction shows that \(\left\{ \sum _{i=1}^{k}a_iX_{b_i,n}\right\} _{n=1}^{\infty }\) does not converge almost surely uniformly to 0. That is, the sequence \(\left\{ P\left( X_{h_1,n},\ldots ,X_{h_m,n}\right) \right\} _{n=1}^{\infty } =\left\{ \sum _{i=1}^{k}a_iX_{b_i,n}\right\} _{n=1}^{\infty }\) belongs to \({\mathcal {S}}.\) \(\square \)
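The mechanics of this example can be seen numerically. The sketch below is my own illustration; the parametrization \(X_{b,n}(x)=e^{b/x}\) on \((0,1/n^2]\) is an assumption consistent with the estimates in the proof.

```python
import math

# Hypothetical parametrization consistent with the proof's estimates:
# X_{b,n}(x) = exp(b/x) if 0 < x <= 1/n^2, and 0 otherwise.
def X(b, n, x):
    return math.exp(b / x) if 0 < x <= 1.0 / n**2 else 0.0

b = 1.0
# Complete convergence: mu(|X_{b,n}| > eps) <= 1/n^2, and sum 1/n^2 < pi^2/6.
support_sum = sum(1.0 / n**2 for n in range(1, 100001))
print(support_sum < math.pi**2 / 6)      # True: the series converges

# No almost sure uniform convergence: each X_{b,n} is unbounded near 0.
print(X(b, 10, 0.02))                    # 0.0: outside the support (0, 1/100]
print(X(b, 10, 0.005) > X(b, 10, 0.01))  # True: values blow up as x -> 0+
```

The supports shrink fast enough for complete convergence, yet every tail of the sequence is unbounded near the origin, which is exactly the tension the proof exploits.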
2.2 Complete convergence and almost sure convergence
As a consequence of the Borel–Cantelli lemma, if a sequence of random variables converges completely to 0, then it also converges almost surely (see [34, Theorem 1.27 and Proposition 5.7]). However, the converse does not hold in general (see [38, Section 14.4]). Let us mention that both convergence modes are equivalent if the random variables \(\left\{ X_n\right\} _{n=1}^{\infty }\) are independent (see [31]). Based on these facts, we will study the set of sequences of random variables that converge to zero almost surely but not completely.
Theorem 2
Let \({\mathcal {S}}\) be the set of all sequences \(\left\{ X_n\right\} _{n=1}^{\infty }\) of random variables on \(\left( [0,1],{\mathcal {B}},\lambda \right) \) such that \(\left\{ X_n\right\} _{n=1}^{\infty }\) converges to 0 almost surely but does not converge completely. Then \({\mathcal {S}}\) is strongly \({\mathfrak {c}}\)-algebrable.
Proof
For each \(b\in {\mathbb {R}}\) and \(n\in {\mathbb {N}},\) the random variable \(X_{b,n}\) is defined as follows:
$$\begin{aligned} X_{b,n}(x)={\left\{ \begin{array}{ll} e^{b/x} &{} \text {if } x\in \left( 0,1/n\right] ,\\ 0 &{} \text {otherwise.} \end{array}\right. } \end{aligned}$$
Let \({\mathcal {H}}\subset (0,+\infty )\) be a Hamel basis for \({\mathbb {R}}\) over the field \({\mathbb {Q}},\) whose cardinality is \({\mathfrak {c}}\) (see [35, Theorem 4.2.3]). Let \(h_1,\ldots ,h_m\) be distinct elements of \({\mathcal {H}}\) and let P be a non-zero polynomial in m variables without constant term. As was shown in the proof of Theorem 1, there are positive numbers \(b_1<\cdots <b_k\) such that
$$\begin{aligned} P\left( X_{h_1,n},\ldots ,X_{h_m,n}\right) =\sum _{i=1}^{k}a_iX_{b_i,n}, \end{aligned}$$
where \(a_1,\ldots ,a_k\in {\mathbb {R}}\backslash \{0\}.\)
Following the same technique employed in the proof of Theorem 1, it can be shown that \(\left\{ \sum _{i=1}^{k}a_iX_{b_i,n}\right\} _{n=1}^{\infty }\) is not the zero sequence. Furthermore, note that \(\left\{ \sum _{i=1}^{k}a_{i}X_{b_i,n}\right\} _{n=1}^{\infty }\) converges to 0 almost surely (in fact, this happens at every point of [0, 1]).
Finally, we will prove that the sequence \(\left\{ \sum _{i=1}^{k}a_{i}X_{b_i,n}\right\} _{n=1}^{\infty }\) does not converge completely to zero. Let \(\varepsilon >0.\) Since
$$\begin{aligned} \lim _{x\rightarrow 0^+}\left| \sum _{i=1}^{k}a_ie^{b_i/x}\right| =+\infty , \end{aligned}$$
there exists \(n_0\in {\mathbb {N}}\) such that \(\left| \sum _{i=1}^{k}a_ie^{b_i/x}\right| >\varepsilon \) for every \(x\in (0,1/n_0].\) Hence, if \(n\ge n_0\) and \(x\in (0,1/n],\) then
$$\begin{aligned} \left| \sum _{i=1}^{k}a_iX_{b_i,n}(x)\right| =\left| \sum _{i=1}^{k}a_ie^{b_i/x}\right| >\varepsilon , \end{aligned}$$
and so
$$\begin{aligned} \sum _{n=1}^{\infty }\lambda \left( \left| \sum _{i=1}^{k}a_iX_{b_i,n}\right| >\varepsilon \right) \ge \sum _{n=n_0}^{\infty }\frac{1}{n}=\infty . \end{aligned}$$
This proves that the sequence \(\left\{ P\left( X_{h_1,n},\ldots ,X_{h_m,n}\right) \right\} _{n=1}^{\infty }\) belongs to \({\mathcal {S}}\) and concludes the proof of the strong \({\mathfrak {c}}\)-algebrability of this set. \(\square \)
Remark 3
Let us recall that a probability space \((\Omega ,{\mathcal {A}},\mu )\) is said to be non-atomic if for every set \(B\in {\mathcal {A}}\) with \(\mu (B)>0\) there exists another set \(A\in {\mathcal {A}}\) such that \(A\subset B\) and \(0<\mu (A)<\mu (B).\) By [16, Theorem 9.2.2], Theorems 1 and 2 remain valid if the space \(\left( [0,1],{\mathcal {B}},\lambda \right) \) is replaced by \(\left( \Omega ,{\mathcal {A}},\mu \right) ,\) where \(\Omega \) is a complete separable metric space, \({\mathcal {A}}\) is the Borel \(\sigma \)-algebra in \(\Omega ,\) and \(\mu \) is a non-atomic probability measure.
Remark 4
It is well known that every sequence of random variables that converges almost surely also converges in probability (see [34, Proposition 5.10]). In [2, Theorem 7.1], the \({\mathfrak {c}}\)-lineability of the set of sequences of Lebesgue measurable functions on [0, 1] that converge in measure (that is, in probability) but not almost surely was proved, while the strong \({\mathfrak {c}}\)-algebrability of that family of functions was obtained in [17, Theorem 2.2]. In the case of a non-atomic probability space, the strong \({\mathfrak {c}}\)-algebrability of the collection of sequences of random variables that converge in probability but not almost surely was proved in [7, Theorem 11].
3 Lineability of sets of sequences of random variables
Before presenting the next theorems about sequences of random variables, we need to state the following result, whose proof can be seen in [32, Lemma 9.21] and which will be applied in Theorems 6 and 15:
Theorem 5
There exists a family \(\Delta \) of subsets of \({\mathbb {N}}\) with the following properties:

- Each \(\sigma \in \Delta \) is infinite.
- The elements of \(\Delta \) are almost disjoint. That is, if \(\sigma \) and \(\sigma '\) are two different elements of \(\Delta ,\) then \(\sigma \cap \sigma '\) is finite or empty.
- The cardinality of \(\Delta \) is \({\mathfrak {c}}.\)
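One classical way to produce such a family (an illustration of the statement, not the construction in [32]) assigns to each real number the set of its decimal truncations; two distinct reals eventually have different truncations, so the resulting sets intersect in at most finitely many elements.

```python
import math

# For each real r > 1, sigma(r) = { floor(10^k * r) : k = 1, ..., depth } collects
# the decimal truncations of r. Distinct reals share only finitely many
# truncations, so { sigma(r) : r } is an almost disjoint family indexed by
# continuum many reals.
def sigma(r, depth=30):
    return {math.floor(10**k * r) for k in range(1, depth + 1)}

s1 = sigma(math.sqrt(2))   # {14, 141, 1414, 14142, ...}
s2 = sigma(math.pi / 3)    # {10, 104, 1047, 10471, ...}
print(len(s1), len(s2))    # 30 30: the truncations are pairwise distinct
print(s1.isdisjoint(s2))   # True: here the intersection is already empty
```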
Next we prove a general result about lineability of sets of sequences.
Theorem 6
Let V be a vector space over \({\mathbb {R}}\) and let \(c_1\) and \(c_2\) be two types of convergence defined on V such that both \(c_1\) and \(c_2\) have the following properties:

(a) If \(\left\{ X_n\right\} _{n=1}^{\infty }\subset V\) and \(X_n\rightarrow 0,\) then also \(\lambda X_n\rightarrow 0\) for all \(\lambda \in {\mathbb {R}}.\)

(b) The sequence in V constantly equal to 0 converges to 0.

(c) If \(\left\{ X_n\right\} _{n=1}^{\infty }\) is a sequence in V that can be decomposed into a finite number of subsequences converging to 0, then \(\left\{ X_n\right\} _{n=1}^{\infty }\) converges to 0.

(d) If \(\left\{ X_n\right\} _{n=1}^{\infty }\) is a sequence that converges to 0, then all subsequences of \(\left\{ X_n\right\} _{n=1}^{\infty }\) also converge to 0.

(e) If a finite number of terms from a sequence is deleted, the convergence or lack of convergence does not change.

Let \({\mathcal {S}}\) be the set of all sequences in V that converge to 0 with respect to \(c_1\) but do not converge to 0 with respect to \(c_2.\) If \({\mathcal {S}}\ne \emptyset ,\) then \({\mathcal {S}}\) is \({\mathfrak {c}}\)-lineable.
Proof
Let \(\Delta \) be a family of \({\mathfrak {c}}\) almost disjoint infinite subsets of \({\mathbb {N}}\) (see Theorem 5). If \(\sigma \in \Delta \) and \(j\in {\mathbb {N}},\) \(\sigma (j)\) denotes the jth element of the set \(\sigma .\) Let us suppose that a sequence \(\left\{ X_n\right\} _{n=1}^{\infty }\) belongs to \({\mathcal {S}}.\) Given \(\sigma \in \Delta ,\) we define \(\left\{ X_{\sigma ,n}\right\} _{n=1}^{\infty }\) as follows:
$$\begin{aligned} X_{\sigma ,n}={\left\{ \begin{array}{ll} X_j &{} \text {if } n=\sigma (j) \text { for some } j\in {\mathbb {N}},\\ 0 &{} \text {if } n\notin \sigma . \end{array}\right. } \end{aligned}$$
The set \({\mathcal {M}}=\left\{ \left\{ X_{\sigma ,n}\right\} _{n=1}^{\infty }:\sigma \in \Delta \right\} \) has cardinality \({\mathfrak {c}}\) and every linear combination of its elements belongs to \({\mathcal {S}}.\) Indeed, let us suppose that \(k\in {\mathbb {N}},\) \(\sigma _1,\ldots ,\sigma _k\) are distinct elements of \(\Delta ,\) and \(a_1,\ldots ,a_k\in {\mathbb {R}}\backslash \{0\}.\) Since \(\sigma _1,\ldots ,\sigma _k\) are almost disjoint, there exists \(n_0\in {\mathbb {N}}\) such that each \(n\ge n_0\) belongs at most to one set \(\sigma _i,\) \(i \in \{1, \ldots , k\}.\) Therefore, \(\left\{ \sum _{i=1}^{k}a_iX_{\sigma _i,n}\right\} _{n=n_0}^{\infty }\) can be decomposed into \(k+1\) subsequences: the zero sequence and \(\left\{ a_iX_j:j\in {\mathbb {N}},\sigma _i(j)\ge n_0\right\} \) with \(i\in \{1,\ldots ,k\}.\) By the properties (a) and (e), the sequence \(\left\{ a_iX_j:j\in {\mathbb {N}},\sigma _i(j)\ge n_0\right\} \) converges to 0 with respect to \(c_1\) for every \(i\in \{1,\ldots ,k\}.\) Moreover, the zero sequence converges to 0 by the property (b). Then (c) implies that \(\left\{ \sum _{i=1}^{k}a_iX_{\sigma _i,n}\right\} _{n=n_0}^{\infty }\) converges to zero with respect to \(c_1.\) Using (e) again, it follows that \(\left\{ \sum _{i=1}^{k}a_iX_{\sigma _i,n}\right\} _{n=1}^{\infty }\) converges to zero with respect to \(c_1\) as well.
Let us see that this linear combination does not converge to zero with respect to \(c_2.\) Since \(\left\{ X_n\right\} _{n=1}^{\infty }\) does not converge to 0 with respect to \(c_2,\) by (a) we know that \(\left\{ a_1X_n\right\} _{n=1}^{\infty }\) does not converge to 0 with respect to \(c_2.\) Then (e) implies that \(\left\{ a_1X_j:j\in {\mathbb {N}},\sigma _1(j)\ge n_0\right\} \) does not converge to 0 with respect to \(c_2.\) Since this is a subsequence of \(\left\{ \sum _{i=1}^{k}a_iX_{\sigma _i,n}\right\} _{n=1}^{\infty },\) by (d) we deduce that \(\left\{ \sum _{i=1}^{k}a_iX_{\sigma _i,n}\right\} _{n=1}^{\infty }\) does not converge to zero with respect to \(c_2.\)
Therefore, \(\left\{ \sum _{i=1}^{k}a_iX_{\sigma _i,n}\right\} _{n=1}^{\infty }\) belongs to \({\mathcal {S}}.\)
By (b), the fact that \(\left\{ \sum _{i=1}^{k}a_iX_{\sigma _i,n}\right\} _{n=1}^{\infty }\) does not converge to 0 with respect to \(c_2\) implies that it cannot be the zero sequence. Hence, we deduce that the elements of \({\mathcal {M}}\) are linearly independent and the proof of the \({\mathfrak {c}}\)-lineability of \({\mathcal {S}}\) is complete. \(\square \)
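The way the proof spreads one sequence along each \(\sigma \in \Delta \) can be sketched with plain numbers standing in for random variables (function names are mine):

```python
# X_{sigma, n} = X_j if n = sigma(j) (the j-th element of sigma), 0 otherwise.
def spread(base, sigma, length):
    pos = {s: j for j, s in enumerate(sorted(sigma))}
    return [base[pos[n]] if n in pos else 0 for n in range(1, length + 1)]

base = [1, 2, 3, 4]
evens = {2, 4, 6, 8}              # sigma(j) = 2j
odds = {1, 3, 5, 7}               # a disjoint companion set
print(spread(base, evens, 8))     # [0, 1, 0, 2, 0, 3, 0, 4]

# A linear combination along (almost) disjoint sets keeps each copy visible,
# which is why the combination inherits the base sequence's behavior.
combo = [a + b for a, b in zip(spread(base, evens, 8), spread(base, odds, 8))]
print(combo)                      # [1, 1, 2, 2, 3, 3, 4, 4]
```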
Theorem 7
The following modes of convergence of sequences of random variables satisfy the properties (a), (b), (c), (d), and (e) given in Theorem 6: almost sure convergence, almost sure uniform convergence, complete convergence, convergence in probability, convergence in distribution, and convergence in \(L^p\)-sense.
Proof
The only property that might not be obvious is (a) for convergence in distribution. However, it can be deduced from Slutsky’s theorem (see [34, Theorems 5.22 and 5.23]). \(\square \)
3.1 Almost sure uniform convergence, complete convergence and almost sure convergence
The following theorem complements the result given in Theorem 1.
Theorem 8
Let \(\left( \Omega ,{\mathcal {A}},\mu \right) \) be a non-atomic probability space and let \({\mathcal {S}}\) be the set of all sequences \(\left\{ X_n\right\} _{n=1}^{\infty }\) of random variables on \(\left( \Omega ,{\mathcal {A}},\mu \right) \) such that \(\left\{ X_n\right\} _{n=1}^{\infty }\) converges to 0 completely but does not converge almost surely uniformly. Then \({\mathcal {S}}\) is \({\mathfrak {c}}\)-lineable.
Proof
Since \(\left( \Omega ,{\mathcal {A}},\mu \right) \) is non-atomic, for every \(n\in {\mathbb {N}}\) there exists a subset \(\Omega _n\in {\mathcal {A}}\) such that \(\mu (\Omega _n)=1/n^2\) (see [16, Corollary 1.12.10]). The random variable \(X_n\) is defined as the characteristic function of \(\Omega _n\):
$$\begin{aligned} X_n={\textbf{1}}_{\Omega _n}. \end{aligned}$$
Note that if \(\varepsilon >0,\) then
$$\begin{aligned} \sum _{n=1}^{\infty }\mu \left( |X_n|>\varepsilon \right) \le \sum _{n=1}^{\infty }\mu \left( \Omega _n\right) =\sum _{n=1}^{\infty }\frac{1}{n^2}<\infty . \end{aligned}$$
That is, the sequence \(\left\{ X_n\right\} _{n=1}^{\infty }\) converges completely to 0.
Let us suppose that \(\left\{ X_n\right\} _{n=1}^{\infty }\) also converges almost surely uniformly to 0. Then there is some measurable set B with \(\mu (B)=1\) and there is \(n_0\in {\mathbb {N}}\) such that \(|X_n(\omega )|<1/2\) for all \(n\ge n_0\) and all \(\omega \in B.\) Since \(X_{n_0}(\omega )=1\) if \(\omega \in \Omega _{n_0},\) it follows that \(B\cap \Omega _{n_0}=\emptyset \) and then
$$\begin{aligned} \mu (B)\le 1-\mu \left( \Omega _{n_0}\right) =1-\frac{1}{n_0^2}<1. \end{aligned}$$
This contradiction shows that the sequence \(\left\{ X_n\right\} _{n=1}^{\infty }\) does not converge almost surely uniformly to 0. That is, \(\left\{ X_n\right\} _{n=1}^{\infty }\) belongs to \({\mathcal {S}}.\) Now we just need to apply Theorem 6 to get the result. \(\square \)
The following theorem complements the result given in Theorem 2.
Theorem 9
Let \(\left( \Omega ,{\mathcal {A}},\mu \right) \) be a non-atomic probability space and let \({\mathcal {S}}\) be the set of all sequences \(\left\{ X_n\right\} _{n=1}^{\infty }\) of random variables on \(\left( \Omega ,{\mathcal {A}},\mu \right) \) such that \(\left\{ X_n\right\} _{n=1}^{\infty }\) converges to 0 almost surely but does not converge completely. Then \({\mathcal {S}}\) is \({\mathfrak {c}}\)-lineable.
Proof
By [16, Corollary 1.12.10], there exists a sequence \(\left\{ \Omega _n:n\in {\mathbb {N}}\right\} \) of measurable subsets of \(\Omega \) such that \(\Omega _{n+1}\subset \Omega _n\) and \(\mu \left( \Omega _n\right) =1/n\) for each \(n\in {\mathbb {N}}.\) Let \(X_n={\textbf{1}}_{\Omega _n}\) and note that if \(\omega \notin \Omega _{n_0}\) for some \(n_0\in {\mathbb {N}},\) then \(X_n(\omega )=0\) for all \(n\ge n_0.\) Therefore,
$$\begin{aligned} \lim _{n\rightarrow \infty }X_n(\omega )=0 \quad \text {for every } \omega \notin \bigcap _{n=1}^{\infty }\Omega _n. \end{aligned}$$
Since \(\mu \left( \bigcap _{n=1}^{\infty }\Omega _n\right) =0,\) it follows that \(\left\{ X_n\right\} _{n=1}^{\infty }\) converges to 0 almost surely. Moreover, if \(0<\varepsilon <1,\) then
$$\begin{aligned} \sum _{n=1}^{\infty }\mu \left( |X_n|>\varepsilon \right) =\sum _{n=1}^{\infty }\mu \left( \Omega _n\right) =\sum _{n=1}^{\infty }\frac{1}{n}=\infty . \end{aligned}$$
Consequently, the sequence \(\left\{ X_n\right\} _{n=1}^{\infty }\) does not converge completely to 0. We have proved that \(\left\{ X_n\right\} _{n=1}^{\infty }\) belongs to \({\mathcal {S}},\) so this set is \({\mathfrak {c}}\)-lineable by Theorem 6. \(\square \)
3.2 Convergence in \(L^p\)-sense and almost sure convergence
Let us recall that E(X) denotes the expectation of a random variable X. By the Lyapunov inequality, it is known that \(E(|X|^q)^{1/q}\le E(|X|^p)^{1/p}\) if \(0<q<p\) (see [15, page 277]). As a consequence, convergence in \(L^p\)-sense implies convergence in \(L^q\)-sense whenever \(0<q<p,\) but the converse is not true (see [38, Section 14.5]). Moreover, it is also known that almost sure convergence and convergence in \(L^p\)-sense are independent, that is, neither of them implies the other (see [38, Sections 14.7 and 14.8]). More generally, considering non-atomic probability spaces, Theorems 10, 11, and 12 below provide interesting information related to these modes of convergence.
Theorem 10
Let \(\left( \Omega ,{\mathcal {A}},\mu \right) \) be a non-atomic probability space and let \(p>0.\) Let \({\mathcal {S}}\) be the set of all sequences \(\left\{ X_n\right\} _{n=1}^{\infty }\) of random variables on \(\left( \Omega ,{\mathcal {A}},\mu \right) \) such that \(\left\{ X_n\right\} _{n=1}^{\infty }\) converges to 0 in \(L^q\)-sense for all \(q\in (0,p)\) but does not converge in \(L^p\)-sense. Then \({\mathcal {S}}\) is \({\mathfrak {c}}\)-lineable.
Proof
Let \(n_0\in {\mathbb {N}}\) be such that \(n^{-p+1/n}\le 1\) for all \(n\ge n_0.\) For each \(n\ge n_0\) there exists \(\Omega _n\in {\mathcal {A}}\) such that \(\mu (\Omega _n)=n^{-p+1/n}\) (see [16, Corollary 1.12.10]). Define
$$\begin{aligned} X_n=n\,{\textbf{1}}_{\Omega _n}\quad \text {for every } n\ge n_0. \end{aligned}$$
On the one hand, if \(0<q<p,\) then
$$\begin{aligned} E\left( |X_n|^q\right) =n^{q}\mu \left( \Omega _n\right) =n^{q-p+1/n}, \end{aligned}$$
which tends to 0 as \(n\rightarrow \infty ,\) and so \(\left\{ X_n\right\} _{n=n_0}^{\infty }\) converges to 0 in \(L^q\)-sense. On the other hand, we have that
$$\begin{aligned} E\left( |X_n|^p\right) =n^{p}\mu \left( \Omega _n\right) =n^{1/n}, \end{aligned}$$
which tends to 1 as \(n\rightarrow \infty .\)
Therefore, \(\left\{ X_n\right\} _{n=n_0}^{\infty }\) does not converge to zero in \(L^p\)-sense. This proves that \(\left\{ X_n\right\} _{n=1}^{\infty }\) belongs to \({\mathcal {S}}.\) By Theorem 6, we conclude that \({\mathcal {S}}\) is \({\mathfrak {c}}\)-lineable. \(\square \)
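Numerically, assuming the construction \(X_n=n\,{\textbf{1}}_{\Omega _n}\) with \(\mu (\Omega _n)=n^{-p+1/n}\) (my reading of the proof), the moments can be evaluated directly:

```python
# E(|X_n|^q) = n^q * mu(Omega_n) = n^(q - p + 1/n), assuming X_n = n * 1_{Omega_n}.
def q_mean(n, p, q):
    return n ** (q - p + 1.0 / n)

p = 2.0
print([round(q_mean(n, p, 1.0), 4) for n in (10, 100, 1000)])  # tends to 0
print([round(q_mean(n, p, 2.0), 4) for n in (10, 100, 1000)])  # tends to 1, not 0
```

The exponent \(q-p+1/n\) is eventually negative for \(q<p\) but tends to 0 for \(q=p,\) which is exactly the separation the theorem needs.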
Theorem 11
Let \(\left( \Omega ,{\mathcal {A}},\mu \right) \) be a non-atomic probability space and let \({\mathcal {S}}\) be the set of all sequences \(\left\{ X_n\right\} _{n=1}^{\infty }\) of random variables on \(\left( \Omega ,{\mathcal {A}},\mu \right) \) such that \(\left\{ X_n\right\} _{n=1}^{\infty }\) converges to 0 almost surely but does not converge in \(L^p\)-sense for any \(p>0.\) Then \({\mathcal {S}}\) is \({\mathfrak {c}}\)-lineable.
Proof
Since the probability space is non-atomic, there exists a collection \(\left\{ \Omega _n:n\in {\mathbb {N}}\right\} \) of measurable subsets of \(\Omega \) such that \(\mu (\Omega _n)=1/n\) and \(\Omega _{n+1}\subset \Omega _n\) for every \(n\in {\mathbb {N}}.\) Set
$$\begin{aligned} X_n=e^{n}\,{\textbf{1}}_{\Omega _n}. \end{aligned}$$
On the one hand, if \(\omega \notin \bigcap _{n\in {\mathbb {N}}}\Omega _n,\) then \(X_n(\omega )=0\) for n large enough (depending on \(\omega \)). That is, the set of points \(\omega \) for which \(\left\{ X_n(\omega )\right\} _{n=1}^{\infty }\) does not converge to 0 is a subset of \(\bigcap _{n\in {\mathbb {N}}}\Omega _n.\) Since \(\mu \left( \Omega _n\right) =1/n\) for all n, we have that \(\mu \left( \bigcap _{n\in {\mathbb {N}}}\Omega _n\right) =0.\) Thus, \(\left\{ X_n\right\} _{n=1}^{\infty }\) converges to 0 almost surely. On the other hand, if \(p>0,\) then
$$\begin{aligned} E\left( |X_n|^p\right) =e^{np}\mu \left( \Omega _n\right) =\frac{e^{np}}{n}\longrightarrow +\infty , \end{aligned}$$
and so \(\left\{ X_{n}\right\} _{n=1}^{\infty }\) does not converge to 0 in \(L^p\)-sense. This proves that \(\left\{ X_{n}\right\} _{n=1}^{\infty }\) belongs to \({\mathcal {S}}.\) By Theorem 6 we conclude that \({\mathcal {S}}\) is \({\mathfrak {c}}\)-lineable. \(\square \)
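With indicator heights growing like \(e^{n}\) (the explicit choice here is mine; any superpolynomial growth works), the p-th moments can be checked directly:

```python
import math

# E(|X_n|^p) = e^(n p) / n, assuming X_n = e^n * 1_{Omega_n} with mu(Omega_n) = 1/n.
def p_mean(n, p):
    return math.exp(n * p) / n

# Even for a tiny p, the moments eventually blow up, so L^p convergence
# fails for every p > 0, although X_n -> 0 almost surely.
print(p_mean(10, 0.1) < p_mean(100, 0.1) < p_mean(200, 0.1))  # True
print(p_mean(100, 0.1))  # e^10 / 100, about 220
```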
Theorem 12
Let \(\left( \Omega ,{\mathcal {A}},\mu \right) \) be a non-atomic probability space and let \({\mathcal {S}}\) be the set of all sequences \(\left\{ X_n\right\} _{n=1}^{\infty }\) of random variables on \(\left( \Omega ,{\mathcal {A}},\mu \right) \) such that \(\left\{ X_n\right\} _{n=1}^{\infty }\) converges to 0 in \(L^p\)-sense for all \(p>0\) but does not converge almost surely. Then \({\mathcal {S}}\) is \({\mathfrak {c}}\)-lineable.
Proof
Since the probability space is non-atomic, there exists a measurable set \(\Omega _0\) such that \(\mu (\Omega _0)=1/2\) (see [16, Corollary 1.12.10]). Defining \(\Omega _1=\Omega \backslash \Omega _0,\) we find two measurable sets \(\Omega _0\) and \(\Omega _1\) such that
$$\begin{aligned} \Omega =\Omega _0\cup \Omega _1,\quad \Omega _0\cap \Omega _1=\emptyset ,\quad \mu \left( \Omega _0\right) =\mu \left( \Omega _1\right) =\frac{1}{2}. \end{aligned}$$
With the same argument, it is possible to find two measurable sets \(\Omega _{00}\) and \(\Omega _{01}\) such that
$$\begin{aligned} \Omega _0=\Omega _{00}\cup \Omega _{01},\quad \Omega _{00}\cap \Omega _{01}=\emptyset ,\quad \mu \left( \Omega _{00}\right) =\mu \left( \Omega _{01}\right) =\frac{1}{4}. \end{aligned}$$
Similarly, there exist two measurable sets \(\Omega _{10}\) and \(\Omega _{11}\) such that
$$\begin{aligned} \Omega _1=\Omega _{10}\cup \Omega _{11},\quad \Omega _{10}\cap \Omega _{11}=\emptyset ,\quad \mu \left( \Omega _{10}\right) =\mu \left( \Omega _{11}\right) =\frac{1}{4}. \end{aligned}$$
If the previous argument is repeated, we obtain a family
$$\begin{aligned} \left\{ \Omega _{j_1\cdots j_m}:m\in {\mathbb {N}},\ j_1,\ldots ,j_m\in \{0,1\}\right\} \end{aligned}$$
of measurable subsets of \(\Omega \) with the following properties:
- \(\Omega _{j_1\cdots j_m}=\Omega _{j_1\cdots j_{m}0}\cup \Omega _{j_1\cdots j_{m}1}.\)
- \(\Omega _{j_1\cdots j_{m}0}\cap \Omega _{j_1\cdots j_{m}1}=\emptyset .\)
- \(\mu \left( \Omega _{j_1\cdots j_{m}0}\right) =\mu \left( \Omega _{j_1\cdots j_{m}1}\right) =1/2^{m+1}.\)
- For each \(m\in {\mathbb {N}},\) \(\Omega =\bigcup \left\{ \Omega _{j_1\cdots j_m}: j_1,\ldots ,j_m\in \{0,1\}\right\} .\)
Let \(X_1={\textbf{1}}_{\Omega }.\) Moreover, given a natural number \(n\ge 2,\) we consider its binary expansion:
$$\begin{aligned} n=2^m+j_1 2^{m-1}+j_2 2^{m-2}+\cdots +j_m, \end{aligned}$$
where \(m\ge 1\) and \(j_1,\ldots ,j_m\in \{0,1\}\) are unique for each \(n\ge 2.\) Hence, we can define
$$\begin{aligned} X_n={\textbf{1}}_{\Omega _{j_1\cdots j_m}}. \end{aligned}$$
Let this m be denoted by \(n^*.\) The following table clarifies how each \(X_n\) is defined.
\(n=1\) | \(X_1={\textbf{1}}_{\Omega }\) | |
\(n=2=2^1+0\) | \(n^*=1\) | \(X_2={\textbf{1}}_{\Omega _0}\) |
\(n=3=2^1+1\) | \(n^*=1\) | \(X_3={\textbf{1}}_{\Omega _1}\) |
\(n=4=2^2+2\cdot 0+0\) | \(n^*=2\) | \(X_4={\textbf{1}}_{\Omega _{00}}\) |
\(n=5=2^2+2\cdot 0+1\) | \(n^*=2\) | \(X_5={\textbf{1}}_{\Omega _{01}}\) |
\(n=6=2^2+2\cdot 1+0\) | \(n^*=2\) | \(X_6={\textbf{1}}_{\Omega _{10}}\) |
\(n=7=2^2+2\cdot 1+1\) | \(n^*=2\) | \(X_7={\textbf{1}}_{\Omega _{11}}\) |
\(n=8=2^3+2^2\cdot 0+2\cdot 0+0\) | \(n^*=3\) | \(X_8={\textbf{1}}_{\Omega _{000}}\) |
\(n=9=2^3+2^2\cdot 0+2\cdot 0+1\) | \(n^*=3\) | \(X_9={\textbf{1}}_{\Omega _{001}}\) |
\(\vdots \) | \(\vdots \) | \(\vdots \) |
Let us see that \(\left\{ X_n\right\} _{n=1}^{\infty }\) belongs to \({\mathcal {S}}.\) On the one hand, for every \(p>0,\)
$$\begin{aligned} E\left( |X_n|^p\right) =\mu \left( \Omega _{j_1\cdots j_{n^*}}\right) =\frac{1}{2^{n^*}}\longrightarrow 0, \end{aligned}$$
which means that \(\left\{ X_n\right\} _{n=1}^{\infty }\) converges to 0 in \(L^p\) for all \(p>0.\) On the other hand, for each \(\omega \in \Omega \) and each \(m\in {\mathbb {N}},\) there exist unique \(j_1,\ldots ,j_m\in \{0,1\}\) such that \(\omega \in \Omega _{j_1\cdots j_m}.\) If \(0\le \ell <2^m,\) then
$$\begin{aligned} X_{2^m+\ell }(\omega )={\left\{ \begin{array}{ll} 1 &{} \text {if } \ell =j_1 2^{m-1}+\cdots +j_m,\\ 0 &{} \text {otherwise.} \end{array}\right. } \end{aligned}$$
As a consequence, the sequence \(\left\{ X_n(\omega )\right\} _{n=1}^{\infty }\) contains infinitely many terms equal to 0 and infinitely many terms equal to 1. We thus conclude that \(\left\{ X_n(\omega )\right\} _{n=1}^{\infty }\) does not converge for any \(\omega \in \Omega .\) This proves that \(\left\{ X_n\right\} _{n=1}^{\infty }\) belongs to \({\mathcal {S}},\) so this set is \({\mathfrak {c}}\)-lineable by Theorem 6. \(\square \)
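Identifying \(\Omega \) with \([0,1)\) and \(\Omega _{j_1\cdots j_m}\) with the dyadic interval determined by the digits \(j_1\cdots j_m\) (a concrete realization chosen here for illustration), the scheme in the table can be coded as:

```python
# X_n is the indicator of the dyadic interval of level n* picked out by the
# binary expansion n = 2^m + j_1*2^(m-1) + ... + j_m (n* = m in the paper).
def X(n, omega):
    if n == 1:
        return 1                   # X_1 = indicator of the whole space
    m = n.bit_length() - 1         # this is n*
    ell = n - 2**m                 # j_1 ... j_m are the binary digits of ell
    return 1 if ell / 2**m <= omega < (ell + 1) / 2**m else 0

omega = 0.3
vals = [X(n, omega) for n in range(1, 16)]
print(vals)       # [1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0]
# Each dyadic level m contributes exactly one 1, so X_n(omega) equals 1
# infinitely often and 0 infinitely often: no pointwise convergence,
# although E(X_n^p) = 2^(-n*) -> 0 for every p > 0.
print(sum(vals))  # 4: one hit per level m = 0, 1, 2, 3
```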
3.3 Convergence in distribution and convergence in probability
Although every sequence of random variables that converges in probability also converges in distribution, the converse does not hold in general (see [34, Proposition 5.13] and [38, Section 14.2]). Our next result will show the lineability of the collection of sequences converging in distribution but not in probability.
Theorem 13
Let \(\left( \Omega ,{\mathcal {A}},\mu \right) \) be a probability space on which it is possible to define infinitely many independent, identically distributed, pairwise different random variables. Let \({\mathcal {S}}\) be the set of all sequences of random variables on \(\left( \Omega ,{\mathcal {A}},\mu \right) \) that converge in distribution but do not converge in probability. Then \({\mathcal {S}}\) is \({\mathfrak {c}}\)-lineable.
Proof
Let \(\left\{ X_n\right\} _{n=1}^{\infty }\) be a sequence of independent, identically distributed, pairwise different random variables on \(\left( \Omega ,{\mathcal {A}},\mu \right) .\) Since all the random variables \(X_n\) have the same distribution function, which we will call F, the sequence \(\left\{ X_n\right\} _{n=1}^{\infty }\) converges in distribution to \(X_1.\)
If F only took the values 0 and 1, every random variable \(X_n\) would be constant almost surely and equal to the same value (\(\min \left\{ x\in {\mathbb {R}}:F(x)=1\right\} \)). Therefore, we would have that \(X_n=X_m\) for all \(n,m\in {\mathbb {N}},\) which contradicts the hypothesis of the theorem. Then there must exist \(x_0\in {\mathbb {R}}\) such that \(0<F(x_0)<1.\) Since F is right-continuous, there exists \(\delta >0\) such that if \(x_0\le x<x_0+\delta ,\) then \(F(x)<1.\) Choose any a and b so that
$$\begin{aligned} x_0\le a<b<x_0+\delta . \end{aligned}$$
Since F is non-decreasing, for all m and n we have
$$\begin{aligned} \mu \left( X_m\le a\right) =F(a)\ge F(x_0)>0 \quad \text {and}\quad \mu \left( X_n>b\right) =1-F(b)>0. \end{aligned}$$
(3.1)
These inequalities will be applied later on.
It is known that convergence in probability is metrizable (see [15, Exercise 21.15]). In fact, convergence in probability is equivalent to convergence with respect to the following metric in the space of all random variables on \(\left( \Omega ,{\mathcal {A}},\mu \right) \):
$$\begin{aligned} d(X,Y)=E\left( \frac{|X-Y|}{1+|X-Y|}\right) . \end{aligned}$$
Let us estimate the distance between \(X_m\) and \(X_n\) for two natural numbers \(m\ne n.\) To do that, we will use that the function \(f(t)=\frac{t}{1+t}\) is increasing on \([0,+\infty )\) and that the variables \(X_m\) and \(X_n\) are independent:
$$\begin{aligned} d\left( X_m,X_n\right) \ge \frac{b-a}{1+(b-a)}\,\mu \left( |X_m-X_n|\ge b-a\right) \ge \frac{b-a}{1+(b-a)}\,\mu \left( X_m\le a\right) \mu \left( X_n>b\right) . \end{aligned}$$
By (3.1), if \(m\ne n,\) then
$$\begin{aligned} d\left( X_m,X_n\right) \ge \frac{b-a}{1+(b-a)}\,F(x_0)\left( 1-F(b)\right) >0. \end{aligned}$$
We conclude that \(\left\{ X_n\right\} _{n=1}^{\infty }\) is not a Cauchy sequence, which implies that \(\left\{ X_n\right\} _{n=1}^{\infty }\) cannot converge in probability. Therefore, \(\left\{ X_n\right\} _{n=1}^{\infty }\) belongs to \({\mathcal {S}}.\) By Theorem 6, we obtain the \({\mathfrak {c}}\)-lineability of the set \({\mathcal {S}}.\) \(\square \)
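A Monte Carlo sketch (my own illustration; the uniform distribution and the sample size are arbitrary choices) shows the phenomenon: for i.i.d. non-degenerate variables, the distance \(E\left( |X_m-X_n|/(1+|X_m-X_n|)\right) \) stays bounded away from zero.

```python
import random

random.seed(0)

# Estimate d(X_m, X_n) = E(|X_m - X_n| / (1 + |X_m - X_n|)) for independent
# Uniform(0,1) variables; the exact value is 3 - 4*log(2), about 0.227.
def kyfan_iid_uniform(samples=200_000):
    total = 0.0
    for _ in range(samples):
        d = abs(random.random() - random.random())
        total += d / (1 + d)
    return total / samples

est = kyfan_iid_uniform()
print(round(est, 2))  # a gap independent of m and n, so the sequence is not Cauchy
```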
Remark 14
The existence of a probability space and a collection of independent random variables on it with prescribed distributions, as required in Theorems 13 and 15, is guaranteed by [15, Theorem 20.4].
4 Convergence of sums of sequences of random variables
Let X, Y, \(X_n,\) and \(Y_n\) (with \(n\in {\mathbb {N}}\)) be random variables and suppose that \(\left\{ X_n\right\} _{n=1}^{\infty }\) converges in distribution to X and \(\left\{ Y_n\right\} _{n=1}^{\infty }\) converges in distribution to Y, which will be denoted by \(X_n\overset{\text {d}}{\longrightarrow }X\) and \(Y_n\overset{\text {d}}{\longrightarrow }Y.\) In general, it is not possible to guarantee the convergence of \(\left\{ X_n+Y_n\right\} _{n=1}^{\infty }\) to \(X+Y,\) as can be seen in [38, Section 14.10]. Inspired by the cited counterexample, we will prove a stronger result:
Theorem 15
Let us suppose that \(\left( \Omega ,{\mathcal {A}},\mu \right) \) is a probability space such that there are two subsets \(\Omega _1\in {\mathcal {A}}\) and \(\Omega _2\in {\mathcal {A}}\) with \(\Omega =\Omega _1\cup \Omega _2,\) \(\Omega _1\cap \Omega _2=\emptyset ,\) and \(\mu (\Omega _1)=\mu (\Omega _2)=1/2.\) For each \(i\in \{1,2\},\) let \({\mathcal {A}}_i\) be the family of elements of \({\mathcal {A}}\) contained in \(\Omega _i\) and let \(\mu _i(B)=2\mu (B)\) for all \(B\in {\mathcal {A}}_i.\) In addition, suppose that there is a sequence \(\left\{ X_n\right\} _{n=1}^{\infty }\) of independent random variables defined on \(\left( \Omega _1,{\mathcal {A}}_1,\mu _1\right) \) and there is another sequence \(\left\{ Y_n\right\} _{n=1}^{\infty }\) of independent random variables defined on \(\left( \Omega _2,{\mathcal {A}}_2,\mu _2\right) \) such that the distribution function of every \(X_n\) and every \(Y_n\) is
Under these conditions, there exist two vector spaces, \({\mathcal {V}}_1\) and \({\mathcal {V}}_2,\) of sequences of random variables on \(\left( \Omega ,{\mathcal {A}},\mu \right) \) with the following properties:
(1) The dimension of both spaces \({\mathcal {V}}_1\) and \({\mathcal {V}}_2\) is \({\mathfrak {c}}.\)
(2) If \(\left\{ Z_n\right\} _{n=1}^{\infty }\in {\mathcal {V}}_1\backslash \{0\}\) and \(\left\{ T_n\right\} _{n=1}^{\infty }\in {\mathcal {V}}_2\backslash \{0\},\) then there are random variables Z and T on \(\left( \Omega ,{\mathcal {A}},\mu \right) \) satisfying \(Z_n\overset{\textrm{d}}{\longrightarrow }Z,\) \(T_n\overset{\textrm{d}}{\longrightarrow }T,\) and \(Z_n+T_n\overset{\textrm{d}}{\nrightarrow }Z+T.\)
Proof
Given \(a\in {\mathbb {R}}\backslash \{0\}\) and \(n\in {\mathbb {N}},\) let \(F_a\) denote the distribution function of \(aX_n.\) On the one hand, if \(a>0,\) then
On the other hand, if \(a<0,\) then
That is, for every \(a\in {\mathbb {R}}\backslash \{0\}\) and every \(n\in {\mathbb {N}},\) the distribution function of \(aX_n\) is
As a consequence, the density function of \(aX_n\) is
For each \(n\in {\mathbb {N}},\) we define the following extension of \(X_n\) to the whole space \(\left( \Omega ,{\mathcal {A}},\mu \right) \):
If \(n_1,\ldots ,n_k\) are distinct positive integers and \(a_1,\ldots ,a_k\) are non-zero real numbers, then the variables \(a_1X_{n_1},\ldots ,a_kX_{n_k}\) are independent, so the density function of \(\sum _{i=1}^{k}a_{i}X_{n_i}\) is the following convolution:
(see [15, page 267]). Therefore, the distribution function of \(\sum _{i=1}^{k}a_{i}X_{n_i}\) is
If S is defined as
then the distribution function of the extension \(\sum _{i=1}^{k}a_{i}X_{n_i}^*\) is \(\left( G_{a_1,\ldots ,a_k}+S\right) /2.\) Indeed, since \(\sum _{i=1}^{k}a_{i}X_{n_i}^*=0\) on \(\Omega _2,\) if \(x<0,\) then
On the other hand, if \(x\ge 0,\) then
This proves that
is the distribution function of \(\sum _{i=1}^{k}a_{i}X_{n_i}^*\) whenever \(n_1,\ldots ,n_k\) are distinct positive integers and \(a_1,\ldots ,a_k\) are non-zero real numbers.
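The two structural facts used here — the density of a sum of independent variables is the convolution of the densities, and (by the Titchmarsh convolution theorem) the left endpoint of the support of a convolution is the sum of the left endpoints — have a transparent discrete analogue. The sketch below uses hypothetical integer-valued variables (two fair dice), not the densities \(f_a\) of the proof.

```python
def convolve_pmf(p, q):
    """pmf of X + Y for independent integer-valued X and Y:
    the discrete analogue of the density convolution f_{a_1} * ... * f_{a_k}."""
    out = {}
    for x, px in p.items():
        for y, qy in q.items():
            out[x + y] = out.get(x + y, 0.0) + px * qy
    return out

die = {k: 1 / 6 for k in range(1, 7)}  # hypothetical example distribution
two = convolve_pmf(die, die)

print(round(sum(two.values()), 10))  # 1.0: total mass is preserved
print(min(two))                      # 2: left endpoints of supports add up
print(abs(two[7] - 1 / 6) < 1e-12)   # True: P(sum = 7) = 1/6
```

The line `min(two)` is the discrete shadow of the support computation performed with the Titchmarsh convolution theorem further on.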
Let \(\Delta \) be a family of \({\mathfrak {c}}\) almost disjoint subsets of \({\mathbb {N}}\) (see Theorem 5). Given \(n\in {\mathbb {N}}\) and \(\sigma \in \Delta ,\) the symbol \(\sigma (n)\) denotes the nth element of \(\sigma .\) The space \({\mathcal {V}}_1\) is defined as the linear span of the set
Let us see that any non-zero element of \({\mathcal {V}}_1\) is convergent in distribution. Let \(\left\{ \sum _{i=1}^{k}a_iX_{\sigma _i(n)}^*\right\} _{n=1}^{\infty }\in {\mathcal {V}}_1,\) where \(\sigma _1,\ldots ,\sigma _k\) are distinct elements of \(\Delta \) and \(a_1,\ldots ,a_k\in {\mathbb {R}}\backslash \{0\}.\) Since \(\sigma _1,\ldots ,\sigma _k\) are almost disjoint sets, there exists \(n_0\) such that \(\sigma _i(n)\ne \sigma _j(n)\) for all \(n\ge n_0\) and \(i\ne j.\) By Eq. (4.3), if \(n\ge n_0,\) the distribution function of \(\sum _{i=1}^{k}a_iX_{\sigma _i(n)}^*\) is \(H_{a_1,\ldots ,a_k}.\) Moreover, since the distribution function of \(\sum _{i=1}^{k}a_iX_i^*\) is also \(H_{a_1,\ldots ,a_k},\) we can conclude that
Note that if \(i\in \{1,\ldots ,k\},\) then
Using that \(g_{a_1,\ldots ,a_k}=f_{a_1}*\cdots *f_{a_k}\) and the Titchmarsh convolution theorem (see [36]), we obtain
Since \(G_{a_1,\ldots ,a_k}(x)=\int _{-\infty }^{x}g_{a_1,\ldots ,a_k}(t)dt,\) it follows that
Hence,
Therefore, \(H_{a_1,\ldots ,a_k}\ne S.\) Since S is the distribution function of the random variable which is equal to zero everywhere, we deduce that \(\sum _{i=1}^{k}a_iX_{\sigma _i(n)}^*\ne 0\) if \(n\ge n_0,\) so \(\left\{ \sum _{i=1}^{k}a_iX_{\sigma _i(n)}^*\right\} _{n=1}^{\infty }\) is not the zero sequence. That is, the set
is linearly independent and the dimension of \({\mathcal {V}}_1\) is \({\mathfrak {c}}.\)
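The family \(\Delta \) of \({\mathfrak {c}}\) almost disjoint subsets of \({\mathbb {N}}\) invoked from Theorem 5 can be realized concretely: assign to each real number the set of integers encoding the initial segments of its decimal expansion; two distinct reals then share only the finitely many prefixes on which their expansions agree. A small sketch (the particular encoding is one of several standard choices):

```python
import math

def prefix_set(r, depth=12):
    """Map a real in (0,1) to the set of integers encoding the initial
    segments of its decimal expansion; distinct reals give sets whose
    intersection is finite (only the shared prefixes)."""
    digits = f"{r:.{depth}f}".split(".")[1]
    # prepend "1" so that prefixes with leading zeros stay distinct
    return {int("1" + digits[:k]) for k in range(1, depth + 1)}

a, b = prefix_set(1 / math.pi), prefix_set(1 / math.e)
print(len(a & b))  # 1: only the first digit (3) is shared
```

Distinct reals eventually disagree at some digit, after which no further prefixes coincide, so any two of these sets intersect in a finite set, and there are as many of them as reals.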
For each \(n\in {\mathbb {N}},\) the random variable \(Y_n\) defined on \(\Omega _2\) is extended to the whole space \(\Omega \) as follows:
The space \({\mathcal {V}}_2\) is defined as the linear span of
Similar arguments to those used previously imply that the dimension of \({\mathcal {V}}_2\) is \({\mathfrak {c}}\) and that any non-zero element of \({\mathcal {V}}_2\) is convergent in distribution. In particular, if \(\varphi _1,\ldots ,\varphi _{\ell }\) are distinct elements of \(\Delta \) and \(b_1,\ldots ,b_{\ell }\in {\mathbb {R}}\backslash \{0\},\) then there is \(n_1\in {\mathbb {N}}\) such that if \(n\ge n_1,\) the distribution functions of \(\sum _{i=1}^{\ell }b_iY_{\varphi _i(n)}\) and \(\sum _{i=1}^{\ell }b_iY_{\varphi _i(n)}^*\) are \(G_{b_1,\ldots ,b_{\ell }}\) and \(H_{b_1,\ldots ,b_{\ell }},\) respectively. By Eq. (4.3), the distribution function of \(\sum _{i=1}^{\ell }b_iX_{k+i}^*\) is also \(H_{b_1,\ldots ,b_{\ell }},\) so we deduce that
To conclude the proof, we have to prove that the sum of both sequences does not converge in distribution to the sum of the limits. Setting
we must show that \(\left\{ Z_n+T_n\right\} _{n=1}^{\infty }\) does not converge in distribution to L. Let \(n\ge \max \{n_0,n_1\}.\) Since \(Z_n=0\) on \(\Omega _2\) and \(T_n=0\) on \(\Omega _1,\) if \(x\in {\mathbb {R}},\) then
where we have applied Eq. (4.2) (and a similar one for \(\sum _{i=1}^{\ell }b_iY_{\varphi _i(n)}\)). This proves that if \(n\ge \max \{n_0,n_1\},\) then the distribution function of \(Z_n+T_n\) is
Moreover, by Eq. (4.3), the distribution function of L is
Let us see that these two distribution functions are not equal. On the one hand, by the Titchmarsh convolution theorem (see [36]), we have
Since \(G_{a_1,\ldots ,a_k}(x)=\int _{-\infty }^{x}g_{a_1,\ldots ,a_k}(t)dt,\) we deduce that
Similarly,
Since \(G_{a_1,\ldots ,a_k}\) and \(G_{b_1,\ldots ,b_{\ell }}\) are non-negative at every point, it follows that
On the other hand, applying again the Titchmarsh convolution theorem, we obtain
which implies
and then
Therefore,
which proves that \(\left\{ Z_n+T_n\right\} _{n=1}^{\infty }\) does not converge in distribution to L. \(\square \)
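The mechanism behind Theorem 15 is the classical one from [38, Section 14.10]: convergence in distribution only constrains the marginal laws, so the joint behaviour of the two sequences can ruin the sum. A minimal discrete illustration, with a hypothetical symmetric variable \(\xi \) rather than the variables of the proof:

```python
# xi uniform on {-1, 1}; take X_n = xi and Y_n = -xi for every n.
law_xi = {-1: 0.5, 1: 0.5}
law_minus_xi = {-v: p for v, p in law_xi.items()}

# X_n -> xi and Y_n -> xi in distribution, since xi and -xi share the same law:
print(law_xi == law_minus_xi)  # True

# yet X_n + Y_n = 0 for all n, while the sum of the limits is 2*xi,
# whose law is not the point mass at 0:
law_sum_of_limits = {2 * v: p for v, p in law_xi.items()}
print(law_sum_of_limits == {0: 1.0})  # False
```

Here \(\left\{ X_n+Y_n\right\} \) converges in distribution to the constant 0, not to the law of \(X+Y\): the marginal limits alone do not determine the limit of the sum.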
5 Convex lineability of sets of sequences of random variables
5.1 Convergence of distribution functions and convergence of densities
Let us suppose that F and \(F_n\) (with \(n\in {\mathbb {N}}\)) are absolutely continuous distribution functions with densities f and \(f_n,\) respectively. By the Scheffé theorem, if \(\lim _{n\rightarrow \infty }f_n(x)=f(x)\) for almost all \(x\in {\mathbb {R}},\) then \(\left\{ F_n\right\} _{n=1}^{\infty }\) converges to F at every point of \({\mathbb {R}}\) (see [15, Theorem 16.12]). However, the converse result does not hold in general (see [38, Section 14.9]). In this sense, we have the following result that relates both modes of convergence from the point of view of lineability:
Theorem 16
Let \({\mathcal {S}}\) be the set of all sequences \(\left\{ F_n\right\} _{n=1}^{\infty }\) of absolutely continuous distribution functions such that \(\left\{ F_n\right\} _{n=1}^{\infty }\) converges at every point of \({\mathbb {R}},\) but the sequence of their densities \(\left\{ f_n\right\} _{n=1}^{\infty }\) does not converge at almost any point of [0, 1]. Then \({\mathcal {S}}\) is \({\mathfrak {c}}\)-convex lineable.
Proof
For each \(b\in (0,+\infty )\) and \(n\in {\mathbb {N}},\) define
In order to prove that \(F_{b,n}\) is a distribution function, we only need to show that \(F_{b,n}\) is non-decreasing. To do that, we will study its derivative on (0, 1), which will be denoted by \(f_{b,n}.\) Thus, if \(0<x<1,\) then
Let \(g_n(x)=x-\frac{\sin (2\pi nx)}{2\pi n}\) and observe that \(g_n'(x)=1-\cos (2\pi nx)>0\) for all \(x\in (0,1),\) which means that \(g_n\) is strictly increasing on (0, 1). Since \(g_n(0)=0,\) it follows that \(g_n(x)>0\) for all \(x\in (0,1).\) Therefore, \(f_{b,n}(x)>0\) for all \(x\in (0,1),\) which implies that \(F_{b,n}\) is increasing on (0, 1) and, consequently, it is a distribution function. We also consider another distribution function:
The set \({\mathcal {M}}=\left\{ \left\{ F_{b,n}\right\} _{n=1}^{\infty }:b\in (0,+\infty )\right\} \) is linearly independent. Indeed, let us suppose that \(0<b_1<\cdots <b_k\) and \(a_1,\ldots ,a_k\in {\mathbb {R}}\backslash \{0\}.\) If for some \(n\in {\mathbb {N}}\) we had
for every \(x\in [0,1],\) the Identity Theorem for holomorphic functions would imply that
at every point x of the complex plane. As a consequence, it would follow that
This contradiction shows that, for every \(n\in {\mathbb {N}},\) \(\sum _{i=1}^{k}a_iF_{b_i,n}\) is not identically zero on [0, 1]. Therefore, \(\left\{ \sum _{i=1}^{k}a_iF_{b_i,n}\right\} _{n=1}^{\infty }\) is not the zero sequence and then the elements of \({\mathcal {M}}\) are linearly independent.
Next we will prove that every convex combination of elements of \({\mathcal {M}}\) belongs to \({\mathcal {S}}.\) Let us suppose again that \(0<b_1<\cdots <b_k,\) \(a_1,\ldots ,a_k\in [0,1],\) and \(\sum _{i=1}^{k}a_i=1.\) The sum \(\sum _{i=1}^{k}a_iF_{b_i,n}\) is a new distribution function such that
for all \(x\in {\mathbb {R}}.\)
Now let us see that the sequence of densities \(\left\{ \sum _{i=1}^{k}a_{i}f_{b_i,n}\right\} _{n=1}^{\infty }\) does not converge at any point of (0, 1). If \(x\in (0,1),\) then
Note that
Moreover, \(\sum _{i=1}^{k}a_{i}e^{b_i(x-1)}\) is strictly positive and does not depend on n. Hence, if \(\left\{ \sum _{i=1}^{k}a_{i}f_{b_i,n}(x)\right\} _{n=1}^{\infty }\) were convergent, the sequence \(\left\{ \cos (2\pi nx)\right\} _{n=1}^{\infty }\) would be convergent as well, which is not possible. On the one hand, if \(x\in {\mathbb {Q}},\) then that sequence is periodic and non-constant. On the other hand, if \(x\notin {\mathbb {Q}},\) then \(\left\{ (\cos (2\pi nx),\sin (2\pi nx))\right\} _{n=1}^{\infty }\) is dense in the unit circle and so \(\left\{ \cos (2\pi nx)\right\} _{n=1}^{\infty }\) is dense in \([-1,1]\) (see [30, Proposition 4.1.1]). Therefore, \(\left\{ \sum _{i=1}^{k}a_{i}f_{b_i,n}\right\} _{n=1}^{\infty }\) does not converge at any point of (0, 1). Since densities are unique almost everywhere, any other choice of densities fails to converge at almost every point of (0, 1). \(\square \)
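The density claim borrowed from [30, Proposition 4.1.1] — for irrational x the points \((\cos (2\pi nx),\sin (2\pi nx))\) fill the unit circle densely, hence \(\left\{ \cos (2\pi nx)\right\} \) is dense in \([-1,1]\) — can be probed numerically at a sample irrational point:

```python
import math

x = math.sqrt(2) - 1  # a sample irrational point of (0, 1)
seq = [math.cos(2 * math.pi * n * x) for n in range(1, 3001)]

# every target value in [-1, 1] is approached within 0.05 by some term
targets = [-1 + k / 5 for k in range(11)]
print(all(min(abs(c - t) for c in seq) < 0.05 for t in targets))  # True
```

Density of the values rules out convergence of the sequence, which is the only fact the proof needs.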
5.2 Convergence in distribution and convergence in variation
Let us suppose again that F and \(F_n\) (with \(n\in {\mathbb {N}}\)) are absolutely continuous distribution functions with density functions f and \(f_n,\) respectively. If \(\left\{ F_n\right\} _{n=1}^{\infty }\) converges in variation to F, for each \(x\in {\mathbb {R}}\) we have
Therefore, \(\left\{ F_n\right\} _{n=1}^{\infty }\) converges weakly to F. However, the converse is not true in general (see [38, Section 14.12]). The next result shows that the set of sequences of absolutely continuous distribution functions converging weakly but not in variation contains a large convex space in the lineability sense.
Theorem 17
Let F denote the function on \({\mathbb {R}}\) defined as follows :
Let \({\mathcal {S}}\) denote the set of all sequences of absolutely continuous distribution functions that converge to F at every point of \({\mathbb {R}}\) but do not converge to F in variation. Then \({\mathcal {S}}\) is \({\mathfrak {c}}\)-convex lineable.
Proof
We will work restricted to [0, 1], since all the distribution functions considered in this proof will take the value 0 on \((-\infty ,0]\) and the value 1 on \([1,+\infty ).\) Given \(b\in (0,1)\) and \(n\in {\mathbb {N}},\) the function \(g_{b,n}\) is defined as follows:
Note that \(\int _{0}^{1}g_{b,n}(t)dt=1.\) Let \(G_{b,n}\) be the distribution function associated to the density \(g_{b,n}.\) For each \(x\in [0,1]\) there exists \(j\in \{0,\ldots ,n-1\}\) such that either \(x\in \left[ \frac{j}{n},\frac{j+b}{n}\right] \) or \(x\in \left[ \frac{j+b}{n},\frac{j+1}{n}\right] .\) On the one hand, if \(x\in \left[ \frac{j}{n},\frac{j+b}{n}\right] \) for some \(j\in \{0,\ldots ,n-1\},\) then
Hence,
and
On the other hand, if \(x\in \left[ \frac{j+b}{n},\frac{j+1}{n}\right] \) for some \(j\in \{0,\ldots ,n-1\},\) then
Hence,
and
Therefore, we obtain that
for every \(x\in [0,1].\)
Let us prove that the set \(\left\{ G_{b,1}: b\in (0,1)\right\} \) is linearly independent. Let \(k\in {\mathbb {N}},\) \(0<b_1<\cdots<b_k<1,\) and \(a_1,\ldots ,a_k\in {\mathbb {R}}\backslash \{0\}.\) If \(n=1\) and \(x\in \left[ b_{k-1},b_k\right] ,\) then the equalities in (5.2) imply that \(G_{b_i,1}(x)=1\) for every \(i\in \{1,\ldots ,k-1\},\) while \(G_{b_k,1}(x)=\frac{x}{b_k}\) by (5.1). Therefore, for \(x\in [b_{k-1},b_k],\) we have
Consequently, it cannot happen that \(\sum _{i=1}^{k}a_iG_{b_i,1}(x)=0\) for all \(x\in [b_{k-1},b_k].\) This proves that the set of functions \(\left\{ G_{b,1}: b\in (0,1)\right\} \) is linearly independent, which in turn implies that the set of sequences
is linearly independent as well.
Next we will prove that every convex combination of elements of the set \({\mathcal {M}}\) belongs to \({\mathcal {S}}.\) Let \(k\in {\mathbb {N}},\) \(0<b_1<\cdots<b_k<1,\) and \(a_1,\ldots ,a_k\in (0,1]\) such that \(\sum _{i=1}^{k}a_i=1.\) By the inequalities in (5.3), if \(x\in [0,1],\) then
Therefore, \(\left\{ \sum _{i=1}^{k}a_iG_{b_i,n}\right\} _{n=1}^{\infty }\) converges to F at every point.
Finally, the variation between F and \(\sum _{i=1}^{k}a_iG_{b_i,n}\) is equal to
Since \(b_1<\cdots <b_k,\) we have that
for every \(i\in \{1,\ldots ,k\}\) and every \(j\in \{0,\ldots ,n-1\}.\) Therefore, if \(i\in \{1,\ldots ,k\}\) and \(j\in \{0,\ldots ,n-1\},\) then \(g_{b_i,n}(x)=0\) for all \(x\in \left[ \frac{j+b_k}{n},\frac{j+1}{n}\right] .\) Hence,
Consequently, \(\left\{ \sum _{i=1}^{k}a_iG_{b_i,n}\right\} _{n=1}^{\infty }\) does not converge to F in variation and the proof is complete. \(\square \)
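A numerical sketch under a hypothetical reconstruction of the construction: take \(g_{b,n}=1/b\) on each interval \(\left[ \frac{j}{n},\frac{j+b}{n}\right] \) and 0 on \(\left[ \frac{j+b}{n},\frac{j+1}{n}\right] \) (consistent with the estimates above, though the exact definition is not reproduced here). Then \(G_{b,n}\) stays uniformly within \(1/n\) of \(F(x)=x,\) while the variation \(\int _{0}^{1}\left| g_{b,n}(t)-1\right| dt=2(1-b)\) does not vanish as \(n\rightarrow \infty .\)

```python
def G(b, n, x):
    """Distribution function of the assumed density: 1/b on [j/n, (j+b)/n]."""
    j = min(int(n * x), n - 1)  # index of the cell containing x
    u = x - j / n               # position inside the cell
    return j / n + min(u, b / n) / b

b, n = 0.3, 50
xs = [i / 500 for i in range(501)]
print(max(abs(G(b, n, x) - x) for x in xs) <= 1 / n)  # True: uniformly close to x

# total variation of the density against the uniform density stays constant
tv = n * ((b / n) * (1 / b - 1) + (1 - b) / n)
print(round(tv, 10))  # 1.4 = 2*(1 - b), independent of n
```

Pointwise convergence to F together with a variation bounded away from 0 is precisely the dichotomy established in the proof.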
Data Availability
Not applicable.
References
1. Aizpuru, A., Pérez-Eslava, C., Seoane-Sepúlveda, J.B.: Linear structure of sets of divergent sequences and series. Linear Algebra Appl. 418, 595–598 (2006)
2. Araújo, G., Bernal-González, L., Muñoz-Fernández, G.A., Prado-Bassas, J.A., Seoane-Sepúlveda, J.B.: Lineability in sequence and function spaces. Stud. Math. 237, 119–136 (2017)
3. Araújo, G., Bernal-González, L., Fernández-Sánchez, J., Seoane-Sepúlveda, J.B.: Continuous nowhere Hölder functions on \({\mathbb{Z} }_{p}\). Proc. Am. Math. Soc. 151, 1031–1040 (2023)
4. Aron, R.M., Gurariy, V.I., Seoane-Sepúlveda, J.B.: Lineability and spaceability of sets of functions on \({\mathbb{R}}\). Proc. Am. Math. Soc. 133, 795–803 (2005)
5. Aron, R.M., Bernal-González, L., Pellegrino, D., Seoane-Sepúlveda, J.B.: Lineability: The Search for Linearity in Mathematics. Monographs and Research Notes in Mathematics. CRC Press, Boca Raton (2016)
6. Balcerzak, M., Bartoszewicz, A., Filipczak, M.: Nonseparable spaceability and strong algebrability of sets of continuous singular functions. J. Math. Anal. Appl. 407, 263–269 (2013)
7. Bartoszewicz, A., Filipczak, M., Glab, S.: Algebraic structures in the set of sequences of independent random variables. Rev. Real Acad. Cienc. Exactas Fis. Nat. Ser. A-Mat. 117, 45 (2023)
8. Bastin, F., Conejero, J.A., Esser, C., Seoane-Sepúlveda, J.B.: Algebrability and nowhere Gevrey differentiability. Isr. J. Math. 205(1), 127–143 (2015)
9. Bernal-González, L., Ordóñez-Cabrera, M.: Lineability criteria, with applications. J. Funct. Anal. 266, 3997–4025 (2014)
10. Bernal-González, L., Pellegrino, D., Seoane-Sepúlveda, J.B.: Linear subsets of nonlinear sets in topological vector spaces. Bull. Am. Math. Soc. 51, 71–130 (2014)
11. Bernal-González, L., Conejero, J.A., Murillo-Arcila, M., Seoane-Sepúlveda, J.B.: [S]-linear and convex structures in function families. Linear Algebra Appl. 579, 463–483 (2019)
12. Bernal-González, L., Cabana-Méndez, H.J., Muñoz-Fernández, G.A., Seoane-Sepúlveda, J.B.: On the dimension of subspaces of continuous functions attaining their maximum finitely many times. Trans. Am. Math. Soc. 373, 3063–3083 (2020)
13. Bernal-González, L., Calderón-Moreno, M.C., Fernández-Sánchez, J., Muñoz-Fernández, G.A., Seoane-Sepúlveda, J.B.: Construction of dense maximal-dimensional hypercyclic subspaces for Rolewicz operators. Chaos Solitons Fractals 162, 112408 (2022)
14. Bertoloto, F.J., Botelho, G., Fávaro, V.V., Jatobá, A.M.: Hypercyclicity of convolution operators on spaces of entire functions. Ann. Inst. Fourier (Grenoble) 63, 1263–1283 (2013)
15. Billingsley, P.: Probability and Measure. Wiley, Hoboken (1995)
16. Bogachev, V.I.: Measure Theory, vols. I and II. Springer, Berlin (2007)
17. Calderón-Moreno, M.C., Gerlach-Mena, P.J., Prado-Bassas, J.A.: Lineability and modes of convergence. RACSAM 114, 18 (2020)
18. Carmona-Tapia, J., Fernández-Sánchez, J., Seoane-Sepúlveda, J.B., Trutschnig, W.: Lineability, spaceability, and latticeability of subsets of \(C\)([0,1]) and Sobolev spaces. Rev. Real Acad. Cienc. Exactas Fis. Nat. Ser. A-Mat. 116, 113 (2022)
19. Ciesielski, K.C., Gámez-Merino, J.L., Pellegrino, D., Seoane-Sepúlveda, J.B.: Lineability, spaceability, and additivity cardinals for Darboux-like functions. Linear Algebra Appl. 440, 307–317 (2014)
20. Ciesielski, K.C., Seoane-Sepúlveda, J.B.: Differentiability versus continuity: restriction and extension theorems and monstrous examples. Bull. Am. Math. Soc. 56, 211–260 (2019)
21. Conejero, J.A., Fenoy, M., Murillo-Arcila, M., Seoane-Sepúlveda, J.B.: Lineability within probability theory settings. RACSAM 111, 673–684 (2017)
22. D’Aniello, E., Maiuriello, M.: On spaceability of shift-like operators on \(L^p\). J. Math. Anal. Appl. 526(1), 127177 (2023). https://doi.org/10.1016/j.jmaa.2023.127177
23. D’Aniello, E., Maiuriello, M., Seoane-Sepúlveda, J.B.: The interplay between recurrence and hypercyclicity in dissipative contexts. Rev. Real Acad. Cienc. Exactas Fis. Nat. Ser. A-Mat. 118(1), Paper No. 30 (2024). https://doi.org/10.1007/s13398-023-01528-1
24. Enflo, P.H., Gurariy, V.I., Seoane-Sepúlveda, J.B.: Some results and open questions on spaceability in function spaces. Trans. Am. Math. Soc. 366, 611–625 (2014)
25. Fernández-Sánchez, J., Rodríguez-Vidanes, D.L., Seoane-Sepúlveda, J.B., Trutschnig, W.: Lineability and \(K\)-linear discontinuous functions. Linear Algebra Appl. 645, 52–67 (2022)
26. Fernández-Sánchez, J., Seoane-Sepúlveda, J.B., Trutschnig, W.: Lineability, algebrability, and sequences of random variables. Math. Nachr. 295, 861–875 (2022)
27. Gámez-Merino, J.L., Seoane-Sepúlveda, J.B.: An undecidable case of lineability in \({\mathbb{R}}^{\mathbb{R}}\). J. Math. Anal. Appl. 401, 959–962 (2013)
28. Gurariy, V.I.: Subspaces and bases in spaces of continuous functions. Dokl. Akad. Nauk SSSR 167, 971–973 (1966) (Russian)
29. Gurariy, V.I.: Linear spaces composed of everywhere nondifferentiable functions. C. R. Acad. Bulgare Sci. 44, 13–16 (1991) (Russian)
30. Hasselblatt, B., Katok, A.: A First Course in Dynamics with a Panorama of Recent Developments. Cambridge University Press, New York (2003)
31. Hsu, P.L., Robbins, H.: Complete convergence and the law of large numbers. Proc. Nat. Acad. Sci. USA 33, 25–31 (1947)
32. Jech, T.: Set Theory. The Third Millennium Edition, Revised and Expanded. Springer Monographs in Mathematics. Springer, Berlin (2003)
33. Jiménez-Rodríguez, P., Muñoz-Fernández, G.A., Seoane-Sepúlveda, J.B.: On Weierstrass’ monsters and lineability. Bull. Belg. Math. Soc. Simon Stevin 20, 577–586 (2013)
34. Karr, A.F.: Probability. Springer-Verlag, New York (1993)
35. Kuczma, M.: An Introduction to the Theory of Functional Equations and Inequalities: Cauchy’s Equation and Jensen’s Inequality. Birkhäuser, Basel (2009)
36. Ransford, T.: Letter to the editor: a short complex-variable proof of the Titchmarsh convolution theorem. J. Fourier Anal. Appl. 25, 2697–2702 (2019)
37. Seoane-Sepúlveda, J.B.: Chaos and lineability of pathological phenomena in analysis. Thesis (Ph.D.), Kent State University. ProQuest LLC, Ann Arbor, MI (2006)
38. Stoyanov, J.M.: Counterexamples in Probability. Wiley Series in Probability and Statistics. Wiley, Chichester (1997)
39. Weierstrass, K.: Abhandlungen aus der Funktionenlehre. Julius Springer, Berlin (1886)
Funding
Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature.
G. Araújo was supported by Grant 3024/2021, Paraíba State Research Foundation (FAPESQ).
Cite this article
Araújo, G., Fenoy, M., Fernández-Sánchez, J. et al. Modes of convergence of random variables and algebraic genericity. Rev. Real Acad. Cienc. Exactas Fis. Nat. Ser. A-Mat. 118, 63 (2024). https://doi.org/10.1007/s13398-024-01561-8