Abstract
The search for lineability consists in finding large vector spaces of mathematical objects with special properties. Such examples have arisen in recent years in a wide range of settings, such as real and complex analysis, sequence spaces, linear dynamics, norm-attaining functionals, zeros of polynomials in Banach spaces, Dirichlet series, and non-convergent Fourier series, among others. In this paper we present the novelty of linking this notion of lineability to the area of Probability Theory by providing positive (and negative) results within the framework of martingales, random variables, and certain stochastic processes.
1 Introduction
Since the beginning of the 21st century many authors have become interested in the study of linearity within nonlinear settings or, in other words, the search for linear structures of mathematical objects enjoying certain special or unexpected properties. Vector spaces and linear algebras are elegant mathematical structures which, at first glance, seem to be “forbidden” to families of “strange” objects. In other words, take a function with some special or (as it is sometimes called) “pathological” property (for example, the classical nowhere differentiable function, also known as Weierstrass’ monster). Coming up with a concrete example of such a function might be difficult. In fact, it may seem so difficult that, if you succeed, you may think that there cannot be too many functions of that kind, and that one probably cannot find infinite dimensional vector spaces or infinitely generated algebras of such functions. This is, however, exactly what has been happening in recent years in many fields of mathematics, from linear chaos to real and complex analysis [2, 6, 15], passing through set theory [17] and linear and multilinear algebra, or even operator theory [9, 11], topology, measure theory [5, 6, 13], and abstract algebra.
Recall that, as is nowadays common terminology, a subset M of a topological vector space X is called lineable (respectively, spaceable) in X if there exists an infinite dimensional linear space (respectively, an infinite dimensional closed linear space) \(Y \subset M\cup \{0\}\). Moreover, given an algebra \(\mathcal {A},\) a subset \(\mathcal {B}\subset \mathcal {A}\) is said to be algebrable if there is a subalgebra \(\mathcal {C}\) of \(\mathcal {A}\) such that \(\mathcal {C}\subset \mathcal {B}\cup \{0\}\) and the cardinality of any set of generators of \(\mathcal {C}\) is infinite (see, e.g., [2, 3, 7]).
As we mentioned above, there have recently been many results regarding the linear structure of certain special subsets. One of the earliest results in this direction was provided by Gurariy, who showed that the set of Weierstrass’ monsters is lineable [18]. Also, and more recently, Enflo et al. [15] proved that, for every infinite dimensional closed subspace X of \(\mathcal {C}[0,1]\), the set of functions in X having infinitely many zeros in [0, 1] is spaceable in X (see, also, [12, 16]). A vast literature on this topic has been built during the last decade, and we refer the interested reader to the survey paper [7] or, for a more detailed and thorough study, to the forthcoming monograph [3].
In this paper we relate, for the first time, the topic of lineability with Probability Theory and Stochastic Processes. However, one needs to be careful when trying to find linear structures within certain sets of objects in this setting. Indeed, the set of probability density functions cannot contain any linear space, since any non-trivial multiple of a density already fails to be a probability density function. At a deeper level, if we had two martingales \(\{X_n\}_n\), \(\{Y_n\}_n\), with their corresponding filtrations \(\{\mathcal {F}_n\}_n\) and \(\{\mathcal {G}_n\}_n\), the sequence of random variables \(\{X_n + Y_n\}_n\) is not, in general, a martingale unless we had a “universal” filtration that worked for both simultaneously. Nevertheless, we shall consider some classical (counter)examples in probability theory and study to what extent it is possible to obtain lineability-related results. In this paper we shall consider lineability and algebrability problems related to the following concepts:
(i) convergent martingales that are not \(L_1\) bounded;
(ii) pointwise convergence of random variables;
(iii) stochastic processes that are \(L_2\) bounded, converge in \(L_2\), and do not converge at any point off a null set;
(iv) zero-mean sequences of mutually independent random variables with divergent sample mean; and
(v) unbounded random variables with finite expected value.
2 Preliminaries and notation
In this section, we recall some results that will be needed throughout the paper (for more details see, e.g., [10]).
Let \(\Omega \) be a non-empty set and let \(\mathcal {F}\) be a \(\sigma \)-algebra over \(\Omega \). We say that the pair \((\Omega ,\mathcal {F})\) is a measurable (probabilizable) space. Given \((\Omega ,\mathcal {F})\), a filtration is an increasing sequence \(\{\mathcal {F}_n\}_{n\in \mathbb {N}}\) of \(\sigma \)-algebras such that \(\mathcal {F}_n\subset \mathcal {F}\) for every \(n\in \mathbb {N}\).
Adding a probability measure \(P:\mathcal {F}\rightarrow [0,1]\), we say that the triplet \((\Omega ,\mathcal {F},P)\) is a probability space. A random variable X on \((\Omega ,\mathcal {F},P)\) is a real-valued function defined on \(\Omega \) such that for every open subset \(B\subset \mathbb {R}\) we have \(X^{-1}(B)\in \mathcal {F}\). The expected value of the random variable X, namely E(X), is computed as \(E(X)=\int _\Omega X \, dP\).
A collection of random variables indexed by a totally ordered set, representing the evolution of some random system, is said to be a stochastic process.
We now introduce the notion of a conditional expectation of a random variable X.
Definition 1
Let \((\Omega , \mathcal {F},P)\) be a probability space, let X be a random variable on this probability space, and let \(\mathcal {H} \subseteq \mathcal {F}\) be a sub-\(\sigma \)-algebra of \(\mathcal {F}\). The conditional expectation of X, denoted as \(E[X\mid \mathcal {H}]\), is any \(\mathcal {H}\)-measurable function \(\Omega \rightarrow \mathbb {R}\) which satisfies \(\int _H E[X\mid \mathcal {H}] \, dP = \int _H X \, dP\) for every \(H \in \mathcal {H}\).
A sequence of random variables \(\{X_n\}_n\) defined on \((\Omega ,\mathcal {F},P)\) is said to be a Markov chain if, for every \(n\ge 1\), the conditional distribution of \(X_{n+1}\) given \(X_1,\ldots ,X_n\) depends only on the state of \(X_n\). Given a sequence of random variables \(\{X_n\}_{n\in \mathbb {N}}\) and a filtration \(\{\mathcal {F}_n\}_{n\in \mathbb {N}}\) of sub-\(\sigma \)-algebras of \(\mathcal {F}\), we say that \(\{X_n\}_{n\in \mathbb {N}}\) is a martingale if each \(X_n\) is integrable and \(\mathcal {F}_n\)-measurable, and \(E[X_{n+1}|\mathcal {F}_n]=X_n\) almost surely (a.s. from now on) for all \(n \in \mathbb {N}\).
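The defining identity \(E[X_{n+1}\mid \mathcal {F}_n]=X_n\) can be checked exactly on a toy example. The following Python sketch (purely illustrative, not taken from the references) verifies the martingale property for the symmetric random walk \(S_n = \varepsilon _1 + \cdots + \varepsilon _n\) with fair \(\pm 1\) steps, by averaging over the atoms of the natural filtration.

```python
from itertools import product

# Verify E[S_{n+1} | F_n] = S_n for the symmetric random walk with
# steps e_i in {-1, +1}.  The atoms of F_n = sigma(e_1, ..., e_n) are
# the sign patterns (e_1, ..., e_n), each of probability 2**(-n), so
# the conditional expectation on an atom is a plain average over the
# two equally likely continuations.
n = 3
for prefix in product((-1, 1), repeat=n):          # one atom of F_n
    s_n = sum(prefix)
    cond_exp = sum(s_n + step for step in (-1, 1)) / 2.0
    assert cond_exp == s_n                         # martingale property
print("martingale property verified on all atoms of F_3")
```

The same enumeration also shows that the random walk is a Markov chain: the two continuations from an atom depend only on the current value \(s_n\), not on the path that produced it.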
Finally, let us recall the following definition that will be necessary in order to introduce the notion of a martingale indexed by a directed set (see, e.g., [14]).
Definition 2
(directed set) A directed set is a nonempty set D with a relation \(\sim _R\) such that:
(i) \(a \sim _R a\) for every \(a \in D\);
(ii) if \(a, b, c \in D\) are such that \(a \sim _R b\) and \(b \sim _R c\), then \(a \sim _R c\);
(iii) if \(a, b \in D\), then there exists \(c \in D\) with \(a \sim _R c\) and \(b \sim _R c\).
We point out that \(a \sim _R b\) is (usually) denoted by \(a \le b\).
Let D be a directed set and let \(\{X_d: d \in D\}\) be an indexed family of random variables. Let \(\{\mathcal {F}_d: d \in D\}\) be a family of \(\sigma \)-algebras such that for \(d_1 \le d_2\), we have \(\mathcal {F}_{d_1} \subset \mathcal {F}_{d_2}\). We say that \(\{X_d\}\) is a martingale indexed by the directed set D if for every \(d \in D\) we have \(E[|X_d|] < \infty \), \(X_d\) is \(\mathcal {F}_{d}\)-measurable, and for every \(d_1 \le d_2\) we have \(E[X_{d_2} | \mathcal {F}_{d_1}] = X_{d_1}\) almost surely.
3 Lineability of special sequences of random variables
The motivation for our first result is the fact that many martingale convergence theorems require the martingale to be \(L_1\)-bounded (for instance, Doob’s famous martingale convergence theorems or Lévy’s zero–one law, [10]). However, this condition (although sufficient) is not necessary. Indeed, there is a classical and well-known example due to Ash (see [4], or [21, Example 9.15] for a more modern reference), in which (briefly) the author constructed a martingale via a Markov chain \(\{X_n: n \in \mathbb {N}\}\), properly defined on a probability space \((\Omega , \mathcal {F}, P)\), such that \((X_{n})_n\) converges for every \(\omega \in \Omega \) and yet \(E[|X_n|] \mathop {\longrightarrow }\limits ^{n \rightarrow \infty } \infty \).
Here, and although (as we mentioned in the Introduction) one cannot consider lineability within martingales, we shall show that one can construct an infinite dimensional vector space every non-zero element of which is a convergent sequence \(\{X_n: n \in \mathbb {N}\}\) of random variables with \(E[|X_n|] \mathop {\longrightarrow }\limits ^{n \rightarrow \infty } \infty \). That is, the main tool in Ash’s example is, actually, “not as uncommon” as one might expect. The proof is a little technical, although constructive.
Theorem 1
The set of convergent sequences of random variables \(\{X_n: n \in \mathbb {N}\}\) with \(E[|X_n|] \mathop {\longrightarrow }\limits ^{n \rightarrow \infty } \infty \) is lineable.
Proof
First let us denote by \({\mathcal S}=\{s_j\}_{j\in \mathbb {N}}\) the (increasing) sequence of odd prime numbers. Next, for every \(s \in {\mathcal S}\) we consider the Markov chain defined as follows. Let \(X_1^{(s)} = 0\). Also, if \(X_n^{(s)} = 0\) let
and, if \(X_n^{(s)} \ne 0\), we let \(X_{n+1}^{(s)}=X_n^{(s)}\). Notice that, if \(X_n^{(s)}\ne 0\), then \(X_j^{(s)}=X_n^{(s)}\) for every \(j\ge n\). Let us consider \(A=\{\omega : X_n^{(s)}(\omega ) \ne 0 \text { for some } n\in \mathbb {N}\}\). If \(\omega \in A\), then there is some \(n\) with \(X_j^{(s)}(\omega )=X_n^{(s)}(\omega )\ne 0\) for every \(j\ge n\). In contrast, if \(\omega \in \Omega {\setminus } A\), then \(X_{n+1}^{(s)}\) is defined following equation (3). Moreover, note that for every \(n \in \mathbb {N}\),
Therefore, for every \(s \in {\mathcal S}\), the Markov chain \(\{X_n^{(s)}: n \in \mathbb {N}\}\) is a martingale with respect to its natural filtration, that is, \(\mathcal {F}_n=\sigma (X_1,\ldots ,X_n)\) Footnote 1 for all n. Furthermore, given \(s \in {\mathcal S}\), and assuming all of the above random variables are properly defined on a probability space \((\Omega , \mathcal {F}, P)\), either \(X_{n}^{(s)}(\omega )=0\) for every \(n \in \mathbb {N}\), or there is some \(m\in \mathbb {N}\) such that \(X_{n}^{(s)}(\omega )=X_{m}^{(s)}(\omega )\ne 0\) for all \(n\ge m\); in either case, we conclude that \(\{X_{n}^{(s)}\}_n\) is a convergent sequence on \((\Omega ,\mathcal {F},P)\).
Before carrying on with the main construction, note that it can be assumed, without loss of generality, that the set \(\{X_n^{(s)}: s \in \mathcal {S} \}\) is linearly independent, by taking, for instance, disjoint supports in the construction of the random variables.
Our aim now is to show that any non-zero element in the linear span of \(\{X_n^{(s)}: s \in \mathcal {S} \}\) is convergent and not \(L_1\)-bounded. The convergence is straightforward: each sequence \(\{X_{n}^{(s)}\}_n\) converges for every \(\omega \in \Omega \), and any element in the linear span of \(\{X_n^{(s)}: s \in \mathcal {S} \}\) is a finite linear combination of these sequences.
We still need a couple of estimates in order to achieve our goal. For every \(A\in \mathcal {F}\), let us denote by \(I_A\) the characteristic function of the set A. For \(s \in \mathcal {S}\) and \(k \in \mathbb {N}\), we have that
from which we obtain that
where, for the sake of simplicity, we have denoted \(a_n:= s^{n} n^s\) and \(p_n:= 1/s^n\). Applying the definition of \(X_n^{(s)}\), making some simple calculations, and keeping in mind that for every \(j \in \{1, \ldots , k-1\}\), we have \(0< 1-2p_{j} < 1\), and
As a consequence, we obtain the following lower bound for \(E\left[ |X_k^{(s)}|\right] \):
In the previous expression, let us recall that the quantity \(\prod _{j=1}^{\infty } \left( 1 - \frac{2}{s^j}\right) \) is known, in Number Theory, as the q-Pochhammer symbol (also known as the q-shifted factorial, see [8]) \((2;s)_{\infty }\), which verifies
if \(s > 2\) (which complies with our hypotheses). We, thus, have
and \(\left\{ X_k^{(s)}\right\} _k\) is not \(L_1\)-bounded. However, our aim is to show that any non-zero element in the linear span of \(\{X_n^{(s)}: s \in \mathcal {S} \}\) is not \(L_1\)-bounded and, in order to obtain this, we shall need another estimate for \(E\left[ |X_k^{(s)}|\right] \). Recall that, since \((2;s)_{\infty } \in (0,1)\), we also have
and it can be easily checked that the expression \(R_{s+1}(k)\) is a polynomial of degree \(s+1\) with
Now, let \(X_k \in \text {span}\left\{ X_k^{(s)}: s \in \mathcal {S} \right\} \), then:
where \(s_1<s_2< \cdots < s_m\) are elements from \(\mathcal {S}\), \(\{\alpha _n\}_n \subset \mathbb {R}\), and (without loss of generality) \(\alpha _m \ne 0\). Let us now show that \(X_k\) is not \(L_1\)-bounded. Indeed, using the linearity of \(E[\cdot ]\), the reverse triangle inequality, and Eqs. (10) and (11), we have:
since the expression \(2 |\alpha _m | \cdot (2;s_m)_{\infty } \cdot R_{s_m+1}(k)\) is a polynomial of degree \(s_m+1\) with
the expression
is a polynomial of degree \(s_{m-1}+1\), and \(s_{m-1} < s_m\). Therefore, \(X_k\) is not \(L_1\)-bounded, and the result is proved. \(\square \)
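The growth estimate at the heart of the proof can be glimpsed numerically. The Python sketch below is a hedged illustration only: it assumes an Ash-type transition rule (the displayed equations of the proof are not reproduced here) in which, from state 0, the chain jumps at step j to \(\pm a_j\) with probability \(p_j\) each and stays at 0 otherwise, with \(a_j = s^j j^s\) and \(p_j = 1/s^j\) as in the proof. Under that assumption \(E[|X_k^{(s)}|]\) is the finite sum computed by `mean_abs`, and the q-Pochhammer factor \((2;s)_\infty \) is approximated by `q_pochhammer_2s`.

```python
def q_pochhammer_2s(s, terms=200):
    """Partial product for (2; s)_infty = prod_{j>=1} (1 - 2/s**j)."""
    prod = 1.0
    for j in range(1, terms + 1):
        prod *= 1.0 - 2.0 / s ** j
    return prod

def mean_abs(s, k):
    """E|X_k| for an ASSUMED Ash-type chain: from 0, jump to +-a_j with
    probability p_j each at step j (absorbing afterwards), where
    a_j = s**j * j**s and p_j = s**(-j), so that a_j * p_j = j**s."""
    total, at_zero = 0.0, 1.0      # at_zero = P(no jump before step j)
    for j in range(2, k + 1):
        total += at_zero * 2.0 * j ** s    # |+-a_j| weighted by 2*p_j
        at_zero *= 1.0 - 2.0 / s ** j
    return total

# The product stays in (0, 1) for s > 2, while E|X_k| grows without
# bound, roughly like a degree s+1 polynomial in k.
print(round(q_pochhammer_2s(3), 4), mean_abs(3, 10) < mean_abs(3, 40))
```

The monotone growth of `mean_abs(3, k)` in k mirrors the lower bound \(E[|X_k^{(s)}|] \gtrsim (2;s)_{\infty } \sum _{j \le k} j^{s}\) used in the proof.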
Remark 1
We note that the previous result could certainly be stated in terms of martingales, using the fact that the martingales adapted to a common filtration form a vector space (the proof would follow the same ideas as that of Theorem 1).
Now, let us continue focusing on obtaining lineability-related results for certain subsets of random variables enjoying “unexpected” properties. For instance, in [21, Example 9.2], the authors provide (given any \(b >0\)) a sequence of integrable random variables \(\{X_n\}_{n \in \mathbb {N}}\) and an integrable random variable X such that \(X_n\) converges to X pointwise and, yet, \(E[X_n] = -b\) and \(E[X]=b\) (the important point here is that, under the previous hypotheses, \(E[X_n] \ne E[X]\) for every \(n \in \mathbb {N}\)). This construction can be generalized to produce a positive cone (see, e.g., [1]) of such elements since, in general, linearity is lost for elements enjoying such properties.
Indeed, let \(\{X_n\}_{n \in \mathbb {N}}\) and \(\{Y_n\}_{n \in \mathbb {N}}\) be sequences of integrable random variables converging pointwise to the integrable random variables X and Y, respectively, and let \(b, c >0\) be such that \(E[X_n] = -b, E[X]=b, E[Y_n] = -c, E[Y]=c\). Now, if \(\alpha , \beta \in \mathbb {R}\) are such that \(\alpha b + \beta c = 0\), then \(E[\alpha X_n + \beta Y_n] = 0 = E[\alpha X + \beta Y]\), which does not fall into the class of examples we are working with. Thus, the above property is “not a lineable one”. However, one can try to find a positive cone of such objects, as was done in [1] when certain sets failed to be lineable (such sets being called coneable). More precisely, a subset M of a topological vector space X is called positively coneable in X if there exists an infinite, linearly independent set \(B \subset M\) such that the set C of all finite linear combinations of elements of B with positive coefficients satisfies \(C \subset M \cup \{0\}\).
Theorem 2
Let us consider the probability space \(([0,1],\mathbb {B}([0,1]),\lambda )\), where \(\lambda \) denotes the Lebesgue measure. The set of sequences of integrable random variables \(\{X_n\}_n\) converging to an integrable random variable X such that \(\lim _{n\rightarrow \infty }E[X_n]\ne E[X]\) is positively coneable.
Proof
For every \(m \in \mathbb {N}\), let us take \(B^{(m)}, C^{(m)} > 0\) and let us define the following random variables for every \(\omega \in [0,1]\)
where \(\{a_m\}_{m\in \mathbb {N}} \subset \mathbb {N}\) is defined, recursively, as follows:
This choice guarantees that the sequences \(\{X_n^{(m)}\}_n\), \(m \in \mathbb {N}\), are linearly independent when seen as regular functions in \(\mathbb {R}^{[0,1]}\) (the \(a_m\)’s are chosen precisely in order to avoid major overlappings). For each m, the sequence \(\{X_n^{(m)}\}_n\) converges pointwise to \(X^{(m)}\) as n tends to infinity. It can easily be seen that \(\{X_n^{(m)}\}_n\) is a sequence of integrable random variables for every \(m \in \mathbb {N}\) and that \(X^{(m)}\) is an integrable random variable, too.
Furthermore, for every \(n,m\in \mathbb {N}\) we have
and
We then consider the positive cone \(\mathcal {C}_n\) generated by \(\{X_n^{(m)} : m \in \mathbb {N}\}\), any element \(Y_n\) of which can be written as \(Y_n = \sum _{i=1}^{k} \alpha _i X_n^{(m_i)}\), where \(\alpha _i >0\) and \(m_i \in \mathbb {N}\) for every \(i \in \{1, \ldots , k\}\). By the linearity of \(E[\cdot ]\) we have
and given \(Y= \sum _{i=1}^{k} \alpha _i X^{(m_i)}\), one obtains
which shows that, although \(Y_n\) converges pointwise to Y, we have \(E[Y_n] \ne E[Y]\) (actually, and more precisely, \(E[Y_n] > E[Y]\)) for every \(n \in \mathbb {N}\). \(\square \)
The following result shows the algebrability of the set of unbounded random variables with a finite expected value. The example used in the construction is inspired by [21, Example 5.2].
Theorem 3
Let us consider the measure space \((\mathbb {R}^+,\mathbb {B}(\mathbb {R}^+),\lambda )\), where \(\lambda \) denotes the Lebesgue measure. The set of unbounded random variables \(f:\mathbb {R}^+\rightarrow \mathbb {R}\) that have a finite expected value is algebrable.
Proof
Let us consider the function
For each \(n\in \mathbb {N}\), we define:
Each function \(f_n\) is null except in the interval \(J_n:=\left[ n,n+\frac{1}{n^3}\right] \). Moreover,
and then, the random variable defined as
has an expected value \(E[X]=\frac{\pi ^2}{12}\).
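As a quick numerical sanity check (the explicit formula for the \(f_n\)’s is not reproduced here), note that the stated value \(\pi ^2/12\) is exactly half of the Basel sum \(\sum _{n\ge 1} 1/n^2 = \pi ^2/6\); the following short Python computation confirms the arithmetic.

```python
import math

# Sanity check: pi**2 / 12 equals (1/2) * sum_{n>=1} 1/n**2, i.e. half
# of the Basel sum pi**2 / 6.  The truncation error of the partial sum
# is of order 1/(2N).
N = 200_000
partial = 0.5 * sum(1.0 / n ** 2 for n in range(1, N + 1))
print(abs(partial - math.pi ** 2 / 12))   # small, of order 1/(2N)
```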
Let us consider a Cantor set on the unit interval obtained as \(C=\cap _{n=0}^\infty I_n\), where \(I_0=[0,1]\) and \(I_n\) is obtained from \(I_{n-1}\) by removing the open middle third of each of its subintervals. Let us define \(L_n:=J_n\cap (n+I_n)\). Then, we have
Let \(\{\alpha _l\}_{l\in \Lambda }\) be an uncountable set of irrational numbers in (0, 1) which are linearly independent over \(\mathbb {Q}\); then for every \(\alpha \in \{\alpha _l\}_{l\in \Lambda }\) we define the functions:
and then, we consider the random variable
Consider the algebra \(\mathcal {A}(\{X_\alpha \}_{\alpha \in \Lambda })\) generated by these functions. It is clear that, for every \(\alpha \in \{\alpha _l\}_{l\in \Lambda }\), the random variable \(X_\alpha \) has a finite expected value and is unbounded. Besides, this algebra is uncountably generated.
Given an arbitrary function
These random variables are unbounded, too: indeed, letting \(\alpha _{min}:=\min \{\alpha _m\,:\,1\le m\le m_0\}\), we get \(X(n+\alpha _{min})=n\) for every \(n\in \mathbb {N}\). Additionally, this random variable has a finite expected value as well. \(\square \)
Remark 2
Let us point out that, in the previous result, the unboundedness holds outside every interval of finite length, which adds an extra pathology to the property under consideration.
For the final part of this paper, let us recall the work [20] (see, also, [21, Example 9.17]), in which Walsh provided an example of a martingale (indexed by a directed set) that is \(L_2\) bounded, converges in \(L_2\), and does not converge at any point off a null set. Our aim here shall be to generalize this example in order to build an infinite dimensional linear space every non-zero element of which is a stochastic process enjoying the previous property. Before starting the proof, we need to recall the following lemma (due to Muñoz, Palmberg, Puglisi, and the second author), which is a particular case of [19, Theorem 3.5]. In what follows \((\ell _p, \Vert \cdot \Vert _p)\) denotes the Banach space of real valued sequences with the usual p-norm.
Lemma 1
The set \(\ell _2 {\setminus }\ell _1\) is lineable.
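The explicit sequences from [19] are not reproduced above; a standard choice satisfying the lemma, assumed here purely for illustration, is \(h^{(m)}_n = n^{-m}\) with \(m \in \mathbb {Q} \cap (1/2, 1)\): such a sequence lies in \(\ell _2\) (since \(2m > 1\)) but not in \(\ell _1\) (since \(m < 1\)). A short Python sketch of the two partial sums makes the dichotomy visible.

```python
def partial_sums(m, N):
    """Partial l1-sum and squared-l2-sum of the sequence n**(-m)."""
    s1 = sum(1.0 / n ** m for n in range(1, N + 1))        # l1 partial sum
    s2 = sum(1.0 / n ** (2 * m) for n in range(1, N + 1))  # squared l2 sum
    return s1, s2

a1, b1 = partial_sums(0.75, 10_000)
a2, b2 = partial_sums(0.75, 100_000)
# The l1 partial sums keep growing (like N**0.25 for m = 0.75), while
# the squared l2 partial sums have essentially stabilized.
print(a2 - a1, b2 - b1)
```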
Theorem 4
The set of stochastic processes that are \(L_2\) bounded, converge in \(L_2\), and do not converge at any point off a null set, is lineable.
Proof
By Lemma 1, let V be any (countably generated) linear space contained in \((\ell _2 {\setminus } \ell _1) \cup \{0\}\) and let \(\left\{ \{h^{(m)}_n\}_n : m \in \mathbb {Q} \cap \left( \frac{1}{2}, 1 \right) \right\} \) be a basis for V. For instance, and in order to make the coming construction clearer, we can take (see [19, Theorem 3.5])
For every \(m \in \mathbb {Q} \cap \left( \frac{1}{2}, 1 \right) \), let \(\{X^{(m)}_{n}\}_n\) be a sequence of mutually independent random variables, chosen so that the (infinite) family of sequences \(\left\{ \{X^{(m)}_{n}\}_n : m \in \mathbb {Q} \cap \left( \frac{1}{2}, 1 \right) \right\} \) is linearly independent and such that
for every \(n\in \mathbb {N}\).
By construction, one has that \(\sum _{n \in \mathbb {N}} h^{(m)}_n X^{(m)}_{n}\) converges almost surely for every \(m \in \mathbb {Q} \cap \left( \frac{1}{2}, 1 \right) \).
Let D be the family of all finite subsets of \(\mathbb {N}\), partially ordered by set inclusion, which is a directed set. For every \(d\in D, m \in \mathbb {Q} \cap \left( \frac{1}{2}, 1 \right) \) we define:
Therefore, for every \(m \in \mathbb {Q} \cap \left( \frac{1}{2}, 1 \right) \), and with respect to its own filtration, it can be easily checked that \(\{M^{(m)}_d : d \in D\}\) is a martingale and that it converges in probability. By construction, we also have that the set \(\{(M^{(m)}_d)_{d \in D} : m \in \mathbb {Q} \cap \left( \frac{1}{2}, 1 \right) \}\) is linearly independent and, by linearity, any non-zero element in \(W := \text{ span }\{(M^{(m)}_d)_{d \in D} : m \in \mathbb {Q} \cap \left( \frac{1}{2}, 1 \right) \}\) also converges in probability.
However, we will regard the elements of \(\{(M^{(m)}_d)_{d \in D} : m \in \mathbb {Q} \cap \left( \frac{1}{2}, 1 \right) \}\) simply as stochastic processes (dropping the filtrations). Moreover, any element in W is also \(L_2\)-bounded, since (for every \(m \in \mathbb {Q} \cap \left( \frac{1}{2}, 1 \right) \)) we have
since the set \(\left\{ \{h^{(m)}_k\}_{k\in \mathbb {N}} : m \in \mathbb {Q} \cap \left( \frac{1}{2}, 1 \right) \right\} \) is contained in \(\ell _2 {\setminus } \ell _1\). However, notice that (for every \(m \in \mathbb {Q} \cap \left( \frac{1}{2}, 1 \right) \)), \(M^{(m)}_d (\omega )\) converges only if it converges regardless of the order of summation (that is, absolutely), but
It only remains to show that, for any \(m_1, m_2, \ldots , m_q \in \mathbb {Q} \cap \left( \frac{1}{2}, 1 \right) \) and \(\alpha _1, \ldots , \alpha _q \in \mathbb {R}\),
Indeed, if we apply the reverse triangle inequality to the above expression assuming, without loss of generality, that \(m_1< m_2< \cdots < m_q\), and \(\alpha _1\ne 0\), we obtain
and this last sum is divergent to \(+\infty \), by construction and by the above definition of V. \(\square \)
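The mechanism behind the proof can be simulated. The Python sketch below makes two illustrative assumptions not displayed explicitly above: \(h_n = n^{-m}\) with \(m = 0.75\), and Rademacher variables \(X_n = \pm 1\) with equal probability. Then the signed series \(\sum _n h_n X_n\) converges almost surely (by Kolmogorov's three-series theorem, since \(\sum _n h_n^2 < \infty \)), while \(\sum _n |h_n X_n| = \sum _n h_n\) diverges, so no rearrangement-independent (absolute) convergence is possible.

```python
import random

random.seed(0)   # fixed seed so the run is reproducible

# One sample path: signed partial sums stabilize, absolute sums grow.
m = 0.75
signed, absolute = 0.0, 0.0
checkpoints = {}
for n in range(1, 100_001):
    h = n ** (-m)                       # assumed h_n = n**(-m)
    x = random.choice((-1, 1))          # assumed Rademacher sign
    signed += h * x
    absolute += h
    if n in (50_000, 100_000):
        checkpoints[n] = (signed, absolute)
print(checkpoints)
```

On any typical run the signed sums at the two checkpoints differ only slightly (the tail standard deviation is of order \(N^{-1/4}\)), while the absolute sums grow like \(4N^{1/4}\).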
Now, we would like to consider a new interesting property of random variables. In [21], the authors show that there exists a sequence \(\{X_n : n \in \mathbb {N}\}\) of mutually independent random variables, having zero mean, and such that \(\left| \frac{1}{n} \sum _{i=1}^{n} X_i\right| \) diverges to \(\infty \) almost surely. This example can be extended in order to obtain lineability, as our following result states.
Theorem 5
Given a common probability space \((\Omega , \mathcal {F}, P)\), the set of sequences \(\{X_n : n \in \mathbb {N}\}\) of mutually independent random variables having zero mean and such that \(\left| \frac{1}{n} \sum _{i=1}^{n} X_i\right| \) diverges to \(\infty \) (almost surely) is lineable.
Proof
Given \(s \in \mathbb {N}\), \(s\ge 2\), let \(\{X_{n}^{(s)}: n \in \mathbb {N}\}\) be a sequence of mutually independent random variables, chosen so that the family of sequences \(\{\{X_{n}^{(s)}\}_n : s\ge 2, s \in \mathbb {N}\}\) is linearly independent and such that
It is easy to check that, by construction,
for every \( s, n \in \mathbb {N}\) with \(s\ge 2\). From Eq. (36) it follows that (for every \(s\ge 2, s \in \mathbb {N}\))
Indeed, take \(s\ge 2, s \in \mathbb {N}\), and let \(\Omega _1 = \{\omega \in \Omega : X^{(s)}_n(\omega ) = -n^s\}\) and \(\Omega _2 = \Omega {\setminus } \Omega _1\). Now, if \(\omega \in \Omega _1\), we have \(X^{(s)}_k(\omega ) = -k^s\) for every k, obtaining that Eq. (37) holds as \(n \rightarrow \infty \).
Let now \(V=\text {span} \{X_{n}^{(s)} : s\ge 2, s \in \mathbb {N}\}\) and let \(Y_n \in V\). Thus, \(Y_n\) can be written as
for some \(N \in \mathbb {N}\), \(s_i \in \mathbb {N}\), \(2 \le s_1< s_2<\cdots < s_N\), and \(\alpha _i \in \mathbb {R}\) for every \(i \in \{1, 2, \ldots , N\}\) with \(\alpha _N \ne 0\). By the linearity of \(E[\cdot ]\), and Eq. (35), we have that
Also, notice that
From the previous inequality, the fact that \(s_N> s_{N-1}> \cdots > s_1 \ge 2\), and Eqs. (36) and (37), it can be seen that \(\left| \frac{1}{n} \sum _{k=1}^{n} Y_k\right| \rightarrow \infty \) a.s., and the claim holds for \(\omega \in \Omega _1\). The case \(\omega \in \Omega _2\) follows in a similar fashion and, thus, we spare the details of the calculations involved. \(\square \)
Notes
By \(\mathcal {F}_n=\sigma (X_1,\ldots ,X_n)\) we mean the smallest \(\sigma \)-algebra with respect to which \(\{X_i : i \le n\}\) are measurable.
References
Aizpuru, A., Pérez-Eslava, C., García-Pacheco, F.J., Seoane-Sepúlveda, J.B.: Lineability and coneability of discontinuous functions on \(\mathbb{R}\). Publ. Math. Debrecen 72(1–2), 129–139 (2008)
Aron, R., Gurariy, V.I., Seoane, J.B.: Lineability and spaceability of sets of functions on \(\mathbb{R}\). Proc. Am. Math. Soc. 133(3), 795–803 (2005, electronic)
Aron, R.M., Bernal-González, L., Pellegrino, D., Seoane-Sepúlveda, J.B.: Lineability: the search for linearity in mathematics. Monographs and Research Notes in Mathematics. CRC Press, Boca Raton (2016)
Ash, R.B.: Real analysis and probability. Probability and mathematical statistics, No. 11. Academic Press, New York-London (1972)
Barbieri, G., García-Pacheco, F.J., Puglisi, D.: Lineability and spaceability on vector-measure spaces. Stud. Math. 219(2), 155–161 (2013)
Bernal-González, L., Cabrera, M.O.: Lineability criteria, with applications. J. Funct. Anal. 266(6), 3997–4025 (2014)
Bernal-González, L., Pellegrino, D., Seoane-Sepúlveda, J.B.: Linear subsets of nonlinear sets in topological vector spaces. Bull. Am. Math. Soc. (N.S.), 51(1), 71–130 (2014)
Berndt, B.C.: What is a \(q\)-series? In: Ramanujan rediscovered, Ramanujan Math. Soc. Lect. Notes Ser., vol. 14, pp. 31–51. Ramanujan Math. Soc., Mysore (2010)
Bertoloto, F.J., Botelho, G., Fávaro, V.V., Jatobá, A.M.: Hypercyclicity of convolution operators on spaces of entire functions. Ann. Inst. Fourier (Grenoble) 63(4), 1263–1283 (2013)
Billingsley, P.: Probability and measure. Wiley Series in Probability and Mathematical Statistics, 3rd edn, A Wiley-Interscience Publication. Wiley, New York (1995)
Botelho, G., Fávaro, V.V.: Constructing Banach spaces of vector-valued sequences with special properties. Mich. Math. J. 64(3), 539–554 (2015)
Cariello, D., Seoane-Sepúlveda, J.B.: Basic sequences and spaceability in \(\ell _p\) spaces. J. Funct. Anal. 266(6), 3797–3814 (2014)
Drewnowski, L., Lipecki, Z.: On vector measures which have everywhere infinite variation or noncompact range. Dissertationes Math. (Rozprawy Mat.) 339, 39 (1995)
Dugundji, J.: Topology. Allyn and Bacon, Inc., Boston, Mass.-London-Sydney (1978, Reprinting of the 1966 original, Allyn and Bacon Series in Advanced Mathematics)
Enflo, P.H., Gurariy, V.I., Seoane-Sepúlveda, J.B.: Some results and open questions on spaceability in function spaces. Trans. Am. Math. Soc. 366(2), 611–625 (2014)
Fonf, V.P., Zanco, C.: Almost overcomplete and almost overtotal sequences in Banach spaces. J. Math. Anal. Appl. 420(1), 94–101 (2014)
Gámez-Merino, J.L., Seoane-Sepúlveda, J.B.: An undecidable case of lineability in \(\mathbb{R}^{\mathbb{R}}\). J. Math. Anal. Appl. 401(2), 959–962 (2013)
Gurariĭ, V.I.: Linear spaces composed of everywhere nondifferentiable functions. C. R. Acad. Bulgare Sci. 44(5), 13–16 (1991)
Muñoz-Fernández, G.A., Palmberg, N., Puglisi, D., Seoane-Sepúlveda, J.B.: Lineability in subsets of measure and function spaces. Linear Algebra Appl. 428(11–12), 2805–2812 (2008)
Walsh, J.B.: Martingales with a multidimensional parameter and stochastic integrals in the plane. In: Lectures in probability and statistics (Santiago de Chile, 1986), Lecture Notes in Math., vol. 1215, pp. 329–491. Springer, Berlin (1986)
Wise, G.L., Hall, E.B.: Counterexamples in probability and real analysis. The Clarendon Press, Oxford University Press, New York (1993)
Acknowledgments
This work was partially supported by Ministerio de Educación, Cultura y Deporte, projects MTM2013-47093-P and MTM2015-65825-P, by the Basque Government through the BERC 2014-2017 program and by the Spanish Ministerio de Economía y Competitividad: BCAM Severo Ochoa excellence accreditation SEV-2013-0323.
Dedicated to Prof. Manuel Maestre on the occasion of his 60th birthday.
Conejero, J.A., Fenoy, M., Murillo-Arcila, M. et al. Lineability within probability theory settings. RACSAM 111, 673–684 (2017). https://doi.org/10.1007/s13398-016-0318-y