1 Introduction

Since the beginning of the 21st century, many authors have become interested in the study of linearity within non-linear settings or, in other words, in the search for linear structures formed by mathematical objects enjoying certain special or unexpected properties. Vector spaces and linear algebras are elegant mathematical structures which, at first glance, seem to be “forbidden” to families of “strange” objects. In other words, take a function with some special or (as it is sometimes called) “pathological” property (for example, the classical continuous nowhere differentiable function, also known as Weierstrass’ monster). Coming up with a concrete example of such a function might be difficult. In fact, it may seem so difficult that, if you succeed, you might think that there cannot be too many functions of that kind and that, probably, one cannot find infinite dimensional vector spaces or infinitely generated algebras of such functions. This is, however, exactly what has been happening in recent years in many fields of mathematics, from linear chaos to real and complex analysis [2, 6, 15], passing through set theory [17], linear and multilinear algebra, operator theory [9, 11], topology, measure theory [5, 6, 13], and abstract algebra.

Recall that, as is nowadays common terminology, a subset M of a topological vector space X is called lineable (respectively, spaceable) in X if there exists an infinite dimensional linear space (respectively, an infinite dimensional closed linear space) \(Y \subset M\cup \{0\}\). Moreover, given an algebra \(\mathcal {A},\) a subset \(\mathcal {B}\subset \mathcal {A}\) is said to be algebrable if there is a subalgebra \(\mathcal {C}\) of \(\mathcal {A}\) such that \(\mathcal {C}\subset \mathcal {B}\cup \{0\}\) and every system of generators of \(\mathcal {C}\) has infinite cardinality (see, e.g., [2, 3, 7]).

As we mentioned above, there have recently been many results regarding the linear structure of certain special subsets. One of the earliest results in this direction was provided by Gurariy, who showed that the set of Weierstrass’ monsters is lineable [18]. Also, and more recently, Enflo et al. [15] proved that, for every infinite dimensional closed subspace X of \(\mathcal {C}[0,1]\), the set of functions in X having infinitely many zeros in [0, 1] is spaceable in X (see, also, [12, 16]). A vast literature on this topic has been built during the last decade, and we refer the interested reader to the survey paper [7] or, for a more detailed and thorough study, to the forthcoming monograph [3].

In this paper we relate, for the first time, the topic of lineability with probability theory and stochastic processes. However, one needs to be careful when trying to find linear structures within certain sets of objects in this setting. Indeed, the set of probability density functions cannot contain any linear space, since any non-trivial multiple of a density already fails to be a probability density function; at a deeper level, if we had two martingales \(\{X_n\}_n\) and \(\{Y_n\}_n\), with corresponding filtrations \(\{\mathcal {F}_n\}_n\) and \(\{\mathcal {G}_n\}_n\), the sequence of random variables \(\{X_n + Y_n\}_n\) would not, in general, be a martingale unless there were a “universal” filtration compatible with both simultaneously. Nevertheless, we shall consider some classical (counter)examples in probability theory and study up to what level it is possible to obtain lineability-related results. In this paper we shall consider lineability and algebrability problems related to the following concepts:

  (i) convergent martingales that are not \(L_1\) bounded,

  (ii) pointwise convergence of random variables,

  (iii) stochastic processes being \(L_2\) bounded, converging in \(L_2\), and not converging at any point off a null set,

  (iv) zero-mean sequences of mutually independent random variables with divergent sample mean, and

  (v) unbounded random variables with finite expected value.

2 Preliminaries and notation

In this section, we recall some results that will be needed throughout the paper (for more details see, e.g., [10]).

Let \(\Omega \) be a non-empty set and let \(\mathcal {F}\) be a \(\sigma \)-algebra over \(\Omega \). We say that the pair \((\Omega ,\mathcal {F})\) is a measurable (probabilizable) space. Given \((\Omega ,\mathcal {F})\), a filtration of \(\sigma \)-algebras of \(\mathcal {F}\) is an increasing sequence \(\{\mathcal {F}_n\}_{n\in \mathbb {N}}\) of \(\sigma \)-algebras such that \(\mathcal {F}_n\subset \mathcal {F}\) for every \(n\in \mathbb {N}\).

Adding a probability measure \(P:\mathcal {F}\rightarrow [0,1]\) (that is, a countably additive set function with \(P(\Omega )=1\)), we say that the triplet \((\Omega ,\mathcal {F},P)\) is a probability space. A random variable X on \((\Omega ,\mathcal {F},P)\) is a real-valued function defined on \(\Omega \) such that, for every open subset \(B\subset \mathbb {R}\), we have \(X^{-1}(B)\in \mathcal {F}\). The expected value of the random variable X, namely E[X], is computed as

$$\begin{aligned} E[X]=\int _\Omega XdP. \end{aligned}$$
(1)

A collection of random variables indexed by a totally ordered set, usually representing the evolution of some random system over time, is said to be a stochastic process.

We now introduce the notion of a conditional expectation of a random variable X.

Definition 1

Let \((\Omega , \mathcal {F},P)\) be a probability space, let X be an integrable random variable on this probability space, and let \(\mathcal {H} \subseteq \mathcal {F}\) be a sub-\(\sigma \)-algebra of \(\mathcal {F}\). The conditional expectation of X, denoted by \(E[X\mid \mathcal {H}]\), is any \(\mathcal {H}\)-measurable function \(\Omega \rightarrow \mathbb {R}\) which satisfies

$$\begin{aligned} \int _H E[X \mid \mathcal {H}] \; dP = \int _H X \; dP \quad \text {for every} \quad H \in \mathcal {H}. \end{aligned}$$
(2)
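As a simple illustration of Definition 1 (ours, and not needed in the sequel), take \(\Omega =[0,1]\) endowed with the Borel \(\sigma \)-algebra and the Lebesgue probability measure P, and let \(\mathcal {H}=\{\emptyset , [0,1/2], (1/2,1], \Omega \}\). Any \(\mathcal {H}\)-measurable function is constant on each of the two atoms of \(\mathcal {H}\), and a direct computation shows that

$$\begin{aligned} E[X \mid \mathcal {H}](\omega ) = {\left\{ \begin{array}{ll} 2\int _{[0,1/2]} X \; dP &{}\quad \text { if } \omega \in [0,1/2],\\ 2\int _{(1/2,1]} X \; dP &{}\quad \text { if } \omega \in (1/2,1], \end{array}\right. } \end{aligned}$$

satisfies Eq. (2): for instance, for \(H=[0,1/2]\) the left-hand side of (2) equals \(\frac{1}{2}\cdot 2\int _{[0,1/2]} X \; dP = \int _{[0,1/2]} X \; dP\), and the remaining sets in \(\mathcal {H}\) are handled analogously.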

A sequence of random variables \(\{X_n\}_n\) defined on \((\Omega ,\mathcal {F},P)\) is said to be a Markov chain if, for every \(n\ge 1\), the conditional distribution of \(X_{n+1}\) given \(X_1,\ldots ,X_n\) depends only on \(X_n\). Given a sequence of random variables \(\{X_n\}_{n\in \mathbb {N}}\) and a filtration \(\{\mathcal {F}_n\}_{n\in \mathbb {N}}\) of sub-\(\sigma \)-algebras of \(\mathcal {F}\), we say that \(\{X_n\}_{n\in \mathbb {N}}\) is a martingale (with respect to \(\{\mathcal {F}_n\}_{n\in \mathbb {N}}\)) if, for all \(n \in \mathbb {N}\), \(X_n\) is integrable and \(\mathcal {F}_n\)-measurable, and \(E[X_{n+1}|\mathcal {F}_n]=X_n\) almost surely (a.s. from now on).
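A standard example, included here only as an illustration, is the symmetric random walk: if \(\xi _1, \xi _2, \ldots \) are mutually independent random variables with \(P(\xi _n=1)=P(\xi _n=-1)=1/2\), \(S_n=\xi _1+\cdots +\xi _n\), and \(\mathcal {F}_n=\sigma (\xi _1,\ldots ,\xi _n)\), then \(\{S_n\}_{n\in \mathbb {N}}\) is both a Markov chain and a martingale with respect to \(\{\mathcal {F}_n\}_{n\in \mathbb {N}}\), since \(\xi _{n+1}\) is independent of \(\mathcal {F}_n\) and hence

$$\begin{aligned} E[S_{n+1}\mid \mathcal {F}_n] = S_n + E[\xi _{n+1}\mid \mathcal {F}_n] = S_n + E[\xi _{n+1}] = S_n \quad \text { a.s.} \end{aligned}$$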

Finally, let us recall the following definition that will be necessary in order to introduce the notion of a martingale indexed by a directed set (see, e.g., [14]).

Definition 2

(directed set) A directed set is a nonempty set D with a relation \(\sim _R\) such that:

  (i) \(a \sim _R a\) for every \(a \in D\).

  (ii) If \(a, b, c \in D\) are such that \(a \sim _R b\) and \(b \sim _R c\), then \(a \sim _R c\).

  (iii) If \(a, b \in D\), then there exists \(c \in D\) with \(a \sim _R c\) and \(b \sim _R c\).

We point out that \(a \sim _R b\) is (usually) denoted by \(a \le b\).

Let D be a directed set and let \(\{X_d: d \in D\}\) be an indexed family of random variables. Let \(\{\mathcal {F}_d: d \in D\}\) be a family of \(\sigma \)-algebras such that for \(d_1 \le d_2\), we have \(\mathcal {F}_{d_1} \subset \mathcal {F}_{d_2}\). We say that \(\{X_d\}\) is a martingale indexed by the directed set D if for every \(d \in D\) we have \(E[|X_d|] < \infty \), \(X_d\) is \(\mathcal {F}_{d}\)-measurable, and for every \(d_1 \le d_2\) we have \(E[X_{d_2} | \mathcal {F}_{d_1}] = X_{d_1}\) almost surely.

3 Lineability of special sequences of random variables

The motivation for our first result is the fact that many martingale convergence theorems require the martingale to be \(L_1\)-bounded (for instance, the famous Doob martingale convergence theorems or Lévy’s zero–one law, [10]). However, this condition (although sufficient) is not necessary. Indeed, there is a classical and well-known example due to Ash (see [4], or [21, Example 9.15] for a more modern reference), in which, briefly, the author constructed a martingale via a Markov chain \(\{X_n: n \in \mathbb {N}\}\), properly defined on a probability space \((\Omega , \mathcal {F}, P)\), such that \(\{X_{n}\}_n\) converges for every \(\omega \in \Omega \), while \(E[|X_n|] \mathop {\longrightarrow }\limits ^{n \rightarrow \infty } \infty \).

Here, although (as we mentioned in the Introduction) one cannot consider lineability within the class of martingales themselves, we shall show that one can construct an infinite dimensional vector space every non-zero element of which, \(\{X_n: n \in \mathbb {N}\}\), is a sequence of convergent random variables with \(E[|X_n|] \mathop {\longrightarrow }\limits ^{n \rightarrow \infty } \infty \). That is, the main feature of Ash’s example is actually “not as uncommon” as one might expect. The proof is somewhat technical, although constructive.

Theorem 1

The set of convergent sequences of random variables \(\{X_n: n \in \mathbb {N}\}\) with \(E[|X_n|] \mathop {\longrightarrow }\limits ^{n \rightarrow \infty } \infty \) is lineable.

Proof

First let us denote by \({\mathcal S}=\{s_j\}_{j\in \mathbb {N}}\) the (increasing) sequence of odd prime numbers. Next, for every \(s \in {\mathcal S}\) we consider the Markov chain defined as follows. Let \(X_1^{(s)} = 0\). Also, if \(X_n^{(s)} = 0\) let

$$\begin{aligned} X_{n+1}^{(s)} = \left\{ \begin{array}{ll} s^{n+1} \cdot (n+1)^s &{} \text { with probability } 1/s^{n+1}, \\ -s^{n+1} \cdot (n+1)^s &{} \text { with probability } 1/s^{n+1}, \\ 0 &{} \text { with probability } 1-2/s^{n+1}, \end{array} \right. \end{aligned}$$
(3)

and, if \(X_n^{(s)} \ne 0\), we let \(X_{n+1}^{(s)}=X_n^{(s)}\). Notice that, if \(X_n^{(s)}\ne 0\), then \(X_j^{(s)}=X_n^{(s)}\) for every \(j\ge n\). Let us consider \(A=\{\omega :X_n^{(s)}(\omega ) \ne 0 \text { for some } n\in \mathbb {N}\}\). If \(\omega \in A\) and n is the first index for which \(X_n^{(s)}(\omega )\ne 0\), then \(X_j^{(s)}(\omega )=X_n^{(s)}(\omega )\) for every \(j\ge n\). In contrast, if \(\omega \in \Omega {\setminus } A\), then every \(X_{n+1}^{(s)}\) is defined following Eq. (3). Moreover, note that for every \(n \in \mathbb {N}\),

$$\begin{aligned} E\left[ X_{n+1}^{(s)}|X_n^{(s)}=0\right]= & {} \left( s^{n+1} \cdot (n+1)^s\right) \cdot \frac{1}{s^{n+1}} - \left( s^{n+1} \cdot (n+1)^s\right) \cdot \frac{1}{s^{n+1}} \nonumber \\&+\, 0 \cdot \left( 1-\frac{2}{s^{n+1}}\right) = 0, \end{aligned}$$
(4)
$$\begin{aligned} E\left[ X_{n+1}^{(s)}|X_n^{(s)}=s^{n+1} \cdot (n+1)^s\right] = s^{n+1} \cdot (n+1)^s, \text { and } \end{aligned}$$
(5)
$$\begin{aligned} E\left[ X_{n+1}^{(s)}|X_n^{(s)}=-s^{n+1} \cdot (n+1)^s\right] = -s^{n+1} \cdot (n+1)^s. \end{aligned}$$
(6)

Therefore, for every \(s \in {\mathcal S}\), the Markov chain \(\{X_n^{(s)}: n \in \mathbb {N}\}\) is a martingale with respect to the natural filtration, that is, \(\mathcal {F}_n=\sigma (X_1^{(s)},\ldots ,X_n^{(s)})\) for all n. Furthermore, given \(s \in {\mathcal S}\), and assuming all of the above random variables are properly defined on a probability space \((\Omega , \mathcal {F}, P)\), for every \(\omega \in \Omega \) either \(X_{n}^{(s)}(\omega )=0\) for every \(n \in \mathbb {N}\), or there is some \(m\in \mathbb {N}\) such that \(X_{n}^{(s)}(\omega )=X_{m}^{(s)}(\omega )\ne 0\) for all \(n\ge m\); in either case, we conclude that \(\{X_{n}^{(s)}\}_n\) is a (pointwise) convergent sequence on \((\Omega ,\mathcal {F},P)\).

Before carrying on with the main construction, let us note that it can be assumed, without loss of generality, that the set of sequences \(\left\{ \{X_n^{(s)}\}_n: s \in \mathcal {S} \right\} \) is linearly independent; it suffices, for instance, to take random variables with pairwise disjoint supports in the construction.

Our aim now is to show that any non-zero element in the linear span of \(\left\{ \{X_n^{(s)}\}_n: s \in \mathcal {S} \right\} \) is convergent and not \(L_1\)-bounded. The convergence is straightforward from the fact that \(\{X_{n}^{(s)}\}_n\) converges for every \(\omega \in \Omega \) and any element in that linear span is a finite linear combination of these sequences.

We still need a couple of estimates in order to achieve our goal. For every \(A\in \mathcal {F}\), let us denote by \(I_A\) the characteristic (indicator) function of the set A. For \(s \in \mathcal {S}\) and \(k \in \mathbb {N}\), we have that

$$\begin{aligned} \begin{array}{lcl} X_k^{(s)} &{} = &{} X_2^{(s)} \cdot I_{\{X_2^{(s)}\ne 0\}} + X_3^{(s)} \cdot I_{\{X_2^{(s)}= 0, X_3^{(s)}\ne 0\}}+X_4^{(s)} \cdot I_{\{X_2^{(s)} = X_3^{(s)} = 0, X_4^{(s)}\ne 0\}}\\ &{}&{} + \cdots + X_k^{(s)} \cdot I_{\{X_1^{(s)} = \cdots = X_{k-1}^{(s)} = 0, X_k^{(s)}\ne 0\}} + 0 \cdot I_{\{X_1^{(s)} = \cdots = X_{k}^{(s)} = 0\}},\\ \end{array} \end{aligned}$$
(7)

from which we obtain that

$$\begin{aligned} \begin{array}{lcl} E\left[ |X_k^{(s)}|\right] &{} = &{} 2 a_2 p_2 + (1-2p_2) \cdot 2 a_3 p_3 + (1-2p_2)(1-2p_3) \cdot 2 a_4 p_4 \\ &{}&{} + \cdots + (1-2p_2)(1-2p_3)\cdot \cdots \cdot (1-2p_{k-1}) \cdot 2 a_k p_k,\\ \end{array} \end{aligned}$$
(8)

where, for the sake of simplicity, we have denoted \(a_n:= s^{n} n^s\) and \(p_n:= 1/s^n\), and Eq. (8) follows from the definition of \(X_n^{(s)}\) after some simple calculations. Keeping in mind that, for every \(j \in \{1, \ldots , k-1\}\), we have \(0< 1-2p_{j} < 1\), and hence

$$\begin{aligned} 1 > (1-2p_2) \ge (1-2p_2)(1-2p_3) \ge \cdots \ge (1-2p_2)(1-2p_3)\cdot \cdots \cdot (1-2p_{k-1}). \end{aligned}$$
(9)

As a consequence, we obtain the following lower bound for \(E\left[ |X_k^{(s)}|\right] \):

$$\begin{aligned} E\left[ |X_k^{(s)}|\right] \ge \displaystyle 2 \left[ \prod _{j=1}^{k-1} \left( 1 - 2p_j\right) \right] \cdot \left[ \sum _{j=2}^{k} a_j p_j \right] = \displaystyle 2 \left[ \prod _{j=1}^{k-1} \left( 1 - \frac{2}{s^j}\right) \right] \cdot \left[ \sum _{j=2}^{k} j^s \right] . \end{aligned}$$
(10)

In the previous expression, let us recall that the quantity \(\prod _{j=1}^{\infty } \left( 1 - \frac{2}{s^j}\right) \) can be expressed by means of the q-Pochhammer symbol (also known as the q-shifted factorial, see [8]); in the standard notation it equals \((2/s; 1/s)_{\infty }\), and for brevity we shall denote it simply by \((2;s)_{\infty }\) throughout. It verifies

$$\begin{aligned} 0< (2;s)_{\infty } < 1 \end{aligned}$$

if \(s > 2\) (which holds under our hypotheses, since \(s \ge 3\)). We thus have

$$\begin{aligned} E[|X_k^{(s)}|] \ge \displaystyle 2 \left[ \prod _{j=1}^{k-1} \left( 1 - \frac{2}{s^j}\right) \right] \cdot \left( \sum _{j=2}^{k} j^s \right) \mathop {\longrightarrow }\limits ^{k\rightarrow \infty } \quad 2 \cdot (2;s)_{\infty } \cdot \lim _{k\rightarrow \infty } \sum _{j=2}^{k} j^s = \infty , \end{aligned}$$

and \(\left\{ X_k^{(s)}\right\} _k\) is not \(L_1\)-bounded. However, our aim is to show that any non-zero element in the linear span of \(\left\{ \{X_n^{(s)}\}_n: s \in \mathcal {S} \right\} \) is not \(L_1\)-bounded and, in order to obtain this, we shall need another estimate for \(E\left[ |X_k^{(s)}|\right] \). Since every factor \(1-2p_j\) in (8) lies in (0, 1), we also have

$$\begin{aligned} E\left[ |X_k^{(s)}|\right] \le R_{s+1}(k):= \displaystyle 2 \sum _{j=2}^{k} j^s \end{aligned}$$
(11)

and it can be easily checked that the expression \(R_{s+1}(k)\) is a polynomial in k of degree \(s+1\) with

$$\begin{aligned} \lim _{k\rightarrow \infty }R_{s+1}(k) = + \infty . \end{aligned}$$
(12)

Now, let \(\{X_k\}_k\) be a non-zero element of \(\text {span}\left\{ \{X_k^{(s)}\}_k: s \in \mathcal {S} \right\} \); then:

$$\begin{aligned} X_k = \alpha _1 X_k^{(s_1)} + \alpha _2 X_k^{(s_2)} + \cdots + \alpha _m X_k^{(s_m)}, \end{aligned}$$
(13)

where \(s_1<s_2< \cdots < s_m\) are elements from \(\mathcal {S}\), \(\alpha _1, \ldots , \alpha _m \in \mathbb {R}\), and (without loss of generality) \(\alpha _m \ne 0\). Let us now show that \(\{X_k\}_k\) is not \(L_1\)-bounded. Indeed, using the linearity of \(E[\cdot ]\), the reverse triangle inequality, and Eqs. (10) and (11), we have:

$$\begin{aligned} \begin{array}{lcl} E\left[ |X_k|\right] &{} = &{} E\left[ |\alpha _1 X_k^{(s_1)} + \alpha _2 X_k^{(s_2)} + \cdots + \alpha _m X_k^{(s_m)}|\right] \\ &{} \ge &{} |\alpha _m| \cdot E\left[ |X_k^{(s_m)}|\right] - |\alpha _{1}|\cdot E\left[ |X_k^{(s_{1})}|\right] - \cdots - |\alpha _{m-1}| \cdot E\left[ |X_k^{(s_{m-1})}|\right] \\ &{} \ge &{} \displaystyle |\alpha _m | \cdot (2;s_m)_{\infty } \cdot R_{s_m+1}(k) - 2 |\alpha _1| \sum _{j=2}^{k} j^{s_1} - \cdots - 2 |\alpha _{m-1}| \sum _{j=2}^{k} j^{s_{m-1}}\\ &{} = &{} \displaystyle |\alpha _m | \cdot (2;s_m)_{\infty } \cdot R_{s_m+1}(k) - 2\sum _{i=1}^{m-1} |\alpha _i| \left( \sum _{j=2}^{k} j^{s_i} \right) \mathop {\longrightarrow }\limits ^{k\rightarrow \infty } \infty ,\\ \end{array} \end{aligned}$$
(14)

since the expression \(|\alpha _m | \cdot (2;s_m)_{\infty } \cdot R_{s_m+1}(k)\) is a polynomial in k of degree \(s_m+1\) with

$$\begin{aligned} \displaystyle \lim _{k\rightarrow \infty } R_{s_m+1}(k) = + \infty , \end{aligned}$$
(15)

the expression

$$\begin{aligned} \sum _{i=1}^{m-1} |\alpha _i| \left( \sum _{j=2}^{k} j^{s_i} \right) \end{aligned}$$
(16)

is a polynomial in k of degree \(s_{m-1}+1\), and \(s_{m-1} < s_m\). Therefore, \(\{X_k\}_k\) is not \(L_1\)-bounded, and the result is proved. \(\square \)
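The construction in the proof of Theorem 1 can also be explored numerically. The following sketch (an illustration only, not part of the proof; the choice \(s=3\), the truncation levels, and all function names are ours) simulates one path of the chain defined by Eq. (3), which freezes, and hence converges, as soon as it leaves 0, and evaluates \(E[|X_k^{(s)}|]\) exactly through formula (8), exhibiting its divergence.

```python
import random
from fractions import Fraction

def simulate_path(s, n_steps, seed=0):
    """One path of the Markov chain of Eq. (3): it freezes once it leaves 0."""
    rng = random.Random(seed)
    x, path = 0, [0]
    for n in range(1, n_steps):           # current state is X_n, we draw X_{n+1}
        if x == 0:
            p = 1.0 / s ** (n + 1)
            u = rng.random()
            if u < p:
                x = s ** (n + 1) * (n + 1) ** s
            elif u < 2 * p:
                x = -(s ** (n + 1) * (n + 1) ** s)
        path.append(x)                     # if x != 0 the chain stays constant
    return path

def expected_abs(s, k):
    """Exact value of E[|X_k^{(s)}|] via Eq. (8), using a_j * p_j = j^s."""
    total, prefix = Fraction(0), Fraction(1)
    for j in range(2, k + 1):
        total += prefix * 2 * Fraction(j ** s)    # prefix * 2 * a_j * p_j
        prefix *= 1 - Fraction(2, s ** j)         # (1 - 2 p_2) ... (1 - 2 p_j)
    return float(total)

if __name__ == "__main__":
    print(simulate_path(s=3, n_steps=15))         # eventually constant path
    for k in (5, 10, 20, 40):
        print(k, expected_abs(3, k))              # grows without bound
```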

Remark 1

We point out that the previous result could also be stated in terms of martingales by fixing a common filtration (recall that martingales adapted to the same filtration do form a vector space); the proof would follow the same ideas as that of Theorem 1.

Now, let us continue focusing on lineability-related results for certain sets of random variables enjoying “unexpected” properties. For instance, in [21, Example 9.2], the authors provide (given any \(b >0\)) a sequence of integrable random variables \(\{X_n\}_{n \in \mathbb {N}}\) and an integrable random variable X such that \(X_n\) converges to X pointwise and, yet, \(E[X_n] = -b\) for every \(n \in \mathbb {N}\) and \(E[X]=b\) (the important point being that, under these hypotheses, \(E[X_n] \ne E[X]\) for every \(n \in \mathbb {N}\)). This construction can be generalized in order to obtain a positive cone (see, e.g., [1]) of such elements since, as we show next, full linearity may be lost for elements enjoying this property.

Indeed, let \(\{X_n\}_{n \in \mathbb {N}}\) and \(\{Y_n\}_{n \in \mathbb {N}}\) be sequences of integrable random variables converging pointwise to the integrable random variables X and Y, respectively, and let \(b, c >0\) be such that \(E[X_n] = -b\), \(E[X]=b\), \(E[Y_n] = -c\), and \(E[Y]=c\) for every \(n \in \mathbb {N}\). If \(\alpha , \beta \in \mathbb {R}\) are chosen so that \(\alpha b + \beta c = 0\), then \(E[\alpha X_n + \beta Y_n] = 0 = E[\alpha X + \beta Y]\), and the sequence \(\{\alpha X_n + \beta Y_n\}_n\) does not fall into the class of examples we are working with. Thus, the above property is “not a lineable one”. However, one can try to find a positive cone of such objects, as was done in [1] when certain sets failed to be lineable (such sets were called coneable there). More precisely, a subset M of a topological vector space X is called positively coneable in X if there exists an infinite, linearly independent set \(B \subset M\) whose positive cone (that is, the set of all finite linear combinations of elements of B with positive coefficients) is contained in M.

Theorem 2

Let us consider the probability space \(([0,1],\mathbb {B}([0,1]),\lambda )\), where \(\lambda \) denotes the Lebesgue measure. The set of sequences of integrable random variables \(\{X_n\}_n\) converging to an integrable random variable X such that \(\lim _{n\rightarrow \infty }E[X_n]\ne E[X]\) is positively coneable.

Proof

For every \(m \in \mathbb {N}\), let us take \(B^{(m)}, C^{(m)} > 0\) and let us define the following random variables for every \(\omega \in [0,1]\)

$$\begin{aligned} X^{(m)}(\omega ) = \frac{a_m}{a_m-1} \cdot B^{(m)} \cdot I_{[1/a_m,1]}(\omega )\quad \text{ and } \end{aligned}$$
(17)
$$\begin{aligned} X_n^{(m)}(\omega ) = \left\{ \begin{array}{ll} B^{(m)} + C^{(m)} &{} \text { if } n \le a_m,\\ n \cdot C^{(m)} \cdot I_{[1/a_m - 1/n,1/a_m)}(\omega ) + X^{(m)}(\omega ) &{} \text { if } n > a_m, \end{array} \right. \end{aligned}$$
(18)

where \(\{a_m\}_{m\in \mathbb {N}} \subset \mathbb {N}\) is defined, recursively, as follows:

$$\begin{aligned} a_1 = 2 \quad \text { and } \quad a_{m+1} = (a_m + 1)\cdot a_m \quad \text { for every } m\ge 1. \end{aligned}$$
(19)

This choice of the \(a_m\)’s (made in order to avoid major overlappings) permits us to state that the set of sequences \(\{\{X_n^{(m)}\}_n: m \in \mathbb {N}\}\) is linearly independent when its elements are seen as regular functions in \(\mathbb {R}^{[0,1]}\). For each \(m \in \mathbb {N}\), the sequence \(\{X_n^{(m)}\}_n\) converges to \(X^{(m)}\) pointwise as n tends to infinity. It can be easily seen that \(\{X_n^{(m)}\}_n\) is a sequence of integrable random variables for every \(m \in \mathbb {N}\) and that \(X^{(m)}\) is an integrable random variable, too.

Furthermore, for every \(m\in \mathbb {N}\) and every \(n>a_m\) (the case \(n\le a_m\) being immediate, since then \(X_n^{(m)}\equiv B^{(m)}+C^{(m)}\)) we have

$$\begin{aligned} E[X^{(m)}] = \int _{[0,1]} \frac{a_m}{a_m-1} \cdot B^{(m)} \cdot I_{[1/a_m,1]}(\omega ) d\omega = B^{(m)}, \end{aligned}$$
(20)

and

$$\begin{aligned} E[X_n^{(m)}] = \int _{[0,1]} \left( X^{(m)}(\omega )+n \cdot C^{(m)} \cdot I_{[1/a_m - 1/n,1/a_m)}(\omega )\right) d\omega = B^{(m)} + C^{(m)}. \end{aligned}$$
(21)

We then consider, for each n, the positive cone \(\mathcal {C}_n\) generated by \(\{ X_n^{(m)} : m \in \mathbb {N}\}\); that is, any element \(Y_n \in \mathcal {C}_n\) can be written as \(Y_n = \sum _{i=1}^{k} \alpha _i X_n^{(m_i)}\), where \(\alpha _i >0\) and \(m_i \in \mathbb {N}\) for every \(i \in \{1, \ldots , k\}\). By linearity of \(E[\cdot ]\) we have

$$\begin{aligned} E[Y_n] = E\left[ \sum _{i=1}^{k} \alpha _i X_n^{(m_i)}\right] = \sum _{i=1}^{k} \alpha _i E\left[ X_n^{(m_i)}\right] = \sum _{i=1}^{k} \alpha _i (B^{(m_i)} + C^{(m_i)}), \end{aligned}$$
(22)

and given \(Y= \sum _{i=1}^{k} \alpha _i X^{(m_i)}\), one obtains

$$\begin{aligned} E[Y] = \sum _{i=1}^{k} \alpha _i E\left[ X^{(m_i)}\right] = \sum _{i=1}^{k} \alpha _i B^{(m_i)}, \end{aligned}$$
(23)

which shows that, although \(Y_n\) converges pointwise to Y (by linearity), we have \(E[Y_n] \ne E[Y]\) (more precisely, \(E[Y_n] > E[Y]\)) for every \(n \in \mathbb {N}\). \(\square \)
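As a quick numerical sanity check of the construction above (purely illustrative and not part of the proof; the parameter values \(B^{(m)}=1\), \(C^{(m)}=1/2\), \(m=2\) and the midpoint-rule discretization are our own choices), one can approximate the integrals in Eqs. (20) and (21):

```python
import numpy as np

def a_sequence(m_max):
    """The recursively defined a_m of Eq. (19): a_1 = 2 and a_{m+1} = (a_m + 1) * a_m."""
    a = [2]
    for _ in range(m_max - 1):
        a.append((a[-1] + 1) * a[-1])
    return a

def X_limit(w, a_m, B):
    """The limit random variable X^{(m)} of Eq. (17)."""
    return (a_m / (a_m - 1)) * B * (w >= 1.0 / a_m)

def X_n(w, n, a_m, B, C):
    """The random variable X_n^{(m)} of Eq. (18), in the non-trivial case n > a_m."""
    spike = n * C * ((w >= 1.0 / a_m - 1.0 / n) & (w < 1.0 / a_m))
    return spike + X_limit(w, a_m, B)

if __name__ == "__main__":
    B, C, m = 1.0, 0.5, 2
    a_m = a_sequence(m)[m - 1]                      # a_2 = 6
    w = (np.arange(2_000_000) + 0.5) / 2_000_000    # midpoint rule on [0, 1]
    print("E[X^(m)]   ~", X_limit(w, a_m, B).mean())          # close to B
    for n in (10, 100, 1000):
        print(f"E[X_{n}^(m)] ~", X_n(w, n, a_m, B, C).mean()) # close to B + C
```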

The following result shows the algebrability of the set of unbounded random variables with a finite expected value. The example used for the construction is inspired by [21, Example 5.2].

Theorem 3

Let us consider the measure space \((\mathbb {R}^+,\mathbb {B}(\mathbb {R}^+),\lambda )\), where \(\lambda \) denotes the Lebesgue measure. The set of unbounded random variables \(f:\mathbb {R}^+\rightarrow \mathbb {R}\) that have a finite expected value is algebrable.

Proof

Let us consider the function

$$\begin{aligned} T(x):={\left\{ \begin{array}{ll} 1-x &{}\quad \text{ if } 0\le x \le 1,\\ 0 &{} \quad \text { otherwise.} \end{array}\right. } \end{aligned}$$
(24)

For each \(n\in \mathbb {N}\), we define:

$$\begin{aligned} f_n(x):=nT(n^3(x-n)) \end{aligned}$$
(25)

Each function \(f_n\) is null except in the interval \(J_n:=\left[ n,n+\frac{1}{n^3}\right] \). Moreover,

$$\begin{aligned} \int _{J_n}f_n(x)dx=\frac{1}{2n^2}. \end{aligned}$$
(26)

and then, the random variable defined as

$$\begin{aligned} X(x):=\sum _{n=1}^\infty f_n(x) \end{aligned}$$
(27)

has expected value \(E[X]=\sum _{n=1}^\infty \frac{1}{2n^2}=\frac{\pi ^2}{12}\).

Let us consider the Cantor set on the unit interval obtained as \(C=\cap _{n=0}^\infty I_n\), where \(I_0=[0,1]\) and, for \(n\ge 1\), \(I_n\) is obtained from \(I_{n-1}\) by removing the open middle third of each of its component subintervals. Let us define \(L_n:=J_n\cap (n+I_n)\). Then, we have

$$\begin{aligned} \int _{L_n}f_n(x)dx=\frac{1}{2n^2}\left( \frac{2}{3}\right) ^n \quad \text { for every }n\in \mathbb {N}. \end{aligned}$$
(28)

Let \(\{\alpha _l\}_{l\in \Lambda }\) be an uncountable set of irrational numbers in (0, 1) which are linearly independent over \(\mathbb {Q}\). For every \(\alpha \in \{\alpha _l\}_{l\in \Lambda }\) we define the functions:

$$\begin{aligned} X_n^{(\alpha )}(x)= {\left\{ \begin{array}{ll} f_n(x-\alpha ) &{}\quad \text { if } x\in \alpha +L_n,\\ 0 &{} \quad \text { elsewhere}. \end{array}\right. } \end{aligned}$$
(29)

and then, we consider the random variable

$$\begin{aligned} X_{\alpha }(x):=\sum _{n=1}^\infty X_n^{(\alpha )}(x). \end{aligned}$$
(30)

Consider the algebra \(\mathcal {A}(\{X_\alpha \}_{\alpha \in \Lambda })\) generated by these functions. It is clear that, for every \(\alpha \in \{\alpha _l\}_{l\in \Lambda }\), the random variable \(X_\alpha \) is unbounded and has a finite expected value. Besides, this algebra is uncountably generated.

Consider now a non-zero function of the form

$$\begin{aligned} Y(x):=\sum _{m=1}^{m_0}\lambda _mX_{\alpha _m}(x), \quad \text { with } \alpha _m\in \Lambda \text { and } \lambda _m\in \mathbb {R}\text { for all }m\in \{1,\ldots ,m_0\}. \end{aligned}$$
(31)

These random variables are unbounded, too. Indeed, let \(\alpha _{min}:=\min \{\alpha _m\,:\,1\le m\le m_0 \text { and } \lambda _m\ne 0\}\), attained at the index \(m'\); then \(Y(n+\alpha _{min})=\lambda _{m'}\, n\) for every \(n\in \mathbb {N}\), and \(\lambda _{m'}\ne 0\). Additionally, Y has a finite expected value as well, since \(E[|Y|]\le \sum _{m=1}^{m_0}|\lambda _m|\, E[X_{\alpha _m}]<\infty \). \(\square \)
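The following short sketch (again only an illustration, under our own choice of truncation level) evaluates the construction of Eqs. (24)–(27) numerically: the function X takes the value n at every integer n (hence it is unbounded on \(\mathbb {R}^+\)), while the partial sums of its integral approach \(\pi ^2/12\).

```python
import math

def T(u):
    """The tent function of Eq. (24)."""
    return 1.0 - u if 0.0 <= u <= 1.0 else 0.0

def X(x, N=100_000):
    """Truncated version of Eq. (27): only f_n with n = floor(x) can be non-zero at x,
    since f_n(x) = n * T(n^3 * (x - n)) is supported in [n, n + 1/n^3]."""
    n = int(math.floor(x))
    return n * T(n ** 3 * (x - n)) if 1 <= n <= N else 0.0

if __name__ == "__main__":
    print([X(float(n)) for n in (1, 10, 100, 1000)])   # X(n) = n: unbounded
    # E[X] = sum_{n >= 1} 1/(2 n^2) = pi^2 / 12 (Eq. (26) and the Basel problem):
    print(sum(1.0 / (2 * n * n) for n in range(1, 100_000)), "vs", math.pi ** 2 / 12)
```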

Remark 2

Let us point out that, in the previous result, the constructed random variables remain unbounded outside every interval of finite length, which adds an extra pathology to the property considered.

For the final part of this paper, let us recall the work [20] (see also [21, Example 9.17]), in which Walsh provided an example of a martingale (indexed by a directed set) that is \(L_2\) bounded, converges in \(L_2\) and, also, does not converge at any point off a null set. Our aim here is to generalize this example in order to build an infinite dimensional linear space every non-zero element of which is a stochastic process enjoying the previous property. Before proceeding to the proof, we need to recall the following lemma (due to Muñoz, Palmberg, Puglisi, and the second author), which is a particular case of [19, Theorem 3.5]. In what follows \((\ell _p, \Vert \cdot \Vert _p)\) denotes the Banach space of real valued sequences with the usual p-norm.

Lemma 1

The set \(\ell _2 {\setminus }\ell _1\) is lineable.

Theorem 4

The set of stochastic processes that are \(L_2\) bounded, converge in \(L_2\), and do not converge at any point off a null set is lineable.

Proof

By Lemma 1, let V be any (countably generated) linear space contained in \((\ell _2 {\setminus } \ell _1) \cup \{0\}\) and let \(\left\{ \{h^{(m)}_n\}_n : m \in \mathbb {N}\right\} \) be a basis of V. For instance, and for the sake of clarity in the coming construction, we can take (see [19, Theorem 3.5])

$$\begin{aligned} V = \text{ span } \left\{ h^{(m)}_n:= \left\{ \frac{1}{n^{m}}\right\} _{n \in \mathbb {N}} : m \in \mathbb {Q} \cap \left( \frac{1}{2}, 1 \right) \right\} . \end{aligned}$$
(32)

For every \(m \in \mathbb {Q} \cap \left( \frac{1}{2}, 1 \right) \), let \(\{X^{(m)}_{n}\}_n\) be a sequence of mutually independent random variables such that

$$\begin{aligned} P\left( X^{(m)}_{n}=-1\right) = P\left( X^{(m)}_{n}=1\right) = 1/2. \end{aligned}$$
(33)

for every \(n\in \mathbb {N}\). We assume, in addition (as we may, for instance, by taking the whole family \(\{X^{(m)}_{n}: m \in \mathbb {Q} \cap \left( \frac{1}{2}, 1 \right) ,\, n \in \mathbb {N}\}\) mutually independent), that the set of sequences \(\left\{ \{X^{(m)}_{n}\}_n : m \in \mathbb {Q} \cap \left( \frac{1}{2}, 1 \right) \right\} \) is linearly independent.

By construction, one has that \(\sum _{n \in \mathbb {N}} h^{(m)}_n X^{(m)}_{n}\) converges almost surely for every \(m \in \mathbb {Q} \cap \left( \frac{1}{2}, 1 \right) \); this follows, for instance, from Kolmogorov’s convergence criterion for series of independent, zero-mean random variables, since \(\sum _{n \in \mathbb {N}} \left( h^{(m)}_n\right) ^2 < \infty \).

Let D be the family of all finite subsets of \(\mathbb {N}\), partially ordered by set inclusion, which is a directed set. For every \(d\in D, m \in \mathbb {Q} \cap \left( \frac{1}{2}, 1 \right) \) we define:

$$\begin{aligned} \displaystyle M^{(m)}_d = \sum _{n \in d} h^{(m)}_n X^{(m)}_{n}. \end{aligned}$$
(34)

Therefore, for every \(m \in \mathbb {Q} \cap \left( \frac{1}{2}, 1 \right) \), and with respect to its own filtration, it can be easily checked that \(\{M^{(m)}_d : d \in D\}\) is a martingale and that, as a net, it converges in \(L_2\) (and hence in probability). By construction, we also have that the set \(\{(M^{(m)}_d)_{d \in D} : m \in \mathbb {Q} \cap \left( \frac{1}{2}, 1 \right) \}\) is linearly independent and, by linearity, any non-zero element in \(W := \text{ span }\{(M^{(m)}_d)_{d \in D} : m \in \mathbb {Q} \cap \left( \frac{1}{2}, 1 \right) \}\) also converges in \(L_2\).

However, we will regard the elements of \(\{(M^{(m)}_d)_{d \in D} : m \in \mathbb {Q} \cap \left( \frac{1}{2}, 1 \right) \}\) simply as stochastic processes (dropping the filtration). Moreover, any element in W is also \(L_2\)-bounded, since (for every \(m \in \mathbb {Q} \cap \left( \frac{1}{2}, 1 \right) \) and every \(d \in D\)) we have

$$\begin{aligned} E\left[ (M^{(m)}_d)^2\right] = \displaystyle \sum _{n \in d} \left( h^{(m)}_n\right) ^2 \le \sum _{n=1}^{\infty } \left( h^{(m)}_n\right) ^2 < \infty , \end{aligned}$$

since the set \(\left\{ \{h^{(m)}_k\}_{k\in \mathbb {N}} : m \in \mathbb {Q} \cap \left( \frac{1}{2}, 1 \right) \right\} \) is contained in \(\ell _2 {\setminus } \ell _1\). However, notice that (for every \(m \in \mathbb {Q} \cap \left( \frac{1}{2}, 1 \right) \) and every \(\omega \)) the net \(\left( M^{(m)}_d (\omega )\right) _{d \in D}\) converges only if the series \(\sum _n h^{(m)}_n X^{(m)}_{n}(\omega )\) converges regardless of the order of summation (that is, absolutely), but

$$\begin{aligned} \sum _{n=1}^{\infty } \left| h^{(m)}_n X^{(m)}_{n}\right| = \sum _{n=1}^{\infty } \left| h^{(m)}_n \right| = \sum _{n=1}^{\infty } 1/n^m= \infty . \end{aligned}$$

It only remains to show that, for any distinct \(m_1, m_2, \ldots , m_q \in \mathbb {Q} \cap \left( \frac{1}{2}, 1 \right) \) and \(\alpha _1, \ldots , \alpha _q \in \mathbb {R}\) not all zero,

$$\begin{aligned} \lim _{s \rightarrow \infty }\sum _{n=1}^{s} \left| \alpha _1 h^{(m_1)}_n X^{(m_1)}_{n} + \alpha _2 h^{(m_2)}_n X^{(m_2)}_{n} + \cdots + \alpha _q h^{(m_q)}_n X^{(m_q)}_{n}\right| = \infty . \end{aligned}$$

Indeed, if we apply the reverse triangle inequality to the above expression assuming, without loss of generality (discarding null coefficients and relabelling, if necessary), that \(m_1< m_2< \cdots < m_q\) and \(\alpha _1\ne 0\), we obtain

$$\begin{aligned}&\sum _{n=1}^{s} \left| \alpha _1 h^{(m_1)}_n X^{(m_1)}_{n} + \alpha _2 h^{(m_2)}_n X^{(m_2)}_{n} + \cdots + \alpha _q h^{(m_q)}_n X^{(m_q)}_{n}\right| \nonumber \\&\quad \ge \sum _{n=1}^{s}\left( |\alpha _1| \frac{1}{n^{m_1}} - |\alpha _2| \frac{1}{n^{m_2}} - \cdots - |\alpha _q| \frac{1}{n^{m_q}}\right) \end{aligned}$$

and this last sum is divergent to \(+\infty \), by construction and by the above definition of V. \(\square \)
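The dichotomy exploited in the previous proof can be visualized with the following sketch (an illustration of the case \(h^{(m)}_n = 1/n^m\) with our own choices of \(m=3/4\), truncation N, and random seed; it is not part of the argument): the ordered partial sums of \(\sum _n h^{(m)}_n X^{(m)}_n\) settle down and \(\sum _n (h^{(m)}_n)^2\) stays bounded, while \(\sum _n |h^{(m)}_n|\) diverges, which is what destroys convergence of the net indexed by the finite subsets of \(\mathbb {N}\).

```python
import random

def ordered_partial_sums(m, N, seed=0):
    """Partial sums of sum_n X_n / n^m, where the X_n are independent +-1 signs."""
    rng = random.Random(seed)
    s, out = 0.0, []
    for n in range(1, N + 1):
        sign = 1.0 if rng.random() < 0.5 else -1.0
        s += sign / n ** m
        out.append(s)
    return out

if __name__ == "__main__":
    m, N = 0.75, 100_000
    sums = ordered_partial_sums(m, N)
    print("ordered partial sums:", sums[999], sums[9_999], sums[N - 1])       # stabilize
    print("sum of h_n^2:", sum(1.0 / n ** (2 * m) for n in range(1, N + 1)))  # bounded
    print("sum of |h_n|:", sum(1.0 / n ** m for n in range(1, N + 1)))        # diverges
```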

Now, we would like to consider another interesting property of random variables. In [21], the authors show that there exists a sequence \(\{X_n : n \in \mathbb {N}\}\) of mutually independent random variables, having zero mean, and such that \(\left| \frac{1}{n} \sum _{i=1}^{n} X_i\right| \) diverges to \(\infty \) almost surely. Moreover, this example can be extended in order to obtain lineability, as our following result states.

Theorem 5

Given a common probability space \((\Omega , \mathcal {F}, P)\), the set of sequences \(\{X_n : n \in \mathbb {N}\}\) of mutually independent random variables having zero mean and such that \(\left| \frac{1}{n} \sum _{i=1}^{n} X_i\right| \) diverges to \(\infty \) (almost surely) is lineable.

Proof

For every \(s \in \mathbb {N}\) with \(s\ge 2\), let \(\{X_{n}^{(s)}: n \in \mathbb {N}\}\) be a sequence of mutually independent random variables, the family of sequences \(\left\{ \{X_{n}^{(s)}\}_n : s \in \mathbb {N},\, s\ge 2\right\} \) being chosen linearly independent, and such that

$$\begin{aligned} P\left( X_{n}^{(s)} = -n^s\right) = 1 - \frac{1}{n^{2s}}\quad \text{ and } \end{aligned}$$
$$\begin{aligned} P\left( X_{n}^{(s)} = n^{3s}-n^s\right) = \frac{1}{n^{2s}}. \end{aligned}$$

It is easy to check that, by construction,

$$\begin{aligned} E[X_n^{(s)}] = 0 \quad \text {and} \end{aligned}$$
(35)
$$\begin{aligned} \frac{X_n^{(s)}}{n} \longrightarrow -\infty \quad \text{ almost } \text{ surely } \end{aligned}$$
(36)

for every \( s, n \in \mathbb {N}\) with \(s\ge 2\). From Eq. (36) it follows that (for every \(s\ge 2, s \in \mathbb {N}\))

$$\begin{aligned} \frac{1}{n} \sum _{i=1}^{n} X^{(s)}_i \longrightarrow -\infty \quad \text{ a.s. } \end{aligned}$$
(37)

Indeed, take \(s\ge 2, s \in \mathbb {N}\), and let \(\Omega _1 = \{\omega \in \Omega : X^{(s)}_n(\omega ) = -n^s \text { for every } n\in \mathbb {N}\}\) and \(\Omega _2 = \Omega {\setminus } \Omega _1\). Now, if \(\omega \in \Omega _1\), we have \(X^{(s)}_k(\omega ) = -k^s\) for every \(k\in \mathbb {N}\), obtaining that, as \(n \rightarrow \infty \), Eq. (37) holds.

Let now \(V=\text {span} \left\{ \{X_{n}^{(s)}\}_n : s\ge 2, s \in \mathbb {N}\right\} \) and let \(\{Y_n\}_n\) be a non-zero element of V. Thus, \(Y_n\) can be written as

$$\begin{aligned} Y_n = \sum _{i=1}^{N} \alpha _i X_n^{(s_i)}, \end{aligned}$$

for some \(N \in \mathbb {N}\), \(s_i \in \mathbb {N}\), \(2 \le s_1< s_2<\cdots < s_N\), and \(\alpha _i \in \mathbb {R}\) for every \(i \in \{1, 2, \ldots , N\}\) with \(\alpha _N \ne 0\). By the linearity of \(E[\cdot ]\), and Eq. (35), we have that

$$\begin{aligned} \begin{array}{lcl} E[Y_n]= & {} \displaystyle \sum _{i=1}^{N} \alpha _i E[X_n^{(s_i)}] = \sum _{i=1}^{N} \alpha _i 0 = 0. \end{array} \end{aligned}$$

Also, notice that

$$\begin{aligned} \begin{array}{lcl} \left| \displaystyle \frac{1}{n} \sum _{k=1}^{n} Y_k\right| &{} = &{} \displaystyle \left| \frac{\alpha _1}{n} \cdot \sum _{k=1}^{n} X_k^{(s_1)} + \cdots + \frac{\alpha _N}{n} \cdot \sum _{k=1}^{n} X_k^{(s_N)} \right| \\ &{} \ge &{} \displaystyle \frac{|\alpha _N|}{n} \left| \sum _{k=1}^{n} X_k^{(s_N)}\right| - \frac{|\alpha _{N-1}|}{n} \left| \sum _{k=1}^{n} X_k^{(s_{N-1})}\right| - \cdots - \frac{|\alpha _1|}{n} \left| \sum _{k=1}^{n} X_k^{(s_1)}\right| . \end{array} \end{aligned}$$

From the previous inequality, the fact that \(s_N> s_{N-1}> \cdots > s_1 \ge 2\), and Eqs. (36) and (37), it can be seen that \(\left| \frac{1}{n} \sum _{k=1}^{n} Y_k\right| \rightarrow \infty \) a.s., and the claim holds for \(\omega \in \Omega _1.\) The case \(\omega \in \Omega _2\) follows in a similar fashion, using that (by the Borel–Cantelli lemma, since \(\sum _n n^{-2s} < \infty \)) almost every \(\omega \) satisfies \(X_n^{(s)}(\omega ) = -n^s\) for all but finitely many n; thus, we spare the details of the calculations involved. \(\square \)
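Finally, the behaviour described in Theorem 5 can be simulated as follows (a sketch for the single generator \(s=2\), with our own truncation and seed; it merely illustrates Eq. (37) and is not part of the proof): along a typical path, the sample mean \(\left| \frac{1}{n}\sum _{i=1}^{n} X_i^{(s)}\right| \) grows roughly like \(n^{s}/(s+1)\).

```python
import random

def sample_X(n, s, rng):
    """One draw of X_n^{(s)}: n^{3s} - n^s with probability n^{-2s}, and -n^s otherwise."""
    return n ** (3 * s) - n ** s if rng.random() < n ** (-2 * s) else -(n ** s)

def abs_sample_means(s, N, seed=0):
    """|(1/n) * (X_1 + ... + X_n)| along one simulated path, for n = 1, ..., N."""
    rng = random.Random(seed)
    total, out = 0, []
    for n in range(1, N + 1):
        total += sample_X(n, s, rng)
        out.append(abs(total) / n)
    return out

if __name__ == "__main__":
    means = abs_sample_means(s=2, N=10_000)
    for n in (10, 100, 1_000, 10_000):
        print(n, means[n - 1])        # diverges, roughly like n^2 / 3 for s = 2
```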