1 Introduction

It is well known that the concept of Brownian motion is one of the most important in probability theory and its applications, such as stock markets, financial mathematics, mathematical statistics, uncertainty, decision making, and fractal analysis in medical imaging.

The starting point of the present research is given by the papers [2, 13, 14, 16, 17, 22, 24, 27, 30], in which stochastic integration is studied in partially ordered spaces or in the fuzzy set valued case. The literature in this field is rich; we can cite, for example, [4,5,6,7,8,9, 15, 18,19,20, 23, 26, 28, 29, 33].

Here the notion of set valued Brownian motion is introduced and studied for the case of compact convex subsets of a Banach space X. In the scalar valued case, multiplication and subtraction play a prominent role in the definition of Brownian motion. In the set valued case, multiplication poses a problem, as we are not aware of a canonical multiplication of convex compact sets that yields a set with the same properties. There are, however, known canonical subtraction operators for such sets ([25] and the references therein), so subtraction seems easier to deal with. Our study is motivated by the facts that very little is known about Brownian motion in the set valued case, and that the known examples of set valued stochastic integration do not really deal with a set valued Brownian motion that would be a canonical analogue of the scalar valued case.

The paper is organized as follows: in Sect. 2 the basic properties of the hyperspace ck(X) and its embedding in C(K) are introduced. Since a difference and a multiplicative structure are needed in order to properly define a set valued Brownian motion, the embedding and the Riesz structure of C(K) are used. For this reason the theory of integration in vector lattices is very important and useful; see for example [3, 11, 30,31,32].

In Sect. 3 examples of ck(X)-valued Brownian motion are given, together with some properties and some characterizations involving pointwise martingales and Gaussian processes. Moreover the quadratic variation is introduced and a reverse result of Lévy type is obtained. In Sect. 4 a possible extension to arbitrary Banach lattices is given: this is done in the more abstract framework of [16, 17], with the purpose of comparing the two types of construction in the particular case discussed here, where the Banach lattice is C(K). In this last case, when K is Stonian, a direct Lévy-type result moreover follows. In the appendix a characterization of the generalized Hukuhara difference which extends [25] is introduced.

2 Preliminaries

We recall from [10, Chapter II] the following notations that will be used in the present paper. Let X be a Banach space with its dual \(X^*\) and let ck(X) be the subfamily of \(2^X {\setminus } \emptyset \) of all compact, convex subsets of X.

As in [10] for all \(A,B \in ck(X)\) and \(\lambda \in {\mathbb {R}}\) the Minkowski addition and scalar multiplication are defined as

$$\begin{aligned} A + B = \{ a+b: a \in A, b \in B \}, \,\,\, \hbox {and} \,\,\,\, \lambda A = \{ \lambda a : a \in A \} \end{aligned}$$
(1)
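As an illustration (not part of the formal development), the Minkowski operations (1) can be sketched numerically for finite subsets of \({\mathbb {R}}^n\); the function names below are our own:

```python
def minkowski_sum(A, B):
    # A + B = {a + b : a in A, b in B}, for finite subsets of R^n
    return {tuple(x + y for x, y in zip(a, b)) for a in A for b in B}

def scale(lam, A):
    # lam * A = {lam * a : a in A}
    return {tuple(lam * x for x in a) for a in A}

# two orthogonal segments (as vertex sets); their Minkowski sum is the vertex set of the unit square
A = {(0.0, 0.0), (1.0, 0.0)}
B = {(0.0, 0.0), (0.0, 1.0)}
```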

Let H be the corresponding Hausdorff metric on ck(X), i.e.

$$\begin{aligned} H(A,B)=\max (e_d(A,B),e_d(B,A)) \end{aligned}$$

where the excess \(e_d(A,B)\) of the set A over the set B is defined as

$$\begin{aligned} e_d(A,B)=\sup \{ d(a,B):a\in A \}=\sup \{ \inf _{b\in B}d(a,b): a\in A\}. \end{aligned}$$
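A minimal numerical sketch of the excess and of the Hausdorff metric, for finite point sets in the plane (an illustration only; the names are ours):

```python
import math

def excess(A, B):
    # e(A, B) = sup_{a in A} inf_{b in B} d(a, b)
    return max(min(math.dist(a, b) for b in B) for a in A)

def hausdorff(A, B):
    # H(A, B) = max(e(A, B), e(B, A))
    return max(excess(A, B), excess(B, A))

# the excess is not symmetric: A is contained in B here, so e(A, B) = 0 while e(B, A) > 0
A = [(0.0, 0.0)]
B = [(0.0, 0.0), (3.0, 4.0)]
```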

It is known that the family ck(X) endowed with the Hausdorff metric is a complete metric space. For every \(C \in ck(X)\), the support function of C is denoted by \(s( \cdot , C)\) and is defined by \(s(x^*, C) = \sup \{ \langle x^*,c \rangle : \ c \in C\}\) for each \(x^* \in X^*\).

Clearly, the map \(x^* \longmapsto s(x^*, C)\) is sublinear on \(X^*\) and

$$\begin{aligned} -s(-x^*, C) = \inf \{ \langle x^*,c \rangle : \ c \in C\}, \,\,\hbox { for each }\,\, x^* \in X^*. \end{aligned}$$
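For a polytope the supremum defining the support function is attained at a vertex, so \(s(\cdot ,C)\) and the identity \(-s(-x^*,C)=\inf \{\langle x^*,c\rangle : c\in C\}\) can be sketched numerically (illustration only; `support` is our own name):

```python
def support(xstar, C):
    # s(x*, C) = sup{ <x*, c> : c in C }; for a polytope the sup is attained at a vertex
    return max(sum(u * v for u, v in zip(xstar, c)) for c in C)

# C = triangle with vertices (0,0), (1,0), (0,1)
C = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
s_up = support((1.0, 0.0), C)      # s(x*, C)
s_low = -support((-1.0, 0.0), C)   # -s(-x*, C) = inf{ <x*, c> : c in C }
```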

The following theorem holds:

Theorem 2.1

([21, Theorem 5.7]) Let X be a Banach space; then there exist a compact (Stonian) Hausdorff space K and a map \(j: ck(X) \rightarrow C(K)\) such that

(2.1.a) :

\(j(\alpha A + \beta C) = \alpha j(A) + \beta j(C)\) for all \(A,C \in ck(X)\) and \(\alpha , \beta \in {\mathbb {R}}^+\),

(2.1.b) :

\(d_H(A,C) = \Vert j(A) - j(C) \Vert _{\infty }\) for every \(A,C \in ck(X)\),

(2.1.c) :

j(ck(X)) is norm closed in C(K),

(2.1.d) :

\(j(\hbox {co}(A \cup C)) = \max \{j(A), j(C) \}\), for all \(A,C \in ck(X)\).

The Rådström embedding \(\widetilde{j(ck(X))}\) of ck(X) is given by \(j: ck(X) \rightarrow \widetilde{j(ck(X))}\), where \( j(C)= s(\cdot , C) \text{ for } \text{ all } C\in ck(X) \) and \(\widetilde{j( ck(X)) }\) is the closure of the span of \(\{s(\cdot , C) : C\in ck(X)\}\) in \((C(B_{X^*}), \sigma (X^*,X))\). Here \(C(B_{X^*}) =\{ f:B_{X^*} \rightarrow {\mathbb {R}}: f \text{ is } \text{ continuous } \}\), \(B_{X^*}\) denotes the unit ball of \(X^*\) and \(\sigma (X^*,X)\) denotes the weak\(^*\) topology on \(X^*\).

The bounded-weak-star (bw*) topology is the strongest topology on \(X^*\) which coincides with the weak\(^*\) topology on every ball \(B^r_{X^*}:= \{ f \in X^* : \Vert f\Vert \le r\}\). Let \(\mathfrak {B}( ck(X))\) be the Borel \(\sigma \)-algebra on \((ck(X), d_H)\).

In order to define a multivalued Brownian motion, a multiplication and a difference in ck(X) are needed. As for the difference, see the Appendix (however we shall always consider the difference \(B_1-B_2\) of two convex and compact sets as the element \(j(B_1)-j(B_2)\) in C(K)); on the other hand, to access the averaging properties of conditional expectation operators a multiplicative structure is needed. In the Riesz space setting the most natural multiplicative structure is that of an f-algebra (see for example [1]). This gives a multiplicative structure that is compatible with the order and additive structures on the space.

The ideal \(E^e\) of E generated by e, where e is a weak order unit of E and E is Dedekind complete, has a natural f-algebra structure. This is constructed by setting \((Pe) \cdot (Qe) = PQe = (Qe) \cdot (Pe)\) for band projections P and Q, and extending to \(E^e\) by use of Freudenthal’s Theorem. In fact this process extends the multiplicative structure to the universal completion \(E^u\) of E. This multiplication is associative, distributive and positive, in the sense that if \(x, y \in E^+\) then \(xy \ge 0\). Here e is the multiplicative unit.

Thus the multiplication operation \(\cdot : ck(X) \times ck(X) \rightarrow C(K)\) can be defined by:

$$\begin{aligned} A \cdot B =j(A) \cdot j(B). \end{aligned}$$

If X is finite dimensional then \(j(B_X) \cdot j(B)= j(B)\), and \(B_X \cdot B\) exists not only in C(K) but also in ck(X) (and of course coincides with B).

3 ck(X)-valued Brownian motion

Now we shall introduce a Brownian motion taking values in the space \(ck_r(X)\), where X is any Banach space. (Here the notation \(ck_r(X)\) means all the indicator functions of the type \(r1_B\), as r varies in \({\mathbb {R}}\) and B in ck(X).)

In order to do this, let us denote by e the unit function in C(K). In case X is finite-dimensional, \(e=j(B_X)\), the image of the unit ball of X.

Definition 3.1

Let S denote the hyperspace we are interested in, i.e. \(ck_r(X)\), and let \((B_t)_t\) be a process taking values in S, namely for every \(t \ge 0\) \(B_t : \varOmega \rightarrow S \subset C(K)\). This process will be called set-valued Brownian motion if the following conditions are satisfied:

(3.1.1) :

There exists an f-algebra L such that \(B_t(\omega )\in L\) for each \(\omega \in \varOmega \) and each \(t>0\);

(3.1.2) :

\(B_t, B_t^2\) are C(K)-valued Bochner integrable functions for each \(t > 0\);

(3.1.3) :

For every evaluation functional \(f \in C(K)^*\), the process \(f(B_t)_t\) is a standard real Brownian motion.

We recall that an evaluation functional f associates to every \(x\in C(K)\) the value x(k) for some fixed \(k\in K\).

Example 3.2

The following is an example of a set-valued Brownian motion, when X is finite-dimensional: \( (B_t)_t = (W_t e )_t \), where \((W_t)_t\) is the standard scalar Brownian motion and \(e=j(B_X)\) is the image of the unit ball of X. Then for every \(f \in C(K)^* \) such that \(f(e)= 1\) it is

$$\begin{aligned} f (W_t e )_t = (W_t f(e))_t = (W_t)_t. \end{aligned}$$

So, for every elementary event \(\omega \), \(B_t(\omega ) \in j(S)\) if \(W_t(\omega ) > 0\), while \(B_t(\omega ) \in -j(S)\) otherwise.

Next, for every real number t and every element \(B\in ck(X)\), the notation tB represents the indicator function \(t1_B\).

Finally, if \((W_t)_{t>0}\) denotes the standard Brownian motion, and if we set \(V_t:=W_te\) for each positive t, then we have shown that \((V_t)_{t>0}\) is a Brownian motion taking values in \(S:=ck_r(X)\) (or in j(S) after embedding).
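As a numerical sanity check of this example (a sketch, not part of the paper): under any evaluation functional f with \(f(e)=1\), the process \((W_te)_t\) reduces to a scalar Brownian motion, so the endpoints \(W_1\) should be centered with variance 1; the sample sizes and seed below are our own choices:

```python
import random

# f(B_t) = f(W_t e) = W_t f(e) = W_t ~ N(0, t); sample endpoints W_1 of many paths
rng = random.Random(1)
samples = [rng.gauss(0.0, 1.0) for _ in range(5000)]
mean = sum(samples) / len(samples)          # should be close to 0
var = sum(x * x for x in samples) / len(samples)   # should be close to t = 1
```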

From now on, let \((\varOmega ,\mathcal {A},P)\) denote any fixed probability space, with a \(\sigma \)-algebra \(\mathcal {A}\) and a countably additive probability measure P.

Definition 3.3

Let \(\varGamma : \varOmega \rightarrow ck(X)\) be a measurable function. Define

$$\begin{aligned} P_{\varGamma }(B)=P(\varGamma (\omega )\in B) \text{ for } \text{ all } B\in \mathfrak {B}( ck(X)) \end{aligned}$$

and

$$\begin{aligned} F_{\varGamma }(Y)=P(\varGamma (\omega ) \subset Y) \hbox { for all } Y\in ck(X). \end{aligned}$$

Then \(P_{\varGamma }: \mathfrak {B}(ck(X)) \rightarrow [0,1]\) is a probability measure (the probability distribution of \(\varGamma \)), and \(F_{\varGamma }: {\mathrm{ck}}(X) \rightarrow [0,1]\) is its distribution function.

Proposition 3.4

Let \(\varGamma : \varOmega \rightarrow {\mathrm{ck}}(X)\) be a measurable set-function. Then

$$\begin{aligned} F_{\varGamma }= F_{j\circ \varGamma } (j(\cdot )). \end{aligned}$$

Proof

It is

$$\begin{aligned} F_{\varGamma } (Y) = P( \varGamma \subset Y) = P(j(\varGamma ) \le j(Y)) = F_{j(\varGamma )} (j(Y)), \end{aligned}$$

since \(\varGamma \subset Y \Longleftrightarrow j(\varGamma ) \le j(Y)\).

\(\square \)

Example 3.5

Let us assume that \(X_1\) and \(X_2\) are two real-valued random variables, \(X_1\le X_2\), and consider the variable \(\varGamma :=[X_1,X_2]\) taking values in the hyperspace \(ck({\mathbb {R}})\). Now, when Y is an element of \(ck({\mathbb {R}})\), i.e. \(Y=[y_1,y_2]\), the condition \(\varGamma \subset Y\) means \([X_1\ge y_1,X_2\le y_2]\), and so

$$\begin{aligned} F_{\varGamma }(Y)=P([y_1\le X_1\le X_2\le y_2]). \end{aligned}$$

On the other hand, in this situation, the unit sphere of the dual space of \({\mathbb {R}}\) is simply the set \(\{-1,1\}\), and, for every set \([a,b]\in ck({\mathbb {R}})\), one has

$$\begin{aligned} s(x^*,[a,b])=\left\{ \begin{array}{rr} -a,&{} x^*=-1\\ &{} \\ \ b,&{} x^*=1 \end{array}\right. \end{aligned}$$

Hence, one can write \(j([a,b])=(-a,b)\) as soon as \(a,b\in {\mathbb {R}},\ a\le b\). Then \(j(\varGamma )=(-X_1,X_2)\) and \(j(Y)=(-y_1,y_2)\): the condition \(j(\varGamma )\le j(Y)\) now means

$$\begin{aligned} -X_1\le -y_1, \quad X_2\le y_2 \end{aligned}$$

and again one has

$$\begin{aligned} F_{j(\varGamma )}(j(Y)) = P([y_1\le X_1\le X_2\le y_2]) = F_{\varGamma }(Y). \end{aligned}$$

Let \(X_1, Z\) be two independent random variables with distribution \(\varGamma (1,\lambda )\) (i.e. exponential with rate \(\lambda \)), and denote \(X_2:=X_1+Z\).

Then clearly \(0\le X_1\le X_2\), and \(\underline{X}= [X_1,X_2]\) defines a \(ck({\mathbb {R}})\)-valued variable. In order to compute its distribution function, fix arbitrarily \(y_1\) and \(y_2\) in \({\mathbb {R}}\), with \(0\le y_1\le y_2\). Then

$$\begin{aligned} F_{\underline{X}}([y_1,y_2])= & {} P([y_1\le X_1\le X_2\le y_2]) =\int _{y_1}^{y_2}\left( {\int _0^{y_2-x}}f_Z(z)dz\right) f_{X_1}(x)dx\\= & {} \lambda ^2\int _{y_1}^{y_2}{\int _0^{y_2-x}}e^{-\lambda x}e^{-\lambda z}dz dx. \end{aligned}$$

Simple computations give finally

$$\begin{aligned} F_{\underline{X}}([y_1,y_2])= & {} e^{-\lambda y_1}-e^{-\lambda y_2}+\lambda (y_1-y_2)e^{-\lambda y_2} \\= & {} F_{(-X_1,X_2)} (-y_1,y_2). \end{aligned}$$
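The closed form above can be checked by Monte Carlo simulation (an illustrative sketch; function names, sample size and seed are our own choices):

```python
import math, random

def F_closed(y1, y2, lam):
    # F([y1, y2]) = e^{-lam y1} - e^{-lam y2} + lam (y1 - y2) e^{-lam y2}
    return math.exp(-lam * y1) - math.exp(-lam * y2) + lam * (y1 - y2) * math.exp(-lam * y2)

def F_mc(y1, y2, lam, n, rng):
    # P(y1 <= X1 and X1 + Z <= y2), with X1, Z independent Exp(lam)
    hits = 0
    for _ in range(n):
        x1, z = rng.expovariate(lam), rng.expovariate(lam)
        hits += (y1 <= x1 and x1 + z <= y2)
    return hits / n

rng = random.Random(0)
exact = F_closed(0.5, 2.0, 1.0)
approx = F_mc(0.5, 2.0, 1.0, 200000, rng)
```

For \(y_1=0\) the formula reduces to \(1-e^{-\lambda y_2}-\lambda y_2 e^{-\lambda y_2}\), the CDF of \(X_1+Z\sim \varGamma (2,\lambda )\), as expected.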

In Fig. 1 the plot of the distribution function \(F_{\underline{X}}\) for \(\lambda =1\) is given.

In [27] the set valued Gaussian distribution is defined to satisfy the condition \(F_{\varGamma }=F_{j\circ \varGamma }(j(\cdot ))\).

Theorem 3.6

Assume that \(W_t: \varOmega \rightarrow ck(X)\) is a weakly continuous L-valued function of \(t\ge 0\) that satisfies \(W_0=0\). Moreover suppose that \(W_t\) and \(W^2_t\) are Bochner integrable for each t. Let \(\{t_0, \ldots , t_m\}\) be such that \(0=t_0< t_1< \dots <t_m\).

Fig. 1: Plot of the distribution function \(F_{\underline{X}}\)

Then \((W_t)_t\) is a Brownian motion if and only if one of the following statements holds for any evaluation function \(f\in C(K)^*\):

  • (3.6.i) The increments \(f(W_{t_{1}}-W_{t_{0}}), f(W_{t_{2}}-W_{t_{1}}), \dots , f(W_{t_{m}}-W_{t_{m-1}})\) are independent and each increment \(f(W_{t_{i+1}}-W_{t_i})\) is normally distributed with null mean and variance equal to \(t_{i+1}-t_i\);

  • (3.6.ii) The random variables \(f(W_{t_{1}}), f(W_{t_{2}}), \dots , f(W_{t_{m}})\) are jointly normally distributed with means equal to zero and co-variance matrix V given by

    $$\begin{aligned} V= & {} \left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} t_1 &{}\quad t_1 &{}\quad \cdots &{}\quad t_1\\ t_1 &{}\quad t_2 &{}\quad \cdots &{}\quad t_2\\ \vdots &{}\quad \vdots &{} &{}\quad \vdots \\ t_1 &{}\quad t_2 &{}\quad \cdots &{}\quad t_m \end{array} \right) \end{aligned}$$
  • (3.6.iii) The random variables \(f(W_{t_{1}}), f(W_{t_{2}}), \dots , f(W_{t_{m}})\) have the joint moment-generating function given by

    $$\begin{aligned} \varphi (u_1, \ldots , u_m)= & {} \exp \Big \{ \dfrac{1}{2} u^2_m (t_{m} - t_{m-1})\Big \}\cdots \exp \Big \{ \dfrac{1}{2}(u_1 + u_2+ \cdots +u_{m})^2 t_1\Big \}, \end{aligned}$$

    for every \(u_1, \ldots , u_m \in {\mathbb {R}}\).

Proof

Let \(\{t_0, \ldots , t_m\}\) be fixed with \(0=t_0< t_1< \dots <t_m\) and consider an evaluation function f. By (3.6.i) it is

$$\begin{aligned} t_{i+1} - t_{i}= & {} {\mathrm{Var}} (f(W_{t_{i+1}}) - f(W_{t_{i}}) ) = {\mathrm{Var}} (f(W_{t_{i+1}} - W_{t_{i}}) )\\= & {} {{\mathbb {E}}} [(f(W_{t_{i+1}} - W_{t_{i}})^2]= {{\mathbb {E}}}[ f^2 (W_{t_{i+1}} - W_{t_{i}})]\\= & {} {{\mathbb {E}}}[ f^2 (W_{t_{i+1}}) +f^2( W_{t_{i}}) - 2 f(W_{t_{i+1}}) \cdot f(W_{t_{i}}) ]\\= & {} {{\mathbb {E}}} [f^2(W_{t_{i+1}}) - f^2(W_{t_{i}})] \end{aligned}$$

Now, for every evaluation function f the process \(f(W_t)_t\) is a scalar Brownian motion and so all three conditions are equivalent, thanks to [24, Theorem 3.3.2], since

$$\begin{aligned} V = \left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} {\mathbb {E}}[f(W(t_1))^2] &{} {\mathbb {E}}[f(W(t_1))f(W(t_2))] &{} \cdots &{} {\mathbb {E}}[f(W(t_1))f( W(t_m))]\\ {\mathbb {E}}[f(W(t_2)) f(W(t_1))] &{} {\mathbb {E}}[f(W(t_2))^2] &{} \cdots &{} {\mathbb {E}}[f(W(t_2)) f(W(t_m))]\\ \vdots &{} \vdots &{} &{} \vdots \\ {\mathbb {E}}[f(W(t_m)) f(W(t_1))] &{} {\mathbb {E}}[f(W(t_m)) f(W(t_2))] &{} \cdots &{} {\mathbb {E}}[f(W(t_m))^2] \end{array} \right) \end{aligned}$$

and

$$\begin{aligned}&\varphi (u_1, \ldots , u_m) = {\mathbb {E}}\Big [ \exp \Big \{ u_m f( W(t_m)) + u_{m-1} f(W(t_{m-1}))+ \cdots + u_{1} f(W(t_{1}))\Big \}\Big ] \\&\quad = {\mathbb {E}}\Big [ \exp \Big \{u_m f( W(t_m) -W(t_{m-1}) ) + (u_{m-1} + u_m) f (W(t_{m-1}) -W(t_{m-2}) )\\&\qquad + \cdots \, + (u_1 + u_2+ \cdots +u_{m})f( W(t_1))\Big \}\Big ] \\&\quad = {\mathbb {E}} \Big [\exp \Big \{u_m f( W(t_m) -W(t_{m-1}) )\Big \}\Big ] \cdot \cdots \cdot \, {\mathbb {E}} \Big [\exp \Big \{ (u_1 + u_2+ \cdots +u_{m}) f(W(t_1))\Big \}\Big ]\\&\quad = \exp \Big \{ \dfrac{1}{2} u^2_m (t_{m} - t_{m-1})\Big \}\cdots \exp \Big \{ \dfrac{1}{2}(u_1 + u_2+ \cdots +u_{m})^2 t_1\Big \}. \end{aligned}$$

So, by Definition 3.1, \((W_t)_t\) is a Brownian motion. \(\square \)

Definition 3.7

For every Bochner integrable set-valued function W, the conditional expectation \(E(W|\mathcal {F})\) of W with respect to a sub-\(\sigma \)-algebra \(\mathcal {F}\subset \mathcal {A}\) is a Bochner integrable function with respect to \((\varOmega , \mathcal {F}, P)\) such that for every evaluation function \(f \in C(K)^*\) it is

$$\begin{aligned} \mathbb {E}(f (W) | \mathcal {F}) =f( \mathbb {E}(W|\mathcal {F}) ). \end{aligned}$$

Definition 3.8

A set valued process \((M_t)_t\) is a pointwise martingale if

  • (3.8.1) \(M_t\) is Bochner integrable for every t;

  • (3.8.2) \(\mathbb {E}( M_t | \mathcal {F}_s) = M_s\), for every \(s < t\) where \((\mathcal {F}_s)_s\) is the natural filtration of \((M_t)_t\).

Clearly, every Brownian motion \((M_t)_t\) is a pointwise martingale.

Theorem 3.9

Assume that \((B_t)_t\) is a set-valued Brownian motion, taking values in L. Then whenever \(0<s<t\) are fixed in \({\mathbb {R}}\), one has

$$\begin{aligned} \mathbb {E}(B_t^2|\mathcal {F}_s)=B_s^2+(t-s)e. \end{aligned}$$

Proof

Let f be any evaluation functional. Then we have

$$\begin{aligned} \mathbb {E}(f(B_t^2)|\mathcal {F}_s)=\mathbb {E}((f(B_t))^2|\mathcal {F}_s)=f(B_s^2)+t-s=f(B_s^2+(t-s)e) \end{aligned}$$

by the usual properties of scalar Brownian motion and the multiplicativity property of f. So, by the arbitrariness of f, the assertion follows. \(\square \)

Clearly, this result means that, under the stated hypotheses, the process \((B_t^2-te)_t\) is a pointwise martingale.

The last theorem can be reversed, in some sense: more precisely,

Theorem 3.10

Let \((B_t)_t\) be a weak set-valued Gaussian process with homogeneous increments, such that \(B_0=0\). If \((B_t^2-te)_t\) is a pointwise martingale, then \((B_t)_t\) is a Wiener process (therefore, assuming also that the trajectories of \((B_t)_t\) are weakly continuous, one can conclude that \((B_t)_t\) is a set-valued Brownian motion).

Proof

Indeed, from the martingale condition, one can deduce that \(\mathbb {E}(B_t^2-te)\) is constant with respect to t, and therefore null, since \(B_0=0\). So, \(\mathbb {E}(B^2_t)=te\) for all t. Now, if \(0<s<t\), thanks to the homogeneity property:

$$\begin{aligned}&\mathbb {E}(f(2B_tB_s))=\mathbb {E}\left( f\left[ B_t^2+B_s^2-(B_t-B_s)^2\right] \right) \\&\quad =\mathbb {E}\left( \left[ f(B_t^2)+f(B_s^2)-f(B_{t-s}^2)\right] \right) =t+s-(t-s)=2s \end{aligned}$$

and this is precisely the defining property for a (weak) Wiener process. \(\square \)

Let \((M_t)_{t\ge 0}\) be an L-valued adapted process; then \(\sum _{j=0}^{n-1} [M_{t_{j+1}}- M_{t_{j}}]^2 \in C(K)\) for every partition \(\pi =\{0=t_0<t_1<\dots <t_n=T\}\) of \([0,T]\), \(T > 0\).

Definition 3.11

The quadratic variation \([M_t, M_t]\) of an L-valued adapted process \((M_t)_t\), when it exists, is given by the following limit

$$\begin{aligned} \lim _{\Vert \pi \Vert \rightarrow 0} \Vert f([M_t, M_t](T) ) - f\left( \sum _{j=0}^{n-1} [M_{t_{j+1}}- M_{t_{j}}]^2\right) \Vert _2 = 0 \end{aligned}$$

for every evaluation function \(f \in C(K)^*\) and every \(T>0\).
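In the scalar case the discretized quadratic variation over a uniform partition of [0, T] concentrates around T as the mesh shrinks; a numerical sketch (illustration only; partition size and seed are our own choices):

```python
import random

def discrete_qv(n, T, rng):
    # sum of squared increments of a scalar BM path over a uniform n-point partition of [0, T]
    dt = T / n
    return sum(rng.gauss(0.0, dt ** 0.5) ** 2 for _ in range(n))

rng = random.Random(2)
qv = discrete_qv(50000, 2.0, rng)   # mean T = 2, variance 2 T^2 / n
```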

Theorem 3.12

(Lévy’s Theorem) Let \((M_t)_t\) be a martingale relative to a filtration \({\mathcal {F}}_t\) with \(M_0=0\). Assume that \(M_t\) has weakly continuous paths and \([M_t, M_t](T)=Te\) for all \(T\ge 0\). Then \((M_t)_t\) is a set-valued Brownian motion.

Proof

From the assumptions on \(M_t\), we get that, for each evaluation function \(f\in C(K)^*\), \(f(M_t)\) is a martingale with \(f(M_0)=0\), \(f(M_t)\) has continuous paths and \([f(M_t), f(M_t)](t)=t\) for all \(t\ge 0\). Thus, \(f(M_t)\) is a Brownian motion and so \(M_t\) is a set valued Brownian motion. \(\square \)

Definition 3.13

A set valued process \((W_t)_t\) is integrable with respect to a Brownian motion \((B_t)_t\) if for every \(T > 0\) there exists an element \(I_T \in C(K)\) such that:

  • (3.13.1) \((I_T)_T\) is a martingale with respect to \((B_t)_t\);

  • (3.13.2) for every evaluation function \(f \in C(K)^*\) it is

    $$\begin{aligned} f(I_T) = (I) \int _0^T f(W_t) \, d (f(B_t)) \end{aligned}$$

    where the last integral is in the Ito sense.

For instance, the process \((B_t)_t\) is integrable, with \(I_T=\frac{B^2_T-Te}{2}\); more generally, if \((B_t)_t\) takes values in an f-algebra L, then the process \((B_t^k)_t\) is integrable for every positive integer k, and the usual Ito formula holds.
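For the scalar component, the identity \(f(I_T)=(f(B_T)^2-T)/2\), i.e. \(\int _0^T 2W_t\,dW_t = W_T^2-T\), can be checked numerically with a left-point (Ito) discretization (a sketch under our own discretization choices):

```python
import random

def ito_vs_formula(n, T, rng):
    # left-point discretization of int_0^T 2 W_t dW_t, compared with W_T^2 - T
    dt = T / n
    w = integral = 0.0
    for _ in range(n):
        dw = rng.gauss(0.0, dt ** 0.5)
        integral += 2.0 * w * dw   # evaluate the integrand at the left endpoint (Ito convention)
        w += dw
    return integral, w * w - T

rng = random.Random(3)
lhs, rhs = ito_vs_formula(20000, 1.0, rng)   # the two values agree as the mesh shrinks
```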

4 Brownian motion in vector lattices

In this section we generalize the notions of Brownian Motion introduced before, replacing the space C(K) with a particular Riesz space E having an order unit e.

Definition 4.1

([17, Definition 3.6]) Let \((B_t,\mathfrak {F}_t)\) be an adapted stochastic process in the Dedekind complete Riesz space E with conditional expectation \({\mathbb {F}}\) and unit element e. The process is called an \({\mathbb {F}}\)-conditional Brownian motion in E if for all \(0\le s<t\) we have

  • (4.1.1) \(B_0=0;\)

  • (4.1.2) the increment \(B_t-B_s\) is \({\mathbb {F}}\)-conditionally independent of \(\mathfrak {F}_s\);

  • (4.1.3) \({\mathbb {F}}(B_t-B_s)=0;\)

  • (4.1.4) \({\mathbb {F}}[(B_t-B_s)^2]=(t-s)e;\)

  • (4.1.5) \({\mathbb {F}}[(B_t-B_s)^4]=3(t-s)^2e.\)

Remark 4.2

It was noted in [13, page 901] that the definition of a Brownian motion in the Riesz space setting yields a Brownian motion in the classical case of real valued Brownian motion; i.e., a real valued stochastic process satisfies conditions (4.1.1)–(4.1.5) if and only if it is a Brownian motion.

Theorem 4.3

Let \(B_t\) be a set valued stochastic process. Then \(B_t\) is a Brownian motion if and only if for any evaluation function \(f\in C(K)^*\) and every pair (s,t) of positive real numbers with \(s<t\):

  • (4.3.1) \(f(B_0)=0;\)

  • (4.3.2) the increment \(f(B_t)-f(B_s)\) is \(({\mathbb {F}}f)\)-conditionally independent of \(f(\mathfrak {F}_s)\);

  • (4.3.3) \({\mathbb {F}}(f(B_t)-f(B_s))=0;\)

  • (4.3.4) \({\mathbb {F}}[(f(B_t)-f(B_s))^2]=(t-s)f(e);\)

  • (4.3.5) \({\mathbb {F}}[(f(B_t)-f(B_s))^4]=3(t-s)^2f(e).\)

Proof

We note that \(B_t\) is a Brownian motion if and only if \(f(B_t)\) is a Brownian motion for each evaluation function \(f\in C(K)^*\), which is equivalent to conditions (4.3.1)–(4.3.5) by Remark 4.2. \(\square \)

As a consequence of Theorem 4.3, when \(E=C(K)\), with K Stonian and e the unit function in C(K), we have:

Corollary 1

Let \((B_t)_t\) be an \({\mathbb {F}}\)-conditional Brownian motion; then, in the \(L^2\)-norm,

$$\begin{aligned}{}[B_t,B_t] (T) = Te. \end{aligned}$$

Proof

Let \(\varPi :=\{t_0, \ldots , t_m\}\) be fixed with \(0=t_0< t_1< \dots <t_m=T\) and let \(\delta (\varPi )= \sup _{j < m} \{t_{j+1} - t_j \}\). Since

$$\begin{aligned} \mathbb {E}\left[ \left( B_{t_{j+1}} - B_{t_j}\right) ^2\right] = {\mathrm{Var}}\left[ (B_{t_{j+1}} - B_{t_j})\right] = (t_{j+1} - t_j)e, \end{aligned}$$

then

$$\begin{aligned} \mathbb {E}\left[ \sum _{j=0}^{m-1} (B_{t_{j+1}} - B_{t_j})^2\right] = \sum _{j=0}^{m-1} \mathbb {E}\left[ (B_{t_{j+1}} - B_{t_j})^2\right] = Te. \end{aligned}$$

Moreover

$$\begin{aligned} {\mathrm{Var}} \left[ (B_{t_{j+1}} - B_{t_j})^2\right] = \mathbb {E}\left[ (B_{t_{j+1}} - B_{t_j})^4\right] - (t_{j+1} - t_j)^2 e= 2 (t_{j+1} - t_j)^2 e \end{aligned}$$

and

$$\begin{aligned} {\mathrm{Var}} \left[ \sum _{j=0}^{m-1} (B_{t_{j+1}} - B_{t_j})^2\right]= & {} \sum _{j=0}^{m-1} {\mathrm{Var}} \left[ (B_{t_{j+1}} - B_{t_j})^2\right] = 2 \sum _{j=0}^{m-1} (t_{j+1} - t_j)^2 e \\\le & {} 2 T e \delta (\varPi ) \end{aligned}$$

and this implies that

$$\begin{aligned} \lim _{\delta (\varPi ) \rightarrow 0} {\mathrm{Var}} \left[ \sum _{j=0}^{m-1} (B_{t_{j+1}} - B_{t_j})^2\right] =0 \end{aligned}$$

and so

$$\begin{aligned} \lim _{\delta (\varPi ) \rightarrow 0} \left\| \sum _{j=0}^{m-1} (B_{t_{j+1}} - B_{t_j})^2 - T e \right\| _2 =0. \end{aligned}$$

\(\square \)

5 Appendix

At the beginning of the paper we claimed that, in order to introduce a notion of Brownian motion in this context, a kind of difference between sets is necessary. Here, following [25], for every \(A \in ck(X)\) let \(-A\) be the opposite of the set A, namely \(-A = \{-a: a \in A\}\), and consider the following difference between sets:

Definition 5.1

([25, Definition 1]) For every \(A, B \in ck(X)\) the generalized Hukuhara difference of A and B (gH-difference for short), when it exists, is the set \(C \in ck(X)\) such that

$$\begin{aligned} A \ominus _g B := C \Longleftrightarrow \left\{ \begin{array}{ll} (i) &{} A = B + C, \,\,\,\,\, \hbox {or}\\ (ii) &{} B = A + (-C). \end{array} \right. \end{aligned}$$
(2)

Remark 5.2

By [25, Propositions 1, 6 and Remarks 2-5], if the set C exists it is unique and coincides with the Hukuhara difference between A and B. Moreover a necessary condition for its existence is that either A contains a translate of B or B contains a translate of A. If equations (2.i) and (2.ii) hold simultaneously then C is a singleton. Finally

  • (5.2.1) if \(A \ominus _g B \in ck(X)\) then \(B \ominus _g A = - (A \ominus _g B)\);

  • (5.2.2) \(A \ominus _g A = \{ 0\}\);

  • (5.2.3) \((A+B) \ominus _g B = A\), \(A \ominus _g (A -B) = B\), \(A \ominus _g (A+B) = -B\).
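For intervals in \(ck({\mathbb {R}})\) the gH-difference always exists and has a simple closed form (see [25]); a numerical sketch (the function name is ours):

```python
def gh_diff(A, B):
    # gH-difference of compact intervals A = [a1, b1], B = [a2, b2] in ck(R):
    # C = [min(a1 - a2, b1 - b2), max(a1 - a2, b1 - b2)]
    (a1, b1), (a2, b2) = A, B
    lo, hi = a1 - a2, b1 - b2
    return (min(lo, hi), max(lo, hi))
```

For example, with A = [0, 3] and B = [1, 2] one gets C = [-1, 1], and case (2.i) holds: B + C = A; exchanging A and B yields the same C, realizing case (2.ii) and property (5.2.1).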

If A is a compact and convex subset of X then it is characterized by its support function \(s_A\), by the Hahn-Banach theorem (see for example [10, Proposition II.16]). It is possible to express the gH-difference of convex compact sets using support functions.

Given \(A,B,C \in ck(X)\) let \(s(\cdot ,A), s(\cdot ,B), s(\cdot ,C), s(\cdot ,-C)\) be the support functions of \(A,B,C,-C\) respectively.

Again by [10, Proposition II-19] the map \(A \mapsto s(\cdot ,A)\) is injective, \(s(x^*,A+B)=s(x^*,A) + s(x^*,B)\), and \(s(x^*,\lambda A)= \lambda s(x^*,A)\) for every non negative \(\lambda \), while

$$\begin{aligned} s(x^*,-A)= & {} \sup \{\langle x^*,-x\rangle : x \in A \} = \sup \{\langle -x^*,x\rangle : x\in A\} \\= & {} s(-x^*,A) \ge - s(x^*,A) \end{aligned}$$

Equality in the last line holds when the opposite of A exists, i.e. there is a set \(C \in ck(X)\) such that \(A + C = \{0\}\); in this case \(s(x^*,\ominus _g A) = -s(x^*,A)\). So in general \(s(-x^*,A) \ge - s(x^*,A)\), and the equality holds when equation (2.i) holds.

We recall some well-known facts concerning Banach spaces.

Theorem 5.3

([12, Theorem 2.3]) Let X be any Banach space and \(H:B_{X^*} \rightarrow {\mathbb {R}}\) be any mapping. Then H is the support function of a convex compact subset of X if and only if H is \(bw^*\)-continuous, subadditive and positively homogeneous.

A consequence of this result can be stated in the following way.

Proposition 5.4

Let \((B_t)_t:=(W_te)_t\) be the Brownian motion in a finite-dimensional space X given in Example 3.2. Then, for each positive t, the function \(j(B_t^2-\int _0^t2B_{\tau }dB_{\tau })\) is \(bw^*\)-continuous, subadditive and positively homogeneous.

Proof

Clearly, since \(\int _0^t2B_{\tau }dB_{\tau }=B^2_t-te\), it follows that the difference \(B_t^2-\int _0^t2B_{\tau }dB_{\tau }\) is a positive multiple of e, i.e. a convex compact set. The conclusion then follows from Theorem 5.3. \(\square \)

The generalized Hukuhara difference can be expressed by means of the support functions as in [25, Proposition 8] in the following way:

Proposition 5.5

Let \(s(\cdot , A), s(\cdot , B)\) be the support functions of \(A,B \in ck(X)\) and set \(s_1:=s(\cdot ,A) - s(\cdot ,B)\), \(s_2 := s(\cdot , B) - s(\cdot ,A)\). Then only four cases may occur:

(5.5.a) :

if \(s_1,s_2\) are bw*-continuous and subadditive then \(A \ominus _g B \in ck(X)\) and \(A \ominus _g B\) is a singleton;

(5.5.b) :

if only \(s_1\) is bw*-continuous and subadditive then equation (2.i) holds and \(s(\cdot ,C) :=s_1\);

(5.5.c) :

if only \(s_2\) is bw*-continuous and subadditive then equation (2.ii) holds and \(s(\cdot ,C) := s(\cdot , B) - s(\cdot ,-A)\);

(5.5.d) :

if neither \(s_1\) nor \(s_2\) is bw*-continuous and subadditive then \(A \ominus _g B\) does not exist.

Proof

The proof of the first statement is the same as in [25, Proposition 8], since it depends only on the subadditivity of \(s_i\), \(i=1,2\), and on the fact that in this case A and B are each a translate of the other; the bw*-continuity of \(s_1, s_2\) implies, again by Theorem 5.3, that \(A \ominus _g B \in ck(X)\).

As to the second statement observe that by [12, Theorem 2.3] there exists \(C \in ck(X)\) such that \(s_1 =s(\cdot ,C)\), so (2.i) is valid.

In the third case the same can be done for \(s_2\), so there exists \(D \in ck(X)\) such that \(s_2 =s(\cdot ,D)\). Setting \(C=-D\), the rest of the proof follows as in [25, Proposition 8].

Finally, in the last case, since [12, Theorem 2.3] gives a necessary and sufficient condition, there exists no \(C \in ck(X)\) such that \(A = B + C\) and no \(D \in ck(X)\) such that \(B = A + D\); so \(A \ominus _g B\) does not exist. \(\square \)