16.1 Introduction

The discrete Burkholder–Davis–Gundy inequality (see [3, Theorem 3.2]) states that for any p ∈ (1, ∞) and any martingale difference sequence \((d_j)_{j=1}^n\) in L^p(Ω) one has

$$\displaystyle \begin{aligned} \Big\|\sum_{j=1}^n d_j\Big\|{}_{L^p(\Omega)} \eqsim_p \Big\|\Big(\sum_{j=1}^n |d_j|{}^2\Big)^{1/2}\Big\|{}_{L^p(\Omega)}. \end{aligned} $$
(16.1)

Moreover, there is the extension to continuous-time local martingales M (see [13, Theorem 26.12]) which states that for every p ∈ [1, ∞),

$$\displaystyle \begin{aligned} \big\|\sup_{t\in [0,\infty)}|M_t|\big\|{}_{L^p(\Omega)} \eqsim_{p} \big\|[M]_{\infty}^{1/2}\big\|{}_{L^p(\Omega)}. \end{aligned} $$
(16.2)

Here \(t\mapsto [M]_t\) denotes the quadratic variation process of M.
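Recall that for a real-valued (local) martingale M the quadratic variation can be obtained as a limit in probability of sums of squared increments along partitions with vanishing mesh; for instance, for every t ≥ 0,

$$\displaystyle \begin{aligned} [M]_t = \lim_{n\to\infty} \sum_{k=1}^{n} \big|M_{tk/n} - M_{t(k-1)/n}\big|{}^2 \quad \text{in probability} \end{aligned}$$

(compare Lemma 16.4.3 below, where convergence in \(L^{p/2}\) is obtained for L^p-martingales).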

In the case where X is a UMD Banach function space, the following variant of (16.1) holds (see [24, Theorem 3]): for any p ∈ (1, ∞) and any martingale difference sequence \((d_j)_{j=1}^n\) in L^p(Ω;X) one has

$$\displaystyle \begin{aligned} \Big\|\sum_{j=1}^n d_j\Big\|{}_{L^p(\Omega;X)} \eqsim_p \Big\|\Big(\sum_{j=1}^n |d_j|{}^2\Big)^{1/2}\Big\|{}_{L^p(\Omega;X)}. \end{aligned} $$
(16.3)

Moreover, the validity of the estimate also characterizes the UMD property.

It is a natural question whether (16.2) has a vector-valued analogue as well. The main result of this paper states that this is indeed the case:

Theorem 16.1.1

Let X be a UMD Banach function space over a σ-finite measure space (S, Σ, μ). Assume that \(N:{\mathbb R}_+\times \Omega \times S\to {\mathbb R}\) is such that \(N|_{[0,t]\times\Omega\times S}\) is \(\mathcal B([0,t])\otimes \mathcal F_t\otimes \Sigma \)-measurable for all t ≥ 0 and such that for almost all s ∈ S, N(⋅, ⋅, s) is a martingale with respect to \(({\mathcal F}_t)_{t\geq 0}\) and N(0, ⋅, s) = 0. Then for all p ∈ (1, ∞),

$$\displaystyle \begin{aligned} \big\|\sup_{t\geq 0} |N(t,\cdot,\cdot)| \big\|{}_{L^p(\Omega;X)} \eqsim_{p,X} \sup_{t\geq 0}\big\|N(t,\cdot,\cdot) \big\|{}_{L^p(\Omega;X)} \eqsim_{p,X} \|[N]_{\infty}^{1/2}\|{}_{L^p(\Omega;X)}. \end{aligned} $$
(16.4)

where [N] denotes the quadratic variation process of N.
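Here the quadratic variation of the martingale field N is understood pointwise in s ∈ S: since N(⋅, ⋅, s) is a scalar martingale for almost all s, one sets

$$\displaystyle \begin{aligned} [N]_t(\cdot,s) := [N(\cdot,\cdot,s)]_t, \qquad t\in[0,\infty],\ s\in S, \end{aligned}$$

so that \([N]_{\infty}^{1/2}\) defines (almost surely) an element of X; the same convention is used in Proposition 16.4.2 and Corollary 16.4.4 below.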

By standard methods we can extend Theorem 16.1.1 to spaces X which are isomorphic to a closed subspace of a Banach function space (e.g. Sobolev and Besov spaces, etc.)

The two-sided estimate (16.4) can, for instance, be used to obtain two-sided estimates for stochastic integrals of processes with values in infinite dimensions (see [25] and [26]). In particular, applying it with \(N(t,\cdot ,s) = \int _0^t \Phi (\cdot , s) \,\mathrm {d} W\) implies the following maximal estimate for the stochastic integral

$$\displaystyle \begin{aligned} &\Big\|s\mapsto \sup_{t\geq 0} \Big|\int_0^{t}\Phi(\cdot, s) \,\mathrm{d} W\Big| \Big\|{}_{L^p(\Omega; X)}\notag\\ &\quad \eqsim_{p,X} \sup_{t\geq 0} \Big\|s\mapsto \int_0^{t}\Phi(\cdot, s) \,\mathrm{d} W\Big\|{}_{L^p(\Omega; X)}\\ &\quad \eqsim_{p,X} \Big\|s\mapsto \Big(\int_0^{\infty}\Phi^2(t,s)\,\mathrm{d} t\Big)^{1/2}\Big\|{}_{L^p(\Omega;X)},\notag \end{aligned} $$
(16.5)

where W is a Brownian motion and \(\Phi :{\mathbb R}_+\times \Omega \times S\to {\mathbb R}\) is a progressively measurable process such that the right-hand side of (16.5) is finite. The second norm equivalence was obtained in [25]. The norm equivalence with the left-hand side is new in this generality. The case where X is an L q-space was recently obtained in [1] using different methods.
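For instance, if X = L^q(S) with q ∈ (1, ∞), the square function on the right-hand side of (16.5) takes the familiar form

$$\displaystyle \begin{aligned} \Big\|s\mapsto \Big(\int_0^{\infty}\Phi^2(t,s)\,\mathrm{d} t\Big)^{1/2}\Big\|{}_{L^p(\Omega;L^q(S))} = \|\Phi\|{}_{L^p(\Omega;L^q(S;L^2(\mathbb R_+)))}, \end{aligned}$$

which is the square-function norm commonly used for L^q-valued stochastic integrals.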

It is worth noting that the second equivalence of (16.4) in the case X = L^q was obtained by Marinelli in [18] for a certain range of 1 < p, q < ∞ by using an interpolation method.

The UMD property is necessary in Theorem 16.1.1, since it is necessary for (16.3) and any discrete martingale can be transformed into a continuous-time one. The UMD property is necessary in Theorem 16.1.1 even in the case of continuous martingales. Indeed, applying (16.5) with W replaced by an independent Brownian motion \(\widetilde {W}\) we obtain

$$\displaystyle \begin{aligned} \Big\|\int_0^{\infty}\Phi \,\mathrm{d} W\Big\|{}_{L^p(\Omega; X)}\eqsim_{p,X} \Big\|\int_0^{\infty}\Phi \,\mathrm{d} \widetilde W\Big\|{}_{L^p(\Omega; X)}, \end{aligned}$$

for all predictable step processes Φ. The latter implies that X is a UMD Banach space (see [10, Theorem 1]).

In the special case that \(X = {\mathbb R}\) the above reduces to (16.2). In the proof of Theorem 16.1.1 the UMD property is applied several times:

  • The boundedness of the lattice maximal function (see [2, 9, 24]).

  • The X-valued Meyer–Yoeurp decomposition of a martingale (see Lemma 16.2.1).

  • The square-function estimate (16.3) (see [24]).

It remains open whether there exists a predictable expression for the right-hand side of (16.4). One would expect that it suffices to replace [N] by its predictable compensator, the predictable quadratic variation ⟨N⟩. Unfortunately, this already fails in the scalar-valued case: if M is a real-valued martingale, then

$$\displaystyle \begin{aligned} \mathbb E |M_t|{}^p \lesssim_{p} \mathbb E \langle M\rangle^{\frac p2}_t,\;\;\; t\geq 0,\;\; p<2, \end{aligned}$$
$$\displaystyle \begin{aligned} \mathbb E |M_t|{}^p \gtrsim_{p} \mathbb E \langle M\rangle^{\frac p2}_t,\;\;\; t\geq 0,\;\; p>2, \end{aligned}$$

where both inequalities are known not to be sharp (see [3, p. 40], [19, p. 297], and [21]). The question of finding such a predictable right-hand side in (16.4) was answered only in the case X = L^q for 1 < q < ∞ by Dirksen and the second author (see [7]). The key tool exploited there was the so-called Burkholder-Rosenthal inequalities, which are of the following form:

$$\displaystyle \begin{aligned} \mathbb E \|M_N\|{}^p \eqsim_{p,X} {\left\vert\kern-0.25ex\left\vert\kern-0.25ex\left\vert (M_n)_{0\leq n \leq N} \right\vert\kern-0.25ex\right\vert\kern-0.25ex\right\vert}_{p,X}^p, \end{aligned}$$

where \((M_n)_{0\leq n\leq N}\) is an X-valued martingale and \({\left \vert \kern -0.25ex\left \vert \kern -0.25ex\left \vert \cdot \right \vert \kern -0.25ex\right \vert \kern -0.25ex\right \vert }_{p,X}\) is a certain norm defined on the space of X-valued L^p-martingales which depends only on predictable moments of the corresponding martingale. Therefore, using the approach of [7], one can reduce the problem for continuous-time martingales to discrete-time martingales. However, the Burkholder-Rosenthal inequalities are known only in the case X = L^q.
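To illustrate the difference between [M] and ⟨M⟩, consider a compensated Poisson process: if L is a standard Poisson process and M_t = L_t − t, then

$$\displaystyle \begin{aligned} [M]_t = \sum_{0<u\leq t} |\Delta M_u|{}^2 = L_t, \qquad \langle M\rangle_t = t, \qquad t\geq 0, \end{aligned}$$

so the quadratic variation is random, while its predictable compensator is deterministic.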

Thanks to (16.2) the following natural question arises: can one generalize (16.4) to the case p = 1, i.e. whether

$$\displaystyle \begin{aligned} \big\|\sup_{t\geq 0} |N(t,\cdot,\cdot)| \big\|{}_{L^1(\Omega;X)} \eqsim_{X} \|[N]_{\infty}^{1/2}\|{}_{L^1(\Omega;X)} \end{aligned} $$
(16.6)

holds true? Unfortunately, the techniques outlined earlier cannot be applied in the case p = 1. Moreover, the estimates obtained cannot simply be extrapolated to p = 1 since they contain the UMD constant \(\beta_{p,X}\), which is known to tend to infinity as p → 1. Therefore (16.6) remains an open problem. Note that in the case of a continuous martingale M the inequalities (16.4) can be extended to p ∈ (0, 1] by the classical Lenglart approach (see Corollary 16.4.4).

16.2 Preliminaries

Throughout the paper any filtration satisfies the usual conditions (see [12, Definition 1.1.2 and 1.1.3]), unless the underlying martingale is continuous (then the corresponding filtration can be assumed general).

A Banach space X is called a UMD space if for some (or equivalently, for all) p ∈ (1, ∞) there exists a constant β > 0 such that for every n ≥ 1, every martingale difference sequence \((d_j)^n_{j=1}\) in L^p(Ω;X), and every {−1, 1}-valued sequence \((\varepsilon _j)^n_{j=1}\) we have

$$\displaystyle \begin{aligned} \Bigl({\mathbb E} \Bigl\| \sum^n_{j=1} \varepsilon_j d_j\Bigr\|{}^p\Bigr )^{\frac 1p} \leq \beta \Bigl({\mathbb E} \Bigl \| \sum^n_{j=1}d_j\Bigr\|{}^p\Bigr )^{\frac 1p}. \end{aligned}$$

The above class of spaces was extensively studied by Burkholder (see [4]). UMD spaces are always reflexive. Examples of UMD spaces include reflexive L^q-spaces, Besov spaces, Sobolev spaces, and Musielak–Orlicz spaces. Examples of spaces without the UMD property include all nonreflexive spaces, e.g. L^1(0, 1) and C([0, 1]). For details on UMD Banach spaces we refer the reader to [5, 11, 22, 24].

The following lemma follows from [27, Theorem 3.1].

Lemma 16.2.1 (Meyer-Yoeurp Decomposition)

Let X be a UMD space and p ∈ (1, ∞). Let \(M:\mathbb R_+ \times \Omega \to X\) be an L^p-martingale that takes values in some closed subspace X_0 of X. Then there exists a unique decomposition M = M^d + M^c, where M^c is continuous, M^d is purely discontinuous and starts at zero, and M^d and M^c are L^p-martingales with values in X_0 ⊆ X. Moreover, the following norm estimates hold for every t ∈ [0, ∞),

$$\displaystyle \begin{aligned} \begin{aligned} \|M^d(t)\|{}_{L^p(\Omega;X)} \leq \beta_{p,X} \|M(t)\|{}_{L^p(\Omega;X)},\\ \|M^c(t)\|{}_{L^p(\Omega;X)} \leq \beta_{p,X} \|M(t)\|{}_{L^p(\Omega;X)}. \end{aligned} \end{aligned} $$
(16.7)

Furthermore, if \(A^{p, d}_X\) and \(A^{p,c}_X\) are the corresponding linear operators that map M to M d and M c respectively, then

$$\displaystyle \begin{aligned} A^{p, d}_X = A^{p, d}_{\mathbb R} \otimes \mathrm{Id}_X, \end{aligned}$$
$$\displaystyle \begin{aligned} A^{p, c}_X = A^{p, c}_{\mathbb R}\otimes \mathrm{Id}_X. \end{aligned}$$

Recall that for a given measure space (S,  Σ, μ), the linear space of all real-valued measurable functions is denoted by L 0(S).

Definition 16.2.2

Let (S, Σ, μ) be a measure space. Let n : L^0(S) → [0, ∞] be a function which satisfies the following properties:

  1. (i)

    n(x) = 0 if and only if x = 0,

  2. (ii)

    for all x, y ∈ L 0(S) and \(\lambda \in {\mathbb R}\), n(λx) = |λ|n(x) and n(x + y) ≤ n(x) + n(y),

  3. (iii)

    if x ∈ L 0(S), y ∈ L 0(S), and |x|≤|y|, then n(x) ≤ n(y),

  4. (iv)

    if \(0 \leq x_n \uparrow x\) with \((x_n)_{n=1}^\infty \) a sequence in L^0(S) and x ∈ L^0(S), then \(n(x) = \sup _{n \in {\mathbb N}}n(x_n)\).

Let X denote the space of all x ∈ L^0(S) for which ∥x∥_X := n(x) < ∞. Then X is called the normed function space associated to n. It is called a Banach function space when (X, ∥⋅∥_X) is complete.

We refer the reader to [31, Chapter 15] for details on Banach function spaces.
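For instance, for q ∈ [1, ∞) the function

$$\displaystyle \begin{aligned} n(x) = \Big(\int_S |x|{}^q\,\mathrm{d}\mu\Big)^{1/q}, \qquad x\in L^0(S), \end{aligned}$$

satisfies (i)–(iv) (property (iv) being the monotone convergence theorem), and the associated Banach function space is X = L^q(S). Orlicz spaces equipped with the Luxemburg norm arise in the same way.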

Remark 16.2.3

Let X be a Banach function space over a measure space (S, Σ, μ). Then X is continuously embedded into L^0(S) endowed with the topology of convergence in measure on sets of finite measure. Indeed, assume x_n → x in X and let A ∈ Σ be of finite measure. We claim that \({\mathbf 1}_A x_n \to {\mathbf 1}_A x\) in measure. For this it suffices to show that every subsequence of (x_n)_{n≥1} has a further subsequence \((x_{n_k})_{k\geq 1}\) such that \({\mathbf 1}_A x_{n_k} \to {\mathbf 1}_A x\) a.e. Let \((x_{n_k})_{k\geq 1}\) be a subsequence. Since \(\|{\mathbf 1}_A x_{n_k} - {\mathbf 1}_A x\|\leq \|x_{n_k}- x\|\to 0\), we can choose a further subsequence \(({\mathbf {1}}_A x_{n_{k_\ell }})_{\ell \geq 1} =: (y_\ell )_{\ell \geq 1}\) such that \(\sum _{\ell =1}^{\infty } \|y_\ell - {\mathbf 1}_A x\| < \infty \). Then by [31, Exercise 64.1] \(\sum _{\ell =1}^{\infty } |y_\ell - {\mathbf 1}_A x|\) converges in X. In particular, \(\sum _{\ell =1}^{\infty } |y_\ell - {\mathbf 1}_A x|<\infty \) a.e. Therefore, \(y_\ell \to {\mathbf 1}_A x\) a.e., as desired.

Given a Banach function space X over a measure space S and a Banach space E, let X(E) denote the space of all strongly measurable functions f : S → E with \(s\mapsto \|f(s)\|_E \in X\). The space X(E) becomes a Banach space when equipped with the norm \(\|f\|_{X(E)} := \big\| s\mapsto \|f(s)\|_E\big\|_X\).
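For example, if X = L^q(S) for some q ∈ [1, ∞) and E is a Banach space, then X(E) coincides with the Bochner space L^q(S;E), since

$$\displaystyle \begin{aligned} \|f\|{}_{X(E)} = \big\|s\mapsto \|f(s)\|{}_E\big\|{}_{L^q(S)} = \Big(\int_S \|f(s)\|{}_E^q\,\mathrm{d}\mu(s)\Big)^{1/q}. \end{aligned}$$

In Theorem 16.3.2 below this construction is applied with \(E = \mathcal D_b([0,\infty))\).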

A Banach function space has the UMD property if and only if (16.3) holds for some (or equivalently, for all) p ∈ (1, ∞) (see [24]). A broad class of Banach function spaces with the UMD property is given by the reflexive Lorentz–Zygmund spaces (see [6]) and the reflexive Musielak–Orlicz spaces (see [17]).

Definition 16.2.4

\(N:\mathbb R_+ \times \Omega \times S \to \mathbb R\) is called a (continuous) (local) martingale field if N|[0,t]× Ω×S is \(\mathcal B([0,t])\otimes \mathcal F_t \otimes \Sigma \)-measurable for all t ≥ 0 and N(⋅, ⋅, s) is a (continuous) (local) martingale with respect to \(({\mathcal F}_t)_{t\geq 0}\) for almost all s ∈ S.

Let X be a Banach space and let \(I \subset \mathbb R\) be a closed, possibly unbounded, interval. A function f : I → X is called càdlàg (an acronym for the French phrase “continue à droite, limite à gauche”) if f is right continuous and has left limits. We define the Skorohod space \(\mathcal D(I; X)\) as the linear space consisting of all càdlàg functions f : I → X. We denote the linear space of all bounded càdlàg functions f : I → X by \(\mathcal D_b(I;X)\).

Lemma 16.2.5

\(\mathcal D_b(I;X)\) equipped with the supremum norm \(\|\cdot\|_{\infty}\) is a Banach space.

Proof

The proof is analogous to the proof of the same statement for continuous functions. □

Let X be a Banach space, let τ be a stopping time, and let \(V:\mathbb R_+ \times \Omega \to X\) be a càdlàg process. Then we define ΔV_τ : Ω → X by

$$\displaystyle \begin{aligned} \Delta V_{\tau} := V_{\tau} - \lim_{\varepsilon\to 0} V_{(\tau-\varepsilon)\vee 0}. \end{aligned}$$

16.3 Lattice Doob’s Maximal Inequality

Doob’s maximal L p-inequality immediately implies that for martingale fields

$$\displaystyle \begin{aligned} \big\|\sup_{t\geq 0} \|N(t,\cdot)\|{}_X\big\|{}_{L^p(\Omega)} \leq \frac{p}{p-1}\sup_{t\geq 0} \|N(t)\|{}_{L^p(\Omega;X)},\;\;\; 1<p<\infty. \end{aligned}$$

In the next lemma we prove a stronger version of Doob’s maximal L p-inequality. As a consequence in Theorem 16.3.2 we will obtain the same result in a more general setting.

Lemma 16.3.1

Let X be a UMD Banach function space and let p ∈ (1, ∞). Let N be a càdlàg martingale field with values in a finite dimensional subspace of X. Then for all T > 0,

$$\displaystyle \begin{aligned} \big\|\sup_{t\in [0,T]}|N(t,\cdot)|\big\|{}_{L^p(\Omega;X)} \eqsim_{p,X} \sup_{t\in [0,T]} \|N(t)\|{}_{L^p(\Omega;X)} \end{aligned}$$

whenever one of the expressions is finite.

Proof

Clearly, the left-hand side dominates the right-hand side. Therefore, we can assume the right-hand side is finite and in this case we have

$$\displaystyle \begin{aligned}\|N(T)\|{}_{L^p(\Omega;X)} = \sup_{t\in [0,T]} \|N(t)\|{}_{L^p(\Omega;X)}<\infty. \end{aligned}$$

Since N takes values in a finite dimensional subspace it follows from Doob’s L p-inequality (applied coordinatewise) that the left-hand side is finite.

Since N is a càdlàg martingale field, by Definition 16.2.2(iv) we have that

$$\displaystyle \begin{aligned} \lim_{n\to \infty}\big\|\sup_{0\leq j\leq n}|N(jT/n,\cdot)|\big\|{}_{L^p(\Omega;X)} = \big\|\sup_{t\in [0,T]}|N(t,\cdot)|\big\|{}_{L^p(\Omega;X)}. \end{aligned}$$

Set \(M_j = N(jT/n,\cdot)\) for j ∈ {0, …, n} and M_j = M_n for j > n. It remains to prove

$$\displaystyle \begin{aligned}\big\|\sup_{0\leq j\leq n}|M_j(\cdot)|\big\|{}_{L^p(\Omega;X)} \leq C_{p,X} \|M_n\|{}_{L^p(\Omega;X)}.\end{aligned}$$

If \((M_j)_{j=0}^n\) is a Paley–Walsh martingale (see [11, Definition 3.1.8 and Proposition 3.1.10]), this estimate follows from the boundedness of the dyadic lattice maximal operator [24, pp. 199–200 and Theorem 3]. In the general case one can replace Ω by a divisible probability space and approximate (M j) by Paley-Walsh martingales in a similar way as in [11, Corollary 3.6.7]. □

Theorem 16.3.2 (Doob’s Maximal L p-Inequality)

Let X be a UMD Banach function space over a σ-finite measure space and let p ∈ (1, ∞). Let \(M:\mathbb R_+\times \Omega \to X\) be a martingale such that

  1. 1.

    for all t ≥ 0, M(t) ∈ L p( Ω;X);

  2. 2.

    for a.a. ω ∈ Ω, M(⋅, ω) is in \(\mathcal D([0,\infty );X)\).

Then there exists a martingale field \(N\in L^p(\Omega ; X(\mathcal D_b([0,\infty ))))\) such that for a.a. ω ∈ Ω, all t ≥ 0 and a.a. s ∈ S, N(t, ω, s) = M(t, ω)(s) and

$$\displaystyle \begin{aligned} \big\|\sup_{t\geq 0}|N(t,\cdot)|\big\|{}_{L^p(\Omega;X)} \eqsim_{p,X} \sup_{t\geq 0}\|M(t,\cdot)\|{}_{L^p(\Omega;X)}. \end{aligned} $$
(16.8)

Moreover, if M is continuous, then N can be chosen to be continuous as well.

Proof

We first consider the case where M becomes constant after some time T > 0. Then

$$\displaystyle \begin{aligned}\sup_{t\geq 0}\|M(t,\cdot)\|{}_{L^p(\Omega;X)} = \|M(T)\|{}_{L^p(\Omega;X)}.\end{aligned}$$

Let (ξ_n)_{n≥1} be simple random variables such that ξ_n → M(T) in L^p(Ω;X). Let \(M_n(t) = {\mathbb E}(\xi _n|{\mathcal F}_t)\) for t ≥ 0, and let N_n be the corresponding martingale field, defined by N_n(t, ω, s) = M_n(t, ω)(s), which makes sense since M_n takes values in a finite dimensional subspace of X. Then by Lemma 16.3.1

$$\displaystyle \begin{aligned} \big\|\sup_{t\geq 0} |N_n(t,\cdot) - N_m(t,\cdot)|\big\|{}_{L^p(\Omega;X)}\eqsim_{p,X} \big\||M_n(T,\cdot) - M_m(T,\cdot)|\big\|{}_{L^p(\Omega;X)} \to 0 \end{aligned}$$

as n, m → ∞. Therefore, (N_n)_{n≥1} is a Cauchy sequence and hence converges to some N in the space \(L^p(\Omega ;X(\mathcal D_b([0,\infty ))))\). Clearly, N(t, ⋅) = M(t) and (16.8) holds in the special case that M becomes constant after T > 0.

In the general case, for each T > 0 we can set M^T(t) = M(t ∧ T). Then for each T > 0 we obtain a martingale field N^T as required. Since \(N^{T_1} = N^{T_2}\) on [0, T_1 ∧ T_2], we can define a martingale field N by setting N(t, ⋅) = N^T(t, ⋅) on [0, T]. Finally, we note that

$$\displaystyle \begin{aligned}\lim_{T\to\infty} \sup_{t\geq 0} \|M^T(t)\|{}_{L^p(\Omega;X)} = \sup_{t\geq 0} \|M(t)\|{}_{L^p(\Omega;X)}.\end{aligned}$$

Moreover, by Definition 16.2.2(iv) we have

$$\displaystyle \begin{aligned}\lim_{T\to\infty} \big\|\sup_{t\geq 0}|N^T(t,\cdot)|\big\|{}_{L^p(\Omega;X)} = \big\|\sup_{t\geq 0}|N(t,\cdot)|\big\|{}_{L^p(\Omega;X)}.\end{aligned}$$

Therefore the general case of (16.8) follows by taking limits.

Now let M be continuous, and let (M_n)_{n≥1} be as before. By the same argument as in the first part of the proof we can assume that there exists T > 0 such that \(M_t = M_{t\wedge T}\) for all t ≥ 0. By Lemma 16.2.1 there exists a unique decomposition \(M_n = M_n^c + M_n^d\) such that \(M_n^d\) is purely discontinuous and starts at zero and \(M_n^c\) has continuous paths a.s. Then by (16.7)

$$\displaystyle \begin{aligned} \|M(T)-M_n^c(T)\|{}_{L^p(\Omega;X)}\leq \beta_{p,X} \|M(T)-M_n(T)\|{}_{L^p(\Omega;X)}\to 0. \end{aligned}$$

Since \(M_n^c\) takes values in a finite dimensional subspace of X we can define a martingale field N n by \(N_n(t,\omega ,s) = M_n^c(t,\omega )(s)\). Now by Lemma 16.3.1

$$\displaystyle \begin{aligned}\big\|\sup_{0\leq t\leq T} |N_n(t,\cdot) - N_m(t,\cdot)|\big\|{}_{L^p(\Omega;X)}\eqsim_{p,X} \big\||M_n^c(T,\cdot) - M_m^c(T,\cdot)|\big\|{}_{L^p(\Omega;X)} \to 0.\end{aligned}$$

Therefore, (N_n)_{n≥1} is a Cauchy sequence and hence converges to some N in the space \(L^p(\Omega;X(C_b([0,\infty))))\). Analogously to the first part of the proof, N(t, ⋅) = M(t) for all t ≥ 0. □

Remark 16.3.3

Note that due to the construction of N we have that \(\Delta M_{\tau}(s) = \Delta N(\cdot,s)_{\tau}\) for any stopping time τ and almost any s ∈ S. Indeed, let (M_n)_{n≥1} and (N_n)_{n≥1} be as in the proof of Theorem 16.3.2. Then on the one hand

$$\displaystyle \begin{aligned} \|\Delta M_{\tau} - \Delta (M_n)_{\tau}\|{}_{L^p(\Omega; X)} &\leq \bigl\|\sup_{0\leq t\leq T}\|M(t)-M_n(t)\|{}_X\bigr\|{}_{L^p(\Omega)}\\ &\eqsim_p\|M(T)-M_n(T)\|{}_{L^p(\Omega;X)} \to 0,\;\;\; n\to\infty. \end{aligned} $$

On the other hand

$$\displaystyle \begin{aligned} \|\Delta N_{\tau} - \Delta (N_n)_{\tau}\|{}_{L^p(\Omega; X)} &\leq \bigl\|\sup_{0\leq t\leq T}|N(t)-N_n(t)|\bigr\|{}_{L^p(\Omega;X)}\\ &\eqsim_{p,X}\bigl\||N(T)-N_n(T)|\bigr\|{}_{L^p(\Omega;X)} \to 0,\;\;\; n\to\infty. \end{aligned} $$

Since \(\|M_n(t) - N_n(t,\cdot )\|{ }_{L^p(\Omega ; X)} = 0\) for all n ≥ 0, we obtain by a limiting argument that \(\|\Delta M_{\tau } - \Delta N_\tau (\cdot )\|{ }_{L^p(\Omega ; X)}=0\), so the desired identity follows from Definition 16.2.2(i).

One could hope there is a more elementary approach to derive the continuity of N in the case where M is continuous: if the filtration \(\widetilde {\mathbb F} := (\widetilde {\mathcal F}_t)_{t\geq 0}\) is generated by M, then M(s) := N(⋅, ⋅, s) is \(\widetilde {\mathbb F}\)-adapted for a.e. s ∈ S, and one might expect that every \(\widetilde{\mathbb F}\)-martingale has a continuous version. Unfortunately, this is not true in general, as the next example shows.

Example 16.3.4

There exists a continuous martingale \(M:\mathbb R_+ \times \Omega \to \mathbb R\), a filtration \(\widetilde {\mathbb F} = (\widetilde {\mathcal F}_t)_{t\geq 0}\) generated by M and all \(\mathbb P\)-null sets, and a purely discontinuous nonzero \(\widetilde {\mathbb F}\)-martingale \(N:\mathbb R_+ \times \Omega \to \mathbb R\). Indeed, let \(W:\mathbb R_+ \times \Omega \to \mathbb R\) be a Brownian motion and let \(L:\mathbb R_+ \times \Omega \to \mathbb R\) be a Poisson process such that W and L are independent. Let \(\mathbb F = (\mathcal F_t)_{t\geq 0}\) be the filtration generated by W and L. Let σ be an \(\mathbb F\)-stopping time defined as follows

$$\displaystyle \begin{aligned} \sigma = \inf\{u\geq 0:\Delta L_u \neq 0\}. \end{aligned}$$

Let us define

$$\displaystyle \begin{aligned}M:= \int\mathbf 1_{[0,\sigma]}\,\mathrm{d} W = W^{\sigma}. \end{aligned}$$

Then M is a martingale. Let \(\widetilde {\mathbb F} := (\widetilde {\mathcal F}_t)_{t\geq 0}\) be generated by M. Note that \(\widetilde {\mathcal F}_t \subset \mathcal F_t\) for any t ≥ 0. Define a random variable

$$\displaystyle \begin{aligned} \tau=\inf\{t\geq 0:\exists u\in [0,t) \,\text{such that}\, M \, \text{is constant on}\, [u,t]\}. \end{aligned}$$

Then τ = σ a.s. Moreover, τ is a \(\widetilde {\mathbb F}\)-stopping time since for each u ≥ 0

$$\displaystyle \begin{aligned} \mathbb P\{\tau = u\} = \mathbb P\{\sigma = u\}= \mathbb P\{\Delta L^{\sigma}_u \neq 0\} \leq \mathbb P\{\Delta L_u \neq 0\} = 0, \end{aligned} $$

and hence

$$\displaystyle \begin{aligned} \{\tau\leq u\} =\{\tau<u\} \cup \{\tau=u\}\in \widetilde{ \mathcal F}_u. \end{aligned}$$

Therefore \(N:\mathbb R_+ \times \Omega \to \mathbb R\) defined by

$$\displaystyle \begin{aligned}N_t:= \mathbf 1_{[\tau,\infty)}(t) -t\wedge \tau,\;\;\;\; t\geq 0, \end{aligned}$$

is an \(\widetilde {\mathbb F}\)-martingale since it is \(\widetilde {\mathbb F}\)-adapted and since \(N_t = (L_t - t)^{\sigma}\) a.s. for each t ≥ 0, hence for each u ∈ [0, t]

$$\displaystyle \begin{aligned} \mathbb E (N_t|\widetilde {\mathcal F}_u) = \mathbb E (\mathbb E (N_t|{\mathcal F}_u)|\widetilde {\mathcal F}_u) =\mathbb E (\mathbb E ((L_{t}-t)^{\sigma}|{\mathcal F}_u)|\widetilde {\mathcal F}_u) = (L_{u}-u)^{\sigma} = N_u \end{aligned}$$

due to the fact that \(t\mapsto L_t - t\) is an \(\mathbb F\)-martingale (see [15, Problem 1.3.4]) and N is \(\widetilde{\mathbb F}\)-adapted. But (N_t)_{t≥0} is not continuous since (L_t)_{t≥0} is not continuous.

16.4 Main Result

Theorem 16.1.1 will be a consequence of the following more general result.

Theorem 16.4.1

Let X be a UMD Banach function space over a σ-finite measure space (S, Σ, μ) and let p ∈ (1, ∞). Let \(M:{\mathbb R}_+\times \Omega \to X\) be a local L^p-martingale with respect to \(({\mathcal F}_t)_{t\geq 0}\) and assume M(0, ⋅) = 0. Then there exists a mapping \(N:{\mathbb R}_+\times \Omega \times S\to {\mathbb R}\) such that

  1. 1.

    for all t ≥ 0 and a.a. ω ∈ Ω, N(t, ω, ⋅) = M(t, ω),

  2. 2.

    N is a local martingale field,

  3. 3.

    the following estimate holds

    $$\displaystyle \begin{aligned} \big\|\sup_{t\geq 0} |N(t,\cdot,\cdot)| \big\|{}_{L^p(\Omega;X)} \eqsim_{p,X} \big\|\sup_{t\geq 0}\|M(t,\cdot)\|{}_X \big\|{}_{L^p(\Omega)} \eqsim_{p,X} \|[N]_{\infty}^{1/2}\|{}_{L^p(\Omega;X)}. \end{aligned} $$
    (16.9)

To prove Theorem 16.4.1 we first prove a completeness result.

Proposition 16.4.2

Let X be a Banach function space over a σ-finite measure space S and let 1 ≤ p < ∞. Let \(\mathrm{MQ}^p(X)\) denote the linear space of all martingale fields \(N:\mathbb R_+\times \Omega\times S\to \mathbb R\) for which \(\|N\|{}_{\mathrm{MQ}^p(X)}<\infty\),

where \(\|N\|{ }_{\mathrm {MQ}^p(X)} := \|[N]_{\infty }^{1/2}\|{ }_{L^p(\Omega ;X)}\). Then \((\mathrm {MQ}^p(X), \|\cdot \|{ }_{\mathrm {MQ}^p(X)})\) is a Banach space. Moreover, if N n → N in MQ p, then there exists a subsequence \((N_{n_k})_{k\geq 1}\) such that pointwise a.e. in S, we have \(N_{n_k}\to N\) in \(L^1(\Omega ;\mathcal D_b([0,\infty )))\).

Proof

Let us first check that MQp(X) is a normed vector space. For this only the triangle inequality requires some comments. By the well-known estimate for local martingales M, N (see [13, Theorem 26.6(iii)]) we have that a.s.

$$\displaystyle \begin{aligned} \begin{aligned}{}[M+N]_{t} &= [M]_t+2[M,N]_{t} +[N]_t\\ &\leq [M]_t+2[M]^{1/2}_t[N]_{t}^{1/2} +[N]_t = \big([M]_t^{1/2}+[N]_{t}^{1/2}\big)^2, \end{aligned} \end{aligned} $$
(16.10)

Therefore, \([M+N]_{t}^{1/2} \leq [M]^{1/2}_t+[N]_{t}^{1/2}\) a.s. for all t ∈ [0, ∞].

Let (N k)k≥1 be such that \(\sum _{k\geq 1} \|N_k\|{ }_{\mathrm {MQ}^p(X)}<\infty \). It suffices to show that ∑k≥1N k converges in MQp(X). Observe that by monotone convergence in Ω and Jensen’s inequality applied to ∥⋅∥X for any n > m ≥ 1 we have

$$\displaystyle \begin{aligned} &\Big\|\sum_{k = m+1}^n {\mathbb E} [N_k]_{\infty}^{1/2}\Big\|{}_{X} \notag\\ &\quad = \Big\|\sum_{k = 1}^n {\mathbb E} [N_k]_{\infty}^{1/2} - \sum_{k = 1}^m {\mathbb E} [N_k]_{\infty}^{1/2}\Big\|{}_{X}\notag\\ &\quad =\Big\|{\mathbb E} \sum_{k = m+1}^n [N_k]_{\infty}^{1/2}\Big\|{}_{X} \leq {\mathbb E}\Big\| \sum_{k = m+1}^n [N_k]_{\infty}^{1/2}\Big\|{}_{X}\\ &\quad =\Big\| \sum_{k = m+1}^n [N_k]_{\infty}^{1/2}\Big\|{}_{L^1(\Omega;X)} \leq \Big\| \sum_{k = m+1}^n [N_k]_{\infty}^{1/2}\Big\|{}_{L^p(\Omega;X)} \notag\\ &\quad \leq \sum_{k = m+1}^n\Big\| [N_k]_{\infty}^{1/2}\Big\|{}_{L^p(\Omega;X)} \to 0,\;\; m,n\to \infty,\notag \end{aligned} $$
(16.11)

where the latter holds due to the fact that \(\sum _{k \geq 1}\big \| [N_k]_{\infty }^{1/2}\big \|{ }_{L^p(\Omega ;X)} < \infty \). Thus \(\sum _{k = 1}^n {\mathbb E} [N_k]_{\infty }^{1/2}\) converges in X as n → ∞, and the limit coincides with the pointwise limit \(\sum _{k \geq 1} {\mathbb E} [N_k]_{\infty }^{1/2}\) by Remark 16.2.3. Therefore, since any element of X is finite a.e. by Definition 16.2.2, we can find S_0 ∈ Σ such that \(\mu (S_0^{c}) = 0\) and \(\sum _{k\geq 1} {\mathbb E} [N_k]_{\infty }^{1/2}(s)<\infty \) for all s ∈ S_0. Fix s ∈ S_0. In particular, \(\sum _{k\geq 1} [N_k(\cdot,\cdot,s)]_{\infty }^{1/2}\) converges in L^1(Ω). Moreover, since by the scalar Burkholder-Davis-Gundy inequalities \({\mathbb E}\sup _{t\geq 0} |N_k(t,\cdot ,s)| \eqsim {\mathbb E}[N_k(\cdot,\cdot,s)]_{\infty }^{1/2}\), we also obtain that

$$\displaystyle \begin{aligned} N(\cdot, s):=\sum_{k\geq 1} N_k(\cdot, s) \ \ \text{converges in} \ L^1(\Omega;\mathcal D_b([0,\infty))).\end{aligned} $$
(16.12)

Let N(⋅, s) = 0 for \(s\notin S_0\). Then N defines a martingale field. Moreover, by the scalar Burkholder-Davis-Gundy inequalities

$$\displaystyle \begin{aligned} \lim_{m\to \infty} \Big[\sum_{k=n}^m N_k(\cdot,s)\Big]_{\infty}^{1/2} =\Big[\sum_{k=n}^\infty N_k(\cdot, s)\Big]_{\infty}^{1/2}\end{aligned} $$

in L 1( Ω). Therefore, by considering an a.s. convergent subsequence and by (16.10) we obtain

$$\displaystyle \begin{aligned} \Big[\sum_{k=n}^\infty N_k(\cdot, s)\Big]_{\infty}^{1/2}\leq \sum_{k=n}^\infty [N_k(\cdot,s)]_{\infty}^{1/2}. \end{aligned} $$
(16.13)

It remains to prove that N ∈MQp(X) and N =∑k≥1N k with convergence in MQp(X). Let ε > 0. Choose \(n\in {\mathbb N}\) such that \(\sum _{k\geq n+1} \|N_k\|{ }_{\mathrm {MQ}^p(X)}<\varepsilon \). It follows from (16.11) that \({\mathbb E}\big \| \sum _{k\geq 1} [N_k]_{\infty }^{1/2}\big \|{ }_{X}<\infty \), so \(\sum _{k\geq 1} [N_k]_{\infty }^{1/2}\) a.s. converges in X. Now by (16.13), the triangle inequality and Fatou’s lemma, we obtain

$$\displaystyle \begin{aligned} \Big\|\Big[\sum_{k\geq n+1} N_k \Big]_{\infty}^{1/2}\Big\|{}_{L^p(\Omega; X)} & \leq \Big\|\sum_{k=n+1}^\infty [ N_k ]_{\infty}^{1/2}\Big\|{}_{L^p(\Omega; X)} \\ & \leq \sum_{k=n+1}^\infty \Big\|[ N_k ]_{\infty}^{1/2}\Big\|{}_{L^p(\Omega; X)} \\ & \leq \liminf_{m\to \infty} \sum_{k=n+1}^m\Big\|[ N_k]_{\infty}^{1/2} \Big\|{}_{L^p(\Omega; X)}<\varepsilon. \end{aligned} $$

Therefore, N ∈MQp(X) and \(\| N - \sum _{k=1}^n N_k \|{ }_{\mathrm {MQ}^p(X)} <\varepsilon \).

For the proof of the final assertion assume that N_n → N in MQ^p(X). Choose a subsequence \((N_{n_k})_{k\geq 1}\) such that \(\|N_{n_k}- N\|{ }_{\mathrm {MQ}^p(X)}\leq 2^{-k}\). Then \(\sum _{k\geq 1}\|N_{n_k}- N\|{ }_{\mathrm {MQ}^p(X)}<\infty \) and hence by (16.12) we see that pointwise a.e. in S, the series \(\sum _{k\geq 1} (N_{n_k}- N)\) converges in \(L^1(\Omega ;\mathcal D_b([0,\infty )))\). Therefore, pointwise a.e. in S, \(N_{n_k}\to N\) in \(L^1(\Omega ;\mathcal D_b([0,\infty )))\), as required. □

For the proof of Theorem 16.4.1 we will need the following lemma presented in [8, Théorème 2].

Lemma 16.4.3

Let 1 < p < ∞ and let \(M:\mathbb R_+\times \Omega \to \mathbb R\) be an L^p-martingale. Let T > 0. For each n ≥ 1 define

$$\displaystyle \begin{aligned} R_n := \sum_{k=1}^n \big|M_{\frac{Tk}{n}} - M_{\frac{T(k-1)}{n}}\big|{}^2. \end{aligned}$$

Then R_n converges to [M]_T in \(L^{p/2}(\Omega)\).

Proof of Theorem 16.4.1

The existence of the local martingale field N together with the first estimate in (16.9) follows from Theorem 16.3.2. It remains to prove

$$\displaystyle \begin{aligned} \big\|\sup_{t\geq 0}\|M(t,\cdot)\|{}_X \big\|{}_{L^p(\Omega)} \eqsim_{p,X} \|[N]_{\infty}^{1/2}\|{}_{L^p(\Omega;X)}. \end{aligned} $$
(16.14)

Due to Definition 16.2.2(iv) it suffices to prove the above norm equivalence in the case where M and N become constant after some fixed time T.

Step 1: The Finite Dimensional Case

Assume that M takes values in a finite dimensional subspace Y  of X and that the right hand side of (16.14) is finite. Then we can write \(N(t,s) = M(t)(s) = \sum _{j=1}^n M_j(t) x_j(s)\), where each M j is a scalar-valued martingale with M j(T) ∈ L p( Ω) and x 1, …, x n ∈ X form a basis of Y . Note that for any c 1, …, c n ∈ L p( Ω) we have that

$$\displaystyle \begin{aligned} \Bigl\|\sum_{j=1}^n c_jx_j\Bigr\|{}_{L^p(\Omega; X)} \eqsim_{p,Y} \sum_{j=1}^n \|c_j\|{}_{L^p(\Omega)}. \end{aligned} $$
(16.15)

Fix m ≥ 1. Then by (16.3) and Doob’s maximal inequality

$$\displaystyle \begin{aligned} \begin{aligned} \big\|\sup_{t\geq 0}\|M(t,\cdot)\|{}_X \big\|{}_{L^p(\Omega)} &\eqsim_{p} \|M(T,\cdot)\|{}_{L^p(\Omega;X)}\\ &= \Bigl\|\sum_{i=1}^m M_{\frac{Ti}{m}} - M_{\frac{T(i-1)}{m}}\Bigr\|{}_{L^p(\Omega; X)}\\ &\eqsim_{p,X} \Bigl\|\Bigl(\sum_{i=1}^m\big|M_{\frac{Ti}{m}} - M_{\frac{T(i-1)}{m}}\big|{}^2\Bigr)^{\frac{1}{2}}\Bigr\|{}_{L^p(\Omega; X)}, \end{aligned} \end{aligned} $$
(16.16)

and by (16.15) and Lemma 16.4.3 the right-hand side of (16.16) converges, as m → ∞, to

$$\displaystyle \begin{aligned}\|[M]_{\infty}^{1/2}\|{}_{L^p(\Omega;X)}=\|[N]_{\infty}^{1/2}\|{}_{L^p(\Omega;X)}. \end{aligned}$$

Step 2: Reduction to the Case Where M Takes Values in a Finite Dimensional Subspace of X

Let M(T) ∈ L^p(Ω;X). Then we can find simple functions (ξ_n)_{n≥1} in L^p(Ω;X) such that ξ_n → M(T). Let \(M_n(t) = {\mathbb E}(\xi _n|\mathcal {F}_t)\) for all t ≥ 0 and n ≥ 1, and let (N_n)_{n≥1} be the corresponding martingale fields. Then each M_n takes values in a finite dimensional subspace X_n ⊆ X, and hence by Step 1

$$\displaystyle \begin{aligned} \big\|\sup_{t\geq 0}\|M_n(t,\cdot)-M_m(t,\cdot)\|{}_X \big\|{}_{L^p(\Omega)} \eqsim_{p,X} \|[N_n-N_m]_{\infty}^{1/2}\|{}_{L^p(\Omega;X)} \end{aligned}$$

for any m, n ≥ 1. Therefore since (ξ n)n≥1 is Cauchy in L p( Ω;X), (N n)n≥1 converges to some N in MQp(X) by the first part of Proposition 16.4.2.

Let us show that N is the desired local martingale field. Fix t ≥ 0. We need to show that N(⋅, t, ⋅) = M t a.s. on Ω. First notice that by the second part of Proposition 16.4.2 there exists a subsequence of (N n)n≥1 which we will denote by (N n)n≥1 as well such that N n(⋅, t, σ) → N(⋅, t, σ) in L 1( Ω) for a.e. σ ∈ S. On the other hand by Jensen’s inequality

$$\displaystyle \begin{aligned} \bigl\|\mathbb E |N_n(\cdot,t,\cdot) - M_t|\bigr\|{}_X = \bigl\|\mathbb E |M_n(t) - M(t)|\bigr\|{}_X \leq \mathbb E \|M_n(t)- M(t)\|{}_X \to 0,\;\;\;\; n\to \infty. \end{aligned}$$

Hence N_n(⋅, t, ⋅) → M_t in X(L^1(Ω)), and thus by Remark 16.2.3 in L^0(S;L^1(Ω)). Therefore we can find a subsequence of (N_n)_{n≥1} (which we will again denote by (N_n)_{n≥1}) such that N_n(⋅, t, σ) → M_t(σ) in L^1(Ω) for a.e. σ ∈ S (here we use the fact that μ is σ-finite), so N(⋅, t, ⋅) = M_t a.s. on Ω × S, and consequently by Definition 16.2.2(iii), N(ω, t, ⋅) = M_t(ω) for a.a. ω ∈ Ω. Thus (16.14) follows by letting n → ∞.

Step 3: Reduction to the Case Where the Left-Hand Side of (16.14) is Finite

Assume that the left-hand side of (16.14) is infinite, but the right-hand side is finite. Since M is a local L p-martingale we can find a sequence of stopping times (τ n)n≥1 such that τ n↑∞ and \(\|M^{\tau _n}_T\|{ }_{L^p(\Omega ;X)}<\infty \) for each n ≥ 1. By the monotone convergence theorem and Definition 16.2.2(iv)

$$\displaystyle \begin{aligned} \|[N]_{\infty}^{1/2}\|{}_{L^p(\Omega;X)} &= \lim_{n\to \infty}\|[N^{\tau_n}]_{\infty}^{1/2}\|{}_{L^p(\Omega;X)} \eqsim_{p,X} \limsup_{n\to \infty}\|M^{\tau_n}_T\|{}_{L^p(\Omega;X)}\\ & = \lim_{n\to \infty}\|M^{\tau_n}_T\|{}_{L^p(\Omega;X)} = \lim_{n\to \infty}\Bigl\|\sup_{0\leq t\leq T}\|M^{\tau_n}_t\|{}_X\Bigr\|{}_{L^p(\Omega)}\\ &= \Bigl\|\sup_{0\leq t\leq T}\|M_t\|{}_X\Bigr\|{}_{L^p(\Omega)}=\infty \end{aligned} $$

and hence the right-hand side of (16.14) is infinite as well.

We use an extrapolation argument to extend part of Theorem 16.4.1 to p ∈ (0, 1] in the continuous-path case.

Corollary 16.4.4

Let X be a UMD Banach function space over a σ-finite measure space and let p ∈ (0, ∞). Let \(M:\mathbb R_+\times \Omega \to X\) be a continuous local martingale with M(0, ⋅) = 0. Then there exists a continuous local martingale field \(N:\mathbb R_+\times \Omega \times S\to {\mathbb R}\) such that for a.a. ω ∈ Ω, all t ≥ 0, and a.a. s ∈ S, N(t, ω, s) = M(t, ω)(s) and

$$\displaystyle \begin{aligned} \big\|\sup_{t\geq 0} \|M(t,\cdot)\|{}_X \big\|{}_{L^p(\Omega)} \eqsim_{p,X} \big\|[N]_{\infty}^{1/2}\big\|{}_{L^p(\Omega;X)} . \end{aligned} $$
(16.17)

Proof

By a stopping time argument we can reduce to the case where ∥M(t, ω)∥X is uniformly bounded in \(t\in \mathbb R_+\) and ω ∈ Ω and M becomes constant after a fixed time T. Now the existence of N follows from Theorem 16.4.1 and it remains to prove (16.17) for p ∈ (0, 1]. For this we can use a classical argument due to Lenglart. Indeed, for both estimates we can apply [16] or [23, Proposition IV.4.7] to the continuous increasing processes \(Y,Z:{\mathbb R}_+\times \Omega \to \mathbb R_+\) given by

$$\displaystyle \begin{aligned} Y_u &= \sup_{t\in [0,u]} \|M(t,\cdot)\|{}_X, \\ Z_u &= \|s\mapsto [N(\cdot, \cdot, s)]_{u}^{1/2}\|{}_X, \end{aligned} $$

where q ∈ (1, ∞) is a fixed number. Then by (16.9), for any bounded stopping time τ we have

$$\displaystyle \begin{aligned} {\mathbb E} Y_{\tau}^q & = {\mathbb E}\sup_{t\geq 0} \|M(t\wedge \tau,\cdot)\|{}_X^q \eqsim_{q,X} {\mathbb E} \|s\mapsto [N(\cdot\wedge \tau, \cdot, s)]_{\infty}^{1/2}\|{}_X^q \\ & \stackrel{(*)}{=} {\mathbb E} \|s\mapsto [N(\cdot, \cdot, s)]_{\tau}^{1/2}\|{}_X^q = {\mathbb E} Z_\tau^q, \end{aligned} $$

where we used [13, Theorem 17.5] in (∗). Now (16.17) for p ∈ (0, q) follows from [16] or [23, Proposition IV.4.7]. □

As we saw in Theorem 16.3.2, continuity of M implies pointwise continuity of the corresponding martingale field N. The following corollaries of Theorem 16.4.1 are devoted to proving the same type of assertions concerning pure discontinuity, quasi-left continuity, and having accessible jumps.

Let τ be a stopping time. Then τ is called predictable if there exists a sequence of stopping times (τ_n)_{n≥1} such that τ_n < τ a.s. on {τ > 0} for each n ≥ 1 and τ_n → τ a.s. A càdlàg process \(V:\mathbb R_+ \times \Omega \to X\) is said to have accessible jumps if there exists a sequence of predictable stopping times (τ_n)_{n≥1} such that \(\{t\in \mathbb R_+:\Delta V_t \neq 0\} \subset \{\tau _1,\ldots ,\tau _n,\ldots \}\) a.s.
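For instance, a deterministic time t_0 > 0 is predictable, as witnessed by the announcing sequence

$$\displaystyle \begin{aligned} \tau_n := \Big(t_0 - \frac 1n\Big)\vee 0, \qquad n\geq 1, \end{aligned}$$

whereas the first jump time σ of the Poisson process L in Example 16.3.4 is not predictable.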

Corollary 16.4.5

Let X be a UMD Banach function space over a measure space (S, Σ, μ), let 1 < p < ∞, and let \(M:\mathbb R_+ \times \Omega \to X\) be a purely discontinuous L^p-martingale with accessible jumps. Let N be the corresponding martingale field. Then N(⋅, s) is a purely discontinuous martingale with accessible jumps for a.e. s ∈ S.

For the proof we will need the following lemma taken from [7, Subsection 5.3].

Lemma 16.4.6

Let X be a Banach space, 1 ≤ p < ∞, \(M:\mathbb R_+ \times \Omega \to X\) be an L^p-martingale, and τ be a predictable stopping time. Then \((\Delta M_{\tau}\mathbf 1_{[0,t]}(\tau))_{t\geq 0}\) is an L^p-martingale as well.

Proof of Corollary 16.4.5

Without loss of generality we can assume that there exists T ≥ 0 such that M t = M T for all t ≥ T, and that M 0 = 0. Since M has accessible jumps, there exists a sequence of predictable stopping times (τ n)n≥1 such that a.s.

$$\displaystyle \begin{aligned}\{t\in \mathbb R_+:\Delta M_t \neq 0\} \subset \{\tau_1,\ldots,\tau_n,\ldots\}. \end{aligned}$$

For each m ≥ 1 define a process \(M^m:\mathbb R_+ \times \Omega \to X\) in the following way:

$$\displaystyle \begin{aligned} M^m(t) := \sum_{n=1}^m\Delta M_{\tau_n}\mathbf 1_{[0,t]}(\tau_n),\;\;\; t\geq 0. \end{aligned}$$

Note that M^m is a purely discontinuous L^p-martingale with accessible jumps by Lemma 16.4.6. Let N^m be the corresponding martingale field. Then N^m(⋅, s) is a purely discontinuous martingale with accessible jumps for almost any s ∈ S due to Remark 16.3.3. Moreover, for any m ≥ ℓ ≥ 1 and any t ≥ 0 we have that a.s. \([N^m(\cdot,s)]_t \geq [N^{\ell}(\cdot,s)]_t\). Define \(F:\mathbb R_+\times \Omega \times S \to \mathbb R_+\cup \{+\infty \}\) in the following way:

$$\displaystyle \begin{aligned}F(t,\cdot,s):= \lim_{m\to \infty} [N^{m}(\cdot, s)]_t,\;\;\; s\in S, t\geq 0. \end{aligned}$$

Note that F(⋅, ⋅, s) is a.s. finite for almost any s ∈ S. Indeed, by Theorem 16.4.1 and [27, Theorem 4.2] we have that for any m ≥ 1

$$\displaystyle \begin{aligned} \big\|[N^m]_{\infty}^{1/2}\big\|{}_{L^p(\Omega;X)} \eqsim_{p,X} \|M^m(T,\cdot) \|{}_{L^p(\Omega;X)} \leq \beta_{p,X}\|M(T,\cdot)\|{}_{L^p(\Omega;X)}, \end{aligned}$$

so by Definition 16.2.2(iv), F(⋅, ⋅, s) is a.s. finite for almost any s ∈ S and

$$\displaystyle \begin{aligned} \big\|F_{\infty}^{1/2}\big\|{}_{L^p(\Omega;X)} &= \big\|F_{T}^{1/2}\big\|{}_{L^p(\Omega;X)} = \lim_{m\to \infty}\big\|[N^m]_{T}^{1/2}\big\|{}_{L^p(\Omega;X)}\\ &\lesssim_{p,X} \limsup_{m\to \infty}\|M^m(T,\cdot) \|{}_{L^p(\Omega;X)} \lesssim_{p,X} \|M(T,\cdot)\|{}_{L^p(\Omega;X)}. \end{aligned} $$

Moreover, for almost any s ∈ S we have that F(⋅, ⋅, s) is pure jump and

$$\displaystyle \begin{aligned} \{t\in \mathbb R_+:\Delta F_t \neq 0\} \subset \{\tau_1,\ldots,\tau_n,\ldots\}. \end{aligned}$$

Therefore it suffices to show that F(s) = [N(s)] a.s. on Ω for a.e. s ∈ S. Note that by Definition 16.2.2(iv),

$$\displaystyle \begin{aligned} \big\|(F-[N^m])^{1/2}(\infty)\big\|{}_{L^p(\Omega;X)}\to 0,\;\;\; m\to \infty \end{aligned} $$
(16.18)

so by Theorem 16.4.1 (M m(T))m≥1 is a Cauchy sequence in L p( Ω;X). Let ξ be its limit, \(M^0:\mathbb R_+ \times \Omega \to X\) be a martingale such that \(M^0(t)=\mathbb E (\xi |\mathcal F_{t})\) for all t ≥ 0. Then by [27, Proposition 2.14] M 0 is purely discontinuous. Moreover, for any stopping time τ a.s.

$$\displaystyle \begin{aligned} \Delta M^0_{\tau} = \lim_{m\to \infty} \Delta M^m_{\tau} = \lim_{m\to \infty}\Delta M_{\tau} \mathbf 1_{\{\tau_1,\ldots,\tau_m\}}(\tau) = \Delta M_{\tau}, \end{aligned}$$

where the latter holds since the set {τ 1, …, τ n, …} exhausts the jump times of M. Therefore M = M 0 since both M and M 0 are purely discontinuous with the same jumps, and hence [N] = F (where F(s) = [M 0(s)] by (16.18)). Consequently N(⋅, ⋅, s) is purely discontinuous with accessible jumps for almost all s ∈ S. □

Remark 16.4.7

Note that the proof of Corollary 16.4.5 also implies that \(M^m_t \to M_t\) in L p( Ω;X) for each t ≥ 0.

A càdlàg process \(V:\mathbb R_+ \times \Omega \to X\) is called quasi-left continuous if ΔV τ = 0 a.s. for any predictable stopping time τ.

Corollary 16.4.8

Let X be a UMD Banach function space over a measure space (S, Σ, μ), let 1 < p < ∞, and let \(M:\mathbb R_+ \times \Omega \to X\) be a purely discontinuous quasi-left continuous L^p-martingale. Let N be the corresponding martingale field. Then N(⋅, s) is a purely discontinuous quasi-left continuous martingale for a.e. s ∈ S.

The proof will exploit random measure theory. Let \((J, \mathcal J)\) be a measurable space. Then a family μ = {μ(ω; dt, dx), ω ∈ Ω} of nonnegative measures on \((\mathbb R_+ \times J; \mathcal B(\mathbb R_+)\otimes \mathcal J)\) is called a random measure. A random measure μ is called integer-valued if it takes values in \(\mathbb N\cup \{\infty \}\), i.e. for each \(A \in \mathcal B(\mathbb R_+)\otimes \mathcal F\otimes \mathcal J\) one has that \(\mu (A) \in \mathbb N\cup \{\infty \}\) a.s., and if μ({t}× J) ∈{0, 1} a.s. for all t ≥ 0.

Let X be a Banach space, μ be a random measure, \(F:\mathbb R_+ \times \Omega \times J \to X\) be such that \(\int _{\mathbb R_+ \times J} \|F\| \,\mathrm {d} \mu <\infty \) a.s. Then the integral process ((Fμ)t)t≥0 of the form

$$\displaystyle \begin{aligned} (F\star \mu)_t := \int_{\mathbb R_+ \times J} F(s,\cdot, x)\mathbf 1_{[0,t]}(s)\mu(\cdot; \,\mathrm{d} s, \,\mathrm{d} x),\;\;\; t\geq 0, \end{aligned}$$

is a.s. well-defined.

Any integer-valued optional \({\mathcal P}\otimes \mathcal J\)-σ-finite random measure μ has a compensator: a unique predictable \({\mathcal P}\otimes \mathcal J\)-σ-finite random measure ν such that \(\mathbb E (W \star \mu )_{\infty } = \mathbb E (W \star \nu )_{\infty }\) for each \({\mathcal P}\otimes \mathcal J\)-measurable real-valued nonnegative W (see [12, Theorem II.1.8]). For any optional \({\mathcal P}\otimes \mathcal J\)-σ-finite measure μ we define the associated compensated random measure by \(\bar {\mu } = \mu -\nu \).
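For example, the jump measure of the Poisson process L from Example 16.3.4, i.e. the integer-valued random measure on \(\mathbb R_+\times \mathbb R\) given by

$$\displaystyle \begin{aligned} \mu^L(\cdot; A\times B) := \sum_{t\geq 0} \mathbf 1_{A}(t)\mathbf 1_{B\setminus\{0\}}(\Delta L_t), \qquad A\in\mathcal B(\mathbb R_+),\ B\in \mathcal B(\mathbb R), \end{aligned}$$

has compensator \(\nu(\mathrm{d} t,\mathrm{d} x) = \mathrm{d} t\otimes \delta_{1}(\mathrm{d} x)\), and with F(t, ⋅, x) = x the process \(F\star\bar{\mu}^L\) is the compensated Poisson process \((L_t-t)_{t\geq 0}\).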

Recall that \(\mathcal P\) denotes the predictable σ-algebra on \(\mathbb R_+ \times \Omega \) (see [13] for details). For each \(\mathcal P \otimes \mathcal J\)-strongly-measurable \(F:\mathbb R_+ \times \Omega \times J \to X\) such that \(\mathbb E (\|F\|\star \mu )_{\infty }< \infty \) (or, equivalently, \(\mathbb E (\|F\|\star \nu )_{\infty }<\infty \), see the definition of a compensator above) we can define a process \(F\star \bar {\mu }\) by F ⋆ μ − F ⋆ ν. Then this process is a purely discontinuous local martingale. We will omit here some technicalities for the convenience of the reader and refer the reader to [12, Chapter II.1], [7, Subsection 5.4–5.5], and [14, 19, 20] for more details on random measures.

Proof of Corollary 16.4.8

Without loss of generality we can assume that there exists T ≥ 0 such that M t = M T for all t ≥ T, and that M 0 = 0. Let μ be a random measure defined on \(\mathbb R_+\times X\) in the following way

$$\displaystyle \begin{aligned} \mu(A\times B) = \sum_{t\geq 0} \mathbf 1_{A}(t) \mathbf 1_{B\setminus \{0\}}(\Delta M_t), \end{aligned}$$

where \(A\subset \mathbb R_+\) is a Borel set, and B ⊂ X is a ball. For each k, ℓ ≥ 1 we define a stopping time τ_{k,ℓ} as follows

$$\displaystyle \begin{aligned} \tau_{k,\ell} = \inf\{t\in \mathbb R_+: \#\{u\in [0,t] : \|\Delta M_u\|{}_X\in [1/k, k]\} = \ell\}. \end{aligned}$$

Since M has càdlàg trajectories, τ_{k,ℓ} is a.s. well-defined and takes its values in [0, ∞]. Moreover, for each k ≥ 1, τ_{k,ℓ} → ∞ a.s. as ℓ → ∞, so we can find a subsequence \((\tau _{k_n,\ell _n})_{n\geq 1}\) such that k_n ≥ n for each n ≥ 1 and \(\inf _{m\geq n} \tau _{k_m, \ell _m}\to \infty \) a.s. as n → ∞. Define \(\tau _n = \inf _{m\geq n} \tau _{k_m, \ell _m}\) and define \(M^n := (\mathbf 1_{[0,\tau _n]} \mathbf 1_{B_n})\star \bar {\mu }\), where \(\bar {\mu } = \mu -\nu \) is such that ν is the compensator of μ and B_n = {x ∈ X : ∥x∥ ∈ [1/n, n]}. Then M^n is a purely discontinuous quasi-left continuous martingale by [7]. Moreover, a.s.

$$\displaystyle \begin{aligned} \Delta M^n_t = \Delta M_t \mathbf 1_{[0,\tau_n]}(t) \mathbf 1_{[1/n,n]}(\|\Delta M_t\|),\;\;\;\; t\geq 0. \end{aligned}$$

so by [27] M n is an L p-martingale (due to the weak differential subordination of purely discontinuous martingales).

The rest of the proof is analogous to the proof of Corollary 16.4.5 and uses the fact that τ_n → ∞ monotonically a.s. □

Let X be a Banach space. A local martingale \(M:\mathbb R_+ \times \Omega \to X\) is said to have a canonical decomposition if there exist local martingales \(M^c,M^q, M^a:\mathbb R_+\times \Omega \to X\) such that M^c is continuous, M^q and M^a are purely discontinuous, M^q is quasi-left continuous, M^a has accessible jumps, \(M^c_0=M^q_0=0\), and M = M^c + M^q + M^a. Existence of such a decomposition was first shown in the real-valued case by Yoeurp in [30], and recently existence was obtained in the UMD space case (see [27, 28]).
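For example, if W is a Brownian motion and L is an independent Poisson process (as in Example 16.3.4), then the real-valued martingale M_t = W_t + (L_t − t) has canonical decomposition

$$\displaystyle \begin{aligned} M^c_t = W_t, \qquad M^q_t = L_t - t, \qquad M^a_t = 0, \qquad t\geq 0, \end{aligned}$$

since W is continuous and the compensated Poisson process is purely discontinuous and quasi-left continuous.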

Remark 16.4.9

Note that if a local martingale M has some canonical decomposition, then this decomposition is unique (see [13, 27, 28, 30]).

Corollary 16.4.10

Let X be a UMD Banach function space, 1 < p < ∞, and let \(M:\mathbb R_+\times \Omega \to X\) be an L^p-martingale. Let N be the corresponding martingale field. Let M = M^c + M^q + M^a be the canonical decomposition and let N^c, N^q, and N^a be the corresponding martingale fields. Then N(s) = N^c(s) + N^q(s) + N^a(s) is the canonical decomposition of N(s) for a.e. s ∈ S. In particular, if M_0 = 0 a.s., then M is continuous, purely discontinuous quasi-left continuous, or purely discontinuous with accessible jumps if and only if N(s) is so for a.e. s ∈ S.

Proof

The first part follows from Theorem 16.3.2, Corollaries 16.4.5 and 16.4.8 and the fact that N(s) = N c(s) + N q(s) + N a(s) is then a canonical decomposition of a local martingale N(s) which is unique due to Remark 16.4.9. Let us show the second part. One direction follows from Theorem 16.3.2, Corollaries 16.4.5 and 16.4.8. For the other direction assume that N(s) is continuous for a.e. s ∈ S. Let M = M c + M q + M a be the canonical decomposition, N c, N q, and N a be the corresponding martingale fields of M c, M q, and M a. Then by the first part of the theorem and the uniqueness of the canonical decomposition (see Remark 16.4.9) we have that for a.e. s ∈ S, N q(s) = N a(s) = 0, so M q = M a = 0, and hence M is continuous. The proof for the case of pointwise purely discontinuous quasi-left continuous N or pointwise purely discontinuous N with accessible jumps is similar. □

Remark 16.4.11

It remains open whether the first two-sided estimate in (16.9) can be extended to p = 1. Recently, in [29] the second author has extended the second two-sided estimate in (16.9) to arbitrary UMD Banach spaces and to p ∈ [1, ∞). Here the quadratic variation has to be replaced by a generalized square function.