1 Introduction

A process \(\mathbf {X}=(X_t)_{t\ge 0}\) on a filtered probability space \((\Omega , \mathcal {F}, \mathbb {F}=(\mathcal {F}_t)_{t\ge 0}, \mathbb {P})\) is called a semimartingale (relative to the filtration \(\mathbb {F}\)) if it admits a decomposition

$$\begin{aligned} X_t=X_0+M_t+A_t,\quad t\ge 0, \end{aligned}$$
(1.1)

where \(\mathbf {M}= ( M_t )_{t\ge 0}\) is a càdlàg local martingale, \(\mathbf A = ( A_t )_{t\ge 0}\) is a càdlàg adapted process of finite variation, \(M_0=A_0=0\) and \(X_0\) is \(\mathcal {F}_0\)-measurable. \(\mathbf {X}\) is called a special semimartingale if (1.1) holds with \(\mathbf A \) also being predictable. In that case the decomposition (1.1) is unique and is called the canonical decomposition of \(\mathbf {X}\). We refer to [22, 32] for basic properties of semimartingales.

Semimartingales play a crucial role in stochastic analysis as they form the class of good integrators for the Itô stochastic integral, cf. the Bichteler–Dellacherie theorem [9, 11]. Semimartingales also play a fundamental role in mathematical finance. Roughly speaking, the (discounted) asset price process must be a semimartingale in order to preclude arbitrage opportunities; see [9, Theorems 1.4, 1.6] for details, and also [26]. The question of whether a given process is a semimartingale is also of importance in stochastic modeling, where long-memory processes with possible jumps and high volatility are considered as driving processes for stochastic differential equations. Examples of such processes include various fractional or, more generally, Volterra processes driven by Lévy processes.

The problem of identifying semimartingales within given classes of stochastic processes has a long history. For Markov processes this problem was studied in [15, 31, 38], and in the context of Gaussian processes it was intensively studied in the 1980s. Gal’chuk [19] investigated Gaussian semimartingales, addressing a question posed by Prof. A.N. Shiryayev. Key results on Gaussian semimartingales are due to [2, 3, 5, 6, 14, 17, 23, 24, 27, 29, 39].

Stricker’s theorem [39, Théorème 1] is probably the most fundamental result on Gaussian semimartingales, and it is used to obtain all of the results cited above (except [23], which it extends). The important question of when certain Gaussian semimartingales admit an equivalent local martingale measure was studied in [13].

Throughout this paper, if \(\mathbf {X}\) is a process with index set \(T \subset \mathbb {R}\), then \(\mathbb {F}^X=(\mathcal {F}^X_t)_{t\ge 0}\) denotes its natural filtration; i.e., the least filtration satisfying the usual conditions such that \(\sigma (X_s:s\le t, s \in T)\subseteq \mathcal {F}^X_t\), \(t\ge 0\).

Theorem

(Stricker’s theorem) Let \(\mathbf {X}\) be a symmetric Gaussian process. Then \(\mathbf {X}\) is a semimartingale relative to its natural filtration \(\mathbb {F}^X\) if and only if it admits a decomposition (1.1), where \((\mathbf {X}, \mathbf {M},\mathbf A )\) are jointly symmetric Gaussian, \(\mathbf {M}\) has independent increments and \(\mathbf A \) is a predictable process of finite variation. In this case, \(\mathbf {X}\) is a special semimartingale and (1.1) is the canonical decomposition of \(\mathbf {X}\).

In this paper we investigate whether (and how) Stricker’s theorem can be generalized to the much larger class of infinitely divisible processes, which includes Gaussian, stable and other processes of interest (see Sect. 4 for specific examples). Recall that a process \(\mathbf {X}=(X_t)_{t\in T}\) is said to be infinitely divisible if all its finite-dimensional distributions are infinitely divisible, and it is called symmetric if \(\mathbf X\) and \(-\mathbf X\) have the same finite-dimensional distributions.

We now carry out a preliminary analysis of this problem to gain more intuition. There are two key features of the decomposition (1.1) in the Gaussian case. The first is that the components \(\mathbf {M}\) and \(\mathbf A \) of the canonical decomposition lie in the same distributional class as \(\mathbf {X}\): both are Gaussian. The second is that \(\mathbf {M}\) is a process with independent increments. The following two examples show that we cannot hope for a direct extension of Stricker’s theorem. The first shows that the processes \(\mathbf {M}\) and \(\mathbf A \) in the canonical decomposition (1.1) of an infinitely divisible semimartingale are not infinitely divisible in general.

Example 1.1

Let \(\mathbf {X}=(X_t)_{t\ge 0}\) be the symmetric infinitely divisible process given by

$$\begin{aligned} X_t={\left\{ \begin{array}{ll} U+V &{} 0\le t<1, \\ V &{} \ t\ge 1, \end{array}\right. } \end{aligned}$$

where the random variables \(U\) and \(V\) have standard Gaussian and standard Laplace distributions, respectively, and \(U\) and \(V\) are independent. Then \(\mathbf {X}\) is a special semimartingale relative to the natural filtration \(\mathbb {F}^X\), but the processes \(\mathbf {M}\) and \(\mathbf A \) in its canonical decomposition (1.1) are not infinitely divisible (see Appendix A for details).
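A quick numerical illustration of this example (a minimal simulation sketch, assuming only numpy; the sample size and seed are arbitrary choices, not from the paper): before \(t=1\) the marginal variance is \(\mathrm{Var}(U)+\mathrm{Var}(V)=1+2=3\), while from \(t=1\) on it is \(\mathrm{Var}(V)=2\).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# U ~ N(0,1) and V ~ standard Laplace (density e^{-|x|}/2, variance 2), independent
U = rng.standard_normal(n)
V = rng.laplace(0.0, 1.0, n)

X_before = U + V   # marginal of X_t for 0 <= t < 1
X_after = V        # marginal of X_t for t >= 1

# Var(U + V) = 1 + 2 = 3 and Var(V) = 2; each path drops by U at t = 1
print(X_before.var(), X_after.var())
```

Note that the jump of the path at \(t=1\) equals \(-U\), which is Gaussian; the point of the example is that, despite this, neither component of the canonical decomposition is infinitely divisible.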

The second example shows that \(\mathbf {M}\) in the canonical decomposition (1.1) of an infinitely divisible semimartingale need not have independent increments.

Example 1.2

Let \(\mathbf {X}=(X_t)_{t\ge 0}\) be the symmetric infinitely divisible process given by \(X_t=\sum _{k=1}^N B_k(t)\), where \(\{B_k(t):t\ge 0\}\) are independent standard Brownian motions and \(N\) is a Poisson random variable independent of \(\{B_k(t):t\ge 0,\,k\in \mathbb {N}\}\). Then \(\mathbf {X}\) is a special semimartingale relative to \(\mathbb {F}^X\), with the canonical decomposition (1.1) given by \(M_t=X_t\) and \(A_t=0\). The process \(\mathbf {M}\) does not have independent increments (see Appendix A for details).
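The dependence of the increments through the common factor \(N\) can be seen in a small simulation (a sketch assuming numpy; the Poisson mean \(\lambda =3\) and the sample size are arbitrary illustrative choices). Given \(N\), the increments of \(\mathbf {X}\) over disjoint unit intervals are i.i.d. \(N(0,N)\); unconditionally they are uncorrelated, yet their squares are positively correlated, so they cannot be independent.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
lam = 3.0  # mean of the Poisson variable N (arbitrary illustrative choice)

# Conditionally on N, the increments over [0,1] and [1,2] are independent N(0, N)
N = rng.poisson(lam, n)
inc1 = rng.standard_normal(n) * np.sqrt(N)  # X_1 - X_0
inc2 = rng.standard_normal(n) * np.sqrt(N)  # X_2 - X_1

# Var(X_1 - X_0) = E[N] = lam; the squared increments correlate through N
print(inc1.var())
print(np.corrcoef(inc1**2, inc2**2)[0, 1])
```

Here \(\mathrm{Cov}(\mathrm{inc}_1^2,\mathrm{inc}_2^2)=\mathrm{Var}(N)>0\), which rules out independent increments for \(\mathbf {M}=\mathbf {X}\).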

This leads to the question: What are the special properties of Gaussian processes that make Stricker’s theorem valid?

The key to addressing this question is provided by Hida’s multiplicity theorem [20, Theorem 4.1]. We state it here in a simplified version which suffices for our purposes, see Remark 5.2.

Theorem 1.3

(Hida’s multiplicity theorem) Let \(\mathbf {X}= (X_t)_{t\ge 0}\) be a symmetric Gaussian process which is right-continuous in probability. Then there exist independent symmetric Gaussian processes \(\mathbf {B}_j=(B_j(t))_{t\in \mathbb {R}}\), \(j \le N \le \infty \), each right-continuous in \(L^2\) with independent increments and \(B_j(0)=0\), such that for each \(t\ge 0\), \(\mathcal {F}^X_t = \vee _{j} \mathcal {F}^{B_j}_t\) and

$$\begin{aligned} X_t = \sum _{j=1}^N \int _{-\infty }^t f_j(t, s) \, dB_j(s) \quad \text {a.s.} \end{aligned}$$

Here \((f_j(t,\cdot ))_{t\ge 0}\), \(j\le N\), are families of deterministic functions such that for every \(j\) and \(t\ge 0\)

\(\int _{-\infty }^t f_j(t,s)^2 \, m_j(ds)< \infty \), where \(m_j(ds)=\mathbb {E}[B_j(ds)^2]\).

Definition 1.4

An infinitely divisible process \(\mathbf {X}=(X_t)_{t\ge 0}\) is said to be representable if there exist a countably generated measurable space \(V\), an infinitely divisible independently scattered random measure \(\Lambda \) on \(\mathbb {R}\times V\), and a family of measurable functions \(\{\phi (t, \cdot )\}_{t\ge 0}\) on \(\mathbb {R}\times V\) such that for every \(t\ge 0\)

$$\begin{aligned} X_t=\int _{(-\infty ,t]\times V} \phi (t,u)\,\Lambda (du) \quad \text {a.s.} \end{aligned}$$
(1.2)

The process \(\mathbf {X}\) is said to be strictly representable if (1.2) holds for some \((\Lambda , \phi )\) as above and \(\mathcal {F}^X_t=\mathcal {F}^{\Lambda }_t\) for every \(t\ge 0\). Here \(\mathbb {F}^\Lambda =(\mathcal {F}^\Lambda _t)_{t\ge 0}\) denotes the filtration generated by \(\Lambda \); see Sect. 2 for the definition of \(\Lambda \) and further pertinent definitions and related facts.

As a corollary to Hida’s multiplicity theorem it follows that Gaussian processes are strictly representable:

Corollary 1.5

We have the following:

(i):

Every symmetric right-continuous in probability Gaussian process \(\mathbf {X}= (X_t)_{t\ge 0}\) is strictly representable by some symmetric Gaussian random measure \(\Lambda \).

(ii):

Every infinitely divisible process \(\mathbf {X}= (X_t)_{t\ge 0}\) that is either symmetric and right-continuous in probability, or mean zero and right-continuous in \(L^1\), is representable.

Proof

(i) Applying Theorem 1.3, we may take in (1.2) \(V=\{1,\dots ,N\}\) when \(N < \infty \), or \(V=\mathbb {N}\) when \(N=\infty \), the Gaussian random measure \(\Lambda \) on \(\mathbb {R}\times V\) determined by \(\Lambda ((a,b]\times \{j\})=B_j(b)-B_j(a)\), and \(\phi (t, (s,j))=f_j(t,s)\). (ii) follows from Proposition 5.3. \(\square \)

Typical infinitely divisible processes are defined by a stochastic integral as in (1.2) with specific \(\Lambda \) and \(\phi \), so they are explicitly representable. Moreover, by Corollary 1.5(ii), every right-continuous in probability symmetric infinitely divisible process is representable. On the other hand, strict representability may be difficult, if not impossible, to attain. For instance, the processes of Examples 1.1 and 1.2 are representable but not strictly representable. The latter fact can easily be deduced from the next theorem, but direct proofs are also possible, see the end of Example 1.1 in Appendix A.

The following result generalizes Stricker’s theorem to infinitely divisible processes. It is a direct consequence of our main result, Theorem 3.1.

Theorem 1.6

Suppose that \(\mathbf {X}=(X_t)_{t\ge 0}\) is a symmetric infinitely divisible process representable by a symmetric infinitely divisible random measure \(\Lambda \). Then \(\mathbf {X}\) is a semimartingale relative to the filtration \(\mathbb {F}^\Lambda \) if and only if

$$\begin{aligned} X_t=X_0+M_t+A_t \end{aligned}$$
(1.3)

where \(\mathbf {M}\) and \(\mathbf A \) are infinitely divisible processes representable by \(\Lambda \) such that \(\mathbf {M}\) is a càdlàg process with independent increments relative to \(\mathbb {F}^{\Lambda }\), \(\mathbf A \) is a predictable càdlàg process of finite variation and \(M_0=A_0=0\). Decomposition (1.3) is unique in the class of processes representable by \(\Lambda \). Furthermore, \(\mathbf {X}\) is a special semimartingale if and only if (1.3) holds and \(\mathbf {M}\) is a martingale with independent increments.

There is a slight difference between (1.3) and (1.1) in the meaning of \(\mathbf {M}\). In (1.3), \(\mathbf {M}\) is a process with independent increments which need not be a (local) martingale. It could be further decomposed into a martingale and a process of finite variation, leading to (1.1), but we would lose the predictability of \(\mathbf A \) and the uniqueness of the decomposition. If \(\mathbf {X}\) is a Gaussian semimartingale relative to \(\mathbb {F}^X\), then by Corollary 1.5(i) and Theorem 1.6, \(\mathbf {X}\) is a special semimartingale and \((\mathbf {M}, \mathbf A ,\mathbf {X})\) are jointly Gaussian, which gives Stricker’s theorem. If, for example, \(\mathbf {X}\) is a symmetric \(\alpha \)-stable process representable by a symmetric \(\alpha \)-stable random measure \(\Lambda \), then \(\mathbf {X}\) is a semimartingale relative to \(\mathbb {F}^{\Lambda }\) if and only if it has a decomposition (1.3) into jointly symmetric \(\alpha \)-stable processes \(\mathbf {M}\) and \(\mathbf A \). If such an \(\mathbf {X}\) is strictly representable by a symmetric \(\alpha \)-stable random measure, then (1.3) gives the decomposition of \(\mathbf {X}\) relative to its natural filtration.

Our proofs rely on techniques different from those used in the Gaussian case, see Remark 3.5. We combine series representations of càdlàg infinitely divisible processes with a detailed analysis of their jumps, which seems to be a new approach in this context. This technique is possible because such series representations converge uniformly a.s. on compacts, as shown in the recent work [8, Theorem 3.1].

Section 2 contains preliminary definitions and facts. Our main result, Theorem 3.1, is stated and proved in Sect. 3, and the proof of Theorem 1.6 is given at the end of that section. Theorem 3.1 reduces the question of when an infinitely divisible process is a semimartingale relative to \(\mathbb {F}^{\Lambda }\) to that of when a certain associated infinitely divisible process is of finite variation.

In Sect. 4 we use Theorem 3.1 to obtain explicit necessary and sufficient conditions for a large class of stationary increment infinitely divisible processes to be semimartingales, see Theorems 4.2 and 4.3 and their subsequent remarks. These results extend [27, Theorem 6.5] from Gaussian to infinitely divisible processes, see Corollary 4.8. We then apply Theorems 4.2 and 4.3 to characterize the semimartingale property of various types of processes, including linear fractional processes, moving averages, and supOU processes. These latter results generalize in a natural way results of [4, 10]. Some supplementary material is deferred to Appendices A and B.

2 Preliminaries

In this section we give further definitions and notation, and recall some facts that will be used throughout this paper. The material on infinitely divisible random measures and the related stochastic integral can be found in [33]. \((\Omega ,\mathcal {F},\mathbb {P})\) will stand for a complete probability space, \((V, \mathcal V)\) will denote a countably generated measurable space, that is, the \(\sigma \)-algebra \(\mathcal V\) is generated by countably many sets, and \(\{V_n \}\subset \mathcal {V}\) will be a fixed sequence such that \(V_n \uparrow V\). Define

$$\begin{aligned} \fancyscript{S}= \big \{A \in \fancyscript{B}(\mathbb {R})\otimes \mathcal {V}: \ A \subset [-n, n] \times V_n \quad \text {for some } n \ge 1 \big \}. \end{aligned}$$

Then \(\fancyscript{S}\) is a \(\delta \)-ring of subsets of \(\mathbb {R}\times V\) such that \(\sigma (\fancyscript{S})=\fancyscript{B}(\mathbb {R})\otimes \mathcal {V}\). For example, \(\fancyscript{S}\) can be the family of bounded Borel subsets of a Euclidean space. A stochastic process \(\Lambda =\{\Lambda (A) \}_{A \in \fancyscript{S}}\) is said to be an (independently scattered) infinitely divisible random measure if

(i):

for any sequence \((A_n)_{n\in \mathbb {N}} \subseteq \fancyscript{S}\) of pairwise disjoint sets, \(\Lambda (A_n)\), \(n=1,2,\dots \) are independent, and if \(\bigcup _{n=1}^{\infty } A_n \in \fancyscript{S}\), then \(\Lambda (\bigcup _{n=1}^{\infty } A_n) = \sum _{n=1}^{\infty } \Lambda (A_n)\) a.s.;

(ii):

\(\Lambda (A)\) has an infinitely divisible distribution for every \(A \in \fancyscript{S}\).

\(\mathbb {F}^{\Lambda }=(\mathcal {F}^{\Lambda }_t)_{t \ge 0}\) will denote the natural filtration of \(\Lambda \), i.e., the least filtration satisfying the usual conditions of right-continuity and completeness such that

$$\begin{aligned} \sigma \Big (\Lambda (A):A\in \fancyscript{S},\, A\subseteq (-\infty ,t]\times V\Big )\subseteq \mathcal {F}^\Lambda _t,\quad t\ge 0. \end{aligned}$$

We will now recall deterministic characteristics of \(\Lambda \) that will play a crucial role in this paper. From [33, Proposition 2.4], there exist measurable functions \( b :\mathbb {R}\times V \rightarrow \mathbb {R}\) and \( \sigma :\mathbb {R}\times V \rightarrow \mathbb {R}_+\), a \(\sigma \)-finite measure \(\kappa \) on \(\mathbb {R}\times V\), and a measurable family \(\{\rho _{u}\}_{u\in \mathbb {R}\times V}\) of Lévy measures on \(\mathbb {R}\) such that for every \(A \in \fancyscript{S}\) and \(\theta \in \mathbb {R}\)

$$\begin{aligned} \log \mathbb {E}e^{i\theta \Lambda (A)}= \int _{A} \Bigg [ i\theta b(u)-\frac{1}{2}\theta ^2 \sigma ^2(u) +\int _{\mathbb {R}} \big (e^{i\theta x}-1-i\theta [[ x]]\big ) \, \rho _{u}(dx) \Bigg ] \,\kappa (d u). \end{aligned}$$
(2.1)

Here \(u = (s,v) \in \mathbb {R}\times V\) and

$$\begin{aligned}{}[[ x]]= \frac{x}{|x| \vee 1} = {\left\{ \begin{array}{ll} x &{} \text {if } |x|<1\, , \\ \mathrm {sgn}(x) &{} \text {otherwise} \end{array}\right. } \end{aligned}$$
(2.2)

is a truncation function. Given \(b\), \(\sigma ^2\), \(\kappa \), and \(\{\rho _{u}\}_{u\in \mathbb {R}\times V}\) as above, Kolmogorov’s extension theorem yields an independently scattered random measure \(\Lambda \) satisfying (2.1). According to [33, Theorem 2.7], the stochastic integral \(\int _{\mathbb {R}\times V} f(u) \, \Lambda (du)\) of a measurable deterministic function \( f :\mathbb {R}\times V \rightarrow \mathbb {R}\) exists if and only if

(a):

\(\int _{\mathbb {R}\times V} |B(f(u), u)| \, \kappa (du) < \infty \),

(b):

\(\int _{\mathbb {R}\times V} K(f(u), u) \, \kappa (du) < \infty \),

where

$$\begin{aligned} B(x, u)&= x b(u) + \int _{\mathbb {R}} \big ( [[ xy]]- x [[ y]]\big ) \, \rho _{u}(dy)\quad \text {and} \nonumber \\ K(x, u)&= x^2 \sigma ^2(u) + \int _{\mathbb {R}} [[ xy]]^{2} \, \rho _{u}(dy), \quad x \in \mathbb {R}, \ u \in \mathbb {R}\times V. \end{aligned}$$
(2.3)

When (a)–(b) hold, then \(\int _{\mathbb {R}\times V} f(u)\,\Lambda (du)\) is an infinitely divisible random variable. Moreover, if \(f=f(t, \cdot )\) depends on a parameter \(t\in T\), then \(\big (\int _{\mathbb {R}\times V} f(t,u)\,\Lambda (du)\big )_{t\in T}\) is an infinitely divisible process.
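For concreteness, the truncation function (2.2) and the functional \(K\) of (2.3) are elementary to evaluate directly; the sketch below (plain Python; the point mass \(\rho _u=\delta _2\) and \(\sigma ^2(u)=1\) are arbitrary toy choices, not from the paper) illustrates both.

```python
def truncate(x: float) -> float:
    """The truncation function [[x]] = x / (|x| v 1) of (2.2)."""
    return x / max(abs(x), 1.0)

def K(x: float, sigma2: float = 1.0, jump: float = 2.0) -> float:
    """K(x, u) of (2.3) when rho_u is a unit point mass at `jump`."""
    return x * x * sigma2 + truncate(x * jump) ** 2

# [[x]] agrees with x on [-1, 1] and saturates at sgn(x) outside
print(truncate(0.5), truncate(3.0), truncate(-2.0))  # 0.5 1.0 -1.0
print(K(0.25))  # 0.25^2 + [[0.5]]^2 = 0.0625 + 0.25 = 0.3125
```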

We will also use the following definitions and notation. For a càdlàg function \( g :\mathbb {R}_+ \rightarrow \mathbb {R}\), the jump size of \(g\) at \(t\) is defined as \(\Delta g(t)=\lim _{s\uparrow t, s<t} (g(t)-g(s))\) for \(t>0\) and \(\Delta g(0)=0\). If \( X :\Omega \rightarrow [0,\infty ]\) is a measurable function, then \([X]=\{(\omega ,X(\omega )):\omega \in \Omega ,\, X(\omega )<\infty \}\) denotes the graph of \(X\). (Notice that [22] write \([[ X]]\) for the graph of \(X\).) A random set \(A\subseteq \Omega \times \mathbb {R}_+\) is said to be evanescent if the set \(\{\omega \in \Omega :\exists \, t\in \mathbb {R}_+ \text { such that } (\omega ,t)\in A\}\) is a \(\mathbb {P}\)-null set. For two random subsets \(A\) and \(B\) of \(\Omega \times \mathbb {R}_+\), we say that \(A\subseteq B\) up to evanescence if \(A{\setminus } B\) is evanescent. Two processes \(\mathbf {X}=(X_t)_{t\ge 0}\) and \(\mathbf {Y}=(Y_t)_{t\ge 0}\) are said to be indistinguishable if the set \(\{(\omega , t):X_t(\omega ) \ne Y_t(\omega ) \}\) is evanescent. We will write \(\mathbf {X}=\mathbf {Y}\) when \(\mathbf {X}\) and \(\mathbf {Y}\) are indistinguishable.

3 Infinitely divisible semimartingales

In this section \(\mathbf {X}=(X_t)_{t\ge 0}\) stands for a càdlàg infinitely divisible process which is representable by some infinitely divisible random measure \(\Lambda \), i.e., a process of the form

$$\begin{aligned} X_t=\int _{(-\infty ,t]\times V} \phi (t, u)\,\Lambda (du), \end{aligned}$$
(3.1)

where \( \phi :\mathbb {R}_+\times (\mathbb {R}\times V) \rightarrow \mathbb {R}\) is a measurable deterministic function and \(\Lambda \) is specified by (2.1)–(2.2). We assume that for every \(u=(s,v) \in \mathbb {R}\times V\),   \(\phi (\cdot , u)\) is càdlàg, cf. Remark 3.2. Let \(B\) be given by (2.3). We further assume that

$$\begin{aligned} \int _{(0,t] \times V} \big | B\big (\phi (s, (s, v)), (s,v)\big ) \big | \, \kappa (ds, dv) < \infty \quad \text {for every } t>0. \end{aligned}$$
(3.2)

The following is the main result of this section.

Theorem 3.1

Under the above assumptions \(\mathbf {X}\) is a semimartingale relative to the filtration \(\mathbb {F}^{\Lambda }=(\mathcal {F}^{\Lambda }_t)_{t \ge 0}\) if and only if

$$\begin{aligned} X_t = X_0 + M_t + A_t, \quad t \ge 0, \end{aligned}$$
(3.3)

where \(\mathbf {M}= ( M_t )_{t\ge 0}\) is a semimartingale with independent increments given by the stochastic integral

$$\begin{aligned} M_t = \int _{(0,t] \times V} \phi (s,(s,v))\,\Lambda (ds,dv), \quad t\ge 0, \end{aligned}$$
(3.4)

and \(\mathbf {A}= ( A_t )_{t\ge 0}\) is a predictable càdlàg process of finite variation of the form

$$\begin{aligned} A_t = \int _{(-\infty ,t] \times V} \big [\phi (t,(s,v)) - \phi (s_{+}, (s,v))\big ] \,\Lambda (ds,dv). \end{aligned}$$
(3.5)

Decomposition (3.3) is unique in the following sense: if \(\mathbf {X}=X_0+\mathbf {M}'+\mathbf A '\), where \(\mathbf {M}'\) and \(\mathbf A '\) are processes representable by \(\Lambda \) such that \(\mathbf {M}'\) is a semimartingale with independent increments relative to \(\mathbb {F}^{\Lambda }\) and \(\mathbf A '\) is a predictable càdlàg process of finite variation, then \(\mathbf {M}'=\mathbf {M}+g\) and \(\mathbf A '=\mathbf A -g\) for some càdlàg deterministic function \(g\) of finite variation, where \(\mathbf {M}\) and \(\mathbf A \) are given by (3.4) and (3.5).

\(\mathbf X\) is a special semimartingale if and only if (3.3)–(3.5) hold and \({\mathbb {E}|M_t|<\infty }\) for all \(t>0\). In this case, \((M_t-\mathbb {E}M_t)_{t\ge 0}\) is a martingale and

$$\begin{aligned} X_t= X_0+ (M_t -\mathbb {E}M_t)+ (A_t+\mathbb {E}M_t),\quad t\ge 0 \end{aligned}$$

is the canonical decomposition of  \(\mathbf {X}\).

In the next section we use Theorem 3.1 to characterize the semimartingale property of various infinitely divisible processes with stationary increments. Below we give the proofs of Theorems 3.1 and 1.6, but first we make two remarks and consider an example.

Remark 3.2

If \(\mathbf {X}\) given by (3.1) is a semimartingale relative to \(\mathbb {F}^\Lambda \), and \(\Lambda \) satisfies the non-deterministic condition

$$\begin{aligned} \kappa \big (u\in \mathbb {R}\times V\! : \sigma ^2(u)=0,\, \rho _u(\mathbb {R})=0 \big )=0, \end{aligned}$$
(3.6)

then \(\phi \) can be chosen such that \(\phi (\cdot , u)\) is càdlàg for every \(u=(s,v) \in \mathbb {R}\times V\). The proof of this statement is given in Appendix A.

Remark 3.3

Condition (3.2) is always satisfied when \(\Lambda \) is symmetric. Indeed, in this case \(B\equiv 0\).

Example 3.4

Consider the setting of Theorem 3.1 and suppose that \(\Lambda \) is an \(\alpha \)-stable random measure with \(\alpha \in (0,1)\). Then \(\mathbf X\) is a semimartingale with respect to \(\mathbb {F}^\Lambda \) if and only if it is of finite variation. This follows from Theorem 3.1 because the process \(\mathbf M\) given by (3.4) is of finite variation. Indeed, the Lévy–Itô decomposition of \(\mathbf {M}\) [22, II, 2.34] combined with [22, II, 1.28] shows that \(\mathbf {M}\) is of finite variation.
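The mechanism behind this example can be checked numerically. In a LePage-type series (cf. (3.7) in Step 1 of the proof below), the absolute jump sizes of a standard \(\alpha \)-stable series behave like \(\Gamma _j^{-1/\alpha }\), which is a.s. summable precisely when \(\alpha <1\); summable jump sizes are what makes \(\mathbf M\) of finite variation. A sketch assuming numpy (the value \(\alpha =1/2\) and the truncation level are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
alpha = 0.5  # any alpha in (0, 1)

# Gamma_j: arrival times of a unit-rate Poisson process
gammas = np.cumsum(rng.exponential(1.0, 100_000))

# Absolute jump sizes Gamma_j^{-1/alpha}; since Gamma_j ~ j by the LLN,
# they decay like j^{-1/alpha} = j^{-2} here, hence are summable
jump_sizes = gammas ** (-1.0 / alpha)
print(jump_sizes.sum(), jump_sizes[1000:].sum())
```

For \(\alpha \ge 1\) the same tail behaves like \(j^{-1/\alpha }\) with \(1/\alpha \le 1\), so the sum of jump sizes diverges and finite variation fails in general.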

Proof of Theorem 3.1

The sufficiency is obvious. To prove necessity, we must show that a semimartingale \(\mathbf {X}\) has a decomposition (3.3) in which the processes \(\mathbf M\) and \(\mathbf A\) have the stated properties. We start with the case where \(\Lambda \) has no Gaussian component, i.e., \(\sigma ^2=0\). We may and will assume that \(\phi (0,u)=0\) for all \(u\), which corresponds to \(X_0=0\) a.s., and that \(\phi (t,(s,v))=0\) for \(s>t\) and \(v\in V\).

Case 1. \(\Lambda \) has no Gaussian component: We divide the proof into the following six steps.

Step 1. Let \(X^0_t = X_t - \beta (t)\), with

$$\begin{aligned} \beta (t) = \int _{U} B\big (\phi (t, u), u\big ) \, \kappa (d u),\quad U=\mathbb {R}\times V. \end{aligned}$$

We will give the series representation for \(\mathbf {X}^0\) that will be crucial for our considerations. To this end, define for \(s\ne 0\) and \(u \in U=\mathbb {R}\times V\)

$$\begin{aligned} R(s, u) = {\left\{ \begin{array}{ll} \inf \{ x>0:\rho _u(x,\infty ) \le s\} &{} \text {if } s>0, \\ \sup \{ x<0:\rho _u(-\infty , x) \le -s\} &{} \text {if } s<0. \end{array}\right. } \end{aligned}$$
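The function \(R(s,u)\) is the generalized inverse of the tails of \(\rho _u\). As a sanity check, for the one-sided tail \(\rho _u(x,\infty )=x^{-\alpha }\) the inverse has the closed form \(R(s,u)=s^{-1/\alpha }\) for \(s>0\); the sketch below (assuming numpy; the concrete tail and \(\alpha =1.5\) are arbitrary choices, not from the paper) verifies this against a direct bisection computation of the infimum.

```python
import numpy as np

alpha = 1.5  # arbitrary illustrative index

def tail(x: float) -> float:
    """Upper tail rho_u(x, inf) = x^(-alpha) of a toy Levy measure."""
    return x ** (-alpha)

def R(s: float, lo: float = 1e-12, hi: float = 1e12) -> float:
    """Generalized inverse inf{x > 0 : rho_u(x, inf) <= s}, by bisection."""
    for _ in range(200):
        mid = np.sqrt(lo * hi)  # bisect on a logarithmic scale
        if tail(mid) <= s:
            hi = mid
        else:
            lo = mid
    return hi

for s in (0.1, 1.0, 7.5):
    closed_form = s ** (-1.0 / alpha)
    assert abs(R(s) - closed_form) < 1e-6 * closed_form
```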

Choose a probability measure \(\tilde{\kappa }\) on \(U\) equivalent to \(\kappa \), and let \(h(u)= \frac{1}{2}(d \tilde{\kappa }/d\kappa )(u)\). Extending our probability space if necessary, [36, Proposition 2 and Theorem 4.1] shows that there exist three independent sequences \((\Gamma _i)_{i\in \mathbb {N}}\), \((\epsilon _i)_{i\in \mathbb {N}}\), and \((T_i)_{i\in \mathbb {N}}\), where the \(\Gamma _i\) are partial sums of i.i.d. standard exponential random variables, the \(\epsilon _i\) are i.i.d. symmetric Bernoulli random variables, and the \(T_i=(T_i^1, T_i^2)\) are i.i.d. random variables in \(U\) with common distribution \(\tilde{\kappa }\), such that for every \(A\in \fancyscript{S}\),

$$\begin{aligned} \Lambda (A)= \nu _0(A)+ \sum _{j=1}^\infty \big [R_j\mathbf {1}_A(T_j)-\nu _j(A)\big ] \quad \text {a.s.} \end{aligned}$$
(3.7)

where \(R_j=R(\epsilon _j\Gamma _j h(T_j),T_j)\), \(\nu _0(A)= \int _A b(u) \, \kappa (d u)\), and for \(j\ge 1\)

$$\begin{aligned} \nu _j(A) = \int _{\Gamma _{j-1}}^{\Gamma _j} \mathbb {E}[[ R(\epsilon _1 r h(T_1),T_1)]]\mathbf {1}_A(T_1) \, dr. \end{aligned}$$

It follows by the same argument that

$$\begin{aligned} X^0_t = \sum _{j=1}^{\infty } \big [ R_j \phi (t, T_j) - \alpha _j(t) \big ] \quad \text {a.s.}, \end{aligned}$$

where

$$\begin{aligned} \alpha _j(t) = \int _{\Gamma _{j-1}}^{\Gamma _j} \mathbb {E}[[ R(\epsilon _1 r h(T_1),T_1) \phi (t, T_1)]]\, dr. \end{aligned}$$
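In the symmetric case the centerings \(\alpha _j(t)\) vanish (since \([[ \cdot ]]\) is odd and \(\epsilon _1\) is symmetric), and the series becomes a plain signed LePage series. A minimal simulation sketch of such a series for a fixed \(t\) with \(\phi \equiv 1\) (assuming numpy; \(\alpha =1.2\), the truncation level and the sample size are arbitrary choices), illustrating that the truncated sum is exactly symmetric in distribution:

```python
import numpy as np

rng = np.random.default_rng(3)
alpha = 1.2       # arbitrary index in (0, 2)
n_terms = 500     # truncation level of the series
n_samples = 10_000

# Signed LePage series: X = sum_j eps_j * Gamma_j^{-1/alpha}; in the
# symmetric case the centerings are zero and the series converges a.s.
# by sign cancellation (the squares Gamma_j^{-2/alpha} are summable)
eps = rng.choice([-1.0, 1.0], size=(n_samples, n_terms))
gammas = np.cumsum(rng.exponential(1.0, size=(n_samples, n_terms)), axis=1)
X = (eps * gammas ** (-1.0 / alpha)).sum(axis=1)

# The limit is symmetric alpha-stable; by symmetry the empirical median
# is near zero and the empirical quantiles are nearly antisymmetric
q10, q50, q90 = np.quantile(X, [0.1, 0.5, 0.9])
print(q50, q10 + q90)
```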

Step 2. Set \(J=\{t\ge 0:\kappa (\{t\}\times V)>0\}\),

$$\begin{aligned} T^{1,c}_i=T_i^1\mathbf {1}_{\{T_i^1\in \mathbb {R}_+{\setminus } J\}}\quad \text {and}\quad T^{1,d}_i=T_i^1\mathbf {1}_{\{T_i^1\in J\}}. \end{aligned}$$

Since \(\kappa \) is a \(\sigma \)-finite measure, the set \(J\) is countable. Furthermore, \(\mathbb {P}(T^{1,c}_i=x)=0\) for all \(x>0\) and \(T^{1,d}_i\) is discrete. We will show that for every \(i \in \mathbb {N}\)

$$\begin{aligned} \Delta X_{T_i^{1,c}} = R_i \phi \left( T_i^{1,c}, T_i\right) \quad \text {a.s. } \end{aligned}$$
(3.8)

Since \(\mathbf {X}\) is càdlàg, the series

$$\begin{aligned} X_t^0=\sum _{j=1}^{\infty } \big [ R_j \phi (t, T_j) - \alpha _j(t) \big ] \end{aligned}$$

converges uniformly for \(t\) in compact intervals a.s., cf. [8, Corollary 3.2]. Moreover, \(\beta \) is càdlàg, see [8, Lemma 3.5], and by Lebesgue’s dominated convergence theorem it follows that \(\alpha _j\), for \(j\in \mathbb {N}\), are càdlàg as well. Therefore, with probability one,

$$\begin{aligned} \Delta X_t = \Delta \beta (t) + \sum _{j=1}^{\infty } \big [ R_j \Delta \phi (t, T_j) - \Delta \alpha _j(t) \big ] \quad \text {for all} \ t>0. \end{aligned}$$

Hence, for every \(i \in \mathbb {N}\) almost surely

$$\begin{aligned} \Delta X_{T_i^{1,c}} = \Delta \beta \left( T_i^{1,c}\right) + \sum _{j=1}^{\infty } \Bigg [ R_j \Delta \phi \Big (T_i^{1,c}, T_j\Big ) - \Delta \alpha _j\Big (T_{i}^{1,c}\Big ) \Bigg ]. \end{aligned}$$
(3.9)

Since \(\beta \) is càdlàg, it has at most countably many discontinuities; as \(\mathbb {P}(T^{1,c}_i=x)=0\) for all \(x>0\), with probability one \(T_i^{1,c}\) is a continuity point of \(\beta \). Hence \(\Delta \beta (T_i^{1,c})=0\) a.s. Since \((\Gamma _j)_{j\in \mathbb {N}}\) are independent of \(T_i^{1,c}\), the argument used for \(\beta \) also yields \(\Delta \alpha _j(T_i^{1,c})=0\) a.s. By (3.9) this proves

$$\begin{aligned} \Delta X_{T_i^{1,c}} = \sum _{j=1}^{\infty } R_j \Delta \phi \Big (T_{i}^{1,c}, T_j\Big ). \end{aligned}$$
(3.10)

Furthermore, for \(i\ne j\) we get

$$\begin{aligned} \mathbb {P}\Bigg ( \Delta \phi \Big (T_i^{1,c}, T_j\Big ) \ne 0\Bigg )&= \int _{U} \mathbb {P}\Bigg ( \Delta \phi \Big (T_i^{1,c}, T_j\Big ) \ne 0\, |\, T_j= u\Bigg ) \, \tilde{\kappa }(d u) \\&= \int _{U} \mathbb {P}\Bigg ( \Delta \phi \Big (T_i^{1,c}, u\Big ) \ne 0 \Bigg ) \, \tilde{\kappa }(d u) = 0 \end{aligned}$$

again because \(\phi (\cdot , u)\) has only countably many jumps and the distribution of \(T_i^{1,c}\) is continuous on \((0,\infty )\). If \(j=i\) then

$$\begin{aligned} \Delta \phi \Big (T_i^{1,c}, T_i\Big )&= \lim _{h \downarrow 0,\, h>0} \Bigg [\phi \Big (T_i^{1,c}, \Big (T_i^1, T_i^2\Big )\Big ) - \phi \Big (T_i^{1,c}-h, \Big (T_i^1, T_i^2\Big )\Big ) \Bigg ] \\&= \phi \Big (T_i^{1,c}, T_i\Big ) \end{aligned}$$

as \(\phi (t,(s,v))=0\) whenever \(t<s\) and \(v\in V\). Substituting these facts into (3.10) yields (3.8).

Step 3. Next we will show that \(\mathbf {M}\), defined in (3.4), is a well-defined càdlàg process satisfying

$$\begin{aligned} \Delta M_{T^{1,c}_i} =\Delta X_{T^{1,c}_i}\quad \text {a.s.} \quad \text {for all }i\in \mathbb {N}. \end{aligned}$$
(3.11)

Since any semimartingale has finite quadratic variation, we have in particular

$$\begin{aligned} \sum _{0<s\le t} \big (\Delta X_s\big )^2<\infty \quad \text {a.s.} \end{aligned}$$

Let \(\mathbf {X}'\) be an independent copy of \(\mathbf {X}\) and set \(\tilde{\mathbf {X}}=\mathbf {X}-\mathbf {X}'\). Let \(\bar{R}_j= R(\epsilon _j \Gamma _j h(T_j)/2,T_j)\) and let \((\xi _j)_{j\in \mathbb {N}}\) be i.i.d. symmetric Bernoulli random variables defined on a probability space \((\Omega ',\mathcal {F}',\mathbb {P}')\). By [35, Theorem 2.4] it follows that for all \(t\ge 0\) the series

$$\begin{aligned} \bar{X}_t=\sum _{j=1}^\infty \xi _j \bar{R}_j\phi (t,T_j) \end{aligned}$$

defined on \(\Omega \times \Omega '\) converges a.s. under \(\mathbb {P}\otimes \mathbb {P}'\), and \(\bar{\mathbf {X}}\) equals \(\tilde{\mathbf {X}}\) in finite-dimensional distributions. Thus \(\bar{\mathbf {X}}\) has a càdlàg modification satisfying

$$\begin{aligned} \sum _{s\in (0,t]} \big (\Delta \bar{X}_s)^2<\infty \quad \mathbb {P}\otimes \mathbb {P}'\text {-a.s.} \end{aligned}$$
(3.12)

By [8, Corollary 3.2], we have \(\mathbb {P}\otimes \mathbb {P}'\)-a.s. for all \(t\ge 0\) that

$$\begin{aligned} \Delta \bar{X}_t=\sum _{j=1}^\infty \xi _j \bar{R}_j\Delta \phi (t,T_j). \end{aligned}$$
(3.13)

By (3.12) and (3.13) we have for \(\mathbb {P}\)-a.a. \(\omega \in \Omega \) that

$$\begin{aligned} \sum _{s\in A} Y_s^2< \infty \quad \mathbb {P}'\text {-a.s.,}\quad&\text {where}\quad Y_s=\sum _{j=1}^\infty a(s,j) \xi _j,\\ a(s,j)= \bar{R}_j(\omega )\Delta \phi (s,T_j(\omega ))\quad&\text {and}\quad A=\mathop {\cup }\limits _{j\in \mathbb {N}}\{s\in (0,t]:\Delta \phi (s,T_j(\omega ))\ne 0\}. \end{aligned}$$

For a fixed \(\omega \in \Omega \) as above, \(A\) is a countable deterministic set and \(\mathbf{Y}=(Y_s)_{s\in A}\) is a Bernoulli/Rademacher random element in \(\ell ^2(A)\) defined on \((\Omega ',\mathcal {F}',\mathbb {P}')\). By [28, Theorem 4.8], \(\mathbb {E}'[\Vert Y\Vert _{\ell ^2(A)}^2]<\infty \) which implies that

$$\begin{aligned} \infty >\mathbb {E}'\left[ \sum _{s\in A} Y_s^2\right] =\sum _{s\in A} \mathbb {E}'[ Y_s^2]=\sum _{s\in A} \sum _{j=1}^\infty a(s,j)^2= \sum _{j=1}^\infty \left( \sum _{s\in A} a(s,j)^2\right) . \end{aligned}$$
(3.14)

Equation (3.14) implies that \(\mathbb {P}\)-a.s.

$$\begin{aligned} \infty >\sum _{i:\,T_i^1\in (0,t]} \left| \bar{R}_i\Delta \phi \left( T^1_i,T_i\right) \right| ^2=\sum _{i:\,T_i^1\in (0,t]} \left| \bar{R}_i\phi \left( T^1_i,T_i\right) \right| ^2. \end{aligned}$$

Put for \(t,r \ge 0\) and \((\epsilon ,s,v) \in \{-1,1\} \times \mathbb {R}\times V\)

$$\begin{aligned} H(t; r, (\epsilon ,s,v)) = R\big (\epsilon r h(s,v)/2, (s,v)\big ) \phi (s,(s,v)) \mathbf {1}_{\{0< s \le t\}}. \end{aligned}$$

The above bound shows that for each \(t\ge 0\)

$$\begin{aligned} \sum _{i=1}^{\infty } \left| H\left( t; \Gamma _i,\left( \epsilon _i,T_i^1,T_i^2\right) \right) \right| ^2 < \infty \quad \text {a.s.} \end{aligned}$$

This implies, by [36, Theorem 4.1], that the following limit is finite:

$$\begin{aligned}&\lim _{n\rightarrow \infty } \int _0^n \mathbb {E}\left[ \left[ H\left( t; r, \left( \epsilon _1,T_1^1,T_1^2\right) \right) ^2 \right] \right] \, dr \\&\quad = \int _0^{\infty } \mathbb {E}\left[ \left[ H\left( t; r, \left( \epsilon _1,T_1^1,T_1^2\right) \right) ^2 \right] \right] \, dr. \end{aligned}$$

Evaluating this limit we get

$$\begin{aligned} \infty&> \int _0^{\infty } \mathbb {E}\left[ \left[ R(\epsilon _1 r h(T_1)/2, T_1) \phi \left( T_1^1, T_1\right) \mathbf {1}_{\{0<T_1^1 \le t\}}\right] \right] ^2 \, dr \\&= \int _0^{\infty } \int _{\mathbb {R}\times V} \mathbb {E}\left[ \left[ R(\epsilon _1 r h(s,v)/2, (s,v)) \phi (s, (s,v)) \mathbf {1}_{\{0< s \le t\}}\right] \right] ^2 \, \tilde{\kappa }(ds,dv) \,dr \\&= 4 \int _0^{\infty } \int _{\mathbb {R}\times V} \mathbb {E}\left[ \left[ R(\epsilon _1 z, (s,v)) \phi (s, (s,v)) \mathbf {1}_{\{0< s \le t\}}\right] \right] ^2 \, \kappa (ds,dv) \,dz \\&= 2\int _{\mathbb {R}\times V} \int _{\mathbb {R}} \left[ \left[ x \phi (s, (s,v)) \mathbf {1}_{\{0< s \le t\}}\right] \right] ^2 \, \rho _{(s,v)}(dx) \,\kappa (ds,dv) \\&= 2\int _{(0,t] \times V} \int _{\mathbb {R}} \min \{ |x \phi (s, (s,v))|^2, 1\} \, \rho _{(s,v)}(dx)\, \kappa (ds,dv). \end{aligned}$$

Finiteness of this integral in conjunction with (3.2) yields the existence of the stochastic integral

$$\begin{aligned} M_t = \int _{(0,t] \times V} \phi (s, (s,v)) \, \Lambda (ds,dv) \end{aligned}$$

by (a) and (b) on page 7. The fact that \(\mathbf {M}\) has independent increments is obvious since \(\Lambda \) is independently scattered. Furthermore, \(\mathbf {M}\) is càdlàg in probability by the continuity properties of stochastic integrals, and by Lemma 6.2 it has a càdlàg modification which will also be denoted by \(\mathbf {M}\).

Let \((\zeta _t)_{t\ge 0}\) be the shift component of \(\mathbf {M}\). By (3.2) and the fact that

$$\begin{aligned} \zeta _t=\int _{(0,t]\times V} B\big (\phi (s,s,v),(s,v)\big )\,\kappa (ds,dv),\quad t\ge 0, \end{aligned}$$

see [33, Theorem 2.7], we deduce that \((\zeta _t)_{t\ge 0}\) is of finite variation. Therefore the independent increments of \(\mathbf {M}\) and [22, II, 5.11] show that \(\mathbf {M}\) is a semimartingale. For \(t\ge 0\) we can write \(M_t\) as a series using the series representation (3.7) of \(\Lambda \). It follows that

$$\begin{aligned} M_t =\zeta _t+ \sum _{j=1}^{\infty } \Bigg [ R_j \phi \Big (T_j^1, T_j\Big ) \mathbf {1}_{\{0<T_j^1 \le t\}} - \gamma _j(t)\Bigg ] \end{aligned}$$

where

$$\begin{aligned} \gamma _j(t)=\int _{\Gamma _{j-1}}^{\Gamma _j} \mathbb {E}\left[ \left[ R(\epsilon _1 r h(T_1),T_1) \phi \left( T^1_1,T_1\right) \mathbf {1}_{\{ 0<T^1_1\le t\}}\right] \right] \, dr. \end{aligned}$$

By arguments as above we have \(\Delta M_{T^{1,c}_i}= R_i\phi (T^{1,c}_i,T_i)\) a.s. and hence by (3.8) we obtain (3.11).

Step 4. In the following we will show the existence of a sequence \((\tau _k)_{k\in \mathbb {N}}\) of totally inaccessible stopping times such that all local martingales \(\mathbf {Z}=(Z_t)_{t\ge 0}\) with respect to \(\mathbb {F}^\Lambda \) are purely discontinuous and, up to evanescence,

$$\begin{aligned} \{\Delta \mathbf{Z}\ne 0\}\subseteq (\Omega \times J)\cup \left( \mathop {\cup }\limits _{k\in \mathbb {N}}[\tau _k]\right) ,\quad \mathop {\cup }\limits _{k\in \mathbb {N}}[\tau _k] \subseteq \mathop {\cup }\limits _{k\in \mathbb {N}}\left[ T^{1,c}_k\right] . \end{aligned}$$
(3.15)

Recall that \(\{\Delta \mathbf{Z}\ne 0\}\) denotes the random set \(\{(\omega , t)\in \Omega \times \mathbb {R}_+:\Delta Z_t(\omega )\ne 0\}\) and \(J\) is the countable subset of \(\mathbb {R}_+\) defined in Step 2. Set \(\mathcal {V}_0=\{A\in \mathcal {V}:A\subseteq V_k\text { for some }k\in \mathbb {N}\}\), where \((V_k)_{k\in \mathbb {N}}\) is given in Sect. 2. To show (3.15) choose a sequence \((B_k)_{k\ge 1}\subseteq \mathcal {V}_0\) of disjoint sets which generates \(\mathcal V\) and for all \(k\in \mathbb {N}\) let \(\mathbf{U}^k=(U^k_t)_{t\ge 0}\) be given by

$$\begin{aligned} U^k_t=\Lambda ((0,t]\times B_k). \end{aligned}$$

For \(k\in \mathbb {N}\), \(\mathbf{U}^k\) is an infinitely divisible process with independent increments which is càdlàg in probability, and therefore it has a càdlàg modification by Lemma 6.2 (which will also be denoted \(\mathbf{U}^k\)). Hence \(\mathbf{U}=\{(U_t^k)_{k\in \mathbb {N}}:t\in \mathbb {R}_+\}\) is a càdlàg \(\mathbb {R}^\mathbb {N}\)-valued process with no Gaussian component. Let \(E=\mathbb {R}^\mathbb {N}{\setminus }\{0\}\). Then \(E\) is a Blackwell space and \(\mu \) defined by

$$\begin{aligned} \mu (A)=\sharp \big \{t\in \mathbb {R}_+:(t,\Delta U_t)\in A\big \},\quad A\in \fancyscript{B}(\mathbb {R}_+\times E) \end{aligned}$$

is an extended Poisson random measure on \(\mathbb {R}_+\times E\), in the sense of [22, II, 1.20]. Let \(\nu \) be the intensity measure of \(\mu \). We have that \(\mathbb {F}^\Lambda \) is the least filtration for which \(\mu \) is an optional random measure. Thus according to [22, III, 1.14(b) and the remark after III, 4.35], \(\mu \) has the martingale representation property, that is for all real-valued local martingales \(\mathbf{Z}=(Z_t)_{t\ge 0}\) with respect to \(\mathbb {F}^\Lambda \) there exists a predictable function \(\phi \) from \(\Omega \times \mathbb {R}_+\times E\) into \(\mathbb {R}\) such that

$$\begin{aligned} Z_t = \phi *(\mu -\nu )_t,\quad t\ge 0 \end{aligned}$$
(3.16)

(in (3.16) the symbol \(*\) denotes integration with respect to \(\mu -\nu \) as in [22, II, 1.]). Note that \(\{t\ge 0:\nu (\{t\}\times E)>0\}\subseteq J\). By definition, see [22, II, 1.27(b)], \(\mathbf Z\) is a purely discontinuous local martingale and \(\Delta Z_t(\omega )=\phi (\omega ,t,\Delta U_t(\omega ))\mathbf {1}_{\{\Delta U_t(\omega )\ne 0\}}\) for \((\omega ,t)\in \Omega \times J^c\) up to evanescence, which shows that

$$\begin{aligned} \{ \Delta \mathbf{Z}\ne 0\}\subseteq (\Omega \times J)\cup \{ \Delta \mathbf{U}\ne 0\}\quad \text {up to evanescence.} \end{aligned}$$

Lemma 6.1 and a diagonal argument show the existence of a sequence of totally inaccessible stopping times \((\tau _k)_{k\in \mathbb {N}}\) such that, up to evanescence,

$$\begin{aligned} \{\Delta \mathbf{U}\ne 0\}= (\Omega \times J)\cup (\cup _{k\in \mathbb {N}} [\tau _k]). \end{aligned}$$

Arguing as in Step 2 with \(\phi (t,(s,v))=\mathbf {1}_{(0, t]}(s)\mathbf {1}_{ B_k}(v)\) shows that with probability one

$$\begin{aligned} \Delta U^k_t = \Delta \xi (t) + \sum _{j=1}^{\infty } \Bigg [ R_j \mathbf {1}_{\{t=T^1_j\}} \mathbf {1}_{\{T^2_j\in B_k\}}- \Delta \gamma _j(t) \Bigg ] \quad \text {for all} \ t>0 \end{aligned}$$

where

$$\begin{aligned} \xi (t)&= \int _{\mathbb {R}\times V} \mathbf {1}_{\{0\le s\le t\}}\mathbf {1}_{\{v\in B_k\}}b(s,v)\, \kappa (ds,dv),\\ \gamma _j(t)&= \int _{\Gamma _{j-1}}^{\Gamma _j} \mathbb {E}\left[ \left[ R(\epsilon _1 r h(T_1),T_1) \mathbf {1}_{\{ T^1_j\le t\}}\mathbf {1}_{\{T^2_j\in B_k\}}\right] \right] \, dr. \end{aligned}$$

The functions \(\xi \) and \(\gamma _j\), for \(j\in \mathbb {N}\), are continuous on \(\mathbb {R}_+{\setminus } J\) and hence with probability one

$$\begin{aligned} \Delta U^k_t= \sum _{j=1}^{\infty } R_j \mathbf {1}_{\{t=T^1_j\}} \mathbf {1}_{\{T^2_j\in B_k\}} \quad \text {for all }t\in \mathbb {R}_+{\setminus } J. \end{aligned}$$
(3.17)

Since each \(\tau _k\) is totally inaccessible and \(J\) is countable, we have \(\mathbb {P}(\tau _k\in J)=0\). Hence by (3.17) we conclude that

$$\begin{aligned} \cup _{k\in \mathbb {N}} [\tau _k]\subseteq \cup _{k\in \mathbb {N}} \left[ T^{1,c}_k\right] \quad \text {up to evanescence.} \end{aligned}$$

This completes the proof of Step 4.

Step 5. Fix \(r\in \mathbb {N}\) and let \(\mathbf {X}'=(X'_t)_{t\ge 0}\) be given by

$$\begin{aligned} X' _t=X_t-\sum _{s\in (0,t]} \Delta X_s\mathbf {1}_{\{|\Delta X_s|>r\}}. \end{aligned}$$

We will show that \(\mathbf {X}'\) is a special semimartingale with martingale component \(\mathbf {M}'=(M'_t)_{t\ge 0}\) given by

$$\begin{aligned} M_t'=\tilde{M}_t-\mathbb {E}\tilde{M}_t \quad \text {where}\; \tilde{M}_t=M_t-\sum _{s\in (0,t]}\Delta M_s \mathbf {1}_{\{|\Delta M_s|>r\}}. \end{aligned}$$

Recall that \(\mathbf M\) is given by (3.4). By [22, II, 5.10(c)] it follows that \(\mathbf {M}'\) is a well-defined martingale. The process \(\mathbf {X}'\) is a special semimartingale since its jumps are bounded by \(r\) in absolute value; denote by \(\mathbf W\) and \(\mathbf N\) the finite variation and martingale components, respectively, in the canonical decomposition \(\mathbf {X}'=X_0+\mathbf {W}+\mathbf {N}\) of \(\mathbf {X}'\). We want to show that \(\mathbf {N}= \mathbf {M}'\). By (3.11) we have for all \(i\in \mathbb {N}\)

$$\begin{aligned} \Delta M'_{T_i^{1,c}}=\Delta M_{T_i^{1,c}}\mathbf {1}_{\{|\Delta M_{T_i^{1,c}}|\le r\}}=\Delta X_{T_i^{1,c}}\mathbf {1}_{\{|\Delta X_{T_i^{1,c}}|\le r\}}=\Delta X'_{T_i^{1,c}} \quad \text {a.s.} \end{aligned}$$
(3.18)

Let \((\tau _k)_{k\in \mathbb {N}}\) be a sequence of totally inaccessible stopping times satisfying (3.15) for both \(\mathbf{Z}=\mathbf{N}\) and \(\mathbf{Z}=\mathbf {M}'\). Since \(\mathbf W \) is predictable and \(\tau _k\) is a totally inaccessible stopping time we have that \(\Delta W_{\tau _k}=0\) a.s. cf. [22, I,2.24] and hence

$$\begin{aligned} \Delta N_{\tau _k}=\Delta X_{\tau _k}'-\Delta W_{\tau _k}=\Delta X_{\tau _k}'=\Delta M_{\tau _k}'\quad \text {a.s.} \end{aligned}$$
(3.19)

where the last equality follows by (3.18) and the second inclusion in (3.15).

Since \(J\) is countable we may find a set \(K\subseteq \mathbb {N}\) such that \(J=\{t_k\}_{k\in K}\). Next we will show that for all \(k\in K\)

$$\begin{aligned} \Delta N_{t_k}=\Delta M_{t_k}\quad \text {a.s.} \end{aligned}$$
(3.20)

By linearity, \(\mathbf A \), defined in (3.5), is a well-defined càdlàg process. For all \(k\in K\) we have almost surely

$$\begin{aligned} A_{t_k}&= \int _{(-\infty ,t_k]\times V} \big [\phi (t_k,(s,v))-\phi (s,(s,v))\big ]\,\Lambda (ds,dv)\\&= \int _{(-\infty ,t_k)\times V} \big [\phi (t_k,(s,v))-\phi (s,(s,v))\big ]\,\Lambda (ds,dv) \end{aligned}$$

which shows that \(A_{t_k}\) is \(\mathcal {F}_{t_k-}^\Lambda \)-measurable. Define a process \(\mathbf{Z}=(Z_t)_{t\ge 0}\) by

$$\begin{aligned} Z_t=\sum _{k\in K} \big (\Delta A_{t_k}-\Delta W_{t_k}\big )\mathbf {1}_{\{t=t_k\}}. \end{aligned}$$
(3.21)

Since \(\Delta A_{t_k}-\Delta W_{t_k}\) is \(\mathcal {F}_{t_k-}^\Lambda \)-measurable for all \(k\in K\), (3.21) shows that \(\mathbf Z\) is a predictable process. Let \(^{p} \mathbf {Y}\) denote the predictable projection of any measurable process \(\mathbf {Y}\), see [22, I, 2.28]. Since \(\mathbf Z\) is predictable

$$\begin{aligned} \mathbf{Z}=\,\! ^p \mathbf{Z}=\,\! ^p\big (\mathbf {1}_{\Omega \times J}(\Delta \mathbf A -\Delta \mathbf{W})\big ) = \mathbf {1}_{\Omega \times J} \, ^p (\Delta \mathbf A -\Delta \mathbf{W})=\mathbf {1}_{\Omega \times J} \, ^p (\Delta \mathbf {M}'-\Delta \mathbf{N})=0 \end{aligned}$$
(3.22)

where the third equality follows by [22, I, 2.28(c)] and the fact that \(\Omega \times J\) is a predictable set, the last equality follows by [22, I, 2.31] and the fact that \(\mathbf {M}'\) and \(\mathbf N\) are local martingales. Equation (3.22) shows that \(\Delta A_t=\Delta W_t\) for all \(t\in J\), which implies (3.20).

By (3.19), (3.20) and the fact that

$$\begin{aligned} \{\Delta \mathbf{N}\ne 0\}\subseteq (\Omega \times J)\cup (\cup _{k\in \mathbb {N}} [\tau _k]),\quad \{\Delta \mathbf {M}'\ne 0\} \subseteq (\Omega \times J)\cup (\cup _{k\in \mathbb {N}} [\tau _k]) \end{aligned}$$

we have shown that \(\Delta \mathbf{N}=\Delta \mathbf {M}'\). By Step 4, \(\mathbf N\) and \(\mathbf {M}'\) are purely discontinuous local martingales, which implies that \(\mathbf{N}=\mathbf {M}'\), cf. [22, I, 4.19]. This completes Step 5.

Step 6. We will show that \(\mathbf A \) is a predictable càdlàg process of finite variation. According to Step 5 the process \(\mathbf W :=\mathbf {X}'-X_0-\mathbf {M}'\) is predictable and has càdlàg paths of finite variation. Thus with \(\mathbf {V}=(V_t)_{t\ge 0}\) given by

$$\begin{aligned} V_t= \sum _{s\in (0,t]} \Delta X_s\mathbf {1}_{\{|\Delta X_s|>r\}}-\sum _{s\in (0,t]} \Delta M_s\mathbf {1}_{\{|\Delta M_s|>r\}} \end{aligned}$$

we have by the definitions of \(\mathbf W\) and \(\mathbf V\) that

$$\begin{aligned} A_t= X_t-X_0-M_t=W_t+V_t-\mathbb {E}\tilde{M}_t. \end{aligned}$$
(3.23)

This shows that \(\mathbf A \) has càdlàg sample paths of finite variation. Next we will show that \(\mathbf A \) is predictable. Since the processes \(\mathbf W\), \(\mathbf V\) and \(\tilde{\mathbf {M}}\) depend on the truncation level \(r\) they will be denoted \(\mathbf {W}^r\), \(\mathbf {V}^{r}\) and \(\tilde{\mathbf {M}}^r\) in the following. As \(r\rightarrow \infty \), \(V_t^r(\omega )\rightarrow 0\) pointwise in \((\omega ,t)\), which by (3.23) shows that \(W^r_t(\omega )-\mathbb {E}\tilde{M}^r_t \rightarrow A_t(\omega )\) pointwise in \((\omega ,t)\) as \(r\rightarrow \infty \). For all \(r\in \mathbb {N}\), \((W^r_t-\mathbb {E}\tilde{M}^r_t)_{t\ge 0}\) is a predictable process, which implies that \(\mathbf A\) is a pointwise limit of predictable processes and hence predictable. This completes the proof of Step 6 and the proof of the decomposition (3.3) in Case 1.

Case 2. \(\Lambda \) is symmetric Gaussian: suppose that \(\Lambda \) is a symmetric Gaussian random measure. By [3, Theorem 4.6] used on the sets \(C_t=(-\infty ,t]\times V\), \(\mathbf {X}\) is a special semimartingale in \(\mathbb {F}^{\Lambda }\) with martingale component \(\mathbf {M}=(M_t)_{t\ge 0}\) given by

$$\begin{aligned} M_t=\int _{(0,t]\times V} \phi (s,(s,v))\,\Lambda (ds,dv),\quad t\ge 0, \end{aligned}$$

see [3, Equation (4.11)], which completes the proof in the Gaussian case.

Case 3. \(\Lambda \) is general: Let us observe that it is enough to show the theorem in the above two cases. We may decompose \(\Lambda \) as \(\Lambda = \Lambda _G + \Lambda _P\), where \(\Lambda _G, \Lambda _P\) are independent, independently scattered random measures. \(\Lambda _G\) is a symmetric Gaussian random measure characterized by (2.1) with \(b \equiv 0\) and \(\kappa \equiv 0\) while \(\Lambda _P\) is given by (2.1) with \(\sigma ^2 \equiv 0\). Observe that

$$\begin{aligned} \mathbb {F}^{\Lambda } = \mathbb {F}^{\Lambda _G} \vee \mathbb {F}^{\Lambda _P}, \end{aligned}$$

which can be deduced from the Lévy-Itô decomposition, see [22, II, 2.35], used on the processes \(\mathbf {Y}=(Y_t)_{t\ge 0}\) of the form \(Y_t=\Lambda ((0,t]\times B)\) where \(B\in \mathcal {V}_0\) (\(\mathcal {V}_0\) is defined on page 13). We have \(\mathbf {X}= \mathbf {X}^G + \mathbf {X}^P\), where \(\mathbf {X}^G\) and \(\mathbf {X}^P\) are defined by (3.1) with \(\Lambda _G\) and \(\Lambda _P\) in the place of \(\Lambda \), respectively. Since \((\Lambda , \mathbf {X})\) and \((\Lambda _P -\Lambda _G, \mathbf {X}^P- \mathbf {X}^G)\) have the same distributions, the process \(\mathbf {X}^P -\mathbf {X}^G\) has a modification which is a semimartingale with respect to \(\mathbb {F}^{\Lambda _P-\Lambda _G}= \mathbb {F}^{\Lambda _P} \vee \mathbb {F}^{-\Lambda _G}= \mathbb {F}^{\Lambda }\).

Consequently, processes \(\mathbf {X}^G\) and \(\mathbf {X}^P\) have modifications which are semimartingales with respect to \(\mathbb {F}^{\Lambda }\), and so, they are semimartingales relative to \(\mathbb {F}^{\Lambda _G}\) and \(\mathbb {F}^{\Lambda _P}\), respectively, and the general result follows from the above two cases.

The uniqueness: Let \(\mathbf {M}, \mathbf {M}', \mathbf A \) and \(\mathbf A '\) be as in the theorem. We will first show that \((\mathbf {M}, \mathbf {M}')\) is a bivariate process with independent increments relative to \(\mathbb {F}^{\Lambda }\). To this aim, choose \(0 \le s < t\) and \(A_1, \dots , A_n \in \fancyscript{S}\) such that \(A_i \subset (- \infty , s] \times V\), \(i\le n\), \(n \ge 1\). Consider random vectors \(\xi =(\xi _1,\xi _2):= (M_t -M_s, M_t' - M_s')\) and \(\eta = (\eta _1,\dots ,\eta _n):=(\Lambda (A_1),\dots , \Lambda (A_n))\). Since \(\mathbf {M}\) and \(\mathbf {M}'\) are processes representable by \(\Lambda \), \((\xi , \eta )\) has an infinitely divisible distribution in \(\mathbb {R}^{n+2}\). Since \(\mathbf {M}\) and \(\mathbf {M}'\) have independent increments relative to \(\mathbb {F}^{\Lambda }\), \(\xi _i\) is independent of \(\eta _j\) for every \(i\le 2\), \(j\le n\). It follows from the form of the characteristic function and the uniqueness of Lévy–Khintchine triplets that pairwise independence between blocks of jointly infinitely divisible random variables is equivalent to independence of the blocks (this is a straightforward extension of [21, Theorem 4]). Therefore, \(\xi \) is independent of \(\eta \). We infer that \(\xi \) is independent of \(\mathcal {F}^{\Lambda }_s\), so that \((\mathbf {M}, \mathbf {M}')\) is a process with independent increments relative to \(\mathbb {F}^{\Lambda }\), and so is \(\overline{\mathbf {M}}:= \mathbf {M}'- \mathbf {M}\).

Since \(\mathbf {X}=X_0+\mathbf {M}+\mathbf A =X_0+\mathbf {M}'+\mathbf A '\) by assumption, we have

$$\begin{aligned} \overline{\mathbf {M}}= \mathbf {M}' - \mathbf {M}= \mathbf A '- \mathbf A , \end{aligned}$$

so that the independent increment semimartingale \(\overline{\mathbf {M}}\) is predictable. For each \(n\in \mathbb {N}\) define the truncated process \({\overline{\mathbf {M}}}^{(n)} = (\overline{M}_t^{(n)})_{t\ge 0}\) by

$$\begin{aligned} \overline{M}^{(n)}_t={\overline{M}}_t-\sum _{s\le t} \Delta {\overline{M}}_s\mathbf {1}_{\{|\Delta {\overline{M}}_s|> n\}}. \end{aligned}$$

According to [22, II, 5.10], there exists a càdlàg deterministic function \(\mathbf{g}_{n}\) of finite variation with \(g_{n}(0)=0\) such that \({\overline{\mathbf {M}}}^{(n)}-\mathbf{g}_{n}\) is a martingale. Since \({\overline{\mathbf {M}}}^{(n)}-\mathbf{g}_{n}\) is also predictable and of finite variation, \({\overline{\mathbf {M}}}^{(n)}=\mathbf{g}_{n}\), cf. [22, I, 3.16]. Letting \(n\rightarrow \infty \) we obtain that \(\overline{\mathbf {M}}\) is deterministic; it is clearly càdlàg and of finite variation.

The special semimartingale part: To prove the part concerning the special semimartingale property of \(\mathbf {X}\) we note that the process \(\mathbf A\) in (3.5) is a special semimartingale since it is a predictable càdlàg process of finite variation. Thus \(\mathbf X\) is a special semimartingale if and only if \(\mathbf M\) is special semimartingale. Due to the independent increments, \(\mathbf M\) is a special semimartingale if and only if \(\mathbb {E}|M_t|<\infty \) for all \(t>0\), cf. [22, II, 2.29(a)], and in that case \(M_t=(M_t-\mathbb {E}M_t)+\mathbb {E}M_t\) is the canonical decomposition of \(\mathbf M\). This completes the proof. \(\square \)

Proof of Theorem 1.6

We only need to prove the "only if" implication. Suppose that \(\mathbf {X}\) is a semimartingale with respect to \(\mathbb {F}^\Lambda \). According to Remark 3.2 we may and do choose \(\phi \) such that \(t\mapsto \phi (t,u)\) is càdlàg for all \(u\). By Remark 3.3, assumption (3.2) is satisfied and hence by letting \(\mathbf {M}\) and \(\mathbf A \) be defined by (3.4) and (3.5), respectively, we obtain the representation of \(\mathbf {X}\) claimed in Theorem 1.6. To show the uniqueness part we note that by symmetry the deterministic function \(g\) in Theorem 3.1 satisfies that \(g(t)\) equals \(-g(t)\) in law, which implies that \(g(t)=0\) for all \(t\ge 0\). Since the expectation of a symmetric random variable is zero whenever it exists, the last part regarding the special semimartingale property follows as well. \(\square \)

Remark 3.5

We conclude this section by recalling that the proofs of the results on Gaussian semimartingales \(\mathbf {X}\) mentioned in the Introduction rely on approximating the finite variation component \(\mathbf A \) by discrete-time Doob–Meyer decompositions \(\mathbf A ^n=(A^n_t)_{t\ge 0}\) given by

$$\begin{aligned} A^n_t=\sum _{i=1}^{[2^n t]} \mathbb {E}[X_{i2^{-n}}-X_{(i-1)2^{-n}}|\mathcal {F}_{(i-1)2^{-n}}], \quad t\ge 0 \end{aligned}$$

and showing that the convergence \(\lim _n A^n_t=A_t\) holds in an appropriate sense, see [30, 39]. This technique does not seem effective in the non-Gaussian situation since it relies on strong integrability properties of functionals of \(\mathbf {X}\), which are in general not present and cannot be obtained by stopping arguments.
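To make the discrete-time decomposition concrete, it can be tried numerically on a Gaussian semimartingale with explicit conditional expectations, e.g. an Ornstein–Uhlenbeck process \(dX_t=-X_t\,dt+dB_t\) (our choice of illustration, not taken from the cited works), where \(\mathbb {E}[X_{t+h}-X_t\,|\,\mathcal {F}_t]=(e^{-h}-1)X_t\) and \(A_t=-\int _0^t X_s\,ds\). The following sketch, with function names of our own, compares \(A^n_1\) with \(A_1\):

```python
import math
import random

def simulate_ou(n_fine, seed=7):
    """Exact simulation of dX = -X dt + dB on the dyadic grid i * 2**-n_fine of [0, 1]."""
    random.seed(seed)
    h = 2.0 ** -n_fine
    a = math.exp(-h)
    s = math.sqrt((1.0 - a * a) / 2.0)  # exact one-step transition standard deviation
    x = [0.0]
    for _ in range(2 ** n_fine):
        x.append(a * x[-1] + s * random.gauss(0.0, 1.0))
    return x, h

def discrete_doob_meyer(x, n_fine, n):
    """A^n_1 = sum_i E[X_{i 2^-n} - X_{(i-1)2^-n} | F_{(i-1)2^-n}],
    which for this OU process equals sum_i (e^{-2^-n} - 1) X_{(i-1)2^-n}."""
    step = 2 ** (n_fine - n)              # fine steps per coarse step
    c = math.exp(-(2.0 ** -n)) - 1.0
    return sum(c * x[i * step] for i in range(2 ** n))

x, h = simulate_ou(12)
# Limit A_1 = -int_0^1 X_s ds, approximated by the trapezoidal rule on the fine grid.
a_limit = -h * (sum(x) - 0.5 * (x[0] + x[-1]))
errors = [abs(discrete_doob_meyer(x, 12, n) - a_limit) for n in (4, 8, 12)]
```

On a typical run the error shrinks as \(n\) grows, illustrating the convergence \(A^n_t\rightarrow A_t\) that is available in this Gaussian setting.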

4 Some stationary increment semimartingales

In this section we consider infinitely divisible processes which are stationary increment mixed moving averages (SIMMA). Specifically, a process \(\mathbf {X}= ( X_t )_{t\ge 0}\) is called a SIMMA process if it can be written in the form

$$\begin{aligned} X_t=\int _{\mathbb {R}\times V} \big [f(t-s, v)- f_0(-s, v)\big ]\, \Lambda (ds, dv),\quad t\ge 0, \end{aligned}$$
(4.1)

where the functions \(f\) and \(f_0\) are deterministic and measurable with \(f(s, v) = f_0(s,v) = 0\) whenever \(s<0\), \(f(\cdot ,v)\) is càdlàg for all \(v\), and \(\Lambda \) is an independently scattered infinitely divisible random measure that is invariant under translations of \(\mathbb {R}\). If \(V\) is a one-point space [or simply, there is no \(v\)-component in (4.1)] and \(f_0=0\), then (4.1) defines a moving average (a mixed moving average for a general \(V\), cf. [40]). If \(V\) is a one-point space and \(f_0(x)=f(x)=x_+^\alpha \) for some \(\alpha \in \mathbb {R}\), then \(\mathbf {X}\) is a fractional Lévy process.

The finite variation property of SIMMA processes was investigated in [7] and these results, together with Theorem 3.1, are crucial in our description of SIMMA semimartingales.

The random measure \(\Lambda \) in (4.1) is as in (2.1) but the functions \(b\) and \(\sigma ^2\) do not depend on \(s\) and the measure \(\kappa \) is a product measure: \(\kappa (ds,dv)=ds\,m(dv)\) for some \(\sigma \)-finite measure \(m\) on \(V\). In this case, for \(A\in \fancyscript{S}\) and \(\theta \in \mathbb {R}\)

$$\begin{aligned}&\log \mathbb {E} e^{i\theta \Lambda (A)} \\&\quad = \int _A \Bigg ( i\theta b(v)-\frac{1}{2}\theta ^2 \sigma ^2(v) +\int _{\mathbb {R}} (e^{i\theta x}-1-i\theta [[ x]]) \, \rho _v(dx)\Bigg )\, ds\,m(dv). \nonumber \end{aligned}$$
(4.2)

The function \(B\) in (2.3) is independent of \(s\), so that with \(B(x,v)=B(x,(s,v))\) we have

$$\begin{aligned} B(x, v) = xb(v) + \int _{\mathbb {R}} \big ( [[ xy]]- x [[ y]]\big ) \, \rho _{v}(dy), \quad x \in \mathbb {R}, \ v \in V. \end{aligned}$$

The SIMMA process (4.1) is a special case of (3.1) if we take \(\phi (t,(s,v))=f(t-s, v)- f_0(-s, v)\). Therefore, from Theorem 3.1 we obtain:

Theorem 4.1

Suppose that

$$\begin{aligned} \int _{V} \big | B\big (f(0, v), v\big ) \big | \, m(dv) < \infty . \end{aligned}$$
(4.3)

Then \(\mathbf {X}\) is a semimartingale with respect to the filtration \(\mathbb {F}^{\Lambda }=(\mathcal {F}^{\Lambda }_t)_{t \ge 0}\) if and only if

$$\begin{aligned} X_t = X_0 + M_t + A_t, \quad t \ge 0, \end{aligned}$$
(4.4)

where \(\mathbf {M}= ( M_t )_{t\ge 0}\) is a Lévy process given by

$$\begin{aligned} M_t = \int _{(0,t] \times V} f(0,v)\,\Lambda (ds,dv), \quad t\ge 0, \end{aligned}$$

and \(\mathbf {A}= ( A_t )_{t\ge 0}\) is a predictable process of finite variation given by

$$\begin{aligned} A_t = \int _{\mathbb {R}\times V} [g(t-s,v) - g(-s,v)] \,\Lambda (ds,dv) \end{aligned}$$
(4.5)

where \(g(s, v)= f(s, v) - f(0, v) \mathbf {1}_{\{s\ge 0\}}\).
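The splitting in (4.4) reflects a pointwise identity for the integrands. Since \(g(s,v)=f(s,v)-f(0,v)\mathbf {1}_{\{s\ge 0\}}\) and \(\mathbf {1}_{\{t-s\ge 0\}}-\mathbf {1}_{\{-s\ge 0\}}=\mathbf {1}_{(0,t]}(s)\), one checks directly that

$$\begin{aligned} f(t-s,v)-f(-s,v)=f(0,v)\,\mathbf {1}_{(0,t]}(s)+\big [g(t-s,v)-g(-s,v)\big ], \end{aligned}$$

and integrating both sides against \(\Lambda \) gives \(X_t-X_0=M_t+A_t\).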

Now we will give specific and closely related necessary and sufficient conditions on \(f\) and \(\Lambda \) that make \(\mathbf {X}\) a semimartingale.

Theorem 4.2

(Sufficiency) Let \(\mathbf {X}= ( X_t )_{t\ge 0}\) be specified by (4.1)–(4.2). Suppose that (4.3) is satisfied and that for \(m\)-a.e. \(v \in V\), \(f(\cdot ,v)\) is absolutely continuous on \([0,\infty )\) with a derivative \(\dot{f}(s,v)=\frac{\partial }{\partial s}f(s,v)\) satisfying

$$\begin{aligned} \int _V \int _{0}^\infty \big (|\dot{f}(s,v)|^2\sigma ^2(v)\big )\,ds \,m(dv)&< \infty , \end{aligned}$$
(4.6)
$$\begin{aligned} \int _V \int _0^\infty \int _\mathbb {R}\big (|x\dot{f}(s,v)|\wedge |x\dot{f}(s,v)|^2\big )\,\rho _v(dx)\,ds\,m(dv)&< \infty . \end{aligned}$$
(4.7)

Then \(\mathbf X\) is a semimartingale with respect to \(\mathbb {F}^\Lambda \).

Proof

We need to verify the conditions of Theorem 4.1. With \(g(s, v)= f(s, v) - f(0, v) \mathbf {1}_{\{s\ge 0\}}\) we have for \(m\)-a.e. \(v \in V\), \(g(\cdot ,v)\) is absolutely continuous on \(\mathbb {R}\) with derivative \(\dot{g}(s,v)=\dot{f}(s,v)\) for \(s > 0\) and \(\dot{g}(s,v)=0\) for \(s<0\). By Jensen’s inequality, for each fixed \(t>0\), the function

$$\begin{aligned} (s,v)\mapsto g(t-s,v)-g(-s,v)=\int _0^t \dot{g}(u-s,v)\,du, \end{aligned}$$

when substituted for \(\dot{f}(s,v)\) in (4.6)–(4.7), satisfies these conditions. Indeed, it is straightforward to verify (4.6). To verify (4.7) we use the fact that \(\psi :u\mapsto 2\int _0^{|u|} (w\wedge 1)\,dw\) is convex and satisfies \(|u|\wedge |u|^2\le \psi (u)\le 2\big (|u|\wedge |u|^2\big )\). In particular, \((s,v)\mapsto g(t-s,v)-g(-s,v)\) satisfies (b) of Sect. 2, and so does the function

$$\begin{aligned} (s,v)\mapsto f(0,v) \mathbf {1}_{(0, t]}(s)=f(t-s,v) - f(-s,v) - \big [g(t-s,v) - g(-s,v)\big ]. \end{aligned}$$

This fact together with assumption (4.3) guarantees that \(\mathbf {M}\) of Theorem 4.1 is well-defined. Then \(\mathbf A \) is well-defined by (4.4). The process \(\mathbf A \) is of finite variation by [7, Theorem 3.1] because \(g(\cdot ,v)\) is absolutely continuous on \(\mathbb {R}\) and \(\dot{g}(\cdot , v)= \dot{f}(\cdot , v)\) satisfies (4.6)–(4.7). \(\square \)
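For completeness, the elementary properties of \(\psi \) used in the proof can be verified directly from its definition:

$$\begin{aligned} \psi (u)=2\int _0^{|u|}(w\wedge 1)\,dw= {\left\{ \begin{array}{ll} |u|^2 &{} \text {if } |u|\le 1,\\ 2|u|-1 &{} \text {if } |u|>1, \end{array}\right. } \end{aligned}$$

so \(|u|\wedge |u|^2\le \psi (u)\le 2\big (|u|\wedge |u|^2\big )\), and \(\psi \) is convex as the integral of the nondecreasing function \(w\mapsto 2(w\wedge 1)\).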

Theorem 4.3

(Necessity) Suppose that \(\mathbf {X}\) is a semimartingale with respect to \(\mathbb {F}^\Lambda \) and for \(m\)-almost every \(v\in V\) we have either

$$\begin{aligned} \int _{-1}^1 |x|\,\rho _{v}(dx)=\infty \quad \text {or}\quad \sigma ^2(v)>0. \end{aligned}$$
(4.8)

Then for \(m\)-a.e. \(v\), \( f(\cdot ,v)\) is absolutely continuous on \([0,\infty )\) with a derivative \(\dot{f}(\cdot ,v)\) satisfying (4.6) and

$$\begin{aligned}&\int _0^\infty \int _\mathbb {R}\big (|x \dot{f}(s,v)|\wedge |x \dot{f}(s,v)|^2\big )(1 \wedge x^{-2})\,\rho _v(dx)\, ds<\infty . \end{aligned}$$
(4.9)

If, additionally,

$$\begin{aligned} \limsup _{u \rightarrow \infty } \, \frac{u\int _{|x|>u} |x|\,\rho _v(dx)}{\int _{|x|\le u} x^2 \, \rho _v(dx)} < \infty \quad m\text {-a.e.} \end{aligned}$$
(4.10)

then for \(m\)-a.e. \(v\),

$$\begin{aligned} \int _{0}^\infty \int _{\mathbb {R}} ( |x{\dot{f}}(s,v)|^2 \wedge |x{\dot{f}}(s,v)|) \, \rho _v(dx)\, ds < \infty . \end{aligned}$$
(4.11)

Finally, if

$$\begin{aligned} \sup _{v\in V}\sup _{u > 0} \, \frac{u\int _{|x|>u} |x|\,\rho _v(dx)}{\int _{|x|\le u} x^2 \, \rho _v(dx)} <\infty \end{aligned}$$
(4.12)

then \(\dot{f}\) satisfies (4.6)–(4.7).

Proof

Assume that \(\mathbf {X}\) is a semimartingale with respect to \(\mathbb {F}^{\Lambda }\). By a symmetrization argument we may assume that \(\Lambda \) is a symmetric random measure. Indeed, let \(\Lambda '\) be an independent copy of \(\Lambda \) and \(\mathbf {X}'\) be defined by (4.1) with \(\Lambda \) replaced by \(\Lambda '\). Then \(\mathbf {X}'\) is a semimartingale with respect to \(\mathbb {F}^{\Lambda '}\). By the independence, both \(\mathbf {X}\) and \(\mathbf {X}'\) are semimartingales with respect to \(\mathbb {F}^\Lambda \vee \mathbb {F}^{\Lambda '}\) and since \(\mathbb {F}^{\Lambda -\Lambda '}\subseteq \mathbb {F}^\Lambda \vee \mathbb {F}^{\Lambda '}\), the process \(\mathbf {X}-\mathbf {X}'\) is a semimartingale with respect to \(\mathbb {F}^{\Lambda -\Lambda '}\). This shows that we may assume that \(\Lambda \) is symmetric. Then (4.3) holds since \(B=0\).

By Theorem 4.1, the process \(\mathbf A\) in (4.5) is of finite variation. It follows from [7, Theorem 3.3] that for \(m\)-a.e. \(v\), \(g(\cdot ,v)\) is absolutely continuous on \(\mathbb {R}\) with a derivative \(\dot{g}(\cdot ,v)\) satisfying (4.6) and (4.9). Furthermore, \(\dot{g}\) satisfies (4.11) under assumption (4.10), and under assumption (4.12), \(\dot{g}\) satisfies (4.7). Since \(f(s,v)=g(s,v)+f(0,v)\mathbf {1}_{\{s\ge 0\}}\), \(f(\cdot ,v)\) is absolutely continuous on \([0,\infty )\) with a derivative \(\dot{f}(\cdot ,v)=\dot{g}(\cdot ,v)\) for \(m\)-a.e. \(v\) satisfying the conditions of the theorem. \(\square \)

Remark 4.4

Theorem 4.3 becomes an exact converse to Theorem 4.2 when (4.8) holds and either (4.10) holds and \(V\) is a finite set, or (4.12) holds.

Remark 4.5

Condition (4.8) is in general necessary to deduce that \(f\) has absolutely continuous sections. Indeed, let \(V\) be a one point space so that \(\Lambda \) is generated by increments of a Lévy process denoted again by \(\Lambda \). If (4.8) is not satisfied, then taking \(f=\mathbf {1}_{[0,1]}\) we get that \(X_t=\Lambda _t-\Lambda _{t-1}\) is of finite variation and hence a semimartingale, but \(f\) is not continuous on \([0,\infty )\).

Next we will consider several consequences of Theorems 4.2 and 4.3. When there is no \(v\)-component, (4.3) is always satisfied and \(\Lambda \) is generated by a two-sided Lévy process. In what follows, \(\mathbf {Z}=(Z_t)_{t\in \mathbb {R}}\) will denote a non-deterministic two-sided Lévy process, with characteristic triplet \((b,\sigma ^2,\rho )\), \(Z_0=0\) and natural filtration \(\mathbb {F}^Z\).

The following proposition characterizes fractional Lévy processes which are semimartingales, and completes results of [4, Corollary 5.4] and parts of [10, Theorem 1].

Proposition 4.6

(Fractional Lévy processes) Let \(\gamma >0\), \(x_+:=\max \{x,0\}\) for \(x\in \mathbb {R}\), \(\mathbf {Z}\) be a Lévy process as above, and \(\mathbf X\) be a fractional Lévy process defined by

$$\begin{aligned} X_t=\int _{-\infty }^t \big \{(t-s)_+^\gamma -(-s)_+^\gamma \big \}\,dZ_s \end{aligned}$$
(4.13)

where the stochastic integrals exist. Then \(\mathbf X\) is a semimartingale with respect to \(\mathbb {F}^Z\) if and only if \(\sigma ^2=0\), \(\gamma \in (0,\tfrac{1}{2})\) and

$$\begin{aligned} \int _\mathbb {R}|x|^{\frac{1}{1-\gamma }}\,\rho (dx)<\infty . \end{aligned}$$
(4.14)

Proof

First we notice that, as a consequence of \(\mathbf {X}\) being well-defined, \(\gamma <\tfrac{1}{2}\) and

$$\begin{aligned} \int _{|x|>1} |x|^{\frac{1}{1-\gamma }}\,\rho (dx)<\infty . \end{aligned}$$
(4.15)

Indeed, since the stochastic integral (4.13) is well-defined, [33, Theorem 2.7] shows that

$$\begin{aligned} \int _{-\infty }^t \int _\mathbb {R}\big (1\wedge |\{(t-s)^\gamma -(-s)^\gamma _+\}x|^2\big )\,\rho (dx)\,ds<\infty , \quad t\ge 0. \end{aligned}$$
(4.16)

This implies that \(\gamma <\tfrac{1}{2}\) if \(\rho (\mathbb {R})>0\). A similar argument shows that \(\gamma <\tfrac{1}{2}\) if \(\sigma ^2>0\), and thus, by the non-deterministic assumption on \(\mathbf Z\), we have shown that \(\gamma <\frac{1}{2}\). Putting \(t=1\) in (4.16) and using the estimate \(|(1-s)^\gamma -(-s)^\gamma _+|\ge |\gamma (1-s)^{\gamma -1}|\) for \(s\in (-\infty ,0]\) we get

$$\begin{aligned} \infty&> \int _{-\infty }^0 \int _\mathbb {R}\big (1\wedge |\gamma (1-s)^{\gamma -1}x|^2\big )\,\rho (dx)\,ds\\&= \int _\mathbb {R}\int _1^\infty \big (1\wedge |\gamma s^{\gamma -1}x|^2\big )\,ds\,\rho (dx)\\&\ge \int _\mathbb {R}\int _{1\le s\le |\gamma x|^{\frac{1}{1-\gamma }} }\,ds\,\rho (dx)\ge \int _{|\gamma x|>1} \Bigg (|\gamma x|^{\frac{1}{1-\gamma }} -1\Bigg )\,\rho (dx), \end{aligned}$$

which shows (4.15).

Suppose that \(\mathbf X\) is a semimartingale. If \(\sigma ^2>0\), then according to Theorem 4.3, \(f\) is absolutely continuous on \([0,\infty )\) with a derivative \(\dot{f}\) satisfying

$$\begin{aligned} \int _0^\infty |\dot{f}(t)|^2\,dt=\int _0^\infty \gamma ^2 t^{2(\gamma -1)}\,dt<\infty \end{aligned}$$

which is a contradiction, since \(2(\gamma -1)<-1\) makes the integral diverge at \(0\); hence \(\sigma ^2=0\). By the non-deterministic assumption on \(\mathbf Z\) we have \(\rho (\mathbb {R})>0\). To complete the proof of the necessity part, it remains to show that

$$\begin{aligned} \int _{|x|\le 1} |x|^{\frac{1}{1-\gamma }}\,\rho (dx)<\infty . \end{aligned}$$
(4.17)

Since \(\dot{f}(t)=\gamma t^{\gamma -1}\) for \(t>0\), we have

$$\begin{aligned} \int _0^\infty \big \{|x \dot{f}(t)|\wedge |x \dot{f}(t)|^2\big \}\,dt=C |x|^{\frac{1}{1-\gamma }} \end{aligned}$$
(4.18)

where \(C=\gamma ^{\frac{1}{1-\gamma }}(\gamma ^{-1}+(1-2\gamma )^{-1})\). In the case \(\int _{|x|\le 1} |x|\,\rho (dx)<\infty \), (4.17) holds since \(1<\tfrac{1}{1-\gamma }\). Thus we may assume that \(\int _{|x|\le 1}|x|\,\rho (dx)=\infty \), that is, Eq. (4.8) of Theorem 4.3 is satisfied. By (4.9) of Theorem 4.3 and (4.18) we have

$$\begin{aligned} \int _{|x|\le 1} |x|^{\frac{1}{1-\gamma }} \,\rho (dx)\le \int _\mathbb {R}|x|^{\frac{1}{1-\gamma }} (1\wedge x^{-2})\,\rho (dx)<\infty \end{aligned}$$

which completes the proof of the necessity part.

On the other hand, suppose that \(\sigma ^2=0\), \(\gamma \in (0,\tfrac{1}{2})\) and (4.14) is satisfied. By (4.14) and (4.18), \(f\) is absolutely continuous on \([0,\infty )\) with a derivative \(\dot{f}\) satisfying (4.7) and hence \(\mathbf X\) is a semimartingale with respect to \(\mathbb {F}^Z\), cf. Theorem 4.2. \(\square \)
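The constant in (4.18) can be sanity-checked numerically; the sketch below (ours, with hypothetical function names) integrates \(t\mapsto |x\dot{f}(t)|\wedge |x\dot{f}(t)|^2\) for \(\dot f(t)=\gamma t^{\gamma -1}\) via the substitution \(t=e^y\), and compares the result with \(C|x|^{1/(1-\gamma )}\):

```python
import math

def closed_form(x, gamma):
    """C * |x|^{1/(1-gamma)} with C = gamma^{1/(1-gamma)} * (1/gamma + 1/(1-2*gamma))."""
    c = gamma ** (1.0 / (1.0 - gamma)) * (1.0 / gamma + 1.0 / (1.0 - 2.0 * gamma))
    return c * abs(x) ** (1.0 / (1.0 - gamma))

def numeric(x, gamma, y_lo=-60.0, y_hi=60.0, n=200000):
    """Midpoint rule for int_0^inf min(u, u^2) dt with u = |x| gamma t^{gamma-1},
    after the substitution t = e^y (both tails then decay exponentially in y)."""
    h = (y_hi - y_lo) / n
    total = 0.0
    for i in range(n):
        y = y_lo + (i + 0.5) * h
        u = abs(x) * gamma * math.exp((gamma - 1.0) * y)
        total += min(u, u * u) * math.exp(y)  # dt = e^y dy
    return total * h
```

For \(\gamma \in (0,\tfrac12)\) and \(x\ne 0\) the two values should agree closely, e.g. for \(\gamma =1/4\), \(x=1\).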

Below we will recall the conditions from [7] under which (4.10) or (4.12) hold. Recall that a measure \(\mu \) on \(\mathbb {R}\) is said to be regularly varying if \(x\mapsto \mu ([-x,x]^c)\) is a regularly varying function; see [12].

Proposition 4.7

[7, Proposition 3.5] Condition (4.10) is satisfied when one of the following two conditions holds for \(m\)-almost every \(v \in V\):

(i):

\(\int _{|x|>1} x^2 \, \rho _v(dx)<\infty \) or

(ii):

\(\rho _v\) is regularly varying at \(\infty \) with index \(\beta \in [-2,-1)\).

Suppose that \(\rho _v=\rho \) for all \(v\), where \(\rho \) satisfies (4.10) and is regularly varying with index \(\bar{\beta }\in (-2,-1)\) at 0. Then (4.12) holds.

Theorems 4.2 and 4.3 and Proposition 4.7 extend [27, Theorem 6.5] from the case where \(\mathbf Z\) is a Brownian motion to quite general Lévy processes in the following way.

Corollary 4.8

Suppose that \(\mathbf {Z}=(Z_t)_{t\in \mathbb {R}}\) is a two-sided Lévy process as above, with paths of infinite variation on compact intervals. Let \(\mathbf {X}=(X_t)_{t\ge 0}\) be a process of the form

$$\begin{aligned} X_t=\int _{-\infty }^t \big \{f(t-s)-f_0(-s)\big \}\,dZ_s. \end{aligned}$$

Suppose that the random variable \(Z_1\) is either square-integrable or has a regularly varying distribution at \(\infty \) of index \(\beta \in [-2,-1)\). Then \(\mathbf X\) is a semimartingale with respect to \(\mathbb {F}^Z\) if and only if \(f\) is absolutely continuous on \([0,\infty )\) with a derivative \(\dot{f}\) satisfying

$$\begin{aligned} \int _0^\infty |\dot{f}(t)|^2\,dt&<\infty \quad \text {if } \sigma ^2> 0, \nonumber \\ \int _0^\infty \int _\mathbb {R}\big (|x \dot{f}(t)|\wedge |x \dot{f}(t)|^2\big )\,\rho (dx)\,dt&< \infty . \end{aligned}$$
(4.19)

Proof of Corollary 4.8

The conditions imposed on \(Z_1\) are equivalent to \(\rho \) satisfying (i) or (ii) of Proposition 4.7, respectively, cf. [16, Theorem 1] and [37, Theorem 25.3]. Moreover, (4.8) of Theorem 4.3 is equivalent to \(\mathbf Z\) having sample paths of infinite variation on compact intervals, and hence the result follows by Theorems 4.2 and 4.3. \(\square \)

Example 4.9

In the following we will consider \(\mathbf X\) and \(\mathbf Z\) given as in Corollary 4.8 where \(\mathbf Z\) is either a stable or a tempered stable Lévy process.

(i):

Stable. Assume that \(\mathbf Z\) is a symmetric \(\alpha \)-stable Lévy process with index \(\alpha \in (1,2)\), that is, \(\rho (dx)=c|x|^{-\alpha -1}\,dx\) where \(c>0\), and \(\sigma ^2=b=0\). Then \(\mathbf X\) is a semimartingale with respect to \(\mathbb {F}^Z\) if and only if \(f\) is absolutely continuous on \([0,\infty )\) with a derivative \(\dot{f}\) satisfying

$$\begin{aligned} \int _0^\infty |\dot{f}(t)|^\alpha \,dt<\infty . \end{aligned}$$
(4.20)

We use Corollary 4.8 to show the above. Note that \(\int _{|x|\le 1} |x|\,\rho (dx)=\infty \) and that \(\rho \) is regularly varying at \(\infty \) with index \(-\alpha \in (-2,-1)\). Moreover, the identity

$$\begin{aligned} \int _\mathbb {R}\big (|xy|\wedge |xy|^2\big )\,\rho (dx)=C |y|^\alpha ,\quad y\in \mathbb {R}, \end{aligned}$$
(4.21)

with \(C=2c ((2-\alpha )^{-1}+(\alpha -1)^{-1})\), shows that (4.19) is equivalent to (4.20). Thus the result follows by Corollary 4.8.
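The identity (4.21) and its constant can also be checked numerically (illustration only; helper names `lhs`/`rhs` are ours, SciPy assumed):

```python
# Check of (4.21): with rho(dx) = c|x|^(-alpha-1) dx, alpha in (1,2),
#   int_R (|xy| ∧ |xy|^2) rho(dx) = C |y|^alpha,
# where C = 2c*((2-alpha)^(-1) + (alpha-1)^(-1)).
from scipy.integrate import quad

def lhs(alpha, c, y):
    # by symmetry, integrate over x > 0 and double; split at x = 1/|y|, where |xy| = 1
    g = lambda x: min(abs(x * y), (x * y)**2) * c * x**(-alpha - 1)
    a, _ = quad(g, 0, 1 / abs(y))
    b, _ = quad(g, 1 / abs(y), float('inf'))
    return 2 * (a + b)

def rhs(alpha, c, y):
    return 2 * c * (1 / (2 - alpha) + 1 / (alpha - 1)) * abs(y)**alpha

for alpha in (1.2, 1.5, 1.8):
    assert abs(lhs(alpha, 1.0, 2.0) - rhs(alpha, 1.0, 2.0)) < 1e-5 * rhs(alpha, 1.0, 2.0)
```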

(ii):

Tempered stable. Suppose that \(\mathbf Z\) is a symmetric tempered stable Lévy process with indices \(\alpha \in [1,2)\) and \(\lambda >0\), i.e., \(\rho (dx)=c |x|^{-\alpha -1}e^{-\lambda |x|} \,dx\) where \(c>0\), and \(\sigma ^2=b=0\). Then \(\mathbf X\) is a semimartingale with respect to \(\mathbb {F}^Z\) if and only if \(f\) is absolutely continuous on \([0,\infty )\) with a derivative \(\dot{f}\) satisfying

$$\begin{aligned} \int _0^\infty \big (| \dot{f}(t)|^{\alpha }\wedge | \dot{f}(t)|^2\big ) \, dt <\infty . \end{aligned}$$
(4.22)

Again we will use Corollary 4.8. The conditions imposed on \(\mathbf Z\) in Corollary 4.8 are satisfied due to the fact that \(\int _{|x|\le 1} |x|\,\rho (dx)=\infty \) and \(\int _{|x|>1}|x|^2\,\rho (dx)<\infty \). Moreover, using the asymptotics of the incomplete gamma functions we have that

$$\begin{aligned} \int _\mathbb {R}\big (|xu|\wedge |xu|^2\big )\,\rho (dx)\sim {\left\{ \begin{array}{ll} C_1 u^\alpha &{} \text {as } u\rightarrow \infty \\ C_2 u^2 &{} \text {as } u\rightarrow 0 \end{array}\right. } \end{aligned}$$
(4.23)

where \(C_1, C_2>0\) are finite constants depending only on \(\alpha , c\) and \(\lambda \), and we write \(f(u)\sim g(u)\) as \(u\rightarrow \infty \) (resp. \(u\rightarrow 0\)) when \(f(u)/g(u)\rightarrow 1\) as \(u\rightarrow \infty \) (resp. \(u\rightarrow 0\)). Equation (4.23) shows that (4.19) is equivalent to (4.22), and hence the result follows by Corollary 4.8.
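Since the constants \(C_1, C_2\) are not given explicitly, a numerical illustration of (4.23) can test the scaling exponents rather than the constants (illustration only; helper name `I` is ours, SciPy assumed):

```python
# Illustration of (4.23): for rho(dx) = c|x|^(-alpha-1) e^(-lambda|x|) dx, the map
# u -> int_R (|xu| ∧ |xu|^2) rho(dx) scales like u^alpha for large u and u^2 for
# small u.  We test the local scaling exponent log_2 I(2u)/I(u).
from math import exp, log
from scipy.integrate import quad

def I(u, alpha=1.5, lam=1.0, c=1.0):
    # by symmetry, integrate over x > 0 and double; the split point x = 1/u
    # separates the |xu| <= 1 and |xu| > 1 regimes
    g = lambda x: min(x * u, (x * u)**2) * c * x**(-alpha - 1) * exp(-lam * x)
    a, _ = quad(g, 0, 1 / u)
    b, _ = quad(g, 1 / u, float('inf'))
    return 2 * (a + b)

big = log(I(2000.0) / I(1000.0)) / log(2)   # close to alpha = 1.5 for large u
small = log(I(2e-3) / I(1e-3)) / log(2)     # close to 2 for small u
assert abs(big - 1.5) < 0.05
assert abs(small - 2.0) < 0.05
```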

Example 4.10

A supOU process \(\mathbf{X}=(X_t)_{t\ge 0}\) is a stochastic process of the form

$$\begin{aligned} X_t=\int _{\mathbb {R}_-\times (-\infty ,t]} e^{v(t-s)}\,\Lambda (ds,dv) \end{aligned}$$
(4.24)

where \(\mathbb {R}_-:=(-\infty ,0)\), \(\rho _v=\rho \) does not depend on \(v\) and \(m\) is a probability measure. SupOU processes (short for superpositions of Ornstein–Uhlenbeck processes) were introduced by [1]. Suppose for simplicity that \(\sigma ^2=0\). The process \(\mathbf X\) is well-defined if and only if \(\int _\mathbb {R}\log (1+|x|)\,\rho (dx)<\infty \) and \(\int _{-\infty }^0 \frac{1}{|v|}\,m(dv)<\infty \), cf. [18, page 343].

Let \(\mathbf X\) be a supOU process of the form (4.24) and suppose that the Lévy measure \(\rho \) satisfies the following conditions (1)–(2):

(1):

Either \(\int _{|x|\ge 1} |x|^2\,\rho (dx)<\infty \), or \(\rho \) is regularly varying at \(\infty \) with index \(\beta \in [-2,-1)\).

(2):

\(\rho \) is regularly varying at \(0\) with index \(\bar{\beta }\in (-2,-1)\).

Then \(\mathbf X\) is a semimartingale relative to \(\mathbb F^\Lambda \) if and only if

$$\begin{aligned} \int _{-\infty }^0 \Big (\int _\mathbb {R}\big (|x v|^2\wedge |x v|\big ) \,\rho (dx)\Big ) |v|^{-1}\,m(dv)<\infty . \end{aligned}$$
(4.25)

In particular, if \(\Lambda \) is symmetric \(\alpha \)-stable with \(\alpha \in (1,2)\), i.e. \(\rho (dx)=c |x|^{-1-\alpha }\,dx\) with \(c>0\), then \(\mathbf X\) is a semimartingale with respect to \(\mathbb F^\Lambda \) if and only if

$$\begin{aligned} \int _{-\infty }^0 |v|^{\alpha -1}\, m(dv)<\infty . \end{aligned}$$
(4.26)

To see this we observe that \(f(t,v):=e^{vt}\) is absolutely continuous in \(t\in [0,\infty )\) with \(\dot{f}(t,v)=v e^{vt}\). For all \(v\in \mathbb {R}_-\) and \(x\in \mathbb {R}\) a simple computation shows that

$$\begin{aligned} \int _0^\infty |x \dot{f}(t,v)|\wedge |x\dot{f}(t,v)|^2\,dt = \frac{|x v|^2}{2|v|}\mathbf {1}_{\{|xv|\le 1\}} +\frac{|xv|-1/2}{|v|}\mathbf {1}_{\{|xv|>1\}} \end{aligned}$$

which is bounded from below and above by constants times

$$\begin{aligned} \frac{1}{|v|} \Big (|x v|^2\wedge |x v|\Big ). \end{aligned}$$

Thus (4.25) follows by Theorems 4.2 and 4.3 together with Proposition 4.7. When \(\Lambda \) is symmetric \(\alpha \)-stable with \(\alpha \in (1,2)\), the above (1) and (2) are satisfied and, by (4.21), \( \int _\mathbb {R}\big (|x v|^2\wedge |x v| \big )\,\rho (dx)=C|v|^{\alpha }\) for a finite constant \(C>0\) depending only on \(\alpha \) and \(c\). Hence (4.26) follows by (4.25).
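The closed-form value of the displayed integral with \(\dot{f}(t,v)=v e^{vt}\) can be verified numerically (illustration only; helper names `lhs`/`rhs` are ours, SciPy assumed):

```python
# Check of the displayed formula: with fdot(t, v) = v e^{vt}, v < 0,
#   int_0^inf |x fdot| ∧ |x fdot|^2 dt
# equals |xv|^2/(2|v|) if |xv| <= 1 and (|xv| - 1/2)/|v| otherwise.
from math import exp
from scipy.integrate import quad

def lhs(x, v):
    g = lambda t: min(abs(x * v) * exp(v * t), (x * v * exp(v * t))**2)
    val, _ = quad(g, 0, float('inf'))
    return val

def rhs(x, v):
    a = abs(x * v)
    return a**2 / (2 * abs(v)) if a <= 1 else (a - 0.5) / abs(v)

for x, v in ((0.3, -1.0), (5.0, -1.0), (2.0, -0.5), (0.1, -3.0)):
    assert abs(lhs(x, v) - rhs(x, v)) < 1e-6
```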

Example 4.11

(Multi-stable) In this example we extend Example 4.9(i) to so-called multi-stable processes; that is, we consider \(\mathbf X\) given by (4.1) with

$$\begin{aligned} \rho _v(dx)=c |x|^{-\alpha (v)-1} \, dx \end{aligned}$$

where \( \alpha :V \rightarrow (0,2)\) is a measurable function, \(c>0\) and \(b=\sigma ^2=0\). For \(v\in V\), \(\rho _v\) is the Lévy measure of a symmetric stable distribution with index \(\alpha (v)\). Assume that there exists an \(r>1\) such that \(\alpha (v)\ge r\) for all \(v\in V\). Then \(\mathbf {X}\) is a semimartingale with respect to \(\mathbb {F}^\Lambda \) if and only if for \(m\)-a.e. \(v\), \(f(\cdot ,v)\) is absolutely continuous on \([0,\infty )\) with a derivative \(\dot{f}(\cdot ,v)\) satisfying

$$\begin{aligned} \int _V\int _0^\infty \Bigg ( \frac{1}{2-\alpha (v)}|\dot{f}(s,v)|^{\alpha (v)}\Bigg ) \,ds\,m(dv)<\infty . \end{aligned}$$
(4.27)

To show the above we argue as in Example 4.9. By symmetry, Eq. (4.3) is satisfied. For all \(v\in V\), \(\int _{|x|\le 1} |x|\,\rho _v(dx)=\infty \), which shows that (4.8) of Theorem 4.3 is satisfied. By basic calculus we have for \(v\in V\) that

$$\begin{aligned} u\int _{|x|>u} |x|\,\rho _v(dx)=K(v) \int _{|x|\le u} x^2\,\rho _v(dx) \end{aligned}$$
(4.28)

where \(K(v)=(2-\alpha (v))/(\alpha (v)-1)\). Since \(\alpha (v)\ge r\) we have that \(K(v)\le 2/(r-1)<\infty \) which together with (4.28) implies (4.12). From (4.21) we infer that (4.7) is equivalent to (4.27), and thus Theorems 4.2 and 4.3 conclude the proof.
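The identity (4.28) is elementary calculus and can be confirmed numerically (illustration only; helper names `lhs`/`rhs` are ours, SciPy assumed):

```python
# Check of (4.28): for rho_v(dx) = c|x|^(-alpha-1) dx,
#   u * int_{|x|>u} |x| rho_v(dx) = K * int_{|x|<=u} x^2 rho_v(dx),
# with K = (2-alpha)/(alpha-1).
from scipy.integrate import quad

def lhs(u, alpha, c=1.0):
    g = lambda x: x * c * x**(-alpha - 1)
    val, _ = quad(g, u, float('inf'))
    return 2 * u * val   # factor 2 from the symmetry of rho_v

def rhs(u, alpha, c=1.0):
    K = (2 - alpha) / (alpha - 1)
    g = lambda x: x**2 * c * x**(-alpha - 1)
    val, _ = quad(g, 0, u)
    return 2 * K * val

for alpha in (1.2, 1.6, 1.9):
    for u in (0.5, 2.0):
        assert abs(lhs(u, alpha) - rhs(u, alpha)) < 1e-6 * rhs(u, alpha)
```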

Example 4.12

(supFLP) Consider \(\mathbf {X}= ( X_t )_{t\ge 0}\) of the form

$$\begin{aligned} X_t=\int _{\mathbb {R}\times V} \Bigg ( (t-s)_+^{\gamma (v)}-(-s)_+^{\gamma (v)}\Bigg ) \,\Lambda (ds,dv), \end{aligned}$$
(4.29)

where \( \gamma :V \rightarrow (0,\infty )\) is a measurable function. Processes of the form (4.29) may be viewed as superpositions of fractional Lévy processes with (possibly) different indices; hence the name supFLP. If \(\gamma (v)\in (0,\frac{1}{2})\) for \(m\)-a.e. \(v\), \(\sigma ^2 =0\) and

$$\begin{aligned} \int _V \Big (\int _\mathbb {R}|x|^{\frac{1}{1-\gamma (v)}} \,\rho _v(dx)\Big ) \big (\tfrac{1}{2}-\gamma (v)\big )^{-1}\,m(dv)<\infty , \end{aligned}$$
(4.30)

then \(\mathbf {X}\) is a semimartingale with respect to \(\mathbb {F}^{\Lambda }\). Conversely, if \(\mathbf {X}\) is a semimartingale with respect to \(\mathbb {F}^{\Lambda }\) and \(\int _{|x|\le 1}|x|\,\rho _v(dx)=\infty \) for \(m\)-a.e. \(v\), then \(\gamma (v)\in (0,\frac{1}{2})\) for \(m\)-a.e. \(v\), \( \sigma ^2=0\) and

$$\begin{aligned} \int _\mathbb {R}|x|^{\frac{1}{1-\gamma (v)}}\,\rho _v(dx)<\infty , \end{aligned}$$
(4.31)

and if in addition \(\rho \) satisfies (4.12), then (4.30) holds.

To show the above let \(f(t,v)=t^{\gamma (v)}_+\) for \(t\in \mathbb {R}, v\in V\). Since \(f(0,v)=0\) for all \(v\), Eq. (4.3) is satisfied. As in Example 4.6, we observe that the conditions

$$\begin{aligned} \int _{|x|\ge 1} |x|^{\frac{1}{1-\gamma (v)}}\,\rho _v(dx)<\infty \quad \text {and} \quad \gamma (v)<\tfrac{1}{2}\quad m\text {-a.e.} \end{aligned}$$
(4.32)

follow from the fact that \(\mathbf {X}\) is well-defined. For \(\gamma (v)\in (0,\frac{1}{2})\), \(f(\cdot ,v)\) is absolutely continuous on \([0,\infty )\). By (4.18) we deduce that

$$\begin{aligned} \frac{c|x|^{\frac{1}{1-\gamma (v)}}}{\tfrac{1}{2}-\gamma (v)}\le \int _0^\infty \{|x \dot{f}(t,v)|\wedge |x \dot{f}(t,v)|^2\}\,dt \le \frac{\tilde{c}|x|^{\frac{1}{1-\gamma (v)}}}{\tfrac{1}{2}-\gamma (v)} \end{aligned}$$
(4.33)

for all \(x\in \mathbb {R}\), where \(c, \tilde{c} >0\) are finite constants not depending on \(v\) and \(x\).

By Theorem 4.2 and (4.33), the sufficiency part follows. To show the necessity part, assume that \(\mathbf {X}\) is a semimartingale with respect to \(\mathbb {F}^{\Lambda }\) and that \(\int _{|x|\le 1}|x|\,\rho _v(dx)=\infty \) for \(m\)-a.e. \(v\). By Theorem 4.3, \(f(\cdot ,v)\) is absolutely continuous with a derivative \(\dot{f}(\cdot ,v)\) satisfying (4.6) and (4.9). From (4.6) we deduce that \(\sigma ^2=0\) \(m\)-a.e. and from (4.9) and (4.33) we infer that

$$\begin{aligned} \int _{|x|\le 1} |x|^{\frac{1}{1-\gamma (v)}}\,\rho _v(dx)<\infty \quad \text {for } m\text {-a.e. } v. \end{aligned}$$
(4.34)

By (4.32)–(4.34), condition (4.31) follows. Moreover, if \(\rho \) satisfies (4.12), then Theorem 4.3 together with (4.33) show (4.30). This completes the proof.
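The two-sided comparability claimed in (4.33) can be illustrated numerically: by the exact computation behind (4.18), the constant \(C(\gamma )=\gamma ^{\frac{1}{1-\gamma }}(\gamma ^{-1}+(1-2\gamma )^{-1})\) satisfies \(c(\tfrac12-\gamma )^{-1}\le C(\gamma )\le \tilde c(\tfrac12-\gamma )^{-1}\) uniformly on \((0,\tfrac12)\). The following Python sketch (illustration only; helper name `C` is ours) checks that \(C(\gamma )(\tfrac12-\gamma )\) stays between fixed positive bounds on a fine grid:

```python
# Illustration of (4.33): the product C(gamma) * (1/2 - gamma) is bounded away
# from 0 and infinity on (0, 1/2); its limits are 1/2 (gamma -> 0) and 1/8 (gamma -> 1/2).
def C(gamma):
    return gamma**(1 / (1 - gamma)) * (1 / gamma + 1 / (1 - 2 * gamma))

vals = [C(g) * (0.5 - g) for g in (i / 10000 for i in range(1, 5000))]
assert min(vals) > 0.1 and max(vals) < 0.6   # uniform two-sided comparability
```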