1 Introduction

This chapter is divided into two parts: the first is devoted to processes with independent increments and the second to random measures with independent increments. Of course, both parts are strongly related to each other, and we had doubts about the best order in which to present them so as to avoid repetition. We decided to start with stochastic processes, where previous results are better known; this part is mainly based on Solé et al. [24], where a Malliavin calculus for Lévy processes is developed. Our approach relies on a chaotic expansion of square integrable functionals of the process, stated by Itô [5], in terms of a vector random measure on the plane; that expansion gives rise to a Fock space structure and enables us to define a Malliavin derivative and a Skorohod integral as an annihilation and a creation operator, respectively. Later, using an ad hoc canonical space, the Malliavin derivative restricted to the jumps part of the process can be conveniently interpreted as an increment quotient operator, extending the idea of the difference operator or add-one-cost operator of Poisson processes; see Nualart and Vives [18, 19], Last and Penrose [12], and Last [11] in this volume. We also extend the interesting formulas of Geiss and Laukkarinen [4] for computing the derivatives of smooth functionals, which widen considerably the practical applications of the calculus. Finally, following Murr [15], we prove that the duality coupling between the Malliavin derivative and the Skorohod integral characterizes the underlying process and, in this way, extends to stochastic processes some characterizations of Stein’s method type. We should point out that in the first part (and also in the second, as we comment below) we use the very general and deep results of Last and Penrose [12] and Last [11] to improve some results and simplify the proofs of Solé et al. [24].

It is worth remarking that there is another approach to a chaos-based Malliavin calculus for jump processes, using a different chaos expansion, which we comment on in Sect. 2.2. For that development and many applications see Di Nunno et al. [3] and the references therein.

In the second part we extend the Malliavin calculus to a random measure with independent increments. We start by recalling a representation theorem for such a random measure in terms of an integral with respect to a Poisson random measure on a product space; a weak version (in law) of that representation was obtained by Kingman [8] (see also Kingman [9]). That representation opens the possibility of building a Malliavin calculus, due to the fact that Itô’s [5] chaotic representation property also holds here. In this context, the results of Last and Penrose [12] and Last [11] play a central role since, thanks to them, it is not necessary to construct a canonical space, and we can simply interpret the Malliavin derivative as an add-one-cost operator. As in the first part, we introduce the smooth functionals of Geiss and Laukkarinen [4], and the characterization of random measures with independent increments by the duality formulas of Murr [15].

2 Part 1: Malliavin Calculus for Processes with Independent Increments

2.1 Processes with Independent Increments and Their Lévy–Itô Decomposition

This section contains the notations and properties of processes with independent increments that we use; we mainly follow the excellent book of Sato [22]. In particular, we present the so-called Lévy–Itô decomposition of a process with independent increments as the sum of a continuous function, a continuous Gaussian process with independent increments, and two integrals. One of these integrals is taken with respect to a Poisson random measure, whereas the other is taken with respect to a compensated Poisson random measure. These integrals are, respectively, the sum of the big jumps of the process and the compensated sum of the small jumps. That decomposition is a masterpiece of the theory of stochastic processes, and there exist proofs of it based on very different tools: see, for example, Sato [22] and Kallenberg [7].

Fix a probability space \((\varOmega,\mathcal{A}, \mathbb{P})\). Let \(X =\{ X_{t},\ t \geq 0\}\) be a real process with independent increments, that is, for every \(n \geq 1\) and \(0 \leq t_{1} < \cdots < t_{n}\), the random variables \(X_{t_{2}} - X_{t_{1}},\ldots,X_{t_{n}} - X_{t_{n-1}}\) are independent. We assume that \(X_{0} = 0\), a.s., and that X is continuous in probability and cadlag. A process with all these properties is also called an additive process. We assume that the σ-field \(\mathcal{A}\) is generated by X.

The hypothesis that the process is cadlag is not restrictive: every process with independent increments and continuous in probability has a cadlag modification (Sato [22, Theorem 11.5]). The conditions of continuity in probability and cadlag prevent the existence of fixed discontinuities, that is to say, there are no points t ≥ 0 such that \(\mathbb{P}\{X_{t}\neq X_{t-}\} > 0.\)

The system of generating triplets of X is denoted by \(\{(m_{t},\rho _{t},\nu _{t}),\ t \geq 0\}\). Thus, \(m:\, \mathbb{R}_{+} \rightarrow \mathbb{R}\), where \(\mathbb{R}_{+} = [0,\infty )\), is a continuous function that gives a deterministic tendency of the process (see representation (2)); \(\rho _{t} \geq 0\) is the variance of the Gaussian part of \(X_{t}\), and \(\nu _{t}\) is the Lévy measure of the jumps part. More specifically, \(\nu _{t}\) is a measure on \(\mathbb{R}_{0}\), where \(\mathbb{R}_{0} = \mathbb{R}\setminus \{0\}\), such that \(\int _{\mathbb{R}_{0}}(1 \wedge x^{2})\,\nu _{t}(\mathrm{d}x) < \infty \), where \(a \wedge b =\min (a,b)\). Observe that for all t ≥ 0 and ɛ > 0, \(\nu _{t}\big((-\varepsilon,\varepsilon )^{c}\big) < \infty \); hence \(\nu _{t}\) is finite on compact sets of \(\mathbb{R}_{0}\), and therefore σ-finite. Denote by ν the (unique) measure on \(\mathcal{B}((0,\infty ) \times \mathbb{R}_{0})\) defined by

$$\displaystyle{ \nu \big((0,t] \times B\big) =\nu _{t}(B),\ B \in \mathcal{B}(\mathbb{R}_{0}). }$$
(1)

It is also σ-finite, and moreover, \(\nu \big(\{t\} \times \mathbb{R}_{0}\big) = 0\) for every t > 0 (Sato [22, p. 53]); thus it is non-atomic. The measure ν controls the jumps of the process: for \(B \in \mathcal{B}(\mathbb{R}_{0})\), ν((0, t] × B) is the expectation of the number of jumps of the process in the interval (0, t] with size in B. We remark that in a finite time interval the process can have infinitely many jumps of small size, and there are Lévy measures such that, for example, \(\nu \big((0,t] \times (0,x_{0})\big) = \infty \) for some \(x_{0} > 0\).
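
For example, in the stationary case \(\nu (\mathrm{d}(t,x)) =\mathrm{d}t\,\nu ^{\circ }(\mathrm{d}x)\) (see Remark 1 below), taking for \(\nu ^{\circ }\) the Lévy measure of a gamma process, \(\nu ^{\circ }(\mathrm{d}x) = x^{-1}e^{-x}\mathbb{1}_{(0,\infty )}(x)\,\mathrm{d}x\), we get, for every \(x_{0} > 0\),

$$\displaystyle{\nu \big((0,t] \times (0,x_{0})\big) = t\int \limits _{0}^{x_{0}}x^{-1}e^{-x}\,\mathrm{d}x = \infty,}$$

while \(\int _{\mathbb{R}_{0}}(1 \wedge x^{2})\,\nu _{t}(\mathrm{d}x) < \infty \): such a process has a.s. infinitely many small jumps in every finite time interval.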

Write

$$\displaystyle{N(C) = \#\{t:\, (t,\varDelta X_{t}) \in C\},\quad C \in \mathcal{B}((0,\infty ) \times \mathbb{R}_{0}),}$$

the jumps measure of the process, where \(\varDelta X_{t} = X_{t} - X_{t-}\). It is a Poisson random measure on \((0,\infty ) \times \mathbb{R}_{0}\) with intensity measure ν (Sato [22, Theorem 19.2]). Let

$$\displaystyle{\hat{N }= N-\nu }$$

represent the compensated jumps measure.

Theorem 1 (Lévy–Itô Decomposition)

$$\displaystyle{ X_{t} = m_{t} + G_{t} +\int \limits _{(0,t]\times \{\vert x\vert >1\}}x\,N(\mathrm{d}(s,x)) +\int \limits _{(0,t]\times \{0<\vert x\vert \leq 1\}}x\,\widehat{N } (\mathrm{d}(s,x)), }$$
(2)

where \(\{G_{t},\ t \geq 0\}\) is a centered continuous Gaussian process with independent increments and variance \(\mathbb{E}[G_{t}^{2}] =\rho _{t}\), independent of N.

Sato [22, Theorem 19.2] gives a more precise statement, and instead of the second integral in (2) he writes

$$\displaystyle{\lim _{\varepsilon \downarrow 0}\int \limits _{(0,t]\times \{\varepsilon <\vert x\vert \leq 1\}}x\,\widehat{N } (\mathrm{d}(s,x)),}$$

where the convergence is a.s., uniform in t on every bounded interval.
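
To make the decomposition concrete, here is a minimal simulation sketch (our own illustration, not taken from the references; all function names are ours). It generates a trajectory of (2) in the stationary case of Remark 1 below, assuming drift \(m_{t} = mt\), Gaussian variance \(\rho _{t} =\rho t\), and the 1∕2-stable Lévy measure \(\nu ^{\circ }(\mathrm{d}x) = \frac{1} {2}x^{-3/2}\mathbb{1}_{(0,\infty )}(x)\,\mathrm{d}x\); the truncation level eps plays the role of ɛ in the limit above.

```python
import numpy as np

rng = np.random.default_rng(1)

def jumps_in_band(T, a, b, rng):
    # Jumps with sizes in (a, b] of the Poisson random measure N with
    # intensity dt (1/2) x^{-3/2} dx: nu((0,T] x (a,b]) = T (a^{-1/2} - b^{-1/2}).
    binv = 0.0 if np.isinf(b) else b ** -0.5
    n = rng.poisson(T * (a ** -0.5 - binv))               # number of jumps
    times = rng.uniform(0.0, T, n)                        # jump instants
    u = rng.uniform(0.0, 1.0, n)
    sizes = (a ** -0.5 - u * (a ** -0.5 - binv)) ** -2.0  # inverse-CDF sampling
    return times, sizes

def levy_ito_path(T, grid, m=0.0, rho=1.0, eps=1e-4):
    # Gaussian part: rho_t = rho * t, so G = sqrt(rho) times a Brownian motion.
    G = np.cumsum(rng.normal(0.0, np.sqrt(rho * np.diff(grid, prepend=0.0))))
    tb, xb = jumps_in_band(T, 1.0, np.inf, rng)           # big jumps, x > 1
    ts, xs = jumps_in_band(T, eps, 1.0, rng)              # small jumps, eps < x <= 1
    comp = 1.0 - np.sqrt(eps)          # int over (eps,1] of x nu°(dx), per unit time
    big = np.array([xb[tb <= t].sum() for t in grid])
    small = np.array([xs[ts <= t].sum() for t in grid])
    return m * grid + G + big + (small - comp * grid)     # the four terms of (2)

grid = np.linspace(0.0, 1.0, 201)
path = levy_ito_path(1.0, grid)                           # one trajectory of X
```

Sampling band by band is exact here because ν restricted to each band has finite mass, so the restricted Poisson random measure has finitely many points.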

Remark 1

 

  1. 1.

    The function \(t\mapsto \rho _{t}\) is continuous and increasing, and \(\rho _{0} = 0\) (Sato [22, Theorem 9.8]); hence it defines a σ-finite and non-atomic measure on \(\mathbb{R}_{+}\), denoted by ρ. The Gaussian process \(\{G_{t},\ t \geq 0\}\) introduced above defines through

    $$\displaystyle{G\big((s,t]\big) = G_{t} - G_{s},\ 0 \leq s < t,}$$

    a centered Gaussian random measure G on \(\{B \in \mathcal{B}(\mathbb{R}_{+}),\ \rho (B) < \infty \}\) with control measure ρ (see Peccati and Taqqu [20, p. 63] for this definition). In the Gaussian Malliavin calculus terminology this is called a white noise measure (Nualart [17, p. 8]). This will be important when we define Malliavin derivatives with respect to X.

  2. 2.

    Remember that a Lévy process is an additive process with stationary increments. In this case, \(m_{t} = m^{\circ }t\) for some \(m^{\circ }\in \mathbb{R}\), \(\rho _{t} =\rho ^{\circ }t\) for some \(\rho ^{\circ }\geq 0\), and the Gaussian process \(\{G_{t},\ t \geq 0\}\) can be written as \(G_{t} = \sqrt{\rho ^{\circ }}\,W_{t}\), where \(\{W_{t},\ t \geq 0\}\) is a standard Brownian motion. Also, \(\nu _{t} = t\,\nu ^{\circ }\) for some Lévy measure \(\nu ^{\circ }\), and the measure ν is simply the product measure of the Lebesgue measure on (0, ∞) and \(\nu ^{\circ }\): \(\nu (\mathrm{d}(t,x)) =\mathrm{d}t\,\nu ^{\circ }(\mathrm{d}x)\).

  3. 3.

    The notations are slightly different from Sato [22] and Solé et al. [24], where ν denotes the Lévy measure of a Lévy process, which in the previous point we write \(\nu ^{\circ }\). Also, our measure ν on \((0,\infty ) \times \mathbb{R}_{0}\) defined in (1) is denoted by \(\tilde{\nu }\) in Sato [22].

2.2 Wiener–Itô Chaos Expansion

The well-known Wiener–Itô chaos expansion of square integrable functionals of a Brownian motion can be extended to the square integrable functionals of a process with independent increments. This was another major contribution of Itô [5]; indeed, Itô proved that result for Lévy processes, but his proof is written in very general terms and also covers the case of processes with independent increments. That chaos expansion determines a Fock space structure on \(L^{2}(\mathbb{P})\), which is the basis of our Malliavin calculus development.

With the preceding notations, define a measure μ on \(\big(\mathbb{R}_{+} \times \mathbb{R},\mathcal{B}(\mathbb{R}_{+} \times \mathbb{R})\big)\) by

$$\displaystyle{ \mu (\mathrm{d}(t,x)) =\rho (\mathrm{d}t)\,\delta _{0}(\mathrm{d}x) + x^{2}\mathbb{1}_{ (0,\infty )\times \mathbb{R}_{0}}\nu (\mathrm{d}(t,x)). }$$
(3)

It is non-atomic since ν and ρ are non-atomic. Moreover, for a bounded set \(B \in \mathcal{B}(\mathbb{R})\),

$$\displaystyle{\mu ([0,t] \times B) =\rho _{t}\delta _{0}(B) +\int \limits _{B\cap \mathbb{R}_{0}}x^{2}\,\nu _{ t}(\mathrm{d}x),}$$

and the last integral on the right-hand side is equal to

$$\displaystyle\begin{array}{rcl} & & \int \limits _{B\cap \{0<\vert x\vert \leq 1\}}x^{2}\,\nu _{ t}(\mathrm{d}x) +\int \limits _{B\cap \{\ \vert x\vert >1\}}x^{2}\,\nu _{ t}(\mathrm{d}x) {}\\ & & \quad \leq \int \limits _{\{0<\vert x\vert \leq 1\}}x^{2}\,\nu _{ t}(\mathrm{d}x) + C\,\nu _{t}\big(\{x:\ \vert x\vert > 1\}\big) < \infty, {}\\ \end{array}$$

where C is a constant. Hence, the measure μ is locally finite, and, in particular, σ-finite.

Extending Itô [5] to this context, we can define a random measure (in the sense of vector measures, see Appendix 2) M on \(\big(\mathbb{R}_{+} \times \mathbb{R},\mathcal{B}(\mathbb{R}_{+} \times \mathbb{R})\big)\) with control measure μ: for \(C \in \mathcal{B}(\mathbb{R}_{+} \times \mathbb{R})\) such that \(\mu (C) < \infty \), write \(C(0) =\{ t \geq 0:\ (t,0) \in C\}\) and \(C^{{\ast}} = C \cap \big ((0,\infty ) \times \mathbb{R}_{0}\big)\), and note that

$$\displaystyle{\int \limits _{(0,\infty )\times \mathbb{R}_{0}}\mathbb{1}_{C^{{\ast}}}(s,x)x^{2}\,\nu (\mathrm{d}(s,x)) < \infty,}$$

that is, \(\mathbb{1}_{C^{{\ast}}}(t,x)x \in L^{2}(\nu )\). So the \(L^{2}(\mathbb{P})\) integral of that function with respect to \(\widehat{N }\) exists (see Appendix 1), and we can define

$$\displaystyle{M(C) = G\big(C(0)\big) +\int \limits _{C^{{\ast}}}x\,\widehat{N } (\mathrm{d}(t,x)).}$$

We prove that M is a completely random measure; see Appendix 2, where these definitions are recalled.

Proposition 1

M is a completely random measure with control measure μ.

Proof

In this proof, all sets \(C \in \mathcal{B}(\mathbb{R}_{+} \times \mathbb{R})\) are assumed to have finite μ-measure. It is clear that \(\mathbb{E}[M(C)] = 0\), and by the independence between G and N it follows that

$$\displaystyle{\mathbb{E}\big[M(C_{1})M(C_{2})\big] =\mu (C_{1} \cap C_{2}).}$$

From (45) in the appendix it is deduced that the characteristic function of M(C) is

$$\displaystyle{\mathbb{E}\Big[\exp \big(iuM(C)\big)\Big] =\exp \left [-\frac{u^{2}} {2} \rho \big(C(0)\big) +\int \limits _{\mathbb{R}_{0}}\Big(e^{iux} - 1 - iux\Big)\,\alpha _{ C}(\mathrm{d}x)\right ],}$$

where α C is the measure on \(\mathbb{R}_{0}\) defined for \(A \in \mathcal{B}(\mathbb{R}_{0})\) by

$$\displaystyle{\alpha _{C}(A) =\nu \Big (C \cap \big ((0,\infty ) \times A\big)\Big).}$$

By a standard approximation argument it is proved that if \(f: \mathbb{R}_{0} \rightarrow \mathbb{R}_{+}\) is measurable, then

$$\displaystyle{\int \limits _{\mathbb{R}_{0}}f(x)\,\alpha _{C}(\mathrm{d}x) =\int \limits _{C\cap ((0,\infty )\times \mathbb{R}_{0})}f(x)\,\nu (\mathrm{d}(t,x)).}$$

Thus

$$\displaystyle{\int \limits _{\mathbb{R}_{0}}x^{2}\,\alpha _{ C}(\mathrm{d}x) =\int \limits _{C\cap ((0,\infty )\times \mathbb{R}_{0})}x^{2}\,\nu (\mathrm{d}(t,x)) \leq \mu (C) < \infty.}$$

Therefore, α C is a Lévy measure with finite second order moment. Then M(C) has an infinitely divisible law with finite variance and Lévy measure given by α C . Furthermore, if \(C_{1},C_{2} \in \mathcal{B}(\mathbb{R}_{+} \times \mathbb{R})\) are disjoint, \(\alpha _{C_{1}\cup C_{2}} =\alpha _{C_{1}} +\alpha _{C_{2}}\), and it follows that if \(C_{1},\ldots,C_{n} \in \mathcal{B}(\mathbb{R}_{+} \times \mathbb{R})\), all with finite μ-measure, are disjoint, then \(M(C_{1}),\ldots,M(C_{n})\) are independent. □ 

Hence, we can define multiple Wiener–Itô integrals with respect to M; see Appendix 2. Let \(L_{s}^{2}(\mu ^{n})\) be the subset of symmetric functions of \(L^{2}(\mu ^{n})\), and for \(f \in L_{s}^{2}(\mu ^{n})\) denote by \(I_{n}(f)\) the multiple integral of f with respect to M.

The chaotic representation theorem for square integrable functionals of a Lévy process of Itô [5, Theorem 2] extends to this case with the same proof. So we have the chaotic decomposition property:

$$\displaystyle{L^{2}(\mathbb{P}) =\bigoplus _{ n=0}^{\infty }I_{ n}\big(L_{s}^{2}(\mu ^{n})\big),}$$

and the (unique) representation of a functional \(F \in L^{2}(\mathbb{P})\),

$$\displaystyle{ F =\sum _{ n=0}^{\infty }I_{ n}(\,f_{n}),\quad f_{n} \in L_{s}^{2}(\mu ^{n}). }$$

From this point, we can apply all the machinery of the annihilation operators (Malliavin derivatives) and creation operators (Skorohod integrals) on Fock spaces, as presented in Nualart and Vives [18, 19].
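
As a simple illustration (the computation is carried out in Step 3 of the proof of Lemma 2 below), if X is square integrable, then its centered value at time t lives in the first chaos:

$$\displaystyle{X_{t} - \mathbb{E}[X_{t}] = I_{1}\big(\mathbb{1}_{[0,t]\times \{0\}}+\mathbb{1}_{(0,t]\times \mathbb{R}_{0}}\big),}$$

so \(f_{1} =\mathbb{1}_{[0,t]\times \{0\}}+\mathbb{1}_{(0,t]\times \mathbb{R}_{0}}\) and all other kernels vanish.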

Remark 2

For a process without Gaussian part, we can consider the chaos expansion of a square integrable functional in terms of the multiple integrals with respect to the Poisson random measure N rather than M, and then define a Malliavin derivative and a Skorohod integral; see Di Nunno et al. [3] and the references therein. Indeed, in the second part of this chapter, dealing with random measures with independent increments, we combine that approach with the multiple integral with respect to M.

2.3 Derivative Operators

Let \(F \in L^{2}(\mathbb{P})\) with a finite chaos expansion

$$\displaystyle{F =\sum _{ n=0}^{N}I_{ n}(\,f_{n}),}$$

where \(N < \infty \). The Malliavin derivative of F is defined as the element of \(L^{2}(\mu \otimes \mathbb{P})\) given by

$$\displaystyle{D_{z}F =\sum _{ n=1}^{N}nI_{ n-1}\big(\,f_{n}(z,\cdot )\big),\,z \in \mathbb{R}_{+} \times \mathbb{R}.}$$

This operator is unbounded. However, the set of elements of \(L^{2}(\mathbb{P})\) with finite chaos expansion is dense in \(L^{2}(\mathbb{P})\), and the operator D is closable; the domain of D, denoted by dom D, coincides with the set of \(F \in L^{2}(\mathbb{P})\) with chaotic decomposition

$$\displaystyle{F =\sum _{ n=0}^{\infty }I_{ n}(\,f_{n}),}$$

such that

$$\displaystyle{ \sum _{n=1}^{\infty }n\,n!\Vert \,f_{ n}\Vert _{L_{s}^{2}(\mu ^{n})}^{2} < \infty. }$$
(4)

The Malliavin derivative of such an F is given by

$$\displaystyle{D_{z}F =\sum _{ n=1}^{\infty }nI_{ n-1}\Big(\,f_{n}\big(z,\cdot \big)\Big),\ \ z \in \mathbb{R}_{+} \times \mathbb{R},}$$

where the convergence of the series is in \(L^{2}(\mu \otimes \mathbb{P})\).
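
For instance, for a square integrable X, the chaos expansion of \(X_{t} - \mathbb{E}[X_{t}]\) given in Sect. 2.2 has the single kernel \(f_{1} =\mathbb{1}_{[0,t]\times \{0\}}+\mathbb{1}_{(0,t]\times \mathbb{R}_{0}}\), so

$$\displaystyle{D_{z}X_{t} = f_{1}(z),\ \text{that is,}\quad D_{s,0}X_{t} =\mathbb{1}_{[0,t]}(s)\quad \text{and}\quad D_{s,x}X_{t} =\mathbb{1}_{(0,t]}(s),\ x\neq 0,}$$

in agreement, up to μ-null sets, with formulas (10) and (11) below applied formally to f(y) = y.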

The domain dom D is a Hilbert space with the scalar product

$$\displaystyle{ \langle F,G\rangle = \mathbb{E}[F\,G] + \mathbb{E}\left [\int \limits _{\mathbb{R}_{+}\times \mathbb{R}}D_{z}F\,D_{z}G\,\mu (\mathrm{d}z)\right ]. }$$
(5)

For all these properties we refer to Nualart and Vives [18].

Given the form of the measure μ, for \(f: (\mathbb{R}_{+} \times \mathbb{R})^{n} \rightarrow \mathbb{R}\) measurable, positive or \(\mu ^{\otimes n}\)-integrable, we have

$$\displaystyle{ \begin{array}{rl} \int \limits _{(\mathbb{R}_{+}\times \mathbb{R})^{n}} & f\,d\mu ^{\otimes n} \\ & =\int \limits _{\mathbb{R}_{+}\times (\mathbb{R}_{+}\times \mathbb{R})^{n-1}}f\big((t,0),z_{1},\ldots,z_{n-1}\big)\,\rho (\mathrm{d}t)\,\mu ^{\otimes (n-1)}(\mathrm{d}z_{1},\ldots,\mathrm{d}z_{n-1}) \\ &\quad +\int \limits _{(0,\infty )\times \mathbb{R}_{0}\times (\mathbb{R}_{+}\times \mathbb{R})^{n-1}}f(z_{1},z_{2},\ldots,z_{n})\,\mu ^{\otimes n}(\mathrm{d}z_{1},\ldots,\mathrm{d}z_{n}). \end{array} }$$

As a consequence, when ρ ≠ 0 and ν ≠ 0, it is natural to consider two more spaces: let \(\text{dom}\,D^{0}\) (if ρ ≠ 0) be the set of \(F \in L^{2}(\mathbb{P})\) with decomposition \(F =\sum _{n=0}^{\infty }I_{n}(f_{n})\) such that

$$\displaystyle{\sum _{n=1}^{\infty }n\,n!\int \limits _{ \mathbb{R}_{+}\times (\mathbb{R}_{+}\times \mathbb{R})^{n-1}}f_{n}^{2}((t,0),z_{ 1},\ldots,z_{n-1})\,\rho (\mathrm{d}t)\,\mu ^{\otimes (n-1)}(\mathrm{d}z_{ 1},\ldots,\mathrm{d}z_{n-1}) < \infty.}$$

For \(F \in \text{dom}\,D^{0}\) we can define the square integrable stochastic process

$$\displaystyle{D_{t,0}F =\sum _{ n=1}^{\infty }n\,I_{ n-1}\Big(\,f_{n}\big((t,0),\cdot \big)\Big),}$$

where the convergence is in \(L^{2}(\rho \otimes \mathbb{P})\). Analogously, if ν ≠ 0, let \(\text{dom}\,D^{J}\) be the set of \(F \in L^{2}(\mathbb{P})\) such that

$$\displaystyle{\ \ \sum _{n=1}^{\infty }n\,n!\int \limits _{ (0,\infty )\times \mathbb{R}_{0}\times (\mathbb{R}_{+}\times \mathbb{R})^{n-1}}f_{n}^{2}\,d\mu ^{\otimes n} < \infty,}$$

and for \(F \in \text{dom}\,D^{J}\), define

$$\displaystyle{D_{z}F =\sum _{ n=1}^{\infty }nI_{ n-1}\Big(\,f_{n}\big(z,\cdot \big)\Big),}$$

where the convergence is in \(L^{2}\big((0,\infty ) \times \mathbb{R}_{0}\times \varOmega,x^{2}\,\nu (\mathrm{d}(t,x)) \otimes \mathbb{P}\big).\)

It is clear that when both ρ ≠ 0 and ν ≠ 0, then \(\text{dom}\,D = \text{dom}\,D^{0} \cap \text{dom}\,D^{J}\).

2.4 The Skorohod Integral

Following the scheme of Nualart and Vives [18], we can define a creation operator (Skorohod—or Kabanov–Skorohod—integral) in the following way: let \(g \in L^{2}(\mu \otimes \mathbb{P})\) with chaotic decomposition

$$\displaystyle{ g(z) =\sum _{ n=0}^{\infty }I_{ n}(\,f_{n}(z,\cdot )), }$$
(6)

where \(f_{n} \in L^{2}\big(\mu ^{\otimes (n+1)}\big)\) is symmetric in the last n variables. Denote by \(\hat{f } _{n}\) the symmetrization in all n + 1 variables. If

$$\displaystyle{ \sum _{n=0}^{\infty }(n + 1)!\,\Vert \,\hat{f } _{ n}\Vert _{L_{s}^{2}(\mu ^{n+1})}^{2} < \infty, }$$
(7)

define the Skorohod integral of g by

$$\displaystyle{\delta (g) =\sum _{ n=0}^{\infty }I_{ n+1}(\hat{\,f} _{n}),}$$

where the convergence is in \(L^{2}(\mathbb{P})\). Denote by dom δ the set of g that satisfy (7). The operator δ is the dual of the operator D, that is, a process \(g \in L^{2}(\mu \otimes \mathbb{P})\) belongs to dom δ if and only if there is a constant C such that for all F ∈ dom D,

$$\displaystyle{\Big\vert \mathbb{E}\int \limits _{\mathbb{R}_{+}\times \mathbb{R}}g(z)\,D_{z}F\,\mu (\mathrm{d}z)\Big\vert \leq C\,\big(\mathbb{E}[F^{2}]\big)^{1/2}.}$$

If g ∈ dom δ, then δ(g) is the element of \(L^{2}(\mathbb{P})\) characterized by the duality (or integration by parts) formula

$$\displaystyle{ \mathbb{E}[\delta (g)\,F] = \mathbb{E}\int \limits _{\mathbb{R}_{+}\times \mathbb{R}}g(z)\,D_{z}F\,\mu (\mathrm{d}z), }$$
(8)

for any F ∈ dom D.
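
For example, if \(h \in L^{2}(\mu )\) is deterministic, then in (6) only the term n = 0 appears, \(\hat{f } _{0} = h\), and \(\delta (h) = I_{1}(h) =\int _{\mathbb{R}_{+}\times \mathbb{R}}h\,\mathrm{d}M\); the duality formula (8) then reads

$$\displaystyle{\mathbb{E}\left [F\int \limits _{\mathbb{R}_{+}\times \mathbb{R}}h\,\mathrm{d}M\right ] = \mathbb{E}\int \limits _{\mathbb{R}_{+}\times \mathbb{R}}h(z)\,D_{z}F\,\mu (\mathrm{d}z),}$$

which is exactly the form of the duality used in the proof of Theorem 2 below.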

For more properties of the operator δ in the Lévy processes case, including its relationship with the stochastic integral with respect to the measure M, and a Clark–Ocone–Haussman formula, we refer to Solé et al. [24, 25].

2.5 Derivation of Smooth Functionals

Following an interesting approach of Geiss and Laukkarinen [4] (in the Lévy processes context), we will prove the following formulas for the derivative of smooth functionals: denote by \(\mathcal{C}_{b}^{\infty }(\mathbb{R}^{n})\) the set of infinitely differentiable functions such that the function and all its partial derivatives are bounded. Let \(f \in \mathcal{C}_{b}^{\infty }(\mathbb{R}^{n})\) and consider

$$\displaystyle{ F = f\big(X_{t_{1}},\ldots,X_{t_{n}}\big). }$$
(9)

We will prove that F ∈ dom D and

$$\displaystyle{ D_{t,0}F =\sum _{ j=1}^{n}\partial _{ j}f\big(X_{t_{1}},\ldots,X_{t_{n}}\big)\mathbb{1}_{[0,t_{j}]}(t), }$$
(10)

and for x ≠ 0,

$$\displaystyle{ D_{t,x}F = \frac{f\big(X_{t_{1}}+x\mathbb{1}_{[0,t_{1}]}(t),\ldots,X_{t_{n}}+x\mathbb{1}_{[0,t_{n}]}(t)\big)-f\big(X_{t_{1}},\ldots,X_{t_{n}}\big)} {x}. }$$
(11)

Note the following relationship between both derivatives of a smooth functional:

$$\displaystyle{D_{t,0}F =\lim _{x\rightarrow 0}D_{t,x}F,\ \text{a.s.}}$$

Geiss and Laukkarinen [4] (in the Lévy processes case) give a direct proof of (10) and (11) by using Fourier inversion and a Clark–Ocone–Haussman type formula. They also show that the random variables of the form (9) are dense in \(L^{2}(\mathbb{P})\) with respect to the norm induced by (5), and hence it is possible to define the Malliavin derivatives starting from (10) and (11). In order to prove these formulas in our context we will follow an alternative procedure: we first prove them in a canonical space associated with the process with independent increments, and later we transfer them to the general case.

2.5.1 Malliavin Derivatives in the Canonical Space

Since the Gaussian part and the jumps part of X are independent, we can construct a version of X in a canonical probability space of the form \(\big(\varOmega _{G} \times \varOmega _{N},\mathcal{A}_{G} \otimes \mathcal{A}_{N}, \mathbb{P}_{G} \otimes \mathbb{P}_{N}\big)\) where

  • \((\varOmega _{G},\mathcal{A}_{G}, \mathbb{P}_{G})\) is the canonical space associated with the Gaussian continuous process G; specifically, \(\varOmega _{G} = \mathcal{C}(\mathbb{R}_{+})\) is the space of continuous functions on \(\mathbb{R}_{+}\), \(\mathcal{A}_{G}\) the Borel σ-algebra generated by the topology of the uniform convergence on compact sets, and \(\mathbb{P}_{G}\) the probability that makes the projections

    $$\displaystyle\begin{array}{rcl} & & G_{t}^{{\ast}}:\varOmega _{ G} \rightarrow \mathbb{R} {}\\ & & \qquad f\mapsto f(t) {}\\ \end{array}$$

    a process with the same law as \(\{G_{t},\ t \geq 0\}\).

  • \((\varOmega _{N},\mathcal{A}_{N}, \mathbb{P}_{N})\) is a canonical space associated with the Poisson random measure N. Essentially, \(\varOmega _{N}\) is formed by infinite sequences \(\omega =\big ((t_{1},x_{1}),(t_{2},x_{2}),\ldots \big) \in \big ((0,\infty ) \times \mathbb{R}_{0}\big)^{\mathbb{N}}\) (see Appendix 3 for that construction), where \(t_{i}\) are the instants of jump of the process, and \(x_{i}\) the sizes of the corresponding jumps. In this space, under \(\mathbb{P}_{N}\), the mapping defined by

    $$\displaystyle{N^{{\ast}}(\omega ) =\sum \delta _{ (t_{j},x_{j})},\ \text{if}\ \omega =\big ((t_{1},x_{1}),(t_{2},x_{2}),\ldots \big)}$$

    is a Poisson random measure with intensity measure ν.

    Define

    $$\displaystyle{ J_{t}^{{\ast}} =\int \limits _{ (0,t]\times \{\vert x\vert >1\}}x\,N^{{\ast}}(\mathrm{d}(s,x)) +\int \limits _{ (0,t]\times \{0<\vert x\vert \leq 1\}}x\,\widehat{N } ^{{\ast}}(\mathrm{d}(s,x)), }$$

    where \(\widehat{N } ^{{\ast}} = N^{{\ast}}-\nu.\) Then \(J^{{\ast}} =\{ J_{t}^{{\ast}},\ t \geq 0\}\) is a process with independent increments with generating triplets \((0,0,\nu _{t})\).

  • Finally, in the product space \(\varOmega _{G} \times \varOmega _{N}\) we write

    $$\displaystyle{X_{t}^{{\ast}} = m_{ t} + G_{t}^{{\ast}} + J_{ t}^{{\ast}},}$$

    and call it the canonical version of the process X.

2.5.2 Derivative \(\boldsymbol{D}_{t,0}\)

In order to compute the derivative \(D_{t,0}F\) for \(F \in L^{2}(\varOmega _{G} \times \varOmega _{N})\), from the isometry

$$\displaystyle{L^{2}(\varOmega _{ G} \times \varOmega _{N}) \simeq L^{2}(\varOmega _{ G};L^{2}(\varOmega _{ N})),}$$

we can consider F as an element of \(L^{2}(\varOmega _{G};L^{2}(\varOmega _{N}))\) and apply the theory of Malliavin derivatives for random variables with values in a separable Hilbert space following Nualart [17, p. 31]. This derivative coincides with \(D_{t,0}\). This is proved from the fact that, by definition, an \(L^{2}(\varOmega _{N})\)-valued smooth random variable has the form

$$\displaystyle{F =\sum _{ i=1}^{n}F_{ i}\,H_{i},}$$

where \(F_{i}\) are standard smooth variables (see Nualart [17, p. 25]) and \(H_{i} \in L^{2}(\varOmega _{N})\). Define the Malliavin derivative of F as

$$\displaystyle{ D_{t}^{{\ast}}F =\sum _{ i=1}^{n}D_{ t}F_{i} \otimes H_{i}. }$$
(12)

This definition is extended to a subspace \(\text{dom}\,D^{{\ast}}\) by a density argument.

Proposition 2

\(\text{dom}\,D^{{\ast}} = \text{dom}\,D^{0}\) , and for \(F \in \text{dom}\,D^{{\ast}}\) ,

$$\displaystyle{ D_{t}^{{\ast}}F = D_{ t,0}F. }$$
(13)

Proof

First consider the functionals of the form

$$\displaystyle{F = N^{{\ast}}(C_{ 1})\cdots N^{{\ast}}(C_{ m})G^{{\ast}}(B_{ 1})\cdots G^{{\ast}}(B_{ k}),}$$

where \(C_{1},\ldots,C_{m} \in \mathcal{B}((0,\infty ) \times \mathbb{R}_{0})\) are bounded, pairwise disjoint, and at strictly positive distance from the t-axis, and \(B_{1},\ldots,B_{k} \in \mathcal{B}(\mathbb{R}_{+})\) are pairwise disjoint, with finite ρ-measure. Itô [5] shows that the family of such functionals constitutes a fundamental set in \(L^{2}(\mathbb{P}_{G} \otimes \mathbb{P}_{N})\). Moreover, Itô shows that such an F can be written as a sum of multiple integrals:

$$\displaystyle{F = I_{0}(\,f_{0}) + \cdots + I_{m+k}(\,f_{m+k}),}$$

and then the derivatives are easy to compute, proving equality (13), which is extended to \(\text{dom}\,D^{{\ast}}\) by density. See Solé et al. [24]. □ 

From the above proposition and the properties of Malliavin derivatives in the Gaussian white noise case, it follows that the first rule of differentiation (10) holds in the canonical space:

Proposition 3

Let \(F = f\big(X_{t_{1}}^{{\ast}},\ldots,X_{t_{n}}^{{\ast}}\big)\) where \(f \in \mathcal{C}_{b}^{\infty }(\mathbb{R}^{n})\) . Then \(F \in \text{dom}\,D^{0}\) and

$$\displaystyle{ D_{t,0}F =\sum _{ j=1}^{n}\partial _{ j}f\big(X_{t_{1}}^{{\ast}},\ldots,X_{ t_{n}}^{{\ast}}\big)\mathbb{1}_{ [0,t_{j}]}(t). }$$

2.5.3 Derivative \(\boldsymbol{D_{t,x},\,x\neq 0}\)

Consider \(\omega = (\omega ^{G},\omega ^{N}) \in \varOmega _{G} \times \varOmega _{N}\), \(\omega ^{N} =\big ((t_{1},x_{1}),(t_{2},x_{2}),\ldots \big) \in \big ((0,\infty ) \times \mathbb{R}_{0}\big)^{\mathbb{N}}\). Given \(z = (t,x) \in (0,\infty ) \times \mathbb{R}_{0}\), we add to \(\omega ^{N}\) a jump of size x at instant t, call the new element \(\omega _{z}^{N} =\big ((t_{1},x_{1}),(t_{2},x_{2}),\ldots,(t,x),\ldots \big)\), and write \(\omega _{z} = (\omega ^{G},\omega _{z}^{N})\). For a random variable F, we define the quotient operator

$$\displaystyle{\boldsymbol{\varPsi }_{t,x}F(\omega ) = \frac{F(\omega _{t,x}) - F(\omega )} {x}.}$$

See Solé et al. [24] for the measurability properties of this function. By iteration, we define

$$\displaystyle{\varPsi _{z_{1},\ldots,z_{n}}^{n}F:=\varPsi _{ z_{1}}\varPsi _{z_{2},\ldots,z_{n}}^{n-1}F.}$$

Since this function only depends on the part \(\omega ^{N}\), we can assume that X does not have a Gaussian part.
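
The quotient operator is straightforward to implement on finite jump configurations. The following minimal sketch (our own illustration; the encoding of \(\omega ^{N}\) as a finite list is an assumption made for the example) evaluates \(\varPsi _{t,x}\) on a functional \(f(X_{s})\) and exhibits the limit relation with \(D_{t,0}\) noted after (11):

```python
import numpy as np

def X_s(omega, s):
    # Pure-jump trajectory at time s: sum of the jump sizes x_i with t_i <= s
    # (omega is a finite list of pairs (t_i, x_i); no Gaussian part).
    return sum(x for t, x in omega if t <= s)

def psi(F, omega, t, x):
    # Quotient operator: add to omega a jump of size x at instant t.
    return (F(omega + [(t, x)]) - F(omega)) / x

s = 1.0
F = lambda omega: np.sin(X_s(omega, s))          # a smooth functional f(X_s)
omega = [(0.2, 0.5), (0.7, 1.3), (1.4, 0.4)]     # a sample jump configuration
print(psi(F, omega, 0.5, 1e-3))   # ~ cos(X_s): as x -> 0 we recover D_{t,0}
print(psi(F, omega, 2.0, 1e-3))   # 0: a jump added after s does not change X_s
```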

In the following lemma we will consider a set Δ of the form \((m,m + 1] \times \{ x:\ n < \vert x\vert \leq n + 1\}\) or \((m,m + 1] \times \{ x:\ 1/(n + 1) < \vert x\vert \leq 1/n\}\), for some m ≥ 0 and n ≥ 1. Then \(\nu (\varDelta ) < \infty \) and, for every k ≥ 1, \(\int _{\varDelta }\vert x\vert ^{k}\,\nu (\mathrm{d}(t,x)) < \infty.\) The Poisson random measure N restricted to Δ has finite intensity measure (from now on, in this section, we suppress the ∗ to simplify the notation). The ordinary n-fold product measure is denoted by \(N^{\otimes n}\), and by \(N^{(n)}\) the measure

$$\displaystyle{N^{(n)}(D) = N^{\otimes n}(D_{\neq }),}$$

where \(D \in \mathcal{B}(\varDelta ^{n})\) and \(D_{\neq }\) is the set of elements \((z_{1},\ldots,z_{n}) \in D\) such that \(z_{i}\neq z_{j}\) if \(i\neq j\). The measure defined by \(\mathbb{E}[N^{(n)}(D)]\) is called the n-th factorial moment measure of the Poisson random measure N (see Last [11, formula (1.9)] or Schneider and Weil [23, p. 55]). For \(F \in L^{2}(\varOmega _{N})\) and every \(D \in \mathcal{B}(\varDelta ^{n})\), the following integrals are finite and

$$\displaystyle{ \mathbb{E}\big[FN^{(n)}(D)\big] =\int \limits _{ D}\mathbb{E}\big[F(\omega _{z_{1},\ldots,z_{n}})\big]\,\nu (\mathrm{d}z_{1})\cdots \nu (\mathrm{d}z_{n}). }$$
(14)

To deduce that equality, note that we can write F = f(N), for some f defined on the set of integer valued (including ∞) locally finite measures (see Part 2). For \(\omega = (z_{1},z_{2},\ldots )\), \(N(\omega ) =\sum _{i}\delta _{z_{i}}\), and hence \(N(\omega _{z}) =\sum _{i}\delta _{z_{i}} +\delta _{z}\). Then, equality (14) is just a reformulation of a generalized Mecke formula (see Last [11, formula (1.10)]).

As a consequence, we have

Lemma 1

Let \(F_{k},F \in L^{2}(\mathbb{P}_{N})\) be such that \(\lim _{k}F_{k} = F\) in \(L^{2}(\mathbb{P}_{N})\) . Then for every \(D \in \mathcal{B}(\varDelta ^{n})\)

$$\displaystyle{\lim _{k}\mathbb{E}\int \limits _{D}\big\vert \varPsi _{z_{1},\ldots,z_{n}}F_{k} -\varPsi _{z_{1},\ldots,z_{n}}F\big\vert \,\nu (\mathrm{d}z_{1})\cdots \nu (\mathrm{d}z_{n}) = 0.}$$

Proof

The proof is very similar to the proof of Lemma 2 of Last [11]. It suffices to show that for every \(m = 1,\ldots,n,\)

$$\displaystyle{\lim _{k}\mathbb{E}\int \limits _{D}\Big\vert \frac{F_{k}(\omega _{z_{1},\ldots,z_{m}}) - F(\omega _{z_{1},\ldots,z_{m}})} {x_{1}\cdots x_{m}} \Big\vert \,\nu (\mathrm{d}z_{1})\cdots \nu (\mathrm{d}z_{n}) = 0,}$$

where \(z_{i} = (t_{i},x_{i})\). Since Δ is bounded and far from 0, the \(x_{1},\ldots,x_{m}\) in the denominator can be suppressed. By (14),

$$\displaystyle\begin{array}{rcl} & & \mathbb{E}\int \limits _{D}\big\vert F_{k}(\omega _{z_{1},\ldots,z_{m}}) - F(\omega _{z_{1},\ldots,z_{m}})\big\vert \,\nu (\mathrm{d}z_{1})\cdots \nu (\mathrm{d}z_{n}) = \mathbb{E}\Big[\big\vert F_{k} - F\big\vert N^{(n)}(D)\Big] {}\\ & & \quad \leq \Big (\mathbb{E}\big[(F_{k} - F)^{2}\big]\mathbb{E}\big[N^{(n)}(D)^{2}\big]\Big)^{1/2}, {}\\ \end{array}$$

which goes to 0 as k → . □ 

Proposition 4

Let \(F \in L^{2}(\mathbb{P}_{G} \otimes \mathbb{P}_{N})\) such that

$$\displaystyle{ \mathbb{E}\Big[\int \limits _{\mathbb{R}_{+}\times \mathbb{R}_{0}}\big(\boldsymbol{\varPsi }_{z}F\big)^{2}\,\mu (\mathrm{d}z)\Big] < \infty. }$$
(15)

Then \(F \in \text{dom}\,D^{J}\) and

$$\displaystyle{D_{z}F(\omega ) =\boldsymbol{\varPsi } _{z}F(\omega ),\ \ \mu \otimes \mathbb{P} -\mathrm{ a.e.}\ (z,\omega ) \in (0,\infty ) \times \mathbb{R}_{0} \times \varOmega.}$$

Proof

To simplify the notation we write \(\mu (\mathrm{d}(t,x))\) rather than \(x^{2}\,\nu (\mathrm{d}(t,x))\). First it is proved that for \(f \in L_{s}^{2}(\mu ^{n})\), 

$$\displaystyle{ DI_{n}(\,f) =\boldsymbol{\varPsi } I_{n}(\,f),\ \mu \otimes \mathbb{P} -\text{a.e.} }$$
(16)

To prove this, first, instead of f we consider \(f\mathbb{1}_{\varDelta }^{\otimes n}\). Thus, as before, the multiple integrals (with respect to M) can be computed pathwise, and the above equality is easily checked, and then extended to f. Moreover, it is proved that the operator Ψ is closed, again working first with the restriction to Δ. Hence, if \(F \in \text{dom}\,D^{J}\), then DF = Ψ F. For the details see Solé et al. [24].

Note that as a consequence of (16),

$$\displaystyle{ f_{n}(z_{1},\ldots,z_{n}) = \frac{1} {n!}\,\mathbb{E}\Big[\varPsi _{z_{1},\ldots,z_{n}}^{n}I_{ n}(\,f_{n})\Big],\ \mu ^{\otimes n} -\text{a.e.} }$$

This property is extended to a general \(F =\sum _{ n=0}^{\infty }I_{n}(\,f_{n}) \in L^{2}(\mathbb{P})\) to get a Stroock type formula

$$\displaystyle{ f_{n} = \frac{1} {n!}\,\mathbb{E}\big[\varPsi ^{n}F\big],\ \mu ^{\otimes n} -\text{a.e.} }$$
(17)

This is proved by considering \(F_{k} =\sum _{n=0}^{k}I_{n}(f_{n})\). We have that for k ≥ n, \(f_{n} = \frac{1} {n!}\,\mathbb{E}\big[\varPsi ^{n}F_{ k}\big]\). By Lemma 1, for every \(D \in \mathcal{B}(\varDelta ^{n})\)

$$\displaystyle{\lim _{k}\int \limits _{D}\big\vert \mathbb{E}\big[\varPsi ^{n}F_{ k}\big] - \mathbb{E}\big[\varPsi ^{n}F\big]\big\vert \,\mathrm{d}\nu ^{\otimes n} =\int \limits _{ D}\big\vert n!f_{n} - \mathbb{E}\big[\varPsi ^{n}F\big]\big\vert \,\mathrm{d}\nu ^{\otimes n} = 0.}$$

Then

$$\displaystyle{f_{n} = \frac{1} {n!}\,\mathbb{E}\big[\varPsi ^{n}F\big],\,\nu ^{\otimes n} -\mbox{a.e. on $\varDelta ^{n}$},}$$

and also \(\mu ^{\otimes n}\)-a.e. Hence the equality holds on \(\big((0,\infty ) \times \mathbb{R}_{0}\big)^{n}\), because \((0,\infty ) \times \mathbb{R}_{0}\) is a countable union of sets of type Δ.

Now assume that condition (15) holds. Then

$$\displaystyle{\varPsi _{z}F =\sum _{ n=0}^{\infty }I_{ n}(g_{n}(z,\cdot )),}$$

with

$$\displaystyle{ \sum _{n=0}^{\infty }n!\int g_{ n}^{2}\,\mathrm{d}\mu ^{\otimes (n+1)} < \infty. }$$
(18)

Thanks to (17), the kernel \(g_{n}\) is related to the kernel \(f_{n+1}\) by

$$\displaystyle{g_{n}(z,z_{1},\ldots,z_{n}) = \frac{1} {n!}\,\mathbb{E}\Big[\varPsi _{z_{1},\ldots,z_{n}}^{n}\varPsi _{ z}F\Big] = \frac{1} {n!}\,\mathbb{E}\Big[\varPsi _{z_{1},\ldots,z_{n},z}^{n+1}F\Big] = (n+1)f_{ n+1}(z,z_{1},\ldots,z_{n}),}$$

and by (18),

$$\displaystyle{\sum _{n=1}^{\infty }nn!\int f_{ n}^{2}\,\mathrm{d}\mu ^{\otimes n} =\sum _{ n=0}^{\infty }n!\int g_{ n}^{2}\,\mathrm{d}\mu ^{\otimes (n+1)} < \infty,}$$

which is the condition for \(F \in \text{dom}\,D^{J}\). □ 
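
As a simple check of the Stroock type formula (17) (our own illustration), take X square integrable and without Gaussian part, and \(F = X_{t} - \mathbb{E}[X_{t}]\). Adding a jump of size x at instant t′ changes \(X_{t}\) by \(x\mathbb{1}_{(0,t]}(t')\), so

$$\displaystyle{\varPsi _{t',x}F = \frac{x\mathbb{1}_{(0,t]}(t')} {x} =\mathbb{1}_{(0,t]}(t'),}$$

which is deterministic, and (17) with n = 1 recovers the first chaos kernel \(f_{1} =\mathbb{1}_{(0,t]\times \mathbb{R}_{0}}\) of the centered process (see Sect. 2.2).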

We can deduce the second rule of differentiation (11):

Proposition 5

Let \(F = f\big(X_{t_{1}}^{{\ast}},\ldots,X_{t_{n}}^{{\ast}}\big)\) where \(f \in \mathcal{C}_{b}^{\infty }(\mathbb{R}^{n})\) . Then \(F \in \text{dom}\,D^{J}\) and for x ≠ 0,

$$\displaystyle{ D_{t,x}F = \frac{f\big(X_{t_{1}}^{{\ast}} + x\mathbb{1}_{[0,t_{1}]}(t),\ldots,X_{t_{n}}^{{\ast}} + x\mathbb{1}_{[0,t_{n}]}(t)\big) - f\big(X_{t_{1}}^{{\ast}},\ldots,X_{t_{n}}^{{\ast}}\big)} {x}. }$$

Proof

To shorten the notation we suppress the star in \(X_{t}^{{\ast}}\). We consider the case \(F = f\big(X_{s}\big)\); the general case is similar. We have

$$\displaystyle{\varPsi _{t,x}F = \frac{f\big(X_{s} + x\mathbb{1}_{[0,s]}(t)\big) - f\big(X_{s}\big)} {x}.}$$

By Proposition 4 it suffices to prove that the following integral is finite:

$$\displaystyle\begin{array}{rcl} & & \mathbb{E}\left [\int \limits _{(0,\infty )\times \mathbb{R}_{0}}\big(\boldsymbol{\varPsi }_{z}F\big)^{2}\,\mu (\mathrm{d}z)\right ] = \mathbb{E}\left [\int \limits _{ (0,\infty )\times \mathbb{R}_{0}}\big(\boldsymbol{\varPsi }_{t,x}F\big)^{2}\,x^{2}\nu (\mathrm{d}(t,x))\right ] {}\\ & & \quad = \mathbb{E}\left [\int \limits _{(0,\infty )\times \mathbb{R}_{0}}\bigg(\frac{f\big(X_{s} + x\mathbb{1}_{[0,s]}(t)\big) - f\big(X_{s}\big)} {x} \bigg)^{2}\,x^{2}\nu (\mathrm{d}(t,x))\right ] {}\\ & & \quad = \mathbb{E}\left [\int \limits _{(0,s]\times \mathbb{R}_{0}}\Big(f\big(X_{s} + x\big) - f\big(X_{s}\big)\Big)^{2}\,\nu (\mathrm{d}(t,x))\right ] {}\\ & & \quad = \mathbb{E}\left [\int \limits _{\mathbb{R}_{0}}\Big(\,f\big(X_{s} + x\big) - f\big(X_{s}\big)\Big)^{2}\,\nu _{ s}(\mathrm{d}x)\right ]. {}\\ \end{array}$$

To this end, by the mean value theorem, there is a random point Y such that

$$\displaystyle{f\big(X_{s} + x\big) - f\big(X_{s}\big) = xf'(Y ).}$$

Since f′ is bounded, say by C, using that \(\nu _{s}\) is a Lévy measure, for every ω,

$$\displaystyle{\int \limits _{\{\vert x\vert \leq 1\}}\Big(f\big(X_{s} + x\big) - f\big(X_{s}\big)\Big)^{2}\,\nu _{ s}(\mathrm{d}x) \leq C^{2}\int \limits _{ \{\vert x\vert \leq 1\}}x^{2}\,\nu _{ s}(\mathrm{d}x) = C' < \infty,}$$

where C′ is a constant independent of ω. Similarly,

$$\displaystyle{\int \limits _{\{\vert x\vert >1\}}\Big(\,f\big(X_{s} + x\big) - f\big(X_{s}\big)\Big)^{2}\,\nu _{ s}(\mathrm{d}x) \leq C''\,\nu _{s}\{x:\,\vert x\vert > 1\} = C''' < \infty,}$$

where C″ and C‴ are constants independent of ω. □ 

2.5.4 Transfer of the Derivative Rules from the Canonical Space to an Arbitrary Space

Recall that we write a star to denote random variables, measures, processes, or operators in the canonical space. We consider a process with independent increments X on \((\varOmega,\mathcal{A}, \mathbb{P})\), with jumps measure N and independent Gaussian part G having the same laws as \(N^{{\ast}}\) and \(G^{{\ast}}\), respectively, the latter related to the additive process \(X^{{\ast}}\) constructed in the canonical space \(\big(\varOmega _{G} \times \varOmega _{N},\mathcal{A}_{G} \otimes \mathcal{A}_{N}, \mathbb{P}_{G} \otimes \mathbb{P}_{N}\big)\). Note that the generating triplets of X and \(X^{{\ast}}\) coincide, and hence the measures μ and \(\mu ^{{\ast}}\) (see (3)) are the same. Moreover, the Fock space structure of \(L^{2}(\mathbb{P})\) allows us to transfer some properties of the derivatives and Skorohod integrals in the canonical space to the space \((\varOmega,\mathcal{A}, \mathbb{P})\). This can be done thanks to the fact that to a square integrable random variable \(F \in L^{2}(\mathbb{P})\) with

$$\displaystyle{F =\sum _{ n=0}^{\infty }I_{ n}(\,f_{n}),\quad f_{n} \in L_{s}^{2}(\mu ^{n}),}$$

we can associate \(F^{{\ast}}\in L^{2}(\mathbb{P}_{G} \otimes \mathbb{P}_{N})\) given by

$$\displaystyle{F^{{\ast}} =\sum _{ n=0}^{\infty }I_{ n}^{{\ast}}(\,f_{ n}).}$$

That is, the kernels of F and \(F^{{\ast}}\) are the same. In a similar way, given that \(g \in L^{2}\big(\mathbb{R}_{+} \times \mathbb{R}\times \varOmega,\mathcal{B}(\mathbb{R}_{+} \times \mathbb{R}) \otimes \mathcal{A},\mu \otimes \mathbb{P}\big)\) has a chaotic decomposition

$$\displaystyle{ g(z) =\sum _{ n=0}^{\infty }I_{ n}(\,f_{n}(z,\cdot )), }$$
(19)

where \(f_{n} \in L^{2}(\mu ^{\otimes (n+1)})\) is symmetric in the last n variables, we can transfer from g to \(g^{{\ast}}\), and if \(g \in \text{dom}\,\delta \), then \(g^{{\ast}}\in \text{dom}\,\delta ^{{\ast}}\). More specifically,

Lemma 2

With the previous notations, for every \(t_{1},\ldots,t_{n} \in \mathbb{R}_{+}\) and \(F \in L^{2}(\mathbb{P})\) , we have that

$$\displaystyle{ \big(X_{t_{1}},\ldots,X_{t_{n}},F\big)\ \mathop{=}\limits ^{\mathcal{L}}\ \big(X_{ t_{1}}^{{\ast}},\ldots,X_{ t_{n}}^{{\ast}},F^{{\ast}}\big), }$$
(20)

where \(\mathop{=}\limits ^{\mathcal{L}}\) means equality in law.

Proof

We undertake the proof in several steps:

Step 1 :

Let \(F =\sum _{ n=0}^{\infty }I_{n}(\,f_{n}) \in L^{2}(\mathbb{P})\). We first prove that F and \(F^{{\ast}}\) have the same law:

$$\displaystyle{ \sum _{n=0}^{\infty }I_{ n}(\,f_{n})\ \mathop{=}\limits ^{\mathcal{L}}\ \sum _{ n=0}^{\infty }I_{ n}^{{\ast}}(\,f_{ n}). }$$
(21)

In fact, if the sum has a finite number of terms and the \(f_{n}\) are simple (see the appendix), then the equality in law is clear. Equality (21) for finite sums with arbitrary kernels follows by \(L^{2}(\mathbb{P})\)-convergence. The infinite sum case is proved in a similar fashion.

Step 2 :

For \(F,G \in L^{2}(\mathbb{P})\) we prove that

$$\displaystyle{(F,G)\ \mathop{=}\limits ^{\mathcal{L}}\ (F^{{\ast}},G^{{\ast}}).}$$

We use the Cramér–Wold device. Let \(F =\sum _{n=0}^{\infty }I_{n}(f_{n})\) and \(G =\sum _{n=0}^{\infty }I_{n}(g_{n})\). For \(a,b \in \mathbb{R}\),

$$\displaystyle{ aF + bG =\sum _{ n=0}^{\infty }I_{ n}\big(af_{n} + bg_{n}\big)\ \mathop{=}\limits ^{\mathcal{L}}\ \sum _{ n=0}^{\infty }I_{ n}^{{\ast}}\big(af_{ n} + bg_{n}\big) = aF^{{\ast}} + bG^{{\ast}}. }$$
Step 3 :

To prove (20) we consider n = 1; the general case is similar. First assume that the process X is square integrable; then \(\int _{\mathbb{R}_{0}}x^{2}\,\nu _{t}(\mathrm{d}x) < \infty \), and thus \(\int _{(0,t]\times \mathbb{R}_{0}}x^{2}\,\nu (\mathrm{d}(s,x)) < \infty.\) This implies that the representation (2) admits the form:

$$\displaystyle\begin{array}{rcl} X_{t}& =& m_{t} + G_{t} +\int \limits _{(0,t]\times \{\vert x\vert >1\}}x\,\nu (\mathrm{d}(s,x)) +\int \limits _{(0,t]\times \mathbb{R}_{0}}x\,\widehat{N } (\mathrm{d}(s,x)) {}\\ & =& m_{t} +\int \limits _{(0,t]\times \{\vert x\vert >1\}}x\,\nu (\mathrm{d}(s,x)) + I_{1}\big(\mathbb{1}_{[0,t]\times \{0\}}+\mathbb{1}_{(0,t]\times \mathbb{R}_{0}}\big), {}\\ \end{array}$$

and the property follows from step 2. In the general case, define

$$\displaystyle{ X_{t}^{(n)} = m_{ t} + G_{t} +\int \limits _{(0,t]\times \{1<\vert x\vert \leq n\}}x\,N(\mathrm{d}(s,x)) +\int \limits _{(0,t]\times \{0<\vert x\vert \leq 1\}}x\,\widehat{N } (\mathrm{d}(s,x)) }$$
(22)
$$\displaystyle{ = m_{t} +\int \limits _{(0,t]\times \{1<\vert x\vert \leq n\}}x\,\nu (\mathrm{d}(s,x)) + I_{1}\big(\mathbb{1}_{[0,t]\times \{0\}}+\mathbb{1}_{(0,t]\times \{0<\vert x\vert \leq n\}}\big). }$$
(23)

By expression (23) and Step 2, \(\big(X_{t}^{(n)},F\big)\ \mathop{=}\limits ^{\mathcal{L}}\ \big(X_{t}^{(n){\ast}},F^{{\ast}}\big)\). Since \(\nu \big((0,t] \times \{\vert x\vert > 1\}\big) < \infty \), we can apply Proposition 10 in the appendix to the first integral in the expression (22), and we deduce that when n → ∞, \(X_{t}^{(n)} \rightarrow X_{t}\) in probability, and the lemma follows. □ 

To transfer the derivative rules we will use the duality coupling (8). By construction, \(F \in L^{2}(\mathbb{P})\) belongs to dom D if and only if there is a constant C such that for all g ∈ dom δ,

$$\displaystyle{ \Big\vert \mathbb{E}\big[F\delta (g)\big]\Big\vert \leq C\,\left (\mathbb{E}\left [\int \limits _{\mathbb{R}_{+}\times \mathbb{R}}g^{2}\,\mathrm{d}\mu \right ]\right )^{1/2}. }$$
(24)

If F ∈ dom D, then DF is the element of \(L^{2}(\mu \otimes \mathbb{P})\) characterized by

$$\displaystyle{ \mathbb{E}[\delta (g)\,F] = \mathbb{E}\int \limits _{\mathbb{R}_{+}\times \mathbb{R}}g(z)\,D_{z}F\,d\mu (z), }$$
(25)

for every g ∈ dom δ. That is, we use (8) to prove a property of the derivative from the Skorohod integral.

Proposition 6

Let \(F = f\big(X_{t_{1}},\ldots,X_{t_{n}}\big)\) with \(f \in \mathcal{C}_{b}^{\infty }(\mathbb{R}^{n})\) . Then F ∈dom  D and

$$\displaystyle{ D_{t,0}F =\sum _{ j=1}^{n}\partial _{ j}f\big(X_{t_{1}},\ldots,X_{t_{n}}\big)\mathbb{1}_{[0,t_{j}]}(t), }$$
(26)

and for x ≠ 0,

$$\displaystyle{ D_{t,x}F =\varPsi _{t,x}F = \frac{f\big(X_{t_{1}}+x\mathbb{1}_{[0,t_{1}]}(t),\ldots,X_{t_{n}}+x\mathbb{1}_{[0,t_{n}]}(t)\big)-f\big(X_{t_{1}},\ldots,X_{t_{n}}\big)} {x}. }$$
(27)

Proof

We are going to prove that F ∈ dom D. To this end, let g ∈ dom δ and let \(g^{{\ast}}\) have the same kernels as g; then \(g^{{\ast}}\in \text{dom}\,\delta ^{{\ast}}\) and it satisfies inequality (24). Then, since we have proved that \(f\big(X_{t_{1}}^{{\ast}},\ldots,X_{t_{n}}^{{\ast}}\big) \in \text{dom}\,D\), by Lemma 2,

$$\displaystyle\begin{array}{rcl} & & \Big\vert \mathbb{E}\big[f\big(X_{t_{1}},\ldots,X_{t_{n}}\big)\delta (g)\big]\Big\vert =\Big\vert \mathbb{E}^{{\ast}}\big[f\big(X_{ t_{1}}^{{\ast}},\ldots,X_{ t_{n}}^{{\ast}}\big)\delta ^{{\ast}}(g^{{\ast}})\big]\Big\vert {}\\ & & \quad \leq C\,\left (\mathbb{E}^{{\ast}}\left [\int \limits _{ \mathbb{R}_{+}\times \mathbb{R}}(g^{{\ast}})^{2}\,\mathrm{d}\mu \right ]\right )^{1/2} = C\,\left (\mathbb{E}\left [\int \limits _{ \mathbb{R}_{+}\times \mathbb{R}}g^{2}\,\mathrm{d}\mu \right ]\right )^{1/2} < \infty, {}\\ \end{array}$$

where \(\mathbb{E}^{{\ast}}\) is the expectation in \(\varOmega _{G} \times \varOmega _{N}\). Now, in an identical way, we can show that

$$\displaystyle\begin{array}{rcl} Y _{t,x}&:=& \sum _{j=1}^{n}\partial _{ j}f\big(X_{t_{1}},\ldots,X_{t_{n}}\big)\mathbb{1}_{[0,t_{j}]}(t)\mathbb{1}_{\{0\}}(x) {}\\ & & \quad + \frac{f\big(X_{t_{1}} + x\mathbb{1}_{[0,t_{1}]}(t),\ldots,X_{t_{n}} + x\mathbb{1}_{[0,t_{n}]}(t)\big) - f\big(X_{t_{1}},\ldots,X_{t_{n}}\big)} {x} \,\mathbb{1}_{\{x\neq 0\}}(x){}\\ \end{array}$$

satisfies formula (25). □ 

2.6 Characterization of Processes with Independent Increments by Duality Formulas

Following Murr [15] we prove that the duality formula (8) characterizes the law of a process with independent increments. We restrict ourselves to real processes, while Murr [15] studies the vector case. Like Murr [15], we assume that the process is integrable, which is equivalent to \(\int _{\{\vert x\vert >1\}}\vert x\vert \,\nu _{t}(\mathrm{d}x) < \infty \) for every t ≥ 0. Then, as in the proof of Lemma 2, we can write the following representation:

$$\displaystyle{ X_{t} = b_{t} + G_{t} +\int \limits _{(0,t]\times \{\vert x\vert >1\}}x\,\widehat{N } (\mathrm{d}(s,x)) +\int \limits _{(0,t]\times \{0<\vert x\vert \leq 1\}}x\,\widehat{N } (\mathrm{d}(s,x)), }$$

where \(b_{t} = m_{t} +\int _{(0,t]\times \{\vert x\vert >1\}}x\,\nu (\mathrm{d}(s,x))\), the first integral belongs to \(L^{1}(\mathbb{P})\) and the second to \(L^{2}(\mathbb{P})\) (see Theorem 7 in the appendix).

Consider the system of generating triplets of X (with respect to the cutoff function χ(x) = x), \(\{(b_{t},\rho _{t},\nu _{t}),\ t \geq 0\}\). As we commented in Sect. 2.1 (see Sato [22, Theorem 9.8]):

  1. 1.

    \(b_{0} = 0\) and the function \(t\mapsto b_{t}\) is continuous.

  2. 2.

    \(\rho _{0} = 0\), \(\rho _{t} \geq 0\), and the function \(t\mapsto \rho _{t}\) is increasing and continuous.

  3. 3.

    For every t ≥ 0, \(\nu _{t}\) is a Lévy measure, and \(\lim _{s\rightarrow t}\nu _{s}(B) =\nu _{t}(B)\) for every \(B \in \mathcal{B}(\mathbb{R})\) such that \(B \subset \{ x:\ \vert x\vert >\varepsilon \}\) for some ɛ > 0.

  4. 4.

    For every t ≥ 0, \(\int _{\{\vert x\vert >1\}}\vert x\vert \,\nu _{t}(\mathrm{d}x) < \infty \).

Denote by \(\mathbb{S}\) the set of random variables of the form \(F = f\big(X_{t_{1}},\ldots,X_{t_{n}}\big)\) with \(f \in \mathcal{C}_{b}^{\infty }(\mathbb{R}^{n})\), and by \(\mathcal{E}\) the set of real step functions \(g =\sum _{ j=1}^{k}a_{j}\mathbb{1}_{(s_{j},s_{j+1}]}\), with \(0 \leq s_{1} < \cdots < s_{k+1}\). In the next theorem we add conditions regarding the regularity of the trajectories to agree with our definitions.

Theorem 2 (Murr)

Let X be an integrable process, cadlag and continuous in probability, and let \(\{(b_{t},\rho _{t},\nu _{t}),\ t \geq 0\}\) be such that (1)–(4) above are satisfied. Then X is a process with independent increments with system of generating triplets \(\{(b_{t},\rho _{t},\nu _{t}),\ t \geq 0\}\) if and only if for every \(F \in \mathbb{S}\) , and every step function \(g \in \mathcal{E},\)

$$\displaystyle\begin{array}{rcl} \mathbb{E}\left [F\int \limits _{\mathbb{R}_{+}}g(t)\,\mathrm{d}(X_{t} - b_{t})\right ]& =& \mathbb{E}\left [\,\int \limits _{\mathbb{R}_{+}}D_{t,0}F\,g(t)\,\rho (\mathrm{d}t)\right ] \\ & & \quad + \mathbb{E}\left [\int \limits _{(0,\infty )\times \mathbb{R}_{0}}\varPsi _{t,x}F\,g(t)\,x^{2}\,\nu (\mathrm{d}(t,x))\right ],{}\end{array}$$
(28)

where ν is defined in (1).

Proof

Assume that X is a process with independent increments. To prove (28), by linearity, it suffices to consider \(g =\mathbb{1}_{[0,u]}\), so we will check

$$\displaystyle{ \mathbb{E}\left [F\big(X_{u} - b_{u}\big)\right ] = \mathbb{E}\left [\,\int \limits _{[0,u]}D_{t,0}F\,\rho (\mathrm{d}t)\right ] + \mathbb{E}\left [\int \limits _{(0,u]\times \mathbb{R}_{0}}\varPsi _{t,x}F\,x^{2}\,\nu (\mathrm{d}(t,x))\right ]. }$$
(29)

Note that for a deterministic function \(h \in L^{2}(\mathbb{R}_{+} \times \mathbb{R},\mu )\) the duality formula (8) gives

$$\displaystyle\begin{array}{rcl} \mathbb{E}\left [F\int \limits _{\mathbb{R}_{+}\times \mathbb{R}}h\,\mathrm{d}M\right ]& =& \mathbb{E}\left [\,\int \limits _{\mathbb{R}_{+}}D_{t,0}F\,h(t,0)\,\rho (\mathrm{d}t)\right ] {}\\ & & \quad + \mathbb{E}\left [\int \limits _{(0,\infty )\times \mathbb{R}_{0}}\varPsi _{t,x}F\,h(t,x)x^{2}\,\nu (\mathrm{d}(t,x))\right ]. {}\\ \end{array}$$

Set

$$\displaystyle{h_{n}(t,x) =\mathbb{1}_{[0,u]\times \{0\}}(t,x)+x\,\mathbb{1}_{[0,u]\times \{1<\vert x\vert \leq n\}}(t,x)+x\,\mathbb{1}_{[0,u]\times \{0<\vert x\vert \leq 1\}}(t,x),}$$

which belongs to \(L^{2}(\mu )\); then

$$\displaystyle{\int \limits _{\mathbb{R}_{+}\times \mathbb{R}}h_{n}\,\mathrm{d}M = G_{u} +\int \limits _{(0,u]\times \{1<\vert x\vert \leq n\}}x\,\widehat{N } (\mathrm{d}(s,x)) +\int \limits _{(0,u]\times \{0<\vert x\vert \leq 1\}}x\,\widehat{N } (\mathrm{d}(s,x)).}$$

In relation to the first integral on the right-hand side, note that \(x\mathbb{1}_{(0,u]\times \{1<\vert x\vert \leq n\}}\) belongs to \(L^{1}(\nu ) \cap L^{2}(\nu )\), and

$$\displaystyle{\lim _{n}\int \limits _{(0,u]\times \{1<\vert x\vert \leq n\}}x\,\widehat{N } (\mathrm{d}(s,x)) =\int \limits _{(0,u]\times \{\vert x\vert >1\}}x\,\widehat{N } (\mathrm{d}(s,x))}$$

in \(L^{1}(\mathbb{P})\), and hence \(\int _{\mathbb{R}_{+}\times \mathbb{R}}h_{n}\,\mathrm{d}M\) converges in \(L^{1}(\mathbb{P})\) to \(X_{u} - b_{u}\). Since F is bounded, (29) follows.

To prove the converse implication, Murr [15] fixes \(g =\sum _{ j=1}^{k}a_{j}\mathbb{1}_{(s_{j},s_{j+1}]}\), with \(0 \leq s_{1} < \cdots < s_{k+1}\), and for \(u \in \mathbb{R}\), defines

$$\displaystyle{\varphi (u) = \mathbb{E}\left [\exp \bigg\{iu\int \limits _{\mathbb{R}_{+}}g\,dX\bigg\}\right ].}$$

Since

$$\displaystyle{\varphi '(u) = i\mathbb{E}\left [\exp \bigg\{iu\int \limits _{\mathbb{R}_{+}}g\,dX\bigg\}\,\int \limits _{\mathbb{R}_{+}}g\,dX\right ],}$$

applying the duality formula (28) with \(F =\exp \big\{ iu\int _{\mathbb{R}_{+}}g\,dX\big\}\) one deduces a differential equation, which for u = 1 determines the characteristic function of \(\big(X_{s_{1}},X_{s_{2}} - X_{s_{1}},\ldots,X_{s_{k+1}} - X_{s_{k}}\big)\); this determines the law of the process, and the theorem follows. □ 

Remark 3

Murr [15] defines \(\varPsi _{t,x}F\) as

$$\displaystyle{\varPsi _{t,x}F = f\big(X_{t_{1}}+x\mathbb{1}_{[0,t_{1}]}(t),\ldots,X_{t_{n}}+x\mathbb{1}_{[0,t_{n}]}(t)\big)-f\big(X_{t_{1}},\ldots,X_{t_{n}}\big),}$$

whereas in our definition of Ψ given in (27) we divide by x. However, in the second term on the right-hand side of formula (28), Murr puts x rather than \(x^{2}\). Of course, both formulations are equivalent.
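
As a numerical illustration of the duality (29), consider the following Monte Carlo sketch (our own example, under assumed parameters: a compound Poisson process with rate lam and Exp(1) jump sizes, so that \(b_{u} =\lambda u\), ρ = 0 and \(\nu (\mathrm{d}(t,x)) =\lambda \,\mathrm{d}t\,e^{-x}\,\mathrm{d}x\)); it compares both sides of (29) for \(F =\sin (X_{u})\), using (27) for Ψ:

```python
import numpy as np

rng = np.random.default_rng(3)
lam, u, n_mc = 2.0, 1.0, 400_000
f = np.sin                                    # smooth bounded test function

# Samples of X_u for a compound Poisson process: rate lam, Exp(1) jump sizes.
N = rng.poisson(lam * u, n_mc)
X = np.array([rng.exponential(1.0, n).sum() for n in N])

lhs = np.mean(f(X) * (X - lam * u))           # E[F (X_u - b_u)], b_u = lam * u

# Jump term of (29): Psi_{t,x}F * x^2 integrated against nu reduces to
# lam * u * E[(f(X_u + x) - f(X_u)) * x] with x ~ Exp(1) independent of X_u.
x = rng.exponential(1.0, n_mc)
rhs = lam * u * np.mean((f(X + x) - f(X)) * x)

print(lhs, rhs)   # the two estimates agree up to Monte Carlo error
```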

3 Part 2: Random Measures

The context of this part is that of random measures that are a.s. locally finite on a locally compact second countable Hausdorff space; the main references here are Kallenberg [6] and Schneider and Weil [23]. In this part we use standard notations of random measures.

3.1 Random Measures

Let \(\mathbb{X}\) be a locally compact second countable Hausdorff space; it can be proved that this space is Polish (a complete separable metrizable space). Denote by \(\mathcal{X}\) its Borel σ-field. A measure χ on \((\mathbb{X},\mathcal{X})\) is locally finite if \(\chi (K) < \infty \) for every compact set K; note that such a measure is σ-finite.

Denote by M (or \(\mathbf{M}(\mathbb{X})\) if we want to stress the underlying space) the set of locally finite measures on \((\mathbb{X},\mathcal{X})\) and endow this space with the σ-field \(\mathcal{M}\) generated by the evaluation maps. We also denote by N the subset of locally finite measures taking values in \(\{0,1,\ldots \}\cup \{\infty \}\). This notation is consistent with the one adopted in the survey [14] in this volume.

Given a random measure ξ on \((\mathbb{X},\mathcal{X})\) with intensity λ, recall that \(s \in \mathbb{X}\) is said to be a fixed atom of ξ if \(\mathbb{P}\{\xi \{s\} > 0\} > 0\). Note that if ξ has no fixed atoms, then for every \(s \in \mathbb{X}\), \(\lambda \{s\} = \mathbb{E}\big[\xi \{s\}\big] = 0\), so the intensity measure is non-atomic.

3.2 Infinitely Divisible Random Measures and Random Measures with Independent Increments

It is said that the random measure ξ has independent increments if for any family of pairwise disjoint sets \(A_{1},\ldots,A_{k} \in \mathcal{X}\), the random variables \(\xi (A_{1}),\ldots,\xi (A_{k})\) are independent. Matthes et al. [13, p. 16] call these random measures free from after-effects, and Kingman [8, 9] completely random measures.

A random measure ξ is said to be infinitely divisible if for every n ≥ 1 there are independent, identically distributed random measures \(\xi _{1},\ldots,\xi _{n}\) such that ξ has the same law as \(\xi _{1} + \cdots +\xi _{n}\). Indeed, every random measure with independent increments without fixed atoms is infinitely divisible (Kallenberg [6, Chap. 7]). The nice Lévy–Itô decomposition of processes with independent increments in terms of a Poisson random measure (Theorem 1) transfers to random measures with independent increments; general infinitely divisible random measures have a representation in law (Kallenberg [6, Theorem 8.1]).

Before the representation theorem it is convenient to comment that, since the number of fixed atoms of a random measure is at most countable (Kallenberg [6, p. 56]), if ξ is a random measure with independent increments, it can be written as

$$\displaystyle{\xi =\sum _{ n=1}^{N}\xi (\{s_{ n}\})\,\delta _{s_{n}} +\xi ',}$$

with \(N \leq \infty \), where \(\{s_{n},\,n \geq 1\}\) is the set of fixed atoms of ξ, and ξ′ is a random measure with independent increments and without fixed atoms. So, as Kingman [9] graphically says, fixed atoms can be removed by simple surgery.

Theorem 3

Let ξ be a random measure with independent increments with intensity measure λ, without fixed atoms. Then it can be represented uniquely in the form

$$\displaystyle{ \xi (A) =\beta (A) +\int \limits _{A\times (0,\infty )}x\,\eta (\mathrm{d}(s,x)), }$$
(30)

for \(A \in \mathcal{X}\) , where \(\beta \in \mathbf{M}(\mathbb{X})\) is non-atomic, and η is a Poisson random measure on \(\mathbb{X} \times (0,\infty )\) whose intensity measure \(\nu \in \mathbf{M}(\mathbb{X} \times (0,\infty ))\) is non-atomic. Moreover, for \(A \in \mathcal{X}\) , we have ξ(A) < ∞, a.s. if and only if β(A) < ∞ and

$$\displaystyle{\int \limits _{A\times (0,\infty )}(1 \wedge x)\,\nu (\mathrm{d}(s,x)) < \infty.}$$

For a proof see Kallenberg [7, Corollary 12.11] in the context of Borel spaces or Daley and Vere-Jones [2, Theorem 10.1.III] for Polish spaces.

Remark 4

We comment on some key points used in the proof that we will need later:

  1. 1.

    The measure ν on \(\mathbb{X} \times (0,\infty )\) comes from

    $$\displaystyle{\nu (A \times B) =\nu _{A}(B),}$$

    where \(A \in \mathcal{X}\) with \(\lambda (A) < \infty \), \(B \in \mathcal{B}((0,\infty ))\), and \(\nu _{A}\) is a Lévy measure on (0, ∞). That Lévy measure is associated with the positive infinitely divisible random variable ξ(A), which has finite expectation, and then it integrates the function f(x) = x. So, for \(A \in \mathcal{X}\) with \(\lambda (A) < \infty \), we have

    $$\displaystyle{\int \limits _{A\times (0,\infty )}x\,\nu (\mathrm{d}(s,x)) < \infty,}$$

    and

    $$\displaystyle{ \mathbb{E}[\xi (A)] =\lambda (A) =\beta (A) +\int \limits _{A\times (0,\infty )}x\,\nu (\mathrm{d}(s,x)). }$$
    (31)
  2. 2.

    The Poisson random measure η is given by

    $$\displaystyle{\eta =\sum _{s\in \mathbb{X}}\delta _{(s,\xi \{s\})}.}$$

    Since it is measurable (see Kallenberg [7], proof of Corollary 12.11) it follows that the σ-fields generated by ξ and η coincide. We will assume that \(\mathcal{A}\) is that σ-field.

  3. 3.

    The Laplace functional of ξ at \(h: \mathbb{X} \rightarrow \mathbb{R}_{+}\) is

    $$\displaystyle{ \mathbb{E}\left [\exp \bigg\{-\int \limits _{\mathbb{X}}h\,\mathrm{d}\xi \bigg\}\right ] =\exp \bigg\{ -\int \limits _{\mathbb{X}}h\mathrm{d}\beta -\int \limits _{\mathbb{X}\times (0,\infty )}\big(1 - e^{-xh(s)}\big)\nu (\mathrm{d}(s,x))\bigg\}. }$$
    (32)

Example: Subordinators

A subordinator \(X =\{ X_{t},\ t \geq 0\}\) is a Lévy process whose trajectories are increasing a.s. Then it defines a random measure on \(\mathbb{X} = \mathbb{R}_{+}\). Representation (30) corresponds to the Lévy–Itô decomposition of X (Theorem 1) which, with the notations of Part 1, reduces to (see Sato [22, Theorem 21.5])

$$\displaystyle{X_{t} =\gamma ^{\circ }t +\int \limits _{ (0,t]\times (0,\infty )}x\,N(\mathrm{d}(s,x)),}$$

where \(\gamma ^{\circ }\geq 0\) and N is a Poisson random measure on (0, ∞) × (0, ∞) with intensity \(\nu (\mathrm{d}(t,x)) =\mathrm{d}t\,\nu ^{\circ }(\mathrm{d}x)\), where \(\nu ^{\circ }\) is the Lévy measure of X (see Remark 1.3), and the Gaussian part is 0. For every t > 0, \(\int _{(0,t]\times (0,\infty )}(1 \wedge x)\,\nu (\mathrm{d}(s,x)) < \infty \), and the intensity measure of the random measure is given by

$$\displaystyle{\lambda \big([0,t]\big) =\gamma ^{\circ }t + t\int \limits _{(0,\infty )}x\,\nu ^{\circ }(\mathrm{d}x),}$$

which, in general, can be infinite.
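
For instance (our own illustration), for the gamma subordinator, \(\nu ^{\circ }(\mathrm{d}x) = x^{-1}e^{-x}\,\mathrm{d}x\), the intensity is finite: \(\lambda \big([0,t]\big) =\gamma ^{\circ }t + t\int _{(0,\infty )}e^{-x}\,\mathrm{d}x = (\gamma ^{\circ } + 1)\,t\). In contrast, for the 1∕2-stable Lévy measure \(\nu ^{\circ }(\mathrm{d}x) = \frac{1} {2}x^{-3/2}\,\mathrm{d}x\) we have \(\int _{(0,\infty )}x\,\nu ^{\circ }(\mathrm{d}x) = \infty \), and hence λ([0, t]) = ∞ for every t > 0.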

3.3 Mecke Formula for Random Measures with Independent Increments

We prove a Mecke formula for random measures with independent increments, inspired by Murr [15]. We first recall the classical Mecke formula for Poisson processes (Last [11, formula (1.7)], Privault [21, formula (2.44)]); see Schneider and Weil [23, Theorem 3.2.5] for the following version of the formula, which we use later.

Theorem 4 (Mecke Formula for Poisson Random Measures)

Let γ be a point process with non-atomic intensity measure \(\lambda \in \mathbf{M}(\mathbb{X})\) . Then γ is a Poisson random measure if and only if for every measurable function \(h: \mathbf{N}(\mathbb{X}) \times \mathbb{X} \rightarrow \mathbb{R}_{+}\) we have

$$\displaystyle{ \mathbb{E}\left [\int \limits _{\mathbb{X}}h(\gamma,s)\,\gamma (\mathrm{d}s)\right ] =\int \limits _{\mathbb{X}}\mathbb{E}\left [h(\gamma +\delta _{s},s)\right ]\,\lambda (\mathrm{d}s). }$$
(33)
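
Formula (33) is easy to check numerically. The following Monte Carlo sketch (our own illustration, with the assumed choices \(\mathbb{X} = [0,1]\), λ(ds) = c ds, and \(h(\gamma,s) =\sin (s)\,e^{-\theta \gamma ([0,1])}\)) estimates both sides of (33):

```python
import numpy as np

rng = np.random.default_rng(7)
c, theta, n_mc = 3.0, 0.5, 200_000    # intensity, test parameter, sample size
g = np.sin                             # bounded g with int_0^1 g = 1 - cos(1)

lhs_samples, mgf_samples = [], []
for _ in range(n_mc):
    pts = rng.uniform(0.0, 1.0, rng.poisson(c))   # Poisson process on [0,1]
    w = np.exp(-theta * len(pts))                 # e^{-theta * gamma([0,1])}
    lhs_samples.append(w * g(pts).sum())          # integral of h against gamma
    mgf_samples.append(w)

lhs = np.mean(lhs_samples)
# Adding the point s gives h(gamma + delta_s, s) = e^{-theta} g(s) w, so the
# right-hand side of (33) is c * e^{-theta} * E[w] * int_0^1 g(s) ds.
rhs = c * np.exp(-theta) * np.mean(mgf_samples) * (1.0 - np.cos(1.0))
print(lhs, rhs)   # both estimates agree up to Monte Carlo error
```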

Theorem 5 (Mecke Formula for Random Measures with Independent Increments)

Let ξ be a random measure without fixed atoms, and let \(\beta \in \mathbf{M}(\mathbb{X})\) and \(\nu \in \mathbf{M}(\mathbb{X} \times (0,\infty ))\) be non-atomic. Then ξ is a random measure with independent increments with associated measures β and ν if and only if for every measurable function \(h: \mathbf{M}(\mathbb{X}) \times \mathbb{X} \rightarrow \mathbb{R}_{+}\) we have

$$\displaystyle{ \mathbb{E}\left [\int \limits _{\mathbb{X}}h(\xi,s)\,\xi (\mathrm{d}s)\right ] =\int \limits _{\mathbb{X}}\mathbb{E}\big[h(\xi,s)\big]\,\beta (\mathrm{d}s) +\int \limits _{\mathbb{X}\times (0,\infty )}\mathbb{E}\big[h(\xi +x\delta _{s},s)\big]\,x\,\nu (\mathrm{d}(s,x)). }$$
(34)

Proof

 

  1.

    Let ξ be a random measure with independent increments with associated measures β and ν. First note that, since β is a deterministic measure, replacing ξ by ξ − β and changing the function h accordingly, we can assume that β = 0. We will reduce the proof to a simple case, and then prove formula (34) in that case.

    By standard arguments, it suffices to prove formula (34) for h(μ, s) = f(μ)g(s), where \(f: \mathbf{M}(\mathbb{X}) \rightarrow \mathbb{R}_{+}\) is bounded and \(g =\mathbb{1}_{C}\) for some \(C \in \mathcal{X}\) with \(\lambda (C) < \infty \). Now, given that \(\mathcal{M}(\mathbb{X})\) is generated by the projections π A , for \(A \in \mathcal{X}\), there is a countable family \(\{A_{n},\,n \geq 1\} \subset \mathcal{X}\) and a measurable function \(F: \mathbb{R}^{\infty }\rightarrow \mathbb{R}_{+}\) such that

    $$\displaystyle{f = F\big(\pi _{A_{1}},\pi _{A_{2}},\ldots \big).}$$

    (See Chow and Teicher [1, p. 17].) Hence,

    $$\displaystyle{f(\xi ) = F\big(\xi (A_{1}),\xi (A_{2}),\ldots \big).}$$

    Denote by \(\mathcal{A}_{n}\) the σ-field generated by \(\xi (A_{1}),\ldots,\xi (A_{n})\), and define

    $$\displaystyle{F_{n} = \mathbb{E}\big[f(\xi )\,\vert \mathcal{A}_{n}\big].}$$

    By the martingale convergence theorem we have that

    $$\displaystyle{\lim _{n}F_{n} = f(\xi ),\ \text{a.s.}}$$

    and since f is bounded, the convergence also holds in \(L^{p}\), for all p ≥ 1. Hence, it is enough to consider the case

    $$\displaystyle{f(\xi ) = f\big(\xi (A_{1}),\ldots,\xi (A_{n})\big).}$$

    With a monotone class argument, we can restrict to

    $$\displaystyle{f(\xi ) = f_{1}\big(\xi (A_{1})\big)\cdots f_{n}\big(\xi (A_{n})\big),}$$

    with bounded \(f_{1},\ldots,f_{n} \geq 0\), and \(A_{1},\ldots,A_{n}\) pairwise disjoint. Using that ξ has independent increments in formula (34), with such an f(ξ) and \(g =\mathbb{1}_{C}\), it is clear that we only need to consider two cases: when C is disjoint from all the A j , \(j = 1,\ldots,n\), or when C coincides with one of the A j . In the first case, equality (34) reduces to checking that if \(A \cap C =\emptyset \), then

    $$\displaystyle{ \mathbb{E}\big[f\big(\xi (A)\big)\xi (C)\big] =\int \limits _{C\times (0,\infty )}\mathbb{E}\big[f(\xi (A) + x\delta _{s}(A))\big]\,x\,\nu (\mathrm{d}(s,x)), }$$

    which is evident since, thanks to (31) (with β = 0) and the independence of ξ(A) and ξ(C), both sides are equal to \(\mathbb{E}\big[f\big(\xi (A)\big)\big]\,\lambda (C).\)

    In the second case (remember that here β = 0), equality (34) simplifies to

    $$\displaystyle{ \mathbb{E}\big[f\big(\xi (A)\big)\xi (A)\big] =\int \limits _{A\times (0,\infty )}\mathbb{E}\big[f(\xi (A) + x\delta _{s}(A))\big]\,x\,\nu (\mathrm{d}(s,x)). }$$
    (35)

    Replacing ξ(A) by its expression in the representation Theorem 3, the left-hand side of (35) becomes

    $$\displaystyle{ \mathbb{E}\left [f\bigg(\int \limits _{A\times (0,\infty )}x\,\eta (\mathrm{d}(s,x))\bigg)\int \limits _{A\times (0,\infty )}x\,\eta (\mathrm{d}(s,x))\right ], }$$
    (36)

    where η is a Poisson random measure on \(\mathbb{X} \times (0,\infty )\) with intensity measure ν. By the Mecke formula for Poisson random measures (33),

    $$\displaystyle{\mbox{ (36)} =\int \limits _{A\times (0,\infty )}\mathbb{E}\left [f\left (\int \limits _{A\times (0,\infty )}x\,\eta (\mathrm{d}(s,x)) + x\delta _{(s,x)}\big(A \times (0,\infty )\big)\right )\right ]x\,\nu (\mathrm{d}(s,x)),}$$

    which is exactly the right-hand side of (35).

  2.

    We prove the reciprocal implication; this proof is also inspired by Murr [15]. Note that applying formula (34) to the function h(μ, s) = f(s) we have

    $$\displaystyle{\int \limits _{\mathbb{X}}f\,\mathrm{d}\lambda = \mathbb{E}\left [\int \limits _{\mathbb{X}}f\,\mathrm{d}\xi \right ] =\int \limits _{\mathbb{X}}f\,\mathrm{d}\beta +\int \limits _{\mathbb{X}\times (0,\infty )}xf(s)\,\nu (\mathrm{d}(s,x)).}$$

    Fix a measurable \(g: \mathbb{X} \rightarrow \mathbb{R}_{+}\) with \(\int _{\mathbb{X}}g\,\mathrm{d}\lambda < \infty \), and define, for u ≥ 0,

    $$\displaystyle{G(u) = \mathbb{E}\left [\exp \{-u\int \limits _{\mathbb{X}}g\,\mathrm{d}\xi \}\right ].}$$

    Since \(\mathbb{E}\big[\int \limits _{\mathbb{X}}g\,\mathrm{d}\xi \big] < \infty \), by differentiation we get

    $$\displaystyle{G'(u) = -\mathbb{E}\left [\exp \{-u\int \limits _{\mathbb{X}}g\,\mathrm{d}\xi \}\int \limits _{\mathbb{X}}g\,\mathrm{d}\xi \right ].}$$

    Now in formula (34) take

    $$\displaystyle{h(\mu,s) =\exp \{ -u\int \limits _{\mathbb{X}}g\,\mathrm{d}\mu \}\,g(s),}$$

    and then,

    $$\displaystyle{G'(u) = -\int \limits _{\mathbb{X}}G(u)g(s)\,\beta (\mathrm{d}s) -\int \limits _{\mathbb{X}\times (0,\infty )}G(u)\exp \{ - uxg(s)\}g(s)x\,\nu (\mathrm{d}(s,x)),}$$

    or

    $$\displaystyle{\frac{G'(u)} {G(u)} = -\int \limits _{\mathbb{X}}g(s)\,\beta (\mathrm{d}s) -\int \limits _{\mathbb{X}\times (0,\infty )}\exp \{ - uxg(s)\}g(s)x\,\nu (\mathrm{d}(s,x)).}$$

    The function on the right-hand side is continuous in u, and given that G(0) = 1, we have

    $$\displaystyle\begin{array}{rcl} G(u)& =& \exp \left \{-\int \limits _{0}^{u}\bigg(\int \limits _{ \mathbb{X}}g(s)\,\beta (\mathrm{d}s) +\int \limits _{\mathbb{X}\times (0,\infty )}\exp \{ - zxg(s)\}g(s)x\,\nu (\mathrm{d}(s,x))\bigg)\mathrm{d}z\right \} {}\\ & =& \exp \left \{-u\int \limits _{\mathbb{X}}g(s)\,\beta (\mathrm{d}s) -\int \limits _{\mathbb{X}\times (0,\infty )}\bigg(\int \limits _{0}^{u}\exp \{ - zxg(s)\}\,\mathrm{d}z\bigg)g(s)x\,\nu (\mathrm{d}(s,x))\right \}. {}\\ \end{array}$$

    In particular, for u = 1 we get

    $$\displaystyle{\int \limits _{0}^{1}\exp \Big\{ - zxg(s)\Big\}\,\mathrm{d}z =\mathbb{1}_{\{ s:\,g(s)>0\}}(s) \frac{1} {xg(s)}\Big(1 - e^{-xg(s)}\Big),}$$

    (the value on \(\{s:\,g(s) = 0\}\) is irrelevant because of the factor g(s)x in the integrand), and then the Laplace functional of ξ is

    $$\displaystyle{G(1) =\exp \left \{-\int \limits _{\mathbb{X}}g(s)\,\mathrm{d}\beta (s) -\int \limits _{\mathbb{X}\times (0,\infty )}\big(1 - e^{-xg(s)}\big)\,\nu (\mathrm{d}(s,x))\right \},}$$

    which is the Laplace functional (32) of the claimed random measure. □ 
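
Formula (34) can also be checked numerically. The sketch below is ours and purely illustrative: it takes β = 0 and the finite measure \(\nu (\mathrm{d}(s,x)) = c\,\mathrm{d}s\,e^{-x}\mathrm{d}x\) on [0, 1] × (0, ∞), so that ξ has on average c atoms with Exp(1) weights, and \(h(\xi,s) =\exp \{-\xi ([0,1])\}\); in this case both sides of (34) equal \((c/4)e^{-c/2}\).

```python
import numpy as np

rng = np.random.default_rng(1)
c = 2.0           # nu(d(s, x)) = c ds e^{-x} dx: finite, total mass c
n_sim = 100_000

# Left-hand side of (34): E[ int h(xi, s) xi(ds) ] = E[ S * exp(-S) ],
# where S = xi([0, 1]) is the total mass of xi.
lhs = 0.0
for _ in range(n_sim):
    n = rng.poisson(c)                  # number of atoms (s_i, x_i)
    S = rng.exponential(1.0, n).sum()   # S = sum of the weights x_i
    lhs += S * np.exp(-S)
lhs /= n_sim

# Right-hand side of (34) with beta = 0:
#   int E[exp(-(S + x))] x c e^{-x} dx ds
#   = c * E[exp(-S)] * int_0^inf x e^{-2x} dx = c * e^{-c/2} * 1/4,
# using E[exp(-S)] = exp(c * (1/2 - 1)) for Exp(1) weights.
rhs = c * np.exp(-c / 2.0) * 0.25

print(f"LHS (Monte Carlo) = {lhs:.4f}, RHS (exact) = {rhs:.4f}")
```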

3.4 Malliavin Calculus

From now on, we consider the random measure with independent increments given by

$$\displaystyle{ \xi (A) =\beta (A) +\int \limits _{A\times (0,\infty )}x\,\eta (\mathrm{d}(s,x)), }$$
(37)

where η is a Poisson random measure with intensity ν.

As in Part 1, we construct a completely random measure on \(\mathbb{X} \times (0,\infty )\). For that purpose, define a new measure μ on \(\mathbb{X} \times (0,\infty )\) by

$$\displaystyle{\mu (\mathrm{d}(s,x)) = x^{2}\,\nu (\mathrm{d}(s,x)).}$$

For \(C \in \mathcal{X}\times \mathcal{B}((0,\infty ))\) such that \(\mu (C) < \infty \), the function \(\mathbb{1}_{C}(s,x)\,x\) is in \(L^{2}(\nu )\); hence the following random variable is well defined (as a limit in \(L^{2}(\mathbb{P})\)):

$$\displaystyle{M(C):=\int \limits _{\mathbb{X}\times (0,\infty )}\mathbb{1}_{C}(s,x)x\hat{\eta }(\mathrm{d}(s,x)) =\int \limits _{C}x\,\hat{\eta }(\mathrm{d}(s,x)),}$$

where \(\hat{\eta }=\eta -\nu \). The set function M is a completely random measure. As before, consider the space of symmetric functions

$$\displaystyle{L_{s}^{2}(\mu ^{n}) = L_{ s}^{2}\Big(\big(\mathbb{X} \times (0,\infty )\big)^{n},\mu ^{\otimes n}\Big).}$$

The multiple Itô integral of order n with respect to M of a function \(f \in L_{s}^{2}(\mu ^{n})\) is denoted by \(I_{n}(\,f)\). Itô's chaotic representation property also holds in this context, and every \(F \in L^{2}(\mathbb{P})\) admits a representation of the form

$$\displaystyle{ F =\sum _{ n=0}^{\infty }I_{ n}(\,f_{n}),\quad f_{n} \in L_{s}^{2}(\mu ^{n}). }$$
(38)

So, as in Part 1, we can define a Malliavin derivative D with domain dom D and its dual, the Skorohod integral δ with domain dom δ.

3.4.1 Malliavin Derivatives with Respect to the Underlying Poisson Random Measure

In the present context of random measures, the absence of a Gaussian part and the fact that the integral in the representation (37) is pathwise make things easier, and we do not need to introduce a canonical space. As we commented in the Introduction, we rely on the very general construction of Last and Penrose [12] and Last [11] (see also Privault [21] for multiple Poisson integrals). Denote by \(I_{n}^{\hat{\eta }}(\,f)\) the multiple integral of order n with respect to \(\hat{\eta }\) of a function \(f \in L_{s}^{2}(\nu ^{n})\). For \(f:\big (\mathbb{X} \times (0,\infty )\big)^{n} \rightarrow \mathbb{R}\) write

$$\displaystyle{f^{{\ast}}\big((s_{1},x_{1}),\ldots,(s_{n},x_{n})\big) = x_{1}\cdots x_{n}\,f\big((s_{1},x_{1}),\ldots,(s_{n},x_{n})\big).}$$

Obviously, \(f \in L_{s}^{2}(\mu ^{n})\) if and only if \(f^{{\ast}} \in L_{s}^{2}(\nu ^{n})\). In this case,

$$\displaystyle{I_{n}(\,f) = I_{n}^{\hat{\eta }}(f^{{\ast}}).}$$

This is proved by standard techniques: one first considers elementary functions and then uses a density argument.

Hence, for \(F \in L^{2}(\mathbb{P})\) with an expansion (38) (remember that the σ-fields generated by ξ and η coincide) we also have the expansion

$$\displaystyle{F =\sum _{ n=0}^{\infty }I_{ n}^{\hat{\eta }}(\,f_{ n}^{{\ast}}).}$$

Last and Penrose [12] (see Last [11, Theorem 3]) introduce two derivative operators: the first one is an add-one-cost operator, which we comment on in the next subsection, and the second is a Malliavin derivative D η (Last denotes it by D′), defined as an annihilation operator on the chaos expansion. The relation between our derivative D and D η is the following:

Proposition 7

We have dom D = dom D η , and for F ∈dom D,

$$\displaystyle{D_{(s,x)}F = \frac{1} {x}\,D_{(s,x)}^{\eta }F,\ \mu \otimes \mathbb{P} -\text{a.e.}}$$

Proof

The proof is direct from the chaos expansion of F. □ 
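
Explicitly (a one-line verification, spelled out here for convenience): acting on the n-th chaos and using that \(f_{n}^{{\ast}}((s,x),\cdot ) = x\,\big(f_{n}((s,x),\cdot )\big)^{{\ast}}\),

$$\displaystyle{D_{(s,x)}^{\eta }I_{n}^{\hat{\eta }}(\,f_{n}^{{\ast}}) = n\,I_{n-1}^{\hat{\eta }}\big(\,f_{n}^{{\ast}}((s,x),\cdot )\big) = x\,n\,I_{n-1}\big(\,f_{n}((s,x),\cdot )\big) = x\,D_{(s,x)}I_{n}(\,f_{n}).}$$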

3.4.2 Derivation of Smooth Functionals

We first prove a property for the Poisson process case: following Last and Penrose [12] and Last [11], consider a square integrable random variable \(F \in L^{2}(\mathbb{P})\); since it is measurable with respect to the σ-field generated by η, there is a measurable function \(f: \mathbf{N}(\mathbb{X} \times (0,\infty )) \rightarrow \mathbb{R}\) such that F = f(η) and \(\mathbb{E}[f^{2}(\eta )] < \infty \). Define

$$\displaystyle{\mathbb{D}_{z}^{\eta }f(\eta ) = f(\eta +\delta _{ z}) - f(\eta ).}$$

By iteration, let

$$\displaystyle{\mathbb{D}_{z_{1},\ldots,z_{n}}^{\eta,n}f(\eta ) = \mathbb{D}_{ z_{1}}^{\eta }\mathbb{D}_{z_{2},\ldots,z_{n}}^{\eta,n-1}f(\eta ).}$$

Now define \(T_{0}f = \mathbb{E}\big[f(\eta )\big]\), and for n ≥ 1, 

$$\displaystyle{T_{n}f(z_{1},\ldots,z_{n}) = \mathbb{E}\big[\mathbb{D}_{z_{1},\ldots,z_{n}}^{\eta,n}f(\eta )\big].}$$

This operator satisfies \(T_{n}f \in L_{s}^{2}(\nu ^{n})\), and in the (Poisson) chaotic decomposition of F = f(η)

$$\displaystyle{F =\sum _{ n=0}^{\infty }I_{ n}^{\hat{\eta }}(\,f_{ n}),}$$

the kernels are

$$\displaystyle{f_{n} = \frac{1} {n!}T_{n}f.}$$

See Last [11, Theorem 2].
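
For example (a standard computation, included for illustration), take F = f(η) = η(B) with ν(B) < ∞. Then \(\mathbb{D}_{z}^{\eta }f(\eta ) = \mathbb{1}_{B}(z)\) is deterministic, so \(T_{1}f =\mathbb{1}_{B}\) and \(T_{n}f = 0\) for n ≥ 2, and the expansion reduces to the usual compensated representation

$$\displaystyle{\eta (B) =\nu (B) + I_{1}^{\hat{\eta }}(\mathbb{1}_{B}).}$$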

Proposition 8

Let \(F \in L^{2}(\mathbb{P}).\) Then F ∈dom  D η if and only if \(\ \mathbb{D}^{\eta }F \in L^{2}(\varOmega \times \mathbb{X} \times (0,\infty ), \mathbb{P}\otimes \nu )\) .

Proof

If F ∈ dom D η then the property follows from the coincidence between D η and \(\mathbb{D}^{\eta }\) (Last [11, equality (1.48)]). The proof of the reciprocal implication is analogous to the proof of the second part of Proposition 4. □ 

Now we return to Malliavin derivatives with respect to the random measure ξ.

Proposition 9

Let \(A_{1},\ldots,A_{n} \in \mathcal{X}\) with finite λ-measure. Let \(F = f\big(\xi (A_{1}),\ldots,\xi (A_{n})\big)\), with \(f \in \mathcal{C}_{b}^{\infty }(\mathbb{R}^{n})\) . Then F ∈dom D and

$$\displaystyle{D_{s,x}F = \frac{1} {x}\Big(\,f\big(\xi (A_{1}) + x\delta _{s}(A_{1}),\ldots,\xi (A_{n}) + x\delta _{s}(A_{n})\big) - f\big(\xi (A_{1}),\ldots,\xi (A_{n})\big)\Big).}$$

The idea of the proof is the same as that of Proposition 5; an illustration is sketched below.
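
The following sketch is ours and purely illustrative (the functional f = tanh, the set A, and the sampling parameters are arbitrary assumptions): it evaluates the increment quotient of Proposition 9 on one simulated realization of ξ, with β = 0 and finitely many atoms.

```python
import numpy as np

rng = np.random.default_rng(2)
# One realization of xi as in (37) with beta = 0: finitely many atoms
# (s_i, x_i) of a Poisson random measure eta on [0, 1] x (0, inf).
n = rng.poisson(2.0)
s_pts = rng.uniform(0.0, 1.0, n)     # atom locations s_i
x_pts = rng.exponential(1.0, n)      # atom weights x_i

def xi(a, b, extra=None):
    """xi([a, b]); `extra = (s, x)` adds the mass x * delta_s."""
    m = x_pts[(s_pts >= a) & (s_pts <= b)].sum()
    if extra is not None:
        s, x = extra
        m += x if a <= s <= b else 0.0
    return m

f = np.tanh          # a smooth bounded function, f in C_b^infty(R)
A = (0.0, 0.5)       # a set A of finite lambda-measure

def malliavin_D(s, x):
    """Increment quotient of Proposition 9 for F = f(xi(A))."""
    return (f(xi(*A, extra=(s, x))) - f(xi(*A))) / x

print(malliavin_D(0.25, 0.7))   # nonzero: the added atom (0.25, 0.7) charges A
print(malliavin_D(0.75, 0.7))   # zero: s = 0.75 falls outside A
```

The second evaluation returns 0 because adding mass outside A does not change ξ(A), in agreement with the formula above.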

3.4.3 Characterization of Random Measures by Duality Formulas

Following Murr [15] we present another version of the Mecke formula to characterize random measures with independent increments by duality formulas. Indeed, Murr [15] gives a characterization of infinitely divisible random measures, so his result is more general than ours. The interest of our characterization is that the proof is based on the Malliavin calculus for random measures with independent increments, specifically, on the duality coupling between D and δ: for F ∈ dom D and g ∈ dom δ,

$$\displaystyle{ \mathbb{E}\big[F\delta (g)\big] = \mathbb{E}\left [\int \limits _{\mathbb{X}\times (0,\infty )}D_{z}Fg(z)\,\mu (\mathrm{d}z)\right ]. }$$
(39)
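
In particular, taking F = 1 (so that DF = 0) in (39) shows that \(\mathbb{E}[\delta (g)] = 0\) for every g ∈ dom δ, in agreement with the fact that δ, as a creation operator, produces no chaos component of order 0.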

Denote by \(\mathcal{U}\) the ring of relatively compact sets of \(\mathbb{X}\). Every locally finite measure is finite on the sets of \(\mathcal{U}\). Let \(\mathbb{S}\) be the set of functions \(f: \mathbf{M}(\mathbb{X}) \rightarrow \mathbb{R}\) of the form \(f(\mu ) = h\big(\mu (A_{1}),\ldots,\mu (A_{n})\big)\) with \(h \in \mathcal{C}_{b}^{\infty }(\mathbb{R}^{n})\) and \(A_{1},\ldots,A_{n} \in \mathcal{U}\); also let \(\mathcal{E}\) be the set of simple functions \(g =\sum _{j=1}^{k}a_{j}\mathbb{1}_{A_{j}}\), with \(a_{1},\ldots,a_{k} > 0\) and \(A_{1},\ldots,A_{k} \in \mathcal{U}\).

Theorem 6 (Murr)

Let \(\beta \in \mathbf{M}(\mathbb{X})\) be non-atomic and \(\nu \in \mathbf{M}\big(\mathbb{X} \times (0,\infty )\big)\) be non-atomic and such that for \(A \in \mathcal{U}\), \(\int _{A\times (0,\infty )}x\,\nu (\mathrm{d}(s,x)) < \infty \). A random measure ξ has independent increments with characteristics β and ν if and only if for all \(f \in \mathbb{S}\) and \(g \in \mathcal{E}\),

$$\displaystyle\begin{array}{rcl} \mathbb{E}\left [f(\xi )\int \limits _{\mathbb{X}}g(s)\,\xi (\mathrm{d}s)\right ]& =& \mathbb{E}\big[f(\xi )\big]\int \limits _{\mathbb{X}}g(s)\,\beta (\mathrm{d}s) \\ & & \quad +\int \limits _{\mathbb{X}\times (0,\infty )}\mathbb{E}\big[f(\xi +x\delta _{s})\big]g(s)x\,\nu (\mathrm{d}(s,x)).{}\end{array}$$
(40)

Proof

Assume that ξ is a random measure with independent increments. Formula (40) is the particular case of formula (34) for h(μ, s) = f(μ)g(s). However, as we commented, we will see that (40) is also a consequence of the duality coupling (39).

To prove (40), by linearity, it suffices to consider the case \(g =\mathbb{1}_{A}\) for \(A \in \mathcal{U}\). By construction (see Remark 4), \(\int _{A\times (0,\infty )}x\,\nu (\mathrm{d}(s,x)) < \infty \). Assume first that also

$$\displaystyle{\int \limits _{A\times (0,\infty )}x^{2}\,\nu (\mathrm{d}(s,x)) < \infty.}$$

Then \(x\mathbb{1}_{A\times (0,\infty )} \in L^{1}\big(\nu \big) \cap L^{2}\big(\nu \big)\), and by the representation (30),

$$\displaystyle\begin{array}{rcl} \delta (g)& =& \int \limits _{A\times (0,\infty )}x\,\hat{\eta }(\mathrm{d}(s,x)) =\int \limits _{A\times (0,\infty )}x\,\eta (\mathrm{d}(s,x)) -\int \limits _{A\times (0,\infty )}x\,\nu (\mathrm{d}(s,x)) {}\\ & =& \int \limits _{\mathbb{X}}g(s)\,\xi (\mathrm{d}s) -\int \limits _{\mathbb{X}}g(s)\,\mathrm{d}\beta (s) -\int \limits _{A\times (0,\infty )}x\,\nu (\mathrm{d}(s,x)). {}\\ \end{array}$$

Further, the right-hand side of the duality formula (39) for F = f(ξ) and \(g =\mathbb{1}_{A}\) is

$$\displaystyle{ \mathbb{E}\left [\int \limits _{A\times (0,\infty )}\big(f(\xi +x\delta _{s}) - f(\xi )\big)\,x\,\nu (\mathrm{d}(s,x))\right ], }$$

and formula (40) follows. When \(\int _{A\times (0,\infty )}x^{2}\,\nu (\mathrm{d}(s,x)) = \infty \), the result is obtained by approximating \(\int _{A\times (0,\infty )}x\,\eta (\mathrm{d}(s,x))\) by \(\int _{A\times \{0<x<n\}}x\,\eta (\mathrm{d}(s,x))\) as in the proof of Theorem 2.

The reciprocal implication is also proved as in Theorem 2. □ 

Remark 5

For an infinitely divisible random measure Murr [15] writes formula (40) as

$$\displaystyle\begin{array}{rcl} & & \mathbb{E}\left [f(\xi )\int \limits _{\mathbb{X}}g(s)\,\xi (\mathrm{d}s)\right ] \\ & & \quad = \mathbb{E}\big[f(\xi )\big]\int \limits _{\mathbb{X}}g(s)\,\beta (\mathrm{d}s) + \mathbb{E}\left [\int \limits _{\mathbf{M}_{0}(\mathbb{X})}f(\xi +\chi )\Big(\int \limits _{\mathbb{X}}g(s)\,\chi (\mathrm{d}s)\Big)\,\varGamma (\mathrm{d}\chi )\right ],{}\end{array}$$
(41)

where \(\mathbf{M}_{0}(\mathbb{X}) = \mathbf{M}(\mathbb{X})\setminus \{0\}\) (here 0 is the zero measure) and Γ is a σ-finite measure on \(\mathbf{M}_{0}(\mathbb{X})\). Kallenberg [6, Lemma 7.3] proves that ξ has independent increments if and only if Γ is concentrated on the set of degenerate measures in \(\mathbf{M}_{0}(\mathbb{X})\), that is, the measures of the form \(\chi = x\delta _{s}\) for some x > 0 and \(s \in \mathbb{X}\). In this case, consider the (measurable) mapping

$$\displaystyle\begin{array}{rcl} & & \mathbf{M}_{0}(\mathbb{X}) \rightarrow \mathbb{X} \times (0,\infty ) {}\\ & & \qquad x\,\delta _{s}\mapsto (s,x) {}\\ \end{array}$$

and then ν is the image measure of Γ under this mapping. Thus, by the image measure theorem,

$$\displaystyle{\int \limits _{\mathbf{M}_{0}(\mathbb{X})}f(\xi +\chi )\left (\int \limits _{\mathbb{X}}g(s)\,\chi (\mathrm{d}s)\right )\,\varGamma (\mathrm{d}\chi ) =\int \limits _{\mathbb{X}\times (0,\infty )}f(\xi +x\delta _{s})xg(s)\,\nu (\mathrm{d}(s,x)),}$$

so formulas (41) and (40) coincide in the case of a random measure with independent increments.