Abstract
Malliavin calculus for Poisson processes based on the difference operator or add-one-cost operator is extended to stochastic processes and random measures with independent increments. Our approach is to use a Wiener–Itô chaos expansion, valid for both stochastic processes and random measures with independent increments, to construct a Malliavin derivative and a Skorohod integral. Useful derivation rules for smooth functionals given by Geiss and Laukkarinen (Probab Math Stat 31:1–15, 2011) are proved. In addition, characterizations for processes or random measures with independent increments based on the duality between the Malliavin derivative and the Skorohod integral following an interesting point of view from Murr (Stoch Process Appl 123:1729–1749, 2013) are studied.
1 Introduction
This chapter is divided into two parts: the first is devoted to processes with independent increments and the second to random measures with independent increments. Of course, both parts are strongly related to each other, and we had doubts about the best order in which to present them so as to avoid repetition. We decided to start with stochastic processes, where previous results are better known; this part is mainly based on Solé et al. [24], where a Malliavin calculus for Lévy processes is developed. Our approach relies on a chaotic expansion of square integrable functionals of the process, stated by Itô [5], in terms of a vector random measure on the plane; that expansion gives rise to a Fock space structure and enables us to define a Malliavin derivative and a Skorohod integral as an annihilation and a creation operator, respectively. Later, using an ad hoc canonical space, the Malliavin derivative restricted to the jumps part of the process can be conveniently interpreted as an increment quotient operator, extending the idea of the difference operator or add-one-cost operator of Poisson processes; see Nualart and Vives [18, 19], Last and Penrose [12], and Last [11] in this volume. We also extend the interesting formulas of Geiss and Laukkarinen [4] for computing the derivatives of smooth functionals, which widen considerably the practical applications of the calculus. Finally, following Murr [15], we prove that the duality coupling between the Malliavin derivative and the Skorohod integral characterizes the underlying process, and in this way we extend to stochastic processes some characterizations of Stein's method type. We should point out that in the first part (and also in the second, as we comment below) we use the very general and deep results of Last and Penrose [12] and Last [11] to improve some results and simplify the proofs of Solé et al. [24].
It is worth remarking that there is another approach to a chaos-based Malliavin calculus for jump processes using a different chaos expansion, which we comment on in Sect. 2.2. For that development and many applications see Di Nunno et al. [3] and the references therein.
In the second part we extend Malliavin calculus to a random measure with independent increments. We start by recalling a representation theorem of such a random measure in terms of an integral with respect to a Poisson random measure in a product space; a weak version (in law) of that representation was obtained by Kingman [8] (see also Kingman [9]). That representation gives rise to the possibility of building a Malliavin calculus due to the fact that Itô’s [5] chaotic representation property also holds here. In this context, the results of Last and Penrose [12] and Last [11] play a central role since, thanks to them, it is not necessary to construct a canonical space, and we can simply interpret the Malliavin derivative as an add-one-cost operator. As in the first part, we introduce the smooth functionals of Geiss and Laukkarinen [4], and the characterization of random measures with independent increments by duality formulas of Murr [15].
2 Part 1: Malliavin Calculus for Processes with Independent Increments
2.1 Processes with Independent Increments and Their Lévy–Itô Decomposition
This section contains the notations and properties of processes with independent increments that we use; we mainly follow the excellent book of Sato [22]. In particular, we present the so-called Lévy–Itô decomposition of a process with independent increments as a sum of a continuous function, a continuous Gaussian process with independent increments, and two integrals: one is taken with respect to a Poisson random measure, the other with respect to a compensated Poisson random measure. These integrals are, respectively, the sum of the big jumps of the process and the compensated sum of small jumps. That decomposition is a masterpiece of stochastic processes theory, and there exist proofs of it based on very different tools: see, for example, Sato [22] and Kallenberg [7].
Fix a probability space \((\varOmega,\mathcal{A}, \mathbb{P})\). Let X = { X t , t ≥ 0} be a real process with independent increments, that is, for every n ≥ 1 and 0 ≤ t 1 < ⋯ < t n , the random variables \(X_{t_{2}} - X_{t_{1}},\ldots,X_{t_{n}} - X_{t_{n-1}}\) are independent. We assume that X 0 = 0, a.s., and that X is continuous in probability and cadlag. A process with all these properties is also called an additive process. We assume that the σ-field \(\mathcal{A}\) is generated by X.
The hypothesis that the process is cadlag is not restrictive: every process with independent increments and continuous in probability has a cadlag modification (Sato [22, Theorem 11.5]). The conditions of continuity in probability and cadlag prevent the existence of fixed discontinuities, that is to say, there are no points t ≥ 0 such that \(\mathbb{P}\{X_{t}\neq X_{t-}\} > 0.\)
The system of generating triplets of X is denoted by {(m t , ρ t , ν t ), t ≥ 0}. Thus, \(m:\, \mathbb{R}_{+} \rightarrow \mathbb{R}\), where \(\mathbb{R}_{+} = [0,\infty )\), is a continuous function that gives a deterministic tendency of the process (see representation (2)); ρ t ≥ 0 is the variance of the Gaussian part of X t , and ν t is the Lévy measure of the jumps part. More specifically, ν t is a measure on \(\mathbb{R}_{0}\), where \(\mathbb{R}_{0} = \mathbb{R}\setminus \{0\}\), such that \(\int _{\mathbb{R}_{0}}(1 \wedge x^{2})\,\nu _{t}(\mathrm{d}x) < \infty \), where a ∧ b = min(a, b). Observe that for all t ≥ 0 and ɛ > 0, \(\nu _{t}\big((-\varepsilon,\varepsilon )^{c}\big) < \infty \), and hence ν t is finite on compact sets of \(\mathbb{R}_{0}\), and then σ-finite. Denote by ν the (unique) measure on \(\mathcal{B}((0,\infty ) \times \mathbb{R}_{0})\) defined by
$$\displaystyle{ \nu \big((0,t] \times B\big) =\nu _{t}(B),\qquad t > 0,\ B \in \mathcal{B}(\mathbb{R}_{0}). }$$(1)
It is also σ-finite, and moreover, \(\nu \big(\{t\} \times \mathbb{R}_{0}\big) = 0\) for every t > 0 (Sato [22, p. 53]); thus it is non-atomic. The measure ν controls the jumps of the process: for \(B \in \mathcal{B}(\mathbb{R}_{0})\), ν((0, t] × B) is the expectation of the number of jumps of the process in the interval (0, t] with size in B. We remark that in a finite time interval the process can have an infinity of jumps of small size, and there are Lévy measures such that, for example, \(\nu \big((0,t] \times (0,x_{0})\big) = \infty \), for some x 0 > 0.
Write
the jumps measure of the process, where Δ X t = X t − X t−. It is a Poisson random measure on \((0,\infty ) \times \mathbb{R}_{0}\) with intensity measure ν (Sato [22, Theorem 19.2]). Let
represent the compensated jumps measure.
Theorem 1 (Lévy–Itô Decomposition)
$$\displaystyle{ X_{t} = m_{t} + G_{t} +\int \limits _{(0,t]\times \{\vert x\vert >1\}}x\,N(\mathrm{d}(s,x)) +\int \limits _{(0,t]\times \{0<\vert x\vert \leq 1\}}x\,\widehat{N }(\mathrm{d}(s,x)), }$$(2)
where \(\{G_{t},\ t \geq 0\}\) is a centered continuous Gaussian process with independent increments and variance \(\mathbb{E}[G_{t}^{2}] =\rho _{t}\), independent of N.
Sato [22, Theorem 19.2] gives a more precise statement, and instead of the second integral in (2) he writes
where the convergence is a.s., uniform in t on every bounded interval.
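The decomposition in Theorem 1 lends itself to direct simulation. Below is a minimal sketch for a toy Lévy case with drift m_t = 0.5 t, Gaussian part of variance ρ_t = t, and finitely many jumps of size Unif(0, 2) arriving at rate λ; all parameters are illustrative choices, not taken from the text. Big jumps (|x| > 1) enter uncompensated, while the small jumps are compensated by their expected contribution.

```python
import numpy as np

rng = np.random.default_rng(0)

def additive_path(T=1.0, n_steps=1000, lam=4.0):
    """Simulate X_t = m_t + G_t + (sum of big jumps) + (compensated small jumps)
    for a toy Lévy process: m_t = 0.5*t, Var(G_t) = rho_t = t, jump sizes
    ~ Unif(0, 2) at Poisson rate lam (hypothetical parameters)."""
    times = np.linspace(0.0, T, n_steps + 1)
    # Gaussian part with independent increments, Var of each increment = dt
    G = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(np.diff(times))))])
    X = 0.5 * times + G
    # points of the Poisson random measure N on (0,T] x (0,2]
    k = rng.poisson(lam * T)
    jump_t = rng.uniform(0.0, T, k)
    jump_x = rng.uniform(0.0, 2.0, k)
    for tj, xj in zip(jump_t, jump_x):
        X[times >= tj] += xj          # add every jump (big and small)
    # compensate only the small jumps: int_{0<x<=1} x nu°(dx) = int_0^1 x/2 dx = 1/4
    X -= lam * 0.25 * times
    return times, X

times, X = additive_path()
```

In the finite-activity case simulated here the "compensated sum of small jumps" is an ordinary sum minus a linear compensator; for infinite-activity Lévy measures the small jumps would need a truncation and an L²-limit, as in the theorem.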
Remark 1
-
1.
The function t ↦ ρ t is continuous and increasing, and ρ 0 = 0 (Sato [22, Theorem 9.8]), and hence it defines a σ-finite and non-atomic measure on \(\mathbb{R}_{+}\), denoted by ρ. The Gaussian process {G t , t ≥ 0} introduced above defines through
$$\displaystyle{G\big((s,t]\big) = G_{t} - G_{s},\ 0 \leq s < t,}$$a centered Gaussian random measure G on \(\{B \in \mathcal{B}(\mathbb{R}_{+}),\ \rho (B) < \infty \}\) with control measure ρ (see Peccati and Taqqu [20, p. 63] for this definition). In the Gaussian Malliavin calculus terminology this is called a white noise measure (Nualart [17, p. 8]). This will be important when we define Malliavin derivatives with respect to X.
-
2.
Remember that a Lévy process is an additive process with stationary increments. In this case, m t = m ∘ t, for some \(m^{\circ }\in \mathbb{R}\), ρ t = ρ ∘ t, for some ρ ∘ ≥ 0, and the Gaussian process {G t , t ≥ 0} can be written as \(G_{t} = \sqrt{\rho ^{\circ }}\,W_{t}\), where {W t , t ≥ 0} is a standard Brownian motion. Also, ν t = t ν ∘ for some Lévy measure ν ∘, and the measure ν is simply the product measure of the Lebesgue measure on (0, ∞) and ν ∘: ν(d(t, x)) = d t ν ∘(dx).
-
3.
The notations are slightly different from Sato [22] and Solé et al. [24], where ν denotes the Lévy measure of a Lévy process, that in the previous point we write ν ∘. Also, our measure ν on \((0,\infty ) \times \mathbb{R}_{0}\) defined in (1) is denoted by Sato [22] by \(\tilde{\nu }\).
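The white noise interpretation in Remark 1(1) can be illustrated numerically: increments of G over disjoint intervals are independent centered Gaussians whose variances are given by the control measure ρ. A minimal sketch with the hypothetical, non-stationary choice ρ_t = t²:

```python
import numpy as np

rng = np.random.default_rng(1)

# control measure rho([0,t]) = t**2 (an illustrative, non-stationary choice)
rho = lambda t: t ** 2

def G_increments(breaks, n_samples):
    """Sample (G((s_0,s_1]), ..., G((s_{k-1},s_k])) for the Gaussian random
    measure with control measure rho: independent N(0, rho(t) - rho(s))."""
    var = np.diff([rho(b) for b in breaks])
    return rng.normal(0.0, np.sqrt(var), size=(n_samples, len(var)))

inc = G_increments([0.0, 0.5, 1.0], 100_000)
# Var G((0,.5]) = 0.25, Var G((.5,1]) = 0.75; disjoint increments uncorrelated
print(inc.var(axis=0), np.corrcoef(inc.T)[0, 1])
```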
2.2 Wiener–Itô Chaos Expansion
The well-known Wiener–Itô chaos expansion of square integrable functionals of a Brownian motion can be extended to the square integrable functionals of a process with independent increments. This was another major contribution of Itô [5]; indeed, Itô proved that result for Lévy processes, but his proof is written in very general terms and also covers the case of processes with independent increments. That chaos expansion determines a Fock space structure on \(L^{2}(\mathbb{P})\), which is the basis of our Malliavin calculus development.
With the preceding notations, define a measure μ on \(\big(\mathbb{R}_{+} \times \mathbb{R},\mathcal{B}(\mathbb{R}_{+} \times \mathbb{R})\big)\) by
$$\displaystyle{ \mu (C) =\rho \big (\{t \geq 0:\, (t,0) \in C\}\big) +\int _{C\cap ((0,\infty )\times \mathbb{R}_{0})}x^{2}\,\nu (\mathrm{d}(t,x)),\qquad C \in \mathcal{B}(\mathbb{R}_{+} \times \mathbb{R}). }$$(3)
It is non-atomic since ν and ρ are non-atomic. Moreover, for a bounded set \(B \in \mathcal{B}(\mathbb{R})\),
and the last integral on the right-hand side is equal to
where C is a constant. Hence, the measure μ is locally finite, and, in particular, σ-finite.
Extending Itô [5] to this context, we can define a random measure (in the sense of vector measures, see Appendix 2) M on \(\big(\mathbb{R}_{+} \times \mathbb{R},\mathcal{B}(\mathbb{R}_{+} \times \mathbb{R})\big)\) with control measure μ: for \(C \in \mathcal{B}(\mathbb{R}_{+} \times \mathbb{R})\), such that μ(C) < ∞, write C(0) = { t ≥ 0: (t, 0) ∈ C} and \(C^{{\ast}} = C \cap \big ((0,\infty ) \times \mathbb{R}_{0}\big)\), and note that
$$\displaystyle{ \int _{C^{{\ast}}}x^{2}\,\nu (\mathrm{d}(t,x)) \leq \mu (C) < \infty, }$$
that is, \(\mathbb{1}_{C^{{\ast}}}(t,x)x \in L^{2}(\nu )\). So there exists the \(L^{2}(\mathbb{P})\) integral of that function with respect to \(\widehat{N }\) (see Appendix 1), and we can define
$$\displaystyle{ M(C) = G\big(C(0)\big) +\int _{C^{{\ast}}}x\,\widehat{N }(\mathrm{d}(t,x)). }$$(4)
We prove that M is a completely random measure; see Appendix 2, where these definitions are recalled.
Proposition 1
M is a completely random measure with control measure μ.
Proof
In this proof, all sets \(C \in \mathcal{B}(\mathbb{R}_{+} \times \mathbb{R})\) are assumed to have finite μ-measure. It is clear that \(\mathbb{E}[M(C)] = 0\), and by the independence between G and N it follows that
From (45) in the appendix it is deduced that the characteristic function of M(C) is
where α C is the measure on \(\mathbb{R}_{0}\) defined for \(A \in \mathcal{B}(\mathbb{R}_{0})\) by
By a standard approximation argument it is proved that if \(f: \mathbb{R}_{0} \rightarrow \mathbb{R}_{+}\) is measurable, then
Thus
Therefore, α C is a Lévy measure with finite second order moment. Then M(C) has an infinitely divisible law with finite variance and Lévy measure given by α C . Furthermore, if \(C_{1},C_{2} \in \mathcal{B}(\mathbb{R}_{+} \times \mathbb{R})\) are disjoint, \(\alpha _{C_{1}\cup C_{2}} =\alpha _{C_{1}} +\alpha _{C_{2}}\), and it follows that if \(C_{1},\ldots,C_{n} \in \mathcal{B}(\mathbb{R}_{+} \times \mathbb{R})\), all with finite μ-measure, are disjoint, then \(M(C_{1}),\ldots,M(C_{n})\) are independent. □
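Proposition 1 can also be checked numerically in a toy pure-jump case: with the hypothetical choices ν(d(t,x)) = λ dt ⊗ Unif(1,2)(dx), λ = 3, and C = (0,1] × [1,2], the value M(C) is the compensated sum of the jump sizes, so it should be centered with variance μ(C) = λ ∫₁² x² dx = 7. A Monte Carlo sketch:

```python
import numpy as np

rng = np.random.default_rng(2)

lam, T = 3.0, 1.0
mu_C = lam * T * (7.0 / 3.0)          # mu(C) = int_C x^2 nu(d(t,x)); E[U(1,2)^2] = 7/3
n = 50_000

samples = np.empty(n)
for i in range(n):
    k = rng.poisson(lam * T)              # number of points of N in C
    xs = rng.uniform(1.0, 2.0, k)         # their jump sizes
    samples[i] = xs.sum() - lam * T * 1.5  # M(C): compensate by the mean size 1.5

print(samples.mean(), samples.var())      # approx 0 and mu_C = 7
```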
Hence, we can define multiple Wiener–Itô integrals with respect to M, see Appendix 2. Let L s 2(μ n) be the subset of symmetric functions of L 2(μ ⊗n), and for f ∈ L s 2(μ n) denote by I n ( f) the multiple integral of f with respect to M.
The chaotic representation theorem for square integrable functionals of a Lévy process, Itô [5, Theorem 2], extends to this case with the same proof. So we have the chaotic representation property: every functional F ∈ L 2(Ω) admits a (unique) representation
$$\displaystyle{ F =\sum _{n=0}^{\infty }I_{n}(\,f_{n}),\quad f_{n} \in L_{s}^{2}(\mu ^{n}),\qquad \text{with}\quad \mathbb{E}[F^{2}] =\sum _{n=0}^{\infty }n!\,\Vert f_{n}\Vert _{L^{2}(\mu ^{\otimes n})}^{2}. }$$
From this point, we can apply all the machinery of the annihilation operators (Malliavin derivatives) and creation operators (Skorohod integrals) on Fock spaces, as exposed in Nualart and Vives [18, 19].
Remark 2
For a process without Gaussian part, we can consider the chaos expansion of a square integrable functional in terms of the multiple integrals with respect to the Poisson random measure N rather than M, and then define a Malliavin derivative and a Skorohod integral; see Di Nunno et al. [3] and the references therein. Indeed, in the second part of this chapter, dealing with random measures with independent increments, we combine that approach with the multiple integral with respect to M.
2.3 Derivative Operators
Let \(F \in L^{2}(\mathbb{P})\) with a finite chaos expansion
$$\displaystyle{ F =\sum _{n=0}^{N}I_{n}(\,f_{n}), }$$
where N < ∞. The Malliavin derivative of F is defined as the element of \(L^{2}(\mu \otimes \mathbb{P})\) given by
$$\displaystyle{ D_{z}F =\sum _{n=1}^{N}n\,I_{n-1}\big(\,f_{n}(z,\cdot )\big),\qquad z \in \mathbb{R}_{+} \times \mathbb{R}. }$$
This operator is unbounded. However, the set of elements of \(L^{2}(\mathbb{P})\) with finite chaos expansion is dense in \(L^{2}(\mathbb{P})\), and the operator D is closable; the domain of D, denoted by dom D, coincides with the set of \(F \in L^{2}(\mathbb{P})\) with chaotic decomposition
$$\displaystyle{ F =\sum _{n=0}^{\infty }I_{n}(\,f_{n}) }$$
such that
$$\displaystyle{ \sum _{n=1}^{\infty }n\,n!\,\Vert f_{n}\Vert _{L^{2}(\mu ^{\otimes n})}^{2} < \infty. }$$
The Malliavin derivative of such an F is given by
$$\displaystyle{ D_{z}F =\sum _{n=1}^{\infty }n\,I_{n-1}\big(\,f_{n}(z,\cdot )\big), }$$
where the convergence of the series is in \(L^{2}(\mu \otimes \mathbb{P})\).
The domain dom D is a Hilbert space with the scalar product
$$\displaystyle{ \langle F,G\rangle _{\mathrm{dom}\,D} = \mathbb{E}[FG] + \mathbb{E}\Big[\int _{\mathbb{R}_{+}\times \mathbb{R}}D_{z}F\,D_{z}G\,\mu (\mathrm{d}z)\Big]. }$$(5)
For all these properties we refer to Nualart and Vives [18].
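On the first chaos the annihilation property reads D_z I_1(f) = f(z). On the jump part, anticipating the increment-quotient interpretation of Sect. 2.5, this can be seen pathwise: adding a jump (t, x) to the configuration changes I_1(f) by f(t, x)·x, so the quotient by x recovers f. A toy sketch (the Lévy measure λ dt ⊗ Unif(1,2)(dx) on (0,1] × [1,2] and the kernel f are hypothetical choices):

```python
import math

lam = 3.0
f = lambda t, x: math.sin(t) * x     # a kernel on (0,1] x [1,2]

# compensator of I_1(f): int f(t,x)*x nu(d(t,x)) = lam * int_0^1 sin t dt * int_1^2 x^2 dx
compensator = lam * (1.0 - math.cos(1.0)) * (7.0 / 3.0)

def I1(omega):
    """First chaos I_1(f) w.r.t. M on the jump part: sum of f(t,x)*x over the
    configuration omega, minus the deterministic compensator."""
    return sum(f(t, x) * x for t, x in omega) - compensator

def D(omega, t, x):
    """Increment quotient (add-one-cost) derivative at z = (t, x)."""
    return (I1(omega + [(t, x)]) - I1(omega)) / x

omega = [(0.2, 1.5), (0.7, 1.1)]     # a fixed toy jump configuration
z = (0.5, 1.3)
print(D(omega, *z), f(*z))           # equal up to floating-point rounding
```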
Given the form of the measure μ, for \(f: (\mathbb{R}_{+} \times \mathbb{R})^{n} \rightarrow \mathbb{R}\) measurable, positive or μ ⊗n integrable, we have
As a consequence, when ρ ≠ 0 and ν ≠ 0, it is natural to consider two more spaces: Let dom D 0 (if ρ ≠ 0) be the set of \(F \in L^{2}(\mathbb{P})\) with decomposition F = ∑ n = 0 ∞ I n ( f n ) such that
For F ∈ dom D 0 we can define the square integrable stochastic process
where the convergence is in \(L^{2}(\rho \otimes \mathbb{P})\). Analogously, if ν ≠ 0, let dom D J be the set of \(F \in L^{2}(\mathbb{P})\) such that
and for F ∈ dom D J, define
where the convergence is in \(L^{2}\big((0,\infty ) \times \mathbb{R}_{0}\times \varOmega,x^{2}\,\nu (\mathrm{d}(t,x)) \otimes \mathbb{P}\big).\)
It is clear that when both ρ ≠ 0 and ν ≠ 0, then dom D = dom D 0 ∩dom D J.
2.4 The Skorohod Integral
Following the scheme of Nualart and Vives [18], we can define a creation operator (Skorohod—or Kabanov–Skorohod—integral) in the following way: let \(g \in L^{2}\big(\mu \otimes \mathbb{P})\), which has a chaotic decomposition
$$\displaystyle{ g(z) =\sum _{n=0}^{\infty }I_{n}\big(\,f_{n}(z,\cdot )\big), }$$
where \(f_{n} \in L^{2}\big(\mu ^{\otimes (n+1)}\big)\) is symmetric in the n last variables. Denote by \(\hat{f } _{n}\) the symmetrization in all n + 1 variables. If
$$\displaystyle{ \sum _{n=0}^{\infty }(n + 1)!\,\Vert \hat{f } _{n}\Vert _{L^{2}(\mu ^{\otimes (n+1)})}^{2} < \infty, }$$(7)
define the Skorohod integral of g by
$$\displaystyle{ \delta (g) =\sum _{n=0}^{\infty }I_{n+1}(\hat{f } _{n}), }$$
where the convergence is in \(L^{2}(\mathbb{P})\). Denote by dom δ the set of g that satisfy (7). The operator δ is the dual of the operator D, that is, a process \(g \in L^{2}(\mu \otimes \mathbb{P})\) belongs to dom δ if and only if there is a constant C such that for all F ∈ dom D,
$$\displaystyle{ \Big\vert \mathbb{E}\Big[\int _{\mathbb{R}_{+}\times \mathbb{R}}D_{z}F\,g(z)\,\mu (\mathrm{d}z)\Big]\Big\vert \leq C\,\Vert F\Vert _{L^{2}(\mathbb{P})}. }$$
If g ∈ dom δ, then δ(g) is the element of \(L^{2}(\mathbb{P})\) characterized by the duality (or integration by parts) formula
$$\displaystyle{ \mathbb{E}\big[F\,\delta (g)\big] = \mathbb{E}\Big[\int _{\mathbb{R}_{+}\times \mathbb{R}}D_{z}F\,g(z)\,\mu (\mathrm{d}z)\Big] }$$(8)
for any F ∈ dom D.
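The duality formula (8) can be checked by Monte Carlo in the simplest pure-jump case: a standard Poisson process on (0, 1] with rate λ and jump size x = 1, so that μ = λ dt on {x = 1}, with a deterministic integrand g, for which δ(g) is the compensated first-order integral. All concrete choices below (λ, g, the functional φ) are illustrative, not from the text.

```python
import numpy as np

rng = np.random.default_rng(4)

lam, n = 2.0, 100_000
phi = lambda y: np.exp(-y)      # smooth bounded functional F = phi(X_1)
g = lambda t: t                 # deterministic integrand; int_0^1 g dt = 1/2

lhs = np.empty(n)               # samples of F * delta(g)
rhs = np.empty(n)               # samples of int D_{t,1}F g(t) mu(dt)
for i in range(n):
    k = rng.poisson(lam)                      # X_1 = number of jumps
    ts = rng.uniform(0.0, 1.0, k)             # jump times
    delta_g = g(ts).sum() - lam * 0.5         # Skorohod = compensated integral here
    lhs[i] = phi(k) * delta_g
    # D_{t,1}F = phi(k+1) - phi(k) for every t in (0,1]; integrate against lam*g(t)dt
    rhs[i] = (phi(k + 1) - phi(k)) * lam * 0.5

print(lhs.mean(), rhs.mean())   # both approximate the same value
```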
For more properties of the operator δ in the Lévy processes case, including its relationship with the stochastic integral with respect to the measure M, and a Clark–Ocone–Haussmann formula, we refer to Solé et al. [24, 25].
2.5 Derivation of Smooth Functionals
Following an interesting approach of Geiss and Laukkarinen [4] (in the Lévy processes context) we will prove the following formulas for the derivative of smooth functionals: denote by \(\mathcal{C}_{b}^{\infty }(\mathbb{R}^{n})\) the set of infinitely differentiable functions such that the function and all its partial derivatives are bounded. Let \(f \in \mathcal{C}_{b}^{\infty }(\mathbb{R}^{n})\) and consider
$$\displaystyle{ F = f\big(X_{t_{1}},\ldots,X_{t_{n}}\big). }$$(9)
We will prove that F ∈ dom D and
$$\displaystyle{ D_{t,0}F =\sum _{i=1}^{n}\partial _{i}f\big(X_{t_{1}},\ldots,X_{t_{n}}\big)\,\mathbb{1}_{[0,t_{i}]}(t), }$$(10)
and for x ≠ 0,
$$\displaystyle{ D_{t,x}F = \frac{f\big(X_{t_{1}} + x\mathbb{1}_{\{t\leq t_{1}\}},\ldots,X_{t_{n}} + x\mathbb{1}_{\{t\leq t_{n}\}}\big) - f\big(X_{t_{1}},\ldots,X_{t_{n}}\big)}{x}. }$$(11)
Note the following relationship between both derivatives of a smooth functional:
$$\displaystyle{ D_{t,0}F =\lim _{x\rightarrow 0}D_{t,x}F. }$$
Geiss and Laukkarinen [4] (in the Lévy processes case) give a direct proof of (10) and (11) by using Fourier inversion and a Clark–Ocone–Haussmann type formula. They also show that the random variables of the form (9) are dense in \(L^{2}(\mathbb{P})\) with respect to the norm induced by (5), and hence it is possible to define the Malliavin derivatives starting from (10) and (11). To prove these formulas in our context we follow an alternative procedure: we first prove them in a canonical space associated with the process with independent increments, and later we transfer them to the general case.
2.5.1 Malliavin Derivatives in the Canonical Space
Since the Gaussian part and the jumps part of X are independent, we can construct a version of X in a canonical probability space of the form \(\big(\varOmega _{G} \times \varOmega _{N},\mathcal{A}_{G} \otimes \mathcal{A}_{N}, \mathbb{P}_{G} \otimes \mathbb{P}_{N}\big)\) where
-
\((\varOmega _{G},\mathcal{A}_{G}, \mathbb{P}_{G})\) is the canonical space associated with the Gaussian continuous process G; specifically, \(\varOmega _{G} = \mathcal{C}(\mathbb{R}_{+})\) is the space of continuous functions on \(\mathbb{R}_{+}\), \(\mathcal{A}_{G}\) the Borel σ-algebra generated by the topology of the uniform convergence on compact sets, and \(\mathbb{P}_{G}\) the probability that makes the projections
$$\displaystyle\begin{array}{rcl} & & G_{t}^{{\ast}}:\varOmega _{ G} \rightarrow \mathbb{R} {}\\ & & \qquad f\mapsto f(t) {}\\ \end{array}$$a process with the same law as {G t , t ≥ 0}.
-
\((\varOmega _{N},\mathcal{A}_{N}, \mathbb{P}_{N})\) is a canonical space associated with the Poisson random measure N. Essentially, Ω N is formed by infinite sequences \(\omega =\big ((t_{1},x_{1}),(t_{2},x_{2}),\ldots \big) \in \big ((0,\infty ) \times \mathbb{R}_{0}\big)^{\mathbb{N}}\) (see Appendix 3 for that construction), where t i are the instants of jump of the process, and x i the size of the corresponding jump. In this space, under \(\mathbb{P}_{N}\), the mapping defined by
$$\displaystyle{N^{{\ast}}(\omega ) =\sum \delta _{ (t_{j},x_{j})},\ \text{if}\ \omega =\big ((t_{1},x_{1}),(t_{2},x_{2}),\ldots \big)}$$is a Poisson random measure with intensity measure ν.
Define
$$\displaystyle{ J_{t}^{{\ast}} =\int \limits _{ (0,t]\times \{\vert x\vert >1\}}x\,N^{{\ast}}(\mathrm{d}(s,x)) +\int \limits _{ (0,t]\times \{0<\vert x\vert \leq 1\}}x\,\widehat{N } ^{{\ast}}(\mathrm{d}(s,x)), }$$where \(\widehat{N } ^{{\ast}} = N^{{\ast}}-\nu.\) Then J ∗ = { J t ∗, t ≥ 0} is a process with independent increments with generating triplets (0, 0, ν t ).
-
Finally, in the product space Ω G ×Ω N we write
$$\displaystyle{X_{t}^{{\ast}} = m_{ t} + G_{t}^{{\ast}} + J_{ t}^{{\ast}},}$$and call it the canonical version of the process X.
2.5.2 Derivative \(\boldsymbol{D}_{t,0}\)
In order to compute the derivative D t, 0 F for F ∈ L 2(Ω G ×Ω N ), from the isometry
$$\displaystyle{ L^{2}(\varOmega _{G} \times \varOmega _{N})\, \cong \, L^{2}\big(\varOmega _{G};L^{2}(\varOmega _{N})\big), }$$
we can consider F as an element of L 2(Ω G ; L 2(Ω N )) and apply the theory of Malliavin derivatives of random variables with values in a separable Hilbert space following Nualart [17, p. 31]. This derivative coincides with D t, 0. This is proved from the fact that, by definition, an L 2(Ω N )-valued smooth random variable has the form
$$\displaystyle{ F =\sum _{i=1}^{n}F_{i}\,H_{i}, }$$
where F i are standard smooth variables (see Nualart [17, p. 25]) and H i ∈ L 2(Ω N ). Define the Malliavin derivative of F as
$$\displaystyle{ D_{t}F =\sum _{i=1}^{n}(D_{t}F_{i})\,H_{i}. }$$
This definition is extended to a subspace dom D ∗ by a density argument.
Proposition 2
dom D ∗ ⊂ dom D 0 , and for F ∈dom D ∗ ,
$$\displaystyle{ D_{t,0}F = D_{t}F,\qquad \rho \otimes \mathbb{P}\text{-a.e.} }$$(13)
Proof
First consider the functionals of the form
where \(C_{1},\ldots,C_{m} \in \mathcal{B}((0,\infty ) \times \mathbb{R}_{0})\) are bounded, pairwise disjoint, and at strictly positive distance from the t-axis, and \(B_{1},\ldots,B_{k} \in \mathcal{B}(\mathbb{R}_{+})\) are pairwise disjoint, with finite ρ measure. Itô [5] shows that the family of such functionals constitutes a fundamental set in \(L^{2}(\mathbb{P}_{G} \otimes \mathbb{P}_{N})\). Moreover, Itô shows that such an F can be written as a sum of multiple integrals:
and then the derivatives are easy to compute, proving equality (13), which is extended to dom D ∗ by density. See Solé et al. [24]. □
From the above proposition and the properties of the Malliavin derivatives in the Gaussian white noise case, it follows that the first rule of differentiation (10) in the canonical space holds:
Proposition 3
Let \(F = f\big(X_{t_{1}}^{{\ast}},\ldots,X_{t_{n}}^{{\ast}}\big)\) where \(f \in \mathcal{C}_{b}^{\infty }(\mathbb{R}^{n})\) . Then F ∈ dom D 0 and
$$\displaystyle{ D_{t,0}F =\sum _{i=1}^{n}\partial _{i}f\big(X_{t_{1}}^{{\ast}},\ldots,X_{t_{n}}^{{\ast}}\big)\,\mathbb{1}_{[0,t_{i}]}(t). }$$
2.5.3 Derivative \(\boldsymbol{D_{t,x},\,x\neq 0}\)
Consider ω = (ω G, ω N) ∈ Ω G ×Ω N , \(\omega ^{N} =\big ((t_{1},x_{1}),(t_{2},x_{2}),\ldots \big) \in \big ((0,\infty ) \times \mathbb{R}_{0}\big)^{\mathbb{N}}\). Given \(z = (t,x) \in (0,\infty ) \times \mathbb{R}_{0}\), we add to ω N a jump of size x at instant t, and call the new element \(\omega _{z}^{N} =\big ((t_{1},x_{1}),(t_{2},x_{2}),\ldots,(t,x),\ldots \big)\), and write ω z = (ω G, ω z N). For a random variable F, we define the quotient operator
$$\displaystyle{ \varPsi _{t,x}F(\omega ) = \frac{F(\omega _{z}) - F(\omega )}{x},\qquad z = (t,x). }$$
See Solé et al. [24] for the measurability properties of this function. By iteration, we define
$$\displaystyle{ \varPsi _{z_{1},\ldots,z_{n}}^{n}F =\varPsi _{z_{1}}\big(\varPsi _{z_{2},\ldots,z_{n}}^{n-1}F\big). }$$
Since this function only depends on the part ω N, we can assume that X does not have a Gaussian part.
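The quotient operator is easy to realize concretely in the pure-jump case. In the toy sketch below (hypothetical configuration and parameters, with m_t = 0 and no Gaussian part), X_s is the sum of the jump sizes up to time s, and adding a jump (t, x) with t ≤ s shifts X_s by exactly x, so the quotient operator applied to F = f(X_s) reproduces the increment quotient of rule (11):

```python
import math

def X(omega, s):
    """X_s for a pure-jump path given by the configuration omega = [(t_i, x_i)]."""
    return sum(x for t, x in omega if t <= s)

def Psi(F, omega, t, x):
    """Quotient operator: (F(omega with an extra jump (t, x)) - F(omega)) / x."""
    return (F(omega + [(t, x)]) - F(omega)) / x

f = math.tanh                    # a smooth bounded function
s = 1.0
F = lambda omega: f(X(omega, s))

omega = [(0.3, 0.5), (0.8, -1.2), (1.4, 2.0)]   # the jump at time 1.4 is after s
t, x = 0.6, 0.7                  # add a jump of size x at time t <= s
lhs = Psi(F, omega, t, x)
rhs = (f(X(omega, s) + x) - f(X(omega, s))) / x  # increment quotient for f(X_s)
print(lhs, rhs)                  # equal up to floating-point rounding
```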
In the following lemma we will consider a set Δ of the form (m, m + 1] ×{ x: n < | x | ≤ n + 1} or (m, m + 1] ×{ x: 1∕(n + 1) < | x | ≤ 1∕n}, for some m ≥ 0 and n ≥ 1. Then ν(Δ) < ∞ and for every k ≥ 1, \(\int _{\varDelta }\vert x\vert ^{k}\,\nu (\mathrm{d}(t,x)) < \infty.\) The Poisson random measure N ∗ restricted to Δ has finite intensity measure (from now on, in this section, we suppress the ∗ to simplify the notations). The ordinary n-fold product measure is denoted by N ⊗n, and by N (n) the measure
$$\displaystyle{ N^{(n)}(D) = N^{\otimes n}\big(D_{\neq }\big), }$$
where \(D \in \mathcal{B}(\varDelta ^{n})\) and D ≠ is the set of elements \((z_{1},\ldots,z_{n}) \in D\) such that z i ≠ z j if i ≠ j. The measure defined by \(\mathbb{E}[N^{(n)}(D)]\) is called the n-factorial moment measure of the Poisson random measure N (see Last [11, formula (1.9)] or Schneider and Weil [23, p. 55]). For F ∈ L 2(Ω N ), for every \(D \in \mathcal{B}(\varDelta ^{n})\), the following integrals are finite and
$$\displaystyle{ \mathbb{E}\Big[\int _{D}F(\omega _{z_{1},\ldots,z_{n}})\,N^{(n)}(\mathrm{d}(z_{1},\ldots,z_{n}))\Big] =\int _{D}\mathbb{E}\big[F(\omega _{z_{1},\ldots,z_{n}})\big]\,\nu ^{\otimes n}(\mathrm{d}(z_{1},\ldots,z_{n})). }$$(14)
To deduce that equality, note that we can write F = f(N), for some f defined on the set of integer valued (including ∞) locally finite measures (see Part 2). For \(\omega = (z_{1},z_{2},\ldots )\), \(N(\omega ) =\sum _{i}\delta _{z_{i}}\), and hence \(N(\omega _{z}) =\sum _{i}\delta _{z_{i}} +\delta _{z}\). Then, equality (14) is just a reformulation of a generalized Mecke formula (see Last [11, formula (1.10)]).
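For n = 1, the Mecke formula can be checked by simulation. The sketch below uses a Poisson process on (0, 1] with rate λ (points z = t, jump sizes suppressed) and a test function h depending on the point and on the number of points; all concrete choices are illustrative. The left side integrates h against N itself; the right side adds one extra point and integrates against the intensity.

```python
import numpy as np

rng = np.random.default_rng(5)

# Mecke formula, n = 1, for a Poisson process on (0,1] with rate lam:
#   E[ sum_{t in N} h(t, N) ] = lam * int_0^1 E[ h(t, N + delta_t) ] dt
lam, n = 2.0, 100_000
h = lambda t, k: t * np.exp(-k)   # h depends on the point t and on k = N((0,1])

lhs = np.empty(n)
rhs = np.empty(n)
for i in range(n):
    k = rng.poisson(lam)
    ts = rng.uniform(0.0, 1.0, k)
    lhs[i] = h(ts, k).sum()                  # integrate h against N itself
    # one uniform draw t estimates the dt-integral; N + delta_t has k + 1 points
    rhs[i] = lam * h(rng.uniform(), k + 1)

print(lhs.mean(), rhs.mean())                # both approximate the same value
```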
As a consequence, we have
Lemma 1
Let \(F_{k},F \in L^{2}(\mathbb{P}_{N})\) such that limk F k = F in \(L^{2}(\mathbb{P}_{N})\) . Then for every \(D \in \mathcal{B}(\varDelta ^{n})\)
Proof
The proof is very similar to the proof of Lemma 2 of Last [11]. It suffices to show that for every \(m = 1,\ldots,n,\)
where z i = (t i , x i ). Since Δ is bounded and far from 0, the \(x_{1},\ldots,x_{n}\) in the denominator can be suppressed. By (14),
which goes to 0 as k → ∞. □
Proposition 4
Let \(F \in L^{2}(\mathbb{P}_{G} \otimes \mathbb{P}_{N})\) such that
$$\displaystyle{ \mathbb{E}\Big[\int _{(0,\infty )\times \mathbb{R}_{0}}\vert \varPsi _{t,x}F\vert ^{2}\,x^{2}\,\nu (\mathrm{d}(t,x))\Big] < \infty. }$$(15)
Then F ∈dom D J and
$$\displaystyle{ D_{t,x}F =\varPsi _{t,x}F,\qquad x^{2}\,\nu (\mathrm{d}(t,x)) \otimes \mathbb{P}\text{-a.e.} }$$
Proof
To simplify the notations we write μ(d(t, x)) rather than x 2 ν(d(t, x)). First it is proved that for f ∈ L s 2(μ n),
$$\displaystyle{ \varPsi _{z}I_{n}(\,f) = n\,I_{n-1}\big(\,f(z,\cdot )\big). }$$(16)
To prove this, first, instead of f we consider \(f\mathbb{1}_{\varDelta }^{\otimes n}\). Thus, as before, the multiple integrals (with respect to M) can be computed pathwise, and the above equality is easily checked, and then extended to f. Moreover, it is proved that the operator Ψ is closed, again working first with the restriction on Δ. Hence, if F ∈ dom D J, then DF = Ψ F. For the details see Solé et al. [24].
Note that as a consequence of (16),
$$\displaystyle{ \mathbb{E}\big[\varPsi _{z_{1},\ldots,z_{n}}^{n}I_{n}(\,f)\big] = n!\,f(z_{1},\ldots,z_{n}),\qquad \mu ^{\otimes n}\text{-a.e.} }$$
This property is extended to a general \(F =\sum _{ n=0}^{\infty }I_{n}(\,f_{n}) \in L^{2}(\mathbb{P})\) to get a Stroock type formula
$$\displaystyle{ f_{n}(z_{1},\ldots,z_{n}) = \frac{1}{n!}\,\mathbb{E}\big[\varPsi _{z_{1},\ldots,z_{n}}^{n}F\big]. }$$(17)
This is proved considering F k = ∑ n = 0 k I n ( f n ). We have that for k ≥ n, \(f_{n} = \frac{1} {n!}\,\mathbb{E}\big[\varPsi ^{n}F_{ k}\big]\). By Lemma 1, for every \(D \in \mathcal{B}(\varDelta ^{n})\)
Then
and also μ ⊗n-a.e. And hence the equality holds on \((0,\infty ) \times \mathbb{R}_{0}\) because it is a countable union of sets of type Δ.
Now assume that condition (15) holds. Then
with
However, thanks to (17), the kernel g n is related to the kernel f n+1 due to
and by (18),
which is the condition for F ∈ dom D J. □
We can deduce the second rule of differentiation (11):
Proposition 5
Let \(F = f\big(X_{t_{1}}^{{\ast}},\ldots,X_{t_{n}}^{{\ast}}\big)\) where \(f \in \mathcal{C}_{b}^{\infty }(\mathbb{R}^{n})\) . Then F ∈dom D J and for x ≠ 0,
$$\displaystyle{ D_{t,x}F = \frac{f\big(X_{t_{1}}^{{\ast}} + x\mathbb{1}_{\{t\leq t_{1}\}},\ldots,X_{t_{n}}^{{\ast}} + x\mathbb{1}_{\{t\leq t_{n}\}}\big) - f\big(X_{t_{1}}^{{\ast}},\ldots,X_{t_{n}}^{{\ast}}\big)}{x}. }$$
Proof
To shorten the notations we suppress the star in X t ∗. We consider the case \(F = f\big(X_{s}\big)\); the general case is similar. We have
By Proposition 4 it suffices to prove that the following integral is finite:
To this end, by the mean value theorem, there is a random point Y such that
Since f′ is bounded, say by C, using that ν s is a Lévy measure, for every ω,
where C′ is a constant independent of ω. Similarly,
where C″ and C‴ are constants independent of ω. □
2.5.4 Transfer of the Derivative Rules from the Canonical Space to an Arbitrary Space
Recall that we write a star to denote random variables, measures, processes, or operators in the canonical space. We consider a process with independent increments X on \((\varOmega,\mathcal{A}, \mathbb{P})\) with Poisson measure N and independent Gaussian part G, with the same law as N ∗ and G ∗ respectively, related to the additive process X ∗ constructed in the canonical space \(\big(\varOmega _{G} \times \varOmega _{N},\mathcal{A}_{G} \otimes \mathcal{A}_{N}, \mathbb{P}_{G} \otimes \mathbb{P}_{N}\big)\). Note that the generating triplets of X and X ∗ coincide, and hence the measures μ and μ ∗ (see (3)) are the same. Moreover, the Fock space structure of \(L^{2}(\mathbb{P})\) allows us to transfer some properties of the derivatives and Skorohod integrals in the canonical space to the space \((\varOmega,\mathcal{A}, \mathbb{P})\). This can be done thanks to the fact that to a square integrable random variable \(F \in L^{2}(\mathbb{P})\) with
we can associate \(F^{{\ast}}\in L^{2}(\mathbb{P}_{G} \times \mathbb{P}_{N})\) given by
That is, the kernels of F and F ∗ are the same. In a similar way, since \(g \in L^{2}\big(\mathbb{R}_{+} \times \mathbb{R}\times \varOmega,\mathcal{B}(\mathbb{R}_{+} \times \mathbb{R}) \otimes \mathcal{A},\mu \otimes \mathbb{P})\) has a chaotic decomposition
$$\displaystyle{ g(z) =\sum _{n=0}^{\infty }I_{n}\big(\,f_{n}(z,\cdot )\big), }$$
where \(f_{n} \in L^{2}\big(\mu ^{\otimes (n+1)}\big)\) is symmetric in the n last variables, we can transfer from g to g ∗, and if g ∈ dom δ, then g ∗ ∈ dom δ ∗. More specifically,
Lemma 2
With the previous notations, for every \(t_{1},\ldots,t_{n} \in \mathbb{R}_{+}\) and \(F \in L^{2}(\mathbb{P})\) , we have that
$$\displaystyle{ \big(X_{t_{1}},\ldots,X_{t_{n}},F\big)\ \mathop{=}\limits ^{\mathcal{L}}\ \big(X_{t_{1}}^{{\ast}},\ldots,X_{t_{n}}^{{\ast}},F^{{\ast}}\big), }$$(20)
where \(\mathop{=}\limits ^{\mathcal{L}}\) means equality in law.
Proof
We undertake the proof in several steps:
- Step 1 :
-
Let \(F =\sum _{ n=0}^{\infty }I_{n}(\,f_{n}) \in L^{2}(\mathbb{P})\). We first prove that F and F ∗ have the same law:
$$\displaystyle{ \sum _{n=0}^{\infty }I_{ n}(\,f_{n})\ \mathop{=}\limits ^{\mathcal{L}}\ \sum _{ n=0}^{\infty }I_{ n}^{{\ast}}(\,f_{ n}). }$$(21)In fact, if the sum has a finite number of terms, and the f n are simple (see the appendix) then the equality in law is clear. Equality (21) for finite sums with arbitrary kernels follows by \(L^{2}(\mathbb{P})\)-convergence. The infinite sum case is proved in a similar fashion.
- Step 2 :
-
For \(F,G \in L^{2}(\mathbb{P})\) we prove that
$$\displaystyle{(F,G)\ \mathop{=}\limits ^{\mathcal{L}}\ (F^{{\ast}},G^{{\ast}}).}$$We use the Cramér–Wold device. Let \(F =\sum _{n=0}^{\infty }I_{n}(\,f_{n})\) and \(G =\sum _{n=0}^{\infty }I_{n}(g_{n})\). For \(a,b \in \mathbb{R}\),
$$\displaystyle{ aF + bG =\sum _{ n=0}^{\infty }I_{ n}\big(af_{n} + bg_{n}\big)\ \mathop{=}\limits ^{\mathcal{L}}\ \sum _{ n=0}^{\infty }I_{ n}^{{\ast}}\big(af_{ n} + bg_{n}\big) = aF^{{\ast}} + bG^{{\ast}}. }$$ - Step 3 :
-
To prove (20) we consider n = 1; the general case is similar. First assume that the process X is square integrable, then \(\int _{R_{0}}x^{2}\,\nu _{t}(\mathrm{d}x) < \infty \), and thus \(\int _{(0,t]\times \mathbb{R}_{0}}x^{2}\,\nu (\mathrm{d}(s,x)) < \infty.\) This implies that the representation (2) admits the form:
$$\displaystyle\begin{array}{rcl} X_{t}& =& m_{t} + G_{t} +\int \limits _{(0,t]\times \{\vert x\vert >1\}}x\,\nu (\mathrm{d}(s,x)) +\int \limits _{(0,t]\times \mathbb{R}_{0}}x\,\widehat{N } (\mathrm{d}(s,x)) {}\\ & =& m_{t} +\int \limits _{(0,t]\times \{\vert x\vert >1\}}x\,\nu (\mathrm{d}(s,x)) + I_{1}\big(\mathbb{1}_{[0,t]\times \{0\}}+\mathbb{1}_{(0,t]\times \mathbb{R}_{0}}\big), {}\\ \end{array}$$and the property follows from step 2. In the general case, define
$$\displaystyle{ X_{t}^{(n)} = m_{ t} + G_{t} +\int \limits _{(0,t]\times \{1<\vert x\vert \leq n\}}x\,N(\mathrm{d}(s,x)) +\int \limits _{(0,t]\times \{0<\vert x\vert \leq 1\}}x\,\widehat{N } (\mathrm{d}(s,x)) }$$(22)$$\displaystyle{ = m_{t} +\int \limits _{(0,t]\times \{1<\vert x\vert \leq n\}}x\,\nu (\mathrm{d}(s,x)) + I_{1}\big(\mathbb{1}_{[0,t]\times \{0\}}+\mathbb{1}_{(0,t)\times \{0<\vert x\vert \leq n\}}\big). }$$(23)By expression (23) and Step 2, \(\big(X_{t}^{(n)},F\big)\ \mathop{=}\limits ^{\mathcal{L}}\ \big(X_{t}^{(n){\ast}},F^{{\ast}}\big)\). Since \(\nu \big((0,t] \times \{\vert x\vert > 1\}\big) < \infty \), we can apply Proposition 10 in the appendix to the first integral in the expression (22), and we deduce that when n → ∞, X t (n) → X t in probability, and the lemma follows. □
To transfer the derivative rules we will use the duality coupling (8). By construction, \(F \in L^{2}(\mathbb{P})\) belongs to dom D if and only if there is a constant C such that for all g ∈ dom δ,
$$\displaystyle{ \big\vert \mathbb{E}\big[F\,\delta (g)\big]\big\vert \leq C\,\Vert g\Vert _{L^{2}(\mu \otimes \mathbb{P})}. }$$(24)
If F ∈ dom D, then DF is the element of \(L^{2}(\mu \otimes \mathbb{P})\) characterized by
$$\displaystyle{ \mathbb{E}\Big[\int _{\mathbb{R}_{+}\times \mathbb{R}}D_{z}F\,g(z)\,\mu (\mathrm{d}z)\Big] = \mathbb{E}\big[F\,\delta (g)\big] }$$(25)
for every g ∈ dom δ. That is, we use (8) to prove a property of the derivative from the Skorohod integral.
Proposition 6
Let \(F = f\big(X_{t_{1}},\ldots,X_{t_{n}}\big)\) with \(f \in \mathcal{C}_{b}^{\infty }(\mathbb{R}^{n})\) . Then F ∈dom D and
$$\displaystyle{ D_{t,0}F =\sum _{i=1}^{n}\partial _{i}f\big(X_{t_{1}},\ldots,X_{t_{n}}\big)\,\mathbb{1}_{[0,t_{i}]}(t), }$$
and for x ≠ 0,
$$\displaystyle{ D_{t,x}F = \frac{f\big(X_{t_{1}} + x\mathbb{1}_{\{t\leq t_{1}\}},\ldots,X_{t_{n}} + x\mathbb{1}_{\{t\leq t_{n}\}}\big) - f\big(X_{t_{1}},\ldots,X_{t_{n}}\big)}{x}. }$$
Proof
We are going to prove that F ∈ dom D. For this objective, let g ∈ dom δ and consider g ∗ to have the same kernels as g, and then g ∗ ∈ dom δ ∗ and satisfies inequality (24). Then, since we have proved that \(f\big(X_{t_{1}}^{{\ast}},\ldots,X_{t_{n}}^{{\ast}}\big) \in \text{dom}\,D\), by Lemma 2,
where \(\mathbb{E}^{{\ast}}\) is the expectation in Ω G ×Ω N . Now in an identical way, we can show that
satisfies formula (25). □
2.6 Characterization of Processes with Independent Increments by Duality Formulas
Following Murr [15] we prove that the duality formula (8) characterizes the law of a process with independent increments. We restrict ourselves to real processes, while Murr [15] studies the vector case. Like Murr [15], we assume that the process is integrable; this is equivalent to \(\int _{\{\vert x\vert >1\}}\vert x\vert \,\mathrm{d}\nu _{t}(x) < \infty \). Then, as in the proof of Lemma 2, we can write the following representation:
$$\displaystyle{ X_{t} = b_{t} + G_{t} +\int \limits _{(0,t]\times \{\vert x\vert >1\}}x\,\widehat{N }(\mathrm{d}(s,x)) +\int \limits _{(0,t]\times \{0<\vert x\vert \leq 1\}}x\,\widehat{N }(\mathrm{d}(s,x)), }$$
where \(b_{t} = m_{t} +\int _{(0,t]\times \{\vert x\vert >1\}}x\,\nu (\mathrm{d}(s,x))\), the first integral belongs to \(L^{1}(\mathbb{P})\) and the second to \(L^{2}(\mathbb{P})\) (see Theorem 7 in the appendix).
Consider the system of generating triplets of X (with respect to the cutoff function χ(x) = x), \(\{(b_{t},\rho _{t},\nu _{t}),\ t \geq 0\}\). As we commented in Sect. 2.1 (see Sato [22, Theorem 9.8]):
1. \(b_{0} = 0\) and the function \(t\mapsto b_{t}\) is continuous.
2. \(\rho _{0} = 0\), \(\rho _{t} \geq 0\), and the function \(t\mapsto \rho _{t}\) is increasing and continuous.
3. For every t ≥ 0, \(\nu _{t}\) is a Lévy measure, and \(\lim _{s\rightarrow t}\nu _{s}(B) =\nu _{t}(B)\) for every \(B \in \mathcal{B}(\mathbb{R})\) such that \(B \subset \{ x:\ \vert x\vert >\varepsilon \}\) for some ɛ > 0.
4. For every t ≥ 0, \(\int _{\{\vert x\vert >1\}}\vert x\vert \,\mathrm{d}\nu _{t}(x) < \infty \).
Denote by \(\mathbb{S}\) the set of random variables of the form \(F = f\big(X_{t_{1}},\ldots,X_{t_{n}}\big)\) with \(f \in \mathcal{C}_{b}^{\infty }(\mathbb{R}^{n})\), and by \(\mathcal{E}\) the set of real step functions \(g =\sum _{ j=1}^{k}a_{j}\mathbb{1}_{(s_{j},s_{j+1}]}\), with \(0 \leq s_{1} <\cdots < s_{k+1}\). In the next theorem we add conditions regarding the regularity of the trajectories to agree with our definitions.
Theorem 2 (Murr)
Let X be an integrable process, càdlàg and continuous in probability, and let \(\{(b_{t},\rho _{t},\nu _{t}),\ t \geq 0\}\) be such that (1)–(4) above are satisfied. Then X is a process with independent increments with system of generating triplets \(\{(b_{t},\rho _{t},\nu _{t}),\ t \geq 0\}\) if and only if for every \(F \in \mathbb{S}\) and every step function \(g \in \mathcal{E},\)
where ν is defined in (1).
Proof
Assume that X is a process with independent increments. To prove (28), by linearity, it suffices to consider \(g =\mathbb{1}_{[0,u]}\). So we will check
Note that for a deterministic function \(h \in L^{2}(\mathbb{R} \times \mathbb{R}_{+},\mu )\) the duality formula (8) gives
Set
which belongs to \(L^{2}(\mu )\); then
In relation to the first integral on the right-hand side, note that \(x\mathbb{1}_{(0,u]\times \{1<\vert x\vert \leq n\}}\) belongs to \(L^{1}(\nu ) \cap L^{2}(\nu )\), and
in \(L^{1}(\mathbb{P})\), and hence \(\int _{\mathbb{R}_{+}\times \mathbb{R}}h_{n}\,\mathrm{d}M\) converges in \(L^{1}(\mathbb{P})\) to \(X_{u} - b_{u}\). Since F is bounded, (29) follows.
To prove the reciprocal implication, Murr [15] fixes \(g =\sum _{ j=1}^{k}a_{j}\mathbb{1}_{(s_{j},s_{j+1}]}\), with \(0 \leq s_{1} <\cdots < s_{k+1}\), and for \(u \in \mathbb{R}\) defines
Since
applying the duality formula (28) with \(F =\exp \big\{ iu\int _{\mathbb{R}_{+}}g\,\mathrm{d}X\big\}\), one deduces a differential equation which, for u = 1, determines the characteristic function of \(\big(X_{s_{1}},X_{s_{2}} - X_{s_{1}},\ldots,X_{s_{k+1}} - X_{s_{k}}\big)\), and hence the law of the process, and the theorem follows. □
Remark 3
Murr [15] defines \(\varPsi _{t,x}F\) as
whereas in our definition of Ψ given in (27) we divide by x. However, in the second term on the right-hand side of formula (28), Murr puts x rather than \(x^{2}\). Of course, both formulations are equivalent.
3 Part 2: Random Measures
The setting of this part is that of random measures that are a.s. locally finite on a locally compact second countable Hausdorff space; the main references here are Kallenberg [6] and Schneider and Weil [23]. In this part we use standard notation for random measures.
3.1 Random Measures
Let \(\mathbb{X}\) be a locally compact second countable Hausdorff space; it can be proved that this space is Polish (a complete separable metrizable space). Denote by \(\mathcal{X}\) its Borel σ-field. A measure χ on \((\mathbb{X},\mathcal{X})\) is locally finite if χ(K) < ∞ for every compact set K; note that such a measure is σ-finite.
Denote by M (or \(\mathbf{M}(\mathbb{X})\) if we want to stress the underlying space) the set of locally finite measures on \((\mathbb{X},\mathcal{X})\) and endow this space with the σ-field \(\mathcal{M}\) generated by the evaluation maps. We also denote by N the subset of locally finite measures taking values in \(\{0,1,\ldots \}\cup \{\infty \}\). This notation is consistent with the one adopted in the survey [14] in this volume.
Given a random measure ξ on \((\mathbb{X},\mathcal{X})\) with intensity λ, recall that \(s \in \mathbb{X}\) is called a fixed atom of ξ if \(\mathbb{P}\{\xi \{s\} > 0\} > 0\). Note that if ξ has no fixed atoms, then for every \(s \in \mathbb{X}\), \(\lambda \{s\} = \mathbb{E}\big[\xi \{s\}\big] = 0\), so the intensity measure is non-atomic.
3.2 Infinitely Divisible Random Measures and Random Measures with Independent Increments
The random measure ξ is said to have independent increments if for any family of pairwise disjoint sets \(A_{1},\ldots,A_{k} \in \mathcal{X}\), the random variables \(\xi (A_{1}),\ldots,\xi (A_{k})\) are independent. Matthes et al. [13, p. 16] call these random measures free from after-effects, and Kingman [8, 9] calls them completely random measures.
A random measure ξ is said to be infinitely divisible if for every n ≥ 1 there are independent random measures \(\xi _{1},\ldots,\xi _{n}\) such that ξ has the same law as ξ 1 + ⋯ + ξ n . Indeed, every random measure with independent increments and without fixed atoms is infinitely divisible (Kallenberg [6, Chap. 7]). The nice Lévy–Itô decomposition of processes with independent increments in terms of a Poisson random measure (Theorem 1) carries over to random measures with independent increments; general infinitely divisible random measures have a representation in law (Kallenberg [6, Theorem 8.1]).
Before stating the representation theorem, it is convenient to note that since the number of fixed atoms of a random measure is at most countable (Kallenberg [6, p. 56]), if ξ is a random measure with independent increments it can be written as
with N ≤ ∞, where \(\{s_{n},\ n \geq 1\}\) is the set of fixed atoms of ξ, and ξ′ is a random measure without fixed atoms with independent increments. So, as Kingman [9] graphically puts it, fixed atoms can be removed by simple surgery.
Theorem 3
Let ξ be a random measure with independent increments with intensity measure λ, without fixed atoms. Then it can be represented uniquely in the form
for \(A \in \mathcal{X}\), where \(\beta \in \mathbf{M}(\mathbb{X})\) is non-atomic, and η is a Poisson random measure on \(\mathbb{X} \times (0,\infty )\) whose intensity measure \(\nu \in \mathbf{M}(\mathbb{X} \times (0,\infty ))\) is non-atomic. Moreover, for \(A \in \mathcal{X}\), we have ξ(A) < ∞ a.s. if and only if β(A) < ∞ and
For a proof see Kallenberg [7, Corollary 12.11] in the context of Borel spaces, or Daley and Vere-Jones [2, Theorem 10.1.III] for Polish spaces.
Remark 4
We comment on some key points of the proof that we will need later:
1.
The measure ν on \(\mathbb{X} \times (0,\infty )\) comes from
$$\displaystyle{\nu (A \times B) =\nu _{A}(B),}$$where \(A \in \mathcal{X}\) with λ(A) < ∞, \(B \in \mathcal{B}((0,\infty ))\), and ν A is a Lévy measure on (0, ∞). This Lévy measure is the one associated with the positive infinitely divisible random variable ξ(A), which has finite expectation, and hence it integrates the function f(x) = x. So, for \(A \in \mathcal{X}\) with λ(A) < ∞, we have
$$\displaystyle{\int \limits _{A\times (0,\infty )}x\,\nu (\mathrm{d}(s,x)) < \infty,}$$and
$$\displaystyle{ \mathbb{E}[\xi (A)] =\lambda (A) =\beta (A) +\int \limits _{A\times (0,\infty )}x\,\nu (\mathrm{d}(s,x)). }$$(31)
2.
The Poisson random measure η is given by
$$\displaystyle{\eta =\sum _{s\in \mathbb{X}}\delta _{(s,\xi \{s\})}.}$$Since this map is measurable (see Kallenberg [7], proof of Corollary 12.11), it follows that the σ-fields generated by ξ and η coincide. We will assume that \(\mathcal{A}\) is that σ-field.
3.
The Laplace functional of ξ at \(h: \mathbb{X} \rightarrow \mathbb{R}_{+}\) is
$$\displaystyle{ \mathbb{E}\left [\exp \bigg\{-\int \limits _{\mathbb{X}}h\,\mathrm{d}\xi \bigg\}\right ] =\exp \bigg\{ -\int \limits _{\mathbb{X}}h\mathrm{d}\beta -\int \limits _{\mathbb{X}\times (0,\infty )}\big(1 - e^{-xh(s)}\big)\nu (\mathrm{d}(s,x))\bigg\}. }$$(32)
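As a quick numerical illustration of (32), consider the toy choice β = 0 and ν(d(s,x)) = c ds e^{−x} dx on [0,1] × (0,∞), with test function h(s) = s; the rate c = 2 and the Exp(1) mark law are arbitrary choices made only for this sketch, under which the exponent in (32) equals c(1 − log 2):

```python
import numpy as np

rng = np.random.default_rng(0)
n, c = 200_000, 2.0

# Simulate n copies of xi = sum_i x_i delta_{s_i}, where eta = {(s_i, x_i)} is a
# Poisson random measure on [0,1] x (0,inf) with intensity nu(d(s,x)) = c ds e^{-x} dx.
N = rng.poisson(c, n)                      # number of atoms per realization
idx = np.repeat(np.arange(n), N)
s = rng.random(int(N.sum()))               # positions s_i ~ U(0,1)
x = rng.exponential(1.0, int(N.sum()))     # weights x_i ~ Exp(1)

# Laplace functional at h(s) = s: int h dxi = sum_i x_i s_i per realization.
integral = np.bincount(idx, weights=x * s, minlength=n)
mc = np.exp(-integral).mean()

# Right-hand side of (32): the inner x-integral is 1 - 1/(1+s), so the exponent is c(1 - log 2).
exact = np.exp(-c * (1.0 - np.log(2.0)))
```

With these choices `mc` and `exact` agree to Monte Carlo accuracy.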
Example: Subordinators
A subordinator X = { X t , t ≥ 0} is a Lévy process whose trajectories are a.s. increasing. Then it defines a random measure on \(\mathbb{X} = \mathbb{R}_{+}\). Representation (30) corresponds to the Lévy–Itô decomposition of X (Theorem 1) which, with the notation of Part 1, reduces to (see Sato [22, Theorem 21.5])
where \(\gamma ^{\circ }\geq 0\) and N is a Poisson random measure on (0, ∞) × (0, ∞) with intensity \(\nu (\mathrm{d}(t,x)) =\mathrm{d}t\,\nu ^{\circ }(\mathrm{d}x)\), where \(\nu ^{\circ }\) is the Lévy measure of X (see Remark 1.3), and the Gaussian part is 0. For every t > 0, \(\int _{(0,t]\times (0,\infty )}(1 \wedge x)\,\nu (\mathrm{d}(s,x)) < \infty \), and the intensity measure of the random measure is given by
which, in general, can be infinite.
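For instance, for a compound Poisson subordinator with drift (the finite Lévy measure ν ∘(dx) = c e^{−x} dx and the values γ ∘ = 0.5, c = 2, t = 1 are illustrative choices of ours), the identity \(\lambda ((0,t]) = \mathbb{E}[X_{t}] = t\big(\gamma ^{\circ } +\int x\,\nu ^{\circ }(\mathrm{d}x)\big)\) can be checked by simulation:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
gamma0, c, t = 0.5, 2.0, 1.0      # drift, jump rate, time horizon (illustrative values)

# X_t = gamma0 * t + sum of the jumps in (0, t]; jump sizes are Exp(1),
# so int x nu°(dx) = c * E[Exp(1)] = c.
N = rng.poisson(c * t, n)                          # number of jumps in (0, t]
jumps = rng.exponential(1.0, int(N.sum()))
X_t = gamma0 * t + np.bincount(np.repeat(np.arange(n), N), weights=jumps, minlength=n)

mc_mean = X_t.mean()
exact = t * (gamma0 + c)          # lambda((0, t]) = t (gamma° + int x nu°(dx))
```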
3.3 Mecke Formula for Random Measures with Independent Increments
We prove a Mecke formula for random measures with independent increments, inspired by Murr [15]. We first recall the classical Mecke formula for Poisson processes (Last [11, formula (1.7)], Privault [21, formula (2.44)]); see Schneider and Weil [23, Theorem 3.2.5] for the following version of the formula, which we use later.
Theorem 4 (Mecke Formula for Poisson Random Measures)
Let γ be a point process with non-atomic intensity measure \(\lambda \in \mathbf{M}(\mathbb{X})\) . Then γ is a Poisson random measure if and only if for every measurable function \(h: \mathbf{N}(\mathbb{X}) \times \mathbb{X} \rightarrow \mathbb{R}_{+}\) we have
$$\displaystyle{ \mathbb{E}\left [\int \limits _{\mathbb{X}}h(\gamma,s)\,\gamma (\mathrm{d}s)\right ] =\int \limits _{\mathbb{X}}\mathbb{E}\big[h(\gamma +\delta _{s},s)\big]\,\lambda (\mathrm{d}s). }$$(33)
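Formula (33) is easy to test numerically. With the illustrative choices \(\mathbb{X} = [0,1]\), λ = c ⋅ Leb, and h(γ, s) = s ⋅ γ([0,1]) (all ours, chosen only for this sketch), a direct computation shows that both sides equal c(c + 1)∕2:

```python
import numpy as np

rng = np.random.default_rng(2)
n, c = 200_000, 3.0

# Left-hand side: E[ sum_{s in gamma} h(gamma, s) ] with h(gamma, s) = s * gamma([0,1]).
N = rng.poisson(c, n)                               # gamma([0,1]) ~ Poisson(c)
idx = np.repeat(np.arange(n), N)
s = rng.random(int(N.sum()))                        # points of gamma, uniform on [0,1]
sum_s = np.bincount(idx, weights=s, minlength=n)    # sum of the points of gamma
lhs = (N * sum_s).mean()

# Right-hand side: int_0^1 E[h(gamma + delta_s, s)] c ds, by Monte Carlo over s and gamma;
# adding delta_s increases gamma([0,1]) by one.
N2 = rng.poisson(c, n)
s2 = rng.random(n)
rhs = c * ((N2 + 1.0) * s2).mean()

exact = c * (c + 1.0) / 2.0        # both sides equal c(c+1)/2 for these choices
```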
Theorem 5 (Mecke Formula for Random Measures with Independent Increments)
Let ξ be a random measure without fixed atoms and let \(\beta \in \mathbf{M}(\mathbb{X})\) be non-atomic and \(\nu \in \mathbf{M}(\mathbb{X} \times (0,\infty ))\) be non-atomic. Then ξ is a random measure with independent increments with associated measures β and ν if and only if for every measurable function \(h: \mathbf{M}(\mathbb{X}) \times \mathbb{X} \rightarrow \mathbb{R}_{+}\) we have
$$\displaystyle{ \mathbb{E}\left [\int \limits _{\mathbb{X}}h(\xi,s)\,\xi (\mathrm{d}s)\right ] =\int \limits _{\mathbb{X}}\mathbb{E}\big[h(\xi,s)\big]\,\beta (\mathrm{d}s) +\int \limits _{\mathbb{X}\times (0,\infty )}\mathbb{E}\big[h(\xi +x\delta _{s},s)\big]\,x\,\nu (\mathrm{d}(s,x)). }$$(34)
Proof
1.
Let ξ be a random measure with independent increments with associated measures β and ν. First note that, since β is a deterministic measure, replacing ξ by ξ − β and modifying the function h accordingly, we can assume that β = 0. We will reduce the proof to a simple case, and then prove formula (34) in that case.
By standard arguments, it suffices to prove formula (34) for h(x, s) = f(x)g(s) where \(f: \mathbf{M}(\mathbb{X}) \rightarrow \mathbb{R}_{+}\) is bounded and \(g =\mathbb{1}_{C}\) for some \(C \in \mathcal{X}\) with λ(C) < ∞. Now, given that \(\mathcal{M}(\mathbb{X})\) is generated by the projections π A , for \(A \in \mathcal{X}\), there is a countable family \(\{A_{n},\,n \geq 1\} \subset \mathcal{X}\) and a measurable function \(F: \mathbb{R}^{\infty }\rightarrow \mathbb{R}_{+}\) such that
$$\displaystyle{f = F\big(\pi _{A_{1}},\pi _{A_{2}},\ldots \big).}$$(See Chow and Teicher [1, p. 17].) Hence,
$$\displaystyle{f(\xi ) = F\big(\xi (A_{1}),\xi (A_{2}),\ldots,\big).}$$Denote by \(\mathcal{A}_{n}\) the σ-field generated by \(\xi (A_{1}),\ldots,\xi (A_{n})\), and define
$$\displaystyle{F_{n} = \mathbb{E}\big[f(\xi )\,\vert \mathcal{A}_{n}\big].}$$By the martingale convergence theorem we have that
$$\displaystyle{\lim _{n}F_{n} = f,\ \text{a.s.}}$$and since f is bounded, the convergence also holds in \(L^{p}\), for all p ≥ 1. Hence, it is enough to consider the case
$$\displaystyle{f(\xi ) = f\big(\xi (A_{1}),\ldots,\xi (A_{n})\big).}$$With a monotone class argument, we can restrict to
$$\displaystyle{f(\xi ) = f_{1}\big(\xi (A_{1})\big)\cdots f_{n}\big(\xi (A_{n})\big),}$$with bounded \(f_{1},\ldots,f_{n} \geq 0\), and \(A_{1},\ldots,A_{n}\) pairwise disjoint. Using that ξ has independent increments, in formula (34) with such an f(ξ) and \(g =\mathbb{1}_{C}\), it is clear that we only need to consider two cases: when C is disjoint from all the \(A_{j}\), \(j = 1,\ldots,n\), or when C coincides with one of the \(A_{j}\). In the first case, equality (34) reduces to checking that if A ∩ C = ∅, then
$$\displaystyle{ \mathbb{E}\big[f\big(\xi (A)\big)\xi (C)\big] =\int \limits _{C\times (0,\infty )}\mathbb{E}\big[f(\xi (A) + x\delta _{s}(A))\big]\,x\,\nu (\mathrm{d}(s,x)), }$$which is evident since, thanks to (31) and the independence of ξ(A) and ξ(C), both sides are equal to \(\mathbb{E}\big[f\big(\xi (A)\big)\big]\,\lambda (C).\)
In the second case (remember that here β = 0), equality (34) simplifies to
$$\displaystyle{ \mathbb{E}\big[f\big(\xi (A)\big)\xi (A)\big] =\int \limits _{A\times (0,\infty )}\mathbb{E}\big[f(\xi (A) + x\delta _{s}(A))\big]\,x\,\nu (\mathrm{d}(s,x)). }$$(35) Replacing ξ(A) by its expression from the representation in Theorem 3, on the left-hand side of (35) we have
$$\displaystyle{ \mathbb{E}\left [f\bigg(\int \limits _{A\times (0,\infty )}x\,\eta (\mathrm{d}(s,x))\bigg)\int \limits _{A\times (0,\infty )}x\,\eta (\mathrm{d}(s,x))\right ], }$$(36)where η is a Poisson random measure on \(\mathbb{X} \times (0,\infty )\) with intensity measure ν. By Mecke formula for Poisson random measures (33),
$$\displaystyle{\mbox{ (36)} =\int \limits _{A\times (0,\infty )}\mathbb{E}\left [f\left (\int \limits _{A\times (0,\infty )}x\,\eta (\mathrm{d}(s,x)) + x\delta _{(s,x)}\big(A \times (0,\infty )\big)\right )\right ]x\,\nu (\mathrm{d}(s,x)),}$$which is exactly the right-hand side of (35).
2.
We now prove the reciprocal implication. This proof is also inspired by Murr [15]. Note that applying formula (34) to the function h(μ, s) = f(s) we have
$$\displaystyle{\int \limits _{\mathbb{X}}f\,\mathrm{d}\lambda = \mathbb{E}\left [\int \limits _{\mathbb{X}}f\,\mathrm{d}\xi \right ] =\int \limits _{\mathbb{X}}f\,\mathrm{d}\beta +\int \limits _{\mathbb{X}\times (0,\infty )}xf(s)\,\nu (\mathrm{d}(s,x)).}$$Fix \(g:\, \mathbb{X} \rightarrow \mathbb{R}_{+}\) measurable with \(\int _{\mathbb{X}}g\,\mathrm{d}\lambda < \infty \), and define, for u > 0,
$$\displaystyle{G(u) = \mathbb{E}\left [\exp \{-u\int \limits _{\mathbb{X}}g\,\mathrm{d}\xi \}\right ].}$$Since \(\mathbb{E}\big[\int \limits _{\mathbb{X}}g\,\mathrm{d}\xi \big] < \infty \), by differentiation we get
$$\displaystyle{G'(u) = -\mathbb{E}\left [\exp \{-u\int \limits _{\mathbb{X}}g\,\mathrm{d}\xi \}\int \limits _{\mathbb{X}}g\,\mathrm{d}\xi \right ].}$$Now in formula (34) take
$$\displaystyle{h(\mu,s) =\exp \{ -u\int \limits _{\mathbb{X}}g\,\mathrm{d}\mu \}\,g(s),}$$and then,
$$\displaystyle{G'(u) = -\int \limits _{\mathbb{X}}G(u)g(s)\,\beta (\mathrm{d}s) -\int \limits _{\mathbb{X}\times (0,\infty )}G(u)\exp \{ - uxg(s)\}g(s)x\,\nu (\mathrm{d}(s,x)),}$$or
$$\displaystyle{\frac{G'(u)} {G(u)} = -\int \limits _{\mathbb{X}}g(s)\,\beta (\mathrm{d}s) -\int \limits _{\mathbb{X}\times (0,\infty )}\exp \{ - uxg(s)\}g(s)x\,\nu (\mathrm{d}(s,x)).}$$The function on the right-hand side is continuous in u, and given that G(0) = 1 we obtain
$$\displaystyle\begin{array}{rcl} G(u)& =& \exp \left \{-\int \limits _{0}^{u}\bigg(\int \limits _{ \mathbb{X}}g(s)\,\beta (\mathrm{d}s) +\int \limits _{\mathbb{X}\times (0,\infty )}\exp \{ - zxg(s)\}g(s)x\,\nu (\mathrm{d}(s,x))\bigg)\mathrm{d}z\right \} {}\\ & =& \exp \left \{-u\int \limits _{\mathbb{X}}g(s)\,\beta (\mathrm{d}s) -\int \limits _{\mathbb{X}\times (0,\infty )}\bigg(\int \limits _{0}^{u}\exp \{ - zxg(s)\}\,\mathrm{d}z\bigg)g(s)x\,\nu (\mathrm{d}(s,x))\right \}. {}\\ \end{array}$$In particular, for u = 1 we get
$$\displaystyle{\int \limits _{0}^{1}\exp \Big\{ - zxg(s)\Big\}\,\mathrm{d}z =\mathbb{1}_{\{ s:\,g(s)>0\}}(s) \frac{1} {xg(s)}\Big(1 - e^{-xg(s)}\Big),}$$and then the Laplace functional of ξ is
$$\displaystyle{G(1) =\exp \left \{-\int \limits _{\mathbb{X}}g(s)\,\mathrm{d}\beta (s) -\int \limits _{\mathbb{X}\times (0,\infty )}\big(1 - e^{-xg(s)}\big)\,\nu (\mathrm{d}(s,x))\right \},}$$which corresponds to the claimed random measure (see (32)). □
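The key step (35) can also be illustrated numerically. With the toy data β = 0, A = [0,1], ν(d(s,x)) = c ds e^{−x} dx and f(y) = e^{−y} (illustrative choices of ours), ξ(A) is a compound Poisson sum and both sides of (35) equal (c∕4) e^{−c∕2}:

```python
import numpy as np

rng = np.random.default_rng(3)
n, c = 400_000, 2.0

def xi_A(size):
    """Simulate xi(A) for A = [0,1]: a compound Poisson sum, rate c, Exp(1) jumps."""
    N = rng.poisson(c, size)
    jumps = rng.exponential(1.0, int(N.sum()))
    return np.bincount(np.repeat(np.arange(size), N), weights=jumps, minlength=size)

# Left-hand side of (35) with f(y) = exp(-y).
S = xi_A(n)
lhs = (S * np.exp(-S)).mean()

# Right-hand side: int_{A x (0,inf)} E[f(xi(A) + x)] x nu(d(s,x)),
# estimated by Monte Carlo with x ~ Exp(1) (density e^{-x}) and an independent copy of xi(A).
S2 = xi_A(n)
x = rng.exponential(1.0, n)
rhs = c * (x * np.exp(-(S2 + x))).mean()

exact = (c / 4.0) * np.exp(-c / 2.0)   # closed form for these particular choices
```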
3.4 Malliavin Calculus
From now on, we consider the random measure with independent increments given by
where η is a Poisson random measure with intensity ν.
As in Part 1, we construct a completely random measure on \(\mathbb{X} \times (0,\infty )\). With that purpose, define a new measure μ on \(\mathbb{X} \times (0,\infty )\) by
For \(C \in \mathcal{X}\times \mathcal{B}((0,\infty ))\) such that μ(C) < ∞, the function \(\mathbb{1}_{C}(s,x)\,x\) is in \(L^{2}(\nu )\); hence the following random variable is well defined (as a limit in \(L^{2}(\mathbb{P})\)):
where \(\hat{\eta }=\eta -\nu\). It is a completely random measure. As before, consider the set of symmetric functions
The multiple Itô integral of order n with respect to M of a function \(f \in L_{s}^{2}(\mu ^{n})\) is denoted by \(I_{n}(f)\). The Itô chaotic representation property also holds in this context, and we have that \(F \in L^{2}(\mathbb{P})\) admits a representation of the form
So we can define as in Part 1 a Malliavin derivative D with domain dom D and its dual, the Skorohod integral δ in dom δ.
3.4.1 Malliavin Derivatives with Respect to the Underlying Poisson Random Measure
In the present context of random measures, the absence of the Gaussian part and the fact that the integral in the representation (37) is pathwise make things easier, and we do not need to introduce a canonical space. As we commented in the Introduction, we rely on the very general construction of Last and Penrose [12] and Last [11] (see also Privault [21] for multiple Poisson integrals). Denote by \(I_{n}^{\hat{\eta }}(f)\) the multiple integral of order n with respect to \(\hat{\eta }\) of a function \(f \in L_{s}^{2}(\nu ^{n})\). For \(f:\big (\mathbb{X} \times (0,\infty )\big)^{n} \rightarrow \mathbb{R}\) write
Obviously, \(f \in L_{s}^{2}(\mu ^{n})\) if and only if \(f^{\ast } \in L_{s}^{2}(\nu ^{n})\). In this case,
This is proved by standard techniques, considering first the case of elementary functions and then using a density argument.
Hence, for \(F \in L^{2}(\mathbb{P})\) with an expansion (38) (remember that the σ-fields generated by ξ and η coincide) we also have the expansion
Last and Penrose [12] (see Last [11, Theorem 3]) introduce two derivative operators: the first is an add-one-cost operator, which we discuss in the next subsection, and the second is a Malliavin derivative \(D^{\eta }\) (Last denotes it by D′) defined as an annihilation operator on the chaos expansion. The relation between our derivative D and \(D^{\eta }\) is the following:
Proposition 7
We have dom D = dom D η , and for F ∈dom D,
Proof
The proof follows directly from the chaos expansion of F. □
3.4.2 Derivation of Smooth Functionals
We first prove a property in the Poisson process case: following Last and Penrose [12] and Last [11], consider a square integrable random variable \(F \in L^{2}(\mathbb{P})\); since it is measurable with respect to the σ-field generated by η, there is a measurable function \(f: \mathbf{N}(\mathbb{X} \times (0,\infty )) \rightarrow \mathbb{R}\) such that F = f(η) and \(\mathbb{E}[f^{2}(\eta )] < \infty \). Define
By iteration, let
Now define \(T_{0}f = \mathbb{E}\big[f(\eta )\big]\), and for n ≥ 1,
This operator satisfies \(T_{n}f \in L_{s}^{2}(\nu ^{n})\), and in the (Poisson) chaotic decomposition of F = f(η)
the kernels are
See Last [11, Theorem 2].
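As a toy illustration of the add-one-cost operator and of \(T_{1}\) (the Poisson random measure on [0,1] with intensity c ⋅ Leb, the set A = [0, 1∕2] and the functional f(η) = e^{−η(A)} are our own illustrative choices): adding one point z ∈ A increases η(A) by one, so \(T_{1}f(z) = \mathbb{E}[f(\eta +\delta _{z})] - \mathbb{E}[f(\eta )] = (e^{-1} - 1)\,\mathbb{E}[f(\eta )]\), while \(T_{0}f = \mathbb{E}[f(\eta )] =\exp \{-\nu (A)(1 - e^{-1})\}\):

```python
import numpy as np

rng = np.random.default_rng(4)
n, c, a = 200_000, 2.0, 0.5        # intensity rate and A = [0, a], so nu(A) = c * a = 1

etaA = rng.poisson(c * a, n)       # eta(A) ~ Poisson(nu(A))
f = np.exp(-etaA.astype(float))    # f(eta) = exp(-eta(A))

T0 = f.mean()                                  # T_0 f = E[f(eta)]
exact_T0 = np.exp(-c * a * (1.0 - np.exp(-1.0)))

# Add-one-cost at z in A: f(eta + delta_z) = exp(-(eta(A) + 1)).
T1 = np.exp(-(etaA + 1.0)).mean() - T0         # T_1 f(z) for z in A
exact_T1 = (np.exp(-1.0) - 1.0) * exact_T0
```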
Proposition 8
Let \(F \in L^{2}(\mathbb{P}).\) Then F ∈dom D η if and only if \(\ \mathbb{D}^{\eta }F \in L^{2}(\varOmega \times \mathbb{X} \times (0,\infty ), \mathbb{P}\otimes \nu )\) .
Proof
If \(F \in \text{dom}\,D^{\eta }\), then the property follows from the coincidence of \(D^{\eta }\) and \(\mathbb{D}^{\eta }\) (Last [11, equality (1.48)]). The proof of the reciprocal implication is analogous to the proof of the second part of Proposition 4. □
Now we return to Malliavin derivatives with respect to the random measure ξ.
Proposition 9
Let \(A_{1},\ldots,A_{n} \in \mathcal{X}\) with finite λ-measure. Let \(F = f\big(\xi (A_{1}),\ldots,\xi (A_{n})\big)\), with \(f \in \mathcal{C}_{b}^{\infty }(\mathbb{R}^{n})\) . Then F ∈dom D and
The idea of the proof is the same as that of Proposition 5.
3.4.3 Characterization of Random Measures by Duality Formulas
Following Murr [15] we present another version of the Mecke formula in order to characterize random measures with independent increments by duality formulas. Indeed, Murr [15] gives a characterization of infinitely divisible random measures, so his result is more general than ours. The interest of our characterization is that the proof is based on the Malliavin calculus for random measures with independent increments, specifically on the duality coupling between D and δ: for F ∈ dom D and g ∈ dom δ,
Denote by \(\mathcal{U}\) the ring of relatively compact sets of \(\mathbb{X}\). Every locally finite measure is finite on the sets of \(\mathcal{U}\). Let \(\mathbb{S}\) be the set of functions \(f: \mathbf{M}(\mathbb{X}) \rightarrow \mathbb{R}\) of the form \(f(\mu ) = h\big(\mu (A_{1}),\ldots,\mu (A_{n})\big)\) with \(h \in \mathcal{C}_{b}^{\infty }(\mathbb{R}^{n})\) and \(A_{1},\ldots,A_{n} \in \mathcal{U}\); also let \(\mathcal{E}\) be the set of simple functions \(g =\sum _{ j=1}^{k}a_{j}\mathbb{1}_{A_{j}}\), with \(a_{1},\ldots,a_{k} > 0\) and \(A_{1},\ldots,A_{k} \in \mathcal{U}\).
Theorem 6 (Murr)
Let \(\beta \in \mathbf{M}(\mathbb{X})\) be non-atomic and let \(\nu \in \mathbf{M}\big(\mathbb{X} \times (0,\infty )\big)\) be non-atomic and such that for \(A \in \mathcal{U}\), \(\int _{A\times (0,\infty )}x\,\nu (\mathrm{d}(s,x)) < \infty \). A random measure ξ has independent increments with characteristics β and ν if and only if for all \(f \in \mathbb{S}\) and \(g \in \mathcal{E}\),
Proof
Assume that ξ is a random measure with independent increments. Formula (40) is the particular case of formula (34) for h(μ, s) = f(μ)g(s). However, as we commented, we will see that (40) is also a consequence of the duality coupling (39).
To prove (40), by linearity, it suffices to consider the case \(g =\mathbb{1}_{A}\) for \(A \in \mathcal{U}\). By construction (see Remark 4), \(\int _{A\times (0,\infty )}x\,\nu (\mathrm{d}(s,x)) < \infty \). Assume first that also
Then \(x\mathbb{1}_{A\times (0,\infty )} \in L^{1}\big(\nu \big) \cap L^{2}\big(\nu \big)\), and by the representation (30),
Further, the right-hand side of the duality formula (39) for F = f(ξ) and \(g =\mathbb{1}_{A}\) is
and formula (40) follows. When \(\int _{A\times (0,\infty )}x^{2}\,\nu (\mathrm{d}(s,x)) = \infty \), the result is obtained by approximating \(\int _{A\times (0,\infty )}x\,\eta (\mathrm{d}(s,x))\) by \(\int _{A\times \{0<x<n\}}x\,\eta (\mathrm{d}(s,x))\) as in the proof of Theorem 2.
The reciprocal implication is also proved as in Theorem 2. □
Remark 5
For an infinitely divisible random measure Murr [15] writes formula (40) as
where \(\mathbf{M}_{0}(\mathbb{X}) = \mathbf{M}(\mathbb{X})\setminus \{0\}\) (here 0 is the zero measure), and Γ is a σ-finite measure on \(\mathbf{M}_{0}(\mathbb{X})\). Kallenberg [6, Lemma 7.3] proves that ξ has independent increments if and only if Γ is concentrated on the set of degenerate measures in \(\mathbf{M}_{0}(\mathbb{X})\), which are the measures of the form \(\chi = x\,\delta _{s}\), for some x > 0 and \(s \in \mathbb{X}\). In this case, consider the (measurable) mapping
and then ν is the image measure of Γ under this mapping. Thus, by the image measure theorem,
so formulas (41) and (40) are the same in the case of a random measure with independent increments.
References
Chow, Y.S., Teicher H.: Probability Theory. Independence, Interchangeability, Martingales. Springer, New York (1978)
Daley, D.J., Vere-Jones, D.: An Introduction to the Theory of Point Processes. Volume II: General Theory and Structure, 2nd edn. Springer, New York (2008)
Di Nunno, G., Øksendal, B., Proske, F.: Malliavin Calculus for Lévy Processes with Applications to Finance. Springer, Berlin (2009)
Geiss, C., Laukkarinen, E.: Denseness of certain smooth Lévy functionals in \(\mathbb{D}_{1,2}\). Probab. Math. Stat. 31, 1–15 (2011)
Itô, K.: Spectral type of the shift transformation of differential processes with stationary increments. Trans. Am. Math. Soc. 81, 253–263 (1956)
Kallenberg, O.: Random Measures, 3rd edn. Akademie-Verlag, Berlin (1983)
Kallenberg, O.: Foundations of Modern Probability, 2nd edn. Springer, New York (2002)
Kingman, J.F.C.: Completely random measures. Pac. J. Math. 21, 59–78 (1967)
Kingman, J.F.C.: Poisson Processes. Oxford Science Publications/Clarendon Press, Oxford (1993)
Kyprianou, A.E.: Introductory Lectures on Fluctuations of Lévy Processes with Applications. Springer, Berlin (2006)
Last, G.: Stochastic analysis for Poisson processes. In: Peccati, G., Reitzner, M. (eds.) Stochastic Analysis for Poisson Point Processes: Malliavin Calculus, Wiener-Ito Chaos Expansions and Stochastic Geometry. Bocconi & Springer Series, vol. 7, pp. 1–36. Springer, Cham (2016)
Last, G., Penrose, M.D.: Poisson process Fock space representation, chaos expansion and covariance inequalities. Probab. Theory Relat. Fields 150, 663–690 (2011)
Matthes, K., Kerstan, J., Mecke, J.: Infinitely Divisible Point Processes. Wiley, Chichester (1978)
Molchanov, I., Zuyev, S.: Variational analysis of Poisson processes. In: Peccati, G., Reitzner, M. (eds.) Stochastic Analysis for Poisson Point Processes: Malliavin Calculus, Wiener-Ito Chaos Expansions and Stochastic Geometry. Bocconi & Springer Series, vol. 7, pp. 81–101. Springer, Cham (2016)
Murr, R.: Characterization of infinite divisibility by duality formulas. Application to Lévy processes and random measures. Stoch. Process. Appl. 123, 1729–1749 (2013)
Neveu, J.: Processus Ponctuels. In: École d’Été de Probabilités de Saint-Flour, VI-1976. Lecture Notes in Mathematics, vol. 598, pp. 247–447. Springer, Berlin (1977)
Nualart, D.: The Malliavin Calculus and Related Topics, 2nd edn. Springer, Berlin (2006)
Nualart, D., Vives, J.: Anticipative calculus for the Poisson process based on the Fock space. In: Séminaire de Probabilités XXIV. Lecture Notes in Mathematics, vol. 1426, pp. 154–165. Springer, Berlin (1990)
Nualart, D., Vives, J.: A duality formula on the Poisson space and some applications. In: Proceedings of the Ascona Conference on Stochastic Analysis. Progress in Probability, vol. 36, pp. 205–213. Birkhäuser, Basel (1995)
Peccati, G., Taqqu M.S.: Wiener Chaos: Moments, Cumulants and Diagrams. Bocconi & Springer Series, vol. 1, Springer, Milan (2011)
Privault, N.: Combinatorics of Poisson stochastic integrals with random integrands. In: Peccati, G., Reitzner, M. (eds.) Stochastic Analysis for Poisson Point Processes: Malliavin Calculus, Wiener-Ito Chaos Expansions and Stochastic Geometry. Bocconi & Springer Series, vol. 7, pp. 37–80. Springer, Cham (2016)
Sato, K.: Lévy Processes and Infinitely Divisible Distributions. Cambridge University Press, Cambridge (1999)
Schneider, R., Weil, W.: Stochastic and Integral Geometry. Springer, Berlin (2008)
Solé, J.L., Utzet, F., Vives, J.: Canonical Lévy process and Malliavin calculus. Stoch. Process. Appl. 117, 165–187 (2007)
Solé, J.L., Utzet, F., Vives, J.: Chaos expansions and Malliavin calculus for Lévy processes. In: Abel Symposium 2005, Stochastic Analysis and Applications, pp. 595–612. Springer, Berlin (2007)
Acknowledgements
The authors were partially supported by grants MINECO reference MTM2012-33937 and UNAB10-4E-378 co-funded by FEDER “A way to build Europe,” and Generalitat de Catalunya reference 2014-SGR422.
Appendices
Appendix 1: Pathwise and \(\boldsymbol{L^{2}(\mathbb{P})}\) Integrals with Respect to Poisson Random Measures
For the reader's convenience we review the definition and properties of integrals with respect to Poisson random measures. For these properties no topological conditions on the state space are needed.
1.1 Pathwise Integrals with Respect to a Poisson Random Measure
Let \((\mathbb{X},\mathcal{X},\nu )\) be a σ-finite measure space and η a Poisson random measure with intensity ν. For a measurable mapping \(f: \mathbb{X} \rightarrow \mathbb{R}\) we can consider the integral \(\int _{\mathbb{X}}f(x)\,\eta (\omega,\mathrm{d}x)\) provided that f is positive or \(\int _{\mathbb{X}}\vert f(x)\vert \,\eta (\omega,\mathrm{d}x) < \infty \); if this holds for almost every ω ∈ Ω, the mapping \(\omega \mapsto \int _{\mathbb{X}}f(x)\,\eta (\omega,\mathrm{d}x)\) defines a random variable. The following theorem summarizes the main properties of this integral. See also Privault [21, Sect. 2.4.1] for additional properties.
Theorem 7 (Kyprianou [10, Theorem 2.7])
Let \(f: \mathbb{X} \rightarrow \mathbb{R}\) be a measurable mapping. Then
1.
The integral \(\int _{\mathbb{X}}f(x)\,\eta (\omega,\mathrm{d}x)\) is absolutely convergent for almost every ω ∈ Ω if and only if
$$\displaystyle{ \int \limits _{\mathbb{X}}\big(1 \wedge \vert f(x)\vert \big)\,\nu (\mathrm{d}x) < \infty. }$$(42)In this case, the characteristic function of \(\int _{\mathbb{X}}f\,\mathrm{d}\eta\) is
$$\displaystyle{\mathbb{E}\left [\exp \left \{iu\int \limits _{\mathbb{X}}f\,\mathrm{d}\eta \right \}\right ] =\exp \left \{\int \limits _{\mathbb{X}}\Big(e^{iu\,f(x)} - 1\Big)\,\nu (\mathrm{d}x)\right \}.}$$
2.
If \(f \in L^{1}(\nu )\), then \(\int _{\mathbb{X}}f\,\mathrm{d}\eta \in L^{1}(\mathbb{P})\) and
$$\displaystyle{\mathbb{E}\left [\int \limits _{\mathbb{X}}f\,\mathrm{d}\eta \right ] =\int \limits _{\mathbb{X}}f\,\mathrm{d}\nu.}$$
3.
If \(f \in L^{1}(\nu ) \cap L^{2}(\nu )\), then \(\int _{\mathbb{X}}f\,\mathrm{d}\eta \in L^{2}(\mathbb{P})\) and
$$\displaystyle{ \mathbb{E}\left [\left (\int \limits _{\mathbb{X}}f\,\mathrm{d}\eta \right )^{2}\right ] =\int \limits _{ \mathbb{X}}f^{2}\,\mathrm{d}\nu + \left (\int \limits _{ \mathbb{X}}f\,\mathrm{d}\nu \right )^{2}. }$$(43)
Note that \(f \in L^{1}(\nu )\) implies (42) because \(1 \wedge \vert f\vert \leq \vert f\vert\).
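Parts 2 and 3 of Theorem 7 are easy to check by simulation. With the illustrative choices \(\mathbb{X} = (0,\infty )\), ν(dx) = c e^{−x} dx and f(x) = x (so that ∫ f dν = c and ∫ f² dν = 2c, choices made only for this sketch), \(\int _{\mathbb{X}}f\,\mathrm{d}\eta\) is a compound Poisson sum:

```python
import numpy as np

rng = np.random.default_rng(5)
n, c = 400_000, 3.0

# int f d(eta) with f(x) = x: the sum of the atoms of eta; N ~ Poisson(c), atoms ~ Exp(1).
N = rng.poisson(c, n)
x = rng.exponential(1.0, int(N.sum()))
S = np.bincount(np.repeat(np.arange(n), N), weights=x, minlength=n)

m1, m2 = S.mean(), (S ** 2).mean()
exact_m1 = c                    # E[int f d(eta)] = int f d(nu) = c
exact_m2 = 2.0 * c + c ** 2     # (43): int f^2 d(nu) + (int f d(nu))^2
```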
We need the following property.
Proposition 10
Assume that \(\nu (\mathbb{X}) < \infty \) . Let \(\{f_{n},\ n \geq 1\}\) and f be measurable functions on \(\mathbb{X}\) such that \(\lim _{n}f_{n} = f\). Then \(\lim _{n}\int _{\mathbb{X}}f_{n}\,\mathrm{d}\eta =\int _{\mathbb{X}}f\,\mathrm{d}\eta\) in probability.
Proof
Observe that since \(\nu (\mathbb{X}) < \infty \) all the integrals are well defined. Set \(g_{n} =\vert f_{n} - f\vert\). The characteristic function of \(\int _{\mathbb{X}}g_{n}\,\mathrm{d}\eta\) is
which converges to 1 by dominated convergence. Hence \(\int _{\mathbb{X}}g_{n}\,\mathrm{d}\eta\) converges to 0 in law, and thus in probability. □
1.2 \(\boldsymbol{L^{2}(\mathbb{P})}\)-Integral with Respect to the Compensated Poisson Random Measure
Again with the preceding notations, consider the ring \(\mathcal{X}_{0} =\{ C \in \mathcal{X}:\nu (C) < \infty \}\). The compensated Poisson measure is defined on \(\mathcal{X}_{0}\) by \(\hat{\eta }(C) =\eta (C) -\nu (C),\ C \in \mathcal{X}_{0}\). Recall that the simple functions of the form
are dense in \(L^{p}(\nu )\) (p ≥ 1). Denote by \(\mathcal{D}\) the set of such functions. For \(f \in \mathcal{D}\) define
It is clear that \(\int _{\mathbb{X}}f\,\mathrm{d}\hat{\eta } \in L^{2}(\mathbb{P})\) is centered, and for \(f,g \in \mathcal{D}\),
Now, for a general \(f \in L^{2}(\nu )\) the definition of \(\int _{\mathbb{X}}f\,\mathrm{d}\hat{\eta }\) follows by the standard procedure, and equality (44) holds for \(f,g \in L^{2}(\nu )\). The characteristic function of \(\int _{\mathbb{X}}f\,\mathrm{d}\hat{\eta }\) is
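The isometry (44) admits a quick Monte Carlo check with the same kind of toy intensity: on \(\mathbb{X} = (0,\infty )\) with ν(dx) = c e^{−x} dx, take f(x) = x and g(x) = e^{−x} (illustrative choices of ours), so that ∫ f dν = c, ∫ g dν = c∕2 and ∫ fg dν = c∕4:

```python
import numpy as np

rng = np.random.default_rng(6)
n, c = 400_000, 3.0

N = rng.poisson(c, n)
x = rng.exponential(1.0, int(N.sum()))
idx = np.repeat(np.arange(n), N)

# Compensated integrals: int f d(eta-hat) = int f d(eta) - int f d(nu), likewise for g.
F = np.bincount(idx, weights=x, minlength=n) - c                    # f(x) = x
G = np.bincount(idx, weights=np.exp(-x), minlength=n) - c / 2.0     # g(x) = exp(-x)

cov_mc = (F * G).mean()
exact = c / 4.0                 # int f g d(nu) = c * int x e^{-2x} dx = c/4
```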
1.3 Relation Between Pathwise and \(\boldsymbol{L^{2}(\mathbb{P})}\) Integrals, and Definition of the \(\boldsymbol{L^{1}(\mathbb{P})}\) Integral
If \(f \in L^{1}(\nu ) \cap L^{2}(\nu )\), both integrals of f with respect to η and \(\hat{\eta }\) are defined and we have
This is proved in a standard way.
Even if we only have \(f \in L^{1}(\nu )\), both integrals on the right-hand side above are well defined, and then, with an abuse of language, we also write \(\int _{\mathbb{X}}f\,\mathrm{d}\hat{\eta }\) to denote that difference of integrals. As a consequence of Theorem 7, this integral belongs to \(L^{1}(\mathbb{P})\).
Appendix 2: Completely Random Measures
We recall the notion of completely random measures (in the sense of vector measures) and multiple integrals, following Peccati and Taqqu [20]; for the properties presented here no topological conditions on the phase space are needed. We restrict ourselves to \(L^{2}(\mathbb{P})\)-valued completely random measures.
Let \((\mathbb{X},\mathcal{X},\lambda )\) be a measure space where λ is σ-finite and non-atomic. As before, set \(\mathcal{X}_{0} =\{ C \in \mathcal{X}:\,\lambda (C) < \infty \}\). A centered completely random measure in \(L^{2}(\mathbb{P})\), for short a completely random measure, with control measure λ is a mapping \(\varphi: \mathcal{X}_{0}\times \varOmega \rightarrow \mathbb{R}\) such that
1.
For fixed \(C \in \mathcal{X}_{0}\), \(\varphi (\cdot,C):\varOmega \rightarrow \mathbb{R}\) is a centered square integrable random variable. We denote this random variable by φ(C).
2.
If \(C_{1},\ldots,C_{n} \in \mathcal{X}_{0}\) are disjoint, the random variables \(\varphi (C_{1}),\ldots,\varphi (C_{n})\) are independent.
3.
For every \(C_{1},C_{2} \in \mathcal{X}_{0}\),
$$\displaystyle{\mathbb{E}[\varphi (C_{1})\varphi (C_{2})] =\lambda (C_{1} \cap C_{2}).}$$
As pointed out by Peccati and Taqqu [20, p. 52], φ is additive and σ-additive on \(\mathcal{X}_{0}\) in the sense of vector measures on \(L^{2}(\mathbb{P})\); that is, for every finite sequence of disjoint sets \(C_{1},\ldots,C_{n} \in \mathcal{X}_{0}\),
$$\displaystyle{\varphi \Big(\bigcup _{j=1}^{n}C_{ j}\Big) =\sum _{ j=1}^{n}\varphi (C_{ j}),}$$
and the same is true for an infinite sequence of pairwise disjoint sets \(\{C_{n},\ n \geq 1\} \subset \mathcal{X}_{0}\) such that \(\bigcup _{n=1}^{\infty }C_{n} \in \mathcal{X}_{0}\). However, we stress that in general, for fixed ω ∈ Ω, φ(ω, ⋅ ) is not σ-additive; that is, in general a completely random measure is not a random measure in the sense used in Part 2 of this paper.
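A prototypical example of a completely random measure with control measure λ = Lebesgue on \([0,1]\) is Gaussian white noise. The following sketch (a grid discretization of ours, not taken from the text) approximates white noise and Monte Carlo checks property 3, \(\mathbb{E}[\varphi (C_{1})\varphi (C_{2})] =\lambda (C_{1} \cap C_{2})\).

```python
import random

def white_noise_on_grid(n_cells, rng):
    # Discretized Gaussian white noise on [0,1]: one independent
    # N(0, 1/n_cells) value per grid cell, so disjoint unions of cells
    # get independent values (property 2).
    return [rng.gauss(0.0, (1.0 / n_cells) ** 0.5) for _ in range(n_cells)]

def phi(noise, a, b):
    # φ((a, b]) for an interval aligned with the grid: sum the cells inside.
    n = len(noise)
    return sum(noise[round(a * n):round(b * n)])

rng = random.Random(7)
n_cells, n_rep = 100, 20000
# Monte Carlo estimate of E[φ((0, .6]) φ((.4, 1])]; by property 3 the
# exact value is Leb((0, .6] ∩ (.4, 1]) = 0.2.
acc = 0.0
for _ in range(n_rep):
    noise = white_noise_on_grid(n_cells, rng)
    acc += phi(noise, 0.0, 0.6) * phi(noise, 0.4, 1.0)
cov = acc / n_rep   # ≈ 0.2
```

Note that the sample paths of white noise are not measures, which illustrates the remark above: a completely random measure need not be σ-additive pathwise.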
2.1 Multiple Integrals with Respect to a Completely Random Measure
Itô's construction of multiple integrals [5] can be extended to the case where the integrator is a general completely random measure; see Peccati and Taqqu [20, Chap. 5], and note the comment on page 83 when φ is an \(L^{2}(\mathbb{P})\) completely random measure.
The multiple stochastic integral of order n with respect to φ, I n ( f), is defined through the same steps as in the Wiener case: for
$$\displaystyle{f = \mathbf{1}_{C_{1}\times \cdots \times C_{n}},}$$where \(C_{1},\ldots,C_{n} \in \mathcal{X}_{0}\) are pairwise disjoint, set
$$\displaystyle{I_{n}(\,f) =\varphi (C_{1})\cdots \varphi (C_{n}).}$$Then I n is extended to \(L^{2}\big(\lambda ^{\otimes n}\big)\) by linearity and continuity. This integral has the usual properties:
1.
\(I_{n}(\,f) = I_{n}(\tilde{f}),\) where \(\tilde{f}\) is the symmetrization of f:
$$\displaystyle{\tilde{f}(s_{1},\ldots,s_{n}) = \frac{1} {n!}\sum _{\pi \in \mathfrak{G}_{n}}f(s_{\pi (1)},\ldots,s_{\pi (n)}),}$$where \(\mathfrak{G}_{n}\) is the set of permutations of \(\{1,2,\ldots,n\}.\)
2.
Linearity: I n (af + bg) = aI n ( f) + bI n (g).
3.
\(\mathbb{E}[I_{n}(\,f)I_{m}(g)] =\delta _{n,m}n!\int _{\mathbb{X}^{n}}\tilde{f}\,\tilde{g}\,\mathrm{d}\lambda ^{\otimes n},\) where δ n, m = 1 if n = m, and 0 otherwise.
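The symmetrization \(\tilde{f}\) in property 1 can be computed directly from its definition; a small Python helper (our illustration, names are not from the text):

```python
from itertools import permutations

def symmetrize(f, n):
    # Return f̃(s_1, …, s_n) = (1/n!) Σ_{π ∈ G_n} f(s_{π(1)}, …, s_{π(n)}),
    # averaging f over all n! permutations of its arguments.
    perms = list(permutations(range(n)))
    def f_tilde(*s):
        return sum(f(*(s[p[i]] for i in range(n))) for p in perms) / len(perms)
    return f_tilde

# A non-symmetric kernel f(s, t) = s·t²; its symmetrization is (s t² + t s²)/2.
f = lambda s, t: s * t ** 2
f_tilde = symmetrize(f, 2)
print(f_tilde(2.0, 3.0))   # (2·9 + 3·4)/2 = 15.0
print(f_tilde(3.0, 2.0))   # symmetric in its arguments: also 15.0
```

By property 1, replacing f by \(\tilde{f}\) leaves I n ( f) unchanged, which is why the orthogonality relation in property 3 involves only the symmetrizations.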
Appendix 3: Canonical Space of the Jumps Part of a Process with Independent Increments
As in the Lévy process case, we use a nice construction by Neveu [16] to build a Poisson random measure on \((0,\infty ) \times \mathbb{R}_{0}\) with intensity measure ν defined in (1). It is worth remarking that this measure is locally finite on \((0,\infty ) \times \mathbb{R}_{0}\). We separate the construction into two steps:
Step 1: For m ≥ 0 and k ≥ 1, set
$$\displaystyle\begin{array}{rcl} \varDelta _{m,1}& =& (m,m + 1] \times \{ x \in \mathbb{R}:\, 1 <\vert x\vert \}, {}\\ \varDelta _{m,k}& =& (m,m + 1] \times \{ x \in \mathbb{R}:\, 1/k <\vert x\vert \leq 1/(k - 1)\},\quad k \geq 2. {}\\ \end{array}$$Since for every t > 0, \(\nu _{t}\) is a Lévy measure, we have that \(\nu \big(\varDelta _{m,k}\big) < \infty.\) Denote by \(\nu _{m,k}\) the restriction of ν to \(\varDelta _{m,k}\). We consider the space of finite sequences of elements of \(\varDelta _{m,k}\), including the empty sequence; specifically, let
$$\displaystyle{\varOmega _{m,k} =\bigcup _{n\geq 0}\big(\varDelta _{m,k}\big)^{n},}$$where \(\big(\varDelta _{m,k}\big)^{0} =\{\alpha \}\), α being a distinguished element that represents the empty sequence. Let
$$\displaystyle{\mathcal{A}_{m,k} =\big\{ B \subset \varOmega _{m,k}:\, B =\bigcup _{n\geq 0}B_{n},\ B_{n} \in \mathcal{B}\big(\varDelta _{m,k}\big)^{n}\big\}.}$$Since \(\nu _{m,k}(\varDelta _{m,k}) < \infty \), there is a probability \(\mathbb{Q}_{m,k}\) on \(\varDelta _{m,k}\) such that \(\nu _{m,k} = c_{m,k}\,\mathbb{Q}_{m,k}\), for some constant \(c_{m,k} > 0\). Now define a probability \(\mathbb{P}_{m,k}\) on \((\varOmega _{m,k},\mathcal{A}_{m,k})\) in the following way: for \(B =\bigcup _{n}B_{n},\ B_{n} \in \mathcal{B}\big(\varDelta _{m,k}\big)^{n}\) set
$$\displaystyle{\mathbb{P}_{m,k}(B) = e^{-c_{m,k} }\,\sum _{n=0}^{\infty }\dfrac{c_{m,k}^{n}} {n!} \,\mathbb{Q}_{m,k}^{\otimes n}\big(B_{ n}\big),}$$where \(\mathbb{Q}_{m,k}^{\otimes 0} =\delta _{\alpha }\). Then, Neveu [16, Proposition I.6] proves that under \(\mathbb{P}_{m,k}\), the mapping given by
$$\displaystyle{N'_{m,k}(\omega ) =\sum _{ j=1}^{n}\delta _{ (t_{j},x_{j})},\ \text{if}\ \omega =\big ((t_{1},x_{1}),\ldots,(t_{n},x_{n})\big),}$$and \(N'_{m,k}(\alpha ) = 0\), is a Poisson random measure with intensity \(\nu _{m,k}\).
Step 2: Now superpose the Poisson random measures \(N'_{m,k}\): let
$$\displaystyle{(\varOmega _{N},\mathcal{A}_{N}, \mathbb{P}_{N}) =\bigotimes _{m\geq 1,k\geq 1}(\varOmega _{m,k},\mathcal{A}_{m,k}, \mathbb{P}_{m,k}).}$$For ω = (ω m, k , m ≥ 1, k ≥ 1), define
$$\displaystyle{N_{m,k}^{{\ast}}(\omega ) = N'_{ m,k}(\omega _{m,k})}$$and finally
$$\displaystyle{N^{{\ast}}(\omega ) =\sum _{ m,k}N_{m,k}^{{\ast}}(\omega ),}$$which is a Poisson random measure with intensity measure ν.
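Step 1 of Neveu's construction can be sketched numerically: draw the sequence length with the Poisson law that \(\mathbb{P}_{m,k}\) prescribes, then draw that many i.i.d. points from \(\mathbb{Q}_{m,k}\). The following toy example (our illustration, with a uniform \(\mathbb{Q}\) on a single cell; all names are ours) checks that the resulting point measure gives Poisson counts with the right mean.

```python
import math
import random

def sample_poisson(lam, rng):
    # Knuth's Poisson sampler.
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        k += 1
        p *= rng.random()
    return k - 1

def neveu_cell(c, sample_q, rng):
    # Step 1 on a single cell Δ: the sequence length n has the law
    # e^{-c} c^n / n! (as in the definition of P_{m,k}), and given n the
    # points are i.i.d. with law Q.  N' = Σ_j δ_{(t_j, x_j)} is then a
    # Poisson random measure on Δ with intensity c·Q.
    n = sample_poisson(c, rng)
    return [sample_q(rng) for _ in range(n)]

# Toy cell Δ = (0,1] × (1/2,1] with Q = Uniform(Δ) and c = 3, so ν = 3·Q.
rng = random.Random(3)
c = 3.0
sample_q = lambda rng: (rng.random(), 0.5 + 0.5 * rng.random())

# N'(B) for B = (0,1/2] × (1/2,1] should be Poisson with mean ν(B) = 3/2.
counts = [sum(1 for (t, x) in neveu_cell(c, sample_q, rng) if t <= 0.5)
          for _ in range(20000)]
mean = sum(counts) / len(counts)   # ≈ 1.5
```

Step 2 then amounts to carrying out this sampling independently in every cell \(\varDelta _{m,k}\) and summing the resulting point measures.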
© 2016 Springer International Publishing Switzerland
Solé, J.L., Utzet, F. (2016). Malliavin Calculus for Stochastic Processes and Random Measures with Independent Increments. In: Peccati, G., Reitzner, M. (eds) Stochastic Analysis for Poisson Point Processes. Bocconi & Springer Series, vol 7. Springer, Cham. https://doi.org/10.1007/978-3-319-05233-5_4
Print ISBN: 978-3-319-05232-8
Online ISBN: 978-3-319-05233-5