Abstract
This chapter develops some basic theory for the stochastic analysis of Poisson processes on a general σ-finite measure space. After giving some fundamental definitions and properties (such as the multivariate Mecke equation), the chapter presents the Fock space representation of square-integrable functions of a Poisson process in terms of iterated difference operators. This is followed by the introduction of multivariate stochastic Wiener–Itô integrals and a discussion of their basic properties. The chapter then proceeds to prove the chaos expansion of square-integrable Poisson functionals, and to define and discuss Malliavin operators. Further topics are products of Wiener–Itô integrals and Mehler’s formula for the inverse of the Ornstein–Uhlenbeck generator, based on a dynamic thinning procedure. The chapter concludes with covariance identities, the Poincaré inequality, and the FKG inequality.
1 Basic Properties of a Poisson Process
Let \((\mathbb{X},\mathcal{X})\) be a measurable space. The idea of a point process with state space \(\mathbb{X}\) is that of a random countable subset of \(\mathbb{X}\), defined over a fixed probability space \((\varOmega,\mathcal{A}, \mathbb{P})\). It is both convenient and mathematically fruitful to define a point process as a random element η in the space \(\mathbf{N}_{\sigma }(\mathbb{X}) \equiv \mathbf{N}_{\sigma }\) of all σ-finite measures χ on \(\mathbb{X}\) such that \(\chi (B) \in \mathbb{Z}_{+} \cup \{\infty \}\) for all \(B \in \mathcal{X}\). To do so, we equip N σ with the smallest σ-field \(\mathcal{N}_{\sigma }(\mathbb{X}) \equiv \mathcal{N}_{\sigma }\) of subsets of N σ such that χ ↦ χ(B) is measurable for all \(B \in \mathcal{X}\). Then η: Ω → N σ is a point process if and only if
\[\{\eta (B) = k\}:=\{\omega \in \varOmega:\eta (\omega,B) = k\} \in \mathcal{A}\]
for all \(B \in \mathcal{X}\) and all \(k \in \mathbb{Z}_{+}\). Here we write η(ω, B) instead of the more clumsy η(ω)(B). We wish to stress that the results of this chapter do not require special (topological) assumptions on the state space.
The Dirac measure δ x at the point \(x \in \mathbb{X}\) is the measure on \(\mathbb{X}\) defined by \(\delta _{x}(B) =\mathbb{1}_{B}(x)\), where \(\mathbb{1}_{B}\) is the indicator function of \(B \in \mathcal{X}\). If X is a random element of \(\mathbb{X}\), then δ X is a point process on \(\mathbb{X}\). Suppose, more generally, that \(X_{1},\ldots,X_{m}\) are independent random elements in \(\mathbb{X}\) with distribution \(\mathbb{Q}\). Then
\[\eta:=\delta _{X_{1}} +\ldots +\delta _{X_{m}}\qquad (1)\]
is a point process on \(\mathbb{X}\). Because
\[\mathbb{P}(\eta (B) = k) = \binom{m}{k}\mathbb{Q}(B)^{k}(1 - \mathbb{Q}(B))^{m-k},\quad k = 0,\ldots,m,\qquad (2)\]
η is referred to as a binomial process with sample size m and sampling distribution \(\mathbb{Q}\). Taking an infinite sequence \(X_{1},X_{2},\ldots\) of independent random variables with distribution \(\mathbb{Q}\) and replacing in (1) the deterministic sample size m by an independent \(\mathbb{Z}_{+}\)-valued random variable κ (interpreting an empty sum as the null measure) yields a mixed binomial process. Of particular interest is the case where κ has a Poisson distribution with parameter λ ≥ 0, see also (5) below. It is then easy to check that
\[\mathbb{E}\exp \Big[-\int u(x)\,\eta (\mathrm{d}x)\Big] =\exp \Big[-\int (1 - e^{-u(x)})\,\mu (\mathrm{d}x)\Big]\qquad (3)\]
for any measurable function \(u: \mathbb{X} \rightarrow [0,\infty )\), where \(\mu:=\lambda \mathbb{Q}\). It is convenient to write this as
\[\mathbb{E}\,e^{-\eta (u)} = e^{-\mu (1-e^{-u})},\qquad (4)\]
where ν(u) denotes the integral of a measurable function u with respect to a measure ν. Clearly,
\[\mathbb{E}\,\eta (B) =\mu (B),\quad B \in \mathcal{X},\]
so that μ is the intensity measure of η. The identity (3) or elementary probabilistic arguments show that η has independent increments , that is, the random variables \(\eta (B_{1}),\ldots,\eta (B_{m})\) are stochastically independent whenever \(B_{1},\ldots,B_{m} \in \mathcal{X}\) are pairwise disjoint. Moreover, η(B) has a Poisson distribution with parameter μ(B), that is
\[\mathbb{P}(\eta (B) = k) = \frac{\mu (B)^{k}} {k!}\,e^{-\mu (B)},\quad k \in \mathbb{Z}_{+}.\qquad (5)\]
Let μ be a σ-finite measure on \(\mathbb{X}\). A Poisson process with intensity measure μ is a point process η on \(\mathbb{X}\) with independent increments such that (5) holds, where an expression of the form ∞ e −∞ is interpreted as 0. It is easy to see that these two requirements determine the distribution \(\mathbb{P}_{\eta }:= \mathbb{P}(\eta \in \cdot )\) of a Poisson process η. We have seen above that a Poisson process exists for a finite measure μ. In the general case, it can be constructed as a countable sum of independent Poisson processes, see [12, 15, 18] for more details. Equation (3) remains valid. Another consequence of this construction is that η has the same distribution as
\[\sum _{n=1}^{\kappa }\delta _{X_{n}}\qquad (6)\]
for some \(\mathbb{Z}_{+}\cup \{\infty \}\)-valued random variable κ,
where \(X_{1},X_{2},\ldots\) are random elements in \(\mathbb{X}\). A point process that can be (almost surely) represented in this form will be called proper. Any locally finite point process on a Borel subset of a complete separable metric space is proper. However, there are examples of Poisson processes which are not proper.
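The mixed binomial construction described above is straightforward to simulate. The sketch below is a minimal illustration, not part of the text: it assumes the state space [0, 1], the uniform sampling distribution \(\mathbb{Q}\), intensity λ = 2, and uses a hypothetical helper `poisson_sample` (Knuth's multiplication method) since the Python standard library has no Poisson sampler.

```python
import math
import random

def poisson_sample(lam, rng):
    # Knuth's method: multiply uniforms until the product drops below e^{-lam}.
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def poisson_process(lam, rng):
    """Mixed binomial construction of a proper Poisson process on [0, 1]:
    draw kappa ~ Poisson(lam), then kappa i.i.d. points with distribution
    Q = uniform on [0, 1].  The returned list encodes a sum of Dirac measures."""
    kappa = poisson_sample(lam, rng)
    return [rng.random() for _ in range(kappa)]

rng = random.Random(42)
lam = 2.0
samples = [poisson_process(lam, rng) for _ in range(20000)]
counts = [len(pts) for pts in samples]
mean = sum(counts) / len(counts)                       # ~ lam, by (5)
p0 = sum(1 for c in counts if c == 0) / len(counts)    # ~ exp(-lam), by (5)
print(mean, p0)
```

The empirical mean of \(\eta(\mathbb{X})\) and the empirical probability of seeing no point reproduce the Poisson distribution (5) up to Monte Carlo error.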
Let η be a Poisson process with intensity measure μ. A classical and extremely useful formula by Mecke [18] says that
\[\mathbb{E}\int h(\eta,x)\,\eta (\mathrm{d}x) =\int \mathbb{E}\,h(\eta +\delta _{x},x)\,\mu (\mathrm{d}x)\qquad (7)\]
for all measurable \(h: \mathbf{N}_{\sigma } \times \mathbb{X} \rightarrow [0,\infty ]\). One can use the mixed binomial representation to prove this result for finite Poisson processes. An equivalent formulation for a proper Poisson process is
\[\mathbb{E}\int h(\eta -\delta _{x},x)\,\eta (\mathrm{d}x) =\int \mathbb{E}\,h(\eta,x)\,\mu (\mathrm{d}x)\qquad (8)\]
for all measurable \(h: \mathbf{N}_{\sigma } \times \mathbb{X} \rightarrow [0,\infty ]\). Although η −δ x is in general a signed measure, we can use (6) to see that
is almost surely well defined. Both (7) and (8) characterize the distribution of a Poisson process with given intensity measure μ.
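The Mecke equation (7) lends itself to a numerical sanity check. The sketch below is an illustration under stated assumptions, not a construction from the text: it takes \(\mathbb{X} = [0,1]\), \(\mu = \lambda\) times Lebesgue measure with λ = 2, and \(h(\chi,x) := \chi([0,x])\); both sides of (7) then equal \(\lambda(1 + \lambda/2) = 4\). The helper `poisson_sample` is again a hypothetical Knuth-method sampler.

```python
import math
import random

def poisson_sample(lam, rng):
    # Knuth's method for Poisson(lam) variates.
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

rng = random.Random(7)
lam = 2.0  # intensity measure mu = lam * Lebesgue on [0, 1]

def h(points, x):
    # h(chi, x) = chi([0, x]): number of points of chi at or below x.
    return sum(1 for y in points if y <= x)

# Left-hand side of (7), E int h(eta, x) eta(dx), estimated by Monte Carlo.
n_samples = 20000
acc = 0.0
for _ in range(n_samples):
    pts = [rng.random() for _ in range(poisson_sample(lam, rng))]
    acc += sum(h(pts, x) for x in pts)
lhs = acc / n_samples

# Right-hand side of (7): int_0^1 E h(eta + delta_x, x) lam dx
#                       = lam * int_0^1 (lam * x + 1) dx = lam * (1 + lam / 2).
rhs = lam * (1.0 + lam / 2.0)
print(lhs, rhs)
```

Up to Monte Carlo error the two sides agree, illustrating how the extra point \(\delta_x\) on the right compensates for evaluating h at the points of η itself on the left.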
Equation (7) admits a useful generalization involving multiple integration. To formulate this version we consider, for \(m \in \mathbb{N}\), the m-th power \((\mathbb{X}^{m},\mathcal{X}^{m})\) of \((\mathbb{X},\mathcal{X})\). Let η be a proper point process given by (6). We define another point process η (m) on \(\mathbb{X}^{m}\) by
\[\eta ^{(m)}:=\mathop{\sum \nolimits ^{\neq }}_{i_{1},\ldots,i_{m}\leq \kappa }\delta _{(X_{i_{1}},\ldots,X_{i_{m}})},\qquad (9)\]
where the superscript ≠ indicates summation over m-tuples with pairwise different entries. (In the case \(\eta (\mathbb{X}) = \infty\) this involves only integer-valued indices.) In the case C = B m for some \(B \in \mathcal{X}\) we have that
\[\eta ^{(m)}(B^{m}) =\eta (B)\,(\eta (B) - 1)\cdots (\eta (B) - m + 1).\]
Therefore η (m) is called m-th factorial measure of η. It can be readily checked that, for any \(m \in \mathbb{N}\),
\[\eta ^{(m+1)}(C) =\int \Big[\int \mathbb{1}_{C}(x_{1},\ldots,x_{m},y)\,\eta (\mathrm{d}y) -\sum _{j=1}^{m}\mathbb{1}_{C}(x_{1},\ldots,x_{m},x_{j})\Big]\,\eta ^{(m)}(\mathrm{d}(x_{1},\ldots,x_{m})),\quad C \in \mathcal{X}^{m+1},\qquad (10)\]
where η (1): = η. This suggests a recursive definition of the factorial measures of a general point process, without using a representation as a sum of Dirac measures. The next proposition confirms this idea.
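For a finite proper point process the factorial measures can be enumerated directly. The sketch below (helper names are illustrative, not from the text) counts m-tuples of points with pairwise distinct indices and confirms the falling-factorial identity for sets of the form \(C = B^m\).

```python
from itertools import permutations

def factorial_measure(points, m, indicator):
    """eta^(m) applied to {indicator == True}: sum over m-tuples of points
    with pairwise distinct *indices* (repeated points are allowed,
    repeated indices are not -- the superscript-neq sum in (9))."""
    return sum(
        1
        for idx in permutations(range(len(points)), m)
        if indicator(tuple(points[i] for i in idx))
    )

points = [0.1, 0.2, 0.7, 0.8]
in_B = lambda x: x <= 0.5               # B = [0, 1/2], so eta(B) = 2
C2 = lambda t: all(in_B(x) for x in t)  # C = B^2

# eta^(2)(B^2) = eta(B) * (eta(B) - 1) = 2 * 1
print(factorial_measure(points, 2, C2))            # 2
# C = X^3: eta^(3)(X^3) = 4 * 3 * 2
print(factorial_measure(points, 3, lambda t: True))  # 24
```

`itertools.permutations` over index sets is exactly the summation over m-tuples with pairwise different entries.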
Proposition 1
Let η be a point process on \(\mathbb{X}\) . Then there is a uniquely determined sequence η (m) , \(m \in \mathbb{N}\) , of symmetric point processes on \(\mathbb{X}^{m}\) satisfying η (1) := η and the recursion (10) .
The proof of Proposition 1 is given in the appendix and can be skipped without too much loss. It is enough to remember that η (m) can be defined by (9), whenever η is given by (6) and that any Poisson process has a proper version.
The multivariate version of (7) (see e.g. [15]) says that
\[\mathbb{E}\int h(\eta,x_{1},\ldots,x_{m})\,\eta ^{(m)}(\mathrm{d}(x_{1},\ldots,x_{m})) =\int \mathbb{E}\,h(\eta +\delta _{x_{1}} +\ldots +\delta _{x_{m}},x_{1},\ldots,x_{m})\,\mu ^{m}(\mathrm{d}(x_{1},\ldots,x_{m}))\qquad (11)\]
for all measurable \(h: \mathbf{N}_{\sigma } \times \mathbb{X}^{m} \rightarrow [0,\infty ]\). In particular the factorial moment measures of η are given by
\[\mathbb{E}\,\eta ^{(m)}(C) =\mu ^{m}(C),\quad C \in \mathcal{X}^{m}.\qquad (12)\]
Of course (11) remains true for a measurable \(h: \mathbf{N}_{\sigma } \times \mathbb{X}^{m} \rightarrow \mathbb{R}\) provided that the right-hand side is finite when replacing h with | h | .
2 Fock Space Representation
In the remainder of this chapter we consider a Poisson process η on \(\mathbb{X}\) with σ-finite intensity measure μ and distribution \(\mathbb{P}_{\eta }\).
In this and later chapters the following difference operators (sometimes called add-one cost operators) will play a crucial role. For any f ∈ F(N σ ) (the set of all measurable functions from N σ to \(\mathbb{R}\)) and \(x \in \mathbb{X}\) the function D x f ∈ F(N σ ) is defined by
\[D_{x}f(\chi ):= f(\chi +\delta _{x}) - f(\chi ).\qquad (13)\]
Iterating this definition, for n ≥ 2 and \((x_{1},\ldots,x_{n}) \in \mathbb{X}^{n}\) we define a function \(D_{x_{1},\ldots,x_{n}\,}^{n}f \in \mathbf{F}(\mathbf{N}_{\sigma })\) inductively by
\[D_{x_{1},\ldots,x_{n}\,}^{n}f:= D_{x_{1}}^{1}D_{x_{2},\ldots,x_{n}\,}^{n-1}f,\]
where D 1: = D and D 0 f = f. Note that
\[D_{x_{1},\ldots,x_{n}\,}^{n}f(\chi ) =\sum _{J\subset \{1,\ldots,n\}}(-1)^{n-\vert J\vert }\,f\Big(\chi +\sum _{j\in J}\delta _{x_{j}}\Big),\qquad (14)\]
where | J | denotes the number of elements of J. This shows that \(D_{x_{1},\ldots,x_{n}\,}^{n}f\) is symmetric in x 1, …, x n and that \((x_{1},\ldots,x_{n},\chi )\mapsto D_{x_{1},\ldots,x_{n}\,}^{n}f(\chi )\) is measurable. We define symmetric and measurable functions T n f on \(\mathbb{X}^{n}\) by
\[T_{n}f(x_{1},\ldots,x_{n}):= \mathbb{E}\,D_{x_{1},\ldots,x_{n}\,}^{n}f(\eta ),\qquad (16)\]
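For finite configurations the subset-sum representation (14) of the iterated difference operator can be implemented directly. The sketch below is illustrative (names are not from the text): it encodes χ as a tuple of points, and checks the factorization \(D^{n}_{x_1,\ldots,x_n}e^{-\chi(v)} = \prod_i (e^{-v(x_i)}-1)\,e^{-\chi(v)}\), which reappears in the proof of Lemma 1, along with the symmetry of \(D^n f\) in its arguments.

```python
import math
from itertools import combinations

def D_n(f, xs, chi):
    """Iterated difference operator via (14): sum over subsets J of {1,...,n}
    of (-1)^(n - |J|) * f(chi + sum_{j in J} delta_{x_j}).
    chi is a tuple of points; adding Dirac masses = appending points."""
    n = len(xs)
    total = 0.0
    for k in range(n + 1):
        for J in combinations(range(n), k):
            total += (-1) ** (n - k) * f(chi + tuple(xs[j] for j in J))
    return total

# f(chi) = exp(-chi(v)) with v(x) = x; then D^n f factorizes.
f = lambda chi: math.exp(-sum(chi))
chi = (0.3,)
expected = (math.exp(-0.5) - 1) * (math.exp(-0.7) - 1) * math.exp(-0.3)

assert abs(D_n(f, (0.5, 0.7), chi) - expected) < 1e-12
# D^n f is symmetric in x_1, ..., x_n:
assert abs(D_n(f, (0.5, 0.7), chi) - D_n(f, (0.7, 0.5), chi)) < 1e-12
```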
and we set \(T_{0}f:= \mathbb{E}f(\eta )\), whenever these expectations are defined. By 〈⋅ , ⋅ 〉 n we denote the scalar product in L 2(μ n) and by \(\|\cdot \|_{n}\) the associated norm. Let L s 2(μ n) denote the symmetric functions in L 2(μ n). Our aim is to prove that the linear mapping f ↦ (T n ( f)) n ≥ 0 is an isometry from \(L^{2}(\mathbb{P}_{\eta })\) into the Fock space given by the direct sum of the spaces L s 2(μ n), n ≥ 0 (with L 2 norms scaled by \(n!^{-1/2}\)) and with L s 2(μ 0) interpreted as \(\mathbb{R}\). In Sect. 4 we will see that this mapping is surjective. The result (and its proof) is from [13] and can be seen as a crucial first step in the stochastic analysis on Poisson spaces.
Theorem 1
Let \(f,g \in L^{2}(\mathbb{P}_{\eta })\) . Then
\[\mathbb{E}\,f(\eta )g(\eta ) =\sum _{n=0}^{\infty } \frac{1} {n!}\langle T_{n}f,T_{n}g\rangle _{n},\qquad (17)\]
where the series converges absolutely.
We will prepare the proof with some lemmas. Let \(\mathcal{X}_{0}\) be the system of all measurable \(B \in \mathcal{X}\) with μ(B) < ∞. Let F 0 be the space of all bounded and measurable functions \(v: \mathbb{X} \rightarrow [0,\infty )\) vanishing outside some \(B \in \mathcal{X}_{0}\). Let G denote the space of all (bounded and measurable) functions \(g: \mathbf{N}_{\sigma } \rightarrow \mathbb{R}\) of the form
\[g(\chi ) = a_{1}e^{-\chi (v_{1})} +\ldots +a_{n}e^{-\chi (v_{n})},\qquad (18)\]
where \(n \in \mathbb{N}\), \(a_{1},\ldots,a_{n} \in \mathbb{R}\) and v 1, …, v n ∈ F 0.
Lemma 1
Relation (17) holds for f,g ∈ G .
Proof
By linearity it suffices to consider functions f and g of the form
\[f(\chi ) = e^{-\chi (v)},\qquad g(\chi ) = e^{-\chi (w)}\]
for v, w ∈ F 0. Then we have for n ≥ 1 that
\[D_{x_{1},\ldots,x_{n}\,}^{n}f(\chi ) = (e^{-v} - 1)^{\otimes n}(x_{1},\ldots,x_{n})\,e^{-\chi (v)},\]
where \((e^{-v} - 1)^{\otimes n}(x_{1},\ldots,x_{n}):=\prod _{ i=1}^{n}(e^{-v(x_{i})} - 1)\). From (3) we obtain that
\[T_{n}f = e^{-\mu (1-e^{-v})}\,(e^{-v} - 1)^{\otimes n}.\qquad (19)\]
Since v ∈ F 0 it follows that T n f ∈ L s 2(μ n), n ≥ 0. Using (3) again, we obtain that
\[\mathbb{E}\,f(\eta )g(\eta ) =\exp \big[-\mu (1 - e^{-v-w})\big].\qquad (20)\]
On the other hand we have from (19) (putting μ 0(1): = 1) that
This equals the right-hand side of (20). □
To extend (17) to general \(f,g \in L^{2}(\mathbb{P}_{\eta })\) we need two further lemmas.
Lemma 2
The set G is dense in \(L^{2}(\mathbb{P}_{\eta })\) .
Proof
Let W be the space of all bounded measurable \(g: \mathbf{N}_{\sigma } \rightarrow \mathbb{R}\) that can be approximated in \(L^{2}(\mathbb{P}_{\eta })\) by functions in G. This space is closed under monotone and uniformly bounded convergence and also under uniform convergence. Moreover, it contains the constant functions. The space G is stable under multiplication and we denote by \(\mathcal{N}'\) the smallest σ-field on N σ such that χ ↦ h(χ) is measurable for all h ∈ G. A functional version of the monotone class theorem (see e.g. Theorem I.21 in [1]) implies that W contains any bounded \(\mathcal{N}'\)-measurable g. On the other hand we have that
for any \(C \in \mathcal{X}\). Hence χ ↦ χ(C) is \(\mathcal{N}'\)-measurable whenever \(C \in \mathcal{X}_{0}\). Since μ is σ-finite, for any \(C \in \mathcal{X}\) there is a monotone sequence \(C_{k} \in \mathcal{X}_{0}\), \(k \in \mathbb{N}\), with union C, so that χ ↦ χ(C) is \(\mathcal{N}'\)-measurable. Hence \(\mathcal{N}' = \mathcal{N}_{\sigma }\) and it follows that W contains all bounded measurable functions. But then W is clearly dense in \(L^{2}(\mathbb{P}_{\eta })\) and the proof of the lemma is complete. □
Lemma 3
Suppose that \(f,f^{1},f^{2},\ldots \in L^{2}(\mathbb{P}_{\eta })\) satisfy f k → f in \(L^{2}(\mathbb{P}_{\eta })\) as k →∞, and that h: N σ → [0,1] is measurable. Let \(n \in \mathbb{N}\) , let \(C \in \mathcal{X}_{0}\) and set B:= C n . Then
\[\lim _{k\rightarrow \infty }\mathbb{E}\int _{B}h(\eta )\,\big\vert D_{x_{1},\ldots,x_{n}\,}^{n}f^{k}(\eta ) - D_{x_{1},\ldots,x_{n}\,}^{n}f(\eta )\big\vert \,\mu ^{n}(\mathrm{d}(x_{1},\ldots,x_{n})) = 0.\qquad (21)\]
Proof
By (15), the relation (21) is implied by the convergence
\[\lim _{k\rightarrow \infty }\mathbb{E}\int _{B}h(\eta )\,\big\vert \,f^{k}(\eta +\delta _{x_{1}} +\ldots +\delta _{x_{m}}) - f(\eta +\delta _{x_{1}} +\ldots +\delta _{x_{m}})\big\vert \,\mu ^{n}(\mathrm{d}(x_{1},\ldots,x_{n})) = 0\qquad (22)\]
for all m ∈ { 0, …, n}. For m = 0 this is obvious. Assume m ∈ { 1, …, n}. Then the integral in (22) equals
where we have used (11) to get the equality. By the Cauchy–Schwarz inequality the last expression is bounded above by
Since the Poisson distribution has moments of all orders, we obtain (22) and hence the lemma. □
Proof of Theorem 1
By linearity and the polarization identity
\[4\,\mathbb{E}\,f(\eta )g(\eta ) = \mathbb{E}\,(\,f + g)(\eta )^{2} -\mathbb{E}\,(\,f - g)(\eta )^{2}\]
it suffices to prove (17) for \(f = g \in L^{2}(\mathbb{P}_{\eta })\). By Lemma 2 there are f k ∈ G, \(k \in \mathbb{N}\), satisfying f k → f in \(L^{2}(\mathbb{P}_{\eta })\) as k → ∞. By Lemma 1, Tf k, \(k \in \mathbb{N}\), is a Cauchy sequence in \(\mathbf{H}:= \mathbb{R} \oplus \oplus _{n=1}^{\infty }L_{s}^{2}(\mu ^{n})\). The direct sum of the scalar products (n! )−1〈⋅ , ⋅ 〉 n makes H a Hilbert space. Let \(\tilde{f} = (\tilde{f}_{n}) \in \mathbf{H}\) be the limit, that is
Taking the limit in the identity \(\mathbb{E}f^{k}(\eta )^{2} =\langle Tf^{k},Tf^{k}\rangle _{\mathbf{H}}\) yields \(\mathbb{E}f(\eta )^{2} =\langle \tilde{ f},\tilde{f}\rangle _{\mathbf{H}}\). Equation (23) implies that \(\tilde{f}_{0} = \mathbb{E}f(\eta ) = T_{0}f\). It remains to show that for any n ≥ 1,
Let \(C \in \mathcal{X}_{0}\) and B: = C n. Let μ B n denote the restriction of the measure μ n to B. By (23) T n f k converges in L 2(μ B n) (and hence in L 1(μ B n)) to \(\tilde{f}_{n}\), while by the definition (16) of T n , and the case h ≡ 1 of (22), T n f k converges in L 1(μ B n) to T n f. Hence these \(L^{1}(\mu _{B}^{n})\) limits must be the same almost everywhere, so that \(\tilde{f}_{n} = T_{n}\,f\ \mu ^{n}\)-a.e. on B. Since μ is assumed σ-finite, this implies (24) and hence the theorem. □
3 Multiple Wiener–Itô Integrals
For n ≥ 1 and g ∈ L 1(μ n) we define (see [6, 7, 28, 29])
\[I_{n}(g):=\sum _{J\subset [n]}(-1)^{n-\vert J\vert }\iint g(x_{1},\ldots,x_{n})\,\eta ^{(\vert J\vert )}(\mathrm{d}x_{J})\,\mu ^{n-\vert J\vert }(\mathrm{d}x_{J^{c}}),\qquad (25)\]
where \([n]:=\{ 1,\ldots,n\}\), J c: = [n]∖ J and x J : = (x j ) j ∈ J . If J = ∅, then the inner integral on the right-hand side has to be interpreted as μ n(g). (This is to say that η (0)(1): = 1.) The multivariate Mecke equation (11) implies that all integrals in (25) are finite and that \(\mathbb{E}I_{n}(g) = 0\).
Given functions \(g_{i}: \mathbb{X} \rightarrow \mathbb{R}\) for \(i = 1,\ldots,n\), the tensor product ⊗ i = 1 n g i is the function from \(\mathbb{X}^{n}\) to \(\mathbb{R}\) which maps each \((x_{1},\ldots,x_{n})\) to ∏ i = 1 n g i (x i ). When the functions \(g_{1},\ldots,g_{n}\) are all the same function h, we write h ⊗n for this tensor product function. In this case the definition (25) simplifies to
Let Σ n denote the set of all permutations of [n], and for a measurable \(g: \mathbb{X}^{n} \rightarrow \mathbb{R}\) define the symmetrization \(\tilde{g}\) of g by
\[\tilde{g}(x_{1},\ldots,x_{n}):= \frac{1} {n!}\sum _{\pi \in \varSigma _{n}}g(x_{\pi (1)},\ldots,x_{\pi (n)}).\]
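The symmetrization is simply an average over all permutations of the arguments, as in the following small sketch (names illustrative):

```python
from itertools import permutations
from math import factorial

def symmetrize(g, n):
    """Return the symmetrization of an n-argument function g:
    the average of g over all n! permutations of its arguments."""
    def g_tilde(*xs):
        assert len(xs) == n
        return sum(
            g(*(xs[p] for p in perm)) for perm in permutations(range(n))
        ) / factorial(n)
    return g_tilde

g = lambda x, y: x * y * y      # not symmetric
gt = symmetrize(g, 2)           # gt(x, y) = (x*y^2 + y*x^2) / 2

assert gt(2.0, 3.0) == gt(3.0, 2.0) == 15.0  # (2*9 + 3*4) / 2
```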
The following isometry properties of the operators I n are crucial. The proof is similar to the one of [16, Theorem 3.1] and is based on the product form (12) of the factorial moment measures and some combinatorial arguments. For more information on the intimate relationships between moments of Poisson integrals and the combinatorial properties of partitions we refer to [16, 21, 25, 28].
Lemma 4
Let g ∈ L 2 (μ m ) and h ∈ L 2 (μ n ) for m,n ≥ 1 and assume that {g ≠ 0} ⊂ B m and {h ≠ 0} ⊂ B n for some \(B \in \mathcal{X}_{0}\) . Then
\[\mathbb{E}\,I_{m}(g)I_{n}(h) = \mathbb{1}\{m = n\}\,m!\,\langle \tilde{g},\tilde{h}\rangle _{m}.\qquad (28)\]
Proof
We start with a combinatorial identity. Let \(n \in \mathbb{N}\). A subpartition of [n] is a (possibly empty) family σ of nonempty pairwise disjoint subsets of [n]. The cardinality of ∪ J ∈ σ J is denoted by \(\|\sigma \|\). For \(u \in \mathbf{F}(\mathbb{X}^{n})\) we define \(u_{\sigma }: \mathbb{X}^{\vert \sigma \vert +n-\|\sigma \|}\rightarrow \mathbb{R}\) by identifying the arguments belonging to the same J ∈ σ. (The arguments \(x_{1},\ldots,x_{\vert \sigma \vert +n-\|\sigma \|}\) have to be inserted in the order of occurrence.) Now we take \(r,s \in \mathbb{Z}_{+}\) such that r + s ≥ 1 and define Σ r, s as the set of all partitions of \(\{1,\ldots,r + s\}\) such that \(\vert J \cap \{ 1,\ldots,r\}\vert \leq 1\) and \(\vert J \cap \{ r + 1,\ldots,r + s\}\vert \leq 1\) for all J ∈ σ. Let \(u \in \mathbf{F}(\mathbb{X}^{r+s})\). It is easy to see that
provided that η({u ≠ 0}) < ∞. (In the case r = 0 the inner integral on the left-hand side is interpreted as 1.)
We next note that g ∈ L 1(μ m) and h ∈ L 1(μ n) and abbreviate f: = g ⊗ h. Let \(k:= m + n\), J 1: = [m] and \(J_{2}:=\{ m + 1,\ldots,m + n\}\). The definition (25) and Fubini’s theorem imply that
where I c: = [k]∖ I and x J : = (x j ) j ∈ J for any J ⊂ [k]. We now take the expectation of (30) and use Fubini’s theorem (justified by our integrability assumptions on g and h). Thanks to (29) and (12) we can compute the expectation of the inner two integrals to obtain that
where Σ m, n ∗ is the set of all subpartitions σ of [k] such that | J ∩ J 1 | ≤ 1 and | J ∩ J 2 | ≤ 1 for all J ∈ σ. Let Σ m, n ∗, 2 ⊂ Σ m, n ∗ be the set of all subpartitions of [k] such that | J | = 2 for all J ∈ σ. For any π ∈ Σ m, n ∗, 2 we let Σ m, n ∗(π) denote the set of all σ ∈ Σ m, n ∗ satisfying π ⊂ σ. Note that π ∈ Σ m, n ∗(π) and that for any σ ∈ Σ m, n ∗ there is a unique π ∈ Σ m, n ∗, 2 such that σ ∈ Σ m, n ∗(π). In this case
so that (31) implies
The inner sum comes to zero, except in the case where \(\|\pi \|= k\). Hence (32) vanishes unless m = n. In the latter case we have
as asserted. □
Any g ∈ L 2(μ m) is the L 2-limit of a sequence g k ∈ L 2(μ m) satisfying the assumptions of Lemma 4. For instance we may take \(g_{k}:=\mathbb{1}_{(B_{k})^{m}}g\), where μ(B k ) < ∞ and \(B_{k} \uparrow \mathbb{X}\) as k → ∞. Therefore the isometry (28) allows us to extend the linear operator I m in a unique way to L 2(μ m). It follows from the isometry that \(I_{m}(g) = I_{m}(\tilde{g})\) for all g ∈ L 2(μ m). Moreover, (28) remains true for arbitrary g ∈ L 2(μ m) and h ∈ L 2(μ n). It is convenient to set I 0(c): = c for \(c \in \mathbb{R}\). When m ≥ 1, the random variable I m (g) is the (m-th order) Wiener–Itô integral of g ∈ L 2(μ m) with respect to the compensated Poisson process \(\hat{\eta }:=\eta -\mu\). The reference to \(\hat{\eta }\) comes from the explicit definition (25). We note that \(\hat{\eta }(B)\) is only defined for \(B \in \mathcal{X}_{0}\). In fact, \(\{\hat{\eta }(B): B \in \mathcal{X}_{0}\}\) is an independent random measure in the sense of [7].
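For m = 1 the pathwise definition (25) reduces to \(I_1(g) = \eta(g) - \mu(g) = \hat\eta(g)\), and the isometry (28) reads \(\mathbb{E}\,I_1(g)^2 = \mu(g^2)\). This can be checked by simulation; the sketch below is an illustration under stated assumptions (state space [0, 1], μ = 2·Lebesgue, g(x) = x, so μ(g) = 1 and μ(g²) = 2/3), with `poisson_sample` again a hypothetical Knuth-method helper.

```python
import math
import random

def poisson_sample(lam, rng):
    # Knuth's method for Poisson(lam) variates.
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

rng = random.Random(3)
lam = 2.0                 # mu = lam * Lebesgue on [0, 1]
g = lambda x: x           # mu(g) = lam / 2 = 1, mu(g^2) = lam / 3 = 2/3

n_samples = 40000
vals = []
for _ in range(n_samples):
    pts = [rng.random() for _ in range(poisson_sample(lam, rng))]
    vals.append(sum(g(x) for x in pts) - lam / 2.0)  # I_1(g) = eta(g) - mu(g)

mean = sum(vals) / n_samples                      # ~ 0, since E I_1(g) = 0
second_moment = sum(v * v for v in vals) / n_samples  # ~ mu(g^2) = 2/3
print(mean, second_moment)
```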
4 The Wiener–Itô Chaos Expansion
A fundamental result of Itô [7] and Wiener [29] says that every square integrable function of the Poisson process η can be written as an infinite series of orthogonal stochastic integrals. Our aim is to prove the following explicit version of this Wiener–Itô chaos expansion . Recall definition (16).
Theorem 2
Let \(f \in L^{2}(\mathbb{P}_{\eta })\) . Then T n f ∈ L s 2 (μ n ), \(n \in \mathbb{N}\) , and
\[f(\eta ) =\sum _{ n=0}^{\infty } \frac{1} {n!}I_{n}(T_{n}f),\qquad (33)\]
where the series converges in \(L^{2}(\mathbb{P})\) . Moreover, if g n ∈ L s 2 (μ n ) for \(n \in \mathbb{Z}_{+}\) satisfy \(f(\eta ) =\sum _{ n=0}^{\infty }\frac{1} {n!}I_{n}(g_{n})\) with convergence in \(L^{2}(\mathbb{P})\) , then \(g_{0} = \mathbb{E}f(\eta )\) and g n = T n f, μ n -a.e. on \(\mathbb{X}^{n}\) , for all \(n \in \mathbb{N}\) .
For a homogeneous Poisson process on the real line, the explicit chaos expansion (33) was proved in [8]. The general case was formulated and proved in [13]. Stroock [27] has proved the counterpart of (33) for Brownian motion. Stroock’s formula involves iterated Malliavin derivatives and requires stronger integrability assumptions on f(η).
Theorem 2 and the isometry properties (28) of stochastic integrals show that the isometry f ↦ (T n ( f)) n ≥ 0 is in fact a bijection from \(L^{2}(\mathbb{P}_{\eta })\) onto the Fock space. The following lemma is the key for the proof.
Lemma 5
Let \(f(\chi ):= e^{-\chi (v)}\) , \(\chi \in \mathbf{N}_{\sigma }(\mathbb{X})\) , where \(v: \mathbb{X} \rightarrow [0,\infty )\) is a measurable function vanishing outside a set \(B \in \mathcal{X}\) with μ(B) < ∞. Then (33) holds \(\mathbb{P}\) -a.s. and in \(L^{2}(\mathbb{P})\) .
Proof
By (3) and (19) the right-hand side of (33) equals the formal sum
Using the pathwise definition (25) we obtain that almost surely
where N: = η(B). Assume for the moment that η is proper and write \(\delta _{X_{1}} +\ldots +\delta _{X_{N}}\) for the restriction of η to B. Then we have almost surely that
and hence (33) holds with almost sure convergence of the series. To demonstrate that convergence also holds in \(L^{2}(\mathbb{P})\), let the partial sum I(m) be given by the right-hand side of (34) with the series terminated at n = m. Then since \(\mu (1 - e^{-v})\) is nonnegative and \(\vert 1 - e^{-v(y)}\vert \leq 1\) for all y, a similar argument to (35) yields
Since 2N has finite moments of all orders, by dominated convergence the series (34) (and hence (33)) converges in \(L^{2}(\mathbb{P})\).
Since the convergence of the right-hand side of (34) as well as the almost sure identity \(I = e^{-\eta (v)}\) remain true for any point process with the same distribution as η (that is, for any Poisson process with intensity measure μ), it was no loss of generality to assume that η is proper. □
Proof of Theorem 2
Let \(f \in L^{2}(\mathbb{P}_{\eta })\) and define T n f for \(n \in \mathbb{Z}_{+}\) by (16). By (28) and Theorem 1,
Hence the infinite series of orthogonal terms
\[S:=\sum _{ n=0}^{\infty } \frac{1} {n!}I_{n}(T_{n}f)\]
converges in \(L^{2}(\mathbb{P})\). Let h ∈ G, where G was defined at (18). By Lemma 5 and linearity of I n (⋅ ) the sum \(\sum _{n=0}^{\infty } \frac{1} {n!}I_{n}(T_{n}h)\) converges in \(L^{2}(\mathbb{P})\) to h(η). Using (28) followed by Theorem 1 yields
Hence if \(\mathbb{E}(\,f(\eta ) - h(\eta ))^{2}\) is small, then so is \(\mathbb{E}(\,f(\eta ) - S)^{2}\). Since G is dense in \(L^{2}(\mathbb{P}_{\eta })\) by Lemma 2, it follows that f(η) = S almost surely.
To prove the uniqueness, suppose that also g n ∈ L s 2(μ n) for \(n \in \mathbb{Z}_{+}\) are such that \(\sum _{n=0}^{\infty } \frac{1} {n!}I_{n}(g_{n})\) converges in \(L^{2}(\mathbb{P})\) to f(η). By taking expectations we must have \(g_{0} = \mathbb{E}f(\eta ) = T_{0}f\). For n ≥ 1 and h ∈ L s 2(μ n), by (28) and (33) we have
and similarly with T n f replaced by g n , so that \(\langle T_{n}\,f - g_{n},h\rangle _{n} = 0\). Putting \(h = T_{n}\,f - g_{n}\) gives \(\|T_{n}\,f - g_{n}\|_{n} = 0\) for each n, completing the proof of the theorem. □
5 Malliavin Operators
For any p ≥ 0 we denote by L η p the space of all random variables \(F \in L^{p}(\mathbb{P})\) such that \(F = f(\eta )\ \mathbb{P}\)-almost surely, for some f ∈ F(N σ ). Note that the space L η p is a subset of \(L^{p}(\mathbb{P})\) while \(L^{p}(\mathbb{P}_{\eta })\) is the space of all measurable functions f ∈ F(N σ ) satisfying \(\int \vert \,f\vert ^{p}\,\mathrm{d}\mathbb{P}_{\eta } = \mathbb{E}\vert \,f(\eta )\vert ^{p} <\infty\). The representative f of \(F \in L_{\eta }^{p}\) is a \(\mathbb{P}_{\eta }\)-a.e. uniquely defined element of \(L^{p}(\mathbb{P}_{\eta })\). For \(x \in \mathbb{X}\) we can then define the random variable D x F: = D x f(η). More generally, we define \(D_{x_{1},\ldots,x_{n}\,}^{n}F:= D_{x_{1},\ldots,x_{n}\,}^{n}f(\eta )\) for any \(n \in \mathbb{N}\) and \(x_{1},\ldots,x_{n} \in \mathbb{X}\). The mapping \((\omega,x_{1},\ldots,x_{n})\mapsto D_{x_{1},\ldots,x_{n}\,}^{n}F(\omega )\) is denoted by D n F (or by DF in the case n = 1). The multivariate Mecke equation (11) easily implies that these definitions are \(\mathbb{P}\otimes \mu\)-a.e. independent of the choice of the representative.
By (33) any F ∈ L η 2 can be written as
\[F =\sum _{ n=0}^{\infty }I_{n}(\,f_{n}),\qquad (36)\]
where \(f_{n}:= \frac{1} {n!}\mathbb{E}D^{n}F\). In particular we obtain from (28) (or directly from Theorem 1) that
\[\mathbb{E}\,F^{2} =\sum _{ n=0}^{\infty }n!\,\|f_{n}\|_{n}^{2}.\qquad (37)\]
We denote by dom D the set of all F ∈ L η 2 satisfying
\[\sum _{n=1}^{\infty }n\,n!\,\|f_{n}\|_{n}^{2} <\infty.\qquad (38)\]
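For a concrete functional the Fock space identity (37) can be checked by hand. As an illustration (not from the text), take F = η(B): then D x F = 𝟙 B (x) is deterministic and all higher-order differences vanish, so f 0 = μ(B), f 1 = 𝟙 B and f n = 0 for n ≥ 2; in particular F ∈ dom D. With the illustrative value μ(B) = 2, a two-line arithmetic check reads:

```python
mu_B = 2.0
# Fock space side of (37): f_0^2 + 1! * ||1_B||_1^2 = mu(B)^2 + mu(B)
fock_side = mu_B ** 2 + mu_B
# Direct side: E eta(B)^2 = Var eta(B) + (E eta(B))^2 = mu(B) + mu(B)^2, by (5)
direct_side = mu_B + mu_B ** 2
assert abs(fock_side - direct_side) < 1e-12
```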
The following result is taken from [13] and generalizes Theorem 6.5 in [8] (see also Theorem 6.2 in [20]). It shows that under the assumption (38) the pathwise defined difference operator DF coincides with the Malliavin derivative of F. The space dom D is the domain of this operator.
Theorem 3
Let F ∈ L η 2 be given by (36) . Then \(DF \in L^{2}(\mathbb{P}\otimes \mu )\) iff F ∈dom D. In this case we have \(\mathbb{P}\) -a.s. and for μ-a.e. \(x \in \mathbb{X}\) that
\[D_{x}F =\sum _{ n=1}^{\infty }n\,I_{n-1}(\,f_{n}(x,\cdot )).\qquad (39)\]
The proof of Theorem 3 requires some preparations. Since
(28) implies that the infinite series
\[D'_{x}F:=\sum _{ n=1}^{\infty }n\,I_{n-1}(\,f_{n}(x,\cdot ))\qquad (40)\]
converges in \(L^{2}(\mathbb{P})\) for μ-a.e. \(x \in \mathbb{X}\) provided that F ∈ dom D. By construction of the stochastic integrals we can assume that (ω, x) ↦ (I n−1 f n (x, ⋅ ))(ω) is measurable for all n ≥ 1. Therefore we can also assume that the mapping D′F given by (ω, x) ↦ D′ x F(ω) is measurable. We have just seen that
\[\mathbb{E}\int (D'_{x}F)^{2}\,\mu (\mathrm{d}x) =\sum _{ n=1}^{\infty }n\,n!\,\|f_{n}\|_{n}^{2}.\qquad (41)\]
Next we introduce an operator acting on random functions that will turn out to be the adjoint of the difference operator D, see Theorem 4. For p ≥ 0 let \(L_{\eta }^{p}(\mathbb{P}\otimes \mu )\) denote the set of all \(H \in L^{p}(\mathbb{P}\otimes \mu )\) satisfying H(ω, x) = h(η(ω), x) for \(\mathbb{P}\otimes \mu\)-a.e. (ω, x) for some representative \(h \in \mathbf{F}(\mathbf{N}_{\sigma } \times \mathbb{X})\). For such an H we have for μ-a.e. x that \(H(x):= H(\cdot,x) \in L^{2}(\mathbb{P})\) and (by Theorem 2)
\[H(x) =\sum _{ n=0}^{\infty }I_{n}(h_{n}(x,\cdot )),\qquad (42)\]
where \(h_{0}(x):= \mathbb{E}H(x)\) and \(h_{n}(x,x_{1},\ldots,x_{n}):= \frac{1} {n!}\mathbb{E}D_{x_{1},\ldots,x_{n}\,}^{n}H(x)\). We can then define the Kabanov–Skorohod integral [3, 10, 11, 26] of H, denoted δ(H), by
\[\delta (H):=\sum _{ n=0}^{\infty }I_{n+1}(\tilde{h}_{n}),\qquad (43)\]
which converges in \(L^{2}(\mathbb{P})\) provided that
\[\sum _{n=0}^{\infty }(n + 1)!\,\|\tilde{h}_{n}\|_{n+1}^{2} <\infty.\qquad (44)\]
Here
\[\tilde{h}_{n}(x_{1},\ldots,x_{n+1}):= \frac{1} {n + 1}\sum _{i=1}^{n+1}h_{n}(x_{i},x_{1},\ldots,x_{i-1},x_{i+1},\ldots,x_{n+1})\qquad (45)\]
is the symmetrization of h n . The set of all \(H \in L_{\eta }^{2}(\mathbb{P}\otimes \mu )\) satisfying the latter assumption is the domain dom δ of the operator δ.
We continue with a preliminary version of Theorem 4.
Proposition 2
Let F ∈dom D. Let \(H \in L_{\eta }^{2}(\mathbb{P}\otimes \mu )\) be given by (42) and assume that
\[\sum _{n=0}^{\infty }(n + 1)!\,\|h_{n}\|_{n+1}^{2} <\infty.\qquad (46)\]
Then
\[\mathbb{E}\int (D'_{x}F)H(x)\,\mu (\mathrm{d}x) = \mathbb{E}\,F\delta (H).\qquad (47)\]
Proof
The Minkowski inequality implies (44) and hence H ∈ dom δ. Using (40) and (42) together with (28), we obtain that
where the use of Fubini’s theorem is justified by (41), the assumption on H and the Cauchy–Schwarz inequality. Swapping the order of summation and integration (to be justified soon) we see that the last integral equals
where we have used the fact that f n is a symmetric function. By definition (43) and (28), the last series coincides with \(\mathbb{E}F\delta (H)\). The above change of order is permitted since
and the latter series is finite in view of the Cauchy–Schwarz inequality, the finiteness of (36) and assumption (46). □
Proof of Theorem 3
We need to show that
\[D'F = DF\quad \mathbb{P}\otimes \mu \mbox{-a.e.}\qquad (48)\]
First consider the case with \(f(\chi ) = e^{-\chi (v)}\) with a measurable \(v: \mathbb{X} \rightarrow [0,\infty )\) vanishing outside a set with finite μ-measure. Then n! f n = T n f is given by (19). Given \(n \in \mathbb{N}\),
which is summable in n, so (38) holds in this case. Also, in this case, \(D_{x}\,f(\eta ) = (e^{v(x)} - 1)f(\eta )\) by (13), while \(f_{n}(\cdot,x) = (e^{-v(x)} - 1)n^{-1}f_{n-1}\) so that by (40),
where the last inequality is from Lemma 5 again. Thus (48) holds for f of this form. By linearity this extends to all elements of G.
Let us now consider the general case. Choose g k ∈ G, \(k \in \mathbb{N}\), such that G k : = g k (η) → F in \(L^{2}(\mathbb{P})\) as k → ∞, see Lemma 2. Let \(H \in L_{\eta }^{2}(\mathbb{P}\otimes \mu )\) have the representative \(h(\chi,x):= h'(\chi )\mathbb{1}_{B}(x)\), where h′ is as in Lemma 5 and \(B \in \mathcal{X}_{0}\). From Lemma 5 it is easy to see that (46) holds. Therefore we obtain from Proposition 2 and the linearity of the operator D′ that
On the other hand,
and by the case n = 1 of Lemma 3, this tends to zero as k → ∞. Since D′ x g k = D x g k a.s. for μ-a.e. x we obtain from (49) that
By Lemma 2, the linear combinations of the functions h considered above are dense in \(L_{\eta }^{2}(\mathbb{P}\otimes \mu )\), and by linearity (50) carries through to h in this dense class of functions too, so we may conclude that the assertion (48) holds.
It follows from (41) and (48) that F ∈ dom D implies \(DF \in L_{\eta }^{2}(\mathbb{P}\otimes \mu )\). The other implication was noticed in [22, Lemma 3.1]. To prove it, we assume \(DF \in L_{\eta }^{2}(\mathbb{P}\otimes \mu )\) and apply the Fock space representation (17) to \(\mathbb{E}(D_{x}F)^{2}\) for μ-a.e. x. This gives
and hence F ∈ dom D. □
The following duality relation (also referred to as partial integration, or integration by parts formula) shows that the operator δ is the adjoint of the difference operator D. It is a special case of Proposition 4.2 in [20] applying to general Fock spaces.
Theorem 4
Let F ∈dom D and H ∈dom δ. Then,
\[\mathbb{E}\int (D_{x}F)H(x)\,\mu (\mathrm{d}x) = \mathbb{E}\,F\delta (H).\qquad (51)\]
Proof
We fix F ∈ dom D. Theorem 3 and Proposition 2 imply that (51) holds if \(H \in L_{\eta }^{2}(\mathbb{P}\otimes \mu )\) satisfies the stronger assumption (46). For any \(m \in \mathbb{N}\) we define
\[H^{(m)}(x):= h_{0}(x) +\sum _{ n=1}^{m}I_{n}(h_{n}(x,\cdot )).\qquad (52)\]
Since H (m) satisfies (46) we obtain that
From (28) we have
As m → ∞ this tends to zero, since
is finite. It follows that the left-hand side of (53) tends to the left-hand side of (51).
To treat the right-hand side of (53) we note that
Since H ∈ dom δ this tends to 0 as m → ∞. Therefore \(\mathbb{E}(\delta (H) -\delta (H^{(m)}))^{2} \rightarrow 0\) and the right-hand side of (53) tends to the right-hand side of (51). □
We continue with a basic isometry property of the Kabanov–Skorohod integral . In the present generality the result is in [17]. A less general version is [24, Proposition 6.5.4].
Theorem 5
Let \(H \in L_{\eta }^{2}(\mathbb{P}\otimes \mu )\) be such that
\[\mathbb{E}\iint (D_{y}H(x))^{2}\,\mu ^{2}(\mathrm{d}(x,y)) <\infty.\qquad (55)\]
Then, H ∈dom δ and moreover
\[\mathbb{E}\,\delta (H)^{2} = \mathbb{E}\int H(x)^{2}\,\mu (\mathrm{d}x) + \mathbb{E}\iint D_{y}H(x)\,D_{x}H(y)\,\mu ^{2}(\mathrm{d}(x,y)).\qquad (56)\]
Proof
Suppose that H is given as in (42). Assumption (55) implies that H(x) ∈ dom D for μ-a.e. \(x \in \mathbb{X}\). We therefore deduce from Theorem 3 that
\(\mathbb{P}\)-a.s. and for μ 2-a.e. \((x,y) \in \mathbb{X}^{2}\). Using assumption (55) together with the isometry properties (28), we infer that
yielding that H ∈ dom δ.
Now we define H (m) ∈ dom δ, \(m \in \mathbb{N}\), by (52) and note that
Using the symmetry properties of the functions h n it is easy to see that the latter sum equals
On the other hand, we have from Theorem 3 that
so that
coincides with (57). Hence
These computations imply that g m (x, y): = D y H (m)(x) converges in \(L^{2}(\mathbb{P} \otimes \mu ^{2})\) towards g. Similarly, g′ m (x, y): = D x H (m)(y) converges towards g′(x, y): = D x g(y). Since we have seen in the proof of Theorem 4 that H (m) → H in \(L^{2}(\mathbb{P}\otimes \mu )\) as m → ∞, we can now conclude that the right-hand side of (58) tends to the right-hand side of the asserted identity (56). On the other hand we know by (54) that \(\mathbb{E}\delta (H^{(m)})^{2} \rightarrow \mathbb{E}\delta (H)^{2}\) as m → ∞. This concludes the proof. □
To explain the connection of (55) with classical stochastic analysis we assume for a moment that \(\mathbb{X}\) is equipped with a transitive binary relation < such that {(x, y): x < y} is a measurable subset of \(\mathbb{X}^{2}\) and such that x < x fails for all \(x \in \mathbb{X}\). We also assume that < totally orders the points of \(\mathbb{X}\ \mu\)-a.e., that is
where \([x]:= \mathbb{X}\setminus \{y \in \mathbb{X}: \mbox{ $y <x$ or $x <y$}\}\). For any χ ∈ N σ let χ x denote the restriction of χ to \(\{y \in \mathbb{X}: y <x\}\). Our final assumption on < is that (χ, y) ↦ χ y is measurable. A measurable function \(h: \mathbf{N}_{\sigma } \times \mathbb{X} \rightarrow \mathbb{R}\) is called predictable if
A process \(H \in L_{\eta }^{0}(\mathbb{P}\otimes \mu )\) is predictable if it has a predictable representative. In this case we have \(\mathbb{P}\otimes \mu\)-a.e. that D x H(y) = 0 for y < x and D y H(x) = 0 for x < y. In view of (59) we obtain from (56) the classical Itô isometry
\[\mathbb{E}\,\delta (H)^{2} = \mathbb{E}\int H(x)^{2}\,\mu (\mathrm{d}x).\qquad (60)\]
In fact, a combinatorial argument shows that any predictable \(H \in L_{\eta }^{2}(\mathbb{P}\otimes \mu )\) is in the domain of δ. We refer to [14] for more detail and references to the literature.
We return to the general setting and derive a pathwise interpretation of the Kabanov–Skorohod integral. For \(H \in L_{\eta }^{1}(\mathbb{P}\otimes \mu )\) with representative h we define
\[\delta '(H):=\int h(\eta -\delta _{x},x)\,\eta (\mathrm{d}x) -\int h(\eta,x)\,\mu (\mathrm{d}x).\]
The Mecke equation (7) implies that this definition does \(\mathbb{P}\)-a.s. not depend on the choice of the representative. The next result (see [13]) shows that the Kabanov–Skorohod integral and the operator δ′ coincide on the intersection of their domains. In the case of a diffuse intensity measure μ (and requiring some topological assumptions on \((\mathbb{X},\mathcal{X})\)) the result is implicit in [23].
Theorem 6
Let \(H \in L_{\eta }^{1}(\mathbb{P}\otimes \mu ) \cap \mathrm{dom}\,\delta\) . Then \(\delta (H) =\delta '(H)\ \mathbb{P}\) -a.s.
Proof
Let H have representative h. The Mecke equation (7) shows the integrability \(\mathbb{E}\int \vert h(\eta -\delta _{x},x)\vert \eta (\mathrm{d}x) <\infty\) as well as
whenever \(f: \mathbf{N}_{\sigma } \rightarrow \mathbb{R}\) is measurable and bounded. Therefore we obtain from (51) that \(\mathbb{E}F\delta '(H) = \mathbb{E}F\delta (H)\) provided that F: = f(η) ∈ dom D. By Lemma 2 the space of such bounded random variables is dense in \(L_{\eta }^{2}(\mathbb{P})\), so we may conclude that the assertion holds. □
Finally in this section we discuss the Ornstein–Uhlenbeck generator L, whose domain \(\mathrm{dom}\,L\) is given by the class of all \(F \in L_{\eta }^{2}\) satisfying
In this case one defines
The (pseudo) inverse \(L^{-1}\) of L is given by
The random variable \(L^{-1}F\) is well defined for any \(F \in L_{\eta }^{2}\). Moreover, (37) implies that \(L^{-1}F \in \mathrm{dom}\,L\). The identity \(LL^{-1}F = F\), however, holds only if \(\mathbb{E}F = 0\).
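The displays defining L and \(L^{-1}\) are not reproduced above. In terms of the chaos expansion \(F = \mathbb{E}F + \sum_{n\geq 1} I_n(f_n)\) of (36), they take the following standard form (a hedged sketch consistent with the surrounding statements, not a verbatim copy of the original displays):

```latex
\operatorname{dom} L := \Big\{ F \in L^2_\eta :
  \sum_{n=1}^{\infty} n^2\, n!\, \|f_n\|_n^2 < \infty \Big\},
\qquad
LF := -\sum_{n=1}^{\infty} n\, I_n(f_n),
\qquad
L^{-1}F := -\sum_{n=1}^{\infty} \frac{1}{n}\, I_n(f_n).
```

With these definitions \(LL^{-1}F = F - \mathbb{E}F\), which explains why the identity \(LL^{-1}F = F\) requires \(\mathbb{E}F = 0\).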
The three Malliavin operators D, δ, and L are connected by a simple formula:
Proposition 3
Let \(F \in \mathrm{dom}\,L\). Then \(F \in \mathrm{dom}\,D\), \(DF \in \mathrm{dom}\,\delta\) and \(\delta (DF) = -LF\).
Proof
The relationship F ∈ dom D is a direct consequence of (37). Let H: = DF. By Theorem 3 we can apply (43) with \(h_{n}:= (n + 1)f_{n+1}\). We have
showing that H ∈ dom δ. Moreover, since \(I_{n+1}(\tilde{h}_{n}) = I_{n+1}(h_{n})\) it follows that
finishing the proof. □
The following pathwise representation shows that the Ornstein–Uhlenbeck generator can be interpreted as the generator of a free birth and death process on \(\mathbb{X}\).
Proposition 4
Let F ∈dom L with representative f and assume \(DF \in L_{\eta }^{1}(\mathbb{P}\otimes \mu )\) . Then
Proof
We use Proposition 3. Since \(DF \in L_{\eta }^{1}(\mathbb{P}\otimes \mu )\) we can apply Theorem 6 and the result follows by a straightforward calculation. □
6 Products of Wiener–Itô Integrals
In this section we study the chaos expansion of \(I_p(f)I_q(g)\), where \(f \in L_s^2(\mu^p)\) and \(g \in L_s^2(\mu^q)\) for \(p,q \in \mathbb{N}\). We define for any \(r \in \{ 0,\ldots,p \wedge q\}\) (where p ∧ q: = min{p, q}) and l ∈ [r] the contraction \(f \star _{ r}^{l}g: \mathbb{X}^{p+q-r-l} \rightarrow \mathbb{R}\) by
whenever these integrals are well defined. In particular \(f \star_0^0 g = f \otimes g\).
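The defining display of the contraction does not survive above. Since f and g are symmetric, a hedged reconstruction of the standard form (with the argument ordering chosen for readability) is:

```latex
(f \star_r^l g)(x_1,\ldots,x_{r-l},\, y_1,\ldots,y_{p-r},\, z_1,\ldots,z_{q-r})
:= \int_{\mathbb{X}^l}
   f(w_1,\ldots,w_l,\, x_1,\ldots,x_{r-l},\, y_1,\ldots,y_{p-r})\,
   g(w_1,\ldots,w_l,\, x_1,\ldots,x_{r-l},\, z_1,\ldots,z_{q-r})\,
   \mu^l\big(\mathrm{d}(w_1,\ldots,w_l)\big).
```

The two functions share r arguments, l of which are integrated out, so the contraction has \(p+q-r-l\) free arguments, matching the stated domain \(\mathbb{X}^{p+q-r-l}\).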
In the case q = 1 the next result was proved in [10]. The general case is treated in [28], though under less explicit integrability assumptions and for diffuse intensity measure. Our proof is quite different.
Proposition 5
Let \(f \in L_s^2(\mu^p)\) and \(g \in L_s^2(\mu^q)\) and assume \(f \star _{ r}^{l}g \in L^{2}(\mu ^{p+q-r-l})\) for all \(r \in \{ 0,\ldots,p \wedge q\}\) and \(l \in \{ 0,\ldots,r - 1\}\). Then
Proof
First note that the Cauchy–Schwarz inequality implies \(f \star _{ r}^{r}g \in L^{2}(\mu ^{p+q-2r})\) for all \(r \in \{ 0,\ldots,p \wedge q\}\).
We prove (67) by induction on p + q. For p ∧ q = 0 the assertion is trivial. For the induction step we assume that p ∧ q ≥ 1. If \(F,G \in L_\eta^0\), then an easy calculation shows that
holds \(\mathbb{P}\)-a.s. and for μ-a.e. \(x \in \mathbb{X}\). Using this together with Theorem 3 we obtain that
where \(f_x := f(x,\cdot)\) and \(g_x := g(x,\cdot)\). We aim at applying the induction hypothesis to each of the summands on the above right-hand side. To do so, we note that
for all \(r \in \{ 0,\ldots,(p - 1) \wedge q\}\) and \(l \in \{ 0,\ldots,r\}\) and
for all \(r \in \{ 0,\ldots,(p - 1) \wedge (q - 1)\}\) and \(l \in \{ 0,\ldots,r\}\). Therefore the pairs \((f_x, g)\), \((f, g_x)\) and \((f_x, g_x)\) satisfy for μ-a.e. \(x \in \mathbb{X}\) the assumptions of the proposition. The induction hypothesis implies that
A straightforward but tedious calculation (left to the reader) implies that the above right-hand side equals
where the summand for \(p + q - r - l = 0\) has to be interpreted as 0. It follows that
where G denotes the right-hand side of (67). On the other hand, the isometry properties (28) show that \(\mathbb{E}I_{p}(\,f)I_{q}(g) = \mathbb{E}G\). Since \(I_{p}(\,f)I_{q}(g) \in L_{\eta }^{1}(\mathbb{P})\) we can use the Poincaré inequality of Corollary 1 in Sect. 8 to conclude that
This finishes the induction and the result is proved. □
If \(\{f\neq 0\} \subset B^p\) and \(\{g\neq 0\} \subset B^q\) for some \(B \in \mathcal{X}_{0}\) (as in Lemma 4), then (67) can be established by a direct computation, starting from (30). The argument is similar to the proof of Theorem 3.1 in [16]. The required integrability follows from the Cauchy–Schwarz inequality; see [16, Remark 3.1]. In the case q ≥ 2, however, we do not see how to pass from this special case to the general one via approximation.
Equation (67) can be further generalized so as to cover the case of a finite product of Wiener–Itô integrals. We again refer the reader to [28] as well as to [16, 21].
7 Mehler’s Formula
In this section we assume that η is a proper Poisson process. We shall derive a pathwise representation of the inverse (64) of the Ornstein–Uhlenbeck generator.
To give the idea we define for \(F \in L_\eta^2\) with representation (36)
The family \(\{T_s : s \geq 0\}\) is the Ornstein–Uhlenbeck semigroup, see e.g. [24] and also [19] for the Gaussian case. If \(F \in \mathrm{dom}\,L\) then it is easy to see that
in \(L^{2}(\mathbb{P})\), see [19, Proposition 1.4.2] for the Gaussian case. Hence L can indeed be interpreted as the generator of the semigroup. But in the theory of Markov processes it is well known (see, e.g., the resolvent identities in [12, Theorem 19.4]) that
at least under certain assumptions. What we therefore need is a pathwise representation of the operators \(T_s\). Our guiding star is the birth and death representation in Proposition 4.
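The displays defining \(T_s\), its generator property, and the resolvent-type identity are not reproduced above. A hedged reconstruction consistent with the representation (36) and with the spectral description of L is:

```latex
T_sF := \mathbb{E}F + \sum_{n=1}^{\infty} e^{-ns}\, I_n(f_n), \quad s \ge 0,
\qquad
\lim_{s \downarrow 0} \frac{T_sF - F}{s} = LF \quad \text{in } L^2(\mathbb{P}),
\qquad
L^{-1}F = -\int_0^{\infty} T_sF \,\mathrm{d}s \quad \text{if } \mathbb{E}F = 0.
```

The last identity can be checked chaos order by chaos order, using \(\int_0^\infty e^{-ns}\,\mathrm{d}s = 1/n\).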
For \(F \in L_\eta^1\) with representative f we define,
where \(\eta^{(s)}\) is an s-thinning of η and where \(\varPi_{\mu'}\) denotes the distribution of a Poisson process with intensity measure μ′. The thinning \(\eta^{(s)}\) can be defined by removing the points in (6) independently of each other with probability 1 − s; see [12, p. 226]. Since
this definition does almost surely not depend on the representative of F. Equation (72) implies in particular that
while Jensen’s inequality implies for any p ≥ 1 the contractivity property
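The thinning mechanism above is easy to check numerically. The following sketch (illustrative, not from the text; the helper names are ours) simulates a Poisson process on [0, 1] with intensity lam and verifies empirically that its s-thinning is again Poisson, with mean and variance both close to s·lam:

```python
import random

def poisson_count(lam, rng):
    """Number of points of a Poisson process on [0, 1] with intensity lam,
    sampled via i.i.d. exponential gaps."""
    n, t = 0, 0.0
    while True:
        t += rng.expovariate(lam)
        if t > 1.0:
            return n
        n += 1

def thin(n_points, s, rng):
    """s-thinning: keep each of n_points independently with probability s."""
    return sum(1 for _ in range(n_points) if rng.random() < s)

rng = random.Random(42)
lam, s, trials = 10.0, 0.3, 20000
counts = [thin(poisson_count(lam, rng), s, rng) for _ in range(trials)]
mean = sum(counts) / trials
var = sum((c - mean) ** 2 for c in counts) / trials
# For a Poisson(s * lam) count, mean and variance should both be close to 3.
print(mean, var)
```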
We prepare the main result of this section with the following crucial lemma from [17].
Lemma 6
Let \(F \in L_\eta^2\). Then, for all \(n \in \mathbb{N}\) and s ∈ [0,1],
In particular
Proof
To begin with, we assume that the representative of F is given by \(f(\chi ) = e^{-\chi (v)}\) for some \(v: \mathbb{X} \rightarrow [0,\infty )\) such that μ({v > 0}) < ∞. By the definition of an s-thinning,
and it follows from Lemma 12.2 in [12] that
Hence, the definition (71) of the operator \(P_s\) implies that the following function \(f_s\) is a representative of \(P_sF\):
Therefore we obtain for any \(x \in \mathbb{X}\) that
This identity can be iterated to yield for all \(n \in \mathbb{N}\) and all \((x_{1},\ldots,x_{n}) \in \mathbb{X}^{n}\) that
On the other hand we have \(\mathbb{P}\)-a.s. that
so that (75) holds for Poisson functionals of the given form.
By linearity, (75) extends to all F with a representative in the set G of all linear combinations of functions f as above. There are \(f_k \in G\), \(k \in \mathbb{N}\), satisfying \(F_{k}:= f_{k}(\eta ) \rightarrow F = f(\eta )\) in \(L^{2}(\mathbb{P})\) as k → ∞, where f is a representative of F (see [13, Lemma 2.1]). Therefore we obtain from the contractivity property (74) that
as k → ∞. Taking \(B \in \mathcal{X}\) with μ(B) < ∞, it therefore follows from [13, Lemma 2.3] that
as k → ∞. On the other hand we obtain from the Fock space representation (17) that \(\mathbb{E}\vert D_{x_{1},\ldots,x_{n}}^{n}F\vert <\infty\) for \(\mu^n\)-a.e. \((x_{1},\ldots,x_{n}) \in \mathbb{X}^{n}\), so that linearity of \(P_s\) and (74) imply
Again, this latter integral tends to 0 as k → ∞. Since (75) holds for any \(F_k\) we obtain that (75) holds \(\mathbb{P} \otimes (\mu _{B})^{n}\)-a.e., and hence also \(\mathbb{P} \otimes \mu ^{n}\)-a.e.
Taking the expectation in (75) and using (73) proves (76). □
The following theorem from [17] achieves the desired pathwise representation of the inverse Ornstein–Uhlenbeck operator.
Theorem 7
Let \(F \in L_\eta^2\). If \(\mathbb{E}F = 0\) then we have \(\mathbb{P}\)-a.s. that
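The display asserted here is missing from this rendering. The standard statement of this result from [17], a hedged reconstruction that the chaos computation in the proof below confirms term by term (via \(\int_0^1 s^{n-1}\,\mathrm{d}s = 1/n\)), reads:

```latex
L^{-1}F = -\int_0^1 s^{-1}\, P_sF \,\mathrm{d}s \qquad \mathbb{P}\text{-a.s.}
```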
Proof
Assume that F is given as in (36). Applying (36) to \(P_sF\) and using (76) yields
Furthermore,
Assume now that \(\mathbb{E}F = 0\). In view of (64) we need to show that the above right-hand side converges in \(L^{2}(\mathbb{P})\), as m → ∞, to the right-hand side of (78). Taking into account (79) we hence have to show that
converges in \(L^{2}(\mathbb{P})\) to zero. Using that \(\mathbb{E}I_{n}(\,f_{n})I_{m}(\,f_{m}) =\mathbb{1}\{m = n\}n!\|\,f_{n}\|_{n}^{2}\) we obtain
which tends to zero as m → ∞. □
Equation (79) implies Mehler’s formula
which was proved in [24] for the special case of a finite Poisson process with a diffuse intensity measure. The formula was originally established in a Gaussian setting, see, e.g., [19]. The family \(\{P_{e^{-s}}: s \geq 0\}\) of operators describes a special example of Glauber dynamics. Using (80) in (78) gives the identity (69).
8 Covariance Identities
The fundamental Fock space isometry (17) can be rewritten in several other disguises. We give here two examples, starting with a covariance identity from [5] involving the operators \(P_s\).
Theorem 8
Assume that η is a proper Poisson process. Then, for any \(F,G \in \mathrm{dom}\,D\),
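The identity (81) is not displayed in this rendering. A hedged reconstruction consistent with the covariance representation of Houdré and Privault [5], which can be verified one chaos order at a time (using \(\int_0^1 n\,s^{n-1}\,\mathrm{d}s = 1\)), is:

```latex
\mathrm{Cov}(F,G)
= \mathbb{E}\int_0^1\!\!\int (D_xF)\,\big(P_s D_xG\big)\,\mu(\mathrm{d}x)\,\mathrm{d}s.
```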
Proof
The Cauchy–Schwarz inequality and the contractivity property (74) imply that
which is finite due to Theorem 3. Therefore we can use Fubini’s theorem and (75) to obtain that the right-hand side of (81) equals
For s ∈ [0, 1] and μ-a.e. \(x \in \mathbb{X}\) we can apply the Fock space isometry Theorem 1 to D x F and D x P s G. Taking into account Lemma 6, (73) and applying Fubini again (to be justified below) yields that the second summand in (82) equals
Inserting this into (82) and applying Theorem 1 yield the asserted formula (81). The use of Fubini’s theorem is justified by Theorem 1 for f = g and the Cauchy–Schwarz inequality. □
The integrability assumptions of Theorem 8 can be reduced to mere square integrability when using a symmetric formulation. Under the assumptions of Theorem 8 the following result was proved in [4, 5]. An even more general version is [13, Theorem 1.5].
Theorem 9
Assume that η is a proper Poisson process. Then, for any \(F \in L_\eta^2\),
and for any \(F,G \in L_\eta^2\),
Proof
It is well known (and not hard to prove) that \(\eta^{(t)}\) and \(\eta - \eta^{(t)}\) are independent Poisson processes with intensity measures tμ and (1 − t)μ, respectively. Therefore we have for \(F \in L_\eta^2\) with representative f that
holds almost surely. It is easy to see that the right-hand side of (85) is a measurable function of (the suppressed) ω ∈ Ω, \(x \in \mathbb{X}\), and t ∈ [0, 1].
Now we take \(F,G \in L_\eta^2\) with representatives f and g. Let us first assume that \(DF,DG \in L^{2}(\mathbb{P}\otimes \mu )\). Then (83) follows from the (conditional) Jensen inequality, while (85) implies for all t ∈ [0, 1] and \(x \in \mathbb{X}\) that
Therefore (84) is just another version of (81).
In this second step of the proof we consider general \(F,G \in L_\eta^2\). Let \(F_k \in L_\eta^2\), \(k \in \mathbb{N}\), be a sequence such that \(DF_{k} \in L^{2}(\mathbb{P}\otimes \mu )\) and \(\mathbb{E}(F - F_{k})^{2} \rightarrow 0\) as k → ∞. We have just proved that
where \(\mu^{\ast}\) is the product of μ and Lebesgue measure on [0, 1]. Since \(L^{2}(\mathbb{P} \otimes \mu ^{{\ast}})\) is complete, there is an \(h \in L^{2}(\mathbb{P} \otimes \mu ^{{\ast}})\) satisfying
On the other hand it follows from Lemma 3 that for any \(C \in \mathcal{X}_{0}\)
as k → ∞. Comparing this with (86) shows that \(h(\omega,x,t) = \mathbb{E}[D_{x}F\mid \eta ^{(t)}](\omega )\) for \(\mathbb{P} \otimes \mu ^{{\ast}}\)-a.e. (ω, x, t) ∈ Ω× C × [0, 1] and hence also for \(\mathbb{P} \otimes \mu ^{{\ast}}\)-a.e. \((\omega,x,t) \in \varOmega \times \mathbb{X} \times [0,1]\). Therefore the fact that \(h \in L^{2}(\mathbb{P} \otimes \mu ^{{\ast}})\) implies (84). Now let \(G_k\), \(k \in \mathbb{N}\), be a sequence approximating G. Then Eq. (84) holds with \((F_k, G_k)\) instead of (F, G). But the second summand is just a scalar product in \(L^{2}(\mathbb{P} \otimes \mu ^{{\ast}})\). Taking the limit as k → ∞ and using the \(L^2\)-convergence proved above yields the general result. □
A quick consequence of the previous theorem is the Poincaré inequality for Poisson processes. The following general version is taken from [30]. A more direct approach can be based on the Fock space representation in Theorem 1, see [13].
Theorem 10
For any \(F \in L_\eta^2\),
Proof
Assume that η is proper. Take F = G in (84) and apply Jensen's inequality. □
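The Poincaré inequality can be probed numerically in the simplest setting. The sketch below (illustrative, not from the text; the functional f and all helper names are our choices) takes F = f(N) for the point count N = η(B) with μ(B) = lam; then \(D_xF = f(N+1) - f(N)\) for x ∈ B, so the Poincaré bound becomes lam · E[(f(N+1) − f(N))²]:

```python
import math
import random

def poisson_sample(lam, rng):
    """Knuth-style Poisson sampler."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def f(n):
    # An arbitrary functional of the point count (illustrative choice).
    return math.sqrt(n)

rng = random.Random(0)
lam, trials = 5.0, 100000
samples = [poisson_sample(lam, rng) for _ in range(trials)]

mean_F = sum(f(n) for n in samples) / trials
var_F = sum((f(n) - mean_F) ** 2 for n in samples) / trials
# Poincaré bound: mu(B) * E[(f(N + 1) - f(N))^2] with mu(B) = lam.
bound = lam * sum((f(n + 1) - f(n)) ** 2 for n in samples) / trials
print(var_F, bound)  # the variance should not exceed the bound
```

For this choice of f the bound is not attained; equality holds for linear functionals of the count, for which \(f(N+1)-f(N)\) is constant.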
The following extension of (87) (taken from [17]) has been used in the proof of Proposition 5.
Corollary 1
For \(F \in L_\eta^1\),
Proof
For s > 0 we define
By definition of \(F_s\) we have \(F_s \in L_\eta^2\) and \(\vert D_xF_s\vert \leq \vert D_xF\vert\) for μ-a.e. \(x \in \mathbb{X}\). Together with the Poincaré inequality (87) we obtain that
By the monotone convergence theorem and the dominated convergence theorem, respectively, we have that \(\mathbb{E}F_{s}^{2} \rightarrow \mathbb{E}F^{2}\) and \(\mathbb{E}F_{s} \rightarrow \mathbb{E}F\) as s → ∞. Hence letting s → ∞ in the previous inequality yields the assertion. □
As a second application of Theorem 9 we obtain the Harris–FKG inequality for Poisson processes, derived in [9]. Given \(B \in \mathcal{X}\), a function \(f \in F(\mathbf{N}_{\sigma })\) is increasing on B if \(f(\chi +\delta _{x}) \geq f(\chi )\) for all \(\chi \in \mathbf{N}_{\sigma }\) and all x ∈ B. It is decreasing on B if (−f) is increasing on B.
Theorem 11
Suppose \(B \in \mathcal{X}\) . Let \(f,g \in L^{2}(\mathbb{P}_{\eta })\) be increasing on B and decreasing on \(\mathbb{X}\setminus B\) . Then
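The correlation inequality (89) is not displayed in this rendering; by the association property it names, it asserts the positive correlation of f(η) and g(η) (a hedged reconstruction):

```latex
\mathbb{E}\big[f(\eta)\,g(\eta)\big] \;\ge\; \mathbb{E}\big[f(\eta)\big]\,\mathbb{E}\big[g(\eta)\big].
```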
It was noticed in [30] that the correlation inequality (89) (also referred to as association) is a direct consequence of a covariance identity.
References
Dellacherie, C., Meyer, P.A.: Probabilities and Potential. North-Holland Mathematics Studies, vol. 29. North-Holland Publishing Company, Amsterdam/New York (1978)
Dudley, R.M.: Real Analysis and Probability. Cambridge University Press, Cambridge (2002)
Hitsuda, M.: Formula for Brownian partial derivatives. In: Proceedings of the 2nd Japan-USSR Symposium on Probability Theory, pp. 111–114 (1972)
Houdré, C., Perez-Abreu, V.: Covariance identities and inequalities for functionals on Wiener space and Poisson space. Ann. Probab. 23, 400–419 (1995)
Houdré, C., Privault, N.: Concentration and deviation inequalities in infinite dimensions via covariance representations. Bernoulli 8, 697–720 (2002)
Itô, K.: Multiple Wiener integral. J. Math. Soc. Jpn. 3, 157–169 (1951)
Itô, K.: Spectral type of the shift transformation of differential processes with stationary increments. Trans. Am. Math. Soc. 81, 253–263 (1956)
Ito, Y.: Generalized Poisson functionals. Probab. Theory Relat. Fields 77, 1–28 (1988)
Janson, S.: Bounds on the distributions of extremal values of a scanning process. Stoch. Process. Appl. 18, 313–328 (1984)
Kabanov, Y.M.: On extended stochastic integrals. Theory Probab. Appl. 20, 710–722 (1975)
Kabanov, Y.M., Skorokhod, A.V.: Extended stochastic integrals. In: Proceedings of the School-Seminar on the Theory of Random Processes, Druskininkai, 25–30 November 1974. Part I. Vilnius (Russian) (1975)
Kallenberg, O.: Foundations of Modern Probability, 2nd edn. Springer, New York (2002)
Last, G., Penrose, M.D.: Fock space representation, chaos expansion and covariance inequalities for general Poisson processes. Probab. Theory Relat. Fields 150, 663–690 (2011)
Last, G., Penrose, M.D.: Martingale representation for Poisson processes with applications to minimal variance hedging. Stoch. Process. Appl. 121, 1588–1606 (2011)
Last, G., Penrose, M.D.: Lectures on the Poisson Process. Cambridge University Press www.math.kit.edu/stoch/~cost/seite/lehrbuch_poissonlde (2016)
Last, G., Penrose, M.D., Schulte, M., Thäle, C.: Moments and central limit theorems for some multivariate Poisson functionals. Adv. Appl. Probab. 46, 348–364 (2014)
Last, G., Peccati, G., Schulte, M.: Normal approximation on Poisson spaces: Mehler’s formula, second order Poincaré inequalities and stabilization. Probab. Theory Relat. Fields (2014, to appear)
Mecke, J.: Stationäre zufällige Maße auf lokalkompakten Abelschen Gruppen. Z. Wahrscheinlichkeitstheor. Verwandte Geb. 9, 36–58 (1967)
Nualart, D.: The Malliavin Calculus and Related Topics. Springer, Berlin (2006)
Nualart, D., Vives, J.: Anticipative calculus for the Poisson process based on the Fock space. Séminaire Probabilités XXIV. Lecture Notes in Mathematics, vol. 1426, pp. 154–165. Springer, Berlin (1990)
Peccati, G., Taqqu, M.S.: Wiener Chaos: Moments, Cumulants and Diagrams. Bocconi & Springer Series, vol. 1, Springer, Milan (2011)
Peccati, G., Thäle, C.: Gamma limits and U-statistics on the Poisson space. ALEA Lat. Am. J. Probab. Math. Stat. 10, 525–560 (2013)
Picard, J.: On the existence of smooth densities for jump processes. Probab. Theory Relat. Fields 105, 481–511 (1996)
Privault, N.: Stochastic Analysis in Discrete and Continuous Settings with Normal Martingales. Springer, Berlin (2009)
Privault, N.: Combinatorics of Poisson Stochastic Integrals with Random Integrands. In: Peccati, G., Reitzner, M. (eds.) Stochastic Analysis for Poisson Point Processes: Malliavin Calculus, Wiener-Ito Chaos Expansions and Stochastic Geometry. Bocconi & Springer Series, vol. 7, pp. 37–80. Springer, Cham (2016)
Skorohod, A.V.: On a generalization of a stochastic integral. Theory Probab. Appl. 20, 219–233 (1975)
Stroock, D.W.: Homogeneous chaos revisited. Séminaire de Probabilités XXI. Lecture Notes in Mathematics, vol. 1247, pp. 1–8. Springer, New York (1987)
Surgailis, D.: On multiple Poisson stochastic integrals and associated Markov semigroups. Probab. Math. Stat. 3, 217–239 (1984)
Wiener, N.: The homogeneous chaos. Am. J. Math. 60, 897–936 (1938).
Wu, L.: A new modified logarithmic Sobolev inequality for Poisson point processes and several applications. Probab. Theory Relat. Fields 118, 427–438 (2000)
Acknowledgements
The proof of Proposition 5 is joint work with Matthias Schulte.
Appendix
In this appendix we prove Proposition 1. If χ ∈ N is given by
for some \(k \in \mathbb{N}_{0} \cup \{\infty \}\) and some points \(x_{1},x_{2},\ldots \in \mathbb{X}\) (which are not assumed to be distinct) we define, for \(m \in \mathbb{N}\), the factorial measure \(\chi ^{(m)} \in \mathbf{N}(\mathbb{X}^{m})\) by
These measures satisfy the recursion
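The displays (91) and (92) do not survive in this rendering. Hedged reconstructions of the standard forms, consistent with the kernel used later in the proof of Proposition 6, are:

```latex
% (91): factorial measure of the configuration (90) with k points
\chi^{(m)} :=
\sum_{\substack{i_1,\ldots,i_m \le k \\ i_1,\ldots,i_m \text{ pairwise distinct}}}
\delta_{(x_{i_1},\ldots,x_{i_m})},

% (92): the recursion
\chi^{(m+1)} = \int \Big[ \int \mathbb{1}\{(x_1,\ldots,x_m,y) \in \cdot\,\}\,
  \chi(\mathrm{d}y)
  - \sum_{j=1}^m \mathbb{1}\{(x_1,\ldots,x_m,x_j) \in \cdot\,\} \Big]\,
  \chi^{(m)}\big(\mathrm{d}(x_1,\ldots,x_m)\big).
```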
Let \(\mathbf{N}_{<\infty}\) denote the set of all \(\chi \in \mathbf{N}\) with \(\chi (\mathbb{X}) <\infty\). For \(\chi \in \mathbf{N}_{<\infty}\) the recursion (92) is solved by
where the integrations are with respect to finite signed measures. Note that \(\chi^{(m)}\) is a signed measure such that \(\chi ^{(m)}(C) \in \mathbb{Z}\) for all \(C \in \mathcal{X}^{m}\). At this stage it might not be obvious that \(\chi^{(m)}(C) \geq 0\). If, however, χ is given by (90) with \(k \in \mathbb{N}\), then (93) coincides with (91). Hence \(\chi^{(m)}\) is a measure in this case. For any \(\chi \in \mathbf{N}_{<\infty}\) we denote by \(\chi^{(m)}\) the signed measure (93). This is in accordance with the recursion (92). The next lemma shows that \(\chi^{(m)}\) is a measure.
Lemma 7
Let \(\chi \in \mathbf{N}_{<\infty}\) and \(m \in \mathbb{N}\). Then \(\chi^{(m)}(C) \geq 0\) for all \(C \in \mathcal{X}^{m}\).
Proof
Let \(B_{1},\ldots,B_{m} \in \mathcal{X}\) and let \(\varPi_m\) denote the set of partitions of [m]. The definition (93) implies that
where the coefficients \(c_{\pi } \in \mathbb{R}\) do not depend on \(B_1,\ldots,B_m\) and χ. For instance
It follows that the left-hand side of (94) is determined by the values of χ on the algebra generated by \(B_1,\ldots,B_m\). The atoms of this algebra are all nonempty sets of the form
where \(i_1,\ldots,i_m \in \{0,1\}\) and, for \(B \subset \mathbb{X}\), \(B^1 := B\) and \(B^{0}:= \mathbb{X}\setminus B\). Let \(\mathcal{A}\) denote the set of all these atoms. For \(B \in \mathcal{A}\) we take x ∈ B and let \(\chi_B := \chi(B)\delta_x\). Then the measure
is a finite sum of Dirac measures and (94) implies that
Therefore it follows from (91) (applied to χ′) that \(\chi^{(m)}(B_1 \times \cdots \times B_m) \geq 0\).
Let \(\mathcal{A}_{m}\) be the system of all finite and disjoint unions of sets \(B_1 \times \cdots \times B_m\). This is an algebra; see Proposition 3.2.3 in [2]. From the first step of the proof and additivity of \(\chi^{(m)}\) we obtain that \(\chi^{(m)}(A) \geq 0\) holds for all \(A \in \mathcal{A}_{m}\). The system \(\mathcal{M}\) of all sets \(A \in \mathcal{X}^{m}\) with the property \(\chi^{(m)}(A) \geq 0\) is monotone. Hence a monotone class theorem (see e.g. Theorem 4.4.2 in [2]) implies that \(\mathcal{M} = \mathcal{X}^{m}\). Therefore \(\chi^{(m)}\) is nonnegative. □
Lemma 8
Let \(\chi,\nu \in \mathbf{N}_{<\infty}\) and assume that χ ≤ ν. Let \(m \in \mathbb{N}\). Then \(\chi^{(m)} \leq \nu^{(m)}\).
Proof
By a monotone class argument it suffices to show that
for all \(B_{1},\ldots,B_{m} \in \mathcal{X}\). Fixing the latter sets we define the system \(\mathcal{A}\) of atoms of the generated algebra as in the proof of Lemma 7. For \(B \in \mathcal{A}\) we choose x ∈ B and define \(\chi_B := \chi(B)\delta_x\) and \(\nu_B := \nu(B)\delta_x\). Then
are finite sums of Dirac measures satisfying χ′ ≤ ν′. By (94) we have
A similar identity holds for \(\nu^{(m)}\) and \((\nu')^{(m)}\). Therefore (91) (applied to χ′ and ν′) implies the asserted inequality (95). □
We can now prove a slightly more detailed version of Proposition 1.
Proposition 6
For any \(\chi \in \mathbf{N}_{\sigma }\) there is a unique sequence \(\chi^{(m)}\), \(m \in \mathbb{N}\), of symmetric σ-finite measures on \((\mathbb{X}^{m},\mathcal{X}^{m})\) satisfying \(\chi^{(1)} := \chi\) and the recursion (92). Moreover, the mapping \(\chi \mapsto \chi^{(m)}\) is measurable. Finally, \(\chi^{(m)}(B^m) \leq \chi(B)^m\) for all \(m \in \mathbb{N}\) and \(B \in \mathcal{X}\).
Proof
For \(\chi \in \mathbf{N}_{<\infty}\) the functionals defined by (93) satisfy the recursion (92) and are measures by Lemma 7.
For a general \(\chi \in \mathbf{N}_{\sigma }\) we proceed by induction. For m = 1 we have \(\chi^{(1)} = \chi\) and there is nothing to prove. Assume now that m ≥ 1 and that the measures \(\chi^{(1)},\ldots,\chi^{(m)}\) satisfy the first m − 1 recursions and have the properties stated in the proposition. Then (92) enforces the definition
for \(C \in \mathcal{X}^{m+1}\), where
The function \(K: \mathbb{X}^{m} \times \mathbf{N}_{\sigma } \times \mathcal{X}^{m+1} \rightarrow (-\infty,\infty ]\) is a signed kernel in the following sense. The mapping \((x_1,\ldots,x_m,\chi) \mapsto K(x_1,\ldots,x_m,\chi,C)\) is measurable for all \(C \in \mathcal{X}^{m+1}\), while \(K(x_1,\ldots,x_m,\chi,\cdot)\) is σ-additive for all \((x_{1},\ldots,x_{m},\chi ) \in \mathbb{X}^{m} \times \mathbf{N}_{\sigma }\). Hence it follows from (96) and the measurability properties of \(\chi^{(m)}\) (which are part of the induction hypothesis) that \(\chi^{(m+1)}(C)\) is a measurable function of χ.
Next we show that
holds for all χ ∈ N σ and all \(C \in \mathcal{X}^{m+1}\). Since χ (m) is a measure (by induction hypothesis) (96), (97) and monotone convergence then imply that χ (m+1) is a measure. Fix χ ∈ N σ and choose a sequence (χ n ) of finite measures in N σ such that χ n ↑ χ. Lemma 7 (applied to χ n and m + 1) implies that
Indeed, we have for all \(B \in \mathcal{X}^{m}\) that
Since K(x 1, …, x m , ⋅ , C) is increasing, this implies
By induction hypothesis we have that \((\chi_n)^{(m)} \uparrow \chi^{(m)}\), so that (97) follows.
Finally we note that \(\chi^{(m)}(B^m) \leq \chi(B)^m\) follows by induction. In particular, \(\chi^{(m)}\) is σ-finite. To prove the symmetry of \(\chi^{(m)}\) it is then sufficient to show that the restriction of \(\chi^{(m)}\) to \(B^m\) is symmetric, for any \(B \in \mathcal{X}\) with χ(B) < ∞. This fact follows from (94). □
For any χ ∈ N, \(B \in \mathcal{X}\) with χ(B) < ∞, and \(m \in \mathbb{N}\) it follows by induction that
Since χ and \(\chi^{(m)}\) are σ-finite, this extends to any \(B \in \mathcal{X}\). In particular \(\chi^{(m)}\) is the zero measure whenever \(\chi (\mathbb{X}) <m\).
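For a finite configuration the factorial measure is concrete: \(\chi^{(m)}\) puts unit mass on every m-tuple of points of χ with pairwise distinct indices, so its total mass is \(k(k-1)\cdots(k-m+1)\) for a configuration of k points. The following sketch (illustrative, not from the text; the helper name is ours) enumerates these tuples directly:

```python
from itertools import permutations

def factorial_measure_atoms(points, m):
    """All m-tuples of points of the configuration with pairwise distinct
    indices; chi^(m) is the sum of Dirac measures at these tuples."""
    return [tuple(points[i] for i in idx)
            for idx in permutations(range(len(points)), m)]

points = [0.1, 0.5, 0.5, 0.9]      # points need not be distinct
atoms2 = factorial_measure_atoms(points, 2)
print(len(atoms2))                 # chi^(2)(X^2) = 4 * 3 = 12
print(len(factorial_measure_atoms(points, 5)))  # 0: zero measure since k < m
```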
© 2016 Springer International Publishing Switzerland
Last, G. (2016). Stochastic Analysis for Poisson Processes. In: Peccati, G., Reitzner, M. (eds) Stochastic Analysis for Poisson Point Processes. Bocconi & Springer Series, vol 7. Springer, Cham. https://doi.org/10.1007/978-3-319-05233-5_1
Print ISBN: 978-3-319-05232-8
Online ISBN: 978-3-319-05233-5