1 Basic Properties of a Poisson Process

Let \((\mathbb{X},\mathcal{X})\) be a measurable space. The idea of a point process with state space \(\mathbb{X}\) is that of a random countable subset of \(\mathbb{X}\), defined over a fixed probability space \((\varOmega,\mathcal{A}, \mathbb{P})\). It is both convenient and mathematically fruitful to define a point process as a random element η in the space \(\mathbf{N}_{\sigma }(\mathbb{X}) \equiv \mathbf{N}_{\sigma }\) of all σ-finite measures χ on \(\mathbb{X}\) such that \(\chi (B) \in \mathbb{Z}_{+} \cup \{\infty \}\) for all \(B \in \mathcal{X}\). To do so, we equip N σ with the smallest σ-field \(\mathcal{N}_{\sigma }(\mathbb{X}) \equiv \mathcal{N}_{\sigma }\) of subsets of N σ such that \(\chi \mapsto \chi (B)\) is measurable for all \(B \in \mathcal{X}\). Then η: Ω → N σ is a point process if and only if

$$\displaystyle{\{\eta (B) = k\} \equiv \{\omega \in \varOmega:\eta (\omega,B) = k\} \in \mathcal{A}}$$

for all \(B \in \mathcal{X}\) and all \(k \in \mathbb{Z}_{+}\). Here we write η(ω, B) instead of the more clumsy η(ω)(B). We wish to stress that the results of this chapter do not require special (topological) assumptions on the state space.

The Dirac measure δ x at the point \(x \in \mathbb{X}\) is the measure on \(\mathbb{X}\) defined by \(\delta _{x}(B) =\mathbb{1}_{B}(x)\), where \(\mathbb{1}_{B}\) is the indicator function of \(B \in \mathcal{X}\). If X is a random element of \(\mathbb{X}\), then δ X is a point process on \(\mathbb{X}\). Suppose, more generally, that \(X_{1},\ldots,X_{m}\) are independent random elements in \(\mathbb{X}\) with distribution \(\mathbb{Q}\). Then

$$\displaystyle\begin{array}{rcl} \eta:=\delta _{X_{1}} +\ldots +\delta _{X_{m}}& &{}\end{array}$$
(1)

is a point process on \(\mathbb{X}\). Because

$$\displaystyle\begin{array}{rcl} \mathbb{P}(\eta (B) = k) = \binom{m}{k}\mathbb{Q}(B)^{k}(1 - \mathbb{Q}(B))^{m-k},\quad k = 0,\ldots,m,& & {}\\ \end{array}$$

η is referred to as a binomial process with sample size m and sampling distribution \(\mathbb{Q}\). Taking an infinite sequence \(X_{1},X_{2},\ldots\) of independent random elements with distribution \(\mathbb{Q}\) and replacing in (1) the deterministic sample size m by an independent \(\mathbb{Z}_{+}\)-valued random variable κ (and interpreting an empty sum as the null measure) yields a mixed binomial process. Of particular interest is the case where κ has a Poisson distribution with parameter λ ≥ 0; see also (5) below. It is then easy to check that

$$\displaystyle\begin{array}{rcl} \mathbb{E}\exp \bigg[-\int u(x)\eta (\mathrm{d}x)\bigg] =\exp \bigg [-\int (1 - e^{-u(x)})\mu (\mathrm{d}x)\bigg],& &{}\end{array}$$
(2)

for any measurable function \(u: \mathbb{X} \rightarrow [0,\infty )\), where \(\mu:=\lambda \mathbb{Q}\). It is convenient to write this as

$$\displaystyle\begin{array}{rcl} \mathbb{E}\exp [-\eta (u)] =\exp \big [-\mu (1 - e^{-u})\big],& &{}\end{array}$$
(3)

where ν(u) denotes the integral of a measurable function u with respect to a measure ν. Clearly,

$$\displaystyle\begin{array}{rcl} \mu (B) = \mathbb{E}\eta (B),\quad B \in \mathcal{X},& &{}\end{array}$$
(4)

so that μ is the intensity measure of η. The identity (3) or elementary probabilistic arguments show that η has independent increments , that is, the random variables \(\eta (B_{1}),\ldots,\eta (B_{m})\) are stochastically independent whenever \(B_{1},\ldots,B_{m} \in \mathcal{X}\) are pairwise disjoint. Moreover, η(B) has a Poisson distribution with parameter μ(B), that is

$$\displaystyle\begin{array}{rcl} \mathbb{P}(\eta (B) = k) = \frac{\mu (B)^{k}} {k!} \exp [-\mu (B)],\quad k \in \mathbb{Z}_{+}.& &{}\end{array}$$
(5)

Let μ be a σ-finite measure on \(\mathbb{X}\). A Poisson process with intensity measure μ is a point process η on \(\mathbb{X}\) with independent increments such that (5) holds, where an expression of the form \(\infty ^{k}e^{-\infty }\) is interpreted as 0. It is easy to see that these two requirements determine the distribution \(\mathbb{P}_{\eta }:= \mathbb{P}(\eta \in \cdot )\) of a Poisson process η. We have seen above that a Poisson process exists for a finite measure μ. In the general case, it can be constructed as a countable sum of independent Poisson processes, see [12, 15, 18] for more details. Equation (3) remains valid. Another consequence of this construction is that η has the same distribution as

$$\displaystyle\begin{array}{rcl} \eta =\sum _{ n=1}^{\eta (\mathbb{X})}\delta _{ X_{n}\,},& &{}\end{array}$$
(6)

where \(X_{1},X_{2},\ldots\) are random elements in \(\mathbb{X}\). A point process that can be (almost surely) represented in this form will be called proper. Any locally finite point process on a Borel subset of a complete separable metric space is proper. However, there are examples of Poisson processes which are not proper.
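
The representation (6) translates directly into simulation. The following minimal sketch (our own illustration, not part of the source; it assumes NumPy and the toy choices \(\mathbb{X} = [0,1]\), \(\mathbb{Q}\) uniform and μ = λQ) generates η via the mixed binomial construction with Poisson sample size and checks the Poisson law (5) for η(B):

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 3.0                    # intensity measure mu = lam * Q, Q = uniform on X = [0, 1]
a, b = 0.2, 0.7              # test set B = [a, b), so mu(B) = lam * (b - a) = 1.5
n_sim = 100_000

counts = np.empty(n_sim, dtype=np.int64)
for i in range(n_sim):
    kappa = rng.poisson(lam)             # Poisson number of points, as in (6)
    points = rng.uniform(size=kappa)     # X_1, ..., X_kappa iid with distribution Q
    counts[i] = np.count_nonzero((points >= a) & (points < b))

# by (5), eta(B) is Poisson with parameter mu(B): mean and variance should agree
print(counts.mean(), counts.var(), lam * (b - a))
```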

Let η be a Poisson process with intensity measure μ. A classical and extremely useful formula by Mecke [18] says that

$$\displaystyle\begin{array}{rcl} \mathbb{E}\int h(\eta,x)\eta (\mathrm{d}x) = \mathbb{E}\int h(\eta +\delta _{x},x)\mu (\mathrm{d}x)& &{}\end{array}$$
(7)

for all measurable \(h: \mathbf{N}_{\sigma } \times \mathbb{X} \rightarrow [0,\infty ]\). One can use the mixed binomial representation to prove this result for finite Poisson processes. An equivalent formulation for a proper Poisson process is

$$\displaystyle\begin{array}{rcl} \mathbb{E}\int h(\eta -\delta _{x},x)\eta (\mathrm{d}x) = \mathbb{E}\int h(\eta,x)\mu (\mathrm{d}x)& &{}\end{array}$$
(8)

for all measurable \(h: \mathbf{N}_{\sigma } \times \mathbb{X} \rightarrow [0,\infty ]\). Although \(\eta -\delta _{x}\) is in general a signed measure, we can use (6) to see that

$$\displaystyle\begin{array}{rcl} \int h(\eta -\delta _{x},x)\eta (\mathrm{d}x) =\sum _{i}h\bigg(\sum _{j\neq i}\delta _{X_{j}},X_{i}\bigg)& & {}\\ \end{array}$$

is almost surely well defined. Both (7) and (8) characterize the distribution of a Poisson process with given intensity measure μ.
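
As a concrete sanity check of (7), the following sketch (our illustration; λ and the integrand are arbitrary choices) takes μ = λ·Lebesgue on [0, 1] and h(χ, x) := χ(\(\mathbb{X}\)). Then \(\int h(\eta,x)\eta (\mathrm{d}x) =\eta (\mathbb{X})^{2}\) while \(\int h(\eta +\delta _{x},x)\mu (\mathrm{d}x) = (\eta (\mathbb{X}) + 1)\lambda\), and both sides of (7) equal λ(λ + 1):

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 2.0                                        # mu = lam * Lebesgue on X = [0, 1]
N = rng.poisson(lam, size=500_000).astype(float) # N = eta(X); point locations drop out here

lhs = (N * N).mean()           # E int h(eta, x) eta(dx)          for h(chi, x) = chi(X)
rhs = (lam * (N + 1)).mean()   # E int h(eta + delta_x, x) mu(dx)
print(lhs, rhs, lam * (lam + 1))                 # all three close to 6.0
```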

Equation (7) admits a useful generalization involving multiple integration. To formulate this version we consider, for \(m \in \mathbb{N}\), the m-th power \((\mathbb{X}^{m},\mathcal{X}^{m})\) of \((\mathbb{X},\mathcal{X})\). Let η be a proper point process given by (6). We define another point process η (m) on \(\mathbb{X}^{m}\) by

$$\displaystyle\begin{array}{rcl} \eta ^{(m)}(C) =\sum _{ i_{1},\ldots,i_{m}\leq \eta (\mathbb{X})}^{\neq }\mathbb{1}_{C}(X_{i_{1}},\ldots,X_{i_{m}}),\quad C \in \mathcal{X}^{m},& &{}\end{array}$$
(9)

where the superscript ≠ indicates summation over m-tuples with pairwise different entries. (In the case \(\eta (\mathbb{X}) = \infty\) this involves only integer-valued indices.) In the case C = B m for some \(B \in \mathcal{X}\) we have that

$$\displaystyle\begin{array}{rcl} \eta ^{(m)}(B^{m}) =\eta (B)(\eta (B) - 1)\cdots (\eta (B) - m + 1).& & {}\\ \end{array}$$

Therefore \(\eta ^{(m)}\) is called the m-th factorial measure of η. It can be readily checked that, for any \(m \in \mathbb{N}\),

$$\displaystyle\begin{array}{rcl} \eta ^{(m+1)}& =& \int \bigg[\int\mathbb{1}\{(x_{ 1},\ldots,x_{m+1}) \in \cdot \}\eta (\mathrm{d}x_{m+1}) \\ & & \quad -\sum _{j=1}^{m}\mathbb{1}\{(x_{ 1},\ldots,x_{m},x_{j}) \in \cdot \}\bigg]\eta ^{(m)}(\mathrm{d}(x_{ 1},\ldots,x_{m})),{}\end{array}$$
(10)

where η (1): = η. This suggests a recursive definition of the factorial measures of a general point process, without using a representation as a sum of Dirac measures. The next proposition confirms this idea.

Proposition 1

Let η be a point process on \(\mathbb{X}\) . Then there is a uniquely determined sequence η (m) , \(m \in \mathbb{N}\) , of symmetric point processes on \(\mathbb{X}^{m}\) satisfying η (1) := η and the recursion (10) .

The proof of Proposition 1 is given in the appendix and can be skipped without too much loss. It is enough to remember that η (m) can be defined by (9), whenever η is given by (6) and that any Poisson process has a proper version.

The multivariate version of (7) (see e.g. [15]) says that

$$\displaystyle\begin{array}{rcl} & & \mathbb{E}\int h(\eta,x_{1},\ldots,x_{m})\eta ^{(m)}(\mathrm{d}(x_{ 1},\ldots,x_{m})) \\ & & \qquad \ \ = \mathbb{E}\int h(\eta +\delta _{x_{1}} +\ldots +\delta _{x_{m}},x_{1},\ldots,x_{m})\mu ^{m}(\mathrm{d}(x_{ 1},\ldots,x_{m})),{}\end{array}$$
(11)

for all measurable \(h: \mathbf{N}_{\sigma } \times \mathbb{X}^{m} \rightarrow [0,\infty ]\). In particular the factorial moment measures of η are given by

$$\displaystyle\begin{array}{rcl} \mathbb{E}\eta ^{(m)} =\mu ^{m},\quad m \in \mathbb{N}.& &{}\end{array}$$
(12)

Of course (11) remains true for a measurable \(h: \mathbf{N}_{\sigma } \times \mathbb{X}^{m} \rightarrow \mathbb{R}\) provided that the right-hand side is finite when replacing h with | h | .
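
Equation (12) is easy to test numerically: for N = η(B) Poisson with mean μ(B), the factorial measure satisfies \(\eta ^{(m)}(B^{m}) = N(N - 1)\cdots (N - m + 1)\), whose mean should be \(\mu (B)^{m}\). A short sketch of our own (assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(2)
mu_B = 1.5
N = rng.poisson(mu_B, size=500_000).astype(float)

for m in (1, 2, 3):
    falling = np.ones_like(N)
    for j in range(m):
        falling *= N - j                 # eta^(m)(B^m) = N (N - 1) ... (N - m + 1)
    print(m, falling.mean(), mu_B ** m)  # (12): E eta^(m)(B^m) = mu(B)^m
```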

2 Fock Space Representation

In the remainder of this chapter we consider a Poisson process η on \(\mathbb{X}\) with σ-finite intensity measure μ and distribution \(\mathbb{P}_{\eta }\).

In this and later chapters the following difference operators (sometimes called add-one cost operators) will play a crucial role. For any f ∈ F(N σ ) (the set of all measurable functions from N σ to \(\mathbb{R}\)) and \(x \in \mathbb{X}\) the function D x f ∈ F(N σ ) is defined by

$$\displaystyle\begin{array}{rcl} D_{x}\,f(\chi ):= f(\chi +\delta _{x}) - f(\chi ),\quad \chi \in \mathbf{N}_{\sigma }.& &{}\end{array}$$
(13)

Iterating this definition, for n ≥ 2 and \((x_{1},\ldots,x_{n}) \in \mathbb{X}^{n}\) we define a function \(D_{x_{1},\ldots,x_{n}\,}^{n}f \in \mathbf{F}(\mathbf{N}_{\sigma })\) inductively by

$$\displaystyle\begin{array}{rcl} D_{x_{1},\ldots,x_{n}}^{n}f:= D_{ x_{1}}^{1}D_{ x_{2},\ldots,x_{n}}^{n-1}f,& &{}\end{array}$$
(14)

where D 1: = D and D 0 f = f. Note that

$$\displaystyle\begin{array}{rcl} D_{x_{1},\ldots,x_{n}\,}^{n}f(\chi ) =\sum _{ J\subset \{1,2,\ldots,n\}}(-1)^{n-\vert J\vert }f\Big(\chi +\sum _{ j\in J}\delta _{x_{j}}\Big),& &{}\end{array}$$
(15)

where | J | denotes the number of elements of J. This shows that \(D_{x_{1},\ldots,x_{n}\,}^{n}f\) is symmetric in \(x_{1},\ldots,x_{n}\) and that \((x_{1},\ldots,x_{n},\chi )\mapsto D_{x_{1},\ldots,x_{n}\,}^{n}f(\chi )\) is measurable. We define symmetric and measurable functions T n f on \(\mathbb{X}^{n}\) by

$$\displaystyle\begin{array}{rcl} T_{n}f(x_{1},\ldots,x_{n}):= \mathbb{E}D_{x_{1},\ldots,x_{n}\,}^{n}f(\eta ),& &{}\end{array}$$
(16)

and we set \(T_{0}f:= \mathbb{E}f(\eta )\), whenever these expectations are defined. By 〈⋅ , ⋅ 〉 n we denote the scalar product in \(L^{2}(\mu ^{n})\) and by \(\|\cdot \|_{n}\) the associated norm. Let \(L_{s}^{2}(\mu ^{n})\) denote the space of symmetric functions in \(L^{2}(\mu ^{n})\). Our aim is to prove that the linear mapping \(f\mapsto (T_{n}f)_{n\geq 0}\) is an isometry from \(L^{2}(\mathbb{P}_{\eta })\) into the Fock space given by the direct sum of the spaces \(L_{s}^{2}(\mu ^{n})\), n ≥ 0 (with \(L^{2}\) norms scaled by \(n!^{-1/2}\)) and with \(L_{s}^{2}(\mu ^{0})\) interpreted as \(\mathbb{R}\). In Sect. 4 we will see that this mapping is surjective. The result (and its proof) is from [13] and can be seen as a crucial first step in the stochastic analysis on Poisson spaces.

Theorem 1

Let \(f,g \in L^{2}(\mathbb{P}_{\eta })\) . Then

$$\displaystyle\begin{array}{rcl} \mathbb{E}f(\eta )g(\eta ) = \mathbb{E}f(\eta )\mathbb{E}g(\eta ) +\sum _{ n=1}^{\infty } \frac{1} {n!}\langle T_{n}\,f,T_{n}g\rangle _{n},& &{}\end{array}$$
(17)

where the series converges absolutely.

We will prepare the proof with some lemmas. Let \(\mathcal{X}_{0}\) be the system of all measurable \(B \in \mathcal{X}\) with \(\mu (B) <\infty\). Let F 0 be the space of all bounded and measurable functions \(v: \mathbb{X} \rightarrow [0,\infty )\) vanishing outside some \(B \in \mathcal{X}_{0}\). Let G denote the space of all (bounded and measurable) functions \(g: \mathbf{N}_{\sigma } \rightarrow \mathbb{R}\) of the form

$$\displaystyle\begin{array}{rcl} g(\chi ) = a_{1}e^{-\chi (v_{1})} + \cdots + a_{ n}e^{-\chi (v_{n})},& &{}\end{array}$$
(18)

where \(n \in \mathbb{N}\), \(a_{1},\ldots,a_{n} \in \mathbb{R}\) and \(v_{1},\ldots,v_{n} \in \mathbf{F}_{0}\).

Lemma 1

Relation (17) holds for f,g ∈ G .

Proof

By linearity it suffices to consider functions f and g of the form

$$\displaystyle\begin{array}{rcl} f(\chi ) =\exp [-\chi (v)],\quad g(\chi ) =\exp [-\chi (w)]& & {}\\ \end{array}$$

for v, w ∈ F 0. Then we have for n ≥ 1 that

$$\displaystyle{D^{n}f(\chi ) =\exp [-\chi (v)](e^{-v} - 1)^{\otimes n},}$$

where \((e^{-v} - 1)^{\otimes n}(x_{1},\ldots,x_{n}):=\prod _{ i=1}^{n}(e^{-v(x_{i})} - 1)\). From (3) we obtain that

$$\displaystyle\begin{array}{rcl} T_{n}\,f =\exp [-\mu (1 - e^{-v})](e^{-v} - 1)^{\otimes n}.& &{}\end{array}$$
(19)

Since v ∈ F 0 it follows that T n f ∈ L s 2(μ n), n ≥ 0. Using (3) again, we obtain that

$$\displaystyle\begin{array}{rcl} \mathbb{E}f(\eta )g(\eta ) =\exp [-\mu (1 - e^{-(v+w)})].& &{}\end{array}$$
(20)

On the other hand we have from (19) (putting μ 0(1): = 1) that

$$\displaystyle\begin{array}{rcl} & & \sum _{n=0}^{\infty } \frac{1} {n!}\langle T_{n}\,f,T_{n}g\rangle _{n} {}\\ & & \qquad =\exp [-\mu (1 - e^{-v})]\exp [-\mu (1 - e^{-w})]\sum _{ n=0}^{\infty } \frac{1} {n!}\mu ^{n}(((e^{-v} - 1)(e^{-w} - 1))^{\otimes n}) {}\\ & & \qquad =\exp [-\mu (2 - e^{-v} - e^{-w})]\exp [\mu ((e^{-v} - 1)(e^{-w} - 1))]. {}\\ \end{array}$$

This equals the right-hand side of (20). □ 
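
The identity (19) just derived can also be checked by simulation. The sketch below (our illustration; λ and v are arbitrary choices, μ = λ·Lebesgue on [0, 1]) evaluates the difference operator (13) on \(f(\chi ) = e^{-\chi (v)}\) and compares the Monte Carlo estimate of \(T_{1}f(x_{0})\) from (16) with the closed form (19) for n = 1:

```python
import numpy as np

rng = np.random.default_rng(3)
lam = 2.0                               # mu = lam * Lebesgue on X = [0, 1]
v = lambda x: 3.0 * x                   # a function in F_0 (our choice)
f = lambda pts: np.exp(-v(pts).sum())   # f(chi) = exp(-chi(v))

def D(x, pts):                          # difference operator (13)
    return f(np.append(pts, x)) - f(pts)

x0 = 0.4
T1 = np.mean([D(x0, rng.uniform(size=rng.poisson(lam))) for _ in range(200_000)])

# closed form (19) for n = 1: exp[-mu(1 - e^{-v})] * (e^{-v(x0)} - 1),
# with mu(1 - e^{-v}) = lam * (1 - (1 - e^{-3}) / 3) computed analytically
mu_v = lam * (1.0 - (1.0 - np.exp(-3.0)) / 3.0)
print(T1, np.exp(-mu_v) * (np.exp(-v(x0)) - 1.0))
```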

To extend (17) to general \(f,g \in L^{2}(\mathbb{P}_{\eta })\) we need two further lemmas.

Lemma 2

The set G is dense in \(L^{2}(\mathbb{P}_{\eta })\) .

Proof

Let W be the space of all bounded measurable \(g: \mathbf{N}_{\sigma } \rightarrow \mathbb{R}\) that can be approximated in \(L^{2}(\mathbb{P}_{\eta })\) by functions in G. This space is closed under monotone and uniformly bounded convergence and also under uniform convergence. Moreover, it contains the constant functions. The space G is stable under multiplication and we denote by \(\mathcal{N}'\) the smallest σ-field on N σ such that \(\chi \mapsto h(\chi )\) is measurable for all h ∈ G. A functional version of the monotone class theorem (see e.g. Theorem I.21 in [1]) implies that W contains any bounded \(\mathcal{N}'\)-measurable g. On the other hand we have that

$$\displaystyle{\chi (C) =\lim _{t\rightarrow 0+}t^{-1}(1 - e^{-t\chi (C)}),\quad \chi \in \mathbf{N}_{\sigma },}$$

for any \(C \in \mathcal{X}\). Hence \(\chi \mapsto \chi (C)\) is \(\mathcal{N}'\)-measurable whenever \(C \in \mathcal{X}_{0}\). Since μ is σ-finite, for any \(C \in \mathcal{X}\) there is a monotone sequence \(C_{k} \in \mathcal{X}_{0}\), \(k \in \mathbb{N}\), with union C, so that \(\chi \mapsto \chi (C)\) is \(\mathcal{N}'\)-measurable. Hence \(\mathcal{N}' = \mathcal{N}_{\sigma }\) and it follows that W contains all bounded measurable functions. But then W is clearly dense in \(L^{2}(\mathbb{P}_{\eta })\) and the proof of the lemma is complete. □ 

Lemma 3

Suppose that \(f,f^{1},f^{2},\ldots \in L^{2}(\mathbb{P}_{\eta })\) satisfy f k → f in \(L^{2}(\mathbb{P}_{\eta })\) as k →∞, and that h: N σ → [0,1] is measurable. Let \(n \in \mathbb{N}\) , let \(C \in \mathcal{X}_{0}\) and set B:= C n . Then

$$\displaystyle\begin{array}{rcl} \lim _{k\rightarrow \infty }\mathbb{E}\int \limits _{B}\vert D_{x_{1},\ldots,x_{n}\,}^{n}f(\eta ) - D_{ x_{1},\ldots,x_{n}\,}^{n}f^{k}(\eta )\vert h(\eta )\mu ^{n}(\mathrm{d}(x_{ 1},\ldots,x_{n})) = 0.& &{}\end{array}$$
(21)

Proof

By (15), the relation (21) is implied by the convergence

$$\displaystyle\begin{array}{rcl} \lim _{k\rightarrow \infty }\mathbb{E}\int \limits _{B}\Big\vert \,f\Big(\eta +\sum _{i=1}^{m}\delta _{ x_{i}}\Big) - f^{k}\Big(\eta +\sum _{ i=1}^{m}\delta _{ x_{i}}\Big)\Big\vert h(\eta )\mu ^{n}(\mathrm{d}(x_{ 1},\ldots,x_{n})) = 0& &{}\end{array}$$
(22)

for all \(m \in \{ 0,\ldots,n\}\). For m = 0 this is obvious. Assume \(m \in \{ 1,\ldots,n\}\). Then the integral in (22) equals

$$\displaystyle\begin{array}{rcl} & & \mu (C)^{n-m}\mathbb{E}\int \limits _{ C^{m}}\Big\vert \,f\Big(\eta +\sum _{i=1}^{m}\delta _{ x_{i}}\Big) - f^{k}\Big(\eta +\sum _{ i=1}^{m}\delta _{ x_{i}}\Big)\Big\vert h(\eta )\mu ^{m}(\mathrm{d}(x_{ 1},\ldots,x_{m})) {}\\ & & \quad =\mu (C)^{n-m}\mathbb{E}\int \limits _{ C^{m}}\vert \,f(\eta ) - f^{k}(\eta )\vert h\Big(\eta -\sum _{ i=1}^{m}\delta _{ x_{i}}\Big)\eta ^{(m)}(\mathrm{d}(x_{ 1},\ldots,x_{m})) {}\\ & & \quad \leq \mu (C)^{n-m}\mathbb{E}\vert \,f(\eta ) - f^{k}(\eta )\vert \eta ^{(m)}(C^{m}), {}\\ \end{array}$$

where we have used (11) to get the equality. By the Cauchy–Schwarz inequality the last expression is bounded above by

$$\displaystyle\begin{array}{rcl} \mu (C)^{n-m}(\mathbb{E}(\,f(\eta ) - f^{k}(\eta ))^{2})^{1/2}(\mathbb{E}(\eta ^{(m)}(C^{m}))^{2})^{1/2}.& & {}\\ \end{array}$$

Since the Poisson distribution has moments of all orders, we obtain (22) and hence the lemma. □ 

Proof of Theorem 1

By linearity and the polarization identity

$$\displaystyle{4\langle u,v\rangle _{n} =\langle u + v,u + v\rangle _{n} -\langle u - v,u - v\rangle _{n}}$$

it suffices to prove (17) for \(f = g \in L^{2}(\mathbb{P}_{\eta })\). By Lemma 2 there are f k ∈ G, \(k \in \mathbb{N}\), satisfying f k → f in \(L^{2}(\mathbb{P}_{\eta })\) as \(k \rightarrow \infty\). By Lemma 1, Tf k, \(k \in \mathbb{N}\), is a Cauchy sequence in \(\mathbf{H}:= \mathbb{R} \oplus \bigoplus _{n=1}^{\infty }L_{s}^{2}(\mu ^{n})\). The direct sum of the scalar products (n! )−1〈⋅ , ⋅ 〉 n makes H a Hilbert space. Let \(\tilde{f} = (\tilde{f}_{n}) \in \mathbf{H}\) be the limit, that is

$$\displaystyle\begin{array}{rcl} \lim _{k\rightarrow \infty }\sum _{n=0}^{\infty } \frac{1} {n!}\|T_{n}\,f^{k} -\tilde{ f}_{ n}\|_{n}^{2} = 0.& &{}\end{array}$$
(23)

Taking the limit in the identity \(\mathbb{E}f^{k}(\eta )^{2} =\langle Tf^{k},Tf^{k}\rangle _{\mathbf{H}}\) yields \(\mathbb{E}f(\eta )^{2} =\langle \tilde{ f},\tilde{f}\rangle _{\mathbf{H}}\). Equation (23) implies that \(\tilde{f}_{0} = \mathbb{E}f(\eta ) = T_{0}f\). It remains to show that for any n ≥ 1,

$$\displaystyle\begin{array}{rcl} \tilde{f}_{n} = T_{n}\,f,\quad \mu ^{n}\text{-a.e.}& &{}\end{array}$$
(24)

Let \(C \in \mathcal{X}_{0}\) and B: = C n. Let μ B n denote the restriction of the measure μ n to B. By (23) T n f k converges in L 2(μ B n) (and hence in L 1(μ B n)) to \(\tilde{f}_{n}\), while by the definition (16) of T n , and the case h ≡ 1 of (22), T n f k converges in L 1(μ B n) to T n f. Hence these \(L^{1}(\mu _{B}^{n})\) limits must be the same almost everywhere, so that \(\tilde{f}_{n} = T_{n}\,f\ \mu ^{n}\)-a.e. on B. Since μ is assumed σ-finite, this implies (24) and hence the theorem. □ 

3 Multiple Wiener–Itô Integrals

For n ≥ 1 and g ∈ L 1(μ n) we define (see [6, 7, 28, 29])

$$\displaystyle\begin{array}{rcl} I_{n}(g):=\sum _{J\subset [n]}(-1)^{n-\vert J\vert }\iint g(x_{ 1},\ldots,x_{n})\eta ^{(\vert J\vert )}(\mathrm{d}x_{ J})\mu ^{n-\vert J\vert }(\mathrm{d}x_{ J^{c}}),& &{}\end{array}$$
(25)

where \([n]:=\{ 1,\ldots,n\}\), J c: = [n]∖ J and x J : = (x j ) j ∈ J . If J = ∅, then the inner integral on the right-hand side has to be interpreted as μ n(g). (This is to say that η (0)(1): = 1.) The multivariate Mecke equation (11) implies that all integrals in (25) are finite and that \(\mathbb{E}I_{n}(g) = 0\).
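
For n = 1 the definition (25) reduces to the compensated integral \(I_{1}(g) =\eta (g) -\mu (g)\). The following sketch (our illustration; g is chosen with μ(g) = 0) checks \(\mathbb{E}I_{1}(g) = 0\) and the n = 1 case of the isometry (28) established below, namely \(\mathbb{E}I_{1}(g)^{2} =\mu (g^{2})\):

```python
import numpy as np

rng = np.random.default_rng(4)
lam = 2.0                                # mu = lam * Lebesgue on X = [0, 1]
g = lambda x: np.sin(2 * np.pi * x)      # mu(g) = 0 and mu(g^2) = lam / 2

def I1(pts):                             # I_1(g) = eta(g) - mu(g); here mu(g) = 0
    return g(pts).sum()

vals = np.array([I1(rng.uniform(size=rng.poisson(lam))) for _ in range(300_000)])
print(vals.mean(), vals.var(), lam / 2)  # E I_1(g) = 0, E I_1(g)^2 = mu(g^2) = 1.0
```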

Given functions \(g_{i}: \mathbb{X} \rightarrow \mathbb{R}\) for \(i = 1,\ldots,n\), the tensor product \(\otimes _{i=1}^{n}g_{i}\) is the function from \(\mathbb{X}^{n}\) to \(\mathbb{R}\) which maps each \((x_{1},\ldots,x_{n})\) to \(\prod _{i=1}^{n}g_{i}(x_{i})\). When the functions \(g_{1},\ldots,g_{n}\) are all the same function h, we write \(h^{\otimes n}\) for this tensor product function. In this case the definition (25) simplifies to

$$\displaystyle\begin{array}{rcl} I_{n}(h^{\otimes n}) =\sum _{ k=0}^{n}\binom{n}{k}(-1)^{n-k}\eta ^{(k)}(h^{\otimes k})(\mu (h))^{n-k}.& &{}\end{array}$$
(26)

Let Σ n denote the set of all permutations of [n], and for a measurable function \(g: \mathbb{X}^{n} \rightarrow \mathbb{R}\) define the symmetrization \(\tilde{g}\) of g by

$$\displaystyle\begin{array}{rcl} \tilde{g}(x_{1},\ldots,x_{n}):= \frac{1} {n!}\sum _{\pi \in \varSigma _{n}}g(x_{\pi (1)},\ldots,x_{\pi (n)}).& &{}\end{array}$$
(27)

The following isometry properties of the operators I n are crucial. The proof is similar to the one of [16, Theorem 3.1] and is based on the product form (12) of the factorial moment measures and some combinatorial arguments. For more information on the intimate relationships between moments of Poisson integrals and the combinatorial properties of partitions we refer to [16, 21, 25, 28].

Lemma 4

Let \(g \in L^{2}(\mu ^{m})\) and \(h \in L^{2}(\mu ^{n})\) for m,n ≥ 1 and assume that \(\{g\neq 0\} \subset B^{m}\) and \(\{h\neq 0\} \subset B^{n}\) for some \(B \in \mathcal{X}_{0}\) . Then

$$\displaystyle\begin{array}{rcl} \mathbb{E}I_{m}(g)I_{n}(h) =\mathbb{1}\{m = n\}m!\langle \tilde{g},\tilde{h}\rangle _{m}.& &{}\end{array}$$
(28)

Proof

We start with a combinatorial identity. Let \(n \in \mathbb{N}\). A subpartition of [n] is a (possibly empty) family σ of nonempty pairwise disjoint subsets of [n]. The cardinality of ∪ J ∈ σ J is denoted by \(\|\sigma \|\). For \(u \in \mathbf{F}(\mathbb{X}^{n})\) we define \(u_{\sigma }: \mathbb{X}^{\vert \sigma \vert +n-\|\sigma \|}\rightarrow \mathbb{R}\) by identifying the arguments belonging to the same J ∈ σ. (The arguments \(x_{1},\ldots,x_{\vert \sigma \vert +n-\|\sigma \|}\) have to be inserted in the order of occurrence.) Now we take \(r,s \in \mathbb{Z}_{+}\) such that r + s ≥ 1 and define Σ r, s as the set of all partitions of \(\{1,\ldots,r + s\}\) such that \(\vert J \cap \{ 1,\ldots,r\}\vert \leq 1\) and \(\vert J \cap \{ r + 1,\ldots,r + s\}\vert \leq 1\) for all J ∈ σ. Let \(u \in \mathbf{F}(\mathbb{X}^{r+s})\). It is easy to see that

$$\displaystyle\begin{array}{rcl} & & \iint u(x_{1},\ldots,x_{r+s})\eta ^{(r)}(\mathrm{d}(x_{ 1},\ldots,x_{r}))\eta ^{(s)}(\mathrm{d}(x_{ r+1},\ldots,x_{r+s})) \\ & & \quad =\sum _{\sigma \in \varSigma _{r,s}}\int u_{\sigma }\,\mathrm{d}\eta ^{(\vert \sigma \vert )}, {}\end{array}$$
(29)

provided that \(\eta (\{u\neq 0\}) <\infty\). (In the case r = 0 the inner integral on the left-hand side is interpreted as 1.)

We next note that g ∈ L 1(μ m) and h ∈ L 1(μ n) and abbreviate \(f:= g \otimes h\). Let \(k:= m + n\), J 1: = [m] and \(J_{2}:=\{ m + 1,\ldots,m + n\}\). The definition (25) and Fubini’s theorem imply that

$$\displaystyle{ \begin{array}{rl} I_{m}(g)I_{n}(h)& =\sum _{I\subset [k]}(-1)^{k-\vert I\vert }\iiint f(x_{1},\ldots,x_{k}) \\ &\ \ \ \ \eta ^{(\vert I\cap J_{1}\vert )}(\mathrm{d}x_{I\cap J_{ 1}})\eta ^{(\vert I\cap J_{2}\vert )}(\mathrm{d}x_{ I\cap J_{2}})\mu ^{k-\vert I\vert }(\mathrm{d}x_{ I^{c}}),\end{array} }$$
(30)

where I c: = [k]∖ I and x J : = (x j ) j ∈ J for any J ⊂ [k]. We now take the expectation of (30) and use Fubini’s theorem (justified by our integrability assumptions on g and h). Thanks to (29) and (12) we can compute the expectation of the inner two integrals to obtain that

$$\displaystyle\begin{array}{rcl} \mathbb{E}I_{m}(g)I_{n}(h) =\sum _{\sigma \in \varSigma _{m,n}^{{\ast}}}(-1)^{k-\|\sigma \|}\int f_{\sigma }\,\mathrm{d}\mu ^{k-\|\sigma \|+\vert \sigma \vert },& &{}\end{array}$$
(31)

where \(\varSigma _{m,n}^{{\ast}}\) is the set of all subpartitions σ of [k] such that \(\vert J \cap J_{1}\vert \leq 1\) and \(\vert J \cap J_{2}\vert \leq 1\) for all J ∈ σ. Let \(\varSigma _{m,n}^{{\ast},2} \subset \varSigma _{m,n}^{{\ast}}\) be the set of all subpartitions of [k] such that \(\vert J\vert = 2\) for all J ∈ σ. For any \(\pi \in \varSigma _{m,n}^{{\ast},2}\) we let \(\varSigma _{m,n}^{{\ast}}(\pi )\) denote the set of all \(\sigma \in \varSigma _{m,n}^{{\ast}}\) satisfying π ⊂ σ. Note that \(\pi \in \varSigma _{m,n}^{{\ast}}(\pi )\) and that for any \(\sigma \in \varSigma _{m,n}^{{\ast}}\) there is a unique \(\pi \in \varSigma _{m,n}^{{\ast},2}\) such that \(\sigma \in \varSigma _{m,n}^{{\ast}}(\pi )\). In this case

$$\displaystyle\begin{array}{rcl} \int f_{\sigma }\mathrm{d}\mu ^{k-\|\sigma \|+\vert \sigma \vert } =\int f_{\pi }\mathrm{d}\mu ^{k-\|\pi \|},& & {}\\ \end{array}$$

so that (31) implies

$$\displaystyle\begin{array}{rcl} \mathbb{E}I_{m}(g)I_{n}(h) =\sum _{\pi \in \varSigma _{m,n}^{{\ast},2}}\int f_{\pi }\mathrm{d}\mu ^{k-\|\pi \|}\sum _{ \sigma \in \varSigma _{m,n}^{{\ast}}(\pi )}(-1)^{k-\|\sigma \|}.& &{}\end{array}$$
(32)

The inner sum equals zero except in the case \(\|\pi \|= k\). Hence (32) vanishes unless m = n. In the latter case we have

$$\displaystyle\begin{array}{rcl} \mathbb{E}I_{m}(g)I_{n}(h) =\sum _{\pi \in \varSigma _{m,m}^{{\ast},2}:\vert \pi \vert =m}\int f_{\pi }\,\mathrm{d}\mu ^{m} = m!\langle \tilde{g},\tilde{h}\rangle _{ m},& & {}\\ \end{array}$$

as asserted. □ 

Any g ∈ L 2(μ m) is the L 2-limit of a sequence g k  ∈ L 2(μ m) satisfying the assumptions of Lemma 4. For instance we may take \(g_{k}:=\mathbb{1}_{(B_{k})^{m}}g\), where \(\mu (B_{k}) <\infty\) and \(B_{k} \uparrow \mathbb{X}\) as \(k \rightarrow \infty\). Therefore the isometry (28) allows us to extend the linear operator I m in a unique way to L 2(μ m). It follows from the isometry that \(I_{m}(g) = I_{m}(\tilde{g})\) for all g ∈ L 2(μ m). Moreover, (28) remains true for arbitrary g ∈ L 2(μ m) and h ∈ L 2(μ n). It is convenient to set I 0(c): = c for \(c \in \mathbb{R}\). When m ≥ 1, the random variable I m (g) is the (m-th order) Wiener–Itô integral of g ∈ L 2(μ m) with respect to the compensated Poisson process \(\hat{\eta }:=\eta -\mu\). The reference to \(\hat{\eta }\) comes from the explicit definition (25). We note that \(\hat{\eta }(B)\) is only defined for \(B \in \mathcal{X}_{0}\). In fact, \(\{\hat{\eta }(B): B \in \mathcal{X}_{0}\}\) is an independent random measure in the sense of [7].

4 The Wiener–Itô Chaos Expansion

A fundamental result of Itô [7] and Wiener [29] says that every square integrable function of the Poisson process η can be written as an infinite series of orthogonal stochastic integrals. Our aim is to prove the following explicit version of this Wiener–Itô chaos expansion . Recall definition (16).

Theorem 2

Let \(f \in L^{2}(\mathbb{P}_{\eta })\) . Then \(T_{n}\,f \in L_{s}^{2}(\mu ^{n})\) , \(n \in \mathbb{N}\) , and

$$\displaystyle\begin{array}{rcl} f(\eta ) =\sum _{ n=0}^{\infty } \frac{1} {n!}I_{n}(T_{n}\,f),& &{}\end{array}$$
(33)

where the series converges in \(L^{2}(\mathbb{P})\) . Moreover, if \(g_{n} \in L_{s}^{2}(\mu ^{n})\) for \(n \in \mathbb{Z}_{+}\) satisfy \(f(\eta ) =\sum _{ n=0}^{\infty }\frac{1} {n!}I_{n}(g_{n})\) with convergence in \(L^{2}(\mathbb{P})\) , then \(g_{0} = \mathbb{E}f(\eta )\) and g n = T n  f, μ n -a.e. on \(\mathbb{X}^{n}\) , for all \(n \in \mathbb{N}\) .

For a homogeneous Poisson process on the real line, the explicit chaos expansion (33) was proved in [8]. The general case was formulated and proved in [13]. Stroock [27] has proved the counterpart of (33) for Brownian motion. Stroock’s formula involves iterated Malliavin derivatives and requires stronger integrability assumptions on f(η).

Theorem 2 and the isometry properties (28) of stochastic integrals show that the isometry f ↦ (T n ( f)) n ≥ 0 is in fact a bijection from \(L^{2}(\mathbb{P}_{\eta })\) onto the Fock space. The following lemma is the key for the proof.

Lemma 5

Let \(f(\chi ):= e^{-\chi (v)}\) , \(\chi \in \mathbf{N}_{\sigma }(\mathbb{X})\) , where \(v: \mathbb{X} \rightarrow [0,\infty )\) is a measurable function vanishing outside a set \(B \in \mathcal{X}\) with μ(B) < ∞. Then (33) holds \(\mathbb{P}\) -a.s. and in \(L^{2}(\mathbb{P})\) .

Proof

By (3) and (19) the right-hand side of (33) equals the formal sum

$$\displaystyle\begin{array}{rcl} I:=\exp [-\mu (1 - e^{-v})] +\exp [-\mu (1 - e^{-v})]\sum _{ n=1}^{\infty } \frac{1} {n!}I_{n}((e^{-v} - 1)^{\otimes n}).& &{}\end{array}$$
(34)

Using the pathwise definition (25) we obtain that almost surely

$$\displaystyle\begin{array}{rcl} I& =& \exp [-\mu (1 - e^{-v})]\sum _{ n=0}^{\infty } \frac{1} {n!}\sum _{k=0}^{n}\binom{n}{k}\eta ^{(k)}((e^{-v} - 1)^{\otimes k})(\mu (1 - e^{-v}))^{n-k} \\ & =& \exp [-\mu (1 - e^{-v})]\sum _{ k=0}^{\infty }\frac{1} {k!}\eta ^{(k)}((e^{-v} - 1)^{\otimes k})\sum _{ n=k}^{\infty } \frac{1} {(n - k)!}(\mu (1 - e^{-v}))^{n-k} \\ & =& \sum _{k=0}^{N} \frac{1} {k!}\eta ^{(k)}((e^{-v} - 1)^{\otimes k}), {}\end{array}$$
(35)

where N: = η(B). Assume for the moment that η is proper and write \(\delta _{X_{1}} +\ldots +\delta _{X_{N}}\) for the restriction of η to B. Then we have almost surely that

$$\displaystyle\begin{array}{rcl} I =\sum _{J\subset \{1,\ldots,N\}}\prod _{i\in J}(e^{-v(X_{i})} - 1) =\prod _{ i=1}^{N}e^{-v(X_{i})} = e^{-\eta (v)},& & {}\\ \end{array}$$

and hence (33) holds with almost sure convergence of the series. To demonstrate that convergence also holds in \(L^{2}(\mathbb{P})\), let the partial sum I(m) be given by the right-hand side of (34) with the series terminated at n = m. Then since \(\mu (1 - e^{-v})\) is nonnegative and \(\vert 1 - e^{-v(y)}\vert \leq 1\) for all y, a similar argument to (35) yields

$$\displaystyle\begin{array}{rcl} \vert I(m)\vert & \leq & \sum _{k=0}^{\min (N,m)} \frac{1} {k!}\vert \eta ^{(k)}((e^{-v} - 1)^{\otimes k})\vert {}\\ &\leq & \sum _{k=0}^{N}\frac{N(N - 1)\cdots (N - k + 1)} {k!} = 2^{N}. {}\\ \end{array}$$

Since 2N has finite moments of all orders, by dominated convergence the series (34) (and hence (33)) converges in \(L^{2}(\mathbb{P})\).

Since the convergence of the right-hand side of (34) as well as the almost sure identity \(I = e^{-\eta (v)}\) remain true for any point process with the same distribution as η (that is, for any Poisson process with intensity measure μ), it was no loss of generality to assume that η is proper. □ 

Proof of Theorem 2

Let \(f \in L^{2}(\mathbb{P}_{\eta })\) and define T n f for \(n \in \mathbb{Z}_{+}\) by (16). By (28) and Theorem 1,

$$\displaystyle{\sum _{n=0}^{\infty }\mathbb{E}\Big( \frac{1} {n!}I_{n}(T_{n}\,f)\Big)^{2} =\sum _{ n=0}^{\infty } \frac{1} {n!}\|T_{n}\,f\|_{n}^{2} = \mathbb{E}f(\eta )^{2} <\infty.}$$

Hence the infinite series of orthogonal terms

$$\displaystyle{S:=\sum _{ n=0}^{\infty } \frac{1} {n!}I_{n}(T_{n}\,f)}$$

converges in \(L^{2}(\mathbb{P})\). Let h ∈ G, where G was defined at (18). By Lemma 5 and linearity of I n (⋅ ) the sum \(\sum _{n=0}^{\infty } \frac{1} {n!}I_{n}(T_{n}h)\) converges in \(L^{2}(\mathbb{P})\) to h(η). Using (28) followed by Theorem 1 yields

$$\displaystyle\begin{array}{rcl} \mathbb{E}(h(\eta ) - S)^{2} =\sum _{ n=0}^{\infty } \frac{1} {n!}\|T_{n}h - T_{n}f\|_{n}^{2} = \mathbb{E}(\,f(\eta ) - h(\eta ))^{2}.& & {}\\ \end{array}$$

Hence if \(\mathbb{E}(\,f(\eta ) - h(\eta ))^{2}\) is small, then so is \(\mathbb{E}(\,f(\eta ) - S)^{2}\). Since G is dense in \(L^{2}(\mathbb{P}_{\eta })\) by Lemma 2, it follows that f(η) = S almost surely.

To prove the uniqueness, suppose that also g n  ∈ L s 2(μ n) for \(n \in \mathbb{Z}_{+}\) are such that \(\sum _{n=0}^{\infty } \frac{1} {n!}I_{n}(g_{n})\) converges in \(L^{2}(\mathbb{P})\) to f(η). By taking expectations we must have \(g_{0} = \mathbb{E}f(\eta ) = T_{0}f\). For n ≥ 1 and h ∈ L s 2(μ n), by (28) and (33) we have

$$\displaystyle{\mathbb{E}f(\eta )I_{n}(h) = \mathbb{E}I_{n}(T_{n}f)I_{n}(h) = n!\langle T_{n}f,h\rangle _{n}}$$

and similarly with T n f replaced by g n , so that \(\langle T_{n}\,f - g_{n},h\rangle _{n} = 0\). Putting \(h = T_{n}\,f - g_{n}\) gives \(\|T_{n}\,f - g_{n}\|_{n} = 0\) for each n, completing the proof of the theorem. □ 

5 Malliavin Operators

For any p ≥ 0 we denote by L η p the space of all random variables \(F \in L^{p}(\mathbb{P})\) such that \(F = f(\eta )\ \mathbb{P}\)-almost surely, for some f ∈ F(N σ ). Note that the space L η p is a subset of \(L^{p}(\mathbb{P})\) while \(L^{p}(\mathbb{P}_{\eta })\) is the space of all measurable functions f ∈ F(N σ ) satisfying \(\int \vert \,f\vert ^{p}\,\mathrm{d}\mathbb{P}_{\eta } = \mathbb{E}\vert \,f(\eta )\vert ^{p} <\infty\). The representative f of \(F \in L_{\eta }^{p}\) is the \(\mathbb{P}_{\eta }\)-a.e. uniquely defined element of \(L^{p}(\mathbb{P}_{\eta })\). For \(x \in \mathbb{X}\) we can then define the random variable D x F: = D x f(η). More generally, we define \(D_{x_{1},\ldots,x_{n}\,}^{n}F:= D_{x_{1},\ldots,x_{n}\,}^{n}f(\eta )\) for any \(n \in \mathbb{N}\) and \(x_{1},\ldots,x_{n} \in \mathbb{X}\). The mapping \((\omega,x_{1},\ldots,x_{n})\mapsto D_{x_{1},\ldots,x_{n}\,}^{n}F(\omega )\) is denoted by D n F (or by DF in the case n = 1). The multivariate Mecke equation (11) easily implies that these definitions are \(\mathbb{P}\otimes \mu\)-a.e. independent of the choice of the representative.

By (33) any F ∈ L η 2 can be written as

$$\displaystyle\begin{array}{rcl} F = \mathbb{E}F +\sum _{ n=1}^{\infty }I_{ n}(\,f_{n}),& &{}\end{array}$$
(36)

where \(f_{n}:= \frac{1} {n!}\mathbb{E}D^{n}F\). In particular we obtain from (28) (or directly from Theorem 1) that

$$\displaystyle\begin{array}{rcl} \mathbb{E}F^{2} = (\mathbb{E}F)^{2} +\sum _{ n=1}^{\infty }n!\|\,f_{ n}\|_{n}^{2}.& &{}\end{array}$$
(37)

We denote by dom D the set of all F ∈ L η 2 satisfying

$$\displaystyle\begin{array}{rcl} \sum _{n=1}^{\infty }nn!\|\,f_{ n}\|_{n}^{2} <\infty.& &{}\end{array}$$
(38)

The following result is taken from [13] and generalizes Theorem 6.5 in [8] (see also Theorem 6.2 in [20]). It shows that under the assumption (38) the pathwise defined difference operator DF coincides with the Malliavin derivative of F. The space dom D is the domain of this operator.

Theorem 3

Let F ∈ L η 2 be given by (36) . Then \(DF \in L^{2}(\mathbb{P}\otimes \mu )\) iff F ∈dom  D. In this case we have \(\mathbb{P}\) -a.s. and for μ-a.e.  \(x \in \mathbb{X}\) that

$$\displaystyle\begin{array}{rcl} D_{x}F =\sum _{ n=1}^{\infty }nI_{ n-1}(\,f_{n}(x,\cdot )).& &{}\end{array}$$
(39)

The proof of Theorem 3 requires some preparations. Since

$$\displaystyle\begin{array}{rcl} \int \Big(\sum _{n=1}^{\infty }nn!\|\,f_{ n}(x,\cdot )\|_{n-1}^{2}\Big)\mu (\mathrm{d}x) =\sum _{ n=1}^{\infty }nn!\|\,f_{ n}\|_{n}^{2},& & {}\\ \end{array}$$

 (28) implies that the infinite series

$$\displaystyle\begin{array}{rcl} D'_{x}F:=\sum _{ n=1}^{\infty }nI_{ n-1}(\,f_{n}(x,\cdot ))& &{}\end{array}$$
(40)

converges in \(L^{2}(\mathbb{P})\) for μ-a.e. \(x \in \mathbb{X}\) provided that F ∈ dom D. By construction of the stochastic integrals we can assume that (ω, x) ↦ (I n−1 ( f n (x, ⋅ )))(ω) is measurable for all n ≥ 1. Therefore we can also assume that the mapping D′F given by (ω, x) ↦ D′ x F(ω) is measurable. We have just seen that

$$\displaystyle\begin{array}{rcl} \mathbb{E}\int (D'_{x}F)^{2}\mu (\mathrm{d}x) =\sum _{ n=1}^{\infty }nn!\|\,f_{ n}\|_{n}^{2},\quad F \in \mathrm{dom}\,D.& &{}\end{array}$$
(41)

Next we introduce an operator acting on random functions that will turn out to be the adjoint of the difference operator D, see Theorem 4. For p ≥ 0 let \(L_{\eta }^{p}(\mathbb{P}\otimes \mu )\) denote the set of all \(H \in L^{p}(\mathbb{P}\otimes \mu )\) satisfying H(ω, x) = h(η(ω), x) for \(\mathbb{P}\otimes \mu\)-a.e. (ω, x) for some representative \(h \in \mathbf{F}(\mathbf{N}_{\sigma } \times \mathbb{X})\). For such an H we have for μ-a.e. x that \(H(x):= H(\cdot,x) \in L^{2}(\mathbb{P})\) and (by Theorem 2)

$$\displaystyle\begin{array}{rcl} H(x) =\sum _{ n=0}^{\infty }I_{ n}(h_{n}(x,\cdot )),\quad \mathbb{P}\text{-a.s.},& &{}\end{array}$$
(42)

where \(h_{0}(x):= \mathbb{E}H(x)\) and \(h_{n}(x,x_{1},\ldots,x_{n}):= \frac{1} {n!}\mathbb{E}D_{x_{1},\ldots,x_{n}\,}^{n}H(x)\). We can then define the Kabanov–Skorohod integral [3, 10, 11, 26] of H, denoted δ(H), by

$$\displaystyle\begin{array}{rcl} \delta (H):=\sum _{ n=0}^{\infty }I_{ n+1}(h_{n}),& &{}\end{array}$$
(43)

which converges in \(L^{2}(\mathbb{P})\) provided that

$$\displaystyle\begin{array}{rcl} \sum _{n=0}^{\infty }(n + 1)!\int \tilde{h}_{ n}^{2}\mathrm{d}\mu ^{n+1} <\infty.& &{}\end{array}$$
(44)

Here

$$\displaystyle\begin{array}{rcl} \tilde{h}_{n}(x_{1},\ldots,x_{n+1}):= \frac{1} {(n + 1)!}\sum _{i=1}^{n+1}\mathbb{E}D_{ x_{1},\ldots,x_{i-1},x_{i+1},\ldots,x_{n+1}}^{n}H(x_{ i})& &{}\end{array}$$
(45)

is the symmetrization of h n . The set of all \(H \in L_{\eta }^{2}(\mathbb{P}\otimes \mu )\) satisfying the latter assumption is the domain dom δ of the operator δ.

We continue with a preliminary version of Theorem 4.

Proposition 2

Let F ∈dom  D. Let \(H \in L_{\eta }^{2}(\mathbb{P}\otimes \mu )\) be given by (42) and assume that

$$\displaystyle\begin{array}{rcl} \sum _{n=0}^{\infty }(n + 1)!\int h_{ n}^{2}\mathrm{d}\mu ^{n+1} <\infty.& &{}\end{array}$$
(46)

Then

$$\displaystyle\begin{array}{rcl} \mathbb{E}\int (D'_{x}F)H(x)\mu (\mathrm{d}x) = \mathbb{E}F\delta (H).& &{}\end{array}$$
(47)

Proof

The Minkowski inequality shows that (46) implies (44), and hence H ∈ dom δ. Using (40) and (42) together with (28), we obtain that

$$\displaystyle\begin{array}{rcl} \mathbb{E}\int (D'_{x}F)H(x)\mu (\mathrm{d}x) =\int \bigg (\sum _{n=1}^{\infty }n!\langle f_{ n}(x,\cdot ),h_{n-1}(x,\cdot )\rangle _{n-1}\bigg)\mu (\mathrm{d}x),& & {}\\ \end{array}$$

where the use of Fubini’s theorem is justified by (41), the assumption on H and the Cauchy–Schwarz inequality. Swapping the order of summation and integration (to be justified soon) we see that the last integral equals

$$\displaystyle\begin{array}{rcl} \sum _{n=1}^{\infty }n!\langle f_{ n},h_{n-1}\rangle _{n} =\sum _{ n=1}^{\infty }n!\langle f_{ n},\tilde{h}_{n-1}\rangle _{n},& & {}\\ \end{array}$$

where we have used the fact that f n is a symmetric function. By definition (43) and (28), the last series coincides with \(\mathbb{E}F\delta (H)\). The above change of order is permitted since

$$\displaystyle\begin{array}{rcl} & & \sum _{n=1}^{\infty }n!\int \vert \langle f_{ n}(x,\cdot ),h_{n-1}(x,\cdot )\rangle _{n-1}\vert \mu (\mathrm{d}x) {}\\ & & \quad \leq \sum _{n=1}^{\infty }n!\int \|\,f_{ n}(x,\cdot )\|_{n-1}\|h_{n-1}(x,\cdot )\|_{n-1}\mu (\mathrm{d}x) {}\\ \end{array}$$

and the latter series is finite in view of the Cauchy–Schwarz inequality, the defining condition (38) of dom D and assumption (46). □ 

Proof of Theorem 3

We need to show that

$$\displaystyle\begin{array}{rcl} DF = D'F,\quad \mathbb{P} \otimes \mu \text{-a.e.}& &{}\end{array}$$
(48)

First consider the case \(f(\chi ) = e^{-\chi (v)}\) for a measurable \(v: \mathbb{X} \rightarrow [0,\infty )\) vanishing outside a set with finite μ-measure. Then n! f n  = T n f is given by (19). Given \(n \in \mathbb{N}\),

$$\displaystyle\begin{array}{rcl} n \cdot n!\int f_{n}^{2}d\mu ^{n} = \frac{1} {(n - 1)!}\exp [2\mu (e^{-v} - 1)](\mu ((e^{-v} - 1)^{2}))^{n}& & {}\\ \end{array}$$

which is summable in n, so (38) holds in this case. Also, in this case, \(D_{x}\,f(\eta ) = (e^{-v(x)} - 1)f(\eta )\) by (13), while \(f_{n}(x,\cdot ) = (e^{-v(x)} - 1)n^{-1}f_{n-1}\) so that by (40),

$$\displaystyle{D'_{x}f(\eta ) =\sum _{ n=1}^{\infty }(e^{-v(x)} - 1)I_{ n-1}(\,f_{n-1}) = (e^{-v(x)} - 1)f(\eta )}$$

where the last equality is from Lemma 5 again. Thus (48) holds for f of this form. By linearity this extends to all elements of G.

Let us now consider the general case. Choose g k  ∈ G, \(k \in \mathbb{N}\), such that G k : = g k (η) → F in \(L^{2}(\mathbb{P})\) as \(k \rightarrow \infty\), see Lemma 2. Let \(H \in L_{\eta }^{2}(\mathbb{P}\otimes \mu )\) have the representative \(h(\chi,x):= h'(\chi )\mathbb{1}_{B}(x)\), where h′ is as in Lemma 5 and \(B \in \mathcal{X}_{0}\). From Lemma 5 it is easy to see that (46) holds. Therefore we obtain from Proposition 2 and the linearity of the operator D′ that

$$\displaystyle\begin{array}{rcl} \mathbb{E}\int (D'_{x}F - D'_{x}G_{k})H(x)\mu (\mathrm{d}x) = \mathbb{E}(F - G_{k})\delta (H) \rightarrow 0\quad \mbox{ as $k \rightarrow \infty $}.& &{}\end{array}$$
(49)

On the other hand,

$$\displaystyle{\mathbb{E}\int (D_{x}F - D_{x}G_{k})H(x)\mu (\mathrm{d}x) = \mathbb{E}\int \limits _{B}(D_{x}\,f(\eta ) - D_{x}g_{k}(\eta ))h'(\eta )\mu (\mathrm{d}x),}$$

and by the case n = 1 of Lemma 3, this tends to zero as \(k \rightarrow \infty\). Since \(D_{x}G_{k} = D'_{x}G_{k}\) a.s. for μ-a.e. x we obtain from (49) that

$$\displaystyle\begin{array}{rcl} \mathbb{E}\int (D'_{x}F)h(\eta,x)\mu (\mathrm{d}x) = \mathbb{E}\int (D_{x}\,f(\eta ))h(\eta,x)\mu (\mathrm{d}x).& &{}\end{array}$$
(50)

By Lemma 2, the linear combinations of the functions h considered above are dense in \(L^{2}(\mathbb{P}_{\eta }\otimes \mu )\), and by linearity (50) carries through to h in this dense class of functions too, so we may conclude that the assertion (48) holds.

It follows from (41) and (48) that F ∈ dom D implies \(DF \in L_{\eta }^{2}(\mathbb{P}\otimes \mu )\). The other implication was noticed in [22, Lemma 3.1]. To prove it, we assume \(DF \in L_{\eta }^{2}(\mathbb{P}\otimes \mu )\) and apply the Fock space representation (17) to \(\mathbb{E}(D_{x}F)^{2}\) for μ-a.e. x. This gives

$$\displaystyle\begin{array}{rcl} \int \mathbb{E}(D_{x}F)^{2}\mu (\mathrm{d}x)& =& \sum _{ n=0}^{\infty } \frac{1} {n!}\iint (\mathbb{E}D_{x_{1},\ldots,x_{n},x}^{n+1}F)^{2}\mu ^{n}(\mathrm{d}(x_{ 1},\ldots,x_{n}))\mu (\mathrm{d}x) {}\\ & =& \sum _{n=0}^{\infty }(n + 1)(n + 1)!\|\,f_{ n+1}\|_{n+1}^{2} {}\\ \end{array}$$

and hence F ∈ dom D. □ 

The following duality relation (also referred to as partial integration, or integration by parts formula) shows that the operator δ is the adjoint of the difference operator D. It is a special case of Proposition 4.2 in [20], which applies to general Fock spaces.

Theorem 4

Let F ∈dom  D and H ∈dom  δ. Then,

$$\displaystyle\begin{array}{rcl} \mathbb{E}\int (D_{x}F)H(x)\mu (\mathrm{d}x) = \mathbb{E}F\delta (H).& &{}\end{array}$$
(51)

Proof

We fix F ∈ dom D. Theorem 3 and Proposition 2 imply that (51) holds if \(H \in L_{\eta }^{2}(\mathbb{P}\otimes \mu )\) satisfies the stronger assumption (46). For any \(m \in \mathbb{N}\) we define

$$\displaystyle\begin{array}{rcl} H^{(m)}(x):=\sum _{ n=0}^{m}I_{ n}(h_{n}(x,\cdot )),\quad x \in \mathbb{X}.& &{}\end{array}$$
(52)

Since H (m) satisfies (46) we obtain that

$$\displaystyle\begin{array}{rcl} \mathbb{E}\int (D_{x}F)H^{(m)}(x)\mu (\mathrm{d}x) = \mathbb{E}F\delta (H^{(m)}).& &{}\end{array}$$
(53)

From (28) we have

$$\displaystyle\begin{array}{rcl} \int \mathbb{E}(H(x) - H^{(m)}(x))^{2}\mu (\mathrm{d}x)& =& \int \bigg(\sum _{ n=m+1}^{\infty }n!\|h_{ n}(x,\cdot )\|_{n}^{2}\bigg)\mu (\mathrm{d}x) {}\\ & =& \sum _{n=m+1}^{\infty }n!\|h_{ n}\|_{n+1}^{2}. {}\\ \end{array}$$

As \(m \rightarrow \infty\) this tends to zero, since

$$\displaystyle\begin{array}{rcl} \mathbb{E}\int H(x)^{2}\mu (\mathrm{d}x) =\int \mathbb{E}(H(x))^{2}\mu (\mathrm{d}x) =\sum _{ n=0}^{\infty }n!\|h_{ n}\|_{n+1}^{2}& & {}\\ \end{array}$$

is finite. It follows that the left-hand side of (53) tends to the left-hand side of (51).

To treat the right-hand side of (53) we note that

$$\displaystyle\begin{array}{rcl} \mathbb{E}\delta (H - H^{(m)})^{2} =\sum _{ n=m+1}^{\infty }\mathbb{E}(I_{ n+1}(h_{n}))^{2} =\sum _{ n=m+1}^{\infty }(n + 1)!\|\tilde{h}_{ n}\|_{n+1}^{2}.& &{}\end{array}$$
(54)

Since H ∈ dom δ this tends to 0 as \(m \rightarrow \infty\). Therefore \(\mathbb{E}(\delta (H) -\delta (H^{(m)}))^{2} \rightarrow 0\) and the right-hand side of (53) tends to the right-hand side of (51). □ 

We continue with a basic isometry property of the Kabanov–Skorohod integral . In the present generality the result is in [17]. A less general version is [24, Proposition 6.5.4].

Theorem 5

Let \(H \in L_{\eta }^{2}(\mathbb{P}\otimes \mu )\) be such that

$$\displaystyle\begin{array}{rcl} \mathbb{E}\iint (D_{y}H(x))^{2}\mu (\mathrm{d}x)\mu (\mathrm{d}y) <\infty.& &{}\end{array}$$
(55)

Then, H ∈dom  δ and moreover

$$\displaystyle\begin{array}{rcl} \mathbb{E}\delta (H)^{2} = \mathbb{E}\int H(x)^{2}\mu (\mathrm{d}x) + \mathbb{E}\iint D_{ y}H(x)D_{x}H(y)\mu (\mathrm{d}x)\mu (\mathrm{d}y).& &{}\end{array}$$
(56)

Proof

Suppose that H is given as in (42). Assumption (55) implies that H(x) ∈ dom D for μ-a.e. \(x \in \mathbb{X}\). We therefore deduce from Theorem 3 that

$$\displaystyle\begin{array}{rcl} g(x,y):= D_{y}H(x) =\sum _{ n=1}^{\infty }nI_{ n-1}(h_{n}(x,y,\cdot ))& & {}\\ \end{array}$$

\(\mathbb{P}\)-a.s. and for μ 2-a.e. \((x,y) \in \mathbb{X}^{2}\). Using assumption (55) together with the isometry properties (28), we infer that

$$\displaystyle\begin{array}{rcl} \sum _{n=1}^{\infty }nn!\|\tilde{h}_{ n}\|_{n+1}^{2} \leq \sum _{ n=1}^{\infty }nn!\|h_{ n}\|_{n+1}^{2} = \mathbb{E}\iint (D_{ y}H(x))^{2}\mu (\mathrm{d}x)\mu (\mathrm{d}y) <\infty,& & {}\\ \end{array}$$

yielding that H ∈ dom δ.

Now we define H (m) ∈ dom δ, \(m \in \mathbb{N}\), by (52) and note that

$$\displaystyle\begin{array}{rcl} \mathbb{E}\delta (H^{(m)})^{2} =\sum _{ n=0}^{m}\mathbb{E}I_{ n+1}(\tilde{h}_{n})^{2} =\sum _{ n=0}^{m}(n + 1)!\|\tilde{h}_{ n}\|_{n+1}^{2}.& & {}\\ \end{array}$$

Using the symmetry properties of the functions h n it is easy to see that the latter sum equals

$$\displaystyle\begin{array}{rcl} \sum _{n=0}^{m}n!\int h_{ n}^{2}d\mu ^{n+1} +\sum _{ n=1}^{m}nn!\iint h_{ n}(x,y,z)h_{n}(y,x,z)\mu ^{2}(\mathrm{d}(x,y))\mu ^{n-1}(\mathrm{d}z).& &{}\end{array}$$
(57)

On the other hand, we have from Theorem 3 that

$$\displaystyle{D_{y}H^{(m)}(x) =\sum _{ n=1}^{m}nI_{ n-1}(h_{n}(x,y,\cdot )),}$$

so that

$$\displaystyle\begin{array}{rcl} \mathbb{E}\int H^{(m)}(x)^{2}\mu (\mathrm{d}x) + \mathbb{E}\iint D_{ y}H^{(m)}(x)D_{ x}H^{(m)}(y)\mu (\mathrm{d}x)\mu (\mathrm{d}y)& & {}\\ \end{array}$$

coincides with (57). Hence

$$\displaystyle\begin{array}{rcl} \mathbb{E}\delta (H^{(m)})^{2} = \mathbb{E}\int H^{(m)}(x)^{2}\mu (\mathrm{d}x) + \mathbb{E}\iint D_{ y}H^{(m)}(x)D_{ x}H^{(m)}(y)\mu (\mathrm{d}x)\mu (\mathrm{d}y).& &{}\end{array}$$
(58)

These computations imply that g m (x, y): = D y H (m)(x) converges in \(L^{2}(\mathbb{P} \otimes \mu ^{2})\) towards g. Similarly, g′ m (x, y): = D x H (m)(y) converges towards g′(x, y): = D x H(y). Since we have seen in the proof of Theorem 4 that H (m) → H in \(L^{2}(\mathbb{P}\otimes \mu )\) as \(m \rightarrow \infty\), we can now conclude that the right-hand side of (58) tends to the right-hand side of the asserted identity (56). On the other hand we know by (54) that \(\mathbb{E}\delta (H^{(m)})^{2} \rightarrow \mathbb{E}\delta (H)^{2}\) as \(m \rightarrow \infty\). This concludes the proof. □ 

To explain the connection of (55) with classical stochastic analysis we assume for a moment that \(\mathbb{X}\) is equipped with a transitive binary relation < such that {(x, y): x < y} is a measurable subset of \(\mathbb{X}^{2}\) and such that x < x fails for all \(x \in \mathbb{X}\). We also assume that < totally orders the points of \(\mathbb{X}\) μ-a.e., that is

$$\displaystyle\begin{array}{rcl} \mu ([x]) = 0,\quad x \in \mathbb{X},& &{}\end{array}$$
(59)

where \([x]:= \mathbb{X}\setminus \{y \in \mathbb{X}: \mbox{ $y <x$ or $x <y$}\}\). For any χ ∈ N σ let χ x denote the restriction of χ to \(\{y \in \mathbb{X}: y <x\}\). Our final assumption on < is that (χ, y) ↦ χ y is measurable. A measurable function \(h: \mathbf{N}_{\sigma } \times \mathbb{X} \rightarrow \mathbb{R}\) is called predictable if

$$\displaystyle\begin{array}{rcl} h(\chi,x) = h(\chi _{x},x),\quad (\chi,x) \in \mathbf{N}_{\sigma } \times \mathbb{X}.& &{}\end{array}$$
(60)

A process \(H \in L_{\eta }^{0}(\mathbb{P}\otimes \mu )\) is predictable if it has a predictable representative. In this case we have \(\mathbb{P}\otimes \mu\)-a.e. that D x H(y) = 0 for y < x and D y H(x) = 0 for x < y. In view of (59) we obtain from (56) the classical Itô isometry

$$\displaystyle\begin{array}{rcl} \mathbb{E}\delta (H)^{2} = \mathbb{E}\int H(x)^{2}\mu (\mathrm{d}x).& &{}\end{array}$$
(61)

In fact, a combinatorial argument shows that any predictable \(H \in L_{\eta }^{2}(\mathbb{P}\otimes \mu )\) is in the domain of δ. We refer to [14] for more detail and references to the literature.

We return to the general setting and derive a pathwise interpretation of the Kabanov–Skorohod integral. For \(H \in L_{\eta }^{1}(\mathbb{P}\otimes \mu )\) with representative h we define

$$\displaystyle\begin{array}{rcl} \delta '(H):=\int h(\eta -\delta _{x},x)\eta (\mathrm{d}x) -\int h(\eta,x)\mu (\mathrm{d}x).& &{}\end{array}$$
(62)

The Mecke equation (7) implies that this definition is \(\mathbb{P}\)-a.s. independent of the choice of the representative. The next result (see [13]) shows that the Kabanov–Skorohod integral and the operator δ′ coincide on the intersection of their domains. In the case of a diffuse intensity measure μ (and requiring some topological assumptions on \((\mathbb{X},\mathcal{X})\)) the result is implicit in [23].
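
As an illustration of (62) (a toy computation of our own; the integrand is an arbitrary choice), take μ = λ·Lebesgue on [0, 1], B = [0, b] and \(h(\chi,x):=\chi (\mathbb{X})\mathbb{1}_{B}(x)\). Then δ′(H) = (N − 1)η(B) − Nμ(B) with N = η(\(\mathbb{X}\)), and the duality (51) with F = η(\(\mathbb{X}\)), so that D x F ≡ 1, predicts \(\mathbb{E}F\delta '(H) = \mathbb{E}\int H(x)\mu (\mathrm{d}x) =\lambda ^{2}b\) (using Theorem 6 below to identify δ′ with δ):

```python
import numpy as np

rng = np.random.default_rng(5)
lam, b = 2.0, 0.5                          # mu = lam * Lebesgue on [0, 1], B = [0, b]
n_sim = 300_000

F = np.empty(n_sim)
d = np.empty(n_sim)
for i in range(n_sim):
    pts = rng.uniform(size=rng.poisson(lam))
    N, NB = len(pts), np.count_nonzero(pts < b)
    F[i] = N
    d[i] = (N - 1.0) * NB - N * lam * b    # delta'(H) evaluated pathwise via (62)

print(d.mean())                            # E delta'(H) = 0
print((F * d).mean(), lam**2 * b)          # duality (51): both close to 2.0
```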

Theorem 6

Let \(H \in L_{\eta }^{1}(\mathbb{P}\otimes \mu ) \cap \mathrm{dom}\,\delta\) . Then \(\delta (H) =\delta '(H)\ \mathbb{P}\) -a.s.

Proof

Let H have representative h. The Mecke equation (7) shows the integrability \(\mathbb{E}\int \vert h(\eta -\delta _{x},x)\vert \eta (\mathrm{d}x) <\infty\) as well as

$$\displaystyle\begin{array}{rcl} \mathbb{E}\int D_{x}\,f(\eta )h(\eta,x)\mu (\mathrm{d}x) = \mathbb{E}f(\eta )\delta '(H),& &{}\end{array}$$
(63)

whenever \(f: \mathbf{N}_{\sigma } \rightarrow \mathbb{R}\) is measurable and bounded. Therefore we obtain from (51) that \(\mathbb{E}F\delta '(H) = \mathbb{E}F\delta (H)\) provided that F: = f(η) ∈ dom D. By Lemma 2 the space of such bounded random variables is dense in \(L_{\eta }^{2}(\mathbb{P})\), so we may conclude that the assertion holds. □ 

Finally in this section we discuss the Ornstein–Uhlenbeck generator L whose domain dom L is given by the class of all F ∈ L η 2 satisfying

$$\displaystyle{\sum _{n=1}^{\infty }n^{2}n!\|\,f_{ n}\|_{n}^{2} <\infty.}$$

In this case one defines

$$\displaystyle\begin{array}{rcl} LF:= -\sum _{n=1}^{\infty }nI_{ n}(\,f_{n}).& & {}\\ \end{array}$$

The (pseudo) inverse L −1 of L is given by

$$\displaystyle\begin{array}{rcl} L^{-1}F:= -\sum _{ n=1}^{\infty }\frac{1} {n}I_{n}(\,f_{n}).& &{}\end{array}$$
(64)

The random variable L −1 F is well defined for any F ∈ L η 2 . Moreover, (37) implies that \(L^{-1}F \in \mathrm{dom}\,L\). The identity \(LL^{-1}F = F\), however, holds only if \(\mathbb{E}F = 0\).

The three Malliavin operators D, δ, and L are connected by a simple formula:

Proposition 3

Let F ∈dom  L. Then F ∈dom  D, DF ∈dom  δ and \(\delta (DF) = -LF\) .

Proof

The relationship F ∈ dom D is a direct consequence of (38). Let H: = DF. By Theorem 3 we can apply (43) with \(h_{n}:= (n + 1)f_{n+1}\). We have

$$\displaystyle\begin{array}{rcl} \sum _{n=0}^{\infty }(n + 1)!\|h_{ n}\|_{n+1}^{2} =\sum _{ n=0}^{\infty }(n + 1)!(n + 1)^{2}\|\,f_{ n+1}\|_{n+1}^{2}& & {}\\ \end{array}$$

showing that H ∈ dom δ. Moreover, since \(I_{n+1}(\tilde{h}_{n}) = I_{n+1}(h_{n})\) it follows that

$$\displaystyle\begin{array}{rcl} \delta (DF) =\sum _{ n=0}^{\infty }I_{ n+1}(h_{n}) =\sum _{ n=0}^{\infty }(n + 1)I_{ n+1}(\,f_{n+1}) = -LF,& & {}\\ \end{array}$$

finishing the proof. □ 

The following pathwise representation shows that the Ornstein–Uhlenbeck generator can be interpreted as the generator of a free birth and death process on \(\mathbb{X}\).

Proposition 4

Let F ∈dom  L with representative f and assume \(DF \in L_{\eta }^{1}(\mathbb{P}\otimes \mu )\) . Then

$$\displaystyle\begin{array}{rcl} LF =\int (\,f(\eta -\delta _{x}) - f(\eta ))\eta (\mathrm{d}x) +\int (\,f(\eta +\delta _{x}) - f(\eta ))\mu (\mathrm{d}x).& &{}\end{array}$$
(65)

Proof

We use Proposition 3. Since \(DF \in L_{\eta }^{1}(\mathbb{P}\otimes \mu )\) we can apply Theorem 6 and the result follows by a straightforward calculation. □ 
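
To see (65) in action, the sketch below (our own; v, λ and the crude quadrature are arbitrary choices) evaluates LF pathwise for \(f(\chi ) = e^{-\chi (v)}\) and checks the identity \(\mathbb{E}F\,LF = -\mathbb{E}\int (D_{x}F)^{2}\mu (\mathrm{d}x)\), which follows from Proposition 3 and Theorem 4; for this f the right-hand side equals \(-\mu ((e^{-v} - 1)^{2})\,\mathbb{E}F^{2}\):

```python
import numpy as np

rng = np.random.default_rng(6)
lam = 2.0                                  # mu = lam * Lebesgue on X = [0, 1]
v = lambda x: 3.0 * x
f = lambda pts: np.exp(-v(pts).sum())

def L_pathwise(pts, n_quad=50):            # right-hand side of (65)
    death = sum(f(np.delete(pts, i)) - f(pts) for i in range(len(pts)))
    xs = rng.uniform(size=n_quad)          # crude Monte Carlo quadrature of the mu-integral
    birth = lam * np.mean([f(np.append(pts, x)) - f(pts) for x in xs])
    return death + birth

samples = [rng.uniform(size=rng.poisson(lam)) for _ in range(30_000)]
mc = np.mean([f(p) * L_pathwise(p) for p in samples])

# closed form: -mu((e^{-v} - 1)^2) * E F^2 with E F^2 = exp[-mu(1 - e^{-2v})]
c1 = lam * ((1 - np.exp(-6.0)) / 6 - 2 * (1 - np.exp(-3.0)) / 3 + 1)
c2 = np.exp(-lam * (1 - (1 - np.exp(-6.0)) / 6))
print(mc, -c1 * c2)
```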

6 Products of Wiener–Itô Integrals

In this section we study the chaos expansion of I p ( f)I q (g), where f ∈ L s 2(μ p) and g ∈ L s 2(μ q) for \(p,q \in \mathbb{N}\). We define for any \(r \in \{ 0,\ldots,p \wedge q\}\) (where \(p \wedge q:=\min \{ p,q\}\)) and \(l \in \{ 0,\ldots,r\}\) the contraction \(f \star _{ r}^{l}g: \mathbb{X}^{p+q-r-l} \rightarrow \mathbb{R}\) by

$$\displaystyle\begin{array}{rcl} & & f \star _{ r}^{l}g(x_{ 1},\ldots,x_{p+q-r-l}) \\ & & \quad:=\int f(y_{1},\ldots,y_{l},x_{1},\ldots,x_{p-l}) \\ & & \qquad \times g(y_{1},\ldots,y_{l},x_{1},\ldots,x_{r-l},x_{p-l+1},\ldots,x_{p+q-r-l})\mu ^{l}(\mathrm{d}(y_{ 1},\ldots,y_{l})),{}\end{array}$$
(66)

whenever these integrals are well defined. In particular \(f \star _{ 0}^{0}g = f \otimes g\).
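
On a finite state space the contractions (66) are ordinary tensor contractions. A minimal sketch of our own (assuming \(\mathbb{X} =\{ 0,\ldots,K - 1\}\) with weights w playing the role of μ), listing every pair (r, l) for p = q = 2:

```python
import numpy as np

rng = np.random.default_rng(7)
K = 5
w = rng.uniform(0.1, 1.0, size=K)                # mu({k}) = w[k]
A = rng.normal(size=(K, K)); f = (A + A.T) / 2   # symmetric f, standing in for L_s^2(mu^2)
B = rng.normal(size=(K, K)); g = (B + B.T) / 2   # symmetric g

c00 = np.einsum('ab,cd->abcd', f, g)         # r=0, l=0: the tensor product f (x) g
c10 = np.einsum('ab,ac->abc', f, g)          # r=1, l=0: one argument identified
c11 = np.einsum('yb,yc,y->bc', f, g, w)      # r=1, l=1: one argument integrated out
c21 = np.einsum('yb,yb,y->b', f, g, w)       # r=2, l=1: one integrated, one identified
c22 = np.einsum('yz,yz,y,z->', f, g, w, w)   # r=2, l=2: full contraction <f, g>_2
```

These kernels are exactly the ones entering the product formula (67) below for \(I_{2}(\,f)I_{2}(g)\).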

In the case q = 1 the next result was proved in [10]. The general case is treated in [28], though under less explicit integrability assumptions and for diffuse intensity measure. Our proof is quite different.

Proposition 5

Let \(f \in L_{s}^{2}(\mu ^{p})\) and \(g \in L_{s}^{2}(\mu ^{q})\) and assume \(f \star _{ r}^{l}g \in L^{2}(\mu ^{p+q-r-l})\) for all \(r \in \{ 0,\ldots,p \wedge q\}\) and \(l \in \{ 0,\ldots,r - 1\}\) . Then

$$\displaystyle\begin{array}{rcl} I_{p}(\,f)I_{q}(g) =\sum _{ r=0}^{p\wedge q}r!\binom{p}{r}\binom{q}{r}\sum _{ l=0}^{r}\binom{r}{l}I_{ p+q-r-l}(\,f \star _{ r}^{l}g),\quad \mathbb{P}\text{-a.s.}& &{}\end{array}$$
(67)

Proof

First note that the Cauchy–Schwarz inequality implies \(f \star _{ r}^{r}g \in L^{2}(\mu ^{p+q-2r})\) for all \(r \in \{ 0,\ldots,p \wedge q\}\).

We prove (67) by induction on p + q. For pq = 0 the assertion is trivial. For the induction step we assume that pq ≥ 1. If F, G ∈ L η 0, then an easy calculation shows that

$$\displaystyle\begin{array}{rcl} D_{x}(FG) = (D_{x}F)G + F(D_{x}G) + (D_{x}F)(D_{x}G)& &{}\end{array}$$
(68)

holds \(\mathbb{P}\)-a.s. and for μ-a.e. \(x \in \mathbb{X}\). Using this together with Theorem 3 we obtain that

$$\displaystyle\begin{array}{rcl} D_{x}(I_{p}(\,f)I_{q}(g)) = pI_{p-1}(\,f_{x})I_{q}(g) + qI_{p}(\,f)I_{q-1}(g_{x}) + pqI_{p-1}(\,f_{x})I_{q-1}(g_{x}),& & {}\\ \end{array}$$

where f x : = f(x, ⋅ ) and g x : = g(x, ⋅ ). We aim at applying the induction hypothesis to each of the summands on the above right-hand side. To do so, we note that

$$\displaystyle\begin{array}{rcl} (f_{x} \star _{ r}^{l}g)(x_{ 1},\ldots,x_{p-1+q-r-l}) = f \star _{ r}^{l}g(x_{ 1},\ldots,x_{p-1-l},x,x_{p-1-l+1}\ldots,x_{p-1+q-r-l})& & {}\\ \end{array}$$

for all \(r \in \{ 0,\ldots,(p - 1) \wedge q\}\) and \(l \in \{ 0,\ldots,r\}\) and

$$\displaystyle\begin{array}{rcl} (\,f_{x} \star _{ r}^{l}g_{ x})(x_{1},\ldots,x_{p-1+q-1-r-l}) = f \star _{ r+1}^{l}g(x,x_{ 1},\ldots,x_{p-1+q-1-r-l})& & {}\\ \end{array}$$

for all \(r \in \{ 0,\ldots,(p - 1) \wedge (q - 1)\}\) and \(l \in \{ 0,\ldots,r\}\). Therefore the pairs ( f x , g), ( f, g x ) and ( f x , g x ) satisfy for μ-a.e. \(x \in \mathbb{X}\) the assumptions of the proposition. The induction hypothesis implies that

$$\displaystyle\begin{array}{rcl} D_{x}(I_{p}(\,f)I_{q}(g))& =& \sum _{r=0}^{(p-1)\wedge q}r!p\binom{p - 1}{r}\binom{q}{r}\sum _{ l=0}^{r}\binom{r}{l}I_{ p+q-1-r-l}(\,f_{x} \star _{ r}^{l}g) {}\\ & & +\sum _{r=0}^{p\wedge (q-1)}r!q\binom{p}{r}\binom{q - 1}{r}\sum _{ l=0}^{r}\binom{r}{l}I_{ p+q-1-r-l}(\,f \star _{ r}^{l}g_{ x}) {}\\ & & +\sum _{r=0}^{(p-1)\wedge (q-1)}r!pq\binom{p - 1}{r}\binom{q - 1}{r}\sum _{ l=0}^{r}\binom{r}{l}I_{ p+q-2-r-l}(\,f_{x} \star _{ r}^{l}g_{ x}).{}\\ \end{array}$$

A straightforward but tedious calculation (left to the reader) implies that the above right-hand side equals

$$\displaystyle\begin{array}{rcl} \sum _{r=0}^{p\wedge q}r!\binom{p}{r}\binom{q}{r}\sum _{ l=0}^{r}\binom{r}{l}(p + q - r - l)I_{ p+q-r-l-1}((\widetilde{f \star _{ r}^{l}g})_{ x}),& & {}\\ \end{array}$$

where the summand for \(p + q - r - l = 0\) has to be interpreted as 0. It follows that

$$\displaystyle\begin{array}{rcl} D_{x}(I_{p}(\,f)I_{q}(g)) = D_{x}G,\quad \mathbb{P}\text{-a.s.},\,\mu \mbox{ -a.e. $x \in \mathbb{X}$},& & {}\\ \end{array}$$

where G denotes the right-hand side of (67). On the other hand, the isometry properties (28) show that \(\mathbb{E}I_{p}(\,f)I_{q}(g) = \mathbb{E}G\). Since \(I_{p}(\,f)I_{q}(g) \in L_{\eta }^{1}(\mathbb{P})\) we can use the Poincaré inequality of Corollary 1 in Sect. 8 to conclude that

$$\displaystyle\begin{array}{rcl} \mathbb{E}(I_{p}(\,f)I_{q}(g) - G)^{2} = 0.& & {}\\ \end{array}$$

This finishes the induction and the result is proved. □ 

If \(\{\,f\neq 0\} \subset B^{p}\) and \(\{g\neq 0\} \subset B^{q}\) for some \(B \in \mathcal{X}_{0}\) (as in Lemma 4), then (67) can be established by a direct computation, starting from (30). The argument is similar to the proof of Theorem 3.1 in [16]. The required integrability follows from the Cauchy–Schwarz inequality; see [16, Remark 3.1]. In the case q ≥ 2 we do not see, however, how to get from this special to the general case via approximation.

Equation (67) can be further generalized so as to cover the case of a finite product of Wiener–Itô integrals. We again refer the reader to [28] as well as to [16, 21].

7 Mehler’s Formula

In this section we assume that η is a proper Poisson process. We shall derive a pathwise representation of the inverse (64) of the Ornstein–Uhlenbeck generator.

To give the idea we define for F ∈ L η 2 with representation (36)

$$\displaystyle\begin{array}{rcl} T_{s}F:= \mathbb{E}F +\sum _{ n=1}^{\infty }e^{-ns}I_{ n}(\,f_{n}),\quad s \geq 0.& &{}\end{array}$$
(69)

The family \(\{T_{s}: s \geq 0\}\) is the Ornstein–Uhlenbeck semigroup, see e.g. [24] and also [19] for the Gaussian case. If \(F \in \mathrm{dom}\,L\) then it is easy to see that

$$\displaystyle{\lim _{s\rightarrow 0}\frac{T_{s}F - F} {s} = LF}$$

in \(L^{2}(\mathbb{P})\), see [19, Proposition 1.4.2] for the Gaussian case. Hence L can indeed be interpreted as the generator of the semigroup. But in the theory of Markov processes it is well known (see, e.g., the resolvent identities in [12, Theorem 19.4]) that

$$\displaystyle\begin{array}{rcl} L^{-1}F = -\int \limits _{ 0}^{\infty }T_{ s}F\,\mathrm{d}s,& &{}\end{array}$$
(70)

at least under certain assumptions. What we therefore need is a pathwise representation of the operators \(T_{s}\). Our guiding star is the birth and death representation in Proposition 4.
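Before constructing that representation, it is instructive to check (70) against (64) at the level of chaos expansions; this is a formal computation (ours), valid when summation and integration may be interchanged. If \(\mathbb{E}F = 0\), then (69) gives

$$\displaystyle{-\int \limits _{0}^{\infty }T_{s}F\,\mathrm{d}s = -\sum _{n=1}^{\infty }I_{n}(\,f_{n})\int \limits _{0}^{\infty }e^{-ns}\,\mathrm{d}s = -\sum _{n=1}^{\infty }\frac{1} {n}I_{n}(\,f_{n}),}$$

which is exactly the chaos expansion (64) of \(L^{-1}F\).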

For \(F \in L_{\eta }^{1}\) with representative f we define

$$\displaystyle\begin{array}{rcl} P_{s}F:=\int \mathbb{E}[f(\eta ^{(s)}+\chi )\mid \eta ]\varPi _{ (1-s)\mu }(\mathrm{d}\chi ),\quad s \in [0,1],& &{}\end{array}$$
(71)

where \(\eta ^{(s)}\) is an s-thinning of η and where \(\varPi _{\mu '}\) denotes the distribution of a Poisson process with intensity measure μ′. The thinning \(\eta ^{(s)}\) can be defined by removing the points in (6) independently of each other with probability 1 − s; see [12, p. 226]. (A simulation sketch follows after (74) below.) Since

$$\displaystyle\begin{array}{rcl} \varPi _{\mu } = \mathbb{E}\bigg[\int\mathbb{1}\{\eta ^{(s)}+\chi \in \cdot \}\varPi _{ (1-s)\mu }(\mathrm{d}\chi )\bigg],& &{}\end{array}$$
(72)

this definition is \(\mathbb{P}\)-a.s. independent of the choice of representative of F. Equation (72) implies in particular that

$$\displaystyle\begin{array}{rcl} \mathbb{E}P_{s}F = \mathbb{E}F,\quad F \in L_{\eta }^{1},& &{}\end{array}$$
(73)

while Jensen’s inequality implies for any p ≥ 1 the contractivity property

$$\displaystyle\begin{array}{rcl} \mathbb{E}\vert P_{s}F\vert ^{p} \leq \mathbb{E}\vert F\vert ^{p},\quad s \in [0,1],\,F \in L_{\eta }^{p}.& &{}\end{array}$$
(74)
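Both the s-thinning and the operator \(P_{s}\) are straightforward to simulate. The following minimal sketch (ours; the function names and the choice μ = λ·Lebesgue on [0,1] are our assumptions, not taken from the sources cited above) checks the invariance (73) by Monte Carlo, approximating the inner χ-integral in (71) by a single sample, which leaves the expectation unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
lam, s = 100.0, 0.3

def poisson_process(intensity, rng):
    """Sample a homogeneous Poisson process on [0, 1] with the given
    intensity, returned as an array of points (a realization of eta)."""
    return rng.uniform(0.0, 1.0, size=rng.poisson(intensity))

def thinning(points, s, rng):
    """s-thinning: keep each point independently with probability s,
    i.e. remove it with probability 1 - s; eta^(s) is Poisson(s * mu)."""
    return points[rng.uniform(size=points.size) < s]

def P_s_sample(points, s, intensity, rng):
    """One sample of eta^(s) + chi, with chi an independent Poisson
    process of intensity (1 - s) * intensity, cf. the integrand in (71)."""
    chi = poisson_process((1 - s) * intensity, rng)
    return np.concatenate([thinning(points, s, rng), chi])

def f(points):
    """Example functional: F = eta([0, 1/2]) squared."""
    return float(np.sum(points < 0.5)) ** 2

# E[P_s F] = E[F], cf. (73); both means should be near lam/2 + (lam/2)**2 = 2550.
vals_F, vals_PsF = [], []
for _ in range(5_000):
    eta = poisson_process(lam, rng)
    vals_F.append(f(eta))
    vals_PsF.append(f(P_s_sample(eta, s, lam, rng)))
print(np.mean(vals_F), np.mean(vals_PsF))
```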

We prepare the main result of this section with the following crucial lemma from [17].

Lemma 6

Let \(F \in L_{\eta }^{2}\). Then, for all \(n \in \mathbb{N}\) and s ∈ [0,1],

$$\displaystyle\begin{array}{rcl} D_{x_{1},\ldots,x_{n}\,}^{n}(P_{ s}F) = s^{n}P_{ s}D_{x_{1},\ldots,x_{n}\,}^{n}F,\quad \mu ^{n}\text{-a.e. }(x_{ 1},\ldots,x_{n}) \in \mathbb{X}^{n},\; \mathbb{P}\text{-a.s.}& &{}\end{array}$$
(75)

In particular

$$\displaystyle\begin{array}{rcl} \mathbb{E}D_{x_{1},\ldots,x_{n}\,}^{n}P_{ s}F = s^{n}\mathbb{E}D_{ x_{1},\ldots,x_{n}\,}^{n}F,\quad \mu ^{n}\text{-a.e.}(x_{ 1},\ldots,x_{n}) \in \mathbb{X}^{n}.& &{}\end{array}$$
(76)

Proof

To begin with, we assume that the representative of F is given by \(f(\chi ) = e^{-\chi (v)}\) for some \(v: \mathbb{X} \rightarrow [0,\infty )\) such that \(\mu (\{v> 0\}) <\infty \). By the definition of an s-thinning,

$$\displaystyle{ \mathbb{E}\big[e^{-\eta ^{(s)}(v) }\mid \eta \big] =\exp \bigg [\int \log \big((1 - s) + se^{-v(y)}\big)\eta (\mathrm{d}y)\bigg], }$$
(77)

and it follows from Lemma 12.2 in [12] that

$$\displaystyle{\int \exp (-\chi (v))\varPi _{(1-s)\mu }(\mathrm{d}\chi ) =\exp \bigg [-(1 - s)\int (1 - e^{-v})\mathrm{d}\mu \bigg].}$$

Hence, the definition (71) of the operator \(P_{s}\) implies that the following function \(f_{s}\) is a representative of \(P_{s}F\):

$$\displaystyle\begin{array}{rcl} f_{s}(\chi ):=\exp \bigg [-(1 - s)\int \big(1 - e^{-v}\big)\mathrm{d}\mu \bigg]\exp \bigg[\int \log \big((1 - s) + se^{-v(y)}\big)\chi (\mathrm{d}y)\bigg].& & {}\\ \end{array}$$

Therefore we obtain for any \(x \in \mathbb{X}\) that

$$\displaystyle\begin{array}{rcl} D_{x}P_{s}F = f_{s}(\eta +\delta _{x}) - f_{s}(\eta ) = s\big(e^{-v(x)} - 1\big)f_{ s}(\eta ) = s\big(e^{-v(x)} - 1\big)P_{ s}F.& & {}\\ \end{array}$$

This identity can be iterated to yield for all \(n \in \mathbb{N}\) and all \((x_{1},\ldots,x_{n}) \in \mathbb{X}^{n}\) that

$$\displaystyle\begin{array}{rcl} D_{x_{1},\ldots,x_{n}\,}^{n}P_{ s}F = s^{n}\prod _{ i=1}^{n}\big(e^{-v(x_{i})} - 1\big)P_{ s}F.& & {}\\ \end{array}$$

On the other hand we have \(\mathbb{P}\)-a.s. that

$$\displaystyle\begin{array}{rcl} P_{s}D_{x_{1},\ldots,x_{n}\,}^{n}F = P_{ s}\prod _{i=1}^{n}\big(e^{-v(x_{i})} - 1\big)F =\prod _{ i=1}^{n}\big(e^{-v(x_{i})} - 1\big)P_{ s}F,& & {}\\ \end{array}$$

so that (75) holds for Poisson functionals of the given form.

By linearity, (75) extends to all F with a representative in the set G of all linear combinations of functions f as above. There are \(f_{k} \in G\), \(k \in \mathbb{N}\), satisfying \(F_{k}:= f_{k}(\eta ) \rightarrow F = f(\eta )\) in \(L^{2}(\mathbb{P})\) as k → ∞, where f is a representative of F (see [13, Lemma 2.1]). Therefore we obtain from the contractivity property (74) that

$$\displaystyle\begin{array}{rcl} \mathbb{E}[(P_{s}F_{k} - P_{s}F)^{2}] = \mathbb{E}[(P_{ s}(F_{k} - F))^{2}] \leq \mathbb{E}[(F_{ k} - F)^{2}] \rightarrow 0,& & {}\\ \end{array}$$

as k → ∞. Taking \(B \in \mathcal{X}\) with μ(B) < ∞, it therefore follows from [13, Lemma 2.3] that

$$\displaystyle\begin{array}{rcl} \mathbb{E}\int \limits _{B^{n}}\vert D_{x_{1},\ldots,x_{n}\,}^{n}P_{ s}F_{k} - D_{x_{1},\ldots,x_{n}\,}^{n}P_{ s}F\vert \mu ^{n}(\mathrm{d}(x_{1},\ldots,x_{n})) \rightarrow 0,& & {}\\ \end{array}$$

as k → ∞. On the other hand we obtain from the Fock space representation (17) that \(\mathbb{E}\vert D_{x_{1},\ldots,x_{n}\,}^{n}F\vert <\infty\) for \(\mu ^{n}\)-a.e. \((x_{1},\ldots,x_{n}) \in \mathbb{X}^{n}\), so that the linearity of \(P_{s}\) and (74) imply

$$\displaystyle\begin{array}{rcl} & & \mathbb{E}\int \limits _{B^{n}}\vert P_{s}D_{x_{1},\ldots,x_{n}\,}^{n}F_{ k} - P_{s}D_{x_{1},\ldots,x_{n}\,}^{n}F\vert \mu ^{n}(\mathrm{d}(x_{ 1},\ldots,x_{n})) {}\\ & & \quad \leq \int \limits _{B^{n}}\mathbb{E}\vert D_{x_{1},\ldots,x_{n}\,}^{n}(F_{ k} - F)\vert \mu ^{n}(\mathrm{d}(x_{1},\ldots,x_{n})). {}\\ \end{array}$$

Again, this latter integral tends to 0 as k → ∞. Since (75) holds for each \(F_{k}\), we obtain that (75) holds \(\mathbb{P} \otimes (\mu _{B})^{n}\)-a.e., and hence also \(\mathbb{P} \otimes \mu ^{n}\)-a.e.

Taking the expectation in (75) and using (73) proves (76). □ 
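The explicit representative \(f_{s}\) also makes the commutation relation (75) easy to verify numerically for the exponential functionals used above. The following sketch (ours) does this for n = 1 on a two-point state space \(\mathbb{X} =\{ 0,1\}\), where a counting measure χ is simply a vector of point counts.

```python
import numpy as np

mu = np.array([0.7, 1.3])     # intensity measure on the two-point space {0, 1}
v = np.array([0.5, 2.0])      # the function v; mu({v > 0}) is trivially finite
s = 0.4

def f_s(chi):
    """Explicit representative of P_s F for F = exp(-eta(v)), as in the proof."""
    const = np.exp(-(1 - s) * np.sum((1 - np.exp(-v)) * mu))
    return const * np.exp(np.sum(np.log((1 - s) + s * np.exp(-v)) * chi))

chi = np.array([3, 1])        # a fixed counting measure
for x in (0, 1):
    delta = np.zeros(2)
    delta[x] = 1.0
    lhs = f_s(chi + delta) - f_s(chi)            # D_x P_s F, evaluated at chi
    rhs = s * (np.exp(-v[x]) - 1) * f_s(chi)     # s P_s D_x F, evaluated at chi
    print(np.isclose(lhs, rhs))                  # True: (75) holds for n = 1
```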

The following theorem from [17] achieves the desired pathwise representation of the inverse Ornstein–Uhlenbeck operator.

Theorem 7

Let \(F \in L_{\eta }^{2}\). If \(\mathbb{E}F = 0\) then we have \(\mathbb{P}\)-a.s. that

$$\displaystyle\begin{array}{rcl} L^{-1}F = -\int \limits _{ 0}^{1}s^{-1}P_{ s}F\,\mathrm{d}s.& &{}\end{array}$$
(78)

Proof

Assume that F is given as in (36). Applying (36) to \(P_{s}F\) and using (76) yields

$$\displaystyle{ P_{s}F = \mathbb{E}F +\sum _{ n=1}^{\infty }s^{n}I_{ n}(\,f_{n}),\quad \mathbb{P}\text{-a.s.},\,s \in [0,1]. }$$
(79)

Furthermore,

$$\displaystyle{-\sum _{n=1}^{m} \frac{1} {n}I_{n}(\,f_{n}) = -\int \limits _{0}^{1}s^{-1}\sum _{ n=1}^{m}s^{n}I_{ n}(\,f_{n})\mathrm{d}s,\quad m \geq 1.}$$

Assume now that \(\mathbb{E}F = 0\). In view of (64) we need to show that the above right-hand side converges in \(L^{2}(\mathbb{P})\), as m → ∞, to the right-hand side of (78). Taking into account (79), it hence suffices to show that

$$\displaystyle\begin{array}{rcl} R_{m}:=\int \limits _{ 0}^{1}s^{-1}\bigg(P_{ s}F -\sum _{n=1}^{m}s^{n}I_{ n}(\,f_{n})\bigg)\mathrm{d}s =\int \limits _{ 0}^{1}s^{-1}\bigg(\sum _{ n=m+1}^{\infty }s^{n}I_{ n}(\,f_{n})\bigg)\mathrm{d}s& & {}\\ \end{array}$$

converges in \(L^{2}(\mathbb{P})\) to zero. Using the orthogonality relation \(\mathbb{E}I_{n}(\,f_{n})I_{k}(\,f_{k}) =\mathbb{1}\{k = n\}n!\|\,f_{n}\|_{n}^{2}\) we obtain

$$\displaystyle\begin{array}{rcl} \mathbb{E}R_{m}^{2} \leq \int \limits _{ 0}^{1}s^{-2}\mathbb{E}\bigg(\sum _{ n=m+1}^{\infty }s^{n}I_{ n}(\,f_{n})\bigg)^{2}\mathrm{d}s =\sum _{ n=m+1}^{\infty }n!\|\,f_{ n}\|_{n}^{2}\int \limits _{ 0}^{1}s^{2n-2}\mathrm{d}s& & {}\\ \end{array}$$

which tends to zero as m → ∞. □ 

Equation (79) implies Mehler’s formula

$$\displaystyle\begin{array}{rcl} P_{e^{-s}}F = \mathbb{E}F +\sum _{ n=1}^{\infty }e^{-ns}I_{ n}(\,f_{n}),\quad \mathbb{P}\text{-a.s.},\,s \geq 0,& &{}\end{array}$$
(80)

which was proved in [24] for the special case of a finite Poisson process with a diffuse intensity measure. The formula was originally established in a Gaussian setting, see, e.g., [19]. The family \(\{P_{e^{-s}}: s \geq 0\}\) of operators describes a special example of Glauber dynamics. Substituting \(s = e^{-t}\) in (78) and using (80) together with (69) recovers the identity (70).
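As a simple illustration (ours), let \(F =\eta (B) -\mu (B) = I_{1}(\mathbb{1}_{B})\) for some \(B \in \mathcal{X}\) with μ(B) < ∞. Thinning and superposition give \(P_{s}\eta (B) = s\eta (B) + (1 - s)\mu (B)\), and hence

$$\displaystyle{P_{s}F = sF,\qquad L^{-1}F = -\int \limits _{0}^{1}s^{-1}(sF)\,\mathrm{d}s = -F,}$$

in accordance with (79) (here \(f_{1} =\mathbb{1}_{B}\) and \(f_{n} = 0\) for n ≥ 2) and with (64).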

8 Covariance Identities

The fundamental Fock space isometry (17) can be rewritten in several other disguises. We give here two examples, starting with a covariance identity from [5] involving the operators \(P_{s}\).

Theorem 8

Assume that η is a proper Poisson process. Then, for any \(F,G \in \mathrm{dom}\,D\),

$$\displaystyle\begin{array}{rcl} \mathbb{E}FG = \mathbb{E}F\,\mathbb{E}G + \mathbb{E}\int \int \limits _{0}^{1}(D_{ x}F)(P_{t}D_{x}G)\mathrm{d}t\mu (\mathrm{d}x).& &{}\end{array}$$
(81)

Proof

The Cauchy–Schwarz inequality and the contractivity property (74) imply that

$$\displaystyle\begin{array}{rcl} \bigg(\mathbb{E}\int \int \limits _{0}^{1}\vert D_{ x}F\vert \vert P_{s}D_{x}G\vert \mathrm{d}s\mu (\mathrm{d}x)\bigg)^{2} \leq \mathbb{E}\int (D_{ x}F)^{2}\mu (\mathrm{d}x)\mathbb{E}\int (D_{ x}G)^{2}\mu (\mathrm{d}x)& & {}\\ \end{array}$$

which is finite due to Theorem 3. Therefore we can use Fubini’s theorem and (75) to obtain that the right-hand side of (81) equals

$$\displaystyle\begin{array}{rcl} \mathbb{E}F\,\mathbb{E}G +\int \int \limits _{ 0}^{1}s^{-1}\mathbb{E}(D_{ x}F)(D_{x}P_{s}G)\mathrm{d}s\mu (\mathrm{d}x).& &{}\end{array}$$
(82)

For s ∈ [0, 1] and μ-a.e. \(x \in \mathbb{X}\) we can apply the Fock space isometry of Theorem 1 to \(D_{x}F\) and \(D_{x}P_{s}G\). Taking into account Lemma 6 and (73), and applying Fubini again (to be justified below), we obtain that the second summand in (82) equals

$$\displaystyle\begin{array}{rcl} & & \int \int \limits _{0}^{1}s^{-1}\mathbb{E}D_{ x}F\,\mathbb{E}D_{x}P_{s}G\,\mathrm{d}s\mu (\mathrm{d}x) {}\\ & & \quad \quad +\sum _{ n=1}^{\infty } \frac{1} {n!}\iint \int \limits _{0}^{1}s^{-1}\mathbb{E}D_{ x_{1},\ldots,x_{n},x}^{n+1}F\,\mathbb{E}D_{ x_{1},\ldots,x_{n},x}^{n+1}P_{ s}G\,\mathrm{d}s\mu ^{n}(\mathrm{d}(x_{ 1},\ldots,x_{n}))\mu (\mathrm{d}x) {}\\ & & \quad =\int \mathbb{E}D_{x}F\,\mathbb{E}D_{x}G\mu (\mathrm{d}x) {}\\ & & \quad \quad +\sum _{ n=1}^{\infty } \frac{1} {n!}\iint \int \limits _{0}^{1}s^{n}\mathbb{E}D_{ x_{1},\ldots,x_{n},x}^{n+1}F\,\mathbb{E}D_{ x_{1},\ldots,x_{n},x}^{n+1}G\,\mathrm{d}s\mu ^{n}(\mathrm{d}(x_{ 1},\ldots,x_{n}))\mu (\mathrm{d}x) {}\\ & & \quad =\sum _{ m=1}^{\infty } \frac{1} {m!}\int \mathbb{E}D_{x_{1},\ldots,x_{m}}^{m}F\,\mathbb{E}D_{ x_{1},\ldots,x_{m}}^{m}G\,\mu ^{m}(\mathrm{d}(x_{ 1},\ldots,x_{m})). {}\\ \end{array}$$

Inserting this into (82) and applying Theorem 1 yields the asserted formula (81). The use of Fubini’s theorem is justified by Theorem 1 for f = g and the Cauchy–Schwarz inequality. □ 
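To see (81) in action, consider the elementary example (ours) F = η(B) and G = η(C) for \(B,C \in \mathcal{X}_{0}\). Then \(D_{x}F =\mathbb{1}_{B}(x)\) and \(D_{x}G =\mathbb{1}_{C}(x)\) are deterministic, so that \(P_{t}D_{x}G =\mathbb{1}_{C}(x)\), and (81) yields

$$\displaystyle{\mathrm{Cov}(\eta (B),\eta (C)) =\int \int \limits _{0}^{1}\mathbb{1}_{B}(x)\mathbb{1}_{C}(x)\,\mathrm{d}t\,\mu (\mathrm{d}x) =\mu (B \cap C),}$$

in agreement with the independent increments of η and the Poisson variance of η(B ∩ C).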

The integrability assumptions of Theorem 8 can be reduced to mere square integrability when using a symmetric formulation. Under the assumptions of Theorem 8 the following result was proved in [4, 5]. An even more general version is [13, Theorem 1.5].

Theorem 9

Assume that η is a proper Poisson process. Then, for any \(F \in L_{\eta }^{2}\),

$$\displaystyle\begin{array}{rcl} \mathbb{E}\int \int \limits _{0}^{1}(\mathbb{E}[D_{ x}F\mid \eta ^{(t)}])^{2}\mathrm{d}t\mu (\mathrm{d}x) <\infty,& &{}\end{array}$$
(83)

and for any \(F,G \in L_{\eta }^{2}\),

$$\displaystyle\begin{array}{rcl} \mathbb{E}FG = \mathbb{E}F\,\mathbb{E}G + \mathbb{E}\int \int \limits _{0}^{1}\mathbb{E}[D_{ x}F\mid \eta ^{(t)}]\mathbb{E}[D_{ x}G\mid \eta ^{(t)}]\mathrm{d}t\mu (\mathrm{d}x).& &{}\end{array}$$
(84)

Proof

It is well known (and not hard to prove) that \(\eta ^{(t)}\) and \(\eta -\eta ^{(t)}\) are independent Poisson processes with intensity measures tμ and (1 − t)μ, respectively. Therefore we have for \(F \in L_{\eta }^{2}\) with representative f that

$$\displaystyle\begin{array}{rcl} \mathbb{E}[D_{x}F\mid \eta ^{(t)}] =\int D_{x}f(\eta ^{(t)}+\chi )\varPi _{ (1-t)\mu }(\mathrm{d}\chi )& &{}\end{array}$$
(85)

holds almost surely. It is easy to see that the right-hand side of (85) is a measurable function of (the suppressed) ω ∈ Ω, \(x \in \mathbb{X}\), and t ∈ [0, 1].

Now we take \(F,G \in L_{\eta }^{2}\) with representatives f and g. Let us first assume that \(DF,DG \in L^{2}(\mathbb{P}\otimes \mu )\). Then (83) follows from the (conditional) Jensen inequality, while (85) implies for all t ∈ [0, 1] and \(x \in \mathbb{X}\) that

$$\displaystyle\begin{array}{rcl} \mathbb{E}(D_{x}F)(P_{t}D_{x}G)& =& \mathbb{E}D_{x}F\int D_{x}g(\eta ^{(t)}+\chi )\varPi _{ (1-t)\mu }(\mathrm{d}\chi ) {}\\ & =& \mathbb{E}\big[D_{x}F\,\mathbb{E}[D_{x}G\mid \eta ^{(t)}]\big] = \mathbb{E}\,\mathbb{E}[D_{ x}F\mid \eta ^{(t)}]\,\mathbb{E}[D_{ x}G\mid \eta ^{(t)}]. {}\\ \end{array}$$

Therefore (84) is just another version of (81).

In the second step of the proof we consider general \(F,G \in L_{\eta }^{2}\). Let \(F_{k} \in L_{\eta }^{2}\), \(k \in \mathbb{N}\), be a sequence such that \(DF_{k} \in L^{2}(\mathbb{P}\otimes \mu )\) and \(\mathbb{E}(F - F_{k})^{2} \rightarrow 0\) as k → ∞. We have just proved that

$$\displaystyle\begin{array}{rcl} \mathrm{Var}[F_{k} - F_{l}] = \mathbb{E}\int (\mathbb{E}[D_{ x}F_{k}\mid \eta ^{(t)}] - \mathbb{E}[D_{ x}F_{l}\mid \eta ^{(t)}])^{2}\mu ^{{\ast}}(\mathrm{d}(x,t)),\quad k,l \in \mathbb{N},& & {}\\ \end{array}$$

where \(\mu ^{{\ast}}\) is the product of μ and Lebesgue measure on [0, 1]. Since \(\mathrm{Var}[F_{k} - F_{l}] \rightarrow 0\) as k,l → ∞, the conditional expectations \(\mathbb{E}[D_{x}F_{k}\mid \eta ^{(t)}]\) form a Cauchy sequence in \(L^{2}(\mathbb{P} \otimes \mu ^{{\ast}})\). Since this space is complete, there is an \(h \in L^{2}(\mathbb{P} \otimes \mu ^{{\ast}})\) satisfying

$$\displaystyle\begin{array}{rcl} \lim _{k\rightarrow \infty }\mathbb{E}\int (h(x,t) - \mathbb{E}[D_{x}F_{k}\mid \eta ^{(t)}])^{2}\mu ^{{\ast}}(\mathrm{d}(x,t)) = 0.& &{}\end{array}$$
(86)

On the other hand it follows from Lemma 3 that for any \(C \in \mathcal{X}_{0}\)

$$\displaystyle\begin{array}{rcl} & & \int \limits _{C\times [0,1]}\mathbb{E}\big\vert \mathbb{E}[D_{x}F_{k}\mid \eta ^{(t)}] - \mathbb{E}[D_{ x}F\mid \eta ^{(t)}]\big\vert \mu ^{{\ast}}(\mathrm{d}(x,t)) {}\\ & & \quad \leq \int \limits _{C\times [0,1]}\mathbb{E}\vert D_{x}F_{k} - D_{x}F\vert \mu ^{{\ast}}(\mathrm{d}(x,t)) \rightarrow 0 {}\\ \end{array}$$

as k → ∞. Comparing this with (86) shows that \(h(\omega,x,t) = \mathbb{E}[D_{x}F\mid \eta ^{(t)}](\omega )\) for \(\mathbb{P} \otimes \mu ^{{\ast}}\)-a.e. (ω,x,t) ∈ Ω × C × [0, 1], and hence also for \(\mathbb{P} \otimes \mu ^{{\ast}}\)-a.e. \((\omega,x,t) \in \varOmega \times \mathbb{X} \times [0,1]\). Therefore the fact that \(h \in L^{2}(\mathbb{P} \otimes \mu ^{{\ast}})\) implies (83). Now let \(G_{k}\), \(k \in \mathbb{N}\), be a sequence approximating G in the same way. Then Eq. (84) holds with \((F_{k},G_{k})\) instead of (F,G). But the second summand is just a scalar product in \(L^{2}(\mathbb{P} \otimes \mu ^{{\ast}})\). Taking the limit as k → ∞ and using the \(L^{2}\)-convergence proved above yields the general result. □ 
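In particular, taking F = G in (84) gives the variance representation

$$\displaystyle{\mathrm{Var}\,F = \mathbb{E}\int \int \limits _{0}^{1}(\mathbb{E}[D_{x}F\mid \eta ^{(t)}])^{2}\,\mathrm{d}t\,\mu (\mathrm{d}x),\quad F \in L_{\eta }^{2},}$$

which was used in the proof above and immediately yields the Poincaré inequality below.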

A quick consequence of the previous theorem is the Poincaré inequality for Poisson processes. The following general version is taken from [30]. A more direct approach can be based on the Fock space representation in Theorem 1, see [13].

Theorem 10

For any F ∈ L η 2 ,

$$\displaystyle\begin{array}{rcl} \mathrm{Var}\,F \leq \mathbb{E}\int (D_{x}F)^{2}\mu (\mathrm{d}x).& &{}\end{array}$$
(87)

Proof

We may assume that η is proper. Take F = G in (84) and apply Jensen’s inequality. □ 
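The bound (87) is easy to test numerically for a functional of a single Poisson variable (sketch ours): if F = g(η(B)) with N := η(B) Poisson distributed with parameter λ := μ(B), then \(D_{x}F = g(N + 1) - g(N)\) for x ∈ B and \(D_{x}F = 0\) otherwise, so (87) reads \(\mathrm{Var}\,g(N) \leq \lambda \,\mathbb{E}(g(N + 1) - g(N))^{2}\).

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 4.0                               # lam = mu(B)
N = rng.poisson(lam, size=500_000)      # N = eta(B) is Poisson(lam)

g = lambda n: n.astype(float) ** 2      # F = g(eta(B)) = eta(B)**2
var_F = np.var(g(N))                                # exact value: 356
bound = lam * np.mean((g(N + 1) - g(N)) ** 2)       # exact: lam * E(2N+1)**2 = 388
print(var_F <= bound)                               # the Poincare bound (87) holds
```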

The following extension of (87) (taken from [17]) has been used in the proof of Proposition 5.

Corollary 1

For \(F \in L_{\eta }^{1}\),

$$\displaystyle\begin{array}{rcl} \mathbb{E}F^{2} \leq (\mathbb{E}F)^{2} + \mathbb{E}\int (D_{ x}F)^{2}\mu (\mathrm{d}x).& &{}\end{array}$$
(88)

Proof

For s > 0 we define

$$\displaystyle{F_{s} =\mathbb{1}\{F> s\}s +\mathbb{1}\{ - s \leq F \leq s\}F -\mathbb{1}\{F <-s\}s.}$$

By definition of \(F_{s}\) we have \(F_{s} \in L_{\eta }^{2}\) and \(\vert D_{x}F_{s}\vert \leq \vert D_{x}F\vert\) for μ-a.e. \(x \in \mathbb{X}\). Together with the Poincaré inequality (87) we obtain that

$$\displaystyle{\mathbb{E}F_{s}^{2} \leq (\mathbb{E}F_{ s})^{2} + \mathbb{E}\int (D_{ x}F_{s})^{2}\mu (\mathrm{d}x) \leq (\mathbb{E}F_{ s})^{2} + \mathbb{E}\int (D_{ x}F)^{2}\mu (\mathrm{d}x).}$$

By the monotone convergence theorem and the dominated convergence theorem, respectively, we have \(\mathbb{E}F_{s}^{2} \rightarrow \mathbb{E}F^{2}\) and \(\mathbb{E}F_{s} \rightarrow \mathbb{E}F\) as s → ∞. Hence letting s → ∞ in the previous inequality yields the assertion. □ 

As a second application of Theorem 9 we obtain the Harris–FKG inequality for Poisson processes, derived in [9]. Given \(B \in \mathcal{X}\), a function \(f \in F(\mathbf{N}_{\sigma })\) is increasing on B if \(f(\chi +\delta _{x}) \geq f(\chi )\) for all \(\chi \in \mathbf{N}_{\sigma }\) and all x ∈ B. It is decreasing on B if (−f) is increasing on B.

Theorem 11

Suppose \(B \in \mathcal{X}\). Let \(f,g \in L^{2}(\mathbb{P}_{\eta })\) be increasing on B and decreasing on \(\mathbb{X}\setminus B\). Then

$$\displaystyle\begin{array}{rcl} \mathbb{E}f(\eta )g(\eta ) \geq \mathbb{E}f(\eta )\,\mathbb{E}g(\eta ).& &{}\end{array}$$
(89)

It was noticed in [30] that the correlation inequality (89) (also referred to as association) is a direct consequence of a covariance identity.
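A small simulation (ours) illustrating (89): take \(\mathbb{X} = [0,1]\), B = \(\mathbb{X}\) and μ = λ·Lebesgue. Both functionals below are increasing on B, and their empirical covariance is strictly positive, as association predicts.

```python
import numpy as np

rng = np.random.default_rng(2)
lam, n_sim = 10.0, 20_000

f_vals, g_vals = [], []
for _ in range(n_sim):
    eta = rng.uniform(size=rng.poisson(lam))    # Poisson process on [0, 1]
    f_vals.append(float(eta.size))              # f(eta) = eta([0, 1]): increasing
    g_vals.append(float(np.sum(eta > 0.5)))     # g(eta) = eta((1/2, 1]): increasing
print(np.cov(f_vals, g_vals)[0, 1])             # approx Var eta((1/2, 1]) = lam/2 = 5 > 0
```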