
1 Introduction and Historical Remarks

The theory of reciprocal processes evolved from an idea of Schrödinger. In [25], he described the motion of a Brownian particle under constraints at the initial and final times as a stochastic variational problem and proposed that its solutions are the stochastic processes having the same bridges as Brownian motion. Bernstein called these processes réciproques and pointed out that they are Markov fields indexed by time, which makes it possible to state probabilistic models based on a symmetric notion of past and future: ces grandeurs deviennent stochastiquement parfaites! ("these quantities become stochastically perfect!") See [1].

Various aspects of reciprocal processes have been examined by several authors. Many fundamental reciprocal properties were established by Jamison in a series of articles [12–14], first in the context of Gaussian processes. Contributions to a physical interpretation and to the development of a stochastic calculus adapted to reciprocal diffusions were made by Zambrini and various coauthors, motivated by their interest in creating a Euclidean version of quantum mechanics (see [8, 27] and the monograph [6]). Krener in [17] and then Clark in [7] exhibited reciprocal invariants associated with classes of reciprocal diffusions.

This chapter reviews, and unifies for the first time, current results on the characterization of various types of reciprocal processes by duality formulae.

A first duality formula appeared under the Wiener measure as an analytical tool of Malliavin calculus; see [2]. It is an integration by parts formula on the set of continuous paths, which reflects the duality between a stochastic derivative operator and a stochastic integral operator. In [24], the authors characterized Brownian motion as the unique continuous process for which the Malliavin derivative and the Skorohod integral are dual operators.

In the framework of jump processes, a characterization of the Poisson process as the unique process for which a difference operator and a compensated stochastic integral are in duality was first given by Slivnjak [26] and extended to Poisson measures by Mecke [19].

We present here duality formulae as a unifying tool to characterize classes of reciprocal processes in the following contexts:

  • In the framework of Brownian diffusions, reviewing results of [22, 23]

  • In the framework of pure jump processes, namely, counting processes, following the recent studies of Murr [20]

2 Reciprocal Processes and Reciprocal Classes

We mainly work on the canonical càdlàg path space \(\varOmega = \mathbb{D}([0,1], \mathbb{R})\) or some subset of it. It is endowed with the canonical σ-algebra \(\mathcal{A},\) induced by the canonical process \(X = (X_{t})_{t\in [0,1]}\).

For a time interval [s, u] ⊂ [0, 1] one defines:

  • \(X_{[s,u]}:= (X_{t})_{t\in [s,u]}\)

  • \(\mathcal{A}_{[s,u]}:=\sigma (X_{[s,u]})\), the internal history of the process between times s and u

\(\mathcal{P}(\varOmega )\) denotes the space of probability measures on Ω.

For a probability measure \(P \in \mathcal{P}(\varOmega )\),

$$\displaystyle{P_{01}:= P \circ {(X_{0},X_{1})}^{-1} \in \mathcal{P}({\mathbb{R}}^{2})}$$

denotes its endpoint marginal law.

2.1 Definition and First Properties

Definition 1.

The probability measure \(P \in \mathcal{P}(\varOmega )\) is reciprocal, or the law of a reciprocal process, if for any s ≤ u in [0, 1] and any event \(A \in \mathcal{A}_{[0,s]},B \in \mathcal{A}_{[s,u]},C \in \mathcal{A}_{[u,1]}\),

$$\displaystyle{ P(A \cap B \cap C\mid X_{s},X_{u}) = P(A \cap C\mid X_{s},X_{u})\,P(B\mid X_{s},X_{u})\quad P\mbox{-a.e.} }$$
(1)

This property, which is time symmetric, makes explicit the conditional independence under P of the past of s and the future of u from the events occurring between s and u, given the σ-algebras at the boundary times s and u.

The reciprocal property can be expressed in several equivalent ways.

Theorem 1.

Let \(P \in \mathcal{P}(\varOmega )\). The following assertions are equivalent:

  1. (1) The probability measure P is reciprocal.

  2. (1*) The reversed probability measure \({P}^{{\ast}}:= P \circ {(X_{1-\cdot })}^{-1}\) is reciprocal.

  3. (2) For any 0 ≤ s ≤ u ≤ 1 and \(B \in \mathcal{A}_{[s,u]}\),

    $$\displaystyle{ P(B\mid X_{[0,s]},X_{[u,1]}) = P(B\mid X_{s},X_{u}). }$$
    (2)
  4. (3) For any 0 ≤ v ≤ r ≤ s ≤ u ≤ 1 and \(A \in \mathcal{A}_{[v,r]}\), \(B \in \mathcal{A}_{[s,u]}\),

    $$\displaystyle{ P(A \cap B\mid X_{[0,v]},X_{[r,s]},X_{[u,1]}) = P(A\mid X_{v},X_{r})P(B\mid X_{s},X_{u}). }$$

Proof.

See, e.g., Theorem 2.3 in [18]. □ 

The identity (2) points out that any reciprocal process is a Markov field parametrized by the time interval [0,1]: Conditioning events between s and u on the whole past of s and future of u amounts to conditioning them on the σ-algebras at the two boundary times s and u only. This property is sometimes called the two-sided Markov property. Therefore:

Proposition 1.

Any Markov process is reciprocal, but the converse is false.

Proof.

The first assertion was first proved in [12] in a Gaussian framework.

Let P be the law of a Markov process, take 0 ≤ s ≤ u ≤ 1 and \(A \in \mathcal{A}_{[0,s]},B \in \mathcal{A}_{[s,u]}\), and \(C \in \mathcal{A}_{[u,1]}\). The following holds:

$$\displaystyle\begin{array}{rcl} P(A \cap B \cap C)& = & E[P(A \cap B \cap C\mid X_{[s,u]})] {}\\ & \mathop{=}\limits^{ {\ast}}& E[P(A\mid X_{s})\mathbf{1}_{B}P(C\mid X_{u})] {}\\ & = & E[P(A\mid X_{s})P(B\mid X_{s},X_{u})P(C\mid X_{u})] {}\\ & \mathop{=}\limits^{ {\ast}}& E[P(A\mid X_{s})P(B\mid X_{s},X_{u})P(C\mid X_{[0,u]})] {}\\ & = & E[P(A\mid X_{s})P(B\mid X_{s},X_{u})\mathbf{1}_{C}] {}\\ & \mathop{=}\limits^{ {\ast}}& E[P(A\mid X_{[s,1]})P(B\mid X_{s},X_{u})\mathbf{1}_{C}] {}\\ & = & E[\mathbf{1}_{A}P(B\mid X_{s},X_{u})\mathbf{1}_{C}], {}\\ \end{array}$$

where the Markov property was used to prove the equalities marked with ∗. Therefore, (2) holds and P is reciprocal.

As a counterexample, take, e.g., the periodic process constructed in Sect. 3.1.4. □ 

A canonical method to construct reciprocal processes is indeed to mix Markovian bridges. Take \(P \in \mathcal{P}(\varOmega )\) the law of a Markov process whose bridges \(({P}^{xy})_{x,y\in \mathbb{R}}\) can be constructed for all \(x,y \in \mathbb{R}\) as a regular version of the family of conditional laws \(P(\cdot \mid X_{0} = x,X_{1} = y)\), \(x,y \in \mathbb{R}\). (This is a difficult challenge in a general non-Markov setting, but it has been carried out when P is a Lévy process, see [15, 21] Proposition 3.1, when P is a right process, see [11], or when P is a Feller process, see the recent paper [4].) One can now associate with P a class of reciprocal processes as follows.

Definition 2.

The set of probability measures on Ω obtained as mixtures of bridges of \(P \in \mathcal{P}(\varOmega )\),

$$\displaystyle{ \mathfrak{R}_{c}(P):=\{ Q \in \mathcal{P}(\varOmega ): Q(\cdot ) =\int _{\mathbb{R}\times \mathbb{R}}{P}^{xy}(\cdot )\,Q_{ 01}(dxdy)\}, }$$
(3)

is the so-called reciprocal class associated with P.

This concept was introduced by Jamison in [13] in the case of a Markov reference process P whose transition kernels admit densities.

Note that, in spite of its name, a reciprocal class is not an equivalence class, since the underlying relation is not symmetric in general: The periodic process \({P}^{\mathrm{per}}\) constructed in Sect. 3.1.4 belongs to \(\mathfrak{R}_{c}(P)\), but \(P\not\in \mathfrak{R}_{c}({P}^{\mathrm{per}})\) if P is not periodic.

Proposition 2.

Any process in the reciprocal class \(\mathfrak{R}_{c}(P)\) is reciprocal and its bridges coincide a.s. with those of P.

Proof.

Let \(Q \in \mathfrak{R}_{c}(P)\) be as in (3), and set \(\pi:= Q_{01}\). Let us show that Q satisfies (2). Let 0 ≤ s ≤ u ≤ 1, \(A \in \mathcal{A}_{[0,s]},B \in \mathcal{A}_{[s,u]}\), and \(C \in \mathcal{A}_{[u,1]}\). Then

$$\displaystyle\begin{array}{rcl} E_{Q}[\mathbf{1}_{A}Q(B\mid X_{[0,s]},X_{[u,1]})\mathbf{1}_{C}]& = & Q(A \cap B \cap C) =\int _{\mathbb{R}\times \mathbb{R}}{P}^{xy}(A \cap B \cap C)\,\pi (dxdy) {}\\ & \mathop{=}\limits^{ \checkmark}& \int _{\mathbb{R}\times \mathbb{R}}E_{{P}^{xy}}[\mathbf{1}_{A}P(B\mid X_{s},X_{u})\mathbf{1}_{C}]\,\pi (dxdy) {}\\ & = & E_{Q}[\mathbf{1}_{A}P(B\mid X_{s},X_{u})\mathbf{1}_{C}], {}\\ \end{array}$$

where the reciprocality of P was used at the marked equality. Thus \(Q(B\mid X_{[0,s]},X_{[u,1]})\) only depends on \((X_{s},X_{u})\), and \(Q(B\mid X_{[0,s]},X_{[u,1]}) = P(B\mid X_{s},X_{u})\), Q-a.e., which completes the proof. □ 

2.2 Reciprocal Characteristics

Let us now introduce, in two important frameworks, functionals of the reference process that are invariant over its reciprocal class. They indeed characterize the reciprocal class, as we will see in Theorems 2 and 3.

2.2.1 Case of Brownian Diffusions

In this paragraph the path space is restricted to the set of continuous paths \(\varOmega _{c}:= \mathbf{C}([0,1]; \mathbb{R})\). Consider as reference probability measure \(\mathbb{P}_{b} \in \mathcal{P}(\varOmega _{c})\) a Brownian diffusion with regular drift b, that is, the law of the solution of the SDE

$$\displaystyle{dX_{t} = dB_{t} + b(t,X_{t})\,dt,}$$

where B is a Brownian motion and \(b \in {\mathbf{C}}^{1,2}([0,1] \times \mathbb{R}; \mathbb{R})\).

The family of its bridges \((\mathbb{P}_{b}^{xy})_{x,y\in \mathbb{R}}\) can be constructed for all \(x,y \in \mathbb{R}\) as mentioned in the preceding section. Since we are only interested in its reciprocal class, the marginal law of \(\mathbb{P}_{b}\) at time 0 plays no role, and we therefore do not specify it.

Clark proved a conjecture of Krener, stating that the reciprocal class of \(\mathbb{P}_{b}\) is, in some sense, characterized by the time–space function

$$\displaystyle{F_{b}(t,x):= \partial _{t}b(t,x) + \frac{1} {2}\partial _{x}({b}^{2} + \partial _{ x}b)(t,x),}$$

thus called reciprocal characteristics associated with \(\mathbb{P}_{b}\).

Theorem 2.

Let \(\mathbb{P}_{b}\) and \(\mathbb{P}_{\tilde{b}}\) be two Brownian diffusions with smooth drifts b and \(\tilde{b}\) .

$$\displaystyle{\mathfrak{R}_{c}(\mathbb{P}_{b}) = \mathfrak{R}_{c}(\mathbb{P}_{\tilde{b}}) \Leftrightarrow F_{b} \equiv F_{\tilde{b}}.}$$

Proof.

See [7] Theorem 1. □ 

Example 1.

  1. The reciprocal characteristics of a Wiener measure \(\mathbb{P}_{0}\), the law of a Brownian motion with arbitrary initial condition, vanish, since \(b \equiv 0 \Rightarrow F_{b} \equiv 0\).

  2. The reciprocal characteristics of the Ornstein–Uhlenbeck process with linear time-independent drift \(b(x) = -\lambda x\) is the linear function \(x\mapsto {\lambda }^{2}x\).

  3. It is known that if the Brownian diffusion \(\mathbb{P}_{b}\) admits a smooth transition density \(p_{b}\), then its bridge \(\mathbb{P}_{b}^{xy}\) between x and y can be constructed as a Brownian diffusion with drift \({b}^{xy}\) given by

    $$\displaystyle{{b}^{xy}(t,z) = b(t,z) + \partial _{ z}\log p_{b}(t,z;1,y),\quad t < 1.}$$

    Let us compute \(F_{{b}^{xy}}\):

    $$\displaystyle\begin{array}{rcl} & & F_{{b}^{xy}}(t,z) - F_{b}(t,z) {}\\ & & \quad = \partial _{t}\partial _{z}\log p_{b}(t,z;1,y) + \partial _{z}b(t,z)\,\partial _{z}\log p_{b}(t,z;1,y) + b(t,z)\,\partial _{z}^{2}\log p_{ b}(t,z;1,y) {}\\ & & \qquad +\big (\partial _{z}\log p_{b}\,\partial _{z}^{2}\log p_{ b}\big)(t,z;1,y) + \frac{1} {2}\partial _{z}^{3}\log p_{ b}(t,z;1,y) {}\\ & & \quad = 0, {}\\ \end{array}$$

    where we used the identity

    $$\displaystyle{\partial _{t}p_{b}(t,z;1,y) + \frac{1} {2}\partial _{z}^{2}p_{ b}(t,z;1,y) + b(t,z)\partial _{z}p_{b}(t,z;1,y) = 0.}$$

    It confirms the fact that \(\mathbb{P}_{b} \in \mathfrak{R}_{c}(\mathbb{P}_{b}^{xy})\).
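The computations of Example 1 can be double-checked symbolically. The sketch below (a sanity check of ours, using sympy; the helper name `F` is not from the references) recomputes \(F_{b}\) for the Ornstein–Uhlenbeck drift and, in the Wiener case b = 0, for the Brownian-bridge drift \({b}^{0y}(t,z) = (y - z)/(1 - t)\), whose characteristics must vanish:

```python
import sympy as sp

t, z, y, lam = sp.symbols('t z y lam', positive=True)

def F(b):
    # F_b(t,z) := d_t b + (1/2) d_z (b^2 + d_z b), as in Sect. 2.2.1
    return sp.simplify(sp.diff(b, t)
                       + sp.Rational(1, 2) * sp.diff(b**2 + sp.diff(b, z), z))

# Example 1(2): OU drift b(z) = -lam*z should give F_b(z) = lam^2 * z.
F_ou = F(-lam * z)

# Example 1(3) with b = 0: the Brownian-bridge drift (y - z)/(1 - t)
# should have vanishing characteristics, like the Wiener measure itself.
F_bridge = F((y - z) / (1 - t))
```

Running this confirms that `F_ou` simplifies to \({\lambda }^{2}z\) and `F_bridge` to 0, in agreement with the example.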

Remark 1.

In the multidimensional case, when the path space is \(\mathbf{C}([0,1]; {\mathbb{R}}^{d})\), d > 1, one needs an additional function to characterize the reciprocal class \(\mathfrak{R}_{c}(\mathbb{P}_{b})\). It is the \({\mathbb{R}}^{d\otimes d}\)-valued function \(G_{b}(t,x) = (G_{b}^{i,j}(t,x))_{i,j}\) defined by \(G_{b}^{i,j}:= \partial _{j}{b}^{i} - \partial _{i}{b}^{j}\); see [7].

2.2.2 Case of Counting Processes

In this paragraph, we restrict the path space to the set of càdlàg step functions on [0,1] with unit jumps. It can be described as follows:

$$\displaystyle{\varOmega _{j}:= \left \{\omega = x\,\delta _{0} +\sum _{ i=1}^{n}\delta _{ t_{i}},\ 0 < t_{1} < \cdots < t_{n} < 1,\ x \in \mathbb{R},\ n \in \mathbb{N}\right \}.}$$

Consider, as reference Markov probability measure \(\mathbf{P}_{\ell} \in \mathcal{P}(\varOmega _{j})\), the law of a counting process with a regular, uniformly bounded Markovian jump intensity \(\ell\), satisfying \(\ell(\cdot,x) \in {\mathbf{C}}^{1}([0,1]; \mathbb{R})\) for all \(x \in \mathbb{R}\) and \(0 <\inf _{t,x}\ell(t,x) \leq \sup _{t,x}\ell(t,x) < +\infty \).

Note that the definition of \(\mathfrak{R}_{c}(\mathbf{P}_{\ell})\) makes sense: On the one hand, the family of bridges \(\mathbf{P}_{\ell}^{xy}\) can be constructed for all x, y such that \(y - x \in \mathbb{N}\); on the other hand, for any \(Q \in \mathcal{P}(\varOmega _{j})\), its endpoint marginal law \(Q_{01}\) is concentrated on such configurations.

Murr identified a time–space functional \(\varXi _{\ell}\) of the intensity \(\ell\) as the characteristics of the reciprocal class associated with \(\mathbf{P}_{\ell}\).

Theorem 3.

Let \(\mathbf{P}_{\ell}\) and \(\mathbf{P}_{\tilde{\ell}}\) be two counting processes with intensities \(\ell\) and \(\tilde{\ell}\) as above. Then

$$\displaystyle{ \mathfrak{R}_{c}(\mathbf{P}_{\ell}) = \mathfrak{R}_{c}(\mathbf{P}_{\tilde{\ell}}) \Leftrightarrow \varXi _{\ell}\equiv \varXi _{\tilde{\ell}}, }$$
(4)

where \(\varXi _{\ell}(t,x):= \partial _{t}\log \ell(t,x) +\big (\ell(t,x + 1) -\ell (t,x)\big)\) .

Proof.

See [20] Theorem 6.58. □ 

Example 2.

  1. The standard Poisson process \(\mathbf{P}:= \mathbf{P}_{1}\) has constant jump rate, or intensity, equal to 1 and deterministic initial condition equal to 0. Its reciprocal characteristics vanishes since \(\ell\equiv 1 \Rightarrow \varXi _{\ell}\equiv 0\).

  2. All Poisson processes are in the same reciprocal class since, for any constant jump rate λ > 0, \(\ell\equiv \lambda \Rightarrow \varXi _{\ell} =\varXi _{1} \equiv 0\).

  3. For \(x,y \in \mathbb{R}\) with \(y - x \in \mathbb{N}\), the bridge \(\mathbf{P}^{xy}\) of \(\mathbf{P}\) is the Markov counting process starting at x with time–space-dependent intensity given by \({\ell}^{xy}(t,z) = \frac{\max (y-z,0)} {1-t}\), for any t < 1.

    One verifies, as in Example 1 (3), that \(\varXi _{{\ell}^{xy}} =\varXi _{1} = 0\).
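This verification can also be done symbolically. The sketch below (our own sanity check in sympy; helper names are not from the references) evaluates \(\varXi _{\ell}\) for a constant intensity and for the bridge intensity, written on the event z < y where \(\max (y - z,0) = y - z\):

```python
import sympy as sp

t, z, y, lam = sp.symbols('t z y lam', positive=True)

def Xi(ell):
    # Xi_ell(t,x) := d_t log ell(t,x) + ell(t,x+1) - ell(t,x), as in Theorem 3
    return sp.simplify(sp.diff(sp.log(ell(t, z)), t) + ell(t, z + 1) - ell(t, z))

# Constant intensity lam > 0 (Example 2(2)): Xi vanishes.
Xi_const = Xi(lambda s, x: lam)

# Bridge intensity ell^{xy}(t,z) = (y - z)/(1 - t) on the event z < y:
Xi_bridge = Xi(lambda s, x: (y - x) / (1 - s))
```

Both expressions simplify to 0: the time derivative of \(\log {\ell}^{xy}\) contributes \(1/(1 - t)\), which is exactly cancelled by the difference term.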

3 Characterization Via Duality Formulae

Our aim is now to show that each reciprocal class coincides, in the frameworks introduced above, with the set of random processes for which a perturbed duality relation holds between the stochastic integral and an appropriate derivative operator on the corresponding path space.

3.1 Case of Brownian Diffusions

3.1.1 The Test Functions and the Operators

On \(\varOmega _{c}\), we define a set of smooth cylindrical functionals by:

$$\displaystyle{\mathcal{S} =\{\varPhi:\varPhi =\varphi (X_{t_{1}},\ldots,X_{t_{n}}),\varphi \in \mathbf{C}_{b}^{\infty }({\mathbb{R}}^{n}; \mathbb{R}),n \in {\mathbb{N}}^{{\ast}},0 \leq t_{ 1} < \cdots < t_{n} \leq 1\}.}$$

The derivation operator D g in the direction \(g \in {L}^{2}([0,1]; \mathbb{R})\) is defined on \(\mathcal{S}\) by

$$\displaystyle\begin{array}{rcl} D_{g}\varPhi (\omega )&:=& \lim _{\varepsilon \rightarrow 0}\frac{1} {\varepsilon } \left (\varPhi (\omega +\varepsilon \int _{0}^{.}g(t)dt) -\varPhi (\omega )\right ) =\sum _{ j=1}^{n}\int _{ 0}^{t_{j} }g(t)\, \frac{\partial \varphi } {\partial x_{j}}(\omega _{t_{1}},\ldots,\omega _{t_{n}})\,dt. {}\\ \end{array}$$

\(D_{g}\varPhi \) is the Malliavin derivative of Φ in the direction \(\int _{0}^{.}g(t)dt\), an element of the Cameron–Martin space. Furthermore,

$$\displaystyle{D_{g}\varPhi =\langle g,D_{.}\varPhi \rangle _{{L}^{2}([0,1];\mathbb{R})}\mbox{ where }D_{t}\varPhi =\sum _{ j=1}^{n} \frac{\partial \varphi } {\partial x_{j}}(X_{t_{1}},\ldots,X_{t_{n}})\mathbf{1}_{[0,t_{j}]}(t).}$$

The stochastic integral operator with respect to the canonical process, denoted by \(\delta \), is defined as

$$\displaystyle{\delta (g):=\int _{ 0}^{1}g(t)\,dX_{ t}.}$$

It is always well defined if the test function g is simple, i.e. a linear combination of indicator functions of time intervals.

A loop on [0, 1] is a function g with vanishing integral, \(\int _{0}^{1}g(t)dt = 0\), that is, \(g \in \{1\}^{\perp }\) in \({L}^{2}([0,1]; \mathbb{R})\).

3.1.2 Duality Formula Under the Wiener Measure and Its Reciprocal Class

We are now able to present the duality between the operators D and δ under all probability measures belonging to the reciprocal class of a Wiener measure. We denote by \(\mathbb{P}\) the standard Wiener measure, which charges only paths with initial condition at 0.

Theorem 4.

Let Q be a probability measure on \(\varOmega _{c}\) such that \(E_{Q}(\vert X_{t}\vert ) < +\infty \) for all t ∈ [0,1]. Then:

$$\displaystyle\begin{array}{rcl} Q\mbox{ is a Wiener measure} \Leftrightarrow \forall \varPhi \in \mathcal{S},E_{Q}(D_{g}\varPhi ) = E_{Q}\big(\varPhi \,\delta (g)\big),\forall g\,\,\mathrm{simple}.& &{}\end{array}$$
(5)
$$\displaystyle\begin{array}{rcl} \qquad \quad Q \in \mathfrak{R}_{c}(\mathbb{P}) \Leftrightarrow \forall \varPhi \in \mathcal{S},E_{Q}(D_{g}\varPhi ) = E_{Q}\big(\varPhi \,\delta (g)\big),\forall g\,\,\mbox{ simple loop.}& &{}\end{array}$$
(6)

Proof.

 

  • Sketch of \(\stackrel{(\mathrm{5})}{\Rightarrow }\): Using the Girsanov formula,

    $$\displaystyle{ E_{\mathbb{P}_{0}}(D_{g}\varPhi ) = E_{\mathbb{P}_{0}}\left (\lim _{\varepsilon \rightarrow 0}\frac{\varPhi (\cdot +\varepsilon \int _{ 0}^{.}g(t)dt)-\varPhi } {\varepsilon } \right ) = E_{\mathbb{P}_{0}}\left (\varPhi \,\partial _{\varepsilon }Z_{\varepsilon }\vert _{\varepsilon =0}\right ) }$$

    with \(Z_{\varepsilon }:=\exp \big(\varepsilon \int _{0}^{1}g(t)dX_{t} -\frac{{\varepsilon }^{2}} {2}\int _{0}^{1}g{(t)}^{2}dt\big)\).

  • \(\stackrel{(\mathrm{5})}{\Leftarrow }\): With an adequate choice of Φ and g, one can prove that the canonical process \(X_{t} - X_{0}\) is a Q-martingale, as well as \({(X_{t} - X_{0})}^{2} - t\). By Lévy's characterization, this allows one to conclude that Q is a Wiener measure with arbitrary initial law. For details, see [24].

  • First note that \(Q \in \mathfrak{R}_{c}(\mathbb{P}) \Leftrightarrow Q =\int {\mathbb{P}}^{xy}\,Q_{01}(dx,dy)\).

    \(\stackrel{(6)}{\Rightarrow }\): Take \(\varPhi (\omega ) =\phi _{0}(\omega (0))\phi _{1}(\omega (1))\tilde{\varPhi }(\omega )\) in (5). Then

    $$\displaystyle{E_{\mathbb{P}}\big(\phi _{0}\,\phi _{1}\,\tilde{\varPhi }\,\delta (g)\big) = E_{\mathbb{P}}(D_{g}(\phi _{0}\,\phi _{1}\,\tilde{\varPhi }))}$$

    which implies that, for all smooth \(\phi _{0},\,\phi _{1}\),

    $$\displaystyle\begin{array}{rcl} & & E_{\mathbb{P}}\Big(\phi _{0}(X_{0})\phi _{1}(X_{1})\mathbb{P}(\tilde{\varPhi }\delta (g)\vert X_{0},X_{1})\Big) {}\\ & & = E_{\mathbb{P}}\Big(\phi _{0}(X_{0})\phi _{1}(X_{1})\mathbb{P}(D_{g}\tilde{\varPhi }\vert X_{0},X_{1})\Big) + E_{\mathbb{P}}\Big(\phi _{0}(X_{0})\phi '_{1}(X_{1})\tilde{\varPhi }\Big)\int _{0}^{1}g(t)dt {}\\ & & \Rightarrow E_{{\mathbb{P}}^{X_{0}X_{1}}}\Big(\tilde{\varPhi }\delta (g)\Big) = E_{{\mathbb{P}}^{X_{0}X_{1}}}\Big(D_{g}\tilde{\varPhi }\Big)\mbox{ if }\int _{0}^{1}g(t)dt = 0. {}\\ \end{array}$$

    This identity holds for any mixture of Brownian bridges too.

  • \(\stackrel{(6)}{\Leftarrow }\): \({Q}^{xy}\) satisfies (6) too, which leads to identifying it as the unique Gaussian process with mean \(x + t(y - x)\) and covariance \(s(1 - t)\), s ≤ t, that is, \({\mathbb{P}}^{xy}\). For details, see [22].

 □ 
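Formula (5) lends itself to a quick Monte Carlo illustration (a sketch of ours; the choices of Φ, g, and sample size below are arbitrary). For \(g = \mathbf{1}_{[0,1/2]}\), a simple function, and \(\varPhi =\sin (X_{1/2} + X_{1})\) under the standard Wiener measure, one computes \(D_{g}\varPhi =\cos (X_{1/2} + X_{1})\) and \(\delta (g) = X_{1/2}\):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Sample (X_{1/2}, X_1) under the standard Wiener measure (X_0 = 0):
# two independent Gaussian increments of variance 1/2 each.
x_half = rng.normal(0.0, np.sqrt(0.5), n)
x_one = x_half + rng.normal(0.0, np.sqrt(0.5), n)

s = x_half + x_one
lhs = np.mean(np.cos(s))           # E[D_g Phi], with D_g Phi = cos(X_{1/2} + X_1)
rhs = np.mean(np.sin(s) * x_half)  # E[Phi delta(g)], with delta(g) = X_{1/2}
```

Since \(X_{1/2} + X_{1}\) is centered Gaussian with variance 5/2, both sides should be close to \({e}^{-5/4} \approx 0.287\), up to Monte Carlo error.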

Remark 2.

  1. Equation (5) is an infinite-dimensional generalization of the one-dimensional integration by parts formula, also called Stein's formula, satisfied by the standard Gaussian law:

    $$\displaystyle{ \int _{\mathbb{R}}\varphi '(x)\,\frac{{e}^{-{x}^{2}/2 }} {\sqrt{2\pi }} dx =\int _{\mathbb{R}}\varphi (x)\,x\,\frac{{e}^{-{x}^{2}/2 }} {\sqrt{2\pi }} dx. }$$

    To recover it, take g ≡ 1 and \(\varPhi =\varphi (X_{1})\) in (5).

  2. Equation (5) remains true under the Wiener measure \(\mathbb{P}\), for Skorohod-integrable random processes \(g \in {L}^{2}(\varOmega _{c} \times [0,1]; \mathbb{R})\) and for any \(\varPhi \in {\mathbb{D}}^{1,2}\), the closure of \(\mathcal{S}\) under the norm \(\|\varPhi \|_{1,2}^{2}:=\int {(\varPhi }^{2} +\int _{ 0}^{1}\vert D_{t}\varPhi {\vert }^{2}dt)\,d\mathbb{P}\). In this generality, (5) expresses the well-known duality between the Malliavin derivative D and the Skorohod integral δ under \(\mathbb{P}\); see, e.g., [2].

  3. Since, in the computation of \(D_{g}\varPhi \), paths are not perturbed at time 0, it is clear that (5) characterizes only the Brownian dynamics (Wiener measure), but not the law of \(X_{0}\) under Q.

  4. Since, in the computation of \(D_{g}\varPhi \) for a loop g, paths are perturbed neither at time 0 nor at time 1, the identity (6) characterizes only the dynamics of the bridges \({Q}^{X_{0}X_{1}}\).
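Stein's formula above can be verified exactly for polynomial test functions. The sketch below (our own choice \(\varphi (x) = {x}^{3}\); any smooth φ of moderate growth would do) evaluates both Gaussian integrals symbolically:

```python
import sympy as sp

x = sp.Symbol('x', real=True)
gauss = sp.exp(-x**2 / 2) / sp.sqrt(2 * sp.pi)   # standard Gaussian density
phi = x**3                                       # polynomial test function

lhs = sp.integrate(sp.diff(phi, x) * gauss, (x, -sp.oo, sp.oo))  # E[phi'(Z)]
rhs = sp.integrate(phi * x * gauss, (x, -sp.oo, sp.oo))          # E[Z phi(Z)]
```

Both integrals evaluate to the fourth-moment identity \(E[3{Z}^{2}] = E[{Z}^{4}] = 3\) for a standard Gaussian Z.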

3.1.3 Duality Formula Under the Reciprocal Class of Brownian Diffusions

We now investigate how the duality formula (6) is perturbed when the underlying reference process admits a drift b (satisfying the same smoothness assumptions as in Sect. 2.2.1). The transformed duality equation (7) presented below contains an additional term of order 0 in Φ, in which the reciprocal invariant \(F_{b}\) associated with \(\mathbb{P}_{b}\) appears.

Theorem 5.

Let Q be a probability measure on \(\varOmega _{c}\) such that, for all t ∈ [0,1], \(E_{Q}\Big(\vert X_{t}{\vert }^{2} +\int _{ 0}^{1}F_{b}^{2}(t,X_{t})dt\Big) < +\infty \) . Then,

$$\displaystyle\begin{array}{rcl} Q \in \mathfrak{R}_{c}(\mathbb{P}_{b})& \Leftrightarrow & \forall \varPhi \in \mathcal{S},\ \forall g\,\,\mbox{ simple loop, } \\ E_{Q}(D_{g}\varPhi )& =& E_{Q}\big(\varPhi \,\,\delta (g)\big) + E_{Q}\Big(\varPhi \,\int _{0}^{1}g(r)\int _{ r}^{1}F_{ b}(t,X_{t})\,dt\,dr\Big).{}\end{array}$$
(7)

Proof.

  • Sketch of ⇒ : First, the bridges of Q coincide with those of \(\mathbb{P}_{b}\). Since \(\mathbb{P}_{b}^{xy}\) is absolutely continuous with respect to \(\mathbb{P}_{b}\) on any time interval \([0,1-\varepsilon ],\varepsilon > 0\), one can use the Girsanov density to prove that \(\mathbb{P}_{b}^{xy}\) satisfies (7), and thus, by linearity, Q satisfies (7) too.

  •  ⇐ : First, \({Q}^{xy}\) satisfies (7) for a.a. x, y. This allows one to prove that the canonical process is a \({Q}^{xy}\)-quasimartingale. Therefore, by Rao's theorem (see [9]), it is a \({Q}^{xy}\)-semimartingale. Its characteristics can be computed: The quadratic variation is t, and the bounded-variation part is of the form \(t\mapsto \int _{0}^{t}{b}^{x,y}(s,X_{s})ds\). One computes that \(F_{{b}^{x,y}} = F_{b}\). Thus \({Q}^{xy} = \mathbb{P}_{b}^{xy}\) and \(Q \in \mathfrak{R}_{c}(\mathbb{P}_{b})\). For more details, see [22], Theorem 4.3. □ 

3.1.4 Some Applications

We first illustrate the use of the identity (7) to identify a process as an element of some precise reciprocal class. Consider, as Markov reference process, the Ornstein–Uhlenbeck process denoted by \(P_{\mathrm{OU}}\), introduced in Example 1 (2), whose associated reciprocal characteristics is \(F_{\mathrm{OU}}(x) = {\lambda }^{2}x\). Consider now the periodic Ornstein–Uhlenbeck process, denoted by \(P_{\mathrm{OU}}^{\mathrm{per}}\), solution of the following stochastic differential equation with periodic boundary conditions on the time interval [0,1]:

$$\displaystyle{ dX_{t} = dB_{t} -\lambda X_{t}\,dt\,,\quad X_{0} = X_{1}. }$$
(8)

This process is Gaussian as the following representation shows:

$$\displaystyle{ X_{t} =\int _{ 0}^{t}\frac{{e}^{-\lambda (t-s)}} {1 - {e}^{-\lambda }} dB_{s} +\int _{ t}^{1}\frac{{e}^{-\lambda (1+t-s)}} {1 - {e}^{-\lambda }} dB_{s} =:\varPsi (B)_{t}. }$$
(9)

But it is not Markov as the following representation shows:

$$\displaystyle{X_{t} = X_{0} + B_{t} -\int _{0}^{t}\left (\lambda X_{ s} -\lambda \frac{X_{0} - {e}^{-\lambda (1-s)}X_{s}} {\sinh (\lambda (1 - s))} \,\right )ds,\quad X_{0} \sim \mathcal{N}\left (0, \frac{\coth (\lambda /2)} {2\lambda } \right ).}$$
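Representation (9) can be probed numerically. The sketch below (grid and sample sizes are arbitrary choices of ours) discretizes \(\varPsi (B)\) at t = 0 and compares the empirical variance of \(X_{0}\) with the stationary value \(\coth (\lambda /2)/(2\lambda )\) appearing above; note that the integrands of (9) at t = 0 and t = 1 coincide, so the discretized path satisfies \(X_{0} = X_{1}\) exactly:

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 1.0
n_steps, n_paths = 200, 50_000
dt = 1.0 / n_steps
s = (np.arange(n_steps) + 0.5) * dt     # midpoints of the time grid

# Weight of dB_s in X_0 = Psi(B)_0: the first integral in (9) is empty at
# t = 0, and the second reduces to int_0^1 e^{-lam(1-s)}/(1-e^{-lam}) dB_s.
# At t = 1 the weight is the same function, so X_0 = X_1 path by path.
w = np.exp(-lam * (1.0 - s)) / (1.0 - np.exp(-lam))

dB = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
x0 = dB @ w                              # discretized Wiener integral

var_emp = x0.var()
var_theory = 1.0 / (np.tanh(lam / 2.0) * 2.0 * lam)   # coth(lam/2)/(2 lam)
```

For λ = 1 the theoretical variance is about 1.082, which the simulation should reproduce up to Monte Carlo and discretization error.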

A natural question is then whether it is reciprocal. In [3] the authors analysed the form of its covariance kernel to deduce the reciprocality of \(P_{\mathrm{OU}}^{\mathrm{per}}\). We proposed in [22] an alternative proof based on (7), which allows one to conclude directly that \(P_{\mathrm{OU}}^{\mathrm{per}} \in \mathfrak{R}_{c}(P_{\mathrm{OU}})\): Thanks to the representation (9), one notes that the shifted process \(X +\varepsilon \int _{ 0}^{.}g(t)dt\) can also be represented as the transform by Ψ of a shifted Brownian motion, provided g is a loop. It then remains to apply the Girsanov theorem by computing

$$\displaystyle{E_{P_{\mathrm{OU}}^{\mathrm{per}}}(D_{g}\varPhi ) = E_{P_{\mathrm{OU}}^{\mathrm{per}}}\left (\lim _{\varepsilon \rightarrow 0}\frac{\varPhi (\cdot +\varepsilon \int _{ 0}^{.}g(t)dt)-\varPhi } {\varepsilon } \right )}$$

to obtain that \(P_{\mathrm{OU}}^{\mathrm{per}}\) satisfies, for all \(\varPhi \in \mathcal{S}\) and all simple loops g,

$$\displaystyle{E_{P_{\mathrm{OU}}^{\mathrm{per}}}(D_{g}\varPhi ) = E_{P_{\mathrm{OU}}^{\mathrm{per}}}\big(\varPhi \,\,\delta (g)\big) + E_{P_{\mathrm{OU}}^{\mathrm{per}}}\Big(\varPhi \,\int _{0}^{1}g(r)\int _{ r}^{1}{\lambda }^{2}X_{ t}\,dt\,dr\Big).}$$

Let us now present a generalization of the famous result stated by Kolmogorov in [16]: A Brownian diffusion with values in \({\mathbb{R}}^{d}\) and time-homogeneous drift b is reversible (i.e. there exists an initial distribution such that \(\mathbb{P}_{b} = \mathbb{P}_{b}^{{\ast}}\)) if and only if the function b is a gradient.

In the next theorem, whose proof is detailed in [23] Theorem 5.4, we obtain the same result under much weaker assumptions: We only require that there exist one reversible law in \(\mathfrak{R}_{c}(\mathbb{P}_{b})\), and we do not suppose that the drift is time-homogeneous. The proof is based on the d-dimensional duality formula characterizing the reciprocal class \(\mathfrak{R}_{c}(\mathbb{P}_{b})\).

Theorem 6.

Let b be a d-dimensional smooth drift such that, for any i,j ∈{ 1,…,d}, the function \(\big(\partial _{j}{b}^{i} - \partial _{i}{b}^{j}\big)(t,x)\) is time-independent. Suppose furthermore that there exists a time-reversible \(Q \in \mathfrak{R}_{c}(\mathbb{P}_{b})\) with finite entropy. Then the drift b is of gradient type, i.e.

$$\displaystyle{\exists \varphi: [0,1] \times {\mathbb{R}}^{d}\mapsto \mathbb{R}\mbox{ such that, for all }t,\quad b(t,\cdot ) = -\nabla \varphi (t,\cdot ).}$$

Moreover, if Q is itself a Brownian diffusion with drift b, then b is time-independent and

$$\displaystyle{Q(\cdot ) = \frac{1} {c}\int _{{\mathbb{R}}^{d}}\mathbb{P}_{b}(\cdot \vert X_{0} = x)\,\,{e}^{-2\varphi (x)}\,dx,}$$

for some positive constant c.

3.2 Case of Counting Processes

3.2.1 The Test Functions and the Operators

Any path ω in \(\varOmega _{j}\) is characterized by its initial value x, the number n of its jumps up to time 1, and its jump times \(t_{1},\ldots,t_{n}\). We then define the ith jump time of a path by the functional:

$$\displaystyle{T_{i}(\omega ) = T_{i}\left (x\,\delta _{0} +\sum _{ j=1}^{n}\delta _{ t_{j}}\right ):= t_{i}\mathbf{1}_{i\leq n} + \mathbf{1}_{i>n}.}$$

We now define a set of smooth test functionals on Ω j by:

$$\displaystyle{\mathcal{S} =\{\varPhi:\varPhi =\varphi \big (X_{0},T_{1},\ldots,T_{n}\big),\varphi \in \mathbf{C}_{b}^{\infty }({\mathbb{R}}^{n+1}; \mathbb{R}),n \in {\mathbb{N}}^{{\ast}}\}.}$$

The derivation operator \(\mathcal{D}_{g}\) in the direction \(g \in {L}^{2}([0,1]; \mathbb{R})\) is based on the perturbation of the jump-times and defined on \(\mathcal{S}\) by:

$$\displaystyle{ \mathcal{D}_{g}\varPhi:=\lim _{\varepsilon \rightarrow 0}\frac{1} {\varepsilon } \left (\varphi \big(X_{0},T_{1} +\varepsilon \int _{ 0}^{T_{1} }g(t)dt,\ldots,T_{n} +\varepsilon \int _{ 0}^{T_{n} }g(t)dt\big)-\varPhi \right ). }$$

It was introduced by Elliott and Tsoi in [10].
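Pathwise, \(\mathcal{D}_{g}\varPhi \) is an ordinary directional derivative in the jump times, which can be illustrated by a finite-difference check. In the sketch below, the test function φ, the direction g, and the jump times are our own choices, and \(G(t):=\int _{0}^{t}g(u)du\):

```python
import math

two_pi = 2 * math.pi

def G(t):
    # G(t) = int_0^t cos(2 pi u) du; g(u) = cos(2 pi u) is a loop on [0,1]
    return math.sin(two_pi * t) / two_pi

def phi(x0, t1, t2):
    # a smooth test function of the initial value and two jump times
    return x0 + math.sin(t1) * t2

x0, t1, t2 = 0.0, 0.3, 0.7

# Analytic directional derivative: D_g Phi = sum_j G(T_j) * dphi/dT_j
analytic = G(t1) * math.cos(t1) * t2 + G(t2) * math.sin(t1)

# Finite-difference approximation of the limit defining D_g Phi
eps = 1e-6
fd = (phi(x0, t1 + eps * G(t1), t2 + eps * G(t2)) - phi(x0, t1, t2)) / eps
```

The two values agree up to O(ε), as expected from the definition of \(\mathcal{D}_{g}\).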

3.2.2 Duality Formula Under the Poisson Process and Its Reciprocal Class

We are now able to present the duality between \(\mathcal{D}\) and an integration operator under all probability measures in the reciprocal class of the standard Poisson process. Recall the notations introduced in Example 2: \(\mathbf{P}\) denotes the standard Poisson process on [0,1], and \(\mathbf{P}_{\lambda }\) denotes a Poisson process on [0,1] with intensity λ and arbitrary marginal law at time 0.

Theorem 7.

Let Q be a probability measure on \(\varOmega _{j}\) such that \(E_{Q}(\vert X_{1}-X_{0}\vert ) < +\infty \). Then:

$$\displaystyle\begin{array}{rcl} & & Q = \mathbf{P}_{\lambda } \Leftrightarrow \forall \varPhi \in \mathcal{S},E_{Q}(\mathcal{D}_{g}\varPhi ) = E_{Q}\left (\varPhi \int _{0}^{1}g(s)(dX_{ s} -\lambda ds)\right ),\forall g\,\,\mathrm{simple}{}\end{array}$$
(10)
$$\displaystyle\begin{array}{rcl} Q \in \mathfrak{R}_{c}(\mathbf{P}) \Leftrightarrow \forall \varPhi \in \mathcal{S},E_{Q}\ (\mathcal{D}_{g}\varPhi )\ =\ E_{Q}\left (\varPhi \int _{0}^{1}g(s)(dX_{ s}-ds)\right )\forall g\,\,\mathrm{simple}\,\,\mathrm{loop}.& &{}\end{array}$$
(11)

Proof.

  • Sketch of \(\stackrel{(10)}{\Leftrightarrow }\): The main tool is Watanabe's characterization: Q is a Poisson process with intensity λ on \(\varOmega _{j}\) if and only if \((X_{t} - X_{0} -\lambda t)_{t}\) is a Q-martingale.

  • Sketch of \(\stackrel{(11)}{\Leftrightarrow }\): One fixes an initial value x and identifies the compensator of \({Q}^{x}\). Using (11), one shows that this compensator is absolutely continuous with respect to Lebesgue measure, with Markov intensity of the form \({\ell}^{x}(t,X_{{t}^{-}})\), and that \(\varXi _{{\ell}^{x}} \equiv 0\). Thanks to Theorem 3, one can conclude.

    For details, see [20] Theorem 6.39. □ 

Remark 3.

  1. Equation (10) is an infinite-dimensional generalization of the formula characterizing the Poisson distribution \(\mathcal{P}_{\alpha }\) on \(\mathbb{N}\), known as Chen's lemma; see [5]: Let Z be a real-valued random variable. Then

    $$\displaystyle{ Z \sim \mathcal{P}_{\alpha }\Leftrightarrow \forall \varphi \mbox{ smooth,}\quad E(\varphi (Z)Z) =\alpha \, E(\varphi (Z + 1)). }$$
  2. For loops g, the right-hand side of (11) indeed reduces to \(E_{Q}\big(\varPhi \int _{0}^{1}g(s)dX_{s}\big)\). Therefore one immediately recovers that all Poisson processes, whatever their intensity, belong to a unique reciprocal class: that of the standard Poisson process P. In particular, the law of the bridges of a Poisson process depends only on the boundary conditions and not on the original intensity.
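Chen's lemma can be checked deterministically by summing against the Poisson probability mass function. In the sketch below, the choices of α and φ are ours, and the series is truncated where the pmf is negligible:

```python
import math

alpha = 2.0
phi = math.sin                       # arbitrary bounded test function
K = 80                               # truncation level; the Poisson tail beyond it is negligible
pmf = [math.exp(-alpha) * alpha**k / math.factorial(k) for k in range(K)]

lhs = sum(phi(k) * k * p for k, p in enumerate(pmf))          # E[phi(Z) Z]
rhs = alpha * sum(phi(k + 1) * p for k, p in enumerate(pmf))  # alpha E[phi(Z + 1)]
```

The two sums agree to machine precision, reflecting the exact identity \(E(\varphi (Z)Z) =\alpha \, E(\varphi (Z + 1))\) for \(Z \sim \mathcal{P}_{\alpha }\).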

3.2.3 Duality Formula Under the Reciprocal Class of a Counting Process

We now investigate how the duality formula (11) is perturbed when the underlying reference process \(\mathbf{P}_{\ell}\) admits a jump intensity \(\ell\) which is no longer constant but smooth enough, as in Theorem 3. Similarly to Sect. 3.1.3, the transformed duality equation (12) presented below contains an additional term of order 0 in Φ, in which the reciprocal invariant \(\varXi _{\ell}\) associated with \(\mathbf{P}_{\ell}\) appears.

Theorem 8.

Let Q be a probability measure on \(\varOmega _{j}\) such that \(E_{Q}(\vert X_{1} - X_{0}\vert ) < +\infty \). Then:

$$\displaystyle\begin{array}{rcl} & & Q \in \mathfrak{R}_{c}(\mathbf{P}_{\ell}) \Leftrightarrow \forall \varPhi \in \mathcal{S},\ \forall g\,\,\mbox{ simple loop, } \\ & & E_{Q}(\mathcal{D}_{g}\varPhi ) = E_{Q}\Big(\varPhi \,\,\int _{0}^{1}g(s)(dX_{ s}-ds)\Big)+E_{Q}\Big(\varPhi \,\,\int _{0}^{1}g(s)\int _{ s}^{1}\varXi _{\ell}(r,X_{{r}^{-}})dX_{r}ds\Big).{}\end{array}$$
(12)

Such a duality formula can be used for several purposes. One application is, e.g., the study of the time reversal of reciprocal processes belonging to the class \(\mathfrak{R}_{c}(\mathbf{P}_{\ell})\); see [20] for details.

The extension of these results to pure jump processes with general jumps is in preparation.