Abstract
Let C[0, T] denote an analogue of Wiener space, the space of real-valued continuous functions on the interval [0, T]. For a partition \(0=t_0<t_1<\cdots <t_n=T\) of [0, T], define \(X:C[0,T]\rightarrow \mathbb R^{n+1}\) by \(X(x)=(x(t_0),x(t_1),\ldots ,x(t_n))\). In this paper, we derive a simple evaluation formula for Radon–Nikodym derivatives similar to the conditional expectations of functions on C[0, T] with the conditioning function X, which has a drift and an initial weight. As applications of the formula, we evaluate the Radon–Nikodym derivatives of the functions \(\int _0^T[x(t)]^m\mathrm{d}\lambda (t)\ (m\in \mathbb N)\) and \([\int _0^Tx(t)\mathrm{d}\lambda (t)]^2\) on C[0, T], where \(\lambda \) is a complex-valued Borel measure on [0, T].
1 Introduction
A time integral is simply the Riemann integral of a function of the continuous random variable \(W(x,t)=x(t)\) with respect to the parameter t for \(x\in C_0[0,T]\), where \(C_0[0,T]\) is the Wiener space, the space of continuous real-valued functions x on [0, T] with \(x(0)=0\). The Feynman–Kac functional on \(C_0[0,T]\) is given by \(\exp \{-\int _0^T V(t, W(x,t))\mathrm{d}t\}\), which contains the time integral, where V is a complex-valued potential. Calculations involving the conditional expectations of the Feynman–Kac functional are important in the study of the Feynman integral [8], and they can provide a solution of the integral equation which is formally equivalent to the Schrödinger equation [5]. In particular, when \(0=t_0<t_1<\cdots <t_n=T\) is a partition of [0, T] and \(\xi _j\in \mathbb R\) for \(j=0,1,\ldots ,n\), the conditional expectation of the time integral over the paths passing through the point \(\xi _j\) at each time \(t_j\) is very useful in Brownian motion theory. On the space \(C_0[0,T]\), Yeh [12] introduced an inversion formula by which a conditional expectation can be evaluated via a Fourier transform, with simple calculations of the conditional expectations. However, his inversion formula becomes quite complicated in applications when the conditioning function is vector-valued. In [7], Park and Skoug derived a simple formula for conditional Wiener integrals containing the time integral with the conditioning function \((x(t_1),\ldots ,x(t_n))\) for \(x\in C_0[0,T]\). In their simple formula, they expressed the conditional Wiener integral directly in terms of an ordinary Wiener integral, which generalizes Yeh's inversion formula. We note that the Wiener measure used in [7, 12] has no drift, with variance function \(\beta (t)=t\) for \(t\in [0,T]\).
On the other hand, let C[0, T] denote the space of continuous real-valued functions on the interval [0, T]. Im and Ryu [6, 9] introduced a finite positive measure \(w_\varphi \) on C[0, T], where \(\varphi \) is a finite positive measure on the Borel class of \(\mathbb R\). We note that \(w_\varphi \) is exactly the Wiener measure on \(C_0[0,T]\) if \(\varphi =\delta _{0}\), the Dirac measure concentrated at 0. When \(\varphi \) is a probability measure, the author [2] and Ryu [9] separately derived the same simple formula for a generalized conditional Wiener integral of functions on C[0, T] with the conditioning function \(X(x)=(x(t_0),x(t_1),\ldots ,x(t_n))\) for \(x\in C[0,T]\). They evaluated the conditional integrals of various functions which contain the time integral and are of interest in both the Feynman integral and quantum mechanics. To derive the formula, the author directly proved the independence of the Brownian bridge used in the evaluations, while Ryu established the independence by means of characteristic functions. In both cases [2, 9], W has no drift, with variance function \(\beta (t)=t\) on [0, T]. Recently, the author [4] derived a simple evaluation formula for Radon–Nikodym derivatives similar to the conditional expectations with the conditioning function Y defined by \(Y(x)=(x(t_0),x(t_1),\ldots ,x(t_{n-1}))\) for \(x\in C[0,T]\). He then evaluated the derivatives of various functions which play significant roles in the Feynman integral. In these results, W has a general drift and a more general variance function. Moreover, Y does not contain the present positions of the paths in C[0, T], that is, it does not depend on the present time.
In this paper, we investigate properties of the Fourier transform of the process W defined on \(C[0,T]\times [0,T]\). Using the Fourier transform of W, we derive a simple evaluation formula for Radon–Nikodym derivatives similar to the conditional expectations of functions on C[0, T] with the conditioning function X, which has a drift with the generalized variance function \(\beta \) and an initial weight \(\varphi \). As applications of the formula, we evaluate the Radon–Nikodym derivatives similar to the conditional expectations of the functions \(\int _0^T[W(x,t)]^m\mathrm{d}\lambda (t)\ (m\in \mathbb N)\) and \([\int _0^TW(x,t)\mathrm{d}\lambda (t)]^2\) on C[0, T], where \(\lambda \) is a complex-valued Borel measure on [0, T]. We note that W has a drift with the more general variance function \(\beta \), and our underlying space C[0, T] may not be a probability space, so that the results of this paper generalize those of [2, 7, 9, 12]. Furthermore, the conditioning function X contains the present positions of the paths in C[0, T], that is, it depends on the present time, while Y does not. We also note that the evaluations in this paper are simpler than those in [4]. The main results of this paper are evaluations of the Radon–Nikodym derivatives of time integrals with detailed examples, while the results of [4] focus on the translation theorem for derivatives, because our underlying measure is not invariant under translations.
2 An Analogue of Wiener Space
In this section, we introduce a finite measure over paths and investigate its properties. We now introduce a generalized analogue of Wiener space, which is taken from [3, 10, 11] with slight changes.
Let \(\alpha ,\beta :[0,T]\rightarrow \mathbb R\) be two functions, where \(\beta \) is continuous and strictly increasing. Let \(\varphi \) be a positive finite measure on the Borel class \(\mathcal B(\mathbb R)\) of \(\mathbb R\) and \(m_L\) be the Lebesgue measure on \(\mathcal B(\mathbb R)\). For \(\mathbf {t}_n=(t_0,t_1,\ldots ,t_n)\) with \(0=t_0<t_1<\cdots <t_n\le T\), let \(J_{\mathbf {t}_n}:C[0,T]\rightarrow \mathbb R^{n+1}\) be the function given by \(J_{\mathbf {t}_n}(x)=(x(t_0),x(t_1),\ldots ,x(t_n))\). For \(\prod _{j=0}^n B_j\in \mathcal B(\mathbb R^{n+1})\), the subset \(J_{\mathbf {t}_n}^{-1}(\prod _{j=0}^n B_j)\) of C[0, T] is called an interval I and let \(\mathcal I\) be the set of all such intervals I. Define a pre-measure \(m_{\alpha ,\beta ;\varphi }\) on \(\mathcal I\) by
$$\begin{aligned} m_{\alpha ,\beta ;\varphi }\biggl (J_{\mathbf {t}_n}^{-1}\biggl (\prod _{j=0}^n B_j\biggr )\biggr )=\int _{B_0}\int _{\prod _{j=1}^n B_j}W_n(\mathbf {t}_n;u_0,\mathbf {u}_n)\,\mathrm{d}m_L^n(\mathbf {u}_n)\,\mathrm{d}\varphi (u_0), \end{aligned}$$

where for \(\mathbf {u}_n=(u_1,\ldots ,u_n)\in \mathbb R^n\) and \(u_0\in \mathbb R\),

$$\begin{aligned} W_n(\mathbf {t}_n;u_0,\mathbf {u}_n)=\prod _{j=1}^n\biggl [\frac{1}{2\pi [\beta (t_j)-\beta (t_{j-1})]}\biggr ]^{1/2}\exp \biggl \{-\frac{[u_j-\alpha (t_j)-u_{j-1}+\alpha (t_{j-1})]^2}{2[\beta (t_j)-\beta (t_{j-1})]}\biggr \}. \end{aligned}$$
The Borel \(\sigma \)-algebra \(\mathcal B(C[0,T])\) of C[0, T] with the supremum norm coincides with the smallest \(\sigma \)-algebra generated by \(\mathcal I\), and there exists a unique positive finite measure \(w_{\alpha ,\beta ;\varphi }\) on \(\mathcal B(C[0,T])\) with \(w_{\alpha ,\beta ;\varphi }(I)=m_{\alpha ,\beta ;\varphi }(I)\) for all \(I\in \mathcal I\). This measure \(w_{\alpha ,\beta ;\varphi }\) is called an analogue of a generalized Wiener measure on \((C[0,T],\mathcal B(C[0,T]))\) according to \(\varphi \).
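Although the paper proceeds analytically, the defining identity of the pre-measure can be checked numerically in the simplest case. The following Python sketch (not from the paper; \(\alpha \), \(\beta \), the window [a, b], and the choice \(\varphi =N(0,1)\) are all illustrative) integrates the one-step Gaussian kernel against \(\varphi \) and compares the result with the implied normal law of \(x(t_1)\).

```python
import numpy as np
from math import erf, sqrt

# Illustrative sketch: measure of the interval I = {x : x(t_1) in [a, b]}
# computed from the pre-measure definition, with phi = N(0,1).
alpha = lambda t: 0.3 * t          # drift function (example choice)
beta = lambda t: 2.0 * t           # strictly increasing variance function (example choice)

t1, a, b = 0.7, -0.5, 1.2
dm, dv = alpha(t1) - alpha(0.0), beta(t1) - beta(0.0)

def trap(y, dx):                   # composite trapezoid rule
    return (y.sum() - 0.5 * (y[0] + y[-1])) * dx

u0 = np.linspace(-8.0, 8.0, 2001)  # effective support of phi = N(0,1)
u1 = np.linspace(a, b, 2001)
phi_density = np.exp(-u0**2 / 2) / np.sqrt(2 * np.pi)
U0, U1 = np.meshgrid(u0, u1, indexing="ij")
# one-step kernel W_1: Gaussian in u1 with mean u0 + dm and variance dv
W1 = np.exp(-(U1 - U0 - dm)**2 / (2 * dv)) / np.sqrt(2 * np.pi * dv)

inner = (W1.sum(axis=1) - 0.5 * (W1[:, 0] + W1[:, -1])) * (u1[1] - u1[0])
measure = trap(inner * phi_density, u0[1] - u0[0])

# closed form: x(t_1) ~ N(dm, 1 + dv) since phi = N(0,1)
Phi = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))
closed = Phi((b - dm) / sqrt(1.0 + dv)) - Phi((a - dm) / sqrt(1.0 + dv))
```

Since \(\varphi \) here is a probability measure, the computed measure is the probability that a path lies in the interval I.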
Theorem 1
[6, Lemma 2.1] If \(f:\mathbb R^{n+1}\rightarrow \mathbb C\) is a Borel measurable function, then the following equality holds:

$$\begin{aligned}&\int _{C[0,T]}f(x(t_0),x(t_1),\ldots ,x(t_n))\,\mathrm{d}w_{\alpha ,\beta ;\varphi }(x)\\&\quad \overset{*}{=}\int _{\mathbb R}\int _{\mathbb R^n}f(u_0,u_1,\ldots ,u_n)W_n(\mathbf {t}_n;u_0,\mathbf {u}_n)\,\mathrm{d}m_L^n(\mathbf {u}_n)\,\mathrm{d}\varphi (u_0), \end{aligned}$$
where \(\overset{*}{=}\) means that if either side exists, then both sides exist and they are equal.
By Theorem 1, we have the following lemma which is useful in the next sections [3].
Lemma 1
If \(0\le t_1\le t_2 \le t_3 \le t_4\le T\), then we have for nonnegative integers l and m,

$$\begin{aligned}&\int _{C[0,T]}[x(t_2)-x(t_1)]^l[x(t_4)-x(t_3)]^m\,\mathrm{d}w_{\alpha ,\beta ;\varphi }(x)\\&\quad =\varphi (\mathbb R)\biggl [\sum _{p=0}^{[\frac{l}{2}]}\frac{l!}{2^pp!(l-2p)!}[\beta (t_2)-\beta (t_1)]^p[\alpha (t_2)-\alpha (t_1)]^{l-2p}\biggr ]\\&\qquad \times \biggl [\sum _{q=0}^{[\frac{m}{2}]}\frac{m!}{2^qq!(m-2q)!}[\beta (t_4)-\beta (t_3)]^q[\alpha (t_4)-\alpha (t_3)]^{m-2q}\biggr ], \end{aligned}$$
where \([\frac{l}{2}]\) and \([\frac{m}{2}]\) denote the greatest integers which do not exceed \(\frac{l}{2}\) and \(\frac{m}{2}\), respectively.
Define a generalized stochastic process \(X_t (x):C[0,T]\rightarrow \mathbb R\) by \(X_t(x) = x(t)\) for \(t\in [0,T]\). By Lemma 1 and [3, Theorem 2.6], we have the following properties for \(X_t\):
- (P1): If \(t_1,t_2\in [0,T]\), then \(\int _{C[0,T]}[X_{t_2}(x)-X_{t_1}(x)]\mathrm{d}w_{\alpha ,\beta ;\varphi }(x)=\varphi (\mathbb R)[\alpha (t_2)-\alpha (t_1)]\).
- (P2): If \(t_1,t_2\in [0,T]\), then \(\int _{C[0,T]}[X_{t_2}(x)-X_{t_1}(x)]^2\mathrm{d}w_{\alpha ,\beta ;\varphi }(x)=\varphi (\mathbb R)[|\beta (t_2)-\beta (t_1)|+[\alpha (t_2)-\alpha (t_1)]^2]\).
- (P3): If \(0\le t_1\le t_2 \le t_3 \le t_4\le T\), then \(\int _{C[0,T]}[X_{t_2}(x)-X_{t_1}(x)][X_{t_4}(x)-X_{t_3}(x)]\mathrm{d}w_{\alpha ,\beta ;\varphi }(x)=\varphi (\mathbb R)[\alpha (t_2)-\alpha (t_1)][\alpha (t_4)-\alpha (t_3)]\) and \(\int _{C[0,T]}[X_{t_2}(x)-X_{t_1}(x)][X_{t_3}(x)-X_{t_1}(x)]\mathrm{d}w_{\alpha ,\beta ;\varphi }(x) =\varphi (\mathbb R)[[\alpha (t_2)-\alpha (t_1)][\alpha (t_3)-\alpha (t_1)]+\beta (t_2)-\beta (t_1)]\).
- (P4): The Fourier transform \(\mathcal F(X_0)\) of \(X_0\) is given by \(\mathcal F(X_0)(\xi )=\int _{\mathbb R}\exp \{i\xi u\}\,\mathrm{d}\varphi (u)\) for \(\xi \in \mathbb R\).
- (P5): If \(t_1,t_2\in [0,T]\), then the Fourier transform \(\mathcal F(X_{t_2}-X_{t_1})\) of \(X_{t_2}-X_{t_1}\) is given by \(\mathcal F(X_{t_2}-X_{t_1})(\xi )=\varphi (\mathbb R)\exp \{-\frac{1}{2}\xi ^2|\beta (t_2)-\beta (t_1)|+i\xi [\alpha (t_2)-\alpha (t_1)]\}\) for \(\xi \in \mathbb R\).
- (P6): If \(t\in [0,T]\), then the Fourier transform \(\mathcal F(X_t)\) of \(X_t\) can be expressed by \(\mathcal F(X_t)(\xi )=\frac{1}{\varphi (\mathbb R)}\mathcal F(X_t-X_0)(\xi )\mathcal F(X_0)(\xi )\) for \(\xi \in \mathbb R\).
By (P1), (P2), (P3) and (P5), we now have the following lemma.
Lemma 2
If \(\varphi (\mathbb R)=1\), then we have the following:
- (a) If \(t_1,t_2\in [0,T]\) with \(t_1\ne t_2\), then \(X_{t_2}-X_{t_1}\) is normally distributed with mean \(\alpha (t_2)-\alpha (t_1)\) and variance \(|\beta (t_2)-\beta (t_1)|\).
- (b) If \(0\le t_1\le t_2 \le t_3 \le t_4\le T\), then \(X_{t_2}-X_{t_1}\) and \(X_{t_4}-X_{t_3}\) are independent.
Let \(X: C[0,T]\rightarrow \mathbb R^{n+1}\) be Borel measurable and let \(F:C[0,T]\rightarrow \mathbb C\) be integrable. Let \(\mathcal D\) be the \(\sigma \)-field \(\{ X^{-1}(B) : B \in \mathcal B(\mathbb R^{n+1})\}\) and let \(w_{\mathcal D}\) be the measure induced by \(w_{\alpha ,\beta ;\varphi }\), that is, \(w_{\mathcal D} (E) = w_{\alpha ,\beta ;\varphi }(E)\) for \(E\in \mathcal D\). Define the set function \(w_X\) on \(\mathcal D\) by

$$\begin{aligned} w_X(E)=\int _E F(x)\,\mathrm{d}w_{\alpha ,\beta ;\varphi }(x)\quad \text {for}\ E\in \mathcal D. \end{aligned}$$
Clearly, \(w_X\) is a measure on \(\mathcal D\) with \(w_X \ll w_{\mathcal D}\), so that in view of the Radon–Nikodym theorem there exists a \(\mathcal D\)-measurable function \(\frac{\mathrm{d}w_X}{\mathrm{d}w_{\mathcal D}}\) defined on C[0, T] such that the relation

$$\begin{aligned} w_X(E)=\int _E\frac{\mathrm{d}w_X}{\mathrm{d}w_{\mathcal D}}(x)\,\mathrm{d}w_{\mathcal D}(x) \end{aligned}$$
holds for every \(E\in \mathcal D\). Here, the function \(\frac{\mathrm{d}w_X}{\mathrm{d}w_{\mathcal D}}\) is determined uniquely up to \(w_{\mathcal D}\) a.e. and it is called a generalized conditional expectation of F given X. On the other hand, let \(m_X\) be the image measure on the Borel class \(\mathcal B(\mathbb R^{n+1})\) of \(\mathbb R^{n+1}\) induced by X, that is, \(m_X=w_{\alpha ,\beta ;\varphi }\circ X^{-1}=w_{\mathcal D}\circ X^{-1}\). For every \(B\in \mathcal B(\mathbb R^{n+1})\), let

$$\begin{aligned} \mu _X(B)=w_X(X^{-1}(B)). \end{aligned}$$
Then, \(\mu _X=w_X\circ X^{-1}\) with \(\mu _X \ll m_X\), so that there exists an \(m_X\)-integrable function \(\frac{\mathrm{d}\mu _X}{\mathrm{d}m_X}\) defined on \(\mathbb R^{n+1}\) which is unique up to \(m_X\) a.e. such that for every \(B\in \mathcal B(\mathbb R^{n+1})\),

$$\begin{aligned} \mu _X(B)=\int _B\frac{\mathrm{d}\mu _X}{\mathrm{d}m_X}(\varvec{\eta })\,\mathrm{d}m_X(\varvec{\eta }). \end{aligned}$$
We now have, for \(E=X^{-1}(B)\in \mathcal D\),

$$\begin{aligned} w_X(E)=\mu _X(B)=\int _B\frac{\mathrm{d}\mu _X}{\mathrm{d}m_X}(\varvec{\eta })\,\mathrm{d}m_X(\varvec{\eta })=\int _E\biggl (\frac{\mathrm{d}\mu _X}{\mathrm{d}m_X}\circ X\biggr )(x)\,\mathrm{d}w_{\mathcal D}(x), \end{aligned}$$
where the third equality follows from the change of variable theorem. By uniqueness, \(\frac{\mathrm{d}w_X}{\mathrm{d}w_{\mathcal D}} (x) = (\frac{\mathrm{d}\mu _X}{\mathrm{d}m_X} \circ X)(x)\) for \(w_{\mathcal D}\) a.e. \(x\in C[0,T]\) and \(\frac{\mathrm{d}\mu _X}{\mathrm{d}m_X}\) is also called a generalized conditional expectation of F given X. Throughout this paper, we will consider the function \(\frac{\mathrm{d}\mu _X}{\mathrm{d}m_X}\) as the generalized conditional expectation of F given X and it is denoted by GE[F|X]. We note that GE[F|X] is a Radon–Nikodym derivative rather than a conditional expectation since \(m_X\) may not be a probability measure.
3 A Simple Formula for the Generalized Conditional Expectation
In this section, we derive a simple evaluation formula for the generalized conditional expectations of functions on C[0, T] with an appropriate conditioning function.
Throughout the remainder of this paper, we assume that \(0 = t_0< t_1< \cdots < t_n = T\) is an arbitrary fixed partition of [0, T] unless otherwise specified. To derive the desired simple evaluation formula for a generalized conditional expectation, we begin with letting
For a function \(f:[0,T]\rightarrow \mathbb R\), define the polygonal function \(P_\beta (f)\) of f by

$$\begin{aligned} P_\beta (f)(t)=\sum _{j=1}^n\chi _{(t_{j-1},t_j]}(t)\biggl [f(t_{j-1})+\frac{\beta (t)-\beta (t_{j-1})}{\beta (t_j)-\beta (t_{j-1})}[f(t_j)-f(t_{j-1})]\biggr ]+\chi _{\{t_0\}}(t)f(t_0) \end{aligned}$$
for \(t\in [0,T]\), where \(\chi \) denotes the characteristic function. Similarly, for \(\varvec{\eta } = (\eta _0,\eta _1,\ldots ,\eta _n )\in \mathbb R^{n+1}\), the polygonal function \(P_\beta (\varvec{\eta })\) of \(\varvec{\eta }\) on [0, T] is defined by (3) with \(f(t_j)\) replaced by \(\eta _j\) for \(j=0,1,\ldots ,n\). Then, both \(P_\beta (f)\) and \(P_\beta (\varvec{\eta })\) belong to C[0, T], and \(P_\beta (f)(t_j)=f(t_j)\), \(P_\beta (\varvec{\eta })(t_j) = \eta _j\) at each \(t_j\).
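The \(\beta \)-polygonal function interpolates linearly in the \(\beta \) time scale rather than in t. A minimal sketch (not from the paper; \(\beta \), the partition, and the values \(\eta _j\) are illustrative choices):

```python
import numpy as np

# Illustrative implementation of the beta-polygonal function P_beta(eta).
beta = lambda t: np.exp(t) - 1.0          # continuous, strictly increasing (example)

def P_beta(ts, eta, t):
    """Evaluate P_beta(eta)(t): on [t_{j-1}, t_j] it interpolates
    eta_{j-1} -> eta_j linearly in beta(t)."""
    j = np.searchsorted(ts, t, side="left")
    if j == 0:
        return eta[0]                     # t = t_0
    a, b = ts[j - 1], ts[j]
    w = (beta(t) - beta(a)) / (beta(b) - beta(a))
    return (1.0 - w) * eta[j - 1] + w * eta[j]

ts = np.array([0.0, 0.4, 1.0])            # partition t_0 < t_1 < t_2 = T
eta = np.array([1.0, -2.0, 3.0])
# P_beta reproduces the prescribed values at the partition points
vals = [P_beta(ts, eta, tj) for tj in ts]
```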
For \(s_1,s_2\in [0,T]\) and \(j=1,\ldots ,n\), let

$$\begin{aligned} \Gamma _j(s_1,s_2)=\frac{[\beta (t_j)-\beta (s_1)][\beta (s_2)-\beta (t_{j-1})]}{\beta (t_j)-\beta (t_{j-1})}. \end{aligned}$$
For \(t\in [0,T]\), let

$$\begin{aligned} \Gamma (t)=\sum _{j=1}^n\chi _{(t_{j-1},t_j]}(t)\Gamma _j(t,t) \end{aligned}$$
and let \(Z_t(x)=x(t)-P_\beta (x)(t)\) for \(x\in C[0,T]\). Note that if \(t\in [t_{j-1}, t_j]\) for some \(j\in \{1, \ldots , n\}\), then

$$\begin{aligned} Z_t(x)=x(t)-x(t_{j-1})-\frac{\beta (t)-\beta (t_{j-1})}{\beta (t_j)-\beta (t_{j-1})}[x(t_j)-x(t_{j-1})] \end{aligned}$$

and

$$\begin{aligned} Z_t(\alpha )=\alpha (t)-\alpha (t_{j-1})-\frac{\beta (t)-\beta (t_{j-1})}{\beta (t_j)-\beta (t_{j-1})}[\alpha (t_j)-\alpha (t_{j-1})]. \end{aligned}$$
We now have the following theorem.
Theorem 2
For \(t\in [0,T]\), the Fourier transform \(\mathcal F(Z_t)\) of \(Z_t\) is given by

$$\begin{aligned} \mathcal F(Z_t)(\xi )=\varphi (\mathbb R)\exp \biggl \{-\frac{1}{2}\xi ^2\Gamma (t)+i\xi Z_t(\alpha )\biggr \} \end{aligned}$$
for \(\xi \in \mathbb R\), where \(\Gamma (t)\) is given by (5). Moreover, if \(t\in (t_{j-1}, t_j)\) for some j and \(\varphi (\mathbb R)=1\), then \(Z_t\) is Gaussian with the mean \(Z_t(\alpha )\) and variance \(\Gamma (t)\).
Proof
If \(t=t_j\) for some \(j\in \{0,1,\ldots ,n\}\), then the first result is trivial. Now, suppose that \(t\in (t_{j-1},t_j)\) for some j. Let \(\varphi _0=\frac{1}{\varphi (\mathbb R)}\varphi \) and \(\mathcal F_{\varphi _0}(Z_t)\) be the Fourier transform of \(Z_t\) with respect to \(w_{\alpha ,\beta ;\varphi _0}\). Then, by (6), (7) and Lemma 2, \(\varphi _0\) is a probability measure and \(Z_t\) is Gaussian with respect to \(w_{\alpha ,\beta ;\varphi _0}\) with the mean \(Z_t(\alpha )\) and the variance \(\Gamma _j(t,t)\) given by (4), so that for \(\xi \in \mathbb R\)

$$\begin{aligned} \mathcal F_{\varphi _0}(Z_t)(\xi )=\exp \biggl \{-\frac{1}{2}\xi ^2\Gamma _j(t,t)+i\xi Z_t(\alpha )\biggr \}. \end{aligned}$$
Since \(\mathcal F(Z_t)(\xi )=\varphi (\mathbb R)\mathcal F_{\varphi _0}(Z_t)(\xi )\), we have the theorem. \(\square \)
Since \(\frac{1}{\varphi (\mathbb R)}\varphi \) is a probability measure, we have the following corollaries by Lemma 2 and Theorem 2.
Corollary 1
Let \(t\in [0,T]\) and \(f:\mathbb R\rightarrow \mathbb R\) be a Borel measurable function. Then, with the notation of Theorem 2, we have

$$\begin{aligned} \int _{C[0,T]}f(Z_t(x))\,\mathrm{d}w_{\alpha ,\beta ;\varphi }(x)\overset{*}{=}\varphi (\mathbb R)\biggl [\frac{1}{2\pi \Gamma (t)}\biggr ]^{1/2}\int _{\mathbb R}f(u)\exp \biggl \{-\frac{[u-Z_t(\alpha )]^2}{2\Gamma (t)}\biggr \}\mathrm{d}m_L(u) \end{aligned}$$

if \(t\in (t_{j-1},t_j)\) for some j. Moreover, if \(t=t_j\) for some \(j\in \{0,1,\ldots ,n\}\), then

$$\begin{aligned} \int _{C[0,T]}f(Z_t(x))\,\mathrm{d}w_{\alpha ,\beta ;\varphi }(x)=\varphi (\mathbb R)f(0) \end{aligned}$$

since \(Z_{t_j}=0\).
Corollary 2
Let \(s_1\in [t_{j-1},t_j]\) and \(s_2\in [t_{k-1},t_k]\) with \(j\ne k\). Then, the Fourier transform \(\mathcal F(Z_{s_1},Z_{s_2})\) of \((Z_{s_1},Z_{s_2})\) can be expressed by

$$\begin{aligned} \mathcal F(Z_{s_1},Z_{s_2})(\xi _1,\xi _2)=\frac{1}{\varphi (\mathbb R)}\mathcal F(Z_{s_1})(\xi _1)\mathcal F(Z_{s_2})(\xi _2) \end{aligned}$$

for \(\xi _1,\xi _2\in \mathbb R\). Consequently, the processes \(\{Z_t: t_{j-1}\le t \le t_j \}\), where \(j=1, \ldots , n\), are stochastically independent if \(\varphi (\mathbb R)=1\).
Lemma 3
Let \(0\le s_1\le s_2\le s_3\le T\). Then, we have the following:
- (a) The Fourier transform \(\mathcal F(X_{s_1},X_{s_3}-X_{s_2})\) of \((X_{s_1},X_{s_3}-X_{s_2})\) can be expressed by
$$\begin{aligned} \mathcal F(X_{s_1},X_{s_3}-X_{s_2})(\xi _1,\xi _2)=\frac{1}{\varphi (\mathbb R)}\mathcal F(X_{s_1})(\xi _1)\mathcal F(X_{s_3}-X_{s_2})(\xi _2) \end{aligned}$$for \(\xi _1,\xi _2\in \mathbb R\). Consequently, \(X_{s_1}\) and \(X_{s_3}-X_{s_2}\) are independent if \(\varphi \) is a probability measure.
- (b) The Fourier transform \(\mathcal F(X_{s_2},X_{s_3}-X_{s_1})\) of \((X_{s_2},X_{s_3}-X_{s_1})\) can be expressed by
$$\begin{aligned}&\mathcal F(X_{s_2},X_{s_3}-X_{s_1})(\xi _1,\xi _2)\\&\quad =\frac{1}{\varphi (\mathbb R)}\mathcal F(X_{s_2})(\xi _1)\mathcal F(X_{s_3}-X_{s_1})(\xi _2)\exp \{-\xi _1\xi _2[\beta (s_2)-\beta (s_1)]\} \end{aligned}$$for \(\xi _1,\xi _2\in \mathbb R\).
- (c) The Fourier transform \(\mathcal F(X_{s_3},X_{s_2}-X_{s_1})\) of \((X_{s_3},X_{s_2}-X_{s_1})\) can be expressed by
$$\begin{aligned}&\mathcal F(X_{s_3},X_{s_2}-X_{s_1})(\xi _1,\xi _2)\\&\quad =\frac{1}{\varphi (\mathbb R)}\mathcal F(X_{s_3})(\xi _1)\mathcal F(X_{s_2}-X_{s_1})(\xi _2)\exp \{-\xi _1\xi _2[\beta (s_2)-\beta (s_1)]\} \end{aligned}$$for \(\xi _1,\xi _2\in \mathbb R\).
Proof
For convenience, let \(s_0=0\), \(\mathbf {s}_3=(s_0,s_1,s_2,s_3)\) and \(\mathbf {u}_3=(u_1,u_2,u_3)\). We will prove this lemma for the case \(0<s_1<s_2<s_3\). The results for the other cases of \(\mathbf {s}_3\) can be similarly proved. By Theorem 1, we have
where \(W_3\) is given by (1) with \(n=3\). For \(j=1,2,3\), let \(v_j=u_j-\alpha (s_j)-u_{j-1}+\alpha (s_{j-1})\) and \(\mathbf {v}_3=(v_1,v_2,v_3)\). Then, we have by the change of variable theorem
by (P4), (P5) and (P6), which completes the proof of (a).
Similarly, we have by Theorem 1
which completes the proof of (b).
Finally, we also have by Theorem 1
which proves (c), completing the proof. \(\square \)
Lemma 4
If \(t\in [t_{j-1},t_j]\) and \(s\in [0,t_{j-1}]\cup [t_j,T]\) for some j, then the Fourier transform \(\mathcal F(X_s,Z_t)\) of \((X_s,Z_t)\) can be expressed by

$$\begin{aligned} \mathcal F(X_s,Z_t)(\xi _1,\xi _2)=\frac{1}{\varphi (\mathbb R)}\mathcal F(X_s)(\xi _1)\mathcal F(Z_t)(\xi _2) \end{aligned}$$

for \(\xi _1,\xi _2\in \mathbb R\), so that \(X_s\) and \(Z_t\) are independent if \(\varphi (\mathbb R)=1\).
Proof
Let \(\varphi _0=\frac{1}{\varphi (\mathbb R)}\varphi \) and let \(\mathcal F_{\varphi _0}\) denote the Fourier transform with respect to \(w_{\alpha ,\beta ;\varphi _0}\). First, we will prove that for \(\xi _1,\xi _2\in \mathbb R\)

$$\begin{aligned} \mathcal F_{\varphi _0}(X_s,Z_t)(\xi _1,\xi _2)=\mathcal F_{\varphi _0}(X_s)(\xi _1)\mathcal F_{\varphi _0}(Z_t)(\xi _2). \end{aligned}$$
If \(t=t_{j-1}\) or \(t=t_j\), then (8) follows immediately. Assume that \(t\in (t_{j-1},t_j)\). If \(s\in [0,t_{j-1}]\), then we have (8) by (6) and (a) of Lemma 3 since \(\varphi _0\) is a probability measure. Now, suppose that \(s=t_j\). For convenience, let \(\mathbf {s}_3=(s_0,s_1,s_2,s_3)=(0,t_{j-1},t,t_j)\) and \(\mathbf {u}_3=(u_1,u_2,u_3)\). By (6) and Theorem 1, we have
where \(W_3\) is given by (1) with \(n=3\). For \(j=1,2,3\), let \(v_j=u_j-\alpha (s_j)-u_{j-1}+\alpha (s_{j-1})\) and \(\mathbf {v}_3=(v_1,v_2,v_3)\). Then, we have by (6) and the change of variable theorem
so that we have by (P4), (P5), (P6), Lemma 2 and Theorem 2
which proves (8) for \(s=t_j\). Suppose that \(t_j<s\). Note that \(X_s-X_{t_j}\) and \(Z_t\) are independent with respect to \(w_{\alpha ,\beta ;\varphi _0}\) by (6) and Lemma 2 and that \(X_{t_j}\) and \(Z_t\) are also independent by the previous result. Consequently, \(X_s\) and \(Z_t\) are independent with respect to \(w_{\alpha ,\beta ;\varphi _0}\) since \(X_s=X_s-X_{t_j}+X_{t_j}\). Now, we have (8) and finally,
which is the desired result. \(\square \)
Using Lemma 4, we have the following theorem.
Theorem 3
Let \(X : C[0,T]\rightarrow \mathbb R^{n+1}\) be given by

$$\begin{aligned} X(x)=(x(t_0),x(t_1),\ldots ,x(t_n)). \end{aligned}$$
Then, the process \(\{Z_t: 0 \le t \le T\}\) and X are independent if \(\varphi (\mathbb R)=1\).
Theorem 4
Let \(F: C[0, T]\rightarrow \mathbb C\) be integrable and X be given by (9) in Theorem 3. Then, we have for \(m_X\) a.e. \(\varvec{\eta }\in \mathbb R^{n+1}\)

$$\begin{aligned} GE[F|X](\varvec{\eta })=\frac{1}{\varphi (\mathbb R)}\int _{C[0,T]}F(x-P_\beta (x)+P_\beta (\varvec{\eta }))\,\mathrm{d}w_{\alpha ,\beta ;\varphi }(x), \end{aligned}$$
where \(m_X\) is the measure on \(\mathcal B(\mathbb R^{n+1})\) induced by X.
Proof
Let \(\varphi _0=\frac{1}{\varphi (\mathbb R)}\varphi \) and let \(GE_{\varphi _0}[F|X]\) denote the (generalized) conditional expectation of F given X with respect to \(w_{\alpha ,\beta ;\varphi _0}\), which is a probability measure on C[0, T]. Applying the same method as used in the proofs of Theorem 2 in [7, p.383] and Theorem 3.3 in [9], with the aid of Problem 4 of [1, p.216], we have

$$\begin{aligned} GE_{\varphi _0}[F|X](\varvec{\eta })=\int _{C[0,T]}F(x-P_\beta (x)+P_\beta (\varvec{\eta }))\,\mathrm{d}w_{\alpha ,\beta ;\varphi _0}(x) \end{aligned}$$
for \(P_X\) a.e. \(\varvec{\eta }\in \mathbb R^{n+1}\), where \(P_X\equiv w_{\alpha ,\beta ;\varphi _0}\circ X^{-1}\) is the probability distribution of X on \((\mathbb R^{n+1},\mathcal B(\mathbb R^{n+1}))\). Note that for \(B\in \mathcal B(\mathbb R^{n+1})\),
so that B is a \(P_X\) null-set if and only if it is an \(m_X\) null-set. Now, we have
so that we have (10) by uniqueness of Radon–Nikodym derivative. \(\square \)
Remark 1
Note that Problem 4 of [1, p.216] states the following: Let \((\Omega ,\mathcal F,P)\) be a probability space and let \(\mathcal C_i\ (i\in I)\) be classes of sets in \(\mathcal F\). If the \(\mathcal C_i\) are independent classes and each \(\mathcal C_i\) is closed under finite intersection, then the minimal \(\sigma \)-algebras over the \(\mathcal C_i\) are also independent.
Using the above problem and Theorem 3, one can show that \(x-P_\beta (x)\) and X are independent since the Borel \(\sigma \)-algebra on C[0, T] is the smallest \(\sigma \)-algebra such that each coordinate mapping \(X_t\) is measurable. The independence of \(x-P_\beta (x)\) and X is essential for the proof of Theorem 4.
Remark 2
In the proof of Theorem 4, since B is a \(P_X\) null-set if and only if it is an \(m_X\) null-set, (10) can be rewritten, with \(\varphi _0=\frac{1}{\varphi (\mathbb R)}\varphi \), as

$$\begin{aligned} GE[F|X](\varvec{\eta })=\int _{C[0,T]}F(x-P_\beta (x)+P_\beta (\varvec{\eta }))\,\mathrm{d}w_{\alpha ,\beta ;\varphi _0}(x) \end{aligned}$$
for \(P_X\) a.e. \(\varvec{\eta }\in \mathbb R^{n+1}\) (or equivalently, for \(m_X\) a.e. \(\varvec{\eta }\in \mathbb R^{n+1}\)).
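Our reading of the simple formula can be illustrated numerically: conditioning on X amounts to replacing the sampled polygonal part \(P_\beta (x)\) by \(P_\beta (\varvec{\eta })\). The sketch below (not from the paper; \(\beta \), the partition, and \(\varvec{\eta }\) are illustrative choices, with \(\alpha =0\) and \(\varphi =N(0,1)\)) compares a Monte Carlo evaluation of the right-hand side for \(F(x)=[x(t)]^2\) with the Gaussian closed form \(\Gamma (t)+[P_\beta (\varvec{\eta })(t)]^2\).

```python
import numpy as np

# Illustrative Monte Carlo check of the simple formula for F(x) = [x(t)]^2,
# with alpha = 0 and phi = N(0,1) (a probability measure).
rng = np.random.default_rng(2)
beta = lambda t: 1.5 * t                 # variance function (example choice)
t0, t1 = 0.0, 0.5                        # first subinterval of the partition of [0, 1]
eta = np.array([0.2, -0.4, 1.0])         # conditioning values (eta_0, eta_1, eta_2)
t = 0.3                                  # evaluation time in (t_0, t_1)

n = 400_000
x_t0 = rng.normal(0.0, 1.0, n)                                  # x(t_0) ~ phi
x_t = x_t0 + rng.normal(0.0, np.sqrt(beta(t) - beta(t0)), n)    # x(t)
x_t1 = x_t + rng.normal(0.0, np.sqrt(beta(t1) - beta(t)), n)    # x(t_1)

w = (beta(t) - beta(t0)) / (beta(t1) - beta(t0))   # interpolation weight in beta-scale
P_x = (1 - w) * x_t0 + w * x_t1                    # P_beta(x)(t)
P_eta = (1 - w) * eta[0] + w * eta[1]              # P_beta(eta)(t)

mc = np.mean((x_t - P_x + P_eta)**2)               # RHS of the simple formula
Gamma = (beta(t) - beta(t0)) * (beta(t1) - beta(t)) / (beta(t1) - beta(t0))
closed_form = Gamma + P_eta**2                     # Gaussian second moment
```

That the initial value \(x(t_0)\) drops out of \(x(t)-P_\beta (x)(t)\) is exactly the independence of the bridge part from X asserted in Theorem 3.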
Remark 3
Lemma 4 and Theorem 4 are extensions of Theorems 3.1 and 3.2, respectively, in [9]. They also extend Theorems 2.8 and 2.9 in [2].
4 Evaluations of the Generalized Conditional Expectations
In this section, using Theorem 4, we evaluate the generalized conditional expectations of various functions which are useful in both quantum mechanics and the Feynman integration theory.
Lemma 5
- (a) If \(s_1\in [t_{j-1},t_j]\) and \(s_2\in [t_{k-1},t_k]\) with \(j\ne k\), then
$$\begin{aligned} \int _{C[0,T]}Z_{s_1}(x)Z_{s_2}(x)\mathrm{d}w_{\alpha ,\beta ;\varphi }(x)=\varphi (\mathbb R)Z_{s_1}(\alpha )Z_{s_2}(\alpha ). \end{aligned}$$
- (b) If \(s_1,s_2\in [t_{j-1},t_j]\), then we have
$$\begin{aligned}&\int _{C[0,T]}Z_{s_1}(x)Z_{s_2}(x)\mathrm{d}w_{\alpha ,\beta ;\varphi }(x)\\&\quad =\varphi (\mathbb R)[Z_{s_1}(\alpha )Z_{s_2}(\alpha )+\Gamma _j(s_1\vee s_2,s_1\wedge s_2)], \end{aligned}$$where \(s_1\vee s_2=\max \{s_1,s_2\}\), \(s_1\wedge s_2=\min \{s_1,s_2\}\) and \(\Gamma _j\) is given by (4), so that \(Cov(Z_{s_1},Z_{s_2})=\Gamma _j(s_1\vee s_2,s_1\wedge s_2)\) if \(\varphi (\mathbb R)=1\).
Proof
Let \(\varphi _0=\frac{1}{\varphi (\mathbb R)}\varphi \). Then, (a) with \(w_{\alpha ,\beta ;\varphi _0}\) follows from Corollary 2. We now prove (b) for \(w_{\alpha ,\beta ;\varphi _0}\). For convenience, suppose that \(s_1\le s_2\). By (P2), (P3) and Lemma 1, we have

which proves (b) for \(w_{\alpha ,\beta ;\varphi _0}\). Since \(w_{\alpha ,\beta ;\varphi }=\varphi (\mathbb R)w_{\alpha ,\beta ;\varphi _0}\), we have the lemma and the proof is completed. \(\square \)
Theorem 5
For \(s_1,s_2\in [0,T]\) and \(x\in C[0,T]\), let \(G(x)=x(s_1)x(s_2)\) and suppose that \(\int _{\mathbb R}u^2\mathrm{d}\varphi (u)<\infty \). Then, G is \(w_{\alpha ,\beta ;\varphi }\)-integrable and we have the following:
- (a) If \(s_1\in [t_{j-1},t_j]\) and \(s_2\in [t_{k-1},t_k]\) with \(j\ne k\), then for \(m_X\) a.e. \(\varvec{\eta }\in \mathbb R^{n+1}\), we have
$$\begin{aligned} GE[G|X](\varvec{\eta })=[Z_{s_1}(\alpha )+P_\beta (\varvec{\eta })(s_1)][Z_{s_2}(\alpha )+P_\beta (\varvec{\eta })(s_2)]. \end{aligned}$$
- (b) If \(s_1,s_2\in [t_{j-1},t_j]\), then for \(m_X\) a.e. \(\varvec{\eta }\in \mathbb R^{n+1}\), we have
$$\begin{aligned} GE[G|X](\varvec{\eta })= & {} [Z_{s_1}(\alpha )+P_\beta (\varvec{\eta })(s_1)][Z_{s_2}(\alpha )+P_\beta (\varvec{\eta })(s_2)]\\&+\Gamma _j(s_1\vee s_2,s_1\wedge s_2). \end{aligned}$$
Proof
Without loss of generality, we will prove the theorem for the case \(0\le s_1\le s_2\le T\). For \(0=s_0<s_1<s_2\), we have by Theorem 1 and the change of variable theorem
which is finite since \(\int _{\mathbb R}u^2\mathrm{d}\varphi (u)<\infty \). The integrability of G for the other cases follows similarly. Moreover, for \(m_X\) a.e. \(\varvec{\eta }\in \mathbb R^{n+1}\), we have by Theorem 4
where \(\varphi _0=\frac{1}{\varphi (\mathbb R)}\varphi \). Suppose that \(s_1\in [t_{j-1},t_j]\) and \(s_2\in [t_{k-1},t_k]\) with \(j\ne k\). By Corollaries 1, 2 and (a) of Lemma 5, we have
which proves (a) in this theorem. To prove (b) of this theorem, suppose that \(s_1,s_2\in [t_{j-1},t_j]\). Then, we have by (b) of Lemma 5
which completes the proof. \(\square \)
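The bridge covariance appearing in Lemma 5(b) and Theorem 5(b) can likewise be checked by simulation. In the sketch below (not from the paper; \(\alpha =0\), \(\varphi =N(0,1)\); \(\beta \), the subinterval, and the times are illustrative, and the expression used for \(\Gamma _j\) is our reading of (4)), the empirical covariance of \(Z_{s_1}\) and \(Z_{s_2}\) is compared with the \(\beta \)-scaled Brownian-bridge covariance.

```python
import numpy as np

# Illustrative Monte Carlo check of Cov(Z_{s1}, Z_{s2}) for two times in the
# same subinterval [ta, tb], with alpha = 0 and phi = N(0,1).
rng = np.random.default_rng(3)
beta = lambda t: t + 0.5 * t**2   # strictly increasing variance function (example)
ta, tb = 0.2, 0.8                 # subinterval [t_{j-1}, t_j]
s1, s2 = 0.35, 0.6                # s1 <= s2 in [ta, tb]

n = 500_000
xa = rng.normal(0.0, 1.0, n)      # arbitrary start (the bridge is independent of it)
x1 = xa + rng.normal(0.0, np.sqrt(beta(s1) - beta(ta)), n)
x2 = x1 + rng.normal(0.0, np.sqrt(beta(s2) - beta(s1)), n)
xb = x2 + rng.normal(0.0, np.sqrt(beta(tb) - beta(s2)), n)

def Z(x, s):   # Z_s(x) = x(s) - P_beta(x)(s), interpolating between (ta, tb)
    w = (beta(s) - beta(ta)) / (beta(tb) - beta(ta))
    return x - ((1 - w) * xa + w * xb)

cov = np.mean(Z(x1, s1) * Z(x2, s2))   # mean-zero since alpha = 0
Gamma_j = ((beta(s1) - beta(ta)) * (beta(tb) - beta(s2))
           / (beta(tb) - beta(ta)))    # bridge covariance in the beta scale
```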
Theorem 6
For \(x\in C[0,T]\), let \(G_1(x)=[\int _0^Tx(t)\mathrm{d}\lambda (t)]^2\), where \(\lambda \) is a continuous complex measure on the Borel class of [0, T]. Suppose that
Then, \(G_1\) is \(w_{\alpha ,\beta ;\varphi }\)-integrable and for \(m_X\) a.e. \(\varvec{\eta }\in \mathbb R^{n+1}\), we have

$$\begin{aligned} GE[G_1|X](\varvec{\eta })=\biggl [\int _0^T[Z_t(\alpha )+P_\beta (\varvec{\eta })(t)]\,\mathrm{d}\lambda (t)\biggr ]^2+\int _{[0,T]^2}\Lambda (s,t)\,\mathrm{d}(\lambda \times \lambda )(s,t), \end{aligned}$$
where \(\Lambda (s,t)=\sum _{j=1}^n\chi _{[t_{j-1},t_j]^2}(s,t)\Gamma _j(s,t)\) for \((s,t)\in [0,T]^2\).
Proof
To prove the integrability of \(G_1\), let \(\Delta _1=\{(s_1,s_2):0\le s_1<s_2\le T\}\), \(\Delta _2=\{(s_1,s_2):0\le s_2<s_1\le T\}\) and let G be the function as given in Theorem 5. Since \(\lambda \) is continuous, we have
Note that for \(0\le s<t\le T\), we have
and
Using the above facts with (11) and (12), we have for \(l=1,2\),
which proves that \(G_1\) is \(w_{\alpha ,\beta ;\varphi }\)-integrable. To evaluate \(GE[G_1|X]\), let \(A=\{(j,k)\in \mathbb N^2 :1\le j,k\le n, j\ne k\}\). For \(m_X\) a.e. \(\varvec{\eta }\in \mathbb R^{n+1}\), we have by Theorems 4 and 5
which is the desired result. \(\square \)
Theorem 7
Let \(m\in \mathbb N\) and \(t\in [0,T]\). For \(x\in C[0,T]\), let \(F_t(x)=[x(t)]^m\) and suppose that \(\int _{\mathbb R}|u|^m \mathrm{d}\varphi (u)<\infty \). Then, \(F_t\) is \(w_{\alpha ,\beta ;\varphi }\)-integrable and for \(m_X\) a.e. \(\varvec{\eta }\in \mathbb R^{n+1}\), we have

$$\begin{aligned} GE[F_t|X](\varvec{\eta })=\sum _{k=0}^{[\frac{m}{2}]}\frac{m!}{2^kk!(m-2k)!}[\Gamma (t)]^k[Z_t(\alpha )+P_\beta (\varvec{\eta })(t)]^{m-2k}, \end{aligned}$$
where \(\Gamma (t)\) is given by (5) and \([\frac{m}{2}]\) denotes the greatest integer less than or equal to \(\frac{m}{2}\). In particular, if \(t=t_j\) for some \(j\in \{0,1,\ldots ,n\}\), then \(GE[F_t|X](\varvec{\eta })=\eta _j^m\) for \(m_X\) a.e. \(\varvec{\eta }=(\eta _0,\eta _1,\ldots ,\eta _n)\in \mathbb R^{n+1}\).
Proof
Note that for \(l=0, 1, \ldots ,m \), \(|u|^l\) is \(\varphi \)-integrable on \(\mathbb R\), so that \(F_0\) is \(w_{\alpha ,\beta ;\varphi }\)-integrable. Moreover, if \(t\in (0,T]\), then we have by Theorem 1, the change of variable theorem and the multinomial expansion theorem
which is finite, so that \(F_t\) is \(w_{\alpha ,\beta ;\varphi }\)-integrable for all \(t\in [0,T]\). If \(t=t_j\) for some j, then for \(m_X\) a.e. \(\varvec{\eta }\in \mathbb R^{n+1}\), we have by Theorem 4
If \(t\in (t_{j-1},t_j)\) for some j, then we have by Corollary 1 and Theorem 4
Using the same process as used in the proof of [2, Theorem 3.1], we have
Moreover, if \(t=t_j\), then the right-hand side of the above equality reduces to \(\eta _j^m\) since \(\Gamma (t_j)=0\). The proof is now completed. \(\square \)
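The closed form in Theorem 7 is, in our reading, the m-th moment of a Gaussian random variable with mean \(Z_t(\alpha )+P_\beta (\varvec{\eta })(t)\) and variance \(\Gamma (t)\). The sketch below (the mean, variance, and m are illustrative values, not from the paper) implements that moment sum and checks it against a Monte Carlo estimate.

```python
import numpy as np
from math import comb, factorial

# Illustrative check: Gaussian moment sum versus Monte Carlo, for
# E[(mu + N)^m] with N ~ N(0, var).
rng = np.random.default_rng(4)

def gaussian_moment(m, mu, var):
    """E[(mu + N)^m] = sum_{k=0}^{[m/2]} C(m,2k) (2k-1)!! var^k mu^{m-2k}."""
    double_fact = lambda k: factorial(2 * k) // (2**k * factorial(k))  # (2k-1)!!
    return sum(comb(m, 2 * k) * double_fact(k) * var**k * mu**(m - 2 * k)
               for k in range(m // 2 + 1))

mu, var, m = 0.7, 0.3, 4
mc = np.mean((mu + rng.normal(0.0, np.sqrt(var), 1_000_000))**m)
exact = gaussian_moment(m, mu, var)
```

In the role of Theorem 7, mu would be \(Z_t(\alpha )+P_\beta (\varvec{\eta })(t)\) and var would be \(\Gamma (t)\); in particular \(\Gamma (t_j)=0\) collapses the sum to \(\eta _j^m\).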
Theorem 8
Let \(m\in \mathbb N\) and \(F(x)=\int _0^T[x(t)]^m \mathrm{d}\lambda (t)\) for \(x\in C[0,T]\), where \(\lambda \) is a finite \(\mathbb C\)-valued measure on the Borel class of [0, T]. Suppose that
Then, F is \(w_{\alpha ,\beta ;\varphi }\)-integrable and for \(m_X\) a.e. \(\varvec{\eta }=(\eta _0,\eta _1,\ldots ,\eta _n)\in \mathbb R^{n+1}\),
where \(GE[F_t|X](\varvec{\eta })\) is given by (13).
Proof
Note that F is \(w_{\alpha ,\beta ;\varphi }\)-integrable by the proof of Theorem 7, since
which is finite by (14). The equalities of the theorem follow immediately. \(\square \)
Applying the calculations used in [2, Example 3.3], we obtain the following example from Theorem 8.
Example 1
For \(l=1,2,3\), let \(F_l(x)= \int _0^T [x(t)]^l\mathrm{d}\beta (t)\) for \(x\in C[0,T]\). Then, we have the following:
- (a) If \(\int _{\mathbb R}|u| \mathrm{d}\varphi (u)< \infty \) and \(\int _0^T|\alpha (t)|\mathrm{d}\beta (t)<\infty \), then for \(m_X\) a.e. \(\varvec{\eta }=(\eta _0,\eta _1,\ldots ,\eta _n)\in \mathbb R^{n+1}\)
$$\begin{aligned}&GE[F_1|X](\varvec{\eta })\\&\quad =\sum _{j=1}^n\int _{t_{j-1}}^{t_j}[P_\beta (\varvec{\eta }-\alpha )(t)+\alpha (t)] \mathrm{d}\beta (t)\\&\quad =\int _0^T\alpha (t)\mathrm{d}\beta (t)+\frac{1}{2}\sum _{j=1}^n[\beta (t_j)-\beta (t_{j-1})] [\eta _j-\alpha (t_j)+\eta _{j-1}-\alpha (t_{j-1})]. \end{aligned}$$
- (b) If \(\int _{\mathbb R}u^2 \mathrm{d}\varphi (u)< \infty \) and \(\int _0^T[\alpha (t)]^2\mathrm{d}\beta (t)<\infty \), then for \(m_X\) a.e. \(\varvec{\eta }=(\eta _0,\eta _1,\ldots ,\eta _n)\in \mathbb R^{n+1}\)
$$\begin{aligned}&GE[F_2|X](\varvec{\eta })\\&\quad =\sum _{j=1}^n\int _{t_{j-1}}^{t_j}[\Gamma (t)+[P_\beta (\varvec{\eta }-\alpha )(t)+\alpha (t)]^2] \mathrm{d}\beta (t)\\&\quad =\int _0^T\alpha (t)[\alpha (t)+2P_\beta (\varvec{\eta }-\alpha )(t)]\mathrm{d}\beta (t)\\&\qquad +\,\sum _{j=1}^n\int _{t_{j-1}}^{t_j}[\Gamma (t)+[P_\beta (\varvec{\eta }-\alpha )(t)]^2]\mathrm{d}\beta (t)\\&\quad = \int _0^T\alpha (t)[\alpha (t)+2P_\beta (\varvec{\eta }-\alpha )(t)]\mathrm{d}\beta (t)+\frac{1}{6}\sum _{j=1}^n[\beta (t_j)-\beta (t_{j-1})][\beta (t_j)\nonumber \\&\qquad -\,\beta (t_{j-1})+2[[\eta _j-\alpha (t_j)]^2+[\eta _j-\alpha (t_j)][\eta _{j-1}-\alpha (t_{j-1})]+[\eta _{j-1}\\&\qquad -\,\alpha (t_{j-1})]^2]]. \end{aligned}$$
- (c) If \(\int _{\mathbb R}u^3 \mathrm{d}\varphi (u)< \infty \) and \(\int _0^T[\alpha (t)]^3\mathrm{d}\beta (t)<\infty \), then for \(m_X\) a.e. \(\varvec{\eta }=(\eta _0,\eta _1,\ldots ,\eta _n)\in \mathbb R^{n+1}\)
$$\begin{aligned}&GE[F_3|X](\varvec{\eta })\\&\quad = \sum _{j=1}^n\int _{t_{j-1}}^{t_j}[3\Gamma (t)[P_\beta (\varvec{\eta }-\alpha )(t)+\alpha (t)]+[P_\beta (\varvec{\eta }-\alpha )(t)+\alpha (t)]^3] \mathrm{d}\beta (t) \\&\quad =\int _0^T\alpha (t)[[\alpha (t)]^2+3[\Gamma (t)+P_\beta (\varvec{\eta }-\alpha )(t)[P_\beta (\varvec{\eta }-\alpha )(t)+\alpha (t)]]]\mathrm{d}\beta (t) \\&\qquad + \sum _{j=1}^n\int _{t_{j-1}}^{t_j}[3\Gamma (t)P_\beta (\varvec{\eta }-\alpha )(t)+[P_\beta (\varvec{\eta }-\alpha )(t)]^3]\mathrm{d}\beta (t) \\&\quad =\int _0^T\alpha (t)[[\alpha (t)]^2+3[\Gamma (t)+P_\beta (\varvec{\eta }-\alpha )(t)[P_\beta (\varvec{\eta }-\alpha )(t)+\alpha (t)]]]\mathrm{d}\beta (t)\\&\qquad +\,\frac{1}{4}\sum _{j=1}^n[\beta (t_j)-\beta (t_{j-1})][[\beta (t_j)-\beta (t_{j-1})][\eta _j-\alpha (t_j)+\eta _{j-1} \\&\qquad -\,\alpha (t_{j-1})]+[\eta _j-\alpha (t_j)]^3+[\eta _j-\alpha (t_j)]^2[\eta _{j-1}-\alpha (t_{j-1})]+[\eta _j \\&\qquad -\,\alpha (t_j)][\eta _{j-1}-\alpha (t_{j-1})]^2+[\eta _{j-1}-\alpha (t_{j-1})]^3]. \end{aligned}$$
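The second equality in part (a) rests on the elementary fact that a function linear in \(\beta (t)\) integrates against \(\mathrm{d}\beta (t)\) to the average of its endpoint values times the \(\beta \)-increment. A numeric check of that identity (not from the paper; \(\alpha \), \(\beta \), and the endpoint values are illustrative choices):

```python
import numpy as np

# Illustrative check of:
#   int_{t_{j-1}}^{t_j} P_beta(eta - alpha)(t) d beta(t)
#     = (1/2) [beta(t_j)-beta(t_{j-1})] [(eta_j - alpha(t_j)) + (eta_{j-1} - alpha(t_{j-1}))]
alpha = lambda t: np.cos(t)
beta = lambda t: np.exp(t)
ta, tb = 0.1, 0.9                  # one subinterval [t_{j-1}, t_j]
ea, eb = 1.3, -0.5                 # eta_{j-1}, eta_j

va, vb = ea - alpha(ta), eb - alpha(tb)

def P(t):                          # P_beta(eta - alpha)(t) on [ta, tb]
    w = (beta(t) - beta(ta)) / (beta(tb) - beta(ta))
    return (1 - w) * va + w * vb

# Riemann-Stieltjes integral with respect to d beta(t) via a fine grid
ts = np.linspace(ta, tb, 20_001)
mid = 0.5 * (ts[:-1] + ts[1:])
lhs = np.sum(P(mid) * np.diff(beta(ts)))
rhs = 0.5 * (beta(tb) - beta(ta)) * (va + vb)
```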
Theorem 9
Let \(F(x)=\exp \{ \int _0^T x(t) \mathrm{d}\beta (t)\}\) for \(x\in C[0,T]\) and suppose that F is \(w_{\alpha ,\beta ;\varphi }\)-integrable. For a partition \(\tau :0=t_0<t_1<\cdots <t_n=T\) of [0, T], let \(X_\tau (x)=(x(t_0),x(t_1),\ldots ,x(t_n))\) for \(x\in C[0,T]\). Then, for \(w_{\alpha ,\beta ;\varphi }\) a.e. \(y\in C[0, T]\), we have

$$\begin{aligned} \lim _{\Vert \tau \Vert \rightarrow 0}GE[F|X_\tau ](y)=F(y). \end{aligned}$$
Proof
For \(w_{\alpha ,\beta ;\varphi }\) a.e. \(y\in C[0,T]\), we have
by Theorem 4 and the same process as used in Example 1. Letting \(\Vert \tau \Vert \rightarrow 0\), we have
because \(\lim _{\Vert \tau \Vert \rightarrow 0}Z_t(x)=0\) for \(x\in C[0, T]\). The proof is now completed. \(\square \)
References
Ash, R.B.: Real Analysis and Probability. Academic Press, New York (1972)
Cho, D.H.: A simple formula for an analogue of conditional Wiener integrals and its applications. Trans. Am. Math. Soc. 360(7), 3795–3811 (2008)
Cho, D.H.: Measurable functions similar to the Itô integral and the Paley–Wiener–Zygmund integral over continuous paths. Filomat 32(18), 6441–6456 (2018)
Cho, D.H.: An evaluation formula for a generalized conditional expectation with translation theorems over paths. J. Korean Math. Soc. 57(2), 451–470 (2020)
Chung, D.M., Skoug, D.: Conditional analytic Feynman integrals and a related Schrödinger integral equation. SIAM J. Math. Anal. 20(4), 950–965 (1989)
Im, M.K., Ryu, K.S.: An analogue of Wiener measure and its applications. J. Korean Math. Soc. 39(5), 801–819 (2002)
Park, C., Skoug, D.: A simple formula for conditional Wiener integrals with applications. Pac. J. Math. 135(2), 381–394 (1988)
Park, C., Skoug, D.: A Kac–Feynman integral equation for conditional Wiener integrals. J. Integral Equ. Appl. 3(3), 411–427 (1991)
Ryu, K.S.: The simple formula of conditional expectation on analogue of Wiener measure. Honam Math. J. 30(4), 723–732 (2008)
Ryu, K.S.: The generalized analogue of Wiener measure space and its properties. Honam Math. J. 32(4), 633–642 (2010)
Ryu, K.S.: The translation theorem on the generalized analogue of Wiener space and its applications. J. Chungcheong Math. Soc. 26(4), 735–742 (2013)
Yeh, J.: Inversion of conditional expectations. Pac. J. Math. 52, 631–640 (1974)
Acknowledgements
This research was supported by Basic Science Research Program through the National Research Foundation (NRF) of Korea funded by the Ministry of Education (2017R1D1A1B03029876).
Communicated by See Keong Lee.
Cho, D.H. An Evaluation Formula for Radon–Nikodym Derivatives Similar to Conditional Expectations over Paths. Bull. Malays. Math. Sci. Soc. 44, 203–222 (2021). https://doi.org/10.1007/s40840-020-00946-3
Keywords
- Analogue of Wiener space
- Brownian motion
- Conditional expectation
- Fourier transform
- Radon–Nikodym derivative
- Time integral