1 Introduction

A time integral is simply the Riemann integral of a function of the continuous random variable \(W(x,t)=x(t)\) with respect to the parameter t for \(x\in C_0[0,T]\), the Wiener space of continuous real-valued functions x on [0, T] with \(x(0)=0\). The Feynman–Kac functional on \(C_0[0,T]\) is given by \(\exp \{-\int _0^T V(t, W(x,t))\mathrm{d}t\}\), which contains the time integral, where V is a complex-valued potential. Calculations involving the conditional expectations of the Feynman–Kac functional are important in the study of the Feynman integral [8], and they can provide a solution of the integral equation which is formally equivalent to the Schrödinger equation [5]. In particular, when \(0=t_0<t_1<\cdots <t_n=T\) is a partition of [0, T] and \(\xi _j\in \mathbb R\) for \(j=0,1,\ldots ,n\), the conditional expectation of the time integral over the paths passing through the point \(\xi _j\) at each time \(t_j\) is very useful in the theory of Brownian motion. On the space \(C_0[0,T]\), Yeh [12] introduced an inversion formula by which a conditional expectation can be evaluated via a Fourier transform with simple calculations of conditional expectations. However, his inversion formula becomes quite complicated in applications when the conditioning function is vector-valued. In [7], Park and Skoug derived a simple formula for conditional Wiener integrals containing the time integral with the conditioning function \((x(t_1),\ldots ,x(t_n))\) for \(x\in C_0[0,T]\). In their simple formula, they expressed the conditional Wiener integral directly in terms of an ordinary Wiener integral, which generalizes Yeh’s inversion formula. We note that the Wiener measure used in [7, 12] has no drift and has the variance function \(\beta (t)=t\) for \(t\in [0,T]\).

On the other hand, let C[0, T] denote the space of continuous real-valued functions on the interval [0, T]. Im and Ryu [6, 9] introduced a finite positive measure \(w_\varphi \) on C[0, T], where \(\varphi \) is a finite positive measure on the Borel class of \(\mathbb R\). We note that \(w_\varphi \) is exactly the Wiener measure on \(C_0[0,T]\) if \(\varphi =\delta _{0}\), the Dirac measure concentrated at 0. When \(\varphi \) is a probability measure, the author [2] and Ryu [9] independently derived the same simple formula for a generalized conditional Wiener integral of functions on C[0, T] with the conditioning function \(X(x)=(x(t_0),x(t_1),\ldots ,x(t_n))\) for \(x\in C[0,T]\). They evaluated the conditional integrals of various functions which contain the time integral and are of interest in both the Feynman integral and quantum mechanics. To derive the formula, the author directly proved the independence of the Brownian bridge used in the evaluations, while Ryu established this independence by means of the characteristic function. In both cases [2, 9], W has no drift and has the variance function \(\beta (t)=t\) on [0, T]. Recently, the author [4] derived a simple evaluation formula for Radon–Nikodym derivatives similar to the conditional expectations with the conditioning function Y defined by \(Y(x)=(x(t_0),x(t_1),\ldots ,x(t_{n-1}))\) for \(x\in C[0,T]\). He then evaluated this derivative for various functions which play significant roles in the Feynman integral. In these results, W has a general drift and a more general variance function. Moreover, Y does not contain the present positions of the paths in C[0, T], that is, it does not depend on the present time.

In this paper, we investigate properties of the Fourier transform of the process W defined on \(C[0,T]\times [0,T]\). Using the Fourier transform of W, we derive a simple evaluation formula for Radon–Nikodym derivatives similar to the conditional expectations of functions on C[0, T] with the conditioning function X, where W has a drift with the generalized variance function \(\beta \) and an initial weight \(\varphi \). As applications of the formula, we evaluate the Radon–Nikodym derivatives similar to the conditional expectations of the functions \(\int _0^T[W(x,t)]^m\mathrm{d}\lambda (t)\ (m\in \mathbb N)\) and \([\int _0^TW(x,t)\mathrm{d}\lambda (t)]^2\) on C[0, T], where \(\lambda \) is a complex-valued Borel measure on [0, T]. We note that W has a drift with the more general variance function \(\beta \), and our underlying space C[0, T] need not be a probability space, so that the results of this paper generalize those of [2, 7, 9, 12]. Furthermore, the conditioning function X contains the present positions of the paths in C[0, T], that is, it depends on the present time, while Y does not. We also note that the evaluations in this paper are simpler than those in [4]. The main results of this paper are evaluations of the Radon–Nikodym derivatives of time integrals with detailed examples, whereas the results of [4] focus on the translation theorem for derivatives, since our underlying measure is not invariant under translations.

2 An Analogue of Wiener Space

In this section, we introduce a finite measure over paths and investigate its properties. We now introduce a generalized analogue of Wiener space, which is cited from [3, 10, 11] with minor changes.

Let \(\alpha ,\beta :[0,T]\rightarrow \mathbb R\) be two functions, where \(\beta \) is continuous and strictly increasing. Let \(\varphi \) be a positive finite measure on the Borel class \(\mathcal B(\mathbb R)\) of \(\mathbb R\) and \(m_L\) be the Lebesgue measure on \(\mathcal B(\mathbb R)\). For \(\mathbf {t}_n=(t_0,t_1,\ldots ,t_n)\) with \(0=t_0<t_1<\cdots <t_n\le T\), let \(J_{\mathbf {t}_n}:C[0,T]\rightarrow \mathbb R^{n+1}\) be the function given by \(J_{\mathbf {t}_n}(x)=(x(t_0),x(t_1),\ldots ,x(t_n))\). For \(\prod _{j=0}^n B_j\in \mathcal B(\mathbb R^{n+1})\), the subset \(J_{\mathbf {t}_n}^{-1}(\prod _{j=0}^n B_j)\) of C[0, T] is called an interval I and let \(\mathcal I\) be the set of all such intervals I. Define a pre-measure \(m_{\alpha ,\beta ;\varphi }\) on \(\mathcal I\) by

$$\begin{aligned} m_{\alpha ,\beta ;\varphi }(I)=\int _{B_0}\int _{\prod _{j=1}^n B_j}W_n(\mathbf {t}_n,\mathbf {u}_n,u_0)\mathrm{d}m_L^n(\mathbf {u}_n)\mathrm{d}\varphi (u_0), \end{aligned}$$

where for \(\mathbf {u}_n=(u_1,\ldots ,u_n)\in \mathbb R^n\) and \(u_0\in \mathbb R\),

$$\begin{aligned} W_n(\mathbf {t}_n,\mathbf {u}_n,u_0)= & {} \biggl [\frac{1}{\prod _{j=1}^n2\pi [\beta (t_j)-\beta (t_{j-1})]}\biggr ]^{\frac{1}{2}} \nonumber \\&\times \exp \biggl \{-\frac{1}{2}\sum _{j=1}^n\frac{[u_j-\alpha (t_j)-u_{j-1}+\alpha (t_{j-1})]^2}{\beta (t_j)-\beta (t_{j-1})}\biggr \}. \end{aligned}$$
(1)

The Borel \(\sigma \)-algebra \(\mathcal B(C[0,T])\) of C[0, T] with the supremum norm coincides with the smallest \(\sigma \)-algebra generated by \(\mathcal I\), and there exists a unique positive finite measure \(w_{\alpha ,\beta ;\varphi }\) on \(\mathcal B(C[0,T])\) with \(w_{\alpha ,\beta ;\varphi }(I)=m_{\alpha ,\beta ;\varphi }(I)\) for all \(I\in \mathcal I\). This measure \(w_{\alpha ,\beta ;\varphi }\) is called an analogue of a generalized Wiener measure on \((C[0,T],\mathcal B(C[0,T]))\) according to \(\varphi \).
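To make the finite-dimensional density (1) concrete, the following minimal Python sketch samples paths at the partition points under the hypothetical choices \(\alpha (t)=t^2\), \(\beta (t)=t\) and \(\varphi =\delta _0\) (a probability measure, so sampling is meaningful); the function name and parameters are illustrative only.

```python
import numpy as np

# Minimal sketch: sample paths of the process at the partition points
# t_0 < t_1 < ... < t_n.  By the density W_n in (1), the increments
# x(t_j) - x(t_{j-1}) are independent Gaussians with mean
# alpha(t_j) - alpha(t_{j-1}) and variance beta(t_j) - beta(t_{j-1}),
# and x(0) = u_0 is distributed according to phi.  Hypothetical choices:
# alpha(t) = t**2, beta(t) = t, phi = delta_0 (so every path starts at 0).

def sample_at_partition(ts, alpha, beta, n_paths, rng):
    """Return an (n_paths, len(ts)) array of path values at the times ts."""
    ts = np.asarray(ts, dtype=float)
    means = alpha(ts[1:]) - alpha(ts[:-1])
    variances = beta(ts[1:]) - beta(ts[:-1])
    increments = rng.normal(means, np.sqrt(variances),
                            size=(n_paths, len(ts) - 1))
    x0 = np.zeros((n_paths, 1))          # phi = delta_0: x(0) = 0
    return np.hstack([x0, x0 + np.cumsum(increments, axis=1)])

rng = np.random.default_rng(0)
ts = [0.0, 0.25, 0.5, 0.75, 1.0]
paths = sample_at_partition(ts, alpha=lambda t: t ** 2, beta=lambda t: t,
                            n_paths=200_000, rng=rng)
# By (P1)-(P2) below, E[x(1)] = alpha(1) - alpha(0) = 1 and
# Var[x(1)] = beta(1) - beta(0) = 1; the sample statistics should be close.
print(paths[:, -1].mean(), paths[:, -1].var())
```

The sample mean and variance of \(x(1)\) recover the drift increment \(\alpha (1)-\alpha (0)\) and the variance increment \(\beta (1)-\beta (0)\), as (P1) and (P2) predict.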

Theorem 1

[6, Lemma 2.1] If \(f:\mathbb R^{n+1}\rightarrow \mathbb C\) is a Borel measurable function, then the following equality holds:

$$\begin{aligned}&\int _{C[0,T]}f(x(t_0),x(t_1),\ldots ,x(t_n))\mathrm{d}w_{\alpha ,\beta ;\varphi }(x)\\&\quad \overset{*}{=} \int _{\mathbb R^{n+1}}f(u_0,u_1,\ldots ,u_n)W_n(\mathbf {t}_n,\mathbf {u}_n,u_0)\mathrm{d}m_L^n(\mathbf {u}_n)\mathrm{d}\varphi (u_0), \end{aligned}$$

where \(\overset{*}{=}\) means that if either side exists, then both sides exist and they are equal.

By Theorem 1, we have the following lemma which is useful in the next sections [3].

Lemma 1

If \(0\le t_1\le t_2 \le t_3 \le t_4\le T\), then we have for nonnegative integers l and m,

$$\begin{aligned}&\int _{C[0,T]}[x(t_2)-x(t_1)]^l[x(t_4)-x(t_3)]^m\mathrm{d}w_{\alpha ,\beta ;\varphi }(x)\\&\quad = \varphi (\mathbb R)\biggl [\sum _{j=0}^{[\frac{l}{2}]}\frac{l![\alpha (t_2)-\alpha (t_1)]^{l-2j}}{2^jj!(l-2j)!}[\beta (t_2)-\beta (t_1)]^j\biggr ]\\&\qquad \times \biggl [\sum _{k=0}^{[\frac{m}{2}]}\frac{m![\alpha (t_4)-\alpha (t_3)]^{m-2k}}{2^kk!(m-2k)!}[\beta (t_4)-\beta (t_3)]^k\biggr ], \end{aligned}$$

where \([\frac{l}{2}]\) and \([\frac{m}{2}]\) denote the greatest integers which do not exceed \(\frac{l}{2}\) and \(\frac{m}{2}\), respectively.
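Each bracketed sum in Lemma 1 is precisely the l-th raw moment of a normal random variable with mean \(\alpha (t_2)-\alpha (t_1)\) and variance \(\beta (t_2)-\beta (t_1)\). As a quick sanity check, the following sketch (the helper name is illustrative) evaluates the sum and compares it with the familiar Gaussian moments.

```python
import math

# Sketch: each bracketed sum in Lemma 1 is the l-th raw moment of a normal
# random variable with mean m = alpha(t2) - alpha(t1) and variance
# v = beta(t2) - beta(t1).  The helper below (an illustrative name)
# evaluates that sum directly.

def gaussian_raw_moment(l, m, v):
    """E[X^l] for X ~ N(m, v), via the sum appearing in Lemma 1."""
    return sum(
        math.factorial(l) * m ** (l - 2 * j) * v ** j
        / (2 ** j * math.factorial(j) * math.factorial(l - 2 * j))
        for j in range(l // 2 + 1)
    )

# Centered case m = 0: E[X^2] = v, E[X^4] = 3 v^2, and odd moments vanish.
print(gaussian_raw_moment(2, 0.0, 2.0))   # -> 2.0
print(gaussian_raw_moment(4, 0.0, 1.0))   # -> 3.0
print(gaussian_raw_moment(3, 0.0, 1.0))   # -> 0.0
```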

Define a generalized stochastic process \(X_t:C[0,T]\rightarrow \mathbb R\) by \(X_t(x) = x(t)\) for \(t\in [0,T]\). By Lemma 1 and [3, Theorem 2.6], we have the following properties of \(X_t\):

(P1):

If \(t_1,t_2\in [0,T]\), then \(\int _{C[0,T]}[X_{t_2}(x)-X_{t_1}(x)]\mathrm{d}w_{\alpha ,\beta ;\varphi }(x)=\varphi (\mathbb R)[\alpha (t_2)-\alpha (t_1)]\).

(P2):

If \(t_1,t_2\in [0,T]\), then \(\int _{C[0,T]}[X_{t_2}(x)-X_{t_1}(x)]^2\mathrm{d}w_{\alpha ,\beta ;\varphi }(x)=\varphi (\mathbb R)[|\beta (t_2)-\beta (t_1)|+[\alpha (t_2)-\alpha (t_1)]^2]\).

(P3):

If \(0\le t_1\le t_2 \le t_3 \le t_4\le T\), then \(\int _{C[0,T]}[X_{t_2}(x)-X_{t_1}(x)][X_{t_4}(x)-X_{t_3}(x)]\mathrm{d}w_{\alpha ,\beta ;\varphi }(x)=\varphi (\mathbb R)[\alpha (t_2)-\alpha (t_1)][\alpha (t_4)-\alpha (t_3)]\) and \(\int _{C[0,T]}[X_{t_2}(x)-X_{t_1}(x)][X_{t_3}(x)-X_{t_1}(x)]\mathrm{d}w_{\alpha ,\beta ;\varphi }(x) =\varphi (\mathbb R)[[\alpha (t_2)-\alpha (t_1)][\alpha (t_3)-\alpha (t_1)]+\beta (t_2)-\beta (t_1)]\).

(P4):

The Fourier transform \(\mathcal F(X_0)\) of \(X_0\) is given by \(\mathcal F(X_0)(\xi )=\int _{\mathbb R}\exp \{i\xi u\}\,\mathrm{d}\varphi (u)\) for \(\xi \in \mathbb R\).

(P5):

If \(t_1,t_2\in [0,T]\), then the Fourier transform \(\mathcal F(X_{t_2}-X_{t_1})\) of \(X_{t_2}-X_{t_1}\) is given by \(\mathcal F(X_{t_2}-X_{t_1})(\xi )=\varphi (\mathbb R)\exp \{-\frac{1}{2}\xi ^2|\beta (t_2)-\beta (t_1)|+i\xi [\alpha (t_2)-\alpha (t_1)]\}\) for \(\xi \in \mathbb R\).

(P6):

If \(t\in [0,T]\), then the Fourier transform \(\mathcal F(X_t)\) of \(X_t\) can be expressed by \(\mathcal F(X_t)(\xi )=\frac{1}{\varphi (\mathbb R)}\mathcal F(X_t-X_0)(\xi )\mathcal F(X_0)(\xi )\) for \(\xi \in \mathbb R\).

By (P1), (P2), (P3) and (P5), we now have the following lemma.

Lemma 2

If \(\varphi (\mathbb R)=1\), then we have the following:

  1. (a)

    If \(t_1,t_2\in [0,T]\) with \(t_1\ne t_2\), then \(X_{t_2}-X_{t_1}\) is normally distributed with the mean \(\alpha (t_2)-\alpha (t_1)\) and the variance \(|\beta (t_2)-\beta (t_1)|\).

  2. (b)

    If \(0\le t_1\le t_2 \le t_3 \le t_4\le T\), then \(X_{t_2}-X_{t_1}\) and \(X_{t_4}-X_{t_3}\) are independent.

Let \(X: C[0,T]\rightarrow \mathbb R^{n+1}\) be Borel measurable and let \(F:C[0,T]\rightarrow \mathbb C\) be integrable. Let \(\mathcal D\) be the \(\sigma \)-field \(\{ X^{-1}(B) : B \in \mathcal B(\mathbb R^{n+1})\}\) and let \(w_{\mathcal D}\) be the measure induced by \(w_{\alpha ,\beta ;\varphi }\), that is, \(w_{\mathcal D} (E) = w_{\alpha ,\beta ;\varphi }(E)\) for \(E\in \mathcal D\). Define the set function \(w_X\) on \(\mathcal D\) by

$$\begin{aligned} w_X(E) =\int _E F(x) dw_{\alpha ,\beta ;\varphi }(x) \text { for } E\in \mathcal D. \end{aligned}$$

Clearly, \(w_X\) is a measure on \(\mathcal D\) with \(w_X \ll w_{\mathcal D}\), so that in view of the Radon–Nikodym theorem there exists a \(\mathcal D\)-measurable function \(\frac{dw_X}{dw_{\mathcal D}}\) defined on C[0, T] such that the relation

$$\begin{aligned} w_X (E)=\int _E \frac{dw_X}{dw_{\mathcal D}} (x)dw_{\mathcal D}(x) \end{aligned}$$

holds for every \(E\in \mathcal D\). Here, the function \(\frac{dw_X}{dw_{\mathcal D}}\) is determined uniquely up to \(w_{\mathcal D}\) a.e. and it is called a generalized conditional expectation of F given X. On the other hand, let \(m_X\) be the image measure on the Borel class \(\mathcal B(\mathbb R^{n+1})\) of \(\mathbb R^{n+1}\) induced by X, that is, \(m_X=w_{\alpha ,\beta ;\varphi }\circ X^{-1}=w_{\mathcal D}\circ X^{-1}\). For every \(B\in \mathcal B(\mathbb R^{n+1})\), let

$$\begin{aligned} \mu _X (B) = \int _{X^{-1} (B)} F(x) \mathrm{d}w_{\alpha ,\beta ;\varphi }(x). \end{aligned}$$

Then, \(\mu _X=w_X\circ X^{-1}\) with \(\mu _X \ll m_X\), so that there exists an \(m_X\)-integrable function \(\frac{\mathrm{d}\mu _X}{\mathrm{d}m_X}\) defined on \(\mathbb R^{n+1}\) which is unique up to \(m_X\) a.e. such that for every \(B\in \mathcal B(\mathbb R^{n+1})\),

$$\begin{aligned} \mu _X(B)= \int _{B}\frac{\mathrm{d}\mu _X}{\mathrm{d}m_X}(\varvec{\eta })\mathrm{d}m_X(\varvec{\eta }). \end{aligned}$$

We now have

$$\begin{aligned} \int _{X^{-1}(B)}\frac{\mathrm{d}\mu _X}{\mathrm{d}m_X}(X(x))\mathrm{d}w_{\mathcal D}(x)= & {} \int _{B} \frac{\mathrm{d}\mu _X}{\mathrm{d}m_X}(\varvec{\eta }) \mathrm{d}(w_{\mathcal D} \circ X^{-1})(\varvec{\eta })\\= & {} \int _{B}\frac{\mathrm{d}\mu _X}{\mathrm{d}m_X}(\varvec{\eta })\mathrm{d}m_X(\varvec{\eta }) \\= & {} \int _{X^{-1} (B)} \frac{\mathrm{d}w_X}{\mathrm{d}w_{\mathcal D}} (x) \mathrm{d}w_{\mathcal D}(x), \end{aligned}$$

where the third equality follows from the change of variable theorem. By uniqueness, \(\frac{\mathrm{d}w_X}{\mathrm{d}w_{\mathcal D}} (x) = (\frac{\mathrm{d}\mu _X}{\mathrm{d}m_X} \circ X)(x)\) for \(w_{\mathcal D}\) a.e. \(x\in C[0,T]\) and \(\frac{\mathrm{d}\mu _X}{\mathrm{d}m_X}\) is also called a generalized conditional expectation of F given X. Throughout this paper, we will consider the function \(\frac{\mathrm{d}\mu _X}{\mathrm{d}m_X}\) as the generalized conditional expectation of F given X and it is denoted by GE[F|X]. We note that GE[F|X] is a Radon–Nikodym derivative rather than a conditional expectation since \(m_X\) may not be a probability measure.

3 A Simple Formula for the Generalized Conditional Expectation

In this section, we derive a simple evaluation formula for the generalized conditional expectations of functions on C[0, T] with an appropriate conditioning function.

Throughout the remainder of this paper, we assume that \(0 = t_0< t_1< \cdots < t_n = T\) is an arbitrary fixed partition of [0, T] unless otherwise specified. To derive the desired simple evaluation formula for a generalized conditional expectation, we begin with letting

$$\begin{aligned} \gamma _{1j}(t)=\frac{\beta (t_j)-\beta (t)}{\beta (t_j)-\beta (t_{j-1})}\text { and } \gamma _{2j}(t)=\frac{\beta (t)-\beta (t_{j-1})}{\beta (t_j)-\beta (t_{j-1})} \text { for } t\in [0,T]. \end{aligned}$$
(2)

For a function \(f:[0,T]\rightarrow \mathbb R\), define the polygonal function \(P_\beta (f)\) of f by

$$\begin{aligned} P_\beta (f)(t)=\sum _{j=1}^n\chi _{(t_{j-1},t_j]}(t)[f(t_{j-1})+\gamma _{2j}(t)[f(t_j )-f(t_{j-1})]]+\chi _{\{0\}}(t)f(0)\quad \quad \end{aligned}$$
(3)

for \(t\in [0,T]\), where \(\chi \) denotes the characteristic function. Similarly, for \(\varvec{\eta } = (\eta _0,\eta _1,\ldots ,\eta _n )\in \mathbb R^{n+1}\), the polygonal function \(P_\beta (\varvec{\eta })\) of \(\varvec{\eta }\) on [0, T] is defined by (3) with \(f(t_j)\) replaced by \(\eta _j\) for \(j=0,1,\ldots ,n\). Then, both \(P_\beta (f)\) and \(P_\beta (\varvec{\eta })\) belong to C[0, T], with \(P_\beta (f)(t_j)=f(t_j)\) and \(P_\beta (\varvec{\eta })(t_j) = \eta _j\) at each \(t_j\).
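The interpolation (3) can be implemented directly. The following sketch (with the hypothetical choice \(\beta (t)=t^2\) and illustrative names) evaluates \(P_\beta (f)\) at a time t from the values of f at the partition points, using the weight \(\gamma _{2j}(t)\) of (2).

```python
import bisect

# Sketch of the beta-polygonal interpolation (3): on (t_{j-1}, t_j] the
# value P_beta(f)(t) interpolates f linearly in the beta-scale using the
# weight gamma_{2j}(t) from (2).  Hypothetical choice beta(t) = t**2;
# the function name is illustrative.

def p_beta(f_vals, ts, beta, t):
    """Evaluate P_beta(f) at time t from f's values at the partition ts."""
    if t == ts[0]:
        return f_vals[0]
    j = bisect.bisect_left(ts, t)        # t lies in (ts[j-1], ts[j]]
    gamma2 = (beta(t) - beta(ts[j - 1])) / (beta(ts[j]) - beta(ts[j - 1]))
    return f_vals[j - 1] + gamma2 * (f_vals[j] - f_vals[j - 1])

beta = lambda t: t ** 2
ts = [0.0, 1.0, 2.0]
f_vals = [0.0, 3.0, 5.0]
print(p_beta(f_vals, ts, beta, 1.0))     # -> 3.0 (agrees with f at t_1)
print(p_beta(f_vals, ts, beta, 0.5))     # -> 0.75, since gamma_{21}(0.5) = 0.25
```

In particular, \(P_\beta (f)\) always agrees with f at the partition points, as stated above.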

For \(s_1,s_2\in [0,T]\), let

$$\begin{aligned} \Gamma _j(s_1,s_2)=\gamma _{1j}(s_1)\gamma _{2j}(s_2)[\beta (t_j)-\beta (t_{j-1})]. \end{aligned}$$
(4)

For \(t\in [0,T]\), let

$$\begin{aligned} \Gamma (t)=\sum _{j=1}^n\chi _{(t_{j-1},t_j]}(t)\Gamma _j(t,t) \end{aligned}$$
(5)

and let \(Z_t(x)=x(t)-P_\beta (x)(t)\) for \(x\in C[0,T]\). Note that if \(t\in [t_{j-1}, t_j]\) for some \(j\in \{1, \ldots , n\}\), then

$$\begin{aligned} Z_t(x)=\gamma _{1j}(t)[x(t)-x(t_{j-1})]-\gamma _{2j}(t)[x(t_j)-x(t)] \end{aligned}$$
(6)

and

$$\begin{aligned}{}[\gamma _{1j}(t)]^2[\beta (t)-\beta (t_{j-1})]+[\gamma _{2j}(t)]^2[\beta (t_j)-\beta (t)]=\Gamma _j(t,t)=\Gamma (t). \end{aligned}$$
(7)

We now have the following theorem.

Theorem 2

For \(t\in [0,T]\), the Fourier transform \(\mathcal F(Z_t)\) of \(Z_t\) is given by

$$\begin{aligned} \mathcal F(Z_t)(\xi )=\varphi (\mathbb R)\exp \biggl \{-\frac{\xi ^2}{2}\Gamma (t)+i\xi Z_t(\alpha )\biggr \} \end{aligned}$$

for \(\xi \in \mathbb R\), where \(\Gamma (t)\) is given by (5). Moreover, if \(t\in (t_{j-1}, t_j)\) for some j and \(\varphi (\mathbb R)=1\), then \(Z_t\) is Gaussian with the mean \(Z_t(\alpha )\) and variance \(\Gamma (t)\).

Proof

If \(t=t_j\) for some \(j\in \{0,1,\ldots ,n\}\), then the first result is trivial. Now, suppose that \(t\in (t_{j-1},t_j)\) for some j. Let \(\varphi _0=\frac{1}{\varphi (\mathbb R)}\varphi \) and let \(\mathcal F_{\varphi _0}(Z_t)\) be the Fourier transform of \(Z_t\) with respect to \(w_{\alpha ,\beta ;\varphi _0}\). Then, \(\varphi _0\) is a probability measure, and by (6), (7) and Lemma 2, \(Z_t\) is Gaussian with respect to \(w_{\alpha ,\beta ;\varphi _0}\) with the mean \(Z_t(\alpha )\) and the variance \(\Gamma _j(t,t)\) given by (4), so that for \(\xi \in \mathbb R\)

$$\begin{aligned} \mathcal F_{\varphi _0}(Z_t)(\xi )=\exp \biggl \{-\frac{\xi ^2}{2}\Gamma (t)+i\xi Z_t(\alpha )\biggr \}. \end{aligned}$$

Since \(\mathcal F(Z_t)(\xi )=\varphi (\mathbb R)\mathcal F_{\varphi _0}(Z_t)(\xi )\), we have the theorem. \(\square \)

Since \(\frac{1}{\varphi (\mathbb R)}\varphi \) is a probability measure, we have the following corollaries by Lemma 2 and Theorem 2.

Corollary 1

Let \(t\in [0,T]\) and let \(f:\mathbb R\rightarrow \mathbb R\) be a Borel measurable function. Then, under the notation of Theorem 2, we have

$$\begin{aligned}&\int _{C[0,T]} f(Z_t(x))\mathrm{d}w_{\alpha ,\beta ;\varphi }(x)\\&\quad \overset{*}{=}\varphi (\mathbb R) \biggl [\frac{1}{2\pi \Gamma (t)}\biggr ]^{\frac{1}{2}} \int _{\mathbb R} f(u)\exp \biggl \{ -\frac{[u-Z_t(\alpha )]^2}{2\Gamma (t)}\biggr \}\mathrm{d}m_L(u) \end{aligned}$$

if \(t\in (t_{j-1},t_j)\) for some j. Moreover, if \(t=t_j\) for some \(j\in \{0,1,\ldots ,n\}\), then

$$\begin{aligned} \int _{C[0,T]} f(Z_t(x))\mathrm{d}w_{\alpha ,\beta ;\varphi }(x) =\varphi (\mathbb R)f(0). \end{aligned}$$

Corollary 2

Let \(s_1\in [t_{j-1},t_j]\) and \(s_2\in [t_{k-1},t_k]\) with \(j\ne k\). Then, the Fourier transform \(\mathcal F(Z_{s_1},Z_{s_2})\) of \((Z_{s_1},Z_{s_2})\) can be expressed by

$$\begin{aligned} \mathcal F(Z_{s_1},Z_{s_2})(\xi _1,\xi _2)=\frac{1}{\varphi (\mathbb R)}\mathcal F(Z_{s_1})(\xi _1)\mathcal F(Z_{s_2})(\xi _2) \end{aligned}$$

for \(\xi _1,\xi _2\in \mathbb R\). Consequently, the processes \(\{Z_t: t_{j-1}\le t \le t_j \}\), \(j=1, \ldots , n\), are stochastically independent if \(\varphi (\mathbb R)=1\).

Lemma 3

Let \(0\le s_1\le s_2\le s_3\le T\). Then, we have the following:

  1. (a)

    The Fourier transform \(\mathcal F(X_{s_1},X_{s_3}-X_{s_2})\) of \((X_{s_1},X_{s_3}-X_{s_2})\) can be expressed by

    $$\begin{aligned} \mathcal F(X_{s_1},X_{s_3}-X_{s_2})(\xi _1,\xi _2)=\frac{1}{\varphi (\mathbb R)}\mathcal F(X_{s_1})(\xi _1)\mathcal F(X_{s_3}-X_{s_2})(\xi _2) \end{aligned}$$

    for \(\xi _1,\xi _2\in \mathbb R\). Consequently, \(X_{s_1}\) and \(X_{s_3}-X_{s_2}\) are independent if \(\varphi \) is a probability measure.

  2. (b)

    The Fourier transform \(\mathcal F(X_{s_2},X_{s_3}-X_{s_1})\) of \((X_{s_2},X_{s_3}-X_{s_1})\) can be expressed by

    $$\begin{aligned}&\mathcal F(X_{s_2},X_{s_3}-X_{s_1})(\xi _1,\xi _2)\\&\quad =\frac{1}{\varphi (\mathbb R)}\mathcal F(X_{s_2})(\xi _1)\mathcal F(X_{s_3}-X_{s_1})(\xi _2)\exp \{-\xi _1\xi _2[\beta (s_2)-\beta (s_1)]\} \end{aligned}$$

    for \(\xi _1,\xi _2\in \mathbb R\).

  3. (c)

    The Fourier transform \(\mathcal F(X_{s_3},X_{s_2}-X_{s_1})\) of \((X_{s_3},X_{s_2}-X_{s_1})\) can be expressed by

    $$\begin{aligned}&\mathcal F(X_{s_3},X_{s_2}-X_{s_1})(\xi _1,\xi _2)\\&\quad =\frac{1}{\varphi (\mathbb R)}\mathcal F(X_{s_3})(\xi _1)\mathcal F(X_{s_2}-X_{s_1})(\xi _2)\exp \{-\xi _1\xi _2[\beta (s_2)-\beta (s_1)]\} \end{aligned}$$

    for \(\xi _1,\xi _2\in \mathbb R\).

Proof

For convenience, let \(s_0=0\), \(\mathbf {s}_3=(s_0,s_1,s_2,s_3)\) and \(\mathbf {u}_3=(u_1,u_2,u_3)\). We will prove this lemma for the case \(0<s_1<s_2<s_3\). The results for the other cases of \(\mathbf {s}_3\) can be similarly proved. By Theorem 1, we have

$$\begin{aligned}&\mathcal F(X_{s_1},X_{s_3}-X_{s_2})(\xi _1,\xi _2)\\&\quad =\int _{C[0,T]}\exp \{i[\xi _1X_{s_1}(x)+\xi _2[X_{s_3}(x)-X_{s_2}(x)]]\}\mathrm{d}w_{\alpha ,\beta ;\varphi }(x)\\&\quad =\int _{\mathbb R^4}\exp \{i[\xi _1u_1+\xi _2(u_3-u_2)]\}W_3(\mathbf {s}_3,\mathbf {u}_3,u_0)dm_L^3(\mathbf {u}_3)\mathrm{d}\varphi (u_0), \end{aligned}$$

where \(W_3\) is given by (1) with \(n=3\). For \(j=1,2,3\), let \(v_j=u_j-\alpha (s_j)-u_{j-1}+\alpha (s_{j-1})\) and \(\mathbf {v}_3=(v_1,v_2,v_3)\). Then, we have by the change of variable theorem

$$\begin{aligned}&\mathcal F(X_{s_1},X_{s_3}-X_{s_2})(\xi _1,\xi _2)\\&\quad =\biggl [\frac{1}{\prod _{j=1}^32\pi [\beta (s_j)-\beta (s_{j-1})]}\biggr ]^{\frac{1}{2}}\int _{\mathbb R^4}\exp \biggl \{i[\xi _1[u_0+v_1+\alpha (s_1)-\alpha (s_0)]\\&\qquad +\,\xi _2[v_3+\alpha (s_3)-\alpha (s_2)]]-\frac{1}{2}\sum _{j=1}^3\frac{v_j^2}{\beta (s_j)-\beta (s_{j-1})}\biggr \}\mathrm{d}m_L^3(\mathbf {v}_3)\mathrm{d}\varphi (u_0)\\&\quad =\mathcal F(X_0)(\xi _1)\exp \biggl \{-\frac{1}{2}[\xi _1^2[\beta (s_1)-\beta (0)]+\xi _2^2[\beta (s_3)-\beta (s_2)]]+i[\xi _1[\alpha (s_1)\\&\qquad -\,\alpha (0)]+\xi _2[\alpha (s_3)-\alpha (s_2)]]\biggr \}\\&\quad =\frac{1}{[\varphi (\mathbb R)]^2}\mathcal F(X_0)(\xi _1)\mathcal F(X_{s_1}-X_0)(\xi _1)\mathcal F(X_{s_3}-X_{s_2})(\xi _2)\\&\quad =\frac{1}{\varphi (\mathbb R)}\mathcal F(X_{s_1})(\xi _1)\mathcal F(X_{s_3}-X_{s_2})(\xi _2) \end{aligned}$$

by (P4), (P5) and (P6), which completes the proof of (a).

Similarly, we have by Theorem 1

$$\begin{aligned}&\mathcal F(X_{s_2},X_{s_3}-X_{s_1})(\xi _1,\xi _2)\\&\quad =\int _{\mathbb R^4}\exp \{i[\xi _1u_2+\xi _2(u_3-u_1)]\}W_3(\mathbf {s}_3,\mathbf {u}_3,u_0)dm_L^3(\mathbf {u}_3)\mathrm{d}\varphi (u_0)\\&\quad =\biggl [\frac{1}{\prod _{j=1}^32\pi [\beta (s_j)-\beta (s_{j-1})]}\biggr ]^{\frac{1}{2}}\int _{\mathbb R^4}\exp \biggl \{i[\xi _1[u_0+v_1+v_2+\alpha (s_2)-\alpha (s_0)]\\&\qquad +\,\xi _2[v_2+v_3+\alpha (s_3)-\alpha (s_1)]]-\frac{1}{2}\sum _{j=1}^3\frac{v_j^2}{\beta (s_j)-\beta (s_{j-1})}\biggr \}dm_L^3(\mathbf {v}_3)\mathrm{d}\varphi (u_0)\\&\quad =\mathcal F(X_0)(\xi _1)\exp \biggl \{-\frac{1}{2}[\xi _1^2[\beta (s_1)-\beta (0)]+(\xi _1+\xi _2)^2[\beta (s_2)-\beta (s_1)]\\&\qquad +\,\xi _2^2[\beta (s_3)-\beta (s_2)]]+i[\xi _1[\alpha (s_2)-\alpha (0)]+\xi _2[\alpha (s_3)-\alpha (s_1)]]\biggr \}\\&\quad =\frac{1}{[\varphi (\mathbb R)]^2}\mathcal F(X_0)(\xi _1)\mathcal F(X_{s_2}-X_0)(\xi _1)\mathcal F(X_{s_3}-X_{s_1})(\xi _2)\exp \{-\xi _1\xi _2[\beta (s_2)\\&\qquad -\,\beta (s_1)]\}\\&\quad =\frac{1}{\varphi (\mathbb R)}\mathcal F(X_{s_2})(\xi _1)\mathcal F(X_{s_3}-X_{s_1})(\xi _2)\exp \{-\xi _1\xi _2[\beta (s_2)-\beta (s_1)]\} \end{aligned}$$

which completes the proof of (b).

Finally, we also have by Theorem 1

$$\begin{aligned}&\mathcal F(X_{s_3},X_{s_2}-X_{s_1})(\xi _1,\xi _2)\\&\quad =\int _{\mathbb R^4}\exp \{i[\xi _1u_3+\xi _2(u_2-u_1)]\}W_3(\mathbf {s}_3,\mathbf {u}_3,u_0)dm_L^3(\mathbf {u}_3)\mathrm{d}\varphi (u_0)\\&\quad =\biggl [\frac{1}{\prod _{j=1}^32\pi [\beta (s_j)-\beta (s_{j-1})]}\biggr ]^{\frac{1}{2}}\int _{\mathbb R^4}\exp \biggl \{i[\xi _1[u_0+v_1+v_2+v_3+\alpha (s_3)\\&\qquad -\,\alpha (s_0)]+\xi _2[v_2+\alpha (s_2)-\alpha (s_1)]]\\&\qquad -\,\frac{1}{2}\sum _{j=1}^3\frac{v_j^2}{\beta (s_j)-\beta (s_{j-1})}\biggr \}dm_L^3(\mathbf {v}_3)\mathrm{d}\varphi (u_0)\\&\quad =\mathcal F(X_0)(\xi _1)\exp \biggl \{-\frac{1}{2}[\xi _1^2[\beta (s_1)-\beta (0)]+(\xi _1+\xi _2)^2[\beta (s_2)-\beta (s_1)]\\&\qquad +\,\xi _1^2[\beta (s_3)-\beta (s_2)]]+i[\xi _1[\alpha (s_3)-\alpha (0)]+\xi _2[\alpha (s_2)-\alpha (s_1)]]\biggr \}\\&\quad =\frac{1}{[\varphi (\mathbb R)]^2}\mathcal F(X_0)(\xi _1)\mathcal F(X_{s_3}-X_0)(\xi _1)\mathcal F(X_{s_2}-X_{s_1})(\xi _2)\nonumber \\&\qquad \times \,\exp \{-\xi _1\xi _2[\beta (s_2)-\beta (s_1)]\}\\&\quad =\frac{1}{\varphi (\mathbb R)}\mathcal F(X_{s_3})(\xi _1)\mathcal F(X_{s_2}-X_{s_1})(\xi _2)\exp \{-\xi _1\xi _2[\beta (s_2)-\beta (s_1)]\} \end{aligned}$$

which proves (c), completing the proof. \(\square \)

Lemma 4

If \(t\in [t_{j-1},t_j]\) and \(s\in [0,t_{j-1}]\cup [t_j,T]\) for some j, then the Fourier transform \(\mathcal F(X_s,Z_t)\) of \((X_s,Z_t)\) can be expressed by

$$\begin{aligned} \mathcal F(X_s,Z_t)(\xi _1,\xi _2)=\frac{1}{\varphi (\mathbb R)}\mathcal F(X_s)(\xi _1)\mathcal F(Z_t)(\xi _2) \end{aligned}$$

for \(\xi _1,\xi _2\in \mathbb R\) so that \(X_s\) and \(Z_t\) are independent if \(\varphi (\mathbb R)=1\).

Proof

Let \(\varphi _0=\frac{1}{\varphi (\mathbb R)}\varphi \) and let \(\mathcal F_{\varphi _0}\) denote the Fourier transform with respect to \(w_{\alpha ,\beta ;\varphi _0}\). First, we will prove that for \(\xi _1,\xi _2\in \mathbb R\)

$$\begin{aligned} \mathcal F_{\varphi _0}(X_s,Z_t)(\xi _1,\xi _2)=\mathcal F_{\varphi _0}(X_s)(\xi _1)\mathcal F_{\varphi _0}(Z_t)(\xi _2). \end{aligned}$$
(8)

If \(t=t_{j-1}\) or \(t=t_j\), then (8) follows immediately. Assume that \(t\in (t_{j-1},t_j)\). If \(s\in [0,t_{j-1}]\), then we have (8) by (6) and (a) of Lemma 3 since \(\varphi _0\) is a probability measure. Now, suppose that \(s=t_j\). For convenience, let \(\mathbf {s}_3=(s_0,s_1,s_2,s_3)=(0,t_{j-1},t,t_j)\) and \(\mathbf {u}_3=(u_1,u_2,u_3)\). By (6) and Theorem 1, we have

$$\begin{aligned}&\mathcal F_{\varphi _0}(X_s,Z_t)(\xi _1,\xi _2)\\&\quad =\int _{C[0,T]}\exp \{i[\xi _1X_s(x)+\xi _2Z_t(x)]\}\mathrm{d}w_{\alpha ,\beta ;\varphi _0}(x)\\&\quad =\int _{C[0,T]}\exp \{i[\xi _1x(t_j)+\xi _2[\gamma _{1j}(t)[x(t)-x(t_{j-1})]\\&\qquad -\,\gamma _{2j}(t)[x(t_j)-x(t)]]]\}\mathrm{d}w_{\alpha ,\beta ;\varphi _0}(x)\\&\quad =\int _{\mathbb R^4}\exp \{i[\xi _1u_3+\xi _2[\gamma _{1j}(s_2)(u_2-u_1)-\gamma _{2j}(s_2)(u_3-u_2)]]\}\\&\qquad \times W_3(\mathbf {s}_3,\mathbf {u}_3,u_0)dm_L^3(\mathbf {u}_3)\mathrm{d}\varphi _0(u_0), \end{aligned}$$

where \(W_3\) is given by (1) with \(n=3\). For \(j=1,2,3\), let \(v_j=u_j-\alpha (s_j)-u_{j-1}+\alpha (s_{j-1})\) and \(\mathbf {v}_3=(v_1,v_2,v_3)\). Then, we have by (6) and the change of variable theorem

$$\begin{aligned}&\mathcal F_{\varphi _0}(X_s,Z_t)(\xi _1,\xi _2)\\&\quad =\biggl [\frac{1}{\prod _{j=1}^32\pi [\beta (s_j)-\beta (s_{j-1})]}\biggr ]^{\frac{1}{2}}\int _{\mathbb R^4}\exp \biggl \{i[\xi _1[u_0+v_1+v_2+v_3\\&\qquad +\,\alpha (s_3)-\alpha (s_0)]+\xi _2[\gamma _{1j}(s_2)[v_2+\alpha (s_2)-\alpha (s_1)]-\gamma _{2j}(s_2)[v_3\\&\qquad +\,\alpha (s_3)-\alpha (s_2)]]]-\frac{1}{2}\sum _{j=1}^3\frac{v_j^2}{\beta (s_j)-\beta (s_{j-1})}\biggr \}\mathrm{d}m_L^3(\mathbf {v}_3)d\varphi _0(u_0)\\&\quad =\mathcal F_{\varphi _0}(X_0)(\xi _1)\exp \biggl \{-\frac{1}{2}[\xi _1^2[\beta (s_1)-\beta (s_0)]+[\xi _1+\xi _2\gamma _{1j}(s_2)]^2[\beta (s_2)\\&\qquad -\,\beta (s_1)]+[\xi _1-\xi _2\gamma _{2j}(s_2)]^2[\beta (s_3)-\beta (s_2)]]+i[\xi _1[\alpha (s_3)-\alpha (s_0)]\\&\qquad +\,\xi _2Z_{s_2}(\alpha )]\biggr \}. \end{aligned}$$

By (2), (4) and (7), we have

$$\begin{aligned}&[\xi _1+\xi _2\gamma _{1j}(s_2)]^2[\beta (s_2)-\beta (s_1)]+[\xi _1-\xi _2\gamma _{2j}(s_2)]^2[\beta (s_3) -\beta (s_2)]\\&\quad = \xi _1^2[\beta (s_3)-\beta (s_1)]+\xi _2^2\Gamma (s_2) \end{aligned}$$

so that we have by (P4), (P5), (P6), Lemma 2 and Theorem 2

$$\begin{aligned}&\mathcal F_{\varphi _0}(X_s,Z_t)(\xi _1,\xi _2)\\&\quad =\mathcal F_{\varphi _0}(X_0)(\xi _1)\exp \biggl \{-\frac{1}{2}[\xi _1^2[\beta (s)-\beta (0)]+\xi _2^2\Gamma (t)]+i[\xi _1[\alpha (s)-\alpha (0)]\\&\qquad +\,\xi _2Z_t(\alpha )]\biggr \}\\&\quad =\mathcal F_{\varphi _0}(X_s)(\xi _1)\mathcal F_{\varphi _0}(Z_t)(\xi _2) \end{aligned}$$

which proves (8) for \(s=t_j\). Suppose that \(t_j<s\). Note that \(X_s-X_{t_j}\) and \(Z_t\) are independent with respect to \(w_{\alpha ,\beta ;\varphi _0}\) by (6) and Lemma 2 and that \(X_{t_j}\) and \(Z_t\) are also independent by the previous result. Consequently, \(X_s\) and \(Z_t\) are independent with respect to \(w_{\alpha ,\beta ;\varphi _0}\) since \(X_s=X_s-X_{t_j}+X_{t_j}\). Now, we have (8) and finally,

$$\begin{aligned} \mathcal F(X_s,Z_t)(\xi _1,\xi _2)= & {} \varphi (\mathbb R)\mathcal F_{\varphi _0}(X_s,Z_t)(\xi _1,\xi _2)=\varphi (\mathbb R)\mathcal F_{\varphi _0}(X_s)(\xi _1)\mathcal F_{\varphi _0}(Z_t)(\xi _2) \\= & {} \frac{1}{\varphi (\mathbb R)}\mathcal F(X_s)(\xi _1)\mathcal F(Z_t)(\xi _2) \end{aligned}$$

which is the desired result. \(\square \)

Using Lemma 4, we have the following theorem.

Theorem 3

Let \(X : C[0,T]\rightarrow \mathbb R^{n+1}\) be given by

$$\begin{aligned} X (x) = ( x(t_0), x(t_1), \ldots , x(t_n)). \end{aligned}$$
(9)

Then, the process \(\{Z_t: 0 \le t \le T\}\) and X are independent if \(\varphi (\mathbb R)=1\).

Theorem 4

Let \(F: C[0, T]\rightarrow \mathbb C\) be integrable and X be given by (9) in Theorem 3. Then, we have for \(m_X\) a.e. \(\varvec{\eta }\in \mathbb R^{n+1}\)

$$\begin{aligned} GE[F|X](\varvec{\eta })=\frac{1}{\varphi (\mathbb R)}\int _{C[0,T]}F(x-P_\beta (x)+P_\beta (\varvec{\eta }))\mathrm{d}w_{\alpha ,\beta ;\varphi }(x), \end{aligned}$$
(10)

where \(m_X\) is the measure on \(\mathcal B(\mathbb R^{n+1})\) induced by X.

Proof

Let \(\varphi _0=\frac{1}{\varphi (\mathbb R)}\varphi \) and let \(GE_{\varphi _0}[F|X]\) denote the (generalized) conditional expectation of F given X with respect to \(w_{\alpha ,\beta ;\varphi _0}\), which is a probability measure on C[0, T]. Applying the same method as used in the proofs of Theorem 2 in [7, p.383] and Theorem 3.3 in [9], with the aid of Problem 4 of [1, p.216], we have

$$\begin{aligned} GE_{\varphi _0}[F|X](\varvec{\eta })=\int _{C[0,T]}F(x-P_\beta (x)+P_\beta (\varvec{\eta }))\mathrm{d}w_{\alpha ,\beta ;\varphi _0}(x) \end{aligned}$$

for \(P_X\) a.e. \(\varvec{\eta }\in \mathbb R^{n+1}\), where \(P_X\equiv w_{\alpha ,\beta ;\varphi _0}\circ X^{-1}\) is the probability distribution of X on \((\mathbb R^{n+1},\mathcal B(\mathbb R^{n+1}))\). Note that for \(B\in \mathcal B(\mathbb R^{n+1})\),

$$\begin{aligned} m_X(B)=w_{\alpha ,\beta ;\varphi }(X^{-1}(B))=\varphi (\mathbb R)w_{\alpha ,\beta ;\varphi _0}(X^{-1}(B))=\varphi (\mathbb R)P_X(B) \end{aligned}$$

so that B is a \(P_X\) null-set if and only if it is an \(m_X\) null-set. Now, we have

$$\begin{aligned}&\int _{B}GE[F|X](\varvec{\eta })dm_X(\varvec{\eta })\\&\quad =\varphi (\mathbb R)\int _{X^{-1}(B)}F(x)\mathrm{d}w_{\alpha ,\beta ;\varphi _0}(x)\\&\quad =\varphi (\mathbb R)\int _{B}GE_{\varphi _0}[F|X](\varvec{\eta })\mathrm{d}P_X(\varvec{\eta })\\&\quad =\varphi (\mathbb R)\int _{B}\int _{C[0,T]}F(x-P_\beta (x)+P_\beta (\varvec{\eta }))\mathrm{d}w_{\alpha ,\beta ;\varphi _0}(x)\mathrm{d}P_X(\varvec{\eta })\\&\quad =\frac{1}{\varphi (\mathbb R)}\int _{B}\int _{C[0,T]}F(x-P_\beta (x)+P_\beta (\varvec{\eta }))\mathrm{d}w_{\alpha ,\beta ;\varphi }(x)\mathrm{d}m_X(\varvec{\eta }) \end{aligned}$$

so that we have (10) by uniqueness of Radon–Nikodym derivative. \(\square \)

Remark 1

Note that Problem 4 of [1, p.216] states the following: Let \((\Omega , \mathcal F, P)\) be a probability space and let \(\mathcal C_i\) \((i\in I)\) be classes of sets in \(\mathcal F\). If the \(\mathcal C_i\) are independent classes and each \(\mathcal C_i\) is closed under finite intersection, then the minimal \(\sigma \)-algebras over the \(\mathcal C_i\) are also independent.

Using the above problem and Theorem 3, one can show that \(x-P_\beta (x)\) and X are independent since the Borel \(\sigma \)-algebra on C[0, T] is the smallest \(\sigma \)-algebra such that each coordinate mapping \(X_t\) is measurable. The independence of \(x-P_\beta (x)\) and X is essential for the proof of Theorem 4.

Remark 2

In the proof of Theorem 4, since B is a \(P_X\) null-set if and only if it is an \(m_X\) null-set, (10) can be rewritten by

$$\begin{aligned} GE[F|X](\varvec{\eta }) =\int _{C[0,T]}F(x-P_\beta (x)+P_\beta (\varvec{\eta }))\mathrm{d}w_{\alpha ,\beta ;\varphi _0}(x)=GE_{\varphi _0}[F|X](\varvec{\eta }) \end{aligned}$$

for \(P_X\) a.e. \(\varvec{\eta }\in \mathbb R^{n+1}\) (or equivalently, for \(m_X\) a.e. \(\varvec{\eta }\in \mathbb R^{n+1}\)).

Remark 3

Lemma 4 and Theorem 4 are extensions of Theorems 3.1 and 3.2, respectively, in [9]. They also extend Theorems 2.8 and 2.9 in [2].

4 Evaluations of the Generalized Conditional Expectations

In this section, using Theorem 4, we evaluate the generalized conditional expectations of various functions that are useful in both quantum mechanics and Feynman integration theory.

Lemma 5

  1. (a)

    If \(s_1\in [t_{j-1},t_j]\) and \(s_2\in [t_{k-1},t_k]\) with \(j\ne k\), then

    $$\begin{aligned} \int _{C[0,T]}Z_{s_1}(x)Z_{s_2}(x)\mathrm{d}w_{\alpha ,\beta ;\varphi }(x)=\varphi (\mathbb R)Z_{s_1}(\alpha )Z_{s_2}(\alpha ). \end{aligned}$$
  2. (b)

    If \(s_1,s_2\in [t_{j-1},t_j]\), then we have

    $$\begin{aligned}&\int _{C[0,T]}Z_{s_1}(x)Z_{s_2}(x)\mathrm{d}w_{\alpha ,\beta ;\varphi }(x)\\&\quad =\varphi (\mathbb R)[Z_{s_1}(\alpha )Z_{s_2}(\alpha )+\Gamma _j(s_1\vee s_2,s_1\wedge s_2)], \end{aligned}$$

    where \(s_1\vee s_2=\max \{s_1,s_2\}\), \(s_1\wedge s_2=\min \{s_1,s_2\}\) and \(\Gamma _j\) is given by (4); in particular, \(\mathrm{Cov}(Z_{s_1},Z_{s_2})=\Gamma _j(s_1\vee s_2,s_1\wedge s_2)\) if \(\varphi (\mathbb R)=1\).

Proof

Let \(\varphi _0=\frac{1}{\varphi (\mathbb R)}\varphi \). Then, (a) with \(w_{\alpha ,\beta ;\varphi _0}\) follows from Corollary 2. We now prove (b) for \(w_{\alpha ,\beta ;\varphi _0}\). For convenience, suppose that \(s_1\le s_2\). By (P2), (P3) and Lemma 1, we have

$$\begin{aligned}&\int _{C[0,T]}Z_{s_1}(x)Z_{s_2}(x)\mathrm{d}w_{\alpha ,\beta ;\varphi _0}(x)\\&\quad =\int _{C[0,T]}[x(s_1)-x(t_{j-1})-\gamma _{2j}(s_1)[x(t_j)-x(t_{j-1})]][x(s_2)-x(t_{j-1})\\&\qquad -\,\gamma _{2j}(s_2)[x(t_j)-x(t_{j-1})]]\mathrm{d}w_{\alpha ,\beta ;\varphi _0}(x)\\&\quad =[\alpha (s_1)-\alpha (t_{j-1})][\alpha (s_2)-\alpha (t_{j-1})]+\beta (s_1)-\beta (t_{j-1})-\gamma _{2j}(s_2)[[\alpha (s_1)\\&\qquad -\,\alpha (t_{j-1})][\alpha (t_j)-\alpha (t_{j-1})]+\beta (s_1)-\beta (t_{j-1})]-\gamma _{2j}(s_1)[[\alpha (s_2)-\alpha (t_{j-1})]\\&\qquad \times \,[\alpha (t_j)-\alpha (t_{j-1})]+\beta (s_2)-\beta (t_{j-1})]+\gamma _{2j}(s_1)\gamma _{2j}(s_2)[\beta (t_j)-\beta (t_{j-1})\\&\qquad +\,[\alpha (t_j)-\alpha (t_{j-1})]^2]\\&\quad =Z_{s_1}(\alpha )Z_{s_2}(\alpha )+[\beta (s_1)-\beta (t_{j-1})][1-\gamma _{2j}(s_2)]-\gamma _{2j}(s_1)[\beta (s_2)-\beta (t_{j-1})\\&\qquad -\gamma _{2j}(s_2)[\beta (t_j)-\beta (t_{j-1})]]\\&\quad =Z_{s_1}(\alpha )Z_{s_2}(\alpha )+\Gamma _j(s_2,s_1) \end{aligned}$$

which proves (b) for \(w_{\alpha ,\beta ;\varphi _0}\). Since \(w_{\alpha ,\beta ;\varphi }=\varphi (\mathbb R)w_{\alpha ,\beta ;\varphi _0}\), the lemma follows. \(\square \)

Theorem 5

For \(s_1,s_2\in [0,T]\) and \(x\in C[0,T]\), let \(G(x)=x(s_1)x(s_2)\) and suppose that \(\int _{\mathbb R}u^2\mathrm{d}\varphi (u)<\infty \). Then, G is \(w_{\alpha ,\beta ;\varphi }\)-integrable and we have the following:

  1. (a)

    If \(s_1\in [t_{j-1},t_j]\) and \(s_2\in [t_{k-1},t_k]\) with \(j\ne k\), then for \(m_X\) a.e. \(\varvec{\eta }\in \mathbb R^{n+1}\), we have

    $$\begin{aligned} GE[G|X](\varvec{\eta })=[Z_{s_1}(\alpha )+P_\beta (\varvec{\eta })(s_1)][Z_{s_2}(\alpha )+P_\beta (\varvec{\eta })(s_2)]. \end{aligned}$$
  2. (b)

    If \(s_1,s_2\in [t_{j-1},t_j]\), then for \(m_X\) a.e. \(\varvec{\eta }\in \mathbb R^{n+1}\), we have

    $$\begin{aligned} GE[G|X](\varvec{\eta })= & {} [Z_{s_1}(\alpha )+P_\beta (\varvec{\eta })(s_1)][Z_{s_2}(\alpha )+P_\beta (\varvec{\eta })(s_2)]\\&+\Gamma _j(s_1\vee s_2,s_1\wedge s_2). \end{aligned}$$

Proof

Without loss of generality, we prove the theorem for the case \(0\le s_1\le s_2\le T\). First, for \(0=s_0<s_1<s_2\le T\), we have by Theorem 1 and the change of variable theorem

$$\begin{aligned}&\int _{C[0,T]} |G(x)| \mathrm{d}w_{\alpha ,\beta ;\varphi } (x)\nonumber \\&\quad =\biggl [\frac{1}{\prod _{j=1}^22\pi [\beta (s_j)-\beta (s_{j-1})]}\biggr ]^{\frac{1}{2}}\int _{\mathbb R^3} |u_0+u_1+\alpha (s_1)-\alpha (0)||u_0+u_1+u_2\nonumber \\&\qquad +\,\alpha (s_2)-\alpha (0)| \exp \biggl \{-\frac{1}{2}\sum _{j=1}^2\frac{u_j^2}{\beta (s_j)-\beta (s_{j-1})}\biggr \}\mathrm{d}m_L^2(u_1,u_2)\mathrm{d}\varphi (u_0) \end{aligned}$$
(11)

which is finite since \(\int _{\mathbb R}u^2\mathrm{d}\varphi (u)<\infty \). The integrability of G for the other cases follows similarly. Moreover, for \(m_X\) a.e. \(\varvec{\eta }\in \mathbb R^{n+1}\), we have by Theorem 4

$$\begin{aligned} GE[G|X](\varvec{\eta })= & {} \int _{C[0,T]}G(x-P_\beta (x)+P_\beta (\varvec{\eta }))\mathrm{d}w_{\alpha ,\beta ;\varphi _0}(x)\\= & {} \int _{C[0,T]}[Z_{s_1}(x)+P_\beta (\varvec{\eta })(s_1)][Z_{s_2}(x)+P_\beta (\varvec{\eta })(s_2)]\mathrm{d}w_{\alpha ,\beta ;\varphi _0}(x), \end{aligned}$$

where \(\varphi _0=\frac{1}{\varphi (\mathbb R)}\varphi \). Suppose that \(s_1\in [t_{j-1},t_j]\) and \(s_2\in [t_{k-1},t_k]\) with \(j\ne k\). By Corollaries 1, 2 and (a) of Lemma 5, we have

$$\begin{aligned} GE[G|X](\varvec{\eta })=[Z_{s_1}(\alpha )+P_\beta (\varvec{\eta })(s_1)][Z_{s_2}(\alpha )+P_\beta (\varvec{\eta })(s_2)] \end{aligned}$$

which proves (a) in this theorem. To prove (b) of this theorem, suppose that \(s_1,s_2\in [t_{j-1},t_j]\). Then, we have by (b) of Lemma 5

$$\begin{aligned} GE[G|X](\varvec{\eta })=[Z_{s_1}(\alpha )+P_\beta (\varvec{\eta })(s_1)][Z_{s_2}(\alpha )+P_\beta (\varvec{\eta })(s_2)]+\Gamma _j(s_2,s_1) \end{aligned}$$

which completes the proof. \(\square \)

Theorem 6

For \(x\in C[0,T]\), let \(G_1(x)=[\int _0^Tx(t)\mathrm{d}\lambda (t)]^2\), where \(\lambda \) is a continuous complex measure on the Borel class of [0, T]. Suppose that

$$\begin{aligned} \int _0^T[\alpha (t)]^2 \mathrm{d}|\lambda |(t)<\infty \text { and }\int _{\mathbb R}u^2 \mathrm{d}\varphi (u)<\infty . \end{aligned}$$
(12)

Then, \(G_1\) is \(w_{\alpha ,\beta ;\varphi }\)-integrable and for \(m_X\) a.e. \(\varvec{\eta }\in \mathbb R^{n+1}\), we have

$$\begin{aligned} GE[G_1|X](\varvec{\eta })= & {} \int _0^T\int _0^T[[Z_{s_1}(\alpha )+P_\beta (\varvec{\eta })(s_1)][Z_{s_2}(\alpha )+P_\beta (\varvec{\eta })(s_2)] \\&+\Lambda (s_1\vee s_2,s_1\wedge s_2)]\mathrm{d}\lambda ^2(s_1,s_2), \end{aligned}$$

where \(\Lambda (s,t)=\sum _{j=1}^n\chi _{[t_{j-1},t_j]^2}(s,t)\Gamma _j(s,t)\) for \((s,t)\in [0,T]^2\).

Proof

To prove the integrability of \(G_1\), let \(\Delta _1=\{(s_1,s_2):0\le s_1<s_2\le T\}\), \(\Delta _2=\{(s_1,s_2):0\le s_2<s_1\le T\}\), and let G be the function given in Theorem 5. Since \(\lambda \) is continuous, we have

$$\begin{aligned} \int _{C[0,T]} |G_1(x)| \mathrm{d}w_{\alpha ,\beta ;\varphi } (x)= & {} \int _{C[0,T]}\biggl |\int _0^T\int _0^TG(x)\mathrm{d}\lambda ^2(s_1,s_2)\biggr |\mathrm{d}w_{\alpha ,\beta ;\varphi } (x)\\\le & {} \sum _{l=1}^2\int _{\Delta _l}\int _{C[0,T]}|G(x)|\mathrm{d} w_{\alpha ,\beta ;\varphi } (x)\mathrm{d}|\lambda |^2(s_1,s_2). \end{aligned}$$

Note that for \(0\le s<t\le T\), we have

$$\begin{aligned} \biggl [\frac{1}{2\pi [\beta (t)-\beta (s)]}\biggr ]^{\frac{1}{2}}\int _0^\infty u\exp \biggl \{-\frac{u^2}{2[\beta (t)-\beta (s)]}\biggr \}\mathrm{d}m_L(u)=\biggl [\frac{1}{2\pi }[\beta (t)-\beta (s)]\biggr ]^{\frac{1}{2}} \end{aligned}$$

and

$$\begin{aligned} \biggl [\frac{1}{2\pi [\beta (t)-\beta (s)]}\biggr ]^{\frac{1}{2}}\int _{\mathbb R} u^2\exp \biggl \{-\frac{u^2}{2[\beta (t)-\beta (s)]}\biggr \}\mathrm{d}m_L(u)=\beta (t)-\beta (s). \end{aligned}$$
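Both displays are standard Gaussian integrals. The following Python sketch checks them by a midpoint-rule quadrature, with \(v\) playing the role of \(\beta (t)-\beta (s)>0\) (the value \(v=0.7\) is illustrative):

```python
import math

def integrate_mid(f, a, b, n=100_000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

v = 0.7  # plays the role of beta(t) - beta(s) > 0 (illustrative)
norm = 1.0 / math.sqrt(2.0 * math.pi * v)
dens = lambda u: norm * math.exp(-u * u / (2.0 * v))

# First identity: half-line first moment equals [v / (2*pi)]^(1/2).
lhs1 = integrate_mid(lambda u: u * dens(u), 0.0, 12.0 * math.sqrt(v))
rhs1 = math.sqrt(v / (2.0 * math.pi))

# Second identity: full second moment equals v (the Gaussian variance).
lhs2 = integrate_mid(lambda u: u * u * dens(u),
                     -12.0 * math.sqrt(v), 12.0 * math.sqrt(v))
print(lhs1, rhs1, lhs2, v)
```

Truncating at \(12\sqrt{v}\) loses only an exponentially small tail, so the quadrature reproduces both closed forms to high accuracy.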

Using the above facts with (11) and (12), we have for \(l=1,2\),

$$\begin{aligned} \int _{\Delta _l}\int _{C[0,T]}|G(x)|\mathrm{d}w_{\alpha ,\beta ;\varphi } (x)\mathrm{d}|\lambda |^2(s_1,s_2)< \infty \end{aligned}$$

which proves that \(G_1\) is \(w_{\alpha ,\beta ;\varphi }\)-integrable. To evaluate \(GE[G_1|X]\), let \(A=\{(j,k)\in \mathbb N^2 :1\le j,k\le n, j\ne k\}\). For \(m_X\) a.e. \(\varvec{\eta }\in \mathbb R^{n+1}\), we have by Theorems 4 and 5

$$\begin{aligned}&GE[G_1|X](\varvec{\eta })\\&\quad =\frac{1}{\varphi (\mathbb R)}\int _{C[0,T]}G_1(x-P_\beta (x)+P_\beta (\varvec{\eta }))\mathrm{d}w_{\alpha ,\beta ;\varphi }(x) \\&\quad =\int _0^T\int _0^T GE[G|X](\varvec{\eta })\mathrm{d}\lambda ^2(s_1,s_2) \\&\quad =\sum _{(j,k)\in A}\int _{t_{k-1}}^{t_k}\int _{t_{j-1}}^{t_j}GE[G|X](\varvec{\eta })\mathrm{d}\lambda ^2(s_1,s_2)\\&\qquad +\sum _{j=1}^n\int _{[t_{j-1},t_j]^2}GE[G|X](\varvec{\eta })\mathrm{d}\lambda ^2(s_1,s_2) \\&\quad =\sum _{1\le j,k\le n}\int _{t_{k-1}}^{t_k}\int _{t_{j-1}}^{t_j}[Z_{s_1}(\alpha )+P_\beta (\varvec{\eta })(s_1)][Z_{s_2}(\alpha )+P_\beta (\varvec{\eta })(s_2)]\mathrm{d}\lambda ^2(s_1,s_2) \\&\qquad +\sum _{j=1}^n\int _{[t_{j-1},t_j]^2}\Gamma _j(s_1\vee s_2,s_1\wedge s_2)\mathrm{d}\lambda ^2(s_1,s_2)\\&\quad =\int _0^T\int _0^T[[Z_{s_1}(\alpha )+P_\beta (\varvec{\eta })(s_1)][Z_{s_2}(\alpha )+P_\beta (\varvec{\eta })(s_2)]\\&\qquad +\Lambda (s_1\vee s_2,s_1\wedge s_2)]\mathrm{d}\lambda ^2(s_1,s_2) \end{aligned}$$

which is the desired result. \(\square \)

Theorem 7

Let \(m\in \mathbb N\) and \(t\in [0,T]\). For \(x\in C[0,T]\), let \(F_t(x)=[x(t)]^m\) and suppose that \(\int _{\mathbb R}|u|^m \mathrm{d}\varphi (u)<\infty \). Then, \(F_t\) is \(w_{\alpha ,\beta ;\varphi }\)-integrable and for \(m_X\) a.e. \(\varvec{\eta }\in \mathbb R^{n+1}\), we have

$$\begin{aligned} GE[F_t|X](\varvec{\eta })=\sum _{k=0}^{[\frac{m}{2}]}\frac{m!}{2^{k}k!(m-2k)!}[P_\beta (\varvec{\eta })(t)+Z_t(\alpha )]^{m-2k}[\Gamma (t)]^k, \end{aligned}$$
(13)

where \(\Gamma (t)\) is given by (5) and \([\frac{m}{2}]\) denotes the greatest integer less than or equal to \(\frac{m}{2}\). In particular, if \(t=t_j\) for some \(j\in \{0,1,\ldots ,n\}\), then \(GE[F_t|X](\varvec{\eta })=\eta _j^m\) for \(m_X\) a.e. \(\varvec{\eta }=(\eta _0,\eta _1,\ldots ,\eta _n)\in \mathbb R^{n+1}\).

Proof

Note that for \(l=0, 1, \ldots ,m \), \(|u|^l\) is \(\varphi \)-integrable on \(\mathbb R\), so that \(F_0\) is \(w_{\alpha ,\beta ;\varphi }\)-integrable. Moreover, if \(t\in (0,T]\), then we have by Theorem 1, the change of variable theorem and the multinomial expansion theorem

$$\begin{aligned}&\int _{C[0,T]} |F_t(x)| \mathrm{d}w_{\alpha ,\beta ;\varphi } (x)\\&\quad =\biggl [\frac{1}{2\pi [\beta (t)-\beta (0)]}\biggr ]^{\frac{1}{2}}\int _{\mathbb R^2} |u_1+u_0+\alpha (t)-\alpha (0)|^m\exp \biggl \{-\frac{u_1^2}{2[\beta (t)-\beta (0)]}\biggr \}\\&\qquad \mathrm{d}m_L(u_1)\mathrm{d}\varphi (u_0)\\&\quad \le \biggl [\frac{1}{2\pi [\beta (t)-\beta (0)]}\biggr ]^{\frac{1}{2}}\sum _{l_1+l_2+l_3=m}\frac{m!}{l_1!l_2!l_3!}|\alpha (t)-\alpha (0)|^{l_1}\int _{\mathbb R^2} |u_1|^{l_2}|u_0|^{l_3}\\&\qquad \times \exp \biggl \{-\frac{u_1^2}{2[\beta (t)-\beta (0)]}\biggr \}\mathrm{d}m_L(u_1)\mathrm{d}\varphi (u_0) \end{aligned}$$

which is finite, so that \(F_t\) is \(w_{\alpha ,\beta ;\varphi }\)-integrable for all \(t\in [0,T]\). If \(t=t_j\) for some j, then for \(m_X\) a.e. \(\varvec{\eta }\in \mathbb R^{n+1}\), we have by Theorem 4

$$\begin{aligned} GE[F_t|X](\varvec{\eta })= & {} \frac{1}{\varphi (\mathbb R)}\int _{C[0,T]}F_t(x-P_\beta (x)+P_\beta (\varvec{\eta }))\mathrm{d}w_{\alpha ,\beta ;\varphi }(x) \\= & {} \frac{1}{\varphi (\mathbb R)}\int _{C[0,T]}[P_\beta (\varvec{\eta })(t_j)]^m\mathrm{d}w_{\alpha ,\beta ;\varphi }(x)=\eta _j^m. \end{aligned}$$

If \(t\in (t_{j-1},t_j)\) for some j, then we have by Corollary 1 and Theorem 4

$$\begin{aligned}&GE[F_t|X](\varvec{\eta })\\&\quad =\frac{1}{\varphi (\mathbb R)}\int _{C[0,T]}F_t(x-P_\beta (x)+P_\beta (\varvec{\eta }))\mathrm{d}w_{\alpha ,\beta ;\varphi }(x) \\&\quad =\biggl [\frac{1}{2\pi \Gamma (t)}\biggr ]^{\frac{1}{2}} \int _{\mathbb R} [u+P_\beta (\varvec{\eta })(t)+Z_t(\alpha )]^m\exp \biggl \{-\frac{u^2}{2\Gamma (t)}\biggr \}\mathrm{d}m_L(u). \end{aligned}$$

Using the same process as in the proof of [2, Theorem 3.1], we have

$$\begin{aligned} GE[F_t|X](\varvec{\eta })=\sum _{k=0}^{[\frac{m}{2}]}\frac{m!}{2^{k}k!(m-2k)!}[P_\beta (\varvec{\eta })(t)+Z_t(\alpha )]^{m-2k}[\Gamma (t)]^k. \end{aligned}$$

Moreover, if \(t=t_j\), then the right-hand side of the above equality reduces to \(\eta _j^m\) since \(\Gamma (t_j)=0\). The proof is now completed. \(\square \)
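Formula (13) is the classical expansion of the m-th raw moment of a normal random variable with mean \(P_\beta (\varvec{\eta })(t)+Z_t(\alpha )\) and variance \(\Gamma (t)\). A quick Python check (the mean and variance values are illustrative) compares (13) with the Gaussian moment recursion \(M_m=\mu M_{m-1}+(m-1)\sigma ^2M_{m-2}\) obtained from Stein's identity:

```python
import math

def moment_formula(mu, gamma, m):
    # Right-hand side of (13), with mu standing for P_beta(eta)(t) + Z_t(alpha)
    # and gamma for Gamma(t).
    return sum(
        math.factorial(m) / (2**k * math.factorial(k) * math.factorial(m - 2 * k))
        * mu ** (m - 2 * k) * gamma ** k
        for k in range(m // 2 + 1)
    )

def moment_recursion(mu, gamma, m):
    # E[X^m] for X ~ N(mu, gamma) via Stein's identity:
    # M_m = mu*M_{m-1} + (m-1)*gamma*M_{m-2}, with M_0 = 1, M_1 = mu.
    M = [1.0, mu]
    for j in range(2, m + 1):
        M.append(mu * M[j - 1] + (j - 1) * gamma * M[j - 2])
    return M[m]

mu, gamma = 0.8, 0.3   # illustrative values
for m in range(1, 9):
    print(m, moment_formula(mu, gamma, m), moment_recursion(mu, gamma, m))
```

Note that \(\Gamma (t_j)=0\) makes every \(k\ge 1\) term vanish, which is how (13) collapses to \(\eta _j^m\) at the partition points.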

Theorem 8

Let \(m\in \mathbb N\) and \(F(x)=\int _0^T[x(t)]^m \mathrm{d}\lambda (t)\) for \(x\in C[0,T]\), where \(\lambda \) is a finite \(\mathbb C\)-valued measure on the Borel class of [0, T]. Suppose that

$$\begin{aligned} \int _{\mathbb R}|u|^m \mathrm{d}\varphi (u)<\infty \text { and }\int _0^T|\alpha (t)|^m \mathrm{d}|\lambda |(t)<\infty . \end{aligned}$$
(14)

Then, F is \(w_{\alpha ,\beta ;\varphi }\)-integrable and for \(m_X\) a.e. \(\varvec{\eta }=(\eta _0,\eta _1,\ldots ,\eta _n)\in \mathbb R^{n+1}\),

$$\begin{aligned} GE[F|X](\varvec{\eta })= & {} \int _0^T GE[F_t|X](\varvec{\eta })\mathrm{d}\lambda (t)\\= & {} \sum _{j=0}^n\lambda (\{t_j\})\eta _j^m+\sum _{j=1}^n\int _{(t_{j-1},t_j)}GE[F_t|X](\varvec{\eta })\mathrm{d}\lambda (t), \end{aligned}$$

where \(GE[F_t|X](\varvec{\eta })\) is given by (13).

Proof

Note that F is \(w_{\alpha ,\beta ;\varphi }\)-integrable by the proof of Theorem 7, since

$$\begin{aligned}&\int _{C[0,T]} |F(x)|\mathrm{d}w_{\alpha ,\beta ;\varphi }(x) \\&\quad \le \sum _{l_1+l_2+l_3=m}\frac{m!}{l_1!l_2!l_3!}\int _0^T|\alpha (t)-\alpha (0)|^{l_1}\biggl [\frac{1}{2\pi [\beta (t)-\beta (0)]}\biggr ]^{\frac{1}{2}}\\&\qquad \times \int _{\mathbb R^2} |u_1|^{l_2}|u_0|^{l_3} \exp \biggl \{-\frac{u_1^2}{2[\beta (t)-\beta (0)]}\biggr \}\mathrm{d}m_L(u_1)\mathrm{d}\varphi (u_0)\mathrm{d}|\lambda |(t) \end{aligned}$$

which is finite by (14). The equalities of the theorem follow immediately. \(\square \)

Applying the same calculations as in [2, Example 3.3], we obtain the following example from Theorem 8.

Example 1

For \(l=1,2,3\), let \(F_l(x)= \int _0^T [x(t)]^l\mathrm{d}\beta (t)\) for \(x\in C[0,T]\). Then, we have the following:

  1. (a)

    If \(\int _{\mathbb R}|u| \mathrm{d}\varphi (u)< \infty \) and \(\int _0^T|\alpha (t)|\mathrm{d}\beta (t)<\infty \), then for \(m_X\) a.e. \(\varvec{\eta }=(\eta _0,\eta _1,\ldots ,\eta _n)\in \mathbb R^{n+1}\)

    $$\begin{aligned}&GE[F_1|X](\varvec{\eta })\\&\quad =\sum _{j=1}^n\int _{t_{j-1}}^{t_j}[P_\beta (\varvec{\eta }-\alpha )(t)+\alpha (t)] \mathrm{d}\beta (t)\\&\quad =\int _0^T\alpha (t)\mathrm{d}\beta (t)+\frac{1}{2}\sum _{j=1}^n[\beta (t_j)-\beta (t_{j-1})] [\eta _j-\alpha (t_j)+\eta _{j-1}-\alpha (t_{j-1})]. \end{aligned}$$
  2. (b)

    If \(\int _{\mathbb R}u^2 \mathrm{d}\varphi (u)< \infty \) and \(\int _0^T[\alpha (t)]^2\mathrm{d}\beta (t)<\infty \), then for \(m_X\) a.e. \(\varvec{\eta }=(\eta _0,\eta _1,\ldots ,\eta _n)\in \mathbb R^{n+1}\)

    $$\begin{aligned}&GE[F_2|X](\varvec{\eta })\\&\quad =\sum _{j=1}^n\int _{t_{j-1}}^{t_j}[\Gamma (t)+[P_\beta (\varvec{\eta }-\alpha )(t)+\alpha (t)]^2] \mathrm{d}\beta (t)\\&\quad =\int _0^T\alpha (t)[\alpha (t)+2P_\beta (\varvec{\eta }-\alpha )(t)]\mathrm{d}\beta (t)\\&\qquad +\,\sum _{j=1}^n\int _{t_{j-1}}^{t_j}[\Gamma (t)+[P_\beta (\varvec{\eta }-\alpha )(t)]^2]\mathrm{d}\beta (t)\\&\quad = \int _0^T\alpha (t)[\alpha (t)+2P_\beta (\varvec{\eta }-\alpha )(t)]\mathrm{d}\beta (t)+\frac{1}{6}\sum _{j=1}^n[\beta (t_j)-\beta (t_{j-1})][\beta (t_j)\nonumber \\&\qquad -\,\beta (t_{j-1})+2[[\eta _j-\alpha (t_j)]^2+[\eta _j-\alpha (t_j)][\eta _{j-1}-\alpha (t_{j-1})]+[\eta _{j-1}\\&\qquad -\,\alpha (t_{j-1})]^2]]. \end{aligned}$$
  3. (c)

    If \(\int _{\mathbb R}|u|^3 \mathrm{d}\varphi (u)< \infty \) and \(\int _0^T|\alpha (t)|^3\mathrm{d}\beta (t)<\infty \), then for \(m_X\) a.e. \(\varvec{\eta }=(\eta _0,\eta _1,\ldots ,\eta _n)\in \mathbb R^{n+1}\)

    $$\begin{aligned}&GE[F_3|X](\varvec{\eta })\\&\quad = \sum _{j=1}^n\int _{t_{j-1}}^{t_j}[3\Gamma (t)[P_\beta (\varvec{\eta }-\alpha )(t)+\alpha (t)]+[P_\beta (\varvec{\eta }-\alpha )(t)+\alpha (t)]^3] \mathrm{d}\beta (t) \\&\quad =\int _0^T\alpha (t)[[\alpha (t)]^2+3[\Gamma (t)+P_\beta (\varvec{\eta }-\alpha )(t)[P_\beta (\varvec{\eta }-\alpha )(t)+\alpha (t)]]]\mathrm{d}\beta (t) \\&\qquad + \sum _{j=1}^n\int _{t_{j-1}}^{t_j}[3\Gamma (t)P_\beta (\varvec{\eta }-\alpha )(t)+[P_\beta (\varvec{\eta }-\alpha )(t)]^3]\mathrm{d}\beta (t) \\&\quad =\int _0^T\alpha (t)[[\alpha (t)]^2+3[\Gamma (t)+P_\beta (\varvec{\eta }-\alpha )(t)[P_\beta (\varvec{\eta }-\alpha )(t)+\alpha (t)]]]\mathrm{d}\beta (t)\\&\qquad +\,\frac{1}{4}\sum _{j=1}^n[\beta (t_j)-\beta (t_{j-1})][[\beta (t_j)-\beta (t_{j-1})][\eta _j-\alpha (t_j)+\eta _{j-1} \\&\qquad -\,\alpha (t_{j-1})]+[\eta _j-\alpha (t_j)]^3+[\eta _j-\alpha (t_j)]^2[\eta _{j-1}-\alpha (t_{j-1})]+[\eta _j \\&\qquad -\,\alpha (t_j)][\eta _{j-1}-\alpha (t_{j-1})]^2+[\eta _{j-1}-\alpha (t_{j-1})]^3]. \end{aligned}$$
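In the classical case \(\alpha =0\), \(\beta (t)=t\), formula (a) of Example 1 reduces to the trapezoid rule applied to the polygonal path through the \(\eta _j\). The following Python sketch (partition and values chosen for illustration) confirms this against a fine-grid integral of the polygonal path:

```python
import numpy as np

# Illustrative partition of [0, 1] and conditioning values eta_j.
t = np.array([0.0, 0.3, 0.7, 1.0])
eta = np.array([0.0, 1.0, -0.5, 2.0])

# Closed form from Example 1(a) with alpha = 0 and beta(t) = t:
# (1/2) * sum_j (t_j - t_{j-1}) * (eta_j + eta_{j-1}).
formula = 0.5 * np.sum(np.diff(t) * (eta[1:] + eta[:-1]))

# Direct numerical integral of the polygonal path P_beta(eta) over [0, 1].
s = np.linspace(0.0, 1.0, 100_001)
path = np.interp(s, t, eta)
ds = s[1] - s[0]
direct = 0.5 * ds * np.sum(path[:-1] + path[1:])

print(formula, direct)   # the two values agree
```

Since the trapezoid rule is exact on piecewise-linear functions, the two computations coincide up to floating-point and grid-alignment error.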

Theorem 9

Let \(F(x)=\exp \{ \int _0^T x(t) \mathrm{d}\beta (t)\}\) for \(x\in C[0,T]\) and suppose that F is \(w_{\alpha ,\beta ;\varphi }\)-integrable. For a partition \(\tau :0=t_0<t_1<\cdots <t_n=T\) of [0, T], let \(X_\tau (x)=(x(t_0),x(t_1),\ldots ,x(t_n))\) for \(x\in C[0,T]\). Then, for \(w_{\alpha ,\beta ;\varphi }\) a.e. \(y\in C[0, T]\), we have

$$\begin{aligned} \lim _{\Vert \tau \Vert \rightarrow 0}GE[F| X_\tau ](X_\tau (y))=F(y). \end{aligned}$$

Proof

For \(w_{\alpha ,\beta ;\varphi }\) a.e. \(y\in C[0,T]\), we have

$$\begin{aligned} GE[F|X_\tau ](X_\tau (y))= & {} \frac{1}{\varphi (\mathbb R)}\int _{C[0,T]} F(x-P_\beta (x)+P_\beta (y))\mathrm{d}w_{\alpha ,\beta ;\varphi }(x) \\= & {} \exp \biggl \{\frac{1}{2}\sum _{j=1}^n[\beta (t_j)-\beta (t_{j-1})][y(t_{j-1})+y(t_j)] \biggr \} \\&\times \frac{1}{\varphi (\mathbb R)}\int _{C[0,T]}\exp \biggl \{\int _0^T Z_t(x) \mathrm{d}\beta (t)\biggr \}\mathrm{d}w_{\alpha ,\beta ;\varphi }(x) \end{aligned}$$

by Theorem 4 and the same process as used in Example 1. Letting \(\Vert \tau \Vert \rightarrow 0\), we have

$$\begin{aligned} \lim _{\Vert \tau \Vert \rightarrow 0}GE[F| X_\tau ](X_\tau (y))=F(y) \end{aligned}$$

because, by the uniform continuity of x on [0, T], \(\lim _{\Vert \tau \Vert \rightarrow 0}Z_t(x)=0\) uniformly in t for each \(x\in C[0, T]\). The proof is now completed. \(\square \)