In this chapter, we consider stochastic differential equations and backward stochastic differential equations driven by G-Brownian motion. The conditions for, and proofs of, existence and uniqueness of solutions of stochastic differential equations are similar to the classical situation. However, the corresponding problems for backward stochastic differential equations are not so easy, and many of them are still open. We give only partial results in this direction.

1 Stochastic Differential Equations

In this chapter, we denote by \(\bar{M}_{G}^{p}(0,T;\mathbb {R}^{n})\), \(p\ge 1\), the completion of \(M_{G}^{p, 0}(0,T;\mathbb {R}^{n})\) under the norm \((\int _{0}^{T}\hat{\mathbb {E}}[|\eta _{t}|^{p}]dt)^{1/p}\). It is not hard to prove that \(\bar{M}_{G}^{p}(0,T;\mathbb {R}^{n})\subseteq M_{G}^p(0,T;\mathbb {R}^{n})\). We consider all the problems in the space \(\bar{M}_{G}^{p}(0,T;\mathbb {R}^{n})\). The following lemma is useful in our future discussion.

Lemma 5.1.1

Suppose that \(\varphi \in M_G^2(0,T)\). Then for \(\mathbf {a}\in \mathbb {R}^d\), it holds that

$$\begin{aligned} \eta _t:=\int ^t_0\varphi _s dB^{\mathbf {a}}_s\in \bar{M}_G^2(0,T). \end{aligned}$$

Proof

Choose a sequence of processes \(\varphi ^n\in M_G^{2,0}(0,T)\) such that

$$ \lim \limits _{n\rightarrow \infty }\mathbb {\hat{E}}\left[ \int ^T_0|\varphi _s-\varphi ^n_s|^2ds\right] =0. $$

Then for each integer n, it is easy to check that the process \( \eta _t^n=\int ^t_0\varphi _s^n dB^{\mathbf {a}}_s\) belongs to the space \(\bar{M}_{G}^{2}(0,T)\).

On the other hand, it follows from the property of G-Itô integral that

$$\begin{aligned} \int ^T_0\mathbb {\hat{E}}[|\eta _t-\eta _t^n|^2]dt=\sigma _{\mathbf {aa}^{T}}^{2}\int ^T_0 \mathbb {\hat{E}}\left[ \int ^{t}_{0}|\varphi _s-\varphi _s^n|^2ds\right] dt\le \sigma _{\mathbf {aa}^{T}}^{2} T \mathbb {\hat{E}}\left[ \int ^T_0|\varphi _s-\varphi ^n_s|^2ds\right] , \end{aligned}$$

which implies the desired result.    \(\square \)

Now we consider the following SDE driven by a d-dimensional G-Brownian motion:

$$\begin{aligned} X_{t}=X_{0}+\int _{0}^{t}b(s, X_{s})ds+\int _{0}^{t}h_{ij}(s, X_{s})d\left\langle B\right\rangle ^{ij} _{s}+\int _{0}^{t}\sigma _{j}(s, X_{s})dB_{s}^{j},\ t\in [0,T], \end{aligned}$$
(5.1.1)

where the initial condition \(X_{0}\in \mathbb {R}^{n}\) is a given constant, \(b, h_{ij},\sigma _{j}\) are given functions satisfying \(b(\cdot , x)\), \(h_{ij}(\cdot , x)\), \(\sigma _{j}(\cdot , x)\in {M}_{G}^{2}(0,T;\mathbb {R}^{n})\) for each \(x\in \mathbb {R}^{n}\) and the Lipschitz condition, i.e., \(|\phi (t,x)-\phi (t, x^{\prime })|\le K|x-x^{\prime }|\), for each \(t\in [0,T]\), x, \(x^{\prime }\in \mathbb {R}^{n}\), \(\phi =b\), \(h_{ij}\) and \(\sigma _{j}\), respectively. Here the horizon [0, T] can be arbitrarily large. The solution is a process \((X_t)_{t\in [0,T]}\in \bar{M}_{G}^{2}(0,T;\mathbb {R}^{n})\) satisfying the SDE (5.1.1).
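Roughly speaking, a G-Brownian motion aggregates a whole family of volatility scenarios: if one constant scenario \(s\in [\underline{\sigma },\overline{\sigma }]\) is frozen, then B behaves like sW for a classical Brownian motion W, with \(d\langle B\rangle _{t}=s^{2}dt\), and (5.1.1) becomes a classical SDE. The following sketch (in Python, with \(n=d=1\); the coefficients b, h, sigma and all numerical values are illustrative assumptions, not taken from the text) discretizes that frozen-scenario equation by the Euler scheme.

```python
import numpy as np

# Hedged sketch: under a single frozen volatility scenario s, the
# G-Brownian motion behaves like s*W for a standard Brownian motion W,
# with d<B>_t = s^2 dt, and the SDE (5.1.1) reduces to a classical SDE.
# All coefficient choices below are illustrative assumptions.

def euler_g_sde(x0, b, h, sigma, s, T=1.0, n_steps=1000, seed=0):
    """One Euler path of dX = b(X)dt + h(X) d<B> + sigma(X) dB
    under the frozen scenario d<B>_t = s^2 dt, dB_t = s dW_t."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))
        x[k + 1] = (x[k] + b(x[k]) * dt
                    + h(x[k]) * s**2 * dt      # d<B> contribution
                    + sigma(x[k]) * s * dW)    # dB contribution
    return x

# Illustrative Lipschitz coefficients (assumptions, not from the text):
path = euler_g_sde(x0=1.0,
                   b=lambda x: -x,
                   h=lambda x: 0.1 * np.tanh(x),
                   sigma=lambda x: 0.5 * x,
                   s=1.2)
```

Heuristically, the G-expectation corresponds to taking a supremum of classical expectations over such volatility scenarios.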

We first introduce the following mapping on a fixed interval [0, T]:

$$ \Lambda _{\cdot }:\bar{M}_{G}^{2}(0,T;\mathbb {R}^{n})\rightarrow \bar{M}_{G}^{2}(0,T;\mathbb {R}^{n}) $$

by setting, for each \(t\in [0,T]\),

$$ \Lambda _{t}(Y)=X_{0}+\int _{0}^{t}b(s, Y_{s})ds+\int _{0}^{t}h_{ij}(s, Y_{s})d\left\langle B\right\rangle ^{ij} _{s}+\int _{0}^{t}\sigma _{j}(s, Y_{s})dB_{s}^{j}. $$

From Lemma 5.1.1 and Exercise 5.4.2 of this chapter, we see that the mapping \(\Lambda \) is well-defined.

We immediately have the following lemma, whose proof is left to the reader.

Lemma 5.1.2

For any \(Y, Y^{\prime }\in \bar{M}_{G}^{2}(0,T;\mathbb {R}^{n}),\) we have the following estimate:

$$\begin{aligned} \hat{\mathbb {E}}[|\Lambda _{t}(Y)-\Lambda _{t}(Y^{\prime })|^{2}]\le C\int _{0}^{t}\hat{\mathbb {E}}[|Y_{s}-Y_{s}^{\prime }|^{2}]ds,\ t\in [0,T], \end{aligned}$$
(5.1.2)

where the constant C depends only on the Lipschitz constant K.

We now prove that the SDE (5.1.1) has a unique solution. Multiplying both sides of (5.1.2) by \(e^{-2Ct}\) and integrating over [0, T], we derive

$$\begin{aligned} \int _{0}^{T}\hat{\mathbb {E}}[|\Lambda _{t}(Y)-\Lambda _{t}(Y^{\prime })|^{2}]e^{-2Ct}dt&\le C\int _{0}^{T}e^{-2Ct}\int _{0}^{t}\mathbb {\hat{E}}[|Y_{s}-Y_{s}^{\prime }|^{2}]dsdt\\&=C\int _{0}^{T}\int _{s}^{T}e^{-2Ct}dt\hat{\mathbb {E}}[|Y_{s}-Y_{s}^{\prime }|^{2}]ds\\&=\frac{1}{2}\int _{0}^{T}(e^{-2Cs}-e^{-2CT})\hat{\mathbb {E}}[|Y_{s}-Y_{s}^{\prime }|^{2}]ds. \end{aligned}$$

We then have

$$\begin{aligned} \int _{0}^{T}\hat{\mathbb {E}}[|\Lambda _{t}(Y)-\Lambda _{t}(Y^{\prime })|^{2}]e^{-2Ct}dt\le \frac{1}{2}\int _{0}^{T}\hat{\mathbb {E}}[|Y_{t}-Y_{t}^{\prime }|^{2}]e^{-2Ct}dt. \end{aligned}$$
(5.1.3)

Note that the following two norms are equivalent in the space \(\bar{M}_{G} ^{2}(0,T;\mathbb {R}^{n})\):

$$ \left( \int _{0}^{T}\hat{\mathbb {E}}[|Y_{t}|^{2}]dt\right) ^{1/2}\thicksim \left( \int _{0} ^{T}\hat{\mathbb {E}}[|Y_{t}|^{2}]e^{-2Ct}dt\right) ^{1/2}. $$

From (5.1.3) we conclude that \(\Lambda \) is a contraction mapping with respect to this equivalent norm. Consequently, we have the following theorem.

Theorem 5.1.3

There exists a unique solution \((X_t)_{0\le t\le T}\in \bar{M}_{G}^{2}(0,T;\mathbb {R}^{n})\) of the stochastic differential equation (5.1.1).
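The fixed-point argument behind Theorem 5.1.3 can be visualized numerically. The sketch below is an assumption-laden toy: it keeps only the drift term of \(\Lambda \), so that the map becomes the Picard map \(\Lambda (Y)_{t}=X_{0}+\int _{0}^{t}b(Y_{s})ds\) of a deterministic ODE, with an illustrative Lipschitz coefficient b; it then measures the gaps between successive iterates.

```python
import numpy as np

# Sketch of the fixed-point iteration behind Theorem 5.1.3, in the
# deterministic special case where the d<B> and dB integrals are dropped:
#   Lambda(Y)(t) = X0 + int_0^t b(Y_s) ds.
# The coefficient b is an illustrative choice (Lipschitz constant K = 1).

T, n = 1.0, 2000
t = np.linspace(0.0, T, n + 1)
dt = T / n
X0 = 1.0
b = lambda y: np.cos(y)

def Lam(Y):
    """Picard map: cumulative trapezoidal integral of b(Y) plus X0."""
    f = b(Y)
    out = np.empty_like(Y)
    out[0] = X0
    out[1:] = X0 + np.cumsum(0.5 * (f[1:] + f[:-1])) * dt
    return out

Y = np.full(n + 1, X0)          # start the iteration from a constant path
gaps = []
for _ in range(8):
    Y_next = Lam(Y)
    gaps.append(np.max(np.abs(Y_next - Y)))
    Y = Y_next
```

The gaps decay roughly like \(K^{m}T^{m}/m!\), mirroring the \(\frac{1}{2}\)-contraction obtained in (5.1.3) under the weighted norm.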

We now consider a particular but important case of a linear SDE. For simplicity, assume that \(d=1\), \(n=1\), and let

$$\begin{aligned} X_{t}=X_{0}+\int _{0}^{t}(b_{s}X_{s}+\tilde{b}_{s})ds+\int _{0}^{t}(h_{s}X_{s}+\tilde{h}_{s})d\langle B\rangle _{s}+\int _{0}^{t}(\sigma _{s}X_{s}+\tilde{\sigma }_{s})dB_{s},\ t\in [0,T]. \end{aligned}$$
(5.1.4)

Here \(X_{0}\in \mathbb {R}\) is given, \(b_{.}, h_{.},\sigma _{.}\) are given bounded processes in \({M}_{G}^{2}(0,T;\mathbb {R})\) and \(\tilde{b}_{.},\tilde{h}_{.},\tilde{\sigma }_{.}\) are given processes in \({M}_{G}^{2}(0,T;\mathbb {R})\). It follows from Theorem 5.1.3 that the linear SDE (5.1.4) has a unique solution.

Remark 5.1.4

The solution of the linear SDE (5.1.4) is

$$\begin{aligned} X_{t}=\Gamma _{t}^{-1}(X_{0}+\int _{0}^{t}\tilde{b}_{s}\Gamma _{s}ds+\int _{0}^{t}(\tilde{h}_{s}-\sigma _{s}\tilde{\sigma }_{s})\Gamma _{s}d\langle B\rangle _{s}+\int _{0}^{t}\tilde{\sigma }_{s}\Gamma _{s}dB_{s}),\ t\in [0,T], \end{aligned}$$

where \(\Gamma _{t}=\exp (-\int _{0}^{t}b_{s}ds-\int _{0}^{t}(h_{s}-\frac{1}{2}\sigma _{s}^{2})d\langle B\rangle _{s}-\int _{0}^{t}\sigma _{s}dB_{s})\).
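This expression can be verified with the G-Itô formula. Indeed, applying it to \(\Gamma _{t}\) gives

$$ d\Gamma _{t}=\Gamma _{t}\left( -b_{t}dt-(h_{t}-\sigma _{t}^{2})d\langle B\rangle _{t}-\sigma _{t}dB_{t}\right) , $$

since the Itô correction contributes \(\frac{1}{2}\sigma _{t}^{2}d\langle B\rangle _{t}\). Then the product rule \(d(\Gamma _{t}X_{t})=\Gamma _{t}dX_{t}+X_{t}d\Gamma _{t}+d\Gamma _{t}\, dX_{t}\), with \(d\Gamma _{t}\,dX_{t}=-\sigma _{t}(\sigma _{t}X_{t}+\tilde{\sigma }_{t})\Gamma _{t}d\langle B\rangle _{t}\), shows that all terms containing \(X_{t}\) cancel, leaving

$$ d(\Gamma _{t}X_{t})=\tilde{b}_{t}\Gamma _{t}dt+(\tilde{h}_{t}-\sigma _{t}\tilde{\sigma }_{t})\Gamma _{t}d\langle B\rangle _{t}+\tilde{\sigma }_{t}\Gamma _{t}dB_{t}. $$

Integrating from 0 to t and multiplying by \(\Gamma _{t}^{-1}\) yields the stated formula.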

In particular, if \(b_{.}, h_{.},\sigma _{.}\) are constants and \(\tilde{b}_{.},\tilde{h}_{.},\tilde{\sigma }_{.}\) are zero, then X is a geometric G-Brownian motion.

Definition 5.1.5

We say that \((X_t)_{t\ge 0}\) is a geometric G-Brownian motion if

$$\begin{aligned} X_{t}=\exp (\alpha t+\beta \langle B\rangle _{t}+\gamma B_{t}), \end{aligned}$$
(5.1.5)

where \(\alpha ,\beta ,\gamma \) are constants.
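By the G-Itô formula, the geometric G-Brownian motion (5.1.5) solves the linear SDE

$$ dX_{t}=\alpha X_{t}dt+\left( \beta +\tfrac{1}{2}\gamma ^{2}\right) X_{t}d\langle B\rangle _{t}+\gamma X_{t}dB_{t}. $$

This is consistent with Remark 5.1.4: taking \(b_{.}\equiv \alpha \), \(h_{.}\equiv \beta +\frac{1}{2}\gamma ^{2}\), \(\sigma _{.}\equiv \gamma \) and \(\tilde{b}_{.}=\tilde{h}_{.}=\tilde{\sigma }_{.}=0\) there gives \(X_{t}=\Gamma _{t}^{-1}X_{0}=X_{0}\exp (\alpha t+\beta \langle B\rangle _{t}+\gamma B_{t})\), which for \(X_{0}=1\) is exactly (5.1.5).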

2 Backward Stochastic Differential Equations (BSDE)

We consider the following type of BSDE:

$$\begin{aligned} Y_{t}=\hat{\mathbb {E}}\left[ \xi +\int _{t}^{T}f(s,Y_{s})ds+\int _{t}^{T}h_{ij}(s, Y_{s})d\left\langle B\right\rangle ^{ij} _{s}\Big |\Omega _{t}\right] ,\ \ t\in [0,T], \end{aligned}$$
(5.2.1)

where \(\xi \in L_{G}^{1}(\Omega _{T};\mathbb {R}^{n})\), \(f, h_{ij}\) are given functions such that \(f(\cdot , y)\), \(h_{ij}(\cdot , y)\in {M}_{G}^{1}(0,T;\mathbb {R}^{n})\) for each \(y\in \mathbb {R}^{n}\) and these functions satisfy the Lipschitz condition, i.e.,

$$|\phi (t,y)-\phi (t, y^{\prime })|\le K|y-y^{\prime }|,\,\,\, \text { for each }\,\, t\in [0,T],\,\,\,\, y, y^{\prime }\in \mathbb {R}^{n},\,\,\,\, \phi =f \,\, \text { and } h_{ij}. $$

The solution is a process \((Y_t)_{0\le t\le T}\in \bar{M}_{G}^{1}(0,T;\mathbb {R}^{n})\) satisfying the above BSDE.

We first introduce the following mapping on a fixed interval [0, T]:

$$ \Lambda _{\cdot }:\bar{M}_{G}^{1}(0,T;\mathbb {R}^{n})\rightarrow \bar{M}_{G}^{1}(0,T;\mathbb {R}^{n})\ \ $$

by setting, for each \(t\in [0,T]\),

$$ \Lambda _{t}(Y)=\hat{\mathbb {E}}\left[ \xi +\int _{t}^{T}f(s,Y_{s})ds+\int _{t}^{T}h_{ij}(s, Y_{s})d\left\langle B\right\rangle ^{ij} _{s}\Big |\Omega _{t}\right] , $$

which is well-defined by Lemma 5.1.1 and Exercises 5.4.2, 5.4.5.

We immediately derive a useful property of \(\Lambda _t\).

Lemma 5.2.1

For any \(Y, Y^{\prime }\in \bar{M}_{G}^{1}(0,T;\mathbb {R}^{n}),\) we have the following estimate:

$$\begin{aligned} \hat{\mathbb {E}}[|\Lambda _{t}(Y)-\Lambda _{t}(Y^{\prime })|]\le C\int _{t}^{T}\hat{\mathbb {E}}[|Y_{s}-Y_{s}^{\prime }|]ds,\ t\in [0,T], \end{aligned}$$
(5.2.2)

where the constant C depends only on the Lipschitz constant K.

We now prove that the BSDE (5.2.1) has a unique solution. Multiplying both sides of (5.2.2) by \(e^{2Ct}\) and integrating over [0, T], we find

$$\begin{aligned} \int _{0}^{T}\hat{\mathbb {E}}[|\Lambda _{t}(Y)-\Lambda _{t}(Y^{\prime })|]e^{2Ct}dt&\le C\int _{0}^{T}\int _{t}^{T}\hat{\mathbb {E}}[|Y_{s}-Y_{s}^{\prime }|]e^{2Ct}dsdt\nonumber \\&=C\int _{0}^{T}\hat{\mathbb {E}}[|Y_{s}-Y_{s}^{\prime }|]\int _{0} ^{s}e^{2Ct}dtds\nonumber \\&=\frac{1}{2}\int _{0}^{T}\hat{\mathbb {E}}[|Y_{s}-Y_{s}^{\prime }|](e^{2Cs}-1)ds\nonumber \\&\le \frac{1}{2}\int _{0}^{T}\hat{\mathbb {E}}[|Y_{s}-Y_{s}^{\prime }|]e^{2Cs}ds. \end{aligned}$$
(5.2.3)

We observe that the following two norms in the space \(\bar{M}_{G} ^{1}(0,T;\mathbb {R}^{n})\) are equivalent:

$$ \int _{0}^{T}\hat{\mathbb {E}}[|Y_{t}|]dt\thicksim \int _{0}^{T}\hat{\mathbb {E}}[|Y_{t}|]e^{2Ct}dt. $$

From (5.2.3) we conclude that \(\Lambda \) is a contraction mapping under this equivalent norm. Consequently, we have proved the following theorem.

Theorem 5.2.2

There exists a unique solution \((Y_{t})_{t\in [0,T]}\in \bar{M}_{G}^{1}(0,T;\mathbb {R}^{n})\) of the backward stochastic differential equation (5.2.1).

Let \(Y^{(v)}\), \(v=1,2\), be the solutions of the following family of BSDEs:

$$ Y_{t}^{(v)}=\hat{\mathbb {E}}\left[ \xi ^{(v)}+\int _{t}^{T}(f(s,Y_{s}^{(v)})+\varphi _{s}^{(v)})ds+\int _{t}^{T}(h_{ij}(s, Y_{s}^{(v)})+\psi _{s}^{ij,(v)})d\left\langle B\right\rangle _{s}^{ij}\Big |\Omega _{t}\right] . $$

Then the following estimate holds.

Proposition 5.2.3

We have

$$\begin{aligned} \hat{\mathbb {E}}\left[ |Y_{t}^{(1)}-Y_{t}^{(2)}|\right] \le Ce^{C(T-t)}\hat{\mathbb {E}}\left[ |\xi ^{(1)}-\xi ^{(2)}|+\int _{t}^{T}\left( |\varphi _{s}^{(1)}-\varphi _{s}^{(2)}|+|\psi _{s}^{ij,(1)}-\psi _{s}^{ij,(2)}|\right) ds\right] , \end{aligned}$$
(5.2.4)

where the constant C depends only on the Lipschitz constant K.

Proof

As in the proof of Lemma 5.2.1, we have

$$\begin{aligned} \hat{\mathbb {E}}[|Y_{t}^{(1)}-Y_{t}^{(2)}|]&\le C\left( \int _{t}^{T}\hat{\mathbb {E}}[|Y_{s}^{(1)}-Y_{s}^{(2)}|]ds+\hat{\mathbb {E}}[|\xi ^{(1)}-\xi ^{(2)}|\right. \\&\ \ \ \left. +\int _{t}^{T}(|\varphi _{s}^{(1)}-\varphi _{s}^{(2)}|+|\psi _{s}^{ij,(1)}-\psi _{s}^{ij,(2)}|)ds]\right) . \end{aligned}$$

By applying the Gronwall inequality (see Exercise 5.4.4), we obtain the statement.    \(\square \)

Remark 5.2.4

In particular, if \(\xi ^{(2)}=0\), \(\varphi _{s}^{(2)}=-f(s, 0)\), \(\psi _{s}^{ij,(2)}=-h_{ij}(s, 0)\), \(\xi ^{(1)}=\xi \), \(\varphi _{s}^{(1)}=0\), \(\psi _{s}^{ij,(1)}=0\), we obtain an a priori estimate for the solution of the BSDE (5.2.1). Let Y be the solution of the BSDE (5.2.1). Then

$$\begin{aligned} \hat{\mathbb {E}}[|Y_{t}|]\le Ce^{C(T-t)}\hat{\mathbb {E}}\left[ |\xi |+\int _{t}^{T}(|f(s, 0)|+|h_{ij}(s, 0)|)ds\right] , \end{aligned}$$
(5.2.5)

where the constant C depends only on the Lipschitz constant K.

3 Nonlinear Feynman-Kac Formula

Consider the following SDE:

$$\begin{aligned} \left\{ \begin{aligned}dX_{s}^{t,\xi }&=b(X_{s}^{t,\xi })ds+h_{ij}(X_{s}^{t,\xi })d\left\langle B\right\rangle _{s}^{ij}+\sigma _{j}(X_{s}^{t,\xi })dB_{s}^{j},\ s\in [t, T],\\ X_{t}^{t,\xi }&=\xi ,\end{aligned}\right. \end{aligned}$$
(5.3.1)

where \(\xi \in L_{G}^{2}(\Omega _{t};\mathbb {R}^{n})\) and b, \(h_{ij}\), \(\sigma _{j}:\mathbb {R}^{n}\mapsto \mathbb {R}^{n}\) are given Lipschitz functions, i.e., \(|\phi (x)-\phi (x^{\prime })|\le K|x-x^{\prime }|\), for all x, \(x^{\prime }\in \mathbb {R}^{n}\), \(\phi =b\), \(h_{ij}\) and \(\sigma _{j}\).

We then consider the associated BSDE:

$$\begin{aligned} Y_{s}^{t,\xi }=\hat{\mathbb {E}}\left[ \Phi (X_{T}^{t,\xi })+\int _{s}^{T}f(X_{r}^{t,\xi },Y_{r}^{t,\xi })dr+\int _{s}^{T}g_{ij}(X_{r}^{t,\xi },Y_{r}^{t,\xi })d\left\langle B^{i}, B^{j}\right\rangle _{r}\Big |\Omega _{s}\right] , \end{aligned}$$
(5.3.2)

where \(\Phi :\mathbb {R}^{n}\rightarrow \mathbb {R}\) is a given Lipschitz function and f, \(g_{ij}:\mathbb {R}^{n}\times \mathbb {R}\mapsto \mathbb {R}\) are given Lipschitz functions, i.e., \(|\phi (x,y)-\phi (x^{\prime }, y^{\prime })|\le K(|x-x^{\prime }|+|y-y^{\prime }|)\), for each x, \(x^{\prime }\in \mathbb {R}^{n}\), y, \(y^{\prime }\in \mathbb {R}\), \(\phi =f\) and \(g_{ij}\).

We have the following estimates:

Proposition 5.3.1

For each \(\xi \), \(\xi ^{\prime }\in L_{G}^{2}(\Omega _{t};\mathbb {R}^{n})\), we have, for each \(s\in [t, T],\)

$$\begin{aligned} \hat{\mathbb {E}}[|X_{s}^{t,\xi }-X_{s}^{t,\xi ^{\prime }}|^{2}|\Omega _{t}]\le C|\xi -\xi ^{\prime }|^{2} \end{aligned}$$
(5.3.3)

and

$$\begin{aligned} \hat{\mathbb {E}}[|X_{s}^{t,\xi }|^{2}|\Omega _{t}]\le C(1+|\xi |^{2}), \end{aligned}$$
(5.3.4)

where the constant C depends only on the Lipschitz constant K.

Proof

It is easy to see that

$$ \hat{\mathbb {E}}[|X_{s}^{t,\xi }-X_{s}^{t,\xi ^{\prime }}|^{2}\big |\Omega _{t}]\le C_{1}(|\xi -\xi ^{\prime }|^{2}+\int _{t}^{s}\hat{\mathbb {E}}[|X_{r}^{t,\xi }-X_{r}^{t,\xi ^{\prime }}|^{2}|\Omega _{t}]dr). $$

By the Gronwall inequality, we obtain (5.3.3), namely

$$ \hat{\mathbb {E}}[|X_{s}^{t,\xi }-X_{s}^{t,\xi ^{\prime }}|^{2}|\Omega _{t}]\le C_{1}e^{C_{1}T}|\xi -\xi ^{\prime }|^{2}. $$

Similarly, we derive (5.3.4).    \(\square \)

Corollary 5.3.2

For any \(\xi \in L_{G}^{2}(\Omega _{t};\mathbb {R}^{n})\), we have

$$\begin{aligned} \hat{\mathbb {E}}[|X_{t+\delta }^{t,\xi }-\xi |^{2}|\Omega _{t}]\le C(1+|\xi |^{2})\delta \ \ \text {for} \ \delta \in [0,T-t], \end{aligned}$$
(5.3.5)

where the constant C depends only on the Lipschitz constant K.

Proof

It is easy to see that

$$ \hat{\mathbb {E}}[|X_{t+\delta }^{t,\xi }-\xi |^{2}\big |\Omega _{t}]\le C_{1}\int _{t}^{t+\delta }\left( 1+\hat{\mathbb {E}}[|X_{s}^{t,\xi }|^{2}\big |\Omega _{t}]\right) ds. $$

Then the result follows from Proposition 5.3.1.    \(\square \)

Proposition 5.3.3

For each \(\xi \), \(\xi ^{\prime }\in L_{G}^{2}(\Omega _{t};\mathbb {R}^{n})\), we have

$$\begin{aligned} |Y_{t}^{t,\xi }-Y_{t}^{t,\xi ^{\prime }}|\le C|\xi -\xi ^{\prime }| \end{aligned}$$
(5.3.6)

and

$$\begin{aligned} |Y_{t}^{t,\xi }|\le C(1+|\xi |), \end{aligned}$$
(5.3.7)

where the constant C depends only on the Lipschitz constant K.

Proof

For each \(s\in [0,T]\), it is easy to check that

$$ |Y_{s}^{t,\xi }-Y_{s}^{t,\xi ^{\prime }}|\le C_{1}\hat{\mathbb {E}}\left[ |X_{T}^{t,\xi }-X_{T}^{t,\xi ^{\prime }}|+\int _{s}^{T}(|X_{r}^{t,\xi }-X_{r}^{t,\xi ^{\prime }}|+|Y_{r}^{t,\xi }-Y_{r}^{t,\xi ^{\prime }}|)dr|\Omega _{s}\right] . $$

Since

$$ \hat{\mathbb {E}}[|X_{s}^{t,\xi }-X_{s}^{t,\xi ^{\prime }}||\Omega _{t}]\le \left( \hat{\mathbb {E}}[|X_{s}^{t,\xi }-X_{s}^{t,\xi ^{\prime }}|^{2}|\Omega _{t}]\right) ^{1/2}, $$

we have

$$ \hat{\mathbb {E}}[|Y_{s}^{t,\xi }-Y_{s}^{t,\xi ^{\prime }}||\Omega _{t}]\le C_{2}(|\xi -\xi ^{\prime }|+\int _{s}^{T}\hat{\mathbb {E}}[|Y_{r}^{t,\xi }-Y_{r}^{t,\xi ^{\prime }}||\Omega _{t}]dr). $$

By the Gronwall inequality, we obtain (5.3.6). Similarly we derive (5.3.7).    \(\square \)

We are more interested in the case when \(\xi =x\in \mathbb {R}^{n}\). Define

$$\begin{aligned} u(t,x):=Y_{t}^{t,x},\ \ (t, x)\in [0,T]\times \mathbb {R}^{n}. \end{aligned}$$
(5.3.8)

By Proposition 5.3.3, we immediately have the following estimates:

$$\begin{aligned} |u(t,x)-u(t, x^{\prime })|\le C|x-x^{\prime }|, \end{aligned}$$
(5.3.9)
$$\begin{aligned} |u(t, x)|\le C(1+|x|), \end{aligned}$$
(5.3.10)

where the constant C depends only on the Lipschitz constant K.

Remark 5.3.4

It is important to note that \(u(t,x)\) is a deterministic function of (t, x), because \(X_{s}^{t, x}\) and \(Y_{s}^{t, x}\) are independent of \(\Omega _{t}\).

Theorem 5.3.5

For any \(\xi \in L_{G}^{2}(\Omega _{t};\mathbb {R}^{n})\), we have

$$\begin{aligned} u(t,\xi )=Y_{t}^{t,\xi }. \end{aligned}$$
(5.3.11)

Proof

Without loss of generality, suppose that \(n=1\).

First, we assume that \(\xi \in Lip(\Omega _T)\) is bounded by some constant \(\rho \). Thus for each integer \(N>0\), we can choose a simple function

$$ \eta ^{N}=\sum _{i=-N}^{N}x_{i}{} \mathbf{I}_{A_{i}}(\xi ) $$

with \(x_i=\frac{i\rho }{N}, A_i=[\frac{i\rho }{N},\frac{(i+1)\rho }{N})\) for \(i=-N,\ldots , N-1\) and \(x_N=\rho , A_N=\{\rho \}\). From the definition of u, we conclude that

$$\begin{aligned} |Y_{t}^{t,\xi }-u(t,\eta ^{N})| =|Y_{t}^{t,\xi }-\sum _{i=-N}^{N}u(t,x_i)\mathbf{I}_{A_{i}}(\xi )|&=|Y_{t}^{t,\xi }-\sum _{i=-N}^{N}Y_{t}^{t,x_{i}}{} \mathbf{I}_{A_{i}}(\xi )|\\&=\sum _{i=-N}^{N}|Y_{t}^{t,\xi }-Y_{t}^{t, x_{i}}|\mathbf{I}_{A_{i}}(\xi ). \end{aligned}$$

Then it follows from Proposition 5.3.3 that

$$\begin{aligned} |Y_{t}^{t,\xi }-u(t,\eta ^{N})| \le C\sum _{i=-N}^{N}|\xi -x_{i}|\mathbf{I}_{A_{i}}(\xi ) \le C\frac{\rho }{N}. \end{aligned}$$

Noting that

$$ |u(t,\xi )-u(t,\eta ^{N})|\le C|\xi -\eta ^{N}|\le C\frac{\rho }{N}, $$

we get \(\mathbb {\hat{E}}[|Y_{t}^{t,\xi }-u(t,\xi )|]\le 2C\frac{\rho }{N}\). Since N can be arbitrarily large, we obtain \(Y_{t}^{t,\xi }=u(t,\xi )\).

In the general case, by Exercise 3.10.4 in Chap. 3, we can find a sequence of bounded random variables \(\xi _k\in Lip(\Omega _T)\) such that

$$ \lim \limits _{k\rightarrow \infty }\hat{\mathbb {E}}[|\xi -\xi _k|^2]=0. $$

Consequently, applying Proposition 5.3.3 again yields that

$$ \lim \limits _{k\rightarrow \infty }\hat{\mathbb {E}}[|Y_{t}^{t,\xi }-Y_{t}^{t,\xi _k}|^2]\le C\lim \limits _{k\rightarrow \infty }\hat{\mathbb {E}}[|\xi -\xi _k|^2]=0, $$

which together with \(Y_{t}^{t,\xi _k}=u(t,\xi _k)\) implies the desired result.    \(\square \)

Proposition 5.3.6

We have, for \(\delta \in [0,T-t],\)

$$\begin{aligned} u(t,x)=\hat{\mathbb {E}}\left[ u(t+\delta ,X_{t+\delta }^{t,x})+\int _{t}^{t+\delta }f(X_{r}^{t,x},Y_{r}^{t,x})dr+\int _{t}^{t+\delta }g_{ij}(X_{r}^{t,x},Y_{r}^{t, x})d\left\langle B\right\rangle _{r}^{ij}\right] . \end{aligned}$$
(5.3.12)

Proof

Since \(X_{s}^{t,x}=X_{s}^{t+\delta ,X_{t+\delta }^{t, x}}\) for \(s\in [t+\delta , T]\), we get \(Y_{t+\delta }^{t,x}=Y_{t+\delta }^{t+\delta ,X_{t+\delta }^{t, x}}\). By Theorem 5.3.5, we have \(Y_{t+\delta }^{t,x}=u(t+\delta ,X_{t+\delta }^{t, x})\), which implies the result.    \(\square \)

For any \(A\in \mathbb {S}(n)\), \(p\in \mathbb {R}^{n}\), \(r\in \mathbb {R}\), we set

$$\begin{aligned} F(A,p,r,x):=G(B(A,p,r,x))+\langle p,b(x)\rangle +f(x, r), \end{aligned}$$

where \(B(A,p,r,x)\) is a \(d\times d\) symmetric matrix with

$$\begin{aligned} B_{ij}(A,p,r,x):=\langle A\sigma _{i}(x),\sigma _{j}(x)\rangle +\langle p,h_{ij}(x)+h_{ji}(x)\rangle +g_{ij}(x,r)+g_{ji}(x, r). \end{aligned}$$

Theorem 5.3.7

The function \(u(t,x)\) is the unique viscosity solution of the following PDE:

$$\begin{aligned} \left\{ \begin{array} [c]{l}\partial _{t}u+F(D^{2}u,Du,u, x)=0,\\ u(T, x)=\Phi (x). \end{array} \right. \end{aligned}$$
(5.3.13)

Proof

We first show that u is a continuous function. By (5.3.9) we know that u is a Lipschitz function in x. It follows from (5.2.5) and (5.3.4) that

$$ \hat{\mathbb {E}}[|Y_{s}^{t, x}|]\le C(1+|x|),\,\,\, \text { for } s\in [t, T]. $$

In view of (5.3.5) and (5.3.12), we get \(|u(t,x)-u(t+\delta , x)|\le C(1+|x|)(\delta ^{1/2}+\delta )\) for \(\delta \in [0,T-t]\). Thus u is \(\frac{1}{2} \)-Hölder continuous in t, which implies that u is a continuous function. We can also show (see Exercise 5.4.8) that for each \(p\ge 2\),

$$\begin{aligned} \hat{\mathbb {E}}[|X_{t+\delta }^{t, x}-x|^{p}]\le C(1+|x|^{p})\delta ^{p/2}. \end{aligned}$$
(5.3.14)

Now for fixed \((t, x)\in (0,T)\times \mathbb {R}^{n}\), let \(\psi \in C_{l, Lip}^{2,3}([0,T]\times \mathbb {R}^{n})\) be such that \(\psi \ge u\) and \(\psi (t,x)=u(t, x)\). By (5.3.12), (5.3.14) and Taylor’s expansion, it follows that, for \(\delta \in (0,T-t),\)

$$\begin{aligned} 0&\le \hat{\mathbb {E}}\left[ \psi (t+\delta ,X_{t+\delta }^{t,x})-\psi (t,x)+\int _{t}^{t+\delta }f(X_{r}^{t,x},Y_{r}^{t,x})dr\right. \\&\ \ \ \left. +\int _{t}^{t+\delta }g_{ij}(X_{r}^{t,x},Y_{r}^{t,x})d\left\langle B^{i}, B^{j}\right\rangle _{r}\right] \\&\le \frac{1}{2}\hat{\mathbb {E}}[(B(D^{2}\psi (t,x),D\psi (t,x),\psi (t,x),x),\langle B\rangle _{t+\delta }-\langle B\rangle _{t})]\\&\ \ \ +(\partial _{t}\psi (t,x)+\langle D\psi (t,x),b(x)\rangle +f(x,\psi (t, x)))\delta +C(1+|x|^{m})\delta ^{3/2}\\&\le (\partial _{t}\psi (t, x)+F(D^{2}\psi (t,x),D\psi (t,x),\psi (t,x), x))\delta +C(1+|x|^{m})\delta ^{3/2}, \end{aligned}$$

where m is some constant depending on the function \(\psi \). Consequently, it is easy to check that

$$ \partial _{t}\psi (t, x)+F(D^{2}\psi (t,x),D\psi (t,x),\psi (t,x), x)\ge 0. $$

This implies that u is a viscosity subsolution of (5.3.13). Similarly we can show that u is also a viscosity supersolution of (5.3.13). The uniqueness is from Theorem C.2.9 (in Appendix C).    \(\square \)

Example 5.3.8

Let \(B=(B^{1}, B^{2})\) be a 2-dimensional G-Brownian motion with

$$ G(A)=G_{1}(a_{11})+G_{2}(a_{22}), $$

where

$$ G_{i}(a)=\frac{1}{2}(\overline{\sigma }_{i}^{2}a^{+}-\underline{\sigma }_{i}^{2}a^{-}),\ \ i=1,2. $$

In this case, we consider the following 1-dimensional SDE:

$$ dX_{s}^{t,x}=\mu X_{s}^{t,x}ds+\nu X_{s}^{t, x}d\left\langle B^{1}\right\rangle _{s}+\sigma X_{s}^{t, x}dB_{s}^{2},\ \ X_{t}^{t, x}=x, $$

where \(\mu \), \(\nu \) and \(\sigma \) are constants.

The corresponding function u is defined by

$$ u(t,x):=\hat{\mathbb {E}}[\varphi (X_{T}^{t, x})]. $$

Then

$$ u(t,x)=\hat{\mathbb {E}}[u(t+\delta ,X_{t+\delta }^{t, x})] $$

and u is the viscosity solution of the following PDE:

$$ \partial _{t}u+\mu x\partial _{x}u+2G_{1}(\nu x\partial _{x}u)+\sigma ^{2}x^{2}G_{2}(\partial _{xx}^{2}u)=0,\ u(T, x)=\varphi (x). $$
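As a complement, the PDE of this example can be solved numerically. The sketch below (Python; an explicit finite-difference scheme on a truncated domain, where the parameters, payoff \(\varphi \) and boundary treatment are illustrative assumptions, not part of the text) steps backward from the terminal condition; the explicit scheme requires a small time step, roughly \(\Delta t\lesssim \Delta x^{2}/(\overline{\sigma }_{2}^{2}\sigma ^{2}x_{\max }^{2})\).

```python
import numpy as np

# Hedged numerical sketch (not from the text): an explicit finite-difference
# scheme for the PDE of Example 5.3.8,
#   d_t u + mu*x*u_x + 2*G1(nu*x*u_x) + sigma^2*x^2*G2(u_xx) = 0,
#   u(T, x) = phi(x),
# with G_i(a) = 0.5*(s_up_i^2 * a^+ - s_low_i^2 * a^-).
# Grid sizes, coefficients and the payoff phi are illustrative choices.

def G(a, s_up, s_low):
    return 0.5 * (s_up**2 * np.maximum(a, 0.0)
                  - s_low**2 * np.maximum(-a, 0.0))

def solve_example(phi, mu=0.05, nu=0.1, sigma=0.3,
                  s1=(1.2, 0.8), s2=(1.2, 0.8),
                  x_max=4.0, T=1.0, nx=200, nt=10000):
    x = np.linspace(0.0, x_max, nx + 1)
    dx, dt = x[1] - x[0], T / nt
    u = phi(x)                               # terminal condition u(T, .)
    for _ in range(nt):
        ux = np.gradient(u, dx)              # central first derivative
        uxx = np.zeros_like(u)
        uxx[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
        rhs = (mu * x * ux
               + 2.0 * G(nu * x * ux, *s1)
               + sigma**2 * x**2 * G(uxx, *s2))
        u = u + dt * rhs                     # step backward in time
        u[0] = phi(0.0)                      # the PDE degenerates at x = 0
        u[-1] = 2.0 * u[-2] - u[-3]          # crude linear extrapolation
    return x, u

x, u = solve_example(phi=lambda x: np.maximum(x - 1.0, 0.0))
```

When \(\overline{\sigma }_{i}=\underline{\sigma }_{i}\), the \(G_{i}\) terms become linear and the scheme reduces to a classical Black-Scholes-type solver, which is a useful sanity check.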

4 Exercises

Exercise 5.4.1

Prove that \(\bar{M}_{G}^{p}(0,T;\mathbb {R}^{n})\subseteq M_{G}^p(0,T;\mathbb {R}^{n})\).

Exercise 5.4.2

Show that \(b(s, Y_s)\in M^p_G(0,T;\mathbb {R}^n)\) for each \(Y \in M^p_G(0,T;\mathbb {R}^n)\), where b is given by Eq. (5.1.1).

Exercise 5.4.3

Complete the proof of Lemma 5.1.2.

Exercise 5.4.4

(The Gronwall inequality) Let u(t) be a Lebesgue integrable function on [0, T] such that

$$u(t)\le C+A\int _0^t u(s)ds\quad \text {for}\ 0\le t\le T,$$

where \(C>0\) and \(A>0\) are constants. Prove that \(u(t)\le Ce^{At}\) for \(0\le t\le T\).
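Before proving the inequality, one can check it numerically. The sketch below (the constants and the nonnegative slack term are our illustrative choices) constructs a function u satisfying the integral hypothesis on a grid, so that it can be compared with \(Ce^{At}\).

```python
import numpy as np

# Numerical sanity check of the Gronwall inequality (illustrative values):
# build a function u satisfying u(t) <= C + A * int_0^t u(s) ds on a grid,
# for comparison against the bound C * exp(A * t).

C, A, T, n = 2.0, 1.5, 1.0, 10000
t = np.linspace(0.0, T, n + 1)
dt = T / n

# Construct u recursively so that u(t_k) equals C + A * (left Riemann sum
# of u up to t_k) minus a nonnegative slack, which enforces the hypothesis.
slack = 0.5 * (1.0 - np.cos(8.0 * t))        # arbitrary nonnegative slack
u = np.empty(n + 1)
integral = 0.0
for k in range(n + 1):
    u[k] = C + A * integral - slack[k]
    integral += u[k] * dt
```

On the grid, u stays below \(Ce^{At}\), as the exercise asserts.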

Exercise 5.4.5

For any \(\xi \in L_{G}^{1}(\Omega _{T};\mathbb {R}^{n})\), show that the process \((\hat{\mathbb {E}}[\xi |\Omega _{t}])_{t\in [0,T]}\) belongs to \(\bar{M}_{G}^{1}(0,T;\mathbb {R}^{n})\).

Exercise 5.4.6

Complete the proof of Lemma 5.2.1.

Exercise 5.4.7

Suppose that \(\xi \), f and \(h_{ij}\) are all deterministic functions. Solve the BSDE (5.2.1).

Exercise 5.4.8

For each \(\xi \in L_{G}^{p}(\Omega _{t};\mathbb {R}^{n})\) with \(p\ge 2\), show that SDE (5.3.1) has a unique solution in \(\bar{M}_{G}^{p}(t, T;\mathbb {R}^{n})\). Further, show that the following estimates hold:

$$ \mathbb {\hat{E}}_{t}[|X_{t+\delta }^{t,\xi }-X_{t+\delta }^{t,\xi ^{\prime }}|^{p}]\le C|\xi -\xi ^{\prime }|^{p}, $$
$$ \mathbb {\hat{E}}_{t}[|X_{t+\delta }^{t,\xi }|^{p}]\le C(1+|\xi |^{p}), $$
$$ \mathbb {\hat{E}}_{t}[\sup _{s\in [t, t+\delta ]}|X_{s}^{t,\xi }-\xi |^{p}]\le C(1+|\xi |^{p})\delta ^{p/2}, $$

where the constant C depends on K, G, p, n and T.

Exercise 5.4.9

Let \(\widetilde{\mathbb {E}}\) be a nonlinear expectation dominated by the G-expectation, where \(\widetilde{G}:\mathbb {S}(d) \rightarrow \mathbb {R}\) is dominated by G and \(\widetilde{G}(0)=0\). We now replace the G-expectation \(\hat{\mathbb {E}}\) by \(\widetilde{\mathbb {E}}\) in the BSDEs (5.2.1) and (5.3.2). Show that:

(i) the BSDE (5.2.1) admits a unique solution \({Y}\in \bar{M}_G^1(0,T)\);

(ii) u is the unique viscosity solution of the PDE (5.3.13) corresponding to \(\widetilde{G}\).