
1 Introduction

Let \(X = (X_{t})_{0\leq t\leq T}\) be a solution of the one-dimensional stochastic differential equation (SDE)

$$\displaystyle{ X_{t} = x_{0} +\int _{ 0}^{t}b(X_{ s})ds +\int _{ 0}^{t}\sigma (X_{ s})dW_{s},\:x_{0} \in \mathbb{R},\:t \in [0,T], }$$
(1)

where \(W:= (W_{t})_{0\leq t\leq T}\) is a standard one-dimensional Brownian motion on a probability space \((\varOmega,\mathcal{F}, \mathbb{P})\) with a filtration \((\mathcal{F}_{t})_{0\leq t\leq T}\) satisfying the usual conditions. The drift coefficient \(b\) and the diffusion coefficient \(\sigma\) are Borel-measurable functions from \(\mathbb{R}\) into \(\mathbb{R}\). The diffusion process \(X\) is used in many fields of application, for example, mathematical finance, optimal control and filtering.

Let \(X^{(n)}\) be a solution of the SDE (1) with drift coefficient \(b_{n}\) and diffusion coefficient \(\sigma_{n}\). We consider the stability problem for \((X, X^{(n)})\) when the pair of coefficients \((b_{n},\sigma_{n})\) converges to \((b,\sigma)\). Stroock and Varadhan introduced the stability problem in the weak sense in order to study the martingale problem with continuous and locally bounded coefficients (see Chap. 11 of [17]). In [11], Kawabata and Yamada consider strong convergence for the stability problem under the condition that the drift coefficients \(b\) and \(b_{n}\) are Lipschitz continuous, the diffusion coefficients \(\sigma\) and \(\sigma_{n}\) are Hölder continuous and \((b_{n},\sigma_{n})\) converges to \((b,\sigma)\) locally uniformly (see [11, Example 1]). Kaneko and Nakao [10] prove that if the coefficients \(b_{n}\) and \(\sigma_{n}\) are uniformly bounded, \(\sigma_{n}\) is uniformly elliptic and \((b_{n},\sigma_{n})\) tends to \((b,\sigma)\) in the \(L^{1}\)-sense, then \((X^{(n)})_{n\in \mathbb{N}}\) converges to \(X\) in the \(L^{2}\)-sense. Moreover, they also prove that the solution of the SDE (1) can be constructed as the limit of the Euler-Maruyama approximation under the condition that the coefficients \(b\) and \(\sigma\) are continuous and of linear growth (see [10, Theorem D]). Recently, under the Nakao-Le Gall condition, Hashimoto and Tsuchiya [8] prove that \((X^{(n)})_{n\in \mathbb{N}}\) converges to \(X\) in the \(L^{p}\)-sense for any \(p \geq 1\), and give the rate of convergence under the condition that \(b_{n} \to b\) in the \(L^{1}\)-sense and \(\sigma_{n} \to \sigma\) in the \(L^{2}\)-sense. Their proof is based on the Yamada-Watanabe approximation technique, which was introduced in [19], and on some estimates for the local time.

In a related line of research, the convergence of the Euler-Maruyama approximation with non-Lipschitz coefficients has been studied recently. Yan [18] has proven that if the sets of discontinuity points of \(b\) and \(\sigma\) are countable, then the Euler-Maruyama approximation converges weakly to the unique weak solution of the corresponding SDE. Kohatsu-Higa et al. [12] have studied the weak approximation error for the one-dimensional SDE with drift \(\mathbf{1}_{(-\infty,0]}(x) -\mathbf{1}_{(0,+\infty)}(x)\) and constant diffusion coefficient. Gyöngy and Rásonyi [7] give the order of the strong rate of convergence for a class of one-dimensional SDEs whose drift is the sum of a Lipschitz continuous function and a monotone decreasing Hölder continuous function, and whose diffusion coefficient is Hölder continuous. The Yamada-Watanabe approximation technique is the key idea used to obtain their results. In [15], Ngo and Taguchi extend the results of [7] to SDEs with discontinuous drift. They prove that if the drift coefficient \(b\) is bounded and one-sided Lipschitz, and the diffusion coefficient is bounded, uniformly elliptic and \(\eta\)-Hölder continuous, then there exists a positive constant \(C\) such that

$$\displaystyle{ \sup _{0\leq t\leq T}\mathbb{E}[\vert X_{t}-\overline{X}_{t}^{(n)}\vert ] \leq \left \{\begin{array}{ll} \frac{C} {n^{\eta -1/2}},&\mathit{if }\eta \in (1/2,1], \\ \frac{C} {\log n}, &\mathit{if }\eta = 1/2, \end{array} \right. }$$

where \(\overline{X}^{(n)}\) is the Euler-Maruyama approximation for the SDE (1). This fact implies that the strong rate of convergence for the stability problem may also depend on the Hölder exponent of the diffusion coefficient.
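For intuition, the Euler-Maruyama scheme \(\overline{X}^{(n)}\) is simple to simulate. The following minimal Python sketch (ours, not taken from the cited papers) uses the discontinuous, one-sided Lipschitz drift \(b(x) = -\mathrm{sign}(x)\) and a bounded, uniformly elliptic, \(1/2\)-Hölder continuous diffusion coefficient; both coefficient choices are purely illustrative.

```python
import math
import random

def euler_maruyama(b, sigma, x0, T, n, rng):
    """One path of the Euler-Maruyama scheme for dX = b(X)dt + sigma(X)dW."""
    dt = T / n
    x = x0
    for _ in range(n):
        dw = rng.gauss(0.0, math.sqrt(dt))   # Brownian increment ~ N(0, dt)
        x = x + b(x) * dt + sigma(x) * dw
    return x

# Illustrative coefficients in the spirit of A-(i)-(iv): a bounded, monotone
# decreasing (hence one-sided Lipschitz) drift and a bounded, elliptic,
# 1/2-Hölder continuous diffusion coefficient.
b = lambda x: -1.0 if x > 0 else 1.0
sigma = lambda x: 1.0 + 0.5 * min(math.sqrt(abs(x)), 1.0)

rng = random.Random(0)
paths = [euler_maruyama(b, sigma, 0.0, 1.0, 500, rng) for _ in range(200)]
mean_XT = sum(paths) / len(paths)   # Monte Carlo estimate of E[X_T]
```

Refining the time grid and comparing against a fine-grid reference path would probe the rates displayed above, although many Monte Carlo samples are needed before they become visible.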

The goal of this paper is to estimate the difference between two SDEs using the norm of the difference of coefficients. More precisely, let us consider another SDE given by

$$\displaystyle{ \hat{X}_{t} = x_{0} +\int _{ 0}^{t}\hat{b}(\hat{X}_{ s})ds +\int _{ 0}^{t}\hat{\sigma }(\hat{X}_{ s})dW_{s}. }$$
(2)

We will prove the following inequality:

$$\displaystyle{ \sup _{0\leq t\leq T}\mathbb{E}[\vert X_{t}-\hat{X}_{t}\vert ] \leq \left \{\begin{array}{ll} C(\vert \vert b -\hat{ b}\vert \vert _{1} \vee \vert \vert \sigma -\hat{\sigma }\vert \vert _{2}^{2})^{(2\eta -1)/(2\eta )},&\mathit{if }\eta \in (1/2,1], \\ \frac{C} {\log (1/(\vert \vert b -\hat{ b}\vert \vert _{1} \vee \vert \vert \sigma -\hat{\sigma }\vert \vert _{2}^{2}))}, &\mathit{if }\eta = 1/2, \end{array} \right. }$$
(3)

where \(\eta\) is the Hölder exponent of the diffusion coefficients, \(C\) is a positive constant and \(\vert \vert \cdot \vert \vert _{p}\) is the weighted \(L^{p}\)-norm defined by (4). We will also estimate \(\mathbb{E}[\sup _{0\leq t\leq T}\vert X_{t} -\hat{ X}_{t}\vert ^{p}]\) for any \(p \geq 1\). It is worth noting that in the papers [10] and [11], the authors prove only the strong convergence for the stability problem. On the other hand, applying our main results, we are able to establish the strong rate of convergence for the stability problem (see Sect. 4). In order to obtain (3), we use the Yamada-Watanabe approximation technique and a Gaussian upper bound for the density of the solution of SDE (2) (see [2, 16] and [14]).
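To probe the left-hand side of (3) numerically, one can drive both SDEs by the same Brownian path (synchronous coupling) and average \(\vert X_{T} -\hat{X}_{T}\vert\) over Monte Carlo samples. The sketch below is ours: the linear drift is Lipschitz but unbounded, so it violates A-(ii), and the nearby coefficient pairs are hypothetical choices made only to keep the example short.

```python
import math
import random

def coupled_euler(b, b_hat, sig, sig_hat, x0, T, n, rng):
    """Simulate X and X-hat with the SAME Brownian increments; return |X_T - X_hat_T|."""
    dt = T / n
    x = x_hat = x0
    for _ in range(n):
        dw = rng.gauss(0.0, math.sqrt(dt))
        x, x_hat = (x + b(x) * dt + sig(x) * dw,
                    x_hat + b_hat(x_hat) * dt + sig_hat(x_hat) * dw)
    return abs(x - x_hat)

# Hypothetical nearby coefficient pairs (illustrative only).
b       = lambda x: -x
b_hat   = lambda x: -x + 0.1    # small perturbation of the drift
sig     = lambda x: 1.0
sig_hat = lambda x: 1.05        # small perturbation of the diffusion

rng = random.Random(42)
diffs = [coupled_euler(b, b_hat, sig, sig_hat, 0.0, 1.0, 200, rng)
         for _ in range(500)]
est = sum(diffs) / len(diffs)   # Monte Carlo estimate of E|X_T - X_hat_T|
```

Shrinking the perturbations shrinks the estimate, in line with the dependence on \(\vert\vert b -\hat{b}\vert\vert_{1}\) and \(\vert\vert \sigma -\hat{\sigma}\vert\vert_{2}\) in (3).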

Finally, we note that SDEs with discontinuous drift coefficient have many applications in mathematical finance [1] and [9], optimal control problems [4] and other domains (see also [5] and [13]).

This paper is organized as follows: Sect. 2 introduces our framework and main results. All the proofs are shown in Sect. 3. In Sect. 4, we apply the main results to the stability problem.

2 Main Results

2.1 Notations and Assumptions

We will assume that the drift coefficient b belongs to the class of one-sided Lipschitz functions which is defined as follows.

Definition 1

A function \(f: \mathbb{R} \rightarrow \mathbb{R}\) is called a one-sided Lipschitz function if there exists a positive constant L such that for any \(x,y \in \mathbb{R}\),

$$\displaystyle{ (x - y)(\,f(x) - f(y)) \leq L\vert x - y\vert ^{2}. }$$

Let \(\mathcal{L}\) be the class of all one-sided Lipschitz functions.

Remark 1

By the definition of the class \(\mathcal{L}\), if \(f,g \in \mathcal{L}\) and \(\alpha \geq 0\), then \(f + g \in \mathcal{L}\) and \(\alpha f \in \mathcal{L}\). The one-sided Lipschitz property is closely related to the monotonicity condition: indeed, any monotone decreasing function is one-sided Lipschitz (with \(L = 0\)). Moreover, any Lipschitz continuous function is also one-sided Lipschitz.
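These properties are easy to probe numerically. A small sketch (ours, illustrative only) samples random pairs and checks the defining inequality for a monotone decreasing step function, with \(L = 0\), and for a Lipschitz function with constant 2:

```python
import random

def one_sided_lipschitz_violations(f, L, pairs):
    """Count pairs (x, y) violating (x - y)(f(x) - f(y)) <= L * |x - y|**2."""
    return sum(1 for x, y in pairs
               if (x - y) * (f(x) - f(y)) > L * (x - y) ** 2 + 1e-12)

rng = random.Random(1)
pairs = [(rng.uniform(-5, 5), rng.uniform(-5, 5)) for _ in range(10_000)]

decreasing = lambda x: -1.0 if x > 0 else 1.0   # monotone decreasing step function
lipschitz = lambda x: 2.0 * x + 1.0             # Lipschitz with constant 2

assert one_sided_lipschitz_violations(decreasing, 0.0, pairs) == 0
assert one_sided_lipschitz_violations(lipschitz, 2.0, pairs) == 0
```

The first check mirrors the decreasing case (any \(L \geq 0\) works); the second uses that \((x-y)(f(x)-f(y)) = 2\vert x - y\vert^{2}\) for \(f(x) = 2x + 1\).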

Now we give assumptions for the coefficients \(b,\hat{b},\sigma\) and \(\hat{\sigma }\).

Assumption 1

We assume that the coefficients \(b,\hat{b},\sigma\) and \(\hat{\sigma }\) satisfy the following conditions:

  • A-(i): \(b \in \mathcal{L}\).

  • A-(ii): b and \(\hat{b}\) are measurable and there exists K > 0 such that

    $$\displaystyle{ \sup _{x\in \mathbb{R}}\left (\vert b(x)\vert \vee \vert \hat{b}(x)\vert \right ) \leq K. }$$
  • A-(iii): σ and \(\hat{\sigma }\) are \(\eta:= (1/2+\alpha)\)-Hölder continuous for some \(\alpha \in [0,1/2]\), i.e., there exists \(K > 0\) such that

    $$\displaystyle{ \sup _{x,y\in \mathbb{R},x\neq y}\left (\frac{\vert \sigma (x) -\sigma (y)\vert } {\vert x - y\vert ^{\eta }} \vee \frac{\vert \hat{\sigma }(x) -\hat{\sigma } (y)\vert } {\vert x - y\vert ^{\eta }} \right ) \leq K. }$$
  • A-(iv): \(a =\sigma ^{2}\) and \(\hat{a} =\hat{\sigma }^{2}\) are bounded and uniformly elliptic, i.e., there exists \(\lambda \geq 1\) such that for any \(x \in \mathbb{R}\),

    $$\displaystyle{ \lambda ^{-1} \leq a(x) \leq \lambda \text{ and }\lambda ^{-1} \leq \hat{ a}(x) \leq \lambda. }$$

Remark 2

Assume that A-(ii), A-(iii) and A-(iv) hold. Then the SDEs (1) and (2) have unique strong solutions (see [20]). Note that the one-sided Lipschitz property is used only in (11) for \(b\), so we do not need to assume \(\hat{b} \in \mathcal{L}\).

2.2 Gaussian Upper Bound for the Density of SDE

A Gaussian upper bound for the density of \(X_{t}\) is well known under suitable conditions on the coefficients. If the coefficients \(b\) and \(\sigma\) are Hölder continuous and \(\sigma\) is bounded and uniformly elliptic, then a Gaussian-type estimate holds for the fundamental solution of parabolic partial differential equations (see [6, Theorem 11, Chap. 1]). Under A-(ii), (iii) and (iv), the density function \(p_{t}(x_{0},\cdot )\) of \(X_{t}\) exists for any \(t \in (0,T]\) and there exist positive constants \(\overline{C}\) and \(c_{{\ast}}\) such that for any \(y \in \mathbb{R}\) and \(t \in (0,T]\),

$$\displaystyle{ p_{t}(x_{0},y) \leq \overline{C}p_{c_{{\ast}}}(t,x_{0},y), }$$

where \(p_{c}(t,x,y):= \frac{e^{-\frac{(y-x)^{2}} {2ct} }} {\sqrt{2\pi ct}}\) (see [14, Remark 4.1]).

Using the Gaussian upper bound for the density of \(X_{t}\), we can prove the following estimate.

Lemma 1

Let p ≥ 1. Assume that A-(ii), A-(iii) and A-(iv) hold. Then we have

$$\displaystyle{ \int _{0}^{T}\mathbb{E}[\vert b(\hat{X}_{ s}) -\hat{ b}(\hat{X}_{s})\vert ^{p}]ds \leq C_{ T}\vert \vert b -\hat{ b}\vert \vert _{p}^{p} }$$

and

$$\displaystyle{ \int _{0}^{T}\mathbb{E}[\vert \sigma (\hat{X}_{ s}) -\hat{\sigma } (\hat{X}_{s})\vert ^{2p}]ds \leq C_{ T}\vert \vert \sigma -\hat{\sigma }\vert \vert _{2p}^{2p}, }$$

where \(C_{T}:= \overline{C}\sqrt{\frac{2T} {\pi c_{{\ast}}}}\) and for any bounded measurable function f, ||⋅|| p is defined by

$$\displaystyle{ \vert \vert \,f\vert \vert _{p}:= \left (\int _{\mathbb{R}}\vert \,f(x)\vert ^{p}e^{-\frac{\vert x-x_{0}\vert ^{2}} {2c_{{\ast}}T} }dx\right )^{1/p}. }$$
(4)
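For concreteness, the weighted norm (4) can be approximated by elementary quadrature. The sketch below (the midpoint rule is our own implementation choice, not from the text) checks it on \(f \equiv 1\), \(p = 1\), \(x_{0} = 0\) and \(c_{{\ast}} = T = 1\), where the integral reduces to the Gaussian integral \(\sqrt{2\pi}\).

```python
import math

def weighted_norm(f, p, x0, c_star, T, lo=-15.0, hi=15.0, m=100_000):
    """Midpoint-rule approximation of the weighted L^p-norm (4):
    ||f||_p = ( int |f(x)|^p * exp(-|x - x0|^2 / (2 c* T)) dx )^(1/p)."""
    h = (hi - lo) / m
    total = 0.0
    for j in range(m):
        x = lo + (j + 0.5) * h
        total += abs(f(x)) ** p * math.exp(-((x - x0) ** 2) / (2.0 * c_star * T))
    return (total * h) ** (1.0 / p)

# Sanity check: for f == 1 the weight integrates to sqrt(2*pi*c*T) = sqrt(2*pi).
val = weighted_norm(lambda x: 1.0, 1, x0=0.0, c_star=1.0, T=1.0)
assert abs(val - math.sqrt(2.0 * math.pi)) < 1e-5
```

The truncation to \([-15, 15]\) is harmless here because the Gaussian weight is negligibly small outside that interval.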

Proof

We only prove the first estimate; the second one can be obtained by a similar argument. From the Gaussian upper bound for the density of \(\hat{X}_{t}\), for any \(x \in \mathbb{R}\) and \(s \in (0,T]\), we have

$$\displaystyle{ \hat{p}_{s}(x_{0},x) \leq \overline{C}p_{c_{{\ast}}}(s,x_{0},x) \leq \frac{\overline{C}} {\sqrt{2\pi c_{{\ast} } s}}e^{-\frac{\vert x-x_{0}\vert ^{2}} {2c_{{\ast}}T} }, }$$

where \(\hat{p}_{s}(x_{0},\cdot )\) is a density function of \(\hat{X}_{s}\). Hence we obtain

$$\displaystyle\begin{array}{rcl} \int _{0}^{T}\mathbb{E}[\vert b(\hat{X}_{ s}) -\hat{ b}(\hat{X}_{s})\vert ^{p}]ds& =& \int _{ 0}^{T}ds\int _{ \mathbb{R}}dx\vert b(x) -\hat{ b}(x)\vert ^{p}\hat{p}_{ s}(x_{0},x) \\ & \leq & \int _{0}^{T}ds \frac{\overline{C}} {\sqrt{2\pi c_{{\ast} } s}}\int _{\mathbb{R}}dx\vert b(x) -\hat{ b}(x)\vert ^{p}e^{-\frac{\vert x-x_{0}\vert ^{2}} {2c_{{\ast}}T} } \\ & =& C_{T}\vert \vert b -\hat{ b}\vert \vert _{p}^{p}. {}\end{array}$$
(5)

This concludes the proof.

Remark 3

Our proof of Lemma 1 relies on the fact that we are in the one-dimensional setting. In the multi-dimensional case, the integrand in (5) is in general not integrable with respect to \(s\). This is the main reason for restricting our discussion to one-dimensional SDEs.

2.3 Rate of Convergence

For any p ≥ 1, we define

$$\displaystyle{ \varepsilon _{p}:= \vert \vert b -\hat{ b}\vert \vert _{p}^{p} \vee \vert \vert \sigma -\hat{\sigma }\vert \vert _{ 2p}^{2p}. }$$

Then we have the following estimate for the difference between two SDEs.

Theorem 1

Suppose that Assumption 1 holds. We assume that \(\varepsilon _{1} < 1\) if \(\alpha \in (0,1/2]\) and \(1/\log (1/\varepsilon _{1}) < 1\) if \(\alpha = 0\). Then there exists a positive constant \(C\) which depends on \(\overline{C},c_{{\ast}},K,L,T,\alpha,\lambda\) and \(x_{0}\) such that

$$\displaystyle{ \sup _{\tau \in \mathcal{T}}\mathbb{E}[\vert X_{\tau }-\hat{X}_{\tau }\vert ] \leq \left \{\begin{array}{ll} C\varepsilon _{1}^{2\alpha /(2\alpha +1)} & \mathit{if }\alpha \in (0,1/2], \\ \frac{C} {\log (1/\varepsilon _{1})} &\mathit{if }\alpha = 0, \end{array} \right. }$$

where \(\mathcal{T}\) is the set of all stopping times \(\tau \leq T\).

Theorem 2

Suppose that Assumption 1 holds. We assume that \(\varepsilon _{1} < 1\) if \(\alpha \in (0,1/2]\) and \(1/\log (1/\varepsilon _{1}) < 1\) if \(\alpha = 0\). Then there exists a positive constant \(C\) which depends on \(\overline{C},c_{{\ast}},K,L,T,\alpha,\lambda\) and \(x_{0}\) such that

$$\displaystyle{ \mathbb{E}[\sup _{0\leq t\leq T}\vert X_{t}-\hat{X_{t}}\vert ] \leq \left \{\begin{array}{ll} C\varepsilon _{1}^{4\alpha ^{2}/(2\alpha +1) } & \mathit{if }\alpha \in (0,1/2], \\ \frac{C} {\sqrt{\log (1/\varepsilon _{1 } )}} &\mathit{if }\alpha = 0. \end{array} \right. }$$

Theorem 3

Suppose that Assumption 1 holds and \(p \geq 2\). We assume that \(\varepsilon _{p} < 1\) if \(\alpha \in (0,1/2]\) and \(1/\log (1/\varepsilon _{p}) < 1\) if \(\alpha = 0\). Then there exists a positive constant \(C\) which depends on \(\overline{C},c_{{\ast}},K,L,T,p,\alpha,\lambda\) and \(x_{0}\) such that

$$\displaystyle{ \mathbb{E}[\sup _{0\leq t\leq T}\vert X_{t}-\hat{X_{t}}\vert ^{p}] \leq \left \{\begin{array}{ll} C\varepsilon _{p}^{1/2} & \mathit{if }\alpha = 1/2, \\ C\varepsilon _{1}^{2\alpha /(2\alpha +1)} & \mathit{if }\alpha \in (0,1/2), \\ \frac{C} {\log (1/\varepsilon _{1})} &\mathit{if }\alpha = 0. \end{array} \right. }$$

Using Jensen’s inequality, we can extend Theorem 3 as follows.

Corollary 1

Suppose that Assumption 1 holds and \(p \in (1,2)\). We assume that \(\varepsilon _{2p} < 1\) if \(\alpha \in (0,1/2]\) and \(1/\log (1/\varepsilon _{2p}) < 1\) if \(\alpha = 0\). Then there exists a positive constant \(C\) which depends on \(\overline{C},c_{{\ast}},K,L,T,p,\alpha,\lambda\) and \(x_{0}\) such that

$$\displaystyle{ \mathbb{E}[\sup _{0\leq t\leq T}\vert X_{t}-\hat{X_{t}}\vert ^{p}] \leq \left \{\begin{array}{ll} C\varepsilon _{2p}^{1/2} & \mathit{if }\alpha = 1/2, \\ C\varepsilon _{1}^{\alpha /(2\alpha +1)} & \mathit{if }\alpha \in (0,1/2), \\ \frac{C} {\sqrt{\log (1/\varepsilon _{1 } )}} &\mathit{if }\alpha = 0. \end{array} \right. }$$

Next, we will find a bound for \(\mathbb{E}[\vert g(X_{T}) - g(\hat{X}_{T})\vert ^{r}]\), where \(g\) is a function of bounded variation and \(r \geq 1\).

Definition 2

For a function \(f: \mathbb{R} \rightarrow \mathbb{R}\), we define

$$\displaystyle{ T_{f}(x):=\sup \sum _{ j=1}^{N}\vert \,f(x_{ j}) - f(x_{j-1})\vert. }$$

Here the supremum is taken over all positive integers \(N\) and all partitions \(-\infty < x_{0} < x_{1} < \cdots < x_{N} = x\). We call \(f\) a function of bounded variation if

$$\displaystyle{ V (\,f):=\lim _{x\rightarrow \infty }T_{f}(x) <\infty. }$$

Denote by BV the class of all functions of bounded variation.
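As a quick illustration (ours, not from the text), the variation accumulated over any fixed finite partition gives a lower bound for \(V(f)\), and for step functions it is exact once the partition separates the jumps:

```python
def total_variation(f, grid):
    """Variation of f over a fixed partition: sum of |f(x_j) - f(x_{j-1})|."""
    values = [f(x) for x in grid]
    return sum(abs(b - a) for a, b in zip(values, values[1:]))

# The indicator-type step 1_{(0, inf)} has V = 1, while sign has V = 2.
grid = [i / 1000.0 for i in range(-5000, 5001)]   # partition of [-5, 5]
step = lambda x: 1.0 if x > 0 else 0.0
sign = lambda x: 1.0 if x > 0 else (-1.0 if x < 0 else 0.0)

assert total_variation(step, grid) == 1.0
assert total_variation(sign, grid) == 2.0
```

Both functions above belong to \(BV\), so Corollary 2 below applies to them even though they are discontinuous.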

Corollary 2

Suppose that Assumption 1 holds. Furthermore assume that \(\varepsilon _{1} < 1\) if \(\alpha \in (0,1/2]\) and \(1/\log (1/\varepsilon _{1}) < 1\) if \(\alpha = 0\). Then there exists a positive constant \(C\) which depends on \(\overline{C},c_{{\ast}},K,L,T,\alpha,\lambda\) and \(x_{0}\) such that for any \(g \in BV\) and \(r \geq 1\),

$$\displaystyle{\mathbb{E}[\vert g(X_{T})-g(\hat{X}_{T})\vert ^{r}] \leq \left \{\begin{array}{ll} 3^{r+1}V (g)^{r}C\varepsilon _{ 1}^{\alpha /(2\alpha +1)} & \mathit{if }\alpha \in (0,1/2], \\ \frac{3^{r+1}V (g)^{r}C} {\sqrt{\log (1/\varepsilon _{1 } )}} &\mathit{if }\alpha = 0. \end{array} \right.}$$

Remark 4

In the proofs of all results, we calculate the constant \(C\) explicitly. In Theorems 1–3 and Corollary 1, the constant \(C\) does not blow up as \(T \to 0\). On the other hand, in Corollary 2, the constant \(C\) may tend to infinity as \(T \to 0\) because we use a Gaussian upper bound for the density of \(X_{T}\) in (17).

3 Proofs

3.1 Yamada-Watanabe Approximation Technique

In this section, we introduce the approximation method of Yamada and Watanabe (see [19] and [7]), which is the key technique in our proofs. We define an approximation of the function \(\phi (x) = \vert x\vert\). For each \(\delta \in (1,\infty )\) and \(\kappa \in (0,1)\), there exists a continuous function \(\psi _{\delta,\kappa }: \mathbb{R} \rightarrow \mathbb{R}^{+}\) with \(\mathrm{supp}\,\psi _{\delta,\kappa } \subset [\kappa /\delta,\kappa ]\) such that

$$\displaystyle{\int _{\kappa /\delta }^{\kappa }\psi _{ \delta,\kappa }(z)dz = 1\quad \text{and}\quad 0 \leq \psi _{\delta,\kappa }(z) \leq \frac{2} {z\log \delta },\:\:\:z> 0.}$$

For example, we can take

$$\displaystyle{\psi _{\delta,\kappa }(z):=\mu _{\delta,\kappa }\exp \left [- \frac{1} {(\kappa -z)(z -\kappa /\delta )}\right ]\mathbf{1}_{(\kappa /\delta,\kappa )}(z),}$$

where \(\mu _{\delta,\kappa }^{-1}:=\int _{ \kappa /\delta }^{\kappa }\exp (- \frac{1} {(\kappa -z)(z-\kappa /\delta )})dz\). We define a function \(\phi _{\delta,\kappa } \in C^{2}(\mathbb{R}; \mathbb{R})\) by

$$\displaystyle{\phi _{\delta,\kappa }(x):=\int _{ 0}^{\vert x\vert }\int _{ 0}^{y}\psi _{ \delta,\kappa }(z)dzdy.}$$

It is easy to verify that \(\phi _{\delta,\kappa }\) has the following useful properties:

$$\displaystyle{ \frac{\phi '_{\delta,\kappa }(x)} {x}> 0,\mbox{ for any $x \in \mathbb{R}\setminus \{0\}$}. }$$
(6)
$$\displaystyle{ 0 \leq \vert \phi '_{\delta,\kappa }(x)\vert \leq 1,\mbox{ for any $x \in \mathbb{R}$}. }$$
(7)
$$\displaystyle{ \vert x\vert \leq \kappa +\phi _{\delta,\kappa }(x),\mbox{ for any $x \in \mathbb{R}$}. }$$
(8)
$$\displaystyle{ \phi ''_{\delta,\kappa }(\pm \vert x\vert ) =\psi _{\delta,\kappa }(\vert x\vert ) \leq \frac{2} {\vert x\vert \log \delta }\mathbf{1}_{[\kappa /\delta,\kappa ]}(\vert x\vert ),\mbox{ for any $x \in \mathbb{R}\setminus \{0\}$}. }$$
(9)

Property (8) shows that the function \(\phi _{\delta,\kappa }\) approximates \(\phi\) up to an error of at most \(\kappa\).
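The construction above can be reproduced numerically. The sketch below builds \(\psi _{\delta,\kappa }\) from the explicit bump given in the text, normalises it by quadrature (the midpoint rule is our own implementation choice), and spot-checks the normalisation together with property (8):

```python
import math

def make_psi(delta, kappa, m=20_000):
    """psi_{delta,kappa}: the normalised bump supported in (kappa/delta, kappa)."""
    lo, hi = kappa / delta, kappa
    def bump(z):
        if not (lo < z < hi):
            return 0.0
        return math.exp(-1.0 / ((hi - z) * (z - lo)))
    h = (hi - lo) / m
    mass = sum(bump(lo + (j + 0.5) * h) for j in range(m)) * h   # mu^{-1}
    return lambda z: bump(z) / mass

def phi(x, psi, m=2_000):
    """phi_{delta,kappa}(x) = int_0^{|x|} int_0^y psi(z) dz dy (nested midpoint rules)."""
    h = abs(x) / m
    total, inner = 0.0, 0.0
    for j in range(m):
        inner += psi((j + 0.5) * h) * h   # running approximation of int_0^y psi
        total += inner * h
    return total

delta, kappa = 4.0, 0.5
psi = make_psi(delta, kappa)

# psi integrates to 1 over (0, kappa) ...
h = kappa / 10_000
assert abs(sum(psi((j + 0.5) * h) for j in range(10_000)) * h - 1.0) < 1e-3
# ... and property (8) holds: |x| <= kappa + phi_{delta,kappa}(x).
for x in (-1.0, -0.3, 0.1, 0.7):
    assert abs(x) <= kappa + phi(x, psi) + 1e-3
```

The check of (8) reflects that \(\phi '_{\delta,\kappa }(y) = 1\) for \(y \geq \kappa\), so \(\phi _{\delta,\kappa }(x) \geq \vert x\vert -\kappa\).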

3.2 Proof of Theorem 1

To simplify the discussion, we set

$$\displaystyle{Y _{t}:= X_{t} -\hat{ X_{t}},\:t \in [0,T].}$$

Proof (Proof of Theorem 1)

Let \(\delta \in (1,\infty )\) and \(\kappa \in (0,1)\). From Itô's formula, (7) and (8), we have

$$\displaystyle\begin{array}{rcl} \vert Y _{t}\vert & \leq & \kappa +\phi _{\delta,\kappa }(Y _{t}) \\ & =& \kappa +\int _{0}^{t}\phi '_{ \delta,\kappa }(Y _{s})(b(X_{s}) -\hat{ b}(\hat{X}_{s}))ds \\ & & +\frac{1} {2}\int _{0}^{t}\phi ''_{ \delta,\kappa }(Y _{s})\vert \sigma (X_{s}) -\hat{\sigma } (\hat{X}_{s})\vert ^{2}ds + M_{ t}^{\delta,\kappa } \\ & =& \kappa +\int _{0}^{t}\phi '_{ \delta,\kappa }(Y _{s})(b(X_{s}) - b(\hat{X}_{s}))ds +\int _{ 0}^{t}\phi '_{ \delta,\kappa }(Y _{s})(b(\hat{X}_{s}) -\hat{ b}(\hat{X}_{s}))ds \\ & & +\frac{1} {2}\int _{0}^{t}\phi ''_{ \delta,\kappa }(Y _{s})\vert \sigma (X_{s}) -\hat{\sigma } (\hat{X}_{s})\vert ^{2}ds + M_{ t}^{\delta,\kappa } \\ & \leq & \kappa +\int _{0}^{t}\phi '_{ \delta,\kappa }(Y _{s})(b(X_{s}) - b(\hat{X}_{s}))ds +\int _{ 0}^{T}\vert b(\hat{X}_{ s}) -\hat{ b}(\hat{X}_{s})\vert ds \\ & & +\frac{1} {2}\int _{0}^{t}\phi ''_{ \delta,\kappa }(Y _{s})\vert \sigma (X_{s}) -\hat{\sigma } (\hat{X}_{s})\vert ^{2}ds + M_{ t}^{\delta,\kappa }, {}\end{array}$$
(10)

where

$$\displaystyle{M_{t}^{\delta,\kappa }:=\int _{ 0}^{t}\phi '_{ \delta,\kappa }(Y _{s})(\sigma (X_{s}) -\hat{\sigma } (\hat{X}_{s}))dW_{s}.}$$

Note that since \(\sigma\), \(\hat{\sigma }\) and \(\phi '_{\delta,\kappa }\) are bounded, \((M_{t}^{\delta,\kappa })_{0\leq t\leq T}\) is a martingale, so \(\mathbb{E}[M_{t}^{\delta,\kappa }] = 0\). Since \(b \in \mathcal{L}\), for any \(x,y \in \mathbb{R}\) with \(x\neq y\), we have, from (6) and (7),

$$\displaystyle\begin{array}{rcl} \phi '_{\delta,\kappa }(x - y)(b(x) - b(y))& =& \frac{\phi '_{\delta,\kappa }(x - y)} {x - y} (x - y)(b(x) - b(y)) \\ & \leq & L\frac{\phi '_{\delta,\kappa }(x - y)} {x - y} \vert x - y\vert ^{2} \\ & \leq & L\vert x - y\vert. {}\end{array}$$
(11)

Therefore we get

$$\displaystyle{ \int _{0}^{t}\phi '_{ \delta,\kappa }(Y _{s})(b(X_{s}) - b(\hat{X}_{s}))ds \leq L\int _{0}^{t}\vert Y _{ s}\vert ds. }$$
(12)

Using Lemma 1 with p = 1, we have

$$\displaystyle{ \int _{0}^{T}\mathbb{E}[\vert b(\hat{X}_{ s}) -\hat{ b}(\hat{X}_{s})\vert ]ds \leq C_{T}\vert \vert b -\hat{ b}\vert \vert _{1}. }$$
(13)

From (9) and the elementary inequality \((x + y)^{2} \leq 2x^{2} + 2y^{2}\) for any \(x,y \geq 0\), we have

$$\displaystyle\begin{array}{rcl} & & \frac{1} {2}\int _{0}^{t}\phi ''_{ \delta,\kappa }(Y _{s})\vert \sigma (X_{s}) -\hat{\sigma } (\hat{X}_{s})\vert ^{2}ds \leq \int _{ 0}^{t}\frac{\mathbf{1}_{[\kappa /\delta,\kappa ]}(\vert Y _{s}\vert )} {\vert Y _{s}\vert \log \delta } \vert \sigma (X_{s}) -\hat{\sigma } (\hat{X}_{s})\vert ^{2}ds \\ & & \leq 2\int _{0}^{t}\frac{\mathbf{1}_{[\kappa /\delta,\kappa ]}(\vert Y _{s}\vert )} {\vert Y _{s}\vert \log \delta } \vert \sigma (X_{s}) -\sigma (\hat{X}_{s})\vert ^{2}ds + 2\int _{ 0}^{t}\frac{\mathbf{1}_{[\kappa /\delta,\kappa ]}(\vert Y _{s}\vert )} {\vert Y _{s}\vert \log \delta } \vert \sigma (\hat{X}_{s}) -\hat{\sigma } (\hat{X}_{s})\vert ^{2}ds \\ & & \leq 2\int _{0}^{t}\frac{\mathbf{1}_{[\kappa /\delta,\kappa ]}(\vert Y _{s}\vert )} {\vert Y _{s}\vert \log \delta } \vert \sigma (X_{s}) -\sigma (\hat{X}_{s})\vert ^{2}ds + \frac{2\delta } {\kappa \log \delta } \int _{0}^{T}\vert \sigma (\hat{X}_{ s}) -\hat{\sigma } (\hat{X}_{s})\vert ^{2}ds. {}\end{array}$$
(14)

Again using Lemma 1 with p = 1, we have

$$\displaystyle{ \frac{2\delta } {\kappa \log \delta } \int _{0}^{T}\mathbb{E}[\vert \sigma (\hat{X}_{ s}) -\hat{\sigma } (\hat{X}_{s})\vert ^{2}]ds \leq \frac{2C_{T}\delta } {\kappa \log \delta } \vert \vert \sigma -\hat{\sigma }\vert \vert _{2}^{2}. }$$
(15)

Since \(\sigma\) is \((1/2+\alpha )\)-Hölder continuous, we have

$$\displaystyle\begin{array}{rcl} 2\int _{0}^{T}\frac{\mathbf{1}_{[\kappa /\delta,\kappa ]}(\vert Y _{s}\vert )} {\vert Y _{s}\vert \log \delta } \vert \sigma (X_{s}) -\sigma (\hat{X}_{s})\vert ^{2}ds& \leq & 2K^{2}\int _{ 0}^{T}\frac{\mathbf{1}_{[\kappa /\delta,\kappa ]}(\vert Y _{s}\vert )} {\vert Y _{s}\vert \log \delta } \vert Y _{s}\vert ^{1+2\alpha }ds \\ & \leq & \frac{2TK^{2}\kappa ^{2\alpha }} {\log \delta }. {}\end{array}$$
(16)

Let \(\tau\) be a stopping time with \(\tau \leq T\) and set \(Z_{t}:= \vert Y _{t\wedge \tau }\vert\). From (10), (12), (13), (15) and (16), we obtain

$$\displaystyle\begin{array}{rcl} \mathbb{E}[Z_{t}]& \leq & \kappa +L\int _{0}^{t}\mathbb{E}[Z_{ s}]ds + C_{T}\vert \vert b -\hat{ b}\vert \vert _{1} + \frac{2C_{T}\delta } {\kappa \log \delta } \vert \vert \sigma -\hat{\sigma }\vert \vert _{2}^{2} + \frac{2TK^{2}\kappa ^{2\alpha }} {\log \delta } {}\\ & \leq & \kappa +L\int _{0}^{t}\mathbb{E}[Z_{ s}]ds + C_{T}\varepsilon _{1} + \frac{2C_{T}\delta } {\kappa \log \delta } \varepsilon _{1} + \frac{2TK^{2}\kappa ^{2\alpha }} {\log \delta }. {}\\ \end{array}$$

If \(\alpha \in (0,1/2]\), then since \(\varepsilon _{1} < 1\), by choosing \(\delta = 2\) and \(\kappa =\varepsilon _{1}^{1/(2\alpha +1)}\), we have

$$\displaystyle\begin{array}{rcl} \mathbb{E}[Z_{t}]& \leq & L\int _{0}^{t}\mathbb{E}[Z_{ s}]ds +\varepsilon _{ 1}^{1/(2\alpha +1)} + C_{ T}\varepsilon _{1} + \frac{4C_{T}\varepsilon _{1}^{1-1/(2\alpha +1)}} {\log 2} + \frac{2TK^{2}\varepsilon _{1}^{2\alpha /(2\alpha +1)}} {\log 2} {}\\ & \leq & L\int _{0}^{t}\mathbb{E}[Z_{ s}]ds + C_{1}(\alpha,T)\varepsilon _{1}^{2\alpha /(2\alpha +1)}, {}\\ \end{array}$$

where

$$\displaystyle{ C_{1}(\alpha,T):= 1 + C_{T} + \frac{4C_{T}} {\log 2} + \frac{2TK^{2}} {\log 2}. }$$

By Gronwall’s inequality, we get

$$\displaystyle{ \mathbb{E}[Z_{t}] \leq C_{1}(\alpha,T)e^{LT}\varepsilon _{ 1}^{2\alpha /(2\alpha +1)}. }$$

Therefore by the dominated convergence theorem, we conclude the statement by taking t → T.

If \(\alpha = 0\), then since \(1/\log (1/\varepsilon _{1}) < 1\), by choosing \(\delta =\varepsilon _{1}^{-1/2}\) and \(\kappa = 1/\log (1/\varepsilon _{1})\), we have

$$\displaystyle\begin{array}{rcl} \mathbb{E}[Z_{t}]& \leq & L\int _{0}^{t}\mathbb{E}[Z_{ s}]ds + \frac{1} {\log (1/\varepsilon _{1})} + C_{T}\varepsilon _{1} + 4C_{T}\varepsilon _{1}^{1/2} + \frac{4TK^{2}} {\log (1/\varepsilon _{1})} {}\\ & \leq & L\int _{0}^{t}\mathbb{E}[Z_{ s}]ds + \frac{C_{1}(0,T)} {\log (1/\varepsilon _{1})}, {}\\ \end{array}$$

where

$$\displaystyle{ C_{1}(0,T):= 1 + 5C_{T} + 4TK^{2}. }$$

By Gronwall’s inequality, we obtain

$$\displaystyle{ \mathbb{E}[Z_{t}] \leq \frac{C_{1}(0,T)e^{LT}} {\log (1/\varepsilon _{1})}. }$$

Therefore by the dominated convergence theorem, we conclude the statement by taking t → T.
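The Gronwall step used in both cases has a simple discrete analogue, which can be checked numerically (an illustrative sketch of ours, not part of the proof): if \(u_{k} \leq a + Lh\sum _{j<k}u_{j}\) with \(h = T/n\), then \(u_{k} \leq a\,e^{Lkh}\).

```python
import math

def saturated_gronwall(a, L, T, n):
    """Sequence saturating the discrete inequality: u_k = a + L*h*sum_{j<k} u_j."""
    h = T / n
    u = []
    for _ in range(n + 1):
        u.append(a + L * h * sum(u))
    return u

a, L, T, n = 0.3, 1.5, 1.0, 1000
u = saturated_gronwall(a, L, T, n)
# Discrete Gronwall bound: u_k <= a * exp(L * t_k) with t_k = k*T/n.
assert all(u[k] <= a * math.exp(L * k * T / n) + 1e-9 for k in range(n + 1))
```

Here \(u_{k} = a(1 + Lh)^{k}\) exactly, and \((1 + Lh)^{k} \leq e^{Lkh}\), which is the discrete counterpart of the bound \(\mathbb{E}[Z_{t}] \leq C\,e^{LT}\cdot(\text{data})\) obtained above.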

3.3 Proof of Corollary 2

To prove Corollary 2, we recall an upper bound for \(\mathbb{E}[\vert g(X) - g(\hat{X})\vert ^{r}]\), where \(g\) is a function of bounded variation, \(r \geq 1\), and \(X\) and \(\hat{X}\) are random variables.

Lemma 2 ([3], Theorem 4.3)

Let \(X\) and \(\hat{X}\) be random variables. Assume that \(X\) has a bounded density \(p_{X}\). If \(g \in BV\) and \(r \geq 1\), then for every \(q \geq 1\), we have

$$\displaystyle{ \mathbb{E}[\vert g(X) - g(\hat{X})\vert ^{r}] \leq 3^{r+1}V (g)^{r}\left (\sup _{ x\in \mathbb{R}}p_{X}(x)\right )^{ \frac{q} {q+1} }\mathbb{E}[\vert X -\hat{ X}\vert ^{q}]^{1/(q+1)}. }$$

Using the above lemma, we can prove Corollary 2.

Proof (Proof of Corollary 2)

From the Gaussian upper bound for the density \(p_{T}(x_{0},\cdot )\) of \(X_{T}\), we have for any \(y \in \mathbb{R}\),

$$\displaystyle{ p_{T}(x_{0},y) \leq \overline{C}p_{c_{{\ast}}}(T,x_{0},y) \leq \frac{\overline{C}} {\sqrt{2\pi c_{{\ast} } T}}. }$$
(17)

This means that the density \(p_{T}(x_{0},\cdot )\) of \(X_{T}\) is bounded. Hence, from Lemma 2 with \(q = 1\) and Theorem 1 with \(\tau = T\), for any \(g \in BV\) and \(r \geq 1\), we have

$$\displaystyle\begin{array}{rcl} \mathbb{E}[\vert g(X_{T}) - g(\hat{X}_{T})\vert ^{r}]& \leq & \frac{3^{r+1}V (g)^{r}\overline{C}^{1/2}} {(2\pi c_{{\ast}}T)^{1/4}} \mathbb{E}[\vert X_{T} -\hat{ X}_{T}\vert ]^{1/2} {}\\ & \leq & \left \{\begin{array}{ll} 3^{r+1}V (g)^{r}C_{ 2}(\alpha,T)\varepsilon _{1}^{\alpha /(2\alpha +1)} & \mathit{if }\alpha \in (0,1/2], \\ \frac{3^{r+1}V (g)^{r}C_{2}(0,T)} {\sqrt{\log (1/\varepsilon _{1 } )}} &\mathit{if }\alpha = 0, \end{array} \right.{}\\ \end{array}$$

where

$$\displaystyle{ C_{2}(\alpha,T):= \frac{\overline{C}^{1/2}C_{1}(\alpha,T)^{1/2}e^{LT/2}} {(2\pi c_{{\ast}}T)^{1/4}},\quad \text{for }\alpha \in [0,1/2]. }$$

This concludes the proof of the statement.

3.4 Proof of Theorem 2

Let \(V _{t}:=\sup _{0\leq s\leq t}\vert Y _{s}\vert\). Recall that for each \(\delta \in (1,\infty )\) and \(\kappa \in (0,1)\),

$$\displaystyle{ M_{t}^{\delta,\kappa } =\int _{ 0}^{t}\phi '_{ \delta,\kappa }(Y _{s})(\sigma (X_{s}) -\hat{\sigma } (\hat{X}_{s}))dW_{s}. }$$

Hence the quadratic variation of \(M_{t}^{\delta,\kappa }\) is given by

$$\displaystyle{ \langle M^{\delta,\kappa }\rangle _{ t} =\int _{ 0}^{t}\vert \phi '_{\delta,\kappa }(Y _{s})\vert ^{2}\vert \sigma (X_{ s}) -\hat{\sigma } (\hat{X}_{s})\vert ^{2}ds. }$$

Before proving Theorem 2, we estimate the expectation of \(\sup _{0\leq s\leq t}\vert M_{s}^{\delta,\kappa }\vert\) for any \(t \in [0,T]\), \(\delta \in (1,\infty )\) and \(\kappa \in (0,1)\).

Lemma 3

Suppose that the assumptions of Theorem 2 hold. Then for any \(t \in [0,T]\), \(\delta \in (1,\infty )\) and \(\kappa \in (0,1)\), we have

$$\displaystyle{ \mathbb{E}[\sup _{0\leq s\leq t}\vert M_{s}^{\delta,\kappa }\vert ] \leq \left \{\begin{array}{ll} \frac{1} {2}\mathbb{E}[V _{t}] + C_{3}(\alpha,T)\varepsilon _{1}^{4\alpha ^{2}/(2\alpha +1) } & \mathit{if }\alpha \in (0,1/2], \\ \frac{C_{3}(0,T)} {\sqrt{\log (1/\varepsilon _{1 } )}} &\mathit{if }\alpha = 0, \end{array} \right. }$$

where

$$\displaystyle{ C_{3}(\alpha,T):= \left \{\begin{array}{ll} \hat{C}_{1}^{2}K^{2}TC_{ 1}(\alpha,T)^{2\alpha }e^{2\alpha LT} + \sqrt{2}\hat{C}_{ 1}C_{T}^{1/2}, &\mathit{if }\alpha \in (0,1/2], \\ \sqrt{2}\hat{C}_{1}KT^{1/2}C_{ 1}(0,T)^{1/2}e^{LT/2} + \sqrt{2}\hat{C}_{ 1}C_{T}^{1/2},&\mathit{if }\alpha = 0, \end{array} \right. }$$

and \(\hat{C}_{p}\) is the constant in the Burkholder-Davis-Gundy inequality for exponent \(p > 0\).

Proof

From Burkholder-Davis-Gundy’s inequality, we have

$$\displaystyle\begin{array}{rcl} \mathbb{E}[\sup _{0\leq s\leq t}\vert M_{s}^{\delta,\kappa }\vert ]& \leq & \hat{C}_{ 1}\mathbb{E}[\langle M^{\delta,\kappa }\rangle _{ t}^{1/2}] \leq \hat{ C}_{ 1}\mathbb{E}\left [\left (\int _{0}^{t}\vert \sigma (X_{ s}) -\hat{\sigma } (\hat{X}_{s})\vert ^{2}ds\right )^{1/2}\right ] {}\\ & \leq & \sqrt{2}\hat{C}_{1}\mathbb{E}\left [\left (\int _{0}^{t}\vert \sigma (X_{ s}) -\sigma (\hat{X}_{s})\vert ^{2}ds\right )^{1/2}\right ] {}\\ & & +\sqrt{2}\hat{C}_{1}\mathbb{E}\left [\left (\int _{0}^{T}\vert \sigma (\hat{X}_{ s}) -\hat{\sigma } (\hat{X}_{s})\vert ^{2}ds\right )^{1/2}\right ]. {}\\ \end{array}$$

From Jensen’s inequality and Lemma 1, we have

$$\displaystyle\begin{array}{rcl} \mathbb{E}\left [\left (\int _{0}^{T}\vert \sigma (\hat{X}_{ s}) -\hat{\sigma } (\hat{X}_{s})\vert ^{2}ds\right )^{1/2}\right ]& \leq & \left (\int _{ 0}^{T}\mathbb{E}\left [\vert \sigma (\hat{X}_{ s}) -\hat{\sigma } (\hat{X}_{s})\vert ^{2}\right ]ds\right )^{1/2} {}\\ & \leq & C_{T}^{1/2}\vert \vert \sigma -\hat{\sigma }\vert \vert _{ 2}. {}\\ \end{array}$$

Since \(\sigma\) is \((1/2+\alpha )\)-Hölder continuous, we obtain

$$\displaystyle{ \mathbb{E}[\sup _{0\leq s\leq t}\vert M_{s}^{\delta,\kappa }\vert ] \leq \sqrt{2}\hat{C}_{ 1}K\mathbb{E}\left [\left (\int _{0}^{t}\vert Y _{ s}\vert ^{1+2\alpha }ds\right )^{1/2}\right ] + \sqrt{2}\hat{C}_{ 1}C_{T}^{1/2}\vert \vert \sigma -\hat{\sigma }\vert \vert _{ 2}. }$$
(18)

If \(\alpha \in (0,1/2]\), then we get

$$\displaystyle{ \sqrt{2}\hat{C}_{1}K\mathbb{E}\left [\left (\int _{0}^{t}\vert Y _{ s}\vert ^{1+2\alpha }ds\right )^{1/2}\right ] \leq \sqrt{2}\hat{C}_{ 1}K\mathbb{E}\left [V _{t}^{1/2}\left (\int _{ 0}^{t}\vert Y _{ s}\vert ^{2\alpha }ds\right )^{1/2}\right ]. }$$

Using Young's inequality \(xy \leq \frac{x^{2}} {2\sqrt{2}\hat{C}_{1}K} + \frac{\sqrt{2}\hat{C}_{1}Ky^{2}} {2}\) for any \(x,y \geq 0\) and Jensen's inequality, we obtain

$$\displaystyle\begin{array}{rcl} \sqrt{ 2}\hat{C}_{1}K\mathbb{E}\left [\left (\int _{0}^{t}\vert Y _{ s}\vert ^{1+2\alpha }ds\right )^{1/2}\right ]& \leq & \frac{1} {2}\mathbb{E}[V _{t}] + \frac{2\hat{C}_{1}^{2}K^{2}} {2} \int _{0}^{T}\mathbb{E}[\vert Y _{ s}\vert ^{2\alpha }]ds {}\\ & \leq & \frac{1} {2}\mathbb{E}[V _{t}] +\hat{ C}_{1}^{2}K^{2}T^{1-2\alpha }\left (\int _{ 0}^{T}\mathbb{E}[\vert Y _{ s}\vert ]ds\right )^{2\alpha }. {}\\ \end{array}$$

From Theorem 1 with τ = s, we have

$$\displaystyle{ \sqrt{2}\hat{C}_{1}K\mathbb{E}\left [\left (\int _{0}^{t}\vert Y _{ s}\vert ^{1+2\alpha }ds\right )^{1/2}\right ] \leq \frac{1} {2}\mathbb{E}[V _{t}] +\hat{ C}_{1}^{2}K^{2}TC_{ 1}(\alpha,T)^{2\alpha }e^{2\alpha LT}\varepsilon _{ 1}^{4\alpha ^{2}/(2\alpha +1) }. }$$
(19)

Since \(4\alpha ^{2}/(2\alpha + 1) \leq \alpha \leq 1/2\), from (18) and (19), we get

$$\displaystyle{ \mathbb{E}[\sup _{0\leq s\leq t}\vert M_{s}^{\delta,\kappa }\vert ] \leq \frac{1} {2}\mathbb{E}[V _{t}] + C_{3}(\alpha,T)\varepsilon _{1}^{4\alpha ^{2}/(2\alpha +1) } }$$

which concludes the statement for \(\alpha \in (0,1/2]\).

If α = 0, then from Jensen’s inequality and Theorem 1 with τ = s, we get

$$\displaystyle\begin{array}{rcl} \sqrt{ 2}\hat{C}_{1}K\mathbb{E}\left [\left (\int _{0}^{t}\vert Y _{ s}\vert ds\right )^{1/2}\right ]& \leq & \sqrt{2}\hat{C}_{ 1}K\left (\int _{0}^{T}\mathbb{E}[\vert Y _{ s}\vert ]ds\right )^{1/2} {}\\ & \leq & \frac{\sqrt{2}\hat{C}_{1}KT^{1/2}C_{1}(0,T)^{1/2}e^{LT/2}} {\sqrt{\log (1/\varepsilon _{1 } )}}. {}\\ \end{array}$$

Therefore we have

$$\displaystyle\begin{array}{rcl} \mathbb{E}[\sup _{0\leq s\leq T}\vert M_{s}^{\delta,\kappa }\vert ]& \leq & \frac{\sqrt{2}\hat{C}_{1}KT^{1/2}C_{ 1}(0,T)^{1/2}e^{LT/2}} {\sqrt{\log (1/\varepsilon _{1 } )}} + \sqrt{2}\hat{C}_{1}C_{T}^{1/2}\vert \vert \sigma -\hat{\sigma }\vert \vert _{ 2} {}\\ & \leq & \frac{C_{3}(0,T)} {\sqrt{\log (1/\varepsilon _{1 } )}}. {}\\ \end{array}$$

This concludes the statement for α = 0.

Using the above estimate, we can prove Theorem 2.

Proof (Proof of Theorem 2)

From ( 10), ( 12), ( 14) and ( 16), we have

$$\displaystyle\begin{array}{rcl} V _{t}& \leq & \kappa +L\int _{0}^{t}V _{ s}ds +\int _{ 0}^{T}\vert b(\hat{X}_{ s}) -\hat{ b}(\hat{X}_{s})\vert ds \\ & & +\frac{2\delta } {\kappa \log \delta } \int _{0}^{T}\vert \sigma (\hat{X}_{ s}) -\hat{\sigma } (\hat{X}_{s})\vert ^{2}ds + \frac{2TK^{2}\kappa ^{2\alpha }} {\log \delta } +\sup _{0\leq s\leq t}\vert M_{s}^{\delta,\kappa }\vert.{}\end{array}$$
(20)

If \(\alpha \in (0,1/2]\), then from (20) and Lemmas 1 and 3, we have

$$\displaystyle\begin{array}{rcl} \mathbb{E}[V _{t}]& \leq & \kappa +L\int _{0}^{t}\mathbb{E}[V _{ s}]ds + C_{T}\vert \vert b -\hat{ b}\vert \vert _{1} + \frac{2C_{T}\delta } {\kappa \log \delta } \vert \vert \sigma -\hat{\sigma }\vert \vert _{2}^{2} + \frac{2TK^{2}\kappa ^{2\alpha }} {\log \delta } {}\\ & & +\frac{1} {2}\mathbb{E}[V _{t}] + C_{3}(\alpha,T)\varepsilon _{1}^{4\alpha ^{2}/(2\alpha +1) } {}\\ & \leq & \kappa +L\int _{0}^{t}\mathbb{E}[V _{ s}]ds + C_{T}\varepsilon _{1} + \frac{2C_{T}\delta } {\kappa \log \delta } \varepsilon _{1} + \frac{2TK^{2}\kappa ^{2\alpha }} {\log \delta } {}\\ & & +\frac{1} {2}\mathbb{E}[V _{t}] + C_{3}(\alpha,T)\varepsilon _{1}^{4\alpha ^{2}/(2\alpha +1) }. {}\\ \end{array}$$

Hence we get

$$\displaystyle\begin{array}{rcl} \mathbb{E}[V _{t}]& \leq & 2\kappa + 2L\int _{0}^{t}\mathbb{E}[V _{ s}]ds + 2C_{T}\varepsilon _{1} + \frac{4C_{T}\delta } {\kappa \log \delta } \varepsilon _{1} {}\\ & & +\frac{4TK^{2}\kappa ^{2\alpha }} {\log \delta } + 2C_{3}(\alpha,T)\varepsilon _{1}^{4\alpha ^{2}/(2\alpha +1) }. {}\\ \end{array}$$

Note that \(0 < 4\alpha ^{2}/(2\alpha +1) \leq \alpha \leq 1/2\). Taking \(\delta = 2\) and \(\kappa =\varepsilon _{1}^{1/2}\), we have

$$\displaystyle\begin{array}{rcl} \mathbb{E}[V _{t}]& \leq & 2L\int _{0}^{t}\mathbb{E}[V _{ s}]ds + 2\left (1 + C_{T} + \frac{4C_{T}} {\log 2} \right )\varepsilon _{1}^{1/2} {}\\ & & +\frac{4TK^{2}} {\log 2} \varepsilon _{1}^{\alpha } + 2C_{ 3}(\alpha,T)\varepsilon _{1}^{4\alpha ^{2}/(2\alpha +1) } {}\\ & \leq & 2L\int _{0}^{t}\mathbb{E}[V _{ s}]ds + C_{4}(\alpha,T)\varepsilon _{1}^{4\alpha ^{2}/(2\alpha +1) }, {}\\ \end{array}$$

where

$$\displaystyle{ C_{4}(\alpha,T):= 2\left (1 + C_{T} + \frac{4C_{T} + 2TK^{2}} {\log 2} + C_{3}(\alpha,T)\right ). }$$

By Gronwall’s inequality, we obtain

$$\displaystyle{ \mathbb{E}[V _{t}] \leq C_{4}(\alpha,T)e^{2LT}\varepsilon _{ 1}^{4\alpha ^{2}/(2\alpha +1) }. }$$

If α = 0, then from (20), Lemmas 1 and 3, we have

$$\displaystyle{ \mathbb{E}[V _{t}] \leq \kappa +L\int _{0}^{t}\mathbb{E}[V _{ s}]ds + C_{T}\varepsilon _{1} + \frac{2C_{T}\delta } {\kappa \log \delta } \varepsilon _{1} + \frac{2TK^{2}} {\log \delta } + \frac{C_{3}(0,T)} {\sqrt{\log (1/\varepsilon _{1 } )}}. }$$

Taking \(\delta =\varepsilon _{1}^{-1/2}\) and \(\kappa = 1/\log (1/\varepsilon _{1})\), we get

$$\displaystyle{ \mathbb{E}[V _{t}] \leq L\int _{0}^{t}\mathbb{E}[V _{ s}]ds + \frac{C_{4}(0,T)} {\sqrt{\log (1/\varepsilon _{1 } )}}, }$$

where

$$\displaystyle{ C_{4}(0,T):= 1 + 5C_{T} + 4TK^{2} + C_{ 3}(0,T). }$$

By Gronwall’s inequality, we obtain

$$\displaystyle{ \mathbb{E}[V _{t}] \leq \frac{C_{4}(0,T)e^{LT}} {\sqrt{\log (1/\varepsilon _{1 } )}}. }$$

Hence we conclude the proof of Theorem 2.

3.5 Proof of Theorem 3

In this section, we also estimate the expectation of \(\sup _{0\leq s\leq t}\vert M_{s}^{\delta,\kappa }\vert ^{p}\) for any p ≥ 2, t ∈ [0, T], δ ∈ (1, ∞) and κ ∈ (0, 1).

Lemma 4

Let p ≥ 2. Assume that A-(ii), A-(iii) and A-(iv) hold. Then for any t ∈ [0,T], δ ∈ (1,∞) and κ ∈ (0,1), we have

$$\displaystyle{ \mathbb{E}[\sup _{0\leq s\leq t}\vert M_{s}^{\delta,\kappa }\vert ^{p}] \leq C_{ 5}(p,T)\mathbb{E}\left [\left (\int _{0}^{t}\vert Y _{ s}\vert ^{1+2\alpha }ds\right )^{p/2}\right ] + C_{ 6}(p,T)\vert \vert \sigma -\hat{\sigma }\vert \vert _{2p}^{p}, }$$

where \(C_{5}(p,T):= 2^{p/2}C_{p}K^{p}\) and \(C_{6}(p,T):= 2^{p/2}T^{\frac{p-1} {2} }C_{p}C_{T}^{1/2}\). In particular, if α = 1∕2, we have

$$\displaystyle\begin{array}{rcl} \mathbb{E}[\sup _{0\leq s\leq t}\vert M_{s}^{\delta,\kappa }\vert ^{p}]& \leq & \frac{1} {2 \cdot 5^{p-1}}\mathbb{E}[V _{t}^{p}] + \frac{5^{p-1}C_{ 5}(p,T)^{2}T^{p-1}} {2} \int _{0}^{t}\mathbb{E}[V _{ s}^{p}]ds {}\\ & & +C_{6}(p,T)\vert \vert \sigma -\hat{\sigma }\vert \vert _{2p}^{p}. {}\\ \end{array}$$

Proof (Proof of Lemma 4)

From the Burkholder-Davis-Gundy inequality, we have

$$\displaystyle\begin{array}{rcl} & & \mathbb{E}[\sup _{0\leq s\leq t}\vert M_{s}^{\delta,\kappa }\vert ^{p}] \leq C_{ p}\mathbb{E}[\langle M^{\delta,\kappa }\rangle _{ t}^{p/2}] \leq C_{ p}\mathbb{E}\left [\left (\int _{0}^{t}\vert \sigma (X_{ s}) -\hat{\sigma } (\hat{X}_{s})\vert ^{2}ds\right )^{p/2}\right ] {}\\ & & \quad \leq 2^{p/2}C_{ p}\left (\mathbb{E}\left [\left (\int _{0}^{t}\vert \sigma (X_{ s}) -\sigma (\hat{X}_{s})\vert ^{2}ds\right )^{p/2}\right ] + \mathbb{E}\left [\left (\int _{ 0}^{T}\vert \sigma (\hat{X}_{ s}) -\hat{\sigma } (\hat{X}_{s})\vert ^{2}ds\right )^{p/2}\right ]\right ). {}\\ \end{array}$$

From Jensen’s inequality and Lemma 1, we have

$$\displaystyle\begin{array}{rcl} \mathbb{E}\left [\left (\int _{0}^{T}\vert \sigma (\hat{X}_{ s}) -\hat{\sigma } (\hat{X}_{s})\vert ^{2}ds\right )^{p/2}\right ]& \leq & T^{\frac{p-1} {2} }\left (\int _{0}^{T}\mathbb{E}[\vert \sigma (\hat{X}_{s}) -\hat{\sigma } (\hat{X}_{s})\vert ^{2p}]ds\right )^{1/2} {}\\ & \leq & T^{\frac{p-1} {2} }C_{T}^{1/2}\vert \vert \sigma -\hat{\sigma }\vert \vert _{2p}^{p}. {}\\ \end{array}$$

Since σ is (1∕2 +α)-Hölder continuous, we get

$$\displaystyle{ \mathbb{E}[\sup _{0\leq s\leq t}\vert M_{s}^{\delta,\kappa }\vert ^{p}] \leq C_{ 5}(p,T)\mathbb{E}\left [\left (\int _{0}^{t}\vert Y _{ s}\vert ^{1+2\alpha }ds\right )^{p/2}\right ] + C_{ 6}(p,T)\vert \vert \sigma -\hat{\sigma }\vert \vert _{2p}^{p}. }$$

This concludes the first statement.

In particular, if α = 1∕2, then from the definition of \(V_{t}\) we get

$$\displaystyle{ C_{5}(p,T)\mathbb{E}\left [\left (\int _{0}^{t}\vert Y _{ s}\vert ^{2}ds\right )^{p/2}\right ] \leq C_{ 5}(p,T)\mathbb{E}\left [\left (V _{t}\right )^{p/2}\left (\int _{ 0}^{t}\vert Y _{ s}\vert ds\right )^{p/2}\right ]. }$$

Using Young’s inequality \(xy \leq \frac{x^{2}} {2\cdot 5^{p-1}C_{5}(p,T)} + \frac{5^{p-1}C_{ 5}(p,T)y^{2}} {2}\) for any x, y ≥ 0 and Jensen’s inequality, we obtain

$$\displaystyle\begin{array}{rcl} C_{5}(p,T)\mathbb{E}\left [\left (\int _{0}^{t}\vert Y _{ s}\vert ^{2}ds\right )^{p/2}\right ]& \leq & \frac{1} {2 \cdot 5^{p-1}}\mathbb{E}[V _{t}^{p}] + \frac{5^{p-1}C_{ 5}(p,T)^{2}} {2} \mathbb{E}\left [\left (\int _{0}^{t}\vert Y _{ s}\vert ds\right )^{p}\right ] {}\\ & \leq & \frac{1} {2 \cdot 5^{p-1}}\mathbb{E}[V _{t}^{p}] + \frac{5^{p-1}C_{ 5}(p,T)^{2}T^{p-1}} {2} \int _{0}^{t}\mathbb{E}[V _{ s}^{p}]ds,{}\\ \end{array}$$

which concludes the second statement.

To prove Theorem 3, we recall the following Gronwall-type inequality.

Lemma 5 ([7] Lemma 3.2.-(ii))

Let (A t ) 0≤t≤T be a nonnegative continuous stochastic process and set B t := sup0≤s≤t A s . Assume that for some r > 0, q ≥ 1, ρ ∈ [1,q] and \(C_{1},\xi \geq 0\),

$$\displaystyle{ \mathbb{E}[B_{t}^{r}] \leq C_{ 1}\mathbb{E}\left [\left (\int _{0}^{t}B_{ s}ds\right )^{r}\right ] + C_{ 1}\mathbb{E}\left [\left (\int _{0}^{t}A_{ s}^{\rho }ds\right )^{r/q}\right ]+\xi <\infty }$$

for all t ∈ [0,T]. If r ≥ q or q + 1 − ρ < r < q holds, then there exists a constant \(C_{2}\) depending on \(r,q,\rho,T\) and \(C_{1}\) such that

$$\displaystyle{ \mathbb{E}[B_{T}^{r}] \leq C_{ 2}\xi + C_{2}\int _{0}^{T}\mathbb{E}[A_{ s}]ds. }$$

Now using Lemmas 4 and 5, we can prove Theorem 3.

Proof (Proof of Theorem 3)

From (20), the inequality \(\left (\sum _{i=1}^{m}a_{i}\right )^{p} \leq m^{p-1}\sum _{i=1}^{m}a_{i}^{p}\) for any p ≥ 2, \(a_{i} > 0\) and \(m \in \mathbb{N}\), and Jensen's inequality, we have

$$\displaystyle\begin{array}{rcl} V _{t}^{p}& \leq & 5^{p-1}\Bigg(\kappa ^{p} + \left (L\int _{ 0}^{t}V _{ s}ds\right )^{p} + T^{p-1}\int _{ 0}^{T}\vert b(\hat{X}_{ s}) -\hat{ b}(\hat{X}_{s})\vert ^{p}ds {}\\ & & +\frac{2T^{p-1}\delta ^{p}} {\kappa ^{p}(\log \delta )^{p}} \int _{0}^{T}\vert \sigma (\hat{X}_{ s}) -\hat{\sigma } (\hat{X}_{s})\vert ^{2p}ds + \frac{(2TK^{2})^{p}\kappa ^{2p\alpha }} {(\log \delta )^{p}} +\sup _{0\leq s\leq t}\vert M_{s}^{\delta,\kappa }\vert ^{p}\Bigg). {}\\ \end{array}$$

From Lemma 1 with p ≥ 2, we have

$$\displaystyle\begin{array}{rcl} \mathbb{E}[V _{t}^{p}]& \leq & 5^{p-1}\kappa ^{p} + 5^{p-1}L^{p}\mathbb{E}\left [\left (\int _{ 0}^{t}V _{ s}ds\right )^{p}\right ] + (5T)^{p-1}C_{ T}\vert \vert b -\hat{ b}\vert \vert _{p}^{p} {}\\ & & +\frac{2(5T)^{p-1}C_{T}\delta ^{p}} {\kappa ^{p}(\log \delta )^{p}} \vert \vert \sigma -\hat{\sigma }\vert \vert _{2p}^{2p} + \frac{5^{p-1}(2TK^{2})^{p}\kappa ^{2p\alpha }} {(\log \delta )^{p}} + 5^{p-1}\mathbb{E}[\sup _{ 0\leq s\leq t}\vert M_{s}^{\delta,\kappa }\vert ^{p}].{}\\ \end{array}$$

If α = 1∕2, using Lemma 4, we have

$$\displaystyle\begin{array}{rcl} \mathbb{E}[V _{t}^{p}]& \leq & 5^{p-1}\kappa ^{p} + (5T)^{p-1}\left (L^{p} + \frac{C_{5}(p,T)^{2}} {2} \right )\int _{0}^{t}\mathbb{E}[V _{ s}^{p}]ds + (5T)^{p-1}C_{ T}\vert \vert b -\hat{ b}\vert \vert _{p}^{p} {}\\ & & +\frac{2(5T)^{p-1}C_{T}\delta ^{p}} {\kappa ^{p}(\log \delta )^{p}} \vert \vert \sigma -\hat{\sigma }\vert \vert _{2p}^{2p} + \frac{5^{p-1}(2TK^{2})^{p}\kappa ^{p}} {(\log \delta )^{p}} {}\\ & & +\frac{1} {2}\mathbb{E}[V _{T}^{p}] + 5^{p-1}C_{ 6}(p,T)\vert \vert \sigma -\hat{\sigma }\vert \vert _{2p}^{p}. {}\\ \end{array}$$

Hence we get

$$\displaystyle\begin{array}{rcl} \mathbb{E}[V _{t}^{p}]& \leq & 2 \cdot 5^{p-1}\kappa ^{p} + (5T)^{p-1}\left (2L^{p} + C_{ 5}(p,T)^{2}\right )\int _{ 0}^{t}\mathbb{E}[V _{ s}^{p}]ds {}\\ & & +2(5T)^{p-1}C_{ T}\vert \vert b -\hat{ b}\vert \vert _{p}^{p} + \frac{4(5T)^{p-1}C_{ T}\delta ^{p}} {\kappa ^{p}(\log \delta )^{p}} \vert \vert \sigma -\hat{\sigma }\vert \vert _{2p}^{2p} {}\\ & & +\frac{2 \cdot 5^{p-1}(2TK^{2})^{p}\kappa ^{p}} {(\log \delta )^{p}} + 2 \cdot 5^{p-1}C_{ 6}(p,T)\vert \vert \sigma -\hat{\sigma }\vert \vert _{2p}^{p} {}\\ & \leq & 2 \cdot 5^{p-1}\kappa ^{p} + (5T)^{p-1}\left (2L^{p} + C_{ 5}(p,T)^{2}\right )\int _{ 0}^{t}\mathbb{E}[V _{ s}^{p}]ds + 2(5T)^{p-1}C_{ T}\varepsilon _{p} {}\\ & & +\frac{4(5T)^{p-1}C_{T}\delta ^{p}} {\kappa ^{p}(\log \delta )^{p}} \varepsilon _{p} + \frac{2 \cdot 5^{p-1}(2TK^{2})^{p}\kappa ^{p}} {(\log \delta )^{p}} + 2 \cdot 5^{p-1}C_{ 6}(p,T)\varepsilon _{p}^{1/2}. {}\\ \end{array}$$

Taking \(\delta = 2\) and \(\kappa =\varepsilon _{p}^{1/(2p)}\), we have

$$\displaystyle{ \mathbb{E}[V _{t}^{p}] \leq (5T)^{p-1}\left (2L^{p} + C_{ 5}(p,T)^{2}\right )\int _{ 0}^{t}\mathbb{E}[V _{ s}^{p}]ds + C_{ 7}(1/2,p,T)\varepsilon _{p}^{1/2}, }$$

where

$$\displaystyle\begin{array}{rcl} C_{7}(1/2,p,T)&:=& 2 \cdot 5^{p-1} + 2(5T)^{p-1}C_{ T} + \frac{4 \cdot 2^{p}(5T)^{p-1}C_{T} + 2 \cdot 5^{p-1}(2TK^{2})^{p}} {(\log 2)^{p}} {}\\ & & +2 \cdot 5^{p-1}C_{ 6}(p,T). {}\\ \end{array}$$

By Gronwall’s inequality, we obtain

$$\displaystyle{ \mathbb{E}[V _{t}^{p}] \leq C_{ 7}(1/2,p,T)\exp (5^{p-1}T^{p}\left (2L^{p} + C_{ 5}(p,T)^{2}\right ))\varepsilon _{ p}^{1/2}. }$$

If α ∈ [0, 1∕2), using Lemma 4, we have

$$\displaystyle\begin{array}{rcl} \mathbb{E}[V _{t}^{p}]& \leq & 5^{p-1}\kappa ^{p} + 5^{p-1}L^{p}\mathbb{E}\left [\left (\int _{ 0}^{t}V _{ s}ds\right )^{p}\right ] + (5T)^{p-1}C_{ T}\vert \vert b -\hat{ b}\vert \vert _{p}^{p} {}\\ & & +\frac{2(5T)^{p-1}C_{T}\delta ^{p}} {\kappa ^{p}(\log \delta )^{p}} \vert \vert \sigma -\hat{\sigma }\vert \vert _{2p}^{2p} + \frac{5^{p-1}(2TK^{2})^{p}\kappa ^{2p\alpha }} {(\log \delta )^{p}} {}\\ & & +5^{p-1}C_{ 5}(p,T)\mathbb{E}\left [\left (\int _{0}^{t}\vert Y _{ s}\vert ^{1+2\alpha }ds\right )^{p/2}\right ] + 5^{p-1}C_{ 6}(p,T)\vert \vert \sigma -\hat{\sigma }\vert \vert _{2p}^{p} {}\\ & \leq & 5^{p-1}L^{p}\mathbb{E}\left [\left (\int _{ 0}^{t}V _{ s}ds\right )^{p}\right ] + 5^{p-1}C_{ 5}(p,T)\mathbb{E}\left [\left (\int _{0}^{t}\vert Y _{ s}\vert ^{1+2\alpha }ds\right )^{p/2}\right ] {}\\ & & +5^{p-1}\kappa ^{p} + ((5T)^{p-1}C_{ T} + 5^{p-1}C_{ 6}(p,T))\varepsilon _{p}^{1/2} {}\\ & & +\frac{2(5T)^{p-1}C_{T}\delta ^{p}} {\kappa ^{p}(\log \delta )^{p}} \varepsilon _{p} + \frac{5^{p-1}(2TK^{2})^{p}\kappa ^{2p\alpha }} {(\log \delta )^{p}}. {}\\ \end{array}$$

Now we apply Theorem 1 with τ = s and Lemma 5 with r = p, q = 2, ρ = 1 + 2α and

$$\displaystyle\begin{array}{rcl} \xi & =& 5^{p-1}\kappa ^{p} + ((5T)^{p-1}C_{ T} + 5^{p-1}C_{ 6}(p,T))\varepsilon _{p}^{1/2} {}\\ & & +\frac{2(5T)^{p-1}C_{T}\delta ^{p}} {\kappa ^{p}(\log \delta )^{p}} \varepsilon _{p} + \frac{5^{p-1}(2TK^{2})^{p}\kappa ^{2p\alpha }} {(\log \delta )^{p}}. {}\\ \end{array}$$

Then there exists \(C_{7}(\alpha,p,T)\) which depends on \(p,\alpha,T,L\) and \(C_{5}(p,T)\) such that

$$\displaystyle\begin{array}{rcl} \mathbb{E}[V _{T}^{p}]& \leq & C_{ 7}(\alpha,p,T)\left (\kappa ^{p} +\varepsilon _{ p}^{1/2} + \frac{\delta ^{p}\varepsilon _{ p}} {\kappa ^{p}(\log \delta )^{p}} + \frac{\kappa ^{2p\alpha }} {(\log \delta )^{p}}\right ) {}\\ & & +C_{7}(\alpha,p,T)\int _{0}^{T}\mathbb{E}[\vert Y _{ s}\vert ]ds {}\\ & \leq & C_{7}(\alpha,p,T)\left (\kappa ^{p} +\varepsilon _{ p}^{1/2} + \frac{\delta ^{p}\varepsilon _{ p}} {\kappa ^{p}(\log \delta )^{p}} + \frac{\kappa ^{2p\alpha }} {(\log \delta )^{p}}\right ) {}\\ & & +\left \{\begin{array}{ll} C_{7}(\alpha,p,T)C_{1}(\alpha,T)e^{LT}T\varepsilon _{ 1}^{2\alpha /(2\alpha +1)} & \mathit{if }\alpha \in (0,1/2), \\ \frac{C_{7}(0,p,T)C_{1}(0,T)e^{LT}T} {\log (1/\varepsilon _{1})} &\mathit{if }\alpha = 0. \end{array} \right.{}\\ \end{array}$$

Taking \(\delta = 2\) and \(\kappa =\varepsilon _{p}^{1/(2p)}\) if α ∈ (0, 1∕2), and \(\delta =\varepsilon _{p}^{-1/(2p)}\) and \(\kappa = 1/\log (1/\varepsilon _{p})\) if α = 0, we get

$$\displaystyle{ \mathbb{E}[V _{T}^{p}] \leq \left \{\begin{array}{ll} C_{8}(\alpha,p,T)\varepsilon _{1}^{2\alpha /(2\alpha +1)} & \mathit{if }\alpha \in (0,1/2), \\ \frac{C_{8}(\alpha,p,T)} {\log (1/\varepsilon _{1})} &\mathit{if }\alpha = 0, \end{array} \right. }$$

where

$$\displaystyle{C_{8}(\alpha,p,T):= \left \{\begin{array}{ll} C_{7}(\alpha,p,T)\left (2 + \frac{2^{p} + 1} {(\log 2)^{p}} + C_{1}(\alpha,T)e^{LT}T\right )&\mathit{if }\alpha \in (0,1/2), \\ C_{7}(\alpha,p,T)\left (2 + 2(2p)^{p} + C_{ 1}(\alpha,T)e^{LT}T\right ) &\mathit{if }\alpha = 0. \end{array} \right.}$$

Hence we conclude the proof of Theorem 3.

4 Application to the Stability Problem

In this section, we apply our main results to the stability problem. For any \(n \in \mathbb{N}\), we consider the one-dimensional stochastic differential equation

$$\displaystyle{ X_{t}^{(n)} = x_{ 0} +\int _{ 0}^{t}b_{ n}(X_{s}^{(n)})ds +\int _{ 0}^{t}\sigma _{ n}(X_{s}^{(n)})dW_{ s}. }$$
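For illustration only, the pair (X, X (n)) driven by a single Brownian path can be simulated with the Euler-Maruyama scheme. The coefficients below are hypothetical stand-ins (a bounded discontinuous drift b and a smooth surrogate b_n for its mollification); they are not taken from the paper, and the Euler discretization introduces its own error not covered by the stability estimates.

```python
import numpy as np

def em_path(b, sigma, x0, dW, dt):
    """Euler-Maruyama path driven by the given Brownian increments dW."""
    x = np.empty(len(dW) + 1)
    x[0] = x0
    for k, dw in enumerate(dW):
        x[k + 1] = x[k] + b(x[k]) * dt + sigma(x[k]) * dw
    return x

T, n_steps = 1.0, 2000
dt = T / n_steps
rng = np.random.default_rng(0)
dW = rng.normal(0.0, np.sqrt(dt), n_steps)  # one shared Brownian path W

# Illustrative coefficients: b bounded with a jump, sigma bounded and elliptic
b = lambda x: -1.0 if x < 0 else 1.0
sigma = lambda x: 1.0 + 0.5 * np.tanh(x)

# Hypothetical smooth approximations (b_n, sigma_n) of (b, sigma)
n = 50
b_n = lambda x: np.tanh(n * x)          # smooth surrogate for mollified sign
sigma_n = sigma                          # sigma_n = sigma here, for simplicity

X = em_path(b, sigma, 0.0, dW, dt)
Xn = em_path(b_n, sigma_n, 0.0, dW, dt)
err = np.max(np.abs(X - Xn))  # proxy for sup_{0<=t<=T} |X_t - X_t^(n)|
```

Increasing n (with a correspondingly fine time grid) should shrink `err`, in the spirit of the stability bounds below.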

Assumption 2

We assume that the coefficients b, σ and the sequence of coefficients \((b_{n})_{n\in \mathbb{N}}\) and \((\sigma _{n})_{n\in \mathbb{N}}\) satisfy the following conditions:

  • A′-(i): \(b \in \mathcal{L}\).

  • A′-(ii): b and b n are bounded and measurable, i.e., there exists K > 0 such that

    $$\displaystyle{ \sup _{n\in \mathbb{N},x\in \mathbb{R}}\left (\vert b_{n}(x)\vert \vee \vert b(x)\vert \right ) \leq K. }$$
  • A′-(iii): σ and σ n are η-Hölder continuous with η = 1∕2 + α, α ∈ [0, 1∕2], i.e., there exists K > 0 such that

    $$\displaystyle{ \sup _{n\in \mathbb{N},x,y\in \mathbb{R},x\neq y}\left (\frac{\vert \sigma (x) -\sigma (y)\vert } {\vert x - y\vert ^{\eta }} \vee \frac{\vert \sigma _{n}(x) -\sigma _{n}(y)\vert } {\vert x - y\vert ^{\eta }} \right ) \leq K. }$$
  • A′-(iv): \(a:=\sigma ^{2}\) and \(a_{n}:=\sigma _{n}^{2}\) are bounded and uniformly elliptic, i.e., there exists λ ≥ 1 such that for any \(x \in \mathbb{R}\) and \(n \in \mathbb{N}\),

    $$\displaystyle\begin{array}{rcl} \lambda ^{-1} \leq a(x) \leq \lambda \quad \mathit{and}\quad \lambda ^{-1} \leq a_{ n}(x) \leq \lambda.& & {}\\ \end{array}$$
  • A′-(p): For given p > 0,

    $$\displaystyle{ \varepsilon _{p,n}:= \vert \vert b - b_{n}\vert \vert _{p}^{p} \vee \vert \vert \sigma -\sigma _{ n}\vert \vert _{2p}^{2p} \rightarrow 0 }$$

    as n →∞.

For p ≥ 1 and α ∈ [0, 1∕2], we define N α, p by

$$\displaystyle{ N_{\alpha,p}:= \left \{\begin{array}{ll} \min \{n \in \mathbb{N}:\varepsilon _{p,m} <1,\forall m \geq n\}, &\mathit{if }\alpha \in (0,1/2], \\ \min \{n \in \mathbb{N}:\varepsilon _{p,m} <1/e,\forall m \geq n\},&\mathit{if }\alpha = 0. \end{array} \right. }$$
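For a concrete, finite sequence the index N α,p can be found by a direct scan. The helper below is a hypothetical illustration: it assumes the unseen tail of the sequence stays below the threshold, which must be justified separately (e.g. by monotonicity).

```python
import math

def N_alpha_p(eps, alpha):
    """Smallest 1-based n with eps[m-1] < threshold for every m >= n.

    `eps` is a finite list standing in for (eps_{p,n})_n; we assume the
    tail beyond the list stays below the threshold (illustrative only).
    """
    thr = 1.0 if alpha > 0 else 1.0 / math.e
    n = 1
    for i, e in enumerate(eps, start=1):
        if e >= thr:
            n = i + 1  # condition fails at index i, so N must exceed i
    return n

# e.g. eps_{1,n} = 1/n: threshold 1 for alpha > 0, threshold 1/e for alpha = 0
eps = [1.0 / n for n in range(1, 11)]
```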

Then using Theorems 1-3 and Corollaries 1 and 2, we have the following corollaries.

Corollary 3

Suppose that Assumption  2 holds with p = 1. Then there exists a positive constant C which depends on \(\overline{C},c_{{\ast}},K,L,T,\alpha,\lambda\) and x 0 such that for any n ≥ N α,1 ,

$$\displaystyle{ \sup _{\tau \in \mathcal{T}}\mathbb{E}[\vert X_{\tau }-X_{\tau }^{(n)}\vert ] \leq \left \{\begin{array}{ll} C\varepsilon _{1,n}^{2\alpha /(2\alpha +1)} & \mathit{if }\alpha \in (0,1/2], \\ \frac{C} {\log (1/\varepsilon _{1,n})} &\mathit{if }\alpha = 0 \end{array} \right. }$$

and

$$\displaystyle{ \mathbb{E}[\sup _{0\leq t\leq T}\vert X_{t}-X_{t}^{(n)}\vert ] \leq \left \{\begin{array}{ll} C\varepsilon _{1,n}^{4\alpha ^{2}/(2\alpha +1) } & \mathit{if }\alpha \in (0,1/2], \\ \frac{C} {\sqrt{\log (1/\varepsilon _{1,n } )}} &\mathit{if }\alpha = 0 \end{array} \right. }$$

and for any g ∈ BV and r ≥ 1, we have

$$\displaystyle{ \mathbb{E}[\vert g(X_{T})-g(X_{T}^{(n)})\vert ^{r}] \leq \left \{\begin{array}{ll} 3^{r+1}V (g)^{r}C\varepsilon _{ 1,n}^{\alpha /(2\alpha +1)} & \mathit{if }\alpha \in (0,1/2], \\ \frac{3^{r+1}V (g)^{r}C} {\sqrt{\log (1/\varepsilon _{1,n } )}} &\mathit{if }\alpha = 0. \end{array} \right. }$$

Corollary 4

Suppose that Assumption  2 holds with p ≥ 2. Then there exists a positive constant C which depends on \(\overline{C},c_{{\ast}},K,L,T,p,\alpha,\lambda\) and x 0 such that for any n ≥ N α,p ,

$$\displaystyle{ \mathbb{E}[\sup _{0\leq t\leq T}\vert X_{t}-X_{t}^{(n)}\vert ^{p}] \leq \left \{\begin{array}{ll} C\varepsilon _{p,n}^{1/2} & \mathit{if }\alpha = 1/2, \\ C\varepsilon _{1,n}^{2\alpha /(2\alpha +1)} & \mathit{if }\alpha \in (0,1/2), \\ \frac{C} {\log (1/\varepsilon _{1,n})} &\mathit{if }\alpha = 0. \end{array} \right. }$$

Corollary 5

Suppose that Assumption  2 holds with 2p for p ∈ (1,2). Then there exists a positive constant C which depends on \(\overline{C},c_{{\ast}},K,L,T,p,\alpha,\lambda\) and x 0 such that for any n ≥ N α,2p ,

$$\displaystyle{ \mathbb{E}[\sup _{0\leq t\leq T}\vert X_{t}-X_{t}^{(n)}\vert ^{p}] \leq \left \{\begin{array}{ll} C\varepsilon _{2p,n}^{1/2} & \mathit{if }\alpha = 1/2, \\ C\varepsilon _{1,n}^{\alpha /(2\alpha +1)} & \mathit{if }\alpha \in (0,1/2), \\ \frac{C} {\sqrt{\log (1/\varepsilon _{1,n } )}} &\mathit{if }\alpha = 0. \end{array} \right. }$$
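The rate exponents in the corollaries above can be compared directly. The snippet below (illustrative only) tabulates the two L 1 exponents of Corollary 3: 2α∕(2α + 1) for the stopped-time bound and 4α 2∕(2α + 1) for the supremum bound; the latter never exceeds the former for α ∈ [0, 1∕2], and both equal 1∕2 at α = 1∕2.

```python
def stopped_rate_L1(alpha):
    """Exponent in Corollary 3's stopped-time L^1 bound, alpha in (0, 1/2]."""
    return 2 * alpha / (2 * alpha + 1)

def sup_rate_L1(alpha):
    """Exponent in Corollary 3's supremum L^1 bound, alpha in (0, 1/2]."""
    return 4 * alpha ** 2 / (2 * alpha + 1)

# tabulate both exponents for a few Holder indices eta = 1/2 + alpha
table = {a: (stopped_rate_L1(a), sup_rate_L1(a)) for a in (0.1, 0.25, 0.5)}
```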

The next proposition shows that there exist sequences \((b_{n})_{n\in \mathbb{N}}\) and \((\sigma _{n})_{n\in \mathbb{N}}\) satisfying Assumption 2.

Proposition 1

 

  1. (i)

    Assume \(\sup _{x\in \mathbb{R}}\vert b(x)\vert \leq K\). If the set of discontinuity points of b is a null set with respect to the Lebesgue measure, then there exists a sequence \((b_{n})_{n\in \mathbb{N}}\) of differentiable and bounded functions such that for any p ≥ 1,

    $$\displaystyle{ \int _{\mathbb{R}}\vert b(x) - b_{n}(x)\vert ^{p}e^{-\frac{\vert x-x_{0}\vert ^{2}} {2c_{{\ast}}T} }dx \rightarrow 0 }$$
    (21)

    as n →∞. Moreover, if b is a one-sided Lipschitz function, we can construct an explicit sequence \((b_{n})_{n\in \mathbb{N}}\) which satisfies a one-sided Lipschitz condition.

  2. (ii)

    If the diffusion coefficient σ satisfies A′-(iii) and A′-(iv), then there exists a sequence \((\sigma _{n})_{n\in \mathbb{N}}\) of differentiable functions such that for any \(n \in \mathbb{N}\) , σ n satisfies A′-(iii), A′-(iv) and for any p ≥ 1,

    $$\displaystyle{ \int _{\mathbb{R}}\vert \sigma (x) -\sigma _{n}(x)\vert ^{2p}e^{-\frac{\vert x-x_{0}\vert ^{2}} {2c_{{\ast}}T} }dx \leq \frac{K^{2p}\sqrt{2\pi c_{ {\ast}}T}} {n^{2p\eta }}. }$$

Proof

Let \(\rho (x):=\mu e^{-1/(1-\vert x\vert ^{2}) }\mathbf{1}(\vert x\vert <1)\) with \(\mu ^{-1} =\int _{\vert x\vert <1}e^{-1/(1-\vert x\vert ^{2}) }dx\), and let the sequence \((\rho _{n})_{n\in \mathbb{N}}\) be defined by ρ n (x): = n ρ(nx). We set \(b_{n}(x):=\int _{\mathbb{R}}b(y)\rho _{n}(x - y)dy\) and \(\sigma _{n}(x):=\int _{\mathbb{R}}\sigma (y)\rho _{n}(x - y)dy\). Then for any \(n \in \mathbb{N}\) and \(x \in \mathbb{R}\), we have \(\vert b_{n}(x)\vert \leq K\) and \(\lambda ^{-1} \leq a_{n}(x):=\sigma _{n}^{2}(x) \leq \lambda\), and both b n and σ n are differentiable.
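The mollification above (convolution with the scaled bump ρ n ) can be sketched numerically. The grid, the drift b(x) = sign(x) and all parameters below are illustrative choices, not part of the proof, and the discrete convolution only approximates b n = b ∗ρ n .

```python
import numpy as np

def rho(x):
    """Standard (unnormalized) bump function supported on (-1, 1)."""
    out = np.zeros_like(x)
    inside = np.abs(x) < 1
    out[inside] = np.exp(-1.0 / (1.0 - x[inside] ** 2))
    return out

def mollify(f, grid, n):
    """Discrete approximation of f * rho_n, where rho_n(x) = n * rho(n*x)."""
    h = grid[1] - grid[0]
    kernel = n * rho(n * grid)
    kernel /= kernel.sum() * h           # normalize so the kernel integrates to 1
    return np.convolve(f(grid), kernel, mode="same") * h

grid = np.linspace(-3, 3, 2001)
b = lambda x: np.sign(x)                 # bounded, discontinuous drift
for n in (2, 8, 32):
    bn = mollify(b, grid, n)
    # mollification preserves the bound: |b_n| <= sup|b| = 1
    assert np.max(np.abs(bn)) <= 1.0 + 1e-9
```

Away from the discontinuity at 0 the mollified drift agrees with b once the kernel width 1∕n is small enough, matching the almost-everywhere convergence used in the proof.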

Proof of (i). From Jensen’s inequality, we have

$$\displaystyle\begin{array}{rcl} \int _{\mathbb{R}}\vert b(x) - b_{n}(x)\vert ^{p}e^{-\frac{\vert x-x_{0}\vert ^{2}} {2c_{{\ast}}T} }dx& \leq & \int _{\mathbb{R}}dx\left (\int _{\mathbb{R}}dy\vert b(x) - b(y)\vert \rho _{n}(x - y)\right )^{p}e^{-\frac{\vert x-x_{0}\vert ^{2}} {2c_{{\ast}}T} } {}\\ & =& \int _{\mathbb{R}}dx\left (\int _{\vert z\vert <1}dz\vert b(x) - b(x - z/n)\vert \rho (z)\right )^{p}e^{-\frac{\vert x-x_{0}\vert ^{2}} {2c_{{\ast}}T} } {}\\ & \leq & \int _{\vert z\vert <1}dz\int _{\mathbb{R}}dx\vert b(x) - b(x - z/n)\vert ^{p}e^{-\frac{\vert x-x_{0}\vert ^{2}} {2c_{{\ast}}T} }\rho (z). {}\\ \end{array}$$

Since b is bounded, we have

$$\displaystyle{ \int _{\mathbb{R}}\vert b(x) - b(x - z/n)\vert ^{p}e^{-\frac{\vert x-x_{0}\vert ^{2}} {2c_{{\ast}}T} }dx \leq (2K)^{p}\int _{\mathbb{R}}e^{-\frac{\vert x-x_{0}\vert ^{2}} {2c_{{\ast}}T} }dx = (2K)^{p}\sqrt{2\pi c_{{\ast} } T}. }$$
(22)

On the other hand, since the set of discontinuity points of b is a null set with respect to the Lebesgue measure, b is continuous almost everywhere. From (22), using the dominated convergence theorem, we have

$$\displaystyle{ \int _{\mathbb{R}}\vert b(x) - b(x - z/n)\vert ^{p}e^{-\frac{\vert x-x_{0}\vert ^{2}} {2c_{{\ast}}T} }dx \rightarrow 0 }$$

as n →∞. From this fact and the dominated convergence theorem, \((b_{n})_{n\in \mathbb{N}}\) satisfies (21).

Let b be a one-sided Lipschitz function. Then, we have

$$\displaystyle\begin{array}{rcl} (x - y)(b_{n}(x) - b_{n}(y))& =& \int _{\mathbb{R}}(x - y)(b(x - z) - b(y - z))\rho _{n}(z)dz {}\\ & =& \int _{\mathbb{R}}\{(x - z) - (y - z)\}(b(x - z) - b(y - z))\rho _{n}(z)dz {}\\ & \leq & L\vert x - y\vert ^{2}, {}\\ \end{array}$$

which implies that \((b_{n})_{n\in \mathbb{N}}\) satisfies the one-sided Lipschitz condition.

Proof of (ii). In the same way as in the proof of (i), we have, from the Hölder continuity of σ,

$$\displaystyle\begin{array}{rcl} & & \int _{\mathbb{R}}\vert \sigma (x) -\sigma _{n}(x)\vert ^{2p}e^{-\frac{\vert x-x_{0}\vert ^{2}} {2c_{{\ast}}T} }dx \leq \int _{\vert z\vert <1}dz\int _{\mathbb{R}}dx\vert \sigma (x) -\sigma (x - z/n)\vert ^{2p}e^{-\frac{\vert x-x_{0}\vert ^{2}} {2c_{{\ast}}T} }\rho (z) {}\\ & & \quad \leq \frac{K^{2p}} {n^{2p\eta }} \int _{\vert z\vert <1}dz\int _{\mathbb{R}}dxe^{-\frac{\vert x-x_{0}\vert ^{2}} {2c_{{\ast}}T} }\rho (z) = \frac{K^{2p}\sqrt{2\pi c_{ {\ast}}T}} {n^{2p\eta }}. {}\\ \end{array}$$

Finally, we show that σ n is η-Hölder continuous. For any \(x,y \in \mathbb{R}\),

$$\displaystyle{ \vert \sigma _{n}(x) -\sigma _{n}(y)\vert \leq \int _{\mathbb{R}}\vert \sigma (x - z) -\sigma (y - z)\vert \rho _{n}(z)dz \leq K\vert x - y\vert ^{\eta }, }$$

which implies that σ n is η-Hölder continuous. This shows that \((\sigma _{n})_{n\in \mathbb{N}}\) satisfies the statement of (ii).