1 Introduction

In this paper, we study the numerical approximations to a class of time-changed stochastic differential equations (SDEs) which are of the form

$$\begin{aligned} \mathrm {d}X(t) = f(E(t),X(t))\,\mathrm {d}E(t) + g(E(t),X(t))\,\mathrm {d}B(E(t)). \end{aligned}$$

Here the coefficients f and g satisfy some regularity conditions (to be specified in Sect. 2), B(t) represents a standard Brownian motion, and E(t) is an independent time-change given by an inverse subordinator. The rigorous mathematical definitions are postponed to Sect. 2.

Since it is in general impossible to derive an explicit solution to such SDEs, numerical approximations become extremely important when one applies them to model uncertain phenomena in real life. This paper aims to construct a numerical method for these time-changed SDEs. The strong convergence (together with its rate) and the mean square stability of the numerical method are investigated.

To the best of our knowledge, [8] is the first paper to study the finite time strong convergence of numerical methods for time-changed SDEs by directly discretizing the equations. In [8], the authors used the duality principle established in [10] to construct the Euler–Maruyama (EM) method. In a very recent work [6], the authors studied the EM method for a larger class of time-changed SDEs without the duality principle. However, both works required the coefficients of the time-changed SDEs to satisfy the global Lipschitz condition. This requirement rules out many interesting SDEs like

$$\begin{aligned} \mathrm {d}X(t) = \left( X(t) - X^3(t)\right) \,\mathrm {d}E(t) + X(t) \,\mathrm {d}B(E(t)), \end{aligned}$$

where a cubic term appears in the drift coefficient. Moreover, the EM method is proved to diverge for SDEs with super-linearly growing coefficients [5].

To cope with such super-linearity, we propose the semi-implicit EM method to approximate the SDEs driven by time-changed Brownian motions in this paper. It should be noted that the semi-implicit EM method (also called the backward Euler method) has been studied for approximating different types of SDEs driven by Brownian motions; see [3, 4, 11, 12, 17, 21, 24, 26] and the references therein.

Stabilities in different senses for SDEs driven by time-changed Brownian motion have been discussed in [27]. See [19, 20] for related results when the driven process is a time-changed Lévy process. As far as we know, however, there is no result concerning the stability analysis for numerical methods for time-changed SDEs.

In the three papers mentioned above, the global Lipschitz condition was required for the coefficients of the equations. In this paper, building upon the ideas presented in [8], we study the mean square stability of the underlying time-changed SDEs, where the global Lipschitz condition on the drift coefficients is not required. Then, we investigate the capability of the semi-implicit EM method to reproduce this property under similar conditions.

The main contributions of this paper are as follows.

  • The semi-implicit EM method is proved to be strongly convergent for a class of time-changed SDEs and the convergence rate is explicitly given.

  • We establish the mean square stability of the underlying time-changed SDEs. In addition, the numerical solution is proved to be able to preserve such a property.

  • For various Bernstein functions, the decay rate of the stability is observed to be polynomial-like, which is significantly different from classical SDEs but in line with the behaviour of the subdiffusion related to time-changed processes.

The rest of this paper is organized as follows. Section 2 is devoted to some mathematical preliminaries for the time-changed SDEs to be considered in this paper, and some necessary lemmas. The strong convergence of the numerical method is proved in Sect. 3.1, and the mean square stabilities of both underlying and numerical solutions are shown in Sect. 3.2. In Sect. 4, we present numerical simulations to demonstrate the theoretical results derived in Sect. 3.

2 Preliminaries

Throughout this paper, unless otherwise specified, we will use the following notation. Let \(|\cdot |\) be the Euclidean norm in \({\mathbb {R}}^{d}\) and \(\langle x,y \rangle \) be the inner product of vectors \(x,y\in {\mathbb {R}}^{d}\). If A is a vector or matrix, its transpose is denoted by \(A^T\). If A is a matrix, its trace norm is denoted by \(|A|=\sqrt{trace(A^TA)}\). For two real numbers u and v, we use \(u\wedge v=\min (u,v)\) and \(u\vee v=\max (u,v)\).

Moreover, let \((\varOmega , {\mathscr {F}}, {\mathbb {P}})\) be a complete probability space with a filtration \(\left\{ {\mathscr {F}}_t\right\} _{t \ge 0}\) satisfying the usual conditions (that is, it is right continuous and increasing while \({\mathscr {F}}_0\) contains all \({\mathbb {P}}\)-null sets). Let \(B(t)= (B_1(t), B_2(t), \ldots , B_m(t))^T\) be an m-dimensional \({\mathscr {F}}_t\)-adapted standard Brownian motion. Let \({\mathbb {E}}\) denote the expectation under the probability measure \({\mathbb {P}}\).

Let D(t) be an \({\mathscr {F}}_t\)-adapted subordinator (without killing), i.e. a nondecreasing Lévy process on \([0,\infty )\) starting at \(D(0)=0\). The Laplace transform of D(t) is of the form

$$\begin{aligned} {\mathbb {E}}\,\mathrm {e}^{-rD(t)} = \mathrm {e}^{-t \phi (r)},\quad r>0,\,t\ge 0, \end{aligned}$$

where the characteristic (Laplace) exponent \(\phi :(0,\infty )\rightarrow (0,\infty )\) is a Bernstein function with \(\phi (0+):=\lim _{r\downarrow 0}\phi (r)=0\), i.e. a \(C^\infty \)-function such that \((-1)^{n-1}\phi ^{(n)}\ge 0\) for all \(n\in {\mathbb {N}}\). Every such \(\phi \) has a unique Lévy–Khintchine representation

$$\begin{aligned} \phi (r) =\vartheta r+\int _{(0,\infty )}\left( 1-\mathrm {e}^{-rx}\right) \,\nu (\mathrm {d}x),\quad r>0, \end{aligned}$$

where \(\vartheta \ge 0\) is the drift parameter and \(\nu \) is a Lévy measure on \((0,\infty )\) satisfying \(\int _{(0,\infty )}(1\wedge x) \,\nu (\mathrm {d}x)<\infty \). We will focus on the case that \(t\mapsto D(t)\) is a.s. strictly increasing, i.e. \(\vartheta >0\) or \(\nu (0,\infty )=\infty \); obviously, this is also equivalent to \(\phi (\infty ):=\lim _{r\rightarrow \infty }\phi (r)=\infty \).

Let E(t) be the (generalized, right-continuous) inverse of D(t), i.e.

$$\begin{aligned} E(t) := \inf \{ s\ge 0\,;\,D(s) > t \}, \quad t \ge 0. \end{aligned}$$

We call E(t) an inverse subordinator associated with the Bernstein function \(\phi \). Note that \(t\mapsto E(t)\) is a.s. continuous and nondecreasing.

We always assume that B(t) and D(t) are independent. The process B(E(t)) is called a time-changed Brownian motion, which is trapped whenever \(t\mapsto E(t)\) is constant. We remark that the jumps of \(t\mapsto D(t)\) correspond to flat pieces of \(t\mapsto E(t)\). Due to these traps, the time-change slows down the original Brownian motion B(t), and B(E(t)) is understood as a subdiffusion in the literature (cf. [18, 25]).

Consider the following time-changed SDE

$$\begin{aligned} \mathrm {d}X(t) = f(E(t),X(t))\,\mathrm {d}E(t) + g(E(t),X(t))\,\mathrm {d}B(E(t)), \quad t\in [0,T], \end{aligned}$$
(2.1)

with \({\mathbb {E}}|X(0)|^\gamma < \infty \) for any \(\gamma \in (0,\infty )\), where \(f:[0,\infty ) \times {\mathbb {R}}^d \rightarrow {\mathbb {R}}^d \) and \(g:[0,\infty ) \times {\mathbb {R}}^{d} \rightarrow {\mathbb {R}}^{d\times m}\) are measurable coefficients. We will need the following assumptions on the drift and diffusion coefficients.

Assumption 2.1

There exists a constant \(K_1 > 0\) such that, for all \(t\ge 0\) and \(x,y\in {\mathbb {R}}^d\),

$$\begin{aligned} \left\langle x - y, ~ f(t,x) - f(t,y) \right\rangle \le K_1 |x - y|^2. \end{aligned}$$

Assumption 2.2

There exist constants \(K_2 >0\), \(a \ge 2\) and \(\gamma \in (0,2]\) such that, for all \(t,s\ge 0\) and \(x\in {\mathbb {R}}^d\),

$$\begin{aligned} \left| f(t,x) - f(s,x) \right| ^2 \le K_2 \left( 1 + |x|^a \right) |t - s|^{\gamma } \end{aligned}$$

and

$$\begin{aligned} \left| g(t,x) - g(s,x) \right| ^2 \le K_2 \left( 1 + |x|^2 \right) |t - s|^{\gamma }. \end{aligned}$$

Assumption 2.3

Assume that there exist constants \(K_3 > 0\) and \(b\ge 0\) such that, for all \(t\ge 0\) and \(x,y\in {\mathbb {R}}^d\),

$$\begin{aligned} \left| f(t,x) - f(t,y) \right| ^2 \le K_3 \left( 1 + |x|^b + |y|^b \right) |x - y|^2 \end{aligned}$$

and

$$\begin{aligned} \left| g(t,x) - g(t,y) \right| ^2 \le K_3 |x - y|^2. \end{aligned}$$

Assumption 2.4

Assume that there exist constant \(p \ge 2\) and \(K_4 > 0\) such that, for all \(t\ge 0\) and \(x\in {\mathbb {R}}^d\),

$$\begin{aligned} \langle x,~f(t,x) \rangle + \frac{p-1}{2} |g(t,x)|^2 \le K_4 (1 + |x|^2). \end{aligned}$$

To avoid complicated notation, we further assume that both |f(t, 0)| and |g(t, 0)| are bounded. Then, since \(|f(t,x)|^2 \le 2|f(t,x) - f(t,0)|^2 + 2|f(t,0)|^2\) (and similarly for g), Assumption 2.3 implies that there exists a constant \(K_5 > 0\) such that

$$\begin{aligned} |f(t,x)|^2 \le K_5 (1 + |x|^a) \end{aligned}$$
(2.2)

and

$$\begin{aligned} |g(t,x)|^2 \le K_5 (1 + |x|^2) \end{aligned}$$
(2.3)

for all \(t\ge 0\) and \(x\in {\mathbb {R}}^d\).

According to the duality principle in [10], the time-changed SDE (2.1) and the classical SDE of Itô type

$$\begin{aligned} \mathrm {d}Y(t) = f(t,Y(t))\,\mathrm {d}t + g(t,Y(t))\,\mathrm {d}B(t),~~~Y(0) = X(0), \end{aligned}$$
(2.4)

have a deep connection.

The existence and uniqueness of the strong solution to (2.1) can be obtained in a similar manner to Lemma 4.1 in [10]. Although the global Lipschitz condition was assumed in Lemma 4.1 of [10], the proof there did not use this assumption explicitly. Actually, X(t) is a \({\mathscr {G}}_t\)-semimartingale, where \({\mathscr {G}}_t = {\mathscr {F}}_{E(t)}\). Then the existence and uniqueness of the strong solution to (2.1) can be derived from the existence and uniqueness of the strong solution to SDEs driven by semimartingales; see for example [16] and [22]. Following the classical approach, the existence and uniqueness of the strong solution to (2.4) can be obtained; see for example [9] and [15].

The next lemma states the relationship between the solution to (2.1) and the solution to (2.4).

Lemma 2.1

Suppose Assumptions 2.1–2.3 hold. If Y(t) is the unique solution to the SDE (2.4), then the time-changed process Y(E(t)), which is an \({\mathscr {F}}_{E(t)}\)-semimartingale, is the unique solution to the time-changed SDE (2.1). On the other hand, if X(t) is the unique solution to the time-changed SDE (2.1), then the process X(D(t)), which is an \({\mathscr {F}}_t\)-semimartingale, is the unique solution to the SDE (2.4).

The proof of Lemma 2.1 is similar to that of Theorem 4.2 in [10]. It should be mentioned that the global Lipschitz condition was assumed in Theorem 4.2 of [10], but such a condition was just imposed to guarantee the existence and uniqueness of the strong solution to the time-changed SDE.

The plan to numerically approximate the time-changed SDE (2.1) in this paper is as follows. Firstly, we construct the numerical method for the SDE (2.4). Secondly, we discretize the inverse subordinator E(t). Then the composition of the numerical solution of the SDE (2.4) and the discretized inverse subordinator is used to approximate the solution to the time-changed SDE (2.1).

The semi-implicit EM method for (2.4) is defined as

$$\begin{aligned} y_{i+1} = y_i + f(t_{i+1},y_{i+1})h + g(t_i,y_i) \varDelta B_i,\quad i\in {\mathbb {N}}, \end{aligned}$$
(2.5)

with \(y_0 = Y(0)\), where \(\varDelta B_i = B(t_{i+1}) - B(t_i)\) is the Brownian increment, which follows the normal distribution with mean 0 and variance \(h>0\), and \(t_i = ih\).

Note that under Assumption 2.1, the semi-implicit EM method (2.5) is well defined for any \(h \in (0,1/K_1)\) (see for example [17]). To be more precise, this means that given \(y_i\), a unique \(y_{i+1}\) can be found. Throughout the paper, we always assume \(h \in (0,1/K_1)\).

We also define the piecewise continuous numerical solution by \(y(t):= y_i\) for \(t \in [t_i,t_{i+1})\), \(i\in {\mathbb {N}}\).
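To make the scheme concrete, the following Python sketch performs one step of (2.5) in the scalar case and then iterates it along a path. The drift and diffusion of the introductory example are used as stand-ins, and resolving the implicit equation with scipy.optimize.fsolve is only one possible choice (any root finder works, since for \(h < 1/K_1\) the implicit equation has a unique solution); the function names are ours, not from the papers cited above.

```python
import numpy as np
from scipy.optimize import fsolve

def f(t, x):
    # one-sided Lipschitz drift with cubic growth (introductory example)
    return x - x**3

def g(t, x):
    # globally Lipschitz diffusion coefficient
    return x

def semi_implicit_em_step(y_i, t_i, h, dB):
    """One step of (2.5): solve y_{i+1} = y_i + f(t_{i+1}, y_{i+1}) h
    + g(t_i, y_i) dB for y_{i+1} (unique whenever h < 1/K_1)."""
    rhs = y_i + g(t_i, y_i) * dB
    return fsolve(lambda z: z - h * f(t_i + h, z) - rhs, rhs)[0]

# iterate the scheme on [0, 1] with step size h
rng = np.random.default_rng(0)
h, n = 1e-3, 1000
y = np.empty(n + 1)
y[0] = 1.0
for i in range(n):
    y[i + 1] = semi_implicit_em_step(y[i], i * h, h, rng.normal(0.0, np.sqrt(h)))
```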

We follow the idea in [2] to approximate the inverse subordinator E(t) in a time interval [0, T] for any given \(T>0\). Firstly, we simulate the path of D(t) by \(D_h(t_i) = D_h(t_{i-1}) + \varDelta _i\) with \(D_h(0) = 0\), where \(\{\varDelta _i\}\) is an i.i.d. sequence with \(\varDelta _i = D(h)\) in distribution. The procedure is stopped when

$$\begin{aligned} T \in [ D_h(t_{n}), D_h(t_{n+1})), \end{aligned}$$

for some n. Then the approximate \(E_h(t)\) to E(t) is generated by

$$\begin{aligned} E_h(t) = \big (\min \{n; D_h(t_n) > t\} - 1\big )h, \end{aligned}$$
(2.6)

for \(t \in [0,T]\). It is easy to see

$$\begin{aligned} E_h(t) = ih,\quad \text {when }t \in \left[ D_h(t_{i}), D_h(t_{i+1})\right) . \end{aligned}$$
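As an illustration, the following sketch simulates \(D_h\) and evaluates \(E_h(t)\) via (2.6) for the stable subordinator with \(\phi (r)=r^\alpha \). The increments use the Kanter (Chambers–Mallows–Stuck type) representation of a positive \(\alpha \)-stable random variable; the normalisation is our assumption for the convention \({\mathbb {E}}\,\mathrm {e}^{-rD(t)}=\mathrm {e}^{-tr^\alpha }\), and the helper names are ours.

```python
import numpy as np

def subordinator_increments(alpha, h, size, rng):
    """Increments equal in distribution to D(h) for the alpha-stable
    subordinator with E exp(-r D(t)) = exp(-t r**alpha), 0 < alpha < 1."""
    V = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    S = (np.sin(alpha * (V + np.pi / 2)) / np.cos(V) ** (1.0 / alpha)
         * (np.cos(V - alpha * (V + np.pi / 2)) / W) ** ((1.0 - alpha) / alpha))
    return h ** (1.0 / alpha) * S

def discretized_inverse_subordinator(alpha, h, T, rng):
    """Simulate D_h(t_n) until it passes T, then return the values of
    E_h(t) from (2.6) for t on the grid 0, h, 2h, ... below T."""
    D = [0.0]
    while D[-1] <= T:                       # stop once D_h exceeds level T
        D.append(D[-1] + subordinator_increments(alpha, h, 1, rng)[0])
    D = np.asarray(D)                       # D[n] = D_h(t_n), nondecreasing
    t = np.arange(0.0, T, h)
    first = np.searchsorted(D, t, side="right")   # min{n : D_h(t_n) > t}
    return t, (first - 1) * h                     # E_h(t) on the grid
```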

The next lemma quantifies the approximation error of \(E_h(t)\) to E(t); its proof can be found in [8, 13].

Lemma 2.2

Almost surely,

$$\begin{aligned} E(t) - h \le E_h(t) \le E(t) \end{aligned}$$

holds for all \(t>0\).

The following lemma states that the inverse subordinator E(t) has finite exponential moments, which was proved in [8, 14]. Here, we give an alternative proof which, furthermore, provides an explicit upper bound.

Lemma 2.3

For any \(\delta >0\), there exists \(C=C(\delta )>0\) such that

$$\begin{aligned} {\mathbb {E}}\,\mathrm {e}^{\delta E(t)}\le \mathrm {e}^{Ct} \quad \hbox { for all}\ t\ge 1. \end{aligned}$$

Proof

By the definition of E(t), it is clear that

$$\begin{aligned} {\mathbb {P}}\left( E(t) \le s \right) = {\mathbb {P}}\left( D(s) \ge t \right) ,~~~t,s \ge 0. \end{aligned}$$

Note that

$$\begin{aligned} {\mathbb {E}}\,\mathrm {e}^{\delta E(t)}&= \int _0^\infty {\mathbb {P}}\left( \mathrm {e}^{\delta E(t)}> r \right) \,\mathrm {d}r\\&=1+\int _1^\infty {\mathbb {P}}\left( E(t) >\frac{1}{\delta }\log r \right) \,\mathrm {d}r\\&=1+\int _1^\infty {\mathbb {P}}\left( D\left( \frac{1}{\delta }\log r\right)<t \right) \,\mathrm {d}r\\&=1+\delta \int _0^\infty {\mathbb {P}}\left( D(r) < t \right) \mathrm {e}^{\delta r}\,\mathrm {d}r. \end{aligned}$$

Denote by \(\phi ^{-1}\) the inverse function of \(\phi \). By the Chebyshev inequality,

$$\begin{aligned} {\mathbb {P}}\left( D(r) < t \right)&={\mathbb {P}}\left( \mathrm {e}^{-\phi ^{-1}(2\delta ) D(r)} > \mathrm {e}^{-\phi ^{-1}(2\delta ) t} \right) \\&\le \mathrm {e}^{\phi ^{-1}(2\delta ) t}\, {\mathbb {E}}\, \mathrm {e}^{-\phi ^{-1}(2\delta ) D(r)} \\&=\mathrm {e}^{\phi ^{-1}(2\delta ) t} \mathrm {e}^{-r\phi (\phi ^{-1}(2\delta ))}\\&=\mathrm {e}^{\phi ^{-1}(2\delta ) t-2\delta r}. \end{aligned}$$

Thus, for all \(t > 0\),

$$\begin{aligned} {\mathbb {E}}\,\mathrm {e}^{\delta E(t)} \le 1+\delta \mathrm {e}^{\phi ^{-1}(2\delta ) t} \int _0^\infty \mathrm {e}^{-\delta r}\,\mathrm {d}r =1+\mathrm {e}^{\phi ^{-1}(2\delta ) t}, \end{aligned}$$

which immediately implies the assertion. \(\square \)

The following result is taken from [15, Theorem 4.1, p. 59].

Lemma 2.4

Suppose that Assumptions 2.1–2.4 hold. Then the solution to (2.4) satisfies

$$\begin{aligned} {\mathbb {E}}|Y(t)|^p \le 2^{\frac{p-2}{2}} \left( 1 + {\mathbb {E}}|Y(0)|^p\right) \mathrm {e}^{pK_4t}\quad \hbox { for all}\ t\ge 0. \end{aligned}$$

The next lemma is easy; for the sake of completeness and our readers’ convenience, we give a brief proof.

Lemma 2.5

Suppose that Assumptions 2.1–2.4 hold. Then for any \(q\in (1,2p/a]\) and \(t,s\ge 0\) with \(|t - s|\le 1\),

$$\begin{aligned} {\mathbb {E}}|Y(t) - Y(s)|^q \le C |t - s|^{q/2}\mathrm {e}^{Ct}, \end{aligned}$$

where \(C>0\) is a constant independent of t and s.

Proof

For any \(0\le s < t\), we derive from (2.4) that

$$\begin{aligned} Y(t) - Y(s) = \int _s^t f(r,Y(r))\,\mathrm {d}r + \int _s^t g(r,Y(r))\,\mathrm {d}B(r). \end{aligned}$$

When \(q>2\), by the elementary inequality

$$\begin{aligned} \left| \sum _{i=1}^nu_i\right| ^q \le n^{q-1}\sum _{i=1}^n|u_i|^q, \quad u_i\in {\mathbb {R}}^d \end{aligned}$$
(2.7)

with \(n=2\), the Hölder inequality and [15, Theorem 7.1, p. 39], we get

$$\begin{aligned} {\mathbb {E}}|Y(t) - Y(s)|^q&\le |2(t-s)|^{q-1} {\mathbb {E}}\int _s^t \left| f(r,Y(r)) \right| ^q\,\mathrm {d}r\\&\quad + 2^{q/2 - 1} |q(q-1)|^{q/2} |t - s|^{q/2-1}{\mathbb {E}}\int _s^t \left| g(r,Y(r)) \right| ^q\,\mathrm {d}r \end{aligned}$$

Combining this with (2.2), (2.3) and Lemma 2.4, we obtain

$$\begin{aligned} {\mathbb {E}}|Y(t) - Y(s)|^q&\le C\left( |t-s|^q + |t - s|^q \mathrm {e}^{Ct} + |t - s|^{q/2} + |t - s|^{q/2}\mathrm {e}^{Ct}\right) \nonumber \\&\le C |t - s|^{q/2}\mathrm {e}^{Ct}, \end{aligned}$$
(2.8)

where \(C>0\) is a generic constant independent of t and s that may change from line to line.

When \(q\in (1,2]\), the claim holds by (2.8) together with Jensen’s inequality. This completes the proof. \(\square \)

3 Main results

3.1 Strong convergence

Briefly speaking, the following theorem states the strong convergence, with rate \((\gamma \wedge 1)/2\), of the semi-implicit EM method, which is not surprising.

To the best of our knowledge, however, no existing result fulfills our needs in this paper. To be more precise, we need to carefully trace the temporal variable t so that no term like \(t^a\) with \(a>1\) appears in the exponential function on the right hand side of the inequality in the statement of Theorem 3.1. Since t will be replaced by E(t) in Theorem 3.2 and an expectation will be taken, such a term would require bounding \({\mathbb {E}}\,\mathrm {e}^{\delta (E(t))^a}\) with \(a>1\), which may be unbounded; Lemma 2.3 only covers the case \(a=1\).

In addition, it seems that no such result exists on the semi-implicit EM method for non-autonomous SDEs.

Theorem 3.1

Suppose that Assumptions 2.1–2.4 hold with \(p \ge 2 (a\vee b)\) and the step size satisfies \(h <1/(2(K_1+1))\). Then the semi-implicit EM method (2.5) is convergent to (2.4) with

$$\begin{aligned} {\mathbb {E}}\left| Y(t) - y(t) \right| ^2 \le C h^{\gamma \wedge 1} \,\mathrm {e}^{C t},\quad t\ge 0, \end{aligned}$$

where C is a constant independent of t and h.

Proof

From (2.4) and (2.5), it holds that for \(i=1,2,\ldots \),

$$\begin{aligned} Y(t_{i+1}) - y_{i+1} =&\left( Y(t_i) - y_i \right) + \int _{t_i}^{t_{i+1}} \left( f(s,Y(s)) - f(t_{i+1},y_{i+1}) \right) \,\mathrm {d}s\\&+ \int _{t_i}^{t_{i+1}} \left( g(s,Y(s)) - g(t_i,y_i) \right) \,\mathrm {d}B(s). \end{aligned}$$

Taking the inner product of both sides with \(Y(t_{i+1}) - y_{i+1}\) yields

$$\begin{aligned} \left| Y(t_{i+1}) - y_{i+1} \right| ^2 = I_1 + I_2, \end{aligned}$$

where

$$\begin{aligned} I_1&:= \left\langle Y(t_{i+1}) - y_{i+1}, ~\int _{t_i}^{t_{i+1}} \left( f(s,Y(s)) - f(t_{i+1},y_{i+1}) \right) \,\mathrm {d}s\right\rangle \\&=\int _{t_i}^{t_{i+1}} \left\langle Y(t_{i+1}) - y_{i+1}, ~ f(s,Y(s)) - f(t_{i+1},y_{i+1}) \right\rangle \,\mathrm {d}s \end{aligned}$$

and

$$\begin{aligned} I_2 := \left\langle Y(t_{i+1}) - y_{i+1}, ~ \left( Y(t_i) - y_i \right) + \int _{t_i}^{t_{i+1}} \left( g(s,Y(s)) - g(t_i,y_i) \right) \,\mathrm {d}B(s) \right\rangle . \end{aligned}$$

To estimate \(I_1\), we rewrite the integrand of \(I_1\) into three parts

$$\begin{aligned}&\left\langle Y(t_{i+1}) - y_{i+1}, ~ f(s,Y(s)) - f(t_{i+1},y_{i+1}) \right\rangle \\&\quad =\left\langle Y(t_{i+1}) - y_{i+1}, ~ f(t_{i+1},Y(t_{i+1})) - f(t_{i+1},y_{i+1}) \right\rangle \\&\qquad + \left\langle Y(t_{i+1}) - y_{i+1}, ~ f(s,Y(t_{i+1})) - f(t_{i+1},Y(t_{i+1})) \right\rangle \\&\qquad + \left\langle Y(t_{i+1}) - y_{i+1}, ~ f(s,Y(s)) - f(s,Y(t_{i+1})) \right\rangle \\ \nonumber&\quad =:I_{11} + I_{12} + I_{13}. \end{aligned}$$

Using Assumption 2.1, we obtain

$$\begin{aligned} I_{11} \le K_1 \left| Y(t_{i+1}) - y_{i+1} \right| ^2. \end{aligned}$$

Applying the elementary inequality

$$\begin{aligned} \langle u,v\rangle \le \frac{|u|^2+|v|^2}{2}, \quad u,v\in {\mathbb {R}}^d, \end{aligned}$$
(3.1)

we have

$$\begin{aligned} I_{12} \le \frac{1}{2} \left| Y(t_{i+1}) - y_{i+1} \right| ^2 + \frac{1}{2} \left| f(s,Y(t_{i+1})) - f(t_{i+1},Y(t_{i+1})) \right| ^2. \end{aligned}$$

By Assumption 2.2, we can see

$$\begin{aligned} \left| f(s,Y(t_{i+1})) - f(t_{i+1},Y(t_{i+1})) \right| ^2 \le K_2 \left( 1 + |Y(t_{i+1})|^a \right) |s - t_{i+1}|^{\gamma }. \end{aligned}$$

Thus,

$$\begin{aligned} I_{12} \le \frac{1}{2} \left| Y(t_{i+1}) - y_{i+1} \right| ^2 + \frac{K_2}{2} \left( 1 + |Y(t_{i+1})|^a \right) |s - t_{i+1}|^{\gamma }. \end{aligned}$$

Applying the elementary inequality (3.1) and Assumption 2.3 gives

$$\begin{aligned} I_{13}&\le \frac{1}{2} \left| Y(t_{i+1}) - y_{i+1} \right| ^2 + \frac{1}{2} \left| f(s,Y(s)) - f(s,Y(t_{i+1})) \right| ^2 \\&\le \frac{1}{2} \left| Y(t_{i+1}) - y_{i+1} \right| ^2 + \frac{K_3}{2} \left( 1 + |Y(s)|^b + |Y(t_{i+1})|^b \right) \left| Y(s) - Y(t_{i+1}) \right| ^2. \end{aligned}$$

Combining the upper bound estimates of \(I_{11}\), \(I_{12}\) and \(I_{13}\), we conclude that

$$\begin{aligned} \begin{aligned} I_1&\le \int _{t_i}^{t_{i+1}} \bigg ( (K_1 + 1) \left| Y(t_{i+1}) - y_{i+1} \right| ^2 + \frac{K_2}{2} \left( 1 + |Y(t_{i+1})|^a \right) |s - t_{i+1}|^{\gamma }\\&\quad + \frac{K_3}{2} \left( 1 + |Y(s)|^b + |Y(t_{i+1})|^b \right) \left| Y(s) - Y(t_{i+1}) \right| ^2 \bigg )\,\mathrm {d}s. \end{aligned} \end{aligned}$$
(3.2)

By the Hölder inequality, we find

$$\begin{aligned}&{\mathbb {E}}\bigg ( \left( 1 + |Y(s)|^b + |Y(t_{i+1})|^b \right) \left| Y(s) - Y(t_{i+1}) \right| ^2 \bigg )\\&\qquad \qquad \le \left( {\mathbb {E}}\left( 1 + |Y(s)|^b + |Y(t_{i+1})|^b \right) ^2\right) ^{1/2} \left( {\mathbb {E}}\left| Y(s) - Y(t_{i+1}) \right| ^4 \right) ^{1/2}. \end{aligned}$$

Taking expectations on both sides of (3.2) and applying Lemmas 2.4 and 2.5, we obtain

$$\begin{aligned} \begin{aligned} {\mathbb {E}}I_1&\le (K_1 + 1)h {\mathbb {E}}\left| Y(t_{i+1}) - y_{i+1} \right| ^2 + C h^{\gamma + 1} + C h^{\gamma + 1}\,\mathrm {e}^{C t_{i+1}} + C h^2\,\mathrm {e}^{Ct_{i+1}}\\&\le (K_1 + 1) h {\mathbb {E}}\left| Y(t_{i+1}) - y_{i+1} \right| ^2 + C h^{\gamma + 1}\,\mathrm {e}^{Ct_{i+1}}, \end{aligned} \end{aligned}$$
(3.3)

where (and in what follows) C is a generic constant independent of i and the step size h that may change from line to line.

Next, we bound \(I_2\). Applying the elementary inequality (3.1) again, we have

$$\begin{aligned} I_2&\le \frac{1}{2} \left| Y(t_{i+1}) - y_{i+1} \right| ^2 + \frac{1}{2} \left| \left( Y(t_i) - y_i \right) + \int _{t_i}^{t_{i+1}} \left( g(s,Y(s)) - g(t_i,y_i) \right) \,\mathrm {d}B(s) \right| ^2 \\&=:\frac{1}{2} \left| Y(t_{i+1}) - y_{i+1} \right| ^2 + \frac{1}{2} I_{21}. \end{aligned}$$

Taking expectation on both sides and using the Itô isometry, it follows that

$$\begin{aligned} {\mathbb {E}}I_{21} = {\mathbb {E}}|Y(t_i) - y_i|^2 +{\mathbb {E}}\int _{t_i}^{t_{i+1}} \left| g(s,Y(s)) - g(t_i,y_i) \right| ^2\,\mathrm {d}s. \end{aligned}$$

Rewriting the integrand of the second term on the right hand side, and using the elementary inequality (2.7) with \(n=3\) and \(q=2\) and Assumptions 2.2 and 2.3, we can see

$$\begin{aligned} \left| g(s,Y(s)) - g(t_i,y_i) \right| ^2&\le 3 \bigg ( \left| g(s,Y(s)) - g(s,Y(t_i)) \right| ^2 + \left| g(s,Y(t_i)) - g(t_i,Y(t_i)) \right| ^2\\&\quad + \left| g(t_i,Y(t_i)) - g(t_i,y_i) \right| ^2 \bigg ) \\&\le 3 \left( K_3 |Y(s) - Y(t_i)|^2 + K_2 (1 + |Y(t_i)|^2)|s - t_i|^{\gamma } + K_3 |Y(t_i) - y_i|^2 \right) . \end{aligned}$$

Now applying Lemmas 2.4 and 2.5 gives

$$\begin{aligned} {\mathbb {E}}I_2 \le \frac{1}{2}\,{\mathbb {E}}\left| Y(t_{i+1}) - y_{i+1} \right| ^2 + \frac{1+3K_3 h}{2}\, {\mathbb {E}}\left| Y(t_i) - y_i \right| ^2 + C h^{(1+\gamma )\wedge 2 }\,\mathrm {e}^{C t_{i+1}}.\nonumber \\ \end{aligned}$$
(3.4)

Combining (3.3) and (3.4) yields

$$\begin{aligned} {\mathbb {E}}\left| Y(t_{i+1}) - y_{i+1} \right| ^2 \le&\left( \frac{1}{2} + h(K_1 + 1)\right) {\mathbb {E}}\left| Y(t_{i+1}) - y_{i+1} \right| ^2 \\&+ \frac{1+3K_3 h}{2}\,{\mathbb {E}}\left| Y(t_i) - y_i \right| ^2 + C h^{(1+\gamma )\wedge 2 }\,\mathrm {e}^{C t_{i+1}}, \end{aligned}$$

which implies that

$$\begin{aligned} {\mathbb {E}}\left| Y(t_{i+1}) - y_{i+1} \right| ^2 \le \frac{1+3K_3h}{1 - 2h(K_1 + 1)}\left( {\mathbb {E}}\left| Y(t_i) - y_i \right| ^2 + Ch^{(1+\gamma )\wedge 2}\,\mathrm {e}^{C t_{i}}\right) . \end{aligned}$$

Now summing both sides from 0 to \(i-1\) yields

$$\begin{aligned} \sum _{l=1}^i {\mathbb {E}}\left| Y(t_{l}) - y_{l} \right| ^2 \le \frac{1+3K_3h}{1 - 2h(K_1 + 1)} \left( \sum _{l=0}^{i-1}{\mathbb {E}}\left| Y(t_{l}) - y_{l} \right| ^2 + iC h^{(1+\gamma )\wedge 2}\,\mathrm {e}^{C t_{i}}\right) . \end{aligned}$$

Due to the fact that \(ih = t_i \le \mathrm {e}^{C t_{i}}\), after cancelling the common terms on both sides we can derive

$$\begin{aligned} {\mathbb {E}}\left| Y(t_i) - y_i \right| ^2 \le \frac{h(3K_3 + 2K_1 +2)}{1 - 2h(K_1 + 1)} \sum _{l=0}^{i-1} {\mathbb {E}}\left| Y(t_{l}) - y_{l} \right| ^2 + C h^{\gamma \wedge 1 }\,\mathrm {e}^{C t_{i}}. \end{aligned}$$

By the discrete version of the Gronwall inequality, we have

$$\begin{aligned} {\mathbb {E}}\left| Y(t_i) - y_i \right| ^2 \le C h^{\gamma \wedge 1}\,\mathrm {e}^{C t_{i}}. \end{aligned}$$
(3.5)

Moreover, when \(t \in [t_i,t_{i+1})\) for some \(i=1,2,\ldots \), Lemma 2.5 and (3.5) yield

$$\begin{aligned} {\mathbb {E}}\left| Y(t) - y(t) \right| ^2&={\mathbb {E}}\left| Y(t) - y_i \right| ^2\\&\le 2 {\mathbb {E}}\left| Y(t) - Y(t_i) \right| ^2 + 2 {\mathbb {E}}\left| Y(t_i) - y_i \right| ^2 \\&\le Ch\,\mathrm {e}^{Ct}+C h^{\gamma \wedge 1}\,\mathrm {e}^{Ct_i}\\&\le C h^{\gamma \wedge 1}\,\mathrm {e}^{C t}. \end{aligned}$$

Therefore, the proof is completed. \(\square \)

Theorem 3.2

Suppose that Assumptions 2.1–2.4 hold with \(p > 2 (a\vee b)\) and the step size satisfies \(h <1/(2(K_1+1))\). Then the composition of the semi-implicit EM solution, y(t), and the discretized inverse subordinator, \(E_h(t)\), i.e. \(y(E_h(t))\), converges strongly to the solution of (2.1) with

$$\begin{aligned} {\mathbb {E}}\left| X(T) - y(E_h(T))\right| ^2 \le C h^{\gamma \wedge 1}\mathrm {e}^{C T}, \end{aligned}$$

where C is a constant independent of T and h.

Proof

By Lemma 2.1 and (2.7) with \(n=2\) and \(q=2\),

$$\begin{aligned} {\mathbb {E}}\left| X(T) - y(E_h(T))\right| ^2&= {\mathbb {E}}\left| Y(E(T)) - y(E_h(T))\right| ^2 \\&\le 2 {\mathbb {E}}\left| Y(E(T)) - Y(E_h(T))\right| ^2 + 2 {\mathbb {E}}\left| Y(E_h(T)) - y(E_h(T))\right| ^2. \end{aligned}$$

By Lemmas 2.2, 2.3 and 2.5, we can see

$$\begin{aligned} {\mathbb {E}}\left| Y(E(T)) - Y(E_h(T))\right| ^2 \le C h\,{\mathbb {E}}\,\mathrm {e}^{C E(T)}\le C h\mathrm {e}^{C(T\vee 1)}. \end{aligned}$$
(3.6)

On the other hand, it holds from Lemmas 2.2 and 2.3 and Theorem 3.1 that

$$\begin{aligned} {\mathbb {E}}\left| Y(E_h(T)) - y(E_h(T))\right| ^2 \le C h^{\gamma \wedge 1}\,{\mathbb {E}}\,\mathrm {e}^{C E_h(T)} \le C h^{\gamma \wedge 1}\,{\mathbb {E}}\,\mathrm {e}^{C E(T)} \le C h^{\gamma \wedge 1}\mathrm {e}^{C(T\vee 1)}. \end{aligned}$$
(3.7)

Combining (3.6) and (3.7), we obtain the required assertion. \(\square \)

3.2 Stability

In this section, we always assume the existence and uniqueness of the solutions to (2.1) and (2.4). In fact, Assumptions 2.1–2.3 are sufficient to guarantee this, but we do not use them explicitly.

A function \(F:(0,\infty )\rightarrow (0,\infty )\) is said to be regularly varying at zero with index \(\alpha \in {\mathbb {R}}\) if for any \(c>0\),

$$\begin{aligned} \lim _{s\downarrow 0}\frac{F(cs)}{F(s)} =c^\alpha . \end{aligned}$$

Denote by \({\text {RV}}_\alpha \) the class of all regularly varying functions at 0. A function \(F\in {\text {RV}}_0\) is said to be slowly varying at 0. It is clear that every \(F\in {\text {RV}}_\alpha \) can be rewritten as

$$\begin{aligned} F(s)=s^\alpha \ell (s), \end{aligned}$$

where \(\ell \) is a slowly varying function at 0.

In the following, we will assume that the Bernstein function \(\phi \in {\text {RV}}_\alpha \) with \(\alpha \in (0,1)\). Typical examples are

  • Let \(\phi (r)=r^\alpha \log ^\beta (1+r)\) with \(0<\alpha <1\) and \(0\le \beta <1-\alpha \). Then \(\phi \in {\text {RV}}_{\alpha +\beta }\);

  • Let \(\phi (r)=r^\alpha \log ^{-\beta }(1+r)\) with \(0<\beta<\alpha <1\). Then \(\phi \in {\text {RV}}_{\alpha -\beta }\);

  • Let \(\phi (r)=\log \left( 1+r^\alpha \right) \) with \(0<\alpha <1\). Then \(\phi \in {\text {RV}}_{\alpha }\);

  • Let \(\phi (r)=r^\alpha (1+r)^{-\alpha }\) with \(0<\alpha <1\). Then \(\phi \in {\text {RV}}_{\alpha }\).

We refer the reader to [23, Chapter 16] for more examples of such Bernstein functions.
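These regular-variation indices are easy to check numerically; below is a minimal sketch (the sample function and the probe points are illustrative choices) based on the defining limit \(\phi (cs)/\phi (s)\rightarrow c^\alpha \) as \(s\downarrow 0\).

```python
import numpy as np

# Estimate the regular-variation index at 0: for phi in RV_alpha,
# log(phi(c*s) / phi(s)) / log(c) tends to alpha as s -> 0.
phi = lambda r: r ** 0.7 * np.log(1.0 + r) ** 0.2   # first example; index 0.9
c = 2.0
for s in (1e-2, 1e-4, 1e-6):
    print(s, np.log(phi(c * s) / phi(s)) / np.log(c))   # approaches 0.9
```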

Lemma 3.1

If the Bernstein function \(\phi \in {\text {RV}}_{\alpha }\) with \(\alpha \in (0,1)\), then for any \(\lambda >0\)

$$\begin{aligned} \lim _{t\rightarrow \infty } \frac{\log {\mathbb {E}}\,\mathrm {e}^{-\lambda E(t)}}{\log t} =-\alpha . \end{aligned}$$

Proof

Denote by \({\mathcal {L}}_t[F(t)]\) the Laplace transform of a function F(t). It follows from [8, (3.10)] that for any \(s>0\) and \(\lambda >0\),

$$\begin{aligned} {\mathcal {L}}_t\left[ {\mathbb {E}}\,\mathrm {e}^{-\lambda E(t)}\right] (s) =\frac{\phi (s)}{s[\phi (s)+\lambda ]}. \end{aligned}$$

Since \(\phi \in {\text {RV}}_{\alpha }\), we get

$$\begin{aligned} s{\mathcal {L}}_t\left[ {\mathbb {E}}\,\mathrm {e}^{-\lambda E(t)}\right] (s) =\frac{\phi (s)}{\phi (s)+\lambda } \,\,\sim \,\,\frac{\phi (s)}{\lambda } =\frac{1}{\lambda }\,s^\alpha \ell (s), \quad s\downarrow 0, \end{aligned}$$

where \(\ell \) is a slowly varying function at 0. Combining this with Karamata’s Tauberian theorem (cf. [1, Theorem 1.7.6]), it holds that

$$\begin{aligned} {\mathbb {E}}\,\mathrm {e}^{-\lambda E(t)}\,\,\sim \,\,\frac{1}{\lambda \Gamma (1-\alpha )} \,t^{-\alpha }\ell \left( \frac{1}{t}\right) , \quad t\rightarrow \infty . \end{aligned}$$
(3.8)

Noting that \(t\mapsto \ell (1/t)\) is slowly varying at \(\infty \), one has (see [1, Proposition 1.3.6 (i)])

$$\begin{aligned} \lim _{t\rightarrow \infty }\frac{ {\log } \ell (1/t)}{\log t} =0, \end{aligned}$$

which, together with (3.8), implies the desired limit. \(\square \)
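The limit in Lemma 3.1 can also be probed by Monte Carlo with the discretization of Sect. 2; the sketch below reuses the hypothetical helper discretized_inverse_subordinator from there and estimates \(\log {\mathbb {E}}\,\mathrm {e}^{-\lambda E_h(t)}/\log t\). Because of the slowly varying factor, the convergence to \(-\alpha \) is only logarithmic, so the printed values are rough.

```python
import numpy as np

rng = np.random.default_rng(3)
alpha, lam, h, n_paths = 0.9, 1.0, 1e-2, 500

for t in (10.0, 50.0, 250.0):
    vals = []
    for _ in range(n_paths):
        _, E_h = discretized_inverse_subordinator(alpha, h, t, rng)
        vals.append(np.exp(-lam * E_h[-1]))        # e^{-lambda E_h(t)} near time t
    print(t, np.log(np.mean(vals)) / np.log(t))    # should drift towards -alpha
```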

Theorem 3.3

Assume that the Bernstein function \(\phi \in {\text {RV}}_{\alpha }\) with \(\alpha \in (0,1)\), and that there exists a constant \(L_1 > 0\) such that

$$\begin{aligned} \langle x,~f(t,x) \rangle + \frac{1}{2} |g(t,x)|^2 \le -L_1 |x|^2,\quad (x,t) \in {\mathbb {R}}^d \times [0,\infty ). \end{aligned}$$
(3.9)

Then

$$\begin{aligned} \limsup _{t \rightarrow \infty } \frac{\log {\mathbb {E}}|X(t)|^2}{\log t} \le -\alpha . \end{aligned}$$

In other words, the solution to (2.1) is mean square polynomially stable.

Proof

Given (3.9), by [15, Theorem 4.4, p. 130] we know that the solution to (2.4) is mean square exponentially stable:

$$\begin{aligned} {\mathbb {E}}|Y(t)|^2 \le \mathrm {e}^{-L_1 t}\,{\mathbb {E}}|Y(0)|^2. \end{aligned}$$

Using Lemma 2.1, we obtain

$$\begin{aligned} {\mathbb {E}}|X(t)|^2 = {\mathbb {E}}|Y(E(t))|^2 \le {\mathbb {E}}\, \mathrm {e}^{-L_1 E(t)} \,\cdot {\mathbb {E}}|Y(0)|^2. \end{aligned}$$

It remains to apply Lemma 3.1 to complete the proof. \(\square \)

Remark 3.1

It is interesting to observe that the time-changed SDE (2.1) is polynomially stable while the dual SDE (2.4) is stable at an exponential rate. This may be due to the effect of the time-changed Brownian motion, which slows down the diffusion.

Now, we present our result about the stability of the semi-implicit EM method.

Theorem 3.4

Assume that the Bernstein function \(\phi \in {\text {RV}}_{\alpha }\) with \(\alpha \in (0,1)\), and that there exist positive constants \(L_2\) and \(L_3\) with \(2L_2 > L_3\) such that

$$\begin{aligned} \langle x,~f(t,x) \rangle \le - L_2 |x|^2 \text { and } |g(t,x)|^2 \le L_3 |x|^2, \quad (x,t) \in {\mathbb {R}}^d \times [0,\infty ). \end{aligned}$$
(3.10)

Then

$$\begin{aligned} \limsup _{t \rightarrow \infty } \frac{\log {\mathbb {E}}|Y(E_h(t))|^2 }{\log t} \le -\alpha . \end{aligned}$$

That is to say, the numerical solution to (2.1) is mean square polynomially stable.

Proof

Assuming that (3.10) holds with \(2L_2 > L_3\), the standard approach (see for example [17]) gives

$$\begin{aligned} {\mathbb {E}}|Y(t_i)|^2 \le \mathrm {e}^{-L_4 t_i}\,{\mathbb {E}}|Y(0)|^2, \end{aligned}$$

where \(L_4 = (2L_2 - L_3)/(1 + 2 L_2)\). Now, replacing \(t_i\) by \(E_h(t)\) and using Lemma 2.2, we have

$$\begin{aligned} {\mathbb {E}}|Y(E_h(t))|^2 \le {\mathbb {E}}|Y(0)|^2\,\cdot {\mathbb {E}}\,\mathrm {e}^{-L_4 E_h(t)} \le {\mathbb {E}}|Y(0)|^2\,\cdot \mathrm {e}^{L_4h}\,\cdot {\mathbb {E}}\,\mathrm {e}^{-L_4 E(t)}. \end{aligned}$$

Now, the application of Lemma 3.1 yields the desired assertion. \(\square \)

Remark 3.2

It is not hard to see that (3.10) together with \(2L_2 > L_3\) implies (3.9) in Theorem 3.3. Hence, it can be seen from Theorem 3.4 that the semi-implicit EM method can preserve the mean square polynomial stability of the underlying time-changed SDE.

Fig. 1 Numerical simulations of D(t), E(t) and X(t)

Fig. 2 The \(L^1\) errors between the exact solution and the numerical solutions for step sizes \(\varDelta =10^{-2},~10^{-3},~10^{-4}\)

4 Numerical simulations

In this section, we will present two numerical examples. The first example is used to illustrate the strong convergence as well as the convergence rate. The second example demonstrates the mean square stability of the numerical solution. Throughout this section, we focus on the case that E(t) is an inverse 0.9-stable subordinator with Bernstein function \(\phi (r)=r^{0.9}\).

Example 4.1

A one-dimensional nonlinear autonomous time-changed SDE

$$\begin{aligned} \mathrm {d}X(t) = \left( X(t) - X^3(t)\right) \,\mathrm {d}E(t) + X(t) \,\mathrm {d}B(E(t)), \quad \hbox { with}\ X(0)=1, \end{aligned}$$
(4.1)

is considered.

It is not hard to check that Assumptions 2.1–2.4 hold for (4.1). Therefore, by Theorem 3.2 the numerical solution proposed in this paper is strongly convergent to the underlying solution with rate 1/2.

For a given step size h, one path of the numerical solution to (4.1) is simulated in the following way (a code sketch of the whole procedure is given after the steps).

Step 1 The semi-implicit EM method with step size h is used to simulate the numerical solution \(y(t) = y_i\) for \(t \in [ih, (i+1)h)\), \(i=0,1,2,\ldots \), to the dual SDE

$$\begin{aligned} \mathrm {d}Y(t) = \left( Y(t) - Y^3(t)\right) \,\mathrm {d}t + Y(t)\, \mathrm {d}B(t), \quad \hbox { with}\ Y(0)=1. \end{aligned}$$

Step 2 One path of the subordinator D(t) is simulated with the same step size h (see for example [7]).

Step 3 The \(E_h(t)\) is found by using (2.6).

Step 4 The composition, \(y(E_h(t))\), is used to approximate (4.1).
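Under the assumptions of the sketches in Sect. 2 (in particular the hypothetical helper discretized_inverse_subordinator), the four steps can be combined as follows; the inner loop resolves the implicit cubic equation of the dual SDE at each step.

```python
import numpy as np
from scipy.optimize import fsolve

rng = np.random.default_rng(7)
alpha, h, T = 0.9, 1e-3, 1.0

# Steps 2-3: path of D_h and the discretized inverse subordinator on [0, T]
t_grid, E_h = discretized_inverse_subordinator(alpha, h, T, rng)
idx = np.rint(E_h / h).astype(int)        # E_h(t) = idx * h

# Step 1: semi-implicit EM path of the dual SDE dY = (Y - Y^3)dt + Y dB
n = idx.max()
y = np.empty(n + 1)
y[0] = 1.0
for i in range(n):
    dB = rng.normal(0.0, np.sqrt(h))
    rhs = y[i] + y[i] * dB
    y[i + 1] = fsolve(lambda z: z - h * (z - z**3) - rhs, rhs)[0]

# Step 4: compose to approximate the solution to (4.1)
X_h = y[idx]                              # X_h(t) = y(E_h(t)) on t_grid
```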

One path of the 0.9-stable subordinator D(t) is plotted with step size \(h = 10^{-6}\) in Fig. 1a. The corresponding inverse subordinator E(t) is drawn in Fig. 1b. One path of the numerical solution to Example 4.1 is displayed in Fig. 1c.

Fig. 3 Stabilities of numerical solutions

Now we illustrate the strong convergence and the convergence rate. Since the explicit form of the true solution to (4.1) is hard to obtain, the numerical solution with a small step size, \(h_0= 10^{-6}\), is regarded as the true solution. The step sizes \(h=10^{-2}\), \(10^{-3}\) and \(10^{-4}\) are used to calculate the numerical solutions. For a given step size h, the \(L^1\) strong error is calculated by

$$\begin{aligned} \frac{1}{N} \sum _{i=1}^{N} \left| y_i(E_{h_0}(T)) - y_i(E_h(T)) \right| . \end{aligned}$$

Two hundred (\(N=200\)) sample paths are used to draw the log-log plot of the \(L^1\) error against the step sizes in Fig. 2. The red solid line is the reference line with slope 1/2. It can be seen that the strong convergence rate is approximately 1/2. A simple regression also shows that the rate is 0.4996, which is in line with the theoretical one.
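A small helper of the kind used to produce these numbers (the array names are placeholders, not from our actual scripts) might look like:

```python
import numpy as np

def l1_error_and_rate(hs, ref_vals, approx_vals):
    """hs: step sizes; ref_vals: N terminal values y_i(E_{h0}(T)) of the
    reference solution; approx_vals: len(hs)-by-N terminal values y_i(E_h(T)).
    Returns the L^1 errors and the slope of the log-log regression."""
    errs = np.mean(np.abs(approx_vals - ref_vals[None, :]), axis=1)
    slope, _ = np.polyfit(np.log(hs), np.log(errs), 1)
    return errs, slope
```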

Example 4.2

A one-dimensional nonlinear time-changed SDE

$$\begin{aligned} \mathrm {d}X(t) = \left( -X(t) - X^3(t)\right) \,\mathrm {d}E(t) + X(t) \,\mathrm {d}B(E(t)), \quad \hbox { with}\ X(0)=5, \end{aligned}$$
(4.2)

is considered.

It is not hard to check that (3.9) is satisfied; thus the underlying time-changed SDE is stable in the mean square sense. In addition, the fact that (3.10) holds for (4.2) indicates that the numerical solution is also mean square stable.

One hundred paths are used to draw the mean square of the numerical solutions from \(t=0\) to \(t=10\). It is clear in Fig. 3a that the second moment of the solution tends to 0 as the time t advances, which indicates that the numerical solution is mean square stable. In addition, five sample paths are displayed in Fig. 3b.
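For completeness, the following Monte Carlo sketch (again reusing the hypothetical helper from the Sect. 2 sketch) estimates the second moment curve of Fig. 3a for (4.2).

```python
import numpy as np
from scipy.optimize import fsolve

rng = np.random.default_rng(1)
alpha, h, T, n_paths = 0.9, 1e-2, 10.0, 100
t_grid = np.arange(0.0, T, h)
second_moment = np.zeros_like(t_grid)

for _ in range(n_paths):
    _, E_h = discretized_inverse_subordinator(alpha, h, T, rng)
    idx = np.rint(E_h / h).astype(int)
    n = idx.max()
    y = np.empty(n + 1)
    y[0] = 5.0                             # X(0) = 5 in (4.2)
    for i in range(n):
        dB = rng.normal(0.0, np.sqrt(h))
        rhs = y[i] + y[i] * dB
        # implicit equation for the drift -y - y^3 of (4.2)
        y[i + 1] = fsolve(lambda z: z + h * (z + z**3) - rhs, rhs)[0]
    second_moment += y[idx] ** 2 / n_paths
# second_moment decays to 0 polynomially in t (cf. Theorem 3.4)
```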