1 Introduction

The direct method, also called the Lyapunov function method, has become the most widely used tool for studying the exponential stability of stochastic equations. For differential equations, we mention the interesting book of Khas’minskii [11], which gives a necessary and sufficient criterion for the almost sure exponential stability of linear Itô equations and opened a new chapter in stochastic stability theory. Since then, many mathematicians have devoted their attention to the theory of stochastic stability; we mention Arnold [1], Baxendale [2], Kolmanovskii [12], Mohammed [19], Pardoux [20], Pinsky [22], among others. Most of these works were restricted to the stability of classical Itô stochastic differential equations.

In 1989, Mao published the papers [15, 16], which can be considered the first works concerning the stability of stochastic differential equations with respect to semimartingales. For the stability of nonlinear random difference systems, we refer to [21, 23,24,25].

On the other hand, in order to unify the theories of differential and difference equations into a single framework, the theory of analysis on time scales has received much attention from many research groups. While the stability theory for deterministic dynamic equations on time scales has been investigated for a long time (see [3, 13, 18, 26]), as far as we know, only very few papers [4, 8] deal with the stochastic stability and the almost sure exponential stability of stochastic dynamic equations on time scales. In [8], the authors studied the exponential P-stability of stochastic ∇-dynamic equations on time scales via Lyapunov functions. Continuing these ideas, we investigate the stochastic stability and the almost sure exponential stability of ∇-stochastic dynamic equations on a time scale \(\mathbb {T}\)

$$\begin{array}{@{}rcl@{}} d^{\nabla} X(t)=f(t, X(t_{-}))d^{\nabla} t+g(t,X(t_{-}))d^{\nabla} M(t)\\ X(a)=x_{a}\in\mathbb{R}^{d}, \quad t\in\mathbb{T}_{a}, \end{array} $$

where \((M_{t})_{t\in \mathbb {T}_{a}}\) is an \(\mathbb {R}\)-valued square integrable martingale and \(f: \mathbb {T}_{a}\times \mathbb {R}^{d}\to \mathbb {R}^{d}\) and \(g:\mathbb {T}_{a}\times \mathbb {R}^{d}\to \mathbb {R}^{d}\) are two Borel functions. This work can be considered a unification and generalization of earlier work on the stability of stochastic difference and differential equations.

The organization of this paper is as follows. Section 2 surveys basic notation and properties of analysis on time scales. Section 3 is devoted to definitions and results on the stochastic stability of ∇-stochastic dynamic equations. The last section deals with theorems and corollaries concerning the almost sure exponential stability of ∇-stochastic dynamic equations on time scales. Examples are also provided to illustrate our results.

2 Preliminaries on Time Scales

Let \(\mathbb {T}\) be a closed subset of \(\mathbb {R}\), endowed with the topology inherited from the standard topology on \(\mathbb {R}\). Let \(\sigma (t)=\inf \{s\in \mathbb {T}: s>t\}, \mu (t)=\sigma (t)-t\) and \(\rho (t)=\sup \{s\in \mathbb {T}: s<t\}, \nu (t)=t-\rho (t)\) (supplemented by \(\sup \emptyset =\inf \mathbb {T}, \inf \emptyset =\sup \mathbb {T}\)). A point \( t\in \mathbb {T}\) is said to be right-dense if σ(t) = t, right-scattered if σ(t) > t, left-dense if ρ(t) = t, left-scattered if ρ(t) < t, and isolated if t is simultaneously right-scattered and left-scattered. The set \(_{k}\mathbb {T}\) is defined to be \(\mathbb {T}\) if \(\mathbb {T}\) does not have a right-scattered minimum; otherwise it is \(\mathbb {T}\) without this right-scattered minimum. A function f defined on \(\mathbb {T}\) is regulated if its left-sided limit exists at every left-dense point and its right-sided limit exists at every right-dense point. A regulated function is called ld-continuous if it is continuous at every left-dense point; similarly, one has the notion of rd-continuity. For every \(a,b\in \mathbb {T}\), by [a,b] we mean the set \(\{t\in \mathbb {T}: a\leq t\leq b\}\). Denote \(\mathbb {T}_{a}=\{t\in \mathbb {T}: t\geq a\}\) and by \(\mathcal {R}\) (resp. \(\mathcal {R}^{+}\)) the set of all rd-continuous and regressive (resp. positive regressive) functions. For any function f defined on \(\mathbb {T}\), we write \(f^{\rho }\) for the composition \(f\circ \rho \); i.e., \(f^{\rho }_{t} = f(\rho (t))\) for all \(t\in \,_{k}\mathbb {T}\), and we write \(f(t_{-})\) or \(f_{t_{-}}\) for \(\lim _{\sigma (s)\uparrow t}f(s)\) if this limit exists. It is easy to see that if t is left-scattered then \(f_{t_{-}}=f^{\rho }_{t}\). Let

$$\mathbb{I}=\{ t\in\mathbb{T}: t \text{ is left-scattered}\}. $$

Clearly, the set \(\mathbb {I}\) of all left-scattered points of \(\mathbb {T}\) is at most countable.
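The operators σ, ρ, μ, ν defined above are easy to compute on a concrete finite time scale; the following is a minimal Python sketch (our own illustration, not from the paper), representing \(\mathbb {T}\) as a sorted list of points.

```python
# Sketch (our illustration): jump operators and graininess on a finite
# time scale T, stored as a sorted list of real numbers.

def sigma(T, t):
    """Forward jump operator: inf{s in T : s > t}; returns t if no such s."""
    later = [s for s in T if s > t]
    return min(later) if later else t

def rho(T, t):
    """Backward jump operator: sup{s in T : s < t}; returns t if no such s."""
    earlier = [s for s in T if s < t]
    return max(earlier) if earlier else t

def mu(T, t):
    """Forward graininess mu(t) = sigma(t) - t."""
    return sigma(T, t) - t

def nu(T, t):
    """Backward graininess nu(t) = t - rho(t)."""
    return t - rho(T, t)

# T = {0, 1} together with a grid approximating the interval [2, 3]:
T = [0.0, 1.0] + [2.0 + 0.1 * k for k in range(11)]
# The point 1.0 is isolated: right-scattered (sigma(1.0) = 2.0) and
# left-scattered (rho(1.0) = 0.0), so nu(1.0) = 1.0.
```

The returned value `t` for an empty set mirrors the paper's convention \(\sup \emptyset =\inf \mathbb {T}\), \(\inf \emptyset =\sup \mathbb {T}\) at the endpoints of \(\mathbb {T}\).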

Throughout this paper, we suppose that the time scale \(\mathbb {T}\) has bounded graininess, that is \(\nu ^{*}=\sup \{\nu (t):t\in \,_{k}\mathbb {T}\}<\infty \).

Let A be an increasing right continuous function defined on \(\mathbb {T}\). We denote by \(\mu _{\nabla }^{A}\) the Lebesgue ∇-measure associated with A. For any \(\mu _{\nabla }^{A}\)-measurable function \(f: \mathbb {T}\to \mathbb {R}\) we write \({{\int }_{a}^{t}}f_{\tau }\nabla A_{\tau }\) for the integral of f with respect to the measure \(\mu _{\nabla }^{A}\) on (a,t]. The function \(t\mapsto {{\int }_{a}^{t}}f_{\tau }\nabla A_{\tau }\) is cadlag; it is continuous if A is continuous. In case A(t) ≡ t we write simply \({{\int }_{a}^{t}}f_{\tau }\nabla \tau \) for \({{\int }_{a}^{t}}f_{\tau }\nabla A_{\tau }\). For details, we refer to [5]. If the integrand f is regulated then

$${{\int}_{a}^{b}}f(\tau_{-})\nabla \tau={{\int}_{a}^{b}}f(\tau){\Delta} \tau \;\; \forall \; a, b\in\mathbb{T}^{k}. $$

Therefore, if α is a regressive function on \(\mathbb {T}\), the exponential function \(e_{\alpha }(t,a)\) defined in [4, Definition 2.30, p. 59] is a solution of the initial value problem

$$ y^{\nabla}(t) =\alpha(t_{-})y(t_{-}),\;\; y(a)=1, \;\;t\in\mathbb{T}_{a}, $$
(2.1)

(see [7] for details). Let \(({\Omega }, \mathcal {F},\{\mathcal {F}_{t}\}_{t\in \mathbb {T}_{a}}, \mathbb {P})\) be a probability space with filtration \(\{\mathcal {F}_{t}\}_{t\in \mathbb {T}_{a}}\) satisfying the usual conditions (i.e., \(\{\mathcal {F}_{t}\}_{t\in \mathbb {T}_{a}}\) is increasing and \(\bigcap \{ \mathcal {F}_{\rho (s)}:s\in \mathbb {T}, s>t\}=\mathcal {F}_{t}\) for all \(t\in \mathbb {T}_{a}\) while \(\mathcal {F}_{a}\) contains all \(\mathbb {P}\)-null sets). The notions of continuous process, rd-continuous process, ld-continuous process, cadlag process, martingale, submartingale, semimartingale, stopping time... for a stochastic process \(X=\{X_{t}\}_{t\in \mathbb {T}_{a}}\) on probability space \(({\Omega }, \mathcal {F},\{\mathcal {F}_{t}\}_{t\in \mathbb {T}_{a}}, \mathbb {P})\) are defined as usual.
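On a purely isolated time scale, the initial value problem (2.1) reduces to the recursion \(y(t)=y(\rho (t))\left (1+\nu (t)\alpha (\rho (t))\right )\), since at a left-scattered point \(y^{\nabla }(t)=(y(t)-y(\rho (t)))/\nu (t)\). A minimal Python sketch of this special case (our own illustration):

```python
# Sketch (our illustration): the exponential function e_alpha(., a) of (2.1)
# on an isolated time scale, via y(t) = y(rho(t)) * (1 + nu(t) * alpha(rho(t))).

def exp_alpha(T, alpha, a):
    """Solve y^nabla(t) = alpha(t_-) y(t_-), y(a) = 1, on an isolated scale T."""
    y = {a: 1.0}
    prev = a
    for t in sorted(s for s in T if s > a):
        nu = t - prev                              # backward graininess at t
        y[t] = y[prev] * (1.0 + nu * alpha(prev))  # one step of the recursion
        prev = t
    return y

# On T = {0, 1, ..., 5} with constant alpha = 0.5 this gives y(n) = 1.5**n,
# the familiar discrete exponential.
y = exp_alpha(list(range(6)), lambda t: 0.5, 0)
```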

Denote by \(\mathcal {M}_{2}\) the set of square integrable \(\mathcal {F}_{t}\)-martingales and by \(\mathcal {M}_{2}^{r}\) the subspace of the space \(\mathcal {M}_{2}\) consisting of martingales with continuous characteristics. For any \(M\in \mathcal {M}_{2}\), set

$$\widehat{M}_{t}=M_{t}-\sum\limits_{s\in (a, t]}\left( M_{s}-M_{\rho(s)}\right). $$

It is clear that \(\widehat M_{t} \) is an \(\mathcal {F}_{t}\)-martingale and \(\widehat {M}_{t}=\widehat {M}_{\rho (t)}\) for any \(t\in \mathbb {T}\). Further,

$$ \langle \widehat M\rangle_{t}=\langle M\rangle_{t}-\sum\limits_{s\in (a, t]}\left( \langle M\rangle_{s}-\langle M\rangle_{\rho(s)}\right). $$
(2.2)

Therefore, \(M\in \mathcal {M}_{2}^{r}\) if and only if \(\widehat {M}\in \mathcal {M}_{2}^{r}\). In this case, \(\widehat {M}\) can be extended to a regular martingale \(\overline {M}\) defined on \([a,\infty )\) by setting \(\overline {M}_{s}=\widehat {M}_{\rho (t)}\) if \(s\in [\rho (t),t], t\in \mathbb {T}_{a}\).

Denote by \(\mathfrak {B}\) the class of Borel sets in \(\mathbb {R}\) whose closures do not contain the point 0. Let δ(t,A) be the number of jumps of M on (a,t] whose values fall into the set \( A\in \mathfrak {B}.\) Since the sample functions of the martingale M are cadlag, the process δ(t,A) is defined with probability 1 for all \(t\in \mathbb {T}_{a}, A\in \mathfrak {B}.\) We extend its definition over the whole Ω by setting δ(t,A) ≡ 0 if the sample tM t (ω) is not cadlag. Clearly the process δ(t,A) is \(\mathcal {F}_{t}\)-adapted and its sample functions are nonnegative, monotonically nondecreasing, continuous from the right and take integer values. We also define \(\widehat {\delta }(t,A)\) for \(\widehat {M}_{t}\) in a similar way. Let \(\widetilde \delta (t,A)=\sharp \{s\in (a, t]: M_{s}-M_{\rho (s)}\in A\}\). It is evident that

$$ \delta(t,A)=\widehat \delta(t,A)+\widetilde \delta(t,A). $$
(2.3)

Further, for fixed t, \(\delta (t,\cdot ),\widehat \delta (t,\cdot )\) and \(\widetilde \delta (t,\cdot )\) are measures.

The processes \(\delta (t, A), \widehat \delta (t,A)\) and \(\widetilde \delta (t,A), t\in \mathbb {T}_{a}\), are \(\mathcal F_{t}\)-regular submartingales for fixed A. By the Doob–Meyer decomposition, each process has a unique representation of the form

$$\begin{array}{@{}rcl@{}} \delta (t, A)&=&\zeta(t, A)+\pi(t, A),\quad \widehat\delta (t, A)=\widehat\zeta(t, A)+\widehat\pi(t, A),\\ \widetilde\delta (t, A)&=&\widetilde\zeta(t, A)+\widetilde\pi(t, A), \end{array} $$

where \(\pi (t, A), \widehat \pi (t, A)\) and \( \widetilde \pi (t, A)\) are natural increasing integrable processes and \(\zeta (t, A), \widehat \zeta (t, A)\), \(\widetilde \zeta (t, A)\) are martingales. We choose a version of these processes such that they are measures when t is fixed. Throughout this paper, we suppose that \(\langle M\rangle _{t}\) is absolutely continuous with respect to the Lebesgue \(\nabla \)-measure \(\mu _{\nabla }\), i.e., there exists an \(\mathcal {F}_{t}\)-adapted progressively measurable process \(K_{t}\) such that

$$ \langle M\rangle_{t}={{\int}_{a}^{t}}K_{\tau}\nabla \tau. $$
(2.4)

Further, suppose that there exists a positive constant N such that

$$ \mathbb{P}\left\{\mu_{\nabla}\text{-}\operatorname{ess\,sup}_{t\in \mathbb{T}_{a}}|K_{t}|\leq N\right\}=1. $$
(2.5)

From (2.2) it follows that \(\langle \widehat M\rangle _{t}\) is also absolutely continuous with respect to \(\mu _{\nabla }\). Let

$$\widehat{M}_{t}^{d}={\int}_{\mathbb{R}}u\widehat\zeta(t, du) \text{ and } \widehat{M}^{c}_{t}=\widehat{M}_{t}-\widehat{M}^{d}_{t}. $$

We note that \(\widehat {\delta }(t,A)\) is also the number of jumps of \(\overline {M}\) on (a,t] whose values fall into the set \( A\in \mathfrak {B}.\) Therefore, by applying [9, Theorem 9, p. 90] to the regular martingale \(\overline {M}\) on \([a,\infty )\), we conclude that

$$ \langle\widehat{M}^{d}\rangle_{t}={\int}_{\mathbb{R}}u^{2}\widehat\pi(t, du). $$
(2.6)

Further, from the relation

$$\langle \widehat{M}\rangle_{t}=\langle \widehat{M}^{c}\rangle_{t}+\langle \widehat{M}^{d}\rangle_{t}, $$

it follows that \(\langle \widehat {M}^{c}\rangle _{t}\) and \(\langle \widehat {M}^{d}\rangle _{t}\) are also absolutely continuous with respect to \(\mu _{\nabla }\) on \(\mathbb {T}\). Thus, there exist \(\mathcal {F}_{t}\)-adapted, progressively measurable, bounded, nonnegative processes \(\widehat {K}^{c}_{t}\) and \(\widehat {K}^{d}_{t}\) satisfying

$$ \langle\widehat{M}^{c}\rangle_{t}={{\int}_{a}^{t}}\widehat{K}^{c}_{\tau}\nabla\tau,\quad \langle\widehat{M}^{d}\rangle_{t}={{\int}_{a}^{t}}\widehat{K}^{d}_{\tau}\nabla\tau. $$
(2.7)

Moreover, it is easy to show that \(\widehat {\pi }(t, A)\) is absolutely continuous with respect to \(\mu _{\nabla }\) on \(\mathbb {T}\). This means that it can be expressed as

$$ \widehat{\pi}(t, A)={{\int}_{a}^{t}}\widehat{\Upsilon}(\tau, A)\nabla\tau, $$
(2.8)

with an \(\mathcal {F}_{t}\)-adapted, progressively measurable process \(\widehat {\Upsilon }(t, A)\). Since \(\mathfrak {B}\) is generated by a countable family of Borel sets, we can find a version of \(\widehat {\Upsilon }(t, A)\) such that the map \(t\mapsto \widehat {\Upsilon }(t, A)\) is measurable and for t fixed, \(\widehat {\Upsilon }(t, \cdot )\) is a measure. Hence, from (2.6) we see that

$$\langle\widehat{M}^{d}\rangle_{t}={{\int}_{a}^{t}}{\int}_{\mathbb{R}}u^{2}\widehat{\Upsilon}(\tau, du)\nabla\tau. $$

This implies that

$$\widehat{K}^{d}_{t}={\int}_{\mathbb{R}}u^{2}\widehat{\Upsilon}(t, du). $$

For the process \(\widetilde \pi (t, A)\) we can write

$$\widetilde\pi(t, A)=\sum\limits_{s\in (a, t]}\mathbb E [1_{A}(M_{s}-M_{\rho(s)})\left|\mathcal{F}_{\rho(s)}\right.]. $$

Putting \( \widetilde {\Upsilon }(t, A)=\frac {\mathbb E [1_{A}(M_{t}-M_{\rho (t)}) |\mathcal {F}_{\rho (t)}]}{\nu (t)}\) if ν(t) > 0 and \(\widetilde {\Upsilon }(t, A)=0\) if ν(t) = 0 yields

$$ \widetilde\pi(t, A)={{\int}_{a}^{t}} \widetilde {\Upsilon}(\tau, A)\nabla \tau. $$
(2.9)

We see by the definition that if ν(t) > 0 then

$$ {\int}_{\mathbb{R}} u{\widetilde{\Upsilon}}(t, du)=\frac{ \mathbb E \left[M_{t}-M_{\rho(t)} \left|\mathcal{F}_{\rho(t)}\right.\right]}{\nu(t)}=0, $$
(2.10)

and

$$ {\int}_{\mathbb{R}} u^{2}{\widetilde{\Upsilon}}(t, du)=\frac{ \mathbb{E} \left[\left( M_{t}-M_{\rho(t)}\right)^{2} \left|\mathcal{F}_{\rho(t)}\right.\right]}{\nu(t)}=\frac{\langle M\rangle_{t}-\langle M\rangle_{\rho(t)}}{\nu(t)}. $$
(2.11)

Let \({\Upsilon }(t, A)=\widehat {\Upsilon }(t, A)+\widetilde {\Upsilon }(t, A)\). We see from (2.3) that

$$\pi(t, A)={{\int}_{a}^{t}}{\Upsilon}(\tau, A)\nabla\tau. $$

Denote by \(\mathcal {L}_{1}^{\text {loc}}(\mathbb {T}_{a}, \mathbb {R})\) (resp. by \(\mathcal L_{2}^{\text {loc}}(\mathbb {T}_{a}; M)\)) the family of real valued, \(\mathcal {F}_{t}\)-progressively measurable processes ϕ(t) with \({{\int }_{a}^{T}}|\phi (\tau )|\nabla \tau <+\infty \) a.s. for every T > a (resp. the space of all real valued, \(\mathcal {F}_{t}\)-predictable processes ϕ(t) satisfying \(\mathbb E{{\int }_{a}^{T}}\phi ^{2}(\tau )\nabla \langle M\rangle _{\tau }<\infty , \) for any T > a). Let \(C^{1,2}(\mathbb {T}_{a}\times \mathbb {R}^{d}; \mathbb {R})\) be the set of all functions V (t,x) defined on \(\mathbb {T}_{a}\times \mathbb {R}^{d}\), having continuous ∇-derivative in t and continuous second derivative in x.

Consider a d-tuple of semimartingales \(X(t) = (X_{1}(t),\dots ,X_{d}(t))\) defined by

$$X_{i}(t)=X_{i}(a)+{{\int}_{a}^{t}} f_{i}(\tau)\,\nabla \tau + {{\int}_{a}^{t}} g_{i}(\tau)\nabla M_{\tau}, $$

where \(f_{i}\in \mathcal {L}_{1}^{\text {loc}}(\mathbb {T}_{a}, \mathbb {R})\) and \(g_{i}\in \mathcal {L}_{2}^{\text {loc}}(\mathbb {T}_{a};M)\) for i = 1,…,d. For \(V\in C^{1,2}(\mathbb {T}_{a}\times \mathbb {R}^{d}; \mathbb {R})\), put

$$\begin{array}{@{}rcl@{}} &&\mathcal{L} V(t,x)\\ &=&V^{\nabla_{t}}(t,x) +\sum\limits_{i=1}^{d} \frac{\partial V(t,x)}{\partial x_{i}}(1-1_{\mathbb I}(t))f_{i}(t)+\left( V(t, x+f(t)\nu(t))-V(t,x)\right){\Phi}(t)\\ && +\frac{1}{2}\sum\limits_{i,j} \frac{\partial^{2} V(t,x)}{\partial x_{i}x_{j}}g_{i}(t)g_{j}(t) \widehat{K}^{c}_{t}- \sum\limits_{i=1}^{d} \frac{\partial V(t,x)}{\partial x_{i}}g_{i}(t){\int}_{\mathbb{R}} u\widehat{\Upsilon}(t, du)\\ &&+{\int}_{\mathbb{R}}(V\left( t,x+f(t)\nu(t)+g(t)u\right)-V(t, x+f(t)\nu(t))){\Upsilon}(t, du), \end{array} $$
(2.12)

with \(f = (f_{1},f_{2},\dots ,f_{d})\), \(g = (g_{1},g_{2},\dots ,g_{d})\), and

$${\Phi}(t)=\left\{\begin{array}{llll}0&\text{if }t\text{ left-dense}\\ \frac{1}{\nu(t)}&\text{if }t\text{ left-scattered}. \end{array}\right.$$

By using Itô’s formula in [7] we see that

$$\begin{array}{@{}rcl@{}} H_{t}&=&V(t,X(t))-V(a, X(a))-{{\int}_{a}^{t}}\mathcal{L} V(\tau,X({\tau_{-}}))\nabla\tau\\ &=&\sum\limits_{i=1}^{d} {{\int}_{a}^{t}}\frac{\partial V(\tau,X(\tau_{-}))}{\partial x_{i}}g_{i}(\tau)\nabla {\widehat{M_{\tau}}} +{{\int}_{a}^{t}} {\int}_{\mathbb{R}}{\Psi}(\tau)\widetilde{\zeta}(\nabla\tau, du)\\ &&+{{\int}_{a}^{t}} {\int}_{\mathbb{R}}\left( {\Psi}(\tau)-\sum\limits_{i=1}^{d}u\frac{\partial V(\tau,X(\tau_{-}))}{\partial x_{i}}g_{i}(\tau)\right)\widehat{\zeta}(\nabla\tau, du) \end{array} $$
(2.13)

is a locally integrable martingale, where \({\Psi }(\tau ) = V(\tau ,X(\tau _{-}) + f(\tau )\nu (\tau ) + g(\tau )u) - V(\tau ,X(\tau _{-}) + f(\tau )\nu (\tau ))\).

3 Stochastic Stability of Stochastic Dynamic Equations

Let \(M=(M_{t})_{t\in \mathbb {T}_{a}}\) be a square integrable \((\mathcal {F}_{t})\)-martingale and let \(f:\mathbb {T}_{a}\times \mathbb {R}^{d}\to \mathbb {R}^{d}\) and \(g: \mathbb {T}_{a}\times \mathbb {R}^{d}\to \mathbb {R}^{d}\) be two Borel functions. Consider the stochastic dynamic equation

$$ \left\{\begin{array}{llll} d^{\nabla} X(t)=f(t, X(t_{-}))d^{\nabla} t+g(t,X(t_{-}))d^{\nabla} M(t) \quad\forall\hspace{0.1cm} t\in\mathbb{T}_{a}\\ X(a)=x_{a}\in \mathbb{R}^{d}.\end{array}\right. $$
(3.1)

Throughout this paper we will assume that (3.1) has a unique solution defined on \(\mathbb {T}_{a}\). This assumption holds if the coefficients of (3.1) are Lipschitz and condition (2.5) is satisfied (see [6]). We denote by \(X(t;a,x_{a})\) the solution of (3.1) with initial condition \(x_{a}\), and write simply X(t) for \(X(t;a,x_{a})\) if there is no confusion.

Denote by \(\mathcal {K}\) the family of all continuous nondecreasing functions \(\varphi : \mathbb {R}_{+}\rightarrow \mathbb {R}_{+}\) such that φ(0) = 0 and φ(r) > 0 if r > 0. For h > 0, let \( S_{h}=\lbrace x\in \mathbb {R}^{d}:\Vert x\Vert <h\rbrace \) and let \(C^{1,2}(\mathbb {T}_{a}\times S_{h}; \mathbb {R}_{+})\) be the family of all nonnegative functions V (t,x) from \(\mathbb {T}_{a}\times S_{h}\) to \( \mathbb {R}_{+}\) that are continuously differentiable once in t and twice in x. We assume further that

$$f(t,0)=0;\; g(t,0)=0 \quad \forall \; t\in \mathbb{T}_{a}. $$

This assumption implies that (3.1) has the trivial solution X(t;a,0) ≡ 0. The definitions of stochastic stability, stochastic asymptotic stability, and stochastic asymptotic stability in the large for the trivial solution of (3.1) follow [17]. Precisely,

Definition 3.1

(i) Stochastically stable: for every pair of ε ∈ (0,1) and r > 0, there exists δ = δ(ε,r,a) > 0 such that

$$\mathbb{P}\left\{ \sup_{t\in \mathbb{T}_{a}}\Vert X(t;a,x_{a})\Vert<r \right\} \geq {1-\varepsilon}\text{ for any } x_{a}\in \mathbb{R}^{d} \text{ with } \Vert x_{a}\Vert<\delta.$$

(ii) Stochastically asymptotically stable: it is stochastically stable and, for every ε ∈ (0,1), there exists \(\delta _{0} = \delta _{0}(\varepsilon ,a) > 0\) such that

$$\mathbb{P}\left\{ \lim_{t\rightarrow \infty} X(t;a,x_{a})=0\right\} \geq {1-\varepsilon}\text{ whenever}~ \Vert x_{a}\Vert<\delta_{0}.$$

(iii) Stochastically asymptotically stable in the large: it is stochastically stable and, moreover, for all \(x_{a}\in \mathbb {R}^{d}\)

$$\mathbb{P}\left\{ \lim_{t\rightarrow \infty} X(t;a,x_{a})=0\right\} =1.$$

Theorem 3.2

Suppose that for some h > 0, there exists a function \(V(t,x)\in C^{1,2}(\mathbb {T}_{a}\times S_{h}; \mathbb {R}_{+})\) satisfying V (t,0) ≡ 0, such that for some \(\varphi \in \mathcal {K},\)

$$V(t,x)\geq \varphi(\Vert x\Vert), $$

and

$$\mathcal{L}V(t, x)\leq 0$$

for all \((t,x)\in \mathbb {T}_{a}\times S_{h}.\) Then, the trivial solution of (3.1) is stochastically stable.

Proof

Let ε ∈ (0,1) and 0 < r < h be arbitrary. By the continuity of V (t,x) and the fact V (a,0) = 0, we can find a 0 < δ = δ(ε,r,a) < r such that

$$ \frac{1}{\varepsilon}\sup_{x\in S_{\delta}}V(a,x)\leq \varphi (r). $$
(3.2)

For any \(x_{a}\in S_{\delta }\), consider the stopping time

$$\kappa_{r}=\inf\left\{ t\geq a: X(t)\notin S_{r} \right\}.$$

By [7, Corollary 2, p. 325], for any t ≥ a,

$$\begin{array}{@{}rcl@{}} &&V(\kappa_{r}\wedge t, X(\kappa_{r}\wedge t))\\ &=&V(a,X(a))+{\int}_{a}^{\kappa_{r}\wedge t}\hskip -.10cm \mathcal{L} V(\tau,X(\tau_{-}))\nabla\tau\\ &&+\sum\limits_{i=1}^{d} {\int}_{a}^{\kappa_{r}\wedge t}\hskip -.10cm \frac{\partial V(\tau,X(\tau_{-}))}{\partial x_{i}}g_{i}(\tau,X(\tau_{-}))\nabla {\widehat{M_{\tau}}} +{\int}_{a}^{\kappa_{r}\wedge t}\hskip -.10cm {\int}_{\mathbb{R}}{\Psi}(\tau)\widetilde{\zeta}(\nabla\tau, du)\\ &&+{\int}_{a}^{\kappa_{r}\wedge t}\hskip -.10cm {\int}_{\mathbb{R}}\left( {\Psi}(\tau)-\sum\limits_{i=1}^{d}u\frac{\partial V(\tau,X(\tau_{-}))}{\partial x_{i}}g_{i}(\tau,X(\tau_{-}))\right)\widehat{\zeta}(\nabla\tau, du). \end{array} $$

Because \(\mathcal {L}V(t, x)\leq 0,\) we obtain that

$$ \mathbb E V(\kappa_{r}\wedge t, X(\kappa_{r}\wedge t))\leq V(a, x_{a}). $$
(3.3)

Since \(\Vert X(\kappa _{r}\wedge t)\Vert = \Vert X(\kappa _{r})\Vert \geq r\) if \(\kappa _{r}\leq t\), and V (t,x) ≥ φ(∥x∥) for all \( (t,x)\in \mathbb {T}_{a}\times S_{h},\)

$$ \mathbb E V(\kappa_{r}\wedge t, X(\kappa_{r}\wedge t))\geq \mathbb E\left[ 1_{\lbrace\kappa_{r}\leq t\rbrace} V(\kappa_{r}, X(\kappa_{r}))\right]\geq\varphi(r)\mathbb{P}\lbrace \kappa_{r}\leq t \rbrace. $$
(3.4)

Combining (3.2), (3.3) and (3.4) we obtain

$$\mathbb{P}\lbrace \kappa_{r}\leq t \rbrace \leq \varepsilon.$$

Letting \(t\to \infty \), we get \(\mathbb {P}\lbrace \kappa _{r}<\infty \rbrace \leq \varepsilon \). This means that

$$\mathbb{P}\left\{ \sup_{t\in \mathbb{T}_{a}}\Vert X(t;a,x_{a})\Vert<r \right\} \geq {1-\varepsilon}.$$

The proof is complete. □

Theorem 3.3

Suppose that for some h > 0, there exists a function \(V(t,x)\in C^{1,2}(\mathbb {T}_{a}\times S_{h}; \mathbb {R}_{+})\) such that for some \(\varphi _{1},\varphi _{2},\varphi _{3}\in \mathcal {K}\),

$$\varphi_{1}(\Vert x\Vert)\leq V(t,x)\leq \varphi_{2}(\Vert x\Vert), $$

and

$$\mathcal{L}V(t, x)\leq -\varphi_{3}(\Vert x\Vert)$$

for all \((t,x)\in \mathbb {T}_{a}\times S_{h}\). Then, the trivial solution of (3.1) is stochastically asymptotically stable.

Proof

From Theorem 3.2, the trivial solution of (3.1) is stochastically stable. So, we only need to show that for any ε ∈ (0,1), there is a \(\delta _{0} = \delta _{0}(\varepsilon ,a) > 0\) such that

$$ \mathbb{P}\left\{ \lim_{t\rightarrow \infty} X(t;a,x_{a})=0\right\} \geq {1-\varepsilon} \text{ for any } x_{a}\in \mathbb{R}^{d} \text{ with } \Vert x_{a}\Vert <\delta_{0}. $$
(3.5)

By Theorem 3.2, there is a \(\delta _{0} = \delta _{0}(\varepsilon ,a) > 0\) such that

$$ \mathbb{P}\left\lbrace \sup_{t\in \mathbb{T}_{a}}\Vert X(t;a,x_{a})\Vert< \frac{h}{2}\right\rbrace \geq 1-\frac{\varepsilon}{4}, $$
(3.6)

provided \(x_{a}\in S_{\delta _{0}}\). Fix \(x_{a}\in S_{\delta _{0}}\) and choose \(0 < b < \Vert x_{a}\Vert \). Let \(0 < a_{1} < b\) be sufficiently small that

$$ \frac{\varphi_{2}(a_{1})}{\varphi_{1}(b)}\leq \frac{\varepsilon}{4}. $$
(3.7)

Define the stopping times

$$\kappa_{a_{1}}=\inf\lbrace t\geq a: \Vert X(t)\Vert\leq a_{1}\rbrace,$$

and

$$\kappa_{h}=\inf\left\{ t\geq a: \Vert X(t)\Vert\geq \frac{h}{2}\right\}.$$

From (3.6) we get

$$ \mathbb{P}\lbrace \kappa_{h}= \infty\rbrace \geq 1-\frac{\varepsilon}{4}. $$
(3.8)

By [7, Corollary 2, p. 323], we can derive that for any t ≥ a,

$$\begin{array}{@{}rcl@{}} 0&\leq& \mathbb E V(\kappa_{a_{1}}\wedge \kappa_{h}\wedge t, X(\kappa_{a_{1}}\wedge \kappa_{h}\wedge t))=V(a,x_{a}) \\ &&+\mathbb E{\int}_{a}^{\kappa_{a_{1}}\wedge \kappa_{h}\wedge t}\hskip -.10cm \mathcal{L} V(\tau,X(\tau_{-}))\nabla\tau \leq V(a,x_{a})-\varphi_{3}(a_{1})\mathbb E(\kappa_{a_{1}}\wedge \kappa_{h}\wedge t-a). \end{array} $$

Consequently,

$$(t-a)\mathbb{P}\lbrace\kappa_{a_{1}}\wedge \kappa_{h}\geq t\rbrace \leq \mathbb E(\kappa_{a_{1}}\wedge \kappa_{h}\wedge t-a)\leq\frac{V(a,x_{a})}{\varphi_{3}(a_{1})}.$$

Letting \(t\to \infty \) implies that

$$ \mathbb{P}\lbrace\kappa_{a_{1}}\wedge \kappa_{h}= \infty\rbrace =0. $$
(3.9)

Combining (3.8) and (3.9) yields \(\mathbb {P}\lbrace \kappa _{a_{1}}=\infty \rbrace \leq \frac {\varepsilon }{4}\). Therefore, we can choose c sufficiently large such that

$$\mathbb{P}\lbrace \kappa_{a_{1}}< c\rbrace\geq 1-\frac{\varepsilon}{2}. $$

Hence,

$$\begin{array}{@{}rcl@{}} \mathbb{P}\lbrace\kappa_{a_{1}}<c \wedge \kappa_{h}\rbrace &\geq& \mathbb{P}(\lbrace \kappa_{a_{1}}<c\rbrace\cap \lbrace \kappa_{h}= \infty\rbrace)\\ &\geq& \mathbb{P}\lbrace \kappa_{a_{1}}<c\rbrace - \mathbb{P}\lbrace \kappa_{h}< \infty\rbrace\geq 1-\frac{3\varepsilon}{4}. \end{array} $$
(3.10)

Now, define two stopping times

$$d=\left\{\begin{array}{llll}\kappa_{a_{1}} &\text{if}\;\;\kappa_{a_{1}} <\kappa_{h}\wedge c,\\ \infty &\text{otherwise} \end{array}\right.$$

and

$$\kappa_{b}=\inf\lbrace t>d: \Vert X(t)\Vert\geq b \rbrace. $$

By [7, Corollary 2, p. 323], for any t ≥ c,

$$\mathbb{E} V(\kappa_{b}\wedge t, X(\kappa_{b}\wedge t))\leq \mathbb{E}V(d\wedge t, X(d\wedge t)).$$

Noting that

$$V(\kappa_{b}\wedge t, X(\kappa_{b}\wedge t))=V(d\wedge t, X(d\wedge t))=V(t,X(t))$$

on \(\omega \in \lbrace \kappa _{a_{1}} \geq \kappa _{h}\wedge c\rbrace \), we get

$$\mathbb{E}\left[ 1_{\lbrace\kappa_{a_{1}} <\kappa_{h}\wedge c\rbrace}V(\kappa_{b}\wedge t, X(\kappa_{b}\wedge t))\right]\leq \mathbb{E}\left[ 1_{\lbrace\kappa_{a_{1}} <\kappa_{h}\wedge c\rbrace}V(\kappa_{a_{1}}\wedge t, X(\kappa_{a_{1}}\wedge t))\right].$$

Since \(\lbrace \kappa _{b}\leq t\rbrace \subset \lbrace \kappa _{a_{1}} <\kappa _{h}\wedge c\rbrace \),

$$\varphi_{1}(b)\mathbb{P}\lbrace\kappa_{b}\leq t\rbrace\leq\varphi_{2}(a_{1}).$$

From (3.7), it follows that

$$\mathbb{P}\lbrace\kappa_{b}\leq t\rbrace\leq\frac{\varepsilon}{4}.$$

Letting \(t\to \infty \), we have

$$\mathbb{P}\lbrace\kappa_{b}<\infty\rbrace\leq\frac{\varepsilon}{4}.$$

It then follows, using (3.10) as well, that

$$\mathbb{P}\lbrace d<\infty \text{ and } \kappa_{b}=\infty \rbrace\geq \mathbb{P}\lbrace\kappa_{a_{1}} <\kappa_{h}\wedge c\rbrace-\mathbb{P}\lbrace\kappa_{b}<\infty\rbrace\geq 1-\varepsilon.$$

So

$$\mathbb{P}\left\{\omega:\limsup_{t\rightarrow\infty}\Vert X(t)\Vert\leq b\right\}\geq 1-\varepsilon.$$

Since b is arbitrary, we must have

$$\mathbb{P}\left\{\omega:\limsup_{t\rightarrow\infty}\Vert X(t)\Vert =0\right\}\geq 1-\varepsilon$$

as required. The proof is complete. □

Theorem 3.4

Suppose there exists a function \(V(t,x)\in C^{1,2}(\mathbb {T}_{a}\times \mathbb {R}^{d}; \mathbb {R}_{+})\) with V (t,0) ≡ 0 such that for any h > 0

$$\begin{array}{@{}rcl@{}} \varphi_{1}(\Vert x\Vert)&\leq& V(t,x)\leq \varphi_{2}(\Vert x\Vert) \text{\; for all\; } (t,x)\in \mathbb{T}_{a}\times S_{h},\\ \mathcal{L}V(t, x)&\leq& -\varphi_{3}(\Vert x\Vert) \text{\; for all\; } (t,x)\in \mathbb{T}_{a}\times S_{h}, \end{array} $$
(3.11)

for some \(\varphi _{1},\varphi _{2},\varphi _{3}\in \mathcal {K}\) . Further,

$$\lim_{\Vert x \Vert \rightarrow\infty}\inf_{t\geq a}V(t,x)=\infty.$$

Then, the trivial solution of (3.1) is stochastically asymptotically stable in the large.

Proof

From Theorem 3.2, the trivial solution is stochastically stable. So we only need to show that

$$ \mathbb{P}\left\{ \lim_{t\rightarrow \infty} X(t;a,x_{a})=0\right\} =1 $$
(3.12)

for all \( x_{a}\in \mathbb {R}^{d}\). Fix any \(x_{a}\) and write \(X(t;a,x_{a}) = X(t)\) again. Let ε ∈ (0,1) be arbitrary. Since \(\lim _{\Vert x \Vert \rightarrow \infty }\inf _{t\geq a}V(t,x)=\infty \), we can find an \(h > 2\Vert x_{a}\Vert \) sufficiently large that

$$ \inf_{2\Vert x\Vert\geq h, t\geq a} V(t,x)\geq\frac{4V(a,x_{a})}{\varepsilon}. $$
(3.13)

Let

$$\kappa_{h}=\inf\lbrace t\geq a: 2\Vert X(t)\Vert\geq h\rbrace.$$

Similarly to the above, we can show that for any t ≥ a,

$$ \mathbb{E} V(\kappa_{h}\wedge t, X(\kappa_{h}\wedge t))\leq V(a,x_{a}). $$
(3.14)

But, by (3.13), we see that

$$\mathbb{E} V(\kappa_{h}\wedge t, X(\kappa_{h}\wedge t))\geq \frac{4V(a,x_{a})}{\varepsilon}\mathbb{P}\lbrace\kappa_{h}\leq t\rbrace.$$

It then follows from (3.14) that

$$\mathbb{P}\lbrace\kappa_{h}\leq t\rbrace\leq\frac{\varepsilon}{4}.$$

Letting \(t\to \infty \) gives \(\mathbb {P}\lbrace \kappa _{h}<\infty \rbrace \leq \frac {\varepsilon }{4}.\) That means

$$ \mathbb{P}\left\lbrace\Vert X(t)\Vert \leq \frac h2\text{\; for all\; } t\geq a\right\rbrace\geq 1-\frac{\varepsilon}{4}. $$
(3.15)

Thus, we get the inequality (3.6). Hence, we can follow the same argument as in the proof of Theorem 3.3 to show that

$$\mathbb{P}\left\{\lim_{t\rightarrow\infty}X(t)=0\right\}\geq 1-\varepsilon.$$

Since ε is arbitrary,

$$\mathbb{P}\left\{\lim_{t\rightarrow\infty}X(t)=0\right\}=1.$$

The proof is complete. □

We now consider a special case. Let P be a positive definite matrix and \(V(t,x) = x^{\top } P x\), where \(x^{\top }\) is the transpose of a vector x. Using (2.12) we have

$$\begin{array}{@{}rcl@{}} \mathcal{L} V(t,x)&=& (1-1_{\mathbb I}(t))\left( x^{\top} P f(t,x)+ f(t,x)^{\top} Px\right)\\ &&+\left[(x+f(t,x)\nu(t))^{\top} P(x+f(t,x)\nu(t))- x^{\top} Px\right]{\Phi}(t)\\ &&+g(t,x)^{\top} Pg(t,x) \widehat{K}^{c}_{t} -\left( x^{\top} Pg(t,x)+g(t,x)^{\top} Px\right){\int}_{\mathbb{R}}u{\widehat{\Upsilon}}(t, du)\\ &&+{\int}_{\mathbb{R}}\left[\left( x+f(t,x)\nu(t)+g(t,x)u\right)^{\top} P\left( x+f(t,x)\nu(t)+g(t,x)u\right)\right.\\ &&- \left (x+f(t,x)\nu(t))^{\top} P(x+f(t,x)\nu(t))\right]{\Upsilon}(t, du). \end{array} $$
(3.16)

It is easy to see that

$$\begin{array}{@{}rcl@{}} &&(1-1_{\mathbb I}(t))\left( x^{\top} P f(t,x)+ f(t,x)^{\top} Px\right)\\ &&+\left[(x+f(t,x)\nu(t))^{\top} P(x+f(t,x)\nu(t))- x^{\top} Px\right]{\Phi}(t)\\ &=& x^{\top} P f(t,x)+f(t,x)^{\top} P x+f(t, x)^{\top} P f(t,x)\nu(t). \end{array} $$
(3.17)
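Indeed, (3.17) follows by treating the two cases in the definition of Φ(t); a short verification (our own, abbreviating f = f(t,x) and ν = ν(t)):

```latex
% Case 1: t left-scattered, so 1 - 1_I(t) = 0 and Phi(t) = 1/nu.
\frac{(x+f\nu)^{\top} P(x+f\nu) - x^{\top} P x}{\nu}
  = \frac{\left(x^{\top} P f + f^{\top} P x\right)\nu + f^{\top} P f\,\nu^{2}}{\nu}
  = x^{\top} P f + f^{\top} P x + f^{\top} P f\,\nu .
% Case 2: t left-dense, so Phi(t) = 0 and nu = 0; the left-hand side of (3.17)
% reduces to x^{\top} P f + f^{\top} P x, which equals the right-hand side.
```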

Noting that \(\nu (t){\int }_{\mathbb {R}}u{\widehat {\Upsilon }}(t, du)=0\), \(\nu (t){\int }_{\mathbb R}u{\widetilde {\Upsilon }}(t, du)=0\) and \({\Upsilon }(t, A)=\widehat {\Upsilon }(t, A)+\widetilde {\Upsilon }(t, A)\), we have

$$\begin{array}{@{}rcl@{}} &&{\int}_{\mathbb{R}}\left[\left( x+f(t,x)\nu(t)+g(t,x)u\right)^{\top} P\!\left( x+f(t,x)\nu(t)+g(t,x)u\right)\right.\\ &&- \left.(x+f(t,x)\nu(t))^{\top} P(x+f(t,x)\nu(t))\right]{\Upsilon}(t, du)\\ &=&{\int}_{\mathbb{R}}g(t,x)^{\top} Pg(t,x)u^{2}{\Upsilon}(t, du)+\left( x^{\top} Pg(t,x)+g(t,x)^{\top} P x\right){\int}_{\mathbb{R}}u{\widehat{\Upsilon}}(t, du). \end{array} $$
(3.18)

Since \(K_{t}=\widehat {K}^{c}_{t}+{\int }_{\mathbb {R}} u^{2}{\Upsilon }(t, du)\), we can substitute (3.17) and (3.18) into (3.16) to obtain

$$\begin{array}{@{}rcl@{}} \mathcal{L}{ V}(t,x) &=&x^{\top} P f(t,x)+f(t,x)^{\top} Px +f(t, x)^{\top} P f(t,x)\nu(t)\\ &&+g(t, x)^{\top} P g(t,x)K_{t}. \end{array} $$
(3.19)

Thus, if we can find a positive definite matrix P such that \(\mathcal LV\) defined by (3.19) satisfies (3.11), then the trivial solution of (3.1) is stochastically asymptotically stable in the large.
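For concrete coefficients, \(\mathcal LV\) in (3.19) is straightforward to evaluate numerically; a minimal Python sketch (our own helper names, with P given as a nested list and the values f = f(t,x), g = g(t,x), ν(t), \(K_{t}\) supplied by the user):

```python
# Sketch (our illustration): evaluate LV(t,x) of (3.19) for V(t,x) = x^T P x.

def quad(P, u, v):
    """Bilinear form u^T P v, with P a nested list and u, v lists."""
    return sum(u[i] * P[i][j] * v[j]
               for i in range(len(u)) for j in range(len(v)))

def LV(P, x, f, g, nu, K):
    """x^T P f + f^T P x + nu * f^T P f + K * g^T P g, cf. (3.19)."""
    return quad(P, x, f) + quad(P, f, x) + nu * quad(P, f, f) + K * quad(P, g, g)

# Illustrative values (our choice): P = I in dimension 2, f = -x, g = 0.5 x,
# nu = 1/3, K = 1.  Then LV = (-2 + 1/3 + 0.25) * ||x||^2.
P = [[1.0, 0.0], [0.0, 1.0]]
x = [1.0, 2.0]
f = [-xi for xi in x]
g = [0.5 * xi for xi in x]
val = LV(P, x, f, g, 1 / 3, 1.0)
```

Here `val` is negative, consistent with the stability criterion above for these illustrative coefficients.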

Example 3.5

Let \(\mathbb {T}\) be a time scale

$$\mathbb{T}=\bigcup_{k=1}^{\infty}\left[k\left( \frac{1}{3}+b\right),k\left( \frac{1}{3}+b\right)+b\right],$$

where b is a positive real number. We have

$$ \nu(t)= \left\{\begin{array}{llll} 0 \;\ &\text{if}\;\ t\in\bigcup_{k=1}^{\infty}\left( k(\frac{1}{3}+b),k(\frac{1}{3}+b)+b\right]\\ \frac{1}{3}&\text{if} \;\ t\in\bigcup_{k=1}^{\infty} {\lbrace k(\frac{1}{3}+b)\rbrace}. \;\; \end{array}\right. $$
(3.20)

Consider the stochastic dynamic equation on time scale \(\mathbb {T}\)

$$ \left\{\begin{array}{lll}d^{\nabla} X(t)=AX(t_{-})d^{\nabla} t+BX(t_{-})d^{\nabla} W(t), t\in \mathbb{T}\\ X(0)=x_{0}\in \mathbb{R}^{d}, \end{array}\right. $$
(3.21)

where W(t) is a one-dimensional Brownian motion on the time scale, defined as in [10], and A, B are d × d matrices. In this case \(K_{t} = 1\). Let P be a positive definite matrix and \(V(t,x) = x^{\top } P x\). By (3.19), we have

$$ \mathcal{L}{ V}(t,x)= x^{\top}\left( PA+A^{\top} P+A^{\top} PA\nu(t)+B^{\top} PBK_{t}\right)x. $$
(3.22)

Hence, if the spectral abscissa of the matrix \(PA+A^{\top } P+\frac 13A^{\top } PA+B^{\top } PB\) is bounded by a negative constant − c, then \(\mathcal {L}{ V}(t,x)\leq -c\|x\|^{2}\) (the case ν(t) = 0 follows as well, since \(A^{\top } PA\) is positive semidefinite). By virtue of Theorem 3.4, the trivial solution of (3.21) is stochastically asymptotically stable in the large.
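As a quick numerical illustration of this criterion (our own scalar example with d = 1, P = 1, \(K_{t} = 1\), and illustrative values A = −1, B = 0.5), the coefficient of \(x^{2}\) in (3.22) stays below a negative constant for both graininess values ν(t) ∈ {0, 1/3}:

```python
# Sketch (our illustration, d = 1, P = 1, K_t = 1): in the scalar case the
# coefficient of x^2 in (3.22) is 2a + a^2 * nu + b^2, with illustrative
# values a = -1 (drift A) and b = 0.5 (diffusion B).

def lv_coeff(a, b, nu, K=1.0):
    return 2 * a + a * a * nu + b * b * K

a, b = -1.0, 0.5
worst = max(lv_coeff(a, b, nu) for nu in (0.0, 1 / 3))
# worst = -2 + 1/3 + 0.25 < 0, so LV(t,x) <= worst * x^2 and Theorem 3.4
# gives stochastic asymptotic stability in the large for this example.
```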

4 Almost Sure Exponential Stability of Stochastic Dynamic Equations

In this section, we keep all assumptions imposed on the coefficients f and g of (3.1).

Definition 4.1

The trivial solution of (3.1) is said to be almost surely exponentially stable if

$$ \limsup_{t\to\infty}\frac{\ln\|X(t;a,x_{a})\|}{t}<0 \enskip \text{a.s.} $$
(4.1)

holds for any \(x_{a}\in \mathbb {R}^{d}.\)

Theorem 4.2

Let \(\alpha _{1}, c_{1}, p\) be positive numbers and let α be a positive number satisfying \(\frac \alpha {1+\alpha \nu (t)}\leq \alpha _{1}\) for all \(t\in \mathbb {T}_{a}\). Suppose that there exists a function \(V\in C^{1,2}(\mathbb {T}_{a}\times \mathbb {R}^{d};\mathbb {R}_{+})\) such that for all \((t, x)\in {\mathbb {T}}_{a}\times \mathbb {R}^{d}\),

$$ c_{1}\|x\|^{p}\leq V(t, x), $$
(4.2)

and

$$ \mathcal{L}V(t,x)\leq -\alpha_{1} V(t_{-},x) +\eta_{t}\enskip a.s., $$
(4.3)

where \(\eta_{t}\) is a nonnegative ld-continuous function defined on \(\mathbb {T}_{a}\) satisfying

$$ {\int}_{a}^{\infty} e_{\alpha}(t_{-}, a)\eta_{t}\nabla t<\infty \enskip a.s. $$
(4.4)

Then, the trivial solution of (3.1) is almost surely exponentially stable.

Proof

From (2.13), (4.3), we have

$$\begin{array}{@{}rcl@{}} &&e_{\alpha}(t, a)V(t, X(t))\\ &=&V(a,x_{a})+{{\int}_{a}^{t}} e_{\alpha}(\tau_{-},a)\left( \alpha V(\tau_{-}, X(\tau_{-}))+(1+\alpha\nu(\tau))\mathcal{L}V(\tau,X(\tau_{-}))\right)\nabla\tau\\ &&+{{\int}_{a}^{t}}e_{\alpha}(\tau, a)\nabla H_{\tau}\\ &\leq& V(a,x_{a})\\ &&+{{\int}_{a}^{t}} e_{\alpha}(\tau_{-},a)\left( \alpha V(\tau_{-}, X(\tau_{-}))+(1+\alpha\nu(\tau))\left( -\alpha_{1} V(\tau_{-}, X(\tau_{-})) +\eta_{\tau}\right)\right)\nabla\tau\\ &&+{{\int}_{a}^{t}}e_{\alpha}(\tau, a)\nabla H_{\tau}. \end{array} $$

It follows from the inequality \(\frac \alpha {1+\alpha \nu (t)}\leq \alpha _{1}\) that

$$\begin{array}{@{}rcl@{}} &&{{\int}_{a}^{t}} e_{\alpha}(\tau_{-},a)\left( \alpha V(\tau_{-}, X(\tau_{-}))+(1+\alpha\nu(\tau))\left( -\alpha_{1} V(\tau_{-}, X(\tau_{-})) +\eta_{\tau}\right)\right)\nabla\tau\\&\leq&{{\int}_{a}^{t}} e_{\alpha}(\tau_{-},a)(1+\alpha\nu(\tau))\eta_{\tau}\nabla\tau. \end{array} $$

Therefore,

$$e_{\alpha}(t, a)V(t, X(t))\leq V(a,x_{a})+F_{t}+G_{t},$$

where

$$F_{t}={{\int}_{a}^{t}}(1+\alpha\nu(\tau))e_{\alpha}(\tau_{-}, a)\eta_{\tau}\nabla\tau ;\enskip G_{t}={{\int}_{a}^{t}}e_{\alpha}(\tau, a)\nabla H_{\tau}.$$

By assumption (4.4), it follows that

$$F_{\infty}=\lim_{t\to\infty}F_{t}<\infty.$$

Define

$$Y_{t}= V(a,x_{a})+F_{t}+G_{t}\enskip\text{for all} \enskip t\in\mathbb{T}_{a}.$$

Then \(Y_{t}\) is a nonnegative semimartingale. By [14, Theorem 7, p. 139], one sees that

$$\{F_{\infty} < \infty\} \subset \left\{\lim_{t\to\infty}Y_{t} \text{ exists and is finite}\right\} \enskip\text{a.s.}$$

Since \(\mathbb {P}\{F_{\infty }<\infty \}=1\),

$$\mathbb{P}\left\{\lim_{t\to\infty}Y_{t} \text{ exists and is finite}\right\}=1.$$

Noting that \(0\leq e_{\alpha}(t,a)V(t,X(t))\leq Y_{t}\) for all \(t\geq a\) a.s., we have

$$\mathbb{P}\left\{\limsup_{t\to\infty}e_{\alpha}(t, a)V(t, X(t))<\infty\right\}=1.$$

So,

$$ \limsup_{t\to\infty}\left[e_{\alpha}(t, a)V(t, X(t))\right]<\infty\enskip \text{a.s.} $$
(4.5)

The relations (4.2) and (4.5) imply

$$\limsup_{t\to\infty}\frac{\ln\|X(t)\|^{p}}t+\liminf_{t\to\infty}\frac{\ln e_{\alpha}(t,a)}t\leq \limsup_{t\to\infty}\frac{\ln \left[e_{\alpha}(t,a)V(t, X(t))\right]}t\leq 0.$$

It is easy to see that \(\liminf _{t\to \infty }\frac {\ln e_{\alpha }(t,a)}t=\beta >0\). Therefore,

$$\limsup_{t\to\infty}\frac{\ln \|X(t)\|}t \leq -\frac{\beta}p \enskip \text{a.s.}$$

The proof is complete. □

Consider now the special case \(V(t,x)=\|x\|^{2}\). By (3.19),

$$ \mathcal{L}V(t, x)=2x^{\top} f(t,x)+\| g(t,x)\|^{2}K_{t} +\|f(t, x)\|^{2}\nu(t). $$
(4.6)

We can impose conditions on the functions f and g so that there exist a positive number \(\alpha\) and a nonnegative ld-continuous function \(\eta_{t}\) satisfying (4.4) such that

$$2x^{\top} f(t,x) +\|f(t, x)\|^{2}\nu(t)+\| g(t,x)\|^{2}K_{t}\leq -\alpha \|x\|^{2}+ \eta_{t}.$$
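For candidate coefficients, this bound can be spot-checked numerically via formula (4.6). In the sketch below, the coefficients f, g and the constants \(\alpha\), \(\nu\), \(K_t\) are illustrative assumptions of our own, chosen so that \(\eta_t\equiv 0\) suffices:

```python
import numpy as np

rng = np.random.default_rng(0)

# L V(t,x) for V(t,x) = ||x||^2, following (4.6):
#   LV = 2 x^T f(t,x) + ||g(t,x)||^2 K_t + ||f(t,x)||^2 nu(t)
def LV(f, g, t, x, nu_t, K_t):
    fx, gx = f(t, x), g(t, x)
    return 2 * (x @ fx) + (gx @ gx) * K_t + (fx @ fx) * nu_t

# Illustrative coefficients: contractive linear drift, small linear diffusion.
f = lambda t, x: -2.0 * x
g = lambda t, x: 0.3 * x

# Spot-check LV <= -alpha ||x||^2 + eta_t with eta_t = 0, alpha = 1,
# nu(t) at its largest value 1/3, K_t = 1, on random sample points.
alpha = 1.0
for _ in range(1000):
    x = rng.normal(size=3)
    assert LV(f, g, 0.0, x, 1/3, 1.0) <= -alpha * (x @ x) + 1e-9
print("bound holds on all samples")
```

Here \(\mathcal{L}V = (-4 + 0.09 + \frac43)\|x\|^2 \approx -2.58\|x\|^2\), so the bound holds with a comfortable margin; a random spot-check like this is of course only a sanity test, not a proof.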

Example 4.3

Let \(\mathbb {T}\) be a time scale and \(0\leq a\in \mathbb {T}\). Let \(1_{e}=(1,1,\dots,1)\). Consider the stochastic dynamic equation on time scale \(\mathbb {T}\)

$$ \left\{\begin{array}{lll}d^{\nabla} X(t)=\left( AX(t_{-})+e^{-t}\sin(\|X(t_{-})\|) 1_{e}\right)d^{\nabla} t+ BX(t_{-})d^{\nabla} W(t), \\ X(0)=x_{0}\in \mathbb{R}^{d}, t\in \mathbb{T}_{a}, \end{array}\right. $$
(4.7)

where A and B are d × d matrices and W(t) is a one-dimensional Brownian motion on the time scale defined as in [10]. Let \(V(t,x)=\|x\|^{2}\). By (4.6) we have

$$\begin{array}{@{}rcl@{}} \mathcal{L} V(t,x)&=&2x^{\top} Ax+2 e^{-t}\sin(\|x\|) x^{\top} 1_{e}+\|Ax+e^{-t}\sin(\|x\|) 1_{e}\|^{2}\nu(t)+x^{\top} B^{\top} B x\\ &\leq& 2x^{\top} Ax+2 e^{-t} \|x\|\sqrt{d}+2(\|Ax\|^{2}+e^{-2t}d)\nu(t)+x^{\top} B^{\top} B x\\ &\leq& x^{\top} \left( 2A+2A^{\top} A \nu^{*}+B^{\top} B\right)x+2(\sqrt{d}\|x\|+d\nu^{*})e^{-t}. \end{array} $$

Suppose that the spectral abscissa of the matrix \(2A+2A^{\top} A\nu^{*}+B^{\top} B\) is bounded above by a negative constant \(-\beta\). Then, we have

$$\mathcal{L} V(t,x)\leq -\frac \beta 2\|x\|^{2}+2(\sqrt{d}\|x\|+d\nu^{*})e^{-t}-\frac \beta 2\|x\|^{2}\leq -\frac \beta 2\|x\|^{2}+2d\left( \nu^{*}+\frac {1}{\beta}\right)e^{-t} $$

for all \(t\in \mathbb {T}_{a}\). For \(\alpha =\frac 12\min \{1, \beta \}\), all assumptions of Theorem 4.2 are satisfied. Thus, the trivial solution of (4.7) is almost surely exponentially stable.
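As an informal sanity check of Example 4.3, one can simulate (4.7) in the special case \(\mathbb{T}=\mathbb{R}\) (so \(\nu\equiv 0\)) with an Euler–Maruyama scheme and inspect \(\ln\|X(t)\|/t\) along a path. The matrices, step size, and horizon below are illustrative choices, not from the paper:

```python
import numpy as np

# Euler-Maruyama sketch of (4.7) on T = R (nu = 0): the path-wise
# exponent ln||X(T)||/T should come out negative for stable A, small B.
rng = np.random.default_rng(1)
d = 2
A = -3.0 * np.eye(d)          # stable drift matrix
B = 0.2 * np.eye(d)           # small diffusion matrix
ones = np.ones(d)             # the vector 1_e of the example

dt, T = 1e-3, 50.0
X = np.array([1.0, -1.0])
t = 0.0
for _ in range(int(T / dt)):
    dW = rng.normal(scale=np.sqrt(dt))   # Brownian increment
    drift = A @ X + np.exp(-t) * np.sin(np.linalg.norm(X)) * ones
    X = X + drift * dt + (B @ X) * dW
    t += dt

exponent = np.log(np.linalg.norm(X)) / T
print(exponent < 0)           # negative path-wise exponent
```

For this choice the exponent hovers near −3, consistent with almost sure exponential decay; a single simulated path illustrates, but of course does not prove, the conclusion of Theorem 4.2.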