1 Introduction

It is well known that state estimation, or filtering, is one of the foundational problems in communications and control systems. A state estimator is designed to reconstruct the state of a given system from its measured output. Over the past few decades, many effective approaches have been proposed in this research area (see e.g., [1, 3, 9, 17, 19, 24, 35]). In particular, the Kalman–Bucy filtering (KBF) technique has been widely used for the state estimation problem of linear stochastic systems [2, 18, 30]. Furthermore, to handle nonlinear systems, the extended Kalman filter (EKF) has been developed, whose idea is to linearize about the current mean and covariance. Both the KBF and the EKF have found successful applications in state estimation and machine learning problems [12, 24, 41]. These two types of filters, however, require not only the exact system model but also the statistical properties of the noise in order to achieve the desired performance. Since modeling errors and incomplete statistical information are often encountered in real-time applications, robust filtering schemes have recently received considerable research attention with the aim of improving the robustness of the traditional Kalman filters with respect to parameter uncertainties and external noises. The widely used robust filtering algorithms can be generally categorized into the \(H_2\) filtering method, the \(H_\infty \) filtering approach, and the mixed \(H_2/H_\infty \) filtering scheme (see e.g., [11, 28, 29, 34, 37, 39] and the references therein).

In reality, many physical systems are subject to frequent unpredictable structural changes, such as random failures and repairs of components, changes in the interconnections of subsystems, and sudden environmental changes. These systems can be appropriately modeled by the so-called Markovian jump systems (MJSs), which represent an important family of models subject to abrupt variations/switches. In the past few decades, the optimal regulator, controllability, observability, stability, and stabilization problems have been extensively studied for Markovian jump linear systems (MJLSs), and a series of results is available in the literature (see e.g., [4–6, 10, 16, 21, 38]). Considering the fact that almost all real-world systems are essentially nonlinear, nonlinear systems with Markovian jumping parameters deserve more research attention from both the theoretical and practical viewpoints, and accordingly, some promising results have been reported. For example, a robust EKF has been designed in [41] for discrete-time Markovian jump nonlinear systems with noise uncertainty. In [33], a nonlinear full-order filter has been implemented such that the dynamics of the estimation error are guaranteed to be stochastically exponentially stable in the mean square. The stochastic stability problem has been tackled in [22, 23] for nonlinear stochastic systems with Markovian switching. It should be pointed out that, so far, although the state estimation problem has been widely investigated for MJLSs, the state estimation problem of general nonlinear stochastic Markovian jump systems has gained much less research attention, probably due to the mathematical complexity.

On the other hand, recognizing that nonlinearity is commonly encountered in engineering practice, the stability problems for nonlinear stochastic systems have long been a focus of research. A large number of results have been published in the literature on a variety of research topics, including stochastic stability in probability [7, 25], \(p\)th moment asymptotic stability [13, 14, 23], and exponential mean square stability [26, 27, 32]. It is worth mentioning that the results mentioned above are concerned with the average or probabilistic properties (e.g., in the mean square sense) of the performance, with little consideration of the sample-path properties. However, in practical applications, it is quite common that only sample behaviors of a stochastic system can be observed. In this case, the average property (e.g., mean square stability) is somewhat too conservative to quantify the system performance. Rather than using the stability concept of the “average system” or the “ensemble of all possible systems”, it would make more practical sense to investigate almost sure stability, which is concerned with sample-path properties. Nevertheless, compared with the fruitful results on the mean square stability of nonlinear stochastic systems, the corresponding results regarding almost sure asymptotic stability have received much less attention simply because of the additional theoretical difficulty. Among the few results available, necessary and sufficient conditions have been established in [40] on the almost sure stability of a class of nonlinear stochastic differential systems. In [15, 36], almost sure stability problems have been addressed for nonlinear stochastic systems with Markovian switching. Unfortunately, to the best of the authors’ knowledge, the almost sure asymptotic stability for the state estimation problem of nonlinear stochastic systems with Markovian switching has not been fully studied despite its practical significance, and this situation motivates our present investigation.

Summarizing the above discussions, in this paper, we aim to investigate the almost sure asymptotic stability for the state estimation problem of a general class of nonlinear stochastic systems with Markovian switching. The main contributions of this paper lie in the following aspects. (1) A right-continuous Markov chain on the probability space and a general nonlinearity are utilized to model systems that may experience probabilistic abrupt changes in the nonlinear system structure. (2) The almost sure asymptotic stability is, for the first time, investigated for the state estimation problem of nonlinear stochastic systems with Markovian switching. (3) An easy-to-verify sufficient condition is given for the state estimation problem of linear stochastic systems with Markovian switching. The rest of this paper is outlined as follows. In Sect. 2, the nonlinear state estimator with Markovian switching is proposed and the problem under consideration is formulated. In Sect. 3, the main results are given to analyze the almost sure asymptotic stability for the state estimation problem of a general class of nonlinear stochastic systems with Markovian switching, and the corresponding results for linear systems are obtained as corollaries. In Sect. 4, two numerical examples are employed to demonstrate the effectiveness of the main results obtained. Finally, we conclude the paper in Sect. 5.

Notation The notation used here is fairly standard except where otherwise stated. \({\mathbb R}^{n}\) and \({\mathbb {R}}^{n\times m}\) denote, respectively, the \(n\)-dimensional Euclidean space and the set of all \(n\times m\) real matrices, and \({\mathbb {R}}_+=[0,+\infty )\). For a vector \(x=(x_1,x_2,\ldots , x_n)^T\in {\mathbb {R}}^n\), \(| x|=(\sum _{i=1}^nx_i^2)^{\frac{1}{2}}\) denotes the Euclidean norm. \(a \vee b=\max \{a, b\}\) and \(a \wedge b=\min \{a, b\}\). Moreover, let \((\Omega ,{\mathcal {F}},{\{{\mathcal {F}}_t\}}_{t \ge 0},\mathbf {P})\) be a complete probability space with a natural filtration \(\{{\mathcal {F}}_t\}_{t \ge 0}\) satisfying the usual conditions (i.e., it is right continuous, and \({\mathcal {F}}_0\) contains all \(\mathbf {P}\)-null sets). \({\mathbb E}\{x\}\) stands for the expectation of the stochastic variable \(x\) with respect to the given probability measure \(\mathbf {P}\). \(C^{2,1}({\mathbb {R}}^{n}\times S\times {\mathbb {R}}_+;{\mathbb {R}}_+)\) denotes the class of all nonnegative functions \(V(x,i,t)\) on \({\mathbb {R}}^{n}\times S\times {\mathbb {R}}_+\) that are twice continuously differentiable in \(x\) and once in \(t\). \(L^1({\mathbb {R}}_+;{\mathbb {R}}_+)\) denotes the family of functions \(\lambda :~{\mathbb {R}}_+\rightarrow {\mathbb {R}}_+\) such that \(\int _0^\infty \lambda (t)\mathrm{d}t<\infty \). \(\mathcal {K}\) denotes the class of functions \(\gamma :~{\mathbb {R}}_+\rightarrow {\mathbb {R}}_+ \) that are continuous, strictly increasing, and satisfy \(\gamma (0)=0\). \({\mathcal {K}}_\infty \) denotes the family of all functions \(\gamma \in \mathcal {K}\) such that \(\gamma (x)\rightarrow \infty \) as \(x\rightarrow \infty \).

2 Problem formulation and preliminaries

Let \(r(t)\) be a right-continuous Markov chain on the probability space taking values in the finite space \(S=\{1,2,\dots ,N\}\) with generator \(\Gamma =(\gamma _{ij})_{N\times N}\) given by

$$\begin{aligned}&P\{r(t+\Delta )=j| r(t)=i\}\nonumber \\&\quad =\left\{ \begin{array}{l@{\quad }l} \gamma _{ij}\Delta +o(\Delta ) &{} \text {if } i\ne j\\ 1+\gamma _{ii}\Delta +o(\Delta ) &{} \text {if } i=j \end{array} \right. \end{aligned}$$
(1)

where \(\Delta >0\) and \(\lim _{\Delta \rightarrow 0}\frac{o(\Delta )}{\Delta }=0\). Here, \(\gamma _{ij}\ge 0\) is the transition rate from mode \(i\) to mode \(j\) if \(i\ne j\), and \(\gamma _{ii}=-\sum _{j\ne i}\gamma _{ij}\).
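To build intuition for (1) (this illustration is ours and not part of the original text), note that the chain stays in mode \(i\) for an exponentially distributed holding time with rate \(-\gamma _{ii}\) and then jumps to a mode \(j\ne i\) with probability \(\gamma _{ij}/(-\gamma _{ii})\). A minimal Python sketch along these lines is given below; the function name, the zero-based mode labels, and the interface are assumptions made only for illustration.

```python
# Illustrative sketch (not from the paper): sample a right-continuous Markov
# chain r(t) on S = {0, ..., N-1} from its generator Gamma = (gamma_ij).
import numpy as np

def simulate_chain(Gamma, i0, T, rng=np.random.default_rng(0)):
    """Return the jump times and the visited modes of the chain on [0, T]."""
    times, modes = [0.0], [i0]
    t, i = 0.0, i0
    while True:
        rate = -Gamma[i, i]                      # total exit rate of mode i
        if rate <= 0:                            # absorbing mode: no more jumps
            break
        t += rng.exponential(1.0 / rate)         # exponential holding time
        if t > T:
            break
        probs = Gamma[i].copy()
        probs[i] = 0.0                           # jump to j != i w.p. gamma_ij / rate
        i = int(rng.choice(len(probs), p=probs / rate))
        times.append(t)
        modes.append(i)
    return times, modes

# e.g. the two-mode chain used in Sect. 4:
# simulate_chain(np.array([[-1.0, 1.0], [1.0, -1.0]]), i0=0, T=10.0)
```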

Consider the following nonlinear stochastic system with Markovian switching:

$$\begin{aligned} \left\{ \begin{array}{l} \mathrm{d}x(t)\!=\!f(x(t),r(t),t)\mathrm{d}t\!+\!g(x(t),r(t),t)\mathrm{d}W(t)\\ y(t)=h(x(t),r(t),t) \end{array}\right. \nonumber \\ \end{aligned}$$
(2)

with initial value \(x(0)=x_0\in {\mathbb {R}}^p\) and \(r(0)=i_0\in S\), where \(x(t)\in {\mathbb {R}}^p\) is the state vector, \(y(t)\in {\mathbb {R}}^q\) is the actual measured output vector, and \(W(t)= (w_1(t),\dots ,w_m(t))^T\) is an \(m\)-dimensional Brownian motion defined on the complete probability space \((\Omega ,{\mathcal {F}},\{{\mathcal {F}}_t\}_{t \ge 0},\mathbf {P})\) and independent of the Markov chain \(r(\cdot )\). Moreover, \(f: {\mathbb {R}}^p\times S\times {\mathbb {R}}_+\rightarrow {\mathbb {R}}^p\), \(g: {\mathbb {R}}^p\times S\times {\mathbb {R}}_+\rightarrow {\mathbb {R}}^{p\times m}\), and \(h: {\mathbb {R}}^p\times S\times {\mathbb {R}}_+\rightarrow {\mathbb {R}}^q\) are nonlinear functions with \(f(0,i,t)=0\), \(g(0,i,t)=0\), and \(h(0,i,t)=0\).

We start with constructing the following state estimator for system (2):

$$\begin{aligned} \mathrm{d}\hat{x}(t)&= f(\hat{x}(t),r(t),t)\mathrm{d}t\nonumber \\&+\,K(r(t))[y(t)-h(\hat{x},r(t),t)]\mathrm{d}t \end{aligned}$$
(3)

with initial value \(\hat{x}(0)=0\), where \(\hat{x}(t)\) is the state estimate and \(K(r(t))\) is the estimation gain to be determined.

Setting \(\eta (t)=[x^T(t),\hat{x}^T(t)]^T\), we obtain an augmented system as follows:

$$\begin{aligned} \mathrm{d}\eta (t)&= f_e(\eta (t),r(t),t)\mathrm{d}t\nonumber \\&+\, g_e(\eta (t),r(t),t)\mathrm{d}W(t) \end{aligned}$$
(4)

where

$$\begin{aligned}&f_e(\eta (t),r(t),t)\nonumber \\&\quad =\left[ \begin{array}{l} f(x(t),r(t),t)\\ f(\hat{x}(t),r(t),t)+K(r(t))[y(t)-h(\hat{x},r(t),t)]\\ \end{array}\right] ,\nonumber \\&\quad g_e(\eta (t),r(t),t) = \left[ \begin{array}{l} g(x(t),r(t),t)\\ 0\\ \end{array}\right] . \end{aligned}$$
(5)

Assumption 1

All of \(f,\,g\), and \(h\) are locally Lipschitz continuous in \(x\in {\mathbb {R}}^p\), uniformly in \(t\in {\mathbb {R}}_+\); that is, for each \(R>0\), there exists a constant \(C_R\ge 0\) such that

$$\begin{aligned}&| f(x_1,i,t)-f(x_2,i,t)| ^2 \vee | g(x_1,i,t)-g(x_2,i,t)|^2\\&\quad \vee |[h(x_1,i,t)-h(x_2,i,t)]|^2\le C_R | x_1-x_2|^2 \end{aligned}$$

for any \((t,i)\in {\mathbb {R}}_+\times S\) and \(x_1,x_2 \in {\mathbb {R}}^p\) with \(| x_1| \vee | x_2| \le R\).

Remark 1

Suppose that Assumption 1 holds and recall that \(f(0,i,t)=0,\,g(0,i,t)=0\) and \(h(0,i,t)=0\). Then, for any initial value \(\eta (0)=[x^T_0,0]^T\in {\mathbb {R}}^{2p}\), it is not difficult to prove that there exists a unique solution \(\eta (t)\) to stochastic system (4).
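For readers who wish to visualize the joint behavior of the plant (2) and the estimator (3), the following Python sketch (ours and purely illustrative, not the authors' implementation) applies an Euler–Maruyama discretization to the augmented system (4), approximating the mode transitions over one time step by the one-step probabilities suggested by (1). The helper name `simulate_augmented`, the zero-based mode labels, and the step size are assumptions of the sketch; the step size must be small enough that the one-step transition probabilities remain nonnegative.

```python
# Illustrative Euler-Maruyama sketch (not from the paper) for the augmented
# system (4): plant state x from (2) together with the estimate xhat from (3).
import numpy as np

def simulate_augmented(f, g, h, K, Gamma, x0, T, dt=1e-3,
                       rng=np.random.default_rng(0)):
    """f(x,i,t)->(p,), g(x,i,t)->(p,m), h(x,i,t)->(q,); K: dict mode -> (p,q) gain."""
    n_steps = int(T / dt)
    p, m = x0.shape[0], g(x0, 0, 0.0).shape[1]
    x, xhat, i, t = x0.astype(float), np.zeros(p), 0, 0.0
    path = np.zeros((n_steps + 1, 2 * p))
    path[0] = np.concatenate([x, xhat])
    for k in range(n_steps):
        dW = rng.normal(scale=np.sqrt(dt), size=m)         # Brownian increment
        y = h(x, i, t)                                     # measured output
        x = x + f(x, i, t) * dt + g(x, i, t) @ dW          # plant step, cf. (2)
        xhat = xhat + (f(xhat, i, t)
                       + K[i] @ (y - h(xhat, i, t))) * dt  # estimator step, cf. (3)
        # one Euler step of the chain, cf. (1): P(i -> j) ~ gamma_ij * dt for j != i
        probs = Gamma[i] * dt
        probs[i] = 1.0 + Gamma[i, i] * dt
        i = int(rng.choice(len(probs), p=probs))
        t += dt
        path[k + 1] = np.concatenate([x, xhat])
    return path
```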

For \(V\in C^{2,1}({\mathbb {R}}^{2p}\times S\times {\mathbb {R}}_+;{\mathbb {R}}_+)\), introduce the infinitesimal generator \({\mathcal {L}}V\): \({\mathbb {R}}^{2p}\times S\times {\mathbb {R}}_+\rightarrow \mathbb {R}\) by

$$\begin{aligned} {\mathcal {L}}V(\eta ,i,t)&= V_t(\eta ,i,t)+V_\eta (\eta ,i,t)f_e(\eta ,i,t)\nonumber \\&+\,\frac{1}{2}\mathrm{trace}\left[ g_e^T(\eta ,i,t)V_{\eta \eta }(\eta ,i,t)g_e(\eta ,i,t)\right] \nonumber \\&+\sum _{j=1}^N\gamma _{ij}V(\eta ,j,t) \end{aligned}$$
(6)

where \(V_t(\eta ,i,t)=\frac{\partial V(\eta ,i,t)}{\partial t},\,V_\eta (\eta ,i,t)=\big (\frac{\partial V(\eta ,i,t)}{\partial \eta _1},\dots ,\frac{\partial V(\eta ,i,t)}{\partial \eta _{2p}}\big )\) and \(V_{\eta \eta }(\eta ,i,t)=\big (\frac{\partial ^2 V(\eta ,i,t)}{\partial \eta _j \partial \eta _k}\big )_{{2p}\times {2p}}\).

Definition 1

The solution of the augmented system (4) is said to be almost surely asymptotically stable if, for all \(i_0\in S\) and \(\eta (0)\in {\mathbb {R}}^{2p}\), the following holds

$$\begin{aligned} P\left( \lim _{t\rightarrow \infty }| \eta (t;\eta (0),i_0)|=0\right) =1. \end{aligned}$$
(7)

The main purpose of this paper is to design a desired state estimator of the form (3) for the stochastic system (2) such that the solution of the augmented system (4) is almost surely asymptotically stable.

3 Main results

First, let us give the following lemmas, which will be used in the proofs of our main results.

Lemma 1

[20] (Nonnegative semimartingale convergence) Let \(A_1(t)\) and \(A_2(t)\) be two continuous adapted increasing processes on \(t\ge 0\) with \(A_1(0)=A_2(0)=0\) almost surely (a.s. for short), let \(M(t)\) be a real-valued continuous local martingale with \(M(0)=0\) a.s., and let \(\zeta \) be a nonnegative \({\mathcal {F}}_0\)-measurable random variable such that \(\mathbb {E}\{\zeta \} < \infty \). Denote \(X(t)=\zeta +A_1(t)-A_2(t)+M(t)\) for all \(t\ge 0\). If \(X(t)\) is nonnegative, then

$$\begin{aligned}&\left\{ \lim _{t\rightarrow \infty }A_1(t)<\infty \right\} \subset \left\{ \lim _{t\rightarrow \infty }X(t)<\infty \right\} \cap \\&\left\{ \lim _{t\rightarrow \infty }A_2(t)<\infty \right\} \,\text {a.s.} \end{aligned}$$

where \(C\subset D~\text {a.s.}\) means \(P(C\cap D^c)=0\). In particular, if \(\lim _{t\rightarrow \infty }A_1(t)<\infty ~\text {a.s.}\), then

$$\begin{aligned}&\lim _{t\rightarrow \infty }X(t)<\infty ,\,\lim _{t\rightarrow \infty }A_2(t)<\infty ~\text {and}\\&-\infty <\lim _{t\rightarrow \infty }M(t)<\infty \,~\text {a.s.}. \end{aligned}$$

That is, all of the three processes \(X(t),~A_2(t)\), and \(M(t)\) converge to finite random variables with probability one.

Lemma 2

[31, 36] (Generalized Itô’s formula) If \(V\in C^{2,1}({\mathbb {R}}^{2p}\times S\times {\mathbb {R}}_+;{\mathbb {R}}_+)\), then for any \(t\ge 0\), the generalized Itô’s formula is given as

$$\begin{aligned} \mathrm{d}V(\eta (t),r(t),t)&= {\mathcal {L}}V(\eta (t),r(t),t)\mathrm{d}t\nonumber \\&+\,V_\eta (\eta (t),r(t),t)g_e(\eta (t),r(t),t)\mathrm{d}W(t)\nonumber \\&+\int \limits _{\mathbb {R}}[V(\eta (t),r(t)+l(r(t),m),t)\nonumber \\&-\,V(\eta (t),r(t),t)]\mu (\mathrm{d}t,\mathrm{d}m) \end{aligned}$$
(8)

where \(\mu (\mathrm{d}t,\mathrm{d}m)=\nu (\mathrm{d}t,\mathrm{d}m)-n(\mathrm{d}m)\mathrm{d}t\) is a martingale measure, and the function \(l: S\times \mathbb {R}\rightarrow \mathbb {R}\) is defined as

$$\begin{aligned} l(i,y)=\left\{ \begin{array}{l@{\quad }l} j-i &{} \text {if}\,\quad y\in \Delta _{ij}\\ 0 &{} \text {otherwise}\end{array}\right. \end{aligned}$$
(9)

with \(\Delta _{12}=[0,\gamma _{12}),~\Delta _{13}=[\gamma _{12},\gamma _{12}+\gamma _{13}),\dots ,\Delta _{1N}=[\sum _{j=2}^{N-1}\gamma _{1j},\sum _{j=2}^{N}\gamma _{1j}),~\Delta _{21}=[\sum _{j=2}^{N}\gamma _{1j},\sum _{j=2}^{N}\gamma _{1j}+\gamma _{21}),\dots , ~\Delta _{2N}=[\sum _{j=2}^{N}\gamma _{1j}+\sum _{j=1,j\ne 2}^{N-1}\gamma _{2j},\sum _{j=2}^{N}\gamma _{1j}+\sum _{j=1,j\ne 2}^{N}\gamma _{2j})\) and so on. Moreover, \(\mathrm{d}r(t)=\int _{\mathbb {R}}l(r(t-),y)\nu (\mathrm{d}t,\mathrm{d}y)\), where \(\nu (\mathrm{d}t,\mathrm{d}y)\) is a Poisson random measure with intensity \(\mathrm{d}t\times n(\mathrm{d}y)\), in which \(n\) is the Lebesgue measure on \(\mathbb {R}\).
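As a concrete instance (added here purely for illustration, not part of the original lemma), consider the two-mode chain with \(\gamma _{12}=\gamma _{21}=1\) used in the examples of Sect. 4. The above construction reduces to \(\Delta _{12}=[0,1)\) and \(\Delta _{21}=[1,2)\), with \(l(1,y)=1\) for \(y\in \Delta _{12}\), \(l(2,y)=-1\) for \(y\in \Delta _{21}\), and \(l(i,y)=0\) otherwise. Hence, when the chain is in mode 1, a jump of the Poisson random measure with mark \(y\in \Delta _{12}\) switches it to mode 2; when it is in mode 2, a mark \(y\in \Delta _{21}\) switches it back; all other marks leave the mode unchanged.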

In particular, taking the expectation on both sides of (8), we obtain the following useful lemma.

Lemma 3

[23] Let \(V(\eta (t),r(t),t)\in C^{2,1}({\mathbb {R}}^{2p}\times S\times {\mathbb {R}}_+;{\mathbb {R}}_+)\) and \(\tau _1,~\tau _2\) be bounded stopping times such that \(0\le \tau _1\le \tau _2~\text {a.s.}\). If \(V(\eta (t),r(t),t)\) and \({\mathcal {L}}V(\eta (t),r(t),t)\) are bounded on \(t\in [\tau _1,\tau _2]~\text {a.s.}\), then we have

$$\begin{aligned}&{\mathbb {E}}\{V(\eta (\tau _2),r(\tau _2),\tau _2)-V(\eta (\tau _1),r(\tau _1),\tau _1)\}\nonumber \\&\quad ={\mathbb {E}}\int \limits _{\tau _1}^{\tau _2}{\mathcal {L}}V(\eta (t),r(t),t)\mathrm{d}t. \end{aligned}$$
(10)

The following theorem provides a sufficient condition under which the solution of the augmented system (4) is almost surely asymptotically stable.

Theorem 1

Consider the stochastic system (4) and let Assumption 1 hold. Suppose that there exist a Lyapunov function \(V(\eta ,i,t)\in C^{2,1}({\mathbb {R}}^{2p}\times S\times {\mathbb {R}}_+;{\mathbb {R}}_+)\), a function \(\lambda (t)\in L^1(\mathbb {R}_+;\mathbb {R}_+)\), functions \(\alpha _1,~\alpha _2\in \mathcal {K}_\infty \), and \(\alpha \in {\mathcal {K}}\) such that, for all \(\eta \in {\mathbb {R}}^{2p}\), \(t\ge 0\) and \(i\in S\), the following two conditions hold

$$\begin{aligned} \alpha _1(| \eta |)\le V(\eta ,i,t) \le \alpha _2(| \eta |) \end{aligned}$$
(11)

and

$$\begin{aligned} {\mathcal {L}}V(\eta ,i,t)\le \lambda (t)-\alpha (| \eta |). \end{aligned}$$
(12)

Then, for any initial value \(\eta (0)=[x^T_0,0]^T\in {\mathbb {R}}^{2p}\), the solution of the stochastic system (4) is almost surely asymptotically stable.

Proof

We take three steps to prove the assertion in this theorem.

  • Step 1: We shall prove that

$$\begin{aligned} \int \limits _0^\infty \alpha (| \eta (s)|)\mathrm{d}s<\infty \quad \text {a.s.}. \end{aligned}$$
(13)

For \(V(\eta ,r(t),t)\in C^{2,1}({\mathbb {R}}^{2p}\times S\times {\mathbb {R}}_+;{\mathbb {R}}_+)\), using the generalized Itô’s formula, one can show that

$$\begin{aligned}&V(\eta (t),r(t),t) = V(\eta (0),r(0),0)\nonumber \\&\quad +\int \limits _0^t{\mathcal {L}}V(\eta (s),r(s),s)\mathrm{d}s\nonumber \\&\quad +\int \limits _0^tV_\eta (\eta (s),r(s),s)g_e(\eta (s),r(s),s)\mathrm{d}W(s)\nonumber \\&\quad +\int \limits _0^t\int \limits _{\mathbb {R}}[V(\eta (s),r(s)+l(r(s),m),s)\nonumber \\&\quad -\,V(\eta (s),r(s),s)]\mu (\mathrm{d}s,\mathrm{d}m). \end{aligned}$$
(14)

Furthermore, it follows from (12) that

$$\begin{aligned}&V(\eta (t),r(t),t) \le V(\eta (0),r(0),0)\nonumber \\&\quad +\int \limits _0^t\lambda (s)\mathrm{d}s-\int \limits _0^t\alpha (| \eta (s)|)\mathrm{d}s\nonumber \\&\quad +\int \limits _0^tV_\eta (\eta (s),r(s),s)g_e(\eta (s),r(s),s)\mathrm{d}W(s)\nonumber \\&\quad +\int \limits _0^t\int \limits _{\mathbb {R}}[V(\eta (s),r(s)+l(r(s),m),s)\nonumber \\&\quad -\,V(\eta (s),r(s),s)]\mu (\mathrm{d}s,\mathrm{d}m). \end{aligned}$$
(15)

Since \(\lambda (t)\in L^1(\mathbb {R}_+;\mathbb {R}_+)\), one can see from Lemma 1 that

$$\begin{aligned} \limsup _{t\rightarrow \infty }V(\eta (t),r(t),t)<\infty \quad \text {a.s.}. \end{aligned}$$
(16)

Taking the expectations on both sides of (15) and letting \(t\rightarrow \infty \), one obtains that

$$\begin{aligned} {\mathbb {E}}\bigg \{\int \limits _0^\infty \alpha (| \eta (s)|)\mathrm{d}s\bigg \}<\infty , \end{aligned}$$
(17)

which implies

$$\begin{aligned} \int \limits _0^\infty \alpha (| \eta (s)|)\mathrm{d}s<\infty \quad \text {a.s.}. \end{aligned}$$
(18)
  • Step 2: We shall show that

$$\begin{aligned} P\left( \lim _{t\rightarrow \infty } \alpha (| \eta (t)|)=0\right) =1. \end{aligned}$$
(19)

As \(\alpha \in \mathcal {K}\) and \(\int _0^\infty \alpha (| \eta (s)|)\mathrm{d}s<\infty \, \text {a.s.}\), it is straightforward to see that

$$\begin{aligned} \liminf _{t\rightarrow \infty } \alpha (| \eta (t)|)=0 \quad \text {a.s.}. \end{aligned}$$
(20)

Decompose the sample space into two mutually exclusive events as follows:

$$\begin{aligned} D_1&= \left\{ \omega :\limsup _{t\rightarrow \infty }\alpha (|\eta (t,\omega )|)\right. \nonumber \\&= \left. \liminf _{t\rightarrow \infty }\alpha (|\eta (t,\omega )|)=0\right\} ,\nonumber \\ D_2&= \left\{ \omega :\limsup _{t\rightarrow \infty }\alpha (|\eta (t,\omega )|)>0\,\text {and}\right. \nonumber \\&\left. \liminf _{t\rightarrow \infty }\alpha (|\eta (t,\omega )|)=0\right\} . \end{aligned}$$
(21)

Now, we claim \(P(D_2)=0\), and hence, \(P(D_1)=1\), which implies the desired result immediately since \(\alpha \in \mathcal {K}\). If this is not true, there exists \(\epsilon >0\) such that

$$\begin{aligned}&P\{\alpha (|\eta (\cdot )|)~\text {crosses from below}~\epsilon ~\text {to above}~2\epsilon \nonumber \\&\quad \text {and back infinitely many times}\}\ge 3\epsilon . \end{aligned}$$
(22)

Define the stopping time \(\sigma _r=\inf \{t\ge 0:~|\eta (t)| \ge r\}\). By the generalized Itô’s formula, one can derive that

$$\begin{aligned}&{\mathbb {E}}\{V(\eta (\sigma _r\wedge t),r(\sigma _r\wedge t),\sigma _r\wedge t)\} = V(\eta (0),r(0),0)\nonumber \\&\qquad +\,{\mathbb {E}}\bigg \{\int \limits _0^{\sigma _r\wedge t}{\mathcal {L}}V(\eta (s),r(s),s)\mathrm{d}s\bigg \}\nonumber \\&\quad \le V(\eta (0),r(0),0)+\int \limits _0^t\lambda (s)\mathrm{d}s\nonumber \\&\qquad -\,{\mathbb {E}}\bigg \{\int \limits _0^{\sigma _r\wedge t}\alpha (|\eta (s)|)\mathrm{d}s\bigg \}\nonumber \\&\quad \le V(\eta (0),r(0),0)+\int \limits _0^t\lambda (s)\mathrm{d}s. \end{aligned}$$
(23)

By considering \(\lambda (t)\in L^1(\mathbb {R}_+;\mathbb {R}_+)\), there is a constant \(M>0\) such that

$$\begin{aligned} {\mathbb {E}}\{V(\eta (\sigma _r\wedge t),r(\sigma _r\wedge t),\sigma _r\wedge t)\}\le M, \quad \forall ~r,~t\ge 0. \end{aligned}$$
(24)

It follows from condition (11) that

$$\begin{aligned} \inf _{|\eta |\ge R,i\in S}V(\eta ,i,t)\rightarrow \infty \quad \text {as} \quad R\rightarrow \infty . \end{aligned}$$
(25)

We can obtain from (11) and (24) that

$$\begin{aligned}&{\mathbb {E}}\{V(\eta (\sigma _r\wedge t),r(\sigma _r\wedge t),\sigma _r\wedge t)\}\nonumber \\&\quad \ge \int \limits _{\{\sup _{0\le s\le t}|\eta (s)|\ge r\}}V(\eta (\sigma _r\wedge t),r(\sigma _r\!\wedge \! t),\sigma _r\!\wedge \! t)\mathrm{d}P\nonumber \\&\quad \ge P\left( \sup _{0\le s\le t}|\eta (s)|\ge r\right) \inf _{|\eta (s)|\ge r}V(\eta (\sigma _r\wedge t),\nonumber \\&\qquad r(\sigma _r\wedge t),\sigma _r\wedge t), \end{aligned}$$
(26)

which yields

$$\begin{aligned}&P\left( \sup _{0\le s\le t}|\eta (s)|\ge r\right) \nonumber \\&\quad \le \frac{M}{\inf _{|\eta (s)|\ge r}V(\eta (\sigma _r\wedge t),r(\sigma _r\wedge t),\sigma _r\wedge t)}. \end{aligned}$$
(27)

As (25) holds, for any given \(\epsilon _1>0\), there exists a \({\mathcal {K}}_\infty \) function \(\beta (\cdot )\) such that

$$\begin{aligned} \inf _{|\eta (s)|\ge \beta (r)}V(\eta (\sigma _r\!\wedge t),r(\sigma _r\!\wedge t),\sigma _r\!\wedge t)\!\ge \! \frac{M}{\epsilon _1}.\qquad \end{aligned}$$
(28)

It follows readily from (27) and (28) that

$$\begin{aligned}&P\left( \sup _{0\le s\le t}|\eta (s)|<\beta (r)\right) \ge 1\nonumber \\&\quad -\frac{M}{\inf _{|\eta (s)|\ge \beta (r)}V(\eta (\sigma _r\wedge t),r(\sigma _r\wedge t),\sigma _r\wedge t)}\ge 1-\epsilon _1.\nonumber \\ \end{aligned}$$
(29)

For any \(r>0\), two functions \(\beta _1\) and \(\beta _2\) can be defined as follows:

$$\begin{aligned}&\beta _1(r)\triangleq \max _{| \eta |\le r,\,i\in S}\sup _{t\ge 0}| f_e(\eta ,i,t) |,\nonumber \\&\beta _2(r)\triangleq \max _{| \eta |\le r,\,i\in S}\sup _{t\ge 0}| g_e(\eta ,i,t) |. \end{aligned}$$
(30)

Denote \(s_r=\min \{s,\sigma _r\}\) and \(\delta _r=\min \{\delta ,\sigma _r\}\). By the \(C_p\) inequality, the Itô isometry, and Doob's maximal inequality, we obtain

$$\begin{aligned}&{\mathbb {E}}\{\sup _{0\le s \le \delta }| \eta (s_r)-\eta (0)|^2\}\nonumber \\&\quad ={\mathbb {E}}\bigg \{\sup _{0\le s \le \delta }| \int \limits _0^{s_r}f_e(\eta ,r(t),t)\mathrm{d}t\nonumber \\&\qquad +\int \limits _0^{s_r}g_e(\eta ,r(t),t)\mathrm{d}W(t)|^2\bigg \}\nonumber \\&\quad \le 2{\mathbb {E}}\bigg \{\sup _{0\le s \le \delta }| \int \limits _0^{s_r}f_e(\eta ,r(t),t)\mathrm{d}t|^2\}\nonumber \\&\qquad +\, 2{\mathbb {E}}\{\sup _{0\le s \le \delta }| \int \limits _0^{s_r}g_e(\eta ,r(t),t)\mathrm{d}W(t)|^2\bigg \}\nonumber \\&\quad \le 2\beta _1^2(r) \delta ^2 + 8{\mathbb {E}}\bigg \{| \int \limits _0^{\delta _r}g_e(\eta ,r(t),t)\mathrm{d}W(t)|^2\bigg \}\nonumber \\&\quad \le 2\beta _1^2(r) \delta ^2 + 8{\mathbb {E}}\bigg \{\int \limits _0^{\delta _r}| g_e(\eta ,r(t),t)|^2\mathrm{d}t\bigg \}\nonumber \\&\quad \le 2\beta _1^2(r) \delta ^2 +8\beta _2^2(r) \delta . \end{aligned}$$
(31)

Applying Chebyshev’s inequality for any \(\xi >0\), one has

$$\begin{aligned}&P\left( \sup _{0\le s \le \delta }| \eta (s_r)-\eta (0)|>\xi \right) \nonumber \\&\quad \le \frac{{\mathbb {E}}\{\sup _{0\le s \le \delta }| \eta (s_r)-\eta (0)|^2\}}{\xi ^2}\nonumber \\&\quad \le \frac{2\beta _1^2(r) \delta ^2 +8\beta _2^2(r) \delta }{\xi ^2}. \end{aligned}$$
(32)

It is not difficult to see that the map \(\eta \mapsto \alpha (| \eta |)\) is uniformly continuous on the closed ball \(B=\{\eta \in {\mathbb {R}}^{2p}:~| \eta | \le \beta (| \eta (0) |)\}\), and therefore, there is a function \(\gamma (\cdot )\in \mathcal {K}\) such that, for all \(x,~y \in B\) and all \(\nu >0\), \(| x-y| \le \gamma (\nu )\) implies \(| \alpha (|x|)-\alpha (|y|)| \le \nu \). Now, it follows from (32) that

$$\begin{aligned}&P\left\{ \sup _{0\le s \le \delta }| \alpha (|\eta (s)|)-\alpha (|\eta (0)|)| > \frac{\epsilon }{2}\right\} \nonumber \\&\quad \le P\left\{ \sup _{0\le s \le \delta }| \eta (s)-\eta (0)| > \gamma \left( \frac{\epsilon }{2}\right) \,\text {and}\right. \nonumber ~\\&\qquad \left. \sup _{0\le s \le \delta }| \eta (s)| < \beta (| \eta (0) |)\right\} \nonumber \\&\quad +\, P\left\{ \sup _{0\le s \le \delta }| \eta (s)| \ge \beta (| \eta (0) |)\right\} \nonumber \\&\quad \le P\left\{ \sup _{0\le s \le \delta }| \eta (s_{\beta (| \eta (0) |)})-\eta (0)| > \gamma \left( \frac{\epsilon }{2}\right) \right\} +\epsilon _1\nonumber \\&\quad \le \frac{ 2\beta _1^2\big (\beta (| \eta (0) |)\big ) \delta ^2 \!+\! 8\beta _2^2\big (\beta (| \eta (0) |)\big ) \delta }{\gamma ^2\left( \frac{\epsilon }{2}\right) }+\epsilon _1. \qquad \quad \end{aligned}$$
(33)

If we choose \(\epsilon _1=\frac{1}{2}\), there exists a \(\delta ^*=\delta ^*(| \eta (0) |,\frac{\epsilon }{2})>0\) such that

$$\begin{aligned}&P\left\{ \sup _{0\le s \le \delta }| \alpha (|\eta (s)|)-\alpha (|\eta (0)|)| \le \frac{\epsilon }{2}\right\} \ge \frac{1}{4}, \nonumber \\&\quad \forall ~ \delta \in (0,\delta ^*]. \end{aligned}$$
(34)

A sequence of stopping times can be defined as follows:

$$\begin{aligned}&\tau _1 = \inf \{t\ge 0:~\alpha (|\eta (t)|)\ge 2\epsilon \},\\&\tau _2 = \inf \{t\ge \tau _1:~\alpha (|\eta (t)|)\le \epsilon \},\\&\tau _{2k+1} = \inf \{t\ge \tau _{2k}:~\alpha (|\eta (t)|)\ge 2\epsilon \},\\&\tau _{2k+2} = \inf \{t\ge \tau _{2k+1}:~\alpha (|\eta (t)|)\!\le \! \epsilon \}\,\, k\!=\!1,2,3,\ldots \end{aligned}$$

Note that \(\inf \emptyset =\infty \). Since \(\alpha (|\eta (\cdot )|)\) has continuous sample paths, it is easy to see that \(\tau _{2k},\tau _{2k+1}\rightarrow \infty \) almost surely as \(k\rightarrow \infty \). By the result of Step 1, we can obtain that

$$\begin{aligned} \infty&> {\mathbb {E}}\bigg \{\int \limits _0^\infty \alpha (|\eta (s)|)\mathrm{d}s\bigg \}\nonumber \\&\ge \sum _{k=1}^\infty {\mathbb {E}}\bigg \{I_{\tau _{2k}<\sigma _r}\int \limits _{\tau _{2k-1}}^{\tau _{2k}}\alpha (|\eta (s)|)\mathrm{d}s\bigg \}\nonumber \\&\ge \epsilon \sum _{k=1}^\infty {\mathbb {E}}\bigg \{I_{\tau _{2k}<\sigma _r}(\tau _{2k}-\tau _{2k-1})\bigg \}\nonumber \\&= \epsilon \sum _{k=1}^\infty {\mathbb {E}}\bigg \{I_{\tau _{2k}<\sigma _r}{\mathbb {E}}(\tau _{2k}-\tau _{2k-1}| {\mathcal {F}}_{\tau _{2k-1}})\bigg \}. \end{aligned}$$
(35)

If \(\omega \in \{\tau _{2k-1}<\sigma _r\}\cap \{\sup _{0\le s\le \delta ^*}| \alpha (|\eta (s+\tau _{2k-1})|)-\alpha (|\eta (\tau _{2k-1})|)|<\frac{\epsilon }{2}\}\), it follows from (34) that

$$\begin{aligned}&{\mathbb {E}}(\tau _{2k}\!-\!\tau _{2k-1}|{\mathcal {F}}_{\tau _{2k-1}})\!\ge \!\delta ^*P\nonumber \\&\quad \left\{ \sup _{0\le s\le \delta ^*}|\alpha (|\eta (s\!+\!\tau _{2k-1})|)-\alpha (|\eta (\tau _{2k-1})|)|<\frac{\epsilon }{2}|{\mathcal {F}}_{\tau _{2k-1}}\right\} \nonumber \\ \end{aligned}$$
(36)

where \(\delta ^*=\delta ^*(|\eta (0)|,\frac{\epsilon }{2})\). Then, we have

$$\begin{aligned} \infty > \sum _{k=1}^\infty \frac{\delta ^*}{4}\epsilon P\{\tau _{2k-1}<\sigma _r\}. \end{aligned}$$
(37)

Applying the Borel–Cantelli lemma, one has

$$\begin{aligned} P\{\tau _{2k-1}<\sigma _r\,\text {for infinitely many}~k\}=0, \end{aligned}$$
(38)

and then, it is obvious that

$$\begin{aligned} P\{\tau _{2k-1}\!<\!\infty \,\text {for infinitely many}~k~\text {and}~\sigma _r\!=\!\infty \}\!=\!0.\nonumber \\ \end{aligned}$$
(39)

Next, we prove that \(P\{\sigma _r=\infty \}\rightarrow 1\) as \(r\rightarrow \infty \).

Letting \(t\rightarrow \infty \) in (27), we have

$$\begin{aligned}&P\left( \sup _{s\ge 0}|\eta (s)|\ge r\right) \nonumber \\&\quad \le \frac{M}{\inf _{|\eta (s)| \ge r}V(\eta (\sigma _r\wedge t),r(\sigma _r\wedge t),\sigma _r\wedge t)}, \quad \forall r>0.\nonumber \\ \end{aligned}$$
(40)

It follows from (40) that

$$\begin{aligned}&P\{\sigma _r=\infty \}\ge P\left( \sup _{s\ge 0}|\eta (s)|\le r\right) \ge 1\nonumber \\&-\frac{M}{\inf _{|\eta (s)|\ge r}V(\eta (\sigma _r\wedge t),r(\sigma _r\wedge t),\sigma _r\wedge t)}, \quad \forall r>0,\nonumber \\ \end{aligned}$$
(41)

which implies that \(P\{\sigma _r=\infty \}\rightarrow 1\) as \(r\rightarrow \infty \). Noting that the set \(\{\sigma _r=\infty \}\) is increasing with respect to \(r\), we can combine this with (39) to obtain

$$\begin{aligned} P\{\tau _{2k-1}<\infty \, \text {for infinitely many}~k\}=0. \end{aligned}$$
(42)

Obviously, this contradicts (22), so \(P(D_2)=0\), and the desired result (19) of this step follows directly.

  • Step 3: By (25) and (29), we have

$$\begin{aligned} \sup _{0 \le t< \infty }| \eta (t,\omega ) |<\infty \quad \text {a.s.}. \end{aligned}$$
(43)

From the result of Step 2, there is an \(\Omega _0\subset \Omega \) with \(P(\Omega _0)=1\) such that

$$\begin{aligned}&\lim _{t\rightarrow \infty }\alpha (|\eta (t,\omega )|)=0\,\text {and}\nonumber \\&\sup _{0\le t< \infty }| \eta (t,\omega ) |<\infty , \quad \forall ~\omega \in \Omega _0. \end{aligned}$$
(44)

Now, we claim

$$\begin{aligned} \lim _{t\rightarrow \infty }\eta (t,\omega )=0, \quad \forall ~\omega \in \Omega _0. \end{aligned}$$
(45)

If it is not true, then there is an \(\bar{\omega }\in \Omega _0\) such that

$$\begin{aligned} \limsup _{t\rightarrow \infty }| \eta (t,\bar{\omega })|>0, \end{aligned}$$
(46)

and it can be deduced that there is a subsequence \(\{\eta (t_k,\bar{\omega })\}_{k\ge 1}\) of \(\{\eta (t,\bar{\omega })\}_{t\ge 0}\) such that

$$\begin{aligned} | \eta (t_k,\bar{\omega })| \ge \zeta , \quad \forall ~k\ge 1, \end{aligned}$$
(47)

for some \(\zeta >0\). Note from (43) that \(\{\eta (t_k,\bar{\omega })\}_{k\ge 1}\) is bounded. Thus, by the Bolzano–Weierstrass theorem, there exist a subsequence \(\{\bar{t}_k\}_{k\ge 1}\) of \(\{t_k\}_{k\ge 1}\) and a constant \(c\) satisfying \(c\ge \zeta \) such that

$$\begin{aligned} | \eta (\bar{t}_k,\bar{\omega })|\rightarrow c\quad \text {as}\quad k\rightarrow \infty \end{aligned}$$
(48)

and therefore, we have

$$\begin{aligned} \alpha (c)=\lim _{k\rightarrow \infty }\alpha (|\eta (\bar{t}_k,\bar{\omega })|)>0, \end{aligned}$$
(49)

which contradicts (44). So, (45) holds, which means that the solution of the stochastic system (4) is almost surely asymptotically stable. The proof is now complete. \(\square \)

Remark 2

The techniques developed in Theorem 1 can be used to deal with the problem of almost sure asymptotic stability for other nonlinear stochastic systems, such as those in [8, 22]. In particular, when Markovian switching is not considered and \(\lambda (t)=0\), the result of [8] can be seen as a special case of this paper.

Remark 3

It should be pointed out that our result can be extended to the case of nonlinear stochastic delay systems with Markovian switching. In fact, we just need to replace the condition (12) in Theorem 1 by \({\mathcal {L}}V(\eta _1,\eta _2,i,t)\le \lambda (t)-\alpha (|\eta _1|)+\bar{\alpha }(|\eta _2|)\) where \(\alpha ,~\bar{\alpha } \in {\mathcal {K}}\), and \(\alpha (|\eta _1|)>\bar{\alpha }(|\eta _1|)\).

A very general condition is given in Theorem 1, which guarantees the almost sure asymptotic stability for the state estimation problem of nonlinear stochastic system with Markovian switching. To gradually reduce the difficulty in verifying such a condition, we are going to introduce several simplified conditions by choosing different forms of Lyapunov functions.

Take the Lyapunov function

$$\begin{aligned} V(\eta ,r(t),t)=V^1(x,r(t),t)+V^2(\hat{x},r(t),t) \end{aligned}$$

where \(V^1(x,r(t),t),~V^2(\hat{x},r(t),t)\in C^{2,1}({\mathbb {R}}^{p}\times S\times {\mathbb {R}}_+;{\mathbb {R}}_+)\).

The following corollaries can be obtained from Theorem 1.

Corollary 1

Consider the stochastic system (4). Under Assumption 1, for all \((x,i,t),~(\hat{x},i,t)\in {\mathbb {R}}^{p}\times S\times {\mathbb {R}}_+\), if there exist two Lyapunov functions \(V^1(x,r(t),t),V^2(\hat{x},r(t),t)\in C^{2,1}({\mathbb {R}}^{p}\times S\times {\mathbb {R}}_+;{\mathbb {R}}_+)\), a function \(\lambda (t)\in L^1(\mathbb {R}_+;\mathbb {R}_+)\), and positive constants \(C_1\), \(C_2\), \(C_3\), \(C_4\), \(C_5\), and \(C_6\) satisfying the following conditions

$$\begin{aligned}&C_1| x |^2\le V^1(x,i,t) \le C_2| x |^2\,\text {and}\nonumber \\&\quad C_3|\hat{x} |^2\le V^2(\hat{x},i,t) \le C_4| \hat{x} |^2 \end{aligned}$$
(50)

and

$$\begin{aligned} {\mathcal {L}}V^1(x,i,t)\!+\!{\mathcal {L}}V^2(\hat{x},i,t)\!\le \! \lambda (t)\!-\!C_5| x |^2\!-\!C_6| \hat{x} |^2,\nonumber \\ \end{aligned}$$
(51)

then for any initial value \(\eta (0)=[x^T_0,0]^T\in {\mathbb {R}}^{2p}\), the solution of the stochastic system (4) is almost surely asymptotically stable.

Proof

By Theorem 1, we only need to set the Lyapunov function as \(V(\eta ,r(t),t)=V^1(x,r(t),t)+V^2(\hat{x},r(t),t)\), where \(\eta =[x^T,\hat{x}^T]^T\). It can be easily seen from (50) that \(\min \{C_1,~C_3\}| \eta |^2\le V(\eta ,i,t) \le \max \{C_2,~C_4\}| \eta |^2\), where \(\min \{C_1,~C_3\}| \eta |^2\) and \(\max \{C_2,~C_4\}| \eta |^2\) belong to \({\mathcal {K}}_\infty \) as functions of \(| \eta |\). Moreover, since

$$\begin{aligned}&\frac{\partial V^T}{\partial x}(\eta ,r(t),t)={V_x^1}^T(x,r(t),t),\\&\frac{\partial V^T}{\partial \hat{x}}(\eta ,r(t),t)={V_{\hat{x}}^2}^T(\hat{x},r(t),t),\\&\frac{\partial ^2 V}{\partial x^2}(\eta ,r(t),t)=V_{xx}^1(x,r(t),t),\\&\frac{\partial ^2 V}{\partial \hat{x}^2}(\eta ,r(t),t)=V_{\hat{x}\hat{x}}^2(\hat{x},r(t),t),\\&\frac{\partial ^2 V}{\partial x\,\partial \hat{x}^T}(\eta ,r(t),t)=\frac{\partial ^2 V}{\partial \hat{x}\,\partial x^T}(\eta ,r(t),t)=0, \end{aligned}$$

we have

$$\begin{aligned}&{\mathcal {L}}V(\eta ,i,t)={\mathcal {L}}V^1(x,i,t)+{\mathcal {L}}V^2(\hat{x},i,t)\le \lambda (t)\\&-\,C_5| x |^2-C_6| \hat{x} |^2\le \lambda (t)-\min \{C_5,~C_6\}| \eta |^2 \end{aligned}$$

where it is obvious that \(\min \{C_5,~C_6\}| \eta |^2 \in {\mathcal {K}}\). Therefore, the rest of the proof follows directly from Theorem 1. \(\square \)

In what follows, we take a more special form of the Lyapunov function in order to deduce a more easily verifiable condition. By considering \(V(\eta ,i,t)=x^TP_ix+\hat{x}^TQ_i\hat{x}\), the following corollary can be obtained, which guarantees that the solution of the stochastic system (4) is almost surely asymptotically stable for any initial value \(\eta (0)=[x^T_0,0]^T\in {\mathbb {R}}^{2p}\).

Corollary 2

Consider the stochastic system (4). Under Assumption 1, for all \((x,i,t),~(\hat{x},i,t)\in {\mathbb {R}}^{p}\times S\times {\mathbb {R}}_+\), if there exist positive definite matrices \(P_i=P_i^T>0\) and \(Q_i=Q_i^T>0\), a function \(\lambda (t)\in L^1(\mathbb {R}_+;\mathbb {R}_+)\), and positive constants \(C_1,~C_2\) satisfying the following condition

$$\begin{aligned}&2x^TP_if(x,i,t)+2\hat{x}^TQ_if(\hat{x},i,t)\nonumber \\&\quad +\,2\hat{x}^TQ_iK(i)[h(x,i,t)-h(\hat{x},i,t)]\nonumber \\&\quad +\,\mathrm{trace}[g^T(x,i,t)P_ig(x,i,t)]+\sum _{j=1}^N\gamma _{ij}x^TP_jx\nonumber \\&\quad +\sum _{j=1}^N\gamma _{ij}\hat{x}^TQ_j\hat{x}\nonumber \\&\quad \le ~ \lambda (t)-C_1| x |^2-C_2| \hat{x} |^2, \end{aligned}$$
(52)

then for any initial value \(\eta (0)=[x^T_0,0]^T\in {\mathbb {R}}^{2p}\), the solution of the stochastic system (4) is almost surely asymptotically stable.

Proof

Set \(V^1(x,i,t)=x^TP_ix\) and \(V^2(\hat{x},i,t)=\hat{x}^TQ_i\hat{x}\). Clearly, for all \((x,i,t),~(\hat{x},i,t)\in {\mathbb {R}}^{p}\times S\times {\mathbb {R}}_+\),

$$\begin{aligned}&\min \{\lambda _{\min }(P_i),~\lambda _{\min }(Q_i)\}| \eta |^2 \le V(\eta ,i,t)\\&\quad \le \max \{\lambda _{\max }(P_i),~\lambda _{\max }(Q_i)\}| \eta |^2. \end{aligned}$$

It is obvious that \(\min \{\lambda _{\min }(P_i),~\lambda _{\min }(Q_i)\}| \eta |^2,\max \{\lambda _{\max }(P_i),~\lambda _{\max }(Q_i)\}| \eta |^2 \in \mathcal {K}_\infty \). Furthermore, since

$$\begin{aligned} \frac{\partial V^T}{\partial x}(\eta ,r(t),t)\!=\!2x^TP_i, \,\, \frac{\partial V^T}{\partial \hat{x}}(\eta ,r(t),t)\!=\!2\hat{x}^TQ_i, \end{aligned}$$

and

$$\begin{aligned}&\frac{1}{2}\mathrm{trace}[g^T(x,i,t)V_{xx}^1g(x,i,t)]\\&\quad =\mathrm{trace}[g^T(x,i,t)P_ig(x,i,t)], \end{aligned}$$

we have

$$\begin{aligned}&{\mathcal {L}}V(\eta ,i,t) = 2x^TP_if(x,i,t)+2\hat{x}^TQ_if(\hat{x},i,t)\\&\quad +\,2\hat{x}^TQ_iK(i)[h(x,i,t)-h(\hat{x},i,t)]\\&\quad +\,\mathrm{trace}[g^T(x,i,t)P_ig(x,i,t)]\\&\quad +\sum _{j=1}^N\gamma _{ij}x^TP_jx+\sum _{j=1}^N\gamma _{ij}\hat{x}^TQ_j\hat{x}\\&\quad \le \lambda (t)-C_1| x |^2-C_2| \hat{x} |^2\\&\quad \le \lambda (t)-\min \{C_1,~C_2\}| \eta |^2 \end{aligned}$$

Then, the conclusion follows from Corollary 1 immediately. \(\square \)

Remark 4

In Theorem 1 and Corollaries 1 and 2, a series of sufficient conditions for the almost sure asymptotic stability of the state estimation of the nonlinear stochastic system (2) has been obtained. Nevertheless, it might be difficult to verify these conditions since nonlinear functions are involved. Next, let us see how to check the almost sure asymptotic stability for the state estimation problem when the system (2) and the state estimator (3) are both in linear forms.

If we take \(f(x,r(t),t)=A(r(t))x(t),\,g(x,r(t),t)=B(r(t))x(t)\), and \(h(x,r(t),t)=C(r(t))x(t)\), then the augmented system (4) can be expressed as follows:

$$\begin{aligned} \mathrm{d}\eta (t)=f_e(\eta (t),r(t),t)\mathrm{d}t+g_e(\eta (t),r(t),t)\mathrm{d}W(t)\nonumber \\ \end{aligned}$$
(53)

where

$$\begin{aligned}&f_e(\eta (t),r(t),t)\nonumber \\&\quad =\left[ \begin{array}{l} A(r(t))x(t)\\ A(r(t))\hat{x}(t)+K(r(t))C(r(t))[x(t)-\hat{x}(t)]\\ \end{array}\right] ,\nonumber \\&g_e(\eta (t),r(t),t)=\left[ \begin{array}{l} B(r(t))x(t)\\ 0\\ \end{array}\right] . \end{aligned}$$
(54)

Corollary 3

Consider the stochastic system (53). For all \((x,i,t),~(\hat{x},i,t)\in {\mathbb {R}}^{p}\times S\times {\mathbb {R}}_+\), suppose that there exist positive definite matrices \(P_i=P_i^T>0\) and \(Q_i=Q_i^T>0\) satisfying the following conditions

$$\begin{aligned} 2P_iA(i)+I+B^T(i)P_iB(i)+\sum _{j=1}^N\gamma _{ij}P_j \le 0 \end{aligned}$$
(55)

and

$$\begin{aligned}&2Q_iA(i)+C^T(i)K^T(i)Q^T_iQ_iK(i)C(i)\nonumber \\&\quad -2Q_iK(i)C(i)+\sum _{j=1}^N\gamma _{ij}Q_j\le 0. \end{aligned}$$
(56)

Then, for any initial value \(\eta (0)=[x^T_0,0]^T\in {\mathbb {R}}^{2p}\), the solution of the stochastic system (53) is almost surely asymptotically stable.

Proof

By replacing \(f(x,i,t),~g(x,i,t)\), and \(h(x,i,t)\) with \(A(i)x(t)\), \(B(i)x(t)\), and \(C(i)x(t)\), respectively, it can be easily verified that (55) and (56) imply (52) with \(\lambda (t)=0\). Therefore, the rest of the proof follows from Corollary 2 immediately. \(\square \)

Remark 5

It is noted that the conditions derived in Corollary 3 can be easily transformed into linear matrix inequalities (LMIs). Therefore, the state estimator for linear stochastic systems with Markovian switching can be designed by using, for example, the MATLAB LMI Toolbox.
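As an illustration of Remark 5 (our sketch, not the authors' code), the design can also be carried out with a semidefinite programming tool in Python; here cvxpy is used in place of the MATLAB LMI Toolbox. The change of variables \(Y_i=Q_iK(i)\) and the Schur complement that linearizes the quadratic term of (56) are standard devices but constitute our own reformulation, so the sketch should be read as one possible implementation of the printed conditions rather than the procedure used in the paper.

```python
# Illustrative cvxpy sketch (not the authors' code): search for P_i, Q_i and the
# gains K(i) satisfying (55)-(56), using Y_i = Q_i K(i) and a Schur complement
# on the quadratic term of (56); eps enforces strict positive definiteness.
import cvxpy as cp
import numpy as np

def design_gains(A, B, C, Gamma, eps=1e-3):
    """A, B: lists of (p,p) mode matrices; C: list of (q,p); Gamma: (N,N) generator."""
    N, p, q = len(A), A[0].shape[0], C[0].shape[0]
    P = [cp.Variable((p, p), symmetric=True) for _ in range(N)]
    Q = [cp.Variable((p, p), symmetric=True) for _ in range(N)]
    Y = [cp.Variable((p, q)) for _ in range(N)]          # Y_i = Q_i K(i)
    Ip = np.eye(p)
    cons = []
    for i in range(N):
        cons += [P[i] >> eps * Ip, Q[i] >> eps * Ip]
        # (55), symmetrized: A_i^T P_i + P_i A_i + I + B_i^T P_i B_i + sum_j gamma_ij P_j <= 0
        M55 = (A[i].T @ P[i] + P[i] @ A[i] + Ip + B[i].T @ P[i] @ B[i]
               + sum(Gamma[i, j] * P[j] for j in range(N)))
        cons += [0.5 * (M55 + M55.T) << 0]
        # (56), symmetrized; the term (Y_i C_i)^T (Y_i C_i) is handled by a Schur complement
        M56 = (A[i].T @ Q[i] + Q[i] @ A[i] - Y[i] @ C[i] - C[i].T @ Y[i].T
               + sum(Gamma[i, j] * Q[j] for j in range(N)))
        cons += [cp.bmat([[0.5 * (M56 + M56.T), C[i].T @ Y[i].T],
                          [Y[i] @ C[i], -Ip]]) << 0]
    cp.Problem(cp.Minimize(0), cons).solve()
    return [np.linalg.solve(Q[i].value, Y[i].value) for i in range(N)]  # K(i) = Q_i^{-1} Y_i
```

If the semidefinite program is feasible, the recovered \(K(i)\), together with the corresponding \(P_i\) and \(Q_i\), satisfy the conditions of Corollary 3; when the gains are already given (as in Example 2 below), it suffices to check (55) and (56) directly.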

Remark 6

Up to now, we have obtained a series of almost sure asymptotic stability criteria for the state estimation problem of a general class of nonlinear stochastic systems with Markovian switching. Theorem 1 offers a sufficient condition that guarantees the almost sure asymptotic stability of the dynamics of the estimation error. Such a sufficient condition is then decoupled into some auxiliary ones by taking special forms of Lyapunov functions. As a consequence, some simplified conditions are obtained to solve the parameter design problem of the state estimator for a linear stochastic system with Markovian switching. In the next section, two numerical examples will be given to illustrate the effectiveness of the main results.

4 Illustrative examples

In this section, we shall present two examples to demonstrate the results derived in this paper.

Example 1

(Nonlinear stochastic systems with Markovian switching) Let \(W(t)\) be a one-dimensional Brownian motion and \(r(t)\) be a right-continuous Markov chain taking values in \(S=\{1,~2\}\) with generator

$$\begin{aligned} \Gamma =(\gamma _{ij})_{2\times 2}=\left[ \begin{array}{c@{\quad }c} -1 &{} 1\\ 1 &{} -1 \end{array}\right] \end{aligned}$$

and assume that \(W(t)\) and \(r(t)\) are independent. Consider the following nonlinear stochastic system with Markovian switching

$$\begin{aligned} \left\{ \begin{array}{l} \mathrm{d}x(t)=f(x(t),r(t),t)\mathrm{d}t+g(x(t),r(t),t)\mathrm{d}W(t)\\ y(t)=h(x(t),r(t),t) \end{array}\right. \nonumber \\ \end{aligned}$$
(57)

where

$$\begin{aligned}&f(x(t),1,t)=-\frac{7}{5}\root 3 \of {x}, \quad g(x(t),1,t)=\frac{4}{3}\root 3 \of {x^2},\nonumber \\&h(x(t),1,t)=\root 3 \of {x}\cos {t^2};\nonumber \\&f(x(t),2,t)=-\frac{5}{3}\root 3 \of {x}+\frac{\root 5 \of {x}}{2\sqrt{1+t}},\nonumber \\&g(x(t),2,t)=\root 3 \of {x^2}\cos {t},\nonumber \\&h(x(t),2,t)=\root 3 \of {x}. \end{aligned}$$
(58)

Choosing the state estimator (3) with \(K(1)=1\) and \(K(2)=\frac{1}{6}\), the coefficients of the augmented system (4) are given by

$$\begin{aligned}&f_e(\eta (t),1,t)=\left[ \begin{array}{l} -\frac{7}{5}\root 3 \of {x}\\ -\frac{7}{5}\root 3 \of {\hat{x}}+\root 3 \of {x}\cos {t^2}-\root 3 \of {\hat{x}}\cos {t^2} \end{array}\right] ,\nonumber \\&g_e(\eta (t),1,t)=\left[ \begin{array}{l} \frac{4}{3}\root 3 \of {x^2}\\ 0 \end{array}\right] ,\nonumber \\&f_e(\eta (t),2,t)=\left[ \begin{array}{l} -\frac{5}{3}\root 3 \of {x}+\frac{\root 5 \of {x}}{2\sqrt{1+t}}\\ -\frac{5}{3}\root 3 \of {\hat{x}}+\frac{\root 5 \of {\hat{x}}}{2\sqrt{1+t}}+\frac{1}{6}[\root 3 \of {x}-\root 3 \of {\hat{x}}]\\ \end{array}\right] ,\nonumber \\&g_e(\eta (t),2,t)=\left[ \begin{array}{l} \root 3 \of {x^2}\cos {t}\\ 0 \end{array}\right] . \end{aligned}$$
(59)

We consider a Lyapunov function candidate \(V:{\mathbb {R}}^{2}\times S \times {\mathbb {R}}_+ \rightarrow {\mathbb {R}}_+\) given by \(V(\eta ,i,t)=|\eta |^2\) for \(i=1,2\). By the definition of \({\mathcal {L}}V\) in (6) and Young's inequality, we have

$$\begin{aligned} {\mathcal {L}}V(\eta ,1,t)&\le -\frac{47}{180}|\eta |^{\frac{4}{3}},\nonumber \\ {\mathcal {L}}V(\eta ,2,t)&\le \frac{2}{5(1+t)^5}-\frac{33}{40}|\eta |^{\frac{4}{3}}. \end{aligned}$$
(60)

It follows from the above inequality that

$$\begin{aligned} {\mathcal {L}}V(\eta ,i,t)\le \frac{2}{5(1+t)^5}-\frac{47}{180}|\eta |^{\frac{4}{3}} \end{aligned}$$
(61)

for all \((\eta ,i,t)\in {\mathbb {R}}^{2}\times S\times {\mathbb {R}}_+\).

By Theorem 1, we claim that for any given initial value \(\eta (0)=[x^T_0,0]^T\in {\mathbb {R}}^{2}\), the solution of the augmented system is almost surely asymptotically stable.
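As an informal numerical sanity check (ours, not part of the paper), the data of Example 1 can be fed into the hypothetical `simulate_augmented` helper sketched after Remark 1; sample paths of both \(x(t)\) and \(\hat{x}(t)\) are then expected to decay toward zero. All numerical choices below are for illustration only.

```python
# Illustrative instantiation of Example 1 (assumes the simulate_augmented sketch
# given after Remark 1); modes 1 and 2 of the paper are labeled 0 and 1 here.
import numpy as np

cbrt = np.cbrt                                     # real cube root
fifth = lambda v: np.sign(v) * np.abs(v) ** 0.2    # real fifth root

def f(x, i, t):
    if i == 0:                                     # mode 1
        return -7.0 / 5.0 * cbrt(x)
    return -5.0 / 3.0 * cbrt(x) + fifth(x) / (2.0 * np.sqrt(1.0 + t))

def g(x, i, t):
    if i == 0:
        return (4.0 / 3.0 * cbrt(x ** 2)).reshape(1, 1)
    return (cbrt(x ** 2) * np.cos(t)).reshape(1, 1)

def h(x, i, t):
    return cbrt(x) * np.cos(t ** 2) if i == 0 else cbrt(x)

K = {0: np.array([[1.0]]), 1: np.array([[1.0 / 6.0]])}   # K(1) = 1, K(2) = 1/6
Gamma = np.array([[-1.0, 1.0], [1.0, -1.0]])

path = simulate_augmented(f, g, h, K, Gamma, x0=np.array([2.0]), T=20.0)
print(np.abs(path[-1]))        # both |x(T)| and |xhat(T)| should be close to zero
```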

Remark 7

In general, the state estimator is not unique. For example, if we choose \(K(1)=K(2)=\frac{1}{8}\), then by (6) and Young's inequality, we also have \({\mathcal {L}}V(\eta ,1,t)\le -\frac{691}{1440}|\eta |^{\frac{4}{3}}\) and \({\mathcal {L}}V(\eta ,2,t)\le \frac{2}{5(1+t)^5}-\frac{401}{480}|\eta |^{\frac{4}{3}}\). It means that \({\mathcal {L}}V(\eta ,i,t)\le \frac{2}{5(1+t)^5}-\frac{691}{1440}|\eta |^{\frac{4}{3}}\) for all \((\eta ,i,t)\in {\mathbb {R}}^{2}\times S\times {\mathbb {R}}_+\). Then, for any given initial value \(\eta (0)=[x^T_0,0]^T\in {\mathbb {R}}^{2}\), the solution of the augmented system is almost surely asymptotically stable.

Example 2

(Linear stochastic systems with Markovian switching) Let \(W(t)\) and \(r(t)\) be chosen as in Example 1. Consider the two-dimensional linear stochastic system with Markovian switching as follows:

$$\begin{aligned} \left\{ \begin{array}{l} \mathrm{d}x(t)=A(r(t))x(t)\mathrm{d}t+B(r(t))x(t)\mathrm{d}W(t)\\ y(t)=C(r(t))x(t) \end{array}\right. \end{aligned}$$
(62)

where

$$\begin{aligned} A(1)&= \left[ \begin{array}{c@{\quad }c} -80 &{} 1\\ 2 &{} -15 \end{array}\right] , B(1)=\left[ \begin{array}{c@{\quad }c} 2 &{} 3\\ 4 &{} -4 \end{array}\right] ,\nonumber \\ C(1)&= \left[ \begin{array}{l@{\quad }l} 1 &{} 0\\ 0 &{} 0 \end{array}\right] ;\nonumber \\ A(2)&= \left[ \begin{array}{c@{\quad }c} -20 &{} 1\\ 2 &{} -11 \end{array}\right] , B(2)=\left[ \begin{array}{c@{\quad }c} 1 &{} 3\\ -2 &{} 4 \end{array}\right] ,\nonumber \\ C(2)&= \left[ \begin{array}{l@{\quad }l} 0 &{} 0\\ 0 &{} 1 \end{array}\right] . \end{aligned}$$
(63)

Choosing the state estimator (3) with \(K(1)=\left[ \begin{array}{c@{\quad }c} 10 &{} -3\\ -3 &{} 9 \end{array}\right] \) and \(K(2)=\left[ \begin{array}{l@{\quad }l} 30 &{} 1\\ 7 &{} 2 \end{array}\right] \), the coefficients of the augmented system (53) are given by

$$\begin{aligned}&f_e(\eta (t),i,t)=\left[ \begin{array}{l} A(i)x(t)\\ A(i)\hat{x}(t)+K(i)C(i)[x(t)-\hat{x}(t)] \end{array}\right] \text {and}\nonumber \\&g_e(\eta (t),i,t)=\left[ \begin{array}{l} B(i)x(t)\\ 0 \end{array}\right] \quad i=1,2. \end{aligned}$$
(64)

Consider a Lyapunov function candidate \(V:{\mathbb {R}}^{4}\times S \times \mathbb {R}_+ \rightarrow \mathbb {R}_+\) as \(V(\eta ,i,t)=x^TP_ix+\hat{x}^TQ_i\hat{x}\) for \(i=1,2\). \(P_i\) and \(Q_i\) can be chosen as follows:

$$\begin{aligned} P_1&= \left[ \begin{array}{l@{\quad }l} 1 &{} 0\\ 0 &{} 1 \end{array}\right] , Q_1=\left[ \begin{array}{l@{\quad }l} 1 &{} 0\\ 0 &{} 2 \end{array}\right] \text {and}\\ P_2&= \left[ \begin{array}{l@{\quad }l} 1 &{} 0\\ 0 &{} 2 \end{array}\right] , Q_2=\left[ \begin{array}{l@{\quad }l} 1 &{} 0\\ 0 &{} 2 \end{array}\right] . \end{aligned}$$

It is not difficult to verify that (55) and (56) are true. By Corollary 3, for any given initial value \(\eta (0)=[x^T_0,0]^T\in {\mathbb {R}}^{4}\), the solution of the augmented system is almost surely asymptotically stable.
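For completeness, the following short numpy check (ours, not part of the paper) confirms numerically that the symmetrized left-hand sides of (55) and (56) are negative semidefinite for the data above.

```python
# Illustrative verification (not from the paper) of (55) and (56) for Example 2.
import numpy as np

A = [np.array([[-80., 1.], [2., -15.]]), np.array([[-20., 1.], [2., -11.]])]
B = [np.array([[2., 3.], [4., -4.]]),    np.array([[1., 3.], [-2., 4.]])]
C = [np.array([[1., 0.], [0., 0.]]),     np.array([[0., 0.], [0., 1.]])]
K = [np.array([[10., -3.], [-3., 9.]]),  np.array([[30., 1.], [7., 2.]])]
P = [np.eye(2),                          np.diag([1., 2.])]
Q = [np.diag([1., 2.]),                  np.diag([1., 2.])]
Gamma = np.array([[-1., 1.], [1., -1.]])

def sym(M):
    return 0.5 * (M + M.T)

for i in range(2):
    lhs55 = sym(2 * P[i] @ A[i]) + np.eye(2) + B[i].T @ P[i] @ B[i] \
            + sum(Gamma[i, j] * P[j] for j in range(2))
    QKC = Q[i] @ K[i] @ C[i]
    lhs56 = sym(2 * Q[i] @ A[i]) + QKC.T @ QKC - sym(2 * QKC) \
            + sum(Gamma[i, j] * Q[j] for j in range(2))
    print("mode", i + 1,
          np.linalg.eigvalsh(lhs55).max() <= 0,
          np.linalg.eigvalsh(lhs56).max() <= 0)   # expect True, True for both modes
```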

Remark 8

As mentioned in Remark 7, here, the desired estimation gain \(K(r(t))\) is not unique. For instance, we can use \(K(1)=\left[ \begin{array}{c@{\quad }c} -5 &{} 2\\ 1 &{} 4 \end{array}\right] \) and \(K(2)=\left[ \begin{array}{c@{\quad }c} -11 &{} 4\\ -2 &{} 3 \end{array}\right] \). It is easy to verify that (55) and (56) are true. Then, for any given initial value \(\eta (0)=[x^T_0,0]^T\in {\mathbb {R}}^{4}\), the solution of the augmented system is almost surely asymptotically stable.

5 Conclusions

In this paper, we have investigated the almost sure asymptotic stability for the state estimation problem of a class of general nonlinear stochastic systems with Markovian switching. A sufficient condition that guarantees the almost sure asymptotic stability of the dynamics of the estimation error has been derived for nonlinear stochastic systems with Markovian switching. Subsequently, by specializing the Lyapunov function to particular forms, some simplified and more easily verifiable conditions have been obtained from the main result. Moreover, the almost sure asymptotic stability has been investigated for the state estimation problem of linear stochastic systems with Markovian switching as a special case. Finally, the main results of this paper have been demonstrated by two numerical examples.