Abstract
In this paper, the almost sure asymptotic stability is investigated for the state estimation problem of a general class of nonlinear stochastic systems with Markovian switching. A nonlinear state estimator with Markovian switching is first proposed, and then, a sufficient condition is given, which guarantees the almost sure asymptotic stability of the dynamics of the estimation error. Based on this condition, some simplified criteria are deduced by taking special forms of Lyapunov functions. Subsequently, an easy-to-verify procedure is put forward for the state estimation problem of the linear stochastic system with Markovian switching. Finally, two numerical examples are used to illustrate the effectiveness of the main results.
1 Introduction
It is well known that state estimation, or filtering, is one of the fundamental problems in communications and control systems. A state estimator is designed to reconstruct the state of a given system from its measured output. Over the past few decades, a number of effective approaches have been proposed in this research area (see e.g., [1, 3, 9, 17, 19, 24, 35]). In particular, the Kalman–Bucy filtering (KBF) technique has been widely used for the state estimation problem of linear stochastic systems [2, 18, 30]. Furthermore, to handle nonlinear systems, the extended Kalman filter (EKF) has been developed, whose idea is to linearize about the current mean and covariance. Both the KBF and the EKF have found successful applications in state estimation and machine learning problems [12, 24, 41]. These two types of filters, however, require not only the exact system model but also the statistical properties of the noise in order to achieve the desired performance. Since modeling errors and incomplete statistical information are often encountered in real-time applications, robust filtering schemes have recently received considerable research attention in order to improve the robustness of the traditional Kalman filters with respect to parameter uncertainties and external noises. The widely used robust filtering algorithms can be generally categorized as the \(H_2\) filtering method, the \(H_\infty \) filtering approach, and the mixed \(H_2/H_\infty \) filtering scheme (see e.g., [11, 28, 29, 34, 37, 39] and the references therein).
In reality, many physical systems are subject to frequent unpredictable structural changes such as random failures and repairs of the components, changes in the interconnections of subsystems, and sudden environment changes. These systems can be appropriately modeled by the so-called Markovian jump systems (MJSs) that represent an important family of models subject to abrupt variations/switches. In the past few decades, the optimal regulator, controllability, observability, stability, and stabilization problems have been extensively studied for Markovian jump linear systems (MJLSs), and a series of results has become available in the literature (see e.g., [4–6, 10, 16, 21, 38]). Considering the fact that almost all real-world systems are essentially nonlinear, nonlinear systems with Markovian jumping parameters deserve more research attention from both the theoretical and practical viewpoints, and accordingly, some promising results have been reported. For example, a robust EKF has been designed in [41] for discrete-time Markovian jump nonlinear systems with noise uncertainty. In [33], a nonlinear full-order filter has been implemented such that the dynamics of the estimation error are guaranteed to be stochastically exponentially stable in the mean square. The stochastic stability problem has been tackled in [22, 23] for nonlinear stochastic systems with Markovian switching. It should be pointed out that, so far, although the state estimation problem has been widely investigated for MJLSs, the state estimation problem of general nonlinear stochastic Markovian jump systems has gained much less research attention, probably due to the mathematical complexity involved.
On the other hand, recognizing that nonlinearity is commonly encountered in engineering practice, the stability problems for nonlinear stochastic systems have long been a focus of research. A large number of results have been published in the literature on a variety of research topics, including stochastic stability in probability [7, 25], \(p\)th moment asymptotic stability [13, 14, 23], and exponential mean square stability [26, 27, 32]. It is worth mentioning that all the results mentioned above have been concerned with the average or probability property (e.g., the mean square sense) of the performance without much consideration of the sample-path properties. However, in practical applications, it is quite common that only sample behaviors of a stochastic system can be observed. In this case, the average property (e.g., mean square stability) is somewhat too conservative to quantify the system performance. Rather than using the stability concept of the “average system” or the “ensemble of all possible systems”, it would make more practical sense to investigate the almost sure stability, which is concerned with sample path properties. Nevertheless, compared with the fruitful results on mean square stability of nonlinear stochastic systems, the corresponding results regarding almost sure asymptotic stability have received much less attention simply because of the additional theoretical difficulty. Among the few results available, necessary and sufficient conditions have been established in [40] on the almost sure stability for a class of nonlinear stochastic differential systems. In [15, 36], almost sure stability problems have been addressed for nonlinear stochastic systems with Markovian switching.
Unfortunately, to the best of the authors’ knowledge, the almost sure asymptotic stability for the state estimation problem of nonlinear stochastic systems with Markovian switching has not been fully studied despite its potential in practical application, and this situation motivates our present investigation.
Summarizing the above discussions, in this paper, we aim to investigate the almost sure asymptotic stability for the state estimation problem of a general class of nonlinear stochastic systems with Markovian switching. The main contributions of this paper lie in the following aspects. (1) A right-continuous Markov chain on the probability space and general nonlinearity are utilized to model the system that may experience probabilistic abrupt changes in nonlinear system structure. (2) The almost sure asymptotic stability is, for the first time, investigated for the state estimation problem of nonlinear stochastic system with Markovian switching. (3) An easy-to-verify sufficient condition is given for the state estimation problem of linear stochastic system with Markovian switching. The rest of this paper is outlined as follows. In Sect. 2, the nonlinear state estimator with Markovian switching is proposed and the problem under consideration is formulated. In Sect. 3, the main results are given to analyze the almost sure asymptotic stability for the state estimation problem of a general class of nonlinear stochastic systems with Markovian switching, and the corresponding results for linear systems are obtained as a corollary. In Sect. 4, two numerical examples are employed to demonstrate the effectiveness of the main results obtained. Finally, we conclude the paper in Sect. 5.
Notation The notation used here is fairly standard except where otherwise stated. \({\mathbb R}^{n}\) and \({\mathbb {R}}^{n\times m}\) denote, respectively, the \(n\)-dimensional Euclidean space and the set of all \(n\times m\) real matrices, and \({\mathbb {R}}_+=[0,+\infty )\). For a vector \(x=(x_1,x_2,\ldots , x_n)^T\in {\mathbb {R}}^n\), \(|x|=(\sum _{i=1}^nx_i^2)^{\frac{1}{2}}\) denotes the Euclidean norm. \(a \vee b=\max \{a, b\}\) and \(a \wedge b=\min \{a, b\}\). Moreover, let \((\Omega ,{\mathcal {F}},{\{{\mathcal {F}}_t\}}_{t \ge 0},\mathbf {P})\) be a complete probability space with a natural filtration \(\{{\mathcal {F}}_t\}_{t \ge 0}\) satisfying the usual conditions (i.e., it is right continuous, and \({\mathcal {F}}_0\) contains all \(\mathbf {P}\)-null sets). \({\mathbb E}\{x\}\) stands for the expectation of the stochastic variable \(x\) with respect to the given probability measure \(\mathbf {P}\). \(C^{2,1}({\mathbb {R}}^{n}\times S\times {\mathbb {R}}_+;{\mathbb {R}}_+)\) denotes the class of all nonnegative functions \(V(x,i,t)\) on \({\mathbb {R}}^{n}\times S\times {\mathbb {R}}_+\) that are twice continuously differentiable in \(x\) and once in \(t\). \(L^1({\mathbb {R}}_+;{\mathbb {R}}_+)\) denotes the family of functions \(\lambda :~{\mathbb {R}}_+\rightarrow {\mathbb {R}}_+\) such that \(\int _0^\infty \lambda (t)\mathrm{d}t<\infty \). \(\mathcal {K}\) denotes the class of functions \(\gamma :~{\mathbb {R}}_+\rightarrow {\mathbb {R}}_+\) that are continuous and strictly increasing with \(\gamma (0)=0\). \({\mathcal {K}}_\infty \) denotes the family of all functions \(\gamma \in \mathcal {K}\) such that \(\gamma (x)\rightarrow \infty \) as \(x\rightarrow \infty \).
2 Problem formulation and preliminaries
Let \(r(t)\) be a right-continuous Markov chain on the probability space taking values in the finite state space \(S=\{1,2,\dots ,N\}\) with generator \(\Gamma =(\gamma _{ij})_{N\times N}\) given by
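(The display omitted here can be recovered from the description that follows; it is the standard transition-probability definition of the generator:)

```latex
\mathbf {P}\{r(t+\Delta )=j \mid r(t)=i\}=
\begin{cases}
\gamma _{ij}\Delta +o(\Delta ), & \text{if}~i\ne j,\\
1+\gamma _{ii}\Delta +o(\Delta ), & \text{if}~i=j,
\end{cases}
```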
where \(\Delta >0\) and \(\lim _{\Delta \rightarrow 0}\frac{o(\Delta )}{\Delta }=0\). Here \(\gamma _{ij}\ge 0\) is the transition rate from \(i\) to \(j\) if \(i\ne j\), and \(\gamma _{ii}=-\sum _{j\ne i}\gamma _{ij}\).
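As an aside, such a chain can be sampled directly from \(\Gamma \): it holds in state \(i\) for an exponentially distributed time with rate \(-\gamma _{ii}\) and then jumps to \(j\ne i\) with probability \(\gamma _{ij}/(-\gamma _{ii})\). A minimal sketch (the two-state generator below is an illustrative placeholder, and states are indexed from 0 rather than from 1):

```python
import numpy as np

def simulate_markov_chain(gamma, i0, T, rng):
    """Sample a path of a right-continuous Markov chain with generator gamma on [0, T].

    Returns the jump times and the state held from each jump time onward.
    """
    times, states = [0.0], [i0]
    t, i = 0.0, i0
    while True:
        rate = -gamma[i, i]               # holding rate in state i
        if rate <= 0:                     # absorbing state: no further jumps
            break
        t += rng.exponential(1.0 / rate)  # exponential holding time
        if t >= T:
            break
        probs = gamma[i].copy()
        probs[i] = 0.0
        probs /= rate                     # jump distribution gamma_ij / (-gamma_ii)
        i = rng.choice(len(probs), p=probs)
        times.append(t)
        states.append(int(i))
    return times, states

# Illustrative two-state generator (each row sums to zero)
Gamma = np.array([[-1.0, 1.0],
                  [2.0, -2.0]])
rng = np.random.default_rng(0)
times, states = simulate_markov_chain(Gamma, 0, 10.0, rng)
```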
Consider the following nonlinear stochastic system with Markovian switching:
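(The display omitted here, consistent with the signatures of \(f\), \(g\), and \(h\) given below, plausibly reads:)

```latex
\mathrm{d}x(t)=f(x(t),r(t),t)\,\mathrm{d}t+g(x(t),r(t),t)\,\mathrm{d}W(t),\qquad
y(t)=h(x(t),r(t),t),
```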
with initial value \(x(0)=x_0\in {\mathbb {R}}^p\) and \(r(0)=i_0\in S\), where \(x(t)\in {\mathbb {R}}^p\) is the state vector, \(y(t)\in {\mathbb {R}}^q\) is the actual measured output vector, \(W(t)= (w_1(t),\dots ,w_m(t))^T\) is an \(m\)-dimensional Brownian motion defined on the complete probability space \((\Omega ,{\mathcal {F}},\{{\mathcal {F}}_t\}_{t \ge 0},\mathbf {P})\) and independent of the Markov chain \(r(\cdot )\), and \(f: {\mathbb {R}}^p\times S\times {\mathbb {R}}_+\rightarrow {\mathbb {R}}^p\), \(g: {\mathbb {R}}^p\times S\times {\mathbb {R}}_+\rightarrow {\mathbb {R}}^{p\times m}\), \(h: {\mathbb {R}}^p\times S\times {\mathbb {R}}_+\rightarrow {\mathbb {R}}^q\) are nonlinear functions with \(f(0,i,t)=0\), \(g(0,i,t)=0\), and \(h(0,i,t)=0\).
We start with constructing the following state estimator for system (2):
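(The display omitted here presumably takes the Luenberger-type form, with the gain \(K(r(t))\) acting on the output error:)

```latex
\mathrm{d}\hat{x}(t)=\Big[f(\hat{x}(t),r(t),t)+K(r(t))\big(y(t)-h(\hat{x}(t),r(t),t)\big)\Big]\,\mathrm{d}t,
```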
with initial value \(\hat{x}(0)=0\), where \(\hat{x}(t)\) is the state estimate and \(K(r(t))\) is the estimation gain to be determined.
Setting \(\eta (t)=[x^T(t),\hat{x}^T(t)]^T\), we obtain an augmented system as follows:
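(Writing \(F\) and \(G\) for the stacked drift and diffusion specified in the display that follows, the augmented dynamics presumably take the compact form:)

```latex
\mathrm{d}\eta (t)=F(\eta (t),r(t),t)\,\mathrm{d}t+G(\eta (t),r(t),t)\,\mathrm{d}W(t),
```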
where
Assumption 1
All of \(f\), \(g\), and \(h\) are locally Lipschitz continuous in \(x\in {\mathbb {R}}^p\) uniformly in \(t\in {\mathbb {R}}_+\); that is, for each \(R>0\), there exists a constant \(C_R\ge 0\) such that
for any \((t,i)\in {\mathbb {R}}_+\times S\) and \(x_1,x_2 \in {\mathbb {R}}^p\) with \(| x_1| \vee | x_2| \le R\).
Remark 1
Suppose that Assumption 1 holds and recall that \(f(0,i,t)=0,\,g(0,i,t)=0\) and \(h(0,i,t)=0\). Then, for any initial value \(\eta (0)=[x^T_0,0]^T\in {\mathbb {R}}^{2p}\), it is not difficult to prove that there exists a unique solution \(\eta (t)\) to stochastic system (4).
For \(V\in C^{2,1}({\mathbb {R}}^{2p}\times S\times {\mathbb {R}}_+;{\mathbb {R}}_+)\), introduce the infinitesimal generator \({\mathcal {L}}V\): \({\mathbb {R}}^{2p}\times S\times {\mathbb {R}}_+\rightarrow \mathbb {R}\) by
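(The display omitted here is, in the standard form for switching diffusions, with \(F\) and \(G\) the drift and diffusion of the augmented system (4):)

```latex
\mathcal {L}V(\eta ,i,t)=V_t(\eta ,i,t)+V_\eta (\eta ,i,t)F(\eta ,i,t)
+\tfrac{1}{2}\,\mathrm{trace}\big[G^T(\eta ,i,t)V_{\eta \eta }(\eta ,i,t)G(\eta ,i,t)\big]
+\sum _{j=1}^{N}\gamma _{ij}V(\eta ,j,t),
```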
where \(V_t(\eta ,i,t)=\frac{\partial V(\eta ,i,t)}{\partial t},\,V_\eta (\eta ,i,t)=\big (\frac{\partial V(\eta ,i,t)}{\partial \eta _1},\dots ,\frac{\partial V(\eta ,i,t)}{\partial \eta _{2p}}\big )\) and \(V_{\eta \eta }(\eta ,i,t)=\big (\frac{\partial ^2 V(\eta ,i,t)}{\partial \eta _j \partial \eta _k}\big )_{{2p}\times {2p}}\).
Definition 1
The solution of the augmented system (4) is said to be almost surely asymptotically stable if, for all \(i_0\in S\) and \(\eta (0)\in {\mathbb {R}}^{2p}\), the following holds
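(The display omitted here is the sample-path limit, in the same form as invoked in the proof of Theorem 1:)

```latex
\mathbf {P}\Big\{\lim _{t\rightarrow \infty }|\eta (t)|=0\Big\}=1.
```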
The main purpose of this paper is to design a desired state estimator of the form (3) for the stochastic system (2) such that the solution of the augmented system (4) is almost surely asymptotically stable.
3 Main results
Firstly, let us give the following lemmas which will be used in the proof of our main results.
Lemma 1
[20] (Nonnegative semimartingale convergence) Let \(A_1(t)\) and \(A_2(t)\) be two continuous adapted increasing processes on \(t\ge 0\) with \(A_1(0)=A_2(0)=0\) almost surely (a.s. for short), let \(M(t)\) be a real-valued continuous local martingale with \(M(0)=0\) a.s., and let \(\zeta \) be a nonnegative \({\mathcal {F}}_0\)-measurable random variable such that \(\mathbb {E}\{\zeta \} < \infty \). Denote \(X(t)=\zeta +A_1(t)-A_2(t)+M(t)\) for all \(t\ge 0\). If \(X(t)\) is nonnegative, then
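(The display omitted here is the event inclusion, in the notation explained next:)

```latex
\Big\{\lim _{t\rightarrow \infty }A_1(t)<\infty \Big\}\subset
\Big\{\lim _{t\rightarrow \infty }X(t)~\text{exists and is finite}\Big\}\cap
\Big\{\lim _{t\rightarrow \infty }A_2(t)<\infty \Big\}\quad \text{a.s.},
```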
where \(C\subset D~\text {a.s.}\) means \(\mathbf {P}(C\cap D^c)=0\). In particular, if \(\lim _{t\rightarrow \infty }A_1(t)<\infty ~\text {a.s.}\), then
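(The display omitted here is exactly the statement paraphrased in the next sentence:)

```latex
\lim _{t\rightarrow \infty }X(t)<\infty ,\qquad
\lim _{t\rightarrow \infty }A_2(t)<\infty ,\qquad
\lim _{t\rightarrow \infty }M(t)<\infty \quad \text{a.s.}
```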
That is, all of the three processes \(X(t),~A_2(t)\), and \(M(t)\) converge to finite random variables with probability one.
Lemma 2
[31, 36] (Generalized Itô’s formula) If \(V\in C^{2,1}({\mathbb {R}}^{2p}\times S\times {\mathbb {R}}_+;{\mathbb {R}}_+)\), then for any \(t\ge 0\), the generalized Itô’s formula is given as
where \(\mu (\mathrm{d}t,\mathrm{d}m)=\nu (\mathrm{d}t,\mathrm{d}m)-n(\mathrm{d}m)\mathrm{d}t\) is a martingale measure, function \(l: \mathbb {R}\times S\rightarrow \mathbb {R}\) is defined as
with \(\Delta _{12}=[0,\gamma _{12}),~\Delta _{13}=[\gamma _{12},\gamma _{12}+\gamma _{13}),\dots ,\Delta _{1N}=[\sum _{j=2}^{N-1}\gamma _{1j},\sum _{j=2}^{N}\gamma _{1j}),~\Delta _{21}=[\sum _{j=2}^{N}\gamma _{1j},\sum _{j=2}^{N}\gamma _{1j}+\gamma _{21}),\dots , ~\Delta _{2N}=[\sum _{j=2}^{N}\gamma _{1j}+\sum _{j=1,j\ne 2}^{N-1}\gamma _{2j},\sum _{j=2}^{N}\gamma _{1j}+\sum _{j=1,j\ne 2}^{N}\gamma _{2j})\) and so on. Moreover, \(\mathrm{d}r(t)=\int _{\mathbb {R}}l(r(t^-),y)\nu (\mathrm{d}t,\mathrm{d}y)\), where \(\nu (\mathrm{d}t,\mathrm{d}y)\) is a Poisson random measure with intensity \(\mathrm{d}t\times n(\mathrm{d}y)\), in which \(n\) is the Lebesgue measure on \(\mathbb {R}\).
In particular, taking the expectation on both sides of (8), we can have the following useful lemma.
Lemma 3
[23] Let \(V(\eta (t),r(t),t)\in C^{2,1}({\mathbb {R}}^{2p}\times S\times {\mathbb {R}}_+;{\mathbb {R}}_+)\) and \(\tau _1,~\tau _2\) be bounded stopping times such that \(0\le \tau _1\le \tau _2~\text {a.s.}\). If \(V(\eta (t),r(t),t)\) and \({\mathcal {L}}V(\eta (t),r(t),t)\) are bounded on \(t\in [\tau _1,\tau _2]~\text {a.s.}\), then we have
The following theorem provides a sufficient condition under which the solution of the augmented system (4) is almost surely asymptotically stable.
Theorem 1
Consider the stochastic system (4). For all \(\eta \in {\mathbb {R}}^{2p}\), \(t\ge 0\) and \(i\in S\), let Assumption 1 hold and suppose that there exist a Lyapunov function \(V(\eta ,r(t),t)\in C^{2,1}({\mathbb {R}}^{2p}\times S\times {\mathbb {R}}_+;{\mathbb {R}}_+)\), a function \(\lambda (t)\in L^1(\mathbb {R}_+;\mathbb {R}_+)\), and functions \(\alpha _1,~\alpha _2\in \mathcal {K}_\infty \) and \(\alpha \in {\mathcal {K}}\) satisfying the following two conditions
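(The first omitted display, condition (11), can be recovered from its use in the proof; it is the usual \(\mathcal {K}_\infty \) sandwich bound:)

```latex
\alpha _1(|\eta |)\le V(\eta ,i,t)\le \alpha _2(|\eta |)\tag{11}
```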
and
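(The second omitted display, condition (12), is the dissipation inequality used in Step 1 of the proof and generalized in Remark 3:)

```latex
\mathcal {L}V(\eta ,i,t)\le \lambda (t)-\alpha (|\eta |).\tag{12}
```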
Then, for any initial value \(\eta (0)=[x^T_0,0]^T\in {\mathbb {R}}^{2p}\), the solution of the stochastic system (4) is almost surely asymptotically stable.
Proof
We take three steps to prove the assertion in this theorem.
Step 1: We shall prove that
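(The display omitted here can be recovered from its restatement at the start of Step 2; it is the pathwise integrability claim:)

```latex
\int _0^\infty \alpha (| \eta (s)|)\,\mathrm{d}s<\infty \quad \text{a.s.}
```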
For \(V(\eta ,r(t),t)\in C^{2,1}({\mathbb {R}}^{2p}\times S\times {\mathbb {R}}_+;{\mathbb {R}}_+)\), using the generalized Itô’s formula, one can show that
Furthermore, it follows from (11) that
Since \(\lambda (t)\in L^1(\mathbb {R}_+;\mathbb {R}_+)\), one can see from Lemma 1 that
Taking the expectations on both sides of (15) and letting \(t\rightarrow \infty \), one obtains that
which implies
Step 2: We shall show that
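(The display omitted here is the claim established at the end of this step:)

```latex
\lim _{t\rightarrow \infty }\alpha (|\eta (t)|)=0\quad \text{a.s.}
```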
As \(\alpha \in \mathcal {K}\) and \(\int _0^\infty \alpha (| \eta (s)|)\mathrm{d}s<\infty \, \text {a.s.}\), it is straightforward to see that
Decompose the sample space into two mutually exclusive events as follows:
Now, we claim \(P(D_2)=0\), and hence, \(P(D_1)=1\), which implies the desired result immediately since \(\alpha \in \mathcal {K}\). If this is not true, there exists \(\epsilon >0\) such that
Define the stopping time \(\sigma _r=\inf \{t\ge 0:~|\eta (t)| \ge r\}\). By the generalized Itô’s formula, one can derive that
By considering \(\lambda (t)\in L^1(\mathbb {R}_+;\mathbb {R}_+)\), there is a constant \(M>0\) such that
It follows from condition (11) that
We can obtain from (11) and (24) that
which yields
As (25) holds, for any given \(\epsilon _1\), there exists a \({\mathcal {K}}_\infty \) function \(\beta (\cdot )\) such that
It follows readily from (27) and (28) that
For any \(r>0,~i\in S\), two functions \(\beta _1\) and \(\beta _2\) can be defined as follows:
Denote \(s_r=\min \{s,\sigma _r\}\) and \(\delta _r=\min \{\delta ,\sigma _r\}\). By the \(C_p\) inequality, the Itô isometry, and Doob’s maximal inequality, we obtain
Applying Chebyshev’s inequality for any \(\xi >0\), one has
It is not difficult to see that \(\eta \mapsto \alpha (|\eta |)\) with \(\alpha (\cdot )\in {\mathcal {K}}\) is uniformly continuous on the closed ball \(B=\{\eta \in {\mathbb {R}}^{2p}:~| \eta | \le \beta (| \eta _0 |)\}\), and therefore, there is a function \(\gamma (\cdot )\in \mathcal {K}\) such that, for all \(x,~y \in B\) and all \(\nu >0\), \(| x-y| \le \gamma (\nu )\) implies \(| \alpha (|x|)-\alpha (|y|)| \le \nu \). Now, it follows from (32) that
If we choose \(\epsilon _1=\frac{1}{2}\), there exists a \(\delta ^*=\delta ^*(| \eta (0) |,\frac{\epsilon }{2})>0\) such that
A sequence of stopping times can be defined as follows:
Note that \(\inf \emptyset =\infty \). Since \(\alpha \in {\mathcal {K}}\), it is easy to see that \(\tau _{2k},\tau _{2k+1}\rightarrow \infty \) almost surely as \(k\rightarrow \infty \). By the result of Step 1, we can obtain that
If \(\omega \in \{\tau _{2k-1}<\sigma _r\}\cap \{\sup _{0\le s\le \delta ^*}| \alpha (|\eta (s+\tau _{2k-1})|)-\alpha (|\eta (\tau _{2k-1})|)|<\frac{\epsilon }{2}\}\), it follows from (34) that
where \(\delta ^*=\delta ^*(|\eta (0)|,\frac{\epsilon }{2})\). Then, we have
Applying the Borel–Cantelli lemma, one has
and then, it is obvious that
Next, we prove that \(P\{\sigma _r=\infty \}\rightarrow 1\) as \(r\rightarrow \infty \).
Letting \(t\rightarrow \infty \) in (22), we have
It follows from (40) that
which implies that \(P\{\sigma _r=\infty \}\rightarrow 1\) as \(r\rightarrow \infty \). Since the set \(\{\sigma _r=\infty \}\) is increasing with \(r\), we thus obtain \(P(\lim _{t\rightarrow \infty } \alpha (|\eta (t)|)=0)=1\), and it is not difficult to see that
Obviously, this contradicts (22), which yields the desired result of this step directly.
Step 3: From the result of Step 2, there is an \(\Omega _0\subset \Omega \) with \(P(\Omega _0)=1\) such that
Now, we claim
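(The display omitted here, numbered (45) below, is the almost sure convergence of the sample paths:)

```latex
\lim _{t\rightarrow \infty }\eta (t,\omega )=0\quad \text{for all}~\omega \in \Omega _0.\tag{45}
```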
If it is not true, then there is an \(\bar{\omega }\in \Omega _0\) such that
and it can be deduced that there is a subsequence \(\{\eta (t_k,\bar{\omega })\}_{k\ge 1}\) of \(\{\eta (t,\bar{\omega })\}_{t\ge 0}\) such that
for some \(\zeta >0\). Note from (43) that \(\{\eta (t_k,\bar{\omega })\}_{k\ge 1}\) is bounded. Thus, by Bolzano–Weierstrass theorem, there is an increasing subsequence \(\{\bar{t}_k\}_{k\ge 1}\) and a constant \(c\) satisfying \(c\ge \zeta \) such that
and therefore, we have
which contradicts (46). So, (45) holds, which means that the solution of the stochastic system (4) is almost surely asymptotically stable. The proof is now complete. \(\square \)
Remark 2
The techniques developed in Theorem 1 can be used to deal with the almost sure asymptotic stability problem for other nonlinear stochastic systems such as those in [8, 22]. In particular, when Markovian switching is not considered, the result of [8] can be seen as a special case of ours with \(\lambda (t)=0\).
Remark 3
It should be pointed out that our result can be extended to the case of nonlinear stochastic delay systems with Markovian switching. In fact, we just need to replace the condition (12) in Theorem 1 by \({\mathcal {L}}V(\eta _1,\eta _2,i,t)\le \lambda (t)-\alpha (|\eta _1|)+\bar{\alpha }(|\eta _2|)\) where \(\alpha ,~\bar{\alpha } \in {\mathcal {K}}\), and \(\alpha (|\eta _1|)>\bar{\alpha }(|\eta _1|)\).
A very general condition is given in Theorem 1, which guarantees the almost sure asymptotic stability for the state estimation problem of nonlinear stochastic system with Markovian switching. To gradually reduce the difficulty in verifying such a condition, we are going to introduce several simplified conditions by choosing different forms of Lyapunov functions.
Take the Lyapunov function
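(The display omitted here is stated explicitly in the proof of Corollary 1:)

```latex
V(\eta ,r(t),t)=V^1(x,r(t),t)+V^2(\hat{x},r(t),t),
```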
where \(V^1(x,r(t),t),~V^2(\hat{x},r(t),t)\in C^{2,1}({\mathbb {R}}^{p}\times S\times {\mathbb {R}}_+;{\mathbb {R}}_+)\).
The following corollaries can be obtained from Theorem 1.
Corollary 1
Consider the stochastic system (4). Under Assumption 1, for all \((x,i,t),~(\hat{x},i,t)\in {\mathbb {R}}^{p}\times S\times {\mathbb {R}}_+\), if there exist two Lyapunov functions \(V^1(x,r(t),t),~V^2(\hat{x},r(t),t)\in C^{2,1}({\mathbb {R}}^{p}\times S\times {\mathbb {R}}_+;{\mathbb {R}}_+)\), a function \(\lambda (t)\in L^1(\mathbb {R}_+;\mathbb {R}_+)\), and constants \(C_1,~C_2,~C_3,~C_4,~C_5,~C_6\in {\mathbb {R}}_+\) satisfying the following conditions
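(The first omitted display, condition (50), can be recovered from the bounds invoked in the proof of this corollary:)

```latex
C_1|x|^2\le V^1(x,i,t)\le C_2|x|^2,\qquad
C_3|\hat{x}|^2\le V^2(\hat{x},i,t)\le C_4|\hat{x}|^2,\tag{50}
```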
and
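(The second omitted display is, consistent with the estimate derived in the proof, the dissipation condition:)

```latex
\mathcal {L}V^1(x,i,t)+\mathcal {L}V^2(\hat{x},i,t)\le \lambda (t)-C_5|x|^2-C_6|\hat{x}|^2,
```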
then for any initial value \(\eta (0)=[x^T_0,0]^T\in {\mathbb {R}}^{2p}\), the solution of the stochastic system (4) is almost surely asymptotically stable.
Proof
By Theorem 1, we only need to set the Lyapunov function as \(V(\eta ,r(t),t)=V^1(x,r(t),t)+V^2(\hat{x},r(t),t)\), where \(\eta =[x^T,\hat{x}^T]^T\). It can be easily seen from (50) that \(\min \{C_1,~C_3\}| \eta |^2\le V(\eta ,i,t) \le \max \{C_2,~C_4\}| \eta |^2\), where both bounds, viewed as functions of \(|\eta |\), belong to \({\mathcal {K}}_\infty \). Moreover, since
we have
where \(\min \{C_5,~C_6\}| \eta |^2\), viewed as a function of \(|\eta |\), obviously belongs to \({\mathcal {K}}\). Therefore, the rest of the proof follows directly from Theorem 1. \(\square \)
In what follows, we take a more special form of the Lyapunov function in order to deduce a further simplified condition. By considering \(V(\eta ,i,t)=x^TP_ix+\hat{x}^TQ_i\hat{x}\), the following corollary can be obtained, which guarantees that the solution of the stochastic system (4) is almost surely asymptotically stable for any initial value \(\eta (0)=[x^T_0,0]^T\in {\mathbb {R}}^{2p}\).
Corollary 2
Consider the stochastic system (4). Under Assumption 1, for all \((x,i,t),~(\hat{x},i,t)\in {\mathbb {R}}^{p}\times S\times {\mathbb {R}}_+\), if there exist positive definite matrices \(P_i=P_i^T>0\) and \(Q_i=Q_i^T>0\), a function \(\lambda (t)\in L^1(\mathbb {R}_+;\mathbb {R}_+)\), and constants \(C_1,~C_2\in {\mathbb {R}}_+\) satisfying the following condition
then for any initial value \(\eta (0)=[x^T_0,0]^T\in {\mathbb {R}}^{2p}\), the solution of the stochastic system (4) is almost surely asymptotically stable.
Proof
Set \(V^1(x,i,t)=x^TP_ix\) and \(V^2(\hat{x},i,t)=\hat{x}^TQ_i\hat{x}\). Clearly, for all \((x,i,t),~(\hat{x},i,t)\in {\mathbb {R}}^{p}\times S\times {\mathbb {R}}_+\),
It is obvious that \(\min \{\lambda _{\min }(P_i),~\lambda _{\min }(Q_i)\}| \eta |^2,\max \{\lambda _{\max }(P_i),~\lambda _{\max }(Q_i)\}| \eta |^2 \in \mathcal {K}_\infty \). Furthermore, since
and
we have
The conclusion now follows from Corollary 1. \(\square \)
Remark 4
In Theorem 1 and Corollaries 1 and 2, a series of sufficient conditions for almost sure asymptotic stability of the state estimation of the nonlinear stochastic system (2) has been obtained. Nevertheless, it might be difficult to verify these conditions since nonlinear functions are involved. Next, let us see how to check the almost sure asymptotic stability for the state estimation problem when the system (2) and the state estimator (3) both take linear forms.
If we take \(f(x,r(t),t)=A(r(t))x(t),\,g(x,r(t),t)=B(r(t))x(t)\), and \(h(x,r(t),t)=C(r(t))x(t)\), then the augmented system (4) can be expressed as follows:
where
Corollary 3
Consider the stochastic system (53). For all \((x,i,t),~(\hat{x},i,t)\in {\mathbb {R}}^{p}\times S\times {\mathbb {R}}_+\), suppose that there exist positive definite matrices \(P_i=P_i^T>0\) and \(Q_i=Q_i^T>0\) and constants \(C_1,~C_2\in {\mathbb {R}}_+\) satisfying the following conditions
and
Then, for any initial value \(\eta (0)=[x^T_0,0]^T\in {\mathbb {R}}^{2p}\), the solution of the stochastic system (53) is almost surely asymptotically stable.
Proof
By replacing \(f(x,i,t),~g(x,i,t)\), and \(h(x,i,t)\) with \(A(i)x(t)\), \(B(i)x(t)\), and \(C(i)x(t)\), respectively, it can be readily verified that (55) and (56) imply (51). Therefore, the rest of the proof follows from Corollary 2 immediately. \(\square \)
Remark 5
It is noted that the conditions derived in Corollary 3 can be easily transformed into linear matrix inequalities (LMIs). Therefore, the state estimator for the linear stochastic system with Markovian switching can be designed by using the MATLAB LMI Toolbox.
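To illustrate the kind of numerical check involved, the sketch below verifies a representative mode-coupled Lyapunov inequality by an eigenvalue test. The matrices \(A(i)\), \(B(i)\), \(P_i\), and the generator are illustrative placeholders, and the inequality shown is a generic form of this type of condition, not the paper's exact displays (55) and (56):

```python
import numpy as np

def is_neg_def(M, tol=1e-9):
    """Check that a (symmetrized) matrix is negative definite."""
    return np.max(np.linalg.eigvalsh((M + M.T) / 2)) < -tol

# Illustrative data: two stable modes and a valid generator (rows sum to zero).
A = [np.array([[-3.0, 0.5], [0.0, -4.0]]),
     np.array([[-5.0, 1.0], [0.5, -3.0]])]
B = [0.2 * np.eye(2), 0.1 * np.eye(2)]
Gamma = np.array([[-1.0, 1.0], [2.0, -2.0]])
P = [np.eye(2), np.eye(2)]           # candidate Lyapunov matrices

# Generic mode-coupled Lyapunov-type inequality (illustrative form):
#   A_i^T P_i + P_i A_i + B_i^T P_i B_i + sum_j gamma_ij P_j < 0  for each mode i
ok = all(
    is_neg_def(A[i].T @ P[i] + P[i] @ A[i] + B[i].T @ P[i] @ B[i]
               + sum(Gamma[i, j] * P[j] for j in range(2)))
    for i in range(2)
)
```

In practice one would search for the \(P_i\) with a semidefinite-programming solver rather than guess them; the eigenvalue test above only certifies a given candidate.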
Remark 6
Up to now, we have obtained a series of almost sure asymptotic stability criteria for the state estimation problem of a general class of nonlinear stochastic systems with Markovian switching. Theorem 1 offers a sufficient condition that guarantees the almost sure asymptotic stability of the dynamics of the estimation error. Such a sufficient condition is decoupled into some auxiliary ones by taking special forms of Lyapunov functions. As a consequence, some simplified conditions are obtained to solve the parameter design problem of the state estimator for a linear stochastic system with Markovian switching. In the next section, two numerical examples will be given to illustrate the effectiveness of the main results.
4 Illustrative examples
In this section, we shall present two examples to demonstrate the results derived in this paper.
Example 1
(Nonlinear stochastic systems with Markovian switching) Let \(W(t)\) be a one-dimensional Brownian motion and \(r(t)\) be a right-continuous Markov chain taking values in \(S=\{1,~2\}\) with generator
and assume that \(W(t)\) and \(r(t)\) are independent. Consider the following nonlinear stochastic system with Markovian switching
where
Choosing a state estimator (3) with \(K(1)=1\) and \(K(2)=\frac{1}{6}\), the coefficients of the augmented system (4) are given by
We consider a Lyapunov function candidate \(V:{\mathbb {R}}^{2}\times S \times {\mathbb {R}}_+ \rightarrow {\mathbb {R}}_+\) as \(V(\eta ,i,t)=|\eta |^2\) for \(i=1,2\). By the generalized Itô formula (8) and Young’s inequality, we have
It follows from the above inequality that
for all \((\eta ,i,t)\in {\mathbb {R}}^{2}\times S\times {\mathbb {R}}_+\).
By Theorem 1, we conclude that for any given initial value \(\eta (0)=[x^T_0,0]^T\in {\mathbb {R}}^{2}\), the solution of the augmented system is almost surely asymptotically stable.
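As a numerical sanity check of the behavior predicted by Theorem 1, one can simulate a switching system and its estimator by the Euler–Maruyama method. The coefficients below are hypothetical placeholders (the example's actual \(f\), \(g\), \(h\) displays are not reproduced here); only the gains \(K(1)=1\) and \(K(2)=\frac{1}{6}\) come from the text:

```python
import numpy as np

rng = np.random.default_rng(1)
dt, T = 1e-3, 5.0
n = int(T / dt)

# Hypothetical mode-dependent coefficients (placeholders, not the example's):
#   f(x,i) = -a[i]*x - x**3,  g(x,i) = b[i]*x,  h(x,i) = c[i]*x
a, b, c = [2.0, 3.0], [0.1, 0.2], [1.0, 1.0]
K = [1.0, 1.0 / 6.0]                           # gains from the example
Gamma = np.array([[-1.0, 1.0], [2.0, -2.0]])   # illustrative generator

x, xh, i = 1.0, 0.0, 0                         # state, estimate, mode (0-indexed)
for _ in range(n):
    dW = rng.normal(0.0, np.sqrt(dt))          # Brownian increment
    y = c[i] * x                               # measured output
    x = x + (-a[i] * x - x**3) * dt + b[i] * x * dW
    xh = xh + (-a[i] * xh - xh**3 + K[i] * (y - c[i] * xh)) * dt
    if rng.random() < -Gamma[i, i] * dt:       # first-order mode-switch probability
        i = 1 - i

err = abs(x - xh)
```

For this strongly stable choice of coefficients, both the state and the estimate decay toward zero along the sample path, so the estimation error becomes negligible by the final time.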
Remark 7
In general, the state estimator is not unique. For example, if we choose \(K(1)=K(2)=\frac{1}{8}\), then by the generalized Itô formula (8) and Young’s inequality, we also have \({\mathcal {L}}V(\eta ,1,t)\le -\frac{691}{1440}|\eta |^{\frac{4}{3}}\) and \({\mathcal {L}}V(\eta ,2,t)\le \frac{2}{5(1+t)^5}-\frac{401}{480}|\eta |^{\frac{4}{3}}\). It means that \({\mathcal {L}}V(\eta ,i,t)\le \frac{2}{5(1+t)^5}-\frac{691}{1440}|\eta |^{\frac{4}{3}}\) for all \((\eta ,i,t)\in {\mathbb {R}}^{2}\times S\times {\mathbb {R}}_+\). Then, for any given initial value \(\eta (0)=[x^T_0,0]^T\in {\mathbb {R}}^{2}\), the solution of the augmented system is almost surely asymptotically stable.
Example 2
(Linear stochastic systems with Markovian switching) Let \(W(t)\) and \(r(t)\) be chosen as in Example 1. Consider the two-dimensional linear stochastic system with Markovian switching as follows:
where
By choosing a state estimator (3) with \(K(1)=\left[ \begin{array}{cc} 10 & -3\\ -3 & 9 \end{array}\right] \) and \(K(2)=\left[ \begin{array}{cc} 30 & 1\\ 7 & 2 \end{array}\right] \), the coefficients of the augmented system (53) are given by
Consider a Lyapunov function candidate \(V:{\mathbb {R}}^{4}\times S \times \mathbb {R}_+ \rightarrow \mathbb {R}_+\) as \(V(\eta ,i,t)=x^TP_ix+\hat{x}^TQ_i\hat{x}\) for \(i=1,2\). \(P_i\) and \(Q_i\) can be chosen as follows:
It is not difficult to verify that (55) and (56) hold. By Corollary 3, for any given initial value \(\eta (0)=[x^T_0,0]^T\in {\mathbb {R}}^{4}\), the solution of the augmented system is almost surely asymptotically stable.
Remark 8
As mentioned in Remark 7, the desired estimation gain \(K(r(t))\) is not unique. For instance, we can use \(K(1)=\left[ \begin{array}{cc} -5 & 2\\ 1 & 4 \end{array}\right] \) and \(K(2)=\left[ \begin{array}{cc} -11 & 4\\ -2 & 3 \end{array}\right] \). It is easy to verify that (55) and (56) hold. Then, for any given initial value \(\eta (0)=[x^T_0,0]^T\in {\mathbb {R}}^{4}\), the solution of the augmented system is almost surely asymptotically stable.
5 Conclusions
In this paper, we have investigated the almost sure asymptotic stability for the state estimation problem of a general class of nonlinear stochastic systems with Markovian switching. A sufficient condition that guarantees the almost sure asymptotic stability of the dynamics of the estimation error has been derived. Subsequently, by restricting the Lyapunov function in this condition to special, easily verifiable forms, some more simplified criteria have been obtained directly from the main results. Moreover, the almost sure asymptotic stability has been investigated for the state estimation problem of linear stochastic systems with Markovian switching as a special case. Finally, the main results of this paper have been demonstrated by two numerical examples.
References
Ackerson, G.A., Fu, K.: On state estimation in switching environments. IEEE Trans. Autom. Control 15(1), 10–17 (1970)
Anderson, B.D.O., Moore, J.B.: Optimal Filtering. Prentice-Hall, Englewood Cliffs, NJ (1979)
Balasubramaniam, P., Lakshmanan, S., Theesar, S.J.S.: State estimation for Markovian jumping recurrent neural networks with interval time-varying delays. Nonlinear Dynam. 60(4), 661–675 (2010)
Boukas, E.K., Liu, Z.K.: Robust \(H_{\infty }\) control of discrete-time Markovian jump linear systems with mode-dependent time-delays. IEEE Trans. Autom. Control 46(12), 1918–1924 (2001)
Costa, O.L.V., Guerra, S.: Robust linear filtering for discrete-time hybrid Markov linear systems. Int. J. Control 75(10), 712–727 (2002)
De Souza, C.E., Trofino, A., Barbosa, K.A.: Mode-independent \(H_{\infty }\) filters for Markovian jump linear systems. IEEE Trans. Autom. Control 51(11), 1837–1841 (2006)
Deng, H., Krstić, M.: Stochastic nonlinear stabilization - I: A backstepping design. Syst. Control Lett. 32(3), 143–150 (1997)
Deng, H., Krstić, M., Williams, R.J.: Stabilization of stochastic nonlinear systems driven by noise of unknown covariance. IEEE Trans. Autom. Control 46(8), 1237–1253 (2001)
Dong, H., Wang, Z., Ho, D.W.C., Gao, H.: Robust \(H_{\infty }\) filtering for Markovian jump systems with randomly occurring nonlinearities and sensor saturation: the finite-horizon case. IEEE Trans. Signal Process. 59(7), 3048–3057 (2011)
Dragan, V., Morozan, T.: Stability and robust stabilization to linear stochastic systems described by differential equations with Markovian jumping and multiplicative white noise. Stoch. Anal. Appl. 22(1), 33–92 (2002)
Gao, H., Chen, T.: \(H_\infty \) estimation for uncertain systems with limited communication capacity. IEEE Trans. Autom. Control 52(11), 2070–2084 (2007)
Gelb, A.: Applied Optimal Estimation. MIT Press, Cambridge, MA (1974)
Has’minskii, R.Z.: Stochastic Stability of Differential Equations. Sijthoff and Noordhoff, Alphen aan den Rijn, The Netherlands (1980)
Huang, L., Mao, X., Deng, F.: Stability of hybrid stochastic retarded systems. IEEE Trans. Circuits Syst. I: Regular Papers 55(11), 3413–3420 (2008)
Huang, L., Mao, X.: On almost sure stability of hybrid stochastic systems with mode-dependent interval delays. IEEE Trans. Autom. Control 55(8), 1946–1952 (2010)
Ji, Y., Chizeck, H.J.: Controllability, stabilizability, and continuous-time Markovian jump linear quadratic control. IEEE Trans. Autom. Control 35(7), 777–788 (1990)
Lakshmanan, S., Park, Ju H., Ji, D.H.: State estimation of neural networks with time-varying delays and Markovian jumping parameter based on passivity theory. Nonlinear Dynam. 70(2), 1407–1420 (2012)
Lewis, F.L.: Optimal Estimation. Wiley, New York (1986)
Li, N., Hu, J., Hu, J.: Exponential state estimation for delayed recurrent neural networks with sampled-data. Nonlinear Dynam. 69(1–2), 555–564 (2012)
Liptser, R.S., Shiryayev, A.N.: Theory of Martingales. Kluwer Academic, Dordrecht, The Netherlands (1989)
Liu, M., Ho, D.W.C., Niu, Y.: Stabilization of Markovian Jump linear system over networks with random communication delay. Automatica 45(2), 416–421 (2009)
Mao, X.: Stability of stochastic differential equations with Markovian switching. Stoch. Process. Appl. 79(1), 45–67 (1999)
Mao, X., Yuan, C.: Stochastic Differential Equations with Markovian Switching. Imperial College Press, London (2006)
Nørgaard, M., Poulsen, N.K., Ravn, O.: New developments in state estimation for nonlinear systems. Automatica 36(11), 1627–1638 (2000)
Pan, Z., Başar, T.: Backstepping controller design for nonlinear stochastic systems under a risk-sensitive cost criterion. SIAM J. Control Optim. 37(3), 957–995 (1999)
Reif, K., Unbehauen, R.: The extended Kalman filter as an exponential observer for nonlinear systems. IEEE Trans. Signal Process. 47(8), 2324–2328 (1999)
Reif, K., Günther, S., Yaz, E., Unbehauen, R.: Stochastic stability of the continuous-time extended Kalman filter. IEE Proc. Control Theory Appl. 147(1), 45–52 (2000)
Shen, B., Wang, Z., Hung, Y.S., Chesi, G.: Distributed \(H_{\infty }\) filtering for polynomial nonlinear stochastic systems in sensor networks. IEEE Trans. Ind. Electron. 58(5), 1971–1979 (2011)
Shen, B., Wang, Z., Liu, X.: Bounded \(H_{\infty }\) synchronization and state estimation for discrete time-varying stochastic complex networks over a finite-horizon. IEEE Trans. Neural Networks 22(1), 145–157 (2011)
Shi, P., Boukas, E.K., Agarwal, R.K.: Kalman filtering for continuous-time uncertain systems with Markovian jumping parameters. IEEE Trans. Autom. Control 44(8), 1592–1597 (1999)
Skorohod, A.V.: Asymptotic Methods in the Theory of Stochastic Differential Equations. American Mathematical Society, Providence, RI (1989)
Wang, Z., Liu, Y., Liu, X.: Exponential stabilization of a class of stochastic system with Markovian jump parameters and mode-dependent mixed time-delays. IEEE Trans. Autom. Control 55(7), 1656–1662 (2010)
Wang, Z., Lam, J., Liu, X.: Nonlinear filtering for state delayed systems with Markovian switching. IEEE Trans. Signal Process. 51(9), 2321–2328 (2003)
Wei, G., Wang, Z., Shen, B.: Error-constrained filtering for a class of nonlinear time-varying delay systems with non-Gaussian noises. IEEE Trans. Autom. Control 55(12), 2876–2882 (2010)
Wu, L., Shi, P., Gao, H.: State estimation and sliding mode control of Markovian jump singular systems. IEEE Trans. Autom. Control 55(5), 1213–1219 (2010)
Yuan, C., Mao, X.: Robust stability and controllability of stochastic differential delay equations with Markovian switching. Automatica 40(3), 343–354 (2004)
Zhang, L., Boukas, E.K., Lam, J.: Analysis and synthesis of Markov jump linear systems with time-varying delays and partially known transition probabilities. IEEE Trans. Autom. Control 53(10), 2458–2464 (2008)
Zhang, L., Lam, J.: Necessary and sufficient conditions for analysis and synthesis of Markov jump linear systems with incomplete transition descriptions. IEEE Trans. Autom. Control 55(7), 1695–1701 (2010)
Zhang, X., Zheng, Y.: Nonlinear \(H_\infty \) filtering for interconnected Markovian jump systems. J. Syst. Eng. Electron. 17(1), 138–146 (2006)
Zhang, Z., Kozin, F.: On the almost sure sample stability of nonlinear stochastic dynamic systems. IEEE Trans. Autom. Control 39(3), 560–565 (1994)
Zhu, J., Park, J., Lee, K., Spiryagin, M.: Robust extended Kalman filter of discrete-time Markovian jump nonlinear system under uncertain noise. J. Mech. Sci. Technol. 22(6), 1132–1139 (2008)
Kan, X., Shu, H. & Li, Z. Almost sure state estimation for nonlinear stochastic systems with Markovian switching. Nonlinear Dyn 76, 1591–1602 (2014). https://doi.org/10.1007/s11071-013-1231-y