1 INTRODUCTION

A high-water mark in the study of quantized feedback over data-rate-limited channels is the data-rate theorem [1]. In networked control systems (NCSs), the data-rate theorem characterizes the smallest feedback data rate above which an unstable dynamical system can be stabilized [2].

This intuitively appealing result was proved in [3–5]. It was generalized to different notions of stabilization and system models, and was also extended to multi-dimensional systems [6–8]. Control under communication constraints inevitably suffers from signal transmission delays, data packet dropouts, and measurement quantization, all of which are potential sources of instability and poor performance in control systems [9–11].

In [12], a quantized-observer-based encoding-decoding scheme was designed, which integrated state observation with encoding and decoding. The paper [13] addressed challenging issues in moving-horizon state estimation for networked control systems in the presence of multiple packet dropouts. In [14], maxmin information was used to derive tight conditions for uniformly estimating the state of a linear time-invariant system over a stationary memoryless uncertain digital channel without channel feedback. The case with a fixed data rate was considered in [15], and the case with stochastic time delay was addressed in [16]. Networked control systems may also be formulated as Markovian jump systems [17]. The problem of stability analysis and stabilization of discrete-time two-dimensional (2-D) switched systems was investigated in [18].

In this paper, we focus on data-rate limitations and address the observability and stabilizability problems for linear time-invariant systems in the presence of limited feedback data rates. We employ a time-varying coding scheme and present a lower bound on the average data rate for observability and stabilizability that is tighter than the ones given by the data-rate theorem in the literature.

The remainder of this paper is organized as follows: Section 2 introduces the problem formulation; Section 3 deals with the observability and stabilizability problems; the results of numerical simulations are presented in Section 4; conclusions are stated in Section 5.

2 PROBLEM FORMULATION

In this paper, we are concerned with the following linear time-invariant system:

$$X\left( {k + 1} \right) = AX\left( k \right) + BU\left( k \right),$$
((1))
$$Y\left( k \right) = CX\left( k \right),$$
((2))

where \(X\left( k \right) \in {{R}^{n}}\) denotes the state process, \(Y\left( k \right) \in {{R}^{p}}\) denotes the measured output, and \(U\left( k \right) \in {{R}^{q}}\) denotes the control input. A, B, and C are known constant matrices with appropriate dimensions. Similarly to [8], we set \(C = I,\) where I denotes the identity matrix, such that we have full-state observation at the encoder. Without loss of generality, we suppose that the initial state \(X\left( 0 \right)\) is a bounded, uncertain variable satisfying \(\left\| {X\left( 0 \right)} \right\| \leqslant {{\varphi }_{0}}\). Assume that the plant is unstable but the pair (A, B) is stabilizable.

Similarly to the problem statement in [8], we consider the case where sensors and controllers are geographically separated and connected by a stationary memoryless digital communication channel without data packet dropouts or time delays. The measured output \(Y\left( k \right)\) needs to be quantized, encoded, and transmitted over such a channel to the decoder. We focus on the observability and stabilizability problem under data-rate limitations, which is the most basic question in the data-rate-limited feedback control framework; the results may also be extended to many other cases.

Let \(\hat {X}\left( k \right)\) and \(V\left( k \right)\) denote the state estimate and estimation error at the decoder, respectively. Namely,

$$V\left( k \right): = X\left( k \right) - \hat {X}\left( k \right).$$
((3))

We implement a state feedback control law of the form

$$U\left( k \right) = K\hat {X}\left( k \right).$$
((4))

Both the encoder and the decoder have synchronized clocks, and have access to the quantization, coding, and control scheme. Thus, the state estimate and the control input may be obtained both at the encoder and at the decoder.

The system (1) is asymptotically observable if there exists a quantization, coding, and control scheme such that the state estimation error satisfies

$$\mathop {\lim }\limits_{k \to \infty } \left| {\left| {V\left( k \right)} \right|} \right| = 0.$$
((5))

The system (1) is asymptotically stabilizable if there exists a quantization, coding, and control scheme such that

$$\mathop {\lim }\limits_{k \to \infty } \left| {\left| {X\left( k \right)} \right|} \right| = 0.$$
((6))

The data rate \(R\left( k \right)\) denotes the number of bits transmitted at the k-th time step, which may be time-varying. Then, the average data rate is defined as

$$R = \mathop {\lim \sup }\limits_{T \to \infty } \frac{1}{T}~\mathop \sum \limits_{k = 0}^{T - 1} R\left( k \right).$$
((7))
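As a finite-horizon illustration of the definition (7), the following minimal sketch averages a hypothetical sequence of per-step rates \(R\left( k \right)\); the rate values are invented for illustration only:

```python
# Finite-horizon approximation of the average data rate (7):
# R = limsup_{T->inf} (1/T) * sum_{k=0}^{T-1} R(k).
# Hypothetical per-step rates: 4 bits when a packet is sent,
# 0 bits when the event-triggered scheme stays silent.
def average_data_rate(rates):
    """Return (1/T) * sum of the per-step rates R(0), ..., R(T-1)."""
    return sum(rates) / len(rates)

rates = [4, 0, 0, 4, 0, 0, 4, 0]  # R(k), bits/sample
print(average_data_rate(rates))   # 12/8 = 1.5 bits/sample
```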

The objective here is to derive a lower bound on the average data rate of the channel, above which there exists a quantization, coding, and control scheme such that the system (1) is asymptotically observable and asymptotically stabilizable.

3 OBSERVABILITY AND STABILIZABILITY UNDER DATA-RATE LIMITATIONS

If the system matrix A has n distinct real eigenvalues, let H be a real-valued nonsingular matrix that diagonalizes A, i.e., \(A = {{H}^{{ - 1}}}{\Lambda }H\), where \({\Lambda }: = {\text{diag}}\left[ {{{\lambda }_{1}} \ldots {{\lambda }_{n}}} \right]\) and \({{\lambda }_{1}},{{\lambda }_{2}}, \ldots ,{{\lambda }_{n}}\) denote the distinct eigenvalues of A. Otherwise, we have \({\Lambda }: = {\text{diag}}\left[ {{{J}_{1}} \ldots {{J}_{m}}} \right],\) where each \({{J}_{i}}\) \(\left( {i = 1,2, \ldots ,m} \right)\) is a Jordan block of dimension \({{n}_{i}}\). Clearly,

$${{n}_{1}} + {{n}_{2}} + \ldots + {{n}_{m}} = n.$$
((8))

We define

$$\bar {X}\left( k \right): = HX\left( k \right) = H{{\left[ {{{x}_{1}}\left( k \right)\,\,{{x}_{2}}\left( k \right)\,\, \ldots \,\,{{x}_{n}}\left( k \right)} \right]}^{T}},$$
((9))
$$\bar {U}\left( k \right): = HBU\left( k \right) = HB{{\left[ {{{u}_{1}}\left( k \right)\,\,{{u}_{2}}\left( k \right)\,\, \ldots \,\,{{u}_{n}}\left( k \right)} \right]}^{T}}.$$
((10))

Then, the system (1) can be rewritten as

$$\bar {X}\left( {k + 1} \right) = \Lambda ~\bar {X}\left( k \right) + \bar {U}\left( k \right).~$$
((11))

Furthermore, we define

$$\bar {X}\left( k \right): = {{\left[ {{{{\bar {x}}}_{1}}\left( k \right)\,\,{{{\bar {x}}}_{2}}\left( k \right)\,\, \ldots \,\,{{{\bar {x}}}_{n}}\left( k \right)} \right]}^{T}},$$
((12))
$$\bar {U}\left( k \right): = {{\left[ {{{{\bar {u}}}_{1}}\left( k \right)\,\,{{{\bar {u}}}_{2}}\left( k \right)\,\, \ldots \,\,{{{\bar {u}}}_{n}}\left( k \right)} \right]}^{T}}.$$
((13))
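The change of coordinates (9)-(13) turns (1) into the decoupled form (11) exactly when \(HA = \Lambda H\). The plain-Python sketch below checks this intertwining relation on a hypothetical 2-by-2 example (the matrices are invented for illustration):

```python
# The transform Xbar = H X yields Xbar(k+1) = Lambda Xbar(k) + H B U(k)
# precisely when H A = Lambda H. Hypothetical example: Lambda = diag(2, 3).
def matmul(P, Q):
    """Plain-Python matrix product."""
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

H = [[1.0, 1.0], [1.0, 2.0]]
Hinv = [[2.0, -1.0], [-1.0, 1.0]]   # inverse of H (det H = 1)
Lam = [[2.0, 0.0], [0.0, 3.0]]

A = matmul(matmul(Hinv, Lam), H)    # build A with eigenvalues 2 and 3
D = matmul(matmul(H, A), Hinv)      # recover Lambda: the modes decouple
print(D)                            # [[2.0, 0.0], [0.0, 3.0]]
```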

Then, the channel transmits without error \({{r}_{i}}\) bits of information on \({{\bar {x}}_{i}}\left( k \right)\) to the decoder.

Let \(\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{x} _{i}^{*}\left( k \right)\) and \({{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{x} }_{i}}\left( k \right)\) denote the prediction values of \({{\bar {x}}_{i}}\left( k \right)\) at the encoder and at the decoder, respectively. In the proof of Theorem 3.1, we will show that \({{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{x} }_{i}}\left( k \right)\) = \(\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{x} _{i}^{*}\left( k \right)\) at any time k. Then, the prediction error is defined as

$${{{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{v} }}_{i}}\left( k \right): = {{\bar {x}}_{i}}\left( k \right) - {{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{x} }_{i}}\left( k \right).$$
((14))

However, communication is not needed when the prediction error \({{{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{v} }}_{i}}\left( k \right)\) is small enough. Then, the data rate \({{R}_{i}}\left( k \right)\) corresponding to \({{\bar {x}}_{i}}\left( k \right)\) is given by

$${{R}_{i}}\left( k \right) = \left\{ \begin{gathered} 0~\left( {{\text{bits/sample}}} \right),\,\,{\text{when}}\,\,{{{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{v} }}_{i}}\left( k \right)\,\,{\text{is small enough}}; \hfill \\ {{r}_{i}}~\left( {{\text{bits/sample}}} \right),\,\,{\text{otherwise}}. \hfill \\ \end{gathered} \right.$$
((15))
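The event-triggered rule (15) can be sketched as a small predicate. A minimal sketch, assuming the concrete trigger threshold \(\frac{\alpha }{{\left| {{{\lambda }_{i}}} \right|}}{{l}_{i}}\left( k \right)\) used later in the proof of Theorem 3.1; the numeric values are hypothetical:

```python
# Event-triggered data rate (15): transmit r_i bits only when the
# prediction error leaves the inner interval phi[0, (alpha/|lam|) * l].
def rate_i(v_pred_err, l, lam_abs, alpha, r_bits):
    """Return R_i(k): 0 bits if |error| <= (alpha/|lam|)*l, else r_bits."""
    threshold = (alpha / lam_abs) * l
    return 0 if abs(v_pred_err) <= threshold else r_bits

# Hypothetical numbers: |lambda_i| = 2, alpha = 0.8, half-length l = 1.
print(rate_i(0.3, 1.0, 2.0, 0.8, 4))  # 0.3 <= 0.4 -> 0 bits
print(rate_i(0.7, 1.0, 2.0, 0.8, 4))  # 0.7 >  0.4 -> 4 bits
```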

Clearly, \(R\left( k \right) = \sum\nolimits_{i = 1}^n {{{R}_{i}}\left( k \right)} \). Then, we give the following result.

Theorem 3.1.

Consider the system (1) over the errorless channel with the data rate \(R\left( k \right)\). Let \({{\lambda }_{i}}\) denote the \(i\)th eigenvalue of the system matrix A. Let \({\Xi }\) denote the set \(\left\{ {i \in \left\{ {1,2, \ldots ,n} \right\}:~\left| {{{\lambda }_{i}}} \right| \geqslant 1} \right\}\). Then, there exists a quantization, coding, and control scheme such that the system (1) is asymptotically observable if the average data rate R of the channel satisfies the following condition:

$$R > \mathop \sum \limits_{i \in \Xi } ~\frac{1}{{~{{{\bar {k}}}_{i}}}}~\frac{{\left| {{{\lambda }_{i}}} \right| - 1}}{{\left| {{{\lambda }_{i}}} \right|}}{\text{lo}}{{{\text{g}}}_{2}}\left[ {{{{\left| {{{\lambda }_{i}}} \right|}}^{{{{{\bar {k}}}_{i}} - 1}}}\left( {\left| {{{\lambda }_{i}}} \right| - 1} \right)} \right]~\left( {{\text{bits}}/{\text{sample}}} \right),$$
((16))

with

$${{\bar {k}}_{i}}: = \frac{{{{r}_{i}} - {\text{lo}}{{{\text{g}}}_{2}}\left( {\left| {{{\lambda }_{i}}} \right| - 1} \right)}}{{{\text{lo}}{{{\text{g}}}_{2}}\left| {{{\lambda }_{i}}} \right|}}.$$
((17))

Proof. In this paper, we suppose that the initial state \(X\left( 0 \right)\) is a bounded, uncertain variable satisfying \(\left\| {X\left( 0 \right)} \right\| \leqslant {{\varphi }_{0}} < \infty \), where \({{\varphi }_{0}}\) is a known constant. Then, we define \({{\bar {\varphi }}_{0}}: = \left\| H \right\|{{\varphi }_{0}},\) and obtain

$${{\bar {x}}_{i}}\left( 0 \right) \in \phi \left[ {0,{{{\bar {\varphi }}}_{0}}} \right],~\,\,\,i = 1,2, \ldots ,n,$$
((18))

where \(\phi \left[ {c,l} \right]\) represents the interval \(\left\{ {x \in R:\left| {x - c} \right| \leqslant l} \right\}\) for \(c \in R\) and \(l \geqslant 0\). Both the encoder and the decoder set the initial prediction values

$$\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{x} _{i}^{*}\left( 0 \right) = {{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{x} }_{i}}\left( 0 \right) = 0.$$
((19))

Clearly, the initial prediction error is given by

$${{{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{v} }}_{i}}\left( 0 \right) \in \phi \left[ {0,{{{\bar {\varphi }}}_{0}}} \right],{\;}\,\,\,i = 1,2, \ldots ,n.$$
((20))

First, we consider the case where the system matrix A has n distinct real eigenvalues, and may rewrite the system (11) as

$${{\bar {x}}_{i}}\left( {k + 1} \right) = {{\lambda }_{i}}{{\bar {x}}_{i}}\left( k \right) + {{\bar {u}}_{i}}\left( k \right),\,\,\,~i = 1,2, \ldots ,n.$$
((21))

For any time k, we assume that the encoder has access to

$${{\bar {x}}_{i}}\left( k \right) \in \phi \left[ {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{x} _{i}^{*}\left( k \right),l_{i}^{*}\left( k \right)} \right],$$
((22))

where \(l_{i}^{*}\left( k \right)\) and \(\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{x} _{i}^{*}\left( k \right)\) denote the half-length and midpoint of the bound of \({{\bar {x}}_{i}}\left( k \right)\) at the encoder, respectively. Furthermore, we also assume that the decoder has access to

$${{\bar {x}}_{i}}\left( k \right) \in \phi \left[ {{{{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{x} }}_{i}}\left( k \right),{{l}_{i}}\left( k \right)} \right],$$
((23))

where \({{l}_{i}}\left( k \right)\) and \({{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{x} }_{i}}\left( k \right)\) denote the half-length and midpoint of the bound of \({{\bar {x}}_{i}}\left( k \right)\) at the decoder, respectively. We stress that the encoder and the decoder must synchronously update their states and work together. We further assume that, for any time k,

$$\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{x} _{i}^{*}\left( k \right) = {{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{x} }_{i}}\left( k \right),$$
((24))

and

$$l_{i}^{*}\left( k \right) = {{l}_{i}}\left( k \right),$$
((25))

hold. Clearly, it follows that

$${{{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{v} }}_{i}}\left( k \right) \in \phi \left[ {0,{{l}_{i}}\left( k \right)} \right].$$
((26))

For the case \(\left| {{{\lambda }_{i}}} \right| \geqslant 1\), the half-length of the bound of \({{\bar {x}}_{i}}\left( k \right)\) and of the prediction error \({{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{v} }_{i}}\left( k \right)\) grows by the factor \(\left| {{{\lambda }_{i}}} \right|\) at each step due to the system dynamics. To reduce it, information on the plant state needs to be transmitted to the controller.

We define \(\alpha \in \left( {0,1} \right)\), and divide the interval \(\left[ { - {{l}_{i}}\left( k \right),{{l}_{i}}\left( k \right)} \right]\) into three subintervals:

$$\left[ { - \frac{\alpha }{{\left| {{{\lambda }_{i}}} \right|}}{{l}_{i}}\left( k \right),~\frac{\alpha }{{\left| {{{\lambda }_{i}}} \right|}}{{l}_{i}}\left( k \right)} \right],~\left[ { - {{l}_{i}}\left( k \right),~ - \frac{\alpha }{{\left| {{{\lambda }_{i}}} \right|}}{{l}_{i}}\left( k \right)} \right),~\left( {\frac{\alpha }{{\left| {{{\lambda }_{i}}} \right|}}{{l}_{i}}\left( k \right),~{{l}_{i}}\left( k \right)} \right].$$
((27))

For the case with \({{{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{v} }}_{i}}\left( k \right) \in \phi \left[ {0,~\frac{\alpha }{{\left| {{{\lambda }_{i}}} \right|}}{{l}_{i}}\left( k \right)} \right]\), the prediction error \({{{{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{v} }}_{i}}\left( k \right) }\) is so small that no data packet on \({{\bar {x}}_{i}}\left( k \right)\) needs to be transmitted to the controller. In contrast, for the case with \({{{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{v} }}_{i}}\left( k \right) \notin \phi \left[ {0,~\frac{\alpha }{{\left| {{{\lambda }_{i}}} \right|}}{{l}_{i}}\left( k \right)} \right]\), the prediction error \({{{{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{v} }}_{i}}\left( k \right) }\) is so large that the value of \({{\bar {x}}_{i}}\left( k \right)\) needs to be quantized, encoded, and transmitted to the decoder over the communication channel.

We define

$$\tilde {X}\left( k \right): = H\hat {X}\left( k \right) = {{\left[ {{{{\tilde {x}}}_{1}}\left( k \right)\,\,{{{\tilde {x}}}_{2}}\left( k \right)\,\, \ldots \,\,{{{\tilde {x}}}_{n}}\left( k \right)} \right]}^{T}}.$$
((28))

Let \({{\vec {x}}_{i}}\left( k \right)\) and \({{{\vec {v}}}_{i}}\left( k \right)\) denote the quantization value and quantization error of \({{\bar {x}}_{i}}\left( k \right)\), respectively. Let \({{\eta }_{i}}\left( k \right) = 1\) indicate that the data packet on \({{\bar {x}}_{i}}\left( k \right)\) is transmitted to the decoder over the communication channel at time k. Then, the decoder obtains the quantization value and sets

$${{\tilde {x}}_{i}}\left( k \right) = {{\vec {x}}_{i}}\left( k \right).$$
((29))

In contrast, let \({{\eta }_{i}}\left( k \right) = 0\) indicate that no data packet on \({{\bar {x}}_{i}}\left( k \right)\) is transmitted to the decoder. Then, the decoder cannot receive any data packet on \({{\bar {x}}_{i}}\left( k \right)\), and sets

$${{\tilde {x}}_{i}}\left( k \right) = {{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{x} }_{i}}\left( k \right).$$
((30))

Thus, we implement a state feedback control law of the form

$$U\left( k \right) = K\hat {X}\left( k \right) = K{{H}^{{ - 1}}}\tilde {X}\left( k \right) = K{{H}^{{ - 1}}}{{\left[ {{{{\tilde {x}}}_{1}}\left( k \right)\,\,{{{\tilde {x}}}_{2}}\left( k \right)\,\, \ldots \,\,{{{\tilde {x}}}_{n}}\left( k \right)} \right]}^{T}},$$
((31))

where

$${{\tilde {x}}_{i}}\left( k \right) = \left\{ {\begin{array}{*{20}{c}} {{{{\vec {x}}}_{i}}\left( k \right),\,\,\,{\text{when}}\,{{\eta }_{i}}\left( k \right) = 1,} \\ {{{{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{x} }}_{i}}\left( k \right),\,\,\,{\text{when}}\,~{{\eta }_{i}}\left( k \right) = 0.} \end{array}} \right.$$
((32))

Both the encoder and the decoder know the quantization, coding, and control policy, so they can obtain the same control input \({{\bar {u}}_{i}}\left( k \right)\).

At time \(k + 1\), the encoder and the decoder will update their states together. For the case with \({{{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{v} }}_{i}}\left( k \right) \in \phi \left[ {0,~\frac{\alpha }{{\left| {{{\lambda }_{i}}} \right|}}{{l}_{i}}\left( k \right)} \right]\), the encoder will do nothing and only update its state

$${{\bar {x}}_{i}}\left( {k + 1} \right) \in \phi \left[ {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{x} _{i}^{*}\left( {k + 1} \right),l_{i}^{*}\left( {k + 1} \right)} \right],$$
((33))

where

$$\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{x} _{i}^{*}\left( {k + 1} \right) = {{\lambda }_{i}}\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{x} _{i}^{*}\left( k \right) + {{\bar {u}}_{i}}\left( k \right),$$
((34))
$$l_{i}^{*}\left( {k + 1} \right) = \alpha l_{i}^{*}\left( k \right).$$
((35))

At the same time, the decoder will not receive any data packet on \({{\bar {x}}_{i}}\left( k \right)\), and may update its state

$${{\bar {x}}_{i}}\left( {k + 1} \right) \in \phi \left[ {{{{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{x} }}_{i}}\left( {k + 1} \right),{{l}_{i}}\left( {k + 1} \right)} \right],$$
((36))

where

$${{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{x} }_{i}}\left( {k + 1} \right) = {{\lambda }_{i}}{{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{x} }_{i}}\left( k \right) + {{\bar {u}}_{i}}\left( k \right),$$
((37))

and

$${{l}_{i}}\left( {k + 1} \right) = \alpha {{l}_{i}}\left( k \right).$$
((38))

Substituting (24) and (25) into the equalities above, we have

$$\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{x} _{i}^{*}\left( {k + 1} \right) = {{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{x} }_{i}}\left( {k + 1} \right),$$
((39))
$$l_{i}^{*}\left( {k + 1} \right) = {{l}_{i}}\left( {k + 1} \right).$$
((40))

Clearly, for this case, the encoder and the decoder can synchronously update their states and work together. Then, it is straightforward to show that

$${{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{v} }_{i}}\left( {k + 1} \right) \in \phi \left[ {0,{{l}_{i}}\left( {k + 1} \right)} \right].$$
((41))

For the case with \(~{{{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{v} }}_{i}}\left( k \right) \notin \phi \left[ {0,~\frac{\alpha }{{\left| {{{\lambda }_{i}}} \right|}}{{l}_{i}}\left( k \right)} \right]\), the channel transmits without error \({{r}_{i}}\) bits of information on \({{\bar {x}}_{i}}\left( k \right)~\) in order to reduce the prediction error. We define \({{d}_{i}}: = {{2}^{{{{r}_{i}}}}}\). Clearly, \({{r}_{i}},{{d}_{i}} \in {{\mathbb{Z}}^{ + }}\). We divide the intervals \(\left[ {\left. {{{{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{x} }}_{i}}\left( k \right) - {{l}_{i}}\left( k \right),{{{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{x} }}_{i}}\left( k \right) - \frac{\alpha }{{\left| {{{\lambda }_{i}}} \right|}}{{l}_{i}}\left( k \right)} \right)} \right.\) and \(\left. {\left( {{{{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{x} }}_{i}}\left( k \right) + \frac{\alpha }{{\left| {{{\lambda }_{i}}} \right|}}{{l}_{i}}\left( k \right),~{{{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{x} }}_{i}}\left( k \right) + {{l}_{i}}\left( k \right)} \right.} \right]\) into \(~\frac{{{{d}_{i}}}}{2}\) equal subintervals each, so that \({{\bar {x}}_{i}}\left( k \right)\) falls into one of \({{d}_{i}}\) equal subintervals. The corresponding quantization value \({{\vec {x}}_{i}}\left( k \right)\) is the midpoint of the subinterval into which \({{\bar {x}}_{i}}\left( k \right)\) falls. Then, the prediction error after quantization satisfies

$${{{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{v} }}_{i}}\left( k \right) \in \phi \left[ {0,{\;}\frac{{1 - \frac{\alpha }{{\left| {{{\lambda }_{i}}} \right|}}}}{{{{d}_{i}}}}{{l}_{i}}\left( k \right)} \right].$$
((42))

The \({{d}_{i}}\) indices corresponding to the \({{d}_{i}}\) subintervals are encoded, and converted into codewords of \({{r}_{i}}\) bits. The channel can transmit without error \({{r}_{i}}\) bits of information such that the decoder knows which subinterval \({{\bar {x}}_{i}}\left( k \right)\) falls into at time k. Then, the decoder can compute the quantization value \({{\vec {x}}_{i}}\left( k \right)\).
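A minimal sketch of this quantizer, assuming the decoder interval is centered at c with half-length l and the inner (no-transmit) region is \(\phi \left[ {c,\frac{\alpha }{{\left| {{{\lambda }_{i}}} \right|}}l} \right]\); all numeric values are hypothetical:

```python
def quantize(xbar, c, l, lam_abs, alpha, r_bits):
    """Midpoint quantizer used when the prediction error is large:
    each outer interval is split into d/2 equal subintervals; the
    codeword is the subinterval index, the value its midpoint.
    Assumes xbar lies outside the inner interval phi[c, (alpha/|lam|)*l]."""
    d = 2 ** r_bits
    a = (alpha / lam_abs) * l          # inner half-length
    w = 2.0 * (l - a) / d              # subinterval width
    if xbar < c - a:                   # left outer interval [c-l, c-a)
        j = min(int((xbar - (c - l)) / w), d // 2 - 1)
        return j, (c - l) + (j + 0.5) * w
    else:                              # right outer interval (c+a, c+l]
        j = min(int((xbar - (c + a)) / w), d // 2 - 1)
        return d // 2 + j, (c + a) + (j + 0.5) * w

# Hypothetical numbers: c = 0, l = 1, |lam| = 2, alpha = 0.8, r = 2 bits (d = 4).
idx, xq = quantize(0.9, 0.0, 1.0, 2.0, 0.8, 2)
print(idx, xq)                                 # index 3, value ~0.85
print(abs(0.9 - xq) <= (1 - 0.8 / 2.0) / 4)    # error bound (42): True
```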

At time \(k + 1\), the encoder updates its state

$${{\bar {x}}_{i}}\left( {k + 1} \right) \in \phi \left[ {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{x} _{i}^{*}\left( {k + 1} \right),l_{i}^{*}\left( {k + 1} \right)} \right],$$
((43))

where

$$\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{x} _{i}^{*}\left( {k + 1} \right) = {{\lambda }_{i}}{{\vec {x}}_{i}}\left( k \right) + {{\bar {u}}_{i}}\left( k \right),$$
((44))
$$l_{i}^{*}\left( {k + 1} \right) = \frac{{\left| {{{\lambda }_{i}}} \right| - \alpha }}{{{{d}_{i}}}}l_{i}^{*}\left( k \right).$$
((45))

At the same time, the decoder also updates its state

$${{\bar {x}}_{i}}\left( {k + 1} \right) \in \phi \left[ {{{{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{x} }}_{i}}\left( {k + 1} \right),{{l}_{i}}\left( {k + 1} \right)} \right],$$
((46))

where

$${{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{x} }_{i}}\left( {k + 1} \right) = {{\lambda }_{i}}{{\vec {x}}_{i}}\left( k \right) + {{\bar {u}}_{i}}\left( k \right),$$
((47))
$${{l}_{i}}\left( {k + 1} \right) = \frac{{\left| {{{\lambda }_{i}}} \right| - \alpha }}{{{{d}_{i}}}}{{l}_{i}}\left( k \right).$$
((48))

Substituting (24) and (25) into the equalities above, we have

$$\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{x} _{i}^{*}\left( {k + 1} \right) = {{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{x} }_{i}}\left( {k + 1} \right),$$
((49))
$$l_{i}^{*}\left( {k + 1} \right) = {{l}_{i}}\left( {k + 1} \right).$$
((50))

Clearly, for this case, the encoder and the decoder can also synchronously update their states and work together. Then, it is straightforward to show that

$${{{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{v} }}_{i}}\left( {k + 1} \right) \in \phi \left[ {0,{{l}_{i}}\left( {k + 1} \right)} \right].$$
((51))

Notice that if \({{d}_{i}}\) is large enough, it is possible that

$${{l}_{i}}\left( {k + 1} \right) = \frac{{\left| {{{\lambda }_{i}}} \right| - \alpha }}{{{{d}_{i}}}}{{l}_{i}}\left( k \right) \leqslant \frac{{{{\alpha }^{2}}}}{{\left| {{{\lambda }_{i}}} \right|}}{{l}_{i}}\left( k \right),$$
((52))

holds. If the inequality above holds, no data packet on \({{\bar {x}}_{i}}\left( {k + 1} \right)\) needs to be transmitted to the controller at time \(k + 1\) either. Arguing as before, we can show that

$${{\bar {x}}_{i}}\left( {k + 2} \right) \in \phi \left[ {{{{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{x} }}_{i}}\left( {k + 2} \right),{{l}_{i}}\left( {k + 2} \right)} \right],$$
((53))
$${{{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{v} }}_{i}}\left( {k + 2} \right) \in \phi \left[ {0,{{l}_{i}}\left( {k + 2} \right)} \right],$$
((54))

where

$${{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{x} }_{i}}\left( {k + 2} \right) = \lambda _{i}^{2}{{\vec {x}}_{i}}\left( k \right) + {{\lambda }_{i}}{{\bar {u}}_{i}}\left( k \right) + {{\bar {u}}_{i}}\left( {k + 1} \right),$$
((55))
$${{l}_{i}}\left( {k + 2} \right) = \frac{{\left| {{{\lambda }_{i}}} \right|\left( {\left| {{{\lambda }_{i}}} \right| - \alpha } \right)}}{{{{d}_{i}}}}{{l}_{i}}\left( k \right).$$
((56))

We define

$${{k}_{i}}: = \frac{{{{r}_{i}} - {\text{lo}}{{{\text{g}}}_{2}}\left( {\left| {{{\lambda }_{i}}} \right| - \alpha } \right)}}{{{\text{lo}}{{{\text{g}}}_{2}}\frac{{\left| {{{\lambda }_{i}}} \right|}}{\alpha }}},$$
((57))

such that the channel needs to transmit without error \({{r}_{i}}\) bits of information on \({{\bar {x}}_{i}}\left( {k + {{k}_{i}}} \right)\) again at time \(k + {{k}_{i}}\). Repeating the procedure above, we obtain

$${{\bar {x}}_{i}}\left( {k + {{k}_{i}}} \right) \in \phi \left[ {{{{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{x} }}_{i}}\left( {k + {{k}_{i}}} \right),{{l}_{i}}\left( {k + {{k}_{i}}} \right)} \right],$$
((58))
$${{{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{v} }}_{i}}\left( {k + {{k}_{i}}} \right) \in \phi \left[ {0,{{l}_{i}}\left( {k + {{k}_{i}}} \right)} \right],$$
((59))

where

$${{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{x} }_{i}}\left( {k + {{k}_{i}}} \right) = \lambda _{i}^{{{{k}_{i}}}}{{\vec {x}}_{i}}\left( k \right) + \mathop \sum \limits_{j = 0}^{{{k}_{i}} - 1} \lambda _{i}^{{{{k}_{i}} - j - 1}}{{\bar {u}}_{i}}\left( {k + j} \right),$$
((60))
$${{l}_{i}}\left( {k + {{k}_{i}}} \right) = \frac{{{{{\left| {{{\lambda }_{i}}} \right|}}^{{{{k}_{i}} - 1}}}\left( {\left| {{{\lambda }_{i}}} \right| - \alpha } \right)}}{{{{d}_{i}}}}{{l}_{i}}\left( k \right).$$
((61))

Notice that, if \({{r}_{i}}\) satisfies the following condition:

$${{r}_{i}} = {\text{lo}}{{{\text{g}}}_{2}}{{d}_{i}} \geqslant {{\log }_{2}}\left[ {{{{\left| {{{\lambda }_{i}}} \right|}}^{{{{k}_{i}} - 1}}}\left( {\left| {{{\lambda }_{i}}} \right| - \alpha } \right)} \right],$$
((62))

there exists \(\alpha \in \left( {0,1} \right)\) such that

$${{l}_{i}}\left( {k + {{k}_{i}}} \right) \leqslant {{\alpha }^{{{{k}_{i}}}}}~{{l}_{i}}\left( k \right),$$
((63))

holds.
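The inter-transmission interval (57) and the contraction check (61)/(63) can be evaluated numerically. A sketch with hypothetical values \(\left| \lambda \right| = 2\), \(\alpha = 0.8\), and \(r = 4\) bits:

```python
import math

# Inter-transmission interval (57) and the contraction check (61)/(63),
# with hypothetical numbers |lam| = 2, alpha = 0.8, r = 4 bits (d = 16).
lam, alpha, r = 2.0, 0.8, 4
d = 2 ** r

k_i = (r - math.log2(lam - alpha)) / math.log2(lam / alpha)   # (57)
shrink = lam ** (k_i - 1) * (lam - alpha) / d                  # (61): l(k+k_i)/l(k)
print(k_i)                      # roughly 2.83 steps between transmissions
print(shrink <= alpha ** k_i)   # the contraction (63) holds: True
```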

Notice that the equality (38) and the inequality (63) hold for any time k, and that the equalities (18) and (20) hold for any initial state \(X\left( 0 \right)\). Then, it is straightforward to show that

$${{l}_{i}}\left( k \right) \leqslant {{\alpha }^{k}}{{\bar {\varphi }}_{0}}.$$
((64))

This means that \({{l}_{i}}\left( k \right) \to 0\) as \(~k \to \infty \), and hence the state prediction error \({{{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{v} }}_{i}}\left( k \right) \to 0\). Thus, it follows that the state estimation error satisfies

$$\mathop {\lim }\limits_{k \to \infty } \left| {\left| {V\left( k \right)} \right|} \right| = 0.$$
((65))

Notice that, assuming the prediction error is uniformly distributed over its bound, the probability of the case with \({{{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{v} }}_{i}}\left( k \right) \in \phi \left[ {0,~\frac{\alpha }{{\left| {{{\lambda }_{i}}} \right|}}{{l}_{i}}\left( k \right)} \right]\) is given by

$${{p}_{0}} = p\left( {{{{{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{v} }}}_{i}}\left( k \right) \in \phi \left[ {0,~\frac{\alpha }{{\left| {{{\lambda }_{i}}} \right|}}{{l}_{i}}\left( k \right)} \right]} \right) = \frac{\alpha }{{\left| {{{\lambda }_{i}}} \right|}}~.$$
((66))

Furthermore, the probability of the case with \({{{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{v} }}_{i}}\left( k \right) \notin \phi \left[ {0,~\frac{\alpha }{{\left| {{{\lambda }_{i}}} \right|}}{{l}_{i}}\left( k \right)} \right]\) is given by

$${{p}_{r}} = p\left( {{{{{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{v} }}}_{i}}\left( k \right) \notin \phi \left[ {0,~\frac{\alpha }{{\left| {{{\lambda }_{i}}} \right|}}{{l}_{i}}\left( k \right)} \right]} \right) = \frac{{\left| {{{\lambda }_{i}}} \right| - \alpha }}{{\left| {{{\lambda }_{i}}} \right|}}.$$
((67))

We take the expectation over the data rate of the channel and obtain the average data rate

$$R = \mathop \sum \limits_{i \in {\Xi }} \left[ {{{p}_{0}} \times 0 + {{p}_{r}}\frac{1}{{{{k}_{i}}}}{{r}_{i}}} \right] \geqslant \mathop \sum \limits_{i \in {\Xi }} ~\frac{1}{{{{k}_{i}}}}\frac{{\left| {{{\lambda }_{i}}} \right| - \alpha }}{{\left| {{{\lambda }_{i}}} \right|}}{\text{lo}}{{{\text{g}}}_{2}}\left[ {{{{\left| {{{\lambda }_{i}}} \right|}}^{{{{k}_{i}} - 1}}}\left( {\left| {{{\lambda }_{i}}} \right| - \alpha } \right)} \right]\left( {{\text{bits}}/{\text{sample}}} \right).$$
((68))
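The expectation in (68) can be computed directly for a single unstable mode. A sketch with the same hypothetical values \(\left| \lambda \right| = 2\), \(\alpha = 0.8\), \(r = 4\) bits:

```python
import math

# Expected average rate (68): with probability p0 = alpha/|lam| no bits
# are sent; with probability pr = (|lam| - alpha)/|lam|, r bits are sent
# once every k_i steps. Hypothetical numbers: |lam| = 2, alpha = 0.8, r = 4.
lam, alpha, r = 2.0, 0.8, 4
k_i = (r - math.log2(lam - alpha)) / math.log2(lam / alpha)   # (57)

p0 = alpha / lam                 # (66)
pr = (lam - alpha) / lam         # (67)
R = p0 * 0 + pr * r / k_i        # expectation in (68), one unstable mode
print(p0, pr)                    # 0.4 0.6
print(R)                         # ~0.85 bits/sample, below log2(2) = 1
```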

Notice that \(\alpha {\;}\)denotes the rate of convergence. Here, we do not examine the effect of the rate of convergence on the control performance. Letting \(\alpha \) approach one, we obtain the condition (16).

For the case where the system matrix A has real eigenvalues with geometric multiplicity larger than one or complex conjugate pairs of eigenvalues, the proof proceeds along the same lines and is omitted. Furthermore, it was shown in [8] that the same results hold in these cases.

Now, we deal with the stabilizability problem for the system (1) under data-rate limitations, and give the following result.

Theorem 3.2. Consider the system (1) over the errorless channel with the data rate \(R\left( k \right)\). Let \({{\lambda }_{i}}\) denote the \(i\)th eigenvalue of the system matrix A. Let \({\Xi }\) denote the set \(\left\{ {i \in \left\{ {1,2, \ldots ,n} \right\}:~\left| {{{\lambda }_{i}}} \right| \geqslant 1} \right\}\). Suppose that there exists a control gain matrix K such that all eigenvalues of \(A + BK\) lie inside the unit circle. Then, there exists a quantization, coding, and control scheme such that the system (1) is asymptotically stabilizable if the average data rate R of the channel satisfies the following condition:

$$R > \mathop \sum \limits_{i \in \Xi } ~\frac{1}{{{{{\bar {k}}}_{i}}}}~\frac{{\left| {{{\lambda }_{i}}} \right| - 1}}{{\left| {{{\lambda }_{i}}} \right|}}{\text{lo}}{{{\text{g}}}_{2}}\left[ {{{{\left| {{{\lambda }_{i}}} \right|}}^{{{{{\bar {k}}}_{i}} - 1}}}\left( {\left| {{{\lambda }_{i}}} \right| - 1} \right)} \right]~\left( {{\text{bits}}/{\text{sample}}} \right),$$
((69))

with

$${{\bar {k}}_{i}}: = \frac{{{{r}_{i}} - {\text{lo}}{{{\text{g}}}_{2}}\left( {\left| {{{\lambda }_{i}}} \right| - 1} \right)}}{{{\text{lo}}{{{\text{g}}}_{2}}\left| {{{\lambda }_{i}}} \right|}}.$$
((70))

Proof. Consider the system (1), which can also be written as

$$X\left( {k + 1} \right) = \left( {A + BK} \right)X\left( k \right) - BKV\left( k \right).$$
((71))

Then, we obtain

$$X\left( k \right) = {{\left( {A + BK} \right)}^{k}}X\left( 0 \right) - \mathop \sum \limits_{j = 0}^{k - 1} {{\left( {A + BK} \right)}^{{k - j - 1}}}BKV\left( j \right).$$
((72))

The first term in the equality (72) goes to zero since the initial state \(X\left( 0 \right)\) is bounded and \(A + BK\) is stable. Furthermore, it follows from (65) that the second term in the equality (72) also goes to zero. Thus, it follows that

$$\mathop {\lim }\limits_{k \to \infty } \left| {\left| {X\left( k \right)} \right|} \right| = 0.$$
((73))
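A scalar numerical sketch of (71)-(73), with hypothetical values a = 2, b = 1, K = -1.5 (so a + bK = 0.5 is stable) and an estimation-error sequence decaying geometrically in the spirit of (64)-(65):

```python
# Scalar illustration of (71)-(73):
# x(k+1) = (a + b*k_gain)*x(k) - b*k_gain*v(k),
# with a hypothetical estimation error v(k) = 5 * 0.9^k -> 0.
a, b, k_gain = 2.0, 1.0, -1.5      # a + b*k_gain = 0.5, stable
x, traj = 10.0, []
for k in range(60):
    v = 5.0 * 0.9 ** k             # estimation error V(k) -> 0, cf. (65)
    x = (a + b * k_gain) * x - b * k_gain * v
    traj.append(abs(x))
print(traj[0], traj[-1])           # |x(k)| is driven toward zero
```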

In the data-rate theorem [1, 8], a necessary and sufficient condition on the average data rate for observability and stabilizability is

$$R > \sum_{i \in \Xi} \log_2 |\lambda_i|\;(\text{bits/sample}).$$
(74)
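For the example system considered in Section 4, whose unstable eigenvalues are 2.8, 4.8, and 16.5, the classical bound (74) can be evaluated directly (a minimal sketch):

```python
import math

# Evaluate the classical data-rate bound (74): the sum of log2|lambda_i|
# over the unstable eigenvalues, here 2.8, 4.8, 16.5.
eigenvalues = [2.8, 4.8, 16.5]
R_min = sum(math.log2(abs(lam)) for lam in eigenvalues if abs(lam) >= 1)
print(round(R_min, 2))  # 7.79
```

This matches the value of 7.79 bits/sample quoted for the example in Section 4.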

However, it is shown in Theorem 3.1 and Theorem 3.2 that there exists a smaller lower bound on the average data rate above which the system (1) is still asymptotically observable and asymptotically stabilizable. Thus, our result is less conservative.

4 NUMERICAL EXAMPLES AND SIMULATIONS

In this section, we present a practical example in which the three states of an unmanned ground vehicle (UGV) evolve in discrete time according to

$$\begin{bmatrix} x_1(k+1) \\ x_2(k+1) \\ x_3(k+1) \end{bmatrix} = \begin{bmatrix} 19.84 & 6.68 & 17.04 \\ -7.02 & 2.46 & -7.02 \\ -1.00 & -2.00 & 1.80 \end{bmatrix} \begin{bmatrix} x_1(k) \\ x_2(k) \\ x_3(k) \end{bmatrix} + \begin{bmatrix} u_1(k) \\ u_2(k) \\ u_3(k) \end{bmatrix}.$$
(75)
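The open-loop dynamics in (75) are unstable; a quick check (assuming numpy is available) confirms that the eigenvalues of the system matrix are 2.8, 4.8, and 16.5, consistent with the diagonal form obtained below:

```python
import numpy as np

# Eigenvalues of the UGV system matrix in (75); all three lie
# outside the unit circle, so the open-loop system is unstable.
A = np.array([[19.84,  6.68, 17.04],
              [-7.02,  2.46, -7.02],
              [-1.00, -2.00,  1.80]])
eigs = np.sort(np.linalg.eigvals(A).real)  # eigenvalues: 2.8, 4.8, 16.5
```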

The initial state \(X(0)\) is a bounded uncertain variable satisfying

$$x_1(0),\;x_2(0),\;x_3(0) \in \phi[0,10] = [-10,10].$$
(76)

The control gain is given by

$$K = \begin{bmatrix} 1.330 & -0.798 & -6.270 \\ 0 & -1.596 & 3.135 \\ -1.330 & 1.330 & 0 \end{bmatrix}.$$
(77)

Let \(\left[ \bar{x}_1(k)\;\bar{x}_2(k)\;\bar{x}_3(k) \right]^{T} = H\left[ x_1(k)\;x_2(k)\;x_3(k) \right]^{T}\), where

$$H = \begin{bmatrix} 1 & 2 & 3 \\ 1 & 2 & 1 \\ 3 & 1 & 3 \end{bmatrix}.$$
(78)

Then, we have

$$\begin{bmatrix} \bar{x}_1(k+1) \\ \bar{x}_2(k+1) \\ \bar{x}_3(k+1) \end{bmatrix} = \begin{bmatrix} 2.8 & 0 & 0 \\ 0 & 4.8 & 0 \\ 0 & 0 & 16.5 \end{bmatrix} \begin{bmatrix} \bar{x}_1(k) \\ \bar{x}_2(k) \\ \bar{x}_3(k) \end{bmatrix} + H \begin{bmatrix} u_1(k) \\ u_2(k) \\ u_3(k) \end{bmatrix}.$$
(79)
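The diagonal form (79) can be checked directly: the rows of H are left eigenvectors of the system matrix, so \(HAH^{-1}\) is diagonal (a small verification sketch):

```python
import numpy as np

# Verify the similarity transformation behind (79):
# H A H^{-1} = diag(2.8, 4.8, 16.5).
A = np.array([[19.84,  6.68, 17.04],
              [-7.02,  2.46, -7.02],
              [-1.00, -2.00,  1.80]])
H = np.array([[1.0, 2.0, 3.0],
              [1.0, 2.0, 1.0],
              [3.0, 1.0, 3.0]])
D = H @ A @ np.linalg.inv(H)
assert np.allclose(D, np.diag([2.8, 4.8, 16.5]))
```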

Here, \(\bar{x}_i(k)\) is quantized and encoded, and the channel transmits without error \(r_i\) bits of information on \(\bar{x}_i(k)\), where we set \(r_1 = 1\), \(r_2 = 2\), \(r_3 = 4\). The corresponding simulation results are given in Figs. 1 and 2. The real average data rate R is equal to 5.95 (bits/sample). However, the lower bound given by the data-rate theorem in the literature is equal to 7.79 (bits/sample) in this case. Clearly, the system is still observable and stabilizable even though the real average data rate R is smaller than that lower bound. Furthermore, the lower bound given by Theorem 3.1 and Theorem 3.2 is equal to 5.98 (bits/sample), which is only slightly larger than the real average data rate. This means that the lower bound given by Theorem 3.1 and Theorem 3.2 is sufficient and our result is less conservative.

Fig. 1. The state estimation error with R = 5.95 (bits/sample).

Fig. 2. The plant state with R = 5.95 (bits/sample).

5 CONCLUSIONS

In this paper, we discussed the important effect that data-rate limitations have on the observability and stabilizability of networked control systems. By employing a time-varying coding scheme, we obtained a tighter lower bound on the average data rate and thus less conservative results, which is especially important for practical applications. As shown in Figs. 1 and 2, both the plant state and the state estimation error converge to zero as k → ∞. Thus, the system (1) is still asymptotically observable and asymptotically stabilizable when the average data rate is less than the lower bound given by the data-rate theorem in the literature. The simulation results corroborate the theoretical analysis; in particular, the error of the average data rate is about 0.57%. The simulation results illustrate the effectiveness of the proposed quantization, coding, and control scheme.