
1 Introduction

The stability of Networked Control Systems (NCSs) is an important research topic in the area of control; see, e.g., [1, 11, 21]. Limited calculation and communication resources, the distribution of calculation, actuation, and sensing devices, and the related scheduling render the analysis and design of NCSs complex and challenging.

A simple model of an NCS is a sampled-data system with a single-sampling period (that is, the NCS essentially has one constant or time-varying sampling period). For such NCSs, there exists an abundant literature devoted to stability and related problems.

In particular, a lot of recent studies focus on the robust stability and stabilization problems when NCSs are subject to uncertain sampling periods and/or network-induced delays (see, for instance, the delayed-input method used in [9], the small gain approach adopted in [17], and the convex-embedding approach used in [7, 10]).

For nominal NCS models (i.e., when the sampling periods, network-induced delays, and the system matrices are all fixed), an eigenvalue-based approach was introduced in [16] for characterizing the stability domain with respect to the sampling period or network-induced delay. Such an approach represents a novelty in the domain. Recently, a hyper-sampling period, from the perspective of real-time systems, was proposed in [2, 5] (see also [4] for a more detailed introduction). Compared with its single-sampling counterpart, a hyper-sampling period consists of multiple sub-sampling periods and hence provides a more flexible and realistic sampling mechanism for NCSs.

In the hyper-sampling context, stability conditions were studied in [13], where a discrete-time model and robust analysis techniques similar to those used in [8, 18, 19] were adopted. It was shown in [13] that an NCS can be asymptotically stable with lower consumption of system resources. In [13], the feedback gain matrix is supposed to be designed a priori, and the stability region with respect to the sub-sampling periods is explicitly studied. One may naturally expect that system resources can be further saved if both the hyper-sampling period and the feedback gain matrix are considered as free design parameters. This motivates us to consider in this chapter the following co-design problem:

For an NCS under the hyper-sampling period, design the feedback gain matrices such that the obtained stabilizable region in the space of sub-sampling parameters is as large as possible.

For simplicity, we assume in this chapter that the hyper-sampling period has two sub-sampling periods \(T_1\) and \(T_2\), which practically means that a control task is executed twice in each hyper-sampling period. We believe that the method proposed and the results obtained here may be extended to the case involving more sub-sampling periods. When the hyper-sampling period has two sub-sampling periods \(T_1\) and \(T_2\), the value \( 2/(T_1 + T_2) \) corresponds to an index called the average sampling frequency (ASF). It is not hard to see that if an NCS can be stabilized by a hyper-sampling period with a smaller ASF, then less calculation and communication resources are consumed (the consumption of resources is in general proportional to the ASF). Therefore, in this chapter, we will consider how to find a stabilizable region for an NCS in the \(T_1 - T_2\) plane with the value of \(T_1 + T_2\) as large as possible.

Due to the complexity of the problem, we start with the specific case where \(T_1 = T_2\) (instead of directly studying the general case) and calculate the maximal stabilizable bound \(\overline{T} \), i.e., the NCS can be stabilized by some (not necessarily the same) feedback gain matrices if \(0<T_1 = T_2< \overline{T}\). We will show that for this specific case (corresponding to the single-sampling case), the maximal stabilizable bound \(\overline{T}\) can be easily obtained through a necessary and sufficient condition in terms of linear matrix inequalities (LMIs). We next seek the stabilizable region in the hyper-sampling case. It should be emphasized that, unlike in the single-sampling case, for a given hyper-sampling period \((T_1, T_2)\) there does not yet exist a direct way to find a stabilizing feedback gain matrix K (the corresponding conditions are in terms of nonlinear matrix inequalities).

In this chapter, we will propose an eigenvalue-based procedure to iteratively adjust the feedback gain matrix K. First, from the results for the single-sampling case, we may obtain a stabilizable region for the hyper-sampling period in the \(T_1 - T_2\) plane. This stabilizable region is denoted by \(S^{(0)}\), whose boundary is denoted by \(B^{(0)}\) and can be detected by parameter sweeping. Each point on \(B^{(0)}\) must correspond to a feedback gain matrix with which the NCS has characteristic roots located on the unit circle, called critical characteristic roots or eigenvalues. Then, we study the asymptotic behavior of these critical characteristic roots with respect to the elements of K so that we know how to adjust K in order to obtain a larger stabilizable region. In this way, we will have some new feedback gain matrices leading to a new stabilizable region \(S^{(1)}\), whose boundary is \(B^{{(1)}}\). Next, the above step may be applied to the points on \(B^{{(1)}}\) to obtain one more stabilizable region \(S^{(2)}\). By repeating this step in an iterative manner, we may obtain new stabilizable regions \(S^{(3)}\), \(S^{(4)}\), \(\ldots \) Finally, the union \({S^{(0)}} \cup {S^{(1)}} \cup \cdots \) is the overall stabilizable region we detect.

The asymptotic behavior analysis is a relatively new approach for the analysis and design of NCSs (see, e.g., [16]) and, to the best of the authors’ knowledge, has not been sufficiently exploited in the NCS community. In this chapter, we will only study the case of simple critical characteristic roots (i.e., we suppose that no multiple critical characteristic roots appear). One may refer to [14] for a general method for asymptotic behavior analysis. The proposed procedure will be illustrated by a numerical example. In addition, we can see from the example that, compared with the single-sampling period, a smaller ASF guaranteeing the NCS stability can be obtained with the hyper-sampling period. That is to say, calculation and communication resources may be saved by adopting the hyper-sampling period.

This chapter is organized as follows. In Sect. 10.2, some preliminaries are given. In Sect. 10.3, the stabilization for NCSs under the single-sampling mode is considered. A procedure for designing the feedback gain matrices under the hyper-sampling mode is proposed in Sect. 10.4. An illustrative example is given in Sect. 10.5. Finally, some concluding remarks end this chapter in Sect. 10.6.

Notations: In this chapter, the following standard notations will be used: \(\mathbb {R}\) (\(\mathbb {R}_+\)) is the set of (positive) real numbers; \(\mathbb {N}\) is the set of non-negative integers and \(\mathbb {N}_+\) is the set of positive integers. Next, I is the identity matrix with appropriate dimensions. For a matrix A, \(A'\) denotes its transpose. We denote by \(\rho (A)\) the spectral radius of matrix A. Finally, \(A > 0\) implies that A is positive definite.

2 Preliminaries

The controlled plant of a networked control system (NCS) is given by

$$\begin{aligned} \dot{x}(t) = Ax(t) + Bu(t), \end{aligned}$$
(10.1)

where x(t) and u(t) denote, respectively, the system state and control input at time t, and A and B are constant matrices with appropriate dimensions. We assume that A is not Hurwitz; otherwise, the system is open-loop stable and the stabilization problem is of less interest. At a sampling instant \(t_k\) (\(k \in \mathbb {N}\)), the control input to the plant (10.1) is updated to \(u(t_k)\). Implemented with Zero-Order-Hold (ZOH) devices, the control signal is

$$\begin{aligned} u(t) = u(t_k ),~t_k \le t < t_{k + 1}. \end{aligned}$$
(10.2)

We employ the commonly used state feedback control:

$$\begin{aligned} u(t_k ) = Kx(t_k ), \end{aligned}$$
(10.3)

where K is the feedback gain matrix, to be designed in this chapter. The closed-loop NCS can be expressed by the following discrete-time model

$$\begin{aligned} x({t_{k + 1}}) = \varPhi (T(k),K)x({t_k}), ~k \in \mathbb {N}, \end{aligned}$$
(10.4)

where \(T(k) \buildrel \varDelta \over = {t_{k + 1}} - {t_k}\) denotes the sampling period and \(\varPhi (T(k),K)\) is the transition matrix function defined by

$$\begin{aligned} \varPhi (\alpha ,\beta ) = {e^{A\alpha }} + \int _0^\alpha {{e^{A\theta }}} d\theta B\beta . \end{aligned}$$
(10.5)

Introducing \(\widetilde{A}(T(k)) = {e^{AT(k)}}\) and \(\widetilde{B}(T(k)) = \int _0^{T(k)} {{e^{A\theta }}} d\theta B\), we may rewrite \(\varPhi (T(k),K)\) as

$$\begin{aligned} \varPhi (T(k),K) = \widetilde{A}(T(k)) + \widetilde{B}(T(k))K. \end{aligned}$$
(10.6)
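For numerical experiments, \(\widetilde{A}(T)\) and \(\widetilde{B}(T)\) can be obtained together from a single matrix exponential of an augmented matrix, which avoids explicit integration and does not require A to be invertible. A minimal Python sketch (the helper name `phi` is ours, not from the chapter):

```python
import numpy as np
from scipy.linalg import expm

def phi(A, B, T, K):
    """Transition matrix Phi(T, K) = A_tilde(T) + B_tilde(T) K of (10.6).

    Both A_tilde(T) = e^{AT} and B_tilde(T) = int_0^T e^{A theta} dtheta B
    are read off from one exponential of the augmented matrix [[A, B], [0, 0]].
    """
    n, m = B.shape
    M = np.block([[A, B], [np.zeros((m, n)), np.zeros((m, m))]])
    E = expm(M * T)
    A_tilde, B_tilde = E[:n, :n], E[:n, n:]
    return A_tilde + B_tilde @ K
```

For instance, with \(A = 0\) the formula collapses to \(\varPhi (T,K) = I + TBK\), which gives a quick sanity check of the implementation.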

In the control area, the most commonly used sampling mode is the standard single-sampling mode. In particular, throughout this chapter we focus on the nominal case, where under the single-sampling mode the sampling periods \(T(k) = {t_{k+1}} - {t_k}\) are equal to a constant value, denoted by T, for all \(k \in \mathbb {N}\).

A well-known necessary and sufficient stability condition for the NCS under the standard single-sampling mode is as follows (see e.g., [1, 6]).

Lemma 1

The networked control system described by (10.1)–(10.3) with a constant sampling period T is asymptotically stable if and only if the transition matrix \(\varPhi (T, K)\) is Schur.

The stabilization problem in the single-sampling case is relatively simple to solve (details will be given in Sect. 10.3). As earlier mentioned, the main objective of this chapter is to study the stabilization under the hyper-sampling mode. In general, a hyper-sampling period is composed of \(n \in \mathbb {N}_+\) sub-sampling periods \({T_i} \in \mathbb {R}_+, i=1, \ldots , n\), as depicted in Fig. 10.1. The n sub-sampling periods are allowed to be different from each other.

Fig. 10.1 A hyper-sampling period

Under the hyper-sampling mode, the sampling instants \(t_k\) (\(k \in \mathbb {N}\)) are as depicted in Fig. 10.2. It follows that \({t_1} - {t_0} = {T_1}\) (\({t_0} = 0\)), \({t_2} - {t_1} = {T_2}\), \(\ldots \), \({t_n} - {t_{n-1}} = {T_n}\), \({t_{n+1}} - {t_{n}} = {T_1}\), \({t_{n+2}} - {t_{n+1}} = {T_2}\), \(\ldots \). That is, the sampling periods T(k) are generated periodically according to the hyper-sampling period. More precisely, they are generated by the following rule:

$$\begin{aligned} T(k) = {t_{k+1}} - {t_{k}} = \left\{ {\begin{array}{*{20}{l}} {{T_{k+1\,\bmod \,n}},k+1\,\bmod \,n \ne 0,k \in \mathbb {N},}\\ {{T_n},k+1\,\bmod \,n = 0,k \in \mathbb {N},} \end{array}} \right. \end{aligned}$$
(10.7)

where the modulo operation “\(a \bmod b\)” denotes the remainder of the division of a by b.
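Rule (10.7) simply cycles through the sub-sampling periods; with 0-based indexing the case split collapses into a single modulo operation. A small sketch (the function name is our own):

```python
def sampling_period(k, periods):
    """T(k) of rule (10.7) for sub-sampling periods T_1, ..., T_n.

    With 0-based list indexing, both branches of (10.7) reduce to
    T(k) = periods[k mod n].
    """
    n = len(periods)
    return periods[k % n]
```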

Fig. 10.2 Sampling instants under the hyper-sampling mode

Remark 1

It is easy to see that the hyper-sampling mode reduces to the single-sampling mode when \(n=1\). Thus, we may treat the single-sampling mode as a special case of the hyper-sampling mode. In general, a larger n offers more design flexibility for the hyper-sampling period. Meanwhile, the price to be paid for a larger n is the increased complexity of solving the stabilization problem.

For a hyper-sampling period with n sub-sampling periods \(T_1\), \( \ldots \), \(T_n\), we define the Average Sampling Frequency (ASF) as follows:

$$\begin{aligned} f_\mathscr {A} = \frac{n}{{\sum \limits _{i = 1}^n {{T_i}} }}. \end{aligned}$$
(10.8)

Remark 2

The concept of ASF is easy to understand: On average, in a unit of time the system state is sampled \(f_\mathscr {A}\) times (or, equivalently, the system state is sampled once every \(1/f_\mathscr {A}\) units of time). The value of \(f_\mathscr {A}\) corresponds to the calculation and communication resources consumed by a control task, since a sampling instant is associated with a series of actions including state sampling by the sensor, data transmission over the network, calculation of the control input by the controller, and updating of the control input by the actuator. The higher the ASF, the more resources are consumed.
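Definition (10.8) is a one-liner in code; this trivial sketch (our naming) is handy when comparing sampling schemes, e.g., it reproduces the single-sampling and hyper-sampling ASF values used later in the example:

```python
def average_sampling_frequency(periods):
    """ASF of (10.8): n samples per hyper-period of total length sum(T_i)."""
    return len(periods) / sum(periods)
```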

For the sake of simplicity, in this chapter we study the case \(n=2\). That is, the hyper-sampling period is assumed to contain two sub-sampling periods \(T_1\) and \(T_2\). In this context, a hyper-sampling period can be denoted by a pair \((T_1, T_2)\). In our opinion, the results of this chapter can be extended to the case with more sub-sampling periods. Following the analysis in [13], we have the following necessary and sufficient stability condition:

Lemma 2

The networked control system described by (10.1)–(10.3) with two sub-sampling periods \(T_1\) and \(T_2\) is asymptotically stable if and only if \(\varPhi ({T_1},K)\varPhi ({T_2},K)\) (or, equivalently \(\varPhi ({T_2},K)\varPhi ({T_1},K)\)) is Schur.

Lemma 2 is based on the discrete-time expression of the NCS:

$$x({t_{k + 2}}) = \varPhi ({T_2},K)\varPhi ({T_1},K)x({t_k}),$$

if k is even while \(x({t_{k + 2}}) = \varPhi ({T_1},K)\varPhi ({T_2},K)x({t_k})\) if k is odd. Next, the equivalence (from the stability point of view) between \(\varPhi ({T_1},K)\varPhi ({T_2},K)\) and \(\varPhi ({T_2},K)\varPhi ({T_1},K)\) is due to the following immediate yet important property:

Property 1

For two square matrices \(Q_1\) and \(Q_2\), the matrices \(Q_1 Q_2\) and \(Q_2 Q_1\) have the same eigenvalues.
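Property 1 is easy to confirm numerically; a quick check with random matrices, comparing the characteristic polynomials (equal polynomials mean equal spectra with multiplicities):

```python
import numpy as np

rng = np.random.default_rng(0)
Q1 = rng.standard_normal((3, 3))
Q2 = rng.standard_normal((3, 3))

# Same characteristic polynomial => Q1 Q2 and Q2 Q1 share all eigenvalues.
p12 = np.poly(Q1 @ Q2)
p21 = np.poly(Q2 @ Q1)
assert np.allclose(p12, p21)
```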

In view of Lemma 2, we define

$$\begin{aligned} {\varPhi _H}({T_1},{T_2},K) = \varPhi ({T_1},K)\varPhi ({T_2},K), \end{aligned}$$

and we know that the NCS is asymptotically stable if and only if \({\varPhi _H}({T_1},{T_2},K)\) is Schur (it is equivalent if we define \({\varPhi _H}({T_1},{T_2},K)\) as \(\varPhi ({T_2},K)\varPhi ({T_1},K)\)). Next, we clarify the notions “stabilizable hyper-sampling period”, “stabilizable point”, and “stabilizable region”, to be frequently used in this chapter.

A hyper-sampling period \((T_1, T_2)\) is called a stabilizable one, if there exists a feedback gain matrix K stabilizing the closed-loop NCS (i.e., there exists a K such that \({\varPhi _H}({T_1},{T_2},K)\) is Schur). The corresponding point, with coordinate \((T_1, T_2)\) in the \(T_1 - T_2\) parameter plane, is called a stabilizable point. A stabilizable region refers to the set of stabilizable points in the \(T_1 - T_2\) parameter plane.
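Given a candidate K, checking whether a particular point \((T_1, T_2)\) is stabilized by that K reduces, via Lemma 2, to a spectral-radius test on \(\varPhi _H\). A self-contained sketch (function names are ours; \(\varPhi \) is built with the augmented-exponential discretization of (10.5)):

```python
import numpy as np
from scipy.linalg import expm

def phi(A, B, T, K):
    # Phi(T, K) = e^{AT} + (int_0^T e^{A theta} d theta) B K, via one expm.
    n, m = B.shape
    M = np.block([[A, B], [np.zeros((m, n)), np.zeros((m, m))]])
    E = expm(M * T)
    return E[:n, :n] + E[:n, n:] @ K

def is_schur_stable(A, B, T1, T2, K):
    """True iff rho(Phi_H(T1, T2, K)) < 1, i.e. (T1, T2) is stabilized by K."""
    phi_H = phi(A, B, T1, K) @ phi(A, B, T2, K)
    return float(np.max(np.abs(np.linalg.eigvals(phi_H)))) < 1.0
```

Sweeping this test over a grid of \((T_1, T_2)\) points for a fixed K is exactly the parameter sweeping used to trace a region \(S_K\).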

Note that a stabilizable region generally corresponds to multiple different feedback gain matrices. For instance, if for a feedback gain matrix \({K_\alpha }\) (\({K_\beta }\)), the NCS is asymptotically stable when \((T_1, T_2)\) lies in a region \({S_{{K_\alpha }}}\) (\({S_{{K_\beta }}}\)), then \({S_{{K_\alpha }}}\) (\({S_{{K_\beta }}}\)) is a stabilizable region. Furthermore, \({S_{{K_\alpha }}} \cup {S_{{K_\beta }}}\) is a larger stabilizable region and a stabilizing K for all \(({T_1},{T_2}) \in {S_{{K_\alpha }}} \cup {S_{{K_\beta }}}\) does not necessarily exist.

Remark 3

According to Property 1, \((T_1^*, T_2^*)\) is a stabilizable point if and only if \((T_2^*, T_1^*)\) is a stabilizable point. Therefore, it suffices to consider only the domain with \({T_1} \le {T_2}\) (\(T_1 > 0\), \(T_2 > 0\)). For an obtained stabilizable region \(S^*\) therein, there must be a stabilizable region \(S^\sharp \) in the domain with \({T_2} \le {T_1}\) (\(T_1 > 0\), \(T_2 > 0\)), such that \(S^*\) and \(S^\sharp \) are symmetric with respect to the line \(T_1 = T_2\).

In the sequel, we will first study the stabilization problem in the case of single-sampling period. Next, we will study the stabilization problem in the case of hyper-sampling period based on the obtained results for the single-sampling period case.

3 Stabilization of NCS Under Single-Sampling Mode

Although the result given below is not new, it represents the starting point of our study in handling the case of the hyper-sampling period.

Lemma 3

Consider a networked control system described by (10.1)–(10.3) under the single-sampling mode. For a given sampling period T, the networked control system is stabilizable if and only if there exist a positive-definite matrix P and a matrix Y such that the following linear matrix inequality (LMI) is feasible

$$\begin{aligned} \left( {\begin{array}{*{20}{c}} { - P}&{}{\widetilde{A}(T)P + \widetilde{B}(T)Y}\\ {(\widetilde{A}(T)P + \widetilde{B}(T)Y)'}&{}{ - P} \end{array}} \right) < 0. \end{aligned}$$
(10.9)

If the LMI (10.9) is feasible, we have a feedback gain matrix \(K = Y{P^{ - 1}}\) with which the networked control system is asymptotically stable.

Proof

The condition of Lemma 3 can be easily developed from a discrete-time standpoint. For a given T, the NCS is stabilizable if and only if there exist a feedback gain matrix K and a positive-definite matrix P such that:

$$(\widetilde{A}(T) + \widetilde{B}(T)K)P(\widetilde{A}(T) + \widetilde{B}(T)K)' - P < 0,$$

which is equivalent to the condition:

$$\begin{aligned} (\widetilde{A}(T)P + \widetilde{B}(T)Y){P^{ - 1}}(\widetilde{A}(T)P + \widetilde{B}(T)Y)' - P < 0, \end{aligned}$$
(10.10)

where \(Y = KP\). The condition (10.10) can be equivalently transformed into the LMI form (10.9) by using the Schur complement properties (see [3]). \(\Box \)

Lemma 3 can be easily implemented by using the LMI toolbox in MATLAB. Thus, for any single-sampling period T, we may precisely determine whether the NCS is stabilizable and, if so, obtain a corresponding feedback gain matrix K.

Furthermore, by sweeping T and using Lemma 3, we may accurately find the stabilizable interval \(T \in (0,\overline{T})\) under the single-sampling mode (note that this result is without conservatism). Then, we choose some \(T_{0,i}\) such that \(0< {T_{0,1}}< {T_{0,2}}< \cdots < \overline{T} \) and for each \(T_{0,i}\) we have a stabilizing feedback gain matrix, denoted by \(K_i^{(0)}\). Each \(K_i^{(0)}\) provides a stabilizable region in the \(T_1 - T_2\) plane near \(({T_1} = {T_{0,i}},{T_2} = {T_{0,i}})\), denoted by \(S_i^{(0)}\). Note that \(S_i^{(0)}\) can be obtained by parameter sweeping.
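Outside MATLAB, one hedged alternative to the LMI test of Lemma 3 is to attempt, for each sampled T, a stabilizing gain via the discrete-time algebraic Riccati equation (LQR) and check the resulting closed loop; sweeping T upward then yields a grid-based estimate of \(\overline{T}\). This substitutes an LQR design for the LMI (10.9) and is not the chapter's method; the weights Q = R = I and all function names are our own assumptions:

```python
import numpy as np
from scipy.linalg import expm, solve_discrete_are

def discretize(A, B, T):
    # A_tilde(T) = e^{AT}, B_tilde(T) = int_0^T e^{A theta} d theta B.
    n, m = B.shape
    M = np.block([[A, B], [np.zeros((m, n)), np.zeros((m, m))]])
    E = expm(M * T)
    return E[:n, :n], E[:n, n:]

def lqr_gain(A, B, T):
    """Try K = -(R + Bd' P Bd)^{-1} Bd' P Ad from the discrete Riccati
    equation (Q = R = I, an arbitrary choice); None if the solver fails
    or the closed loop is not Schur."""
    Ad, Bd = discretize(A, B, T)
    n, m = B.shape
    try:
        P = solve_discrete_are(Ad, Bd, np.eye(n), np.eye(m))
    except (np.linalg.LinAlgError, ValueError):
        return None
    K = -np.linalg.solve(np.eye(m) + Bd.T @ P @ Bd, Bd.T @ P @ Ad)
    rho = np.max(np.abs(np.linalg.eigvals(Ad + Bd @ K)))
    return K if rho < 1.0 else None

def estimate_T_bar(A, B, T_grid):
    """Scan T upward; return the last grid point before stabilization fails."""
    T_bar = 0.0
    for T in sorted(T_grid):
        if lqr_gain(A, B, T) is None:
            break
        T_bar = T
    return T_bar
```

Unlike the LMI condition, this sketch is only sufficient at each grid point, so the estimate of \(\overline{T}\) it returns may be conservative.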

The union of all \(S_i^{(0)}\), \(S_1^{(0)} \cup S_2^{(0)} \cup \cdots \), constitutes a (larger) stabilizable region \({S^{(0)}}\) in the \(T_1 - T_2\) plane. The boundary of \(S^{(0)}\) is denoted by \(B^{(0)}\). Note that \({S^{(0)}}\) is a stabilizable region for the hyper-sampling mode, though it is obtained from the results for the stabilization of the single-sampling case.

In the next section, we will further enlarge the stabilizable region for the hyper-sampling mode based on \({S^{(0)}}\).

4 Stabilization of NCS Under Hyper-sampling Mode

First of all, it should be noticed that, unlike in the single-sampling case, it is difficult to determine whether an NCS is stabilizable for a given hyper-sampling period \((T_1, T_2)\) and to find the corresponding stabilizing feedback gain matrix K (if one exists). If we straightforwardly follow the idea in Sect. 10.3, we need to find a positive-definite matrix P and a feedback gain matrix K such that the following matrix inequality holds:

$$\begin{aligned} {\varPhi _H}({T_1},{T_2},K) P { \varPhi _H^{'}}({T_1},{T_2},K) - P < 0. \end{aligned}$$
(10.11)

However, the condition (10.11) is a nonlinear matrix inequality and it is not easy to equivalently transform it into a linear one. To the best of the authors’ knowledge, it is difficult to directly find a K satisfying condition (10.11). The problem will become more involved when \(n > 2\).

Instead of trying to give a direct procedure, in the sequel, we will take advantage of the results obtained for the single-sampling case to design the feedback gain matrix K for the case of hyper-sampling period. From the results proposed in Sect. 10.3, we have a stabilizable region \(S^{(0)}\) with its boundary \(B^{(0)}\) in the \(T_1 - T_2\) plane. For every point \(({T_1},{T_2})\) on \(B^{(0)}\), there is a K with which \({\varPhi _H}({T_1},{T_2},K)\) has an eigenvalue located on the unit circle (such an eigenvalue is called a “critical” one).

To simplify the analysis of this chapter, we adopt the following assumption.

Assumption 1

All the critical eigenvalues of the closed-loop system are simple.

Remark 4

If multiple critical eigenvalues appear, the problem will generally become more complicated and we may invoke the Puiseux series to treat such a case (see e.g., [14] for the analysis of multiple critical roots for time-delay systems ).

Remark 5

If \({\lambda ^{*}}\) is a critical eigenvalue, its conjugate \(\overline{{\lambda ^*}}\) is also a critical eigenvalue and the variations of \({\lambda ^{*}}\) and \(\overline{{\lambda ^*}}\) as K varies are symmetric with respect to the real axis. Thus, it is sufficient to analyze either of them.

In the sequel, we will design K through analyzing the asymptotic behavior of the critical eigenvalues with respect to the elements of K. Without any loss of generality, suppose K has \(m \in \mathbb {N}_+\) elements. For instance, a \(1 \times 3\) K has 3 elements and can be expressed by \((\begin{array}{*{20}{c}} {{k_1}}&{{k_2}}&{{k_3}} \end{array})\); a \(2 \times 2\) K has 4 elements and can be expressed by \(\left( {\begin{array}{*{20}{c}} {{k_1}}&{}{{k_2}}\\ {{k_3}}&{}{{k_4}} \end{array}} \right) \). The elements of K are denoted by \({k_\gamma },\gamma = 1, \ldots , m\). The characteristic function for the transition matrix of an NCS under the hyper-sampling mode can be written as:

$$\begin{aligned} f(\lambda ,{T_1},{T_2},{k_1}, \ldots ,{k_m}) = \mathrm{{det}}(\lambda I - {\varPhi _H}({T_1},{T_2},K)). \end{aligned}$$
(10.12)

By the implicit function theorem (see e.g., [12, 20]), we have the following theorem:

Theorem 2

Suppose when \(\lambda = {\lambda ^*},{T_1} = T_1^*,{T_2} = T_2^*,{k_1} = k_1^*, \ldots , {k_m} = k_m^*\),

$$f(\lambda ,{T_1},{T_2},{k_1}, \ldots ,{k_m}) = 0$$

and \({f_\lambda } \ne 0\). As \(k_ \gamma \) vary near \(k_\gamma ^*\) (\(\gamma =1, \ldots ,m\)), \(f(\lambda ,{T_1},{T_2},{k_1}, \ldots ,{k_m}) = 0 \) uniquely determines a characteristic root \(\lambda ({k_1}, \ldots ,{k_m})\) with \(\lambda (k_1^*, \ldots ,k_m^*) = {\lambda ^*}\) and \(\lambda ({k_1}, \ldots ,{k_m})\) has continuous partial derivatives

$$\begin{aligned} \frac{{\partial \lambda }}{{\partial {k_1}}} = - \frac{{{f_{{k_1}}}}}{{{f_\lambda }}}, \ldots ,\frac{{\partial \lambda }}{{\partial {k_m}}} = - \frac{{{f_{{k_m}}}}}{{{f_\lambda }}}. \end{aligned}$$

According to Theorem 2, we may express the asymptotic behavior of \(\lambda \) with respect to the elements of the feedback gain matrix K by the following (first-order) Taylor series

$$\begin{aligned} \varDelta \lambda = {C_1}\varDelta ({k_1}) + \cdots + {C_m}\varDelta ({k_m}) + o(\varDelta ({k_1}), \ldots ,\varDelta ({k_m})), \end{aligned}$$
(10.13)

where

$$\begin{aligned} {C_\gamma } = \frac{{\partial \lambda }}{{\partial {k_\gamma }}}, \gamma = 1, \ldots ,m. \end{aligned}$$

Remark 6

In this chapter, we only invoke the first-order terms of the Taylor series. If needed, we may further invoke higher-order terms. One may refer to [15] concerning the degenerate case for time-delay systems, where invoking the first-order terms is not sufficient for the stability analysis.

For a critical characteristic root \(\lambda \) (i.e., \(\left| \lambda \right| = 1\)), from the stability point of view, we are interested in the direction of \(\varDelta \lambda \) with respect to the unit circle. If the direction points toward the inside of the unit circle, the variation moves this critical root into the stable region. Equivalently, we may consider the variation of the norm of the critical characteristic root \(\lambda \), i.e., \(\varDelta (\left| \lambda \right| )\). Such an analysis can be fulfilled by computing the projection of \(\varDelta \lambda \) on the normal line of the unit circle at \(\lambda \). We have the following theorem.

Theorem 3

Suppose when \(\lambda = {\lambda ^*},{T_1} = T_1^*,{T_2} = T_2^*,{k_1} = k_1^*, \ldots , {k_m} = k_m^*\), \(f(\lambda ,{T_1},{T_2},{k_1}, \ldots ,{k_m}) = 0\) and \({f_\lambda } \ne 0\). As \(k_\gamma \) vary near \(k_\gamma ^*\) (\(\gamma =1, \ldots ,m\)), it follows that

$$\varDelta (\left| \lambda \right| ) = \left( {\begin{array}{*{20}{c}} {{\mathrm{Re}} ({\lambda ^*})}&{{\mathrm{Im}} ({\lambda ^*})} \end{array}} \right) \cdot \left( {\begin{array}{*{20}{c}} {{\mathrm{Re}} (\varDelta \lambda )}&{{\mathrm{Im}} (\varDelta \lambda )} \end{array}} \right) .$$

We now apply Theorem 3 to adjust K in order to have a larger stabilizable region.

We choose some points on \(B^{(0)}\), denoted by \((T_{1,i}^{(0)},T_{2,i}^{(0)})\) (\(i = 1,2, \ldots \)). Each \((T_{1,i}^{(0)},T_{2,i}^{(0)})\) corresponds to a \(K_i^{(0)}\) (whose elements are denoted by \(k_{i,1}^{(0)}, \ldots ,k_{i,m}^{(0)}\)) and a \(\lambda _i^{(0)}\) with \(\left| {\lambda _i^{(0)}} \right| = 1\) such that \(f(\lambda _i^{(0)},T_{1,i}^{(0)},T_{2,i}^{(0)},k_{i,1}^{(0)}, \ldots ,k_{i,m}^{(0)}) = 0\). Then, we may adjust \(K_i^{(0)}\) according to Theorem 3 to find a new feedback gain matrix, denoted by \(K_i^{(1)}\), such that the NCS with \(K_i^{(1)}\) is asymptotically stable near \((T_{1,i}^{(0)},T_{2,i}^{(0)})\).

More precisely, for each element \(k_\gamma \) we may conclude the following from Theorem 3. Assuming the other elements of K are fixed, a sufficiently small increase (decrease) of \(k_\gamma \) at \(k_{i, \gamma }^{(0)}\) makes \({\lambda _i^{(0)}}\) move inside the unit circle if

$$\left( {\begin{array}{*{20}{c}} {\mathrm{{Re}}({\lambda ^*})}&{\mathrm{{Im}}({\lambda ^*})} \end{array}} \right) \cdot \left( {\begin{array}{*{20}{c}} {\mathrm{{Re}}({C_\gamma })}&{\mathrm{{Im}}({C_\gamma })} \end{array}} \right) <0\quad (>0),$$

since, by Theorem 3, the sign of this projection times \(\varDelta ({k_\gamma })\) gives the sign of \(\varDelta (\left| \lambda \right| )\).

With this property, we may adjust all elements \(k_{i, \gamma }^{(0)}, \gamma = 1, \ldots , m\), appropriately to find a new stabilizing feedback gain matrix.
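The sensitivities \(C_\gamma \) and the projection of Theorem 3 can be approximated numerically from \(f = \det (\lambda I - \varPhi _H)\), replacing the closed-form partial derivatives by central differences. A self-contained sketch under our own naming:

```python
import numpy as np
from scipy.linalg import expm

def phi(A, B, T, K):
    n, m = B.shape
    M = np.block([[A, B], [np.zeros((m, n)), np.zeros((m, m))]])
    E = expm(M * T)
    return E[:n, :n] + E[:n, n:] @ K

def char_fun(lam, A, B, T1, T2, K):
    # f(lambda, T1, T2, k_1, ..., k_m) = det(lambda I - Phi_H), as in (10.12).
    phi_H = phi(A, B, T1, K) @ phi(A, B, T2, K)
    return np.linalg.det(lam * np.eye(A.shape[0]) - phi_H)

def C_gamma(A, B, T1, T2, K, lam, gamma, h=1e-6):
    """C_gamma = d lambda / d k_gamma = -f_{k_gamma} / f_lambda (Theorem 2),
    with both partials approximated by central differences; gamma is a
    flat index into K."""
    dK = np.zeros_like(K)
    dK.flat[gamma] = h
    f_k = (char_fun(lam, A, B, T1, T2, K + dK)
           - char_fun(lam, A, B, T1, T2, K - dK)) / (2 * h)
    f_l = (char_fun(lam + h, A, B, T1, T2, K)
           - char_fun(lam - h, A, B, T1, T2, K)) / (2 * h)
    return -f_k / f_l

def radial_growth(lam, C):
    """Projection of Theorem 3: positive means |lambda| grows as k_gamma grows."""
    return lam.real * C.real + lam.imag * C.imag
```

As a sanity check, for the scalar plant \(\dot{x} = u\) one has \(\varPhi _H = (1 + T_1 k)(1 + T_2 k)\), whose derivative in k at \(k = 0\) is \(T_1 + T_2\), and the numerical \(C_\gamma \) reproduces this value.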

From each \(K_i^{(1)} \) we have a new stabilizable region near \((T_{1,i}^{(0)},T_{2,i}^{(0)})\), denoted by \(S_i^{(1)}\). The union of all \(S_i^{(1)}\), \(S_1^{(1)} \cup S_2^{(1)} \cup \cdots \), constitutes a larger stabilizable region \({S^{(1)}}\) in the \(T_1 - T_2\) plane. The boundary of \(S^{(1)}\) is denoted by \(B^{(1)}\).

The above step can be applied iteratively so that a sequence of new stabilizable regions \(S^{(2)}\), \(S^{(3)}\), \(\ldots \) can be obtained. This is the procedure proposed in this chapter for solving the stabilization problem in the hyper-sampling case; it can be summarized as follows.

Procedure for stabilization of NCSs with hyper-sampling period:

  • Step 1: Using the method proposed in Sect. 10.3, we find the stabilizable region \(S^{(0)}\), and the boundary of \(S^{(0)}\), \(B^{(0)}\), in the \(T_1 - T_2\) plane. Let \(l=0\).

  • Step 2: Choose some points on \(B^{(l)}\), \((T_{1,i}^{(l)},T_{2,i}^{(l)}), i = 1,2, \ldots \) Each \((T_{1,i}^{(l)},T_{2,i}^{(l)})\) corresponds to a \(K_i^{(l)}\) such that \(\rho ({\varPhi _H}(T_{1,i}^{(l)},T_{2,i}^{(l)},K_i^{(l)})) = 1\). Then, we adjust the elements of \(K_i^{(l)}\) according to Theorem 3 to find a new feedback gain matrix \(K_i^{(l+1)}\) associated with a new stabilizable region \(S_i^{(l + 1)}\). The union of all \(S_i^{(l + 1)}\) forms a new stabilizable region \({S^{(l + 1)}}\), whose boundary is \(B^{(l+1)}\).

  • Step 3: If we want to further detect the stabilizable region in the \(T_1 - T_2\) plane, let \(l=l+1\) and return to Step 2. Otherwise or when it is hard to find a larger stabilizable region by Step 2, the procedure stops. The combination \({S^{(0)}} \cup \cdots \cup {S^{(l)}}\) is the overall stabilizable region we find.

Remark 7

The above procedure is not very simple to use, and the computational effort further increases if we choose more points on the boundaries. However, as the procedure can be implemented off-line, the computational complexity is not a critical issue here.

5 Illustrative Example

In the sequel, the procedure proposed in Sect. 10.4 will be illustrated by a numerical example.

Example 1

Consider an NCS with the controlled plant (10.1) with

$$\begin{aligned} A = \left[ {\begin{array}{*{20}{c}} 12&{}1\\ 1&{}{ - 9} \end{array}} \right] ,\quad B = \left[ {\begin{array}{*{20}{c}} 0.1\\ {0} \end{array}} \right] . \end{aligned}$$

We first employ Step 1 to find the stabilizable region \(S^{(0)}\). The stabilizable interval under the single-sampling mode is \(T \in (0, 1.26)\). That is, the minimal stabilizable ASF under the single-sampling mode is \(f_\mathscr {A}= \frac{1}{{1.26}} = 0.79\). The stabilizable region \(S^{(0)}\) is shown in Fig. 10.3.

Next, on the boundary of \(S^{(0)}\), \(B^{(0)}\), we choose some \((T_{1,i}^{(0)},T_{2,i}^{(0)})\) and apply Step 2 to adjust the corresponding \(K_i^{(0)}\). As a consequence, we may find some new feedback gain matrices \(K_i^{(1)}\) and a new stabilizable region \(S^{(1)}\) with the boundary \(B^{(1)}\).

We may repeat the above step (Step 2), and, as a consequence, we find a sequence of new stabilizable regions \(S^{(2)}\), \(S^{(3)}\), \(S^{(4)}\), as shown in Fig. 10.3, with the boundaries \(B^{(2)}\), \(B^{(3)}\), \(B^{(4)}\). If needed, we may obtain more stabilizable regions by repeating Step 2 more times.

We see from Fig. 10.3 that each time a new stabilizable region (with larger values of \(T_1 + T_2\)) is found, a smaller ASF can be obtained. For instance, we find a hyper-sampling period \(({T_1} = 1.40,{T_2} = 1.54)\), with the corresponding \(K=(-120.475116~-5.723971)\), in the obtained stabilizable region. The ASF corresponding to this hyper-sampling period is \(f_\mathscr {A} = \frac{2}{1.40 + 1.54} = 0.68\), smaller than the minimal ASF under the single-sampling mode, 0.79. To illustrate the asymptotic stability of the NCS under this hyper-sampling period, we give the state response with the initial state \(x(0) = (1.1~-1.1)'\) in Fig. 10.4.

Fig. 10.3 Stabilizable region found for Example 1

Fig. 10.4 State evolution x(t) for Example 1 (initial condition \(x(0) = (1.1~-1.1)'\))

6 Concluding Remarks

In this chapter, we proposed a procedure for the stabilization of networked control systems (NCSs) under the hyper-sampling mode. The procedure consists of two steps.

Step 1 is to solve the stabilization problem in the case of single-sampling period and we can obtain a stabilizable region for the hyper-sampling period from this step. Step 2 is to find a larger stabilizable region based on the results of Step 1 by using a method for asymptotic behavior analysis. Step 2 can be used in an iterative manner such that the stabilizable region can be further detected in the parameter plane.

An example illustrates the proposed procedure and shows that the hyper-sampling period may guarantee the asymptotic stability of the NCS with a smaller average sampling frequency (ASF) than the single-sampling period; that is, calculation and communication resources of an NCS can be saved by using the hyper-sampling mode.