1.1 Background

With the rapid development of modern technology, the world has entered the age of networks. Typical examples of networks include the World Wide Web, airline routes, biological networks, human relationships, and so on [1]. As a special kind of network, complex networked systems consisting of large groups of cooperating agents have made a significant impact on a broad range of applications, including cooperative control of autonomous underwater vehicles (AUVs) [2], scheduling of automated highway systems [3], and congestion control in communication networks [4].

The study of complex networks can be traced back to Euler’s celebrated solution of the Königsberg bridge problem in 1735, which is often regarded as the first true proof in the theory of networks. In the early 1960s, a random-graph model was proposed by Paul Erdős and Alfréd Rényi [5], which laid a solid foundation for modern network theory. Watts and Strogatz proposed a model of small-world networks in 1998 [6], and shortly afterwards Barabási and Albert proposed a model of scale-free networks based on preferential attachment in 1999 [7]. These two works reveal the small-world effect and the scale-free property of complex networks and explain the mechanisms behind these phenomena. Over the past two decades, complex dynamical networks have been widely exploited by researchers in various fields, including physics [8], mathematics [9], engineering [10, 11], biology [12], and sociology [13].

What makes complex networked systems distinct from other kinds of systems is that they make it possible to deploy a large number of subsystems as a team to cooperatively carry out a prescribed task. Furthermore, the most striking feature observed in complex networked systems is their ability to exhibit collective behavior that cannot be well explained in terms of the individual dynamics of any single node. Two significant kinds of cooperative behavior are synchronization and consensus [9, 14,15,16,17,18], both of which mean that all agents reach an agreement on certain quantities of interest.

The formal study of consensus dates back to 1974 [19], when a mathematical model was presented to describe how a group reaches agreement. Another notable discovery is the collective behavior exhibited by groups of birds during foraging or flight, observed by biologists studying flocking [20]. On closer inspection, consensus turns out to be a universal phenomenon in nature, seen in the shoaling behavior of fish [21], the synchronous flashing of fireflies [22], the swarming behavior of insects [20, 23, 24], and the herd behavior of land animals [25]. The key question in consensus is how local communication and cooperation among agents, i.e., consensus protocols (or consensus algorithms), can lead to a certain desirable global behavior [26,27,28,29]. Various models have been proposed to study the mechanism of the multi-agent consensus problem [30,31,32,33,34,35,36,37]. In [38], the consensus problem was considered for a switched multi-agent system composed of continuous-time and discrete-time subsystems. The authors in [39] investigated consensus problems for a class of second-order continuous-time multi-agent systems with time delays and jointly connected topologies. The work in [40] focused on the mean-square practical leader-following consensus of second-order nonlinear multi-agent systems with noises and unmodeled dynamics.

Synchronization, a typical collective behavior and basic motion in nature, means that the difference between the states of any two subsystems goes to zero as time tends to infinity or to some fixed value. Synchronization phenomena exist widely and appear in different forms in nature and in man-made systems, such as fireflies’ synchronous flashing, attitude alignment, and the synchronized applause of audiences. To reveal the mechanism of synchronization of complex dynamical networks, a vast amount of work on synchronization has been carried out over the past few years. Before the appearance of the small-world [6] and scale-free [7] network models, Wu [41, 42] investigated synchronization of an array of linearly coupled systems and gave some effective synchronization criteria. In 1998, Pecora and Carroll [43] proposed the concept of the master stability function as a synchronization criterion, which revealed that synchronization highly depends on the coupling strategy, or the topology of the network. In [14, 44,45,46], synchronization in small-world and scale-free networks was studied in detail. Over the past few years, different kinds of synchronization have been identified and studied, such as complete synchronization [14, 41, 42, 47, 48], cluster synchronization [49,50,51,52], phase synchronization [53], lag synchronization [54, 55], and generalized synchronization [56].

In the literature, most works on the consensus/synchronization of complex networks focus on the analysis of network models with perfect communication, in which it is assumed that each agent receives timely and accurate information from its neighbors. However, such models cannot reflect real circumstances, since the information flow between two neighboring nodes is always affected by many uncertain factors, including limited communication capacity, network-induced time delays, communication noise, random packet loss, and so on. These constraints should be taken into account in the design of control strategies and algorithms. Hence, it is desirable to formulate more realistic models describing complex dynamical networks under imperfect communication constraints and node failure. In this book, three specific kinds of imperfect communication as well as node failure are investigated, and a detailed analysis of consensus/synchronization of complex dynamical networks under these constraints is presented.

1.2 Research Problems

The following three kinds of imperfect communication problems are considered in this book:

  • Quantization: In real-world networked systems, the amount of information that can be reliably transmitted over the communication channels is always bounded. To comply with such a communication constraint, the signals in real-world systems must be quantized before transmission, and the number of quantization levels is closely related to the information-transmission capacity between the components of the system. For example, data and code in computers are stored digitally in the form of a finite number of bits, and hence all signals need to be quantized before they are processed by the computer. In this book, two kinds of quantization in networks are considered. One is called communication quantization, which concerns the communication from one agent to another. The other is called input quantization, which concerns the processing of the information arriving at each agent itself. One natural question is: how does the state of a networked system evolve under quantization?

  • Communication delays: In many real complex networked systems, due to the remote location of agents or the unreliable communication medium (such as the Internet), communication delays occur during the information exchange between the agents and their neighbors. Generally, communication delays have a negative effect on the stability and consensus/synchronization performance of complex networks. Thus, it is important to investigate the effect of time delays on the coordination performance of complex networked systems and to design delay-tolerant communication protocols. Moreover, it is of particular interest to study the collective behavior of complex networked systems subject to communication delays and quantization simultaneously.

  • Event-driven sampled data: In complex networked systems, it is usually assumed that all information exchange between an agent and its neighbors is timely. However, the communication channels are generally unreliable and the communication capacity is limited in many real networks, such as sensor networks. Moreover, the sensing ability of each agent in a networked system is restricted. Thus, it is more practical to use sampled information transmission, i.e., the nodes of the network can only use information at some particular time instants instead of employing the whole spectrum of information from their neighbors. Sampled-data control has been widely studied in many areas, such as tracking problems and consensus problems. Unlike the traditional time-driven sampled control approach (i.e., periodic sampling), event-triggered control keeps the control signal constant until a certain condition is violated, at which point the control signal is updated (or recomputed). Event-driven control is also closer to the way a human being behaves as a controller, since his or her behavior is event-driven rather than time-driven when controlling manually. Thus, an interesting question arises: is it possible to propose an effective distributed event-triggered communication protocol that realizes the expected collective behaviors? A minimal sketch of the event-driven idea follows this list.
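
To make the event-driven idea concrete, the following minimal Python sketch (with an illustrative constant trigger threshold δ, which is an assumption for illustration and not one of the protocols designed later in this book) simulates single-integrator agents on an undirected ring: each agent computes its control from the last broadcast states of itself and its neighbors, and rebroadcasts its own state only when it has drifted more than δ from its last broadcast value.

```python
import numpy as np

# Minimal event-triggered broadcast scheme for single-integrator agents on an
# undirected ring (illustrative only; the threshold rule is an assumption and
# not the protocol developed in later chapters).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)   # adjacency matrix of a 4-node ring
N, h, delta = 4, 0.01, 0.05                 # agents, Euler step, trigger threshold
x = np.array([1.0, -2.0, 0.5, 3.0])         # initial states
x_hat = x.copy()                            # last broadcast states
broadcasts = 0

for _ in range(2000):
    # each agent uses only broadcast values: u_i = sum_j a_ij (xhat_j - xhat_i)
    u = A @ x_hat - A.sum(axis=1) * x_hat
    x = x + h * u
    trigger = np.abs(x - x_hat) > delta     # event condition, checked locally
    x_hat[trigger] = x[trigger]             # only triggered agents rebroadcast
    broadcasts += int(trigger.sum())

print("final states:", np.round(x, 3), "| total broadcasts:", broadcasts)
```

With this static threshold the states converge to within roughly δ of a common value while far fewer broadcasts are used than under periodic sampling; the protocols developed later in this book refine the trigger condition itself.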

Traditional distributed communication protocols require that the agents exchange perfect information with their neighbors over the complex networked system; such perfect information exchange is an implicit assumption in most models of complex networked systems. The objective of this book is to design efficient distributed protocols or algorithms for complex networked systems with imperfect communication and node failure, so as to comply with bandwidth limitations and to tolerate communication delays and node failures. Specifically, the following problems concerning the collective behavior analysis of complex networked systems will be addressed and investigated in detail:

  1. Problem 1.

    How does one model multi-agent networks with arbitrary finite communication delays and directed information flow simultaneously [57]? Can consensus be realized regardless of the form of the finite communication delays? How can the final states of all nodes of the multi-agent network be regulated, even when the external signal is very weak? These three questions will be addressed in Chap. 2.

  2. Problem 2.

    How can the multi-agent consensus model with input quantization and communication delays be formulated simultaneously [58, 59]? Does a global solution exist for the considered consensus model with a discontinuous quantization function? How do quantization and communication delays affect the final consensus result? These three questions will be addressed in Chap. 3.

  3. Problem 3.

    When communication quantization and communication delays exist simultaneously in discrete-time multi-agent networks, can the complex networked system achieve consensus [60, 61]? For the continuous-time case, does a global solution exist? Can consensus of such multi-agent networks be realized? These questions will be explored in Chap. 4.

  4. Problem 4.

    Can discrete-time and continuous-time multi-agent networks with communication delays achieve consensus via non-periodic sampled information transmission [62, 63]? How should each agent decide when its information is to be transmitted? What effect does the communication delay have on multi-agent networks with non-periodically sampled information? Chap. 5 will focus on these problems.

  5. Problem 5.

    It can be found in many real multi-agent networks that the agents have not only cooperative but also antagonistic interactions. Ensuring the desired performance of cooperative-antagonistic multi-agent networks in the presence of communication constraints is an important task in many applications of real systems. How does one model cooperative-antagonistic multi-agent networks with arbitrary finite communication delays [64,65,66]? How can the difficulties stemming from communication delays in cooperative-antagonistic multi-agent networks be handled? What are the final consensus results for this kind of network with communication delays? How should the consensus protocol for cooperative-antagonistic multi-agent networks be designed under event-triggered control? Chap. 6 will focus on these problems.

  6. Problem 6.

    The finite-time (or fixed-time) consensus problem has become a hot topic due to its wide applications. For cooperative-antagonistic multi-agent networks, how can finite-time (or fixed-time) bipartite consensus protocols be designed [67, 68]? How can criteria be established that guarantee bipartite agreement of all agents, together with an explicit expression for the settling time? Chap. 7 will focus on these problems.

  7. Problem 7.

    It should be pointed out that many real-world networks are very large. A natural question is how to obtain synchronization criteria for large-scale directed dynamical networks. When an energy constraint is imposed, how can an event-triggered sampled-data transmission strategy be designed to realize the expected synchronization behaviors [69, 70]? Chaps. 8 and 9 will discuss these synchronization problems.

  8. Problem 8.

    The size of most real-world networks is very large, which greatly increases the complexity and difficulty of the consensus analysis of the corresponding networks. Is it possible to greatly reduce the size of a network while preserving its consensus property [71]? In large-scale networks, is it possible to isolate (or remove) failed nodes while preserving the consensus property? Chap. 10 will focus on these problems.

1.2.1 Consensus and Practical Consensus

Consider a multi-agent network \(\mathcal {A}\) with N agents. Let \(x_{i}\in \mathbb {R}\) be the information state of the ith agent which may be position, velocity, decision variable, and so on, where \(i\in \mathcal {N}\).

Definition 1.1 (Consensus)

If for all \(x_{i}(0)\in \mathbb {R}\), i = 1, 2, …, N, \(x_{i}(t)\) converges to some common equilibrium point \(x^{*}\) (dependent on the initial values of some agents) as t → +∞, then we say that the multi-agent network \(\mathcal {A}\) solves a consensus problem asymptotically. The common value \(x^{*}\) is called the group decision value.

Now, we give the definition of the distance from a point to a set and practical consensus which will be used in Chap. 3.

Definition 1.2

The distance from a point \(p\in \mathbb {R}\) to a set \(\mathbb {U}\subseteq \mathbb {R}\) is defined as the minimum distance between the given point and the points in the set, i.e.,

$$\displaystyle \begin{aligned} \begin{array}{rcl} dist(p, \mathbb{U}) = \displaystyle\min_{r\in\mathbb{U}}\{dist(p, r)\}=\displaystyle\min_{r\in\mathbb{U}}\{|p-r|\}. \end{array} \end{aligned} $$

Definition 1.3

If for all \(x_{i}(0)\in \mathbb {R}\), \(i\in \mathcal {N}\), the distance from \(x_{i}(t)\) to a set \(\mathbb {U}\subseteq \mathbb {R}\) converges to 0 as t → +∞, then the set \(\mathbb {U}\) is called a practical consensus set.

1.2.2 General Model Description

In this subsection, a brief introduction of the multi-agent consensus model [32] is presented, which requires that each agent receives timely and accurate information from its neighbors.

1.2.2.1 Continuous-time Multi-agent Consensus Model

The continuous-time multi-agent consensus model is as follows:

$$\displaystyle \begin{aligned} \begin{array}{rcl} {} \displaystyle \dot{x}_{i}(t)=\sum_{j\in \mathcal {N}_{i}}a_{ij}(x_{j}(t)-x_{i}(t)),~~~i\in\mathcal{N}, \end{array} \end{aligned} $$
(1.1)

where \(x_i(t)\in \mathbb {R}^n\), \(\mathcal {N}=\{1,2,\ldots ,N\},~N>1\), \(\mathcal {N}_{i}=\{j\mid a_{ij}>0,\, j=1,2,\ldots ,N\}\), and \(a_{ij}\) is defined as follows:

  • when i is not equal to j:

    • If there is a connection from node j to node i, then \(a_{ij} > 0\);

    • otherwise, \(a_{ij} = 0\);

  • when i is equal to j: \(a_{ii} = 0\), for all \(i\in \mathcal {N}\).

Let \(l_{ij} = -a_{ij}\) for i ≠ j, and \(l_{ii}=-\sum ^N_{j=1,j\neq i}l_{ij}\). The continuous-time linear consensus protocol (1.1) can be written in matrix form as

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} \displaystyle \dot{x}(t)=-(L\otimes I_{n})x(t), \end{array} \end{aligned} $$
(1.2)

where \(L = (l_{ij})_{N\times N}\) is the graph Laplacian matrix and \(x = [x_{1}^{\top },\,\ldots ,\, x_{N}^{\top }]^{\top }\).
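
As an illustration of protocol (1.1), the following minimal Python sketch (the weights, initial states, and step size are arbitrary choices for illustration, not taken from later chapters) builds the Laplacian L of a strongly connected directed graph and integrates \(\dot{x}(t)=-Lx(t)\) by the forward Euler method; the states approach a common value.

```python
import numpy as np

# Forward-Euler simulation of the continuous-time protocol (1.1)/(1.2) with
# scalar states (n = 1); the weights and initial states are illustrative.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 2.0],
              [1.5, 0.0, 0.0]])        # a_ij > 0 iff there is an edge from node j to node i
L = np.diag(A.sum(axis=1)) - A         # Laplacian: l_ii = sum_{j != i} a_ij, l_ij = -a_ij

x = np.array([4.0, -1.0, 2.5])         # initial states
h = 0.01                               # integration step
for _ in range(3000):
    x = x + h * (-L @ x)               # dx/dt = -(L ⊗ I_n) x with n = 1

print(np.round(x, 4))                  # the three states are close to a common value
```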

1.2.2.2 Discrete-time Multi-agent Consensus Model

A general discrete-time multi-agent consensus model can be constructed as follows:

$$\displaystyle \begin{aligned} x_i(k+1)=x_i(k)+\iota\sum_{j\in \mathcal {N}_{i}}\bar{a}_{ij}(x_{j}(k)-x_{i}(k)),~~i\in\mathcal{N}, \end{aligned} $$
(1.3)

where \(x_i(k)\in \mathbb {R}^n\), the constant ι > 0 denotes the step size, and \(\bar {a}_{ij}\) is defined as follows:

  • when j is not equal to i:

    • If there is a connection from node j to node i, \(\bar {a}_{ij}>0\);

    • otherwise, \(\bar {a}_{ij}=0\);

  • when i is equal to j: \(\bar {a}_{ii}=0\), for all \(i\in \mathcal {N}\).

\(\bar {A}=(\bar {a}_{ij})_{N\times N}\) represents the topological structure of the system. Let \(A = (a_{ij})_{N\times N}\) with \(a_{ij}=\iota \bar {a}_{ij}\geq 0\) for i ≠ j, and \(a_{ii}=1-\displaystyle\sum _{j=1,\,j\neq i}^{N} a_{ij}\). Then, the dynamics of the multi-agent network can be written in compact form as

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} \displaystyle x(k+1)=(A\otimes I_{n})x(k). \end{array} \end{aligned} $$
(1.4)

Proposition 1.4 ([72])

System (1.4) solves a consensus problem if and only if

  1. (1)

    ρ(A) = 1, where ρ(A) is the spectral radius of A;

  2. (2)

    1 is an algebraically simple eigenvalue of A, and is the unique eigenvalue of maximum modulus;

  3. (3)

    \(A\mathbf {1} = \mathbf {1}\), where \(\mathbf {1}=(1,\,1,\,\ldots ,\,1)^{\top }\in \mathbb {R}^{N}\);

  4. (4)

    There exists a nonnegative left eigenvector \(\xi =(\xi _{1},\xi _{2},\ldots ,\xi _{N})^{\top }\in \mathbb {R}^{N}\) of A associated with the eigenvalue 1 such that \(\xi ^{\top }\mathbf {1} = 1\).
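
The conditions of Proposition 1.4 can be checked numerically. The sketch below (with an illustrative weight matrix \(\bar{A}\) and step size ι chosen only for demonstration) constructs A as in (1.3), prints the quantities appearing in conditions (1), (3), and (4), and iterates (1.4); the iteration settles at \(\xi ^{\top }x(0)\), in agreement with the role of the left eigenvector ξ.

```python
import numpy as np

# Build A as in (1.3)-(1.4) from an illustrative weight matrix Abar and step
# size iota, check the conditions of Proposition 1.4 numerically, and iterate.
Abar = np.array([[0.0, 1.0, 1.0],
                 [1.0, 0.0, 0.0],
                 [0.0, 2.0, 0.0]])
iota = 0.2                                    # step size, chosen so that a_ii >= 0
A = iota * Abar
np.fill_diagonal(A, 1.0 - A.sum(axis=1))      # a_ii = 1 - sum_{j != i} a_ij

print("row sums (condition (3), A 1 = 1):", A.sum(axis=1))
eigvals, eigvecs = np.linalg.eig(A.T)         # left eigenvectors of A
print("spectral radius (condition (1)):", max(abs(eigvals)))
xi = np.real(eigvecs[:, np.argmin(abs(eigvals - 1.0))])
xi = xi / xi.sum()                            # condition (4): xi >= 0, xi^T 1 = 1
print("normalized left eigenvector xi:", np.round(xi, 4))

x0 = np.array([3.0, -1.0, 0.5])
x = x0.copy()
for _ in range(200):                          # iterate x(k+1) = (A ⊗ I_n) x(k), n = 1
    x = A @ x
print("x(200):", np.round(x, 4), "  xi^T x(0):", round(float(xi @ x0), 4))
```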

1.3 Mathematical Preliminaries

1.3.1 Matrices and Graphs

A graph is an essential tool for the diagrammatic representation of multi-agent networks. The set of vertices of the network is denoted by \(\mathcal {V}\), and the set of edges among these vertices is denoted by \(\mathcal {E}\). The graph is denoted as \(\mathcal {G}(\mathcal {V},\mathcal {E})\). To distinguish graphs from digraphs (directed graphs), we generally refer to graphs as undirected graphs.

A graph \(\mathcal {G}(\mathcal {V},\mathcal {E})\) whose vertex set \(\mathcal {V}\) contains N vertices is said to have order N. Analogously, the size of a graph is the number of its edges m, i.e., the number of elements in the set \(\mathcal {E}\). An edge of \(\mathcal {G}\) is denoted by \(e_{ij} = (v_{i}, v_{j})\), where \(v_{i}\) and \(v_{j}\) are called neighbors.

  • Self-loop: If two vertices of an edge are the same, we call this edge a self-loop.

  • Directed graph: A graph in which all the edges are directed from one vertex to another.

  • Directed path: A path in a digraph is an ordered sequence of vertices such that any two consecutive vertices in the sequence form a directed edge of the digraph.

  • Connected graph: A graph is connected, if there is a path between any pair of vertices.

  • Strongly connected graph: A graph is strongly connected, if there is a directed path between every two different vertices.

  • Subgraph: A subgraph of a graph \(\mathcal {G}_{1}(\mathcal {V}_{1},\mathcal {E}_{1})\) is a graph \(\mathcal {G}_{2}(\mathcal {V}_{2},\mathcal {E}_{2})\) such that \(\mathcal {V}_{2}\subseteq \mathcal {V}_{1}\), \(\mathcal {E}_{2}\subseteq \mathcal {E}_{1}\).

  • Directed tree: A directed tree is a digraph with n vertices and n − 1 edges with a root vertex such that there is a directed path from the root vertex to every other vertex.

  • Rooted spanning tree: A rooted spanning tree of a graph is a subgraph which is a directed tree with the same vertex set.

In general, graphs are weighted, i.e., a positive weight is associated to each edge.

There is an intrinsic relationship between graph theory and matrix theory, which helps us better understand the main concepts of both.

  • Reducible: A matrix is said to be reducible if it can be written as

    $$\displaystyle \begin{aligned} \begin{array}{rcl} P\cdot\left(\begin{array}{cc} A_1 & A_3 \\ \mathcal{O} & A_2 \end{array}\right)\cdot Q, \end{array} \end{aligned} $$
    (1.5)

    where P and Q are permutation matrices, \(A_{1}\) and \(A_{2}\) are square matrices, and \(\mathcal {O}\) is a null matrix.

  • Irreducible: An irreducible matrix is a matrix which is not reducible.

  • Adjacency matrix: The adjacency matrix \(A = [a_{ij}]\) of a (di)graph is a nonnegative matrix defined by \(a_{ji} = \omega \) if and only if (i, j) is an edge with weight ω.

  • Out-degree: The out-degree \(d_{o}(v)\) of a vertex v is the sum of the weights of the edges emanating from v.

  • In-degree: The in-degree \(d_{i}(v)\) of a vertex v is the sum of the weights of the edges into v.

  • Balanced graph: A vertex is balanced if its out-degree equals its in-degree. A graph is balanced if all of its vertices are balanced.

  • Laplacian matrix: The Laplacian matrix of a graph is the zero-row-sum matrix L = D − A, where A is the adjacency matrix and D is the diagonal matrix of vertex in-degrees.

Lemma 1.5 ([73])

A network is strongly connected if and only if its Laplacian matrix is irreducible.
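
As a concrete illustration of the matrix notions above and of Lemma 1.5, the short Python sketch below (the weighted digraph is an arbitrary example) builds the adjacency matrix, the in- and out-degrees, and the Laplacian, and tests strong connectivity through a standard reachability argument, which for a nonnegative matrix is equivalent to irreducibility.

```python
import numpy as np

# Adjacency matrix, in/out-degrees, and Laplacian of a small weighted digraph,
# plus a strong-connectivity test in the spirit of Lemma 1.5. The edge list is
# an arbitrary illustrative example; convention: edge (i, j) with weight w
# gives the entry a_ji = w.
edges = [(0, 1, 1.0), (1, 2, 2.0), (2, 0, 0.5), (0, 2, 1.5)]   # (from, to, weight)
N = 3
A = np.zeros((N, N))
for i, j, w in edges:
    A[j, i] = w

d_out = A.sum(axis=0)                 # out-degree: total weight of edges leaving each vertex
d_in = A.sum(axis=1)                  # in-degree: total weight of edges entering each vertex
L = np.diag(d_in) - A                 # Laplacian L = D - A, D = diag of in-degrees

# For a nonnegative matrix, (I + A)^(N-1) has no zero entry iff A is irreducible,
# i.e., iff the digraph is strongly connected.
reach = np.linalg.matrix_power(np.eye(N) + A, N - 1)
print("strongly connected:", bool((reach > 0).all()))
print("balanced graph:", bool(np.allclose(d_in, d_out)))   # d_i(v) = d_o(v) for all v
```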

Lemma 1.6 ([73])

For an irreducible matrix \(A = (a_{ij})_{N\times N}\) with nonnegative off-diagonal elements, which satisfies the diffusive coupling condition \(a_{ii}=-\sum ^N_{j=1,j\neq i}a_{ij}\), we have the following propositions:

  • If λ is an eigenvalue of A and λ ≠ 0, then Re(λ) < 0;

  • A has an eigenvalue 0 with multiplicity 1, and the corresponding right eigenvector is \([1, 1, \ldots , 1]^{\top }\);

  • Suppose that \(\xi =[\xi _1,\xi _2,\ldots ,\xi _N]^\top \in \mathbb {R}^N\) satisfying \(\sum ^N_{i=1}\xi _i=1\) is the normalized left eigenvector of A corresponding to the eigenvalue 0. Then \(\xi _i > 0\) for all i = 1, 2, …, N. Furthermore, if A is symmetric, then \(\xi _i=\frac {1}{N}\) for i = 1, 2, …, N.

1.3.2 Signed Graphs

Let G(V, ε, A) be an undirected signed graph, where \(V = \{\nu _{1}, \nu _{2}, \ldots , \nu _{N}\}\) is the finite set of nodes, ε ⊆ V × V is the set of edges, and \(A=[a_{ij}]\in \mathbb {R}^{N\times N}\) is the adjacency matrix of G with elements \(a_{ij}\), where \(a_{ij} \neq 0 \Leftrightarrow (\nu _{j}, \nu _{i}) \in \varepsilon \). Since \(a_{ij}\) can be positive or negative, the adjacency matrix A uniquely corresponds to a signed graph. For simplicity, G(A) is used to denote the signed graph corresponding to A, and it is assumed that G(A) has no self-loops, i.e., \(a_{ii} = 0\).

  • Path: A path of G(A) is a sequence of edges in ε of the form \((\nu _{i_l},\nu _{i_{l+1}})\in \varepsilon \), l = 1, 2, …, j − 1, where \(\nu _{i_1},\nu _{i_2},\ldots ,\nu _{i_j}\) are distinct vertices.

  • Connected: We say that an undirected graph G(A) is connected if any two vertices of G(A) can be connected through a path.

  • Structurally Balanced: A signed graph G(A) is structurally balanced if it admits a bipartition of the nodes into \(V_{1}\) and \(V_{2}\), with \(V_{1} \cup V_{2} = V\) and \(V_{1} \cap V_{2} = \emptyset \), such that \(a_{ij} \geq 0\) for all \(\nu _{i}, \nu _{j} \in V_{q}\) (q ∈{1, 2}), and \(a_{ij} \leq 0\) for all \(\nu _{i} \in V_{q}\), \(\nu _{j} \in V_{r}\), q ≠ r (q, r ∈{1, 2}). It is said to be structurally unbalanced otherwise.

Definition 1.7

\(\mathcal {D}=\{diag(\sigma )\mid \sigma =[\sigma _{1},\sigma _{2},\ldots ,\sigma _{N}],\sigma _{i}\in \{\pm 1\}\}\) is a set of diagonal matrices, where

$$\displaystyle \begin{aligned} diag(\sigma)=\left[ \begin{array}{cccc} \sigma_{1} & 0 & \cdot\cdot\cdot & 0 \\ 0 & \sigma_{2} & \cdot\cdot\cdot & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdot\cdot\cdot & \sigma_{N} \\ \end{array} \right]~. \end{aligned} $$

In the sequel, we consider \(\{\sigma _{i}, i = 1, 2, \ldots , N\}\) as defined in Definition 1.7 for a structurally balanced signed graph. Following [74], the Laplacian matrix \(L = (l_{ij})_{N\times N}\) of a signed graph G(A) is defined with elements of the form

$$\displaystyle \begin{aligned} l_{ij}=\left\{ \begin{aligned} &\sum^{N}_{k=1}|a_{ik}|, & j=i, \\ &-a_{ij},& j\neq i.\\ \end{aligned} \right. \end{aligned}$$

Lemma 1.8 ([74])

A connected signed graph G(A) is structurally balanced if and only if one of the following equivalent conditions holds:

  1. (1)

    all cycles of G(A) are positive;

  2. (2)

    \(\exists D\in \mathcal {D}\)such that DAD has all nonnegative entries.

Remark 1.9

Condition (2) of this lemma can be verified in a simple and explicit way for a structurally balanced graph. After relabeling the nodes according to the bipartition, the adjacency matrix A can be rewritten as \(A=\left [ \begin {array}{cc} A_{11}^{+} & A_{12}^{-} \\ A_{12}^{-} & A_{22}^{+}\\ \end {array} \right ]\); then, letting \(D=\left [ \begin {array}{cc} I & 0 \\ 0 & -I\\ \end {array} \right ]\), we have DAD ≥ 0.

Lemma 1.10 ([74])

A connected signed graph G(A) is structurally unbalanced if and only if one of the following equivalent conditions holds:

  1. (1)

    one or more cycles of G(A) are negative;

  2. (2)

    \(\not \exists D\in \mathcal {D}\)such that DAD has all nonnegative entries.

Lemma 1.11 ([74])

Consider a connected signed graph G(A). Let \(\lambda _{k}(L)\), k = 1, 2, …, N, be the k-th smallest eigenvalue of the Laplacian matrix L. If G(A) is structurally balanced, then \(0 = \lambda _{1}(L) < \lambda _{2}(L) \leq \cdots \leq \lambda _{N}(L)\).
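
Condition (2) of Lemma 1.8 suggests a simple numerical test for structural balance: assign candidate gauge signs \(\sigma _{i}\) along a breadth-first traversal and then check whether DAD is entrywise nonnegative. The Python sketch below (for an arbitrary small connected signed graph) implements this test and also computes the spectrum of the signed Laplacian, in line with Lemma 1.11.

```python
import numpy as np
from collections import deque

# Test structural balance of a connected undirected signed graph G(A): build a
# candidate D = diag(sigma) by a breadth-first sign assignment and check that
# DAD is entrywise nonnegative (condition (2) of Lemma 1.8). The signed
# adjacency matrix below is an arbitrary illustrative example.
A = np.array([[ 0.0,  1.0, -2.0],
              [ 1.0,  0.0, -1.0],
              [-2.0, -1.0,  0.0]])
N = len(A)

sigma = np.zeros(N)
sigma[0] = 1.0
queue = deque([0])
while queue:                                  # neighbors get sign sigma_i * sign(a_ij)
    i = queue.popleft()
    for j in range(N):
        if A[i, j] != 0 and sigma[j] == 0:
            sigma[j] = sigma[i] * np.sign(A[i, j])
            queue.append(j)

D = np.diag(sigma)
print("structurally balanced:", bool((D @ A @ D >= 0).all()), "| sigma:", sigma)

L = np.diag(np.abs(A).sum(axis=1)) - A        # signed Laplacian: l_ii = sum_k |a_ik|, l_ij = -a_ij
print("eigenvalues of L:", np.round(np.sort(np.linalg.eigvalsh(L)), 4))
# For a structurally balanced connected graph, the smallest eigenvalue is 0 (Lemma 1.11).
```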

Lemma 1.12 ([75])

If a directed signed graph \(\mathcal {G}\) contains a rooted spanning tree, then there exists a proper invertible matrix P satisfying \(PP^{\top } = I\) such that the Laplacian matrix \(\mathcal {L}\) can be written in the following Frobenius normal form:

$$\displaystyle \begin{aligned} P^\top\mathcal{L}P=\begin{bmatrix} \mathcal{L}_{11} & 0 & \cdots & 0 \\ \mathcal{L}_{21} & \mathcal{L}_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ \mathcal{L}_{p1} & \mathcal{L}_{p2} & \cdots & \mathcal{L}_{pp}\\ \end{bmatrix}, \end{aligned} $$
(1.6)

where \(\mathcal {L}_{ii}\), i = 1, 2, …, p, are irreducible matrices, and for any 1 < k ≤ p, there exists at least one q < k such that \(\mathcal {L}_{kq}\) is nonzero.

1.3.3 Quantizer

A quantizer is a device that converts a real-valued signal into a piecewise constant one taking values in a finite or countably infinite set, i.e., a piecewise constant function \(q:\, \mathbb {R}\rightarrow \mathcal {Q}\), where \(\mathcal {Q}\) is a finite or countably infinite subset of \(\mathbb {R}\) (see [76, 77]). Next, we introduce two kinds of uniform quantizers, which will be used in Chaps. 2 and 3, respectively.

The first kind of uniform quantizer is defined as (see Fig. 1.1)

$$\displaystyle \begin{aligned} q(x)=\left\lfloor x+\frac{1}{2}\right\rfloor, \end{aligned} $$
(1.7)

where ⌊⋅⌋ denotes the floor function, i.e., the largest integer not exceeding its argument.

Fig. 1.1 The first kind of uniform quantizer

The second kind of uniform quantizer is defined as (see Fig. 1.2)

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} q(x)=\left\{\begin{array}{ll} \lfloor x\rfloor, &x\geq0,\\ -\lfloor-x\rfloor, &x<0. \end{array}\right. \end{array} \end{aligned} $$
(1.8)
Fig. 1.2 The second kind of uniform quantizer

In this book, we will use the one-parameter family of quantizers \(q_{\mu }(x):=\mu q(\frac {x}{\mu }),\,\mu >0\).
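
For concreteness, the two uniform quantizers (1.7) and (1.8) and the scaled family \(q_{\mu }\) can be implemented in a few lines of Python (the sample points below are arbitrary).

```python
import numpy as np

def q1(x):
    """Uniform quantizer (1.7): q(x) = floor(x + 1/2)."""
    return np.floor(np.asarray(x, dtype=float) + 0.5)

def q2(x):
    """Uniform quantizer (1.8): floor(x) for x >= 0, -floor(-x) for x < 0."""
    x = np.asarray(x, dtype=float)
    return np.where(x >= 0, np.floor(x), -np.floor(-x))

def q_mu(q, x, mu):
    """Scaled quantizer q_mu(x) = mu * q(x / mu), mu > 0."""
    return mu * q(np.asarray(x, dtype=float) / mu)

xs = np.array([-1.7, -0.5, -0.2, 0.2, 0.5, 1.7])   # arbitrary sample points
print("q1:   ", q1(xs))
print("q2:   ", q2(xs))
print("q_0.5:", q_mu(q1, xs, 0.5))                 # quantization step mu = 0.5
```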

1.3.4 Discontinuous Differential Equations

For differential equations with discontinuous right hand sides, we understand the solutions in terms of differential inclusions following Filippov [78].

Definition 1.13

Let I be an interval of the real line \(\mathbb {R}\). A function \(f: I\subseteq \mathbb {R}\rightarrow \mathbb {R}\) is absolutely continuous on I if for every positive number 𝜖 there is a positive number δ such that, whenever a finite sequence of pairwise disjoint sub-intervals \((x_{k}, y_{k})\) of I satisfies \(\sum _{k}|y_{k} - x_{k}| < \delta \), then

$$\displaystyle \begin{aligned} \begin{array}{rcl} \displaystyle\sum_{k}|f(y_{k})-f(x_{k})|<\epsilon. \end{array} \end{aligned} $$
(1.9)

Moreover, the function \(\bar {f}=(f_{1},\,f_{2},\,\ldots ,\,f_{n}): I\subseteq \mathbb {R}\rightarrow \mathbb {R}^{n}\) is said to be absolutely continuous on I if every \(f_{i}\), i = 1, …, n, is absolutely continuous.

Now we introduce the concept of Filippov solution. Consider the following system:

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} \displaystyle\frac{dx(t)}{dt}=f(x(t)), \end{array} \end{aligned} $$
(1.10)

where \(x\in \mathbb {R}^{n},\,f: \mathbb {R}^{n}\rightarrow \mathbb {R}^{n}\) is Lebesgue measurable and locally essentially bounded.

Definition 1.14

A set-valued map is defined as

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} \mathcal {K}(f(x))=\displaystyle\bigcap_{\delta>0}\displaystyle\bigcap_{\mu(N)=0}\bar{co}[f(B(x,\delta)\setminus N)], \end{array} \end{aligned} $$
(1.11)

where \(\bar {co}(\varOmega )\) is the closure of the convex hull of the set Ω, B(x, δ) = {y : ∥y − x∥≤ δ}, and μ(N) is the Lebesgue measure of the set N.

Definition 1.15 ([78])

A solution in the sense of Filippov of the Cauchy problem for Eq. (1.10) with initial condition \(x(0) = x_{0}\) is an absolutely continuous function x(t), t ∈ [0, T], which satisfies \(x(0) = x_{0}\) and the differential inclusion

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} \frac{dx}{dt}\in \mathcal {K}(f(x)),~~a.e.~t\in[0,T], \end{array} \end{aligned} $$
(1.12)

where \(\mathcal {K}(f(x))=(\mathcal {K}[f_{1}(x)],\ldots ,\mathcal {K}[f_{n}(x)])\).
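
As a standard textbook illustration of Definitions 1.14 and 1.15 (not tied to any particular model in this book), consider the scalar discontinuous function f(x) = sign(x). Definition 1.14 yields

$$\displaystyle \begin{aligned} \begin{array}{rcl} \mathcal{K}(\mathrm{sign})(x)=\left\{\begin{array}{ll} \{1\}, &x>0,\\ \left[-1,\,1\right], &x=0,\\ \{-1\}, &x<0. \end{array}\right. \end{array} \end{aligned} $$

Hence, for \(\dot{x}=-\mathrm{sign}(x)\) with \(x(0)=x_{0}>0\), the Filippov solution decreases with slope −1, reaches the origin at time \(t=x_{0}\), and remains there afterwards, since \(0\in -\mathcal {K}(\mathrm{sign})(0)=[-1,\,1]\).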

A property of the Filippov set-valued map \(\mathcal {K}\) is presented in the following lemma:

Lemma 1.16 ([79])

Assume that \(f, \,g:\, \mathbb {R}^{m}\rightarrow \mathbb {R}^{n}\) are locally bounded. Then,

$$\displaystyle \begin{aligned} \begin{array}{rcl} \mathcal{K}(f+g)(x)\subseteq\mathcal{K}(f)(x)+\mathcal{K}(g)(x). \end{array} \end{aligned} $$
(1.13)

Let \(h:\,\mathbb {R}^{n}\rightarrow \mathbb {R}\) be a locally Lipschitz function and let \(S_{h}\) be the set of points where h fails to be differentiable. Then,

  • Clarke generalized gradient [80]: The Clarke generalized gradient of h at \(x\in \mathbb {R}^{n}\) is the set \(\displaystyle\partial _{c} h(x)=co\{\lim _{i \to +\infty }\nabla h(x^{(i)}):x^{(i)}\rightarrow x,\,x^{(i)}\in \mathbb {R}^{n},\,x^{(i)}\not \in S\cup S_{h}\}\), where co(Ω) denotes the convex hull of the set Ω and S can be any set of measure zero.

  • Maximal solution [80]: A Filippov solution to (1.10) is a maximal solution if it cannot be extended further in time.

Definition 1.17 ([81])

\((\varOmega , \mathcal {A})\) is a measurable space and X is a complete separable metric space. Consider a set-valued map \(F:\,\varOmega \rightsquigarrow X\). A measurable map \(f : \varOmega \rightarrow X\) satisfying

$$\displaystyle \begin{aligned} \begin{array}{rcl} \forall \omega\in\varOmega,\,f(\omega)\in F(\omega) \end{array} \end{aligned} $$
(1.14)

is called a measurable selection of F.

Lemma 1.18 ([81] Measurable Selection)

Let X be a complete separable metric space, \((\varOmega , \mathcal {A})\) a measurable space, and F a measurable set-valued map from Ω to closed nonempty subsets of X. Then there exists a measurable selection of F.

Lemma 1.19 ([82] Chain Rule)

If \(V:\, \mathbb {R}^{n}\rightarrow \mathbb {R}\) is a locally Lipschitz function and \(\psi :\,\mathbb {R}\rightarrow \mathbb {R}^{n}\) is absolutely continuous, then for almost every (a.e.) t there exists \(p_{0} \in \partial _{c} V(\psi (t))\) such that \(\frac {d}{dt}V(\psi (t))=p_{0}\cdot \dot {\psi }(t)\).

1.3.5 Some Lemmas

Lemma 1.20 ([83] Jensen Inequality)

Assume that the vector function \(\omega : [0, r]\longrightarrow \mathbb {R}^{m}\) is well defined for the integrations below. For any symmetric positive definite matrix \(W\in \mathbb {R}^{m\times m}\) and scalar r > 0, one has

$$\displaystyle \begin{aligned}r\int_{0}^{r}\omega^{\top}(s)W\omega(s)ds \geq \left(\int_{0}^{r}\omega(s)ds\right)^{\top}W\left(\int_{0}^{r}\omega(s)ds\right).\end{aligned}$$
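
A quick numerical sanity check of this inequality is given below in Python, with an arbitrary positive definite W and an arbitrary vector function ω, approximating both integrals by a rectangle rule on a fine grid.

```python
import numpy as np

# Numerical sanity check of Jensen's inequality on a grid:
# r * int_0^r w(s)^T W w(s) ds  >=  (int_0^r w(s) ds)^T W (int_0^r w(s) ds).
rng = np.random.default_rng(0)
r, n_grid, m = 2.0, 2000, 3
s = np.linspace(0.0, r, n_grid)
ds = s[1] - s[0]

M = rng.standard_normal((m, m))
W = M @ M.T + np.eye(m)                        # symmetric positive definite W (illustrative)
omega = np.stack([np.sin(s), np.cos(2 * s), s ** 2], axis=1)   # arbitrary omega(s)

lhs = r * np.einsum('ki,ij,kj->', omega, W, omega) * ds        # r * int omega^T W omega ds
integral = omega.sum(axis=0) * ds                              # int_0^r omega(s) ds
rhs = float(integral @ W @ integral)
print("inequality holds:", lhs >= rhs, "| lhs =", round(float(lhs), 3), "| rhs =", round(rhs, 3))
```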

Lemma 1.21 ([84])

Consider the differential equation

$$\displaystyle \begin{aligned} \dot x(t)=f(t,x_t). \end{aligned}$$

Suppose that f is continuous, \(f:\mathbb {R}\times C\rightarrow \mathbb {R}^n\) takes \(\mathbb {R}\times \) (bounded sets of C) into bounded sets of \(\mathbb {R}^n\), and u, v, w: \(\mathbb {R}^+\rightarrow \mathbb {R}^+\) are continuous non-decreasing functions such that u(s), v(s), w(s) are positive for s > 0 and u(0) = v(0) = 0. If there exists a continuous functional V : \(\mathbb {R}\times C\rightarrow \mathbb {R}\) such that

$$\displaystyle \begin{aligned} &u(\|x(t)\|)\leq V(t,x(t))\leq v(\|x(t)\|),\\ &\dot V(t,x(t))\leq -w(\|x(t)\|), \end{aligned} $$

where\(\dot V\)is the derivative of V along the solution of the above delayed differential equation, then the solution x = 0 of this equation is uniformly asymptotically stable.

Lemma 1.22 ([85])

Let x(t) be a solution to

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} \dot{x}(t)=f(x(t)), \end{array} \end{aligned} $$
(1.15)

where \(x(0)=x_0\in \mathbb {R}^N\), and let Ω be a bounded closed set. Suppose that there exists a continuously differentiable positive definite function V(x) such that the derivative of V(x(t)) along the trajectories of system (1.15) satisfies \(\displaystyle\frac {dV}{dt}\leq 0\). Let \(E=\{x\mid \displaystyle\frac {dV}{dt}=0,\, x\in \varOmega \}\) and let M ⊆ E be the largest invariant set in E. Then one has x(t) → M as t → +∞.

Lemma 1.23 ([86])

If \(A=(a_{ij})\in \mathbb {R}^{N\times N}\) is an irreducible matrix satisfying \(a_{ij} = a_{ji} \geq 0\) for i ≠ j and \(\sum _{j=1}^{N}a_{ij}=0\) for i = 1, 2, …, N, then for any 𝜖 > 0, all eigenvalues of the matrix

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} \tilde{A}=A-\mathrm{diag}(\epsilon ,\,0,\,\ldots ,\,0) \end{array} \end{aligned} $$
(1.16)

are negative.