Abstract
With the rapid development of modern technology, the world has entered the age of networks. Typical examples of networks include the World Wide Web, routes of airlines, biological networks, human relationships, and so on.
1.1 Background
With the rapid development of modern technology, the world has entered the age of networks. Typical examples of networks include the World Wide Web, routes of airlines, biological networks, human relationships, and so on [1]. As a special kind of network, complex networked systems consisting of large groups of cooperating agents have made a significant impact on a broad range of applications including cooperative control of autonomous underwater vehicles (AUVs) [2], scheduling of automated highway systems [3], and congestion control in communication networks [4].
The study of complex networks can be traced back to Euler’s celebrated solution of the Königsberg bridge problem in 1735, which is often regarded as the first true proof in the theory of networks. In the late 1950s and early 1960s, a random-graph model was proposed by Paul Erdős and Alfréd Rényi [5], which laid a solid foundation for modern network theory. Watts and Strogatz proposed a model of small-world networks in 1998 [6], and Barabási and Albert proposed a model of scale-free networks based on preferential attachment in 1999 [7]. These two works revealed the small-world effect and the scale-free property of complex networks, as well as the mechanisms behind these phenomena. Over the past two decades, complex dynamical networks have been widely exploited by researchers in various fields of physics [8], mathematics [9], engineering [10, 11], biology [12], and sociology [13].
What makes complex networked systems distinct from other kinds of systems is that they make it possible to deploy a large number of subsystems as a team to cooperatively carry out a prescribed task. Furthermore, the most striking feature that can be observed in complex networked systems is their ability to show collective behavior that cannot be well explained in terms of individual dynamics of each single node. Two significant kinds of cooperative behaviors are synchronization and consensus [9, 14,15,16,17,18], both of which mean that all agents reach an agreement on certain quantities of interest.
The formal study of consensus dates back to 1974 [19], when a mathematical model was presented to describe how a group reaches agreement. Another interesting discovery is the collective behavior exhibited by a group of birds in foraging or flight, found by biologists observing birds’ flocking [20]. With careful attention, one finds that consensus is a universal phenomenon in nature, such as the shoaling behavior of fish [21], the synchronous flashing of fireflies [22], the swarming behavior of insects [20, 23, 24], and the herd behavior of land animals [25]. The key question of consensus is how local communications and cooperations among agents, i.e., consensus protocols (or consensus algorithms), can lead to certain desirable global behavior [26,27,28,29]. Various models have been proposed to study the mechanism of the multi-agent consensus problem [30,31,32,33,34,35,36,37]. In [38], the consensus problem was considered for a switched multi-agent system composed of continuous-time and discrete-time subsystems. The authors in [39] investigated consensus problems for a class of second-order continuous-time multi-agent systems with time delay and jointly connected topologies. The work in [40] focused on the mean square practical leader-following consensus of second-order nonlinear multi-agent systems with noises and unmodeled dynamics.
Synchronization, as a typical collective behavior and a basic motion in nature, means that the difference between the states of any two different subsystems goes to zero as time tends to infinity or to a certain fixed value. Synchronization phenomena exist widely and can be found in different forms in nature and man-made systems, such as fireflies’ synchronous flashing, attitude alignment, and the synchronized applause of audiences. To reveal the mechanism of synchronization of complex dynamical networks, a vast amount of work on synchronization has been done over the past few years. Before the appearance of the small-world [6] and scale-free [7] network models, Wu [41, 42] investigated synchronization of an array of linearly coupled systems and gave some effective synchronization criteria. In 1998, Pecora and Carroll [43] proposed the concept of the master stability function as a synchronization criterion, which revealed that synchronization highly depends on the coupling strategy, i.e., the topology of the network. In [14, 44,45,46], synchronization in small-world and scale-free networks was studied in detail. Over the past few years, different kinds of synchronization have been found and studied, such as complete synchronization [14, 41, 42, 47, 48], cluster synchronization [49,50,51,52], phase synchronization [53], lag synchronization [54, 55], and generalized synchronization [56].
In the literature, most works on the consensus/synchronization of complex networks focus on the analysis of network models with perfect communication, in which it is assumed that each agent receives timely and accurate information from its neighbors. However, such models cannot reflect real circumstances, since the information flow between two neighboring nodes is always affected by many uncertain factors, including limited communication capacity, network-induced time delays, communication noise, random packet loss, and so on. These constraints should be considered in the design of control strategies or algorithms. Hence, it is desirable to formulate more realistic models describing complex dynamical networks under imperfect communication constraints and node failure. In this book, three kinds of imperfect communication and node failure will be investigated, and a detailed analysis of the consensus/synchronization of complex dynamical networks will be presented.
1.2 Research Problems
The following three kinds of imperfect communication problems are considered in this book:
-
Quantization: In real-world networked systems, the amount of information that can be reliably transmitted over communication channels is always bounded. To comply with such a communication constraint, signals in real-world systems must be quantized before transmission, and the number of quantization levels is closely related to the information-transmitting capacity between the components of the system. For example, information such as data and code in computers is stored digitally in the form of a finite number of bits, and hence all signals need to be quantized before they are processed by the computer. In this book, two kinds of quantization in networks are considered. One is communication quantization, which concerns the communication from one agent to another. The other is input quantization, which concerns the processing of the information arriving at each agent itself. One natural question is: how does the state of a networked system evolve under quantization?
-
Communication delays: In many real complex networked systems, due to the remote location of agents or unreliable communication media (such as the Internet), communication delays occur during the information exchange between agents and their neighbors. Generally, communication delays have a negative effect on the stability and consensus/synchronization performance of complex networks. Thus, it is important to investigate the effect of time delays on the coordination performance of complex networked systems and to design delay-tolerant communication protocols. Moreover, it is very interesting to study the collective behavior of complex networked systems subject to communication delays and quantization simultaneously.
-
Event-driven sampled data: In complex networked systems, it is usually assumed that all information exchange between an agent and its neighbors is timely. However, communication channels are generally unreliable and communication capacity is limited in many real networks, such as sensor networks. Moreover, the sensing ability of each agent in a networked system is restricted. Thus, it is more practical to use sampled information transmission, i.e., the nodes of the network can only use the information at some particular time instants instead of employing the whole spectrum of information from their neighbors. Sampled-data control has been widely studied in many areas, such as tracking problems and consensus problems. Unlike the traditional time-driven sampled control approach (i.e., periodic sampling), event-triggered control keeps the control signal constant until a certain condition is violated, upon which the control signal is updated (or recomputed). Event-driven control is closer to the way a human being behaves as a controller, since his or her behavior is event-driven rather than time-driven when controlling manually. Thus, an interesting question arises: is it possible to propose an effective distributed event-triggered communication protocol to realize expected collective behaviors?
Traditional distributed communication protocols require that the agents exchange perfect information with their neighbors over the complex networked systems. This kind of information exchange can be an implicit property of complex networked systems. The objective of this book is to design efficient distributed protocols or algorithms for the complex networked systems with imperfect communication and node failure in order to comply with bandwidth limitation and tolerate communication delays and node failure. Specifically, the following problems concerning the collective behavior analysis of complex networked systems will be addressed and investigated in detail:
-
Problem 1.
How does one model multi-agent networks with arbitrary finite communication delays and directed information flow simultaneously [57]? Can consensus be realized regardless of the form of the finite communication delays? How can one regulate the final states of all nodes of the multi-agent networks, even when the external signal is very weak? These three questions will be addressed in Chap. 2.
-
Problem 2.
How can we model the multi-agent consensus model with input quantization and communication delays simultaneously [58, 59]? Does a global solution exist for the considered consensus model with a discontinuous quantization function? How do quantization and communication delays affect the final consensus result? These three questions will be addressed in Chap. 3.
-
Problem 3.
When communication quantization and communication delays exist simultaneously in discrete-time multi-agent networks, can the complex networked system achieve consensus [60, 61]? For the continuous-time case, does a global solution exist? Can consensus of such multi-agent networks be realized? These questions will be explored in Chap. 4.
-
Problem 4.
Can discrete-time and continuous-time multi-agent networks with communication delays achieve consensus via non-periodic sampled information transmission [62, 63]? How does one decide when the information should be transmitted for each agent? What effect does the communication delay have on multi-agent networks with non-periodic sampled information? Chap. 5 will focus on these problems.
-
Problem 5.
In many real multi-agent networks, the agents possess not only cooperative but also antagonistic interactions. Ensuring the desired performance of cooperative-antagonistic multi-agent networks in the presence of communication constraints is an important task in many applications. How does one model cooperative-antagonistic multi-agent networks with arbitrary finite communication delays [64,65,66]? How does one deal with the difficulties stemming from communication delays in cooperative-antagonistic multi-agent networks? What are the final consensus results for this kind of network with communication delays? How does one design the consensus protocol for cooperative-antagonistic multi-agent networks under event-triggered control? Chap. 6 will focus on these problems.
-
Problem 6.
The finite-time (or fixed-time) consensus problem has become a hot topic due to its wide applications. For cooperative-antagonistic multi-agent networks, how does one design finite-time (or fixed-time) bipartite consensus protocols [67, 68]? How does one establish criteria to guarantee the bipartite agreement of all agents and derive an explicit expression for the settling time? Chap. 7 will focus on these problems.
-
Problem 7.
It should be pointed out that many real-world networks are very large. A natural question is how to obtain synchronization criteria for large-scale directed dynamical networks. When an energy constraint is imposed, how does one design an event-triggered sampled-data transmission strategy to realize expected synchronization behaviors [69, 70]? Chaps. 8 and 9 will discuss these synchronization problems.
-
Problem 8.
The size of most real-world networks is very large, which greatly increases the complexity and difficulty of the consensus analysis of the corresponding networks. Is it possible to greatly reduce the size of the networks while preserving the consensus property [71]? In large-scale networks, is it possible to isolate (or remove) failed nodes while preserving the consensus property? Chap. 10 will focus on these problems.
1.2.1 Consensus and Practical Consensus
Consider a multi-agent network \(\mathcal {A}\) with N agents. Let \(x_{i}\in \mathbb {R}\) be the information state of the ith agent, which may be a position, velocity, decision variable, and so on, where \(i\in \mathcal {N}=\{1,2,\ldots,N\}\).
Definition 1.1 (Consensus)
If, for all \(x_{i}(0)\in \mathbb {R}\), i = 1, 2, …, N, \(x_{i}(t)\) converges to some common equilibrium point \(x^{*}\) (dependent on the initial values of some agents) as t → +∞, then we say that the multi-agent network \(\mathcal {A}\) solves a consensus problem asymptotically. The common value \(x^{*}\) is called the group decision value.
Now we give the definitions of the distance from a point to a set and of practical consensus, which will be used in Chap. 3.
Definition 1.2
The distance from a point \(p\in \mathbb {R}\) to a set \(\mathbb {U}\subseteq \mathbb {R}\) is defined as the minimum (infimum) distance between the given point and the points of the set, i.e.,

$$\displaystyle \mathrm{dist}(p,\mathbb{U})=\inf_{q\in\mathbb{U}}|p-q|. $$
Definition 1.3
If for all \(x_{i}(0)\in \mathbb {R}\), \(i\in \mathcal {N}\), the distance from \(x_{i}(t)\) to a set \(\mathbb {U}\subseteq \mathbb {R}\) converges to 0 as t → +∞, then the set \(\mathbb {U}\) is called a practical consensus set.
1.2.2 General Model Description
In this subsection, a brief introduction of the multi-agent consensus model [32] is presented, which requires that each agent receives timely and accurate information from its neighbors.
1.2.2.1 Continuous-time Multi-agent Consensus Model
The continuous-time multi-agent consensus model is as follows:

$$\displaystyle \dot{x}_{i}(t)=\sum_{j\in\mathcal{N}_{i}}a_{ij}\big(x_{j}(t)-x_{i}(t)\big),\quad i\in\mathcal{N}, $$
(1.1)
where \(x_i(t)\in \mathbb {R}^n\), \(\mathcal {N}=\{1,2,\ldots ,N\},~N>1\), \(\mathcal {N}_{i}=\{j\mid a_{ij}>0,\, j=1,2,\ldots ,N\}\), and a ij is defined as follows:
-
when i is not equal to j:
-
If there is a connection from node j to node i, a ij > 0;
-
otherwise, a ij = 0;
-
-
when i is equal to j: a ii = 0, for all \(i\in \mathcal {N}\).
Let \(l_{ij}=-a_{ij}\) for i ≠ j, and \(l_{ii}=-\sum ^N_{j=1,j\neq i}l_{ij}\). The continuous-time linear consensus protocol (1.1) can be written in matrix form as

$$\displaystyle \dot{x}(t)=-(L\otimes I_{n})x(t), $$
(1.2)
where L = (l ij)N×N is the graph Laplacian matrix, \(I_{n}\) is the n × n identity matrix, ⊗ is the Kronecker product, and \(x = [x_{1}^{\top },\,\ldots ,\, x_{N}^{\top }]^{\top }\).
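As an illustrative sketch of this Laplacian dynamics (the three-node topology, weights, and step size below are our own choices, not taken from the text), a forward-Euler simulation of \(\dot {x}=-Lx\) shows all agent states converging to a common value:

```python
import numpy as np

def laplacian(A):
    """Graph Laplacian L = D - A from a nonnegative adjacency matrix A,
    where D is the diagonal matrix of row sums."""
    return np.diag(A.sum(axis=1)) - A

# Hypothetical strongly connected 3-agent network (a directed cycle).
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])
L = laplacian(A)

# Forward-Euler integration of dx/dt = -L x.
x = np.array([1.0, 5.0, -2.0])
dt = 0.01
for _ in range(5000):
    x = x - dt * (L @ x)

print(np.ptp(x))  # spread of the agent states; close to 0 at consensus
```

For the scalar case n = 1, the Kronecker product in the matrix form reduces to L itself, which is what this sketch uses.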
1.2.2.2 Discrete-time Multi-agent Consensus Model
A general discrete-time multi-agent consensus model can be constructed as follows:

$$\displaystyle x_{i}(k+1)=x_{i}(k)+\iota\sum_{j=1,\,j\neq i}^{N}\bar{a}_{ij}\big(x_{j}(k)-x_{i}(k)\big),\quad i\in\mathcal{N}, $$
(1.3)
where \(x_i(k)\in \mathbb {R}^n\), the constant ι > 0 denotes the step size, and \(\bar {a}_{ij}\) is defined as follows:
-
when j is not equal to i:
-
If there is a connection from node j to node i, \(\bar {a}_{ij}>0\);
-
otherwise, \(\bar {a}_{ij}=0\);
-
-
when i is equal to j: \(\bar {a}_{ii}=0\), for all \(i\in \mathcal {N}\).
\(\bar {A}=(\bar {a}_{ij})_{N\times N}\) represents the topological structure of the system. Let A = (a ij)N×N with \(a_{ij}=\iota \bar {a}_{ij}\geq 0\) for i ≠ j, and \(a_{ii}=1-\displaystyle\sum _{j=1,\,j\neq i}^{N} a_{ij}\). Then, the dynamics of the multi-agent network can be written in the compact form

$$\displaystyle x(k+1)=(A\otimes I_{n})x(k), $$
(1.4)
where \(x = [x_{1}^{\top },\,\ldots ,\, x_{N}^{\top }]^{\top }\).
Proposition 1.4 ([72])
System (1.4) solves a consensus problem if and only if
-
(1)
ρ(A) = 1, where ρ(A) is the spectral radius of A;
-
(2)
1 is an algebraically simple eigenvalue of A, and is the unique eigenvalue of maximum modulus;
-
(3)
\(A\mathbf{1}=\mathbf{1}\), where \(\mathbf {1}=(1,\,1,\,\ldots ,\,1)^{\top }\in \mathbb {R}^{N}\);
-
(4)
There exists a nonnegative left eigenvector \(\xi =(\xi _{1},\xi _{2},\ldots ,\xi _{N})^{\top }\in \mathbb {R}^{N}\) of A associated with the eigenvalue 1 such that \(\xi ^{\top }\mathbf {1}=1\).
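The four conditions of Proposition 1.4 can be checked numerically. The sketch below (the helper name and the illustrative row-stochastic matrix are our own) tests them with a straightforward eigendecomposition:

```python
import numpy as np

def check_consensus_conditions(A, tol=1e-9):
    """Check the spectral conditions of Proposition 1.4 for a row-stochastic A."""
    eigvals, eigvecs = np.linalg.eig(A.T)  # left eigenvectors of A
    rho = max(abs(eigvals))
    # (1) spectral radius equals 1
    cond1 = abs(rho - 1.0) < tol
    # (2) 1 is simple and is the unique eigenvalue of maximum modulus
    on_circle = [lam for lam in eigvals if abs(abs(lam) - 1.0) < tol]
    cond2 = len(on_circle) == 1 and abs(on_circle[0] - 1.0) < tol
    # (3) A 1 = 1 (all row sums equal one)
    cond3 = np.allclose(A @ np.ones(A.shape[0]), 1.0)
    # (4) nonnegative left eigenvector xi with xi^T 1 = 1
    k = np.argmin(abs(eigvals - 1.0))
    xi = np.real(eigvecs[:, k])
    xi = xi / xi.sum()            # normalize so that xi^T 1 = 1
    cond4 = np.all(xi >= -tol)
    return bool(cond1 and cond2 and cond3 and cond4)

A = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
print(check_consensus_conditions(A))  # True for this stochastic matrix
```

The example matrix is row stochastic, irreducible, and aperiodic (its eigenvalues are 1, 0.5, and 0), so all four conditions hold; by contrast, the identity matrix fails condition (2) because the eigenvalue 1 is not simple.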
1.3 Mathematical Preliminaries
1.3.1 Matrices and Graphs
A graph is an essential tool for the diagrammatic representation of multi-agent networks. The set of vertices of the network is denoted by \(\mathcal {V}\), and the set of edges among these vertices is denoted by \(\mathcal {E}\). The graph is denoted as \(\mathcal {G}(\mathcal {V},\mathcal {E})\). To distinguish graphs from digraphs (directed graphs), we generally refer to graphs as undirected graphs.
A graph \(\mathcal {G}(\mathcal {V},\mathcal {E})\) whose vertex set \(\mathcal {V}\) contains N vertices is said to have order N. Analogously, the size of a graph is the number m of its edges, i.e., the number of elements in the set \(\mathcal {E}\). An edge of \(\mathcal {G}\) is denoted by e ij = (v i, v j), where v i and v j are called neighbors.
-
Self-loop: If two vertices of an edge are the same, we call this edge a self-loop.
-
Directed graph: A graph in which all the edges are directed from one vertex to another.
-
Directed path: A path in a digraph is an ordered sequence of vertices such that any two consecutive vertices in the sequence form a directed edge of the digraph.
-
Connected graph: A graph is connected, if there is a path between any pair of vertices.
-
Strongly connected graph: A graph is strongly connected, if there is a directed path between every two different vertices.
-
Subgraph: A subgraph of a graph \(\mathcal {G}_{1}(\mathcal {V}_{1},\mathcal {E}_{1})\) is a graph \(\mathcal {G}_{2}(\mathcal {V}_{2},\mathcal {E}_{2})\) such that \(\mathcal {V}_{2}\subseteq \mathcal {V}_{1}\), \(\mathcal {E}_{2}\subseteq \mathcal {E}_{1}\).
-
Directed tree: A directed tree is a digraph with n vertices and n − 1 edges with a root vertex such that there is a directed path from the root vertex to every other vertex.
-
Rooted spanning tree: A rooted spanning tree of a graph is a subgraph which is a directed tree with the same vertex set.
In general, graphs are weighted, i.e., a positive weight is associated to each edge.
There is an intrinsic relationship between graph theory and matrix theory, which helps us better understand the main concepts of both.
-
Reducible: A matrix A is said to be reducible if there exists a permutation matrix P such that
$$\displaystyle \begin{aligned} \begin{array}{rcl} PAP^{\top}=\left(\begin{array}{cc} A_1 & A_3 \\ \mathcal{O} & A_2 \end{array}\right), \end{array} \end{aligned} $$(1.5)where A 1 and A 2 are square matrices and \(\mathcal {O}\) is a null matrix.
-
Irreducible: An irreducible matrix is a matrix which is not reducible.
-
Adjacency matrix: The adjacency matrix A = [a ij] of a (di)graph is a nonnegative matrix defined by a ji = ω if and only if (i, j) is an edge with weight ω, and a ji = 0 otherwise.
-
Out-degree: The out-degree d o(v) of a vertex v is the sum of the weights of edges emanating from v.
-
In-degree: The in-degree d i(v) of a vertex v is the sum of the weights of edges into v.
-
Balanced graph: A vertex is balanced if its out-degree equals its in-degree. A graph is balanced if all of its vertices are balanced.
-
Laplacian matrix: The Laplacian matrix of a graph is the zero-row-sum matrix L = D − A, where A is the adjacency matrix and D is the diagonal matrix of vertex in-degrees; its diagonal entries are nonnegative and its off-diagonal entries are nonpositive.
Lemma 1.5 ([73])
A network is strongly connected if and only if its Laplacian matrix is irreducible.
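Lemma 1.5 gives a purely algebraic connectivity test. The sketch below (our own helper) uses the standard fact that an N × N matrix M is irreducible if and only if \((I+|M|)^{N-1}\) has no zero entry:

```python
import numpy as np

def is_irreducible(M):
    """Irreducibility test: M (N x N) is irreducible iff (I + |M|)^(N-1)
    has no zero entry, i.e. every vertex can reach every other vertex."""
    N = M.shape[0]
    R = np.linalg.matrix_power(np.eye(N) + np.abs(M), N - 1)
    return bool(np.all(R > 0))

# Laplacian of a directed 3-cycle (strongly connected) ...
L_cycle = np.array([[ 1, -1,  0],
                    [ 0,  1, -1],
                    [-1,  0,  1]])
# ... and of a directed chain 1 -> 2 -> 3 (not strongly connected).
L_chain = np.array([[ 0,  0,  0],
                    [-1,  1,  0],
                    [ 0, -1,  1]])

print(is_irreducible(L_cycle))  # True
print(is_irreducible(L_chain))  # False
```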
Lemma 1.6 ([73])
For an irreducible matrix A = (a ij)N×Nwith nonnegative off-diagonal elements, which satisfies the diffusive coupling condition\(a_{ii}=-\sum ^N_{j=1,j\neq i}a_{ij}\) , we have the following propositions:
-
If λ is an eigenvalue of A and λ ≠ 0, then Re(λ) < 0;
-
A has an eigenvalue 0 with multiplicity 1 and the right eigenvector [1, 1, …, 1]⊤;
-
Suppose that\(\xi =[\xi _1,\xi _2,\ldots ,\xi _N]^\top \in \mathbb {R}^N\)satisfying\(\sum ^N_{i=1}\xi _i=1\)is the normalized left eigenvector of A corresponding to eigenvalue 0. Then, ξ i > 0 for all i = 1, 2, …, N. Furthermore, if A is symmetric, then we have\(\xi _i=\frac {1}{N}\)for i = 1, 2, …, N.
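The spectral properties in Lemma 1.6 are easy to verify numerically; the matrix below is an illustrative symmetric, irreducible, diffusively coupled example of our own choosing:

```python
import numpy as np

# Symmetric irreducible matrix with nonnegative off-diagonal entries and
# zero row sums (diffusive coupling), as assumed in Lemma 1.6.
A = np.array([[-3.0,  1.0,  2.0],
              [ 1.0, -1.5,  0.5],
              [ 2.0,  0.5, -2.5]])

eigvals, eigvecs = np.linalg.eigh(A)  # eigenvalues in ascending order
print(eigvals)  # two strictly negative eigenvalues and a simple 0

# The right eigenvector of the zero eigenvalue is a multiple of the
# all-ones vector [1, 1, ..., 1]^T.
v = eigvecs[:, -1]
print(v / v[0])  # approximately [1, 1, 1]
```

Since this A is symmetric, the normalized left eigenvector of the zero eigenvalue is uniform, \(\xi _i = 1/N\), in agreement with the last item of the lemma.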
1.3.2 Signed Graphs
Let G(V, ε, A) be an undirected signed graph, where V = {ν 1, ν 2, …, ν N} is the finite set of nodes, ε ⊆ V × V is the set of edges, and \(A=[a_{ij}]\in \mathbb {R}^{N\times N}\) is the adjacency matrix of G with elements a ij, where a ij ≠ 0⇔(ν j, ν i) ∈ ε. Since a ij can be positive or negative, the adjacency matrix A uniquely corresponds to a signed graph. G(A) is used to denote the signed graph corresponding to A for simplicity, and it is assumed that G(A) has no self-loops, i.e., a ii = 0.
-
Path: Let a path ofG(A) be a sequence of edges in ε of the form: \((\nu _{i_l},\nu _{i_{l+1}})\in \varepsilon \) for l = 1, 2, …, j − 1, where \(\nu _{i_1},\nu _{i_2},\ldots ,\nu _{i_j}\) are distinct vertices.
-
Connected: We say that an undirected graph G(A) is connected when any two vertices of G(A) can be connected through paths.
-
Structurally Balanced: A signed graph G(A) is structurally balanced if it admits a bipartition of the nodes V 1, V 2, V 1 ∪ V 2 = V , V 1 ∩ V 2 = ∅, such that a ij ≥ 0, ∀ν i, ν j ∈ V q, (q ∈{1, 2}); and a ij ≤ 0, ∀ν i ∈ V q, ν j ∈ V r, q ≠ r, (q, r ∈{1, 2}). It is said structurally unbalanced otherwise.
Definition 1.7
\(\mathcal {D}=\{\mathrm{diag}(\sigma )\mid \sigma =[\sigma _{1},\sigma _{2},\ldots ,\sigma _{N}],\sigma _{i}\in \{\pm 1\}\}\) is a set of diagonal matrices, where, for a structurally balanced signed graph with bipartition V 1, V 2, one takes σ i = 1 for ν i ∈ V 1 and σ i = −1 for ν i ∈ V 2.
In the sequel, we consider {σ i, i = 1, 2, …, N} as defined in Definition 1.7 for a structurally balanced signed graph. Following [74], the Laplacian matrix L = (l ij)N×N for a signed graph G(A) is defined with elements given in the form of

$$\displaystyle l_{ij}=\begin{cases} \displaystyle\sum_{k=1,\,k\neq i}^{N}|a_{ik}|, & j=i,\\ -a_{ij}, & j\neq i. \end{cases} $$
Lemma 1.8 ([74])
A connected signed graph G(A) is structurally balanced if and only if one of the following equivalent conditions holds:
-
(1)
all cycles of G(A) are positive;
-
(2)
\(\exists D\in \mathcal {D}\)such that DAD has all nonnegative entries.
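Condition (2) suggests a direct computational test: try to construct the gauge matrix D by a breadth-first 2-coloring of the (connected) graph. The helper below, including the example weights, is our own sketch:

```python
import numpy as np
from collections import deque

def structurally_balanced(A):
    """BFS 2-coloring test for a connected signed graph: balanced iff the
    vertices split into two camps with positive edges inside camps and
    negative edges across (equivalently, some D in D makes DAD >= 0)."""
    N = A.shape[0]
    sigma = np.zeros(N, dtype=int)  # 0 = uncolored, otherwise +1 / -1
    sigma[0] = 1
    queue = deque([0])
    while queue:
        i = queue.popleft()
        for j in range(N):
            if A[i, j] != 0:
                want = sigma[i] * (1 if A[i, j] > 0 else -1)
                if sigma[j] == 0:
                    sigma[j] = want
                    queue.append(j)
                elif sigma[j] != want:
                    return False, None  # a negative cycle exists
    return True, np.diag(sigma)

# Two camps {0, 1} and {2}: positive inside, negative across -> balanced.
A = np.array([[ 0,  1, -1],
              [ 1,  0, -2],
              [-1, -2,  0]])
ok, D = structurally_balanced(A)
print(ok)                      # True
print(np.all(D @ A @ D >= 0))  # DAD has all nonnegative entries
```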
Remark 1.9
This lemma can be proved in a simple and explicit way. Under the bipartition of the node set, the adjacency matrix A can be rewritten as \(A=\left [ \begin {array}{cc} A_{11}^{+} & A_{12}^{-} \\ (A_{12}^{-})^{\top } & A_{22}^{+}\\ \end {array} \right ],\) where \(A_{11}^{+}\) and \(A_{22}^{+}\) have nonnegative entries and \(A_{12}^{-}\) has nonpositive entries. Then, letting \(D=\left [ \begin {array}{cc} I & 0 \\ 0 & -I\\ \end {array} \right ],\) we have DAD ≥ 0.
Lemma 1.10 ([74])
A connected signed graph G(A) is structurally unbalanced if and only if one of the following equivalent conditions holds:
-
(1)
one or more cycles of G(A) are negative;
-
(2)
\(\not \exists D\in \mathcal {D}\)such that DAD has all nonnegative entries.
Lemma 1.11 ([74])
Consider a connected signed graph G(A). Let λ k(L), k = 1, 2, …, N be the k-th smallest eigenvalue of the Laplacian matrix L. If G(A) is structurally balanced, then 0 = λ 1(L) < λ 2(L) ≤⋯ ≤ λ N(L).
Lemma 1.12 ([75])
If a directed signed graph \(\mathcal {G}\) contains a rooted spanning tree, then there exists a permutation matrix P (so that PP ⊤ = I) such that the Laplacian matrix \(\mathcal {L}\) can be depicted in the following Frobenius normal form:

$$\displaystyle P\mathcal{L}P^{\top}=\left(\begin{array}{cccc} \mathcal{L}_{11} & \mathcal{O} & \cdots & \mathcal{O} \\ \mathcal{L}_{21} & \mathcal{L}_{22} & \cdots & \mathcal{O} \\ \vdots & \vdots & \ddots & \vdots \\ \mathcal{L}_{p1} & \mathcal{L}_{p2} & \cdots & \mathcal{L}_{pp} \end{array}\right), $$

where \(\mathcal {L}_{ii},i=1,2,\ldots ,p\), are irreducible matrices, and for any 1 < k ≤ p, there exists at least one q < k such that \(\mathcal {L}_{kq}\) is nonzero.
1.3.3 Quantizer
A quantizer is a device that converts a real-valued signal into a piecewise constant one taking on a finite or countably infinite set of values, i.e., a piecewise constant function \(q:\, \mathbb {R}\rightarrow \mathcal {Q}\), where \(\mathcal {Q}\) is a finite or countably infinite subset of \(\mathbb {R}\) (see [76, 77]). Next, we introduce two kinds of uniform quantizers, which will be used in Chaps. 2 and 3, respectively.
The first kind of uniform quantizer is defined as (see Fig. 1.1)
where ⌊⋅⌋ denotes the floor function, i.e., ⌊z⌋ is the greatest integer not exceeding z.
The second kind of uniform quantizer is defined as (see Fig. 1.2)
In this book, we will use the one-parameter family of quantizers \(q_{\mu }(x):=\mu q(\frac {x}{\mu }),\,\mu >0\).
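The precise quantizers are those plotted in Figs. 1.1 and 1.2. As a rough, hypothetical stand-in, the sketch below uses a floor-type quantizer \(q(x)=\lfloor x\rfloor \) (our assumption, for illustration only) to show how the one-parameter family \(q_{\mu }\) trades quantization error against the parameter μ:

```python
import math

def q(x):
    """Floor-type uniform quantizer (an illustrative stand-in; the exact
    quantizers used in Chaps. 2 and 3 are those of Figs. 1.1 and 1.2)."""
    return math.floor(x)

def q_mu(x, mu):
    """One-parameter family q_mu(x) = mu * q(x / mu), mu > 0, as in the text."""
    return mu * q(x / mu)

# Shrinking mu refines the quantization grid: |x - q_mu(x)| <= mu here.
x = 3.7
for mu in (1.0, 0.5, 0.1):
    print(mu, q_mu(x, mu), abs(x - q_mu(x, mu)))
```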
1.3.4 Discontinuous Differential Equations
For differential equations with discontinuous right hand sides, we understand the solutions in terms of differential inclusions following Filippov [78].
Definition 1.13
Let I be an interval in the real line \(\mathbb {R}\). A function \(f: I\subseteq \mathbb {R}\rightarrow \mathbb {R}\) is absolutely continuous on I if for every positive number 𝜖, there is a positive number δ such that whenever a finite sequence of pairwise disjoint sub-intervals (x k, y k) of I satisfies ∑ k|y k − x k| < δ, then

$$\displaystyle \sum_{k}|f(y_{k})-f(x_{k})|<\epsilon. $$
Moreover, we say that the function \(\bar {f}=(f_{1},\,f_{2},\,\ldots ,\,f_{n}): I\subseteq \mathbb {R}\rightarrow \mathbb {R}^{n}\) is absolutely continuous on I if every f i, i = 1, …, n, is absolutely continuous.
Now we introduce the concept of the Filippov solution. Consider the following system:

$$\displaystyle \dot{x}(t)=f(x(t)), $$
(1.10)
where \(x\in \mathbb {R}^{n}\) and \(f: \mathbb {R}^{n}\rightarrow \mathbb {R}^{n}\) is Lebesgue measurable and locally essentially bounded.
Definition 1.14
A set-valued map is defined as

$$\displaystyle \mathcal{K}[f](x)=\bigcap_{\delta>0}\ \bigcap_{\mu(N)=0}\bar{co}\left(f(B(x,\delta)\setminus N)\right), $$

where \(\bar {co}(\varOmega )\) is the closure of the convex hull of the set Ω, B(x, δ) = {y : ∥y − x∥≤ δ}, and μ(N) is the Lebesgue measure of the set N.
Definition 1.15 ([78])
A solution in the sense of Filippov of the Cauchy problem for Eq. (1.10) with initial condition x(0) = x 0 is an absolutely continuous function x(t), t ∈ [0, T], which satisfies x(0) = x 0 and the differential inclusion

$$\displaystyle \dot{x}(t)\in \mathcal{K}(f(x(t))),\quad \text{for a.e. } t\in[0,T], $$

where \(\mathcal {K}(f(x))=(\mathcal {K}[f_{1}(x)],\ldots ,\mathcal {K}[f_{n}(x)])\).
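A standard concrete example of this construction is the scalar signum function, which is discontinuous at the origin; its Filippov regularization fills in the jump:

```latex
\mathcal{K}[\operatorname{sign}](x)=
\begin{cases}
\{1\}, & x>0,\\
[-1,\,1], & x=0,\\
\{-1\}, & x<0.
\end{cases}
```

Consequently, the system \(\dot {x}=-\operatorname {sign}(x)\) admits absolutely continuous Filippov solutions that reach the origin and remain there, since \(0\in -\mathcal {K}[\operatorname {sign}](0)\).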
A property of Filippov differential inclusion \(\mathcal {K}\) is presented in the following lemma:
Lemma 1.16 ([79])
Assume that \(f, \,g:\, \mathbb {R}^{m}\rightarrow \mathbb {R}^{n}\) are locally bounded. Then,

$$\displaystyle \mathcal{K}[f+g](x)\subseteq \mathcal{K}[f](x)+\mathcal{K}[g](x). $$
Let \(h:\,\mathbb {R}^{n}\rightarrow \mathbb {R}\) be a locally Lipschitz function and let S h be the set of points where h fails to be differentiable. Then, the following notions are well defined:
-
Clarke generalized gradient [80]: Clarke generalized gradient of h at \(x\in \mathbb {R}^{n}\) is the set \(\displaystyle\partial _{c} h(x)=co\{\lim _{i \to +\infty }\nabla h(x^{(i)}):x^{(i)}\rightarrow x,\,x^{(i)}\in \mathbb {R}^{n},\,x^{(i)}\not \in S\cup S_{h}\}\), where co(Ω) denotes the convex hull of set Ω and S can be any set of zero measure.
-
Maximal solution [80]: A Filippov solution to (1.10) is a maximal solution if it cannot be extended further in time.
Definition 1.17 ([81])
\((\varOmega , \mathcal {A})\) is a measurable space and X is a complete separable metric space. Consider a set-valued map \(F:\,\varOmega \rightsquigarrow X\). A measurable map f : Ω → X satisfying

$$\displaystyle f(\omega)\in F(\omega),\quad \forall\,\omega\in\varOmega, $$

is called a measurable selection of F.
Lemma 1.18 ([81] Measurable Selection)
Let X be a complete separable metric space, \((\varOmega , \mathcal {A})\) a measurable space, and F a measurable set-valued map from Ω to closed nonempty subsets of X. Then there exists a measurable selection of F.
Lemma 1.19 ([82] Chain Rule)
If\(V:\, \mathbb {R}^{n}\rightarrow \mathbb {R}\)is a locally Lipschitz function and\(\psi :\,\mathbb {R}\rightarrow \mathbb {R}^{n}\)is absolutely continuous, then for almost everywhere (a.e.) t there exists p 0 ∈ ∂ c V (ψ(t)) such that\(\frac {d}{dt}V(\psi (t))=p_{0}\cdot \dot {\psi }(t)\).
1.3.5 Some Lemmas
Lemma 1.20 ([83] Jensen Inequality)
Assume that the vector function \(\omega : [0, r]\longrightarrow \mathbb {R}^{m}\) is well defined for the following integrations. For any symmetric positive semidefinite matrix \(W\in \mathbb {R}^{m\times m}\) and scalar r > 0, one has

$$\displaystyle r\int_{0}^{r}\omega^{\top}(s)W\omega(s)\,\mathrm{d}s\geq \left(\int_{0}^{r}\omega(s)\,\mathrm{d}s\right)^{\top}W\left(\int_{0}^{r}\omega(s)\,\mathrm{d}s\right). $$
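The Jensen integral inequality is easy to sanity-check numerically. The sketch below (the vector function ω and the positive semidefinite W are illustrative choices of ours) discretizes both sides with a midpoint rule:

```python
import numpy as np

# Numerical check of the Jensen integral inequality
#   r * int_0^r w(s)^T W w(s) ds >= (int_0^r w(s) ds)^T W (int_0^r w(s) ds)
# for a positive semidefinite W, via midpoint-rule discretization.
r = 2.0
n = 10000
s = np.linspace(0.0, r, n, endpoint=False) + r / (2 * n)  # midpoints
ds = r / n

# Illustrative vector function w(s) in R^2 and a symmetric PSD matrix W.
w = np.stack([np.sin(s), np.cos(3 * s)], axis=1)  # shape (n, 2)
W = np.array([[2.0, 0.5],
              [0.5, 1.0]])                         # eigenvalues > 0

lhs = r * np.sum(np.einsum('ni,ij,nj->n', w, W, w)) * ds
integral = w.sum(axis=0) * ds
rhs = integral @ W @ integral
print(lhs >= rhs)  # the quadratic-integral side dominates
```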
Lemma 1.21 ([84])
Consider the retarded functional differential equation

$$\displaystyle \dot{x}(t)=f(t,x_{t}), $$

where \(x_{t}\in C\) denotes the state history segment. Suppose that f is continuous and \(f:\mathbb {R}\times C\rightarrow \mathbb {R}^n\) takes \(\mathbb {R}\times \) (bounded sets of C) into bounded sets of \(\mathbb {R}^n\), and u, v, w: \(\mathbb {R}^+\rightarrow \mathbb {R}^+\) are continuous and monotonically non-decreasing functions such that u(s), v(s), w(s) are positive for s > 0 with u(0) = v(0) = 0. If there exists a continuous functional V: \(\mathbb {R}\times C\rightarrow \mathbb {R}\) such that

$$\displaystyle u(\|\phi(0)\|)\leq V(t,\phi)\leq v(\|\phi\|),\qquad \dot{V}(t,\phi)\leq -w(\|\phi(0)\|), $$

where \(\dot V\) is the derivative of V along the solutions of the above delayed differential equation, then the solution x = 0 of this equation is uniformly asymptotically stable.
Lemma 1.22 ([85])
Let x(t) be a solution to

$$\displaystyle \dot{x}(t)=f(x(t)), $$
(1.15)
where \(x(0)=x_0\in \mathbb {R}^N\), and let Ω be a bounded closed set. Suppose that there exists a continuously differentiable positive definite function V (x) such that the derivative of V along the trajectories of system (1.15) satisfies \(\displaystyle\frac {dV}{dt}\leq 0\). Let \(E=\{x\mid \displaystyle\frac {dV}{dt}=0,\ x\in \varOmega \}\) and let M ⊂ E be the largest invariant set; then x(t) → M as t → +∞.
Lemma 1.23 ([86])
If \(A=(a_{ij})\in \mathbb {R}^{N\times N}\) is an irreducible matrix satisfying \(a_{ij}=a_{ji}\geq 0\) for i ≠ j and \(\sum _{j=1}^{N}a_{ij}=0\) for i = 1, 2, …, N, then for any 𝜖 > 0, all eigenvalues of the matrix

$$\displaystyle A-\mathrm{diag}(\epsilon,0,\ldots,0) $$

are negative.
References
Newman MEJ. The structure and function of complex networks. SIAM Rev. 2003;45(2):167–256.
Chen MZ, Zhu DQ. Multi-AUV cooperative hunting control with improved Glasius bio-inspired neural network. J Navig. 2019;72(3):759–76.
Ramezani M, Ye E. Lane density optimisation of automated vehicles for highway congestion control. Transportmetrica B Transp Dyn. 2019;7(1):1096–116.
Omotosho O, Oyebode A, Hinmikaiye J. Understand congestion: its effects on modern networks. Am J Comput Eng. 2019;2(5).
Erdos P, Renyi A. On random graphs. Publ Math Debrecen. 1959;6:290–7.
Watts DJ, Strogatz SH. Collective dynamics of ’small-world’ networks. Nature. 1998;393:440–2.
Barabasi AL, Albert R. Emergence of scaling in random networks. Science. 1999;286(5439):509–12.
Zou Y, Donner RV, Marwan N, Donges J, Kurths J. Complex network approaches to nonlinear time series analysis. Phys Rep. 2019;787:1–97.
Zheng JY, Xu L, Xie LH, You KY. Consensus ability of discrete-time multiagent systems with communication delay and packet dropouts. IEEE Trans Automat Contr. 2019;64(3):1185–92.
Walter BB. Robust engineering of dynamic structures in complex networks. Engineering and Applied Science Theses and Dissertations. 2018.
Hong YG, Wang XL, Jiang ZP. Distributed output regulation of leader-follower multi-agent systems. Int J Robust Nonlinear Control. 2013;23(1):48–66.
Camacho DM, Collins KM, Powers RK, Costello JC, Collins JJ. Next-generation machine learning for biological networks. Cell. 2018;173(7):1581–92.
Lu XY, Szymanski BK. A regularized stochastic block model for the robust community detection in complex networks. Sci Rep. 2019;9:13247.
Wang XF, Chen GR. Synchronization in scale-free dynamical networks: robustness and fragility. IEEE Trans Circuits Syst I Fundam Theory Appl. 2002;49(1):54–62.
Qian YY, Liu L, Feng G. Output consensus of heterogeneous linear multi-agent systems with adaptive event-triggered control. IEEE Trans Automat Contr. 2019;64(6):2606–13.
Liu ZX, Guo L. Synchronization of multi-agent systems without connectivity assumptions. Automatica. 2009;45(12):2744–53.
Chen C, Chen G, Guo L. Consensus of flocks under M-nearest-neighbor rules. J Syst Sci Complex. 2015;28(1):1–15.
Xie GM, Wang L. Consensus control for a class of networks of dynamic agents. Int J Robust Nonlinear Control. 2007;17(10–11):941–59.
Degroot MH. Reaching a consensus. J Am Stat Assoc. 1974; 69(345):118–21.
Robinson SK, Holmes RT. Foraging behavior of forest birds: the relationships among search tactics, diet, and habitat structure. Ecology. 1982;63(6):1918–31.
Gaffney KA, Webster MM. Consistency of fish-shoal social network structure under laboratory conditions: consistency of social network structure. J Fish Biol. 2018;92(5):1574–89.
Wen GH, Yu WW, Li ZK, Yu XH, Cao JD. Neuro-adaptive consensus tracking of multiagent systems with a high-dimensional leader. IEEE Trans Cybern. 2017;47(7):1730–42.
Zhao XW, Guan ZH, Li J, Zhang XH, Chen CY. Flocking of multi-agent nonholonomic systems with unknown leader dynamics and relative measurements. Int J Robust Nonlinear Control. 2017;27(17):3685–702.
Ma CQ, Zhang JF. Necessary and sufficient conditions for consensusability of linear multi-agent systems. IEEE Trans Automat Contr. 2010;55(5):1263–8.
Pluhacek J, Tuckova V, King SRB. Overmarking behaviour of zebra males: no scent masking, but a group cohesion function across three species. Behav Ecol Sociobiol. 2019;73(10).
Lin ZY, Francis BA, Maggiore M. Necessary and sufficient graphical conditions for formation control of unicycles. IEEE Trans Automat Contr. 2005;50(1):121–7.
Chen Y, Lü JH, Yu XH, Lin ZL. Consensus of discrete-time second-order multi-agent systems based on infinite products of general stochastic matrices. SIAM J Control Optim. 2013;51(4):3274–301.
Zhu JD, Tian YP, Kuang J. On the general consensus protocol of multi-agent systems with double-integrator dynamics. Linear Algebra Appl. 2009;431(5–7):701–15.
Lou YC, Hong YG. Target containment control of multi-agent systems with random switching interconnection topologies. Automatica. 2012;48(5):879–85.
Jadbabaie A, Lin J, Morse AS. Coordination of groups of mobile autonomous agents using nearest neighbor rules. IEEE Trans Automat Contr. 2003;48(6):988–1001.
Lin ZY, Broucke M, Francis B. Local control strategies for groups of mobile autonomous agents. IEEE Trans Automat Contr. 2004;49(4):622–9.
Olfati-Saber R, Murray RM. Consensus problems in networks of agents with switching topology and time delays. IEEE Trans Automat Contr. 2004;49(9):1520–33.
Ren W, Beard RW. Consensus seeking in multiagent systems under dynamically changing interaction topologies. IEEE Trans Automat Contr. 2005;50(5):655–61.
Ren W, Beard RW, Atkins EM. A survey of consensus problems in multi-agent coordination. Proceedings of the American control conference; 2005. p. 1859–64.
Fax JA, Murray RM. Information flow and cooperative control of vehicle formations. IEEE Trans Automat Contr. 2004;49(9):1465–76.
Zhu JD, Lü JH, Yu XH. Flocking of multi-agent non-holonomic systems with proximity graphs. IEEE Trans Circuits Syst I Regul Pap. 2013;60(1):199–210.
Wang XL, Su HS, Wang XF, Chen GR. Nonnegative edge quasi-consensus of networked dynamical systems. IEEE Trans Circuits Syst II Express Briefs. 2017;64(3):304–8.
Zheng YS, Wang L. Consensus of switched multiagent systems. IEEE Trans Circuits Syst II Express Briefs. 2016;63(3):314–8.
Lin P, Jia YM. Consensus of a class of second-order multi-agent systems with time-delay and jointly-connected topologies. IEEE Trans Automat Contr. 2010;55(3):778–84.
Zou WC, Xiang ZR, Ahn CK. Mean square leader-following consensus of second-order nonlinear multiagent systems with noises and unmodeled dynamics. IEEE Trans Syst Man Cybern Syst. 2019;49(12):2478–86.
Wu CW, Chua LO. Synchronization in an array of linearly coupled dynamical systems. IEEE Trans Circuits Syst I Fundam Theory Appl. 1995;42(8):430–47.
Wu CW, Chua LO. On a conjecture regarding the synchronization in an array of linearly coupled dynamical systems. IEEE Trans Circuits Syst I Fundam Theory Appl. 1996;43(2):161–5.
Pecora LM, Carroll TL. Master stability functions for synchronized coupled systems. Phys Rev Lett. 1998;80(10):2109–12.
Barahona M, Pecora LM. Synchronization in small-world systems. Phys Rev Lett. 2002;89(5):054101.
Boccaletti S, Hwang DU, Chavez M, Amann A, Kurths J, Pecora LM. Synchronization in dynamical networks: evolution along commutative graphs. Phys Rev E. 2006;74(1):016102.
Wang XF, Chen GR. Synchronization in small-world dynamical networks. Int J Bifurcat Chaos. 2002;12(1):187–92.
Zhou J, Lu JA, Lü JH. Pinning adaptive synchronization of a general complex dynamical network. Automatica. 2008;44(4):996–1003.
Liang JL, Wang ZD, Liu YR, Liu XH. Global synchronization control of general delayed discrete-time networks with stochastic coupling and disturbances. IEEE Trans Syst Man Cybern B Cybern. 2008;38(4):1073–83.
Cao JD, Li LL. Cluster synchronization in an array of hybrid coupled neural networks with delay. Neural Netw. 2009;22(4):335–42.
Qin WX, Chen GR. Coupling schemes for cluster synchronization in coupled Josephson equations. Physica D Nonlinear Phenom. 2004;197(3–4):375–91.
Wu W, Chen TP. Partial synchronization in linearly and symmetrically coupled ordinary differential systems. Physica D Nonlinear Phenom. 2009;238(4):355–64.
Wu W, Zhou WJ, Chen TP. Cluster synchronization of linearly coupled complex networks under pinning control. IEEE Trans Circuits Syst I Regul Pap. 2009;56(4):829–39.
Rosenblum MG, Pikovsky AS, Kurths J. From phase to lag synchronization in coupled chaotic oscillators. Phys Rev Lett. 1997;78(22):4193–6.
Sun YH, Cao JD. Adaptive lag synchronization of unknown chaotic delayed neural networks with noise perturbation. Phys Lett A. 2007;364(3–4):277–85.
Taherion S, Lai YC. Observability of lag synchronization of coupled chaotic oscillators. Phys Rev E. 1999;59(6):R6247–50.
Afraimovich V, Cordonet A, Rulkov NF. Generalized synchronization of chaos in noninvertible maps. Phys Rev E. 2002;66(1):016208.
Lu JQ, Ho DWC, Kurths J. Consensus over directed static networks with arbitrary finite communication delays. Phys Rev E. 2009;80(6):066121.
Li LL, Ho DWC, Lu JQ. A unified approach to practical consensus with quantized data and time delay. IEEE Trans Circuits Syst I Regul Pap. 2013;60(10):2668–78.
Li LL, Ho DWC, Lu JQ. Distributed practical consensus in multi-agent networks with communication constraints. Proceedings of 2012 UKACC international conference on control; 2012. p. 1057–62.
Li LL, Ho DWC, Liu YR. Discrete-time multi-agent consensus with quantization and communication delays. Int J Gen Syst. 2014;43(3–4):319–29.
Li LL, Ho DWC, Huang C, Lu JQ. Event-triggered discrete-time multi-agent consensus with delayed quantized information. The 33rd Chinese control conference; 2014. p. 1722–7.
Li LL, Ho DWC, Xu SY. A distributed event-triggered scheme for discrete-time multi-agent consensus with communication delays. IET Control Theory Appl. 2014;8(10):830–7.
Li LL, Ho DWC, Lu JQ. Event-based network consensus with communication delays. Nonlinear Dyn. 2017;87(3):1847–58.
Guo X, Lu JQ, Alsaedi A, Alsaadi FE. Bipartite consensus for multi-agent systems with antagonistic interactions and communication delays. Physica A Stat Mech Appl. 2018;495:488–97.
Lu JQ, Guo X, Huang TW, Wang ZD. Consensus of signed networked multi-agent systems with nonlinear coupling and communication delays. Appl Math Comput. 2019;350:153–62.
Li LL, Lu JQ, Ho DWC. Event-based discrete-time multi-agent consensus over signed digraphs with communication delays. J Franklin Inst Eng Appl Math. 2019;356(18):11668–89.
Shi XC, Lu JQ, Liu Y, Huang TW, Alsaadi FE. A new class of fixed-time bipartite consensus protocols for multi-agent systems with antagonistic interactions. J Franklin Inst Eng Appl Math. 2018;355(12):5256–71.
Lu JQ, Wang YQ, Shi XC, Cao JD. Finite-time bipartite consensus for multi-agent systems under detail-balanced antagonistic interactions. IEEE Trans Syst Man Cybern Syst. 2019. https://doi.org/10.1109/TSMC.2019.2938419.
Lu JQ, Ho DWC. Globally exponential synchronization and synchronizability for general dynamical networks. IEEE Trans Syst Man Cybern B Cybern. 2010;40(2):350–61.
Li LL, Ho DWC, Cao JD, Lu JQ. Pinning cluster synchronization in an array of coupled neural networks under event-based mechanism. Neural Netw. 2016;76:1–12.
Li LL, Ho DWC, Lu JQ. A consensus recovery approach to nonlinear multi-agent system under node failure. Inf Sci. 2016;367:975–89.
Xiao F, Wang L, Wang AP. Consensus problems in discrete-time multiagent systems with fixed topology. J Math Anal Appl. 2006;322(2):587–98.
Horn RA, Johnson CR. Matrix analysis. Cambridge: Cambridge University Press; 1985.
Altafini C. Consensus problems on networks with antagonistic interactions. IEEE Trans Automat Contr. 2013;58(4):935–46.
Yang SF, Cao JD, Lu JQ. A new protocol for finite-time consensus of detail-balanced multi-agent networks. Chaos Interdiscip J Nonlinear Sci. 2012;22(4):043134.
Liberzon D. Hybrid feedback stabilization of systems with quantized signals. Automatica. 2003;39(9):1543–54.
Liberzon D. Quantization, time delays, and nonlinear stabilization. IEEE Trans Automat Contr. 2006;51(7):1190–5.
Filippov AF. Differential equations with discontinuous right-hand sides, vol. 18. Cham: Springer; 1988.
Paden B, Sastry S. A calculus for computing Filippov’s differential inclusion with application to the variable structure control of robot manipulators. IEEE Trans Circuits Syst. 1987;34(1):73–82.
Clarke FH. Optimization and nonsmooth analysis. New York: Wiley; 1983.
Aubin JP, Frankowska H. Set-valued analysis. Boston: Birkhäuser; 1990.
Forti M, Nistri P. Global convergence of neural networks with discontinuous neuron activations. IEEE Trans Circuits Syst I Regul Pap. 2003;50(11):1421–35.
Gu KQ, Kharitonov VL, Chen J. Stability of time-delay systems. Boston: Birkhäuser; 2003.
Hale JK, Lunel SMV. Introduction to functional differential equations. Applied mathematical sciences. New York: Springer; 1993.
Rouche N, Habets P, Laloy M. Stability theory by Liapunov’s direct method. Applied mathematical sciences, vol. 22. New York: Springer; 1977.
Chen TP, Liu XW, Lu WL. Pinning complex networks by a single controller. IEEE Trans Circuits Syst I Regul Pap. 2007;54(6):1317–26.
© 2021 Springer Nature Singapore Pte Ltd. and Science Press, China
Lu, J., Li, L., Ho, D.W.C., Cao, J. (2021). Introduction. In: Collective Behavior in Complex Networked Systems under Imperfect Communication. Springer, Singapore. https://doi.org/10.1007/978-981-16-1506-1_1
Print ISBN: 978-981-16-1505-4
Online ISBN: 978-981-16-1506-1