
1 Introduction

The consensus problem has been extensively studied in the last decade. Depending on the application, the framework can assume a directed or undirected interaction graph, connections affected or not by delays, discrete- or continuous-time, linear or nonlinear agent dynamics, a fixed or dynamical interaction graph, and synchronized or desynchronized interactions [12, 14, 16, 17, 19]. Controlling the network in a decentralized way, by modeling it as a multiagent system, reduces computation and communication costs [15, 18, 20]. On the other hand, the coordination and performance of interconnected systems are related to the network topology. Most existing works assume the connectivity of the interaction graph in order to guarantee the coordination behavior. However, some works have been oriented toward networks in which the global agreement cannot be reached and only local ones are obtained [15, 21]. Others propose controllers that are able to maintain the network connectivity in order to ensure the global coordination [6, 8, 9, 22].

Here we briefly recall the main results provided in our previous work [8, 9] and show how they can be implemented. Precisely, we consider a multiagent system with discrete-time dynamics and state-dependent interconnection topology. Two agents are able to communicate if an algebraic relation between their states is satisfied. The connected agents are called neighbors. The agents update their state in a decentralized manner by taking into account their neighbors' states. A connection is preserved as long as the algebraic relation is verified. The design of the decentralized controllers satisfying the algebraic constraint can be done either by minimizing a cost function [8] or by negotiations through the network at each step [7]. Our approach uses invariance-based techniques (see [13] for the use of invariance in control theory) to characterize the conditions assuring that the algebraic constraint holds. The resulting topology preservation conditions can be rewritten as a convex constraint that may be posed in LMI form [4, 5]. Thus, we not only propose a new tool for decentralized control but also an easily implementable one. The practical implementation of this set theory-based control strategy [10, 11] requires a minimal number of interconnections ensuring the network connectivity. It should be noted that our procedure is quite flexible and, as we shall see, additional global objectives can be addressed. Precisely, we focus on the implementation of the topology preservation presented in [8] to tackle specific problems concerning multiagent systems. The subsystems composing the network are mobile agents moving on the plane whose communication capability is subject to constraints on their distances. Different coordination tasks, such as flocking, consensus, and predictive control, are considered and solved employing the LMI conditions for avoiding the loss of connections. Numerical illustrative examples allow us to analyze the results and to compare the different control strategies.

The chapter is organized as follows. Section 15.2 contains the main theoretical results. First, we formulate the decentralized control problem under analysis. Second, we derive the LMI conditions for network topology preservation in general settings as well as in the case of common feedback gains. Control design strategies for full or partial state consensus of identical systems with double-integrator dynamics are discussed in Sect. 15.3. In Sect. 15.4, we present some numerical examples illustrating the control strategies proposed in Sect. 15.3. Some conclusions and remarks on further work are provided at the end of the chapter.

1.1 Notation

The set of positive integers smaller than or equal to the integer \(n \in \mathbb {N}\) is denoted as \(\mathbb {N}_n\), i.e., \(\mathbb {N}_n = \{x \in \mathbb {N} : 1 \le x \le n \}\). Given the finite set \(\mathscr {A} \subseteq \mathbb {N}_n\), \(| \mathscr {A} |\) is its cardinality. Given a symmetric matrix \(P \in \mathbb {R}^{n \times n}\), the notation \(P > 0\) (\(P \ge 0\)) means that P is positive (semi-)definite. By \(A^{\dagger }\) we denote the left pseudoinverse of the matrix A. Given the matrix \(T \in \mathbb {R}^{n \times m}\) and \(N \in \mathbb {N}\), \(\mathscr {D}_N(T) \in \mathbb {R}^{nN \times mN}\) is the block-diagonal matrix whose N block-diagonal elements are given by T, while \(\mathscr {D}(A,B,\ldots ,Z)\) is the block-diagonal matrix, of adequate dimension, whose block-diagonal elements are the matrices \(A, B,\ldots , Z\). Given a set of N matrices \(A_k\) with \(k \in \mathbb {N}_N\), we denote by \(\{A_k\}_{k \in \mathbb {N}_N}\) the matrix obtained by concatenating the \(A_k\) in a column.
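
The operators \(\mathscr {D}_N(\cdot )\), \(\mathscr {D}(\cdot )\), and \((\cdot )^{\dagger }\) defined above are used repeatedly in the sequel; a minimal NumPy/SciPy sketch of how they can be realized numerically is given below (the helper names are ours).

```python
import numpy as np
from scipy.linalg import block_diag

def D_N(T, N):
    """Block-diagonal matrix with N copies of T on the diagonal, i.e., D_N(T)."""
    return block_diag(*([T] * N))

def D(*matrices):
    """Block-diagonal matrix D(A, B, ..., Z) built from blocks of arbitrary sizes."""
    return block_diag(*matrices)

def dagger(A):
    """Moore-Penrose pseudoinverse, used here to realize the operator A^dagger."""
    return np.linalg.pinv(A)
```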

2 Set Theory Results for Topology Preservation

2.1 Problem Formulation

Throughout the chapter, we consider a multiagent system with \(V \ge 2\) interconnected agents assumed identical:

$$\begin{aligned} x_i^+ = A x_i + B u_i, \end{aligned}$$
(15.1)

where \(x_i \in \mathbb {R}^n\) is the state, \(u_i \in \mathbb {R}^m\) is the control input of the i-th agent and \(A \in \mathbb {R}^{n \times n}\), \(B \in \mathbb {R}^{n \times m}\).

In order to pursue collaborative tasks in a decentralized way, the agents exchange some information. The information available to every agent is supposed to be partial, as only a portion of the overall system is assumed accessible to every agent. We suppose that any agent has access to the state of a neighbor only if a constraint on the distance between them is satisfied. If the communication network loses its connectivity, the system may not be able to reach the global objective. Thus, the primary problem underlying any cooperative task in the multiagent context is the connection topology preservation. Theoretical results on this topic, presented in [8], are recalled hereafter and applied in the following sections.

2.2 Feedback Design for Topology Preservation

Let us suppose that the initial interconnection topology is given by the graph \(G = (\mathscr {V}, \mathscr {E})\) where the vertex set is \(\mathscr {V} = \mathbb {N}_V\) and the connecting edge set \(\mathscr {E} \subseteq \mathscr {V} \times \mathscr {V}\) represents the set of pairs of agents that satisfy a distance-like condition. Precisely, given the real scalar \(r > 0\), \(d \in \mathbb {N}\) with \(d \le n\) and \(T \in \mathbb {R}^{d \times n}\) such that \(TT^\top \) is invertible, the initial edge set is given by

$$\begin{aligned} \mathscr {E} = \{(i,j)\in \mathbb {N}_V \times \mathbb {N}_V \mid \Vert T(x_i(0) - x_j(0))\Vert _2 \le r\}. \end{aligned}$$

The set of edges that must be preserved is denoted by \(\mathscr {N} \subseteq \mathscr {E}\). We suppose that every agent i knows the state of the j-th one if and only if \((i,j) \in \mathscr {N}\).
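
As an illustration, the initial edge set \(\mathscr {E}\) can be computed directly from the initial states. The sketch below assumes the states are stored row-wise in a NumPy array and uses 0-based agent indices; both are implementation choices, not part of the chapter.

```python
import numpy as np

def initial_edge_set(x0, T, r):
    """Pairs (i, j), i != j, such that ||T (x_i(0) - x_j(0))||_2 <= r."""
    V = x0.shape[0]
    return {(i, j) for i in range(V) for j in range(V)
            if i != j and np.linalg.norm(T @ (x0[i] - x0[j])) <= r}
```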

Definition 1

For all \(i\in \mathscr {V}\), we define the set of connected neighbors of the i-th agent as

$$\begin{aligned} \mathscr {N}_i = \{j \in \mathbb {N}_V : (i,j) \in \mathscr {N} \}. \end{aligned}$$

Given the set of connections \(\mathscr {N}\), the objective is to design a decentralized control law ensuring that none of these connections is lost. In other words, the aim is to design the state-dependent control actions \(u_i(k)\) independently of \(u_j(k)\), for all \(i, \, j\in \mathbb {N}_V\) and \(k \in \mathbb {N}\), such that every connection in \(\mathscr {N}\) is maintained.

As usual in multiagent systems, we consider the i-th input to be the sum of terms proportional to the distances between agent i and its neighbors. That is, denoting \(e_{l,m} = x_l - x_m\) for all \(l,m \in \mathbb {N}_V\), we define

$$\begin{aligned} u_i = \sum \limits _{j \in \mathscr {N}_i} K_{i,j} (x_i - x_j) = \sum \limits _{j \in \mathscr {N}_i} K_{i,j} e_{i,j}. \end{aligned}$$
(15.2)
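
A possible implementation of the decentralized control law (15.2) is sketched below; the containers chosen for the gains and for the neighbor sets are assumptions of this sketch.

```python
import numpy as np

def control_input(i, x, K, N):
    """u_i = sum over j in N_i of K_{i,j} (x_i - x_j), cf. (15.2).

    x: (V, n) array of agent states, K: dict {(i, j): (m, n) gain},
    N: dict {i: set of connected neighbors of agent i}."""
    m = next(iter(K.values())).shape[0]
    u_i = np.zeros(m)
    for j in N[i]:
        u_i += K[(i, j)] @ (x[i] - x[j])
    return u_i
```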

The design of each \(u_i\) thus reduces to the design of the controller gains \(K_{i,j}\), which are chosen such that the link \((i,j)\) is preserved. The dynamics of the ij system results in

$$\begin{aligned} e_{i,j}^+ = ( A + B K_{i,j} + B K_{j,i}) e_{i,j} + \sum \limits _{k \in \mathscr {N}_i}^{k \ne j} B K_{i,k} e_{i,k} - \sum \limits _{k \in \mathscr {N}_j}^{k \ne i} B K_{j,k} e_{j,k}, \end{aligned}$$
(15.3)

for all \(i, j \in \mathbb {N}_V\). It is not difficult to see that, in the centralized case, the dynamics of the error can be imposed by an adequate choice of \(u_i\), for all \(i\in \mathbb {N}_V\), provided that the agent dynamics are stabilizable.

The dynamics of the ij system is given by the matrix \(A + B K_{i,j} + B K_{j,i}\) if no perturbations due to the presence of other agents are present. Such perturbations, which complicate the decentralized control design, can be bounded within a set depending on the radius r and on the information on the neighbors common to the i-th and j-th agents.

Consider the sets

$$\begin{aligned} \begin{array}{l} \mathscr {N}_{i,j} = \mathscr {N}_i \cap \mathscr {N}_j, \\ \bar{\mathscr {N}}_{i,j} = \mathscr {N}_i \setminus ( \mathscr {N}_{i,j} \cup \{j\}), \\ \bar{\mathscr {N}}_{j,i}= \mathscr {N}_j \setminus ( \mathscr {N}_{i,j} \cup \{i\}), \end{array} \end{aligned}$$
(15.4)

then \(\mathscr {N}_{i,j}\) denotes the common neighbors of the i-th and the j-th agents and \(\bar{\mathscr {N}}_{i,j}\) the neighbors of the i-th one which are neither j nor one of its neighbors, analogously for \(\bar{\mathscr {N}}_{j,i}\). We define the cardinalities

$$\begin{aligned} \begin{array}{ll} N = 2 |\mathscr {N}_{i,j}| + 1,&\qquad \bar{N} = | \bar{\mathscr {N}}_{i,j} | + |\bar{\mathscr {N}}_{j,i}|, \end{array} \end{aligned}$$

where the indices are omitted here and in the following definitions to improve the readability. The dynamics of the ij system, perturbed by the noncommon neighbors, is

$$\begin{aligned} \!e_{i,j}^+ \! = \! (A + B K_{i,j} + B K_{j,i}) e_{i,j} \! + \, \sum \limits _{k \in \mathscr {N}_{i,j} } \!\!\!\!\!\;\;(B K_{i,k} e_{i,k} \! - B K_{j,k} e_{j,k} ) + w_{i,j}, \end{aligned}$$
(15.5)

with the bounded perturbation described by

$$\begin{aligned} w_{i,j} = \!\! \sum \limits _{k \in \bar{\mathscr {N}}_{i,j}} \!\!\;\;(B K_{i,k} e_{i,k} ) - \!\! \sum \limits _{l \in \bar{\mathscr {N}}_{j,i}} \!\!\;\;(B K_{j,l} e_{j,l} ). \end{aligned}$$
(15.6)
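
The sets in (15.4) and the cardinalities N and \(\bar{N}\) are straightforward to compute from the neighbor sets; a sketch, with the sets \(\mathscr {N}_i\) stored as Python sets, is given below.

```python
def neighbor_partition(i, j, N):
    """Sets of (15.4) together with the cardinalities N and N_bar."""
    N_ij = N[i] & N[j]                      # common neighbors of i and j
    Nbar_ij = N[i] - (N_ij | {j})           # neighbors of i that are neither j nor common
    Nbar_ji = N[j] - (N_ij | {i})           # neighbors of j that are neither i nor common
    return N_ij, Nbar_ij, Nbar_ji, 2 * len(N_ij) + 1, len(Nbar_ij) + len(Nbar_ji)
```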

The problem addressed in this work can be stated as follows:

Problem 1

Design a procedure to find, at each step, a condition on the decentralized control gains \(K_{i,j}\), with \(i,j \in \mathbb {N}_V\), such that the following algebraic relation is satisfied

$$\begin{aligned} \Vert T e_{i,j}^+\Vert _2 < r, \qquad \forall (i,j) \in \mathscr {N}, \end{aligned}$$
(15.7)

if the constraints

$$\begin{aligned} \begin{array}{ll} \Vert T e_{i,k} \Vert _2 \le r, \quad &{} \forall k \in \bar{\mathscr {N}}_{i,j},\\ \Vert T e_{j,k} \Vert _2 \le r, \quad &{} \forall k \in \bar{\mathscr {N}}_{j,i}, \end{array} \end{aligned}$$
(15.8)

hold.

In order to ease the presentation, we introduce different notations for the controller gains.

Definition 2

Denote by \(\mathrm{e} \in \mathbb {R}^{n N}\) the vector obtained by concatenating \(e_{i,j}\) with all \(e_{i,k}\) and \(e_{j,k}\) where \(k \in \mathscr {N}_{i,j}\). Denote by \({\check{\mathbf{K}}}_{i,j} \in \mathbb {R}^{m \times n ( N - 1 ) }\) the matrix obtained by concatenating \(K_{i,k}\) and \(-K_{j,k}\) where \(k \in \mathscr {N}_{i,j}\), and by \({\hat{\mathbf{K}}}_{i,j} \in \mathbb {R}^{m \times n \bar{N} }\) the matrix obtained by concatenating all \(K_{i,k}\) where \(k \in \bar{\mathscr {N}}_{i,j}\) and \(-K_{j,k}\) where \(k \in \bar{\mathscr {N}}_{j,i}\). We also define

$$\begin{aligned} \begin{array}{ll} \varDelta = T \ [A + B \check{K}_{i,j}, \ B {\check{\mathbf{K}}}_{i,j}] \ \mathscr {D}_N(T)^{\dagger } \in \mathbb {R}^{d \times d N}, \\ \varGamma = T B \, {\hat{\mathbf{K}}}_{i,j} \, \mathscr {D}_{\bar{N}}(T)^{\dagger } \in \mathbb {R}^{d \times d \bar{N}},\\ Z = \mathscr {D}_N(T) \, \mathrm{{e}} \in \mathbb {R}^{d N}, \\ \end{array} \end{aligned}$$
(15.9)

where \(\check{K}_{i,j} = K_{i,j} + K_{j,i}\).
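
The quantities of Definition 2 can be assembled numerically as follows; the sketch assumes that the gain blocks are already concatenated in the same order as the stacked error vector \(\mathrm{e}\), and that \(\bar{N} \ge 1\).

```python
import numpy as np
from scipy.linalg import block_diag

def delta_gamma_Z(A, B, T, K_sum, K_common, K_noncommon, e):
    """Delta, Gamma and Z of (15.9).

    K_sum:       K_{i,j} + K_{j,i}                                  (m x n)
    K_common:    [K_{i,k}, -K_{j,k}, ...], k in N_{i,j}             (m x n(N-1))
    K_noncommon: [K_{i,k}, ..., -K_{j,l}, ...], noncommon neighbors (m x n*N_bar)
    e:           stacked error vector                               (n*N,)
    """
    n = A.shape[0]
    N = e.size // n
    N_bar = K_noncommon.shape[1] // n
    DN_T = block_diag(*([T] * N))
    DNbar_T = block_diag(*([T] * N_bar))
    Delta = T @ np.hstack([A + B @ K_sum, B @ K_common]) @ np.linalg.pinv(DN_T)
    Gamma = T @ B @ K_noncommon @ np.linalg.pinv(DNbar_T)
    Z = DN_T @ e
    return Delta, Gamma, Z
```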

We recall here an important contribution presented in [8], namely the sufficient condition for the constraint (15.7) to hold.

Theorem 1

Problem 1 admits solutions if there exists \(\varLambda = \mathscr {D}(\lambda _1 I_d, ..., \lambda _{\bar{N}} I_d)\) with \(\lambda _k \ge 0\), for all \(k \in \mathbb {N}_{\bar{N}}\) such that

$$\begin{aligned} \left[ \begin{array}{ccc} r^2 - r^2 \!\! \sum \limits _{k \in \mathbb {N}_{\bar{N}}} \!\! \lambda _k &{} 0 &{} Z^\top \varDelta ^\top \\ 0 &{} \varLambda &{} \varGamma ^\top \\ \varDelta \, Z &{} \varGamma &{} I_d \end{array}\right] > 0. \end{aligned}$$
(15.10)

Furthermore, any solution \((\varDelta ,\varGamma )\) of the previous LMI defines admissible controller gains for Problem 1.

Proof

First notice that every solution of (15.10) satisfies also

$$\begin{aligned} \sum \limits _{k \in \mathbb {N}_{\bar{N}}} \lambda _k< 1, \qquad \varGamma ^\top \varGamma - \varLambda < 0, \end{aligned}$$
(15.11)

as the principal minors of a positive definite matrix are positive definite. Since (15.11) is a necessary condition for Problem 1 to admit a solution, there is no loss of generality in assuming it satisfied. Condition (15.7) is equivalent to

$$\begin{aligned}{}[Z^\top \, \bar{Z}^\top ] \left[ \begin{array}{cc} \varDelta ^\top \varDelta &{}\quad \varDelta ^\top \varGamma \\ \varGamma ^\top \varDelta &{}\quad \varGamma ^\top \varGamma \end{array} \right] \left[ \begin{array}{c} Z \\ \bar{Z} \end{array} \right] < r^2. \end{aligned}$$
(15.12)

This condition must be satisfied for every \(\bar{Z}\) such that

$$\begin{aligned} \bar{Z}^\top D_k \bar{Z} \le r^2, \qquad \forall k \in \mathbb {N}_{\bar{N}}, \end{aligned}$$
(15.13)

with

$$\begin{aligned} D_k = \mathrm{diag}(0_d, \, \ldots , \, 0_d, \, I_d, \, 0_d, \, \ldots , \, 0_d) \in \mathbb {R}^{d \bar{N} \times d \bar{N}}, \end{aligned}$$

holds. Applying the S-procedure, a sufficient condition for (15.7) to hold for every \(\bar{Z} \in \mathbb {R}^{d \bar{N}}\) satisfying (15.13) is the existence of \(\lambda _k \ge 0\), for all \(k \in \mathbb {N}_{\bar{N}}\), such that

$$\begin{aligned} \begin{array}{l} Z^\top \varDelta ^\top \varDelta \, Z + 2 \bar{Z}^\top \varGamma ^\top \varDelta \, Z + \bar{Z}^\top [ \varGamma ^\top \varGamma - \varLambda ] \bar{Z} < r^2 - r^2 \delta , \end{array} \end{aligned}$$
(15.14)

for every \(\bar{Z} \in \mathbb {R}^{d \bar{N}}\), with \(\delta = \sum \nolimits _{k \in \mathbb {N}_{\bar{N}}} \lambda _k\). From (15.11) and Z being known, the left-hand side of (15.14) is a concave function in \(\bar{Z}\) whose maximum is attained at

$$\begin{aligned} \bar{Z} = - (\varGamma ^\top \varGamma - \varLambda )^{-1} \varGamma ^\top \varDelta \, Z. \end{aligned}$$
(15.15)

Hence condition (15.14) holds for every \(\bar{Z} \in \mathbb {R}^{d \bar{N}}\) if and only if it is satisfied for the maximum of the function at left-hand side, that is if and only if

$$\begin{aligned} \begin{array}{c} Z^\top \varDelta ^\top \varDelta \, Z - Z^\top \varDelta ^\top \varGamma (\varGamma ^\top \varGamma - \varLambda )^{-1} \varGamma ^\top \varDelta \, Z < r^2 - r^2 \delta , \end{array} \end{aligned}$$
(15.16)

which is (15.14) evaluated at (15.15). Hence every \(\varLambda \), \(\varDelta \), and \(\varGamma \) satisfying conditions (15.11) and (15.16) ensures the satisfaction of \(\Vert T e_{i,j}^+ \Vert _2 < r\) for all \(\bar{Z}\) such that (15.13) holds. The condition (15.16) is equivalent to

$$\begin{aligned}&\left[ \begin{array}{cc} Z^\top \varDelta ^\top \varDelta \, Z - r^2 + r^2 \delta &{}\quad Z^\top \varDelta ^\top \varGamma \\ \varGamma ^\top \varDelta \, Z &{}\quad \varGamma ^\top \varGamma - \varLambda \end{array}\right]< 0 \\ \Leftrightarrow&\left[ \begin{array}{cc} Z^\top \varDelta ^\top \varDelta \, Z &{}\quad Z^\top \varDelta ^\top \varGamma \\ \varGamma ^\top \varDelta \, Z &{}\quad \varGamma ^\top \varGamma \end{array}\right]< \left[ \begin{array}{cc} r^2 - r^2 \delta &{} 0\\ 0 &{} \varLambda \end{array}\right] \\ \Leftrightarrow&\left[ \begin{array}{c} Z^\top \varDelta ^\top \\ \varGamma ^\top \end{array}\right] \left[ \begin{array}{cc} \varDelta \, Z&\varGamma \end{array}\right] < \left[ \begin{array}{cc} r^2 - r^2 \delta &{} 0\\ 0 &{} \varLambda \end{array}\right] \\ \Leftrightarrow&\left[ \begin{array}{ccc} r^2 - r^2 \delta &{}\ 0 &{}\ Z^\top \varDelta ^\top \\ 0 &{}\ \varLambda &{}\ \varGamma ^\top \\ \varDelta \, Z &{}\ \varGamma &{}\ I_d \end{array}\right] > 0. \end{aligned}$$

Thus (15.10) is equivalent to (15.14), a sufficient condition for (15.7) to hold.

The quantity \(\delta = \sum \nolimits _{k \in \mathbb {N}_{\bar{N}}} \lambda _k\) can be geometrically interpreted as a bound on the perturbation generated in the ij dynamics by the noncommon neighbors. Precisely, the effect of the noncommon neighbors can be modeled as a perturbation on the ij system bounded by an ellipsoid determined by \(T^\top T\) and of radius \(\sqrt{\delta } r\). Therefore, the condition \(\delta < 1\), implicitly imposed by (15.10), is necessary to ensure the preservation of the connection (ij).
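
Condition (15.10) is a standard LMI and can be checked with off-the-shelf semidefinite programming tools. The following CVXPY sketch treats \(\varDelta \) and \(\varGamma \) as free decision variables for a randomly generated Z; in the actual design they are parameterized by the controller gains through (15.9). All dimensions and data below are illustrative assumptions, not values from the chapter.

```python
import numpy as np
import cvxpy as cp

d, N, N_bar = 2, 3, 2                       # illustrative dimensions
r = 3.2
rng = np.random.default_rng(0)
Z = 0.5 * r * rng.standard_normal(d * N)    # placeholder stacked error data

Delta = cp.Variable((d, d * N))
Gamma = cp.Variable((d, d * N_bar))
lam = cp.Variable(N_bar, nonneg=True)

# Lambda = diag(lambda_1 I_d, ..., lambda_Nbar I_d)
Lambda = cp.bmat([[lam[i] * np.eye(d) if i == j else np.zeros((d, d))
                   for j in range(N_bar)] for i in range(N_bar)])

dz = cp.reshape(Delta @ Z, (d, 1))          # column vector Delta Z
top = cp.reshape(r**2 * (1 - cp.sum(lam)), (1, 1))
M = cp.bmat([[top,                      np.zeros((1, d * N_bar)), dz.T],
             [np.zeros((d * N_bar, 1)), Lambda,                   Gamma.T],
             [dz,                       Gamma,                    np.eye(d)]])

eps = 1e-6                                  # small margin to enforce strict definiteness
prob = cp.Problem(cp.Minimize(0), [M >> eps * np.eye(1 + d * N_bar + d)])
prob.solve()
print(prob.status)                          # 'optimal' means (15.10) is feasible for this Z
```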

2.3 Network Preservation with Common Feedback Gains

The condition presented in the previous subsection ensures that the algebraic constraint related to the ij system is satisfied at the successive time instant. No assurance of its satisfaction along the evolution of the overall system can be given, unless proper choices of \(K_{i,j}\) are made. In case the feedback gains are assumed to be the same for every agent and every ij system, a sufficient condition guaranteeing the network topology preservation at every future time instant can be posed.

Assumption 2

Given the system (15.1) with control input (15.2), assume that \(K_{i,j} = \bar{K}\) for all \((i,j) \in \mathscr {N}\).

The objective is to characterize the set of common feedback gains such that, if applied to control the multiagent system, they ensure that the value of \(\Vert T e_{i,j} \Vert _2\) does not increase for all \((i,j) \in \mathscr {N}\). In fact, this clearly implies that if the connection condition is satisfied by the initial condition, i.e., \(\Vert T e_{i,j}(0) \Vert _2 \le r\) for all \((i,j) \in \mathscr {N}\), then it also holds at every successive instant, and hence the network topology is preserved. Given the sets as in (15.4), define

$$\begin{aligned} N_M = \max \limits _{(i,j) \in \mathscr {N}} \{ | \mathscr {N}_{i} | + | \mathscr {N}_{j} | - 2 \}. \end{aligned}$$

Then, roughly speaking, \(N_M \in \mathbb {N}\) is the maximum, over \((i,j) \in \mathscr {N}\), of the total number of neighbors of the i-th and j-th agents, apart from the agents themselves.

Proposition 1

Let Assumption 2 hold. If there exists \(\lambda \in [0, 1]\) such that

$$\begin{aligned} \begin{array}{c} \left[ \begin{array}{cc} \lambda T^\top T &{} (A + 2 B \bar{K})^\top T^\top \\ T (A + 2 B \bar{K}) &{} \lambda I_d \end{array}\right] \ge 0, \\ \left[ \begin{array}{cc} (1- \lambda ) T^\top T &{} N_M \bar{K}^\top B^\top T^\top \\ N_M T B \bar{K} &{} (1 - \lambda ) I_d \end{array}\right] \ge 0, \end{array} \end{aligned}$$
(15.17)

then the systems given by (15.5) and (15.6) are such that \(\Vert T e_{i,j}^+ \Vert _2 \le r\) for all \((i,j) \in \mathscr {N}\) if \(e_{l,k} \in \mathbb {R}^n\) satisfies \(\Vert T e_{l,k} \Vert _2 \le r\) for all \((l,k) \in \mathscr {N}\).

Proof

Define the set \(\mathscr {B}_T = \{ e \in \mathbb {R}^n : \, \Vert T e \Vert _2 \le r\}\); then \(e \in \mathscr {B}_T\) if and only if \(e^\top T^\top T e \le r^2\). The first condition in (15.17) is equivalent to \((A + 2 B \bar{K})^\top T^\top T (A + 2 B \bar{K}) \le \lambda ^2 T^\top T\), which implies that \((A + 2 B \bar{K}) \mathscr {B}_T \subseteq \lambda \mathscr {B}_T\). From Assumption 2 one has that \(K_{i,j} = K_{j,i} = \bar{K}\), which means that \(A + 2 B \bar{K}\) is the dynamics of any ij system in the absence of the perturbation of the neighbors. Then the set \(\mathscr {B}_T\) is mapped into \(\lambda \mathscr {B}_T\) if no perturbation is present, that is, \((A + B K_{i,j} + B K_{j,i}) e_{i,j} \in \lambda \mathscr {B}_T\) for all \(e_{i,j} \in \mathscr {B}_T\). Analogously, the second condition in (15.17) is equivalent to \(N_M^2 \bar{K}^\top B^\top T^\top T B \bar{K} \le (1 - \lambda )^2 T^\top T\), which leads to \(\sum \nolimits _{k \in \mathbb {N}_{N_M}} B \bar{K} \mathscr {B}_T = N_M B \bar{K} \mathscr {B}_T \subseteq (1-\lambda ) \mathscr {B}_T\). This means that if \(e_{i,k} \in \mathscr {B}_T\) for all \(k \in \mathscr {N}_i \setminus \{j\}\) and \(e_{k,j} \in \mathscr {B}_T\) for all \(k \in \mathscr {N}_j \setminus \{i\}\), as implicitly assumed, then

$$\begin{aligned} \sum \limits _{k \in \mathscr {N}_{i,j} } \!\!\!\!\;\; (B \bar{K} e_{i,k} - B \bar{K} e_{j,k} ) + \!\!\!\! \sum \limits _{k \in \bar{\mathscr {N}}_{i,j}} \!\!\!\!\;\;(B \bar{K} e_{i,k} ) - \!\!\!\! \sum \limits _{l \in \bar{\mathscr {N}}_{j,i}} \!\!\!\!\;\;(B \bar{K} e_{j,l} ) \! \in \! (1-\lambda ) \mathscr {B}_T, \end{aligned}$$

for all \((i,j) \in \mathscr {N}\). From properties of the Minkowski set addition, we have \(e_{i,j}^+ \in \lambda \mathscr {B}_T + (1 - \lambda ) \mathscr {B}_T = \mathscr {B}_T\), if \(e_{l,k} \in \mathscr {B}_T\) for all \((l,k) \in \mathscr {N}\), which ends the proof.

Corollary 1

Let Assumption 2 hold. If there exist \(\lambda \in [0, 1]\) and \(\bar{\lambda } > 0\) such that

$$\begin{aligned} \begin{array}{c} \left[ \begin{array}{cc} (\lambda - \bar{\lambda }) T^\top T &{} (A + 2 B \bar{K})^\top T^\top \\ T (A + 2 B \bar{K}) &{} (\lambda - \bar{\lambda }) I_d \end{array}\right] \ge 0, \\ \left[ \begin{array}{cc} (1- \lambda ) T^\top T &{} N_M \bar{K}^\top B^\top T^\top \\ N_M T B \bar{K} &{} (1 - \lambda ) I_d \end{array}\right] \ge 0, \end{array} \end{aligned}$$

then the systems given by (15.5) and (15.6) are such that

$$\begin{aligned} \Vert T e_{i,j}^+ \Vert _2 \le (1 - \bar{\lambda }) \Vert T e_{i,j} \Vert _2, \end{aligned}$$

for all \((i,j) \in \mathscr {N}\) if \(e_{l,k} \in \mathbb {R}^n\) satisfies \(\Vert T e_{l,k} \Vert _2 \le r\) for all \((l,k) \in \mathscr {N}\).
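
Both matrix inequalities in (15.17) are jointly linear in \(\lambda \) and \(\bar{K}\), so a common gain, when one exists, can be searched for with a semidefinite programming solver. The CVXPY sketch below only shows how the conditions can be posed; A, B, T and \(N_M\) are illustrative assumptions, and for a given network the problem may turn out to be infeasible.

```python
import numpy as np
import cvxpy as cp

n, m, d = 4, 2, 2                            # illustrative dimensions
t = 0.1                                      # assumed sampling period
Abar = np.array([[1.0, t], [0.0, 1.0]])
Bbar = np.array([[0.0], [1.0]])
A = np.kron(np.eye(2), Abar)
B = np.kron(np.eye(2), Bbar)
T = np.array([[1.0, t, 0.0, 0.0],
              [0.0, 0.0, 1.0, t]])
N_M = 3                                      # assumed maximum neighbor count

lam = cp.Variable()
Kbar = cp.Variable((m, n))
Acl = A + 2 * B @ Kbar

M1 = cp.bmat([[lam * (T.T @ T), Acl.T @ T.T],
              [T @ Acl,         lam * np.eye(d)]])
M2 = cp.bmat([[(1 - lam) * (T.T @ T), N_M * Kbar.T @ B.T @ T.T],
              [N_M * T @ B @ Kbar,    (1 - lam) * np.eye(d)]])

prob = cp.Problem(cp.Minimize(lam), [M1 >> 0, M2 >> 0, lam >= 0, lam <= 1])
prob.solve()
print(prob.status, lam.value)
```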

3 Applications to Decentralized Control of Multiagent Systems

In this section we illustrate the application of our results, published in [8, 9] and recalled above, for controlling the multiagent system presented in the first part of Sect. 15.2. Different strategies (based on optimal and predictive control) to achieve the collaborative objectives are presented hereafter and numerically implemented.

Let us consider the double-integrator dynamics on the plane, that is, (15.1) with \(x_i = \![p_{i}^x(k), \, v_{i}^x(k), \, p_i^y(k), \, v_{i}^y(k)]^{\top }\), the input \(u_i = [u_i^x, \, u_i^y]^{\top }\) and

$$\begin{aligned} A = \left[ \begin{array}{cc} \bar{A} &{} 0 \\ 0 &{} \bar{A} \end{array} \right] , \quad B = \left[ \begin{array}{cc} \bar{B} &{} 0 \\ 0 &{} \bar{B} \end{array} \right] , \text{ where } \bar{A} = \left[ \begin{array}{cc} 1 &{} t\\ 0 &{} 1 \end{array} \right] , \quad \bar{B} = \left[ \begin{array}{c} 0 \\ 1 \end{array} \right] . \end{aligned}$$
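
For a given sampling period t (an assumed value below), these matrices can be built as follows.

```python
import numpy as np

t = 0.1                                      # sampling period (assumed value)
Abar = np.array([[1.0, t],
                 [0.0, 1.0]])
Bbar = np.array([[0.0],
                 [1.0]])
A = np.kron(np.eye(2), Abar)                 # block diag(Abar, Abar): the x and y axes are decoupled
B = np.kron(np.eye(2), Bbar)
```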

We denote \(p_{i,j}^x = p_i^x - p_j^x\), \(v_{i,j}^x = v_i^x - v_j^x\), \(p_{i,j}^y = p_i^y - p_j^y\), \(v_{i,j}^y = v_i^y - v_j^y\) and

$$\begin{aligned} e_{i,j} = [p_{i,j}^x, \ v_{i,j}^x, \ p_{i,j}^y, \ v_{i,j}^y]^{\top } = x_i - x_j, \end{aligned}$$
(15.18)

and \(u_{i,j} = [u_i^x - u_j^x, \, u_i^y - u_j^y]^{\top }\). The control inputs are given by (15.2) and Definition 2 with feedback gains

$$\begin{aligned} \check{K}_{i,j} = \left[ \begin{array}{cccc} k_{i,j}^{p^x} &{} k_{i,j}^{v^x} &{} 0 &{} 0 \\ 0 &{} 0 &{} k_{i,j}^{p^y} &{} k_{i,j}^{v^y} \end{array} \right] , \end{aligned}$$
(15.19)

for all \((i,j) \in \mathscr {N}\). Once a value for \(\check{K}_{i,j}\) is obtained, we define the nominal selection \(K_{i,j} = K_{j,i} = 0.5 \check{K}_{i,j}\) for all \((i,j) \in \mathscr {N}\).

Moreover, the following constraint on the norm of \(\check{K}_{i,j}\) is imposed

$$\begin{aligned} \check{K}_{i,j}^{\top } \check{K}_{i,j} \le I_n, \end{aligned}$$
(15.20)

to limit the effect of the control of the ij nominal system on the neighbors. Recall, in fact, that the perturbation on the neighbors of the agents i and j depends on their states and on the gains \(K_{i,j}\) and \(K_{j,i}\).

3.1 Topology Preservation Constraint

We suppose that the distance between two agents must be smaller than or equal to r to allow them to communicate. Thus the topology preservation problem consists of upper-bounding by r the Euclidean distance between connected neighbors. The constraint to preserve on the state of the ij system is

$$\begin{aligned} p_{i,j}^x(k)^2 + p_{i,j}^y(k)^2 \le r^2. \end{aligned}$$
(15.21)

Notice that the inputs \(u_i^x\) and \(u_i^y\) applied at time k have no influence on \(p_{i}^x\) and \(p_i^y\) at time \(k+1\). Thus, any algebraic condition involving the positions \(p_{i}^x, \, p_i^y\) of the systems at \(k+1\) would not depend on the control action \(u_i^x, \, u_i^y\) at time k. From the computational point of view, every constraint concerning only the agents' positions would lead to LMI conditions independent of the variable \(\check{K}_{i,j}\). Then the results provided in Theorem 1 are not directly applicable in this case for the state at time \(k+1\). On the other hand, the controls \(u_i^x(k), u_i^y(k)\) affect the position (and the velocity) at time \(k+2\), and a condition on the feedback gain \(\check{K}_{i,j}\) ensuring the preservation of the (i,j) connection at time \(k+2\) can be posed. The distance constraint can be imposed on the states at \(k+2\), as nothing can be done at time k in order to prevent its violation at time \(k+1\). Then a constraint on \(e_{i,j}(k)\) can be determined, characterizing in terms of the matrix T the region of the state space such that \(p_{i,j}^x(k)^2 + p_{i,j}^y(k)^2 \le r^2\) and \(p_{i,j}^x(k+1)^2 + p_{i,j}^y(k+1)^2 \le r^2\). Since the former constraint does not involve the input, only \(p_{i,j}^x(k+1)^2 + p_{i,j}^y(k+1)^2 \le r^2\) is taken into account for the control design.

Proposition 2

The condition (15.21) holds at time \(k+2\) if and only if we have that \(\Vert T e_{i,j}(k+1) \Vert _2 \le r\) with

$$\begin{aligned} T = \left[ \begin{array}{cccc} 1 &{} t &{} 0 &{} 0\\ 0 &{} 0 &{} 1 &{} t \end{array} \right] . \end{aligned}$$
(15.22)

Proof

The region of the space of \(e_{ij}(k)\) such that the topology constraint (15.21) is satisfied at \(k+1\) is given by \(p_{ij}^x(k+1)^2 +p_{ij}^y(k+1)^2 \le r^2\), which is equivalent to \(\Vert T e_{ij}(k) \Vert _2 \le r\) for T as in (15.22). Hence, imposing that the system error state belongs to such a region at \(k+1\) assures that the distance between the i-th and j-th agents is smaller than or equal to r at \(k+2\), i.e., that the topology is preserved at \(k+2\). Then \(p_{ij}^x(k+2)^2 + p_{ij}^y(k+2)^2 \le r^2\) if and only if

$$\begin{aligned} \Vert T e_{ij}(k+1)\Vert _2 = \Vert T (A e_{ij}(k) + B u_{ij}(k)) \Vert _2 \le r, \end{aligned}$$

with T as in (15.22).

Proposition 2, then, implies that the topology preservation constraint for time \(k+2\) can be expressed in terms of \(e_{i,j}(k)\) and the input \(u_{i,j}(k)\). The results presented in Theorem 1, with T as in (15.22), allow us to characterize the set of feedback gains ensuring the satisfaction of the distance constraint at \(k+2\), for every pair of connected neighbors i and j. Such a set depends on the current state \(e_{i,j}(k)\) and on the gains designed to compensate the errors and enforce the topology preservation.
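
In code, the matrix T of (15.22) and the check behind Proposition 2 take the following form; t, r, and the variable names are assumptions of this sketch.

```python
import numpy as np

t, r = 0.1, 3.2
T = np.array([[1.0, t, 0.0, 0.0],
              [0.0, 0.0, 1.0, t]])           # matrix T of (15.22)

def connection_kept_at_k_plus_2(A, B, e_ij, u_ij):
    """(15.21) holds at k+2 iff ||T e_ij(k+1)||_2 = ||T (A e_ij(k) + B u_ij(k))||_2 <= r."""
    return np.linalg.norm(T @ (A @ e_ij + B @ u_ij)) <= r
```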

3.2 Relevant Multiagents Applications

Among the local feedback gains which guarantee the connection preservation, different selection criteria can be applied, depending on the collaborative task to be achieved. Hereafter three popular criteria are illustrated and analyzed.

3.2.1 Full State Consensus

The first criterion is to select the feedback gain, among those satisfying (15.10), to achieve full state agreement. In other words, the objective in this case is to both steer all the agents to the same point and align all the velocities without losing any connection. One possibility is to compute at every sampling instant the matrix \(\check{K}_{i,j}\) minimizing a weighted sum of the nominal values of the position distance at \(k+2\) and of the speed difference at \(k+1\). By nominal values, we mean the values of positions and speeds in the absence of the perturbation on the ij system due to the other agents. Then, given the positive weighting parameters \(q_p, \, q_v \in \mathbb {R}\), the cost to minimize is

$$\begin{aligned} \begin{array}{l} Q_c(e_{i,j}(k), \check{K}_{i,j}) = q_p (p_{i,j}^x(k+2)^2 + p_{i,j}^y(k+2)^2) \\ \qquad +\, q_v (v_{i,j}^x(k+1)^2 + v_{i,j}^y(k+1)^2). \end{array} \end{aligned}$$
(15.23)

Proposition 3

Any optimal solution of the convex optimization problem

$$\begin{aligned} \begin{array}{l} \min \limits _{\varDelta , \, \varGamma , \, \varLambda , \, \check{K}_{i,j}, M} \ \ e_{i,j}(k)^{\top } M^{\top } M e_{i,j}(k) \\ \begin{array}{ll} \text { s.t. } &{} (15.9), (15.10), (15.19),\\ &{} \left[ \begin{array}{cc} I_n &{} \check{K}_{i,j}^{\top } \\ \check{K}_{i,j} &{} I_m \end{array} \right] \ge 0, \end{array} \end{array} \end{aligned}$$
(15.24)

with

$$\begin{aligned} M = \left[ \begin{array}{cccc} q_p &{} q_p t &{} 0 &{} 0\\ 0 &{} q_v &{} 0 &{} 0\\ 0 &{} 0 &{} q_p &{} q_p t\\ 0 &{} 0 &{} 0 &{} q_v \end{array} \right] (A + B \check{K}_{i,j}), \end{aligned}$$
(15.25)

and T as in (15.22), minimizes the cost (15.23) subject to the norm gain constraint (15.20) and the distance constraints (15.21) at \(k+2\).
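
A simplified CVXPY sketch of the design problem (15.24) is given below. It keeps the cost \(e^{\top } M^{\top } M e\) with M as in (15.25), the gain structure (15.19) and the norm constraint (15.20) in Schur-complement form, while the topology-preservation LMI (15.10), built from \(\check{K}_{i,j}\) through (15.9), would have to be appended for a faithful implementation; t, \(q_p\), \(q_v\) and the current error e are assumed values.

```python
import numpy as np
import cvxpy as cp

t, q_p, q_v = 0.1, 10.0, 1.0
Abar = np.array([[1.0, t], [0.0, 1.0]])
Bbar = np.array([[0.0], [1.0]])
A = np.kron(np.eye(2), Abar)
B = np.kron(np.eye(2), Bbar)
e = np.array([-2.0, 0.0, 1.0, 0.0])               # current error e_ij(k) (assumed)

k = cp.Variable(4)                                 # [k^{p^x}, k^{v^x}, k^{p^y}, k^{v^y}]
K_check = cp.bmat([[cp.reshape(k[0:2], (1, 2)), np.zeros((1, 2))],
                   [np.zeros((1, 2)), cp.reshape(k[2:4], (1, 2))]])   # structure (15.19)

W = np.array([[q_p, q_p * t, 0.0, 0.0],
              [0.0, q_v,     0.0, 0.0],
              [0.0, 0.0,     q_p, q_p * t],
              [0.0, 0.0,     0.0, q_v]])
M = W @ (A + B @ K_check)                          # matrix M of (15.25), affine in k

norm_gain = cp.bmat([[np.eye(4), K_check.T],
                     [K_check,   np.eye(2)]]) >> 0  # Schur form of (15.20)

# NOTE: the LMI (15.10) of Theorem 1 should be added to the constraint list here.
prob = cp.Problem(cp.Minimize(cp.sum_squares(M @ e)), [norm_gain])
prob.solve()
print(prob.status, k.value)
```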

3.2.2 Partial State Consensus: Flocking

An alternative objective, often considered in the framework of decentralized control, is partial state consensus, that is, steering a part of the state \(e_{i,j}\) to zero, for all \((i,j) \in \mathscr {N}\). To preserve the connectivity of \(G = (\mathscr {V}, \mathscr {N})\) while the speed differences converge to zero, the cost to minimize is a measure of the difference between the neighbors' speeds, for instance

$$\begin{aligned} \begin{array}{l} Q_f(e_{i,j}(k), \check{K}_{i,j}) = v_{i,j}^x(k+1)^2 + v_{i,j}^y(k+1)^2. \end{array} \end{aligned}$$
(15.26)

This is achieved by solving a convex optimization problem analogous to (15.24), as stated in the proposition below. The proof is omitted since it is similar to that of Proposition 3.

Proposition 4

Any optimal solution of the convex optimization problem (15.24) with

$$\begin{aligned} M = \left[ \begin{array}{cccc} 0 &{} 1 &{} 0 &{} 0\\ 0 &{} 0 &{} 0 &{} 1 \end{array} \right] (A + B \check{K}_{i,j}), \end{aligned}$$
(15.27)

and T as in (15.22), minimizes the cost (15.26) subject to the norm gain constraint (15.20) and the distance constraints (15.21) at \(k+2\).

Clearly, an appropriate change of the matrix M would permit the regulation of different parts of the state of the ij system, as well as of any linear combination of the state.
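
With respect to the consensus sketch given after Proposition 3, only the weighting matrix changes: a velocity selector replaces W (the names below are ours, not the chapter's).

```python
import numpy as np

S_v = np.array([[0.0, 1.0, 0.0, 0.0],
                [0.0, 0.0, 0.0, 1.0]])       # selects the velocity components of e_ij

def flocking_M(A, B, K_check):
    """Matrix M of (15.27); K_check may be a NumPy array or a CVXPY expression."""
    return S_v @ (A + B @ K_check)
```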

3.2.3 Predictive Control

Finally, we present another interesting optimization criterion. One of the most popular control techniques suitable for dealing with hard constraints is predictive control. These control strategies exploit the prediction of the system evolution and the receding horizon strategy to react in advance, preventing constraint violations and avoiding the potentially dangerous regions of the state space. Moreover, the control input that would generate the optimal trajectory, among the admissible ones, is usually computed and applied. In general, the longer the prediction horizon, the higher the capability of avoiding unsafe regions and constraint violations. Based on this idea, we propose to optimize a measure of the future state position, in order to react in advance and prevent the states from approaching the limits of the distance constraints. In particular, we minimize a measure of the nominal distance between the positions of the i-th and j-th agents at time \(k+3\) as a function of the input gain at time k, that is, \((p_{i,j}^x(k+2) + t v_{i,j}^x(k+1) )^2 + (p_{i,j}^y(k+2) + t v_{i,j}^y(k+1) )^2\). The control horizon can be extended to values higher than 3, but the predicted state \(e_{i,j}(k+N)\) would depend on the future inputs and the cost would result in a nonconvex function of \(\check{K}_{i,j}\). A simplifying hypothesis can be posed to obtain a suboptimal control strategy with greater prediction capability. Let us denote the horizon by \(N_p \in \mathbb {N}\) and suppose that only the nominal control action \(u_{i,j}(k) = \check{K}_{i,j} e_{i,j}(k)\) is applied, i.e., \(u_{i,j}(k+p) = 0\) for \(p \in \mathbb {N}_{N_p}\). The minimization of the nominal position at \(k+N_p\), i.e.,

$$\begin{aligned} \begin{array}{l} Q_p(e_{i,j}(k), \check{K}_{i,j}) = p_{i,j}^x(k+N_p)^2 + p_{i,j}^y(k+N_p)^2, \end{array} \end{aligned}$$
(15.28)

leads to a suboptimal control with high predictive power.

Proposition 5

Any optimal solution of the convex optimization problem (15.24) with

$$\begin{aligned} M = T + (N_p -1) t \left( \left[ \begin{array}{cccc} 0 &{} 1 &{} 0 &{} 0\\ 0 &{} 0 &{} 0 &{} 1 \end{array} \right] + \check{K}_{i,j} \right) , \end{aligned}$$
(15.29)

and T as in (15.22), minimizes the cost (15.28) subject to the norm gain constraint (15.20) and the distance constraints (15.21) at \(k+2\).
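
The weighting matrix (15.29) is again affine in \(\check{K}_{i,j}\), so the cost (15.28) remains convex and the same optimization scheme applies; a sketch of its construction is given below (the function and variable names are ours).

```python
import numpy as np

def predictive_M(T, K_check, t, N_p):
    """Matrix M of (15.29); affine in K_check, so the predictive cost stays convex."""
    S_v = np.array([[0.0, 1.0, 0.0, 0.0],
                    [0.0, 0.0, 0.0, 1.0]])   # velocity selector
    return T + (N_p - 1) * t * (S_v + K_check)
```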

The benefits of the prediction-based strategy will be highlighted in the numerical examples section.

4 Numerical Examples

Consider the six interconnected agents with the initial conditions given in [13] and connected by the minimal robust graph computed in the same work. That is, \(\mathscr {N} = \{ (1,2), \, (2,3), \, (3,4), \, (4,5), \, (5,6)\}\), \(r = 3.2\) and initial conditions:

$$\begin{aligned} \begin{array}{l} x_1(0) = \left[ -4 \ \ -v_0 \ \ 3 \ \ 0 \right] ^{\top }, \quad x_6(0) = \left[ 4 \ \ v_0 \ \ 3 \ \ 0 \right] ^{\top }, \\ x_2(0) = \left[ -2 \ \ -v_0 \ \ 2 \ \ 0 \right] ^{\top }, \quad x_5(0) = \left[ 2 \ \ v_0 \ \ 2 \ \ 0 \right] ^{\top }, \\ x_3(0) = \left[ -1 \ \ -v_0 \ \ 0 \ \ 0 \right] ^{\top }, \quad x_4(0) = \left[ 1 \ \ v_0 \ \ 0 \ \ 0 \right] ^{\top }, \\ \end{array} \end{aligned}$$

where \(v_0\) is used as a parameter to analyze the maximal initial speed that can be dealt with by the different control strategies. It is noteworthy that, as shown in [13], for the classical consensus algorithm the preservation of the minimal robust graph is guaranteed up to a critical speed value \(v_c\simeq 0.23\). Nevertheless, it is numerically shown that this sufficient condition is conservative, since for \(v_0=1.5v_c\) (generating approximately a 4 times higher global velocity disagreement) the robust graph is not broken. We also note that the classical consensus algorithm is not able to preserve the connectivity when the global disagreement is 5 times larger than the one guaranteeing the consensus (i.e., \(v_0 > 2.1 v_c\)).

In the sequel, we show that our design allows a considerable increase of the initial speed value (and consequently of the initial global disagreement) while avoiding the loss of connections. Let us first give the initial error vectors between the states of the neighbors:

$$\begin{aligned} \begin{array}{c} e_{1,2}(0) = \left[ -2 \ \ 0 \ \ 1 \ \ 0 \right] ^{\top } \!\!, \quad e_{5,6}(0) = \left[ -2 \ \ 0 \ \ -1 \ \ 0 \right] ^{\top } \!\!, \\ e_{2,3}(0) = \left[ -1 \ \ 0 \ \ 2 \ \ 0 \right] ^{\top } \!\!, \quad e_{4,5}(0) = \left[ -1 \ \ 0 \ \ -2 \ \ 0 \right] ^{\top } \!\!, \\ e_{3,4}(0) = \left[ -2 \ \ -2 v_0 \ \ 0 \ \ 0 \right] ^{\top } \!\!. \ \end{array} \end{aligned}$$

4.1 Flocking

The control problem formulated in Sect. 15.3 has admissible solutions for \(v_0=19v_c\), while the connection between the third and the fourth agents is lost for \(v_0=20v_c\), as shown in Fig. 15.2. It is worth noting that the control acts like springs between the agents' velocities (compare the bottom of Figs. 15.1, 15.2 and 15.3). First, the control cancels the speed difference between neighbors with opposite velocities, creating a speed disagreement in both symmetric branches of the graph. Next, it cancels the disagreement between the 2nd and the 3rd agents and between the 4th and the 5th ones, mimicking a gossiping procedure where the choice of the active communication link is given by the error between the neighbors' speeds. Doing so, either the flocking is reached before the connectivity is lost, or the graph splits into two groups that independently agree on two different velocity values.

Fig. 15.1 Flocking: trajectories and errors of the 12 system

Fig. 15.2 Flocking: errors of the 23 and the 34 systems

Fig. 15.3 Flocking: errors of the 45 and the 56 systems

4.2 Full State Consensus

The control problem formulated in Sect. 15.3 with \(q_p = 10, \ q_v = 1\) has admissible solutions for \(v_0 = 23 v_c\), as shown in Fig. 15.4.

Fig. 15.4 Consensus: trajectories and errors of the 34 system

Fig. 15.5 Predictive control: trajectories and errors of the 34 system

4.3 Predictive Control Strategies

The control problem formulated in Sect. 15.3 with \(N_p = 3\) works for \(v_0 = 21 v_c\), but the trajectories are far from ideal. The behavior is largely improved with \(N_p = 21\); see Fig. 15.5, representing the trajectories and the time evolution of the 34 dynamics for \(v_0 = 28 v_c\). Notice how the position error of the critical system, the 34 one, approaches the bound while avoiding the constraint violation, even for an initial speed much higher than those used for the other approaches, i.e., \(v_0 = 28 v_c\). Furthermore, the evolutions and trajectories present a much smoother and more regular behavior. All these desirable properties are due to the predictive capability of the approach, which permits the controller to react to possible violations and to prevent undesired situations in advance.

5 Conclusion and Further Works

This chapter provides an LMI-based methodology to design controllers that preserve a given network topology in multiagent applications. Precisely, we suppose that the agents have limited communication capability and have to stay within a given range in order to preserve their neighbors. We show that each agent can preserve all its neighbors by applying a controller obtained by solving a specific LMI. On the other hand, different convex optimization problems have been posed in order to pursue several classical objectives in the multiagent context, such as consensus, flocking, and predictive control. The numerical simulations show the effectiveness of the method with respect to other existing techniques.

We note that the main applications provided in the chapter concern fleets of autonomous vehicles. Thus, the size of the associated network does not represent an obstacle for the numerical treatment by LMIs. Moreover, we can choose the network to be preserved as a very sparse one. Consequently, the number of low-order LMIs to be solved is of the same order as the network size.