1 Introduction

The objective of this paper is to present mathematical models, specifications, notions, and tools for the design of algorithms for networked hybrid dynamical systems. A network of such systems is defined as multiple agents running algorithms that are allowed to share information over a network so as to fulfill a given design specification. The mathematical models of the agents, the algorithms, and the network are all given in terms of hybrid inclusions. In the autonomous case, a hybrid inclusion is given by

$$\begin{aligned} \begin{aligned} \dot{x}&\in F(x) \qquad x\in C\\ x^+&\in G(x) \qquad x \in D, \end{aligned} \end{aligned}$$
(16.1)

where x is the state. This model allows the state to change continuously according to the constrained differential inclusion in (16.1) during flows and, at jumps, change discretely according to the constrained difference inclusion in (16.1). With such a general model, the agents may have states that evolve continuously and discretely, the models of the algorithms can have logic statements and conditions under which their response changes, and the models of the network may capture the conditions triggering communication events for agents to exchange information over the network. Due to the combination of heterogeneous continuous and discrete dynamics, the analysis of the resulting system as well as the design of algorithms and system parameters to satisfy a particular design specification cannot be carried out with tools for purely continuous-time or discrete-time systems.
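To make the roles of the data (C, F, D, G) in (16.1) concrete, the flow/jump evolution just described can be simulated with a simple event loop. The sketch below is not from the chapter: it assumes single-valued data and uses the classic bouncing ball (height and vertical velocity, gravity g, restitution lam) as the hybrid system, with a forward-Euler step for flows and jump priority on the overlap of C and D.

```python
def simulate_hybrid(x, F, G, in_C, in_D, dt=1e-4, t_end=3.0, max_jumps=50):
    """Forward-Euler simulation of (16.1) with single-valued data;
    jumps take priority whenever the state is in the jump set D."""
    t, j = 0.0, 0
    while t < t_end and j < max_jumps:
        if in_D(x):
            x = G(x)                                       # jump: x+ = G(x)
            j += 1
        elif in_C(x):
            x = [xi + dt * fi for xi, fi in zip(x, F(x))]  # Euler step of x' = F(x)
            t += dt
        else:
            break                                          # neither flow nor jump allowed
    return x, t, j

# Bouncing ball: x = (height, velocity), illustrative parameters
g, lam = 9.81, 0.8
F = lambda x: (x[1], -g)                  # flow map on C = {x : x1 >= 0}
G = lambda x: [0.0, -lam * x[1]]          # jump map on D = {x : x1 <= 0, x2 < 0}
in_C = lambda x: x[0] >= 0.0
in_D = lambda x: x[0] <= 0.0 and x[1] < 0.0

xf, tf, jumps = simulate_hybrid([1.0, 0.0], F, G, in_C, in_D)
print(jumps > 0)   # True: the ball bounces several times over the horizon
```

Jump priority is only one way to resolve points in the overlap of C and D; a solver for genuine hybrid inclusions would instead branch over all flows and jumps allowed at such points.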

Several unique features of networked hybrid dynamical systems make their analysis and design challenging. These features include the unavoidable effect of the network, which, in most cases, does not allow continuous exchange of information; perturbations; and the inherently hybrid dynamics of the agents. More precisely:

  1.

    Distributed agents with hybrid dynamics: the intervals of time over which the state of each agent changes continuously may differ among agents. The time instants at which the states of the agents change discretely may also not coincide. In fact, the assumption that all of the agents flow and jump at the same time might be too restrictive.

  2.

    Asynchronous communication events at unknown times: the time instants at which agents exchange information may not be synchronized, meaning that each agent may receive information at different time instants. Furthermore, as in the previous item, the amount of ordinary time elapsed between communication events may differ from agent to agent; for instance, one agent can receive information at a much faster rate than others. In addition, the exact times at which information is exchanged may not be known a priori.

  3.

    Lack of full information at the same time: the information about the states of the neighboring agents may not be available at the same time. In fact, most realistic models of networks would not provide information continuously, but rather at isolated time instants. To meet certain design specifications, such a constraint may require algorithms that can cope with limited information, both in terms of its value and of the times at which it is received.

  4.

    Perturbations in the dynamics, parameters, and measurements: the lack of knowledge of the actual models of each component of a network of hybrid systems would prevent one from compensating for their effects at the design stage. Designs that are robust to perturbations such as measurement noise, unmodeled dynamics, and delays are mandatory.

Section 16.2 of this paper pertains to the modeling of networked hybrid dynamical systems. A mathematical model of each of the agents is introduced first. Each agent is modeled as a hybrid system, similar to (16.1). Such a model is general enough to allow for nonlinear, nonautonomous, set-valued, and heterogeneous dynamics with solutions that evolve continuously and, at times, jump. Hybrid dynamical models capturing the mechanisms behind the networks connecting the agents are presented next. The hybrid dynamics in these models capture the discrete nature of communication events in digital networks. These models are also modular, permitting their use in the definition of interconnections between the agents in the network, whose topology is defined by a graph. Finally, a general model of hybrid algorithms for the control of the agents is also given in terms of hybrid inclusions as in (16.1).

The interconnection between the agents, the networks, and the algorithms defines a closed-loop system that, after appropriate design, is to meet certain given specifications. With such a model at hand, Sect. 16.3 introduces specifications that are of typical interest in networked systems problems. These specifications are given in terms of the dynamical properties of the resulting closed-loop system. The property of all agents converging to a desired relative configuration, typically referred to as formation, is introduced as the property that solutions converge (in the limit or in finite time) to the set of points defining the formation. Synchronization is defined as the property that all solutions (or some of their components) converge to each other, which we call asymptotic synchronization; when this convergence holds with stability, we refer to it as stable synchronization. In addition, specifications that capture safety and security are also presented.

With the specifications introduced in Sect. 16.3, notions and tools that can be used to satisfy the given specifications are introduced in Sect. 16.4. The notions include asymptotic stability, finite time convergence, forward invariance, and robustness. Due to space constraints, we provide pointers to the literature of hybrid dynamical systems where formal statements and further applications of these notions and tools can be found. These methods have been recently used to solve problems pertaining to certain classes of networked hybrid dynamical systems, specifically, to solve state estimation [28, 29], consensus [53], synchronization [48, 49, 51, 52], and security [54] problems over networks. Section 16.5 provides a summary of some of these applications.

2 Networked Hybrid Dynamical Systems

In this section, we introduce a general model of N networked hybrid systems. A graph defines the network structure, in particular, the nodes and the communication links between them. Each node in the graph corresponds to an agent with general hybrid dynamics. The exchange of information between the agents is also modeled as a hybrid system, in particular, to capture the times at which communication events occur. Each agent is controlled by an algorithm that may also be hybrid.

2.1 Agents

For each \(i \in {\mathscr {V}} := \{1,2,\dots ,N\}\), the ith agent is modeled as a hybrid system \({\mathscr {H}}^a_i\) with data \((C^a_i,F^a_i,D^a_i, G^a_i, E^a_i,H^a_i)\) and given by the hybrid inclusion with inputs and outputs

$$\begin{aligned} \begin{aligned} \dot{z}_i&\in F^a_i(z_i,u_i) \qquad (z_i,u_i) \in C^a_i\\ z_i^+&\in G^a_i(z_i,u_i) \qquad (z_i,u_i) \in D^a_i\\ y_i&\in H^a_i(z_i,u_i) \qquad (z_i,u_i) \in E^a_i, \end{aligned} \end{aligned}$$
(16.2)

where \(z_i\in \mathbb {R}^{n^a_i}\) is the state, \(u_i \in \mathbb {R}^{m^a_i}\) the input, and \(y_i \in \mathbb {R}^{p^a_i}\) the output of the ith agent. The set-valued map \(F^a_i\) is the flow map capturing the continuous dynamics, and \(C^a_i\) defines the flow set on which flows are allowed. The set-valued map \(G^a_i\) defines the jump map and models the discrete behavior, and \(D^a_i\) defines the jump set, which is where jumps are allowed. The set \(E^a_i\) defines the output set. A solution to \({\mathscr {H}}^{a}_i\) is given by a pair \((\phi _i,u_i)\) parametrized by \((t,j) \in \mathbb {R}_{\geqslant 0} \times \mathbb {N}\), where t denotes ordinary time and j denotes jump time. The domain \(\mathop {\mathrm{dom}}\nolimits (\phi _i,u_i) \subset \mathbb {R}_{\geqslant 0} \times \mathbb {N} \) is a hybrid time domain if for every \((T,J) \in \mathop {\mathrm{dom}}\nolimits (\phi _i,u_i)\), the set \(\mathop {\mathrm{dom}}\nolimits (\phi _i,u_i)\cap ([0,T]\times \{0,1, \dots , J\})\) can be written as \(\cup _{j=0}^{J}( I_j \times \{ j \})\), where \(I_j := [t_j,t_{j+1}]\) for a time sequence \(0=t_0\leqslant t_1\leqslant t_2\leqslant \cdots \leqslant t_{J} \leqslant t_{J+1}\). The \(t_j\)’s with \(j>0\) define the time instants when the state of the hybrid system jumps and j counts the number of jumps. The set \({\mathscr {S}}_{{\mathscr {H}}^a_i}\) contains all maximal solutions to \({\mathscr {H}}^a_i\), and the set \({\mathscr {S}}_{{\mathscr {H}}^a_i}(\xi )\) contains all maximal solutions to \({\mathscr {H}}^a_i\) with initial condition \(\xi \).

Example 16.1

A widely studied problem in the literature of multi-agent systems is the problem of controlling the state of point-mass systems over a network to reach consensus. In such a case, the dynamics of the agents are simply \(\dot{z}_i = u_i\) for each \(i \in {\mathscr {V}}\), where \(z_i, u_i \in \mathbb {R}^{n^a_i}\) for some \(n^a_i=m^a_i\). Certainly, such dynamics can be modeled as shown in (16.2) by choosing \(F^a_i(z_i,u_i):=u_i\), \(G^a_i(z_i,u_i)\) arbitrary, \(C^a_i = \mathbb {R}^{n^a_i}\times \mathbb {R}^{n^a_i}\), and \(D^a_i\) empty. The model in (16.2) also makes it possible to include constraints on the state and the input of each agent. For instance, if the input of the agent is constrained to \(|u_i|\leqslant \overline{u}\) for some \(\overline{u}>0\), then the flow set can be defined as \(C^a_i = \mathbb {R}^{n^a_i}\times \left\{ u_i \in \mathbb {R}^{m^a_i}\ :\ |u_i|\leqslant \overline{u}\right\} \). More interestingly, the model in (16.2) permits capturing agents with point-mass hybrid dynamics, such as

$$\begin{aligned} \dot{z}_i&=u_{i,1} =: F^a_i(z_i,u_i) \end{aligned}$$

during flows and

$$\begin{aligned} z_i^+&=u_{i,2} =: G^a_i(z_i,u_i) \end{aligned}$$

at jumps, where \(u_i=(u_{i,1},u_{i,2})\). In such a model, the conditions on the state and the input imposed by the flow and jump sets would determine when the input \(u_{i,1}\) affecting the flows is active, and when the input \(u_{i,2}\) assigning the state after jumps is active.   \(\blacksquare \)

Example 16.2

Synchronization of the state of nonlinear continuous-time systems of the form \(\dot{z}_i = f_i(z_i,u_i)\) emerges in many problems in science and engineering. Such an agent model is captured by defining \(F^a_i(z_i,u_i):=f_i(z_i,u_i)\), \(G^a_i(z_i,u_i)\) arbitrary, \(C^a_i = \mathbb {R}^{n^a_i}\times \mathbb {R}^{m^a_i}\), and \(D^a_i\) empty. More interestingly, the model in (16.2) allows for jumps in the state that can emerge due to hybrid dynamics in the agents themselves. The mathematical models of impulse-coupled oscillators used in the literature to capture the dynamics of populations of fireflies and neurons exhibit such dynamics; see, e.g., [40]. For instance, one such model consists of a scalar state \(z_i\) of each oscillator taking values in the compact set [0, T], where \(T > 0\) is a parameter, and that, during flows, increases monotonically toward T. During this regime, the change of \(z_i\) is governed by the autonomous system \(\dot{z}_i = f_i(z_i)\), and the state \(z_i\) is constrained to [0, T]. Upon reaching the threshold T, the state \(z_i\) self-resets to zero. Furthermore, when agents that are neighbors of the ith agent self-reset their states to zero, they trigger a reset of the state \(z_i\) to a value that may depend on the state of the ith agent and of its neighbors. Letting \(u_i\) be the input to the ith agent, which is to be assigned to a function of the state of the neighbors so as to externally reset \(z_i\) as just described, the change of \(z_i\) at self-triggered jumps is \(z_i^+ = 0\), while at externally triggered jumps it is \( z_i^+ = g_i(z_i,u_i) \). An agent model as in (16.2) is given by

$$\begin{aligned} \begin{aligned} \dot{z}_i&= f_i(z_i) =: F^a_i(z_i,u_i)\qquad \qquad \qquad \qquad \quad (z_i,u_i) \in [0,T] \times \mathbb {R}^{m_i^a} =: C^a_i, \\ z_i^+&\in G^a_i(z_i,u_i) := \left\{ \begin{array}{ll} 0 &{} \text { if } z_i = T, u_i \notin D^e_i \\ g_i(z_i,u_i) &{} \text { if } z_i \in [0,T), u_i \in D^e_i \\ \{0,g_i(z_i,u_i)\} &{} \text { if } z_i =T, u_i \in D^e_i \end{array} \right. \\&\qquad \qquad \qquad \qquad \qquad (z_i,u_i) \in (\{T\}\times \mathbb {R}^{m_i^a}) \cup ([0,T] \times D_i^e) =: D^a_i,\\ y_i&= z_i =: H^a_i(z_i,u_i) \qquad \qquad \qquad \qquad \quad (z_i,u_i) \in [0,T]\times \mathbb {R}^{m_i^a} =: E^a_i. \end{aligned} \end{aligned}$$

In this model, the events are triggered when \(z_i=T\) or when \(u_i\) is equal to a particular value or, more generally, belongs to an appropriately defined set describing the conditions that externally reset \(z_i\). The latter set is denoted as \(D_i^e\) in the model above. Note that when both reset conditions occur simultaneously, the jump map of the agent is set-valued, meaning that either of the two possible resets may occur.   \(\blacksquare \)
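As a rough illustration of Example 16.2, the sketch below simulates two impulse-coupled oscillators under assumed data \(f_i(z_i) = 1\) and \(g_i(z_i,u_i) = \min \{z_i + \varepsilon , T\}\) (a Mirollo–Strogatz-type bump); the threshold, coupling strength, and discretization are illustrative choices, not quantities from the chapter.

```python
def simulate_oscillators(z, T=1.0, eps=0.3, dt=1e-3, t_end=10.0):
    """Euler simulation of impulse-coupled oscillators with f_i(z) = 1:
    on reaching T an oscillator self-resets to 0 and bumps its neighbors."""
    fires = []                           # (time, index) of self-resets
    t = 0.0
    while t < t_end:
        z = [zi + dt for zi in z]        # flows: dz_i/dt = f_i(z_i) = 1
        for i in range(len(z)):
            if z[i] >= T:                # self-triggered jump: z_i+ = 0
                z[i] = 0.0
                fires.append((t, i))
                for k in range(len(z)):  # externally triggered jumps of neighbors
                    if k != i:
                        z[k] = min(z[k] + eps, T)
        t += dt
    return z, fires

z_final, fires = simulate_oscillators([0.0, 0.5])
print(len(fires) > 0)   # True: both oscillators keep firing over the horizon
```

A bumped neighbor that lands exactly on the threshold fires at the next step, loosely mimicking the set-valued branch of the jump map above.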

2.2 Networks

A directed graph (digraph) is defined as \(\varGamma = ({\mathscr {V}}, {\mathscr {E}}, {\mathscr {G}})\). The nodes of the digraph are indexed by the elements of \({\mathscr {V}}\) and the edges are pairs in the set \({\mathscr {E}} \subset {\mathscr {V}} \times {\mathscr {V}}\). Each edge directly links two different nodes, i.e., an edge from i to k, denoted by (i, k), implies that agent i can send information to agent k. The adjacency matrix of the digraph \(\varGamma \) is denoted by \({\mathscr {G}} \in \mathbb {R}^{N\times N}\), whose entries \(g_{ik}\) take values in \(\{0,1\}\) according to the connectivity map: \(g_{ik} =1\) if \((i,k)\in {\mathscr {E}}\), and \(g_{ik}=0\) otherwise. The set of indices corresponding to the neighbors that can send information to the ith agent is denoted by \({\mathscr {N}}(i):=\{k\in {\mathscr {V}}: (k,i) \in {\mathscr {E}} \}\). The in-degree and out-degree of agent i are defined by \(d^{\text { in}}_i = \sum _{k=1}^N g_{ki}\) and \(d^{\text { out}}_i = \sum _{k=1}^N g_{ik}\), respectively. The in-degree matrix \(\mathscr {D}\) is the diagonal matrix with entries \(D_{ii}=d^{\text { in}}_i\) for all \(i\in {\mathscr {V}}\). The Laplacian matrix of the digraph \(\varGamma \), denoted by \({\mathscr {L}} \in \mathbb {R}^{N\times N}\), is defined as \({\mathscr {L}}={\mathscr {D}} - {\mathscr {G}}\). A digraph is said to be

  • weight balanced if, at each node \(i \in {\mathscr {V}}\), the out-degree and in-degree are equal; i.e., for each \(i \in {\mathscr {V}}\), \(d^{\text { out}}_i = d^{\text { in}}_i\);

  • completely connected if there is an edge between every pair of distinct vertices; that is, \(g_{ik} = 1\) for each \(i,k \in {\mathscr {V}}\), \(i \ne k\);

  • strongly connected if any two different nodes of the digraph can be connected via a path that traverses the directed edges of the digraph.
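The graph quantities just defined can be checked on a small example. The sketch below (illustrative; nodes are 0-indexed) builds the adjacency matrix, the in- and out-degrees, and the Laplacian \({\mathscr {L}} = {\mathscr {D}} - {\mathscr {G}}\) for a directed 3-cycle, which is strongly connected and weight balanced:

```python
N = 3
E = [(0, 1), (1, 2), (2, 0)]   # a directed 3-cycle

# Adjacency matrix: g[i][k] = 1 iff (i,k) is an edge, i.e., i can send to k
g = [[0.0] * N for _ in range(N)]
for (i, k) in E:
    g[i][k] = 1.0

d_in = [sum(g[k][i] for k in range(N)) for i in range(N)]    # d_i^in  = sum_k g_ki
d_out = [sum(g[i][k] for k in range(N)) for i in range(N)]   # d_i^out = sum_k g_ik
L = [[(d_in[i] if i == k else 0.0) - g[i][k] for k in range(N)]
     for i in range(N)]                                      # L = D - G

print(d_in == d_out)                   # True: the cycle is weight balanced
print(all(sum(row) == 0.0 for row in L))  # True: rows of L sum to zero here
```

With the in-degree convention above, the row sums of \({\mathscr {L}}\) equal \(d^{\text { in}}_i - d^{\text { out}}_i\), so they vanish precisely because this digraph is weight balanced.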

In most applications involving networks, the transfer of information between neighboring agents is driven by events. The events triggering communication between neighboring agents may depend on state, input, or output information, or on a local quantity. The following general hybrid system model, denoted \({\mathscr {H}}^{\text {net}}_{ik}\), is used to trigger such events for each \((i,k) \in {\mathscr {E}}\):

$$\begin{aligned} \begin{aligned} \dot{\mu }_{ik}&\in F^{\text {net}}_{ik}(\mu _{ik},\omega _{ik}) \qquad (\mu _{ik},\omega _{ik}) \in C^{\text {net}}_{ik}\\ \mu _{ik}^+&\in G^{\text {net}}_{ik}(\mu _{ik},\omega _{ik}) \qquad (\mu _{ik},\omega _{ik}) \in D^{\text {net}}_{ik} \\ \chi _{ik}&\in H^{\text {net}}_{ik}(\mu _{ik},\omega _{ik}) \qquad (\mu _{ik},\omega _{ik}) \in E^{\text {net}}_{ik}, \end{aligned} \end{aligned}$$
(16.3)

where \(\mu _{ik} \in \mathbb {R}^{n^{\text { net}}_{ik}}\) is a state variable associated to the communication of information from agent i to agent k, \(\omega _{ik} \in \mathbb {R}^{m^{\text { net}}_{ik}}\) is its input, which might be assigned to information that agent i has to transmit to agent k as well as state variables in agent i that determine whether \(\mu _{ik}\) should evolve continuously or discretely, and \(\chi _{ik} \in \mathbb {R}^{p^{\text { net}}_{ik}}\) is its output, which includes the information that is transmitted from agent i to agent k. The hybrid model \({\mathscr {H}}^{\text {net}}_{ik}\) is general enough to capture most communication mechanisms or protocols in the literature. The following sample-and-hold mechanism defines perhaps the simplest version of a model to trigger communication of information from agent i to agent k.

Example 16.3

(Periodic communication events with memory) The simplest event-driven communication protocol is perhaps one that collects information and transmits it periodically. Let \(T>0\) denote the period for the events. A mechanism that, after the first event, updates the information provided by the network every T seconds can be modeled as in (16.3) for each \((i,k)\in {\mathscr {E}}\). Let \(\tau _{ik}\) denote a timer state that triggers the communication events and let \(\ell _{ik}\) be a memory state that stores the information at those events. Then, defining the state of (16.3) as \(\mu _{ik} = (\tau _{ik},\ell _{ik})\), the following model captures the network described above:

$$\begin{aligned} \begin{aligned} \dot{\mu }_{ik}&= \begin{bmatrix}\dot{\tau }_{ik} \\ \dot{\ell }_{ik}\end{bmatrix} = \begin{bmatrix}1 \\ 0\end{bmatrix} \qquad \ \ \text { when } \tau _{ik} \in [0,T]\\ \mu _{ik}^+&= \begin{bmatrix}{\tau }^+_{ik} \\ {\ell }^+_{ik}\end{bmatrix} = \begin{bmatrix}0 \\ \omega _{ik}\end{bmatrix} \qquad \text { when } \tau _{ik} = T, \end{aligned} \end{aligned}$$
(16.4)

where \(\omega _{ik}\) is the input to the network, which has the information to communicate, and the output is \(\chi _{ik} = \ell _{ik}\). Then, the data of \({\mathscr {H}}^{\text {net}}_{ik}\) is given by

$$\begin{aligned} F^{\text {net}}_{ik}(\mu _{ik},\omega _{ik}):= & {} \begin{bmatrix}1 \\ 0\end{bmatrix}\\ C^{\text {net}}_{ik}:= & {} \left\{ (\mu _{ik},\omega _{ik})\ :\ \tau _{ik} \in [0,T]\right\} \\ G^{\text {net}}_{ik}(\mu _{ik},\omega _{ik}):= & {} \begin{bmatrix}0 \\ \omega _{ik}\end{bmatrix} \\ D^{\text {net}}_{ik}:= & {} \left\{ (\mu _{ik},\omega _{ik})\ :\ \tau _{ik} = T\right\} \\ H^{\text {net}}_{ik}(\mu _{ik},\omega _{ik}):= & {} \ell _{ik} \\ E^{\text {net}}_{ik}:= & {} \left\{ (\mu _{ik},\omega _{ik})\ :\ \tau _{ik} \in [0,T]\right\} . \end{aligned}$$

A network model in which collection and transmission of information do not occur simultaneously can be obtained by adding a timer and a memory state to the model above. In such a model, one of the timers, denoted as \(\tau _{ik,1}\), triggers the events every \(T_1\) seconds, at which events the input \(\omega _{ik}\) is stored in a memory state, denoted as \(\ell _{ik,1}\). The other timer, denoted as \(\tau _{ik,2}\), triggers the events every \(T_2\) seconds, at which events the memory state assigning the output, denoted \(\ell _{ik,2}\), is updated to the value of \(\omega _{ik}\) recorded in \(\ell _{ik,1}\). A model as in (16.3) capturing such a mechanism has state \(\mu _{ik} = (\tau _{ik,1},\ell _{ik,1},\tau _{ik,2},\ell _{ik,2})\) and data

$$\begin{aligned} F^{\text {net}}_{ik}(\mu _{ik},\omega _{ik}):= & {} (1,0,1,0) \\ C^{\text {net}}_{ik}:= & {} \left\{ (\mu _{ik},\omega _{ik})\ :\ \tau _{ik,1} \in [0,T_1],\tau _{ik,2} \in [0,T_2]\right\} \\ G^{\text {net}}_{ik}(\mu _{ik},\omega _{ik}):= & {} \left\{ \begin{array}{ll} (0, \omega _{ik}, \tau _{ik,2}, \ell _{ik,2}) &{} \text { if } \tau _{ik,1} = T_1,\tau _{ik,2} \in [0,T_2), \\ (\tau _{ik,1}, \ell _{ik,1}, 0, \ell _{ik,1}) &{} \text { if } \tau _{ik,1} \in [0, T_1),\tau _{ik,2} = T_2, \\ \left\{ (0, \omega _{ik}, \tau _{ik,2}, \ell _{ik,2}),(\tau _{ik,1},\ell _{ik,1},0,\ell _{ik,1})\right\} &{} \\ &{} \text { if } \tau _{ik,1} = T_1,\tau _{ik,2} = T_2, \end{array} \right. \\ D^{\text {net}}_{ik}:= & {} \left\{ (\mu _{ik},\omega _{ik})\ :\ \tau _{ik,1} = T_1\right\} \cup \left\{ (\mu _{ik},\omega _{ik})\ :\ \tau _{ik,2} = T_2\right\} \\ H^{\text {net}}_{ik}(\mu _{ik},\omega _{ik}):= & {} \ell _{ik,2} \\ E^{\text {net}}_{ik}:= & {} \left\{ (\mu _{ik},\omega _{ik})\ :\ \tau _{ik,1} \in [0,T_1], \tau _{ik,2} \in [0,T_2]\right\} . \end{aligned}$$

The jump set \(D^{\text {net}}_{ik}\) of this hybrid system captures the two possible events, which are when \(\tau _{ik,1} = T_1\) or when \(\tau _{ik,2} = T_2\), and the jump map \(G^{\text {net}}_{ik}\) resets the state variables according to which event has occurred. Note that when both events happen simultaneously, the jump map is set-valued. In general, the parameters \(T_1\) and \(T_2\) in the models above may depend on each agent, in which case they will be denoted as \(T_1^i\) and \(T_2^i\).   \(\blacksquare \)
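A quick way to see the behavior of the first model in Example 16.3 is to simulate it. The sketch below (with an illustrative discretization and input signal, not from the chapter) evolves the timer \(\tau _{ik}\) and memory \(\ell _{ik}\) of (16.4); the output is piecewise constant and is refreshed only at the periodic events:

```python
import math

def sample_and_hold(omega, T=0.5, dt=1e-3, t_end=2.0):
    """Simulate (16.4): the timer tau flows at rate 1; on tau = T it resets
    to 0 and the memory state ell latches the current input omega(t)."""
    tau, ell, t = 0.0, 0.0, 0.0
    trace = []
    while t < t_end:
        if tau >= T:                 # jump: tau+ = 0, ell+ = omega
            tau, ell = 0.0, omega(t)
        else:                        # flow: dtau/dt = 1, dell/dt = 0
            tau += dt
            t += dt
        trace.append((t, ell))       # output chi = ell is piecewise constant
    return trace

trace = sample_and_hold(lambda t: math.sin(t))
held = sorted({round(ell, 6) for (_, ell) in trace})
print(len(held))   # a few distinct held values: one per event, plus the initial 0
```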

While the models in Example 16.3 capture the key property that information transmitted over networks is typically only available at isolated time instants, they make the assumption that transmissions occur periodically. The following model relaxes that assumption by allowing consecutive communication events to occur within a window of finite length. For simplicity, this extension is carried out for the model in (16.4) and without a memory state. An extension for the case with memory states and two timers follows similarly.

Example 16.4

(Aperiodic communication events) The first model in Example 16.3 guarantees that every solution has a hybrid time domain defined by a sequence \(t_0 = 0 \leqslant t_1< t_2< t_3 < \ldots \) satisfying

$$ t_{j+1} - t_j = T $$

for all \(j > 0\) such that \((t,j)\) is in the domain of the solution. When the time between consecutive events is not constant, but each event is known to occur no later than \(T_2\) seconds and no sooner than \(T_1\) seconds after the previous one, the sequence of times \(\{t_j\}\) would satisfy

$$\begin{aligned} t_{j+1} - t_j \in [T_1, T_2] \end{aligned}$$
(16.5)

for all \(j > 0\) such that \((t,j)\) is in the domain of the solution. The parameters \(T_1\) and \(T_2\) are such that \(T_2 \geqslant T_1 > 0\). In principle, the event times \(t_j\) can be thought of as being determined by a random variable taking values in the interval \([T_1, T_2]\). The following model generates solutions satisfying (16.5) by exploiting nondeterministic behavior due to an overlap between the flow and jump sets:

$$\begin{aligned} \dot{\tau }_{ik}&= 1&\tau _{ik}&\in [0, T_2] \end{aligned}$$
(16.6a)
$$\begin{aligned} \tau _{ik}^+&= 0&\tau _{ik}&\in [T_1,T_2] . \end{aligned}$$
(16.6b)

In fact, whenever the timer state \(\tau _{ik}\) is in \([T_1,T_2)\), both flows and jumps are possible, meaning that there exist solutions that jump and solutions that flow when \(\tau _{ik}\) is equal to any point in that set. A model as in (16.3) capturing such a mechanism has state \(\mu _{ik} = \tau _{ik}\), and flow and jump maps/sets given by

$$\begin{aligned} F^{\text {net}}_{ik}(\mu _{ik},\omega _{ik}):= & {} 1 \\ C^{\text {net}}_{ik}:= & {} \left\{ (\mu _{ik},\omega _{ik})\ :\ \tau _{ik} \in [0,T_2]\right\} \\ G^{\text {net}}_{ik}(\mu _{ik},\omega _{ik}):= & {} 0 \\ D^{\text {net}}_{ik}:= & {} \left\{ (\mu _{ik},\omega _{ik})\ :\ \tau _{ik} \in [T_1,T_2]\right\} . \end{aligned}$$

An alternative model that generates solutions satisfying (16.5) but, instead, through a set-valued jump map is given by

$$\begin{aligned} \dot{\tau }_{ik}&= -1&\tau _{ik}&\in [0, T_2] \end{aligned}$$
(16.7a)
$$\begin{aligned} \tau _{ik}^+&\in [T_1, T_2]&\tau _{ik}&= 0. \end{aligned}$$
(16.7b)

In this model, the communication events are triggered by a timer \(\tau _{ik}\) that decreases and, upon reaching zero, is reset to a point in \([T_1, T_2]\). The hybrid system (16.7) can be captured by (16.3) by choosing the state as \(\mu _{ik} = \tau _{ik}\), and flow and jump maps/sets given by

$$\begin{aligned} F^{\text {net}}_{ik}(\mu _{ik},\omega _{ik}):= & {} -1 \\ C^{\text {net}}_{ik}:= & {} \left\{ (\mu _{ik},\omega _{ik})\ :\ \tau _{ik} \in [0,T_2]\right\} \\ G^{\text {net}}_{ik}(\mu _{ik},\omega _{ik}):= & {} [T_1,T_2] \\ D^{\text {net}}_{ik}:= & {} \left\{ (\mu _{ik},\omega _{ik})\ :\ \tau _{ik} =0\right\} . \end{aligned}$$

   \(\blacksquare \)
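The aperiodic model (16.7) can be exercised numerically by resolving the set-valued jump map \([T_1,T_2]\) with a random draw, as suggested above. The sketch below (the parameter values and random seed are illustrative) generates event times and verifies that every inter-event interval satisfies (16.5):

```python
import random

def event_times(T1=0.2, T2=0.5, t_end=5.0, seed=1):
    """Generate event times consistent with (16.7): a timer flows down at
    rate -1 and, upon reaching zero, is reset to a random point of [T1, T2]."""
    rng = random.Random(seed)
    tau = rng.uniform(T1, T2)       # initial timer value
    t, times = 0.0, []
    while t < t_end:
        t += tau                    # the timer reaches zero after tau seconds
        times.append(t)             # a communication event occurs
        tau = rng.uniform(T1, T2)   # jump: tau+ drawn from [T1, T2]
    return times

ts = event_times()
gaps = [b - a for a, b in zip(ts, ts[1:])]
print(all(0.2 - 1e-9 <= gap <= 0.5 + 1e-9 for gap in gaps))  # True: (16.5) holds
```

The uniform draw is just one selection of solutions; any rule that picks points of \([T_1, T_2]\) at the jumps yields a solution of (16.7).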

Network mechanisms and protocols that employ timers, memory states, and logic can be fit in the network model (16.3); for instance, TCP/IP [7], wireless Ethernet, and Bluetooth protocols [64] can all be modeled with such a hybrid model.

2.3 Algorithms

For each \(i \in {\mathscr {V}}\), the algorithm associated to the ith agent is modeled as a hybrid system \({\mathscr {H}}^K_i\) with data \((C^K_i,F^K_i,D^K_i, G^K_i ,E^K_i, H^K_i)\) and given by the hybrid inclusion with inputs and outputs

$$\begin{aligned} \begin{aligned} \dot{\eta }_i&\in F^K_i(\eta _i,v_i) \qquad (\eta _i,v_i) \in C^K_i\\ \eta _i^+&\in G^K_i(\eta _i,v_i) \qquad (\eta _i,v_i) \in D^K_i\\ \zeta _i&\in H^K_i(\eta _i,v_i) \qquad (\eta _i,v_i) \in E^K_i, \end{aligned} \end{aligned}$$
(16.8)

where \(\eta _i\in \mathbb {R}^{n^K_i}\) is the state, \(v_i \in \mathbb {R}^{m^K_i}\) the input, and \(\zeta _i \in \mathbb {R}^{p^K_i}\) the output of the ith algorithm. As for \({\mathscr {H}}^a_i\) in (16.2), the set-valued map \(F^K_i\) is the flow map capturing the continuous dynamics and \(C^K_i\) defines the flow set on which flows are allowed. The set-valued map \(G^K_i\) defines the jump map and models the discrete behavior, and \(D^K_i\) defines the jump set, which is where jumps are allowed. The set \(E^K_i\) defines the output set. A solution to the algorithm \({\mathscr {H}}^K_i\) can also be defined, as done for \({\mathscr {H}}^a_i\).

Example 16.5

Algorithms that, at isolated time instants, measure the received data and compute a feedback control law to be applied to the agent can be modeled as the algorithm in (16.8). Consider an algorithm whose sampling events are triggered when one of its inputs reaches a particular value; at each such event, it uses the information in another of its inputs to compute the control law and stores the result in a memory state whose value is applied to the agent. Let \(v_i = (v_{i,1},v_{i,2})\) be the input to the algorithm, where \(v_{i,1}\) is the input triggering the events and \(v_{i,2}\) is the input with the information needed to compute the control law. Suppose that \(v_{i,1}\) reaching zero triggers the computation events. Let \(\ell _i\) be a state variable that, at the events, stores the value of the feedback control law, which is given by the function \(\kappa \), and in between events remains constant. The discrete dynamics of the algorithm are

$$ \ell _i^+ = \kappa (v_{i,2}) $$

which are active when \(v_{i,1} = 0\). The continuous dynamics of the algorithm are simply

$$ \dot{\ell }_i = 0 $$

which, in principle, are active when \(v_{i,1} > 0\). In this way, the state \(\ell _i\) operates as a memory state. This algorithm is given by \({\mathscr {H}}_i^K\) as in (16.8) with state \( \eta _i = \ell _i \), input \( v_i = (v_{i,1},v_{i,2}) \), and data given by

$$\begin{aligned} F^K_i(\eta _i,v_i)= & {} 0\\ C^K_i= & {} \left\{ (\eta _i,v_i)\ :\ v_{i,1} >0\right\} \\ G^K_i(\eta _i,v_i)= & {} \kappa (v_{i,2})\\ D^K_i= & {} \left\{ (\eta _i,v_i)\ :\ v_{i,1} = 0\right\} \\ H^K_i(\eta _i,v_i)= & {} \ell _i \end{aligned}$$

and \(E^K_i\) the entire state and input space of \({\mathscr {H}}_i^K\).

The model for \({\mathscr {H}}_i^K\) also captures the dynamics behind an algorithm that does not trigger computations synchronously with the arrival of information. The computation events in such an algorithm could be triggered by an internal state, at which events the last piece of information received is used to compute the feedback law. Such a mechanism can be modeled using a memory state that stores the information received, a memory state that stores the computed feedback law, and a state that triggers the events. Denote these state variables as \(\ell _{i,1}\), \(\ell _{i,2}\), and \(\tau _{i}\), respectively, which define the state of the algorithm \(\eta _i=(\ell _{i,1},\ell _{i,2},\tau _{i})\). As in the model with a single memory state given above, the memory state \(\ell _{i,1}\) stores the information in \(v_{i,2}\) when the input \(v_{i,1}\) reaches zero. Let \(\widetilde{\gamma }\) be a function that, when zero, triggers the computation of the feedback control law, which is denoted as \(\kappa \). Then, the discrete dynamics of \(\tau _{i}\) are active when

$$ \widetilde{\gamma }(\eta _i) = 0. $$

At each jump, the discrete dynamics update \(\tau _{i}\) according to

$$ \tau _{i}^+ = \rho ^d_i(\eta _i), $$

where \(\rho ^d_i\) is a function to be defined, and \(\ell _{i,2}\) according to

$$ \ell _{i,2}^+ = \kappa (\ell _{i,1}). $$

We assume that flows of \(\tau _{i}\) are active when

$$ \widetilde{\gamma }(\eta _i) \geqslant 0 $$

and are governed by

$$ \dot{\tau }_{i} = \rho ^c_i(\eta _i). $$

The function \(\rho ^c_i\) is assumed to not allow flows that remain in \(\widetilde{\gamma }(\eta _i) = 0\). The memory state \(\ell _{i,2}\) remains constant during flows. This algorithm is given by \({\mathscr {H}}_i^K\) as in (16.8) with state \( \eta _i = (\ell _{i,1},\ell _{i,2},\tau _{i}) \), input \( v_i = (v_{i,1},v_{i,2}) \), and data given by

$$\begin{aligned} F^K_i(\eta _i,v_i)= & {} (0, 0, \rho ^c_i(\eta _i))\\ C^K_i= & {} \left\{ (\eta _i,v_i)\ :\ v_{i,1}>0, \widetilde{\gamma }(\eta _i) \geqslant 0\right\} \\ G^K_i(\eta _i,v_i)= & {} \left\{ \begin{array}{ll} (v_{i,2},\ell _{i,2},\tau _{i}) &{} \text { if } v_{i,1}=0, \widetilde{\gamma }(\eta _i)> 0 \\ (\ell _{i,1}, \kappa (\ell _{i,1}), \rho ^d_i(\eta _i)) &{} \text { if } v_{i,1}>0, \widetilde{\gamma }(\eta _i) = 0\\ \{(v_{i,2},\ell _{i,2},\tau _{i}),(\ell _{i,1}, \kappa (\ell _{i,1}), \rho ^d_i(\eta _i))\} &{} \text { if } v_{i,1}=0, \widetilde{\gamma }(\eta _i) = 0 \end{array}\right. \\ D^K_i= & {} \left\{ (\eta _i,v_i)\ :\ v_{i,1} = 0\right\} \cup \left\{ (\eta _i,v_i)\ :\ \widetilde{\gamma }(\eta _i) = 0\right\} \\ H^K_i(\eta _i,v_i)= & {} \ell _{i,2} \end{aligned}$$

and \(E^K_i\) the entire state and input space of \({\mathscr {H}}_i^K\).   \(\blacksquare \)
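The first algorithm of Example 16.5 can be sketched as follows. The simulation below is illustrative: the feedback law \(\kappa (v) = -k v\), the event times at which \(v_{i,1}\) reaches zero, and the signal assigned to \(v_{i,2}\) are assumptions made for this example, not data from the chapter. The memory state \(\ell _i\) is held constant during flows and overwritten at the events:

```python
def run_algorithm(v2_signal, events, kappa, t_end=1.0, dt=1e-3):
    """Flow: d(ell)/dt = 0; jump: ell+ = kappa(v_{i,2}) whenever the
    triggering input v_{i,1} reaches zero (here, at the given event times)."""
    ell, t = 0.0, 0.0
    pending = list(events)              # times at which v_{i,1} = 0
    history = []
    while t < t_end:
        if pending and t >= pending[0]:
            ell = kappa(v2_signal(t))   # jump of the memory state
            pending.pop(0)
        else:
            t += dt                     # flow: memory state held constant
        history.append((t, ell))
    return history

k = 2.0                                 # hypothetical feedback gain
hist = run_algorithm(v2_signal=lambda t: t, events=[0.25, 0.5, 0.75],
                     kappa=lambda v: -k * v)
held = sorted({round(e, 3) for (_, e) in hist})
print(len(held))   # the initial value 0 plus one held value per event
```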

The model \({\mathscr {H}}_i^K\) is general enough to allow for multimode, event-triggered, and prediction-based algorithms.

2.4 Closed-Loop System

Given a digraph \(\varGamma \), the interconnection between the agents, algorithms, and network models results in a hybrid system. Assuming that, for each \(i\in {\mathscr {V}}\), the input \(u_i\) of the ith agent is assigned to the output \(\zeta _i\) of the ith algorithm, that the input \(v_i\) of the ith algorithm is assigned to a function of \(y_i\) and of the outputs of the networks connected to it, namely, \(\{\chi _{ki}\}_{k\in {\mathscr {N}}(i)}\), and that, for each \(k\in {\mathscr {N}}(i)\), the input \(\omega _{ik}\) is assigned to \(y_{i}\), the interconnection between these hybrid systems leads to an autonomous hybrid system \({\mathscr {H}}\) of the form

$$\begin{aligned} \begin{aligned} \dot{x}&\in F(x) \qquad x\in C\\ x^+&\in G(x) \qquad x \in D, \end{aligned} \end{aligned}$$
(16.9)

where

$$x = (x_1,x_2, \ldots , x_N)\in \mathbb {R}^{n}$$

is the state with \(n = \sum _{i \in {\mathscr {V}}} \left( n^a_i + n^K_i + d^{\text { in}}_i n^{\text { net}}_{ki}\right) \), where \(x_i\) collects the state components of the agent, algorithm, and networks associated to the ith agent. The data (C, F, D, G) is constructed using the data of the individual systems. In Sect. 16.5, we provide numerous examples of such constructions.

3 Design Specifications

In this section, we formulate specific properties of interest in the design of networked systems. The network-specific properties introduced include: the states of the individual systems reaching a particular set that depends on the local variables, which we call formation; the states of all systems converging to each other, which is referred to as synchronization; the entire interconnected hybrid system being safe, called safety; and exogenous signals injected at specific agents being detectable, which we refer to as security. These properties are given in terms of the variables and inputs of the individual agents.

3.1 Formation

A property that is of interest in network system problems is when, for each \(i \in {\mathscr {V}}\), the state \(x_i\) converges to a particular relative configuration. For the closed-loop system \({\mathscr {H}}\), the set of interest is given as

$$\begin{aligned} {\mathscr {A}} := \bigcap _{i \in {\mathscr {V}}} {\mathscr {A}}_i, \end{aligned}$$
(16.10)

where

$$ {\mathscr {A}}_i := \left\{ x \in \mathbb {R}^n\ : \ \rho _i(x) = 0\right\} $$

and, for each \(i \in \{1,2,\ldots ,N\}\), the function \(\rho _i\) defines the relative formation between the agent i and the other agents. Convergence of solutions to this set can be interpreted as the network reaching a formation, in particular, when components of \(x_i\) are related to physical quantities, such as position or angles. To formulate this property, denote the distance from x to \({\mathscr {A}}\) as \(|x|_{{\mathscr {A}}}\), namely,

$$ |x|_{{\mathscr {A}}} = \inf _{x' \in {\mathscr {A}}} |x-x'|. $$

Then, the goal is to design an algorithm such that every maximal solution \(\phi \) to \({\mathscr {H}}\) converges to \({\mathscr {A}}\) in finite time or asymptotically, that is, in the limit as “hybrid” time gets large:

  • For some \((t^*,j^*) \in \mathop {\mathrm{dom}}\nolimits \phi \)

    $$ \lim _{(t,j) \in \mathop {\mathrm{dom}}\nolimits \phi ,\ t+j\searrow t^*+j^*} |\phi (t,j)|_{{\mathscr {A}}}=0. $$
  • If \(\phi \) is complete, then

    $$ \lim _{(t,j) \in \mathop {\mathrm{dom}}\nolimits \phi ,\ t+j\rightarrow \infty } |\phi (t,j)|_{{\mathscr {A}}}=0. $$

Note that while in some network system problems converging to the set \({\mathscr {A}}\) might be possible without exchanging information between agents, there are numerous problems where transmission of information between agents and algorithms is mandatory. One such case for \(N=2\) is when the algorithm that controls agent \({\mathscr {H}}^a_1\) is \({\mathscr {H}}^K_2\), and the algorithm that controls agent \({\mathscr {H}}^a_2\) is \({\mathscr {H}}^K_1\).

Certainly, the construction of the set \({\mathscr {A}}\) in (16.10) covers the situation when state components of the algorithm for each agent are to converge to a common point, say \(z^*\). In such a case, the definition of the sets \({\mathscr {A}}_i\) will include the condition \(z_i = z^*\) for each \(i \in {\mathscr {V}}\). It also covers the setting when the algorithm reconstructs the state \(z_i\) from measurements of \(y_i\), namely, the algorithm includes an observer. In such a case, the set \({\mathscr {A}}_i\) will include a condition of the form \(z_i = \hat{z}_i\), where \(\hat{z}_i\) is the component of \(\eta _i\) that provides an estimate of \(z_i\).
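As an illustration of the distance \(|x|_{{\mathscr {A}}}\) used above, the following sketch computes the distance to a formation set for a hypothetical affine choice of \(\rho _i\): two planar agents required to keep a fixed unit offset. The function names and the offset are assumptions for illustration, not part of the chapter.

```python
import numpy as np

# Hypothetical formation constraint for two planar agents with stacked state
# x = (x1, x2) in R^4: rho(x) = x2 - x1 - (1, 0), so the formation set
# A = {x : rho(x) = 0} asks agent 2 to sit one unit to the right of agent 1.
def rho(x):
    return x[2:] - x[:2] - np.array([1.0, 0.0])

def dist_to_formation(x):
    # rho is affine with constraint matrix [-I  I], whose rows are orthogonal
    # with norm sqrt(2), so the Euclidean distance to A is |rho(x)| / sqrt(2).
    return np.linalg.norm(rho(x)) / np.sqrt(2.0)

in_formation = dist_to_formation(np.array([0.0, 0.0, 1.0, 0.0]))   # on A
off_formation = dist_to_formation(np.array([0.0, 0.0, 2.0, 0.0]))  # off A
```

For more general nonlinear \(\rho _i\), the infimum defining \(|x|_{{\mathscr {A}}}\) must be computed numerically.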

3.2 Synchronization

Another dynamical property of interest in many network systems problems is when particular components of the solutions to each agent converge to each other, rather than to a particular set or point. For the closed-loop system \({\mathscr {H}}\), this property is stated as follows. Let \(x= (x_1,x_2, \ldots , x_N)\) be partitioned as \(x_i = (p_i,q_i)\). The closed-loop system \({\mathscr {H}}\) is said to have

  • stable synchronization with respect to p if for every \(\varepsilon > 0\), there exists \(\delta > 0\) such that every maximal solution \(\phi = (\phi _1,\phi _2,\dots , \phi _N)\), where \(\phi _i = (\phi _{i,p},\phi _{i,q})\), to \({\mathscr {H}}\) satisfying

    $$ |\phi _i(0,0) - \phi _k(0,0)| \leqslant \delta $$

    for each \(i,k \in {\mathscr {V}}\) is such that

    $$ |\phi _{i,p}(t,j) - \phi _{k,p}(t,j)| \leqslant \varepsilon $$

    for all \(i,k \in {\mathscr {V}}\) and \((t,j) \in \mathop {{\mathrm{dom}}}\nolimits \phi \).

  • globally attractive synchronization with respect to p if every maximal solution is complete, and for each \(i,k \in {\mathscr {V}}\)

    $$ \lim _{\begin{array}{c} (t,j) \in \mathop {\mathrm{dom}}\nolimits \phi \\ t+ j \rightarrow \infty \end{array}} |\phi _{i,p}(t,j) - \phi _{k,p}(t,j)| = 0. $$
  • global asymptotic synchronization with respect to p if it has both stable synchronization and globally attractive synchronization with respect to p.

In general, this is a partial-state synchronization notion, but if \(x_i = p_i\) for each \(i \in {\mathscr {V}}\), then it can be considered a full-state synchronization notion. Note that stable synchronization with respect to p requires the solutions \(\phi _i\), \(i \in {\mathscr {V}}\), to start close to each other, while only the components \(\phi _{i,p}\), \(i\in {\mathscr {V}}\), are required to remain close to each other over their domain of definition. Similarly, globally attractive synchronization with respect to p only requires that the distance between the components \(\phi _{i,p}\) and \(\phi _{k,p}\) approaches zero, while the other components are left unconstrained. Also, note that boundedness of the solutions is not required.
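A minimal numerical illustration of full-state synchronization, under the assumption of three scalar agents running a standard continuous-time consensus protocol (an example chosen here for illustration, not taken from the chapter):

```python
import numpy as np

# Three scalar agents with x_i' = sum over neighbors of (x_k - x_i) on a
# complete graph, i.e., x' = -L x with L the graph Laplacian. Full-state
# synchronization corresponds to the pairwise spread shrinking to zero.
L = np.array([[2., -1., -1.],
              [-1., 2., -1.],
              [-1., -1., 2.]])

def spread(x):
    # Largest pairwise disagreement |x_i - x_k| over all agent pairs.
    return max(abs(x[i] - x[k]) for i in range(3) for k in range(3))

x = np.array([1.0, -2.0, 0.5])
dt = 0.01
for _ in range(2000):          # forward-Euler integration up to t = 20
    x = x - dt * (L @ x)
```

Since \(\mathbf{1}^\top L = 0\), the average of the states is preserved along the iteration, so the agents agree on the mean of their initial conditions.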

3.3 Safety

Safety is a property of interest in the design of most algorithms for dynamical systems. It is typically characterized by conditions on the system variables, called safety conditions, that guarantee system operation within limits and away from undesired configurations. A system is said to be safe when its solutions remain within the set of points where the safety conditions are satisfied. For each \(i \in {\mathscr {V}}\), let \(K_i\) denote the set of points defining the safety conditions for the variables of the ith agent, and let the set

$$ K := K_1 \times K_2 \times \ldots \times K_N $$

be the set that captures all safety conditions for the closed-loop system \({\mathscr {H}}\). Then, a particular safety goal is to design the algorithms and the networks such that every solution \(\phi \) to \({\mathscr {H}}\) with initial condition

$$ \phi (0,0) \in K $$

is such that

$$ \phi (t,j) \in K \qquad \forall (t,j) \in \mathop {\mathrm{dom}}\nolimits \phi . $$

Note that this property requires all solutions that start from K to remain in K, even if they are not complete. At times, one might be interested in the property that solutions starting from a potentially smaller set than K stay in K. More precisely, let \(K_0\) denote the set of allowed initial conditions. Then, such a safety property is as follows: design the algorithms and the networks such that every solution \(\phi \) to \({\mathscr {H}}\) with initial condition

$$ \phi (0,0) \in K_0 $$

is such that

$$ \phi (t,j) \in K \qquad \forall (t,j) \in \mathop {\mathrm{dom}}\nolimits \phi , $$

where, in most cases, the set \(K_0\) would be strictly contained in K.
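The safety property above can be spot-checked by simulation. The sketch below does so for an assumed toy continuous-time system (not the chapter's model) with \(K_0 = [-1,1]\) and \(K = [-2,2]\):

```python
import numpy as np

# Toy example: check by simulation that solutions of x' = -x + sin(t)
# starting in K0 = [-1, 1] remain in the safe set K = [-2, 2].
def stays_safe(x0, t_end=20.0, dt=1e-3):
    x, t = float(x0), 0.0
    while t < t_end:
        x += dt * (-x + np.sin(t))   # forward-Euler flow step
        t += dt
        if not -2.0 <= x <= 2.0:     # safety condition: x in K
            return False
    return True
```

Here the comparison \(|x(t)| \leqslant e^{-t}|x_0| + (1 - e^{-t}) \leqslant 1\) confirms analytically what the simulation checks pointwise.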

3.4 Security

The general closed-loop system \({\mathscr {H}}\) makes it possible to model the dynamics of the physical components, such as sensors and actuators, the cyber components, which include digital devices and computing, as well as their interfaces. These interfaces can be exploited by adversaries to, for example, deny access or corrupt the information transmitted among agents. The characterization of which attacks are detectable and the design of algorithms to detect them are of great importance. Modeling the attacks as exogenous signals \(w_c\) and \(w_d\) affecting the continuous and discrete dynamics of \({\mathscr {H}}\), respectively, the closed loop under the effect of attacks is given by

$$\begin{aligned} \begin{aligned} \dot{x}&\in F(x+w_{c,1})+w_{c,2} \qquad x+w_{c,3}\in C\\ x^+&\in G(x+w_{d,1})+w_{d,2} \qquad x +w_{d,3} \in D, \end{aligned} \end{aligned}$$

where \(w_c = (w_{c,1},w_{c,2},w_{c,3})\) and \(w_d = (w_{d,1},w_{d,2},w_{d,3})\). We refer to this closed-loop system as \({\mathscr {H}}_w\). In this context, the security problem consists of detecting when the exogenous signal \(w:=(w_c,w_d)\) is nonzero. One way to accomplish this is to design a function that, evaluated along solutions alone, becomes nonzero whenever the attacker’s input w is nonzero. For instance, one would be interested in designing a function r such that for every solution pair \((\phi ,w)\) to \({\mathscr {H}}_w\)

$$ \exists (t,j) \in \mathop {\mathrm{dom}}\nolimits (\phi ,w)\ : \ |w(t,j)|> 0 \qquad \Rightarrow \qquad |r(\phi (t,j))| > 0. $$

Note that when the input w is nonzero over an interval, it might suffice to have a function r that becomes nonzero at some time during that interval, within a reasonable amount of time after the attack starts.
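The role of the residual r can be sketched on an assumed toy example (not the chapter's construction): a harmonic oscillator preserves the quantity \(x_1^2 + x_2^2\) along nominal flows, so its deviation from the initial level serves as a residual that becomes nonzero under an additive attack on the flow map.

```python
import numpy as np

# Toy attack detection: along nominal solutions of the harmonic oscillator
# from the unit circle, r(x) = x1^2 + x2^2 - 1 stays (numerically) near zero;
# an additive attack w entering the flow map drives r away from zero.
def worst_residual(attack, t_end=4.0, dt=1e-3):
    x, worst = np.array([1.0, 0.0]), 0.0
    for k in range(int(t_end / dt)):
        w = attack(k * dt)
        x = x + dt * (np.array([x[1], -x[0]]) + w)   # attacked flow step
        worst = max(worst, abs(x[0]**2 + x[1]**2 - 1.0))
    return worst

r_nominal = worst_residual(lambda t: np.zeros(2))
r_attacked = worst_residual(lambda t: np.array([0.5, 0.0]) if t > 1.0 else np.zeros(2))
```

The small nominal residual is forward-Euler drift; the attacked residual is an order of magnitude larger, revealing that w is nonzero.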

4 Notions and Design Tools

In this section, we present dynamical properties that are suitable to certify the network-specific properties given in Sect. 16.3. These properties are stated for general hybrid systems given as \({\mathscr {H}}\) in (16.9) and later, in Sect. 16.5, specialized to networked systems problems. The presentation of these properties and related results is informal, and pointers to the literature with formal statements are given.

4.1 Asymptotic Stability

Given a subset of the state space of a dynamical system, asymptotic stability captures the property that solutions starting close to the set stay close to it, and that solutions that are complete converge to it asymptotically. For a hybrid system \({\mathscr {H}}\) as in (16.9) with state space \(\mathbb {R}^n\), a closed set \({\mathscr {A}} \subset \mathbb {R}^n\) is said to be

  • stable for \({\mathscr {H}}\) if for each \(\varepsilon >0\) there exists \(\delta >0\) such that each solution \(\phi \) to \({\mathscr {H}}\) with initial condition such that

    $$|\phi (0,0)|_{{\mathscr {A}}}\leqslant \delta $$

    satisfies

    $$|\phi (t,j)|_{{\mathscr {A}}} \leqslant \varepsilon \qquad \forall (t,j)\in \mathop {{\mathrm{dom}}}\nolimits \phi . $$
  • globally asymptotically attractive for \({\mathscr {H}}\) if every maximal solution \(\phi \) to \({\mathscr {H}}\) is completeFootnote 3 and satisfies

    $$ \lim _{(t,j) \in \mathop {\mathrm{dom}}\nolimits \phi ,\ t+j\rightarrow \infty } |\phi (t,j)|_{{\mathscr {A}}}=0. $$
  • globally asymptotically stable for \({\mathscr {H}}\) if it is stable and globally asymptotically attractive.

Algorithms \({\mathscr {H}}^K_i\) for \({\mathscr {H}}^a_i\) that, under the effect of the networks \({\mathscr {H}}^{\text {net}}_{ik}\), guarantee asymptotic stability of the set \({\mathscr {A}}\) can be designed using the Lyapunov stability analysis tools in [23, Chaps. 3 and 7]. In particular, asymptotic stability is of interest to networked systems problems as it can be employed to guarantee formation and synchronization. In fact, the problem of guaranteeing that the network \({\mathscr {H}}\) asymptotically reaches a formation can be solved by showing that the set \({\mathscr {A}}\) in (16.10) is asymptotically attractive. The problem of designing algorithms \({\mathscr {H}}^K_i\) that guarantee full-state asymptotic synchronization of \({\mathscr {H}}\) can be recast as the problem of asymptotically stabilizing the closed set

$$\begin{aligned} {\mathscr {A}} := \left\{ x \in \mathbb {R}^n\ : x_1 = x_2 = \cdots = x_N\right\} . \end{aligned}$$

Sufficient conditions for asymptotic stability in terms of Lyapunov functions can be found in [23, Chaps. 3, 6, and 7]; see Sect. 16.5 for illustrations.
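As a sketch of how such Lyapunov conditions are checked, consider the following toy hybrid system (an assumed example, not from the chapter): flows \(\dot{x} = -x\) on \(C = [0,\infty )\), jumps \(x^+ = x/2\) on \(D = \{1\}\), with candidate \(V(x) = x^2\) for \({\mathscr {A}} = \{0\}\).

```python
import numpy as np

# Toy hybrid system: flow map f(x) = -x on C = [0, inf), jump map
# g(x) = x/2 on D = {1}; Lyapunov candidate V(x) = x^2 for A = {0}.
V = lambda x: x**2
f = lambda x: -x
g = lambda x: x / 2.0

# Flow condition: <dV/dx, f(x)> = 2x * (-x) = -2x^2 < 0 on C away from A,
# sampled here on a grid.
flow_ok = all(2.0 * x * f(x) < 0.0 for x in np.linspace(0.1, 10.0, 50))
# Jump condition: V(g(x)) - V(x) = -(3/4)x^2 < 0 on D = {1}.
jump_ok = V(g(1.0)) - V(1.0) < 0.0
```

Both conditions hold, so V certifies asymptotic stability of \(\{0\}\) for this toy system in the sense of the Lyapunov results cited above.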

4.2 Finite Time Convergence

At times, convergence to the set of points of interest in finite time is desired. For instance, in a network system, one might be interested in assuring that the states of the individual systems converge to a particular formation in finite time and, after that, accomplish a different task. For a hybrid system \({\mathscr {H}}\) on \(\mathbb {R}^n\), given a closed set \({\mathscr {A}}\subset \mathbb {R}^n\), an open neighborhood \({\mathscr {N}}\) of \({\mathscr {A}}\), and a function \({\mathscr {T}}: {\mathscr {N}} \rightarrow [0,\infty )\) called the settling-time function, the closed set \({\mathscr {A}}\) is said to be

  • finite time attractive for \({\mathscr {H}}\) if each solution \(\phi \) to \({\mathscr {H}}\) with initial condition such that

    $$ \phi (0,0) \in {\mathscr {N}} $$

    satisfies

    $$\begin{aligned} \sup _{(t,j)\in \mathop {\mathrm{dom}}\nolimits \phi } t+j \geqslant {\mathscr {T}}(\phi (0,0)) \end{aligned}$$
    (16.11)

    and

    $$\begin{aligned} \lim _{(t,j)\in \mathop {\mathrm{dom}}\nolimits \phi : t+j \nearrow {\mathscr {T}}(\phi (0,0))} |\phi (t,j)|_{{\mathscr {A}}} = 0. \end{aligned}$$
    (16.12)

This property becomes global when \(\mathscr {N}\) can be picked such that \(\overline{C} \cup D \subset {\mathscr {N}}\). Condition (16.11) assures that convergence occurs at a point in \(\mathop {\mathrm{dom}}\nolimits \phi \), in turn guaranteeing that the solution actually converges to \({\mathscr {A}}\).

Networked systems with such finite time convergence properties can be designed using the tools in [26], in particular, to guarantee network formation in finite time.

4.3 Forward Invariance

A set K is said to be forward invariant for a dynamical system if every solution to the system from K stays in K for all future time. Also referred to in the literature as flow invariance and positive invariance, this property assures that solutions remain in a desired region of the state space, a key requirement in many dynamical systems applications. For a hybrid system \({\mathscr {H}}\) on \(\mathbb {R}^n\), a given set

$$K \subset \overline{C} \cup D$$

is said to be forward pre-invariant for \({\mathscr {H}}\) if for each \(x \in K\), each solution \(\phi \) to \({\mathscr {H}}\) with initial condition \(\phi (0,0) = x\) is such that

$$ \phi (t,j) \in K \qquad \forall (t,j) \in \mathop {\mathrm{dom}}\nolimits \phi . $$

The condition that K be contained in \(\overline{C} \cup D\) ensures that a solution exists from each point of a forward pre-invariant set.Footnote 4 The prefix “pre” indicates that the notion does not enforce that maximal solutions are complete; when every maximal solution from K is complete, the notion reduces to forward invariance as defined in [11] – see therein also “weak” notions of forward invariance.

Sufficient conditions guaranteeing forward pre-invariance of sets are given in [11,12,13] for general hybrid systems modeled as \({\mathscr {H}}\) in (16.9). In particular, these conditions can be used as design tools to certify safety in a networked system.

4.4 Robustness

In real-world settings, networked systems are affected by a variety of perturbations that may compromise the satisfaction of the properties that they were designed for. Unmodeled dynamics in the models used for the agents \({\mathscr {H}}^a_i\) lead to perturbations of the data \((C^a_i,F^a_i,D^a_i, G^a_i,E^a_i,H^a_i)\). In particular, additive (in the general set-valued sense) perturbations to the flow map \(F_i^a\) and the jump map \(G_i^a\) can be used to capture terms that were omitted at the modeling stage, potentially with the intention of providing a simplified agent model that would enable analysis and design. Deflations and inflations of the sets \(C_i^a\) and \(D_i^a\) can be defined to model perturbations in the conditions allowing flows and jumps. Similar perturbations may appear in the models of the networks and algorithms. When such perturbations are not known at the design stage, one typically performs the design in nominal conditions, with the expectation that when the perturbations are present and have small size, the established properties will hold practically and semiglobally.

A semiglobal and practical (on the size of the perturbation) version of the asymptotic stability property of a set \({\mathscr {A}}\) defined in Sect. 16.4.1 would guarantee, for every compact set \(M \subset \mathbb {R}^n\) and every level of closeness \(\varepsilon >0\), the existence of a maximum allowed perturbation size \(\delta ^*>0\) such that every complete solution \(\widetilde{\phi }\) to \({\mathscr {H}}\) under perturbations with size smaller than \(\delta ^*\) and with initial condition \(\widetilde{\phi }(0,0) \in M\) is such that, in the limit as t or j grows unbounded, the distance from \(\widetilde{\phi }\) to \({\mathscr {A}}\) is less than or equal to \(\varepsilon \). When \({\mathscr {A}}\) is an asymptotically stable compact set for \({\mathscr {H}}\), this property is guaranteed to hold under mild conditions on the data of \({\mathscr {H}}\). Such a result can be found in Chap. 7 of [23]; see Definition 7.18, Lemma 7.19, and Theorem 7.21 therein.

The property outlined above can be formally written in terms of a \(\mathscr {K}L\) bound. First, when \({\mathscr {H}}\) is nominally well posed and a compact set \({\mathscr {A}}\) is asymptotically stable, then there exists a class-\(\mathscr {K}L\) function \(\beta \) such that every solution \(\phi \) to \({\mathscr {H}}\) satisfies

$$\begin{aligned} |\phi (t,j)|_{{\mathscr {A}}}\leqslant \beta (|\phi (0,0)|_{{\mathscr {A}}},t+j) \qquad \forall (t,j) \in \mathop {\mathrm{dom}}\nolimits \phi . \end{aligned}$$
(16.13)

See [23, Theorem 7.12]. Then, [23, Theorem 7.21] implies that for each compact set \(M \subset \mathbb {R}^n\) and every level of closeness \(\varepsilon >0\), there exists \(\delta ^*>0\) such that

$$\begin{aligned} |\widetilde{\phi }(t,j)|_{{\mathscr {A}}}\leqslant \beta (|\widetilde{\phi }(0,0)|_{{\mathscr {A}}},t+j) + \varepsilon \qquad \forall (t,j) \in \mathop {\mathrm{dom}}\nolimits \widetilde{\phi }\end{aligned}$$
(16.14)

for every solution \(\widetilde{\phi }\) that starts from M and that is under the effect of perturbations with size smaller than \(\delta ^*\). A similar result for the case of finite time convergence is in Theorem 4.1 in [26].

Another typical perturbation in networked systems is the presence of noise in the quantities measured and transmitted by the network, and in the values that finally arrive at the agents. When such noise is small, a semiglobal property that is practical on the size of the noise can be established using the tools mentioned above. When the noise is large, one is typically interested in characterizing the effect of the noise on the nominal asymptotic stability property, namely, on the distance to the set \({\mathscr {A}}\). The notion of input-to-state stability (ISS) is one way to characterize the effect of large noise. Denote by \(\widetilde{\mathscr {H}}\) the hybrid system under the effect of an exogenous disturbance d. The hybrid system \(\widetilde{\mathscr {H}}\) is input-to-state stable with respect to \(\mathscr {A}\) if there exist \(\beta \in {\mathscr {K}L}\) and \(\kappa \in {\mathscr {K}}\) such that each solution \(\widetilde{\phi }\) to \(\widetilde{\mathscr {H}}\) with associated disturbance d satisfies

$$\begin{aligned} | \widetilde{\phi }(t,j)|_{{\mathscr {A}}}\leqslant \max \{\beta (| \widetilde{\phi }(0,0)|_{{\mathscr {A}}},t+j), \kappa (\Vert d\Vert _{(t,j)})\} \end{aligned}$$
(16.15)

for eachFootnote 5 \((t,j)\in \mathop {\mathrm{dom}}\nolimits \widetilde{\phi }\). Several characterizations and Lyapunov-based tools to certify ISS in hybrid systems are given in [8]. The set \(\mathscr {K}L\) is the set of \(\mathscr {K}L\) functions and \(\mathscr {K}\) is the set of \(\mathscr {K}\) functions; see [23, Sect. 3.5].
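A hedged scalar sketch of an ISS estimate of the form (16.15): for \(\dot{x} = -x + d\), the bound with \(\beta (s,t) = 2se^{-t}\) and \(\kappa (s) = 2s\) is valid (this example and these comparison functions are assumptions for illustration) and can be checked along simulated solutions.

```python
import numpy as np

# Check the ISS-type estimate |x(t)| <= max(2|x0|e^{-t}, 2 sup|d|) along
# forward-Euler solutions of the scalar system x' = -x + d.
def iss_bound_holds(x0, d_fun, t_end=10.0, dt=1e-3):
    x, t, d_sup = float(x0), 0.0, 0.0
    while t < t_end:
        d_sup = max(d_sup, abs(d_fun(t)))   # running sup-norm of d
        x += dt * (-x + d_fun(t))           # forward-Euler flow step
        t += dt
        bound = max(2.0 * abs(x0) * np.exp(-t), 2.0 * d_sup)
        if abs(x) > bound + 1e-9:
            return False
    return True
```

The estimate follows from \(|x(t)| \leqslant e^{-t}|x_0| + (1-e^{-t})\sup |d|\) and \(a+b \leqslant 2\max \{a,b\}\).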

A direct approach to design algorithms conferring robustness is to perform the design task using a model that explicitly includes a model of the perturbations. Such an approach allows for a variety of perturbations, as long as they can be modeled and a certificate guaranteeing the desired properties can be found. When robust asymptotic stability is of interest, robust control Lyapunov functions for hybrid systems can be employed to guarantee robust asymptotic stability of sets; see [56]. Forward invariance of sets with robustness to perturbations can be certified for hybrid dynamical systems when the model includes the perturbations. Tools for the design of algorithms conferring robust forward invariance to general sets and, in particular, to sets given by sublevel sets of Lyapunov functions are available in [13].

Delay is a perturbation that is of particular interest in networked systems as it is unavoidable in real-world settings. Compared to the tools available to handle the sources of perturbation mentioned above, design methods to guarantee robustness to delays are much less developed. Works pertaining to systems with hybrid dynamics and delays have focused on guaranteeing pre-asymptotic stability through the use of Razumikhin functions [37, 67] and Lyapunov functionals for retarded functional impulse differential equations [63]. Results for switched systems with delays are also available in [14, 33, 62, 66]. Results for linear reset systems with delays developed using passivity appeared in [4, 5]. Tools for studying the effects of general delays in hybrid systems modeled as in (16.9) have recently appeared in the sequence of articles [30,31,32, 34]. In a different vein, in [2], we have recently proposed a way to exploit well posedness and a \(\mathscr {K}L\) bound as in (16.13) to handle the sole effect of delays on events in hybrid systems.

5 Applications

The models and tools presented in the previous sections have recently been used to solve problems pertaining to certain classes of networked hybrid dynamical systems. In [28, 29], a distributed hybrid observer to estimate the state of a linear time-invariant system was designed to guarantee asymptotic stability of a set on which the state estimation error is zero. In [53], a solution is proposed to the control problem of steering the state of agents with point-mass dynamics to the same value over a network that only allows exchange of information at isolated, aperiodic time instances. The algorithm discretely updates the input to the point-mass system at communication events and, between events, the input changes continuously according to a linear differential equation. A hybrid control algorithm for the synchronization of multiple systems with general linear time-invariant dynamics over a similar communication network appeared in [50, 51]. The remainder of this section presents a summary of these results.

5.1 Distributed Estimation

State estimation in networked systems has seen increased attention recently. Recent contributions include continuous-time algorithms for distributed estimation of the state of a plant, in [27] with robustness guarantees and in [25, 65], both when information is exchanged continuously. Algorithms for which information arrives at common discrete time instances include the network of local observers proposed in [47] for linear time-invariant plants and the optimal estimators in [68], for time-varying networked systems, both in discrete time and with information shared at each discrete time instant. Approaches that keep the continuous dynamics of the plant and treat the communication events as impulsive events include the observer-based controller [42] for network control systems modeled as time-varying hybrid systems, the observer-protocol pair in [16] to asymptotically reconstruct the state of a linear time-invariant plant using periodic measurements from multiple sensors, the distributed observer in [19] designed by partitioning the dynamics into disjoint areas and attaching an algorithm to each area that updates the estimates over time windows with common length, and the robust continuous-time observer for estimation in network control systems in [55] designed via an emulation-like approach and exploiting trajectory-based and small-gain arguments. Other approaches that mix continuous and discrete dynamics have appeared in the nonlinear and stochastic systems literature; see [1, 10, 15, 18, 20, 39, 43, 59, 60].

In this section, we consider the problem of estimating the state of a dynamical system from intermittent measurements of functions of its output over a network with \(N'\) nodes, each running a decentralized state estimator. The communication events occur according to one of the models in Example 16.4. Under nominal conditions, the system whose state is to be estimated is modeled as a linear time-invariant system. The algorithm we propose builds from the hybrid observer in [22], which is shown to guarantee global exponential stability of the zero-estimation error under sporadic measurements. Without loss of generality, following the model in (16.2) and defining \(N = N' + 1\), we assume that the first agent corresponds to this dynamical system, while the agents with \(i \in {\mathscr {V}}' := \{2,3,\ldots ,N'+1\}\) implement the decentralized state estimators. In this way, the dynamics of the first agent are given by

$$\begin{aligned} \dot{z}_1&= A z_1 , \end{aligned}$$
(16.16)

where \(z_1 \in \mathbb {R}^{n_1^a}\) denotes its state and \(A \in \mathbb {R}^{n_1^a\times n_1^a}\) is the system matrix. For each \(i \in {\mathscr {V}}'\), the ith agent running a state estimator receives the measurement

$$\begin{aligned} y_i = H_i z_1 \end{aligned}$$
(16.17)

and the outputs \(y_k\) of its neighbors, that is, for each \(k \in {\mathscr {N}}(i)\), at time instances \(t^i_j\) satisfying

$$\begin{aligned} t^i_{j+1} - t^i_j \in [T^i_1, T^i_2] , \qquad j > 0, \end{aligned}$$
(16.18)

where \(H_i \in \mathbb {R}^{p_i^a \times n_1^a}\) is the local output matrix of the ith agent and \(T^i_2 \geqslant T^i_1 > 0\) are parameters that, as \(T_2\) and \(T_1\) in (16.5), determine the minimum and maximum amount of time to elapse between communication events for the ith agent. Following the network models proposed in Sect. 16.2.2, in particular, those in Example 16.4, we employ the model

$$\begin{aligned} \dot{\tau }_{i}&= -1&\tau _{i}&\in [0, T^i_2] \end{aligned}$$
(16.19a)
$$\begin{aligned} \tau _{i}^+&\in [T^i_1,T^i_2]&\tau _{i}&= 0 \end{aligned}$$
(16.19b)

to trigger the events at which the ith agent receives \(y_i\) and the \(y_k\)’s. Since the information from all neighbors to agent i arrives simultaneously, we can employ a single state \(\mu _i\) for each agent, rather than \(d_i^{\text {in}}\) states \(\mu _{ik}\) for each agent, \(i \in {\mathscr {V}}'\). A model as in (16.3) can be derived following the construction in Example 16.4, where the state \(\mu _i\) would be given by \(\tau _i\) and the input \(\omega _{i}\) by the information to transmit, namely, \(y_i\) and the \(y_k\)’s.
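The inter-event behavior induced by the timer model (16.19) can be sketched as follows: consecutive communication events are separated by durations in \([T^i_1, T^i_2]\). Drawing the timer resets uniformly at random is an assumption made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Event times generated by the timer in (16.19): after each event the timer
# is reset into [T1, T2], so consecutive communication events are separated
# by between T1 and T2 units of flow time.
def event_times(T1, T2, t_end):
    events, t = [], 0.0
    tau = rng.uniform(T1, T2)      # initial timer value
    while t + tau <= t_end:
        t += tau                   # flow until the timer expires
        events.append(t)           # jump of (16.19): a communication event
        tau = rng.uniform(T1, T2)  # timer reset into [T1, T2]
    return events

ts = event_times(T1=0.1, T2=0.5, t_end=20.0)
gaps = np.diff([0.0] + ts)         # inter-event times
```

Every inter-event time lies in \([T_1, T_2]\), so communication is aperiodic but neither Zeno nor starved.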

We propose a decentralized hybrid algorithm that, at each agent and by employing information received from the neighbors over a communication graph, generates a converging estimate of the state of the first agent. More precisely, at the ith agent, \(i \in {\mathscr {V}}'\), the hybrid algorithm has a state with a variable \(\hat{z}_i \in \mathbb {R}^{n_1^a}\) storing the estimate of the state \(z_1\) and an information fusion state variable, denoted \(\ell _i\), storing the measurements received from its neighbors. These state variables are continuously updated by differential equations

$$\begin{aligned} \dot{\hat{z}}_i&= A \hat{z}_i + \ell _i \end{aligned}$$
(16.20a)
$$\begin{aligned} \dot{\ell }_i&= h_i \ell _i \end{aligned}$$
(16.20b)

when no information is received, while when information is received, the states \(\hat{z}_i\) and \(\ell _i\) are updated according to

$$\begin{aligned} \hat{z}_i^+&= \hat{z}_i \end{aligned}$$
(16.21a)
$$\begin{aligned} \ell _i^+&= \sum _{k \in {\mathscr {N}}(i)} G_{oi}^k(\hat{z}_i, \hat{z}_k, y_i, y_k) \end{aligned}$$
(16.21b)

with

$$\begin{aligned} G_{oi}^k(\hat{z}_i, \hat{z}_k, y_i, y_k) = \frac{1}{d^{\text {in}}_i} K_{ii} y_i^e + K_{ik} y_k^e + \gamma (\hat{z}_i - \hat{z}_k ), \end{aligned}$$
(16.22)

where, for each \(i, k \in {\mathscr {V}}'\), \(y_i^e = H_i \hat{z}_i - y_i\) is the output estimation error; the scalars \(h_i\) and \(\gamma \) and the matrices \(K_{ik}\) are the parameters of the algorithm. The constants \(g_{ik}\) in (16.21) and \(d^{\text {in}}_i\) in (16.22) are associated with the communication graph, which is assumed to be given. The map \(G_{oi}^k\) defines the impulsive update law applied when new information is collected from the first agent and the kth neighbor of agent i. The information fusion state \(\ell _i\) is injected into the continuous dynamics of the local estimate \(\hat{z}_i\) and, at communication events, injects new information impulsively – the right-hand side of (16.21) is the “innovation term” of the proposed observer. The update law in (16.22) is such that its second term uses the output error of each kth agent that is a neighbor of the ith agent, and its third term uses the difference between the estimates \(\hat{z}_i\) and \(\hat{z}_k\). These are the quantities that are transmitted (instantaneously) at communication events only.

The continuous and discrete dynamics in (16.20) and (16.21) can be modeled as a hybrid algorithm \({\mathscr {H}}_i^K\) as in (16.8) with state \( \eta _i = (\hat{z}_i, \ell _i) \), input \( v_i = (y_i,\{(\hat{z}_k,y_k)\}_{k \in {\mathscr {N}}(i)},\mu _i) \), and data given by

$$\begin{aligned} F^K_i(\eta _i,v_i):= & {} \begin{bmatrix}A \hat{z}_i + \ell _i \\ h_i \ell _i\end{bmatrix} \\ C^K_i:= & {} \left\{ (\eta _i,v_i)\ :\ \mu _i \in [0,T_2]\right\} \\ G^K_i(\eta _i,v_i):= & {} \begin{bmatrix} \hat{z}_i \\ \sum _{k \in {\mathscr {N}}(i)}\frac{1}{d^{\text {in}}_i} K_{ii} (H_i \hat{z}_i - y_i) \!+\! K_{ik} (H_k \hat{z}_k - y_k) \!+\! \gamma (\hat{z}_i - \hat{z}_k ) \end{bmatrix} \\ D^K_i:= & {} \left\{ (\eta _i,v_i)\ :\ \mu _i = 0\right\} \\ H^K_i(\eta _i,v_i):= & {} \hat{z}_i \end{aligned}$$

and \(E^K_i\) the entire state and input space of \({\mathscr {H}}_i^K\). Note that the input to the algorithm includes the output \(\mu _i = x_i\) of the network model in (16.7). Due to \(\mu _i\) triggering the jumps in \({\mathscr {H}}_i^K\), for each \(i \in {\mathscr {V}}'\), jumps of the network and the hybrid algorithm for the ith agent occur simultaneously.
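To make the data above concrete, the following sketch implements the innovation update (16.21b)–(16.22) for a hypothetical pair of estimator agents that are neighbors of each other; the output matrices and the gains \(K_{ii}\), \(K_{ik}\), \(\gamma \) are placeholders chosen for illustration, not values designed via the conditions below. It also checks that estimates equal to \(z_1\) map \(\ell _i\) to zero.

```python
import numpy as np

# Innovation jump map (16.21b)-(16.22) for two estimator agents i in {2, 3}
# that are neighbors of each other. H_i, K_ii, K_ik, and gamma below are
# hypothetical placeholders, not designed gains.
H = {2: np.array([[1.0, 0.0]]), 3: np.array([[0.0, 1.0]])}
K = {(2, 2): np.array([[-1.0], [0.0]]), (2, 3): np.array([[0.0], [-1.0]]),
     (3, 3): np.array([[0.0], [-1.0]]), (3, 2): np.array([[-1.0], [0.0]])}
gamma = 0.5
nbrs = {2: [3], 3: [2]}
d_in = {i: len(nbrs[i]) for i in nbrs}   # in-degrees

def ell_plus(i, zhat, y):
    # Sum over neighbors of: local output error term, neighbor output error
    # term, and the disagreement gamma * (zhat_i - zhat_k), as in (16.22).
    total = np.zeros(2)
    for k in nbrs[i]:
        y_i_err = H[i] @ zhat[i] - y[i]
        y_k_err = H[k] @ zhat[k] - y[k]
        total += ((K[(i, i)] @ y_i_err) / d_in[i] + K[(i, k)] @ y_k_err
                  + gamma * (zhat[i] - zhat[k]))
    return total

z1 = np.array([0.7, -0.3])                       # state of the first agent
y = {i: H[i] @ z1 for i in (2, 3)}               # measurements (16.17)
consistent = {2: z1.copy(), 3: z1.copy()}        # all estimates equal z1
perturbed = {2: z1 + np.array([0.1, 0.0]), 3: z1.copy()}
```

As expected, consistent estimates produce a zero innovation, while a perturbed estimate produces a nonzero one.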

The goal of the algorithm is to guarantee that, for each \(i \in {\mathscr {V}}'\), the estimate \(\hat{z}_i\) converges to the state \(z_1\). When the estimates are equal to \(z_1\), the update law maps \(\ell _i\) to zero. Noting that the timers \(\tau _i\) (\(=\mu _i\)) in the model of the network remain within the set \([0,T_2^i]\), the goal of the algorithm is to render the set

$$\begin{aligned} {\mathscr {A}} := \left\{ x\ : \ z_1 = \hat{z}_i, \mu _i \in [0,T_2^i], \ell _i = 0\ \ \forall i \in {\mathscr {V}}'\right\} \end{aligned}$$
(16.23)

globally exponentially stable for the resulting closed-loop system with state x, which is given by the stack of the state variables of the first agent (\(z_1\)), each algorithm (the \(\eta _i\)’s), and each network (the \(\mu _i\)’s).

A result for the design of the parameters of the proposed hybrid algorithm can be found in [29] to guarantee that the set \({\mathscr {A}}\) in (16.23) is globally exponentially stable for the closed-loop hybrid system \({\mathscr {H}}\). Given the network parameters \(0 < T_1^i \leqslant T_2^i\) for each \(i\in {\mathscr {V}}'\), it is assumed that the \(N'\) agents are connected via a digraph \(\varGamma = ({\mathscr {V}}, {\mathscr {E}}, {\mathscr {G}})\) that is such that there exist a constant \(\delta > 0\) and matrices \(K_g\), \(P= P^\top > 0\), \(Q_i= Q_i^\top > 0\) satisfyingFootnote 6

$$\begin{aligned} {\mathscr {M}}(\tau ) := \left[ \begin{array}{cc} \text {He}(A_\theta , P) &{} -P + \widetilde{A}_\theta ^\top {\mathscr {K}}^\top \widetilde{Q}(\tau ) \\ \star &{} - \delta \widetilde{Q}(\tau ) - \text {He}(\widetilde{\mathscr {K}}, \widetilde{Q}(\tau )) \end{array} \right] < 0 \qquad \forall \tau \in {\mathscr {T}}, \end{aligned}$$
(16.24)

where \(\tau = (\tau _2,\tau _3,\ldots ,\tau _N)\), \({\mathscr {T}}:= [0, T_2^2] \times [0, T_2^3] \times \dots \times [0, T_2^N]\),

$$\begin{aligned} A_\theta= & {} I_N \otimes A + {\mathscr {K}}\\ {\mathscr {K}}= & {} (K_g H_g) * (I_N + {\mathscr {G}}) + \gamma {\mathscr {L}} \otimes I_n\\ H_g= & {} \mathop {\mathrm{diag}}\nolimits (H_2, H_3, \dots , H_N)\\ \widetilde{A}_\theta= & {} A_\theta - \widetilde{H}\\ \widetilde{\mathscr {K}}= & {} {\mathscr {K}} - \widetilde{H}\\ \widetilde{H}= & {} \mathop {\mathrm{diag}}\nolimits (h_2 I_n, h_3 I_n, \dots , h_N I_n)\\ \widetilde{Q}(\tau )= & {} \mathop {\mathrm{diag}}\nolimits \left( \widetilde{Q}_2(\tau _2), \widetilde{Q}_3(\tau _3), \ldots , \widetilde{Q}_N(\tau _N)\right) \\ \widetilde{Q}_i(\tau _i)= & {} \exp (\delta \tau _i) Q_i. \end{aligned}$$
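Condition (16.24) can be spot-checked numerically by sampling the box \({\mathscr {T}}\) and testing negative definiteness of \({\mathscr {M}}(\tau )\) at each grid point. The sketch below does this with a hypothetical scalar stand-in for \({\mathscr {M}}(\tau )\) (not the construction above); a rigorous verification would also need a guarantee between grid points.

```python
import numpy as np

def He(A, P):
    # He(A, P) = A^T P + P A, the symmetrized product appearing in (16.24).
    return A.T @ P + P @ A

def negdef_on_grid(M_of_tau, T2s, samples=11):
    # Sample the box [0, T2_2] x ... x [0, T2_N] and test M(tau) < 0 at every
    # grid point via the largest eigenvalue (M(tau) must be symmetric).
    grids = [np.linspace(0.0, T2, samples) for T2 in T2s]
    mesh = np.stack(np.meshgrid(*grids), axis=-1).reshape(-1, len(T2s))
    return all(np.max(np.linalg.eigvalsh(M_of_tau(tau))) < 0.0 for tau in mesh)

# Hypothetical 1x1 stand-in for M(tau): He(A0, P) + exp(delta*tau) * Q < 0.
A0, P, Q, delta = np.array([[-1.0]]), np.array([[1.0]]), np.array([[-0.5]]), 0.1
M = lambda tau: He(A0, P) + np.exp(delta * tau[0]) * Q
feasible = negdef_on_grid(M, T2s=[0.4])
```

In practice, such parameter-dependent matrix inequalities are handled with LMI solvers together with convexity or gridding arguments over \({\mathscr {T}}\).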

These design conditions are obtained using the sufficient conditions for asymptotic stability in [23] (specifically, Proposition 3.29 therein), which for the current data yield exponential stability, together with a convenient change of coordinates. The Lyapunov function used to show global exponential stability of the set \({\mathscr {A}}\) in (16.23) is given by

$$\begin{aligned} V(x) := e^\top P e + \theta ^\top \widetilde{Q}(\tau ) \theta , \end{aligned}$$

where \(e = (e_2, e_3, \ldots , e_N)\), \(e_i = \hat{z}_i - z_1\), \(\tau = (\tau _2, \tau _3, \ldots , \tau _N)\), \(\theta = (\theta _2, \theta _3, \ldots , \theta _N)\), and

$$\begin{aligned} \theta _i = K_{ii} y_i^e + \sum _{k\in {\mathscr {N}}(i)} K_{ik} y_k^e \!+\! \gamma \sum _{k\in {\mathscr {N}}(i)} (\hat{z}_i - \hat{z}_k) - \ell _i \end{aligned}$$
(16.25)

for each \(i \in {\mathscr {V}}'\), with P and \(\widetilde{Q}\) as defined above. Note that \(V(x) = 0\) for each \(x \in {\mathscr {A}}\), while \(V(x) > 0\) for any \(x \not \in {\mathscr {A}}\). More importantly, regardless of which timer triggers a jump, this function satisfies the useful property that \(V(x^+) - V(x)\) is upper bounded by a nonpositive function of \(\theta _i\) for all x in the jump set. This property is a consequence of the convenient choice of the update law of the observer used at jumps, which, in the coordinates in (16.25), maps e by the identity and \(\theta _i\) to zero. The injection of \(\ell _i\) in the flows of the local estimate in (16.20) and the continuous dynamics of \(\ell _i\) further permit a decrease of V during flows, a property enabled by the exponential functions in the definition of \(\widetilde{Q}\). These properties are exploited to arrive at the result above. The interested reader is referred to [29], which, in addition to several other results pertaining to the design, nominal robustness, and ISS-type robustness of the above algorithm, provides several examples.
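To make the jump behavior concrete, consider a jump triggered by the timer of the ith network expiring: in the coordinates above, \(e\) is mapped by the identity, \(\theta _i\) is mapped to zero, and the remaining \(\theta _k\)'s are unchanged. A back-of-the-envelope computation (a sketch, using only the definitions above) then gives

$$\begin{aligned} V(x^+) - V(x) = - \theta _i^\top \widetilde{Q}_i(\tau _i)\, \theta _i = - \exp (\delta \tau _i)\, \theta _i^\top Q_i \theta _i \leqslant 0, \end{aligned}$$

with strict decrease whenever \(\theta _i \ne 0\); the decrease of V during flows is, in turn, enforced by condition (16.24).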

5.2 Distributed Synchronization

Synchronization is a property of interest in many problems emerging in science and engineering, such as spiking neurons [41, 48], formation control and flocking [21, 44], distributed sensor networks [45], and satellite constellation formation [57], among others. The literature on synchronization is quite rich, with numerous contributions employing a variety of techniques, such as Lyapunov functions [6, 24], convergence [46, 58], contraction theory [61], and incremental input-to-state stability [3, 9]. Synchronization for continuous-time systems in which communication coupling occurs at discrete events is an emerging area of study. In [9], the authors study a case of synchronization where agents have nonlinear continuous-time dynamics with continuous coupling and impulsive perturbations. In [38], the authors use Lyapunov-like analysis to derive sufficient conditions for the synchronization of continuously coupled nonlinear systems with impulsive resets on the difference between neighboring agents. In [36], a distributed event-triggered control strategy was developed to drive the outputs of the agents in a network to synchronization. Using a sample-and-hold self-triggered control policy, a practical synchronization result was established in [17] for the case of first-order integrator dynamics. On the other hand, methods for the design of algorithms that guarantee synchronization of multi-agent systems with information arriving at impulsive, asynchronous time instants are not available.

In this section, we consider the problem of synchronizing the state of N networked agents from intermittent measurements of the state (or of a function of it) over a digraph. Each agent runs a decentralized hybrid algorithm that uses information received from its neighbors. The nominal model of the agents is given as follows: for each \(i \in {\mathscr {V}}\),

$$\begin{aligned} \dot{z}_i = A z_i + B u_i, \end{aligned}$$
(16.26)

where A is the nominal system matrix and B is the input matrix. The ith agent in the network measures its local output, denoted \(y_i\), and the information received from its neighbors, denoted \(y_k\), at the communication events, where

$$\begin{aligned} y_i = H z_i \end{aligned}$$
(16.27)

with H being the output matrix. Following the network models proposed in Sect. 16.2.2, in particular, those in Example 16.4, we employ the model in (16.7) to trigger the events at which the ith agent receives the \(y_k\)’s.
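The timer mechanism underlying this network model admits a simple simulation: between events the timer decreases, and at each expiration it is nondeterministically reset to a value in \([T_1^i, T_2^i]\). A minimal Python sketch (the interval endpoints and horizon below are illustrative, not tied to a specific design):

```python
import random

def simulate_event_times(T1, T2, horizon, seed=0):
    """Generate communication event times for one link: at each event the
    timer is reset to a value in [T1, T2] (drawn uniformly here as one
    possible selection) and decreases during flows; an event occurs
    whenever the timer reaches zero."""
    rng = random.Random(seed)
    events = []
    t = 0.0
    while t < horizon:
        t += rng.uniform(T1, T2)  # next expiration of the timer
        if t < horizon:
            events.append(t)
    return events

events = simulate_event_times(T1=0.1, T2=0.5, horizon=5.0)
# consecutive events are separated by at least T1 and at most T2
gaps = [b - a for a, b in zip([0.0] + events, events)]
```

By construction, any two consecutive communication events are separated by at least \(T_1^i\) and at most \(T_2^i\) units of time, which is the property exploited by the design conditions.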

To globally synchronize the states of the N agents, we propose the following decentralized hybrid algorithm for each \(i \in {\mathscr {V}}\): the algorithm has a memory state, denoted \(\ell _i\), that, when information arrives, is updated to the relative error between the output of the ith agent and those received from its neighbors, namely,

$$\begin{aligned} \ell _i^+ = K \sum _{k \in {\mathscr {N}}(i)} (y_i - y_k) = K H \sum _{k \in {\mathscr {N}}(i)} (z_i - z_k), \end{aligned}$$
(16.28)

where K is a constant matrix to be designed, and in between communication events is continuously updated according to

$$\begin{aligned} \dot{\ell }_i = M \ell _i, \end{aligned}$$
(16.29)

where M is a constant matrix to be designed. Following the construction of the hybrid algorithms in Sect. 16.5.1, this algorithm can be modeled as \({\mathscr {H}}_i^K\) in (16.8) with state \( \eta _i = \ell _i \), input \( v_i = (y_i,\{y_k\}_{k \in {\mathscr {N}}(i)},\mu _i) \), and data given by

$$\begin{aligned} F^K_i(\eta _i,v_i)&:= M \ell _i \end{aligned}$$
(16.30)
$$\begin{aligned} C^K_i&:= \left\{ (\eta _i,v_i)\ :\ \mu _i \in [0,T_2^i]\right\} \end{aligned}$$
(16.31)
$$\begin{aligned} G^K_i(\eta _i,v_i)&:= K H \sum _{k \in {\mathscr {N}}(i)} (z_i - z_k) \end{aligned}$$
(16.32)
$$\begin{aligned} D^K_i&:= \left\{ (\eta _i,v_i)\ :\ \mu _i = 0\right\} \end{aligned}$$
(16.33)
$$\begin{aligned} H^K_i(\eta _i,v_i)&:= \ell _i \end{aligned}$$
(16.34)

and \(E^K_i\) the entire state and input space of \({\mathscr {H}}_i^K\). Also, note that the input to the algorithm includes the output \(\mu _i\) of the network model in (16.7), leading to jumps of the network and the hybrid algorithm for the ith agent occurring simultaneously.
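The resulting closed loop, with \(u_i = \ell _i\) per (16.34), can be illustrated in simulation. A minimal Python sketch for a hypothetical scalar instance (single integrators \(A = 0\), \(B = H = 1\), held memory state \(M = 0\), gain \(K = -0.5\), complete graph on three agents; all numbers illustrative, not from [50]):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical scalar instance: A = 0, B = 1, H = 1 (single integrators),
# memory state held between events (M = 0), illustrative gain K = -0.5.
N, K, T1, T2, dt, horizon = 3, -0.5, 0.1, 0.2, 0.001, 20.0
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}   # complete graph

z = rng.uniform(-1.0, 1.0, N)        # agent states z_i
ell = np.zeros(N)                     # memory states ell_i
mu = rng.uniform(T1, T2, N)           # one timer per agent (network model)
spread0 = z.max() - z.min()           # initial disagreement

t = 0.0
while t < horizon:
    # flows: z_i' = A z_i + B ell_i and ell_i' = M ell_i (Euler step)
    z += dt * ell
    mu -= dt
    # jumps: when timer i expires, update ell_i from relative measurements
    # (16.28) and reset the timer nondeterministically within [T1, T2]
    for i in range(N):
        if mu[i] <= 0.0:
            ell[i] = K * sum(z[i] - z[k] for k in neighbors[i])
            mu[i] = rng.uniform(T1, T2)
    t += dt

spread = z.max() - z.min()            # final disagreement
```

Under these illustrative gains, the disagreement \(\max_i z_i - \min_i z_i\) shrinks across the asynchronous communication events, consistent with the synchronization goal formalized next.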

The goal of the synchronization algorithm introduced above is to guarantee that, for each \(i, k \in {\mathscr {V}}\), the error between \(z_i\) and \(z_k\) converges to zero, with stability. These requirements correspond to the notions of stable and attractive synchronization introduced in Sect. 16.3.2. Noting that when the states of all of the agents coincide the update law in (16.28) resets the \(\ell _i\)’s to zero, and that the timers \(\mu _i\) in the model of the network remain within the set \([0,T_2^i]\), the goal of the algorithm is to render the set

$$\begin{aligned} {\mathscr {A}} := \left\{ x\ :\ z_i = z_k\ \ \forall i, k \in {\mathscr {V}}, \mu _i \in [0,T_2^i], \ell _i = 0\ \ \forall i \in {\mathscr {V}}\right\} \end{aligned}$$
(16.35)

globally asymptotically stable for the resulting closed-loop system \({\mathscr {H}}\) with state x, which is given by the stack of the state variables of each agent (\(z_i\)), each algorithm (\(\eta _i\)’s), and each network (\(\mu _i\)’s).

Results for the design of the parameters M and K of the proposed hybrid algorithm can be found in [50] to guarantee that the set \({\mathscr {A}}\) in (16.35) is globally exponentially stable for \({\mathscr {H}}\), and hence, global exponential synchronization is achieved. Given the network parameters \(0 < T_1^i \leqslant T_2^i\) for each \(i \in {\mathscr {V}}\) and an undirected graph \(\varGamma \), the set \({\mathscr {A}}\) in (16.35) is globally exponentially stable for the hybrid closed-loop system \({\mathscr {H}}\) resulting from controlling the agents in (16.26) with hybrid algorithms as in (16.30)–(16.34) over a network modeled as in (16.7) if there exist scalars \(\sigma > 0\), \(\varepsilon \in (0,1)\), matrices K and M, and positive definite symmetric matrices \(P_i\), \(Q_i\) for each \(i \in {\mathscr {V}}'\), satisfying

$$\begin{aligned} {\mathscr {M}}(\nu ) := \begin{bmatrix}\text {He}(P,{\bar{A}})&- P {\bar{B}} + \exp ({\sigma \nu })(\bar{K} \bar{A} - \bar{M} \bar{K})^\top Q\\ \star&\text {He}(\exp ({\sigma \nu })Q,\bar{M} - \bar{K} \bar{B} - \frac{\sigma }{2} I)\end{bmatrix} < 0 \qquad \forall \nu \in [0, \overline{T}], \end{aligned}$$
(16.36)

where \(\bar{A} = I \otimes A + \varLambda \otimes BKH\), \(\bar{B} = I \otimes B\), \(\bar{M} = I \otimes M\), \(\bar{K} = \varLambda \otimes KH\), \(\varLambda = \mathop {\mathrm{diag}}\nolimits (\lambda _2,\lambda _3,\dots ,\lambda _N)\), where \(\lambda _i\) are the nonzero eigenvalues of \(\mathscr {L}\), and

$$\begin{aligned} (1 - \varepsilon ) \underline{T} - \frac{\alpha _2 \sigma \overline{T}}{\beta } > 0, \end{aligned}$$
(16.37)

where \(\underline{T} := \min _{i\in {\mathscr {V}}} T_1^i\), \(\overline{T} := \max _{i\in {\mathscr {V}}} T_2^i\),

$$\begin{aligned} \beta&= -\max \limits _{\nu \in [0, \overline{T}]} \bar{\lambda }({\mathscr {M}}(\nu )) \\ \alpha _2&= \max \{\overline{\lambda }(P), \overline{\lambda }(Q)\exp (\sigma \overline{T})\}. \end{aligned}$$

Moreover, every maximal solution \(\phi \) to the closed-loop system satisfies

$$\begin{aligned} |\phi (t,j)|_{\mathscr {A}} \leqslant \kappa \exp \left( - r (t+j) \right) |\phi (0,0)|_{\mathscr {A}} \qquad \forall (t,j) \in \mathop {\mathrm{dom}}\nolimits \phi , \end{aligned}$$
(16.38)

where \(\kappa = \sqrt{\frac{\alpha _2}{\alpha _1}} \exp \left( \frac{\beta (1-\varepsilon ) \underline{T}}{2 \alpha _2} \right) \) and \(r = \frac{\beta }{2 \alpha _2 N} \min \left\{ \varepsilon N, (1 - \varepsilon ) \underline{T} - \frac{\alpha _2 \sigma \overline{T}}{\beta } \right\} \), and \(\alpha _1 = \min \{\underline{\lambda }(P), \underline{\lambda }(Q)\}\).
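Once \(\alpha _1\), \(\alpha _2\), \(\beta \), \(\sigma \), \(\varepsilon \), \(\underline{T}\), \(\overline{T}\), and N are fixed, condition (16.37) and the constants \(\kappa \) and r in (16.38) can be evaluated directly. A Python sketch with purely illustrative numbers (not taken from a verified design):

```python
import math

# Illustrative constants only (not from a specific design in [50])
alpha1, alpha2, beta = 0.5, 1.2, 2.0
sigma, eps = 0.1, 0.5
T_lo, T_hi, N = 0.1, 0.2, 3   # underline{T}, overline{T}, number of agents

# left-hand side of the balancing condition (16.37)
margin = (1 - eps) * T_lo - alpha2 * sigma * T_hi / beta

# overshoot and rate constants in the bound (16.38)
kappa = math.sqrt(alpha2 / alpha1) * math.exp(beta * (1 - eps) * T_lo / (2 * alpha2))
r = beta / (2 * alpha2 * N) * min(eps * N, margin)
```

For these numbers the margin in (16.37) is positive, so the bound (16.38) holds with the computed \(\kappa \) and r; a nonpositive margin would signal that the chosen \(\sigma \), \(\varepsilon \), and network parameters must be revisited.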

To arrive at these design conditions, we employed the property that

$$\begin{aligned} \theta _i&= K H \sum _{k \in {\mathscr {N}}(i)} (z_i - z_k) - \ell _i \end{aligned}$$
(16.39)

is reset to zero at jumps triggered solely by the expiration of the timer \(\tau _i\). It follows that the quantity

$$\begin{aligned} V(x) = \begin{bmatrix}z \\ \theta \end{bmatrix}^\top {\bar{\varPsi }} R(\tau ) {\bar{\varPsi }}^\top \begin{bmatrix}z \\ \theta \end{bmatrix}, \end{aligned}$$
(16.40)

with \({\bar{\varPsi }} = \mathop {\mathrm{diag}}\nolimits ({\widetilde{\varPsi }} \otimes I_n, {\widetilde{\varPsi }} \otimes I_p)\), where \({\widetilde{\varPsi }} = (\psi _2,\psi _3,\dots ,\psi _N) \in \mathbb {R}^{N\times (N-1)}\), \(\psi _{i} = (\psi _{i1},\psi _{i2},\dots ,\psi _{iN})\) being the orthonormal eigenvector corresponding to the nonzero eigenvalue \(\lambda _i\) of \(\mathscr {L}\), \(i \in {\mathscr {V}}'\) (furthermore, \(\sum _{k = 1}^N \psi _{ik} = 0\)), \(R(\tau ) = \mathop {\mathrm{diag}}\nolimits (P,Q \exp (\sigma {\bar{ \tau }}))\), \(\bar{\tau }= \frac{1}{N} \sum _{i = 1}^N \tau _i\), \(P = \mathop {\mathrm{diag}}\nolimits (P_2,P_3,\dots ,P_N)\), and \(Q = \mathop {\mathrm{diag}}\nolimits (Q_2,Q_3,\dots ,Q_N)\), decreases during flows due to (16.36), while at jumps its potential growth can be dominated by imposing (16.37); cf. the construction of the Lyapunov function in Sect. 16.5.1, where such a Lyapunov function decreases during flows and has a nonpositive change at jumps. To guarantee exponential stability of the synchronization set, the result in [23, Proposition 3.29], which uses a balancing condition between jumps and flows to guarantee that solutions converge to the desired set, was exploited.
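Condition (16.36) is an infinite family of matrix inequalities parameterized by \(\nu \); in practice it can be checked numerically on a fine grid of \([0, \overline{T}]\). A Python sketch for a hypothetical scalar instance (two single integrators over a complete graph, \(\lambda _2 = 2\); the values \(K = -0.5\), \(M = -2\), and \(\sigma = 0.1\) are illustrative choices, not from [50]):

```python
import numpy as np

def He(X, Y):
    """He(X, Y) = X Y + (X Y)^T, as used in the matrix inequality."""
    return X @ Y + (X @ Y).T

def M_nu(nu, P, Q, Abar, Bbar, Mbar, Kbar, sigma):
    """Symmetric completion of the matrix in (16.36) at timer value nu
    (the star denotes the transpose of the upper-right block)."""
    n = P.shape[0]
    e = np.exp(sigma * nu)
    top_right = -P @ Bbar + e * (Kbar @ Abar - Mbar @ Kbar).T @ Q
    bottom = He(e * Q, Mbar - Kbar @ Bbar - 0.5 * sigma * np.eye(n))
    return np.block([[He(P, Abar), top_right],
                     [top_right.T, bottom]])

# Hypothetical scalar data (n = 1): A = 0, B = H = 1, K = -0.5, M = -2,
# lambda_2 = 2, so Abar = A + lambda_2*B*K*H and Kbar = lambda_2*K*H.
P = Q = np.eye(1)
Abar = np.array([[-1.0]])
Bbar = np.eye(1)
Mbar = np.array([[-2.0]])
Kbar = np.array([[-1.0]])
sigma, Tbar = 0.1, 0.2

# beta as in the theorem: minus the largest eigenvalue of M(nu) over a grid
beta = -max(np.linalg.eigvalsh(
            M_nu(nu, P, Q, Abar, Bbar, Mbar, Kbar, sigma)).max()
            for nu in np.linspace(0.0, Tbar, 50))
```

A positive `beta` certifies (16.36) on the grid; a rigorous certificate additionally requires an argument covering all of \([0, \overline{T}]\) (e.g., bounding the variation of \({\mathscr {M}}\) between grid points).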

6 Final Remarks and Acknowledgments

Hybrid systems models, along with their associated notions and tools, lead to powerful methods for the design of algorithms conferring desired dynamical properties on complex networks. The methods summarized in this book chapter are suitable for settings in which the combination of continuous and discrete behavior is unavoidable, digital networks govern the exchange of information between the agents, information is limited and uncertain, and the algorithms are distributed. The proposed networked hybrid systems framework allows for hybrid models at the agent, network, and algorithm level. The applications of the notions and tools to estimation, consensus, and synchronization over networks are just examples of the power of the hybrid systems framework, the hope being that they will inspire the formulation of new notions and tools suitable for networked hybrid systems as well as the solution of challenging applications.

I would like to acknowledge and thank my collaborators who have contributed to the ideas presented in this book chapter. Part of the work presented here was done in collaboration with my Ph.D. students Yuchun Li and Sean Phillips, who, respectively, have led our research on distributed estimation and distributed synchronization using hybrid systems methods. The distributed estimation strategy and the nondeterministic network model using timers were developed with Francesco Ferrante, Frederic Gouaisbaut, and Sophie Tarbouriech. The formulation of the safety notion was inspired by work with my Ph.D. student Jun Chai. The formulation of the security notion follows our recent work with Sean Phillips, Alessandra Duz, and Fabio Pasqualetti. Part of the work presented here has recently appeared in conference venues and journal publications, and associated papers are available at https://hybrid.soe.ucsc.edu. I would also like to acknowledge the support for part of this work from the National Science Foundation under CAREER Grant No. ECS-1450484 and Grant No. CNS-1544396, the Air Force Office of Scientific Research under Grant No. FA9550-16-1-0015, as well as CITRIS and the Banatao Institute at the University of California.