Recently, consensus problems have been studied extensively in the literature [1,2,3,4,5,6]. Due to the energy and bandwidth constraints of the communication channels, the transmitted information in a multi-agent network needs to be quantized. The study of control problems using quantized information has a long history [7]. Over the past few years, considerable effort has been devoted to studying the effect of information quantization on feedback control systems [8,9,10,11].

How to realize distributed consensus with quantization has drawn considerable attention [12,13,14,15,16,17,18]. In [16], a coding–decoding scheme was developed to solve the average consensus problem with quantized information. In [14, 19], it was shown that, under the condition that each uniform quantizer has infinitely many quantization levels, the multi-agent network can achieve practical consensus.

In this chapter, we will discuss the multi-agent network consensus problem with communication quantization and time delays simultaneously. It is shown that, under certain topology conditions, consensus can be achieved in spite of communication quantization and delays. Different from Chap. 3, the consensus protocol proposed in this chapter quantizes only the transmitted information, while each agent uses its own exact state. Moreover, the protocol does not assume that the communication delays between different pairs of neighboring agents are identical.

4.1 Discrete-Time Case

In this section, the consensus problem of discrete-time multi-agent networks with quantized data and delays is studied. The remainder of this section is organized as follows. In Sect. 4.1.1, the discrete-time multi-agent network model with communication quantization and time delays is presented. In Sect. 4.1.2, the consensus analysis of the proposed protocol is presented in detail. Finally, in Sect. 4.1.3, a numerical simulation is given to demonstrate the validity of the theoretical results.

4.1.1 Model Description

Consider a network of discrete-time integrator agents with dynamics:

$$\displaystyle \begin{aligned} \begin{array}{rcl} \displaystyle x_{i}(k+1)=x_{i}(k)+u_{i}(k),~~~i\in\mathcal{N},~ \end{array} \end{aligned} $$
(4.1)

where \(x_{i}(k)\in \mathbb {R}\) is the state of agent i and \(u_{i}(k)\) is called the protocol.

The goal is to design the protocol \(u_{i}(k)\) such that the states of all agents reach consensus, i.e.,

$$\displaystyle \begin{aligned} \begin{array}{rcl} \displaystyle\lim_{k \to +\infty}x_{i}(k)=c,~~~\forall i\in \mathcal {N}, \end{array} \end{aligned} $$
(4.2)

where c is a constant.

Due to the communication bandwidth constraints in many real multi-agent networks, each agent can only use quantized information from its neighboring agents. The following consensus protocol

$$\displaystyle \begin{aligned} \begin{array}{rcl} u_{i}(k)=\sum_{j\in \mathcal {N}_{i}}a_{ij}[q_{\mu}(x_{j}(k-\tau_{ij}))-x_{i}(k)],~~~i\in\mathcal {N}, \end{array} \end{aligned} $$

will be studied in this section, which yields the closed-loop dynamics

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} \displaystyle x_{i}(k+1)=x_{i}(k)+\sum_{j\in \mathcal {N}_{i}}a_{ij}[q_{\mu}(x_{j}(k-\tau_{ij}))-x_{i}(k)],~~~i\in\mathcal{N},~ \end{array} \end{aligned} $$
(4.3)

where \(\tau_{ij}\) is a nonnegative integer representing the communication delay from agent j to agent i, and \(q_{\mu}(\cdot)\) denotes the one-parameter family of uniform quantizers defined by (1.8), i.e.,

$$\displaystyle \begin{aligned} q_{\mu}(x)=\left\{ \begin{array}{ll} \lfloor\frac{x}{\mu}\rfloor\mu, & x\geq0,\\ -\lfloor\frac{-x}{\mu}\rfloor\mu, & x<0. \end{array} \right. \end{aligned} $$
(4.4)
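The quantizer (4.4) is straightforward to implement. The following minimal Python sketch reproduces it and lists a few sample values; the parameter value μ = 1 below is chosen purely for illustration.

```python
import math

def q_mu(x: float, mu: float = 1.0) -> float:
    """Uniform quantizer of (4.4): truncation toward zero on the grid mu*Z."""
    if x >= 0:
        return math.floor(x / mu) * mu
    return -math.floor(-x / mu) * mu

# Sample values with mu = 1:
#   q_mu(2.7) ->  2.0,  q_mu(0.4)  ->  0.0,
#   q_mu(-0.4) -> 0.0,  q_mu(-2.7) -> -2.0
```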

In this section, we assume that time delays only exist when information is transmitted from one agent to another, i.e., \(\tau _{ii}=0,~~i\in \mathcal {N}\). Moreover, the following assumption is made throughout this section.

Assumption 4.1

A is a stochastic matrix such that \(a_{ii}>0,~~i\in \mathcal {N}\), and \(\mathcal {G}\) is strongly connected.
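Assumption 4.1 is easy to verify numerically. The sketch below (with a hypothetical four-agent ring matrix used purely as an example) checks row-stochasticity, positive diagonal entries, and strong connectivity of the induced graph via a standard reachability test.

```python
import numpy as np

def satisfies_assumption_4_1(A: np.ndarray, tol: float = 1e-12) -> bool:
    """Row-stochastic, nonnegative, a_ii > 0, and strongly connected graph."""
    N = A.shape[0]
    if not np.allclose(A.sum(axis=1), 1.0):
        return False                          # rows must sum to one
    if np.any(A < -tol) or np.any(np.diag(A) <= tol):
        return False                          # nonnegative entries, a_ii > 0
    R = (A > tol).astype(int)                 # adjacency of the induced graph
    reach = np.linalg.matrix_power(np.eye(N, dtype=int) + R, N - 1)
    return bool((reach > 0).all())            # all pairs mutually reachable

# Hypothetical example: a ring of four agents, self-weight 1/2, neighbor weights 1/4.
A = 0.5 * np.eye(4)
for i in range(4):
    A[i, (i - 1) % 4] = A[i, (i + 1) % 4] = 0.25
print(satisfies_assumption_4_1(A))            # True
```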

4.1.2 Main Results

We first introduce the main notation that will be used in this section. For arbitrary fixed \(k_{0}\in \mathbb {R}\), denote

  • \(\tau =\mbox{max}\{\tau _{ij},\,\,\,i,\,j\in \mathcal {N}\};~ \,\varUpsilon _{-\tau }=\{-\tau ,-\tau +1,\,\cdots ,\,0\};\)

  • \( \mathbb {Z}_{\mu }=\{l\mu ,\,l\in \mathbb {Z}\};~\,X=\{\psi :\varUpsilon _{-\tau }\longmapsto \mathbb {R}\}\);

  • \(\overline {V}(k)=\displaystyle \mbox{max}_{\theta \in \varUpsilon _{-\tau }}\mbox{max}_{i\in \mathcal {N}}\{q_{\mu }(x_{i}(k+\theta ))\};~ \overline {v}(k)=\min _{\theta \in \varUpsilon _{-\tau }}\min _{i\in \mathcal {N}}\{q_{\mu }(x_{i}(k+\theta ))\};\)

  • for any \(b\in \mathbb {Z}_{\mu }\), \(\varGamma _{b}(k)=\{i\in \mathcal {N}: \exists \theta \in \varUpsilon _{-\tau },\, q_{\mu }(x_{i}(k+\theta ))=b\}\).

For a set B with finite elements, |B| denotes the cardinality of B, i.e., the number of elements in the set B.
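The quantities \(\overline{V}(k)\), \(\overline{v}(k)\), and \(\varGamma_{b}(k)\) can be computed along a simulated trajectory as follows; this is only a sketch, where hist is a hypothetical array whose row t stores the state vector x(t), and k − τ is assumed to be a valid row index.

```python
import numpy as np

def q_mu_vec(x, mu=1.0):
    """Vectorized uniform quantizer (4.4)."""
    x = np.asarray(x, dtype=float)
    return np.where(x >= 0, np.floor(x / mu) * mu, -np.floor(-x / mu) * mu)

def Vbar_vbar(hist, k, tau, mu=1.0):
    """(Vbar(k), vbar(k)): extrema of the quantized states over the window."""
    q = q_mu_vec(hist[k - tau:k + 1], mu)     # shape (tau + 1, N)
    return float(q.max()), float(q.min())

def Gamma(hist, k, tau, b, mu=1.0):
    """Agents i with q_mu(x_i(k + theta)) = b for some theta in Upsilon_{-tau}."""
    q = q_mu_vec(hist[k - tau:k + 1], mu)
    return set(np.where(np.isclose(q, b).any(axis=0))[0].tolist())
```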

In the following, we will study the consensus of model (4.3). The initial conditions associated with (4.3) are given as \(x_{i}(s)\in X,\,i\in \mathcal {N}\). Before stating the main theorem of this section, we first give two important lemmas, which will be used in the proof of Theorem 4.4.

Lemma 4.2

Suppose that \(x(k)\) is the solution to (4.3). Under Assumption 4.1, for any finite communication delays \(\tau_{ij}\), \(\overline {V}(k)\) is a non-increasing function for k, and \(\overline {v}(k)\) is a non-decreasing function for k.

Proof

For any \(i\in \mathcal {N}\), we have

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} x_{i}(k+1)& =&\displaystyle x_{i}(k)+\sum_{j\in \mathcal {N}_{i}}a_{ij}(q_{\mu}(x_{j}(k-\tau_{ij}))-x_{i}(k))\\ & \leq&\displaystyle x_{i}(k)+\sum_{j\in \mathcal {N}_{i}}a_{ij}(\overline{V}(k)-x_{i}(k))\\ & =&\displaystyle \overline{V}(k)+a_{ii}(x_{i}(k)-\overline{V}(k))\\ & <&\displaystyle \overline{V}(k)+\mu. \end{array} \end{aligned} $$
(4.5)

Note that \(q_{\mu }(x_{i}(k+1))\in \mathbb {Z}_{\mu }\) and \(\overline {V}(k)\in \mathbb {Z}_{\mu }\); then, we can obtain that

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} q_{\mu}(x_{i}(k+1)) & \leq&\displaystyle \overline{V}(k), \end{array} \end{aligned} $$
(4.6)

which implies that \(\overline {V}(k+1)\leq \overline {V}(k)\). Hence, \(\overline {V}(k)\) is a non-increasing function for k. Similarly, it can be proved that \(\overline {v}(k)\) is a non-decreasing function for k.

Lemma 4.3

For arbitrary fixed \(k_{0}\in \mathbb {R}\), suppose \(M=\overline {V}(k_{0})\) and \(m=\overline {v}(k_{0})\). If \(M\neq m\), we have the following conclusion:

  (i)

    If M > 0, then \(|\varGamma _{M}(k)|\) is a non-increasing function for k, and \(\varGamma _{M}(k)=\varnothing \) in finite time.

  (ii)

    If m < 0, then \(|\varGamma _{m}(k)|\) is a non-increasing function for k, and \(\varGamma _{m}(k)=\varnothing \) in finite time.

Proof

We only prove conclusion (i). Conclusion (ii) can be proved similarly, and hence the proof is omitted here. For \(M=\overline {V}(k_{0})>0\) and arbitrary \(k_{1}\geq k_{0}\), it follows from Lemma 4.2 that

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} q_{\mu}(x_{j}(k_{1}-\tau_{ij}))& \leq&\displaystyle M,\,\forall j\in \mathcal {N}_{i}. \end{array} \end{aligned} $$
(4.7)

If \(x_{i}(k_{1})<M\), we can deduce that

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} x_{i}(k_{1}+1)& =&\displaystyle x_{i}(k_{1})+\sum_{j\in \mathcal {N}_{i}}a_{ij}(q_{\mu}(x_{j}(k_{1}-\tau_{ij}))-x_{i}(k_{1}))\\ & \leq&\displaystyle x_{i}(k_{1})+\sum_{j\in \mathcal {N}_{i}}a_{ij}(M-x_{i}(k_{1}))\\ & =&\displaystyle M+a_{ii}(x_{i}(k_{1})-M)\\ & <&\displaystyle M, \end{array} \end{aligned} $$
(4.8)

which implies that

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} q_{\mu}(x_{i}(k_{1}+1))& <&\displaystyle M. \end{array} \end{aligned} $$
(4.9)

The inequalities (4.8) and (4.9) imply that \(i\not\in\varGamma_{M}(k_{1}+1)\) if \(i\not\in\varGamma_{M}(k_{1})\). Hence, \(|\varGamma_{M}(k)|\) is a non-increasing function for \(k\geq k_{0}\).

Next, we shall prove \(\varGamma _{M}(k)=\varnothing \) in finite time, i.e., there exists a \(\tilde {k}_{0}>k_{0}\) such that \(\varGamma _{M}(\tilde {k}_{0})=\varnothing \).

Since \(M\neq m\), there exist \(j_{1}\in \mathcal {N}\) and \(\theta_{1}\in\varUpsilon_{-\tau}\) such that

$$\displaystyle \begin{aligned} q_{\mu}(x_{j_{1}}(k_{0}+\theta_{1}))=m<M. \end{aligned} $$
(4.10)

Equations (4.7)–(4.10) imply that

$$\displaystyle \begin{aligned} q_{\mu}(x_{j_{1}}(k))<M,\quad \forall k\geq k_{0}+\theta_{1}. \end{aligned} $$
(4.11)

Hence,

$$\displaystyle \begin{aligned} j_{1}\not\in\varGamma_{M}(k),\,\forall k\geq k_{0}+\tau. \end{aligned} $$
(4.12)

Let \(\varLambda _{j_{1}}=\{l\in \mathcal {N}:j_{1}\in \mathcal {N}_{l}\}.\) For any \(j_{2}\in \varLambda _{j_{1}}\), we consider the following two cases:

Case 1: \(j_{2}\not\in\varGamma_{M}(k_{0})\).

Inequalities (4.8) and (4.9) imply that \(j_{2}\not\in\varGamma_{M}(k),\,\forall k\geq k_{0}\).

Case 2: \(j_{2}\in\varGamma_{M}(k_{0})\).

Claim I: There exists a \(k_{2}>k_{0}\) such that \(j_{2}\not\in\varGamma_{M}(k_{2})\). Next, we shall prove Claim I by contradiction. If \(j_{2}\in\varGamma_{M}(k)\) for any \(k>k_{0}\), then, in view of (4.8) and (4.9), we can obtain that

$$\displaystyle \begin{aligned} q_{\mu}(x_{j_{2}}(k))= M,~~\forall k\geq k_{0}.\end{aligned} $$
(4.13)

Then, we have

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} x_{j_{2}}(k+1) & =&\displaystyle x_{j_{2}}(k)+\sum_{j\in \mathcal {N}_{j_{2}}}a_{j_{2}j}(q_{\mu}(x_{j}(k-\tau_{j_{2}j}))-x_{j_{2}}(k))\\ & \leq&\displaystyle x_{j_{2}}(k)+a_{j_{2}j_{1}}(q_{\mu}(x_{j_{1}}(k-\tau_{j_{2}j_{1}}))-x_{j_{2}}(k))\\ & \leq&\displaystyle x_{j_{2}}(k)-a_{j_{2}j_{1}}\mu. \end{array} \end{aligned} $$
(4.14)

Iterating (4.14) shows that \(x_{j_{2}}(k)\) decreases by at least \(a_{j_{2}j_{1}}\mu\) at every step. Hence, \(q_{\mu }(x_{j_{2}}(k))\leq x_{j_{2}}(k)<M\) in finite time, which contradicts (4.13). Thus, Claim I holds, which means that there exists \(k_{2}>k_{0}\) such that, for any \(j_{2}\in \varLambda _{j_{1}}\), \(j_{2}\not\in\varGamma_{M}(k),\,\forall k\geq k_{2}\).

For any \(j_{2}\in \varLambda _{j_{1}}\), the same procedure applies to the agent set \(\varLambda _{j_{2}}=\{\tilde {l}\in \mathcal {N}: j_{2}\in \mathcal {N}_{\tilde {l}}\}.\) It can be obtained that there exists \(k_{3}>k_{2}\) such that, for any \(j_{3}\in \varLambda _{j_{2}}\), \(j_{3}\not\in\varGamma_{M}(k),\,\forall k\geq k_{3}\).

Repeating the above procedure and noting that the network is strongly connected, we conclude that there exists a \(\tilde {k}_{0}>k_{0}\) such that \(\varGamma _{M}(k)=\varnothing ,\,\forall k\geq \tilde {k}_{0}\). This completes the proof of Lemma 4.3.

Theorem 4.4

Under Assumption 4.1, for any finite communication delays \(\tau_{ij}\), the multi-agent network (4.3) will asymptotically achieve consensus for arbitrary initial conditions. That is,

$$\displaystyle \begin{aligned} \begin{array}{rcl} \displaystyle\lim_{k \to +\infty}x_{i}(k)=c,~~~\forall i\in \mathcal {N}, \end{array} \end{aligned} $$
(4.15)

where c is a constant.

Proof

The proof of Theorem 4.4 is divided into two steps.

Step 1

We shall prove that for any fixed \(k_{0}\in \mathbb {R}\), there exists \(\bar {k}_{0}\geq k_{0}\) such that

$$\displaystyle \begin{aligned} \overline{V}(\bar{k}_{0})=\overline{v}(\bar{k}_{0}). \end{aligned} $$
(4.16)

The following three cases are considered:

Case 1: \(\overline {V}(k_{0})\geq 0\) and \(\overline {v}(k_{0})\geq 0\).

  • If \(\overline {V}(k_{0})=\overline {v}(k_{0}),\) select \(\bar {k}_{0}=k_{0}\).

  • If \( \overline {V}(k_{0})\neq \overline {v}(k_{0})\), it follows from Lemma 4.3 that there exists \(k_{1}>k_{0}\) such that \( \varGamma _{\overline {V}(k_{0})}(k_{1})=\varnothing , \) which implies that \(\overline {V}(k_{1})<\overline {V}(k_{0}).\)

  • If \(\overline {V}(k_{1})=\overline {v}(k_{1}),\) select \( \bar {k}_{0}=k_{1}.\)

  • If \(\overline {V}(k_{1})\neq \overline {v}(k_{1})\), there exists \(k_{2}\geq k_{1}\) such that \( \overline {v}(k_{2})<\overline {V}(k_{2}).\)

Repeating the above procedure, we can finally find a \(\bar {k}_{2}\in \mathbb {R}\) such that \( \overline {V}(\bar {k}_{2})=\overline {v}(\bar {k}_{2})\) (the procedure terminates after finitely many steps, since \(\overline {V}(k)\) takes values in \(\mathbb {Z}_{\mu }\) and is bounded below by \(\overline {v}(k_{0})\geq 0\)). Select \(\bar {k}_{0}=\bar {k}_{2}.\)

Case 2: \(\overline {V}(k_{0})\leq 0\) and \(\overline {v}(k_{0})\leq 0\). Similar to the procedure of Case 1 (with \(\varGamma _{\overline {V}(k_{0})}(k)\) replaced by \(\varGamma _{\overline {v}(k_{0})}(k)\)), we can find a \(\bar {k}_{0}\geq k_{0}\) such that

$$\displaystyle \begin{aligned} \overline{V}(\bar{k}_{0})=\overline{v}(\bar{k}_{0}). \end{aligned} $$
(4.17)
Case 3: \(\overline {V}(k_{0})>0\) and \(\overline {v}(k_{0})<0\).

According to Lemma 4.3, there exists \(k_{1}>k_{0}\) such that

$$\displaystyle \begin{aligned} \varGamma_{\overline{V}(k_{0})}(k_{1})=\varnothing, \text{and } \varGamma_{\overline{v}(k_{0})}(k_{1})=\varnothing, \end{aligned}$$

which implies that \(\overline {V}(k_{1})<\overline {V}(k_{0})\) and \(\overline {v}(k_{1})>\overline {v}(k_{0})\).

  • If \(\overline {V}(k_{1})=\overline {v}(k_{1}),\) select \( \bar {k}_{0}=k_{1}.\)

  • If \(\overline {V}(k_{1})\neq \overline {v}(k_{1})\), one of the following three subcases holds:

(1) \(\overline {V}(k_{1})>0\) and \(\overline {v}(k_{1})\geq 0;\) (2) \(\overline {V}(k_{1})\leq 0\) and \(\overline {v}(k_{1})<0;\) and (3) \(\overline {V}(k_{1})>0\) and \(\overline {v}(k_{1})<0.\)

Subcases (1) and (2) have been reduced to Cases 1 and 2, respectively. Subcase (3) returns to Case 3 with \(\overline {V}(k_{1})-\overline {v}(k_{1})<\overline {V}(k_{0})-\overline {v}(k_{0})\); since \(\overline {V}(k)\) and \(\overline {v}(k)\) take values in \(\mathbb {Z}_{\mu }\), repeating the argument finitely many times leads to one of the previous cases.

This completes Step 1 of the proof, i.e., there exists \(\bar {k}_{0}\geq k_{0}\) such that

$$\displaystyle \begin{aligned} \overline{V}(\bar{k}_{0})=\overline{v}(\bar{k}_{0}). \end{aligned} $$
(4.18)

Step 2

We shall prove that the multi-agent network (4.3) achieves consensus asymptotically.

From (4.18) and Lemma 4.2, it can be obtained that

$$\displaystyle \begin{aligned} \overline{V}(k)=\overline{v}(k),\,k\geq \bar{k}_{0},\end{aligned} $$
(4.19)

which implies

$$\displaystyle \begin{aligned} q_{\mu}(x_{i}(k+\theta_{1}))=q_{\mu}(x_{j}(k+\theta_{2})),\,\forall i,\,j\in\mathcal{N},\,\forall \theta_{1},\,\theta_{2}\in\varUpsilon_{-\tau},\,k\geq \bar{k}_{0}.\end{aligned} $$
(4.20)

Let \(c=q_{\mu }(x_{i}(\bar {k}_{0}))\). It follows from (4.20) that for any \(i\in \mathcal {N}\) and \(k\geq \bar {k}_{0}\), system (4.3) can be written as follows:

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} \displaystyle x_{i}(k+1)& =&\displaystyle x_{i}(k)+\sum_{j\in \mathcal {N}_{i}}a_{ij}(q_{\mu}(x_{j}(k-\tau_{ij}))-x_{i}(k))\\ & =&\displaystyle x_{i}(k)+\sum_{j\in \mathcal {N}_{i}}a_{ij}(c-x_{i}(k))\\ & =&\displaystyle a_{ii}x_{i}(k)+(1-a_{ii})c. \end{array} \end{aligned} $$
(4.21)

From (4.21), we have

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} \displaystyle x_{i}(k+1)-x_{i}(k)& =&\displaystyle a_{ii}(x_{i}(k)-x_{i}(k-1)),~~k\geq \bar{k}_{0}+1, \end{array} \end{aligned} $$
(4.22)

which implies that

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} \displaystyle x_{i}(k+1)-x_{i}(k)& =&\displaystyle a_{ii}^{k-\bar{k}_{0}}(x_{i}(\bar{k}_{0}+1)-x_{i}(\bar{k}_{0})), ~~k\geq \bar{k}_{0}+1. \end{array} \end{aligned} $$
(4.23)

Hence, it follows from Assumption 4.1 that

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} \displaystyle\lim_{k \to +\infty} (x_{i}(k+1)-x_{i}(k))& =&\displaystyle \lim_{k \to +\infty}a_{ii}^{k-\bar{k}_{0}}(x_{i}(\bar{k}_{0}+1)-x_{i}(\bar{k}_{0})) =0.\quad \end{array} \end{aligned} $$
(4.24)

It then follows from (4.21) and (4.24) that

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} \displaystyle\lim_{k \to +\infty}x_{i}(k)=c,~~~\forall i\in \mathcal {N}.\end{array} \end{aligned} $$
(4.25)
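For completeness, note that the linear recursion (4.21) can also be solved in closed form, which yields (4.25) directly:

$$\displaystyle \begin{aligned} \begin{array}{rcl} x_{i}(k)-c& =&\displaystyle a_{ii}\,(x_{i}(k-1)-c)=\cdots=a_{ii}^{\,k-\bar{k}_{0}}\,(x_{i}(\bar{k}_{0})-c)\longrightarrow 0~~\mbox{as}~k \to +\infty, \end{array} \end{aligned} $$

since \(0<a_{ii}<1\) under Assumption 4.1.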

4.1.3 Numerical Example

In this section, an example is given to illustrate the correctness of the theoretical results.

Consider network (4.3) with the topology shown in Fig. 4.1. Assume that μ = 1 and \(\tau _{ij}=1,\,\forall i\in \mathcal {N},\,j\in \mathcal {N}_{i}\). The initial conditions of network (4.3) are randomly chosen from (−5, 5). Suppose that the weight of each edge is \(\frac {1}{4}\). The stochastic matrix A is

(4.26)
Fig. 4.1 Network topology in the example

The state responses of the multi-agent network (4.3) are shown in Fig. 4.2. It can be observed from Fig. 4.2 that the multi-agent network achieves consensus asymptotically, which is consistent with Theorem 4.4.

Fig. 4.2 The state responses of the multi-agent system
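Since the matrix A in (4.26) and the topology of Fig. 4.1 are not reproduced here, the following Python sketch simulates (4.3) on a hypothetical six-agent ring with edge weights 1/4 (so that \(a_{ii}=1/2\) and Assumption 4.1 holds), μ = 1, and \(\tau_{ij}=1\), which mirrors the setting of the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def q_mu(x, mu=1.0):
    x = np.asarray(x, dtype=float)
    return np.where(x >= 0, np.floor(x / mu) * mu, -np.floor(-x / mu) * mu)

# Hypothetical ring of N agents; each edge weight is 1/4, so A is row-stochastic.
N, mu, tau, steps = 6, 1.0, 1, 60
A = 0.5 * np.eye(N)
for i in range(N):
    A[i, (i - 1) % N] = A[i, (i + 1) % N] = 0.25

x = np.zeros((steps + 1, N))
x[: tau + 1] = rng.uniform(-5, 5, size=N)          # constant history on {-tau, ..., 0}

for k in range(tau, steps):
    for i in range(N):
        x[k + 1, i] = x[k, i] + sum(
            A[i, j] * (q_mu(x[k - tau, j], mu) - x[k, i])
            for j in range(N) if j != i and A[i, j] > 0
        )

print(np.round(x[-1], 3))    # entries cluster around a common constant (Theorem 4.4)
```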

4.2 Continuous-Time Case

In Sect. 4.1, the consensus problem of discrete-time multi-agent networks with quantization and time delays was studied. In this section, we shall investigate the corresponding continuous-time case. The remainder of this section is organized as follows. In Sect. 4.2.1, the consensus protocol with quantization and time delays is formulated. In Sect. 4.2.2, the existence of the Filippov solution is presented. In Sect. 4.2.3, the consensus analysis of the proposed protocol is presented in detail. In Sect. 4.2.4, a numerical example is given to show the correctness of the theoretical results.

4.2.1 Model Description and Preliminaries

In Chap. 3, the following multi-agent network model has been investigated:

$$\displaystyle \begin{aligned} \begin{array}{rcl} \displaystyle\frac{dx_{i}(t)}{dt}=\sum_{j\in \mathcal {N}_{i}}a_{ij}[q_{\mu}(x_{j}(t-\tau))-q_{\mu}(x_{i}(t))],~~~~~i\in\mathcal{N}, \end{array} \end{aligned} $$

where τ is the communication delay from agent j to agent i, assumed identical for all neighboring pairs.

In many real multi-agent networks, each agent can obtain its own precise information, which is not affected by the limited communication bandwidth. Moreover, communication delays may be different between different neighboring agents. Hence, the following consensus protocol will be studied in this section:

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} \displaystyle\frac{dx_{i}(t)}{dt}=\sum_{j\in \mathcal {N}_{i}}a_{ij}[q_{\mu}(x_{j}(t-\tau_{ij}))-x_{i}(t)],~~~~~i\in\mathcal{N}, \end{array} \end{aligned} $$
(4.27)

where \(\tau_{ij}\) is the communication delay from agent j to agent i and \(q_{\mu}(\cdot)\) is defined by (4.4). For \(x=(x_{1},\,x_{2},\,\ldots ,\,x_{N})^{\top }\in \mathbb {R}^{N}\), let \(q_{\mu}(x)=(q_{\mu}(x_{1}),\,q_{\mu}(x_{2}),\,\ldots,\,q_{\mu}(x_{N}))\). The initial conditions associated with (4.27) are given as

$$\displaystyle \begin{aligned} \begin{array}{rcl} x_{i}(s)=\phi_{i}(s)\in \mathcal{C}([-\tau,0], \,\mathbb{R}), ~~~~~i\in\mathcal{N}. \end{array} \end{aligned} $$

Different from the discrete-time case, system (4.27) may not have a global solution in the sense of Carathéodory due to the discontinuity of the function \(q_{\mu}(\cdot)\). Hence, we need to prove the existence of a global Filippov solution to the differential equation (4.27), as in Chap. 1.

4.2.2 The Existence of the Filippov Solution

The concept of the Filippov solution to the differential equation (4.27) is given as follows.

Definition 4.5

A function \(x(t): [-\tau ,\,T)\rightarrow \mathbb {R}^{N}\) (T might be \(+\infty\)) is a solution in the sense of Filippov for the discontinuous system (4.27) on [−τ, T), if

  1.

    x(t) is continuous on [−τ, T) and absolutely continuous on [0, T);

  2.

    x(t) satisfies that

    $$\displaystyle \begin{aligned} \begin{array}{rcl}{} \displaystyle\frac{dx_{i}(t)}{dt}\in\mathcal{K}[\sum_{j\in \mathcal {N}_{i}}a_{ij}(q_{\mu}(x_{j}(t-\tau_{ij}))-x_{i}(t))],~~~~i\in\mathcal{N}. \end{array} \end{aligned} $$
    (4.28)

It follows from Lemma 1.16 that

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} & &\displaystyle \mathcal{K}[\sum_{j\in \mathcal {N}_{i}}a_{ij}(q_{\mu}(x_{j}(t-\tau_{ij}))-x_{i}(t))]\\ & &\displaystyle \quad \subseteq\sum_{j\in \mathcal {N}_{i}}a_{ij}(\mathcal{K}[q_{\mu}(x_{j}(t-\tau_{ij}))]-x_{i}(t)). \end{array} \end{aligned} $$
(4.29)
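For intuition, and assuming that \(\mathcal{K}[\cdot]\) is the usual Filippov convexification (which, for a monotone piecewise-constant map such as \(q_{\mu}\), simply closes the gap at each jump point), the set-valued map can be written explicitly as

$$\displaystyle \begin{aligned} \mathcal{K}[q_{\mu}(x)]=\left\{ \begin{array}{ll} \{q_{\mu}(x)\}, & x\neq l\mu~\mbox{for every nonzero}~l\in\mathbb{Z},\\ \left[(l-1)\mu,\,l\mu\right], & x=l\mu,~l\geq1,\\ \left[-l\mu,\,-(l-1)\mu\right], & x=-l\mu,~l\geq1, \end{array} \right. \end{aligned} $$

so that \(\gamma_{i}(t)=q_{\mu}(x_{i}(t))\) whenever \(x_{i}(t)\) is not a nonzero multiple of μ.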

Similar to Chap. 3, if x(t) is a solution of system (4.27), there exists an output function \(\gamma (t)\in \mathcal {K}[q_{\mu }(x(t))]\) such that, for a.e. t ∈ [0, T), the following equation holds:

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} \displaystyle\frac{dx_{i}(t)}{dt}=\sum_{j\in \mathcal {N}_{i}}a_{ij}(\gamma_{j}(t-\tau_{ij})-x_{i}(t)),~~~~~i\in\mathcal{N}. \end{array} \end{aligned} $$
(4.30)

Definition 4.6

For any continuous function \(\phi :\,[-\tau ,\,0]\rightarrow \mathbb {R}^{N}\) and any measurable selection \(\psi :\,[-\tau ,\,0]\rightarrow \mathbb {R}^{N}\), such that \(\psi (s)\in \mathcal {K}[q_{\mu }(\phi (s))]\) for a.e. s ∈ [−τ, 0], an absolutely continuous function x(t) = x(t, ϕ, ψ) is said to be a solution of the Cauchy problem for system (4.27) on [0, T) with initial value (ϕ, ψ), if

$$\displaystyle \begin{aligned} \left\{ \begin{aligned} \overset{.}x_{i}(t) &=\sum_{j=1,\,j\neq i}^{N}a_{ij}(\gamma_{j}(t-\tau_{ij})-x_{i}(t)),~for\,~ a.e.~\,t\in[0,\,T),~~~ i\in\mathcal{N}, \\ x(s) &=\phi(s), ~~~\forall s\in[-\tau,\,0],\\ \gamma(s) &=\psi(s)~~~ a.e.~~s\in[-\tau,\,0].\\ \end{aligned} \right. \end{aligned} $$
(4.31)

Next, we shall study the existence of a global solution to system (4.31).

Lemma 4.7

Suppose x(⋅) is a Filippov solution to (4.27). Let\(M(t)=\mathit{\mbox{max}}_{i\in \mathcal {N}}\mathit{\mbox{max}}_{\theta \in [-\tau ,\,0]}\{x_{i}(t+\theta )\}\)and\(m(t)=\displaystyle \mathit{\mbox{min}}_{i\in \mathcal {N}}\mathit{\mbox{min}}_{\theta \in [-\tau ,\,0]}\{x_{i}(t+\theta )\}\) . Then, we have the following conclusion:

  (i)

    If M(t) ≥ 0, then M(t) is a non-increasing function for t.

  (ii)

    If m(t) ≤ 0, then m(t) is a non-decreasing function for t.

Proof

We only prove conclusion (i); (ii) can be proved similarly. For any fixed \(t_{0}\geq 0\), suppose \(M(t_{0})\geq 0\). Next, we will show that \(M(t)\leq M(t_{0})\) for any \(t\geq t_{0}\) by contradiction.

Suppose, on the contrary, that there exists \(\overline {t}_{0}\) such that

$$\displaystyle \begin{aligned} M(\overline{t}_{0})>M(t_{0}),\, \overline{t}_{0}>t_{0}\geq0. \end{aligned} $$
(4.32)

Similar to the proof of Steps 1 and 2 of Theorem 3.12, it can be proved that there exist \(i_{0}\in \mathcal {N}\), \(t_{0}^{*}\in [t_{0},\,\overline {t}_{0})\), and δ > 0 such that

$$\displaystyle \begin{aligned} M(t_{0})=M(t_{0}^{*})=x_{i_{0}}(t_{0}^{*}),\end{aligned} $$
(4.33)

and

$$\displaystyle \begin{aligned} M(t)>M(t_{0}^{*}),\,\forall t\in(t_{0}^{*},\,\overline{t}_{0}],\end{aligned} $$
(4.34)

and

$$\displaystyle \begin{aligned} M(t)=x_{i_{0}}(t+\theta(t)),\,\theta(t)\in[-\tau,\,0],\, \forall t\in(t_{0}^{*},\,t_{0}^{*}+\delta).\end{aligned} $$
(4.35)

Let \(\delta _{1}=\min _{j\in \mathcal {N}_{i_{0}}}\{\tau _{i_{0}j}\}\) and \(\delta _{2}=\min \{\delta ,\delta _{1}\}\). Since

$$\displaystyle \begin{aligned} x_{i_{0}}(t+\theta(t))=M(t)>M(t_{0}^{*})=x_{i_{0}}(t_{0}^{*}),\,\forall t\in(t_{0}^{*},\,t_{0}^{*}+\delta_{2}), \end{aligned} $$
(4.36)

there exists \(t_{1}\in (t_{0}^{*},\,t_{0}^{*}+\delta _{2}]\) such that

$$\displaystyle \begin{aligned} x_{i_{0}}(t_{1})>x_{i_{0}}(t_{0}^{*}). \end{aligned} $$
(4.37)

Let \(t_{1}^{*}=\sup \{t\in [t_{0}^{*},\,t_{1}]:\,x_{i_{0}}(t)=x_{i_{0}}(t_{0}^{*})\}\). Due to the continuity of function \(x_{i_{0}}(t)\), we have

$$\displaystyle \begin{aligned} x_{i_{0}}(t_{1}^{*})=x_{i_{0}}(t_{0}^{*}). \end{aligned} $$
(4.38)

Hence, for any \(t\in (t_{1}^{*},\,t_{1}]\), we have

$$\displaystyle \begin{aligned} x_{i_{0}}(t)\geq M(t_{0}^{*})&\geq \mbox{max}_{j\in \mathcal {N}_{i_{0}}}\mbox{max}_{t\in[t_{1}^{*},\,t_{1}]}\{x_{j}(t-\tau_{i_{0}j})\}\\ &\geq\mbox{max}_{j\in \mathcal {N}_{i_{0}}}\mbox{max}_{t\in[t_{1}^{*},\,t_{1}]}\{\gamma_{j}(t-\tau_{i_{0}j})\}. \end{aligned}$$

It follows from

$$\displaystyle \begin{aligned} \displaystyle\dot{x}_{i_{0}}(t) =\sum_{j=1,\,j\neq i_{0}}^{N}a_{i_{0}j}(\gamma_{j}(t-\tau_{i_{0}j})-x_{i_{0}}(t)),\,\,a.e.\,\, t\in(t_{1}^{*},\,t_{1}], \end{aligned} $$
(4.39)

that \(\dot {x}_{i_{0}}(t)\leq 0,\, a.e.\,\, t\in (t_{1}^{*},\,t_{1}]\). However, since \(x_{i_{0}}(t_{1})> x_{i_{0}}(t_{1}^{*})\), there must exist a subset \(\mathcal {I}_{1}\) of \((t_{1}^{*},\,t_{1}]\) such that \(\mathcal {I}_{1}\) has a positive measure and

$$\displaystyle \begin{aligned} \dot{x}_{i_{0}}(t)>0, \,\, a.e.\, t\in \mathcal{I}_{1}, \end{aligned} $$
(4.40)

which contradicts \(\dot {x}_{i_{0}}(t)\leq 0\) for a.e. \(t\in (t_{1}^{*},\,t_{1}]\).

Therefore, M(t) is a non-increasing function for t if M(t) ≥ 0. Similarly, m(t) is a non-decreasing function for t if m(t) ≤ 0.

Theorem 4.8

For any initial function ϕ and the selection of the output function\(\psi (s)\in \mathcal {K}[q_{\mu }(\phi (s))]\) , there exists a global solution for the system (4.31).

Proof

Similar to the proof of Theorem 3.12, the proof of Theorem 4.8 can also be divided into two parts:

Part (I) Existence of local solution

Similar to the proof of Lemma 1 in [20], one can conclude the existence of the solution defined on [0, T) for system (4.31).

Part (II) The boundedness of the solution

Suppose x(t, ϕ, ψ) is a solution of system (4.31). Let \(M(t)=\mbox{max}_{i\in \mathcal {N}}\mbox{max}_{\theta\in[-\tau,\,0]}\{x_{i}(t+\theta)\}\) and \(m(t)=\min _{i\in \mathcal {N}}\min _{\theta \in [-\tau ,\,0]}\{x_{i}(t+\theta )\}\). It follows from Lemma 4.7 that \(M(t)\leq\max\{M(0),0\}\) and \(m(t)\geq \min \{m(0),0\}\). Hence, the solution x(t) is bounded. According to the theory of functional differential equations [21], a global solution can be guaranteed by the boundedness of the local solution. This completes the proof of this theorem.

4.2.3 Consensus Analysis Under Quantization and Time Delays

In this section, we shall study the consensus of the multi-agent system (4.27), assuming that the network topology is undirected. The initial conditions associated with (4.27) are given as \(x_{i}(s)=\phi _{i}(s)\in \mathcal {C}([-\tau ,0], \,\mathbb {R}),\,i\in \mathcal {N}\). The Filippov solution of system (4.27) is defined in (4.31), and \(\psi_{j}(s),\,s\in[-\tau,\,0],\) is the initial measurable selection for \(\gamma_{j}(s)\).

Lemma 4.9

Suppose x(t) is a Filippov solution to (4.27). For any \(\epsilon>0\), let \(\varPhi =\{x(t+\theta )\in \mathcal {C}([-\tau ,0];\,\mathbb {R}^{N}):\,|\gamma _{i}(t)-\gamma _{j}(t-\tau _{ij})|<\frac {\epsilon }{2},\,|x_{i}(t)-\gamma _{j}(t-\tau _{ij})|<\frac {\epsilon }{2},~~\forall i\in \mathcal {N},\,j\in \mathcal {N}_{i}\}\). Then, we have the following conclusion:

  (i)

    There exists \(T_{0}\) such that for any \(i\in \mathcal {N}\),

    $$\displaystyle \begin{aligned} \displaystyle| x_{i}(t+\vartheta)-x_{i}(t)|\leq\epsilon,~~~~\forall t\geq T_{0},\,\forall \vartheta\in[0,\,\tau]. \end{aligned} $$
    (4.41)
  (ii)

    For arbitrary fixed \(t_{0}\geq 0\), there exists \(t_{1}\geq t_{0}\) such that the state of the network enters the set \(\varPhi\) at time \(t_{1}\).

Proof

Consider the function

$$\displaystyle \begin{aligned} V(t)=V_{1}(t)+V_{2}(t), \end{aligned} $$
(4.42)

where

$$\displaystyle \begin{aligned} V_{1}(t)=\sum_{i=1}^{N}x_{i}^{2}(t)+\sum_{i=1}^{N}\int_{0}^{x_{i}(t)}q_{\mu}(s)ds, \end{aligned} $$
(4.43)

and

$$\displaystyle \begin{aligned} V_{2}(t)=\sum_{i=1}^{N}\sum_{j=1,\,j\neq i}^{N}\int_{t-\tau_{ij}}^{t}a_{ij}\gamma_{j}^{2}(s)ds. \end{aligned} $$
(4.44)
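Along a simulated trajectory, the functional V(t) in (4.42)–(4.44) can be evaluated numerically; the sketch below uses a midpoint rule for the integral in (4.43) and a left Riemann sum for (4.44). The names gamma_hist, d, and dt are hypothetical: gamma_hist[k] samples γ at time t_k with step dt, and the delays are \(\tau_{ij}=d[i,j]\cdot dt\).

```python
import numpy as np

def q_mu(x, mu=1.0):
    x = np.asarray(x, dtype=float)
    return np.where(x >= 0, np.floor(x / mu) * mu, -np.floor(-x / mu) * mu)

def int_q(x, mu=1.0, n=2000):
    """Midpoint-rule approximation of the (oriented) integral of q_mu from 0 to x."""
    h = x / n
    s = (np.arange(n) + 0.5) * h
    return float(h * q_mu(s, mu).sum())

def V1(x, mu=1.0):
    """V_1 of (4.43) evaluated at a state vector x."""
    x = np.asarray(x, dtype=float)
    return float((x ** 2).sum() + sum(int_q(xi, mu) for xi in x))

def V2(gamma_hist, A, d, dt):
    """Left Riemann sum for V_2 of (4.44); gamma_hist[-1] corresponds to time t."""
    total = 0.0
    N = A.shape[0]
    for i in range(N):
        for j in range(N):
            if j != i and A[i, j] > 0:
                total += A[i, j] * dt * float((gamma_hist[-d[i, j]:, j] ** 2).sum())
    return total
```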

Note that \(cq_{\mu}(c)\geq 0\) for any \(c\in \mathbb {R}\); then we have \(V_{1}(t)\geq 0\) and \(V_{2}(t)\geq 0\).

Let \(p_{i}(s)=\int _{0}^{s}q_{\mu }(u)du\); then we have

$$\displaystyle \begin{aligned} \partial_{c}p_{i}(s)=\{v\in\mathbb{R}:\,q_{\mu}^{-}(s)\leq v\leq q_{\mu}^{+}(s)\}, \end{aligned} $$
(4.45)

where \(q_{\mu }^{+}(s)\) and \(q_{\mu }^{-}(s)\) denote the right and left limits of the function \(q_{\mu }\) at the point s. Based on Lemma 1.19, \(V_{1}(t)\) is differentiable for a.e. \(t\geq 0\) and

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} \frac{dV_{1}(t)}{dt} & =&\displaystyle \sum_{i=1}^{N}x_{i}(t)\sum_{j=1,\,j\neq i}^{N}a_{ij}[\gamma_{j}(t-\tau_{ij})-x_{i}(t)]+\sum_{i=1}^{N}\gamma_{i}(t)\sum_{j=1,\,j\neq i}^{N}a_{ij}\\& &\displaystyle \times[\gamma_{j}(t-\tau_{ij})-x_{i}(t)]\\ & =&\displaystyle \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1,\,j\neq i}^{N}a_{ij}[2x_{i}(t)\gamma_{j}(t-\tau_{ij})-2x_{i}^{2}(t)]+ \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1,\,j\neq i}^{N}a_{ij}\\ & &\displaystyle \times [2\gamma_{i}(t)\gamma_{j}(t-\tau_{ij})-2\gamma_{i}(t)x_{i}(t)]\\ & \leq&\displaystyle \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1,\,j\neq i}^{N}a_{ij}[2x_{i}(t)\gamma_{j}(t-\tau_{ij})-2x_{i}^{2}(t)]+ \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1,\,j\neq i}^{N}a_{ij}\\& &\displaystyle \times[2\gamma_{i}(t)\gamma_{j}(t-\tau_{ij})-2\gamma_{i}^{2}(t)]. \end{array} \end{aligned} $$
(4.46)

Since \(\gamma _{j}(t)\in \mathcal {K}[q_{\mu }(x_{j}(t))],\,\forall j\in \mathcal {N}\), we have that \(\gamma_{j}(t)\) is locally integrable. Hence, \(V_{2}(t)\) is differentiable for a.e. \(t\geq 0\) and

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} \frac{dV_{2}(t)}{dt}& =&\displaystyle \sum_{i=1}^{N}\sum_{j=1,\,j\neq i}^{N}a_{ij}[\gamma_{j}^{2}(t)- \gamma_{j}^{2}(t-\tau_{ij})]\\ & =&\displaystyle \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1,\,j\neq i}^{N}a_{ij}[2\gamma_{i}^{2}(t)-2\gamma_{j}^{2}(t-\tau_{ij})]\\ & \leq&\displaystyle \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1,\,j\neq i}^{N}a_{ij}[\gamma_{i}^{2}(t)-\gamma_{j}^{2}(t-\tau_{ij})]+ \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1,\,j\neq i}^{N}a_{ij}[x_{i}^{2}(t)\\& &\displaystyle -\gamma_{j}^{2}(t-\tau_{ij})]. \end{array} \end{aligned} $$
(4.47)

Combining (4.46) and (4.47) gives that

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} \frac{dV(t)}{dt}& =&\displaystyle \frac{dV_{1}(t)}{dt}+\frac{dV_{2}(t)}{dt}\\ & \leq&\displaystyle \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1,\,j\neq i}^{N}a_{ij}[2\gamma_{i}(t)\gamma_{j}(t-\tau_{ij})-\gamma_{i}^{2}(t)-\gamma_{j}^{2}(t-\tau_{ij})]\\& &\displaystyle +\frac{1}{2}\sum_{i=1}^{N}\sum_{j=1,\,j\neq i}^{N}a_{ij}[2x_{i}(t)\gamma_{j}(t-\tau_{ij})-x_{i}^{2}(t)-\gamma_{j}^{2}(t-\tau_{ij})]\\ & =&\displaystyle -\frac{1}{2}\sum_{i=1}^{N}\sum_{j=1,\,j\neq i}^{N}a_{ij}(\gamma_{i}(t)-\gamma_{j}(t-\tau_{ij}))^{2}\\ & &\displaystyle -\frac{1}{2}\sum_{i=1}^{N}\sum_{j=1,\,j\neq i}^{N}a_{ij}(x_{i}(t)-\gamma_{j}(t-\tau_{ij}))^{2}\\ & \leq&\displaystyle 0. \end{array} \end{aligned} $$
(4.48)

Hence, \(V(t)\) is non-increasing for t. Together with \(V(t)\geq 0\), this gives that \(\lim_{t \to +\infty}V(t)\) exists. Let \(\bar {a}=\displaystyle \mbox{max}_{1\leq i<j\leq N,\,a_{ij}>0}\{a_{ij}\}\). Then, for any \(\epsilon>0\) and \(i,\,j\in \mathcal {N}\), there exists \(T_{0}\) such that \(\forall t\geq T_{0},\,\vartheta\in[0,\,\tau]\),

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} \frac{\epsilon^{2}}{2N\bar{a}\tau}& \geq&\displaystyle | V(t+\vartheta)-V(t)|\\ & =&\displaystyle \displaystyle|\int_{t}^{t+\vartheta}\dot{V}(s)ds|\\ & \geq&\displaystyle \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1,\,j\neq i}^{N}a_{ij}\int_{t}^{t+\vartheta}(x_{i}(s)-\gamma_{j}(s-\tau_{ij}))^{2}ds\\& &\displaystyle +\frac{1}{2}\sum_{i=1}^{N}\sum_{j=1,\,j\neq i}^{N}a_{ij}\int_{t}^{t+\vartheta}(\gamma_{i}(s)-\gamma_{j}(s-\tau_{ij}))^{2}ds. \end{array} \end{aligned} $$
(4.49)

Hence, for any \(i\in \mathcal {N}\), \(t\geq T_{0}\), and \(\vartheta\in[0,\,\tau]\),

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} \sum_{j=1,\,j\neq i}^{N}a_{ij}\int_{t}^{t+\vartheta}(x_{i}(s)-\gamma_{j}(s-\tau_{ij}))^{2}ds \leq\frac{\epsilon^{2}}{N\bar{a}\tau}. \end{array} \end{aligned} $$
(4.50)

It follows from Lemma 1.20 that

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} & &\displaystyle \displaystyle\left|\sum_{j=1,\,j\neq i}^{N}a_{ij}\int_{t}^{t+\vartheta}(\gamma_{j}(s-\tau_{ij})-x_{i}(s))ds\right|{}^{2}\\ & &\displaystyle \quad \leq\displaystyle \left(\sum_{j=1,\,j\neq i}^{N}a_{ij}\int_{t}^{t+\vartheta}|\gamma_{j}(s-\tau_{ij})-x_{i}(s)| ds\right)^{2}\\ & &\displaystyle \quad \leq N\sum_{j=1,\,j\neq i}^{N}a_{ij}^{2}\left(\int_{t}^{t+\vartheta}|\gamma_{j}(s-\tau_{ij})-x_{i}(s)| ds\right)^{2}\\ & &\displaystyle \quad \leq\displaystyle\tau N\sum_{j=1,\,j\neq i}^{N}a_{ij}\bar{a}\int_{t}^{t+\vartheta}(x_{i}(s)-\gamma_{j}(s-\tau_{ij}))^{2}ds \\ & &\displaystyle \quad \leq\epsilon^{2}. \end{array} \end{aligned} $$
(4.51)

Hence, for any \(i\in \mathcal {N}\), \(t\geq T_{0}\), and \(\vartheta\in[0,\,\tau]\),

$$\displaystyle \begin{aligned}\displaystyle\left|\sum_{j=1,\,j\neq i}^{N}a_{ij}\int_{t}^{t+\vartheta}(\gamma_{j}(s-\tau_{ij})-x_{i}(s))ds\right|\leq\epsilon.\end{aligned}$$

It follows from (4.31) that

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} \displaystyle\left| x_{i}(t+\vartheta)-x_{i}(t)\right| & =&\displaystyle \displaystyle\left|\int_{t}^{t+\vartheta}\dot{x}_{i}(s)ds\right|\\ & =&\displaystyle \displaystyle\left|\sum_{j=1,\,j\neq i}^{N}a_{ij}\int_{t}^{t+\vartheta}(\gamma_{j}(s-\tau_{ij})-x_{i}(s))ds\right|\\ & \leq&\displaystyle \epsilon. \end{array} \end{aligned} $$
(4.52)

Thus, for any \(\epsilon>0\) and \(i\in \mathcal {N}\), there exists \(T_{0}\) such that for all \(t\geq T_{0}\) and \(\vartheta\in[0,\,\tau]\),

$$\displaystyle \begin{aligned} \displaystyle| x_{i}(t+\vartheta)-x_{i}(t)|\leq\epsilon. \end{aligned} $$
(4.53)

Next, we will prove conclusion (ii). Let \(J=\{t\geq t_{0}:\,x(t+\theta)\not\in\varPhi\}\). For \(x(t+\theta )\in \mathcal {C}([-\tau ,0];\,\mathbb {R}^{N})\) and \(t\in J\), there exist \(i,\,j\in \mathcal {N}\), \(i\neq j\), with \(a_{ij}\neq 0\) such that

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} |\gamma_{i}(t)-\gamma_{j}(t-\tau_{ij})|\geq\frac{\epsilon}{2}, \end{array} \end{aligned} $$
(4.54)

or

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} |x_{i}(t)-\gamma_{j}(t-\tau_{ij})|\geq\frac{\epsilon}{2}. \end{array} \end{aligned} $$
(4.55)

Hence, for a.e. t ∈ J,

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} \dot{V}(t) & \leq&\displaystyle -\frac{1}{8}a_{ij}\epsilon^{2}\\ & \leq&\displaystyle -\frac{1}{8}\varsigma\epsilon^{2}, \end{array} \end{aligned} $$
(4.56)

where \(\varsigma =\displaystyle \min_{i\neq j,\,a_{ij}>0}\{a_{ij}\}>0\). Next, we will prove conclusion (ii) by contradiction.

Suppose that \(t\in J\) for any \(t\geq t_{0}\). Then, inequality (4.56) implies that

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} V(t)-V(t_{0})& \leq&\displaystyle -\frac{1}{8}\varsigma\epsilon^{2}(t-t_{0}),\,t\geq t_{0}. \end{array} \end{aligned} $$
(4.57)

For \(t>\frac {8V(t_{0})}{\varsigma \epsilon ^{2}}+t_{0}\), it follows from inequality (4.57) that \(V(t)<0\), which contradicts \(V(t)\geq 0\). Therefore, for arbitrary \(t_{0}\geq 0\), there exists \(\bar {t}_{0}\geq t_{0}\) such that the state of the network enters the set \(\varPhi\) at time \(\bar {t}_{0}\).

This completes the proof of this lemma.

Theorem 4.10

Consider the multi-agent network (4.27) whose communication topology is defined by an undirected, connected graph \(\mathcal{G}\). Then, for any finite communication delays \(\tau_{ij}\), the multi-agent network will achieve consensus, i.e., there exists a constant c such that

$$\displaystyle \begin{aligned} \displaystyle\lim_{t \to +\infty}x_{i}(t)=c. \end{aligned} $$
(4.58)

Proof

Step 1

We shall establish some inequalities to be used in the later steps.

For arbitrary \(\epsilon>0\) (without loss of generality, assume \(\epsilon <\frac {\mu }{N}\)), it follows from Lemma 4.9 that there exists \(T_{0}>0\) such that for any \(i\in \mathcal {N}\),

$$\displaystyle \begin{aligned} \displaystyle|x_{i}(t+\vartheta)-x_{i}(t)|\leq\epsilon,~~~~\forall t\geq T_{0},\, \forall \vartheta\in[0,\,\tau]. \end{aligned} $$
(4.59)

Moreover, there exists \(T_{1}\geq T_{0}\) such that for any \(i\in \mathcal {N}\) and \(j\in \mathcal {N}_{i}\),

$$\displaystyle \begin{aligned} |\gamma_{i}(T_{1})-\gamma_{j}(T_{1}-\tau_{ij})|<\frac{\epsilon}{4}, \end{aligned} $$
(4.60)

and

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} |x_{i}(T_{1})-\gamma_{j}(T_{1}-\tau_{ij})|<\frac{\epsilon}{4}. \end{array} \end{aligned} $$
(4.61)

It follows from (4.60) and (4.61) that

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} |x_{i}(T_{1})-\gamma_{i}(T_{1})|& \leq&\displaystyle |\gamma_{i}(T_{1})-\gamma_{j}(T_{1}-\tau_{ij})| +|x_{i}(T_{1})-\gamma_{j}(T_{1}-\tau_{ij})|\\ & <&\displaystyle \frac{\epsilon}{4}+\frac{\epsilon}{4}=\frac{\epsilon}{2}. \end{array} \end{aligned} $$
(4.62)

Step 2

Fix \(x_{i}(T_{1})\), and without loss of generality, assume \(x_{i}(T_{1})\geq 0\) (the proof is similar for \(x_{i}(T_{1})<0\)). We shall prove that \(|x_{j}(T_{1})-x_{i}(T_{1})|\leq \epsilon ,\,j\in \mathcal {N}_{i},\) by considering the following two cases:

Case 1: \(x_{i}(T_{1})\neq k_{0}\mu ,\,\forall k_{0}\in \mathbb {Z}\). Then,

$$\displaystyle \begin{aligned} \gamma_{i}(T_{1})=q_{\mu}(x_{i}(T_{1}))=\lfloor\frac{x_{i}(T_{1})}{\mu}\rfloor\mu. \end{aligned} $$
(4.63)

If \(\gamma_{i}(T_{1})=0\), it is easy to see from (4.59) and (4.62) that \(\gamma_{i}(T_{1}-\tau_{ji})=\gamma_{i}(T_{1})=0\). Hence, it can be obtained that

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} |x_{j}(T_{1})-x_{i}(T_{1})| & \leq&\displaystyle |x_{j}(T_{1})-\gamma_{i}(T_{1}-\tau_{ji})|+|\gamma_{i}(T_{1}-\tau_{ji})-\gamma_{i}(T_{1})|\\& &\displaystyle +|\gamma_{i}(T_{1})-x_{i}(T_{1})|\\ & \leq&\displaystyle \frac{\epsilon}{4}+0+\frac{\epsilon}{2}\\ & <&\displaystyle \epsilon. \end{array} \end{aligned} $$
(4.64)

If \(\gamma_{i}(T_{1})\neq 0\), we claim that for any \(j\in \mathcal {N}_{i}\),

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} x_{j}(T_{1})& \geq&\displaystyle \lfloor\frac{x_{i}(T_{1})}{\mu}\rfloor\mu. \end{array} \end{aligned} $$
(4.65)

Otherwise, we have \(x_{j}(T_{1})<\lfloor \frac {x_{i}(T_{1})}{\mu }\rfloor \mu \), which implies that

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} \gamma_{j}(T_{1})& \leq&\displaystyle (\lfloor\frac{x_{i}(T_{1})}{\mu}\rfloor-1)\mu. \end{array} \end{aligned} $$
(4.66)

It follows from (4.59) that

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} x_{j}(T_{1}-\tau_{ij})& \leq&\displaystyle x_{j}(T_{1})+\epsilon\\ & \leq&\displaystyle (\lfloor\frac{x_{i}(T_{1})}{\mu}\rfloor-1)\mu+2\epsilon\\ & <&\displaystyle \lfloor\frac{x_{i}(T_{1})}{\mu}\rfloor\mu, \end{array} \end{aligned} $$
(4.67)

which implies \(\gamma _{j}(T_{1}-\tau _{ij})\leq (\lfloor \frac {x_{i}(T_{1})}{\mu }\rfloor -1)\mu \). Hence, it can be obtained that

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} |x_{i}(T_{1})-\gamma_{j}(T_{1}-\tau_{ij})|& \geq&\displaystyle |\gamma_{i}(T_{1})-\gamma_{j}(T_{1}-\tau_{ij})| -|x_{i}(T_{1})-\gamma_{i}(T_{1})| \\ & \geq&\displaystyle \mu-\frac{\epsilon}{2}\\ & >&\displaystyle \epsilon, \end{array} \end{aligned} $$
(4.68)

which contradicts (4.61). Hence,

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} x_{j}(T_{1})& \geq&\displaystyle \lfloor\frac{x_{i}(T_{1})}{\mu}\rfloor\mu. \end{array} \end{aligned} $$
(4.69)

The inequality \(|x_{j}(T_{1})-\gamma _{i}(T_{1}-\tau _{ji})|\leq \frac {\epsilon }{4}\) implies that

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} \gamma_{i}(T_{1}-\tau_{ji})& \geq&\displaystyle \lfloor\frac{x_{i}(T_{1})}{\mu}\rfloor\mu-\frac{\epsilon}{4}. \end{array} \end{aligned} $$
(4.70)

It follows from (4.59) and (4.63) that

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} \gamma_{i}(T_{1}-\tau_{ji})& \leq&\displaystyle \lfloor\frac{x_{i}(T_{1})}{\mu}\rfloor\mu. \end{array} \end{aligned} $$
(4.71)

Hence, we have

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} |x_{j}(T_{1})-x_{i}(T_{1})| & \leq&\displaystyle |x_{j}(T_{1})-\gamma_{i}(T_{1}-\tau_{ji})|+|\gamma_{i}(T_{1}-\tau_{ji})-\gamma_{i}(T_{1})|\\& &\displaystyle +|\gamma_{i}(T_{1})-x_{i}(T_{1})|\\ & \leq&\displaystyle \frac{\epsilon}{4}+\frac{\epsilon}{4}+\frac{\epsilon}{2}=\epsilon. \end{array} \end{aligned} $$
(4.72)
Case 2: There exists a \(\bar {k}_{0}\in \mathbb {Z}\) such that \(x_{i}(T_{1})= \bar {k}_{0}\mu .\)

If \(\bar {k}_{0}=0\), it follows from \(|x_{i}(T_{1})-x_{i}(T_{1}-\tau_{ji})|<\epsilon\) that

$$\displaystyle \begin{aligned} \gamma_{i}(T_{1}-\tau_{ji})=\gamma_{i}(T_{1})=0. \end{aligned} $$
(4.73)

Hence, we have

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} |x_{j}(T_{1})-x_{i}(T_{1})| & \leq&\displaystyle |x_{j}(T_{1})-\gamma_{i}(T_{1}-\tau_{ji})|+|\gamma_{i}(T_{1}-\tau_{ji})-\gamma_{i}(T_{1})|\\& &\displaystyle +|\gamma_{i}(T_{1})-x_{i}(T_{1})|\\ & \leq&\displaystyle \frac{\epsilon}{4}+0+\frac{\epsilon}{2}\\ & <&\displaystyle \epsilon. \end{array} \end{aligned} $$
(4.74)

If \(\bar {k}_{0}\geq 1\), we claim that \(x_{j}(T_{1})\geq x_{i}(T_{1})\). Otherwise, we have \(\gamma _{j}(T_{1})\leq (\bar {k}_{0}-1)\mu \). It follows from (4.59) and (4.62) that

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} x_{j}(T_{1}-\tau_{ji})& \leq&\displaystyle x_{j}(T_{1})+\epsilon\\ & \leq&\displaystyle \gamma_{j}(T_{1})+\epsilon+\epsilon\\ & \leq&\displaystyle (\bar{k}_{0}-1)\mu+2\epsilon, \end{array} \end{aligned} $$
(4.75)

which implies that

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} \gamma_{j}(T_{1}-\tau_{ji})& \leq&\displaystyle (\bar{k}_{0}-1)\mu. \end{array} \end{aligned} $$
(4.76)

However, we can obtain from (4.61) that

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} \gamma_{j}(T_{1}-\tau_{ji})& \geq&\displaystyle x_{i}(T_{1})-\frac{\epsilon}{4}\geq\bar{k}_{0}\mu-\frac{\epsilon}{4}, \end{array} \end{aligned} $$
(4.77)

which contradicts (4.76). Hence, \(x_{j}(T_{1})\geq x_{i}(T_{1})\).

If \(x_{j}(T_{1})=x_{i}(T_{1})=\bar {k}_{0}\mu \), then

$$\displaystyle \begin{aligned} |x_{j}(T_{1})-x_{i}(T_{1})|=0<\epsilon. \end{aligned} $$
(4.78)

If \(x_{j}(T_{1})>x_{i}(T_{1})=\bar {k}_{0}\mu \), it can be easily obtained that \(\gamma _{j}(T_{1})=\bar {k}_{0}\mu \). Then,

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} |x_{j}(T_{1})-x_{i}(T_{1})| & =&\displaystyle |x_{j}(T_{1})-\gamma_{j}(T_{1})|\leq\epsilon. \end{array} \end{aligned} $$
(4.79)

In conclusion, we have proved that for any \(\epsilon>0\), there exists \(T_{1}>0\) such that for any fixed \(x_{i}(T_{1})\),

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} |x_{j}(T_{1})-x_{i}(T_{1})|\leq\epsilon,\,j\in\mathcal{N}_{i}. \end{array} \end{aligned} $$
(4.80)

Step 3

We shall show that the multi-agent network (4.27) can achieve consensus.

Since the network is connected, we can obtain from (4.80) that for any fixed \(x_{i}(T_{1})\) and \(\forall j\in \mathcal {N}\),

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} |x_{j}(T_{1})-x_{i}(T_{1})|\leq (N-1)\epsilon. \end{array} \end{aligned} $$
(4.81)

Denote \(M(t)=\displaystyle \operatorname *{\max }_{i\in \mathcal {N}} \operatorname *{\max }_{\theta \in [-\tau ,\,0]}\{x_{i}(t+\theta )\}\) and \(m(t)=\displaystyle \min _{i\in \mathcal {N}}\min _{\theta \in [-\tau ,\,0]}\{x_{i}(t+\theta )\}\). It follows from (4.59) and (4.81) that

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} |M(T_{1})-m(T_{1})|\leq (N+1)\epsilon. \end{array} \end{aligned} $$
(4.82)
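Indeed, assuming \(T_{1}\geq T_{0}+\tau\) (which can always be arranged by enlarging \(T_{1}\)), pick \(i^{*},\theta^{*}\) attaining \(M(T_{1})\) and \(j^{*},\theta'\) attaining \(m(T_{1})\); then (4.59) and (4.81) give

$$\displaystyle \begin{aligned} \begin{array}{rcl} M(T_{1})-m(T_{1})& \leq&\displaystyle |x_{i^{*}}(T_{1}+\theta^{*})-x_{i^{*}}(T_{1})|+|x_{i^{*}}(T_{1})-x_{j^{*}}(T_{1})|+|x_{j^{*}}(T_{1})-x_{j^{*}}(T_{1}+\theta')|\\ & \leq&\displaystyle \epsilon+(N-1)\epsilon+\epsilon=(N+1)\epsilon. \end{array} \end{aligned} $$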

From (4.82), we can assume that \(m(T_{1})\) and \(M(T_{1})\) belong to the set \(\varOmega _{1}=((k_{0}-1)\mu ,\,(k_{0}+1)\mu )\), where \(k_{0}=\lfloor \frac {m(T_{1})}{\mu }\rfloor \). Without loss of generality, we assume \(k_{0}\geq 0\) (the proof for \(k_{0}<0\) is similar and is omitted here). Next, we will prove that the multi-agent network achieves consensus by considering the following four cases.

Case 1: \(\mu>M(T_{1})\geq m(T_{1})>-\mu\), i.e., \(k_{0}=0\). From the definition of the quantizer function \(q_{\mu}(\cdot)\) and the output function \(\gamma(\cdot)\), we have that

$$\displaystyle \begin{aligned} \gamma_{i}(T_{1}+\theta_{1})=\gamma_{j}(T_{1}+\theta_{2})=0,\,\forall i,\,j\in\mathcal{N},\,\forall \theta_{1},\,\theta_{2}\in[-\tau,\,0].\end{aligned} $$
(4.83)

It follows from Lemma 4.7 that

$$\displaystyle \begin{aligned} \gamma_{i}(t)=0,\,\forall i\in\mathcal{N},\,t\geq T_{1}.\end{aligned} $$
(4.84)

Hence, for \(t\geq T_{1}\), the multi-agent network model reduces to

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} \displaystyle\frac{dx_{i}(t)}{dt}=-(1-a_{ii})x_{i}(t),~~~~i\in\mathcal {N}. \end{array} \end{aligned} $$
(4.85)

From (4.85), it is easy to see that all the agents achieve consensus and the final consensus value is 0.
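In fact, (4.85) is a scalar linear equation that can be solved explicitly (assuming \(a_{ii}<1\), as in the discrete-time case):

$$\displaystyle \begin{aligned} x_{i}(t)=x_{i}(T_{1})\,e^{-(1-a_{ii})(t-T_{1})}\longrightarrow 0~~\mbox{as}~t \to +\infty,~~~~i\in\mathcal{N}. \end{aligned} $$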

Case 2: \(k_{0}\geq 1\) and \(M(T_{1})\geq k_{0}\mu\geq m(T_{1})\). From the definition of the quantizer function \(q_{\mu}(\cdot)\) and the output function \(\gamma(\cdot)\), we can obtain that

$$\displaystyle \begin{aligned} \gamma_{i}(T_{1}+\theta_{1})\in[(k_{0}-1)\mu,\,k_{0}\mu],\,\,\forall i\in\mathcal{N},\,\forall \theta_{1}\in[-\tau,\,0].\end{aligned} $$
(4.86)

It follows from (4.30) and Lemma 4.7 that

$$\displaystyle \begin{aligned} \gamma_{i}(t)\in[(k_{0}-1)\mu,\,k_{0}\mu],\,\,\forall i\in\mathcal {N},\,\forall t\geq T_{1}.\end{aligned} $$
(4.87)

If there exists \(t_{1}\geq T_{1}\) such that \(x_{i}(t_{1})<k_{0}\mu\), (4.27) implies that

$$\displaystyle \begin{aligned} x_{i}(t)< k_{0}\mu,\,\forall i\in\mathcal{N},\,\forall t\geq t_{1}. \end{aligned} $$
(4.88)

If \(m(T_{1})<k_{0}\mu\), clearly, there exist \(i_{m}\in \mathcal {N}\) and \(\theta_{m}\in[-\tau,\,0]\) such that

$$\displaystyle \begin{aligned} m(T_{1})=x_{i_{m}}(T_{1}+\theta_{m})\quad \mathrm{and} \quad \gamma_{i_{m}}(T_{1}+\theta_{m})=(k_{0}-1)\mu. \end{aligned} $$
(4.89)

For \(t\geq T_{1}\), we obtain that

$$\displaystyle \begin{aligned} (k_{0}-1)\mu<x_{i_{m}}(t)<k_{0}\mu \quad \mathrm{and} \quad \gamma_{i_{m}}(t)=(k_{0}-1)\mu. \end{aligned} $$
(4.90)

For any j such that \(i_{m}\in \mathcal {N}_{j}\), we have

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} \displaystyle\frac{dx_{j}(t)}{dt}=\sum_{l\in \mathcal {N}_{j}}a_{jl}(\gamma_{l}(t-\tau_{jl})-x_{j}(t)). \end{array} \end{aligned} $$
(4.91)

Hence, if \(x_{j}(t_{1})\geq k_{0}\mu\), there exists \(t_{2}\geq t_{1}\) such that \(x_{j}(t_{2})<k_{0}\mu\). Since the network is connected, it follows from (4.88) and (4.91) that there exists \(T_{2}\geq t_{2}\) such that

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} \displaystyle (k_{0}-1)\mu<x_{i}(t)<k_{0}\mu,\,\forall i\in\mathcal {N},\,\forall t\geq T_{2}, \end{array} \end{aligned} $$
(4.92)

and Eq. (4.30) reduces to

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} \displaystyle\frac{dx_{i}(t)}{dt}=-(1-a_{ii})(x_{i}(t)-(k_{0}-1)\mu),~~i\in\mathcal {N},\,t\geq T_{2}+\tau. \end{array} \end{aligned} $$
(4.93)

Hence, the multi-agent network will achieve consensus, and the final consensus value is \((k_{0}-1)\mu\).

If \(m(T_{1})=k_{0}\mu\), by a similar analysis, we can also obtain that the multi-agent network will achieve consensus and the final consensus value is \((k_{0}-1)\mu\) or \(k_{0}\mu\).

Case 3: \(M(T_{1})\geq m(T_{1})>k_{0}\mu\). In this case, we have

$$\displaystyle \begin{aligned} \begin{array}{rcl} \gamma_{i}(T_{1}+\theta_{1})=k_{0}\mu,\,\,\forall i\in\mathcal {N},\,\forall \theta_{1}\in[-\tau,\,0]. \end{array} \end{aligned} $$

It follows from (4.30) and Lemma 4.7 that

$$\displaystyle \begin{aligned} (k_{0}+1)\mu>x_{i}(t)>k_{0}\mu \quad \mathrm{and} \quad \gamma_{i}(t)=k_{0}\mu,~~\forall i\in\mathcal {N},\quad \forall t\geq T_{1}. \end{aligned}$$

Then, the system (4.30) reduces to

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} \displaystyle\frac{dx_{i}(t)}{dt}=-(1-a_{ii})(x_{i}(t)-k_{0}\mu),~~~~i\in\mathcal {N}. \end{array} \end{aligned} $$
(4.94)

It is easy to see that all the agents will achieve consensus and the final consensus value is \(k_{0}\mu\).

Case 4: \(k_{0}\mu>M(T_{1})\geq m(T_{1})\).

The analysis of this case is similar to that of Case 3 and is omitted here. In this case, the multi-agent network will achieve consensus and the final consensus value is \((k_{0}-1)\mu\).

In conclusion, the multi-agent network (4.27) achieves consensus asymptotically. This completes the proof of this theorem.

Remark 4.11

Different from Chap. 3, we have shown in this section that the multi-agent network can achieve complete consensus rather than practical consensus. However, it is difficult to estimate the final consensus state c of model (4.27). Estimating how the final consensus value depends on the quantization parameter and the time delays is an interesting problem for future work.

4.2.4 Numerical Example

Consider the multi-agent system (4.27) with communication quantization and time delays, where the network structure and the edge weights are shown in Fig. 4.3. The graph in Fig. 4.3 is generated by a scale-free algorithm. Suppose that the initial conditions are randomly chosen from (0, 10).

Fig. 4.3 Network topology in the simulation example

Figure 4.4 shows the state responses of the multi-agent network (4.27) with μ = 1. It can be observed that the agents converge to a common constant value, which is consistent with Theorem 4.10.

Fig. 4.4 The states of the system in the example
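The scale-free graph of Fig. 4.3 and its weights are not reproduced here, so the following Python sketch discretizes (4.27) with an explicit Euler scheme on a hypothetical five-agent undirected graph with heterogeneous delays; all weights, delays, and the step size are illustrative assumptions, and the scheme only approximates the Filippov solution.

```python
import numpy as np

rng = np.random.default_rng(1)

def q_mu(x, mu=1.0):
    x = np.asarray(x, dtype=float)
    return np.where(x >= 0, np.floor(x / mu) * mu, -np.floor(-x / mu) * mu)

# Hypothetical undirected weighted graph (not the graph of Fig. 4.3).
N, mu, dt, T = 5, 1.0, 0.01, 40.0
A = np.zeros((N, N))
for i, j, w in [(0, 1, 0.3), (1, 2, 0.2), (2, 3, 0.4), (3, 4, 0.3), (4, 0, 0.2)]:
    A[i, j] = A[j, i] = w
tau = rng.integers(1, 4, size=(N, N)) * 0.1        # heterogeneous delays tau_ij (seconds)
d = np.ceil(tau / dt).astype(int)                  # delays measured in Euler steps

steps, hist = int(T / dt), int(d.max())
x = np.zeros((steps + hist + 1, N))
x[: hist + 1] = rng.uniform(0, 10, size=N)         # constant initial history on [-tau, 0]

for k in range(hist, steps + hist):
    for i in range(N):
        rhs = sum(A[i, j] * (q_mu(x[k - d[i, j], j], mu) - x[k, i])
                  for j in range(N) if j != i and A[i, j] > 0)
        x[k + 1, i] = x[k, i] + dt * rhs

print(np.round(x[-1], 3))    # states settle near a common constant (Theorem 4.10)
```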

4.3 Summary

In this chapter, we mainly addressed the consensus problem of multi-agent networks in which each agent can only obtain quantized and delayed measurements of the states of its neighbors. The discrete-time formulation of the problem was studied first, and we showed that the multi-agent network can achieve consensus for arbitrary finite communication delays. For the continuous-time case, it was shown that Filippov solutions of the resulting system exist for any initial condition and, for the model with quantization and time delays acting simultaneously, converge to a constant value asymptotically under certain topology conditions. The theoretical results were illustrated by numerical examples.