1 Introduction

In recent decades, the cooperative control of multi-agent systems (MAS) has attracted great attention because of its wide applications in smart grids [1], sensor networks [2], Unmanned Aerial Vehicle (UAV) formation [3], and multi-robot systems [4]. MAS can solve complex problems that a single agent cannot complete, and it offers higher flexibility and stronger adaptability [5]. As an important branch of MAS cooperation, consensus has received extensive attention [6–8]. Consensus design involves constructing a reasonable and effective control protocol based on local information exchange between agents, so that all agents in the system are guaranteed to reach a common final state.

According to whether a leader is present in the system, consensus algorithms are divided into leaderless and leader-following ones. Olfati-Saber et al. [9] and Wen et al. [10] studied the leaderless consensus problem, in which the consensus state of the system is determined by the information exchange between agents and their initial states. In [11] and [12], the leader-following consensus problem was studied. Interestingly, the abovementioned studies [9–12] share a common feature: each agent needs to communicate continuously with its neighbors to reach a consensus. However, in practical applications, continuous communication between agents is not always feasible, and it increases the communication burden and energy consumption, especially when the resources of a single agent are limited.

These limitations can be alleviated, and communication network resources can be used more effectively, by introducing an event-triggered strategy into the multi-agent consensus problem. In [13], the real-time performance of event-triggered systems was found to be better than that of time-triggered systems. In [14], event-triggered average consensus was studied for MAS with single-integrator dynamics. In [15], a decentralized event-triggered consensus algorithm was considered for MAS with single-integrator and double-integrator dynamics. In [16], a distributed adaptive event-triggered protocol was designed on the basis of locally sampled state or output information, and the leaderless and leader-following consensus problems were considered simultaneously. Zhang et al. [17] considered the event-triggered tracking control problem of nonlinear MAS with unknown disturbances and proposed a new adaptive event-triggered control method for this system. In [18], a distributed event-triggered consensus controller was proposed for each agent to achieve consensus without continuous communication between agents. The abovementioned studies examined event-trigger conditions in depth to reduce the communication burden and energy consumption of the system. However, a key performance index of the control system, namely convergence performance, should also be considered.

From the perspective of time optimization, finite-time convergence of the closed-loop system is desirable, whereas the event-triggered consensus results in the above literature only guarantee asymptotic consensus. Because of the fractional power term in the finite-time controller, finite-time consensus offers better robustness and disturbance rejection than asymptotic consensus [19–22]. In [23], the finite-time consensus problem of leaderless and leader-follower MAS was investigated, and two new nonlinear consensus protocols were proposed to substantially reduce the communication cost and controller update frequency. In [24], the tracking control problem of MAS with bounded disturbances was studied using a sliding mode controller. In [25], the finite-time consensus of second-order MAS with internal nonlinear dynamics and external bounded disturbances was explored. However, the abovementioned results mainly concern integrator-type dynamics and cannot deal with general linear dynamics. For general linear dynamics, the problem of finite-time consensus under distributed event-trigger conditions was analyzed in [26], where a dynamic threshold that converges to zero within a finite time was designed in the trigger function. Cao et al. [27] proposed a distributed event-triggered control strategy to achieve finite-time consensus of general linear MAS.

A common assumption in the abovementioned studies is that the state of the system is measurable. However, in many practical engineering applications, not all state variables can be detected directly. In such cases, state feedback control cannot be used, and output feedback control should be utilized instead [28]. When the state is unknown, it is generally reconstructed using an observer, and the reconstructed state then replaces the real state of the system to realize the required state feedback. In [29–31], the event-triggered control of first-order MAS based on output feedback was studied. In [32], two observer-based control protocols (centralized and distributed) were designed using relative output information. In [33], a fully distributed event-triggered control strategy was proposed for general linear MAS, and schemes for leaderless and leader-follower consensus were considered simultaneously. In [34], an observer-based consensus tracking control based on a distributed velocity estimation method was designed for second-order leader-follower MAS. In [35], for general linear MAS with different topologies and unknown states, observer-based distributed controllers were proposed for different situations, and finite-time coordinated tracking was achieved. To the best of our knowledge, studies on event-triggered finite-time consensus control of MAS with unmeasured states are few, which motivates the present research.

On the basis of the above discussion and existing results, this study investigates observer-based event-triggered finite-time consensus control for general linear leader-follower MAS. The main contributions are as follows. First, a universal observer is constructed from the output feedback information of the system to estimate its real state, and the estimated state is used to define a measurable error. Second, on the basis of this measurable error, a dynamic threshold that converges to zero within a finite time is introduced in the trigger function, and a new model-based event-triggered controller is proposed. Finally, we prove that finite-time leader-follower consensus can be achieved under the observer-based event-triggered scheme, which guarantees finite-time convergence of the considered system while substantially reducing controller updates. In addition, the Zeno behavior is analyzed comprehensively, and appropriate parameters can be selected to avoid it.

The rest of this paper is organized as follows. Section 2 briefly discusses the preparations and problem description. The main results are given in Sect. 3. An example is given in Sect. 4. Section 5 concludes the study.

Notation

In this study, R, \({R^{n},}\) and \({R^{n \times m}}\) denote the set of real numbers, the n-dimensional real vector space, and the set of \(n \times m\) real matrices, respectively. \({I_{n}}\) denotes the n-dimensional identity matrix. Given a vector \(x = [{x_{1}},{x_{2}}, \ldots ,{x_{n}}]^{T}\) and \(\alpha > 0\), define \(\operatorname{sig}{ ( x )^{\alpha}} = [\operatorname{sig} ( {x_{1}} )^{\alpha}, \ldots , \operatorname{sig} ( {x_{n}} )^{\alpha}]^{T}\) with \(\operatorname{sig} ( {x_{i}} )^{\alpha} = \operatorname{sign} ( {{x_{i}}} ){ \vert {{x_{i}}} \vert ^{\alpha}}\), where \(\operatorname{sign}( \cdot )\) is the signum function. For a square matrix M, \({\lambda _{\max }} ( M )\) and \({\lambda _{\min }} ( M )\) denote the maximum and minimum eigenvalues of M, respectively. \(\Vert x \Vert \) represents the Euclidean norm of a vector x or the induced 2-norm of a matrix. ⊗ is the Kronecker product.

2 Preliminaries and problem formulation

2.1 Graph theory and lemmas

The topology of MAS is usually described using a directed graph or an undirected graph. In this study, the leader-follower MAS consists of a leader (marked as 0) and N followers. The weighted undirected topology among the N followers is described by the graph \(\mathcal{G} = ( \mathcal{V},\mathcal{E},\mathcal{A} )\), where \(\mathcal{V} = \{ {1,2,\ldots,N} \}\) represents the set of nodes, \(\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}\) denotes the set of edges, and \(\mathcal{A} = { [ {{a_{ij}}} ]_{N \times N}}\) is the weighted adjacency matrix with \({a_{ii}} = 0\). If \(( {j,i} ) \in \mathcal{E}\), then \({a_{ij}} > 0\); otherwise \({a_{ij}} = 0\). \({N_{i}}\) denotes the set of neighbors of follower i (\({i = 1,\ldots,N} \)). \(D = \operatorname{diag} \{ {{d_{1}},\ldots,{d_{N}}} \} \in {R^{N \times N}}\) represents the connection matrix between the followers and the leader. If follower i can obtain information from the leader, then \({d_{i}} > 0\); otherwise, \({d_{i}} = 0\). The Laplacian matrix \(L = [ {{l_{ij}}} ] \in {R^{N \times N}}\) of \(\mathcal{G}\) related to the adjacency matrix \(\mathcal{A}\) is defined as \({l_{ij}} = - {a_{ij}}\), \(i \ne j\) and \({l_{ii}} = \sum_{j = 1,j \ne i}^{N} {{a_{ij}}}\). If \({a_{ij}} = {a_{ji}}\), \(\forall i, j = 1,\ldots,N\), then the graph \(\mathcal{G}\) is undirected; otherwise, it is directed. \(H = L + D\) is defined to describe the topological information of the leader-follower MAS. A path from \({v_{i}}\) to \({v_{j}}\) in the graph is a sequence of adjacent nodes starting from \({v_{i}}\) and ending at \({v_{j}}\). If a path exists between any two agents, then the graph is connected.
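As a concrete illustration, the matrices L, D, and H defined above can be assembled directly from the adjacency weights. The following Python sketch is a hypothetical helper, not part of the paper; it uses the five-follower topology of Sect. 4 only as sample data and numerically checks the positive definiteness of H stated in Lemma 1 below.

```python
import numpy as np

# Adjacency matrix of the follower graph and leader pinning gains d_i
# (sample data: the five-follower topology of Sect. 4; any symmetric
#  nonnegative matrix with zero diagonal is handled the same way).
A_adj = np.array([[0, 1, 0, 1, 0],
                  [1, 0, 1, 0, 0],
                  [0, 1, 0, 0, 1],
                  [1, 0, 0, 0, 1],
                  [0, 0, 1, 1, 0]], dtype=float)
d = np.array([1, 1, 1, 0, 0], dtype=float)

L = np.diag(A_adj.sum(axis=1)) - A_adj   # Laplacian: l_ii = sum_j a_ij, l_ij = -a_ij
D = np.diag(d)                           # leader connection matrix
H = L + D

# For a connected undirected graph with at least one d_i > 0,
# H is expected to be symmetric positive definite (cf. Lemma 1 below).
print(np.linalg.eigvalsh(H))             # all eigenvalues strictly positive
```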

Lemma 1

([22])

The matrix H has positive eigenvalues \({\lambda _{1}},{\lambda _{2}},\ldots,\lambda _{N}\), and H is positive definite if and only if the undirected graph \(\mathcal{G}\) is connected.

Lemma 2

([23])

For any \({x_{i}} \in R\), \(i = 1,\ldots,N\), and \(0 < p \le 1\), the following inequality holds:

$$ { \Biggl( {\sum_{i = 1}^{N} { \vert {{x_{i}}} \vert } } \Biggr)^{p}} \le \sum _{i = 1}^{N} {{{ \vert {{x_{i}}} \vert }^{p}}} \le {N^{1 - p}} { \Biggl( {\sum _{i = 1}^{N} { \vert {{x_{i}}} \vert } } \Biggr)^{p}}.$$
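For instance, with \(N = 2\) and \(p = \frac{1}{2}\), Lemma 2 reduces to the familiar bounds

$$ \sqrt{ \vert {x_{1}} \vert + \vert {x_{2}} \vert } \le \sqrt{ \vert {x_{1}} \vert } + \sqrt{ \vert {x_{2}} \vert } \le \sqrt{2} \sqrt{ \vert {x_{1}} \vert + \vert {x_{2}} \vert }. $$

Bounds of this type (with \(p = \alpha \) and \(p = \frac{2\gamma}{1 + \gamma}\)) are used later in the derivations of (24) and (29).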

Lemma 3

([25])

Consider the system \(\dot{x} = f ( x )\), \(x \in {R^{n}}\), with \(f ( 0 ) = 0\). Suppose that there exist a positive definite continuously differentiable function V: \({R^{n}} \to R\) and real numbers \(c > 0\), \(0 < \alpha < 1\) such that the inequality

$$ \dot{V} \bigl( {x ( t )} \bigr) \le - c{V^{\alpha}} \bigl( {x ( t )} \bigr)$$

holds. Then the origin of the system is globally finite-time stable, and the settling time T is bounded by

$$ T \le \frac{{{V^{1 - \alpha }} ( {x ( 0 )} )}}{{c ( {1 - \alpha } )}}.$$
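As a quick sanity check on this bound, consider the scalar comparison equation \(\dot{V} = - c{V^{\alpha}}\) with \(V ( 0 ) > 0\). Separating variables gives

$$ \frac{d}{{dt}}{V^{1 - \alpha }} = ( {1 - \alpha } ){V^{ - \alpha }}\dot{V} = - c ( {1 - \alpha } ), \qquad {V^{1 - \alpha }} ( t ) = {V^{1 - \alpha }} ( 0 ) - c ( {1 - \alpha } )t, $$

so \(V ( t )\) reaches zero exactly at \(t = \frac{{{V^{1 - \alpha }} ( {V ( 0 )} )}}{{c ( {1 - \alpha } )}}\); by the comparison principle, the inequality case can only reach zero sooner, which is precisely the settling-time bound stated above.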

Lemma 4

([27])

For the Laplacian matrix L, we have

$$ {x^{T}} ( t )Lx ( t ) = \frac{1}{2}\sum _{i = 1}^{N} {\sum_{j = 1}^{N} {{a_{ij}}} } { \bigl( {{x_{i}} ( t ) - {x_{j}} ( t )} \bigr)^{T}} \bigl( {{x_{i}} ( t ) - {x_{j}} ( t )} \bigr),$$

where L is positive semidefinite. In addition, when \({1^{T}}x ( t ) = 0\), \({\lambda _{2}} ( L ){x^{T}} ( t )x ( t ) \le {x^{T}} ( t )Lx ( t ) \le { \lambda _{\max }} ( L ){x^{T}} ( t )x ( t )\).

Lemma 5

([36])

Young’s inequality: Let \(a,b > 0\) and \(p,q > 1\) be real numbers with \(\frac{1}{p} + \frac{1}{q} = 1\), then inequality \(ab \le \frac{1}{p}{a^{p}} + \frac{1}{q}{b^{q}}\) holds.
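In the sequel, Lemma 5 is applied in a weighted quadratic form: taking \(p = q = 2\) and applying the inequality to the scaled quantities \(\Vert u \Vert /\sqrt{a}\) and \(\sqrt{a} \Vert v \Vert \) gives, for any vectors u, v and any scalar \(a > 0\),

$$ 2{u^{T}}v \le \frac{1}{a}{ \Vert u \Vert ^{2}} + a{ \Vert v \Vert ^{2}}, $$

which is the estimate used to handle the cross terms in (19) and (20).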

2.2 Problem formulation

Consider a MAS with N followers and a leader. The dynamic equation of follower i is

$$\begin{aligned} \textstyle\begin{cases} {{\dot{x}}_{i}} ( t ) = A{x_{i}} ( t ) + B{u_{i}} ( t ), \\ {y_{i}} ( t ) = C{x_{i}} ( t ), \end{cases}\displaystyle \quad i = 1,2,\ldots,N \end{aligned}$$
(1)

where \({x_{i}} ( t ) \in {R^{n}}\), \({u_{i}} ( t ) \in {R^{m}}\), and \({y_{i}} ( t ) \in {R^{p}}\) represent the state, control input, and output of follower i, respectively. A, B, and C are constant matrices with appropriate dimensions.

For system (1), the following assumptions are introduced:

Assumption 1

The matrix pairs \((A, B)\) and \((C, A)\) are stabilizable and detectable, respectively.

Assumption 2

The graph \(\mathcal{G}\), composed of the N followers and the leader, contains a directed spanning tree with leader 0 as its root node.

The dynamics of leader 0 is given by

$$\begin{aligned} \textstyle\begin{cases} {{\dot{x}}_{0}} ( t ) = A{x_{0}} ( t ), \\ {y_{0}} ( t ) = C{x_{0}} ( t ), \end{cases}\displaystyle \end{aligned}$$
(2)

where \({x_{0}} ( t ) \in {R^{n}}\) and \({y_{0}} ( t ) \in {R^{p}}\) denote the state and output of the leader, respectively.

Definition 1

Consider a general linear leader-follower MAS with a fixed undirected graph \(\mathcal{G}\). For any initial conditions, if the system state satisfies \(\mathop {\lim } _{t \to \infty } \Vert {{x_{i}} ( t ) - {x_{0}} ( t )} \Vert = 0\), \(i = 1,2, \ldots ,N\), then the system has achieved a consensus.

As in Sect. 2.1, the leader connection matrix is \(D = \operatorname{diag}\{ {d_{i}}\} \in {R^{N \times N}}\), \(i = 1,2, \ldots ,N\), which represents the communication mode between the followers and the leader. If follower i can receive information from the leader, then \({d_{i}} = 1\); otherwise, \({d_{i}} = 0\).

3 Main results

Because the complete state is difficult to obtain or measure in many systems, the following observer is designed for each follower:

$$\begin{aligned} \textstyle\begin{cases} {{\dot {\hat{x}}_{i}}} ( t ) = A{{\hat{x}}_{i}} ( t ) + B{u_{i}} ( t ) + G ( {{{\hat{y}}_{i}} ( t ) - {y_{i}} ( t )} ), \\ {{\hat{y}}_{i}} ( t ) = C{{\hat{x}}_{i}} ( t ), \end{cases}\displaystyle \quad i = 1,2,\ldots,N, \end{aligned}$$
(3)

where \({\hat{x}_{i}} ( t ) \in {R^{n}}\) and G denote the observer state and the observer gain matrix, respectively, and \({\hat{y}_{i}} ( t ) \in {R^{p}}\) is the observer output.
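To make the role of (3) concrete, the following Python sketch (a minimal illustration, not the implementation behind Sect. 4; the matrices, gain, and step size are placeholder assumptions) propagates one follower and its observer with a forward-Euler step and shows how the output-injection term \(G ( {{{\hat{y}}_{i}} ( t ) - {y_{i}} ( t )} )\) drives the estimate toward the true state.

```python
import numpy as np

def observer_step(x, xhat, u, A, B, C, G, dt):
    """One forward-Euler step of follower dynamics (1) and observer (3)."""
    y, yhat = C @ x, C @ xhat
    dx    = A @ x    + B @ u                      # follower (1)
    dxhat = A @ xhat + B @ u + G @ (yhat - y)     # observer (3)
    return x + dt * dx, xhat + dt * dxhat

# Placeholder data (illustration only): any detectable (C, A) with a gain G
# making A + GC Hurwitz drives xhat - x to zero.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
G = np.array([[-2.0], [-1.0]])                    # here A + GC is Hurwitz
x, xhat, dt = np.array([1.0, 0.0]), np.zeros(2), 1e-3
for _ in range(5000):
    x, xhat = observer_step(x, xhat, np.array([0.0]), A, B, C, G, dt)
print(np.linalg.norm(xhat - x))                   # observation error ~ 0
```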

On the basis of the above observer (3), the event-triggered and leader-following consensus protocols are designed as follows:

$$\begin{aligned} {u_{i}} ( t ) = {}& - K \biggl( {\sum_{j \in {N_{i}}} {{a_{ij}} \bigl( {{{\hat{x}}_{i}} \bigl( {t_{k}^{i}} \bigr) - {{ \hat{x}}_{j}} \bigl( {t_{k'}^{j}} \bigr)} \bigr) + {d_{i}} \bigl( {{{ \hat{x}}_{i}} \bigl( {t_{k}^{i}} \bigr) - {x_{0}} \bigl( {t_{k}^{i}} \bigr)} \bigr)} } \biggr) \\ &{}- K\operatorname{sig} { \biggl( {\sum_{j \in {N_{i}}} {{a_{ij}} \bigl( {{{ \hat{x}}_{i}} \bigl( {t_{k}^{i}} \bigr) - {{\hat{x}}_{j}} \bigl( {t_{k'}^{j}} \bigr)} \bigr) + {d_{i}} \bigl( {{{ \hat{x}}_{i}} \bigl( {t_{k}^{i}} \bigr) - {x_{0}} \bigl( {t_{k}^{i}} \bigr)} \bigr)} } \biggr)^{\alpha}}, \end{aligned}$$
(4)

where \(K = {B^{T}}P \in {R^{m \times n}}\) is the control gain matrix, P is a positive definite matrix, \(0 < \alpha \le 0.5\), \({a_{ij}}\) is the ijth entry of the adjacency matrix \(\mathcal{A}\), \(t_{k}^{i}\) is the latest trigger time of agent i, \(t_{k'}^{j}\) is the latest trigger time of its neighbor j, and \({\hat{x}_{i}} ( {t_{k}^{i}} )\) represents the latest broadcast observer state of agent i.

Remark 1

Controller (4) is designed in two parts. The first part aims to reduce the state error to near zero, and the second part ensures that the state error converges to zero within a finite time.
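A minimal sketch of how the two parts of (4) could be evaluated for one agent is given below (Python; the function and variable names are illustrative assumptions rather than the authors' implementation). Here xhat_broadcast is assumed to hold the latest broadcast observer states \(\hat{x}_{j} ( t_{k'}^{j} )\) and x0_broadcast the leader state sampled at \(t_{k}^{i}\).

```python
import numpy as np

def sig(v, alpha):
    """Componentwise sig(v)^alpha = sign(v)|v|^alpha."""
    return np.sign(v) * np.abs(v) ** alpha

def control_input(i, xhat_broadcast, x0_broadcast, a, d, K, alpha):
    """Event-triggered protocol (4) for agent i, built from broadcast states only."""
    # consensus error assembled from the latest broadcast values
    z = d[i] * (xhat_broadcast[i] - x0_broadcast)
    for j in range(len(xhat_broadcast)):
        z = z + a[i][j] * (xhat_broadcast[i] - xhat_broadcast[j])
    # first part: linear feedback; second part: fractional power (finite time)
    return -K @ z - K @ sig(z, alpha)
```

Between events the input stays constant; it is recomputed only when agent i itself triggers or receives a new broadcast from an in-neighbor, as described below the trigger function (11).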

The state tracking error and observation error are defined as follows:

$$\begin{aligned} {\tilde{x}_{i}} ( t ) = {\hat{x}_{i}} ( t ) - {x_{0}} ( t ), \end{aligned}$$
(5)
$$\begin{aligned} {h_{i}} ( t ) = {\hat{x}_{i}} ( t ) - {x_{i}} ( t ). \end{aligned}$$
(6)

Therefore, the following result is obtained:

$$\begin{aligned} {{\dot {\tilde{x}}_{i}}} ( t ) ={}& A{\tilde{x}_{i}} ( t ) - B{B^{T}}P \biggl( {\sum _{j \in {N_{i}}} {{a_{ij}} \bigl( {{{\hat{x}}_{i}} \bigl( {t_{k}^{i}} \bigr) - {{\hat{x}}_{j}} \bigl( {t_{k'}^{j}} \bigr)} \bigr) + {d_{i}} \bigl( {{{ \hat{x}}_{i}} \bigl( {t_{k}^{i}} \bigr) - {x_{0}} \bigl( {t_{k}^{i}} \bigr)} \bigr)} } \biggr) \\ & {}- B{B^{T}}P\operatorname{sig} { \biggl( {\sum_{j \in {N_{i}}} {{a_{ij}} \bigl( {{{\hat{x}}_{i}} \bigl( {t_{k}^{i}} \bigr) - {{\hat{x}}_{j}} \bigl( {t_{k'}^{j}} \bigr)} \bigr) + {d_{i}} \bigl( {{{\hat{x}}_{i}} \bigl( {t_{k}^{i}} \bigr) - {x_{0}} \bigl( {t_{k}^{i}} \bigr)} \bigr)} } \biggr)^{\alpha}} \\ &{}+ GC \bigl( {{{\hat{x}}_{i}} ( t ) - {x_{i}} ( t )} \bigr). \end{aligned}$$
(7)

The measurement error between the latest triggered state and the current state is defined as follows:

$$\begin{aligned} {{{e_{i}} ( t ) = {\tilde{x}_{i}} \bigl( {t_{k}^{i}} \bigr) - {\tilde{x}_{i}} ( t ).}} \end{aligned}$$
(8)

Substituting (8) into (7) yields

$$\begin{aligned} {{\dot {\tilde{x}}_{i}}} ( t ) ={}& A{{\tilde{x}}_{i}} ( t ) - B{B^{T}}P \biggl( {\sum _{j \in {N_{i}}} {{a_{ij}} \bigl( { \bigl( {{{\tilde{x}}_{i}} ( t ) - {{\tilde{x}}_{j}} ( t )} \bigr) + \bigl( {{e_{i}} ( t ) - {e_{j}} ( t )} \bigr)} \bigr)} + {d_{i}} \bigl( {{{\tilde{x}}_{i}} ( t ) + {e_{i}} ( t )} \bigr)} \biggr) \\ & {}- B{B^{T}}P\operatorname{sig} { \biggl( {\sum_{j \in {N_{i}}} {{a_{ij}} \bigl( { \bigl( {{{\tilde{x}}_{i}} ( t ) - {{\tilde{x}}_{j}} ( t )} \bigr) + \bigl( {{e_{i}} ( t ) - {e_{j}} ( t )} \bigr)} \bigr)} + {d_{i}} \bigl( {{{\tilde{x}}_{i}} ( t ) + {e_{i}} ( t )} \bigr)} \biggr)^{\alpha}} \\ &{}+ GC{h_{i}} ( t ). \end{aligned}$$
(9)

The event-triggered mechanism can be applied by introducing a dynamic variable as follows:

$$\begin{aligned} {\dot{\vartheta}_{i}} ( t ) = - {\varepsilon _{i}} \operatorname{sig} { \bigl( {{\vartheta _{i}} ( t )} \bigr)^{\gamma}}, \end{aligned}$$
(10)

where the initial value \({\vartheta _{i}} ( 0 )\) is a non-zero real number, \({\varepsilon _{i}} > 0\), and \(\gamma \in ( {0,1} )\).
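The variable governed by (10) reaches zero in finite time. Indeed, for \({\vartheta _{i}} ( 0 ) \ne 0\), separating variables in (10) gives the closed-form solution

$$ {\vartheta _{i}} ( t ) = \operatorname{sign} \bigl( {{\vartheta _{i}} ( 0 )} \bigr){ \bigl( {{{ \bigl\vert {{\vartheta _{i}} ( 0 )} \bigr\vert }^{1 - \gamma }} - {\varepsilon _{i}} ( {1 - \gamma } )t} \bigr)^{\frac{1}{{1 - \gamma }}}}, \quad 0 \le t \le \frac{{{{ \vert {{\vartheta _{i}} ( 0 )} \vert }^{1 - \gamma }}}}{{{\varepsilon _{i}} ( {1 - \gamma } )}}, $$

and \({\vartheta _{i}} ( t ) = 0\) afterwards. This convergence time is the quantity that appears implicitly in the Zeno analysis of Theorem 2.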

The event-triggered function of agent i is designed as follows:

$$\begin{aligned} {f_{i}} \bigl( {t,{e_{i}} ( t ),{{\tilde{x}}_{i}} ( t ),{\vartheta _{i}} ( t )} \bigr) ={}& {\eta _{1}} { \bigl\Vert {{e_{i}} ( t )} \bigr\Vert ^{2}} + {\eta _{2}} { \bigl\Vert {{e_{i}} ( t )} \bigr\Vert ^{2\alpha }} + {\eta _{3}} { \bigl\Vert {{{ \tilde{x}}_{i}} ( t )} \bigr\Vert ^{2\alpha }} \\ &{}- \rho {\varepsilon _{i}}\delta { \bigl\vert {{\vartheta _{i}} ( t )} \bigr\vert ^{2\gamma }}, \end{aligned}$$
(11)

where \(\rho \in ( {0,1} )\), \(\delta > 0\). In addition,

$$\begin{aligned}& {\eta _{1}} \ge {a_{1}} {\lambda _{\max }} \bigl( {{H^{T}}H \otimes PB{B^{T}}P} \bigr), \\& {\eta _{2}} \ge {a_{2}} {\lambda _{\max }} \bigl( {PB{B^{T}}P} \bigr){ ( {{N_{n}}} )^{1 - \alpha }} {2^{\alpha}} { \bigl\Vert { ( {H \otimes {I_{N}}} )} \bigr\Vert ^{2\alpha }}, \\& {\eta _{3}} \ge \bigl( {{a_{2}} {\lambda _{\max }} \bigl( {PB{B^{T}}P} \bigr){{ ( {{N_{n}}} )}^{1 - \alpha }} {2^{\alpha}} + 1} \bigr) \times { \bigl\Vert { ( {H \otimes {I_{N}}} )} \bigr\Vert ^{2 \alpha }}, \end{aligned}$$

where \({a_{1}} > 0\), \({a_{2}} > 0\).

Under the proposed event-triggered consensus strategy, the agent i not only monitors its own state but also receives the broadcast state of its in-neighbors. The event will be triggered when \({f_{i}} ( {t,{e_{i}} ( t ),{{\tilde{x}}_{i}} ( t ),{\vartheta _{i}} ( t )} ) > 0\). Then, the agent i updates its controller with its current state and broadcasts its current state to out-neighbors. At the same time, \({e_{i}} ( t )\) is reset to zero. If the agent i receives the broadcast state of its in-neighbor, the controller will also be updated.
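A sketch of this per-agent monitoring step is given below (Python; the function signature, parameter packaging, and the commented communication primitive are illustrative assumptions, not the authors' implementation). It evaluates the trigger function (11) and performs the reset of \({e_{i}} ( t )\) described above.

```python
import numpy as np

def trigger_fired(e_i, xtilde_i, theta_i,
                  eta1, eta2, eta3, rho, eps_i, delta, alpha, gamma):
    """Evaluate the event-trigger function (11) for agent i."""
    f_i = (eta1 * np.linalg.norm(e_i) ** 2
           + eta2 * np.linalg.norm(e_i) ** (2 * alpha)
           + eta3 * np.linalg.norm(xtilde_i) ** (2 * alpha)
           - rho * eps_i * delta * abs(theta_i) ** (2 * gamma))
    return f_i > 0

# Schematic use inside agent i's loop: when the condition fires, broadcast
# the current observer state, update the controller, and reset e_i.
# if trigger_fired(e_i, xtilde_i, theta_i, *params):
#     broadcast(xhat_i)          # hypothetical communication primitive
#     e_i = np.zeros_like(e_i)   # measurement error reset at t_k^i
```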

Remark 2

The threshold used in [15] and [18] is suitable for asymptotic consensus control; however, it is incompatible with finite-time control because it cannot converge to 0 within a finite time. In contrast to the methods in [15] and [18], the threshold used in this study converges to 0 within a finite time. This property plays an important role in establishing the proposed finite-time event-triggered algorithm for MAS.

Theorem 1

Consider systems (1) and (2) with observer (3) and control protocol (4). Suppose that Assumptions 1 and 2 hold. If there exist a positive definite matrix P and appropriate positive scalars μ and β such that the following Riccati inequality

$$\begin{aligned} PA + {A^{T}}P - 2\mu PB{B^{T}}P + \beta {I_{n}} < 0 \end{aligned}$$
(12)

holds, and the trigger function is given by (11), then the finite-time leader-following consensus can be achieved for all initial conditions.

Proof

With the Kronecker product, (9) can be written in compact form as follows:

$$\begin{aligned} \dot {\tilde{x}} ( t ) ={}& \bigl( {{I_{N}} \otimes A - H \otimes B{B^{T}}P} \bigr)\tilde{x} ( t ) - \bigl( {H \otimes B{B^{T}}P} \bigr)e ( t ) \\ &{}- \bigl( {{I_{N}} \otimes B{B^{T}}P} \bigr) \operatorname{sig} { \bigl( { ( {H \otimes {I_{N}}} ) \bigl( {\tilde{x} ( t ) + e ( t )} \bigr)} \bigr)^{\alpha}} \\ &{} + ( {{I_{N}} \otimes GC} )h ( t ), \end{aligned}$$
(13)

where \(\tilde{x} ( t ) = { [ {\tilde{x}_{1}^{T} ( t ),\ldots,\tilde{x}_{N}^{T} ( t )} ]^{T}}\), \(e ( t ) = { [ {e_{1}^{T} ( t ),\ldots,e_{N}^{T} ( t )} ]^{T}}\), \(h ( t ) = { [ {h_{1}^{T} ( t ),\ldots, h_{N}^{T} ( t )} ]^{T}}\).

When Assumption 2 is satisfied, all N eigenvalues of the matrix H have positive real parts; in particular, for the connected undirected follower graph considered here, H is positive definite by Lemma 1.

From the observation error (6) and systems (1) and (3), \({\dot{h}_{i}} ( t ) = ( {A + GC} ){h_{i}} ( t )\). Thus,

$$\begin{aligned} \dot{h} ( t ) = \bigl( {{I_{N}} \otimes ( {A + GC} )} \bigr)h ( t ). \end{aligned}$$
(14)

If the observer feedback matrix G is designed such that \(A + GC\) is a Hurwitz matrix, then \({h_{i}} ( t )\) asymptotically approaches zero. According to (13) and (14), the dynamics of the estimation error \(h ( t )\) evolves independently of \(\tilde{x} ( t )\) and enters (13) only as a vanishing input; hence the stability analysis of (13) reduces to that of the following system:
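For completeness, a gain G placing the eigenvalues of \(A + GC\) in the open left half-plane can be obtained by pole placement on the dual pair \((A^{T}, C^{T})\). The sketch below (Python/SciPy) is one possible way to do this under Assumption 1; the chosen pole locations are arbitrary illustrative values, and the resulting gain is not claimed to be the one used in Sect. 4.

```python
import numpy as np
from scipy.signal import place_poles

def observer_gain(A, C, poles):
    """Return G such that A + G C has the prescribed (stable) poles.

    Works on the dual system: placing the poles of A^T - C^T K and
    transposing gives A - K^T C, so G = -K^T.
    """
    K = place_poles(A.T, C.T, poles).gain_matrix
    return -K.T

A = np.array([[0.0, 5.0], [-2.0, 2.0]])        # example system matrix from Sect. 4
C = np.array([[1.0, 0.0]])
G = observer_gain(A, C, poles=[-3.0, -4.0])    # illustrative pole choice
print(np.linalg.eigvals(A + G @ C))            # approximately [-3, -4]
```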

$$\begin{aligned} \dot {\tilde{x}} ( t ) ={}& \bigl( {{I_{N}} \otimes A - H \otimes B{B^{T}}P} \bigr)\tilde{x} ( t ) - \bigl( {H \otimes B{B^{T}}P} \bigr)e ( t ) \\ &{}- \bigl( {{I_{N}} \otimes B{B^{T}}P} \bigr) \operatorname{sig} { \bigl( { ( {H \otimes {I_{N}}} ) \bigl( {\tilde{x} ( t ) + e ( t )} \bigr)} \bigr)^{\alpha}}. \end{aligned}$$
(15)

For system (15), the Lyapunov function is constructed as follows:

$$\begin{aligned} V = {\tilde{x}^{T}} ( {{I_{N}} \otimes P} ) \tilde{x} + \sum_{i = 1}^{N} {\frac{\delta }{{1 + \gamma }}} { \vert {{ \vartheta _{i}}} \vert ^{1 + \gamma }}. \end{aligned}$$
(16)

Let \({V_{1}} = {\tilde{x}^{T}} ( {{I_{N}} \otimes P} )\tilde{x}\), \({V_{2}} = \sum_{i = 1}^{N} {\frac{\delta }{{1 + \gamma }}} { \vert {{\vartheta _{i}}} \vert ^{1 + \gamma }}\). Take the derivative of \({V_{1}}\) along the trajectory of system (15)

$$\begin{aligned} {\dot{V}_{1}} ={}& 2{\tilde{x}^{T}} ( {{I_{N}} \otimes P} ) \dot {\tilde{x}} \\ ={}& 2{{\tilde{x}}^{T}} ( {{I_{N}} \otimes P} ) \bigl[ { \bigl( {{I_{N}} \otimes A - H \otimes B{B^{T}}P} \bigr)\tilde{x} - \bigl( {H \otimes B{B^{T}}P} \bigr)e} \\ &{} - \bigl( {{I_{N}} \otimes B{B^{T}}P} \bigr) \operatorname{sig} {{ \bigl( { ( {H \otimes {I_{N}}} ) ( {\tilde{x} + e} )} \bigr)}^{\alpha}} \bigr] \\ ={}& 2{{\tilde{x}}^{T}} \bigl( {{I_{N}} \otimes PA - H \otimes PB{B^{T}}P} \bigr)\tilde{x} - 2{{\tilde{x}}^{T}} \bigl( {H \otimes PB{B^{T}}P} \bigr)e \\ &{} - 2{{\tilde{x}}^{T}} \bigl( {{I_{N}} \otimes PB{B^{T}}P} \bigr)\operatorname{sig} { \bigl( { ( {H \otimes {I_{N}}} ) ( {\tilde{x} + e} )} \bigr)^{\alpha}}. \end{aligned}$$
(17)

Since the undirected graph \(\mathcal{G}\) is connected, H is symmetric and positive definite. Let U be an orthogonal matrix such that \({U^{T}}HU = \operatorname{diag} \{ {{\lambda _{1}},\ldots,{\lambda _{N}}} \}\) with \(0 < {\lambda _{1}} \le \cdots \le {\lambda _{N}}\), and define \(\xi = ( {{U^{T}} \otimes {I_{n}}} )\tilde{x} = { [ {\xi _{1}^{T},\ldots,\xi _{N}^{T}} ]^{T}}\). Choosing \(\mu \le {\lambda _{1}}\), the first term of (17) can be bounded as follows:

$$\begin{aligned} &2{\tilde{x}^{T}} \bigl( {{I_{N}} \otimes PA - H \otimes PB{B^{T}}P} \bigr)\tilde{x} \\ &\quad = {\tilde{x}^{T}} \bigl( {{I_{N}} \otimes \bigl( {PA + {A^{T}}P} \bigr) - 2H \otimes PB{B^{T}}P} \bigr)\tilde{x} \\ &\quad \le \sum_{i = 1}^{N} {\xi _{i}^{T}} \bigl( { \bigl( {PA + {A^{T}}P} \bigr) - 2{\lambda _{1}}PB{B^{T}}P} \bigr){\xi _{i}} \\ &\quad \le \sum_{i = 1}^{N} {\xi _{i}^{T}} \bigl( { \bigl( {PA + {A^{T}}P} \bigr) - 2\mu PB{B^{T}}P} \bigr){\xi _{i}} \\ &\quad \le - \beta \sum_{i = 1}^{N} {\xi _{i}^{T}} {\xi _{i}} \\ &\quad = - \beta { \Vert {\tilde{x}} \Vert ^{2}}. \end{aligned}$$
(18)

According to Lemma 5, the second term in (17) is bounded as follows:

$$\begin{aligned} - 2{\tilde{x}^{T}} \bigl( {H \otimes PB{B^{T}}P} \bigr)e ={}& - 2{ \bigl( { \bigl( {{I_{N}} \otimes {B^{T}}P} \bigr)\tilde{x}} \bigr)^{T}} \bigl( {H \otimes {B^{T}}P} \bigr)e \\ \le{}& \frac{{{\lambda _{\max }} ( {PB{B^{T}}P} )}}{{{a_{1}}}}{ \Vert {\tilde{x}} \Vert ^{2}} \\ & {} + {a_{1}} {\lambda _{\max }} \bigl( {{H^{T}}H \otimes PB{B^{T}}P} \bigr){ \Vert e \Vert ^{2}}. \end{aligned}$$
(19)

According to Lemma 5, the last term in (17) is bounded as follows:

$$\begin{aligned} &{}- 2{\tilde{x}^{T}} \bigl( {{I_{N}} \otimes PB{B^{T}}P} \bigr)\operatorname{sig} { \bigl( { ( {H \otimes {I_{N}}} ) ( {\tilde{x} + e} )} \bigr)^{\alpha}} \\ &\quad = - 2{ \bigl( { \bigl( {{I_{N}} \otimes {B^{T}}P} \bigr)\tilde{x}} \bigr)^{T}} \bigl( {{I_{N}} \otimes {B^{T}}P} \bigr)\operatorname{sig} { ( q )^{\alpha}} \\ &\quad \le \frac{{{\lambda _{\max }} ( {PB{B^{T}}P} )}}{{{a_{2}}}}{ \Vert {\tilde{x}} \Vert ^{2}} + {a_{2}} {\lambda _{\max }} \bigl( {PB{B^{T}}P} \bigr){ \bigl( {\operatorname{sig} {{ ( q )}^{\alpha}}} \bigr)^{T}}\operatorname{sig} { ( q )^{\alpha}}, \end{aligned}$$
(20)

where \({a_{1}} > 0\), \({a_{2}} > 0\), and \(q = ( {H \otimes {I_{N}}} ) ( {\tilde{x} + e} )\).

Substituting (18), (19), and (20) into (17) yields

$$\begin{aligned} {\dot{V}_{1}} ={}& 2{\tilde{x}^{T}} ( {{I_{N}} \otimes P} ) \dot {\tilde{x}} \\ \le{}& - \beta { \Vert {\tilde{x}} \Vert ^{2}} + \frac{{{\lambda _{\max }} ( {PB{B^{T}}P} )}}{{{a_{1}}}}{ \Vert {\tilde{x}} \Vert ^{2}} + {a_{1}} {\lambda _{\max }} \bigl( {{H^{T}}H \otimes PB{B^{T}}P} \bigr){ \Vert e \Vert ^{2}} \\ & {} + \frac{{{\lambda _{\max }} ( {PB{B^{T}}P} )}}{{{a_{2}}}}{ \Vert {\tilde{x}} \Vert ^{2}} + {a_{2}} {\lambda _{\max }} \bigl( {PB{B^{T}}P} \bigr){ \bigl( {\operatorname{sig} {{ ( q )}^{\alpha}}} \bigr)^{T}}\operatorname{sig} { ( q )^{\alpha}}. \end{aligned}$$
(21)

On the basis of equation (10), the time derivative of \({V_{2}}\) satisfies

$$\begin{aligned} {\dot{V}_{2}} = \sum_{i = 1}^{N} \delta \operatorname{sig} { ( {{ \vartheta _{i}}} )^{\gamma}} { \dot{\vartheta}_{i}} = - \sum_{i = 1}^{N} {\delta {\varepsilon _{i}} {{ \vert {{\vartheta _{i}}} \vert }^{2\gamma }}}. \end{aligned}$$
(22)

Consequently, combining (21) and (22), we obtain

$$\begin{aligned} \dot{V} ={}& {\dot{V}_{1}} + {\dot{V}_{2}} \\ \le{}& - \beta { \Vert {\tilde{x}} \Vert ^{2}} + \frac{{{\lambda _{\max }} ( {PB{B^{T}}P} )}}{{{a_{1}}}}{ \Vert {\tilde{x}} \Vert ^{2}} + {a_{1}} {\lambda _{\max }} \bigl( {{H^{T}}H \otimes PB{B^{T}}P} \bigr){ \Vert e \Vert ^{2}} \\ &{} + \frac{{{\lambda _{\max }} ( {PB{B^{T}}P} )}}{{{a_{2}}}}{ \Vert {\tilde{x}} \Vert ^{2}} + {a_{2}} {\lambda _{\max }} \bigl( {PB{B^{T}}P} \bigr){ \bigl( {\operatorname{sig} {{ ( q )}^{\alpha}}} \bigr)^{T}}\operatorname{sig} { ( q )^{\alpha}} \\ & {} - \sum_{i = 1}^{N} {\delta {\varepsilon _{i}} {{ \vert {{ \vartheta _{i}}} \vert }^{2\gamma }}} \\ ={}& \biggl( { - \beta + \frac{{{\lambda _{\max }} ( {PB{B^{T}}P} )}}{{{a_{1}}}} + \frac{{{\lambda _{\max }} ( {PB{B^{T}}P} )}}{{{a_{2}}}}} \biggr){ \Vert {\tilde{x}} \Vert ^{2}} \\ & {} + {a_{1}} {\lambda _{\max }} \bigl( {{H^{T}}H \otimes PB{B^{T}}P} \bigr){ \Vert e \Vert ^{2}} \\ & {} + {a_{2}} {\lambda _{\max }} \bigl( {PB{B^{T}}P} \bigr){ \bigl( {\operatorname{sig} {{ ( q )}^{\alpha}}} \bigr)^{T}}\operatorname{sig} { ( q )^{\alpha}} - \sum _{i = 1}^{N} {\delta {\varepsilon _{i}} {{ \vert {{\vartheta _{i}}} \vert }^{2\gamma }}}. \end{aligned}$$
(23)

According to Lemma 2, the term \({a_{2}}{\lambda _{\max }} ( {PB{B^{T}}P} ){ ( {\operatorname{sig}{{ ( q )}^{\alpha}}} )^{T}}\operatorname{sig}{ ( q )^{\alpha}}\) can be bounded as follows:

$$\begin{aligned} &{a_{2}} {\lambda _{\max }} \bigl( {PB{B^{T}}P} \bigr){ \bigl( {\operatorname{sig} {{ ( q )}^{\alpha}}} \bigr)^{T}}\operatorname{sig} { ( q )^{\alpha}} \\ &\quad = {a_{2}} {\lambda _{\max }} \bigl( {PB{B^{T}}P} \bigr) \bigl\Vert { ( {H \otimes {I_{N}}} ) ( {\tilde{x} + e} )} \bigr\Vert _{2\alpha }^{2\alpha } \\ &\quad \le {a_{2}} {\lambda _{\max }} \bigl( {PB{B^{T}}P} \bigr){ ( {{N_{n}}} )^{1 - \alpha }} { \bigl\Vert { ( {H \otimes {I_{N}}} ) \tilde{x} + ( {H \otimes {I_{N}}} )e} \bigr\Vert ^{2\alpha }} \\ & \quad \le {a_{2}} {\lambda _{\max }} \bigl( {PB{B^{T}}P} \bigr){ ( {{N_{n}}} )^{1 - \alpha }} { \bigl( {2{{ \bigl\Vert { ( {H \otimes {I_{N}}} )\tilde{x}} \bigr\Vert }^{2}} + 2{{ \bigl\Vert { ( {H \otimes {I_{N}}} )e} \bigr\Vert }^{2}}} \bigr)^{\alpha}} \\ &\quad \le {a_{2}} {\lambda _{\max }} \bigl( {PB{B^{T}}P} \bigr){ ( {{N_{n}}} )^{1 - \alpha }} {2^{\alpha}} \bigl( {{{ \bigl\Vert { ( {H \otimes {I_{N}}} ) \tilde{x}} \bigr\Vert }^{2\alpha }} + {{ \bigl\Vert { ( {H \otimes {I_{N}}} )e} \bigr\Vert }^{2\alpha }}} \bigr) \\ &\quad = \bigl( {{a_{2}} {\lambda _{\max }} \bigl( {PB{B^{T}}P} \bigr){{ ( {{N_{n}}} )}^{1 - \alpha }} {2^{\alpha}}+ 1} \bigr){ \bigl\Vert { ( {H \otimes {I_{N}}} ) \tilde{x}} \bigr\Vert ^{2\alpha }} - { \bigl\Vert { ( {H \otimes {I_{N}}} )\tilde{x}} \bigr\Vert ^{2\alpha }} \\ & \qquad {} + {a_{2}} {\lambda _{\max }} \bigl( {PB{B^{T}}P} \bigr){ ( {{N_{n}}} )^{1 - \alpha }} {2^{\alpha}} { \bigl\Vert { ( {H \otimes {I_{N}}} )e} \bigr\Vert ^{2\alpha }} \\ &\quad \le \bigl( {{a_{2}} {\lambda _{\max }} \bigl( {PB{B^{T}}P} \bigr){{ ( {{N_{n}}} )}^{1 - \alpha }} {2^{\alpha}}+ 1} \bigr){ \bigl\Vert { ( {H \otimes {I_{N}}} )} \bigr\Vert ^{2 \alpha }} { \Vert {\tilde{x}} \Vert ^{2\alpha }} - { \bigl\Vert { ( {H \otimes {I_{N}}} )\tilde{x}} \bigr\Vert ^{2\alpha }} \\ & \qquad {} + {a_{2}} {\lambda _{\max }} \bigl( {PB{B^{T}}P} \bigr){ ( {{N_{n}}} )^{1 - \alpha }} {2^{\alpha}} { \bigl\Vert { ( {H \otimes {I_{N}}} )} \bigr\Vert ^{2\alpha }} { \Vert e \Vert ^{2 \alpha }}. \end{aligned}$$
(24)

Substituting (24) into (23) yields the following:

$$\begin{aligned} \dot{V}\le{}& \biggl( { - \beta + \frac{{{\lambda _{\max }} ( {PB{B^{T}}P} )}}{{{a_{1}}}} + \frac{{{\lambda _{\max }} ( {PB{B^{T}}P} )}}{{{a_{2}}}}} \biggr){ \Vert {\tilde{x}} \Vert ^{2}} \\ & {} + {a_{1}} {\lambda _{\max }} \bigl( {{H^{T}}H \otimes PB{B^{T}}P} \bigr){ \Vert e \Vert ^{2}} \\ & {} + \bigl( {{a_{2}} {\lambda _{\max }} \bigl( {PB{B^{T}}P} \bigr){{ ( {{N_{n}}} )}^{1 - \alpha }} {2^{\alpha}}+ 1} \bigr){ \bigl\Vert { ( {H \otimes {I_{N}}} )} \bigr\Vert ^{2 \alpha }} { \Vert {\tilde{x}} \Vert ^{2\alpha }} \\ &{} + {a_{2}} {\lambda _{\max }} \bigl( {PB{B^{T}}P} \bigr){ ( {{N_{n}}} )^{1 - \alpha }} {2^{\alpha}} { \bigl\Vert { ( {H \otimes {I_{N}}} )} \bigr\Vert ^{2\alpha }} { \Vert e \Vert ^{2\alpha }} \\ & {} - { \bigl\Vert { ( {H \otimes {I_{N}}} )\tilde{x}} \bigr\Vert ^{2\alpha }} - \sum_{i = 1}^{N} {\delta { \varepsilon _{i}} {{ \vert {{\vartheta _{i}}} \vert }^{2\gamma }}}. \end{aligned}$$
(25)

Using the trigger function (11), (25) can be rewritten as

$$\begin{aligned} \dot{V} \le{}& \biggl( { - \beta + \frac{{{\lambda _{\max }} ( {PB{B^{T}}P} )}}{{{a_{1}}}} + \frac{{{\lambda _{\max }} ( {PB{B^{T}}P} )}}{{{a_{2}}}}} \biggr){ \Vert {\tilde{x}} \Vert ^{2}} \\ & {} - { \bigl\Vert { ( {H \otimes {I_{N}}} )\tilde{x}} \bigr\Vert ^{2\alpha }} - ( {1 - \rho } )\delta \sum_{i = 1}^{N} {{\varepsilon _{i}} {{ \vert {{\vartheta _{i}}} \vert }^{2\gamma }}}. \end{aligned}$$
(26)

Let \(\beta > \frac{{{\lambda _{\max }} ( {PB{B^{T}}P} )}}{{{a_{1}}}} + \frac{{{\lambda _{\max }} ( {PB{B^{T}}P} )}}{{{a_{2}}}}\). Then,

$$\begin{aligned} \dot{V} \le - { \bigl\Vert { ( {H \otimes {I_{N}}} )\tilde{x}} \bigr\Vert ^{2\alpha }} - ( {1 - \rho } )\delta \sum _{i = 1}^{N} {{\varepsilon _{i}} {{ \vert {{\vartheta _{i}}} \vert }^{2\gamma }}}. \end{aligned}$$
(27)

The first term in (27) can be bounded as follows:

$$\begin{aligned} - { \bigl\Vert { ( {H \otimes {I_{N}}} )\tilde{x}} \bigr\Vert ^{2 \alpha }} = {}& - { \bigl\Vert {{{\tilde{x}}^{T}} \bigl( {{H^{T}} \otimes {I_{N}}} \bigr) ( {H \otimes {I_{N}}} )\tilde{x}} \bigr\Vert ^{\alpha}} \\ = {}& - { \bigl\Vert {{{\tilde{x}}^{T}} \bigl( {{H^{T}}H \otimes {I_{N}}} \bigr)\tilde{x}} \bigr\Vert ^{\alpha}} \\ = {}& - { \biggl( { \frac{{{{\tilde{x}}^{T}} ( {{H^{T}}H \otimes {I_{N}}} )\tilde{x}}}{{{{\tilde{x}}^{T}} ( {{I_{N}} \otimes P} )\tilde{x}}}{{ \tilde{x}}^{T}} ( {{I_{N}} \otimes P} )\tilde{x}} \biggr)^{\alpha}} \\ ={}& - { \biggl( { \frac{{{{\tilde{x}}^{T}} ( {{H^{T}}H \otimes {I_{N}}} )\tilde{x}}}{{{{\tilde{x}}^{T}} ( {{I_{N}} \otimes P} )\tilde{x}}}} \biggr)^{\alpha}}V_{1}^{\alpha} \\ \le{} & - { \biggl( { \frac{{{\lambda _{\min }} ( {{H^{T}}H} )}}{{{\lambda _{\max }} ( P )}}} \biggr)^{\alpha}}V_{1}^{\alpha} \\ = {}& - {c_{1}}V_{1}^{\alpha}, \end{aligned}$$
(28)

where \({c_{1}} = { ( { \frac{{{\lambda _{\min }} ( {{H^{T}}H} )}}{{{\lambda _{\max }} ( P )}}} )^{\alpha}} > 0\).

Meanwhile, the second term in (27) can be bounded as follows:

$$\begin{aligned} - ( {1 - \rho } )\delta \sum_{i = 1}^{N} {{ \varepsilon _{i}} {{ \vert {{\vartheta _{i}}} \vert }^{2\gamma }}} \le{} &- ( {1 - \rho } ){\varepsilon _{\min }}\sum _{i = 1}^{N} { ( {1 + \gamma } ) \frac{\delta }{{1 + \gamma }}{{ \vert {{\vartheta _{i}}} \vert }^{ ( {1 + \gamma } )\frac{{2\gamma }}{{1 + \gamma }}}}} \\ \le {}& - ( {1 - \rho } ){\varepsilon _{\min }} \frac{{{{ ( {1 + \gamma } )}^{\frac{{2\gamma }}{{1 + \gamma }}}}}}{{{\delta ^{\frac{{\gamma - 1}}{{1 + \gamma }}}}}}{ \Biggl( {\sum_{i = 1}^{N} {\frac{\delta }{{1 + \gamma }}{{ \vert {{\vartheta _{i}}} \vert }^{ ( {1 + \gamma } )}}} } \Biggr)^{\frac{{2\gamma }}{{1 + \gamma }}}} \\ = {}& - {c_{2}}V_{2}^{\frac{{2\gamma }}{{1 + \gamma }}}, \end{aligned}$$
(29)

where \({c_{2}} = ( {1 - \rho } ){\varepsilon _{\min }} \frac{{{{ ( {1 + \gamma } )}^{\frac{{2\gamma }}{{1 + \gamma }}}}}}{{{\delta ^{\frac{{\gamma - 1}}{{1 + \gamma }}}}}} > 0\), and \({\varepsilon _{\min }}=\min \{ {{\varepsilon _{1}},\ldots,{ \varepsilon _{N}}} \}\).

Combining (28) and (29) with (27), we further obtain

$$\begin{aligned} \dot{V} \le - {c_{1}}V_{1}^{\alpha}- {c_{2}}V_{2}^{ \frac{{2\gamma }}{{1 + \gamma }}}. \end{aligned}$$
(30)

According to (30), V decreases until \(V \le 1\) within a finite time, which implies that \({V_{1}} \le 1\) and \({V_{2}} \le 1\) hold after a finite time. Furthermore, since \(0 < \alpha < 1\), one has \(\alpha < \frac{{2\alpha }}{{1 + \alpha }}\); and with γ in (10) chosen such that \(\gamma < \alpha \), one has \(\frac{{2\gamma }}{{1 + \gamma }} < \frac{{2\alpha }}{{1 + \alpha }}\). Hence \(V_{1}^{\alpha}\ge V_{1}^{\frac{{2\alpha }}{{1 + \alpha }}}\) and \(V_{2}^{\frac{{2\gamma }}{{1 + \gamma }}} \ge V_{2}^{ \frac{{2\alpha }}{{1 + \alpha }}}\). Then, we have

$$\begin{aligned} \dot{V} &\le - {c_{1}}V_{1}^{\frac{{2\alpha }}{{1 + \alpha }}} - {c_{2}}V_{2}^{ \frac{{2\alpha }}{{1 + \alpha }}} \\ &< - \min ( {{c_{1}},{c_{2}}} ) \bigl( {V_{1}^{ \frac{{2\alpha }}{{1+ \alpha }}}+ V_{2}^{ \frac{{2\alpha }}{{1+ \alpha }}}} \bigr) \\ &< - \min ( {{c_{1}},{c_{2}}} ){ ( {{V_{1}}+ {V_{2}}} )^{\frac{{2\alpha }}{{1+ \alpha }}}} \\ &< - c{V^{\eta}}, \end{aligned}$$
(31)

where \(c = \min ( {{c_{1}},{c_{2}}} )\) and \(\eta = \frac{{2\alpha }}{{1+ \alpha }}\).

According to Lemma 3, \(V ( t ) \to 0\) within a finite time T. This completes the proof. □

Remark 3

In the literature, the event-triggered mechanism has been combined with finite-time consensus [19, 20, 22, 25, 27], and it has also been combined with unmeasurable states [28, 29, 31, 32, 34]. The problems of unmeasurable states and finite-time convergence are considered together in [35]. However, none of the aforementioned studies considers the unmeasurable state, the event-triggered mechanism, and finite-time consensus simultaneously. These three problems are jointly investigated in the current research.

Remark 4

An observer-based event-triggered strategy is proposed; the event-triggered condition (11) is distributed, and the trigger times of the agents are independent of one another. Under the finite-time event-triggered consensus protocol, when the state-based measurement error of agent i exceeds the given threshold, an event is triggered: the controller is updated with the current state, the current state is broadcast to the out-neighbors, and the measurement error of agent i is reset to zero. If the measurement error remains below the threshold, no event is triggered and no communication is required until the next event.

Theorem 2

Consider the leader-follower MAS (1) and (2). If the event-trigger condition (11) is satisfied, then the Zeno behavior can be avoided under the effect of consensus control protocol (4).

Proof

Assume that the current trigger time is \(t_{k}^{i}\) and that the next trigger time \(t_{k+ 1}^{i}\) is determined by the event-trigger condition (11). Consider the time interval \(t \in [ {t_{k}^{i},t_{k + 1}^{i}} )\), and let the inter-event time be \(\tau = t_{k + 1}^{i} - t_{k}^{i}\). From the previous analysis, \({\tilde{x}_{i}}\) and \({e_{i}}\) are convergent, and \(\Vert {{{\tilde{x}}_{i}}} \Vert \) and \(\Vert {{e_{i}}} \Vert \) are bounded. Let the upper bounds of \(\Vert {{e_{i}}} \Vert \) and \(\Vert {{{\tilde{x}}_{i}}} \Vert \) on this interval be \({b_{1}}\tau \) and \({b_{2}}\tau \), respectively, where \({b_{1}}\) and \({b_{2}}\) are positive constants. Then, we can derive the following result:

$$\begin{aligned} {\eta _{1}} { \Vert {{e_{i}}} \Vert ^{2}} + {\eta _{2}} { \Vert {{e_{i}}} \Vert ^{2\alpha }} + {\eta _{3}} { \Vert {\tilde{x}} \Vert ^{2 \alpha }} \le {\eta _{1}} { ( {{b_{1}}\tau } )^{2}} + { \eta _{2}} { ( {{b_{1}}\tau } )^{2\alpha }}+ { \eta _{3}} { ( {{b_{2}}\tau } )^{2\alpha }} \stackrel{ \Delta}{=} {\mathrm{{q}}} ( \tau ). \end{aligned}$$
(32)

The lower bound \({\tau _{1}}\) of the inter-event time can be determined from the solution of the following equation:

$$\begin{aligned} \rho {\varepsilon _{i}}\delta { \bigl\vert {{\vartheta _{i}} ( t )} \bigr\vert ^{2\gamma }}={}&{\eta _{1}} { \Vert {{e_{i}}} \Vert ^{2}} + {\eta _{2}} { \Vert {{e_{i}}} \Vert ^{2\alpha }} + {\eta _{3}} { \Vert {\tilde{x}} \Vert ^{2\alpha }} \\ = {}&{\eta _{1}} { ( {{b_{1}} {\tau _{1}}} )^{2}} + {\eta _{2}} { ( {{b_{1}} {\tau _{1}}} )^{2\alpha }}+ {\eta _{3}} { ( {{b_{2}} {\tau _{1}}} )^{2\alpha }}. \end{aligned}$$
(33)

According to equation (33), if \({\vartheta _{i}} ( t ) \ne 0\), then \(\tau \ge {\tau _{1}} > 0\). Because the proposed dynamic threshold converges to 0 within a finite time, the system cannot guarantee that the lower bound of the inter-event time remains strictly positive for all time. However, if appropriate parameters are selected such that the consensus time of the system is less than the time at which \({\vartheta _{i}} ( t )\) converges to 0 (i.e., the finite-time consensus is achieved before the dynamic threshold of each agent reaches 0), then the Zeno behavior will not occur. □
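As an illustration of how the inter-event bound \({\tau _{1}}\) in (33) can be evaluated, the following sketch (Python/SciPy; all numerical values are placeholder assumptions, not taken from the paper) finds the positive root of the scalar equation \(\rho {\varepsilon _{i}}\delta \vert {\vartheta _{i}} ( t ) \vert ^{2\gamma } = {\eta _{1}} ( {b_{1}}\tau )^{2} + {\eta _{2}} ( {b_{1}}\tau )^{2\alpha } + {\eta _{3}} ( {b_{2}}\tau )^{2\alpha }\) by bracketing and bisection.

```python
from scipy.optimize import brentq

def tau_lower_bound(theta_abs, eta1, eta2, eta3, b1, b2,
                    rho, eps_i, delta, alpha, gamma):
    """Positive solution tau_1 of equation (33) for a given |theta_i(t)| > 0."""
    lhs = rho * eps_i * delta * theta_abs ** (2 * gamma)
    g = lambda tau: (eta1 * (b1 * tau) ** 2
                     + eta2 * (b1 * tau) ** (2 * alpha)
                     + eta3 * (b2 * tau) ** (2 * alpha)
                     - lhs)
    # g(0) = -lhs < 0 and g is increasing in tau, so a bracketing interval exists.
    hi = 1.0
    while g(hi) <= 0:
        hi *= 2.0
    return brentq(g, 0.0, hi)

# Placeholder values, for illustration only.
print(tau_lower_bound(theta_abs=0.5, eta1=4.0, eta2=3.0, eta3=3.0,
                      b1=2.0, b2=2.0, rho=0.8, eps_i=1.0,
                      delta=60.0, alpha=0.5, gamma=0.45))
```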

Remark 5

The results of this study provide ideas for solving the finite-time output consensus problem of general linear MAS using event-triggered mechanisms. In particular, this study shows that the Zeno behavior does not occur when appropriate parameters are selected. Our future research will focus on event-triggered mechanisms that exclude Zeno behavior entirely.

4 Numerical simulation

A numerical example is given to verify the theoretical results. Consider an undirected topology consisting of five followers and a leader, as shown in Fig. 1. The connection weight between agents is 1, and the Laplacian matrix of the network topology is given by

$$ L = \begin{bmatrix} 2 & -1 & 0 & -1 & 0 \\ -1 & 2 & -1 & 0 & 0 \\ 0 & -1 & 2 & 0 & -1 \\ -1 & 0 & 0 & 2 & -1 \\ 0 & 0 & -1 & -1 & 2 \end{bmatrix}. $$

The leader connection matrix is \(D = \operatorname{diag} \{ {1,1,1,0,0} \}\).

Figure 1

Communication topology

The constant matrices of the system dynamics are \(A = [ {0 \ 5; - 2 \ 2} ]\), \(B = [ {1;1} ]\), \(G = [ {1;1} ]\), and \(C = [ {1,0} ]\). The relevant parameters are selected as \(\alpha = 0.5\), \(\gamma = 0.45\), \({a_{1}} = 3\), \({a_{2}} = 2\), \(\delta = 60\), \(\rho = 0.8\), and \(\mu = 0.4\). Let the initial states of system (1) be \({x_{1}} ( 0 ) = { [ {2, - 1.3} ]^{T}}\), \({x_{2}} ( 0 ) = { [ {0.5, - 1.8} ]^{T}}\), \({x_{3}} ( 0 ) = { [ {1.5, - 0.8} ]^{T}}\), \({x_{4}} ( 0 ) = { [ {0.8, - 1} ]^{T}}\), and \({x_{5}} ( 0 ) = { [ {2.6, - 1.2} ]^{T}}\). The initial state of the leader is \({x_{0}} ( 0 ) = { [ {0,0} ]^{T}}\).
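The matrix P used in gain (4) is not listed here. One way to obtain a P satisfying inequality (12) for these matrices is to solve the corresponding algebraic Riccati equation with a small margin and then verify the inequality numerically, as in the sketch below (Python/SciPy; the values of β and the margin are illustrative choices, and this is only one admissible construction, not necessarily the P used to generate the figures).

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 5.0], [-2.0, 2.0]])
B = np.array([[1.0], [1.0]])
mu, beta, margin = 0.4, 1.0, 0.1      # beta and margin: illustrative choices

# Solving A^T P + P A - 2*mu*P B B^T P + (beta + margin) I = 0
# gives P A + A^T P - 2*mu*P B B^T P + beta*I = -margin*I < 0, i.e., (12).
P = solve_continuous_are(A, B, (beta + margin) * np.eye(2),
                         np.array([[1.0 / (2.0 * mu)]]))
K = B.T @ P                           # control gain K = B^T P in (4)

M = P @ A + A.T @ P - 2 * mu * P @ B @ B.T @ P + beta * np.eye(2)
print(np.all(np.linalg.eigvalsh(M) < 0))   # True: inequality (12) holds
```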

Figures 2 and 3 show that the state tracking error of each agent reaches zero within a finite time. Figure 4 shows the event-triggered update states \({x_{ij}} ( {{t_{k}}} )\) (\(i = 1, \ldots ,5\), \(j = 1,2\)). Figures 5 and 6 show that the observer errors reach zero within a finite time. Figure 7 shows the event-triggered interval of each agent under strategy (11). The errors and thresholds in the trigger function are shown in Fig. 8. The state tracking error \({x_{i}} - {x_{0}}\), the event-triggered update state \(x ( {{t_{k}}} )\), and the observer error \({h_{i}} ( t )\) all approach zero within a finite time, which means that the system achieves consensus within a finite time. The numerical simulation verifies the feasibility and effectiveness of the proposed control method and event-triggered strategy.

Figure 2

State tracking error of each agent (first component)

Figure 3

State tracking error of each agent (second component)

Figure 4

Event-triggered update state

Figure 5

Observer error \({h_{i1}} ( t )\)

Figure 6

Observer error \({h_{i2}} ( t )\)

Figure 7

The trigger interval of each agent under the control strategy

Figure 8

The errors and thresholds for each agent

5 Conclusion

The finite-time leader-following consensus problem of general linear MAS is studied from an observer-based perspective. As the states of some systems cannot be measured directly, an observer is used to estimate the system state. An observer-based distributed control protocol is proposed, in which the event-triggered mechanism relies on an external dynamic threshold to achieve consensus within a finite time. Finite-time consensus is then established using matrix theory, the Lyapunov method, and algebraic graph theory. The analysis indicates that the Zeno behavior can be avoided by selecting appropriate parameters. Finally, a numerical example is given to verify the effectiveness of the method. Although the current work only considers general linear dynamics, future work will consider how to extend the results to practical nonlinear systems.