1 Introduction

Consensus of multi-agent systems has been a hot topic in the control theory community over the past few years because of its wide applications in formation control, distributed monitoring and so forth [7, 19, 24, 27, 34]. The objective of consensus control is to design a protocol under which the states of all subsystems achieve synchronization in some sense [2, 23, 38, 44, 48].

As a class of typical multi-agent systems, second-order nonlinear multi-agent systems have been widely investigated in recent years. In [43], a finite-time protocol was proposed for a class of second-order nonlinear multi-agent systems, where estimates of the convergence time and the final state were given. A periodically intermittent pinning control strategy was applied to second-order nonlinear multi-agent systems with time delays in [31]. Consensus for second-order nonlinear multi-agent systems with fractional-order dynamics was investigated in [10]. Wang et al. [29] presented a protocol with only aperiodically intermittent position measurements for second-order nonlinear multi-agent systems with time delays. By using stochastic theory, the mean square consensus problem was addressed for a class of second-order nonlinear multi-agent systems subject to white noises in [13]. It is noted that in these works the eigenvalues of the Laplacian matrix are needed in the protocol design, so the protocols are not fully distributed. To overcome this drawback, many schemes based on the adaptive control strategy have been developed. In [12, 50], fully distributed consensus for second-order nonlinear multi-agent systems with unmodeled dynamics was investigated. Iterative learning-based fully distributed consensus protocols were proposed in [14, 35]. By the protocol in [42], consensus of second-order nonlinear multi-agent systems can be reached in finite time without global information. Two edge-based distributed adaptive protocols were designed for leaderless and leader-following multi-agent systems, respectively, in [39]. For a second-order nonlinear multi-agent system with position constraints and unknown control directions, a fully distributed protocol was presented in [1]. In [32], a fully distributed protocol was derived without using velocity measurements. However, continuous data transmissions among subsystems are required in the implementation of these fully distributed protocols.

Event-triggered control is one of the important techniques for avoiding continuous data transmission [5, 25, 26, 33]. In recent years, the event-triggered control strategy has been widely used in cooperative control of multi-agent systems. For instance, event-triggered formation control for multiple robots was considered in [37]; containment control for multi-agent systems with event-triggered protocols was investigated in [15, 45]; for the cooperative output regulation problem, event-triggered protocols were given in [21, 41]; in [9, 49], the event-triggered consensus tracking control problem was addressed. As a matter of fact, some efforts have been made on fully distributed event-triggered consensus of multi-agent systems. In [46], fully distributed event-triggered consensus for double-integrator multi-agent systems was studied. Higher-order linear multi-agent systems were considered in [17, 20, 30, 36]. In [3, 18], fully distributed consensus for nonlinear multi-agent systems was addressed; however, the nonlinear terms are required to be bounded and to satisfy the Lipschitz condition, respectively. There are also many results on event-triggered consensus for nonlinear multi-agent systems without these strict conditions, such as [8, 16, 28]. However, the protocol design or the convergence performance relies on some global information, so the protocols in these results are not fully distributed. To the best of our knowledge, fully distributed event-triggered consensus for more general nonlinear multi-agent systems has not been considered in the existing literature. Designing a fully distributed consensus protocol for totally unknown second-order nonlinear multi-agent systems by event-triggered strategies is thus the task of this article. The main work and contributions of the article are given as follows. a) A new consensus protocol is developed for a class of nonlinear multi-agent systems, whose implementation does not depend on any global information or continuous communications. b) Compared with [1, 12, 14, 32, 35, 39, 42, 50], continuous data transmissions among subsystems are avoided in this article; compared with [8, 16, 28], the protocol is designed in a fully distributed manner. c) Compared with [17, 20, 30, 36, 46], a nonlinear multi-agent system is considered here; in addition, the nonlinear terms are more general than those in [3, 18]. The main difficulty of the work lies in that the proposed protocol needs to handle both the unknown topology information and the more general nonlinearities, while ensuring that no Zeno behavior occurs.

We organize the rest of the article as follows. The model, objective and some mathematical tools are introduced in Sect. 2. In Sect. 3, the main results are given. Simulation results are shown in Sect. 4. Conclusions are summarized in Sect. 5.

Notations: \(\mathbf{{R}}\) denotes the set of all real numbers. \(\mathbf{{R}}^{m\times n}\) represents the set of all \(m \times n\) real matrices. \(\mathbf{{1}}_n\) is the n-dimensional vector with all elements equal to 1 and \(I_n\) is the n-dimensional identity matrix. The symbol \(\otimes \) is used to denote the Kronecker product operator. Denote by sig\((\cdot )\) the sign function.

2 Problem Formulation and Preliminaries

This article considers a multi-agent system consisting of n subsystems described by:

$$\begin{aligned} {{{\dot{\xi }} }_i}= & {} {\psi _i} \nonumber \\ {{{\dot{\psi }} }_i}= & {} {c_i}{\upsilon _i} + {\phi _i}({\xi _i},{\psi _i})+\rho _i,\quad i=1,2,\ldots ,n, \end{aligned}$$
(1)

where \( {\xi }_i\in \mathbf{{R}}\) and \(\psi _i \in \mathbf{{R}}\) are the position and velocity of the ith subsystem, respectively; \({\upsilon _i}\in \mathbf{{R}}\) is the input to be designed, and \(c_i\ge {{\underline{c}}_i}>0\) is an unknown gain coefficient (\({\underline{c}}_i\) is a known positive constant); \({\phi _i}( \cdot )\in \mathbf{{R}}\) is an unknown continuous nonlinear function, and \(\rho _i\) is an external disturbance satisfying \(\left| {{\rho _i}} \right| \le {\bar{\rho }}_i \), where \({\bar{\rho }}_i>0\) is a known constant.
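For illustration, the following sketch (Python with a simple Euler discretization; all names are hypothetical) shows how a single subsystem of form (1) can be simulated numerically. Only the lower bound \({\underline{c}}_i\) and the disturbance bound \({\bar{\rho }}_i\) are assumed to be available to the controller; \(c_i\), \(\phi _i\) and \(\rho _i\) themselves remain unknown to it.

def subsystem_step(xi, psi, upsilon, c_i, phi_i, rho_val, dt):
    # One Euler step of subsystem (1): dot(xi) = psi,
    # dot(psi) = c_i*upsilon + phi_i(xi, psi) + rho_i(t).
    # phi_i is a callable standing in for the unknown nonlinearity;
    # rho_val is the disturbance value at the current time, |rho_val| <= rho_bar.
    xi_next = xi + dt * psi
    psi_next = psi + dt * (c_i * upsilon + phi_i(xi, psi) + rho_val)
    return xi_next, psi_next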

Remark 1

Second-order nonlinear multi-agent systems have drawn much attention because of their typicality and wide applicability. Similar multi-agent systems have been widely investigated in the existing literature.

Definition 1

(bounded consensus [38, 48]) Multi-agent systems with subsystems in the form of (1) are said to reach bounded consensus if there are two positive constants \({\varepsilon _1}\) and \({\varepsilon _2}\) such that, for all \(i,j=1,2,\ldots ,n\),

$$\begin{aligned} \mathop {\lim }\limits _{t \rightarrow \infty } \left| {{\xi _i} - {\xi _j}} \right| \le {\varepsilon _1}\quad \text {and}\quad \mathop {\lim }\limits _{t \rightarrow \infty } \left| {{\psi _i} - {\psi _j}} \right| \le {\varepsilon _2}. \end{aligned}$$
(2)

This article aims to design an appropriate event-triggered protocol that does not use global information such that bounded consensus can be reached. Moreover, the convergence errors should be made as small as possible by appropriately adjusting the parameters in the protocol.

Remark 2

The task of this article is inspired by [1, 3, 12, 14, 17, 18, 20, 30, 32, 35, 36, 39, 42, 46, 50]. Compared with [1, 12, 14, 32, 35, 39, 42, 50], continuous data transmissions among subsystems are avoided in this article. Compared with [17, 20, 30, 36, 46], a nonlinear multi-agent system is considered here. In addition, the nonlinear terms are more general than those in [3, 18], and the neural network method is used here to deal with the nonlinearities. In [8, 16, 28], neural network-based event-triggered consensus protocols were given for nonlinear systems; however, those protocols are not fully distributed.

Lemma 1

[11] For any vectors \({\chi _1}\), \({\chi _2}\in \mathbf {R}^n\), the following inequality holds:

$$\begin{aligned} {{\chi }_1}^T{{\chi }_2} \le \gamma {{\chi }_1}^T{{\chi }_1} + \frac{1}{\gamma }{{\chi }}_2^T{{\chi }_2}, \end{aligned}$$
(3)

where \(\gamma \) is any positive real number.

2.1 Graph Theory

In this article, the subsystems communicate under an undirected graph topology. \({\mathcal {G}} = \{ {\mathcal {V}},{\mathcal {E}}\} \) is used to denote a graph, where \({\mathcal {V}} = \{ v_1, \ldots ,v_n\} \) is the node set and \({\mathcal {E}} \subseteq {\mathcal {V}} \times {\mathcal {V}}\) is the edge set. For each edge, there is a corresponding weight \(a_{ij}>0\) if \((v_j,v_i) \in {\mathcal {E}}\); otherwise, \(a_{ij}=0\). A path is a series of linked edges with positive weights. An undirected graph is connected if there is at least one path between any two nodes. The Laplacian matrix is defined as \(L = {[{l_{ij}}]_{n \times n}}\in \mathbf{{R}}^{n\times n}\), where \({l_{ii}} = \sum \limits _{j = 1,j \ne i}^n {{a_{ij}}}\) and \( {l_{ij}} = - {a_{ij}}\) for \( i \ne j\).
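As a small numerical illustration (Python with NumPy; the adjacency weights below are arbitrary), the Laplacian matrix can be assembled from the weights \(a_{ij}\) and Lemma 2 can be checked through its eigenvalues.

import numpy as np

def laplacian(A):
    # L = D - A, with l_ii = sum_{j != i} a_ij and l_ij = -a_ij for i != j.
    A = np.asarray(A, dtype=float)
    return np.diag(A.sum(axis=1)) - A

# Path graph on three nodes: 0 is a simple eigenvalue and lambda_2 = 1 > 0,
# consistent with the graph being connected (Lemma 2).
A = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
print(np.sort(np.linalg.eigvalsh(laplacian(A))))  # [0. 1. 3.]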

Lemma 2

[6] If graph \({\mathcal {G}}\) is undirected and connected, then 0 is a simple eigenvalue of the Laplacian matrix. Denote by \(\lambda _q\) the qth smallest eigenvalue of L, then we have \(0 = {\lambda _1} < {\lambda _2} \le \ldots \le {\lambda _n}\) and \({\lambda _2} = {\min _{{{\mathbf {1}}_n^T}x = 0,x \ne 0}}\frac{{{x^T}Lx}}{{{x^T}x}}\).

2.2 Radial Basis Function Neural Networks

Radial basis function (RBF) neural networks will be used to approximate the unknown continuous functions in this article. A necessary lemma is given as follows.

Lemma 3

[40] For any continuous function \(\phi ({y})\) defined on a compact set \({\mathcal {C}}\subset \mathbf{{R}}^h\), the neural network \({\vartheta ^T}\varUpsilon (y)\) can approximate \(\phi ({y})\) with a bounded error \(\sigma (y)\), that is,

$$\begin{aligned} \phi (y) = {\vartheta ^T}\varUpsilon (y) + \sigma (y),\quad \left| {\sigma (y)} \right| \le \sigma , \end{aligned}$$
(4)

where \(\vartheta \in {\mathbf{{R}}^h}\) is the ideal weight vector, \(\varUpsilon (y)={[{\varUpsilon _1}(y), \ldots ,{\varUpsilon _h}(y)]^T}\in {\mathbf{{R}}^h}\) is the basis function vector, and \(\sigma >0\) is a constant. The basis functions are chosen as \({\varUpsilon _j}(y) = {\text {exp}} \left( { - \frac{{{{(y - {c_j})}^T}(y - {c_j} )}}{{\mu _j^2}}}\right) \), where \(c_j\) and \(\mu _j\) are the center and width of the jth basis function, respectively.
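A minimal sketch of the approximator in Lemma 3 is given below (Python with NumPy; the centers and widths are designer-chosen placeholders, not values prescribed by the lemma).

import numpy as np

def rbf_vector(y, centers, widths):
    # Gaussian basis Upsilon_j(y) = exp(-||y - c_j||^2 / mu_j^2), j = 1,...,h.
    y = np.atleast_1d(np.asarray(y, dtype=float))
    sq_dist = np.sum((np.asarray(centers, dtype=float) - y) ** 2, axis=1)
    return np.exp(-sq_dist / np.asarray(widths, dtype=float) ** 2)

def nn_approximation(vartheta_hat, y, centers, widths):
    # vartheta_hat^T Upsilon(y): the neural network estimate of phi(y).
    return float(np.asarray(vartheta_hat) @ rbf_vector(y, centers, widths))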

3 Main Results

3.1 Protocol Design

To present the protocol design clearly, we introduce the following parameters and variables: \(M = \left[ {\begin{array}{*{20}{c}} 0&{}1\\ 0&{}0 \end{array}} \right] \), \(N = \left[ {\begin{array}{*{20}{c}} 0\\ 1 \end{array}} \right] \); \(\varOmega = \left[ {\begin{array}{*{20}{c}} {{\omega _1}}&{}{{\omega _2}}\\ {{\omega _2}}&{}{{\omega _3}} \end{array}} \right] \) is a positive definite matrix satisfying

$$\begin{aligned} {M^T}\varOmega + \varOmega M - \varOmega N{N^T}\varOmega \le - \beta {I_2}, \end{aligned}$$
(5)

where \(\beta >0\) is a constant.
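Condition (5) is a small linear matrix inequality and can be checked numerically. The sketch below (Python with NumPy) verifies it for the matrix \(\varOmega \) and the value \(\beta =4\) reported later in the simulation section; it is only a feasibility check, not a design procedure.

import numpy as np

M = np.array([[0.0, 1.0],
              [0.0, 0.0]])
N = np.array([[0.0],
              [1.0]])
Omega = np.array([[12.0, 2.0],   # [omega_1, omega_2]
                  [2.0, 6.0]])   # [omega_2, omega_3]
beta = 4.0

Q = M.T @ Omega + Omega @ M - Omega @ N @ N.T @ Omega
# (5) holds iff Q + beta*I_2 is negative semidefinite.
print(np.linalg.eigvalsh(Q + beta * np.eye(2)))  # [-28.  0.], all <= 0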

A fully distributed consensus protocol is proposed in the following form:

$$\begin{aligned} {{{\dot{\varsigma }} }_i}= & {} {\varPhi _i} \nonumber \\ {{{\dot{\varPhi }} }_i}= & {} {\upsilon _i}^o\triangleq l{{{\hat{\theta }} }_i}{e_i}(t_k^i) \nonumber \\ {\upsilon _i}= & {} {f_i}({\xi _i},{\varsigma _i},{\psi _i},{\varPhi _i},{{{\hat{\vartheta }} }_i}), \quad i=1,2,\ldots ,n, \end{aligned}$$
(6)

where \(t \in [t_k^i,t_{k + 1}^i)\); \(t_k^i\) is the kth triggering instant of subsystem i, \(k = 0,1, \ldots \), with \(t_0^i = 0\); \({\varsigma _i}\), \({\varPhi _i}\), \({{{\hat{\theta }} }_i}\) and \({{{\hat{\vartheta }} }_i}\) are adaptive variables to be designed; \({e_i} = {\omega _2}\sum \limits _{j = 1}^n {{a_{ij}}({\varsigma _j} - {\varsigma _i})} + {\omega _3}\sum \limits _{j = 1}^n {{a_{ij}}({\varPhi _j} - {\varPhi _i})} \) is a constructed error variable; \(l>0\) is a gain constant and \({f_i}( \cdot )\) is an operator to be designed.
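For clarity, the following sketch (Python; hypothetical names) shows how subsystem i evaluates the error variable \(e_i\) from the auxiliary states of its neighbors; in practice the neighbors' values are reconstructed between events, as discussed after Theorem 2.

def consensus_error(i, A, varsigma, Phi, omega2, omega3):
    # e_i = omega2 * sum_j a_ij*(varsigma_j - varsigma_i)
    #     + omega3 * sum_j a_ij*(Phi_j - Phi_i).
    e_i = 0.0
    for j in range(len(varsigma)):
        e_i += A[i][j] * (omega2 * (varsigma[j] - varsigma[i])
                          + omega3 * (Phi[j] - Phi[i]))
    return e_i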

Remark 3

The protocol form in this article is motivated by [48], in which the finite-time consensus problem for switched nonlinear multi-agent systems was studied. Compared with [48], the input in (1) is subject to an unknown gain. Moreover, the protocol in this article is fully distributed and is designed based on the event-triggered control strategy.

In the implementation of protocol (6), the operator \({f_i}( \cdot )\), the variables \({{{\hat{\theta }} }_i}\), \({{{\hat{\vartheta }} }_i}\) and the triggering condition should be determined. In what follows, these components are derived by the Lyapunov function method.

Define \(\varLambda = {I_n} - \frac{1}{n}{\mathbf{{1}}_n}\mathbf{{1}}_n^T\) and \(z = (\varLambda \otimes {I_2}){[{\varsigma _1},{\varPhi _1}, \ldots ,{\varsigma _n},{\varPhi _n}]^T}\). Consider the following function

$$\begin{aligned} {V_0} = {z^T}({L} \otimes \varOmega )z + \frac{\gamma }{2}\sum \limits _{i = 1}^n {{{({{{\hat{\theta }} }_i} - \theta )}^2}}, \end{aligned}$$
(7)

where \(\gamma \) and \(\theta \) are two positive constants to be given later.

Define \({\eta _i} = \left[ {\begin{array}{*{20}{c}} {{\eta _{i,1}}}\\ {{\eta _{i,2}}} \end{array}} \right] = {\left[ \begin{array}{*{20}{c}} {\sum \limits _{j = 1}^n {{a_{ij}}({\varsigma _j} - {\varsigma _i})} },&{\sum \limits _{j = 1}^n {{a_{ij}}({\varPhi _j} - {\varPhi _i})} } \end{array}\right] ^T}\), \(\eta = {[\eta _1^T, \ldots ,\eta _n^T]^T}\), \({{\tilde{\eta }}_i} = {\eta _i} - {\eta _i}(t_k^i)\) for \(t \in [t_k^i,t_{k + 1}^i)\), \({\tilde{\eta }} = {[{\tilde{\eta }} _1^T, \ldots ,{\tilde{\eta }} _n^T]^T}\), \({\varXi _i} = {[{\varsigma _i},{\varPhi _i}]^T}\) and \(\varXi = {[\varXi _1^T, \ldots ,\varXi _n^T]^T}\).

The dynamical equation of \(\varXi \) can be obtained as

$$\begin{aligned} {\dot{\varXi }} = ({I_n} \otimes M)\varXi + ({\hat{\theta }} \otimes lN{N^T}\varOmega )(\eta - {\tilde{\eta }} ), \end{aligned}$$
(8)

where \({\hat{\theta }} = diag\{ {{{\hat{\theta }} }_1}, \ldots ,{{\hat{\theta }}_n}\} \).

Then, we can get

$$\begin{aligned} \dot{z} = ({I_n} \otimes M)z + (\varLambda {\hat{\theta }} \otimes lN{N^T}\varOmega )(\eta - {\tilde{\eta }} ), \end{aligned}$$
(9)

where the fact that \(L\varLambda = \varLambda L = L\) was used.

Combining (7) and (9), we have

$$\begin{aligned}&{{\dot{V}}_0} = {z^T}(L \otimes (\varOmega M + {M^T}\varOmega ))z -2 {\eta ^T}({\hat{\theta }} \otimes l\varOmega N{N^T}\varOmega )(\eta - \tilde{\eta }) \nonumber \\&\quad \qquad +\gamma \sum \limits _{i = 1}^n {({{{\hat{\theta }} }_i} - \theta )\dot{\hat{\theta _i} }} . \end{aligned}$$
(10)

By Lemma 1, we have that

$$\begin{aligned}&2{\eta ^T}({\hat{\theta }} \otimes l\varOmega N{N^T}\varOmega ){\tilde{\eta }} \nonumber \\&\quad \le {\eta ^T}({\hat{\theta }} \otimes l\varOmega N{N^T}\varOmega )\eta \mathrm{{ + }}{{{\tilde{\eta }} }^T}({\hat{\theta }} \otimes l\varOmega N{N^T}\varOmega ){\tilde{\eta }}. \end{aligned}$$
(11)

If the variable \({\hat{\theta }}_i \) evolves as

$$\begin{aligned} {{\dot{{\hat{\theta }} }}_i} = me_i^2(t_k^i),\quad { {{\hat{\theta }} }}_i(0) >0,\, \forall t \in [t_k^i,t_{k + 1}^i), \end{aligned}$$
(12)

we can obtain that

$$\begin{aligned}&{{\dot{V}}_0} \le {z^T}(L \otimes (\varOmega M + {M^T}\varOmega ))z - {\eta ^T}({\hat{\theta }} \otimes l\varOmega N{N^T}\varOmega )\eta \nonumber \\&\quad \qquad + {{{\tilde{\eta }} }^T}({\hat{\theta }} \otimes l\varOmega N{N^T}\varOmega ){\tilde{\eta }} +m\gamma \sum \limits _{i = 1}^n {({{\hat{\theta }}_i} - \theta )e_i^2 \left( {t_{k_t^i}^i} \right) }, \end{aligned}$$
(13)

where \(t \in [t_k^i,t_{k + 1}^i)\), \(m>0\) is a constant, and \(k_t^i = \arg \min _{q \in {\mathbb {N}},\, t \ge t_q^i} \{ t - t_q^i\} \) denotes the index of the latest triggering instant of subsystem i before time t.

In addition,

$$\begin{aligned}&m\gamma \sum \limits _{i = 1}^n {{{{\hat{\theta }} }_i}e_i^2 \left( t_{k_t^i}^i \right) } \nonumber \\&\quad = m\gamma \sum \limits _{i = 1}^n {{{{\hat{\theta }} }_i}{{\left( {N^T}\varOmega {\eta _i}\left( t_{k_t^i}^i \right) \right) }^2}} \nonumber \\&\quad = m\gamma \sum \limits _{i = 1}^n {{{{\hat{\theta }} }_i}{{ \left( {N^T}\varOmega ({\eta _i} - {{{\tilde{\eta }} }_i})\right) }^2}} \nonumber \\&\quad \le 2m\gamma {\eta ^T}({\hat{\theta }} \otimes \varOmega N{N^T}\varOmega )\eta + 2m\gamma {{{\tilde{\eta }} }^T}({\hat{\theta }} \otimes \varOmega N{N^T}\varOmega ){\tilde{\eta }} . \end{aligned}$$
(14)

The triggered rule is designed as:

$$\begin{aligned} t_{k + 1}^i = {\inf _{t > t_k^i}} \left\{ t\left| {{g_i}({e_i},{e_i}(t_k^i),t) = 0} \right. \right\} , \end{aligned}$$
(15)

where \({g_i}({e_i},{e_i}(t_k^i),t) = {({e_i} - {e_i}(t_k^i))^2} - {\delta _1}e_i^2 - \frac{{{\delta _2}}}{{{{{\hat{\theta }} }_i}}}{\text {e}^{ - {\delta _3}t}}\), \(0<\delta _1<1\), \(\delta _2>0\) and \(\delta _3>0\) are constants.
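In implementation, rule (15) amounts to monitoring \(g_i\) and generating an event the first time it reaches zero; a minimal sketch (Python; hypothetical names, with \(e_i(t_k^i)\) stored as e_i_held) is given below.

import math

def event_triggered(e_i, e_i_held, theta_hat_i, t, delta1, delta2, delta3):
    # g_i from (15): an event occurs (and e_i(t_k^i) is refreshed)
    # as soon as g_i >= 0; between events g_i < 0 holds.
    g_i = ((e_i - e_i_held) ** 2
           - delta1 * e_i ** 2
           - (delta2 / theta_hat_i) * math.exp(-delta3 * t))
    return g_i >= 0.0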

It can be concluded from (13) to (15) that

$$\begin{aligned} {{\dot{V}}_0}\le & {} {z^T}(L \otimes (\varOmega M + {M^T}\varOmega ))z - (l - 2m\gamma ){\eta ^T}({\hat{\theta }} \otimes \varOmega N{N^T}\varOmega )\eta \nonumber \\&+ (l + 2m\gamma ){{{\tilde{\eta }} }^T}({\hat{\theta }} \otimes \varOmega N{N^T}\varOmega ){\tilde{\eta }} - m\gamma \theta \sum \limits _{i = 1}^n {e_i^2 \left( t_{k_t^i}^i \right) } \nonumber \\\le & {} {z^T}(L \otimes (\varOmega M + {M^T}\varOmega ))z - m\gamma \theta \sum \limits _{i = 1}^n {e_i^2 \left( t_{k_t^i}^i \right) } +n{\delta _2}(l + 2m\gamma ){\mathrm{{e}}^{ - {\delta _3}t}}\nonumber \\&-((1 - {\delta _1})l - 2(1 + {\delta _1})m\gamma ){\eta ^T}(\hat{\theta }\otimes \varOmega N{N^T}\varOmega )\eta . \end{aligned}$$
(16)

According to Lemma 1 and triggered rule (15), it can be obtained that

$$\begin{aligned} - e_i^2(t_k^i)\le & {} - \frac{p}{{1 + p}}e_i^2 + p{\left( {e_i} - {e_i} \left( t_k^i \right) \right) ^2} \nonumber \\\le & {} p \left( {\left( {e_i} - {e_i} \left( t_k^i \right) \right) ^2} - {\delta _1}e_i^2 \right) - \left( \frac{p}{{1 + p}} - {\delta _1}p \right) e_i^2 \nonumber \\\le & {} \frac{{p{\delta _2}}}{{{{{\hat{\theta }} }_i}}}{\mathrm{{e}}^{ - {\delta _3}t}} - \left( \frac{p}{{1 + p}} - {\delta _1}p \right) e_i^2, \end{aligned}$$
(17)

where \(0<p<\frac{1}{{{\delta _1}}} - 1\) is a constant.

It is noted that

$$\begin{aligned} {e^T}e= & {} \sum \limits _{i = 1}^n {{{({N^T}\varOmega {\eta _i})}^2}} \nonumber \\= & {} {\eta ^T}({I_n} \otimes \varOmega N{N^T}\varOmega )\eta \nonumber \\= & {} {\varXi ^T}({L^2} \otimes \varOmega N{N^T}\varOmega )\varXi \nonumber \\= & {} {z^T}({L^2} \otimes \varOmega N{N^T}\varOmega )z. \end{aligned}$$
(18)

If we choose \(\gamma \) such that \((1 - {\delta _1})l > 2(1 + {\delta _1})m\gamma \), then we can get from (16) to (18) that

$$\begin{aligned} {{\dot{V}}_0} \le {z^T} \left( L \otimes \left( \varOmega M + {M^T}\varOmega \right) \right) z - \theta {\kappa _1}{z^T} \left( {L^2} \otimes \varOmega N{N^T}\varOmega \right) z\mathrm{{ + }}{\kappa _{\mathrm{{2}}}}{\mathrm{{e}}^{ - {\delta _3}t}}, \end{aligned}$$
(19)

where \({\kappa _1} = m\gamma \left( \frac{p}{{1 + p}} - {\delta _1}p \right) \) and \({\kappa _2} = n{\delta _2}(l + 2m\gamma ) + m\gamma \theta \sum \limits _{i = 1}^n {\frac{{p{\delta _2}}}{{{{{\hat{\theta }} }_i}(0)}}} \). It is noted that \({\kappa _1} > 0\) and \({{{\hat{\theta }} }_i}(0) \le {{{\hat{\theta }} }_i}\), because \(0<p<\frac{1}{{{\delta _1}}} - 1\) and \({{\dot{ {\hat{\theta }}} }_i} \ge 0\), respectively. Here, we choose \(\theta \ge \frac{1}{{{\kappa _1}{\lambda _2}}}\); from Lemma 2, we know that \(\lambda _2>0\).

In what follows, the operator \(f_i(\cdot )\) and the dynamical system for the variable \({{{\hat{\vartheta }} }_i}\) will be designed. Consider the following function

$$\begin{aligned} {V_{i,1}} = \frac{1}{2}{({\xi _i} - {\varsigma _i})^2},\quad i=1,\ldots ,n. \end{aligned}$$
(20)

The derivative of \(V_{i,1}\) is \({{\dot{V}}_{i,1}} = ({\xi _i} - {\varsigma _i})({\psi _i} - {\varPhi _i})\).

Define \({\varpi _{i,1}} = {\xi _i} - {\varsigma _i}\), \({\varpi _{i,2}} = {\psi _i} - {\varPhi _i}\), \(\varpi _{i,2}^ * = - r({\xi _i} - {\varsigma _i})\) and \({{{\tilde{\varpi }} }_{i,2}} = {\varpi _{i,2}} - \varpi _{i,2}^ * \), where \(r>1\) is a constant. Considering the function

$$\begin{aligned} {V_{i,2}} = {V_{i,1}} + \frac{1}{{2{c_i}}}{\tilde{\varpi }} _{i,2}^2, \end{aligned}$$
(21)

we can obtain that

$$\begin{aligned}&{{\dot{V}}_{i,2}} = {{{\tilde{\varpi }} }_{i,2}} \left( {\upsilon _i} + \frac{{{\phi _i}({\xi _i},{\psi _i})}}{{{c_i}}} + \frac{{{\rho _i}}}{{{c_i}}}- \frac{l}{{{c_i}}}{{{\hat{\theta }} }_i}{e_i} \left( t_k^i \right) + \frac{r}{{{c_i}}}{\varpi _{i,2}} \right) \nonumber \\&\quad \qquad - r\varpi _{i,1}^2 + {\varpi _{i,1}}{\tilde{\varpi }} _{i,2} . \end{aligned}$$
(22)

By Lemma 1, it is easy to get

$$\begin{aligned}&{\varpi _{i,1}}{{{\tilde{\varpi }} }_{i,2}} \le \frac{1}{2}\varpi _{i,1}^2\mathrm{{ + }}\frac{1}{2}{\tilde{\varpi }} _{i,2}^2, \end{aligned}$$
(23)
$$\begin{aligned}&\frac{r}{{{c_i}}}{{{\tilde{\varpi }} }_{i,2}}{\varpi _{i,2}} = \frac{r}{{{c_i}}}{{{\tilde{\varpi }} }_{i,2}}({{{\tilde{\varpi }} }_{i,2}} - r{\varpi _{i,1}}) \le \frac{r}{{{c_i}}}{\tilde{\varpi }} _{i,2}^2 + \frac{{{r^4}}}{{2c_i^2}}{\tilde{\varpi }} _{i,2}^2 + \frac{1}{2}\varpi _{i,1}^2. \end{aligned}$$
(24)

It follows from (22)-(24) that

$$\begin{aligned}&{{\dot{V}}_{i,2}} \le {{\tilde{\varpi }} _{i,2}} \left( {\upsilon _i} + \frac{{{\phi _i} ({\xi _i},{\psi _i})}}{{{c_i}}} +\frac{{{\rho _i}}}{{{c_i}}}- \frac{l}{{{c_i}}}{{{\hat{\theta }} }_i}{e_i} \left( t_k^i \right) \right) \nonumber \\&\qquad - (r - 1)\varpi _{i,1}^2 + \left( \frac{{{r^4}}}{{2c_i^2}} + \frac{r}{{{c_i}}} + \frac{1}{2} \right) {\tilde{\varpi }} _{i,2}^2. \end{aligned}$$
(25)

Since \({\phi _i}({\xi _i},{\psi _i})\) and \(c_i\) are unknown, the neural network approximation strategy will be adopted. Here, we use the neural network \(\vartheta _i^T{\varUpsilon _i}({\xi _i},{\psi _i})\) to approximate \(\frac{{{\phi _i}({\xi _i},{\psi _i})}}{{{c_i}}}\), that is,

$$\begin{aligned} \frac{{{\phi _i}({\xi _i},{\psi _i})}}{{{c_i}}} = \vartheta _i^T{\varUpsilon _i}({\xi _i},{\psi _i}) + {\sigma _i}({\xi _i},{\psi _i}), \end{aligned}$$
(26)

where Lemma 3 was used, \({\varUpsilon _i}({\xi _i},{\psi _i})={[{\varUpsilon _{i,1}}({\xi _i},{\psi _i}), \ldots ,{\varUpsilon _{i,h}}({\xi _i},{\psi _i})]^T}\) is the basis function vector, \(\vartheta _i\) is the ideal weight vector, and \({\sigma _i}({\xi _i},{\psi _i})\) is the approximation error satisfying \(\left| {{\sigma _i}({\xi _i},{\psi _i})} \right| \le {\sigma _i}\) for a constant \({\sigma _i}>0\).

By Lemma 1, we can get that

$$\begin{aligned} {{\tilde{\varpi }}_{i,2}}{\sigma _i}({\xi _i},{\psi _i}) \le \frac{1}{2}{\tilde{\varpi }} _{i,2}^2 + \frac{1}{2}\sigma _i^2. \end{aligned}$$
(27)

Construct the following Lyapunov function candidate

$$\begin{aligned} {V_i} = {V_{i,2}} + \frac{1}{2}{\tilde{\vartheta }} _i^T{{\tilde{\vartheta }}_i}, \end{aligned}$$
(28)

where \({{{\tilde{\vartheta }} }_i} = {\vartheta _i} - {{{\hat{\vartheta }}}_i}\) and \({{\hat{\vartheta }} }_i\) is the estimate of \({\vartheta _i}\), generated by an adaptive law to be designed.

If the adaptive law \({{{\hat{\vartheta }} }_i}\) and input \({\upsilon _i}\) are designed as

$$\begin{aligned}&{{\dot{{\hat{\vartheta }}} }_i} = {{{\tilde{\varpi }} }_{i,2}}{\varUpsilon _i}({\xi _i},{\psi _i}) - {\iota _2}{{{\hat{\vartheta }} }_i}, \end{aligned}$$
(29)
$$\begin{aligned}&{\upsilon _i}= {f_i}({\xi _i},{\varsigma _i},{\psi _i},{\varPhi _i},{{{\hat{\vartheta }} }_i})\nonumber \\&\,\quad =- \left( \frac{{{r^4}}}{{2{\underline{c}}_i^2}} + \frac{r}{{{{\underline{c}}_i}}} + \frac{1}{2}+ {\iota _1} \right) {{\tilde{\varpi }}_{i,2}} - {\hat{\vartheta }} _i^T{\varUpsilon _i}({\xi _i},{\psi _i}) + \upsilon _i^o, \end{aligned}$$
(30)

where \({\iota _1}>0\), \({\iota _2}>0\) are constants, \(\upsilon _i^o = - \frac{{\mathrm{sig}({{{\tilde{\varpi }} }_{i,2}})}}{{{\underline{c}}_i}}({{{\bar{\rho }} }_i} + l{{{\hat{\theta }} }_i}\left| {{e_i}(t_k^i)} \right| )\) is the compensation component.
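Putting (12), (29) and (30) together, each subsystem updates its adaptive variables and computes its input as sketched below (Python with NumPy; simple Euler updates and hypothetical names; sig\((\cdot )\) is taken as the standard sign function, as stated in the Notations).

import numpy as np

def update_adaptive_laws(theta_hat_i, vartheta_hat_i, e_i_held, varpi_tilde,
                         Upsilon_i, m, iota2, dt):
    # Euler steps of (12) and (29); e_i_held = e_i(t_k^i) between events.
    theta_hat_i = theta_hat_i + dt * m * e_i_held ** 2
    vartheta_hat_i = vartheta_hat_i + dt * (varpi_tilde * Upsilon_i
                                            - iota2 * vartheta_hat_i)
    return theta_hat_i, vartheta_hat_i

def control_input(varpi_tilde, vartheta_hat_i, Upsilon_i, theta_hat_i,
                  e_i_held, c_lower, rho_bar, r, iota1, l):
    # Input upsilon_i of (30): stabilizing feedback, neural network
    # cancellation, and the sign-based compensation component upsilon_i^o.
    gain = r ** 4 / (2.0 * c_lower ** 2) + r / c_lower + 0.5 + iota1
    u_o = -np.sign(varpi_tilde) / c_lower * (rho_bar
                                             + l * theta_hat_i * abs(e_i_held))
    return -gain * varpi_tilde - float(vartheta_hat_i @ Upsilon_i) + u_o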

It can be concluded from (25)-(30) that

$$\begin{aligned} {{\dot{V}}_i} \le - (r - 1)\varpi _{i,1}^2 - {\iota _1}{\tilde{\varpi }} _{i,2}^2 + {\iota _2}{\tilde{\vartheta }} _i^T{{{\hat{\vartheta }} }_i} + \frac{1}{2}\sigma _i^2. \end{aligned}$$
(31)

Remark 4

The proposed protocol is in fact composed of (6), (12), (15), (29) and (30). Based on the above analysis, the results on bounded consensus can be derived, as shown in the next subsection.

3.2 Consensus Analysis

Theorem 1

Protocol (6) ensures that the bounded consensus of multi-agent system (1) under a connected graph can be reached if variables \({{{\hat{\theta }} }_i}\), \({{{\hat{\vartheta }} }_i}\) and function \({f_i}( \cdot )\) are designed as in (12), (29) and (30), respectively, and the triggered rule is given as in (15).

Proof

Since L is symmetric, there is an orthogonal matrix U such that \(U^T LU=L_u = diag\{ 0,{\lambda _2}, \ldots ,{\lambda _n}\} \). Defining \(z' = {[z_1{'}^T, \ldots ,z_n{'}^T]^T}=({U^{ - 1}} \otimes {I_2})z\), we have that

$$\begin{aligned} {{\dot{V}}_0} \le {{z'}^T}({L_u} \otimes (\varOmega M + {M^T}\varOmega ))z' - \theta {\kappa _1}{{z'}^T}({L_u}^2 \otimes \varOmega N{N^T}\varOmega )z'{\text { + }}{\kappa _{\text {2}}}{{\text {e}}^{ - {\delta _3}t}}. \end{aligned}$$
(32)

Since \(\theta \ge \frac{1}{{{\kappa _1}{\lambda _2}}}\), we can get that

$$\begin{aligned} {{\dot{V}}_0}\le & {} \sum \limits _{i = 2}^n {{\lambda _i}{{z'}_i}^T(\varOmega M + {M^T}\varOmega - \varOmega N{N^T}\varOmega )} {{z'}_i}{{ + }}{\kappa _{{2}}}{{\text {e}}^{ - {\delta _3}t}} \nonumber \\\le & {} - \beta \sum \limits _{i = 2}^n {{\lambda _i}{{z'}_i}^T} {{z'}_i}{{ + }}{\kappa _{{2}}}{{{e}}^{ - {\delta _3}t}}. \end{aligned}$$
(33)

From (33), we obtain that

$$\begin{aligned} {V_0} \le {V_0}(0) + \int _0^t {{\kappa _2}{e^{ - {\delta _3}\tau }}} \mathrm{d}\tau \le {V_0}(0) + \frac{{{\kappa _2}}}{{{\delta _3}}}. \end{aligned}$$
(34)

The boundedness of \(V_0\) implies that the monotonically nondecreasing variable \({\hat{\theta }}_i\) converges to a positive value and that \(e_i(t)\) is bounded. Since, by (12), \(\int _0^\infty {m e_i^2(t_{k_t^i}^i)} \mathrm{d}t = \mathop {\lim }\limits _{t \rightarrow \infty }{\hat{\theta }}_i(t) - {\hat{\theta }}_i(0)\) is finite, it can be deduced that \(\mathop {\lim }\limits _{t \rightarrow \infty } {e_i}(t_k^i) = 0\). From the triggered rule (15), we can get that

$$\begin{aligned} (1 - {\delta _1})e_i^2(t) \le 2{e_i}(t){e_i}(t_k^i) - e_i^2(t_k^i) + \frac{{{\delta _2}}}{{{{{\hat{\theta }} }_i}(0)}}{{\text {e}}^{ - {\delta _3}t}}, \end{aligned}$$
(35)

which implies that \(\mathop {\lim }\limits _{t \rightarrow \infty } {e_i}(t) = 0\).

It can be obtained that

$$\begin{aligned} \mathop {\lim }\limits _{t \rightarrow \infty } ({\varsigma _i} - {\varsigma _j}) = 0, \quad \mathop {\lim }\limits _{t \rightarrow \infty } ({\varPhi _i} - {\varPhi _j}) = 0. \end{aligned}$$
(36)

It is noted that

$$\begin{aligned} {\iota _2}{\tilde{\vartheta }} _i^T{{{\hat{\vartheta }} }_i} = - {\iota _2}{\tilde{\vartheta }} _i^T{{{\tilde{\vartheta }} }_i} + {\iota _2}{\tilde{\vartheta }} _i^T{\vartheta _i} \le - \frac{{{\iota _2}}}{2}{\tilde{ \vartheta }}_i^T{{{\tilde{\vartheta }} }_i} + \frac{{{\iota _2}}}{2}\vartheta _i^T{\vartheta _i}. \end{aligned}$$
(37)

It can be concluded from (31) and (37) that

$$\begin{aligned} {{\dot{V}}_i} \le - \alpha _i {V_i} + {d_i}, \end{aligned}$$
(38)

where \({\alpha _i} = \min \{ 2(r - 1),2{c_i}{\iota _1},{\iota _2}\} \) and \({d_i} = \frac{1}{2}\sigma _i^2 + \frac{{{\iota _2}}}{2}\vartheta _i^T{\vartheta _i}\).

We can obtain from (38) that \(\mathop {\lim }\limits _{t \rightarrow \infty } {V_i} \le \frac{{{d_i}}}{{{\alpha _i}}}\), which means that

$$\begin{aligned}&\mathop {\lim }\limits _{t \rightarrow \infty } \left| {{\xi _i} - {\varsigma _i}} \right| \le {\varepsilon _{i,o,1}} = \sqrt{\frac{{2{d_i}}}{{{\alpha _i}}}} , \nonumber \\&\mathop {\lim }\limits _{t \rightarrow \infty } \left| {{\psi _i} - {\varPhi _i}} \right| \le {\varepsilon _{i,o,2}} = \sqrt{\frac{{\max \{ 4,4{c_i}{r^2}\} {d_i}}}{{{\alpha _i}}}}. \end{aligned}$$
(39)

We can conclude from (36) and (39) that

$$\begin{aligned} \mathop {\lim }\limits _{t \rightarrow \infty } \left| {{\xi _i} - {\xi _j}} \right| \le {\varepsilon _{i,o,1}} + {\varepsilon _{j,o,1}},\;\mathop {\lim }\limits _{t \rightarrow \infty } \left| {{\psi _i} - {\psi _j}} \right| \le {\varepsilon _{i,o,2}} + {\varepsilon _{j,o,2}}. \end{aligned}$$
(40)

The proof is complete. \(\square \)

Remark 5

Since there are errors in the neural network approximation of the unknown functions, only bounded consensus can be reached. The consensus errors can be reduced by employing more neural network nodes, which decreases the approximation errors \(\sigma _i\).

Then, we show that the Zeno behavior can be excluded.

Theorem 2

If the protocol (6) is applied to multi-agent system (1) under a connected graph, where variables \({{{\hat{\theta }} }_i}\), \({{{\hat{\vartheta }} }_i}\) and function \({f_i}( \cdot )\) are designed as in (12), (29) and (30), respectively, and the triggered rule is given as in (15), then the Zeno behavior can be excluded.

Proof

Motivated by [20, 22], we exclude the Zeno behavior by showing that \(\mathop {\lim }\limits _{k \rightarrow \infty } t_k^i = \infty \). Since \({e_i} = {\omega _2}\sum \limits _{j = 1}^n {{a_{ij}}({\varsigma _j} - {\varsigma _i})} + {\omega _3}\sum \limits _{j = 1}^n {{a_{ij}}({\varPhi _j} - {\varPhi _i})} \), we have

$$\begin{aligned}&\frac{{d{{({e_i} - {e_i}(t_k^i))}^2}}}{{dt}} = 2({e_i} - {e_i}(t_k^i)){{\dot{e}}_i}\nonumber \\&\qquad = 2({e_i} - {e_i}(t_k^i)){N^T}\varOmega \left[ {\begin{array}{*{20}{c}} {\sum \limits _{j = 1}^n {{a_{ij}}({\varPhi _j} - {\varPhi _i})} } \\ {\sum \limits _{j = 1}^n {{a_{ij}}({{{\dot{\varPhi }} }_j} - {{{\dot{\varPhi }} }_i})} } \end{array}} \right] . \end{aligned}$$
(41)

From Theorem 1, we can obtain that \({\sum \limits _{j = 1}^n {{a_{ij}}({\varPhi _j} - {\varPhi _i})} }\) is bounded. From (6), \({{{{\dot{\varPhi }} }_i}}\) is also bounded, since \(\hat{\theta }_i\) converges to a constant. Combining these facts with triggered rule (15), we can conclude that \(\frac{{d{{({e_i} - {e_i}(t_k^i))}^2}}}{{dt}}\) is bounded; that is, there is a constant \(\pi _i>0\) such that \(\frac{{d{{({e_i} - {e_i}(t_k^i))}^2}}}{{dt}} \le {\pi _i}\). Therefore, we have

$$\begin{aligned} {({e_i} - {e_i}(t_k^i))^2} \le {\pi _i}(t - t_k^i), \quad t \in [t_k^i,t_{k + 1}^i). \end{aligned}$$
(42)

According to triggered rule (15), it can be seen that

$$\begin{aligned} {({e_i}(t_{k + 1}^i) - {e_i}(t_k^i))^2} = {\delta _1}e_i^2(t_{k + 1}^i) + \frac{{{\delta _2}}}{{{{{\hat{\theta }} }_i}(t_{k + 1}^i)}}{{\text {e}}^{ - {\delta _3}t_{k + 1}^i}}, \end{aligned}$$
(43)

which means that

$$\begin{aligned} \frac{{{\delta _2}}}{{{{{\hat{\theta }} }_i}(t_{k + 1}^i)}}{{\text {e}}^{ - {\delta _3}t_{k + 1}^i}} \le {({e_i}(t_{k + 1}^i) - {e_i}(t_k^i))^2} \le {\pi _i}(t_{k + 1}^i - t_k^i). \end{aligned}$$
(44)

If \(\mathop {\lim }\limits _{k \rightarrow \infty } t_k^i = {{{\bar{t}}}^i} < \infty \), then from \(\mathop {\lim }\limits _{k \rightarrow \infty } (t_{k + 1}^i - t_k^i) = 0\), we have

$$\begin{aligned} 0 < \mathop {\lim }\limits _{k \rightarrow \infty } \frac{{{\delta _2}}}{{{{{\hat{\theta }} }_i}(t_{k + 1}^i)}}{{\text {e}}^{ - {\delta _3}{{{\bar{t}}}^i}}} \le \mathop {\lim }\limits _{k \rightarrow \infty } {\pi _i}(t_{k + 1}^i - t_k^i) = 0, \end{aligned}$$
(45)

which results in a contradiction. As such, the Zeno behavior can be excluded.

The proof is complete. \(\square \)

In the implementation of (6), the triggering condition should be checked continuously. If the information of \({\varsigma _i}\) and \({\varPhi _i}\) were required to be transmitted continuously, the event-triggered scheme would be meaningless. In fact, this information can be accurately estimated by neighboring subsystems, so the drawback can be overcome.

It is noted that

$$\begin{aligned}&\left[ {\begin{array}{*{20}{c}} {{\varsigma _j}} \\ {{\varPhi _j}} \end{array}} \right] = {e^{M(t - t_k^j)}}\left[ {\begin{array}{*{20}{c}} {{\varsigma _j}} \\ {{\varPhi _j}} \end{array}} \right] (t_k^j) + \int _{t_k^j}^t {{e^{M(t - \tau )}}N{{{\dot{\varPhi }} }_j}(\tau )} \mathrm{d}\tau \nonumber \\&{{{\hat{\theta }} }_j} = {{{\hat{\theta }} }_j}(t_k^j) + me_j^2(t_k^j)(t - t_k^j). \end{aligned}$$
(46)

It can be seen that the information of \({\varsigma _i}\) and \({\varPhi _i}\) can be accurately reconstructed by neighbors under discontinuous communication. Of course, when the ith subsystem is triggered, \(e_i(t_k^i)\) is required to be transmitted to its neighbors.
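To make the estimation in (46) concrete, note that between two events of subsystem j, \({{\dot{\varPhi }} }_j(\tau ) = l{{\hat{\theta }} }_j(\tau ){e_j}(t_k^j)\) with \({{\hat{\theta }} }_j(\tau )\) affine in \(\tau \), so the integral in (46) admits a closed form. The sketch below (Python) evaluates it; the neighbor is assumed to hold \(\varsigma _j(t_k^j)\), \(\varPhi _j(t_k^j)\) and \({{\hat{\theta }} }_j(t_k^j)\) (which it can propagate recursively from known initial values) and to receive \(e_j(t_k^j)\) at each event of subsystem j.

def estimate_neighbor(dt, varsigma_k, Phi_k, theta_hat_k, e_k, l, m):
    # Closed-form evaluation of (46) for dt = t - t_k^j, using
    # dot(Phi_j)(tau) = l*(theta_hat_k + m*e_k**2*(tau - t_k^j))*e_k.
    Phi_t = Phi_k + l * e_k * (theta_hat_k * dt + 0.5 * m * e_k ** 2 * dt ** 2)
    varsigma_t = (varsigma_k + Phi_k * dt
                  + l * e_k * (0.5 * theta_hat_k * dt ** 2
                               + m * e_k ** 2 * dt ** 3 / 6.0))
    theta_hat_t = theta_hat_k + m * e_k ** 2 * dt
    return varsigma_t, Phi_t, theta_hat_t

In this way, a neighbor only needs the data broadcast at subsystem j's events to reproduce \(\varsigma _j\), \(\varPhi _j\) and \({{\hat{\theta }} }_j\) between events.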

Remark 6

By the data transmissions at triggering instants, the information of \({\varsigma _i}\) and \({\varPhi _i}\) can be well estimated by neighbors. This method, which is an effective tool for avoiding continuous data transmissions, has been widely used, such as in [4, 47]. In addition to saving communication resources, the protocol is fully distributed and scalable. Since more general nonlinearities are considered in this paper, the proposed control scheme can be applied to a larger class of physical systems.

4 Simulation Study

In this section, an example is given to illustrate the effectiveness of the proposed protocol. Consider multi-agent system (1) with six subsystems, whose communication graph is shown in Fig. 1.

Fig. 1 The graph of the multi-agent system

Fig. 2 State trajectories of \({\hat{\theta }}_i\)

Fig. 3 State trajectories of \(\xi _i\)

Fig. 4 State trajectories of \(\psi _i\)

Fig. 5 Trajectories of \({\upsilon _i}^o\)

The nonlinear functions are given as follows:

$$\begin{aligned}&{\phi _1}({\xi _1},{\psi _1}) = \sin {\xi _1} + \cos {\psi _1},\;{\phi _2}({\xi _2},{\psi _2}) = \cos {\xi _2} + \sin {\psi _2},\;\nonumber \\&{\phi _3}({\xi _3},{\psi _3}) = \sin {\xi _3}\cos {\psi _3},\;\;\;{\phi _4}({\xi _4},{\psi _4}) = 4{\xi _4} + 2{\psi _4},\;\nonumber \\&{\phi _5}({\xi _5},{\psi _5}) = {{\text {e}}^{ - 3\left| {{\xi _5}} \right| }} + 2{\psi _5},\;\;\;\;{\phi _6}({\xi _6},{\psi _6}) = {{\text {e}}^{ - 3\left| {{\xi _6}} \right| }}{\cos ^2}{\psi _6}. \end{aligned}$$
(47)

We can see that the constraints on the nonlinear terms in [3, 18] are not satisfied here, so the proposed protocol has wider applicability. The gain coefficients are set as \(c_1=\cdots =c_6=0.85\), and the external disturbances are given as \(\rho _i(t)=0.1 \sin t\), \(i=1,2,\ldots ,6\).

By solving (5) with \(\beta =4\), we can get that \(\varOmega = \left[ {\begin{array}{*{20}{c}} {{\omega _1}}&{}{{\omega _2}}\\ {{\omega _2}}&{}{{\omega _3}} \end{array}} \right] =\left[ {\begin{array}{*{20}{c}} {12}&{}2 \\ 2&{}6 \end{array}} \right] \) is a solution. The other parameters in the protocol are chosen as follows: \(l=3\), \(m=3\), \({\iota _1} = 2\), \({\iota _2} = 4\), \(r=4\), \(\bar{\rho }_i=0.1\), \(\delta _1=0.02\), \(\delta _2=0.8\) and \(\delta _3=10\). The neural networks are constructed with the basis functions

$$\begin{aligned} {\varUpsilon _{i,j}}({\xi _i},{\psi _i}) = {{\text {e}}^{ - \frac{{{{({\xi _i} - 3 + j)}^2} + {{({\psi _i} - 3 + j)}^2}}}{{{5^2}}}}}, \quad j=1,2,3,4,5. \end{aligned}$$
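In terms of the rbf_vector sketch given in Sect. 2.2, these basis functions correspond to centers \((3-j,3-j)\), \(j=1,\ldots ,5\), and a common width \(\mu _j=5\) (hypothetical variable names):

import numpy as np

centers = np.array([[3.0 - j, 3.0 - j] for j in range(1, 6)])  # (2,2),...,(-2,-2)
widths = np.full(5, 5.0)                                        # mu_j = 5
# Upsilon_i(xi_i, psi_i) = rbf_vector([xi_i, psi_i], centers, widths)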

The initial values of designed dynamical variables and states for subsystems are set as:

$$\begin{aligned}&{{{\hat{\theta }} }_i}(0) = 0.8,\;\;i = 1,2,\ldots ,6, \\&{{{\hat{\vartheta }} }_i}(0) = {[0, 0, 0, 0, 0]^T}, \;\;i = 1,2,\ldots ,6,\\&[{\varsigma _1}(0),{\varsigma _2}(0),{\varsigma _3}(0),{\varsigma _4}(0),{\varsigma _5}(0),{\varsigma _6}(0)] = [0.3, -0.5, -0.3, -0.1, 0.5, 0.7],\\&[{\varPhi _1}(0),{\varPhi _2}(0),{\varPhi _3}(0),{\varPhi _4}(0),{\varPhi _5}(0),{\varPhi _6}(0)] = [0.4, -0.75, 0.075, 0.1, -0.1, 0.2],\\&[{\xi _1}(0),{\xi _2}(0),{\xi _3}(0),{\xi _4}(0),{\xi _5}(0),{\xi _6}(0)] = [0.6, -1, -0.6, -0.2, 1, 1.4], \\&[{\psi _1}(0),{\psi _2}(0),{\psi _3}(0),{\psi _4}(0),{\psi _5}(0),{\psi _6}(0)] = [0.8, -1.5, 0.15, 0.2, -0.2, 0.4]. \end{aligned}$$

The simulation is carried out with an M-file in MATLAB. In the simulation, the step length is chosen as \(0.0001\,\text {s}\). The trajectories of \({\hat{\theta }}_i\), \(i=1,2,\ldots ,6\), are shown in Fig. 2. Figures 3 and 4 display the positions and velocities of the six subsystems, respectively. The simulation results show that the proposed protocol makes the multi-agent system achieve bounded consensus successfully. Since the event instants are reflected by the signals \(\upsilon _i^o\), \(i=1,2,\ldots ,6\), we depict their trajectories in Fig. 5, where the jump instants are the event instants. From the simulation data, we can get that the minimum inter-event interval is 0.0011 s, which is much larger than the step length. Therefore, there is no Zeno behavior in this numerical example.

5 Conclusion

In this article, an event-triggered protocol was proposed, under which bounded consensus of the considered multi-agent system can be reached. Since no global information is required, the protocol is fully distributed. The consumption of communication resources among subsystems is well reduced by the developed control scheme. The validity of the protocol was verified via a numerical example. In future work, fully distributed event-triggered consensus for nonlinear multi-agent systems with switching networks will be investigated.