1 Introduction

It is well known that real-world systems inevitably contain uncertainties. Driven by practical requirements and theoretical challenges, adaptive controller design for nonlinear systems has recently become an important research domain. An adaptive fault-tolerant tracking control problem for a class of nonlinear systems with multiple delayed state perturbations was investigated in [1]. Adaptive tracking and stabilization problems for nonlinear large-scale systems with dynamic uncertainties were discussed in [2,3,4,5], where the inverse dynamics of the subsystems considered in [4, 5] were required to be stochastic input-to-state stable. Finite-time stabilization of a class of switched stochastic nonlinear systems under arbitrary switchings was investigated in [6] by adopting the adding-a-power-integrator technique; subsequently, a tracking control method was proposed in [7] for a class of switched stochastic nonlinear systems with unknown dead-zone input by using the common Lyapunov function method. Two additional factors in adaptive control of nonlinear systems were considered in [8, 9]: time delay in [8] and unmeasured dynamic uncertainties in [9]. Adaptive fuzzy output-constrained tracking fault-tolerant control was the focus of [10], where a barrier Lyapunov function was used to guarantee that all signals in the closed-loop system were bounded in probability and the system outputs were constrained to a given compact set. Two novel adaptive finite-time control schemes were developed in [11, 12] for high-order nonlinear systems, where the result presented in [11] was the first to solve the finite-time control problem for nonlinear systems with uncertain time-varying control coefficients and unknown nonlinear perturbations. Under some appropriate assumptions, adaptive practical finite-time stabilization for a class of switched nonlinear systems in pure-feedback form was investigated in [13].

However, it should be emphasized that all the aforementioned results were based on continuous-time control. In the past decades, owing to the remarkable development of digital technology, it has become standard practice to implement controllers digitally, interfacing them with the plant through analog-to-digital and digital-to-analog devices; such sampled-data controllers appear in practical systems including aircraft [14] and multi-robot systems [15]. The three main approaches to the design of sampled-data controllers are (see, e.g., [16,17,18]):

  1. Design based on a continuous-time plant model followed by controller discretization (the CTD method; a minimal sketch of this approach is given after the list);

  2. Design based on the exact or an approximate discrete-time plant model, ignoring inter-sample behavior (respectively, the DTD method and the approximate DTD method);

  3. Design based on a sampled-data model of the plant that takes the inter-sample behavior into account in the design procedure (the SDD method).
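To make the CTD idea concrete, the following minimal Python sketch (our illustration, not taken from [16,17,18]) designs a feedback law against a hypothetical continuous-time scalar plant and then implements it through a zero-order hold, so the control input is frozen between sampling instants:

```python
import numpy as np

# Hypothetical scalar plant x' = x^2 + u (our illustration only).
def f(x, u):
    return x**2 + u

# Continuous-time design: u = -x^2 - k*x renders x' = -k*x.
def kappa(x, k=2.0):
    return -x**2 - k * x

# CTD/emulation: the continuous-time law is evaluated only at sampling
# instants and held constant (zero-order hold) in between.
def simulate(x0=0.5, T=0.1, t_end=5.0, dt=1e-3):
    x, clock, u = x0, T, kappa(x0)   # force a sample at t = 0
    for _ in range(int(t_end / dt)):
        if clock >= T:               # sampling instant reached
            u = kappa(x)             # measure state, recompute input
            clock = 0.0
        x += dt * f(x, u)            # Euler step between samples
        clock += dt
    return x

print(simulate())   # final state is near zero for small enough T
```

Shrinking the sampling period \(T\) recovers the continuous-time behavior, which is the premise of the CTD method.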

The majority of work on sampled-data control of nonlinear systems uses the CTD method, and various sampled-data control algorithms for nonlinear systems described by continuous dynamics have been presented in the existing literature. For example, two sampled-data control schemes were developed in [16, 19] to achieve global practical tracking of nonlinear systems, and a systematic design scheme was developed in [20] to construct a linear sampled-data output feedback controller that semi-globally asymptotically stabilizes a class of uncertain systems with both higher-order and linear growth nonlinearities. By designing observers, sampled-data control methods were proposed for nonlinear systems with improved maximum allowable transmission delay [21] and input delay [22]. An observer featuring a special feedforward propagation structure was constructed in [23] to estimate the unmeasured states of a class of uncertain nonlinear systems with uncontrollable and unobservable linearizations, and global sampled-data stabilization of nonminimum-phase nonlinear systems was addressed in [24]. Using the small-gain approach, sampled-data observer design for a wide class of nonlinear systems with delayed measurements was considered in [25]. Stabilization of sampled-data systems under noisy sampling intervals was investigated in [26]. A memory sampled-data control scheme involving a constant signal transmission delay was proposed in [27] for chaotic systems. Sampled-data stabilization and exponential synchronization were investigated in [28, 29] for T–S fuzzy systems and Markovian neural networks, respectively. Reliable dissipative control and non-fragile \({H_\infty }\) control for Markov jump systems were addressed in [30, 31].

Although sampled-data control has received much attention, there are few works on sampled-data adaptive practical stabilization of uncertain nonlinear systems. This is mainly due to the complexity of designing adaptive laws for a sampled-data controller: since the system states can only be measured at the sampling points, the adaptive laws cannot be constructed, as in continuous-time control, from system states that are available at every instant. Thus, an issue naturally arises: in this setting, how can one design an adaptive controller that practically stabilizes such nonlinear systems using only the sampled system states? In this paper, we try to give an answer, and the main contributions can be summarized as follows:

  (i) An adaptive practical stabilization problem is considered in this paper for a class of uncertain nonlinear systems via sampled-data control. Different from some existing nonlinear sampled-data control results, such as [16, 19], our scheme is obtained under a weaker assumption: the diffusion terms of the considered systems are not required to satisfy any linear growth or boundedness conditions.

  (ii) Unlike existing adaptive continuous-time control schemes for nonlinear systems, such as [2,3,4,5], the adaptive laws in this paper are designed based on the sampled signals, since the system states can only be measured at the sampling points; the sampled values of the adaptive parameters are then used to construct the sampled-data controller that practically stabilizes the considered nonlinear systems.

  (iii) The sampling in this paper is aperiodic, that is, the sampling intervals need not be equal, which is different from the periodic sampling in [18,19,20,21,22,23,24].

The remainder of the paper is organized as follows: in Sect. 2, we present the problem statement and some preliminaries. An adaptive sampled-data controller design is presented in Sect. 3. The main theorem is given in Sect. 4, where the stabilization proof is established using the well-known Gronwall–Bellman inequality. Numerical simulations and discussions are provided in Sect. 5 to demonstrate the effectiveness of the proposed method. Finally, conclusions are drawn in Sect. 6.

Notations

R denotes the set of all real numbers; \({R^n}\) denotes the real n-dimensional space; \({R^{m \times n}}\) denotes the real \(m \times n\) matrix space; \(\left\| \cdot \right\| \) denotes the Euclidean norm; K denotes the set of all functions \({R^ + } \rightarrow {R^ + }\) that are continuous, strictly increasing and vanish at zero; \(K_{\infty }\) denotes the set of all class-K functions that are unbounded; \({C^i}\) denotes the set of functions that are i times continuously differentiable; the arguments of functions will be omitted throughout the paper for simplicity whenever no confusion arises, as with the functions \(V,{x_1}\) and \({z_2}\) used hereafter.

2 Problem statement and preliminaries

Consider the following uncertain nonlinear system:

$$\begin{aligned} \left\{ \begin{array}{l} \dot{z}_0 = q(z_0,{x_1}),\\ {{\dot{x}}_i} = {g_i}({{{\bar{x}}}_i}){x_{i + 1}} + {f_i}({{{\bar{x}}}_i}) \\ \qquad \,\, + {\varDelta _i}({{{\bar{x}}}_i},z_0),~~~i = 1,2, \ldots ,n - 1,\\ {{\dot{x}}_n} = {g_n}({{{\bar{x}}}_n})u + {f_n}({{{\bar{x}}}_n}) + {\varDelta _n}({{{\bar{x}}}_n},z_0),\\ y = {x_1}, \end{array}\right. \end{aligned}$$
(1)

where \(x = {[{x_1}, {x_2}, \ldots ,{x_n}]^\mathrm{T}} \in {R^n}\) is the state vector, which can only be measured at the sampling points \({t_k},k = 1,2, \ldots , + \infty \), and \({{{\bar{x}}}_i} = {[{x_1},{x_2}, \ldots ,{x_i}]^\mathrm{T}} \in {R^i},i = 1,2, \ldots ,n; z_0 \in {R^{n_0}}\) is an unmeasured state; \(u \in R\) and \(y \in R\) are the control input and output, respectively; \({f_i}( \cdot ),{g_i}( \cdot ), i = 1,2, \ldots ,n,\) are unknown nonlinear smooth functions with \({f_i}(0, \ldots ,0) = 0; {\varDelta _i}( \cdot ),i = 1,2, \ldots ,n,\) are external disturbances with \({\varDelta _i}(0,\ldots ,0) = 0;\) and \(q(\cdot )\) is an unknown Lipschitz function.
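For concreteness, the following Python sketch encodes one hypothetical instance of system (1) with \(n = 2\); the particular choices of \(q, g_i, f_i\) and \({\varDelta _i}\) below are illustrative placeholders consistent with the standing assumptions, not functions used elsewhere in the paper:

```python
import numpy as np

# One hypothetical instance of system (1) with n = 2 (illustration only):
# unmeasured dynamics z0, strict-feedback states x1, x2, output y = x1.
def dynamics(z0, x, u):
    x1, x2 = x
    dz0 = -2.0 * z0 + x1**2            # q(z0, x1): exp-ISPS-type dynamics
    g1, g2 = 2.0 + np.sin(x1), 1.5     # unknown bounded control coefficients
    f1 = x1 * np.exp(-x1**2)           # smooth, f1(0) = 0
    f2 = x1 * x2                       # smooth, f2(0, 0) = 0
    D1 = 0.5 * z0 * np.cos(x1)         # disturbances bounded as in (2)
    D2 = 0.3 * z0
    dx1 = g1 * x2 + f1 + D1
    dx2 = g2 * u + f2 + D2
    return dz0, np.array([dx1, dx2])
```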

Remark 1

In practical engineering, some nonlinear systems, such as thruster-assisted position mooring systems [32], levitated ball systems [33], flexible crane systems [34] and single-link manipulator systems [35], can be described by or converted into system (1). Therefore, it is of importance to investigate system (1).

Remark 2

Note that, for strict-feedback nonlinear systems, the adaptive sampled-data stabilization problem has been addressed in [18] by designing the controller for the systems' Euler approximate discrete-time model. However, the diffusion terms of the systems considered in [18] are \(\phi _i^\mathrm{T}({{\bar{x}}_i})\theta , i = 1,2, \ldots ,n\), where \(\theta \) is an unknown constant and \(\phi _i(\cdot )\) is a known nonlinear function; furthermore, in [18] the control coefficients of the first \(i - 1\) subsystem equations are equal to 1. Thus, system (1) is more general than the systems investigated in [18], and the method proposed in [18] may be invalid for system (1).

We make the following assumptions.

Assumption 1

[36] The external disturbances \({\varDelta _i}({{\bar{x}}_i},z_0),i = 1,2, \ldots ,n,\) satisfy

$$\begin{aligned} \left| {{\varDelta _i}({{{\bar{x}}}_i},z_0)} \right| \leqslant {\phi _{i1}}(\left\| z_0 \right\| ) + {\phi _{i2}}(\left\| {{{{\bar{x}}}_i}} \right\| ), \end{aligned}$$
(2)

where \({\phi _{i1}}( \cdot ) \geqslant 0\) and \({\phi _{i2}}( \cdot ) \geqslant 0\) are unknown continuous functions.

Assumption 2

There exist two unknown constants \(g_i^{\min }\) and \(g_i^{\max }\) such that \(0 < g_i^{\min } \leqslant \left| {{g_i}( \cdot )} \right| \leqslant g_i^{\max }\). Without loss of generality, we assume \(0 < g_i^{\min } \leqslant {{g_i}( \cdot )} \leqslant g_i^{\max }\).

Remark 3

It should be pointed out that continuous-time control of strict-feedback nonlinear systems has been investigated in [4, 7, 12]; however, in those works, the upper and lower bounds of the time-varying control coefficients \({g_i}({{\bar{x}}_i}), i = 1,2, \ldots ,n,\) are required to be known.

Assumption 3

[36] The dynamics \(\dot{z}_0 = q(z_0,{x_1})\) are exp-ISPS (exponentially input-to-state practically stable), that is, there exists a \({C^1}\) function \({V_0}(z_0)\) such that

$$\begin{aligned} {{\bar{\alpha }} _1}(\left\| z_0 \right\| )\leqslant & {} {V_0}(z_0) \leqslant {{\bar{\alpha }} _2}(\left\| z_0 \right\| ), \end{aligned}$$
(3a)
$$\begin{aligned} \frac{{\partial {V_0}(z_0)}}{{\partial z_0}}q(z_0,{x_1})\leqslant & {} - c{V_0}(z_0) + {{\varGamma (\left| {{x_1}} \right| )}} + d, \end{aligned}$$
(3b)

where \({{\bar{\alpha }} _1}( \cdot ),{{\bar{\alpha }} _2}( \cdot )\) are class \({K_\infty }\) functions, \(\varGamma ( \cdot )\) is a smooth function with \(\varGamma (0) = 0\), and \(c > 0,d \geqslant 0\) are constants.

Remark 4

In the existing results on sampled-data control [19,20,21,22,23,24], the diffusion terms \({f_i}({{\bar{x}}_i}), i=1,2,\ldots ,n,\) of the considered nonlinear systems are required to satisfy a linear growth condition; in system (1), however, this restriction on the diffusion terms is relaxed.

Lemma 1

([37], Gronwall–Bellman inequality) Let \(D: = [a,b] \subset R\) be a real interval. Suppose \(\mu (t),\rho (t)\) and \(\omega (t) \geqslant 0\) are real continuous functions defined on D. If \(\mu (t)\) satisfies the inequality

$$\begin{aligned} \mu (t) \leqslant \rho (t) + \int _a^t {\omega (s)\mu (s)\mathrm{d}s}, \end{aligned}$$
(4a)

then the following holds for \(t \in D\):

$$\begin{aligned} \mu (t) \leqslant \rho (t) + \int _a^t {\rho (s)\omega (s)\exp \bigg (\int _s^t {\omega (r)\mathrm{d}r}\bigg )} \mathrm{d}s.\nonumber \\ \end{aligned}$$
(4b)
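As a quick numerical sanity check of Lemma 1 (our illustration, not part of the original development), the Python snippet below takes constant \(\rho (t) = 1\) and \(\omega (t) = w\), for which \(\mu (t) = {e^{wt}}\) satisfies (4a) with equality; the bound (4b) then evaluates to \({e^{wt}}\) as well, so the inequality is tight in this case:

```python
import numpy as np

# Check Lemma 1 with rho(t) = 1 and omega(t) = w (constants):
# mu(t) = exp(w*t) satisfies (4a) with equality, since
# 1 + int_0^t w*exp(w*s) ds = exp(w*t).
w = 0.8
t = np.linspace(0.0, 2.0, 401)
mu = np.exp(w * t)

# Bound (4b): 1 + int_0^t w*exp(w*(t - s)) ds = exp(w*t).
bound = 1.0 + (np.exp(w * t) - 1.0)

print(np.max(np.abs(mu - bound)))   # ~0: the bound is tight here
assert np.all(mu <= bound + 1e-9)
```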

Lemma 2

[36] If \({V_0}(z_0)\) is a \(C^1\) function for \(\dot{z}_0 = q(z_0,x_1)\) such that (3a) and (3b) hold, then, for any constant \({\bar{c}} \in (0,c)\), any initial instant \({t_0} > 0\), any initial condition \(z_0({t_0})\) and any \({\gamma _0}> 0\), and for any continuous function \({{{\bar{\gamma }} (\left| {{x_1}} \right| )}}\) such that \({{{\bar{\gamma }} (\left| {{x_1}} \right| ) \geqslant \varGamma (\left| {{x_1}} \right| )}}\), there exist a finite \({T_0} = \max \left\{ {0,\mathrm{ln} ({V_0}({z_0}({t_0}))/{\gamma _0})/(c - {\bar{c}})} \right\} \geqslant 0\), a function \(D(t,{t_0}) \geqslant 0\) defined for all \(t \geqslant {t_0}\), and a signal described by

$$\begin{aligned} {{\dot{v}(t) = - {\bar{c}}v(t) + {\bar{\gamma }} (\left| {{x_1(t) }} \right| ) + d, v({t_0}) = {v_0}, \quad \forall t \geqslant {t_0},}}\nonumber \\ \end{aligned}$$
(5)

such that \(D(t,{t_0}) = 0\) for \(t \geqslant {t_0} + {T_0}\) and \(D(t,{t_0}) = \max \{ 0,{e^{ - c(t - {t_0})}}{V_0}({z_0}({t_0})) - {e^{ - {\bar{c}}(t - {t_0})}}{\gamma _0}\}\), such that \({V_0}(z_0) \leqslant v(t) + D(t,{t_0})\) for \(\forall t \geqslant {t_0}\). Without loss of generality, we assume \({\bar{\gamma }} (\left| {{x_1}} \right| )= \varGamma (\left| {{x_1}} \right| )\).
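Note that the signal \(v(t)\) in (5) is a first-order filter driven by \({\bar{\gamma }} (\left| {{x_1}} \right| ) + d\), so it can be generated online without measuring \(z_0\). A minimal Python sketch, with illustrative (hypothetical) choices of \({\bar{c}}, d\) and \({\bar{\gamma }}\), is:

```python
import numpy as np

# Auxiliary signal (5): dv/dt = -cbar*v + gammabar(|x1|) + d.
# cbar, d and gammabar are illustrative placeholders.
def v_signal(x1_traj, dt=1e-3, cbar=1.0, d=0.1, v0=0.0):
    gammabar = lambda s: s**2       # smooth and vanishing at zero
    v, out = v0, [v0]
    for x1 in x1_traj:
        v += dt * (-cbar * v + gammabar(abs(x1)) + d)
        out.append(v)
    return np.array(out)

# v(t) remains bounded whenever x1(t) is bounded.
print(v_signal(np.sin(np.linspace(0.0, 5.0, 5000)))[-1])
```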

Lemma 3

[38] For any real-valued continuous function \(f(x,y)\), where \(x \in {R^m},y \in {R^n}\), there exist smooth scalar functions \({\varphi _1}(x)\geqslant 0\) and \({\varphi _2}(y)\geqslant 0\) such that

$$\begin{aligned} \left| {f(x,y)} \right| \leqslant {\varphi _1}(x) + {\varphi _2}(y). \end{aligned}$$
(6)

Lemma 4

([39], Young's inequality) Let \(x \in {R^n},y \in {R^n}\), and let \(p> 1,q > 1\) be two constants with \((p - 1)(q - 1) = 1\). Then, for any real number \(\varepsilon > 0\), the following inequality holds:

$$\begin{aligned} {x^\mathrm{T}}y \leqslant \frac{1}{{p{\varepsilon ^p}}}{\left\| x \right\| ^p} + \frac{{{\varepsilon ^q}}}{q}{\left\| y \right\| ^q}. \end{aligned}$$
(7a)

In particular, when \(p = q = 2\), we have

$$\begin{aligned} {x^\mathrm{T}}y \leqslant \frac{1}{{2{\varepsilon ^2}}}{\left\| x \right\| ^2} + \frac{{{\varepsilon ^2}}}{2}{\left\| y \right\| ^2}. \end{aligned}$$
(7b)
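For completeness, we record the one-line derivation of (7b), which is used repeatedly in Sect. 3: completing the square gives

$$\begin{aligned} 0 \leqslant \frac{1}{2}{\left\| {\frac{x}{\varepsilon } - \varepsilon y} \right\| ^2} = \frac{1}{{2{\varepsilon ^2}}}{\left\| x \right\| ^2} - {x^\mathrm{T}}y + \frac{{{\varepsilon ^2}}}{2}{\left\| y \right\| ^2}, \end{aligned}$$

and rearranging yields (7b).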

Lemma 5

[40] Let H(z) be a continuous function defined on a compact set \(\varOmega _{z}\). Then a neural network of the following form can be constructed to approximate it:

$$\begin{aligned} H(z) = {W^{*T}}T(z) + D(z),\quad \forall z \in {\varOmega _{z}}, \end{aligned}$$
(8)

where \(T(z) = {[{T_1}(z),\ldots ,{T_l}(z)]^\mathrm{T}} \in {R^l}\) is the basis function vector with node number \(l\geqslant 1\), \({W^*} = {[W_1^*,W_2^*, \ldots ,W_l^*]^\mathrm{T}} \in {R^l}\) is the ideal weight vector, and D(z) is the approximation error, satisfying \(\left| {D(z)} \right| \leqslant D_1^*\) with bound \(D_1^*>0\).

Each basis function \({T_i}(z)\) is taken to be a Gaussian function of the following form:

$$\begin{aligned} {T_i}(z) = \exp \left( - \frac{{{{(z - {c_i})}^\mathrm{T}}(z - {c_i})}}{{\mu _i^{*2}}}\right) ,\quad i = 1,2, \ldots ,l, \end{aligned}$$

where \({c_i}\) is the center of the radial basis function, and \({\mu _i^*} > 0\) is the width of the Gaussian function.

The ideal weight vector \({W^*}\) is the value of W that minimizes the approximation error D(z) over all \(z \in {\varOmega _{z}}\):

$$\begin{aligned} {W^*} = \arg \mathop {\min }\limits _{W \in {R^l}} \bigg \{ \mathop {\sup }\limits _{z \in {\varOmega _{z}}} \left| {H(z) - {W^\mathrm{T}}T(z)} \right| \bigg \}. \end{aligned}$$
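A minimal Python sketch of the Gaussian RBF approximator of Lemma 5 follows; the target function, node placement, width and least-squares fit below are illustrative assumptions (in the paper, the ideal weights are never computed explicitly but are handled through the adaptive parameters introduced in Sect. 3):

```python
import numpy as np

# Gaussian RBF network of Lemma 5: H(z) ~ W^T T(z) for scalar z.
def T(z, centers, width):
    # T_i(z) = exp(-(z - c_i)^2 / width^2)
    return np.exp(-((z[:, None] - centers[None, :]) ** 2) / width**2)

# Illustrative target on the compact set [-2, 2].
centers = np.linspace(-2.0, 2.0, 15)      # node centers c_i
width = 0.5                               # Gaussian width
z = np.linspace(-2.0, 2.0, 200)
H = z * np.sin(z)                         # hypothetical H(z)

# Least-squares surrogate for the ideal weights W*.
W, *_ = np.linalg.lstsq(T(z, centers, width), H, rcond=None)
D = H - T(z, centers, width) @ W          # approximation error D(z)
print(np.max(np.abs(D)))                  # small bound D_1^* on [-2, 2]
```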

The objective of the paper is to design the following adaptive sampled-data controller

$$\begin{aligned} u(t)= & {} u({t_k})\,=\,K({x_1}({t_k}), {x_2}({t_k}), \ldots ,{x_n}({t_k})),\nonumber \\&\quad \forall t \in [{t_k},{t_{k + 1}}) \end{aligned}$$
(9)

for nonlinear system (1) such that all states of the resulting closed-loop system are semi-globally uniformly ultimately bounded.

3 Adaptive sampled-data controller design

In this section, an adaptive sampled-data controller will be constructed by adopting the backstepping technique. The design procedure consists of n steps and relies on the following change of coordinates: \({z_i} = {x_i} - {\alpha _{i - 1}},i = 2,3, \ldots ,n\), where \({\alpha _{i - 1}}\) is a virtual control law.

Step 1 In the first step, we choose the following Lyapunov function candidate:

$$\begin{aligned} {V_1} = \displaystyle \frac{{x_1^2}}{{2g_1^{\min }}} + \frac{1}{2{\eta _1}}{\tilde{\theta }} _1^2 + \frac{{{v^2}}}{{2{{{\bar{\lambda }} }_1}}}, \end{aligned}$$
(10)

where \({\eta _1}> 0\) and \({{\bar{\lambda }} _1}>0\) are design parameters, \({{\tilde{\theta }} _1} = {\theta _1} - {{\hat{\theta }} _1}\), and \({{\hat{\theta }} _1}\) is the estimate of \({\theta _1}\), which will be defined later. From Assumption 1, the time derivative of \(V_1\) satisfies

$$\begin{aligned} {\dot{V}_1}\leqslant & {} \displaystyle \frac{{g_1^{\max }}}{{g_1^{\min }}}\left| {{x_1}} \right| \left| {{x_2} - {\alpha _1}} \right| + \frac{{{g_1}({x_1})}}{{g_1^{\min }}}{x_1}{\alpha _1} + \frac{{{x_1}{f_1}({x_1})}}{{g_1^{\min }}} \nonumber \\&+\, \frac{{\left| {{x_1}} \right| }}{{g_1^{\min }}}{\phi _{11}}(\left\| z_0 \right\| )+ {{\frac{{\left| {{x_1}} \right| }}{{g_1^{\min }}}{\phi _{12}}(\left| {{x_1}} \right| )}} - \frac{1}{{{\eta _1}}}{{\tilde{\theta }} _1}{\dot{{\hat{\theta }}}_1} \nonumber \\&-\, \frac{{{\bar{c}}}}{{{{{\bar{\lambda }} }_1}}}{v^2} {{+ \frac{{\left| {{x_1}} \right| }}{{{{{\bar{\lambda }} }_1}}}\displaystyle \Phi (\left| {{x_1}} \right| )v}} + g_{11}^*\displaystyle \sqrt{{V_1}}, \end{aligned}$$
(11)

where \(g_{11}^* = d\displaystyle \sqrt{\displaystyle \frac{2}{{{{{\bar{\lambda }} }_1}}}}\), and \(\displaystyle \Phi ( \cdot )\geqslant 0\) is an unknown smooth function.

According to Assumption 3, Lemmas 2 and 3, we obtain, for \(\forall t \geqslant {t_0}\),

$$\begin{aligned} \left| {{x_1}} \right| {\phi _{11}}(\left\| z_0 \right\| )\leqslant & {} \left| {{x_1}} \right| {\phi _{11}} \circ {\bar{\alpha }} _1^{ - 1}(v(t) + D(t,{t_0})) \nonumber \\\leqslant & {} \left| {{x_1}} \right| ({\varphi _{11}}(v(t)) + {\varphi _{12}}(D(t,{t_0}))),\nonumber \\ \end{aligned}$$
(12)

where \({\varphi _{11}}( \cdot ) \geqslant 0\) and \({\varphi _{12}}( \cdot ) \geqslant 0\) are smooth functions. Because \(D(t,{t_0})\) is a bounded function for \(\forall t \geqslant {t_0}\), there exists a constant \(\theta _{12}^* > 0\) such that \({\varphi _{12}}(D(t,{t_0})) \leqslant \theta _{12}^*\) holds. Substituting (12) into (11) results in

$$\begin{aligned} {\dot{V}_1}\leqslant & {} \frac{{g_1^{\max }}}{{g_1^{\min }}}\left| {{x_1}{z_2}} \right| + \frac{{{g_1}({x_1})}}{{g_1^{\min }}}{x_1}{\alpha _1} + \left| {{x_1}} \right| {{\bar{\phi }} _1}({x_1})\nonumber \\&-\, \frac{1}{{{\eta _1}}}{{\tilde{\theta }} _1}{\dot{{\hat{\theta }}}_1} - \frac{{{\bar{c}}}}{{{{{\bar{\lambda }} }_1}}}{v^2} + g_{11}^*\sqrt{{V_1}}, \end{aligned}$$
(13)

where \({{{\bar{\phi }} }_1}({X_1})=\displaystyle \frac{{\left| {{f_1}({x_1})} \right| }}{{g_1^{\min }}} + \displaystyle \frac{1}{{g_1^{\min }}}({\varphi _{11}}(v) + \theta _{12}^* + {\phi _{12}}(\left| {{x_1}} \right| )) + \displaystyle \frac{{v\Phi (\left| {{x_1}} \right| )}}{{{}{{{\bar{\lambda }} }_1}}}\) with \({X_1} = {[{x_1},v]^\mathrm{T}}\).

As stated in Lemma 5, a neural network can approximate any continuous function to any desired accuracy; thus, a neural network of the following form is constructed to approximate \({{\bar{\phi }} _1}({X_1})\):

$$\begin{aligned} {{\bar{\phi }} _1}({X_1}) = H_1^{*T}{S_1}({X_1}) + {\varepsilon _1}({X_1}), \end{aligned}$$
(14)

where \(H_1^*\) is an unknown weight vector, \({S_1}({X_1}) = {[{S_{11}}({X_1}), {S_{12}}({X_1}), \ldots ,{S_{1{L_1}}}({X_1})]^\mathrm{T}} \in {R^{L_1}}\) is a basis function vector whose components \({S_{1i}}( \cdot ),i = 1,2, \ldots ,L_1,\) are chosen as Gaussian functions, and \({\varepsilon _1}({X_1})\) is the approximation error, satisfying \(\left| {{\varepsilon _1}({X_1})} \right| \leqslant \varepsilon _1^*\) with \(\varepsilon _1^* > 0\).

Substituting (14) into (13), and using Lemma 4, we arrive at

$$\begin{aligned} {\dot{V}_1}\leqslant & {} \frac{{g_1^{\max }}}{{g_1^{\min }}}\left| {{x_1}}{{z_2}} \right| + \frac{{{g_1}({x_1})}}{{g_1^{\min }}}{x_1}{\alpha _1} + x_1^2{\theta _1} - \frac{1}{{{\eta _1}}}{{\tilde{\theta }} _1}{\dot{{\hat{\theta }}} _1} \nonumber \\&-\, \frac{{{\bar{c}}}}{{{{{\bar{\lambda }} }_1}}}{v^2}+\displaystyle \sum \limits _{i = 1}^2 {g_{1i}^*}\displaystyle \sqrt{{V_1}}, \end{aligned}$$
(15)

where \({\theta _1} = {\left\| {H_1^*} \right\| ^2}L\) and \(g_{12}^* = \varepsilon _1^*\sqrt{2g_1^{\min }}+ \sqrt{2g_1^{\min }{\theta _1}}\).

Choose a virtual control law and a “virtual adaptive law” for Step 1 as follows:

$$\begin{aligned} {\alpha _1}= & {} -\, {x_1}\sqrt{1 + {\hat{\theta }} _1^2} - {c_1}{x_1}, \end{aligned}$$
(16a)
$$\begin{aligned} {\dot{{\hat{\theta }}}_1}= & {} {\eta _1}x_1^2 - {\eta _1}{{\hat{\theta }} _1}, \end{aligned}$$
(16b)

where \({c_1} > 0\) is a design parameter. Furthermore, we have the following relation from (10) and (16):

$$\begin{aligned} {{\frac{1}{{{\eta _1}}}{{\tilde{\theta }} _1}{\dot{{\hat{\theta }}} _1}\leqslant x_1^2{\tilde{\theta }} _1-{{\tilde{\theta }} _1}{{\hat{\theta }} _1}\leqslant x_1^2{\tilde{\theta }} _1- {\theta _1}\displaystyle \sqrt{2{\eta _1}{V_1}} +{\tilde{\theta }} _1^2.}}\nonumber \\ \end{aligned}$$
(17)

Substituting (16) and (17) into (15) yields

$$\begin{aligned} {\dot{V}_1}\leqslant & {} \frac{{g_1^{\max }}}{{g_1^{\min }}}\left| {{x_1}}{{z_2}} \right| - {\tilde{\theta }} _1^2 - \frac{{{\bar{c}}}}{{{{{\bar{\lambda }} }_1}}}{v^2} \nonumber \\&+\, (g_{11}^* + g_{12}^* + g_{13}^*)\sqrt{{V_1}} - {c_1}x_1^2, \end{aligned}$$
(18)

where \(g_{13}^* = {\theta _1}\displaystyle \sqrt{2{\eta _1}}\).
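Before proceeding to Step 2, a small Python sketch of the step-1 pair (16a)–(16b) may help fix ideas: in continuous time, \({{\hat{\theta }} _1}\) would be integrated from the true state \(x_1(t)\), whereas the sampled-data design in (43) below will replace \(x_1(t)\) by \(x_1(t_k)\). The gains here are illustrative:

```python
import numpy as np

# Step-1 virtual control law (16a) and adaptive law (16b), continuous-time
# form; c1 and eta1 are illustrative design parameters.
def alpha1(x1, theta1_hat, c1=1.0):
    return -x1 * np.sqrt(1.0 + theta1_hat**2) - c1 * x1

def theta1_hat_dot(x1, theta1_hat, eta1=0.5):
    return eta1 * x1**2 - eta1 * theta1_hat
```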

Step 2 Choose a Lyapunov function candidate as the following form:

$$\begin{aligned} {V_2} = {V_1} + \frac{{z_2^2}}{{2g_2^{\min }}} + \frac{1}{{2{\eta _2}}}{\tilde{\theta }} _2^2, \end{aligned}$$
(19)

where \({\eta _2} > 0\) is a design parameter, \({{\tilde{\theta }} _2} = {\theta _2} - {{\hat{\theta }} _2}\), and \({{\hat{\theta }} _2}\) is the estimate of \({\theta _2}\), which will be defined later. The time derivative of \(V_2\) satisfies

$$\begin{aligned} {\dot{V}_2}\leqslant & {} {\dot{V}_1} + \displaystyle \frac{{g_2^{\max }}}{{g_2^{\min }}}\left| {{z_2}} \right| \left| {{x_3} - {\alpha _2}} \right| + \frac{{{g_2}({{{\bar{x}}}_2})}}{g_2^{\min }}{z_2}{\alpha _2} \nonumber \\&+\, \frac{{{g_2}({{{\bar{x}}}_2})}}{{g_2^{\min }}}\left| {{z_2}} \right| \left( {\phi _{21}}(\left\| z_0 \right\| ) + {\phi _{22}}(\left\| {{{{\bar{x}}}_2}} \right\| )\right) \nonumber \\&+\, \left| {{z_2}} \right| \frac{{g_1^{\max }}}{{g_1^{\min }}}\left| {\frac{{\partial {\alpha _1}}}{{\partial {x_1}}}} \right| \left| {{x_2}} \right| + \frac{{\left| {{z_2}} \right| }}{g_2^{\min }}\left| {\frac{{\partial {\alpha _1}}}{{\partial {x_1}}}} \right| \left| {{f_1}({x_1})} \right| \nonumber \\&+\, \frac{{{\eta _1}\left| {{z_2}} \right| }}{g_2^{\min }}\left| {\frac{{\partial {\alpha _1}}}{{\partial {{{\hat{\theta }} }_1}}}} \right| \left| {{{{\hat{\theta }} }_1}} \right| - \frac{1}{{{\eta _2}}}{{\tilde{\theta }} _2}{\dot{{\hat{\theta }}}_2} \nonumber \\&+\,\frac{{\left| {{z_2}} \right| }}{g_2^{\min }}\left| {{f_2}({{{\bar{x}}}_2})} \right| + \frac{{ {\eta _1}x_1^2\left| {{z_2}} \right| }}{{g_2^{\min }}}\left| {\frac{{\partial {\alpha _1}}}{{\partial {{{\hat{\theta }} }_1}}}} \right| \nonumber \\&+\,\frac{{\left| {{z_2}} \right| }}{g_2^{\min }}\left| {\frac{{\partial {\alpha _1}}}{{\partial {x_1}}}} \right| ({\phi _{11}}(\left\| z_0 \right\| )+ {{{\phi _{12}}(\left| {{x_1}} \right| )}}). \end{aligned}$$
(20)

According to Assumption 3, Lemmas 2 and 3, we have, for \(\forall t \geqslant {t_0}\),

$$\begin{aligned} \left| {{z_2}} \right| {\phi _{21}}(\left\| z_0 \right\| )\leqslant & {} \left| {{z_2}} \right| {\phi _{21}} \circ {{{\bar{\alpha }}_1}^{ - 1}}(v(t) + D(t,{t_0})) \nonumber \\\leqslant & {} \left| {{z_2}} \right| {\varphi _{21}}(v(t)) + \left| {{z_2}} \right| {\varphi _{22}}(D(t,{t_0})),\nonumber \\ \end{aligned}$$
(21)

where \({\varphi _{21}}( \cdot ) \geqslant 0\) and \({\varphi _{22}}( \cdot ) \geqslant 0\) are two smooth functions. Furthermore, there exists a constant \(\theta _{22}^* > 0\) such that \({\varphi _{22}}(D(t,{t_0}))\leqslant \theta _{22}^*\) holds for \(\forall t \geqslant {t_0}\). Substituting (21) into (20) leads to

$$\begin{aligned} {{\dot{V}}_2}\leqslant & {} {\dot{V}_1}+\displaystyle \frac{{g_2^{\max }}}{{g_2^{\min }}}\left| {{z_2}{z_3}} \right| + \displaystyle \frac{{{g_2}({{{\bar{x}}}_2})}}{g_2^{\min }}{z_2}{\alpha _2}\mathrm{{ + }}\left| {{z_2}} \right| {{{\bar{\phi }} }_2}({X_2}) \nonumber \\&-\, \displaystyle \frac{1}{{{\eta _2}}}{{{\tilde{\theta }} }_2}{{\dot{{\hat{\theta }}}}_2}- \displaystyle \frac{{g_1^{\max }}}{{g_1^{\min }}}z_2^2\left| {{x_1}} \right| , \end{aligned}$$
(22)

where

$$\begin{aligned} {{\bar{\phi }} _2}({X_2})= & {} \displaystyle \frac{1}{{g_2^{\min }}}({\varphi _{21}}(v) + \theta _{22}^* + {\phi _{22}}(\left\| {{{{\bar{x}}}_2}} \right\| )) \\&+\, \frac{{g_1^{\max }\left| {{x_2}} \right| }}{{g_2^{\min }}}\left| {\frac{{\partial {\alpha _1}}}{{\partial {x_1}}}} \right| + \frac{{\left| {{f_1}({x_1})} \right| }}{{g_2^{\min }}}\left| {\frac{{\partial {\alpha _1}}}{{\partial {x_1}}}}\right| \\&+\,\frac{{g_1^{\max }}}{{g_1^{\min }}}\left| {{z_2}{x_1}} \right| \\&+\,\displaystyle \frac{1}{{g_2^{\min }}}\left| {\frac{{\partial {\alpha _1}}}{{\partial {x_1}}}} \right| ({\varphi _{11}}(v) + \theta _{12}^* + {\phi _{12}}(\left| {{x_1}} \right| ))\\&+\, \frac{{\eta _1}{x_1^2}}{{g_2^{\min }}}\left| {\frac{{\partial {\alpha _1}}}{{\partial {{{\hat{\theta }} }_1}}}} \right| + \frac{{{\eta _1}}}{{g_2^{\min }}}\left| {\frac{{\partial {\alpha _1}}}{{\partial {x_1}}}} \right| \left| {{{{\hat{\theta }} }_1}} \right| \\&+\,\frac{{\left| {{f_2}({{{\bar{x}}}_2})} \right| }}{{g_2^{\min }}}, { {{X_2} = {[{\bar{x}}_2^\mathrm{T},{{\hat{\theta }} _1},v]^\mathrm{T}}}}. \end{aligned}$$

From Lemma 5, we adopt the following neural network to approximate the unknown nonlinear continuous function \({{\bar{\phi }} _2}({X_2})\):

$$\begin{aligned} {{\bar{\phi }} _2}({X_2}) = H_2^{*T}{S_2}({X_2}) + {\varepsilon _2}({X_2}), \end{aligned}$$
(23)

where \(H_2^*\) is an unknown weight vector, \({S_2}({X_2}) = {[{S_{21}}({X_2}), {S_{22}}({X_2}), \ldots ,{S_{2{L_2}}}({X_2})]^\mathrm{T}} \in {R^{L_2}}\) is a basis function vector, and \({\varepsilon _2}({X_2})\) is the approximation error, satisfying \(\left| {{\varepsilon _2}({X_2})} \right| \leqslant \varepsilon _2^*\) with \(\varepsilon _2^* > 0\).

Substituting (23) into (22), and using Lemma 4, we have

$$\begin{aligned} {\dot{V}_2}\leqslant & {} -\, {c_1}x_1^2 + \frac{{g_2^{\max }}}{{g_2^{\min }}}\left| {{z_2}{z_3}} \right| + \displaystyle \frac{{{g_2}({{{\bar{x}}}_2})}}{{g_2^{\min }}}{z_2}{\alpha _2} + z_2^2{\theta _2} \nonumber \\&+\, g_{21}^*\displaystyle \sqrt{{V_2}}-\displaystyle \frac{1}{{{\eta _2}}}{{{\tilde{\theta }} }_2}{{\dot{{\hat{\theta }}}}_2} \nonumber \\&+\, \left( \frac{g_1^{\max }}{4}\sqrt{\frac{2}{{g_1^{\min }}}}+ \displaystyle \sum \limits _{i = 1}^3 {g_{1i}^*} \right) \displaystyle \sqrt{{V_1}}-{\tilde{\theta }} _1^2, \end{aligned}$$
(24)

where \({\theta _2} = {\left\| {H_2^*} \right\| ^2}L\) and \(g_{21}^* = \varepsilon _2^*\displaystyle \sqrt{2g_2^{\min }}+\displaystyle \sqrt{2g_2^{\min }{\theta _2}}\).

Take a virtual control law and a “virtual adaptive law” for Step 2 as follows:

$$\begin{aligned} {\alpha _2}= & {} -\, {z_2}\sqrt{1 + {\hat{\theta }} _2^2} - {c_2}{z_2}, \end{aligned}$$
(25a)
$$\begin{aligned} {\dot{{\hat{\theta }}}_2}= & {} {\eta _2}z_2^2 - {\eta _2}{{\hat{\theta }} _2}, \end{aligned}$$
(25b)

where \({c_2} > 0\) is a design parameter.

Substituting (25) into (24) results in

$$\begin{aligned} {{\dot{V}}_2}\leqslant & {} -\, {c_1}x_1^2 - {c_2}z_2^2 + \frac{{g_2^{\max }}}{{g_2^{\min }}}\left| {{z_2}{z_3}} \right| \nonumber \\&+\, (g_{21}^* + g_{22}^*)\sqrt{{V_2}} + g_1^*\sqrt{{V_1}} - {\tilde{\theta }} _1^2 - {\tilde{\theta }} _2^2, \end{aligned}$$
(26)

where \(g_1^* = \displaystyle \sum \limits _{i = 1}^3 {g_{1i}^*}+\frac{{g_1^{\max }}}{4}\sqrt{\frac{2}{{g_1^{\min }}}}\) and \(g_{22}^* = {\theta _2}\displaystyle \sqrt{2{\eta _2}}\).

Step \(h\) \((3 \leqslant h \leqslant n - 1)\) Consider a Lyapunov function candidate as

$$\begin{aligned} {V_h} = {V_{h - 1}} + \frac{{z_h^2}}{{2g_h^{\min }}} + \frac{1}{{2{\eta _h}}}{\tilde{\theta }} _h^2, \end{aligned}$$
(27)

where \({\eta _h} > 0\) is a design parameter, \({{\tilde{\theta }} _h} = {\theta _h} - {{\hat{\theta }} _h}\), and \({{\hat{\theta }} _h}\) is the estimate of \({\theta _h}\), which will be given later. From (18), (26) and Assumption 1, the time derivative of \({V_h}\) satisfies

$$\begin{aligned} {\dot{V}_h}\leqslant & {} {\dot{V}_{h - 1}} + \displaystyle \frac{{g_h^{\max }}}{{g_h^{\min }}}\left| {{z_h}} \right| \left| {{x_{h + 1}} - {\alpha _h}} \right| + \displaystyle \frac{{{g_h}({{{\bar{x}}}_h})}}{g_h^{\min }}{z_h}{\alpha _h} \nonumber \\&+\,\displaystyle \frac{{\left| {{z_h}} \right| }}{{g_h^{\min }}}({\phi _{h1}}(\left\| z_0 \right\| ) + {\phi _{h2}}(\left\| {{{{\bar{x}}}_h}} \right\| ))\nonumber \\&+\,\sum \limits _{i = 1}^{h - 1} {\frac{{\left| {{f_i}({{{\bar{x}}}_i})} \right| \left| {{z_h}} \right| }}{{g_h^{\min }}}} \left| {\frac{{\partial {\alpha _{h - 1}}}}{{\partial {x_i}}}} \right| \nonumber \\&+\, \sum \limits _{i = 1}^{h - 1} {\frac{{g_h^{\max }\left| {{z_h}} \right| }}{{g_h^{\min }}}} \left| {\frac{{\partial {\alpha _{h - 1}}}}{{\partial {x_i}}}} \right| \left| {{x_{i + 1}}} \right| + \frac{\left| {{z_h}} \right| }{{g_h^{\min }}}\left| {{f_h}({{{\bar{x}}}_h})} \right| \nonumber \\&+\, \sum \limits _{i = 1}^{h - 1} {\frac{{\left| {{z_h}} \right| }}{{g_h^{\min }}}} \left| {\frac{{\partial {\alpha _{h - 1}}}}{{\partial {x_i}}}} \right| ({\phi _{i1}}(\left\| z_0 \right\| ) + {\phi _{i2}}(\left\| {{{{\bar{x}}}_i}} \right\| ))\nonumber \\&+\, \sum \limits _{i = 2}^{h - 1} {\frac{{\eta _i}{z_i^2}\left| {{z_h}} \right| }{{{g_h^{\min }}}}} \left| {\frac{{\partial {\alpha _{h - 1}}}}{{\partial {{{\hat{\theta }} }_i}}}} \right| - \frac{1}{{{\eta _h}}}{{{\tilde{\theta }} }_h}{{\dot{{\hat{\theta }}} }_h}\nonumber \\&+\,\displaystyle \frac{{\eta _1}{x_1^2}\left| {{z_h}} \right| }{{{g_h^{\min }}}}\left| {\frac{{\partial {\alpha _1}}}{{\partial {{{\hat{\theta }} }_1}}}} \right| + \sum \limits _{i = 1}^{h - 1} {\frac{{{\eta _i}\left| {{z_h}} \right| }}{{{g_h^{\min }}}}} \left| {\frac{{\partial {\alpha _{h - 1}}}}{{\partial {{{\hat{\theta }} }_i}}}} \right| \left| {{{{\hat{\theta }} }_i}} \right| .\nonumber \\ \end{aligned}$$
(28)

Using Assumption 3, Lemmas 2 and 3, we obtain, for \(\forall t \geqslant {t_0}\),

$$\begin{aligned} \left| {{z_h}} \right| {\phi _{h1}}(\left\| z_0 \right\| )\leqslant & {} \left| {{z_h}} \right| {\phi _{h1}} \circ {\bar{\alpha }} _1^{ - 1}(v(t) + D(t,{t_0})) \nonumber \\\leqslant & {} \left| {{z_h}} \right| {\varphi _{h1}}(v(t)) + \left| {{z_h}} \right| {\varphi _{h2}}(D(t,{t_0})),\nonumber \\ \end{aligned}$$
(29)

where \({\varphi _{h1}}( \cdot )\geqslant 0\) and \({\varphi _{h2}}( \cdot )\geqslant 0\) are two smooth functions. Similar to Steps 1 and 2, there exists a constant \(\theta _{h2}^*>0\) such that \({\varphi _{h2}}(D(t,{t_0})) \leqslant \theta _{h2}^*\) holds. Substituting (29) into (28) leads to

$$\begin{aligned} {\dot{V}_h}\leqslant & {} {{\dot{V}}_{h - 1}}+\displaystyle \frac{{g_h^{\max }}}{{g_h^{\min }}}\left| {{z_h}{z_{h + 1}}} \right| + \displaystyle \frac{{{g_h}({{{\bar{x}}}_h})}}{{g_h^{\min }}}{z_h}{\alpha _h} \nonumber \\&+\, \left| {{z_h}} \right| {{\bar{\phi }} _h}({X_h}) - \displaystyle \frac{{g_{h-1}^{\max }}}{{{g_{h-1}^{\min }}}}z_h^2\left| {{z_{h - 1}}} \right| -\frac{1}{{{\eta _h}}}{{{\tilde{\theta }} }_h}{{\dot{{\hat{\theta }}} }_h},\nonumber \\ \end{aligned}$$
(30)

where

$$\begin{aligned} {{\bar{\phi }} _h}({X_h})= & {} \displaystyle \frac{1}{{g_h^{\min }}}\left( {\varphi _{h1}}(v) + \theta _{h2}^* + {\phi _{h2}}(\left\| {{{{\bar{x}}}_h}} \right\| )\right) \\&+\,\sum \limits _{i = 1}^{h - 1} {\frac{{g_i^{\max }}}{{g_h^{\min }}}} \left| {\displaystyle \frac{{\partial {\alpha _{h - 1}}}}{{\partial {x_i}}}} \right| \left| {{x_{i + 1}}} \right| + \displaystyle \frac{{\eta _1}{x_1^2}}{{g_h^{\min }}}\left| {\frac{{\partial {\alpha _1}}}{{\partial {{{\hat{\theta }} }_1}}}} \right| \\&+\,\displaystyle \frac{{\left| {{f_h}({{{\bar{x}}}_h})} \right| }}{g_h^{\min }}+\displaystyle \sum \limits _{i = 1}^{h - 1}\frac{{\left| {{f_i}({{{\bar{x}}}_i})} \right| }}{{g_h^{\min }}} \left| {\frac{{\partial {\alpha _{h - 1}}}}{{\partial {x_i}}}} \right| \\&+\, \sum \limits _{i = 1}^{h - 1} {\frac{1}{{g_h^{\min }}}} \left| {\frac{{\partial {\alpha _{h - 1}}}}{{\partial {x_i}}}} \right| ({\varphi _{i1}}(v) \\&+ \theta _{i2}^* + {\phi _{i2}}(\left\| {{{{\bar{x}}}_i}} \right\| ))\\&+\, \displaystyle \frac{{g_{h - 1}^{\max }}}{{g_{h - 1}^{\min }}}\left| {{z_h}} \right| \left| {{z_{h - 1}}} \right| +\displaystyle \sum \limits _{i = 2}^{h - 1} {\displaystyle \frac{{\eta _i}{z_i^2}}{{g_h^{\min }}}} \left| {\displaystyle \frac{{\partial {\alpha _{h - 1}}}}{{\partial {{{\hat{\theta }} }_i}}}} \right| \\&+\,\displaystyle \sum \limits _{i = 1}^{h - 1} {\frac{{{\eta _i}}}{{g_h^{\min }}}\left| {\frac{{\partial {\alpha _{h - 1}}}}{{\partial {{{\hat{\theta }} }_i}}}} \right| } \left| {{{{\hat{\theta }} }_i}} \right| , {X_h} \\&= {[{\bar{x}}_h^\mathrm{T},{{\hat{\theta }} _1}, \ldots ,{{\hat{\theta }} _{h - 1}},v]^\mathrm{T}}. \end{aligned}$$

To approximate the unknown nonlinear continuous function \({{\bar{\phi }} _h}({X_h})\), we adopt the following neural network:

$$\begin{aligned} {{\bar{\phi }} _h}({X_h}) = H_h^{*T}{S_h}({X_h}) + {\varepsilon _h}({X_h}), \end{aligned}$$
(31)

where \(H_h^*\) is an unknown ideal weight vector, \({S_h}({X_h}) = {[{S_{h1}}({X_h}),{S_{h2}}({X_h}), \ldots ,{S_{h{L_h}}}({X_h})]^\mathrm{T}} \in {R^{L_h}}\) is a basis function vector, and \({\varepsilon _h}({X_h})\) is the approximation error, satisfying \(\left| {{\varepsilon _h}({X_h})} \right| \leqslant \varepsilon _h^*\) with \(\varepsilon _h^* > 0\).

Substituting (31) into (30), and using Lemma 4, we obtain

$$\begin{aligned} {{\dot{V}}_h}\leqslant & {} - {c_1}x_1^2 - \sum \limits _{i = 2}^{h - 1} {{c_i}z_i^2} + \frac{{g_h^{\max }}}{{g_h^{\min }}}\left| {{z_h}{z_{h + 1}}} \right| \nonumber \\&+\, \frac{{{g_h}({{{\bar{x}}}_h})}}{{{g_h^{\min }}}}{z_h}{\alpha _h} + z_h^2{\theta _h} \nonumber \\&+\, g_{h1}^*\sqrt{{V_h}}- \displaystyle \frac{1}{{{\eta _h}}}{{{\tilde{\theta }} }_h}{{\dot{{\hat{\theta }}} }_h} \nonumber \\&+\, \displaystyle \sum \limits _{i = 1}^{h - 2} {g_i^*} \displaystyle \sqrt{{V_i}} \nonumber \\&+\,\bigg (\displaystyle \frac{{g_{h - 1}^{\max }}}{4}\displaystyle \sqrt{\displaystyle \frac{2}{{g_{h - 1}^{\min }}}}+\displaystyle \sum \limits _{i = 1}^2 {g_{h - 1,i}^*} \bigg )\displaystyle \sqrt{{V_{h - 1}}}\nonumber \\&-\, \displaystyle \sum \limits _{i = 1}^{h - 1} {{\tilde{\theta }} _i^2}, \end{aligned}$$
(32)

where \({\theta _h} = {\left\| {H_h^*} \right\| ^2}L\) and \(g_{h1}^* = \varepsilon _h^*\displaystyle \sqrt{2{{g_h^{\min }}}} + \displaystyle \displaystyle \sqrt{2{{g_h^{\min }}}{\theta _h}}\).

Choose a virtual control law and a “virtual adaptive law” for Step h as follows:

$$\begin{aligned} {\alpha _h}= & {} -\, {z_h}\sqrt{1 + {\hat{\theta }} _h^2} - {c_h}{z_h}, \end{aligned}$$
(33a)
$$\begin{aligned} {\dot{{\hat{\theta }}}_h}= & {} {\eta _h}z_h^2 - {\eta _h}{{\hat{\theta }} _h}, \end{aligned}$$
(33b)

where \({c_h} > 0\) is a design parameter.

Substituting (33) into (32) yields

$$\begin{aligned} {{\dot{V}}_h}\leqslant & {} - {c_1}x_1^2 - \sum \limits _{i = 2}^h {{c_i}z_i^2} + \frac{{g_h^{\max }}}{{g_h^{\min }}}\left| {{z_h}}{{z_{h+1}}} \right| \nonumber \\&+\, (g_{h1}^* + g_{h2}^*)\sqrt{{V_h}} + \sum \limits _{i = 1}^{h - 1} {g_i^*} \sqrt{{V_i}} - \sum \limits _{i = 1}^h {{\tilde{\theta }} _i^2},\nonumber \\ \end{aligned}$$
(34)

where \(g_{h - 1}^* = \displaystyle \sum \limits _{i = 1}^2 {g_{h - 1,i}^*} + \displaystyle \frac{{g_{h - 1}^{\max }}}{4}\displaystyle \sqrt{\displaystyle \frac{2}{{g_{h - 1}^{\min }}}}\) and \(g_{h2}^* = {\theta _h}\displaystyle \sqrt{2{\eta _h}}\).

Step \(n\) Consider the following Lyapunov function candidate:

$$\begin{aligned} {V_n} = {V_{n - 1}} + \displaystyle \frac{{{1}}}{{2g_n^{\min }}}z_n^2 + \displaystyle \frac{{{1}}}{{2{\eta _n}}}{\tilde{\theta }} _n^2, \end{aligned}$$
(35)

where \({{\tilde{\theta }} _n} = {\theta _n} - {{\hat{\theta }} _n}\), and \({{\hat{\theta }} _n}\) is the estimate of \({\theta _n}\), which will be given later. From (18), (26) and (34), the time derivative of \({V_n}\) satisfies

$$\begin{aligned} {\dot{V}_n}\leqslant & {} {\dot{V}_{n - 1}} + \displaystyle \frac{{g_n^{\max }}}{{g_n^{\min }}}\left| {{z_n}} \right| \left| {u - {\alpha _n}} \right| + \displaystyle \frac{{{g_n}({{{\bar{x}}}_n})}}{{g_n^{\min }}}{z_n}{\alpha _n} \nonumber \\&+\, \displaystyle \frac{{\left| {{z_n}} \right| }}{{g_n^{\min }}}({\phi _{n1}}(\left\| z_0 \right\| ) + {\phi _{n2}}(\left\| {{{{\bar{x}}}_n}} \right\| ))\nonumber \\&+\, \sum \limits _{i = 1}^{n - 1} {\frac{{{g_i^{\max }}\left| {{z_n}} \right| }}{{{g_n^{\min }}}}} \left| {\frac{{\partial {\alpha _{n - 1}}}}{{\partial {x_i}}}} \right| \left| {{x_{i + 1}}} \right| \nonumber \\&+\, \sum \limits _{i = 1}^{n - 1} {\frac{{\left| {{z_n}} \right| }}{{g_n^{\min }}}} \left| {\frac{{\partial {\alpha _{n - 1}}}}{{\partial {x_i}}}} \right| \left| {{f_i}({{{\bar{x}}}_i})} \right| - \frac{{{1}}}{{{\eta _n}}}{{{\tilde{\theta }} }_n}{{\dot{{\hat{\theta }}} }_n}\nonumber \\&+\, \sum \limits _{i = 1}^{n - 1} {\frac{{\left| {{z_n}} \right| }}{{{g_n^{\min }}}}} \left| {\frac{{\partial {\alpha _{n - 1}}}}{{\partial {x_i}}}} \right| ({\phi _{i1}}(\left\| z_0 \right\| ) + {\phi _{i2}}(\left\| {{{{\bar{x}}}_i}} \right\| )) \nonumber \\&+\, \sum \limits _{i = 1}^{n - 1} {\frac{{{\eta _i}\left| {{z_n}} \right| }}{{g_n^{\min }}}\left| {\frac{{\partial {\alpha _{n - 1}}}}{{\partial {{{\hat{\theta }} }_i}}}} \right| } \left| {{{{\hat{\theta }} }_i}} \right| \nonumber \\&+\,\frac{{{{\eta _1}}x_1^2\left| {{z_n}} \right| }}{{g_n^{\min }}}\left| {\frac{{\partial {\alpha _{n - 1}}}}{{\partial {{{\hat{\theta }} }_i}}}} \right| + \sum \limits _{i = 2}^n {\frac{{{\eta _i}\left| {{z_n}} \right| z_i^2}}{{g_n^{\min }}}} \left| {\frac{{\partial {\alpha _{n - 1}}}}{{\partial {{{\hat{\theta }} }_i}}}} \right| \nonumber \\&+\, \frac{{\left| {{z_n}} \right| }}{{g_n^{\min }}}\left| {{f_n}({{{\bar{x}}}_n})} \right| . \end{aligned}$$
(36)

Under Assumption 1, for \(\forall t \geqslant {t_0}\), the following inequality holds:

$$\begin{aligned} \left| {{z_n}} \right| {\phi _{n1}}(\left\| z_0 \right\| )\leqslant & {} \left| {{z_n}} \right| {\phi _{n1}} \circ {\bar{\alpha }} _1^{ - 1}(v(t) + D(t,{t_0})) \nonumber \\\leqslant & {} \left| {{z_n}} \right| {\varphi _{n1}}(v(t)) + \left| {{z_n}} \right| {\varphi _{n2}}(D(t,{t_0})),\nonumber \\ \end{aligned}$$
(37)

where \({\varphi _{n1}}( \cdot )\geqslant 0\) and \({\varphi _{n2}}( \cdot )\geqslant 0\) are smooth functions.

Note that \(D(t,{t_0})\) is a bounded function for \(\forall t \geqslant {t_0}\); then we can obtain \({\varphi _{n2}}(D(t,{t_0}))\leqslant \theta _{n2}^*\), where \(\theta _{n2}^*>0\) is a constant. Substituting (37) into (36) results in

$$\begin{aligned} {{\dot{V}}_n}\leqslant & {} {{\dot{V}}_{n - 1}} + \displaystyle \frac{{g_n^{\max }}}{{g_n^{\min }}}\left| {{z_n}} \right| \left| {u - {\alpha _n}} \right| + \displaystyle \frac{{{g_n}({{{\bar{x}}}_n})}}{{g_n^{\min }}}{z_n}{\alpha _n} \nonumber \\&+\, \left| {{z_n}} \right| {{{\bar{\phi }} }_n}({X_n})- \displaystyle \frac{{{1}}}{{{\eta _n}}}{{{\tilde{\theta }} }_n}{{\dot{{\hat{\theta }}} }_n}\nonumber \\&-\, \displaystyle \frac{{g_{n - 1}^{\max }}}{{g_{n - 1}^{\min }}}z_n^2\left| {{z_{n - 1}}} \right| , \end{aligned}$$
(38)

where

$$\begin{aligned} {{{\bar{\phi }} }_n}({X_n})= & {} \displaystyle \sum \limits _{i = 1}^{n - 1} {\displaystyle \frac{{{1}}}{{g_n^{\min }}}} \left| {\displaystyle \frac{{\partial {\alpha _{n - 1}}}}{{\partial {x_i}}}} \right| \left| {{f_i}({{{\bar{x}}}_i})} \right| \\&+\, \displaystyle \sum \limits _{i = 1}^{n - 1} \frac{{g_i^{\max }}}{{g_n^{\min }}}\left| {\displaystyle \frac{{\partial {\alpha _{n - 1}}}}{{\partial {x_i}}}} \right| \left| {{x_{i + 1}}} \right| +\displaystyle \frac{{g_{n - 1}^{\max }}}{{g_{n - 1}^{\min }}}\left| {{z_n}{z_{n - 1}}}\right| \\&+\,\displaystyle \frac{{\left| {{f_n}({{{\bar{x}}}_n})} \right| }}{{g_n^{\min }}}+ \displaystyle \frac{{{\eta _1}x_1^2}}{{g_n^{\min }}}\left| {\frac{{\partial {\alpha _{n - 1}}}}{{\partial {{{\hat{\theta }} }_i}}}} \right| \\&+\,\displaystyle \sum \limits _{i = 2}^n {\displaystyle \frac{{{\eta _i}z_i^2}}{{g_n^{\min }}}} \left| {\displaystyle \frac{{\partial {\alpha _{n - 1}}}}{{\partial {{{\hat{\theta }} }_i}}}} \right| \\&+\,\sum \limits _{i = 1}^{n - 1} {\frac{{{1}}}{{g_n^{\min }}}} \left| {\frac{{\partial {\alpha _{n - 1}}}}{{\partial {x_i}}}} \right| ({\varphi _{i1}}(v) + \theta _{i2}^*\\&+\, {\phi _{i2}}(\left\| {{{{\bar{x}}}_i}} \right\| ))\\&+\,\displaystyle \frac{{{1}}}{{g_n^{\min }}}\left( {\varphi _{n1}}(v) + \theta _{n2}^* + {\phi _{n2}}(\left\| {{{{\bar{x}}}_n}} \right\| )\right) \\&+\, \displaystyle \frac{{{1}}}{{g_n^{\min }}}\sum \limits _{i = 1}^{n - 1} {{\eta _i}} \left| {\frac{{\partial {\alpha _{n - 1}}}}{{\partial {{{\hat{\theta }} }_i}}}} \right| \left| {{{{\hat{\theta }} }_i}} \right| , {X_n}\\= & {} [{\bar{x}}_n^\mathrm{T}, {{{{\hat{\theta }} }_1},\ldots ,{{{\hat{\theta }} }_{n - 1}},v]^\mathrm{T}}. \end{aligned}$$

As in the previous steps, we adopt a neural network to approximate the unknown nonlinear continuous function \({{{\bar{\phi }} }_n}({X_n})\), that is,

$$\begin{aligned} {{{\bar{\phi }} }_n}({X_n}) = H_n^{*T}{S_n}({X_n}) + {\varepsilon _n}({X_n}), \end{aligned}$$
(39)

where \(H_n^*\) is an unknown ideal weight vector, \({S_n}({X_n}) = {[{S_{n1}}({X_n}),{S_{n2}}({X_n}), \ldots ,{S_{n{L_n}}}({X_n})]^\mathrm{T}} \in {R^{L_n}}\) is a basis function vector, and \({\varepsilon _n}({X_n})\) is the approximation error, satisfying \(\left| {{\varepsilon _n}({X_n})} \right| \leqslant \varepsilon _n^*\) with \(\varepsilon _n^* > 0\).

Substituting (39) into (38), and using Lemma 4, we obtain

$$\begin{aligned} {{\dot{V}}_n}\leqslant & {} - {c_1}x_1^2 - \displaystyle \sum \limits _{i = 2}^{n - 1} {{c_i}} z_i^2 + \displaystyle \frac{{g_n^{\max }}}{{g_n^{\min }}}\left| {{z_n}} \right| \left| {u - {\alpha _n}} \right| \nonumber \\&+\, \displaystyle \frac{{{g_n}({{\bar{x}}_n})}}{{g_n^{\min }}}{z_n}{\alpha _n} + z_n^2{\theta _n}-\displaystyle \frac{{{1}}}{{{\eta _n}}}{{\tilde{\theta }}_n}{{\dot{\hat{\theta }} }_n} \nonumber \\&+\,g_{n1}^*\displaystyle \sqrt{{V_n}}\nonumber \\&+\, \left( \displaystyle \frac{{g_{n - 1}^{\max }}}{4}\displaystyle \sqrt{\frac{2}{{g_{n - 1}^{\min }}}} +\sum \limits _{h = 1}^2 g_{n - 1,h}^* \right) \nonumber \\&\displaystyle \times \,\sqrt{{V_{n - 1}}}- \displaystyle \sum \limits _{i = 1}^{n - 1} {\tilde{\theta }_i^2}, \end{aligned}$$
(40)

where \({\theta _n} = {\left\| {H_n^*} \right\| ^2}L\) and \(g_{n1}^* = \varepsilon _n^*\displaystyle \sqrt{{2g_n^{\min }}} + \displaystyle \sqrt{{{2g_n^{\min }{\theta _n}}}}\).

A virtual control law and a “virtual adaptive law” for the last step are chosen as

$$\begin{aligned} {\alpha _n}= & {} -\, {z_n}\sqrt{1 + {\hat{\theta }} _n^2} - {c_n}{z_n}, \end{aligned}$$
(41a)
$$\begin{aligned} {\dot{{\hat{\theta }}}_n}= & {} {\eta _n}z_n^2 - {\eta _n}{{\hat{\theta }} _n}, \end{aligned}$$
(41b)

where \({c_n} > 0\) is a design parameter.

Substituting (41) into (40) results in

$$\begin{aligned} {{\dot{V}}_n}\leqslant & {} - {c_1}x_1^2 - \sum \limits _{i = 2}^{n} {{c_i}z_i^2}+ \displaystyle \frac{{g_n^{\max }}}{{g_n^{\min }}}\left| {{z_n}} \right| \left| {u - {\alpha _n}} \right| \nonumber \\&-\, \displaystyle \sum \limits _{i = 1}^{n} {{\tilde{\theta }} _i^2}+ \displaystyle \sum \limits _{i = 1}^{n} {g_i^*\displaystyle \sqrt{{V_i}} }, \end{aligned}$$
(42)

where \(g_{n - 1}^* = \displaystyle \sum \limits _{i = 1}^2 {g_{n - 1,i}^*} + \displaystyle \frac{{g_{n - 1}^{\max }}}{4}\displaystyle \sqrt{\displaystyle \frac{2}{{g_{n - 1}^{\min }}}}\) and \(g_n^* = g_{n1}^* + {\displaystyle \theta _n}\displaystyle \sqrt{2{\eta _n}}\).

Note that the states of system (1) are measurable at the sampling points; hence, from (16), (25), (33) and (41), for \(\forall t \in [{t_k},{t_{k + 1}}), k = 0,1, \ldots , + \infty \), we design the adaptive sampled-data controller as

$$\begin{aligned} \left\{ \begin{array}{l} u(t) = u({t_k})\\ = - \displaystyle \sum \limits _{i = 1}^n {\left[ \displaystyle \prod \limits _{h = i}^n {\left( \sqrt{1 + \mu _h^2({t_k})} + {c_h}\right) }\right] {x_i}({t_k}),} \\ {{{{\dot{\mu }}} }_1}(t) = {\eta _1}x_1^2({t_k}) - {\eta _1}{\mu _1}(t),\\ {{{\dot{\mu }} }_l}(t) = {\eta _l}{\varrho _l^2}({t_k}) - {\eta _l}{\mu _l}(t), \quad l = 2,3, \ldots ,n, \end{array} \right. \end{aligned}$$
(43)

where \(\varrho _l({t_k}) = {x_l}({t_k}) + \varrho _{l-1}({t_k})\big (\displaystyle \sqrt{1 + \mu _{l - 1}^2({t_k})} + {c_{l - 1}}\big ), \varrho _1({t_k}) = {x_1}({t_k})\).
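To make the structure of (43) explicit, the following minimal Python sketch computes \(u({t_k})\) from the sampled states via the recursion for \(\varrho _l\) and advances the adaptive states \(\mu _l\) between samples by Euler integration; the gain vectors and the integration step are illustrative choices, not prescribed by the paper:

```python
import numpy as np

# Sampled-data controller (43); c and eta are illustrative gain vectors
# of length n, x_k and mu_k the sampled states and adaptive parameters.
def control(x_k, mu_k, c):
    n = len(x_k)
    # varrho recursion: varrho_1 = x_1(t_k),
    # varrho_l = x_l(t_k) + varrho_{l-1}(t_k)*(sqrt(1 + mu_{l-1}^2) + c_{l-1}).
    rho = np.empty(n)
    rho[0] = x_k[0]
    for l in range(1, n):
        rho[l] = x_k[l] + rho[l - 1] * (np.sqrt(1.0 + mu_k[l - 1]**2) + c[l - 1])
    # u(t_k) = -sum_i [prod_{h=i}^n (sqrt(1 + mu_h^2(t_k)) + c_h)] x_i(t_k).
    u = -sum(np.prod(np.sqrt(1.0 + mu_k[i:]**2) + c[i:]) * x_k[i]
             for i in range(n))
    return u, rho

def mu_step(mu, rho_k, eta, dt):
    # Adaptive laws in (43) with inputs frozen over [t_k, t_{k+1}):
    # d(mu_l)/dt = eta_l * varrho_l^2(t_k) - eta_l * mu_l (varrho_1 = x_1).
    return mu + dt * (eta * rho_k**2 - eta * mu)
```

Between two sampling instants, only \(\mu _l(t)\) evolves; the control value \(u({t_k})\) is held constant, matching (43).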

For each sampling interval \(\left[ {{t_k},{t_{k + 1}}} \right) \), relation (42) can be further represented as

$$\begin{aligned} {{\dot{V}}_n}(t)\leqslant & {} - {\tilde{c}} {V_n}(t) + \displaystyle \frac{{g_n^{\max }}}{{g_n^{\min }}}\left| {{z_n}(t)} \right| \left| {u({t_k}) - {\alpha _n}(t)} \right| \nonumber \\&+\, {\bar{g}}\displaystyle \sqrt{{V_n(t)}}, \quad \forall t \in [{t_k},{t_{k + 1}}), \end{aligned}$$
(44)

where

$$\begin{aligned} {\tilde{c}}= & {} \min \big \{ 2{g_1^{\min }}{c_1},2{g_2^{\min }}{c_2}, \ldots ,2{g_n^{\min }}{c_n},2{\eta _1}, 2{\eta _2}, \ldots ,\\&\quad 2{\eta _n},2{\bar{c}}\big \}\hbox { and }{\bar{g}} = \displaystyle \sum \limits _{i = 1}^n {g_i^*}. \end{aligned}$$

Remark 5

It should be mentioned that the term \(x_1^2{\theta _1}\) in (15) is necessary for the adaptive sampled-data controller design, since this term handles the uncertainties arising in the first step through the virtual controller (16) [see (14)–(18)]; this way of dealing with the uncertainties was motivated by the existing literature on adaptive continuous-time control, such as [2,3,4].

Remark 6

In comparison with the design of adaptive laws in continuous-time control, it can be seen from (43) that the adaptive laws are constructed based only on the sampled values of the system states, which means that the proposed adaptive design saves resources and helps to reduce the computational burden of the controller.

4 Stability analysis

In what follows, the stability of the closed-loop system under sampled-data controller (43) will be demonstrated by using the Gronwall–Bellman inequality.

Theorem 1

Suppose system (1) satisfies Assumptions 1–3. Then, for the given allowable sampling period and any bounded initial condition, sampled-data controller (43) renders all states of the resulting closed-loop system semi-globally uniformly ultimately bounded.

Proof

Choose the following Lyapunov function candidate

$$\begin{aligned} V= & {} {V_n}+\sum \limits _{i = 1}^n {\frac{{{\tilde{\mu }} _i^2}}{{2{\beta _i}}}} =\displaystyle \frac{{x_1^2}}{{2{g_1^{\min }}}} + \displaystyle \sum \limits _{i = 2}^{n} {\displaystyle \frac{{z_i^2}}{{2{g_i^{\min }}}}} \nonumber \\&+\, \displaystyle \sum \limits _{i = 1}^{n} {\displaystyle \frac{{{\tilde{\theta }} _i^2}}{{2{\eta _i}}}} +\sum \limits _{i = 1}^n {\frac{{{\tilde{\mu }} _i^2}}{{2{\beta _i}}}}+ \displaystyle \frac{{{v^2}}}{{2{{{\bar{\lambda }} }_1}}}, \end{aligned}$$
(45)

where \({{\tilde{\mu }} _i} = \mu _i - {\theta _i}\) and \({\beta _i} > 0\) is a design parameter.

Let \(p>0\) be a constant with \(p \geqslant V({t_0})\), and define a compact set as

$$\begin{aligned} \varOmega\equiv & {} \left\{ \displaystyle \frac{{x_1^2}}{{2{g_1^{\min }}}} + \displaystyle \sum \limits _{i = 2}^{n} {\displaystyle \frac{{z_i^2}}{{2{g_i^{\min }}}}} + \displaystyle \sum \limits _{i = 1}^{n} {\displaystyle \frac{{{\tilde{\theta }} _i^2}}{{2{\eta _i}}}}\right. \\&\left. +\sum \limits _{i = 1}^n {\frac{{{\tilde{\mu }} _i^2}}{{2{\beta _i}}}}+ \displaystyle \frac{{{v^2}}}{{2{{{\bar{\lambda }} }_1}}} \leqslant p \right\} , \end{aligned}$$

next, we demonstrate that \(\{V \leqslant p\}\) is an invariant set, that is, \(V({t})\leqslant p\) holds for \(\forall t\geqslant t_0\) whenever \(V({t_0})\leqslant p\).

Under the condition that \(V(t) \leqslant p\), the following derivations hold. According to (45), \({x_1(t)},{z_2(t)},\ldots , {z_n(t)}, {{\hat{\theta }} _1(t)}, {{\hat{\theta }} _2(t)}, \ldots ,{{\hat{\theta }} _n(t)}\) and v(t) are bounded, which implies that there exist constants \({\Xi _1},{\Xi _2}, \ldots ,{\Xi _n},\) \({{\bar{\varDelta }} _1}, {{\bar{\varDelta }} _2}, \ldots , {{\bar{\varDelta }} _n}\) and \({{{\bar{H}}}}\) such that \(\left| {{x_1}(t)} \right| \leqslant {\Xi _1}, \left| {{z_2}(t)} \right| \leqslant {\Xi _2}, \ldots , \left| {{z_n}(t)} \right| \leqslant {\Xi _n}, \left| {{{{\hat{\theta }} }_1}(t)} \right| \leqslant {{\bar{\varDelta }} _1},\left| {{{{\hat{\theta }} }_{2}}(t)} \right| \leqslant {{\bar{\varDelta }} _2}, \ldots ,\left| {{{{\hat{\theta }} }_n}(t)} \right| \leqslant {{\bar{\varDelta }} _n}\) and \(\left| v (t)\right| \leqslant {{{\bar{H}}}}\) hold. Furthermore, according to (16), (25), (33) and (41), the virtual control laws are all bounded; noting that \({z_i} = {x_i} - {\alpha _{i - 1}},i = 2,3, \ldots ,n\), we obtain the boundedness of \({x_1(t)},{x_2(t)}, \ldots ,{x_n(t)}\), i.e., there exist constants \(\sigma _2^*, \sigma _3^*,\ldots , \sigma _n^*\) such that \(\left| {{x_2}(t)} \right| \leqslant \sigma _2^*, \left| {{x_3}(t)} \right| \leqslant \sigma _3^*, \ldots ,\left| {{x_n}(t)} \right| \leqslant \sigma _n^*\) hold. Then, at the sampling points, we also have \(\left| {{x_1}({t_k})} \right| \leqslant {\Xi _1}, \left| {{x_2}({t_k})} \right| \leqslant \sigma _2^*, \ldots ,\left| {{x_n}({t_k})} \right| \leqslant \sigma _n^*\).

On the other hand, under the condition that \(V(t) \leqslant p\), the following derivations also hold. From (43), \({\mu _1(t)}\) is bounded by the boundedness of \({x_1}({t_k})\), and then the boundedness of \({\mu _1}({t_k})\) is also guaranteed, which means that \(\left| {{\mu _1}(t)} \right| \leqslant {{\bar{\mu }} _1}\) and \(\left| {{\mu _1}({t_k})} \right| \leqslant {{{\bar{\mu }} }_1}\) hold, where \({{{\bar{\mu }} }_1}>0\) is a constant. Since

$$\begin{aligned} {\varrho _2}({t_k}) = {x_2}({t_k}) + {x_1}({t_k})\bigg (\displaystyle \sqrt{1 + \mu _1^2({t_k})} + {c_1}\bigg ), \end{aligned}$$

we can obtain the boundedness of \({\varrho _2}({t_k})\), that is, \(\left| {{\varrho _2}({t_k})} \right| \leqslant {\lambda _2}\) holds, where \({\lambda _2}>0\) is a constant. According to (43) again, \({\mu _2(t)}\) is bounded and, likewise, \({\mu _2}({t_k})\) is bounded, which means that \(\left| {{\mu _2}(t)} \right| \leqslant {{\bar{\mu }} _2}\) and \(\left| {{\mu _2}({t_k})} \right| \leqslant {{{\bar{\mu }} }_2}\) hold, where \({{{\bar{\mu }} }_2}>0\) is a constant. Furthermore, \({\varrho _3}({t_k})\) is bounded due to the boundedness of \({x_3}({t_k}),{\mu _3}({t_k})\) and \({\varrho _2}({t_k})\), i.e., \(\left| {{\varrho _3}({t_k})} \right| \leqslant {\lambda _3}\) holds for a constant \({\lambda _3} > 0\). Using (43) repeatedly, we obtain the boundedness of \({\varrho _3}({t_k}), {\varrho _4}({t_k}), \ldots ,{\varrho _n}({t_k}),{\mu _3}({t_k}), \ldots ,{\mu _n}({t_k})\), that is, there exist constants \({\lambda _3}, {\lambda _4}, \ldots , {\lambda _n}, {{{\bar{\mu }} }_3},{{{\bar{\mu }} }_4}, \ldots ,{{{\bar{\mu }} }_n}\) such that \(\left| {{\varrho _3}({t_k})} \right| \leqslant {\lambda _3}, \ldots ,\left| {{\varrho _n}({t_k})} \right| \) \(\leqslant {\lambda _n}, \left| {{\mu _3}(t)} \right| \leqslant {{\bar{\mu }} _3}, \ldots ,\left| {{\mu _n}(t)} \right| \leqslant {{\bar{\mu }} _n}, \left| {{\mu _3}({t_k})} \right| \leqslant {{{\bar{\mu }} }_3}, \ldots ,\left| {{\mu _n}({t_k})} \right| \leqslant {{{\bar{\mu }} }_n}\) hold. Furthermore, since x(t) and v(t) are bounded, there exists a constant \({H^*}>0\) such that

$$\begin{aligned}&\sqrt{3} \sum \limits _{i = 1}^n {\left| {{f_i}({{{\bar{x}}}_i}(t))} \right| } \\&\quad + \sqrt{3} \sum \limits _{i = 1}^n {\bigg ({\varphi _{i1}}(v(t)) + \theta _{i2}^* + {\phi _{i2}}(\left\| {{{{\bar{x}}}_i}(t)} \right\| )\bigg )} \leqslant {H^*} \end{aligned}$$

holds. Then, according to (44) and (45), the time derivative of V(t) satisfies the following relation:

$$\begin{aligned} \dot{V}(t)\leqslant & {} - {\tilde{c}}V(t) + \frac{{g_n^{\max }}}{{g_n^{\min }}}\left| {{z_n}(t)} \right| \left| {u({t_k}) - {\alpha _n}(t)} \right| \nonumber \\&+\, {{\bar{g}}}\sqrt{V(t)} +\frac{{\eta _1}{x_1^2({t_k})}}{{{\beta _1}}}\left| {{{{\tilde{\mu }} }_1}(t)} \right| \nonumber \\&+\, \sum \limits _{i = 2}^n {\frac{{\eta _i}{\varrho _i^2({t_k})}}{{{\beta _i}}}} \left| {{{{\tilde{\mu }} }_i}(t)} \right| - \sum \limits _{i = 1}^n {\frac{{{\eta _i}}}{{{\beta _i}}}} {\tilde{\mu }} _i^2(t) \nonumber \\&+\, \sum \limits _{i = 1}^n {\frac{{{\eta _i}{\theta _i}}}{{{\beta _i}}}\left| {{{{\tilde{\mu }} }_i}(t)} \right| }\nonumber \\\leqslant & {} - {\tilde{c}}V(t) + \frac{{g_n^{\max }}}{{g_n^{\min }}}\left| {{z_n}(t)} \right| \left| {u({t_k}) - {\alpha _n}(t)} \right| \nonumber \\&+\, {d^*}\sqrt{V(t)}, \quad \forall t \in [{t_k},{t_{k + 1}}), \end{aligned}$$
(46)

where \({d^*} = {\bar{g}} + \Xi _1^2{\eta _1}\displaystyle \sqrt{\frac{2}{{{\beta _1}}}} + \sum \limits _{i = 2}^n {\lambda _i^2}{\eta _i} \sqrt{\displaystyle \frac{2}{{{\beta _i}}}} + \sum \limits _{i = 1}^n {{\eta _i}{\theta _i}} \sqrt{\displaystyle \frac{2}{{{\beta _i}}}}\).

Denote \(\xi (t) = [{\bar{x}}_n^\mathrm{T}(t),{{{\hat{\theta }} }_1}(t),{{{\hat{\theta }} }_2}(t), \ldots ,{{{\hat{\theta }} }_n}(t),{\mu _1}(t), \ldots ,{\mu _n}(t)]^\mathrm{T}\); then we conclude from (16), (25), (33), (41) and (43) that \({\dot{\xi }}(t) = \Psi (\xi (t),\xi ({t_k})),\forall t \in [{t_k},{t_{k + 1}})\). Further, from Assumptions 1 and 2, for \(\forall t \in [{t_k},{t_{k + 1}})\), we obtain

$$\begin{aligned}&\left\| {\xi (t) - \xi ({t_k})} \right\| \leqslant \displaystyle \sqrt{n} \int _{{t_k}}^t {\left\| {\Psi (\xi (s),\xi ({t_k}))} \right\| } \mathrm{d}s\nonumber \\&\quad \leqslant \int _{{t_k}}^t {(g^*\displaystyle \sqrt{3n(n - 2)}+\alpha )\left\| {\xi (s) - \xi ({t_k})} \right\| \mathrm{d}s}\nonumber \\&\qquad +(t - {t_k})\beta ^*(\left\| {\xi ({t_k})} \right\| + 1), \end{aligned}$$
(47)

where \(\alpha = 2n\max \big \{ \displaystyle \sqrt{{\eta _1}} ,\displaystyle \sqrt{{\eta _2}} , \ldots ,\displaystyle \sqrt{{\eta _n}} \big \}, {g^*} = \max \big \{g_1^{\max }, \ldots ,g_n^{\max }\big \}, \beta ^* = \max \big \{ {\varTheta _1},{\varTheta _2}\big \}\) with

$$\begin{aligned} \left\{ \begin{array}{l} {\varTheta _1} = {g^*}\displaystyle \sqrt{3n(n - 2)} \\ \qquad \quad +\, {g^*}\displaystyle \sqrt{3n} \displaystyle \sum \limits _{i = 1}^n {\prod \limits _{h = i}^n {\left( \displaystyle \sqrt{1 + {\bar{\varDelta }}_h^2} + {c_h}\right) } },\\ {\varTheta _2} = \displaystyle \sqrt{n} \left( 2\sqrt{2} \Xi _1^2 + \displaystyle \sum \limits _{i = 2}^n {\sqrt{2} } \Xi _i^2 + \displaystyle \sum \limits _{i = 2}^n {\displaystyle \sqrt{2} } \lambda _i^2\right) \\ \qquad \quad +\, {H^*}\sqrt{n} . \end{array} \right. \end{aligned}$$

Furthermore, according to Lemma 4, we have the following relation:

$$\begin{aligned}&\int _{{t_k}}^t {\left( g^*\displaystyle \sqrt{3n(n - 2)}\right) \left\| {\xi (s) - \xi ({t_k})} \right\| } \mathrm{d}s \nonumber \\&\quad \leqslant {\varTheta _3}(t - {t_k})\left( \left\| {\xi ({t_k})} \right\| + 1\right) ,\quad \forall t \in [{t_k},{t_{k+1}}),\nonumber \\ \end{aligned}$$
(48)

where \({\varTheta _3} = \max \left\{ {g^*}\displaystyle \sqrt{6n(n - 2)} ,{g^*}\displaystyle \sqrt{6n(n - 2)}\right. \left. \left( {\Xi _1} + \sum \nolimits _{i = 2}^n {\sigma _i^* + \sum \nolimits _{i = 1}^n {{{{\bar{\mu }} }_i}} } \right) \right\} \).

Substituting (48) into (47), we obtain, for \(\forall t \in [{t_k},{t_{k+1}})\),

$$\begin{aligned}&\left\| {\xi (t) - \xi ({t_k})} \right\| \leqslant \varTheta ^*(t - {t_k})(\left\| {\xi ({t_k})} \right\| + 1)\nonumber \\&\quad + \alpha \int _{{t_k}}^t \left\| {\xi (s) - \xi ({t_k})} \right\| \mathrm{d}s, \end{aligned}$$
(49)

where \(\varTheta ^* = {\varTheta _3} + \beta ^*\).

Using Lemma 1, relation (49) can be further rewritten as

$$\begin{aligned}&\left\| {\xi (t) - \xi ({t_k})} \right\| \nonumber \\&\quad \leqslant (t - {t_k})\varTheta ^*(\left\| {\xi ({t_k})} \right\| + 1) \nonumber \\&\qquad + \int _{{t_k}}^t {\alpha ^2 (s - {t_k})\varTheta ^*\left( \left\| {\xi ({t_k})} \right\| + 1\right) } {e^{\alpha (t - s)}}\mathrm{d}s\nonumber \\&\quad \leqslant {\bar{\alpha }} \left( \sqrt{V({t_k})} {+} 1\right) \left( {e^{\alpha \left( t - {t_k}\right) }} - 1\right) , \forall t \in \left[ {t_k},{t_{k + 1}}\right) ,\nonumber \\ \end{aligned}$$
(50)

where \({\bar{\alpha }} = \max \bigg \{ \displaystyle \sqrt{2g_1^{\min }} ,\displaystyle \sum \limits _{i = 2}^n {\sigma _i^* + \displaystyle \sum \limits _{i = 1}^n {{\bar{\mu }} _i} } + \varTheta ^* \bigg \}\).

Noting that \(\left| {{x_i}(t) - {x_i}({t_k})} \right| \leqslant \left\| {\xi (t) - \xi ({t_k})} \right\| ,i = 1,2, \ldots ,n\), holds for each sampling period, for \(\forall t \in [{t_k},{t_{k+1}})\) we can deduce from (43) that

$$\begin{aligned} \left| {u(t_k) - {\alpha _n}({t})} \right|\leqslant & {} \displaystyle \frac{{{\tilde{c}}}}{{2g_n^{\max }}{\bar{\alpha }} }\sqrt{\frac{{g_n^{\min }}}{2}}\left\| {\xi (t) - \xi ({t_k})} \right\| \nonumber \\&+\,2{b^*}\left( {\Xi _1} + \sum \limits _{i = 2}^n {\sigma _i^*} + \sum \limits _{i = 1}^n {{{\bar{\varDelta }} _i}} \right. \nonumber \\&\left. +\, \sum \limits _{i = 1}^n {{{{\bar{\mu }} }_i}} \right) + {\varTheta _4}\left( {\Xi _1} + \displaystyle \sum \limits _{i = 2}^n {\sigma _i^*}\right) ,\nonumber \\ \end{aligned}$$
(51)

where \({b^*} = \displaystyle \sqrt{n} \max \big \{ b_1^*,\ldots ,b_n^*\big \}, b_i^* = \prod \nolimits _{h = i}^n {\bigg (\displaystyle \sqrt{1 + {{\bar{\varDelta }}_h^2}} + {c_h}\bigg )}, i = 1,2, \ldots ,n\); and \({\varTheta _4} > 0\) is a constant.

From (45), (50) and (51), for \(\forall t \in [{t_k},{t_{k+1}})\), we have

$$\begin{aligned}&\frac{{g_n^{\max }}}{{g_n^{\min }}}\left| {{z_n}(t)} \right| \left| {u({t_k}) - {\alpha _n}(t)} \right| \nonumber \\&\quad \leqslant \displaystyle \frac{{{\tilde{c}}}}{2}({e^{\alpha (t - {t_k})}} - 1)(\sqrt{V({t_k})} + 1)\sqrt{V(t)} + {\bar{d}} \sqrt{V(t)},\nonumber \\ \end{aligned}$$
(52)

where \({\bar{d}} = g_n^{\max }\bigg (2{b^*}\bigg ({\Xi _1} + \displaystyle \sum \nolimits _{i = 2}^n {\sigma _i^*} + \displaystyle \sum \nolimits _{i = 1}^n {{{\bar{\varDelta }} _i}} + \displaystyle \sum \nolimits _{i = 1}^n {{{{\bar{\mu }} }_i}} \bigg ) + {\varTheta _4}\bigg ({\Xi _1} + \displaystyle \sum \nolimits _{i = 2}^n {\sigma _i^*} \bigg )\bigg )\sqrt{\displaystyle \frac{2}{{g_n^{\min }}}}\).

Substituting (52) into (46) results in

$$\begin{aligned} \dot{V}(t)\leqslant & {} - {\tilde{c}}V(t) + \frac{{{\tilde{c}}}}{{{2}}}\displaystyle \sqrt{V(t)} \displaystyle \sqrt{V({t_k})} ({e^{\alpha (t - {t_k})}} - 1) \nonumber \\&+\,{{{\bar{d}}}}\displaystyle \sqrt{V(t)}+d^*\displaystyle \sqrt{V(t)}\nonumber \\&+\, \frac{{{\tilde{c}}}}{{{2}}}\displaystyle \sqrt{V(t)} ({e^{\alpha (t - {t_k})}} - 1),\quad \forall t \in [{t_k},{t_{k+1}}).\nonumber \\ \end{aligned}$$
(53)

Let \(W(t) = \displaystyle \sqrt{V(t)}\); noting that \(\dot{W}(t) = \dot{V}(t)/\big (2\sqrt{V(t)} \big )\), from (53), for \(\forall t \in [{t_k},{t_{k + 1}})\), a straightforward calculation leads to

$$\begin{aligned} \dot{W}(t)\leqslant & {} - \frac{1}{2}{\tilde{c}}W(t) + \frac{{{\tilde{c}}}}{{4}}({e^{\alpha {T_k}}} - 1)W({t_k}) \nonumber \\&+\, \frac{{{\tilde{c}}}}{{4}}({e^{\alpha {T_k}}} - 1) +\displaystyle \frac{1}{2}({d^*} + {\bar{d}}), \end{aligned}$$
(54)

where \({T_k} = {t_{k + 1}} - {t_k}\) is the sampling period. Choose a constant \({\lambda _0}\) satisfying

$$\begin{aligned} {\lambda _0} \geqslant \max \{ {\beta _0},1\}, \end{aligned}$$
(55)

where \({\beta _0} = \displaystyle \frac{{{\tilde{c}}}}{{{\tilde{c}}\sqrt{p} - 2({d^*} + {\bar{d}})}} > 0\); and the allowable sampling period \({T_k}\) is given as

$$\begin{aligned} 0 < {T_k} \leqslant \displaystyle \frac{1}{\alpha }\ln \bigg (1 + \displaystyle \frac{1}{{{\lambda _0}}}\bigg ),~~~k = 0,1, \ldots , + \infty . \end{aligned}$$
(56)

Define

$$\begin{aligned} \left\{ \begin{array}{l} {\sigma _1} = \displaystyle \frac{1}{2}{\tilde{c}},\\ {\sigma _{2k}} = \displaystyle \frac{{{\tilde{c}}}}{{4}}({e^{\alpha {T_k}}} - 1),\\ {\sigma _{3k}} = \displaystyle \frac{{{\tilde{c}}}}{{4}}({e^{\alpha {T_k}}} - 1)+\displaystyle \frac{1}{2}({d^*} + {\bar{d}}),\\ k = 0,1, \ldots , + \infty , \end{array} \right. \end{aligned}$$
(57)

then, noting from (56) that \({e^{\alpha {T_k}}} - 1 \leqslant 1/{\lambda _0}\) and from (55) that \({\lambda _0} \geqslant 1\), we can obtain the following relations by substituting (55) and (56) into (57):

$$\begin{aligned} {\sigma _{2k}}\leqslant & {} \displaystyle \frac{{{\tilde{c}}}}{{4}}, ~~~{\sigma _{3k}} \nonumber \\\leqslant & {} \displaystyle \frac{{{\tilde{c}}}}{{4{\lambda _0}}} + \frac{1}{2}({d^*} + {\bar{d}}), ~~~k = 0,1, \ldots , + \infty , \end{aligned}$$
(58)

under which, for \(\forall t \in [{t_0},{t_1})\) and \({{W(t_0) \leqslant {\displaystyle \sqrt{p}}}}\), it can be concluded that

$$\begin{aligned} \dot{W}(t)\leqslant & {} - {\sigma _1}W(t) + {\sigma _{20}}W({t_0}) + {\sigma _{30}}\nonumber \\\leqslant & {} - \displaystyle \frac{1}{2}{\tilde{c}}W(t) + \displaystyle \frac{{{\tilde{c}}}}{{4}}\sqrt{p} + \displaystyle \frac{{{\tilde{c}}}}{{4{\lambda _0}}} + \frac{1}{2}({d^*} + {\bar{d}}).\nonumber \\ \end{aligned}$$
(59)

Whenever \(V(t) = p\) for some \(t \in [{t_0},{t_1})\), it follows from (59) that \(\dot{W}(t) \leqslant 0\), which means that \(\{ {W(t) \leqslant {\sqrt{p}}} \}\) is an invariant set over the first sampling period; that is, \(V(t) \leqslant p\) holds on the interval \([{t_0},{t_1})\). Since \(W(t)\) is continuous for \(\forall t \geqslant {t_0}\), we obtain \(W(t_1^ - ) = W({t_1}) \leqslant \displaystyle \sqrt{p}\); hence, for \(\forall t \in [{t_1},{t_2})\), applying the same analysis as on the first interval \([{t_0},{t_1})\), we find that \(V(t) \leqslant p\) also holds, and likewise \(W(t_2^ - ) = W({t_2}) \leqslant \displaystyle \sqrt{p}\). Following this line of argument, \(W(t) \leqslant \sqrt{p}\) holds over every sampling period; that is, \(V(t) \leqslant p\) holds for \(\forall t \geqslant {t_0}\), which proves that \(\{V({t})\leqslant p\}\) is an invariant set for \(\forall t\geqslant t_0\) whenever \(V({t_0})\leqslant p\). Substituting (56) into (54) and solving the resulting inequality, we obtain, for \(\forall t \in [{t_k},{t_{k + 1}})\),

$$\begin{aligned} W(t)\leqslant & {} W({t_k})\left( {e^{ - {\sigma _1}(t - {t_k})}} + \displaystyle \frac{{{\sigma _{2k}}}}{{{\sigma _1}}}\left( 1 - {e^{ - {\sigma _1}(t - {t_k})}}\right) \right) \nonumber \\&+\, \displaystyle \frac{{1 - {e^{ - {\sigma _1}(t - {t_k})}}}}{{{\sigma _1}}}{\sigma _{3k}}. \end{aligned}$$
(60)

From (57) and (58), (60) can be further rewritten as

$$\begin{aligned} W(t) \leqslant {Q^*}(t - {t_k})W({t_k}) + \displaystyle \frac{{{\sigma _{3k}}}}{{{\sigma _1}}},\quad \forall t \in [{t_k},{t_{k + 1}}),\nonumber \\ \end{aligned}$$
(61)

where \({Q^*}(t - {t_k}) = \displaystyle \frac{1}{2} + \displaystyle \frac{1}{2}{e^{ - {\sigma _1}(t - {t_k})}}\). Substituting \(t = {t_{k + 1}}\) into (61) results in

$$\begin{aligned} W({t_{k + 1}}) \leqslant {Q^*}({T_k})W({t_k}) + \displaystyle \frac{{{\sigma _{3k}}}}{{{\sigma _1}}},\; k = 0,1, \ldots , {+} \infty .\nonumber \\ \end{aligned}$$
(62)

Obviously, \({Q^*}({T_k}) \in (1/2,1)\), and we can deduce from (62) that

$$\begin{aligned}&W({t_{k + 1}}) \leqslant \prod \limits _{i = 0}^k {{Q^*}({T_i})} W({t_0}) \nonumber \\&\qquad +\, \left( \frac{1}{{2{\lambda _0}}} + \frac{{{d^*} + {\bar{d}}}}{{{\tilde{c}}}}\right) \left( 1 + \sum \limits _{i = 1}^{k - 1} {\prod \limits _{h = i}^k {{Q^*}({T_h})}}\right) \nonumber \\&\quad \leqslant {Q^{*k + 1}}({T^*})W({t_0}) \nonumber \\&\qquad +\, \left( \frac{1}{{2{\lambda _0}}} + \frac{{{d^*} + {\bar{d}}}}{{{\tilde{c}}}}\right) \left( 1 + \sum \limits _{i = 1}^{k - 1} {\prod \limits _{h = i}^k {{Q^*}({T^*})} } \right) \nonumber \\&\quad \leqslant {Q^{*k + 1}}({T^*})W({t_0}) \nonumber \\&\qquad +\, \left( \frac{1}{{2{\lambda _0}}} + \frac{{{d^*} + {\bar{d}}}}{{{\tilde{c}}}}\right) \left( \frac{{1 - {Q^{*k + 1}}({T^*})}}{{1 - {Q^*}({T^*})}}\right) ,\nonumber \\ \end{aligned}$$
(63)

where \(T^*>0\) is a constant lower bound of the sampling periods, i.e., \(T^* \leqslant {T_k}\) for all \(k\); since \({Q^*}(\cdot )\) is decreasing, this guarantees \({Q^*}({T_k}) \leqslant {Q^*}({T^*})\).

From (63), it is easy to obtain that

$$\begin{aligned} \mathop {\lim }\limits _{k \rightarrow \infty } W({t_{k + 1}}) {\leqslant } \left( \frac{1}{{2{\lambda _0}}} {+} \frac{{{d^*} {+} {\bar{d}}}}{{{\tilde{c}}}}\right) \left( \frac{1}{{1 - {Q^*}({T^*})}}\right) ,\nonumber \\ \end{aligned}$$
(64)

i.e.,

$$\begin{aligned} \mathop {\lim }\limits _{t \rightarrow \infty } W(t) \leqslant {W^*}, \end{aligned}$$
(65)

where \({W^*} = \bigg (\displaystyle \frac{1}{{2{\lambda _0}}} +\displaystyle \frac{{{d^*} + {\bar{d}}}}{{{\tilde{c}}}}\bigg )\bigg (\displaystyle \frac{1}{{1 - {Q^*}({T^*})}}\bigg )\). Using (45) and (65), we get

$$\begin{aligned} \mathop {\lim }\limits _{t \rightarrow \infty } \left| {{x_1}(t)} \right|\leqslant & {} \sqrt{2g_1^{\min }} {W^{*}}, \mathop {\lim }\limits _{t \rightarrow \infty } \left| {{{{\tilde{\mu }} }_i}(t)} \right| \nonumber \\\leqslant & {} \sqrt{2{\beta _i}} {W^*}, \quad i = 1,2, \ldots ,n, \end{aligned}$$
(66a)
$$\begin{aligned} \mathop {\lim }\limits _{t \rightarrow \infty } \left| {{z_i}(t)} \right|\leqslant & {} \sqrt{2g_i^{\min }} {W^*}, \quad i = 2,3, \ldots ,n, \end{aligned}$$
(66b)
$$\begin{aligned} \mathop {\lim }\limits _{t \rightarrow \infty } \left| {{{{\hat{\theta }} }_i}(t)} \right|\leqslant & {} \left| {{\theta _i}} \right| + \sqrt{2{\eta _i}} {W^*}, \quad i = 1,2, \ldots ,n.\nonumber \\ \end{aligned}$$
(66c)

Further, for \(i = 2,3, \ldots ,n\), it can be deduced from (43) and (66) that

$$\begin{aligned}&\mathop {\lim }\limits _{t \rightarrow \infty } \left| {{x_i}(t)} \right| \leqslant \sqrt{2g_i^{\min }} {W^*} \nonumber \\&\quad +\, \sqrt{2g_{i - 1}^{\min }} {W^*}\Bigg (\sqrt{1 + {{\left( \left| {{\theta _{i - 1}}} \right| + \sqrt{2{\eta _{i - 1}}} {W^*}\right) }^2}} \nonumber \\&\quad +\, {c_{i - 1}}\Bigg ). \end{aligned}$$
(67)

Finally, from Assumption 3 and (66), \(z_0(t)\) remains bounded as \(t\rightarrow +\infty \). This completes the proof of Theorem 1.
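Before moving on, the contraction mechanism behind (62)–(64) can be checked numerically. The following minimal sketch iterates the scalar recursion \(W_{k+1} = {Q^*}({T_k}){W_k} + {\sigma _{3k}}/{\sigma _1}\) under randomly chosen admissible sampling periods; every constant in it (\({\tilde{c}}\), \(\alpha \), \({\lambda _0}\), \({d^*} + {\bar{d}}\), the initial value) is a hypothetical placeholder chosen only for illustration, not a value prescribed by the analysis.

```python
import math
import random

# Hypothetical illustrative constants (placeholders, not from the paper).
c_tilde = 2.0      # decay rate \tilde{c}
alpha   = 1.5      # growth constant \alpha (defined below (47))
lam0    = 3.0      # \lambda_0 chosen to satisfy (55), lam0 >= 1
d_sum   = 0.4      # d* + d_bar

T_max  = (1.0 / alpha) * math.log(1.0 + 1.0 / lam0)  # bound (56)
sigma1 = 0.5 * c_tilde                               # sigma_1 in (57)

W = 5.0            # W(t_0), assumed to satisfy W(t_0) <= sqrt(p)
for k in range(60):
    Tk = random.uniform(0.5 * T_max, T_max)          # non-periodic sampling
    Q_star  = 0.5 + 0.5 * math.exp(-sigma1 * Tk)     # Q*(T_k) in (61)
    sigma3k = 0.25 * c_tilde * (math.exp(alpha * Tk) - 1.0) + 0.5 * d_sum
    W = Q_star * W + sigma3k / sigma1                # recursion (62)

print(W)  # settles near an ultimate bound of the form (64)
```

Since \({Q^*}({T_k}) < 1\), the iteration contracts geometrically, and \(W\) settles at a residual level matching the right-hand side of (64).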

Remark 7

Compared with the periodic sampling considered in the existing literature [18,19,20,21,22,23,24], the sampling period in this paper can be chosen freely below an allowable bound [see (56)]. This means that the designer can increase the sampling frequency in the initial running stage to stabilize the system as quickly as possible, and then decrease the sampling frequency to reduce the computational burden once the system settles; thus, for real systems, non-periodic sampling offers more flexibility than periodic sampling.

Remark 8

For nonlinear system (1), the adaptive sampled-data controller can be constructed as follows:

Step 1: Determine the constant \(\alpha \) from the parameters \(\eta _i>0, i= 1,2, \ldots ,n\), choose \({\lambda _0}\) satisfying (55), and then work out the allowable sampling period \({T_k}, k = 0,1, \ldots , + \infty \), from (56); a minimal numerical sketch of this step is given after this remark.

Step 2: The adaptive sampled-data controller (43) can be constructed with the design parameters \(c_i>0, i= 1,2, \ldots ,n\), and the sampled values of the system states.
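As a minimal sketch of Step 1, the snippet below computes \(\alpha = 2n\max \{ \sqrt{{\eta _1}} , \ldots ,\sqrt{{\eta _n}} \}\), a \({\lambda _0}\) satisfying (55), and the resulting sampling-period bound (56). The value of \({\beta _0}\) is treated as a given input, since it depends on constants (\({\tilde{c}}\), \({d^*}\), \({\bar{d}}\), \(p\)) fixed by the stability analysis; the function name and the numerical values are illustrative assumptions.

```python
import math

def sampling_period_bound(etas, beta0):
    """Allowable sampling-period bound from (55)-(56).

    etas  : adaptive gains eta_1, ..., eta_n (design parameters)
    beta0 : the constant beta_0 appearing in (55), assumed given
    """
    n = len(etas)
    alpha = 2 * n * max(math.sqrt(e) for e in etas)    # alpha below (47)
    lam0 = max(beta0, 1.0)                             # condition (55)
    return (1.0 / alpha) * math.log(1.0 + 1.0 / lam0)  # bound (56)

# Hypothetical inputs: eta_i as in Example 1, beta0 chosen arbitrarily.
T_max = sampling_period_bound([0.3, 0.4], beta0=2.5)
print(T_max)  # any sampling period T_k in (0, T_max] is admissible
```

In line with Remark 7, the periods \({T_k}\) may then be scheduled non-uniformly, provided each stays within the computed bound.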

Remark 9

Motivated by the existing literature on sampled-data control, such as [41,42,43,44,45,46,47], the proposed sampled-data control method can be extended to solve the stabilization problem of Markovian jump systems in the form of (1); furthermore, by choosing an appropriate Lyapunov–Krasovskii functional, it can also be extended to solve the stabilization problem of systems in the form of (1) with time-varying delays.

5 Simulation examples

In this section, two examples are provided to show the effectiveness of the proposed results.

Example 1

Consider the following second-order nonlinear system:

$$\begin{aligned} \left\{ \begin{array}{l} {{\dot{z}_0 = - 2z_0 + x_1^2\sin ({x_1}),}}\\ {{\dot{x}}_1} = (\sin ({x_1}) + 2){x_2} \\ \qquad \quad +\, \sin (x_1^2)\cos ({x_1}) + \sin ({x_1}) + {\varDelta _1}({x_1},z_0),\\ {{\dot{x}}_2} = (\sin ({x_1}{x_2}) + 3)u \\ \qquad \quad +\, \sin ({x_1}{x_2}) + \sin ({x_1})\cos ({x_2}) + {\varDelta _2}({{{\bar{x}}}_2},z_0),\\ y = {x_1}, \end{array} \right. \nonumber \\ \end{aligned}$$
(68)

where \({\varDelta _1}({x_1},z_0) = {z_0^2}\sin (z_0) + {\cos ^2}(x_1^2){\sin ^2}(z_0)\) and \({\varDelta _2}({{{\bar{x}}}_2},z_0) = {z_0^2}\cos (z_0)+\sin ({x_1^2}{x_2^2})\cos ({x_2})\).

Fig. 1 Trajectories of \(x_1\) (solid line) and \(x_2\) (dashed line)

Fig. 2 Trajectories of \({\mu _1}\) (solid line) and \({\mu _2}\) (dashed line)

From (43), for \(\forall t \in [{t_k},{t_{k + 1}})\), we construct the sampled-data controller as

$$\begin{aligned} u(t)= & {} -\, {x_1}({t_k})\left( \sqrt{1 + \mu _2^2({t_k})} + {c_2}\right) \nonumber \\&\times \,\left( \sqrt{1 + \mu _1^2({t_k})} + {c_1}\right) \nonumber \\&-\, {x_2}({t_k})\left( \displaystyle \sqrt{1 + \mu _2^2({t_k})} + {c_2}\right) , \end{aligned}$$
(69)

where \({c_1} = 0.2, {c_2} = 0.3\), and \(\mu _1(t_k), \mu _2(t_k)\) can be obtained from the following equations:

$$\begin{aligned} \left\{ \begin{array}{l} {{\dot{\mu }} _1}(t) = {\eta _1}x_1^2({t_k}) -{\eta _1} {\mu _1}(t),\\ {{\dot{\mu }} _2}(t) = {\eta _2}\varrho _2^2({t_k}) - {\eta _2}{\mu _2}(t), \quad \forall t \in [{t_k},{t_{k + 1}}), \end{array} \right. \end{aligned}$$
(70)

where \({\eta _1} = 0.3, {\eta _2} = 0.4\) and \(\varrho _2({t_k}) = {x_2}({t_k}) + {x_1}({t_k})\displaystyle \sqrt{1 + \mu _1^2({t_k})} + {c_1}{x_1}({t_k})\).

Simulation results are shown in Figs. 1, 2, 3 and 4, where \(x(0) = {[0.3, - 0.2]^\mathrm{T}}, {\mu _1}(0) = 0.1, {\mu _2}(0) = 0.2\) and \(z_0(0) = 0.3\); the sampling period is chosen randomly in the interval \((0,0.08\hbox { s})\). It can be seen from Fig. 1 that all states of the resulting closed-loop system converge to a neighborhood of the origin, which demonstrates the effectiveness of the proposed sampled-data control method.
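For completeness, a minimal forward-Euler sketch of the closed loop (68)–(70) is given below; the integration step, the simulation horizon, and the lower bound used when drawing the random sampling periods are illustrative choices, not values prescribed by the example.

```python
import math
import random

# Forward-Euler simulation of Example 1: plant (68), sampled-data
# controller (69), adaptive laws (70). Step size and horizon are
# illustrative choices.
h, T_end = 1e-4, 20.0
c1, c2, eta1, eta2 = 0.2, 0.3, 0.3, 0.4

z0, x1, x2 = 0.3, 0.3, -0.2           # initial conditions from the text
mu1, mu2 = 0.1, 0.2
t, t_next, u = 0.0, 0.0, 0.0
x1k = rho2k = 0.0

while t < T_end:
    if t >= t_next:                                    # sampling instant t_k
        x1k, x2k, mu1k, mu2k = x1, x2, mu1, mu2
        rho2k = x2k + x1k * (math.sqrt(1 + mu1k**2) + c1)
        u = (-x1k * (math.sqrt(1 + mu2k**2) + c2)
                  * (math.sqrt(1 + mu1k**2) + c1)
             - x2k * (math.sqrt(1 + mu2k**2) + c2))    # controller (69)
        t_next = t + random.uniform(1e-3, 0.08)        # period in (0, 0.08 s)

    # Plant (68) with the disturbances Delta_1, Delta_2 given above.
    d1 = z0**2 * math.sin(z0) + math.cos(x1**2) ** 2 * math.sin(z0) ** 2
    d2 = z0**2 * math.cos(z0) + math.sin(x1**2 * x2**2) * math.cos(x2)
    dz0 = -2 * z0 + x1**2 * math.sin(x1)
    dx1 = (math.sin(x1) + 2) * x2 + math.sin(x1**2) * math.cos(x1) \
          + math.sin(x1) + d1
    dx2 = (math.sin(x1 * x2) + 3) * u + math.sin(x1 * x2) \
          + math.sin(x1) * math.cos(x2) + d2
    dmu1 = eta1 * x1k**2 - eta1 * mu1                  # adaptive laws (70)
    dmu2 = eta2 * rho2k**2 - eta2 * mu2
    z0, x1, x2 = z0 + h * dz0, x1 + h * dx1, x2 + h * dx2
    mu1, mu2 = mu1 + h * dmu1, mu2 + h * dmu2
    t += h

print(x1, x2)  # both should settle near the origin, as in Fig. 1
```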

Fig. 3 Trajectory of \(z_0\)

Fig. 4 Control input u

Example 2

In this example, a popular benchmark application is considered, namely the stabilization of a one-link manipulator actuated by a brushed DC (BDC) motor. The dynamics of the one-link manipulator actuated by a BDC motor can be expressed as follows [48]:

$$\begin{aligned} \left\{ \begin{array}{l} {{\dot{z}_0(t) = - z_0(t) + 2{q^2}(t)\cos (q(t)),}}\\ D\ddot{q}(t) + B\dot{q}(t) + N\sin (q(t)) \\ \quad = I(t) + {\varDelta _{I1}}(q(t),\dot{q}(t),z_0(t)),\\ M\dot{I}(t) = - QI(t) - {K_m}\dot{q}(t) + V_E(t)\\ \qquad \qquad \quad +\, {\varDelta _{I2}}(q(t),\dot{q}(t),I(t),z_0(t)), \end{array} \right. \end{aligned}$$
(71)

where \(q(t),\dot{q}(t),\) and \(\ddot{q}(t)\) are the link angular position, velocity, and acceleration, respectively; I(t) is the motor current; \({\varDelta _{I1}}( \cdot )\) is the additive bounded disturbance; \({\varDelta _{I2}}( \cdot )\) is the additive bounded voltage disturbance; \(V_E\) is the input control voltage; M is the armature inductance; Q is the armature resistance; \({K_m}\) is the back-emf coefficient; \(z_0\) is an unmeasured state;

$$\begin{aligned} \left\{ \begin{array}{l} D = \displaystyle \frac{J}{{{K_\tau }}} + \displaystyle \frac{{mL_0^2}}{{3{K_\tau }}} + \displaystyle \frac{{2{M_0}R_0^2}}{{5{K_\tau }}},\\ N = \displaystyle \frac{{m{L_0}G}}{{2{K_\tau }}} + \displaystyle \frac{{{M_0}{L_0}G}}{{{K_\tau }}}, B = \displaystyle \frac{{{B_0}}}{{{K_\tau }}}, \end{array} \right. \end{aligned}$$

J is the rotor inertia; \({R_0}\) is the radius of the load; \({B_0}\) is the coefficient of viscous friction at the joint; m is the link mass; \({L_0}\) is the link length; G is the gravity coefficient; and \({K_\tau }\) is the coefficient that characterizes the electromechanical conversion of armature current to torque; \(q(t),\dot{q}(t)\) and I(t) can only be measured at the sampling instants.

In the simulation, the physical parameters are selected as \(J = 1.252 \,\times \, {10^{ - 3}}\hbox { kg} \, {\hbox {m}^2}, M = 15.0 \,\times \, {10^{ - 3}}\hbox { H}, {L_0} = 0.205\hbox { m}, Q = 3\;{\Omega }, m= 0.506\hbox { kg}, {M_0} = 0.423\hbox { kg}, {B_0} = 15.25 \times {10^{ - 3}}\hbox { N} \, \hbox {m} \, \hbox {s/rad}, {K_\tau } = 1.5 \times {10^{ - 3}}\hbox { N} \, \hbox {m/A}\), and \({K_m} = 0.7 \hbox { N} \, \hbox {m/A}\).
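As a quick numerical check, the lumped coefficients \(D\), \(N\) and \(B\) can be evaluated from the bracketed formulas above; note that the gravity constant \(G\) and the load radius \({R_0}\) are not listed among the selected parameters, so the values used for them below are assumptions made only for illustration.

```python
# Evaluate the lumped coefficients D, N, B of model (71).
# G = 9.8 m/s^2 and R0 = 0.023 m are assumed values, since neither
# appears in the list of selected parameters.
J, M0, m = 1.252e-3, 0.423, 0.506
L0, B0, K_tau = 0.205, 15.25e-3, 1.5e-3
G, R0 = 9.8, 0.023

D = J / K_tau + m * L0**2 / (3 * K_tau) + 2 * M0 * R0**2 / (5 * K_tau)
N = m * L0 * G / (2 * K_tau) + M0 * L0 * G / K_tau
B = B0 / K_tau

print(D, N, B)
```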

Now, setting \({x_1}(t) = q(t), {x_2}(t) = \dot{q}(t)\) and \({x_3}(t) = I(t)\), (71) can be expressed as

$$\begin{aligned} \left\{ \begin{array}{l} {{\dot{z}_0(t) = -\, z_0(t) + 2x_1^2(t)\cos ({x_1}(t)),}}\\ {{\dot{x}}_1}(t) = {x_2}(t),\\ {{\dot{x}}_2}(t) = \displaystyle \frac{1}{D}{x_3}(t) - \displaystyle \frac{B}{D}{x_2}(t) \\ \qquad \qquad - \displaystyle \frac{{N\sin ({x_1}(t))}}{D} + \displaystyle \frac{{\varDelta _{I1}}({{{\bar{x}}}_2}(t),z_0(t))}{D},\\ {{\dot{x}}_3}(t) = \displaystyle \frac{1}{M}{V_E}(t) - \displaystyle \frac{Q}{M}{x_3}(t) \\ \qquad \qquad - \displaystyle \frac{{{K_m}}}{M}{x_2}(t) + \displaystyle \frac{{\varDelta _{I2}}({{{\bar{x}}}_3}(t),z_0(t))}{M}, \end{array} \right. \end{aligned}$$
(72)

where \({\varDelta _{I1}}({{{\bar{x}}}_2},z_0) = {z_0^2}\sin ({x_2}z_0) + \sin (x_1^2x_2^2), {\varDelta _{I2}}({{{\bar{x}}}_3},z_0) = {z_0^2}\cos ({x_1}{x_2}){\sin ^2}({x_3}) + \sin ({x_1}{x_2})\).

To solve the stabilization problem of system (72), we use the following sampled-data controller:

$$\begin{aligned} \left\{ \begin{array}{l} {\bar{\alpha }} _1({t_k}) = - {x_1}({t_k})\displaystyle \sqrt{1 + \mu _1^2({t_k})} - {c_1}{x_1}({t_k}),\\ {\bar{\alpha }} _2({t_k}) = - {\bar{z}}_2({t_k})\displaystyle \sqrt{1 + \mu _2^2({t_k})} - {c_2}{\bar{z}}_2({t_k}),\\ V_E({t}) = - {\bar{z}}_3({t_k})\displaystyle \sqrt{1 + \mu _3^2({t_k})} - {c_3}{\bar{z}}_3({t_k}),\\ \quad \forall t \in [{t_k},{t_{k + 1}}), \end{array} \right. \end{aligned}$$
(73)

where \({\bar{z}}_i = x_i - {\bar{\alpha }} _{i - 1},i = 2,3\), and \({c_1} = 0.2,{c_2} = 0.3,{c_3} = 0.1\).

According to (43), the adaptive laws are taken as

$$\begin{aligned} \left\{ \begin{array}{l} {{\dot{\mu }} _1}(t) = {\eta _1}x_1^2({t_k}) - {\eta _1}{\mu _1}(t),\\ {{\dot{\mu }} _2}(t) = {\eta _2}\varrho _2^2({t_k}) - {\eta _2}{\mu _2}(t),\\ {{\dot{\mu }} _3}(t) = {\eta _3}\varrho _3^2({t_k}) - {\eta _3}{\mu _3}(t),\quad \forall t \in [{t_k},{t_{k + 1}}), \end{array} \right. \end{aligned}$$
(74)

where \({\eta _1} = 0.3, {\eta _2} = 0.4,{\eta _3} = 0.2\) and

$$\begin{aligned} \left\{ \begin{array}{l} {\varrho _1}({t_k}) = {x_1}({t_k}),\\ {\varrho _2}({t_k}) = {x_2}({t_k}) + {\varrho _1}({t_k})\displaystyle \sqrt{1 + \mu _1^2({t_k})} + {\varrho _1}({t_k}){c_1},\\ {\varrho _3}({t_k}) = {x_3}({t_k}) + {\varrho _2}({t_k})\displaystyle \sqrt{1 + \mu _2^2({t_k})} + {\varrho _2}({t_k}){c_2}. \end{array} \right. \end{aligned}$$
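The recursion defining \({\varrho _i}({t_k})\) and the controller (73) can be implemented directly from the sampled states, mirroring Step 2 of Remark 8; the following sketch is illustrative (the function name and interface are assumptions), and it exploits the fact that \({\varrho _i}({t_k}) = {\bar{z}}_i({t_k})\) by construction.

```python
import math

def sampled_control(x_k, mu_k, c):
    """Compute V_E(t_k) via the recursion in (73).

    x_k  : sampled states x_1(t_k), ..., x_n(t_k)
    mu_k : adaptive states mu_1(t_k), ..., mu_n(t_k)
    c    : design gains c_1, ..., c_n
    Returns (V_E, rho) with rho = [rho_1(t_k), ..., rho_n(t_k)].
    """
    rho = [x_k[0]]                             # rho_1 = x_1(t_k)
    for i in range(1, len(x_k)):               # rho_i = x_i - alpha_bar_{i-1}
        gain = math.sqrt(1 + mu_k[i - 1] ** 2) + c[i - 1]
        rho.append(x_k[i] + rho[i - 1] * gain)
    gain_n = math.sqrt(1 + mu_k[-1] ** 2) + c[-1]
    return -rho[-1] * gain_n, rho              # V_E(t) in (73)

# Gains from this example; the sampled state values are illustrative.
V_E, rho = sampled_control([0.3, -0.2, 0.2], [0.1, 0.2, 0.1], [0.2, 0.3, 0.1])
```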

Simulation results are shown in Figs. 5, 6, 7 and 8 with initial conditions \(x(0) = {[0.3, - 0.2,} {0.2]^\mathrm{T}}, \mu _1(0) = 0.1, \mu _2(0) = 0.2, \mu _3(0) = 0.1\) and \(z_0(0) = 0.3\); the sampling period is chosen randomly in the interval \((0,0.06 \hbox { s})\). It can be seen from Fig. 5 that the link angular position q(t), velocity \(\dot{q}(t)\) and motor current I(t) of system (71) converge to a neighborhood of the origin, which shows the effectiveness of the proposed methods.

Fig. 5 Trajectories of q (solid line), \(\dot{q}\) (dashed line), and I (dotted line)

Fig. 6 Trajectories of \(\mu _1\) (solid line), \(\mu _2\) (dashed line), and \(\mu _3\) (dotted line)

Fig. 7 Trajectory of \(z_0\)

Fig. 8 Control input \(V_E\)

6 Conclusions

In this paper, an adaptive sampled-data control technique was developed to practically stabilize a class of nonlinear systems in strict-feedback form whose uncertain functions are not required to satisfy any linear growth condition. During the controller design, neural networks were employed to approximate the unknown nonlinear functions, and an adaptive sampled-data controller was then constructed by utilizing the virtual control laws and the "virtual adaptive laws" obtained via the backstepping technique. With the help of the Gronwall–Bellman inequality, it was demonstrated that the proposed sampled-data controller renders all states of the resulting closed-loop system semi-globally uniformly ultimately bounded under an appropriate choice of the sampling period. Finally, two examples were provided to show the validity of the obtained results. Extending the proposed results to switched nonlinear systems and multi-agent systems is left for future work.