1 Introduction

Since the excellent universal approximation ability of neural networks (NNs) was proven [1], the NN has become an active research topic in the control domain. In particular, adaptive NN control has attracted much attention and has been shown to be particularly useful for the control of highly uncertain, nonlinear, and complex systems [2, 3]. Through years of progress, many adaptive NN control approaches have been proposed based on Lyapunov stability theory and the backstepping technique [4–6]. In [5], an adaptive backstepping control was proposed for uncertain strict-feedback nonlinear systems by using RBF NNs with fixed centers and widths.

In recent decades, adaptive control of discrete-time nonlinear systems has attracted much research interest. In contrast to the above-mentioned continuous-time nonlinear systems, Lyapunov design for discrete-time nonlinear systems is much more intractable, because the linearity property of the derivative of a Lyapunov function in continuous time is not present in the difference of a Lyapunov function in discrete time [7]. Hence, many control schemes for continuous-time systems may not be suitable for discrete-time systems. For example, a noncausal problem is encountered if backstepping design is applied directly to discrete-time systems in lower triangular form, such as the strict-feedback form [8]. To solve the noncausal problem for general strict-feedback discrete-time systems, a system transformation using prediction functions of future states was studied in [9], in which adaptive NN backstepping design was applied to the transformed strict-feedback discrete-time system without the noncausal problem. Through years of progress, many adaptive NN control schemes have been developed for discrete-time nonlinear systems in strict-feedback form via backstepping [10–18]; for instance, based on backstepping design, an adaptive NN control was developed for a class of discrete-time nonlinear strict-feedback systems with unknown bounded disturbances in [12]. In [15], to deal with systems with unknown control directions, an adaptive robust control was studied for a class of discrete-time nonlinear strict-feedback systems based on the discrete Nussbaum gain.

However, for the aforementioned discrete-time nonlinear strict-feedback systems [8–18], the complexity growing problem and the explosion of learning parameters universally exist in the conventional adaptive NN backstepping design procedure. That is, the complexity of the controller grows drastically and a large number of parameters need to be updated online as the order of the discrete-time system increases. As a result, the online computational burden is very heavy and the learning time tends to be unacceptably long. In recent years, some control methods have been developed to partly eliminate these two drawbacks.

On the one hand, to solve the complexity growing problem, a dynamic surface control (DSC) technique was proposed in [19, 20] by introducing first-order filters of the synthetic inputs at each intermediate step of the traditional backstepping approach. However, the DSC technique can only alleviate the complexity growing problem to a certain degree. To eliminate the complexity growing problem completely, only one NN was used to approximate the lumped unknown function of the system at the last step, as presented for a class of nonlinear strict-feedback systems in [21] and pure-feedback systems in [22]. Afterward, a single neural network (SNN)-approximation-based adaptive control approach was developed for a class of uncertain discrete-time nonlinear strict-feedback systems in [23], but this method still suffers from the explosion of learning parameters; that is, when an NN is used to approximate the lumped unknown function, a large number of neural network weights need to be updated online.

On the other hand, to address the explosion of learning parameters, a novel adaptive robust tracking control approach with fewer online parameters was proposed for a class of nonlinear systems by combining the backstepping technique and fuzzy logic systems in [24]. In [25, 26], based on the minimal learning parameters (MLP) technique, robust adaptive neural network (or fuzzy) control approaches were presented for a class of strict-feedback nonlinear systems; with these approaches, the number of parameters updated online for each subsystem was reduced to two. Recently, a direct adaptive control algorithm with fewer online updated parameters was developed for a class of discrete-time nonlinear systems in strict-feedback form in [27], but this method still needs many parameters to be adjusted online when the number of state variables of the system increases and still suffers from the complexity growing problem, in particular for high-order systems.

As is well known, many industrial control systems have constraints on their inputs. In practice, the input saturation constraint is one of the most important input constraints, because control actions are usually limited in energy or magnitude, and saturation is a potential problem that degrades control performance, gives rise to undesirable inaccuracy, or even affects system stability. Recently, to analyze and design control systems with saturation nonlinearities, a decentralized adaptive neural control with an auxiliary design system was proposed for a class of interconnected large-scale uncertain nonlinear systems with input saturation in [28]. Afterward, based on Nussbaum gain functions, direct adaptive fuzzy backstepping control approaches were presented for a class of uncertain nonlinear strict-feedback systems with input saturation in [29, 30]. However, these elegant control approaches were studied for continuous-time systems, and owing to the difficulties inherent in discrete-time systems, there has been little research on adaptive control of discrete-time nonlinear systems in the presence of input saturation.

Based on the above observations, in order to eliminate the complexity growing problem, alleviate the explosion of learning parameters, and address the problem of input saturation constraints, a novel adaptive NN control approach is presented in this paper for a class of discrete-time nonlinear systems in strict-feedback form. Compared with existing methods, an adaptive mechanism with a much simpler controller structure and minimal learning parameterization is achieved; hence, the computational burden is lighter.

The main contributions of this paper are as follows:

(i) Based on SNN approximation, the designed controller contains only one actual control law and one adaptive law, so the complexity growing problem is eliminated.

(ii) By combining SNN approximation and the MLP technique, the numbers of NN input variables and weights updated online are decreased drastically, and the number of parameters updated online for the whole system is reduced to only one; hence, the explosion of learning parameters is alleviated significantly.

(iii) An auxiliary design system is employed to analyze the effect of input saturation, and the states of the auxiliary design system are utilized to develop the tracking control.

This paper is organized as follows. Section 2 introduces the discrete-time nonlinear system and the preliminaries that are necessary for the adaptive NN control design. In Sect. 3, by combining SNN approximation and the MLP technique, the adaptive backstepping control design procedure for a class of discrete-time nonlinear strict-feedback systems is carried out. The stability analysis of the closed-loop control system is presented in Sect. 4. In Sect. 5, simulation results for two examples are presented. The paper concludes in Sect. 6.

2 Problem formulation and preliminaries

2.1 Problem formulation

Consider the following single-input and single-output (SISO) discrete-time nonlinear system in strict-feedback form with input saturation:

$$\begin{aligned} \left\{ {\begin{array}{l} x_i (k\!+\!1)\!=\!f_i (\bar{x}_i (k))\!+\!x_{i+1} (k),\;i\!=\!1,2,\ldots ,n\!-\!1, \\ x_n (k+1)=f_n (\bar{x}_n (k))+u(k), \\ y_k =x_1 (k), \\ \end{array}} \right. \end{aligned}$$
(1)

where \(\bar{{x}}_{i} (k)=[x_{1} (k),\ldots ,x_{i} (k)]^{T}\in R^{i},\,i=1,2,\ldots ,n\) and \(y_{k} \in R\) are the state variables and system output, respectively; \(f_{i} (\bar{{x}}_{i} (k)),\,i=1,2,\ldots ,n\) are unknown smooth nonlinear functions; and \(u(k)\in R\) represents the control input with saturation constraint.
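To make the system structure concrete, the following Python sketch steps a generic plant of form (1) one sample forward; the function name and the representation of \(f_{i}\) as a list of callables are illustrative choices made here, not part of the original formulation.

```python
import numpy as np

def step_strict_feedback(x, u, f_list):
    """One step of the strict-feedback plant (1):
    x_i(k+1) = f_i(x_1,...,x_i) + x_{i+1}(k) for i < n, and
    x_n(k+1) = f_n(x_1,...,x_n) + u(k); the output is y(k) = x_1(k)."""
    n = len(x)
    x_next = np.empty(n)
    for i in range(n - 1):
        x_next[i] = f_list[i](x[: i + 1]) + x[i + 1]
    x_next[-1] = f_list[-1](x) + u
    return x_next
```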

In this paper, the input saturation constraint on u(k) is considered as

$$\begin{aligned} -u_{\min } \le u(k)\le u_{\max } \end{aligned}$$
(2)

where \({{u}_{\mathrm{min}}}\) and \({{u}_{\mathrm{max}}}\) are known positive constants, so that \(-u_{\min }\) and \(u_{\max }\) are the lower and upper limits of the input saturation constraint on u(k), respectively. Thus,

$$\begin{aligned} u(k)= & {} {s}{a}{t}({v}(k))\nonumber \\= & {} \left\{ {{\begin{array}{l@{\quad }l} {u}_{\mathrm{max}}&{}\hbox {if} \ {v}(k)>{u}_{\mathrm{max}} \\ {v}(k)&{} \hbox {if} \ -{u}_{\mathrm{min}}\le {v}(k)\le {u}_{\mathrm{max}} \\ -{u}_{\mathrm{min}}&{} \hbox {if} \ {v}(k)<-{u}_{\mathrm{min}} \\ \end{array}}} \right. \end{aligned}$$
(3)

where v(k) is the designed control input of the system.
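For illustration, the saturation map (3) amounts to clipping the designed input to the interval \([-u_{\min },u_{\max }]\); a minimal Python sketch is given below (names are illustrative).

```python
def sat(v, u_min, u_max):
    """Saturation nonlinearity of Eq. (3): u(k) = sat(v(k)) with
    positive limits u_min and u_max."""
    if v > u_max:
        return u_max
    if v < -u_min:
        return -u_min
    return v
```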

The control objective is to design an adaptive NN controller for system (1) such that: (i) All the signals in the closed-loop system are uniformly ultimately bounded (UUB) and (ii) the system output \(y_{k}\) follows the desired reference signal \(y_{d}(k)\).

Assumption 1

The desired reference signal \(y_{d} (k)\in \,\Omega _{y} ,\,\forall k>0\) is smooth, known, and bounded, where \(\Omega _{y} :=\{\chi \left| \chi \right. =x_{1} \}\).

2.2 High-order neural network (HONN)

There are many well-developed approaches for approximating an unknown function. The NN is one of the most frequently employed approximation tools because it has been shown to be capable of universally approximating any unknown continuous function to arbitrary precision. HONNs satisfy the conditions of the Stone–Weierstrass theorem. Because of the high-order interactions between neurons, a HONN can approximate any continuous nonlinear smooth function to any desired accuracy over a compact set. In this paper, a HONN is employed to approximate the lumped unknown function of the system. The structure of the HONN is expressed as follows:

$$\begin{aligned} \varphi (W,z)= & {} W^{T}S(z), \,\,W\hbox { and }S(z)\in R^{l},\nonumber \\ S(z)= & {} [s_{1} (z),s_{2} (z),\ldots ,s_{l} (z)]^{T}, \end{aligned}$$
(4)
$$\begin{aligned} s_{i} (z)=\prod \limits _{j\in I_{i} } {[s(z_{j} )]}^{d_{j} (i)},\, i=1,2,\ldots l, \end{aligned}$$
(5)

where \(z=[z_{1} ,z_{2} ,\ldots ,z_{m} ]^{T}\in \Omega _{z} \subset R^{m}\); the positive integer l denotes the NN node number; \(\{I_{1},\,I_{2},\ldots ,I_{l}\}\) is a collection of l not-ordered subsets of \(\{1,2,\ldots ,m\}\) and \(d_{j}(i)\) are nonnegative integers; W is an adjustable synaptic weight vector; and \(s(z_{j})\) is chosen as the hyperbolic tangent function \(s(z_{j})=(e^{z_{j}}-e^{-z_{j}})\big / (e^{z_{j}}+e^{-z_{j}})\).
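As an illustration of (4)–(5), the sketch below evaluates the HONN output for a given input vector; the index sets \(I_{i}\), the orders \(d_{j}(i)\), and all variable names are arbitrary choices made here for demonstration, not values prescribed by the method.

```python
import numpy as np

def honn_basis(z, index_sets, orders):
    """Basis functions s_i(z) of Eq. (5): products of hyperbolic tangent
    activations s(z_j) raised to nonnegative integer powers d_j(i)."""
    s = np.tanh(z)  # s(z_j) = (e^{z_j} - e^{-z_j}) / (e^{z_j} + e^{-z_j})
    return np.array([np.prod([s[j] ** d for j, d in zip(I_i, d_i)])
                     for I_i, d_i in zip(index_sets, orders)])

def honn_output(W, z, index_sets, orders):
    """HONN output of Eq. (4): phi(W, z) = W^T S(z)."""
    return W @ honn_basis(z, index_sets, orders)

# Illustrative use with m = 3 inputs and l = 4 nodes (hypothetical structure).
z = np.array([0.2, -0.5, 1.0])
index_sets = [(0,), (1,), (0, 2), (1, 2)]   # subsets I_i of the input indices
orders = [(1,), (2,), (1, 1), (1, 2)]       # powers d_j(i) for each subset
W = np.random.uniform(-0.1, 0.1, size=4)    # adjustable weight vector
print(honn_output(W, z, index_sets, orders))
```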

For a smooth function \(\phi (z)\), there exists ideal weight \(W^{*}\) such that the smooth function \(\phi (z)\) can be approximated by an ideal NN on a compact set \(\Omega _{z} \subset R^{m}\):

$$\begin{aligned} \phi (z)=W^{*T}S(z)+\varepsilon _{z}, \end{aligned}$$
(6)

where \(\varepsilon _{z} \) is called the NN approximation error, which is assumed to satisfy \(\left| \varepsilon _{z} \right| \le \varepsilon ^{*}\) for all \(z\in \Omega _{z} \), where \(\varepsilon ^{*}>0\) is an unknown constant. The NN approximation error is a critical quantity, representing the minimum possible deviation of the ideal approximator \(W^{*T}S(z)\) from the unknown smooth function \(\phi (z)\).

The ideal weight vector \(W^{*}\) is an “artificial” quantity required for analytical purposes. \(W^{*}\) is defined as the value of W that minimizes \(\left| {\varepsilon _{z} } \right| \) for all \(z\in \Omega _{z} \subset R^{m}\) in a compact region, i.e.,

$$\begin{aligned} W^{*}\!:=\!\arg \,\mathop {\min }\limits _{W\in R^{l}} \left\{ {\mathop {\sup }\limits _{z\in \Omega _{z} } \left| {\phi (z)-W^{T}S(z)} \right| } \right\} ,\,\Omega _{z} \subset R^{m}\nonumber \\ \end{aligned}$$
(7)

In general, the ideal NN weight \(W^{*}\) is unknown and needs to be estimated. In this paper, we shall consider \(\hat{{W}}\) being the estimate of the ideal NN weight \(W^{*}\).

Assumption 2

On a compact set \(\Omega _{z} \subset R^{m}\), the ideal NN weight \(W^{*}\) satisfies \(\left\| {W^{*}} \right\| \le w_{m} \) where \(w_{m} \) is a positive constant.

Lemma 1

[9] Consider the basis functions of HONN (4) with z being the input vector. The following properties of HONNs will be used in the proof of the closed-loop system stability:

$$\begin{aligned} \lambda _{\max } [S(z)S^{T}(z)]<1,\,S^{T}(z)S(z)<l \end{aligned}$$
(8)

Lemma 2

[24] For any given real continuous function f(x) with \(f(0)=0\), if the continuous function separation technique and the HONN approximation technique are used, then f(x) can be expressed as

$$\begin{aligned} f(x)=\bar{{S}}(x)Ax \end{aligned}$$
(9)

where \(\bar{{S}}(x)\!=\![1,S(x)]\!=\![1,s_{1} (x),s_{2} (x),\ldots ,s_{l} (x)],A^{T}=[\varepsilon ,W^{T}],\,\varepsilon ^{T}=[\varepsilon _{1} ,\varepsilon _{2} ,\ldots ,\varepsilon _{n} ]\) is a vector of the approximation errors, and

$$\begin{aligned} W=\left[ {\begin{array}{cccc} w_{11}^{*} &{} w_{12}^{*} &{} \cdots &{} w_{1n}^{*} \\ w_{21}^{*} &{} w_{22}^{*} &{} \cdots &{} w_{2n}^{*} \\ \vdots &{} \vdots &{} \vdots &{} \vdots \\ w_{l1}^{*} &{} w_{l2}^{*} &{} \cdots &{} w_{ln}^{*} \\ \end{array}} \right] \end{aligned}$$

is a weight matrix.

3 Adaptive backstepping control design

Consider the discrete-time nonlinear system in strict-feedback form described in (1). The causality contradiction is one of the major problems encountered in the discrete-time domain when a controller is constructed for a general strict-feedback nonlinear system through backstepping. However, this problem can be avoided if the system equation is transformed into a special form that is suitable for backstepping design. If the original system (1) is viewed as a one-step-ahead predictor, it can be transformed into an equivalent maximum n-step-ahead predictor, which predicts the future states \(x_{1} (k+n),\,x_{2} (k+n-1),\ldots , \,x_{n} (k+1)\); the causality contradiction is then avoided when the controller is constructed based on the maximum n-step-ahead prediction. Following the transformation process in [9], system (1) is equivalent to

$$\begin{aligned}&x_{i} (k+n-i+1)=F_{i} (\bar{{x}}_{n} (k))+x_{i+1} (k+n-i), \nonumber \\&\quad i=1,2,\ldots ,n-1, \nonumber \\&x_{n} (k+1)=f_{n} (\bar{{x}}_{n} (k))+u(k), \nonumber \\&\quad y_{k} =x_{1} (k), \end{aligned}$$
(10)

where \(F_{i} (\bar{{x}}_{n} (k))\) is an unknown function depending on \(f_{j} (\cdot ),\,j=1,\ldots ,n-1\). It should be noted that the functions \(F_{i} (\bar{{x}}_{n} (k)),\;i=1,2,\ldots ,n-1\) are highly nonlinear.
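As a simple illustration (not part of the original derivation), take the second-order case \(n=2\) of system (1); iterating the first subsystem one step forward and substituting \(x_{1}(k+1)=f_{1}(x_{1}(k))+x_{2}(k)\) gives the two-step-ahead predictor corresponding to (10):

$$\begin{aligned} x_{1} (k+2)= & {} f_{1} (x_{1} (k+1))+x_{2} (k+1) \\= & {} \underbrace{f_{1} \bigl (f_{1} (x_{1} (k))+x_{2} (k)\bigr )}_{F_{1} (\bar{{x}}_{2} (k))}+x_{2} (k+1) \end{aligned}$$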

For convenience of analysis and discussion, for \(i=1,\ldots ,n-1\), let

$$\begin{aligned} F_{i} (k)=F_{i} (\bar{{x}}_{n} (k)),\,\,f_{n} (k)=f_{n} (\bar{{x}}_{n} (k)) \end{aligned}$$

which are functions of the system states \(\bar{{x}}_{n} (k)\) at the kth step.

Step 1: For \(\eta _{1} (k)=x_{1} (k)-y_{d} (k)\), its nth difference is given by

$$\begin{aligned} \eta _{1} (k+n)= & {} x_{1} (k+n)-y_{d} (k+n) \nonumber \\= & {} F_{1} (k)+x_{2} (k+n-1)-y_{d} (k+n) \end{aligned}$$
(11)

Consider \(x_{2} (k+n-1)\) as a virtual control for (11) and \(\alpha _{2} (k+n-1)\) as the ideal intermediate function. By introducing the error variable

$$\begin{aligned} \eta _{2} (k+n-1)=x_{2} (k+n-1)-\alpha _{2} (k+n-1) \end{aligned}$$
(12)

if we choose

$$\begin{aligned} \alpha _{2} (k+n-1)=-F_{1} (k)+y_{d} (k+n) \end{aligned}$$
(13)

it is obvious that \(\eta _{1} (k+n)=0\). Substituting (13) into (12) leads to

$$\begin{aligned} x_{2} (k\!+\!n-1)\!=\!\eta _{2} (k+n-1)\!-\!F_{1} (k)+y_{d} (k+n)\nonumber \\ \end{aligned}$$
(14)

Substituting (14) into (11), the error Eq. (11) can be rewritten as

$$\begin{aligned} \eta _{1} (k+n)=\eta _{2} (k+n-1) \end{aligned}$$
(15)

Step 2: As defined before, \(\eta _{2} (k)=x_{2} (k)-\alpha _{2} (k)\), and its \((n-1)\)th difference is given by

$$\begin{aligned} \eta _{2} (k+n-1)= & {} x_{2} (k+n-1)-\alpha _{2} (k+n-1)\nonumber \\= & {} x_{3} (k+n-2)+F_{2} (k)+F_{1} (k) \nonumber \\&-y_{d} (k+n) \nonumber \\= & {} x_{3} (k+n-2)+F_{2}^{*} (k)-y_{d} (k+n)\nonumber \\ \end{aligned}$$
(16)

where \(F_{2}^{*}(k)=F_{2} (k)+F_{1} (k)\) is an unknown smooth function.

Similarly, consider \(x_{3} (k+n-2)\) as a virtual control for (16) and \(\alpha _{3} (k+n-2)\) as the ideal intermediate function. By introducing the error variable

$$\begin{aligned} \eta _{3} (k+n-2)=x_{3} (k+n-2)-\alpha _{3} (k+n-2) \end{aligned}$$
(17)

if we choose

$$\begin{aligned} \alpha _{3} (k+n-2)=-F_{2}^{*} (k)+y_{d} (k+n) \end{aligned}$$
(18)

it is obvious that \(\eta _{2} (k+n-1)=0\). Substituting (18) into (17) leads to

$$\begin{aligned} x_{3} (k+n-2)= & {} \eta _{3} (k+n-2)+\alpha _{3} (k+n-2) \nonumber \\= & {} \eta _{3} (k+n-2)-F_{2}^{*} (k)+y_{d} (k+n)\nonumber \\ \end{aligned}$$
(19)

Substituting (19) into (16), the error Eq. (16) is rewritten as

$$\begin{aligned} \eta _{2} (k+n-1)=\eta _{3} (k+n-2) \end{aligned}$$
(20)

Step i: Following the same procedure as in Step 2, for \(\eta _{i} (k)=x_{i} (k)-\alpha _{i} (k)\), its \((n-i+1)\)th difference is given by

$$\begin{aligned}&\eta _{i} (k+n-i+1)\nonumber \\&\quad =x_{i} (k+n-i+1)-\alpha _{i} (k+n-i+1) \nonumber \\&\quad =x_{i+1} (k+n-i)+F_{i} (k)+F_{i-1}^{*} (k) \nonumber \\&\qquad -y_{d} (k+n) \nonumber \\&\quad =x_{i+1} (k+n-i)+F_{i}^{*} (k)-y_{d} (k+n) \end{aligned}$$
(21)

where \(F_{i}^{*} (k)=F_{i} (k)+F_{i-1}^{*} (k)\) is an unknown smooth function.

Consider \(x_{i+1} (k+n-i)\) as a virtual control for (21) and \(\alpha _{i+1} (k+n-i)\) as the ideal intermediate function. By introducing the error variable

$$\begin{aligned} \eta _{i+1} (k+n-i)=x_{i+1} (k+n-i)-\alpha _{i+1} (k+n-i)\nonumber \\ \end{aligned}$$
(22)

if we choose

$$\begin{aligned} \alpha _{i+1} (k+n-i)=-F_{i}^{*} (k)+y_{d} (k+n) \end{aligned}$$
(23)

it is obvious that \(\eta _{i} (k+n-i+1)=0\). Substituting (23) into (22) leads to

$$\begin{aligned} x_{i+1} (k+n-i)= & {} \eta _{i+1} (k+n-i)\!+\!\alpha _{i+1} (k+n-i)\nonumber \\= & {} \!\eta _{i+1} (k\!+\!n\!-\!i)\!-\!F_{i}^{*} (k)\!+\!y_{d} (k\!+\!n)\nonumber \\ \end{aligned}$$
(24)

Substituting (24) into (21), the error Eq. (21) is rewritten as

$$\begin{aligned} \eta _{i} (k+n-i+1)=\eta _{i+1} (k+n-i) \end{aligned}$$
(25)

Step n: For \(\eta _{n} (k)=x_{n} (k)-\alpha _{n} (k)\), its first difference is given by

$$\begin{aligned} \eta _{n} (k+1)= & {} x_{n} (k+1)-\alpha _{n} (k+1) \nonumber \\= & {} f_{n} (k)+u(k)+F_{n-1}^{*} (k)-y_{d} (k+n) \nonumber \\= & {} u(k)+F_{n}^{*} (k)-y_{d} (k+n) \end{aligned}$$
(26)

where \(F_{n}^{*} (k)=f_{n} (k)+F_{n-1}^{*} (k)\) is an unknown smooth function.

Since \(F_{n}^{*} (k)\) is unknown and is a function of the system state \(\bar{{x}}_{n} (k)\), a HONN can be used to approximate it. According to Lemma 2, a suitable HONN \(\hat{{f}}(x,A)\) with input vector \(\bar{{x}}_{n} \in \Omega _{\bar{{x}}_{n} } \), where \(\Omega _{\bar{{x}}_{n} } \) is a compact set and A is a matrix containing the unknown weights, is employed here to approximate the unknown function. Then, the unknown function \(F_{n}^{*} (k)\) can be approximated by the HONN as follows:

$$\begin{aligned} F_{n}^{*} (k)= & {} S(z(k))A\bar{{x}}_{n}^{T} (k)+\varepsilon \nonumber \\= & {} S(z(k))A\left[ {\begin{array}{l} \eta _{1} (k)+y_{d} (k) \\ \eta _{2} (k)+\alpha _{2} (k) \\ \quad \quad \;\;\vdots \\ \eta _{n} (k)+\alpha _{n} (k) \\ \end{array}} \right] ^{T}+\varepsilon \end{aligned}$$
(27)
$$\begin{aligned} z(k)= & {} [\bar{{x}}_{n}^{T} (k),y_{d} (k+n)]^{T}\in \Omega _{Z} \subset R^{n+1} \end{aligned}$$
(28)

where \(\varepsilon \) is the approximation error. Let \(b=\left\| A \right\| \), the normalized term \(A^{m}=A / {\left\| A \right\| }=A/b\), and \(\omega =A^{m} \times \bar{{\eta }}_{n} (k)\) with \(\bar{{\eta }}_{n} (k)=[\eta _{1} (k),\eta _{2} (k),\ldots ,\eta _{n} (k)]^{T}\). Then, one has

$$\begin{aligned} F_{n}^{*} (k)=bS(z(k))\omega +{d}' \end{aligned}$$
(29)

where \({d}'=S(z(k))A\left[ {y_{d} (k)+\sum \nolimits _{j=2}^n {\alpha _{j} (k)} } \right] +\varepsilon \). By noticing the bound of \(\varepsilon \), one has

$$\begin{aligned} \left\| {d}' \right\|\le & {} \left\| {S(z(k))A\left[ {y_{d} (k)+\sum \nolimits _{j=2}^n {\alpha _{j} (k)} } \right] +\varepsilon ^{*}} \right\| \nonumber \\\le & {} c_{\min } \theta \psi (z(k)) \end{aligned}$$
(30)

where \(\theta =c_{\min }^{-1} \max \Big ( \left\| {Ay_{d} (k)} \right\| ,\left\| {\sum \nolimits _{j=2}^n {A\alpha _{j} (k)} } \right\| ,\left\| {\varepsilon ^{*}} \right\| \Big )\), and \(\psi (z(k))=1+\left\| {S(z(k))} \right\| \). It is clear that \(\left\| {d}' \right\| \) is bounded, because \(\theta \) is bounded owing to the boundedness of \(y_{d} (k)\) and \(\varepsilon ^{*}\).

For convenience of analyzing the constraint effect of the input saturation, the following auxiliary design system is introduced:

$$\begin{aligned} e(k+1)=\left\{ {\begin{array}{l} -k_{1} e(k)+\Delta u,\quad \left| {e(k)} \right| \ge \mu \\ 0,\quad \left| {e(k)} \right| < \mu \\ \end{array}} \right. \end{aligned}$$
(31)

where \(k_{1} =k_{2} +\frac{\left| {\eta _{1} (k)\Delta u} \right| +0.5\Delta u^{2}}{e^{2}(k)}>0\), \(k_{2} >0\), \(\Delta u=u(k)-v(k)\) is the control input error, and \(\mu \) is a small positive design constant; e(k) is a variable of the auxiliary design system introduced to ease the analysis of the effect of the input saturation; and the control law v(k) subject to input saturation will be designed below.
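A minimal Python sketch of one update of the auxiliary design system (31) is given below, assuming the tracking error \(\eta _{1} (k)\), the input error \(\Delta u\), and the design constants \(k_{2}\) and \(\mu \) are available at step k (function and variable names are illustrative).

```python
def auxiliary_update(e, eta1, delta_u, k2, mu):
    """One step of the auxiliary design system (31), which propagates the
    effect of the input saturation error Delta u = u(k) - v(k)."""
    if abs(e) >= mu:
        # time-varying gain k1 = k2 + (|eta1 * Delta u| + 0.5 * Delta u^2) / e^2
        k1 = k2 + (abs(eta1 * delta_u) + 0.5 * delta_u ** 2) / e ** 2
        return -k1 * e + delta_u
    return 0.0
```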

If we choose

$$\begin{aligned} u(k)=u^{*}(k)=-F_{n}^{*} (k)+y_{d} (k+n) \end{aligned}$$
(32)

it is obvious that \(\eta _{n} (k+1)=0\). Then, choose the control command \(u^{*}(k)\) as

$$\begin{aligned}&u^{*}(k)=\lambda (k)\Phi (z(k)) \end{aligned}$$
(33)
$$\begin{aligned}&\lambda (k)=c_{\mathrm{min}}^{-1} \max (b^{2},\theta ^{2}) \end{aligned}$$
(34)
$$\begin{aligned}&\Phi (z(k))=\left\| {S(z(k))} \right\| ^{2}+\psi ^{2}(z(k)) \end{aligned}$$
(35)

Considering the input saturation effect, choose the actual control law v(k) as follows:

$$\begin{aligned} v(k)=\hat{{\lambda }}(k)\Phi (z(k))+e(k) \end{aligned}$$
(36)

and the updating algorithm as:

$$\begin{aligned} \hat{{\lambda }}(k+1)=\hat{{\lambda }}(k)-\Gamma [\Phi (z(k))\eta _{1} (k+1)+\sigma \hat{{\lambda }}(k)]\nonumber \\ \end{aligned}$$
(37)

where \(\Gamma \) and \(\sigma \) are positive design constants and \(\hat{{\lambda }}(k)\) is the estimate of \(\lambda (k)\).
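Putting (35)–(37) together, the per-step controller computation can be sketched as follows; here S_z stands for the HONN basis vector \(S(z(k))\), eta1_next for \(\eta _{1} (k+1)\) (so in an implementation the weight update is applied once the next tracking error is measured), and the function name is an illustrative choice.

```python
import numpy as np

def control_and_update(S_z, lam_hat, e, eta1_next, Gamma, sigma):
    """Actual control law (36) and adaptive law (37).
    S_z      : HONN basis vector S(z(k))
    lam_hat  : current estimate of lambda(k)
    e        : auxiliary system state e(k) from (31)
    eta1_next: tracking error eta_1(k+1), available at the next sample."""
    psi = 1.0 + np.linalg.norm(S_z)               # psi(z(k)) defined below (30)
    Phi = np.linalg.norm(S_z) ** 2 + psi ** 2     # Eq. (35)
    v = lam_hat * Phi + e                         # Eq. (36)
    lam_hat_next = lam_hat - Gamma * (Phi * eta1_next + sigma * lam_hat)  # Eq. (37)
    return v, lam_hat_next
```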

Substituting (36) into (26), the error Eq. (26) is rewritten as

$$\begin{aligned} \eta _{n} (k+1)= & {} F_{n}^{*} (k)-y_{d} (k+n)\nonumber \\&+\,\hat{{\lambda }}(k)\Phi (z(k))+e(k)+\Delta u \end{aligned}$$
(38)

Adding and subtracting \(u^{*}(k)\) on the right-hand side of (38) and noting (33), we have

$$\begin{aligned} \eta _{n} (k+1)= & {} F_{n}^{*} (k)-y_{d} (k+n)+\hat{{\lambda }}(k)\Phi (z(k))\nonumber \\&+\,e(k)+\Delta u-\lambda (k)\Phi (z(k))+u^{*}(k)\nonumber \\ \end{aligned}$$
(39)

Noting (32) and the definition \(\tilde{{\lambda }}(k)=\hat{{\lambda }}(k)-\lambda (k)\), Eq. (39) reduces to

$$\begin{aligned} \eta _{n} (k+1)=\tilde{{\lambda }}(k)\Phi (z(k))+e(k)+\Delta u \end{aligned}$$
(40)

Remark 1

For all discrete-time nonlinear systems of form (1), only one HONN is used to approximate the lumped unknown function at the last step, and the designed controller contains only the actual control law (36) and the adaptive law (37). It can also be observed that the number of parameters updated online for the whole system (1) is reduced to only one. As a result, both the complexity growing problem and the explosion of learning parameters are largely eliminated, and the structure of the controller is much simpler.

Remark 2

As pointed out in Remark 2 of [24], the principle of designing the NNs is to use as few neurons as possible to approximate the unknown functions, which implies that the inputs to the NNs should be kept minimal. It should be noted that in this paper the number of HONN input variables is decreased to \(n+1\), where n is the number of state variables of the whole system. Meanwhile, the number of weights updated online for the HONN is also decreased drastically. Hence, the proposed approach alleviates the explosion of learning parameters significantly by reducing the dimension of the argument vector of the function to be approximated.

Remark 3

By fusing the SNN approximation and the MLP technique, the approach proposed in this paper simultaneously eliminates the complexity growing problem and the explosion of learning parameters, which leads to a much simpler controller with minimal learning parameters; the computational burden is lighter, and the controller is easy to implement in real applications. In particular, for higher-order systems, the superiority of the proposed design approach is even greater.

4 Stability analysis

The stability analysis of the closed-loop system is given in this section.

Theorem 1

Consider the closed-loop system consisting of the system (10), the auxiliary design system (31) for the input saturation effect, the actual control law (36), and the adaptive law (37). Then, for any bounded initial conditions, all closed-loop signals remain uniformly ultimately bounded, and the steady-state tracking error converges to a neighborhood around zero by appropriately choosing the control parameters.

Proof

Choose the Lyapunov function candidate of the closed-loop system as follows:

$$\begin{aligned} V(k)=\eta _{1}^{2} (k)\Gamma +\sum \nolimits _{i=2}^n {\eta _{i}^{2} (k)} +\tilde{{\lambda }}^{2}(k)\Gamma ^{-1}+e^{2}(k)\Gamma \nonumber \\ \end{aligned}$$
(41)

The first difference of the Lyapunov function (41) is given by:

$$\begin{aligned} \Delta V(k)= & {} [\eta _{1}^{2} (k+1)-\eta _{1}^{2} (k)]\Gamma +\sum \nolimits _{i=2}^n {[\eta _{i}^{2} (k+1)} \nonumber \\&-\,\eta _{i}^{2} (k)]+\tilde{{\lambda }}^{2}(k+1)\Gamma ^{-1}-\tilde{{\lambda }}^{2}(k)\Gamma ^{-1} \nonumber \\&+\,e^{2}(k+1)\Gamma -e^{2}(k)\Gamma \end{aligned}$$
(42)

According to (15), (20) and (25), we have

$$\begin{aligned} \eta _{i} (k+1)=\eta _{i+1} (k), \,i=1,2,\ldots ,n-1 \end{aligned}$$
(43)

then the difference of the Lyapunov function becomes

$$\begin{aligned} \Delta V= & {} \eta _{n}^{2} (k+1)-\eta _{1}^{2} (k)\Gamma -(1-\Gamma )\eta _{2}^{2} (k)-2\tilde{{\lambda }}(k) \\&\times [\Phi (z(k))\eta _{1} (k+1)+\sigma \hat{{\lambda }}(k)]+\Gamma [\Phi (z(k)) \\&\times \eta _{1} (k+1)+\sigma \hat{{\lambda }}(k)]^{2}+e^{2}(k+1)\Gamma -e^{2}(k)\Gamma \\= & {} -{\bar{\gamma }}{\eta }{_{1}^{2} }(k)-({1}-{\bar{\gamma }}){g}{_{1}^{2} }{\eta }{_{2}^{2} }(k)+{\tilde{{\lambda }}}^{2}(k){\Phi }^{2}({z}(k)) \\&+\,{e}^{2}(k)\!+\!{\Delta }{u}^{2}\!+\!{2}{\tilde{{\lambda }}}(k){\Phi }({z}(k)){e}(k)+{2}{\Delta }{u}{e}(k)\\&+\,{2}{\tilde{{\lambda }}}(k){\Phi }({z}(k)){\Delta }{u}-{2}{\tilde{{\lambda }}}(k){\Phi }({z}(k)){\eta }{_{1} }(k+{1})\\&-\,{2}{\sigma }{\tilde{{\lambda }}}(k){\hat{{\lambda }}}(k)+{\Phi }^{2}({z}(k)){\Gamma }{\eta }{_{1}^{2} }(k+{1}) \\&+\,{\sigma }^{2}{\Gamma }{\hat{{\lambda }}}^{2}(k)+{2}{\sigma }{\hat{{\lambda }}}(k){\Gamma }{\Phi }({z}(k)){\eta }{_{1} }(k+{1})\\&-\,({1}-{k}{_{1}^{2} }){\Gamma }{e}^{2}(k)-{2}{\Gamma }{k}{_{1} }{\Delta }{u}{e}(k)+{\Gamma }{\Delta }{u}^{2} \\&\le -\,{\bar{\gamma }}{\eta }{_{1}^{2} }(k)-({1}\!-\!{\bar{\gamma }}){g}{_{1}^{2} }{\eta }{_{2}^{2} }(k)\!+\!{\tilde{{\lambda }}}^{2}(k){\Phi }^{2}({z}(k)) \\&+\,{e}^{2}(k)\!+\!{\Delta }{u}^{2}\!+\!{2}{\tilde{{\lambda }}}(k){\Phi }({z}(k)){e}(k)\!+\!{2}{\Delta }{u}{e}(k) \\&+\,{2}{\tilde{{\lambda }}}(k){\Phi }({z}(k)){\Delta }{u}+{2}{\tilde{{\lambda }}}(k){\Phi }({z}(k)){\eta }{_{1} }(k+{1})\\&-\,{2}{\sigma }{\tilde{{\lambda }}}(k){\hat{{\lambda }}}(k)+{\Phi }^{2}({z}(k)){\Gamma }{\eta }{_{1}^{2} }(k+{1}) \\&+\,{\sigma }^{2}{\Gamma }{\hat{{\lambda }}}^{2}(k)+{2}{\sigma }{\hat{{\lambda }}}(k){\Gamma }{\Phi }({z}(k)){\eta }{_{1} }(k+{1})\\&-\,({1}-{k}{_{1}^{2} }){\Gamma }{e}^{2}(k)+{2}{\Gamma }{k}{_{1} }{\Delta }{u}{e}(k)+{\Gamma }{\Delta }{u}^{2} \end{aligned}$$

Using the facts that

$$\begin{aligned}&\Phi ^{2}(k)<l,\quad \quad \Gamma \Phi ^{2}(k)<\bar{\gamma }l, \\&2\tilde{{\lambda }}(k)\Phi (z(k))e(k)\le l\tilde{{\lambda }}^{2}(k)+e^{2}(k), \\&2\tilde{{\lambda }}(k)\Phi (z(k))\Delta u\le l\tilde{{\lambda }}^{2}(k)+\Delta u^{2}, \\&2e(k)\Delta u\le e^{2}(k)+\Delta u^{2}, \\&2\tilde{{\lambda }}(k)\Phi (z(k))\eta _{1} (k+1)\le \frac{1}{\bar{\gamma }}\tilde{{\lambda }}^{2}(k)+\bar{\gamma }l\eta _{1}^{2} (k+1), \\&2\sigma {\hat{\lambda }}(k)\Gamma \Phi (z(k))\eta _{1} (k\!+\!1)\!\le \! \bar{\gamma }l\eta _{1}^{2} (k\!+\!1)\!+\!\bar{\gamma }\sigma ^{2}{\hat{\lambda }}^{2}(k), \\&2\tilde{{\lambda }}(k){\hat{\lambda }}(k)=\tilde{{\lambda }}^{2}(k)+{\hat{\lambda }}^{2}(k)-\lambda ^{2}(k), \\&2\tilde{{\lambda }}(k)\Phi (z(k))e(k)\le e^{2}(k)+l\tilde{{\lambda }}^{2}(k), \\&2\Gamma k_{1} \Delta ue(k)\le \bar{\gamma }k_{1}^{2} e^{2}(k)+\bar{\gamma }\Delta u^{2}, \end{aligned}$$

we obtain

$$\begin{aligned} \Delta V(k)\le & {} -\bar{\gamma }\eta _{1}^{2} (k)-(1-\bar{\gamma }-3\bar{\gamma }l)g_{1}^{2} \eta _{2}^{2} (k)-\sigma (1-2\bar{\gamma }\sigma )\hat{{\lambda }}^{2}(k)\nonumber \\&-\,(\bar{\gamma }-3-\bar{\gamma }k_{1}^{2} )e^{2}(k)+\beta \end{aligned}$$

where \(\beta =\sigma \lambda ^{2}(k)+(3+2\bar{\gamma })\Delta u^{2}\).

If we choose the design parameters as follows:

$$\begin{aligned} 0<\bar{\gamma }\!<\!\frac{1}{1\!+\!3l},\,0\!<\!\sigma <\frac{1}{2\bar{\gamma }}, \,0\!<\!k_{1} <{\sqrt{\frac{{{\bar{\gamma }}\!-\!{3}}}{\bar{\gamma }}} }\!<\!{1},\nonumber \\ \end{aligned}$$
(44)

then \(\Delta V\le 0\) once the error \(\eta _{1} (k)\) is larger than \(\sqrt{\beta }\). This demonstrates that the tracking errors \(\eta _{i} (k), \,i=1,2,\ldots ,n\) are bounded and converge to the compact set \(\Omega _{\eta } :=\{\chi \,|\, \chi \le \sqrt{\beta }\}\subset R\). Based on the boundedness of \(\eta _{i} (k),\;i=1, \,\ldots ,n\), it is straightforward to show that e(k) is bounded.

The adaptation dynamics (37) can be written as

$$\begin{aligned} \tilde{{\lambda }}(k+1)\!= & {} \!\tilde{{\lambda }}(k)\!-\!\Gamma [\Phi (z(k))\eta _{1} (k+1)\!+\!\sigma \tilde{{\lambda }}(k)\!+\!\sigma \lambda ^{*}] \\= & {} \tilde{{\lambda }}(k)-\Gamma [\Phi (z(k))\eta _{2} (k)+\sigma \tilde{{\lambda }}(k)+\sigma \lambda ^{*}] \\= & {} (1-\sigma \Gamma )\tilde{{\lambda }}(k)-\Gamma \Phi (z(k))\eta _{2} (k)+\sigma \Gamma \lambda ^{*} \end{aligned}$$

Similar to the proof in [9, 10], since \(\eta _{2} (k)\) is bounded, \(\tilde{{\lambda }}(k)\), or equivalently \({\hat{\lambda }}(k)\), is bounded in a compact set denoted by \(\Omega _{\lambda } \). Based on the above analysis, it can be seen that \(V(k),\,{\hat{\lambda }}(k),\,\eta _{i} (k)\) and e(k) are bounded; hence, \(x_{i} (k)\) is bounded for \(i=1,2,\ldots ,n\). Thus, all signals of the closed-loop system are uniformly ultimately bounded. \(\square \)

5 Simulation examples

In this section, two simulation examples and simulation comparisons with the literature [9, 30] are provided to illustrate the effectiveness and merits of the proposed adaptive NN control approach.

Example 1

Consider the following discrete-time nonlinear strict-feedback system with input saturation:

$$\begin{aligned} x_{1} (k+1)= & {} 0.7x_{1}^{2} (k)+x_{2} (k), \nonumber \\ x_{2} (k+1)= & {} u(k), \nonumber \\ y_{k}= & {} x_{1} (k), \end{aligned}$$
(45)

The initial system state is \(x(0)= [0.5\;0]^{T}\), and the initial value of the adaptive parameter is \({\hat{\lambda }}(0)=0.1\). The other controller parameters are \(l=12,\,\Gamma =0.001,\,\sigma =0.5,\,k_{1} =0.01,\,\mu =0.01\). The input constraints are \(u_{\min } =u_{\max } =0.6\). The reference signal is \(y_{d} (k) =\sin ({k\pi } / 20) / 2\).
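For reference, a rough closed-loop simulation of Example 1 could be organized as below; it is only a sketch that reuses the honn_basis, sat, and auxiliary_update functions from the earlier snippets, the HONN index sets and orders are arbitrary illustrative choices rather than the structure used to generate the reported figures, and the listed value 0.01 is used for the auxiliary-system design constant \(k_{2}\).

```python
import numpy as np

l, Gamma, sigma, k2, mu = 12, 0.001, 0.5, 0.01, 0.01
u_min = u_max = 0.6
x = np.array([0.5, 0.0])                       # initial state x(0)
lam_hat, e = 0.1, 0.0                          # lambda_hat(0) and auxiliary state
index_sets = [(i % 3,) for i in range(l)]      # hypothetical HONN structure
orders = [(i // 3 + 1,) for i in range(l)]

for k in range(200):
    yd = 0.5 * np.sin(k * np.pi / 20)                        # y_d(k)
    z = np.append(x, 0.5 * np.sin((k + 2) * np.pi / 20))     # z(k) of Eq. (28), n = 2
    S_z = honn_basis(z, index_sets, orders)
    psi = 1.0 + np.linalg.norm(S_z)
    Phi = np.linalg.norm(S_z) ** 2 + psi ** 2                # Eq. (35)
    v = lam_hat * Phi + e                                    # Eq. (36)
    u = sat(v, u_min, u_max)                                 # Eq. (3)
    e = auxiliary_update(e, x[0] - yd, u - v, k2, mu)        # Eq. (31)
    x = np.array([0.7 * x[0] ** 2 + x[1], u])                # plant (45)
    eta1_next = x[0] - 0.5 * np.sin((k + 1) * np.pi / 20)
    lam_hat -= Gamma * (Phi * eta1_next + sigma * lam_hat)   # Eq. (37)
```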

The simulation results are presented in Figs. 1, 2, and 3. Figure 1 shows the output \(y_{k}\) and the reference signal \(y_{d}(k)\); it can be seen that fairly good tracking performance is obtained. Figure 2 gives the control signal of the closed-loop system. Figure 3 illustrates the trajectory of the parameter estimate.

Fig. 1 \(y_{d}\) (red trajectory) and y (black trajectory) of Example 1

Fig. 2 Control input u(k) of Example 1

Fig. 3 The parameter trajectory of Example 1

Example 2

Consider the following discrete-time nonlinear strict-feedback system in the presence of input saturation:

$$\begin{aligned} x_{1} (k+1)= & {} \frac{x_{1}^{2} (k)}{1+x_{1}^{2} (k)}+x_{2} (k), \nonumber \\ x_{2} (k+1)= & {} \frac{x_{1} (k)}{1+x_{1}^{2} (k)+x_{2}^{2} (k)}+u(k), \nonumber \\ y_{k}= & {} x_{1} (k), \end{aligned}$$
(46)

The initial system state is \(x(0)= [0\;0]^{T}\), and the initial value of the adaptive parameter is \({\hat{\lambda }}(0)=0.1\). The other controller parameters are \(l=13,\,\Gamma =0.001,\,\sigma =1, k_{1} =0.01,\,\mu =0.01\). The input constraints are \(u_{\min } =u_{\max } =4\). The reference signal is \(y_{d} (k)= \sin ({k\pi } /20) / 2+\sin ({k\pi }/ 10) /2\).

The simulation results are presented in Figs. 4, 5, and 6. Comparing our simulation results with those in [9, 30], it can be seen from Fig. 4 that better tracking performance of the output \(y_{k}\) with respect to the reference signal \(y_{d}(k)\) is obtained and that the control magnitude of u(k) in our scheme is much smaller. Figures 5 and 6 illustrate the control signal of the closed-loop system and the trajectory of the parameter estimate, respectively.

Fig. 4 \(y_{d}\) (red trajectory) and y (black trajectory) of Example 2

Fig. 5 Control input u(k) of Example 2

Fig. 6 The parameter trajectory of Example 2

6 Conclusion

This paper has investigated adaptive backstepping control for a class of discrete-time nonlinear strict-feedback systems with input saturation based on SNN approximation and the MLP technique. The proposed approach eliminates the complexity growing problem and alleviates the explosion of learning parameters inherent in the conventional adaptive control design procedure. An auxiliary design system is incorporated into the control scheme to handle the input saturation constraint. The designed controller contains only one actual control law and one adaptive law; the numbers of NN input variables and weights updated online are decreased drastically, and the number of parameters updated online for the whole system is reduced to only one. As a result, an adaptive mechanism with a much simpler controller structure and minimal learning parameterization is achieved, and the computational burden is lighter. It is shown via Lyapunov theory that all signals in the closed-loop system are uniformly ultimately bounded (UUB) and that the tracking error converges to a small neighborhood of zero by choosing the control parameters appropriately. Finally, simulation results for two examples are employed to illustrate the effectiveness and merits of the proposed scheme. Future research will concentrate on applying the SNN approximation and MLP technique to uncertain discrete-time nonlinear systems with input saturation in pure-feedback form and to multiple-input multiple-output (MIMO) systems.