
10.1 Introduction

The discrete-time implementation of controllers is important in practice. There are two methods for designing a digital controller. One method, called emulation, is to design a controller based on the continuous-time system and then discretize the controller. The other method is to design the discrete-time controller directly based on the discrete-time system. In this chapter, we adopt the second approach to design NN-based nonlinear controllers.

Discrete-time adaptive control design is much more complicated than continuous-time adaptive control design, since the discrete-time Lyapunov difference tends to contain pure and coupled quadratic terms in the states and/or NN weights.

Many papers have been published on adaptive neural control for discrete-time nonlinear systems [1–15]. For example, a direct adaptive neural controller for second-order discrete-time nonlinear systems was proposed in [14], and an NN-based adaptive controller without off-line training was presented for a class of discrete-time multi-input multi-output (MIMO) nonlinear systems in [15].

In this chapter, we introduce two typical discrete-time neural network controller designs, together with their stability analysis and simulation.

10.2 Direct RBF Control for a Class of Discrete-Time Nonlinear Systems

10.2.1 System Description

A nonlinear model is

$$ y(k+1)=f\left( {y(k),u(k)} \right). $$
(10.1)

Assumptions

  1. The unknown nonlinear function \( f(\cdot ) \) is continuous and differentiable.

  2. The number of hidden neurons of the RBF network is \( l. \)

  3. The partial derivative satisfies \( {g_1}\geq \left| {\frac{{\partial f}}{{\partial u}}} \right|>\epsilon >0, \) where \( \epsilon \) and \( {g_1} \) are positive constants.

Assume that \( {y_{\mathrm{ m}}}(k+1) \) is the system's desired output at time \( k+1. \) In the ideal case with no disturbance, if the control input \( {u^{*}}(k) \) satisfies

$$ f\left( {y(k),{u^{*}}(k)} \right)-{y_{\mathrm{ m}}}(k+1)=0, $$
(10.2)

then the system's output tracking error will converge to zero.

Define the tracking error as \( e(k)=y(k)-{y_{\mathrm{ m}}}(k), \) and then the tracking error dynamic equation is given as follows:

$$ e(k+1)=f\left( {y(k),u(k)} \right)-{y_{\mathrm{ m}}}(k+1) $$

10.2.2 Controller Design and Stability Analysis

For the system (10.1), an RBF neural network is designed to realize the controller directly [12, 16]. Figure 10.1 shows the closed-loop neural-network-based adaptive control scheme.

Fig. 10.1 Block diagram of direct RBF control scheme

The ideal control input \( {u^{*}}(k) \) can be expressed by the RBF network as

$$ {u^{*}}(k)={u^{*}}(z)={w^{*}}^{\mathrm{ T}}h(z)+{\epsilon_{\mathrm{ u}}}(z). $$
(10.3)

On the compact set \( {\Omega_{\mathrm{ z}}}, \) the ideal neural network weights \( {w^{*}} \) and the approximation error are bounded by

$$ \|{w^{*}}\|\leq {w_{\mathrm{ m}}},\quad |{\epsilon_{\mathrm{ u}}}(z)|\leq {\epsilon_{\mathrm{ l}}} $$
(10.4)

where \( {w_{\mathrm{ m}}} \) and \( {\epsilon_{\mathrm{ l}}} \) are positive constants.

Define \( \hat{w}(k) \) as the actual neural network weight vector. Using the RBF neural network directly, the control law is designed as

$$ u(k)={{\hat{w}}^{\mathrm{ T}}}(k)h(z) $$
(10.5)

where \( h(z) \) is the vector of Gaussian basis function outputs and \( z(k) \) is the RBF input.

By noticing (10.3), we have

$$ \begin{aligned} u(k)-{u^{*}}(k)&={{\hat{w}}^{\mathrm{ T}}}(k)h(z)-\left( {{w^{*}}^{\mathrm{ T}}h(z)+{\epsilon_{\mathrm{ u}}}(z)} \right) \\ &={{\tilde{w}}^{\mathrm{ T}}}(k)h(z)-{\epsilon_{\mathrm{ u}}}(z) \end{aligned} $$
(10.6)

where \( \tilde{w}(k)=\hat{w}(k)-{w^{*}} \) is the weight approximation error.

The weight update law was designed as follows [12, 16]:

$$ \hat{w}(k+1)=\hat{w}(k)-\gamma \left( {h(z)e(k+1)+\sigma \hat{w}(k)} \right) $$
(10.7)

where \( \gamma >0 \) and \( \sigma >0 \) are design constants.
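
As a minimal sketch (not the Appendix program chap10_1.m), one step of the controller (10.5) and the update law (10.7) could be coded as follows; the numerical values of the centers, width, gains, and signals are illustrative assumptions only.

```matlab
% Sketch of one step of the direct RBF controller (10.5) and update (10.7).
% All numerical values below are illustrative assumptions.
c = [-2 -1.5 -1 -0.5 0 0.5 1 1.5 2;       % Gaussian centers (2 x 9)
     -2 -1.5 -1 -0.5 0 0.5 1 1.5 2];
b = 2.0;                                  % Gaussian width
w = rand(9, 1);                           % current weight estimate w_hat(k)
gamma = 0.01; sigma = 0.001;              % adaptation gains
z = [0.1; 0.2];                           % RBF input z(k)
h = exp(-sum((z - c).^2, 1)' / (2*b^2));  % Gaussian basis vector h(z), 9 x 1
u = w' * h;                               % control law (10.5)
e_next = 0.05;                            % e(k+1), measured after applying u
w = w - gamma * (h * e_next + sigma * w); % weight update (10.7)
```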

Subtracting \( {w^{*}} \) from both sides of Eq. (10.7), we have

$$ \tilde{w}(k+1)=\tilde{w}(k)-\gamma \left( {h(z)e(k+1)+\sigma \hat{w}(k)} \right) $$
(10.8)

Mean Value Theorem

If a function \( f(x) \) is continuous on the closed interval \( [a,b] \) and differentiable on the open interval \( (a,b), \) then there exists a point c in \( (a,b) \) such that

$$ f(b)=f(a)+(b-a){f}^{\prime}(c){|_{{c\in (a,b)}}}. $$
(10.9)
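
As a simple check of (10.9), take \( f(x)={x^2} \) on \( [1,3]: \)

$$ f(3)=f(1)+(3-1){f}^{\prime}(c)\;\Rightarrow\;9=1+2\cdot 2c\;\Rightarrow\;c=2\in (1,3). $$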

Using the mean value theorem with \( a={u^{*}}(k), \) \( b=u(k)={u^{*}}(k)+{{\tilde{w}}^{\mathrm{ T}}}(k)h(z)-{\epsilon_{\mathrm{ u}}}(z), \) and \( c=\zeta, \) and noticing Eqs. (10.2) and (10.6), we obtain

$$ \begin{aligned} f\left( {y(k),u(k)} \right)&=f\left( {y(k),{u^{*}}(k)+{{\tilde{w}}^{\mathrm{ T}}}(k)h(z)-{\epsilon_{\mathrm{ u}}}(z)} \right) \\ &=f\left( {y(k),{u^{*}}(k)} \right)+\left( {{{\tilde{w}}^{\mathrm{ T}}}(k)h(z)-{\epsilon_{\mathrm{ u}}}(z)} \right){f_u} \\ &={y_{\mathrm{ m}}}(k+1)+\left( {{{\tilde{w}}^{\mathrm{ T}}}(k)h(z)-{\epsilon_{\mathrm{ u}}}(z)} \right){f_u} \end{aligned} $$

where \( {f_u}={{\left. {\frac{{\partial f}}{{\partial u}}} \right|}_{u=\zeta }}, \) \( \zeta \in \left( {{u^{*}}(k),u(k)} \right). \)

Then we get

$$ \begin{aligned} e(k+1)&=f\left( {y(k),u(k)} \right)-{y_{\mathrm{ m}}}(k+1) \\ &=\left( {{{\tilde{w}}^{\mathrm{ T}}}(k)h(z)-{\epsilon_{\mathrm{ u}}}(z)} \right){f_u} \end{aligned} $$

and

$$ {{\tilde{w}}^{\mathrm{ T}}}(k)h(z)=\frac{e(k+1) }{{{f_u}}}+{\epsilon_{\mathrm{ u}}}(z). $$
(10.10)

The stability analysis of the closed-loop system is given by Zhang et al. [16] as follows.

Firstly the Lyapunov function is designed as

$$ J(k)=\frac{1}{{{g_1}}}{e^2}(k)+\frac{1}{\gamma}\tilde{w}{(k)^{\mathrm{ T}}}\tilde{w}(k). $$
(10.11)

Then we have

$$ \begin{aligned} \Delta J(k)&=J(k+1)-J(k) \\ &=\frac{1}{{{g_1}}}\left( {{e^2}(k+1)-{e^2}(k)} \right)+\frac{1}{\gamma }{{\tilde{w}}^{\mathrm{ T}}}(k+1)\tilde{w}(k+1)-\frac{1}{\gamma }{{\tilde{w}}^{\mathrm{ T}}}(k)\tilde{w}(k) \\ &=\frac{1}{{{g_1}}}\left( {{e^2}(k+1)-{e^2}(k)} \right)+\frac{1}{\gamma }{{\left( {\tilde{w}(k)-\gamma \left( {h(z)e(k+1)+\sigma \hat{w}(k)} \right)} \right)}^{\mathrm{ T}}}\left( {\tilde{w}(k)-\gamma \left( {h(z)e(k+1)+\sigma \hat{w}(k)} \right)} \right)-\frac{1}{\gamma }{{\tilde{w}}^{\mathrm{ T}}}(k)\tilde{w}(k) \\ &=\frac{1}{{{g_1}}}\left( {{e^2}(k+1)-{e^2}(k)} \right)-2{{\tilde{w}}^{\mathrm{ T}}}(k)h(z)e(k+1)-2\sigma {{\tilde{w}}^{\mathrm{ T}}}(k)\hat{w}(k) \\ &\quad +\gamma {h^{\mathrm{ T}}}(z)h(z){e^2}(k+1)+2\gamma \sigma {{\hat{w}}^{\mathrm{ T}}}(k)h(z)e(k+1)+\gamma {\sigma^2}{{\hat{w}}^{\mathrm{ T}}}(k)\hat{w}(k) \end{aligned} $$

Since

$$ |{h_i}(z)|\leq 1,\quad \|h(z)\|\leq \sqrt{l}\leq l,\quad {h^{\mathrm{ T}}}(z)h(z)=\|h(z)\|^2\leq l,\quad i=1,2,\ldots,l $$
$$ \begin{aligned} 2\sigma {{\tilde{w}}^{\mathrm{ T}}}(k)\hat{w}(k)&=\sigma {{\tilde{w}}^{\mathrm{ T}}}(k)\left( {\tilde{w}(k)+{w^{*}}} \right)+\sigma {{\left( {\hat{w}(k)-{w^{*}}} \right)}^{\mathrm{ T}}}\hat{w}(k) \\ &=\sigma \left( {\|\tilde{w}(k)\|^2+\|\hat{w}(k)\|^2+{{\tilde{w}}^{\mathrm{ T}}}(k){w^{*}}-{w^{*}}^{\mathrm{ T}}\hat{w}(k)} \right)=\sigma \left( {\|\tilde{w}(k)\|^2+\|\hat{w}(k)\|^2-\|{w^{*}}\|^2} \right) \end{aligned} $$
$$ \gamma {h^{\mathrm{ T}}}(z)h(z){e^2}(k+1)\leq \gamma l{e^2}(k+1) $$
$$ 2\gamma \sigma {{\hat{w}}^{\mathrm{ T}}}(k)h(z)e(k+1)\leq \gamma \sigma l\left[ {\|\hat{w}(k)\|^2+{e^2}(k+1)} \right] $$
$$ \gamma {\sigma^2}{{\hat{w}}^{\mathrm{ T}}}(k)\hat{w}(k)=\gamma {\sigma^2}\|\hat{w}(k)\|^2, $$

we obtain

$$ \begin{aligned} \Delta J(k)&\leq \frac{1}{{{g_1}}}\left( {{e^2}(k+1)-{e^2}(k)} \right)-2\left( {\frac{e(k+1)}{{{f_u}}}+{\epsilon_{\mathrm{ u}}}(z)} \right)e(k+1)-\sigma \left( {\|\tilde{w}(k)\|^2+\|\hat{w}(k)\|^2-\|{w^{*}}\|^2} \right) \\ &\quad +\gamma l{e^2}(k+1)+\gamma \sigma l\left[ {\|\hat{w}(k)\|^2+{e^2}(k+1)} \right]+\gamma {\sigma^2}\|\hat{w}(k)\|^2 \\ &=\left( {\frac{1}{{{g_1}}}-\frac{2}{{{f_u}}}+\gamma (1+\sigma )l} \right){e^2}(k+1)-\frac{1}{{{g_1}}}{e^2}(k)-2{\epsilon_{\mathrm{ u}}}(z)e(k+1) \\ &\quad -\sigma \|\tilde{w}(k)\|^2+\sigma \|{w^{*}}\|^2+\sigma (-1+\gamma l+\gamma \sigma )\|\hat{w}(k)\|^2 \end{aligned} $$

Noticing Assumption 3, from \( 0<\epsilon <{f_u}\leq {g_1}, \) we can derive that

$$ \begin{aligned} &\frac{1}{{{g_1}}}-\frac{2}{{{f_u}}}\leq \frac{1}{{{g_1}}}-\frac{2}{{{g_1}}}=-\frac{1}{{{g_1}}}<0 \\ &-2{\epsilon_{\mathrm{ u}}}(z)e(k+1)\leq {k_0}\epsilon_{\mathrm{ l}}^2+\frac{1}{{{k_0}}}{e^2}(k+1) \end{aligned} $$

where \( {k_0} \) is a positive number. Thus, we have

$$ \begin{aligned} \Delta J(k)&\leq \left( {-\frac{1}{{{g_1}}}+\gamma (1+\sigma )l+\frac{1}{{{k_0}}}} \right){e^2}(k+1)+\sigma \left( {\gamma l+\gamma \sigma -1} \right)\|\hat{w}(k)\|^2 \\ &\quad -\frac{1}{{{g_1}}}{e^2}(k)-\sigma \|\tilde{w}(k)\|^2+\sigma w_{\mathrm{ m}}^2+{k_0}\epsilon_{\mathrm{ l}}^2 \\ &=-\left( {\frac{1}{{{g_1}}}-(1+\sigma )l\gamma -\frac{1}{{{k_0}}}} \right){e^2}(k+1)+\sigma \left( {(l+\sigma )\gamma -1} \right)\|\hat{w}(k)\|^2 \\ &\quad -\frac{1}{{{g_1}}}\left( {{e^2}(k)-\beta } \right)-\sigma \|\tilde{w}(k)\|^2 \end{aligned} $$

where \( \beta \) is a positive constant defined as

$$ \beta ={g_1}(\sigma w_{\mathrm{ m}}^2+{k_0}\epsilon_{\mathrm{ l}}^2) $$
(10.12)

The positive constants \( {k_0}, \) \( \gamma, \) and \( \sigma \) must be chosen to satisfy the following inequalities: \( \frac{1}{{{g_1}}}-\frac{1}{{{k_0}}}\geq 0,\quad \frac{1}{{{g_1}}}-(1+\sigma )l\gamma -\frac{1}{{{k_0}}}\geq 0,\quad (l+\sigma )\gamma -1\leq 0, \) that is,

$$ 0<{g_1}\leq {k_0} $$
(10.13)
$$ 0<(1+\sigma )l\gamma \leq \frac{1}{{{g_1}}}-\frac{1}{{{k_0}}} $$
(10.14)
$$ 0<(l+\sigma )\gamma \leq 1 $$
(10.15)

Then we have \( \Delta{}J(k)\leq 0 \) once \( {e^2}(k)\geq \beta. \) This implies that \( J(k) \) is bounded for all \( k\geq 0, \) because

$$ J(k)=J(0)+\sum\limits_{i=0}^{k-1} {\Delta{}J(i)<\infty } $$

Define the compact set \( {\Omega_e}=\left\{ {e\,|\,{e^2}\leq \beta } \right\}; \) then the tracking error \( e(k) \) will converge to \( {\Omega_e} \) whenever \( e(k) \) is outside \( {\Omega_e}. \)
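
Before the simulation studies, it is convenient to check the design conditions (10.13), (10.14), and (10.15) numerically; the minimal sketch below uses the parameter values adopted in the examples that follow.

```matlab
% Verify the stability conditions (10.13)-(10.15) for a candidate design.
g1 = 5; k0 = 10; l = 9; gamma = 0.01; sigma = 0.001;        % example values
c13 = (g1 > 0) && (g1 <= k0);                               % condition (10.13)
c14 = ((1 + sigma)*l*gamma > 0) && ...
      ((1 + sigma)*l*gamma <= 1/g1 - 1/k0);                 % condition (10.14)
c15 = ((l + sigma)*gamma > 0) && ((l + sigma)*gamma <= 1);  % condition (10.15)
fprintf('(10.13): %d  (10.14): %d  (10.15): %d\n', c13, c14, c15);
```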

10.2.3 Simulation Examples

10.2.3.1 First Example: Linear Plant

Consider a linear discrete-time system as

$$ \begin{aligned} {x_1}(k+1)&={x_2}(k) \\ {x_2}(k+1)&=u(k) \\ y(k)&={x_1}(k). \end{aligned} $$

Its model can be described in the form

$$ y(k+1)=f\left( {y(k),u(k)} \right). $$

In the program, we set \( M=1 \) for the linear plant. Since \( {g_1}\geq \frac{{\partial f}}{{\partial u}}=1, \) based on simulation tests we can set \( {g_1}=5, \) and from (10.13) we can choose \( {k_0}=10. \)

From (10.14), we have \( 0<(1+\sigma )l\gamma \leq \frac{1}{5}-\frac{1}{10}=\frac{1}{10}=0.10; \) from (10.15), we have \( 0<(l+\sigma )\gamma \leq 1. \) If we choose \( l=9, \) the conditions become \( 0<9(1+\sigma )\gamma \leq 0.10 \) and \( 0<(9+\sigma )\gamma \leq 1. \)

To satisfy (10.14) and (10.15), we can choose \( \gamma =0.01 \) and \( \sigma =0.001. \)

For the RBF neural network, the structure is 2-9-1. The input is chosen as \( \boldsymbol{ z}(k)={{\left[ {{x_1}(k)\quad {x_2}(k)} \right]}^{\mathrm{ T}}}, \) the parameters of the Gaussian functions \( {{\boldsymbol{ c}}_i} \) and \( {b_i} \) are chosen as \( \left[ {\begin{array}{ccccccccc} -2 & -1.5 & -1.0 & -0.5 & 0 & 0.5 & 1.0 & 1.5 & 2 \end{array}} \right] \) and 2.0, and the initial weight values are chosen as random values in the range \( (0,1). \) The initial value of the plant is \( \left[ {\begin{array}{cc} 0 & 0 \end{array}} \right]. \) The reference signal is \( {y_{\mathrm{ m}}}(k)=\sin \left( {\frac{\pi }{1000 }k} \right). \) The simulation results are shown in Figs. 10.2, 10.3, and 10.4.
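
A rough, self-contained sketch of this simulation is given below; the authoritative program is chap10_1.m in the Appendix, and the number of steps N is an assumption.

```matlab
% Direct RBF control of the linear plant (M = 1): a sketch of chap10_1.m.
clear; close all;
N = 3000;                                     % number of steps (assumed)
c = repmat(-2:0.5:2, 2, 1);                   % Gaussian centers, 2 x 9
b = 2.0; gamma = 0.01; sigma = 0.001;
w = rand(9, 1);                               % initial weights in (0,1)
x = [0; 0];                                   % plant state [x1; x2]
y = zeros(N, 1); ym = zeros(N, 1); u = zeros(N, 1); wn = zeros(N, 1);
for k = 1:N
    ymk1 = sin(pi*(k+1)/1000);                % reference y_m(k+1)
    z = x;                                    % RBF input z(k) = [x1(k); x2(k)]
    h = exp(-sum((z - c).^2, 1)' / (2*b^2));  % Gaussian basis h(z)
    u(k) = w' * h;                            % control law (10.5)
    x = [x(2); u(k)];                         % plant: x1(k+1)=x2(k), x2(k+1)=u(k)
    ek1 = x(1) - ymk1;                        % e(k+1) = y(k+1) - y_m(k+1)
    w = w - gamma*(h*ek1 + sigma*w);          % update law (10.7)
    y(k) = x(1); ym(k) = ymk1; wn(k) = norm(w);  % log y(k+1), y_m(k+1), ||w||
end
figure; plot(1:N, ym, 'r', 1:N, y, 'b');      % position tracking, cf. Fig. 10.2
figure; plot(1:N, u);                         % control input, cf. Fig. 10.3
figure; plot(1:N, wn);                        % weight norm, cf. Fig. 10.4
```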

Fig. 10.2 Position tracking (M = 1)

Fig. 10.3 Control input (M = 1)

Fig. 10.4 Weight norm (M = 1)

The program of this example is chap10_1.m (M = 1), which is given in the Appendix.

10.2.3.2 Second Example: Nonlinear Plant

Consider a nonlinear discrete-time system as

$$ \begin{aligned} {x_1}(k+1)&={x_2}(k) \\ {x_2}(k+1)&=\frac{{{x_1}(k){x_2}(k)\left( {{x_1}(k)+2.5} \right)}}{{1+x_1^2(k)+x_2^2(k)}}+u(k)+0.1{u^3}(k) \\ y(k)&={x_1}(k). \end{aligned} $$

Its model can be described in the form

$$ y(k+1)=\ f\left( {y(k),u(k)} \right). $$

In the program, we set \( M=2 \) for the nonlinear plant. Since \( {g_1}\geq \frac{{\partial f}}{{\partial u}}=1+0.3{u^2}(k), \) based on simulation tests we can set \( {g_1}=5, \) and from (10.13) we can choose \( {k_0}=10. \)

From (10.14), we have \( 0<(1+\sigma )l\gamma \leq \frac{1}{5}-\frac{1}{10}=\frac{1}{10}=0.10; \) from (10.15), we have \( 0<\left( {l+\sigma } \right)\gamma \leq 1. \) If we choose \( l=9, \) the conditions become \( 0<9(1+\sigma )\gamma \leq 0.10 \) and \( 0<(9+\sigma )\gamma \leq 1. \)

To satisfy (10.14) and (10.15), we can choose \( \gamma =0.01 \) and \( \sigma =0.001. \)

For the RBF neural network, the structure is 2-9-1. The input is chosen as \( \boldsymbol{ z}(k)={{\left[ {{x_1}(k)\quad {x_2}(k)} \right]}^{\mathrm{ T}}}, \) the parameters of the Gaussian functions \( {{\boldsymbol{ c}}_i} \) and \( {b_i} \) are chosen as \( \left[ {\begin{array}{ccccccccc} -2 & -1.5 & -1.0 & -0.5 & 0 & 0.5 & 1.0 & 1.5 & 2 \end{array}} \right] \) and 2.0, and the initial weight values are chosen as random values in the range \( (0,1). \) The initial value of the plant is \( \left[ {\begin{array}{cc} 0 & 0 \end{array}} \right]. \) The reference signal is \( {y_{\mathrm{ m}}}(k)=\sin \left( {\frac{\pi }{1000 }k} \right). \) The simulation results are shown in Figs. 10.5, 10.6, and 10.7.
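
Relative to the sketch for the linear plant, only the plant-update step changes; a hedged version of the \( M=2 \) branch is shown below (x and u(k) are the variables used in that sketch).

```matlab
% Nonlinear plant update (M = 2): replaces the line "x = [x(2); u(k)]"
% in the linear-plant sketch above; x and u(k) are the same variables.
x1 = x(1); x2 = x(2);
x2_next = x1*x2*(x1 + 2.5)/(1 + x1^2 + x2^2) + u(k) + 0.1*u(k)^3;
x = [x2; x2_next];      % x1(k+1) = x2(k), x2(k+1) as above; y(k) = x1(k)
```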

Fig. 10.5 Position tracking (M = 2)

Fig. 10.6 Control input (M = 2)

Fig. 10.7 Weight norm (M = 2)

The program of this example is chap10_1.m (M = 2), which is given in the Appendix.

10.3 Adaptive RBF Control for a Class of Discrete-Time Nonlinear Systems

10.3.1 System Description

Consider a nonlinear discrete system as follows:

$$ y(k+1)=f\left( {\boldsymbol{ x}(k)} \right)+u(k) $$
(10.16)

where \( \boldsymbol{ x}(k)={{\left[ {y(k)\;y(k-1)\ldots y(k-n+1)} \right]}^{\mathrm{ T}}} \) is the state vector, \( u(k) \) is the control input, and \( y(k) \) is the plant output. The nonlinear smooth function \( f:{R^n}\to R \) is assumed unknown.

10.3.2 Traditional Controller Design

The tracking error \( e(k) \) is defined as \( e(k)=y(k)-{y_{\mathrm{ d}}}(k). \) If \( f\left( {\boldsymbol{ x}(k)} \right) \) were known, a feedback linearization-type control law could be designed as

$$ u(k)={y_{\mathrm{ d}}}(k+1)-f\left( {\boldsymbol{ x}(k)} \right)-{c_1}e(k). $$
(10.17)

Substituting (10.17) into (10.16), we obtain the asymptotically convergent error dynamics

$$ e(k+1)+{c_1}e(k)=0 $$
(10.18)

where \( |{c_1}|<1. \)
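
As a minimal sketch of the known-f case, the control law (10.17) can be simulated as follows; here the linear plant of Sect. 10.3.5.1 is assumed as an example of f, and the sampling period T is an assumption.

```matlab
% Feedback linearization law (10.17) when f(x(k)) is known; here f = 0.5*y
% (the linear plant of Sect. 10.3.5.1) and T is an assumed sampling period.
f_true = @(y) 0.5*y;
c1 = -0.01; T = 0.001; N = 2000;
y = 0; Y = zeros(N, 1); YD = zeros(N, 1);
for k = 1:N
    yd   = sin(2*pi*k*T);                 % y_d(k)
    ydk1 = sin(2*pi*(k+1)*T);             % y_d(k+1)
    e    = y - yd;                        % tracking error e(k)
    u    = ydk1 - f_true(y) - c1*e;       % control law (10.17)
    Y(k) = y; YD(k) = yd;
    y    = f_true(y) + u;                 % plant (10.16): y(k+1) = f + u(k)
end
plot(1:N, YD, 'r', 1:N, Y, 'b');          % e(k+1) = -c1*e(k), so e(k) decays
```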

10.3.3 Adaptive Neural Network Controller Design

If \( f(\boldsymbol{ x}(k)) \) is unknown, an RBF neural network can be used to approximate \( f(\boldsymbol{ x}(k)). \) The network output is given as

$$ \hat{f}\left( {\boldsymbol{ x}(k)} \right)=\hat{\boldsymbol{ w}} {(k)^{\mathrm{ T}}}\boldsymbol{ h}\left( {\boldsymbol{ x}(k)} \right) $$
(10.19)

where \( \hat{\boldsymbol{ w}} (k) \) denotes the network output weight vector and \( \boldsymbol{ h}\left( {\boldsymbol{ x}(k)} \right) \) denotes the vector of Gaussian basis functions.

Given any arbitrary nonzero approximation error bound \( {\epsilon_f}, \) there exists an optimal weight vector \( {{\boldsymbol{ w}}^{*}} \) such that

$$ f(\boldsymbol{ x})=\hat{f}\left( {\boldsymbol{ x},{{\boldsymbol{ w}}^{*}}} \right)-{\Delta_f}(\boldsymbol{ x}) $$
(10.20)

where \( {\Delta_f}(\boldsymbol{ x}) \) denotes the optimal network approximation error, and \( |{\Delta_f}(\boldsymbol{ x})|<{\epsilon_f}. \)

Then we can get the general network approximation error as

$$ \begin{aligned} \tilde{f}\left( {\boldsymbol{ x}(k)} \right)&=f\left( {\boldsymbol{ x}(k)} \right)-\hat{f}\left( {\boldsymbol{ x}(k)} \right) \\ &=\hat{f}\left( {\boldsymbol{ x},{{\boldsymbol{ w}}^{*}}} \right)-{\Delta_f}\left( {\boldsymbol{ x}(k)} \right)-{{\hat{\boldsymbol{ w}}}^{\mathrm{ T}}}(k)\boldsymbol{ h}\left( {\boldsymbol{ x}(k)} \right) \\ &=-{{\tilde{\boldsymbol{ w}}}^{\mathrm{ T}}}(k)\boldsymbol{ h}\left( {\boldsymbol{ x}(k)} \right)-{\Delta_f}\left( {\boldsymbol{ x}(k)} \right) \end{aligned} $$
(10.21)

where \( \tilde{\boldsymbol{ w}} (k)=\hat{\boldsymbol{ w}} (k)-{{\boldsymbol{ w}}^{*}}. \)

The control law with RBF approximation was proposed in [17] as follows

$$ u(k)={y_{\mathrm{ d}}}(k+1)-\hat{f}\left( {\boldsymbol{ x}(k)} \right)-{c_1}e(k). $$
(10.22)

Figure 10.8 shows the closed-loop neural-based adaptive control scheme.

Fig. 10.8 Block diagram of the control scheme

Substituting (10.22) into (10.16) yields

$$ e(k+1)=\tilde{f}\left( {\boldsymbol{ x}(k)} \right)-{c_1}e(k). $$

Thus,

$$ e(k)+{c_1}e(k-1)=\tilde{f}\left( {\boldsymbol{ x}(k-1)} \right). $$
(10.23)

Equation (10.23) can also be expressed as

$$ e(k)={\Gamma^{-1 }}\left( {{z^{-1 }}} \right)\tilde{f}\left( {\boldsymbol{ x}(k-1)} \right) $$
(10.24)

where \( \Gamma\left( {{z^{-1 }}} \right)=1+{c_1}{z^{-1 }} \) and \( {z^{-1 }} \) denotes the discrete-time delay operator.

Define a new augmented error as [17]

$$ {e_1}(k)=\beta \left( {e(k)-{\Gamma^{-1 }}\left( {{z^{-1 }}} \right)v(k)} \right) $$
(10.25)

where \( \beta\ >\ 0. \)

Substituting (10.24) into (10.25) yields

$$ \begin{aligned} {e_1}(k)&=\beta {\Gamma^{-1 }}\left( {{z^{-1 }}} \right)\left( {\tilde{f}\left( {\boldsymbol{ x}(k-1)} \right)-v(k)} \right) \\ &=\beta \frac{1}{{1+{c_1}{z^{-1 }}}}\left( {\tilde{f}\left( {\boldsymbol{ x}(k-1)} \right)-v(k)} \right) \end{aligned} $$

which leads to the relation as

$$ {e_1}(k-1)=\frac{{\beta \left( {\tilde{f}\left( {\boldsymbol{ x}(k-1)} \right)-v(k)} \right)-{e_1}(k)}}{{{c_1}}}. $$
(10.26)

The adaptive law was proposed in [17] as

$$ \Delta\hat{\boldsymbol{ w}} (k)=\left\{ {\begin{array}{ll} \frac{\beta }{{\gamma c_1^2}}\boldsymbol{ h}\left( {\boldsymbol{ x}(k-1)} \right){e_1}(k), & \mathrm{ if}\ |{e_1}(k)|>{\epsilon_f}/G \\ 0, & \mathrm{ if}\ |{e_1}(k)|\leq {\epsilon_f}/G \end{array}} \right. $$
(10.27)

where \( \Delta\hat{\boldsymbol{ w}} (k)=\hat{\boldsymbol{ w}} (k)-\hat{\boldsymbol{ w}} (k-1), \) and \( \gamma \) and \( G \) are strictly positive constants.
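
A minimal sketch of one application of the dead-zone adaptive law (10.27) is given below; the basis vector, the current augmented error, and the parameter values are placeholder assumptions.

```matlab
% One application of the dead-zone adaptive law (10.27).
beta = 0.001; gamma = 0.001; c1 = -0.01; G = 50000; eps_f = 0.003;
h  = rand(9, 1);                 % h(x(k-1)), placeholder basis vector
w  = rand(9, 1);                 % previous estimate w_hat(k-1)
e1 = 0.01;                       % augmented error e1(k), placeholder value
if abs(e1) > eps_f/G
    dw = beta/(gamma*c1^2) * h * e1;     % update branch of (10.27)
else
    dw = zeros(size(w));                 % dead zone: no update
end
w = w + dw;                              % w_hat(k) = w_hat(k-1) + dw(k)
```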

10.3.4 Stability Analysis

The discrete-time Lyapunov function is designed by [17] as

$$ V(k)=e_1^2(k)+\gamma {{\tilde{\boldsymbol{ w}}}^{\mathrm{ T}}}(k)\tilde{\boldsymbol{ w}} (k) $$
(10.28)

The first difference is

$$ \begin{aligned} \Delta{}V(k)&=V(k)-V(k-1) \\ &=e_1^2(k)-e_1^2(k-1)+\gamma \left( {{{\tilde{\boldsymbol{ w}}}^{\mathrm{ T}}}(k)+{{\tilde{\boldsymbol{ w}}}^{\mathrm{ T}}}(k-1)} \right)\left( {\tilde{\boldsymbol{ w}} (k)-\tilde{\boldsymbol{ w}} (k-1)} \right) \end{aligned} $$

The stability proof was given with the following three steps [17].

Firstly, using (10.26) for \( {e_1}(k-1), \) it follows that

$$ \begin{aligned} \Delta{}V(k)&=e_1^2(k)-\frac{{e_1^2(k)+{\beta^2}{{\left( {\tilde{f}\left( {\boldsymbol{ x}(k-1)} \right)-v(k)} \right)}^2}-2\beta \left( {\tilde{f}\left( {\boldsymbol{ x}(k-1)} \right)-v(k)} \right){e_1}(k)}}{{c_1^2}} \\ &\quad +\gamma \left( {{{\left( {\hat{\boldsymbol{ w}} (k)-{{\boldsymbol{ w}}^{*}}} \right)}^{\mathrm{ T}}}+{{\left( {\hat{\boldsymbol{ w}} (k-1)-{{\boldsymbol{ w}}^{*}}} \right)}^{\mathrm{ T}}}} \right)\left( {\left( {\hat{\boldsymbol{ w}} (k)-{{\boldsymbol{ w}}^{*}}} \right)-\left( {\hat{\boldsymbol{ w}} (k-1)-{{\boldsymbol{ w}}^{*}}} \right)} \right) \\ &=-{V_1}+\frac{{2\beta \left( {\tilde{f}\left( {\boldsymbol{ x}(k-1)} \right)-v(k)} \right){e_1}(k)}}{{c_1^2}}+\gamma \left( {\Delta{{\hat{\boldsymbol{ w}}}^{\mathrm{ T}}}(k)+2{{\tilde{\boldsymbol{ w}}}^{\mathrm{ T}}}(k-1)} \right)\Delta\hat{\boldsymbol{ w}} (k) \end{aligned} $$

where \( {V_1}=\displaystyle\frac{{e_1^2(k)\left( {1-c_1^2} \right)}}{{c_1^2}}+\frac{{{\beta^2}{{{\left( {\tilde{f}\left( {\boldsymbol{ x}(k-1)} \right)-v(k)} \right)}}^2}}}{{c_1^2}}\geq 0. \)

Secondly, substituting for \( \tilde{f}\left( {\boldsymbol{ x}(k-1)} \right) \) via (10.21) yields

$$ \begin{aligned} \Delta{}V(k)&=-{V_1}+\frac{{2\beta \left( {-{{\tilde{\boldsymbol{ w}}}^{\mathrm{ T}}}(k-1)\boldsymbol{ h}\left( {\boldsymbol{ x}(k-1)} \right)-{\Delta_f}\left( {\boldsymbol{ x}(k-1)} \right)-v(k)} \right){e_1}(k)}}{{c_1^2}}+\gamma \Delta{{\hat{\boldsymbol{ w}}}^{\mathrm{ T}}}(k)\Delta\hat{\boldsymbol{ w}} (k)+2\gamma {{\tilde{\boldsymbol{ w}}}^{\mathrm{ T}}}(k-1)\Delta\hat{\boldsymbol{ w}} (k) \\ &=-{V_1}+2{{\tilde{\boldsymbol{ w}}}^{\mathrm{ T}}}(k-1)\left( {\gamma \Delta\hat{\boldsymbol{ w}} (k)-\frac{\beta }{{c_1^2}}\boldsymbol{ h}\left( {\boldsymbol{ x}(k-1)} \right){e_1}(k)} \right)-\frac{{2\beta }}{{c_1^2}}\left( {{\Delta_f}\left( {\boldsymbol{ x}(k-1)} \right)+v(k)} \right){e_1}(k)+\gamma \Delta{{\hat{\boldsymbol{ w}}}^{\mathrm{ T}}}(k)\Delta\hat{\boldsymbol{ w}} (k) \end{aligned} $$

Thirdly, substituting the adaptive law (10.27) into the above expression, \( \Delta{}V(k) \) becomes

$$ \Delta{}V(k)=\left\{ {\begin{array}{ll} -{V_1}-\frac{{2\beta }}{{c_1^2}}\left( {{\Delta_f}\left( {\boldsymbol{ x}(k-1)} \right)+v(k)} \right){e_1}(k)+{{\left( {\frac{\beta }{{\sqrt{\gamma }c_1^2}}} \right)}^2}{{\boldsymbol{ h}}^{\mathrm{ T}}}\left( {\boldsymbol{ x}(k-1)} \right)\boldsymbol{ h}\left( {\boldsymbol{ x}(k-1)} \right)e_1^2(k), & \mathrm{ if}\ |{e_1}(k)|>{\epsilon_f}/G \\ -{V_1}-\frac{{2\beta }}{{c_1^2}}\left[ {{{\tilde{\boldsymbol{ w}}}^{\mathrm{ T}}}(k-1)\boldsymbol{ h}\left( {\boldsymbol{ x}(k-1)} \right)+v(k)+{\Delta_f}\left( {\boldsymbol{ x}(k-1)} \right)} \right]{e_1}(k), & \mathrm{ if}\ |{e_1}(k)|\leq {\epsilon_f}/G \end{array}} \right. $$
(10.29)

The auxiliary signal \( v(k) \) must also be designed so that \( {e_1}(k)\to 0 \) implies \( e(k)\to 0. \) The auxiliary signal is designed as [17]

$$ v(k)={v_1}(k)+{v_2}(k) $$
(10.30)

with \( {v_1}(k)=\frac{\beta }{{2\gamma c_1^2}}{{\boldsymbol{ h}}^{\mathrm{ T}}}\left( {\boldsymbol{ x}(k-1)} \right)\boldsymbol{ h}\left( {\boldsymbol{ x}(k-1)} \right){e_1}(k) \) and \( {v_2}(k)=G{e_1}(k). \)

If \( |{e_1}(k)|>{\epsilon_f}/G, \) substituting \( v(k) \) from (10.30) into (10.29), it follows that

$$ \begin{aligned} \Delta{}V(k)&=-{V_1}-\frac{{2\beta }}{{c_1^2}}\left( {{\Delta_f}\left( {\boldsymbol{ x}(k-1)} \right)+G{e_1}(k)} \right){e_1}(k) \\ &\leq -\frac{{2\beta }}{{c_1^2}}\left( {{\Delta_f}\left( {\boldsymbol{ x}(k-1)} \right)+G{e_1}(k)} \right){e_1}(k) \end{aligned} $$

Since \( |{\Delta_f}(\boldsymbol{ x})|<{\epsilon_f} \) and \( |{e_1}(k)|>{\epsilon_f}/G, \) we have \( |{e_1}(k)|>\frac{{|{\Delta_f}\left( {\boldsymbol{ x}(k-1)} \right)|}}{G} \) and \( e_1^2(k)>-\frac{{{\Delta_f}\left( {\boldsymbol{ x}(k-1)} \right){e_1}(k)}}{G}; \) thus \( \left( {{\Delta_f}\left( {\boldsymbol{ x}(k-1)} \right)+G{e_1}(k)} \right){e_1}(k)>0 \) and therefore \( \Delta{}V(k)<0. \)

If \( |{e_1}(k)|\leq {\epsilon_f}/G, \) the tracking performance is already satisfactory, and \( \Delta{}V(k) \) may take any value.

In the simulation, we give three remarks as follows:

Remark 1

From (10.25), we have \( {e_1}(k)=\beta \left( {e(k)-\frac{1}{{1+{c_1}{z^{-1 }}}}v(k)} \right), \) then \( {e_1}(k)\left( {1+{c_1}{z^{-1 }}} \right)=\beta \left( {e(k)\left( {1+{c_1}{z^{-1 }}} \right)-v(k)} \right); \) therefore,

$$ {e_1}(k)=-{c_1}{e_1}(k-1)+\beta \left( {e(k)+{c_1}e(k-1)-v(k)} \right) $$
(10.31)

Remark 2

From the Lyapunov analysis, as \( k\to \infty \) we have \( {e_1}(k)\to 0; \) from (10.30), \( v(k)\to 0; \) then from (10.31), \( e(k)+{c_1}e(k-1)\to 0, \) and considering \( |{c_1}|<1, \) we get \( e(k)\to 0. \)

Remark 3

Consider \( v(k) \) as a virtual variable. For (10.30), let \( {{v^{\prime}}_1}(k)=\frac{\beta }{{2\gamma c_1^2}}{{\boldsymbol{ h}}^{\mathrm{ T}}}\left( {\boldsymbol{ x}(k-1)} \right)\boldsymbol{ h}\left( {\boldsymbol{ x}(k-1)} \right); \) then \( v(k)=\left( {{{v^{\prime}}_1}(k)+G} \right){e_1}(k). \) Substituting \( v(k) \) into (10.31), we have \( {e_1}(k)=-{c_1}{e_1}(k-1)+\beta \left( {e(k)+{c_1}e\left( {k-1} \right)-\left( {{{v^{\prime}}_1}(k)+G} \right){e_1}(k)} \right), \) which gives

$$ {e_1}(k)=\frac{{-{c_1}{e_1}(k-1)+\beta \left( {e(k)+{c_1}e(k-1)} \right)}}{{1+\beta \left( {{{{v^{\prime}}}_1}(k)+G} \right)}}. $$
(10.32)
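
A hedged sketch of Remark 3 in code: \( {e_1}(k) \) is computed from (10.32) and then \( v(k) \) follows from (10.30); the signals and gains below are placeholder assumptions.

```matlab
% Compute e1(k) via (10.32) and the auxiliary signal v(k) via (10.30).
beta = 0.001; gamma = 0.001; c1 = -0.01; G = 50000;    % example gains
h = rand(9, 1);                        % h(x(k-1)), placeholder
e = 0.02; e_old = 0.03; e1_old = 0.01; % e(k), e(k-1), e1(k-1), placeholders
v1p = beta/(2*gamma*c1^2) * (h'*h);    % v1'(k)
e1  = (-c1*e1_old + beta*(e + c1*e_old)) / (1 + beta*(v1p + G));   % (10.32)
v   = (v1p + G) * e1;                  % v(k) = v1(k) + v2(k), cf. (10.30)
```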

10.3.5 Simulation Examples

10.3.5.1 First Example: Linear Discrete-Time System

Consider a linear discrete-time system as

$$ y(k)=0.5y(k-1)+u(k-1) $$

where \( f\left( {x(k-1)} \right)=0.5y\left( {k-1} \right). \)

We use an RBF network to approximate \( f\left( {x(k-1)} \right). \) For the RBF neural network, the structure is 1-9-1; from the \( f\left( {x(k-1)} \right) \) expression, only one input is chosen, namely \( y(k-1); \) the parameters of the Gaussian functions \( {{\boldsymbol{ c}}_i} \) and \( {b_i} \) are chosen as \( \left[ {\begin{array}{ccccc} -1 & -0.5 & 0 & 0.5 & 1 \end{array}} \right] \) and \( 15\ \left( {i=1,\ j=1,2,\ldots,5} \right); \) and the initial weight values are chosen as random values in the range \( (0,1). \) The initial value of the plant is set as zero. The reference signal is \( {y_{\mathrm{ d}}}(k)=\sin 2\pi t. \) Using the control law (10.22) with the adaptive law (10.27), \( {e_1}(k) \) is calculated by (10.32); the parameters are chosen as \( {c_1}=-0.01, \) \( \beta =0.001, \) \( \gamma =0.001, \) \( G=50000, \) \( {\epsilon_f}=0.003. \) The results are shown in Figs. 10.9, 10.10, and 10.11.
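
A rough, self-contained sketch of this simulation is given next, using the five centers listed above; the authoritative program is chap10_2.m in the Appendix, and the sampling period T and number of steps N are assumptions.

```matlab
% Adaptive RBF control of y(k) = 0.5*y(k-1) + u(k-1): a sketch of chap10_2.m.
clear; close all;
T = 0.001; N = 3000;                          % assumed sampling period / steps
c = [-1 -0.5 0 0.5 1]; b = 15;                % Gaussian centers and width
w = rand(length(c), 1);                       % initial weights in (0,1)
c1 = -0.01; beta = 0.001; gamma = 0.001; G = 50000; eps_f = 0.003;
y = 0; e_old = 0; e1_old = 0; h_old = zeros(length(c), 1);
Y = zeros(N,1); YD = zeros(N,1); U = zeros(N,1); F = zeros(N,1); FH = zeros(N,1);
for k = 1:N
    yd   = sin(2*pi*k*T);                     % reference y_d(k)
    ydk1 = sin(2*pi*(k+1)*T);                 % y_d(k+1)
    e    = y - yd;                            % tracking error e(k)
    h    = exp(-(y - c').^2 / (2*b^2));       % basis h(x(k)), input y(k)
    if k > 1                                  % adaptation uses h(x(k-1))
        v1p = beta/(2*gamma*c1^2) * (h_old'*h_old);          % v1'(k)
        e1  = (-c1*e1_old + beta*(e + c1*e_old)) ...
              / (1 + beta*(v1p + G));                        % (10.32)
        if abs(e1) > eps_f/G
            w = w + beta/(gamma*c1^2) * h_old * e1;          % (10.27)
        end
        e1_old = e1;
    end
    fh   = w' * h;                            % RBF estimate f_hat(x(k))
    u    = ydk1 - fh - c1*e;                  % control law (10.22)
    Y(k) = y; YD(k) = yd; U(k) = u; F(k) = 0.5*y; FH(k) = fh;
    y    = 0.5*y + u;                         % plant: y(k+1) = 0.5*y(k) + u(k)
    h_old = h; e_old = e;
end
figure; plot(1:N, YD, 'r', 1:N, Y, 'b');      % position tracking, cf. Fig. 10.9
figure; plot(1:N, U);                         % control input, cf. Fig. 10.10
figure; plot(1:N, F, 'r', 1:N, FH, 'b');      % f and its estimate, cf. Fig. 10.11
```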

Fig. 10.9 Position tracking

Fig. 10.10 Control input

Fig. 10.11 \( f\left( {x(k-1)} \right) \) and its estimation

The program of this example is chap10_2.m, which is given in the Appendix.

10.3.5.2 Second Example: Nonlinear Discrete-Time System

Consider a nonlinear discrete-time system as

$$ y(k)=\frac{{0.5y\left( {k-1} \right)\left( {1-y(k-1)} \right)}}{{1+\exp \left( {-0.25y(k-1)} \right)}}+u(k-1) $$

where \( f(x(k-1))=\displaystyle\frac{0.5y(k-1)(1-y(k-1)) }{{1+\exp (-0.25y(k-1))}}. \)

Firstly, we assume \( f\left( {x(k-1)} \right) \) is known and use the control law (10.17) with \( {c_1}=-0.01; \) the results are shown in Figs. 10.12 and 10.13.

Fig. 10.12 Position tracking

Fig. 10.13 Control input

Then we use an RBF network to approximate \( f\left( {x(k-1)} \right). \) For the RBF neural network, the structure is 1-9-1; from the \( f\left( {x(k-1)} \right) \) expression, only one input \( y(k-1) \) is chosen; the parameters of the Gaussian functions \( {{\boldsymbol{ c}}_i} \) and \( {b_i} \) are chosen as \( \left[ {\begin{array}{ccccccccc} -2 & -1.5 & -1.0 & -0.5 & 0 & 0.5 & 1.0 & 1.5 & 2 \end{array}} \right] \) and \( 15\ \left( {i=1,\ j=1,2,\ldots,9} \right), \) and the initial weight values are chosen as random values in the range \( (0,1). \) The initial value of the plant is set as zero. The reference signal is \( {y_{\mathrm{ d}}}(k)=\sin t. \) Using the control law (10.22) with the adaptive law (10.27), \( {e_1}(k) \) is calculated by (10.32); the parameters are chosen as \( {c_1}=-0.01, \) \( \beta =0.001, \) \( \gamma =0.001, \) \( G=50000, \) \( {\epsilon_f}=0.003. \) The results are shown in Figs. 10.14, 10.15, and 10.16.

Fig. 10.14 Position tracking

Fig. 10.15 Control input

Fig. 10.16 \( f\left( {x(k-1)} \right) \) and its estimation

The program of this example is chap10_3.m and chap10_4.m, which are given in the Appendix.

10.3.5.3 Third Example: Nonlinear Discrete-Time System

Consider a nonlinear discrete-time system as

$$ y(k)=f(x(k-1))+u(k-1) $$

where \( f\left( {x(k-1)} \right)=\frac{1.5y(k-1)y(k-2) }{{1+{y^2}(k-1)+{y^2}(k-2)}}+0.35\sin \left( {y(k-1)+y(k-2)} \right). \)

We use an RBF network to approximate \( f\left( {x(k-1)} \right). \) For the RBF neural network, the structure is 2-9-1; from the \( f\left( {x(k-1)} \right) \) expression, two inputs are chosen, namely \( y(k-1) \) and \( y(k-2); \) the parameters of the Gaussian functions \( {{\boldsymbol{ c}}_{ij}} \) and \( {b_j} \) are chosen as \( \left[ {\begin{array}{ccccccccc} -2 & -1.5 & -1.0 & -0.5 & 0 & 0.5 & 1.0 & 1.5 & 2 \\ -2 & -1.5 & -1.0 & -0.5 & 0 & 0.5 & 1.0 & 1.5 & 2 \end{array}} \right] \) and \( 15\ \left( {i=1,2,\ j=1,2,\ldots,9} \right); \) and the initial weight values are chosen as random values in the range \( (0,1). \) The initial value of the plant is set as zero. The reference signal is \( {y_{\mathrm{ d}}}(k)=\sin t. \) Using the control law (10.22) with the adaptive law (10.27), \( {e_1}(k) \) is calculated by (10.32); the parameters are chosen as \( {c_1}=-0.01, \) \( \beta =0.001, \) \( \gamma =0.001, \) \( G=50000, \) \( {\epsilon_f}=0.003. \) The results are shown in Figs. 10.17, 10.18, and 10.19.
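
For this two-input case, a minimal sketch of the basis-vector computation is shown below; the input and weight values are placeholder assumptions.

```matlab
% Two-input Gaussian basis for the third example: the centers form a 2 x 9
% matrix (one row per input) and b_j = 15 for every hidden neuron.
c = [-2 -1.5 -1 -0.5 0 0.5 1 1.5 2;
     -2 -1.5 -1 -0.5 0 0.5 1 1.5 2];
b = 15;
x = [0.3; -0.1];                          % [y(k-1); y(k-2)], placeholder values
h = exp(-sum((x - c).^2, 1)' / (2*b^2));  % 9 x 1 Gaussian basis vector
w = rand(9, 1);                           % placeholder weight vector
f_hat = w' * h;                           % RBF estimate f_hat(x(k-1)), cf. (10.19)
```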

Fig. 10.17 Position tracking

Fig. 10.18 Control input

Fig. 10.19 \( f\left( {x(k-1)} \right) \) and its estimation

The program of this example is chap10_5.m, which is given in the Appendix.