Abstract
This chapter introduces two kinds of adaptive neural network controllers for discrete-time nonlinear systems: a direct RBF controller and an indirect RBF controller. For both control laws, the adaptive laws are designed based on Lyapunov stability theory, so that closed-loop system stability can be achieved.
10.1 Introduction
The discrete-time implementation of controllers is important. There are two methods for designing a digital controller. One method, called emulation, is to design a controller based on the continuous-time system and then discretize the controller. The other method is to design the discrete-time controller directly from the discrete-time system. In this section, we consider the second approach to design the NN-based nonlinear controller.
Discrete-time adaptive control design is much more complicated than continuous-time adaptive control design, since the first differences of discrete-time Lyapunov functions tend to contain pure and coupled quadratic terms in the states and/or NN weights.
Many papers have been published on adaptive neural control for discrete-time nonlinear systems [1–15]. For example, a direct adaptive neural controller for second-order discrete-time nonlinear systems was proposed in [14], and an NN-based adaptive controller without off-line training was presented for a class of discrete-time multi-input multi-output (MIMO) nonlinear systems [15].
In this chapter, we introduce two typical examples of discrete neural network controller design, including analysis and simulation.
10.2 Direct RBF Control for a Class of Discrete-Time Nonlinear System
10.2.1 System Description
A nonlinear model is
Assumptions
1. The unknown nonlinear function \( f(\cdot ) \) is continuous and differentiable.
2. The number of hidden neurons is \( l. \)
3. The partial derivative satisfies \( {g_1}\geq \left| {\frac{{\partial f}}{{\partial u}}} \right|>\epsilon >0, \) where both \( \epsilon \) and \( {g_1} \) are positive constants.
Assume that \( {y_{\mathrm{ m}}}(k+1) \) is the system’s desired output at time \( k+1. \) In the ideal case with no disturbance, we can show that if the control input \( {u^{*}}(k) \) satisfies
then the system’s output tracking error will converge to zero.
Define the tracking error as \( e(k)=y(k)-{y_{\mathrm{ m}}}(k), \) and then the tracking error dynamic equation is given as follows:
10.2.2 Controller Design and Stability Analysis
For the system (10.1), an RBF neural network is designed to realize the controller directly [12, 16]. Figure 10.1 shows the closed-loop neural-based adaptive control scheme.
The ideal control input is \( {u^{*}}, \) and
On the compact set \( {\Omega_{\mathrm{ z}}}, \) the ideal neural network weights \( {w^{*}} \) and the approximation error are bounded by
where \( {w_{\mathrm{ m}}} \) and \( {\epsilon_{\mathrm{ l}}} \) are positive constants.
Define \( \hat{w}(k) \) as the actual neural network weight vector; the control law is designed directly using the RBF neural network as
where \( h(z) \) is the Gaussian function output vector and \( z(k) \) is the input of the RBF network.
By noticing (10.3), we have
where \( \tilde{w}(k)=\hat{w}(k)-{w^{*}} \) is the weight approximation error.
The weight update law was designed as follows [12, 16]:
where \( \gamma >0 \) and \( \sigma >0 \) are positive constants.
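Written out with the Appendix program chap10_1.m in mind, the controller computes \( u(k)={{\hat{w}}^{\mathrm{T}}}(k)h(z(k)) \) and updates the weights as \( \hat{w}(k+1)=\hat{w}(k)-\gamma \left( {h(z)e(k)+\sigma \hat{w}(k)} \right). \) A minimal Python sketch of one control step follows; the center matrix and input values are illustrative assumptions only:

```python
import numpy as np

def rbf_hidden(z, centers, b):
    """Gaussian hidden-layer outputs h_j = exp(-||z - c_j||^2 / (2 b^2))."""
    return np.exp(-np.sum((z[:, None] - centers) ** 2, axis=0) / (2 * b ** 2))

def direct_rbf_step(w, z, e, gamma=0.01, sigma=0.001, centers=None, b=2.0):
    """One step of the direct RBF controller:
    u(k) = w(k)^T h(z(k)),  w(k+1) = w(k) - gamma * (h * e(k) + sigma * w(k))."""
    h = rbf_hidden(z, centers, b)
    u = w @ h                                # control input
    w_next = w - gamma * (h * e + sigma * w) # sigma-modification update law
    return u, w_next
```

The \( \sigma \)-term leaks the weights toward zero, which keeps them bounded even when the tracking error does not vanish.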
Subtracting \( {w^{*}} \) from both sides of Eq. (10.6), we have
Mean Value Theorem
If a function \( f(x) \) is continuous on the closed interval \( [a,b] \) and differentiable on the open interval \( (a,b), \) then there exists a point c in \( (a,b) \) such that
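In symbols, the conclusion of the theorem is the standard identity

```latex
f(b) - f(a) = f'(c)\,(b - a), \qquad c \in (a, b).
```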
Using mean value theorem, let \( a={u^{*}}(k), \) \( b=u(k)={u^{*}}(k)+{{\tilde{w}}^{\mathrm{ T}}}(k)h(z)-{\epsilon_{\mathrm{ u}}}(z),c=\zeta, \) noticing Eqs. (10.2) and (10.6), then
where \( {f_u}=\frac{{\partial f}}{{\partial u}}\left| {_{{u=\zeta }}} \right., \) \( \zeta \in \left( {{u^{*}}(k),u(k)} \right). \)
Then we get
and
The stability analysis of the closed-loop system is given by Zhang et al. [16] as follows.
Firstly the Lyapunov function is designed as
Then we have
Since
we obtain
Noticing Assumption 3, from \( 0\ <\ \epsilon\ <\ {f_u}\ \leq\ {g_1}, \) we can derive that
where \( {k_0} \) is a positive number. Thus, we have
where \( \beta \) is a positive number given by
Choosing positive constants \( {k_0}, \) \( \gamma, \) and \( \sigma, \) these constants must satisfy the following inequalities: \( \frac{1}{{{g_1}}}-\frac{1}{{{k_0}}}\geq 0,\quad \frac{1}{{{g_1}}}-(1+\sigma )l\gamma -\frac{1}{{{k_0}}}\geq 0,\quad (l+\sigma )\gamma -1\leq 0, \) that is,
Then we have \( \Delta{}J(k)\leq 0 \) once \( {e^2}(k)\geq \beta. \) This states that for all \( k\geq 0, \) \( J(k) \) is bounded because
Define the compact set \( {\Omega_e}=\left\{ {e\,|\,{e^2}\leq \beta } \right\}; \) then the tracking error \( e(k) \) will converge to \( {\Omega_e} \) whenever \( e(k) \) is outside \( {\Omega_e}. \)
10.2.3 Simulation Examples
10.2.3.1 First Example: For Linear Plant
Consider a linear discrete-time system as
Its model is described as the following form
In the program, we set \( M=1 \) for the linear plant. Since \( {g_1}\geq \frac{{\partial f}}{{\partial u}}=1, \) from simulation tests we can set \( {g_1}=5, \) and from (10.13) we can choose \( {k_0}=10. \)
From (10.14), we have \( 0<(1+\sigma )l\gamma \leq \frac{1}{5}-\frac{1}{10}=\frac{1}{10}=0.10; \) from (10.15), we have \( 0<(l+\sigma )\gamma \leq 1. \) If we choose \( l=9, \) the conditions become \( 0<9(1+\sigma )\gamma \leq 0.10 \) and \( 0<(9+\sigma )\gamma \leq 1. \)
To satisfy (10.14) and (10.15), we can choose \( \gamma =0.01 \) and \( \sigma =0.001. \)
For the RBF neural network, the structure is 2-9-1. The input is chosen as \( \boldsymbol{ z}(k)={{\left[ {{x_1}(k)\quad {x_2}(k)} \right]}^{\mathrm{ T}}}, \) the parameters of the Gaussian functions \( {{\boldsymbol{ c}}_{i}} \) and \( {b_i} \) are chosen as \( [\,-2 \ \ -1.5 \ \ -1.0 \ \ -0.5 \ \ 0 \ \ 0.5 \ \ 1.0 \ \ 1.5 \ \ 2\,] \) and 2.0, and the initial weight values are chosen as random values in the range \( (0,1). \) The initial value of the plant is \( [\,0 \ \ 0\,]. \) The reference signal is \( {y_{\mathrm{ m}}}(k)=\sin \left( {\frac{\pi }{1000 }k} \right). \) The simulation results are shown in Figs. 10.2, 10.3, and 10.4.
The program of this example is chap10_1.m (M = 1), which is given in the Appendix.
10.2.3.2 Second Example: For Nonlinear Plant
Consider a nonlinear discrete-time system as
Its model is described as the following form
In the program, we set \( M=2 \) for the nonlinear plant. Since \( {g_1}\geq \frac{{\partial f}}{{\partial u}}=1+0.3{u^2}(k), \) from simulation tests we can set \( {g_1}=5, \) and from (10.13) we can choose \( {k_0}=10. \)
From (10.14), we have \( 0<(1+\sigma )l\gamma \leq \frac{1}{5}-\frac{1}{10}=\frac{1}{10}=0.10; \) from (10.15), we have \( 0<(l+\sigma )\gamma \leq 1. \) If we choose \( l=9, \) the conditions become \( 0<9(1+\sigma )\gamma \leq 0.10 \) and \( 0<(9+\sigma )\gamma \leq 1. \)
To satisfy (10.14) and (10.15), we can choose \( \gamma =0.01 \) and \( \sigma =0.001. \)
For the RBF neural network, the structure is 2-9-1. The input is chosen as \( \boldsymbol{ z}(k)={{\left[ {{x_1}(k)\quad {x_2}(k)} \right]}^{\mathrm{ T}}}, \) the parameters of the Gaussian functions \( {{\boldsymbol{ c}}_{i}} \) and \( {b_i} \) are chosen as \( [\,-2 \ \ -1.5 \ \ -1.0 \ \ -0.5 \ \ 0 \ \ 0.5 \ \ 1.0 \ \ 1.5 \ \ 2\,] \) and 2.0, and the initial weight values are chosen as random values in the range \( (0,1). \) The initial value of the plant is \( [\,0 \ \ 0\,]. \) The reference signal is \( {y_{\mathrm{ m}}}(k)=\sin \left( {\frac{\pi }{1000 }k} \right). \) The simulation results are shown in Figs. 10.5, 10.6, and 10.7.
The program of this example is chap10_1.m (M = 2), which is given in the Appendix.
10.3 Adaptive RBF Control for a Class of Discrete-Time Nonlinear System
10.3.1 System Description
Consider a nonlinear discrete system as follows:
where \( \boldsymbol{ x}(k)={{\left[ {y(k)\;y(k-1)\ldots y(k-n+1)} \right]}^{\mathrm{ T}}} \) is the state vector, \( u(k) \) is the control input, and \( y(k) \) is the plant output. The nonlinear smooth function \( f:{R^n}\to R \) is assumed unknown.
10.3.2 Traditional Controller Design
The tracking error \( e(k) \) is defined as \( e(k)=y(k)-{y_{\mathrm{ d}}}(k). \) If \( f\left( {\boldsymbol{ x}(k)} \right) \) were known, a feedback linearization-type control law could be designed as
Substituting (10.17) into (10.16), we obtain the asymptotically convergent error dynamics as
where \( |{c_1}|<1. \)
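Equation (10.18) can be checked numerically. Assuming, as in the Appendix programs, a plant of the form \( y(k+1)=f\left( {\boldsymbol{ x}(k)} \right)+u(k), \) applying (10.17) gives \( e(k+1)=-{c_1}e(k), \) so the error decays geometrically when \( |{c_1}|<1. \) A short Python sketch follows; the plant \( f \) mirrors the nonlinear function of chap10_3.m, and the reference and sampling time are illustrative choices:

```python
import math

def f(y_prev):
    # plant nonlinearity, mirroring chap10_3.m (illustrative stand-in)
    return 0.5 * y_prev * (1 - y_prev) / (1 + math.exp(-0.25 * y_prev))

c1, ts = -0.01, 0.001
y = 0.0
e_hist = []
for k in range(1, 50):
    e = y - math.sin(k * ts)                     # tracking error e(k)
    u = math.sin((k + 1) * ts) - f(y) - c1 * e   # feedback linearization law (10.17)
    y = f(y) + u                                 # plant y(k+1) = f(x(k)) + u(k)
    e_hist.append(e)
```

After a few steps \( |e(k)| \) has shrunk by the factor \( |{c_1}{|^k}, \) confirming the geometric convergence claimed by (10.18).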
10.3.3 Adaptive Neural Network Controller Design
If \( f(\boldsymbol{ x}(k)) \) is unknown, an RBF neural network can be used to approximate \( f(\boldsymbol{ x}(k)). \) The network output is given as
where \( \hat{\boldsymbol{ w}} (k) \) denotes the network output weight vector and \( \boldsymbol{ h}\left( {\boldsymbol{ x}(k)} \right) \) denotes the vector of Gaussian basis functions.
Given any arbitrary nonzero approximation error bound \( {\epsilon_f}, \) there exists an optimal weight vector \( {{\boldsymbol{ w}}^{*}} \) such that
where \( {\Delta_f}(\boldsymbol{ x}) \) denotes the optimal network approximation error, and \( |{\Delta_f}(\boldsymbol{ x})|<{\epsilon_f}. \)
Then we can get the general network approximation error as
where \( \tilde{\boldsymbol{ w}} (k)=\hat{\boldsymbol{ w}} (k)-{{\boldsymbol{ w}}^{*}}. \)
The control law with RBF approximation was proposed in [17] as follows
Figure 10.8 shows the closed-loop neural-based adaptive control scheme.
Substituting (10.22) into (10.16) yields
Thus,
Equation (10.23) can also be expressed as
where \( \Gamma\left( {{z^{-1 }}} \right)=1+{c_1}{z^{-1 }}, \) \( {z^{-1 }} \) denotes the discrete-time delay operator.
Define a new augmented error as [17]
where \( \beta\ >\ 0. \)
Substituting (10.24) into (10.25) yields
which leads to the relation as
The adaptive law proposed in [17] is
where \( \Delta\hat{\boldsymbol{ w}} (k)=\hat{\boldsymbol{ w}} (k)-\hat{\boldsymbol{ w}} (k-1), \) and \( \gamma \) and G are strictly positive constants.
10.3.4 Stability Analysis
The discrete-time Lyapunov function is designed by [17] as
The first difference is
The stability proof was given with the following three steps [17].
Firstly, using (10.26) for \( {e_1}(k-1), \) it follows that
where \( {V_1}=\displaystyle\frac{{e_1^2(k)\left( {1-c_1^2} \right)}}{{c_1^2}}+\frac{{{\beta^2}{{{\left( {\tilde{f}\left( {\boldsymbol{ x}(k-1)} \right)-v(k)} \right)}}^2}}}{{c_1^2}}\geq 0. \)
Secondly, substituting for \( \tilde{f}\left( {\boldsymbol{ x}(k-1)} \right) \) via (10.21) yields
Thirdly, substituting the adaptive law (10.27) into the above, \( \Delta{}V(k) \) becomes
The auxiliary signal \( v(k) \) must also be designed so that \( {e_1}(k)\to 0 \) implies \( e(k)\to 0. \) The auxiliary signal is designed as [17]
with \( {v_1}(k)=\frac{\beta }{{2\gamma c_1^2}}{{\boldsymbol{ h}}^{\mathrm{ T}}}\left( {\boldsymbol{ x}(k-1)} \right)\boldsymbol{ h}\left( {\boldsymbol{ x}(k-1)} \right){e_1}(k) \) and \( {v_2}(k)=G{e_1}(k). \)
If \( |{e_1}(k)|\ >\ {\epsilon_f}/G, \) substituting for \( v(k) \) in (10.30) to (10.29), it follows that
Since \( |{\Delta_f}(\boldsymbol{ x})|<{\epsilon_f} \) and \( |{e_1}(k)|>{\epsilon_f}/G, \) then \( |{e_1}(k)|>\frac{{|{\Delta_f}\left( {\boldsymbol{ x}(k-1)} \right)|}}{G} \) and \( e_1^2(k)>-\frac{{{\Delta_f}\left( {\boldsymbol{ x}(k-1)} \right){e_1}(k)}}{G}, \) thus \( \left( {{\Delta_f}\left( {\boldsymbol{ x}(k-1)} \right)+G{e_1}(k)} \right){e_1}(k)>0, \) then \( \Delta{}V(k)<0. \)
If \( |{e_1}(k)|\leq {\epsilon_f}/G, \) the tracking performance is already satisfactory, and \( \Delta{}V(k) \) may take any sign.
In the simulation, we give three remarks as follows:
Remark 1
From (10.25), we have \( {e_1}(k)=\beta \left( {e(k)-\frac{1}{{1+{c_1}{z^{-1 }}}}v(k)} \right), \) then \( {e_1}(k)\left( {1+{c_1}{z^{-1 }}} \right)=\beta \left( {e(k)\left( {1+{c_1}{z^{-1 }}} \right)-v(k)} \right); \) therefore,
Remark 2
From the Lyapunov analysis, as \( k\to \infty, \) \( {e_1}(k)\to 0; \) from (10.30), we then have \( v(k)\to 0, \) and from (10.31), \( e(k)+{c_1}e(k-1)\to 0. \) Considering \( |{c_1}|<1, \) we get \( e(k)\to 0. \)
Remark 3
Consider \( v(k) \) as a virtual variable, for (10.30), let \( {{v^{\prime}}_1}(k)=\frac{\beta }{{2\gamma c_1^2}}{{\boldsymbol{ h}}^{\mathrm{ T}}}\left( {\boldsymbol{ x}(k-1)} \right)\boldsymbol{ h}\left( {\boldsymbol{ x}(k-1)} \right), \) then we get \( v(k)=\left( {{{{v^{\prime}}}_1}(k)+G} \right){e_1}(k); \) substituting \( v(k) \) into (10.31), we have \( {e_1}(k)=-{c_1}{e_1}(k-1)+\beta \left( {e(k)+{c_1}e\left( {k-1} \right)-\left( {{{{v^{\prime}}}_1}(k)+G} \right){e_1}(k)} \right), \) then
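Putting (10.27), (10.30), and the solved form of \( {e_1}(k) \) from Remark 3 together, one step of the indirect controller can be sketched in Python. The numbers mirror the Appendix program chap10_2.m; the function itself is an illustrative stand-in, not the authors' code:

```python
import numpy as np

def adaptive_rbf_step(w, e, e_prev, e1_prev, y_prev,
                      c1=-0.01, beta=0.001, gamma=0.001, G=50000.0,
                      eps_f=0.003,
                      centers=np.array([-1.0, -0.5, 0.0, 0.5, 1.0]), b=15.0):
    """One step of the indirect adaptive RBF controller of Sect. 10.3."""
    h = np.exp(-(y_prev - centers) ** 2 / (2 * b ** 2))   # Gaussian basis h(x(k-1))
    v1_bar = beta / (2 * gamma * c1 ** 2) * (h @ h)       # v1'(k) from Remark 3
    # augmented error e1(k), solved explicitly as in (10.32)
    e1 = (-c1 * e1_prev + beta * (e + c1 * e_prev)) / (1 + beta * (v1_bar + G))
    if abs(e1) > eps_f / G:                               # dead-zone adaptation (10.27)
        w = w + beta / (gamma * c1 ** 2) * h * e1
    f_hat = w @ h                                         # network estimate of f
    return w, e1, f_hat
```

Inside the dead zone \( |{e_1}(k)|\leq {\epsilon_f}/G \) the weights are frozen, which prevents parameter drift driven by the irreducible approximation error.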
10.3.5 Simulation Examples
10.3.5.1 First Example: Linear Discrete-Time System
Consider a linear discrete-time system as
where \( f\left( {x(k-1)} \right)=0.5y\left( {k-1} \right). \)
We use RBF to approximate \( f\left( {x(k-1)} \right). \) For the RBF neural network, the structure is 1-5-1 (as in the Appendix program chap10_2.m); from the \( f\left( {x(k-1)} \right) \) expression, only one input is chosen as \( y(k-1). \) The parameters of the Gaussian functions \( {{\boldsymbol{ c}}_{i}} \) and \( {b_i} \) are chosen as \( [\,-1 \ \ -0.5 \ \ 0 \ \ 0.5 \ \ 1\,] \) and \( 15\ \left( {i=1,\ j=1,2,\ldots,5} \right), \) and the initial weight values are chosen as random values in the range \( (0,1). \) The initial value of the plant is set as zero. The reference signal is \( {y_{\mathrm{ d}}}(k)=\sin 2\pi t, \) where \( t=k{t_{\mathrm{ s}}} \) with sampling time \( {t_{\mathrm{ s}}}=0.001\,\mathrm{s}. \) Using the control law (10.22) with adaptive law (10.27), \( {e_1}(k) \) is calculated by (10.32); the parameters are chosen as \( {c_1}=-0.01, \) \( \beta =0.001, \) \( \gamma =0.001, \) \( G=50000, \) \( {\epsilon_f}=0.003. \) The results are shown in Figs. 10.9, 10.10, and 10.11.
The program of this example is chap10_2.m, which is given in the Appendix.
10.3.5.2 Second Example: Nonlinear Discrete-Time System
Consider a nonlinear discrete-time system as
where \( f(x(k-1))=\displaystyle\frac{0.5y(k-1)(1-y(k-1)) }{{1+\exp (-0.25y(k-1))}}. \)
Firstly, we assume \( f\left( {x(k-1)} \right) \) is known, use the control law (10.17), and set \( {c_1}=-0.01; \) the results are shown in Figs. 10.12 and 10.13.
Then we use RBF to approximate \( f\left( {x(k-1)} \right). \) For the RBF neural network, the structure is 1-9-1; from the \( f\left( {x(k-1)} \right) \) expression, only one input \( y(k-1) \) is chosen. The parameters of the Gaussian functions \( {{\boldsymbol{ c}}_{i}} \) and \( {b_i} \) are chosen as \( [\,-2 \ \ -1.5 \ \ -1.0 \ \ -0.5 \ \ 0 \ \ 0.5 \ \ 1.0 \ \ 1.5 \ \ 2\,] \) and \( 15\ \left( {i=1,\ j=1,2,\ldots,9} \right), \) and the initial weight values are chosen as random values in the range \( (0,1). \) The initial value of the plant is set as zero. The reference signal is \( {y_{\mathrm{ d}}}(k)=\sin t. \) Using the control law (10.22) with adaptive law (10.27), \( {e_1}(k) \) is calculated by (10.32); the parameters are chosen as \( {c_1}=-0.01, \) \( \beta =0.001, \) \( \gamma =0.001, \) \( G=50000, \) \( {\epsilon_f}=0.003. \) The results are shown in Figs. 10.14, 10.15, and 10.16.
The program of this example is chap10_3.m and chap10_4.m, which are given in the Appendix.
10.3.5.3 Third Example: Nonlinear Discrete-Time System
Consider a nonlinear discrete-time system as
where \( f\left( {x(k-1)} \right)=\frac{1.5y(k-1)y(k-2) }{{1+{y^2}(k-1)+{y^2}(k-2)}}+0.35\sin \left( {y(k-1)+y(k-2)} \right). \)
Then we use RBF to approximate \( f\left( {x(k-1)} \right). \) For the RBF neural network, the structure is 2-9-1; from the \( f\left( {x(k-1)} \right) \) expression, two inputs are chosen as \( y(k-1) \) and \( y(k-2). \) The parameters of the Gaussian functions \( {{\boldsymbol{ c}}_{ij}} \) and \( {b_j} \) are chosen as \( \left[ \begin{array}{ccccccccc} -2 & -1.5 & -1.0 & -0.5 & 0 & 0.5 & 1.0 & 1.5 & 2 \\ -2 & -1.5 & -1.0 & -0.5 & 0 & 0.5 & 1.0 & 1.5 & 2 \end{array} \right] \) and \( 15\ \left( {i=1,2,\ j=1,2,\ldots,9} \right), \) and the initial weight values are chosen as random values in the range \( (0,1). \) The initial value of the plant is set as zero. The reference signal is \( {y_{\mathrm{ d}}}(k)=\sin t. \) Using the control law (10.22) with adaptive law (10.27), \( {e_1}(k) \) is calculated by (10.32); the parameters are chosen as \( {c_1}=-0.01, \) \( \beta =0.001, \) \( \gamma =0.001, \) \( G=50000, \) \( {\epsilon_f}=0.003. \) The results are shown in Figs. 10.17, 10.18, and 10.19.
The program of this example is chap10_5.m, which is given in the Appendix.
References
1. Jagannathan S, Lewis FL (1994) Discrete-time neural net controller with guaranteed performance. In: Proceedings of the American control conference, pp 3334–3339
2. Ge SS, Li GY, Lee TH (2003) Adaptive NN control for a class of strict-feedback discrete-time nonlinear systems. Automatica 39(5):807–819
3. Yang C, Li Y, Ge SS, Lee TH (2010) Adaptive control of a class of discrete-time MIMO nonlinear systems with uncertain couplings. Int J Control 83(10):2120–2133
4. Ge SS, Yang C, Dai S, Jiao Z, Lee TH (2009) Robust adaptive control of a class of nonlinear strict-feedback discrete-time systems with exact output tracking. Automatica 45(11):2537–2545
5. Yang C, Ge SS, Lee TH (2009) Output feedback adaptive control of a class of nonlinear discrete-time systems with unknown control directions. Automatica 45(1):270–276
6. Yang C, Ge SS, Xiang C, Chai T, Lee TH (2008) Output feedback NN control for two classes of discrete-time systems with unknown control directions in a unified approach. IEEE Trans Neural Netw 19(11):1873–1886
7. Ge SS, Yang C, Lee TH (2008) Adaptive robust control of a class of nonlinear strict-feedback discrete-time systems with unknown control directions. Syst Control Lett 57:888–895
8. Ge SS, Yang C, Lee TH (2008) Adaptive predictive control using neural network for a class of pure-feedback systems in discrete time. IEEE Trans Neural Netw 19(9):1599–1614
9. Zhang J, Ge SS, Lee TH (2005) Output feedback control of a class of discrete MIMO nonlinear systems with triangular form inputs. IEEE Trans Neural Netw 16(6):1491–1503
10. Ge SS, Zhang J, Lee TH (2004) State feedback NN control of a class of discrete MIMO nonlinear systems with disturbances. IEEE Trans Syst Man Cybern Part B Cybern 34(4):1630–1645
11. Ge SS, Li Y, Zhang J, Lee TH (2004) Direct adaptive control for a class of MIMO nonlinear systems using neural networks. IEEE Trans Autom Control 49(11):2001–2006
12. Ge SS, Zhang J, Lee TH (2004) Adaptive MNN control for a class of non-affine NARMAX systems with disturbances. Syst Control Lett 53:1–12
13. Ge SS, Lee TH, Li GY, Zhang J (2003) Adaptive NN control for a class of discrete-time nonlinear systems. Int J Control 76(4):334–354
14. Lee S (2001) Neural network based adaptive control and its applications to aerial vehicles. Ph.D. dissertation, School of Aerospace Engineering, Georgia Institute of Technology, Atlanta, GA
15. Shin DH, Kim Y (2006) Nonlinear discrete-time reconfigurable flight control law using neural networks. IEEE Trans Control Syst Technol 14(3):408–422
16. Zhang J, Ge SS, Lee TH (2002) Direct RBF neural network control of a class of discrete-time non-affine nonlinear systems. In: Proceedings of the American control conference, pp 424–429
17. Fabri SG, Kadirkamanathan V (2001) Functional adaptive control: an intelligent systems approach. Springer, New York
Appendix
10.1.1 Programs for Sect. 10.2.3
Simulation program: chap10_1.m
%Discrete neural controller
clear all;
close all;
L=9; %Hidden neural nets
c=[-2 -1.5 -1 -0.5 0 0.5 1 1.5 2;
-2 -1.5 -1 -0.5 0 0.5 1 1.5 2];
b=2;
w=rand(L,1);
w_1=w;
u_1=0;
x1_1=0;x1_2=0;
x2_1=0;
z=[0,0]';
Gama=0.01;rou=0.001;
L*(1+rou)*Gama %<=1/g1-1/k0
(L+rou)*Gama %<=1.0
for k=1:1:10000
    time(k)=k;
    ym(k)=sin(pi/1000*k);
    y(k)=x1_1;
    e(k)=y(k)-ym(k);
    M=2; %M=1: linear plant, M=2: nonlinear plant
    if M==1 %Linear model
        x1(k)=x2_1;
        x2(k)=u_1;
    elseif M==2 %Nonlinear model
        x1(k)=x2_1;
        x2(k)=(x1(k)*x2_1*(x1(k)+2.5))/(1+x1(k)^2+x2_1^2)+u_1+0.1*u_1^3;
    end
    z(1)=x1(k);z(2)=x2(k);
    for j=1:1:L
        h(j)=exp(-norm(z-c(:,j))^2/(2*b^2));
    end
    w=w_1-Gama*(h'*e(k)+rou*w_1);
    wn(k)=norm(w);
    u(k)=w'*h';
    %u(k)=0.20*(ym(k)-x1(k)); %P control
    x1_2=x1_1;
    x1_1=x1(k);
    x2_1=x2(k);
    w_1=w;
    u_1=u(k);
end
figure(1);
plot(time,ym,'r',time,x1,'k:','linewidth',2);
xlabel('k');ylabel('ym,y');
legend('Ideal position signal','Position tracking');
figure(2);
plot(time,u,'r','linewidth',2);
xlabel('k');ylabel('Control input');
figure(3);
plot(time,wn,'r','linewidth',2);
xlabel('k');ylabel('Weight Norm');
10.1.2 Programs for Sect. 10.3.5.1
Simulation program: chap10_2.m
%Discrete RBF controller
clear all;
close all;
ts=0.001;
c1=-0.01;
beta=0.001;
epcf=0.003;
gama=0.001;
G=50000;
b=15;
c=[-1 -0.5 0 0.5 1];
w=rands(5,1);
w_1=w;
u_1=0;
y_1=0;
e1_1=0;
e_1=0;
fx_1=0;
for k=1:1:2000
    time(k)=k*ts;
    yd(k)=sin(2*pi*k*ts);
    yd1(k)=sin(2*pi*(k+1)*ts);
    %Linear plant
    fx(k)=0.5*y_1;
    y(k)=fx_1+u_1;
    e(k)=y(k)-yd(k);
    x(1)=y_1;
    for j=1:1:5
        h(j)=exp(-norm(x-c(:,j))^2/(2*b^2));
    end
    v1_bar(k)=beta/(2*gama*c1^2)*h*h';
    e1(k)=(-c1*e1_1+beta*(e(k)+c1*e_1))/(1+beta*(v1_bar(k)+G));
    if abs(e1(k))>epcf/G
        w=w_1+beta/(gama*c1^2)*h'*e1(k);
    elseif abs(e1(k))<=epcf/G
        w=w_1;
    end
    fnn(k)=w'*h';
    u(k)=yd1(k)-fnn(k)-c1*e(k);
    %u(k)=yd1(k)-fx(k)-c1*e(k); %With precise fx
    fx_1=fx(k);
    y_1=y(k);
    w_1=w;
    u_1=u(k);
    e1_1=e1(k);
    e_1=e(k);
end
figure(1);
plot(time,yd,'r',time,y,'k:','linewidth',2);
xlabel('time(s)');ylabel('yd,y');
legend('Ideal position signal','Position tracking');
figure(2);
plot(time,u,'r','linewidth',2);
xlabel('time(s)');ylabel('Control input');
figure(3);
plot(time,fx,'r',time,fnn,'k:','linewidth',2);
xlabel('time(s)');ylabel('fx and fx estimation');
legend('Ideal fx','fx estimation');
10.1.3 Programs for Sect. 10.3.5.2
Simulation program with known f(x(k − 1)): chap10_3.m
%Discrete controller
clear all;
close all;
ts=0.001;
c1=-0.01;
u_1=0;y_1=0;
fx_1=0;
for k=1:1:20000
    time(k)=k*ts;
    yd(k)=sin(k*ts);
    yd1=sin((k+1)*ts);
    %Nonlinear plant
    fx(k)=0.5*y_1*(1-y_1)/(1+exp(-0.25*y_1));
    y(k)=fx_1+u_1;
    e(k)=y(k)-yd(k);
    u(k)=yd1-fx(k)-c1*e(k);
    y_1=y(k);
    u_1=u(k);
    fx_1=fx(k);
end
figure(1);
plot(time,yd,'r',time,y,'k:','linewidth',2);
xlabel('time(s)');ylabel('yd,y');
legend('Ideal position signal','Position tracking');
figure(2);
plot(time,u,'r','linewidth',2);
xlabel('time(s)');ylabel('Control input');
Simulation program with unknown f(x(k − 1)): chap10_4.m
%Discrete RBF controller
clear all;
close all;
ts=0.001;
c1=-0.01;
beta=0.001;
epcf=0.003;
gama=0.001;
G=50000;
b=15;
c=[-2 -1.5 -1 -0.5 0 0.5 1 1.5 2];
w=rands(9,1);
w_1=w;
u_1=0;
y_1=0;
e1_1=0;
e_1=0;
fx_1=0;
for k=1:1:10000
    time(k)=k*ts;
    yd(k)=sin(k*ts);
    yd1(k)=sin((k+1)*ts);
    %Nonlinear plant
    fx(k)=0.5*y_1*(1-y_1)/(1+exp(-0.25*y_1));
    y(k)=fx_1+u_1;
    e(k)=y(k)-yd(k);
    x(1)=y_1;
    for j=1:1:9
        h(j)=exp(-norm(x-c(:,j))^2/(2*b^2));
    end
    v1_bar(k)=beta/(2*gama*c1^2)*h*h';
    e1(k)=(-c1*e1_1+beta*(e(k)+c1*e_1))/(1+beta*(v1_bar(k)+G));
    if abs(e1(k))>epcf/G
        w=w_1+beta/(gama*c1^2)*h'*e1(k);
    elseif abs(e1(k))<=epcf/G
        w=w_1;
    end
    fnn(k)=w'*h';
    u(k)=yd1(k)-fnn(k)-c1*e(k);
    %u(k)=yd1(k)-fx(k)-c1*e(k); %With precise fx
    fx_1=fx(k);
    y_1=y(k);
    w_1=w;
    u_1=u(k);
    e1_1=e1(k);
    e_1=e(k);
end
figure(1);
plot(time,yd,'r',time,y,'k:','linewidth',2);
xlabel('time(s)');ylabel('yd,y');
legend('Ideal position signal','Position tracking');
figure(2);
plot(time,u,'r','linewidth',2);
xlabel('time(s)');ylabel('Control input');
figure(3);
plot(time,fx,'r',time,fnn,'k:','linewidth',2);
xlabel('time(s)');ylabel('fx and fx estimation');
10.1.4 Programs for Sect. 10.3.5.3
Simulation program: chap10_5.m
%Discrete RBF controller
clear all;
close all;
ts=0.001;
c1=-0.01;
beta=0.001;
epcf=0.003;
gama=0.001;
G=50000;
b=15;
c=[-2 -1.5 -1 -0.5 0 0.5 1 1.5 2;
-2 -1.5 -1 -0.5 0 0.5 1 1.5 2];
w=rands(9,1);
w_1=w;
u_1=0;y_1=0;y_2=0;
e1_1=0;e_1=0;
x=[0 0]';
fx_1=0;
for k=1:1:10000
    time(k)=k*ts;
    yd(k)=sin(k*ts);
    yd1(k)=sin((k+1)*ts);
    %Nonlinear model
    fx(k)=1.5*y_1*y_2/(1+y_1^2+y_2^2)+0.35*sin(y_1+y_2);
    y(k)=fx_1+u_1;
    e(k)=y(k)-yd(k);
    x(1)=y_1;x(2)=y_2;
    for j=1:1:9
        h(j)=exp(-norm(x-c(:,j))^2/(2*b^2));
    end
    v1_bar(k)=beta/(2*gama*c1^2)*h*h';
    e1(k)=(-c1*e1_1+beta*(e(k)+c1*e_1))/(1+beta*(v1_bar(k)+G));
    if abs(e1(k))>epcf/G
        w=w_1+beta/(gama*c1^2)*h'*e1(k);
    elseif abs(e1(k))<=epcf/G
        w=w_1;
    end
    fnn(k)=w'*h';
    u(k)=yd1(k)-fnn(k)-c1*e(k);
    %u(k)=yd1(k)-fx(k)-c1*e(k); %With precise fx
    fx_1=fx(k);
    y_2=y_1;
    y_1=y(k);
    w_1=w;
    u_1=u(k);
    e1_1=e1(k);
    e_1=e(k);
end
figure(1);
plot(time,yd,'r',time,y,'k:','linewidth',2);
xlabel('time(s)');ylabel('yd,y');
legend('Ideal position signal','Position tracking');
figure(2);
plot(time,u,'r','linewidth',2);
xlabel('time(s)');ylabel('Control input');
figure(3);
plot(time,fx,'r',time,fnn,'k:','linewidth',2);
xlabel('time(s)');ylabel('fx and fx estimation');
Copyright information
© 2013 Tsinghua University Press, Beijing and Springer-Verlag Berlin Heidelberg
Cite this chapter
Liu, J. (2013). Discrete Neural Network Control. In: Radial Basis Function (RBF) Neural Network Control for Mechanical Systems. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-34816-7_10
Print ISBN: 978-3-642-34815-0
Online ISBN: 978-3-642-34816-7