Abstract
To improve the robustness of the traditional inverse system method, an internal model control based on a novel least squares support vector machine (LS-SVM) is proposed. The novel LS-SVM treats general errors, which include noise in both the input and output variables, as empirical errors. Data from the original MIMO discrete system are used to approximate its inverse model with the novel LS-SVM. The inverse model is cascaded with the original system to form a decoupled pseudo-linear system, and the internal model control strategy is then applied to the pseudo-linear system to achieve effective control. Simulation validates that the novel LS-SVM is effective for inverse system identification and shows that the internal model control of nonlinear discrete systems is more robust to interference and parameter variation than the open-loop control based only on the inverse system.
1 Introduction
Internal model control has been extensively studied for linear systems and has shown good robustness against disturbances. Nonlinear systems are widespread in industrial processes, and the development of internal model control for nonlinear models has already received great attention [1, 2]. For nonlinear systems, whether continuous or discrete, it is very difficult to establish internal models because perfect mathematical models of the original systems are unavailable. Internal model control cannot be implemented without an inverse model, which hinders the application of the internal model control method to control problems of nonlinear systems [3]. Hence, it is pressing to find an approach that obtains inverse functions effectively from the information and knowledge of the original systems.
In recent years, the emergence of intelligent learning algorithms has encouraged the development of nonlinear control without precise process models. Neural networks, as early algorithms, have been discussed in the internal model control of nonlinear systems and have played an important role [4, 5]. Support vector machines (SVM), introduced by Vapnik, are a newer methodology for nonlinear modeling. While neural networks suffer from problems such as the existence of many local minima and the choice of the number of hidden units [6], SVM are characterized by convex optimization problems based on sound theoretical principles, up to the determination of a few additional tuning parameters, and provide better generalization performance than neural networks [7, 8]. A convex quadratic programming problem is solved in dual space to determine the SVM model. By working with equality instead of inequality constraints and with a sum-of-squared-errors cost instead of the epsilon-insensitive cost function, LS-SVM are obtained. This reformulation greatly simplifies the problem, in that the solution follows from a Karush-Kuhn-Tucker (KKT) system [9, 10]. However, the errors in SVM and LS-SVM account only for noise in the output variables, which is unreasonable since the input variables may also be polluted by noise. Reference [11] proposed the total least squares method to deal with this problem, and the normal least squares support vector machine of reference [12] considered noise in the input variables based on the total least squares method.
In this paper, we consider a novel LS-SVM with general errors that include the noise of both input and output variables, and an iterative algorithm is introduced to solve it. We then propose the internal model control of LS-SVM with general errors for MIMO nonlinear discrete systems. We use the LS-SVM to approximate the inverse system based on input-output data from the original system. The internal model is a pseudo-linear system obtained by cascading the inverse model with the original system. The internal model controller is then designed by cascading a filter with the inverse of the pseudo-linear system. We focus on the robustness of internal model control under disturbing signals and parameter variation. By comparison, inverse control is a simple open-loop method whose ability to handle disturbances and parameter variation is limited. Simulation shows that the internal model control strategy based on LS-SVM is effective and has good performance.
This paper is organized as follows. In Sect. 2, we describe the class of MIMO nonlinear discrete systems, which helps to model an inverse system, and the control method based on the inverse system is also introduced. In Sect. 3, the LS-SVM with general errors is given and an iterative algorithm is used to solve it; the inverse model is then approximated by the novel LS-SVM based on effective data. In Sect. 4, we present the internal model control approach and analyze the ability of the closed-loop system. In Sect. 5, simulations illustrate the robustness of the internal model control system based on LS-SVM, compared with the inverse system method proposed previously. Conclusions are given in Sect. 6.
2 System description and the control based on inverse system
We are interested in reversible MIMO nonlinear discrete systems. This kind of system is described by the following discrete nonlinear input-output model, \(\Upsigma: {\bf u}(k)\rightarrow {\bf y}(k)\)
where \({\bf y}=(y_{1},\ldots,y_{n})\in R^{n}, {\bf u}=(u_{1},\ldots,u_{m})\in R^{m}, {\bf y}(k+\alpha-p)=(y_{1}(k+\alpha_{1}-p_{1}),y_{2}(k+\alpha_{2}-p_{2}),\ldots,y_{n}(k+\alpha_{n}-p_{n})), {\bf u}(k-q)=(u_{1}(k-q_{1}),u_{2}(k-q_{2}),\ldots, u_{m}(k-q_{m})), \alpha\) denotes the relative delays of the outputs with respect to the inputs, q denotes the input delays, and \(max\{p_{1},p_{2}, \ldots,p_{n}\}\) denotes the order of the system.
The inverse system of the described system \(\Upsigma\) is expressed in the formula, \(\Upsigma^{'}:{\bf y}(k) \rightarrow {\bf u}(k)\)
Denote \(\varphi(k)={\bf y}(k+\alpha)\) and \(\varphi_{1}(k)=y_{1}(k+\alpha_{1}),\ldots, \varphi_{n}(k)=y_{n}(k+\alpha_{n}).\) \( {\mathbf{y}}(k + \alpha - 1) = z^{{ - 1}} \varphi (k),{\mathbf{y}}(k + \alpha - 2) = z^{{ - 2}} \varphi (k), \ldots ,{\mathbf{y}}(k + \alpha - p) = z^{{ - p}} \varphi (k). \) For a reversible MIMO nonlinear discrete system, we express the α-th order inverse system [13] as follows, \(\Upsigma^{''}:\varphi(k) \rightarrow {\bf u}(k)\)
The formula with the input \(\varphi(k)\) and the output u(k) is the inverse expression. We cascade the inverse system and the original system to build a composite system, as shown in Fig. 1. The composite system is a pseudo-linear system with the decoupled transfer function matrix \(diag\{z^{-\alpha_{1}},z^{-\alpha_{2}},\ldots,z^{-\alpha_{n}}\}.\)
Nonlinear coupling still exists inside the composite system, but in terms of the transfer function it exhibits a standard linear relationship; that is, the original MIMO system is decoupled into independent single-input single-output pseudo-linear subsystems.
3 The inverse model based on LS-SVM
3.1 LS-SVM for nonlinear function estimation
Given a training data set of M points \(\{{\bf x}_{i},y_{i}\}^{M}_{i=1}\) with input data \({\bf x}_{i}\in R^{n_{1}}\) and output data \(y_{i}\in R,\) one considers the following optimization problem [14] in primal space,
The function \(\phi:R^{n_{1}}\rightarrow R^{n_{2}}\) maps the input space into a high-dimensional (possibly infinite-dimensional) feature space. The weight vector w, the error variables \(e_{i}\in R\) and the bias term \(b\in R\) live in primal space. The cost function J consists of a fitting error and a regularization term, whose relative importance is determined by the positive real constant γ; a smaller γ value helps avoid overfitting noisy data. The regression model of LS-SVM in primal space is \(f({\bf x})={\bf w}^{{\rm T}}\phi({\bf x})+b.\) The weight vector may be infinite-dimensional, which makes a direct calculation of w from (5) impossible in general. Therefore, one computes the model in the dual space instead of the primal space. The Lagrange function constructed for problem (5) is
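For reference, the standard LS-SVM primal problem just described (cf. [9]) has the well-known form below, written with the same symbols as the text; this is the problem referred to as (5):

```latex
\min_{\mathbf{w},\,b,\,\mathbf{e}}\; J(\mathbf{w},\mathbf{e})
  = \tfrac{1}{2}\,\mathbf{w}^{\mathrm{T}}\mathbf{w}
  + \tfrac{\gamma}{2}\sum_{i=1}^{M} e_i^{2}
\qquad \text{s.t.}\qquad
y_i = \mathbf{w}^{\mathrm{T}}\phi(\mathbf{x}_i) + b + e_i,\quad i=1,\ldots,M
```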
\(\beta_{i}, i=1,\ldots,M\) are Lagrange multipliers. According to the KKT conditions [15], setting the first-order derivatives of L to zero gives
The following equations can be acquired
According to the Mercer kernel conditions, one can choose a kernel \(K(\cdot,\cdot)\) such that \(K({\bf x}_{1},{\bf x}_{2})=\phi({\bf x}_{1})^{{\rm T}}\phi({\bf x}_{2}).\) After elimination of w and \(e_{i},\) the optimization problem leads to the following linear system,
The linear equation can be rewritten as:
where \({\bf A}={\bf K}+\varvec{\theta}, {\bf K}={(K_{ij})_{M\times M}}, \varvec{\theta}=diag({\frac{1}{\gamma}},\ldots,{\frac{1} {\gamma}}), {\bf I}=(1;1;\ldots;1), \varvec{\beta}=(\beta_{1};\beta_{2};\ldots;\beta_{M}),\) and \({\bf y}=(y_{1};y_{2}; \ldots;y_{M})\). We focus on an RBF kernel with parameters γ and σ2 in this paper. \(\beta_{i}\) and b are the solution to the linear equations. The LS-SVM model at x becomes \(f({\bf x})=\sum^{M}_{i=1}\beta_{i}K({\bf x},{\bf x}_{i})+b.\)
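As a concrete illustration, the dual system above can be solved directly. The following Python sketch (function names are ours, not from the paper) fits a standard LS-SVM with an RBF kernel and evaluates the resulting model:

```python
import numpy as np

def rbf_kernel(X1, X2, sigma2):
    # Gram matrix of the RBF kernel K(x, z) = exp(-||x - z||^2 / sigma2)
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / sigma2)

def lssvm_fit(X, y, gamma=1000.0, sigma2=30.0):
    # Solve the dual KKT system [[0, 1^T], [1, K + theta]] [b; beta] = [0; y],
    # where theta = I / gamma.
    M = len(y)
    A = rbf_kernel(X, X, sigma2) + np.eye(M) / gamma  # A = K + theta
    ones = np.ones((M, 1))
    lhs = np.block([[np.zeros((1, 1)), ones.T], [ones, A]])
    sol = np.linalg.solve(lhs, np.concatenate(([0.0], y)))
    return sol[1:], sol[0]  # beta, b

def lssvm_predict(Xq, X, beta, b, sigma2=30.0):
    # f(x) = sum_i beta_i K(x, x_i) + b
    return rbf_kernel(Xq, X, sigma2) @ beta + b
```

The one-shot linear solve (no iterative quadratic programming) is the practical advantage of the least-squares formulation over the original SVM.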
The errors in the optimization problem of LS-SVM measure only the noise of the output variables: the differences between expected outputs and predictions are used as the empirical errors, and the sum of squared errors is minimized. But noise in the input variables still exists, so we describe the empirical errors [12] in the feature space as follows.
The revised optimization problem in primal space is:
We discuss an iterative learning algorithm to solve the optimization problem (13) in the dual space. At the t-th iteration, the Lagrange function is rewritten as:
According to the KKT conditions, setting the partial derivatives of the Lagrange function to zero yields the following,
We have the equation
Define \(\alpha_{i}^{(t)}={\frac{\beta_{i}^{(t)}}{\sqrt{1+\parallel {\bf w}^{(t-1)}\parallel^{2}}}}, i=1,\ldots,M,\) we have equations
So we get the linear equation:
where, as before, \({\bf A}={\bf K}+\varvec{\theta},\) and \(\varvec{\alpha}^{(t)}=(\alpha^{(t)}_{1};\alpha^{(t)}_{2};\ldots;\alpha^{(t)}_{M}).\) In order to solve Eq. (19), we need to update \(\alpha^{(t)}, b^{(t)}\) and \(\parallel {\bf w}^{(t-1)}\parallel^{2}\) to find the solution of the optimization problem (13).
Lemma
Let \({\bf A}\) be an invertible matrix. For the given matrices \({\bf A}\) and \({\bf U}, {\bf V}, {\bf D},\) define \({\bf B}={\bf D}-{\bf V}{\bf A}^{-1}{\bf U};\) then the inverse matrix
In particular, if \({\bf U}={\bf I}\) and \({\bf U}={\bf V}^{{\rm T}}\) hold, we have
The original matrix in the LS-SVM linear equation is invertible, and the revised matrix is still invertible since it has the same rank as the original matrix. We use special symbols to denote the original matrix and the revised matrix, respectively:
According to the linear Eq. (19), our motivation is to get \((\Uppsi^{(t)})^{-1}.\) From the lemma and the formula (21), if we hope to get \((\Uppsi^{(t)})^{-1},\) we need \(({\bf A}+\parallel {\bf w}^{(t-1)}\parallel^{2}\varvec{\theta})^{-1}.\) Since A is symmetric and positive definite, there exists an orthogonal matrix P and \({\bf P}^{{\rm T}}={\bf P}^{-1},\) so that \({\bf A}={\bf P}^{{\rm T}}\varvec{\Uplambda} {\bf P},\) where \(\varvec{\Uplambda}=diag(\lambda_{1},\lambda_{2},\ldots,\lambda_{M})\) and \(\lambda_{1},\lambda_{2},\ldots, \lambda_{M}\) are the positive eigenvalues of A. So we have
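Since \({\bf A}\) is eigendecomposed only once, every later iteration can reuse the stored factors. A minimal Python sketch of this update (our own naming; \(\varvec{\theta}={\bf I}/\gamma\) as in the text):

```python
import numpy as np

def inv_A_plus_scaled_theta(P, lam, w2, gamma):
    # A = P.T @ diag(lam) @ P with orthogonal P and positive eigenvalues lam,
    # theta = I / gamma.  Then
    #   (A + w2 * theta)^{-1} = P.T @ diag(1 / (lam + w2 / gamma)) @ P,
    # so after one eigendecomposition each iteration costs only O(M^2).
    return P.T @ np.diag(1.0 / (lam + w2 / gamma)) @ P
```

The scalar `w2` plays the role of \(\parallel {\bf w}^{(t-1)}\parallel^{2}\), the only quantity that changes between iterations.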
We derive the inverse of \({\bf A}+\parallel {\bf w}^{(t-1)}\parallel^{2}\varvec{\theta},\) then update \((\Uppsi^{(t)})^{-1}\) by using the formula (21), and find the solution of (19). The iterative updating algorithm can be summarized as follows.
Algorithm 1
Iterative updating learning algorithm for solving the novel LS-SVM.
1. Set the parameters of the LS-SVM. Find the orthogonal matrix \({\bf P}\) and the diagonal matrix \(\varvec{\Uplambda}\) such that \({\bf A}={\bf P}^{{\rm T}}\varvec{\Uplambda}{\bf P}.\) Store \({\bf P}\) and \(\varvec{\Uplambda}.\)
2. Compute the inverse of \(\Uppsi\) using the formula (21) and solve the problem (10). Set the solution of the LS-SVM as \(\varvec{\beta}^{0}, b^{0}.\) Let t = 1, \(\varvec{\alpha}^{0}=\varvec{\beta}^{0},\) and compute \(\parallel {\bf w}^{(0)} \parallel^{2}=(\varvec{\alpha}^{(0)})^{{\rm T}} {\bf K} \varvec{\alpha}^{(0)}.\)
3. Compute \(({\bf A}+\parallel {\bf w}^{(t-1)}\parallel^{2}\varvec{\theta})^{-1}\) using the formula (25); then \((\Uppsi^{(t)})^{-1}\) can be computed using (21).
4. Obtain the solution of the novel LS-SVM in the formula (13) by multiplying by \((\Uppsi^{(t)})^{-1}.\) Record the solution of Eq. (13) as \(\varvec{\alpha}^{(t)}\) and \(b^{(t)},\) and compute \(\parallel {\bf w}^{(t)} \parallel^{2}=(\varvec{\alpha}^{(t)})^{{\rm T}} {\bf K} \varvec{\alpha}^{(t)}.\)
5. If the stopping condition \(\eta=\parallel \frac{\sqrt{\parallel{\bf w}^{(t)}\parallel^{2}}}{\parallel{\bf w}^{(t)}\parallel^{2}}-\frac{\sqrt{\parallel{\bf w}^{(t-1)}\parallel^{2}}} {\parallel{\bf w}^{(t-1)} \parallel^{2}} \parallel < \zeta\) holds for a positive number \(\zeta,\) go to step 6; otherwise, set t = t + 1 and go to step 3.
6. Let \(\beta_{i}=\frac{\alpha_{i}^{(t)}}{\sqrt{1+\parallel {\bf w}^{(t-1)} \parallel^{2}}}\) and \(b = b^{(t)}.\) The output of the novel LS-SVM is \(y({\bf x})=\sum^{M}_{i=1}\beta_{i}K({\bf x},{\bf x}_{i})+b.\)
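A compact Python sketch of Algorithm 1 follows. For clarity it re-solves the KKT system at each iteration instead of reusing a stored eigendecomposition, and it assumes the per-iteration system amounts to scaling the ridge term by \(1+\parallel {\bf w}^{(t-1)}\parallel^{2}\); function and variable names are ours:

```python
import numpy as np

def novel_lssvm_fit(K, y, gamma=1000.0, zeta=1e-6, max_iter=50):
    # Iterative solution of the LS-SVM with general errors (Algorithm 1).
    # K is the precomputed kernel Gram matrix.
    M = len(y)
    ones = np.ones((M, 1))
    rhs = np.concatenate(([0.0], y))
    w2_prev = 0.0                       # ||w^(0)||^2, set after the first solve
    for t in range(max_iter):
        # step 2 (t = 0): plain LS-SVM; steps 3-4 (t > 0): rescaled ridge
        ridge = (1.0 + w2_prev) / gamma if t > 0 else 1.0 / gamma
        lhs = np.block([[np.zeros((1, 1)), ones.T],
                        [ones, K + ridge * np.eye(M)]])
        sol = np.linalg.solve(lhs, rhs)
        b, alpha = sol[0], sol[1:]
        w2 = float(alpha @ K @ alpha)   # ||w^(t)||^2
        # step 5: stop when 1/||w|| settles
        if t > 0 and abs(w2 ** -0.5 - w2_prev ** -0.5) < zeta:
            break
        w2_prev = w2
    # step 6: rescale the multipliers
    beta = alpha / np.sqrt(1.0 + w2_prev)
    return beta, b
```

The model output is then \(y({\bf x})=\sum_{i}\beta_{i}K({\bf x},{\bf x}_{i})+b,\) exactly as in step 6.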
3.2 α-th order inverse model based on LS-SVM
For the described nonlinear discrete system, the α-th order inverse system is expressed in the formula (3). Neither the precise mathematical model of the original system nor an explicit expression for u(k) can be obtained. For this imperfect model, we adopt the novel LS-SVM to approximate the inverse model based on input-output data acquired from the original system. Because LS-SVM can only estimate single-output functions, identifying a multiple-output object requires learning each subsystem separately. The number of subsystems is equal to the number of output variables.
Algorithm 2
Inverse model approaching algorithm based on the novel LS-SVM.
1. Select a proper excitation signal, such as white noise.
2. Obtain input-output data of the original system by applying the excitation signal as the input. Sort the data into training and testing samples of the form \(\{S_{i},u_{i}\}^{M}_{i=1}.\)
3. Choose the parameters γ and σ2 and train the novel LS-SVM to acquire the inverse submodel of each decoupled subsystem.
4. Test the generalization ability of the inverse models using the testing data.
5. Assemble all inverse submodels to obtain the inverse model of the original system, thanks to the decoupling of the subsystems.
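Step 2 of the algorithm above can be sketched as follows for the first submodel, using the fitting factors listed later in Sect. 5.1. The alignment of the target (here \(u_{1}(k-1)\)) is our assumption for illustration; the exact lag depends on the relative order α of the subsystem:

```python
import numpy as np

def build_samples_s1(y1, y2, u1, u2, target_lag=1):
    # Assemble {S, u} training pairs for the first inverse submodel.
    # Regressor layout follows S1 of Sect. 5.1; the target u1(k - target_lag)
    # is a hypothetical alignment chosen for illustration.
    S, targets = [], []
    for k in range(2, len(u1)):
        S.append([y1[k], y1[k - 1], y1[k - 2],
                  u1[k - 2], y2[k - 2], u2[k - 2]])
        targets.append(u1[k - target_lag])
    return np.array(S), np.array(targets)
```

The second submodel is built symmetrically by swapping the roles of channels 1 and 2.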
After acquiring the inverse system of the original MIMO system in this way, the inverse model is cascaded with the original system to constitute the pseudo-linear system, which amounts to a simple open-loop control based on the inverse system.
4 The internal model control of the MIMO nonlinear discrete system
The internal model control for discrete processes has the following properties [16].
Property 1
Stability Criterion. When the internal model is exact, stability of both controller and plant is sufficient for overall system stability.
Property 2
Perfect Controller. Under the assumption that the internal model is perfect and that the plant and the controller are stable, if there is no disturbance, perfect control can be achieved when the controller is the inverse of the internal model.
The pseudo-linear system acquired by cascading the inverse model with the original system is essentially linearized; the basic diagram of internal model control based on LS-SVM is shown in Fig. 2. \(G_{m}(z)\) represents the pulse transfer function matrix of the internal model, G(z) the controlled plant, \(G_{c}(z)\) the pulse transfer function matrix of the internal model controller, D(z) the disturbance function, R(z) the input function, and Y(z) the output function. The internal model control strategy provides feedback control for nonlinear systems. One usually chooses a diagonal matrix constituted by the relative orders of the independent subsystems as the transfer function of the internal model, namely \(G_{m}(z)=diag\{z^{-\alpha_{1}},z^{-\alpha_{2}},\ldots,z^{-\alpha_{n}}\}.\) Considering that the actual composite system G(z) may have a modeling error, the pseudo-linear system can be assumed to be \(G(z)=G_{m}(z)(1+h_{m}(z)),\) where \(h_{m}(z)\) expresses the unmodeled error function. We assume that \(h_{m}(z)\) is linear and bounded. The MIMO internal model control is illustrated in Fig. 3.
The internal model controller is denoted \(G_{f}(z),\) the product of a robust filter F(z) and \(G_{c}(z).\) We still let \(G_{c}(z)=G_{m}^{-1}(z).\) The robust filter F(z) is usually introduced to reduce the sensitivity of the internal model control system; reference [16] offers a detailed introduction to designing it. The internal model controller can be rewritten as \(G_{f}(z)=F(z)G_{m}^{-1}(z).\) Property 1 requires that the plant and the controller be input-output stable to ensure that the control system is stable. For a decoupled linear system with \(G_{m}(z)=diag\{z^{-\alpha_{1}},z^{-\alpha_{2}},\ldots,z^{-\alpha_{n}}\},\) we take \(G_{m}^{-1}(z)=diag\{1,1,\ldots,1\}\) in order to keep the controller stable. Using the following filter
the output of the closed-loop system can be described as \(Y(z)=[1+G_{f}(z)(G(z)-G_{m}(z))]^{-1}[G(z)G_{f}(z)R(z)+(1-G_{f}(z)G_{m}(z))D(z)],\)
and the error is \(E(z)=R(z)-Y(z)=[1+G_{f}(z)(G(z)-G_{m}(z))]^{-1}(1-G_{f}(z)G_{m}(z))(R(z)-D(z)).\)
5 Simulation
In this section, we illustrate the performance of internal model control based on LS-SVM on a multivariable, nonlinear, strongly coupled plant. The discrete model in the simulation is as follows,
Suppose that the precise mathematical model of the original system is unknown and that the system is reversible, with α1 = 1, α2 = 1, m = 2, \(n=2, p_{1}=1, p_{2}=1, q_{1}=2, q_{2}=2.\)
5.1 The internal model control and the open loop control based on inverse system
White noise sequences are applied to the two inputs, and the above model is used to produce 1,000 data groups; 500 groups are used for training and the other 500 for testing. The fitting factors of each group are \(S_{1}=\{y_{1}(k),y_{1}(k-1),y_{1}(k-2),u_{1}(k-2),y_{2}(k-2),u_{2}(k-2)\}\) and \(S_{2}=\{y_{2}(k),y_{2}(k-1),y_{2}(k-2),u_{2}(k-2),y_{1}(k-2),u_{1}(k-2)\},\) respectively. With an RBF kernel function, γ = 1000 and σ2 = 30 are selected. We obtain the inverse submodels by the novel LS-SVM. The root mean square error index is \(RMSE=\sqrt{{\frac{\sum_{i=1}^{n}x_{i}^{2}}{n}}}.\) The RMSEs of the inverse submodels on the testing data are 0.136 and 0.096. Fig. 4 shows the testing curves and error curves of the approximated inverse models.
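The error index above is straightforward to compute over a residual sequence; a small Python helper:

```python
import numpy as np

def rmse(x):
    # Root mean square error of the residual sequence x:
    # sqrt(sum(x_i^2) / n)
    return float(np.sqrt(np.mean(np.asarray(x) ** 2)))
```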
Since the subsystems are decoupled, all inverse submodels are assembled to obtain the inverse model of the original system. As the relative orders of the system are \(\alpha_{1}=\alpha_{2}=1,\) the internal model \(G_{m}(z)\) is taken as \(diag\{{\frac{1}{z}},{\frac{1}{z}}\}.\) The simple controller is \(G_{f}(z)=G_{m}^{-1}(z)=diag\{1,\ldots,1\},\) while \(G_{f}(z)=F(z)G_{m}^{-1}(z)\) is an internal model controller with the filter \(F(z)=diag\{{\frac{1-l_{i}}{1-l_{i}z^{-1}}}\},\) \(0\leq l_{i} \leq 1, i=1,2.\) We set \(l_{1}=l_{2}=0.5\) in the simulation.
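Each diagonal entry of the filter is a first-order low-pass filter with unit DC gain. A minimal Python realisation of one channel (our own naming) is:

```python
def first_order_filter(x, l=0.5):
    # One diagonal channel of F(z) = (1 - l) / (1 - l z^{-1}), i.e. the
    # recursion y[k] = l * y[k-1] + (1 - l) * x[k], with 0 <= l <= 1.
    y, prev = [], 0.0
    for xk in x:
        prev = l * prev + (1.0 - l) * xk
        y.append(prev)
    return y
```

Larger l smooths more aggressively at the cost of a slower response, which matches the trade-off between jitter reduction and tracking speed observed in the simulations.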
By cascading the inverse system, the multivariable coupled system is decoupled into two pseudo-linear systems. The performance of the open-loop system based on inverse control under a given square-wave reference is shown in Fig. 5. Compared with the open-loop system based only on inverse control under the same reference, the internal model control system tracks the square wave well. To reduce jitter, we introduce a filter into the simple internal model control system. According to the tracking curves in the figure, the open-loop system based on inverse control exhibits larger errors and more jitter than the internal model control system. We then use a mixture of sine waves with different frequencies as the reference signal. The tracking performances of the open-loop system and the internal model control system for this reference are shown in Fig. 6, which illustrates a unit-delay tracking of the reference signal. The internal model control system has better tracking performance than the open-loop system.
5.2 Robustness of disturbance rejection
At k = 100 and k = 110, the two decoupled subsystems are disturbed, respectively, by an external step signal with amplitude 0.1. As the square-wave response of the internal model control system in Fig. 7 shows, the simple internal model control strategy has good robustness: it inhibits the step disturbance and keeps following the reference signal after a little jitter, although the jitter lasts a little longer than expected. After adding a filter, the duration of the jitter is reduced. When the open-loop system is disturbed by the same step signal, it cannot follow the reference signal and deviates from it greatly, which shows weak robustness. The disturbances lead to large steady-state errors in the open-loop system.
5.3 Robustness of variable parameters
The parameters of the nonlinear system are changed, so that the original nonlinear system becomes the following,
When the parameters of the nonlinear system vary as described in (30), this is equivalent to a mismatch between the plant and the model. Simulation results are shown in Fig. 8. The simple internal model control can still track the reference, but a large jitter exists; with a filter, the jitter is reduced. The disturbance of parameter variation, however, induces larger oscillations in the open-loop system.
6 Conclusion
In this study, we first introduced the novel LS-SVM, which considers noise in both input and output variables; the internal model control based on the novel LS-SVM for MIMO nonlinear discrete systems was then presented. The proposed method overcomes the problem of the imperfect mathematical model of the original system and identifies the inverse model accurately from input-output data. The advantage of internal model control is its excellent robustness with respect to disturbance signals and model mismatch, as illustrated in the simulation.
References
Lightbody G, Irwin GW (1997) Nonlinear control structures based on embedded neural system models. IEEE Trans Neural Netw 8(3):553–567
Rivals I, Personnaz L (2000) Nonlinear internal model control using neural networks: application to processes with delay and design issues. IEEE Trans Neural Netw 11(1):80–90
Li M, Zhou ZK, Shi LL (2004) The neural network decoupling control based on the internal model control. In: Proceedings of the 5th world congress on intelligent control and automation, Hangzhou, China, pp 2643–2646
Haber RE, Alique JR (2004) Nonlinear internal model control using neural networks: an application for machining processes. Neural Comput Appl 13:47–55
Chidrawar SK, Patre BM (2008) Implementation of neural network for internal model control and adaptive control. In: Proceedings of the international conference on computer and communication engineering, Kuala Lumpur, Malaysia, pp 741–746
Cui WT, Yan XF (2009) Adaptive weighted least square support vector machine regression integrated with outlier detection and its application in QSAR. Chemom Intell Lab Syst 98:130–135
Vapnik VN (1998) Statistical learning theory. Wiley, New York
Evgeniou T, Pontil M, Poggio T (2000) Regularization networks and support vector machines. Adv Comput Math 13(1):1–50
Suykens JAK, Vandewalle J (1999) Least squares support vector machines classifiers. Neural Process Lett 9(3):293–300
Suykens JAK, Lukas L, Vandewalle J (2000) Sparse approximation using least-squares support vector machines. In: Proceedings of the IEEE international symposium on circuits and systems, Geneva, Switzerland, pp 11757–11760
Markovsky I, Van Huffel S (2007) Overview of total least squares methods. Signal Process 87:2283–2302
Peng XJ, Wang YF (2009) A normal least squares support vector machine (NLS-SVM) and its learning algorithm. Neurocomputing 72:3734–3741
Dai XZ (2005) Inverse control method based on neural networks for multi-variables nonlinear system (in Chinese). Science Press, Beijing
Sun CY, Mu CX, Li XM (2009) A weighted LS-SVM approach for the identification of a class of nonlinear inverse systems. Sci China Ser F Inf Sci 52(5):770–779
Suykens JAK, Brabanter JD, Lukas L, Vandewalle J (2002) Weighted least squares support vector machines: robustness and sparse approximation. Neurocomputing 48:85–105
Garcia CE, Morari M (1982) Internal model control: a unifying review and some new results. Ind Eng Chem Process Des Dev 21(2):308–323
Acknowledgments
This work has been supported by the National Natural Science Foundation of China (No. 60874013 and No. 60953001).
Mu, C., Sun, C. & Yu, X. Internal model control based on a novel least square support vector machines for MIMO nonlinear discrete systems. Neural Comput & Applic 20, 1159–1166 (2011). https://doi.org/10.1007/s00521-010-0468-3